
Was AI on your RSAC Conference 2025 bingo card? To nobody's shock, it was the topic of the year at the cybersecurity industry's big show, which drew to a close last week.
Following its emergence as the breakout star of RSAC 2024, AI, along with its 2025 buzzphrase companion agentic AI, could not be avoided at keynotes, sessions and on social media.
It's not surprising. AI adoption is booming. According to the latest research from McKinsey & Co., 78% of organizations use AI in at least one business function. The most cited AI use cases are in IT, marketing, sales and service operations.
Yet with the adoption boom have come some dire warnings about AI security. The following roundup highlights Informa TechTarget's RSAC 2025 AI coverage:
Most cyber-resilient organizations aren't necessarily ready for AI risks
A report from managed security service provider LevelBlue released at RSAC found that while cyber-resilient organizations are well-equipped to handle current threats, many underestimate AI-related risks.
The report noted that AI adoption is happening too fast for regulations, governance and mature cybersecurity controls to keep pace, yet only 30% of survey respondents said they recognize AI adoption as a supply chain risk. That is a major disconnect, and a concern given the prospect of future AI-enabled attacks.
Read the full story by Arielle Waldman on Dark Reading.
Fraudulent North Korean IT workers more prevalent than thought
A panel at RSAC outlined how North Korean IT workers are infiltrating Western companies by posing as remote American employees, generating millions for North Korea's weapons program. A single vendor, CrowdStrike, found malicious activity in more than 150 organizations in 2024 alone, with half experiencing data theft.
These operatives use stolen identities to secure positions at organizations of all sizes, from Fortune 500 companies to small businesses. The panel discussed red flags to look for, such as requests for alternate equipment delivery addresses and suspicious technical behaviors, as well as how organizations can protect themselves through careful hiring practices and enhanced monitoring.
Data and privacy regulations are hindering AI-driven threat sharing
During a SANS Institute panel on the most dangerous new attack techniques, Rob T. Lee, chief of research and head of faculty at SANS Institute, highlighted that the cybersecurity industry faces significant challenges when it comes to AI regulation. In particular, privacy laws such as GDPR restrict defenders' ability to fully use AI for threat detection, while attackers operate without such constraints.
Lee said these regulations prevent organizations from comprehensively analyzing their environments and sharing critical threat intelligence.
Read the full story by Becky Bracken on Dark Reading.
GenAI lessons learned emerge after two years with ChatGPT
An RSAC panel explained how, since the launch of ChatGPT in late 2022, generative AI has dramatically transformed how cybercriminals operate. The panel highlighted four key lessons:
- GenAI hasn't introduced new tactics, but it has enhanced attackers' capabilities, leading to a 1,000% increase in phishing emails and more convincing scams.
- Existing laws can be used to prosecute AI-enabled crimes, as demonstrated by recent cases against DPRK workers and the Storm-2139 network.
- Significant challenges remain, including data leakage risks and the need for comprehensive AI regulations.
- AI security best practices are emerging.
Read the full story by Sharon Shea on SearchSecurity.
Editor's note: Our staff used AI tools to assist in the creation of this news brief.
Sharon Shea is executive editor of Informa TechTarget's SearchSecurity site.