
Why this company says the state of AI security is ‘grim’



Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…Mark Zuckerberg and Priscilla Chan have restructured their philanthropy to focus on AI and science…Apple is reportedly finalizing a deal to pay Google about $1 billion per year to use a 1.2-trillion-parameter AI model to power a major overhaul of Siri…OpenAI CFO Sarah Friar clarifies comment, says company isn’t seeking government backstop.

As the wife of a cybersecurity pro, I can’t help but pay attention to how AI is changing the game for those on the digital front lines—making their work at once tougher and smarter. I often joke with my husband that “we need him on that wall” (a nod to Jack Nicholson’s famous A Few Good Men monologue), so I’m always tuned in to how AI is transforming both security defense and offense.

That’s why I was curious to jump on a Zoom with AI security startup Cyera’s co-founder and CEO Yotam Segev and Zohar Wittenberg, general manager of Cyera’s AI security business. Cyera’s business, not surprisingly, is booming in the AI era: its ARR has surpassed $100 million in less than two years, and the company’s valuation is now over $6 billion, thanks to surging demand from enterprises scrambling to adopt AI tools without exposing sensitive data or running afoul of new security risks. The company, which is on Fortune’s latest Cyber 60 list of startups, has a roster of clients that includes AT&T, PwC, and Amgen.

“I think about it a bit like Levi’s in the gold rush,” said Segev. Just as every prospector needed a good pair of jeans, every enterprise needs to adopt AI securely, he explained.

The company also recently launched a new research lab to help companies get ahead of the fast-growing security risks created by AI. The team studies how data and AI systems actually interact inside large organizations—tracking where sensitive information lives, who can access it, and how new AI tools might expose it. 

I must say I was surprised to hear Segev describe the current state of AI security as “grim,” leaving CISOs—chief information security officers—caught between a rock and a hard place. One of the biggest problems, he and Wittenberg told me, is that employees are using public AI tools such as ChatGPT, Gemini, Copilot, and Claude either without company approval or in ways that violate policy—like feeding sensitive or regulated data into external systems. CISOs, in turn, face a tough choice: block AI and slow innovation, or allow it and risk massive data exposure.

“They know they’re not going to be able to say no,” said Segev. “They have to allow the AI to come in, but the existing visibility controls and mitigations they have today are way behind what they need them to be.” Regulated organizations in industries like healthcare, financial services, or telecom are actually in a better position to slow things down, he explained: “I was meeting with a CISO for a global telco this week. She told me, ‘I’m pushing back. I’m holding them at bay. I’m not ready.’ But she has that privilege, because she’s a regulated entity, and she has that place in the company. When you go one step down the list of companies to less regulated entities, they’re just being trampled.”

For now, companies aren’t in too much hot water, Wittenberg said, because most AI tools aren’t yet fully autonomous. “It’s just knowledge systems at this point—you can still contain them,” he explained. “But once we reach the point where agents take action on behalf of humans and start talking to each other, if you don’t do anything, you’re in big trouble.” He added that within a couple of years, those kinds of AI agents will be deployed across enterprises.

“Hopefully the world will move at a pace that we can build security for it in time,” he said. “We’re trying to make sure that we’re ready, so we can help organizations protect it before it becomes a disaster.”

Yikes, right? To borrow from A Few Good Men again, I wonder if companies can really handle the truth: when it comes to AI security, they need all the help they can get on that wall.

Also, a small self-promotional moment: Yesterday I published a new Fortune deep-dive profile on OpenAI’s Greg Brockman, the engineer-turned-power-broker behind its trillion-dollar AI infrastructure mission. It’s a wild story and one of my favorite pieces I worked on this year; I hope you’ll check it out!

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

Meet the power broker of the AI age: OpenAI’s ‘builder-in-chief’ helping to turn Sam Altman’s trillion-dollar data center dreams into reality, by Sharon Goldman

Microsoft, freed from relying on OpenAI, joins the race for ‘superintelligence’—and AI chief Mustafa Suleyman wants to ensure it serves humanity, by Sharon Goldman

The under-the-radar factor that helped Democrats win in Virginia, New Jersey, and Georgia, by Sharon Goldman

Exclusive: Voice AI startup Giga raises $61 million to take on customer service automation, by Beatrice Nolan

OpenAI’s new safety tools are designed to make AI models harder to jailbreak. Instead, they may give users a false sense of security, by Beatrice Nolan

AI IN THE NEWS

Mark Zuckerberg and Priscilla Chan have restructured their philanthropy to focus on AI and science. The New York Times reported today that Mark Zuckerberg and Priscilla Chan’s philanthropy, the Chan Zuckerberg Initiative, is going all-in on AI. Once known for its sweeping ambitions to fix education and social inequality, CZI announced a major restructuring to focus squarely on AI-driven scientific research through a new organization called the Chan Zuckerberg Biohub Network. The group even acquired the team behind AI startup Evolutionary Scale, naming its chief scientist Alex Rives as head of science. It’s a boomerang move for Rives: When I interviewed him about Evolutionary Scale last year, he explained that he had led a research cohort known as Meta’s “AI protein team,” which was disbanded in August 2023 as part of Mark Zuckerberg’s “year of efficiency” that led to over 20,000 layoffs at Meta. Undeterred, he immediately spun up a startup with a core group of his former Meta colleagues, called Evolutionary Scale, to continue their work building large language models that, instead of generating text, images, or video, generate recipes for entirely new proteins.

Apple is reportedly finalizing a deal to pay Google about $1 billion per year to use a 1.2-trillion-parameter AI model to power a major overhaul of Siri. According to Bloomberg, after testing models from Google, OpenAI, and Anthropic, Apple has chosen Google’s technology to help rebuild Siri’s underlying system. The partnership would give Apple access to Google’s massive AI infrastructure, enabling more capable, conversational versions of Siri and new features expected to launch next spring. Both companies declined to comment publicly. While the hope is reportedly to use the technology as an interim solution until Apple’s own models are powerful enough, my colleague Jeremy Kahn and I both wonder if this might ultimately signal that Apple has given up trying to compete in the AI model game with its own native technology for Siri.

OpenAI CFO Sarah Friar clarifies comment, says company isn’t seeking government backstop. CNBC reported that OpenAI CFO Sarah Friar clarified late Wednesday that the company is not seeking a government “backstop” for its massive infrastructure buildout, walking back remarks she made earlier at the Wall Street Journal’s Tech Live event. Friar said her comments about a potential federal guarantee “muddied the point,” explaining that she meant the U.S. and private sector must both invest in AI as a national strategic asset. Her clarification comes as OpenAI faces scrutiny over how it will finance more than $1.4 trillion in data center and chip commitments despite reporting roughly $13 billion in revenue this year. CEO Sam Altman has brushed off concerns, calling AI infrastructure the foundation of America’s technological strength.

AI CALENDAR

Nov. 10-13: Web Summit, Lisbon. 

Nov. 19: Nvidia third-quarter earnings.

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego.

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

EYE ON AI NUMBERS

82%

That’s the share of CISOs who face pressure from boards or executives to increase efficiency using AI-driven automation, according to the 2025 CISO Pressure Index, a new survey of 100 chief information security officers from Nagomi Security.

Other key findings included: 

  • 59% of CISOs say they fear AI attacks more than any other type of attack over the next 12 months.

  • 47% expect agentic AI to be their top concern within the next two to three years.

  • 80% of CISOs say they are under high or extreme pressure right now, and 87% report that pressure has climbed over the past year.


Fortune Brainstorm AI returns to San Francisco Dec. 8–9 to convene the smartest people we know—technologists, entrepreneurs, Fortune Global 500 executives, investors, policymakers, and the brilliant minds in between—to explore and interrogate the most pressing questions about AI at another pivotal moment. Register here.


