5.2 C
Casper
Sunday, March 30, 2025

Straiker Launches AI Security with $21M Funding

Must read

AI-native Straiker emerges from stealth with funding, offering the first solution to secure enterprise AI apps and autonomous agents.

Straiker, an AI-native security company, announced its launch from stealth mode today with $21 million in initial funding from Lightspeed Ventures and Bain Capital Ventures. Straiker is introducing the first solution designed to safeguard the full range of enterprise AI applications–from AI native apps to Agents. Backed by top-tier investors and a team of seasoned AI and cybersecurity experts, Straiker aims to help organizations confidently deploy AI today and in the future by addressing critical security and safety risks with the AI apps enterprises are building and deploying.

“Enterprises are moving beyond simple AI chatbots to fully autonomous agents—but with this evolution comes an exponential rise in security and safety risks. The threat vector has escalated from basic prompt injection attacks to mass data exfiltration, supply chain attacks, and even autonomous chaos, as adversaries exploit AI’s language and reasoning layer vulnerabilities. AI is rapidly becoming one of the most significant cybersecurity risks. Enterprises must act now to stay ahead of these emerging risks and make AI security a top priority, said Ankur Shah, Co-founder and CEO at Straiker. “Straiker is the only AI-native platform that delivers real-time protection against attacks aimed at each layer of AI apps and Agents.”

Straiker’s automated, red-team-level assessment is integrated with the company’s runtime safety and security guardrails for continuous analysis and automated blocking.  The innovative solutions harness intelligence from each layer of the AI application stack—including user, models, application, agents, identity, and data—ensuring precise assessment results and industry-leading protection.

Straiker’s launch includes the general availability of its first two AI-native modules:

  • Ascend AI: Performs an in-depth attack simulation using an advanced, curated set of AI-specific safety and security threats. Elevate the security of AI apps and agents with a one-time risk assessment or continuous testing to proactively detect and resolve root issues.
  • Defend AI: Extends protection beyond prompt-level threats to shield AI applications and agents from a wide range of security and safety threats targeting different layers of AI apps and agentic systems. Seamless integration provides automated threat blocking for risks detected by Ascend AI.

Also Read: Can AI Enhance Conversations Without Sacrificing Privacy?

Both modules are powered by the Straiker AI Engine, which uses a medley of small, fine-tuned models that reason across intelligence from every layer of the AI applications stack. This design provides precision and low latency for lighting fast app performance. The Striker AI engine enables customizations to address unique safety and security requirements with an architecture designed to preserve privacy. Straiker’s STAR team supports these industry-leading capabilities. This dedicated AI security research team continues to investigate the latest model and autonomous agent risks and research the tactics, techniques, and procedures being employed by adversaries.

Looking ahead, Straiker will introduce additional modules to secure every stage of the AI app development lifecycle and to safeguard the use of third-party AI applications.

More articles

Latest posts