Seven steps to AI supply chain visibility — before a breach forces the issue

AI Security: Closing the Visibility Gap

Did you know that four in 10 enterprise applications are expected to incorporate task-specific AI agents this year? Yet a recent Stanford University study found that only 6% of organizations have a comprehensive AI security strategy in place.

Looking ahead to 2026, Palo Alto Networks predicts that executives may face personal liability for rogue AI actions, a significant shift in accountability. The core challenge is that AI threats evolve quickly and unpredictably, while traditional governance methods struggle to keep pace.

One critical issue highlighted by a recent survey conducted by Harness is the lack of visibility into the usage and modification of AI models across organizations. This visibility gap poses a significant vulnerability, making incident response challenging and leaving security teams in the dark.

Furthermore, the rise of prompt injection, vulnerable generated code, and jailbreaking of AI models is a growing concern. Despite heavy investment in cybersecurity, many organizations cannot detect these attacks, which are often obfuscated to slip past conventional defenses.

Addressing the Risks

IBM’s 2025 Cost of a Data Breach Report found that breaches involving AI models or applications carry significant costs, and that 97% of breached organizations lacked proper AI access controls. Shadow AI incidents, in particular, can prove especially expensive.

While standards such as SBOMs and AI risk management frameworks (for example, NIST's AI RMF) already exist, adoption remains a challenge. The shift toward AI-specific Bills of Materials (AI-BOMs) is crucial for improving transparency and traceability in AI supply chains.

As the attack surface expands with the influx of new AI models, organizations must prioritize AI supply chain visibility. Implementing AI-BOMs can provide crucial forensic insights for incident response, although they are not a substitute for runtime security measures.
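To make the idea concrete: at its simplest, an AI-BOM entry is a structured record of a model's identity, origin, and integrity. The sketch below builds a minimal CycloneDX-style document in Python (CycloneDX 1.5 introduced the `machine-learning-model` component type; the envelope here is deliberately stripped down, and any specific model details would be your own):

```python
import hashlib
import json

def make_model_component(name, version, weights_path, source_url):
    """Build a minimal CycloneDX-style component entry for an AI model.

    Hashing the weights file records exactly which artifact was deployed,
    which is the forensic detail incident responders need later.
    """
    with open(weights_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "type": "machine-learning-model",  # component type added in CycloneDX 1.5
        "name": name,
        "version": version,
        "externalReferences": [{"type": "distribution", "url": source_url}],
        "hashes": [{"alg": "SHA-256", "content": digest}],
    }

def make_ai_bom(components):
    """Wrap component entries in a minimal CycloneDX-style BOM envelope."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": components,
    }

# Example: serialize a one-model BOM to JSON for storage alongside the model.
# bom_json = json.dumps(make_ai_bom([make_model_component(...)]), indent=2)
```

Even this skeletal record answers the two questions that matter most after an incident: where did this model come from, and are these the exact weights we vetted?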

Steps Towards Better AI Governance

  1. Build a comprehensive model inventory to track AI models across the organization.
  2. Manage shadow AI use by offering sanctioned, secure platforms and monitoring for unsanctioned tools.
  3. Require human approval for production models to ensure accountability.
  4. Consider adopting SafeTensors for new deployments to mitigate risks.
  5. Pilot ML-BOMs for high-risk models to enhance provenance documentation.
  6. Treat every model pull as a supply chain decision, subject to the same vetting as third-party code.
  7. Include AI governance in vendor contracts to enforce SBOM requirements and incident notification protocols.
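Steps 1 and 3 can be sketched in a few lines of code. Below is a minimal, hypothetical model-inventory registry in Python that tracks models centrally and refuses to register a production model without a named human approver; the field names and policy are illustrative assumptions, not any particular product's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ModelRecord:
    """One row in a central AI model inventory (step 1)."""
    name: str
    version: str
    source: str                       # where the model was pulled from
    owner: str                        # accountable team or individual
    approved_by: Optional[str] = None # human sign-off (step 3)
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ModelInventory:
    """Minimal registry: production use requires a named human approver."""

    def __init__(self):
        self._records = {}

    def register(self, record: ModelRecord, production: bool = False):
        # Enforce the human-approval gate before a model can go to production.
        if production and not record.approved_by:
            raise ValueError(
                f"{record.name}:{record.version} needs human approval for production"
            )
        self._records[(record.name, record.version)] = record

    def lookup(self, name: str, version: str) -> Optional[ModelRecord]:
        return self._records.get((name, version))
```

The point of the sketch is the gate, not the storage: wherever your inventory lives, production registration should fail closed when no accountable human has signed off.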

Looking Ahead

As AI security becomes a top priority, organizations must prepare for stringent regulations and compliance requirements. The EU AI Act and Cyber Resilience Act are already shaping the landscape of AI governance, with severe penalties for non-compliance.

With the increasing complexity of AI models and the evolving threat landscape, proactive measures are essential. The time to build visibility and governance in AI supply chains is now, before the risks escalate.
