
Hey there! Did you hear about the recent federal directive regarding Anthropic technology? It’s causing quite a stir: all U.S. government agencies have been asked to stop using it within six months. But here’s the catch – most agencies don’t even know where Anthropic’s models are in their systems!
And it’s not just government agencies facing this issue. Many enterprises are in the same boat. The disconnect between what they think they’ve approved and what’s actually being used is a major concern for security leaders.
AI vendor dependencies are a tangled web that extends far beyond the initial contract. They wind through vendors, platforms, and tools that may have been adopted without proper review. It’s a chain most enterprises have never fully mapped.
Uncovering the Hidden Inventory
A recent survey revealed that only 15% of U.S. CISOs have full visibility into their software supply chains, and almost half had adopted AI tools without proper approval. This means that undocumented AI vendor dependencies are piling up, waiting to cause problems when a migration is forced.
According to Merritt Baer, CSO at Enkrypt AI, the challenge lies in understanding the dynamic and indirect nature of AI, which traditional security programs may not be equipped to handle.
Preparing for the Unexpected
The directive for agencies to move away from Anthropic technology is a wake-up call for all organizations relying heavily on a single AI vendor. The risks associated with such dependencies are high, as shown by the increase in shadow AI incidents leading to breaches.
Organizations need to be proactive in identifying and addressing these dependencies before they become a liability. The key is to understand the entire supply chain and ensure that critical workflows are not impacted by a sudden vendor cutoff.
For those doing business with the Pentagon, this means proving that their workflows are free from Anthropic technology. It’s a complex task that requires a thorough understanding of the dependencies at play.
Taking Action Now
So, what can you do as a security leader to tackle this issue head-on? Baer suggests four practical steps that can be implemented within 30 days:
- Map execution paths, not vendors. Track where services are making model calls and what data is involved.
- Identify control points you actually own. Ensure you have control at all levels of data interaction.
- Run a kill test on your top AI dependency. Simulate the removal of a critical AI vendor to uncover hidden dependencies.
- Force vendor disclosure on sub-processors and models. Make sure your AI vendors can provide detailed information on their dependencies.
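The first step – mapping execution paths rather than vendor names – can start with data you probably already have: egress or proxy logs. As a minimal sketch (the hostnames, log format, and service names here are illustrative assumptions, not a definitive tool), you could tally which internal services are actually calling known model-API endpoints:

```python
# Sketch: build an AI-dependency inventory from egress proxy logs.
# MODEL_HOSTS and the log format are assumptions for illustration only.
import csv
import io
from collections import defaultdict

# Known model-API hostnames mapped to vendors (illustrative, not exhaustive)
MODEL_HOSTS = {
    "api.anthropic.com": "Anthropic",
    "api.openai.com": "OpenAI",
}

# Hypothetical egress log: which service connected to which destination host
SAMPLE_LOG = """service,dest_host
billing-svc,api.anthropic.com
search-svc,api.openai.com
billing-svc,api.anthropic.com
report-svc,internal.db.local
"""

def map_model_calls(log_text: str) -> dict:
    """Return {vendor: sorted list of internal services calling that vendor}."""
    calls = defaultdict(set)
    for row in csv.DictReader(io.StringIO(log_text)):
        vendor = MODEL_HOSTS.get(row["dest_host"])
        if vendor:
            calls[vendor].add(row["service"])
    return {vendor: sorted(services) for vendor, services in calls.items()}

print(map_model_calls(SAMPLE_LOG))
```

Even a rough inventory like this tells you which workflows a kill test should target, and which vendors owe you sub-processor disclosures.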
Preparing for the Storm
Enterprises need to understand that simply approving an AI vendor interface is not enough. The real dependencies run deeper and can have significant consequences when under pressure.
By mapping out AI vendor dependencies and taking proactive measures to address potential risks, organizations can better prepare for any future challenges that may arise. Don’t wait for the next forced migration to catch you off guard!
