Why do leading AI labs use IAL3 identity proofing for teams and suppliers?


In this post, we cover the top trends we see among leading AI companies working to protect their intellectual property. These companies increasingly require our identity services to meet IAL3 (Identity Assurance Level 3, defined in NIST SP 800-63), even though they do no business with any government. The benefit of IAL3 is that it detects remote IT worker fraud, closing a door that nation-state-level actors would otherwise use to walk straight into your company. The process can feel overwhelming, but understanding the why and the how makes it a smooth one.

First, consider what happens when an insider threat gains access to your IP or other proprietary data. When sophisticated actors are involved, most companies never realize their data is being stolen or reused. In one case where we detected an insider threat, it took months before anyone noticed an artifact left over from a Docker build process, containing a folder named after the company's original model. It is trivial for bad actors to rewrite and obfuscate the true origins of stolen source code and data, but sometimes you get lucky and find a mistake like this before it is fixed in a subsequent version. Years of work can then sit in the hands of a malicious actor who abuses it for their own purposes.

The threat landscape has also expanded: unskilled actors can now use AI to generate their entire attack plan, especially its technical components. In previous years, companies did not have a massive scope to defend, because the skill required of nation-state actors kept the number of attackers in check. Now we are seeing expansionary practices by bad actors who can quickly monetize stolen IP.
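Leftover build artifacts like the Docker folder mentioned above can sometimes be surfaced by scanning exported image layers for internal codenames. Below is a minimal sketch; the function name, and the idea of matching on a model codename, are illustrative assumptions, not part of any specific product:

```python
import tarfile
from typing import Iterable, List


def find_suspect_paths(layer_tar: str, keywords: Iterable[str]) -> List[str]:
    """Scan an exported image layer tarball (e.g. from `docker save`)
    and return member paths whose names contain any of the given
    keywords, such as an internal model's codename."""
    hits = []
    with tarfile.open(layer_tar) as tf:
        for member in tf.getmembers():
            name = member.name.lower()
            if any(k.lower() in name for k in keywords):
                hits.append(member.name)
    return hits
```

In practice you would export each layer of a suspect image and run this against a short list of internal codenames; a single hit, like the folder in the case above, is enough to justify a deeper investigation.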

Identity verification is just one layer of protection; many other countermeasures should be in place to prevent these kinds of leaks. Tools such as IAM, endpoint security, DLP, encryption, network controls, penetration testing, and dependency analysis should all be part of the defense pipeline. Our solution primarily protects digital identity with physical, human-presence verification. However, there are cases in which threat actors recruit people who deceive your entire hiring and relationship pipeline, only to turn rogue at the right time. Catching those risks requires an active insider threat team, and leading-edge AI companies do not merely tolerate that threat; they eliminate it with additional gating measures.

Trust Swiftly is seeing demand across the AI ecosystem to verify contractors, employees, suppliers, and essentially any related party throughout the entire AI development process. Unfortunately, trust alone is no longer enough for many companies, and they are looking for high assurance that their IP will not be misused. Before granting access to systems and data, diligent AI companies require their suppliers and employees to be vetted at a high level of assurance. This is no different from what the government does through the FedRAMP program to ensure it is dealing with trustworthy individuals. Given the rapid advancement of some AI models, it will become necessary to gate access to them to ensure they are used appropriately. Companies like OpenAI are already restricting access to their models to verified organizations and will likely expand this requirement once more powerful versions are released. Vetting every party will become crucial for AI companies to ensure they do not embed malicious connections into their environment. Establishing where a person is from and who they really are is a prerequisite for deciding whether they can be trusted. For example, in another case, we discovered a partnership opened between two companies that was, in reality, a false pretense the bad actor used to gain access to networks and steal proprietary data. Even seemingly benign or trustworthy companies can have different motives, and a company's reputation should not be passed down to its individuals.
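Gating access on verified identity can be as simple as a policy check before any model or data request is served. Here is a minimal sketch, assuming a hypothetical `Identity` record populated by your identity-proofing provider; the field names and the policy function are illustrative:

```python
from dataclasses import dataclass


@dataclass
class Identity:
    """Result of identity proofing for one person (illustrative)."""
    subject: str        # who was verified
    ial: int            # achieved Identity Assurance Level (1-3)
    org_verified: bool  # the sponsoring organization was also vetted


def may_access_model(identity: Identity, required_ial: int = 3) -> bool:
    """Allow access to sensitive models/IP only when the person was
    proofed at or above the required IAL and their org is verified."""
    return identity.org_verified and identity.ial >= required_ial
```

A real deployment would enforce this in middleware in front of the model API, alongside the usual authentication and authorization layers, rather than as a standalone function.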

Depending on the level of protection you want for your data, there are multiple supply chains and vendors to check. Building state-of-the-art models requires coordination with countless companies, and most companies do not go to the most granular level, such as verifying the provenance of the chips used to run their models. There are engineers at every level, and it becomes impractical to microscopically examine the sand supplier of the silicon you purchase. However, there are basic things you should verify, such as that the device you insert your YubiKey authenticator into is not a custom-built Windows PC, but rather a hardware-encrypted device running a trusted operating system with an externally verifiable chain of custody.
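The basic device checks above can be expressed as a simple posture policy. The sketch below uses illustrative fields and an assumed OS allow-list; a real program would source these signals from attestation and device-management tooling rather than self-reported values:

```python
from dataclasses import dataclass

# Illustrative allow-list of operating systems your policy trusts.
TRUSTED_OS = {"macos", "chromeos", "qubes"}


@dataclass
class DevicePosture:
    """Self-describing device facts (illustrative field names)."""
    os_name: str
    secure_boot: bool        # verified boot chain enabled
    disk_encrypted: bool     # hardware/full-disk encryption on
    custody_verified: bool   # externally verifiable chain of custody


def device_trusted(p: DevicePosture) -> bool:
    """Pass only devices that meet every element of the policy."""
    return (p.os_name.lower() in TRUSTED_OS
            and p.secure_boot
            and p.disk_encrypted
            and p.custody_verified)
```

The point is not the specific allow-list, which will differ per company, but that every element of the policy must pass: a custom-built PC with an unverifiable custody history fails even if its disk is encrypted.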

Conducting basic reconnaissance on a company and its employees' identities can tell you whether you need full IAL3 checks or whether the parties are already trustworthy. For example, if a company has key employees with ties to other countries and your model is export-controlled and restricted, it is likely not worth the risk to involve them in sensitive areas of your AI research. The lengths some bad actors go to are extraordinary, and we are unable to write about them due to the sensitivity of their actions. The vetting processes we see today may look trivial compared to those that will be required in the future.

In summary, the top AI labs will remain targets as they race to develop more advanced models that attract broader use. Threat actors will lurk for easy targets and take what they can get, but as long as intrusions are blocked or limited in scope, the risk is mitigated. Take, for example, the case where an AI model was stolen and resold, only to become completely obsolete within two years, requiring another breach to stay current. In the long run, the AI companies that develop positive, beneficial models will outpace any short-term setbacks, such as stolen IP. The companies that ignore this risk will keep wondering how a competitor is constantly nipping at their heels, cracking the same code only a little slower. The correct approach is to invest early in identity security so you have a trusted foundation of employees with access to sensitive materials.

About the Trust Swiftly Team

We publish practical guidance on identity assurance, fraud prevention, and FedRAMP-aligned controls for high-risk workflows.
