OpenAI has reportedly overhauled its security operations to protect against corporate espionage. According to the Financial Times, the company accelerated an existing security clampdown after Chinese startup DeepSeek released a competing model in January, with OpenAI alleging that DeepSeek improperly copied its models using “distillation” techniques.
The beefed-up security includes “information tenting” policies that limit staff access to sensitive algorithms and new products. For example, during the development of OpenAI’s o1 model, only vetted team members who had been read into the project could discuss it in shared office spaces, according to the FT.
And there’s more. OpenAI now isolates proprietary technology on offline computer systems, implements biometric access controls for office areas (it scans employees’ fingerprints), and maintains a “deny-by-default” internet policy requiring explicit approval for external connections, per the report, which further adds that the company has increased physical security at data centers and expanded its cybersecurity personnel.
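For readers unfamiliar with the term, “deny-by-default” inverts the usual network posture: no outbound connection is permitted unless it has been explicitly approved. Here is a minimal Python sketch of the idea; the hostnames and function are hypothetical illustrations, not anything described in the FT report:

```python
# Hypothetical sketch of a deny-by-default egress policy: outbound
# connections are refused unless the destination host appears on an
# explicitly approved allowlist. Not OpenAI's actual implementation.
from urllib.parse import urlparse

# Hypothetical allowlist; in practice this would be curated by a security team.
APPROVED_HOSTS = {
    "api.internal.example.com",
    "packages.internal.example.com",
}

def egress_allowed(url: str) -> bool:
    """Return True only if the destination has explicit approval."""
    host = urlparse(url).hostname
    return host in APPROVED_HOSTS  # anything unlisted is denied by default

# Unknown destinations are blocked until someone approves them.
print(egress_allowed("https://api.internal.example.com/v1/models"))  # True
print(egress_allowed("https://example.org/upload"))                  # False
```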
The changes are said to reflect broader concerns about foreign adversaries attempting to steal OpenAI’s intellectual property, though given the ongoing poaching wars among American AI companies and increasingly frequent leaks of CEO Sam Altman’s comments, OpenAI may be trying to address internal security issues, too.
We’ve reached out to OpenAI for comment.