by Yashin Manraj, CEO — Pvotal Technologies
R1, the open-source artificial intelligence model developed by Chinese start-up DeepSeek, has become a trending news topic in the tech sector. However, not all of its coverage has been positive. Just days after the AI assistant unseated ChatGPT to become the top-rated free app on Apple’s US App Store, news broke that cloud security company Wiz had found a dangerous vulnerability in the DeepSeek system that allowed anyone “full control over database operations, including the ability to access internal data.”
The DeepSeek debacle highlights the fact that AI companies need to look beyond cybercriminals as they develop their cybersecurity strategies. Insider threats and internal data vulnerabilities pose just as significant a risk as hackers seeking unauthorized access, and when exploited, they can be more costly, especially in terms of reputational damage.
The following are a few key issues AI companies must consider as they seek to keep their systems and data secure against both outsider and insider threats.
Internal vulnerabilities open doors to unauthorized access
Breaches typically occur when hackers attack an organization's security system, breaking through safeguards to gain unauthorized access. Internal vulnerabilities, by contrast, are weaknesses in security systems that leave doors open for outsiders to walk through rather than barriers they must break down.
A variety of scenarios can lead to an internal security vulnerability. One culprit is improper data handling that fails to adhere to security protocols. Misconfigurations of firewalls, databases, and cloud storage can also create vulnerabilities that put data at risk.
In the case of the DeepSeek exposure, researchers found two open HTTP ports leading to a database containing highly sensitive data. The misconfiguration allowed the database to be accessed without any authentication.
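To make the failure mode concrete, the sketch below shows how easily an unauthenticated database HTTP interface can be queried once its port is reachable. Wiz's write-up identified the exposed service as a ClickHouse database; the host, port, and query here are illustrative placeholders, not the actual DeepSeek endpoint.

```python
# Illustrative sketch: probing an unauthenticated database HTTP interface.
# The host below is a placeholder; Wiz's write-up identified the exposed
# DeepSeek service as a ClickHouse database reachable over HTTP.
import requests

HOST = "db.example.com"  # hypothetical exposed host
PORT = 8123              # ClickHouse's default HTTP port

# ClickHouse's HTTP interface accepts SQL in the request body. With no
# authentication configured, any query, including SHOW TABLES, succeeds.
resp = requests.post(f"http://{HOST}:{PORT}/", data="SHOW TABLES", timeout=5)

if resp.ok:
    print("Unauthenticated access succeeded; tables exposed:")
    print(resp.text)
else:
    print(f"Request rejected with status {resp.status_code}")
```

A door like this is not "broken through" at all: anyone who finds the port can issue queries as if they were an administrator.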
Penetration testing, which can include both internal and external components, is key to identifying internal vulnerabilities before they can be exploited. Regular security audits also help ensure the right controls are in place and protocols are being followed.
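As one small example of what an internal audit can automate, the following sketch scans hosts for listening ports that are not on an approved list. The hosts and allowlist are hypothetical; a real program would cover far more ground.

```python
# Minimal sketch of one check a security audit might automate:
# flag listening TCP ports that are not on an approved allowlist.
import socket

ALLOWED_PORTS = {443}            # hypothetical policy: only HTTPS exposed
HOSTS = ["10.0.0.5", "10.0.0.6"] # hypothetical internal hosts

def open_ports(host, ports=range(1, 1025)):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

for host in HOSTS:
    unexpected = set(open_ports(host)) - ALLOWED_PORTS
    if unexpected:
        print(f"{host}: unexpected open ports {sorted(unexpected)}")
```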
Insider threats can make controls ineffective
Whether malicious or negligent, insider activity can create dangerous vulnerabilities in security systems. An employee who neglects to apply a security patch, or a disgruntled employee seeking financial gain, poses an insider threat.
Addressing insider threats requires a number of steps. Clear policies and procedures must be developed to address acceptable use, data handling, and the reporting of suspicious activity. Companies must also develop strong access controls, preferably applying the principles of least privilege and role-based access.
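In code, a role-based, least-privilege check can be as simple as the sketch below. The roles and permission strings are hypothetical; the point is that access not explicitly granted is denied by default.

```python
# A minimal sketch of role-based access control with least privilege.
# Roles and permissions here are illustrative, not a complete
# authorization system.
ROLE_PERMISSIONS = {
    "analyst":  {"read:reports"},
    "engineer": {"read:reports", "write:models"},
    "admin":    {"read:reports", "write:models", "manage:users"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant only permissions explicitly assigned to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Least privilege in practice: an analyst can read reports but cannot
# touch models or user accounts.
assert is_allowed("analyst", "read:reports")
assert not is_allowed("analyst", "write:models")
assert not is_allowed("engineer", "manage:users")
```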
Employee training is also valuable for addressing insider threats. Security awareness training helps employees understand and identify threats. Ethical training educates employees on the role they play in keeping data secure and the consequences of failing in that role.
Data sharing can lead to security gaps
Collaboration is typical in the AI industry, especially among startups with limited resources for securing data. But sharing data can create new attack vectors that increase the risk of unauthorized access.
Leveraging encryption to keep data secure in transit is paramount. Virtual private networks, or VPNs, can be used to create secure connections for sharing. When APIs are used for sharing, companies should make sure they are secured with encryption, authorization, and authentication.
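As an illustration, the sketch below calls a hypothetical partner API over HTTPS with a bearer token. The endpoint and environment variable are assumptions for the example, not a real service.

```python
# A minimal sketch of calling a partner's data-sharing API over TLS with
# token-based authentication. The URL and token source are hypothetical.
import os
import requests

API_URL = "https://partner.example.com/v1/datasets"  # hypothetical endpoint
TOKEN = os.environ["PARTNER_API_TOKEN"]              # never hard-code secrets

resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},  # authentication
    timeout=10,
    # requests verifies TLS certificates by default (verify=True), so the
    # transfer is encrypted and the server's identity is checked.
)
resp.raise_for_status()
print(resp.json())
```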
Data minimization is a step that can help keep data secure when shared. This process limits sharing to only the data necessary for the collaboration’s specific purpose rather than granting wholesale access to a database.
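In practice, minimization can be as simple as whitelisting the fields a partner actually needs before a record ever leaves the organization. The field names in this sketch are hypothetical.

```python
# A minimal sketch of data minimization before sharing: strip every field
# the collaboration does not require. Field names are hypothetical.
SHARED_FIELDS = {"record_id", "model_score", "timestamp"}

def minimize(record: dict) -> dict:
    """Keep only the fields the sharing agreement covers."""
    return {k: v for k, v in record.items() if k in SHARED_FIELDS}

record = {
    "record_id": 42,
    "model_score": 0.93,
    "timestamp": "2025-01-30T12:00:00Z",
    "user_email": "jane@example.com",   # sensitive; never leaves the org
    "internal_notes": "flagged for QA", # internal only
}

print(minimize(record))
# {'record_id': 42, 'model_score': 0.93, 'timestamp': '2025-01-30T12:00:00Z'}
```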
Data sharing agreements should be used to define the terms of usage and stipulate the security controls that will be in place. Agreements should also establish a timeline for data retention and detail the process companies will use to securely delete data when the sharing period comes to an end.
Standard cybersecurity strategies focused primarily on outsider attacks won’t provide the type of protection AI companies need. To ensure their data stays secure, AI companies must address internal vulnerabilities, insider threats, and the unique challenges associated with data sharing. Ignoring any of those components introduces weaknesses that can be easily exploited by cybercriminals.
Yashin Manraj, CEO of Pvotal Technologies, has served as a computational chemist in academia, an engineer working on novel challenges at the nanoscale, and a thought leader building more secure systems at the world's best engineering firms. His deep technical knowledge of product development, design, and coding, combined with business insight, gives him a unique vantage point from which to identify and solve gaps in the product pipeline.