
Opinion
The ticking time bomb: How public AI puts your organization at risk
From leaking sensitive information to relying on incorrect information, the increasing accessibility of artificial intelligence tools poses a significant challenge to companies and organizations; Moshe Karako, Chief Technology Officer at NTT Israel, explains the dangers, how they can be overcome, and why we must act now.
Many organizations today face a dual threat when using public artificial intelligence tools such as ChatGPT, Grok, and Claude: first, the steady leakage of sensitive information out of the organization, such as customer data, trade secrets, and strategy; and second, a quieter but equally destructive danger – reliance on incorrect or generic information. To understand the full scope of the problem, it helps to examine it through the AI TRiSM (Trust, Risk and Security Management) framework, which provides a comprehensive methodology for addressing the unique challenges posed by generative AI and enables analysis across five critical dimensions: technology, regulation, impact, security, and ethics.
In the course of my work as an information security researcher and consultant, I frequently encounter alarming cases in which sensitive organizational and personal information is disclosed. Employees, acting in good faith and seeking to work more efficiently, feed sensitive internal information into public AI tools. A 2024 McKinsey study found that organizations that did not invest in secure AI systems experienced three times as many data breaches as those that did.
Another growing risk in the AI era is the leakage of internal organizational information to unauthorized parties within the organization itself. A recent case at an Israeli tech company illustrates the depth of the problem: a junior employee tried to discover the CEO's salary through a targeted query to ChatGPT, gaining indirect access to internal documents. The system, which had already "learned" the information from documents previously entered by other employees, provided the requested answer. The incident highlights the critical need to define a clear policy on the use of AI tools and to restrict these tools' access to sensitive information, even inside the organization; one common approach is to screen prompts for sensitive content before they ever leave the corporate network, as sketched below.
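The following is a minimal sketch of such a pre-prompt guardrail. It is illustrative only: the pattern names, the `SENSITIVE_PATTERNS` list, and the `screen_prompt` function are hypothetical, and a production deployment would typically rely on a dedicated data loss prevention (DLP) service and an organization-specific data classification rather than a handful of regular expressions.

```python
import re

# Hypothetical, illustrative patterns; a real deployment would use a DLP service
# and the organization's own classification of sensitive data.
SENSITIVE_PATTERNS = {
    "national_id": re.compile(r"\b\d{9}\b"),                  # 9-digit ID number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # crude card-number check
    "salary_keyword": re.compile(r"\b(salary|payroll|compensation)\b", re.IGNORECASE),
    "confidential_label": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block the prompt if any sensitive pattern matches."""
    reasons = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return (len(reasons) == 0, reasons)

if __name__ == "__main__":
    allowed, reasons = screen_prompt("What is our CEO's salary this quarter?")
    if not allowed:
        print("Prompt blocked before reaching the public AI tool:", reasons)
```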
The danger is compounded by the quality of the information these systems return. Employees who rely on answers from public systems receive information that is not tailored to their organization's unique reality and work processes. According to a Deloitte survey, 78% of organizations that relied on public AI systems reported incorrect business decisions driven by inaccurate or irrelevant information; in some cases, that information led to security breaches that compromised the organization.
Legal exposure is a particularly critical issue. Strict privacy regulations such as the GDPR impose heavy fines on organizations that fail to protect customer data, and according to a 2023 Gartner report, 75% of the world's population will be covered by data protection legislation by 2025. A tangible example of these risks is the Samsung case in Korea, where the company was fined $14 million after sensitive information leaked through the use of ChatGPT.
The solution to these challenges lies in developing in-house AI systems. Such systems, trained on the organization's own data and processes and governed by targeted access permissions, both prevent information leakage and return accurate, relevant answers; a retrieval layer that filters documents by the requesting employee's permissions, as sketched below, is one way to enforce this. Research shows that organizations that have invested in such systems have seen an 82% decrease in information security incidents and a 64% improvement in response accuracy.
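As an illustration, here is a minimal sketch of permission-aware retrieval for an internal question-answering system. All names (`Document`, `retrieve_for_user`, the `acl` field) are hypothetical; the point is simply that documents an employee is not cleared to read never reach the model's context, so the model cannot repeat them.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """An internal document with an access-control list (hypothetical schema)."""
    doc_id: str
    text: str
    acl: set[str] = field(default_factory=set)  # roles allowed to read this document

def retrieve_for_user(query: str, user_roles: set[str], corpus: list[Document],
                      top_k: int = 3) -> list[Document]:
    """Naive keyword retrieval that only considers documents the user may read."""
    readable = [d for d in corpus if d.acl & user_roles]   # permission filter comes first
    terms = set(query.lower().split())
    scored = sorted(readable,
                    key=lambda d: len(terms & set(d.text.lower().split())),
                    reverse=True)
    return scored[:top_k]

# Example: a junior employee's query never sees payroll documents.
corpus = [
    Document("payroll-2024", "Executive salary and compensation table", {"hr", "finance"}),
    Document("handbook", "Company handbook: vacation policy and benefits", {"all-staff"}),
]
for doc in retrieve_for_user("What is the CEO salary?", {"all-staff"}, corpus):
    print(doc.doc_id)  # prints only 'handbook'
```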
Moving forward, action is required on three levels: investment in internal AI systems, employee training, and the implementation of control and supervision processes. According to a Deloitte study, the average cost of a data breach is $3.2 million – a figure that underscores that investing in secure solutions is not only necessary but also cost-effective.
The time to act is now. Organizations that use the AI TRiSM model to grasp the full range of risks and act quickly to implement comprehensive solutions will be the ones that survive and succeed in the AI era.
Moshe Karako is the Chief Technology Officer at NTT Innovation Lab Israel.