Amit Elazari, Elad Schulman

Opinion
Israel’s AI treaty: The first step toward AI regulation and enhanced cybersecurity

Artificial intelligence is entering more areas of our lives, and regulators seek to catch up; Elad Schulman and Amit Elazari explain how Israel's signing of the treaty marks the beginning of the road to tighter regulation in the field of AI, and why companies will need to take immediate steps to avoid cyber risks, regulatory fines and lawsuits.

AI is increasingly becoming a vital part of our lives and workforce, and it was only a matter of time before regulation caught up with the emerging security issues and risks. That time has now arrived, with legislation on both sides of the Atlantic defining the duties and responsibilities associated with the responsible use of GenAI and LLM technologies. This wave of regulation has now reached Israel.
Recently it was announced that Israel had signed an international treaty to regulate the use of artificial intelligence (AI). While the treaty primarily addresses the use of AI applications and models in the public sector, its impact will extend to the business world, where the pace of change is just as rapid. An example can be seen in the U.S., where a similar influence is felt from the presidential executive order, initially aimed at the public sector but already affecting private enterprises. This signature is a significant step for businesses utilizing AI and handling public data security, but it marks only the beginning of a long road for organizations in both the public and private sectors.
Amit Elazari, Elad Schulman
(Photo: Sharon Gadasi, Shaun Mader)
What is AI Compliance?
This term covers all the steps that an organization takes to ensure that its use of AI and GenAI aligns with relevant rules and regulations. From a business and operational point of view, that includes a range of interlocking risk and compliance priorities: for example, the collection of training data for an AI model needs to be ethical and privacy-preserving.
Both the EU AI Act and the US Executive Order on AI set out to establish standards for the use of these technologies, emphasizing privacy, safety and transparency. For vendors and users of AI and LLMs, these laws also limit certain types and uses of GenAI and LLM technology and impose responsibility on organizations to guarantee compliance.
The Importance of AI Regulation in the Private and Public Sectors
As the risks associated with AI and generative AI (GenAI) continue to grow, governments and regulatory entities are stepping in with new tools and requirements aimed at protecting businesses and consumers. In recent years, the dangers of AI have been amplified, particularly with the introduction of GenAI-based chatbots like Gemini, Claude, and ChatGPT. First, there's the privacy risk—advanced technologies could expose sensitive user and company data. Moreover, there’s the danger of inaccuracies or bias in AI outputs, leading to flawed decisions that could harm individuals or businesses. In cybersecurity, AI technologies could enhance the capabilities of malicious attacks, creating new threats to information security. Lastly, there's concern that these technologies may be used unethically or in ways that conflict with societal values.
What Does the Future Hold for Businesses Implementing Generative AI?
To meet the new requirements, companies must adopt technological solutions for information security that will help them comply with increasingly strict and complex AI regulations. This is not a one-time adaptation to new technology, but a dynamic process that will require executives and professionals to stay updated and apply ongoing regulatory updates from the moment the organization first integrates AI. For example, private companies using GenAI tools will need to reassess how these technologies are used and managed to comply with regulations. Focusing on processes like transparency, data security, and accountability will help them stay compliant with the new laws.
Israeli AI Regulation: This Is Just the Beginning
The treaty is not a one-time solution but a milestone in an ongoing regulatory process that sets broad policy on the ethical and safe use of AI for the Israeli market.
The biggest challenge for organizations will be keeping pace with change. Companies that want to stay relevant must act quickly to implement mechanisms and information security solutions that will help them manage risks in the field of generative AI and promote innovative processes within the regulatory framework.
Organizations in both the public and private sectors will need to adopt advanced technologies and tools to help them comply with regulations. The future not only promises new technologies but also demands responsibility and a commitment to protect data and privacy. Failure to comply with cybersecurity regulations when deploying AI solutions could lead to serious consequences, including regulatory fines or personal lawsuits, as well as a loss of customer trust, which can be difficult to rebuild after a security breach.
Elad Schulman is CEO and Co-Founder of Lasso Security.
Amit Elazari is CEO and Co-Founder of Open Policy.