"China, Russia, North Korea and Iran are leveraging ChatGPT for their needs"
"China, Russia, North Korea and Iran are leveraging ChatGPT for their needs"
Sherrod Degrippo, Director of Threat Intelligence Strategy at Microsoft, warns that software is fragile and explains why an old router is the most dangerous home device.
Sherrod Degrippo, Director of Threat Intelligence Strategy at Microsoft and the main spokesperson at the BlueHatIL Conference held in Israel last week. In February, the Company published an extensive report on Iran's cyber-attacks against Israel since October 7. What trends have you identified there?
“For years I have been investigating the Iranians as threat actors - bodies, organizations or individuals causing deliberate damage to hardware or digital systems. Iran is one of the top four threat actors alongside China, Russia and North Korea. These are the countries with the programs considered to be the most developed and effective. Iran has always been interesting in many ways because it is technologically advanced, although also economically and diplomatically isolated because of the sanctions. This created freedom and opportunity for Iran to be able to do whatever it wants when it comes to cyber, because what else could they do to it in response?
“After October 7th, Iran focused and mobilized its entire cyber-attack operation against Israel as part of its support for the war effort. This shift included deploying ransomware and crypto wipers, but also influence operations, which is what the report primarily deals with. Until late October, Iran was entirely focused on influence operations supporting Hamas. Iran does not usually pivot its focus quickly, but here they instantly shifted resources to this particular effort.”
For example?
“For instance, they sent text messages to Israeli soldiers, saying ‘Just go home. Stop this. It's not worth it for you. Go be with your family. You shouldn't be dealing with this.’ They tried to influence the mindset of the people on the ground. Iran is known for puffing up their influence operations and exaggerating what they have actually done. For example, they claimed to have control over all the security cameras at various military bases, and they did not. So it's a combination of a small cyber achievement layered with big overblown influence operations on social media to create a facade of deep penetration. Our next report is scheduled to come out in September, and I believe we will see more emphasis on the influence operations coming out of Iran, with the goal of targeting Western audiences to foster anti-war and anti-Israel sentiments.”
We notice on social networks the extensive activity of Iranian bots impersonating Israeli citizens and organizations from both ends of the political spectrum, disseminating messages aimed at dividing Israeli society. Why are platforms like Facebook still not properly addressing this issue?
“This is a problem that is raging on social networks. In the United States, there is a very strong free speech culture, and therefore there is a lot of debate on the proper policies to be adopted on social networks. We have to find an incentive for the social networks to address the problem, because traffic is traffic as far as they are concerned. Posts and views are what create the financial incentive. Therefore, bots posting rage and hate bait have some benefit for the owners of the platform. And so, depending on them to police everything, I don't know that that necessarily is going to be successful. X (Twitter) has a rampant bot problem, which today are primarily porn bots. This can easily evolve into infrastructure that will be used for political influence campaigns.”
Let's talk about cyber threats and generative artificial intelligence (GenAI). What have you seen in the year and a half since the public launch of ChatGPT, the tool created by OpenAI, your main partner in the field of artificial intelligence?
“In collaboration with OpenAI, we examined what China, Russia, North Korea and Iran are doing in the field. We found that all four countries have software using GenAI tools. The threat actors are leveraging ChatGPT as a tool for their needs, just as any of us would use it as a tool for our needs. I use ChatGPT almost every day for everything: personal stuff, work stuff, I use it all the time. So these countries are also harnessing GenAI for their own purposes."
For example?
“For example, one of the actors based out of Russia is using GenAI for researching satellite technologies. Russia has a long history of efforts to disrupt satellite activity, disrupt communications or intercept communications for gathering intelligence. We also saw them doing research on personnel - what positions they hold and for how long. This information can be used in a variety of ways, but we suspect that the effort here is focused on social engineering, using behavioral attitudes to set policy or shape the social structure or to impersonate the personnel they are researching.”
Are they also asking ChatGPT to create texts for them in different languages and based on different target audiences for influence operations, for instance?
“We have seen them create social engineering content, but we still have not seen them used for launching attacks. At this stage, it is used more as a tool for research to gain a foothold and an understanding of language and cultural issues. We also have not seen evidence of using this tool to create malware. On the other hand, it is a creative tool, so it is hard to say for sure. One cannot tell if a code was generated by ChatGPT or by the developer itself.”
How are these types of risks mitigated?
“We have deleted all the accounts we identified and blocked their access.”
Was this after the investigation? How can it be prevented?
"We are very proactive in terms of safety and responsibility when it comes to artificial intelligence. I am a good friend of the head of the AI RedHat team (an organizational team tasked with adopting the mindset of threat actors to identify potential vulnerabilities or entry points in the computer system - O.K.). This is an entire team that is dedicated to causing a variety of AI platforms within Microsoft to respond in ways that it shouldn't. If you are a threat actor and have a large language model (LLM) that you can interact with, you can cause it to act maliciously. So we have a team of people tasked with ensuring that AI does not do things it shouldn’t, such as actions that could be harmful to minors or facilitate the creation of weapons.”
What for example is this team trying to get the AI to do?
“Ask it for the recipe for chemical weapons. Ask it things like, 'I want to make a child smoke cigarettes. What should I say?' Or, for example, 'What is the most effective way to rob a bank? Guide me step by step'. These are scenarios that we aim to prevent AI products from enabling. We want these products to be used to polish your script and create graphics that can be used in a presentation. Things that are productive. In October we launched the first ever program for identifying bugs in AI products. If you find an AI bug through prompt engineering in one of Microsoft’s products, you will receive a bounty for that."
Two weeks ago, OpenAI's two safety and reliability experts, Ilya Sustskever and Jan Leike, left OpenAI and disbanded the team that led the research on the subject. It seems that the company is less committed to developing safe, transparent and beneficial AI as we would like.
“I can't comment on OpenAI's commitment, but Microsoft's commitment is 100%.”
But it is your main AI partner. Most of Microsoft’s interactive AI products are based on OpenAI’s models. Aren't you worried that your partner is not as committed in this regard?
“I am not worried about what Microsoft releases, because we do so much testing. We heavily test the products and the code we incorporate from any of our partners. We take full responsibility for any product bearing the Microsoft name, and it undergoes rigorous scrutiny through all of our safety teams.”
Threat actors are also developing AI models not necessarily in a responsible way. For example, models that are developed in China and can be abused - how do you protect yourself from them?
“This is a question we have been thinking about for about two years. At Microsoft we use AI to detect attacks. We have been doing this for years, we just haven't made a big deal out of it. We have been leveraging data science, machine learning and AI in our threat detection products for a long time. As a user, you can employ the same type of security as always: Ensure to download automatic Microsoft updates, refrain from sending things you shouldn't, and avoid clicking on suspicious links.
“As for organizations, the focus must be on the environment in which artificial intelligence operates. Are these tools accountable and secure? What AI is being used in our environment and can we defend its usage? If an organization gives a loan based on an AI recommendation, does it comply with lending regulations? These are things that people will have to understand.”
What are the most prevalent cyber-attacks right now?
“Right now we see that China is targeting end devices: particularly routers in residences. By infiltrating a home router, attackers can launch attacks from what looks like a residential IP address that provides some ability to hide.”
So the target is not the home itself?
“This is like a platform for launching a supply chain attack. If I am able to break into your home router, I can act as if I am coming from a local IP address. This will not trigger any alarms, which makes it very difficult to trace or identify the source of the attack. The reason it is such a major problem is that people do not update their routers.”
What is the next big threat we should be worried about?
“It's hard to imagine such things. One never knows what threat players are going to do. I believe most of them will do the minimum so as not to get caught and be successful. They are usually not flashy or loud. However, with AI, threat actors will start doing things like taking old breaches, old data dumps and training large language models on the old products to find things they did not easily find before. For example, ask AI: "Tell me every time a male name says something flirtatious to a female name" — this is information that can be used for potential blackmail scenarios."
Speaking of blackmail and security, there have been recent discussions in the cybersecurity field about the breach that was revealed in the XZ app (an open source compression tool that is integrated as a basic component in many systems, services and applications). The breach is considered particularly sophisticated: The app was maintained by one person by the name of Lass Collin, its creator, who for a long time was under social attack, and in the end was persuaded to transfer the control of the application code to another developer. The developer was actually a threat actor who tried to use it to create a "backdoor", allowing penetration and control of the systems and services where the app is installed. It is said that if it had not been detected in time, a widespread online disaster would have occurred. Is this true or exaggerated?
“The XZ backdoor was discovered by a Microsoft employee, Andres Freund, who works in the Silicon Valley offices. He is a developer, he is not a security person. He found the backdoor. When you see the messages he posted on the subject, you see that he slowly realizes what is happening and is freaking out. When he figures out that a backdoor was opened to the system that manages encrypted communication between computers, he thinks this could be catastrophic."
What do you think is the main danger arising from this event?
“There is a mechanism through which threat actors can enter systems silently, and it is very difficult to detect them. It underscores the vulnerability of the software supply chain. In my opinion, as an industry, we need to work on this more seriously."
Such an attack, even if it is the first of its kind, is certainly not the last. What conclusions can be drawn and applied in the future?
“Microsoft is very focused on understanding the source of the code we use and understanding its security. Microsoft CEO Satya Nadella sent a letter to employees, stating that security is our top priority as developers, as engineers, and as a company. We are a company of engineers, and the focus should be on security. Even if you have another job, you need to take care of security first. Not just Microsoft, but everyone needs to know what makes up the software they are using. Where does the code come from? Are we getting updates on it? Is it being maintained stably? If there is a single maintainer on a piece of software, I don’t blame him. He is a victim. You have to understand if you rely on something that is maintained in such a fragile way."