SEO poisoning and Bengal cats: A lesson in cybersecurity risks
The manipulation of web search results to deliver malware or political messages has escalated with the advent of generative AI tools.
The Bengal cat attack has been making waves among Australian users in recent weeks. Anyone who searched online with the niche legal query, "Are Bengal cats legal in Australia?" was directed to a seemingly legitimate website containing a decoy link. Users who clicked it were prompted to download a zip file that contained malware. The malware allowed hackers to take control of the user's device and demand a ransom for its release.
A team of researchers from the British cybersecurity company Sophos, which uncovered the scam and recently published its findings, assesses that the hackers' strategy of using legal terms as bait was specifically designed to attract corporate targets.
Hackers, it must be said, often display a certain sense of humor. A Bengal cat is a domestic breed developed to resemble a wild cat, and the visual mimicry practiced by its breeders parallels the hackers' own deception: sometimes amusing, often harmful. Their choice of cats for this scam is likely no coincidence, given the enduring connection between cats and the internet. Cats are often considered the internet's unofficial mascots, with a long-standing cultural presence spanning from Nyan Cat to Grumpy Cat, both of which have inspired countless memes. As Sophos aptly noted, "The internet is full of cats, and in this case—fake cat websites." The researchers advised users to exercise caution when performing particularly niche searches.
The professional term for this type of attack is "SEO poisoning" (Search Engine Optimization poisoning). It is a more sophisticated alternative to phishing emails and fake links of the "Your package is in customs, click here for quick release" variety. By manipulating search engine rankings, scammers ensure their fraudulent sites appear at the top of search results.
To achieve this, bad actors employ various tactics: registering domain names that resemble legitimate sites in order to exploit typos, stuffing pages with relevant keywords, and leveraging private link networks. The goal is to manipulate search engine algorithms so that fraudulent sites rank prominently in front of unsuspecting users, particularly those conducting niche searches. As Sophos noted, niche search queries lower user vigilance: "What are the chances of a phishing attack around a marginal topic?" an Australian user might wonder while checking the legality of Bengal cats as pets.
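To make the typosquatting tactic concrete, here is a minimal Python sketch of how lookalike variants of a legitimate domain can be enumerated; the domain used is hypothetical, and real campaigns combine such variants with the keyword and link manipulation described above. Defenders can run the same enumeration to find and register risky variants before attackers do.

```python
# Illustrative sketch only: enumerating simple typo variants of a domain.
# The domain "bengalcats.example" is hypothetical.

def typo_variants(domain: str) -> set[str]:
    name, _, tld = domain.partition(".")
    variants = set()
    for i in range(len(name)):
        # Character omission: "bengal" -> "bngal"
        variants.add(name[:i] + name[i + 1:] + "." + tld)
        # Character duplication: "bengal" -> "beengal"
        variants.add(name[:i + 1] + name[i] + name[i + 1:] + "." + tld)
    for i in range(len(name) - 1):
        # Adjacent transposition: "bengal" -> "bnegal"
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + "." + tld)
    variants.discard(domain)  # drop no-op variants
    return variants

print(sorted(typo_variants("bengalcats.example")))
```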
Interestingly, the goal of SEO poisoning is not always to implant malware. Take the Israeli streaming site Sdarot, which has been locked in a long-running copyright battle with the Zira company. The conflict has forced Sdarot's administrators to change domain names frequently, and devoted users searching for the site's current address often encounter a flood of fraudulent sites mimicking Sdarot's appearance, complete with fake Hebrew text about live streaming and movie downloads.
These sites, however, hide a different agenda: promoting political propaganda. Users may stumble upon videos about the humanitarian situation in Gaza or captions criticizing "Israeli lies," alongside Palestinian flags and slogans like "I stand by the innocent."
This tactic, known as "keyword stuffing," involves saturating a webpage with irrelevant or excessive keywords in order to rank higher in search results. It is a similar form of search result poisoning, driven by ideological rather than financial motives.
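As a rough illustration of how keyword stuffing can be spotted, the following Python sketch measures how much a single term dominates a page's text. The sample text and the 10% threshold are assumptions made for the example, not values used by any real search engine, which rely on far more sophisticated signals.

```python
# Illustrative sketch only: flagging possible keyword stuffing by
# measuring how often a single term dominates a page's visible text.

import re
from collections import Counter

def keyword_density(text: str) -> dict[str, float]:
    """Return the share of the page's words taken by the five most common terms."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {}
    counts = Counter(words)
    return {w: c / len(words) for w, c in counts.most_common(5)}

def looks_stuffed(text: str, threshold: float = 0.10) -> bool:
    # Heuristic: any single word making up more than 10% of the page is suspicious.
    return any(share > threshold for share in keyword_density(text).values())

page = ("download movies streaming download series streaming "
        "watch streaming download streaming movies online streaming")
print(keyword_density(page))
print(looks_stuffed(page))  # True: "streaming" dominates the text
```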
The companies behind generative AI tools are inadvertently exacerbating the already toxic state of internet search. These tools, while heralded as revolutionary, present their own set of challenges. Beyond traditional SEO poisoning, they enable a new type of manipulation: poisoning the model itself.
Search engines shape and organize global knowledge, making their integrity critical. However, when search is controlled by for-profit companies, as with Google, concerns arise about their monopoly over information. Despite numerous attempts to challenge Google’s dominance, no competitor has succeeded—until now, perhaps.
OpenAI, the company behind ChatGPT, recently announced plans to launch its own search engine powered by generative AI. While this sparked widespread interest, early concerns have emerged. One troubling example involved ChatGPT’s refusal to process queries containing the name "David Mayer." Users who queried the AI received error messages, and attempts to uncover the reason led to cryptic responses about "specific filters or rules in the system."
Whether "David Mayer" was deliberately blacklisted or not, the incident highlights the risk of inherent biases or manipulations within AI models. Unlike traditional SEO poisoning, this represents a deeper form of corruption—poisoning the very algorithm that underpins the search process.
For OpenAI, positioning ChatGPT as a search engine is off to a rocky start. Instead of solving the problem of search manipulation, it risks introducing a new and potentially more insidious form of bias into the already fragile ecosystem of internet search.