Russian Influence Operation Likely Used ElevenLabs' AI Voice

Generative AI has emerged as a powerful tool, with remarkable capabilities and real potential for misuse. That misuse now extends to state influence operations. This article examines how one such campaign used AI-generated voices and what that implies for the technology's oversight.

Uncovering the Dark Side of Generative AI in State Affairs

Well-Documented Misuses of Generative AI

Generative AI already has a well-documented history of misuse. It has been used to produce fake academic papers that seem legitimate at first glance but are plagiarized or fabricated, undermining the integrity of scholarly publishing.

It has also been used to copy the work of artists, generating paintings and images that imitate a famous artist's style without consent. Some of these AI-generated works have sold at high prices, which both dilutes the value of the originals and makes clear how easily the technology can be turned against the creative industries.

Russian-Tied Campaign and AI-Generated Voiceovers

One recent campaign to draw attention is a Russian-tied operation dubbed "Operation Undercut." Designed to undermine Europe's support for Ukraine, it prominently featured AI-generated voiceovers on fake or misleading "news" videos. The videos targeted European audiences, attacking Ukrainian politicians as corrupt or questioning the usefulness of military aid to Ukraine. One video, for instance, claimed that "even jammers can't save American Abrams tanks."

The report states that the video creators "very likely" used AI voice generation, including ElevenLabs' technology, to make their content appear more legitimate. To verify this, Recorded Future's researchers submitted the clips to ElevenLabs' own AI Speech Classifier and got a match. ElevenLabs did not respond to requests for comment. The campaign nonetheless highlights how readily advanced tools can be used to spread misinformation.

Inadvertent Showcase of AI Voice Generation

The campaign's orchestrators inadvertently demonstrated why AI voice generation is so useful. Some of their videos used real human voiceovers with a discernible Russian accent, while the AI-generated voiceovers spoke multiple European languages, including English, French, German, and Polish, with no foreign-sounding accent. In other words, AI let the content sound more natural and local, which underscores the need for careful monitoring of how such tools are used.

AI's Role in Multilingual Misinformation

According to Recorded Future, AI allowed the misleading clips to be released quickly in multiple languages spoken in Europe, including English, German, French, Polish, and Turkish, all of which ElevenLabs supports. The ability to generate and localize content that fast makes it far easier for misinformation to spread at scale.

Attribution and Impact

Recorded Future attributed the activity to the Social Design Agency, a Russia-based organization sanctioned by the U.S. government, illustrating the lengths to which state actors will go in using generative AI to further their agendas. Even so, the campaign's overall impact on public opinion in Europe was minimal, according to Recorded Future. Generative AI can spread misinformation quickly, but speed alone does not guarantee influence.

Previous Incidents and Company Responses

This isn't the first time ElevenLabs' products have been singled out for alleged misuse. The company's technology was behind a robocall impersonating President Joe Biden that urged voters not to go out and vote during a primary election. In response, ElevenLabs said it released new safety features, such as automatically blocking the voices of politicians.

ElevenLabs bans "unauthorized, harmful, or deceptive impersonation" and says it enforces that policy with a mix of automated and human moderation, part of its effort to ensure the responsible use of its technology.

Growth and Investors of ElevenLabs

ElevenLabs has experienced explosive growth since its founding in 2022. It recently grew annual recurring revenue (ARR) to $80 million from $25 million less than a year earlier and may soon be valued at $3 billion. Its investors include Andreessen Horowitz and former GitHub CEO Nat Friedman. That growth reflects investor enthusiasm for generative AI, but it also underscores the need for regulation and oversight to ensure the technology is used for good rather than harm.