Deepfakes: Expert warns of increasingly complex deceptions

Last summer, the danger of deepfakes became tangible.

Several mayors of major European cities, including Franziska Giffey (44) in Berlin and Michael Ludwig (61) in Vienna, believed they had phoned Kyiv Mayor Vitali Klitschko (51). In truth, the person on their screens was a digital fake: not Klitschko, but a Russian comedy duo who were behind the calls and controlled the Ukrainian's likeness. Now a high-ranking Microsoft executive is warning of the further dangers of this development.

Eric Horvitz is Chief Science Officer at Microsoft, a role in which he deals with both the potential and the dangers of modern technologies. In a recently published research paper, he warns that artificial intelligence could become so potent in the future that "our children and grandchildren will find themselves in a world in which it is difficult or impossible to distinguish fact from fiction".

Interactive deepfakes, such as the Klitschko fakes, are particularly problematic, since even machines have difficulty identifying them as fakes. Because these technologies are steadily improving and deepfakes can now swap faces and clone voices in real time, Horvitz warns of potential horror scenarios. Several deceptively realistic deepfakes could, for instance, be deployed at the same time to simulate a specific event that never took place.

To counter the dangers of deepfakes, Horvitz calls for a public rethink. With ever-improving technical possibilities, disinformation campaigns could manipulate their audiences in a targeted manner over extended periods of time. "We can expect the emergence of new forms of deepfakes," Horvitz warns in this context.

To counteract this, it is not enough for the broader public and the media to be aware of the potential dangers and to improve their media literacy. The new technical possibilities also demand new ways to confirm the authenticity of digital content and to verify the identity of the people on our screens, Horvitz believes. As possible measures, he recommends fingerprint scans and digital watermarks, as well as legislation that criminalizes the use of deepfakes for disinformation purposes.
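The authentication idea behind such proposals can be illustrated with a minimal sketch: a cryptographic tag is computed over a piece of content when it is published and rechecked later, so any tampering becomes detectable. This is an illustrative stand-in for a digital watermark, not a description of any system Horvitz or Microsoft has built; the key, function names, and workflow below are assumptions.

```python
import hashlib
import hmac

# Hypothetical signing key held by the content's publisher (assumption,
# purely for illustration).
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Produce an HMAC-SHA256 tag that travels alongside the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Recompute the tag; any change to the content invalidates it."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, tag)

original = b"video-frame-bytes"
tag = sign_content(original)
print(verify_content(original, tag))          # True: content unmodified
print(verify_content(b"altered-frame", tag))  # False: content was tampered with
```

Real-world proposals along these lines (such as cryptographic provenance metadata embedded in media files) use public-key signatures rather than a shared secret, but the verification principle is the same.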