Scariest Deepfake Of Them All


Thanks, gocomics.org

Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us?…

Generated media, such as deepfaked video or GPT-3 output… if used maliciously, there is no unaltered original, no raw material that could be produced as a basis for comparison or evidence for a fact-check. In the early 2000s, it was easy to dissect pre-vs-post photos of celebrities and discuss whether the latter created unrealistic ideals of perfection. In 2020, we confront increasingly plausible celebrity face-swaps on porn, and clips in which world leaders say things they’ve never said before. We will have to adjust, and adapt, to a new level of unreality. Even social media platforms recognize this distinction; their deepfake moderation policies distinguish between media content that is synthetic and that which is merely “modified”.

But synthetic text—particularly of the kind that’s now being produced—presents a more challenging frontier. It will be easy to generate in high volume, and with fewer tells to enable detection. Rather than being deployed at sensitive moments in order to create a mini scandal or an October Surprise, as might be the case for synthetic video or audio, textfakes could instead be used in bulk, to stitch a blanket of pervasive lies. As anyone who has followed a heated Twitter hashtag can attest, activists and marketers alike recognize the value of dominating what’s known as “share of voice”: Seeing a lot of people express the same point of view, often at the same time or in the same place, can convince observers that everyone feels a certain way, regardless of whether the people speaking are truly representative—or even real. In the study of social networks, this is called the majority illusion.

As the time and effort required to produce commentary drops, it will be possible to produce vast quantities of AI-generated content on any topic imaginable. Indeed, it’s possible that we’ll soon have algorithms reading the web, forming “opinions,” and then publishing their own responses. This boundless corpus of new content and comments, largely manufactured by machines, might then be processed by other machines, leading to a feedback loop that would significantly alter our information ecosystem.
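The majority illusion mentioned in the excerpt (and in the Lerman, Yan, and Wu paper linked in the comments below) is easy to see on a toy network. The sketch that follows is purely illustrative: a hypothetical 13-node graph in which only three well-connected “hub” accounts push an opinion, yet every other account sees that opinion held by at least half of its own contacts. The graph structure, node counts, and the one-half threshold are illustrative assumptions, not data from the article or the paper.

```python
# Toy illustration of the "majority illusion" (cf. Lerman, Yan & Wu, 2015).
# Three highly connected "hub" accounts hold an opinion; everyone else does not.
# Because the hubs sit in so many neighborhoods, every other account still sees
# the opinion held by at least half of its own contacts.
from collections import defaultdict

hubs = [0, 1, 2]              # the only nodes that actually hold the opinion
leaves = list(range(3, 13))   # ten ordinary nodes

edges = []
for i, leaf in enumerate(leaves):
    edges.append((leaf, hubs[i % 3]))                    # each leaf follows two hubs...
    edges.append((leaf, hubs[(i + 1) % 3]))
    edges.append((leaf, leaves[(i + 1) % len(leaves)]))  # ...and one other leaf

adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

holds_opinion = {node: node in hubs for node in adjacency}

# Nodes that do NOT hold the opinion but see it in at least half of their neighbors.
fooled = [
    node for node, nbrs in adjacency.items()
    if not holds_opinion[node]
    and sum(holds_opinion[n] for n in nbrs) >= len(nbrs) / 2
]

actual_share = sum(holds_opinion.values()) / len(adjacency)
print(f"Share of nodes actually holding the opinion: {actual_share:.0%}")  # ~23%
print(f"Nodes seeing it as a local majority: {len(fooled)} of "
      f"{len(adjacency) - sum(holds_opinion.values())}")                   # 10 of 10
```

Scale the same structure up to a hashtag seeded by a few thousand automated accounts and the “share of voice” effect described above follows directly.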

And another chapter in the endless rewrite of “BRAVE NEW WORLD” drops into view.

5 thoughts on “Scariest Deepfake Of Them All”

  1. Here and Now says:

    “The Majority Illusion in Social Networks,” Kristina Lerman, Xiaoran Yan, and Xin-Zeng Wu, USC Information Sciences Institute (2015) https://arxiv.org/pdf/1506.03022v1.pdf
    [Note: “This work was supported in part by Air Force Office of Scientific Research (contract FA9550-10-1-0569), by the National Science Foundation (grant CIF-1217605) and by Defense Advanced Research Projects Agency (contract W911NF-12-1-0034). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”]

  2. Synthespian says:

    Inside the strange new world of being a deepfake actor (MIT Technology Review) https://www.technologyreview.com/2020/10/09/1009850/ai-deepfake-acting/
    MIT, “In Event of Moon Disaster” (complete film, 7:46; Nixon’s speech begins at 4:26) https://www.youtube.com/watch?v=LWLadJFI8Pk&feature=emb_logo
    See also Kim Jong-Un on the U.S. 2020 election https://www.youtube.com/watch?v=ERQlaJ_czHU&feature=emb_logo
    and Vladimir Putin https://www.youtube.com/watch?v=sbFHhpYU15w&feature=emb_logo

    • lɐǝɹ sᴉ ƃuᴉɥʇou says:

      “Seeing no longer believing: the manipulation of online images: Online images are not always what they seem, especially on social media” (Queensland University of Technology) https://www.eurekalert.org/pub_releases/2020-10/quot-snl102020.php
      “…When it is possible to alter past and present images, by methods like cloning, splicing, cropping, re-touching or re-sampling, we face the danger of a re-written history – a very Orwellian scenario.” Also: “Detection of false images is made harder by the number of visuals created daily – in excess of 3.2 billion photos and 720,000 hours of video – along with the speed at which they are produced, published, and shared,” said Dr Thomson, lead author of “Visual Mis/disinformation in Journalism and Public Communications: Current Verification Practices, Challenges, and Future Opportunities” https://www.tandfonline.com/doi/abs/10.1080/17512786.2020.1832139?journalCode=rjop20
      “In the last days of August, with the clock ticking down until Election Day, senior Republican officials pulled off a disinformation hat trick: Over the course of two short days, figures affiliated with the GOP published three different deceptively edited videos on social media.
      These were not deepfakes—hyperrealistic, synthetic audio or video that shows real people doing or saying things they never did or said. They were, rather, what are sometimes called “cheapfakes” or “shallowfakes”—synthetic media that doesn’t require any sophisticated technology to cobble together and is sometimes less convincing, and more easily detectable by experts, as a result.” (Lawfare) https://www.lawfareblog.com/thirty-six-hours-cheapfakes
      MIT: “Deepfakes are solvable—but don’t forget that “shallowfakes” are already pervasive” (3/25/19) https://www.technologyreview.com/2019/03/25/136460/deepfakes-shallowfakes-human-rights/
