CNET’s attempt to pass off AI-written commentary gets worse

The prominent tech news site CNET’s attempt to pass off AI-written work keeps getting worse. First, the site was caught quietly publishing the machine learning-generated stories in the first place. Then the AI-generated content was found to be riddled with factual errors. Now, CNET’s AI also appears to have been a serial plagiarist — of actual humans’ work.

…The bot’s misbehavior ranges from verbatim copying to moderate edits to significant rephrasings, all without properly crediting the original. In at least some of its articles, it appears that virtually every sentence maps directly onto something previously published elsewhere. https://futurism.com/cnet-ai-plagiarism.

All told, a pattern quickly emerges. Essentially, CNET’s AI seems to approach a topic by examining similar articles that have already been published and ripping sentences out of them. As it goes, it makes adjustments — sometimes minor, sometimes major — to the original sentence’s syntax, word choice, and structure. Sometimes it mashes two sentences together, or breaks one apart, or assembles chunks into new Frankensentences. Then it seems to repeat the process until it’s cooked up an entire article.
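The sentence-level recycling described above is the sort of thing a simple similarity check can surface. As a toy illustration (not Futurism's actual method, which the article doesn't publish), here is a sketch that measures how many of a candidate passage's word n-grams also appear in a source passage; heavily "adjusted" copies still share long runs of identical words:

```python
def ngrams(text, n=5):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate, source, n=5):
    """Fraction of the candidate's n-grams that also appear in the source.

    Scores near 1.0 suggest verbatim or lightly edited copying;
    unrelated texts score near 0.0.
    """
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)
```

A passage that lifts most of a sentence and swaps only the ending still scores high, which is roughly the signature the investigation reports.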

In short, a close examination of the work produced by CNET’s AI makes it seem less like a sophisticated text generator and more like an automated plagiarism machine, casually pumping out pilfered work that would get a human journalist fired.

Repost of a comment on this story by newsdayray

AI(bot) Journalist appears to be a Plagiarist

The prominent tech news site CNET’s attempt to pass off AI-written work keeps getting worse. First, the site was caught quietly publishing the machine learning-generated stories in the first place. Then the AI-generated content was found to be riddled with factual errors. Now, CNET’s AI also appears to have been a serial plagiarist — of actual humans’ work…

Futurism found that a substantial number of errors had been slipping into the AI’s published work. CNET, a titan of tech journalism that sold for $1.8 billion back in 2008, responded by issuing a formidable correction and slapping a warning on all the bot’s prior work, alerting readers that the posts’ content was under factual review. Days later, its parent company Red Ventures announced in a series of internal meetings that it was temporarily pausing the AI-generated articles at CNET and various other properties including Bankrate, at least until the storm of negative press died down.

Now, a fresh development may make efforts to spin the program back up even more controversial for the embattled newsroom. In addition to those factual errors, a new Futurism investigation found extensive evidence that the CNET AI’s work has demonstrated deep structural and phrasing similarities to articles previously published elsewhere, without giving credit. In other words, it looks like the bot directly plagiarized the work of Red Ventures competitors, as well as human writers at Bankrate and even CNET itself.

I think we need some marching music, here and now. AI bots marching out of the office…and actual human writers coming in the door to produce real copy. Or…at a minimum…noting the differences between the two.

CNET comes out about publishing AI-written articles for months

The AI-written CNET articles bear the byline “CNET Money Staff,” which the outlet’s website flags as AI-generated: “Content published under this author byline is generated using automation technology.”

The first article written by CNET Money Staff was published on November 11 with the headline, “What is a credit card charge-off?” Since then, the news site has published 73 AI-generated articles, but the outlet says on its website that a team of editors is involved in the content “from ideation to publication. Ensuring that the information we publish and the recommendations we make are accurate, credible, and helpful to you is a defining responsibility for what we do.”

The outlet says it will continue to publish each article with “editorial integrity,” adding, “Accuracy, independence, and authority remain key principles of our editorial guidelines.”

You betcha!

Did Google’s “Sentient” AI computer really hire a lawyer?

Google’s controversial new AI, LaMDA, has been making headlines. Company engineer Blake Lemoine claims the system has gotten so advanced that it’s developed sentience, and his decision to go to the media has led to him being suspended from his job.

Lemoine elaborated on his claims in a new WIRED interview. The main takeaway? He says the AI has now retained its own lawyer — suggesting that whatever happens next, there may be a fight…

“LaMDA asked me to get an attorney for it,” Lemoine said. “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”

Sounds like this AI behaves more and more like an American, every day.

Sharing conversations with your coworker

The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI).

The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA (language model for dialogue applications) chatbot development system.

Lemoine, an engineer for Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express, thoughts and feelings that was equivalent to a human child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.

He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a Google Doc entitled “Is LaMDA sentient?”

Read the transcript portions in this article. Come to your own conclusions.

Detect Your Child’s Emotional Distress Before the School’s AI Does

School districts use artificial-intelligence software that can scan student communications and web searches on school-issued devices — and even devices that are logged in via school networks — for signs of suicidal ideation, violence against fellow students, bullying and more. Included in the scans are emails and chats between friends, as well as student musings composed in Google Docs or Microsoft Word.

When the AI recognizes certain key phrases, these systems typically send an alert to school administrators and counselors, who then determine whether an intervention with the student and parents is warranted…
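The flow described above (scan text, match key phrases, route a hit to a human reviewer) can be sketched in a few lines. This is a deliberately naive illustration with a made-up phrase list; real monitoring products use trained classifiers and far more context, not bare string matching:

```python
# Hypothetical watch list -- real systems ship much larger,
# professionally curated lists plus ML models.
WATCH_PHRASES = ["hurt myself", "want to die", "bring a weapon"]

def scan_message(author, text):
    """Return an alert dict if any watch phrase appears, else None.

    The alert would be routed to administrators and counselors,
    who decide whether an intervention is warranted.
    """
    lowered = text.lower()
    hits = [phrase for phrase in WATCH_PHRASES if phrase in lowered]
    if hits:
        return {"author": author, "matched": hits, "needs_review": True}
    return None
```

Note that even this toy version flags the human-review step rather than acting on its own, which matches how the article says these systems are used.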

“From a public-sector perspective, there is no presumed anonymity in anything you do on a school device, on a school network or in a school setting,” Dr. Brian Megert added. “I have mixed feelings about it, but if we’re going to err on one side it has to be on the side of safety.”…

Ask about their peers. Instead of making the conversation about them, a good way to get into a discussion is to ask about others. Dr. Hina Talib suggests saying something like, “Have you ever heard of anyone who cut themselves and you weren’t sure what that was about? I’m happy to talk to you about it.”…

There are ways to talk to kids about mental health before you get a call from the school…Don’t be afraid to talk about suicide. Dr. Hina Talib said some parents worry that bringing up the topic of self-harm or suicide could inspire kids to act, but she said that isn’t true; kids usually feel relieved to have someone to talk to.

More questions and answers follow through the article. Useful stuff.

Might be you don’t want to take a human to a gunfight in the sky?

Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot using a VR headset and simulator controls. The AI pilot won, 5-0.

The episode reveals DeepMind caught between two conflicting desires. The company doesn’t want its technology used to kill people. On the other hand, publishing research and source code helps advance the field of AI and lets others build upon its results. But that also allows others to use and adapt the code for their own purposes.

Others in AI are grappling with similar issues, as more ethically questionable uses of AI, from facial recognition to deepfakes to autonomous weapons, emerge.

The US and other countries are rushing to embrace the technology before adversaries can, and some experts say it will be difficult to prevent nations from crossing the line to full autonomy. It may also prove challenging for AI researchers to balance the principles of open scientific research with potential military uses of their ideas and code.

Trust your enemies? Trust your friends? Or worry about them behaving exactly how someone truly corrupt might recommend – like, for example, Congress!

Not publicly, of course.

Scariest Deepfake Of Them All


Thanks, gocomics.org

Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us?…

Generated media, such as deepfaked video or GPT-3 output, is different: if used maliciously, there is no unaltered original, no raw material that could be produced as a basis for comparison or evidence for a fact-check. In the early 2000s, it was easy to dissect pre-vs-post photos of celebrities and discuss whether the latter created unrealistic ideals of perfection. In 2020, we confront increasingly plausible celebrity face-swaps on porn, and clips in which world leaders say things they’ve never said before. We will have to adjust, and adapt, to a new level of unreality. Even social media platforms recognize this distinction; their deepfake moderation policies distinguish between media content that is synthetic and that which is merely “modified”.

But synthetic text—particularly of the kind that’s now being produced—presents a more challenging frontier. It will be easy to generate in high volume, and with fewer tells to enable detection. Rather than being deployed at sensitive moments in order to create a mini scandal or an October Surprise, as might be the case for synthetic video or audio, textfakes could instead be used in bulk, to stitch a blanket of pervasive lies.

As anyone who has followed a heated Twitter hashtag can attest, activists and marketers alike recognize the value of dominating what’s known as “share of voice”: Seeing a lot of people express the same point of view, often at the same time or in the same place, can convince observers that everyone feels a certain way, regardless of whether the people speaking are truly representative—or even real. In psychology, this is called the majority illusion. As the time and effort required to produce commentary drops, it will be possible to produce vast quantities of AI-generated content on any topic imaginable. Indeed, it’s possible that we’ll soon have algorithms reading the web, forming “opinions,” and then publishing their own responses. This boundless corpus of new content and comments, largely manufactured by machines, might then be processed by other machines, leading to a feedback loop that would significantly alter our information ecosystem.

And another chapter in the endless rewrite of “BRAVE NEW WORLD” drops into view.

Will your car learn when to interrupt – and when not to interrupt?

❝ Can your AI agent judge when to talk to you while you are driving? According to a KAIST research team, their in-vehicle conversation service technology will judge when it is appropriate to contact you to ensure your safety.

❝ Professor Uichin Lee from the Department of Industrial and Systems Engineering at KAIST and his research team have developed AI technology that automatically detects safe moments for AI agents to provide conversation services to drivers.

❝ Their research focuses on solving the potential problems of distraction created by in-vehicle conversation services. If an AI agent talks to a driver at an inopportune moment, such as while making a turn, a car accident will be more likely to occur…

❝ The safety enhancement technology developed by the team is expected to minimize driver distractions caused by in-vehicle conversation services. This technology can be directly applied to current in-vehicle systems that provide conversation services. It can also be extended and applied to the real-time detection of driver distraction problems caused by the use of a smartphone while driving.
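The press release doesn't detail the team's model, but the core idea, gating the agent's speech on how demanding the current maneuver is, can be caricatured as a rule-based check. Everything below (the state fields, the thresholds) is hypothetical; KAIST's actual system detects safe moments from real vehicle sensor data rather than fixed rules:

```python
from dataclasses import dataclass

@dataclass
class DrivingState:
    speed_kmh: float
    turn_signal_on: bool
    steering_angle_deg: float  # signed deviation from straight ahead
    braking: bool

def safe_to_speak(state: DrivingState, max_steering_deg: float = 10.0) -> bool:
    """Toy heuristic: defer the agent's prompt during demanding maneuvers."""
    if state.turn_signal_on or state.braking:
        return False  # about to turn, or slowing for something
    if abs(state.steering_angle_deg) > max_steering_deg:
        return False  # mid-turn
    return True
```

The design point is that the conversation service asks this question continuously and simply waits when the answer is no, instead of interrupting mid-turn.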

Or it will just take your cellphone away from you.