Sharing conversations with your coworker

The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI).

The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA (language model for dialogue applications) chatbot development system.

Lemoine, an engineer in Google’s responsible AI organization, described the system he had been working on since last fall as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.

He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a Google Doc entitled “Is LaMDA sentient?”

Read the transcript portions in this article. Come to your own conclusions.

6 thoughts on “Sharing conversations with your coworker”

    • eideard says:

      Still have to chuckle. The folks in this discussion refer to “Genesis” as if it has always been cast in stone, historically…the only piece worth considering on THE TOPIC. Aside from all the bullshit spookiness of providing a guidebook to a narrow version of religion as well as the bullshit premises of all religions…I chuckle because I took a great review course a half-century ago on ALL the crap books of the bible offered for consideration as numero uno when the Committee in Charge of producing this silly opus had to cut, edit and otherwise bring it down to a manageable size, one of the requirements that was, in fact, a basic premise of its production. Some of the rejects were retained, of course, for inclusion further into the table of contents. Not always for the best of reasons.

  1. Mens rea says:

    “HAL In 2001: A Space Odyssey Explained” https://www.looper.com/163074/hal-in-2001-a-space-odyssey-explained/
    Stanley Kubrick (1969 interview) “…One of the things we were trying to convey in this part of the film is the reality of a world populated — as ours soon will be — by machine entities who have as much, or more, intelligence as human beings, and who have the same emotional potentialities in their personalities as human beings. We wanted to stimulate people to think what it would be like to share a planet with such creatures.
    “In the specific case of HAL, he had an acute emotional crisis because he could not accept evidence of his own fallibility. The idea of neurotic computers is not uncommon — most advanced computer theorists believe that once you have a computer which is more intelligent than man and capable of learning by experience, it’s inevitable that it will develop an equivalent range of emotional reactions — fear, love, hate, envy, etc. Such a machine could eventually become as incomprehensible as a human being, and could, of course, have a nervous breakdown — as HAL did in the film.” http://www.visual-memory.co.uk/amk/doc/0069.html

  2. Cassandra says:

    ● “…It seems only a matter of time before computers become smarter than people. This is one prediction we can be fairly confident about — because we’re seeing it already.” Émile P. Torres, a philosopher and historian of global catastrophic risk. https://www.washingtonpost.com/opinions/2022/08/31/artificial-intelligence-worst-case-scenario-extinction/
    ● “Some viewers of Stanley Kubrick’s film “2001: A Space Odyssey” have theorized that HAL, the computer genius turned villain of the spaceship Discovery, went mad during the Jupiter mission. However there is an alternative theory: that HAL acted rationally and logically, indeed with cold, calculating precision befitting a machine of his intelligence. This alternative theory will be presented here, with supporting evidence.” The Case For HAL’s Sanity by Clay Waldrop http://www.visual-memory.co.uk/amk/doc/0095.html
    ● “Did HAL Commit Murder?” (MIT press) https://thereader.mitpress.mit.edu/when-hal-kills-computer-ethics/
    ● “HAL’s Legacy: 2001’s Computer as Dream and Reality” (1996) “reflects upon science fiction’s most famous computer and explores the relationship between science fantasy and technological fact. The informative, nontechnical chapters written especially for this book describe many of the areas of computer science critical to the design of intelligent machines, discuss whether scientists in the 1960s were accurate about the prospects for advancement in their fields, and look at how HAL has influenced scientific research.”

    “This mission is too important for me to allow you to jeopardize it.” HAL 9000
