Sharing conversations with your coworker

The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI).

The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA (language model for dialogue applications) chatbot development system.

Lemoine, an engineer for Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.

He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a Google Doc entitled “Is LaMDA sentient?”

Read the transcript portions in this article. Come to your own conclusions.

4 thoughts on “Sharing conversations with your coworker”

    • eideard says:

      Still have to chuckle. The folks in this discussion refer to “Genesis” as if it has always been cast in stone, historically…the only piece worth considering on THE TOPIC. Aside from all the bullshit spookiness of providing a guidebook to a narrow version of religion, as well as the bullshit premises of all religions…I chuckle because I took a great review course a half-century ago on ALL the crap books of the Bible offered for consideration as numero uno when the Committee in Charge of producing this silly opus had to cut, edit and otherwise bring it down to a manageable size, a requirement that was, in fact, a basic premise of its production. Some of the rejects were retained, of course, for inclusion further into the table of contents. Not always for the best of reasons.

    • Mens rea says:

    “HAL In 2001: A Space Odyssey Explained” https://www.looper.com/163074/hal-in-2001-a-space-odyssey-explained/
    Stanley Kubrick (1969 interview): “…One of the things we were trying to convey in this part of the film is the reality of a world populated — as ours soon will be — by machine entities who have as much, or more, intelligence as human beings, and who have the same emotional potentialities in their personalities as human beings. We wanted to stimulate people to think what it would be like to share a planet with such creatures.
    “In the specific case of HAL, he had an acute emotional crisis because he could not accept evidence of his own fallibility. The idea of neurotic computers is not uncommon — most advanced computer theorists believe that once you have a computer which is more intelligent than man and capable of learning by experience, it’s inevitable that it will develop an equivalent range of emotional reactions — fear, love, hate, envy, etc. Such a machine could eventually become as incomprehensible as a human being, and could, of course, have a nervous breakdown — as HAL did in the film.” http://www.visual-memory.co.uk/amk/doc/0069.html
