Are you willing to sacrifice robots to save human lives?


Suppose she looks like, sounds like, your daughter? Your wife?

❝ A team led by Sari Nijssen…and Markus Paulus…has carried out a study to determine the degree to which people show concern for robots and behave towards them in accordance with moral principles.

❝ …The study set out to answer the following question: “Under what circumstances and to what extent would adults be willing to sacrifice robots to save human lives?” The participants were faced with a hypothetical moral dilemma: Would they be prepared to put a single individual at risk in order to save a group of injured persons? In the scenarios presented the intended sacrificial victim was either a human, a humanoid robot with an anthropomorphic physiognomy that had been humanized to various degrees or a robot that was clearly recognizable as a machine.

❝ The study revealed that the more the robot was humanized, the less likely participants were to sacrifice it. Scenarios that included priming stories in which the robot was depicted as a compassionate being or as a creature with its own perceptions, experiences and thoughts, were more likely to deter the study participants from sacrificing it in the interests of anonymous humans. Indeed, on being informed of the emotional qualities allegedly exhibited by the robot, many of the experimental subjects expressed a readiness to sacrifice the injured humans to spare the robot from harm. “The more the robot was depicted as human – and in particular the more feelings were attributed to the machine – the less our experimental subjects were inclined to sacrifice it,” says Paulus.

As robots become more like humans, develop redirective processes in their “brains”, communicate with us on a level comparable to or better than our meat-machine peers…how might you decide to act?

Autonomous robots can be bigots. Short-term payoffs work on machines, too.

❝ Showing prejudice towards others does not require a high level of cognitive ability and could easily be exhibited by artificially intelligent machines, new research has suggested.

Computer science and psychology experts from Cardiff University and MIT have shown that groups of autonomous machines could demonstrate prejudice by simply identifying, copying and learning this behaviour from one another…

❝ Though some types of computer algorithms have already exhibited prejudice, such as racism and sexism, based on learning from public records and other data generated by humans, this new work demonstrates the possibility of AI evolving prejudicial groups on their own…

❝ The findings involve individuals updating their prejudice levels by preferentially copying those that gain a higher short term payoff, meaning that these decisions do not necessarily require advanced cognitive abilities.
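The copying dynamic in the quoted finding can be sketched in a few lines. The model below is a toy of my own, not the researchers' actual simulation: agents play a donation game in which refusing out-group partners saves a small cost, then each agent imitates the prejudice level of a random peer who earned a strictly higher payoff.

```python
import random

def expected_payoffs(group, prejudice):
    """Expected per-round payoffs in a toy donation game.

    Every agent donates once to every other agent, but refuses an
    out-group partner with probability equal to its own prejudice
    level. Donating costs the donor 1; the recipient gains 3 from an
    in-group donor and 2 from an out-group one, so refusing out-group
    donations is a pure short-term saving for the refuser.
    """
    n = len(prejudice)
    payoff = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            out = group[i] != group[j]
            p_donate = 1.0 - (prejudice[i] if out else 0.0)
            payoff[i] -= p_donate                           # expected donor cost
            payoff[j] += (2.0 if out else 3.0) * p_donate   # expected benefit
    return payoff

def copy_step(group, prejudice, rng):
    """Each agent compares itself with one random peer and copies
    that peer's prejudice level only if the peer earned strictly more."""
    payoff = expected_payoffs(group, prejudice)
    new = list(prejudice)
    for i in range(len(prejudice)):
        k = rng.randrange(len(prejudice))
        if payoff[k] > payoff[i]:
            new[i] = prejudice[k]
    return new
```

Within a group, payoff rises linearly with prejudice, so payoff-chasing imitation pulls prejudice levels upward over repeated steps. No advanced cognition required, which is exactly the point.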

Your new self-driving car might not take you to the polls if it thinks you won’t vote for Trump or one of his lackeys.

Human Bankers Are Losing to Robots

❝ Something interesting happened in Swedish finance last quarter. The only big bank that managed to cut costs also happens to be behind one of the industry’s boldest plans to replace humans with automation.

❝ Nordea Bank AB, whose Chief Executive Officer Casper von Koskull says his industry might only have half its current human workforce a decade from now, is cutting 6,000 of those jobs. Von Koskull says the adjustment is the only way to stay competitive in the future, with automation and robots taking over from people in everything from asset management to answering calls from retail clients.

I imagine that Sweden’s labor culture will require, enable, a fair amount of retraining and education to meet this critical change in professional employment. Do I think anything comparable will be the response in the United States when similar job cuts take place?

That’s a rhetorical question, right?

Rolls-Royce developing cockroach robots

❝ Rolls-Royce…is developing tiny “cockroach” robots that can crawl inside aircraft engines to spot and fix problems.

The U.K. engineer said the miniature technology can improve the way maintenance is carried out by speeding up inspections and eliminating the need to remove an engine from an aircraft for repair work to take place…

❝ Sebastian de Rivaz, a research fellow at Harvard Institute, said the inspiration for their design came from the cockroach and that the robotic bugs had been in development for eight years.

He added that the next step was to mount cameras on the robots and scale them down to a 15-millimeter size…

Miniaturization isn’t even the hard part nowadays.

Canadian AI company uses humans to mentor robots


The one on the left is the CEO of the company

❝ A secretive Canadian startup called Kindred AI is teaching robots how to perform difficult dexterous tasks at superhuman speeds by pairing them with human “pilots” wearing virtual-reality headsets and holding motion-tracking controllers.

The technology offers a fascinating glimpse of how humans might work in synchronization with machines in the future, and it shows how tapping into human capabilities might amplify the capabilities of automated systems. For all the worry over robots and artificial intelligence eliminating jobs, there are plenty of things that machines still cannot do. The company demonstrated the hardware to MIT Technology Review last week, and says it plans to launch a product aimed at retailers in the coming months. The long-term ambitions are far grander. Kindred hopes that this human-assisted learning will foster a fundamentally new and more powerful kind of artificial intelligence…

❝ Kindred’s system uses several machine-learning algorithms, and tries to predict whether one of these would provide the desired outcome, such as grasping an item. If none seems to offer a high probability of success, it calls for human assistance. Most importantly, the algorithms learn from the actions of a human controller. To achieve this, the company uses a form of reinforcement learning, an approach that involves experimentation and strengthening behavior that leads to a particular goal…One person can also operate several robots at once…
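The control flow described above — try the learned policies first, escalate to a human pilot when none is confident enough, and keep the human’s action as training data — can be sketched roughly like this. Kindred’s actual system is proprietary; every name, threshold, and signature below is invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Policy:
    """A learned policy: maps a state to (action, confidence)."""
    name: str
    act: Callable[[str], Tuple[str, float]]

class HumanPilot:
    """Stand-in for a VR tele-operation interface."""
    def act(self, state: str) -> str:
        return f"human-grasp({state})"

@dataclass
class Arbiter:
    policies: List[Policy]
    pilot: HumanPilot
    threshold: float = 0.8
    demonstrations: List[Tuple[str, str]] = field(default_factory=list)

    def decide(self, state: str) -> str:
        # Ask every learned policy and keep the most confident proposal.
        best_action, best_conf = None, 0.0
        for p in self.policies:
            action, conf = p.act(state)
            if conf > best_conf:
                best_action, best_conf = action, conf
        if best_conf >= self.threshold:
            return best_action
        # No policy is confident: fall back to the human pilot and
        # record the demonstration as a learning signal for later
        # imitation / reinforcement learning updates.
        action = self.pilot.act(state)
        self.demonstrations.append((state, action))
        return action
```

One pilot could serve several such arbiters at once, since the human is only consulted on the cases the machines can’t yet handle — which matches the article’s claim that one person can operate several robots.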

❝ …The technical challenges involved with learning through human tele-operation are not insignificant. Sangbae Kim, an associate professor at MIT who is working on tele-operated humanoid robots, says mapping human control to machine action is incredibly complicated. “The first challenge is tracking human motion by attaching rigid links to the human skin. This is extremely difficult because we are endoskeleton animals,” Kim says. “A bigger challenge is to really understand all the details of decision-making steps in humans, most of which happen subconsciously.”

❝ “Our goal is to deconstruct cognition,” says Geordie Rose, cofounder and CEO of Kindred. “All living entities follow certain patterns of behavior and action. We’re trying to build machines that have the same kind of principles.”

Sooner or later, all this interesting stuff will come together in some sectors of the world’s economy – and a significant number of humans will be declared redundant. The good news is that pretty much every educated industrial society already has a diminishing population. Independent, self-conscious women with easy access to birth control are taking care of that.

Won’t make the transition period any easier for middle-age not-so-well-educated guys.

“Robots? We don’t serve their kind here!”

❝ For the time being, robots don’t need civil rights — they have a hard enough time walking, let alone marching — but the European Union doesn’t expect that to be the case forever. The European Parliament’s committee on legal affairs is considering a draft report, written by Luxembourg member Mady Delvaux, that would give legal status to “electronic persons.”

❝ Delvaux’s report explores the growing prevalence of autonomous machines in our daily lives, as well as who should be responsible for their actions. It’s not intended to be a science-fiction thought experiment…but rather an outline of what the European Commission should establish: what robots are, legally; the ethics of building them; and the liability of the companies that do so.

“Robots are not humans, and will never be humans,” Delvaux said. But she is recommending that they have a degree of personhood — much in the same way that corporations are legally regarded as persons — so that companies can be held accountable for the machines they create, and whatever actions those machines take on their own.

Robots can donate to Super-PACs!

❝ Delvaux’s report does suggest that the more autonomy a machine has, the more blame should fall with it over its human operators. But robots are generally only as smart as the data they learn from. It might be difficult to determine what a robot is responsible for, and what was because of its programming — a sort of robot version of the “nature versus nurture” argument.

Nice to see that some political beings, public political forums, have the foresight to consider potential problems before they arise. Of course, that can be taken to extremes.

But, in the United States? We’re lucky if Congress considers, say, flood protection before rising waters reach the top step.

How long will it be before you lose your job to a robot?


Reshoring. Illustration by Matt Blease

❝ How long will it be before you, too, lose your job to a computer? This question is taken up by a number of recent books, with titles that read like variations on a theme: “The Industries of the Future,” “The Future of the Professions,” “Inventing the Future.” Although the authors of these works are employed in disparate fields — law, finance, political theory — they arrive at more or less the same conclusion. How long? Not long.

❝ “Could another person learn to do your job by studying a detailed record of everything you’ve done in the past?” Martin Ford, a software developer, asks early on in “Rise of the Robots: Technology and the Threat of a Jobless Future”…“Or could someone become proficient by repeating the tasks you’ve already completed, in the way that a student might take practice tests to prepare for an exam? If so, then there’s a good chance that an algorithm may someday be able to learn to do much, or all, of your job.”

Later, Ford notes, “A computer doesn’t need to replicate the entire spectrum of your intellectual capability in order to displace you from your job; it only needs to do the specific things you are paid to do.”…

❝ The “threat of a jobless future” is, of course, an old one, almost as old as technology…Each new technology displaced a new cast of workers: first knitters, then farmers, then machinists. The world as we know it today is a product of these successive waves of displacement, and of the social and artistic movements they inspired: Romanticism, socialism, progressivism, Communism.

Meanwhile, the global economy kept growing, in large part because of the new machines. As one occupation vanished, another came into being. Employment migrated from farms and mills to factories and offices to cubicles and call centers.

❝ Economic history suggests that this basic pattern will continue, and that the jobs eliminated by Watson and his ilk will be balanced by those created in enterprises yet to be imagined — but not without a good deal of suffering. If nearly half the occupations in the U.S. are “potentially automatable,” and if this could play out within “a decade or two,” then we are looking at economic disruption on an unparalleled scale. Picture the entire Industrial Revolution compressed into the life span of a beagle.

And that’s assuming history repeats itself. What if it doesn’t? What if the jobs of the future are also potentially automatable?

RTFA. Sooner or later this will be key to a national election. In every nation in the industrial world. Probably every nation, industrial or otherwise. Mechanizing most agricultural work doesn’t even require AI.

Cynic that I am, I expect the United States to drift into a tidy, tightly-class-structured version of Dickens’s 19th Century industrial England. It will take Socialist-led Scandinavian nations or a later version of China’s morphing Communist-led economy to build inclusive models. American capitalism and American workers will probably continue to elect variations of Trump or Hillary depending more on ad campaigns, sloganeering, than competent economics.

They sent robots in to clean up Fukushima – it was too dangerous for humans. The robots died!

❝ The robots who went into Fukushima’s no-man’s land have not returned after radiation levels in the power plant proved too strong for their circuit boards to handle.

The clean-up continues almost five years to the day after the Fukushima Daiichi nuclear power station experienced three meltdowns when a tsunami crashed into the coastal power plant in 2011. The deathly high levels of radiation mean it’s impossible for humans to go into areas of the plant to dispose of or contain the radioactive materials. And it turns out, robots don’t fare much better either.

TEPCO and Toshiba developed a series of robots that were able to go underwater in the plant’s damaged cooling pools to remove the radioactive nuclear rods.

Five of the custom-built robots have been sent into the plant to work their magic. So far, none of them have returned. As soon as they get close to the reactors, their wiring becomes destroyed by the high levels of radioactivity and they are unable to move.

❝ “It is extremely difficult to access the inside of the nuclear plant,” said Naohiro Masuda, TEPCo’s head of decommissioning. “The biggest obstacle is the radiation.”

Yup. No one at TEPCo figured that out before they spent five years and lots of money on five custom robots.

Anyone surprised their safety systems failed during the tsunami?