Maybe you don’t want to take a human to a gunfight in the sky?

Last week, a technique popularized by DeepMind was adapted to control a simulated F-16 fighter jet in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot flying via a VR headset and simulator controls. The AI pilot won, 5-0.

The episode reveals DeepMind caught between two conflicting desires. The company doesn’t want its technology used to kill people. Yet publishing research and source code helps advance the field of AI and lets others build on its results, and that same openness allows anyone to adapt the code for their own purposes.

Others in AI are grappling with similar issues, as more ethically questionable uses of AI, from facial recognition to deepfakes to autonomous weapons, emerge.

The US and other countries are rushing to embrace the technology before adversaries can, and some experts say it will be difficult to prevent nations from crossing the line to full autonomy. It may also prove challenging for AI researchers to balance the principles of open scientific research with potential military uses of their ideas and code.

Trust your enemies? Trust your friends? Or worry about them behaving exactly how someone truly corrupt might recommend – like, for example, Congress!

Not publicly, of course.

Scariest Deepfake Of Them All


Thanks, gocomics.org

Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us?…

Generated media, such as deepfaked video or GPT-3 output, poses a distinct problem: if used maliciously, there is no unaltered original, no raw material that could be produced as a basis for comparison or evidence for a fact-check. In the early 2000s, it was easy to dissect before-and-after photos of celebrities and discuss whether the latter created unrealistic ideals of perfection. In 2020, we confront increasingly plausible celebrity face-swaps in porn, and clips in which world leaders say things they’ve never said before. We will have to adjust, and adapt, to a new level of unreality. Even social media platforms recognize this distinction; their deepfake moderation policies distinguish between media content that is synthetic and that which is merely “modified”.

But synthetic text—particularly of the kind that’s now being produced—presents a more challenging frontier. It will be easy to generate in high volume, and with fewer tells to enable detection. Rather than being deployed at sensitive moments in order to create a mini scandal or an October Surprise, as might be the case for synthetic video or audio, textfakes could instead be used in bulk, to stitch a blanket of pervasive lies.

As anyone who has followed a heated Twitter hashtag can attest, activists and marketers alike recognize the value of dominating what’s known as “share of voice”: Seeing a lot of people express the same point of view, often at the same time or in the same place, can convince observers that everyone feels a certain way, regardless of whether the people speaking are truly representative—or even real. In psychology, this is called the majority illusion.

As the time and effort required to produce commentary drops, it will be possible to produce vast quantities of AI-generated content on any topic imaginable. Indeed, it’s possible that we’ll soon have algorithms reading the web, forming “opinions,” and then publishing their own responses. This boundless corpus of new content and comments, largely manufactured by machines, might then be processed by other machines, leading to a feedback loop that would significantly alter our information ecosystem.
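The “majority illusion” the excerpt mentions is easy to demonstrate with a toy network. In the sketch below (the star graph and counts are invented for illustration), one well-connected account holds a view, yet every peripheral observer sees 100 percent of its neighbors expressing it:

```python
# Toy demonstration of the "majority illusion" in a star network:
# only the well-connected hub holds an opinion, yet every peripheral
# node sees all of its neighbors holding it.

def perceived_prevalence(adjacency, active):
    """For each node, the fraction of its neighbors that are 'active'."""
    return {
        node: sum(active[n] for n in neighbors) / len(neighbors)
        for node, neighbors in adjacency.items()
    }

# Star graph: hub 0 connected to leaves 1..10; only the hub holds the view.
n_leaves = 10
adjacency = {0: list(range(1, n_leaves + 1))}
for leaf in range(1, n_leaves + 1):
    adjacency[leaf] = [0]
active = {node: (node == 0) for node in adjacency}

actual = sum(active.values()) / len(active)   # roughly 9% actually hold the view
perceived = perceived_prevalence(adjacency, active)
# perceived[leaf] is 1.0 for every leaf: each leaf's entire neighborhood
# (just the hub) expresses the view, so "everyone seems to be saying it".
```

Bulk textfakes make the hub positions cheap to manufacture: a handful of high-visibility automated accounts can produce this skewed perception at scale.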

And another chapter in the endless rewrite of “BRAVE NEW WORLD” drops into view.

Will your car learn when to interrupt – and when not to interrupt?

❝ Can your AI agent judge when to talk to you while you are driving? According to a KAIST research team, their in-vehicle conversation service technology will judge when it is appropriate to contact you to ensure your safety.

❝ Professor Uichin Lee from the Department of Industrial and Systems Engineering at KAIST and his research team have developed AI technology that automatically detects safe moments for AI agents to provide conversation services to drivers.

❝ Their research focuses on solving the potential problems of distraction created by in-vehicle conversation services. If an AI agent talks to a driver at an inopportune moment, such as while making a turn, an accident becomes more likely…

❝ The safety enhancement technology developed by the team is expected to minimize driver distractions caused by in-vehicle conversation services. This technology can be directly applied to current in-vehicle systems that provide conversation services. It can also be extended and applied to the real-time detection of driver distraction problems caused by the use of a smartphone while driving.
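KAIST’s actual detector is learned from driving data; purely as a sketch of the underlying idea, one can imagine a gate that holds the agent’s utterances until vehicle signals suggest a low-workload moment. All signal names and thresholds below are invented assumptions, not the team’s method:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_kmh: float
    steering_angle_deg: float  # deviation from straight ahead (hypothetical signal)
    turn_signal_on: bool
    braking: bool

def safe_to_speak(state: VehicleState) -> bool:
    """Heuristic gate: True only when no high-workload maneuver seems underway."""
    if state.turn_signal_on or state.braking:
        return False                        # turning or slowing down: stay quiet
    if abs(state.steering_angle_deg) > 15:
        return False                        # mid-maneuver: stay quiet
    return True

# Queue messages and deliver them only at detected safe moments.
pending = ["Fuel is low.", "You have a new voicemail."]
cruising = VehicleState(speed_kmh=90, steering_angle_deg=2.0,
                        turn_signal_on=False, braking=False)
turning = VehicleState(speed_kmh=30, steering_angle_deg=40.0,
                       turn_signal_on=True, braking=True)

delivered = pending.pop(0) if safe_to_speak(cruising) else None
```

The same gate, applied in reverse, is how the extension the team describes could flag risky smartphone use: if the moment is unsafe for the agent to talk, it is also unsafe for the driver to be looking at a screen.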

Or it will just take your cellphone away from you.

Computing Genius And WW2 Hero, Alan Turing, Will Be On U.K.’s 50-Pound Note

❝ Alan Turing, the father of computer science and artificial intelligence who broke Adolf Hitler’s Enigma code system in World War II — but who died an outcast because of his homosexuality — will be featured on the Bank of England’s new 50-pound note…

❝ Turing was just 41 when he died from poisoning in 1954, a death that was deemed a suicide. For decades, his status as a giant in mathematics was largely unknown, thanks to the secrecy around his computer research and the social taboos about his sexuality. His story became more widely known after the release of the 2014 movie The Imitation Game.

The Turing commemoration is the U.K. government’s latest public reevaluation of the genius who was convicted of homosexuality under “gross indecency” laws in 1952. By the time he died, Turing had been stripped of his security clearance and was forced to undergo a “chemical castration” regime of estrogen shots to avoid serving a two-year prison term.

The treatment of non-conformity by “polite” society over the centuries has ranged from indecent to inhumane and cruel. Those whose sexuality offended both political pop culture and religion received the worst of it. Admitting narrow-minded stupidity of this sort is typically accompanied by “not on my watch” hand-wringing, and not much more. Still, the admission is part of the process of sorting out the dross remaining in our cultural brain.

McDonald’s trying out AI-powered menu boards

❝ Savory bacon, sweet Donut Sticks and a $5 value meal contributed to better-than-expected U.S. results for McDonald’s Corp. despite negative guest counts in the first quarter…

Going forward this year, McDonald’s said it is pivoting its focus to overall restaurant operations, especially at the drive-thru. McDonald’s said it has deployed menu boards with automated suggestive selling at 700 restaurants through Dynamic Yield. In late March, the Chicago-based chain purchased the decision logic technology company, which uses artificial intelligence to automate the upselling of menu items based on time of day, trending items and weather…
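Dynamic Yield’s actual decision logic is proprietary. Purely as a sketch of what “automated suggestive selling” driven by time of day, weather, and trending items might look like, here is a toy scorer; the menu items, features, and weights are all invented for illustration:

```python
# Toy "suggestive selling" scorer: rank upsell candidates by simple
# context features. Items and weights are invented, not McDonald's logic.
def suggest(hour, temperature_c, trending):
    scores = {"Iced Coffee": 0.0, "Hot Coffee": 0.0,
              "Donut Sticks": 0.0, "Fries": 0.0}
    if hour < 11:                          # breakfast daypart
        scores["Donut Sticks"] += 3.0
    if temperature_c > 20:                 # warm weather favors cold drinks
        scores["Iced Coffee"] += 2.0
    else:
        scores["Hot Coffee"] += 2.0
    for item in trending:                  # one point per trending mention
        if item in scores:
            scores[item] += 1.0
    return max(scores, key=scores.get)

suggest(8, 25, [])                 # warm morning: breakfast item wins
suggest(15, 5, ["Fries", "Fries", "Fries"])  # cold afternoon, fries trending
```

A real system would learn these weights from sales data per restaurant; the point is only that the upsell decision reduces to scoring candidates against cheap contextual signals.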

❝ When asked by an analyst where McDonald’s stands on adding a plant-based dish, CEO Steve Easterbrook said his culinary teams are “paying close attention to it.”

“The key for us is to identify the sustaining consumer trends,” he said.

I zeroed in on the plant-based consideration because of the dynamic IPO this past week for BEYOND MEAT. Their CEO emphasized that the simplest advantage they will offer consumers – besides a healthier planet and healthier consumers – is lower prices than meat. I haven’t any confidence in McDonald’s using lower wholesale commodity prices to reduce the tab for consumers. But I look forward to taking advantage of the difference in the cut-throat world of supermarkets.

And I wonder at the intelligence of fast-food retailers who exploit the salty-or-sweet tastebud brain-switch to bump their profits, yet don’t consider saving money for consumers to be equally compelling.

Elon Musk said what?

❝ In many ways, Tesla — Elon Musk’s lightning rod of a car company — is the perfect allegory for modern Silicon Valley. The ongoing psychodrama of personalities drowns out the amazing technical achievements that are happening all around us…

As usual, this has been a real “Dr. Jekyll and Mr. Hyde” kind of week for Tesla. It had a disastrous earnings report card, and Elon keeps creating all the wrong sorts of headlines. But in the middle of this maelstrom, the company announced a new chip that is going to eventually become the brain for their electric car. This chip is not just any chip — it will be able to make sense of a growing number of sensors that allow the car to become better and better at assisted (if not fully automated) driving…

❝ Tesla’s module is based on two AI chips, each one made of a CPU, a GPU, and deep-learning accelerators. The module can deliver 144 trillion operations per second, making it capable of processing data from numerous sensors and other sources and running deep neural network algorithms. Ian Riches, an analyst with Strategy Analytics, told EE Times that this is “effectively the most powerful computer yet fitted to a production vehicle.” And Tesla is going to make a next-generation module that will be more powerful and will consume a lot less power.

As usual, Om Malik provides more depth, analysis and understanding than most of his peers. Please, RTFA, gather in another chunk of insight into Elon Musk’s apparently endless journey to reinvent the automobile along with any other software and hardware he bumps into in his young life.

An AI model showed Flint how to find lead pipes. What do you think they did after that?

❝ …Volunteer computer scientists, with some funding from Google, designed a machine-learning model to help predict which homes were likely to have lead pipes. The artificial intelligence was supposed to help the City dig only where pipes were likely to need replacement. Through 2017, the plan was working. Workers inspected 8,833 homes, and of those, 6,228 homes had their pipes replaced, a 70 percent hit rate.

Heading into 2018, the City signed a big, national engineering firm, AECOM, to a $5 million contract to “accelerate” the program, holding a buoyant community meeting to herald the arrival of the cavalry in Flint…

❝ As more and more people had their pipes evaluated in 2018, fewer and fewer inspections were finding lead pipes…The new contractor hasn’t been efficiently locating those pipes: As of mid-December 2018, 10,531 properties had been explored and only 1,567 of those digs found lead pipes to replace. That’s a lead-pipe hit rate of just 15 percent, far below the 2017 mark…

❝ There are reasons for the slowdown. AECOM discarded the machine-learning model’s predictions, which had guided excavations. And facing political pressure from some residents, Mayor Weaver demanded that the firm dig across the city’s wards and in every house on selected blocks, rather than picking out the homes likely to have lead because of age, property type, or other characteristics that could be correlated with the pipes.
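The Flint model itself was trained on the city’s parcel and inspection records; as an illustrative sketch of the prioritization idea it embodied (dig where predicted risk is highest), here is a toy risk scorer with invented features, weights, and data:

```python
# Toy prioritization: score parcels by invented risk factors and dig the
# highest-scoring first. This is NOT the Flint team's actual model.
def lead_risk_score(parcel):
    score = 0.0
    if parcel["year_built"] < 1950:
        score += 2.0        # older service lines are more often lead
    if parcel["is_residential"]:
        score += 0.5
    if parcel["prior_lead_on_block"]:
        score += 1.5        # lead pipes tend to cluster spatially by block
    return score

parcels = [
    {"id": 1, "year_built": 1938, "is_residential": True,  "prior_lead_on_block": True},
    {"id": 2, "year_built": 1985, "is_residential": True,  "prior_lead_on_block": False},
    {"id": 3, "year_built": 1946, "is_residential": False, "prior_lead_on_block": True},
]
dig_order = sorted(parcels, key=lead_risk_score, reverse=True)
# Excavation proceeds in order of predicted risk rather than block by block.
```

Digging every house on selected blocks, as the article describes, amounts to discarding this ordering entirely, which is consistent with the hit-rate collapse from 70 percent to 15 percent.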

After a multimillion-dollar investment in project management, thousands of people in Flint still have homes with lead pipes, when the previous program would likely have already found and replaced them.

Life in America seems about as predictable as ever. Doesn’t have to be. Still, don’t get smug about analyzing the causes. Just fix it!