An AI model showed Flint how to find lead pipes. What do you think they did after that?

❝ …Volunteer computer scientists, with some funding from Google, designed a machine-learning model to help predict which homes were likely to have lead pipes. The artificial intelligence was supposed to help the City dig only where pipes were likely to need replacement. Through 2017, the plan was working. Workers inspected 8,833 homes, and of those, 6,228 homes had their pipes replaced — a 70 percent rate of accuracy.

Heading into 2018, the City signed a big, national engineering firm, AECOM, to a $5 million contract to “accelerate” the program, holding a buoyant community meeting to herald the arrival of the cavalry in Flint…

❝ As more and more people had their pipes evaluated in 2018, fewer and fewer inspections were finding lead pipes…The new contractor hasn’t been efficiently locating those pipes: As of mid-December 2018, 10,531 properties had been explored and only 1,567 of those digs found lead pipes to replace. That’s a lead-pipe hit rate of just 15 percent, far below the 2017 mark…

❝ There are reasons for the slowdown. AECOM discarded the machine-learning model’s predictions, which had guided excavations. And facing political pressure from some residents, Mayor Weaver demanded that the firm dig across the city’s wards and in every house on selected blocks, rather than picking out the homes likely to have lead because of age, property type, or other characteristics that could be correlated with the pipes.

After a multimillion-dollar investment in project management, thousands of people in Flint still have homes with lead pipes, when the previous program would likely have already found and replaced them.
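The two hit rates quoted above are just simple ratios of the article's figures. A quick back-of-the-envelope check (this is plain arithmetic on the reported numbers, not the machine-learning model itself):

```python
# Figures quoted in the article above.
inspected_2017, lead_found_2017 = 8833, 6228
inspected_2018, lead_found_2018 = 10531, 1567

# Hit rate: fraction of digs that actually found lead pipes.
rate_2017 = lead_found_2017 / inspected_2017
rate_2018 = lead_found_2018 / inspected_2018

print(f"2017 hit rate: {rate_2017:.1%}")  # 2017 hit rate: 70.5%
print(f"2018 hit rate: {rate_2018:.1%}")  # 2018 hit rate: 14.9%
```

The numbers check out: roughly 70 percent with the model guiding excavations, roughly 15 percent after it was discarded.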

Life in America seems about as predictable as ever. It doesn't have to be. Still, don't get smug analyzing the causes. Just fix it!

Taylor Swift used facial recognition software to detect stalkers at Rose Bowl concert

❝ The periphery of a Taylor Swift concert is as thought out as the show she presents on stage. Beyond the traditional merchandise stands, there are often dedicated selfie-staging points and staff distributing light-up bracelets. When Swift performed at the Los Angeles Rose Bowl venue on 18 May, fans could watch rehearsal clips at a special kiosk.

What they didn’t know was that a facial recognition camera inside the structure was taking their photographs and cross-referencing the images with a database held in Nashville of hundreds of Swift’s known stalkers, according to a Rolling Stone report…

❝ While some have raised privacy concerns over the ownership and storage of the images, concerts are technically private events, and Swift has no obligation to notify ticket holders that they may be surveilled. The Guardian has contacted Swift’s representatives for comment.

Swift has a number of known stalkers. In September, she got a restraining order against Eric Swarbrick, who had been harassing her with letters threatening rape and murder since September 2016. In April, 38-year-old Julius Sandrock was arrested outside her Beverly Hills home. He was wearing a mask and had a knife in his car, and told police that he had driven from Colorado to visit the singer. Swift took out a restraining order against him in May.

RTFA and don’t blame the tech. Tech and science are generally lifestyle-neutral. Use is what determines the social value – or detriment to society – at any specific time. Even that can change with societal norms. Checking out customers for known creeps and villains seems pretty useful and proper to me.

AI system learned to master Rubik’s Cube in 44 hours

Meet DeepCube, an artificially intelligent system that’s as good at solving the Rubik’s Cube as the best human master solvers. Incredibly, the system learned to dominate the classic 3D puzzle in just 44 hours and without any human intervention.

“A generally intelligent agent must be able to teach itself how to solve problems in complex domains with minimal human supervision,” write the authors of the new paper, published online at the arXiv preprint server. Indeed, if we’re ever going to achieve a general, human-like machine intelligence, we’ll have to develop systems that can learn and then apply those learnings to real-world applications…

On the surface, the Rubik’s Cube may seem simple, but it offers a staggering number of possibilities. A 3x3x3 cube features a total “state space” of 43,252,003,274,489,856,000 combinations (that’s 43 quintillion), but only one of those states matters — that magic moment when all six sides of the cube are the same color. Many different strategies, or algorithms, exist for solving the cube. It took its inventor, Erno Rubik, an entire month to devise the first of these algorithms…
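That 43-quintillion figure isn't arbitrary: it falls out of the standard counting argument for the 3x3x3 cube (arrange 8 corners and 12 edges, then discount the orientation and parity constraints). A minimal sketch:

```python
from math import factorial

# Corners: 8 positions; each has 3 orientations, but the last
# orientation is forced by the others, so only 3**7 are free.
corners = factorial(8) * 3**7

# Edges: 12 positions; each has 2 orientations, with the last
# forced (hence 2**11).
edges = factorial(12) * 2**11

# Corner and edge permutations must share the same parity,
# which removes a final factor of 2.
states = corners * edges // 2

print(states)  # 43252003274489856000
```

Same 43 quintillion the article cites — of which exactly one state is the solved cube.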

RTFA. Interesting stuff – and you may as well get used to the topic whether you’re ready or not. Your next job interview might be with an entity built on systems like this. 🙂

AI diagnosis to make medical decisions is just about here

AP Photo/M. Spencer Green

❝ The US Food and Drug Administration approved this week the first software powered by artificial intelligence that replaces the need for a specialized doctor to interpret medical imagery.

The software is called IDx-DR, made by diagnostic AI startup IDx, and specifically analyzes images of the retina to detect whether a person with diabetes has a complication from the disease called diabetic retinopathy…

❝ Diabetic retinopathy is a complication of diabetes where blood sugar damages the back of the eye, according to the FDA, and is the main cause of the loss of vision for those with diabetes…

By allowing this software to be marketed in the US, the FDA is setting a bar for the accuracy needed for AI to take over for human doctors. When validating that the AI system worked, the FDA used images from 900 US patients. The software correctly detected more-than-mild diabetic retinopathy 87.4% of the time, and correctly identified patients who did not have more-than-mild retinopathy 89.5% of the time. Accuracy for humans naturally varies from doctor to doctor, but for the FDA to approve the technology it “must provide for more effective treatment or diagnosis of a life-threatening or irreversibly debilitating disease or condition.”
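The two rates the FDA cites are, in effect, the sensitivity and specificity of the screening test. A minimal sketch of how they're computed from a confusion matrix — note the counts below are invented for illustration (chosen to roughly reproduce the quoted rates), NOT the actual IDx-DR trial data:

```python
# Hypothetical confusion matrix for a binary screening test.
# These counts are invented for illustration; they are NOT the
# real IDx-DR trial numbers.
true_positives, false_negatives = 187, 27   # patients with the condition
true_negatives, false_positives = 613, 72   # patients without it

# Sensitivity: of patients who have more-than-mild retinopathy,
# what fraction does the software flag?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of patients who don't, what fraction does it clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity: {sensitivity:.1%}")  # sensitivity: 87.4%
print(f"specificity: {specificity:.1%}")  # specificity: 89.5%
```

Both numbers matter: a test can look impressive on one axis while flooding clinics with false alarms (or missed cases) on the other.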

No doubt a predictable percentage of Americans will demonstrate fear of this technology to a greater degree than any other educated nation. Part of that is education and, more important, political processes, electoral politics, and religious folderol coming together to work harder than anywhere else – to keep citizens from modernizing their lives and thinking. Why, we might even question authority.

Robot Fear Index stands at 30.9

❝ …Consumer adoption of artificial intelligence and robotics is already quite broad, and yet, fear of robots is also pervasive. We fear that they’ll replace our jobs or somehow overthrow us; and to be blunt, those fears are valid. That said, our 2017 survey indicates acceptance for these technologies continues to grow. Our most recent Robot Fear Index value of 30.9 (vs. 31.5 in late 2016) suggests that public perception of robots is essentially unchanged over the last year despite increased awareness of artificial intelligence, robotics, and the potential impact of these technologies. Notably, the related increase in media coverage of these issues does not seem to be causing the rise in fear that we might expect. In fact, the slight year-over-year decline in our index value suggests slightly less fear of automation technologies.

❝ We believe that consumer awareness of robotics is closely correlated to the rise of domestic robots within households. Domestic robots are classified as robot vacuum cleaners, mops and lawn mowers, and over the next 10 years we believe this category will be one of the fastest growing robot markets in the world.

Glance through the whole report. Designed as a quarterly evaluation for investors – that, in itself, speaks volumes about the acceptability of robots and artificial intelligence growing in our society.

Personally, I think Gene Munster leads one of the sharpest American investment firms dealing with advanced technology.

AI could increase global GDP by $15.7 trillion by 2030

❝ Much has already been made about how artificial intelligence is going to transform our lives, ranging from visions of the future in which robots make humans obsolete to utopias in which technology solves intractable problems and frees up people to pursue their passions. Consultancy firm PwC ran the numbers, and came up with a relatively rosy scenario with regards to the impact AI will have on the global economy. By 2030, global GDP could increase by 14%, or $15.7 trillion, because of AI…

❝ Almost half of these economic gains will accrue to China, where AI is projected to give the economy a 26% boost over the next 13 years—the equivalent of an extra $7 trillion in GDP. North America can expect a 14.5% increase in GDP, worth $3.7 trillion…

❝ A large part of the forecast GDP gains — $6.6 trillion—are expected to come from increased labor productivity, with businesses automating processes or using AI to assist their existing workforce. This suggests PwC believes AI will generate a productivity boost that’s bigger than previous technological breakthroughs—despite recent advancements, global productivity growth is very low and economists are puzzled about how to get out of this trap.

The rest of the projected economic growth would come from increased consumer demand for personalized and AI-enhanced products and services. The sectors that have the most to gain on this front are health care, financial services, and the auto industry.
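A quick sanity check on the PwC figures quoted above (plain arithmetic on the article's numbers; all values in trillions of USD):

```python
# Figures from the PwC projection quoted above (trillions of USD).
total_gain = 15.7
china = 7.0            # "almost half" of the gains
north_america = 3.7
productivity = 6.6     # from increased labor productivity

# The remainder is attributed to consumer demand for
# AI-enhanced products and services.
consumer = total_gain - productivity

print(f"China's share: {china / total_gain:.0%}")        # China's share: 45%
print(f"Consumer-demand share: ${consumer:.1f} trillion")
```

"Almost half" checks out at roughly 45%, and the consumer-demand remainder works out to about $9.1 trillion.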

Given appropriate political smarts, I’m in the Utopian crowd. Contemporary global economics says there’s a chance. Even in a nation silly enough to elect a fake president.

Conjecture on where AI is going doesn’t make it so. Yet.

❝ Artificial intelligence is grossly misunderstood, but you can’t really blame the public. However well-intentioned, we’re up against multiple coordinated efforts to distort the field, whether that’s technologist doomsaying or Singularity marketing. And, as is often the case in overhyped and/or distorted science, there aren’t really people on the inside doing the work of bullshit-calling.

❝ DARPAtv published the video below a few weeks ago and it’s worth your 15 minutes. It’s a rare clear-eyed look into the guts of AI that’s also simple enough for most non-technical folks to follow. It’s dry, but IRL computer science is pretty dry. The key point is that this stuff is still really hard, and many of the things that we imagine AI to be capable of or imminently capable of are in fact looming challenges in the field—problems just now being formulated.

Click it and watch it.

20 percent of the world’s vacuum cleaners are now robots

❝ Robot vacuums may have once seemed an eccentricity, but they now represent a non-trivial portion of the overall vacuum market – 20 percent worldwide, according to iRobot CEO and co-founder Colin Angle…And Roomba makes up 70 percent of that market, giving iRobot a commanding lead in the space.

Exactly how many robots does that translate to? Over 14 million Roombas sold to date, Angle said – a steady business for a consumer product that starts at a price point a bit higher than your average human-powered home cleaning hardware.

❝ iRobot’s lead in the market should be easily defensible, Angle says, because the company has a long lead in terms of working on the problem, and because it’s focused on consumer home cleaning products exclusively. iRobot’s become even more focused of late, since the company recently divested itself of its defense and security robotics division and is now focused entirely on the home consumer space.

How long will we continue with individual operating systems for each home electronic assistant as artificial intelligence becomes more commanding? A deliberate choice, that word. It seems easier to have a centralized house intelligence run home-based devices. Encrypted and secure from both private and government hackers, of course.

Pentagon research in artificial intelligence moves us closer to robot wars

Human-robot strike teams, autonomous land mines, and covert swarms of minuscule robotic spies: the US Department of Defense’s idea of the future of war seems like a sci-fi movie.

In a report that dreams of new ways to destroy adversaries and protect American assets in equal portions, the DOD’s science research division cements the idea that artificial intelligence and autonomous robotic systems will be a crucial part of the nation’s ongoing defense strategy.

The US military already uses a host of robotic systems on the battlefield, from reconnaissance and attack drones to bomb disposal robots. However, these are all remotely piloted systems, meaning a human has a high level of control over the machine’s actions at all times.

The new DOD report sees tactical advantages from humans and purely self-driven machines working together in the field. In one scenario, a swarm of autonomous drones would flock above a combat zone to jam enemy communications, provide real-time surveillance of the area, and autonomously fire against the enemy.

Might be satisfying to some to presume our robots are only killing their robots. Kind of like believing that hacker techniques are only used by the NSA, FBI, etc., to spy on other folks in other countries.

Wishful thinking.