EOW Reading List: AI Diagnosticians, Musk’s Warnings, Mnuchin’s Radar

The flood of good reporting and writing on artificial intelligence, automation, and their implications continues. At Xconomy’s End of Work Reading List, we select and highlight some of the best of it.

In this edition, we’re pointing to recent long-form pieces in The New Yorker and Vanity Fair, and the recent flap over Treasury Secretary Steve Mnuchin’s contrarian view of AI’s impact on employment.

—In one of the best pieces I’ve read on this topic, oncologist and author Siddhartha Mukherjee dives deep into AI diagnostics in medicine. The lengthy story in The New Yorker focuses on efforts to teach machines to “know how”—the author’s shorthand for a subconscious coming together of learned facts and experience—as opposed to just “know that”—a reference to the existing rules-based diagnostic algorithms that do things like pattern-matching of waveforms in electrocardiograms.

Mukherjee interviews computer scientist Sebastian Thrun, who provides one of the better descriptions of how a neural network actually works:

“Imagine an old-fashioned program to identify a dog. A software engineer would write a thousand if-then-else statements: if it has ears, and a snout, and has hair, and is not a rat … and so forth, ad infinitum. But that’s not how a child learns to identify a dog, of course. At first, she learns by seeing dogs and being told that they are dogs. She makes mistakes, and corrects herself. She thinks that a wolf is a dog—but is told that it belongs to an altogether different category. And so she shifts her understanding bit by bit: this is ‘dog,’ that is ‘wolf.’ The machine-learning algorithm, like the child, pulls information from a training set that has been classified. Here’s a dog, and here’s not a dog. It then extracts features from one set versus another. And, by testing itself against hundreds and thousands of classified images, it begins to create its own way to recognize a dog—again, the way a child does.”

Thrun’s team at Stanford created a machine learning system to identify skin cancer by training it on thousands of images of dermatologic ailments that had been classified by dermatologists. The resulting machine outperformed the dermatologists.
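Thrun’s contrast between “a thousand if-then-else statements” and learning from classified examples can be sketched in a few lines of code. This is a toy illustration only, not Thrun’s system: the feature vectors and “dog”/“not dog” labels are invented, and a simple nearest-centroid learner stands in for a real neural network.

```python
# Hand-written rules vs. learning from labeled examples, in the spirit
# of Thrun's dog example. All features and labels here are invented.

def rule_based_is_dog(features):
    # The engineer's approach: every criterion is anticipated and
    # hard-coded ("if it has ears, and a snout, and has hair, and is
    # not a rat ... and so forth").
    has_ears, has_snout, has_fur, rat_sized = features
    return bool(has_ears and has_snout and has_fur and not rat_sized)

def train_nearest_centroid(examples):
    # The learning approach: extract a summary of each class (here,
    # the average feature vector) from a classified training set,
    # instead of hand-coding rules.
    centroids = {}
    for label in {lab for _, lab in examples}:
        vecs = [vec for vec, lab in examples if lab == label]
        centroids[label] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return centroids

def predict(centroids, vec):
    # Classify a new example by its distance to the nearest centroid.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))

# Classified training set: (ears, snout, fur, rat-sized) -> label
training = [
    ([1, 1, 1, 0], "dog"),
    ([1, 1, 1, 0], "dog"),
    ([1, 1, 1, 1], "not dog"),  # a rat: furry and snouted, but not a dog
    ([0, 0, 1, 0], "not dog"),
]
model = train_nearest_centroid(training)
print(predict(model, [1, 1, 1, 0]))  # prints "dog"
```

As in the article, the learner never receives explicit rules; it “extracts features from one set versus another” and corrects itself against the labels, which is why the rat example matters. Thrun’s actual skin-cancer system replaced this toy centroid with a deep neural network trained on thousands of classified images.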

It’s important to note, and Thrun does, that no one is quite sure how the child or the machine learning system actually converts this process to the ability to recognize a dog, or a cancerous lesion. This is the so-called “black box” problem.

Geoffrey Hinton, the machine learning pioneer and University of Toronto computer scientist, tells Mukherjee it’s a difficult problem to solve, but he thinks we can live with it. Hinton offers this explanation:

“Imagine pitting a baseball player against a physicist in a contest to determine where a ball might land. The baseball player, who’s thrown a ball over and over again a million times, might not know any equations but knows exactly how high the ball will rise, the velocity it will reach, and where it will come down to the ground. The physicist can write equations to determine the same thing. But, ultimately, both come to the identical point.”

One thing machine diagnosticians are still a long way from doing is answering the big question: Why? Why does the irregular border of a lesion indicate cancer? That requires physician-researchers with the experience of the baseball player and the knowledge of the physicist, as Mukherjee writes, using “diagnostic acumen to understand the pathophysiology of disease.” This “daily, spontaneous intimacy between implicit and explicit forms of knowledge” is often the path to novel treatments. Mukherjee wonders whether there is a risk of losing that path if more clinical diagnosis is handled by machines.

—There’s a lot of rehashing in this profile of Elon Musk, and specifically his efforts to avert the end of humanity at the hands of an artificial superintelligence, in Vanity Fair by Maureen Dowd. The most valuable thing I took away from it was the degree to which there’s a perceived movement of opinion among AI practitioners. Speaking about the AI safety gathering at Asilomar in January, nearly three years after he first voiced his AI fears in 2014, Musk said: “There’s no question that the top technologists in Silicon Valley now take AI far more seriously—that they do acknowledge it as a risk. I’m not sure that they yet appreciate the significance of the risk.”

There’s also this quote from Peter Thiel on why AI has caused such strong divisions among technologists and the public at large: “There’s some sense in which the AI question encapsulates all of people’s hopes and fears about the computer age. I think people’s intuitions do just really break down when they’re pushed to these limits because we’ve never dealt with entities that are smarter than humans on this planet.”

One problem that Dowd might have pointed out after her visits with many of the world’s highest-profile AI experts and opinionators: They’re all men. The only women in the story are Musk’s wives, Ayn Rand, and Dowd.

—Count Treasury Secretary Steven Mnuchin among those who believe large-scale economic disruption from artificial intelligence ain’t happening any time soon. In an interview with Axios, the former Goldman Sachs executive and Trump appointee had this to say on the topic: “I think that is so far in the future. In terms of artificial intelligence taking over American jobs, I think we’re like so far away from that—that, uh, not even on my radar screen. … I think it’s 50 or 100 more years.”

Goldman Sachs Research released a brief video in December that describes A.I. as “The Apex Technology of The Information Age.” Heath Terry, head of Internet research at Goldman Sachs Research, says “when we look at the impact of artificial intelligence and machine learning, we think you’ll see hundreds of billions of dollars in cost savings and revenue opportunities in the next decade alone. … Artificial intelligence is something that we see ultimately impacting every industry.”

Larry Summers, Treasury Secretary under President Bill Clinton, wrote in The Washington Post:

“Mnuchin’s comment about the lack of impact of technology on jobs is to economics approximately what global climate change denial is to atmospheric science or what creationism is to biology. Yes, you can debate whether technological change is in net good. I certainly believe it is. And you can debate what the job creation effects will be relative to the job destruction effects. I think this is much less clear, given the downward trends in adult employment, especially for men over the past generation. But I do not understand how anyone could reach the conclusion that all the action with technology is half a century away.”

—Paul Allen, speaking at the University of Washington last month, turned his attention to the economic impacts of artificial intelligence, in which he’s deeply involved through his Allen Institute for Artificial Intelligence.

“What’s next? You can’t talk about computer science and artificial intelligence without thinking about the impact progress there will have on some jobs.

“But to me the promise vastly outweighs the risks. In the same way that while the invention of the airplane negatively affected the railroad industry, it opens much wider doors to human progress. As more intelligent computer assistance comes into being, it will amplify human progress.”

Photo credit: “Evenings at the microscope” via Flickr by Thomas Fisher Rare Book Library, UofT, cropped and used under a Creative Commons license.

Benjamin Romano is editor of Xconomy Seattle. Email him at bromano [at] xconomy.com. Follow @bromano
