Artificial Intelligence and Medicine

When teaching the machine, the team had to take some care with the images. Thrun hoped that people could one day simply submit smartphone pictures of their worrisome lesions, and that meant that the system had to be undaunted by a wide range of angles and lighting conditions. But, he recalled, “In some pictures, the melanomas had been marked with yellow disks. We had to crop them out—otherwise, we might teach the computer to pick out a yellow disk as a sign of cancer.”
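To make the cropping point concrete, here is a minimal sketch of the kind of cleanup step involved. It is not the team's actual pipeline: the color threshold is a made-up heuristic, the function name and file paths are invented, and it masks the marker pixels rather than cropping the frame. But it illustrates why you scrub annotations like yellow disks out of training images, so the model cannot learn the annotation itself as a shortcut for "melanoma."

```python
# Toy illustration (not the study's actual code): scrub yellow marker
# pixels from a lesion photo before it enters the training set.
# The color threshold below is an assumed heuristic, not a published value.
import numpy as np
from PIL import Image

def scrub_yellow_markers(in_path: str, out_path: str) -> None:
    img = np.asarray(Image.open(in_path).convert("RGB")).astype(np.int16)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Pixels that are strongly red and green but weakly blue read as "yellow".
    marker = (r > 180) & (g > 180) & (b < 120)
    cleaned = img.copy()
    if marker.any() and not marker.all():
        # Replace marker pixels with the median non-marker color, so no trace
        # of the annotation survives for the model to latch onto.
        cleaned[marker] = np.median(img[~marker], axis=0)
    Image.fromarray(cleaned.astype(np.uint8)).save(out_path)

# Example (placeholder paths):
# scrub_yellow_markers("lesion_0001.jpg", "lesion_0001_clean.jpg")
```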

It was an old conundrum: a century ago, the German public became entranced by Clever Hans, a horse that could supposedly add and subtract, and would relay the answer by tapping its hoof. As it turns out, Clever Hans was actually sensing its handler’s bearing. As the horse’s hoof-taps approached the correct answer, the handler’s expression and posture relaxed. The animal’s neural network had not learned arithmetic; it had learned to detect changes in human body language. “That’s the bizarre thing about neural networks,” Thrun said. “You cannot tell what they are picking up. They are like black boxes whose inner workings are mysterious.”

The “black box” problem is endemic in deep learning. The system isn’t guided by an explicit store of medical knowledge and a list of diagnostic rules; it has effectively taught itself to differentiate moles from melanomas by making vast numbers of internal adjustments—something analogous to strengthening and weakening synaptic connections in the brain. Exactly how did it determine that a lesion was a melanoma? We can’t know, and it can’t tell us.
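For readers who want to see what "vast numbers of internal adjustments" means in miniature, here is a hedged sketch: a tiny two-layer network fit by gradient descent on fabricated data. The data, architecture, and learning rate are all invented for illustration; the only point is that what training produces is arrays of numeric weights, not anything resembling a diagnostic rule you could read off.

```python
# Toy illustration of "learning by adjusting internal weights": a tiny
# one-hidden-layer network trained by gradient descent on made-up data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                  # fake image features
y = (X[:, :4].sum(axis=1) > 0).astype(float)   # fake "melanoma" labels

W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()           # predicted probability
    # Backpropagate the cross-entropy loss and nudge every weight slightly.
    dz2 = (p - y)[:, None] / len(y)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad                    # in-place update

print("training accuracy:", ((p > 0.5) == y).mean())
print("W1 is just numbers, not rules:", W1[:2, :4])
```

After training, the classifier works, but the "knowledge" lives in W1 and W2 as thousands of plain numbers; there is no step where the network writes down why a given input tipped it toward one label or the other.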

More here, from Siddhartha Mukherjee in the New Yorker (h/t Azra Raza).

And, in the same vein, here are some thoughts on terrorism.


7 thoughts on “Artificial Intelligence and Medicine”

  1. If you’re interested in AI, check out the Creative Destruction Lab at the Rotman School. IIRC, Rotman just got a ~$7 million donation for more AI work recently. This has been on my mind lately: a PhD engineering student in my research methods seminar just added me to his dissertation committee; he’s working on an AI for engineering ‘mega-projects’.

  2. Thanks for the link. I admit I’m puzzled by how my comments (on some comments) on terrorism are in “the same vein” as the AI problems that Mukherjee discusses. Unless you mean to suggest that my intelligence is wholly artificial, in which case I suppose I can’t protest too much.

    • Hahah! In my mind, both issues come down to a knowledge problem that we don’t quite have a handle on yet…

      • Hrm, I hadn’t thought of the terrorism business as a knowledge problem. The thrust of my post was more that ‘what is terrorism?’ is not a question about which fruitful inquiry or debate is possible, at least without stipulating one of several possible rival conceptions of ‘terrorism.’ Such stipulation would, of course, not be a very good reason to think that the stipulated definition is the correct one, but the animating thought of the post was that there is no single correct definition because ‘terrorism’ is a rhetorically charged word used in a variety of inconsistent ways by different people for different purposes.

        If that’s right, then the only knowledge problem would be that some people think there is knowledge to be had here when there isn’t. There may well be reasons to think that there really is some answer to the question ‘what is terrorism?’ that doesn’t involve a great deal of stipulation. But even if there is, answering that question seems less important than determining what, among the acts typically labeled as ‘terrorism,’ is justified or unjustified, and why.

        I suppose there might be a knowledge problem lurking there, but it looks to me like it’s just a problem about elaborating an adequate moral view, and I’m not inclined to think that no minimally adequate views about this exist. Of course probably no theory is perfectly adequate, and of course for any theory there will be some people who disagree with it. But I wouldn’t call those knowledge problems. I’m not sure they even always amount to problems.

        But that’s a lot of my not being sure about stuff, so I’ll stop there.

      • “The thrust of my post was more that ‘what is terrorism?’ is not a question about which fruitful inquiry or debate is possible, at least without stipulating one of several possible rival conceptions of ‘terrorism.’”

        That sounds like a knowledge problem to me, but I studied anthropology in college, so…

      • Looks like I have a knowledge problem about knowledge problems. No wonder life is so challenging.
