“Elon Musk Is Wrong about Artificial Intelligence and the Precautionary Principle” – Reason.com via @nuzzel
(disclaimer: I haven’t dug any deeper than reading the above linked article.)
Apparently Elon Musk is afraid enough of the potential downsides of artificial intelligence to declare it “a rare case where we should be proactive in regulation instead of reactive. By the time we are reactive in AI regulation, it is too late.”
Like literally everything else, AI does have downsides. And, like anything that touches so many areas of our lives, those downsides could be significant (even catastrophic). But the most likely outcome of regulating AI is that people already investing in that space (e.g., Elon Musk) would get to set the rules of competition in the biggest markets. (A more insidious possible outcome is that those who would use AI for ill would be left alone.) To me this looks like a classic Bootleggers and Baptists story.
Another facet…
http://www.cnn.com/2017/07/18/politics/paul-selva-gary-peters-autonomous-weapons-killer-robots/index.html
In the same vein: https://m.youtube.com/watch?v=_Wlsd9mljiU
The precautionary principle /definitely/ should be used for AI specifically built for killing people! On the other end of the spectrum, AI for self-driving cars (also potentially lethal) could almost certainly be handled well enough with fairly simple liability rules, in line with how we deal with human error.