An article at the NY Times opens:
The medical profession has an ethic: First, do no harm.
Silicon Valley has an ethos: Build it first and ask for forgiveness later.
Now, in the wake of fake news and other troubles at tech companies, universities that helped produce some of Silicon Valley’s top technologists are hustling to bring a more medicine-like morality to computer science.
Far be it from me to tell people to avoid spending time considering ethics. But something about all this seems a bit silly to me. The “experts” are trying to teach students the consequences of complex interactions between services that haven’t yet been created and a world that doesn’t yet exist.
My inner cynic sees this “ethics of tech” movement as a push to turn software engineers into nanny-state social engineers. “First, do no harm” is not the right standard for tech (which isn’t to say “do harm” is). Before 2016, Facebook and Twitter were praised for their positive contributions to the Arab Spring. After our dumb election, we educated Western elites threw up our hands and said, “it’s an ethical breach to reduce our power!” Freedom is messy, and “do no harm” privileges the status quo.
The root problem is that computer services interact with the public in complex ways. Recognizing this is important, and an ethics class ought to grapple with that complexity and the resulting uncertainty about how our decisions (including design decisions) affect the well-being of others. My worry is that a sensible call to think about these issues will be co-opted by power-hungry bureaucrats. (There really ought to be ethics classes on the “Dark Side of Ethical Judgments of Others and Education Policy”.)
I don’t doubt that the motivations of the people involved are basically good, but I’m deeply skeptical of their ability to do much more than offer retrospective analyses of particular events after those events have become less relevant. History is important, but let’s not trick ourselves into thinking the lessons of 2016 Facebook will apply neatly to whatever network we’re on in 2026.
It hardly seems reasonable to insist that Facebook be put in charge of what we get to see. Some argue that’s already the world we live in, and they aren’t completely wrong. But that authority is still determined by the voluntary individual decisions of users with access to plenty of alternatives. People aren’t always as thoughtful and deliberate as I’d like, but that doesn’t mean I should step in and be a thoughtful and deliberate Orwellian figure on their behalf.