There are a lot of people talking about why AI needs ethics. In fact, more generally, there's been a lot of chat about why technology needs ethics for quite a few years now, as if technologists work in some kind of moral vacuum, which is a pretty demeaning way of referring to people who are very often quite aware of things like Pugwash and Asilomar. Computer-science-related technology is one of the most interdisciplinary of all science and technology disciplines, and practitioners are exposed to many application domains and sub-cultures. At one extreme, people have worked in cybernetics for six or more decades (e.g. the ICA's Cybernetic Serendipity show, curated in London in 1968). At another extreme, computing and cybernetic artefacts have been embedded in creative works for around a century (e.g. Karel Capek's play R.U.R., from 1920).

Much of the fictional work based on speculation about science has a strong moral element. This is often used as a simplifying approach to plot or even character: a way to see how a technology (as yet unrealised) might play out in another (possibly future) world or society. Thus recreational and societal control through drugs in Huxley's Brave New World, robot detectives in Asimov's The Caves of Steel, and genetically engineered aristocrats in Frank Herbert's The Eyes of Heisenberg are all doubly genre fiction (dystopia, 'tec, and costume drama respectively, as well as, of course, SciFi).
However, these shorthands come with powerful baggage: that of the morality tale. Since Aesop, storytellers have wanted to carry a message (don't feed the troll or it will feed on you; don't fly too close to the sun or your wings will fall off; don't imbue the device with intelligence but no soul, or it may turn on you). These tales also build in solutions (it takes years of disciplined training to become a dragonrider of Pern; with great power comes great responsibility; a robot may not harm humanity or, through inaction, allow humanity to come to harm). Indeed, popular TV series from the last four decades (from Star Trek to Firefly) are often based directly on earlier morality plays (sometimes twice removed from religious allegories, via something slightly less old such as the Western movie genre).
Technogeeks are highly aware of this. They do not operate in a vacuum. Critics confuse the behaviour of large capitalist organisations with the interests or motives of the people who work on the tech. This is an error. Society needs more fixing than the individual crafts do. Much more.
So on to machine learning. We hear oft-repeated negative stories of the misuse of "AI": from medical diagnosis deployed with insufficient testing, through self-driving cars that require humans to stop driving (or cycling, or even just walking across the road: off their trolley!), on to the clichéd algorithms used for sentencing in courts, which embed the biases of previous decisions, including wrong decisions, and so reinforce or amplify society's discrimination. Note what I just said: the problem wasn't in the algorithm, it was in the data, taken from society. The problem wasn't in the code, it was in the examples set by humans. We trained the machine to make immoral decisions; we didn't program it that way ("I'm not bad, I'm just drawn that way", as Jessica Rabbit memorably said).
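To make that concrete, here is a minimal sketch on entirely made-up data (assuming numpy and scikit-learn; the feature names and the 0.8 bias weight are illustrative assumptions, not any real system). A perfectly ordinary learner picks up a prejudice that lives in its training labels, not in its code:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One genuinely predictive feature ("risk") and one that ought to be
# irrelevant ("group", standing in for a protected attribute).
risk = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical decisions: mostly driven by risk, but past decision-makers
# were systematically harsher on group == 1. That prejudice is now data.
labels = (risk + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, labels)

# The learned weight on "group" is large and positive: the model has
# faithfully encoded the human bias, not invented one of its own.
print(dict(zip(["risk", "group"], model.coef_[0])))

The logistic regression here is as vanilla as it gets; the large positive weight it learns on the protected attribute is inherited wholesale from the biased historical decisions it was shown.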
But as with the Zeroth Law, we can learn from the machines. We could (in the words of Pat Cadigan in Synners) change for the machines. We can quite easily devise algorithms that explain the basis for their output. Most ML is not a black box, contrary to a lot of the popular press, and much of it is amenable to counterfactual reasoning even when it is somewhat dark in there. We can use this to reverse-engineer the bias in society, and to train people to reduce their unconscious prejudice by revealing its false basis, and possibly by socialising that evidence too.
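Here is what such a counterfactual probe might look like in the same toy setting (again a sketch on synthetic data, not a recipe for any real deployed system): hold everything about an individual fixed, flip only the protected attribute, and watch the output move.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Rebuild the biased toy model from the previous sketch.
rng = np.random.default_rng(0)
n = 10_000
risk = rng.normal(size=n)
group = rng.integers(0, 2, size=n)
labels = (risk + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5
model = LogisticRegression().fit(np.column_stack([risk, group]), labels)

# Two query points, identical in every respect except group membership.
person = np.array([[0.0, 0.0]])          # group 0
counterfactual = np.array([[0.0, 1.0]])  # the same person, group 1

p0 = model.predict_proba(person)[0, 1]
p1 = model.predict_proba(counterfactual)[0, 1]

# Any gap here is attributable to the protected attribute alone:
# the "false basis" of the decision, made visible.
print(f"P(adverse | group 0) = {p0:.2f}")
print(f"P(adverse | group 1) = {p1:.2f}")

Any gap between the two probabilities is attributable to the protected attribute alone: precisely the false basis made visible, and exactly the sort of evidence one could socialise.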
We can become more than human if we choose this mutual approach.