Wednesday, November 23, 2016

Can Machines Become Moral? Don Howard

Citation (APA): Howard, D. (2016). Can Machines Become Moral? [Kindle Android version]. Retrieved from

Introductory section
Highlight (yellow) - Location 2
Can Machines Become Moral? By Don Howard
Highlight (yellow) - Location 7
question is a hard
Highlight (yellow) - Location 7
it is beset by many confusions
Highlight (yellow) - Location 7
sorting out some of the different ways
Highlight (yellow) - Location 9
For some, the question is whether artificial agents, especially humanoid robots, like Commander Data in Star Trek: The Next Generation, will someday become sophisticated enough and enough like humans in morally relevant ways so as to be accorded equal moral standing with humans.
Note - Location 11
Highlight (yellow) - Location 12
holding the robot morally responsible for its actions
Highlight (yellow) - Location 13
the right answer is, “We don’t know.”
Highlight (yellow) - Location 13
Only time will tell
Highlight (yellow) - Location 13
convince ourselves that it is wise and good— or necessary— to choose to include
Highlight (yellow) - Location 15
If movies and television were a reliable guide to evolving sentiment, it would seem that many of us may even be eager to embrace our mechanical cousins as part of the clan, as witness recent films like Ex Machina and Chappie or the TV series Humans.
Note - Location 17
Highlight (yellow) - Location 18
Why we are drawn to such a future
Highlight (yellow) - Location 18
Why are we so enchanted by an ideal of mechanical, physical, and moral perfection unattainable by flesh-and-blood beings?
Note - Location 19
Highlight (yellow) - Location 19
cultural anxiety
Highlight (yellow) - Location 22
Some pose the question “Can machines become moral?” so that they may themselves answer immediately, “No,”
Highlight (yellow) - Location 23
robots cannot be intelligent or conscious.
Note - Location 23
Highlight (yellow) - Location 26
robots cannot understand and express emotions.
Note - Location 26
Highlight (yellow) - Location 28
Start with consciousness.
Note - Location 28
Highlight (yellow) - Location 29
often point to John Searle’s “Chinese room” argument
Highlight (yellow) - Location 31
Imagine yourself, ignorant of Chinese, locked in a room with a vast set of rule books, written in your native language, that enable you to take questions posed to you in Chinese and then, following those rules, to “answer” the questions in Chinese in a way that leaves native speakers of Chinese thinking that you understand their language. In fact, you don’t have a clue about Chinese and are merely following the rules. For Searle, a robot or a computer outfitted with advanced artificial intelligence would be just like the person in the box,
Note - Location 35
Highlight (yellow) - Location 36
Criticisms of the Chinese room
Highlight (yellow) - Location 36
we humans don’t really understand that which we call “consciousness” even in ourselves, how do we know it isn’t just the very competence that such a machine possesses?
Note - Location 37
Highlight (yellow) - Location 38
Surely, the reasoning goes, my graphing calculator doesn’t understand the mathematics
Note - Location 39
Highlight (yellow) - Location 43
artificial neural nets are remarkably simple. Modeled explicitly on the neuronal structure of the human brain,
Highlight (yellow) - Location 44
they consist of neuron-like nodes and dendrite-like connections among the nodes, with weights on each connection like activation potentials at synapses. But, in practice, such neural nets are remarkably powerful learning machines that can master tasks like pattern recognition that defy easy solution via conventional, rule-based computational techniques.
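The highlighted description of neural nets (neuron-like nodes, dendrite-like connections, weights acting like activation potentials) can be made concrete with a minimal sketch. This is my illustration, not the book's: a single artificial neuron whose connection weights are adjusted from examples until it recognizes a toy pattern (logical AND). Real deep-learning systems stack many layers of such units, but the ingredients are the same.

```python
import random

def step(x):
    """Fire (1) if the weighted input exceeds the threshold, else stay silent (0)."""
    return 1 if x > 0 else 0

# Toy pattern-recognition task: learn logical AND from labeled examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]  # connection weights
bias = 0.0                                           # threshold term
rate = 0.1                                           # learning rate

# Perceptron learning rule: nudge each connection weight toward
# whatever would have produced the correct output.
for _ in range(50):  # repeated passes over the examples
    for (x1, x2), target in examples:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        err = target - out
        weights[0] += rate * err * x1
        weights[1] += rate * err * x2
        bias += rate * err

for (x1, x2), target in examples:
    assert step(weights[0] * x1 + weights[1] * x2 + bias) == target
print("learned the AND pattern from examples")
```

Nothing here is rule-based in the top-down sense: the correct behavior is never written down as code, it emerges from repeated weight updates, which is what makes such nets "learning machines."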
Note - Location 47
Highlight (yellow) - Location 50
what can or cannot be done in the domain of artificial intelligence is always an empirical question,
Note - Location 50
Highlight (yellow) - Location 51
Confident a priori assertions about what science and engineering cannot achieve have a history of turning out to be wrong, as with Auguste Comte’s bold claim in the 1830s that science could never reveal the internal chemical constitution of the sun and other heavenly bodies, a claim he made at just the time when scientists like Fraunhofer, Foucault, Kirchhoff, and Bunsen were pioneering the use of spectrographic analysis for precisely that task.
Note - Location 54
Highlight (yellow) - Location 55
it would be unwise to put a bet on any claim that “computers will never be able to do X.”
Highlight (yellow) - Location 57
technology forecasting, especially in this arena, is a risky business.
Highlight (yellow) - Location 57
don’t be surprised if in a few years claims about computers not possessing an emotional capability begin to look as silly as the once-commonplace claims back in the 1960s and 1970s that computers would never master natural language.
Note - Location 59
Highlight (yellow) - Location 59
Some thinkers
Note - Location 59
Highlight (yellow) - Location 60
think that it is critically necessary that we begin to outfit smart robots with at least rudimentary moral capacities
Highlight (yellow) - Location 62
Two arenas
Highlight (yellow) - Location 62
ethics for self-driving cars
Highlight (yellow) - Location 62
ethics for autonomous weapons.
Highlight (yellow) - Location 64
we will soon be delegating morally fraught decisions
Highlight (yellow) - Location 66
we can produce robot warriors that are “more moral” than the average human combatant.
Highlight (yellow) - Location 67
I spend a lot of time thinking about the rapid expansion of health care robotics,
Highlight (yellow) - Location 68
patient-assist robot that might soon be helping my ninety-year-old mother into the bathtub
Highlight (yellow) - Location 70
In the book Moral Machines, Wendell Wallach and Colin Allen argue
Note - Location 71
Highlight (yellow) - Location 74
The question for them is not whether but how.
Highlight (yellow) - Location 75
two different approaches to programming machine morality: the “top-down”
Highlight (yellow) - Location 75
Top-down approaches to programming machine morality combine conventional, decision-tree programming methods with Kantian, deontological or rule-based ethical frameworks and consequentialist or utilitarian, greatest-good-for-the-greatest-number frameworks (often associated with Jeremy Bentham and John Stuart Mill).
Note - Location 77
Highlight (yellow) - Location 77
one writes an ethical rule set into the machine code and adds a sub-routine for carrying out cost-benefit calculations.
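The highlighted recipe, an ethical rule set in the machine code plus a subroutine for cost-benefit calculations, can be sketched as follows. This is a hypothetical illustration of the top-down idea, not code from Wallach and Allen or Arkin; the rules, action fields, and scores are all invented for the example.

```python
# Hypothetical "top-down" moral governor: deontological rules filter the
# candidate actions, then a consequentialist subroutine ranks the survivors.

RULES = [
    # Illustrative stand-ins; a real system would encode something like
    # the Law of Armed Conflict or the Geneva Conventions.
    lambda action: not action["harms_noncombatant"],
    lambda action: action["proportional"],
]

def permissible(action):
    """Deontological filter: every rule in the rule set must hold."""
    return all(rule(action) for rule in RULES)

def utility(action):
    """Cost-benefit subroutine: expected benefit minus expected cost."""
    return action["benefit"] - action["cost"]

def choose(actions):
    """Pick the highest-utility action among those the rules permit."""
    allowed = [a for a in actions if permissible(a)]
    if not allowed:
        return None  # no permissible option: refrain from acting
    return max(allowed, key=utility)

candidates = [
    {"name": "strike", "harms_noncombatant": True,  "proportional": True, "benefit": 9, "cost": 2},
    {"name": "warn",   "harms_noncombatant": False, "proportional": True, "benefit": 4, "cost": 1},
    {"name": "wait",   "harms_noncombatant": False, "proportional": True, "benefit": 2, "cost": 0},
]
print(choose(candidates)["name"])  # "warn": best net benefit among rule-compliant options
```

Even this toy version exhibits the objections raised later in the text: the rule set cannot anticipate every contingency (any situation not captured by the action's fields is invisible to it), and realistic utility estimates quickly become intractable.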
Highlight (yellow) - Location 78
the approach endorsed by Arkin in his book Governing Lethal Behavior in Autonomous Robots.
Highlight (yellow) - Location 80
the rule set consists of the International Law of Armed Conflict and International Humanitarian Law (basically, the Geneva Conventions),
Highlight (yellow) - Location 82
correct objection to this approach
Highlight (yellow) - Location 82
one cannot write a rule to cover every contingency;
Highlight (yellow) - Location 83
second, consequentialist calculations quickly become intractable in all but the simplest cases
Highlight (yellow) - Location 84
Some critics also fault the inflexibility of the deontological
Highlight (yellow) - Location 86
shortcomings of the top-down approach might be compensated by a bottom-up approach
Highlight (yellow) - Location 87
deep-learning techniques
Highlight (yellow) - Location 87
to make the moral machines into moral learners
Highlight (yellow) - Location 88
This approach borrows from the virtue ethics tradition
Highlight (yellow) - Location 89
the idea that moral character consists of a set of virtues understood as settled habits or dispositions to act, shaped by a life-long process of moral learning and self-cultivation.
Note - Location 90
Highlight (yellow) - Location 91
moral competence of such machines is black-boxed and inherently unpredictable.
Highlight (yellow) - Location 92
human moral agents is similarly
Highlight (yellow) - Location 94
machine should be able to justify its actions by reconstructing,
Note - Location 95
Highlight (yellow) - Location 96
reply that human moral agents normally do not act on the basis of explicit, algorithmic or syllogistic moral
Highlight (yellow) - Location 97
offering ex-post-facto rationalizations
Highlight (yellow) - Location 99
Future efforts in programming machine morality will surely combine top-down and bottom-up approaches.
Note - Location 100