The Hanson-Yudkowsky AI-Foom Debate (Location 3)
Note: Two approaches to prediction: analogy vs. model. Hanson is an evolutionary (Hayekian) economist; Eliezer is an engineer who loves optimizing designs. The former is more of an empiricist, the latter more of a rationalist. AI: concentrated or diffuse emergence? Two possible AIs: hand-coded and brain emulation. Predictions: betting expert (analyst of precedents) vs. domain expert (analyst of specifics). Scale effects vs. specific abilities. Inequalities: the most efficient way to contain them is to allow a certain permeability of information (leaks).
Sometimes a set of tool types will stumble into conditions especially favorable for mutual improvement. (Location 92)
Such favorable storms of mutual improvement usually run out quickly, however, and in all of human history no more than three storms have had a large and sustained enough impact to substantially change world economic growth rates. (Location 93)
Note: HUMAN HISTORY: THREE STORMS
Imagine you are a venture capitalist reviewing a proposed business plan. UberTool Corp has identified a candidate set of mutually aiding tools, and plans to spend millions pushing those tools through a mutual improvement storm. (Location 96)
UberTool does not plan to stop their closed self-improvement process until they are in a position to suddenly burst out and basically “take over the world.” (Location 101)
Now given such enormous potential gains, even a very tiny probability that UberTool could do what they planned might entice you to invest in them. (Location 103)
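Note: a minimal toy model (my own sketch, not from the debate) of the dynamic at stake in the UberTool scenario: each round the tool set improves itself in proportion to its current capability, but the stock of easy remaining improvements shrinks. Whether the loop compounds into a sustained storm or fizzles depends on which effect wins. All function names and parameter values below are illustrative assumptions.

```python
# Toy model (illustrative only): does a mutual-improvement loop compound
# into a "storm" or run out quickly?  Parameters are assumptions made up
# for this sketch, not taken from the debate.

def run_improvement_loop(capability=1.0, feedback=0.3, decay=0.8, rounds=30):
    """Each round the tools improve themselves: the gain is proportional to
    current capability (feedback) times a 'remaining low-hanging fruit'
    factor that shrinks geometrically (decay)."""
    history = [capability]
    fruit = 1.0
    for _ in range(rounds):
        capability += feedback * capability * fruit
        fruit *= decay
        history.append(capability)
    return history

if __name__ == "__main__":
    fizzle = run_improvement_loop(feedback=0.3, decay=0.8)  # low-hanging fruit runs out
    storm = run_improvement_loop(feedback=0.3, decay=1.0)   # returns never diminish
    print(f"fizzle: capability after 30 rounds ~{fizzle[-1]:.1f}")
    print(f"storm:  capability after 30 rounds ~{storm[-1]:.1f}")
```

Under these assumed numbers the same feedback strength gives a modest, self-limiting gain when returns diminish and an explosive one when they do not, which is roughly the disagreement the rest of the notes turn on.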
Yesterday I described UberTool, an imaginary company planning to push a set of tools through a mutual-improvement process; (Location 122)
He understood not just that computer tools were especially open to mutual improvement, (Location 128)
Now to his credit, Doug never suggested that his team, even if better funded, might advance so far so fast as to “take over the world.” (Location 137)
Doug Engelbart understood what few others did—not just that computers could enable fantastic especially-mutually-improving tools, but lots of detail about what those tools would look like. (Location 140)
Just as humans displaced chimps, farmers displaced hunters, and industry displaced farming, would a group with this much of a head start on such a general better tech have a decent shot at displacing industry folks? And if so, shouldn’t the rest of the world have worried about how “friendly” they were? (Location 166)
In fact, while Engelbart’s ideas had important legacies, his team didn’t come remotely close to displacing much of anything. He lost most of his funding in the early 1970s, and his team dispersed. (Location 169)
But what makes that scenario reasonable if the UberTool scenario is not? (Location 179)
how much better will the best firm be relative to the average, second best, or worst? (Location 195)
Resource Variance—The more competitors vary in resources, the more performance varies. (Location 197)
Lumpy Design—The more quality depends on a few crucial choices, relative to many small choices, the more quality varies. (Location 203)
Info Leaks—The more info competitors can gain about others’ efforts, the more the best will be copied, reducing variance. (Location 205)
Network Effects—Users may prefer to use the same product regardless of its quality. (Location 213)
Some key innovations in history were associated with very high variance in competitor success. For example, our form of life seems to have eliminated all trace of any other forms on Earth. (Location 215)
On the other hand, farming and industry innovations were associated with much less variance. (Location 216)
attribute this mainly to info becoming much leakier, in part due to more shared standards, (Location 218)
If you worry that one competitor will severely dominate all others in the next really big innovation, forcing you to worry about its “friendliness,” you should want to promote factors that reduce success variance. (Location 219)
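Note: a back-of-the-envelope simulation (my own illustration, not Hanson's) of the "Info Leaks" factor listed above: firms make private, lumpy innovation draws each period, and with some leak probability each laggard copies part of the leader's edge. Higher leakage compresses the gap between the best firm and the average. All parameter names and values are assumed for illustration.

```python
import random

# Illustrative sketch: how information leakage compresses the gap between
# the best competitor and the rest.  Parameters are arbitrary assumptions.

def simulate(leak_prob, n_firms=20, periods=50, seed=0):
    rng = random.Random(seed)
    capability = [1.0] * n_firms
    for _ in range(periods):
        # Private, lumpy innovation draws (exponential, so a few big wins).
        for i in range(n_firms):
            capability[i] += 0.1 * rng.expovariate(1.0)
        # With probability leak_prob, a laggard copies half of the leader's edge.
        best = max(capability)
        for i in range(n_firms):
            if rng.random() < leak_prob:
                capability[i] += 0.5 * (best - capability[i])
    return max(capability) / (sum(capability) / n_firms)  # best relative to average

if __name__ == "__main__":
    for leak in (0.0, 0.2, 0.8):
        print(f"leak_prob={leak}: best/average ~ {simulate(leak):.2f}")
```

The ratio of best to average falls toward 1 as the leak probability rises, which is the sense in which leaky info reduces the chance that one competitor severely dominates.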
Feasible approaches include direct hand-coding, based on a few big and lots of little insights, and on emulations of real human brains. (Location 364)
Machine intelligence will, more likely than not, appear within a century, (Location 365)
Math and deep insights (especially probability) can be powerful relative to trend fitting and crude analogies. (Location 368)
Some should be thinking about how to create “friendly” machine intelligences. (Location 370)
We seem to disagree modestly about the relative chances of the emulation and direct-coding approaches; (Location 371)
Our largest disagreement seems to be on the chances that a single hand-coded version will suddenly and without warning change from nearly powerless to overwhelmingly powerful; I’d put it as less than 1% and he seems to put it as over 10%. (Location 372)
My style is more to apply standard methods and insights to unusual topics. So I accept at face value the apparent direct-coding progress to date, and the opinions of most old AI researchers (Location 375)
putting apparently dissimilar events into relevantly similar categories. (I’ll post more on this soon.) These together suggest a single suddenly superpowerful AI is pretty unlikely. (Location 380)
Eliezer seems to instead rely on abstractions he has worked out for himself, not yet much adopted by a wider community of analysts, nor proven over a history of applications to diverse events. (Location 381)
I’m not that happy with framing our analysis choices here as “surface analogies” versus “inside views.” (Location 507)
More useful, I think, to see this as a choice of abstractions. An abstraction (Wikipedia) neglects some details to emphasize others. (Location 509)
For example, consider the oldest known tool, the hammer (Wikipedia). To understand how well an ordinary hammer performs its main function, we can abstract from details of shape and materials. To calculate the kinetic energy it delivers, we need only look at its length, head mass, and recoil energy percentage (given by its bending strength). (Location 511)
To see that it is not a good thing to throw at people, we can note it is heavy, hard, and sharp. To see that it is not a good thing to hold high in a lightning storm, we can note it is long and conducts electricity. To evaluate the cost to carry it around in a tool kit, we consider its volume and mass. (Location 516)
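Note: a concrete rendering (my own sketch, not from the post; the swing speed and loss fraction are assumed values) of the kinetic-energy abstraction above, which keeps only handle length, head mass, and the recoil loss percentage and ignores everything else about the hammer.

```python
# Sketch of the kinetic-energy abstraction for a hammer (illustrative):
# all detail of shape and material is dropped except length, head mass,
# and the fraction of energy lost to recoil/flex.

def delivered_energy_joules(handle_length_m, head_mass_kg, recoil_loss_fraction,
                            swing_angular_speed_rad_s=20.0):
    """Energy delivered to the nail, assuming the head moves at
    (angular speed x handle length) and a fixed fraction is lost to recoil."""
    head_speed = swing_angular_speed_rad_s * handle_length_m  # v = w * r
    kinetic_energy = 0.5 * head_mass_kg * head_speed ** 2     # KE = 1/2 m v^2
    return kinetic_energy * (1.0 - recoil_loss_fraction)

if __name__ == "__main__":
    # A typical claw hammer: ~0.33 m handle, ~0.45 kg head, ~15% losses (assumed).
    print(f"~{delivered_energy_joules(0.33, 0.45, 0.15):.1f} J per strike")
```

The point of the example is the choice of inputs, not the numbers: three features suffice for this purpose, while other purposes (throwing, lightning, carrying) would pick out different features.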
Whether something is “similar” to a hammer depends on whether it has similar relevant features. (Location 525)
The issue is which abstractions are how useful for which purposes, not which features are “deep” vs. “surface.” (Location 528)
The future story of the creation of designed minds must of course differ in exact details from everything that has gone before. But that does not mean that nothing before is informative about it. (Location 532)
Yes, when you struggle to identify relevant abstractions you may settle for analogizing, (Location 535)
Analogies are bad not because they use “surface” features, but because the abstractions they use do not offer enough relevant insight for the purpose at hand. (Location 536)
I claim academic studies of innovation and economic growth offer relevant abstractions for understanding the future creation of machine minds, (Location 538)
previous major transitions, such as humans, farming, and industry, are relevantly similar. (Location 539)
You have previously said nothing is similar enough to this new event for analogy to be useful, so all we have is “causal modeling” (though you haven’t explained what you mean by this in this context). This post is a reply saying, no, there are more ways of using abstractions; analogy and causal modeling are two particular ways to reason via abstractions, but there are many other ways. (Location 560)
Everything is new to us at some point; we are always trying to make sense of new things by using the abstractions we have collected from trying to understand all the old things. (Location 621)
I said the abstractions I rely on most here come from the economic growth literature