Tuesday, 30 June 2015

Deceptive singularitarians

No, this isn't about evil transhumanists who are in league with the NWO to enslave us all (Thomas Horn's rantings aside, there is no connection between the two groups). This is about some unchecked beliefs that have run rampant among singularitarians and will hamper our ability to participate in the intelligence explosion. They've been obvious to patient observers for a long time, but since new recruits are getting involved in singularitarianism all the time, it's worth pointing out some of the more egregious examples.
 
The first unchecked belief is a contrived Gish gallop called Algernon's Law, whereby: "Any simple major enhancement to human intelligence is a net evolutionary disadvantage." Disturbingly enough, this quote comes from Eliezer Yudkowsky. While he is a well-known seed AI chauvinist, outright dishonesty like this comes as a surprise. In the Darwinian jungle, all organisms are as stupid as they can get away with. Only a handful of species have had the right kind of selection pressures pushing them toward higher intelligence, and even then, it was only enough to reach some local optimum (which in no way reflects hard biological limits). There will undoubtedly turn out to be many ways that human intelligence can be enhanced through genetics or cybernetic implants, but the seed AI chauvinists try to discourage all of that so they can keep the cake for themselves. They want us to rely on their untested concept as a first line of defense, just so they can feed their egos.
 
The second unchecked belief is that there will be such a thing as a hard takeoff, whereby: "An AI makes the transition from human-equivalence to superintelligence over the course of days or hours." If this only referred to a seed AI going from baseline human level to superintelligence and stalling there, it wouldn't be so objectionable. But the hard takeoff scenario doesn't stop there: it claims that the AI will be able to keep increasing its intelligence at a linear or even exponential rate, without needing to conform to the laws of physics. You would think this machine would need to acquire more resources and carry out R&D in order to become smarter, but according to these fruitcakes, it just has to stay in place and meditate (err, rearrange its source code), as the toy sketch below illustrates. Robin Hanson and Charles Stross have written their own criticisms of hard takeoff, which cover all the relevant details, so there's no need to belabor the point here.
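To make the disagreement concrete, here is a minimal toy model (my own sketch in Python, with made-up parameters; it is not drawn from Hanson's or Stross's critiques or anyone's actual forecast). It contrasts the software-only exponential curve assumed by hard takeoff with growth that also has to be paid for in physically acquired compute. The point is only how quickly the two assumptions diverge, not a prediction of either.

# Toy comparison of two takeoff assumptions.
# All parameters below are illustrative guesses, not empirical estimates.

def hard_takeoff(steps, gain_per_step=1.5):
    """Unconstrained recursive self-improvement: pure exponential growth."""
    intelligence = 1.0
    trajectory = []
    for _ in range(steps):
        intelligence *= gain_per_step
        trajectory.append(intelligence)
    return trajectory

def resource_limited(steps, acquisition_rate=10.0, efficiency_gain=1.05):
    """Growth gated by physically acquired compute.

    Software improvements still raise efficiency a little each step, but
    effective intelligence is capped by (efficiency * compute), and compute
    only grows linearly as hardware is actually built or bought.
    """
    compute = 1.0
    efficiency = 1.0
    trajectory = []
    for _ in range(steps):
        efficiency *= efficiency_gain   # algorithmic progress (cheap, fast)
        compute += acquisition_rate     # hardware buildout (slow, physical)
        trajectory.append(efficiency * compute)
    return trajectory

if __name__ == "__main__":
    steps = 30
    hard = hard_takeoff(steps)
    limited = resource_limited(steps)
    for t in range(0, steps, 5):
        print(f"step {t:2d}: hard takeoff = {hard[t]:12.1f}   "
              f"resource-limited = {limited[t]:10.1f}")

Run it and the unconstrained curve blows past the resource-limited one within a couple dozen steps, which is exactly the part of the story that never gets justified: where the extra hardware and experimental feedback are supposed to come from while the machine sits still and "meditates".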

The third unchecked belief is that the singularity will happen soon enough to make attempts at containing rogue AIs futile. In one of his papers, Nick Bostrom of all people explicitly came out and said this! Like Eliezer Yudkowsky, he showed an obvious disdain for the prospect of humans becoming smart enough to compete with AIs and stop them from securing first-mover advantage. The AI chauvinists want to discourage people from trying to keep up with their machines, and force them to put all their hopes in a seed AI. That's why you get unrealistic estimates of the singularity happening in just 15 or 20 years: because if you stretch it out even to 30 years (as Ray Kurzweil predicts), then you have to take into account humans who begin to genetically enhance themselves or integrate themselves with cybernetics. Why is that bad? Because transhumans aren't as easily trampled as regular apes, and they won't need to rely on some AI chauvinist with a god complex.

Tuesday, 9 June 2015

Singularity primer

This is a list of articles that give an excellent introduction to singularitarianism. They are best read in sequence.

"There are several technologies that are often mentioned as heading in this direction. The most commonly mentioned is probably Artificial Intelligence, but there are others: direct brain-computer interfaces, biological augmentation of the brain, genetic engineering, ultra-high-resolution scans of the brain followed by computer emulation. Some of these technologies seem likely to arrive much earlier than the others, but there are nonetheless several independent technologies all heading in the direction of the Singularity - several different technologies which, if they reached a threshold level of sophistication, would enable the creation of smarter-than-human intelligence.

A future that contains smarter-than-human minds is genuinely different in a way that goes beyond the usual visions of a future filled with bigger and better gadgets. Vernor Vinge originally coined the term "Singularity" in observing that, just as our model of physics breaks down when it tries to model the singularity at the center of a black hole, our model of the world breaks down when it tries to model a future that contains entities smarter than human." -Continue reading here

 
http://users.digitalkingdom.org/~rlpowell/beliefs/sysop.html
 
https://intelligence.org/ie-faq/
 
http://www.yudkowsky.net/obsolete/principles.html

http://www.yudkowsky.net/obsolete/singularity.html
 
http://kajsotala.fi/2007/07/why-care-about-artificial-intelligence/
 
http://www.preventingskynet.com/prevent-skynet-we-are-skynet/
 
 
And here is some slightly more advanced stuff.
 
http://www.sentientdevelopments.com/2009/06/ranking-most-powerful-forces-in.html
 
http://www.yudkowsky.net/singularity/power
 
http://kajsotala.fi/2007/10/14-objections-against-aifriendly-aithe-singularity-answered/
 
http://www.preventingskynet.com/thinking-of-ais-as-humans-is-misguided/
 
https://www.youtube.com/watch?v=mDhdt58ySJA
 
https://www.youtube.com/watch?v=S-BkGEh806M
 
 
And finally, here are a few articles that talk about Singularity fun theory and eudaimonics.
  
http://transhumanisme.nl/oud/yudkowsky.html
 
http://www.yudkowsky.net/singularity/fun-theory/
 
http://lesswrong.com/lw/x8/amputation_of_destiny/