Tuesday, 30 June 2015

Deceptive singularitarians

No, this isn't about evil transhumanists in league with the NWO to enslave us all (Thomas Horn's rantings aside, there is no connection between those two groups). This is about some unchecked beliefs that have run rampant among singularitarians and will hamper our ability to participate in the intelligence explosion. They've been obvious to patient observers for a long time, but since new recruits are getting involved in singularitarianism all the time, it's worth pointing out some of the more egregious examples.
 
The first unchecked belief is a contrived bit of sophistry called Algernon's Law, whereby any simple major enhancement to human intelligence is a net evolutionary disadvantage. Disturbingly enough, this quote comes from Eliezer Yudkowsky. While he is a well-known seed AI chauvinist, outright dishonesty like this comes as a surprise. In the Darwinian jungle, all organisms are as stupid as they can get away with. Only a handful of species have faced the right kind of selection pressures pushing them towards higher intelligence, and even then, it was only enough to get them to some localised optimum (which doesn't in any way reflect a hard biological limit). There will undoubtedly turn out to be many ways that human intelligence can be enhanced through genetics or cybernetic implants, but the seed AI chauvinists try to discourage all of that so they can keep the cake for themselves. They want us to rely on their untested concept as a first line of defense, just so they can feed their egos.
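To make the local-optimum point concrete, here's a throwaway sketch: a greedy hill-climb on a completely made-up fitness landscape (the function, the numbers, and the "brain size" variable are all hypothetical; this illustrates nothing about real biology). The point is only that blind uphill search parks itself at a nearby peak even when a far higher one exists.

import math

def fitness(brain_size):
    """Made-up landscape: a modest nearby peak plus a much higher distant one."""
    return math.exp(-(brain_size - 2) ** 2) + 3 * math.exp(-((brain_size - 8) ** 2) / 4)

def greedy_climb(start, step=0.1, max_iters=1000):
    """Accept only uphill moves, like naive selection acting on small mutations."""
    x = start
    for _ in range(max_iters):
        best = max([x - step, x, x + step], key=fitness)
        if best == x:  # no neighbouring move improves fitness, so the search stalls
            break
        x = best
    return x

stalled = greedy_climb(start=1.0)
print(f"Search stalls at brain_size={stalled:.1f} with fitness {fitness(stalled):.2f}")
print(f"The taller peak near brain_size=8.0 has fitness {fitness(8.0):.2f}")

Run it and the climber settles around brain_size 2, a third of what the landscape allows. Swap in any bumpy function you like and the behaviour is the same: where selection stops is not where the ceiling is.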
 
The second unchecked belief is that there will be such a thing as a hard takeoff, whereby an AI makes the transition from human equivalence to superintelligence over the course of days or hours. If this only referred to a seed AI going from baseline human level to superintelligence and stalling there, it wouldn't be so objectionable. But the hard takeoff scenario doesn't stop there: it claims that the AI will be able to continually increase its intelligence at a linear or even exponential rate, without needing to conform to the laws of physics. You would think this machine would need to acquire more resources and carry out R&D in order to become smarter, but according to these fruitcakes, it just has to stay in place and meditate (err, rearrange its source code). Robin Hanson and Charles Stross have published their own criticisms of hard takeoff, which cover all the relevant details, so there's no need to belabor the point here.
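For anyone who wants numbers, here's a toy back-of-the-envelope model (every parameter is invented; this is a cartoon, not a forecast of any actual AI). It compares a machine that multiplies its own capability for free each step against one whose improvements have to be paid for out of resources it can only acquire at a bounded rate.

def unconstrained(capability, rate, steps):
    """'Meditate in place' growth: each step multiplies capability by (1 + rate)."""
    for _ in range(steps):
        capability *= (1 + rate)
    return capability

def resource_limited(capability, rate, steps, resource_cap):
    """Each step's gain is throttled by how much can physically be acquired."""
    for _ in range(steps):
        gain = min(rate * capability, resource_cap)  # appetite soon outgrows supply
        capability += gain
    return capability

free = unconstrained(1.0, 0.5, 20)
capped = resource_limited(1.0, 0.5, 20, resource_cap=2.0)
print(f"Unconstrained after 20 steps: {free:,.1f}")
print(f"Resource-limited after 20 steps: {capped:,.1f}")

Same rate, same starting point, yet the curve that has to pay for its own upgrades ends up roughly two orders of magnitude behind. Whether the real constraint is chips, energy, or lab time, the physics bill doesn't vanish just because the code got cleverer.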

The third unchecked belief is that the singularity will happen soon enough to make attempts at containing rogue AIs futile. In one of his papers, Nick Bostrom of all people explicitly came out and said this! Like Eliezer Yudkowsky, he showed an obvious disdain for the prospect of humans becoming smart enough to compete with AIs and stop them from securing a first-mover advantage. The AI chauvinists want to discourage people from trying to keep up with their machines, and force them to put all their hopes on a seed AI. That's why you get unrealistic estimates of the singularity happening in just 15 or 20 years: because if you stretch it out even to 30 years (as Ray Kurzweil predicts), then you have to take into account humans who begin to genetically enhance themselves or integrate themselves with cybernetics. Why is that bad? Because transhumans aren't as easily trampled as the regular apes, and they won't need to rely on some AI chauvinist with a god complex.
