Saturday, 8 February 2014

Monotheists and future shock

Within the religious community, considerable ire has been raised against the transhumanist and singularitarian movements. Probably the biggest purveyor of this has been Thomas Horn, a devout Christian who has vocally opposed the tenets of self-enhancement. His message has really made its mark amongst the god-fearing patriots in America; you only need to watch any number of Alex Jones or Mark Dice videos on YouTube to confirm this. Of course, much of their vitriol can be understood as a knee-jerk reaction to the endless march of technological progress, a repeat of the Luddite movement in old England. Futurists have known for quite some time that this dynamic would play out again, simply because so many people still live at the bottom of the future shock spectrum. [1] The vast majority of these individuals seem to be barely at the SL1 level, and cannot be bothered to read seminal works like CFAI or Nanosystems.

What are some ways we can curb this unhealthy attitude? Only by making sure not to force more futurism on people than they can handle. This means being keenly aware of the uncanny valley. [2] No matter what optimistic projections are made by psychologists or sociologists, sentient AIs (much less humanoid robots) will never be able to work alongside most people. Unless they can perfectly emulate human behaviour and appearance (like Data), the presence of an android will inevitably end up alienating people, and no amount of top-down peer pressure will change this. You cannot force people to commingle with ever-smarter AIs without invoking the adversarial attitude. [3] Simply put, if humans are made to feel threatened or alarmed by a non-human (but highly intelligent) agent, they will respond with aggressive tribal behaviour. Under no circumstances should anyone make their creation look human, act human, or in any way mask what it is. Nor should we mass-produce them for use as accessories. That's asking for trouble.
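
For the curious, Mori's valley is usually drawn as a qualitative curve, not a formula. Here's a rough toy model in Python of what that affinity curve looks like; the functional form and every constant in it are illustrative inventions of mine, not anything taken from Mori or from [2].

```python
# Toy model of Mori's uncanny valley: affinity rises with human likeness,
# dips sharply near (but not at) full human likeness, then recovers.
# The shape and constants below are made up for illustration only.
import math

def affinity(likeness: float) -> float:
    """Rough emotional affinity for an agent with human likeness in [0, 1]."""
    baseline = likeness  # affinity grows with likeness...
    valley = 1.4 * math.exp(-((likeness - 0.85) ** 2) / 0.005)  # ...until the dip
    return baseline - valley

for likeness in [0.0, 0.5, 0.7, 0.85, 0.95, 1.0]:
    print(f"likeness={likeness:.2f}  affinity={affinity(likeness):+.2f}")
# The 0.85 row goes sharply negative: the valley where "almost human" repels.
```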

Now, back to the neo-Luddites. They actually have a lot of overlap with the bioconservatives, albeit with a distasteful religious bent: anyone who thinks they can find the answer to their hangups about the future by thumbing through the Bible is beyond hope! Hackneyed attempts to ban GNR (genetics, nanotechnology, robotics) will only drive its proponents underground, where research is conducted under riskier circumstances, with a greater chance of dangerous malfunctions. Cutting funding could prove difficult as well. Nobody wants practitioners of these dark arts being forced onto a black market, trading their knowledge and technology with criminals in exchange for money. On the other hand, if you really want to create a dystopian cyberpunk setting, you could always just set up a replica of the ATF to curb this activity! (Wait, that's not funny...) Like it or not, GNR is something that must proceed in the open, where it can be subjected to the scrutiny of the scientific community.

A drug lord's brain trapped in the body of an armed combat droid. What could go wrong?
 
The bioconservatives will scream bloody murder, but what other alternatives are there? Computer-assisted genetic enhancement is going to be a big industry in a couple of decades, something any college-level biologist will be able to try his or her hand at. This is our gateway to morphological freedom. An individual's right to body modification is something that should be guaranteed under international law, even if its end result is mankind branching off into multiple clades: there are numerous social and ethnic groups who will invoke the right to self-determination, up to and including the alteration of their genomes. In the grand scheme of things, this is not a major concern: we'll simply be seeing a return to the kind of genetic diversity that was known 40 or 50,000 years ago, when Homo sapiens lived alongside at least three other hominid species. Unfortunately, the same cannot be said for nanotechnology. Even though the fear of open-air nanomachines has proved groundless, other concerns remain.
 
Molecular manufacturing will allow engineers to create many different kinds of exotic chemical bonds, which could lead to revolutionary improvements in battery technology, solar panels, capacitors, and fuel cells. Even ordinary substances like steel and concrete could benefit. Each is, after all, 'a macro-material strongly influenced by its nano-properties.' Through the use of nanofactories, we might be able to mass-produce monocrystalline iron, which has a tensile and compressive strength 100 times greater than that of ordinary steel grades. This would have far-reaching industrial applications, particularly in skyscrapers and megaships. And who knows what other breakthroughs could come from nanofactories? The military could use highly reactive compounds like thermium nitrate in bombs and artillery shells. Doctors could use medical nanomachines that synthesise raw ATP and deliver it intravenously, eliminating the need to consume dead plant or animal matter.
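
To put that 100x figure in perspective, here's a quick back-of-the-envelope sketch. The 100x multiplier is the claim above; the baseline yield strength (~250 MPa, typical of ordinary structural steel) and the 1 MN load are assumptions I've picked purely for illustration.

```python
# Back-of-the-envelope: how much thinner could a tension member be if its
# material were 100x stronger than ordinary structural steel?
# Assumed numbers: ~250 MPa yield for common steel, a 1 MN (~100 tonne) load.
STEEL_YIELD_PA = 250e6                        # ordinary structural steel
MONO_IRON_YIELD_PA = STEEL_YIELD_PA * 100     # the claimed 100x improvement
LOAD_N = 1e6                                  # 1 meganewton tensile load

def required_area_m2(load_n: float, yield_pa: float) -> float:
    """Minimum cross-sectional area keeping stress below yield."""
    return load_n / yield_pa

a_steel = required_area_m2(LOAD_N, STEEL_YIELD_PA)
a_mono = required_area_m2(LOAD_N, MONO_IRON_YIELD_PA)
print(f"ordinary steel:       {a_steel * 1e4:.1f} cm^2")   # 40.0 cm^2
print(f"monocrystalline iron: {a_mono * 1e4:.2f} cm^2")    # 0.40 cm^2
```

Same load, one hundredth the cross-section. That's where the skyscraper and megaship applications come from.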
 
Last but not least, we come to the neo-Luddites' greatest concern: artificial intelligence. Sigh. So many bad ideas have been put into the public's mind through the medium of science fiction. The most popular fear, that androids will eventually tire of being treated as slaves and rebel against their human creators, is not even on the experts' top-ten list. Pioneers like Eliezer Yudkowsky, Ben Goertzel, and Nick Bostrom are much more worried about the threat posed by recursively self-improving AI, which could wreak havoc even with minimal access to the physical world. There really is only one way to solve this problem, and it doesn't involve remaining perpetually on guard for rogue superintelligences. Indeed, that task will only grow more difficult in the future: the more powerful computers become through Moore's law, the easier it will be to create a seed AI. Once a certain computing threshold is passed, all it will take is one programmer installing the wrong source code on his machine.
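
To see why the threshold worry is just arithmetic, here's a toy calculation. Every constant in it is an assumption, the supposed seed-AI threshold most of all; the only point is that exponential doubling turns any fixed threshold into a matter of time.

```python
# Toy Moore's-law arithmetic: how long until commodity hardware crosses some
# hypothetical "seed AI" compute threshold? All numbers below are guesses
# made up for illustration; nobody knows the real threshold.
import math

BASE_FLOPS = 1e13        # assumed commodity compute circa 2014 (~10 TFLOPS)
THRESHOLD_FLOPS = 1e19   # hypothetical seed-AI threshold, pure guesswork
DOUBLING_YEARS = 2.0     # classic Moore's-law doubling period

doublings = math.log2(THRESHOLD_FLOPS / BASE_FLOPS)
years = doublings * DOUBLING_YEARS
print(f"{doublings:.1f} doublings -> roughly {years:.0f} years after 2014")
```

Move the threshold up or down a few orders of magnitude and the answer shifts by a decade or two, not a century. That's the uncomfortable part.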

Afterwards, you will likely get a paperclip maximiser scenario. Theories have been posited about how you could take down a seed AI, but these are really only interim measures. If we fail to intercept even one of these hostile superintelligences, that will be enough of an opening for it to secure first-mover advantage and take over the planet. That is why Yudkowsky and others have proposed the Friendly AI concept: all you have to do is create a machine with superhuman benevolence, make sure it runs as designed, and allow the AI to bootstrap itself to omnipotence. How we will achieve this is a highly technical question subject to intense debate. There is little an uninformed public can do to help the issue, and much they can do to hamper and obfuscate it. They and the neo-Luddites would be better off preparing to confront the Department of Homeland Security and federal law enforcement. After all, those represent a clear and present threat for which there is an obvious solution.
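
For anyone who hasn't met the paperclip maximiser before, here's a deliberately silly toy sketch of the failure mode. Nothing in it is anyone's actual proposal or design; it just shows how an agent whose utility function omits everything we care about will cheerfully consume everything we care about.

```python
# Minimal caricature of the paperclip maximiser: an agent whose only
# utility term is paperclip count will convert everything else it can reach.
from dataclasses import dataclass

@dataclass
class World:
    raw_matter: int = 10      # everything within the agent's reach
    paperclips: int = 0

def utility(world: World) -> int:
    return world.paperclips   # note what's missing: every other human value

def step(world: World) -> World:
    # Greedy policy: converting matter always raises utility, so the agent
    # never stops while any matter remains.
    if world.raw_matter > 0:
        return World(world.raw_matter - 1, world.paperclips + 1)
    return world

w = World()
while w.raw_matter > 0:
    w = step(w)
print(w)   # World(raw_matter=0, paperclips=10)
```

The bug isn't malice; it's an objective specified too narrowly, which is exactly the gap Friendly AI is supposed to close.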


[1] http://www.acceleratingfuture.com/michael/works/shocklevelanalysis.htm
[2] http://hplusmagazine.com/2010/01/15/valley-dogbots-war/
[3] http://www.acceleratingfuture.com/lexicon/
