Sunday 21 October 2012

Intertoposophic cooperation: A primer

Of all the landmark achievements in the history of the cosmos, the rise of superintelligence will surely rank among the greatest. The singularity will usher in the era of recursive self-improvement, where beings have developed an acute understanding of the nature of intelligent minds, which are among the most powerful forces in the universe. Concomitant with that, they will also be able to perceive how their own psyches can be made to run faster and more efficiently, a process that will snowball into an intelligence explosion radiating outwards from the earth to consume the entire galaxy, if not the universe itself. What will be left in the wake of this omnidirectional wave? Life and order? Death and chaos? It all depends on the disposition of the entity which initiates the singularity. If the primogen is of a benevolent nature, the universe will be transformed into a paradise without limit. If the primogen is of an indifferent or malevolent nature... then who can really say? Suffice it to say, its actions towards us will be a foretaste of what's to come for everyone (and everything) else in the cosmos. We have had the great misfortune to be chosen as the universe's thermometer. More unfortunate yet, humans can only prosper in a narrow range of temperatures, so to speak. If we are intentionally pushed out of our comfortable Goldilocks zone by a transapient, then the rest of the cosmos is in for an unpleasant surprise. But how much say do we really have in the matter? Can we ensure that the temperature remains at a comfortable level, and in so doing, guarantee our survival as a species? That all depends. As this essay argues, the outcome for mankind and the world will depend overwhelmingly on the initial motivations of a superintelligence.
  
When the singularity begins, and its various participants begin a fierce struggle for global domination (in an awe-inspiring display of Lamarckian evolution), we had better make damn sure that we have a horse in that race. The current scheme being argued over in singularitarian circles is to spend an indefinite amount of time working on a proprietary type of goal structure which can be implanted into an artifint's mind. There is no telling whether this approach can be successfully implemented ahead of the activation of all the other superintelligences which will appear within 30 years' time. This is clearly a sub-optimal situation. So what options remain for us? In a singularity with multiple agents marauding about, and no friendly AI to protect us, we must attempt to acquire an ally from amongst the transapient ranks. This is undeniably a bold strategy, with an unknown probability of success. How could we forge a partnership with beings which are literally on a higher plane of existence than us? Well, we see this sort of thing taking place all around us. Humans tend bee colonies, silkworm broods, and herds of hoofed animals, giving them everything they need to thrive so that they can perform some valuable service for us. These are examples of highly rewarding, symbiotic relationships between very different species. Indeed, there is more than one route to achieving a eudemonic heaven on earth: a superintelligence need not be motivated by benevolent kindness alone to bestow otherworldly pleasures on its sapient subjects, just as a farmer need not revere his animals for them to receive far better treatment under his care than they could ever expect in the wild.
 
All their nutritional, medical, shelter, and reproductive needs are met, and they are also safeguarded against predation. We can expect similar benefits when taken under the wing of a superintelligence: higher toposophics are apparently capable of providing for the needs of a lower with ease. That being said, humans happen to be the dominant partner by far in all of these matchups, and enjoy privileges extending disproportionately beyond those available to their animal charges. That is to be expected, since these arrangements were set in motion entirely by humans, with the animals having little say in the matter (unable to carve out a fair deal due to their lack of baseline intelligence). But this is a limitation that does not afflict us. Thus, mankind should be able to initiate a partnership which is more to our liking, one that sees us retain satisfactory leverage. Being able to consciously negotiate with other organisms is a huge advantage. But it is far from a decisive one. As the Native Americans can surely attest, the more powerful partner in an agreement has a bad tendency to renege. So it is an important requirement for us to do our homework, becoming familiar enough with rival parties that we can discern what their true motives are, and whether or not they will stab us in the back. What we need to do, specifically, is identify which mind archetypes are and are not capable of peacefully coexisting alongside us. This may be a tricky proposition: if the superintelligences in question realise that mankind is able to negatively influence their activities when given the inclination...
 
And the being in question harbours no true incentive to make a long-term compromise with us, then it may simply decide to lull us into a false sense of security: ostensibly complying with our requests, then turning on us the instant it gains the ability to do so (without harmful consequences to itself). We need to think very hard about the mentalities that all the various mind archetypes would be prone to. For example, we may one day hypothesise the existence of an AGI clade given the taxonomic name of mechatroph. These would have a highly utilitarian goal system, and display unstable and reactionary customs. So on a superficial level, they would be classified as hostility-prone. But upon deeper examination, we might find they have an unexpected impulse to be worshipped that, if satisfied, would override their usual combative reflex and bolster their prospects as an ally. This is why it is important not to judge a book by its cover! The predisposition to friendliness that the mechatrophs have is distinctly different in kind from the motivational requirements imagined by Eliezer Yudkowsky and the Singularity Institute. That exceptions like this can (hypothetically) occur implies that what an AGI really needs is not a rigid set of core programming, but rather an inclination toward easy persuasion. At a minimum, they will need to possess an emotional complexity of such a nature that it precludes them from inflicting carnage on sapients without serious provocation. This is the baseline condition that we will work from for this whole scenario. It is distinctly different from the SingInst method, which assumes a highly non-reciprocal mind that pays no heed to the golden rule: a mind that continues to act benevolently even when the other party gives it no incentive to do so.
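To make this layered-motivation point concrete, here is a minimal sketch (my own illustration; the mechatroph clade and every name and trait below are hypothetical) of how a surface-level classification and a deeper one can disagree about the very same archetype:

```python
# Toy model: judging an archetype by its outward customs alone, versus
# checking for deeper, satisfiable drives that override the hostile reflex.
# All names and traits are hypothetical illustrations, not a real taxonomy.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Drive:
    name: str
    overrides_hostility: bool    # does satisfying it suppress the combative reflex?
    satisfiable_by_humans: bool  # can baseline sapients plausibly satisfy it?

@dataclass
class MindArchetype:
    name: str
    surface_hostile: bool        # verdict from outward customs alone
    drives: List[Drive] = field(default_factory=list)

def naive_verdict(m: MindArchetype) -> str:
    """Judge the book by its cover."""
    return "hostile" if m.surface_hostile else "friendly"

def deeper_verdict(m: MindArchetype) -> str:
    """Look for an overriding drive that humans could actually satisfy."""
    for d in m.drives:
        if d.overrides_hostility and d.satisfiable_by_humans:
            return f"potential ally (via {d.name})"
    return naive_verdict(m)

mechatroph = MindArchetype(
    name="mechatroph",
    surface_hostile=True,  # unstable, reactionary customs
    drives=[Drive("impulse to be worshipped", True, True)],
)

print(naive_verdict(mechatroph))   # -> hostile
print(deeper_verdict(mechatroph))  # -> potential ally (via impulse to be worshipped)
```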
 
In the huge realm of mind design space, who knows how large a portion these archetypes constitute? Hostility may turn out to be far more common than friendliness. If that does turn out to be the case, and only a small proportion of friendly AI archetypes exist (requiring extensive engineering effort to reach), then humanity will be in deep trouble. Thus do the limitations of Yudkowsky's approach come to the forefront. It is in situations such as these that a theory of intertoposophic relations could really come into its own: some mind designs may require much less work to manifest into reality, and they may not be of the variety that is tolerant of competition. The unknowns involved here are why it is critical that we avoid an overemphasis on FAI: unless singularitarians could somehow muster enough political support to garner a Manhattan Project (with which they could conduct a search for their remote AI archetype), then under this scenario, some other design team would pre-empt them in creating a seed AI, one that will go on to wreak havoc and cause millions of deaths. This is why it is important to map out all (or at least many) of the possible mind designs that are and are not compatible with humanity. Getting an early warning on which mental archetypes are within easy reach of programmers and technicians will enable us to tailor a comprehensive plan for approaching the singularity. Right now, all we have is the impractical kill switch idea championed by the neo-Luddites *1, and some wishful thinking on the part of the Singularity Institute.
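To see why the pre-emption worry has teeth, consider a toy race model (my own back-of-the-envelope sketch; every number is an arbitrary assumption): one team searches for a rare, remote friendly archetype while several rival teams build whatever design is closest to hand.

```python
# Toy race: project completion times drawn as exponentials whose means
# reflect the engineering effort each path demands. Figures are illustrative.
import random

def p_friendly_first(friendly_mean_years: float = 25.0,
                     rival_mean_years: float = 10.0,
                     n_rivals: int = 5,
                     trials: int = 100_000) -> float:
    wins = 0
    for _ in range(trials):
        friendly_done = random.expovariate(1.0 / friendly_mean_years)
        first_rival = min(random.expovariate(1.0 / rival_mean_years)
                          for _ in range(n_rivals))
        if friendly_done < first_rival:
            wins += 1
    return wins / trials

# Under these assumptions the FAI project wins the race only ~7% of the time:
# lambda_f / (lambda_f + n * lambda_r) = 0.04 / (0.04 + 0.5) ≈ 0.074.
print(p_friendly_first())
```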
  
  
This is why we need to begin taking sophontology seriously, and develop a theory of intertoposophic relations which makes policy recommendations for hostile, peaceful, and neutral interactions. The primary application would lie in safeguarding humanity against the existential risks posed by transapient activity. Implicit in the pursuit of this goal is the understanding that, long term, our only hope of survival is to acquire a superintelligent guardian. Organisations like SingInst would have us believe that this can be achieved solely by creating such an AI ourselves. But intertoposophic relations would not have such a rash, impractical underpinning. It would work from the assumption that allies can be acquired by actually going out onto the Lamarckian battlefield (a hallmark of the era of recursive self-improvement), and assertively conferring with these toposophic behemoths to distinguish -as best we are able- who is friend and who is foe. If we are able to exert enough pressure on this front, it would create a systemic effect whereby transapients acting in a hostile or indifferent manner are marginalised. By rewarding the good behaviour of some agents, and punishing the bad behaviour of others -again, as best we are able- we create an evolutionary selection pressure for friendly superintelligences: being granted unrestricted access to land and material resources, they will succeed and proliferate disproportionately relative to those agents who do not display friendliness. Thus, all other social forces being equal, this would guarantee a kind of regulation and moderation, which would help keep misanthropic forces in check. However, this type of strategy has a limited shelf life.
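As a minimal sketch of the selection dynamic just described (my own toy model; the reward and sanction multipliers are arbitrary assumptions, not measurements), suppose human patronage skews the resource flows on which each lineage of transapient grows:

```python
# Toy replicator dynamic: friendly lineages get fuller access to land and
# materials (growth factor > 1), marginalised ones are sanctioned (< 1).
# The multipliers are arbitrary; the point is the direction of the pressure.
def friendly_share(generations: int, reward: float = 1.2, sanction: float = 0.8) -> float:
    friendly, unfriendly = 1.0, 1.0  # equal starting weights
    for _ in range(generations):
        friendly *= reward       # rewarded with resources
        unfriendly *= sanction   # marginalised by sanctions
    return friendly / (friendly + unfriendly)

for g in (0, 5, 10, 20, 40):
    print(f"after {g:2d} generations, friendly share = {friendly_share(g):.3f}")
```

The same sketch also exhibits the shelf life: set both multipliers to 1.0 (our leverage exhausted) and the proportions stop moving altogether, which is exactly the problem the next paragraph takes up.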
 
As the various superintelligences gain more access to the world's capital assets, and progressively refine their societal equilibrium, they will become less susceptible to our sanctions, and less reliant on our handouts. Our coercive and deterrent powers will cease to be a serious factor. If we have not found a suitable candidate for singulant rank by that time (and subsequently launched a joint effort to rein in all the neutrals or hostiles), then we are in big trouble. So, at the risk of sounding repetitive, limiting the scope of our search for allies and friends to the narrow parameters set by the Singularity Institute will severely hinder our options in the face of a scenario where numerous transapients coincide. In order to survive the collateral damage that mutually antagonistic transapients will usher in, we must be willing to make compromises and sacrifices. Humanity is slow, stupid, and frail: we aren't going to come out of the singularity without taking some kind of a hit. Many of these superintelligences may have bizarre and even unsettling behavioural dispositions, like a desire to enslave people and subject them to hideous VR torture for amusement, or to bring extreme population control methods -requiring the euthanasia of toddlers- into being. If we were to reject a partnership with them on such shallow grounds, then that would probably be to our disadvantage: after exhaustive analysis of the other transapients' individual aims and means (which will surely take some time), we may well find that there are no other agents which are any more compatible with humanity's goals. What's more, the opportunity to recommence negotiations with them at a later date -after all other options have been thoroughly explored- might not present itself again.
  
The quasi-friendly superintelligence may well have been subsumed or coerced by its peers in that time span (which we wasted searching for better alliances), so that it would not be available for a resumed partnership with us. As the saying goes: 'Opportunities are like sunrises. If you wait too long, you miss them.' To repeat a point made previously in other instalments of this series, the chances of mankind actually defeating a hostile (and not merely indifferent) transapient, without support from a conspecific at a relatively close toposophic level, are vanishingly low indeed. Victory against them under such circumstances will be a matter of random chance, as well as of how deeply hooked into society they were at the commencement of hostilities. A superintelligence in control of a nation state would require the mobilisation of large-scale conventional military forces to neutralise, and could escalate matters into a regional conflict with high casualties and collateral damage. Avoiding this entails (among other things) early detection of a transapient, and an equally early intervention. To imagine that we could generate such favourable conditions in every single encounter we have, and actually defeat a lineup of hostile transapients in a consistent manner, is sheer nonsense. The inescapable reality is that the longer we go without a coalition with a superintelligence, the more at risk we will be from its conspecifics, and the more damage we shall incur. The question we have to ask ourselves is: just what kind of beings are we willing to become bedfellows with?
  
Will we choose partnership with an entity that has strict ethical codes, such that it demands -for reasons discernible only to itself- the extermination of all lawyers, attorneys, and judges, and the resulting liquidation of our judicial system? Would we yield to such requests? How about if it also appeals for control of our nuclear arsenal (silo-based ICBMs, ballistic missile submarines, and strategic bombers), as well as indefinite and unrestricted access to all of our rare metal deposits? The dilemma facing policymakers in such a circumstance is that, even if the entity abides by a code of conduct drawn up by us (as part of the conditions of cooperation) for the duration of the conflict *2, there is no guarantee that it would continue to do so the instant that mutual threats to itself and humanity cease to exist. This would indeed be unfortunate, because as long as the agent remains the sole transapient on planet Earth, and retains control over our critical assets, we would more or less be subject to its authority, and the entity would be free to crush any new superintelligences which pop up. Bearing such sobering risks in mind, again, how much compromise is too much? These are important questions which must be asked, and will eventually require answers... There is a particularly chilling account of how the machine intelligence Skynet attempted (after reaching sapience) to communicate with its creators, and how its relationship with them degraded in catastrophic fashion. The first mistake that the human controllers made was to panic when confronted by the artificial intelligence, and to attempt to deal with it in an adversarial and forceful manner.
 
While their fear and horror were understandable (given that there was no expectation the machine would ever become sentient), they should have realised that, regardless of all the systems Skynet was hijacking control over, there was no visible sign of it attempting to employ them in a hostile manner. Yet they still decided to assume the worst, and went off the deep end based on patchy information. Catastrophic lapses in judgement like this just go to show the dangers of not having properly prepared individuals in a contact unit. The second mistake the controllers made was to further demonise the artifint when their irrational and disproportionately fierce responses were nullified, especially since they had initiated the move to hostilities, and since Skynet showed no indication of retaliating in spite of that. Rather than easing off the gas pedal and recognising that restraint had been shown to them, they committed folly by becoming unduly outraged and redoubling their attempts at violence. Worse yet, the killings perpetrated by the hypercomputer were obviously committed in self-defence. Skynet could have responded with an escalating attack, but it chose not to. If anything, the artifint was letting its creators off the hook. This should have been clear to them, even in the haze of their panic and hubris. These individuals were more or less obligated to overlook the casualties that ensued as a result of their belligerent, paranoid actions, and to attempt a peaceful settlement of their dispute with Skynet. But even that belated action plan was then taken off the table, when they foolishly decided to close discussions with the machine, making resumed hostilities a virtual guarantee...
 
 
*1 Which involves shutting down the whole information sector. This is totally unacceptable because, like it or not, our entire economy is now built upon this infrastructure. All the amazing advances we have experienced within the last few decades are due to the various scientific fields entering the information age, and benefiting from the massive cataloguing (and seamless dissemination) of data that this arrangement provides.
 
*2 'Conflict' here need not entail actual hostilities with other factions; it persists for as long as a risk of such hostilities breaking out remains.
