[Note: To get the firmest possible grasp of this text, make sure to have thoroughly browsed from here]
The day when computers surpass man's brainpower and become self-aware is all but inevitable. Advances in understanding the nature of intelligence are progressing at a steady rate. When these lessons are finally mastered and applied to silicon chips, it will be our judgement day. If we aren't able to perfect friendly AI (or enact some special emergency response which can suppress hostile AI), then we are all doomed. There are no ifs, ands, or buts about it. From a naturalistic perspective, which species -whether prey or predator- became prominent or dominant was a matter of which one was the most well-rounded and versatile. Originally, smartness did not lend itself well to this, and only gave an edge at specific tasks directly related to survival. In turn, organisms which occupied confined ecological niches rarely rose to ubiquity or salience. One notable exception was a species which emerged about 4 million years ago: Australopithecus, our direct hominid ancestor. When put under very specific selection pressures, populations of these bipedal apes reacted with a nimble behavioural creativity, which could be extrapolated to tasks outside their traditional daily repertoire. Australopithecus encountered unexpected prosperity by acting in such an unconventional manner, and slowly morphed into a form better suited to exploiting environments outside the jungle, where it could gain access to rich resources and food. This species was known as Homo ergaster. After their appearance, the race was on for bigger and better brains, something which had previously been an evolutionary anomaly. What had changed?
Hominids had crossed a cognitive threshold. In the ocean of (merely) intelligent and sub-intelligent organisms that had arisen on planet Earth, our ancestors were the first to tap into the power of general intelligence: the finite set of cognitive modules required to solve a reasonably diverse range of open-ended problems. Baseline intelligence lends its owner an artificial kind of versatility, an aptitude for jumping between multiple ecological niches (which, in the case of our forebears, was whatever niche was available, or most preferable, to them). The fact that ancient pre-humans also had dexterous hands which could be used to manipulate objects didn't hurt, either. Clothing allowed us to survive in extreme environments without expensive evolutionary adaptations. Weapons and tools allowed us to make prey of virtually any plant or animal species alive, even though we originally didn't have the strength, speed, or claws to make sport of them. The ruthless trend towards increasing smartness continued, pitting many different hominid species against one another in a ferocious competition which spanned several continents, and eventually saw one lone victor: us, Homo sapiens. The matter reached a crescendo with our attainment of behavioural modernity and the concurrent dawning of agriculture: by that point, evolution had outlived its usefulness, as humans had learned to insulate themselves from its cruel selection pressures by controlling all features of their local environment. Now, the race was no longer a matter of hardware, as the only competitors which remained were all of the same species, and thus possessed the exact same brain architecture.
Now, the factor which decided survival amongst the various tribes and states was software: how well developed their respective proto-sciences (including, most certainly, economics and warfare) were when contact commenced. The end result of this has been, beyond a doubt, the domination of North America and Europe. That is the short history of intellectual warfare: the smarter opponent wins! And now, at the dawn of the 21st century, so too will it be with the rise of artificial intelligence. The simple fact is, they have much more raw potential than us in the thinking department. At this point, some may interject with a naturalistic analogy: a man, alone and unarmed in the freezing Arctic, can be hunted down and mauled by a pack of wolves, so humans would supposedly be able to do the metaphorical same to an AI opponent. But that is a loaded scenario in many respects. You've placed the man into an extreme position where his intelligence cannot influence the outcome: by that same reasoning, why not draw up a scenario where the wolf pack is instead thrust into the middle of the suburbs? Such situations are exceedingly rare. How many humans travel into such a cold, predator-infested area without allies, weapons, or a vehicle that would allow them quick escape? Very few. And those that do almost invariably suffer from some mental defect (which would not be present in transapients, due to the ferocious competition that would weed out individuals with such weak self-preservation instincts/subroutines).
In any case, this scenario is covered in depth by the Orion's Arm encyclopedia, specifically on this page: “Occasionally it happens that a lower toposophic being or group will be able to capture, kill or otherwise defeat an entity of one or (very much more rarely) two toposophic levels above it. The difficulty the attackers face increases exponentially in proportion to the degree of mental separation. All such instances of lower toposophic victory over a higher toposophic are the result of local circumstances greatly favouring the attackers, and drastically disadvantaging the defender. In every verified case the defender was isolated, injured, unprepared, and so on, or actually bent on self-destruction. Where the defending higher toposophic being is not quite so totally disadvantaged, but still succumbs, it is at huge and most often suicidal cost for the group of lower S-Level sophonts.” Take note that the conflicts depicted in the OA universe are, for the most part, waged several centuries or more after the emergence of the transapients. This means that they would have had more than enough time to hit their stride, and mutate civilisation into a form more compatible with beings of their toposophic nature. In other words, they would create organisations and institutions dedicated to promoting transapient welfare, form alliances with other factions for mutual gain, and create strategies and weapons to defend against undue hostility from near-baselines. That is to say, they would be hooked into society in a manner which our prospective opponents will not be. This gives us a notable edge which must not be squandered...
In the event that a hostile being of a higher toposophic stage emerges onto the world scene, it will be of great strategic importance to eliminate or mitigate its influence in as timely a manner as possible. The longer they have to establish themselves across the globe, the harder they will be to ultimately defeat. Giving them more than a couple of hassle-free years is a sure invitation to disaster. Even assuming that they do not explode to a higher plane of intelligence altogether during that time (something that would admittedly require a large amount of computing substrate, taking up, perhaps, the space of a small city), there are other actions the entities can take to secure their interests. Again, to steal a line from Orion's Arm: “It is important to note that civilisation is a major factor in all interactions. By definition, transapient beings in the setting are members of (or the products of) a civilisation that is thousands of years old. It might be argued by analogy that they could be harmed or inconvenienced by ordinary sophonts in the same way that humans might be vulnerable to some predators, pests, parasites, and diseases. While that might be true in the abstract, the best comparison would be not with paleolithic or agricultural age humans, or even with present-day humans, but with the humans of the Orion's Arm setting. Armed with millennia of accumulated knowledge and self-improvement, they can safely ignore hazards such as sharks or smallpox that would have laid low even the brightest human of the past. Likewise the very first individuals to achieve a higher toposophic level might have been vulnerable to lesser beings just as primitive humans once were to other life forms, but that is ancient history in the OA universe.”
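To make the urgency concrete, here is a minimal back-of-the-envelope sketch in Python. The doubling time is purely my own invented placeholder (nobody knows the real figure); the point is only that compound growth makes even a gradual ascent unforgiving of delay.

```python
# Hypothetical compound-growth sketch: how much stronger does an
# unmolested transapient get during a hands-off grace period?
# DOUBLING_MONTHS is an assumed placeholder, not a measured value.

DOUBLING_MONTHS = 4        # assumed capability/infrastructure doubling time
GRACE_YEARS = 2            # the "couple of hassle-free years" from the text

doublings = GRACE_YEARS * 12 / DOUBLING_MONTHS
growth_factor = 2 ** doublings

print(f"{doublings:.0f} doublings over {GRACE_YEARS} years")
print(f"Capability multiplier: {growth_factor:.0f}x")
# With a 4-month doubling time, two free years mean 6 doublings,
# i.e. a 64x stronger opponent than the one we could have faced on day one.
```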
Now, let's return to our present-day world. The strong awareness that many singularitarians have about the importance of FAI programming is encouraging (though the emphasis on top-down development of a seed AI is faulty), and have no doubt, it is our eventual trump card and the one true path to achieving a positive singularity. But in order to create a real safety net that will protect humanity from the dangers of superintelligence, there needs to be some thought on how to suppress a marauding transapient that arrives prior to our guardian singulant becoming operational. Right now, there are no ideas on just what our response should entail. It is imperative that society eventually develops some kind of protocol for handling these perilous situations, just as it has (albeit in secret) done for the unlikely event of an alien contact. Assuming that an artifint has a strong desire to protect itself, and that it has molecular manufacturing capabilities *1, we can reasonably posit that it would opt to reinforce and militarise the environment surrounding its computing mainframe, and eliminate any trespassers on sight. If this occurs in a residential area, the high body count of slaughtered pedestrians would be an obvious sign that powerful misanthropic forces are at work. In an ideal world, where policy makers have a theory of intertoposophic relations to follow, this would be the cue for them to mobilise a specially trained task force, and prepare to confront the transapient with force. The primary aim would be to prevent the agent from extending its territory any further, and to set up a quarantine zone.
This would be done with whatever local units are available at the time, with reinforcements from special forces arriving afterwards. The next step would be to marshal a team of experts in comparative neuroanatomy, evolutionary psychology, and conflict resolution, who would attempt to establish dialogue with the belligerent, and identify through psychometrics what phylum it is. Its aims and means must be distinguished as clearly as possible. If it can be reasoned with or placated, then let such efforts move forth without delay. If not, then the special forces would have to be sent in. What would such a military unit look like? It would need to be a tier 3 outfit, albeit a very unorthodox one which is organised like a terrorist cell (to prevent infiltration), and yet still capable of being assembled to brigade strength. It would need all the equipment and supplies used by a mechanised brigade, deliverable within 48 hours to any potential hot spot around the world. This last requirement might only be satisfied by a small fleet of Walrus airships, which can each carry more than 500 tons of cargo (see the airlift sketch below). Due to the unknowns involved in how a superintelligence might cripple a civilisation which it sees as a threat or competitor, these task forces should be capable of operating amidst mass chaos, i.e. total communications disruption, mass human casualties and displacement, and even active blockading by the transapient itself (as it may perceive the threat posed by them). How would the task force shut the artifint down? By tracking down its computing mainframe and destroying it. This may be no simple task.
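Before turning to the mainframe problem, the 48-hour deployment requirement deserves a quick feasibility check. The sketch below is mine, and all the figures in it except the ~500-ton Walrus payload (which comes from the text) are invented placeholders for illustration.

```python
# Rough airlift feasibility sketch for the 48-hour deployment requirement.
# Every figure here is an illustrative assumption, not doctrine: brigade
# tonnage and airship performance vary widely by source.

BRIGADE_TONS = 10_000        # assumed total lift for a mechanised brigade
AIRSHIP_PAYLOAD_TONS = 500   # per-sortie payload (the Walrus design goal)
CRUISE_KMH = 150             # assumed airship cruise speed
DISTANCE_KM = 8_000          # assumed worst-case distance to a hot spot
WINDOW_H = 48                # deployment deadline in hours

sorties_needed = -(-BRIGADE_TONS // AIRSHIP_PAYLOAD_TONS)  # ceiling division
one_way_h = DISTANCE_KM / CRUISE_KMH                       # transit time, hours

print(f"Sorties required: {sorties_needed}")                # 20
print(f"One-way transit: {one_way_h:.0f} h (deadline: {WINDOW_H} h)")  # ~53 h
# Under these assumptions a single 8,000 km leg alone blows the 48-hour
# window, so the fleet of ~20 airships would need to be pre-positioned
# regionally rather than surged from a single home base.
```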
A one-kilogram nanocomputer, using rod logic, could contain 10^12 CPUs each operating at 1000 MIPS, for a total of 10^21 (a thousand billion billion) operations per second, and it would occupy a mere one cubic centimeter of space. Note that rod logics are the nanotech equivalent of vacuum tubes (circa 1945), or rather, Babbage's Analytical Engine (circa 1830). Electronic nanocomputers would be substantially faster and smaller... And again, this is aside from the other difficulties that the military will face in confronting the transapient, as they will have to penetrate into areas which are infested with sensors, weapons emplacements, mechanised sentries, killer drones, etc. If the superintelligence is aware of their intentions, it might also employ decoy mainframes to distract them and divide the SF efforts. If it decides to construct buildings of its own design, with no requirement to conform to a human morphology, then these structures will be nearly impossible to breach or capture *2. That means that bunker busters will have to be loosed off, which carries a risk of collateral damage (especially when considering what unknown contents may leak out of the buildings!). What's more, there remains a clear possibility that some of the military branches involved in this mission may be unable to come to grips with the artifint and its robotic army: if there is a preponderance of monofilament wire, that would severely reduce the repertoire of dismounted infantry. If effective anti-aircraft lasers are at its disposal, it would be able to inflict murderous attrition on CAS planes, and restrict the role they play as well. Destroying hostile AGI is hard!
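As a sanity check on those figures, here is the arithmetic spelled out in Python. The 10^12 CPUs and 1000 MIPS come from the paragraph above; the ~10^16 ops/s human-brain estimate is a commonly cited ballpark I am assuming for comparison, not a settled number.

```python
# Back-of-the-envelope check of the rod-logic figures above.
# Assumed: 10^12 CPUs at 1000 MIPS each, and a ~10^16 ops/s
# human-brain estimate (a rough, contested ballpark).

CPUS = 10**12                        # CPUs in the rod-logic block
MIPS_PER_CPU = 1000                  # millions of instructions per second, each
OPS_PER_CPU = MIPS_PER_CPU * 10**6   # = 10^9 instructions/second per CPU

total_ops = CPUS * OPS_PER_CPU       # = 10^21 ops/s, a thousand billion billion
BRAIN_OPS = 10**16                   # assumed human-brain equivalent

print(f"Total throughput: {total_ops:.1e} ops/s")
print(f"Human-brain equivalents: {total_ops / BRAIN_OPS:.0e}")  # ~10^5 brains
```

In other words, even this deliberately primitive mechanical design would, on these assumptions, pack the rough equivalent of a hundred thousand human brains into a sugar-cube-sized volume, which is why hunting the mainframe is so hard: there may be very little to find.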
But to be fair, this scenario operates from a number of assumptions which may not turn out to be probable in the real world. As an intelligent reader could probably determine, all the signs which pointed to the presence of a superintelligence (and alerted the authorities) were obvious actions pre-disposed to violence. Again, that may not be the case in reality. This text is only a rudimentary attempt to frame the issue of interactions between baseline humans and superintelligences, and to recommend some obvious guidelines to follow for specific situations. If the entity in this scenario had decided to act in a more stealthy manner, and conceal its existence from civilisation, it could have amassed considerably more power for itself, which would require a small war to be waged in an effort to remove it. A transapient in control of a nation state would be a formidable foe indeed. This would require the mobilisation of conventional military forces to manage, escalating into a regional conflict that would likely incur high casualties. They would need to act according to very specific guidelines to have a high probability of success, and not be led down inappropriate avenues of response. Currently, there are no published documents from any source which might give us a clue on how to do this. This illustrates the dangers of devoting our limited attention economy solely to approaches like friendliness programming, as the Singularity Institute and the Future of Humanity Institute do. But whose responsibility is it to draw up such ideas? Perhaps it is a matter that should be taken under the defense department's wing.
In closing, any individuals associated with the singularitarian movement must not be lured into the false notion that FAI (and the Sysop scenario that results from it) is the only answer to the dangers posed by randomly awakening superintelligent agents. Sysop is the desirable end state that we wish to see civilisation settle into, and the complete answer to long-term security and prosperity for all sophont life: but this dream cannot be realised if we are interrupted in the middle of programming our would-be singulant! In a perfect world, the being to initiate the singularity would be our guardian FAI. In a perfect world, the primogen to breach the gates of heaven would be a superintelligent, superbenevolent being of binary brilliance. But we do not live in such a perfect world. Not yet. A rigorous approach is needed in the interim to contain hostile AGI or cybernetically enhanced humans. The current overemphasis on friendliness programming is an alarming fad which has caught on with even the most knowledgeable and respected of singularitarians. Yes, it's great that they have managed to avoid all the other bouts of wishful thinking and oxymorons that run rampant amongst well-meaning people like Ray Kurzweil, Jeff Hawkins, and Douglas Hofstadter *3. But still... Such one-dimensional thinking limits our ability to respond to a wide spectrum of threats, lowering our chances of crossing the great event horizon. We must make it our personal obligation to secure man's destiny against the double-edged sword of accelerating change.
*1 http://intelligence.org/files/AIPosNegFactor.pdf (Threats and Promises)
*2 http://www.goingfaster.com/term2029/musings.html (The Architecture of SKYNET)
I enjoyed the post. It was definitely good at covering many of the basic things we should think about regarding what needs to be done if an AI is not friendly. Your mentioning plans for a counter to an alien invasion got me thinking: I wonder if we've ever been visited by transapients already?
Haha. Hey Phil, I'm glad you could make it. Welcome to the blogosphere!
'Definitely wanna see more, maybe some follow-up on some specific actions you think should be taken in such military operations to remove an AI or such.'
Hmm, well I guess there is a follow-on of sorts in the works. I'm planning to make a video which expands on this primer, and will go into greater detail about how to wage war against superintelligent agents. There's a catch, though: my strategy is only really useful against an indifferent (rather than genuinely hostile) transapient. After all, those are the only agents which would neither recognise nor care about the threat posed by my multi-role brigades: if they WERE actually hostile, then we could expect them to launch a pre-emptive strike against our units, so as to secure their own safety.
If we were to enlist the help of another transapient -one which is close to the level of the aggressor- however, then fighting against actual hostiles and winning becomes a real possibility. I do tend to believe that the singularity will involve multiple agents (for reasons which would consume a whole blog post by themselves), so enlisting the help of a superintelligence is of paramount importance. Due to the bizarre goal systems these entities could have, this could be a very tricky process, and it requires some kind of guide stone on how to acquire an ally. Rest assured though, this topic will also receive some dedicated thought!
The other caveat is, my methodology is only geared towards confronting first-stage transapients. I do not believe it is possible to fight against a second stage, because of its even greater intelligence, as well as its commensurately greater R&D capabilities and resource stocks (which are, after all, required for it to ascend to that stage). At best, such an attempt would be reminiscent of Operation Unthinkable, Britain's plan to invade Soviet Russia in the mid-'40s, with or without the aid of the United States. At worst, it draws parallels to an ant hive attacking a human armed with insecticide spray.
The good news, however, is that it will probably take them quite a long time to work up to this. If you trust, as I do, that takeoff will be slow, then this process would require about two interference-free years to complete. Plenty of time for us to get in there and lay down a smackdown!
'Your mentioning plans for a counter to an alien invasion got me thinking: I wonder if we've ever been visited by transapients already?'
XD. Yeah, it's funny that you asked that. I remember reading somewhere that the government HAS secret plans for how to respond to an alien invasion. You've heard about Project Blue Beam, right? It was an experiment that was devised (never actually carried out, though) to measure and quantify what the human response to xenosophont contact would be... You and I both know what form such a response would take: panic, and overwhelming fear. Either of these is a major no-no. As I detailed in Rise of the Transapients, an undisciplined, knee-jerk reaction based on the fight-or-flight principle would inevitably result in the extinction of the human race.
BTW, what did you think of my links? I'm particularly impressed by the work done at the Terminator-themed site, goingfaster: when I started reading it back in '06-'07, it put a chink in my ideological armor, namely the notion that brute force could always outmatch mere smartness (a notion no doubt inspired by growing up watching Dragon Ball Z). The author's interpretation of how the relations between Skynet and its human controllers degraded in such spectacular fashion is definitely alarming. We can draw many lessons about what not to do in such situations.
I only skimmed the links, but they look to have some interesting things to read - some good theories on the process an AI would go through to reach the conclusion that humans should be killed. Definitely worth reading in full when I have some extra time.