Friday 18 April 2014

Computational incompressibility

Some of the things that go on in this mysterious reality of ours are totally beyond the ability of sapients to understand. Just as it would be impossible for a dog or cat to grasp the theory of relativity, it may well be impossible for a human to discern the true meaning of life. Even if god itself decided to step into our world and explain such things, the words would have no meaning to us. Some information is so densely encoded that it simply cannot be passed on from a higher toposophic to a lower. The superintelligent entities that populate science fiction will often say and do things which simply cannot be explained to baseline humans. One example of this is in the popular game Mass Effect, when Commander Shepard attempts to interrogate an AI named Sovereign and find out why it desires to exterminate all life in the galaxy. Shepard: “What do you want from us? Slaves, resources?” Sovereign: “My kind transcends your very understanding. We are each a nation, independent, free of all weakness. You cannot even grasp the nature of our existence.”

A conversation with Sovereign

Why would the game have a villain that can't explain its motives to the heroes? Isn't that just a lazy plot device to enable a giant battle? Well, no. In an example of fridge brilliance, the developers of Mass Effect had hit upon one of the major issues that will complicate relations between man and god: computational incompressibility. In a recent article [1], philosopher Paul Humphreys describes this as a behavioural facet which is underivable by a process simpler than whatever actualizes that behaviour. One manifestation of this is the inability of great apes and parrots (some of whom have a vocabulary of hundreds of words) to have a real conversation with their caretakers. These creatures may be able to vocalise and signal, but they cannot use language or any of the other human domain features. This is not surprising: Homo sapiens is the world's only general intelligence, which means we have a quantitative and qualitative superiority over all other animal species in terms of cognition. There may be near-equals in one or two categories, but none compete with us in all eight of the cognitive domains listed later in this piece.
 
A necessary simplification
   
It goes without saying, but intelligence isn't isotropic. This becomes very obvious when observing individuals with savantism, who exhibit peak human abilities in the realm of mathematics and aesthetics (particularly music), but are extremely deficient in all other areas of cognition. Many savants are not even able to dress themselves! This alone should be enough to cast doubt upon Charles Spearman's g-factor hypothesis: once you throw in the possibility that there may be entire toposophic realms above the human level, its usefulness as a universal intelligence test becomes null and void. Anyway... Computational incompressibility presents an anthropomorphic problem for us, in that even when we know that we are dealing with an alien mind far smarter than ourselves, we tend to underestimate what its true capabilities might be [2]. This tendency explains why so many people either disregard the dangers posed by a superintelligence, or assume that such creatures could easily be bargained or reasoned with.
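To make this concrete, here is a minimal sketch in Python (the names are my own invention, not Humphreys' formalism). It leans on Stephen Wolfram's closely related notion of computational irreducibility: for a system like the Rule 30 cellular automaton, as far as anyone knows, nothing predicts the state at step n faster than actually running all n steps.

# A minimal sketch of computational incompressibility, with the
# Rule 30 cellular automaton as a stand-in: no known formula yields
# the state at step n faster than simulating every step before it.

def rule30_step(cells):
    """Advance one generation: new cell = left XOR (centre OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

def state_after(steps, width=64):
    """Run the automaton for `steps` generations from a single live cell."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = rule30_step(cells)  # no shortcut: each step must be computed
    return cells

if __name__ == "__main__":
    print("".join("#" if c else "." for c in state_after(20)))

On this view, a lower toposophic trying to predict a higher one faces the same wall: the cheapest description of the behaviour is the behaviour itself.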
 
 
[1] http://philpapers.org/rec/HUMCAC

[2] http://www.wjh.harvard.edu/~lds/readinggroup/barrett1996.pdf

Video transcript

[Fighting and containing transapients, part 1. This video was released to YouTube on October 8, 2012. It was eventually removed on behalf of Fox Broadcasting, so the contents will be reposted in text format. An archived copy is available here]

A superintelligent intellect is one that has the capacity to radically outperform the best human brains in practically every field, including problem solving, brute calculation, scientific creativity, general wisdom and social skills. Such entities may function as super-expert systems that work to execute any goal they are given, so long as it falls within the laws of physics and they have access to the requisite resources. Sometimes, a distinction is made between weak and strong superintelligence. Weak superintelligence is what you would get if you could run a human intellect at an accelerated clock speed, such as by uploading it to a fast computer. If the upload's clock rate were a thousand times that of a biological brain, it would perceive reality as being slowed down by a factor of a thousand, and it would think a thousand times more thoughts in a given time interval than its biological counterpart. Unfortunately, no matter how much you speed up the brain of a creature like a dog, you're not going to get the equivalent of a human intellect. Analogously, there might be kinds of smartness that wouldn't be accessible to even very fast human brains, given their current capacities. Something else is needed for that. Strong superintelligence refers to an intellect that is not only faster than a human brain but also smarter in a qualitative sense. Something as simple as increasing the size or connectivity of our neuronal networks might give us some of these capacities. Other improvements may require wholesale reorganization of our cognitive architecture, or the addition of new layers of cognition on top of the old ones. When discussion of increasing one's smartness comes up, the question often arises: does intelligence progress linearly, exponentially, or both?
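To put a number on the weak-superintelligence case, here is a back-of-the-envelope sketch in Python. The thousandfold speed-up is the figure assumed above; the function name is purely illustrative, and the rest is simple arithmetic.

# Subjective time experienced by an upload running at an accelerated
# clock rate, relative to wall-clock time in the outside world.

SPEEDUP = 1_000  # clock-rate multiple assumed in the transcript

def subjective_days(wall_clock_days, speedup=SPEEDUP):
    """Days of thinking the upload gets per day that passes outside."""
    return wall_clock_days * speedup

if __name__ == "__main__":
    days = subjective_days(1)
    # One outside day buys the upload roughly 2.7 subjective years.
    print(f"1 wall-clock day = {days} subjective days (~{days / 365.25:.1f} years)")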
 
In other words, is intelligence something that is isotropic? Does it look the same when scaled up or improved? Current evidence suggests not, because if it did, then problems that realistically require one genius to solve should also be solvable by two or three non-geniuses, and that is clearly not the case. The only benefit that comes from having multiple thinkers on a subject is that each individual usually has a different viewpoint and a different specialism, and the group has more brute force to throw at the problem. That is where the synergistic effect of multiple minds coming together stems from. But clearly, this has a limit. The feats that can be performed by one person with an IQ of 180 cannot, in practice, be replicated by two people with an IQ of just 90. There are interesting examples of this phenomenon (the right genius in the right place). Dozens of philosophers pondered the paradoxes raised by Zeno of Elea. Some of the greatest minds in recorded history tried their hand at cracking them, but the paradoxes did not budge; they withstood two millennia of scrutiny. It was not until recently that a definitive answer was provided by Peter Lynds, in the paper "Time and Classical and Quantum Mechanics". This suggests a non-linear intelligence gradient. But why stop at the human level? After all, from a cosmic viewpoint, the difference in smartness between individual humans is tiny compared to the difference between a human and a primate, or a reptile, or an arthropod. Giant disparities in intelligence are what interest us here, especially given the task of repelling a hostile force of transapients.
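As an aside, the standard mathematical reply to the dichotomy paradox (a separate line of attack from Lynds' argument, which concerns the nature of instants) is that infinitely many sub-journeys can sum to a finite distance. A few lines of Python show the convergence:

# Zeno's dichotomy: the sub-journeys 1/2 + 1/4 + 1/8 + ... sum to a
# finite total, so traversing all of them is not an endless task.

def partial_sum(n_terms):
    """Sum the first n terms of the geometric series (1/2) + (1/4) + ..."""
    return sum(0.5 ** k for k in range(1, n_terms + 1))

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16, 32):
        print(f"{n:2d} terms: {partial_sum(n):.10f}")  # approaches 1.0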

This is because no matter how many primates you assemble, they will still not be able to perform the feats of a human, being unable to understand why three minus two equals one, or to utter René Descartes' famous philosophical statement. Let us posit a brand new theory of intelligence differentials. In a nod to Orion's Arm, it will henceforth be known as sophontology, a discipline which shall concern itself with hypothesising the natures of minds occupying all points on the great toposophic plane. What ought to be the main yardstick of this approach? One idea comes to mind. It will go under the name of domain thresholds. A domain is a landscape that encapsulates minds whose natures follow a certain pattern. This pattern corresponds to the kinds of thought that a being is capable of. For humans, this includes language, self-awareness, rationality, abstractness, theory of mind, object permanence, mathematics, aesthetics, and others. Domain features are the reason why it is impossible to say, for example, that humans are x times smarter than an animal: we simply possess cognitive abilities that they do not, which makes numerical comparisons impossible. That raises the question: how can a reasonable comparison be made between something which is present and something which is not? There is no clear answer to this. Suffice to say, domain features are the point on the chart where the incremental curve bends into the exponential. That has important ramifications in an intertoposophic war. After all, it is by definition not possible to compete with a being who exhibits domain features you lack. A reptile cannot compete with a human at arithmetic; it does not even have any concept of numbers.
  
Domain thresholds: Non-sapients occupy the 1st rung, sapients occupy the 2nd rung, while transapients occupy the 3rd rung
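To picture why such ratio comparisons fail, treat each mind as a set of domain features: the human set is the one listed above, while the reptile set is purely illustrative. A ratio needs a shared scale, and when one mind possesses entire domains the other lacks, there is none. A toy sketch in Python:

# Toy model: minds as sets of domain features. "x times smarter" only
# makes sense along axes both minds share; across a domain threshold
# there is no common scale, so no single number captures the gap.

HUMAN = {"language", "self-awareness", "rationality", "abstractness",
         "theory of mind", "object permanence", "mathematics", "aesthetics"}
REPTILE = {"spatial navigation", "threat response"}  # illustrative only

def times_smarter(a, b):
    """Return a scalar comparison only if both minds span the same domains."""
    if a != b:
        missing = sorted((a - b) | (b - a))
        raise ValueError(f"incommensurable: no shared scale for {missing}")
    return 1.0  # with identical domains, a quantitative comparison could be defined

if __name__ == "__main__":
    try:
        times_smarter(HUMAN, REPTILE)
    except ValueError as err:
        print(err)  # the comparison is refused, as the text argues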
  
There is every reason to believe that there are more domain features which the human archetype has not evolved to exploit. A superintelligence will be able to take advantage of this, and compete with us in behavioural categories to which we have no hope of responding. There are a number of historical precedents for this sort of thing. The most recent was the rise of the hominids. With a mere tripling in brain size, the descendants of Australopithecus were able to surpass all others and dominate the world: without object permanence or abstract thought, the animals that these ancient pre-humans competed against had no way to fight back strategically. They had no notions of area denial, of scorched earth, of distance weapons or physical traps. Why should we think the situation would be any different for us, going up against a band of transapients? In the Orion's Arm encyclopedia, a wide variety of such conflicts are portrayed in realistic fashion. One area in which OA is unique is that it features multiple stages of superintelligence, six in total, each more powerful and more foreign than the last. This notion has much in common with the concept of domain thresholds. Of particular interest is the encyclopedia's rejection of the so-called plucky baseline meme, which is the idea that ordinary unenhanced human beings can still give a good account of themselves in the face of overwhelming posthuman intelligence and firepower. The OA authors have deemed it impossible for any non-superintelligent individual or group to carry out the following actions:
  
  • Hack into an angelnet or transapient operating system (it is literally impossible for a baseline-equivalent to hack into even an SI:1 angelnet, no matter how lucky e might be)
  • Outwit or fool an angelnet (e.g. smuggle in weapons, commit a murder, perform an act of sabotage, or conceal one's position or motives)
  • Outthink or outperform a transapient on its own terms
  • Correctly operate non-baseline-friendly transapient tech (including weapons)
  • In any way comprehend or reverse-engineer transapient tech
  • Escape unaided from ahuman exploitation
  • Achieve any victory against a transapient "pest extermination team" sent to get rid of you
  • Outperform, beat, or overwhelm in a military manner and/or by superior force of numbers an individual or group of transapients whose job is to get rid of you
  • In any way harm an archailect
 
So, for the purposes of this discussion, the only way that a mere sapient can match a transapient is by emself becoming a transapient. A flatworm in a muddy pond cannot appreciate works of art, or understand general relativity. But if it evolves or is provolved to human equivalence, and becomes human, then it can. It would be ridiculous to have a "plucky flatworm" beating up a human, or out-performing one in literary criticism or university calculus, while still remaining a flatworm. But for a flatworm to evolve into a human also means it would no longer be the same being: it would be changed, totally, in every way; ascended and transcended beyond its original condition.