Sunday 7 December 2014

Bantokfomoki in space, part 2

In the first entry of this rebuttal, we went over two of the four videos released by bantokfomoki which attempt to refute the concept of relativistic space travel. Homing in on what he calls the apex reaction engine, the anti-matter drive used in the Avatar film, bantokfomoki (otherwise known as EBTX) sets forth a premise: ''If it cannot be made to work in principle, then no reaction rocket of any sort is suitable for interstellar space travel.'' Variations of this theme are restated numerous times throughout his video series. EBTX's argument is that in order to accelerate to fractions of light speed, the engines would emit so much heat that they would vaporise themselves and whatever ship they are attached to. It is the same observation made by Arthur C. Clarke in his book The Exploration of Space, and it holds a grain of truth.
 
While EBTX has indeed managed to show that the ISV Venture Star would be inoperable under the circumstances given in the movie, that does not imply it couldn't work under a different set of circumstances (even though he frequently insists this is the case). For instance, when the ship's velocity and acceleration are significantly decreased, the problems with venting waste heat become easier to manage, and mass ratios become more practical. As for why anyone should care about this subject: the starship depicted in Avatar is a viable merger between two of the best propulsion concepts known to science, Charles Pellegrino's Valkyrie and Robert Forward's photon sail. If they turn out to work in some shape or form, then humanity will have a real shot at becoming a spacefaring species.
 
  
Video #3
 
[1] If you lower anything at all that resembles an interstellar engine into the sun with a magic rope, and you pull it up 6 months later, all you will have is a loop in your magic rope with a knot... This goes for any materials anywhere in the universe. There are no chemical bonds that can indefinitely withstand solar temperatures without breaking.
 
This is true, but also somewhat irrelevant. EBTX seems to be confusing temperature with heat, which is a common mistake. The two are not the same: temperature measures the average kinetic energy of the particles, while heat depends on how much matter is actually present. For example, while the corona of the sun can reach a temperature of 1 million degrees Celsius, the gas has a density of just 0.0000000001 times that of the Earth's atmosphere at sea level. The thermal energy of a cubic meter of corona gas amounts to only a few hundredths of a joule.
 
[2] With that in mind, we can make some calculations about the Venture Star's anti-matter propulsion system. And because anti-matter is the most efficient fuel possible, we can be sure that no other type of engine would do any better.
 
That's hard to say for certain, since nobody knows what the ship's exhaust velocity is. There are many different kinds of AMAT (antimatter) engines on the books, and it's never stated which type is used in the movie.
 
[3] To proceed, we need to know what the diameter of the rocket nozzle is on the Venture. From the picture, we measure about 13.5 meters. The area of the hole is pi r squared, or 143 square meters. There's two engines, so that gives us about 300 square meters total.
 
The nozzle actually appears to be less than 13.5 meters across, going by this picture. Not that it really matters, given how collimated the matter stream is.
 
[4] The accumulated energy of the 125,000 metric ton Venture Star at 70% of light velocity is 2.75 × 10^24 joules. Dividing that by 15,768,000 seconds (that's six months) gives us an engine power output of 174 petawatts, which is about the solar power received by the Earth.
 
EBTX is rounding up the mass of the Venture Star so that the energy requirements for its propulsion come out equal to the solar energy received by Earth. This is a useful fiction which apparently makes the calculations easier to do, although it contradicts the mass of 100,000 tons he gave in earlier videos. (Note also that 2.75 × 10^24 joules is the classical kinetic energy; the relativistic figure at 0.7 C is roughly 60% higher.)
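For readers who want to check the arithmetic, here is a minimal sketch in Python (rounded constants; the 125,000 ton mass is EBTX's figure for this video):

```python
c = 3.0e8                      # speed of light, m/s
m = 125_000 * 1000             # EBTX's ship mass for this video, kg
v = 0.7 * c                    # cruise velocity, m/s
t = 15_768_000                 # six months, in seconds

ke_classical = 0.5 * m * v**2               # ~2.76e24 J, EBTX's figure
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
ke_relativistic = (gamma - 1.0) * m * c**2  # ~4.5e24 J, the stricter figure

print(ke_classical / t / 1e15, "petawatts") # ~175 PW average output
```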
 
[5] Now we want to know how much of the sun's surface area is devoted to lighting up the earth with its bountiful energy. The surface area of a sphere with a radius equal to the distance of the earth from the sun is 2.8 × 10^23 square meters. The area of the earth that blocks sunlight is pi r squared, where r is the earth's radius, at 6,371,000 meters. So the earth subtends 1.275 × 10^14 square meters. Dividing one by the other, and scaling the sun's own surface by that fraction, shows that the earth takes up... 2.73 × 10^9 square meters.
 
That's a long chain of numbers, but they check out. The missing step is that the earth intercepts about 4.5 × 10^-10 of the sun's output, and that fraction of the sun's 6.1 × 10^18 square meter surface comes to roughly 2,730,000,000 square meters. Another interesting note: the sun radiates 63 megawatts per square meter at its surface, and by the time this energy reaches earth, it has thinned out to just 1360 watts per square meter due to the inverse square law.
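The whole chain can be reproduced in a few lines; a sketch with rounded astronomical constants:

```python
import math

AU = 1.496e11        # earth-sun distance, m
R_EARTH = 6.371e6    # m
R_SUN = 6.96e8       # m

sphere = 4 * math.pi * AU**2          # ~2.81e23 m^2 at earth's orbit
blocked = math.pi * R_EARTH**2        # ~1.27e14 m^2 of blocked sunlight
fraction = blocked / sphere           # ~4.5e-10 of the sun's total output

sun_surface = 4 * math.pi * R_SUN**2  # ~6.1e18 m^2
patch = fraction * sun_surface        # ~2.76e9 m^2 'devoted' to earth

flux_at_earth = 63e6 * (R_SUN / AU)**2  # inverse square law: ~1360 W/m^2
print(patch, flux_at_earth)
```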
 
[6] So, for an interstellar ship like that envisioned in Avatar: if it had a mass of 125,000 metric tons and accelerated to .7 C in 6 months, with exhaust holes (from whence come the products of anti-matter annihilation) that are about 300 square meters total, this means that the energy output from those holes is approximately 9,100,000 times greater per unit area than the surface of the sun.
 
That's a very pessimistic stance to take. First, the engines are backed by heat radiators with a surface area at least 500 times greater than the nozzles. Dividing the 2,730,000,000 square meters of the sun's incident area by the 150,000 square meters of the ship's radiators gives a ratio of 18,200 fold, not the 9,100,000 fold EBTX claimed. Second, because the engines could have an efficiency rating of up to 99%, the actual waste heat that needs to be vented into space is decreased by two orders of magnitude. Unfortunately, that still leaves the radiators handling 182 times the flux at the sun's surface. Since radiated flux scales with the fourth power of temperature, that corresponds to roughly 182^(1/4) × 5778, a blistering 21,000 kelvin or so, far beyond any known material.
 
No matter how you twist the numbers here, one cannot help but conclude that the circumstances given in the movie are impossible to work with. 174 petawatts is far too much power for the Venture Star to safely cope with, since it would need to radiate many, many times more energy per unit area than the sun does! Luckily, there is a solution in sight. We can turn back to the parameters used in the first entry of this article, which specify accelerating for 60 months up to 7% of light velocity. By accepting a significantly longer travel time (67.5 years, to be exact), we decrease the ship's power draw by three orders of magnitude, down to just 174 terawatts. With this change to the mission parameters, the radiators need only handle 0.182 times the flux at the sun's surface, which by the same fourth-root scaling works out to roughly 3,800 kelvin. Still searing, but at least within shouting distance of what refractory materials might cope with. YMMV, of course.
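Here is a sketch of that temperature calculation under both mission profiles, assuming ideal blackbody radiators and a 99% efficient engine (both assumptions, not canon):

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
RADIATOR_AREA = 150_000  # m^2, the estimate from video #4
T_SUN = 5778             # K

for label, power in (("0.7c in 6 months", 1.74e17),
                     ("0.07c in 60 months", 1.74e14)):
    waste = 0.01 * power              # 99% efficiency assumed
    flux = waste / RADIATOR_AREA      # W/m^2 the radiators must shed
    temp = (flux / SIGMA) ** 0.25     # blackbody equilibrium temperature
    print(f"{label}: {temp:,.0f} K ({temp / T_SUN:.2f}x solar surface)")
```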
 
[7] Of course, if the venture star were much lighter, the energy output of its engines will be proportionately less. So that a ship 1/10th the mass will have an orifice energy output of only 910,000 times more than an area of the sun the same size as that orifice. And a ship lighter by 1/100th, or about 1250 metric tons, will have an orifice energy output of only 91,000 times greater than that of 300 square meters on the surface of the sun.
  
Our host is no doubt aware that decreasing the velocity by a factor of 10 will decrease the kinetic energy (and hence the energy needed to propel the ship) by a factor of 100. And since the ship is going to be in transit for a longer period of time, its acceleration can also be decreased by a factor of 10 without any penalties. This neatly circumvents the need to play around with the ship's mass, which wouldn't work anyway: even if you build a miniature Venture Star, the radiating surface area shrinks along with the volume. Power scales with mass, but area only scales as the two-thirds power of mass, so the heat flux improves far too slowly to make such a strategy worthwhile.
 
[8] If you understand that an engine cannot in principle sit submerged under the surface of the sun for 6 months without being destroyed, you can also understand that an engine cannot last 6 months in an environment that's millions of times hotter still. The case is closed for anti-matter engines that propel a ship to .7 C in a mere six months.
  
While that may be strictly true, it doesn't imply that humanity will never achieve interstellar travel using different speed regimes. As demonstrated in this article, a velocity of .07 C reached after 60 months keeps mass ratios and heating problems to a minimum. It is a TALL order to get to nearby stars with currently known technologies, but it can be done. Not this century, and maybe not even the next, but someday...
 
 
Video #4
 
 
[9] So the engines must put out energy equivalent to all the energy coming out of the surface of the sun from an area of 2730 square kilometers, and it must do so for 6 months to get to 70% of light velocity. But in fact, that was being overly generous. The engines must actually output over twice that energy, or 18,200,000 times the sun's output per unit area.
 
Twice your original estimate? What will you pull out of the magician's hat next?
 
[10] In the rocket venue, at the beginning of acceleration, nearly all the energy is wasted out the rear exhaust, with a small amount being invested in the kinetic energy of the payload. At the end of the acceleration, if that final velocity is some reasonable fraction of light velocity (and the exhaust velocity approximates light velocity), much more energy is deposited in the craft and less in the exhaust, simply by the nature of the energy bookkeeping process.
  
EBTX is describing how a rocket's propulsive efficiency varies with speed: at low speed, nearly all of the engine's energy leaves in the plume, while as the ship's velocity approaches the exhaust velocity, most of it ends up in the payload instead. (This is closely related to the Oberth effect, where burning propellant while moving fast, such as deep in a gravity well, extracts more useful work from it; a bit like gaining a speed boost by driving down a steep hill.) Whether or not a ship benefits depends on how high its delta-v is: slow ships don't get to take advantage of this, but fast ships like the Venture Star do. Either way, the effect cannot be construed as something which forces starships to output more energy in order to attain their cruising speed; if anything, it is the opposite.
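The bookkeeping EBTX describes is easy to make concrete. For a classical rocket, the instantaneous fraction of engine power that ends up in the vehicle rather than the plume is 2(v/ve)/(1 + (v/ve)^2), peaking at 100% when the ship's speed equals the exhaust speed. A sketch:

```python
def propulsive_efficiency(v, ve):
    """Fraction of expended energy deposited in the vehicle (classical)."""
    r = v / ve
    return 2 * r / (1 + r * r)

for frac in (0.01, 0.1, 0.5, 1.0):
    print(f"v = {frac:>4} * ve -> {propulsive_efficiency(frac, 1.0):.1%}")
# 2.0%, 19.8%, 80.0%, 100.0%: fast ships waste far less energy in the plume
```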
 
[11] Because temperature is related to energy output in a star as the 4th root, we can estimate the Avatar engine's temperature by its output at 18 million times the solar output. The 4th root of 18,200,000 is 65, while the surface temperature of the sun is 5778 kelvin. So the internal engine temperature of the Venture engine would be 65 times 5778, which is equal to 375,570 kelvin.
  
If you completely ignore the role played by the heat radiators and engine efficiency, then yes, that would be a reasonable conclusion. The Stefan-Boltzmann law states that the total energy radiated per unit time by a blackbody is proportional to the 4th power of its temperature and to its surface area. So what happens when the surface area changes? Will halving the radiating area double the temperature? No: for a fixed power output, halving the area doubles the flux, which raises the equilibrium temperature by only the fourth root of 2, about 19%. The devil is in the details!
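A toy example with assumed numbers makes the scaling concrete:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(power, area):
    """Blackbody temperature needed to radiate `power` from `area`."""
    return (power / (area * SIGMA)) ** 0.25

P = 1.0e12  # an arbitrary 1 TW heat load
print(equilibrium_temp(P, 75_000) / equilibrium_temp(P, 150_000))
# -> 2**0.25 = 1.19: halving the area raises the temperature 19%, not 100%
```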
 
[12] There are only a couple of things that can happen in the engine context when a proton and anti-proton annihilate. Of the two gamma rays formed, one must exit out the exhaust port; that's the wasted half. The other gamma ray can then go through the engine compartment, unimpeded, in which case it is useless as propellant.
 
EBTX neglects to mention that two-gamma annihilation is characteristic of electron and positron collisions. When the protons and anti-protons themselves annihilate, they mostly produce neutral and charged pions travelling at relativistic speeds; the charged pions can be magnetically deflected to produce thrust, while the neutral ones decay almost instantly into gamma rays. There are various efficiency losses associated with this process, since the charged pions have short half-lives and decay into muons and neutrinos, but they are not crippling. It's worth mentioning also that a proton is 1836 times heavier than an electron, and hence carries far more of the reaction's energy, so this commentary on gamma rays comes off as misleading.
 
[13] Or it can collide with the engine compartment and thereby plasmify whatever it hits, thus contributing to the propulsion of the ship at the expense of its disintegration. Or, it can collide with the engine compartment and somehow reflect back out the rear exhaust hole, thus giving up the maximum energy of the ship without destroying it.
 
Again, protecting the craft from ionizing radiation is not a major dilemma. We already know how to build efficient shields using a sheet of tungsten with a v-shaped cross section to deflect neutrons and X-rays (although gamma rays are a tougher proposition). In addition, some regions of the Venture Star were specifically described as using almost no metal, so as to reduce the possibility of gamma and X-rays ablating the hull and producing secondary radiation.
 
[14] A fair approximation for the Venture is a cylinder around each radiator. The radiators appear to be about 300 meters long and maybe 80 meters wide, so the effective radiative surface is about pi times 80 times 300 times 2 radiators, which equals about 150,000 square meters. The maximum temperature they can glow at is the temperature of the surface of the sun; that is, they glow white hot when running for six months. Don't ask how they glow without melting, we'll just give them that.
  
That's a rather conservative estimate of the area of the radiators, but even so, the ship would never reach such high temperatures (assuming that the measures adopted here are viable). EBTX tries to create the illusion of impossibility by focusing on the individual problems faced by each system, and ignoring how these systems work together. If you study things in isolation, then of course the individual parts will seem absurd and inadequate. In a combustion engine, pistons are impractical without a radiator to carry away the heat, and pointless if you don't have spark plugs to ignite the fuel.
  
[15] So the engine energy output is equivalent to 2730 square kilometers on the surface of the sun, or 2,730,000,000 square meters, while the radiators give off the sun's energy at 150,000 square meters. This means that the efficiency of the engine is, minimally, just about theoretically perfect. For every erg wasted as excess heat in the engine, 18,199 ergs go directly into the kinetic energy of the ship. Wow.
  
If anyone really could design the Venture Star that way, with an engine that wastes only one part in 18,200 of its output as heat, a perpetual motion machine of the 2nd kind would practically be in order.
 
 

This rebuttal should wrap up most of the loose ends raised by EBTX and get the ball of critical thought rolling again. All in all he made some valid observations, and mapped out a lot of unexplored territory, to the benefit of his audience. With that said, it's interesting to note which problems didn't warrant a mention from him. Most obvious is the unimaginable cost of the energy required just to power the ship out of the solar system. Even if we meter the energy usage at the same rate as BC Hydro, which is 11.27 cents per kilowatt-hour, the Venture Star still has a power draw of 1.4 × 10^17 watts. This means that the cost of powering it for the 1st hour alone comes out at a staggering $15.8 trillion, roughly the annual GDP of the United States! If you were to adopt the more modest speed regime of .07 C, that still leaves the bill at $15.8 billion for the first hour, and close to $700 trillion over the full 60-month burn. The economy would need to grow by orders of magnitude before such an undertaking could even begin to be considered.
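A sketch of the billing arithmetic (BC Hydro's 2014 rate; the power figures come from the calculations above):

```python
RATE = 0.1127      # BC Hydro's rate, $/kWh
J_PER_KWH = 3.6e6  # joules per kilowatt-hour

def bill(watts, seconds):
    return watts * seconds / J_PER_KWH * RATE

print(bill(1.4e17, 3600))         # first hour at the 0.7c profile: ~$1.6e13
print(bill(1.4e14, 3600))         # first hour at the 0.07c profile: ~$1.6e10
print(bill(1.4e14, 157_680_000))  # the entire 60-month burn: ~$6.9e14
```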
 
Here's another problem that EBTX didn't foresee. A natural consequence of the ship's tensile truss is that it puts the crew habitat behind the engines, which means the exhaust flare passes within 100 meters of it on both sides. For some idea of the dangers of this engine plume, consider that a welder emits light at an intensity of around 50,000 lux (enough to damage the retina if observed for more than a few seconds), whereas the exhaust flare was described as being 'an incandescent plasma a million times brighter than a welding arc, and over thirty kilometers long.' The further an object juts out from the centerline of the truss, the more radiant heat it will be exposed to. There was no excuse not to provide the crew habitat with a cone-shaped canopy for protection. Another problem is that the Whipple shield stack can only offer coverage to the Venture Star during the departure to Alpha Centauri: when the ship returns to Sol, the shield is forced to drag behind it, leaving everyone exposed to micrometeorites.
  
Six months is a helluva long time to remain unprotected while your ship is getting up to 70% of light velocity! There is a throwaway mention of the shield stack being detached and moved by thrusters onto a vector ahead of the Venture Star, but this method is only used after cruise speed has been reached. Theoretically, you could just bolt it onto the front of the ship, but there doesn't seem to be a convenient spot where it could be mounted. There is another reason why the shield stack cannot be left at the back: it is so large that even with the engines canted several degrees outward, the exhaust plume cannot help but wash against it! The dangers presented by this should be obvious. As one source put it: 'Melt isn't the proper word for what happens to a solid substance bombarded by a relativistic particle stream. Spallation is more like it. Chemical bonds simply aren't strong enough to prevent relativistic particles from stripping away affected atoms.'
 

Friday 31 October 2014

Bantokfomoki in space, part 1

This is a response to the series of videos released by bantokfomoki on the subject of space travel as depicted in the Avatar movie by James Cameron. Bantokfomoki has made four such videos so far, all centered on the performance of the starship known as the Venture Star. There is a lot of good science in them, but they suffer from a certain absolutism. The message he consistently tries to push is that without some kind of reactionless drive, travel to other stars within a human life span is fundamentally impossible. Bantokfomoki is better known by his website EBTX, which stands for Evidence Based Treatment X. He is an expert generalist able to contribute insights on a variety of different topics. With regards to the Avatar film, though, he may have jumped the shark.
  
Bantokfomoki (who we'll call EBTX from now on) released his latest video on October 4th, 2014. Referring to the Venture Star's anti-matter engine, he lays down a premise: ''If it cannot be made to work in principle, then no reaction rocket of any sort is suitable for interstellar space travel.'' Geesh, talk about throwing down the gauntlet. EBTX assumes that because this one starship may not be feasible under one set of mission parameters, it cannot be feasible under any other set of parameters. That's going to create a lot of problems for him down the road: even if the original specifications are at odds with the laws of physics, they can be altered and rearranged. Playing with the numbers until you get a desirable result is an integral part of what rocket science is!
  
Now, since this latest video builds upon the work of the others in his series, we're going to do a quick review of them to get up to speed. For the best outcome, readers should watch the videos in their entirety before going over the rebuttal here. The reason for EBTX's dismissal of relativistic space travel is that, in order to accelerate to fractions of light speed, the engines would emit so much heat that they would vaporise themselves and whatever ship they are attached to. He comes to this conclusion using a different approach in each video, building a cumulative argument that has befuddled most space-exploration advocates. We will check his work for consistency and see whether or not there are easy alternatives to this dilemma.
  
 
Video #1
 
[1] ...Travel to other star systems in time frames on the scale of a human life span is pure drivel.
 
That's a very strong conviction of his, as we will see.

[2] Let's first get a picture of what 4 light years really are. Imagine the sun is a pea on a plate in the middle of your dinner table. Then the earth will be a grain of salt at the edge of the table. Jupiter will be about as big as the printed letter o (lower case) in a newspaper on the wall of your dining room.
 
The sheer distances involved in interstellar travel are often underappreciated. The Swedes have a series of monuments which represent a scale model of the solar system, centered on the Ericsson Globe in Stockholm: even at a 1:20,000,000 scale, Neptune is located a breathtaking 229 kilometers away!
 
[3] Since the Avatar website doesn't specify a mass, let's assume a mass of 100,000 metric tons for the craft. That's the weight of a large aircraft carrier.
 
If EBTX is referring to its wet mass (i.e., after it is loaded with propellant), that seems like a pretty safe guess. The Venture Star was, after all, based on the design philosophy laid down by Project Valkyrie. You can tell because the engines are mounted on the front of the ship, while the payload is dragged behind it on a truss. This allows designers to use flimsier materials and skimp on radiation shielding, which results in a very lightweight craft weighing a fraction of what a conventional starship would. What we cannot say with certainty is what the Venture Star's dry mass would be...
 
[4] The kinetic energy of this craft at 70% of light velocity is 2.2 × 10^24 joules. One half of a year is 15,768,000 seconds. Dividing 2.2 × 10^24 joules by that gives us the average power output we need to get from our matter/anti-matter annihilation engine. This comes to 1.4 × 10^17 watts.
 
EBTX is going over the ship's mission profile, which specifies an acceleration of 1.5 g for half a year. This leads to a delta-v (change in velocity) of 210,000 km/s, and a voyage lasting 6.75 years from start to finish. His estimate of the energy consumption is an absolute minimum, since it does not factor in the various inefficiencies that come with reaction engines. With this established, he then moves to the crux of the matter.
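A minimal sketch of these figures, using the 100,000 ton wet mass assumed in this video:

```python
c = 3.0e8            # speed of light, m/s
m = 100_000 * 1000   # kg
v = 0.7 * c
t = 15_768_000       # half a year, s

ke = 0.5 * m * v**2  # ~2.2e24 J, matching the video
print(ke / t)        # ~1.4e17 W average power

# Sanity check on the published profile: 1.5 g held for six months
print(1.5 * 9.81 * t / c)  # ~0.77c ignoring relativity, so 0.7c is plausible
```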
 
[5] But let's suppose that our engine is unbelievably 99.99% efficient, and only 1/10,000th part of our energy will end up as heat to affect our engine. But 1/10,000th part is 1/6th of the energy released by a Hiroshima bomb per second. That amount of heat would destroy any conceivable engine in the first second.
 
EBTX doesn't provide any calculations on what the heat capacity of the ship might be, so he's basically just speculating here. We'll need to do some basic math in order to find out how much heat the engines can handle. Let's start off by assuming that the engines weigh 10,000 tons and are made of mere carbon steel. This substance has a specific heat of 490 joules per kilogram per degree, i.e., you have to input 490 joules to raise the temperature of one kilogram by 1 degree Celsius. If the temperature before ignition is 0 degrees Celsius, and the safe operating limit is 600 degrees Celsius, this gives the engines a heat capacity of 2.94 × 10^12 joules. Unfortunately, the heat load from the engines (even at 99.99% efficiency) is about 1.4 × 10^13 joules per second, far too much to handle. It would seem that EBTX has a valid point. But wait, he didn't merely say that the Venture Star was unworkable under the circumstances given in the movie: he made the much stronger claim that it was unworkable under any circumstances whatsoever! According to him, a journey from Sol to Alpha Centauri in a human lifespan is out of the question. Well, let's put that assumption to the test.
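A sketch of that heat budget (the 10,000 ton engine mass and carbon steel are the assumptions stated above):

```python
mass = 10_000 * 1000      # kg, assumed engine mass
c_p = 490                 # J/(kg*K), specific heat of carbon steel
delta_T = 600             # K, from 0 C to a 600 C operating limit

capacity = mass * c_p * delta_T   # ~2.94e12 J of thermal headroom
waste = 1.4e17 * 1e-4             # ~1.4e13 W of heat at 99.99% efficiency
print(capacity / waste)           # ~0.2 s until the engines saturate
```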
  
What we have to do is reduce the velocity of the ship to something more realistic. Instead of .7 C, let's try just .07 C, which is 21,000 km/s. By decreasing the velocity by a factor of 10, the kinetic energy (and hence the energy needed to propel the ship) is decreased by a factor of 100. Since the ship is going to be in transit for a longer period of time, we might as well reduce its acceleration by a factor of 10, too. Instead of a 6-month engine burn, the Venture Star will be accelerating for 60 months. So now, the total energy budget of the craft has been reduced to 2.2 × 10^22 joules, delivered at a rate of 1.4 × 10^14 watts. That's more like it. And while we're at it, we'll need to ditch the 99.99% efficiency rating EBTX suggested (as a reductio ad absurdum) and determine a more plausible rating. Anti-matter engines are normally credited with an efficiency of over 90%, but when you throw room-temperature superconductors into the equation, you can probably get up to about 99%. If so, that implies a total heat transference of about 1.4 × 10^12 watts. That's low enough that we don't need to worry about the engines melting, especially when they are being assisted by the radiator panels. Voila, problem solved!
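And the same bookkeeping for the gentler regime (99% efficiency assumed):

```python
c = 3.0e8
m = 1.0e8             # kg
v = 0.07 * c          # 21,000 km/s
t = 10 * 15_768_000   # a 60-month burn, s

ke = 0.5 * m * v**2   # ~2.2e22 J, 100x less than the 0.7c case
power = ke / t        # ~1.4e14 W
waste = 0.01 * power  # ~1.4e12 W at 99% efficiency
print(ke, power, waste)
```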

[6] In point of fact, an anti-matter propulsion system is simply ludicrous for high-g accelerations or even moderate ones, because though propulsion by means of electromagnetic radiation alone is the most efficient possible, it does not provide the acceleration produced by throwing out gobs of matter at high speeds.
  
That's quite true, which is why the acceleration phase should be drawn out as long as practical. AMAT engines cannot provide nearly as much thrust as chemical or nuclear engines, and they must be given as much time as possible to get a starship up to speed. 60 months is better than 6, in this case. But it's curious that EBTX limits his criticism to the Venture Star's engine (which is only used to decelerate upon reaching Alpha Centauri, and then to accelerate back to Sol), and not to the titanic laser batteries that propel it at the same rate. Does he even know that the ship uses a photon sail?
 
[7] There is no plan for going to other star systems, under the presently known conservation laws, in a human lifespan time frame that is anything but idiotic.
 
Such an optimist we are :o)
 
   
 Video #2
 
This (silent) video starts off from a shaky premise. Whereas his previous estimate for the Venture Star's wet mass was a slim 100,000 tons, EBTX tries to up the scale by comparing it alongside an aircraft carrier. Bizarrely, he suggests that the radiator panels each have as much mass as the ocean-going behemoth, whereas the two engines combined are only as massive as one aircraft carrier. Who knows how he came to that conclusion... Adding in the propellant tanks and the tensile truss, he arrives at a final mass of 500,000 tons. But wait, that's not all: EBTX actually has the gall to claim that this is merely the ship's dry mass! There are so many things wrong with that, it's hard to articulate. An aircraft carrier is a dense lump of metal that floats on water via buoyancy, whereas the Venture Star is just a spindly rope of carbon nanotubes and containers flying through space. The designs have nothing in common, and it's obvious he is just trying to inflate the ship's mass. Complementing this deception, EBTX then uses a hopelessly backward method to try and determine how much propellant is needed to accelerate the ship up to 70% light velocity.
 
Using an exhaust velocity of .693 C (a figure seemingly made up on the spot) and a dry mass of 500,000 tons, he concludes that 500,000 tons of propellant would be required to accelerate the ship to .7 C. But if you input all the numbers EBTX used into the rocket equation, this bloated ship would only reach a final velocity of .48 C. A word of caution: in order to correctly use Tsiolkovsky's rocket equation, you need to know the ship's mass ratio and exhaust velocity. You can't just pull numbers out of your a$$. If we return to our previous assumptions of a wet mass of 100,000 tons and a delta-v of 7% light velocity, we can proceed with far more clarity than our host. Using an exhaust velocity of 59,000 kilometers per second (which is what the Valkyrie engine was supposed to max out at), we get a dry mass of about 70,000 tons. This means that just 30,000 tons of matter/anti-matter propellant are required to get underway, a very comfortable mass ratio that allows lots of safety room. This is important because some of the byproducts of a matter/anti-matter reaction cannot be redirected through the engine chamber, and hence cannot be used to generate thrust.
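Both results fall straight out of the classical rocket equation; a sketch:

```python
import math

def delta_v(ve, wet_mass, dry_mass):
    """Tsiolkovsky's rocket equation (classical form)."""
    return ve * math.log(wet_mass / dry_mass)

c = 3.0e8

# EBTX's numbers: 0.693c exhaust and a mass ratio of 2 -> only ~0.48c
print(delta_v(0.693 * c, 1_000_000, 500_000) / c)

# This article's numbers: 59,000 km/s exhaust, 0.07c target, 100,000 t wet
dry = 100_000 / math.exp(0.07 * c / 5.9e7)
print(dry)  # ~70,000 tons, leaving ~30,000 tons for propellant
```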
  
 

Friday 24 October 2014

Americanised Canada

This has been a really lousy week for Canada. In the wake of the shootings on Parliament Hill, this nation has adopted exactly the posture it shouldn't have. Instead of breathing a sigh of relief that only one person was killed*, Canadians are puffing themselves into a panicked frenzy, debating whether or not police should be given the authority to detain people without probable cause. The prime minister wants to have an escort of RCMP wherever he goes. And some dip$hits are actually surprised that Michael Zehaf-Bibeau managed to get his hands on a rifle, in spite of his prohibition from owning firearms. Christ, how many times do these people need to be told? Firearm laws only apply to law-abiding citizens: criminals pay no heed to them. A gun ban wouldn't work even if every firearm in existence (including those used by the state!) were tracked down and destroyed, because the knowledge and machinery needed to create them is universal. Small engine mechanics, metal fabricators, and plenty of other careers require an understanding of the same principles upon which firearms are based.
 
In order to enforce real gun control, we would need to revert to pre-industrial times, which would make us easy prey for more militaristic nations. So please, liberals, shut up about firearms already. There is nothing you can do about it but accept the fact that citizens have the right to self defense, and that people will occasionally die as a result of this. Canada needs to take a gulp of fresh air. We have adopted the American belief that turning towards a police state will make us safer, and that achieving this safety is worth the cost of relinquishing our civil liberties (well, the few liberties that remain after 9/11). This is a preposterous notion and it needs to be shot full of holes. As for the Parliament Hill shooting being perpetrated on behalf of ISIS, who cares? It was a minor incident on behalf of a minor terrorist organisation on the other side of the planet. Anyone with two brain cells to rub together can see that this is not exactly on our list of national priorities. And yet Stephen Harper (who was calmly drinking wine during the 'crisis') is now attempting to use the shooting as a pretext to expand government power. Like it or not, it seems we're on a dangerous path to Americanisation.
 
*Chrissy Teigen had it right when she said: ''Active shooting in Canada, or as we call it in America, Wednesday.'' Better amend that to Friday, too.

Friday 10 October 2014

The convergence: Existential threats to mankind

Three years ago, I released a video on YouTube which told of a monumental threat that civilisation would someday have to confront. This threat is a series of man-made and environmental disasters which will overlap and amplify into something resembling a great filter. I was supposed to have followed up this warning with a full length video, but was unable to do so due to a serious illness. Before I knew it, 2011 rolled into 2012, then 2012 rolled into 2013, etc. The mistake I made was assuming that men more educated than myself would be able to connect the dots, create a hypothesis, and get it out to the public. Suffice to say, that didn't happen. It is now my challenge to try and explicate the complex nature of this crisis, at a time when global warming has totally ensnared the world's attention economy. Please bear with me!
 
Now, exactly what is the convergence? It is a confluence of more than a dozen existential threats which not only exist, and continue to worsen, but will peak in intensity sometime in the 2030s. Their relative dangers and receptivity to change vary. Some of the threats have quite minor effects, but are hard to fix, and dangerous because they exacerbate and amplify the other threats (like how overpopulation forces us to find more sources of energy and food, or how pollution ruins arable land and makes it harder to create that extra food, etc). Other threats would have devastating effects if left to build up, but can be quite easily fixed before they reach critical mass. Each of these tendrils acts as a stress point in the foundations of our global society: a fissure in one can quickly spread to others, leading to an unpredictable domino sequence.
 
In this article, I will content myself simply with naming the specific threats, and describing the backgrounds of the more nefarious ones. It would also be useful to have a simple typology through which we can classify their nature. For now, this can take the form of three categories. #1 will determine the disaster's potential for collateral damage. #2 will determine whether or not countermeasures are practical against it. #3 will determine whether or not the threat is an adjuvant (i.e., whether it negatively affects the other disasters). What I will not do is attempt to offer a comprehensive solution to these looming disasters. Because the connections and interplay between these threats are not well understood, I would also caution anyone who thinks they can enact treatments in isolation: that would be like trying to stop a volcanic eruption by plugging up the holes to the surface!
 
 
Global warming
According to the EPA, average global temperatures are expected to increase by 2 to 11.5 degrees Fahrenheit by the year 2100. Depending on how high the temperature ceiling is, this would aggravate storm systems, raise sea levels, and damage ecosystems. Government agencies allege that this will happen because of the sheer quantity of carbon dioxide being pumped into the atmosphere by gasoline and diesel engines, and that the only way to prevent this is through a carbon tax. While scholars like David M.W. Evans have managed to punch holes in their climate model, and prove that it is inconsistent with data collected by the ARGO system, that only serves to debunk some of the more hysterical claims put forward by the global warming alarmists: this includes the belief that the Earth's climate system has already passed an insurmountable tipping point, or that the rate of warming is faster than at any point in Earth's history.
 
There are other problems with the AGW narrative, at least with regards to how it will be solved. According to Ryan Dawson: 'The carbon tax and the climate controls and security bill, regardless of what you think about global warming, doesn't do a thing to prevent it. All it does is divide up greenhouse gases into allotments which can be traded and sold and even invested in by third parties, just creates a market environment which lets the larger government subsidized agrobusinesses to gobble up allotments from the smaller farms and ranches and create tighter virtual monopolies. It also put control of industry into the hands of a small group of government hacks and threatens property rights all under the guise of being green. The only kind of green here is envy and money.' Carbon taxes are the wrong answer to this problem. We should pay more attention to Bjorn Lomborg and his studies on marine cloud whitening, which could mitigate global warming for a relatively small price tag.
 
Depletion of arable land
 
Peak water
 
The misandry bubble
According to one theory, feminism (which had its roots in the 'free love' movement of the 60s) is a social virus which destroys the marriage system. Nuclear families are formed by beta males and their women, who enter into marriage chaste. It is a very stable unit which caters to the needs of the man, the woman, and their children. All advanced, patriarchal societies depend upon the nuclear family: since the whole household works together to support the man, the man can devote his full effort to the job. This high productivity is what has allowed the west to outcompete all other cultures, and the marriage system itself is airtight. The only vulnerable link is the young woman herself, often before she becomes sexually active. Girls are told that it's okay to delay marriage and engage in casual sex, and that this will not have a negative impact on their future. Television paints an unrealistic ideal for them to emulate, and sure enough... women get stuck in the lifestyle of sleeping with men whose only lot in life is seduction.
 
These men comprise only a small portion of the population, which gives them nearly unlimited options: they can pick any woman they want, and treat them however they want, with scarce few consequences. As a result, women spend years and years chasing men who are out of their league and have no intention of making a commitment, while passing up relationships with men in their own social groups. But a woman's beauty has a brief shelf life, and by the time of her 30th to 35th birthday, she will find herself kicked off the carousel by the alphas. Even the betas will not desire her as much, since men instinctively distrust women with high n-counts. This phenomenon destroys the pool of marriageable women, and consequently diminishes the incentive of men to work. With no wives or children to care for, they are able to live a more convenient, spendthrift life. Of course, this has the consequence of slowing down economic productivity, which is a death sentence to patriarchal societies.
 
Ozone pollution
 
Ocean acidification
 
The energy crisis
Global supplies of fossil fuels are dwindling at a rapid rate. If consumption patterns continue as they have, oil reserves will only last for another 40 to 50 years, gas reserves will be depleted in 70 years, and coal will disappear in about 200 years. Of course, these are just the official figures... One must remember that this crisis is partly artificial, since the US government not only suppresses alternative energy sources, but has also concealed the existence of massive oil fields like those in Prudhoe Bay, Alaska.
  
Military defense death spiral
  
The singularity
  
Demographic crisis
This also ties in with feminism to a certain extent. With most children now being born out of wedlock to single mothers (many of them the product of miscegenation), the indigenous populations of North America and Europe are being culturally and genetically weakened. Without the benefit of growing up in a nuclear family, they will be unable to compete at home and abroad with the Third World, leaving them vulnerable to a demographic takeover. Within a generation from now, formerly strong western nations will have devolved into balkanised hellholes with low productivity.
 
Economic collapse
  
Natural disasters
 
 
One thing that should be obvious by now is how most of the threats are of very different backgrounds, and have ambiguous taxonomies (kind of like the seven deadly sins). Take natural disasters, for example. No one would consider everyday occurrences like volcanism, earthquakes, floods, hurricanes, and solar storms to be an existential threat in and of themselves. But if their magnitude could be amplified by things like global warming? And if they were to be assigned under a single category? Then they would definitely make the cut. Now that I have assembled something resembling a hypothesis, it is up to you, the public, to digest and critique it. Comments of all kinds are welcome. If you think there were any threats I marginalised, exaggerated, or forgot to include, now is your time to speak.

Sunday 7 September 2014

Physiology of a super soldier

What will soldiers in the next few decades look like, when the world has run out of oil and is fighting over coal and gas? For once, let's forget about equipment and weapons, and ponder the men themselves. We know that in many combat units a Pareto principle is in play, whereby a handful of veteran 'aces' score the majority of the kills. It is no small problem that in a violent confrontation, most soldiers behave in a passive manner. On average, a 100-man company may have 80 men who are little more than 'an ammunition porter for the main weapons and a rear/flank security man.' Implementing stricter entry requirements and harsher training can only partially alter this ratio: after all, militaries have a manpower quota that must be met, even in an era that is awash with gory war movies and memoirs.
 
Another concern is whether or not baseline humans will be able to even participate in late 21st century conflicts. Judging from the mind-numbing power of modern weaponry, this seems dubious. Over the past decades, military machines have become more and more capable, but the same cannot be said for the humans who use them. Those military theorists who have acknowledged the problems posed by a lethal, high-tempo battlefield will often propose the adoption of powered exoskeletons. But even if armies could afford to give every infantryman such expensive equipment, it wouldn't provide the most bang for their buck. As we shall see, that distinction belongs to a class of humans bred to fight and die on their nation's behalf. Soldiers who will rely not on articulated suits of armor, but on a physiology geared towards the harsh demands of warfare. This approach is heavily dependent upon gene therapy and eugenics.
   
 Say hello to homo validus
    
The concept of a super soldier is certainly not new. Greek mythology was rich with stories about Orion, Achilles, and Hercules: demi-gods who lived among humans yet possessed incredible strength, speed, and durability. Many cultures that came into contact with the Greeks would subsequently adopt heroes with such attributes, something to serve both as inspiration and instruction for young men going to war. Armed conflict has always been a tough business with more losers than winners [1], but this trend has become even more pronounced since the industrial revolution. So what abilities does the mainstream press have in mind when it comes to super soldiers? Here is one headline: 'Tomorrow's soldiers could be able to run at Olympic speeds and will be able to go for days without food or sleep, if new research into gene manipulation is successful. According to the U.S. Army's plans for the future, their soldiers will be able to carry huge weights, live off their fat stores for extended periods and even regrow limbs blown apart by bombs.' These are all good starting points, but are they really going to be sufficient in a peer vs peer conflict? Normal humans may not be able to keep up with the pace set by their machines, forcing militaries to operate below their machines' capacities. If this turns out to be the case, then we may need to envision a completely different kind of soldier.
 
For instance, gunshot wounds are an all-encompassing problem that can currently only be addressed through body armor and medical care. No one has devised a biological system which would allow a soldier to take a handgun or rifle round to the torso and have a very good chance of surviving. Is such a thing achievable or desirable? Steering committees would be needed to determine what adaptations are suitable for a future soldier, and what kind of procedures could be used to attain them. Gene therapy comes in two different forms, somatic and germline. 'Somatic cells, such as skin or muscle cells, contain 23 chromosomal pairs and do not transmit genetic information to succeeding generations. Germline cells, which are the sperm and ova cells, contain 23 unpaired chromosomes and provide genetic information to offspring, as well as to the future generations descended from those offspring.' Military theorists are under the impression that we can rely on one-off somatic alterations to create super soldiers on demand, with no long-term ramifications. They have not considered the very real possibility that some of the adaptations required may be too extreme for a mature adult's body to cope with. If so, then we're not looking at a genome army, but something more akin to the Saiyans: a warrior race who will need to be socially and reproductively isolated from the native population.
  
They would be born in a government facility, raised as government operatives, and may pledge allegiance to a nation that abhors their very existence. Upon reaching retirement age, these soldiers will not simply disappear: they will require large living spaces where they can settle down and procreate, so that a new generation (their children) may replace them. And what of the powers and abilities wielded by this sub-species of human? It is known that the ancestors of modern Homo sapiens originally had physical strength on par with other members of the great ape family. How strong is that? Comparative studies on chimpanzees indicate they could lift four times their own body weight overhead. That's equivalent to a 200 lbs man military pressing 800 lbs! But at some point in time, hominids reached an evolutionary crossroads where high intelligence and manual dexterity became increasingly more relevant for their survival. Consequently, muscular strength eroded over hundreds of thousands of years to its current low, as detailed in numerous articles. [2] This fact obviates the need to devise exotic methods of performance enhancement: in practise, an individual will simply be restoring the strength and speed that were native to their most distant forebears. Through the use of germline engineering and in-vitro fertilisation, we may see humans returned to their former physical glory.
   
A mastiff from MGR
 
Operational
 
The couples who agree to have their (unborn) children genetically enhanced to serve in the army should ideally be military veterans themselves, to ensure that they have the long term commitment required for the project to work. Once several thousand couples have been assembled, they would each need to donate sperm and ova samples to a lab, and sign a contract allowing geneticists to insert micro-chromosomes into them. Embryos with the altered genomes would then need to have their gender pre-determined as male or female, to ensure that a breeding population of metahumans can flourish: too few females will limit the sizes of succeeding generations, but too many females will compromise the unit's fighting strength (presumably, since only males will be allowed to serve in front line roles). The fertilised embryo would then be implanted into the woman, and nine months later she would give birth to a child with unbelievable physical powers.
 
The next step in this process is even more interesting. During their early years, the children will be raised by their families in a communal estate, owned and paid for by the government. They would grow up in an inclusive atmosphere with other metahumans for company, and an absence of extremists who might discriminate against them. From the age of 5 to 14, they will spend their summer months in a military-themed camp where they become acquainted with many of the skills that will define their adulthood. At the age of 15, the soldiers will be drawn together as a training battalion and shipped out to an actual base, learning to work alongside their peers and co-operate as an actual unit. During this time, they would be under the command of the army reserves. At the age of 18, the soldiers and their battalion will be considered operational, and will go into active duty. This is when they will be transferred to the regular army's command.
 
Metahuman soldiers will be organised into so-called shock brigades. These are tier three units which are capable of serving many different roles. In high intensity conflicts, they would form the spearhead for corps level attacks and counter-attacks, enduring in conditions that would shatter the morale of ordinary soldiers. They can establish defensive screens that depleted formations or support troops can shelter behind in times of crisis. Their physiology would also make them extremely effective in the close quarters fighting so typical of urban combat. In low intensity conflicts, shock brigades can be stripped of their vehicles and heavy weaponry, and sent behind enemy lines on foot to assist special forces units. This is possible because of the soldiers' great strength and skill at man-packing, enabling them to carry everything they need with only an occasional air drop to replenish ammunition, food, batteries, and spare parts. Fighting as light infantry, they can pacify environments which are inhospitable to tier two and one units.
 
The abdominal cavity
  
Physiology
 
Muscular system: Metahumans have an archaic musculo-skeletal system granting them enormous physical strength, sufficient to deadlift approximately 2000 lbs, and military press well over 800 lbs. With training, they can do even more. (A normal man can press 110 to 130 lbs, while top power lifters in peak condition can triple this.) Most of this performance increase comes down to complex differences in how the muscle fibers are recruited, and mutations in the MSTN and NCOR1 genes, which regulate muscle mass. They have a vertical jump of 72 inches, and a standing long jump of 288 inches. Hand speed and reflexes are also improved. In addition, metahumans have enlarged adrenal glands which are connected to a set of GVA nerve fibers: with training, they can empty their contents in around 2 seconds, giving the body explosive speed at the cost of fine motor skills [3]. The adrenal glands can also be activated in the normal manner, dumping hormones into the blood stream at a slower pace in response to stress.
 
Dermal and circulatory system: A metahuman's skin (particularly the dermis and epidermis layers) is much thicker, making them less susceptible to cuts, burns, bruises, and abrasions. Their major arteries are smaller, more numerous and more dispersed, often hidden deep within the flesh for extra protection. This makes them unlikely to bleed to death from slash or stab wounds, and prevents their circulation from being cut off by ropes, chains, or other restraints.
 
Skeletal system: Arguably the most extensive alteration made to their physiology, the skeleton is coated in a strong organic material which makes it more resistant to chipping, fractures, and other impacts. Osteocytes in the lacunae of the bones are responsible for the breakdown and buildup of calcium; a mutation in the thyroid and osteoblasts would drive the massive absorption of calcium needed... Metahumans have what is termed a cranial ridge, basically a smooth growth of bone along the outside of the skull. The anterior ridge is part of the frontal and maxillary bones: it has an hourglass shape, extending from the top of the forehead to the base of the nose. The posterior ridge is part of the parietal bone: it has a more rounded profile. Small caliber bullets striking the ridge are stopped outright or deflected, although a hairline fracture will often result. In short, the bone around these regions of the skull is much thicker and devoid of dangerous shot traps.
    
Their abdominal wall is lined with a carapace composed of yellow fibro-cartilage and red bone marrow. This structure serves two important functions with regards to gunshot wounds. First, because of the dissimilar properties of these tissues (elasticity, viscosity, density, etc), bullets will experience a phenomenon called impedance mismatch, which prevents the formation of a temporary cavity. Second, even after the bullet penetrates the carapace, it has lost so much energy that it tends to push organs aside instead of crushing them. This arrangement is so effective that most pistol calibers cannot deliver a through-and-through chest wound to a metahuman. In those instances when organs are damaged enough to begin leaking blood, the carapace performs another remarkable feat: macrophages in the putty-like marrow are able to quickly break down RBCs and recycle their materials for use elsewhere in the body, which limits the severity of internal bleeding [4]. Although the individual can still die from exsanguination if the wound is severe enough, this is not common, since their blood clots more quickly.
 
Minor alterations: The brain has a thicker layer of cerebrospinal fluid (a mild form of hydrocephalus), which makes them less susceptible to concussions. Their eyes have a widened fovea, giving them an enhanced field of central vision. They also have highly mobile ears, enabling them to determine the exact location of a sound more quickly. Finally, the lengths of their metacarpals have been equalised to create a more solid punching surface.
  
Miscellaneous
  
One major advantage of the metahuman soldier program is how their genomes can be updated every generation to meet emergent battlefield demands, and take advantage of progress in biotechnology: unlike Homo sapiens, they will never become obsolete. There may be times when steering committees come up with useful adaptations that are too difficult for current technologies to implement. So even though the super soldiers then in service wouldn't receive the enhancements, their children and grandchildren very well could!
 
They are significantly heavier than a human of comparable build, since they have a lot more bone and skin. Because of their abdominal carapace, metahumans are visually perceived as overweight. Most body fat is not stored on the hips or stomach, as that would lead to excessive waistlines (it goes to the forearms and lower legs instead). The average specimen is 5 feet 8 to 11 inches tall, weighs 220 to 250 lbs, and has a mesomorph-endomorph body type.
   
Metahumans will easily beat humans in short distance sprints, but not in anything over 400 meters. Human physiology dictates that a runner's near-top speed cannot be maintained for more than thirty seconds or so, as lactic acid builds up and leg muscles begin to be deprived of oxygen. The only long distance races they win are events where a weighted pack is mandatory (like army speed marches).
  
They would not train in any specific hand to hand combat system, mostly because their natural strength and speed enables them to quickly overwhelm a human opponent. In addition, metahumans are virtually impossible to knock unconscious, and cannot suffer broken noses or jaws. While they may be taught to execute basic strikes, even this is overkill: With homo validus, every punch is a knockout punch.
 
Metahumans should practise knife survival drills: even though their abdominal carapace protects them from lethal stab wounds, they are still vulnerable to bio-mechanical cutting which traverses large sections of the body. Many courses which claim to offer such survival skills actually teach something akin to dueling, which is not useful at all. As one source put it: 'Criminal and military history reveals that a real world knife fight is more like football and less like fencing.'
   

Note

[1] According to an old German adage: 'A great war leaves the country with three armies - an army of cripples, an army of mourners, and an army of thieves.'
   
[2] This decline was already evident in our ancestors of 200,000 years ago. Compare the Homo sapiens of that time to Homo ergaster: the difference in skeletal robustness is obvious.

[3] Experiments showed that adrenaline increases twitch, but not tetanic force, and not rate of force development (which is necessary for fast and high power movements).
  
[4] Internal bleeding is serious for two reasons: the excess blood can compress organs and cause their dysfunction (as can occur in hematoma), and when the bleeding does not stop spontaneously, the loss of blood will cause hemorrhagic shock, which can lead to brain damage and death.
  
*edit made april 8, 2015

Thursday 7 August 2014

No Glory in War: the only way to commemorate WW1

We must always question our ability to know, especially with regards to how millions of men can be swayed and pressured into fighting a senseless conflict.


The gruesome consequences of war

To what end do we fight? Who benefits from war? Who pays the ultimate price?


Friday 30 May 2014

Repost: Common a priori objections by “debunkers”

(Note: This entire article has been duplicated from activistnyc.wordpress.com. It can be read in full at this link)


Common a priori objections by “debunkers,” including arguments from authority and the “someone would have talked” and “too many people” arguments

 
Every now and then I get a wave of “debunkers” visiting this blog. They’re welcome to post here; I’ve learned a lot from them. But, in the future, I would like to try to avoid certain repetitious arguments, or at least confine those particular arguments to relevant threads such as this one.
There are some a priori arguments they almost always bring up in an effort to prove that there could not have been any government complicity in the attacks of 9/11. In recent debates here, those arguments got jumbled together with other, meatier issues in comment threads.
 
To avoid such jumbling in the future, I’ve decided to devote this post to the more common a priori arguments. I’ll then add a rule to my comment policy requiring that, in the future, these and similar a priori arguments be discussed only in comments below this post (or other posts on these same topics), rather than jumbled together with other, more substantive discussions.
 
In this post I’ll also provide a brief review of my debates with “debunkers” in general, for the benefit of “debunkers” visiting this blog for the first time. Some of the discussions we’ve had here have been very worthwhile.


The most common a priori arguments
 
The most common a priori argument is a claim that the culprits could not have gotten away with it.

The usual form of this is the “someone would have talked” argument. Related to this is the “too many people” argument, the claim that too many people would have to have been in on the plot, or at least would have noticed something strange going on, thereby making it impossible to keep a secret.
 
Also common are arguments from authority. It is claimed that we should not question the experts who say that WTC 1, 2, and 7 all collapsed as a result of nothing more than two jet crashes and the resulting fires.
 
Also it is often claimed that there were no likely motives, either for government complicity in the 9/11 attacks in general or for the destruction of WTC 7 in particular. And, in my opinion, many 9/11 Truth activists don’t do the best possible job of stating the likely motives. I will try to remedy what I see as some deficiencies in that regard.
 
One type of complicity hypothesis, defended here

In this post I’ll be defending the feasibility of one particular type of hypothesis about possible government complicity in the 9/11 attack. The type of hypothesis outlined below would be categorized by Nicholas Levis as “LIHOP plus”:
 
There were real hijackers. Al Qaeda is a real organization, but heavily infiltrated by the FBI, the CIA, and intelligence agencies from various other countries around the world. The infiltrators likely include agent provocateurs as well as just plain spies. At least a few high U.S. officials had very specific and detailed foreknowledge, which might have come to them not only via the FBI, the CIA, the NSA, etc., but also via warnings from foreign governments, plus informal high-level channels such as, perhaps, the Bush / bin Laden family friendship. The attacks were aided and supplemented in various ways by one or more high officials and a handful of other people, e.g. by preventing FBI interference in the hijackers’ plot, slowing down the air defense system to allow the hijackers to reach their targets, and possibly by either demolishing the WTC buildings or at least supplementing the attack on the WTC buildings via bombs or arson.
 
I do not necessarily believe that any particular hypothesis along the above lines actually happened, but I do believe it to be a real possibility, one of many possible scenarios that might lie behind the 9/11 coverup. The point of this post is not to prove that anything in particular actually did happen, but merely to show that hypotheses of the above type are possible.
 
Could this sort of thing have actually happened? Or would it have entailed too many people, who would have talked? I’ll explore these questions below.
 
But first, I should note that not everyone in the 9/11 Truth movement agrees with hypotheses of the above type. Not everyone in the 9/11 Truth movement advocates the WTC demolition idea. Some focus only on air defense issues, warnings and foreknowledge, etc. On the other hand, there are many other people in the 9/11 Truth movement who hold a pure “MIHOP” view, according to which there were no hijackers, and the planes were just remote-controlled, or something. I will not be defending the latter view here.
 
I’m currently agnostic on the question of what was done to the WTC buildings. In this post, I will defend the possibility that the collapses of WTC 1, 2, and 7 could have been something other than purely natural consequences of the two jet crashes. (In this post, I won’t discuss the arguments against the purely natural collapse idea. I’ll discuss those in future posts.)

 
Letting the hijackers do their thing: People DID talk!
 
In my opinion, the only parts of the above “LIHOP plus” scenario that would need to have been known by more than a handful of people would have been (1) the foreknowledge by the FBI and by the intelligence agencies, and (2) the prevention of FBI and airport-security interference in the hijackers’ plot.
 
On these matters, lo and behold, there are people who have talked, such as FBI whistleblower Sibel Edmonds and others listed on this page of the National Security Whistleblowers Coalition site. The Jersey Girls have reported that they were approached by quite a few other whistleblowers who wanted to be called to testify before the 9/11 Commission, but who were never called, for whatever reason.
 
Also there has been quite a bit of talk about Able Danger, a military intelligence program which is said to have identified at least some of the 9/11 hijackers over a year before 9/11/2001.
FAA whistleblower Bogdan Dzakovic has remarked, “Since 9/11, I learned to have less contempt for the terrorists than I do for the bureaucrats and politicians who could have prevented 9/11 but didn’t.
 
They served in very pivotal positions of influence but due to gross incompetence or the fear of actually fulfilling their oaths of office to defend this country or possibly even something a bit more sinister, they failed to take any action. … Many of the FAA bureaucrats that actively thwarted improvements in security prior to 9/11 have been promoted by FAA or the Transportation Security Administration.” (Flying the deadly skies: Whistle-blower thinks the state of U.S. aviation security invites another attack by Bill Katovsky, San Francisco Chronicle, Sunday, July 9, 2006.)
 
Alas, whistleblowers too often don’t get listened to, as discussed in 9/11 whistleblowers ignored, retaliated against by Michael Hampton, who concludes, “It’s clear now to anyone paying the least bit of attention that the September 11, 2001, terrorist attacks were completely preventable.”
One could argue that the problems here were (and still are) just incompetence, negligence, and cronyism. Perhaps. Then again, that very same incompetence, negligence, and cronyism, on the part of the relevant bureaucracies as a whole, could also have enabled a few people at the top to get away with a whole lot worse. More about this below.

 
Air defense failures: Hiding any interference?
 
Was something done to slow down NORAD’s response? Was something done to prevent the hijacked planes from even being intercepted, let alone shot down? If so, how many people would have been needed to accomplish that?
 
Not very many. On a matter such as air defense, where every second counts, it takes only a few lackadaisical people in top positions to slow everything to a crawl.
 
According to Chapter 1 of the 9/11 Commission Report:
The FAA, the White House, and the Defense Department each initiated a multiagency teleconference before 9:30. Because none of these teleconferences – at least before 10:00 – included the right officials from both the FAA and Defense Department, none succeeded in meaningfully coordinating the military and FAA response to the hijackings.
 
Why were “the right officials” unavailable for so long? It was already obvious, by the time the second plane hit WTC 2 at 9:03 AM, that “America is under attack,” as Andrew Card informed George Bush at 9:05 AM. By that time it was already known that American Airlines Flight 77 had been hijacked too.
 
It is hard to believe that “the right officials” could genuinely have been that incompetent. Even if they really were just that incompetent, they should have been at least fired, if not prosecuted for negligence.
 
Everyone can see that “the right officials” were, at the very least, criminally negligent. That obvious fact cannot be hidden. The only question is whether their crime was something worse than mere negligence.
 
Had NORAD succeeded in intercepting any of the planes, there were only two people who could then authorize a shoot-down: the President and the Secretary of Defense. Both Bush and Rumsfeld were less than diligent about making themselves available for consultation on such an every-second-counts decision. (See my posts Bush at Booker School on the morning of 9/11 and George Bush, Donald Rumsfeld, Richard Myers: Their whereabouts on 9/11, and see the comment threads below these posts for more details.)
 
An even more important question is why no planes were even intercepted in the first place. Some people in the 9/11 Truth movement have speculated that someone may have deliberately created extra confusion for NORAD/NEADS, e.g. via false blips on their radar screens. If indeed something like that was done, it probably would not have been difficult to hide. It could have been done by just one person plus whoever gave that person access.
 
In any case, we do know that we’ve been lied to about the air defense failures. (See the section The lack of air defense in my post My main reasons for being suspicious about 9/11.) We just don’t know exactly what was covered up.


WTC: Hiding the planting of explosives, incendiaries, etc.?
 
Many people in the 9/11 Truth movement believe that other things were deliberately done to World Trade Center buildings 1, 2, and 7, besides just crashing planes into the Twin Towers.
A common objection to this idea, by “debunkers,” is that the perpetrators’ activities could not have gone unnoticed.
 
But it seems to me that their work could easily have been disguised as some sort of maintenance or upgrade, provided that their entry into the building was authorized by someone in a position of authority at the WTC complex.
 
Six weeks before 9/11/2001, Larry Silverstein became the new leaseholder for the entire WTC complex (except for WTC 7, which he already owned). Given the change in management, it was only to be expected that there might be some new contractors as well, or perhaps some special maintenance consultants. So, the entry of some new and unusual (but authorized) contractors would not have raised any eyebrows on the part of the security staff.
 
Any work involving the core columns could easily have been disguised as some sort of elevator work.
 
The perimeter spandrels could perhaps have been accessed via crawlspaces (ceiling chambers).
Any work involving perimeter columns would have been a bit more difficult to hide, because it would have required going into offices, meaning that the tenants would have known that there was some sort of work being done in their offices. But it could have been disguised as some sort of maintenance that required drilling holes in walls, such as electrical work, and it could have been done late at night when the office staff was absent.
 
I will assume that it was not necessary to access perimeter columns on every floor, but only on a minority of the floors, if indeed that was done at all. Thus the workers could easily have avoided those relatively few offices that were open 24 hours a day.
 
Whatever was done, it need not have required anywhere near as much work as a standard controlled demolition. The perpetrators need not have concerned themselves with various safety issues that are of concern to commercial demolition contractors. The collapse of the Twin Towers, and to a lesser extent WTC 7 too, spewed debris all over the place, damaging surrounding buildings, which is something a standard controlled demolition aims to avoid.
 
So I think it’s possible that the work could have been done by as few as three or four people, maybe at most six or so.
 
(Reminder to “debunkers”: The point of the above is only to defend the possibility that the WTC collapses may have had a little extra help. I’m aware that my speculation does not prove that any such thing was actually done. Actual evidence for any such scenario is a separate topic, to be discussed in other posts.)
 
(P.S., 3/6/2008: The above is a summary of my replies to westprog99, originally in the comment thread following my post He oughta know better: Mark Roberts and the iron spherules, recently transplanted to my new post Hiding the planting of incendiaries, explosives, or whatever? Response to a common a priori objection. Comments on this topic should be posted underneath the latter post rather than here on this page.)

 
WTC: Why were no devices found in the rubble?
 
Another common objection by “debunkers” is that no remnants of explosive or incendiary devices were found in the rubble. Or, at least, no such remnants were noticed and publicly commented on.
In the first place, any such devices would most likely have gotten pulverized, along with nearly everything else.
 
Of course, they might not have all gotten pulverized.
 
Perhaps some remnants of devices were found, but people might have assumed that they were something else. A lot would depend on exactly what these devices looked like.
 
Another thing to consider is that the CIA was openly involved in the cleanup. The CIA had had an office in WTC 7 and searched the rubble during the cleanup, presumably to ensure that no classified documents went astray. (See the New York Times article The Intelligence Agency: Secret C.I.A. Site in New York Was Destroyed on Sept. 11 by James Risen, November 5, 2001. A copy of the printed version of this article is shown in the WTC 7 section of the video Loose Change Final Cut.)
 
If, by any chance, the CIA was in any way involved in the destruction of the WTC buildings, then, in addition to searching for classified documents, the same CIA agents could have searched for other things too, preventing those other things, too, from falling into the hands of other people. Those other things could have included … well, who knows what. Whatever the CIA agents might have been searching for besides papers, they had a perfect excuse to search the rubble thoroughly. And most likely they were around during pretty much the entire cleanup. How else could they have ensured that no classified documents fell into the wrong hands?
 
The CIA agents who searched the rubble might not, themselves, have known exactly what these mysterious objects were that they were searching for. They might have just been shown pictures and told to retrieve any objects that looked like thus-and-so, without being told exactly what those objects were. In that case, they would simply have accepted that they didn’t need to know what the objects were. I would expect that kind of compartmentalization to be commonplace in intelligence agencies.
 
Then again, the above hypothesis about CIA involvement in the cleanup might not have been necessary, if the devices themselves were sufficiently well-disguised.
 
Likewise, residues may or may not have been easily detectable, depending on what was used. For example, thermite leaves a residue consisting mostly of iron and aluminum oxide, both of which would have been present in the rubble anyway. So, to detect thermite residue, one would also need to look for other evidence, e.g. evidence that sufficiently high temperatures were reached in at least a few places.
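(For illustration, the standard iron-oxide thermite reaction is Fe2O3 + 2Al → Al2O3 + 2Fe: the aluminum strips the oxygen from the iron oxide in a strongly exothermic exchange, leaving exactly the iron and aluminum oxide residue mentioned above.)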
 
(Reminder to “debunkers”: Again, the point of the above is only to defend the possibility of an inside-job hypothesis regarding the destruction of the WTC buildings. I don’t claim to know what actually happened to the WTC buildings, or whether the CIA was in any way involved.)


Structural engineers and arguments from authority
 
A common response to WTC demolition hypotheses is: “Not a single structural engineer in the entire world agrees with you! Every structural engineer in the entire world says that the WTC buildings collapsed solely due to jet impacts and resulting fires.”
 
First, that’s not true. There are some structural engineers who are members of Architects and Engineers for 9/11 Truth. Admittedly, not very many yet, but we’ll see who else joins.
When that’s pointed out, a “debunker” will usually reply, “Well, every single one of the world’s other structural engineers, besides the very few listed on the AE911T website, agrees that the WTC buildings collapsed solely due to jet impacts and fire.”
 
Well, that’s probably not accurate either.
 
First, the majority of the world’s structural engineers have probably not studied the WTC collapses themselves, beyond reading the pronouncements of mainstream experts. Most structural engineers probably have not even read the entire NIST report, for example. And, as the “debunkers” themselves are fond of pointing out in other contexts (e.g. in response to the argument that no steel-frame skyscraper has ever collapsed due to fire), every building is unique, with its own strengths and vulnerabilities. So, a structural engineer would have to have studied the WTC buildings firsthand in order to have an informed opinion about them.
 
If one is going to do a numbers comparison, it would be fair to compare only those on both sides who have actually studied the WTC buildings and the WTC collapses on their own, rather than just reading summary reports in trade journals.
 
Furthermore, it’s likely that there exist at least a few structural engineers who agree with AE911T but who hesitate to join, out of fear of having their full legal names published on a controversial website. (I myself have not joined AE911T for that very reason. I’m playing it safe in that regard, given how some 9/11 Truth activists here in New York have been personally harassed. I’m not a structural engineer, but I do have a different kind of engineering background. It wouldn’t surprise me if at least a few structural engineers preferred to support the 9/11 Truth movement in more anonymous ways too.)
 
However, as far as I am aware, the “debunkers” are indeed correct that the purely-natural collapse hypothesis is the unanimously accepted orthodoxy in structural engineering journals.
 
So the question now is whether the consensus in structural engineering journals should be considered infallible – or, at least, sufficiently close to infallible that no one, and especially no one outside the field of structural engineering itself, should ever presume to question it.
 
(P.S., 2/28/2008: The remainder of this section is mis-stated. See the P.S. after this section, and see the discussion in comments below this thread.)
 
Well, let’s consider what kind of a field structural engineering is. In their more honest moments, structural engineers will admit that their field is on a somewhat less sound scientific experimental footing than most other kinds of engineering. For example, this post in the JREF forum, by Newton’s Bit, ends with the following quote:
“Structural Engineering is the art of molding materials we do not wholly understand into shapes we cannot precisely analyze so as to understand forces we cannot really assess in such a way that the community at large has no reason to suspect the extent of our own ignorance.” – James E. Amrhein
 
The problem is this: In almost any other kind of engineering, complete prototypes of any given product are built and thoroughly tested before the product is mass-produced and sold. If structural engineering were like almost every other kind of engineering, complete prototypes would be built and thoroughly tested, not necessarily for every single building, but at least for every building of a new and unusual design. Alas, that would be prohibitively expensive. For skyscrapers, it’s common to build and test prototype floor assemblies, but not a prototype for the entire building.
 
If structural engineering were like almost every other kind of engineering, there would be frequent experiments in which buildings, of various different kinds, were built for the purpose of setting them on fire to see whether and how they collapse. Such experiments have been done occasionally (e.g. Cardington Fire Test: The Behaviour of a Multi-storey Steel Framed Building Subjected to Fire Attack), but not very often. There’s just not enough funding for such experiments.
 
Ditto for experiments involving other kinds of damage to buildings. Also, there has not been much experimental study of the phenomenon of progressive collapse, which is said to have brought down WTC 1, 2, and 7. (Regarding the rarity of such study, see, for example, The science of how buildings fall down by Colin Nickerson, Boston Globe, December 3, 2007.)
 
For more about this matter, please see my post Engineers were surprised by the WTC collapses, December 7, 2007.
 
Another problem is this: In a field with relatively little hard scientific data (compared to what one might expect in other engineering disciplines that have been around for a long time), there’s lots of wiggle room for as-yet-unfalsified hypotheses. And, given whatever wiggle room they may have, most people will wiggle in a direction which they see as beneficial to their careers, and which avoids any boat-rocking. This factor might well have inflated the consensus in favor of the official story.
 
Even one of the most knowledgeable and articulate “debunkers,” Frank Greening, has acknowledged that institutional bias can be a problem. He’s a co-author of the latest Bazant paper, but he is nevertheless highly critical of the NIST report. (See his fascinating post Confessions of a 9/11 Agnostic, on page 6 of the thread Debate! What debate? in the JREF forum.) As far as I can tell, he suspects a coverup, not of 9/11 being an inside job, but of possible flaws in the design of the WTC buildings. (See also this post of his in the JREF forum thread Another engineer criticizes NIST & FEMA.)
 
So, an orthodoxy in the field of structural engineering should not be regarded as beyond question.
At the same time, those of us who question that consensus should certainly not regard ourselves as infallible either. The collapses of the WTC buildings involved many factors that need to be examined cautiously and in quantitative detail, and on which too many people in the 9/11 Truth movement have jumped to premature conclusions. (See my post Demolition of WTC: Let’s not overstate the case, please.) We should not simply dismiss the orthodox view; we should at least try to understand it, to the best of our ability. And we should recognize that we ourselves do not have the final word. If we ourselves do not have the expertise needed to resolve the issues, we should encourage those challengers of the official story who do have at least some relevant expertise, but we should not endorse their views without question either. We should be willing to wait and see.
 
(P.S., 2/28/2008: In this and subsequent comments, westprog99 argues that structural engineering is, in fact, a “mature discipline.” See the ensuing discussion. It might have been more accurate for me to say, as I do in this comment, that the field of structural engineering is probably “mature” on the question of what is needed to ensure the safety of buildings under normal conditions, but perhaps not quite so “mature” on the question of how buildings of various different kinds perform under various extreme conditions.)


The most likely main motive for complicity, by high officials, in the 9/11 attacks
 
In my long-ago post Reply to some folks at Screw Loose Change, one objection I did not respond to very well was this one:
4. If your whole plan is to attack Iraq, why not blame the attacks on Saddam? Wouldn’t that actually help to further your goals, instead of the nonsensical “frame Osama to attack a country that has nothing to do with our target” scheme?
I still stand by most of what I said in reply to that objection:
Perhaps because Al Qaeda already existed, and perhaps because making use of Al Qaeda was both easier and less suspicious-looking than cooking up a totally new CIA-front Iraqi “terrorist group” from scratch. Also, the point of the 9/11 attacks, from the point of view of the American perpetrators, probably wasn’t just to justify a short-term war against Iraq or Afghanistan, but to justify a century of “war on terror,” with a variety of targets. That would explain why the attacks had to be so exceedingly massive and dramatic, as well as explaining why the exact nationality of the terrorist group wasn’t too important, as long as they were Islamist and thus could be tied, in the American popular imagination, to just about any Muslim country that the administration might choose to go to war with for whatever reason.
But I would now add the following:
 
The most likely main motive probably had to do with Afghanistan, the country which was invaded almost immediately after 9/11. Before 9/11/2001, there were actual military preparations, not just a plan, to invade Afghanistan. However, if 9/11 had not happened, it would have been extremely difficult to sell the American public on the idea of invading Afghanistan, given what a meat grinder Afghanistan had been for the Soviet Union back in the 1980’s. (For documentation on the preparations for an invasion of Afghanistan, see the section on The war in Afghanistan in my post My main reasons for being suspicious about 9/11.)
 
Thus, the need to justify an invasion of Afghanistan is a likely motive for complicity in the 9/11 attacks, regardless of the reasons why the Bush administration wanted to invade Afghanistan in the first place. Most likely there were several different motives for the invasion. The relative importance of these motives can be debated.
 
One topic that has been debated quite a bit is the role of oil (and other energy sources such as natural gas) in U.S. foreign policy. Some interesting information on this topic can be found on the website of the Council on Foreign Relations. (See especially this collection of pages on “energy security.”)
The Bush administration got lots of mileage out of 9/11, on more matters than just Afghanistan. It provided an excuse for the Iraq war too, and for torture, and for various curtailments of civil liberties. These other things could perhaps be seen as secondary motives, but the most likely primary motive was to provide an excuse for invading Afghanistan.

 
But why destroy WTC 7?
 
Various “debunkers” have asked what the motive could have been for destroying WTC 7 in particular.
 
Some people in the 9/11 Truth movement have speculated that the motive might have been the destruction of records. WTC 7 had a lot of interesting tenants, including the CIA, the FBI, the EEOC, and the SEC, any one of which might have had something that the perpetrators wanted destroyed.
To this, “debunkers” have had two responses: (1) If the point is to destroy papers, why not just use a shredder? (2) All but the most recent records were likely backed up somewhere anyway, so could not be destroyed.
 
My reply is that there’s a difference between (a) actually destroying records and (b) having a good excuse to lose records that one isn’t supposed to lose. The destruction of WTC 7 might have been good for the latter purpose, if not the former. Furthermore, in the event that the CIA was involved, they might have had something other than just records that they wanted destroyed.
 
Anyhow, there’s also another possible motive for the destruction of WTC 7. My guess is that, from the point of view of those who planned the destruction of the WTC buildings, there might have been a lot of unknowns, and that these unknowns were dealt with via redundancy.
 
The planners might not have known, for sure, whether the hijackers would succeed in hijacking the planes, and, if so, whether they would succeed in crashing into the buildings, and, if so, whether the resulting damage and fires (plus whatever devices might have been planted in the Towers, if any) would succeed in bringing the Towers down. In case the above did not fully succeed, the collapse of a third skyscraper might then provide the desired melodrama.
 
Given how WTC 7 was hit by flying debris, the planners may have decided, on the fly, to use that as an excuse for an allegedly “natural” collapse, and to time their remote-controlled devices accordingly, e.g. by first setting off incendiary devices on floors 7 and above to start the fires, and then waiting 7 hours to destroy the building by setting off incendiaries or explosives further down, say, on floor 5. But, in the event that WTC 7 had not been hit by flying debris, I’m assuming here that the planners would have had some sort of backup excuse for the collapse of WTC 7 – perhaps even bombs (or incendiaries) planted by a “terrorist.” But the collapse of WTC 1 and 2 provided a much better excuse, so they went with that one.
 
In addition, if the planners also wanted to destroy one or more specific offices in WTC 7, for whatever reason, they would have done so by two redundant means: First, by just setting the specific office(s) on fire. But the planners could not have known in advance whether fire fighters would succeed in putting out the fires in WTC 7. To deal with that uncertainty, demolishing the entire building might have been a backup means of destroying whatever offices the planners wanted to destroy.
 
Disclaimer: This entire discussion about WTC 7 is hypothetical, since I don’t claim to know what happened to the WTC buildings, let alone what the motives were.