Hello, and welcome to this website. It is managed by Christophe Pochari, inventor and designer of all the concepts and technologies listed.



Christophe Pochari Energietechnik is the sole inventor and developer of the world’s first pneumatic self-tensioned guyed tower technology.

Because Christophe Pochari Energietechnik receives a large number of inquiries about selling the concepts we write about on this website, we kindly ask prospective interested parties to be serious and committed before contacting us. We do not offer free consulting unless it is related to the technology being proposed. Viewers should note that all articles on this website are about concepts the author sincerely believes are technically possible with extant human knowledge, but they are not physical prototypes, nor are any of their performance claims guaranteed. All concepts and ideas can be mathematically vetted with existing theory, but humans are not infallible and we cannot guarantee anything; it is up to individuals willing to take immense risk to develop something in the real world. If you are interested in making money alone, this is not for you: you have a greater chance at the Las Vegas casino than in invention and technology development! Interested parties are encouraged to perform their own analysis. Christophe Pochari Energietechnik is a self-funded and financially independent company; we do not derive any commercial benefit from our ideas and thus have no conflict of interest.

Office location: 21108 Hummingbird Ct, Bodega Bay, CA 94923, PO BOX 716

Mobile (primary telephone): 707 774 3024 (please contact by email first, we do not have a secretary yet!)

Email: christophe.pochari@yandex.com


Visit the splendid Northern California Pacific Coast!

Arrhenius’s Demon: The Chimera of the Greenhouse Effect



The greenhouse effect being hauled away into the scientific junkyard by the Polizei

Note: The radiative heat transfer equation based on the Stefan-Boltzmann law is erroneous and cannot be relied upon. The only way to measure radiative heat transfer is to measure the intensity of infrared radiation directly with electrically sensitive instruments.

The Ultraviolet Catastrophe Illustrated.

The Stefan-Boltzmann law states that radiation intensity scales with the 4th power of temperature, drastically overestimating radiation at high temperatures. If we heat a one-cubic-meter cube to 2000 °C, it radiates 1483 kW/m²; since such a cube has six square meters of surface, we would be radiating 8890 kW, or nearly 9 megawatts of power! Clearly, this is impossible, because it would mean heating and melting metal would be physically impossible: the body would cool through radiation faster than it could be heated. To heat 7,860 kg of steel to 2000 °C in one hour, we need to impart 2032 kWh of thermal energy, an average heating power of about 2 MW, far less than the nearly 9 MW the law says the cube would radiate away. The Stefan-Boltzmann law is wrong and must be modified. Rather than quantizing radiation as Planck did, we can simply assign it a non-linear exponent, where a rise in temperature is accompanied by a reduction in the sharpness of the slope. It therefore appears as if the entire greenhouse-effect fallacy is caused not only by the confusion over power and energy and its amplifiability, but also by the incorrect mathematical formulation of radiative heat transfer. If the Stefan-Boltzmann law with its 4th-power exponent were true, hot bodies would cool within seconds and nothing could be heated; lava would solidify immediately, and smelting iron, melting glass, or any other high-temperature process would become impossible!
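For readers who wish to check this arithmetic themselves, here is a minimal sketch. The emissivity of 0.98 and the mean specific heat of steel (0.47 kJ/kg·K) are my assumptions, chosen because they reproduce the 1483 kW/m² and 2032 kWh figures quoted above.

```python
# Sketch of the Stefan-Boltzmann arithmetic above. The emissivity of 0.98
# and the mean specific heat of steel (0.47 kJ/kg*K) are assumed values
# chosen to reproduce the figures quoted in the text.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiant_flux(t_celsius, emissivity=0.98):
    """Radiated power density in W/m^2 according to the Stefan-Boltzmann law."""
    t_kelvin = t_celsius + 273.15
    return emissivity * SIGMA * t_kelvin ** 4

flux_kw = radiant_flux(2000.0) / 1000     # ~1483 kW/m^2
cube_power_kw = flux_kw * 6               # six 1 m^2 faces: ~8900 kW

# Energy to heat 7,860 kg of steel from 20 C to 2000 C, in kWh
heat_kwh = 7860 * 0.47 * (2000 - 20) / 3600   # ~2032 kWh
```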

In August of 2021, I became suspicious that perhaps the entire greenhouse effect was suspect and decided to see if anyone had managed to refute it. I searched the term “greenhouse effect falsified” and found a number of interesting results in Google Scholar. At the time, I had a difficult time believing that each and every expert, Ph.D., and academic could be so wrong. I kept thinking in the back of my mind, “this cannot be, the whole thing is a fraud?” But upon reading the fascinating articles and blog posts put together by the slayers, I immediately identified the origin of the century-long confusion: the conflation of energy and power. A number of individuals in the 21st century have called the greenhouse effect theory into question. The first serious effort to refute the greenhouse effect is the now quite famous “G&T” paper by Gerhard Gerlich and Ralf D. Tscheuschner. Although it is not known who was the first to refute the greenhouse effect, I have found no articles or papers in the Google Books archive from the entire 20th century, except for some arguments made by the quite kooky psychoanalyst Immanuel Velikovsky. In fact, I cannot find evidence that anyone ever seriously questioned (serious defined by scientific papers or articles published) Arrhenius, Tyndall, or Poynting during the 19th and early 20th centuries. This is likely because atmospheric science remained largely obscure and occupied little time in the minds of natural philosophers, physicists, and what we now call “scientists”. It appears that it took the increased discussion of the greenhouse effect during the global warming scare, driven by Al Gore’s propaganda, to get people to finally scrutinize it. With the introduction of the internet and the growth of the “blogosphere”, individuals could contribute outside of the scientific guild. Those who “deny” the greenhouse effect go by the term “slayers”.
They accrued the name “slayers” after the title of the first book refuting the greenhouse effect: “Slaying the Sky Dragon: Death of the Greenhouse Gas Theory”, by John O’Sullivan. So far, I have found only the following publications challenging the fundamental assumptions of the greenhouse effect: Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics, by Gerhard Gerlich; The Greenhouse Effect as a Function of Atmospheric Mass, by Hans Jelbring; There is no Radiative Greenhouse Effect, by Joseph Postma; No “Greenhouse Effect” is Possible from the way the Intergovernmental Panel on Climate Change Defines it, by John Elliston; Refutation of the “Greenhouse Effect” Theory on a Thermodynamic and Hydrostatic basis, by Alberto Miatello; The Adiabatic Theory of Greenhouse Effect, by O. G. Sorokhtin; Comprehensive Refutation of the Radiative Forcing Greenhouse Hypothesis, by Douglas Cotton; Thermal Enhancement on Planetary Bodies and the Relevance of the Molar Mass Version of the Ideal Gas Law to the Null Hypothesis of Climate Change, by Robert Ian Holmes; and On the Average Temperature of Airless Spherical Bodies and the Magnitude of Earth’s Atmospheric Thermal Effect, by Ned Nikolov. In addition to these publications, the blog “tallbloke”, run by Roger Tattersall, has provided invaluable data on the gravito-thermal effect, most of which is thanks to the work of Roderich Graeff. Without the efforts of Roderich Graeff, it is unlikely anyone would have noticed the obscure gravito-thermal effect. In the Springer book Economics of the International Coal Trade: Why Coal Continues to Power the World, by Lars Schernikau, the author briefly mentions the gravito-thermal effect and the possibility that the entire greenhouse effect is faulty.

This article is a synthesis of the largely informal and cluttered online literature on “alternative climate science”, with a special emphasis on the gravito-thermal effect. “Alternative” is a regrettable word, since it implies this is just another “fringe” theory competing against a widely established and well-founded mainstream. Due to a lack of clarity in the current state of climate science, I felt it would be useful to summarize the competing theories. One could divide the “alternative climate” theorists into three broad camps.

#1: Radiative GHE refutation based on the 2nd law only. This includes Gerlich & Tscheuschner, Klaus Ermecke, and the GHE slayer book authors.

#2: Gravito-thermal models. This includes Sorokhtin, Chilingar, Cotton, Nikolov, Zeller, and Huffman.

#3: “Sun only” theories. I know only of Postma, who has propounded a climate theory based purely on the heating of the sun.

The first “school” focuses mainly on the deficits within the existing radiative greenhouse mechanism, and while this is important, it misses other important aspects and provides no alternative explanation. Since we are attempting to “overthrow” the dogma that the earth amplifies solar energy by slowing down cooling, once we have completely ruled out this mechanism, we can either say the earth is warmed solely by the sun or that some other, previously ignored mechanism warms it above and beyond what the sun can provide. We argue that the only parsimonious mechanism allowed by our current laws of physics is a gravito-thermal mechanism. Although “sun-only” models have been proposed, they are shown to be erroneous. A great deal of work needs to be done to finally build a real science of climate; it will take generations, since all the textbooks have to be rewritten. Millions of scientific papers, thousands of textbooks, and virtually every popular media article need to be updated so that future generations do not keep being miseducated. Most engineers working in the energy sector are also gravely misinformed. This is especially important because many politicians and engineers are incorrectly using non-baseload energy sources, wind and photovoltaics, otherwise useful technologies, to decarbonize, as opposed to supplementing and hedging against uncertain future hydrocarbon supplies.

Does the falsity of the greenhouse effect point to parallels in other scientific domains? It is an indictment of modern science that the backbone of climatology, the science that deals with the climate of our very earth, is a vacuous mess.

What other areas of science could be predicated entirely on a completely erroneous foundation? Excluding theoretical physics, which is a den of mysticism, we should turn to more practical and real-world theories, those that try to explain observable, measurable phenomena. Which other mainstream postulates or theories could be suspect?

It does seem as if the greenhouse effect was somewhat unique, since it was one of the few physical theories that, while untested and speculative, fulfilled some mental desire, and due to its relative insignificance prior to the 21st century, did not garner the attention needed for a swift refutation. Few other theories so deeply ingrained in society could have perpetuated for so long on a false foundation, because most axioms of modern science are empirical, simply updated versions of the 19th-century Victorian methods of rigor and confirmation. The greenhouse effect is truly the outlier: something that caught the attention of one of the weaker fields within science, climate, but never the attention of the engineer, the actual thermodynamicist, or the physicist who built real, useful machines. As John O’Sullivan said, the greenhouse effect was never observed by the “applied scientists” who worked with CO2, industrial heaters, heat transfer fluids, cooling systems, insulation, etc. It is implausible that the marvelous “insulating” properties of this wonder gas would not have been noticed by experimentalists in over a century. As we’ve mentioned before, if one searches terms such as “greenhouse effect wrong, false, refuted, erroneous, impossible, violates thermodynamics”, no scientific papers, journal articles, or discussions are retrieved in the Google Books archive, suggesting that this theory received little attention. Wood’s experiment doesn’t count, because all it shows is that a real greenhouse does not work via infrared trapping; it says nothing of the atmosphere, or of the claim that the entire theory violates the conservation of energy by magically doubling the energy flux. The only record I could find is one mention by Velikovsky, claiming that the greenhouse effect violated the 2nd law of thermodynamics.

“I have previously raised objections to the greenhouse theory though most have been rejected for publication. But recently even the greenhouse advocates have begun to note certain problems. Suomi et. al. [in the Journal of Geophysical Research, Vol. 85 (1980), pp. 8200-8213] notes that most of the visible radiation is absorbed in the upper atmosphere of Venus so that the heat source [the cloud cover] is at a low temperature while the heat sink [the surface] is at a high temperature, in apparent violation of the second law of thermodynamics.”

Carl Sagan and Immanuel Velikovsky, By Charles Ginenthal

“Later efforts by astronomers to account for the high temperatures by means of a “runaway greenhouse effect” were denounced by Velikovsky as clumsy groping – “completely unsupportable” he called it in 1974, adding that such an idea was “in violation of the Second Law of Thermodynamics”

How Good Were Velikovsky’s Space and Planetary Science Predictions, Really? by James E. Oberg

The greenhouse effect is just another “superseded” theory in the history of science. Wikipedia, despite being edited by spiteful leftists, is more than willing to acknowledge the long list of superseded theories, but somehow its editors think this process magically stopped in the 21st century! The greenhouse gas theory will join the resting place of a very long list of now-specious theories which, at the time, were perfectly reasonable and even rational. We must be careful to avoid a “present bias”. The list of disproven theories, while not by any means expansive, includes phlogiston theory, caloric theory, geocentrism (the Ptolemaic earth), tectonic stasis (pre-Wegener geology), the perpetuum mobile, Newton’s corpuscular theory of light, Lamarckism, and Haeckel’s recapitulation theory, just to name a few. Unsurprisingly, Wikipedia also lists “scientific racism” as a “superseded” theory, even though ample evidence exists for fixed racial differences in intelligence and life history speed.
We cannot accuse its mistaken founders of fraud, but we can blame the veritable army of the global warming industrial complex for systematic fraud, deception, and duplicity. Arrhenius, the god of global warming, wanted to believe that burning coal could avert another ice age and make the climate more palatable for human settlement. Those who have used the greenhouse gas theory as an excuse to “decarbonize” civilization can indeed be accused of fraud, because they have willingly suppressed counter-evidence by censoring, firing, or rejecting challenging information, and they have knowingly falsified historical temperature data. The conclusion is that catastrophic anthropogenic global warming (CAGW) is the single largest fraud in world history, simply unparalleled in scale, scope, and magnitude by any other event. We do not know how global warming grew into such a monster, but one explanation is that it has been used as a political machination to spread a new form of “Bolshevism” to destroy the West.

I have decided to call the greenhouse effect “Arrhenius’s Demon” after “Maxwell’s demon”, a fictitious being that sorts gas molecules according to their velocity to generate a thermal gradient from an equilibrium.

Atmospheric climate demystified and the universality of the Gravito-Thermal effect

This artist's conception illustrates the brown dwarf named 2MASSJ22282889-431026.

A “Brown Dwarf”, a perfect example of the gravito-thermal effect in action.

The confusion over the cause of earth’s temperature is in large part due to the historical omission of atmospheric pressure as a source of continuous heat. Gases possess high electrostatic repulsion, which is why they are gases to begin with. The atoms of elements that exist as solids under normal conditions strongly adhere to each other, forming crystals, but gases can only exist as solids at extremely low temperatures or extremely high pressures, in the GPa range. Many have erroneously argued that because the oceans and solids do not display a visible gravito-thermal effect, the gases in the atmosphere somehow cannot either. This is explained by the fact that liquids and solids are nearly incompressible, so they generate little to no heating when confined. Gas molecules possess extremely high mean velocities; a gas molecule in thermal equilibrium at ambient temperature and pressure moves at around 500 m/s. As the molecular density increases, the mean free path decreases and the frequency of collisions increases, since the packing density has increased, generating more heat. But since atmospheres are free to expand if they become denser, a given increase in pressure does not produce a proportional rise in temperature, since the height of the atmosphere will grow. Unsurprisingly, fusion in stars occurs when gaseous molecular clouds accrete and self-compress under their own mass.

There is nothing mysterious about the gravito-thermal effect, yet for some reason it has been clouded in mystery, poorly elucidated, and virtually ignored by most physics texts. The gravito-thermal effect is what we see happening in the stars that shine all around us. People have somehow forgotten to ask where the energy comes from to power these gigantic nuclear reactors. All the energy from fusion ultimately derives from gravity, because nuclei do not fuse on their own! We know that gas centrifuges used for enriching uranium develop a substantial thermal gradient.

Modern climate science is one of the great frauds perpetrated in the 20th century, along with relativity theory, confined fusion, and artificial intelligence. 

Brief summary of the status of “dissident climate science”, or more appropriately named: “real climate science”

Most “climate denial” involves a disagreement over the degree of warming posited to occur from emissions of “greenhouse gases”, not whether “greenhouse gases” are even capable of imparting additional heat to the earth. The entire premise of the debate is predicated on the veracity of the greenhouse effect, so most of these debates between climate skeptics and climate alarmists, for example between a “skeptic” like William Happer and an alarmist like Raymond Pierrehumbert, rest on a vacuous foundation, and the entire debate is erroneous and meaningless. We have found ourselves in a situation where an entire generation of physicists believes in an entirely non-existent phenomenon. While we have mentioned that there exist a number of “greenhouse slayers”, they have very little visibility, and there has been no major public debate between them and the alarmists. In fact, most people have never heard of the slayers, even within the relatively large “climate denial” community. Jo Nova is typical of modern AGW skeptics in that she ardently defends the greenhouse chimera and argues entirely on the merits of the alarmist dogma, quibbling only over magnitude. Other skeptics who nonetheless champion the greenhouse effect are Anthony Watts and Roy Spencer. Anthony Watts is just a weatherman with a weak grasp of physics and thermodynamics, but Roy Spencer considers himself well-versed in these areas. Willis Eschenbach is perhaps the most glaring case study of a deluded skeptic. He went out of his way on Anthony Watts’s blog to defend Arrhenius’s Demon. In attempting to show just how brilliant the IPCC was, he created a hypothetical “steel greenhouse” in which the earth is wrapped in a thin metal layer that reflects all the outgoing radiation while absorbing all incoming radiation. Below is an illustration of Eschenbach’s “steel greenhouse”.
Apparently he, Watts, and virtually every “climate scientist” believe it is possible to simply double the incoming radiation by nothing more than reflecting it. It has evidently not dawned on them that no lens, mirror, reflector, radiant barrier, or surface in existence has ever been shown to increase the power density of a radiative flux, whether UV, infrared, or gamma rays.


#1: There is no greenhouse effect, as it violates the conservation of energy. The theory originated from the confusion that energy flux, or power, could be amplified by “slowing down cooling”. The grave error was believing that slowing down heat rejection could raise the steady-state temperature of a continuously radiated body without the addition of work. Earth’s temperature is a full 15 °C warmer than solar radiation alone can support, which is around −1.1 °C.

#2: The gravito-thermal effect, coined by Roderich Graeff, provides the preponderance of the above-zero temperature on earth. The gravito-thermal effect is simply the gravitational confinement of gas molecules, which produces kinetic energy and releases heat through collisions between gas molecules. The gravito-thermal effect can predict the atmospheric lapse rate and surface temperature with nearly 100% accuracy using the ideal gas law, for both Earth and Venus. The “adiabatic lapse rate” is not some artificially generated number derived from the ideal gas law: static air temperature gauges on cruising airliners measure a temperature almost identical to that predicted by the ideal gas law. In fact, current theory cannot even explain the cause of the lapse rate; various nebulous concepts such as convective cooling or “radiative height” are proposed, but none of these explanations can be correct if we can predict the lapse rate perfectly with the ideal gas law. The original atmosphere-driven climate theory proposed by Oleg Georgievich Sorokhtin, later articulated in the West by the independent researcher Douglas Cotton, is the only veridical mechanism and the only known solution compatible with current physical laws that can account for the temperature of the earth and other planetary bodies. The gravito-thermal effect produces 72.46 W/m², while the sun produces 303 W/m². The sun therefore accounts for roughly 81% of the earth’s thermal budget, while the atmosphere accounts for the remaining 19%.
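The flux figures above can be turned into temperatures by inverting the Stefan-Boltzmann relation. In the following sketch, the emissivity of 0.98 is my assumption (it is not stated in the text); with it, the solar flux alone yields roughly −1.4 °C and the combined flux roughly 13.6 °C, in line with the figures quoted above.

```python
# Invert the Stefan-Boltzmann relation, T = (F / (eps * sigma))**0.25,
# to turn the flux figures quoted above into temperatures.
# The emissivity of 0.98 is an assumed value, not stated in the text.
SIGMA = 5.670374419e-8  # W/(m^2*K^4)

def temperature_from_flux(flux_w_m2, emissivity=0.98):
    return (flux_w_m2 / (emissivity * SIGMA)) ** 0.25

SOLAR = 303.0     # W/m^2, solar contribution quoted above
GRAVITO = 72.46   # W/m^2, gravito-thermal contribution quoted above

t_sun_only = temperature_from_flux(SOLAR) - 273.15            # ~ -1.4 C
t_combined = temperature_from_flux(SOLAR + GRAVITO) - 273.15  # ~ 13.6 C
solar_share = SOLAR / (SOLAR + GRAVITO)                       # ~0.81
```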

#3: The moon’s temperature is likely much higher than currently assumed, with solar radiation predicting a mean surface temperature of between 10 and 12 °C depending on the exact emissivity value. Current estimates place the mean lunar temperature at between −24 and −30 °C, but this would mean the moon only receives 194 W/m² assuming an emissivity of 0.98, requiring it to have an albedo of 0.47. It is preposterous that the moon could have such a high albedo, so either the temperature estimates produced by probes are way off, or the moon has a much higher reflectivity, failing to absorb perhaps the more energetic portion (UV, UV-C, visible) of the sun’s spectrum. The moon can be seen to be very reflective from earth, glowing a bright yellowish color; this may be because it reflects more energy. Either way, the probes are wrong or the moon reflects more energy, because no stellar body can absorb more or less radiation than its spherical “unwrapped” surface area, as this would violate the conservation of energy. The only possible solution to this problem is that when radiation hits a body at a shallower angle of incidence (as at the poles), more of it is reflected for a given emissivity value, resulting in a lower-than-theoretical absorbed power density. This has not, to my knowledge, been mentioned before as a solution to some of the temperature paradoxes.

#4: The present concept of an albedo of only 0.44 is entirely erroneous and serves only to underestimate the heating power of the sun. The earth receives at least 300 W/m², because the gravito-thermal effect only generates around 75 W/m², yet the earth must radiate close to or exactly 375 W/m², since our thermometers do not lie: the earth is at 13.9 °C, and there is no arguing with this number, though the exact flux depends on the absorptivity value assumed. The albedo has been deliberately overestimated by excluding the entire 55% of the solar spectrum that is infrared, in order to show that a “greenhouse effect” is absolutely required to generate a warm climate.

#5: Using the ideal gas law, the temperature estimates of the Mesozoic can be explained by a denser atmosphere. In fact, since solar radiation should not have been much more intense, the ideal gas law can be used to predict with near-perfect accuracy the density of the Mesozoic atmosphere by simply using the isotope records. The Paleocene-Eocene Thermal Maximum may have featured temperatures as much as 13 °C hotter than today, or 28 °C, as recently as 50 Myr ago. In order to arrive at the required pressure and density, we can simply construct a continuum from the sea-level pressure and temperature. To do this, we must establish the hydrostatic pressure gradient. A linear hydrostatic gradient is only valid for incompressible solids; compressible columns “densify” with depth. I have performed this calculation up to a temperature of 25.2 °C. Because the calculation is performed manually, it is very time-consuming; I plan on continuing to a temperature of 30 °C, equivalent to Mesozoic temperatures. From the chart below you can see that an increase in atmospheric density of only 15.07% generates an additional 10.2 °C of surface temperature. Robert Dudley argues the oxygen concentration of the late Paleozoic atmosphere may have risen as high as 35%; assuming nitrogen levels are largely fixed, since nitrogen is unreactive, this would have resulted in an atmosphere with a density 12.6% higher, but the actual number is likely much higher, since the high temperatures of the Phanerozoic necessitate a denser atmosphere. The origin of atmospheric nitrogen is quite mysterious: nitrogen is sparse in the crust and does not form compounds easily; the only abundant nitrogenous compounds are ammonium ions, which have been bound to silicates and liberated during subduction and volcanic activity. The temperature lapse rate with altitude is a constant value, since gas molecules evenly segregate according to the local force that confines them together.
But the relationships between pressure, density, and temperature are not linear and can only be arrived at by performing an individual calculation for each hypothetical gas layer, generating a mean density for each layer to predict the amount of compression it exerts on the layer below. With the amount of compression per layer established, it is then possible to use this pressure value to arrive at the density. The calculation is very simple: use a constant thermal gradient of 0.006 °C/m and average the density of each increment of gas layer. The ideal gas law cannot predict pressure and density from temperature alone; you cannot just “solve” for density and pressure with temperature as the only known variable. You must establish pressure as well, and this can only be done by knowing the mass above the gas. I have not found an exponent that can arrive at this number; the calculation has to be performed individually for each discrete layer.
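The layer-by-layer procedure described above can be sketched as a short loop. The sea-level starting values (101,325 Pa, 15 °C) and the 1 m layer thickness are my assumptions for illustration, so the resulting densities will not exactly reproduce the manually tabulated chart, but the method is the same: fix the lapse rate, then march downward, using each layer's mean density to compute the added hydrostatic pressure.

```python
# March downward from sea level in thin layers: hold the thermal gradient
# fixed, compute each layer's mean density from the ideal gas law, and add
# the corresponding hydrostatic pressure increment. Starting values (1 atm,
# 15 C) and the 1 m step are assumptions for illustration.
R_AIR = 287.05   # specific gas constant of dry air, J/(kg*K)
G = 9.81         # gravitational acceleration, m/s^2
LAPSE = 0.006    # thermal gradient used in the text, C per metre

def column_below_sea_level(depth_m, dz=1.0, p0=101325.0, t0_c=15.0):
    """Return (pressure, temperature, density) at depth_m below sea level."""
    p, t = p0, t0_c + 273.15
    for _ in range(int(depth_m / dz)):
        rho_top = p / (R_AIR * t)          # density at the top of the layer
        t_next = t + LAPSE * dz            # warmer with depth, fixed gradient
        rho_bot = p / (R_AIR * t_next)     # approximate density at the bottom
        p += 0.5 * (rho_top + rho_bot) * G * dz  # mean-density hydrostatic step
        t = t_next
    return p, t, p / (R_AIR * t)

p, t, rho = column_below_sea_level(1700.0)   # 1700 m -> +10.2 C at this gradient
rho_surface = 101325.0 / (R_AIR * 288.15)
density_gain = rho / rho_surface - 1.0       # fractional density increase
```

With these assumed starting values the loop gives a density increase of roughly 18% for the +10.2 °C depth, of the same order as the ~15% figure in the chart.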

[Chart: negative gravito-thermal numbers]

If we hypothetically dug out an entire cavern in the earth a few kilometers deep, the atmosphere would not increase in density; it would simply “fall down” and reach a lower altitude, and the pressure wouldn’t change. Conversely, by adding mass, the denser atmosphere reaches a greater altitude and extends further into space. Current atmospheric losses to space are 90 tons annually, or just 0.000087% over 50 million years. Clearly, some form of mineralization or solidification transpired whereby gaseous oxygen ended up bound into solids. Certain chemical processes removing the highly reactive oxygen and forming solids must have occurred starting during the Mesozoic. An alternative scenario is that gigantic chunks of the atmosphere were ripped away during the average 450,000-year geomagnetic reversal interval, when the earth is most vulnerable to solar energetic particles. Geomagnetic reversals are thought to leave the earth with a much weaker temporary magnetic field, which could generate Mars-like erosion of the atmosphere. The last reversal, called the “Brunhes-Matuyama reversal”, occurred 780,000 years ago. The duration of a geomagnetic reversal is thought to be 7,000 years. For a polarity reversal to occur, a reduction in the field’s strength of 90% is required. Estimates place the number of geomagnetic reversals at a minimum of 183 over the time frame spanning back to 83 Myr. Biomass generally contains 30-40% oxygen; since bound oxygen does not appear to be released back into the atmosphere during its decomposition into peat and other fossil materials, it is conceivable that much of the paleo-atmosphere’s mass is bound up in oxidized organic matter buried in the crust as sedimentary rock, with only a tiny fraction reduced into hydrocarbons. Organic matter is thus an “oxygen sink”.
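The escape arithmetic is a one-liner, assuming the 90 tons lost annually and a total atmospheric mass of 5.148e+18 kg:

```python
# Fraction of the atmosphere lost to space over 50 Myr at 90 tonnes/yr.
loss_kg = 90_000.0 * 50e6                           # 90 t/yr in kg, over 50 Myr
atmosphere_kg = 5.148e18                            # total atmospheric mass
fraction_percent = loss_kg / atmosphere_kg * 100    # ~0.000087%
```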

#6: Short-term climate trends can only be explained by solar variation, since atmospheric pressure only changes over very long periods of time due to the mineralization of oxygen. A tiny change in solar irradiance of ±3 W/m² can produce a temperature change of 0.7 °C. A 10 W/m² difference in solar irradiance drops the surface temperature by 2.3 °C, enough to cause a mild glaciation. But there is no evidence that fluctuations in the magnetic activity of the photosphere alone can produce such changes, requiring an intermediate mechanism, namely cosmic ray spallation of aerosols.

#7: Joseph Postma’s theory of dividing solar radiation by two is valid only geometrically, but it does not change temperature, because geometry, tilt, and rotation speed do not affect the total delivered insolation or power density. The real “flat earth” theory is the removal of infrared and the fake “albedo” of 0.44. Postma attempted to increase the available power density of the sun by averaging it over a smaller area, but this cannot increase temperature, since there is still the other half of the sphere radiating freely into space. A “sun-only” model of climate simply cannot be made to work; it is utterly ridiculous.

#8: The gravito-thermal effect, as predicted by Roderich Graeff, is indeed a source of infinite work, but it does not violate the 2nd law, since the work is derived from the continuous exertion of gravitational acceleration. This is something Maxwell and Boltzmann were wrong about. Gravitational acceleration on earth, which is quite strong at 9.8 m/s², provides an infinite source of work to generate heat, just as brown dwarfs glow red due to gravitational compression, or molecular clouds collapse to form nuclear cores. Brown dwarfs usually have surface temperatures of around 730 °C.

#9: Venus would have a temperature of 40 °C without its dense 91-bar atmosphere, but Venus’s true temperature is likely closer to the 480 °C predicted by the ideal gas law, although the supercritical, quasi-liquid nature of the Venusian atmosphere may somewhat compromise the law’s accuracy at low altitudes. Denser atmospheres extend further into space, that is, they are “taller”, but they should not have a significantly different thermal gradient or “lapse rate”.
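As a sanity check on the 480 °C figure, the ideal gas law can be applied directly at the Venusian surface. The inputs here (92 bar surface pressure, 65 kg/m³ surface density, pure CO2) are standard reference values I am assuming, not numbers from this article:

```python
# Ideal-gas estimate of the Venusian surface temperature: T = p*M / (rho*R).
# The inputs (92 bar surface pressure, 65 kg/m^3 surface density, pure CO2)
# are standard reference values assumed for this sketch.
R = 8.314        # universal gas constant, J/(mol*K)
M_CO2 = 0.04401  # molar mass of CO2, kg/mol

def ideal_gas_temperature(pressure_pa, density_kg_m3, molar_mass):
    return pressure_pa * molar_mass / (density_kg_m3 * R)

t_venus_k = ideal_gas_temperature(9.2e6, 65.0, M_CO2)  # ~749 K
t_venus_c = t_venus_k - 273.15                         # ~476 C
```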

We can now finally answer: does CO2 cool or warm the earth? Strictly speaking, radiatively it can do neither, because it is utterly incapable of changing the energy flux. Some may argue that because releasing CO2 adds mass, and hence partial pressure, to the atmosphere, it could produce a tiny amount of warming. It turns out that because hydrocarbons contain a substantial amount of hydrogen, and hydrogen forms water when combusted, the net result of hydrocarbon combustion is a reduction in atmospheric pressure and hence temperature, although the magnitude of this effect is extremely small. How ironic that our three-century-long voracious appetite for carbon has cooled our climate by a tiny fraction of a degree!

By burning hydrocarbons, hydrogen converts atmospheric oxygen into liquid water, which is nearly a thousand times denser than air, so there is a net reduction in atmospheric mass. Refined liquid hydrocarbons contain 14% hydrogen on average, and combusting 1 kg of hydrogen requires 8 kg of oxygen. Per ton of hydrocarbon combusted, 1120 kg of oxygen is converted to water. Most of this water condenses into liquid, so it results in a reduction of atmospheric mass. The 86% of the hydrocarbon that consists of carbon forms carbon dioxide, consuming 2.66 kg of oxygen per kg of carbon, so a further 2291 kg of oxygen is consumed, releasing 3.66 kg of CO2 per kg of carbon, or 3151 kg. If we subtract the oxygen in that CO2, we are left with the 860 kg of carbon, less than the 1120 kg of oxygen that has been converted to water, so we are left with a net mass deficit of 260 kg per ton of hydrocarbon burned. Therefore, the combustion of hydrocarbons reduces the density of the atmosphere, increases the amount of water on earth, and must result in a net cooling effect, albeit an insignificant one.
The total estimated hydrocarbon burned since 1750 is 705 gigatons, representing a 0.0000347% reduction in atmospheric mass, or 1.7907e+14 kg of oxygen removed from the atmosphere, which is 5.1480e+18 kg. Using the ideal gas law, the predicted cooling is -0.00014°C.
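The mass balance above can be reproduced in a few lines of Python. This is a sketch using the figures stated in the text (the 14%/86% split and the 5.148e18 kg atmospheric mass); with 860 kg of carbon per ton the deficit works out to about 260 kg, close to the rounding used elsewhere in this article.

```python
def mass_balance_per_ton(h_frac=0.14):
    """Net atmospheric mass change (kg) per ton of hydrocarbon burned."""
    h = 1000 * h_frac            # kg hydrogen per ton of fuel
    c = 1000 * (1 - h_frac)      # kg carbon per ton of fuel
    o2_to_water = 8.0 * h        # O2 locked into liquid water, leaving the air
    # The carbon re-enters the air as part of CO2, adding its mass back,
    # so the net change is carbon added minus oxygen removed as water.
    return c - o2_to_water

net = mass_balance_per_ton()
print(net)                       # -260.0 kg per ton of hydrocarbon

burned_tons = 705e9              # ~705 Gt burned since 1750
atm_kg = 5.148e18                # total mass of the atmosphere, kg
print(f"{abs(net) * burned_tons / atm_kg:.2e}")  # 3.56e-05 fractional loss
```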

The only possible way humans could warm the planet is by releasing massive amounts of oxygen from oxides to significantly raise the pressure of the atmosphere, but without available reducing agents, this would be impossible. It can thus be concluded that under the present knowledge of atmospheric physics, it is effectively impossible for technogenic activity to raise or lower temperatures. Short-term variations (the Maunder minimum, the Medieval Warm Period, etc.) are driven solely by sunspot activity caused by changes in the sun’s magnetic field. No other mechanism can be invoked that stands up to scrutiny.

The fallacious albedo of 0.44 and the missing infrared 

The albedo estimate of the earth is deliberately inflated to buttress the greenhouse effect. At least 55% of the sun’s energy is in the infrared regime, and virtually all of this energy would be absorbed by the surface, with very little of it reflected by the atmosphere.

The Moon’s temperature anomaly

The Moon receives a mean solar irradiance almost identical to the earth’s, about 360 watts per square meter. If the moon’s regolith is assumed to have an emissivity of 0.95, the mean surface temperature works out to 12.76°C, far higher than the 198-200 K (about -75°C) estimate of Nikolov and Zeller. The Moon is either considerably more reflective than present estimates suggest, or it is much hotter; there can be no in-between unless we abandon the Stefan-Boltzmann law, which would make any planetary temperature prediction virtually impossible. The Moon should have virtually no “albedo” because it has effectively no atmosphere capable of reflecting any significant amount of radiation.
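The figure above follows from inverting the Stefan-Boltzmann relation S = εσT⁴; a minimal sketch, assuming the 360 W/m² mean irradiance and 0.95 regolith emissivity stated in the text:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m2 K4)

def equilibrium_temp_c(flux_w_m2, emissivity):
    """Invert S = eps * sigma * T^4 for the mean surface temperature in C."""
    return (flux_w_m2 / (emissivity * SIGMA)) ** 0.25 - 273.15

# 360 W/m2 mean irradiance, 0.95 emissivity, as assumed above
print(round(equilibrium_temp_c(360, 0.95), 1))  # 12.8
```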

The ideal gas law can be used to predict lapse rate and planetary temperatures with unparalleled accuracy.

The ideal gas law predicts with nearly 100% accuracy the atmospheric lapse rate and the temperature at any given altitude. The calculation was performed for a typical airline flight level, since there is extensive temperature data to confirm the results. The answer was minus 56°C, within decimal points of the measured temperature at that altitude. Therefore we can state with near certainty that the temperature of any gas body subject to a gravitational field is solely determined by its density (molar concentration) and pressure, a function of the local gravity. The atmosphere is thus a gigantic frictional heat engine, continuously subjecting gas molecules to collisions and converting gravitational energy to heat, much like a star does, using the core pressure, a product of its massive gravity, to fuse nuclei. Brown dwarfs are compressed just enough by gravity to achieve core pressures of around 100 billion bar, generating enough heat in the process that their outer surfaces glow red. The same principle is at work in a main-sequence star, a brown dwarf, or a low-pressure planetary atmosphere. The temperature of a gravitationally compressed gas volume should be set by the frequency and intensity of the collisions. If this is correct, the kinetic theory of gases should predict the temperature of any body of gas on any planet with near-perfect accuracy, regardless of solar radiation. It is not solar radiation that heats the gas molecules, but solely gravity. If a planet receives only a small amount of solar irradiance, then the layer of the atmosphere continuously exposed to the cold surface will be cooled, with some of its gravitational collision energy transferred to that surface, so the temperature of the gas will fall below the equilibrium temperature predicted by the ideal gas law. This is precisely what we see on earth.
Since a pressure of 101.325 kPa at a molar density of 42.2938 mol/m³ yields 14.99°C, but the mean surface temperature is only 13.9°C, the earth must receive at least 303 watts per square meter assuming an emissivity of 0.975. This corresponds closely to an infrared-adjusted albedo of less than 20%. The earth must then be heated to around minus 1°C by solar radiation alone. For Mars, with an atmospheric pressure of 610 pascals and a density of around 20 g/m³, the predicted atmospheric temperature is -110.11°C. Mars receives a spherical-average irradiance of 147.5 W/m², corresponding to -45.88°C, which is reasonably close to the -63°C estimate, so just as with the moon, probes appear to have underestimated the temperature.
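The arithmetic is a direct application of T = P/(nR), with n the molar concentration; a sketch using the figures above (the 44.01 g/mol CO2 molar mass for Mars is an assumption on our part):

```python
R = 8.314462618  # molar gas constant, J/(mol K)

def gas_temp_c(pressure_pa, molar_density_mol_m3):
    """T = P / (n R), with n the molar concentration in mol/m3."""
    return pressure_pa / (molar_density_mol_m3 * R) - 273.15

# Earth at sea level: 101.325 kPa and 42.2938 mol/m3
print(round(gas_temp_c(101325, 42.2938), 2))  # 14.99

# Mars: 610 Pa and ~20 g/m3 of CO2 (44.01 g/mol assumed here)
print(round(gas_temp_c(610, 20 / 44.01), 1))  # -111.7
```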

Nikolov and Zeller erroneously assumed a one-bar atmosphere could produce 90 K worth of heating, but there is insufficient kinetic energy at a pressure of 1 bar to produce this heat. They are correct in rejecting the unphysical greenhouse effect, but they cannot count on a 1-bar atmosphere to produce 90 kelvin of heating. The ideal gas law predicts a temperature of exactly 15°C for a 1013 mbar atmosphere, and it predicts 440°C for Venus at 91 bar, so it must be correct. Harry Dale Huffman calculated the temperature of Venus at 49 km, where its atmospheric pressure equals earth’s (1013 mbar): the temperature is exactly 15°C! The molar mass of the molecules does not matter, only their concentration and the force pushing them together, which produces more violent and frequent collisions. Postma’s theory that we must treat the earth as a half-sphere exposed to solar radiation is theoretically correct insofar as the sun never shines on the entire surface at once, but it does not change the mean energy flux per unit area, which is what a given temperature requires. The interval of solar exposure time does not change the mean energy flux. Temperature can only be changed by raising or lowering the energy delivered to the body. Since much of the sun’s energy is in the infrared spectrum, we can assume close to 83% of the sun’s energy contributes to the heating of the surface. Current climate models ignore the fact that the sun produces 55% of its energy in the infrared spectrum, all of which is absorbed. The “real” albedo is in fact much lower, which allows more of the sun’s energy to be absorbed.

What about short term variation in temperature?

Carbon dioxide has been a useful little demon for climate science, since it serves as a veritable “knob” that entirely controls climate. Modern climate science is such a fraud that they will have you believe there was no polar ice during the Eocene because of carbon dioxide! Of course, Arrhenius’s demon is but a fictional entity, so if we want to understand short-term variation, clearly we cannot claim that the atmosphere has gained any mass since the Maunder minimum!

Short-term variations are mediated by cosmic-ray spallation of sulfuric acid and other atmospheric aerosols, which produces nanometer-sized cloud condensation nuclei. This increases the reflection of the more energetic UV portion of the spectrum and lowers global temperatures by plus or minus a few degrees, which is what we have witnessed over the past millennia.
Isotope records of beryllium-10, chlorine-36, and carbon-14 provide ample evidence that cosmic rays indeed mediate temperature, because they correlate sharply with ice-core temperature records. This phenomenon is called “cosmoclimatology,” a term coined by Henrik Svensmark, who first proposed the mechanism. Don Easterbrook and Nir Shaviv are two other proponents of this mechanism. Disappointingly, judging from comments in their lectures available on YouTube, all still seem to endorse the greenhouse effect, comparing the “forcing effect” of cosmic rays to that of CO2.
Variation in cosmic-ray flux is mediated by sunspot activity: large magnetic fields burst out of the photosphere and produce visible black spots. When these magnetic fields are stronger and more numerous, fewer solar energetic particles or cosmic rays reach earth, producing fewer aerosols and allowing more UV to strike the earth.


A Thermodynamic Fallacy

We must first define what POWER is. The sun delivers power, not energy. Energy, dimensionally, is mass times length squared divided by time squared: L²M¹T⁻². Power is energy over time: energy divided by the time spent delivering it.

Energy is not power. Power is flux, a continuous stream of a “motive” substance capable of performing work. In dimensional analysis, power is mass times length squared divided by time cubed: L²M¹T⁻³. Power could be said to be analogous to pressure times flow rate, while energy is just the pressure. Note that below we use the terms energy flux and power interchangeably; they carry the same units.

The greenhouse effect treats energy as a compressible medium with an infinite source of available work

Work or energy flux cannot be compressed or made denser by slowing the rate at which energy leaves a system; doing so treats energy flux as a multipliable medium, which it is clearly not. Using mechanical analogies for the sake of clarity, we can express energy flux as gas flowing through a pipeline: the energy flux is analogous to the gas molecules, and the area over which this energy is expressed is the surface of the earth. Using the pipe analogy, we can invoke Bernoulli’s theorem to show that mass is always conserved. If we squeeze our pipe, the flow area shrinks but the velocity increases, so the mass flow rate is unchanged, a basic law of proportionality or equivalence. With the greenhouse effect, the energy flux flowing through the pipeline is subject to a constriction (a reduction in cooling), and the constriction supposedly alters the ability of energy to exit the pipeline, thereby increasing the density of energy particles within the volume. This is, in essence, the greenhouse effect’s power-multiplication claim. By “constricting” the pipe, energy-flux “particles” pile up and increase in proximity, creating a “zone” of higher intensity. But this is clearly a fallacy, since it produces additional energy flux density (work) from nothing. Such a scheme has found a way to increase power density without changing the total delivered power or the area/volume; it has therefore created work from nothing and thus cannot exist in reality. No degree of constriction (analogous to back radiation) can increase the flux density required to heat the earth.

The fact that a century’s worth of top scientists failed to identify this error strongly confirms our hypothesis that most technology and discovery is largely a revelatory phenomenon, as opposed to being the expression of deep insight. The fact that modern science cannot even explain the climate of the very earth we live on is quite astonishing. Modern technology can construct transistors a few nanometers in diameter, yet we are still debating elementary heat flow and energy conservation axioms.

Some GHE deniers go wrong by incorrectly stating that a radiatively coupled gas can “cool” the atmosphere; this makes the same error that led to the erroneous greenhouse effect in the first place. Cooling can never lower the temperature of a continuously irradiated and radiating body; such a scheme is impossible because it would eventually deplete all the energy from the body. The terms heating and cooling with respect to the atmosphere need to be dispensed with altogether. Think of the atmosphere as a water wheel: damming up the river in front of the wheel will not speed it up, since its speed is solely determined by the mass flow and velocity of the river beneath it. A body receiving a steady-state source of radiation can never be cooled, via radiation, at a rate greater than it is heated, owing to the reversibility of emissivity and absorptivity; in other words, cooling can never exceed warming and vice versa. The fundamental basis of the greenhouse effect is the assumption that power delivered can exceed power rejected. Since the sun continuously emits “new” radiation each second, the radiation that is “consumed” and converted to molecular kinetic energy is always released at an equal rate to that at which it is delivered. Radiation forms a reversible continuum of thermal energy transfer, without the ability to accumulate or transfer this heat energy at a greater rate than it is received. Conductive or convective cooling has no applicability to radiative heat transfer in the vacuum of space, since convective or conductive heat-transfer scenarios on earth have virtually infinite low-temperature bodies to cool to. Therefore, all stellar bodies are in perfect radiative equilibrium, neither trapping, storing, nor rejecting more radiant energy than they absorb per second.

The confusion over the “amplifiability” of power

We have already defined power as mass times length squared divided by time cubed, expressed in dimensional analysis as L²M¹T⁻³. Energy is a cumulative phenomenon: energy as a stored quantity is punctuated, while power or energy flux is a continuous or “live” phenomenon, measurable only in its momentary form, imparting action on a non-stop basis. Mice can produce kilowatt-hours’ worth of energy by carrying cheese around a house over the course of a few years, but they can never produce one kilowatt. A one-watt power source can produce nearly 9 kWh in a year, but nine watt-hours can never deliver nine kilowatts! Energy gives the wrong impression that power is somehow accumulated. This rather confusing distinction between the different expressions of energy, being inherently time-dependent, led to the fallacy of the greenhouse effect. Because energy can be “stored” and accumulated to form a larger sum, it was assumed energy flux could be amplified as well, by simply slowing the rate of energy loss relative to energy input, leading to an inevitable increase in temperature. Altering the rate of energy loss can never increase flux, as this would mean insulation could amplify the output of a heater. Insulation can only prolong the lifespan of a finite quantity of thermal energy; it has no bearing on flux values or power. Power is a constant value; it cannot be multiplied, amplified, or attenuated. Power is a time-dependent measure of the intensity of the delivery of work or energy; power is simply energy divided by time.
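The watt-hour arithmetic above is easy to make concrete. A small sketch of the asymmetry: energy accumulates over time, but the stored sum says nothing about the rate at which it can be delivered.

```python
HOURS_PER_YEAR = 8760

def energy_kwh(power_w, hours):
    """Energy is power integrated over time; the reverse is not defined."""
    return power_w * hours / 1000

print(energy_kwh(1, HOURS_PER_YEAR))  # 8.76 kWh from one watt over a year

# The stored 8.76 kWh fixes no delivery rate: the same energy dumped in
# one second is a ~31.5 MW burst, spread over a century a ~10 mW trickle.
joules = energy_kwh(1, HOURS_PER_YEAR) * 3.6e6
print(joules)                                     # 31536000.0 J
print(round(joules / (100 * 365.25 * 86400), 4))  # ~0.01 W over a century
```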

To increase the temperature of the planet, one would need to increase the flux. 

Slowing the rate of heat loss can only extend the finite internal energy of a body that is donated a quantity of energy and never replenished; it is unable to raise the temperature of a continuously heated body, because such a body’s emissions are the product of its own temperature, and recycling these emissions can never exceed the source temperature.

A good analogy is low-grade heat (say 100°C) versus high-grade heat. One could have a million watts of “low-grade heat,” but this low-grade heat can never spontaneously upgrade itself to even a single watt of high-grade heat, say at 1000°C. Heat can never be “concentrated” to afford a higher temperature; it must always follow the law of “disgregation,” the original true meaning of “entropy” as coined by Clausius. The “lifespan” of a concentrated form of energy can be prolonged or extended by modulating the perviousness or retentiveness of the storage medium, but the time-invariant flux remains constant. The greenhouse gas theory is therefore quite an elementary mistake: the conflation of the permeability of heat with the flux intensity required to achieve said heat. To raise the temperature of the earth to 15°C, the total flux must increase; one can never trap or amplify a lower flux value into a higher one, because flux is not a modulable entity.

Many greenhouse effect “slayers” get worked up over the concept of back radiation and radiative heat transfer from hot to cold, but this is not the core issue with the greenhouse effect: the greenhouse effect is a 1st law violation, not a 2nd law violation. Of course, one still cannot warm a body with the less intense radiation emitted by a colder surface, but this is a secondary problem; the principal error is the confusion between flux and energy.

Low-grade heat cannot be transformed into high-grade heat; such a scheme requires energy input and an “upgrading” heat pump, usually employing exothermic chemical reactions such as the absorption of water into sulfuric acid. Heat-upgrading heat pumps exist in industry and evidently do not violate any laws of thermodynamics, because they work! But these pumps require work to perform the “upgrading” in the first place.

The greenhouse effect is impossible because it leads to a buildup of energy; it forbids a thermal equilibrium. All stable systems are in perfect thermal equilibrium. The reason the conservation of energy (first proposed by von Mayer) is a universal law of nature is that its absence would mean the spontaneous creation or destruction of energy. Since energy and mass are the same substance differently expressed (first proposed by Olinto De Pretto), a universe without the 1st law would disappear within seconds. Stability requires continuity, and continuity requires conservation. Energy flux is not a cumulative phenomenon; it is not possible to trap and store extra energy, since this energy would continuously build up and lead to thermal runaway. Energy itself is cumulative, it can be built up, drawn down, and stored, but flux cannot be: flux represents a rate of flow, while energy represents the cumulative sum of that flow over time. Energy can be pumped or accumulated to form a larger sum over a period of time, but flux can never be altered; it is impossible to change the power output of an engine, laser, or flame by any scheme that does not add extra work. If greenhouse gases store more heat than can otherwise flux into space, this greater heat content generates more radiation by raising the temperature, and now this radiation is blocked from leaving, generating still more heating of the surface, which produces yet more radiation. The process runs to infinity and therefore must be unphysical; such a scenario is impossible because it is totally unstable. A mechanism must exist that continuously provides the thermal energy to maintain a constant surface temperature, and this mechanism cannot be solar radiation alone.

Kirchhoff’s law forbids emissivity from exceeding absorptivity and vice versa, so the greenhouse effect violates Kirchhoff’s law. One cannot selectively “tune” emissivity to retain more heat and slowly build up a “hotter” equilibrium. By definition, one cannot “build up” an equilibrium, since an equilibrium requires input and output to be perfectly synced; the greenhouse effect is precisely the condition in which these values are not synced but considerably diverged, with more retained than imparted into the system, and such a condition inevitably leads to infinity.

There are two ways of falsifying the greenhouse effect. One is to find errors in the predictive power of a CO2-driven paleoclimate or ancient climate record; a better way is to identify and highlight the major physical errors in the mechanism itself.

During the Paleocene-Eocene Thermal Maximum, there was no polar ice and sea levels were considerably higher, likely close to a hundred meters higher.

Henry’s law is temperature dependent, when liquids rise in temperature, the solubility value for gases decreases, so less gas can be stored in oceans. CO2, therefore, outgases from the oceans following a temperature increase.

The difference between 1600 and 400 ppm cannot account for the complete absence of ice in the Eocene, the ice ages, or millennial temperature variation; this would require close to 5000 ppm CO2 according to the current 1°C/doubling sensitivity. The Paleocene-Eocene maximum was up to 13°C warmer, but CO2 concentrations were only 3.3 times higher than at present, which would translate to a sensitivity of 4°C per doubling, far too high even if one subscribes to the non-existent greenhouse effect. Even water vapor, which on average accounts for 2.5% of the volume of the atmosphere, would decrease emissivity by 2.5%, raising or lowering temperature by only 0.32 degrees.

Even if the concept of back radiation were valid, which it is not, the tiny concentration of CO2, even at an absorptivity of 1, would yield only a minuscule difference in net atmospheric emissivity. CO2 is 0.042% of the atmosphere by volume; assuming each CO2 molecule acts as a perfect radiant barrier, the total increase in emissivity can, by definition, only be 0.042%.
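Under the assumption above that emissivity can change by at most the CO2 volume fraction, and illustratively scaling temperature as ε^(-1/4) (an assumption for the sake of the sketch, not a claim about the real atmosphere), the implied shift is tiny:

```python
# Illustrative only: assume emissivity changes by at most the CO2
# volume fraction (0.042%), and scale temperature as eps**(-1/4).
T0 = 288.15       # K, roughly the present mean surface temperature
d_eps = 0.00042   # maximal emissivity change per the argument above

dT = T0 * ((1 / (1 - d_eps)) ** 0.25 - 1)
print(round(dT, 3))  # 0.03 K
```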

Milankovitch cycles cannot account for ice ages, since the distance to the sun does not change, or changes only very slightly.

Loschmidt firmly believed, contrary to Maxwell, Boltzmann, Thomson, and Clausius, that a gravitational field alone could maintain a temperature difference which could generate work. Roderich W. Graeff measured gravitational temperature gradients as high as 0.07 K/m in highly insulated, hermetic columns of air, which corroborates Loschmidt’s theory and confirms the adiabatic atmosphere theory.

“Thereby the terroristic nimbus of the second law is destroyed, a nimbus which makes that second law appear as the annihilating principle of all life in the universe, and at the same time we are confronted with the comforting perspective that, as far as the conversion of heat into work is concerned, mankind will not solely be dependent on the intervention of coal or of the sun, but will have available an inexhaustible resource of convertible heat at all times” — Johann Josef Loschmidt

“In isolated systems – with no exchange of matter and energy across its borders – FORCE FIELDS LIKE GRAVITY can generate in macroscopic assemblies of molecules temperature, density, and concentration gradients. The temperature differences may be used to generate work, resulting in a decrease of entropy”—Roderich W. Graeff


[1] http://theendofthemystery.blogspot.com/2010/11/venus-no-greenhouse-effect.html

[2] https://aip.scitation.org/doi/10.1063/1.1523808

Viability of high speed centrifugal ore separation

Christophe Pochari, Pochari Technologies, Bodega Bay, CA.

707 774 3024, christophe.pochari@pocharitechnologies.com

In this text, we discuss the possibilities of extracting nickel from ultramafic rock both by closed-loop acid leaching with carbonation and by using centrifuges for density-based segregation.

Note to the reader: The method proposed below (centrifugal separation) is purely theoretical; it is not a technology in the strict sense, unlike our pneumatic tower or aluminum oxide heat exchanger, which can be calculated to exacting precision and whose real-world performance can be confidently predicted. This inquiry falls under the category of research interest, for the sake of probing what is possible with current human knowledge. We do not know precisely to what extent different forms of ultramafic rock host nickel-bearing minerals in “clusters” or inclusions, and to what extent nickel is widely diffused through the host mineral at the atomic scale or even nanoscale. This question represents the major unknown with respect to the viability of the proposed concept. The only way to determine this is to scan a sample of ultramafic rock with a mass spectrometer and produce a color-coded map of the crystal, because optical microscopy alone is insufficient. An alternative method may simply involve crushing a representative sample of ultramafic rock and measuring the variance in particle density through settling. Until this is performed, all discussion is purely theoretical. For the proposed method to work, fine comminution, down to 10 microns, must be able to produce particles of discrete density, however minute the difference, for gravitational segregation. The basic assumption is that because nickel has an atomic weight of 58.7 while magnesium and silicon are 24 and 28 respectively, a micron-sized particle carrying even slightly more diffused nickel will possess more mass than one carrying slightly less. These small differences in mass will allow segregation. So while we can calculate the energy consumption of the centrifuge and comminution, we cannot calculate the segregation efficiency, because we cannot know the diffusiveness of the nickel within the host rock, for which no data can be found.
While we believe it is possible to liberate sufficient discrete particles of more nickel-concentrated rock, we cannot be certain so we treat the entire proposal hereinafter as conceptually feasible but unproven.

Note: this treatise is a speculative effort to evaluate whether a non-chemical means of ore separation is possible. It may very well be the case, and probably is, that chemical separation is the only viable method; in that case a method should be developed to employ a closed-cycle system in which the acid reactant is recycled continuously. As ore grade falls, acid consumption grows proportionally, and in the absence of an effective regeneration scheme it is economically absurd. In our opinion, in light of the ability to regenerate the sulfuric/nitric acid and carbon dioxide used in nickel extraction, it makes little sense to pursue centrifugation. Efforts have been made to extract nickel from ultramafic rocks: for example, the Chinese patent CN102517445A, “Method for extracting minerals from olivine-serpentine ore,” deals with an acid-leach method, and another Chinese patent, CN1972870B, “Process for complete utilization of olivine constituents,” deals with a similar method. The discussion of centrifugation below is thus merely an intellectual inquiry. Such a scheme could operate at the expense of energy alone, namely comminution and crushing. Sulfuric acid does not react with silica; silica, being acidic, reacts only with bases, not with an acid. Thus only the magnesium oxide will react with the acid, along with the trace metals. When magnesium oxide reacts with sulfuric acid, it yields magnesium sulfate. Magnesium sulfate can then be reacted with water to yield brucite (magnesium hydroxide) and sulfuric acid in the following reaction: MgSO4 + 2H2O → Mg(OH)2 + H2SO4. Sulfuric acid will first react with magnesium carbonate according to the following reaction: MgCO3 + H2SO4 → MgSO4 + CO2 + H2O. The remaining metal oxides, excluding the inert silica, are negligible by mass and may be ignored for the sake of this analysis.
In the case of nickel minerals, sulfuric acid will react with the nickel to produce a nickel sulfate compound, which can then be decomposed to liberate the sulfur and pure nickel oxide. In a perfect cycle, all the sulfur can be recovered, reducing the cost of acid procurement; sulfur costs $183/ton to purchase in bulk, and without recovery it must be procured each time, so the cost per kg of nickel would be exceedingly high. Sulfur is not a widely abundant element, occurring in the crust at a concentration of only 350 mg/kg. It must be remembered that these are chemical reactions; there is no destruction of matter, only rearrangement, something Lavoisier knew in the 18th century! Carbon dioxide can be used to carbonate the magnesium, increasing the nickel extraction yield. The paper “Nickel Extraction from Olivine: Effect of Carbonation Pre-Treatment,” by Rafael M. Santos, suggests that with carbonation of the ultramafic rock (olivine), nickel yields of over 90% can be achieved with nitric acid and a particle size of 35 microns.


Sulfuric acid is not the only acid that can be used; nitric or hydrochloric acid could serve as well, but in the case of nitric acid, its inability to be easily regenerated hampers its use. As long as temperatures are not brought high enough to catalyze the decomposition of the nitrogen-oxygen bond, nitric acid can be recovered. Upon reacting with nitric acid, magnesium oxide turns to magnesium nitrate via the following reaction: MgO + 2 HNO3 → Mg(NO3)2 + H2O. The magnesium nitrate can then be thermally decomposed to magnesium oxide, yielding nitrogen dioxide and oxygen: 2 Mg(NO3)2 → 2 MgO + 4 NO2 + O2. Iron oxide will be attacked by nitric acid and form ferric nitrate via the reaction: Fe2O3 + 6 HNO3 → 2 Fe(NO3)3 + 3 H2O. Ferric nitrate can then be hydrolyzed to yield iron hydroxide and nitric acid: Fe(NO3)3 + 3 H2O → Fe(OH)3 + 3 HNO3. Iron(III) oxide-hydroxide (Fe(OH)3) will then decompose into iron oxide and water: 2 Fe(OH)3 → Fe2O3 + 3 H2O. The above reactions suggest a portion of the nitric acid is destroyed, prompting the use of sulfuric acid instead, since sulfur is easily converted back to sulfuric acid through its own combustion with air and water using a simple vanadium oxide catalyst, the so-called “wet sulfuric acid process.” Upon reacting with sulfuric acid, iron oxide forms a sulfate salt: Fe2O3 + 3 H2SO4 → Fe2(SO4)3 + 3 H2O. Iron sulfate then reacts with water to yield ferrous hydroxide: FeSO4 + 2 H2O → Fe(OH)2 + H2SO4. Ferrous hydroxide then decomposes to magnetite and hydrogen via the Schikorr reaction: 3 Fe(OH)2 → Fe3O4 + H2 + 2 H2O. None of the elemental sulfur is lost, but much of the acid itself is decomposed via these oxide-salt-oxide reactions. These reaction pathways are very elegant: the acids selectively strip oxides into their own salts and back, allowing very efficient separation, provided the operation features an in-house sulfuric acid reactor to operate closed-loop.
The catalyst for sulfuric acid production is 6-8% wt. vanadium pentoxide supported on diatomaceous earth. Catalyst consumption is around 0.26 kg V2O5 per ton of H2SO4 per year. In the case of hydrochloric acid, the reaction is: MgCO3 + 2 HCl → MgCl2 + CO2 + H2O. The magnesium chloride then reacts with water, yielding magnesium hydroxide: MgCl2 + 2 H2O → Mg(OH)2 + 2 HCl. No hydrogen is lost in the magnesium reaction. The magnesium hydroxide then liberates water and yields magnesium oxide: Mg(OH)2 → MgO + H2O. In the above pathway, there has been no oxidation of hydrogen. In the case of iron oxide, the reaction is: Fe2O3 + 6 HCl → 2 FeCl3 + 3 H2O. The ferric chloride then hydrolyzes to iron hydroxide: FeCl3 + 3 H2O → Fe(OH)3 + 3 HCl. The iron hydroxide then decomposes: 2 Fe(OH)3 → Fe2O3 + 3 H2O. Any hydrochloric acid lost requires newly produced hydrogen to regenerate the chlorine. Hydrochloric acid is roughly 1300 times stronger than sulfuric acid on the acid dissociation constant (Ka) scale. Hydrochloric acid can also be regenerated from chloride salts using sulfuric acid, e.g.: MgCl2 + H2SO4 → 2 HCl + MgSO4.

In contrast to sulfuric acid, nitric acid must be produced from fixed nitrogen, an energetically intensive pathway. The attractiveness of acid leaching is that the acid reacts with one element at a time; it does not form “intermediate compounds,” for example a “magnesium-nickel sulfate,” since such a compound is not stable or energetically favored. For example, if one dissolves impure iron in a strong acid, the acid will form a salt of the iron and leave a residue of carbon. The result of acid leaching is thus streams of separate metal “salts” which, while still oxidized, are distinct compounds capable of being precipitated. Nickel and magnesium can only form a compound in the host-rock crystal. In our case, the sulfuric or nitric acid will attack the chemical bond between nickel and magnesium within the ultramafic rock, namely liebenbergite ((Ni,Mg)2SiO4), and form separate compounds of nickel sulfate or nickel nitrate. The tiny dimensions of the comminuted particles generate large surface areas for the acid to attack these chemical bonds. The reactors are usually operated at considerable pressure to intensify these reactions. Acid leaching can thus be thought of as a process of splitting the chemical bonds of the ore to yield distinct, separable metal compounds. Once these compounds are separated and their acids removed, returning them to their original oxides, reduction can begin, and only the desired metal is reduced, leaving the iron, magnesium, and aluminum as oxides. These oxides can then be sold or used elsewhere.

The carbon dioxide used to carbonate the magnesium oxide, which comprises 48% of the rock (silica does not react with CO2), can be released upon heating: MgCO3 → MgO + CO2 (ΔH = +118 kJ/mol), with a decomposition temperature of around 350°C. The carbonation reaction itself is exothermic, so no heat input is needed for that step; only the decomposition requires heat. In theory, with acid separation and complete or near-complete recycling of the sulfur, a nickel cost approaching the base crushing and ore extraction cost can be achieved. If 3.5 kg of valuable transition metals are yielded per ton, then as long as the ore processing costs do not exceed $10 per ton, the metal cost is only $2.86/kg. It is uncertain whether centrifugation can compete with an optimized closed-cycle acid leaching method. We can roughly calculate the cost of ore extraction using a moderate-depth open-pit mining strategy. The cost of rock extraction is principally found in fuel, operator wages, and equipment amortization. Comminution CAPEX costs are exceedingly low: for example, a one-ton-per-hour 35-micron Raymond mill costs only $22,000, or $0.17/ton over a 15-year amortization period.
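The two cost figures above follow from simple division; a sketch, using the article's own numbers (3.5 kg of metal per ton, $22,000 mill, 15-year life) and the assumption of continuous year-round mill operation:

```python
# Back-of-envelope metal cost and mill amortization, per the figures in the text.

def metal_cost_per_kg(processing_cost_per_ton, yield_kg_per_ton=3.5):
    """$/kg of recovered metal, assuming acid and sulfur losses are negligible."""
    return processing_cost_per_ton / yield_kg_per_ton

def mill_capex_per_ton(price=22_000, tph=1.0, years=15, hours_per_year=8760):
    """Amortized comminution CAPEX in $/ton, assuming continuous operation."""
    return price / (tph * hours_per_year * years)

print(round(metal_cost_per_kg(10), 2), round(mill_capex_per_ton(), 3))
# → 2.86 0.167
```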


Blasting costs are virtually nil: one kg of ammonium nitrate can yield 10 tons of rock (Geology for Civil Engineers, C. Gribble, A. McLean, 2017, pp. 239). Since ammonium nitrate costs around $500/ton, this is less than 5 cents per ton of rock liberated. Excavation and transportation are more variable and difficult to calculate, since they depend on geography, distance, and terrain. Taking a small-scale example, since this is our ideal customer base: a typical 50-ton excavator (e.g., a Caterpillar 345) has a cycle time of around 18 seconds; with a bucket volume of 2.45 m3 and a rock bulk density of 50% of the original (due to large void volume), the hourly tonnage processed is 734 tons. The fuel consumption of the excavator is 27 kg/hr, or around $33/hr, or $0.05/ton. The operator wage is $23.2/hr, or $0.03/ton (https://www.bls.gov/ooh/construction-and-extraction/construction-equipment-operators.htm). The bulk density assumes a mean fragment size from rock blasting of around 150 mm and an ultramafic rock density of 2.82 to 3.3 g/cm3 (U.S. Geological Survey Bulletin, Volume 2044, pp. 10; Measurement of Size Distribution of Blasted Rock Using Digital Image Processing, Siddiqui et al). In short, the bulk of the cost is expected to lie in the reactor vessel for carbonation, the sulfuric acid regeneration, and component replacement due to corrosion from the acidic substances. The cost of ore processing is virtually nothing if minimal transportation is performed; on the other hand, if large distances must be traveled, it becomes uneconomic. It is therefore essential that the processing take place close to the ultramafic deposits. The lifespan of the reactor is assumed to be 10 years. Below is a map of global ultramafic rock deposits; notice the U.S. Appalachian range, which has long been known to contain rich deposits of ultramafic rock.
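The excavator arithmetic above can be sketched as follows; the in-bucket bulk density of 1.5 t/m3 (blasted rock at roughly 50% of a ~3 t/m3 in-situ density) is the key assumption:

```python
# Rough excavation throughput and cost per ton for a 50-ton-class excavator,
# using the figures quoted in the text. BULK_T_M3 is an assumed value.

CYCLE_S     = 18      # seconds per dig-swing-dump cycle
BUCKET_M3   = 2.45    # bucket volume, m3
BULK_T_M3   = 1.5     # blasted-rock bulk density, t/m3 (assumption)
FUEL_USD_HR = 33.0    # ~27 kg/hr of diesel
WAGE_USD_HR = 23.2    # BLS median operator wage

tons_per_hour = 3600 / CYCLE_S * BUCKET_M3 * BULK_T_M3
cost_per_ton  = (FUEL_USD_HR + WAGE_USD_HR) / tons_per_hour

print(f"{tons_per_hour:.0f} t/h, ${cost_per_ton:.3f}/ton")  # → 735 t/h, $0.076/ton
```

The ~$0.08/ton total agrees with the $0.05 fuel plus $0.03 wage figures in the text.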
The Piedmont plateau covers 210,000 km2 and consists of a deep layer of oceanic crust below the Appalachian mountain range (Ultramafic Rocks of the Appalachian Piedmont, Steven K. Mittwede). From a mining perspective, this site is ideal since most of the land is privately owned and can be mined on a small scale without multi-decade environmental approval. Christophe Pochari Engineering strongly believes in Rudolf Diesel’s idea of “Solidarismus”, a political philosophy favoring small-scale, decentralized, independent producers and craftsmen free from the exploitation of monopolist corporate entities (Solidarismus: Natürliche wirtschaftliche Erlösung des Menschen [Solidarism: The natural economic salvation of man], Rudolf Diesel, 1903). By allowing the extraction of these valuable materials from more abundant and widely distributed rocks, manufacturers can bypass the markup charged by the monopoly held by multinational mining companies, who must constantly generate large returns for shareholders. For example, if we look at the Rio Tinto stock, we find a net profit margin of 30%! Fortescue Metals Group boasts a net profit of 36%! A healthy competitive industry should not feature net margins above 5%; much work needs to be done to lower these obscene profit margins so that manufacturers can access the materials they need for the cost of actually producing them, not to move ticker symbols on trading floors.

The resource potential of this region is virtually unlimited.





The vast majority of land owned by the federal government is virtually worthless desert. A few national parks in Appalachia belong to federal lands, but the bulk of the Piedmont plateau is in private hands, with relatively low land costs and low population density. Using Zillow, Landwatch, and other websites, we have estimated land costs in the region; most large plots seem to sell for $2500-5000/acre. For a hypothetical 800-acre site (3.25 km2), if excavation takes place at a depth of 100 meters excluding the sedimentary layer, a total of 9 × 10⁸ tons of rock could be generated. Assuming only 15% is actually ultramafic, the potential nickel, chromium, and cobalt yield would be 472,500 tons, worth approximately 7 billion USD at current market prices assuming an average sale price of $15,000 per ton, since not all of it is nickel; around half is chromium, which is only worth $10/kg. So evidently land costs play a very small role in the cost of mining. California also possesses some very interesting ultramafic geologies, predominantly on the North Coast. Unfortunately for California, most of the ultramafic deposits appear to fall right into federal land, so mining will never happen, and if it does, it will be hoarded by greedy mining companies!
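A sketch of the site-yield arithmetic above. The 2.8 t/m3 rock density is an assumption within the 2.82-3.3 g/cm3 range quoted earlier; the 0.35% combined metal grade matches the 3.5 kg/ton figure used throughout the article.

```python
# Hypothetical 800-acre (3.25 km2) site excavated to 100 m depth.
# Density, ultramafic fraction, grade, and price follow the text's assumptions.

AREA_M2         = 3.25e6   # 800 acres
DEPTH_M         = 100
DENSITY_T_M3    = 2.8      # assumed in-situ rock density
ULTRAMAFIC_FRAC = 0.15     # fraction of excavated rock that is ultramafic
METAL_FRAC      = 0.0035   # Ni + Cr + Co grade
PRICE_USD_TON   = 15_000   # blended average sale price

rock_t  = AREA_M2 * DEPTH_M * DENSITY_T_M3
metal_t = rock_t * ULTRAMAFIC_FRAC * METAL_FRAC
value   = metal_t * PRICE_USD_TON
print(f"{rock_t:.2e} t rock, {metal_t:,.0f} t metal, ${value/1e9:.1f}B")
```

This reproduces the ~9 × 10⁸ tons of rock, ~470,000 tons of metal, and ~$7B figures in the text.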


Reactor design considerations for the sulfuric/nitric ultramafic leaching system


In the regenerated-sulfuric process, a corrosion-resistant reactor is filled with comminuted ultramafic rock powder. The reactor is first filled with CO2 and the rock carbonated (the same pretreatment applies if hydrochloric or nitric acid is used). Once carbonation is performed, the reactor is filled with sulfuric acid. The purpose behind carbonation is the selective removal of the magnesia (nickel-bearing) fraction from the inert silica. This allows the sulfuric acid to preferentially target the nickel-bearing carbonate, since more of the nickel-bearing mineral is exposed. Designing a non-glass-coated reactor to handle sulfuric, hydrochloric, or nitric acid is indeed challenging. Most alloys are intensely attacked by these acids; tantalum and Hastelloy provide the best protection. The reactor serves two purposes. First, it is used to introduce carbon dioxide pressurized to 30 bar or more to strip much of the magnesium from the silica and generate a high-surface-area magnesium carbonate mineral that can be more effectively leached by acid. Secondly, the reactor vessel must then be able to withstand the highly corrosive sulfuric acid bath, which will be pressurized to the same 30 bar. The residence time of the carbonation and acid leaching may be several hours or more per batch. The reactor cost is around $2500-3000/m3 depending on size; that is, a 30 cubic meter reactor sells for $76,000 (Weihai Huixin Chemical Machinery Co., Ltd). The leaching and carbonation period is set at 12 hours, so assuming the reactor contains an 80% slurry at a density of 2000 kg/m3, it will yield 2920 kg of metal annually, resulting in a reactor CAPEX cost of only $0.885/kg-metal assuming the lifespan of the reactor is 10 years.
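The reactor CAPEX arithmetic can be parameterized as a sketch. The solids fraction, metal grade, and cycling rate below are illustrative placeholders, not the article's exact figures; the implied annual yield is quite sensitive to the slurry assumptions, so treat the printed numbers as an example of the method only.

```python
# Generic batch-reactor CAPEX amortization: reactor dollars per kg of metal.
# All throughput parameters in the example call are hypothetical assumptions.

def annual_yield_kg(volume_m3, slurry_frac, slurry_density_kg_m3,
                    solids_frac, metal_frac, batches_per_day):
    """kg of metal per year from one batch reactor."""
    rock_per_batch = volume_m3 * slurry_frac * slurry_density_kg_m3 * solids_frac
    return rock_per_batch * metal_frac * batches_per_day * 365

def capex_per_kg(reactor_usd, yearly_kg, lifespan_yr=10):
    """Amortized reactor cost in $/kg of metal over its lifespan."""
    return reactor_usd / (yearly_kg * lifespan_yr)

# Example: a $76,000, 30 m3 vessel cycled twice daily (12-hour batches),
# 80% slurry at 2000 kg/m3, assumed 60% solids, 0.35% metal grade.
y = annual_yield_kg(30, 0.8, 2000, 0.6, 0.0035, 2)
print(round(y), round(capex_per_kg(76_000, y), 2))
```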




Returning to the possibility of centrifugation: in the event that acid leaching employing a closed cycle for recovering the sulfur is not viable for whatever reason (unlikely), we may be able to extract nickel by density-gradient centrifugation. Most of this article deals with this method since it is “new” and worth investigating. It should be emphasized that regenerated sulfuric acid leaching is a very simple and very crude technology, dating back centuries to the days of alchemy. Centrifugation is a “high-tech”, one could even say “exotic”, method, and while unproven, it has the upside of being extremely clean and easily down-scaled. But it is not without its downsides: leaving aside fundamental feasibility concerns, between the cost of carbon fiber centrifuges, high-speed bearings, motors, vibrational issues, sieves, etc., the cost may not be competitive with regenerated acid leaching.

Liberation of minerals from gangue is predicated on the assumption that the mineral occurs as discrete pockets or parcels within the host. The principle behind liberation through comminution relies on the difference between the mean size of the discrete mineral pocket and the mean size of the final comminuted particle. If the size of the mineral pocket is greater than the size of the comminuted particle, then by definition some comminuted particles will consist largely or entirely of the mineral. On the other hand, if the desired element is diffused atom by atom throughout the host oxide, then by definition it is physically impossible to mechanically separate the material in question from the host rock, because no matter how small the particle is, the mineral can never be concentrated. But evidence suggests few elements are distributed atomically this way; most form isolated mineral formations, disparate from the host mineral, as veins or tiny aggregates within the “mother” crystal. Liberation through comminution represents the basis of modern mining technology. In our case, the method proposed here relies even more heavily on very fine comminution to liberate nickel-rich particles that are conducive to centrifugal separation. Below are some schematics to illustrate the principle of mineral liberation with comminution. Comminution energies as high as 200 kWh/ton of rock are tolerable for the economic production of nickel, chromium, and cobalt from ultramafic rock. 200 kWh/ton corresponds to a minuscule particle size of roughly 5 microns.


[Schematic: mineral liberation by comminution]
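The comminution-energy figures quoted in this article can be tied together with Rittinger's law, which states that grinding energy is proportional to the new surface area created, i.e. E = K·(1/d_product − 1/d_feed). Calibrating the constant K so that grinding 150 mm blasted feed down to 5 µm costs ~200 kWh/ton (the figure above) is our assumption; the Raymond-mill figure then falls out at a plausible product size.

```python
def rittinger_energy(d_feed_um, d_prod_um, k=1000.0):
    """Specific comminution energy (kWh/ton) per Rittinger's law,
    E = K * (1/d_product - 1/d_feed), with sizes in microns.
    K = 1000 kWh*um/ton is an assumed calibration chosen so that
    150 mm feed -> 5 um product costs ~200 kWh/ton."""
    return k * (1.0 / d_prod_um - 1.0 / d_feed_um)

print(round(rittinger_energy(150_000, 5)))   # → 200 (kWh/ton at 5 um)
print(round(rittinger_energy(150_000, 40)))  # → 25  (kWh/ton at ~40 um)
```

Note that the ~25 kWh/ton Raymond-mill figure cited later corresponds, under this calibration, to a ~40 µm product, consistent with the 35-micron mill mentioned earlier.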


The word metal derives from the Greek word “metallon”, which meant “mine” or “quarry”. Mining is mankind’s oldest industry after agriculture. Entire historical epochs were named after metals or alloys of metals, a testament to the immense role they played in early settled civilizations. The mining profession as we know it began on a large scale in Bohemia (today part of the Czech Republic) in the 16th century. The town of St. Joachimsthal (Jáchymov) operated very productive silver mines, generating great wealth for their prospectors. Georgius Agricola wrote the world’s authoritative book on the subject: De re metallica (On the Nature of Metals) was published in 1556, a year after Agricola died. The knowledge accrued in this book still forms the basis of modern metallurgical technique and mining. The foundation of modern civilization lies in the efficient and cost-effective extraction of materially useful elements from the crust, principally metals but also metalloids, which allows for the construction of virtually every heavily loaded, precisely manufactured component in use today. Without metals, man would evidently still be living in the “stone” age, forced to construct everything around him with wood or brittle stones. Metals are also useful as catalysts, catalyzing essential chemical reactions for hundreds of different compounds. The role of metal is so deeply cemented in modern civilization that one could argue we still live in the “metallurgical age”, which started sometime in the 18th century. Metallurgy made the jet engine possible, and in some way or another facilitates virtually every high-tech process known to man. The only engineer ever elected U.S. president, Herbert Hoover, was a mining expert.

But over the course of the global expansion of techno-civilization in the past century, many of the more useful elements, which are not copiously distributed, have been depleted. While civilization has not even begun to deplete these elements as a share of the earth’s gigantic crust, it has quite severely depleted them in the form of highly concentrated ores. Most conventional mining restricts itself to a select few highly propitious formations which, due to sheer luck, host large concentrations of the desired elements. In light of this, a number of technically dubious mining ideas have been proposed recently. The first of these ideas involves mining the seabed for manganese nodules, which contain substantial nickel and cobalt. The second is perhaps so preposterous as to not warrant mentioning, but we feel the need to because some credible individuals continue to give it credence. This preposterous idea is to “mine” asteroids using probes that will somehow grapple onto massive rocks darting through space at phenomenal speeds. Somehow, their advocates claim, these little probes will take off and make their way back to earth carrying platinum and iridium! It is obvious that the latter idea is fiction and can be rightly ignored. But the former is not really preposterous at all, and is indeed technically possible with current technology; the deeper question pertains to its practicality and whether it would actually produce lower-cost metals. It seems only obvious to mine the roughly 70% of the earth that is submerged in water. After all, if we assume the current reserves on land are equally represented in the oceanic crust (they are likely overrepresented, due to the oceanic crust being more mafic), we can assume a 4-fold increase in available supplies if the seabed were mined. But the situation is perhaps less exciting than it seems due to a number of technical limitations.
Reliable machinery must be developed to excavate, consolidate, and transport this rock to a surface vessel. The expense of constructing these machines for hyperbaric environments, including the corrosion, degradation, and their total reliance on remote control, is yet to be proven acceptable. Any breakdown, or the rupturing of even a single hydraulic hose, will require the machine to be lifted as much as a few thousand meters to the surface to be repaired. Any operator of heavy earth-moving equipment will attest to its maintenance intensity and proneness to breakdown. Without personnel to attend to these machines, it is not certain that automation alone can perform the critical coordination functions required for them to operate effectively. Very heavy winches will be required to lift this ore to the surface, requiring specialized vessels. Although these criticisms are valid, surely many believed it impossible to extract oil from the deep oceans when the idea was first proposed in the 1940s. But there is a notable difference. Oil occurs in concentrated pockets or reservoirs that are easily tapped and drained once a hole is drilled; metals occur as sparsely distributed oxides in the host rock, requiring large amounts of material to be processed underwater, while for oil rigs the bulk of the work is done in the safety and comfort of the floating rig. This is a major difference and has pronounced implications for seabed mining. Moreover, while strictly non-technical, international waters are an inherently nebulous and contested concept, so only major nation-states will have the ability to carry out these endeavors, almost certainly leading to gigantic monopolies no better than current mining, which offer no benefit to the users of the material.
Additionally, once the “low-hanging fruit”, namely the shallow seabed packed with these nodules, is scraped clean, deeper, inhospitable waters will need to be trekked, which is beyond the capabilities of present technogenic civilization. Smaller private companies will likely be left out, and the fruits of these seabed elements will be hogged by states and large corporations, providing little tangible economic benefit to most users of these metals. We can thus conclude that these current alternative mining ideas are unlikely to transpire anytime soon, leaving improved methods of “terrestrial mining” as the only plausible candidate. Christophe Pochari Energy Engineering has proposed a very modest and technically conservative solution. Rather than engage in technically daunting schemes, we can simply turn our eyes to the massive ultramafic rock reserves that sit beneath our feet. Current nickel mining companies harvest ores with 1% Ni content; considering ultramafic rock contains 0.2% Ni, it is not outlandish to propose mining these rocks, since, after all, it is only a 5-fold increase in material processing. A 0.2% concentration is not exactly like proposing to extract uranium from seawater, which occurs at an infinitesimally small concentration of 3.3 parts per billion! Imagine the amount of brine that must be processed to produce a ton of uranium. If more effective non-chemical and non-thermal methods of separating the metal oxides from the host silicate and magnesia are developed, this increase in material volume adds surprisingly little cost to the final product. Better yet, since ultramafic rocks occur quite copiously across the earth, plots of land can be purchased, allowing small companies to mine them without the bothersome regulatory issues faced by large-scale mines.
In essence, we have proposed to use ultra-high-g gravity separation to dramatically reduce energy consumption; paired with efficient comminution and particle-size filtration, this means we can afford to process five-fold more rock, especially with low-cost solar energy at the source.

The significance of nickel

While a truly rigorous analysis would include chromium and cobalt (the two other extractable elements found within ultramafic rock), for this study we briefly look at nickel as the sole element of interest. Nickel is an indispensable alloying agent for high-strength corrosion-resistant steels, a catalyst for hydrogen production, and a cathode material for batteries. If hydrogen production is to grow significantly to replace hydrocarbons via ammonia, a large expansion of alkaline electrolysis will be required. Alkaline electrolyzers use nearly pure nickel anodes; with power densities of only <0.2 watts/cm2, nickel loadings of up to 8 kg/kW are common. For example, if the entirety of present ammonia production were to be replaced with electrolyzed hydrogen, a total of 40,000 MW of electrolyzer capacity would be needed, totaling 320,000 tons of nickel alone. Some may view this as a small number compared to global nickel production of 2.2 million tons, but such an increase in demand would place considerable strain on existing mines, sending the price soaring, in turn making these electrolyzers uneconomic and forcing less active substitute catalysts. Moreover, ammonia production is not the only sector that will need hydrogen; much of commercial transportation, if it is to become hydrocarbon-free, will need energy-dense chemical fuels like ammonia. Presently, 69% of nickel consumption goes to stainless steel, 13% to batteries, and 7% to superalloys (Inconel, Incoloy, Hastelloy), with the balance going to electroplating.
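The 320,000-ton figure above follows directly from capacity times loading; a one-line sketch:

```python
# Nickel demand implied by an electrolyzer buildout: installed capacity
# times nickel loading, per the figures quoted in the text.

def electrolyzer_nickel_tons(capacity_mw, ni_kg_per_kw=8.0):
    """Tons of nickel required for a given electrolyzer capacity in MW."""
    return capacity_mw * 1000 * ni_kg_per_kw / 1000  # kW * (kg/kW) / (kg/ton)

print(electrolyzer_nickel_tons(40_000))  # → 320000.0 (40 GW at 8 kg/kW)
```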

The principal motivation of this study was Christophe Pochari Engineering’s keen interest in ultra-high-strength nickel-cobalt alloys for high-ductility, high-strength, yet machinable components. It has been shown that an alloy of equal molar ratios of nickel, chromium, and cobalt with small amounts of silicon achieves 1000 MPa tensile strength and 500 MPa yield strength while boasting unprecedented ductility and fracture toughness (Novel Si-added CrCoNi medium entropy alloys achieving the breakthrough of strength-ductility trade-off, Chang et al 2021). Such a metal is ideal for high-fatigue components. Additionally, conventional high-strength steel alloys like 40Ni2Cr1Mo28 can have their molybdenum replaced with cobalt; these alloys typically contain at least 1.6% nickel, 0.9% chromium, and 0.3% molybdenum. But at existing nickel prices, these alloys are somewhat too expensive for liberal use. A molybdenum-free nickel-cobalt-chromium ferrous alloy is the ideal future material for highly loaded components. Unfortunately, these three metals are presently too expensive for widespread use in wind turbines. But unlike carbon fiber, whose cost is dominated by production technology, these three elements are by no means scarce in the true sense of the word. With more intelligent operations and the breaking of the monopoly of existing mines, the production of these three elements becomes almost unlimited, at a fraction of the current cost. Metal costs have been escalating recently due not only to growing demand, primarily from Asia, but to a disturbing trend of regulatory smothering and a veritable “war on mining”. While this term is perhaps a bit too extreme, the realities on the ground testify to the problem. In 1983, there were 940 metal mines operating in the U.S.; today the number is only 270. Many may argue this is due to a decline in silver mines in Nevada, and while this is probably true, there has still been a decline in U.S. mining activity overall.
This disturbing situation has led many Western countries to become heavily dependent on China, Russia, and other countries for their critical metal needs. Environmental activists, unresponsive federal lease programs, long approval times, etc., make it difficult for new mines to be opened in the West, so vast resource deposits hiding beneath our feet are squandered, and expensive imports from Asia are used to fill the gap. https://www.texaspolicy.com/how-environmentalists-are-making-it-harder-to-produce-the-green-energy-they-claim-to-love/


Alternatives to both ocean-floor mining and centrifugation of ultramafic rocks do exist, namely “phytomining” or “agromining”. Ultramafic rock can be crushed and artificial serpentine soils produced to grow nickel hyperaccumulator plants in greenhouses that mimic the optimal climate fostering growth of the roughly 450 nickel hyperaccumulator plants known to exist. Pycnandra acuminata in New Caledonia is known to excrete a green resin rich in nickel. A protein coded by the “ZIP gene” family appears to facilitate extremely high uptake of nickel and other heavy metals in these plants. Thlaspi cypricum, 52120 mg/kg; Thlaspi oxyceras, 35600 mg/kg; Peltaria emarginata, 34400 mg/kg; Bornmuellaria tymphea, 31200 mg/kg; Thlaspi sylvium, 31000 mg/kg; Alyssum argenteum, 29400 mg/kg; Thlaspi jaubertii, 26900 mg/kg; Alyssum masmenkaeum, 24300 mg/kg; Alyssum cypricum, 23600 mg/kg; Alyssum lesbiacum, 22400 mg/kg; Alyssum pterocarpum, 22200 mg/kg; Stackhousia tryonii, 21500 mg/kg; and Bornmuellaria baldacii, 21300 mg/kg. The mg/kg figure refers to the concentration of nickel in the plant’s leaves, chloroplasts, and stem. By increasing CO2 concentrations in a greenhouse, plant growth can be rapidly accelerated, and regions where land costs are cheap can be employed. If it proves too difficult to sufficiently enrich nickel using centrifuges, we can instead turn to agromining in greenhouses as a way to produce nickel for a fraction of its current cost. Growing lettuce in vertical farms requires around 2000 kWh/m2/yr, so if we had to provide artificial light to grow nickel hyperaccumulators vertically, 1,380,000 kWh/kg of nickel would have to be expended on LED lighting. It is thus impossible to increase the production density of agromining, so a method must be developed to utilize low-cost land. Agromining using the best nickel accumulators typically yields relatively small amounts of nickel per hectare, around 100 kg or less annually, although experimental efforts suggest yields of up to 300 kg per hectare are possible in tropical regions.

“Early results from the pot trial suggest that a Ni yield of 200–300 kg/ha can be achieved under appropriate agronomic systems—the highest so far achieved with agromining, which is indicative of the hitherto untapped metal resources in tropical regions”. Agromining: Farming for Metals. Extracting Unconventional Resources Using Plants, Alan J.M. Baker, Antony van der Ent, Guillaume Echevarria, Jean Louis Morel, Marie-Odile Simonnot.

To produce 2 million tons of nickel annually at 150 kg/hectare, some 13.3 million hectares (133,000 square kilometers) would be needed, or about 1.5% of the total U.S. landmass. The average cost of land in the U.S. is $4000/acre, and one acre is 0.40 hectare, so the economics as far as land goes are definitely viable, but not stellar. Desert regions where land costs are only $500/acre could be used, provided water can be supplied. When fertilizer and greenhouse costs are taken into account, agromining may not seem terribly competitive, but it is a more mature option than centrifugation since there is no technical risk; it simply cannot scale or lower the cost much below the current spot price. But we can confidently conclude that once the basic operational efficacy of centrifugal separation of nickel-bearing minerals from ultramafic rock with fine comminution is proven, it will be the only way, other than regenerated sulfuric leaching, to expand nickel production or to lower its cost. If these methods should fail for whatever reason, human civilization will remain metallurgically constrained for millennia to come.
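The land-area arithmetic above can be checked directly; the U.S. land-area constant below is an approximation (about 9.15 million km2 excluding water):

```python
# Land area needed to grow a given nickel tonnage by agromining.

def area_km2(ni_tons_per_year, yield_kg_per_ha=150):
    """km2 of hyperaccumulator cropland needed per year of production."""
    hectares = ni_tons_per_year * 1000 / yield_kg_per_ha
    return hectares / 100  # 100 ha per km2

US_LAND_KM2 = 9.15e6  # approximate U.S. land area (assumption)

a = area_km2(2_000_000)
print(f"{a:,.0f} km2, {100 * a / US_LAND_KM2:.1f}% of U.S. land")
# → 133,333 km2, 1.5% of U.S. land
```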


It is important to state that even in the event that mining lower-concentration ultramafic rock for nickel is not desirable, this comminution-centrifugation-sifting technology can still be used very effectively by small companies to directly extract nickel oxide from existing laterite ores without the use of any acid or flotation agents. Centrifugation technology is inherently small-scale friendly: all one would need is to procure the raw laterite ore in bulk and simply install a comminutor, centrifuge, and sifting machine at the factory to satisfy all the nickel needs of stainless steel production. Such a scheme would eliminate the complex equipment needed at present nickel processing facilities, such as rotary kilns. There exist massive reserves, likely hundreds of years’ worth, of laterite ore with concentrations between 0.5-1.5% that have relatively low market value, around a third of the retail price of nickel metal. Laterite ores containing a high concentration of garnierite can be bought cheaply for <$150/ton and processed in-house, allowing for a non-trivial cost reduction since expensive acid and pyrometallurgical techniques are not needed. The cost of the ore has a surprisingly small effect on the price of the final product; nickel’s selling price of $26,000/ton is much higher than the ore equivalent of around $10,000/ton. This can be explained by the high cost of acid leaching or flotation. If we are ever to markedly increase the global availability of nickel, we must develop a method that can separate the ore via less chemically intensive means. To extract nickel from an ore, excluding the flotation process, acids of sulfur or nitrogen oxides must be used to leach the metals from the rock. Consumption of acid may reach 1-1.5 tons per ton of ore, since most of the acid is consumed leaching the iron, magnesium, and aluminum.
If the acid costs $200/ton, the cost per kg of nickel may reach $20/kg for the acid alone at a 1% Ni ore grade if the acid is not recycled in a closed loop. This clearly shows that it is economically impossible to extract nickel from ultramafic rock at 0.2% concentrations without a closed-loop sulfuric acid system.
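The once-through acid penalty above can be sketched in a few lines, using the figures in the text:

```python
# Acid cost per kg of nickel for once-through (non-recycled) leaching.

def acid_cost_per_kg_ni(acid_t_per_t_ore, acid_usd_per_t, ni_grade):
    """$/kg Ni spent on acid alone, at a given ore grade (mass fraction)."""
    ni_kg_per_t_ore = 1000 * ni_grade
    return acid_t_per_t_ore * acid_usd_per_t / ni_kg_per_t_ore

print(round(acid_cost_per_kg_ni(1.0, 200, 0.01)))    # → 20  ($/kg at 1% ore)
print(round(acid_cost_per_kg_ni(1.0, 200, 0.002)))   # → 100 ($/kg at 0.2% ore)
```

At 0.2% grade the once-through acid bill alone is several times nickel's selling price, which is the quantitative case for closed-loop acid regeneration.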


But provided sufficient comminution and particle-size homogeneity can be achieved, gravity separation through centrifugation emerges as an interesting option if acid regeneration cannot be performed. Christophe Pochari Engineering employs a strategy of optionality to reduce risk.

“The application of centrifugal force in separating immiscible liquids or separating solids from liquids is well established, and there is abundant literature on the subject. Outside of patent records, however, there is practically no literature dealing with the principles that relate to the centrifugal separation of solid particles having different densities”.

Centrifugal Concentration: its Theory, Mechanical Development and Experimental Results, January 1, 1929, H. A. Doerner.

Christophe Pochari Engineering has applied a physics- and fluid-mechanics-based approach to the problem of critical metal depletion. By using ultra-high-g-force centrifugation, widely distributed ultramafic rocks (dunites, peridotites, pyroxenites, troctolites) can be mined for trillions of dollars’ worth of nickel and chromium. These transition metal oxides could be separated from their light silica and magnesia hosts using proven centrifugation technology.

Images below show the occurrence of ultramafic rock in various geological bodies and the concentration of nickel and chromium.

[Images: occurrence of ultramafic rock in various geological bodies; nickel and chromium concentrations]


The idea that some essential technogenic elements, such as nickel, chromium, or even cobalt, are scarce is theoretically incorrect if lower-grade rocks can be harvested. Of course, not all elements are equally abundant; stellar nucleosynthesis, cosmic ray spallation, and beta decay did not result in equal distributions of the elements, and no sound person would claim platinum is abundant! Different ionization potentials resulted in elemental segregation during the formation of the earth in the protoplanetary (accretion) disk. Many elements farther up the atomic weight scale are truly scarce and can never be made more abundant with technology, but many could be, and the ones that can happen to be the most valuable for technical alloys. To argue that ultramafic rock, which contains an average of 0.2% nickel, cannot be “economically” mined is by no means guaranteed to remain true. As technology evolves, the concept of an ore-grade “cut-off” becomes nebulous. A mine is a perfect monopoly; one cannot “start” a new mine, because by definition mines are not created but rather discovered at rare and highly propitious mineral concentration sites. The price of a mine can reach billions, making it impossible for small players to compete. But with a combination of technology and ingenuity, small companies can mine the unlimited supply of ultramafic rocks for nickel, chromium, and cobalt from the oceanic crust that made its way onto the continents. There are an estimated 90 teratonnes (90 trillion metric tons) of ultramafic rock easily extractable in ophiolite mountain belts (The variation in composition of ultramafic rocks and the effect on their suitability for carbon dioxide sequestration by mineralization following acid leaching, M. T. Styles). Once the rock is crushed down to small fragments using advanced comminution machines, the metals of interest are liberated, since the desired metals are chalcophiles and siderophiles, while the base elements (silicon and magnesium) are not.
High-speed centrifugal separation in a gaseous or liquid medium permits rapid agglomeration of high-mass micron-size particles on the walls of the centrifuge, allowing for effective separation after multiple stages. The energy needed to spin these centrifuges is very small. Once the iron oxide is removed, the only heavy elements left are nickel, chromium, and cobalt. By mixing the micron-size particles in a gaseous or liquid medium, they are free to float due to the high frictional resistance; even small differences in settling velocity, whether vertically due to gravity or laterally due to artificial acceleration, will cause gravity-determined sorting. The rate at which these particles propagate is a function of Stokes’ law, which predicts the terminal velocity of spherical particles in a viscous medium at low Reynolds number. If the particle density ratio is 1.2, the terminal velocities of equal-sized particles in a gas will differ by roughly the same factor of 1.2 regardless of the g-force applied; in a liquid, buoyancy enhances the ratio further. To maximize the throughput of the centrifuge, we want a medium viscosity as low as possible. If water is used instead of a gas, the water can be pressurized to 25 bar and warmed to 220°C to lower its viscosity from 1 centipoise to 0.14, but a gas would be far superior. Under an acceleration of 400,000 g (60,000 rpm, 200 mm diameter centrifuge), the settling velocities of micron-size fragments suspended in such water (viscosity 0.14 cP, liquid density 870 kg/m3) are high, and the difference between heavy and light particles results in rapid sedimentation; a lower-viscosity gaseous medium widens the velocity difference further.
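A sketch of the Stokes'-law arithmetic above. The particle densities (4000 and 3300 kg/m3, for a heavier nickel-bearing fragment versus a lighter gangue fragment) and the 5 µm size are illustrative assumptions; the fluid properties are the hot pressurized water quoted in the text. Note how the velocity ratio in a liquid exceeds the raw density ratio of 1.21, because buoyancy subtracts the fluid density from both particles.

```python
# Stokes-law settling velocity under centrifugal acceleration:
#   v = (rho_p - rho_f) * a * d^2 / (18 * mu)
# Particle densities and size below are assumptions for illustration.

def settling_velocity(d_m, rho_p, rho_f, mu_pa_s, accel_g):
    """Terminal velocity (m/s) of a sphere of diameter d_m in a viscous fluid."""
    a = accel_g * 9.81                       # acceleration, m/s2
    return (rho_p - rho_f) * a * d_m**2 / (18 * mu_pa_s)

v_heavy = settling_velocity(5e-6, 4000, 870, 0.14e-3, 400_000)
v_light = settling_velocity(5e-6, 3300, 870, 0.14e-3, 400_000)
print(f"{v_heavy:.1f} m/s vs {v_light:.1f} m/s (ratio {v_heavy/v_light:.2f})")
# → 121.8 m/s vs 94.6 m/s (ratio 1.29)
```

The absolute velocities scale with the square of the assumed particle diameter, so they should be read as order-of-magnitude indicators only; the ratio, which governs separability, is size-independent.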
During each centrifugation cycle, the slightly heavier fraction deposits on the wall; the rotor is then purged and this heavier deposit is re-fed into the centrifuge. A staged system achieves progressively higher separation until concentrations of 90% are reached, at which point reduction operations can begin. Before this mixture of metallic oxides is reduced, it is desirable to remove the iron magnetically, since we do not want to expend excessive amounts of hydrogen producing iron. If magnetic separation is undesirable, the metals can be separated by precisely adjusting the melt temperature to precipitate each one.

Just because the existing mining industry ignores this immense potential reserve, because its current assumptions forbid it from extracting "low-grade reserves", does not mean that extraction is not technically possible according to physics, whether by acid leaching with regenerated sulfuric acid or by centrifugation. Today, ore is not crushed to micron sizes, which is what makes centrifugal separation appear impractical. Micron-sized comminution is not considered "cost-effective" presently, but if one performs a basic energetic analysis using the Rittinger curve, one can easily see that it is.


With Raymond mills, crushing energies of around 25 kWh/ton are required. Note that the hardness of the brittle oxide makes very little difference to the energy consumption, so the figures for calcium carbonate are not increased very much for magnesium oxide or silica. As previously mentioned, ultramafic rock reserves have been estimated at over 90 teratonnes of readily accessible deposits at shallow depths, but the real reserves are much larger because excavation can be performed to greater depths. The concentration of ultramafic rock in the upper continental crust is estimated at 5%; the theoretical reserves are thus so huge that a calculation is redundant, since industrial civilization would not be able to utilize such a quantity of material nor possess the necessary excavation abilities. Taking the 90 teratonnes estimated, if we assume the nickel content of ultramafic rock is only 1500 mg/kg (the actual number is closer to 2000, so we are being conservative), then the total reserve of nickel in this magnesia-rich rock is 135 billion tons, equal to 67,500 years at the current nickel consumption rate of 2 million tons per year. It would be a great tragedy if we failed to harness this untold fortune. If we manage to develop such a methodology, we could increase nickel consumption by over a thousandfold and replace much of the present low-grade steel with nickel and chromium alloys like stainless steel or Inconel. The implications of this centrifugal low-grade ore extraction technology are immense: by making nickel not only much cheaper but close to infinitely available, structures could be left unpainted in corrosive environments; bridges, skyscrapers, and most terrestrial structures, including residential homes, could be constructed entirely of stainless steel. Without engaging in fantastical speculation, one could imagine large permanent ocean settlements or perhaps highways constructed over large bodies of water, connecting the continents.
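The reserve arithmetic above is easy to reproduce; the figures below are simply the text's own inputs (90 teratonnes of rock, a conservative 1500 mg/kg grade, 2 Mt/yr demand), not independent data.

```python
# Back-of-envelope reserve arithmetic using the figures stated in the text.
ULTRAMAFIC_RESERVE_T = 90e12   # 90 teratonnes of accessible ultramafic rock, in tonnes
NI_GRADE = 1500e-6             # 1500 mg/kg = 0.15% nickel by mass (conservative)
ANNUAL_NI_DEMAND_T = 2e6       # ~2 million tonnes/year current consumption

ni_reserve_t = ULTRAMAFIC_RESERVE_T * NI_GRADE        # nickel contained in the rock
years_of_supply = ni_reserve_t / ANNUAL_NI_DEMAND_T   # years at current demand
```

With these inputs the contained nickel comes to 135 billion tonnes and 67,500 years of supply, matching the figures quoted in the paragraph.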
Offshore structures allowing for sea-steading communities now become possible, since their cost would be competitive with land-based structures, allowing the formation of private libertarian states in international waters. Vehicles, including tractors, trucks, cars, and earthmoving equipment, could gain durability and corrosion resistance thanks to stainless steel's extremely high ductility. Ships could be constructed entirely out of stainless steel and last for centuries; the painting of ships would become entirely redundant. But beyond mere speculation, what can be said is that with the techno-economics of centrifugal ultramafic rock powder separation, nickel production can satisfy current global demand for millennia to come at a cost not much higher than aluminum.

Centrifugal separation is a proven technology and the physics behind it are very intuitive, but success depends on the ability to liberate nickel minerals from the host silicate and magnesia.

Centrifugal separation conjures up images of gigantic farms of vertical tubes spinning at high speed to produce highly enriched uranium for fission bombs, but the principle is applied in a number of disparate domains. Zippe-style centrifuges, invented by Gernot Zippe and used in uranium enrichment for both nuclear weapons manufacturing and civilian nuclear power, spin at speeds up to 90,000 rpm. Separative work increases with the 4th power of peripheral velocity: doubling the speed of the centrifuge quadruples the artificial gravity (which grows with the square of the speed), and the separative work grows with the square of that gravity, which is how a doubling of peripheral velocity yields a 16-fold increase in separative work. Spinning a large mass at high speed requires surprisingly little energy; for example, spinning a 200mm diameter, 15 kg centrifuge at 70,000 rpm uses only about 1.3 kWh. If such a centrifuge can process 1000 kg of rock per hour, the energy consumption is less than 1.3 kWh per ton of rock per stage. Assuming around 10-20 stages are needed to raise the concentration of metallic oxides from around 0.35% of the rock by mass to 90%, the energy consumption comes to only about 5.6 kWh per kg of metal.
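The spin-up energy claim can be checked with a thin-walled-cylinder approximation (moment of inertia I ≈ m·r², an assumption for this sketch, since the real mass distribution is unknown); it lands near the 1.3 kWh figure quoted above.

```python
import math

def spinup_energy_kwh(mass_kg, radius_m, rpm):
    """Kinetic energy (kWh) to spin a thin-walled cylinder (I ~ m*r^2) up to speed."""
    omega = rpm * 2.0 * math.pi / 60.0        # angular velocity, rad/s
    return 0.5 * mass_kg * radius_m**2 * omega**2 / 3.6e6  # J -> kWh

# 15 kg rotor, 200 mm diameter (0.1 m radius), 70,000 rpm -> ~1.1 kWh,
# the same order as the ~1.3 kWh figure in the text.
e = spinup_energy_kwh(15.0, 0.10, 70_000)

# Separative-work scaling stated above: work ~ v^4, so doubling peripheral
# velocity multiplies separative work by 2**4 = 16.
work_gain = 2.0 ** 4
```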


Comparison of uranium isotope separation technologies; notice that centrifugation is by far the most effective.

Proteins, blood, serums for preparing vaccines, and even cream are separated centrifugally by exploiting small mass differences (isopycnic centrifugation) to facilitate agglomeration. Such techniques are capable of separating particles with specific gravity differences as small as 0.05. And while centrifugation is not used for existing ore separation, it has been successfully applied to battery recycling, supporting the viability of our concept. German researchers built a small, high-speed 20,000 rpm centrifuge to separate lithium-iron phosphate (density of 3.57 g/cm3) from carbon black (density of 1.9 g/cm3). The same team also separated zinc oxide nanoparticles from polymer particles; although the density gradient was larger, the same principles apply. Smaller density differentials simply mean more stages, and as long as the power consumption of each centrifuge is kept low (using CFRP rotors and gas rather than water), we could use up to 40 stages without excessive energy, allowing the separation of mass differences far smaller than what will be encountered in the field. With the high density difference between phosphate and carbon, the researchers achieved recovery rates of up to 90% in one stage immersed in a solvent; note that these mass differences are quite close to the silica-magnesia/nickel mineral values. Uranium in gaseous form as UF6 (uranium hexafluoride) presents a tiny mass difference of barely 0.85 percent, but with enough stages (40-90), it can almost miraculously be purified to over 90%. Note that uranium-235 and uranium-238 have a mass difference of 1.3%, but because uranium hexafluoride is 32.4% fluorine by mass, the difference is further "diluted" to 0.85%. Assuming a roughly linear correlation between mass difference and stage count, only a few stages are required in the case of the nearly two-fold mass differences considered here.
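The fluorine "dilution" of the uranium mass difference described above is pure arithmetic on atomic masses, and reproduces both the 0.85% and the 32.4% figures:

```python
# How UF6 dilutes the U-235/U-238 mass difference (standard atomic masses, u).
M_U235, M_U238, M_F = 235.04, 238.05, 18.998

bare_diff = (M_U238 - M_U235) / M_U238     # ~1.3% between the bare isotopes
m_light = M_U235 + 6 * M_F                 # mass of 235-UF6 molecule
m_heavy = M_U238 + 6 * M_F                 # mass of 238-UF6 molecule
hex_diff = (m_heavy - m_light) / m_heavy   # ~0.85% once fluorine is attached
fluorine_frac = 6 * M_F / m_heavy          # ~32.4% of UF6 mass is fluorine
```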
The average density of the magnesium oxide and silica that comprise 90% of the mass of ultramafic rock is 3.11 g/cm3, while nickel sulfide, nickel oxide, and chromite have an average density of 5.75 g/cm3, or roughly a 63-fold greater relative mass difference than that between the uranium isotopologues of hexafluoride. Of course, in reality, nickel does not exist as the sole oxide NiO except in highly weathered laterite ores. It typically occurs as pentlandite ((Fe,Ni)9S8) and liebenbergite ((Ni,Mg)2SiO4), the latter with a specific gravity of 4.6. Pentlandite has a higher specific gravity than liebenbergite at 4.8, but both are well over 1.5 times denser than SiO2 and MgO alone. Cobalt occurs as the mineral cobaltite (CoAsS), with a specific gravity of 6.33, 2.03 times that of the host rock. The predominant chromium minerals are the least dense, magnesiochromite (MgCr2O4) and chromite (FeCr2O4), with average specific gravities of 4.2 and 4.6, but still over 1.42 times that of the host. The number of centrifuge stages falls in proportion to the mass differences and the number of g's generated. While some may express skepticism that uranium gas separation and solid powder separation share any commonality, the actual kinetics are closely analogous. Comminuted powder is immersed in a low-viscosity liquid or gas, and as the fine rock powder circulates within the centrifuge, the heavier metallic minerals tend to move towards the perimeter. The concentration of powder in the liquid or gas bath is kept low enough that collisions between fragments do not dramatically slow the separation process. There is a clear trade-off between energy consumption and solids concentration: too high a concentration and interference between particles becomes an issue; too low and excessive energy inputs are required. A relatively low solids concentration of 5% in gas is used for our models.



The above images are from a German study on recycling lithium batteries; the fundamental physics apply equally to ore separation, the only difference being that the concentrations of the heavier particles are smaller, so more stages are needed. Centrifugation can be applied to any physical mixture with a consistent difference in density; perhaps the only thing that cannot be gravitationally separated is plasma, due to its instability.



The average concentration of nickel in ultramafic rock is usually measured to be around 2000 mg/kg, or 2 kg per ton of rock.


Current nickel mining operations make use of ores with concentrations over 1%, or five times greater. The justification for this strategy is that less ore needs to be processed, but the result is a much more limited reserve and a far from sustainable future supply, which risks thwarting new industrial and technological developments that rely heavily on nickel. Recycling alone cannot meet the demand of a large expansion in nickel use. Existing ores, mainly laterite and sulfide, are formed from the natural weathering of ultramafic rocks exposed in ophiolite belts, but these reserves are finite and are being rapidly drawn down, which partially explains the presently high cost of nickel. Weathering is a very slow process, and technogenic extraction can rapidly deplete what took nature millions of years to achieve. For example, when nickel mining first began in New Caledonia in 1875, ore grades were over 10%; today they are barely above 1.5%. Such a strategy of picking the earth's "low-hanging fruit" is by no means sustainable, as it will eventually force lower consumption of this critical element and, in turn, lower the quality of civilization. There is no physical law that states lower-grade host rocks cannot be harvested; while thermal separation is certainly energy intensive, gravitational means are not, allowing large volumes of rock to be processed without too much concern for energy consumption. If nickel atoms replace magnesium atoms in the crystal lattice, the resulting particle will have more mass: this is the principle of centrifugal separation. Ultramafic rock can be crushed down to sub-40-micron particles using only 20-40 kWh per ton of ore with advanced Raymond roller crushers.
This fine dust can then be fed into a high-speed centrifuge (spinning at 70,000 rpm to generate g forces of over 500,000); when immersed in a gas or liquid, the powder will quickly segregate by mass, resulting in the selective accumulation of iron, nickel, and chromium minerals on the walls of the centrifuge, while the lighter fragments of silica and magnesia travel straight through. Once a sufficient accumulation of heavy fragments builds up on the walls, the device is stopped and emptied, since there is no practical way to continuously clear the buildup of heavier deposits. With theoretical centrifugal recovery rates of 90 percent, around 3.4 kg of valuable metal (excluding iron, around 9.6 kg/ton) would be produced per ton of rock, including nearly 2 kg of nickel, 1.6 kg of chromium, and 0.15 kg of cobalt. These numbers are from a dataset collected in Finland (http://weppi.gtk.fi/publ/foregsatlas/text/); they do not represent selected commercial mining ores such as laterite, but samples of ordinary ultramafic rock as it occurs in the ophiolites of mountain belts. The Finnish dataset cites the 1970s text Review of Research on Modern Problems in Geochemistry (International Association of Geochemistry and Cosmochemistry, ed. Frederic R. Siegel), https://unesdoc.unesco.org/ark:/48223/pf0000037516


Ultramafic rock is simply oceanic crust (which is highly mafic) that has been pushed onto the continental crust through obduction (oceanic crust scraped off and buried underneath the continental margin, with some of it exposed, mainly in ophiolite belts). Magma wells up at the oceanic ridges and spreads laterally, forming the oceanic crust. The mantle is 0.2% nickel by mass, while the core is 5.3% nickel. Separating magnesium and aluminum is more difficult due to the low density difference of their oxides relative to the principal silicon oxide. Fortunately, the heavy metals we are after produce heavier particles, since a transition metal element, nickel, chromium, or cobalt, with atomic weights of just under 60, replaces the light magnesium and silicon atoms within the oxide.

Optimal centrifuge design, material, and operational parameters

To maximize the segregation efficiency of the different-density metal oxides, we desire the highest g force practically attainable. In order to achieve very high g forces without placing excessive stress on the material, we must employ a material with very low density: carbon fiber emerges as the obvious candidate. An additional factor is the mass of the spinning medium inside the cylinder. The rock powder cannot be suspended in a vacuum; it would simply fall to the bottom of the centrifuge. A medium of some kind is required to suspend the solids and carry them through the centrifuge. Since the medium inside the cylinder spins at the rotational velocity of the cylinder, it requires additional energy to rotate; and since water is very dense, using water requires around 3 times more energy than gas for the same g force. A simple analysis therefore suggests that a carbon fiber centrifuge, spinning at up to 70,000 rpm, is optimal. Note that the required rotational speed decreases proportionally with an increase in centrifuge radius, so lower-speed, large-diameter centrifuges may be optimal, making bearing design less difficult. As long as particles enjoy unencumbered mobility within the medium, different-weight particles are free to move toward the perimeter of the centrifuge due to their greater settling velocities; this is important because if the particles were too concentrated, their sorting efficiency would be impeded. A low concentration, such as 5% solids in the medium, permits a large degree of mobility, preventing excessive particle collision and minimizing the pressure drop of the delivery gas. Centrifuges work by spinning a mass of gas or liquid by exploiting the high skin friction between the surface of the cylinder and the medium exposed to this surface. The total skin friction force exerted on a 6 kg/m3 gas along the wall of the cylinder is in excess of 400 kg at the peripheral velocities encountered.
Skin friction coefficients of 0.0031 are encountered at a Reynolds number of 9 million, corresponding to the peripheral velocity of the spinning cylinder. This high skin friction experienced by the gas body within the cylinder causes it to acquire the full velocity of the spinning cylinder, subjecting all of its contents to artificial gravity. For ultra-high throughput, a low-viscosity gaseous medium is employed at moderate pressure and low temperature: nitrogen gas at 15°C and 10 bar, with a viscosity of only 0.016 cP and a density of 12 kg/m3. The particle size is between 30 and 40 microns, or a standard mesh size of 400, but if mineral liberation is not effective enough at this size, sizes as small as 10-15 microns may be used at the expense of centrifugation throughput. By operating at low temperature, carbon fiber can be used as the centrifuge cylinder material, greatly reducing the power demand. If water is used, it has to be heated for optimal performance, which restricts the material choice to titanium, which has 2.5x the mass of carbon fiber but only half the strength. Since the gas has very low density, the terminal velocities of the 35-micron particles are an extraordinary 69,000 m/s and 82,800 m/s, respectively, for the 3.10 and 3.72 g/cm3 oxides. While the separation selectivity never changes (the mass difference is the sole determinant), the throughput per centrifuge (and hence the energy consumption) is controllable, since faster settling velocities result in quicker sedimentation. A lightweight carbon fiber centrifuge can be constructed to offer spectacular efficiency: a centrifuge 200mm in diameter and 1.2 m long weighs less than 9 kg and requires only 1200 watts to spin at 70,000 rpm, generating 535,000 g. The hoop stress at the rim is only 1088 MPa, well within the limit of T1100 carbon fiber with a tensile strength of over 3460 MPa.
The centrifuge could be made even lighter, but we are including the weight of an interior metal liner and the mounting shafts. The higher the g force, the higher the gas flow rate through the cylinder and thus the higher the throughput of each centrifuge, which reduces the power used per mass of rock powder separated. If the throughput per cylinder can be brought to 1 ton of rock powder at a 5% loading factor, the pressure drop of the nitrogen at 10 bar is only 0.0018 bar per stage. From experimental data, a solids concentration of 4% at 100-micron particle size multiplies the baseline pressure drop of a smooth pipe by 2.53x. We can thus calculate the gas pumping power per centrifuge; if one centrifuge processes one ton of rock, the compression power is virtually zero and not worth including in our analysis. Since ten stages are assumed to be required for complete separation, the total pressure drop is barely 0.09 bar, including an additional margin for bends. Compressing 20,000 kg (1666 m3) of the gas to 0.10 bar requires only 5 kWh. We can clearly show the immense techno-energetic potential of this system by simply calculating the relative electricity value of one kilogram of the metal yielded: if we use non-baseload photovoltaic energy at 3 cents/kWh, we can afford to expend 166,000 kWh per ton of metal yielded if the price is to be kept to $5/kg.
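The rotor figures quoted above can be cross-checked with standard formulas: wall g-level is ω²r, and the thin-ring hoop stress is σ = ρ·v². The CFRP density of 1600 kg/m3 below is an assumption (the text does not state one); with it, the check lands near the quoted 535,000 g, and somewhat below the quoted 1088 MPa, which presumably includes the mass of the metal liner.

```python
import math

def centrifuge_params(rpm, diameter_m, rho_rotor):
    """Peripheral speed (m/s), g-level at the wall, and thin-ring hoop stress (Pa)."""
    omega = rpm * 2.0 * math.pi / 60.0   # rad/s
    r = diameter_m / 2.0
    v = omega * r                        # peripheral velocity
    g_level = omega**2 * r / 9.81        # centripetal acceleration in g's
    hoop_pa = rho_rotor * v**2           # sigma = rho * v^2 for a thin rotating ring
    return v, g_level, hoop_pa

# 200 mm rotor at 70,000 rpm; 1600 kg/m^3 is an assumed CFRP density.
v, g, sigma = centrifuge_params(70_000, 0.200, 1600.0)
```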

An energetic analysis for producing 10 million tons of nickel annually

Centrifugation requires around 26 kWh per kg of metal (assuming a worst case of 40 centrifuge stages for weak mass differences), and comminuting to 10-35 microns requires another 22.33 kWh, for a total of 48.33 kWh per kg of metal. To produce 1 ton of nickel, we must expend about 48,000 kWh, or less than $1,440 per ton using photovoltaic energy, under 5% of the current spot price. Indeed, at the current spot price, we could expend three times the energy and still arrive at a direct production cost of around $4,300/ton. The total energy consumption to produce 10 million metric tons of nickel annually is thus only about 48 GW, or 0.41% of global primary energy consumption. Note that the above estimates are highly conservative: uranium isotope separation, with its minuscule sub-1% mass difference, uses only around 60 stages, and we are basing our numbers on 40 stages even though our mass difference is at least 25%! These numbers are extremely conservative, not ridiculously optimistic. We are also using an aggressive number for comminution, since an average of 22 microns is substantially smaller than any current ore crushing; this very small size reflects the concern that without intensive comminution, the liberation of the nickel-bearing minerals from the host rock will be less efficient, resulting in small mass differences. Note that a mass difference as low as 1% is tolerable, since we can afford to use 60 stages.
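The energy economics above reduce to a few multiplications, reproduced here with the text's own inputs. Note that a flat average over 8760 hours per year gives roughly 55 GW rather than the quoted 48 GW; the difference comes down to rounding and the averaging convention assumed, not to any change in the underlying figures.

```python
# Energy-cost arithmetic using the figures stated in the paragraph above.
CENTRIFUGE_KWH_PER_KG = 26.0    # 40 worst-case centrifuge stages
COMMINUTION_KWH_PER_KG = 22.33  # 10-35 micron grinding, per kg of metal yielded
PV_COST_PER_KWH = 0.03          # $/kWh, non-baseload photovoltaic

kwh_per_kg = CENTRIFUGE_KWH_PER_KG + COMMINUTION_KWH_PER_KG   # 48.33 kWh/kg
cost_per_ton = kwh_per_kg * 1000 * PV_COST_PER_KWH            # ~$1450/ton of metal

# 10 Mt/yr of nickel at this intensity, flat-averaged over 8760 h/year:
avg_power_gw = 10e6 * kwh_per_kg * 1000 / 8760 / 1e6          # ~55 GW
```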

Are there any technical impediments?

As we have already mentioned, this proposed method will only work if the nickel-bearing minerals are mechanically liberated upon ultra-fine comminution, forming somewhat higher-density grains that can be centrifugally separated. While the theory and mathematics suggest that separation can be very effective provided there is effective liberation of the nickel-bearing minerals, we do not know exactly to what extent nickel is diffused throughout the rock; if it is so highly diffused as to produce only minuscule mass differences between one particle and another (smaller than the difference between U-235 and U-238), clearly this cannot work. But the Goldschmidt classification insists that nickel prefers to form compounds with either iron or sulfur; it is not a lithophile element. We are thus able to claim with some confidence that, assuming a minimum amount of comminution, a substantial recovery of the siderophile and chalcophile elements will be possible. Magnesium and silicon, by contrast, are highly lithophilic and so are found almost exclusively as silicates, with much higher reactivity; for example, neither silicon oxide nor magnesium oxide can be reduced with hydrogen. From a purely technical and operational perspective, centrifugation and vibratory sieve separation can work very effectively, so this is not where criticism is due. The fundamental uncertainty is: how effectively can fine comminution, even as fine as 10 microns, isolate the nickel minerals from the preponderance of light magnesia and silicate particles? Some may ask: "this sounds good and all, but why hasn't it been done before?". Strictly speaking, it is impossible to answer such a question. If the idea proposed satisfies the basic mechanics, thermodynamics, and physical principles required for operation, then the technology can be called a proven theoretical concept, with the only risk being a lack of practicality, not a lack of basic functioning.
A case study might be the flying car: it is perfectly possible to construct one using foldable propellers and micro gas turbines, but it may not be practical or safe, and therefore we do not see them used. A more serious example might be using projectile launchers to fire payloads into space, an idea proposed by Gerald Bull; while the physics is perfectly sound, its practicality compared to rockets has yet to be proven. A new invention cannot, by definition, have priority, so there may be a genuine chance that a new idea works but has never been attempted or considered before. From this perspective, since we have ruled out fundamental operational impossibilities (i.e. physical impossibilities such as perpetual motion machines), it is merely a matter of engineering and economics. As long as the basic physics and working principle can be conceptually evaluated and falsified with respect to a particular application, it can be assumed the lack of adoption is due to non-technical reasons. We can answer the above question by pointing to industry dogmatism, conservatism, and outright lack of inventive talent. Industries tend to become large, sclerotic, and generally self-perpetuating systems with little incentive to improve methodology unless it is forced upon them by competition. Most new ideas promoted today consist of highly uncertain technologies with unproven underlying physics, often resting on very lofty assumptions of future breakthroughs that will somehow make up for their present shortcomings. Prevailing attitudes tend to dominate, and few small companies with visionary individuals offering differing methods can truly compete. As long as different-mass particles can be segregated due to different settling velocities, the process can, by definition, "work", but that does not automatically render it practical.
Practical and possible are two different criteria, but in the case of rock powder separation, we know that spinning a carbon fiber tube with magnetic bearings and high-speed electric motors is more than practical; it is proven and works quite well. Of course, the actual engineering details are less predictable in this particular application, since rock powder is different from uranium gas or oily sludge. For example, abrasion of the centrifuge by the rock powder is a concern, as are potential vibrations if sediment accumulates unevenly along the wall of the centrifuge. But industry experience with solid-liquid centrifuges finds little issue with uneven solid accumulation.

The essential enabling feature of centrifugal metal extraction from rock powder


Leaving aside the unknowns of metal diffusivity, the critical requirement that must be met for centrifugal separation of rock powder by density difference is very high particle size homogeneity. This requirement is unique to our methodology; conventional comminution of ores can tolerate a high degree of particle heterogeneity. The Achilles' heel of centrifugal separation of different-density particles is a lack of size homogeneity: if homogenization cannot be effectively achieved, the method will not work. An inherent difficulty of particle separation through gravity is that larger particles of the lighter material can attain higher settling velocities than smaller particles of the denser material, severely hampering segregation efficacy. This becomes more severe as the density difference diminishes; when this occurs, small differences in particle size can override the difference in settling velocity due to density. It is thus critical to prevent larger magnesia and silica particles from accompanying the smaller, heavier nickel and chromium minerals toward the periphery of the centrifuge. Fortunately, a technical solution exists for almost every practical problem. Numerous sectors, ranging from baking to high-end manufacturing, require fine powders of roughly uniform size, and a number of effective separation technologies, mainly sieve based, have been developed. The highest performance is offered by ultrasonic multi-stage vibratory sifter technology, which can be employed to achieve high degrees of homogenization. Ultrasonic piezo-crystal vibrators are extremely effective at dislodging particles and encouraging the tumbling of particles slightly smaller than the mesh aperture. As a consequence of the intense yet small-amplitude vibration afforded by the ultrasonic generator, a high throughput per mesh can be maintained.
Using highly precise electroforming micro-mesh technology, a structurally robust and uniform filtration sieve can sort particles by size to extreme accuracy. These multi-stage vibratory sifters produce batches of homogenized powder streams by the simple principle of successive filtration according to minimum and maximum particle diameters. The segregated and homogenized powders are then sent to separate centrifuge arrays for density gradient separation; each pair of meshes produces a homogenized powder size which is sent to dedicated centrifuges. It is inherently impossible to generate a homogeneous powder size for the entire rock batch, since by definition the comminutor generates a wide range of particle sizes in the micron range; it is the job of the ultrasonic multi-stage sifter to convert the initially heterogeneous stream into separate streams of highly uniform particle diameter. By employing a stack of sieves, each vibrating to encourage particle tumbling, and passing the comminuted powder through two sieves in succession, all the particles that fall through the first but not through the second will have a size bounded by the apertures of the two respective meshes. It is impossible for larger particles to fall through unless they break, but the fracture toughness of the material is greater than the stresses generated during the churning of these ultra-light particles, so very little comminution takes place within the vibratory sifter; the inherent springiness of the ductile electroformed nickel mesh also prevents particle breaking. Even if some degree of particle crushing does take place, the freshly broken fragments will tumble through the mesh with the larger particles remaining trapped, so size segregation will still occur between the two meshes.
If the difference in aperture between the primary and secondary mesh is very small, the particles will converge toward a narrow band between the two sizes. For example, suppose we want most particles to congregate around a 44-46 micron mean size. If we employ a 46-micron mesh at the first stage, all particles 46 microns or smaller will tumble through. The second mesh is then set at 44 microns; all particles smaller than 44 microns tumble through it, leaving only 44-46 micron particles between the two meshes and preventing both smaller and larger particles from being sent to the centrifuge. We can then calculate, with Stokes's law, the maximum tolerable difference in particle size that will still yield density segregation in the centrifuge. For hypothetical 15-micron particles, the maximum difference in particle size for effective separation is plus or minus 1.5 microns, which still generates a 600 m/s velocity difference between a 3.58 g/cm3 magnesium oxide particle and a 4.6 g/cm3 mineral of the desired metal. This means any mesh stack that can maintain a sub-1.5-micron particle size spread will still yield very effective density gradient separation. Current electroforming technology can achieve tolerances of 0.1 micron, so this is well within the capabilities of modern manufacturing. Differential comminution of the transition-metal-bearing minerals and the silicon and magnesium oxides can influence the density-dependent particle size distribution: if the transition metal minerals are more easily crushed, they will form a smaller mean fragment size than the magnesia and silica, and vice versa.
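The size-tolerance argument above can be made quantitative with Stokes' law: since settling velocity scales as d²·Δρ, a lighter particle overtakes a denser one when its diameter exceeds the density-contrast crossover ratio. A minimal sketch, using the 3.58 and 4.6 g/cm3 densities from the example above and an assumed 12 kg/m3 gas medium:

```python
import math

def max_size_ratio(rho_light, rho_heavy, rho_fluid):
    """Size ratio at which a larger light particle settles as fast as a smaller
    heavy one (Stokes: v ~ d^2 * delta_rho). Below this ratio, density wins."""
    return math.sqrt((rho_heavy - rho_fluid) / (rho_light - rho_fluid))

# Host mineral 3.58 g/cm^3 vs metal-bearing mineral 4.6 g/cm^3 in ~12 kg/m^3 gas:
r = max_size_ratio(3580.0, 4600.0, 12.0)   # ~1.13: light particle may be ~13% larger
```

With a crossover ratio of about 1.13, the text's ±1.5 micron spread on 15-micron particles (a ±10% band around the mean) sits close to the limit, which is why the sub-1.5-micron sieve tolerance is treated as a hard requirement.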

In summary, there appears to be no fundamental technical hurdle that cannot be overcome, provided micron-size comminution and extremely precise filtration are achieved.

Active-Cooled Electro-Drill (ACED)

Christophe Pochari, Christophe Pochari Engineering, Bodega Bay, CA.

707 774 3024, christophe.pochari@pocharitechnologies.com



Christophe Pochari Engineering has devised a novel drilling strategy using existing technology to solve the problem of excessive rock temperature encountered in deep drilling. The solution proposed is exceedingly simple and elegant: drill a much larger diameter well, around 450mm instead of the typical 250mm or smaller diameters presently drilled. By drilling large-diameter wells, a fascinating opportunity arises: the ability to pull heat away from the rock faster than it can be replenished, thereby cooling it as drilling progresses and preventing the temperature of the water coolant from exceeding 150°C even in very hot rock. A sufficiently large diameter well has enough cross-sectional area to minimize the pressure drop from pumping a voluminous quantity of water through the borehole as it is drilled. The water that reaches the surface of the well will not exceed 150°C; this heat would be rejected at the surface using a large air-cooled heat exchanger. If the site temperature exceeds the 20°C ambient assumed, such as in hot climates, an ammonia chiller can be used to cool the water to as low as 10°C. Any alternative drilling system must fundamentally remove rock either by mechanical force or by heat. Mechanical force can take the form of abrasion, kinetic energy, extreme pressure, percussion, etc., delivered to the rock through a variety of means. The second category is thermal, which has to this date never been utilized except in precision manufacturing, such as cutting tiles or specialized materials with lasers. Thermal drilling is evidently more energy intensive, since rock possesses substantial heat capacity and any drilling medium, whether gas or liquid, will invariably absorb a large portion of this heat. Thermal methods involve melting or vaporizing; since at least one phase change must occur, the energy requirements can be very substantial.
This heat must then be delivered somehow, either by combustion gases directly imparting it or by electromagnetic energy of some sort. Regardless of the technical feasibility of the various thermal drilling concepts, they all share one feature in common: they require drilling with air. The last method available is chemical, in which strong acids dissolve the rock into an emulsion that can be pumped out. This method is limited by the high temperature of the rock, which may decompose the acid, and by the prohibitively high consumption of chemicals, which would prove uneconomical. Any drilling concept which relies on thermal energy to melt, spall, or vaporize rock is ultimately limited by the fact that it cannot practically use water as a working fluid, since virtually all the energy would be absorbed in heating the water. This poses a nearly insurmountable barrier, since even the deep crust is assumed to contain at least 4-5% H2O by volume (Crust of the Earth: A Symposium, Arie Poldervaart, p. 132). Water will invariably seep into the well and collect at the bottom and, depending on the local temperature and pressure, will exist as either liquid or vapor. Additionally, even if the well is kept relatively dry, thermal methods such as lasers or microwaves will still incur high reflective and absorptive losses from lofted rock particles and even micron-thick layers of water on the rock bed. Regardless of the medium of thermal energy delivery, be it radio frequency, visible light as in a laser, or ionized gas (plasma), it will be greatly attenuated by the drilling fluid, requiring the nozzle to be placed just above the rock surface. This presents overheating and wear issues for the nozzle tip material.
Christophe Pochari Engineering concludes, based on extensive first-principles engineering analysis, that thermal systems will face an assortment of ineluctable technical difficulties severely limiting their usefulness, operational depth, and practicality. In light of this, it is essential to evaluate proven and viable methodologies: take existing diamond-bit rotary drilling and make the design modifications necessary for these systems to work in the very hot rock encountered at depths greater than 8 km. In order to access the deep crust, a method to deliver power to a drill bit as deep as 10 kilometers is needed. Due to the large friction generated when spinning a drill shaft over such a distance, it is absolutely essential to deliver power directly behind the drill bit, in a so-called "down-hole" motor. Rotating a drill pipe 10 or more kilometers long absorbs much of the power delivered from the rig and rapidly wears the drill pipe, necessitating frequent replacement and increasing downtime. Moreover, the high friction limits rotational speed, placing an upper limit on the rate of penetration. The rate of penetration for a diamond bit is directly proportional to the speed and torque applied; unlike roller-cone bits, diamond bits do not require a substantial downward force, since they shear rather than crush the rock. Down-hole motors can deliver many-fold more power to the bit, allowing substantially increased rates of penetration. Clearly, a far superior method is called for, and this method is none other than the down-hole motor. Down-hole motors are nothing new: they form the core of modern horizontal drilling in the form of positive-displacement "mud motors," which drive drill bits all over the U.S. shale play. Another option is the old turbodrill, widely used in Russia and discussed further in this text.
What all these methods have in common is a strict temperature threshold that cannot be crossed without rapid degradation. A new paradigm is needed, one in which the surrounding rock temperature no longer limits the depth that can be drilled and the temperature inside the borehole is but a fraction of the surrounding rock temperature. This method is called Active Borehole Cooling using High Volume Water. Such a scheme is possible because of the low thermal conductivity and slow thermal diffusivity of rock: there is insufficient thermal energy in the rock to heat this high volume of water, provided the heat is removed at the surface with a heat exchanger. Christophe Pochari Engineering appears to be the first to propose using a very high water flow volume to keep the down-hole equipment well below the temperature of the surrounding rock; no existing literature makes any mention of such a scheme, which speaks to its novelty.

Impetus for adoption

There is currently tremendous interest in exploiting the vast untapped potential of geothermal energy, and a number of companies are responding with entirely new alternatives to the conventional rotary bit, using exotic methods including plasma, microwaves, and even firing concrete projectiles from a cannon! The greatest inventions and innovations in history shared one thing in common: they were elegant, simple solutions that appeared "obvious" in hindsight. There is no need to get bogged down with exotic, unproven, complicated, and failure-prone alternatives when existing technologies can be optimized. Conventional drilling technology employs a solid shaft spun at the surface through a "Kelly bushing" to transmit torque to the drill bit. This has remained practically unchanged since the early days of the oil industry in the early 20th century. While turbodrills have enjoyed widespread use, especially in Russia, for close to a century, they have a number of limitations. Russia developed turbodrills because the quality of Russian steel at the time was so poor that drill pipes driven from the surface would snap under the applied torque; Russia could not import higher-quality Western steel and was forced to invent a workaround. Early Russian turbodrills wore out rapidly and went through bits much faster than their American shaft-driven counterparts due to the higher rotational speed of the turbine, even with reduction gearing. Diamond bits did not exist at the time, and low-quality carbide bits, principally tungsten carbide, and roller cones were used. Bearings would break down after as little as 10-12 hours of operation. Reduction gearboxes, essential for a turbodrill due to the excessive RPM of the turbine wheels, wore out rapidly due to the loss of oil viscosity at high down-hole temperatures.
The principal challenge of deep rock drilling lies not in the hardness of the rock per se, as diamond bits are much harder and can shear even the hardest igneous rocks effectively. Existing diamond bits are orders of magnitude harder than quartz, feldspar, pyroxene, and amphibole, and newer binderless forms are harder still. From a physics standpoint, it seems absurd to argue that drill bits are not already extremely effective. Rather, the challenge lies in preventing thermal damage to the down-hole components. If only a small flow of drilling fluid is pumped, as is presently done, flowing just enough fluid to carry cuttings to the surface, the thermal energy in the radius of rock surrounding the well is sufficient to raise the temperature of this fluid, especially a lower-heat-capacity oil, to the mean temperature along that well. Existing wells, especially deeper boreholes, are usually around 9-10" (250 mm) in diameter. If the well is much narrower than 350 mm, it is difficult to flow enough water to cool it. Assuming a 100-hour thermal diffusion time, we draw a 1.26-meter radius of rock; that is, in one hundred hours heat moves this distance. By growing the diameter of the well from 250 mm to 460 mm, the volume of rock to be cooled per unit of flow cross-section, which at constant pressure drop is proportional to the achievable flow rate, drops from 125 cubic meters of rock per m² of cross-sectional area to less than 42, around 3 times less. Flow rates in previous deep drilling projects were usually less than 500 GPM, or around 110 m³/hr. The German deep drilling program ran mud flow rates between 250 and 400 GPM (81 m³/hr) for well diameters of 20 cm and 22.2 cm. The thermal flux from a deep well can average around 70 MWt, so a small water flow is rapidly warmed to the surrounding rock temperature.
The minimum flow rate to keep the water below 180°C is around 400 cubic meters per hour, far too high to flow through such a small annulus, especially if the drilling mud is viscous and the drill pipe takes up much of the space, leaving only a small annulus. The volume of rock cooled per 100 hours is 6.8 cubic meters per meter of wellbore, or 18,000 kg. If this mass of rock is cooled by 300°C, the thermal energy is 1,280 kWh, a cooling duty of 12.8 kW per meter of wellbore length. Since water has a heat capacity of 3,850 J/kg-K at the average temperature and pressure of the well, 1,800 cubic meters per hour of water, a flow rate achievable with 600 bar of head in a 460 mm diameter well, provides a cooling capacity of 343 MWt, or 34.3 kW/m of wellbore length. Clearly, our well will not actually produce 343 MWt, equal to a small nuclear reactor, otherwise we would be drilling millions of holes and getting virtually free energy forever! Since drilling occurs over a relatively long period of close to 1,500 hours, the thermal draw-down radius is 4.87 meters, a rock volume of 81.7 cubic meters per meter of wellbore. The thermal energy in this rock mass is 15,400 kWh, or only 10.26 kW/m of cooling duty at a temperature drop of 240°C. But such a large temperature drop is entirely unrealistic, since a 12 km deep well will have an average rock temperature of only 210°C, so a temperature drop of, say, only 100°C is needed, resulting in a cooling duty of 4.3 kW/m, or 6,300 kWh/m over 1,500 hours. This means a 12 km well will produce 51.6 MWt of heat, raising the water temperature by only 27°C. If a 12 km well is drilled in a geothermal gradient of 35°C/km, the maximum temperature reached will be 420°C and the average temperature 210°C. This means that over the last 3.5 km, the rock temperature will be above 300°C, far too hot for electronics, lubricants, bearings, and motors to operate reliably without a severe reduction in longevity.
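The draw-down radii and per-meter cooling duties quoted above follow from a standard conduction estimate. Below is a minimal sketch, assuming typical handbook values for granite (thermal diffusivity 1.1×10⁻⁶ m²/s, density 2,650 kg/m³, specific heat 850 J/kg·K); real rock properties vary by formation:

```python
import math

# Assumed granite properties (typical handbook values; real rock varies)
ALPHA = 1.1e-6   # thermal diffusivity, m^2/s
RHO   = 2650.0   # density, kg/m^3
CP    = 850.0    # specific heat, J/(kg*K)

def diffusion_radius_m(hours):
    """Characteristic conduction distance, r ~ sqrt(4 * alpha * t)."""
    return math.sqrt(4.0 * ALPHA * hours * 3600.0)

def cooling_duty(hours, bore_radius_m=0.23, delta_t_k=300.0):
    """Rock volume (m^3), energy (kWh), and mean power (kW) per meter of
    wellbore needed to cool the conduction-reached annular shell by delta_t_k."""
    r_outer = bore_radius_m + diffusion_radius_m(hours)
    volume = math.pi * (r_outer**2 - bore_radius_m**2)   # m^3 per m of well
    energy_kwh = volume * RHO * CP * delta_t_k / 3.6e6
    return volume, energy_kwh, energy_kwh / hours

print(round(diffusion_radius_m(100), 2))    # ~1.26 m, as quoted
print(round(diffusion_radius_m(1500), 2))   # ~4.87 m
vol, kwh, kw = cooling_duty(100)
print(round(vol, 1), round(kwh), round(kw, 1))  # ~6.8 m^3, ~1276 kWh, ~12.8 kW
```

With these assumptions the model reproduces the 1.26 m and 4.87 m draw-down radii and the roughly 12.8 kW/m duty cited in the text.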
Geothermal wells, unlike petroleum and gas wells, must penetrate substantially below the shallow sedimentary layer, and for effective energy recovery, rock temperatures over 400°C are desired. As the temperature of the well reaches 300-400°C, the alloys used in constructing the drill equipment, even high-strength beta titanium, begin to degrade, lose strength, warp, and fail from stress corrosion cracking when chlorides and other corrosive substances contact the metallic surfaces. Proper thermal management thus represents the crucial exigency that must be satisfied for the upper crust to be tapped by human technology. Christophe Pochari Engineering's Active-Cooled Electro-Drill (ACED) methodology concatenates a number of technologies, described below, to achieve low down-hole temperatures.

#1: High volume/pressure water cooling using large diameter beta-titanium drill pipes:

Using high-strength beta-titanium drill pipes to deliver 600+ bar water at over 1,700 cubic meters per hour, a cooling duty of up to 400 megawatts can be reached if the water coolant is allowed to rise to 180°C. The rock mass around the 450 mm diameter well holds nowhere near enough heat to warm this mass of water by that much; an expected 60-80 MW of thermal energy will be delivered to the surface during the first 1,500 hours of drilling. The drill string incorporates a number of novel features. Being constructed of ultra-high-strength titanium, it can reach depths of 12 km without shearing off under its own weight. It is also designed with an integrated conductor and abrasion liner: the conductor is wrapped around the drill pipe between a layer of insulation and the outermost abrasion liner.

#2: High Power density down-hole electric machines:

A high-speed synchronous motor using high-temperature permanent magnets and mica-silica-coated windings generates 780-1,200 kW at 15,000-25,000 rpm. Owing to its high speed, the motor is highly compact and easily fits into the drill string within a hermetic high-strength steel container that protects it from shock and from abrasive and corrosive fluids. The motor is cooled by passing fresh water through sealed flow paths in the windings. Compared to the very limited power of Russian electro-drills of the 1940s to 1970s, the modern electro-drill designer has access to state-of-the-art high-power-density electrical machines.

#3 High Speed Planetary Reduction Gearbox:

The brilliance of the high-volume active cooling strategy is the ability to use a conventional gear set to reduce the speed of the high-power-density motor to the 300-800 RPM ideal for the diamond bit. Using high-viscosity gear oils retaining 30 cSt at 180°C, sufficient film thickness can be maintained and a gearbox life of up to 1,000 hours can be achieved.

#4: Silicon Thyristors and Nano-Crystalline Iron Transformer Cores:

Silicon thyristors are widely used in the HVDC sector and can be commercially procured for less than 3¢/kW.


The maximum voltage of electrical machines is limited by winding density constraints due to corona discharge, which requires thick insulation and reduces coil packing density. For satisfactory operation and convenient design, a voltage much over 400 volts is not desirable. The problem then becomes: how to deliver up to 1 MW of electrical power over 10 km? At low voltage this is next to impossible. If 400 volts were used, the current would be a prohibitive 2,500 amps, instantly melting any practical copper conductor. As any power engineer knows, minimizing conductor size and losses requires a high operating voltage, 5,000 volts or more. To deliver 1,000 kW (1,340 hp) to the drill bit through a 15 mm copper wire at 100°C, the average resistance is 0.8 ohms, resulting in Joule heating of 22 kW, or 2.2% of the total power. To deliver current to the motor, DC is generated at 6-10 kV; this DC is inverted at 100-150 kHz to minimize transformer core size, and the voltage is stepped down to the 400 volts required by the motor. This high-frequency, low-voltage power is then rectified and re-inverted at the roughly 1,000 Hz needed by the high-speed synchronous motor. Silicon thyristors can operate at up to 150°C in oxidizing atmospheres (thermal stability is substantially improved in reducing or inert atmospheres). Nano-crystalline iron cores have a Curie temperature of 560°C, well above the maximum water temperature encountered with 1,700 m³/hr flow rates.
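The case for high-voltage delivery can be checked with simple I²R arithmetic, using the 0.8 ohm conductor resistance quoted above. A minimal sketch:

```python
def joule_loss_kw(power_kw, volts, resistance_ohm=0.8):
    """I^2*R conductor loss for DC power delivery.
    0.8 ohm is the text's figure for a 15 mm copper conductor at 100 C."""
    current_a = power_kw * 1e3 / volts
    return current_a**2 * resistance_ohm / 1e3

for v in (400, 6000, 10_000):
    print(v, round(joule_loss_kw(1000, v), 1))
# 400 V   -> ~5,000 kW lost: five times the delivered power, hopeless
# 6 kV    -> ~22 kW, the ~2.2% figure quoted in the text
# 10 kV   -> ~8 kW
```

Loss falls with the square of voltage, which is why the text's 6-10 kV transmission with down-hole step-down is the only workable arrangement.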

Rock hardness is not the limiting factor

Feldspar, the most common mineral in the crust, has a Vickers hardness of 710, or 6.9 GPa. Diamond in its binderless polycrystalline form has a hardness of 90-150 GPa, roughly 13-22 times greater. Diamond has a theoretical specific wear rate of 10^-9 mm³/N·m, where the cubic millimeters represent volume loss per newton of applied force per meter of travel. We could thus easily calculate the life of the bit from the specific wear rate constant. Unfortunately, reality is more complex: bit degradation is usually mediated by spalling, chipping, and breakage, and the extrusion of the cobalt binder causes polycrystalline diamond to degrade faster than its hardness alone would predict. Archard's equation states that wear is proportional to the applied load and inversely proportional to hardness, so the intrinsic wear rate is extremely slow unless excessive temperature and shock are present. In light of the thermal constraint, it might seem obvious to any engineer to exploit the low thermal conductivity of rock and simply use a coolant, of which water is optimal, to flush heat out of the rock and back to the surface. But conventional oil and gas drilling employs a heavy, viscous drilling mud; this mud is difficult to pump and places stringent requirements on pumping equipment. Elaborate filtration systems are required, and cooling this mud with a heat exchanger would severely erode the heat exchanger tubes. The principal reason "active" cooling of the wellbore is not an established process is that no present application justifies it. For example, cooling a 450 mm diameter, 10 km borehole that fluxes close to 70 MWt of heat during the first 1,200 hours requires a pumping power of up to 32,000 hp. The power cost alone would be close to $1.5 million per well at a wholesale power price of $70/MWh.
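The Archard wear relation mentioned above can be illustrated numerically. A minimal sketch using the text's specific wear rate for diamond and a hypothetical cutter duty (the 10 kN load and 500 rpm on a 450 mm bit are illustrative assumptions, not figures from the text):

```python
import math

# Specific wear rate for diamond from the text; hardness is already
# folded into this constant: V = k * F * s
K_DIAMOND = 1e-9   # mm^3 per (N * m)

def archard_wear_mm3(load_n, sliding_m, k=K_DIAMOND):
    """Worn volume (mm^3) after sliding distance `sliding_m` under load `load_n`."""
    return k * load_n * sliding_m

# Hypothetical duty: 450 mm bit at 500 rpm, 10 kN total cutter load
rpm, bit_diameter_m, load_n = 500, 0.45, 10_000
sliding_per_hour_m = rpm * 60 * math.pi * bit_diameter_m   # rim travel per hour
wear = archard_wear_mm3(load_n, sliding_per_hour_m)
print(round(wear, 2))   # ~0.42 mm^3 per hour -- negligible
```

Even under these aggressive assumptions the idealized abrasive loss is a fraction of a cubic millimeter per hour, supporting the text's point that spalling, chipping, and thermal damage, not abrasion, govern real bit life.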
The added cost of site equipment, including heat exchangers, a larger pump array, multiple gas turbines, and the fuel delivery needed to run them, makes this strategy entirely prohibitive for conventional oil and gas exploration. Even if the cost could be tolerated, the sub-200°C temperatures encountered could not justify such a setup. What's more, pumping such a massive amount of water requires a larger-diameter drill pipe that can handle the pressure difference at the surface. Since the total pressure drop down the pipe and up the annulus is close to 600 bar across 10 km, the pipe must withstand this pressure difference without bulging; a bulging pipe would compress the water coming up the annulus, canceling the differential pressure and stopping the flow. High-strength beta-titanium alloys using vanadium, tantalum, molybdenum, and niobium are required, since the pipes must not only withstand the great pressure at the surface but also carry their own mass. With its low density (4.7 g/cm³), excellent corrosion resistance, and high ductility, beta-titanium is the ideal alloy choice. AMT Advanced Materials Technology GmbH markets a high-ductility titanium alloy called "Ti-SB20 Beta" that reaches ultimate tensile strengths of over 1,500 MPa. For conventional oil and gas drilling to only a few km deep, the weight of the drill string, with the buoyancy of heavy drilling mud, allows the use of low-strength steels with yield strengths below 500 MPa. This high-end titanium would be vacuum melted and the drill pipes forged or even machined from solid round bar stock. The titanium for the drill pipe set alone would cost $5 million or more, plus several additional millions for machining. In addition, titanium has poor wear and abrasion resistance and tends to gall, so it cannot be used where it rubs against the rock surface.
Because an electro-drill does not spin the drill pipe within the well, the only abrasion comes from the low concentration of rock fragments in the water and from the sliding of the pipe if it is not kept perfectly straight, which is next to impossible. To prevent damage to the titanium drill pipe, a liner of manganese steel or chromium can be mechanically attached to the exterior of the pipe and replaced when needed. Another reason high-volume water cooling of drilling wells is not done is the issue of lost circulation and fracturing of the rock. In the first few kilometers, the soft sedimentary rock is very porous and would allow much of the pumped water to leak into pore spaces, causing excessive lost circulation. Since a high volume of water requires a pressure surplus at the surface, the water is as much as 250 bar above the background hydrostatic pressure, allowing it to displace liquids in the formation. Fortunately, the high-pressure water does not contact the initial sedimentary layer, since full pressure is only needed when the well is quite deep, and by the time the water flows up the annulus to the sedimentary formation it has already lost most of its pressure. The initial 500-600 bar water is piped down the drill pipe and exits at spray nozzles around the drill bit. In short, a number of reasons have combined to make such a strategy unattractive for oil and gas drilling. Sedimentary rocks such as shale, sandstone, dolomite, and limestone can be very vuggy (riddled with cavities), which can cause drilling fluid losses of up to 500 bbl/hr (80 cubic meters per hour). A lost circulation of 250 bbl/hr is considered severe, and rates as high as 500 bbl/hr are rarely encountered. With water-based drilling, the cost is not a great concern, since no expensive weighting agents such as barite or bentonite are used, nor any viscosifying agents such as xanthan gum.
Little can be done to prevent lost circulation other than using a closed annulus or drilling and casing simultaneously, but both methods add more cost than simply replacing the lost water. Water itself is essentially free; only its transport and pumping cost matter. If 80 cubic meters are lost per hour, an additional 1,200 kW is used for pumping. The depth of the water table in the Western U.S. (where geothermal gradients are attractive) is about 80 meters. In Central Nevada, for example, where groundwater is by no means abundant, the average precipitation is 290 mm, or 290,000 cubic meters per square kilometer. Multiple wells could be drilled to the 80-meter water table, with pumps and water purification systems installed, to provide onsite water and minimize transport costs. Water consumption for drilling a deep well using active cooling pales in comparison to agriculture or other water-intensive industries such as paint and coating manufacturing, alkali and chlorine production, and paperboard production. If water must be trucked to the site because well drilling proves impossible for whatever reason, a large tanker trailer with a capacity of 45 cubic meters, allowed on U.S. roads with 8 axles, can be used. If the distance between the water pickup point and the drill site is 100 km, which is reasonable, then assuming a driver wage of $25/hr and fuel at $3.7/gal (the average U.S. diesel price in December 2022), transport would total about $150 per round trip for 45 cubic meters, or less than $4 per cubic meter, around $320/hr at the worst-case loss rate. The total cost of replacing lost circulation at the most extreme loss rates encountered is thus about $450,000 for a 10 km well drilled at 7 meters per hour.


The drilling technology landscape is ripe for dramatic disruption as new, more durable and thermally stable metal-free bit materials reach the market. But this upcoming disruption is not what many expect. Rather than exotic new drilling technologies such as laser beams or plasma bits, improvements in conventional bit materials and down-hole power delivery hold the real innovation potential. Improvements in power delivery and active well cooling allow engineers to render the bulky turbodrill obsolete. Investors in this arena should be cautious and conservative; the old adage "tried and true" is apt here. Binderless polycrystalline diamond has been successfully synthesized at pressures of 16 GPa and temperatures of 2,300°C by Saudi Aramco researchers. Conventional metallic-bonded polycrystalline diamond bits begin to degrade rapidly at temperatures over 350°C because the thermal expansion of the cobalt binder exceeds that of diamond. Attempts to remove the metallic binder by leaching usually yield a brittle diamond prone to breaking off during operation. Binderless diamond shows wear resistance around 4-fold higher than binder formulations and thermal stability in oxidizing atmospheres up to 1,000°C. The imminent commercialization of this material does not bode well for alternative drilling technologies, namely those that propose using thermal energy or other exotic means to excavate rock. If and when these higher-performance, longer-lasting bits mature, most efforts at developing alternative technologies will likely be abandoned outright. In light of this, it would be unwise to invest large sums in unproven "bitless" technologies; efforts are better spent developing thermally tolerant down-hole technologies and active cooling strategies.
It is therefore fair to say there is virtually no potential to significantly improve the core rock-cutting technology. The remaining innovation is confined to the drilling assembly, such as the rig, drill string, fluid, casing strategy, and pumping equipment, not the mechanics of the rock-cutting face itself. Conventional cobalt-binder diamond bits can drill at 5 meters per hour; using air as the drilling fluid, the speed increases to 7.6 meters per hour. Considering that most proposed alternatives cannot drill much over 10 meters per hour and none has been proven, it is difficult to justify their development in light of new diamond bits predicted to last four times longer, which in theory would allow at least a doubling of drilling speed at constant wear rates. A slew of alternative drilling technologies is chronicled by William Maurer in the book "Novel Drilling Techniques." To date, attempts to develop these alternative methods have ended in failure. For example, in 2009 Bob Potter, the inventor of hot dry rock geothermal, founded a company to drill using hot high-pressure water (hydrothermal spallation). As of 2022, the company appears to be out of business. Another company, Foro Energy, has been attempting to use commercial fiber lasers, widely used in metal cutting, to drill rock, but little speaks for its practicality. The physics speaks for itself: a 10-micron-thick layer of water absorbs 63% of the energy of a CO2 laser. No one could plausibly argue that a lack of human imagination explains our putative inability to drill cost-effective deep wells. Maurer lists a total of 24 proposed methods over the past 60 years.
The list includes Abrasive Jet Drills, Cavitating Jet Drills, Electric Arc and Plasma Drills, Electron Beam Drills, Electric Disintegration Drills, Explosive Drills, High-Pressure Jet Drills, High-Pressure Jet Assisted Mechanical Drills, High-Pressure Jet Borehole Mining, Implosion Drills, REAM Drills, Replaceable Cutterhead Drills, Rocket Exhaust Drills, Spark Drills, Stratapax Bits, Subterrene Drills, Terra-Drills, Thermal-Mechanical Drills, and Thermocorer Drills. This quite extensive list does not include the "nuclear drills" proposed during the 1960s. Prior to the discovery of binderless diamond bits, the author believed that among the proposed alternatives, explosive drills might be the simplest and most conducive to improvement, since they had been successfully field-tested. What most of these exotic alternatives claim to offer, or at least what their proponents claim, is faster drilling rates. But under scrutiny, they do not live up to this promise. For example, Quaise, a company attempting to commercialize Paul Woskov's idea of using high-frequency radiation to heat rock to its vaporization point, claims a drilling rate of 10 meters per hour. This is nothing spectacular considering that conventional binder polycrystalline diamond bits from the 1980s could drill as fast as 7 meters per hour in crystalline rock using air (Deep Drilling in Crystalline Bedrock Volume 2: Review of Deep Drilling Projects, Technology, Sciences and Prospects for the Future, Anders Bodén, K. Gösta Eriksson). Drilling with lasers, microwaves, or any other thermal delivery mechanism is well within the capacity of modern technology, but it offers no compelling advantage to impel adoption. Most thermal drilling options also require dry holes, since water vapor, being a dipolar molecule, absorbs most of the energy of electromagnetic radiation.
While new binderless polycrystalline diamonds can withstand temperatures up to 1,200°C in non-oxidizing atmospheres, down-hole drivetrain components cannot practically operate over 250°C due to lubricant limitations, preventing drilling with down-hole equipment at depths beyond 7 km, especially in sharp geothermal gradients of over 35°C/km. Electric motors using glass- or mica-insulated windings and high-Curie-temperature magnetic materials such as Permendur can maintain high flux density well over 500°C, but gearbox lubrication issues make such a motor useless on its own. To maximize the potential of binderless diamond bits, a down-hole drivetrain is called for, to eliminate drill pipe oscillation and friction and to allow optimal speed and power. Of the down-hole drive options, a high-frequency, high-power-density electric motor is ideal, possessing far higher power density than classic turbodrills and offering active speed and torque modulation. Even if a classic Russian turbodrill were employed, a reduction gear set is still required. Russian turbodrills were plagued by rapid wear of planetary gearsets due to low oil viscosity at downhole temperatures: a gearset operating with oil of 3 cSt wears roughly ten times faster than one at 9 cSt. To make a high-power electric motor fit in the limited space of the drill pipe, a high operating speed is necessary, and this is where the lubrication challenges become exceedingly difficult. While solid lubricants and advanced coatings combined with ultra-hard materials can allow bearings to run entirely dry for thousands of hours, non-gear reduction drives are immature and largely unproven for continuous heavy-duty use. The power density of a synchronous electric motor is proportional to the flux density of the magnets, the pole count, and the rotational speed. A suitable reduction drive must therefore be incorporated into the drill.
Although a number of exotic untested concepts exist, such as traction drives, pneumatic motors, high-temperature hydraulic pumps, and dry-lubricated gears, none enjoys any degree of operational success; they exist only as low-TRL R&D efforts. Deep rock drilling requires mature technology that can be rapidly commercialized with today's technology; it cannot hinge on future advancements that have no guarantee of occurring. Among speed-reducing technologies, involute tooth gears are the only practical reduction drive option, widely used in the most demanding applications such as helicopters and turbofan engines. But because of the high Hertzian contact stress generated by meshing gears, it is paramount that the viscosity of the oil not fall much below 10 centistokes, in order to maintain a sufficient film thickness on the gear face and prevent the rapid wear that would necessitate frequent pull-up of the down-hole components. Fortunately, ultra-high-viscosity gear oils are manufactured that can operate up to 200°C. Mobil SHC 6080 possesses a viscosity of 370 cSt at 100°C; the Andrade equation predicts a viscosity of 39 cSt at 180°C. In an anoxic environment, the chemical stability of mineral oils is very high, up to close to 350°C, but at such temperatures viscosity drops below the film-thickness threshold, so viscosity, not thermal stability, is the singular consideration. It is expected that by eliminating the oscillation of the drill pipe caused by eccentric rotation within the larger borehole, and by removing the cobalt binder, diamond bits could last 100 hours or more. This number is conjectural, and more conservative bit life figures should be used for performance and financial analysis. It is critical that the major down-hole drivetrain components last as long as the bits so as not to squander their immense potential. If bit life is increased to 100 hours, the time lost to pull-out is reduced markedly.
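The Andrade extrapolation quoted above (370 cSt at 100°C falling to roughly 39 cSt at 180°C) can be reproduced with a two-point exponential fit, ν = A·exp(B/T). A minimal sketch; the 100°C point is from the text, while the 4,250 cSt value at 40°C is an assumed illustrative figure for a very heavy gear oil, not a datasheet number:

```python
import math

def andrade_fit(t1_c, nu1, t2_c, nu2):
    """Fit nu = A * exp(B / T) (T in kelvin) through two (temp C, cSt) points
    and return a function of temperature in Celsius."""
    T1, T2 = t1_c + 273.15, t2_c + 273.15
    B = math.log(nu1 / nu2) / (1.0 / T1 - 1.0 / T2)
    A = nu1 / math.exp(B / T1)
    return lambda t_c: A * math.exp(B / (t_c + 273.15))

# 100 C point from the text; 40 C point is an assumed illustrative value
visc = andrade_fit(40, 4250.0, 100, 370.0)
print(round(visc(180), 1))   # ~39 cSt, matching the text's extrapolation
```

The fitted curve passes through both input points exactly and lands near the 39 cSt figure at 180°C, comfortably above the ~10 cSt film-thickness floor discussed above.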
If the depth of the well is 10,000 meters, the average depth is 5,000 meters, the average penetration rate is 7 m/hr, and the drill pipe is 30 meters, then the number of drill-pipe sections is 333. During each retrieval at average depth, if the turn-around time per section can be kept to 3 minutes, the total time is 8.3 hours one way, or 16.6 hours for a complete bit swap. With a conservative bit life of 50 hours and a total drilling time of 1,430 hours, a total of 29 bit swaps will be required, taking up 481 hours, or 33% of the total drilling time. If bit life is improved to 100 hours, downtime is halved to 240 hours, or 17%. If a drill-pipe length of 45 meters is employed with a bit life of 100 hours and a rate of penetration of 7 m/hr, the downtime is only 211 hours, or 14.7%.
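The downtime arithmetic above can be reproduced with a short script; the 3-minute per-section turnaround and retrieval from average depth are the assumptions stated in the text:

```python
def bit_swap_downtime(depth_m, rop_m_hr, pipe_len_m, bit_life_hr, min_per_section=3):
    """Estimate hours lost to bit swaps over the life of the well."""
    drilling_hours = depth_m / rop_m_hr                      # total rotating hours
    avg_sections = (depth_m / 2) / pipe_len_m                # sections at average depth
    round_trip_hr = 2 * avg_sections * min_per_section / 60  # pull out + run back in
    swaps = drilling_hours / bit_life_hr                     # bit replacements needed
    downtime = swaps * round_trip_hr
    return drilling_hours, downtime

drill, lost = bit_swap_downtime(10_000, 7, 30, 50)
print(round(drill), round(lost), round(100 * lost / drill))  # ≈ 1429 h drilling, ≈ 476 h lost, ≈ 33%
```

The same function reproduces the halving of downtime at 100-hour bit life and the further gain from 45-meter pipe sections.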

Some may be suspicious that something as simple as this proposed idea has not been attempted before. It is important to realize that, at present, there exists no rationale for its use; rather than fundamental technical problems or concerns regarding feasibility, a lack of relevant demand accounts for its purported novelty. As mentioned earlier, this strategy has not been employed in drilling before because it imposes excessive demands on surface equipment, namely the need for close to 16,000 hp (32,000 hp at full depth) to drive the high-pressure water pumps. Such power consumption is impractical for oil and gas drilling, where quick assembly and disassembly of equipment is demanded in order to increase drilling throughput. Water, even with its low viscosity, requires a great deal of energy to flow up and down this very long flow path. The vast majority of the sedimentary deposits where hydrocarbons were laid down during the Carboniferous period occur in the first 3 km of the crust. Temperatures at these depths correspond to less than 100°C, nowhere near a temperature that warrants advanced cooling techniques. Deep drilling in crystalline bedrock does not prove valuable for hydrocarbon exploration, since subduction rarely brings valuable gas and liquid hydrocarbons deeper than a few kilometers. There has therefore been a very weak impetus for the adoption of advanced technologies related to high-temperature drilling. Geothermal energy presently represents a minuscule commercial contribution, and to this date has proven an insufficient commercial incentive to bring to market the technical and operational advances needed to viably drill past 10 km in crystalline bedrock. Cooling is essential for more than just the reduction gearbox lubricant.
If pressure transducers, thermocouples, and other sensors are desired, one cannot operate hotter than the maximum temperature of silicon integrated-circuit electronics. For example, a very effective way to reduce Ohmic losses is to increase the voltage to keep the current to a minimum. This can easily be done by rectifying high-voltage DC using silicon-controlled rectifiers (SCRs, or thyristors) and nanocrystalline transformer cores. But neither gearbox oil nor thyristors can operate at more than 150°C; cooling thus emerges as the enabling factor behind any attempt to drill deep into the crust of the earth, regardless of how exactly the rock is drilled. Incidentally, the low thermal conductivity and heat capacity of the crust yield a low thermal diffusivity, or high thermal inertia. Rock is a very poor conductor of heat; in fact, rock (silicates) can be considered an insulator, and similar oxides are used as refractory bricks to block heat from conducting in smelting furnaces. The metamorphic rock of the continental crust has a thermal conductivity of only 2.1 W/m·K and a heat capacity of under 1,100 J/kg·K at 220°C, the average temperature of a 12 km deep well, translating into a very slow thermal diffusivity of 1.1 mm²/s. This makes it more than feasible for the operator to pump a high volume of water through the drill pipe and annulus, above and beyond the requirement for cuttings removal. If rock had an order of magnitude faster thermal diffusivity, such a scheme would be impossible, as the speed at which heat travels through the rock would exceed even the most aggressive flow rates allowable through the borehole. This is the motivation behind the use of down-hole electric motors: with satisfactory cooling, electric motors are the most convenient method to deliver power, but they are not the only high-power-density option.
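As a sanity check, thermal diffusivity follows from α = k/(ρ·cp). The text does not quote a rock density, so a typical crustal value of ~2,700 kg/m³ is assumed here; the result lands in the same sub-millimeter-squared-per-second regime as the quoted figure:

```python
k = 2.1       # thermal conductivity, W/(m*K), quoted in the text
cp = 1100.0   # heat capacity, J/(kg*K), quoted in the text
rho = 2700.0  # crustal density, kg/m^3 -- assumed, not from the text

alpha = k / (rho * cp)                  # thermal diffusivity, m^2/s
print(round(alpha * 1e6, 2), "mm^2/s")  # ~0.7 mm^2/s with these property values
```

The exact value depends on the temperature-dependent property set chosen; either way, the diffusivity is orders of magnitude too slow for conducted heat to outrun the borehole water flow.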
A turbo-pump (a gas turbine without a compressor) burning hydrogen and oxygen is also an interesting option, requiring only a small hose to deliver the gaseous fuel, which eliminates the need for any down-hole voltage conversion and rectification equipment. But despite the superior power density of a combustion power plant, the need to pump high-pressure flammable gases presents a safety concern at the rig, since each time a new drill string must be coupled, the high-pressure gas lines have to be closed off and purged. In contrast, an electric conductor can simply be de-energized during each coupling without any mechanical action at the drill-pipe interface, protecting workers at the site from electric shock. In conclusion, even though a hydrogen-oxygen turbo-pump is a viable contender to electric motors, the complexity and safety issues of pumping high-pressure flammable gases rule out this option unless serious technical issues are encountered in the operation of down-hole electric motors, which are not anticipated. Conventional turbodrills require large numbers of turbine stages to generate a significant amount of power, so a substantial portion of the fluid pumped from the surface is used up by the turbine stages. The resulting pressure drop reduces the cooling potential of the water, since there is less head remaining to overcome viscous drag along the rough borehole on the way up the annulus. According to Inglis, T. A. (1987) in Directional Drilling, an 889 hp turbodrill experiences a pressure drop of 200 bar at a flow rate of 163 m³/hr; since the large-diameter drill bit requires at least 1000 kW (1350 hp), the total pressure drop will be 303 bar, or half the initial driving head. This will halve the available flow rate and thus the cooling duty.
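The pressure-drop figure derived from Inglis can be reproduced by assuming turbine pressure drop scales linearly with output power at constant flow rate (a simplifying assumption for this sketch):

```python
ref_power_hp = 889   # reference turbodrill output (Inglis, 1987)
ref_drop_bar = 200   # corresponding pressure drop, bar
bit_power_hp = 1350  # power required by the large-diameter bit (1000 kW)

# Linear power-to-pressure-drop scaling at fixed flow rate (sketch assumption)
drop = ref_drop_bar * bit_power_hp / ref_power_hp
print(round(drop))  # ≈ 304 bar, matching the ~303 bar quoted
```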


Electric motors confer to the operator the ability to perform live, active bit speed and torque modulation, while turbodrills cannot be efficiently operated below their optimal speed band. Moreover, even if turbodrills could be designed to operate efficiently at part load, it is not practical to vary the pumping output at the surface to control the turbodrill’s output. And even if turbodrills were used, they would still need to employ our novel active-cooling strategy, since they too need speed reduction. It should be emphasized that it is not the use of down-hole motors themselves that makes our drilling concept viable, but rather the massive water flow that keeps everything cool. In hard crystalline bedrock, well-bore collapse generally does not occur; rather, a phenomenon called “borehole breakout” occurs. Breakout is caused by a stress concentration produced at the root of two opposing compression domes, forming a crack at the point of stress concentration between them. Once this crack forms, it stabilizes and the stress concentration is relieved, growing only very slowly over time. Imagine the borehole divided into two halves, each forming a dome opposite the other: compressive stress is at a maximum at the crest of each dome and at a minimum at the root, which causes the roots to elongate and fracture. Overburden pressure is an unavoidable problem in deep drilling; it is caused by the sharp divergence between the lithostatic gradient of rock, 26 MPa/km, and the hydrostatic gradient of water, only 10 MPa/km. Turning to technical challenges, it is important to separate technical problems from operational problems. For example, regardless of what kind of drill one uses, there is always the issue of the hole collapsing in soft formations and equipment getting stuck.
Another example would be lost circulation; such a condition is largely technology-invariant, short of extreme options such as casing drilling. As for operational challenges: while there are no strict “disadvantages”, namely features that make it inferior to current surface-driven shaft drills, there are undoubtedly a number of unique operational challenges. Compared to the companies touting highly unproven and outright dubious concepts, this method and technological package faces only operational, not technical, challenges. The massive flow of water and the intense removal of heat from the rock will result in more intense than normal fracture propagation in the borehole. The usual issues that pertain to extreme drilling environments apply equally to this technology and are not necessarily made any graver than with conventional shaft-driven drills. For example, the down-hole motor and equipment getting stuck, a sudden unintended blockage of water flow somewhere along the annulus that results in rapid heating, or a snapping of the drill string are likely to happen occasionally, especially in unstable formations or in regions where over-pressurized fluids are stored in the rock. Another potential downside is intense erosion of the rock surface due to the high annulus velocity of over 8 meters per second. Since a large volume of water must be pumped, a large head of at least 600 bar is required; this pressure energy is converted into velocity according to Bernoulli’s principle. Because the concentration of fragments in the water is extremely low (<0.06%, versus over 2% in drilling mud), the rate of erosion on the hardened drill-pipe liner is not a concern. Given the relatively short period during which drilling actually takes place, around 2,000 hours including bit replacement and pull-up every 50 hours, it is unlikely this water will have time to significantly erode the well-bore.
Even if it does, it will merely enlarge the well diameter, and is not expected to significantly compromise its structural integrity.








Temperature-dependent thermal diffusivity of the Earth’s crust and implications for magmatism | Nature


Embedded Heat Exchanger Vacuum Insulated High Temperature Thermal Energy Storage Reactor

Christophe Pochari, Pochari Technologies, Bodega Bay, CA.

707 774 3024, christophe.pochari@pocharitechnologies.com


A 2.8 MWe thermal reactor; the net power density of the reactor is 466 kWe/m³. The overall direct material cost of the system is less than $30/kW, and the total number of cycles is almost unlimited. The basic construction of the reactor consists of an atmospheric-pressure-bearing vacuum chamber which maintains a medium vacuum to reduce convective heat transfer to close to zero. This extends the complete thermal draw-down time to one year or more. A series of nickel, zirconium, and tungsten-coated radiant barriers forms a monolithic rigid blanket around the alumina block core, blocking the bulk of the thermal radiation. Each alumina brick is spaced with small zirconia or tungsten spacers to allow expansion and contraction of the entire block assembly. A thin-wall zirconium tank slightly larger than the volume of bricks seals off the helium from the vacuum chamber. Helium gas at 5 bar flows through the embedded heat-exchanger tubes, exits at the bottom, and is sent to a secondary heat exchanger, where it heats a secondary mass of compressed helium that drives a Brayton-cycle gas turbine. The operating temperature of the device is 1500°C, with a 1200°C thermal drawdown maintaining a minimum of 300°C to ensure the efficiency of the Brayton cycle does not fall excessively. This operating temperature is no higher than that of iron smelting technology hundreds of years old; with a combination of silicon carbide, zirconium, titanium, and tungsten, all used sparingly, these temperatures remain below the respective melting points of the structural materials. It should be noted that there are no highly stressed parts, since the operating pressure of the helium is very low and the main vacuum chamber is loaded in compression, which allows the use of ceramic materials.


Christophe Pochari Energietechnik has developed a new form of thermal energy storage system not previously considered. It goes without saying that intermittent renewables like solar and wind require some form of storage medium. Unfortunately, after 50 years of research into energy storage, mainly for solar thermal powerplants, no commercial technology exists which can satisfy the demanding scale and endurance requirements of the modern day power grid.

If one evaluates existing schemes of storing energy via sensible heat, one finds a number of very substandard designs and, worse yet, a very poor choice of material. Ultimately, a thermal energy storage system is determined almost exclusively by the intrinsic properties of the material used. Design and engineering cannot compensate for a poorly conductive material, or a material that simply will not store much heat. The best possible material, after examining virtually every earth-abundant elemental composition, is aluminum oxide. Thermal Energy Storage for Medium and High Temperatures, by WD Steinmann, a recent textbook on high-temperature energy storage, makes only one mention of aluminum oxide in the entire book. A quick search on Google Books brings up results for chemical-reaction energy storage, where aluminum is combusted and the aluminum oxide is reduced again, but makes no mention of using it as a solid sensible storage medium. Perhaps people have simply missed the opportunity, just as no one had realized one can use pressure to build slender high-payload guyed towers. An alternative explanation, and one that we must address to quell concerns of some underlying feasibility issue, is that aluminum oxide possesses some feature that makes it an inappropriate material. But this can quickly be ruled out, since it finds widespread use as a refractory brick, where durability, thermal stability, and chemical inertness are prized features.

Conventional thermal energy storage technologies are hampered by very poor volumetric power density and sluggish heat transfer due to low-density, poorly conductive salts. But the historically poor choice of material and limited operating temperatures of traditional thermal energy storage schemes do not mean a much enhanced and improved system is impossible. The feasibility of the concept here is easily verified with basic heat-capacity calculations. The proposal makes no use of exotic materials, methods, or technologies; it is easily manufactured with existing technology at very low cost. Aluminum oxide has been strangely ignored as a sensible thermal energy storage candidate, yet it possesses an essential property for viable thermal storage: high thermal conductivity and diffusivity. This attribute is essential for rapid heating and cooling. The proposed architecture consists of an insulated box filled with individual blocks of solid oxide material. Each “brick” of alumina has forty 6 mm diameter channels; an average heat flux of over 40 kW/m² occurs on the channel surfaces. The specific surface area is 40 m²/m³ of brick, enough for 1600 kWh/m³ of heat transfer, allowing very rapid power extraction. A cubic meter of aluminum oxide can be “drained” to 350°C in only one hour. A heat-transfer simulation performed in SimSolid shows the hot aluminum oxide brick fluxing heat into the helium channels; heat flux in many regions approaches 100 kW/m².


The core facet of this technology is the embedded resistive heater and gas channels. Without this design, only very sluggish heating and cooling would occur, no matter how high the heat capacity of the material. This is what allows rapid and near-complete transfer of heat from the hot solid into the gas, and from the resistive heater back into the brick. The choice of resistive heating element material is narrowed to titanium and tungsten, as nickel-chrome would melt at the desired temperature. Titanium is cheaper and effectively unlimited in supply, and with a high melting point of 1668°C it is sufficient for this particular application. Titanium possesses a resistivity 7.5 times higher than tungsten, so less current is needed, or conversely, a larger filament can be used to increase its structural stability. At 900°C, corresponding to the mean temperature of the unit, aluminum oxide has a thermal conductivity of 7.95 W/m·K and a high heat capacity of 1235 J/kg·K; the thermal diffusivity is about 2 mm²/s. The density of aluminum oxide is 3950 kg/m³, so a cubic meter of the material raised to 1500°C and drawn down to 250°C would hold a sensible thermal energy of roughly 1653 kWh, unparalleled by any other low-cost material. It is important to stress that the system undergoes no phase change, so it is very stable; only a slight thermal expansion occurs. The coefficient of thermal expansion of aluminum oxide is 8.6 × 10⁻⁶ per Kelvin, translating into a linear expansion of about 1 percent over the full temperature swing. Such a change is easily accommodated by a slight lateral spacing of the bricks, and the bricks are free to expand longitudinally as there is a gap at the entrance of the gas channels at the top. The maximum stress developed in the channels from the pressurized helium passing through is less than 0.10 MPa, resulting in minimal crack propagation.
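The quoted volumetric figure can be checked directly from ρ·cp·ΔT using the properties given above; since the true figure depends on the temperature-averaged heat capacity, the quoted 1235 J/kg·K value at the 900°C mean gives a slightly higher number than the ~1653 kWh stated:

```python
rho = 3950.0     # density of alumina, kg/m^3 (quoted)
cp = 1235.0      # heat capacity at the 900°C mean, J/(kg*K) (quoted)
dT = 1500 - 250  # temperature swing, K

energy_J = rho * cp * dT  # sensible heat per cubic meter, J
energy_kWh = energy_J / 3.6e6
print(round(energy_kWh))  # ≈ 1694 kWh/m^3, in line with the ~1653 kWh quoted
```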
To mitigate leakage of the helium between the sections of aluminum oxide blocks, the blocks are lined, or clad, with 1 mm thick zirconium metal. The blocks are undersized relative to the zirconium cladding to allow for thermal expansion. The slow cracking of the brittle aluminum oxide block is not a concern, since the gas is sealed off from the block by the zirconium liner. Total zirconium usage is 0.22 kg/kW, or 618,000 tons for a 2,800,000 MW grid; world reserves of zirconium exceed 32 million tons. Zirconium silicate sells for $3500/ton and contains 65% zirconium dioxide, which is in turn 74% zirconium, giving $7.2/kg. Including the cost of the calcium reduction agent, we can safely place the direct cost of zirconium at $8/kg, or $2/kW. The system can tolerate almost unlimited thermal cycling, since the rate of heating and cooling is quite gradual: the system cools at a rate of 20°C per minute, which can hardly qualify as “thermal shock”. Thermal shock intensity is usually measured by plunging very hot objects into a bath of cool liquid, where cooling rates are in the hundreds of degrees per minute. Even if cracks appear in the zirconium sealing tubes after tens of thousands of thermal cycles, helium leakage is still prevented by a steel housing that seals the entire unit off from the atmosphere and doubles as the vacuum chamber needed for the multi-layer insulation to function. Closed-cycle helium gas turbines form the essential technological component of this energy storage architecture. They are the ideal solution, and are required due to the corrosiveness of carbon dioxide or nitrogen against the aluminum oxide bricks at high temperatures. Neon, argon, and krypton could also be used. Helium has a density of 1.4 kg/m³ at a pressure of 30 bar and a temperature of 800°C.
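The zirconium cost chain above reduces to simple mass fractions:

```python
zircon_price = 3500.0  # $/ton of zirconium silicate (quoted)
zro2_frac = 0.65       # ZrO2 content of zircon (quoted)
zr_frac = 0.74         # Zr content of ZrO2 (quoted)

zr_per_ton = zro2_frac * zr_frac * 1000   # kg of contained Zr per ton of zircon
price_per_kg = zircon_price / zr_per_ton  # $/kg of contained zirconium
print(round(price_per_kg, 1))  # ≈ 7.3 $/kg, matching the ~$7.2/kg quoted
```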
Helium’s non-corrosiveness would massively extend turbine blade life, to the point where blade life is entirely determined by creep, compared to existing oxy-fuel turbines, which experience intensive erosion and oxidation causing premature blade failure. Advances in single-crystal nickel alloys allow for turbine inlet temperatures over 1100°C.

Helium closed-cycle gas turbines have incredible power density, with 170 MW units being only 6 meters in length! Closed-cycle helium gas turbines also have very low mass flow rates, around 1 kg/s per MWe. To minimize the pressure drop across the aluminum oxide block heat exchanger, the flow circuit is kept relatively short: the total pressure drop is less than 0.2 bar across a 200 mm long channel section, with the viscosity of helium at 750°C and 30 bar being 0.052 cP. There are a total of 20,000 flow channels in the 11.5 MWh unit.
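A laminar (Hagen-Poiseuille) estimate suggests the 0.2 bar figure is a very comfortable upper bound. The per-channel flow here is derived from the quoted ~1 kg/s-per-MWe flow rate, a 1.35 MW turbine rating, the quoted ~1.4 kg/m³ helium density, and 20,000 channels; this is a sketch under those assumptions, not the author's calculation:

```python
import math

mu = 0.052e-3  # helium viscosity at 750°C, Pa*s (quoted 0.052 cP)
d = 0.006      # channel diameter, m (quoted 6 mm)
L = 0.200      # channel section length, m (quoted 200 mm)
rho = 1.35     # helium density at 30 bar / ~800°C, kg/m^3 (quoted ~1.4)

mass_flow = 1.35                      # kg/s for a 1.35 MW turbine at ~1 kg/s per MWe
q_channel = mass_flow / rho / 20_000  # volumetric flow per channel, m^3/s

# Laminar pressure drop per channel section (Re works out to ~300, so laminar applies)
dp = 128 * mu * L * q_channel / (math.pi * d**4)
print(round(dp, 1), "Pa")  # tens of pascals -- far below the 0.2 bar bound
```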

The total helium inventory needed is only 1.5 kg for an 11.5 MW unit. Total helium reserves are estimated at 8 million tons. The size of the global power grid is 2,800,000 MW, so we need only 5000 tons of helium to power the entire world with helium closed-cycle gas turbine aluminum oxide energy storage banks, less than 0.06% of global reserves: a trivial amount. In contrast, numerous analysts have calculated that powering the entire world grid with lithium-ion nickel-cobalt-manganese batteries would result in a total mineral demand that exceeds current reserves. But even if this weren’t the case, the cost savings alone would push any power plant owner toward this technology, or something very similar, over current $150-200/kWh battery banks. The Tesla “Megapack” has a capacity of 3.8 MWh in a volume of 42 cubic meters, a paltry 90 kWh/m³; our technology thus has 6.66 times the volumetric density of the best battery storage systems presently available. Since lithium-ion battery chemistry has come close to its physical limit, there is unlikely to be any significant improvement in the foreseeable future, since any further enhancement comes at a severe safety penalty. A large-scale lithium-ion battery pack would be a significant fire and explosion hazard, especially considering the ease with which saboteurs could fire small and medium caliber rounds (7.62 mm, 5.56 mm, .308, .30-06, etc.) into it. In the U.S., such calibers are readily available, which would make these battery packs a prime target. In contrast, a sensible thermal energy bank is non-pressurized and entirely inert; there are no flammable, toxic, corrosive, or otherwise dangerous substances that can be released into the air. An essential point to highlight is that this energy storage architecture allows solar farms to eliminate the need for inverters, since the resistive heaters make optimal use of low-voltage, high-current power.
So-called “round-trip efficiency”, which battery evangelists constantly propound, plays an insignificant role in the appraisal of an energy storage technology; power density and cost per kWh are the primary attributes that warrant attention. It should not come as a surprise that a high heat capacity material paired with a very high-efficiency turbomachine can outperform ionic electrical energy storage by a large margin. Only 6% of the solid-brick stack is occupied by the heat exchanger channels, with an additional 1% for the embedded resistive heating element. Now that we have evaluated the critical energetic parameters of the technology, we can turn to the techno-economics of the entire energy storage bank. The basic component of this system is the aluminum oxide brick. Aluminum comprises 6% of the earth’s crust, so its theoretical cost in oxide form is close to zero, since no electrolysis or reduction reactions are needed; bauxite purified via the Bayer process can be directly crushed into alumina powder and melted down to form low-porosity alumina bricks. Aluminum oxide powder has a direct cost of only $500/ton; for an 11.3 MWh storage bank, the cost is $36,000, or only $3.23/kW. The insulation and resistive heating element add another negligible $0.5/kW. After the oxide brick, insulation, and resistive heaters, the only significant cost component is the turbine and compressor. Besides the compressor, there is the cost of a metallic structure to seal any helium from leaking into the atmosphere. It should be noted that the “heat exchanger” is encompassed within the energy storage bank itself, so no external metallic heat exchanger is needed. A 7 mm thick steel containment structure houses the oxide bricks; it weighs only 3 tons and costs only $2000 at a steel price of $700/ton. We are then left with the helium closed-cycle gas turbine. With a power density of over 8 kW/kg, the total nickel-alloy usage for the gas turbine is only 170 kg.
Assuming a total fabrication cost of 8 times the raw material cost, which is placed at $20/kg, the gas turbine costs only $27,000, or $20/kW. The turbine is sized for 1350 kW, giving 8.37 hours of power at 1.35 MW. These numbers are somewhat arbitrary, as they are sized for Christophe Pochari Engineering’s high-altitude wind turbine; the system can be scaled to any solar farm regardless of size, and the high output RPM of the helium turbine permits a massive decrease in the size of the synchronous generator. The output electrical frequency can be precisely modulated to grid standards of ±200 mHz by slightly varying the RPM of the turbine through control of the flow of helium into the heat exchanger. Since heat transfer plays a big role in the storage bank’s long-term storage efficiency, large units will deplete much more slowly thanks to the square-cube law. Additionally, larger turbomachinery in the multi-megawatt scale benefits from lower tip losses and higher overall mechanical efficiency. The CAPEX number for the closed-cycle turbine appears low compared to open-cycle industrial gas turbines, which are manufactured for about $130/kW, but it is consistent with the reduced material usage afforded by the higher power density of the helium cycle. Finally, a thermal “battery” can be charged and discharged almost indefinitely, while a conventional lithium-ion cell can barely endure 3000 cycles without losing a substantial portion of its initial capacity. A thermal energy storage system of this kind would last in excess of 40 years with proper maintenance. This technology (and other variations of the principle of solid thermal energy storage) is the only method currently known that can deliver the 3000 GW needed to meet the demands of the global electrical grid.
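The turbine cost estimate above is a two-step multiplication:

```python
mass_kg = 170      # nickel-alloy mass of the gas turbine (quoted)
raw_cost = 20.0    # $/kg raw material (quoted)
fab_multiplier = 8 # fabricated cost as a multiple of raw material (quoted)
rating_kw = 1350   # turbine rating, kW (quoted)

total = mass_kg * raw_cost * fab_multiplier
print(total, round(total / rating_kw, 1))  # $27,200 total, ≈ $20.1/kW
```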

While GEN IV reactors are unlikely to see the light of day due to irrational fear and excessive regulation-driven cost, much of the engineering literature pertaining to the construction of these devices can be transposed to high-temperature energy storage. The design of high-temperature heat exchangers and non-Rankine power cycles is highly applicable to this thermal energy storage system. Many GEN IV reactor developers plan on using Brayton cycles over Rankine cycles at up to 900°C; some designs propose helium cycles, others supercritical CO2, but either way, their high-temperature heat exchangers, materials, and design methodologies carry over. This brings us to the split-pressure heat exchanger. The split-pressure exchanger represents an elegant engineering effort to decouple the pressure of the main working fluid, the helium driving the turbine, from the same inert gas that flows through the channels of the hot alumina bricks to deliver thermal energy to the turbine. A high specific-surface-area heat exchanger can effectively transfer, with very minimal losses, the heat from a separate mass of helium within the block channels to the turbine and compressor. The motivation behind such a scheme is simple: a Brayton cycle desires as high a pressure ratio as possible. This is not much of an issue at “only” 950°C, because nickel alloys retain enough tensile strength to operate at a very high pressure of 50 bar, but such a pressure is impossible to maintain in the 1500°C zirconium heat-exchanger pipes that keep the helium from leaking into and cracking the brittle alumina blocks. The separation of the two gas sections is simple and poses negligible losses or disadvantages. Like all technologies, this system relies on two chemical elements to work: zirconium and helium. Zirconium is essential since it is abundant yet has a high melting point; it can be used for non-heavily-stressed parts in the high-temperature zone.
Vanadium is another candidate: with a high melting point of 1910°C and high abundance, it could be used to construct the heat-exchanging tubes if zirconium proves unsatisfactory. Occasionally, tungsten and silicon carbide fiber (Nicalon and Tyranno) are used for more heavily stressed parts, and the rest of the system, mainly the turbomachinery, is comprised principally of nickel, chromium, and cobalt alloys. The second essential element is helium; without it, such a scheme fails miserably. No metal can survive exposure to a reactive compound such as nitrogen, water vapor (steam), or CO2 for thousands of hours at a time at elevated temperatures. Only an inert monatomic gas such as helium can satisfy this essential requirement. Helium scarcity should not serve to dissuade the development of this technology for a simple reason: the quantities required by the heat exchanger and turbine are so negligible per kW that even scaling to the entire global electrical demand does not put a dent in the global helium supply. As long as natural gas is produced, helium will be available.

This technology requires the closed-helium Brayton cycle to work; supercritical carbon dioxide is unlikely to be commercialized in the near future due to corrosion of the nickel alloys, despite immense hype about this new form of power-cycle technology. Carbon dioxide exposed to hot nickel alloys forms nickel and chromium carbides on the surface of the blades, causing premature failure and poor turbine endurance. A grid-scale energy storage scheme must be able to last well in excess of 100,000 hours of use, or an equivalent number of cycles. While supercritical CO2 cycles have higher efficiencies at lower temperatures than helium, this is the price to pay for a long-lasting powerplant. Since the maximum temperature of the aluminum oxide is 1500°C, well below the 1880°C melting point of zirconium, and the minimum temperature is 300°C, the mean temperature of the alumina is 900°C; but the mean temperature for the Brayton turbine is only 650°C, where its efficiency will be determined. We can manually calculate what fraction of each discharge the turbine spends at the lower temperature settings. Since the maximum temporal variation in temperature is 1200°C, but the gas is not allowed to rise above 1000°C for blade-creep reasons, the temperature rises and falls by 11.66°C per minute, a very gradual thermal fluctuation that minimizes stresses to the metal grain structure. We can create seven isolated temperature profiles corresponding to 8.6-minute time frames: 300 to 400°C, 400 to 500°C, 500 to 600°C, 600 to 700°C, 700 to 800°C, 800 to 900°C, and 900 to 1000°C. We can then sum the efficiencies over these seven temperature increments and divide by their number to arrive at a mean turbine efficiency. The cycle efficiency of a helium Brayton cycle has been shown to be 20% at 350°C, 29% at 450°C, 36% at 550°C, 41% at 650°C, 45.8% at 750°C, 49.4% at 850°C, and 52% at 950°C.
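Since each 100°C increment releases the same quantity of heat, the averaging step described above is a straight arithmetic mean of the seven quoted efficiencies:

```python
# Quoted helium Brayton cycle efficiencies at the midpoint of each 100°C increment,
# 350°C through 950°C, in percent
efficiencies = [20, 29, 36, 41, 45.8, 49.4, 52]

mean_eff = sum(efficiencies) / len(efficiencies)
print(round(mean_eff, 1))  # ≈ 39.0%
```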
The mean is then about 39%. This number is important because it determines the net electrical storage capacity of the system, since the thermal number alone does not represent available mechanical power. A steel and silicon carbide vacuum chamber with nickel and tungsten-coated radiant barriers is the more sophisticated and potent form of thermal barrier designed with the intention of increasing heat retention time to one year. The Stefan-Boltzmann law states that the intensity of thermal radiation is proportional to the fourth power of the body’s temperature, which implies that above a certain temperature threshold, radiative heat transfer begins to overwhelm convective heat transfer in porous bodies. Most conventional insulation materials are effectively air-trapping devices, maintaining high porosity to exploit the low conductivity of air. Unfortunately, above 600°C they become quite ineffectual due to the overwhelming dominance of radiation, which they are ill-equipped to arrest. Electromagnetic oscillation becomes the dominant mode of heat transfer at these refractory temperatures, so any convection-slowing insulation will be very ineffective. High-temperature thermal energy storage has been historically constrained by this fact, but numerous solutions exist. Conventional refractory bricks, even with high porosity, have difficulty achieving thermal conductivities of less than 0.4 W/m·K at the operating temperature of the system, which reaches a maximum of 1500°C on the interior. With 350 millimeters of conventional refractory brick, which occupies much additional volume and reduces the power density, complete thermal drawdown occurs in as little as 40 days for a small unit. For large-scale grid storage, we may need to provide backup for months at a time during periods when wind speeds are low or solar irradiance is zero due to persistent cloud cover.
It is not acceptable to lose the complete energy contents of the system in only a month; otherwise, we are no better than batteries in this respect. A very simple solution can be adopted to solve this issue. According to basic radiative heat transfer physics, we can use a material with very low emissivity, such as highly polished metal, to very efficiently arrest the propagation of these “heat rays”. Unfortunately, aluminum, which has the lowest emissivity excluding gold and silver, cannot be used due to its low melting point, so instead we can use nickel, zirconium, tungsten, or cobalt-coated ceramic sheets as our radiant barrier. These materials all have emissivities below 0.25 at temperatures of up to 1500°C: tungsten 0.15 at 1500°C, nickel 0.16 at 1093°C, Zircaloy 0.24 at 1605°C, and cobalt 0.23 at 1000°C. All emissivity figures can be independently verified in the book ASM Ready Reference: Thermal Properties of Metals, by Fran Cverna, 2002. Tungsten therefore emerges as the most reflective, but with enough layers, these relatively small differences in emissivity contribute to a negligible difference in overall thermal flux. The sheets can also be constructed of solid metal; since they do not have to be very thick, the cost of zirconium or nickel is minimal. If we have a total of 15 panels stacked in front of each other with a gap of only a few millimeters, there is a natural gradient, or temperature drop, from the hot to the cold side. The cold side of the radiant barrier touches the steel/ceramic vessel, which is constantly convectively cooled by the outside air. The radiative flux of the innermost radiant barrier corresponds to the maximum temperature of the device, with each subsequent barrier experiencing a temperature drop depending on the number of panels. The total initial flux at 1550°C is 125,000 W/m² with an emissivity of 0.2.
The 2nd panel, 100°C cooler, radiates 99,900 W/m²; the 3rd panel 78,700 W/m²; the 4th 61,000 W/m²; the 5th 46,300 W/m²; the 6th 34,700 W/m²; the 7th 23,400 W/m²; the 8th 18,000 W/m²; the 9th 12,400 W/m²; the 10th 8,300 W/m²; the 11th 5,200 W/m²; the 12th 3,100 W/m²; the 13th 1,700 W/m²; the 14th 850 W/m²; and the 15th 363 W/m², for an average temperature-adjusted radiative flux of 35,000 W/m². If we then assume exponential radiative decay at an emissivity of 0.2, the net radiative flux on the outermost panel is only 0.0000011 W/m², or effectively zero. In reality, there will be some leakage of heat around the perimeters of these radiant barrier stacks, and emissivity values will differ slightly with wavelength, but the number is low enough to be treated as zero. This does not mean multilayer insulation has zero thermal conductivity; substantial losses occur due to conduction through the spacers.
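The per-panel figures above follow from the Stefan-Boltzmann law. A short script reproduces them, a sketch assuming a strictly linear temperature profile from 1550°C on the innermost panel down to 150°C on the outermost, with a constant emissivity of 0.2 (the wavelength dependence noted above is ignored):

```python
# Radiative flux from each of 15 radiant-barrier panels, assuming the
# temperature falls by ~100 °C per panel from 1550 °C (inner) to
# 150 °C (outer), with a constant emissivity of 0.2.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
EPSILON = 0.2            # emissivity of the polished metal barriers

temps_c = [1550 - 100 * i for i in range(15)]           # 1550 ... 150 °C
fluxes = [EPSILON * SIGMA * (t + 273.15) ** 4 for t in temps_c]

for i, (t, q) in enumerate(zip(temps_c, fluxes), start=1):
    print(f"panel {i:2d}: {t:5.0f} °C  ->  {q:9.0f} W/m^2")
print(f"mean flux: {sum(fluxes) / len(fluxes):.0f} W/m^2")
```

The first panel comes out near 125,000 W/m² and the mean near 35,000 W/m², matching the figures quoted in the text to within rounding.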

The radiative flux across a series of stacked radiant barriers is not logarithmic, nor merely the sum of the emissivity values; it must decay exponentially, since each barrier can only be heated to the extent it is radiated upon. Emissivity is a measure of the fraction of a body’s internal thermal energy shed as radiation; a low-emissivity body radiates less than its internal thermal energy because it cannot convert the entirety of its internal kinetic vibratory energy into electromagnetic waves. It is always measured as a ratio against a perfect black body. The emissivity of a material at a given wavelength always equals its absorptivity; this forms the basis of Kirchhoff’s law of thermal radiation. The ability of a material to possess high or low emissivity appears to be strongly determined by its dielectric constant. Now we can calculate the convective heat transfer within the vacuum chamber. If we assume a moderate vacuum can be attained, provided leakage rates are kept low, which can easily be achieved with proper seal design and a permanently connected vacuum pump, then the convective heat transfer is close to zero, but it should be calculated anyway. A “medium vacuum” is defined as anything from 10⁻³ mbar to 1 mbar; we take 0.5 mbar, or about 0.00049 atm, as representative. Such a vacuum is readily achieved with ordinary vacuum pumps such as rotary plunger pumps, piston pumps, scroll pumps, screw pumps, rotary vane pumps, rotary piston pumps, Roots pumps, and absorption pumps. Since air has a density of 1.25 kg/m³ at atmospheric pressure, we can assign a density of 0.00061 kg/m³.
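The assigned residual density follows from scaling sea-level air density linearly with pressure. A minimal check, assuming ideal-gas behavior at fixed temperature:

```python
# Residual air density in a 0.5 mbar "medium" vacuum, scaling the
# sea-level density linearly with pressure (ideal-gas behaviour).
RHO_AIR_1ATM = 1.25      # kg/m^3 at atmospheric pressure
P_VACUUM_MBAR = 0.5
P_ATM_MBAR = 1013.25

fraction = P_VACUUM_MBAR / P_ATM_MBAR        # pressure in atmospheres
rho = RHO_AIR_1ATM * fraction
print(f"pressure: {fraction:.2e} atm, density: {rho:.2e} kg/m^3")
# → pressure: 4.93e-04 atm, density: 6.17e-04 kg/m^3
```

The result reproduces the 0.00061 kg/m³ figure used in the convection estimate below.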
The convective heat transfer coefficient of a body of air at 1 atm and 1500°C, with a density, viscosity, and thermal conductivity corresponding to these conditions (density of 0.20 kg/m³, thermal conductivity of 0.097 W/m·K, and viscosity of 0.000057 N·s/m²), with a temperature difference of 100°C, is 2 W/m²·K. If we then assume a linear decrease with molar concentration, the coefficient drops to 0.00098 W/m²·K, or a thermal flux of only 0.12 W/m². Concept Group LLC has developed a high-temperature vacuum multilayer insulation system designed to operate at up to 1000°C, although little data is provided. If the thickness of the radiant barrier stack is 50 mm, the effective thermal conductivity is 0.0001 W/m·K. Multilayer insulation in a strong vacuum can achieve values of 10⁻⁵ W/m·K, or 0.00001 W/m·K, ten times lower still. Our numbers are therefore reasonably conservative, since a higher-temperature MLI system will experience more intensive radiative transfer and more convective heat transfer, as the velocity of the air molecules is proportionally higher. Note that published data on high-temperature multilayer insulation by NASA shows much higher thermal conductivity, no less than 0.04 W/m·K; this is due to the use of a relatively dense ceramic fibrous material between the reflective layers, with a density of 50-60 kg/m³ (Heat Transfer in High-Temperature Multilayer Insulation, by Kamran Daryabeigi). This ceramic material has very high emissivity, and thus allows radiation to heat it while permitting substantial conduction through the fibers. The difference between the numbers reported by NASA and our numbers is not due to some mistake in the mathematics; after all, we know exactly what the temperature of each metal barrier is and what the radiative flux is. The convective heat transfer can simply be calculated by taking it as a fraction of atmospheric-pressure data at the same temperature.
Using the kinetic theory of gases, we can easily calculate the change in the mean free path with molar concentration and temperature. The mean free path of air molecules at 0.01 mbar and 1000°C is 0.0439 meters, or 43 millimeters, so radiant barrier gap size makes little difference to the convective heat transfer coefficient as long as the mean free path is much greater than the gap: molecule-to-molecule collisions are avoided, and virtually all collisions occur between the two surfaces. The mean free path of gas molecules grows to very large dimensions at low molar concentrations, while temperature has a comparatively weak effect. At atmospheric conditions, the mean free path is only 0.0001 mm, some 430,000 times less, so the convective heat transfer coefficient should fall roughly in proportion to the molar concentration and the mean-free-path-to-gap ratio. In the regime of free molecular flow, defined as a Knudsen number greater than ten, convection does not occur if the space between the obstructions is smaller than the mean free path; in such a scenario, the gas is treated as a conductor only. In fact, the thermal conductivity of a true multilayer insulation system is so low that effectively all the heat flux occurs through the solid spacers, the edges where the panels or sheets are mounted, and through manufacturing defects. Heat loss through the helium inlet and exit hoses also accounts for a non-negligible thermal flux. If we calculate a breakdown of the major contributors to thermal leakage, it is almost exclusively the spacers, so by using thick radiant barriers that remain structurally stable without risk of creasing, we can decrease heat flux dramatically, down to the bare physical limits. When we add spacer conduction, assuming a spacer material consisting of highly porous yet structural ceramic, we add around 1 watt to the heat flux.
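The mean free path figure can be reproduced from kinetic theory. The sketch below assumes an effective molecular diameter of 3.0 ångströms for air (an assumption consistent with the 43 mm figure above) and a hypothetical 5 mm barrier gap for the Knudsen number:

```python
import math

# Mean free path of a gas from kinetic theory:
#   lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
K_B = 1.380649e-23       # Boltzmann constant, J/K
D_AIR = 3.0e-10          # effective molecular diameter of air, m (assumed)

def mean_free_path(p_pa, t_k):
    return K_B * t_k / (math.sqrt(2) * math.pi * D_AIR ** 2 * p_pa)

lam = mean_free_path(1.0, 1273.15)   # 0.01 mbar = 1 Pa, at 1000 °C
gap = 0.005                          # 5 mm barrier gap (assumed)
print(f"mean free path: {lam * 1000:.1f} mm, Knudsen number: {lam / gap:.1f}")
```

With these assumptions the mean free path comes out near 44 mm, roughly an order of magnitude larger than the gap, placing the residual gas near the free-molecular regime.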
One of the central advantages of the low-thermal-conductivity radiant barrier vacuum insulation is that it allows us to design quite small systems, since we are less sensitive to an increase in the surface-to-volume ratio. With ultra-low-conductivity insulation, we can design units as small as 1 meter in diameter, which lowers manufacturing costs considerably through higher production volumes and simpler fabrication. The system is still limited by tip losses, boundary layer effects, and the overall mechanical efficiency of the turbomachinery, but with this insulation system, downscaling to a small 4-5 MWe unit is more than possible.

Sealing the vacuum chamber is essential for continued insulation performance, and a number of options arise to effectively seal the system. One such option involves solid mechanical connections such as welds, tightly bolted flanges, low-asperity gaskets, or high-pressure-drop surfaces that slow leakage rates down to a level that can be routinely evacuated by an online vacuum pump. A 5.6 MWe storage unit in a cylindrical geometry has a surface area of 24 m², with dimensions of 1.6 meters wide and 4.5 meters tall. As the above calculations illustrate, the total heat flux is almost entirely due to conduction through the spacers. If a porous ceramic is used for the spacers, the same material as the refractory brick, with a thermal conductivity of around 0.35 W/m·K, spacer conduction losses can be kept to a minimum with sparse spacing intervals. At an interval of 100x100 mm, with each individual spacer 5 mm in diameter, the spacer contact area is 2,500 mm² per m², or about 6.5 W/m². This translates to a complete thermal drawdown over 8.9 years. This figure is close to that of a standard consumer lithium-ion cell, which experiences a 2-3% monthly self-discharge rate, but the number is largely irrelevant for grid storage, since maximum storage times will not exceed a few weeks. In fact, we could tolerate an insulation system that loses 10% over a period of one month, or about 80 W/m².
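The 8.9-year figure can be cross-checked against the stated flux and surface area. Note that the implied thermal capacity (about 12 MWh thermal) is an inference from those two numbers, not a figure stated above:

```python
# Consistency check on the 8.9-year drawdown: a spacer-conduction flux
# of 6.5 W/m^2 over the 24 m^2 vessel gives the standing loss in watts;
# the stored thermal energy that would take 8.9 years to drain at that
# rate then follows directly.
FLUX = 6.5               # W/m^2, spacer conduction loss
AREA = 24.0              # m^2, vessel surface area

loss_w = FLUX * AREA                          # continuous standing loss, W
drawdown_s = 8.9 * 365.25 * 24 * 3600         # 8.9 years in seconds
stored_mwh = loss_w * drawdown_s / 3.6e9      # implied thermal capacity, MWh
print(f"standing loss: {loss_w:.0f} W, implied storage: {stored_mwh:.0f} MWh_th")
```

A 10% monthly loss on that implied capacity works out to roughly 70-80 W/m², consistent with the tolerance quoted at the end of the paragraph.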

The extreme simplicity, proven physics of sensible heat, mature turbomachinery technology, extremely low CAPEX, and simple engineering make it almost certain that high-temperature aluminum oxide energy storage can scale to facilitate a true 100% wind and solar energy grid. The technology is inherently proven, since Cowper stoves, which are used to heat air in iron smelting plants, make use of the same brick-integral heat exchange concept and boast very long lives. Competing technologies such as lithium-ion batteries and hydrogen are rendered extremely uncompetitive in light of this technology; one could go as far as to argue they are obsolete, strictly in the grid-scale storage context. But this technology has even deeper disruptive implications. It is not merely that so-called “non-baseload” energy technologies such as wind and solar are now cemented in the energy future, but that this technology makes redundant complex and expensive “baseload” clean sources such as nuclear and geothermal, whose sole raison d’être is precisely this feature. Mono-crystalline solar panels are manufactured for only $240/kW, and high-altitude wind turbines (pneumatic towers) could be built for the same price; with capacity factors of 21% and 65% respectively, nuclear and geothermal must be brought down to substantially below $1000/kW in order to compete. Such a prospect is very unlikely due to fundamental technological, material, and physics limitations, strongly suggesting that investment in any technology outside of wind or solar is inadvisable. The deserts of the world contain orders of magnitude more land than is needed to power human civilization many times over; if only a way to store gigawatts of energy existed, solar farms in Egypt could power all of Europe.

A brief note on the urge to “compare to batteries”

Alternative energy advocates often like to state that hydrogen, thermal energy storage, pumped hydro, or other energy storage schemes are a form of “battery”. Unfortunately, this is a category error. Batteries cannot and will not fulfill the role of high-intensity, ultra-high-cycle, gigawatt-scale energy storage, but thermal energy storage will not fulfill the roles presently occupied by batteries either, namely applications where less than 500 kW is needed. Apart from our comparison of volumetric power density, we have been careful not to excessively pitch the technology as an alternative to “batteries”. This technology by no means makes batteries “obsolete”; batteries will be used to power small devices for centuries to come. This is a highly specialized form of energy storage, ideally suited to wind turbines and photovoltaic arrays, but it will find little use in small-scale applications. While the terms “charge” and “recharge” are used to refer to the heating and cooling of the device, we found this necessary because the term “heating” does not evoke the energetic build-up and depletion characteristic of this technology. This novel thermal energy storage device is suitable for high-power-density applications that require extremely high endurance (tens of thousands or even hundreds of thousands of cycles), low downtime, and extremely low capital costs. Batteries are small, light, portable low-voltage power sources principally used for personal electronic devices. This technology requires turbomachinery to operate and thus cannot scale down to the kilowatt level without severely sacrificing mechanical efficiency. Virtually 100% of all battery packs ever built are below 1 kW, and while individual battery cells can in theory be stacked indefinitely, they have limited applicability to high-intensity, high-endurance, large-scale, high-power-density energy storage.
The commonly assumed reason that consumer batteries cannot scale to global grid storage is the constraint imposed by their raw material inputs. But this is incorrect, and is yet another example of man’s supercilious attitude towards nature, the belief that he can “destroy” nature by depleting her gifts. There are orders of magnitude more nickel, cobalt, and lithium than would be needed to produce enough consumer-type batteries to power the entire world’s electricity grid. A standard lithium nickel manganese cobalt cell, the Panasonic 18650, widely used in electric sedans, uses 0.083 kg/kW of lithium, 0.65 kg/kW of nickel, and 0.083 kg/kW of cobalt. The global electrical grid is 3000 GW, and we assume the storage capacity is equal to nameplate production capacity; many estimates are unrealistically conservative and multiply this number for additional “reserve”, which is simply not necessary because photovoltaic and wind farms can merely be oversized. If the entire world’s electrical grid used 18650 cells, barely one year’s worth of nickel production would be consumed, a drop in the bucket. A similar situation holds for cobalt and lithium; the so-called “mineral” constraint is a myth resting on incorrect calculations. Just as helium is not a constraint for the proposed architecture, metals are by no means a constraint on building grid-scale batteries. Nickel is plentiful; the current boondoggle of electromobility will not even put a dent in the global nickel supply. Manganese is a very abundant metal, so it is not considered in this quick calculation. The principal, and perhaps sole, reason batteries will not be used for high-intensity grid storage is their extremely short cycle life. Any owner of a mobile phone, digital camera, or laptop can attest to this. “Industrial” batteries do not possess different “chemistries” that can markedly change this; it is a fundamental attribute of the technology itself, just as brittleness is an attribute of concrete or heat is an attribute of friction.
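The nickel arithmetic can be checked directly. Note the annual mine production figure of roughly 2.5 million tonnes is an outside assumption supplied for comparison, not a number from the text:

```python
# Nickel required to build 18650-type cells covering the full 3000 GW
# global grid, at the stated intensity of 0.65 kg of nickel per kW.
GRID_KW = 3.0e9              # 3000 GW expressed in kW
NICKEL_KG_PER_KW = 0.65      # intensity quoted for the Panasonic 18650
ANNUAL_NICKEL_T = 2.5e6      # approx. global mine output, t/yr (assumed)

nickel_t = GRID_KW * NICKEL_KG_PER_KW / 1000       # tonnes of nickel
years = nickel_t / ANNUAL_NICKEL_T
print(f"nickel needed: {nickel_t / 1e6:.2f} Mt, ~{years:.1f} years of mine output")
# → nickel needed: 1.95 Mt, ~0.8 years of mine output
```

The result, just under one year of global mine output, supports the "barely one year's worth of nickel" claim above.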
All “commercial” type batteries are merely versions of existing consumer-grade batteries adapted to commercial use and packaged in a more durable container with additional fireproofing. The typical consumer-grade lithium-ion cell will have trouble preserving half its original charge after barely 1,500 cycles, equivalent to complete drainage and recharge daily for four years. This is typically not a concern, because the phone or device in question will be replaced by then. A typical electrical grid component, such as a mains transformer, dynamo, or steam generator, is rated for several hundred thousand hours of continuous use. A typical steam turbine can last 400,000 hours before overhaul is required; such a lifespan is simply physically impossible for an electrochemical device, since side reactions will inevitably occur between the dissimilar metals, halting the electrochemical activity.

Figure 2: Heat flux of the aluminum oxide block.


Figure 4: Power density of helium gas turbines relative to S-CO2 and Rankine.


Figure 5: Heat capacity tables for aluminum oxide.



All numbers stated here can be confirmed using Omni calculator https://www.omnicalculator.com/physics/specific-heat



Cable-powered short-range heavy lift technology.


Pochari Technologies’ cabled lifter technology. In March 2019, Pochari Technologies invented a highly novel aerial lift system. The invention arose out of a need to combine the low cost of terrestrial cranes with the degrees of freedom afforded by a helicopter crane. It was found that since most aerial lift jobs occur relatively close to fixed sites, the actual distance traveled by the heavy-lift helicopter is quite minimal. For many jobs that use heavy-lift helicopters, it is often not so much range that is desired as height. Another strong impetus for aerial cranes is turnaround time: a heavy-lift helicopter such as a Bell 205 can be operated in the “restricted category”, which allows for-profit external load operations as long as it does not operate over populated areas. All over the world, Bell 205s, S53s, and more recently ex-military UH-60s perform heavy-lift work, installing air conditioning units on rooftops or erecting cell towers. Classic cranes have difficulty practically attaining heights in excess of 200 meters, nor can they suspend loads further than at best a hundred meters from their center of gravity. The load capacity of a conventional crawler crane or truck crane may be impressive on paper, but this capacity drops off precipitously as the reach is extended. Moreover, the vast majority of helicopter lift jobs do not fully exploit the aircraft’s unlimited degrees of freedom; it spends most of its time hovering over the lift site and flying slowly back and forth to pick up the load from a truck or storage site nearby, rarely flying many miles with a load underneath. Furthermore, in the U.S. it is not even legal for most restricted-category heavy-lift helicopters to fly over densely populated areas. The desired operational radius of the cabled lifter is therefore not significantly restricting to its range of potential uses.
The ground powerplant vehicle can be situated between five and ten kilometers away; for comparison, the width of the San Francisco Peninsula is around 11 kilometers. To prevent the cable from sagging excessively, a small drone carries the cable mid-span. The weight of a ten-kilometer cable would be approximately 1,100 kg, with half the weight borne by the lifter and half by the powerplant truck. In order to fly over obstacles such as forested areas, the powerplant truck carries a telescoping pole that reaches a height of around 50 meters, providing the cable with enough overhead clearance. What we realized is that if electric power could be transmitted in a high-power-density configuration to the lift craft, it could effectively hover all day long without the need to refuel, carry the weight of the fuel, or use highly expensive turboshaft powerplants. The lifter itself would only need moderately high-power-density non-superconducting electric motors, a transformer to step down the voltage, and a rectifier to convert the otherwise unusable high-frequency AC power to either DC or 60 Hz AC power. Upon further analysis, it was found that all three components were available and light enough to be carried by the lift craft. In order to design a lightweight conductor, the amount of current must be reduced to a minimum, and the only way to do this is by increasing voltage. The problem with increasing voltage beyond 5 kV is that electric motors are unable to use the power directly, so some form of step-down transformer is required. The weight of a transformer operating at 60 Hz is prohibitive; a transformer operating at standard grid frequency would weigh many times more than the aircraft. To overcome this, a very high-frequency AC power supply is required; to achieve this, Schottky diode-based rectifiers convert the AC power generated by the ground power supply into DC, which is then converted back to AC and “chopped” into the appropriate frequency.
A transformer could easily be designed with weight reduced to the utmost minimum by stepping the frequency up to over 100 kHz. A nanocrystalline core material such as Hitachi FINEMET could be employed to achieve a gravimetric power density of >33 kW/kg with minimal core losses. The cost of the nanocrystalline material is around $9/kg, or about $0.30/kW. Nanocrystalline high-frequency transformer cores are constructed mainly from iron with grain sizes below 10 nanometers; the microstructure of the alloy facilitates extremely high induction with low losses. Since the high-frequency AC power is unusable by an electric motor, the current has to be rectified back into DC, which can be used directly by a DC motor, or converted back into AC for an AC motor. Using Schottky diodes, a rectifier with power densities of up to 50 kW/kg could easily be designed. Electric motors are by far the biggest weight contributor, with current state-of-the-art axial flux electric motors having power densities of around 6 kW/kg at 10,000 rpm. To minimize the mass of the electric motors, a reduction gearbox has to be employed. Using high-frequency AC power at high voltage, a conductor cooled by the ambient air could be designed with a weight of 110 kg per kilometer. High-frequency conductors can take advantage of the skin effect: at 100 kHz, the skin depth in copper is only about 200 microns, which means a large-diameter hollow conductor can minimize mass while achieving the resistance required to minimize ohmic heating. The cabled lifter is a simple yet powerful concept. At the most basic level, the cabled lifter is, as its name suggests, a flying crane that generates thrust for its locomotion, but rather than carrying fuel onboard and burning it in a turbine, it draws high-voltage, high-frequency AC current from an ultralightweight electric cable that unwinds from a ground vehicle.
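The skin depth claim can be checked with the standard formula for a non-magnetic conductor; the resistivity values for copper and aluminum below are assumptions supplied for illustration:

```python
import math

# AC skin depth: delta = sqrt(rho / (pi * f * mu0)) for a non-magnetic
# conductor. At 100 kHz only a thin outer shell carries current, which
# is why a hollow tube wastes almost no conductor material.
MU0 = 4 * math.pi * 1e-7     # vacuum permeability, H/m

def skin_depth_m(resistivity, freq_hz):
    return math.sqrt(resistivity / (math.pi * freq_hz * MU0))

for name, rho in [("copper", 1.68e-8), ("aluminum", 2.82e-8)]:
    d = skin_depth_m(rho, 100e3)
    print(f"{name}: {d * 1e6:.0f} microns at 100 kHz")
# → copper: 206 microns at 100 kHz
# → aluminum: 267 microns at 100 kHz
```

The exact depth depends on the conductor material and the precise operating frequency, but either metal confirms that the conducting shell is a fraction of a millimeter thick.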
The concept draws on two fundamental technologies: ducted fan lifters and high-frequency rectification. In order to develop a low-cost aerial crane solution, a far more affordable powertrain system is called for. Existing aerial crane technology consists almost exclusively of one airframe: the Sikorsky Skycrane. The Sikorsky S-64 Skycrane is a classic jet-fuel-powered turboshaft helicopter.

Isothermal piston compressor using integral cooling channels

Pochari Technologies has devised a novel form of isothermal piston compressor.
A dense pattern of relatively thin-walled cooling tubes extends out from and fastens to the compressor cylinder head; each tube features an internal passageway for a high-heat-capacity coolant. The pressure of the liquid medium is set close to the compressor’s operating pressure to minimize the required thickness of the cooling tubes. The compressor’s piston features internal bores to accommodate these cooling tubes, with a small gap left to prevent any friction between the piston’s female bores and the tubes. As the piston reaches the top of the cylinder assembly, the gas is squeezed tightly between the female bores and the male cooling tubes, allowing extremely rapid heat transfer into the cooling medium.
Even with the high-density cooling tubes, there is sufficient space on the cylinder head for gas exit; since the density of the compressed gas is so much greater than during the intake stroke, the valves can be quite small. During the intake stroke, a wall valve similar to that of a two-stroke engine is used, or a long residence time can be allowed. A series of small valves is placed at the top of the cylinder assembly between the extending cooling tubes. In the piston assembly, it would also be possible to accommodate small cooling channels in the space between the female bores. To minimize the thickness of the metal, it is desirable to keep the pressure of the coolant as high as possible; higher pressure also raises the boiling point of the liquid cooling medium. Water has a boiling point of 375 degrees Celsius at 225 bar, which forms the working principle of the famous pressurized water reactor.
With this isothermal compression concept, it would be possible to go from atmospheric pressure to an ammonia-synthesis-ready 300 bar in a single compression stroke, massively improving the flow capacity and productivity of a single compressor. Because the surface area of the cooling tubes is quite high, owing to their number and relatively small size, the total potential thermal flux is immense. The limiting factor would not be the metal surfaces, but the cooling medium, which would have to be pumped at a high enough flow rate to purge the heat of compression from the gas.
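The benefit of holding the gas near intake temperature can be quantified with the ideal-gas compression work formulas. A sketch for air over a 1-to-300 bar single stroke, assuming an intake temperature of 20°C (both the working gas and intake conditions are illustrative assumptions):

```python
import math

# Specific compression work for air from 1 bar to 300 bar: ideal
# isothermal (what the cooled cylinder approaches) versus ideal
# single-stage adiabatic compression with no cooling at all.
R_AIR = 287.0        # J/(kg*K), specific gas constant of air
CP_AIR = 1005.0      # J/(kg*K), specific heat at constant pressure
GAMMA = 1.4          # heat capacity ratio of air
T1 = 293.15          # K, intake temperature (20 °C, assumed)
RATIO = 300.0        # overall pressure ratio

w_iso = R_AIR * T1 * math.log(RATIO)              # isothermal work, J/kg
t2 = T1 * RATIO ** ((GAMMA - 1) / GAMMA)          # adiabatic exit temp, K
w_adiabatic = CP_AIR * (t2 - T1)                  # adiabatic work, J/kg

print(f"isothermal: {w_iso / 1000:.0f} kJ/kg, adiabatic: {w_adiabatic / 1000:.0f} kJ/kg")
print(f"adiabatic exit temperature: {t2 - 273.15:.0f} °C")
```

Uncooled single-stage compression to 300 bar would demand roughly two and a half times the work and an exit temperature above 1200°C, which is why the integral cooling channels are what make the single-stroke scheme plausible.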

