Active-Cooled Electro-Drill (ACED)

Christophe Pochari, Christophe Pochari Engineering, Bodega Bay, CA.

707 774 3024,



Christophe Pochari Engineering has devised a novel drilling strategy using existing technology to solve the problem of excessive rock temperature encountered in deep drilling. The solution proposed is exceedingly simple and elegant: drill a much larger diameter well, around 450 mm instead of the typical 250 mm or smaller diameters presently drilled. A large-diameter well presents a fascinating opportunity: heat can be pulled away from the rock faster than it can be replenished, cooling the rock as drilling progresses and preventing the water coolant from exceeding 150°C even in very hot rock. A sufficiently large diameter well has enough cross-sectional area to minimize the pressure drop incurred in pumping a voluminous quantity of water through the borehole as it is drilled. The water that reaches the surface will not exceed 150°C, and this heat is rejected at the surface using a large air-cooled heat exchanger. If the site ambient temperature exceeds 20°C, as in hot climates, an ammonia chiller can cool the water to as low as 10°C.

Any alternative drilling system must fundamentally remove rock either by mechanical force or by heat. Mechanical force can take the form of abrasion, kinetic energy, extreme pressure, percussion, etc., delivered to the rock through a variety of means. The second category is thermal, which has never to date been utilized except in precision manufacturing, such as cutting tiles or specialized materials with lasers. Thermal drilling is evidently more energy intensive: rock possesses substantial heat capacity, and any drilling medium, whether gas or liquid, will invariably absorb a large portion of this heat. Because thermal methods involve melting or vaporizing, at least one phase change must occur, so the energy requirements can be very substantial.
This heat must then be introduced somehow, either in the form of combustion gases directly imparting it or via electromagnetic energy of some sort. Regardless of the technical feasibility of the various thermal drilling concepts, they all share one feature in common: they require drilling with air. The last method available is chemical, in which strong acids dissolve the rock into an emulsion that can be sucked out. This method is limited by the high temperature of the rock, which may decompose the acid, and by a prohibitively high consumption of chemicals, which would prove uneconomical. Any drilling concept that relies on thermal energy to melt, spall, or vaporize rock is ultimately limited by the fact that it cannot practically use water as a working fluid, since virtually all the energy would be absorbed in heating the water. This poses a nearly insurmountable barrier to implementation, since even the deep crust is assumed to contain at least 4-5% H2O by volume (Crust of the Earth: A Symposium, Arie Poldervaart, p. 132). Water will invariably seep into the well and collect at the bottom, existing as liquid or vapor depending on the local temperature and pressure. Additionally, even if the well is kept relatively dry, thermal methods such as lasers or microwaves will still incur high reflective and absorptive losses from lofted rock particles and even micron-thick layers of water on the rock bed. Regardless of the medium of thermal energy delivery, be it radio frequency, visible light as in a laser, or ionized gas (plasma), the beam will be greatly attenuated by the presence of the drilling fluid, requiring the nozzle to be placed just above the rock surface. This presents overheating and wear issues for the nozzle tip material.
Christophe Pochari Engineering concludes, based on extensive first-principles engineering analysis, that thermal systems will possess an assortment of ineluctable technical difficulties severely limiting their usefulness, operational depth, and practicality. In light of this, it is essential to evaluate proven and viable methodologies: to take existing diamond-bit rotary drilling and make the design modifications necessary for these systems to work in the very hot rock encountered at depths greater than 8 km. To access the deep crust, a method is needed to deliver power to a drill bit as deep as 10 kilometers. Due to the large friction generated when spinning a drill shaft over such a distance, it is absolutely essential to deliver power directly behind the drill bit, in a so-called "down-hole" motor. Rotating a drill pipe 10 or more kilometers deep absorbs much of the power delivered to the pipe from the rig and rapidly wears the drill pipe, necessitating frequent replacement and increasing downtime. Moreover, the high friction limits rotational speed, placing an upper limit on the rate of penetration. The rate of penetration for a diamond bit is directly proportional to the speed and torque applied; unlike roller-cone bits, diamond bits do not require a substantial downward force, since they work by shearing rather than crushing the rock. Down-hole motors can deliver manyfold more power to the bit, allowing substantially increased rates of penetration. Clearly, a far superior method is called for, and this method is none other than the down-hole motor. Down-hole motors are nothing new: they form the core of modern horizontal drilling technology in the form of positive-displacement "mud motors," which drive drill bits all over the U.S. shale plays. Another option is the old turbodrill, widely used in Russia and discussed further in this text.
What all these methods have in common is a strict temperature threshold that cannot be crossed without rapid degradation. A new paradigm is needed, one in which the surrounding rock temperature no longer limits the depth that can be drilled and the temperature inside the borehole is but a fraction of the surrounding rock temperature. This method is called Active-Borehole Cooling using High Volume Water. Such a scheme is possible because of the low thermal conductivity and slow thermal diffusivity of rock: there is insufficient thermal energy in the rock to raise the temperature of this high volume of water, provided the heat is removed at the surface using a heat exchanger. Christophe Pochari Engineering appears to be the first to propose using very high-volume water flow to prevent the down-hole equipment from reaching the temperature of the surrounding rock; no existing literature makes any mention of such a scheme, which speaks to its novelty.

Impetus for adoption

There is currently tremendous interest in exploiting the vast untapped potential of geothermal energy, and a number of companies are responding by offering entirely new alternatives, attempting to replace the conventional rotary bit with exotic methods including plasma, microwaves, and, in one case, firing concrete projectiles from a cannon! The greatest inventions and innovations in history shared one thing in common: they were elegant, simple solutions that appeared "obvious" in hindsight. There is no need whatsoever to get bogged down with exotic, unproven, complicated, and failure-prone alternative methods when existing technologies can be readily optimized. Conventional drilling technology employs a solid shaft spun at the surface using a "Kelly bushing" to transmit torque to the drill bit. This has remained practically unchanged since the early days of the oil industry in the early 20th century. While turbodrills have enjoyed widespread use, especially in Russia, for close to a century, they have a number of limitations. Russia developed turbodrills because the quality of Russian steel at the time was so poor that drill pipes driven from the surface would snap under the applied torque; Russia could not import higher-quality Western steel and was thus forced to invent a solution. Early Russian turbodrills wore out rapidly and went through bits much faster than their American shaft-driven counterparts due to the higher rotational speed of the turbine, even with reduction gearing. Diamond bits did not exist at the time, and low-quality carbide bits, principally tungsten carbide, and roller cones were used. Bearings would break down after as little as 10-12 hours of operation. Reduction gearboxes, essential for a turbodrill to work due to the excessive RPM of the turbine wheels, wore out rapidly owing to the loss of oil viscosity at high down-hole temperatures.
The principal challenge of deep rock drilling lies not in the hardness of the rock per se; diamond bits are much harder still and can shear even the hardest igneous rocks effectively. Existing diamond bits are more than an order of magnitude harder than quartz, feldspar, pyroxene, and amphibole, and newer binderless forms are harder yet. From a physics standpoint, it seems absurd to argue that drill bits are not already extremely effective. Rather, the challenge lies in preventing thermal damage to the down-hole components. If only a small flow of drilling fluid is pumped, as is presently done, flowing just enough fluid to carry cuttings to the surface, the thermal energy stored in the rock surrounding the well is sufficient to raise the temperature of this fluid, especially a lower-heat-capacity oil, to the mean temperature along that particular well. Existing wells, especially deeper boreholes, are usually around 9-10" (250 mm) in diameter. If the well is much narrower than 350 mm in diameter, it is difficult to flow enough water to cool it. Assuming a 100-hour thermal diffusion time, we draw a 1.26-meter radius of rock; that is, in one hundred hours, heat moves this distance. By growing the diameter of the well from 250 mm to 460 mm, the volume of rock to be cooled per unit of cross-sectional area, the area being proportional to the available flow rate at a constant pressure drop, falls from 125 cubic meters of rock per m² to less than 42 cubic meters per m², around 3 times less. Flow rates in previous deep drilling projects were usually less than 500 GPM, or around 110 m³/hr. The German deep drilling program had mud flow rates of between 250 and 400 GPM (81 m³/hr) for well diameters of 20 cm and 22.2 cm. The average thermal flux from the well is around 70 MWt, so the water is rapidly warmed to the surrounding rock temperature.
The minimum flow rate to keep the water below 180°C is around 400 cubic meters per hour, far too much to flow in such a small annulus, especially if the drilling mud is viscous and the drill pipe takes up much of the space, leaving only a small annulus. The volume of rock cooled per 100 hours is 6.8 cubic meters, or 18,000 kg, per meter of well. If this mass of rock is cooled by 300°C, the thermal energy is 1,280 kWh, a cooling duty of 12.8 kW per meter of well-bore length. Since water has a heat capacity of 3,850 J/kg-K at the average temperature and pressure of the well, 1,800 cubic meters per hour of water, a flow rate achievable with 600 bar of head in a 460 mm diameter well, provides a cooling capacity of 343,000 kW, or 34.3 kW per meter of wellbore length. Clearly, our well will not produce 350 MWt, equal to a small nuclear reactor, otherwise we would be drilling millions of holes and getting virtually free energy forever! But since drilling occurs over a relatively long period of close to 1,500 hours, the thermal draw-down radius is 4.87 meters, or a rock volume of 81.7 cubic meters per meter of well. The thermal energy in this rock mass, at a temperature drop of 240°C, is 15,400 kWh, or only 10.26 kW per meter of cooling duty over 1,500 hours. But such a large temperature drop is entirely unrealistic: a 12 km deep well will have an average rock temperature of only 210°C, so a temperature drop of only, say, 100°C is needed, resulting in a cooling duty of 4.3 kW/m, or 6,300 kWh/m over 1,500 hours. A 12 km well will therefore deliver 51.6 MWt of heat, producing a water temperature rise of only 27°C. If a 12 km well is drilled in a geothermal gradient of 35°C/km, the maximum temperature reached will be 420°C and the average temperature 210°C. This means that in the last 3.5 km, the rock temperature will exceed 300°C, which is far too hot for electronics, lubricants, bearings, and motors to operate reliably without accepting a severe reduction in longevity.
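The draw-down radii quoted above follow from the standard conduction estimate r ≈ √(4αt). A minimal sketch, assuming a rock thermal diffusivity of 1.1e-6 m²/s and granite-like density and heat capacity (property values are assumptions, not given in the text); with these inputs the 100-hour and 1,500-hour radii reproduce the 1.26 m and 4.87 m figures, while the per-meter duty lands somewhat below the text's 4.3 kW/m, reflecting sensitivity to the assumed rock properties:

```python
import math

ALPHA_ROCK = 1.1e-6   # m^2/s, assumed thermal diffusivity of crystalline rock
RHO_ROCK = 2700.0     # kg/m^3, assumed rock density
CP_ROCK = 900.0       # J/(kg*K), assumed rock heat capacity

def drawdown_radius(hours):
    """Characteristic conduction distance r ~ sqrt(4 * alpha * t)."""
    return math.sqrt(4.0 * ALPHA_ROCK * hours * 3600.0)

def cooling_duty_per_meter(hours, delta_T):
    """Average kW per meter of borehole to cool the draw-down cylinder by delta_T."""
    r = drawdown_radius(hours)
    volume = math.pi * r ** 2          # m^3 of rock per meter of well
    energy_J = volume * RHO_ROCK * CP_ROCK * delta_T
    return energy_J / (hours * 3600.0) / 1000.0

print(f"100 h draw-down radius:  {drawdown_radius(100):.2f} m")    # 1.26 m
print(f"1500 h draw-down radius: {drawdown_radius(1500):.2f} m")   # 4.87 m
print(f"duty at 1500 h, dT = 100 K: {cooling_duty_per_meter(1500, 100):.1f} kW/m")
```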
Geothermal wells, unlike petroleum and gas wells, must penetrate substantially below the shallow sedimentary layer, and for effective energy recovery, rock temperatures over 400°C are desired. As the temperature of the well reaches 300-400°C, the alloys used in constructing the drill equipment, even high-strength beta-titanium, begin to degrade, lose strength, become supple, warp, and fail from stress corrosion cracking when chlorides and other corrosive substances contact the metallic surfaces. It can thus be said that proper thermal management is the crucial exigency that must be satisfied for the upper crust to be tapped by human technology. Christophe Pochari Engineering's Active-Cooled Electro-Drill (ACED) methodology concatenates the following processes and components to achieve low down-hole temperatures.

#1: High volume/pressure water cooling using large diameter beta-titanium drill pipes:

Using high-strength beta-titanium drill pipes to deliver 600+ bar water at over 1,700 cubic meters per hour, a cooling duty of up to 400 megawatts can be reached if the water coolant is allowed to warm to 180°C. The rock mass around the 450 mm diameter well contains nowhere near enough thermal energy to heat this mass of water by that much; an expected 60-80 MW of thermal energy will be delivered to the surface in the first 1,500 hours of drilling. The drill string incorporates a number of novel features. Constructed of ultra-high-strength titanium, it can reach depths of 12 km without shearing off under its own weight. It also features an integrated conductor and abrasion liner: the conductor is wrapped around the drill pipe between a layer of insulation and the outermost abrasion liner.
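The cooling duty of the water loop is just Q = ṁ·cp·ΔT. A rough check, assuming surface-water properties and a 20°C inlet (both assumptions; the text uses ~3,850 J/kg-K at down-hole conditions, and the 400 MW ceiling evidently presumes a larger ΔT or flow):

```python
RHO_W = 1000.0  # kg/m^3, water density
CP_W = 4186.0   # J/(kg*K) at surface conditions; the text uses ~3850 down-hole

def cooling_duty_MW(flow_m3_per_hr, t_in_C, t_out_C):
    """Heat carried away by the water stream: Q = m_dot * cp * dT."""
    m_dot = flow_m3_per_hr / 3600.0 * RHO_W       # kg/s
    return m_dot * CP_W * (t_out_C - t_in_C) / 1e6

# 1700 m^3/hr of water warmed from 20 C to the 180 C ceiling:
print(f"{cooling_duty_MW(1700, 20, 180):.0f} MW")  # ~316 MW
```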

#2: High Power density down-hole electric machines:

A high-speed synchronous motor using high-temperature permanent magnets and mica-silica-coated windings generates 780-1,200 kW at 15,000-25,000 rpm. Owing to its high speed, the motor is highly compact and easily fits into the drill string within a hermetic high-strength steel container that protects it from shock and from abrasive and corrosive fluids. The motor is cooled by passing fresh water through sealed flow paths in the windings. Compared to the very limited power of Russian electro-drills of the 1940s to 1970s, the modern electro-drill designer has access to state-of-the-art high-power-density electrical machines.

#3 High Speed Planetary Reduction Gearbox:

The brilliance of the high-volume active cooling strategy is that it allows a conventional gear-set to reduce the speed of the high-power-density motor to the 300-800 RPM ideal for the diamond bit. Using high-viscosity gear oils retaining 30 cSt at 180°C, sufficient film thickness can be maintained and a gearbox life of up to 1,000 hours can be achieved.

#4: Silicon Thyristors and Nano-Crystalline Iron Transformer Cores:

Silicon thyristors are widely used in the HVDC sector and can be commercially procured for less than 3¢/kW.


The maximum voltage of electrical machines is limited by winding-density constraints: corona discharge requires thick insulation, reducing coil packing density. For satisfactory operation and convenient design, a voltage much over 400 V is not desirable. The problem then becomes how to deliver up to 1 MW of electrical power over 10 km. At low voltage this is next to impossible: if 400 V is used, the current would be a prohibitive 2,500 amps, instantly melting any copper conductor. As any power engineer knows, minimizing conductor size and losses requires a high operating voltage, 5,000 volts or more. Delivering 1,000 kW (1,340 hp) to the drill bit through a 15 mm copper wire at 100°C, with an average resistance of 0.8 ohms, results in Joule heating of 22 kW, or 2.2% of the total power. To deliver current to the motor, DC is generated at 6-10 kV; this DC is then inverted at 100-150 kHz to minimize transformer core size, and the voltage is stepped down to the 400 V required by the motor. This high-frequency low-voltage power is then rectified back to DC and inverted again at 1,000 Hz for the high-speed synchronous motor. Silicon thyristors can operate at up to 150°C in oxidizing atmospheres (thermal stability is substantially improved in reducing or inert atmospheres). Nano-crystalline iron cores have a Curie temperature of 560°C, well above the maximum water temperature encountered with 1,700 m³/hr flow rates.
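The voltage trade-off above is plain I²R arithmetic. A short sketch using the text's 0.8 Ω line resistance; note that at 6 kV the loss comes out at the quoted 22 kW, while at 400 V it is a conductor-melting 5 MW:

```python
def joule_loss_kW(power_W, volts, resistance_ohm):
    """I^2 * R conductor loss for delivering a given power at a given voltage."""
    current = power_W / volts  # A
    return current ** 2 * resistance_ohm / 1000.0

R_LINE = 0.8  # ohms, the text's figure for ~10 km of 15 mm copper at 100 C
for v in (400.0, 6000.0, 10000.0):
    print(f"{v:>7.0f} V -> {joule_loss_kW(1e6, v, R_LINE):>8.1f} kW lost")
```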

Rock hardness is not the limiting factor

Feldspar, the most common mineral in the crust, has a Vickers hardness of 710, or 6.9 GPa. Diamond in its binderless polycrystalline form has a hardness of 90-150 GPa, more than an order of magnitude greater. Diamond has a theoretical specific wear rate of 10^-9 mm³/N·m, where cubic millimeters represent the volume lost per newton of applied force per meter of sliding distance. We can thus easily calculate the life of the bit using this specific wear rate constant. Unfortunately, reality is more complex, and bit degradation is usually mediated by spalling, chipping, and breakage. Due to extrusion of the cobalt binder from the diamond, polycrystalline diamond degrades faster than its hardness alone would predict. Absent excessive temperature and shock, however, the wear rate is extremely slow: Archard's equation states that the wear rate is proportional to the load and the hardness differential. In light of this thermal constraint, it might seem obvious to any engineer to exploit the low thermal conductivity of rock and simply use a coolant, of which water is optimal, to flush heat out of the rock and back to the surface. But conventional oil and gas drilling employs a heavy, viscous drilling mud; this mud is difficult to pump and places stringent requirements on compression equipment. Elaborate filtration systems are required, and cooling this mud with a heat exchanger would severely erode the heat exchanger tubes. The principal reason "active" cooling of the well bore is not an established practice is that no present application justifies it. For example, to cool a 450 mm diameter, 10 km borehole that would flux close to 70,000 kW of thermal power in the first 1,200 hours, a pumping power of up to 32,000 hp is required. The pumping power costs alone would be close to $1.5 million per well, assuming a wholesale power cost of $70/MWh.
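The Archard-type estimate mentioned above can be sketched numerically. The weight-on-bit and mean cutter radius below are hypothetical assumptions chosen only for illustration (the text gives neither); even so, the predicted abrasive volume loss is on the order of a cubic millimeter per hour, supporting the claim that pure abrasion is negligible absent thermal damage:

```python
import math

K_WEAR = 1e-9       # mm^3/(N*m), theoretical specific wear rate of diamond (from text)
LOAD_N = 5.0e4      # N, assumed (hypothetical) weight-on-bit
MEAN_RADIUS = 0.15  # m, assumed mean cutter radius on a ~450 mm bit
RPM = 500.0         # mid-range of the 300-800 RPM quoted for diamond bits

# sliding distance per hour at the mean cutter radius
sliding_m_per_hr = 2.0 * math.pi * MEAN_RADIUS * RPM * 60.0
# Archard-type volume loss: V = k * F * s
wear_mm3_per_hr = K_WEAR * LOAD_N * sliding_m_per_hr

print(f"sliding distance: {sliding_m_per_hr:,.0f} m/hr")
print(f"abrasive wear:    {wear_mm3_per_hr:.2f} mm^3/hr")
```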
The added cost of site equipment, including heat exchangers, a larger compressor array, multiple gas turbines, and the fuel delivery needed to run them, makes this strategy entirely prohibitive for conventional oil and gas exploration. Even if the cost could be tolerated, the sub-200°C temperatures encountered could not possibly justify such a setup. What's more, pumping such a massive amount of water requires a larger-diameter drill pipe that can handle the pressure difference at the surface. Since the total pressure drop down the pipe and up the annulus is close to 600 bar across 10 km, the pipe must withstand this pressure without bulging; a bulging pipe would compress the water coming up the annulus, canceling the differential pressure and stopping the flow. High-strength beta-titanium alloys containing vanadium, tantalum, molybdenum, and niobium are required, since the pipe must not only withstand the great pressure at the surface but also carry its own mass. With its low density (4.7 g/cm³), excellent corrosion resistance, and high ductility, beta-titanium is the ideal alloy choice; few materials can surpass it. AMT Advanced Materials Technology GmbH markets a high-ductility titanium alloy called "Ti-SB20 Beta" that can reach ultimate tensile strengths of over 1,500 MPa. For conventional oil and gas drilling to only a few kilometers deep, the weight of the drill string, offset by the buoyancy of heavy drilling mud, allows the use of low-strength steels with yield strengths below 500 MPa. This high-end titanium would be vacuum melted and the drill pipes forged, or even machined from solid round bar stock. The cost of the drill pipe set would be $5 million or more for the titanium alone, plus several million for machining. In addition, titanium has poor wear and abrasion resistance and tends to gall, so it cannot be used where it rubs against the rock surface.
Because an electro-drill does not spin the drill pipe within the well, the only abrasion comes from the low concentration of rock fragments in the water and from the sliding of the pipe if it is not kept perfectly straight, which is next to impossible. To prevent damage to the titanium drill pipe, a liner of manganese steel or chromium can be mechanically adhered to the exterior of the pipe and replaced as needed. Another reason high-volume water cooling of drilling wells is not done is the issue of lost circulation and fracturing of the rock. In the first few kilometers, the soft sedimentary rock is very porous and would allow much of the pumped water to leak into pore spaces, resulting in excessive lost circulation. Since a high flow volume requires a pressure surplus at the surface, the water is as much as 250 bar above the background hydrostatic pressure, allowing it to displace liquids in the formation. Fortunately, the high-pressure water does not contact the initial sedimentary layer: the full pressure is only needed deep in the well, and by the time the water flows up the annulus to contact the sedimentary formation, it has already lost most of its pressure. The initial 500-600 bar water is piped down through the drill pipe and exits at the spray nozzles around the drill bit. In short, a number of reasons have combined to make such a strategy unattractive for oil and gas drilling. Sedimentary rocks such as shale, sandstone, dolomite, and limestone can be very vuggy (full of cavities), which can cause drilling fluid losses of up to 500 bbl/hr (80 cubic meters per hour). A lost circulation of 250 bbl/hr is considered severe, and rates as high as 500 bbl/hr are rarely encountered. With water-based drilling, the cost is not a great concern, since no expensive weighting agents such as barite or bentonite are used, nor any viscosifying agents such as xanthan gum.
Little can be done to prevent lost circulation other than using a closed annulus or drilling and casing simultaneously, but both methods add more cost than simply replacing the lost water. Water itself costs essentially nothing; only its transport and pumping do. If 80 cubic meters are lost per hour, an additional 1,200 kW of pumping power is consumed. The depth of the water table in the Western U.S. (where geothermal gradients are attractive) is about 80 meters. In central Nevada, for example, where groundwater is not by any means abundant, average precipitation is 290 mm, or 290,000 cubic meters per square kilometer. Multiple wells could be drilled to the 80-meter water table, with pumps and water purification systems installed, to provide onsite water and minimize transport costs. Water consumption for drilling a deep well with active cooling pales in comparison to agriculture or other water-intensive industries such as paint and coating manufacturing, alkali and chlorine production, and paperboard production. If water must be trucked to the site because well drilling proves impossible for whatever reason, a large tanker trailer with a capacity of 45 cubic meters, allowed on U.S. roads with 8 axles, can be used. If the distance between the water pickup site and the drill site is 100 km, which is reasonable, then transport, assuming a driver wage of $25/hr and fuel at $3.70/gal (the average U.S. diesel price in December 2022), would cost about $150 each way for 45 cubic meters, or less than $4 per cubic meter, around $320/hr at the severe loss rate. The total cost of replacing lost circulation at the most extreme loss rates is thus roughly $450,000 for a 10 km well drilled at 7 meters per hour.
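The water-replacement arithmetic above can be sketched as follows, counting $150 per delivered 45 m³ load (the text's per-leg figure); on this simple accounting the per-well total lands somewhat under the text's roughly $450,000, which evidently carries some margin for return legs and pumping:

```python
TRUCK_CAPACITY_M3 = 45.0  # m^3 per 8-axle tanker trailer (from text)
COST_PER_TRIP = 150.0     # $, wages + fuel for a 100 km haul (from text)
LOSS_RATE_M3_HR = 80.0    # m^3/hr, severe lost-circulation case (500 bbl/hr)

cost_per_m3 = COST_PER_TRIP / TRUCK_CAPACITY_M3
cost_per_hr = cost_per_m3 * LOSS_RATE_M3_HR
drilling_hours = 10_000 / 7.0  # 10 km well at 7 m/hr

print(f"${cost_per_m3:.2f}/m^3")
print(f"${cost_per_hr:.0f}/hr at the severe loss rate")
print(f"${cost_per_hr * drilling_hours:,.0f} per well")
```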


The drilling technology landscape is ripe for dramatic disruption as new, more durable, thermally stable metal-free materials reach the market. But this coming disruption is not what many expect. Rather than exotic, entirely new drilling technologies such as laser beams or plasma bits, improvements in conventional bit materials and down-hole power delivery hold the real innovation potential. Improvements in power delivery and active well cooling allow engineers to render the bulky turbodrill obsolete. Investors in this arena should be cautious and conservative; the old adage "tried and true" is apt here. Binderless polycrystalline diamond has been successfully synthesized at pressures of 16 GPa and temperatures of 2,300°C by Saudi Aramco researchers. Conventional metallic-bonded polycrystalline diamond bits begin to degrade rapidly at temperatures over 350°C because the thermal expansion of the cobalt binder exceeds that of diamond. Attempts to remove the metallic binder by leaching usually yield a brittle diamond prone to breaking off during operation. Binderless diamond shows wear resistance around fourfold higher than binder formulations and thermal stability in oxidizing atmospheres up to 1,000°C. The imminent commercialization of this diamond material does not bode well for alternative drilling technologies, namely those that propose using thermal energy or other exotic means to drill or excavate rock. If and when these higher-performance, longer-lasting bits reach maturity, most efforts at developing alternative technologies will likely be abandoned outright. In light of this, it would be unwise to invest large sums in highly unproven "bitless" technologies; efforts are better focused on developing thermally tolerant down-hole technologies and/or employing active cooling strategies.
It is therefore fair to say there is virtually no potential to significantly improve the core rock-cutting technology. The remaining innovation is confined to the drilling assembly, such as the rig, drill string, fluid, casing strategy, and pumping equipment, not the actual mechanics of the rock-cutting face itself. Conventional cobalt-binder diamond bits can drill at 5 meters per hour; using air as the drilling fluid, the speed increases to 7.6 meters per hour. Considering that most proposed alternatives cannot drill much over 10 meters per hour and none has been proven, it is difficult to justify their development in light of new diamond bits predicted to last four times longer, which in theory would allow at least a doubling of drilling speed at constant wear rates. A slew of alternative drilling technologies is chronicled by William Maurer in the book "Novel Drilling Techniques." To date, the only attempts to develop these alternative methods have ended in spectacular failure. For example, in 2009 Bob Potter, the inventor of hot dry rock geothermal, founded a company to drill using hot high-pressure water (hydrothermal spallation); as of 2022, the company appears to be out of business. Another company, Foro Energy, has been attempting to use commercial fiber lasers, widely used in metal cutting, to drill rock, but little speaks for its practicality. The physics speaks for itself: a 10-micron-thick layer of water will absorb 63% of the energy of a CO2 laser. No one could plausibly argue that a failure of imagination explains our putative inability to drill cost-effective deep wells: Maurer lists a total of 24 proposed methods over the past 60 years.
The list includes Abrasive Jet Drills, Cavitating Jet Drills, Electric Arc and Plasma Drills, Electron Beam Drills, Electric Disintegration Drills, Explosive Drills, High-Pressure Jet Drills, High-Pressure Jet Assisted Mechanical Drills, High-Pressure Jet Borehole Mining, Implosion Drills, REAM Drills, Replaceable Cutterhead Drills, Rocket Exhaust Drills, Spark Drills, Stratapax Bits, Subterrene Drills, Terra-Drills, Thermal-Mechanical Drills, and Thermocorer Drills. This quite extensive list does not include the "nuclear drills" proposed during the 1960s. Prior to the discovery of binderless diamond bits, the author believed that among the proposed alternatives, explosive drills might be the simplest and most conducive to improvement, since they had been successfully field-tested. What most of these exotic alternatives claim to offer (at least according to their proponents) is faster drilling rates. But upon scrutiny, they do not live up to this promise. For example, Quaise, a company attempting to commercialize Paul Woskov's idea of using high-frequency radiation to heat rock to its vaporization point, claims to be able to drill at 10 meters per hour. That number is nothing spectacular, considering conventional binder polycrystalline diamond bits from the 1980s could drill as fast as 7 meters per hour in crystalline rock using air (Deep Drilling in Crystalline Bedrock Volume 2: Review of Deep Drilling Projects, Technology, Sciences and Prospects for the Future, Anders Bodén, K. Gösta Eriksson). Drilling with lasers, microwaves, or any other thermal delivery mechanism is well within the capacity of modern technology, but it offers no compelling advantage to impel adoption. Most of these thermal drilling options also require dry holes, since water vapor, a dipolar molecule, absorbs most of the energy of electromagnetic radiation.
While new binderless polycrystalline diamonds can withstand temperatures up to 1,200°C in non-oxidizing atmospheres, down-hole drivetrain components cannot practically operate above 250°C due to lubricant limitations, preventing drilling with down-hole equipment at depths beyond 7 km, especially in sharp geothermal gradients of over 35°C/km. Electric motors using glass- or mica-insulated windings and high-Curie-temperature magnetic materials such as Permendur can maintain high flux density well over 500°C, but gearbox lubrication issues make such a motor useless on its own. To maximize the potential of binderless diamond bits, a down-hole drivetrain is called for, eliminating drill-pipe oscillation and friction and allowing optimal speed and power. Of all the down-hole drive options, a high-frequency, high-power-density electric motor is ideal, possessing far higher power density than the classic turbodrill and offering active speed and torque modulation. Even if a classic Russian turbodrill were employed, a reduction gear-set would still be required. Russian turbodrills were plagued by rapid wear of planetary gearsets due to low oil viscosity at down-hole temperatures; a gearset operating with oil of 3 cSt wears ten times faster than one at 9 cSt. To make a high-power electric motor fit in the limited space of the drill pipe, a high operating speed is necessary, and this is where the lubrication challenges become exceedingly difficult. While solid lubricants and advanced coatings combined with ultra-hard materials can allow bearings to operate entirely dry for thousands of hours, non-gear reduction drives are immature and largely unproven for continuous heavy-duty use. The power density of a synchronous electric motor is proportional to the flux density of the magnets, the pole count, and the rotational speed. A suitable reduction drive system must therefore be incorporated into the drill.
Although a number of exotic untested concepts exist, such as traction drives, pneumatic motors, high-temperature hydraulic pumps, dry lubricated gears, etc., none enjoy any degree of operational success and exist only as low-TRL R&D efforts. Deep rock drilling requires mature technology that can be rapidly commercialized with today’s technology; it cannot hinge upon future advancements which have no guarantee of occurring. Among speed-reducing technologies, involute tooth gears are the only practical reduction drive option widely used in the most demanding applications, such as helicopters and turbofan engines. But because of the high Hertzian contact stress generated by meshing gears, it is paramount that the viscosity of the oil does not fall much below 10 centipoise, in order to maintain a sufficient film thickness on the gear face, preventing rapid wear that would necessitate frequent pull-up of the down-hole components. Fortunately, ultra-high viscosity gear oils are manufactured that can operate up to 200°C. Mobil SHC 6080 possesses a kinematic viscosity of 370 cSt at 100°C; the Andrade equation predicts a viscosity of roughly 39 cSt at 180°C. In an anoxic environment, the chemical stability of mineral oils is very high, close to 350°C, but at such temperatures viscosity drops below the film-thickness threshold, so viscosity, not thermal stability, is the singular consideration. It is expected that by eliminating the oscillation of the drill pipe caused by eccentric rotation within the larger borehole, and by removing the cobalt binder, diamond bits could last up to 100 hours or more. This number is conjectural, and more conservative bit life numbers should be used for performance and financial analysis. It is therefore critical that the major down-hole drivetrain components last as long as the bits so as not to squander their immense potential. If bit life is increased to 100 hours, the lost time due to pull-out is reduced markedly.
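The Andrade extrapolation mentioned above can be sketched numerically: the relation ν = A·exp(B/T) is fitted through two datasheet points and evaluated at 180°C. The 370 cSt at 100°C figure is from the text, while the 40°C value used here is a round assumed number, so the result (~25 cSt) differs from the 39 cSt quoted, illustrating how sensitive the extrapolation is to the choice of fit points:

```python
import math

def fit_andrade(t1_c, nu1, t2_c, nu2):
    """Fit nu = A*exp(B/T) (T in kelvin) through two (temp C, cSt) points."""
    T1, T2 = t1_c + 273.15, t2_c + 273.15
    B = math.log(nu1 / nu2) / (1 / T1 - 1 / T2)
    A = nu1 / math.exp(B / T1)
    return A, B

def andrade(A, B, t_c):
    """Evaluate the fitted Andrade viscosity at temperature t_c (Celsius)."""
    return A * math.exp(B / (t_c + 273.15))

# Mobil SHC 6080: 370 cSt at 100 C (from the text); the ~7000 cSt at 40 C
# is an assumed round figure for this sketch, not a datasheet value.
A, B = fit_andrade(40.0, 7000.0, 100.0, 370.0)
nu_180 = andrade(A, B, 180.0)  # extrapolated viscosity at 180 C, cSt
print(round(nu_180, 1))        # ~25 cSt with these assumed fit points
```

Either way, the extrapolated value remains comfortably above the ~10 cSt film-thickness floor cited for gear survival.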
If the depth of the well is 10,000 meters, the average trip depth is 5,000 meters, the average penetration rate is 7 m/hr, and the drill pipe stands are 30 meters, then the number of drill pipe sections is 333. During each retrieval, if the turnaround time per section can be kept to 3 minutes, the total time is 8.3 hours per retrieval one way, or 16.6 hours for a complete bit swap. With a conservative bit life of 50 hours and a total drilling time of 1,430 hours, a total of 29 bit swaps will be required, taking up 481 hours, or 33% of the total drilling time. If bit life is improved to 100 hours, downtime is halved to 240 hours, or 17%. If a drill-pipe length of 45 meters is employed with a bit life of 100 hours and a rate of penetration of 7 m/hr, the downtime is only 211 hours, or 14.7%.
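The downtime arithmetic above can be reproduced with a short script, using only the text's own assumptions (10,000 m well, 7 m/hr penetration, 30 m stands, 3 minutes per stand, and an average trip to half depth):

```python
def trip_downtime(depth_m=10_000, rop_m_hr=7.0, pipe_m=30.0,
                  bit_life_hr=50.0, coupling_min=3.0):
    """Hours and fraction of drilling time lost to bit-swap round trips.

    Assumes the average trip is to half the final depth, as in the text.
    """
    drilling_hr = depth_m / rop_m_hr                # ~1429 hr at 7 m/hr
    swaps = round(drilling_hr / bit_life_hr)        # number of bit swaps
    stands_avg = (depth_m / 2) / pipe_m             # stands per one-way trip
    round_trip_hr = 2 * stands_avg * coupling_min / 60
    downtime_hr = swaps * round_trip_hr
    return drilling_hr, swaps, downtime_hr, downtime_hr / drilling_hr

drilling, swaps, down, frac = trip_downtime(bit_life_hr=50)
print(swaps, round(down), round(100 * frac, 1))  # ~29 swaps, ~480 hr, ~34%
```

Re-running with `bit_life_hr=100` roughly halves the downtime, in line with the 240-hour figure above.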

Some may be suspicious that something as simple as the proposed idea has not been attempted before. It is important to realize that, until now, there has existed no rationale for its use. We can therefore conclude that a lack of relevant demand, rather than fundamental technical problems or concerns regarding feasibility, accounts for its purported novelty. As mentioned earlier, this strategy has not been employed in drilling before because it imposes excessive demands on surface equipment, namely the need for close to 16,000 hp (32,000 hp at full depth) to drive the high-pressure water pumps. Such power consumption is impractical for oil and gas drilling, where quick assembly and disassembly of equipment is demanded in order to increase drilling throughput. Water, even with its low viscosity, requires a great deal of energy to flow up and down this very long flow path. The vast majority of the sedimentary deposits where hydrocarbons were laid down during the Carboniferous period occur in the first 3 km of the crust. The temperatures at these depths correspond to less than 100°C, well below the point that warrants advanced cooling techniques. Deep drilling in crystalline bedrock does not prove valuable for hydrocarbon exploration, since subduction rarely brings valuable gas and liquid hydrocarbons deeper than a few kilometers. There has therefore been very weak impetus for the adoption of advanced technologies related to high-temperature drilling. Geothermal energy presently represents a minuscule commercial contribution and has to this date proven an insufficient commercial incentive to bring to market the technical and operational advances needed to viably drill past 10 km in crystalline bedrock. Cooling is essential for more than just the reduction gearbox lubricant.
If pressure transducers, thermocouples, and other sensor technology are desired, one cannot operate hotter than the maximum temperature of silicon integrated-circuit electronics. For example, a very effective way to reduce Ohmic losses is to increase the voltage to keep the current to a minimum. This can easily be done by rectifying to high-voltage DC using silicon-controlled rectifiers (SCRs, or thyristors) and nano-crystalline transformer cores. But neither gearbox oil nor thyristors can operate at more than about 150°C; cooling thus emerges as the enabling factor behind any attempt to drill deep into the crust of the earth, regardless of how exactly the rock is drilled. Incidentally, the low thermal conductivity and heat capacity of the crust yield a very low thermal diffusivity. Rock is a very poor conductor of heat; in fact, rock (silicates) can be considered an insulator, and similar oxides are used as refractory bricks to block heat from conducting in smelting furnaces. The metamorphic rock in the continental crust has a thermal conductivity of only 2.1 W/m-K and a heat capacity of under 1100 J/kg-K at 220°C (roughly the average temperature over a 12 km deep well), translating into a very slow thermal diffusivity of 1.1 mm²/s. This makes it more than feasible for the operator to pump a high volume of water through the drill pipe and annulus, above and beyond the requirement for cuttings removal. If rock had an order of magnitude faster thermal diffusivity, such a scheme would be impossible, as the speed at which heat travels through the rock would exceed even the most aggressive flow rates allowable through the borehole. This is the motivation behind the use of down-hole electric motors: with satisfactory cooling, electric motors are the most convenient method to deliver power, though they are not the only high-power-density option.
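The practical meaning of low diffusivity is that heat re-invades the cooled borehole wall only to a characteristic depth of roughly √(αt). A quick check using the 1.1 mm²/s figure above:

```python
import math

def penetration_depth_m(alpha_mm2_s: float, hours: float) -> float:
    """Characteristic conduction depth sqrt(alpha * t), in meters."""
    alpha = alpha_mm2_s * 1e-6   # mm^2/s -> m^2/s
    t = hours * 3600.0           # hours -> seconds
    return math.sqrt(alpha * t)

# With alpha = 1.1 mm^2/s (the text's value for crustal rock at ~220 C):
d_day = penetration_depth_m(1.1, 24)    # ~0.31 m after one day
d_job = penetration_depth_m(1.1, 2000)  # ~2.8 m over a ~2000 hr job
print(round(d_day, 2), round(d_job, 2))
```

Heat from the far field creeps back toward the wall only a few meters over the entire drilling campaign, which is why a sufficiently large water flow can outrun it.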
A turbo-pump (a gas turbine without a compressor) burning hydrogen and oxygen is also an interesting option, requiring only a small hose to deliver the gaseous fuel, which eliminates the need for any down-hole voltage conversion and rectification equipment. But despite the superior power density of a combustion power plant, the need to pump high-pressure flammable gases presents a safety concern at the rig, since each time a new drill string section must be coupled, the high-pressure gas lines have to be closed off and purged. In contrast, an electric conductor can simply be de-energized during each coupling without any mechanical action at the drill pipe interface, protecting workers at the site from electric shock. In conclusion, even though a hydrogen-oxygen turbo-pump is a viable contender to electric motors, the complexity and safety issues arising from pumping high-pressure flammable gases rule out this option unless serious technical issues are encountered in the operation of down-hole electric motors, which are not anticipated. Conventional turbodrills require large numbers of turbine stages to generate significant power; as a result, a substantial portion of the fluid energy pumped from the surface is consumed by the turbine stages, producing a considerable pressure drop, which reduces the cooling potential of the water since there is less head remaining to overcome viscous drag along the rough borehole on the way up the annulus. According to Inglis, T. A. (1987) in Directional Drilling, an 889 hp turbodrill experiences a pressure drop of 200 bar at a flow rate of 163 m³/hr; since the large-diameter drill bit requires at least 1000 kW (1350 hp), the total pressure drop would be 303 bar, or half the initial driving head. This would halve the available flow rate and thus the cooling duty.
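The 303 bar figure follows from scaling Inglis's data point linearly with power at constant flow (Δp ∝ P/Q), a first-order approximation sketched below:

```python
def scaled_pressure_drop(dp_ref_bar, p_ref_hp, p_new_hp):
    """Scale turbine pressure drop linearly with shaft power at constant flow.

    First-order approximation: assumes unchanged flow rate and efficiency.
    """
    return dp_ref_bar * p_new_hp / p_ref_hp

# Inglis (1987): an 889 hp turbodrill drops 200 bar at 163 m^3/hr.
dp_bit = scaled_pressure_drop(200.0, 889.0, 1350.0)
print(round(dp_bit))  # ~304 bar, matching the ~303 bar cited
```

Half of a 600 bar surface head consumed inside the turbine is head no longer available to drive cooling flow, which is the core objection to turbodrills here.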


Electric motors confer on the operator the ability to perform live, active bit speed and torque modulation, while turbodrills cannot be efficiently operated below their optimal speed band. Moreover, even if turbodrills could be designed to operate efficiently at part load, it is not practical to vary the pumping output at the surface to control the turbodrill’s output. And even if turbodrills were used, they would still need to employ our novel active-cooling strategy, since they too need speed reduction. It should be emphasized that it is not the use of down-hole motors themselves that makes our drilling concept viable, but rather the massive water flow that keeps everything cool. In hard crystalline bedrock, well-bore collapse generally does not occur; rather, a phenomenon called “borehole breakout” occurs. Imagine the borehole circumference divided into two halves, each forming a compression dome facing the other: compressive stress is maximal at the crest of each dome and minimal at the root, and the stress concentration at the roots causes them to elongate and fracture. Once this crack forms, it stabilizes, the stress concentration is relieved, and the breakout grows only very slowly over time. Overburden pressure is an unavoidable problem in deep drilling: it arises from the sharp divergence between the lithostatic gradient of rock, around 26 MPa/km, and the hydrostatic gradient of water, only 10 MPa/km.

Technical challenges

It is important to separate technical problems from operational problems. For example, regardless of what kind of drill one uses, there is always the issue of the hole collapsing in soft formations and equipment getting stuck.
Another example would be lost circulation; such a condition is largely technology-invariant, short of extreme options such as casing drilling.

Operational challenges

While there are no strict “disadvantages”, namely features that make it inferior to current surface-driven shaft drills, there are undoubtedly a number of unique operational challenges. Compared to the companies touting highly unproven and outright dubious concepts, this method and technology package faces only operational, not technical, challenges. The massive flow of water and the intense removal of heat from the rock will result in more intense than normal fracture propagation in the borehole. The usual issues that pertain to extreme drilling environments apply equally to this technology and are not necessarily made any graver than with conventional shaft-driven drills. For example, the down-hole motor and equipment getting stuck, a sudden unintended blockage of water flow somewhere along the annulus resulting in rapid heating, or a snapping of the drill string are likely to happen occasionally, especially in unstable formations or in regions where over-pressurized fluids are stored in the rock. Another potential downside is intense erosion of the rock surface due to the high annulus velocity of over 8 meters per second. Since a large volume of water must be pumped, a large head of at least 600 bar is required; this pressure energy is converted into velocity according to Bernoulli’s principle. Because the concentration of fragments in the water is extremely low (<0.06%, versus over 2% in drilling mud), the rate of erosion on the hardened drill pipe liner is not a concern. Given the relatively short period over which drilling actually takes place, around 2000 hours including bit replacement and pull-up every 50 hours, it is unlikely this water will have time to significantly erode the well-bore.
Even if it does, it will merely enlarge the well diameter, and is not expected to significantly compromise its structural integrity.
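The lithostatic-hydrostatic divergence driving breakout and overburden loads, quoted earlier as 26 versus 10 MPa/km, grows linearly with depth and can be tabulated directly:

```python
def pressure_differential_mpa(depth_km, rock_grad=26.0, water_grad=10.0):
    """Difference between lithostatic rock stress and the water column
    pressure at a given depth, using the gradients from the text (MPa/km)."""
    return (rock_grad - water_grad) * depth_km

for z in (3, 7, 10):
    print(z, pressure_differential_mpa(z))  # 48, 112, and 160 MPa respectively
```

At the 10 km target depth, the water column offsets only 100 of the 260 MPa of rock stress, leaving a 160 MPa differential the borehole wall must carry itself.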






Temperature-dependent thermal diffusivity of the Earth’s crust and implications for magmatism | Nature

Galton Reaction Time Slowing Resolved (Scientific)

Christophe Pochari, Pochari Technologies, Bodega Bay, CA

Abstract: The issue of slowing reaction time has not been fully resolved. Since Galton collected 17,000 samples of simple auditory and visual reaction time from 1887 to 1893, achieving an average of 185 milliseconds, modern researchers have been unable to reproduce such fast results, leading some intelligence researchers to erroneously argue that the slowing has been mediated by selective mechanisms favoring lower g in modern populations.

Introduction: In this study, we have developed a high-fidelity measurement system for ascertaining human reaction time, with the principal aim of eliminating the preponderance of measurement latency. To accomplish this, we designed a high-speed photographic apparatus in which a camera records both the stimulus and the participant’s finger movement. The camera is an industrial machine vision camera built to stringent commercial standards (a Contrastec Mars 640-815UM, $310), using a commercial-grade Python 300 sensor and feeding into a USB 3.0 connection on a Windows 10 PC running Halcon machine vision software; it records at a high frame rate of 815 frames per second, or 1.2 milliseconds per frame. The high-speed camera begins recording, the stimulus source is then activated, and the camera continues filming until after the participant has depressed a mechanical lever. The footage is then analyzed frame by frame in a frame-rate analysis tool such as VirtualDub 1.10: the point of stimulus appearance is set as point zero, from which the elapsed reaction time commences. Since the LED monitor refreshes from the top down when displaying the stimulus color (green in this case), the frame at which the screen is approximately 50 to 70% refreshed is set as the beginning of the measurement, as we estimate the human eye can detect the presence of the green stimulus before it is fully displayed. Once the point of stimulus arrival is ascertained, the next step is identifying the point at which finger displacement becomes conspicuously discernible, that is, when the lever first shows evidence of motion from its resting position.
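The frame-counting step reduces to a single division: the frame delta between stimulus onset and first lever motion, divided by the frame rate. A sketch with hypothetical frame indices (the 815 fps rate is the camera's; the indices are invented for illustration):

```python
def reaction_time_ms(stimulus_frame: int, movement_frame: int,
                     fps: float = 815.0) -> float:
    """Reaction time from high-speed footage: frame delta over frame rate."""
    return (movement_frame - stimulus_frame) / fps * 1000.0

# Hypothetical single trial: screen half-refreshed to green at frame 1200,
# first discernible lever motion at frame 1324.
rt = reaction_time_ms(1200, 1324)
print(round(rt, 1))  # ~152.1 ms, with ~1.2 ms per-frame resolution
```

Each frame contributes only ~1.2 ms of quantization error, so the dominant uncertainty is where the analyst places the two endpoints, not the camera.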
Using this innovative technique, we achieved a true reaction time to visual stimuli of 152 milliseconds, 33 milliseconds faster than Francis Galton’s pendulum chronograph. We collected a total of 300 samples to arrive at a long-term average. Using the same test participant, we compared a standard PC measurement system running Inquisit 6 and achieved results of 240 and 230 milliseconds, depending on whether a desktop or laptop keyboard was used; this difference of 10 ms is likely due to the longer keystroke distance on the desktop keyboard. We also used the well-known online test and achieved an average of 235 ms. Across the two tests, an internet version and a local software version, the total latency appears to be up to 83 ms, nearly 40% of the gross figure. These findings strongly suggest that modern methods of testing human reaction time impose a large latency penalty that skews results upwards, hence the appearance that reaction times are slowing. We conclude that rather than physiological changes, the slowing of simple RT is imputable to poor measurement fidelity intrinsic to computer/digital measurement techniques.
In summary, it cannot be stated with any degree of confidence that modern Western populations have experienced slowing reaction times since Galton’s original experiments. This means attempts to extrapolate losses in general cognitive ability from putatively slowing reaction times are seriously flawed and rest on confounding variables. The reaction time paradox is not a paradox, but rather rests on conflating latency with slowing, a rather elementary problem that continues to perplex experts in the field of mental chronometry. We urge mental chronometry researchers to abandon measurement procedures fraught with latency, such as PC-based systems, and use high-speed machine vision cameras as a superior substitute.



Anhydrous ammonia reaches nearly $900/ton in October


Record natural gas prices have sent ammonia skyrocketing to nearly $900 per ton in the North American market. Natural gas has reached $5.60 per 1,000 cf, driving ammonia back to 2014 prices. Pochari distributed photovoltaic production technology will now become ever more competitive, featuring even shorter payback periods.

The limits of mental chronometry: little to no decline in IQ can be inferred from post-Galton datasets

The Limits of Mental Chronometry: IQ has not Declined 15 points Since the Victorian era


Christophe Pochari Engineering is the first in the world to use high-speed cameras to measure human reaction time. In doing so, we have discovered that the true “raw” undiluted human visual reaction time is actually 150-165 milliseconds, not the sluggish 240-250 ms frequently cited.

Key findings

Using high-speed photography with industrial machine vision cameras, Christophe Pochari Engineering has acquired ultra-high-fidelity data on simple visual reaction time, in what appears to be the first study of its kind. The vast preponderance of contemporary reaction time studies make use of computer-software-based digital measurement systems that are fraught with response lag. For illustration, Inquisit 6, a Windows PC program, is frequently used in psychological assessment settings. We used Inquisit 6 and performed 10 sample runs, with a running average of 242 ms using a standard keyboard and 232 ms with a laptop keyboard. The computer used is an HP laptop with 64 GB of DDR4 RAM and a 4.0 GHz Intel processor. Using the machine vision camera, a mean speed of 151 milliseconds was achieved, with a standard deviation of 16 ms. Depending on where one sets the cutoffs for finger movement and screen refresh, there is an interpretation variability of around 10 ms. Based on this high-fidelity photographic analysis, our data lead to the conclusion that a latency of around 90 ms is built into digital computer-based reaction time measurement, generating the false appearance of slowing since Galton, who used mechanical apparatus free of lag. Each individual frame was analyzed using VirtualDub 1.10.4 frame analysis software, which allows the user to step through high frame rate video footage. These data indicate that modern reaction times of 240-250 milliseconds (Deary, among others) cannot be compared to Galton’s original measurement of around 185 ms. Although Galton’s device was no doubt far more accurate than today’s digital systems, it probably still possessed some intrinsic latency; we estimate Galton’s device had around 30 ms of latency based on this analysis, assuming 240 ms as the modern mean.
Dodonova et al. constructed a pendulum-like chronometer very similar to Galton’s original device and obtained a reaction time of 172 ms with it, so we can be quite confident in these figures.

After adjusting for latency, we come to the conclusion that there has been minimal change in reaction time since 1889. We plan on using a higher-speed camera to further reduce measurement error in a follow-up study, although such precision is not strictly necessary: a frame interval of ±3 milliseconds out of 150 represents a minuscule 2% error, and there is far more room for error in defining the starting and ending points.

An interesting side note: there is some data pointing to ultra-fast reaction times in athletes that seem to exceed the speed of normal simple reaction to visual stimuli under non-stressful conditions:

“Studies have measured people blinking as early as 30-40 ms after a loud acoustic stimulus, and the jaw can react even faster. The legs take longer to react, as they’re farther away from the brain and may have a longer electromechanical delay due to their larger size. A sprinter (male) had an average leg reaction time of 73 ms (fastest was 58 ms), and an average arm reaction time of 51 ms (fastest was 40 ms).”

The device used in the study is a Shenzhen Kayeton Technology Co. KYT-U400-CSM high-speed USB 3.0 camera (330 fps at 640 × 360, MJPEG). A single frame increment represents an elapsed time of 3 milliseconds. Pochari Technologies has since purchased a Mars 640-815UM, an 815 frames per second camera manufactured by Hangzhou Contrastech Co., Ltd; the purpose of the 815 fps camera is to further reduce the frame interval to 1.2 milliseconds. In the second study, using a different participant, we will use the 815 fps device.

To measure finger movement, we used a small metal lever. The camera is fast enough to detect the color transition of the LED monitor, note the color changing from red to green. We set point zero as the point where the color shift is around 50% through; the color on the monitor changes as the pixels switch from the top down. The participant is instructed to hold his or her finger as steady as possible during the waiting period; there is effectively zero detectable movement until muscle contraction takes place upon nerve signal arrival, which propagates at around 100 m/s over a distance of roughly 1.6 m from the brain to the hand (a 16 ms delay). Once nerve conduction has occurred, the finger begins to depress the lever conspicuously on the image, and the reaction time can be determined.
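The 16 ms brain-to-hand figure above is a simple distance-over-velocity calculation, a physiological floor beneath any measured reaction time:

```python
def conduction_delay_ms(path_m: float = 1.6, velocity_m_s: float = 100.0) -> float:
    """Nerve signal travel time from brain to hand, in milliseconds,
    using the path length and conduction velocity cited in the text."""
    return path_m / velocity_m_s * 1000.0

print(round(conduction_delay_ms(), 1))  # 16.0 ms, as in the text
```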




Mars 640-815UM USB 3.0 machine vision camera (815 fps)


Shenzhen Kayeton Technology Co KYT-U400-CSM high speed USB camera.

Introduction and motivation of the study

In 2005, Bruce Charlton came up with a novel idea for psychometric research: attempt to find historical reaction time data to estimate intelligence in past generations. In 2008 he wrote an email to Ian Deary proposing this new method for performing a diachronic analysis of intelligence. Deary unfortunately did not have any information to provide Charlton with, so the project was put into abeyance until 2011, when Michael Woodley discovered Irwin Silverman’s 2010 paper, which had rediscovered Galton’s old reaction time collection. The sheer obscurity of Galton’s original study is evident considering that the leading reaction time expert, Ian Deary, was not even aware of it. The original paper covering Galton’s study was Johnson et al. 1985. The subsequent paper, “Were the Victorians cleverer than us?”, generated much publicity. One of its lead authors, Jan te Nijenhuis, gave an interview with a Huffington Post journalist on YouTube discussing the theory, and it was also featured in the Daily Mail. The notoriously dyspeptic Greg Cochran threw down the gauntlet on Charlton’s claim on his blog, arguing from the breeder’s equation that such a decline is impossible. Many HBD bloggers, including HBD chick, were initially very skeptical; blogger Scott Alexander Siskind also offered a rebuttal, mainly along the lines of sample representation and measurement veracity, the two main arguments made here.

Galton’s original sample has been criticized for not being representative of the population at the time, as it mainly consisted of students and professionals visiting a science museum in London where the testing took place. In 1889, most of the Victorian population was comprised of laborers and servants, who would likely not have attended this museum to begin with. Notwithstanding the lack of population representation, the sample was large: over 17,000 total measurements were taken at the South Kensington Museum from 1887 to 1893. Since Galton died in 1911 and never published his reaction time findings, we are reliant on subsequent reanalyses of the data; this is precisely where error may have accrued, as Galton may have had personal insight into the workings of the measurement device, the statistical interpretation, or the data aggregation system he used that has not been completely documented. The data used by Silverman were provided by a reanalysis of Galton’s original findings published by Koga and Morant in 1923, and more data were later uncovered by Johnson in 1985. Galton used a mechanical pendulum chronometer renowned for its accuracy and minimal latency. Measurement error is not where criticism is due: Galton’s tool was likely more accurate than modern computer testing. Modern computers are thought to contribute around 35-40 ms of latency, not including any software or internet latencies, but we have measured up to 90 ms.

The problems with inferring IQ decline from Galton-to the present RT data is threefold:

The first issue is that the sample is very unlikely to have been representative of the British population at the time. It consisted of disproportionate numbers of highly educated individuals, who are more likely to possess high levels of intelligence, since at the time people who participated in events like this would have been drawn overwhelmingly from the higher class strata. Society was far more class-segregated, and average and lower IQ segments would not have participated in intellectual activities.

Scott Alexander comments: “This site tells me that about 3% of Victorians were “professionals” of one sort or another. But about 16% of Galton’s non-student visitors identified as that group. These students themselves (Galton calls them “students and scholars”, I don’t know what the distinction is) made up 44% of the sample – because the data was limited to those 16+, I believe these were mostly college students – aka once again the top few percent of society. Unskilled laborers, who made up 75% of Victorian society, made up less than four percent of Galton’s sample”

The second issue is measurement latency: when adjusting Galton’s original estimate and correcting modern samples for digital latency, the apparent loss in reaction time collapses from the originally claimed 70 ms (14 IQ points) to zero. Another factor, mentioned by Dodonova et al., is the process of “outlier cleaning”, where samples below 200 ms and above 750 ms are eliminated; this can have a strong effect on the mean, theoretically in either direction, although it appears that outlier cleaning increases the RT mean, since slow outliers are rarer than fast outliers.

The third issue is that reaction time studies only 50-60 years later (from the 1940s and 50s) show reaction times equal to modern samples, which would require the decline to have taken place within a short timeframe of only 50-60 years. A large study by Forbes in 1945 shows 286 ms for males in the UK. Michael Persinger’s book on ELF waves reproduces a study from 1953 in Germany.

“On the occasion of the German 1953 Traffic Exhibition in Munich, the reaction times of visitors were measured on the exhibition grounds on a continuous basis. The reaction time measurements of the visitors to the exhibition consisted of the time span taken by each subject to release a key upon the presentation of a light stimulus”.

In the 1953 Germany study, they were comparing the reaction of people exposed to different levels of electromagnetic radiation. The mean appeared to be in the 240-260 ms range.

Lastly, it could have been the case that Galton recorded the fastest of three trials, not the mean of the three.

Dodonova et al. write: “It is also noteworthy that Cattell, in his seminal 1890 paper on measurement, on which Galton commented and that Cattell hoped would ‘meet his (Galton’s) approval’ (p. 373), also stated: ‘In measuring the reaction-time, I suggest that three valid reactions be taken, and the minimum recorded’ (p. 376). The latter point in Cattell’s description is the most important one. In fact, what we know almost for sure is that it is very unlikely that Galton computed mean RT on these three trials (for example, Pearson (1914) claimed that Galton never used the mean in any of his analyses). The most plausible conclusion in the case of RT measurement is that Galton followed the same strategy as suggested by Cattell and recorded the best attempt, which would be well in line with other test procedures employed in Galton’s laboratory.”

Woods et al. (2015) confirm this statement: “based on Galton’s notebooks, Dodonova and Dodonov (2013) argued that Galton recorded the shortest-latency SRT obtained out of three independent trials per subject. Assuming a trial-to-trial SRT variance of 50 ms (see Table 1), Galton’s reported single-trial SRT latencies would be 35–43 ms below the mean SRT latencies predicted for the same subjects; i.e., the mean SRT latencies observed in Experiment 1 would be slightly less than the mean SRT latencies predicted for Galton’s subjects.”

A website run by Ben D Wiklund, Humanbenchmark, has gathered 81 million clicks. Such a large sample size eliminates almost all sampling bias. The only remaining issue is population differences: it is not known what percentage of users are from Western nations. Assuming most are, it is safe to say this massive collection is far more accurate than a small sample performed by a psychologist. In order for this test to be compared to Galton’s original sample, since the test is online, both internet latency and hardware latency have to be accounted for. Internet latency depends on the distance between the user and the server, so an average is difficult to estimate. Humanbenchmark is hosted in North Bergen, US; if half the users are outside the U.S., the distance should average around 3,000 km.

“Connecting to a web site across 1500 miles (2400 km) of distance is going to add at least 25 ms to the latency. Normally, it’s more like 75 after the data zig-zags around a bit and goes through numerous routers”. Unless the website corrects for latency, which seems difficult to believe, since they would have to immediately calculate the distance based on the user’s IP and assume he does not use a VPN, if internet latency can range as high as 75 milliseconds, it is doubtful that the modern average reaction time is 167 ms, therefor we are forced to conclude there must be some form of a latency correction system, although they make no mention of such a feature. For example, since Humanbenchmark is hosted in New Jersey, a person taking the test in California would require to wait 47 ms before his signal reaches New Jersey is 4500 kilometers away, but this includes only the actual in takes for light to travel a straight line at the speed of light, many fiber-optic cables take a circuitous path which adds distance, additionally, there is also latency in the server itself and the modem and router. According to Verizon, the latency for Transatlantic NY London (3500 km) is 92 ms, adjusting for the distance between New Jersey and California (4500 km) gives 92 ms. Since the online test begins to record the time elapsed after the green screen is initiated, the computer program in New Jersey started calculating immediately after green is sent, but 92 ms passes before you see green, and when green appears, you click, which then takes another 92 ms before it arrives at the server to end the timer. The internet is not a “virtual world”, all webservices are hosted by a server computer which performs computation locally, by definition, any click on a website hosted in Australia 10,000 km away will register lag 113 ms after your click, this is limited by the speed of light. 
Only a quantum-entanglement-based internet could be latency free, and then only at the expense of destroying the information, according to the uncertainty principle! Using the estimate provided by Verizon, and assuming the average test taker is within 3,000 km, we can use an estimate of 70 ms for latency. Since the latency is incurred twice (the timer begins the moment the signal is sent to the user), 140 ms is simply too much to subtract, so there would have to be automatic correction, which would make estimating the true latency more difficult, since many users use VPNs that bias any correction up or down. To be conservative, we use a gross one-way latency of 20 ms. Upon further analysis, using a VPN with an IP in New York just a short distance from the server, a latency-adjustment program (if it existed!) would add little correction, as the latency would be less than a few milliseconds. The results show no change in reaction time upon changing location, indicating that no such mechanism exists, contrary to our first thought. If no latency correction exists, then modern reaction times could theoretically be as low as about 140 ms (note: this is close to the real number, so our blind estimate was pretty good!). The latency of LED computer monitors varies widely. For example, the LG 32ML600M, a mid-range LED monitor, has an input lag of 20 ms; this monitor was chosen randomly, is assumed to be reasonably representative of the monitors used by the 81 million users of the online test, and is the one used in the later study. Using the software program HTML/JavaScript mouse input performance tests, we measure a latency of 17 ms for a standard computer mouse. The total latency (monitor, mouse, and internet at 20 ms) comes to 57 ms. From the Humanbenchmark dataset, the median reaction time was 274 milliseconds, yielding a net reaction time of 217 milliseconds, roughly 10 milliseconds slower than Galton’s adjusted numbers provided by Woods et al.
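The latency arithmetic above can be reduced to a quick back-of-envelope model. The constants here are my assumptions, not figures from the text: light propagates at roughly two-thirds of c in optical fiber, and real routes are taken to be about 1.75 times the great-circle distance.

```python
# Back-of-envelope latency model for the online reaction-time argument.
# Assumptions (mine): ~2/3 c propagation in fiber, 1.75x routing detour.
C_KM_PER_MS = 299.792      # speed of light in vacuum, km per millisecond
FIBER_FACTOR = 2 / 3       # propagation speed in fiber relative to c
ROUTE_FACTOR = 1.75        # detour multiplier for zig-zagging routes

def one_way_latency_ms(distance_km: float) -> float:
    """Minimum one-way propagation delay over fiber, ignoring queuing."""
    return distance_km * ROUTE_FACTOR / (C_KM_PER_MS * FIBER_FACTOR)

def round_trip_penalty_ms(distance_km: float) -> float:
    """Latency added to an online reaction test: stimulus out + click back."""
    return 2 * one_way_latency_ms(distance_km)

print(one_way_latency_ms(4500))     # California to New Jersey, one way
print(round_trip_penalty_ms(3000))  # average test taker at 3,000 km
```

Even under these assumptions, a test taker a few thousand kilometers from the server accrues tens of milliseconds of round-trip penalty, which is the crux of the argument.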
Bruce Charlton has created a conversion system in which 1 IQ point equals 3 ms. This assumes a modern reaction time of 250 ms with a standard deviation of 47 ms. This simple but elegant method for converting reaction time into IQ is purely linear; it assumes no change in the correlation at different levels of IQ. Under this assumption, 10 ms equates to 3.3 IQ points, remarkably similar to Piffer’s estimate.

“The mean SRT latencies of 231 ms obtained in the current study were substantially shorter than those reported in most previous computerized SRT studies (Table 1). When corrected for the hardware delays associated with the video display and mouse response (17.8 ms), “true” SRTs in Experiment 1 ranged from 200 ms in the youngest subject group to 222 ms in the oldest, i.e., 15–30 ms above the SRT latencies reported by Galton for subjects of similar age (Johnson et al., 1985). However, based on Galton’s notebooks, Dordonova and Dordonov (2013) argued that Galton recorded the shortest-latency SRT obtained out of three independent trials per subject. Assuming a trial-to-trial SRT variance of 50 ms (see Table 1), Galton’s reported single-trial SRT latencies would be 35–43 ms below the mean SRT latencies predicted for the same subjects; i.e., the mean SRT latencies observed in Experiment 1 would be slightly less than the mean SRT latencies predicted for Galton’s subjects. Therefore, in contrast to the suggestions of Woodley et al. (2013), we found no evidence of slowed processing speed in contemporary populations.”

They go on to say: “When measured with high-precision computer hardware and software, SRTs were obtained with short latencies (ca. 235 ms) that were similar across two large subject populations. When corrected for hardware and software delays, SRT latencies in young subjects were similar to those estimated from Galton’s historical studies, and provided no evidence of slowed processing speed in modern populations.”.

What the authors are saying is that, correcting for device lag, there is no appreciable difference in simple RT between Galton’s sample and modern ones. Dodonova and Dodonov claimed that Galton did not use means in computing his samples. They constructed a pendulum apparatus similar to Galton’s to ascertain its accuracy, and concluded it would have been a highly accurate device, devoid of the latencies that plague modern digital systems: “What is obvious from this illustration is that RTs obtained by the computer are by a few tens of milliseconds longer than those obtained by the pendulum-based apparatus”.

They go on to say: “it is very unlikely that Galton’s apparatus suffered from a problem of such a delay. Galton’s system was entirely mechanical in nature, which means that arranging a simple system of levers could help to make a response key very short in its descent distance”.


There are two interpretations available to us. The first is that no decline whatsoever took place. If reaction time is to be used as the sole proxy for g, then it appears, according to Dodonova and Woods, who provide a compelling argument which I confirmed using data from mass online testing, that no statistically significant increase in RT has transpired.

Considering the extensive literature showing negative fertility patterns on g (general intelligence), it seems implausible that no decline has occurred, but any decline may have been masked by increases in IQ caused by outbreeding (heterosis, or hybrid vigor). People in small villages in the past would have been confined to marrying each other, causing reduced genetic diversity, which is known to lower IQ, the extreme case being inbreeding in Muslim populations.

While we do not argue, as Mingroni did, that the Flynn effect is entirely due to heterosis (outbreeding), it is conceivable that populations boosted their fitness by reducing the extent to which they mated within small social circles, for example villages and rural towns. We know for certain that consanguineous marriage severely depresses intelligence, and that this depression tends to be a Jensen effect (the magnitude of the effect is strongest where the g loading is highest), so heterosis is a valid theory worthy of serious consideration. In the age of the 747, it is easier than ever for an Italian to mate with a Swede, increasing genetic diversity, amplifying variance, and producing more desirable phenotypes. On the other hand, there is ample evidence that mixed-race offspring of genetically distant populations (such as African-European or East Asian-European pairings) have higher rates of mental illness and general psychological distress than controls. But this should not be seen as a falsification of the heterosis theory: a certain threshold of genetic distance is beneficial, and if that threshold is exceeded, the opposite effect can take place. This is the principle of “hormesis”. Almost all biological phenomena follow a hormesis principle; why should genetics be exempt from this law? The Swedish geneticist Gunnar Dahlberg first proposed, in 1944, that outbreeding caused by the breakdown of small isolated villages could raise intelligence. “Panmixia” is the term for random mating. The Flynn effect heritability paradox is not confined to intelligence: Michael Mingroni has compiled evidence on height, asthma, myopia, head circumference, head breadth, ADHD, autism, and age at menarche, all of which have high heritabilities, as high as 0.8 if not 0.9 for height, yet show large secular changes that defy the breeder’s equation.
In other words, selective or differential fertility cannot have changed allele frequencies fast enough to explain the rapid secular changes in these phenotypes. Heterosis may operate on the principle of directional dominance, where dominant alleles push the trait in one direction, say downward in this case, and recessive alleles push it upward. One could theorize that a myriad of recessive but antagonistic alleles that reduce height, IQ, and head size decreased in effective frequency as heterosis increased during the 20th century. This interpretation is highly compatible with Kondrashov’s theory of sexual mutation purging. Anyone who challenges the power of heterosis should talk to a plant breeder; granted, humans have different genetic architectures, but not so different that the principle does not apply.

In light of the findings from the photographic measurement method, it appears that any decline is so subtle as not to be picked up by RT: the signal is weak in an environment of high noise. In an interview with the intelligence blogger “Pumpkin Person”, Davide Piffer argues that based on his extensive computation of polygenic data, IQ has fallen 3 points per century:

“I computed the decline based on the paper by Abdellaoui on British [Education Attainment] PGS and social stratification and it’s about 0.3 points per decade, so about 3 points over a century.

It’s not necessarily the case that IQ PGS declined more than the EA PGS..if anything, the latter was declining more because dysgenics on IQ is mainly via education so I think 3 points per century is a solid estimate”

Since Galton’s 1889 study, Western populations may have lost 3.9 points, though even this is uncertain. If the number is correct, it is interesting to observe how close it is to the IQ difference between Europeans and East Asians, who average 104–105, compared to 100 for Northern Europeans and 95 for Southern, Central, and Eastern Europeans. East Asia industrialized only very recently, China only in the 1980s, so the window for dysgenics to operate has been very narrow. Japan has been industrialized for longer, since the turn of the century, so pre-industrial selection pressures would likely have relaxed earlier; this presents a paradox, since Japan’s IQ appears as high as, if not higher than, that of China and South Korea. Of course, this is only rough inference: these populations are genetically different, albeit to a minor degree, but still different enough to matter for psychometric comparisons. Southern China has greater Australasian/Malay admixture, which reduces its average compared to Northern China. For all intents and purposes, East Asian IQ has remained remarkably steady at 105, suggesting an “apogee” of IQ that can be reached in pre-industrial populations. Using indirect markers of g, we know that East Asians have larger brains, slower life history speeds, and faster visual processing speeds than whites, corresponding to an ecology of harsh climate (colder winter temperatures than Europe; Nyborg 2003). If any population reached a climax of intelligence, it would likely have been Northeast Asians. So did Europe feature unique selective pressures?

Unlikely. If one uses a model of “Clarkian selection” (Gregory Clark, The Son Also Rises) of downward mobility, Unz documented a similar process in East Asia. Additionally, plagues, climatic disruptions, and mini ice ages afflicted the populations of East Asia as often as, if not more often than, those of Europe. It is plausible that group selection in East Asia was markedly weaker, since inter-group conflict was less frequent: China has historically been geographically unified, with major wars between groups rare compared to Europe’s geographic disunity and practically constant inter-group conflict. But East Asia also includes Japan, which shows all the markers of strong group selection: high ethnocentrism, conformity, in-group loyalty and sacrifice, and a very strong honor culture. If genius were a product of strong group selection, warring tribes being strongly rewarded by genius contributions in weaponry and the like, then one would expect genius to track group selection, which appears not to be the case: Europeans show lower ethnocentrism and group-selected traits than Northeast Asians on almost all metrics, according to Dutton’s research, which refuted some of Rushton’s contradictory findings. A usual argument in the HBD (human biodiversity) community, espoused mainly by Dutton, is that the harsh ecology of Northeast Asia, with its frigidly cold winters, pushed the population into a regime of stabilizing selection (selection that reduces genetic variance), resulting in lower frequencies of outlier individuals. But no genetic or trait analysis has been performed to compare the degree of variance in key traits such as g, personality, or brain size. What is needed is a global study of the coefficients of additive genetic variation (CVA) to ascertain the degree of historical stabilizing versus disruptive selection.
Genius has also been argued to be under negative frequency-dependent selection, where the trait is fitness-salient only so long as it remains rare; there is little reason to believe genius falls into this category. High cognitive ability would be universally under selection, and outlier abilities would simply follow that weak directional selection. The exception would be insofar as Dutton is correct that genius comes with fitness-reducing baggage, such as a bizarre or deviant personality or general antisocial tendencies; this has been argued repeatedly but never conclusively demonstrated. The last remaining theory is the androgen-mediated genius hypothesis. If one correlates per capita Nobel prizes with rates of left-handedness as a proxy for testosterone, or with national differences in testosterone directly (I do not believe Dutton did that), then, analyzing only countries with a minimum IQ of 90, testosterone correlates more strongly than IQ, since the extremely low per capita Nobel prize rates in Northeast Asia cause the IQ correlation to collapse.

To be generous to the possibility that Victorian IQ was markedly higher, we run a basic analysis to estimate the current and historical frequency of outlier levels of IQ, assuming a Victorian IQ of 112.

We use the example of the British Isles for this simple experiment. In 1700, the population of England and Wales was 5,200,000. Two decades into this century, the population had increased to 42,000,000, excluding immigrants and non-English natives. Charlton and Woodley infer a loss of 1 SD from 1850 onward; we use a more conservative estimate of 0.8 SD above the modern mean as the pre-industrial peak.

This would mean the England of 1700 would have produced 163,000 individuals with cognitive ability of 140 or above, from a mean of 112 and an SD of 15. For today’s population, we assume the variance has increased slightly due to greater genetic diversity and stronger assortative mating, so we use a slightly higher SD of 15.5 with a mean of 100. From today’s white British population of 42,000,000, there are 205,000 individuals 2.6 SD above the current Greenwich IQ mean. If we assume there has been no increase in variance, which is unlikely considering the increase in genetic diversity afforded by an expanding population providing room for more mutation, the number is 168,000.
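These tail counts can be checked with Python’s stdlib NormalDist. The computed figures land within a few percent of those quoted above; the small discrepancies presumably reflect rounding in the original estimates.

```python
from statistics import NormalDist  # Python 3.8+ standard library

def count_above(pop: int, mean: float, sd: float, cutoff: float) -> int:
    """Expected number of people above an IQ cutoff, assuming normality."""
    tail = 1 - NormalDist(mean, sd).cdf(cutoff)
    return round(pop * tail)

print(count_above(5_200_000, 112, 15.0, 140))   # 1700 England, mean 112
print(count_above(42_000_000, 100, 15.5, 140))  # modern white British, SD 15.5
print(count_above(42_000_000, 100, 15.0, 140))  # modern, unchanged SD
```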

Three themes can be inferred from this very crude estimate.

The total number of individuals with extremely high cognitive ability may very well have fallen as a percentage, but the absolute number has remained remarkably steady once the substantial increase in population is accounted for. So declining reaction time, even if it did occur (it did not), cannot account for the decline in invention and scientific discovery since the Victorian era argued by Woodley.

Secondly, this indicates that high IQ in today’s context may mean something very different from high IQ in a pre-industrial setting, since this pool of individuals is not producing shocking genius that is changing the world (otherwise you would have heard of them!).

Thirdly, the global population of high-IQ individuals is extraordinarily large, strongly indicating that pre-industrial Europeans, and especially the English, possessed traits not measurable by IQ alone which accounted for their prodigious creative abilities; this was likely confined to Western European populations and did not extend to Eastern Europe for unknown reasons. There is no reason to believe this enigmatic, unnamed trait was not normally distributed; if it followed a similar pattern to standard g, today’s population would produce fewer such individuals as a ratio, but at an aggregate level the total number would remain steady. With the massive populations of Asia, primarily India and China, a rough estimate based on Lynn’s IQ figures gives around 13,500,000 individuals in China with an IQ of 140 or above, based on a mean of 105 and an SD of 15. There is no evidence that East Asian SDs are smaller than European ones, as claimed by many in the informal HBD community. While China excels in fields like telecommunications, mathematics, artificial intelligence, and advanced manufacturing (high-speed rail, etc.), there has been little in the way of major breakthrough innovation on par with pre-modern European genius, especially in theoretical science, despite a massive numerical advantage: 85 times more such individuals than in 1700 England. In fact, most of the evidence suggests China is still heavily reliant on stealing Western technology, or at least has been since its recent industrialization. Genius (defined as unique creative ability in art, technical endeavors, or pure science and mathematics) is thus a specialized ability not captured by IQ tests. It seems genius is enabled by g in some form of synergistic epistasis, where genius is “activated” above a certain threshold of IQ in the presence of one or more unrelated and unknown cognitive traits, often claimed to be a cluster of unique personality traits, although this model has yet to be proven.
India has a much lower mean IQ of 76 in David Becker’s dataset. India’s ethnic and caste diversity would strongly favor a larger SD, but for the sake of this estimate we use an SD of 16. This leaves 41,000 individuals in India above the same cutoff, a number that does not reconcile with the number of high-IQ individuals India is actually producing, so either the mean of 76 is far too low, or the SD must be far higher. Even so, none of these individuals are displaying extraordinary abilities closely comparable to genius in pre-modern Europe, indicating either that there are ethnic differences in creative potential, or that IQ alone fails to capture these abilities. Indian populations are classified as closer to Caucasoid according to genetic ancestry modeling, which allows us to speculate as to whether they are also closer to Caucasoids in personality traits, novelty-seeking, risk-taking, androgen profiles, and assorted other traits that contribute to genius (Dutton and Kura, 2016).
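The same tail-count arithmetic applies to the Chinese and Indian estimates. A sketch using the stdlib NormalDist, with assumed populations of roughly 1.4 billion for China and 1.3 billion for India (the text does not state which population figures it used):

```python
from statistics import NormalDist

def count_above(pop: int, mean: float, sd: float, cutoff: float) -> int:
    """Expected count above a cutoff under a normal distribution of IQ."""
    tail = 1 - NormalDist(mean, sd).cdf(cutoff)
    return round(pop * tail)

# Assumed populations: ~1.4e9 (China) and ~1.3e9 (India), mine, not the text's
china = count_above(1_400_000_000, 105, 15, 140)
india = count_above(1_300_000_000, 76, 16, 140)
print(china, india)
```

With these assumptions the China figure comes out near the quoted 13.5 million and the India figure near the quoted 41,000, so the numbers in the text are internally consistent with a normal model.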

Despite Europe’s prodigious achievements in technology and science, which have remained unsurpassed by comparably intelligent civilizations, ancient China did muster some remarkable achievements. Lynn says: “One of the most perplexing problems for our theory is why the peoples of East Asia with their high IQs lagged behind the European peoples in economic growth and development until the second half of the twentieth century.” Until more parsimonious models of the origin of creativity and genius are developed, rough “historiometric” analysis using RT as the sole proxy may be of limited use. Figueredo and Woodley developed a diachronic lexicographic model using high-order words as another proxy for g. The one issue with this model is that it may simply be measuring a natural process of language simplification over time, which may reflect an increasing emphasis on speed of information delivery rather than pure accuracy. It is logical to assume that in a modern setting, where information density and speed of dissemination are extremely important, a smaller number of simpler words are more frequently used (Zipf’s law). Additionally, the fact that far fewer individuals, likely only those of the highest status, engaged in writing in pre-modern times should not be overlooked. Most of the population would not have had the leisure time to engage in writing, whereas in modern times the nature of written text reflects the palatability of a more simplistic style catering to the masses. Moreover, only 5% of the population in Europe attended university in the early 20th century, so ability levels among writers would have been much higher on average than today, and “high-order word” usage may therefore not be a useful indicator.


Forbes, G. (1945). The effect of certain variables on visual and auditory reaction times. Journal of Experimental Psychology.

Woods et al. (2015). Factors influencing the latency of simple reaction time. Frontiers in Human Neuroscience.

Dodonova et al. (2013). Is there any evidence of historical slowing of reaction time? No, unless we compare apples and oranges. Intelligence.

Woodley and te Nijenhuis (2013). Were the Victorians cleverer than us? The decline in general intelligence estimated from a meta-analysis of the slowing of simple reaction time. Intelligence.

Detailed statistics

155 157 154 191 157 164 151 173 158 134 179 152 172 176 163 139 155 182 166 169 179 155 152 169 205 170 149 143 170 142 143 149 174 130 149 139 142 170 127 131 152 127 136 124 125 157 149 127 124 139 158 149 130 149 136 155 143 145 185 152 105 152 130 139 139 140 130 152 166 158 134 142 128 140 155 127 131 139 145 146 139 127 152 145 142 140 143 112 182 185 133 133 130 145 154 158 152 161 152 173 134 145 133 139 148 152 173 158 176 151 181 155 176 149 157 163 167 143 160 145 200 182 140 155 154 148 140 173 173 152 142 143 127 136 164 139 133 145 146 142 149 140 142 124 151 182 166 133 170 152 164 181 121 170 185 164 133 133 149 146 149 119 188 154 150 146 143 151 173 152 160 157 167 148 145 140 155 182 139 166 163 152 170 169 149 136 155 167 154 179 148 155 124 170 134 155 151 181 146 130 173 194 140 131 149 172 182 149 161 155 151 167 157 151 143 142 169 163 136 157 164 133 131 173 133 151 133 143 160 139 157 164 130 131 173 133 151 133 143 152 149 157 142 139 164 136 142 158 145 155 130 166 136 148 133 161 134 145 151 173 146 142 152 166 158 151 173 148 161 172 143 130 148 155 163 142 176 164 173 166 160 142 133 124 152 137 170 142 133 118 152 145 124 151 130 137 157 157 164 155 149 136 137 131 161 142 143 148 115 161 148 167 151 130 139 154 142 149 143

All reaction times recorded

Standard deviation (σ) = 16.266789

Variance (σ²) = 264.60843

Count (n) = 319

Mean = 150.81505

Sum of squares (SS) = 84410.088
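The reported summary statistics are internally consistent, which can be verified directly. The checks below use the population-variance convention (σ² = SS/n), which is evidently what was used here:

```python
import math

# Reported summary statistics for the 319 recorded reaction times above
sd, variance, count, mean, ss = 16.266789, 264.60843, 319, 150.81505, 84410.088

# SD should be the square root of the variance
assert math.isclose(sd ** 2, variance, rel_tol=1e-5)
# Population convention: sum of squares = n * variance
assert math.isclose(variance * count, ss, rel_tol=1e-5)

print(round(mean * count, 1))  # implied sum of all 319 reaction times
```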

Anhydrous ammonia prices rise to nearly $730/ton in July


Anhydrous ammonia (NH3) prices continue to rise. The increase is roughly commensurate with the uptick in natural gas prices to $3.6/1000 ft3, a high not seen since 2018 (excluding the momentary jump in February caused by an aperiodic cold event in Texas). If oil reaches a sustained period of $100+, natural gas will follow its usual ratio with oil, sending anhydrous ammonia well above $800/ton, likely into the $900 range. Pochari Technologies’ process-intensified ammonia system will prove exceedingly more competitive in this future peak-hydrocarbon environment. The beauty of this technology is that instead of depending on an inherently volatile commodity (natural gas), which is for the most part an exhaustible resource and hence subject to a gradual increase in price over time, Pochari Technologies relies only on polysilicon as a commodity, which will continue to fall in price with increased production, since silica is effectively inexhaustible, making up the majority of the earth’s crust. Note that according to the USDA statistics, there are effectively no sellers offering prices below $700/ton, so the standard deviation (SD) is very small. This means it is unlikely that even savvy farmer-buyers can snatch up good deals.

Reduced CAPEX alkaline electrolyzers using commercial-off-the-shelf component (COTS) design philosophy.


Dramatically reducing the cost of alkaline water electrolyzers using high surface area mesh electrodes, commercial off-the-shelf components, and non-Zirfon diaphragm separators.

Christophe Pochari, Christophe Pochari Energietechnik, Bodega Bay California

The image below is a CAD model of a classic commercial alkaline electrolyzer design: large, heavy, industrial-scale units. The heavy use of steel for the endplates and tie rods increases the cost of the stack considerably. This architecture is bulky and expensive, with high exclusivity in its design and engineering.



An example of the excessively elaborate plumbing, alkali water feed, and recirculation system. Christophe Pochari Energietechnik has simplified this circuitous and messy ancillary system and made it more compact using thermoplastics and design parsimony.


Christophe Pochari Energietechnik’s thermoplastic lightweight modular quasi stack-tank electrolyzer cell. Our design eliminates the need for heavy end plates and tie rods, since each electrode is an autonomous stand-alone module. Each electrode module comprises a hydrogen and an oxygen section sandwiching the diaphragm sheet placed in the center of the two plastic capsules; each capsule contains its respective electrode. The cell frame/electrode box is made by injection molding, an ultra-low-cost technology at volume. This cell design is highly scalable, modular, convenient, and extremely easy to manufacture, assemble, and transport. The culmination of this engineering effort is a dramatic reduction in CAPEX, where material cost, rather than intricate manufacturing and laborious assembly, dominates the cost structure. One of the central innovations that makes our design stand out is the use of active polarity reversal. A series of valves placed on the oxygen and hydrogen outlets allows the anode to be charged as a cathode and vice versa every sixty seconds. This effectively halts the buildup of an oxide layer on either the nickel or iron catalyst. Because iron is very close to nickel in its catalytic activity for the hydrogen evolution reaction, only a very small performance penalty is incurred when switching from nickel to iron.
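The valve-and-polarity switching scheme can be sketched as a simple control loop. This is purely illustrative: the function names and controller structure are hypothetical placeholders, not a driver for the actual hardware, whose real implementation would energize the DC supply and actuate the gas valves.

```python
# Illustrative sketch of the active polarity-reversal scheme described above.
# All names are hypothetical; a real controller would drive actual hardware.

REVERSAL_PERIOD_S = 60  # polarity flip interval stated in the text

def set_polarity(forward: bool) -> None:
    """Placeholder: drive the DC supply in the chosen direction."""

def route_gas_valves(forward: bool) -> None:
    """Placeholder: swap the H2/O2 outlet valves to match the polarity."""

def run_reversal_loop(cycles: int) -> list:
    """Alternate polarity each period; returns the polarity history.
    (A real loop would sleep REVERSAL_PERIOD_S between iterations.)"""
    history, forward = [], True
    for _ in range(cycles):
        set_polarity(forward)
        route_gas_valves(forward)
        history.append(forward)
        forward = not forward
    return history

print(run_reversal_loop(4))
```

The point of the alternation is that neither electrode spends long enough as an anode for an oxide layer to accumulate.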


20 centimeter diameter COTS electrolyzer stack.

A component breakdown for the core stack, excluding ancillary equipment. The estimate includes material costs only; labor can be factored in later and adjusted for local wage differences.

Anode: plasma-sprayed nickel mesh or sheet, 4.4 kg/m2 at 8,000 W/m2 (4,000 W/m2 total current density): $13.7/kW

Cathode: Carbon steel sheet 4 kg/m2: $2/kW

Plastic electrode module and partition frame: $5/kW

Hydrogen oxygen separator: 200-micron polyethersulfone sheet $11/m2: $2.75/kW

EPDM gaskets: $1.5/kW

Total with nickel electrodes: $25/kW

Total with carbon steel electrodes: $11/kW

For electrolyzers to cost significantly above $100/kW, one would need exotic materials, extremely low-productivity manufacturing, an inordinate amount of material beyond what is absolutely necessary (ancillary systems), or an extremely low current density. A typical 12-volt, 100-amp-hour lead-acid battery retails for about $70 on Alibaba, or about $60/kWh. An alkaline electrolyzer should be manufacturable at the same cost as a lead-acid battery and no more. Since nickel is worth roughly $20–25 per kilogram under normal market conditions, and the rest of the electrolyzer is made of very cheap steel and plastic, we can basically exclude the rest of the system for the sake of simplicity. Since we are using 4.4 kilograms of nickel for 8 kilowatts of output, the price per kilowatt of the single most expensive component of the stack is $11/kW. We have designed our electrolyzer stacks to be 12 inches in diameter and four feet long, weighing approximately 200 lbs for 30 kilowatts; the stack is easily moved with a dolly and connects into a bank of as many electrolyzers as necessary with convenient flexible chlorinated polyvinyl chloride plumbing for caustic water and hydrogen/oxygen. For engineering simplicity, the stack operates at 1 bar and 90°C and would use about 50 kWh/kg at 250 milliamps/cm2. Christophe Pochari Energietechnik is also developing a tank-type electrolyzer design as a viable alternative to the classic filter-press architecture.
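The anode cost figures can be cross-checked directly: at the quoted 4.4 kg/m2 of nickel and 8 kW/m2 of cell area, the per-kW cost follows from the assumed nickel price ($25/kg reproduces the $13.7/kW figure, $20/kg the $11/kW figure).

```python
# Cross-check of the quoted electrode costs. Inputs from the text:
# 4.4 kg of nickel per m2 of electrode, 8 kW of output per m2 of cell
# area, nickel at $20-25/kg depending on market conditions.

def cost_per_kw(kg_per_m2: float, usd_per_kg: float, kw_per_m2: float = 8.0) -> float:
    """Electrode material cost in $/kW of stack output."""
    return kg_per_m2 * usd_per_kg / kw_per_m2

print(cost_per_kw(4.4, 25))  # nickel at $25/kg -> the $13.7/kW figure
print(cost_per_kw(4.4, 20))  # nickel at $20/kg -> the $11/kW figure
```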

The nearly two-century-old technology of alkaline water decomposition is ripe for dramatic cost reduction through high-throughput manufacturing and Chinese production. Current alkaline electrolyzer technology is expensive well beyond what bare material costs would predict, attributable mainly to a production regime of inordinate customization, procurement of specialized subcomponents from niche suppliers, minuscule production volumes, a noncompetitive market with a small number of big players, and high-cost labor. A further contributor to the uncompetitive CAPEX of this low-tech, old technology is that the ancillary and plumbing components of the electrolyzer module use metallic piping and tankage, usually stainless steel or even nickel, rather than cheap thermoplastics. Another is the choice of a very large stack size, in both diameter and length, which makes manufacturing and transportation that much more challenging and costly. The current manufacturing process for very long electrolyzer stacks requires adjustable scaffolding or a variable-height underground basement with a hydraulic stand, so that the filter-press stack can be built up while workers stand at floor level. Some electrolyzer stacks are as long as 20 feet and can weigh multiple tons, requiring cranes or hoists to move around the factory. The massive multi-ton stacks are then bolted down at the endplates, lifted out of their vertical assembly position, and transported by truck to a site that will require a crane for installation as well. These respective handicaps impose surfeit costs on a technology that is otherwise made up of relatively low-cost raw materials and crudely fabricated components with low-precision/tolerance manufacturing.
Christophe Pochari Energietechnik’s researchers have thus compiled a plethora of superior design options and solutions, using a strategy of consecutive elimination, to finally bring to market affordable hydrogen generators fabricated from readily available high-quality components, raw materials, and equipment, delivered as small kits ready to be assembled for use with our novel miniature ammonia plant technology. All of the parts are lightweight, can be lifted by a single person, and assemble with common household tools. Our electrolyzers do not exceed 50 kW in size: since our ammonia plants feed off wind and photovoltaics, which generate intermittent current, sundry small electrolyzers are paired up to form a homogenous system, allowing them to be consecutively shut off and on depending on prevailing electrical output, rather than having the power output of individual stacks modulated, which reduces their efficiency. Alkaline electrolyzers require a polarization protection current of around 40–100 amps/m2 during non-operation to mitigate corrosion of the cathode, which would otherwise occur. An alternative to a polarization current is simply draining the electrolyte out of the stack, but this adds hassle. Most commercial alkaline electrolyzers in operation today can fluctuate power output by as much as 125% within a 1-second interval, making it possible to integrate them with wind turbines. During very low-load operation, hydrogen is prone to mix with oxygen by diffusing through the separator membrane when the gas residence time is very high. For this reason, it is best to operate the electrolyzers at their rated capacity, namely by using our strategy of stacking banks of relatively small units that can be readily shut off and on, rather than throttling a single large stack.

Compared to a state-of-the-art lithium-ion battery or a chlor-alkali diaphragm cell, an alkaline cell is an extremely simple and elegant system, consisting of only four major components, each requiring minimal custom fabrication. Alkaline cells, or any electrolyzer for that matter, come in two basic “architectures”. The most common is the so-called “bipolar” electrolyzer, where current flows from positive to negative through the ends of the stack. The electrolyte serves as the conductor: positive current flows from one end plate until reaching the negative at the opposing endplate, so each intermediate electrode is positive on one face and negative on the other. Industrial-scale alkaline electrolyzers have existed for well over a century, with most old-fashioned designs constructed entirely out of iron or steel, corrosion being mitigated not through high-end materials but by frequent electrode replacement or polarity reversal (to cancel corrosion altogether). In 1789, Adriaan Paets van Troostwijk decomposed water using a gold electrode. The first large-scale use of alkaline electrolysis was in Rjukan, Norway, where large banks of electrolyzers fed off cheap hydropower installations. The Rjukan electrolyzers employed the “Pechkranz electrode”, invented by Rodolphe Pechkranz and patented in Switzerland in 1927, constructed from thick sheets of iron with the anode electroplated with nickel. It is claimed that the current densities of the Rjukan electrolyzers approached 5,500 watts/m2. Most alkaline electrolyzers built before the 1980s used chrysotile asbestos diaphragms.

Prior to the development of the bipolar electrolyzer, most early late-19th-century designs made use of liquid-containing cylinders and submerged electrodes, what is called a “tank type” or “trough” electrolyzer. The electrolyte was contained in a cylindrical vessel, and metal electrodes were suspended from the top. The first modern “bipolar” electrolyzer was devised by a Russian named Dimitri Latschinoff (also spelled Latchinoff) in Petrograd in 1888; his cell had a current density ranging from 0.35 to 1.4 amp/m2 and used 10% caustic soda. After Latschinoff, a design very close to the modern “filter-press” type was developed by O. Schmidt in 1889. Because the Schmidt electrolyzer used potassium carbonate rather than caustic potash, the electrode corroded at only 1 millimeter per year. In 1902, Maschinenfabrik Oerlikon commercialized the Schmidt bipolar electrolyzer, which forms the basis for all modern water electrolyzers. The Schmidt design, pictured below, used a cell voltage of 2.5 volts and generated a hydrogen purity of 99%. The Schmidt electrolyzer generated 2750 liters of hydrogen per hour using 16.5 kilowatts, or 67.34 kWh/kg, an efficiency of 58.5% of the higher heating value. Most early filter-press electrolyzers used rubber-bound asbestos diaphragms.
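The Schmidt figures can be sanity-checked in a few lines. This is a sketch assuming a hydrogen density of 0.0899 kg/Nm3 and heating values of 39.4 (HHV) and 33.3 (LHV) kWh/kg; it shows why the quoted 58.5% must refer to the higher heating value.

```python
# Sketch: verifying the quoted Schmidt electrolyzer efficiency.
H2_DENSITY = 0.0899    # kg per normal m3 (assumed)
HHV, LHV = 39.4, 33.3  # kWh per kg of H2 (assumed)

h2_kg_per_h = 2.750 * H2_DENSITY   # 2750 L/h of hydrogen -> ~0.247 kg/h
kwh_per_kg = 16.5 / h2_kg_per_h    # ~67 kWh/kg, matching the quoted figure
eff_hhv = HHV / kwh_per_kg         # ~0.59, i.e. the quoted 58.5%
eff_lhv = LHV / kwh_per_kg         # ~0.50, so the 58.5% cannot be LHV-based
```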

[Figure]

From: The Electrolysis of Water, Processes and Applications By Viktor Engelhardt · 1904

[Figures]

A filter-press electrolyzer manufactured by National Electrolizer in 1916 is pictured, alongside one made by International Oxygen Company; a third picture shows the asbestos diaphragm peeled back in front of the steel electrode. It is claimed in the source that the nickel-plated steel electrodes were “virtually indestructible”. The pictures are taken from the trade journal “Boiler Maker”, Volume 16, 1916. These electrolyzers were used exclusively to supply hydrogen for welding and cutting of metal. It wasn’t until Rjukan (Norsk Hydro) that this technology first saw use for energetic applications. The Norwegian electrolyzers were also used to electrolyze heavy water for the production of deuterium.

[Figure]

The cost of electrolyzer designs from 1904: notice the Schmidt filter-press type cost $182/kW for a 10 kW unit, equal to just under $6,000/kW in 2022 dollars. Prices have declined dramatically since 1904, thanks to more productive labor and manufacturing, greater global production of nickel and steel, and more efficient fabrication and machining.

The monopolar electrolyzer energizes each electrode individually with a “rack” or bus bar. This design is rarely used. Bipolar systems are also called “filter press” electrolyzers, while monopolar systems are called “tank type” electrolyzers.

While neither of these designs differs by a significant margin in performance, the bipolar architecture is considered the more “proven” design and forms the basis of all modern electrolysis technology. The only real disadvantage of the monopolar design is the need for very high-current bus bars. Since a bipolar stack operates at a voltage equal to the sum of the individual cell voltages, the current required is greatly reduced, placing less demand on the electrical power supply. For example, at a cell voltage of 2, a hundred cells in series allow a stack voltage of 200 to be used, while a monopolar system must deliver two volts across every electrode pair at whatever current is required to provide the power, which increases electrical losses and generates more heat. The bipolar design is the architecture used in this analysis.
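The bus-current argument can be made concrete with a minimal sketch, assuming a hypothetical 100 kW stack of one hundred 2 V cells:

```python
# Sketch: supply current for bipolar (series) vs monopolar (parallel) wiring.
power_w = 100_000   # hypothetical 100 kW stack
v_cell = 2.0        # volts per cell
n_cells = 100

i_bipolar = power_w / (v_cell * n_cells)  # 200 V bus -> 500 A
i_monopolar = power_w / v_cell            # 2 V bus -> 50,000 A
```

The hundredfold higher current of the monopolar layout is what drives up conductor mass and resistive losses.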

[Figure]

In order to separate the hydrogen from oxygen, a separation or partition plate is used on each electrode, channeling the separated gases into their respective vent holes. The Pochari design differs insofar as the circumferential frame is replaced with plastic, and the design is square rather than round, to make more efficient use of space.


Tank type electrolyzer module

The electrolyzer system is comprised of the stack and the ancillary equipment, which consists of caustic solution storage tanks, pumps, and the hydrogen and oxygen plumbing system. In current alkaline systems marketed by the established players, elaborate plumbing systems are constructed from nickel alloys. To save cost, rather than constructing these components out of stainless steel or nickel alloys, they can be made out of high-temperature plastics, which show excellent resistance to caustic solutions. Christophe Pochari Energietechnik is studying how thermoplastics that withstand moderate temperatures can be used instead to dramatically lower CAPEX. Semi-crystalline candidates: PEK, PEEK, PPS (polyphenylene sulfide), and PA (polyamide) 11/12. Amorphous candidates: PAI (polyamide-imide), PPSU (polyphenylsulfone), PSU (polysulfone), and PES (polyethersulfone). Most of these thermoplastics have a density below 2 grams/cm3 and can handle temperatures over 100 Celsius. Polyphenylsulfone (130 MPa compressive strength), able to operate as high as 150 Celsius, has a density of only 1.3 grams/cm3; at a retail price of $20/kg, it is nearly 7 times cheaper than nickel on a volumetric basis, with equal alkalinity tolerance.
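The "nearly 7 times cheaper" claim is volumetric, which the following sketch makes explicit. The nickel density of 8.9 g/cm3 and the ~$20/kg nickel price are assumptions carried over from figures used elsewhere in this text.

```python
# Sketch: comparing PPSU with nickel on a cost-per-volume basis.
ppsu_density, ppsu_price = 1.3, 20.0   # g/cm3, $/kg (quoted above)
ni_density, ni_price = 8.9, 20.0       # g/cm3, $/kg (assumed)

cost_per_liter_ppsu = ppsu_density * ppsu_price  # $26 per liter
cost_per_liter_ni = ni_density * ni_price        # $178 per liter
ratio = cost_per_liter_ni / cost_per_liter_ppsu  # ~6.8x, i.e. "nearly 7"
```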

The four components are the following:

#1 Electrodes.

The electrode can consist of any metallic conductive surface: a woven wire mesh, a metallic foam, or a smooth sheet. To achieve the highest performance, a surface morphology featuring a denticulate pattern, formed by plasma spraying Raney nickel onto the metallic substrate, enables a reduction in the “overpotential”, the excess voltage above the reversible cell voltage. In the absence of such a surface finish, a bare metallic surface achieves only minuscule current density.

#2 and #3: Gaskets and diaphragm separators

The gaskets form the seal between the electrode modules, preventing gas and liquid from escaping through the edges. The clamping force of the endplates provides the pressure needed to achieve a strong seal. A gasket made of cheap synthetic rubber (EPDM, etc.) is commonly used; EPDM rubber is extremely cheap, around $2/kg. The diaphragm separator prevents the mixing of hydrogen and oxygen, to avoid potentially catastrophic explosions if the ratio falls within the flammability range of hydrogen, which is 4 to 74% in oxygen. The diaphragm separator is often the single most expensive component after the electrode. The material for fabricating the diaphragm must be resistant to alkaline solutions, able to withstand up to 100°C, and selective enough to separate oxygen and hydrogen while still permitting sufficient ionic conductivity. A number of materials are used, including composites of potassium titanate (K2TiO3) fibers, polytetrafluoroethylene (PTFE, as felt or woven cloth), polyphenylene sulfide coated with zirconium oxide (branded Zirfon), perfluorosulphonic acid, arylene ether, and polysulfone-asbestos composite coatings. Commercial electrolyzers make use of an expensive proprietary brand-name separator sold by the Belgian company Agfa-Gevaert N.V. This high-end separator, called Zirfon Perl, sells for a huge price premium over the cost of bare polyethersulfone, itself a relatively inexpensive plastic that costs around $20/kg in bulk. Many polymers are suitable for constructing separators, such as Teflon® and polypropylene. A commercially available polyethersulfone ultrafiltration membrane, marketed by Pall Corporation as Supor 200, with a pore size of 0.2 um and a thickness of 140 microns, was employed as the separator between the electrodes in an experimental alkaline electrolyzer. Nylon monofilament mesh finer than 600 mesh/inch, or a pore size of 5 microns, can also be used.
Polyethersulfone is ideal due to its small pore size, retaining high H2/O2 selectivity at elevated pressures, and it can handle temperatures up to 130 C. If polyethersulfone proves unsatisfactory (excessive degradation if the temperature exceeds 50 C), Zirfon clones are available on B2B marketplaces for $30/m2 from Shenzhen Maibri Technology Co., Ltd.

#4 Structural endplates:

The fourth component is the “end plates”: heavy-duty metallic or composite flat sheets housing a series of tie rods that tightly compress the stack to maintain sufficient sealing pressure. For higher-pressure systems, up to 30 bar, the endplates encounter significant force. In our incessant effort at CAPEX reduction, we have concluded it is possible to cast the endplates rather than machine them; this can reduce their manufacturing cost by 70% relative to CNC machining, since investment casting is so much more productive. While we do not plan on focusing on a filter-press design, we are still considering developing one as an alternative. Christophe Pochari Energietechnik is also looking into using fiberglass to construct the end plates: at a cost of only $1.5/kg and with tremendous compressive strength, fiberglass is a suitable material, especially for stacks operating with no overpressure, which place little to no load on the end plates.


Unlike PEM technology, noble-mineral intensity in alkaline technology is relatively small. If nickel is to be considered a “noble” metal, then alkaline technology is intermediate to PEM, but it is difficult to place platinum (50,000-ton reserve) and nickel (100-million-ton reserve) in the same category. Nickel is not an abundant element, but it is not rare either: it is approximately the 23rd most abundant element, occurring at 0.0084% of the crust by mass. If electro-mobility is to gain any degree of traction (which has yet to be proven), deep-sea mining to exploit poly-metallic nodules can be undertaken, doubling the current terrestrial reserves of nickel. It is unfortunate that the nascent modular electrolyzer and miniature ammonia industry, which has yet to amount to anything more than a concept, is forced to compete with wasteful lithium-battery manufacturing for the precious element. We can power cars with cheap steel propane tanks filled with anhydrous ammonia, rather than squandering trillions on elaborate “battery packs” that use up precious nickel for the cathodes. Since we are incorrigibly resourceful, we will turn to carbon steel electrodes if market conditions force us to. Nickel prices have been surprisingly stable over time, despite large increases in demand from the stainless steel sector: the market price of nickel has risen only 1.38% a year since 1991. The price of one ton of nickel was $7,100 in 1991, equivalent to $14,700 in 2022 dollars; in January 2022, the spot price reached $22,000/ton. At the time of this writing (June 2021), Russia had not yet invaded Ukraine, so while I could anticipate a potential spike in nickel prices, I could not time it; otherwise, everyone would become a billionaire by speculating on the commodity market, and as far as I know, most people have not had much success at that game.
In spite of the unfortunate development in the nickel market, the electrode cost is still relatively low even at $50,000/ton; it is unlikely the Ukraine invasion will cause nickel to rise this much, but it is possible. It will be important to extensively research carbon steel electrodes if nickel reaches an excessively high price, or to increase current density at the expense of efficiency, which we may be able to do thanks to hydrostatic wind turbine technology.

For an alkaline electrolyzer using a high-surface-area electrode, a nickel mesh loading of under 500 grams/m2 of active electrode surface area is needed to achieve an anode life of 5 or more years, assuming a corrosion rate below 0.25 MPY. With current densities of 500 milliamps/cm2 at 1.7-2 volts achievable at 25-30% KOH concentration, power densities of nearly 10 kW/m2 are realizable. This means a one-megawatt electrolyzer at an efficiency of 75% (45 kWh/kg-H2 LHV) would use 118 square meters of active electrode surface area. Assuming the surface/density ratio of a standard 80×80 mesh, 400 grams of nickel is used per square meter of total exposed mesh-wire area. Thus, a total of 2.25 kg of nickel is needed per kg of hydrogen produced per hour; for a 1-megawatt cell, the nickel would cost only about $1,000 assuming $20/kg. This number is simply doubled if the TBO of the cell is to increase to 10 years, or if the power density of the cell is halved. Christophe Pochari Energietechnik is planning to use carbon-steel or plain iron electrodes to replace nickel in the future, to further reduce CAPEX below $30/kW; our long-term goal is $15/kW, compared to $500 for today's legacy systems from Western manufacturers. Carbon steel exhibited a corrosion rate of 0.66 MPY; while this is significantly above nickel, the cost of iron is $200 per ton (carbon steel is $700/ton) while nickel is $18,000/ton, so despite a corrosion rate at least 3x higher, the material cost is some 25x lower, yielding a net cost advantage of roughly 8.5x for carbon steel. The disadvantage of carbon steel, despite the lower CAPEX, is decreased MTBO (mean time before overhaul). Christophe Pochari Energietechnik has designed the cell to be easy to disassemble to replace corroded electrodes, and we are also actively studying low-corrosion ionic liquids to replace potassium hydroxide.
We are actively testing a 65Mn (0.65% C) carbon steel electrode in 20% KOH at up to 50 C and observing low corrosion rates, confirming previous studies. Christophe Pochari Energietechnik is testing these carbon steel electrodes for 8000 hours to ascertain an exact mass-loss estimate.
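The per-megawatt nickel arithmetic above can be reproduced in a few lines. This is a sketch using the stated assumptions: 45 kWh/kg-H2, ~8.5 kW/m2 (500 mA/cm2 at 1.7 V), 400 g/m2 mesh loading, and nickel at $20/kg.

```python
# Sketch of the per-megawatt nickel requirement derived above.
p_kw = 1000.0          # 1 MW stack
kwh_per_kg_h2 = 45.0   # 75% LHV efficiency
kw_per_m2 = 8.5        # 500 mA/cm2 at 1.7 V
ni_loading = 0.400     # kg nickel per m2 of mesh
ni_price = 20.0        # $/kg

h2_per_hour = p_kw / kwh_per_kg_h2  # ~22.2 kg H2 per hour
area_m2 = p_kw / kw_per_m2          # ~118 m2, as quoted
ni_kg = area_m2 * ni_loading        # ~47 kg of nickel per MW
ni_per_kg_h2 = ni_kg / h2_per_hour  # ~2.1 kg Ni per (kg H2/hr)
ni_cost = ni_kg * ni_price          # ~$940, i.e. "only ~$1,000"
```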

What kind of current density can be achieved by smooth plates?

A current density of 200 mA/cm2 at 1.7 volts (3.4 kW/m2) yields an efficiency of 91% even with non-activated nickel electrodes.

If a corrosion rate of 0.10 MPY is chosen, which is very conservative, then for a material loss rate of 5% per year, 400 grams per square meter is required, yielding a cost per kW of $4.7. If one desires to be extremely conservative, imagine an electrode around 1 millimeter thick is used. Since only the anode requires nickel (the cathode can be made of steel, since it is being reduced), we use 3.9 kg of nickel sheet per square meter of anode; since the power density is 3,600 watts/m2 (200 milliamps/cm2 × 1.8 volts), the price per kW is about $21. This illustrates that even if the designer uses an extremely thick electrode, far thicker than necessary, the cost of the single most materially sensitive component is only 2 percent of the cost of present commercially available electrolyzers, suggesting chronic manufacturing and production inefficiency among current producers.

Corrosion is by far the single biggest enemy of the electrolyzer; it is an under-discussed issue, but it accounts for the preponderance of performance degradation. All metals, even noble ones, tend to oxidize over time. The anode, the positive side, the electrode that evolves oxygen, is constantly being oxidized and turns black within hours of use, while the hydrogen-evolving cathode is subject to reduction and remains shiny no matter how long it is exposed to the alkaline environment. The oxygen electrode experiences immense oxidative pressure and rapidly accumulates a black oxide layer; in the case of nickel, this layer is comprised of nickel hydroxide. No material is truly lost, and it is theoretically possible to recover all of the metallic nickel from the oxide layer that is eventually shed into the alkaline medium. On the oxygen electrode, the black oxide layer quickly reaches a peak thickness and begins to passivate the surface, slowing the rate of further oxidation, but at the expense of electrochemical performance.


For a lower corrosion rate of 1 um/yr, a total mass loss of 7% per year occurs at a surface loading of 140 grams/m2 of exposed area; the nickel requirement is then only 17.5 kg, or $350, for one megawatt! Although this number is achievable, higher corrosion rates will likely be encountered; to ensure sufficient electrode reserve, a nickel loading of around 400-500 grams/m2 is chosen. Pure nickel experiences an excessively high corrosion rate when it is “active”; it becomes “passive” when a sufficient concentration of iron (as NiFe2O4) or silicate is found in the oxide layer. Incoloy alloy 800, with 30% Ni, 20% Cr and 50% Fe, experiences a corrosion rate of 1 um/yr at 120 C in 38% KOH, while pure nickel is over 200 um/yr. “The “active” corrosion of nickel corresponds to the intrinsic behavior of this metal in oxygenated caustic solutions; the oxide layer is predominantly constituted of NiO at 180°C and of Ni(OH)2 at 120°C. The nickel corrosion is inhibited when the oxide layer contains a sufficient amount of iron or silicon”. The results drawn from this study indicate the ideal alloy contains around 34% Ni, 21% Cr, and 45% Fe. The cost breakdown for the three elements is $18/kg, $9/kg and $0.2/kg, giving a weighted average of $8.1/kg. For a passive corrosion rate of 1 um/yr, a 10% annual material loss corresponds to an electrode mesh loading of 90-100 grams/m2, or $0.11/kW. That is 11 cents per kW! This does not include mesh-weaving costs; a 600-mesh weaving machine costs $13,000, and meshing costs are very minimal, less than a few cents per square meter.
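The alloy-cost and loading figures above check out as follows. This is a sketch: the 8.9 g/cm3 (nickel-like) density and the 6.8 kW/m2 power density are assumptions carried over from figures used elsewhere in this text.

```python
# Sketch: weighted raw-material cost of the ~34/21/45 Ni/Cr/Fe alloy, and
# the corrosion-limited mesh loading at 1 um/yr with 10%/yr acceptable loss.
composition = {"Ni": 0.34, "Cr": 0.21, "Fe": 0.45}  # mass fractions
price = {"Ni": 18.0, "Cr": 9.0, "Fe": 0.2}          # $/kg

alloy_price = sum(f * price[el] for el, f in composition.items())  # ~$8.1/kg

density = 8900.0           # kg/m3, nickel-like alloy (assumed)
corrosion_m_per_yr = 1e-6  # 1 um/yr
loss_fraction = 0.10       # acceptable mass loss per year

loading_g_m2 = corrosion_m_per_yr * density * 1000 / loss_fraction  # ~89 g/m2
cost_per_kw = (loading_g_m2 / 1000) * alloy_price / 6.8             # ~$0.11/kW
```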

For the diaphragm separators, using a 200 um thick sheet of polyethersulfone (PES), around 20 grams is used per kilowatt; at a typical PES cost of $25/kg and a density of 1.37 g/cm3, the cost is around $0.50/kilowatt, assuming an electrode power density of 6.8 kW/m2 (400 milliamps/cm2 at 1.7 volts). Since Christophe Pochari Energietechnik always adheres to the COTS methodology, the expensive and specialized Zirfon membrane is dispensed with in favor of a more ubiquitous material; this saves considerable cost and eases manufacturability, as the need to purchase a specialized, hard-to-access material is eliminated. Gasket costs are virtually negligible: with only 4.8 grams of rubber needed per kilowatt, EPDM rubber prices are typically in the range of $2-4/kg. For 30% NaOH at 117 C, a corrosion rate of 0.0063 millimeters per year (0.248 MPY) is observed for an optimal nickel concentration of 80%. This means 55 grams of Ni is lost per square meter per year; if we choose 10% per year as an acceptable weight loss, we return to 550 grams per square meter as the most realistic target nickel loading, with much lower loadings achievable at reduced corrosion rates. A lower concentration of KOH/NaOH and a lower operating temperature can be utilized as a trade-off between corrosion and power density. The total selling price of these units, including labor and installation, is $30/kW. In 2006, GE estimated alkaline electrolyzers could be produced for $100/kW; clearly, much lower prices are possible today. At an efficiency of 47.5 kWh/kg-H2, the price is $1,430 per kg-H2/hour of capacity. After the cell stack costs, which we have demonstrated can be made very minimal with the COTS design philosophy, the second major cost contributor is the power supply: for a DC 12-volt supply, $50 is a typical price for a 1000-watt module.
Thus, to summarize, alkaline electrolyzer material costs are effectively minuscule, and the cost structure is dominated by conventional fabrication, assembly, and electrode deposition techniques, as well as the power supplies and the unique requirements of low-voltage, high-amperage direct current. High-efficiency DC power supplies cost as little as $30/kW and last over 100,000 hours. Once components can be mass-produced and assembled with minimal manual labor, costs can be brought down close to the basic material contribution. The only uncertainty for the future of alkaline electrolysis is the price of nickel; certain disruptions in the supply of nickel could make the technology less competitive, as long as carbon steel electrodes remain unproven. When this text was written, the author had purchased $2000 worth of nickel sheets on Alibaba at a spot price of $18/kg.
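The conversion from a $/kW stack price to $ per kg-H2/hour of capacity used above is simply the specific energy consumption, as this sketch shows:

```python
# Sketch: stack price per unit of hydrogen production capacity.
capex_per_kw = 30.0   # $/kW, the quoted selling price
kwh_per_kg_h2 = 47.5  # quoted specific energy consumption

# One kg-H2/hr of capacity requires 47.5 kW of stack at this efficiency.
capex_per_kg_hr = capex_per_kw * kwh_per_kg_h2  # ~$1,425 per (kg H2/hr)
```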

It should be noted that the activity of the nickel electrode depends heavily on its morphology. A smooth sheet has very little activity and is thus not suitable for industrial scale, although for small electrolyzers a smooth catalyst can be sufficient if power density is not an exigency. Catalyst activity depends not on the total surface area exposed to the reactant, but almost exclusively on the presence of so-called “active sites” or “adsorption sites”, comprised of kink sites, ledges, steps, adatoms, and holes. These sites, characterized by local geometric perturbation, account for effectively all the activity of a catalyst; it can be said that the vast majority of the catalyst area is not active. By achieving a high fraction of active sites, the current density at constant voltage can be increased 10-fold. Raney nickel electrodes were pioneered in 1948 by Eduard W. Justi and August Winsel. A properly leached Raney nickel catalyst can attain an immense surface density of 100 m2/g.

Raney nickel, an alloy of aluminum and nickel, is sprayed onto bare nickel sheets, meshes, or nickel foam, forming an extremely high specific surface area of micron-size jagged-edged clumps; this process is a form of thermal (plasma) spray deposition. The high velocity and temperature of the metal particles cause them to mechanically adhere to the nickel surface. During application of the Raney nickel with the plasma-spraying machine, the distance, temperature, and deposition rate must be fine-tuned to avoid excessively thick deposition or clumping. Examination with electron microscopes can be performed by sending a sample of the piece to an electron-microscope rental service. After the material has cooled and solidified, the aluminum is leached out of the surface using a caustic solution, leaving the pure nickel electrode ready for use. This leaching process, where the aluminum is pulled away from the nickel surface, is what leaves the spongy surface and accounts for the stellar electrochemical activity of Raney nickel electrodes. Raney nickel sells for around 300 RMB per kg, about $50/kg. By mass, only a tiny fraction of the electrode is comprised of the Raney nickel: a thin heterogeneous layer, usually far less than 100 microns. The primary cause of electrode degradation is the loss of the high-surface-area active sites through the accumulation of nickel oxide on the outer surface. Corrosion is almost impossible to prevent, but since no material is lost, the electrodes can simply be regenerated after their useful life. A simple yet elegant option to slow down, or even arrest altogether, electrode degradation is to periodically reverse the polarity.
In doing so, the oxidized anode has its nickel oxide stripped off by turning it into a cathode, the oxide transferring to the former cathode; this allows each electrode to remain in a relatively fresh state, as any accumulated oxide is removed within 24 hours of each reversal. The power supply can simply feature a polarity-reversing switch: a mechanical bus bar that manually swaps the input current from positive to negative, requiring no modification to the standard switching power supply. The only tedious aspect of this design is the need to switch the hydrogen and oxygen hoses, but this too can be done with automatic valves that simply re-route hydrogen into the former oxygen hose and vice versa. Oxy-hydrogen cutting-torch operators employ this method to increase the life of their stacks. By employing this simple yet novel solution to corrosion prevention, plain steel anodes can be reliably used. Youtuber NOBOX7 reverses the polarity on his homemade HHO cutting-torch generator.
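The reversal bookkeeping can be sketched as a tiny state function. The 24-hour period comes from the text above; the outlet names and the function itself are purely illustrative, not a real control API:

```python
# Hypothetical sketch: each polarity flip must also swap the gas routing,
# since the former oxygen outlet now carries hydrogen, and vice versa.
def stack_state(period_index):
    """Polarity and gas routing for the n-th 24-hour period (0-based)."""
    flipped = period_index % 2 == 1
    return {
        "polarity": -1 if flipped else +1,
        "h2_outlet": "B" if flipped else "A",  # automatic valve position
        "o2_outlet": "A" if flipped else "B",
    }
```

Each flip turns the oxide-laden anode into a cathode, stripping its oxide layer, so neither electrode accumulates a permanent deposit.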

“The reduction in corrosion due to periodically reversed currents appears to be due to the fact that the corrosive process is in a large degree reversible; so that the metal corroded during the half-cycle when current is being discharged is in large measure redeposited during the succeeding half cycle when the current flows toward the metal. This redeposited metal may not be of much value mechanically, but it serves as an anode surface during the next succeeding half cycle, and thus protects the uncorroded metal beneath. Effect of frequency on rate of corrosion: The corrosion of both iron and lead electrodes decreases with increasing frequency of reversal of the current. The corrosion is practically negligible for both metals when the period of the cycle is not greater than about five minutes. With iron electrodes a limiting frequency is reached between 15 and 60 cycles per second, beyond which no appreciable corrosion occurs. No such limit was reached in the lead tests, although it may exist at a higher frequency than 60 cycles. The corrosion of lead reaches practically the maximum value with a frequency of reversal lying between one day and one week. The corrosion of iron does not reach a maximum value until the period of the cycle is considerably in excess of two weeks”.

Digest of Publications of Bureau of Standards on Electrolysis of Underground Structures Caused by the Disintegrating Action of Stray Electric Currents from Electric Railways, United States National Bureau of Standards, Samuel S. Wyer · 1918

“According to experiments by Larsen, daily reversals of polarity reduce the electrolytic action to one fourth, and hourly reversals to one thirtieth of its normal value . The changing of the direction of the current causes a partial restoration of the metal which has been removed, this effect increasing with the frequency of the reversals. Also, according to Larsen, the nature of the electrolytic action is less harmful when the polarity is periodically reversed than when it remains always the same. When the current flows continuously in the same direction, the pipes become deeply pitted, but when the polarity is periodically reversed the corrosion is more widely and uniformly distributed. Therefore, in all cases where the conditions permit, it is advisable to reverse the polarity of the system at certain intervals. The hourly reversal of polarity reduces corrosion to a very great extent, but when alternating current, even of low frequency, is used the corrosion is completely done away with”.

Stray Currents from Electric Railways by Carl Michalke · 1906

The most challenging aspect of manufacturing a high-performance alkaline electrolyzer is catalyst preparation. Manufacturing an electrolyzer is not semiconductor photolithography: it is a delicate process, but by no means a proprietary or high-tech procedure. The equipment required for electrode manufacturing is not specialized but dual-use, with commercial systems readily available, obviating the need for expensive and niche suppliers. The major electrolyzer manufacturers do not possess any special expertise that we cannot acquire ourselves. Plasma spraying is the most common method to achieve a highly denticulate surface. A plasma-spraying torch can be procured for around $2000 and used to gradually coat the smooth nickel sheets with a highly porous and ragged Raney nickel surface. The HX-300 thermal spraying machine, sold by Zhengzhou Honest Machinery Co Ltd, runs at 300 amps DC, has a duty factor of 60%, and costs only $1850. It can spray a multitude of metal powders at 0.6 megapascals of pressure.


A typical thermal spraying machine, used to apply heat-resistant coatings to automobile components and many other applications. These machines require a flow of coolant and compressed air to operate. Their average price is between $2000 and $10,000.


Bare sheets of smooth nickel are placed on the floor, and either a manual operator or a gantry frame passes the plasma head across the metal surface, in the same manner a painter applies paint over a surface. After the material has cooled and solidified, the aluminum is leached (extracted) from the surface using a caustic solution. Plasma spraying can be done either in a vacuum or in an atmospheric environment. In the paper “Electrochemical characterization of Raney nickel electrodes prepared by atmospheric plasma spraying for alkaline water electrolysis”, the authors Ji-Eun Kim et al achieved satisfactory results using a standard atmospheric plasma thermal-spraying machine with Raney nickel particles of 12 to 45 microns. Christophe Pochari Energietechnik is developing a low-cost plasma-spraying machine using ubiquitous microwave components to perform catalyst preparation, but such an option is only of interest to hobbyists and the HHO energy community, since any commercial-grade factory would purchase a standard thermal-spraying machine. Once catalyst surface preparation is complete, the electrolyzer is ready to assemble. Commercial plasma deposition, where Raney nickel microparticles are blasted onto a smooth nickel mesh at high temperature and high velocity, has an inherent drawback: it produces a brittle coating; the adhesion between the leached Raney nickel microparticles and the underlying smooth substrate is poor and prone to cracking and peeling.

The polyethersulfone diaphragm separator and rubber gaskets can be cut precisely into circular pieces with a laser cutter, along with the nickel sheets, using virtually no labor other than what is required to load the sheets onto the laser cutter bed. Then, once all the parts have been cut, prepared, and readied for installation, the low-skill process of stacking the components and bolting on the endplates, plumbing fittings, etc. can be performed in low-labor-cost countries, such as Mexico. The electrolyzer can also be packaged as an easy-to-assemble kit, so that owners can perform assembly themselves, further saving cost.


[Figure]

Achievable current densities for a number of alkaline electrolyzers.

[Figure]

180 C at 38 wt% KOH, 4 MPa oxygen

[Figure]

150 C at 38 wt% KOH, 4 MPa oxygen

[Figure]

120 C at 38 wt% KOH, 4 MPa oxygen

[Figures]

Typical alkaline electrolyzer degradation rates. The degradation rate varies from as little as 0.25% per year to nearly 3%, and is almost directly a function of electrocatalyst deactivation due to corrosion.

Diaphragm membrane rated for up to 100 C in 70% KOH for $124/m2: $8.8/kW


*Note: Sandvik Materials has published data on corrosion rates of various alloys in aerated sodium hydroxide solutions (the exact conditions found in water electrolyzers), and found that carbon steel tolerates up to 30% sodium hydroxide provided temperatures are kept below 80 Celsius.

Cheap ammonia crackers for automotive, heavy-duty mobility, and energy storage using nickel catalysts

Industrial-scale catalysts have been process-intensified by reducing particle size, increasing Ni loading, and increasing specific surface area. “Employing the catalyst in powder form instead of in granulated or pellet form significantly reduces the temperature at which an efficient decomposition of ammonia into hydrogen and nitrogen can be effected”. The main reason industrial-scale annealing (forming gas) crackers have higher decomposition temperatures is their large catalyst pellet size, usually 20 mm. While typical industrial ammonia cracking catalysts from China (Liaoning Haitai Technology) have Ni loadings of 14%, with GHSVs of 1,000-3,000 and conversion of 99+% at 800-1000°C, some literature pulled up from mining Google patents citing physical testing indicates that variants of standard nickel catalysts with higher Ni loading and similar densities (1.1-1.2 kg/liter) can achieve GHSVs of 5,000 at lower temperatures (<650°C) and retain high conversion (99.95%). Such a system would equate to a techno-economic power density of 3.85 kg catalyst per kg-H2/hr, yielding a net of 0.96 kg nickel per kg-H2/hr; at a nickel price of $20/kg, this equates to about $20 per kg-hr of capacity, leaving little incentive to use noble or exotic alloys. The rest of the cost is found in the metal components, of which around 7 kg of stainless steel is needed for a 1 kg-H2/hr reformer, costing about $140. The aluminum oxide support is virtually insignificant, costing only $1/kg. Pochari Technologies’ goal is to make ammonia crackers cheaper than standard automotive catalytic converters; this appears a tenable goal, as catalytic converters require palladium and platinum, albeit in smaller quantities. The reformer is approximately the same size as a large muffler and will be fitted near the exhaust manifold of the engine to minimize conductive heat losses through the exhaust.
Beyond economics, the power density is already more than satisfactory: the catalyst occupies less than 3.2 liters for a reformer capacity of 1 kg-H2/hr, and most of the volume is occupied by insulation, the combustion zone (the inner third of the cylinder), and miscellaneous piping, flow regulators, etc.
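
As a rough sanity check on the catalyst economics above, a minimal sketch; the ~25% Ni loading is an assumption inferred from the 0.96 kg nickel figure, not a vendor specification:

```python
# Back-of-envelope check of the cracker catalyst cost figures quoted above.
cat_mass_per_kgh = 3.85      # kg catalyst per kg-H2/hr of capacity (from text)
ni_loading = 0.25            # assumed Ni mass fraction implied by the 0.96 kg figure
ni_price = 20.0              # $/kg nickel

ni_mass = cat_mass_per_kgh * ni_loading   # kg Ni per kg-H2/hr
ni_cost = ni_mass * ni_price              # $ per kg-H2/hr of capacity

steel_mass = 7.0                          # kg stainless per 1 kg-H2/hr reformer
steel_cost = steel_mass * 20.0            # ~$140 at an assumed ~$20/kg fabricated

print(f"Ni per kg-H2/hr: {ni_mass:.2f} kg -> ${ni_cost:.0f}")
print(f"Stainless steel: ${steel_cost:.0f}")
```

The nickel term works out to about $19-20 per kg-hr of capacity, matching the figure in the text.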

While the theoretical energy consumption is 3.75 kWh/kg-H2, the minimum practical energy consumption is somewhere in the order of 4.2-4.8 kWh/kg, and in reality it is usually higher. This number can be readily estimated from the specific heat capacities of the catalyst mass (mostly aluminum oxide), the active component (nickel, ~500 J/kg-K), and the metallic components (500 J/kg-K for SS304) that comprise the reactor vessel, catalyst tubes, containment cylinder, etc., and finally the heat required to raise 5.5 kg of gaseous anhydrous ammonia (2,175 J/kg-K) to 800 degrees Celsius, which is almost exactly 2.65 kWh, plus any heat loss. We also need to take into account the heat capacity of the released hydrogen. As the ammonia progressively breaks down, hydrogen is released; this hydrogen has a certain residence time since, for complete decomposition, the reformate gas resides until no appreciable quantity of ammonia remains. In effect, the reformer is also heating hydrogen gas, not just ammonia, so we need to add the heat absorption of the hydrogen, which is another 3.17 kWh (14,300 J/kg-K). This takes the total to 7.84 kWh/kg-H2, very close to numbers found on industrial reformers. Heat loss through conduction is minimal: using 40 mm of rock-wool insulation wrapped around a 100 mm reactor vessel, heat transfer for a 3-liter reformer can be reduced to around 60 watts. The net total amounts to 7.9 kWh/kg-H2, or 23% of the LHV of hydrogen. Nearly 100% of this energy can be supplied by exhaust gases for H2-ICE systems, while for fuel cells, no such heat is available.
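
The two sensible-heat terms above can be reproduced with a short sketch, taking a flat ΔT of ~800 K and the cp values quoted in the text; the remaining gap to the ~7.9 kWh total is the decomposition enthalpy itself plus losses:

```python
# Sensible-heat terms of the cracker heat balance, per kg of H2 produced
# (i.e., per 5.5 kg of NH3 cracked). Delta-T approximated as a flat 800 K.
dT = 800.0                   # K, feed heated to ~800 C

m_nh3 = 5.5                  # kg NH3 per kg H2
cp_nh3 = 2175.0              # J/(kg K), gaseous ammonia
q_nh3 = m_nh3 * cp_nh3 * dT / 3.6e6   # kWh to heat the ammonia feed

m_h2 = 1.0                   # kg of hydrogen resident in the reformer
cp_h2 = 14300.0              # J/(kg K), hydrogen gas
q_h2 = m_h2 * cp_h2 * dT / 3.6e6      # kWh absorbed by the released hydrogen

print(f"NH3 sensible heat: {q_nh3:.2f} kWh")   # ~2.66 kWh
print(f"H2 sensible heat:  {q_h2:.2f} kWh")    # ~3.18 kWh
```

These reproduce the 2.65 and 3.17 kWh terms in the text to within rounding.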



Techno-economic feasibility of micro-channel Fischer-Tropsch production using carbon-neutral hydrogen from municipal solid waste plasma gasification for producing liquid hydrocarbons


Combining photovoltaic power with municipal solid waste plasma gasification, carbon monoxide can be produced along with hydrogen at nearly the same molar ratio as required to produce long-chain liquid transportation fuels. Any fuel produced from a sustainable source such as solid waste diverts carbon away from new extraction, mitigating emissions: if 1 ton of fuel produced from solid waste is burned, 1 ton less fuel is extracted. Using micro-channel technology rather than classic tubular reactors, the size of the F-T reactor is reduced by an order of magnitude, reducing CAPEX and material usage. Low-cost non-noble cobalt catalysts provide high activity and long life. Graphite-electrode 10 kV AC plasma torches provide high gasification temperatures of 2000-3000°C, generating 513 and 400 Nm3 of CO and H2 respectively using 1.6 MW. 1.2 tons of solid waste can generate 0.27 tons of sulfur-free diesel fuel per day.

Sustainable diesel fuel market price: $946/ton ($3/gal)

Hydrogen source: Photovoltaic 40 kWh/kg-H2 140 kg-H2/t-diesel

Carbon source: Municipal solid waste plasma gasification: 243 kg/t-MSW @ 1,600 kWh plasma/t-MSW = 6.6 kWh/kg; 5,600 kWh/t-diesel. Hydrogen production: 32 kg/t-MSW

Solar plant CAPEX @0.20/watt: $60,000

DC/AC inverter: $15,000

Treated wood panel support structure: $8000

Plasma gasifier CAPEX: $10,000

Microchannel Fischer-Tropsch reactor: $8,000

Purification: $5,000

Total CAPEX: $106,000

Annual maintenance: $15,000

Revenue per ton MSW: $178

Power consumption: 6000 kWh/ton

MSW consumption: 1.2 tons per day

Potential Revenue: $94,600

Return on capital: 75%
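
The return-on-capital figure follows directly from the list above; a minimal sketch:

```python
# Simple annual cash-flow check for the MSW-to-diesel plant figures above.
capex = 106_000.0            # $, total plant CAPEX
revenue = 94_600.0           # $/yr, potential diesel revenue
maintenance = 15_000.0       # $/yr, annual maintenance

net = revenue - maintenance  # $/yr net cash flow
roi = net / capex            # simple return on capital

print(f"Net: ${net:,.0f}/yr, ROI: {roi:.0%}")
```

This recovers the quoted 75% simple return (before power costs, labor, and downtime, which the list does not itemize).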

Compression Loaded Ceramic Turbine Disc Technology for Gas Turbines

Christophe Pochari Energietechnik is reviving long-forgotten and obscure turbine engine technology. With advancements in single-crystal nickel alloys, turbine designers have been able to raise turbine inlet temperatures to record levels. The downside is the need to use large amounts of compressor bleed air, reducing engine efficiency. Conventional engineering ceramics, including silicon nitride and carbide, have been forgotten in part due to the focus on ceramic matrix composites. A relatively elegant and simple design is the use of a composite (carbon-fiber) hoop carrying the centrifugal forces acting on the blades. Conventional engineering ceramics have uneven properties, rendering them unable to tolerate the concentrated forces found in the conventional mating conditions used in current turbine disc designs. Contrary to popular belief, engineering ceramics do indeed have sufficient tensile strength to be loaded in tension, as a beam for example. Silicon carbide has a tensile strength of up to 390 MPa, more than sufficient to tolerate the tensile loads found in a turbine blade root connection. The primary issue is the lack of uniformity: the material properties are highly heterogeneous, and low flexural strength and high brittleness further impede its use. The simple solution is to load the blades purely in compression (where ceramics shine). This idea was extensively investigated by some of the aerospace OEMs and the U.S. military, but rapid improvements in single-crystal alloys made it uninteresting. In recent times, due to interest in “distributed electric propulsion” and broader “clean propulsion” for aircraft, a company called Exonetik of Sherbrooke, Quebec, has strangely claimed to have developed and patented a “new technology,” when the idea dates back to the 1970s in patent US4017209A, Robert R. Bodman, Raytheon Technologies Corp.

Their efforts are dubious because they use carbon fiber as the hoop material, which is obviously unworkable due to its poor thermal stability. Secondly, they use silicon nitride, which has much lower tensile strength than silicon carbide, let alone its inferior oxidation resistance.

“Utilizing the high compressive strengths of ceramics in gas turbines for improving ceramic turbine structural integrity has interested engineers in recent years as evidenced by a number of patents and reports issued on Compression Structured Ceramic Turbines with one as early as 1968”

P. J. Coty, Air Force Aero Propulsion Laboratory

Silicon carbide has one of the highest compressive strengths available, 1600 MPa. In order to take advantage of the excellent high-temperature capability of these materials, a novel architecture is needed, dispensing with the existing orthodoxy of turbine disc design. In this design, the blades transfer compression loads into hoop loads; the carbon fiber perimeter containment hoop performs the same function as a pressure vessel and is not exposed to the high gas temperatures. Turbine inlet temperatures approaching 1600°C become feasible without air cooling. With this technology, it is possible to design small-scale sub-500 hp turboshafts with the efficiency of diesel engines (40%+), enabling jet-pack propulsion with over one hour of range.

The idea of using ceramics in a gas turbine is nothing new. To this date, the idea has been plagued with seemingly insurmountable problems, mainly the sudden brittle failure of any tension-loaded component, and is not taken seriously. Modern research focuses mainly on ceramic matrix composites, but this technology is far from certain. All high-performance gas turbines today use directionally solidified or single-crystal blades made of nickel, cobalt, and molybdenum, with trace amounts of rhenium, tungsten, tantalum, and hafnium. The problem with these alloys is that although they are very strong, they rapidly oxidize in the oxygen-rich combustion gases and erode quickly, limiting their useful life to only a few thousand hours in small gas turbines. These single-crystal alloys are also very dense, nearly 9 grams/cm3, which generates severe radial stress. When any mass is spun, inertia produces a force stressing the material; if the object is spun fast enough, it will eventually rupture. This is what actually limits turbine power density, not just temperature, because turbine wheels could spin faster for a given size if radial stresses could be lowered.
During the 1960s, at the apogee of technological civilization, on-road trucks were viewed as natural candidates for gas turbines. The tremendous success and reliability of gas turbines in helicopters made it seem almost inevitable that they would find use in trucks. Regrettably, a number of technical obstacles made their failure almost inevitable. Unlike a helicopter or turboprop, where gas turbines are the only option, a truck does not operate at full load 100% of the time; not only does the truck use far less power than even a small helicopter, it will vary its power output by at least 2-3 fold depending on grade, frequency of acceleration, and load. Firstly, gas turbines were woefully inefficient compared to diesel engines. Secondly, their even more abysmal part-load efficiency meant that the truck would either be underpowered or overpowered and operated far below the turbine’s optimal speed band. Thirdly, gas turbines at the time, and to a large extent still, rely on metallic alloys for the mainstay of their blading. Metals possess an intrinsic fatigue limit: the more the metal is thermally cycled, the more the grain structure is disturbed and the higher the chance of failure. This meant that trucks operating with frequent short stops would have shortened engine life. Lastly, gas turbines were and still are far more expensive, largely due to small manufacturing volumes and little use outside of aviation and large-scale power generation from natural gas.

I have been forced to employ the term “hard technology” to refer to technologies that produce actual physical output, such as a train, ship, or engine, and not merely “virtual” outputs, such as the flickering colors on an LED screen. Of course, there is something physical about such technologies: a radio produces an electric current when an oscillating magnetic field passes its antenna, but the output is merely audible to a human ear, or any other organism that can process the particular frequency of sound produced. A true physical technology is one that changes the basic energetic and material basis of civilization, namely labor-augmenting technologies such as steam or hydraulics. In the realm of physical technologies, I have always remained partial to propulsion and energy technologies, viewing these as the substrate upon which the rest of the Technosphere rests. Better engines, drivetrains, and prime movers are a sure way to move civilization forward. Anyone who could invent a lighter and more powerful turbine engine is sure to change the world, much more so than Oculus Rift ever could. Regrettably, like much of modernity, we find ourselves in a strange predicament, with politicians and the world’s best engineering labor force squandering precious time and effort on developing effectively useless “electromobility” using batteries as prime movers. Analyzed on a strictly technical basis, there is simply nothing favoring electro-chemistry to power anything bigger than a lawn mower. Otherwise “intelligent” people are trying to replace the mighty diesel engine with puny little iPhone batteries, rather than replacing the diesel with a superior powerplant, for example a gas turbine or an adiabatic engine.

In the 1970s and 80s, the U.S. Army investigated “adiabatic engines” under the TACOM/Cummins Adiabatic Engine Program. The Army hired Cummins to perform a study and build a prototype engine; unfortunately, nothing came of the program due to a lack of suitable lubrication options. An adiabatic engine is essentially an engine that rejects virtually no heat to its surroundings (the cooling system is eliminated altogether); such an engine would be at least 20% more efficient, but due to its extremely high operating temperature, no lubricant could be found that did not experience excessive oxidation. A conventional piston engine rejects about 30% of its heat input to the coolant, and this exergy flow goes unused. The lubricant would need to withstand peak temperatures of 400°C, which is impossible for a mineral or even a synthetic lubricant such as polyphenyl ether. Solid lubricants and ceramic cylinder liners were considered, but solid or “dry” lubricants like molybdenum disulfide or tungsten disulfide, being powders, do not form durable films and provide inferior protection compared to viscous liquids. If the adiabatic engine is not practical, we are left with only the gas turbine as a candidate to replace the diesel engine, but the competition is fierce, and it will only happen if the gas turbine can be made more reliable, cheaper, and equally as efficient.

Today civilization is taking a great step backward, towards a lower power density form of propulsion, an entirely illogical and outright perplexing move. But of course, there is no mystery here; we know exactly why this is happening: it is all because of Arrhenius’s demon. Until Arrhenius’s demon (the greenhouse effect) is overthrown, we may waste decades in this futile pursuit of electromobility. It is more than certain that consumers of these technical boondoggles, whether electric semi-trucks or sedans, will eventually realize it was a waste of time. Instead of moving towards a cheaper and cleaner fuel, methane, and finding propulsion systems that better utilize this gaseous fuel, notably turbines, every single automotive, truck, and engine maker is committing billions to building batteries. But let us stop complaining about the banality of the modern post-WW2 era and turn to the fascinating and exciting technical aspects of compression-loaded ceramic gas turbines.

If we want to replace bulky, clunky, and maintenance-intensive reciprocating engines with smooth, compact, and efficient gas turbines, we must find a way to reach BSFC parity with diesel engines at the scales found in heavy-duty truck propulsion. A typical highway semi-truck in Europe, such as a Scania R730, consumes 48.7 liters per 100 km; at a cruising speed of 85 km/h, the engine burns 41.4 liters of diesel fuel per hour, or 35.2 kg/hr. The specific fuel consumption of the Scania DC16 at half load is 189 g/kWh at 1500 RPM and 197 g/kWh at 1800 RPM. This means the engine produces roughly 180 kW, or around 240 hp. To highlight the sheer absurdity of “electromobility”, for just one hour of operation the battery would weigh one ton! To show just how strong a technology we are up against, this Scania DC16 common-rail V8 boasts a brake thermal efficiency of 42% at 50% load. We must develop a gas turbine that can operate at a steady-state output of just under 200 kW whose efficiency does not fall substantially below the high 30s in percent.
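
The cruise power can be reconstructed from the fuel burn, assuming a diesel density of ~0.85 kg/L:

```python
# Reconstructing the Scania R730 cruise power from the figures quoted above.
fuel_lph = 48.7 / 100 * 85       # L/hr at 85 km/h and 48.7 L/100 km
fuel_kgh = fuel_lph * 0.85       # kg/hr, assuming diesel at ~0.85 kg/L
bsfc = 197.0                     # g/kWh at 1800 RPM, half load

power_kw = fuel_kgh * 1000 / bsfc

print(f"{fuel_lph:.1f} L/hr, {fuel_kgh:.1f} kg/hr, {power_kw:.0f} kW")
```

This lands at roughly 180 kW of cruise power, consistent with the steady-state turbine sizing target discussed below.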

A Scania DC16 is hardly a power-dense powerplant; low-speed diesel engines are notoriously heavy and bulky. By installing a gas turbine, the cabin would be roomier since the entire engine bay can be eliminated. A 200 kW high-temperature ceramic recuperated gas turbine is a tiny package that could fit in the glovebox!

A gas turbine has no cooling system, no fan, radiator, or coolant, no high-pressure fuel injection system, no glow plugs, no exhaust treatment system, and no need for complex failure-prone electronics.

The weight savings would be phenomenal, increasing revenue for the truck operator. The Scania DC16 engine weighs close to 1400 kg loaded with oil and coolant. The 200 kW gas turbine would weigh at best 100 kg, allowing the truck to carry 1300 kg of additional payload, itself worth tens of thousands of dollars annually in additional revenue.

But the gas turbine is not all rosy; there are a number of tradeoffs. The high power density of the gas turbine comes at the cost of poor part-load capability, with efficiency dropping rapidly as the compressor and blade aerodynamics fall out of their optimal band. To rectify this limitation, only one practical solution exists: modularity, where multiple individual turbines are installed in discrete drivetrains mounted directly parallel to the axle. When more power is needed, they can simply be switched on. A hybrid drivetrain seems appealing, but there is no way around the problem of intermediate energy storage. If the gas turbine is sized merely for the average power usage of 180 kW, there will be a large void to fill when the truck needs, say, 500 kW for half an hour. We could install a one-ton battery and simply accept the penalty, but it eats into the advantages offered by the turbine to begin with. Since the DC16 makes a maximum of 566 kW, we may need to draw this amount of power during short bursts or prolonged climbs up steep grades. Since the turbogenerator is still only able to produce 180 kW, it will take over an hour to replenish this battery, leaving no reserve from which to draw over 180 kW. Clearly, the best option is to combine both methods: a fully hybrid drivetrain where the high torque and instant acceleration of the motor can be exploited, while also solving the issue of excessive intermediate storage between peak load and steady-state load. To maintain the high efficiency of the turbine, we can simply switch in additional compressor sections. The turbine can be designed with more stages than needed during sub-200 kW operation, and when ramped to 550 kW or more, additional high-pressure stages can be put into operation.
Either a single oversized combustor can be used or an additional higher capacity combustor can be incorporated. Such a design evidently adds complexity, but it is no heavier than a single 550 kW turbine since the smaller capacity sections do not add additional weight beyond the base unit.
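
The intermediate-storage penalty in the hybrid scenario can be sized with a quick sketch; the ~160 Wh/kg pack-level specific energy is an illustrative assumption, not a quoted figure:

```python
# Sizing the buffer battery for the hybrid turbine-electric scenario above.
turbine_kw = 180.0           # steady-state turbogenerator output
peak_kw = 500.0              # demand during a prolonged grade climb
climb_hr = 0.5               # half-hour climb

deficit_kwh = (peak_kw - turbine_kw) * climb_hr   # energy the battery must supply
pack_kg = deficit_kwh * 1000 / 160                # kg, assuming ~160 Wh/kg pack

print(f"Deficit: {deficit_kwh:.0f} kWh -> ~{pack_kg:.0f} kg pack")
```

A 160 kWh buffer at this assumed specific energy weighs on the order of a ton, which is why the text argues the battery erodes the turbine's weight advantage.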

Silicon carbide hoop stress-loaded ceramic turbines



The image above is a CAD drawing of a dual-centrifugal-compressor gas turbine with two high-pressure turbine stages made of ceramic blades contained within a tension-loaded hoop.

In the category of prime movers, the thermal power cycles used in reciprocating and turbomachinery represent a singular technology with no viable alternative contenders. Fuel cells, batteries, and other electrochemical powertrains are simply worlds apart in power density. Hydrocarbon fuels are in a class of their own, with liquid hydrogen the only possible alternative, and that only practical in large aircraft or trucks, where boiloff losses can be minimized. If we look at the horizon, we see no evidence of early-stage prototypical technology that could replace the hydrocarbon prime movers. Physics and the chemical elements, not human intelligence, are clearly the limit here.

Rotary detonation engines are perhaps the closest thing to a new class of powerplant, but they remain laboratory curiosities. Small fission reactors powering Brayton cycles would, were it not for radiation concerns, be feasible for aircraft, helicopters, ships, and many heavy-duty propulsion applications. But since heavy, thick radiation barriers would be needed, their power density may not even exceed hydrocarbon-fueled gas turbines.

Small gas turbines can be made as efficient as diesel engines or large gas turbines if higher turbine inlet temperatures are employed along with recuperators. A small <200 kW turbine could easily exceed 45% efficiency with over 1600°C TIT and a 15:1 pressure ratio combined with recuperation. If such powerplants can be made highly reliable and inexpensive, they would displace many reciprocating powerplants.
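
To see why 45% is plausible, the ideal recuperated Brayton efficiency can be sketched; the inlet temperature and ideal-gas gamma are illustrative assumptions, and real machines with component and recuperator losses land well below this bound:

```python
# Ideal recuperated Brayton cycle: eta = 1 - (T1/T3) * r**((gamma-1)/gamma).
# Illustrative values: 288 K ambient, 1600 C turbine inlet, 15:1 pressure ratio.
T1 = 288.0                   # K, compressor inlet
T3 = 1873.0                  # K, turbine inlet (1600 C)
r = 15.0                     # pressure ratio
gamma = 1.4                  # ideal-gas ratio of specific heats (assumed)

tau = r ** ((gamma - 1) / gamma)         # temperature ratio across compression
eta_ideal = 1 - (T1 / T3) * tau

print(f"Ideal recuperated efficiency: {eta_ideal:.1%}")
```

The ideal bound comes out around two-thirds, leaving ample margin for real-world compressor, turbine, and recuperator losses while still exceeding 45%.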

Unfortunately, small gas turbines cannot make as effective use of bleed-air blade cooling, which places a large penalty on efficiency. The amount of bleed air needed to cool conventional single-crystal alloy blades, such as CMSX-4, is not practical for a small turbine. Liquid cooling of turbine blades has been considered before, and a number of patents were taken out by the major turbine OEMs, along with much technical literature, but the inherent leak-proneness, sharp thermal gradients, and potential failure points made this option impractical. A better solution is sought whereby the blade itself can sustain a steady-state temperature close to or equal to the gas temperature. Unfortunately, the “low density” nickel alloys simply melt at these temperatures, so their use is impossible.

Yield stress as a function of temperature for a model Ni-Al-Cr alloy.

Even the best single-crystal alloys can hardly tolerate temperatures higher than 1100°C before losing most of their tensile strength. Even with air pumped through channels and veins within the blade core, the exterior surface of the blade is close to the melting point and is rapidly worn away and oxidized, the metal effectively converted to vapor. With silicon carbide, the tensile strength at 1600°C is high enough for a completely un-cooled design.

Metals are perhaps the worst material one could use for a blade subjected to thousand-degree temperatures. Metals become much more ductile at high temperatures and are consequently subject to intense creep under prolonged high-temperature operation. A hot turbine blade is about as strong as magnesium at room temperature and can barely withstand the radial stresses generated by its own spinning mass. Ceramics, on the other hand, are brittle materials; they do not deform at all until ultimate failure. A ceramic blade, as long as it has not broken, will merely expand slightly but will remain morphologically stable. Ceramics have the added advantage of being very light: silicon carbide has a specific gravity of 3.24, 2.7x lighter than CMSX-4, whose high density of 8.7 g/cm3 generates proportionally more stress when spinning. This means ceramic turbine blades would not experience any creep. Unfortunately, ceramics, being brittle and possessing poor tensile strength, have to be loaded only in compression to prevent tensile failure. A simple and elegant solution to this problem was first proposed by an engineer at Fiat and described in a patent from the 1970s. The solution was very simple: place a hoop around the tips of the blades, so the blades are kept in compression at all times. This hoop can be made of fibrous materials, for example silicon carbide fibers, which maintain tensile strengths of 2 GPa at 1600°C. The fibrous hoop contains the blades, which fit onto a standard metal disk; the disk is not exposed to the hot gases directly, so it has time to cool and maintains a much lower temperature than the blades. While this setup is indeed more complex than a standard tension-loaded blade mounted in a fir-tree groove, it offers remarkable advantages. The ideal ceramic is silicon carbide, because it forms a thin protective glass layer.
Since silica is so chemically stable, it is nearly impossible for oxygen to react with the silicon beneath this layer. Silicon carbide also possesses high fracture toughness for a ceramic, around 6 MPa·m^1/2, and a high tensile strength of 390 MPa. The diameter of a 240 kW turbine disk, for example in the Allison M250, is 150 mm. The disk spins at 50,000 RPM, and the total tangential stress is 160 MPa, within the limit of silicon carbide.
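
The order of magnitude of that stress can be checked with the classic solid rotating-disk formula; the Poisson's ratio is an assumed value, and the 160 MPa figure in the text presumably reflects the actual blade and hub geometry rather than a uniform disk:

```python
from math import pi

# Peak stress in a uniform solid rotating disk: sigma = (3+nu)/8 * rho * (omega*r)^2.
rho = 3240.0                 # kg/m^3, silicon carbide (specific gravity 3.24)
nu = 0.17                    # Poisson's ratio for SiC (assumed)
r = 0.075                    # m, disk radius (150 mm diameter)
omega = 50_000 * 2 * pi / 60 # rad/s at 50,000 RPM

tip_speed = omega * r
sigma = (3 + nu) / 8 * rho * tip_speed**2

print(f"Tip speed: {tip_speed:.0f} m/s, peak stress: {sigma / 1e6:.0f} MPa")
```

The uniform-disk estimate comes out around 200 MPa, the same order as the quoted 160 MPa and comfortably below silicon carbide's 390 MPa tensile strength.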

“Although SiC ceramic demonstrated higher strength at 1200°C (~234.9 MPa) than at RT(~220.0 MPa), the flexural strength decreased at temperatures above 1400°C, and strength degradation from 239.4 MPa at 1200°C to 203.7 MPa at 1600°C. The specimen fractured at 1400°C and 1600°C exhibited semi-brittle fracture behavior with a fairly large amount of plastic deformation. Degradation in flexural strength at 1400°C and 1600°C can be attributed to softening of the glassy phase”.

High-temperature flexural strength of SiC ceramics prepared by additive manufacturing, Teng-Teng Xu

“Ceramic materials offer a great potential for high-temperature application. This, however, means it is necessary to live – even in the future – with a brittle material with small critical crack length and high crack growth velocity. Thus it will not be easy to ensure reliability for highly loaded ceramic components, keeping in mind that for reaction-bonded ceramics the material’s inherent porosity is in the same order of magnitude as the critical crack length. A solution to increase the reliability of ceramic turbines may be a compression-loaded rotor design with fiber-reinforced hooping”
R. Kochendrfer 1980, Institut für Bauweisen und Strukturtechnologie, DLR, AGARD CONFERENCE PROCEEDINGS No. 276 Ceramics for Turbine Engine Applications, pp 22

“A vaned rotor of the type comprising a central metal hub or rotor body carrying a plurality of rotor blades made of a ceramic material, in which the blades are simply located on the rotor body and held in place by a coil of carbon fibers or ceramic fibers which surrounds the blades. To form a support surface for the coil each blade has a transverse part at the radially outer end thereof, which is partly cylindrical and which together with the transverse parts of the other blades, forms a substantially cylindrical support surface for the coil. Although ceramic materials used for such vanes (silicon nitride, silicon carbide, alumina, etc.) have much better physical properties at high temperatures than any metal alloy, especially if undergoing compression loads, they are nevertheless very difficult to couple to metal parts because of their relative fragility, lack of ductility, and their low coefficient of expansion. Because of the lack of ductility of ceramic materials, the driving forces exerted during the operation of the rotor give rise to a concentration of the load in parts of the coupling areas between the ceramic vanes and the metal body of the rotor. This frequently causes breakages in these parts. The various systems presently in use for attaching a ceramic blade by the root to a metal rotor body for a gas turbine are generally inadequate because these systems, including dovetail fixings having both straight and curved sides, do not take sufficient account of the rigidity and relative fragility of the ceramic vanes. This problem is exacerbated by the fact that present manufacturing techniques for ceramic materials are still not able to provide a complete homogeneity of composition and structure of the material so that adjacent areas of ceramic material can vary by up to 200% in tensile strength. 
For this reason, the known types of coupling between a support disc forming a rotor body and rotor vanes of ceramic material, which rely on a wedging action, are not satisfactory”
R Cerrato Fiat SpA, U.S patent 3857650A, 1973

“A Compression Structured Ceramic Turbine looks feasible. A new engine aerodynamic cycle with effective working fins to offset windage loss, a reduced tip speed to enhance aeromechanics, and the possible utilization of leakage gas to augment thrust should be considered. Also, the prospect for more efficient energy extraction offered by an inverted taper in the span of the turbine blade should be of prime interest to turbine designers in any future engine utilizing a Compression Structured Ceramic Turbine. Material property data and design refinements based on this data will also have to be seriously considered”
“The “Novel” feature of this ceramic turbine rotor design involves maintaining the ceramic rotating components in a state of compression at all operating conditions. Many ceramic materials being considered for gas turbine components today display compressive strengths ranging from three to eight times their tensile strengths. Utilizing the high compressive strengths of ceramics in gas turbines for improving ceramic turbine structural integrity has interested engineers in recent years as evidenced by a number of patents and reports issued on Compression Structured Ceramic Turbines with one as early as 1968. Turbine blades designed to be in compression could greatly enhance the reliability of the ceramic hot section components. A design of this nature was accomplished in this contractual effort by using an air-cooled, high-strength, lightweight rotating composite containment hoop at the outer diameter of the ceramic turbine tip cooling fins which in turn support the ceramic turbine blades in compression against the turbine wheel. A brief description of the detailed structural and thermal analysis and projected comparable performance between the Compression Structured Ceramic Turbine”.
P. J. Coty, Air Force Aero Propulsion Laboratory, 1983

Unfortunately, the situation is not perfect; a number of technical problems will inevitably arise. The first is the risk of sudden brittle rupture of the blades, but since the stresses generated are proportional to the specific gravity of the material, silicon carbide is well positioned in this regard. While its fracture toughness is poor compared to steel, a turbine blade experiences a largely uniform and steady-state loading regime; impact or shock is not an expected event. As long as the blades are retained within the tension hoop, they can maintain the structural integrity necessary to withstand the radial stresses generated by their own spinning mass. In summary, a compression-loaded silicon carbide turbine paired with recuperators may be seen as the “final” powerplant for mankind’s propulsion needs. The future of heavy-duty trucks, heavy equipment, and even cars, ships, and many other mobility applications may very well be quiet gas turbines. If this technology is ever to see the light of day, it will take a bold investor willing to lose a lot of money. One reason technological progress has crawled to a halt is not solely that there are fewer useful things left to invent, though this is evidently true; another powerful factor has been the hesitance to develop what I would call “marginal technologies”, which offer only a weak adoption advantage over the incumbents they would replace. The steam engine was so far inferior to the diesel engine that no one could possibly justify keeping it in light of the new option. Steam boilers require someone to shovel coal into them, they need to be heated up before the steam engine can operate, they require a constant source of water to keep them replenished, and they are incredibly bulky, with power densities a tiny fraction of internal combustion.
But if we look at the situation a century later, we do have new options, yet compared to the incumbents they are barely justified. As much as we like gas turbines and think it would be “cool” to replace low-speed, heavy diesel engines with them, the fact is it will be very difficult, because diesels are doing just fine at the moment.