Author: Christophe Pochari (Rivals, chevalier de Mazères)
Electric Ducted Fan Tip-Driven Heavy Lift Helicopter with Blade-Integral Conductors
Bored-Rock Heat-Exchanger Deep Geothermal
Piezoelectric-Hydraulic High-Frequency Ceramic Friction Pad Clutch for Rack and Pinion Free-Piston Engine
LOW-TORQUE TRACTION DRIVES FOR HELICOPTER MAIN ROTOR GEARBOXES
Arrayed Gimballed Multi-Chamber Hypervelocity Combustion Gas Gun for anti-MIRV warfare
About Us
Mazères Energy is currently headquartered in the coastal development of Bodega Harbour, pictured below.
Rivals Family Coat of arms
Christophe de Rivals, Chevalier de Mazères
Mazères, Ariège, region of Occitanie, where the Rivals family served as Knights.
The Links at Bodega Harbour
The Gherkin in London, Christophe’s favorite skyscraper.
Saint-Tropez, French Riviera, Brigitte Bardot’s favorite town.
At 300 meters up the Eiffel Tower, a vantage point that speaks to the merit of our high-altitude wind turbine technology, which captures wind energy toward the upper end of the boundary layer.
At Charles M Schulz Airport
Oktoberfest in Munich
Monte Carlo, Monaco
Westminster Abbey
A very James Bond-looking highway in Ticino, Switzerland.
In front of the German Enercon E-70 in Switzerland, the world’s most efficient wind turbine with a CP of over 0.55.
Charles Parsons’s radial-flow steam turbine at the London Science Museum.
Newcomen atmospheric engine
European Commission in Brussels
European Commission Brussels
Wewelsburg Castle (former SS officer school), in Büren, outside of Paderborn in the northeast of North Rhine-Westphalia.
Anti-aircraft gun in the hills above the Naval port of Toulon
To the west is the Hyères Naval Air Station, where the French Navy AS-565 Panthers and NH-90s train.
Christophe with his grandmother and brother in Hyères-les-Palmiers
Comtesse Catherine de Rivals-Mazères as a child
About Christophe Pochari. Christophe Pochari is an independent, self-taught, and self-employed designer and inventor. He was born on January 29, 2000, in Monterey, California. His mother learned to speak 7 languages, and his parents traveled to 90 countries throughout the world. As a child, he was an avid draftsman of technical machines, ranging from engines to helicopters; he was especially fond of jetpacks, having drawn his first jetpack at age 5. At a very young age, he told his mother he would build large towers to capture the sun’s energy. At 17 he started his own blog titled “helicopteruniversity”, dedicated to rotorcraft engineering, and purchased over a hundred market research reports on the profitability of different industries. He has written over 2,500 pages on technology and philosophy. At age 8 he built his own house, and he built a second house at age 12; he also built a sail for his tricycle, using the windy Bodega Bay weather to zip around the road. He also built an asphalt tamper, a chainsaw, wooden airplanes, guns, and numerous other contraptions, all out of wood, since this was the only material he could use. He began using his father’s camera at age 10 to photograph helicopters. At age 11 he mastered the GIMP photo editing software. At 17 he finally learned CAD and began using Fusion 360.
Christophe installing siding on his family house.
Christophe’s first construction project
Christophe at age 10 with the California Highway Patrol Golden Gate Air Division.
Christophe with his maternal grandparents and brother in Hyères-les-Palmiers
Christophe’s father, Thomas Pochari Jr, is a publisher at worldaffairsmonthly.com, destructivecapital.com, bottleneckanimal.com, and monitoringrisk.com. He is active on Twitter at @_brainscience and @destructive_cap. At age 16 he wrote a letter to Edward O. Wilson. In 2009, Joe McNamara, spokesman for Perot Systems, said to Christophe’s father: “Ross Perot thinks you will be considered one of the most powerful, creative, and original scientific thinkers in the history of the world, maybe the most”. Ross Perot wanted to print his online media business, but Michael Dell bought Perot Systems a week later. The aide to Deng Xiaoping called him on his home number in Carmel in 1993. Thomas Jr worked as an independent journalist and interviewed the founders of Hamas and Saddam Hussein’s ambassador; his writings have been translated into Arabic and Chinese. He was invited on Al-Jazeera, and he also met the CIA director William Casey in Paris. Among the many important people he interviewed was Vint Cerf, considered the “father of the internet”. He is acquainted with Frank George Wisner II, the son of one of the founders of the CIA. He was acquainted with Eric Burgess, a friend of Arthur C. Clarke, who spoke of synthetic hydrocarbon production, something Christophe has been obsessed with since a young age. Muammar al-Gaddafi was the first subscriber to World Affairs Monthly. Thomas was acquainted with the Swiss central bank governor Markus Lusser. He has received letters from Richard Nixon and Henry Kissinger, and he was also acquainted with Marc Faber, a famed investor and founder of a hospital in Zurich. He wrote extensively on the Cold War, foreign relations, and the Middle East. He knew Herbert P. McLaughlin, the founder of KMD Architects in San Francisco. He built the first website with audio and video in 2002, before the BBC and YouTube. He was also acquainted with Richard Carl Fuisz (born December 12, 1939), a Slovenian-American physician, inventor, and entrepreneur with connections to the United States military and intelligence community. Fuisz holds more than two hundred patents worldwide, in such diverse fields as drug delivery, interactive media, and cryptography, and has lectured on these topics internationally. While Thomas is more of a diplomat than an engineer, he did correctly anticipate the increase in energy and commodity prices at the beginning of the 21st century and made a bet against Julian Simon in the famous “Simon-Ehrlich wager”. He was actively involved in the subject of peak oil, having interviewed petroleum geologist Colin J. Campbell. He was the first to propose a global satellite internet system in the early 2000s, before Elon Musk’s Starlink, but favored instead geostationary orbit; he named the concept “Xipho”, Greek for sword, foreseeing its disruptive potential. The Pochari name is of Albanian and perhaps Aromanian origin. The Illyrian tribes who fled the Turkish Muslim invaders into the mountains came to be called the Pocharis; they were likely originally pot makers, and a familial history of intransigence characterizes the family.
The family is likely originally from the town of Moscopole or Voskopoja, a town west of Korçë famed for its great number of scholars and the only printing press outside of Istanbul during the Ottoman Empire. The New Academy or Greek Academy (Greek: Νέα Ἀκαδημία, Ελληνικό Φροντιστήριο) was a renowned educational institution, operating from 1743 to 1769 in Moscopole, an 18th-century cultural and commercial metropolis of the Aromanians and a leading center of Greek culture in what is now southern Albania. It was nicknamed the “worthiest jewel of the city” and played a very active role in the inception of the modern Greek Enlightenment movement. Christophe’s paternal grandfather, Thomas Pochari Sr, was born in New York City in 1930 and is currently 93 years old; he briefly attended Cornell and later the U.S. Naval Academy, where he graduated at the top of his class. He was accepted to Harvard Business School but could not attend, as his father declined to pay the high tuition cost. After graduating from the Naval Academy, he was transferred to the U.S. Air Force, as he got seasick on the ships, and was assigned to Lowry Air Force Base in Colorado. Before Lowry, in March 1954, he was in charge of a group of combat bombers at Osan Air Base in South Korea and later at Kadena Air Base, Okinawa. In Korea, at the age of 24, he was responsible for launching 25 North American F-86 Sabres from the base. Prior to takeoff, he had to check everything related to the fire systems, radars, etc. He was appointed Brigadier General on March 15, 1983, and was in charge of logistics in the event of a war at Travis Air Force Base in the Air Force reserves. During his time in the Armed Forces, he was awarded the “Medal of Service in Defense of the Nation”, “Medal of Service in Korea”, “Life Service Tape in the Air Force”, “Medal of the Armed Reserve Forces”, and “United Nations Service Medal”. Today he is a real estate and equities investor with a portfolio of a dozen residential properties, including a home on Carmel Point in Carmel-by-the-Sea, and numerous stock holdings. He is an expert on life extension, vitamins, the stock market, and stock analysis. Pochari was involved with the U-2 program as deputy director at NASA Ames Research Center, with magnetometer development for the lunar lander, as technical manager of the Luster flight payload, and with a helicopter crash report investigation for a UH-1B: Pochari, Thomas R.; Pitts, Samuel W.; Hodges, Dewey H.; and Nelson, Howard G.: “Report of Accident Investigation Board for UH-1B Helicopter,” NASA Ames Research Center, June 29, 1978. See also: Sampling with a Luster Sounding Rocket during a Leonid Meteor Shower, National Aeronautics and Space Administration, Ames Research Center, Moffett Field, California. The Pochari name is on the list of names of the design and engineering participants left on the moon during the Apollo program. Thomas Pochari Sr’s father, Christophe’s great-grandfather, Christachi “Christ” Pochari, founded a highly successful restaurant in Midtown Manhattan at 52nd Street and 5th Avenue, which became a favorite for CBS journalists; he was acquainted with Jacob Javits, who appointed his son to Annapolis. The family still owns the 2,200-square-foot property, worth an estimated $60 million. The family owns several moon rocks. Thomas Sr’s first cousin on his mother’s (Pavlina Ververi’s) side of the family is Edmond Leka, the founder of Union Bank Albania. Thomas Pochari Sr’s maternal grandfather, a merchant, owned the largest house in the town of Korçë, Albania.
“Union Bank was founded in 2005 by the Union Financiar Tirane (Financial Union of Tirana – UFT). The initial capital was 17.6 million EUR and the bank had 7 branches in Tirana, Durrës, Elbasan, Fushë Krujë and Fier. In 2008 the European Bank for Reconstruction and Development bought 12.5% of the shares, becoming the 2nd largest shareholder, while the other 87.5% is owned by the Albanian shareholders. In 2008 the Bank’s total assets exceeded 100 million EUR. In the following years, the total assets further increased, reaching the amount of 256 million EUR in 2014. The Bank’s main strategy is to further expand its network and increase its lending activities, with particular focus on the SME sector. The EBRD helps Union Bank, by developing and financing its portfolio and strengthening the bank’s funding base”. Leka is an electrical engineer who has published on the Albanian power industry: “Power Industry in Albania and Its Way Through the Reform to Market Economies”, 1994, 92 pages, publisher: Institut für Höhere Studien. He also co-authored the paper “Aerial Photography and Parcel Mapping for Immovable Property Registration in Albania”, 1997, 10 pages, publisher: Land Tenure Center, University of Wisconsin-Madison. Edmond and Niko Leka are among the wealthiest Albanians, and they also have holdings in media. Regrettably, their connections with the Soros Foundation (Edmond was the chair of the “Open Society Foundation”) make them an arch enemy of the Pochari family; they currently live in Austria for security reasons, as they fear kidnapping and assassination in their home country, in which they maintain a high-security compound. Moreover, their wealth is derived from laundering marijuana income into Europe; they have ties with the Albanian mafia and have tried silencing my father and me with fraudulent and illegitimate restraining orders.
On Christophe’s mother’s side, he is descended from a highly regarded French noble family tracing its ancestry back to 1247. The family is from the southwest of France, just north of the Pyrenees; the region is known today as “Aerospace Valley” for its high concentration of aircraft manufacturing activity. The Toulouse region was once home to Sud Aviation and Aerospatiale, and now to Airbus. Through the Rivals-Mazères family, he is a descendant of Hugh Capet, founder of the House of Capet and King of the Franks from 987 to 996. “The dynasty he founded ruled France for nearly three and a half centuries from 987 to 1328 in the senior line, and until 1848 via cadet branches”. He is related to Guillaume de Rivals-Mazères, a French aviator who flew for the Vichy Air Force, a graduate of the École spéciale militaire de Saint-Cyr, and a lieutenant general who served as vice commander of the Fourth Allied Tactical Air Base in Reims. Guillaume married Zizi du Manoir, an alpine skier, ice hockey player, and field hockey player.
Zizi du Manoir https://fr.wikipedia.org/wiki/Zizi_du_Manoir
Christophe is also a descendant of the famous playwright Jean Racine; see Jean Racine et sa Descendance, by Arnaud Chaffanjon, 1964. One of Christophe’s ancestors was Jean de Rivals-Mazères, born circa 1664, died 10 November 1738 in Fiac, Tarn, Midi-Pyrénées, France. He was a doctor of law and a lawyer who served as King Louis XIV’s counselor and the tax collector (Ferme générale) in the Diocese (ecclesiastical district) of Carcassonne, department of Aude, region of Occitania. He is also related to Frédéric François-Marsal, who served as minister of Finance for ten years and prime minister of France for three days. Marsal sat on the board of directors of numerous companies in banking (BUP, Banque d’Alsace et de Lorraine, Banque générale du Nord), real estate, metallurgy (Forges d’Alais, Electro-Câble, Tréfileries and Le Havre rolling mills), colonial ventures, etc. He joined the prestigious board of the Compagnie Universelle du Canal Maritime de Suez in 1927 and chaired the boards of several firms: Electro-Câble, a company for the equipment of railways and large electrical networks; the Société Commerciale de l’Ouest Africain, which he had administered since 1921; and the Compagnie des Vignobles de la Méditerranée (vineyards in Algeria). He became president of a powerful colonial lobby, the Union Coloniale Française, in 1927. The following year, in May 1928, he was elected to the Academy of Moral and Political Sciences, in the chair of Charles Jonnart. Our aerospace propulsion focus is a reflection of the de Rivals-Mazères family’s long history in French aviation. Christophe’s maternal grandfather was an ophthalmologist and artist who spent much of his free time gazing at the planes taking off at the Hyères-les-Palmiers airport. His favorite aircraft was the Douglas DC-3 Dakota. Whenever a plane could be heard in the distance, his wife would shout “avión” and he would run out to look. His thesis was titled “Diagnostic Précoce du Glaucome Simple (à Angle Ouvert) En Pratique Quotidienne Avec 32 Observations Cliniques”. He published the papers “Blessures Oculaires”, by Rivals-Mazères A., Singer B., and Deschatres F., Vie Médicale, France, 1972, Vol. 53, No. 19, pp. 2429-2439; and “Le Laser et l’Œil du Sujet Âgé” (The Laser and the Eye of the Elderly Subject), by De Rivals-Mazères, A., La Revue de Gériatrie, 1984, Vol. 9, No. 8, pp. 406-411, ISSN 0397-7927, in the field of gerontology and geriatrics.
Alain also served as a flight doctor in Algeria aboard the Aérospatiale Alouette II, treating wounded French soldiers and Algerian civilians. He was personal physician to President Georges Pompidou for one year. He was an avid science fiction reader and believed in Lamarckian evolution and the theories of Pierre Teilhard de Chardin. Through marriage, the de Rivals family is related to Henri de Toulouse-Lautrec and Lyska Kostio de Warkoffska, a Russian-French fashion designer. Another relative through marriage is Alain Marie Guynot de Boismenu (27 December 1870 - 5 November 1953). De Boismenu was a French Roman Catholic prelate who served as the Vicar Apostolic of Papua from 1908 until his retirement in 1945; he was a professed member of the Missionaries of the Sacred Heart and the founder of the Handmaids of the Lord. He studied under the De La Salle Brothers before beginning his religious formation in Belgium, where he did his studies for the priesthood. He served for a brief period as a teacher before being sent in 1897 to Papua New Guinea to aid in the missions there; he also served the ailing apostolic vicar and was soon after made his coadjutor with the right of succession. His stewardship of the apostolic vicariate saw the number of missions and catechists increase, and his tenure also saw the establishment of new schools and a training center for catechists. The de Rivals family is prominent in French aviation to this day: Géraud de Rivals-Mazères is regional flight safety director for ATR (Avions de transport régional) and worked for Airbus as a flight operations analyst. Elie de Rivals-Mazères was, from 1992 to 1999, a fighter pilot in the Escadron de Chasse 1/3 Navarre, a fighter squadron in Nancy, flying the Dassault Mirage III and Dassault Mirage 2000N. Elie served as deputy commander of the Base aérienne 709 Cognac-Châteaubernard from 2010 to 2017 and is now an aviation claim surveyor at ERGET Group. The paper “Dual Mode Inverse Control and Stabilization of Flexible Space Robots”, authored by Geraud de Rivals-Mazeres, describes what is called the “nonlinear inversion technique” and was published in a journal of the American Institute of Aeronautics and Astronautics. A book on computer science, titled “Propos informatiques pour non informaticiens”, was written by François de Rivals-Mazeres, 1973, 99 pages, publisher: Presses du temps présent. A violin was made by Gand & Bernardel Frères, Paris, in 1870 for Victor de Rivals.
http://www.isabellesviolins.com/gandbernardel/?photo=Back
“According to the Gand & Bernardel archives, violin number 535 was “tabled” on May 6, 1870 and reserved for Mr. de Rivals, who bought it for 240 francs. Victor de Rivals was a violinist at the Société des Concerts du Conservatoire in Paris (one of the first professional symphony orchestras). He had played in its first-violin section from the founding year, 1828, to his retirement in 1864. He also had been a client of the famed Paris violin shop since 1828, when it belonged to Charles-François Gand, the father. It seems he finally decided, after retiring, to commission a violin from the Gand family. The label states, “Luthiers de la Musique de l’Empereur.” The Emperor’s Music was Napoleon III’s private orchestra, which got dismantled in 1871 after the fall of the Second Empire.
Could our Victor de Rivals be the Mr. de Rivals-Mazères whose name is attached to the Stradivari violin “Tua, Marie Soldat, Rivals-Mazères de Toulouse” of 1708? The Cozio Archive tells us this Stradivari belonged to Mr. de Rivals Mazere in 1880. The Gand & Bernardel books tell us this same Stradivari was sold by Mr. de Rivals-Mazères in 1885. It is tempting to speculate that after Victor de Rivals passed away, his heirs from Toulouse — who used the full name “de Rivals–Mazères” — simply sold his violins. In fact, on February 28, 1882 the Gand & Bernardel house sold our violin #535 to Mr. Gross at Le Havre. And on December 29, 1885 it sold the Stradivari to Teresina Tua. (By the way, she paid 8,000 francs for it, whereas the Rivals family only received 5,000.)”
Christophe Pochari’s brother, Sebastien Pochari, is a commercial helicopter pilot currently flying Bell 206L3s in New York. He has a YouTube channel called “TheHelicopterPerspective”. Christophe’s uncle on his grandmother’s side is the famous Danish ceramic artist Morten Løbner Espersen, and one of his cousins is rapidly becoming a famous musician in Denmark; his band is called “FRAADs”. Another relative on his mother’s Danish side of the family works as the office manager at the National Emergency Management Agency; she attended the Danish Pharmaceutical University. Christophe’s paternal great-great-grandfather designed the logo for Del Monte Foods.
Alain Christian Victor de Rivals-Mazères, physician and artist, born in the 16th arrondissement of Paris in 1934, died in 2019.
Doctor de Rivals-Mazères’s private ophthalmology practice in Hyères-les-Palmiers.
Thomas Pochari Jr on Al-Jazeera in 2001
Brigadier General Thomas Richard Pochari Sr.
Lieutenant General Guillaume de Rivals-Mazères pictured on the right above and the left below.
The Dewoitine D.520, the plane that Guillaume flew.
The plane that Guillaume piloted crashed in North Africa after sustaining enemy fire. Le 7 juillet 1941, le capitaine RIVALS-MAZÈRES doit effectivement se poser en panne dans le désert, sans dommage, et effectuer une marche de 30 km pour trouver du secours ; mais son appareil était le n°302 codé « 30 », sans bande des as, et il sera récupéré ensuite par la « France Libre » et utilisé par les « FA.F.L ». On July 7, 1941, Captain RIVALS-MAZÈRES did indeed have to make a forced landing in the desert, without damage, and walk 30 km to find help; but his aircraft was No. 302, coded “30”, without the ace’s stripe, and it was later recovered by the “France Libre” forces and used by the “F.A.F.L”.
Some of the published writings and evolutionary-political theory of Christophe’s father.
Christophe’s father’s unpublished thesis on the Cold War
The de Rivals-Mazères coat of arms; this title was not purchased, it was earned through superior wisdom, bravery, and character in 13th-century Frankish society.
Château de Fiac, where the de Rivals family once lived.
The Boisleux-au-Mont castle, destroyed in WW1
Christophe’s maternal great-grandfather, third from right, Ejnar Jacob Madsen, a lieutenant in the Danish Army, photographer, and rail station chief. He married Dagny Sophie Jorgensen, whose family owned a dairy processing facility in the town of Nordrup outside of Copenhagen; they were the only family in the town to own a car, a Ford Model T, as well as a telephone.
Lise Løbner-Madsen, Christophe’s maternal grandmother
Diane Patricia Thorne, Christophe’s paternal grandmother
Arrhenius’s Demon: The Chimera of the Greenhouse effect
Introduction
Note: The radiative heat transfer equation based on the Stefan-Boltzmann 4th-power law is erroneous and cannot be relied upon. The only way to measure radiative heat transfer is by measuring the intensity of infrared radiation with electrically sensitive instruments.
The Ultraviolet Catastrophe Illustrated.
The Stefan-Boltzmann law states that radiation intensity scales with the 4th power of temperature, drastically overestimating radiation at high temperatures. If we heat a one-cubic-meter cube to 2,000°C, it radiates 1,483 kW/m²; since a one-meter cube has 6 square meters of surface, it would be radiating 8,890 kW, or nearly 9 megawatts of power! Clearly, this is impossible, because it would mean heating and melting metal would be physically impossible: the metal would cool through radiation faster than it can be heated. To heat 7,860 kg of steel to 2,000°C in one hour, we need to impart 2,032 kWh of thermal energy, far less than the roughly 8,900 kWh we would radiate over that same hour. The Stefan-Boltzmann law is wrong and must be modified. Rather than quantizing radiation as Planck did, we can simply assign it a non-linear exponent, where a rise in temperature is accompanied by a reduction in the sharpness of the slope. It therefore appears as if the entire greenhouse effect fallacy is caused not only by the confusion over power and energy and its amplifiability, but also by the incorrect mathematical formulation of radiative heat transfer. If the Stefan-Boltzmann law based on the 4th-power exponent were true, hot bodies would cool within seconds and nothing could be heated; lava would solidify immediately, and smelting iron, melting glass, or any other high-temperature process would be impossible!
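For readers who want to check this arithmetic themselves, the short sketch below reproduces it under assumed values (an emissivity of 1, a steel specific heat of 0.49 kJ/kg·K, heating from 20°C); the exact figures quoted above shift slightly with different assumptions.

```python
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2*K^4)

T = 2000 + 273.15         # cube temperature, K
flux = SIGMA * T**4       # radiated flux per square meter, W/m^2 (~1,500 kW/m^2)
power = 6.0 * flux        # six 1 m^2 faces of the cube, W (~9 MW)

# Energy needed to heat 7,860 kg of steel (assumed c = 0.49 kJ/kg*K) from 20 C to 2,000 C
mass, c = 7860.0, 490.0                    # kg, J/(kg*K)
heat_energy = mass * c * (2000.0 - 20.0)   # J (~2,100 kWh)
radiated_in_one_hour = power * 3600.0      # J

print(f"flux     : {flux / 1e3:,.0f} kW/m^2")
print(f"power    : {power / 1e6:,.2f} MW")
print(f"to heat  : {heat_energy / 3.6e6:,.0f} kWh")
print(f"radiated : {radiated_in_one_hour / 3.6e6:,.0f} kWh in one hour")
```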
In August of 2021, I became suspicious that perhaps the entire greenhouse effect was suspect and decided to see if anyone had managed to refute it. I searched the term “greenhouse effect falsified” and found a number of interesting results in Google Scholar. At the time, I had a difficult time believing that each and every expert and Ph.D. academic could be so wrong. I kept thinking in the back of my mind, “this cannot be, the whole thing is a fraud?” But upon reading the fascinating articles and blog posts put together by the slayers, I immediately identified the origin of the century-long confusion: the conflation of energy and power. A number of individuals in the 21st century have put the greenhouse effect theory into question. The first serious effort to refute the greenhouse effect is the now quite famous “G&T” paper by Gerhard Gerlich and Ralf D. Tscheuschner. Although it is not known who was the first to refute the greenhouse effect, I have found no articles or papers in the Google Books archive during the entire 20th century, except for some arguments made by the quite kooky psychoanalyst Immanuel Velikovsky. In fact, I cannot find evidence that anyone had ever seriously questioned (serious defined by scientific papers or articles published) Arrhenius, Tyndall, or Poynting during the 19th and early 20th centuries. This is likely because atmospheric science remained largely obscure and occupied little time in the minds of natural philosophers, physicists, and what we now call “scientists”. It appears that it took the increased discussion of the greenhouse effect during the global warming scare, driven by Al Gore’s propaganda, to get people to finally scrutinize it. With the introduction of the internet and the growth of the “blogosphere”, individuals could contribute outside of the scientific guild. Those who “deny” the greenhouse effect go by the term “slayers”. They accrued the name “slayers” after the title of the first-ever book refuting the greenhouse effect: “Slaying the Sky Dragon: Death of the Greenhouse Gas Theory”, by John O’Sullivan. So far, I have found only the following publications challenging the fundamental assumptions of the greenhouse effect: Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics, by Gerhard Gerlich; The Greenhouse Effect as a Function of Atmospheric Mass, by Hans Jelbring; There is no Radiative Greenhouse Effect, by Joseph Postma; No “Greenhouse Effect” is Possible from the way the Intergovernmental Panel on Climate Change Defines it, by John Elliston; Refutation of the “Greenhouse Effect” Theory on a Thermodynamic and Hydrostatic basis, by Alberto Miatello; The Adiabatic Theory of Greenhouse Effect, by O. G. Sorokhtin; Comprehensive Refutation of the Radiative Forcing Greenhouse Hypothesis, by Douglas Cotton; Thermal Enhancement on Planetary Bodies and the Relevance of the Molar Mass Version of the Ideal Gas Law to the Null Hypothesis of Climate Change, by Robert Ian Holmes; and On the Average Temperature of Airless Spherical Bodies and the Magnitude of Earth’s Atmospheric Thermal Effect, by Ned Nikolov. In addition to these publications, the blog “Tallbloke’s Talkshop”, run by Roger Tattersall, has provided invaluable data on the gravito-thermal effect, most of which is thanks to the work of Roderich Graeff. Without the efforts of Roderich Graeff, it is unlikely anyone would have noticed the obscure gravito-thermal effect.
In the Springer book Economics of the International Coal Trade: Why Coal Continues to Power the World, by Lars Schernikau, the author briefly mentions the gravito-thermal effect and the possibility that the entire greenhouse effect is faulty.
This article is a synthesis of the largely informal and cluttered online literature on “alternative climate science”, with a special emphasis on the gravito-thermal effect. The word “alternative” is regrettable, since it implies this is just another “fringe” theory competing against a widely established and well-founded mainstream. Due to a lack of clarity in the current state of climate science, I felt it would be useful to summarize the competing theories. One could divide the “alternative climate” theorists into three broad camps. Out of all the “slayers”, the best one by far is Claes Johnson, with his fascinating resonator interpretation of radiative heat transfer.
#1: Radiative GHE refutation based on the 2nd law only, this includes Gerlich & Tscheuschner, Klaus Ermecke and the GHE slayer book authors.
#2: Gravito-thermal models. This includes Sorokhtin, Chilingar, Cotton, Nikolov and Zeller, and Huffman.
#3: “Sun only” theories. I know of only Postma who has propounded a climate theory based purely on the heating of the sun.
The first “school” focuses mainly on the deficits within the existing radiative greenhouse mechanism, and while this is important, it misses other important aspects and provides no alternative explanation. Since we are attempting to “overthrow” the dogma that the earth amplifies solar energy by slowing down cooling, once we have completely ruled out this mechanism, we can either say the earth is warmed solely by the sun or that some other previously ignored mechanism warms it above and beyond what the sun can provide. We argue that the only parsimonious mechanism allowed by our current laws of physics is a gravito-thermal mechanism. Although “sun-only” models have been proposed, they are shown to be erroneous. A great deal of work needs to be done to finally build a real science of climate, and it will take generations, since all the textbooks have to be rewritten. Millions of scientific papers, thousands of textbooks, and virtually every popular media article need to be updated so that future generations do not keep being miseducated. Most engineers working in the energy sector are also gravely misinformed. This is especially important because many politicians and engineers are incorrectly using non-baseload energy sources, wind and photovoltaics, otherwise useful technologies, to decarbonize, as opposed to supplementing and hedging against uncertain future hydrocarbon supplies.
Does the greenhouse effect’s falsity suggest parallels in other scientific domains? It is an indictment of modern science that the backbone of climatology, the science that deals with the climate of our very earth, is a vacuous mess.
What other areas of science could be predicated entirely on a completely erroneous foundation? Excluding theoretical physics, which is a den of mysticism, we should turn to more practical and real-world theories, those that try to explain observable, measurable phenomena. Which other mainstream postulates or theories could be suspect?
It does seem as if the greenhouse effect was somewhat unique, since it was one of the few physical theories that, while untested and speculative, fulfilled some mental desire, and due to its relative insignificance prior to the 21st century, did not garner the attention needed for a swift refutation. Few other theories so deeply ingrained in society could have perpetuated for so long on a false foundation, because most axioms of modern science are empirical, simply updated versions of the 19th-century Victorian methods of rigor and confirmation. The greenhouse effect is truly the outlier: something that caught the attention of one of the weaker fields within science, climate, but never the attention of the engineer, the actual thermodynamicist, or the physicist who built real, useful machines. As John O’Sullivan said, the greenhouse effect was never something observed by actual “applied scientists” who worked with CO2, industrial heaters, heat transfer fluids, cooling systems, insulation, etc. It is implausible that the marvelous “insulating” properties of this wonder gas would not have been noticed by experimentalists in over a century. As we’ve mentioned before, if one searches the terms “greenhouse effect wrong, false, refuted, erroneous, impossible, violates thermodynamics”, etc., no scientific papers, journal articles, or discussions are retrieved from the Google Books archive, suggesting that this theory received little attention. Wood’s experiment doesn’t count, because all he says is that the real greenhouse does not work via infrared trapping; he says nothing of the atmosphere, or that the entire thing violates the conservation of energy by magically doubling energy flux. The only record I could find is one mention by Velikovsky, claiming that the greenhouse effect violated the 2nd law of thermodynamics.
“I have previously raised objections to the greenhouse theory though most have been rejected for publication. But recently even the greenhouse advocates have begun to note certain problems. Suomi et. al. [in the Journal of Geophysical Research, Vol. 85 (1980), pp. 8200-8213] notes that most of the visible radiation is absorbed in the upper atmosphere of Venus so that the heat source [the cloud cover] is at a low temperature while the heat sink [the surface] is at a high temperature, in apparent violation of the second law of thermodynamics.”
Carl Sagan and Immanuel Velikovsky, By Charles Ginenthal
“Later efforts by astronomers to account for the high temperatures by means of a “runaway greenhouse effect” were denounced by Velikovsky as clumsy groping – “completely unsupportable” he called it in 1974, adding that such an idea was “in violation of the Second Law of Thermodynamics”
How Good Were Velikovsky’s Space and Planetary Science Predictions, Really? by James E. Oberg
The greenhouse effect is just another “superseded” theory in the history of science. Wikipedia, despite being edited by spiteful leftists, is more than willing to acknowledge the long list of superseded theories, but somehow they think this process magically stopped in the 21st century! The greenhouse gas theory will join the resting place of a very long list of now-specious theories which, at the time, were perfectly reasonable and even rational. We must be careful to avoid a “present bias”. The list of disproven theories, while not by any means expansive, includes phlogiston theory, caloric theory, geocentrism (the Ptolemaic earth), tectonic stasis (pre-Wegener geology), perpetuum mobile, Newton’s corpuscular theory of light, Lamarckism, and Haeckel’s recapitulation theory, just to name a few. Unsurprisingly, Wikipedia also lists “scientific racism” as a “superseded” theory, even though ample evidence exists for fixed racial differences in intelligence and life history speed.
We cannot accuse its mistaken founders of fraud, but we can blame the veritable army of the global warming industrial complex for systematic fraud, deception, and duplicity. Arrhenius, the god of global warming, wanted to believe that burning coal could avert another ice age and make the climate more palatable for human settlement. Those who have used the greenhouse gas theory as an excuse to “decarbonize” civilization, can indeed be accused of fraud, because they have willingly suppressed counter-evidence by censoring, firing, or rejecting challenging information, and they have knowingly falsified historical temperature data. The conclusion is that catastrophic anthropogenic global warming (CAGW) is the single largest fraud in world history, simply unparalleled in scale, scope, and magnitude by any other event. We do not know how global warming has grown to be such a monster, but one explanation is that it has been used as a political machination to spread a new form of “Bolshevism” to destroy the West.
I have decided to call the greenhouse effect “Arrhenius’s Demon” after “Maxwell’s demon”, a fictitious being that sorts gas molecules according to their velocity to generate a thermal gradient from an equilibrium.
Atmospheric climate demystified and the universality of the Gravito-Thermal effect

A “Brown Dwarf”, a perfect example of the gravito-thermal effect in action.
The confusion over the cause of earth’s temperature is in large part due to the historical omission of atmospheric pressure as a source of continuous heat. Gases possess high electrostatic repulsion, which is why they are gases to begin with. The atoms of elements that exist as solids under normal conditions strongly adhere to each other, forming crystals, but gases can only exist as solids at extremely low temperatures or extremely high pressures, in the GPa range. Many have erroneously argued that because the oceans and solids do not display a visible gravito-thermal effect, the gases in the atmosphere somehow cannot either. This is explained by the fact that liquids and solids are barely compressible, so they generate little to no heating when confined. Gas molecules possess extremely high mean velocities; a gas molecule in thermal equilibrium at ambient temperature and pressure moves at roughly 500 m/s. As the molecular density increases, the mean free path decreases and the frequency of collisions increases, since the packing density has increased, generating more heat. But since atmospheres are free to expand if they become denser, a given increase in pressure does not produce a proportional rise in temperature, since the height of the atmosphere will grow. Unsurprisingly, fusion in stars occurs when gaseous molecular clouds accrete and auto-compress under their own mass.
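As a quick check of the ~500 m/s figure, the sketch below evaluates the standard kinetic-theory speed formulas for air at 15°C, with the molar mass of dry air assumed to be about 0.029 kg/mol; the quoted figure sits between the mean and RMS speeds.

```python
import math

R = 8.314          # gas constant, J/(mol*K)
M = 0.02897        # assumed molar mass of dry air, kg/mol
T = 288.15         # 15 C in kelvin

v_mean = math.sqrt(8 * R * T / (math.pi * M))   # mean molecular speed, ~459 m/s
v_rms  = math.sqrt(3 * R * T / M)               # root-mean-square speed, ~498 m/s

print(f"mean speed: {v_mean:.0f} m/s, rms speed: {v_rms:.0f} m/s")
```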
There is nothing mysterious about the gravito-thermal effect, yet for some reason it has been clouded in mystery, poorly elucidated, and virtually ignored by most physics texts. The gravito-thermal effect is what we see happening in the stars that shine all around us. People have somehow forgotten to ask where the energy comes from to power these gigantic nuclear reactors. All the energy from fusion ultimately derives from gravity, because nuclei do not fuse on their own! We know that gas centrifuges used for enriching uranium develop a substantial thermal gradient.
Modern climate science is one of the great frauds perpetrated in the 20th century, along with relativity theory, confined fusion, and artificial intelligence.
Brief summary of the status of “dissident climate science”, or more appropriately named: “real climate science”
Most “climate denial” involves a disagreement over the degree of warming that is posited to occur from emissions of “greenhouse gases”, not whether “greenhouse gases” are even capable of imparting additional heat to the earth. The entire premise of the debate is predicated on the veracity of the greenhouse effect, so most of these debates between climate skeptics and climate alarmists, for example between a “skeptic” like William Happer and an alarmist like Raymond Pierrehumbert, rest on a vacuous foundation, and the entire debate is erroneous and meaningless. We have found ourselves in a situation where an entire generation of physicists believes in an entirely non-existent phenomenon. While we have mentioned that there exist a number of “greenhouse slayers”, they have very little visibility, and there has been no major public debate between them and the alarmists. In fact, most have never heard of the slayers, even within the relatively large “climate denial” community. Jo Nova is typical of modern AGW skeptics in that she ardently defends the greenhouse chimera and argues entirely on the terms of the alarmist dogma, quibbling only over magnitude. Other skeptics who nonetheless champion the greenhouse effect are Anthony Watts and Roy Spencer. Anthony Watts is just a weatherman and has a weak grasp of physics and thermodynamics, but Roy Spencer considers himself well-versed in these areas. Willis Eschenbach is perhaps the most glaring case study of a deluded skeptic. He went out of his way on Anthony Watts’s blog to defend Arrhenius’s Demon. In attempting to show just how brilliant the IPCC was, he created a hypothetical “steel greenhouse” in which the earth is wrapped in a thin metal layer that reflects all the outgoing radiation while absorbing all incoming radiation. Below is an illustration of Eschenbach’s “steel greenhouse”. Apparently, he, Watts, and virtually every “climate scientist” believe it is possible to simply double the incoming radiation by nothing more than reflecting it. It has evidently not dawned on them that no lens, mirror, reflector, radiant barrier, or surface in existence has ever been shown to increase the power density of radiative flux, whether it is UV, infrared, or gamma rays.
#1: There is no greenhouse effect, as it violates the conservation of energy. The theory originated from the confusion that energy flux, or power, could be amplified by “slowing down cooling”. The grave error was believing that slowing down heat rejection could raise the steady-state temperature of a continuously radiated body without the addition of work. Earth’s temperature is a full 15°C warmer than the roughly -1.1°C that solar radiation alone can support.
#2: The gravito-thermal effect, coined by Roderich Graeff, provides the preponderance of the above-zero temperature on earth. The gravito-thermal effect is simply the gravitational confinement of gas molecules, which produces kinetic energy and releases heat through collisions between gas molecules. The gravito-thermal effect can predict the atmospheric lapse rate and surface temperature with nearly 100% accuracy using the ideal gas law, for both Earth and Venus. The “adiabatic lapse rate” is not some artificially generated number derived from the ideal gas law: static air temperature gauges on cruising airliners measure a temperature almost identical to that predicted by the ideal gas law. In fact, current theory cannot even explain the cause of the lapse rate; various nebulous concepts such as convective cooling or “radiative height” are proposed, but none of these explanations can be correct if we can predict the lapse rate perfectly with the ideal gas law. The original atmospheric-driven climate theory proposed by Oleg Georgievich Sorokhtin, later articulated in the West by the independent researcher Douglas Cotton, is the only veridical mechanism and the only known solution compatible with current physical laws that can account for the temperature of the earth and other planetary bodies. The gravito-thermal effect produces 72.46 W/m², while the sun provides 303 W/m²; the sun therefore accounts for about 78% of the earth’s thermal budget while the atmosphere accounts for the remaining 22%.
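The figures quoted in points #1 and #2 can be checked in a few lines; the sketch below simply re-derives them from the values stated in this article (an absorbed solar flux of 303 W/m², an emissivity of 0.975, sea-level pressure of 101.325 kPa, and a molar density of 42.2938 mol/m³), which are assumptions carried over from the text rather than standard reference values.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2*K^4)
R     = 8.314      # gas constant, J/(mol*K)

# Solar-only equilibrium temperature from the Stefan-Boltzmann law
F_sun, eps = 303.0, 0.975
T_sun_only = (F_sun / (eps * SIGMA)) ** 0.25
print(f"solar-only temperature  : {T_sun_only - 273.15:6.2f} C")   # about -1 C

# Ideal-gas-law temperature of surface air: T = P / (n R)
P, n = 101325.0, 42.2938
T_gas = P / (n * R)
print(f"ideal-gas surface temp  : {T_gas - 273.15:6.2f} C")        # about 15 C

# Flux the surface must emit at the observed 13.9 C mean, and the remainder
# the text attributes to the gravito-thermal effect
T_obs = 13.9 + 273.15
F_out = eps * SIGMA * T_obs ** 4
print(f"emitted at 13.9 C       : {F_out:6.1f} W/m^2")             # about 375 W/m^2
print(f"remainder (F_out-F_sun) : {F_out - F_sun:6.1f} W/m^2")     # about 72 W/m^2
```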
#3: The moon’s temperature is likely much higher than currently assumed, with solar radiation predicting a mean surface temperature of between 10 and 12°C depending on the exact emissivity value. Current estimates place the mean lunar temperature at between minus 24 and minus 30°C, but this would mean the moon only receives 194 W/m², assuming an emissivity of 0.98, requiring it to have an albedo of 0.47. It is preposterous that the moon could have such a high albedo, so either the current temperature estimates produced by probes are way off, or the moon has a much higher reflectivity, failing to absorb perhaps the more energetic portion (UV, UV-C, visible) of the sun’s spectrum. The moon can be seen to be very reflective from earth, glowing a bright yellowish color; this may be because it reflects more energy. Either way, the probes are way off or the moon reflects more energy, because no stellar body can absorb more or less radiation than its spherical “unwrapped” surface area allows, as this would violate the conservation of energy. The only possible solution to this problem is that when radiation hits a body at a shallower angle of incidence (as at the poles), more of it is reflected for a given emissivity value, resulting in a less-than-theoretical absorbed power density. This is not something that has been mentioned before as a solution to some of the temperature paradoxes.
#4: The present concept of an albedo of 0.44 is entirely erroneous and serves only to underestimate the heating power of the sun. The earth receives at least 300 W/m², because the gravito-thermal effect only generates 75 W/m², yet the earth must radiate close to or exactly 375 W/m², depending on the exact absorptivity value, since our thermometers do not lie: the earth is at 13.9°C, and there is no arguing with this number. The albedo has been deliberately overestimated by excluding the entire 55% of the spectrum that is infrared, in order to show that a “greenhouse effect” is absolutely required to generate a warm climate.
#5: Using the ideal gas law, the temperature estimates of the Mesozoic can be explained by a denser atmosphere. In fact, since solar radiation should not have been much more intense, the ideal gas law can be used to predict with near-perfect accuracy the density of the Mesozoic atmosphere simply by using the isotope records. The Paleocene-Eocene Thermal Maximum may have featured temperatures as much as 13°C hotter than today, or 28°C, as recently as 50 Myr ago. In order to arrive at the required pressure and density, we can simply construct a continuum from the sea-level pressure and temperature. To do this, we must establish the hydrostatic pressure gradient. A linear hydrostatic gradient is only valid for incompressible solids; compressible columns “densify” with depth. I have performed this calculation up to a temperature of 25.2°C. Because the calculation is performed manually, it is very time consuming; I plan on continuing to a temperature of 30°C, equivalent to Mesozoic temperatures. From the chart below you can see that an increase in atmospheric density of only 15.07% generates an additional 10.2°C of surface temperature. Robert Dudley argues the oxygen concentration of the late Paleozoic atmosphere may have risen as high as 35%; assuming nitrogen levels are largely fixed, since nitrogen is unreactive, this would have resulted in an atmosphere with a density 12.6% higher, but the actual number is likely much higher, since the high temperatures of the Phanerozoic necessitate a denser atmosphere. The origin of atmospheric nitrogen is quite mysterious: nitrogen is sparse in the crust and does not form compounds easily, and the only abundant nitrogenous compounds are ammonium ions, which have been bound to silicates and liberated during subduction and volcanic activity. The temperature lapse rate with altitude is a constant value, since gas molecules evenly segregate according to the local force that confines them together. But the relationships between pressure, density, and temperature are not linear and can only be arrived at by performing an individual calculation for each hypothetical gas layer and generating a mean density for the layer above it to predict the amount of compression. With the amount of compression per layer established, it is then possible to use this pressure value to arrive at the density. The calculation is very simple: use a constant thermal gradient of 0.006°C/m and average the density of each increment of gas layer, as sketched below. The ideal gas law cannot predict pressure and density with temperature alone; you cannot just “solve” for density and pressure with temperature as the only known variable, you must establish pressure as well, and this can only be done by knowing the mass above the gas. I have not found a closed-form exponent that arrives at this number; the calculation has to be performed individually for each discrete layer.
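A minimal sketch of the layer-by-layer procedure described above, assuming a 0.006 K/m lapse rate, a 10 m layer thickness, and present sea-level starting conditions. The exact density increase it prints will differ somewhat from the 15.07% quoted in the chart, since the step size, starting values, and rounding are assumptions.

```python
R, M, g = 8.314, 0.02897, 9.81     # gas constant, molar mass of air (kg/mol), gravity (m/s^2)
LAPSE, DZ = 0.006, 10.0            # assumed lapse rate (K/m) and layer thickness (m)

T, P = 288.15, 101325.0            # present sea-level temperature (K) and pressure (Pa)
rho = P * M / (R * T)              # sea-level density from the ideal gas law, kg/m^3
rho0 = rho

depth = 0.0
while T < 288.15 + 10.2:           # step downward until the surface is 10.2 K warmer
    T_next = T + LAPSE * DZ
    rho_next = rho                 # initial guess for the new layer's density
    for _ in range(3):             # a few fixed-point iterations per layer
        P_next = P + 0.5 * (rho + rho_next) * g * DZ   # hydrostatic increment with mean density
        rho_next = P_next * M / (R * T_next)           # ideal-gas density of the new layer
    T, P, rho = T_next, P_next, rho_next
    depth += DZ

print(f"depth added         : {depth:.0f} m")
print(f"surface temperature : {T - 273.15:.1f} C")
print(f"surface pressure    : {P / 1000:.1f} kPa")
print(f"density increase    : {100 * (rho / rho0 - 1):.1f} %")
```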
If we hypothetically dug out an entire cavern in the earth a few kilometers deep, the atmosphere would not increase in density; it would simply “fall down” and reach a lower altitude, and the sea-level pressure wouldn’t change. Conversely, by adding mass, the denser atmosphere reaches a greater altitude and extends further into space. Current atmospheric losses to space are about 90 tons annually, or less than a millionth of the atmosphere’s mass over 50 million years. Clearly, some form of mineralization or solidification transpired whereby gaseous oxygen ended up bound into solids. Certain chemical processes removing the highly reactive oxygen and forming solids must have occurred, starting during the Mesozoic. An alternative scenario is that gigantic chunks of the atmosphere were ripped away during the roughly 450,000-year geomagnetic reversal intervals, when the earth is most vulnerable to solar energetic particles. Geomagnetic reversals are thought to leave the earth with a much weaker temporary magnetic field, which could generate Mars-like erosion of the atmosphere. The last reversal, called the “Brunhes-Matuyama reversal”, was 780,000 years ago. The duration of a geomagnetic reversal is thought to be around 7,000 years. For a polarity reversal to occur, a reduction in the field’s strength of 90% is required. Estimates place the number of geomagnetic reversals at a minimum of 183 over the past 83 Myr. Biomass generally contains 30-40% oxygen, and since bound oxygen does not appear to be released back into the atmosphere during its decomposition into peat and other fossil materials, it is conceivable that much of the paleo-atmosphere’s mass is bound up in oxidized organic matter buried in the crust as sedimentary rock, with only a tiny fraction reduced into hydrocarbons. Organic matter is thus an “oxygen sink”.
#6: Short-term climate trends can only be explained by solar variation, since atmospheric pressure only changes over very long periods of time due to mineralization of oxygen. A tiny change in solar irradiance of ±3 W/m² can produce a temperature change of 0.7°C, and a 10 W/m² difference in solar irradiance drops the surface temperature by 2.3°C, enough to cause a mild glaciation. But there is no evidence that fluctuations in the magnetic activity of the photosphere alone can produce such changes, requiring an intermediate mechanism, namely cosmic ray spallation of aerosols.
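One way to reproduce the quoted sensitivities is to scale the mean surface temperature with the fourth root of the absorbed solar flux, taken here to be the 303 W/m² stated earlier; the sketch below is only that arithmetic, under an assumed 13.9°C baseline, and not necessarily the exact method behind the quoted figures.

```python
F0 = 303.0            # assumed absorbed solar flux, W/m^2 (from the text)
T0 = 273.15 + 13.9    # assumed mean surface temperature, K

for dF in (3.0, 10.0):
    T_new = T0 * ((F0 - dF) / F0) ** 0.25   # scale T with the 4th root of the solar flux
    print(f"dF = -{dF:4.1f} W/m^2  ->  dT = {T_new - T0:+.2f} C")
# prints roughly -0.7 C and -2.4 C, comparable to the figures quoted above
```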
#7: Joseph Postma’s theory of dividing solar radiation by two is valid only geometrically, but it does not change the temperature, because geometry, tilt, and rotation speed do not affect the total delivered insolation or power density. The real “flat earth” theory is the removal of infrared and the fake “albedo” of 0.44. Postma attempted to increase the available power density of the sun by averaging it over a smaller area, but this cannot increase temperature, since there is still the other half of the sphere radiating freely into space. There is simply no way to construct a workable “sun-only” model of climate.
#8: The gravito-thermal effect, as predicted by Roderich Graeff, is indeed a source of infinite work, but it does not violate the 2nd law, since the work is derived from the continuous exertion of gravitational acceleration. This is something Maxwell and Boltzmann were wrong about. Gravitational acceleration on earth, which is quite strong at 9.8 m/s², provides an infinite source of work to generate heat, just as brown dwarfs glow red due to gravitational compression, or molecular clouds collapse to form nuclear cores. Brown dwarfs typically have surface temperatures of around 730°C.
#9: Venus would have a temperature of 40°C without its dense 91-bar atmosphere, but Venus’s true temperature is likely closer to the 480°C predicted by the ideal gas law, although the supercritical, quasi-liquid nature of the Venusian atmosphere may somewhat compromise its accuracy at low altitudes. Denser atmospheres extend further into space, that is, they are “taller”, but they should not have a significantly different thermal gradient or “lapse rate”.
We can now finally answer: does CO2 cool or warm the earth? Strictly speaking, radiatively it can do neither, because it is utterly incapable of changing the energy flux. Some may argue that because the partial pressure of the atmosphere increases due to the addition of carbon, releasing CO2 increases the density of the atmosphere and could produce a tiny amount of warming. It turns out that because hydrocarbons contain a substantial amount of hydrogen, and hydrogen forms water when combusted, the net result of hydrocarbon combustion is a reduction in atmospheric pressure and hence temperature, although the magnitude of this effect is extremely small. How ironic that our three-century-long voracious appetite for carbon has cooled our climate by a few microkelvin.
By burning hydrocarbons, hydrogen converts atmospheric oxygen into liquid water, which is nearly a thousand times denser than air, so there is a net reduction in atmospheric mass. Refined liquid hydrocarbons contain roughly 14% hydrogen on average, and combusting 1 kg of hydrogen requires 8 kg of oxygen. Per ton of hydrocarbon combusted, 1,120 kg of oxygen is therefore converted to water. Most of this water condenses into liquid, so it results in a reduction of atmospheric mass. The 86% of the hydrocarbon that consists of carbon forms carbon dioxide, consuming 2.66 kg of oxygen per kg of carbon, so 2,287 kg of oxygen has been consumed, and 3.66 kg of CO2 is released per kg of carbon, or 3,153 kg in total. If we subtract the oxygen, we are left with 866 kg of carbon, less than the 1,120 kg of oxygen that has been converted to water, so we are left with a mass deficit of 254 kg of oxygen per ton of hydrocarbon burned. Therefore, the combustion of hydrocarbons reduces the density of the atmosphere and increases the amount of water on earth, and it must result in a net cooling effect, albeit an insignificant one.
The total estimated hydrocarbon burned since 1750 is 705 gigatons, which removes about 1.7907e+14 kg of oxygen from an atmosphere of 5.1480e+18 kg, a reduction in atmospheric mass of roughly 0.0035%. Using the ideal gas law, the predicted cooling is -0.00014°C.
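The per-ton bookkeeping and the aggregate figure can be written out explicitly; the sketch below follows the assumptions stated above (14% hydrogen, 86% carbon, complete combustion, full condensation of the water) and scales the result to the stated 705 gigatons.

```python
O2_PER_KG_H  = 8.0       # kg of O2 consumed per kg of hydrogen (forming water)
O2_PER_KG_C  = 32 / 12   # ~2.66 kg of O2 consumed per kg of carbon
CO2_PER_KG_C = 44 / 12   # ~3.66 kg of CO2 released per kg of carbon

fuel = 1000.0                        # one ton of hydrocarbon, kg
h, c = 0.14 * fuel, 0.86 * fuel      # hydrogen and carbon content, kg

o2_to_water = O2_PER_KG_H * h        # oxygen leaving the air as condensed water
o2_to_co2   = O2_PER_KG_C * c        # oxygen drawn from the air but returned inside CO2
co2_added   = CO2_PER_KG_C * c       # CO2 added to the air

net = co2_added - o2_to_co2 - o2_to_water     # net change in atmospheric mass, kg
print(f"net change per ton of fuel         : {net:+.0f} kg")    # about -260 kg

# Scaled to the ~705 Gt of hydrocarbon burned since 1750
total_fuel = 705e9 * 1000.0          # kg
atm_mass   = 5.148e18                # kg, total mass of the atmosphere
fraction   = -net * (total_fuel / fuel) / atm_mass
print(f"fractional loss of atmospheric mass: {fraction:.1e}")   # a few parts in 100,000
```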
The only possible way humans could warm the planet is by releasing massive amounts of oxygen from oxides to significantly raise the pressure of the atmosphere, but without available reducing agents this would be impossible. It can thus be concluded that, under the present knowledge of atmospheric physics, it is effectively impossible for technogenic activity to raise or lower temperatures. Short-term variations, the Maunder minimum, the medieval warm period, etc., are driven solely by sunspot activity caused by changes in the sun’s magnetic field. No other mechanism can be invoked that stands scrutiny.
The fallacious albedo of 0.44 and the missing infrared
The albedo estimate of the earth is deliberately inflated to buttress the greenhouse effect. At least 55% of the sun’s energy is in the infrared regime, and virtually all of this energy would be absorbed by the surface, with very little of it reflected by the atmosphere.
The Moon’s temperature anomaly
The moon receives a mean solar irradiance almost identical to the earth’s, about 360 watts per square meter. If the moon’s regolith is assumed to have an emissivity of 0.95, the mean surface temperature will be 12.76°C, which is far higher than the estimate by Nikolov and Zeller of 198-200 K (-75°C). The moon is either considerably more reflective than present estimates suggest, or it is much hotter; there can be no in-between if we are not to abandon the Stefan-Boltzmann law, which would make any planetary temperature prediction virtually impossible. The moon should have virtually no “albedo” because it has effectively no atmosphere capable of reflecting any significant amount of radiation.
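The lunar figures here follow from the Stefan-Boltzmann law under the stated assumptions (a 360 W/m² mean absorbed flux and a regolith emissivity of 0.95); the sketch below also inverts the calculation to show the flux implied by the conventional minus 25 to minus 30°C estimates.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2*K^4)

F_abs, eps = 360.0, 0.95   # assumed mean absorbed flux and regolith emissivity
T_moon = (F_abs / (eps * SIGMA)) ** 0.25
print(f"predicted mean temperature : {T_moon - 273.15:.1f} C")   # about 12.8 C

# Flux implied by the conventional -25 to -30 C mean temperature estimates
for t_c in (-25.0, -30.0):
    F_implied = eps * SIGMA * (t_c + 273.15) ** 4
    print(f"flux implied by {t_c:5.1f} C   : {F_implied:.0f} W/m^2")   # roughly 190-205 W/m^2
```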
The ideal gas law can be used to predict lapse rate and planetary temperatures with unparalleled accuracy.
The ideal gas law predicts with nearly 100% accuracy the atmospheric lapse rate and the temperature at any given altitude. The calculation was performed for a typical airline flight level, since there is extensive temperature data to confirm the results. The answer was minus 56°C, within decimal points of the measured temperature at that altitude. Therefore we can state with near certainty that the temperature of any gas body subject to a gravitational field will be solely determined by its density (molar concentration) and pressure, a function of the local gravity. The atmosphere is thus a gigantic frictional heat engine, continuously subjecting gas molecules to collisions and converting gravitational energy to heat, much like a star does, using the core pressure, a product of its massive gravity, to fuse nuclei. Brown dwarfs are compressed just enough by gravity to achieve core pressures of 100 billion bar; they generate enough heat in the process that their outer surfaces glow red. The same principle is in action for a main-sequence star, a brown dwarf, or a low-pressure planetary atmosphere. The temperature of a gravitationally compressed gas volume should be set by the frequency and intensity of the collisions. If this is correct, the kinetic theory of gases should predict the temperature of any body of gas on any planet with near-perfect accuracy, regardless of solar radiation. It is not solar radiation that heats the gas molecules, but solely gravity. If a planet gets a small amount of solar irradiance, then the layer of the atmosphere continuously exposed to the cold surface will be cooled, with some of its gravitational collision energy transferred to the cold surface, so the temperature of the gas will be below the equilibrium temperature predicted by the ideal gas law. This is precisely what we see on earth. Since a pressure of 101.325 kPa with a molar density of 42.2938 mol/m³ yields 14.99°C, but the mean surface temperature is only 13.9°C, the earth must receive at least 303 watts per square meter, assuming an emissivity of 0.975. This very closely corresponds to an infrared-adjusted albedo of less than 20%. The earth must then be heated to around minus 1°C by solar radiation alone. For Mars, with an atmospheric pressure of 610 Pascal and a density of around 20 grams/m³, the predicted atmospheric temperature is -110.11°C. Mars receives a spherical average of 147.5 W/m², or -45.88°C, which appears fairly close to the -63°C estimate, so just like with the moon, the probes have underestimated the temperature.
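The two spot checks in this paragraph, the 11 km flight level and the Martian surface, reduce to T = P/(nR); the sketch below uses assumed representative pressures and densities (standard-atmosphere air at 11 km; 610 Pa and roughly 20 g/m³ of CO2 on Mars), so the printed values differ slightly from those quoted above.

```python
R = 8.314   # gas constant, J/(mol*K)

cases = [
    # (label,                      pressure Pa, mass density kg/m^3, molar mass kg/mol)
    ("Earth, 11 km flight level",  22632.0,     0.3639,              0.02897),
    ("Mars, surface",              610.0,       0.020,               0.04401),
]

for label, P, rho, M in cases:
    n = rho / M          # molar concentration, mol/m^3
    T = P / (n * R)      # ideal gas law, T = P / (nR)
    print(f"{label:28s} {T - 273.15:7.1f} C")
# prints about -56 C for the flight level and about -112 C for the Martian surface
```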
Nikolov and Zeller erroneously assumed the one-bar atmosphere could produce 90 K worth of heating, but there is insufficient kinetic energy at a pressure of 1 bar to produce this heat. They are correct in rejecting the unphysical greenhouse effect, but they cannot count on a 1-bar atmosphere to produce 90 Kelvin of heating. The ideal gas law predicts a temperature of exactly 15°C for a 1013 mbar atmosphere and 440°C for Venus at 91 bar, so it must be correct. Harry Dale Huffman calculated the temperature of Venus at 49 km, where its atmospheric pressure equals the earth's (1013 mbar), and found exactly 15°C! [1] The molar mass of the molecules does not matter, only their concentration and the force pushing them together, which contributes to more violent and frequent collisions. Postma's theory that we must treat the earth as a half-sphere only exposed to solar radiation is theoretically correct insofar as the sun never shines on the entire surface at once, but it does not change the mean energy flux per unit area, which is what is required for a given temperature. The interval of solar exposure time does not change the mean energy flux. Temperature can only be changed by raising or lowering the energy delivered to the body. Since much of the sun's energy is in the infrared spectrum, we can assume close to 83% of the sun's energy contributes to the heating of the surface. Current climate models ignore the fact that the sun produces 55% of its energy in the infrared spectrum, all of which is absorbed. The "real" albedo is in fact much less, which allows more of the sun's energy to be absorbed.
What about short term variation in temperature?
Carbon dioxide has been a useful little demon for climate science since it serves as a veritable "knob" that entirely controls climate. Modern climate science is such a fraud that they will have you believe there was no polar ice during the Eocene because of carbon dioxide! Of course, Arrhenius's demon is but a fictional entity, so if we want to understand short-term variation, clearly we cannot claim that the atmosphere has gained any mass since the Maunder minimum!
Short-term variations are mediated by cosmic ray spallation of sulfuric acid and other atmospheric aerosols that produce nanometer-sized cloud condensation nuclei. This increases the reflection of the more energetic UV portion of the spectrum and lowers global temperatures by plus or minus a few degrees, which is what we have witnessed over the past millennia.
Isotope records of beryllium-10, chlorine-36, and carbon-14 provide ample evidence that these cosmic rays do mediate temperature, because they correlate closely with ice-core temperature records. This phenomenon is called "cosmoclimatology", a term coined by Henrik Svensmark, who first proposed the mechanism. Don Easterbrook and Nir Shaviv are two other proponents of this mechanism. Disappointingly, judging from comments in their lectures available on YouTube, all still seem to endorse the greenhouse effect, comparing the "forcing" of cosmic rays with that of CO2.
Variation in cosmic ray flux is mediated by sunspot activity, large magnetic fields that burst out of the photosphere and produce visible black spots. When these magnetic fields are stronger and more numerous, fewer solar energetic particles or cosmic rays reach earth, producing fewer aerosols and allowing more UV to strike the earth.
A Thermodynamic Fallacy
We must first define what POWER is. The sun delivers power, not energy. Energy, dimensionally, is defined as mass times length squared divided by time squared: L2M1T-2. Power is energy over time, energy divided by the time spent delivering the energy.
Energy is not power. Power is flux, a continuous stream of a "motive" substance capable of performing work. In dimensional analysis, power is measured as mass times length squared divided by time cubed: L2M1T-3. Power could be said to be analogous to pressure multiplied by flow rate, while energy is just the pressure. Note that below we use the terms energy flux and power interchangeably; they have the same units.
The greenhouse effect treats energy as a compressible medium with an infinite source of available work
Work or energy flux cannot be compressed or made denser by slowing the rate at which energy leaves a system; doing so treats energy flux as a multipliable medium, which it is clearly not. Using mechanical analogies for the sake of clarity, we can express energy flux as gas flowing through a pipeline. The energy flux would be analogous to gas molecules and the area in which this energy is expressed is the surface of the earth. Using the pipe analogy, we can evoke Bernoulli's theorem to show that mass is always conserved. If we squeeze our pipe, the flow area drops but the velocity increases, so the mass flow rate is conserved, a basic law of proportionality or equivalence. With the greenhouse effect, the energy flux flowing through the pipeline is subject to a constriction (reduction in cooling); the constriction now alters the ability of energy to exit the pipeline, thereby increasing the density of energy particles within the volume. This is in essence the current greenhouse effect power multiplication phenomenon. By "constricting" the pipe, energy flux "particles" pile up and increase in their proximity, creating a "zone" of higher intensity. But this is clearly a fallacy since it produces additional energy flux density (work) from nothing. This scheme has found a way to increase power density without changing total delivered power or area/volume; therefore it has created work from nothing, and it thus cannot exist in reality. No degree of constriction (analogous to back radiation) can increase the flux density required to heat the earth.
The fact that a century’s worth of top scientists failed to identify this error strongly confirms our hypothesis that most technology and discovery is largely a revelatory phenomenon, as opposed to being the expression of deep insight. The fact that modern science cannot even explain the climate of the very earth we live on is quite astonishing. Modern technology can construct transistors a few nanometers in diameter, yet we are still debating elementary heat flow and energy conservation axioms.
Some GHE deniers go wrong by incorrectly stating that a radiatively coupled gas can "cool" the atmosphere; again this makes the same error that led to the erroneous greenhouse effect in the first place. Cooling can never lower the temperature of a continuously radiated and radiating body; such a scheme is impossible because it would eventually deplete all the energy from the body. The terms heating and cooling with respect to the atmosphere need to be dispensed with altogether. Think of the atmosphere as a water wheel: damming up the river in front of the wheel will not speed up the water wheel, whose speed is solely determined by the mass flow and velocity of the river beneath it. A body receiving a steady-state source of radiation can never be cooled, via radiation, at a rate greater than it is heated, due to the reversibility of emissivity and absorptivity; in other words, cooling can never exceed warming and vice versa. The fundamental basis of the greenhouse effect is the assumption that power delivered can exceed power rejected. Since the sun continuously emits "new" radiation each second, the radiation that is "consumed" and converted to molecular kinetic energy is always released at a rate equal to that at which it is delivered. Radiation forms a reversible continuum of thermal energy transfer, without the ability to accumulate or transfer this heat energy at a greater rate than is received. Conductive or convective cooling has no applicability to radiative heat transfer in the vacuum of space, since convective or conductive heat transfer scenarios on earth have virtually infinite low-temperature bodies to cool to. Therefore, all stellar bodies are in perfect radiative equilibrium, neither trapping, storing, nor rejecting more radiant energy than they absorb per second.
The confusion over the “amplifiability“ of power
We have already defined power as fundamentally mass times length squared divided by time cubed, expressed in dimensional analysis as L2M1T-3. Energy is a cumulative phenomenon; energy as a stored quantity is punctuated, while power or energy flux is a continuous or "live" phenomenon, measurable only in its momentary form, imparting action on a non-stop basis. Mice can produce kilowatt-hours worth of energy by carrying cheese around a house over the course of a few years, but they can never produce one kilowatt. A one-watt power source can produce nearly 9 kWh in a year, but nine watt-hours can never produce nine kilowatts! Energy gives the wrong impression that power is somehow accumulated. This rather confusing distinction, the distinctiveness of the different entities or expressions of energy, being inherently time-dependent, led to the fallacy of the greenhouse effect. Because energy can be "stored" and accumulated to form a larger sum, it was assumed energy flux could be amplified as well, by simply slowing down the rate of energy loss relative to energy input, leading to an inevitable increase in temperature. Amplification through altering energy loss could never increase flux, as this would mean insulation would amplify the output of a heater. Insulation can only prolong the lifespan of thermal energy in a finite quantity; it has no bearing on flux values or power. This is because power is a constant value, not mutable, amplifiable, or attenuable. Power is a time-dependent measure of the intensity of the delivery of work or energy; power is simply energy divided by time.
To increase the temperature of the planet, one would need to increase the flux.
Slowing the rate of heat loss can only extend the finite internal energy of a body that is donated a quantity of energy and never replenished; it is unable to raise the temperature of a continuously heated body, because such a body's emissions are the product of its own temperature, and recycling these emissions can never exceed the source temperature.
A good analogy would be low-grade heat (say 100°C) versus high-grade heat. One could have a million watts of "low-grade heat", but this low-grade heat can never spontaneously upgrade itself into even a single watt's worth of high-grade heat, say at 1000°C. Heat can never be "concentrated" to afford a higher temperature; it must always follow the law of "disgregation", the original true meaning of "entropy" as coined by Clausius. The "lifespan" of a concentrated form of energy can be prolonged or extended by modulating the perviousness or retentiveness of the storage medium, but the time-invariant flux equivalent remains constant. The greenhouse gas theory is therefore quite an elementary mistake: the conflation of the permeability of heat with the flux intensity required to achieve a given temperature. To raise the temperature of the earth to 15°C, the total flux must increase; one can never trap or amplify a lower flux value to reach a higher flux value, because flux is not a modulable entity.
Many greenhouse effect "slayers" get worked up over the concept of back radiation and radiative heat transfer from hot to cold, but this is not the issue with the greenhouse effect; the greenhouse effect is a 1st law violation, not a 2nd law violation. Of course, one still cannot warm a hotter surface with the less intense radiation emitted by a colder body, but this is a secondary problem; the principal error is the confusion between flux and energy.
Low-grade heat cannot be transformed into high-grade heat; such a scheme would require energy input and an "upgrading heat pump", usually employing exothermic chemical reactions such as the mixing of water and sulfuric acid. Heat-upgrading heat pumps exist in industry and evidently do not violate any laws of thermodynamics, because they work! These pumps obviously require work to perform this "upgrading" in the first place.
The greenhouse effect is impossible because it leads to a buildup of energy; it forbids a thermal equilibrium. All stable systems are in perfect thermal equilibrium. The reason the conservation of energy (first proposed by von Mayer) is a universal law of nature is because its absence would mean the spontaneous creation or destruction of energy. Since energy and mass are the same entity differently expressed (first proposed by Olinto De Pretto), a universe without the 1st law would disappear within seconds. Stability requires continuity, and continuity requires conservation. Energy flux is not a cumulative phenomenon; it is not possible to trap and store more energy, since this energy would continuously build up and lead to thermal runaway. Energy itself is cumulative, it can be built up, drawn down, and stored, but flux cannot: flux represents a rate of flow, while energy represents the time-dependent accumulation or cumulative sum of said flow. Energy can be pumped or accumulated to form a larger sum over a period of time, but flux can never be altered; it is impossible to change the power output of an engine, laser, or flame by any scheme that does not result in the addition of extra work. If greenhouse gases store more heat than can otherwise flux into space, this greater heat content generates more radiation by raising the temperature, and now this radiation is blocked from leaving, generating even more heating of the surface, which produces yet more radiation. The process goes to infinity and therefore must be unphysical. Such a scenario is impossible because it is totally unstable. A mechanism must exist that continuously provides the thermal energy to maintain a constant surface temperature; this mechanism cannot be solar radiation alone.
Kirchhoff's law forbids emissivity from exceeding absorptivity and vice versa, so the greenhouse effect violates Kirchhoff's law. One cannot selectively "tune" emissivity to retain more heat and slowly build up a "hotter" equilibrium. By definition, one cannot "build up" an equilibrium, since an equilibrium requires input and output to be perfectly synced; the greenhouse effect, by definition, is a condition where these values are not synced but considerably diverged, with more retained than imparted into the system, and such a condition inevitably leads to infinity.
There are two ways of falsifying the greenhouse effect. One way is to find errors in the predictive power of a CO2-driven paleoclimate or ancient climate record, another better way is to identify and highlight the major physical errors in the mechanism itself.
During the Paleocene-Eocene Thermal Maximum, there was no polar ice and sea levels were considerably higher, likely close to a hundred meters higher.
Henry's law is temperature dependent: when liquids rise in temperature, the solubility value for gases decreases, so less gas can be stored in the oceans. CO2, therefore, outgases from the oceans following a temperature increase.
The difference between 1600 and 400 ppm cannot account for the complete absence of ice in the Eocene, the ice ages, or millennial temperature variation; this would require close to 5000 ppm CO2 according to the current 1°C/doubling sensitivity. The Paleocene-Eocene maximum was up to 13°C warmer, but CO2 concentrations were only 3.3 times higher than the present, which would translate to a sensitivity of 4°C/doubling, far too high even if one subscribes to the non-existent greenhouse effect. Even water vapor, which on average accounts for 2.5% of the volume of the atmosphere, would decrease emissivity by 2.5%, which would raise or lower temperature by only 0.32 degrees.
Even if the concept of back radiation were valid, which it is not, the tiny concentration of CO2, even at an absorptivity of 1, will yield only a minuscule difference in net atmospheric emissivity. CO2 is 0.042% by volume; assuming each CO2 molecule acts as a perfect radiant barrier, the total increase in emissivity can, by definition, only be 0.042%.
Milankovitch cycles cannot account for ice ages, since the distance to the sun does not change, or changes only very slightly.
Loschmidt firmly believed, contrary to Maxwell, Boltzmann, Thomson, and Clausius, that a gravitational field alone could maintain a temperature difference which could generate work. Roderich W. Graeff measured gravitational temperature gradients as high as 0.07 K/m in highly insulated hermetic columns of air, which corroborates Loschmidt's theory and confirms the adiabatic atmosphere theory.
“Thereby the terroristic nimbus of the second law is destroyed, a nimbus which makes that second law appear as the annihilating principle of all life in the universe, and at the same time we are confronted with the comforting perspective that, as far as the conversion of heat into work is concerned, mankind will not solely be dependent on the intervention of coal or of the sun, but will have available an inexhaustible resource of convertible heat at all times” — Johann Josef Loschmidt
“In isolated systems – with no exchange of matter and energy across its borders – FORCE FIELDS LIKE GRAVITY can generate in macroscopic assemblies of molecules temperature, density, and concentration gradients. The temperature differences may be used to generate work, resulting in a decrease of entropy”—Roderich W. Graeff
[1] http://theendofthemystery.blogspot.com/2010/11/venus-no-greenhouse-effect.html
Active-Cooled Electro-Drill (ACED)
Christophe Pochari, Christophe Pochari Engineering, Bodega Bay, CA.
707 774 3024, christophe.pochari@pocharitechnologies.com
Introduction:
Christophe Pochari Engineering has devised a novel drilling strategy using existing technology to solve the problem of excessive rock temperature encountered in deep drilling conditions. The solution proposed is exceedingly simple and elegant: drill a much larger diameter well, around 450mm instead of the typical 250mm or smaller diameters presently drilled. By drilling large-diameter wells, a fascinating opportunity arises: the ability to pull away heat from the rock faster than it can be replenished, thereby cooling it as drilling progresses and preventing the temperature of the water coolant from exceeding 150°C even in very hot rock. A sufficiently large diameter well has enough cross-sectional area to minimize the pressure drop from pumping a voluminous quantity of water through the borehole as it is drilled. The water that reaches the surface of the well will not exceed 150°C; this heat would be rejected at the surface using a large air-cooled heat exchanger. If the site temperature exceeds the ambient of 20°C, such as in hot climates, an ammonia chiller can be used to cool the water down to as low as 10°C. Any alternative drilling system must fundamentally remove rock either by mechanical force or heat. Mechanical force can take the form of abrasion, kinetic energy, extreme pressure, percussion, etc, delivered to the rock through a variety of means. The second category is thermal, which has never to this date been utilized except for precision manufacturing, such as cutting tiles or specialized materials using lasers. Thermal drilling is evidently more energy intensive, since rock possesses substantial heat capacity, and any drilling medium, whether gas or liquid, will invariably consume a large portion of this heat. Thermal methods involve melting or vaporizing; since at least one phase change will occur, the energy requirements can be very substantial. This heat must then be introduced somehow; it can either be in the form of combustion gases directly imparting this heat or via electromagnetic energy of some sort. Regardless of the technical feasibility of the various thermal drilling concepts, they all share one feature in common: they require drilling with air. The last method available is chemical, in which strong acids may dissolve the rock into an emulsion that can be sucked out. This method is limited by the high temperature of the rock, which may decompose the acid, and by the prohibitively high consumption of chemicals, which will prove uneconomical. Any drilling concept which relies on thermal energy to melt, spall, or vaporize rock is ultimately limited by the fact that it cannot practically use water as a working fluid, since virtually all the energy would be absorbed in heating the water. This poses a nearly insurmountable barrier to their implementation, since even the deep crust is assumed to contain at least 4-5% H2O by volume (Crust of the Earth: A Symposium, Arie Poldervaart, p. 132). Water will invariably seep into the well and collect at the bottom, and depending on the local temperature and pressure, will exist either as a liquid or a vapor. Additionally, even if the well is kept relatively dry, thermal methods such as lasers or microwaves will still incur high reflective and absorptive losses from lofted rock particles and even micron-thick layers of water on the rock bed.
Regardless of the medium of thermal energy delivery, be it radio frequency, visible light as in a laser, or ionized gas, that is plasma, it will be greatly attenuated by the presence of the drilling fluid, requiring the nozzle to be placed just above the rock surface. This presents overheating and wear issues for the tip nozzle material. Christophe Pochari Engineering concludes, based on extensive first-principles engineering analysis, that thermal systems will possess an assortment of ineluctable technical difficulties severely limiting their usefulness, operational depth, and practicality. In light of this fact, it is essential to evaluate and consider proven and viable methodologies: to take existing diamond bit rotary drilling and make the necessary design modifications to permit these systems to work in the very hot rock encountered at depths greater than 8 km. In order to access the deep crust, a method to deliver power to a drill bit as deep as 10 kilometers is needed. Due to the large friction generated when spinning a drill shaft over such a distance, it is absolutely essential to develop a means to deliver power directly behind the drill bit, in a so-called "down-hole" motor. Rotating a drill pipe 10 or more kilometers deep will absorb much of the power delivered to the pipe from the rig and will rapidly wear the drill pipe, necessitating frequent replacement and increasing downtime. Moreover, due to the high friction, only a very limited rotational speed can be achieved, placing an upper limit on rates of penetration. The rate of penetration for a diamond bit is directly proportional to the speed and torque applied; unlike roller-cone bits, diamond bits do not require a substantial downward force acting on them since they work by shearing, not crushing, the rock. Down-hole motors have the potential to deliver many-fold more power to the bit, allowing substantially increased rates of penetration. Clearly, a far superior method is called for, and this method is none other than the down-hole motor. But down-hole motors are nothing new; they form the core of modern horizontal drilling technology in the form of positive displacement "mud motors", which drive drill bits all over the U.S. shale plays. Another method is the old turbodrill, widely used in Russia and discussed further in this text. But what all these methods have in common is a strict temperature threshold that cannot be crossed or rapid degradation will occur. A new paradigm is needed, one in which the surrounding rock temperature no longer limits the depth that can be drilled, a new method in which the temperature inside the borehole is but a fraction of the surrounding rock temperature. This method is called Active-Borehole Cooling using High Volume Water. Such a scheme is possible due to the low thermal conductivity and slow thermal diffusivity of rock. There is insufficient thermal energy in the rock to raise the temperature of this high volume of water, provided the heat is removed at the surface using a heat exchanger. Christophe Pochari Engineering appears to be the first to propose using a very high flow volume of water to prevent the temperature of the down-hole equipment from reaching the temperature of the surrounding rock; no existing literature makes any mention of such a scheme, serving as an endorsement of its novelty.
Impetus for adoption
There is currently tremendous interest in exploiting the vast untapped potential that is geothermal energy, and a number of companies are responding by offering entirely new alternatives in an attempt to replace the conventional rotary bit, using exotic methods including plasma and microwaves; some have even proposed firing concrete projectiles from a cannon! The greatest inventions and innovations in history shared one thing in common: they were elegant and simple solutions that appeared "obvious" in hindsight. There is no need whatsoever to get bogged down with exotic, unproven, complicated, and failure-prone alternative methods when existing technologies can be easily optimized. Conventional drilling technology employs a solid shaft spun at the surface using a "Kelly bushing" to transmit torque to the drill bit. This has remained practically unchanged since the early days of the oil industry in the early 20th century. While turbodrills have enjoyed widespread use, especially in Russia for close to a century, they have a number of limitations. Russia developed turbodrills because the quality of Russian steel at the time was so poor that drill pipes driven from the surface would snap under the applied torque. Russia could not import higher quality Western steel and thus was forced to invent a solution. Early Russian turbodrills wore out rapidly and went through bits much faster than their American shaft-driven counterparts due to the higher rotational speeds of the turbine, even with reduction gearing. Diamond bits did not exist at the time and low-quality carbide bits, principally tungsten carbide and roller cones, were used. Bearings would break down after as little as 10-12 hours of operation. Reduction gearboxes, essential for a turbodrill to work due to the excessive RPM of the turbine wheels, wore out rapidly due to the loss in oil viscosity from the high down-hole temperature. The principal challenge of deep rock drilling lies not in the hardness of the rock per se, as diamond bits are still much harder and can shear even the hardest igneous rocks effectively. Existing diamond bits are over an order of magnitude harder than quartz, feldspar, pyroxene, and amphibole, and newer forms of binder-less bits are even more so. From a physics standpoint, it seems absurd to argue that drill bits are not already extremely effective. Rather, the challenge lies in preventing thermal damage to the down-hole components. If only a small flow of drilling fluid is pumped, as is presently done, flowing just enough fluid to carry cuttings to the surface, the latent thermal energy in the radius surrounding the well is sufficient to raise the temperature of this fluid, especially a lower heat capacity oil, to the mean temperature along that particular well. For example, existing small-diameter wells, especially deeper boreholes, are usually around 9-10 inches (250mm) in diameter. If the well is much narrower than 350mm in diameter, it is difficult to flow enough water to cool it. Assuming a 100-hour thermal diffusion time, we draw down a 1.26-meter radius of rock; that is, in a hundred hours, heat moves this distance. By growing the diameter of the well from 250mm to 460mm, the ratio of heated rock volume to cross-sectional area, the latter being proportional to the available flow rate at a constant pressure drop, drops from 125 cubic meters of rock per m2 of cross-sectional area to less than 42, or around 3 times less.
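The drawdown radius and the rock-volume-to-flow-area comparison above can be reproduced in a few lines. The 1.1 mm2/s rock thermal diffusivity is the value quoted later in this document, and the r = 2*sqrt(alpha*t) penetration depth is a common engineering estimate rather than an exact solution (Python):

import math

ALPHA = 1.1e-6  # rock thermal diffusivity, m^2/s (quoted later in the text)

def drawdown_radius_m(hours):
    # distance heat diffuses into the rock in the given time
    return 2.0 * math.sqrt(ALPHA * hours * 3600.0)

def heated_rock_per_bore_area(bore_diameter_m, hours):
    # heated rock volume per metre of well, divided by the bore cross-section
    r_bore = bore_diameter_m / 2.0
    r_heat = drawdown_radius_m(hours) + r_bore
    rock_volume_per_m = math.pi * (r_heat**2 - r_bore**2)
    return rock_volume_per_m / (math.pi * r_bore**2)

print(f"100-hour drawdown radius: {drawdown_radius_m(100.0):.2f} m")                     # ~1.26 m
print(f"250 mm well: {heated_rock_per_bore_area(0.250, 100.0):.0f} m3 of rock per m2")   # ~122
print(f"460 mm well: {heated_rock_per_bore_area(0.460, 100.0):.0f} m3 of rock per m2")   # ~41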
Flow rates in previous deep drilling projects were usually less than 500 GPM or around 110 m3/hr. The German deep drilling program had mud flow rates of between 250 and 400 GPM (81 m3/hr) for well diameters of 20 cm and 22.2 cm. The average thermal flux from the well is around 70 MW-t, so the water is rapidly warmed to the surrounding well temperature. The minimum flow rate to keep the water at no more than 180°C is around 400 cubic meters per hour, far too high to be flowed in such a small annulus, especially if the drilling mud is viscous and the drill pipe takes up much of the space, leaving only a small annulus. The volume of rock cooled per 100 hours is 6.8 cubic meters per meter of well, or 18,000 kg. If this mass of rock is cooled by 300°C, the thermal energy is 1,280 kWh, or a cooling duty of 12.8 kW/m of well-bore length. Since water has a heat capacity of 3850 J/kg-K at the average temperature and pressure of the well, 1800 cubic meters of water per hour, a flow rate achievable with 600 bar of head in a 460mm diameter well, results in a cooling duty of 343,000 kW (343 MW), or 34.3 kW/m of wellbore length. Clearly, our well will not produce 350 MWt, equal to a small nuclear reactor, otherwise we would be drilling millions of holes and getting virtually free energy forever! But since drilling occurs over a relatively long period of close to 1500 hours, the thermal draw-down radius is 4.87 meters, or a rock volume of 81.7 cubic meters per meter of well. The thermal energy in this rock mass is 15,400 kWh, or only 10.26 kW/m of cooling duty at a temperature drop of 240°C. But such a large temperature drop is entirely unrealistic, since a 12 km deep well will have an average rock temperature of only 210°C, so a temperature drop of, say, only 100°C is needed, resulting in a cooling duty of 4.3 kW/m or 6300 kWh/m over 1500 hours. This means a 12 km well will produce 51.6 MWt of heat, resulting in a water temperature rise of only 27°C. If a 12 km well is drilled in a geothermal gradient of 35°C/km, the maximum temperature reached will be 420°C and the average temperature will be 210°C. This means that in the last 3.5 km, the temperature will be above 300°C, which is far too hot for electronics, lubricants, bearings, and motors to operate reliably without accepting a severe reduction in longevity. Geothermal wells, unlike petroleum and gas wells, must penetrate substantially below the shallow sedimentary layer, and for effective energy recovery, rock temperatures over 400°C are desired. As the temperature of the well reaches 300-400°C, the alloys used in constructing the drill equipment, even high-strength beta titanium, begin to degrade, lose strength, become supple, warp, and fail from stress corrosion cracking when chlorides and other corrosive substances contact the metallic surfaces. It can thus be said that proper thermal management represents the crucial exigency that must be satisfied in order for the upper crust to be tapped by human technology. Christophe Pochari Engineering's Active-Cooled Electro-Drill (ACED) methodology employs the following processes and components to achieve low down-hole temperatures. A number of technologies are concatenated to make this methodology possible.
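The 1500-hour cooling-duty figures above can be sketched the same way. The rock density (2600 kg/m3) is my own assumption; the 1100 J/kg-K rock heat capacity and 1.1 mm2/s diffusivity are quoted later in the text, and the 3850 J/kg-K water heat capacity, the 1800 m3/hr flow, and the 100°C average rock cool-down are the figures used above (Python):

import math

ALPHA = 1.1e-6                        # m^2/s, rock thermal diffusivity
RHO_ROCK, CP_ROCK = 2600.0, 1100.0    # kg/m^3 (assumed) and J/(kg K)
CP_WATER = 3850.0                     # J/(kg K) at average well conditions

hours = 1500.0
r_bore = 0.46 / 2.0
r_heat = 2.0 * math.sqrt(ALPHA * hours * 3600.0) + r_bore   # ~5.1 m
rock_volume_per_m = math.pi * r_heat**2                      # ~82 m^3 per metre of well

dT_rock = 100.0                                               # average rock cool-down, K
duty_per_m_kw = rock_volume_per_m * RHO_ROCK * CP_ROCK * dT_rock / 3.6e6 / hours  # ~4.3 kW/m

total_heat_mw = duty_per_m_kw * 12000.0 / 1000.0              # ~52 MW for a 12 km well
flow_kg_s = 1800.0 * 1000.0 / 3600.0                          # 1800 m^3/hr of water
water_dT = total_heat_mw * 1e6 / (flow_kg_s * CP_WATER)       # ~27 K temperature rise

print(f"{duty_per_m_kw:.1f} kW/m, {total_heat_mw:.0f} MW total, {water_dT:.0f} K water rise")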
#1: High volume/pressure water cooling using large diameter beta-titanium drill pipes:
Using high strength beta-titanium drill pipes to deliver 600 bar+ water at over 1700 cubic meters per hour, a cooling duty of up to 400 megawatts can be reached if the temperature of the water coolant is allowed to reach 180°C. The rock mass around the 450mm diameter well contains too little thermal energy to come close to heating this mass of water by that much, and an expected 60-80 MW of thermal power will be delivered to the surface in the first 1500 hours of drilling. The drill string incorporates a number of novel features. Being constructed out of ultra-high strength titanium, it is able to reach depths of 12 km without shearing off under its own weight. It is also designed with an integrated conductor and abrasion liner. The integrated conductor is wrapped around the drill pipe between a layer of insulation and the outermost abrasion liner.
#2: High Power density down-hole electric machines:
A high-speed synchronous motor using high-temperature permanent magnets and mica-silica coated windings generates 780-1200 kW at 15,000-25,000 rpm. Owing to the high speed of the motor, it is highly compact and can easily fit into the drill string within a hermetic high-strength steel container that protects it from shock and from abrasive and corrosive fluids. The motor is cooled by passing fresh water through sealed flow paths in the windings. Compared to the very limited power of Russian electro-drills of the 1940s to 1970s, the modern electro-drill designer has access to state-of-the-art high-power-density electrical machines.
#3: High Speed Planetary Reduction Gearbox:
The brilliance of the high-volume active cooling strategy is the ability to use a conventional gear-set to reduce the speed of the high-power-density motor to the 300-800 RPM ideal for the diamond bit. Using high-viscosity gear oils that retain around 30 cSt at 180°C, sufficient film thickness can be maintained and a gearbox life of up to 1000 hours can be guaranteed.
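To illustrate how the high-temperature viscosity can be estimated, the snippet below fits a two-point Andrade (Arrhenius-type) law, mu = A*exp(B/T), to the Mobil SHC 6080 figures quoted later in this text (370 cSt at 100°C, 39 cSt at 180°C); the interpolated values are illustrative only, not datasheet numbers (Python):

import math

def andrade_fit(t1_c, visc1, t2_c, visc2):
    # Return (A, B) so that visc = A * exp(B / T_kelvin) passes through both reference points
    T1, T2 = t1_c + 273.15, t2_c + 273.15
    B = math.log(visc1 / visc2) / (1.0 / T1 - 1.0 / T2)
    A = visc1 / math.exp(B / T1)
    return A, B

A, B = andrade_fit(100.0, 370.0, 180.0, 39.0)
for t_c in (120.0, 150.0, 180.0):
    visc = A * math.exp(B / (t_c + 273.15))
    print(f"{t_c:.0f} C: {visc:.0f} cSt")   # roughly 194, 82 and 39 cSt respectively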
#4: Silicon Thyristors and Nano-Crystalline Iron Transformer Cores:
Silicon thyristors are widely used in the HVDC sector and can be commercially procured for less than 3¢/kW.
The maximum voltage of electrical machines is limited by winding density constraints due to corona discharge, requiring thick insulation and reducing coil packing density. For satisfactory operation and convenient design, a voltage much over 400 volts is not desirable. The problem then becomes: how to deliver up to 1 MW of electrical power over 10 km? With low voltage, this is next to impossible. If a voltage of 400 volts is used, the current would be a prohibitive 2500 amps, instantly melting any copper conductor. As any power engineer knows, in order to minimize conductor size and losses, a high operating voltage is necessary, 5,000 or more volts. To deliver 1000 kW or 1340 hp to the drill bit with a 15mm copper wire at 100°C, the average resistance is 0.8 Ohms, resulting in a Joule heating of 22 kW, or 2.2% of the total power. To deliver current to the motor, DC is generated at 6-10 kV; this DC is then inverted at 100-150 kHz to minimize core size, and the voltage is reduced to the 400 volts required by the motor. This high-frequency, low-voltage power is then rectified back into DC and inverted again at the 1000 Hz required by the high-speed synchronous motor. Silicon thyristors can operate at up to 150°C in oxidizing atmospheres (thermal stability is substantially improved in reducing or inert atmospheres). Nano-crystalline iron cores have a Curie temperature of 560°C, well above the maximum water temperature encountered with 1700 m3/hr flow rates.
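The conductor-loss arithmetic above can be written out explicitly; the 0.8 ohm average loop resistance and the 1 MW load are the text's figures, and the comparison voltages are illustrative (Python):

def joule_loss_kw(power_kw, volts, resistance_ohm):
    # I^2 R loss for a given delivered power, assuming DC (or unity power factor)
    current_a = power_kw * 1000.0 / volts
    return current_a**2 * resistance_ohm / 1000.0

for v in (400.0, 6000.0, 10000.0):
    loss = joule_loss_kw(1000.0, v, 0.8)
    print(f"{v:>6.0f} V: {loss:8.1f} kW lost ({loss / 1000.0 * 100:.1f}% of 1 MW)")
# 400 V  -> ~5,000 kW of loss, more than the delivered power: the conductor simply melts
# 6 kV   -> ~22 kW (~2.2%), the figure quoted above
# 10 kV  -> ~8 kW (<1%)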
Rock hardness is not the limiting factor
Feldspar, the most common mineral in the crust, has a Vickers hardness of 710, or 6.9 GPa. Diamond in the binderless polycrystalline form has a hardness of between 90-150 GPa, roughly 13 to 22 times greater. Diamond has a theoretical specific wear rate of 10^-9 mm3/N-m, where cubic millimeters represent the volume lost per newton of applied force per meter of sliding distance. We can thus easily calculate the life of the bit using the specific wear rate constant; in theory, the wear rate is extremely slow unless excessive temperature and shock are present. Unfortunately, it is more complex than this, and bit degradation is usually mediated by spalling, chipping, and breakage. Due to the extrusion of the cobalt binder from the diamond, poly-crystalline diamond degrades faster than predicted by its hardness alone. Archard's equation states that wear rate is proportional to the applied load and inversely proportional to the hardness of the wearing surface.

In light of this thermal constraint, it might seem obvious to any engineer to exploit the low thermal conductivity of rock and simply use a coolant, of which water is optimal, to flush heat out of the rock and back to the surface. But in conventional oil and gas drilling, a very heavy, viscous drilling mud is employed; this mud is difficult to pump and places stringent requirements on compression equipment. Elaborate filtration systems are required, and cooling this mud with a heat exchanger would lead to severe erosion of the heat exchanger tubes. The principal reason why "active" cooling of the well bore is not presently an established process is the fact that there is no present application where such a scheme would be justified. For example, in order to cool a 450mm diameter, 10 km borehole that would flux close to 70,000 kW (70 MW) of thermal energy in the first 1200 hours, a pumping power of up to 32,000 hp is required. The average power costs would therefore be close to $1.5 million per well assuming a wholesale power cost of $70/MWh. The added cost of site equipment, including heat exchangers, a larger compressor array, multiple gas turbines, and the necessary fuel delivery to drive the gas turbines, makes this strategy entirely prohibitive for conventional oil and gas exploration. Even if this could be tolerated, the sub-200°C temperatures encountered could not possibly justify such a setup. What's more, pumping such a massive amount of water requires a larger diameter drill pipe that can handle the pressure difference at the surface. Since the total pressure drop down the pipe and up the annulus is close to 600 bar across 10 km, the pipe must withstand this pressure without bulging, which would compress the water coming up the annulus, canceling the differential pressure and stopping the flow. High-strength beta-titanium alloys using vanadium, tantalum, molybdenum, and niobium are required since they must not only withstand the great pressure at the surface, but also carry their own mass. Due to its low density (4.7 g/cm3), beta-titanium represents the ideal alloy choice. With its excellent corrosion resistance and high ductility, few materials can surpass titanium. AMT Advanced Materials Technology GmbH markets a titanium alloy called "Ti-SB20 Beta" with high ductility that can reach ultimate tensile strengths of over 1500 MPa. For conventional oil and gas drilling to only a few km deep, the weight of the drill with the buoyancy of heavy drilling mud allows the use of low-strength steels with a yield strength of less than 500 MPa.
This high-end titanium would be vacuum melted and the drill pipes forged or even machined from solid round bar stock. The cost of the drill pipe set alone would be $5 million or more for the titanium alone, and several additional millions for machining. In addition, titanium has poor wear and abrasion resistance and tends to gall, so it cannot be used where it is subject to rubbing against the rock surface. Because an electro-drill does not spin the drill pipe within the well, the only abrasion would be caused by the low concentration of rock fragments in the water and by the sliding action of the pipe if it is not kept perfectly straight, which is next to impossible. To prevent damage to the titanium drill pipe, a liner of manganese steel or chromium can be mechanically adhered to the exterior of the drill pipe and replaced when needed. Another reason that high-volume water cooling of drilling wells is not done is the issue of lost circulation and fracturing of the rock. In the first few kilometers, the soft sedimentary rock is very porous and would allow much of the water pumped to leak into pore spaces, resulting in excessive lost circulation. Since a high volume of water requires a pressure surplus at the surface, the water is as much as 250 bar above the background hydrostatic pressure, allowing it to displace liquids in the formation. Fortunately, the high-pressure water does not contact the initial sedimentary layer, since this pressure is only needed when the well is quite deep, and by the time the water flows up the annulus to contact the sedimentary formation, it has lost most of its pressure already. The initial 500-600 bar water is piped through the drill pipe and exits at the spray nozzles around the drill bit. In short, a number of reasons have combined to make such a strategy unattractive for oil and gas drilling. Sedimentary rocks such as shale, sandstone, dolomite, and limestone can be very vugular (containing cavities within the rock), and this can cause losses of drilling fluid of up to 500 bbl/hr (80 cubic meters per hour). A lost circulation of 250 bbl/hr is considered severe and rates as high as 500 bbl/hr are rarely encountered. With water-based drilling, the cost is not a great concern since no expensive weighting agents such as barite or bentonite are used, nor are any viscosifying agents such as xanthan gum. Little can be done to prevent lost circulation other than using a closed annulus or drilling and casing simultaneously, but both methods add more cost than simply replacing the lost water. Water itself has essentially no cost (it is infinitely available) besides its transport and pumping cost. If 80 cubic meters are lost per hour, an additional 1200 kW is used for compression. The depth of the water table in the Western U.S. (where geothermal gradients are attractive) is about 80 meters. In Central Nevada, for example, where groundwater is not by any means abundant, the average precipitation is 290 mm, or 290,000 cubic meters per square kilometer. Multiple wells could be drilled to the 80-meter water table, with pumps and water purification systems installed to provide onsite water delivery to minimize transport costs. Water consumption for drilling a deep well using active cooling pales in comparison to agriculture or many other water-intensive industries such as paint and coating manufacturing, alkali and chlorine production, and paperboard production.
If water has to be physically transported to the site by road because drilling on-site water wells proves impossible for whatever reason, a large tanker trailer with a capacity of 45 cubic meters, which is allowed on U.S. roads with 8 axles, can be used. If the distance between the water pickup site and the drill site is 100 km, which is reasonable, then the transport cost, assuming a driver wage of $25/hr and fuel costs of $3.7/gal (the average diesel price in the U.S. in December 2022), would total $150 each way to transport 45 cubic meters, or less than $4 per cubic meter, or around $320/hr. The total cost of replacing the lost circulation at the most extreme loss rates encountered is thus around $450,000 for a 10 km well drilled at a rate of 7 meters per hour.
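A rough check of the haulage arithmetic, taking the text's $150-per-load delivery estimate as given; the text rounds the per-cubic-meter figure up to $4, which is where the ~$320/hr and ~$450,000 totals above come from (Python):

COST_PER_LOAD = 150.0    # $ per delivered 45 m^3 tanker load (the text's estimate)
TANKER_M3 = 45.0
LOSS_M3_HR = 80.0        # severe lost-circulation rate from the text
ROP_M_HR = 7.0           # rate of penetration
DEPTH_M = 10000.0

cost_per_m3 = COST_PER_LOAD / TANKER_M3          # ~$3.3/m^3 (rounded up to $4 in the text)
hourly_cost = LOSS_M3_HR * cost_per_m3           # ~$270/hr unrounded
total_cost = hourly_cost * DEPTH_M / ROP_M_HR    # ~$0.4M over the whole well
print(f"${cost_per_m3:.2f}/m3, ${hourly_cost:.0f}/hr, ${total_cost/1e6:.2f}M total")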
Summary
The drilling technology landscape is ripe for dramatic disruption as new forms of more durable and thermally stable metal-free materials reach the market. But this upcoming disruption in drilling technology is not what many expect. Rather than exotic, entirely new drilling technologies such as laser beams or plasma bits, improvements in conventional bit material fabrication and down-hole power delivery present the real innovation potential. Improvements in power delivery and active well cooling allow engineers to render the bulky turbodrill obsolete. Investors in this arena should be cautious and conservative, as the old adage "tried and true" appears apt in this case. Binder-less polycrystalline diamond has been successfully synthesized at pressures of 16 GPa and temperatures of 2300°C by Saudi Aramco researchers. Conventional metallic-bonded poly-crystalline diamond bits begin to rapidly degrade at temperatures over 350°C due to the thermal expansion of the cobalt binder exceeding that of diamond. Attempts have been made to remove the metallic binder by leaching, but this usually results in a brittle diamond prone to breaking off during operation. Binderless diamond shows wear resistance around four-fold higher than binder formulations and thermal stability in oxidizing atmospheres up to 1000°C. The imminent commercialization of this diamond material does not bode well for alternative drilling technologies, namely those that propose using thermal energy or other exotic means to drill or excavate rock. If and when these higher-performance, longer-lasting bits reach maturity, it is likely most efforts at developing alternative technologies will be abandoned outright. In light of this news, it would be unwise to invest large sums of money into highly unproven "bitless" technologies; efforts should instead focus on developing thermally tolerant down-hole technologies and/or employing active cooling strategies. It is therefore possible to say that there is virtually no potential to significantly alter or improve the core rock-cutting technology. The only innovation left is therefore isolated to the drilling assembly, such as the rig, drill string, fluid, casing strategy, and pumping equipment, but not the actual mechanics of the rock-cutting face itself. Conventional cobalt-binder diamond bits can drill at 5 meters per hour; using air as a fluid, the speed increases to 7.6 meters per hour. Considering most proposed alternatives cannot drill much over 10 meters per hour and none have been proven, it seems difficult to justify their development in light of new diamond bits that are predicted to last four times longer, which in theory would allow at least a doubling in drilling speeds holding wear rates constant. A slew of alternative drilling technologies has been chronicled by William Maurer in the book "Novel Drilling Techniques". To date, the only attempts to develop these alternative methods have ended in spectacular failure. For example, in 2009 Bob Potter, the inventor of hot dry rock geothermal, founded a company to drill using hot high-pressure water (hydrothermal spallation). As of 2022, the company appears to be out of business. Another company, Foro Energy, has been attempting to use commercial fiber lasers, widely used in metal cutting, to drill rock, but little speaks for its practicality. The physics speaks for itself: a 10-micron thick layer of water will absorb 63% of the energy of a CO2 laser.
No one could possibly argue that the limit of human imagination is the reason for our putative inability to drill cost-effective deep wells. Maurer lists a total of 24 proposed methods over the past 60 years. The list includes Abrasive Jet Drills, Cavitating Jet Drills, Electric Arc and Plasma Drills, Electron Beam Drills, Electric Disintegration Drills, Explosive Drills, High-Pressure Jet Drills, High-Pressure Jet Assisted Mechanical Drills, High-Pressure Jet Borehole Mining, Implosion Drills, REAM Drills, Replaceable Cutterhead Drills, Rocket Exhaust Drills, Spark Drills, Stratapax Bits, Subterrene Drills, Terra-Drill, Thermal-Mechanical Drills, and the Thermocorer Drill. This quite extensive list does not include the "nuclear drills" proposed during the 1960s. Prior to the discovery of binder-less diamond bits, the author believed that among the alternatives proposed, explosive drills might be the simplest and most conducive to improvement, since they had been successfully field-tested. What most of these exotic alternatives claim to offer (or at least what their proponents claim!) is faster drilling rates. But upon scrutiny, they do not live up to this promise. For example, Quaise, a company attempting to commercialize the idea of Paul Woskov to use high-frequency radiation to heat rock to its vaporization point, claims to be able to drill at 10 meters per hour. But this number is nothing spectacular considering conventional binder poly-crystalline diamond bits from the 1980s could drill as fast as 7 meters per hour in crystalline rock using air (Deep Drilling in Crystalline Bedrock Volume 2: Review of Deep Drilling Projects, Technology, Sciences and Prospects for the Future, Anders Bodén, K. Gösta Eriksson). Drilling with lasers, microwaves, or any other thermal delivery mechanism is well within the capacities of modern technology, but it offers no compelling advantage to impel adoption. Most of these thermal drilling options require dry holes, since water vapor, being a polar molecule, will absorb most of the energy from electromagnetic radiation. While new binderless polycrystalline diamonds can withstand temperatures up to 1200°C in non-oxidizing atmospheres, down-bore drivetrain components are not practically operated over 250°C due to lubricant limitations, preventing drilling from taking place with down-hole equipment at depths greater than 7 km, especially in sharp geothermal gradients of over 35°C/km. Electric motors using glass or mica-insulated windings and high-Curie-temperature magnetic materials such as Permendur can maintain high flux density well over 500°C, but gearbox lubrication issues make such a motor useless. In order to maximize the potential of binder-less diamond bits, a down-hole drive train is called for, to eliminate drill pipe oscillation and friction and to allow optimal speed and power. Of all the down-hole drive options, a high-frequency, high-power-density electric motor is ideal, possessing far higher power density than classic turbodrills and offering active speed and torque modulation. Even if a classic Russian turbodrill is employed, a reduction gear set is still required. Russian turbodrills were plagued by rapid wear of planetary gearsets due to low oil viscosity at downhole temperatures. A gearset operating with oil of 3 cSt wears ten times faster than one at 9 cSt. In order to make a high-power electric motor fit in the limited space in the drill pipe, a high operating speed is necessary. This is where the lubrication challenges become exceedingly difficult.
While solid lubricants and advanced coatings in combination with ultra-hard materials can allow bearings to operate entirely dry for thousands of hours, non-gear reduction drives are immature and largely unproven for continuous heavy-duty use. The power density of a synchronous electric motor is proportional to the flux density of the magnet, the pole count, and the rotational speed. This requires a suitable reduction drive system to be incorporated into the drill. Although a number of exotic, untested concepts exist, such as traction drives, pneumatic motors, high-temperature hydraulic pumps, dry lubricated gears, etc, none enjoy any degree of operational success and they exist only as low-TRL R&D efforts. Deep rock drilling requires mature technology that can be rapidly commercialized with today's technology; it cannot hinge upon future advancements which have no guarantee of occurring. Among speed-reducing technologies, involute tooth gears are the only practical reduction drive option widely used in the most demanding applications such as helicopters and turbofan engines. But because of the high Hertzian contact stress generated by meshing gears, it is paramount that the viscosity of the oil does not fall much below 10 centipoise, in order to maintain a sufficient film thickness on the gear face, preventing rapid wear that would necessitate frequent pull-up of the down-hole components. Fortunately, ultra-high viscosity gear oils are manufactured that can operate up to 200°C. Mobil SHC 6080 possesses a viscosity of 370 cSt at 100°C, and the Andrade equation predicts a viscosity of 39 cSt at 180°C. In an anoxic environment, the chemical stability of mineral oils is very high, close to 350°C, but at such temperatures viscosity drops below the film-thickness threshold, so viscosity, not thermal stability, is the singular consideration. It is expected that by eliminating the oscillation of the drill pipe caused by eccentric rotation within the larger borehole and removing the cobalt binder, diamond bits could last up to 100 hours or more. This number is conjectural and more conservative bit life numbers should be used for performance and financial analysis. It is therefore critical that the major down-hole drive train components last as long as the bits so as to not deplete their immense potential. If bit life is increased to 100 hours, the lost time due to pull-out is reduced markedly. With a bit life of 50 hours to be conservative, and a drill-pipe length of 30 meters, pull-up and reinsertion time is reduced to only 544 hours, or 40% of the total drilling time. If the depth of the well is 10,000 meters, the average depth is 5000 meters, the average penetration rate is 7 m/hr, and the drill pipe is 30 meters, then the number of drill pipe sections is 333. During each retrieval, if the turn-around time can be kept to 3 minutes per section, the total time is 8.3 hours per retrieval one way, or 16.6 hours for a complete bit swap. If the total drilling time is 1430 hours, then a total of 29 bit swaps will be required, taking up 481 hours, or 33% of the total drilling time. If bit life is improved to 100 hours, downtime is halved to 240 hours or 17%. If a drill-pipe length of 45 meters is employed with a bit life of 100 hours and a rate of penetration of 7 m/hr, the downtime is only 211 hours or 14.7%.
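The trip-time figures above follow from a very simple model in which the average trip handles half the pipe string (since the average bit depth is half the final depth); all inputs are the text's own figures, and the 45-meter-joint case is omitted because it depends on the handling time assumed per longer joint (Python):

def tripping_downtime(depth_m, rop_m_hr, pipe_len_m, bit_life_hr, minutes_per_joint=3.0):
    # Hours of drilling, hours of round-trip pipe handling, and the downtime fraction
    drilling_hr = depth_m / rop_m_hr
    swaps = drilling_hr / bit_life_hr
    avg_joints_per_trip = (depth_m / 2.0) / pipe_len_m
    trip_hr = 2.0 * avg_joints_per_trip * minutes_per_joint / 60.0   # out and back in
    downtime_hr = swaps * trip_hr
    return drilling_hr, downtime_hr, downtime_hr / drilling_hr

for bit_life in (50.0, 100.0):
    drill, down, frac = tripping_downtime(10000.0, 7.0, 30.0, bit_life)
    print(f"{bit_life:.0f} h bits: {down:.0f} h of tripping on {drill:.0f} h of drilling ({frac:.0%})")
# 50 h bit life  -> ~476 h (~33%), matching the ~481 h / 33% figure above
# 100 h bit life -> ~238 h (~17%), matching the ~240 h / 17% figure above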
Some may be suspicious that something as simple as this proposed idea has not been attempted before. It is important to realize that presently, there does not exist any rationale for its use. Therefore, we can conclude that rather than fundamental technical problems or concerns regarding its feasibility, a lack of relevant demand accounts for its purported novelty. As mentioned earlier, this new strategy has not been employed in drilling before since it imposes excessive demands on surface equipment, namely the need for close to 16,000 hp (32,000 hp at full depth) to drive high-pressure water pumps. Such power consumption is impractical for oil and gas drilling, where quick assembly and disassembly of equipment is demanded in order to increase drilling throughput. Water, even with its low viscosity, requires a lot of energy to flow up and down this very long flow path. The vast majority of the sedimentary deposits where hydrocarbons were laid down during the Carboniferous period occur in the first 3 km of the crust. The temperatures at these depths correspond to less than 100°C, which is not close to a temperature that warrants advanced cooling techniques. Deep drilling in crystalline bedrock does not prove valuable for hydrocarbon exploration since subduction rarely brings valuable gas and liquid hydrocarbons deeper than a few km. There has therefore been a very weak impetus for the adoption of advanced technologies related to high-temperature drilling. Geothermal energy presently represents a minuscule commercial contribution, and to this date has proven to be an insufficient commercial incentive to bring to market the necessary technical and operational advances needed to viably drill past 10 km in crystalline bedrock. Cooling is essential for more than just the reduction gearbox lubricant. If pressure transducers, thermocouples, and other sensor technologies are desired, one cannot operate hotter than the maximum temperature of integrated-circuit silicon electronics. For example, a very effective way to reduce Ohmic losses is by increasing the voltage to keep the current to a minimum. This can easily be done by rectifying high-voltage DC using silicon-controlled rectifiers (SCRs, or thyristors) and nano-crystalline transformer cores. But both gearbox oil and thyristors cannot operate at more than 150°C; cooling thus emerges as the enabling factor behind any attempt to drill deep into the crust of the earth, regardless of how exactly the rock is drilled. Incidentally, the low thermal conductivity and heat capacity of the crust yield a low thermal diffusivity, or high thermal inertia. Rock is a very poor conductor of heat; in fact, rock (silicates) can be considered an insulator, and similar oxides are used as refractory bricks to block heat from conducting in smelting furnaces. The metamorphic rock in the continental crust has a thermal conductivity of only 2.1 W/m-K and a heat capacity of under 1100 J/kg-K at 220°C, translating into a very slow thermal diffusivity of 1.1 mm2/s, corresponding to the average temperature of a 12 km deep well. This makes it more than feasible for the operator to pump a high volume of water through the drill pipe and annulus, above and beyond the requirement for cutting removal. If rock had an order of magnitude faster thermal diffusivity, such a scheme would be impossible, as the speed at which heat travels through the rock would exceed even the most aggressive flow rates allowable through the bore-hole.
The motivation behind the use of down-hole electric motors
With satisfactory cooling, electric motors are the most convenient method to deliver power, but they are not the only high-power-density option. A turbo-pump (a gas turbine without a compressor) burning hydrogen and oxygen is also an interesting option, requiring only a small hose to deliver the gaseous fuel products, which eliminates the need for any down-hole voltage conversion and rectification equipment. But despite the superior power density of a combustion power plant, the need to pump high-pressure flammable gases presents a safety concern at the rig, since each time a new drill string must be coupled, the high-pressure gas lines have to be closed off and purged. In contrast, an electric conductor can simply be de-energized during each coupling without any mechanical action at the drill pipe interface, protecting workers at the site from electric shock. In conclusion, even though a turbo-pump using hydrogen and oxygen is a viable contender to electric motors, complexity and safety issues arising from pumping high-pressure flammable gases rule out this option unless serious technical issues are encountered in the operation of down-hole electric motors, which are not anticipated. Conventional turbodrills require large numbers of turbine stages to generate a significant amount of power; this results in a substantial portion of the pressure of the fluid pumped from the surface being used up by the turbine stages, resulting in considerable pressure drop, which reduces the cooling potential of the water since there is now less head to overcome viscous drag along the rough borehole on the way up the annulus. According to Inglis, T. A. (1987) in Directional Drilling, an 889 hp turbodrill experiences a pressure drop of 200 bar with a flow rate of 163 m3/hr; since the large diameter drill-bit requires at least 1000 kW (1350 hp), the total pressure drop will be 303 bar, or half the initial driving head. This will halve the available flow rate and thus the cooling duty.
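The turbine pressure-drop figure above is a simple proportional scaling: at a fixed flow rate, the pressure drop across the turbine stages scales roughly linearly with the power extracted. The 889 hp, 200 bar, 163 m3/hr reference point is the Inglis (1987) figure cited above, and the scaling itself is only a sketch (Python):

ref_hp, ref_drop_bar = 889.0, 200.0     # Inglis (1987) turbodrill reference point
required_hp = 1350.0                    # ~1000 kW for the large-diameter bit
drop_bar = ref_drop_bar * required_hp / ref_hp
print(f"Estimated turbine pressure drop: {drop_bar:.0f} bar of the ~600 bar driving head")
# ~304 bar, i.e. roughly half the head, as stated above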
Electric motors give the operator the ability to perform live, active modulation of bit speed and torque, while turbodrills cannot be operated efficiently below their optimal speed band. Moreover, even if turbodrills could be designed to run efficiently at part load, it is not practical to vary the pumping output at the surface to control the turbodrill's output. And even if turbodrills were used, they would still need to employ our active-cooling strategy, since they too need speed reduction. It should be emphasized that it is not the use of down-hole motors themselves that makes our drilling concept viable, but rather the massive water flow that keeps everything cool.
In hard crystalline bedrock, well-bore collapse generally does not occur; instead, a phenomenon called "borehole breakout" occurs. Breakout is caused by a stress concentration at the roots of two opposing compression domes, which forms a crack at the point of stress concentration between them. Once this crack forms, it stabilizes, the stress concentration is relieved, and the crack grows only very slowly over time. Imagine the borehole divided into two halves, each forming a dome opposite the other: compressive stress is at a maximum at the crest of each dome and at a minimum at its root, causing the roots to elongate and fracture. Overburden pressure is an unavoidable problem in deep drilling. It is caused by the sharp divergence between the lithostatic gradient of rock, about 26 MPa/km, and the hydrostatic gradient of water, only about 10 MPa/km.
Technical challenges
It is important to separate technical problems from operational problems. For example, regardless of what kind of drill one uses, there is always the risk of the hole collapsing in soft formations and of equipment getting stuck. Another example is lost circulation; such a condition is largely technology-invariant, short of extreme options such as casing drilling.
Operational challenges
While there are no strict "disadvantages", namely features that make this method inferior to current surface-driven shaft drills, there are undoubtedly a number of unique operational challenges. Compared to the companies touting highly unproven and outright dubious concepts, this method and technological package faces only operational, not technical, challenges. The massive flow of water and the intense removal of heat from the rock will result in more intense than normal fracture propagation in the borehole. The usual issues of extreme drilling environments apply equally to this technology and are not necessarily made any graver than with conventional shaft-driven drills. For example, the down-hole motor and equipment getting stuck, a sudden unintended blockage of water flow somewhere along the annulus resulting in rapid heating, or a snapping of the drill string are likely to happen occasionally, especially in unstable formations or in regions where over-pressurized fluids are trapped in the rock. Another potential downside is intense erosion of the rock surface due to the high annulus velocity of over 8 meters per second. Since a large volume of water must be pumped, a large head of at least 600 bar is required; this pressure energy is converted into velocity according to Bernoulli's principle.
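The divergence between the lithostatic and hydrostatic gradients described above can be tabulated in a few lines. The gradients are the round figures from the text; the depths are illustrative, with 12 km being the well depth discussed earlier.

```python
# Lithostatic vs. hydrostatic pressure with depth, using the gradients quoted
# above. The growing differential is what drives breakout and overburden
# problems in a water-filled borehole.

LITHOSTATIC_GRAD = 26.0  # MPa per km of rock overburden
HYDROSTATIC_GRAD = 10.0  # MPa per km of water column

for depth_km in (3, 6, 9, 12):
    p_rock  = LITHOSTATIC_GRAD * depth_km
    p_water = HYDROSTATIC_GRAD * depth_km
    print(f"{depth_km:2d} km: rock {p_rock:4.0f} MPa, water {p_water:4.0f} MPa, "
          f"differential {p_rock - p_water:4.0f} MPa")
```

At 12 km the water column falls short of the rock stress by nearly 200 MPa, which is why breakout, rather than outright collapse, is the governing stability mode in strong crystalline rock.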
Because the concentration of rock fragments in the water is extremely low (under 0.06%, versus over 2% in drilling mud), the rate of erosion on the hardened drill-pipe liner is not a concern. Given the relatively short period during which drilling actually takes place, around 2,000 hours including bit replacement and pull-up every 50 hours, it is unlikely that this water will have time to significantly erode the well-bore. Even if it does, it will merely enlarge the well diameter and is not expected to significantly compromise its structural integrity.
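For illustration, the cuttings loading of the return water can be estimated from the rate of penetration, the bit diameter, and the circulation rate. The three input values below are assumptions chosen only to show the order of magnitude; none of them are specified in this document.

```python
import math

# Illustrative estimate of cuttings concentration in the return water.
# All three inputs are assumed values, not figures from the text.
rop_m_per_hr   = 2.0    # assumed rate of penetration, m/hr
bit_diameter_m = 0.5    # assumed bit diameter, m
flow_m3_per_hr = 900.0  # assumed water circulation rate, m^3/hr

hole_area   = math.pi * (bit_diameter_m / 2.0) ** 2  # borehole cross-section, m^2
cuttings_m3 = rop_m_per_hr * hole_area               # rock volume excavated per hour

# Volume fraction of solids in the upward-flowing water.
concentration = cuttings_m3 / (flow_m3_per_hr + cuttings_m3)
print(f"cuttings volume fraction: {concentration:.2%}")  # ~0.04% with these inputs
```

With any plausible combination of penetration rate and circulation rate in this regime, the solids fraction stays one to two orders of magnitude below that of conventional drilling mud.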