Atmospheric CO2 Extraction Using Gas Centrifuges for Synthetic Fuel Production

Christophe Pochari Energietechnik (de Rivals-Mazères Ingénierie) is reviving long-forgotten gas centrifuge technology for atmospheric carbon dioxide extraction to produce synthetic fuels, such as jet fuel made aboard aircraft carriers, and to guarantee energy security through the production of kerosene, diesel and jet fuel for defense. The implications of this technology for military logistics are enormous.

Direct hydrogenation of CO2 is redundant: it requires expensive iridium complexes and complicated ligand molecules (coordination complexes), and it offers no advantage over simply producing additional hydrogen for the reverse water gas shift (RWGS). Moreover, direct hydrogenation of CO2 yields formic acid or methanol, which must be further reacted or decomposed to produce carbon monoxide before Fischer-Tropsch synthesis can be performed. Fischer-Tropsch is a highly mature process and requires only earth-abundant elements for catalysis, namely cobalt and manganese. The turnover frequency (TOF) of these catalysts tends to be high, resulting in a small reactor volume, and the operating pressure is typically slightly over 2 MPa. One of the carbon-oxygen bonds of the triatomic CO2 molecule must be broken either way, and this is accompanied by a large investment of energy. Copper/nickel supported on silica provides high conversion to CO, and RWGS has been studied extensively for producing synthetic hydrocarbons on Mars. Copper catalysts have been widely studied for the reverse water gas shift reaction; they also show high activity for the more common water gas shift reaction. Advances in micro-channel etching technology allow exothermic reactors such as Fischer-Tropsch units to be heavily process-intensified, resulting in tiny footprints. Below are images of microchannel Fischer-Tropsch reactors from the paper “Microchannel Reactor for Fischer–Tropsch Synthesis: Adaptation of a Commercial Unit for Testing Microchannel Blocks” by Luciano C. Almeida.
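For reference, the overall mass and energy balance implied by this route can be sketched from the idealized overall reaction CO2 + 3H2 → (-CH2-) + 2H2O (RWGS followed by Fischer-Tropsch chain growth). The Python sketch below is a rough illustration, not a process model; the 33.3 kWh/kg electrolysis figure is hydrogen's lower heating value and serves only as a theoretical floor (practical electrolyzers need roughly 50 kWh/kg).

```python
# Rough sketch: hydrogen and CO2 demand for CO2-to-hydrocarbon via RWGS + Fischer-Tropsch.
# Idealized overall reaction per chain unit: CO2 + 3 H2 -> (-CH2-) + 2 H2O
M_CO2 = 0.044   # kg/mol
M_H2  = 0.002   # kg/mol
M_CH2 = 0.014   # kg/mol, one -CH2- chain unit

h2_per_kg_hc  = 3 * M_H2 / M_CH2    # ~0.43 kg H2 per kg of hydrocarbon
co2_per_kg_hc = M_CO2 / M_CH2       # ~3.1 kg CO2 per kg of hydrocarbon

LHV_H2 = 33.3                       # kWh/kg, theoretical floor for electrolysis energy
mwh_per_ton_hc = h2_per_kg_hc * LHV_H2  # numerically equal to MWh per ton of hydrocarbon

print(f"H2 demand:          {h2_per_kg_hc:.2f} kg H2 per kg hydrocarbon")
print(f"CO2 demand:         {co2_per_kg_hc:.2f} kg CO2 per kg hydrocarbon")
print(f"Electrolysis floor: {mwh_per_ton_hc:.1f} MWh per ton of hydrocarbon")
```

This lands at roughly 14 MWh per ton of hydrocarbon at the theoretical floor, consistent with the figure used in the aircraft carrier section below.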


Process-intensified Fischer-Tropsch reactors, featuring higher mass-transfer rates owing to their high surface-to-volume ratios.

Christophe Pochari Energietechnik (Mazères Propulsion) is designing a complete package including the electrolysis bank for hydrogen production, the CO2 extraction centrifuge bank, the Fischer-Tropsch reactor, and the final hydrocarbon refining unit. This package can then be hooked up to any electrical power source to produce liquid hydrocarbons for armed forces around the world. We hope to supply NATO militaries such as the U.S., UK, and France. Another application will be trucking, airline, and civilian vehicle fleets that spend large sums on liquid fuels: by pairing a CO2 centrifugation and Fischer-Tropsch plant with our high-altitude wind turbines, operators can produce liquid fuels for less than $0.50/gallon. Even with the imminent end of the Ukraine war, long-term oil prices will remain over $70/bbl due to growing international demand and limited production elasticity. Moreover, hydrocarbons extracted from the sedimentary crust are contaminated with sulfur, nitrogen compounds, and trace metals such as vanadium. Synthetic hydrocarbons are extremely clean and much clearer in appearance than “terrestrial hydrocarbons”. Besides fuel autonomy for armed forces and fleets, perhaps the most consequential aspect of this technology is energy storage for intermittent renewable sources such as wind and photovoltaic. Centrifuge and electrolyzer banks can be selectively switched on and off to match the prevailing output of a variable power source like a wind turbine. This technology allows heavy fuel consumers with little to no domestic gas and oil, such as Germany and Japan, to produce their own liquid fuels using electricity alone.

Centrifuge Specs:

Diameter: 300mm

Length: 1.5m

Rotational Speed: 80,000 rpm

Weight: 4.2 kg

Material: Toray T1100G CFRP

Peripheral Speed: 1256 m/s

Tangential Tensile Stress: 2860 MPa (see the verification sketch after this list)

Steady State Power Consumption: 0.5 kW

Bearing Type: Magnetic

Atmospheric CO2 Extraction Parameters: CO2: 1 bar inlet pressure, 1000 bar peripheral pressure

UF6 Benchmark Parameters: 0.13 bar sublimation pressure, 10 microbar center pressure, average pressure: 0.06 bar.

UF6 vs. CO2/Air Difference: 34,720× the UF6 throughput

Energetics: 17,500 kWh/kg U-235 (enrichment benchmark) vs. 414 kWh/ton-CO2

Footprint Requirements: 0.25 kg-U-235/yr/m2 vs. 8.5 ton-CO2/yr/m2
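The peripheral speed and stress figures above can be cross-checked with the thin-ring hoop stress relation σ = ρv². A minimal Python sketch, in which the laminate density of ~1800 kg/m³ is an assumed value (roughly that of a T1100G-class carbon fiber composite); the other inputs come from the spec list:

```python
import math

# Cross-check of the rotor figures listed above.
rpm    = 80_000
radius = 0.150          # m (300 mm diameter)
rho    = 1800           # kg/m^3, assumed CFRP laminate density

omega = rpm * 2 * math.pi / 60       # angular speed, rad/s
v_tip = omega * radius               # peripheral speed, m/s
sigma = rho * v_tip**2               # hoop stress of a thin rotating ring, Pa

print(f"Peripheral speed: {v_tip:.0f} m/s")      # ~1257 m/s, matching the listed 1256 m/s
print(f"Hoop stress:      {sigma/1e6:.0f} MPa")  # ~2840 MPa, close to the listed 2860 MPa
```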

Space Required for Aircraft Carrier

“A typical aircraft carrier can refuel every fighter jet about 20 times before depleting its supply of jet fuel.” An aggressive scenario assumes each fighter takes off once a day; in this scenario, the fuel supply would last 20 days. The typical aircraft carrier has a supply of 1 million gallons (3,800 m3), or roughly 3,150 tons of jet fuel. For the aircraft carrier to produce its entire fuel supply indigenously, it would require 210,000 tons of CO2 annually, or 247,000 m2 of centrifuge footprint. The total space available on a carrier is about 230,000 m2, so this is feasible only if the centrifuges are stacked atop each other and the number of flights is reduced somewhat. Each ton of kerosene consumes a minimum of 14 MWh of electrical power, 99% of which goes to electrolysis, both to supply the hydrogen in the hydrocarbon molecules themselves and to break the carbon-oxygen bonds. To produce 59,000 tons of kerosene annually would consume 815,000 MWh. The total installed electrical generation capacity on a Nimitz-class carrier is 194 MW, so 48% of its power would be used to make kerosene. Clearly, this full-replacement scenario is unrealistic from both a space and a power perspective. A reduced-usage scenario, at around 10% of this estimate, would require about 19.4 MW (10% of installed power) and 23,000 m2.
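The arithmetic above can be reproduced with a short Python sketch; every input is a figure quoted in this section (the 3.55 t CO2 per ton of kerosene is implied by the 210,000 t and 59,000 t figures):

```python
# Reproducing the aircraft-carrier fuel arithmetic from the text above.
kerosene_per_year = 59_000     # tons of jet fuel per year (full-replacement scenario)
co2_per_ton_fuel  = 3.55       # tons CO2 per ton of kerosene (implied by 210,000 t / 59,000 t)
energy_per_ton    = 14.0       # MWh per ton of kerosene, electrolysis-dominated
installed_power   = 194.0      # MW, Nimitz-class electrical generating capacity

co2_per_year    = kerosene_per_year * co2_per_ton_fuel      # ~210,000 t/yr
energy_per_year = kerosene_per_year * energy_per_ton        # ~826,000 MWh/yr
avg_power       = energy_per_year / 8760                    # ~94 MW continuous

print(f"CO2 required:  {co2_per_year:,.0f} t/yr")
print(f"Energy:        {energy_per_year:,.0f} MWh/yr")
print(f"Average power: {avg_power:.0f} MW "
      f"({avg_power / installed_power:.0%} of installed capacity)")
```

The result is roughly 94 MW of continuous power, close to the 48% of installed capacity quoted above; small differences come from rounding in the per-ton energy figure.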

“Pressure therefore increases monotonically with the distance R from the center point of the centrifuge.”

Monte-Carlo Simulations of Centrifugal Gas Separation, Roger Cracknell and Michael Golombok, Shell Global Solutions (UK), Cheshire Innovation Park, PO Box 1, Chester, CH1 3SH, UK; Shell Exploration and Production, Kessler Park 1, 2288 GS Rijswijk, the Netherlands.

“The classical objection to using centrifugal gas separation industrially has arisen from the throughput restrictions associated with uranium isotopes. However, the restrictions for lighter gas separations are much less severe. Pressure is no longer limited by de-sublimation to sub-atmospheric levels. Radial pressure gradients are a factor of 10^4 less so that overall higher throughputs may be obtained for a given sized unit. The wall pressures are further reduced if we allow for the fact that, prior to diffusive separation of components, the mass transfer associated with setting up the radial pressure gradient takes place under thermodynamically adiabatic conditions. This variation from the usual isothermal assumption enables higher throughputs.”

Thermodynamic Factors Governing Centrifugal Separation of Natural Gas, M. Golombok, C. Morley.

“In fact, a number of methods are being investigated: there are various types of gas adsorbing crystal systems, there are membranes… but a few years ago, at Shell the imp of perversity prompted to me to suggest to the Shell Gamechangers (a mechanism for innovation) that we should look at mechanical separation of gases by centrifugation.”

Golombok, M. (2007). Difficult Separations. Technische Universiteit Eindhoven.

“A number of processes for producing hydrogen and other gaseous and liquid fuels from coal were developed and utilized prior to 1930. Use of these processes was nearly discontinued in the 1940s because of the great availability of natural gas and oil. However, the basic chemistry involved in these processes can again be utilized to produce desirable fuels, but the enormous advances in construction materials, process controls and instrumentation that have occurred in the last 40 years will allow the design of much better process plant hardware and more efficient processes. This proposal outlines an application of the modern high-strength composite materials to the design of a piece of process plant hardware for the separation of hydrogen from the other gases produced in the coal gasification process. The separation hardware will allow the chemical reaction process to be performed in a more efficient manner and may lead to new uses of hydrogen as a fuel. This separation technology can also be applied to the enrichment of natural gas which has too low a combustible content to be utilized directly.”

Application of Centrifugal Separation to the Production of Hydrogen from Coal, Laurence O. Williams.

There is, however, one other important difference arising from the different range of molecular weights in which we are interested and relating to the properties of UF6. The latter is a solid which sublimes at 56°C; at ambient temperature the vapor pressure is typically around 0.2 bar (Perry and Green, 1984). This means that during operation the pressure should never exceed this amount because this will lead to solid material precipitating on the wall which will severely damage operation. However, the pressure gradients for heavy molecular weight materials are extremely high. Consider a moderate range of operation, a centrifuge of radius 5 cm operating at 70,000 rpm. Based on equation (1) the pressure at the wall will be of the order of 14,000 times that at the center. Given the 0.2 bar operating limit at the wall, then the center pressure will only be of the order of 10 µbar. This results in a very low throughput and indicates one of the major paradigm shifts in going from isotope separation to industrial-scale natural gas separation.

Thermodynamic Factors Governing Centrifugal Separation of Natural Gas, M. Golombok, C. Morley.
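The quoted wall-to-center ratio follows from the standard isothermal radial pressure distribution in a rotating gas, p(r)/p(0) = exp(Mω²r²/2RT). A minimal Python sketch, with T = 300 K assumed:

```python
import math

# Isothermal wall-to-center pressure ratio in a gas centrifuge:
#   p(r) / p(0) = exp(M * omega^2 * r^2 / (2 * R * T))
R_GAS = 8.314        # J/(mol K)
T     = 300.0        # K, assumed
M_UF6 = 0.352        # kg/mol
rpm   = 70_000
r     = 0.05         # m, rotor radius from the quoted example

omega = rpm * 2 * math.pi / 60
ratio = math.exp(M_UF6 * (omega * r) ** 2 / (2 * R_GAS * T))
print(f"UF6 wall-to-center pressure ratio: {ratio:,.0f}")   # ~13,000, of the order of the quoted 14,000
```

Because the exponent scales with molar mass, a light gas such as air or CO2 at the same peripheral speed develops a far smaller radial pressure gradient, which is the point made in the Cracknell and Golombok quote above.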

Device Separating e.g. Gas Into Heavier Fraction, Has Outlet Opening Provided in Wall of Centrifugal Tube and Gas Removal Valve Provided in Wall of Outer Pressure Container for Outputting Heavier Fraction, DE102009013883A1, Herbert Widulle, 2009

A German chemist named Herbert Widulle patented a centrifuge for separating oxygen from nitrogen to produce medical-grade oxygen.

http://www.widulle.org/content/kontakt_de.html

Centrifugal gas-liquid separator, JPH01207151A, Osamu Hanabusa, Mitsubishi Heavy Industries Ltd, 1988


“The invention relates to a centrifuge for separating gaseous mixtures, comprising at least one hollow, mainly cylindrical rotor part which is rotatable in a housing and designed as a separating drum, which rotor part has at one end an end wall in which openings are provided close to the inside wall of the separating drum, this rotor part being provided with one stationary admission tube for the gaseous mixture to be separated, which admission tube is rigidly fastened to the housing, and with at least one outlet tube for the discharge of a first separation component, which tubes open into a separating chamber situated inside the separating drum, the rotor part being coupled to a driving motor and the centrifuge housing being provided with at least one connection for the discharge of a second separation component.

Such a centrifuge is known from the published Netherlands Pat. No. 103,433.

The application aims at further improving a centrifuge of the type mentioned in the preamble in such a way that it becomes suitable for separating a gas which could occur in very low concentration in vapour or mixtures of vapors. The application specifically aims at separating inert gases, such as helium, from natural gas. For this purpose, according to the invention, the space between the cylindrical outside of the rotor part and the inside of the part of the housing located opposite it, which space will hereinafter be called “expansion slot,” is provided with a number of throttling restrictors spaced at regular distances from each other, in such a way that this slot communicates, on the one hand, with the aforementioned openings and, on the other hand, with the connection for the discharge of the second separation component. As a result, the second separation component is removed from the separating chamber through the aforementioned openings, whereupon it is conveyed along the outside of the rotor part, while undergoing a gradual reduction of pressure, to the other rotor end, from where it is received by the aforementioned connection.

During the rotation, condensable gases such as CH4 and higher fractions of the natural gas are so extensively densified under the high concentration of pressure in the centrifuge drum that they liquefy against the inside wall of the drum, where they eventually form a thin layer of liquid. This is also where the heavier impurities become concentrated, such as mercury, nitrogen and the like.

This cannot become a thick layer, since liquid natural gas, which is subject to simultaneous expansion accompanied by partial evaporation, can flow off continuously through the aforementioned openings in the end wall of the separating space towards the space inside the housing but outside the drum. From there, this partly liquid gas flows at a pressure p2 through the first of a series of restrictors which are arranged in the slot, during which process the pressure is relieved to a lower value p3. This is governed by the rule that p1 /p2, just as p2 /p3, is equal to the critical pressure ratio. The gas, although cooling down as a result of expansion during this relief of pressure, is heated at the same time on account of the heat transfer from the layer of liquid natural gas inside the drum, which heat moves through the drum wall of the centrifuge. Eventually, therefore, the temperature in the expansion slot remains substantially constant. This process is repeated at all restrictors except the last. In the last restrictor, the medium in the expansion slot is no longer heated by heat from the layer of liquid natural gas inside the centrifuge drum. This medium leaves the centrifuge after the last expansion. It is an important aspect of the invention that the throttling restrictors in the expansion slot are designed as a number of bearings which surround the rotor and support it along the entire periphery, mutually separated by expansion chambers. The drum is thus adequately supported from distance to distance over the entire length of the drum, so that quiet running of the drum is ensured. Such a bearing is preferably provided in the form of a gas-film bearing or of a spiral-groove gas bearing, because this enables the second separation component to flow through the spiral-groove passages to the other side of the bearing. Such a bearing is also designed effectively as a viscoseal. In order to prevent the possibility of insufficient medium flowing through the lubricant film space of a bearing to the other side, a bearing house is provided, if necessary, with at least one bypass for connecting the front and the rear side of the bearing to each other. The pressure on the outside of the drum is very high at the beginning of the expansion, its maximum being almost as high as in the thin liquid layer inside the drum, but it becomes gradually lower to the measure that more restrictors have been passed. Accordingly, the drum is exposed to a higher outward differential pressure near the inlet of the gas into the drum than near the outlet of the liquid gas as it leaves the drum. The drum will therefore expand more greatly at the inlet end if the wall thickness is kept constant. In order to enable the bearings to follow this rotor-drum deformation, which changes from place to place, a bearing is interrupted on the periphery by a dilation slot, so that the bearing can perform a flexible motion.

With regard to wall thickness, the centrifuge housing can be adapted to the local pressure in the expansion slot, in the sense that the wall thickness increases to the measure that the maximum operating pressure in the expansion slot has a higher value. According to a preferred embodiment, the outlet tube for the first separation component extends inside the rotor into or near the coolest part thereof. As a result, the first component is drained at a point where the vapour pressure of the second medium is as low as possible, so that the concentration of the inert gas, such as helium, is high for that very reason, allowing the first component to be drained with the highest possible helium enrichment.

It can be advantageous to provide the centrifuge with a two-part rotor, in such a way that those end walls of each rotor part which are furnished with openings face each other, while being connected by a central portion. In such case, the need for a collar bearing to absorb the axial pressure is obviated. This embodiment greatly simplifies the installation of the driving electromotor, since it can be so fitted that its armature coincides with the aforementioned central portion. The stator of such an electromotor is provided as a canned stator, around which the liquid natural gas flows. In order to avoid trouble from certain critical speeds of the centrifuge drum, it is preferably so manufactured that the drum wall exhibits at regular intervals a constriction in the form of a circular slot along the periphery. Such circular slots can be provided along the inside periphery as well as along the outside periphery. In the places where such a slot-shaped groove occurs, the drum wall is somewhat more flexible, allowing the critical speeds of the rotor drum to be made so low that they are smaller than the operating speed. It is thus made impossible for these frequency ranges to interfere with each other. The expansion space located outside the separating drum can be provided with means for separate discharge of liquid, so that the expansion slot contains substantially the gaseous second separation pressure separation component, so that the jacket friction on the outside of the centrifuge drum is appreciably reduced. In this embodiment, the bearings which embrace the drum are provided in the form of gas bearings. In order to load the drum wall more uniformly, the inside diameter of the drum can be made large in an area marked by a high outside pressure, and conversely. As a result, the liquid layer of the first separation component, and therefore the internal load, increases in magnitude in places of a high outside pressure, and decreases in magnitude where the outside pressure is low. Expansion chambers can also be arranged inside the drum, as will be described hereinafter. This causes the gas pressure to be lowered on the outside of the drum, with a corresponding decrease of the frictional resistance losses.

With the use of several of the centrifuges described, a centrifuge cascade can be so built up that these centrifuges communicate with each other on their gas sides, in such a way that the degree of enrichment of the first component increases at each subsequent centrifuge. The first component can then be abstracted at the top of the cascade, this component containing to a high degree the desired gas, for example helium. This component is then discharged to an installation for burning the entrained residues of the second component, which consists substantially of hydrocarbons, whereupon the resultant gaseous mixture is supplied to an installation for freezing the impurities out of the desired inert gas.

Assignee: Ultra Centrifuge Nederland N.V., The Hague, Netherlands 

Inventors: Frederik H. Theyse, Bensberg-Herkenrath, Fed. Rep. of Germany; Fridtjof E. T. Kelling, Amsterdam, Netherlands 

Appl. No.: 802,900; Filed: Jun. 2, 1977

Patent name: Centrifuge for Separating Helium From Natural Gas

About US


Mazères Energy is currently headquartered in the Coastal development of Bodega Harbour, pictured below.


Rivals Family Coat of arms

Christophe de Rivals, Chevalier de Mazères


Mazères, Ariège, region of Occitanie, where the Rivals family served as Knights.


The Links at Bodega Harbour


The Gherkin in London, Christophe’s favorite skyscraper.


Saint-Tropez, French Riviera, Brigitte Bardot’s favorite town.


In the Eiffel Tower at 300 meters, a strong endorsement of the merit of our high-altitude wind turbine technology, which allows wind energy to be captured toward the top of the boundary layer.


At Charles M Schulz Airport


Oktoberfest in Munich


Monte Carlo, Monaco


Westminster Abbey


A very James Bond looking highway in Ticino Switzerland.


In front of the German Enercon E-70 in Switzerland, the world’s most efficient wind turbine with a CP of over 0.55.


Charles Parsons’s radial-flow steam turbine at the London Science Museum.


Newcomen atmospheric engine


European Commission in Brussels



Wewelsburg Castle (former SS officer school), in Büren, outside of Paderborn in the northeast of North Rhine-Westphalia.


Anti-aircraft gun in the hills above the Naval port of Toulon


To the West is the Hyeres Naval Air Station, where the French Navy AS-565 Panthers and NH-90s train.


Christophe with his grandmother and brother in Hyères-les-Palmiers


Comtesse Catherine de Rivals-Mazères as a child

About Christophe Pochari. Christophe Pochari is an independent, self-taught, and self-employed designer and inventor. Christophe was born on January 29, 2000, in Monterey, California. His mother learned to speak 7 languages, and his parents traveled to 90 countries throughout the world. As a child, he was an avid draftsman of technical machines, ranging from engines to helicopters; he was especially fond of jetpacks, having drawn his first jetpack at age 5. At a very young age, he told his mother he would build large towers to capture the sun’s energy. At 17 he started his own blog titled “helicopteruniversity”, dedicated to rotorcraft engineering, and purchased over a hundred market research reports on the profitability of different industries. He has written over 2,500 pages on technology and philosophy. At age 8 he built his own house, and a second house at age 12; he also built a sail for his tricycle, using the windy Bodega Bay weather to zip around the road. He also built an asphalt tamper, a chainsaw, wooden airplanes, guns, and numerous other contraptions, all out of wood, since this was the only material he could use. He began using his father’s camera at age 10 to photograph helicopters. At age 11 he mastered the GIMP photo editing software. At 17 he finally learned CAD and began using Fusion 360.


Christophe installing siding on his family house.


Christophe’s first construction project


Christophe at age 10 with the California Highway Patrol Golden Gate Air Division.


Christophe with his maternal grandparents and brother in Hyères-les-Palmiers

Christophe’s father, Thomas Pochari Jr, is a publisher at worldaffairsmonthly.com, destructivecapital.com, bottleneckanimal.com, and monitoringrisk.com. He is active on Twitter at @_brainscience and @destructive_cap. At age 16 he wrote a letter to Edward O. Wilson. In 2009, Joe McNamara, spokesman for Perot Systems, said to Christophe’s father: “Ross Perot thinks you will be considered one of the most powerful, creative, and original scientific thinkers in the history of the world, maybe the most”. Ross Perot wanted to print his online media business, but Michael Dell bought Perot Systems a week later. An aide to Deng Xiaoping called him on his home number in Carmel in 1993. Thomas Jr worked as an independent journalist and interviewed the founders of Hamas and Saddam Hussein’s ambassador; his writings have been translated into Arabic and Chinese. He was invited on Al-Jazeera, and he met CIA director William Casey in Paris. Among the many important people he interviewed was Vint Cerf, considered the “father of the internet”. He is acquainted with Frank George Wisner II, the son of one of the founders of the CIA. He was acquainted with Eric Burgess, a friend of Arthur C. Clarke, who spoke of synthetic hydrocarbon production, something Christophe has been obsessed with since a young age. Muammar al-Gaddafi was the first subscriber to World Affairs Monthly. Thomas was acquainted with the Swiss central bank governor Markus Lusser. He has received letters from Richard Nixon and Henry Kissinger, and he was also acquainted with Marc Faber, a famed investor and founder of a hospital in Zurich. He wrote extensively on the Cold War, foreign relations, and the Middle East. He knew Herbert P. McLaughlin, the founder of KMD Architects in San Francisco. He built the first website with audio and video in 2002, before the BBC and YouTube. He was also acquainted with Richard Carl Fuisz (born December 12, 1939), a Slovenian-American physician, inventor, and entrepreneur with connections to the United States military and intelligence community; Fuisz holds more than two hundred patents worldwide, in such diverse fields as drug delivery, interactive media, and cryptography, and has lectured on these topics internationally. While Thomas is more of a diplomat than an engineer, he correctly anticipated the increase in energy and commodity prices at the beginning of the 21st century and made a bet against Julian Simon of the famous “Simon-Ehrlich wager”. He was actively involved in the subject of peak oil, having interviewed petroleum geologist Colin J. Campbell. He was the first to propose a global satellite internet system in the early 2000s, before Elon Musk’s Starlink, but favored geostationary orbit instead; he named the concept “Xipho”, Greek for sword, foreseeing its disruptive potential. The Pochari name is of Albanian and perhaps Aromanian origin. The Illyrian tribes who fled the Turkish Muslim invaders into the mountains came to be called the Pocharis; they were likely originally pot makers, and a familial history of intransigence characterizes the family.
The family is likely originally from the town of Moscopole or Voskopoja, a town west of Korçë famed for its great number of scholars and for having the only printing press outside of Istanbul during the Ottoman Empire. The New Academy or Greek Academy (Greek: Νέα Ἀκαδημία, Ελληνικό Φροντιστήριο) was a renowned educational institution, operating from 1743 to 1769 in Moscopole, an 18th-century cultural and commercial metropolis of the Aromanians and a leading center of Greek culture in what is now southern Albania. It was nicknamed the “worthiest jewel of the city” and played a very active role in the inception of the modern Greek Enlightenment movement. Christophe’s paternal grandfather, Thomas Pochari Sr, was born in New York City in 1930 and is currently 93 years old. He briefly attended Cornell and later the U.S. Naval Academy, where he graduated at the top of his class. He was accepted to Harvard Business School but could not attend, as his father declined to pay the high tuition cost. After graduating from the Naval Academy, he was transferred to the U.S. Air Force, as he got seasick on ships, and was assigned to Lowry Air Force Base in Colorado. Before Lowry, in March 1954, he was in charge of a group of combat bombers at Osan Air Base in South Korea and later at Kadena Air Base, Okinawa. In Korea, at the age of 24, he was responsible for launching 25 North American F-86 Sabres from the base; prior to takeoff, he had to check everything related to the fire systems, radars, etc. He was appointed Brigadier General on March 15, 1983, and was in charge of logistics in the event of a war at Travis Air Force Base in the Air Force Reserve. During his time in the armed forces, he was awarded the “Medal of Service in Defense of the Nation”, “Medal of Service in Korea”, “Life Service Tape in the Air Force”, “Medal of the Armed Reserve Forces”, and “United Nations Service Medal”. Today he is a real estate and equities investor with a portfolio of a dozen residential properties, including a home on Carmel Point in Carmel-by-the-Sea, and numerous stock holdings. He is a life extension and vitamin expert and an expert on the stock market and stock analysis. Pochari was involved with the U-2 program as deputy director at NASA Ames Research Center, with magnetometer development for the lunar lander, as technical manager of the Luster flight payload, and in a helicopter crash report investigation for a UH-1B. Pochari, Thomas R., Pitts, Samuel W., Hodges, Dewey H., and Nelson, Howard G.: “Report of Accident Investigation Board for UH-1B Helicopter,” NASA Ames Research Center, June 29, 1978. Sampling with a Luster Sounding Rocket during a Leonid Meteor Shower, National Aeronautics and Space Administration Ames Research Center, Moffett Field, California. The Pochari name is on the list of names of the design and engineering participants left on the Moon during the Apollo program. Thomas Pochari Sr’s father, Christophe’s great-grandfather, Christachi “Christ” Pochari, founded a highly successful restaurant in Midtown Manhattan at 52nd Street and 5th Avenue, which became a favorite of CBS journalists; he was acquainted with Jacob Javits, who appointed his son to Annapolis. The family still owns the 2,200 square foot property, worth an estimated $60 million. The family owns several moon rocks. Thomas Sr’s first cousin on his mother’s (Pavlina Ververi’s) side of the family is Edmond Leka, the founder of Union Bank Albania. Thomas Pochari Sr’s maternal grandfather, a merchant, owned the largest house in the town of Korçë, Albania.
“Union Bank was founded in 2005 by the Union Financiar Tirane (Financial Union of Tirana – UFT). The initial capital was 17.6 million EUR and the bank had 7 branches in Tirana, Durrës, Elbasan, Fushë Krujë and Fier. In 2008 the European Bank for Reconstruction and Development bought 12.5% of the shares, becoming the 2nd largest shareholder, while the other 87.5% is owned by the Albanian shareholders. In 2008 the Bank’s total assets exceeded 100 million EUR. In the following years, the total assets further increased, reaching the amount of 256 million EUR in 2014. The Bank’s main strategy is to further expand its network and increase its lending activities, with particular focus on the SME sector. The EBRD helps Union Bank, by developing and financing its portfolio and strengthening the bank’s funding base”. Leka is an electrical engineer who has published on the Albanian power industry: “Power Industry in Albania and Its Way Through the Reform to Market Economies”, 1994, 92 pages, publisher: Institut für Höhere Studien. He also co-authored the paper “Aerial Photography and Parcel Mapping for Immovable Property Registration in Albania”, 1997, 10 pages, publisher: Land Tenure Center, University of Wisconsin-Madison. Edmond and Niko Leka are some of the wealthiest Albanians; they also have holdings in media. Regrettably, their connections with the Soros Foundation (Edmond was the chair of the “Open Society Foundation”) make them an arch-enemy of the Pochari family; they currently live in Austria for security reasons, as they fear kidnapping and assassination in their home country, where they maintain a high-security compound. Moreover, their wealth is derived from the laundering of marijuana income into Europe; they have ties with the Albanian mafia and have tried silencing my father and me with fraudulent and illegitimate restraining orders.

https://pamfleti.net/edmond-leka-i-western-union-thirret-ne-prokurori-per-te-vjelle-zullumet-me-transfertat-e-miliardave-te-pista

On Christophe’s mother’s side, he is descended from a highly regarded French noble family tracing its ancestry back to 1247. The family is from the southwest of France, just north of the Pyrenees; the region is known today as “Aerospace Valley” for its high concentration of aircraft manufacturing activity. The Toulouse region was once home to Sud Aviation and Aerospatiale, and is now home to Airbus. Through the Rivals-Mazères family, he is a descendant of Hugh Capet, founder of the House of Capet and King of the Franks from 987 to 996. “The dynasty he founded ruled France for nearly three and a half centuries from 987 to 1328 in the senior line, and until 1848 via cadet branches”. He is related to Guillaume de Rivals-Mazères, a French aviator who flew for the Vichy Air Force, a graduate of the École spéciale militaire de Saint-Cyr, and a lieutenant general who served as vice commander of the Fourth Allied Tactical Air Base in Reims. Guillaume married Zizi du Manoir, an alpine skier, ice hockey player, and field hockey player.


Zizi du Manoir https://fr.wikipedia.org/wiki/Zizi_du_Manoir

Christophe is also a descendant of the famous playwright Jean Racine; see Jean Racine et sa Descendance, by Arnaud Chaffanjon, 1964. One of Christophe’s ancestors was Jean de Rivals Mazères, born circa 1664, died 10 November 1738 in Fiac, Tarn, Midi-Pyrénées, France. He was a doctor of law and a lawyer who served as King Louis XIV’s counselor and as tax collector (Ferme générale) in the Diocese (ecclesiastical district) of Carcassonne, department of Aude, region of Occitania. He is also related to Frédéric François-Marsal, who served as minister of Finance for ten years and prime minister of France for three days. Marsal sat on the boards of directors of numerous companies in banking (BUP, Banque d’Alsace et de Lorraine, Banque générale du Nord), real estate, metallurgy (Forges d’Alais, Electro-Câble, Tréfileries and Le Havre rolling mills), colonial ventures, etc. He joined the prestigious board of the Compagnie Universelle du Canal Maritime de Suez in 1927 and chaired the boards of several firms: Electro-Câble, a company for the equipment of railways and large electrical networks; Société Commerciale de l’Ouest Africain, which he had administered since 1921; and Compagnie des Vignobles de la Méditerranée (vineyards in Algeria). He became president of a powerful colonial lobby, the Union Coloniale Française, in 1927. The following year, in May 1928, he was elected to the Academy of Moral and Political Sciences, in the chair of Charles Jonnart. Our aerospace propulsion focus is a reflection of the de Rivals-Mazères family’s long history in French aviation. Christophe’s maternal grandfather was an ophthalmologist and artist who spent much of his free time gazing at the planes taking off from the Hyères-les-Palmiers airport. His favorite aircraft was the Douglas DC-3 Dakota. Whenever a plane could be heard in the distance, his wife would shout “avión” and he would run out to look. His thesis was titled “Diagnostic Précoce du Glaucome Simple (à Angle Ouvert) En Pratique Quotidienne Avec 32 Observations Cliniques”. He published the papers “ES 4 Blessures Oculaires”, by Rivals Mazeres A., Singer B., and Deschatres F., Vie Med., France, 1972, Vol. 53, No. 19, pp. 2429–2439; and “Le Laser et l’Œil du Sujet Âgé” (The Laser and the Eye of the Elderly Subject), by De Rivals-Mazeres, A., La Revue de Gériatrie, 1984, Vol. 9, No. 8, pp. 406–411, ISSN 0397-7927.

Alain also served as a flight doctor in Algeria aboard the Aérospatiale Alouette II, treating wounded French soldiers and Algerian civilians. He was personal physician to President Georges Pompidou for one year. He was an avid science fiction reader and believed in Lamarckian evolution and the theories of Pierre Teilhard de Chardin. Through marriage, the de Rivals family is related to Henri de Toulouse-Lautrec and to Lyska Kostio de Warkoffska, a Russian-French fashion designer. Another relative through marriage is Alain Marie Guynot de Boismenu (27 December 1870 – 5 November 1953). De Boismenu was a French Roman Catholic prelate who served as the Vicar Apostolic of Papua from 1908 until his retirement in 1945; he was a professed member of the Missionaries of the Sacred Heart and the founder of the Handmaids of the Lord. He studied under the De La Salle Brothers before beginning his religious formation in Belgium, where he did his studies for the priesthood. He served for a brief period as a teacher before being sent in 1897 to Papua New Guinea to aid in the missions there; he also served the ailing apostolic vicar and was soon after made his coadjutor with the right of succession. His stewardship of the apostolic vicariate saw the number of missions and catechists increase, and his tenure also saw the establishment of new schools and a training center for catechists. The de Rivals family is prominent in French aviation to this day: Géraud de Rivals-Mazères is regional flight safety director for ATR (Avions de transport régional) and worked for Airbus as a flight operations analyst. Elie de Rivals-Mazères was, from 1992 to 1999, a fighter pilot in the Escadron de Chasse 1/3 Navarre, a fighter squadron in Nancy flying the Dassault Mirage III and Dassault Mirage 2000N. Elie served as deputy commander of the Base aérienne 709 Cognac-Châteaubernard from 2010 to 2017 and is now an aviation claim surveyor at ERGET Group. The paper “Dual Mode Inverse Control and Stabilization of Flexible Space Robots”, authored by Geraud de Rivals-Mazeres and published in the journal of the American Institute of Aeronautics and Astronautics, describes what is called the “nonlinear inversion technique”. A book on computer science, titled “Propos informatiques pour non informaticiens” (“Remarks on Computing for Non-Computer-Scientists”), was written by François de Rivals Mazeres, 1973, 99 pages, publisher: Presses du temps présent. A violin was made by Gand & Bernardel Frères, Paris, for Victor de Rivals in 1870.

http://www.isabellesviolins.com/gandbernardel/?photo=Back

“According to the Gand & Bernardel archives, violin number 535 was “tabled” on May 6, 1870 and reserved for Mr. de Rivals, who bought it for 240 francs. Victor de Rivals was a violinist at the Société des Concerts du Conservatoire in Paris (one of the first professional symphony orchestras). He had played in its first-violin section from the founding year, 1828, to his retirement in 1864. He also had been a client of the famed Paris violin shop since 1828, when it belonged to Charles-François Gand, the father. It seems he finally decided, after retiring, to commission a violin from the Gand family. The label states, “Luthiers de la Musique de l’Empereur.” The Emperor’s Music was Napoleon III’s private orchestra, which got dismantled in 1871 after the fall of the Second Empire.
Could our Victor de Rivals be the Mr. de Rivals-Mazères whose name is attached to the Stradivari violin “Tua, Marie Soldat, Rivals-Mazères de Toulouse” of 1708? The Cozio Archive tells us this Stradivari belonged to Mr. de Rivals Mazere in 1880. The Gand & Bernardel books tell us this same Stradivari was sold by Mr. de Rivals-Mazères in 1885. It is tempting to speculate that after Victor de Rivals passed away, his heirs from Toulouse — who used the full name “de Rivals–Mazères” — simply sold his violins. In fact, on February 28, 1882 the Gand & Bernardel house sold our violin #535 to Mr. Gross at Le Havre. And on December 29, 1885 it sold the Stradivari to Teresina Tua. (By the way, she paid 8,000 francs for it, whereas the Rivals family only received 5,000.)”


Christophe Pochari’s brother, Sebastien Pochari, is a commercial helicopter pilot currently flying Bell 206L3s in New York. He has a YouTube channel called “TheHelicopterPerspective”. Christophe’s uncle on his grandmother’s side is the famous Danish ceramic artist Morten Løbner Espersen, and one of his cousins is rapidly becoming a famous musician in Denmark; his band is called “FRAADs”. Another relative on his mother’s Danish side of the family works as the office manager at the National Emergency Management Agency; she attended the Danish Pharmaceutical University. Christophe’s paternal great-great-grandfather designed the logo for Del Monte Foods.


Alain Christian Victor de Rivals-Mazères, physician and artist, born in the 16th arrondissement of Paris in 1934, died in 2019.


Doctor de Rivals-Mazères’s ophthalmology private practice in Hyeres Les Palmiers.


Thomas Pochari Jr on Al-Jazeera in 2001


Brigadier General Thomas Richard Pochari Sr.

Lieutenant General Guillaume de Rivals-Mazères pictured on the right above and the left below.



The Dewoitine D.520, the plane that Guillaume flew.


The plane that Guillaume piloted crashed in North Africa after sustaining enemy fire. Translated from a French account: “On July 7, 1941, Captain RIVALS-MAZÈRES did indeed have to put down in the desert with a mechanical failure, without damage, and walk 30 km to find help; but his aircraft was No. 302, coded ‘30’, without the ace’s stripe, and it was later recovered by the Free French (‘France Libre’) and used by the F.A.F.L.”


Some of the published writings and evolutionary-political theory of Christophe’s father.


Christophe’s father’s unpublished thesis on the Cold War


The de Rivals-Mazères coat of arms. This title was not purchased; it was earned through superior wisdom, bravery, and character in 13th-century Frankish society.


Château de Fiac, where the de Rivals family once lived.


The Boisleux-au-Mont castle, destroyed in WW1


Christophe’s maternal great-grandfather, third from right: Ejnar Jacob Madsen, a lieutenant in the Danish Army, photographer, and rail station chief. He married Dagny Sophie Jorgensen, whose family owned a dairy processing facility in the town of Nordrup outside of Copenhagen; they were the only ones in the town to own a car, a Ford Model T, as well as a telephone.


Lise Løbner-Madsen, Christophe’s maternal grandmother


Diane Patricia Thorne, Christophe’s paternal grandmother


Arrhenius’s Demon: The Chimera of the Greenhouse effect

Introduction

Note: The radiative heat transfer equation based on Stefan-Boltzmann 4th power law is erroneous and cannot be relied upon. The only way to possibly measure radiative heat transfer is by measuring the intensity of infrared radiation with electrically sensitive instruments.

The Ultraviolet Catastrophe Illustrated.

The Stefan-Boltzmann law states that radiation intensity scales with the 4th power of temperature, drastically overestimating radiation at high temperatures. If we heat a one-cubic-meter cube to 2000 °C, we radiate 1483 kW/m2; since a one-meter cube has six square meters of surface area, we would be radiating 8890 kW, or nearly 9 megawatts of power! Clearly, this is impossible, because it would mean heating and melting metal would be physically impossible, since the metal would cool through radiation faster than it could be heated! To heat 7,860 kg of steel to 2000 °C in one hour, we need to impart 2,032 kWh of thermal energy, roughly 2 MW of heating power, far less than the nearly 9 MW the law says the block would radiate away. The Stefan-Boltzmann law is wrong and must be modified. Rather than quantizing radiation as Planck did, we can simply assign it a non-linear exponent, where a rise in temperature is accompanied by a reduction in the sharpness of the slope. It therefore appears as if the entire greenhouse effect fallacy is caused not only by the confusion over power and energy and its amplifiability, but also by the incorrect mathematical formulation of radiative heat transfer. If the Stefan-Boltzmann law based on the 4th power exponent were true, hot bodies would cool within seconds and nothing could be heated; lava would solidify immediately, and smelting iron, melting glass, or any other high-temperature process would become impossible!
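For reference, here is the paragraph's arithmetic written out with the textbook Stefan-Boltzmann expression in a short Python sketch; the steel specific heat of ~470 J/(kg·K) is an assumed value, and the remaining inputs are the numbers used above:

```python
# The arithmetic of the paragraph above, using the textbook Stefan-Boltzmann law.
SIGMA = 5.67e-8                  # W/(m^2 K^4)
T_hot = 2000 + 273.15            # 2000 C expressed in kelvin

flux = SIGMA * T_hot ** 4        # ~1.5 MW/m^2 (the text quotes 1483 kW/m^2)
area = 6.0                       # m^2, surface of a one-meter cube
radiated_power = flux * area     # ~9 MW

mass_steel = 7860                # kg, one cubic meter of steel
c_steel    = 470                 # J/(kg K), assumed specific heat
heat_energy = mass_steel * c_steel * (2000 - 20)   # J, heating from 20 C to 2000 C
heating_power = heat_energy / 3600                 # W, if delivered over one hour

print(f"Radiated flux:          {flux / 1e3:.0f} kW/m^2")
print(f"Total radiated power:   {radiated_power / 1e6:.1f} MW")
print(f"Heating energy:         {heat_energy / 3.6e6:.0f} kWh")
print(f"Heating power over 1 h: {heating_power / 1e6:.1f} MW")
```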

In August of 2021, I became suspicious that perhaps the entire greenhouse effect was suspect and decided to see if anyone had managed to refute it. I searched the term “greenhouse effect falsified” and found a number of interesting results in Google Scholar. At the time, I had a difficult time believing that each and every expert, Ph.D., academic, etc., could be so wrong. I kept thinking in the back of my mind, “this cannot be, the whole thing is a fraud?” But then, upon reading the fascinating articles and blog posts put together by the slayers, I immediately identified the origin of the century-long confusion: the conflation of energy and power. A number of individuals in the 21st century have called the greenhouse effect theory into question. The first serious effort to refute the greenhouse effect is the now quite famous “G&T” paper by Gerhard Gerlich and Ralf D. Tscheuschner. Although it is not known who was the first to refute the greenhouse effect, I have found no articles or papers in the Google Books archive from the entire 20th century, except for some arguments made by the quite kooky psychoanalyst Immanuel Velikovsky. In fact, I cannot find evidence that anyone ever seriously questioned (serious defined by scientific papers or articles published) Arrhenius, Tyndall, or Poynting during the 19th and early 20th centuries. This is likely because atmospheric science remained largely obscure and occupied little time in the minds of natural philosophers, physicists, and what we now call “scientists”. It appears that it took the increased discussion of the greenhouse effect during the global warming scare driven by Al Gore’s propaganda to get people to finally scrutinize it. With the introduction of the internet and the growth of the “blogosphere”, individuals could contribute outside of the scientific guild. Those who “deny” the greenhouse effect go by the term “slayers”. They acquired the name “slayers” after the title of the first book refuting the greenhouse effect: “Slaying the Sky Dragon: Death of the Greenhouse Gas Theory”, by John O’Sullivan. So far, I have found only the following publications challenging the fundamental assumptions of the greenhouse effect: Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics, by Gerhard Gerlich; The Greenhouse Effect as a Function of Atmospheric Mass, by Hans Jelbring; There is no Radiative Greenhouse Effect, by Joseph Postma; No “Greenhouse Effect” is Possible from the way the Intergovernmental Panel on Climate Change Defines it, by John Elliston; Refutation of the “Greenhouse Effect” Theory on a Thermodynamic and Hydrostatic basis, by Alberto Miatello; The Adiabatic Theory of Greenhouse Effect, by O. G. Sorokhtin; Comprehensive Refutation of the Radiative Forcing Greenhouse Hypothesis, by Douglas Cotton; Thermal Enhancement on Planetary Bodies and the Relevance of the Molar Mass Version of the Ideal Gas Law to the Null Hypothesis of Climate Change, by Robert Ian Holmes; and On the Average Temperature of Airless Spherical Bodies and the Magnitude of Earth’s Atmospheric Thermal Effect, by Ned Nikolov. In addition to these publications, the blog “tallbloke” run by Roger Tattersall has provided invaluable data on the gravito-thermal effect, most of which is thanks to the work of Roderich Graeff. It is unlikely that, without the efforts of Roderich Graeff, anyone would have noticed the obscure gravito-thermal effect.
In the Springer book Economics of the International Coal Trade: Why Coal Continues to Power the World, by Lars Schernikau, the author briefly mentions the gravito-thermal effect and the possibility that the entire greenhouse effect is faulty.

This article is a synthesis of the largely informal and cluttered online literature on “alternative climate science”, with a special emphasis on the gravito-thermal effect. The word “alternative” is regrettable, since it implies this is just another “fringe” theory competing against a widely established and well-founded mainstream. Due to a lack of clarity in the current state of climate science, I felt it would be useful to summarize the competing theories. One could divide the “alternative climate” theorists into three broad camps. Out of all the “slayers”, the best one by far is Claes Johnson, with his fascinating resonator interpretation of radiative heat transfer.

#1: Radiative GHE refutation based on the 2nd law only; this includes Gerlich & Tscheuschner, Klaus Ermecke, and the GHE slayer book authors.

#2: Gravito-thermal models. This includes Sorokhtin, Chilingar, Cotton, Nikolov and Zeller, and Huffman.

#3: “Sun only” theories. Postma is the only one I know of who has propounded a climate theory based purely on heating by the sun.

The first “school” focuses mainly on the deficits within the existing radiative greenhouse mechanism, and while this is important, it misses other important aspects and provides no alternative explanation. Since we are attempting to “overthrow” the dogma that the earth amplifies solar energy by slowing down cooling, once we have completely ruled out this mechanism, we can either say the earth is warmed solely by the sun or that some other previously ignored mechanism warms it above and beyond what the sun can provide. We argue that the only parsimonious mechanism allowed by our current laws of physics is a gravito-thermal mechanism. Although “sun-only” models have been proposed, they are shown to be erroneous. A great deal of work needs to be done to finally build a real science of climate, and it will take generations, since all the textbooks have to be rewritten. Millions of scientific papers, thousands of textbooks, and virtually every popular media article need to be updated so that future generations do not keep being miseducated. Most engineers working in the energy sector are also gravely misinformed. This is especially important because many politicians and engineers are incorrectly using non-baseload energy sources (wind and photovoltaic, otherwise useful technologies) to decarbonize, as opposed to using them to supplement and hedge against uncertain future hydrocarbon supplies.

Does the greenhouse effect’s falsity suggest parallels in other scientific domains? It is an indictment of modern science that the backbone of climatology, the science that deals with the climate of our very earth, is a vacuous mess.

What other areas of science could be predicated entirely on a completely erroneous foundation? Excluding theoretical physics, which is a den of mysticism, we should turn to more practical and real-world theories, those that try to explain observable, measurable phenomena. Which other mainstream postulates or theories could be suspect?

It does seem as if the greenhouse effect was somewhat unique, since it was one of the few physical theories that, while untested and speculative, fulfilled some mental desire and, due to its relative insignificance prior to the 21st century, did not garner the attention needed for a swift refutation. Few other theories that are so deeply ingrained in society could have perpetuated for so long on a false foundation, because most axioms of modern science are empirical, simply updated versions of the 19th-century Victorian methods of rigor and confirmation. The greenhouse effect is truly the outlier: something that caught the attention of one of the weaker fields within science, climate, but never the attention of the engineer, the actual thermodynamicist, or the physicist who built real, useful machines. As John O’Sullivan said, the greenhouse effect was never something observed by actual “applied scientists” who worked with CO2, industrial heaters, heat transfer fluids, cooling systems, insulation, etc. It is implausible that the marvelous “insulating” properties of this wonder gas would not have been noticed by experimentalists in over a century. As we have mentioned before, if one searches terms like “greenhouse effect wrong, false, refuted, erroneous, impossible, violates thermodynamics, etc.”, no scientific papers, journal articles, or discussions are retrieved in the Google Books archive, suggesting that this theory received little attention. Wood’s experiment does not count, because all he showed is that a real greenhouse does not work via infrared trapping; he says nothing of the atmosphere, or of the claim that the entire thing violates the conservation of energy by magically doubling energy flux. The only record I could find is one mention by Velikovsky, claiming that the greenhouse effect violated the 2nd law of thermodynamics.

“I have previously raised objections to the greenhouse theory though most have been rejected for publication. But recently even the greenhouse advocates have begun to note certain problems. Suomi et. al. [in the Journal of Geophysical Research, Vol. 85 (1980), pp. 8200-8213] notes that most of the visible radiation is absorbed in the upper atmosphere of Venus so that the heat source [the cloud cover] is at a low temperature while the heat sink [the surface] is at a high temperature, in apparent violation of the second law of thermodynamics.”

Carl Sagan and Immanuel Velikovsky, By Charles Ginenthal

“Later efforts by astronomers to account for the high temperatures by means of a “runaway greenhouse effect” were denounced by Velikovsky as clumsy groping – “completely unsupportable” he called it in 1974, adding that such an idea was “in violation of the Second Law of Thermodynamics”

How Good Were Velikovsky’s Space and Planetary Science Predictions, Really? by James E. Oberg

The greenhouse effect is just another “superseded” theory in the history of science. Wikipedia, despite being edited by spiteful leftists, is more than willing to acknowledge the long list of superseded theories, but somehow they think this process magically stopped in the 21st century! The greenhouse gas theory will join the resting place of a very long list of now specious theories, although, at the time, they were perfectly reasonable and even rational. We must be careful to avoid a “present bias”. The list of disproven theories, while not by any means expansive, includes phlogiston theory, caloric theory, geo-centrism (Ptolemaic earth), tectonic stasis (pre-Wegener geology), Perpetuum mobile, Newton’s corpuscular light theory, Lamarckism, or Haeckel’s recapitulation theory, just to name a few. Unsurprisingly, Wikipedia also lists “scientific racism” as a “superseded” theory, even though ample evidence exists for fixed racial differences in intelligence and life history speed.
We cannot accuse its mistaken founders of fraud, but we can blame the veritable army of the global warming industrial complex for systematic fraud, deception, and duplicity. Arrhenius, the god of global warming, wanted to believe that burning coal could avert another ice age and make the climate more palatable for human settlement. Those who have used the greenhouse gas theory as an excuse to “decarbonize” civilization, can indeed be accused of fraud, because they have willingly suppressed counter-evidence by censoring, firing, or rejecting challenging information, and they have knowingly falsified historical temperature data. The conclusion is that catastrophic anthropogenic global warming (CAGW) is the single largest fraud in world history, simply unparalleled in scale, scope, and magnitude by any other event. We do not know how global warming has grown to be such a monster, but one explanation is that it has been used as a political machination to spread a new form of “Bolshevism” to destroy the West.

I have decided to call the greenhouse effect “Arrhenius’s Demon” after “Maxwell’s demon”, a fictitious being that sorts gas molecules according to their velocity to generate a thermal gradient from an equilibrium.

Atmospheric climate demystified and the universality of the Gravito-Thermal effect

Artist’s conception of the brown dwarf 2MASSJ22282889-431026.

A “Brown Dwarf”, a perfect example of the gravito-thermal effect in action.

The confusion over the cause of earth’s temperature is in large part due to the historical omission of atmospheric pressure as a source of continuous heat. Gases possess high electrostatic repulsion, which is why they are gases to begin with. The atoms of elements that exist as solids under normal conditions strongly adhere to each other, forming crystals, but gases can only exist as solids at extremely low temperatures or extremely high pressures, in the GPa range. Many have erroneously argued that because the oceans and solids do not display a visible gravito-thermal effect, the gases in the atmosphere somehow cannot either. This is explained by the fact that liquids and solids are barely compressible, so they generate little to no heating when confined. Gas molecules possess extremely high mean velocities; a gas molecule in thermal equilibrium at ambient temperature and pressure moves at roughly 500 m/s. As the molecular density increases, the mean free path decreases and the frequency of collisions increases, since the packing density has increased, generating more heat. But since atmospheres are free to expand if they become denser, a given increase in pressure does not produce a proportional rise in temperature, since the height of the atmosphere will grow. Unsurprisingly, fusion in stars begins when gaseous molecular clouds accrete and auto-compress under their own mass.
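
As a quick sanity check on the quoted molecular speed, the mean speed from kinetic theory can be computed directly; the sketch below assumes dry air with a mean molar mass of about 0.029 kg/mol at 288 K, which are my illustrative values rather than figures from the text.

```python
# Mean molecular speed of air from kinetic theory: v_mean = sqrt(8RT / (pi*M)).
# Assumed values (not from the text): dry air, M = 0.028964 kg/mol, T = 288.15 K.
from math import pi, sqrt

R = 8.3145            # J/(mol K)
M = 0.028964          # kg/mol, mean molar mass of dry air
T = 288.15            # K, roughly the mean surface temperature

v_mean = sqrt(8 * R * T / (pi * M))
print(f"{v_mean:.0f} m/s")   # ~459 m/s, i.e. of the order of the ~500 m/s quoted above
```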

There is nothing mysterious about the gravito-thermal effect, yet for some reason it has been clouded in mystery, poorly elucidated, and virtually ignored by most physics texts. The gravito-thermal effect is what we see happening in the stars that shine all around us. People have somehow forgotten to ask where the energy comes from to power these gigantic nuclear reactors. All the energy from fusion ultimately derives from gravity, because nuclei do not fuse on their own! We know that gas centrifuges used for enriching uranium develop a substantial thermal gradient.

Modern climate science is one of the great frauds perpetrated in the 20th century, along with relativity theory, confined fusion, and artificial intelligence. 

Brief summary of the status of “dissident climate science”, or more appropriately named: “real climate science”

Most “climate denial” involves a disagreement over the degree of warming that is posited to occur from emissions of “greenhouse gases”, not whether “greenhouse gases” are even capable of imparting additional heat to the earth. The entire premise of the debate is predicated on the veracity of the greenhouse effect, so most of these debates between climate skeptics and climate alarmists, for example between a “skeptic” like William Happer and an alarmist like Raymond Pierrehumbert, rest on a vacuous foundation, and the entire debate is erroneous and meaningless. We have found ourselves in a situation where an entire generation of physicists believes in an entirely non-existent phenomenon. While we have mentioned that there exist a number of “greenhouse slayers”, they have very little visibility and there has been no major public debate between them and the alarmists. In fact, most have never heard of the slayers, even within the relatively large “climate denial” community. Jo Nova is typical of modern AGW skeptics in that she ardently defends the greenhouse chimera and argues entirely on the terms of the alarmist dogma, quibbling only over magnitude. Other skeptics who nonetheless champion the greenhouse effect are Anthony Watts and Roy Spencer. Anthony Watts is just a weatherman with a weak grasp of physics and thermodynamics, but Roy Spencer considers himself well-versed in these areas. Willis Eschenbach is perhaps the most glaring case study of a deluded skeptic. He went out of his way on Anthony Watts’s blog to defend Arrhenius’s Demon. In attempting to show just how brilliant the IPCC was, he created a hypothetical “steel greenhouse” in which the earth is wrapped in a thin metal layer that reflects all the outgoing radiation while absorbing all incoming radiation. Below is an illustration of Eschenbach’s “steel greenhouse”. Apparently he, Watts, and virtually every “climate scientist” believe it is possible to simply double the incoming radiation by nothing more than reflecting it. It has evidently not dawned on them that no lens, mirror, reflector, radiant barrier, or surface in existence has ever been shown to increase the power density of a radiative flux, whether it is UV, infrared, or gamma rays.

[Illustration: Eschenbach’s “steel greenhouse”.]

#1: There is no greenhouse effect, as it violates the conservation of energy. The theory originated from the confusion that energy flux or power could be amplified by “slowing down cooling”. The grave error made was believing that slowing down heat rejection could raise the steady-state temperature of a continuously radiated body without the addition of work. Earth’s temperature is a full 15°C warmer than the roughly -1.1°C that solar radiation alone can support.

#2: The gravito-thermal effect, coined by Roderich Graeff, provides the preponderance of the above-zero temperature on earth. The gravito-thermal effect is simply the gravitational confinement of gas molecules, which produces kinetic energy and releases heat through collisions between gas molecules. The gravito-thermal effect can predict the atmospheric lapse rate and surface temperature with nearly 100% accuracy using the ideal gas law, for both Earth and Venus. The “adiabatic lapse rate” is not some artificially generated number derived from the ideal gas law: static air temperature gauges on cruising airliners measure a temperature almost identical to that predicted by the ideal gas law. In fact, current theory cannot even explain the cause of the lapse rate; various nebulous concepts such as convective cooling or “radiative height” are proposed, but none of these explanations can be correct if we can predict the lapse rate perfectly with the ideal gas law. The original atmosphere-driven climate theory proposed by Oleg Georgievich Sorokhtin, later articulated in the West by independent researcher Douglas Cotton, is the only veridical mechanism and the only known solution compatible with current physical laws that can account for the temperature of the earth and other planetary bodies. The gravito-thermal effect produces 72.46 W/m², while the sun produces 303 W/m². The sun therefore accounts for roughly 81% of the earth’s thermal budget, while the atmosphere accounts for the remaining 19%.
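
A minimal check of the flux bookkeeping in points #1 and #2, using the Stefan-Boltzmann law and the emissivity of 0.975 quoted later in this article; with these inputs the split between the two contributions works out to roughly 81%/19%.

```python
# Stefan-Boltzmann check of the figures in points #1 and #2, assuming the
# emissivity of 0.975 used later in the text.
SIGMA = 5.670e-8   # W/(m^2 K^4)
EPS = 0.975

def temp_c(flux_w_m2, emissivity=EPS):
    """Equilibrium temperature (degC) of a surface radiating the given flux."""
    return (flux_w_m2 / (emissivity * SIGMA)) ** 0.25 - 273.15

solar, gravito = 303.0, 72.46               # W/m^2, the two contributions quoted above
print(temp_c(solar))                        # ~ -1.0 degC, the sun-only figure in #1
print(temp_c(solar + gravito))              # ~ 13.9 degC, the observed mean temperature
print(100 * solar / (solar + gravito))      # ~ 80.7 % of the budget from the sun
```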

#3: The moon’s temperature is likely much higher than currently assumed, with solar radiation predicting a mean surface temperature of between 10 and 12°C depending on the exact emissivity value. Current estimates place the mean lunar temperature at between minus 24 and minus 30°C, but this would mean the moon only receives 194 W/m² assuming an emissivity of 0.98, requiring it to have an albedo of roughly 0.47. It is preposterous that the moon could have such a high albedo, so either the current temperature estimates produced by probes are way off, or the moon has a much higher reflectivity than assumed, failing to absorb perhaps the more energetic (UV, UV-C, visible) portion of the sun’s spectrum. The moon can be seen to be very reflective from earth, glowing a bright yellowish color, which may be because it reflects more energy. Either way, the probes are far off or the moon reflects more energy, because no stellar body can absorb more or less radiation than its spherical “unwrapped” surface area allows, as this would violate the conservation of energy. The only possible solution to this problem is that when radiation hits a body at a shallower angle of incidence (as at the poles), more of it is reflected for a given emissivity value, resulting in a lower-than-theoretical absorbed power density. This is not something that has been mentioned before as a solution to some of these temperature paradoxes.
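
Working the lunar numbers backwards with the Stefan-Boltzmann law shows where the implausible albedo comes from; this is my sketch, using the figures quoted in the text (a -30°C mean, emissivity 0.98, and the ~360 W/m² spherical-average insolation cited further below).

```python
# If the lunar mean really were -30 degC at emissivity 0.98, how much power would
# the surface radiate, and what albedo would that imply against ~360 W/m^2 of
# spherical-average insolation? (Inputs taken from the text; the pairing is mine.)
SIGMA = 5.670e-8

T_assumed = 273.15 - 30.0                     # K, the commonly cited lunar mean
emitted = 0.98 * SIGMA * T_assumed ** 4       # ~194 W/m^2, matching the text
implied_albedo = 1 - emitted / 360.0          # ~0.46-0.47, implausibly high for regolith
print(f"{emitted:.0f} W/m^2, implied albedo {implied_albedo:.2f}")
```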

#4: The present concept of an albedo as high as 0.44 is entirely erroneous and serves only to underestimate the heating power of the sun. The earth receives at least 300 W/m² from the sun, because the gravito-thermal effect only generates around 75 W/m², yet the earth must radiate close to or exactly 375 W/m², depending on the exact absorptivity value, since our thermometers do not lie: the earth is at 13.9°C, and there is no arguing with that number. The albedo has been deliberately overestimated by excluding the 55% of the solar spectrum that lies in the infrared, in order to show that a “greenhouse effect” is absolutely required to generate a warm climate.

#5: Using the ideal gas law, the temperature estimates of the Mesozoic can be explained by a denser atmosphere. In fact, since solar radiation should not have been much more intense, the ideal gas law can be used to predict with near-perfect accuracy the density of the Mesozoic atmosphere by simply using the isotope records. The Paleocene–Eocene Thermal Maximum may have featured temperatures as much as 13°C hotter than today, around 28°C, as recently as 50 Myr ago. In order to arrive at the required pressure and density, we can simply construct a continuum from the sea-level pressure and temperature. To do this, we must establish the hydrostatic pressure gradient. A linear hydrostatic gradient is only valid for incompressible solids; compressible columns “densify” with depth. I have performed this calculation up to a temperature of 25.2°C. Because the calculation is performed manually, it is very time consuming; I plan on continuing to a temperature of 30°C, equivalent to Mesozoic temperatures. From the chart below you can see that an increase in atmospheric density of only 15.07% generates an additional 10.2°C of surface temperature (a minimal numerical sketch of this layer-by-layer procedure follows the chart). Robert Dudley argues the oxygen concentration of the late Paleozoic atmosphere may have risen as high as 35%; assuming nitrogen levels are largely fixed since nitrogen is unreactive, this would have resulted in an atmosphere with a density 12.6% higher, but the actual number is likely much higher, since the high temperatures of the Phanerozoic necessitate a denser atmosphere. The origin of atmospheric nitrogen is quite mysterious: nitrogen is sparse in the crust and does not form compounds easily, and the only abundant nitrogenous compounds are ammonium ions, which have been bound to silicates and liberated during subduction and volcanic activity. The temperature lapse rate with altitude is a constant value, since gas molecules evenly segregate according to the local force that confines them together. But the relationships between pressure, density, and temperature are not linear and can only be arrived at by performing an individual calculation for each hypothetical gas layer, generating a mean density for the layer above it to predict the amount of compression. With the amount of compression per layer established, it is then possible to use this pressure value to arrive at the density. The calculation is very simple: use a constant thermal gradient of 0.006°C/m and average the density of each increment of gas layer. The ideal gas law cannot predict pressure and density from temperature alone; you cannot just “solve” for density and pressure with temperature as the only known variable, you must establish pressure as well, and this can only be done by knowing the mass above the gas. I have not found a single exponent that arrives at this number; the calculation has to be performed individually for each discrete layer.

[Chart: calculated surface temperature versus atmospheric density from the layer-by-layer gravito-thermal calculation.]
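
A minimal sketch of the layer-by-layer procedure described above, assuming a sea-level start of 101.325 kPa and 15°C, dry air, g = 9.81 m/s², the stated 0.006°C/m gradient, and 1 m layers (the starting values and step size are mine, for illustration). With these inputs the density increase for +10.2°C comes out around 17-18%, in the same ballpark as the 15.07% read off the chart; the difference presumably reflects the step size and starting values used in the manual worksheet.

```python
# Layer-by-layer hydrostatic "densification" for a hypothetically deeper atmosphere.
# Assumptions (mine, for illustration): 101.325 kPa and 288.15 K at the reference
# surface, dry air (M = 0.028964 kg/mol), g = 9.81 m/s^2, 0.006 K/m gradient, 1 m layers.
R, M, g = 8.3145, 0.028964, 9.81
LAPSE, dz = 0.006, 1.0                 # K per metre of descent, layer thickness in m

P, T = 101325.0, 288.15                # Pa, K at the reference surface
rho0 = P * M / (R * T)                 # reference density, ~1.225 kg/m^3

while T < 288.15 + 10.2:               # descend until the surface is 10.2 degC warmer
    rho = P * M / (R * T)              # density of the current layer
    P += rho * g * dz                  # hydrostatic pressure gained over one layer
    T += LAPSE * dz                    # constant thermal gradient

rho = P * M / (R * T)
print(f"{P/1000:.1f} kPa, density +{100 * (rho / rho0 - 1):.1f} %")
```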

If we hypothetically dug out an entire cavern in the earth a few kilometers deep, the atmosphere would not increase in density, because it would simply “fall down” and settle at a lower altitude; the surface pressure wouldn’t change. Conversely, by adding mass, the denser atmosphere reaches a greater altitude and extends further into space. Current atmospheric losses to space are about 90 tons annually, a fractional loss of only about 8.7e-7 (less than 0.0001%) over 50 million years. Clearly, some form of mineralization or solidification transpired in which gaseous oxygen ended up bound into solids. Certain chemical processes removing the highly reactive oxygen and forming solids must have occurred, starting during the Mesozoic. An alternative scenario is that gigantic chunks of the atmosphere were ripped away during the roughly 450,000-year geomagnetic reversal intervals, when the earth is most vulnerable to solar energetic particles. Geomagnetic reversals are thought to leave the earth with a much weaker temporary magnetic field, which could generate Mars-like erosion of the atmosphere. The last reversal, the “Brunhes–Matuyama reversal”, was 780,000 years ago. The duration of a geomagnetic reversal is thought to be around 7,000 years. For a polarity reversal to occur, a reduction in the field’s strength of 90% is required. Estimates place the number of geomagnetic reversals at a minimum of 183 over the past 83 Myr. Biomass generally contains 30-40% oxygen, and since bound oxygen does not appear to be released back into the atmosphere during decomposition into peat and other fossil materials, it is conceivable that much of the paleo-atmosphere’s mass is bound up in oxidized organic matter buried in the crust as sedimentary rock, with only a tiny fraction reduced into hydrocarbons. Organic matter is thus an “oxygen sink”.

#6: Short-term climate trends can only be explained by solar variation, since atmospheric pressure only changes over very long periods of time due to the mineralization of oxygen. A change in solar irradiance of only ±3 W/m² can produce a temperature change of about 0.7°C. A 10 W/m² difference in solar irradiance drops the surface temperature by 2.3°C, enough to cause a mild glaciation. But there is no evidence that fluctuations in the magnetic activity of the photosphere alone can produce such changes, requiring an intermediate mechanism, namely cosmic ray spallation of aerosols.
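
The intermediate step behind these figures is not shown in the text; one way to reproduce numbers of this order, which I assume here, is to perturb the solar-only equilibrium of point #2 (about 272 K at 303 W/m²) using the Stefan-Boltzmann relation dT/T = dS/(4S).

```python
# Sensitivity of the solar-only equilibrium temperature to a change in irradiance,
# dT = T * dS / (4 * S). Uses ~272 K and 303 W/m^2 from point #2; this is my assumed
# reading of how the 0.7 degC and 2.3 degC figures arise.
S_solar = 303.0      # W/m^2
T_solar = 272.1      # K, equilibrium temperature for 303 W/m^2 at emissivity 0.975

for dS in (3.0, 10.0):
    dT = T_solar * dS / (4 * S_solar)
    print(f"{dS:4.0f} W/m^2 -> {dT:.2f} K")   # ~0.67 K and ~2.25 K
```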

#7: Joseph Postma’s theory of dividing solar radiation by two is valid only geometrically, but it does not change temperature, because geometry, tilt, or rotation speed does not affect the total delivered insolation or power density. The real “flat earth” theory is the removal of infrared and the fake “albedo” of 0.44. Postma attempted to increase the available power density of the sun by averaging it over a smaller area, but this cannot increase temperature, since there is still the other half of the sphere radiating freely into space. There is simply no way to employ a “sun-only” model of climate; such a model is utterly ridiculous.

#8: The gravito-thermal effect, as predicted by Roderich Graeff, is indeed a source of infinite work, but it does not violate the 2nd law, since the work is derived from the continuous exertion of gravitational acceleration. This is something Maxwell and Boltzmann were wrong about. Gravitational acceleration on earth, which is quite strong at 9.8 m/s², provides an infinite source of work to generate heat, just as brown dwarfs glow red due to gravitational compression and molecular clouds collapse to form nuclear-burning cores. Brown dwarfs typically have surface temperatures on the order of 730°C.

#9: Venus would have a temperature of around 40°C without its dense 91 bar atmosphere, but Venus’s true temperature is likely closer to the 480°C predicted by the ideal gas law, although the super-critical, quasi-liquid nature of the Venusian atmosphere may somewhat compromise its accuracy at low altitudes. Denser atmospheres extend further into space, that is, they are “taller”, but they should not have a significantly different thermal gradient or “lapse rate”.

We can now finally answer: does CO2 cool or warm the earth? Strictly speaking, radiatively, it can do neither, because it is utterly incapable of changing the energy flux. Some may argue that because releasing CO2 adds to the partial pressure of the atmosphere, it increases the density of the atmosphere and could produce a tiny amount of warming. It turns out that because hydrocarbons contain a substantial amount of hydrogen, and hydrogen forms water when combusted, the net result of hydrocarbon combustion is a reduction in atmospheric pressure and hence temperature, although the magnitude of this effect is extremely small. How ironic that our three-century-long voracious appetite for carbon has cooled our climate by a few microkelvins.

By burning hydrocarbons, hydrogen converts atmospheric oxygen into liquid water, which is nearly a thousand times denser than air, so there is a net reduction in atmospheric mass. Refined liquid hydrocarbons contain 14% hydrogen on average, and combusting 1 kg of hydrogen requires 8 kg of oxygen. Per ton of hydrocarbon combusted, 1120 kg of oxygen is therefore converted to water. Most of this water condenses into liquid, so it results in a reduction of atmospheric mass. The 86% of the hydrocarbon that consists of carbon forms carbon dioxide and consumes 2.66 kg of oxygen per kg, so 2287 kg of oxygen is consumed, releasing 3.66 kg of CO2 per kg of carbon, or roughly 3153 kg. If we subtract the oxygen from the CO2, we are left with 866 kg of carbon, less than the 1120 kg of oxygen that has been converted to water, so we are left with a net atmospheric mass deficit of about 254 kg per ton of hydrocarbon burned. Therefore, the combustion of hydrocarbons reduces the density of the atmosphere, increases the amount of water on earth, and must therefore result in a net cooling effect, albeit an insignificant one.
The total estimated hydrocarbon burned since 1750 is 705 gigatons, or roughly 1.7907e+14 kg of net mass removed from an atmosphere of 5.1480e+18 kg, a reduction in atmospheric mass of about 0.0035% (a fraction of roughly 3.5e-5). Using the ideal gas law, the predicted cooling is -0.00014°C.
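
The per-ton bookkeeping and the cumulative total can be reproduced with exact stoichiometric ratios; the small differences from the figures above come from rounding (2.66 versus 32/12 ≈ 2.667 kg of oxygen per kg of carbon). The atmosphere mass and the 705 Gt total are taken from the text.

```python
# Atmospheric mass balance per tonne of "average" refined hydrocarbon
# (14 % hydrogen, 86 % carbon by mass), using exact stoichiometric ratios.
fuel = 1000.0                           # kg burned
h, c = 0.14 * fuel, 0.86 * fuel         # 140 kg H, 860 kg C

o2_for_h = 8.0 * h                      # 2H2 + O2 -> 2H2O:  8 kg O2 per kg H  (1120 kg)
o2_for_c = (32.0 / 12.0) * c            # C + O2 -> CO2:     ~2.667 kg O2 per kg C (~2293 kg)
co2_out = (44.0 / 12.0) * c             # ~3153 kg of CO2 returned to the air as gas
water_out = h + o2_for_h                # ~1260 kg of water, assumed to condense out

net = co2_out - (o2_for_h + o2_for_c)   # net change in atmospheric mass, ~ -260 kg/tonne
print(f"net atmospheric mass change: {net:.0f} kg per tonne burned")

# Cumulative effect of ~705 Gt burned since 1750 against a 5.148e18 kg atmosphere:
fraction = abs(net) * 705e9 / 5.148e18
print(f"fractional mass reduction: {fraction:.2e}")   # ~3.6e-5, a few thousandths of a percent
```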

The only possible way humans could warm the planet is by releasing massive amounts of oxygen from oxides to significantly raise the pressure of the atmosphere, but without available reducing agents this would be impossible. It can thus be concluded that, under the present knowledge of atmospheric physics, it is effectively impossible for technogenic activity to raise or lower temperatures. Short-term variations (the Maunder minimum, the Medieval Warm Period, etc.) are driven solely by sunspot activity caused by changes in the sun’s magnetic field. No other mechanism can be invoked that stands up to scrutiny.

The fallacious albedo of 0.44 and the missing infrared 

The albedo estimate of the earth is deliberately inflated to buttress the greenhouse effect. At least 55% of the sun’s energy is in the infrared regime, and virtually all of this energy would be absorbed by the surface, with very little of it reflected by the atmosphere.

The Moon’s temperature anomaly

The Moon receives a mean solar irradiance almost identical to the earth’s, about 360 watts per square meter averaged over the sphere. If the moon’s regolith is assumed to have an emissivity of 0.95, the mean surface temperature will be 12.76°C, which is far higher than the estimate by Nikolov and Zeller of 198-200 K (around -75°C). The Moon is either considerably more reflective than present estimates suggest, or it is much hotter; there can be no in-between unless we abandon the Stefan-Boltzmann law, which would make any planetary temperature prediction virtually impossible. The Moon should have virtually no “albedo” because it has effectively no atmosphere capable of reflecting any significant amount of radiation.
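
The forward calculation for the paragraph above is a one-line application of the Stefan-Boltzmann law, using the ~360 W/m² spherical-average insolation and the 0.95 regolith emissivity quoted in the text.

```python
# Mean lunar temperature if the full ~360 W/m^2 spherical-average insolation is
# absorbed by regolith with emissivity 0.95 (figures from the text).
SIGMA = 5.670e-8
T = (360.0 / (0.95 * SIGMA)) ** 0.25 - 273.15
print(f"{T:.2f} degC")   # ~12.8 degC, matching the figure quoted above
```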

The ideal gas law can be used to predict lapse rate and planetary temperatures with unparalleled accuracy.

The ideal gas law predicts with nearly 100% accuracy the atmospheric lapse rate and the temperature at any given altitude. The calculation was performed for a typical airline flight level, since there is extensive temperature data to confirm the results. The answer was minus 56°C, within a fraction of a degree of the measured temperature at that altitude. Therefore we can state with near certainty that the temperature of any gas body subject to a gravitational field will be solely determined by its density (molar concentration) and pressure, a function of the local gravity. The atmosphere is thus a gigantic frictional heat engine, continuously subjecting gas molecules to collisions and converting gravitational energy to heat, much as a star uses its core pressure, a product of its massive gravity, to fuse nuclei. Brown dwarfs are compressed by gravity to core pressures on the order of 100 billion bar, generating enough heat in the process that their outer surfaces glow red. The same principle is at work in a main-sequence star, a brown dwarf, or a low-pressure planetary atmosphere. The temperature of a gravitationally compressed gas volume should be determined by the frequency and intensity of the collisions. If this is correct, the kinetic theory of gases should predict the temperature of any body of gas on any planet with near-perfect accuracy, regardless of solar radiation. It is not the solar radiation that heats the gas molecules, but solely gravity. If a planet receives only a small amount of solar irradiance, then the layer of the atmosphere continuously exposed to the cold surface will be cooled, with some of its gravitational collision energy transferred to the cold surface, so the temperature of the gas will be below the equilibrium temperature predicted by the ideal gas law. This is precisely what we see on earth. Since a pressure of 101.325 kPa, with a molar density of 42.2938 mol/m³, yields 14.99144°C, but the mean surface temperature is only 13.9°C, the earth must receive at least 303 watts per square meter, assuming an emissivity of 0.975. This closely corresponds to an infrared-adjusted albedo of less than 20%. The earth must then be heated to around minus 1°C by solar radiation alone. For Mars, with an atmospheric pressure of 610 pascal and a density of around 20 grams/m³, the predicted atmospheric temperature is -110.11°C. Mars receives a spherical-average irradiance of 147.5 W/m², corresponding to -45.88°C, well above the commonly cited -63°C estimate, so, just as with the moon, the probes have likely underestimated the temperature.
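
The “T = P/(nR)” form of the ideal gas law used throughout this section can be checked against the three cases just discussed. The US Standard Atmosphere values at 11 km and the 44 g/mol molar mass for the Martian CO2 atmosphere are my assumptions; the text’s -110.11°C figure for Mars implies a slightly different assumed density or molar mass.

```python
# Temperature from pressure and molar density via the ideal gas law, T = P / (n R).
R = 8.3145   # J/(mol K)

def temp_c(pressure_pa, molar_density_mol_m3):
    return pressure_pa / (molar_density_mol_m3 * R) - 273.15

print(temp_c(101325.0, 42.2938))             # Earth surface: ~15.0 degC
print(temp_c(22632.0, 0.3639 / 0.028964))    # 11 km (US Standard Atmosphere): ~ -56.5 degC
print(temp_c(610.0, 0.020 / 0.044))          # Mars surface (20 g/m^3 of CO2): ~ -112 degC
```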

Nikolov and Zeller erroneously assumed the one-bar atmosphere could produce 90 K worth of heating, but there is insufficient kinetic energy at a pressure of 1 bar to produce this much heat. They are correct in rejecting the unphysical greenhouse effect, but they cannot count on a 1-bar atmosphere to produce 90 kelvin of heating. The ideal gas law predicts a temperature of almost exactly 15°C for a 1013 mbar atmosphere and around 440°C for Venus at 91 bar; it must be correct. Harry Dale Huffman calculated the temperature of Venus at 49 km, where its atmospheric pressure equals the earth’s (1013 mbar), and the temperature there is exactly 15°C! The molar mass of the molecules does not matter, only their concentration and the force pushing them together, which contributes to more violent and frequent collisions. Postma’s theory that we must treat the earth as a half-sphere only exposed to solar radiation is theoretically correct insofar as the sun never shines on the entire surface at once, but it doesn’t change the mean energy flux per unit area, which is what is required for a given temperature. The interval of solar exposure time does not change the mean energy flux. Temperature can only be changed by raising or lowering the energy delivered to the body. Since much of the sun’s energy is in the infrared spectrum, we can assume close to 83% of the sun’s energy contributes to the heating of the surface. Current climate models ignore the fact that the sun produces 55% of its energy in the infrared spectrum, all of which is absorbed. The “real” albedo is in fact much lower, which allows more of the sun’s energy to be absorbed.

What about short term variation in temperature?

Carbon dioxide has been a useful little demon for climate science, since it serves as a veritable “knob” that entirely controls climate. Modern climate science is such a fraud that they will have you believe there was no polar ice during the Eocene because of carbon dioxide! Of course, Arrhenius’s demon is but a fictional entity, so if we want to understand short-term variation, clearly we cannot claim that the atmosphere has gained any mass since the Maunder minimum!

Short-term variations are mediated by cosmic ray spallation of sulfuric acid and other atmospheric aerosols, producing nanometer-sized cloud condensation nuclei. This increases the reflection of the more energetic UV portion of the spectrum and lowers global temperatures by the few degrees, in either direction, that we have witnessed over the past millennia.
Isotope records of beryllium-10, chlorine-36, and carbon-14 provide ample evidence that these cosmic rays indeed mediate temperature, because they correlate closely with temperature records from ice cores. This phenomenon is called “cosmoclimatology”, a term coined by Henrik Svensmark, who first proposed the mechanism. Don Easterbrook and Nir Shaviv are two other proponents of this mechanism. Disappointingly, all of them seem to still endorse the greenhouse effect, judging from comments in their lectures available on YouTube, where they compare the “forcing effect” of cosmic rays with that of CO2.
Variation in the cosmic ray flux reaching earth is in turn mediated by sunspot activity, large magnetic fields that burst out of the photosphere and produce visible black spots. When these magnetic fields are stronger and more numerous, fewer solar energetic particles and cosmic rays reach earth, producing fewer aerosols and allowing more UV to strike the earth.


A Thermodynamic Fallacy

We must first define what POWER is. The sun delivers power, not energy. Energy, dimensionally, is defined as mass times length squared divided by time squared: L2M1T-2. Power is energy over time: energy divided by the time spent delivering it.

Energy is not power. Power is flux, a continuous stream of a “motive” substance capable of performing work. In dimensional analysis, power is measured as mass times length squared divided by time cubed: L2M1T-3. Power could be said to be analogous to pressure combined with flow rate, while energy is just the pressure. Note that below we use the terms energy flux and power interchangeably; they share the same units.

The greenhouse effect treats energy as a compressible medium with an infinite source of available work

Work or energy flux cannot be compressed or made denser by slowing the rate at which energy leaves a system; this treats energy flux as a multipliable medium, which it clearly is not. Using mechanical analogies for the sake of clarity, we can express energy flux as gas flowing through a pipeline. The energy flux would be analogous to the gas molecules, and the area over which this energy is expressed is the surface of the earth. Using the pipe analogy, we can invoke Bernoulli’s theorem to show that mass is always conserved. If we squeeze our pipe, the velocity increases while the mass flow rate is conserved, a basic statement of continuity. With the greenhouse effect, the energy flux flowing through the pipeline is subject to a constriction (a reduction in cooling), and the constriction now alters the ability of energy to exit the pipeline, thereby increasing the density of energy particles within the volume. This is, in essence, the current greenhouse effect power-multiplication scheme. By “constricting” the pipe, energy flux “particles” pile up and increase in their proximity, creating a “zone” of higher intensity. But this is clearly a fallacy, since it produces additional energy flux density (work) from nothing. This scheme has found a way to increase power density without changing the total delivered power or the area/volume, therefore it has created work from nothing, and it thus cannot exist in reality. No degree of constriction (analogous to back radiation) can increase the flux density required to heat the earth.

The fact that a century’s worth of top scientists failed to identify this error strongly confirms our hypothesis that most technology and discovery is largely a revelatory phenomenon, as opposed to being the expression of deep insight. The fact that modern science cannot even explain the climate of the very earth we live on is quite astonishing. Modern technology can construct transistors a few nanometers in diameter, yet we are still debating elementary heat flow and energy conservation axioms.

Some GHE deniers go wrong by incorrectly stating that a radiatively coupled gas can “cool” the atmosphere; this makes the same error that led to the erroneous greenhouse effect in the first place. Cooling can never lower the temperature of a continuously radiated and radiating body; such a scheme is impossible because it would eventually deplete all the energy from the body. The terms “heating” and “cooling” with respect to the atmosphere need to be dispensed with altogether. Think of the atmosphere as a water wheel: damming up the river in front of the wheel will not speed up the water wheel, whose speed is solely determined by the mass flow and velocity of the river beneath it. A body receiving a steady-state source of radiation can never be cooled, via radiation, at a rate greater than it is heated, due to the reversibility of emissivity and absorptivity; in other words, cooling can never exceed warming and vice versa. The fundamental basis of the greenhouse effect is the assumption that power delivered can exceed power rejected. Since the sun continuously emits “new” radiation every second, the radiation that is “consumed” and converted to molecular kinetic energy is always released at the same rate as it is delivered. Radiation forms a reversible continuum of thermal energy transfer, without the ability to accumulate or transfer this heat energy at a greater rate than it is received. Conductive or convective cooling has no applicability to radiative heat transfer in the vacuum of space, since convective or conductive heat transfer scenarios on earth have virtually infinite low-temperature bodies to cool to. Therefore, all stellar bodies are in perfect radiative equilibrium, neither trapping, storing, nor rejecting more radiant energy than they can absorb and reject per second.

The confusion over the “amplifiability” of power

We have already defined power as fundamentally mass times area (length squared) divided by time cubed, expressed in dimensional analysis as L2M1T-3. Energy is a cumulative phenomenon: energy as a stored quantity is a static sum, while power or energy flux is a continuous or “live” phenomenon, measurable only in its momentary form, imparting action on a non-stop basis. Mice can produce kilowatt-hours worth of energy by carrying cheese around a house over the course of a few years, but they can never produce one kilowatt. A one-watt power source can produce nearly 9 kWh in a year, but nine watt-hours can never produce 9 kilowatts! Energy gives the wrong impression that power is somehow accumulated. This rather confusing distinction between the different expressions of energy, one inherently time-dependent and one not, led to the fallacy of the greenhouse effect. Because energy can be “stored” and accumulated to form a larger sum, it was assumed energy flux could be amplified as well, by simply slowing down the rate of energy loss relative to energy input, leading to an inevitable increase in temperature. Amplification through altering energy loss can never increase flux, as this would mean insulation could amplify the output of a heater. Insulation can only prolong the lifespan of a finite quantity of thermal energy; it has no bearing on flux values or power. This is because power is a constant value, not something that can be multiplied, amplified, or attenuated. Power is a time-dependent measure of the intensity of the delivery of work or energy; it is simply energy divided by time.

To increase the temperature of the planet, one would need to increase the flux. 

Slowing the rate of heat loss can only work to extend the finite internal energy of a body that is donated a quantity of energy and never replenished; it is unable to raise the temperature of a continuously heated body, because such a body’s emissions are the product of its own temperature, and recycling these emissions can never exceed the source temperature.

A good analogy would be low-grade heat (say 100°C) versus high-grade heat. One could have a million watts of “low-grade heat”, but this low-grade heat can never spontaneously upgrade itself into even a single watt of high-grade heat, say at 1000°C. Heat can never be “concentrated” to afford a higher temperature; it must always follow the law of “disgregation”, the original true meaning of “entropy” as coined by Clausius. The “lifespan” of a concentrated form of energy can be prolonged or extended by modulating the perviousness or retentiveness of the storage medium, but the time-invariant flux-equivalent sum remains constant. The greenhouse gas theory is therefore quite an elementary mistake: the conflation of the permeability of heat with the flux intensity required to achieve said heat. To raise the temperature of the earth to 15°C, the total flux must increase; one can never trap or amplify a lower flux value to reach a higher flux value, because flux is not a modulable entity.

Many greenhouse effect “slayers” get worked up over the concept of back radiation and radiative heat transfer from hot to cold, but this is not the real issue with the greenhouse effect; the greenhouse effect is a 1st law violation, not a 2nd law violation. Of course, one still cannot warm a hotter surface with the less intense radiation emitted by a colder body, but this is a secondary problem; the principal error is the confusion between flux and energy.

Low-grade heat cannot be transformed into high-grade heat; such a scheme requires energy input and an “upgrading” heat pump, usually employing exothermic chemical reactions such as the absorption of water into sulfuric acid. Heat-upgrading heat pumps exist in industry and evidently do not violate any laws of thermodynamics, because they work! But these pumps obviously require work input to perform the “upgrading” in the first place.

The greenhouse effect is impossible because it leads to a buildup of energy; it forbids a thermal equilibrium. All stable systems are in perfect thermal equilibrium. The reason the conservation of energy (first proposed by von Mayer) is a universal law of nature is that its absence would mean the spontaneous creation or destruction of energy. Since energy and mass are the same entity, differently expressed (first proposed by Olinto De Pretto), a universe without the 1st law would disappear within seconds. Stability requires continuity, and continuity requires conservation. Energy flux is not a cumulative phenomenon; it is not possible to trap and store ever more energy, since this energy would continuously build up and lead to thermal runaway. Energy itself is cumulative, it can be built up, drawn down, and stored, but flux cannot: flux represents a rate of flow, while energy represents the time-dependent accumulation or cumulative sum of that flow. Energy can be pumped or accumulated to form a larger sum over a period of time, but flux can never be altered; it is impossible to change the power output of an engine, laser, or flame by any scheme that does not involve the addition of extra work. If greenhouse gases stored more heat than could otherwise flux into space, this greater heat content would generate more radiation by raising the temperature, and now this radiation would be blocked from leaving, generating even more heating of the surface, which would produce yet more radiation. The process goes to infinity and therefore must be unphysical. Such a scenario is impossible because it is totally unstable. A mechanism must exist that continuously provides the thermal energy to maintain a constant surface temperature, and this mechanism cannot be solar radiation alone.

Kirchhoff’s law forbids emissivity from exceeding absorptivity and vice versa, so the greenhouse effect violates Kirchhoff’s law. One cannot selectively “tune” emissivity to retain more heat and slowly build up a “hotter” equilibrium. By definition, one cannot “build up” an equilibrium, since an equilibrium requires input and output to be perfectly synced, and the greenhouse effect is, by definition, a condition in which these values are not synced but considerably diverged, with more retained than imparted into the system; such a condition inevitably leads to infinity.

There are two ways of falsifying the greenhouse effect. One is to find errors in the predictive power of a CO2-driven paleoclimate or ancient climate record; another, better way is to identify and highlight the major physical errors in the mechanism itself.

During the Paleocene–Eocene Thermal Maximum, there was no polar ice and sea levels were considerably higher, likely close to a hundred meters higher.

Henry’s law is temperature dependent: when liquids rise in temperature, the solubility of gases decreases, so less gas can be stored in the oceans. CO2 therefore outgases from the oceans following a temperature increase.

The difference between 1600 and 400 ppm cannot account for the complete absence of ice in the Eocene, the ice ages, or millennial-scale variation; this would require close to 5000 ppm of CO2 according to the current ~1°C-per-doubling sensitivity. The Paleocene–Eocene maximum was up to 13°C warmer, but CO2 concentrations were only 3.3 times higher than at present, which would translate to a sensitivity of roughly 7.5°C per doubling (13°C over log2(3.3) ≈ 1.7 doublings), far too high even if one subscribes to the non-existent greenhouse effect. Even water vapor, which on average accounts for 2.5% of the volume of the atmosphere, would change emissivity by only 2.5%, raising or lowering the temperature by only 0.32 degrees.
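
Taking the paragraph's premises at face value, the implied sensitivity follows from a two-line calculation:

```python
# Implied "sensitivity" if the PETM was 13 degC warmer with CO2 only 3.3x higher
# than today (premises taken from the paragraph above).
from math import log2

doublings = log2(3.3)                 # ~1.72 doublings of CO2
print(13.0 / doublings)               # ~7.5 degC per doubling
```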

Even if the concept of back radiation were valid, which it is not, the tiny concentration of CO2, even at an absorptivity of 1, would yield only a minuscule difference in net atmospheric emissivity. CO2 is 0.042% of the atmosphere by volume; even assuming each CO2 molecule acts as a perfect radiant barrier, the total change in emissivity can, by definition, only be 0.042%.

Milankovitch cycles cannot account for ice ages since the distance to the sun does not change, or only very slightly.

Loschmidt firmly believed, contrary to Maxwell, Boltzmann, Thomson, and Clausius, that a gravitational field alone could maintain a temperature difference which could generate work. Roderich W. Graeff measured gravitational temperature gradients as high as 0.07 K/m in highly insulated, hermetic columns of air, which corroborates Loschmidt’s theory and confirms the adiabatic atmosphere theory.

“Thereby the terroristic nimbus of the second law is destroyed, a nimbus which makes that second law appear as the annihilating principle of all life in the universe, and at the same time we are confronted with the comforting perspective that, as far as the conversion of heat into work is concerned, mankind will not solely be dependent on the intervention of coal or of the sun, but will have available an inexhaustible resource of convertible heat at all times” — Johann Josef Loschmidt

“In isolated systems – with no exchange of matter and energy across its borders – FORCE FIELDS LIKE GRAVITY can generate in macroscopic assemblies of molecules temperature, density, and concentration gradients. The temperature differences may be used to generate work, resulting in a decrease of entropy”—Roderich W. Graeff


[1] http://theendofthemystery.blogspot.com/2010/11/venus-no-greenhouse-effect.html

[2] https://aip.scitation.org/doi/10.1063/1.1523808