Galton Reaction Time Slowing Resolved (Scientific)

Christophe Pochari, Pochari Technologies, Bodega Bay, CA

Abstract: The issue of slowing simple reaction time has not been fully resolved. Since Galton collected roughly 17,000 samples of simple auditory and visual reaction time between 1887 and 1893, obtaining an average of about 185 milliseconds, modern researchers have been unable to reproduce such fast results, leading some intelligence researchers to argue, erroneously, that the apparent slowing reflects selective mechanisms favoring lower g in modern populations.

Introduction: In this study, we developed a high-fidelity measurement system for ascertaining human reaction time, with the principal aim of eliminating the preponderance of measurement latency. To accomplish this, we designed a high-speed photographic apparatus in which a camera records both the stimulus and the participant’s finger movement. The camera is an industrial machine vision unit built to stringent commercial standards (Contrastec Mars 640-815UM, $310 on Alibaba.com). It feeds over USB 3.0 into a Windows 10 PC running Halcon machine vision software, records at 815 frames per second (1.2 milliseconds per frame), and uses a commercial-grade Python 300 sensor. The camera begins recording, the stimulus source is then activated, and filming continues until after the participant has depressed a mechanical lever. The footage is then analyzed frame by frame in a frame-rate analysis tool such as VirtualDub 1.10. The point of stimulus appearance is set as point zero, where the elapsed reaction time commences. Because the LED monitor refreshes progressively when displaying the stimulus color (green in this case), the frame at which the screen is approximately 50 to 70% refreshed is taken as the start of the measurement, since we estimate the human eye can detect the green stimulus before it is fully displayed. Once the frame analyzer establishes the point of stimulus arrival, the next step is identifying the frame at which finger displacement becomes conspicuously discernible, that is, when the lever first shows evidence of motion from its resting position.
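The frame-counting arithmetic reduces to a few lines. The following is a minimal sketch in Python; only the 815 fps frame rate is taken from the setup described above, and the frame indices are invented purely for illustration.

```python
# Minimal sketch of the frame-counting arithmetic described above.
# The 815 fps frame rate is from the camera described in the text;
# the frame indices below are invented purely for illustration.

FPS = 815
FRAME_MS = 1000 / FPS  # ~1.23 ms of elapsed time per frame

def reaction_time_ms(stimulus_frame: int, movement_frame: int) -> float:
    """Elapsed time between the frame where the monitor is ~50-70% refreshed
    (point zero) and the first frame showing visible lever motion."""
    return (movement_frame - stimulus_frame) * FRAME_MS

# e.g. stimulus judged visible at frame 410, first lever motion at frame 534
print(round(reaction_time_ms(410, 534), 1))  # -> 152.1 ms
```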
Using this technique, we measured a true reaction time to visual stimuli of 152 milliseconds, 33 milliseconds faster than Francis Galton obtained with his pendulum chronograph. We collected a total of 300 samples to arrive at a long-term average. Using the same participant, we compared a standard PC measurement system running Inquisit 6 and obtained results of 240 and 230 milliseconds, depending on whether a desktop or laptop keyboard was used; the 10 ms difference is likely due to the longer keystroke travel of the desktop keyboard. We also used the well-known online test humanbenchmark.com and obtained an average of 235 ms. Comparing these two conventional tests, one online and one local, against the photographic method, the total latency appears to be up to 83 ms, nearly 40% of the gross figure. These findings strongly suggest that modern methods of testing human reaction time impose a large latency penalty that skews results upward, hence the appearance that reaction times are slowing. We conclude that the apparent slowing of simple RT is attributable not to physiological change but to the poor measurement fidelity intrinsic to computer-based measurement techniques.
In summary, it cannot be stated with any degree of confidence that modern Western populations have experienced slowing reaction times since Galton’s original experiments. Attempts to extrapolate losses in general cognitive ability from putatively slowing reaction times are therefore seriously flawed and rest on confounded measurements. The reaction time paradox is not a paradox at all; it arises from conflating measurement latency with physiological slowing, a rather elementary problem that continues to perplex experts in the field of mental chronometry. We urge mental chronometry researchers to abandon measurement procedures fraught with latency, such as PC-based systems, and to adopt high-speed machine vision cameras as a superior substitute.



Anhydrous ammonia reaches nearly $900/ton in October


Record natural gas prices have sent ammonia skyrocketing to nearly $900 per ton in the North American market. Natural gas has reached $5.6 per 1,000 cubic feet, driving ammonia back to 2014 prices. Pochari distributed photovoltaic ammonia production technology will now become ever more competitive, with correspondingly shorter payback periods.

The Limits of Mental Chronometry: Little to No Decline in IQ Can Be Inferred from Post-Galton Datasets; IQ Has Not Declined 15 Points Since the Victorian Era


Christophe Pochari Engineering is the first in the world to use high-speed cameras to measure human reaction time. By doing so, we have discovered that the true “raw”, undiluted human visual reaction time is actually 150-165 milliseconds, not the 240-250 ms frequently cited.

Key findings

Using high-speed photography with industrial machine vision cameras, Christophe Pochari Energietechnik has acquired ultra-high-fidelity data on simple visual reaction time, in what appears to be the first study of its kind. The vast preponderance of contemporary reaction time studies use computer-software-based digital measurement systems that are fraught with response lag. For illustration, Inquisit 6, a Windows PC program, is frequently used in psychological assessment settings. We performed 10 sample runs with Inquisit 6, obtaining a running average of 242 ms with a standard desktop keyboard and 232 ms with a laptop keyboard. The computer used was an HP laptop with 64 GB of DDR4 RAM and a 4.0 GHz Intel processor. Using the machine vision camera, a mean of 151 milliseconds was achieved with a standard deviation of 16 ms. Depending on where one places the cutoff between screen refresh and finger movement, there is an interpretive variability of around 10 ms. Based on this high-fidelity photographic analysis, our data lead to the conclusion that a latency of around 90 ms is built into digital, computer-based reaction time measurement, generating the false appearance of slowing since Galton, whose mechanical lever apparatus was free of such lag. Each individual frame was analyzed using VirtualDub 1.10.4, which allows the user to step through high-frame-rate video footage. These data indicate that modern reaction times of 240-250 milliseconds (Deary and others) cannot be compared to Galton’s original measurement of around 185 ms. Although Galton’s device was no doubt far more accurate than today’s digital systems, it probably still possessed some intrinsic latency; we estimate around 30 ms, based on this analysis and assuming 240 ms as the modern mean. Dodonova and Dodonov constructed a pendulum chronometer very similar to Galton’s original device and obtained a reaction time of 172 ms with it, so we can be reasonably confident in this estimate.

After adjusting for latency, we conclude that there has been minimal change in reaction time since 1889. We plan to use an even higher-speed camera to further reduce measurement error in a follow-up study, although such precision is not strictly necessary: the frame-level uncertainty of a few milliseconds out of 150 represents an error on the order of 2%, and far more room for error lies in defining the starting and ending points.
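As a rough error budget, and assuming only the 815 fps frame rate and a ~150 ms reaction time from above, the frame quantization contributes only a couple of milliseconds:

```python
# Rough error budget for the frame-based method, assuming only the 815 fps
# frame rate and a ~150 ms reaction time from the text.
FPS = 815
frame_ms = 1000 / FPS            # ~1.23 ms temporal resolution

# Each endpoint (stimulus frame and first-movement frame) is uncertain by up
# to one frame, so the worst-case quantization error is about two frames.
worst_case_ms = 2 * frame_ms     # ~2.5 ms
print(round(worst_case_ms, 2), f"{worst_case_ms / 150:.1%}")  # ~2.45 ms, ~1.6%
```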

An interesting side note: there is some data pointing to ultra-fast reaction times in athletes that seem to exceed the speed of normal simple reactions to visual stimuli under non-stressful conditions:

“Studies have measured people blinking as early as 30-40 ms after a loud acoustic stimulus, and the jaw can react even faster. The legs take longer to react, as they’re farther away from the brain and may have a longer electromechanical delay due to their larger size. A sprinter (male) had an average leg reaction time of 73 ms (fastest was 58 ms), and an average arm reaction time of 51 ms (fastest was 40 ms).”

The device used in this study is a Shenzhen Kayeton Technology Co. KYT-U400-CSM high-speed USB 3.0 camera running at 330 fps at 640 x 360 (MJPEG); a single frame increment therefore represents an elapsed time of 3 milliseconds. Pochari Technologies has also purchased a Mars 640-815UM, an 815 frames-per-second camera manufactured by Hangzhou Contrastech Co., Ltd; the purpose of the 815 fps camera is to further reduce frame uncertainty to 1.2 milliseconds. The second study, using a different participant, will use the 815 fps device.

To measure finger movement, we used a small metal lever. The camera is fast enough to capture the color transition of the LED monitor, with the color changing from red to green as the pixels switch from the top of the screen down. We set point zero at the frame where the color shift is around 50% complete. The participant is instructed to hold his or her finger as steady as possible during the waiting period; there is effectively zero detectable movement until the muscle contraction begins upon nerve signal arrival, which propagates at around 100 m/s over a distance of roughly 1.6 m from brain to hand (about 16 ms). Once nerve conduction has begun, the finger depresses the lever conspicuously on the image and the reaction time can be determined.




Mars 640-815UM USB 3.0 machine vision camera (815 fps).


Shenzhen Kayeton Technology Co KYT-U400-CSM high speed USB camera.

Introduction and motivation of the study

In 2005, Bruce Charlton came up with a novel idea for psychometric research: find historical reaction time data with which to estimate the intelligence of past generations. In 2008 he wrote to Ian Deary proposing this method for a diachronic analysis of intelligence. Deary unfortunately had no data to offer, so the project was held in abeyance until 2011, when Michael Woodley discovered Irwin Silverman’s 2010 paper, which had rediscovered Galton’s old reaction time collection. The sheer obscurity of Galton’s original study is evident considering that the leading reaction time expert, Ian Deary, was not even aware of it; the original paper covering Galton’s data was Johnson et al. 1985. The subsequent paper, “Were the Victorians Cleverer Than Us?”, generated much publicity. One of its lead authors, Jan te Nijenhuis, discussed the theory in a YouTube interview with a Huffington Post journalist, and it was also featured in the Daily Mail. The notoriously dyspeptic Greg Cochran threw down the gauntlet on Charlton’s claim on his blog, arguing from the breeder’s equation that such a decline is impossible. Many HBD bloggers, including HBD Chick, were initially very skeptical, and Scott Alexander Siskind also offered a rebuttal, mainly along the lines of sample representativeness and measurement veracity, the two main arguments made here.

Galton’s original sample has been criticized as unrepresentative of the population at the time, since it mainly consisted of students and professionals visiting a science museum in London where the testing took place. In 1889, most of the Victorian population consisted of laborers and servants, who would likely not have attended this museum in the first place. Notwithstanding the lack of representativeness, the sample was large: over 17,000 measurements were taken at the South Kensington Museum between 1887 and 1893. Since Galton died in 1911 and never published his reaction time findings, we are reliant on subsequent reanalyses of the data; this is precisely where error may have accrued, as Galton may have had personal insight into the workings of the measurement device, the statistical interpretation, or the data aggregation procedure that was never completely documented. The data used by Silverman come from the reanalysis of Galton’s original findings published by Koga and Morant in 1923, and further data were later uncovered by Johnson et al. in 1985. Galton used a mechanical pendulum chronometer renowned for its accuracy and minimal latency. Measurement error is not where criticism is due: Galton’s instrument was likely more accurate than modern computer-based testing. Modern computers are thought to add around 35-40 ms of latency, not including any software or internet latencies, but we have measured up to 90 ms.

The problems with inferring IQ decline from Galton-to-present RT data are threefold:

The first issue is that the sample is very unlikely to have been representative of the British population at the time. It contained disproportionate numbers of highly educated individuals, who are more likely to possess high intelligence, since people who participated in events like this would have been drawn overwhelmingly from the higher social strata. Society was far more class-segregated, and average and lower-IQ segments would not have participated in such intellectual activities.

Scott Alexander comments: “This site tells me that about 3% of Victorians were “professionals” of one sort or another. But about 16% of Galton’s non-student visitors identified as that group. These students themselves (Galton calls them “students and scholars”, I don’t know what the distinction is) made up 44% of the sample – because the data was limited to those 16+, I believe these were mostly college students – aka once again the top few percent of society. Unskilled laborers, who made up 75% of Victorian society, made up less than four percent of Galton’s sample”

The second issue is measurement latency: when Galton’s original estimate is adjusted and modern samples are corrected for digital latency, the claimed loss in reaction time collapses from the originally reported 70 ms (14 IQ points) to roughly zero. Another factor, mentioned by Dodonova et al., is the process of “outlier cleaning”, in which trials below 200 ms and above 750 ms are discarded. In principle this can shift the mean in either direction, although in practice outlier cleaning appears to raise the RT mean, since slow outliers are rarer than fast outliers.

The third issue is that reaction time studies conducted only 50-60 years later (in the 1940s and 1950s) already show reaction times equal to modern samples, which would mean the entire decline had to take place within that short window. A large study by Forbes (1945) reports 286 ms for males in the UK, and Michael Persinger’s book on ELF waves cites a study from 1953 in Germany.

“On the occasion of the German 1953 Traffic Exhibition in Munich, the reaction times of visitors were measured on the exhibition grounds on a continuous basis. The reaction time measurements of the visitors to the exhibition consisted of the time span taken by each subject to release a key upon the presentation of a light stimulus”.

In the 1953 German study, the investigators compared the reactions of people exposed to different levels of electromagnetic radiation. The mean appeared to be in the 240-260 ms range.

Lastly, it may be that Galton recorded the fastest of three trials rather than the mean of the three.

Dodonova et al. write: “It is also noteworthy that Cattell, in his seminal 1890 paper on measurement, on which Galton commented and that Cattell hoped would ‘meet his (Galton’s) approval’ (p. 373), also stated: ‘In measuring the reaction-time, I suggest that three valid reactions be taken, and the minimum recorded’ (p. 376). The latter point in Cattell’s description is the most important one. In fact, what we know almost for sure is that it is very unlikely that Galton computed mean RT on these three trials (for example, Pearson (1914) claimed that Galton never used the mean in any of his analyses). The most plausible conclusion in the case of RT measurement is that Galton followed the same strategy as suggested by Cattell and recorded the best attempt, which would be well in line with other test procedures employed in Galton’s laboratory.”

Woods et al. (2015) confirm this statement: “based on Galton’s notebooks, Dodonova and Dodonov (2013) argued that Galton recorded the shortest-latency SRT obtained out of three independent trials per subject. Assuming a trial-to-trial SRT variance of 50 ms (see Table 1), Galton’s reported single-trial SRT latencies would be 35–43 ms below the mean SRT latencies predicted for the same subjects; i.e., the mean SRT latencies observed in Experiment 1 would be slightly less than the mean SRT latencies predicted for Galton’s subjects.”

The website humanbenchmark.com, run by Ben D. Wiklund, has gathered 81 million clicks. Such a large sample size eliminates almost all sampling bias; the only remaining issue is population composition, since it is not known what percentage of users are from Western nations. Assuming most are, it is safe to say this massive collection is far more informative than a small sample collected by a psychologist. For this test to be compared with Galton’s original sample, however, both internet latency and hardware latency have to be accounted for, since the test is online. Internet latency depends on the distance between the user and the server, so an average is difficult to estimate. Humanbenchmark is hosted in North Bergen, New Jersey; if half the users are outside the U.S., the average distance should be on the order of 3,000 km.

“Connecting to a web site across 1500 miles (2400 km) of distance is going to add at least 25 ms to the latency. Normally, it’s more like 75 after the data zig-zags around a bit and goes through numerous routers.” Unless the website corrects for latency, which seems hard to believe, since it would have to estimate the distance from the user’s IP and assume no VPN is in use, an internet latency of up to 75 milliseconds makes a modern average reaction time of 167 ms doubtful; our first thought was therefore that some form of latency-correction system must exist, although the site makes no mention of such a feature. For example, since Humanbenchmark is hosted in New Jersey, a person taking the test in California would need to wait roughly 47 ms before his signal reaches New Jersey, 4,500 kilometers away, and that accounts only for the time it takes light to travel a straight-line path; many fiber-optic cables follow circuitous routes that add distance, and there is further latency in the server itself and in the modem and router. According to Verizon, the latency for the transatlantic New York-London route (3,500 km) is 92 ms, and adjusting for the New Jersey-California distance (4,500 km) gives a figure of at least that magnitude. Since the online test starts recording elapsed time the moment the green screen is initiated, the program in New Jersey starts its timer immediately after the green signal is sent, but on the order of 92 ms passes before you see green, and when you click, the click takes a similar time to travel back to the server to stop the timer. The internet is not a “virtual world”: every web service is hosted on a server computer performing its computation locally, so by definition a click on a website hosted in Australia, 10,000 km away, will register some 113 ms after your click, a delay bounded by the speed of light in fiber; not even a hypothetical quantum-entanglement-based internet could evade this, since entanglement cannot be used to transmit information.

Taking the Verizon estimate and assuming the average test taker is within 3,000 km of the server, we can use roughly 70 ms of latency. Since that latency is incurred twice (timing begins the moment the signal is sent to the user), about 140 ms would have to be subtracted, which is simply too much; either an automatic correction exists, which would make the true latency harder to estimate since many users sit behind VPNs, or the effective latency is much smaller in practice. To be conservative, we use a gross single latency of 20 ms. On further testing with a VPN exit in New York, just a short distance from the server, where any latency-adjustment program (if it existed) would apply almost no correction because the latency would be only a few milliseconds, the results showed no change in reaction time, indicating that no such mechanism exists, contrary to our first suspicion. If no latency correction exists, then modern reaction times could theoretically be as low as about 140 ms (note that this is close to the real number, so our blind estimate was reasonably good). The latency of LED computer monitors varies widely; the LG 32ML600M, a mid-range LED monitor, has an input lag of 20 ms. This monitor was chosen arbitrarily and is assumed to be reasonably representative of the monitors used by the 81 million test takers; it is also the one used in the present study.
Using an HTML/JavaScript mouse input performance test, we measured a latency of 17 ms for a standard computer mouse. The total latency (including 20 ms for the internet) is then roughly 56 ms. The median reaction time in the Humanbenchmark dataset is 274 milliseconds, yielding a net reaction time of about 218 milliseconds, 10 milliseconds slower than Galton’s adjusted numbers as provided by Woods et al. Bruce Charlton has proposed a conversion in which 1 IQ point corresponds to 3 ms, assuming a modern reaction time of 250 ms with a standard deviation of 47 ms. This simple but elegant method for converting reaction time into IQ is purely linear and assumes no change in the correlation at different levels of IQ. Under this assumption, 10 ms equates to about 3.3 IQ points, uncannily similar to Piffer’s estimate.
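The latency subtraction and the millisecond-to-IQ conversion reduce to simple arithmetic; the sketch below uses only the figures quoted above (the 20 ms internet allowance is the deliberately conservative estimate).

```python
# Latency adjustment and ms-to-IQ conversion using the figures quoted above.
median_online_rt = 274   # humanbenchmark median, ms
monitor_lag = 20         # LG 32ML600M input lag, ms
mouse_lag = 17           # measured mouse input latency, ms
internet_lag = 20        # conservative single-trip allowance, ms

net_rt = median_online_rt - (monitor_lag + mouse_lag + internet_lag)
print(net_rt)            # ~217 ms, i.e. the ~218 ms net figure cited above

# Charlton's linear conversion: 1 IQ point ~ 3 ms of simple RT
MS_PER_IQ_POINT = 3
gap_vs_galton_ms = 10    # ms slower than Galton's adjusted figure
print(round(gap_vs_galton_ms / MS_PER_IQ_POINT, 1))  # ~3.3 IQ points
```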

“The mean SRT latencies of 231 ms obtained in the current study were substantially shorter than those reported in most previous computerized SRT studies (Table 1). When corrected for the hardware delays associated with the video display and mouse response (17.8 ms), “true” SRTs in Experiment 1 ranged from 200 ms in the youngest subject group to 222 ms in the oldest, i.e., 15–30 ms above the SRT latencies reported by Galton for subjects of similar age (Johnson et al., 1985). However, based on Galton’s notebooks, Dodonova and Dodonov (2013) argued that Galton recorded the shortest-latency SRT obtained out of three independent trials per subject. Assuming a trial-to-trial SRT variance of 50 ms (see Table 1), Galton’s reported single-trial SRT latencies would be 35–43 ms below the mean SRT latencies predicted for the same subjects; i.e., the mean SRT latencies observed in Experiment 1 would be slightly less than the mean SRT latencies predicted for Galton’s subjects. Therefore, in contrast to the suggestions of Woodley et al. (2013), we found no evidence of slowed processing speed in contemporary populations.”

They go on to say: “When measured with high-precision computer hardware and software, SRTs were obtained with short latencies (ca. 235 ms) that were similar across two large subject populations. When corrected for hardware and software delays, SRT latencies in young subjects were similar to those estimated from Galton’s historical studies, and provided no evidence of slowed processing speed in modern populations.”.

What the authors are saying is that, after correcting for device lag, there is no appreciable difference in simple RT between Galton’s sample and modern ones. Dodonova and Dodonov argued that Galton did not use means in computing his results. They also constructed a pendulum apparatus similar to Galton’s to ascertain its accuracy and concluded it would have been a highly accurate device, free of the latencies that plague modern digital systems: “What is obvious from this illustration is that RTs obtained by the computer are by a few tens of milliseconds longer than those obtained by the pendulum-based apparatus”.

They go on to say: “it is very unlikely that Galton’s apparatus suffered from a problem of such a delay. Galton’s system was entirely mechanical in nature, which means that arranging a simple system of levers could help to make a response key very short in its descent distance”.

Implications

There are two interpretations available to us. The first is that no decline whatsoever took place. If reaction time is to be used as the sole proxy for g, then according to Dodonova and Woods, who provide a compelling argument that we have confirmed using data from mass online testing, no statistically significant increase in RT has transpired.

Considering the extensive literature showing negative fertility patterns on g (general intelligence), it seems implausible that no decline has occurred, but any decline may have been offset by gains in IQ from outbreeding (heterosis, or hybrid vigor). People in small villages in the past would have been confined to marrying one another, reducing genetic diversity, which is known to lower IQ, most extremely in the case of consanguineous marriage in Muslim populations.

While we do not argue, as Mingroni did, that the Flynn effect is entirely due to heterosis (outbreeding), it is conceivable that populations boosted their fitness by reducing the extent to which they mated within small social circles, for example villages and rural towns. We know with certainty that consanguineous marriage severely depresses intelligence, and since it tends to be a Jensen effect (the magnitude of the effect is greatest where the g loading is highest), heterosis is a theory worthy of serious consideration. In the age of the 747, it is easier than ever for an Italian to mate with a Swede, increasing genetic diversity, amplifying variance, and producing more desirable phenotypes. On the other hand, there is ample evidence that mixed-race offspring of genetically distant populations (such as African-European or East Asian-European pairings) have higher rates of mental illness and general psychological distress than controls. This should not be seen as a falsification of the heterosis theory: a certain degree of genetic distance is beneficial, and beyond that threshold the opposite effect can take hold. This is essentially a hormesis principle, and nearly all biological phenomena follow one; there is no obvious reason genetics should be exempt. The Swedish geneticist Gunnar Dahlberg first proposed in 1944 that outbreeding caused by the breakdown of small isolated villages could raise intelligence ("panmixia" is the term for random mating). Nor does the Flynn effect heritability paradox appear to be confined to intelligence: Michael Mingroni has compiled evidence on height, asthma, myopia, head circumference, head breadth, ADHD, autism, and age at menarche, all of which have high heritabilities (as high as 0.8 or even 0.9 for height), yet show large secular changes that defy the breeder's equation. In other words, selective or differential fertility cannot have changed allele frequencies fast enough to explain such rapid secular changes in phenotype. Heterosis may operate through directional dominance, in which dominant alleles push the trait in one direction and recessive alleles in the other. One could theorize that a myriad of recessive but antagonistic alleles reducing height, IQ, and head size were less often expressed in homozygous form as heterosis increased during the 20th century. This interpretation is highly compatible with Kondrashov's theory of sexual mutation purging. Anyone who doubts the power of heterosis should talk to a plant breeder; granted, humans have a different genetic architecture, but not so different that the principle fails to apply.

In light of the findings from the photographic measurement method, it appears that any such decline is too subtle to be picked up by RT: the signal is weak in an environment of high noise. In an interview with the intelligence blogger “Pumpkin Person”, Davide Piffer argues, based on his extensive analysis of polygenic data, that IQ has fallen about 3 points per century:

“I computed the decline based on the paper by Abdellaoui on British [Education Attainment] PGS and social stratification and it’s about 0.3 points per decade, so about 3 points over a century.

It’s not necessarily the case that IQ PGS declined more than the EA PGS..if anything, the latter was declining more because dysgenics on IQ is mainly via education so I think 3 points per century is a solid estimate”

Since Galton’s 1889 study, Western populations may have lost 3.9 points, but this is unlikely. If the number is correct, it is interesting to observe how close it is to the IQ gap between Europeans and East Asians, who average 104-105 compared with 100 for Northern Europeans and 95 for Southern, Central, and Eastern Europeans. East Asia industrialized only recently, with China industrializing in earnest only in the 1980s, so the window for dysgenics to operate has been very narrow. Japan has been industrialized for longer, since around the turn of the century, so pre-industrial selection pressures would likely have relaxed earlier there, which presents a paradox, since Japan’s IQ appears to be as high as, if not higher than, that of China and South Korea. Of course this is only rough inference; these populations differ somewhat genetically, albeit modestly, yet enough to matter for psychometric comparisons. Southern China has greater Australasian/Malay admixture, which reduces its average relative to Northern China. For all intents and purposes, East Asian IQ has remained remarkably steady at about 105, suggesting an “apogee” of IQ that can be reached in pre-industrial populations. Using indirect markers of g, we know that East Asians have larger brains, slower life history speeds, and faster visual processing speeds than Europeans, corresponding to an ecology of harsh climate (colder winter temperatures than Europe; Nyborg 2003). If any population reached a climax of intelligence, it would most likely have been Northeast Asians. So did Europe feature unique selective pressures?

Unlikely. Under a model of “Clarkian selection” through downward social mobility (Gregory Clark, The Son Also Rises), Unz has documented a similar process in East Asia. Additionally, plagues, climatic disruptions, and mini ice ages afflicted the populations of East Asia as often as, if not more often than, those of Europe. It is plausible that group selection in East Asia was markedly weaker, since inter-group conflict was less frequent: China has historically been geographically unified, with major wars between groups being rare compared with Europe’s geographic disunity and nearly constant inter-group conflict. Yet East Asia also includes Japan, which shows all the markers of strong group selection, that is, high ethnocentrism, conformity, in-group loyalty and sacrifice, and a very strong honor culture. If genius were a product of strong group selection, as warring tribes are strongly rewarded by genius contributions in weaponry and the like, then one would expect genius to be strongly tied to group selection, which appears not to be the case: Europeans show lower ethnocentrism and weaker group selection than Northeast Asians on almost all metrics, according to Dutton’s research, which refuted some of Rushton’s contradictory findings. A common argument in the HBD (human biodiversity) community, espoused mainly by Dutton, is that the harsh ecology of Northeast Asia, with its frigid winters, pushed the population into a regime of stabilizing selection (selection that reduces genetic variance), resulting in lower frequencies of outlier individuals. But no genetic or trait analysis has been performed to compare the degree of variance in key traits such as g, personality, or brain size; what is needed is a global study of the coefficients of additive genetic variation (CVA) to ascertain the degree of historical stabilizing versus disruptive selection. Genius has also been argued to be under negative frequency-dependent selection, in which the trait is fitness-relevant only while it remains rare, but there is little reason to believe genius falls into this category: high cognitive ability would be universally under selection, and outlier abilities would simply follow that weak directional selection. Insofar as Dutton is correct, genius may come with fitness-reducing baggage, such as a bizarre or deviant personality and general antisocial tendencies; this has been argued repeatedly but never conclusively demonstrated. The last remaining theory is the androgen-mediated genius hypothesis: if one correlates per capita Nobel prizes with rates of left-handedness as a proxy for testosterone, or with national differences in testosterone directly (I do not believe Dutton did the latter), then, analyzing only countries with a minimum IQ of 90, testosterone correlates more strongly than IQ, since the extremely low per capita Nobel prize rates in Northeast Asia cause the IQ correlation to collapse.

To be generous to the possibility that Victorian IQ was markedly higher, we run a basic analysis to estimate the historical and current frequency of outlier levels of IQ, assuming a Victorian mean IQ of 112.

We use the British Isles for this simple exercise. In 1700, the population of England and Wales was about 5,200,000. Two decades into this century, the population had increased to 42,000,000, excluding immigrants and non-English natives. Charlton and Woodley infer a loss of 1 SD from 1850 onward; we use a more conservative estimate of about 0.8 SD above the current mean as the pre-industrial peak.

This would mean that the England of 1700 produced about 163,000 individuals with cognitive ability of 140 or above, given a mean of 112 and an SD of 15. For today’s population we assume the variance has increased slightly due to greater genetic diversity and stronger assortative mating, so we use a slightly higher SD of 15.5 with a mean of 100. From today’s white British population of 42,000,000, that gives about 205,000 individuals 2.6 SD above the current Greenwich IQ mean. If we assume no increase in variance, which is unlikely given the greater genetic diversity afforded by a larger population and hence more mutation, the number is roughly 168,000.
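These tail counts follow directly from a normal model; a minimal sketch, using only the means, SDs, and populations stated above (the figures quoted in the text differ slightly, presumably from rounding):

```python
# Normal-model tail counts behind the estimate above.
from scipy.stats import norm

def count_above(iq_cutoff, mean, sd, population):
    """Expected number of people above an IQ cutoff under a normal model."""
    return population * norm.sf((iq_cutoff - mean) / sd)

# 1700 England and Wales: mean 112, SD 15, population 5.2 million
print(round(count_above(140, 112, 15, 5_200_000)))     # ~161,000

# Modern white British: mean 100, SD 15.5, population 42 million
print(round(count_above(140, 100, 15.5, 42_000_000)))  # ~207,000

# Same population with the SD held at 15
print(round(count_above(140, 100, 15, 42_000_000)))    # ~161,000
```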

Three themes can be inferred from this very crude estimate.

The total number of individuals with extremely high cognitive ability may well have fallen as a percentage, but the absolute number has remained remarkably steady once the substantial increase in population is accounted for. So declining reaction time, even if it did occur (it did not), cannot account for declining invention and scientific discovery since the Victorian era, as argued by Woodley.

Secondly, this would indicate that high IQ in today’s context may mean something very different from high IQ in a pre-industrial setting, since this pool of individuals is not producing world-changing genius (otherwise you would have heard of them).

Thirdly, the global population of high-IQ individuals is extraordinarily large, strongly indicating that pre-industrial Europeans, and especially the English, possessed traits not measurable by IQ alone which accounted for their prodigious creative abilities; this was largely confined to Western European populations and, for unknown reasons, did not extend to Eastern Europe. There is no reason to believe this enigmatic, unnamed trait was not normally distributed and did not follow a pattern similar to standard g; if so, today’s population would necessarily produce fewer such individuals as a ratio, but at an aggregate level the total number would remain steady. With the massive populations of Asia, primarily India and China, a rough estimate based on Lynn’s IQ figures gives around 13,500,000 individuals in China with an IQ of 140 or above, from a mean of 105 and an SD of 15 (there is no evidence East Asian SDs are smaller than European ones, as claimed by many in the informal HBD community). While China excels in fields such as telecommunications, mathematics, artificial intelligence, and advanced manufacturing (high-speed rail and the like), there has been little in the way of major breakthrough innovation on par with pre-modern European genius, especially in theoretical science, despite a massive numerical advantage of roughly 85 times more such individuals than in 1700 England. In fact, most of the evidence suggests China is still heavily reliant on acquiring Western technology, or at least has been since its recent industrialization. Genius (defined as unique creative ability in art, technical endeavors, or pure science and mathematics) is thus a specialized ability not captured by IQ tests. It seems genius is enabled by g in something like synergistic epistasis, where genius is “activated” above a certain threshold of IQ in the presence of one or more unrelated and unknown cognitive traits, often claimed to be a cluster of unusual personality traits, although this model has yet to be proven. India has a much lower mean IQ of 76 in David Becker’s dataset; assuming a standard SD (India’s ethnic and caste diversity would strongly favor a larger one), we use an SD of 16 for the sake of this estimate and are left with about 41,000 individuals above the same cutoff. This number does not reconcile with the number of high-IQ individuals India actually produces, so either the mean of 76 is far too low or the SD must be far higher. Yet even with 40,000 such individuals, none are displaying extraordinary abilities comparable to genius in pre-modern Europe, indicating either that there are ethnic differences in creative potential or that IQ alone fails to capture these abilities. Indian populations are classified as closer to Caucasoids in genetic ancestry modeling, which allows us to speculate as to whether they are also closer in personality traits, novelty-seeking, risk-taking, androgen profiles, and the assorted other traits that contribute to genius (Dutton and Kura 2016).
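The same normal-model arithmetic applied to the China and India figures above (the population totals are our assumptions for illustration):

```python
# Normal-model tail counts for the China and India figures above.
# The 1.4 and 1.38 billion population totals are assumptions for illustration.
from scipy.stats import norm

print(round(1_400_000_000 * norm.sf((140 - 105) / 15)))  # China: ~13.7 million
print(round(1_380_000_000 * norm.sf((140 - 76) / 16)))   # India: ~44,000
```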

Despite Europe’s prodigious achievements in technology and science, which remain unsurpassed by comparably intelligent civilizations, ancient China did muster some remarkable achievements. Lynn writes: “One of the most perplexing problems for our theory is why the peoples of East Asia with their high IQs lagged behind the European peoples in economic growth and development until the second half of the twentieth century.” Until more parsimonious models of the origin of creativity and genius are developed, rough “historiometric” analysis using RT as the sole proxy may be of limited use. Figueredo and Woodley developed a diachronic lexicographic model using high-order words as another proxy for g. One issue with this model is that it may simply be measuring a natural process of language simplification over time, reflecting an increasing emphasis on the speed of information delivery rather than on precision. It is logical to assume that in a modern setting, where information density and speed of dissemination are paramount, a smaller number of simpler words will be used more frequently (Zipf’s law). Additionally, far fewer individuals, likely only those of the highest status, engaged in writing in pre-modern times; most of the population would not have had the leisure time to write, whereas modern written text reflects the palatability of a simpler style catering to the masses. Furthermore, only about 5% of the European population attended university in the early 20th century, so the ability level of writers would have been much higher on average than today, and “high-order word” usage may therefore not be a useful indicator.

Sources:

Forbes, G. (1945). The effect of certain variables on visual and auditory reaction times. Journal of Experimental Psychology.

Woods, D. L., et al. (2015). Factors influencing the latency of simple reaction time. Frontiers in Human Neuroscience.

Dodonova, Y., & Dodonov, Y. (2013). Is there any evidence of historical slowing of reaction time? No, unless we compare apples and oranges. Intelligence.

http://iqpersonalitygenius.blogspot.com/2013/02/the-ordinal-scale-of-iq-could-be.html

https://www.youtube.com/watch?v=7QACfJoGf8g

https://westhunt.wordpress.com/2013/06/07/the-breeders-equation/

Woodley, M. A., & te Nijenhuis, J. (2013). Were the Victorians cleverer than us? The decline in general intelligence estimated from a meta-analysis of the slowing of simple reaction time. Intelligence.

Detailed statistics

155 157 154 191 157 164 151 173 158 134 179 152 172 176 163 139 155 182 166 169 179 155 152 169 205 170 149 143 170 142 143 149 174 130 149 139 142 170 127 131 152 127 136 124 125 157 149 127 124 139 158 149 130 149 136 155 143 145 185 152 105 152 130 139 139 140 130 152 166 158 134 142 128 140 155 127 131 139 145 146 139 127 152 145 142 140 143 112 182 185 133 133 130 145 154 158 152 161 152 173 134 145 133 139 148 152 173 158 176 151 181 155 176 149 157 163 167 143 160 145 200 182 140 155 154 148 140 173 173 152 142 143 127 136 164 139 133 145 146 142 149 140 142 124 151 182 166 133 170 152 164 181 121 170 185 164 133 133 149 146 149 119 188 154 150 146 143 151 173 152 160 157 167 148 145 140 155 182 139 166 163 152 170 169 149 136 155 167 154 179 148 155 124 170 134 155 151 181 146 130 173 194 140 131 149 172 182 149 161 155 151 167 157 151 143 142 169 163 136 157 164 133 131 173 133 151 133 143 160 139 157 164 130 131 173 133 151 133 143 152 149 157 142 139 164 136 142 158 145 155 130 166 136 148 133 161 134 145 151 173 146 142 152 166 158 151 173 148 161 172 143 130 148 155 163 142 176 164 173 166 160 142 133 124 152 137 170 142 133 118 152 145 124 151 130 137 157 157 164 155 149 136 137 131 161 142 143 148 115 161 148 167 151 130 139 154 142 149 143

All reaction times recorded

Standard deviation = 16.266789

Variance σ² = 264.60843

Count = 319

Mean = 150.81505

Sum of squares SS = 84410.088
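A minimal sketch for reproducing these summary statistics from the raw sample list (only the first ten values are shown; paste in the full 319-value list above to reproduce the reported figures, which use the population rather than the sample variance):

```python
# Reproducing the summary statistics from the raw reaction time samples.
# Only the first ten values are shown; substitute the full 319-value list.
import numpy as np

samples = np.array([155, 157, 154, 191, 157, 164, 151, 173, 158, 134])

mean = samples.mean()
sd = samples.std(ddof=0)         # population SD (matches the reported 16.27)
variance = samples.var(ddof=0)   # = SD^2
ss = variance * samples.size     # "Sum of Squares" as reported (variance x N)

print(samples.size, round(mean, 2), round(sd, 2), round(variance, 2), round(ss, 2))
```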

Anhydrous ammonia prices rise to nearly $730/ton in July


Anhydrous ammonia (NH3) prices continue to rise. The increase is roughly commensurate with the uptick in natural gas prices to $3.6/1000 ft3, a high not seen since 2018 (excluding the momentary spike in February caused by an aperiodic cold event in Texas). If oil reaches a sustained period of $100+, natural gas can be expected to follow its usual ratio with oil, sending anhydrous ammonia well above $800, and likely into the $900 range. Pochari Technologies’ process-intensified ammonia system will prove all the more competitive in this future peak-hydrocarbon environment. The beauty of this technology is that instead of depending on an inherently volatile commodity (natural gas), which is for the most part an exhaustible resource and hence subject to gradual price increases over time, Pochari Technologies relies only on polysilicon as a commodity, which will continue to fall in price with increased production, since silica is effectively inexhaustible (roughly 60% of the Earth’s crust by mass). Note that according to the USDA statistics, there are effectively no sellers offering prices below $700, so the standard deviation is very small; even savvy farmers are unlikely to snatch up bargains.

Reduced CAPEX alkaline electrolyzers using commercial-off-the-shelf component (COTS) design philosophy.


Dramatically reducing the cost of alkaline water electrolyzers using high surface area mesh electrodes, commercial off-the-shelf components, and non-Zirfon diaphragm separators.

Christophe Pochari, Christophe Pochari Energietechnik, Bodega Bay, California

The image below is a CAD model of a classic commercial alkaline electrolyzer design: large, heavy, industrial-scale units. The heavy use of steel for the endplates and tie rods increases the cost of the stack considerably. This architecture is bulky and expensive, with a high degree of exclusivity in its design and engineering.



An example of the excessively elaborate plumbing, alkali water feed, and recirculation systems found in conventional units. Christophe Pochari Energietechnik has simplified this circuitous and messy ancillary system and made it more compact using thermoplastics and design parsimony.


Christophe Pochari Energietechnik’s thermoplastic, lightweight, modular quasi stack-tank electrolyzer cell. Our design eliminates the need for heavy end plates and tie rods, since each electrode is an autonomous, stand-alone module. Each electrode module comprises a hydrogen section and an oxygen section that sandwich the diaphragm sheet placed between the two plastic capsules, and each capsule contains its respective electrode. The Pochari electrolyzer cell frame/electrode box is made by injection molding, an ultra-low-cost process at volume. This cell design is highly scalable, modular, convenient, and extremely easy to manufacture, assemble, and transport. The culmination of this engineering effort is a dramatic reduction in CAPEX, where material cost, rather than intricate manufacturing and laborious assembly, dominates the cost structure. One of the central innovations that makes our design stand out is active polarity reversal: a series of valves on the oxygen and hydrogen outlets allows the anode to be charged as a cathode and vice versa every sixty seconds. This effectively halts the build-up of an oxide layer on either the nickel or the iron catalyst. Because iron is very close to nickel in its catalytic activity for the hydrogen evolution reaction, only a very small performance penalty is incurred when switching from nickel to iron.

Trasatti volcano plot for the hydrogen evolution reaction in acid solutions.

20 centimeter diameter COTS electrolyzer stack.

A component breakdown for the core stack, excluding ancillary equipment. The estimate includes material costs only; labor can be factored in later and adjusted for local differences in wage rates.

Anode: plasma sprayed nickel mesh or sheet: 4.4 kg/m2 @8000 watts/m2 (4000 w/m2 total current density): $13.7/kW

Cathode: Carbon steel sheet 4 kg/m2: $2/kW

Plastic electrode module and partition frame: $5/kW

Hydrogen oxygen separator: 200-micron polyethersulfone sheet $11/m2: $2.75/kW

EPDM gaskets: $1.5/kW

Total with nickel electrodes: $25/kW

Total with carbon steel electrodes: $11/kW

For electrolyzers to cost significantly more than $100/kW, one would need exotic materials, extremely low-productivity manufacturing, an inordinate amount of material beyond what is strictly necessary (ancillary systems), or an extremely low current density. A typical lead-acid battery with 12 volts and 100 amp-hours retails for about $70 on Alibaba, or roughly $60 per kWh of capacity; an alkaline electrolyzer should be manufacturable at a comparable cost and no more. Since nickel trades at roughly $20-25 per kilogram under normal market conditions, and the rest of the electrolyzer is made of very cheap steel and plastic, we can for simplicity set the rest of the system aside. Using four and a half kilograms of nickel for 8 kilowatts of cell power, the cost per kilowatt of the single most expensive component of the stack is about $11/kW. We have designed our electrolyzer stacks to be 12 inches in diameter and four feet long, weighing approximately 200 lbs for 30 kilowatts; the stack is easily moved with a dolly and connected into a bank of as many electrolyzers as needed using flexible chlorinated polyvinyl chloride plumbing for the caustic water and the hydrogen/oxygen lines. For engineering simplicity, the stack operates at 1 bar and 90 Celsius and would use about 50 kWh/kg of hydrogen at 250 milliamps/cm2. Christophe Pochari Energietechnik is developing this tank-type electrolyzer design as a viable alternative to the classic filter-press architecture.
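The electrode cost figures quoted above follow from a one-line calculation; a sketch using only the prices and loadings stated in the text:

```python
# Nickel electrode cost arithmetic behind the $/kW figures quoted above.
def electrode_cost_per_kw(loading_kg_per_m2, price_per_kg, kw_per_m2=8.0):
    """Material cost in $/kW for an electrode run at the stated areal power."""
    return loading_kg_per_m2 * price_per_kg / kw_per_m2

print(round(electrode_cost_per_kw(4.4, 25.0), 2))  # ~13.75 $/kW (the ~$13.7/kW anode figure)
print(round(electrode_cost_per_kw(4.5, 20.0), 2))  # ~11.25 $/kW (the ~$11/kW figure)
```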

The nearly two-century-old technology of alkaline water decomposition is, with high-throughput manufacturing and Chinese production, ripe for dramatic cost reduction. Current alkaline electrolyzer technology is more expensive than bare material costs would predict, imputable mainly to a production regime of extensive customization, procurement of specialized subcomponents from niche suppliers, minuscule production volumes, a noncompetitive market with a small number of big players, and high-cost labor. A further contributor to the uncompetitive CAPEX of this low-tech, old technology is that the ancillary and plumbing components of the electrolyzer module use metallic piping and tankage, usually stainless steel or even nickel, rather than cheap thermoplastics. Another reason is the choice of very large stack sizes (in both diameter and length), which makes manufacturing and transportation far more challenging and costly. The current manufacturing process for very long electrolyzer stacks requires adjustable scaffolding or a variable-height underground pit with a hydraulic stand so that the filter-press stack can be built up while workers stand at floor level. Some electrolyzer stacks are as long as 20 feet and weigh several tons, requiring cranes or hoists to move them around the factory; the massive multi-ton stacks are then bolted down at the endplates, lifted out of their vertical assembly position, and transported by truck to a site that also requires a crane for installation. These handicaps impose surplus costs on a technology that is otherwise made of relatively low-cost raw materials and crudely fabricated, low-precision, low-tolerance components. Christophe Pochari Energietechnik’s researchers have therefore compiled a plethora of superior design options and solutions, using a strategy of consecutive elimination, to finally bring to market affordable hydrogen generators fabricated from readily available, high-quality components, raw materials, and equipment procured on Alibaba.com, shipped as small kits ready for use with our novel miniature ammonia plant technology. All of the parts are light enough to be lifted by a single person and assembled with common household tools. Our electrolyzers do not exceed 50 kW per unit: since our ammonia plants feed off wind and photovoltaics, which generate intermittent current, numerous small electrolyzers are grouped into a homogeneous bank, allowing units to be switched off and on in succession according to the prevailing electrical output, rather than modulating the power of individual stacks, which reduces their efficiency. Alkaline electrolyzers require a polarization protection current of around 40-100 amps/m2 when not operating to mitigate corrosion of the cathode, which would otherwise degrade; an alternative is simply draining the electrolyte from the stack, though this adds hassle. Most commercial alkaline electrolyzers in operation today can swing their power output by as much as 125% within a 1-second interval, making it possible to integrate them with wind turbines. During very low-load operation, however, hydrogen is prone to mix with oxygen by diffusing through the separator membrane when gas residence time is very high.
For this reason, it is best to operate the electrolyzers at their rated capacity, namely to use our strategy of stacking banks of relatively small units that can be readily switched off and on, rather than throttling a single large stack; a minimal sketch of this dispatch strategy follows.
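```python
# Bank dispatch sketch: whole stacks are switched on or off to track the
# available PV/wind output instead of throttling one large stack.
# The 30 kW unit size is from the text; the example outputs are invented.
UNIT_KW = 30.0

def units_to_run(available_kw: float) -> int:
    """How many stacks can be held at full rated load."""
    return int(available_kw // UNIT_KW)

for available in (12.0, 95.0, 310.0):   # instantaneous farm output, kW
    n = units_to_run(available)
    print(f"{available:>6.1f} kW -> {n} units on, {available - n * UNIT_KW:.1f} kW spare")
```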

Compared with a state-of-the-art lithium-ion battery or a chlor-alkali diaphragm cell, an alkaline cell is an extremely simple and elegant system, consisting of only four major components, each requiring minimal custom fabrication. Alkaline cells, or any electrolyzer for that matter, come in two basic architectures. The most common is the so-called bipolar electrolyzer, in which current flows from positive to negative through the ends of the stack: the electrolyte serves as the conductor, positive current flows from one end plate to the negative at the opposing endplate, and each intermediate electrode acts as positive on one face and negative on the other. Industrial-scale alkaline electrolyzers are over 150 years old, with most old-fashioned designs constructed entirely of iron or steel, and corrosion mitigated not through high-end materials but by frequent electrode replacement or polarity reversal (to cancel corrosion altogether). In 1789, Adriaan Paets van Troostwijk decomposed water using a gold electrode. The first large-scale use of alkaline electrolysis was in Rjukan, Norway, where large banks of electrolyzers were fed by cheap hydropower. The Rjukan electrolyzers employed the “Pechkranz electrode”, invented by Rodolphe Pechkranz and patented in Switzerland in 1927, constructed from thick sheets of iron with the anode electroplated with nickel. The current densities of the Rjukan electrolyzers are said to have approached 5,500 watts/m2. Most alkaline electrolyzers built before the 1980s used chrysotile asbestos diaphragms.

Prior to the development of the bipolar electrolyzer, most late 19th century designs used liquid-containing cylinders with submerged electrodes, what is called a “tank type” or “trough” electrolyzer: the electrolyte was contained in a cylindrical vessel and metal electrodes were suspended from the top. The first modern bipolar electrolyzer was devised by a Russian, Dimitri Latschinoff (also spelled Latchinoff), in St. Petersburg in 1888; his cell ran at a current density of 0.35 to 1.4 amp/m2 and used 10% caustic soda. After Latchinoff, a design very close to the modern filter-press type was developed by O. Schmidt in 1889. Because the Schmidt electrolyzer used potassium carbonate rather than caustic potash, the electrodes corroded at only 1 millimeter per year. In 1902, Maschinenfabrik Oerlikon commercialized the Schmidt bipolar electrolyzer, which forms the basis of all modern water electrolyzers. The Schmidt design, pictured below, used a cell voltage of 2.5 and produced hydrogen of 99% purity. It generated 2,750 liters of hydrogen per hour using 16.5 kilowatts, or 67.34 kWh/kg, an efficiency of about 58.5% relative to the higher heating value. Most early filter-press electrolyzers used rubber-bonded asbestos diaphragms.
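Re-deriving the quoted Schmidt figures is straightforward; the hydrogen density at roughly 0 °C and 1 atm is our assumption, everything else is from the text:

```python
# Re-deriving the quoted Schmidt electrolyzer figures.
h2_rate_m3_per_h = 2.75    # 2750 liters of hydrogen per hour
h2_density = 0.0899        # kg/m^3, assumed at ~0 C and 1 atm
power_kw = 16.5

kwh_per_kg = power_kw / (h2_rate_m3_per_h * h2_density)
print(round(kwh_per_kg, 1))          # ~66.7 kWh/kg, close to the quoted 67.34

HHV_H2 = 39.4                        # kWh/kg, higher heating value of hydrogen
print(f"{HHV_H2 / kwh_per_kg:.1%}")  # ~59%, consistent with the quoted 58.5%
```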


From: The Electrolysis of Water: Processes and Applications, by Viktor Engelhardt, 1904.


The filter-press electrolyzer pictured below was manufactured by National Electrolizer in 1916; the picture above shows a filter-press electrolyzer made by the International Oxygen Company, and the middle picture shows the asbestos diaphragm peeled back in front of the steel electrode. The source claims the nickel-plated steel electrodes were “virtually indestructible”. The pictures are taken from the trade journal Boiler Maker, Volume 16, 1916. These electrolyzers were used exclusively to produce hydrogen for welding and cutting metal; it was not until Rjukan (Norsk Hydro) that the technology first saw use for energetic applications. The Norwegian electrolyzers were also used to produce heavy water (deuterium oxide) by electrolytic enrichment.


The cost of electrolyzer designs from 1904: notice that the Schmidt filter-press type cost $182/kW for a 10 kW unit, equal to just under $6,000/kW in 2022 dollars. Prices have declined dramatically since 1904 thanks to more productive labor and manufacturing, greater global production of nickel and steel, and more efficient fabrication and machining.

The monopolar electrolyzer energizes each electrode individually via a “rack” or bus bar; this design is rarely used. Bipolar systems are also called “filter-press” electrolyzers, while monopolar systems are called “tank type” electrolyzers.

While neither design differs from the other by a significant margin in performance, the bipolar architecture is considered the more proven and forms the basis of all modern electrolysis technology. The only real disadvantage of the monopolar design is the need for very high current bus bars: since a bipolar stack runs at a voltage equal to the cell voltage times the number of cells in series, the current required is greatly reduced, placing less demand on the electrical power supply. For example, at a cell voltage of 2, a hundred electrode pairs allow the stack to run at a voltage roughly a hundred times higher, whereas a monopolar system must deliver two volts to every electrode at whatever current is required to supply the power, increasing electrical losses and heat generation. The bipolar design is the architecture used in this analysis.
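To illustrate the series-voltage argument: the cell voltage of 2 V and the hundred cells are from the text, while the 50 kW stack power is an assumption for illustration.

```python
# Series (bipolar) vs parallel (monopolar) current demand for the same power.
# Cell voltage and cell count are from the text; 50 kW stack power is assumed.
CELL_V = 2.0
N_CELLS = 100
STACK_KW = 50.0

bipolar_v = CELL_V * N_CELLS                 # 200 V across the series stack
bipolar_a = STACK_KW * 1000 / bipolar_v      # ~250 A from the rectifier
monopolar_a = STACK_KW * 1000 / CELL_V       # ~25,000 A if every cell sits at ~2 V

print(bipolar_v, round(bipolar_a), round(monopolar_a))
```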


In order to separate the hydrogen from oxygen, a separation or partition plate is used on each electrode, channeling the separated gases into their respective vent holes. The Pochari design differs insofar as the circumferential frame is replaced with plastic, and the design is square rather than round, to make more efficient use of space.


Tank type electrolyzer module

The electrolyzer system is comprised of the stack and the ancillary equipment, which consists of caustic solution storage tanks, pumps, and the hydrogen and oxygen plumbing system. In current alkaline systems marketed by the established players, elaborate plumbing systems are constructed from nickel alloys. To save cost, rather than constructing these components out of nickel alloys or stainless steel, they can be made from high-temperature plastics, which show excellent resistance to caustic solutions. Christophe Pochari Energietechnik is studying how thermoplastics that can withstand moderate temperatures can be used instead to dramatically lower CAPEX. Semi-crystalline plastics: PEK, PEEK, PPS (polyphenylene sulfide), PA (polyamide) 11/12. Amorphous plastics: PAI (polyamide-imide), PPSU (polyphenylsulfone), PSU (polysulfone), PES (polyethersulfone). Most of these thermoplastics have a density below 2 grams/cm3 and can handle temperatures over 100 Celsius. Polyphenylsulfone (130 MPa compressive strength), able to operate as high as 150 Celsius, has a density of only 1.3 grams/cm3; at a retail price of $20/kg, it is nearly 7 times cheaper than nickel on a volumetric basis, with equal tolerance of alkalinity.

The four components are the following:

#1 Electrodes.

The electrode can consist of any metallic conductive surface: a woven wire mesh, a metallic foam, or a smooth sheet. To achieve the highest performance, a surface morphology featuring a denticulate pattern, formed by plasma spraying Raney nickel onto the metallic substrate, reduces the “overpotential”, the excess voltage above the thermodynamic minimum. In the absence of such a surface finish, a bare metallic surface achieves only a minuscule current density.

#2 Gaskets and separators

The gaskets form the seal between the electrode modules, preventing gas and liquid from escaping through the edges. The force of the endplates provides the pressure needed to achieve a strong seal. A gasket made of cheap synthetic rubber (EPDM, etc.) is commonly used; EPDM rubber is extremely cheap, around $2/kg. The diaphragm separator prevents the mixing of hydrogen and oxygen, avoiding potentially catastrophic explosions if the mixture falls within the flammability range of hydrogen in oxygen (roughly 4 to 95%). The diaphragm separator is often the single most expensive component after the electrode. The material for the diaphragm must be resistant to alkaline solutions, able to withstand up to 100°C, and selective enough to separate oxygen and hydrogen while permitting sufficient ionic conductivity. A number of materials are used, including potassium titanate (K2TiO3) fiber composites, polytetrafluoroethylene (PTFE, as felt or woven cloth), polyphenylene sulfide coated with zirconium oxide (commercialized as Zirfon), perfluorosulphonic acid, arylene ether, and polysulfone-asbestos composite coatings. Commercial electrolyzers make use of an expensive proprietary separator, Zirfon Pearl, sold by the Belgian company Agfa-Gevaert N.V.; it sells at a huge premium over bare polyethersulfone, which is itself a relatively inexpensive plastic at around $20/kg in bulk. Many polymers are suitable for constructing separators, such as Teflon® and polypropylene. A commercially available polyethersulfone ultrafiltration membrane, marketed by Pall Corporation as Supor200, with a pore size of 0.2 um and a thickness of 140 microns, was employed as the separator between the electrodes in an experimental alkaline electrolyzer. Nylon monofilament mesh finer than 600 mesh/inch, or a pore size of 5 microns, can also be used. Polyethersulfone is ideal due to its small pore size, retaining high H2/O2 selectivity at elevated pressures, and it can handle temperatures up to 130 C. If polyethersulfone is not satisfactory (excessive degradation above 50 C), Zirfon clones are available on B2B marketplaces such as https://b2b.baidu.com for $30/m2 from Shenzhen Maibri Technology Co., Ltd.

#4 Structural endplates:

The fourth component is the “end plates”: heavy-duty metallic or composite flat sheets which house a series of tie rods that tightly press the stack together to maintain sufficient pressure within the stack sandwich. For higher pressure systems, up to 30 bar, the endplates encounter significant force. In our incessant effort at CAPEX reduction, we have concluded that the endplates can be cast rather than machined, reducing their manufacturing cost by 70% relative to CNC machining, since investment casting is far more productive. While we do not plan on focusing on a filter-press design, we are still considering developing one as an alternative. Christophe Pochari Energietechnik is also looking into using fiberglass to construct the end plates; at a cost of only $1.5/kg and with tremendous compressive strength, fiberglass is a suitable material, especially for lower pressure stacks operating with no overpressure, which place little to no load on the end plates.


Unlike PEM technology, noble-metal intensity in alkaline technology is relatively small; if nickel is considered a “noble” metal, then alkaline technology is intermediate to PEM, but it is difficult to place platinum (a roughly 50,000-ton reserve) and nickel (a 100-million-ton reserve) in the same category. Nickel is not an abundant element, but it is not rare either; it is approximately the 23rd most abundant element, occurring at 0.0084% of the crust by mass. If electro-mobility gains any degree of traction (which has yet to be proven), deep-sea mining of poly-metallic nodules could be undertaken, doubling current terrestrial nickel reserves. It is unfortunate that the nascent modular electrolyzer and miniature ammonia industry, which has yet to amount to anything more than a concept, is forced to compete with wasteful lithium-battery manufacturing for nickel. We could power cars with cheap steel propane tanks filled with anhydrous ammonia, rather than squandering trillions on elaborate “battery packs” that use up precious nickel in their cathodes. Since we are incorrigibly resourceful, we will turn to carbon steel electrodes if market conditions force us to. Nickel prices have been surprisingly stable over time, despite large increases in demand from the stainless steel sector. The market price of nickel has risen only about 1.4% a year in real terms since 1991: one ton of nickel was $7,100 in 1991, equivalent to $14,700 in 2022 dollars, and in January 2022 the spot price reached $22,000/ton. At the time of this writing (June 2021), Russia had not yet invaded Ukraine, so while a spike in nickel prices could be anticipated, it could not be timed; otherwise everyone would become a billionaire by speculating on the commodity market, and as far as we know, most people have not had much success at that game. In spite of this unfortunate development in the nickel market, the electrode cost remains relatively low even at $50,000/ton; it is unlikely the Ukraine invasion will push nickel that high, but it is possible. It will be important to extensively research carbon steel electrodes if nickel reaches an excessively high price, or to increase current density at the expense of efficiency, which we may be able to do thanks to hydrostatic wind turbine technology.

For an alkaline electrolyzer using a high surface area electrode, a nickel mesh loading of under 500 grams/m2 of active electrode area is needed to achieve an anode life of 5 or more years, assuming a corrosion rate below 0.25 MPY (mils per year). With current densities of 500 milliamps/cm2 at 1.7-2 volts achievable at 25-30% KOH concentration, power densities of nearly 10 kW/m2 are realizable. This means a one-megawatt electrolyzer at an efficiency of 75% (45 kWh/kg-H2 LHV) would use about 118 square meters of active electrode area. Assuming the surface/mass ratio of a standard 80×80 mesh, 400 grams of nickel is used per square meter of exposed mesh-wire area. Thus, a total of about 2.25 kg of nickel is needed per kilogram of hydrogen per hour of capacity; for a 1 megawatt cell, the nickel would cost only about $1,000 at $20/kg. This number is simply doubled if the time between overhauls is to increase to 10 years, or if the power density of the cell is halved. Christophe Pochari Energietechnik plans to use carbon-steel or plain iron electrodes in place of nickel in the future to further reduce CAPEX below $30/kW; our long-term goal is $15/kW, compared to roughly $500/kW for today’s legacy systems from Western manufacturers. Carbon steel exhibits a corrosion rate of 0.66 MPY; while this is significantly above nickel, the cost of iron is $200 per ton (carbon steel is $700/ton) while nickel is $18,000 per ton, so despite a corrosion rate at least 3x higher, the material cost is roughly 25x lower, yielding a net advantage of about 8.5x for carbon steel. The disadvantage of carbon steel, despite the lower CAPEX, is a decreased MTBO (mean time before overhaul). Christophe Pochari Energietechnik has designed the cell to be easy to disassemble to replace corroded electrodes, and we are also actively studying low-corrosion ionic liquids to replace potassium hydroxide. We are testing a 65Mn (0.65% C) carbon steel electrode in 20% KOH at up to 50 C and observing low corrosion rates, confirming previous studies. Christophe Pochari Energietechnik is testing these carbon steel electrodes for 8,000 hours to ascertain an exact mass-loss estimate.
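The nickel inventory quoted above can be checked with a few lines of arithmetic; the figures below are simply the ones already stated in the text (500 mA/cm2 at ~1.7 volts, 400 g of mesh per m2, 45 kWh/kg-H2, $20/kg nickel).

# Rough check of the nickel inventory for a 1 MW stack, using the figures in the text.

current_density = 0.5          # A/cm2
cell_voltage = 1.7             # V
power_density = current_density * cell_voltage * 10   # kW/m2 (1 A/cm2 x 1 V = 10 kW/m2)

stack_power_kw = 1000.0
electrode_area = stack_power_kw / power_density        # m2
mesh_loading = 0.400                                   # kg Ni per m2 of mesh
nickel_mass = electrode_area * mesh_loading            # kg

h2_rate = stack_power_kw / 45.0                        # kg H2 per hour at 45 kWh/kg
print(f"Electrode area:   {electrode_area:.0f} m2")                   # ~118 m2
print(f"Nickel inventory: {nickel_mass:.0f} kg "
      f"({nickel_mass / h2_rate:.1f} kg Ni per kg-H2/h)")             # ~47 kg, ~2.1 kg per kg-H2/h
print(f"Nickel cost:      ${nickel_mass * 20:.0f}")                   # ~$940 at $20/kg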

What kind of current density can be achieved by smooth plates?

Current densities of 200 mA/cm2 at 1.7 volts (3.4 kW/m2) are attainable, corresponding to an efficiency of roughly 91%, even with non-activated nickel electrodes.

If a corrosion rate of 0.10 MPY is chosen, which is very conservative, then for a material loss rate of 5% per year, 400 grams per square meter is required, yielding a cost per kW of $4.7. If one wishes to be extremely conservative, imagine an electrode around 1 millimeter thick. Since only the anode requires nickel (the cathode can be made of steel, since it sits under reducing conditions), roughly 3.9 kg of nickel is used per square meter; with a power density of 3,600 watts per m2 of cell (200 milliamps/cm2 at 1.8 volts), the price works out to roughly $21 per kW. This illustrates that even if the designer uses an extremely thick electrode, far thicker than necessary, the cost of the single most materially sensitive component is only about 2 percent of the cost of present commercially available electrolyzers, suggesting chronic manufacturing and production inefficiency among current producers.
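The same arithmetic for the deliberately over-thick electrode is sketched below; the nickel density of 8.9 g/cm3 and the assumption that roughly half of the mesh area is open (which reproduces the ~3.9 kg/m2 figure used above) are ours, not the source's.

# Sketch of the "deliberately over-thick electrode" case.
ni_density = 8.9          # g/cm3
thickness_mm = 1.0
open_fraction = 0.56      # assumed mesh open-area fraction (not from the source)
ni_price = 20.0           # $/kg

mass_per_m2 = ni_density * thickness_mm * (1 - open_fraction)   # kg/m2 (g/cm3 x mm = kg/m2)
power_density_kw = 0.2 * 1.8 * 10        # 200 mA/cm2 at 1.8 V = 3.6 kW per m2 of cell
cost_per_kw = mass_per_m2 * ni_price / power_density_kw          # only the anode is nickel
print(f"{mass_per_m2:.1f} kg Ni/m2 -> ${cost_per_kw:.0f}/kW")
# -> ~3.9 kg Ni/m2 and roughly $22/kW, in line with the ~$21 quoted above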

Corrosion is by far the single biggest enemy of the electrolyzer; it is an under-discussed issue but accounts for the preponderance of performance degradation. All metals, even noble ones, tend to oxidize over time. The anode, the positive side, which evolves oxygen and is constantly being oxidized, turns black within hours of use. The hydrogen-evolving cathode is subject to reduction and remains shiny no matter how long it is exposed to the alkaline environment. The oxygen electrode experiences immense oxidative pressure and rapidly accumulates a black oxide layer; in the case of nickel, this layer is composed of nickel hydroxide. No material is lost, and it is theoretically possible to recover all of the metallic nickel from the oxide layer that is eventually shed into the alkaline medium. On the oxygen electrode, the black oxide layer quickly reaches a peak thickness and begins to passivate the surface, slowing further oxidation, but at the expense of electrochemical performance.


For a lower corrosion rate of 1 um/yr, a total mass loss of about 7% per year occurs at a surface loading of 140 grams/m2 of exposed area; the nickel requirement is then only 17.5 kg, or about $350, for one megawatt. Although this number is achievable, higher corrosion rates will likely be encountered, so to ensure sufficient electrode reserve, a nickel loading of around 400-500 grams/m2 is chosen. Pure nickel experiences an excessively high corrosion rate when it is “active”; it becomes “passive” when a sufficient concentration of iron (as NiFe2O4) or silicate is present in the oxide layer. Incoloy alloy 800, with roughly 30% Ni, 20% Cr and 50% Fe, experiences a corrosion rate of 1 um/yr at 120 C in 38% KOH, whereas pure nickel exceeds 200 um/yr. “The “active” corrosion of nickel corresponds to the intrinsic behavior of this metal in oxygenated caustic solutions; the oxide layer is predominantly constituted of NiO at 180°C and of Ni(OH)2 at 120°C. The nickel corrosion is inhibited when the oxide layer contains a sufficient amount of iron or silicon”. The results of this study indicate the ideal alloy contains around 34% Ni, 21% Cr, and 45% Fe. The costs of the three elements are $18/kg, $9/kg and $0.2/kg respectively, giving a weighted average of about $8.1/kg. For a passive corrosion rate of 1 um/yr, a 10% annual material loss corresponds to an electrode mesh loading of 90-100 grams/m2, or $0.11/kW: eleven cents per kW. This does not include mesh weaving costs, but a 600-mesh weaving machine costs about $13,000, so meshing costs amount to no more than a few cents per square meter.
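The relationship between corrosion rate, allowable annual mass loss, and mesh loading used above can be expressed compactly; the sketch below, using the 118 m2-per-megawatt sizing from earlier, lands in the same ~15-18 kg and ~$300-350 per megawatt range.

# An electrode loses metal at some rate (um/yr); sizing the mesh so that the
# loss is only a few percent of its mass per year fixes the required loading.

NI_DENSITY = 8.9e6   # g/m3 (8.9 g/cm3)

def loading_for_loss(corrosion_um_per_yr, allowed_loss_per_yr):
    """Mesh loading (g/m2) at which annual corrosion equals the allowed mass fraction."""
    mass_lost = corrosion_um_per_yr * 1e-6 * NI_DENSITY   # g/m2 lost per year
    return mass_lost / allowed_loss_per_yr

loading = loading_for_loss(1.0, 0.07)   # 1 um/yr with ~7% allowable annual loss
area = 118                              # m2 per MW, from the earlier sizing
print(f"{loading:.0f} g/m2 -> {loading * area / 1000:.1f} kg Ni per MW "
      f"(~${loading * area / 1000 * 20:.0f} at $20/kg)")
# -> ~130 g/m2, i.e. roughly 15-18 kg and $300-350 of nickel per megawatt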

For the diaphragm separators, using a 200 um thick sheet of polyethersulfone (PES), around 20 grams is used per kilowatt; at a typical PES cost of $25/kg and a density of 1.37 g/cm3, the cost would be around $0.50/kilowatt assuming an electrode power density of 6.8 kW/m2 (400 milliamps/cm2 at 1.7 volts). Since Christophe Pochari Energietechnik adheres to a COTS (commercial off-the-shelf) methodology, the expensive and specialized Zirfon membrane is dispensed with in favor of a more ubiquitous material; this saves considerable cost and eases manufacturability, since the need to purchase a specialized, hard-to-access material is eliminated. Gasket costs are virtually negligible, with only 4.8 grams of rubber needed per kilowatt; EPDM rubber prices are typically in the range of $2-4/kg. For 30% NaOH at 117 C, a corrosion rate of 0.0063 millimeters per year (0.248 MPY) is observed for an optimal nickel concentration of 80%. This means 55 grams of Ni is lost per square meter per year; if we choose 10% per year as an acceptable weight loss, we return to 550 grams per square meter as the most realistic target nickel loading, with much lower loadings achievable at reduced corrosion rates. A lower concentration of KOH/NaOH and a lower operating temperature can be utilized as a trade-off between corrosion and power density. The total selling price of these units, including labor and installation, is around $30/kW. In 2006, GE estimated alkaline electrolyzers could be produced for $100/kW; clearly, much lower prices are possible today. At a specific consumption of 47.5 kWh/kg-H2, this corresponds to roughly $1,430 per kg-H2/hour of capacity. After the cell stack costs, which we have shown can be made very small with the COTS design philosophy, the second major cost contributor is the power supply. For a DC 12 volt power supply, $50 is a typical price for a 1,000 watt module. To summarize, alkaline electrolyzer material costs are effectively minuscule, and the cost structure is dominated by conventional fabrication, assembly, and electrode deposition, as well as the power supplies and the unique requirements of low-voltage, high-amperage direct current. High-efficiency DC power supplies cost as little as $30/kW and last over 100,000 hours. Once components can be mass-produced and assembled with minimal manual labor, costs can be brought down close to the basic material contribution. The only uncertainty for the future of alkaline electrolysis is the price of nickel; disruptions in the supply of nickel could make the technology less competitive as long as carbon steel electrodes remain unproven. When this text was written, the author had purchased $2,000 worth of nickel sheets on Alibaba at a spot price of $18/kg.
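As a back-of-envelope check on the separator figure, the sketch below uses the thickness, density, price, and power density quoted above; the ~50% membrane porosity is our assumption (typical of ultrafiltration media), not a figure from the source.

# Order-of-magnitude check on the separator cost per kW.
thickness_m = 200e-6        # 200 um PES sheet
density = 1370              # kg/m3 of solid PES
porosity = 0.5              # assumed void fraction of the membrane (not from the source)
price = 25.0                # $/kg
power_density = 6.8         # kW/m2 (400 mA/cm2 at 1.7 V)

mass_per_m2 = thickness_m * density * (1 - porosity)        # kg of polymer per m2
print(f"{mass_per_m2 * 1000 / power_density:.0f} g/kW, "
      f"${mass_per_m2 * price / power_density:.2f}/kW")      # ~20 g/kW, ~$0.50/kW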

It should be noted that the activity of the nickel electrode depends heavily on its morphology. A smooth sheet has very little activity and is thus not suitable for industrial scale, although for small electrolyzers a smooth catalyst can be sufficient if power density is not an exigency. Catalyst activity does not depend on the total surface area exposed to the reactant; rather, it depends almost exclusively on the presence of so-called “active sites” or “adsorption sites” comprised of kinks, ledges, steps, adatoms, and holes. These sites, characterized by local geometric perturbation, account for effectively all the activity of a catalyst; it can be said that the vast majority of the catalyst area is not active. By achieving a high fraction of active sites, the current density at constant voltage can be increased 10-fold. Raney nickel itself dates to Murray Raney in the 1920s; its use in gas-evolving electrodes was pioneered by Eduard W. Justi and August Winsel. A properly leached Raney nickel catalyst can attain an immense specific surface area of 100 m2/g.

Raney nickel, an alloy of aluminum and nickel, is sprayed onto bare nickel sheets, meshes, or nickel foam, forming an extremely high specific surface area of micron-size jagged clumps; this is a form of thermal (plasma) spray deposition. The high velocity and temperature of the metal particles cause them to mechanically adhere to the nickel surface. During application of the Raney nickel with the plasma spraying machine, the standoff distance, temperature, and deposition rate must be fine-tuned to avoid excessively thick deposition or clumping. Examination with an electron microscope can be performed by sending a sample of the piece to an electron microscope rental service. After the coating has cooled and solidified, the aluminum is leached out of the surface with a caustic solution, leaving the pure nickel electrode ready to use. This leaching process, in which the aluminum is pulled out of the nickel matrix, is what leaves the spongy surface and accounts for the stellar electrochemical activity of Raney nickel electrodes. Raney nickel sells for around 300 RMB per kg, or about $50/kg, on https://b2b.baidu.com. By mass, only a tiny fraction of the electrode is comprised of Raney nickel: a thin heterogeneous layer, usually far less than 100 microns. The primary cause of electrode degradation is the loss of the high-surface-area active sites through the buildup of nickel oxide on the outer surface. Corrosion is almost impossible to prevent, but since no material is lost, the electrodes can simply be regenerated after their useful life. A simple yet elegant option to slow down or even arrest electrode degradation altogether is to periodically reverse the polarity. In doing so, the oxidized anode has its nickel oxide stripped off by turning it into a cathode, and the oxide is transferred to the former cathode; this keeps each electrode in a relatively fresh state, since any accumulated nickel oxide is removed every 24 hours. The power supply can simply feature a polarity reversing switch, a mechanical bus bar which swaps the input current from positive to negative, requiring no modification to a standard switching power supply. The only tedious aspect of this scheme is the need to switch the hydrogen and oxygen hoses, but this too can be done with automatic valves which simply re-route hydrogen into the former oxygen hose and vice versa. Oxy-hydrogen cutting torch operators employ this method to increase the life of their stacks; with this simple corrosion-prevention scheme, plain steel anodes can be reliably used. The YouTuber NOBOX7 reverses the polarity on his homemade HHO cutting torch generator.
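As an illustration of how little control logic the reversal scheme needs, here is a hypothetical sketch; switch_polarity() and swap_gas_valves() are placeholders for whatever contactor and solenoid-valve hardware a real build would use, and the 24-hour interval follows the description above.

import time

REVERSAL_INTERVAL_S = 24 * 3600   # the text suggests reversing roughly once per day

def switch_polarity(anode_terminal):
    # Placeholder: in a real build this would drive a DC contactor / bus-bar switch.
    print(f"Terminal {anode_terminal} is now the anode (+)")

def swap_gas_valves():
    # Placeholder: solenoid valves re-route the H2 and O2 outlets after each reversal.
    print("H2 and O2 outlet valves swapped")

def run(cycles, interval_s=REVERSAL_INTERVAL_S):
    for cycle in range(cycles):
        switch_polarity("A" if cycle % 2 == 0 else "B")
        swap_gas_valves()
        time.sleep(interval_s)

run(cycles=2, interval_s=0)   # demo run with no delay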

“The reduction in corrosion due to periodically reversed currents appears to be due to the fact that the corrosive process is in a large degree reversible; so that the metal corroded during the half-cycle when current is being discharged is in large measure redeposited during the succeeding half cycle when the current flows toward the metal. This redeposited metal may not be of much value mechanically, but it serves as an anode surface during the next succeeding half cycle, and thus protects the uncorroded metal beneath. Effect of frequency on rate of corrosion: The corrosion of both iron and lead electrodes decreases with increasing frequency of reversal of the current. The corrosion is practically negligible for both metals when the period of the cycle is not greater than about five minutes. With iron electrodes a limiting frequency is reached between 15 and 60 cycles per second, beyond which no appreciable corrosion occurs. No such limit was reached in the lead tests, although it may exist at a higher frequency than 60 cycles. The corrosion of lead reaches practically the maximum value with a frequency of reversal lying between one day and one week. The corrosion of iron does not reach a maximum value until the period of the cycle is considerably in excess of two weeks”.

Digest of Publications of the Bureau of Standards on Electrolysis of Underground Structures Caused by the Disintegrating Action of Stray Electric Currents from Electric Railways, United States National Bureau of Standards, Samuel S. Wyer, 1918

“According to experiments by Larsen, daily reversals of polarity reduce the electrolytic action to one fourth, and hourly reversals to one thirtieth of its normal value . The changing of the direction of the current causes a partial restoration of the metal which has been removed, this effect increasing with the frequency of the reversals. Also, according to Larsen, the nature of the electrolytic action is less harmful when the polarity is periodically reversed than when it remains always the same. When the current flows continuously in the same direction, the pipes become deeply pitted, but when the polarity is periodically reversed the corrosion is more widely and uniformly distributed. Therefore, in all cases where the conditions permit, it is advisable to reverse the polarity of the system at certain intervals. The hourly reversal of polarity reduces corrosion to a very great extent, but when alternating current, even of low frequency, is used the corrosion is completely done away with”.

Stray Currents from Electric Railways, Carl Michalke, 1906

The most challenging aspect of manufacturing a high-performance alkaline electrolyzer is catalyst preparation. Manufacturing an electrolyzer is not semiconductor photolithography: it is a delicate process, but by no means a proprietary or high-tech one. The equipment required for electrode manufacturing is not specialized but dual-use, with commercial systems readily available, obviating the need for expensive and niche suppliers. The major electrolyzer manufacturers do not possess any special expertise that we cannot acquire ourselves. Plasma spraying is the most common method to achieve a highly denticulate surface. A plasma spraying torch can be procured for around $2,000 and used to gradually coat the smooth nickel sheets with a highly porous and ragged layer of Raney nickel. The HX-300 thermal spraying machine sold by Zhengzhou Honest Machinery Co Ltd runs at 300 amps DC, has a duty factor of 60%, and costs only $1,850. It can spray a multitude of metal powders at 0.6 megapascals of pressure.


A typical thermal spraying machine, used to apply heat-resistant coatings to automobile components and many other applications. These machines require a flow of coolant and compressed air to operate. Their average price is between $2,000 and $10,000.


Bare sheets of smooth nickel are placed on the floor, and either a manual operator or a gantry frame passes the plasma head across the metal surface, in the same manner that a painter applies paint over a surface. After the coating has cooled and solidified, the aluminum is leached (extracted) from the surface using a caustic solution. Plasma spraying can be done either in a vacuum or in an atmospheric environment. In the paper “Electrochemical characterization of Raney nickel electrodes prepared by atmospheric plasma spraying for alkaline water electrolysis”, the authors Ji-Eun Kim et al. achieved satisfactory results using a standard atmospheric plasma thermal spraying machine with Raney nickel particles of 12 to 45 microns. Christophe Pochari Energietechnik is developing a low-cost plasma spraying machine using ubiquitous microwave components to perform catalyst preparation, but such an option is only of interest to hobbyists and the HHO energy community, since any commercial-grade factory would be able to purchase a standard thermal spraying machine. Once catalyst surface preparation is complete, the electrolyzer is ready to assemble. Commercial plasma deposition, where Raney nickel microparticles are blasted onto a smooth nickel mesh at high temperature and velocity, has an inherent drawback: it produces a brittle coating, and the adhesion between the leached Raney nickel particles and the underlying smooth substrate is poor and prone to cracking and peeling.

The polyethersulfone diaphragm separator and rubber gaskets can be cut precisely into circular pieces with a laser cutter, along with the nickel sheets, using virtually no labor beyond loading the sheets onto the laser cutter bed. Once all the parts have been cut, prepared, and readied for installation, the low-skill work of stacking the components and bolting on the endplates, plumbing fittings, etc., can be performed in low labor cost countries such as Mexico. The electrolyzer can also be packaged as an easy-to-assemble kit, so that owners can perform assembly themselves, further saving cost.


Achievable current densities for a number of alkaline electrolyzers.

Cell polarization data at 120 C, 150 C, and 180 C in 38% wt KOH at 4 MPa oxygen pressure.


Typical alkaline electrolyzer degradation rate. The degradation rate varies from as little as 0.25% per year to nearly 3%. This number is almost directly a function of the electrocatalyst deactivation due to corrosion.

Diaphragm membrane rated for up to 100 C in 70% KOH for $124/m2: $8.8/kW


*Note: Sandvik Materials has published data on corrosion rates of various alloys in aerated sodium hydroxide solutions (the exact conditions found in water electrolyzers), and found that carbon steel holds up acceptably in up to 30% sodium hydroxide provided temperatures are kept below 80 Celsius.

Cheap ammonia crackers for automotive, heavy duty mobility, and energy storage using nickel catalysts

Industrial-scale catalysts have been process-intensified by reducing particle size, increasing Ni loading, and increasing specific surface area. “Employing the catalyst in powder form instead of in granulated or pellet form significantly reduces the temperature at which an efficient decomposition of ammonia into hydrogen and nitrogen can be effected”. The main reason industrial-scale annealing (forming gas) crackers operate at higher decomposition temperatures is their large catalyst pellet size, usually 20 mm. While typical industrial ammonia cracking catalysts from China (Liaoning Haitai Technology) have Ni loadings of 14%, with GHSVs of 1,000-3,000 and conversion of 99+% at 800-1000 C, literature pulled up from mining Google patents, citing physical testing, indicates that variants of standard nickel catalysts with higher Ni loading and similar densities (1.1-1.2 kg/liter) can achieve GHSVs of 5,000 at lower temperatures (<650 C) while retaining high conversion (99.95%). Such a system equates to a techno-economic power density of 3.85 kg of catalyst per kg-H2/hr, or a net of 0.96 kg of nickel per kg-H2/hr; at a nickel price of $20/kg this is about $20 per kg-H2/hr of capacity, leaving little incentive to use noble or exotic alloys. The rest of the cost is found in the metal components, of which around 7 kg of stainless steel is needed for a 1 kg/hr reformer, costing about $140. The aluminum oxide support is virtually insignificant, costing only $1/kg. Pochari Technologies’ goal is to make ammonia crackers cheaper than standard automotive catalytic converters; this appears a tenable goal, as catalytic converters require palladium and platinum, albeit in smaller quantities. The reformer is approximately the same size as a large muffler and will be fitted near the exhaust manifold of the engine to minimize conductive heat losses through the exhaust. Beyond economics, the power density is already more than satisfactory, with the catalyst occupying less than 3.2 liters for a reformer capacity of 1 kg-H2/hr; most of the volume is occupied by insulation, the combustion zone (the inner third of the cylinder), and miscellaneous piping, flow regulators, etc.
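A rough sizing of the catalyst bed for 1 kg-H2/hr is sketched below. Two assumptions are ours rather than the source's: the GHSV is referenced to the cracked product gas at STP, and the high-loading catalyst is taken to carry roughly 25% nickel by mass; with these, the result lands close to the 3.2 liter, 3.85 kg, and ~$20 figures quoted above.

# Rough sizing of the cracker catalyst bed for 1 kg-H2/h.
MOLAR_VOL = 22.414          # L/mol at STP
ghsv = 5000                 # 1/h, referenced to the product gas (assumption)
bulk_density = 1.2          # kg of catalyst per liter
ni_loading = 0.25           # kg Ni per kg catalyst (assumption)
ni_price = 20.0             # $/kg

mol_nh3 = 1000 / 3.0                     # mol NH3 per kg H2 (3 g H2 per mol NH3)
product_gas_l = 2 * mol_nh3 * MOLAR_VOL  # NH3 -> 1.5 H2 + 0.5 N2: 2 mol gas per mol NH3
bed_volume_l = product_gas_l / ghsv
cat_mass = bed_volume_l * bulk_density
ni_mass = cat_mass * ni_loading
print(f"Bed volume: {bed_volume_l:.1f} L, catalyst {cat_mass:.1f} kg, "
      f"Ni {ni_mass:.2f} kg (~${ni_mass * ni_price:.0f} per kg-H2/h)")
# -> roughly 3 L of catalyst, ~3.6 kg, ~0.9 kg Ni, ~$18 per kg-H2/h of capacity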

While the theoretical energy consumption is 3.75 kWh/kg-H2, the minimum practical energy consumption is somewhere in the order of 4.2-4.8 kWh/kg, and in reality it is usually higher. This number can be estimated from the heat capacity of the catalyst mass (mostly aluminum oxide), the active component (nickel, ~0.5 kJ/kg-K), the metallic components (~0.5 kJ/kg-K for SS304) that make up the reactor vessel, catalyst tubes, and containment cylinder, and finally the heat required to raise 5.5 kg of gaseous anhydrous ammonia (cp ~2.175 kJ/kg-K) to 800 degrees Celsius, which is approximately 2.65 kWh, plus any heat loss. We also need to account for the high heat capacity of the released hydrogen. As the ammonia progressively breaks down, hydrogen is released; this hydrogen has a certain residence time, since for complete decomposition the reformate gas must remain in the reactor until no appreciable quantity of ammonia is present. In effect, the reformer is heating hydrogen gas as well as ammonia, so we need to add the heat absorption of the hydrogen, another 3.17 kWh (cp ~14.3 kJ/kg-K). This takes the total to 7.84 kWh per kilogram of hydrogen, very close to the numbers found for industrial reformers. Heat loss through conduction is minimal: using 40 mm of rock-wool insulation wrapped around a 100 mm reactor vessel, heat transfer from a 3-liter reformer can be held to around 60 watts. The net total amounts to about 7.9 kWh per kilogram of hydrogen, or 23% of the LHV of hydrogen. Nearly 100% of this energy can be supplied by exhaust gases in an H2-ICE system, while for fuel cells no such heat is available.
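The two sensible-heat terms above can be verified directly; the sketch below uses the heat capacities and temperature rise from the text and reports only those terms, since the decomposition enthalpy and residual losses come on top.

# Check of the two sensible-heat terms quoted above (per kg of hydrogen produced).
cp_nh3 = 2.175      # kJ/kg-K, gaseous ammonia (average)
cp_h2 = 14.3        # kJ/kg-K, hydrogen
delta_t = 775       # K, from ~25 C up to ~800 C

nh3_mass = 5.5      # kg NH3 per kg H2, as used in the text
h2_mass = 1.0       # kg

q_nh3 = nh3_mass * cp_nh3 * delta_t / 3600   # kWh
q_h2 = h2_mass * cp_h2 * delta_t / 3600      # kWh
print(f"NH3 preheat: {q_nh3:.2f} kWh, H2 heating: {q_h2:.2f} kWh, "
      f"sensible total: {q_nh3 + q_h2:.2f} kWh per kg H2")
# -> ~2.6 kWh + ~3.1 kWh = ~5.7 kWh of sensible heat, before reaction enthalpy and losses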


Techno-economic feasibility of producing liquid hydrocarbons via micro-channel Fischer-Tropsch synthesis using carbon-neutral hydrogen and municipal solid waste plasma gasification


By combining photovoltaic power with municipal solid waste plasma gasification, carbon monoxide can be produced along with hydrogen at nearly the molar ratio required to produce long-chain liquid transportation fuels. Any fuel produced from a sustainable source such as solid waste diverts carbon away from new extraction, mitigating emissions: if 1 ton of fuel produced from solid waste is burned, 1 ton less fuel is extracted. Using micro-channel technology rather than classic tubular reactors, the size of the F-T reactor is reduced by an order of magnitude, reducing CAPEX and material usage. Low-cost, non-noble cobalt catalysts provide high activity and long life. Graphite-electrode 10 kV AC plasma torches provide gasification temperatures of 2000-3000 C, generating 513 Nm3 of CO and 400 Nm3 of H2 using 1.6 MW. 1.2 tons of solid waste can generate 0.27 tons of sulfur-free diesel fuel per day. A breakdown of the plant parameters and economics follows.

Sustainable diesel fuel market price: $946/ton ($3/gal)

Hydrogen source: photovoltaic electrolysis at 40 kWh/kg-H2; 140 kg H2 per ton of diesel

Carbon source: municipal solid waste plasma gasification, 243 kg CO per ton of MSW at 1,600 kWh of plasma power per ton of MSW (~6.6 kWh/kg CO), or roughly 5,600 kWh per ton of diesel. Hydrogen co-production: 32 kg/t-MSW

Solar plant CAPEX @0.20/watt: $60,000

DC/AC inverter: $15,000

Treated wood panel support structure: $8000

Plasma gasifier CAPEX: $10,000

Microchannel Fischer-Tropsch reactor: $8,000

Purification: $5,000

Total CAPEX: $106,000

Annual maintenance: $15,000

Revenue per ton MSW: $178

Power consumption: 6000 kWh/ton

MSW consumption: 1.2 tons per day

Potential Revenue: $94,600

Return on capital: 75%
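A quick check of the bottom line, using only the line items listed above (daily diesel output, diesel price, CAPEX, and maintenance); the small gap relative to the $94,600 revenue figure comes from rounding the daily diesel output.

# Sanity check of the return-on-capital figure from the line items above.
diesel_t_per_day = 0.27
diesel_price = 946          # $/ton
capex = 106_000             # $
maintenance = 15_000        # $/yr

revenue = diesel_t_per_day * 365 * diesel_price
roc = (revenue - maintenance) / capex
print(f"Annual revenue: ${revenue:,.0f}, simple return on capital: {roc:.0%}")
# -> about $93,000/yr in revenue and a simple pre-tax return in the mid-70% range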