SITYS: Climate Models do not Conserve Mass or Energy

From Roy Spencer’s Global Warming Blog

Roy W. Spencer, Ph. D.

See, I told you so.

One of the most fundamental requirements of any physics-based model of climate change is that it must conserve mass and energy. This is partly why I (along with Danny Braswell and John Christy) have been using simple 1-dimensional climate models, whose simplified calculations make conservation straightforward to enforce.

Changes in the global energy budget associated with increasing atmospheric CO2 are small, roughly 1% of the average radiative energy fluxes in and out of the climate system. So, you would think that climate models are constructed carefully enough that, without any global radiative energy imbalance imposed on them (no “external forcing”), they would produce no temperature change.

It turns out, this isn’t true.

Back in 2014, our 1D model paper presented evidence that CMIP3 models don’t conserve energy: those models showed a wide range of deep-ocean warming (and even cooling) despite the imposed positive energy imbalance they were forced with to mimic the effects of increasing atmospheric CO2.

Now, I just stumbled upon a paper from 2021 (Irving et al., A Mass and Energy Conservation Analysis of Drift in the CMIP6 Ensemble) which describes significant problems in the latest (CMIP5 and CMIP6) models regarding not only energy conservation in the ocean but also at the top-of-atmosphere (TOA, thus affecting global warming rates) and even the water vapor budget of the atmosphere (which represents the largest component of the global greenhouse effect).

These represent potentially serious problems when it comes to our reliance on climate models to guide energy policy. It boggles my mind that conservation of mass and energy were not requirements of all models before their results were released decades ago.

One possible source of problems is the model “numerics”… the mathematical formulas (often “finite-difference” formulas) used to compute changes in all quantities between gridpoints in the horizontal, levels in the vertical, and from one time step to the next. Minuscule errors in these calculations can accumulate over time, especially if physically impossible negative mass values are set to zero, causing “leakage” of mass. We don’t worry about such things in weather forecast models that are run for only days or weeks. But climate models are run for decades or hundreds of years of model time, and tiny errors (if they don’t average out to zero) can accumulate over time.
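
To see how such leakage can arise, here is a minimal sketch (a toy 1-D advection step, not code from any actual climate model; the scheme, grid size, and Courant number are arbitrary choices) showing how clipping unphysical negative values quietly destroys conservation:

```python
import numpy as np

# Toy 1-D advection of a tracer with the Lax-Wendroff scheme, which produces
# small unphysical negative values near sharp gradients.  Setting those
# negatives to zero each step quietly adds mass, so the total is no longer
# conserved.  All numbers here are arbitrary choices for the sketch.
nx, nsteps = 200, 2000
c = 0.5                              # Courant number, a*dt/dx
u = np.zeros(nx)
u[80:120] = 1.0                      # a sharp blob of tracer
mass0 = u.sum()

for _ in range(nsteps):
    up = np.roll(u, -1)              # u[i+1], periodic boundaries
    um = np.roll(u, 1)               # u[i-1]
    u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
    u = np.maximum(u, 0.0)           # clip "impossible" negative values

print(f"relative mass error after {nsteps} steps: {(u.sum() - mass0) / mass0:.2%}")
```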

The 2021 paper describes one of the CMIP6 models where one of the surface energy flux calculations was found to have missing terms (essentially, a programming error). When that was found and corrected, the spurious ocean temperature drift was removed. The authors suggest that, given the number of models (over 30 now) and the number of model processes involved, it would take a huge effort to track down and correct these model deficiencies.
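
For a sense of scale, a back-of-envelope calculation (the 0.5 W/m^2 figure is an assumed size for such a missing flux term, not a number from the paper) shows how a small unnoticed error in a surface flux integrates into a large spurious ocean heat content drift:

```python
# An assumed 0.5 W/m^2 error in the surface energy budget, integrated over the
# global ocean for a century.  The result is a few times 10^23 J, the same
# order of magnitude as the observed multidecadal increase in ocean heat
# content, so drifts like this are far from negligible.
missing_flux = 0.5              # W/m^2, assumed size of the omitted term
ocean_area = 3.6e14             # m^2, approximate global ocean surface area
seconds_per_year = 3.156e7
years = 100

spurious_ohc = missing_flux * ocean_area * seconds_per_year * years
print(f"spurious OHC drift after {years} years: {spurious_ohc:.1e} J")  # ~5.7e23 J
```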

I will close with some quotes from the 2021 J. of Climate paper in question.

“Our analysis suggests that when it comes to globally integrated OHC (ocean heat content), there has been little improvement from CMIP5 to CMIP6 (fewer outliers, but a similar ensemble median magnitude). This indicates that model drift still represents a nonnegligible fraction of historical forced trends in global, depth-integrated quantities…”

“We find that drift in OHC is typically much smaller than in time-integrated netTOA, indicating a leakage of energy in the simulated climate system. Most of this energy leakage occurs somewhere between the TOA and ocean surface and has improved (i.e., it has a reduced ensemble median magnitude) from CMIP5 to CMIP6 due to reduced drift in time-integrated netTOA. To put these drifts and leaks into perspective, the time-integrated netTOA and systemwide energy leakage approaches or exceeds the estimated current planetary imbalance for a number of models.

“While drift in the global mass of atmospheric water vapor is negligible relative to estimated current trends, the drift in time-integrated moisture flux into the atmosphere (i.e., evaporation minus precipitation) and the consequent nonclosure of the atmospheric moisture budget is relatively large (and worse for CMIP6), approaching/exceeding the magnitude of current trends for many models.”

Tom Halla
August 21, 2023 10:05 am

Perpetual motion machines, brought to you by the UN.

Paul S
Reply to  Tom Halla
August 21, 2023 11:08 am

Perpetual motion machines in many respects. Perpetual funding, perpetual erroneous results, perpetual lying, perpetual hysteria, perpetual drift towards totalitarianism.

Jim Masterson
Reply to  Tom Halla
August 21, 2023 4:36 pm

Climate science is an oxymoron.

Richard Page
Reply to  Jim Masterson
August 22, 2023 9:25 am

Climate Science is a playground for computer literate children that have never (and never will have) reached mental maturity.

Richard Page
Reply to  Richard Page
August 22, 2023 9:46 am

The ‘Lost Boys’ of academia.

Jim Masterson
Reply to  Richard Page
August 22, 2023 6:05 pm

And Climate Scientists have never studied Thermodynamics. You can’t average intensive properties–such as temperature and pressure–as many here have pointed out numerous times–even though Mr. Mosher wrongly thinks you can.

ClimateBear
Reply to  Jim Masterson
August 22, 2023 7:21 pm

It’s not even science by any stretch for so many who ‘practice’ it. More like some toxic hybrid with philosophy, “Climatossophy”?

Models don’t achieve energy balance? Fancy that, who’d a thunk it, eh ?/sarc

Here in Oz we are looking at an El Nino (i.e. hot and dry on this side of the Pacific) plus a positive IOD (same for our continent, so the double whammy). Happens pretty regularly, which is why even our poets write about how they love (inter alia) “a land of droughts and flooding rains”. But for the muppets and alarmists it is ‘climate change’, ‘global warming’ and now ‘global boiling’. Well someone’s brain is boiling.

cimdave
Reply to  ClimateBear
August 23, 2023 7:22 am

Climate Scientology

Rud Istvan
August 21, 2023 10:21 am

This is but one of several fundamental climate model problems. Some others:

  1. All but one of CMIP6 (the exception is INM CM5) produce a tropical troposphere hotspot that does not in fact exist.
  2. The model median ECS is about 2x observational EBM estimates. That is why the IPCC no longer gives a ‘best’ ECS estimate in AR5 and 6.
  3. The CFL constraint on numerical solutions to partial differential equations means that resolving the key processes is 6-7 orders of magnitude beyond what is computationally tractable (a rough version of that scaling is sketched below this list). So they must be parameterized. The parameterization is tuned to best hindcast—a CMIP requirement. That tuning drags in the attribution problem with natural variation. See old post ‘The trouble with climate models’ for details.
  4. CMIP comparing hindcast model anomalies hides the fact that in actual temperature terms, they disagree by about +/-3C. That is huge after tuned parameters.
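
A back-of-envelope version of the scaling argument in item 3 (the grid spacings used here are illustrative assumptions, not figures from the comment):

```python
# Rough cost scaling for refining a 3-D atmospheric grid from ~100 km to ~1 km
# horizontal resolution.  The CFL condition ties the maximum stable time step
# to the grid spacing, so a horizontal refinement factor r costs roughly
# r^2 (more columns) * r (proportionally smaller time step); vertical
# refinement and added physics push the total higher still.
current_dx = 100e3    # m, typical CMIP-class horizontal grid spacing (assumed)
target_dx = 1e3       # m, roughly what is needed to resolve deep convection (assumed)

r = current_dx / target_dx      # refinement factor, 100x
cost_factor = r**2 * r          # more columns times a shorter time step
print(f"~{cost_factor:.0e} times more compute")   # ~1e6, i.e. 6-7 orders of magnitude
```
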
Curious George
Reply to  Rud Istvan
August 21, 2023 10:40 am

NCAR Common Atmosphere Model CAM5.1 neglects the temperature dependence of the latent heat of water vaporization, overstating the energy transfer by evaporation from tropical seas by 3%. I alerted NCAR to this bug ten years ago; no response.

Dave Yaussy
Reply to  Rud Istvan
August 21, 2023 10:53 am

This issue of the models’ lack of fitness for purpose is something I wish I knew more about. I imagine it takes a better understanding of even rudimentary physics than I possess.

I suspect that there’s no easy analogy that we laypeople could use to dismiss them when they are trotted out as proof of anthropogenic warming, but it would certainly be nice if someone could come up with one.

Dave Fair
Reply to  Dave Yaussy
August 21, 2023 12:04 pm

Dave, just use Rud Istvan’s examples verbatim.

David Pentland
Reply to  Dave Yaussy
August 21, 2023 12:07 pm

Laypeople want simple analogies for complex issues. Hence, ” the science is settled” and “CO2 is the thermostat”.
Only when the energy system comes totally unravelled will MSM discuss uncertainties.

David Pentland
Reply to  David Pentland
August 21, 2023 12:48 pm

“It’s said that science will dehumanize people and turn them into numbers. That’s false, tragically false. Look for yourself. This is the concentration camp and crematorium at Auschwitz. This is where people were turned into numbers. Into this pond were flushed the ashes of some four million people. And that was not done by gas. It was done by arrogance, it was done by dogma, it was done by ignorance. When people believe that they have absolute knowledge, with no test in reality, this is how they behave. This is what men do when they aspire to the knowledge of gods.” Jacob Bronowski

cilo
Reply to  David Pentland
August 22, 2023 5:00 am

Yeah, but science is really adamant about proof; anecdotes, personal reminiscences and generalised libel carry no weight when actual truth is at issue. In science, when someone shows us the place the bones are buried, we are allowed, no, encouraged to dig ’em up and present the proof.
In other words: not a good analogy at all.

Martin Brumby
Reply to  David Pentland
August 21, 2023 2:44 pm

It is easier than that.

Identify the “Scientists” who continually lie and exaggerate and pepper their abstracts with ‘could’, ‘might’, ‘experts think’, ‘worse than we thought’, ‘is consistent with’ and all the rest of the guff.

Mann, Santer, Schmidt, Watson, Gleick, [add 30 of your favourites here].

If the New York Times and the Grauniad hero-worship them, it is a slam-dunk.

Give them all the trust you have in Gates, Fauci, Collins, Farrar and the rest.

RickWill
Reply to  Dave Yaussy
August 21, 2023 3:14 pm

it would certainly be nice if someone could come up with one.

The best analogy is Galileo. It took 359 years for the RC Church to recognise his model of the solar system was correct. His model was based on meticulous observation. His model was not the consensus of the day and he was jailed for not complying with the consensus.

There is one simple fact with regard to Earth’s energy balance and climate. Open ocean surface cannot sustain a temperature above 30C. You do not even need to know why, because the control process can be observed every day using global observations that are readily available on the internet.

For example, two weeks ago, the ocean surface water off the west of Mexico was above 30C:
https://earth.nullschool.net/#2023/08/13/1200Z/ocean/surface/level/overlay=sea_surface_temp/orthographic=281.41,7.90,372/loc=-102.253,16.145

It is now back below 30C due to convective instability resulting in a convective storm that is now having some impact in California:
https://earth.nullschool.net/#2023/08/20/1200Z/ocean/surface/level/overlay=sea_surface_temp/orthographic=281.41,7.90,372/loc=-103.320,16.242

The same process was at play in the Bay of Bengal from June to August. Also off the Philippines around the same time.

All models, apart from the Russian INM model, have some open ocean surface sustaining temperature above 30C over an annual cycle. So they produce unphysical nonsense. The models are junk – a sick joke on humanity.

cilo
Reply to  Dave Yaussy
August 22, 2023 4:36 am

Dave, you don’t have to be some genius specialist to prove the models inadequate. What you do need, is wide reading on many subjects, and, this is important, the ability to be honest with yourself.
Thusly, we can look at the “Formula” Baal Gates and fiends like to project behind them as they preach their climastrologist dogma, and notice they have a variable for ‘particulates’ or similar. All very well, but has anyone here seen any numbers on the (at least) 40 companies in the US, and about as many over the rest of the world, making a living off of weather modification programmes? What, exactly, is the effect of Chemtrails, a conspiracy theory recently officially admitted to, albeit renamed as “Stratospheric Particulate Injection”?
As long as they are not honest or competent enough to include such an obvious parameter, what is there to take seriously?
Like that hole in the ozone: Stir a glass of water with a spoon, ( 1 000 miles an hour around the equator), look at the middle of the glass, and tell me what you see.
Oh, how I laugh at that “hole in the ozone”!
Wanna do 911? 1/6? Dark Matter?

ATheoK
Reply to  Dave Yaussy
August 25, 2023 6:01 pm

“This issue of the models’ lack of fitness for purpose is something I wish I knew more about.”

  • The climate models never worked. Government users excuse the most atrocious programmer and program behavior.
  • Model owners utterly ignore their model’s inabilities and misbehaviors, literally for decades.
  • Instead, the model owners promote their results, while demonizing and deriding anyone who criticizes model errors or irrational results.
  1. In the Engineering and Financial world, there are financial and often death repercussions from model errors.
  2. Models can return ballpark forecasts that are never considered real or in place of hard data. i.e., a forecast just to help plan labor or services.
  3. When lives are on the line, model owners never allow results that they cannot back 100%.
  4. Any doubt about a run/result and the programmers go through the model line by line verifying every calculation and result. Their bosses receive ongoing status reports and full explanations regarding cause and correction when they finish.
  5. Those businesses have zero tolerance for incompetent or incorrigible programmers.
  6. Unlike government or green organizations that institutionalize bad programming practice and excuses.
Tim Gorman
Reply to  ATheoK
August 26, 2023 3:32 am

100%!

Mumbles McGuirck
August 21, 2023 10:23 am

“While drift in the global mass of atmospheric water vapor is negligible relative to estimated current trends, the drift in time-integrated moisture flux into the atmosphere (i.e., evaporation minus precipitation) and the consequent nonclosure of the atmospheric moisture budget is relatively large (and worse for CMIP6), approaching/exceeding the magnitude of current trends for many models.”

Just remember, water vapor is the Number One greenhouse gas, contributing ~95% of the atmospheric warming due to GHGs. So if the models get that value wrong their output is nonsense.

Peta of Newark
Reply to  Mumbles McGuirck
August 21, 2023 11:09 am

Water is just ‘some stuff‘ that gets warm when the sun shines on it.
The notion that it absorbs energy **only** because it is a greenhouse gas is Utter Garbage

There is simultaneously:

  • Nothing special about water
  • Everything special about water

Water vapour (and liquid water) is an immensely strong absorber of near infrared radiation (700 nm to 2,000 nm), and 40% of El Sol’s total output is in that range.

The sun heats the atmosphere directly even at just 0.01% humidity.

Water has got to be The Most taken-for-granted and thus misunderstood substance in this entire universe

climategrog
Reply to  Peta of Newark
August 21, 2023 11:48 am

The notion that it absorbs energy **only** because it is a greenhouse gas is Utter Garbage

where did you get that straw man from?


mkelly
Reply to  Peta of Newark
August 21, 2023 4:52 pm

Fresh water has a specific heat of 4.184 J/g K. Couple that with its molar mass of 18 and it takes a lot of energy to raise its temperature. That is why it is a coolant. Same with CO2. They do not cause warming.

Dave Andrews
Reply to  mkelly
August 22, 2023 7:44 am

Talking of coolants: the UK’s fleet of Advanced Gas-cooled Reactors (AGRs) – guess what the coolant gas was? Right, if you said CO2!

ClimateBear
Reply to  mkelly
August 23, 2023 12:04 am

Not to mention its latent heat of vaporization of 2.26 kJ/g, which is a big, big player in the atmospheric temperature mechanism IMO: water evaporates at the surface, transports that latent heat to the upper atmosphere, then offloads it as it forms liquid droplets, aggregates, precipitates and falls back to earth.

Jim Gorman
Reply to  ClimateBear
August 23, 2023 4:34 am

Not to mention that it falls back at a cooler temperature.

David Pentland
Reply to  Mumbles McGuirck
August 21, 2023 12:11 pm

Mumbles, how do we know “~95%”?

This is suspiciously close to 97%.

bonbon
August 21, 2023 10:24 am

This sure looks like a convoluted way to confirm Pat Frank’s articles here.
https://wattsupwiththat.com/tag/pat-frank/

Thomas
Reply to  bonbon
August 21, 2023 1:59 pm

From Dr. Roy’s article: “But climate models are run for decades or hundreds of years of model time, and tiny errors (if they don’t average out to zero) can accumulate over time.”

Yes, that is pretty much exactly what Pat Frank said. Yet Dr. Roy argued against it. He couldn’t see that there is a difference between propagation of errors in a damped and constrained model and the output of a damped and constrained model.

Bonbon gets a bonbon.

Tim Gorman
Reply to  bonbon
August 21, 2023 2:40 pm

Spencer will still probably disagree with Frank. Frank came at the problem from a measurement uncertainty point of view: uncertainty in the initial conditions used to prime the model accumulates at each iteration. Spencer is basically arguing that errors in the model algorithms accumulate at each iteration.

Both are correct. It’s the problem you *always* find in iterative models. It’s why the climate models need the limits in the software that Willis has pointed out in order to keep the models from blowing up.

It’s why the CAGW advocates in climate science have to use the assumptions that 1. all measurement uncertainty is random and Gaussian and therefore cancels out and 2. that variances don’t add when combining measurement station data. If you refuse to accept these two assumptions then the uncertainty of the global average temperature is so large that you can’t use it for anything.

Jim Gorman
Reply to  bonbon
August 22, 2023 5:45 am

That is exactly what I thought. You beat me to the reply. Iterative processes invariably spin out of control when there are transmitted errors in each iteration.

Dr. Pat Frank’s papers illustrated this to a T. The arguments against the ±15° uncertainty interval were really misplaced. If the interval is even as low as ±3° it makes the models worthless.

bonbon
Reply to  bonbon
August 23, 2023 4:04 pm

Just found this :
https://www.youtube.com/watch?v=0-Ke9F0m_gw
Patrick Frank: Nobody understands climate | Tom Nelson Pod #139

Tim Gorman
Reply to  bonbon
August 24, 2023 4:23 am

A very important part of this talk occurs at about 13min in. It’s where Pat is explaining the uncertainty associated with the climate models. He emphasizes that the uncertainty envelope does *not* say that the temperature change can be from the positive max to the negative min, but that it is an area of *NOT KNOWING*. I have yet to communicate with a CAGW advocate that understands this, *not one*. I even fall into this trap occasionally. The temperature could be anywhere in the interval but YOU DON’T KNOW WHERE! There *is* a true value somewhere BUT YOU DON’T KNOW WHERE IT IS.

Lots of people like to say that every value in the interval has an equal chance of being the true value, i.e. a uniform distribution. But this is typically merely a way to dismiss the uncertainty by assuming whatever value you want it to be, usually the median value. This is wrong-headed in my humble opinion. It should be looked at as the true value having a probability of 1 of being the true value and all other values having a probability of 0 of being the true value. BUT YOU DON’T KNOW WHICH VALUE HAS A PROBABILITY OF 1! It is an UNKNOWN.

You can be sure that however a CAGW advocate characterizes the uncertainty interval it is meant to support the idea that all uncertainty in measurement is random, Gaussian, and cancels leaving the stated value as 100% accurate.

Scissor
August 21, 2023 10:25 am

Two out of three ain’t bad.

Richard Page
Reply to  Scissor
August 22, 2023 9:34 am

Well it ain’t good. It’s the science equivalent of having an exam paper where the only question you got correct was your name.

dbakerber
August 21, 2023 10:27 am

Why would we be surprised? All the climate scientists that actually studied physics, chemistry, and other applicable disciplines have been labelled as climate deniers.

Energywise
Reply to  dbakerber
August 21, 2023 1:13 pm

Don’t forget the Geologists

slowroll
Reply to  dbakerber
August 22, 2023 9:09 am

Well all those types are clouding the issue with facts. Can’t have that.

pillageidiot
August 21, 2023 10:46 am

Do any of these types of models publish their entire code into the public domain?

I would certainly like enough transparency that researchers can go through the code and find out how many equations have missing parameters and how many raw fudge factors exist in the code so that the historical matching has an acceptable fit.

Rud Istvan
Reply to  pillageidiot
August 21, 2023 11:04 am

A few do, most don’t. Easterbrook ran down CMIP3 in ~2010 IIRC. A paper on source code from 9 CMIP3 models found the typical model had about 300-400k lines of code. Lots of room for error.

Clyde Spencer
Reply to  Rud Istvan
August 21, 2023 12:05 pm

“All non-trivial computer programs have bugs.”

I suppose we could extend that truism to “The number of bugs in a computer program is proportional to the number of lines of code.”

old cocky
Reply to  Clyde Spencer
August 21, 2023 7:45 pm

“The number of bugs in a computer program is proportional to the number of lines of code.”

To the power of the number of programming languages used

slowroll
Reply to  Clyde Spencer
August 22, 2023 9:12 am

And, add the fact that the bugs can’t be found without real-world testing. Need a couple hundred years of testing to validate climate models. Their approach of testing a model with another model is fallacious in the extreme.

old cocky
Reply to  slowroll
August 22, 2023 2:29 pm

the bugs can’t be found without real-world testing. Need a couple hundred years of testing to validate climate models

Code bugs can be found by inspection, walkthrough, or comparing results to those from a parallel implementation.

Design bugs are another matter.

Their approach of testing a model with another model is fallacious in the extreme.

That’s quite a valid approach. Both programs had better strictly comply with specifications, though 🙂

Tim Gorman
Reply to  old cocky
August 23, 2023 7:31 am

Assuming the specifications are accurate to begin with.

old cocky
Reply to  Tim Gorman
August 23, 2023 1:37 pm

That comes back to the design bug.

Robert B
Reply to  Rud Istvan
August 21, 2023 3:30 pm

Reminds me of a proof that one plus one doesn’t equal two. About 10 lines of simple arithmetic and still a bugger to spot the error.

SteveG
Reply to  Rud Istvan
August 21, 2023 6:12 pm

Rud, the models, depending on resolution, are based on slicing the earth into a number of cells, correct? The higher the resolution, the greater the number of cells.

This computer grid of the earth extends from the bottom of the ocean to a point in the atmosphere.

Do we know what the typical cell area of the models is in km^2?

B Zipperer
Reply to  SteveG
August 21, 2023 7:17 pm

FWIW I have read the atmospheric cells are 50-100 km x 100 km x 0.5 km depending on the model. Don’t recall the ocean grid dimensions [I think the depths are more shallow].
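
Taking those dimensions at face value, a rough count of the grid (the 100 km cell size and a ~50 km model top are assumptions for the sketch):

```python
import math

# Order-of-magnitude cell count for a 100 km x 100 km x 0.5 km atmospheric grid.
# The cell size and the ~50 km model top are illustrative assumptions.
earth_radius_km = 6371.0
surface_area_km2 = 4 * math.pi * earth_radius_km**2   # ~5.1e8 km^2

cell_area_km2 = 100 * 100                             # ~10,000 km^2 per column
columns = surface_area_km2 / cell_area_km2            # ~51,000 columns
levels = 50 / 0.5                                     # ~100 layers up to ~50 km
print(f"columns ~{columns:,.0f}, atmospheric cells ~{columns * levels:,.0f}")
```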

B Zipperer
Reply to  B Zipperer
August 21, 2023 7:40 pm

and relatedly, I highly recommend this 2019 article on model deficiencies:
https://www.pnas.org/content/116/49/24390

Tim Gorman
Reply to  B Zipperer
August 22, 2023 3:01 am

Thanks for the link. Interesting paper!

dk_
Reply to  SteveG
August 21, 2023 8:10 pm

SteveG,

“…the current grid boxes are 100 Kilometers…”

Steven Koonin, Aug 21, 2023, Interview at Hoover “Uncommon Knowledge,”
“Hot or Not: Steven Koonin Questions Conventional Climate Science and Methodology,” https://youtu.be/l90FpjPGLBE, timetag +/-19:35:
Discussing his book “Unsettled” published Apr 27, 2021.

I’d just listened to this interview an hour or two before I read this post and your comment/question. I found the interview worthwhile even though I’d read the book, but there’s nothing new in there. Recommended anyway.

Curious George
Reply to  pillageidiot
August 21, 2023 11:52 am
Energywise
Reply to  pillageidiot
August 21, 2023 1:15 pm

No, most is written on ZX Spectrum or Commodore 64 so is ok at triggering a relay to light a bulb, but beyond that…………………

pillageidiot
Reply to  pillageidiot
August 21, 2023 1:24 pm

Thanks for the replies, everyone.

I went to the UCAR/NCAR link. I couldn’t tell if a user could see all of the source code, or could just use tools from their toolbox to perform some modelling aspects in their own research.

Curious George
Reply to  pillageidiot
August 21, 2023 3:03 pm

Try https://escomp.github.io/CESM/release-cesm2/introduction.html
There are probably thousands of source code files. A BIG project.

old cocky
Reply to  Curious George
August 21, 2023 11:01 pm

Ahh, the joys of dependency heck building subversion from source.

Why would anybody use git and svn?

cilo
Reply to  pillageidiot
August 22, 2023 5:09 am

…publish their entire code…

A few weeks after that famous hokey schtick thing came out, the source code was emailed around. I remember the subroutine that lifts the baseline towards the end date was REM’d with a comment that actually included the term: “…fudge factor…” (from some date, subsequent base line values were each multiplied by 101% over the previous, if memory serves)
My guess is not one of them dare let anybody see their homework after that!

Peta of Newark
August 21, 2023 11:18 am

It would have been nice if they even just conserved the surface area of Earth.

i.e. How the he11 did this garbage science get away with saying that the sun only impinges upon ¼ of Earth’s surface area – as it does in every expalantion of the GHGE?

i.e.2. How the he11 did they get away with the Albedo figure they use (0.30)
That is the figure that Climate creates – in order to cool the Earth.

IOW How can they/anyone suggest that an Earth without a climate has clouds?
Where did they come from otherwise?

Climate (atmosphere) cools the Earth and those 2 sleights of hand created such energy/temperature nonsense that it became necessary to confabulate the GHGE so as to warm it back up again.

do you laugh or cry

DMacKenzie
Reply to  Peta of Newark
August 21, 2023 12:01 pm

I like your Albedo comment, and your clouds comment, but really the surface area of the Earth is 4*pi*R^2 and the area of its shadow is only pi*R^2, and you can’t spell “explanation”, so what are we readers to think of your analytical skills?

Curious George
Reply to  Peta of Newark
August 21, 2023 12:03 pm

You are referring to a “flat earth” climate “energy model”. The Sun only shines on the day side of the planet (that’s a factor of one half), and then mostly obliquely (that’s another factor of one half). More mathematically, the Sun illuminates a perpendicular cross-section of area Pi*R^2, whereas the Earth’s surface is 4*Pi*R^2.

David Pentland
Reply to  Curious George
August 21, 2023 1:25 pm

Peta’s confusion about this basic geometry question is pretty much universal among the general population.
Ignorance can be fixed…

Kevin Kilty
Reply to  David Pentland
August 21, 2023 3:19 pm

If you think about this, it’s a pretty complex geometry problem for the average person to grasp because half the surface is hidden and there is a cosine factor in the surface element orientation on the sunlit side that no layperson is likely to understand without having had Calc II anyway. Just the idea of a cosine is baffling to a population having just managed to get through Algebra II. The ratio of areas argument has got to look like bafflegab to them.

DMacKenzie
Reply to  Kevin Kilty
August 22, 2023 4:34 pm

KK
We can’t give up on the average person so easily. We must give them the desire to check out what they are told they are supposed to believe. A lot of the general public is smart enough to figure it out if they want to. They can figure out how to speak other languages if the girls are cute, learn to play musical instruments, figure out how to port forward their video games…

Kevin Kilty
Reply to  Curious George
August 21, 2023 3:07 pm

I tried the same explanation hours ago here. I think the ratio of areas argument is very obscure and perhaps better to bring up the oblique rays explanation.

Jim Gorman
Reply to  Curious George
August 22, 2023 6:18 am

I’m curious. What is the integral of cosine Θ from 0 to π/2, i.e., the equator to the north pole? Now double that for the equator to the south pole and what do you get?

Clyde Spencer
Reply to  Peta of Newark
August 21, 2023 12:12 pm

The CRC Handbook of Chemistry and Physics lists an Earth albedo larger than 0.30, and with more significant figures. Yet, it is often rounded down to 0.3, and calculations using it are shown with more than 1-significant figure. Generally speaking, using a larger albedo should reduce the predicted warming. Also, taking into account the specular reflection from the oceans should further reduce the warming effect. Why these issues are apparently ignored is beyond my comprehension.

Rick C
Reply to  Peta of Newark
August 21, 2023 12:50 pm

How the he11 did this garbage science get away with saying that the sun only impinges upon ¼ of Earth’s surface area – as it does in every expalantion of the GHGE?

PoN: They do not say the sun impinges on 1/4 of the earth. The 1/4 calculation is for how much of the energy that hits the earth hits each square meter on average. The incoming radiation is stated in terms of watts per square meter over the circular cross-section defined by the earth’s radius r (area = pi x r^2), about 1374 W/m^2. OK, but how much radiation is that per the total surface area of the spherical earth, which is 4 x pi x r^2? The sphere’s area is 4 times larger than the flat circular area, so just divide 1374 W/m^2 by 4. That gives you 343.5 W/m^2.

Now, of course we know that only 1/2 the surface is exposed at any given time so the average received on the daytime 1/2 is about 687 w/m^2. But, of course that too is an average as the distribution is not uniform – areas normal to the sun at noon see more radiation than areas located nearer to the horizon. That’s why you can look at sunsets and sunrises without damaging your retinas but staring at the sun when it’s high in the sky will blind you. Actual energy flux for any area on a given date and time and assuming clear sky can be quite accurately calculated including adjustments for the thickness of the atmosphere depending on solar angle. Such calculations are done all the time to determine how much solar energy will be available for solar heating or solar panels.

The problem with models is much more problem of many other assumptions and variables like albedo and cloudiness.
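
A quick numerical check of that averaging (a Monte Carlo sketch using the 1374 W/m^2 figure quoted above; everything else is an assumption of the sketch):

```python
import numpy as np

# mu is the cosine of the angle between a surface point and the sub-solar
# point; sampling mu uniformly on [-1, 1] distributes points uniformly over
# the sphere.  Averaging the incident flux then reproduces S0/4 for the whole
# sphere and S0/2 for the sunlit hemisphere alone.
S0 = 1374.0
mu = np.random.uniform(-1.0, 1.0, 2_000_000)
flux = S0 * np.clip(mu, 0.0, None)        # the night side receives zero

print(f"whole-sphere average: {flux.mean():.1f} W/m^2 (S0/4 = {S0 / 4:.1f})")
print(f"sunlit-side average : {flux[mu > 0].mean():.1f} W/m^2 (S0/2 = {S0 / 2:.1f})")
```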

Tim Gorman
Reply to  Rick C
August 21, 2023 3:02 pm

It’s actually much more complicated than this. If the impinging radiation is only vertical at the equator, then each square meter north and south of the equator is going to receive the impinging radiation at an angle related to its latitude. Only the component normal to the surface counts, i.e. (total radiation) x cos(l) where “l” is the latitude. The sun’s radiation also follows the inverse square law. Since the path length gets longer as latitude goes up, higher latitudes get a reduced amount of radiation. And since more atmosphere is encountered as the path length goes up, the inverse square law factor must be modified to account for the additional loss.

You can’t just assume the surface area of the earth is πr^2 and every point on the earth gets the same radiation. It’s far more complicated than that.

I’m sure some will use the excuse that the inverse square law loss is minimal and can be ignored and that the angle of incidence loss is minimal and can be ignored. The answer to that is that the anomalies being calculated are also minimal and can be ignored as well. What’s good for the goose is good for the gander.

(btw. I worked for a while in the design group of microwave links for telephone company traffic. Path loss (inverse square and atmospheric loss) were major components to be considered. The sun’s radiation isn’t any different.)

morfu03
Reply to  Tim Gorman
August 21, 2023 8:03 pm

Here is some literature about that:
M. J. Prather et al (2019) A round Earth for climate models, doi.org/10.1073/pnas.1908198116

and an older one showing that the fact that the earth is oblate matters, so far without any response from the modeling community.. something for Rud’s list I guess:
G. L. Smith et al. (2014) Computation of Radiation Budget on an Oblate Earth
doi.org/10.1175/JCLI-D-14-00058.1

well.. it’s only been about 10 years, they’re probably still working on updating the hockey-stick article.. you have to understand, saving the climate leaves little time for actual work…

Tim Gorman
Reply to  morfu03
August 22, 2023 3:07 am

from the second paper: “For radiation budget computations, the earth oblateness effects are shown to be small compared to error sources of measuring or modeling.” (bolding mine, tpg)

Of course the CAGW advocates don’t recognize the fact that there are error sources such as measurement uncertainty and modeling error. To them all error is random, Gaussian, and cancels!

MarkW
Reply to  Peta of Newark
August 21, 2023 3:27 pm

You were corrected on this error the last time you brought it up.

Giving_Cat
August 21, 2023 11:32 am

> The authors suggest that, given the number of models (over 30 now) and number of model processes being involved, it would take a huge effort to track down and correct these model deficiencies.

At LEAST all but one of those models are wrong. NONE of those models can correctly hindcast.

That said. The opening sentence could use a bit more precision.

> One of the most fundamental requirements of any physics-based model of climate change is that it must conserve mass and energy.

I would suggest instead:

One of the most fundamental requirements of any physics-based model of climate change is that it must account for mass and energy.

This because we all realize “climate” isn’t the closed system your comment suggests. The best we can hope for is tracking inputs and outflows. For just one instance: were a massive volcanic eruption to upset the climate, we could account for it, but it doesn’t look like conservation within the constraints of any climate model.

climategrog
Reply to  Giving_Cat
August 21, 2023 11:55 am

This because we all realize “climate” isn’t the closed system your comment suggests.

The comment does not suggest that.
Since there is accounting of the TOA energy budget, it is not regarded as a closed system, and that energy budget MUST balance.

Same for conservation of mass. There is technically a small loss of mass from the exosphere. If that is significant to global mass it can be included as a mass flux.

If Dr Spencer says it should conserve mass and energy you’d better remember he is a physicist and come up with a more solidly argued criticism.

Curious George
Reply to  Giving_Cat
August 21, 2023 12:09 pm

I like your formulation “must account”. An old joke about hiring an accountant, they ask him: How much is 1+1? The best answer: How much do you want it to be?

Dave Fair
August 21, 2023 11:58 am

Name the national and international science organizations broadcasting this information from the high hills……..Waiting………………..

kbgregory3gmailcom
August 21, 2023 12:13 pm

The caption of Figure 5 of the paper says “The MIROC models have a total leakage of approximately −3.5 W m^−2, with offsetting ocean and nonocean leakages of approximately −41.5 and 38.0 W m^−2, respectively.”
https://journals.ametsoc.org/view/journals/clim/34/8/JCLI-D-20-0281.1.xml
WOW!!

MST
August 21, 2023 12:21 pm

“ The authors suggest that, given the number of models (over 30 now) and number of model processes being involved, it would take a huge effort to track down and correct these model deficiencies.”

Far too large an effort to correct the code, but use the definitionally erroneous results as reason to rejigger the entire planetary economic system? Piece of cake.

Thomas
Reply to  MST
August 21, 2023 2:10 pm

And why bother to track them down? If a car is seriously damaged, we don’t try to fix it. We just put it on the junk heap and go get a new one.

MST
Reply to  Thomas
August 21, 2023 5:59 pm

When you total a car, you only get a cheque for a 1:1 replacement. It’s the same people building the same models with the same assumptions which assume the same conclusions.

Ulric Lyons
August 21, 2023 12:33 pm

Moreover, are the models ‘tuned’ to believe that rising CO2 levels caused the rise in OHC since 1995, rather than the decline in cloud cover since 1995?


Thomas
Reply to  Ulric Lyons
August 21, 2023 2:16 pm

Exactly.

B Zipperer
Reply to  Thomas
August 21, 2023 7:51 pm

Which to me, is the basis for their “attribution” claim:
“See, if we leave out the CO2 term then the curves don’t match up. It must be CO2.”

Energywise
August 21, 2023 1:01 pm

I remember covid modelling Ferguson, a UK Govt Sage advisor at that time – his models predicted 500,000 deaths in UK when covid hit, leaving the NHS swamped, unless there were lockdowns – there were then fear induced lockdowns imposed on citizens by Govt – Ferguson admitted that his Imperial College model of the COVID-19 disease was based on undocumented, 13-year-old computer code that was intended to be used for a feared influenza pandemic, rather than a coronavirus. Ferguson declined to release his original code so other scientists could check his results. He only released a heavily revised set of code after a six-week delay.
He’s also made other modelling predictions in his career – all wildly inaccurate as it turned out! – He ultimately resigned in disgrace during covid

https://www.nationalreview.com/corner/professor-lockdown-modeler-resigns-in-disgrace/

Climate modelling is a similar dark art with many unknown inputs and erroneous skewed software logic – software can predict anything you want it to
If covid taught our leaders anything, it’s that modelling is incapable of delivering data for major policy decisions
God forbid we ever let AI make policy decisions

cilo
Reply to  Energywise
August 22, 2023 5:24 am

Just yesterday we had a story about an Anointed One of the White Coat, complaining her AI buggered up a prescription.(Just in case anybody thought covidiocy was the medical industry’s final insult).
What’s worse, a computer that sends your son to war, or one that euthanises you for your sore throat?

slowroll
Reply to  Energywise
August 22, 2023 9:21 am

What we have now is just as bad. Actual unintelligence making policy decisions.

Richard Page
Reply to  Energywise
August 22, 2023 9:41 am

If only he had resigned due to his multiple failures in modelling rather than him breaking lockdown to pursue an affair with a married woman.

David Dibbell
August 21, 2023 1:03 pm

These are similar concerns to those expressed by Willis Eschenbach in early 2022 in an article here about conservation of energy.

https://wattsupwiththat.com/2022/02/18/meandering-through-a-climate-muddle/

I commented on that article concerning NOAA’s GFDL model CM4.0.

https://wattsupwiththat.com/2022/02/18/meandering-through-a-climate-muddle/#comment-3456955

In short, about 2 W/m^2 is spread around globally because dissipative heating cannot be computed locally. This is much larger than, say, the yearly change in GHG “forcings” being investigated.

Duane
August 21, 2023 1:05 pm

All of the warmunist climate models make the erroneous assumption that the atmosphere warms or cools the oceans and the land surface, particularly the oceans. The source of energy, of course, is sunlight, with some contribution from the Earth’s core and mantle, not the atmosphere. The sunlight that reaches the ocean or land surface warms those surfaces in accordance with the diurnal cycle and the annual seasonal variations in incident sunlight. The oceans absorb that varying energy, store it, then transfer that energy to the atmosphere via radiation and convection, aided by ocean currents as well as atmospheric prevailing wind directions and speeds.

The ability of the oceans to store and release incident solar energy is truly massive compared to the atmosphere, which has a virtually negligible ability to store and release energy to the oceans as compared to the oceans transferring energy to the atmosphere

That is due to the specific heat of liquid water vs. air, which is the amount of energy that must be applied or released to cause a given amount of temperature change PER UNIT MASS. Liquid water has about four times the specific heat of air – meaning it takes four times as much energy gained or lost per unit mass of liquid water to cause the water temperature to change by one degree C or K (kcal per deg C per kg) as it does for air. The mass of the world’s surface water is approx. 1.4 x 10^21 kg. The mass of the world’s atmosphere is 5.15 x 10^18 kg. That is a ratio of 271 to 1, water mass to air mass.

That in turn means that a 1 degree C or K change in temperature of the atmosphere theoretically induces, in a perfectly distributed application of that energy to the oceans (an impossible result), only 1/1,084 deg C or K temperature change in oceanic water. Temperature stratification and ocean currents of course result in a wide disparity of oceanic temperatures both spatially and vertically. Coriolis effects due to the Earth’s spin result in the bands of wind currents that vary in both direction and speed by latitude.
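
Checking that arithmetic with the quoted masses and standard round-number specific heats (the specific heat values are assumptions added here, not taken from the comment):

```python
# Quick check of the heat-capacity ratio argued above.  The masses are the ones
# quoted in the comment; the specific heats are standard round-number values.
m_ocean = 1.4e21       # kg, mass of the world's surface water (quoted above)
m_atmos = 5.15e18      # kg, mass of the atmosphere (quoted above)
c_water = 4184.0       # J/(kg K)
c_air = 1005.0         # J/(kg K), air at constant pressure

mass_ratio = m_ocean / m_atmos                                 # ~272
heat_capacity_ratio = (m_ocean * c_water) / (m_atmos * c_air)  # ~1100
print(f"mass ratio ~{mass_ratio:.0f}, heat-capacity ratio ~{heat_capacity_ratio:.0f}")
```

The heat-capacity ratio comes out near 1,100, in line with the ~1/1,084 figure above.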

Meaning, the atmosphere and the whole notion of CO2-induced “global warming” (i.e., the atmosphere warming the oceans) has no practical effect on the oceans. The only effects that matter are the amount of incident sunlight reaching the ocean surfaces over a long timeframe (at least hundreds of years), and the movement of oceanic mass both vertically and horizontally via currents. The only thing that affects the former is solar radiation variations and cloud cover, not CO2, which has zero effect on incident sunlight at the oceanic surface.

Land surfaces, of course also have a somewhat lower specific heat content than the atmosphere, and are of course also vastly more massive than the atmosphere.

Plus, affecting both the oceans and land surfaces is the thermal energy transfer from the core and mantle to the upper crust via a combination of conduction and convection, and again, there is no effect from atmospheric CO2.

The warmunists are completely clueless when it comes to conservation of both energy and mass – they’re physics idiots, all of them.

cilo
Reply to  Duane
August 22, 2023 5:55 am

…they’re physics idiots, all of them…

Didn’t your mama not teach you not to heap on them retard kids?

bnice2000
August 21, 2023 1:27 pm

Where were they measuring ocean temperature in the 1950s,60s?

Didn’t Phil Jones say southern hemisphere ocean temperatures were “mostly made up”?

Bob Tisdale showed very little ocean coverage before ARGO.

[attached image: ocean temp coverage.png]

Dave Andrews
Reply to  bnice2000
August 22, 2023 8:21 am

John Gribbin, a former assistant editor of Nature, wrote a book, ‘Forecasts, Famines and Freezes’, published in 1976.

On page 8 he says

“for the five year period 1968-72, the average temperature recorded by the 9 ocean weather ships between 35 degrees N and 66 degrees N was more than half a degree centigrade below the peak of the 1940s”

No more details unfortunately.

Rud Istvan
August 21, 2023 2:09 pm

A follow up concerning ‘outlier’ INM CM5. Not only no tropical troposphere hotspot, but an ECS ~1.8, close to EBM ECS ~1.65-1.7.

The underlying reason is fascinating. INM carefully parameterized their ocean rainfall to ARGO observations (ARGO near-surface salinity measures ‘ocean fresh water storage’, so indirectly ocean rainfall). So their model has about twice the ocean rainfall of the rest of CMIP6, and so about half the water vapor feedback (WVF) of the rest of CMIP6.

And this INM WVF fact can be used with Lindzen’s 2011 British Parliament paper Bode feedback curve to derive ECS ~1.8 differently. Lindzen’s no-feedback ECS is Bode 0 at 1.2C. AR4 said WVF doubles no feedback, so +2.4C, so Bode 0.5. IPCC says everything else except clouds nets to about 0, so clouds contribute +0.6C or Bode 0.15, for a total IPCC ECS 3, so Bode 0.65.
Dessler showed cloud feedback is about zero in 2010. INM figured observed WVF is about half that of other models. So 0.65 − 0.15 − (0.5/2) = Bode ~0.25. Plugged into Lindzen’s Bode curve, that produces ECS ~1.8. Neat triangulation.

Also explains why the rest of CMIP6 produce a tropical troposphere hotspot and otherwise run hot: too much WVF, from unrealistically low ocean rainfall not parameterized using ARGO observations.

RickWill
August 21, 2023 2:41 pm

One possible source of problems are the model “numerics”…

This shows a lack of understanding of climate phiisics. Climatology is adorned with its own array of climate phiisics. Anything can happen and does happen in climate models. As long as they produce a warming trend everywhere all the time that correlates to CO2 increasing in the atmosphere they are doing their job.

They are unphysical claptrap. Utter tripe. Pure nonsense. Anyone who puts any weight on their output is an incompetent fool. More detail on Australia’s contribution to this laughing stock here:
https://wattsupwiththat.com/2023/08/14/climate-modelling-in-australia/

ScienceABC123
August 21, 2023 3:12 pm

So climate models are like M. C. Escher’s waterfall. Got it.

cilo
Reply to  ScienceABC123
August 22, 2023 6:01 am

…climate models are like M. C. Escher’s waterfall…

I object! Escher’s work was precisely mathematical. He was first and foremost a mathematician, who used art to demonstrate his mathematics.
Climastrology starts with the artistic invention, then make up math to frame it with.

slowroll
Reply to  cilo
August 22, 2023 9:25 am

Climate models are perhaps more like a Klein bottle.

Robert B
August 21, 2023 3:34 pm

There is a problem in science in general. “Publish or die” and peer review means that authors rarely tick off all the boxes. They merely tick off the boxes that they think will be checked even though the authors, themselves, are usually the best experts to do the review.

Kevin Kilty
August 21, 2023 3:47 pm

It seems to me perfectly reasonable to expect violations of conservation laws in these codes. Especially when the “imbalances” people are looking for in the GHG effect are extremely small compared to the input and flows — 0.5%. I wrote several fairly large codes dealing with coupled mass and heat transfer 50 years ago, but these climate modeling codes are 40 times the size of mine, and mine were hard enough to verify. The computer code here involves a set, a large set, of coupled partial differential equations, and a whole range of issues touches upon the behavior of the discretized version of these equations:

  1. Is the solution method explicit or implicit?
  2. Does the discrete solution converge to a true solution?
  3. Do round-off errors in the arithmetic accumulate, or are they damped away so that eventually very old errors left over from early iterations no longer matter?
  4. Initial conditions are subject to sampling issues; do these disturbances damp away?
  5. There are source/sink terms which occur on length and time scales so small (nanoscale heat and mass transfer at interfaces) that they can’t be included explicitly as basic physics processes like the macroscale terms, but are parameterized. Do the parameterizations disturb these conservation laws — are they unphysical?

What makes their results convincing for policy, though, is that anything can happen on a computer screen.
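
A tiny illustration of question 1 in the list above, using a single decay equation rather than anything climate-specific (all values are arbitrary):

```python
# The same decay equation du/dt = -k*u, stepped forward with an explicit and an
# implicit method.  With dt > 2/k the explicit (forward Euler) step amplifies
# instead of decaying, while the implicit (backward Euler) step stays bounded.
k, dt, nsteps = 1.0, 2.5, 40      # dt deliberately violates the explicit limit
u_explicit = u_implicit = 1.0

for _ in range(nsteps):
    u_explicit = u_explicit * (1.0 - k * dt)   # |1 - k*dt| > 1: grows every step
    u_implicit = u_implicit / (1.0 + k * dt)   # always damped, like the true solution

print(f"explicit: {u_explicit:.3e}   implicit: {u_implicit:.3e}")
```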

Nick Stokes
Reply to  Kevin Kilty
August 21, 2023 6:46 pm

“It seems to me perfectly reasonable to expect violations of conservation laws in these codes.”

But no violation is shown here. And in fact they do not occur. The codes are constructed to conserve mass, momentum and energy. But they are also monitored after the fact, and adjustments are made for discrepancies. These are too small to significantly alter what you want to know about the solution.

In fact what the linked paper is about is initializations and spinup. There will always be something unphysical about what we assume for an initial state, and that is countered by allowing a period for spurious effects to settle down. It used to be decades; now, apparently, centuries. That is ample for the air and surface regions that we care about, but is still too short for the deep oceans. There you’ll get results that vary due to initial choices that persist. It isn’t failure to conserve; it is variation of starting amount. It doesn’t really matter, because of the time scale difference. The paper is about ways of identifying and removing the effects (dedrifting).


bnice2000
Reply to  Nick Stokes
August 22, 2023 2:12 am

… adjustments

… unphysical

… spurious effects

… results that vary due to initial choices

These are The AGW Nick-pick memes.

They are unvalidated computer games…

… no matter what SPIN Nick-Pick want to put on them.!

Tim Gorman
Reply to  Nick Stokes
August 22, 2023 2:56 am

“There will always be something unphysical about what we assume for an initial state, and that is countered by allowing a period for spurious effects to settle down.”

“There you’ll get results that vary due to initial choices that persist.”

” It doesn’t really matter, because of the time scale difference.”

Garbage. First you state the variation in initial state will “settle down”, I assume meaning that the initial state doesn’t matter. Then you state that results vary because of initial choices that persist, i.e. that don’t settle down. Then you offer the excuse that it doesn’t matter anyway!

Spurious effects don’t “settle down” in an iterative process. They compound with each iteration. All you’ve really offered up here is magical handwaving. Willis E has already shown us code that has required a “limit statement” to be put in to keep the model from blowing up. That means the iterative process *does* compound for at least some of the processes in the code. Are the limiting statements for all the potential compounding processes accurate? If not, then the output of the model is inaccurate as well.

AlanJ
Reply to  Tim Gorman
August 22, 2023 6:18 am

If I understand the paper correctly, it’s not a problem with the underlying model physics (what Spencer is trying to imply), it’s a compute resource problem. Because the models are so computationally expensive, a decision is typically made to spin them up over a shorter scale than would be required to achieve that “settling down” you are saying doesn’t occur in models. The imbalance arises because the model’s initial state is set by observational datasets, which need time to sync up with the underlying model physics. If you allow enough spin up time, this will happen (imagine that your observational dataset for SSTs used as an initial state is an El Niño year, but the model doesn’t start with atmospheric circulation in place that would drive an El Niño – that doesn’t matter over many decades, but it would matter for the first few years of the model run), but if you want to cut the spin up short you will be left with some imbalance. It’s a tradeoff, and the result is that you need to dedrift the resultant datasets. That’s what the authors are describing here:

The most obvious solution to this issue would be to let the model run to equilibrium before performing any experiments of interest. The problem is that state-of-the-art coupled climate models are computationally expensive, which makes a “spinup” period of many thousands of years impractical. Instead, models are generally spun up for a few hundred years. Experiments will therefore exhibit changes/trends associated with incomplete model spinup, as well as changes related to external forcing or internal climate variability. 

The paper seems to be suggesting that this problem is pretty small for everything except the deep ocean, where the timescales are so large that cutting the spin up time short leaves a larger imbalance that has to be de-drifted.

The major consequence of this seems to be that you lose resolution in internal variability by dedrifting the data, not that you produce incorrect long term trends (what Spencer seems to be implying).
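
For readers unfamiliar with the term, “dedrifting” in practice usually means fitting a smooth trend to the preindustrial control run and subtracting it from the forced experiment. A sketch with synthetic data (not CMIP output; the linear fit is a simplification, as a higher-order fit is often used):

```python
import numpy as np

# Synthetic stand-in for a control run (spurious spin-up drift plus noise) and
# a forced run (the same drift plus a real trend).  Dedrifting estimates the
# drift from the control run and subtracts it from the forced run.
years = np.arange(150)
rng = np.random.default_rng(0)
drift = 0.004 * years                                   # spurious drift per year
control = drift + rng.normal(0.0, 0.05, years.size)
forced = drift + 0.01 * years + rng.normal(0.0, 0.05, years.size)

coeffs = np.polyfit(years, control, deg=1)              # estimate the drift
dedrifted = forced - np.polyval(coeffs, years)

print(f"raw trend      : {np.polyfit(years, forced, 1)[0] * 100:.2f} per century")
print(f"dedrifted trend: {np.polyfit(years, dedrifted, 1)[0] * 100:.2f} per century")
```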

Jim Gorman
Reply to  AlanJ
August 22, 2023 11:33 am

Do you really think that model ensembles get things correct?

Errors propagate regardless of any excuses. Don’t try to dismiss them. The evidence is clearly available to show that errors exist and that they are additive over multiple iterations. Uncertainty also propagates additively thru multiple iterations.

Excuses for wrong answers don’t cut it in science! If the answers don’t meet real world measurements, then they are simply wrong. Dr. Spencer is simply outlining one more reason that the models do not meet scientific criteria.

AlanJ
Reply to  Jim Gorman
August 22, 2023 12:21 pm

I’m assuming you’re dredging up Pat Frank’s nonsense, but that has nothing whatsoever to do with this paper. This is not about whether errors cancel or not, it’s about how long it takes the models to spin up and the tradeoff between the computational cost of waiting for them to fully spin up or cutting spinup short and dedrifting the results.

bnice2000
Reply to  AlanJ
August 22, 2023 2:16 pm

Your understanding is totally lacking as always.

Spin-up… tradeoffs… errors, pretending you can make errors converge rather than diverge, ie dedrifting…

You are basically clueless, and are just gibbering.

Betting you have never done any solving of differentials via computer programming.

Jim Gorman
Reply to  bnice2000
August 22, 2023 2:22 pm

One of the faithful trying to explain why evil is really good. It’s just how you look at it.

Tim Gorman
Reply to  bnice2000
August 23, 2023 6:59 am

I don’t know why you got the downchecks. You are correct in what you state!

The only way you can make errors converge is to state the conclusion before you write the model!

Jim Gorman
Reply to  AlanJ
August 22, 2023 2:19 pm

It matters not what excuses you use, the models propagate errors and uncertainty through each iteration to the next. Their output is not correct for many reasons.

Why don’t you address Dr. Spencer’s assertions directly rather than coming up with some strawman excuse to justify “refuting” his assertion by saying the people running the models create errors by not allowing complete “spin-up”. What a joke! Models have been run for 50 years and no one has taken the computer time to verify this?

You have no evidence that a “full spin-up” would have no errors or uncertainty passed to succeeding iterations. Keep trying.

AlanJ
Reply to  Jim Gorman
August 22, 2023 3:53 pm

I’m directly addressing what is in the actual paper. Whatever nonsense you believe about error propagation is irrelevant here, since that’s not what the paper is about nor what Roy Spencer is talking about.

No one is saying that allowing a model to fully spin up would remove all errors and uncertainties, what I’m saying is that the full spin-up is necessary to remove the drift the paper is talking about – but doing so is computationally prohibitive, so a tradeoff is often made, and the paper is talking about how well progress on reducing that tradeoff is going.

Jim Gorman
Reply to  AlanJ
August 22, 2023 6:04 pm

So what you are admitting is that the models are incorrect. You are just trying to excuse that by saying no one, anywhere, at any time in the past could afford the computational time to run accurate iterations of the software. That seems kind of funny. In other words, they couldn’t let one iteration run fully and save the results until time on the computer became available to run the next iteration.

I’ll say it again. That is just not scientific. It is especially egregious to then turn around and trumpet how accurate the models are.

AlanJ
Reply to  Jim Gorman
August 22, 2023 6:33 pm

There’s a staggering level of ignorance in your comment that makes it difficult to respond to productively. Not only is running experiments on these models computationally expensive (thousands to tens of thousands of dollars per hour), but it is time expensive as well. Some experiments need to run for many months to years, and with limited resources available, scientists are forced to make tradeoffs. We don’t want model results 20 years from now, we need to generate results for successive generations of models.

I’m not even certain what you’re trying to suggest by saying we could just save model results to a hard disc and use those for the next experiment. That’s not how any of this works. The results are not the inputs.

old cocky
Reply to  AlanJ
August 22, 2023 7:57 pm

thousands to tens of thousands of dollars per hour

That might have been the case with the Crays, but it’s surprising it’s still the case with the massively parallel Linux boards.

Or are they using “pay by the second” AWS or Google VMs now?

Jim Gorman
Reply to  old cocky
August 23, 2023 4:38 am

It seems a bit ironic that we’re spending trillions on EVs, windmills, kitchen ranges, boilers, and solar because we haven’t spent enough on research into models to get accurate predictions.

AlanJ
Reply to  Jim Gorman
August 23, 2023 6:56 am

I agree that we need to be funding climate research a lot more than we do. Nice to find some common ground.

Jim Gorman
Reply to  AlanJ
August 23, 2023 9:09 am

You didn’t even begin to understand what I said, did you?

AlanJ
Reply to  old cocky
August 23, 2023 7:01 am

They’re using giant supercomputers at research institutions. I don’t think it would be remotely cost-effective to pay for cloud computing to run these giant models on AWS.

old cocky
Reply to  AlanJ
August 23, 2023 1:43 pm

The most recent thing I had seen on supercomputers was quite a while back.
At that stage, they were massive Beowulf clusters.
I just don’t see how these cost thousands per hour to run.

Jim Gorman
Reply to  AlanJ
August 23, 2023 4:31 am

“The results are not the inputs.”

Now you are trying to say that the models are not run iteratively.

 “(thousands to tens of thousands of dollars per hour)”

This is a drop in the bucket compared to the investments being made due to the results of these models. If the cost is why they are so wrong, why are we making these investments based upon the models? Something is out of whack.

You are just dancing around a bush trying to blame poor modeling on the costs of computer time. That just won’t work. Show us some papers and news articles where scientists have bemoaned the fact that their research into models has been limited due to the lack of funding for new and faster computers with a consequence of poor results in predicting the climate for the next century.

AlanJ
Reply to  Jim Gorman
August 23, 2023 6:54 am

Now you are trying to say that the models are not run iteratively.

If you’re saying to save a time step to use as the initial conditions for a subsequent experiment, this is something that is done, and it certainly can save compute time, but it doesn’t solve the problem entirely. You still had to dedicate the compute resources to the initial spinup in the first place, which isn’t always an option. This approach is also not appropriate for some kinds of experiments, since by nature it will not allow you to examine transient response.

This is a drop in the bucket compared to the investments being made due to the results of these models. If the cost is why they are so wrong, why are we making these investments based upon the models? Something is out of whack.

It’s a monetary and time consideration, and there is an easy way around the drawbacks – simply dedrifting the results. So, depending on the needs of the particular study, cutting the spin-up short and dedrifting can be quite a lot more sensible.

You are just dancing around a bush trying to blame poor modeling on the costs of computer time. 

I’m trying to discuss what the paper is actually talking about, since almost all of the discussion in this comment section is completely wide of the mark. The paper cites incomplete spinup as the primary reason for model drift, and it’s discussing the state of improvements to this drift in CMIP6 experiments. They note:

The overall reduction in drift from CMIP2+ to CMIP5 has been primarily attributed to longer spinup times and more careful initialization of the coupled ocean–atmosphere system

Your insistence that this has nothing whatsoever to do with the issue is just highlighting the fact that you haven’t read or understood the paper under discussion.
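For readers unfamiliar with the term, “dedrifting” in the CMIP drift literature usually means fitting a low-order polynomial to the matching control run and subtracting that fitted drift from the forced experiment. A minimal sketch in Python, assuming a cubic fit and synthetic data (the function name, variable names, and numbers below are illustrative only, not taken from any model’s code):

import numpy as np

def dedrift(forced, control, order=3):
    """Estimate drift from the control run and remove it from the forced run."""
    t = np.arange(control.size)
    coeffs = np.polyfit(t, control, order)           # drift fitted on the control run
    drift = np.polyval(coeffs, np.arange(forced.size))
    return forced - (drift - drift.mean())           # remove only the time-varying part

# Synthetic example: a spurious 0.2 K/century drift under a forced warming signal.
rng = np.random.default_rng(0)
years = np.arange(150)
spurious_drift = 0.002 * years
control = spurious_drift + 0.05 * rng.standard_normal(years.size)
forced = 0.01 * years + spurious_drift + 0.05 * rng.standard_normal(years.size)

print("raw forced trend (K/yr):      ", np.polyfit(years, forced, 1)[0])
print("dedrifted forced trend (K/yr):", np.polyfit(years, dedrift(forced, control), 1)[0])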

Tim Gorman
Reply to  AlanJ
August 23, 2023 7:05 am

“The results are not the inputs.”

They ARE in an iterative model. The output of each iterative step becomes the input for the next iteration.

You are just spouting nonsense. Have you *ever* stopped to think about what you are posting before hitting the POST button?

Reply to  Tim Gorman
August 23, 2023 11:43 am

He doesn’t understand, which is why he avoids the specific details of the article and throws in a bunch of words instead.

Reply to  AlanJ
August 23, 2023 11:40 am

Yet you can’t explain why it is wrong at all, thus your comment was dead on arrival.

When are you going to actually discuss some details of the article you keep avoiding?

Tim Gorman
Reply to  AlanJ
August 23, 2023 7:03 am

Error propagation is *ALWAYS* required; it is *NOT* nonsense. This is the same excuse the modelers use when they say all measurement error is random, Gaussian, and always cancels out!

It’s a wrong assumption on the face of it. If nothing else, systematic error can *NOT* cancel, and Pat Frank has shown the in-built systematic error of LIG thermometers.

If the model training is done with observations that have error then the model will have that very same error built in. You simply can’t avoid it.

AlanJ
Reply to  Tim Gorman
August 23, 2023 7:10 am

I did not say error propagation is nonsense, I said Pat Frank’s take on it is nonsense, and I said that it has nothing whatsoever to do with this paper.

Reply to  AlanJ
August 23, 2023 11:38 am

LOL, you didn’t read the presentation and understand it; you are here to spread fog, just like Nick commonly does.

Why don’t YOU make a detailed counterpoint based on what is published at the top of the page?

old cocky
Reply to  AlanJ
August 22, 2023 3:00 pm

I’m assuming you’re dredging up Pat Frank’s nonsense

No, error propagation is a fact of life with numerical methods and finite precision.

Modern systems allow far higher precision operations than Lorenz had in the early 1960s, but sensitive dependence on initial conditions doesn’t disappear.
The higher precision is offset by the higher speeds allowing more iterations over shorter time steps.
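The sensitive dependence mentioned here is easy to demonstrate. A minimal Python sketch (a forward-Euler integration of the classic Lorenz-63 system with Lorenz’s standard parameters; nothing below comes from any climate model): two trajectories started 1e-10 apart end up in completely different states after a few dozen model time units.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-10, 1.0, 1.0)      # identical except for a 1e-10 perturbation

for step in range(1, 40001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 10000 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.001:5.1f}   separation = {sep:.3e}")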

old cocky
Reply to  old cocky
August 22, 2023 4:04 pm

Another -1.

We have a rounding error denier as well…

Tim Gorman
Reply to  old cocky
August 23, 2023 7:11 am

That’s because they are professing religious dogma and not looking at actual reality. “The World Ends Tomorrow” – no matter what.

Tim Gorman
Reply to  old cocky
August 23, 2023 7:10 am

Precision is not accuracy. That is the common mistake those in climate science continually make – and AlanJ is a perfect example.

A very precise number that is highly inaccurate is useless for predicting the future. That is the problem with the models. Sooner or later the inaccuracies (i.e., the uncertainty) overwhelm whatever precision you think you have. Stating a result out to the ten thousandths digit when the uncertainty is in the tenths digit is FALSE accuracy. You simply can’t KNOW what is in the ten thousandths digit when you aren’t even sure what the tenths digit actually is.

old cocky
Reply to  Tim Gorman
August 23, 2023 2:04 pm

Precision is not accuracy.

The statement was purely regarding the effect of internal precision on the iterative effects of rounding errors.

Stating a result out to the ten thousandths digit when the uncertainty is in the tenths digit is FALSE accuracy. You simply can’t KNOW what is in the ten thousandths digit when you aren’t even sure what the tenths digit actually is.

It’s not the number of digits in the result, it’s the resistance to rounding errors in the calculations.

Something which would be worth doing, and perhaps it has been done, is to seed model run “spin-up” periods with data points at either end of the uncertainty bounds and see what happens.

Tim Gorman
Reply to  old cocky
August 23, 2023 3:34 pm

Rounding of results that have uncertainty baked in is, again, false accuracy. It doesn’t matter how careful you are with your rounding if the digit you are rounding is uncertain. That’s why Taylor suggests never taking intermediate results from measurements more than one digit past the last significant digit. The last significant digit in your result should be of the same order of magnitude as the uncertainty. If the uncertainty in your measurements is +/- 0.5C then temperature averages shouldn’t be quoted past the tenths digit. It doesn’t matter how careful you are at rounding the thousandths digit.

old cocky
Reply to  Tim Gorman
August 23, 2023 4:39 pm

It has to do principally with the conversion from base 10 to base 2 (and back) and the representation of real numbers (float or double) internally as mantissa + exponent.
Just to make life more interesting, there are different internal representations, though IEEE is the most likely.
There’s a brief discussion at https://softwareengineering.stackexchange.com/questions/215065/can-anyone-explain-representation-of-float-in-memory

The ALU can also make a difference. The old joke about the Pentium 66 was that 2 + 2 = 3.9999999999999999973

There were also cases where the “same” compiler on different architectures (from memory, SPARC, Alpha, POWER and x86) gave subtly different results for the same calculation.

There can also be subtle differences relating to the optimisation flags used when compiling, whether the FPU is used, and a host of others.

My joke about bugs being raised to the power of the number of languages was only half a joke. Unless all the programs involved in passing data through function calls use exactly the same internal representation and the same internal calculation methods, you are looking at a Mars lander.
Every time you dump out intermediate values to be used by another program, you are also introducing error.

Once you start running these through hundreds of millions of iterations, it can add up.
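A small toy illustration of that kind of accumulation (a naive sum, not anyone’s model code): adding 0.1 a million times in single precision drifts visibly from the exact value, double precision drifts far less, and an error-compensated sum (Python’s math.fsum) barely drifts at all.

import math
import numpy as np

N = 1_000_000
exact = N * 0.1                      # 100000.0

total32 = np.float32(0.0)
for _ in range(N):
    total32 += np.float32(0.1)       # single-precision accumulator

total64 = 0.0
for _ in range(N):
    total64 += 0.1                   # double-precision accumulator

compensated = math.fsum([0.1] * N)   # error-compensated summation

print("float32:", float(total32), "  error:", float(total32) - exact)
print("float64:", total64, "  error:", total64 - exact)
print("fsum   :", compensated, "  error:", compensated - exact)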

Tim Gorman
Reply to  old cocky
August 24, 2023 4:40 am

“2 + 2 = 3.9999999999999999973”

That is truly a violation of the rules on significant digits. It’s an easy trap to fall into when you are using a computer capable of floating-point arithmetic. It’s part and parcel of the concept that an infinitely repeating decimal is infinitely precise and accurate.

Too many mathematicians and computer scientists are taught that more digits in the answer is better. Always take a calculation out to the limits of your computer’s data representation. That may be true in an abstract “math” world but it simply doesn’t apply to the real world.

In the real world that equation would be written as:

(2 +/- u1) + (2 +/- u2) = ?

Since the stated value has one significant digit, the stated value of the answer should have one significant digit as well. So you would get:

(2 +/- u1) + (2 +/- u2) = 4 +/- (u1 + u2)

It doesn’t matter how many digits your computer can handle. If your addition algorithm doesn’t handle the significant digits properly then your answer is garbage in the real world.

Much of the engineering for the first space flights was done using slide rules. A slide rule, by its nature, limits how many digits you can resolve thus following the significant digit rules pretty well. You didn’t need to calculate the diameter of the fuel line for the Mercury booster out to 16 digits when you could only measure it accurately out to 3 decimal places using a micrometer.
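As a tiny worked sketch of the addition above (made-up numbers; the linear sum u1 + u2 is the worst-case bound used in the comment, while the root-sum-square is the usual rule for independent uncertainties):

def add_measurements(x, ux, y, uy):
    """Sum two measured values and combine their uncertainties two ways."""
    worst_case = ux + uy                   # u1 + u2, as written in the comment above
    independent = (ux**2 + uy**2) ** 0.5   # root-sum-square for independent errors
    return x + y, worst_case, independent

value, u_worst, u_rss = add_measurements(2, 0.5, 2, 0.5)
print(f"(2 +/- 0.5) + (2 +/- 0.5) = {value} +/- {u_worst} (worst case)")
print(f"                          = {value} +/- {u_rss:.1f} (independent errors)")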

old cocky
Reply to  Tim Gorman
August 24, 2023 2:43 pm

They’re different animals. You’re looking at the joke from an Engineer’s perspective as spurious precision. It’s actually a Computer Scientist’s joke about the other meaning of GIGO (Good In, Garbage Out), which can arise from any of the causes I listed as well as a stack more.

The joke isn’t accurate, but what jokes are? Look up the Pentium FDIV bug to see what it really was.

The map isn’t the territory, and what’s in the computer’s memory is only the map. Chaos theory came from Edward Lorenz re-entering intermediate values by hand and not using the full precision. With iterative numerical methods, the more internal precision, the less quickly rounding errors accumulate.

Too many mathematicians and computer scientists are taught that more digits in the answer is better. 

Mathematicians, perhaps. A properly educated Computer Scientist knows what horrors dwell in the depths.

I was only half joking on the occasions when I said that scientists should never be allowed near a computer keyboard. As with statistics, intelligent people can use the tools to get results, but it requires the correct background to use the right tool(s) for the job, and to spot when even the right tools give the wrong results.

Tim Gorman
Reply to  old cocky
August 24, 2023 3:04 pm

“Chaos theory came from Edward Lorenz re-entering intermediate values by hand and not using the full precision.”

I did not know that. And it *is* one of my foibles – looking at things from an engineer’s viewpoint only. I suspect it stems from being held responsible if you don’t pay attention to actual accuracy limits.

old cocky
Reply to  Tim Gorman
August 24, 2023 3:47 pm

I had to look it up. I remembered it had to do with him noticing very different outcomes from minor differences in inputs. He had one of Asimov’s “that’s funny?” moments, which led to the new field.

There is often benefit to be gained from a fresh pair of eyes providing cross-pollination between fields.
Nick’s FASTFLO work certainly seems to fit into this category.

Conversely, pontificating outside one’s area of expertise often leads to falling flat on one’s face.

There seems to be a fine line between the two 🙂

bnice2000
Reply to  AlanJ
August 22, 2023 2:09 pm

“If I understand the paper correctly”

Which is highly unlikely!

If you really think they have the underlying physics correct, when they don’t properly include clouds or gas laws, and the output is generally garbage that is in no way representative of reality…

… then you are living in fantasy la-la-land… you are living in cult-based nonsense.

All you have typed in your nonsense post is idiotic SPIN!

Tim Gorman
Reply to  bnice2000
August 23, 2023 7:15 am

If they actually had the underlying physics correct then they would also know the initial conditions exactly. The proof is the variance in the outputs of the ensemble members. The wider the variance, the higher the uncertainty of the average value. The ensemble variance never seems to shrink, meaning that the “average” remains highly uncertain. If the underlying physics were correct, the ensemble members would have started to converge over the past 30 years – but they haven’t!

Tim Gorman
Reply to  AlanJ
August 23, 2023 6:58 am

“If you allow enough spin up time, this will happen (imagine that your observational dataset for SSTs used as an initial state is an El Niño year, but the model doesn’t start with atmospheric circulation in place that would drive an El Niño – that doesn’t matter over many decades, but it would matter for the first few years of the model run)”

Do you truly realize how inane this statement is? “If it starts out of sync it will eventually get into sync”? What forces that synchronization? It can’t be FUTURE observations since they haven’t occurred yet!

What you seem to be saying here is, once again, that initial conditions don’t matter – that the models will be correct over time. If that were true you would *NOT NEED* to even input initial conditions – just start the model running and it will eventually turn out a correct result!



AlanJ
Reply to  Tim Gorman
August 23, 2023 7:29 am

To be clear, I’m not saying my example is a realistic one – it is just conceptually easy to understand. And yes, I am exactly saying that the model will eventually sync up if the initial conditions are out of phase with the model physics if given enough time. Take the example where SSTs are out of phase with atmospheric circulation – those SST anomalies are going to quickly dissipate as circulation drives currents/upwelling in the tropical oceans. It might take a few weeks, but that disparity isn’t going to persist because the laws of physics embedded in the model won’t let it. I suppose you could instantiate a model run with initial conditions so unrealistic that it is impossible to reconcile them with the physics, but that isn’t an issue encountered in typical applications.

It is absolutely necessary to instantiate the model with initial conditions – you can’t simply not have surface temperatures or wind or ocean currents.

Tim Gorman
Reply to  AlanJ
August 23, 2023 7:49 am

How do they sync up with unknown conditions in the future? What you *really* mean is that they wind up giving you the pre-conceived answer.

“It is absolutely necessary to instantiate the model with initial conditions – you can’t simply not have surface temperatures or wind or ocean currents.”

That’s the whole point! Have you really thought through what you just posted here? If you don’t know the initial conditions, ALL OF THEM, to some level of accuracy then your output is worthless. What initial conditions are provided to the models for global cloud cover? For regional or local cloud cover? What initial conditions are provided for ocean temperature – both vertically and horizontally?

How are the uncertainties in those initial conditions allowed for? Monte Carlo analysis won’t tell you because it doesn’t provide for systematic uncertainty in any of the given initial conditions.

You are back to saying that initial conditions are essential while also saying that they don’t matter. Pick one and stick with it!

AlanJ
Reply to  Tim Gorman
August 23, 2023 8:15 am

How do they sync up with unknown conditions in the future? What you *really* mean is that they wind up giving you the pre-conceived answer. 

If I model a ball under the force of gravity and start with an initial upward velocity for the ball of 10 m/s, then start the model, at first the ball will be out of equilibrium with the model physics – the ball is moving upward with constant velocity but gravity is pulling it downward. If I let the model run for a while it will reach a state of equilibrium with the ball resting on the ground. That isn’t a “pre-conceived answer,” it’s just the physics of the model. If I stopped the “spinup” of my model before the system reached equilibrium and began treating it as though it were at equilibrium, I would get artifacts arising from that, which I would have to deal with. That doesn’t mean my model’s treatment of gravity was wrong.

That’s the whole point! Have you really thought through what you just posted here? If you don’t know the initial conditions, ALL OF THEM, to some level of accuracy then your output is worthless.

Why would it be worthless? It might not be helpful if your goal is, say, short-term weather prediction, but if the goal is long-term climate projection then the specific weather conditions at the moment the model started aren’t super important.
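A toy version of that ball analogy, for concreteness (the bounce losses, restitution coefficient, and time step below are arbitrary illustration choices, nothing from a GCM): whatever initial velocity you start with, the simulation ends up in the same equilibrium state, at rest on the ground; only the time it takes to get there changes.

G = 9.81           # gravitational acceleration, m/s^2
DT = 0.001         # time step, s
RESTITUTION = 0.6  # fraction of speed kept after each bounce (energy loss)

def settle(v0, t_max=60.0):
    """Integrate a bouncing ball launched upward at v0 until it is nearly at rest."""
    h, v, t = 0.0, v0, 0.0
    while t < t_max:
        v -= G * DT            # gravity acts every step
        h += v * DT
        if h <= 0.0:           # ground contact: bounce with energy loss
            h = 0.0
            v = -v * RESTITUTION
            if abs(v) < 0.01:  # effectively at rest: the "spun up" equilibrium
                return t
        t += DT
    return t

for v0 in (1.0, 10.0, 50.0):
    print(f"initial velocity {v0:5.1f} m/s -> at rest after ~{settle(v0):5.1f} s")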

Tim Gorman
Reply to  AlanJ
August 23, 2023 12:48 pm

“If I model a ball under the force of gravity and start with an initial upward velocity for the ball of 10 m/s, then start the model, at first the ball will be out of equilibrium with the model physics – the ball is moving upward with constant velocity but gravity is pulling it downward.”

Malarky! When you start the model you had better start it with the conditions in existence at the time, i.e. a ball velocity of 10 m/s.

It’s what happens next that you keep on ignoring. If that ball passes through an updraft THAT YOU DON’T KNOW ABOUT then the model’s result is going to be totally wrong. Change “updraft” to “cloud cover” and it’s why the climate models simply can’t get things correct.

You can’t parameterize dynamic atmospheric conditions. Averages won’t give you the right answer.

“If I let the model run for a while it will reach a state of equilibrium with the ball resting on the ground.”

Maybe it will but it might be miles from where you think it will land and at a far different arrival time! So you’ll *never* wind up in sync!

“I would get artifacts arising from that, which I would have to deal with.”

You are going to get artifacts no matter what you do because your models are not in sync with a dynamical system. It’s why hurricane track models vary so widely! None of them ever reach a full sync point and they all diverge.

You are still trying to justify the cognitive dissonance of believing that initial conditions matter while also claiming they don’t matter. Pick one and stick to it!

Jim Gorman
Reply to  AlanJ
August 23, 2023 8:58 am

You are basically saying near-future predictions are not believable but, by golly, predictions 30, 70, 100 years in the future are accurate regardless of the initial conditions.

You know what, the proof is in the pudding. Model predictions have actually expanded the uncertainty of ECS and continue to be further and further from reality.

Your attempt to blame the inaccuracy of the predictions on computer time rather than on the model programming is not going to convince anyone.

Reply to  Jim Gorman
August 23, 2023 11:46 am

He doesn’t know when to quit pushing his badly flawed arguments which is why many here can’t take him seriously.

AlanJ
Reply to  Jim Gorman
August 24, 2023 5:20 am

Near-term prediction is trickier than long-term projection for precisely the reason laid out above. Any deviation in your initial conditions from the actual conditions on the ground will produce variations in the modeled weather. If you’re trying to predict the path of a hurricane, for instance, your model will produce a hurricane, but it won’t be the hurricane you are trying to model. Something Jeff Goldblum taught us all so well in 1993. Tiny variations in the initial conditions, imperfections in the skin, and all that.

But that doesn’t matter for long-term projection, because you don’t care whether the model produces specific weather events; you care that it produces realistic weather events. Thus you need initial conditions, but they don’t have to be flawlessly representative of the state of the weather at the moment you captured them.

Your attempt to blame the inaccuracy of the predictions on computer time rather than on the model programming is not going to convince anyone.

You Gormans seem to struggle an awful lot to stay on topic. The subject of the paper under discussion is how truncated spinup produces model drift, how this has improved over time, and ways to identify and address it. I’m not attempting to blame anything; I’m explaining what the paper is actually talking about.

Tim Gorman
Reply to  AlanJ
August 24, 2023 8:41 am

“But that doesn’t matter for long-term projection, because you don’t care whether the model produces specific weather events; you care that it produces realistic weather events.”

Pure sophistry!

How do you *KNOW* that future weather events are realistic? You are designing the model to produce the “realistic” weather events you expect! Circular logic at its finest! Decide on what the result should be and then design the model to give that result!

You are *still* trying to justify that initial conditions don’t determine future results while also claiming that initial conditions are required to bring the model into sync.

Wake up!

AlanJ
Reply to  Tim Gorman
August 24, 2023 9:37 am

How do you *KNOW* that future weather events are realistic?

Because we know what hurricanes and thunderstorms and atmospheric circulation look like, and we can see that the models produce them very well. Models are not designed to yield a specific output – the “weather” changes are emergent phenomena of the underlying physics. Nobody tells a model what a hurricane looks like, for instance; they just pop up organically in the models due to the physics.

You are *still* trying to justify that initial conditions don’t determine future results while also claiming that initial conditions are required to bring the model into sync.

Not at all what I’m saying. I’m saying you need to give enough spin up time to allow the model physics and initial conditions to sync up. If you cut the spin up time short, they won’t be in sync and you will get model drift. That is what this paper is about.

Tim Gorman
Reply to  AlanJ
August 24, 2023 12:33 pm

“Because we know what hurricanes and thunderstorms and atmospheric circulation look like, and we can see that the models produce them very well.”

None of the models predicted the last two multi-decadal pauses in warming. Apparently they *DON’T* know the physical processes well enough to tell what the results are going to be in the future; spin-up time is irrelevant. They haven’t gotten the temperature increase correct either – meaning they don’t know what the physical processes cause as a result.

The models not only have to have hurricanes and thunderstorms “crop up,” they have to crop up at the right times and in the right places to properly match what is happening in the future – and that includes multi-decadal pauses.

As Pat Frank has shown, a simple linear equation gives the same result the models do – and that equation doesn’t predict, or even hindcast, the recent pauses.

The models simply aren’t fit for purpose. They do *NOT* correctly model the physics of the earth. They are nothing more than confirmation of someone’s guess at what will happen in the future. As a predictive tool they are useless.

AlanJ
Reply to  Tim Gorman
August 25, 2023 5:10 am

None of the models predicted the last two multi-decadal pauses in warming. Apparently they *DON’T* know the physical processes well enough to tell what the results are going to be in the future; spin-up time is irrelevant. They haven’t gotten the temperature increase correct either – meaning they don’t know what the physical processes cause as a result.

What were the last two multidecadal pauses in warming? The “pause(s)” were well within the envelope of model spread:

[linked image: model ensemble spread vs. observations]

It shouldn’t be expected that every model is going to capture exactly the same mode of short term, unforced (internal) variability at the same time. It’s just semi-random noise.

Tim Gorman
Reply to  AlanJ
August 25, 2023 6:07 am

If you can’t see from this graph that the models are DIVERGING from the observational data, as unreliable as *that* is, then you are just being willfully ignorant.

Dan Hughes
Reply to  Nick Stokes
August 22, 2023 5:02 am

The codes are constructed to conserve mass, momentum and energy. But they are also monitored after the fact, and adjustments are made for discrepancies.

They do, but they don’t?? Is this DoubleThink illustrated?

Tim Gorman
Reply to  Dan Hughes
August 23, 2023 12:49 pm

It’s called cognitive dissonance. Believing two opposite things at the same time.

rxc6422
Reply to  Nick Stokes
August 22, 2023 11:45 am

“But no violation is shown here. And in fact they do not occur. The codes are constructed to conserve mass, momentum and energy. But they are also monitored after the fact, and adjustments are made for discrepancies. These are too small to significantly alter what you want to know about the solution.”

So, there is no violation, but when violations are discovered, “something” is “adjusted” to “…[tell] you what you want to know about the solution”. This sounds like the results are adjusted to get the answer desired. And tuning the codes is one method of doing so. But if no violations occur, then why are adjustments made? This makes no sense whatsoever.

Tim Gorman
Reply to  rxc6422
August 23, 2023 7:16 am

“This sounds like the results are adjusted to get the answer desired.”

100% correct!

TimTheToolMan
Reply to  Nick Stokes
August 22, 2023 1:19 pm

“The paper is about ways of identifying and removing the effects (dedrifting).”

And it is another practical confirmation that the models are not capable of unforced climate change. Another nail in the coffin of natural variability as seen in the models. Little wonder, then, that the claim is the earth has no unforced natural variability.

TimTheToolMan
Reply to  TimTheToolMan
August 22, 2023 1:45 pm

Oh wait, silly me. This is actually another confirmation the models aren’t simulating the climate.

If the deep oceans are out of sync after a long spin-up then they’re out of sync, full stop. Initial conditions must come from observations and, in energy terms, can’t be far out. So the assumption that the pre-industrial earth is not naturally changing is simply invalid.

Or further confirmation the simulation itself is flawed.

Or our knowledge of the energy content of the deep oceans is rubbish.

Or all of the above and more.

Tim Gorman
Reply to  TimTheToolMan
August 23, 2023 7:17 am

100% correct.

sherro01
August 21, 2023 4:50 pm

It would have been interesting to learn of Maurits Escher’s critique of the illustration heading this article. But sadly, Escher died in 1972.

ferdberple
August 22, 2023 4:33 am

Timeslices fundamentally break causality, leading to non-conservation.

Take two events, one of which happens before the other. Within a timeslice they happen at the same time. This is fundamentally non-physical.

I ended up rewriting a simple model for just this reason to avoid timeslice errors.

Tim Gorman
Reply to  ferdberple
August 22, 2023 4:40 am

Uncertainties in the initial conditions generate uncertainties in the output of the first timeslice. The output of that initial timeslice becomes the initial conditions for the second timeslice, thus propagating the uncertainty in the first timeslice’s output into the second timeslice. Those uncertainties therefore compound at each successive iteration. There is no “settling out” over time.

When your timeslice is a year-long, it is impossible to avoid having events happening throughout the year being considered as happening at the same time.
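A toy illustration of the disagreement in this sub-thread (an abstract recurrence, not a climate model): whether an uncertainty interval pushed through an iterated update grows, holds steady, or shrinks depends entirely on the dynamics of the update, represented here by a single, made-up feedback factor a.

def propagate(a, forcing, x0, u0, steps):
    """Worst-case half-width of an interval iterated through x <- a*x + forcing."""
    x, u = x0, u0
    for _ in range(steps):
        x = a * x + forcing
        u = abs(a) * u          # worst-case growth/decay of the interval half-width
    return x, u

for a in (0.95, 1.00, 1.05):    # damping, neutral, and amplifying feedback
    x, u = propagate(a, forcing=0.1, x0=0.0, u0=0.01, steps=100)
    print(f"a = {a:4.2f}:  x = {x:8.2f}  +/- {u:.4f}")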

old cocky
Reply to  Tim Gorman
August 22, 2023 1:56 pm

Edward Lorenz noticed this in the early 1960s.

A whole new field of mathematics came out of it.

prjndigo
August 22, 2023 8:39 am

They don’t even obey gravity or acknowledge that the earth’s atmosphere is an open system that can expand and contract – or acknowledge that gases aren’t hydraulic.

Ignoring specific heat per volume and density changes makes the models invalid at any output.

slowroll
Reply to  prjndigo
August 22, 2023 9:32 am

And the effect of specific heat in the atmosphere is well demonstrated by the obvious differences in air-temperature behavior in dry desert areas versus humid areas. I suspect the models don’t properly address that for a reason, and not a scientific one.

Tim Gorman
Reply to  slowroll
August 23, 2023 7:29 am

It’s why averaging Las Vegas temperatures with Miami and Bismarck, ND temperatures is so wrong. T is not a good proxy for heat.

Tim Gorman
Reply to  prjndigo
August 23, 2023 7:24 am

PV = nRT is not part of their religious dogma.

rxc6422
August 22, 2023 9:30 am

I used to evaluate thermal hydraulic (T/H) computer codes for nuclear reactors for the US Govt. I also used to organize test exercises for the international community of T/H modelers, where the participants were provided with the fundamental information needed to model actual test facilities, some of which were scale models, but some of which were actual operating nuclear reactors and nuclear power plants.

I remember specifically one exercise involving a very simple scenario with a very simple facility that caused the modelers a lot of problems, because it involved the creation of very low temperatures due to gas expansion above a water surface. The codes were not able to model this phenomenon, so the modelers had to be “creative” to get the right answer. The most creative participants were from organizations which had no skin in the game – they worked for academic or governmental organizations and did not have to justify their results to the regulators. One of them used a “negative flow loss coefficient” (very un-physical) to get the answer he wanted.

Another experiment was a one-dimensional experiment of fluid and particulate flow in a long tube. The modelers all used the same basic code, but were allowed to “tune it” as they saw fit. The results from about a dozen participants varied by about 6 orders of magnitude, with the academic and government participants again showing the most “creative” ways to model the physical system.

I have been a skeptic of the climate models ever since I first heard about them, because of my experience modeling well-characterized systems in nuclear plants. It was very hard to model certain scenarios over a period of several days inside a closed building or even a well-understood reactor, and I could not understand how anyone could think they could really model the weather for an entire planet for decades at a time without lots of creative tuning. Which renders the results immediately suspect.

Failure to achieve a mass and energy balance in a T/H code is a good indication that it is not fit for purpose.  

SteveZ56
August 22, 2023 12:55 pm

When climate modelers talk about increases of X petajoules of “ocean heat content” over the last 50 years or so, given the vast mass of the oceans, this translates to a temperature variation of a few thousandths (0.001) of a degree C. Can the Argo floats, which sometimes measure temperatures in the deep ocean, measure temperature to that precision, or are all modeled variations in OHC lost in instrument error, and therefore undetectable in reality?

The entire mass of the atmosphere over 1 m2 of the earth’s surface is equivalent to a height of about 10,330 meters at sea level pressure (about 1013 mb average). If such a column of air contained 1% water vapor by volume, the mass of water vapor would be about 78.7 kg at 15 C. In reality, the water vapor content at altitude would be much less than 1%, since cold air at altitude cannot hold that much water vapor without precipitation.

The average depth of the oceans is about 3.7 km, meaning that the mass of water in 1 m2 of ocean is about 3700 m * 1000 kg/m3 = 3.7 million kg.

This means that a major change in the mass of water vapor in the atmosphere would not result in much change in the mass of water in the oceans. Doubling the water vapor in the atmosphere would decrease the ocean mass by about 21 ppm in this example. This means that an error in the mass balance of the water in the atmosphere would go unnoticed in the Ocean Heat Content calculation.

But if the water vapor content of the atmosphere (calculated by a model) increases, the additional water has to come from evaporation from a body of water, which requires about 2400 kJ/kg of latent heat (at 15 C). This would have very little effect on the total heat content of the oceans, but a large effect on the atmospheric heat balance.

When the same models are used to predict melting of polar ice caps, the heat of melting ice is about 333 kJ/kg, or about 333 MJ per m3 of water formed. If the column of air over 1 m2 is heated by 1 deg C, the heat required is about 105 MJ. This means that if there was “global warming” of 1 deg C over the poles, it would melt a depth of ice of 105 / 333 = 0.315 m, but only during the seasons when the air temperature is above 0 deg C.

If the ice was melted to a depth of 0.315 m over the entire Antarctic and Greenland ice sheets (15.7 million km2), it would form about 4.95*10^12 m3 (or 4950 gigatonnes) of water added to the oceans. But since the area of the oceans is 361 million km2 = 3.61*10^14 m2, the resulting sea level rise would be (4.95*10^12) / (3.61*10^14) = 0.0137 m, or about 0.54 inch.

So if the entire atmosphere was warmed up by 1 C, it would have enough heat to melt enough ice to raise sea level by 1.37 cm, or 0.54 inch. From the point of view of a simple heat balance over water and ice, this is not much to worry about.

When will these ultra-sophisticated climate modelers learn to do a heat and mass balance, and realize that their “dreaded” warming of 1.5 or 2.0 C is literally an inch in the ocean?
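For anyone who wants to check the last few steps of the arithmetic above, here is the same calculation in Python, using the figures exactly as stated in the comment (melt depth, ice-sheet area, ocean area); it is arithmetic on those stated inputs, not an independent estimate.

ICE_SHEET_AREA = 15.7e12   # m^2, Antarctic + Greenland ice sheets, as stated
OCEAN_AREA = 3.61e14       # m^2, global ocean area, as stated
MELT_DEPTH = 0.315         # m of ice melted per m^2, as stated
WATER_DENSITY = 1000.0     # kg/m^3

melt_volume = MELT_DEPTH * ICE_SHEET_AREA            # m^3 of meltwater
melt_mass_gt = melt_volume * WATER_DENSITY / 1e12    # gigatonnes
sea_level_rise_m = melt_volume / OCEAN_AREA          # spread over the ocean surface

print(f"meltwater volume: {melt_volume:.2e} m^3  (~{melt_mass_gt:.0f} Gt)")
print(f"sea level rise  : {sea_level_rise_m * 100:.2f} cm "
      f"({sea_level_rise_m / 0.0254:.2f} inch)")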

Tim Gorman
Reply to  SteveZ56
August 23, 2023 7:37 am

“this translates to a temperature variation of a few thousandths (0.001) of a degree C. Can the Argo floats, which sometimes measure temperatures in the deep ocean, measure temperature to that precision,”

It’s not a matter of precision; it is a matter of accuracy. The Argo floats have a measurement uncertainty of +/- 0.5C. It doesn’t matter if you can measure down to the thousandths digit if you don’t know whether it is accurate or not. 0.001C +/- 0.5C is meaningless.
