2021 Tied for 6th Warmest Year in Continued Trend, NASA Analysis Shows (Claims)

From NASA

2021 was tied for the sixth warmest year on NASA’s record, which stretches back more than a century. Because the record is global, not every place on Earth experienced the sixth warmest year on record. Some places had record-high temperatures, and we saw record droughts, floods and fires around the globe. Credits: NASA’s Scientific Visualization Studio/Kathryn Mersmann

Read this press release in Spanish here.

Earth’s global average surface temperature in 2021 tied with 2018 as the sixth warmest on record, according to independent analyses done by NASA and the National Oceanic and Atmospheric Administration (NOAA).

Continuing the planet’s long-term warming trend, global temperatures in 2021 were 1.5 degrees Fahrenheit (0.85 degrees Celsius) above the average for NASA’s baseline period, according to scientists at NASA’s Goddard Institute for Space Studies (GISS) in New York. NASA uses the period from 1951-1980 as a baseline to see how global temperature changes over time.

Collectively, the past eight years are the warmest years since modern recordkeeping began in 1880. This annual temperature data makes up the global temperature record – which tells scientists the planet is warming.

According to NASA’s temperature record, Earth in 2021 was about 1.9 degrees Fahrenheit (or about 1.1 degrees Celsius) warmer than the late 19th century average, the start of the industrial revolution.

“Science leaves no room for doubt: Climate change is the existential threat of our time,” said NASA Administrator Bill Nelson. “Eight of the top 10 warmest years on our planet occurred in the last decade, an indisputable fact that underscores the need for bold action to safeguard the future of our country – and all of humanity. NASA’s scientific research about how Earth is changing and getting warmer will guide communities throughout the world, helping humanity confront climate change and mitigate its devastating effects.”

This warming trend around the globe is due to human activities that have increased emissions of carbon dioxide and other greenhouse gases into the atmosphere. The planet is already seeing the effects of global warming: Arctic sea ice is declining, sea levels are rising, wildfires are becoming more severe and animal migration patterns are shifting. Understanding how the planet is changing – and how rapidly that change occurs – is crucial for humanity to prepare for and adapt to a warmer world.

Weather stations, ships, and ocean buoys around the globe record the temperature at Earth’s surface throughout the year. These ground-based measurements of surface temperature are validated with satellite data from the Atmospheric Infrared Sounder (AIRS) on NASA’s Aqua satellite. Scientists analyze these measurements using computer algorithms to deal with uncertainties in the data and quality control to calculate the global average surface temperature difference for every year. NASA compares that global mean temperature to its baseline period of 1951-1980. That baseline includes climate patterns and unusually hot or cold years due to other factors, ensuring that it encompasses natural variations in Earth’s temperature.
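To make the baseline arithmetic concrete, here is a minimal sketch of an anomaly calculation against a fixed 1951-1980 reference period. The annual values are invented for illustration and are not GISS data.

```python
# Minimal sketch of an anomaly calculation against a fixed baseline period.
# The annual values below are invented for illustration; they are not GISS data.

baseline_years = range(1951, 1981)   # NASA's 1951-1980 reference period

annual_means = {                      # hypothetical global annual means, deg C
    1951: 14.02, 1952: 13.98, 1953: 14.05,   # ... fill in through 1980
    1980: 14.10,
    2021: 14.85,
}

present = [annual_means[y] for y in baseline_years if y in annual_means]
baseline = sum(present) / len(present)        # mean over the reference period

# The reported figure for a year is its departure (anomaly) from that baseline.
anomaly_2021 = annual_means[2021] - baseline
print(f"2021 anomaly vs. 1951-1980 baseline: {anomaly_2021:+.2f} C")
```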

Many factors affect the average temperature any given year, such as La Nina and El Nino climate patterns in the tropical Pacific. For example, 2021 was a La Nina year and NASA scientists estimate that it may have cooled global temperatures by about 0.06 degrees Fahrenheit (0.03 degrees Celsius) from what the average would have been.

A separate, independent analysis by NOAA also concluded that the global surface temperature for 2021 was the sixth highest since record keeping began in 1880. NOAA scientists use much of the same raw temperature data in their analysis and have a different baseline period (1901-2000) and methodology.

“The complexity of the various analyses doesn’t matter because the signals are so strong,” said Gavin Schmidt, director of GISS, NASA’s leading center for climate modeling and climate change research. “The trends are all the same because the trends are so large.”

NASA’s full dataset of global surface temperatures for 2021, as well as details of how NASA scientists conducted the analysis, are publicly available from GISS.

GISS is a NASA laboratory managed by the Earth Sciences Division of the agency’s Goddard Space Flight Center in Greenbelt, Maryland. The laboratory is affiliated with Columbia University’s Earth Institute and School of Engineering and Applied Science in New York.

For more information about NASA’s Earth science missions, visit:

https://www.nasa.gov/earth

-end-

Tom Halla
January 14, 2022 6:07 am

Considering how much GISS cooks historic records, any statements from them are useless.

Reply to  Tom Halla
January 14, 2022 6:29 am

But Bill Nelson said it is an indisputable fact that 8 of the 10 hottest years have occurred in the last decade and bold action is required… can’t we boldly fire Bill Nelson? Science leaves no room for doubt, according to Bill.

oeman 50
Reply to  Anti_griff
January 14, 2022 8:28 am

It seems obvious he has to say these things to keep his job in this administration.

Michael S. Kelly
Reply to  oeman 50
January 14, 2022 7:11 pm

Bill Nelson was first elected to the US House of Representatives from Florida in 1978 (at age 36). In 1986, he became the first sitting Member of the House of Representatives to fly in space, aboard STS-61-C (Columbia), the last successful Shuttle flight before the Challenger disaster. He was flown as a “payload specialist.” Many, if not most, astronauts have a “handle.” A former colleague of mine, George Zamka, for example, had the astronaut handle of “Zambo.” The handle the astronaut corps bestowed on Nelson was “Ballast.”

While in the Senate, Nelson championed all of the NASA human exploration initiatives, culminating in the development of the Space Launch System (SLS), otherwise known as the Senate Launch System. It is a fully expendable launch vehicle combining all of the worst features of the Space Shuttle, and none of the advantages. In the unlikely event that it is ever launched, SLS will cost more than 10 times as much as a Falcon Heavy. If Elon succeeds in getting Starship operational (and I think he will), it will outperform SLS at one-hundredth the cost. But Elon didn’t contribute as much to Nelson’s campaign as Boeing, Lockheed-Martin, and Northrop-Grumman. So SLS will go on ad infinitum without ever flying while Elon goes on to the Moon and Mars.

Having said all of that, I can’t really criticize Nelson. From the start of his Congressional career, he represented his constituents. That was his job, and he did it very well. His constituents were conditioned to accept the NASA model of “space exploration” by decades of abuse by NASA and the federal government. As a result, they were interested in a certain path forward, and Nelson dutifully pursued it, with great success.

He’s a good soldier. He will do what his commanders command. I can’t criticize him for that. The only thing I could criticize him for is pretending to believe the CAGW nonsense in order to please his commanders, if in fact, he was only pretending. I don’t know if he is. If he has any doubts, however, then I would be very critical.

lee
Reply to  Anti_griff
January 14, 2022 6:33 pm

8 out of 10 years? That is only weather. 😉

In The Real World
Reply to  Tom Halla
January 14, 2022 10:32 am

They have to keep making up their lies to keep their jobs, so there will always be fresh propaganda to keep the global warming scam going.

http://temperature.global/
This link updates from thousands of worldwide weather stations and shows that overall temperatures have been below average for the last 7 years.
But it only covers the last 30 years.

So, if you carefully select the figures you want and adjust them to suit your agenda, then it is possible to make up lies like "Hottest Years On Record", which the warmists are doing.

Mark D
Reply to  Tom Halla
January 14, 2022 10:33 am

After the past several years any respect I had for gvt produced data is long gone.

Beyond that so what? It’s been warmer. It’s been colder. Another trip ’round the sun.

Reply to  Mark D
January 14, 2022 12:42 pm

Once one understands that the “official narrative” is always a lie, things begin to make sense.

Tom Abbott
Reply to  John VC
January 14, 2022 2:45 pm

Yes, and the official temperature narrative is a Big Lie.

This Distorted Temperature Record is the ONLY thing Alarmists have to show as “evidence” that CO2 is a danger.

These “Hottest Year Evah!” claims are refuted by the written temperature record, which shows it was just as warm in the Early Twentieth Century as it is today, which puts the lie to “Hottest Year Evah!”.

NASA Climate and NOAA are a bunch of Liars who are screwing this old world up with their Climate Change lies.

The Bastardized Temperature Record is not fit for purpose. It’s a trick used by Alarmists to scare people into obeying.

Hughes Fullydeeoth
Reply to  Mark D
January 14, 2022 2:02 pm

What baffles me is how anybody can say (without bursting out laughing) that a government agency has carried out an “independent” activity.
I blame idiocy, malice, or a mixture of the two.

Mark D
Reply to  Hughes Fullydeeoth
January 14, 2022 7:11 pm

Whenever a paper is stuck in my face my first question is who funded it.

Reply to  Hughes Fullydeeoth
January 15, 2022 6:00 am

HF: I upvoted your comment but it was registered as a downvote. Add 2 to the uppers.

MarkW
Reply to  Graham Lyons
January 15, 2022 3:25 pm

You cancel an upvote or a downvote by pressing the opposite key. Then you can record either an upvote or a downvote.

Simon
Reply to  Tom Halla
January 15, 2022 12:07 pm

If you think they cook the books, then explain where. That’s right, your team can’t, though many have tried. Which is why your statement is useless.

Carlo, Monte
Reply to  Simon
January 15, 2022 1:59 pm

Figured out who Peter Daszak is yet?

Simon
Reply to  Carlo, Monte
January 15, 2022 3:50 pm

Yawn.

MarkW
Reply to  Simon
January 15, 2022 3:27 pm

How the books have been cooked has been explained many times.
Not surprised that you have managed to forget that.
Regardless, if it weren’t for useless statements, there would be no Simon.

PS: I see that Simon still believes that crying “you’re wrong” is an irrefutable refutation.

Simon
Reply to  MarkW
January 15, 2022 3:51 pm

“How the books have been cooked has been explained many times.” No, it has been attempted, that’s all. As I recall, the Global Warming Policy Foundation attempted to collect evidence, then gave up and published nothing.

Pat from kerbob
Reply to  Simon
January 15, 2022 8:15 pm

It’s been shown several times that the adjustments track CO2 increases >98%.
That’s pretty clear.

I was challenged to go on the GISS site and graph the same values Hansen did in 1998, and you get a different graph than is shown in his paper: cooler in the past, warmer in 1998 now.

Your statements are no different from the claims that Climategate was nothing, except that no unbiased, sentient individual with more than 2 brain cells can read those emails and insist there was nothing to see.

As always, if you had a solid story you wouldn’t have to lie.
You wouldn’t feel the need to produce hockey sticks based on rickety proxy data and claim that over rules endless physical evidence that it was much warmer through most of human history.

You just wouldn’t have to do it.
So keep on walking with your crap, no one here is buying

Simon
Reply to  Pat from kerbob
January 15, 2022 10:51 pm

“It’s been shown several times that the adjustments track CO2 increases >98%.
That’s pretty clear.”
I expect if it’s that clear you will be able to direct me to a site that demonstrates that clearly?

“As always, if you had a solid story you wouldn’t have to lie.” Wow that horse. Where did I lie?

I will ask it again. Where is your evidence the data is faulty? It’s like the Trump votes thing. Despite multiple enquiries … no evidence.

Simon
Reply to  Pat from kerbob
January 16, 2022 10:07 am

You seem to have gone very quiet Pat from kerbob.

January 14, 2022 6:08 am

Correlation is not causation. Why do government employees believe this nonsense? Where are the whistleblowers?

Joseph Zorzin
Reply to  David Siegel
January 14, 2022 7:26 am

Few people will turn away a government sinecure, so they sing the party line.

mark from the midwest
Reply to  David Siegel
January 14, 2022 8:17 am

Because there is no money to be made from “doing nothing”

DMacKenzie
Reply to  mark from the midwest
January 14, 2022 10:54 am

Plus if you do nothing for long enough, somebody notices and fires you….

Mark D
Reply to  DMacKenzie
January 14, 2022 7:34 pm

In fed/gov? Surely you jest.

Reply to  David Siegel
January 15, 2022 6:09 am

Yes.
Where were the Lysenkoism whistleblowers in 1930s USSR?
Not much different in the ‘free’ world’s science, is there.

2hotel9
January 14, 2022 6:09 am

Anything can be whatever you claim when you change data to fit your political agenda.

Steve Case
January 14, 2022 6:09 am

Yes, and every month NASA makes several hundred changes to their Land Ocean Temperature Index. Well, the year just completed, and yesterday they updated LOTI; compared to ten years ago, here’s a graphic of what all the changes for the past ten years look like:


Editor
Reply to  Steve Case
January 14, 2022 6:26 am

Thanks for the graph, Steve. I hadn’t looked at the changes to GISS LOTI in almost a decade. (Then again, I haven’t plotted GISS LOTI for a blog post in almost 8 years.)

Some of those look like they might be tweaks to earlier tweaks.

Regards,
Bob

Steve Case
Reply to  Bob Tisdale
January 14, 2022 8:13 am

It’s a comparison of the December 2011 LOTI file, saved from the Internet Archive’s Wayback Machine, with the LOTI file that came out yesterday. To be more specific, it’s the AnnualMean J-D values for December 2021 minus December 2011, plotted out.

If that includes earlier tweaks, then that’s what it is.

For 2021 GISTEMP averaged 329 “tweaks” per month all the way back to data from the 19th century.
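A rough sketch of the kind of comparison being described: load two archived LOTI snapshots and difference their annual ("J-D") means. The file names are placeholders for locally saved copies, and the column names and skiprows value are assumptions about the CSV layout that may need adjusting for the real files.

```python
import pandas as pd

# Sketch of differencing two archived GISTEMP LOTI snapshots, as described above.
# "LOTI_2011-12.csv" and "LOTI_2021-12.csv" are placeholder names for locally
# saved copies (e.g., retrieved via the Wayback Machine). The "Year" / "J-D"
# column names and the skiprows value are assumptions about the file layout.

def load_loti_annual(path):
    df = pd.read_csv(path, skiprows=1)                # assume one title line above the header
    s = pd.to_numeric(df["J-D"], errors="coerce")     # placeholder entries become NaN
    s.index = df["Year"]
    return s

old = load_loti_annual("LOTI_2011-12.csv")   # hypothetical snapshot saved December 2011
new = load_loti_annual("LOTI_2021-12.csv")   # hypothetical snapshot saved December 2021

diff = (new - old).dropna()                  # new minus old, over the years both cover
print(diff)
```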

When asked why old data all the way back to January 1880 is regularly adjusted, GISTEMP responds:

“[A]ssume that a station moves or gets a new instrument that is placed in a different location than the old one, so that the measured temperatures are now e.g. about half a degree higher than before. To make the temperature series for that station consistent, you will either have to lower all new readings by that amount or to increase the old readings once and for all by half a degree. The second option is preferred, because you can use future readings as they are, rather than having to remember to change them. However, it has the consequence that such a change impacts all the old data back to the beginning of the station record.”
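A minimal sketch of the adjustment logic described in the quote above: raise the pre-move portion of a station series by the estimated step so the whole record is consistent with the new siting. The series, move year and 0.5 C offset are all invented.

```python
# Toy illustration of the "adjust the old readings once and for all" approach
# quoted above. The station series, move year and offset are invented.

readings = {1995: 11.2, 1996: 11.4, 1997: 11.1, 1998: 11.3,   # old siting
            1999: 11.9, 2000: 11.8, 2001: 12.0}               # new siting

move_year = 1999   # first year at the new location
offset = 0.5       # estimated step change introduced by the move (new minus old)

# Raise everything *before* the move so old and new readings are on the same
# footing; future readings can then be appended unchanged.
adjusted = {yr: round(t + offset, 2) if yr < move_year else t
            for yr, t in readings.items()}
print(adjusted)
```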

Retired_Engineer_Jim
Reply to  Steve Case
January 14, 2022 8:39 am

So they are still finding that stations have been moved, or new instruments installed all the way back to 1880?

DMacKenzie
Reply to  Retired_Engineer_Jim
January 14, 2022 9:13 am

So what they said makes some sense, except I’m sure they don’t move 329 stations per month on average, although they might recalibrate that many. But on recalibration of an instrument, you generally let the old readings stand because you don’t know WHEN or at what rate it drifted out of calibration except in obvious cases.

Rick C
Reply to  DMacKenzie
January 14, 2022 10:11 am

Most all recalibrations of meteorological glass thermometers simply verify that they are in calibration. It is very rare to find a glass thermometer has drifted – I never saw one in 40 years of laboratory work except for those used at high temperatures > 200C. They break, but they don’t drift significantly. I would doubt any explanation of temperature adjustments made due to calibration issues for stations using liquid in glass thermometers.

DMacKenzie
Reply to  Rick C
January 14, 2022 11:02 am

Did you know? For about the first 100 years of glass thermometers, there were drift problems; some drifted 10 degrees F in 20 years, until types of glass that weren’t much affected by mercury were developed.

Pat Frank
Reply to  DMacKenzie
January 15, 2022 8:43 am

Historical LiG thermometers suffered from Joule creep. This is the effect resulting from a slow contraction of the glass bulb, as the silica relaxes to a more stable configuration.

The drift never stops but attenuates to become very slow after about 50 years. Total drift can be up to 1 C.

Meteorological LiG thermometers also had a ±0.25 C limit of resolution, at best. Somehow, people compiling the GMST record never learned about instrumental resolution. There’s not a mention of it in their papers.

One suspects the people compiling the global record have never worked with an actual LiG thermometer.

tygrus
Reply to  Rick C
January 14, 2022 1:35 pm

The thermometer accuracy over time is fine, but there are many other factors that change recorded values at different sites, and sites change over time:
1) time of day obs were taken & if it was missing the correct min/max.
2) site location eg. verandah, open field vs tree shade, near irrigation crops, near water body, A/C heat pump exhaust air, near carpark / concrete, plane exhaust at airports now with larger planes…
3) enclosure / mounting eg. on wall under eaves but open, pre-Stevenson screen, thermometer put off-centre instead of central in the box, large vs small box, height from ground…
4) scale divisions & F vs C. Did they round towards 0 or round to nearest? Did they measure 0.5 or smaller?

But poor records of these details over time and a lack of testing mean the QA process makes many assumptions vulnerable to biases. Some adjustments are correct; some may not be.

There is a temperature difference between the city site and the airport; then they stop the city record, and this change exaggerates the bias. Look at Penrith vs Richmond vs Orchard Hill (NSW, Australia): different records at different locations, probably affected by urbanisation. We previously used hoses and sprinklers outside to cool the children, the concrete and the house during heatwaves. For the last 25 years, watering (mains water) has been stopped between 10am and 4pm during hot summers and droughts.

Pat Frank
Reply to  tygrus
January 15, 2022 8:54 am

No one takes into account the systematic measurement error from solar irradiance and wind-speed effects. Even highly accurate unaspirated LiG thermometers produce field measurement uncertainties of about ±0.5 C.

The entire historical record suffers from this problem. The rate and magnitude of the change in GMST since 1850 is completely unknowable.

If the people compiling the global air temperature record worked to the scientific standard of experimental rigor, they’d have nothing to say about global warming.

Hence, perhaps, their neglectful incompetence.

Steve Case
Reply to  Retired_Engineer_Jim
January 14, 2022 9:33 am

It looks like they find an error in a station today, and assume that error extends all the way back to the 19th century. One station wouldn’t change the entire time series, so it further looks like they find a multitude of errors that do affect today’s anomaly for any particular month and extend the correction back in time. But it’s very curious as to why a pattern forms where all the anomalies since 1970 increase as a result of those corrections. Besides that one would think that records of those corrections would be made public and available for audit.

Jim Gorman
Reply to  Steve Case
January 14, 2022 12:05 pm

I think they propagate changes via homogenization. They “find” an inhomogeneity and make a change to a station. On the next run they include the changed temp, and lo and behold, a nearby station looks incorrect and gets homogenized. And on and on. That’s why all the changes are downward rather than a 50/50 mix of up and down adjustments. What you end up with is an algorithm controlling the decision on what to change rather than investigating the actual evidence.

And let’s not forget that they are changing averages that should have nothing but integer precision by 1/100th of a degree. I hope someday, that some POTUS asks them to justify each and every change with documented evidence rather than an algorithm that follows the bias of a programmer.
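A crude toy sketch of the iterative neighbour-comparison process the comment describes, in which one adjustment can change what the next pass flags. This is not the actual GISTEMP or NOAA pairwise homogenization algorithm; the stations, values and threshold are invented.

```python
import statistics

# Crude toy version of iterative neighbour-based homogenization, purely to
# illustrate the comment above. NOT the real GISTEMP/NOAA algorithm.

stations = {"A": 10.0, "B": 10.2, "C": 10.1, "D": 11.0}   # invented anomalies
THRESHOLD = 0.5                                            # invented break criterion

for sweep in range(3):
    for name in list(stations):
        neighbours = [v for k, v in stations.items() if k != name]
        expected = statistics.mean(neighbours)
        if abs(stations[name] - expected) > THRESHOLD:
            # nudge the flagged station toward its neighbours' mean
            stations[name] = round(expected, 2)
    print(f"after sweep {sweep + 1}: {stations}")
```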

Editor
Reply to  Steve Case
January 14, 2022 12:00 pm

For a station move, the adjustment would be by the same amount every year – possibly different in different seasons, but the same every year. Also, station moves can be in any direction, so the sum total of adjustments could reasonably be expected to be about zero. Yet the chart of adjustments shows steadily increasing negative adjustments as you go back over time. And there’s no way that NASA/GISS can keep finding lots of old station moves every single year. Something else is going on, and it smells.

Tim Gorman
Reply to  Steve Case
January 14, 2022 5:07 pm

Hubbard and Lin did a study almost twenty years ago and determined that adjustments *have* to be done on a station-by-station basis. Broad adjustments are too subjective to be accurate. Apparently GISTEMP doesn’t even recognize that temperature readings are impacted by little things like the ground over which the station sits. If it is grass and is green in summer and brown in the winter *that* alone will affect the calibration and reading of the temperature station. So what temps did they pick to formulate their “adjustment”? Summer or winter?

Robertvd
Reply to  Steve Case
January 14, 2022 7:10 am

You just wonder how all that ice melted in the ’20s and how life could have existed before the Ice Age started 3 million years ago.

TheFinalNail
Reply to  Steve Case
January 14, 2022 7:30 am

Those changes look pretty minor. Do they have any influence on the trend? If not, what would be the benefit of making them up?

DHR
Reply to  TheFinalNail
January 14, 2022 8:24 am

Collectively, the changes make a substantial upward change to global temperature databases. Go to climate4you.com to see the sum of the changes over the years. The GISS global temperature chart is shifted up by 0.67C and the NCDC (NOAA) chart is shifted up by 0.49C. Curiously, the NOAA Climate Reference Network, a group of about 112 climate stations which include triply redundant thermometers and other devices and are located uniformly over the Lower 48, shows no change in Lower 48 temperature since the system was set up in January of 2005 – 17 years. I have never seen a single announcement by NOAA or NASA concerning this information.

TheFinalNail
Reply to  DHR
January 15, 2022 2:21 am

Collectively, the changes make a substantial upward change to global temperature databases.

I looked up the Climate4you chart you refer to and have to say I found it very misleading. You say “The GISS global temperature chart is shifted up by 0.67C…”; well, no.

Firstly, that’s not a trend alteration, that’s a difference between two different monthly values, January 1910 and January 2020. More importantly, it shows that as of May 2008 there was already a difference of 0.45C between the Jan 1910 and Jan 2020 starting figures. Since then, fractional changes have increased this difference to 0.67C; that’s a change of 0.22C from the 2008 values for these 2 months, not 0.67C.

Remember, this example refers to two single months, 110 years apart. What about all the other months? It looks to me as though Climate4you has scanned through the entire GISS data set and zeroed in on the biggest divergence it could find between 2 separate months, then misrepresented it (to the casual reader) as a much bigger change than it actually is.

Steve Case
Reply to  TheFinalNail
January 14, 2022 8:34 am

Yes, they are pretty minor changes, but over time they add up. Here’s the effect of those changes over the period 1997 to 2019:


You have to understand that GISTEMP makes hundreds of changes every month; it’s a steady drone. And as you can see from the other graph, all the changes affecting data since 1970 result in an increase in temperature.

Well OK, that’s an influence of only 0.25 degree per century, but you have to understand that when GISS crows about being the sixth warmest they’re dealing with hundredths of a degree differences from year to year. It looks like they think it’s a benefit. My opinion? It makes them look petty.

cerescokid
Reply to  Steve Case
January 14, 2022 11:16 am

In isolation they might be minor, but then how much is the UHI effect, land use changes, uncertainties, and for the last 40 years the AMO, etc.? Individually insignificant, but given that, as you say, we are dealing with tenths, the cumulative effect all adds up.

TheFinalNail
Reply to  Steve Case
January 15, 2022 2:31 am

Do you have access to the 1997 edition of GISS, Steve? Can you post a link if so, thanks.

Steve Case
Reply to  TheFinalNail
January 15, 2022 6:42 am

The link in the post below is to 2000, not 1997, because the 1997 link in my files no longer works. The 2000 version is close, but already in those three years the 1950-1997 time series increased from 0.75 to 0.81.

Oh, on edit, I see that link shows that it is an “update”.

Jim Gorman
Reply to  TheFinalNail
January 14, 2022 12:10 pm

Explain how you get 1/100ths of precision from thermometers prior to 1980 when the recorded precision was in units digits. It is all fake mathematics that has no relation to scientific measurements and the requirements for maintaining measurement precision.

MarkW
Reply to  Jim Gorman
January 14, 2022 2:24 pm

According to the alarmists, if you have 100 stations scattered across the country, that makes each of the stations more accurate.

DMacKenzie
Reply to  Jim Gorman
January 14, 2022 6:53 pm

If you use a tape measure to measure the distance from your back door to your back fence, and the tape measure is calibrated in 10ths of an inch, and you make entirely random errors in the readings…..then statistically after 10,000 readings, your answer should be accurate to say your 1/10 of an inch divided by the square root of your number of readings….so 1/1000 of an inch…(I overly simplify)

But what if your reading errors aren’t actually random ? Or maybe your tape changes length with temperature, or there is a cross wind, etc. at certain times of the day and most of your readings are taken during those times ?

What if you are interested in, and only write down readings to the nearest foot? A million readings and your average could still be out half a foot, but someone else does the calcs and says your accuracy is a thousandth of a foot…..hmmmm…

What if you use a different tape measure for every different reading ? What if your neighbor starts writing his measurements to his fence in your logbook (homogenizing them) ?

Stats have a very human basis. Somebody just decided that a standard deviation should be the square root of the absolute y-axis errors squared, and it happens to mesh with other formulas derived from games of chance…so some useful things can be extrapolated. However, non-random errors from 10,000 different measuring tapes of the distance to your fence from one year to the next, isn’t going to cut your error to 1/100 th of a calibration distance….

I used to have a stats professor who would spend 15 minutes describing the lecture assignment and the pertinent equations, then 45 minutes explaining the ways these equations did not apply to the real world. His recommended second textbook was a notepad sized “How to Lie With Statistics”, still a classic I understand. Maybe he tainted my brain.
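A quick simulation of the two situations the comment above contrasts: purely random reading errors, which average down roughly as 1/sqrt(N), versus a constant bias, which averaging cannot remove. All numbers are invented.

```python
import random

# Simulation of the tape-measure example above: random reading errors shrink
# when averaged, but a constant bias does not. All values are invented.

TRUE_LENGTH = 300.0      # inches
N = 10_000               # number of repeated readings
random.seed(0)

# Case 1: purely random reading errors within +/-0.05 in (half a 0.1 in graduation).
random_only = [TRUE_LENGTH + random.uniform(-0.05, 0.05) for _ in range(N)]

# Case 2: the same random errors plus a fixed 0.3 in bias (e.g., a stretched tape).
biased = [r + 0.3 for r in random_only]

print(f"random errors only : mean error = {sum(random_only)/N - TRUE_LENGTH:+.4f} in")
print(f"with constant bias : mean error = {sum(biased)/N - TRUE_LENGTH:+.4f} in")
```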

Pat Frank
Reply to  DMacKenzie
January 15, 2022 9:01 am

then statistically after 10,000 readings, your answer should be accurate to say your 1/10 of an inch divided by the square root of your number of readings….so 1/1000 of an inch…(I overly simplify)

If your tape measure is calibrated in 1/10ths of an inch, then the average of repeated measurements will approach that 1/10th inch accuracy limit.

Doing better than the lower limit of instrumental resolution is physically impossible.

Tim Gorman
Reply to  DMacKenzie
January 15, 2022 11:01 am

How do you make 10000 measurements of the SAME temperature? I guess I missed the development of the time machine? Did MIT do it?

TheFinalNail
Reply to  Jim Gorman
January 15, 2022 2:29 am

Explain how you get 1/100ths of precision from thermometers prior to 1980 when the recorded precision was in units digits.

No one is suggesting that the precision described comes directly from thermometers. No more than anyone is suggesting that the average family size is actually 2.5 children.

Jim Gorman
Reply to  TheFinalNail
January 15, 2022 6:09 am

The precision ONLY COMES from the thermometers. There is simply no calculation that increases the resolution/precision of measurements. You would do better to show references that support your assertion. I’m not sure where you will find one.

MarkW
Reply to  TheFinalNail
January 15, 2022 3:39 pm

If it’s not coming from the instruments themselves, then it is imaginary.

Reply to  Steve Case
January 14, 2022 10:11 am

If possible to calculate, I think it would be great to be able to say something along the lines of XX% of global warming is because it is colder in the past than it used to be (according to NASA).

Jim Gorman
Reply to  gord
January 14, 2022 12:13 pm

Exactly! What caused the warming at the end of the Little Ice Age? When did that natural occurrence dissipate, and when was it replaced by CO2? Was water vapor that small back then? Basically, don’t worry about the past; we know what we are doing in making future predictions.

To bed B
Reply to  Steve Case
January 14, 2022 11:20 am

They’re minor tweaks, but to something already dodgy. They do show that it’s massaging of data, especially that ’40s peak and the late-19th-century adjustments. We couldn’t have a warming trend 100 years ago as big as the one now, and yet the claim that the changes also warmed the past is something I have come across many times.

To bed B
Reply to  Steve Case
January 14, 2022 12:36 pm

Compare GISS LOTI with UAH 6 (offset 0.53) and you can see that they are very similar up until 1997. They differ a lot after that.

https://woodfortrees.org/graph/gistemp/from:1979/plot/uah6/from:1979/offset:0.53

Here is a plot of linear fits to GISS Loti and UAH 6 from 1979 to 1997 and 1999 till the present.

https://woodfortrees.org/graph/gistemp/from:1979/to:1997/trend/plot/uah6/from:1979/offset:0.53/to:1997/trend/plot/gistemp/from:1999/trend/plot/uah6/from:1999/offset:0.53/trend

Looking at the comparison of the plots, the large difference in trends is due to the difference in the months cooler than the trend line, mostly after 2006. The months warmer than the trend line tend to be very similar. This is not the case for 1998. The peak of the El Nino is half a degree cooler in GISS.

This is not just because of a difference in methodology (or because we live on the surface, and not in the lower troposphere). One of the methods must be very dodgy.

Tom Abbott
Reply to  To bed B
January 14, 2022 2:58 pm

“This is not the case for 1998. The peak of the El Nino is half a degree cooler in GISS.”

NASA had to modify GISS to show 1998 cooler, otherwise they couldn’t claim that ten years between 1998 and 2016 were the “hottest year evah!”

Here’s the UAH satellite chart. See how many years can be declared the “hottest year evah!” between 1998 and 2016. The answer is NO years between 1998 and 2016 could be called the “hottest year evah!” if you go by the UAH chart, so NASA makes up a bogus chart in order to do so.


Bill Everett
Reply to  Tom Abbott
January 14, 2022 4:32 pm

I believe that the temperature reading for 2004, which marked the end of the warming period that started about 1975, exceeds most of the annual temperature readings after 2004. If the El Nino periods after 2004 are ignored, then this is even more evident. It almost looks like the beginning years of a thirty-year pause in warming.

To bed B
Reply to  Tom Abbott
January 14, 2022 8:01 pm

This is what made me look closer. The other El Ninos seem similar in both.

Needless to say which one looks like it’s the dodgy one.

ResourceGuy
January 14, 2022 6:13 am

This Climate Church Lady Administration is part of the problem in banning all knowledge or spoken truth on the term cycles. Ye shall be excommunicated by the tech platform enforcers and all other official comrades. So let it be written in the Congressional Record, so let be done by regulation and decree (and all allied talking heads).

Rah
January 14, 2022 6:13 am

They really are pushing this crap to the limit. All part of the great reset.

Reply to  Rah
January 14, 2022 6:24 am

Even the Weather Channel pushes climate alarmism … https://www.youtube.com/watch?v=HQKbm4qU_lQ

Trying to Play Nice
Reply to  John Shewchuk
January 14, 2022 8:28 am

What do you mean “even the Weather Channel”? They’ve been screeching climate alarmism for quite a while.

Pflashgordon
Reply to  John Shewchuk
January 14, 2022 1:25 pm

When the so-called “Weather Channel” stopped blandly reporting weather forecasts and went with videos, serial programming, and live humans in the studio and in the field, they almost immediately ceased to be an objective, reliable source of weather information. They have long been a weather/climate pornography channel. I NEVER look at them. Unfortunately, they have bought up formerly reliable weather apps (e.g., Weather Underground) and ruined them with their weather propaganda. They are the weather equivalent of the Lame Stream Media.

MarkW
Reply to  Rah
January 14, 2022 7:32 am

The scam is starting to fall apart, they have to get as much as they can before that happens.

Steve Case
Reply to  MarkW
January 14, 2022 9:39 am

The scam is starting to fall apart, …
__________________________

If only that were true. If you include Acid Rain, The Ozone Hole and Global Cooling, the scam has been going on for over 50 years and doesn’t really show any signs of rolling over and playing dead.

Thomas
Reply to  Steve Case
January 14, 2022 1:40 pm

In fact it has gained much strength since the acid rain scare.

Gregory Woods
January 14, 2022 6:13 am

Gee, sounds pretty bad…

Chip
January 14, 2022 6:14 am

They claim to know annual global temperatures since 1880, and then provide reports measured in tenths and hundredths of a degree. This is a religion, not science.

Laertes
Reply to  Chip
January 14, 2022 6:19 am

Except when it invalidates their theory. I’ve seen articles saying that "former temperature records from the 30s are suspect but we can be SURE of the recent ones." Rubbish.

Jim Gorman
Reply to  Laertes
January 14, 2022 12:17 pm

In essence they are saying the weathermen at that time were chumps who didn’t have a clue as to what they were doing. “We” can look back 100 years to the information they put on paper and decipher where errors were made and where lackadaisical attitudes caused problems.

Joseph Zorzin
Reply to  Chip
January 14, 2022 7:29 am

they use extremely accurate tree rings /s

Latitude
January 14, 2022 6:15 am

…a tie for the 6th warmest year……is not a warming trend

TheFinalNail
Reply to  Latitude
January 14, 2022 7:35 am

…a tie for the 6th warmest year……is not a warming trend

Nor is any individual year’s average temperature. The question is, what impact does this year’s anomaly have on the long-term trend? In the 30-year period to 2020, the trend in GISS was +0.232 C per decade. Adding 2021 data, even though it was a cooler year, actually increases that trend fractionally to +0.233 C per decade. That’s not a statistically significant increase, obviously, but there’s certainly no sign of a slowdown in warming either.
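A sketch of the calculation being described: an ordinary least-squares trend over a 30-year window, recomputed after appending one more (cooler) year. The anomaly series below is synthetic, generated only to show the mechanics; it is not the actual GISS data.

```python
import numpy as np

# Sketch of recomputing a per-decade trend after adding one more year.
# The anomaly series is synthetic; it is not the actual GISS record.

years = np.arange(1991, 2021)                                        # 30 years to 2020
rng = np.random.default_rng(1)
anoms = 0.023 * (years - 1991) + rng.normal(0.0, 0.08, len(years))   # fake anomalies

def decadal_trend(x, y):
    slope_per_year = np.polyfit(x, y, 1)[0]
    return slope_per_year * 10            # degrees per decade

print(f"trend to 2020: {decadal_trend(years, anoms):+.3f} C/decade")

# Append a cooler 2021 value and recompute.
years2 = np.append(years, 2021)
anoms2 = np.append(anoms, anoms[-1] - 0.1)
print(f"trend to 2021: {decadal_trend(years2, anoms2):+.3f} C/decade")
```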

DHR
Reply to  TheFinalNail
January 14, 2022 8:27 am

See climate4you.com for actual data.

TheFinalNail
Reply to  DHR
January 15, 2022 2:33 am

See above, in at least one instance this has been badly misrepresented.

Alan the Brit
Reply to  TheFinalNail
January 14, 2022 8:35 am

As a retired engineer, I find it extremely difficult to believe that scientists are able to measure to an accuracy of 1/1000th of a degree, from 0.232C to 0.233C, with no tolerance reference of measurement!!!

rbmorse
Reply to  Alan the Brit
January 14, 2022 9:06 am

Especially with a data range that extends more than 120 years into the past.

Jim Gorman
Reply to  rbmorse
January 14, 2022 12:21 pm

And values that were recorded to integer precision for at least half of that time.

MarkW
Reply to  Jim Gorman
January 14, 2022 2:27 pm

In addition to being recorded only to integer precision, they only took the daily high and low for each day.
Anyone who thinks that they can get an average for a day to within a few tenths of a degree, from just the daily high and low, has never been outside.

Mark D
Reply to  Alan the Brit
January 14, 2022 10:44 am

As a retired hard hat I moved heat for a living, and I learned just how difficult it is to get repeatable numbers measuring temperatures. One project for NASA had me change platinum RTDs several times until the numbers were what they wanted to see.

The fever thermometers I use at home are all glass. They might not be accurate, but they are consistent.

Jim Gorman
Reply to  Mark D
January 14, 2022 12:43 pm

I copied a page of routine uncertainties for one RTD. As you can see, even a class A at 0 C is +/- 0.15 C. This is the precision of measurement. How do these folks get values of precision out to the 1/1000th of a degree?

This just isn’t done in science. Otherwise we would know the distance to planets and stars down to the centimeter or less. All we would have to do is average the readings over the last century, divide by the number of data points and Voila!

RetiredEE
Reply to  Jim Gorman
January 14, 2022 1:54 pm

At least for the temperature ranges we are discussing, the ITS-90 uses a platinum RTD as the interpolation standard between standard points. When calibrating a given RTD for high precision, it must be referenced to the temperature standards (i.e., the H2O triple point); then a polynomial calibration is produced for that specific RTD. This can be used with accuracies/uncertainty below 0.001 C; however, the devices in this class are lab standards requiring careful handling and would not be used for field work or instrumentation. They are also generally wire wound and very sensitive to shock and vibration.

The general purpose RTDs are trimmed to the performance required by the specific class required as noted in the referenced table. They still need to be calibrated in the instrumentation circuits. Oh yes, the circuitry used to measure the resistance must ensure that the sense current does not cause excessive heating of the RTD.

All that said, the general purpose and even the meteorological instruments do not have accuracy or resolution to those being stated by the adjustments. For example the ASOS system temperature measurement from -58 to +122F has an RMS error of 0.9F with a max error of 1.8F and a resolution of 0.1F.

It is becoming ever more difficult to trust anything from the government.

Tim Gorman
Reply to  RetiredEE
January 14, 2022 5:20 pm

You are only describing the uncertainty in the sensor itself. In the field that sensor uncertainty increases because of uncertainties in the instrument housing itself. Did a leaf block the air intake for a period of time? Did ice cover the instrument case for a period of time in the winter? Did insect detritus build up around the sensor over time? Did the grass under the instrument change from green to brown over time (e.g. seasonal change).

Although it has since been deleted from the internet, the field uncertainty of even the ARGO floats was once estimated to be +/- 0.5C.

MarkW
Reply to  Jim Gorman
January 14, 2022 2:28 pm

“How do these folks get values of precision out to the 1/1000th of a degree?”

By abusing statistics to the point that criminal charges would be warranted.

Joao Martins
Reply to  Alan the Brit
January 14, 2022 1:02 pm

I find it extremely difficult to believe that scientists are able to measure to an accuracy of 1/1000th

… except if it was not actually measured!… (“measured” as in using a ruler or a thermometer)

TheFinalNail
Reply to  Alan the Brit
January 15, 2022 2:34 am

They weren’t able to measure to that degree of accuracy and have never claimed to have been able to do so. As an engineer you will grasp the concept of averaging and how this tends to increase the precision of the collective indivdual values.

Carlo, Monte
Reply to  TheFinalNail
January 15, 2022 5:08 am

how this tends to increase the precision of the collective indivdual [sic] values

A fundamental principle of climastrology that exists nowhere else in science and engineering.

Jim Gorman
Reply to  TheFinalNail
January 15, 2022 5:23 am

As an engineer, here is what I learned and it certainly does not agree with increasing precision by averaging.

Washington University at St. Louis’s chemistry department has a concise definition about precision. http://www.chemistry.wustl.edu/~coursedev/Online%20tutorials/SigFigs.htm

Defining the Terms Used to Discuss Significant Figures

 

Significant Figures: The number of digits used to express a measured or calculated quantity.

By using significant figures, we can show how precise a number is. If we express a number beyond the place to which we have actually measured (and are therefore certain of), we compromise the integrity of what this number is representing. It is important after learning and understanding significant figures to use them properly throughout your scientific career. (bold and underlined by me)

As you can see, calculations can not add precision beyond what was actually measured. The word INTEGRITY should have special meaning to any scientist/engineer.

Bellman
Reply to  Jim Gorman
January 15, 2022 12:34 pm

The problem with your significant-figure rules of thumb is that following the rules exactly allows you to express an average to more decimal places than the individual measurements.

Suppose I take 1000 temperatures, each written to the nearest degree C, i.e. 0 decimal places. I add them up and follow the rule “For addition and subtraction, the answer should have the same number of decimal places as the term with the fewest decimal places.”

So I get a sum to 0 decimal places. Say 12345°C.

Now I divide that by 1000 to get the average. This follows the rule “For multiplication and division, the answer should have the same number of significant figures as the term with the fewest number of significant figures.”

12345 has 5 significant figures. 1000 is an exact number, so it follows the rule “Exact numbers, such as integers, are treated as if they have an infinite number of significant figures.”

5 is fewer than infinity, so the answer should be written to 5 significant figures, 12.345°C.

Now whether it makes sense to write it to 3 decimal places is another matter, which is why I’m not keen on these simplistic rules. As I’ve said before, I think the rule presented in the GUM and other works you insist I read is better – work out the uncertainty to 1 or 2 significant figures and write the answer to the same degree.
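For what it’s worth, a short sketch that mechanizes the significant-figure bookkeeping in that worked example (sum 1000 whole-degree readings, divide by the exact count, keep the sum’s significant figures). The readings are invented, and the digit count is a crude one that ignores trailing-zero subtleties; the sketch only reproduces the rule being debated and does not settle whether the extra decimals are physically meaningful.

```python
import random

# Mechanical version of the significant-figure bookkeeping described above.
# The readings are invented; the digit count is crude (ignores trailing zeros).

random.seed(0)
readings = [random.randint(5, 20) for _ in range(1000)]   # whole-degree values

total = sum(readings)          # addition rule: 0 decimal places, like the inputs
n = len(readings)              # exact count: treated as infinitely precise
mean = total / n

sig_figs = len(str(total))     # significant figures carried by the sum
print(f"sum  = {total}")
print(f"mean = {mean:.{sig_figs}g}")   # division rule: keep the sum's sig figs
```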

Carlo, Monte
Reply to  Bellman
January 15, 2022 2:01 pm

How exactly do you propose to measure 1000 temperatures simultaneously?

Measured temperatures are real numbers, not integers, so this fantasy world example is stooopid.

Bellman
Reply to  Carlo, Monte
January 15, 2022 3:47 pm

How exactly do you propose to measure 1000 temperatures simultaneously?

When did I propose that? You seem to be obsessed with the idea that you can only take an average if you measure everything at exactly the same time, which I think says something about your understanding of statistics.

Measured temperatures are real numbers, not integers, so this fantasy world example is stooopid.

I wasn’t claiming the temperatures were integers, just that they were only measured to the nearest whole number. It would work just as well if you quoted the temperatures in 0.1s of a degree.

Tim Gorman
Reply to  Bellman
January 16, 2022 9:22 am

An average of independent, random measurements of different things is useless when applied to the individual elements. The average creates no expectation of what the next measurement will be – meaning it is useless in the real world.

If you want to describe something using statistics, then you must be measuring the same thing, with measurements that can be averaged to create an expectation value for the next measurement.

I wasn’t claiming the temperatures were integers, just that they were only measured to the nearest whole number. It would work just as well if you quoted the temperatures in 0.1s of a degree.”

And yet you do your calculations as if those measurements have no uncertainty, assuming they are 100% accurate. The words “nearest whole number” *should* be a clue that uncertainty applies and must be propagated into your calculations. And it is that uncertainty that determines where your first significant digit is.

Bellman
Reply to  Tim Gorman
January 16, 2022 12:04 pm

An average of independent, random measurements of different things is useless when applied to the individual elements.

And you still don’t get that I’m not applying the average to the individual elements. The goal is to use the individual elements to determine the average. The average is the thing I’m interested in. I’m not using it to predict what the next measurement will be. This does not make it useless in the real world. Believe it or not, statistics are used to understand the real world. There’s more to the real world than is dreamt of in your workshop.

And yet you do your calculations as if those measurements have no uncertainty, assuming they are 100% accurate.”

No. The point of these significance rules is to give an implied uncertainty. The assumption is that if you are stating measurements to the nearest whole number, then there is an implied uncertainty of ±0.5, and that you can ignore all uncertainty calculations and just use the “rules” of significant figures to stand in for the actual uncertainty propagation.

Tim Gorman
Reply to  Bellman
January 17, 2022 12:29 pm

“And you still don’t get that I’m not applying the average to the individual elements.”

Then of what use is the average? Statistics are used to describe the population – i.e. the elements of the data set.

 I’m not using it to predict what the next measurement will be.”

If the average is not a predictor of the next measurement, then of what use is the average?

Believe it or not, statistics are used to understand the real world. “

My point exactly. If your statistic, i.e. the average, doesn’t tell you something about the real world then of what use is it? If your statistic doesn’t allow you to predict what is happening in the real world then of what use is it?

That’s the biggest problem with the Global Average Temperature. What actual use in the real world is it? It doesn’t allow predicting the temperature profile anywhere in the physical world. Based on past predictions, it apparently doesn’t allow you to predict the actual climate anywhere on the earth. From extinction of the polar bears, to NYC being flooded by now, to food shortages, to the Arctic ice disappearing, the GAT has failed utterly in telling us anything about the real world.

No. The point of these significance rules is to give an implied uncertainty. The assumption is that of you are stating measurements to the nearest whole number, than there is an implied uncertainty of ±0.5, and that you can ignore all uncertainty calculations and just use the “rules” of significant figures to stand in for the actual uncertainty propagation.”

Word salad. Did you actually mean to make a real assertion here?

There is no “implied” uncertainty. The rules give an indication of how accurate a measurement is. Overstating the accuracy is a fraud perpetrated on following users of the measurement.

You can ignore all uncertainty calculations? Exactly what uncertainty calculations are you speaking of? An average? If you calculate an average out to more digits than the measurement uncertainty allows then you are claiming an accuracy that you can’t possibly justify!

The significant digits rules are part and parcel of measurements. They directly give you an indication of the accuracy of the measurement. That applies to propagation of uncertainty from multiple measurements. The rules apply to any statistics calculated from the measurements. An average doesn’t have an uncertainty all of its own, totally separate from the uncertainty propagated into the average from the individual elements.

That’s why the standard deviation of the sample means only indicates how precisely you have calculated the mean, it doesn’t tell you how accurate that calculated mean is.

Again, if you have three sample measurements, 29 +/- 1, 30 +/- 1, and 31 +/- 1, you can’t just calculate the mean as 30 and use that figure to calculate the population mean. You can’t just drop the +/- 1 uncertainty from calculations and pretend that 29, 30, and 31 are 100% accurate. Yet that is what they do in calculating the GAT. At a minimum that sample mean should be stated as 30 +/- 1.7.

Call those three values sample means. The standard deviation of the stated values of the sample means is sqrt[ (1^2 + 0^2 + 1^2) / 3 ] = sqrt[ 2/3 ] = 0.8. You and the climate scientists would state that the uncertainty of the mean is 0.8. But it isn’t. It’s at least twice that value, 1.7 (see the preceding paragraph).
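For reference, a short sketch that just reproduces the arithmetic of the different quantities being argued over for the 29 / 30 / 31 example: the mean, the population standard deviation of the stated values, the root-sum-square of the three ±1 uncertainties, and that root-sum-square divided by n. Which of these (if any) is the right uncertainty to attach to the mean is exactly what the thread is disputing; the code takes no side.

```python
import math

# Arithmetic for the 29/30/31 example above. The code only computes the
# candidate quantities; it does not settle which is the "right" uncertainty.

values = [29.0, 30.0, 31.0]
u = 1.0                         # stated +/- uncertainty of each value
n = len(values)

mean = sum(values) / n
pop_sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)    # ~0.82
rss = math.sqrt(n) * u                                          # ~1.73 (quadrature sum)
rss_over_n = rss / n                                            # ~0.58

print(f"mean                 = {mean:.2f}")
print(f"population std. dev. = {pop_sd:.2f}")
print(f"RSS of the +/-1s     = {rss:.2f}")
print(f"RSS / n              = {rss_over_n:.2f}")
```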

Bellman
Reply to  Tim Gorman
January 17, 2022 1:57 pm

Then of what use is the average? Statistics are used to describe the population – i.e. the elements of the data set.

If you cannot understand the use of an average, why do you get so upset about uncertainty. The point of a summary statistic is to summarize the elements, not to tell you something about every element in the set. I’m really not sure what else you think a summary is?

If the average is not a predictor of the next measurement, then of what use is the average?

To tell you what the average is. You can, of course, use statistics to predict what a random element of the set will be, but only as far as to indicate its likely range, using a prediction interval.

My point exactly. If your statistic, i.e. the average, doesn’t tell you something about the real world then of what use is it?

It’s not your point exactly. You say averages tell you nothing about the real world and I say they tell you something about it.

If your statistic doesn’t allow you to predict what is happening in the real world then of what use is it?

Have you ever tried to investigate this question for yourself?

One use, for instance, is to test the hypothesis that two populations are different. I’m sure if you try hard you can come up with other uses.

That’s the biggest problem with the Global Average Temperature. What actual use in the real world is it?

Case in point. You can test the hypothesis that the climate is changing. Is the global average temperature today different from what it was 50 years ago? Maybe you should read this blog more; there are always claims that this year’s average proves it’s cooler than a few years ago.

What it won’t tell you is what tomorrow’s local weather will be. For that you need a specific forecast.

Bellman
Reply to  Tim Gorman
January 17, 2022 2:30 pm

Continued.

In response to me saying that significance rules were supposed to imply an uncertainty, and that therefore giving the results in integers implied an uncertainty of ±0.5, you say:

There is no “implied” uncertainty. The rules give an indication of how accurate a measurement is. Overstating the accuracy is a fraud perpetrated on following users of the measurement.

Which I find odd, as in the previous comment you said (my emphasis):

You stated the measurements were rounded to the nearest units digit. That implies an uncertainty associated with your measurements of +/- 0.5.”

So I’m not sure what your disagreement with me is.

The significant digits rules are part and parcel of measurements. They directly give you an indication of the accuracy of the measurement.

And my point, in arguing with Jim, is that they are only a rough way of determining uncertainty, and not as good as doing the actual uncertainty analysis. And, as I say in my original comment, these rules imply the exact opposite of what you say – the number of decimal places in an average can be greater than in the individual elements.

That applies to propagation of uncertainty from multiple measurements.

And as I and others keep telling you, the rules of propagation of uncertainties all lead to the conclusion that the uncertainty of an average can be smaller than the uncertainty of individual measurements.

Again, if you have three sample measurements, 29 +/- 1, 30 +/- 1, and 31 +/- 1, you can’t just calculate the mean as 30 and use that figure to calculate the population mean.

You obviously can do it. And statistically the figure of 30 will be the best estimate. That doesn’t mean you shouldn’t calculate and state the uncertainty. But you have to use the correct calculation and not just pull a figure out of the air. Case in point.

At a minimum that sample mean should be stated as 30 +/- 1.7.

You then say:

Call those three values sample means.

You keep coming up with these toy examples and never explain what they are meant to be. First you had three measurements; now they are three samples of unspecified size. What are the ±1 values then meant to represent? The standard error of each sample, or what?

The standard deviation of the stated values of the sample means is sqrt[ (1^2 + 0^2 + 1^2) / 3 ] = sqrt[ 2/3 ] = 0.8.

No. If this is a sample (of sample means) then the standard deviation needs to be sqrt[2 / 2] = 1.

“You and the climate scientists would state that the uncertainty of the mean is 0.8.

Firstly, what on earth are you doing? You said these were three samples, so you presumably already know the standard error of the mean of each sample. You don’t estimate the error by treating them as a sample of samples, especially not when you only have three such samples.

Secondly, nobody is calculating the uncertainty of a global average anomaly like this (and note it’s the anomaly not the temperature). I have no interest in going over the fine points of how the uncertainty is actually calculated, but they do indeed include the uncertainty in the measurements, along with uncertainties from the sampling distribution, infilling and adjustments.

Carlo, Monte
Reply to  Bellman
January 17, 2022 4:51 pm

What you lot are effectively claiming is that the operation of averaging can increase knowledge—it cannot.

Take three measurements that you have an innate need to average—n1, n2, and n3.

However, for this example, n1, n2, and n3 each have large bias errors that are much larger than the standard deviation of the mean.

Averaging does NOT remove the errors!

This is the fundamental property of uncertainty that you and bzx*** refuse to acknowledge—it is what you don’t know!

***it seems blob can now be included in this list

Bellman
Reply to  Carlo, Monte
January 17, 2022 6:14 pm

Of course averaging can increase knowledge. For a start, it increases your knowledge of what the average is. Seriously, do you think everyone who has been using averaging over the centuries has been wasting their time? Every company or department that invests time and money into collecting stats should have just given up? Every statistician who developed the maths for analyzing averages has been wasting their time? All because you assert that it’s impossible for averaging to increase knowledge.

But yes, you are correct about systematic errors. Averaging won’t remove them. But Tim is talking about random independent errors, otherwise why does he think the uncertainty of the sum increases with the square root of the sample size? And even if you are now saying these are uncertainties coming entirely from systematic errors, that still does not justify the claim that the uncertainties increase with the sample size.

Carlo, Monte
Reply to  Bellman
January 18, 2022 6:17 am

Of course averaging can increase knowledge.

You are beyond hopeless and hapless. Tim is attempting to tell you about UNCERTAINTY, not random error.

Bellman
Reply to  Carlo, Monte
January 18, 2022 9:12 am

Which has what to do with knowledge increasing by averaging? You really have a hard time sticking to a point, almost as if you need to keep causing distractions.

Jim Gorman
Reply to  Bellman
January 18, 2022 10:02 am

Averaging temperatures tells you nothing. If the temp here is 77 and 30 miles down the road it is 70, what does the average tell you? 147/2 = 73.5. Is the midpoint really 73.5? How do you know?

Worse, when you put down 73.5, you have just covered up the difference unless you also quote the standard deviation.

Have you quoted the standard deviation of any mean you have shown? Why not?

Bellman
Reply to  Jim Gorman
January 18, 2022 1:21 pm

You and Tim tell me that averages tell you “nothing” so often that I’m seriously wondering if you understand what that word means.

As usual you give me a context-free example of an average of just two values, insist it tells you nothing, and then want to conclude that therefore all averages tell you nothing. In this case I don’t even understand why you think this toy example tells you nothing.

Let’s say I’m in an area and all I know is that the average of two points 30 miles apart is 73.5. You don’t give units, but if this is meant to be in Celsius or Kelvin it tells me I need to get out very quickly. More seriously, it obviously tells me something: an area with an average of 73.5°F is likely to be warmer than an area with an average of 23.5°F.

Moreover, does the average of 73.5°F tell me less than knowing that one place, say, 15 miles away has a temperature of 77°F? I can’t see how it tells me less, so by your logic a single measurement in a single location tells you nothing. I would argue it’s probably a more useful measurement. If I’m somewhere between the two places, it’s more likely to be closer to 73.5 than 77.

Jim Gorman
Reply to  Bellman
January 18, 2022 6:44 pm

You miss the whole purpose of uncertainty and don’t even know it! Where does the average temperature occur? How do you know? Is the temp in one location always higher than the other?

You can’t answer any of these with certainty, therefore there is uncertainty in the mean beyond the simple average. By the way, I notice that you conveniently added a digit of precision. Does this meet Significant Digit rules? All of my electronic, chemistry, and physics lab instructors would have given me a failing grade for doing this.

I have shown you the references that universities teach. Do you just disregard what they teach for your own beliefs?

Bellman
Reply to  Jim Gorman
January 18, 2022 7:34 pm

You keep changing the argument. The question was “what does the average tell you?” You don’t need to know the exact spot which exactly matches the average. The point is that you know the average will be the best estimate for your temperature given the data available. Best estimate means it minimizes the total error. In your toy example you simply have to ask: if you were somewhere in the vicinity of the two measurements (assuming you didn’t know how geographically close you were to either), would the average be more likely to be closer to your actual temperature than one of the exact measurements?

Of course there’s uncertainty in the average. That’s what we’ve been arguing about for the past year.

You notice that I conveniently added an extra digit, but failed to notice that I was just copying your stated average. You still think this fails your significant figure rules, and fail to understand why it doesn’t. 77 + 70 = 147. Three significant figures. 147 / 2 = 73.5. The rule is to use the smallest number of significant figures, in this case 3 figures compared with infinite. So 3 figures wins.

I feel sorry for you if every teacher would have failed you for using 1 too many decimals, especially if they were using the wrong rules.

Jim Gorman
Reply to  Bellman
January 19, 2022 5:30 am

Dude, no wonder you are out in left field. With your assertion you can increase the significant digits available in a measurement by simply adding more and more measurements. Heck, if you can go from two to three sig figs by using the sum, let’s add enough to go to four or five sig figs. That way we can end up with a number like 75.123 when we only started with 2 significant digits.

You really need to stop making stuff up. Show me some references that support using the “sum” to determine the number of significant digits in an average measurement.

I’ll warn you up front, that is what a mathematician would say, not a scientist or engineer.

Bellman
Reply to  Jim Gorman
January 19, 2022 6:29 am

They’re your rules; you keep insisting that everyone stick to those rules as if they were a fundamental theorem. In my opinion these rules are a reasonable guide for those who don’t want to be bothered doing the math. They are an approximation of the real rules for propagating uncertainty, but shouldn’t be taken too literally.

I don’t know why you would think it a problem that summing increases the number of significant figures. 77 has two sf, so does 70. The sum is 147 which has 3 sf. Why is that a problem? The “rules” say when adding it’s the number of decimal places that count, not the number of significant figures. You seem to disagree.

I’ll remind you that it was you who said the average was 73.5 and I just copied your result. But I also think this illustrates the problem of these implied uncertainty rules.

If the 77 and 70 figures are quoted to an integer there is an implied uncertainty of ±0.5, which seems reasonable. The uncertainty of the average of the two is at most ±0.5, so it’s reasonable to quote the average as 73.5±0.5. This tells us that the true value may be somewhere between 73 and 74. If you insist that this has to be rounded to the nearest integer, say 74, then there is also an implied uncertainty of ±0.5, which means your answer is 74±0.5, which implies the true value could be between 73.5 and 74.5, which is misleading. But by the logic of implied uncertainties, if you say the average is 73.5 you are implying an uncertainty of ±0.05, which is also misleading.
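
To put numbers on that, a minimal sketch of the worst-case and in-quadrature propagation for the average of the two readings, each carrying the implied ±0.5:

import math

x, y = 77.0, 70.0
ux = uy = 0.5                                # implied uncertainty of the integer readings

avg = (x + y) / 2                            # 73.5
u_avg_worst = (ux + uy) / 2                  # 0.5, the worst case (uncertainties add, then divide by 2)
u_avg_quad = math.sqrt(ux**2 + uy**2) / 2    # ~0.35 if the two errors are independent
print(avg, u_avg_worst, round(u_avg_quad, 2))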

So yes, I think it’s better to actually quote the uncertainty rather than use these sf simple rules. But I also think it’s mostly irrelevant in the context of your toy example.

Jim Gorman
Reply to  Bellman
January 19, 2022 8:02 am

I would tell you to go read lab practice procedures from certified laboratories if you need further persuading.

Obviously references from well known Universities mean nothing to you. If you wish to keep putting out incorrect and false information that is up to you but no one will believe you without references showing your assertions are accepted at known centers of excellence like Universities.

BTW, keeping one extra digit is allowed so rounding errors don’t compound through additional calculations. However the final number should always be rounded to the correct number of sig figs as determined by the resolution of the actual physical measurements. These are not just numbers, they have actual physical presence.



Bellman
Reply to  Jim Gorman
January 19, 2022 10:36 am

You’re clearly determined to use this as a distraction from your original argument – so before I get into a rant, let’s just not worry about it and say the average was 74 rather than 73.5 as you claimed. What difference does it make to your claim that it “tells you nothing”?

It’s still a better estimate for someone between the two locations than 77 or 70. It still tells you that your location is probably warmer than somewhere with an average of 30.

Jim Gorman
Reply to  Bellman
January 19, 2022 1:09 pm

“It’s still a better estimate for someone between the two locations than 77 or 70.”

No, it really isn’t a better estimate. The point is that you don’t know what the temperature between the two locations actually is. It could be lower than 70 or higher than 77, or it may be 70, or it may be 77. YOU SIMPLY DON’T KNOW.

You are assuming the temperatures are very closely correlated. Has anyone proven that, and how close do stations need to be before the correlation becomes too small? You said,

“It still tells you that your location is probably warmer than somewhere with an average of 30.”

You are correct about this. If I average Miami, FL and Buffalo, NY today, are those temperatures closely correlated? You are trying to prove that their average is a meaningful number. It is not. The average tells you nothing about either data point nor how they are changing. It tells you nothing about the temps in between. Miami temperatures vary a small amount year round. Buffalo temps change wildly throughout a year. Averaging them moderates the Buffalo temp changes. Is that a good thing?

Bellman
Reply to  Jim Gorman
January 20, 2022 6:46 am

I think the problem you and Tim are having here is you are not understanding what I mean by the “best estimate”. I am not saying you know the actual temperature, but the average value is a better estimate than anything else, given that is all the information you have.

If you are in a region and have zero knowledge of it, you have no way of guessing the temperature at a random point. If you know that the temperature at a point within the 30 mile radius is 77, you can say that 77 is the best estimate you could make of what the temperature at your location will be. It could still be a lot colder or warmer and you don’t know the range, but without any other information it is equally likely that your temperature is above or below this value; hence it is the best estimate in the sense that it is better than any other guess.

Now if you know that there is another place within the area that has a temperature of 70 you have more information, and the best estimate is the average of the two values. You also now have an estimate for the deviation, but even if you only know the average that is still the best estimate and better than 77. The fact that the temperature could be below 70 or above 77 is one reason why the mid point is a better estimate than either of the individual values.
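
As a small simulation of what “best estimate” means here, the sketch below assumes the unknown location’s temperature is drawn from the same distribution as the two known readings; the distribution used is made up purely for illustration:

import random

random.seed(2)
trials = 100000
err_single = err_avg = 0.0
for _ in range(trials):
    # three locations drawn from the same assumed distribution of local temperatures
    t1, t2, t3 = (random.gauss(73.5, 3.5) for _ in range(3))
    err_single += abs(t1 - t3)             # guess the unknown point from one reading
    err_avg += abs((t1 + t2) / 2 - t3)     # guess it from the average of two readings
print(round(err_single / trials, 2), round(err_avg / trials, 2))
# under this assumption the average of two readings has the smaller mean absolute error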

Of course it would be better if you had many more values, and even better if you had knowledge of the local geography and micro climates. But the question being addressed was whether the average of two values told you nothing, and I think it’s absurd to say that.

You can keep listing all the things an average doesn’t tell you as much as you want, but that doesn’t mean it tells you nothing, is meaningless, or has removed knowledge.

Tim Gorman
Reply to  Bellman
January 19, 2022 2:35 pm

It’s only a 73.5 if you are a mathematician or statistician. A physical scientist or engineer would tell you that the temperature midway between two points is basically unknown. You don’t know the elevation, terrain, or humidity at the mid-point so how can you know what the mid-point temperature actually is?

You actually don’t even know the two temperatures exactly, each have uncertainty and that uncertainty is typically ignored by mathematicians, statisticians, and climate scientists. They just assume the two stated values are 100% accurate so no uncertainty gets factored in when trying to infill an unknown temperature.

You *do* realize that even calibrated thermometers, one in the middle of a soybean field and one in a pasture next to the soybean field will read different temperatures, right? And Pete forbid that there should be a sizable pond in the middle of the pasture!

So the practical answer is that you simply do *NOT* know anything about the temperature midway between the two measuring points. The midway point could be higher in temp than the other two or it might be lower in temp. YOU JUST DO NOT KNOW.

Nor do anomalies help in determining a *global* average. You have lower daily temperature swings in some of the seasons than in others. So when you average a temp in Kansas City with one in Rio de Janeiro what does that average tell you? In that case you are finding an average of a multi-modal distribution. What does that average tell you?

Tim Gorman
Reply to  Bellman
January 19, 2022 2:22 pm

It’s like I already told you. The uncertainty and your calculated value should both end at the same point where the most doubtful digit exists in the elements you are using.

77 + 70 both have the units digit as the most doubtful digit. That’s where the result should end, the units digit.

If you have 70 +/- 0.5 and 77 +/- 0.5 then the *MOST* the uncertainty can be when they are added is 0.5 + 0.5 = 1. So your sum result would be 147 +/- 1.

Even if you assume those two figures have *some* random contribution and you therefore add the uncertainties in quadrature you get sqrt ( 0.5^2 + 0.5^2 ) = sqrt( .5) = .7, not .5

Your average should be stated as 74 +/- 1 or 74 +/- 0.7.

The units digit is the most doubtful digit in both elements so the average should be the same. Again, you cannot increase resolution to the tenths digit by just averaging the numbers.

Bellman
Reply to  Tim Gorman
January 19, 2022 4:32 pm

It’s like I already told you.

For anyone following at home, nearly everything Tim Gorman tells me is demonstrably wrong.

Your average should be stated as 74 +/- 1 or 74 +/- 0.7.

Here’s an example. Despite people telling him for at least a year that this is wrong, he still persists in the belief that the uncertainty of the average is the same as the uncertainty of the sum.
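
For readers wanting the two positions side by side, a minimal sketch using the numbers quoted above; which line applies to the average is exactly what is in dispute:

import math

readings = [77.0, 70.0]
u = [0.5, 0.5]                                  # implied uncertainty of each reading
n = len(readings)

total = sum(readings)                           # 147
u_sum_worst = sum(u)                            # 1.0  (worst case, uncertainties add)
u_sum_quad = math.sqrt(sum(x**2 for x in u))    # ~0.71 (independent errors, quadrature)

avg = total / n                                 # 73.5
u_avg_as_sum = u_sum_worst                      # 1.0, the position argued above
u_avg_divided = u_sum_quad / n                  # ~0.35, the position argued in this reply
print(total, u_sum_worst, round(u_sum_quad, 2), avg, u_avg_as_sum, round(u_avg_divided, 2))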

MarkW
Reply to  Bellman
January 15, 2022 3:49 pm

Looks like Bellman has never done either engineering or science.
When doing calculations, your final answer can never have more digits of accuracy than the original number did.

Bellman
Reply to  MarkW
January 15, 2022 3:58 pm

Yes to your first point, no to your second.

Carlo, Monte
Reply to  Bellman
January 15, 2022 6:27 pm

The hinge on which all of climate scientology rotates.

Tim Gorman
Reply to  Bellman
January 16, 2022 9:12 am

You missed the significant digit rule that no calculated result should be stated past the last digit in doubt in the elements of the calculation.

12345 -> 12.345: the last digit in doubt would be the units digit. So your result should be quoted as 12.

As usual you are confusing the use of significant digits by mathematicians instead of physical scientists and engineers.

You cannot increase resolution by calculating an average. It is *truly* that simple.

“Suppose I take 1000 temperatures each written to the nearest degree C, i.e. 0 decimal places”

In other words your uncertainty is in the units digit. That uncertainty propagates through to the summation of the temperature measurements. And that uncertainty determines where your last significant digit should appear.

As usual you just ignore uncertainty and assume everything is 100% accurate – the hallmark of a mathematician as opposed to a physical scientist or engineer.

Bellman
Reply to  Tim Gorman
January 16, 2022 11:52 am

You missed the significant digit rule that no calculated result should be stated past the last digit in doubt in the elements of the calculation.

I was using this set of rules, as recommended by Jim. I see nothing about the rule you speak of. In any event, if there’s no doubt about the integer digit in any of the readings, there would be no doubt about the third decimal place when I divide them by 1000.

As usual you are confusing the use of significant digits by mathematicians instead of physical scientists and engineers.

Has it occurred to you that taking an average or any statistic is a mathematical rather than an engineering operation?

You cannot increase resolution by calculating an average. It is *truly* that simple.

It truly isn’t, however you are defining resolution.

In other words your uncertainty is in the units digit. That uncertainty propagates through to the summation of the temperature measurements. And that uncertainty determines where your last significant digit should appear.

That was the point I was making at the end. I think it’s better to base your figures on the propagated uncertainty rather than using these simplistic rules for significant figures.

As usual you just ignore uncertainty and assume everything is 100% accurate – the hallmark of a mathematician as opposed to a physical scientist or engineer.

I said nothing about the uncertainty of the readings, I was just illustrating what using the “rules” would mean.

Tim Gorman
Reply to  Bellman
January 17, 2022 9:26 am

“I was using this set of rules, as recommended by Jim. I see nothing about the rule you speak of. In any event, if there’s no doubt about the integer digit in any of the readings, there would be no doubt about the third decimal place when I divide them by 1000.”

In other words you *STILL* have never bothered to get a copy of Dr. Taylor’s tome on uncertainty! The rules you are looking at are but an *example* given to students at the start of a lab class. This is usually extended throughout the lab to include actual usage in the real world.

I know this subject has been taught to you multiple times but you just refuse to give up your delusions about uncertainty.

Taylor:

Rule 2.5: Experimental uncertainties should almost always be rounded to one significant figure.

Rule 2.9: The last significant figure in any stated answer should usually be of the same magnitude (in the same decimal point) as the uncertainty.

Taylor states there is one significant exception to this. If the leading digit in the uncertainty is a 1, then keeping two significant figures in ẟx may be better. For instance, if ẟx = 0.14 then rounding this to 0.1 is a substantial proportionate reduction. In this case it would be better to just use the 0.14. As the leading digit goes up (I.e. 2-9) there is less reason to add an additional significant figure.
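
As a sketch of how those two rules might be applied mechanically (the helper below is an illustration of the quoted rules, not code from Taylor):

import math

def round_per_taylor(value, uncertainty):
    # Round the uncertainty to one significant figure (two if its leading digit is 1),
    # then round the value to the same decimal place (per the quoted Rules 2.5 and 2.9).
    exponent = math.floor(math.log10(abs(uncertainty)))
    leading = int(abs(uncertainty) / 10**exponent)
    sig_figs = 2 if leading == 1 else 1
    decimals = -(exponent - (sig_figs - 1))
    return round(value, decimals), round(uncertainty, decimals)

print(round_per_taylor(0.901, 0.027))   # -> (0.9, 0.03)
print(round_per_taylor(0.901, 0.14))    # -> (0.9, 0.14), leading digit 1 keeps two figures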

Has it occurred to you that taking an average or any statistic is a mathematical rather than an engineering operation.

Uncertainty in a measurement appears to be significant only to physical scientists and/or engineers. This is *especially* true of the examples of mathematicians on this blog!

“It truly isn’t, however you are defining resolution.”

A statement from a mathematician, not a physical scientist or engineer who has to work in the real world. Resolution is defined by the measurement device. You can’t get better than that. Refer back to your statement that the measurements are rounded to the units digit. That means your measurement has a resolution in the units digit, anything past that has to be estimated and estimated values in a measurement introduce uncertainty. You can’t fix that by calculation. Your uncertainty will have AT LEAST a value of +/- 0.5. That value is a MINIMUM value. Other factors will only add additional uncertainty.

“That was the point I was making at the end. I think it’s better to base your figures on the propagated uncertainty rather than using these simplistic rules for significant figures.”

You can’t get away from uncertainty in physical measurements. And that uncertainty *has* to follow the rules for significant figures. Otherwise someone using your measurements will have no idea of what the measurement really means. Propagated uncertainties are no different. If you imply a smaller propagated uncertainty than what the measurement resolutions allow then you are committing a fraud upon those who might have to use your measurement.

“I said nothing about the uncertainty of the readings, I was just illustrating what using the “rules” would mean.”

Of course you did. You stated the measurements were rounded to the nearest units digit. That implies an uncertainty associated with your measurements of +/- 0.5.

Bellman
Reply to  Tim Gorman
January 17, 2022 1:34 pm

In other words you *STILL* have never bothered to get a copy of Dr. Taylor’s tome on uncertainty.

If you mean J.R. Taylor’s An Introduction to Error Analysis I’ve quoted it to you on numerous occasions and you keep rejecting what it says. But I’ve also been accused of using it when it’s out of date, and should not be talking about uncertainty in terms of error.

The rules you are looking at are but an *example* given to students at the start of a lab class.

You need to take this up with Jim. He’s the one saying they showed that an average couldn’t increase precision.

Rule 2.5: Experimental uncertainties should almost always be rounded to one significant figure.

Yes, that’s what he says. Others, including the GUM, say one or two significant figures. Some even recommend 2 over 1. This is why it’s best not to treat any authority as absolute, especially when talking about uncertainty.

“A statement from a mathematician, not a physical scientist or engineer who has to work in the real world.”

You’re too kind. I may have studied some maths and take an interest in it, but I wouldn’t call myself a mathematician. But I disagree that statisticians don’t work in the real world.

“Resolution is defined by the measurement device.”

I was thinking that the VIM defined resolution in a couple of ways, but the online version seems to have been removed, so I can’t check. Instrument indication is one type of resolution, but the other is along the lines of the smallest change it’s possible to discern.

You can’t get better than that.

A statement that shows a lack of ambition. Have you forgotten Taylor’s example of measuring a stack of paper? The resolution of the measurement of the stack may only be 0.1″, but the thickness of a single sheet of paper can be calculated to 4 decimal places.
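
Numerically, and assuming the stack in Taylor’s example contains 200 sheets (an assumption consistent with the per-sheet figures quoted in the reply below), the arithmetic is just:

stack_thickness = 1.3        # inches, measured to the nearest 0.1 inch
stack_uncertainty = 0.1
sheets = 200                 # assumed sheet count for this illustration

per_sheet = stack_thickness / sheets                  # 0.0065 inch
per_sheet_uncertainty = stack_uncertainty / sheets    # 0.0005 inch
print(round(per_sheet, 4), round(per_sheet_uncertainty, 4))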

Tim Gorman
Reply to  Bellman
January 19, 2022 2:49 pm

“Instrument indication is one type of resolution, but the other is along the lines of the smallest change it’s possible to discern.”

Which only shows you have no understanding of uncertainty. That certainly shows in just about everything you post.

If you have a digital voltmeter with a 3 digit display what is the smallest change it is possible to discern?

” Have you forgotten Taylor’s example of measuring a stack of paper? The resolution of the measurement of the stack may only be 0.1″, but the thickness of a single sheet of paper can be calculated to 4 decimal places.”

Go back and reread Taylor again. The stack is measured at 1.3 +/- .1. Tenths digit in both.

Each sheet is .0065 +/- .0005. Write that as 65 x 10^-4 +/- 5 x 10^-4. Units digit in both.

Bellman
Reply to  Tim Gorman
January 19, 2022 3:56 pm

Here’s the definition I was thinking of:

resolution

smallest change in a quantity being measured that causes a perceptible change in the corresponding indication

NOTE Resolution can depend on, for example, noise (internal or external) or friction. It may also depend on the value of a quantity being measured

https://www.bipm.org/documents/20126/2071204/JCGM_200_2012.pdf

What you are talking about is

resolution of a displaying device

smallest difference between displayed indications that can be meaningfully distinguished

Jim Gorman
Reply to  Bellman
January 20, 2022 6:24 pm

If you don’t have the background to understand what you are reading you won’t get the right answer.

Watch this YouTube video for an education.

https://youtu.be/ul3e-HXAeZA

Bellman
Reply to  Jim Gorman
January 21, 2022 7:01 am

So your response to me quoting the Joint Committee for Guides in Metrology definition of resolution is to prefer a random YouTube video aimed at A level students.
Do you also argue that temperature values are not measurements but readings?

Bellman
Reply to  Tim Gorman
January 19, 2022 4:08 pm

“Each sheet is .0065 +/- .0005. Write that as 65 x 10^-4 +/- 5 x 10^-4. Units digit in both.”

Good. So we accept that if you measure 1000 things with a resolution of 1, we can still divide the sum by 1000, get an average with 3 decimal places, and it doesn’t affect your significant figure rules because you can state it in units of 10^-3.

It just doesn’t agree with you saying “Resolution is defined by the measurement device. You can’t get better than that.”

Tim Gorman
Reply to  TheFinalNail
January 15, 2022 11:14 am

The average can only have the same number of significant digits as the elements used to calculate the average. I.e. no increase in precision. Using your logic a repeating decimal average value would be infinitely precise. That’s only true for a mathematician or a climate scientist.

Carlo, Monte
Reply to  Tim Gorman
January 15, 2022 2:06 pm

He has been told this on multiple occasions yet refuses to acknowledge reality.

Derg
Reply to  TheFinalNail
January 14, 2022 10:54 am

Lol slowing down 😉

And yet the sea is rising at a slow rate. In 500 years Obama’s house will be underwater.

Jim Gorman
Reply to  TheFinalNail
January 14, 2022 12:20 pm

CO2 is now impotent, right? Or will we see an immense erection of temperature values when natural variation goes away in the next couple of years?

TheFinalNail
Reply to  Jim Gorman
January 15, 2022 2:41 am

My guess is we will see the current long term rate of rise continue (about 0.2C per decade over a running 30-year period). There will of course be spikes up and down due to natural variability and volcanic activity, etc.

Jim Gorman
Reply to  TheFinalNail
January 15, 2022 5:11 am

So you are now willing to act like a dictator and force everyone to finance the spending of trillions of dollars we don’t have based on a guess?

Whatever happened to KNOWING for sure what will occur? Science is not based on guesses, it is only based on provable facts.

Thanks for nothing!

MarkW
Reply to  TheFinalNail
January 14, 2022 2:26 pm

Why stop at a 30 year trend? Why not a 100 or 1000 year trend?

TheFinalNail
Reply to  MarkW
January 15, 2022 2:43 am

30 years is regarded as a period of ‘climatology’ by the WMO. It is often used as the base period for anomalies (GISS, UAH, HadCRUT), though not necessarily the same 30-year period.

Jim Gorman
Reply to  TheFinalNail
January 15, 2022 5:03 am

And the WMO is the be all and end all in deciding this? Tell us a designated climate area on the earth that has changed in a 30 year period.

“Climate is the average weather conditions in a place over a long period of time—30 years or more.” From What Are the Different Climate Types? | NOAA SciJinks – All About Weather

I think you’ll find that 30 years is the minimum time to detect a climate change. As of present, no one has ever reclassified any areas to a new climate type.

MarkW
Reply to  Jim Gorman
January 15, 2022 3:53 pm

Post modern science.
Decide what the answer should be, then invent a method that gets you there.

MarkW
Reply to  TheFinalNail
January 15, 2022 3:52 pm

Since when do we do science based on the most convenient data set?

John Shewchuk
January 14, 2022 6:23 am

NOAA and NASA continue to play god with temperature data … https://www.youtube.com/watch?v=hs-K_tadveI

TheFinalNail
Reply to  John Shewchuk
January 14, 2022 7:39 am

Berkeley are also calling it the 6th warmest year:

As are JMA.

MarkW
Reply to  TheFinalNail
January 14, 2022 2:30 pm

Using the same data.

Bellman
Reply to  MarkW
January 14, 2022 5:08 pm

It’s equal 7th warmest in UAH and equal 6th in RSS, not using the same data.

TheFinalNail
Reply to  MarkW
January 15, 2022 2:44 am

Using the same data.

Not quite. Berkeley in particular uses a lot more stations than the others.

LdB
Reply to  TheFinalNail
January 15, 2022 4:51 pm

Stations and a method that is rejected by even the CAGW crowd … you probably need to specify a point 🙂

LdB
Reply to  TheFinalNail
January 14, 2022 5:01 pm

Warmest year eva according to the bloke down the pub; it is a very subjective thing.

Joseph Zorzin
Reply to  John Shewchuk
January 14, 2022 7:40 am

fantastic! science fiction!

Meab
Reply to  John Shewchuk
January 14, 2022 9:55 am

Good video, well worth watching. It shows several things using the raw and “corrected” data from the USHCN network, the world’s most reliable network of temperature measurement stations for analysis of long-term temperature changes. First, it shows that the raw data actually shows a slight decline in temperature over the US in the last century but the NOAA-adjusted data shows a temperature increase. Since the temperature increase is all owing to adjustments, in order for NOAA’s adjusted trends to be correct NOAA must have complete confidence that their adjustments are unbiased. Second, it shows that NOAA has recently taken almost 1/3 of the stations off-line, replacing their actual measurements with infilled (interpolated and adjusted) data. Therefore, NOAA’s recent data are the most subject to infilling errors. Third, the most recent temperature adjustments are still positive. This is a most curious thing given how modern stations are computerized and automated. If anything, most of the recent adjustments should be in the downward direction due to increasing UHI, but they aren’t.

These three things cast serious doubt on the accuracy of NOAA’s claims regarding which years were the hottest, especially as the differences between years are, at most, a few hundredths of a degree.

Reply to  Meab
January 14, 2022 10:09 am

Exactly right. Thanks for the confirmation. Tony Heller has been exposing this fraud for several years, and I thought more exposure is needed. I now even discuss this at my public speaking events and it’s amazing how many are completely stunned — and upset. And the dang Weather Channel keeps showing this old video, even though NOAA changed their data … https://www.youtube.com/watch?v=HQKbm4qU_lQ

Carlo, Monte
Reply to  John Shewchuk
January 14, 2022 10:37 am

Great video; incredibly, month-after-month the usual suspects show up in WUWT trying to defend this professional misconduct.

John Shewchuk
Reply to  Carlo, Monte
January 14, 2022 10:53 am

Yes indeed — and thanks for the comments. It’s common sense — the fraud is so obvious and blatant. Just like Hitler indicated …the bigger the lie and the greater its delivery — the more people will believe. It’s no different than when people killed over 50,000 witches because they were indoctrinated to believe the witches caused the Little Ice Age and other related disasters of that time period. It’s ignorance fostered by propaganda. And so, hey … why fight it, and just do as our “climate leaders” do. If they insist on fighting climate change from the seashore — then we should too … https://www.youtube.com/watch?v=dZvYHt_3nt0

bdgwx
Reply to  John Shewchuk
January 14, 2022 1:14 pm

There is no fraud John. If you think there is and it is so obvious and blatant then it should be easy for you to show us which line or section in the code here and here is the fraudulent piece. That is your challenge.

John Shewchuk
Reply to  bdgwx
January 14, 2022 2:04 pm

Don’t worry about a thing, the new climate leadership at the United Nations will fix things … https://www.youtube.com/watch?v=p8hKJ_MMza8

bdgwx
Reply to  John Shewchuk
January 14, 2022 5:43 pm

How is that related to you indicting scientists of fraud?

John Shewchuk
Reply to  bdgwx
January 14, 2022 5:47 pm

That’s easy — just watch what our “climate leaders” DO versus SAY … https://www.youtube.com/watch?v=dZvYHt_3nt0

bdgwx
Reply to  John Shewchuk
January 14, 2022 9:00 pm

I watched the video. There is no evidence presented of fraud. It doesn’t even discuss fraud. No wait…it doesn’t even discuss science or evidence of any kind at all. In fact, there is no discussion in the video…like at all. Is it meant to be a joke?

John Shewchuk
Reply to  bdgwx
January 14, 2022 9:03 pm

Glad you liked it. Speaking of jokes … https://www.youtube.com/watch?v=cE6rAWcjTyw

bdgwx
Reply to  John Shewchuk
January 15, 2022 5:59 am

So do you believe scientists actions are criminal or not?

Reply to  bdgwx
January 15, 2022 6:04 am
LdB
Reply to  bdgwx
January 15, 2022 4:57 pm

Stupid question. They aren’t criminal, because scientists can propose any theory they like, even stupid things like pink unicorns creating Earth. They may even engage in scientific fraud, of which there have been huge numbers lately in many fields, which is still not criminal. The only point it becomes criminal is if they do something against some law.

Tim Gorman
Reply to  bdgwx
January 14, 2022 5:33 pm

If those infilled station records were from stations that were more than 50 miles distant, either in longitude or latitude, then the made-up data is inaccurate. The correlation of temperature between two points on the globe that are more than 50 miles apart is less than 0.8, and most physical scientists will consider that insufficient correlation to make the data useful.

Nothing in the programming needs to be shown to be fraudulent. The assumptions about infilling are just plain wrong.

Reply to  Tim Gorman
January 15, 2022 6:06 am

Ditto. And the “wrong” supports the big “wrong” … https://www.youtube.com/watch?v=GYhfrgRAbH4

bdgwx
Reply to  Meab
January 14, 2022 10:42 am

1) USHCN is not “the world’s most reliable network of temperature measurement”. It’s not even worldwide. It only covers 2% of the Earth’s surface.

2) USHCN is a subset of GHCN. It is produced by the same processing system. All USHCN observations are included in GHCN. As a result the adjustments are applied equally to both GHCN and USHCN.

3) The adjustments applied to GHCN include those for station moves, instrument changes, time-of-observation changes, etc. Those for ERSST include those for bucket measurements, ship intake measurements, etc. They are necessary to remove the biases they cause. The net effect of all adjustments actually reduces the overall warming trend.

4) The UHI effect and the UHI bias are not the same thing. UHI effect is the increase in temperature in urban areas as a result of anthropogenic land use changes. UHI bias is the error induced in a spatial average as a result of inadequate sampling of urban regions. It is possible for the UHI effect and the UHI bias to simultaneously be positive and negative respectively. The UHI bias is positive when urban observations are used as proxies for predominately rural grid cells or when the ratio of urban-to-rural stations increases. The UHI bias is negative when rural observations are used as proxies for predominately urban grid cells or when the ratio of urban-to-rural stations decreases.

5) USCRN is a network of stations that has been specifically designed to mitigate non-climatic effect biases like station moves, instrument changes, time-of-observation changes, etc. The USCRN-raw data corroborates the USHCN-adj data and confirms that USHCN-raw is contaminated with biases. See Hausfather et al. 2016 for details of this comparison.

MarkW
Reply to  bdgwx
January 14, 2022 10:51 am

1) USHCN is not “the world’s most reliable network of temperature measurement”. It’s not even worldwide. It only covers 2% of the Earth’s surface.

Is English not your first language?
The quote does not claim that it covers the entire earth.

bdgwx
Reply to  MarkW
January 14, 2022 11:38 am

You missed some of the conversation in another thread where I pointed out the fact that the net effect of all adjustments actually reduces the warming trend relative to the unadjusted data for the global mean surface temperature. meab disagreed and used the USHCN dataset as evidence and to represent the entire Earth. There was no concession at the time that USHCN cannot possibly be used to make statements about the effect of adjustments on the global mean surface temperature, or any statements regarding the entire Earth for that matter, so I have no choice but to continue to believe that he still thinks it is a global or worldwide dataset or can be used to make statements regarding the entire Earth.

Meab
Reply to  bdgwx
January 14, 2022 12:39 pm

Now you’re just lying, badwaxjob. The types of adjustments applied to both data sets are substantially the same, and you know it.

Reply to  Meab
January 14, 2022 2:09 pm

Climate religion has its crusaders — just like in the “crusades” 1,000 years ago, where data was not a requirement — just a belief. It is sad to see — even today. Fortunately, new leadership at the UN can see the fraud … https://www.youtube.com/watch?v=p8hKJ_MMza8

bdgwx
Reply to  Meab
January 14, 2022 8:56 pm

Of course I know it. That’s what I’m trying to tell you. Again…USHCN is produced by the same processing system as GHCN. The adjustments that are applied to GHCN are the same as those applied to USHCN. The only difference is that USHCN only represents about 4% of the stations in GHCN. I don’t know how trying to explain something that you don’t seem to be challenging (at least with this post) makes me a liar.

MarkW
Reply to  bdgwx
January 14, 2022 2:31 pm

And once again, the alarmist tries to completely change the subject.

I’m getting the idea that you don’t have the ability to stick with one subject when you find yourself falling behind.

Carlo, Monte
Reply to  bdgwx
January 14, 2022 10:53 am

Nothing to see here, just more keeping the jive alive, move along now folks…

Jim Gorman
Reply to  bdgwx
January 14, 2022 1:04 pm

1) USHCN is not “the world’s most reliable network of temperature measurement”.

You didn’t even address the assertion in your answer. Bob and weave, right?

2) USHCN is a subset of GHCN. It is produced by the same processing system. 

So what is the point?

3) The adjustments applied to GHCN include those for station moves, instrument changes, time-of-observation changes, etc. 

So if we give you some stations that have been adjusted, especially from the late 1800’s or early 1900’s, you can provide physical evidence supporting the timing of moves, instrument changes, etc, right?

None of these address the artificial increase in precision of past data that was recorded only in integers. Show some scientific or academic references that support showing temperatures up until 1980 with precision to 1/100ths or even 1/1000ths of a degree.

fretslider
January 14, 2022 6:23 am

“Earth’s global average surface temperature in 2021 tied with 2018 as the sixth warmest on record”

So, tied and not even fifth. Oh well.

Aerosols to the rescue?

The World Was Cooler in 2021 Than 2020. That’s Not Good News

As the world locked down in 2020, fewer emissions went into the sky, including aerosols that typically reflect some of the sun’s energy back into space. “If you take them away, you make the air cleaner, then that’s a slight warming impact on the climate,” said Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies, during a Thursday press conference announcing the findings. But as economic activity ramped back up in 2021, so did aerosol pollution, contributing again to that cooling effect. 

https://www.wired.com/story/the-world-was-cooler-in-2021-than-2020-thats-not-good-news/

They didn’t think that headline through.

RoodLichtVoorGroen
Reply to  fretslider
January 14, 2022 7:35 am

“The World Was Cooler in 2021 Than 2020. That’s Not Good News”

Indeed it isn’t, but not for the reasons stated.

“If you take them away, you make the air cleaner, then that’s a slight warming impact on the climate,”

So close yet so far…

TheFinalNail
Reply to  fretslider
January 14, 2022 7:41 am

The result of cooling La Nina conditions in the Pacific, when the ocean absorbs more heat. There is no evidence of a slowdown in the long-term rate of surface warming.

Dave Yaussy
Reply to  TheFinalNail
January 14, 2022 8:19 am

Thank goodness. A slow, beneficial increase in temperature and CO2 fertilization for a grateful world.

MarkW
Reply to  TheFinalNail
January 14, 2022 8:44 am

Apples and oranges.
You are looking at a 50 year alleged trend, and then saying that a very weak La Nina that is only a few months old, hasn’t completely cancelled this trend.

Derg
Reply to  TheFinalNail
January 14, 2022 10:56 am

And no mention of the warm 30’s 🤔

Richard M
Reply to  TheFinalNail
January 14, 2022 12:27 pm

There also is no evidence of a CO2 driven surface warming. The CERES data of the past two decades points to a different cause. A cloud thinning has allowed more solar energy to reach the surface.

Please tell us how you are going to increase the clouds.

Jim Gorman
Reply to  TheFinalNail
January 14, 2022 1:08 pm

Historical documents are going to be the downfall of some of the CAGW myth. As this site has shown recently, there are various newspapers and journals that tend to show that the temp record is not what it should be. Too much fiddling will be caught out.

Mike Smyth
January 14, 2022 6:29 am

“Science leaves no room for doubt: Climate change is the existential threat of our time,” said NASA Administrator Bill Nelson.

That’s his THEORY and he’s sticking to it. Too bad the climate models are crap. Too bad the Arctic sea ice is INCREASING, not decreasing. Too bad there’s no correlation between CO2 and temperature.

Climate hysterics are neobarbarians. They want everyone to freeze to death because we can’t afford the energy to heat our homes.

Trying to Play Nice
Reply to  Mike Smyth
January 14, 2022 8:30 am

He mentions science but doesn’t have any available to show.

RetiredEE
Reply to  Trying to Play Nice
January 14, 2022 2:00 pm

Well, political science is NOT science. I think there is some confusion by the powers that be on this point.

ResourceGuy
January 14, 2022 6:35 am

When does the dam break on advocacy science and the related political hard press? Real science wants to know.

Patrick B
January 14, 2022 6:49 am

NASA doesn’t believe in margins of error. Are there any real scientists left at NASA?

bdgwx
Reply to  Patrick B
January 14, 2022 7:12 am
Carlo, Monte
Reply to  bdgwx
January 14, 2022 8:07 am

Not even close to a real UA, but it does have the standard milli-Kelvin “confidential informants”.

MarkW
Reply to  Carlo, Monte
January 14, 2022 8:46 am

Do they continue to misuse the “law of large numbers”?

Carlo, Monte
Reply to  MarkW
January 14, 2022 9:40 am

Any port in a storm for a climastrologer. They are completely oblivious about how ridiculous these claims are.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 9:07 am

That’s not what the data says. The data says the 95% CI is about 0.05 K for the contemporary period and 0.10 K or higher for the pre-WWII period. That is 100x higher than your claimed 0.001 K value. Don’t take my word for it. Download the data and see for yourself.

Carlo, Monte
Reply to  bdgwx
January 14, 2022 9:30 am

Silly person, 50-100mK are still milli-Kelvins, and are still way smaller than what is attainable with actual temperature measurements. You might know this if you had any real metrology experience.

Nowhere did I state anything about a “0.001 K value”.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 10:06 am

milli-Kelvin is 0.001 K. 50-100mK is 0.050-0.100 K. The latter two are 50x and 100x higher respectively than the former. And they aren’t even remotely similar in magnitude. The latter is literally 2 orders of magnitude higher. You can call me silly or any of the other names you like. It is not going to change the fact that NASA is not claiming milli-Kelvin level of uncertainty.

Carlo, Monte
Reply to  bdgwx
January 14, 2022 10:16 am

50-100mK is 0.050-0.100 K. The

Now you are just lying; repeat, nowhere did I state anything about a “0.001 K value”.

With the best available instruments it is possible to get down to 0.5-0.6°C. Anything smaller is ludicrous.

But keep trying, you have a vested interest in keeping the jive alive with these tiny linear “trends” that have no significance from a metrology point-of-view.

MarkW
Reply to  Carlo, Monte
January 14, 2022 10:54 am

Like most alarmists, bdgwx is very skilled in changing the subject.

bdgwx
Reply to  MarkW
January 14, 2022 2:17 pm

I’m not the one claiming that NASA does not believe in margins of error or that NASA is claiming milli-Kelvin uncertainty. But I am responding to those claims, and specifically those claims in the thread in which they were created. I’m neither changing the topic being discussed nor deflecting or diverting away from it. I’m responding to them directly. And to summarize this thread, not only did NASA not claim milli-Kelvin uncertainty, neither did they ignore error margins.

Tim Gorman
Reply to  bdgwx
January 14, 2022 5:59 pm

From the uncertainty link:

“In Lenssen et al (2019), we have updated previous uncertainty estimates using currently available spatial distributions of source data and state-of-the-art reanalyses, and incorporate independently derived estimates for ocean data processing, station homogenization and other structural biases.”

Estimates on top of estimates, homogenization on top of homogenization, and biases on top of biases.

As Hubbard and Lin stated in 2002, twenty years ago, adjustments have to be made on a station-by-station basis taking into account the micro-climate and environment at each individual station.

Apparently NASA hasn’t learned that lesson yet, not even after twenty years. Homogenization and reanalysis using stations more than 50 miles apart just introduces greater uncertainty, not less.

Carlo, Monte
Reply to  bdgwx
January 14, 2022 6:45 pm

You excel in pedantry.

MarkW
Reply to  bdgwx
January 15, 2022 3:57 pm

And there you go again.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 11:03 am

That’s not a lie. It is a fact that 50 mK = 0.050 K and 100 mK = 0.100 K.

And above you said “Not even close to a real UA, but it does have the standard milli-Kelvin “confidential informants”. You’ve also made the claim here, here, here, and here. In fact, this blog post here has numerous comments of you and Pat Frank defending your claims that scientists are reporting milli-Kelvin levels of uncertainty for the global mean surface temperature.

It is an undeniable fact. Scientists are not claiming a milli-Kelvin level uncertainty on global mean surface temperatures. Period.

Carlo, Monte
Reply to  bdgwx
January 14, 2022 4:11 pm

Just keep banging your head on the bricks, the pain will eventually ease off.

And with no concepts of what real-world metrology entails, you try to cover your ignorance with lots and lots of words and sophistry.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 8:52 pm

The fact that 50 mK = 0.050 K or that 0.050 K is 50x greater than 0.001 K is not sophistry. Middle school students understand it.

Pat Frank
Reply to  bdgwx
January 15, 2022 11:56 am

“The fact that 50 mK = 0.050 K or that 0.050 K is 50x greater than 0.001 K is not sophistry. Middle school students understand it.”

When they get to studying that, middle school students are taught that writing out a value to three digits past the decimal means that they claim to know the value to three significant figures past the decimal.

To write 0.050 K is a claim that you know the value to ±0.001 K. Period.

Captain climate
Reply to  bdgwx
January 14, 2022 10:30 am

You are a [snip] moron if you think equipment can measure to that level or somehow that every error occurring in measurement reduces with the central limit theorem.

Pat Frank
Reply to  bdgwx
January 15, 2022 11:51 am

0.050-0.100 K. The latter two are 50x and 100x higher

bdgwx, standard interpretation is that the last significant figure in a number represents the limit of uncertainty.

So, if one writes, 0.050 K last zero is taken to mean that the value is known to ±0.001 K. Thus, 0.050 K = 0.050±0.001 K. Standard practice.

To write 50 mK when the value is known to ±0.01 K, one writes it 0.05 K or as the exponential 5×10⁻² K, indicating ±0.01 K. Or one can write (5±1)×10⁻² K.

But writing the magnitude out to the third decimal place is a statement that the value is known to the third decimal place — three significant figures past the decimal.

Jim Gorman
Reply to  Pat Frank
January 15, 2022 6:05 pm

Just one more indicator of how these folks have never dealt with measurements either in real life or in a certified laboratory.

I don’t know of one lab teacher in college or high school who doesn’t teach that when you end a measurement with digits after the decimal point with a “0”, the assumption is that you have measured that value to that place.

0.050 does mean you have measured to the 1/1000ths place and have obtained a reading of “0” in the 1/1000ths digit. You have not ascertained a 0.051 or 0.049, but a 0.050 exactly. That makes the 1/10000ths place where resolution ends. It also means that you have 3 Significant Digits after the decimal point.

bdgwx
Reply to  Jim Gorman
January 15, 2022 8:36 pm

In this context the 0.050 K value isn’t the measurement though. It is the uncertainty. And note that I didn’t make up the 50 mK or 0.050 K value. Carlo Monte did. What I said was that the uncertainty is actually closer to 0.05 K. He’s the one that took the liberty to call it 50 mK. I think you and Pat need to explain these significant figure rules and the difference between a measurement and its uncertainty. Hopefully you guys can speak his language because it is abundantly clear that I’m not getting through.

bdgwx
Reply to  Pat Frank
January 15, 2022 7:05 pm

I’m not challenging anything you just said. But remember that the 0.050 K figure is itself an uncertainty. It’s not an actual temperature. So in the 0.050±0.001 K value you posted, the figure ±0.001 might be described as the uncertainty of the uncertainty, whatever that happens to mean. I don’t know. Either way it definitely does NOT describe the uncertainty of the actual temperature anomaly.

For example, in this dataset the 2021 annual anomaly is 0.901±0.027. Notice that Berkeley Earth is NOT saying that the uncertainty of 0.901 is ±0.001. They are saying it is ±0.027. The fact that they include the guard digit in the uncertainty is in no way a statement that the uncertainty of the anomaly is ±0.001, which would be absurd since they said it was actually ±0.027.

Carlo Monte is claiming that because they published the anomaly as 0.901, that necessarily means they are claiming an uncertainty of ±0.001 even though they literally say it is actually ±0.027. It would be the equivalent of taking your assessed uncertainty of ±0.46 and claiming that you are saying the uncertainty is ±0.01. If that sounds absurd it is because it is absurd. You aren’t saying that any more than Berkeley Earth, GISTEMP, or anyone else. That’s what I’m trying to explain to him. If you think you can get this point across to him then be my guest.

Pat Frank
Reply to  bdgwx
January 15, 2022 10:12 pm

Carlo Monte is claiming that because they published the anomaly as 0.901 that necessary means they are claiming an uncertainty of ±0.001 even they literally say it is actually ±0.027.

Carlo is correct. And the ±0.027 should be written as ±0.03.

For example, in this dataset the 2021 annual anomaly is 0.901±0.027. Notice that Berkeley Earth is NOT saying that the uncertainty of 0.901 is ±0.001. They are saying it is ±0.027.

That statement just shows you don’t know what you’re talking about and neither does Berkeley Earth.

“0.901±0.027” is an incoherent presentation.

The 0.901 represents a claim that the magnitude is known to the third place past the decimal. The ±0.027 is a claim that the uncertainty is also known to the third past the decimal.

But the measurement resolution is in fact at the second place past the decimal. Because the uncertainty at place two has physical magnitude. The third place cannot be resolved.

The uncertainty should be written as ±0.03 because the second decimal place clearly represents the limit of resolution. That’s where the rounding should be to produce a conservative estimate of accuracy. The third decimal place is physically meaningless.

And as the third place is meaningless the 0.901 itself should be written as 0.90 because the last digit is beyond the limit of resolution. It is meaningless. So the measurement should be written 0.90±0.03.

Of course, claiming to know a global temperature anomaly to ±0.01 C is equally nonsensical.

And this … Estimated Jan 1951-Dec 1980 global mean temperature (C)
Using air temperature above sea ice:  14.105 +/- 0.021
Using water temperature below sea ice: 14.700 +/- 0.021″

… is in-f-ing-credible. What a monument to incompetence.

The last paragraph in my 2010 paper indicates an appropriate round up of uncertainty to ±0.5 C.

bdgwx
Reply to  Pat Frank
January 16, 2022 4:36 am

PF said: “Carlo is correct. And the ±0.027 should be written as ±0.03.”

So ±0.027 is the same as ±0.001, but somehow ±0.03 is ±0.03?

PF said: “The last paragraph in my 2010 paper indicates an appropriate round up of uncertainty to ±0.5 C.”

You put ±0.46 in the abstract…twice. If ±0.027 is the same as ±0.001 then ±0.46 is the same as ±0.01. No?

Captain climate
Reply to  bdgwx
January 14, 2022 10:27 am

There’s no way you get to that precision without all errors canceling, and you have no evidence they do.

MarkW
Reply to  Captain climate
January 14, 2022 10:56 am

The claim is that if you measure 100 different points at 100 different times, with 100 different instruments, your accuracy for all the measurements goes up.

Total nonsense, but it keeps the masses in line.

bdgwx
Reply to  MarkW
January 14, 2022 1:03 pm

MarkW: “The claim is that if you measure 100 different points at 100 different times, with 100 different instruments, your accuracy for all the measurements goes up.”

Strawman. Nobody is saying that.

MarkW
Reply to  bdgwx
January 14, 2022 2:34 pm

That is how the law of large numbers works, when done properly.
It’s what you and your fellow alarmists use to get these impossibly high accuracy claims.

bdgwx
Reply to  MarkW
January 14, 2022 8:50 pm

No it’s not. The law of large numbers does not say that the uncertainty of individual observations decrease as you increase the number of observations. The uncertainty of the next observation will be the same as the previous observations regardless of how many you acquire. And don’t hear what I didn’t say. I didn’t say the uncertainty of the mean does not decrease as the number of observations increase. There is a big difference between an individual observation and the mean of several observations. Do not conflate the two.

Pat Frank
Reply to  bdgwx
January 15, 2022 12:14 pm

“The uncertainty of the next observation will be the same as the previous observations regardless of how many you acquire.”

No it won’t.

bdgwx
Reply to  Pat Frank
January 15, 2022 6:49 pm

That paper is paywalled. Regardless I don’t see anything in the abstract that is challenging my statement. I’m wondering if because you aren’t up to speed on the conversation you may not be understanding it so let me clarify now.

If you have an instrument with assessed uncertainty of ±X then any measurement you take with it will have an uncertainty of ±X. Taking the next measurement will not decrease the uncertainty of any measurement preceding it nor will it decrease the uncertainty of the most recent measurement. In fact, if anything the uncertainty may grow with each successive measurement due to drift.

MarkW
Reply to  bdgwx
January 15, 2022 4:01 pm

So you admit that claiming to know the temperature of the Earth 200 years ago to 0.01C is ridiculous, or that knowing the temperature of the oceans to 0.001C is utterly impossible.

bdgwx
Reply to  MarkW
January 15, 2022 6:42 pm

I’m the one trying to tell people that we don’t know the global mean surface temperature to within 0.001 C or even 0.01 C. The best we can do for monthly anomaly is about 0.05 C.

Pat Frank
Reply to  bdgwx
January 15, 2022 12:12 pm

Nobody is saying that.

They’re all saying that.

Definitively: Brohan, P., Kennedy, J. J., Harris, I., Tett, S. F. B., & Jones, P. D. (2006). Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850. J. Geophys. Res., 111, D12106 12101-12121; doi:12110.11029/12005JD006548

2.3.1.1. Measurement Error (ε_ob)
The random error in a single thermometer reading is about 0.2 C (1 sigma) [Folland et al., 2001]; the monthly average will be based on at least two readings a day throughout the month, giving 60 or more values contributing to the mean. So the error in the monthly average will be at most 0.2/(sqrt60) = 0.03 C and this will be uncorrelated with the value for any other station or the value for any other month.

Folland’s “random error” is actually the read error from eye-balling the meniscus.

That paragraph is the whole ball of wax for treatment of temperature measurement error among the professionals of global air temperature measurement. Pathetic isn’t the word for it. Incompetent is.
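
For reference, the arithmetic in the quoted Brohan et al. paragraph, under that paper’s stated assumption that the 60 or more readings carry independent random errors:

import math

read_error = 0.2          # C, 1-sigma error of a single thermometer reading, as quoted
readings_per_month = 60

sem = read_error / math.sqrt(readings_per_month)
print(round(sem, 3))      # ~0.026, quoted as "at most 0.2/(sqrt60) = 0.03 C" for the monthly average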

bdgwx
Reply to  Pat Frank
January 15, 2022 6:38 pm

PF said: “They’re all saying that.”

I think you’re confused because you don’t understand what “that” is. MarkW defines “that” as “The claim is that if you measure 100 different points at 100 different times, with 100 different instruments, your accuracy for all the measurements goes up.”

Nobody is saying “that”. In other words nobody is saying that accuracy goes up for individual measurements as you increase the number of measurements. What they are saying is that the uncertainty of the average of the 100 observations is less than the uncertainty of the individual observations. I’m boldening and underlining average intentionally to drive home the point. Note that the average is not the same thing as an individual value. Do not conflate the two.

PF said: “That paragraph is the whole ball of wax for treatment of temperature measurement error among the professionals of global air temperature measurement.”

And notice what they said. They said and I quote “So the error in the monthly average will be at most 0.2/(sqrt60) = 0.03 C”. I took the liberty to bolden and underline average to make it undeniably obvious that they didn’t say what MarkW is claiming.

PF said: “Pathetic isn’t the word for it. Incompetent is.”

What is incompetent is believing that the uncertainty of the average of a set of values is not equal to or less than the uncertainty of the individual values themselves. Fortunately, based on what I’ve seen in your work, you accept this.

MarkW
Reply to  bdgwx
January 15, 2022 4:00 pm

That is precisely what the various alarmists have been claiming. That’s how they get results of 0.001C out of records that record measurements to the nearest degree.

bdgwx
Reply to  MarkW
January 15, 2022 6:39 pm

I’ve not seen any dataset that publishes an uncertainty as low as 0.001 C for a global mean surface temperature.

Jim Gorman
Reply to  MarkW
January 14, 2022 1:15 pm

Not only accuracy but the precision is also increased. In other words, with enough measurements with a yardstick, you can get 1/10,000ths precision. Instead of ending up with 1 yard, you get 1.0001 yards. Do you know any machinists that would believe that?

bdgwx
Reply to  Captain climate
January 14, 2022 1:09 pm

I have the GUM, Taylor, NIST, and statistics experts and texts that all say that the uncertainty of the mean is less than the uncertainty of the individual measurements that went into that mean. In other words, I have a lot of evidence that backs that claim up. Would you like to discuss that evidence now?

MarkW
Reply to  bdgwx
January 14, 2022 2:35 pm

That only works when you are using one instrument to measure the same thing repeatedly.
It does not apply when you use multiple instruments to measure different things.

bdgwx
Reply to  MarkW
January 14, 2022 8:46 pm

Not only can the methods be used to combine uncertainties from different measurands measured with different instruments or methodologies, but they can also be used to combine uncertainties from measurands with completely different units. Don’t take my word for it. Look at GUM equation 10 and pay particular attention to the example used. Even Tim Gorman accepts this because when he tried to use Taylor equation 3.18 he was combining the uncertainty of not only different temperatures in units of K but also of the number of temperatures which is unitless. He just did the math wrong. Had he followed 3.18 and not made an arithmetic mistake he would have concluded that the uncertainty of the mean is given by u(T_avg) = u(T)/sqrt(N). Don’t take my word for it though. Try it for yourself. Compare your results with Taylor 3.16, Taylor 3.18, GUM 10, GUM 15 and the NIST uncertainty calculator. That’s 6 different methods that all give the same answer.
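
Here is a minimal numerical sketch of that claim in Python (it assumes N independent readings of the same quantity, each with zero-mean random error of standard uncertainty u; the numbers are purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
N = 100              # number of independent measurements
u = 0.5              # assumed standard uncertainty of each single measurement
trials = 200_000     # repeat the whole experiment many times

errors = rng.normal(0.0, u, size=(trials, N))   # independent, zero-mean errors
avg_error = errors.mean(axis=1)                 # error of the average in each trial

print(avg_error.std())    # empirical spread of the average, about 0.05
print(u / np.sqrt(N))     # u/sqrt(N) = 0.05

The spread of any single measurement stays at u = 0.5; only the spread of the average shrinks.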

Tim Gorman
Reply to  bdgwx
January 15, 2022 1:08 pm

Nope. I sent you a message on how to properly use eqn 3.18.

Try again.

here it is:

———————————————–
Equation 10 from the gum and Taylor’s Eqn 3.18
Tavg = (ΣTn for 1 to n) /n
δTavg/Tavg = δTavg/(ΣTn/n) = sqrt[ (ΣδTn/ΣTn)^2 + (δw/w)^2 ]
δw = 0
δTavg/(ΣTn/n) = sqrt[ (ΣδTn/ΣTn)^2 ] = (ΣδTn/ΣTn)
δTavg = (ΣTn/n) * (ΣδTn/ΣTn) = (ΣδTn)/n
δTavg is the average uncertainty of the Tn dataset.
If δT1 = .1, δT2 = .2, δT3 = .3, δT4 = .4, δT5 = .5 then ΣδTn = 1.5
1.5/5 = .3, the average of the uncertainties in the data set.
If all δTn are equal then you get δTavg = (n δT)/n = δT
———————————————–

Taylor’s 3.18:

If q = (x * y * z) / (u * v * w) and δy = δz = δu = δv = 0

Then δq/q = sqrt[ (δx/x)^2 + (δw/w)^2 ]

if w is a constant then this reduces to

δq/q = δx/x

If q = Tavg = (ΣTn for 1 to n) /n

then the rest follows.

δTavg = (ΣTn/n) * (ΣδTn/ΣTn) = (ΣδTn)/n

Again, this basically means that the uncertainty of T is equal to the average uncertainty of the elements of T.

The uncertainty in the mean doesn’t decrease to a lower value than the elements used.

You keep confusing accuracy and precision. You can calculate the standard deviation of the sample means to any precision you want; it doesn’t make the mean you calculate any more accurate.

bdgwx
Reply to  Tim Gorman
January 15, 2022 6:16 pm

You made the same arithmetic mistake here as you did down below. I showed you what the mistake is and how to fix it there. There’s no need for me to rehash that in this subthread.

Tim Gorman
Reply to  bdgwx
January 16, 2022 9:34 am

Nope. You didn’t show me anything. You just stated an assertion with no proof.

I answered you on it. If you are going to sum temperature measurements to do an average then you need to sum the uncertainties as well. You are trying to say ΣδTn is not right while saying ΣTn is ok.

You can’t have it both ways.

MarkW
Reply to  bdgwx
January 15, 2022 4:03 pm

They can’t, but that won’t stop the climate alarmists from violating all the rules of statistics, science and engineering.

bdgwx
Reply to  MarkW
January 15, 2022 6:19 pm

I’m not the one violating the rules of statistics. I get the same answer whether I apply Taylor 3.9/3.16, Taylor 3.16/3.18, Taylor 3.47, GUM 10, GUM 15, or the NIST Monte Carlo methods. The reason why Tim gets a different answer with Taylor 3.18 is because of an arithmetic mistake. If he were to do the arithmetic correctly he would get the same answer as everyone else does and would conclude that the uncertainty of the mean is less than the uncertainty of the individual elements from which the mean was computed. Don’t take my word for it though. I encourage you to apply each of these 6 methods and verify the result for yourself.

Tim Gorman
Reply to  bdgwx
January 16, 2022 9:37 am

I showed you the calculations WITH NO MISTAKES. You can’t point out any mistakes. All you can say is that it is wrong.

“δTavg = (ΣTn/n) * (ΣδTn/ΣTn) = (ΣδTn)/n
δTavg is the average uncertainty of the Tn dataset.”

You can’t show where this is wrong.

bdgwx
Reply to  Tim Gorman
January 16, 2022 11:29 am

You wrote this:

δTavg/Tavg = sqrt[ (ΣδTn/ΣTn)^2 + (δw/w)^2 ]

That is wrong. The reason it is wrong is because δ(ΣTn) does not equal Σ(δTn). Refer to Taylor 3.16 regarding the uncertainty of sums.

Fix the mistake and resubmit. I want you to see for yourself what happens when you do the arithmetic correctly.

Carlo, Monte
Reply to  bdgwx
January 16, 2022 11:35 am

You are in no position to make demands, foolish one.

Bellman
Reply to  Carlo, Monte
January 16, 2022 12:06 pm

Do you remember how you were insulting people for always having to have the last word yesterday?

(This is a fun game of chicken. Will he respond?)

Carlo, Monte
Reply to  Bellman
January 16, 2022 9:05 pm

More whining?

bdgwx
Reply to  Carlo, Monte
January 17, 2022 7:52 am

I don’t think it is unreasonable to ask that arithmetic mistakes be fixed, especially when the disagreement disappears once the math is done correctly. I understand that GUM 10 requires calculus, so I’ll give people grace on that one. But Taylor equations 3.16 and 3.18 only require Algebra I level math, which I’m pretty sure is a requirement to graduate high school, at least in the United States, so this shouldn’t be that difficult.

Carlo, Monte
Reply to  bdgwx
January 17, 2022 3:04 pm

I don’t think

Don’t care what you allegedly think.

You and bellcurveman are now the world’s foremost experts on uncertainty despite a total lack of any real experience with the subject.

bdgwx
Reply to  Carlo, Monte
January 17, 2022 5:11 pm

There’s no uncertainty (pun intended) in what I think. I definitely don’t think it is unreasonable to expect people to simplify/solve equations without making a mistake. And I’m hardly an expert. I’m just simplifying/solving equations that those far smarter than I (Taylor, the GUM, etc.) have made available. And don’t hear what I’m not saying. I’m not saying I’m perfect or immune from mistakes. I make mistakes all of the time. It happens. The only thing I ask is that we all (including me) correct them when they are identified.

Carlo, Monte
Reply to  bdgwx
January 17, 2022 6:04 pm

What you are claiming is that averaging can increase knowledge—it CANNOT. This is the nature of uncertainty that you refuse to acknowledge.

bdgwx
Reply to  Carlo, Monte
January 17, 2022 6:56 pm

What is being claimed by Taylor, the GUM, NIST, and all other statistics texts, and which I happen to accept, is that the uncertainty of the average is less than the uncertainty of the individual measurements that went into the average. I think it depends on your definition of knowledge whether or not that equates to an increase of knowledge.

Tim Gorman
Reply to  bdgwx
January 16, 2022 3:02 pm

What do you think each Tn has for uncertainty?

In order to find the uncertainty for ΣTn you have to know each individual δTn, so you have δT1, δT2, δT3, …, δTn.

Once again you can’t face the fact that the uncertainty of each individual element gets propagated into the whole.

So you wind up with δT1/T1, δT2/T2, δT3/T3, …., δTn/Tn for the relative uncertainties of all the elements.

If Ttotal = T1 + T2 + T3 + …. + Tn = ΣTn from 1 to n in order to calculate an average then why doesn’t the total uncertainty = ΣδTn from 1 to n?

If you want to argue that the relative uncertainty for each individual element needs to be added as fractions I will agree with you. But then you lose your “n” factor from the equation!

So which way do you want to take your medicine?



bdgwx
Reply to  Tim Gorman
January 16, 2022 7:10 pm

Taylor 3.16 says δ(ΣTn) = sqrt[Σ(δTn)^2] and when δTn is the same for all values then it reduces to δT * sqrt(n). You get the same result via GUM 10 and the NIST uncertainty calculator as well. This is the familiar root sum square (RSS) or summation in quadrature rule.
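
To illustrate with the five uncertainties from the example upthread (a sketch only; it assumes the errors are independent so Taylor 3.16 applies):

import math

dT = [0.1, 0.2, 0.3, 0.4, 0.5]             # the five uncertainties from the earlier example
rss = math.sqrt(sum(d**2 for d in dT))     # Taylor 3.16: δ(ΣTn) = sqrt[Σ(δTn)^2]
print(rss)                                 # ≈ 0.74, not the simple sum 1.5

# equal-uncertainty special case: δ(ΣTn) = δT * sqrt(n)
print(0.2 * math.sqrt(5))                  # ≈ 0.45 for five values each with δT = 0.2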

Tim Gorman
Reply to  bdgwx
January 17, 2022 8:38 am

“it reduces to δT * sqrt(n).”

ROFL! So you admit that it doesn’t reduce to δT / sqrt(n)?

That means that uncertainty GROWS by n, it doesn’t reduce by n.

Which means, as everyone has been trying to tell you, that the uncertainties quoted for GAT and the climate studies ARE NOT CORRECT!

bdgwx
Reply to  Tim Gorman
January 17, 2022 1:22 pm

TG said: “ROFL! So you admit that it doesn’t reduce to δT / sqrt(n)?”

I’ve never claimed that δ(ΣTn) = δT / sqrt(n). I’ve always said δ(ΣTn) = δT * sqrt(n).

TG said: “That means that uncertainty GROWS by n, it doesn’t reduce by n.”

Yeah…duh. That’s for when δq = δ(ΣTn). Everybody knows this. It is the familiar root sum square (RSS) or summation in quadrature rule.

TG said: “Which means, as everyone has been trying to tell you, that the uncertainties quoted for GAT and the climate studies ARE NOT CORRECT!”

What? Not even remotely close. I think you’ve tied yourself in a knot so tight you don’t even realize that Tsum = ΣTn is not the same thing as Tavg = ΣTn / n. I highly suspect you are conflating Tavg with Tsum. Let me clarify it now.

When q = Tsum then δq = δTsum = δT * sqrt(n)

When q = Tavg then δq = δTavg = δT / sqrt(n)

If you would just do the arithmetic correctly when applying Taylor 3.18 you’d see that δTavg = δT / sqrt(n) is the correct solution. This is literally high school level math.

Do you want to walk through Taylor 3.18 step by step?
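
In the meantime, here is a rough empirical check of the two statements (a Monte Carlo sketch in Python; it assumes n independent readings each carrying a random error of standard deviation δT, with illustrative numbers):

import numpy as np

rng = np.random.default_rng(1)
n, dT, trials = 60, 0.2, 100_000

err = rng.normal(0.0, dT, size=(trials, n))      # independent errors for each reading
print(err.sum(axis=1).std(), dT * np.sqrt(n))    # spread of the sum ≈ δT*sqrt(n) ≈ 1.55
print(err.mean(axis=1).std(), dT / np.sqrt(n))   # spread of the average ≈ δT/sqrt(n) ≈ 0.026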

Jim Gorman
Reply to  bdgwx
January 15, 2022 4:17 pm

I don’t think eq. 10 says what you think it says. First here is a section about adding variances from: https://intellipaat.com/blog/tutorial/statistics-and-probability-tutorial/sampling-and-combination-of-variables/

Linear Combination of Independent Variables

The mean of a linear combination is exactly what we would expect: W = aX + bY.

If we multiply a variable by a constant, the variance increases by a factor of the constant squared: variance(aX) = a^2 variance(X). This is consistent with the fact that variance has units of the square of the variable. Variances must increase when two variables are combined: there can be no cancellation because variabilities accumulate.

Variance is always a positive quantity, so variance multiplied by the square of a constant would be positive. Thus, the following relation for combination of two independent variables is reasonable:

variance(aX + bY) = a^2 variance(X) + b^2 variance(Y)

More than two independent variables can be combined in the same way.

If the independent variables X and Y are simply added together, the constants a and b are both equal to one, so the individual variances are added:

variance(X + Y) = variance(X) + variance(Y)

Now let’s look at what sections 5.1.2 and 5.1.3 say about eq. 10. I’ll include a screenshot.

“The combined standard uncertainty uc(y) is the positive square root of the combined variance uc^2 ( y), …, where f is the function given in Equation (1).”

Why don’t you define the function of how temps are combined so you can also properly define the partial derivative of “f”.

It goes on to say:

“The combined standard uncertainty uc(y) is an estimated standard deviation and characterizes the dispersion of the values that could reasonably be attributed to the measurand Y (see 2.2.3). … Equation (10) and its counterpart for correlated input quantities, Equation (13), both of which are based on a first-order Taylor series approximation of Y = f (X1, X2, …, XN), express what is termed in this Guide the law of propagation of uncertainty (see E.3.1 and E.3.2).”

Read 5.1.3 very carefully also. You may be able to understand why variances are very important and why they are asked for. When you quote a GAT without the variance, i.e. the dispersion of temperatures surrounding the GAT, then you are leaving out a very important piece of information.

It is why the SEM is a meaningless statistic of the mean of the sample means when it comes to defining how well the GAT represents the data used to calculate it.

Section E.3.2 also has a very good discussion of how the uncertainties of the input quantities wi, taken equal to the standard deviations of their probability distributions, combine to give the uncertainty of the output quantity z.

“In fact, it is appropriate to call Equation (E.3) the law of propagation of uncertainty as is done in this Guide because it shows how the uncertainties of the input quantities wi, taken equal to the standard deviations of the probability distributions of the wi, combine to give the uncertainty of the output quantity z if that uncertainty is taken equal to the standard deviation of the probability distribution of z.”

Do you want to explain where the standard deviations of the probability distributions of the input quantities used to calculate a GAT are calculated and what their values are and how the variances are combined?

Jim Gorman
Reply to  Jim Gorman
January 15, 2022 4:19 pm

From the GUM.

gum combined uncertainty.jpg
Bellman
Reply to  Jim Gorman
January 15, 2022 4:48 pm

Not sure why I’m jumping down this rabbit hole again, but in the link you have the formula

σ_w^2 = a^2 σ_x^2 + b^2 σ_y^2

So if you are taking the average, and a = b = 1/2, what do you think σ_w is?
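
For what it is worth, plugging a = b = 1/2 and equal sigmas into that formula (a small illustrative sketch; the 0.2 value is invented):

import math

sigma_x = sigma_y = 0.2    # assumed equal uncertainties of the two values
a = b = 0.5                # weights for a two-value average

sigma_w = math.sqrt(a**2 * sigma_x**2 + b**2 * sigma_y**2)
print(sigma_w)             # ≈ 0.141 = 0.2 / sqrt(2), smaller than either input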

Bellman
Reply to  Bellman
January 15, 2022 4:51 pm

Actually you don’t have to work it out, because they tell you in the next section, Variance of Sample Means.

bdgwx
Reply to  Jim Gorman
January 15, 2022 8:26 pm

For the application of GUM 10 the function f is defined as f(X_1, X_2, …, X_n) = Σ[X_i, 1, N] / N and so ∂f/∂X_i = 1/N for all X_i.
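
Plugging those sensitivity coefficients into GUM equation 10 numerically (a sketch only; it uses the 0.2 C single-reading figure quoted from Brohan et al. upthread and assumes the 60 readings are uncorrelated):

import math

N = 60                 # readings in a monthly average, per the Brohan et al. quote
u_x = 0.2              # assumed standard uncertainty of a single reading, C
c = 1.0 / N            # sensitivity coefficient ∂f/∂X_i for the mean

u_c = math.sqrt(sum((c * u_x)**2 for _ in range(N)))   # GUM 10 combined uncertainty
print(u_c, 0.2 / math.sqrt(60))                        # both ≈ 0.026 C, the "at most 0.03 C" figure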

Pat Frank
Reply to  MarkW
January 15, 2022 12:17 pm

It also doesn’t apply when measurement error varies systematically due to uncontrolled environmental variables.

bdgwx has been exposed to that qualification about a gazillion times by now. It slides off his back like water from a greased cutting board.

MarkW
Reply to  Pat Frank
January 15, 2022 4:04 pm

The alarmists have been conditioned to believe what ever nonsense their bishops tell them to believe.

Carlo, Monte
Reply to  bdgwx
January 14, 2022 4:13 pm

Did you remember to polish your gold star this morning?

Oh, and don’t forget that you thrive on abusing canned equations while pooh-poohing experienced professionals.

Jim Gorman
Reply to  bdgwx
January 14, 2022 4:27 pm

None of those say that. The uncertainty of the mean to which you refer is the Standard Error of the Sample Mean, i.e., the SEM. That is the width of the interval within which the population mean may lie when you estimate it by sampling and calculating a mean from a distribution of sample means.

You have no idea of the basis these documents are built upon. Cherry-picking equations without making sure the underlying assumptions are met is what an academician would do, not a person who has physically worked with measurements. You can’t even deal with precision and the accompanying Significant Digit rules needed to ensure appropriate precision is displayed.

bdgwx
Reply to  Jim Gorman
January 14, 2022 8:35 pm

Yes they do. Your own preferred method, Taylor 3.18, says so.

Tim Gorman
Reply to  bdgwx
January 14, 2022 6:44 pm

I’m sorry I got busy and didn’t get back to you.

Here is the result of Eqn 10 from the gum and Eqn 3.18 from Taylor.

Equation 10 from the gum and Taylor’s Eqn 3.18

Tavg = (ΣTn for 1 to n) /n

δTavg/Tavg = δTavg/(ΣTn/n) = sqrt[ (ΣδTn/ΣTn)^2 + (δw/w)^2 ]

δw = 0

δTavg/(ΣTn/n) = sqrt[ (ΣδTn/ΣTn)^2 ] = (ΣδTn/ΣTn)

δTavg = (ΣTn/n) * (ΣδTn/ΣTn) = (ΣδTn)/n

δTavg is the average uncertainty of the Tn dataset.

If δT1 = .1, δT2 = .2, δT3 = .3, δT4 = .4, δT5 = .5 then ΣδTn = 1.5

1.5/5 = .3, the average of the uncertainties in the data set.

If all δTn are equal then you get δTavg = (n δT)/n = δT

“uncertainty of the mean is less than the uncertainty of the individual measurements that went into that mean.”

Nope. There is no “uncertainty of the mean” calculated from the sample means. There is the standard deviation of the sample means which is the PRECISION with which you have calculated the mean. It is *NOT* the uncertainty of the mean.

The uncertainty of the mean is the propagated uncertainty from the data set values. As shown above it is at least the average value of the various uncertainties associated with the data in the data set.

bdgwx
Reply to  Tim Gorman
January 14, 2022 8:22 pm

Nope. sqrt[ (ΣδTn/ΣTn)^2 + (δw/w)^2 ] does not follow from Taylor 3.18.

Tim Gorman
Reply to  bdgwx
January 15, 2022 12:58 pm

Of course it follows. Relative uncertainty is δT/T.

Since Tavg uses ΣTn in its calculation then δTavg must also be based on ΣδTn/ΣTn.

If you don’t want to use Tavg and δTavg then figure out a better way to formulate the problem.



bdgwx
Reply to  Tim Gorman
January 15, 2022 3:53 pm

The Tavg method is fine. But there is a very subtle mistake in your first step.

You have this which is wrong.

δTavg/Tavg = sqrt[ (ΣδTn/ΣTn)^2 + (δw/w)^2 ]

The correct step is this.

δTavg/Tavg = sqrt[ (δΣTn/ΣTn)^2 + (δw/w)^2 ]

Pay really close attention here. In plain language you don’t take the sum of the uncertainties of Tn but instead you take the uncertainty of the sum of Tn. Or in mathematical notation if x = ΣTn then δx = δΣTn. It’s not δx = ΣδTn. If the first step is wrong then all steps afterward are wrong.

It’s easier to spot if you break it down into simpler steps.

x = Tsum = ΣTn

w = n

q = x / w = Tsum / n = Tavg

δq/q = sqrt[ (δx/x)^2 + (δw/w)^2 ]

δTavg/Tavg = sqrt[ (δTsum/Tsum)^2 + (δn/n)^2 ]

δTavg/Tavg = sqrt[ (δTsum/Tsum)^2 + 0 ]

δTavg/Tavg = sqrt[ (δTsum/Tsum)^2 ]

δTavg/Tavg = δTsum / Tsum

δTavg = δTsum / Tsum * Tavg

δTavg = δTsum / Tsum * Tsum / n

δTavg = δTsum / n

At this point we have to pause and use Taylor 3.16 to solve for δTsum. Once we do that we can plug that back in and resume Taylor 3.18. When you do that you’ll get δTavg = δT / sqrt(n).
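
A symbolic check of that chain of steps (a sketch using sympy; it assumes equal per-measurement uncertainty δT and independent errors so that Taylor 3.16 gives δTsum = δT * sqrt(n)):

import sympy as sp

n = sp.symbols('n', positive=True)
dT = sp.symbols('deltaT', positive=True)

dTsum = dT * sp.sqrt(n)    # Taylor 3.16 for the sum with equal uncertainties
dTavg = dTsum / n          # the δTavg = δTsum / n step above

print(sp.simplify(dTavg - dT / sp.sqrt(n)))    # prints 0, i.e. δTavg = δT / sqrt(n)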

MarkW
Reply to  bdgwx
January 15, 2022 4:02 pm

They can say something that is not true as many times as they want. It still won’t make it true.

bdgwx
Reply to  MarkW
January 15, 2022 6:09 pm

Who’s “they”? What are “they” saying?

MarkW
Reply to  bdgwx
January 14, 2022 10:53 am

0.05 is about half the error for the instruments involved and it completely ignores the error caused by woefully inadequate sampling.

Only those who are desperate to keep the scam alive believe that the 95% confidence interval is a mere 0.05K.

bdgwx
Reply to  MarkW
January 14, 2022 1:06 pm

Read Lenssen et al. 2019 for details regarding GISTEMP uncertainty. In a nutshell though the core concept is that the uncertainty of the mean is less than the uncertainty of the individual observations that went into that mean. Don’t hear what I didn’t say. I didn’t say the uncertainty of the individual observations is less when you have more of them. Nobody is saying that.

MarkW
Reply to  bdgwx
January 14, 2022 2:35 pm

More excuses for ignoring the basic rules of statistics.

Carlo, Monte
Reply to  MarkW
January 14, 2022 4:14 pm

He’s really adept at using the appeal to authority fallacy.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 5:37 pm

What reasonable and ethical alternative is there to citing sources? How do you do it?

MarkW
Reply to  bdgwx
January 15, 2022 4:07 pm

When your so called authority has been shown to be incorrect, just repeating that you have an “authority” is neither reasonable nor ethical.

bdgwx
Reply to  MarkW
January 15, 2022 6:08 pm

I don’t base my position on authorities. I base my position on evidence. I’ve not seen a challenge to the evidence presented in Lenssen 2019 that 1) identifies an egregious mistake, 2) is accompanied by a result with the mistake fixed so that the magnitude of the mistake can be assessed, and 3) doesn’t itself have an egregious mistake.

BTW…I’d still like to see those basic rules of statistics that you trust and use yourself.

MarkW
Reply to  Carlo, Monte
January 15, 2022 4:05 pm

As a good alarmist, he’s been trained to believe whatever he’s told to believe.

bdgwx
Reply to  MarkW
January 15, 2022 6:02 pm

I’m trained to believe what the abundance of evidence tells me to believe. I’m not sure if I’m an “alarmist” though. Different people have different definitions of “alarmist”. If you provide a definition against which my position can be objectively evaluated, I’ll be happy to give you my honest assessment of whether I’m an “alarmist” or not. Just know that assessment would only apply to your definition.

bdgwx
Reply to  MarkW
January 14, 2022 5:29 pm

Can you post a link to the basic rules describing how you quantify the uncertainty of the mean?

Jim Gorman
Reply to  bdgwx
January 15, 2022 5:32 pm

Yes. But first you must declare some baseline assumptions.

1) Is the data set you are using considered a population or does it consist of a number of samples?

2) If the data set is a number of samples, what is the sample size?

3) If the data set is a number of samples, are the individual samples IID? That is, do they have the same mean and standard deviation?

You might want to review this link.

Independent and Identically Distributed Data (IID) – Statistics By Jim

Carlo, Monte
Reply to  bdgwx
January 14, 2022 4:15 pm

Bollocks, this Lenssen authority that you appeal to must be another climastrologer who is either ignorant of uncertainty or is heavily into professional misconduct.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 5:35 pm

I’m not appealing to Lenssen. I’m appealing to the evidence provided. I always cite my references by name and year so that 1) you can find and double-check it yourself, 2) it follows the standard convention, and 3) I’m not accused of plagiarism. Don’t confuse appealing to evidence with appealing to authority.

Tim Gorman
Reply to  bdgwx
January 14, 2022 7:15 pm

Uncertainty is accuracy not precision. If you don’t propagate the uncertainty of the sample data into the sample mean and just assume the sample mean is 100% accurate then all you are doing is calculating the precision with which you have calculated the standard deviation of the sample means.

If sample1 mean is 30 +/- 1, sample2 mean is 29 +/- 1.1, sample3 mean is 31 +/- .9 then the mean you calculate from those stated values *will* have an uncertainty interval associated with it.

You simply cannot assume that 30, 29, and 31 are 100% accurate and their standard deviation is the uncertainty of the population mean. As shown in the prior message the uncertainty associated with the mean will be the average uncertainty of the individual components or +/- 1.

The standard deviation of the sample means will be

sqrt[ |30-29|^2 + |30-30|^2 + |31-30|^2 ] /3 =

sqrt[ 1^2 + 0^2 + 1^2 ] /3 = sqrt[ 2] /3 = 1.4/3 = .5

The standard deviation of the sample means is half the actual uncertainty propagated from the uncertainties of the individual elements.

All the .5 tells you is how precisely you have calculated the mean of the sample means assuming the stated values are 100% accurate. The higher the number of sample means you have the smaller the standard deviation of the stated values of the sample means should get. But that *still* tells you nothing about the accuracy of the mean you have calculated.

I’m not surprised climate scientists try to make their uncertainty look smaller than it truly is.

But it’s a fraud on those not familiar with physical science.

Pat Frank
Reply to  bdgwx
January 15, 2022 12:24 pm

Lenssen, et al., describe systematic error as “due to nonclimatic sources. Thermometer exposure change bias … Urban biases … due the local warming effect [and] incomplete spatial and temporal coverage.“

Not word one about systematic measurement error due to solar irradiance and wind-speed effects. These have by far the largest impact on station calibration uncertainty.

Lenssen et al., also pass over Folland’s ±0.2 C read error in silence. That ±0.2 C is assumed to be constant and random, both of which assumptions are tendentious and self-serving, and neither of which assumptions have ever been tested.

bdgwx
Reply to  Pat Frank
January 15, 2022 2:37 pm

Why would the solar irradiance and wind speed effects be a problem at one period of time but not another? In other words, why would that systematic bias survive the anomaly conversion?

It sounds like you are working with more information about that Folland ±0.2 C figure than was published. What does it mean? Why should it be factored into the uncertainty analysis on top of what is already factored in?

Pat Frank
Reply to  bdgwx
January 15, 2022 2:50 pm

“Why would the solar irradiance and wind speed effects be a problem at one period of time but not another?”

They’re a problem all the time, with varying impacts of unknown magnitude on measurement.

You claimed to have studied Hubbard and Lin 2002. And now here you are pretending to not know exactly what you purportedly studied.

It’s too much, really. The same tired, studied ignorance, over and yet over again.

bdgwx
Reply to  Pat Frank
January 15, 2022 5:59 pm

I did study Hubbard 2002. You can see that the MMTS bias is +0.21 C. That bias exists in the anomaly baseline. So when you subtract the baseline from the observation the bias cancels. This is mathematically equivalent to Ta = (To + B) – (Tb + B) = To – Tb + (B – B) = To – Tb where B is the bias, Ta is the anomaly value, To is the observation, and Tb is the baseline. The real question is whether the bias B = +0.21 C is constant or time variant?
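
A trivial numeric illustration of that cancellation (the numbers are invented and the bias B is assumed constant):

B = 0.21       # assumed constant instrument bias, C
To = 15.37     # hypothetical observation
Tb = 14.90     # hypothetical baseline mean

print(round((To + B) - (Tb + B), 6))   # 0.47
print(round(To - Tb, 6))               # 0.47, the constant bias drops out of the anomaly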

Regarding solar irradiance and wind speed effects…do you have data showing that the bias caused by these effects is increasing, decreasing, or staying constant with time?

And regarding the Folland ±0.2 C value would you mind presenting the data you are working from that suggests it is a different type of uncertainty that Hubbard 2002 did not address and which must be combined with the uncertainty assessed by Hubbard 2002?

Captain climate
Reply to  Patrick B
January 14, 2022 10:26 am

This is an agency that mixed imperial and metric units and thus killed a Martian probe. So no. It’s asshat country. Kudos so far for Webb though—- until they [screw] up.

[In the words of Biden, “come on man”. Enough with the profanity already. -mod]

Al Miller
January 14, 2022 6:50 am

“Science leaves no room for doubt: Climate change is the existential threat of our time,” 
What a blatant pile of garbage.
Further to this, since when are fudged figures and models “science”?


fretslider
Reply to  Al Miller
January 14, 2022 7:02 am

“Science leaves no room for doubt:”

What they meant was politicised science leaves no room for doubt

“- So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This ‘double ethical bind’ we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what is the right balance between being effective and being honest.”  —Dr. Stephen Schneider, former IPCC Coordinating Lead Author, APS Online, Aug./Sep. 1996

An activist has to feel they have been effective – honesty be damned.

Joseph Zorzin
Reply to  fretslider
January 14, 2022 7:42 am

Schneider should be in prison for that!

Timo V
Reply to  Joseph Zorzin
January 14, 2022 8:24 am

Umm… Make that Hell. He’s long gone.

jeffery p
Reply to  fretslider
January 14, 2022 8:11 am

I remember when NASA administrators had backgrounds in science and/or engineering. Is that no longer true?

MarkW
Reply to  Al Miller
January 14, 2022 7:34 am

Science always leaves room for doubt.
It’s only politics and religion that have no room for doubt.

yirgach
January 14, 2022 7:00 am

Joe Bastardi has some great comments on this in the last public Saturday Summary at Weatherbell.com.
Basically the oceans store and release heat at different times of the year leading to increasingly smaller plateaus. We are nearing the top of the curve…
Here’s a screenshot:
[screenshot image]

willem post
Reply to  yirgach
January 14, 2022 9:15 am

The temp went from -0.25 C to +0.2 C, a change of 0.45 C over 42 years, or about 0.01 C/y.

I think such a small change does not justify spending $TRILLIONS/y world-wide.

IF, CO2 IS the culprit, then world-wide, increased efficiency would be the best approach.

That would reduce CO2 and save resources and money

David Strom
January 14, 2022 7:00 am

What do the other global temperature datasets show?

Seems to me a pretty big leap from “it looks like it’s getting warmer” to “existential threat”. My son’s in high school; his teachers would say “show your work”. Even *if* (and I’m not conceding any of these) it’s getting warmer and the sea level is rising and ice sheets are melting and weather is more stormy/droughty/rainy/whatever, how does that mean the claimed warming threatens our *existence*?

Guess it’s a cry for attention.

Joseph Zorzin
Reply to  David Strom
January 14, 2022 7:44 am

children have an existential threat when they can’t have another lollipop

John Bell
January 14, 2022 7:04 am

Lewandowski projecting conspiracy on 60 minutes Australia
OT a bit, but worth a look here: https://www.youtube.com/watch?v=HeolihloqOY

Robertvd
January 14, 2022 7:04 am

not a grain of salt but a mountain.

ResourceGuy
January 14, 2022 7:12 am

Well, they do have a lifestyle to maintain in GISS NYC among all the concrete and inflation.

Goddard Institute for Space Studies – Wikipedia

bdgwx
January 14, 2022 7:14 am
Duane
Reply to  bdgwx
January 14, 2022 7:32 am

How do they measure “ocean heat content” when they only have data for most of the oceans as sea surface temperatures, which do NOT define “ocean heat content”?

MarkW
Reply to  Duane
January 14, 2022 8:48 am

They just assume that the vast majority that they haven’t sampled is behaving the same as the small portion they have sampled.

Duane
Reply to  MarkW
January 14, 2022 8:57 am

Which of course has no basis in reality. Anybody familiar with oceanography understands that there are all manner of ocean currents operating within the oceans … and anybody familiar with thermodynamics and fluid transport understands that, in oceans with complicated currents, eddies, bottom profiles, and depths, the sea surface is but the “skin” of the ocean and does not remotely begin to be representative of the entire water column, let alone the three-dimensional dynamics of the Earth’s oceans.

Of course, this is SOP with warmunists – they take a single parameter within a complex system and declare it to be utterly determinative … such as their ridiculous claim that CO2 concentrations in the lower atmosphere serve as the 100% control knob for the climate of this planet.

bdgwx
Reply to  Duane
January 14, 2022 10:23 am

I don’t know who these “warmunists” are that you are listening to. Regardless my advice is to stop listening to them and instead start listening to scientists and what the abundance of evidence says. For example, the director of GISS reports that CO2 is only 20% of the GHE. See Schmidt et al. 2010 for details.

Carlo, Monte
Reply to  bdgwx
January 14, 2022 10:38 am

Take a look in the bathroom mirror.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 10:48 am

I’ve never claimed that CO2 is the control knob or 100% of it. I’ve also never claimed that CO2 is responsible for all of the climate change we observe today. So if that is a necessary condition to be included as a “warmunist” then I’m definitely ineligible.

Dave Fair
Reply to  bdgwx
January 14, 2022 11:45 am

What is this “climate change we observe today?”

bdgwx
Reply to  Dave Fair
January 14, 2022 12:59 pm

In this context it is the +350 ZJ change in OHC.

MarkW
Reply to  bdgwx
January 14, 2022 2:36 pm

In this case it is a number that is so far below the ability of the system to measure that only those who are interested in obfuscation cite it.

Dave Fair
Reply to  bdgwx
January 14, 2022 4:40 pm

Other than a little energy into the oceans, what climatic metric(s) changed?

Mr.
Reply to  bdgwx
January 14, 2022 5:22 pm

Whoa there pilgrim!

Your IPCC defines ‘climate’ as conditions of ‘weather’ in an area observed over at least 30 years.

You’re moving the goalposts to global oceans conditions now?

Do we have to redefine ‘weather’ now to keep up with the ever-shifting goalposts of what “climate change” means?

bdgwx
Reply to  Mr.
January 14, 2022 7:47 pm

All kinds of changes can be expected. My top comment is focused on OHC which is why I pointed it out in response to DF’s question. I am in no way insinuating that other changes are precluded.

Mike
Reply to  bdgwx
January 14, 2022 6:45 pm

I’ve never claimed that CO2 is the control knob or 100% of it. I’ve also never claimed that CO2 is responsible for all of the climate change we observe today.

So you would disagree with the claim made by the IPCC that we (CO2) are actually responsible for well over 100% of the warming?

” as NASA’s Dr Gavin Schmidt has pointed out, the IPCC’s implied best guess was that humans were responsible for around 110% of observed warming (ranging from 72% to 146%), with natural factors in isolation leading to a slight cooling over the past 50 years.”

If so, please explain why this is the case.
Thanks…

bdgwx
Reply to  Mike
January 14, 2022 7:33 pm

No. I’m not challenging the IPCC attribution. That’s how I know CO2 isn’t the only thing modulating the climate. Refer to AR5 WG1 Ch. 8 table 8.2. Notice that CO2 is 65% of the total well-mixed GHG forcing. And of the total anthropogenic forcing (including aerosols, ozone, etc.) it is 80%. Note that Schmidt is not challenging that. He’s basing his statement off the same IPCC report and data, which also shows that ANT is about 110% of Observed (see figure 10.5).

MarkW
Reply to  bdgwx
January 14, 2022 10:58 am

Where’s this evidence that you worship?
It doesn’t exist, it has never existed.
The claim that you know the temperature of the entire oceans to within 0.05C is so utterly ridiculous that only someone with no knowledge of either science or statistics could make it with a straight face.

bdgwx
Reply to  MarkW
January 14, 2022 12:56 pm

I didn’t claim we know the temperature of the oceans to within 0.05 C. Cheng et al. 2022 clearly say the increase for 2021 is known to within ±11 ZJ. At a mass of approximately 0.7e21 kg for the first 2000 m and using 4000 J/(kg C) for the specific heat, that is about (1/4000) C·kg·J^-1 * (1/0.7e21) kg^-1 * 11e21 J = 0.004 C. If you think 0.05 C is “utterly ridiculous” then you’re almost certainly going to reject 0.004 C as well. Yet that is what it is.

But it’s not that ridiculous. It’s only about 1 part in 75,000, which isn’t even noteworthy in the annals of science. I think the record holder is LIGO at about 1 part in 1,000,000,000,000,000,000,000 (1e21). But even limiting it to temperature measurements, the bar for excellence today is, I believe, about 1 part in 1 million. In other words, the precision of the Cheng et al. 2022 figure won’t win them any recognition. Anyway, the evidence is Cheng et al. 2022 and Cheng et al. 2015. Now you might want to challenge that evidence. But you can’t say it doesn’t exist because it does, in fact, exist…literally.
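
For anyone who wants to check that back-of-envelope conversion (a sketch using the approximate mass and specific heat values above):

dQ = 11e21     # stated uncertainty in the annual OHC increase, J (±11 ZJ)
mass = 0.7e21  # approximate mass of the upper 2000 m of ocean, kg
c_p = 4000     # approximate specific heat of seawater, J/(kg C)

dT = dQ / (mass * c_p)
print(dT)      # ≈ 0.004 C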

MarkW
Reply to  bdgwx
January 14, 2022 2:37 pm

I don’t know if you actually are this clueless or if you just don’t care how dumb you look.

bdgwx
Reply to  MarkW
January 15, 2022 6:16 am

I have no shame in admitting that I am clueless compared to those who actually do the research and publish their results. And the more I learn the more I realize how clueless I am about the world around us.

MarkW
Reply to  bdgwx
January 15, 2022 4:09 pm

In other words, you have no idea whether they are right or not, but because you want to believe, you will just assume that they must be.

bdgwx
Reply to  MarkW
January 15, 2022 5:46 pm

The scientific method is based on falsification because truthification requires infinite effort. I cannot tell you that Cheng et al. 2015 and 2022 are right because that is an impossible task. But I can tell you that no one has published the finding of an egregious mistake with an accompanying result that has the mistake fixed. And until someone does I have no choice but to accept the contribution to the body of scientific knowledge, just like we do for all other lines of evidence that have yet to be falsified.

Duane
Reply to  bdgwx
January 14, 2022 11:23 am

Nearly all warmunists claim it is 100% about CO2. They aren’t scientists – merely political scientists (ie “propagandists”)

Dave Fair
Reply to  bdgwx
January 14, 2022 11:44 am

Schmidt’s models predict that additional CO2-caused warming is over 3 times its calculated effect by engendering positive feedbacks in the predominant GHG, H2O, and cloud alterations. Schmidt recently said, however, that UN IPCC CliSciFi models run egregiously hot. The UN IPCC CliSciFi AR6 had to throw out the results of the higher-end models.

The models are not sufficient evidence to fundamentally alter our society, economy and energy systems.

bdgwx
Reply to  Dave Fair
January 14, 2022 2:11 pm

Did he say CO2 is 100% of the total GHE or that CO2 is the cause of 100% of the global temperature change in the GISTEMP record?

Dave Fair
Reply to  bdgwx
January 14, 2022 4:44 pm

Who gives a shit what Gavin Schmidt said? He is a known Deep State liar.

bdgwx
Reply to  Dave Fair
January 15, 2022 7:45 am

WUWT does. He is a central figure in this blog post, in fact. I assume you care what he says as well, since you posted about him too.

bdgwx
Reply to  MarkW
January 14, 2022 9:23 am

That is not correct. See Cheng et al. 2015 for details on the method.

MarkW
Reply to  bdgwx
January 14, 2022 11:00 am

Yes it is correct. The fact is the oceans are woefully undersampled; you would need at least 100,000 more probes before you could even come close to those kinds of accuracies.

bdgwx
Reply to  Duane
January 14, 2022 9:21 am

The method and data is described in detail in Cheng et al. 2015.

Duane
Reply to  bdgwx
January 14, 2022 11:25 am

The point being there is no such temperature monitoring network other than sea surface temperatures. Everything else is fake

bdgwx
Reply to  Duane
January 14, 2022 12:34 pm

That’s not what Cheng et al. 2015 say.

Carlo, Monte
Reply to  bdgwx
January 14, 2022 6:50 pm

You sing the I Cheng.

Tom.1
Reply to  Duane
January 14, 2022 11:26 am

There is an array of buoys spread around the world’s oceans. They stay at depth (2000 m as I recall), and then periodically come to the surface, collecting data as they rise. When they get to the surface, they transmit their data to a satellite. That’s what we have.

Mr.
Reply to  Tom.1
January 14, 2022 5:50 pm

Yes Tom.
But how extensive is this “array”?

As was commented earlier, we would need tens if not hundreds of thousands more sampling buoys than what we’ve ever had to get a precise handle on what’s happening across and down in all the world’s oceans, all the time.

Back in the day, mainframe computer programs would report an error message –
“INSUFFICIENT DATA – NO RESULT”

We seem to have lost or discarded this sage message from (now) ancient computer systems.

MarkW
Reply to  Mr.
January 15, 2022 4:12 pm

I don’t remember which of the so called climate scientists I was having a talk with many years ago. When we got around to the quality of the historical temperature record, he admitted that it wasn’t fit for purpose, but since it was the only thing they had, they had no choice but to use it.

MarkW
Reply to  bdgwx
January 14, 2022 7:37 am

I love how they use small Joules in order to hide the fact that the actual warming is only about 0.001C.

Which is of course at least two orders of magnitude less than their instruments are capable of measuring.
Which of course means that when combined with the lack of coverage the error bars are 3 to 4 orders of magnitude greater than the signal they claim to be seeing.

Anthony Banton
Reply to  MarkW
January 14, 2022 9:22 am

“I love how they use small Joules in order to hide the fact that the actual warming is only about 0.001C.”

And I love how you keep getting it, err, wrong ….

https://argo.ucsd.edu/data/data-faq/#deep

“The temperatures in the Argo profiles are accurate to ± 0.002°C and pressures are accurate to ± 2.4dbar. For salinity, there are two answers. The data delivered in real time are sometimes affected by sensor drift. For many floats this drift is small, and the uncorrected salinities are accurate to ± .01 psu. At a later stage, salinities are corrected by expert examination, comparing older floats with newly deployed instruments and with ship-based data. Corrections are made both for identified sensor drift and for a thermal lag error, which can result when the float ascends through a region of strong temperature gradients.”

“They just assume that the vast majority that they haven’t sampled is behaving the same as the small portion they have sampled.”

Indeed: just like anything else large that has to be measured.
The GMST FI.

Also I love how you object to measuring ocean temp in terms of energy, as the ocean vastly differs from the air in its heat content.

By illustration:
The oceans have a mass ~250x that of the atmosphere and a SH of 4x

Mass of atmosphere: 5.14 x 10^18 kg
Mass of oceans: 1.4 x 10^21 kg

IE 1000x heat capacity.

So 0.002C in the oceans corresponds to 2C if transferred to the atmosphere.
It is therefore thermodynamically correct to use.
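
The arithmetic behind that equivalence (a sketch; the specific heat of air, roughly 1000 J/(kg C), is an assumption consistent with the quoted 4x ratio):

m_atm = 5.14e18    # mass of the atmosphere, kg
m_ocean = 1.4e21   # mass of the oceans, kg
cp_atm = 1000      # assumed specific heat of air, J/(kg C)
cp_ocean = 4000    # specific heat of seawater, J/(kg C), roughly 4x that of air

ratio = (m_ocean * cp_ocean) / (m_atm * cp_atm)
print(ratio)           # ≈ 1090, i.e. roughly 1000x the heat capacity
print(0.002 * ratio)   # ≈ 2.2 C atmospheric equivalent of 0.002 C in the oceans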

Carlo, Monte
Reply to  Anthony Banton
January 14, 2022 9:32 am

The temperatures in the Argo profiles are accurate to ± 0.002°C

Total BS, not attainable outside of carefully controlled laboratory environments.

Anthony Banton
Reply to  Carlo, Monte
January 14, 2022 2:32 pm

See below

MarkW
Reply to  Anthony Banton
January 14, 2022 11:01 am

You really like to spread total bullshit. They claim that the accuracy is 0.002C, but that is complete nonsense and physically impossible.

Graemethecat
Reply to  MarkW
January 14, 2022 11:08 am

“It is better to be approximately correct than precisely wrong”.

Banton is always precisely wrong.

Anthony Banton
Reply to  Graemethecat
January 14, 2022 2:36 pm

See below.
QED the precise demonstration of the “d” word.
Bless

Anthony Banton
Reply to  MarkW
January 14, 2022 2:37 pm

See below also.
And put up some denial of the way the world is after that post.
After all it’s what you do best.
And it seems that it’s mightily important to you?
Me, I just report the science.
Odd, I know, but I reckon they know more than me and certainly more than you.

Mr.
Reply to  Anthony Banton
January 14, 2022 5:59 pm

No Anthony, you don’t report science.

You report THE science. (So called by AGW acolytes)

There is no such thing as THE SCIENCE.

Honest scientific pursuit demands constant challenge of all held positions.

Settling on a vein of dogma is not scientific pursuit.
That’s called RELIGION.

MarkW
Reply to  Anthony Banton
January 15, 2022 4:14 pm

Anthony, reporting something that is physically impossible is not science, it’s religion.
Your willingness to defend any abuse of science or statistics so long as it supports what you want to believe makes you a good acolyte. It does not make you a good scientist.

Duane
Reply to  Anthony Banton
January 14, 2022 11:27 am

Tell us how any temperature measuring device in a non-laboratory environment is precise to within two one-thousandths of 1 deg C.

No such device exists.

mrsell
Reply to  Anthony Banton
January 14, 2022 12:20 pm

The temperatures in the Argo profiles are accurate to ± 0.002°C

Study shows ARGO ocean robots uncertainty was up to 100 times larger than advertised:
https://joannenova.com.au/2015/06/study-shows-argo-ocean-robots-uncertainty-was-up-to-100-times-larger-than-advertised/

bdgwx
Reply to  mrsell
January 14, 2022 2:00 pm

That Hatfield et al. 2007 publication is good. They did not say that ARGO global average profiles were 100x higher than ±0.002 C. Like…not even close. What they said is that the RMS of the ARGO section relative to the cruise line section is 0.6 C. Note that is for the sectional temperature field. It is not for a 3D global average.

But they also determined the heat storage uncertainty, which we can use to compute the 3D global average temperature. They determined the RMS to be 29 W/m2, 12 W/m2 and 4 W/m2 for monthly, seasonal (3 months), and biannual (6 month) averages. Let’s assume the 4 W/m2 figure does not continue to drop for an annual average. That is 4 W/m2 for each 10×10 degree grid cell. There are approximately 450 such cells covering the ocean. That means the uncertainty of the global average is 4 / sqrt(450) = 0.19 W/m2. That is 0.19 W/m2 * 357e12 m2 * (365.24 * 24 * 3600) s = 2.1 ZJ. That is about 2e21 J * (1/4000) C·kg·J^-1 * (1/0.7e21) kg^-1 = 0.001 C for an annual average. So it is equivalent to saying the uncertainty is about 1/2 of what is claimed.

Now there are some caveats here. First, that 4 W/m2 figure is the average for the North Atlantic. Second, based on other statements in the publication that uncertainty is probably a bit higher globally. Third, that 0.001 C figure I calculated assumes no correlation between grid cells. Fourth and most importantly, I’m not an expert in the topic of ocean heat storage so it is possible I made a mistake in this analysis.
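
For anyone wanting to reproduce that arithmetic, here is the calculation exactly as stated (a sketch with the same caveats, including the assumption of uncorrelated grid cells):

import math

rms_cell = 4.0        # W/m2 per 10x10 degree cell, the biannual figure cited above
cells = 450           # approximate number of 10x10 degree cells covering the ocean
ocean_area = 357e12   # ocean surface area, m2
seconds_per_year = 365.24 * 24 * 3600

u_flux = rms_cell / math.sqrt(cells)                   # ≈ 0.19 W/m2, assuming uncorrelated cells
u_energy = u_flux * ocean_area * seconds_per_year      # ≈ 2.1e21 J, i.e. about 2.1 ZJ
u_temp = u_energy / (4000 * 0.7e21)                    # ≈ 0.001 C for an annual average

print(u_flux, u_energy, u_temp)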

Carlo, Monte
Reply to  bdgwx
January 14, 2022 4:19 pm

I’m not an expert

You could have just stopped here

Mr.
Reply to  bdgwx
January 14, 2022 6:16 pm

Well I just had an average day.
It was 3C when I ventured out this morning, then 6C when I went out at lunchtime, then 8C when I came home this afternoon, and at dinner time it was back down to 3C, but when I came home after a night out it was 1C.

So I’ve enjoyed an average temp of 4.2C today.

(Except I had to endure a 100% increase in temps from hair of the dog time to lunch-time tipple, then a further 33% increase to beer o’clock, then a drop of 62.5% to fine wine dinner-time, and a further 66.6% to nightcap time.

This climate change stuff is enough to drive a man to drink)

Carlo, Monte
Reply to  bdgwx
January 14, 2022 8:08 am

Oh dear! A hockey schick!

Run away!

Garboard
Reply to  bdgwx
January 14, 2022 8:34 am

Argo adjusted data

Retired_Engineer_Jim
Reply to  Garboard
January 14, 2022 8:47 am

Why is it adjusted?

Clyde Spencer
Reply to  Retired_Engineer_Jim
January 14, 2022 8:59 am

Because Karl (2014) thought that engine boiler room intake temperatures were more reliable than the built-for-purpose Argo buoys.

bdgwx
Reply to  Clyde Spencer
January 14, 2022 9:14 am

That is patently false. According to Karl et al. 2015:

“More generally, buoy data have been proven to be more accurate and reliable than ship data, with better-known instrument characteristics and automated sampling. Therefore, ERSST version 4 also considers this smaller buoy uncertainty in the reconstruction.”

I also want you to read Huang et al. 2015, as it describes the adjustments made to ERSSTv4.

It’s okay to be critical of data and methods. It is not okay to misrepresent the data and methods.

Carlo, Monte
Reply to  bdgwx
January 14, 2022 9:33 am

Just above Baton is claiming 2 mK “accuracy”.

You people are completely unbelievable.

Anthony Banton
Reply to  Carlo, Monte
January 14, 2022 2:24 pm

Actually it’s denizens who believe the “completely unbelievable” and reject anything that possibly interferes with what constitutes their belief.
It’s called cognitive dissonance.
But go ahead, it makes no difference to the way the tide is going.
It seems that belief in bollocks is rife nowadays anyhow.

And no, I don’t “claim” anything – the company who manufactures the instrument does.
If you don’t believe it (now there’s a surprise) – take it up with them.

https://www.seabird.com/technical-papers/long-term-stability-for-conductivity-and-temperature-sensors

“Summary
Two Argo CTDs, an SBE 41 and 41continuous profiler (cp), were routinely calibrated over 5+ years in the Sea-Bird Alace float facility.

  • Results indicate a very stable calibration system and low drift performance of the SBE 41 and 41cp designs.

To date, six returned Sea-Bird Argo float CTDs have been post-calibrated at the factory in the as-received condition.

  • Results indicate very low drift (< -0.003 psu and -0.002 °C) and sustained calibration accuracy for deployment periods spanning 2 – 6 years.”
Carlo, Monte
Reply to  Anthony Banton
January 14, 2022 4:20 pm

Idiot—all that says is the sensor didn’t drift much over time, there are lots and lots of other uncertainty sources.

MarkW
Reply to  Carlo, Monte
January 15, 2022 4:16 pm

Anthony doesn’t bother trying to understand what he believes. Being told that it is right is good enough for him.

MarkW
Reply to  bdgwx
January 14, 2022 11:02 am

If the Argo data was more accurate, why did they adjust the Argo data to match the ship data?

Dave Fair
Reply to  MarkW
January 14, 2022 11:54 am

To hype the computed upward trend over time as more ARGO floats came online.

Dave Andrews
Reply to  Dave Fair
January 15, 2022 9:33 am

There are about 4000 Argo floats, and they are shown on a small globe that makes it look like they cover all of the ocean in the world.

In reality, I remember Willis calculating that each one covers an area the size of Portugal, and we would all happily use a single temperature measurement somewhere in Portugal to tell us the temperature of the whole of Portugal, wouldn’t we?

bdgwx
Reply to  MarkW
January 14, 2022 11:59 am

They didn’t. See Huang et al. 2015. Though it actually would not matter which dataset received the bias correction as long as it was applied appropriately. If X is biased from Y by +B then that means Y is biased from X by -B. You can either apply -B to X or +B to Y to homogenize the two timeseries. And remember, these are anomalies, so it doesn’t matter if -B is applied to X or +B is applied to Y. It will yield the same anomaly value either way. You could also choose the mid point as the alignment anchor and apply -0.5B to X and +0.5B to Y if you want. Again…it will yield the same answer. Mathematically this is equivalent to B = X – Y which is also Y = X – B or X = Y + B or 0 = X – Y – B or 0 = (X – 0.5B) – (Y + 0.5B). What I’m saying is that mathematically it doesn’t matter how the correction factor is distributed as long as it is conserved.
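
A toy illustration of why the direction of the correction does not change the anomalies or the trend (a sketch; the temperatures are invented and the bias is taken as a constant 0.12 C):

import numpy as np

ship_era = np.array([20.3, 20.4, 20.6])   # hypothetical early ship SSTs
buoy_era = np.array([20.5, 20.6, 20.8])   # hypothetical later buoy SSTs
B = 0.12                                  # assumed constant ship-minus-buoy bias

opt1 = np.concatenate([ship_era, buoy_era + B])   # raise the buoys to the ship reference
opt2 = np.concatenate([ship_era - B, buoy_era])   # lower the ships to the buoy reference

print(opt1 - opt1.mean())   # anomaly series from option 1...
print(opt2 - opt2.mean())   # ...matches option 2, since opt2 is just opt1 shifted by -B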

bdgwx
Reply to  bdgwx
January 14, 2022 7:07 pm

bdgwx said: “They didn’t.”

Err…They did.

Duane
Reply to  bdgwx
January 14, 2022 11:29 am

“More accurate than ship data” does not equate to a precision of .002 deg C

bigoilbob
Reply to  Duane
January 15, 2022 7:54 am

The volume of data makes it possible for trends and differences to be delineated that are smaller than the error bands of every datum. We have known this for generations….

Carlo, Monte
Reply to  bigoilbob
January 15, 2022 10:56 am

Total bollocks, another jive artist doin’ the scam shuffle.

And who are “we”?

MarkW
Reply to  bigoilbob
January 15, 2022 4:18 pm

Only when you use the same instrument to measure the same thing over and over again.
Using different instruments to measure different things does not meet the requirements.

bdgwx
Reply to  MarkW
January 15, 2022 5:33 pm

No such requirement exists in the literature that I could find.

bigoilbob
Reply to  MarkW
January 15, 2022 5:34 pm

Using different instruments to measure different things does not meet the requirements.”

AGAIN, with this fact free silliness? Every “instrument” – and every instrumental measurement process – has precision, resolution, accuracy. Measurement data groups might or might not be correlated. All statistically evaluable, without regard to your meaninglessly differing measurement methodologies.

Ask any poster here who claims oilfield evaluative experience. Scores of geological or rheological parameters, from hundreds of instrumental evaluative processes, are used together to estimate reserves and recoveries and to thereby make investment decisions. Just one example – permeability – will commonly come from over a dozen different sources. Porosity, the same. And since – in the right combinations of lithology and fluid compositions – these 2 parameters correlate in known ways, that too is part of the evaluations.

Climate data is much better than ours. Snap out of it…..

Dave Fair
Reply to  bdgwx
January 14, 2022 11:52 am

NOAA adjusted all ARGO data up by 0.12 C “to be consistent with previous data.” As more and more ARGO buoys come on line, it engenders an upward bias on the temperature trend. More CliSciFi wheels within wheels.

bdgwx
Reply to  Dave Fair
January 14, 2022 12:33 pm

Can you post a link to where you see that 0.12 C figure?

Dave Fair
Reply to  bdgwx
January 14, 2022 3:46 pm

Try the January 14, 2021 WUWT “Betting Against Collapsing Ocean Ecosystems” and associated comments. Amusingly, it is based on Karl’s 2015 Science lies.

Clyde Spencer
Reply to  bdgwx
January 14, 2022 3:48 pm

Your link provides it, just before your quote. Talk about “cognitive bias!”

Carlo, Monte
Reply to  bdgwx
January 14, 2022 4:22 pm

Can you get a clue somewhere? Anywhere?

Clyde Spencer
Reply to  bdgwx
January 14, 2022 3:45 pm

bdgwx,

From your citation (Karl et al. 2015), just before your quote:

“In essence, the bias correction involved calculating the average difference between collocated buoy and ship SSTs. The average difference globally was 0.12°C, a correction that is applied to the buoy SSTs at every grid cell in ERSST version 4.”

Note that the ship temperatures were almost certainly only read to the nearest degree, because the concern was/is inadvertently feeding the boiler hot water, not keeping it within a narrow range. The in-line thermometers were probably rarely, if ever, calibrated because a precise temperature was not important. The defined ship SSTs varied with the intake depth, which varied with waves, cargo loading, and size of ship. Because the water was drawn deep enough to avoid sucking air, it was typically colder than the actual SST during the day, and warmer at night. It was then, however, heated traveling through the hot boiler room. Thus, ship SSTs had low accuracy and low precision (high variance).

It is questionable whether a global average should have been applied. That was a very ‘broad brush’ approach. However, under no circumstances should high-quality data be adjusted to agree with low-quality data! That is what Karl did though.

I did not misrepresent the data adjustment, as my quote from the paper demonstrates. However, it does appear that Karl had a motivation to discredit the hiatus through the use of unsanctioned data adjustment methods. He did so just before retiring.

You said elsewhere, “Though it actually would not matter which dataset received the bias correction as long as it was applied appropriately.” The operative word here is “appropriately.” It appears that Karl was anxious to discredit the hiatus in warming. So, raising the composite temperature by raising the modern (recent) Argo temps accomplishes that goal. That is, if the ship SSTs had been lowered instead, then it would have extended the hiatus. It does make a difference!

bdgwx
Reply to  Clyde Spencer
January 14, 2022 5:54 pm

Yeah…so ship SST measurements are high relative to buoy measurements or, said another way, buoys are low relative to ships. That makes sense to me. It also makes sense that this bias be corrected.

Carlo, Monte
Reply to  bdgwx
January 14, 2022 6:53 pm

The religion of the Holy Adjustors.

Clyde Spencer
Reply to  bdgwx
January 14, 2022 8:30 pm

It also makes since [sic] that this bias be corrected.

Yes, but it has to be corrected, as you yourself said, “appropriately.” That means, removing the bias from the low-quality ship data so that it aligns with the high-quality Argo data.

bdgwx
Reply to  Clyde Spencer
January 14, 2022 9:10 pm

CS said: “That means, removing the bias from the low-quality ship data so that it aligns with the high-quality Argo data.”

It doesn’t matter. X = Y – B is equivalent to Y = X + B. Huang et al. 2015 even discusses this and says “As expected, the global averaged SSTA trends between 1901 and 2012 (refer to Table 2) are the same whether buoy SSTs are adjusted to ship SSTs or the reverse.” The reason why they chose to adjust the buoy data is because the alternative is to adjust the ship data, which would mean those adjustments are applied to a broader set of data and for periods for which there is no matching buoy data. That seems pretty reasonable to me. Then again, I wouldn’t have cared if they did the reverse, because it doesn’t change the warming trend, which is what I’m most interested in.
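A minimal Python sketch of the algebra being quoted, with invented ship and buoy numbers (nothing here is the actual ERSST procedure): adjusting the buoys up by the collocated average difference, or the ships down by it, shifts the blended series by the same constant, so the fitted trend is identical either way.

import numpy as np

# Toy illustration (invented numbers): the blended SST trend is the same
# whether buoys are adjusted up to ships or ships are adjusted down to buoys.
rng = np.random.default_rng(0)
years = np.arange(1990, 2021)
truth = 0.02 * (years - years[0])                        # underlying warming, 0.02 C/yr

ship = truth + 0.12 + rng.normal(0, 0.05, years.size)    # ships assumed to read ~0.12 C warm
buoy = truth + rng.normal(0, 0.02, years.size)           # buoys assumed unbiased

w = np.linspace(0.0, 1.0, years.size)                    # buoys dominate the later years

def blend(ship_series, buoy_series):
    return w * buoy_series + (1 - w) * ship_series

bias = np.mean(ship - buoy)                              # collocated average difference

print(np.polyfit(years, blend(ship, buoy + bias), 1)[0]) # buoys adjusted up
print(np.polyfit(years, blend(ship - bias, buoy), 1)[0]) # ships adjusted down: same slope
print(np.polyfit(years, blend(ship, buoy), 1)[0])        # no adjustment: spuriously low slope

The point is only about the algebra of a constant offset; it says nothing about whether 0.12 C was the right number.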

Jim Gorman
Reply to  bdgwx
January 15, 2022 7:49 am

Do you understand “high quality data” and “low quality data”? Why would you change high quality data?

In fact, according to your math, why adjust anyway since the same trend remains?

There is only one reason to lower the temps from high quality instruments and that is to make warming look worse.

I’ll say it again, adjusting data is wrong. It is CREATING new data. You try to justify it by saying the ship data was wrong, but you have no physical evidence showing that is true. What you should admit is that ship data is measuring something different from buoy data. That should make it unfit for purpose for what you are doing, and it should then be discarded.

bigoilbob
Reply to  bdgwx
January 15, 2022 7:56 am

I agree with Clyde on what should have been adjusted. I agree with you on the fact that it really makes no difference.

bdgwx
Reply to  bigoilbob
January 15, 2022 5:27 pm

JG said: “Do you understand “high quality data” and “low quality data”? Why would you change high quality data?”

The decision was made to adjust the buoy data on the basis that any correction should be applied during an overlap period. The correction is determined by comparing ship obs with buoy obs so that the magnitude of the correction can be more accurately quantified. The alternative is to adjust the ship obs during a period in which there are no buoy obs, which means you have to apply a constant correction value. That’s fine if the correction value is truly constant, but according to Huang et al. 2015 that does not seem to be the case. That is, the correction value evolves with time. The 0.12 figure being thrown around is only the average bias.

JG said: “In fact, according to your math, why adjust anyway since the same trend remains?”

The options are:

1) Adjust ship obs to be consistent with buoy obs.

2) Adjust buoy obs to be consistent with ship obs.

3) Ignore the bias.

1 and 2 are equivalent, such that the trend remains the same. 3 is not equivalent. The trend could be (and, per Huang et al. 2015, is) different.

JG said: “There is only one reason to lower the temps from high quality instruments and that is to make warming look worse.”

Lowering temps from higher-quality instruments, which are more numerous later in the period, lowers the warming trend. But they didn’t lower the higher-quality obs. They raised them. I think the confusion here is caused by the reverse language in Karl 2015 vs. Huang 2015. Karl said buoy minus ship is -0.12 whereas Huang said ship minus buoy is +0.12. They are equivalent statements.
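To illustrate why “the 0.12 figure … is only the average bias” matters, here is a toy Python sketch with an invented, drifting ship bias (none of these numbers come from Huang et al. 2015): where ship and buoy observations overlap, a year-by-year offset can be computed directly, whereas a single constant correction leaves a residual drift.

import numpy as np

# Toy sketch (invented numbers): constant vs. year-by-year ship-minus-buoy offset.
rng = np.random.default_rng(1)
years = np.arange(1995, 2021)
truth = 0.015 * (years - years[0])

ship_bias = 0.08 + 0.004 * (years - years[0])            # suppose the warm bias drifts
ship = truth + ship_bias + rng.normal(0, 0.04, years.size)
buoy = truth + rng.normal(0, 0.02, years.size)

per_year_offset = ship - buoy                            # collocated difference each year
constant_offset = per_year_offset.mean()                 # a single "0.12 C"-style average

buoy_adj_constant = buoy + constant_offset               # leaves a residual drift vs. ships

print("average offset:", round(float(constant_offset), 3))
print("residual drift (C/yr) left by the constant offset:",
      np.polyfit(years, buoy_adj_constant - ship, 1)[0])

In practice the overlap-period offset would be estimated from many collocated pairs and smoothed, but the contrast between a constant correction and an evolving one is the same.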

bdgwx
Reply to  bigoilbob
January 15, 2022 5:30 pm

I agree with Clyde in principle. The problem is the technical details. By applying the corrections to the ship obs you have to assume the correction value is constant, since there are no buoy obs to compare them to prior to 1980. But by applying the corrections to the buoy obs you have a full overlap period that allows you to compute a correction value applicable for the time. The 0.12 correction value is only the average that Huang 2015 used.

bigoilbob
Reply to  bdgwx
January 15, 2022 6:17 pm

Thx. This makes it much easier to understand.

Dave Fair
Reply to  bdgwx
January 15, 2022 10:27 am

As I explained earlier, as time goes on the upward-adjusted ARGO numbers become more dominant in the data set. You get an artificially induced warming trend that increases over time. Your nonsense math is just that; nonsense.

bdgwx
Reply to  Dave Fair
January 15, 2022 2:19 pm

Huang et al. 2015 said it doesn’t matter.

Clyde Spencer
Reply to  bdgwx
January 15, 2022 5:52 pm

Would Huang et al. have any reason to defend the end of the hiatus and a resumption in warming?

bdgwx
Reply to  Clyde Spencer
January 15, 2022 8:11 pm

I don’t know.

bigoilbob
Reply to  bdgwx
January 15, 2022 7:50 am

When Clyde is blatantly caught out he often concedes that point and then deflects. He’s not even doing that now.

bdgwx
Reply to  Retired_Engineer_Jim
January 14, 2022 9:19 am

The biggest issues are bucket and engine intake measurements. Bucket measurements are biased low. Engine intake measurements are biased high. The bucket measurements prior to WWII were particularly contaminated with bias. This is the primary reason why the net effect of all adjustments to the global surface temperature record actually pulls the overall warming trend down. Yes, that is right…down. Despite the myth that never dies, the unadjusted warming rate is higher and the adjusted rate is lower. See Huang et al. 2015 for details.

Jim Gorman
Reply to  bdgwx
January 14, 2022 4:15 pm

If the data is contaminated it should be discarded as not fit for purpose. Trying to adjust it is simply making up data to replace existing data. If one is absolutely determined to use the data, then very broad uncertainty limits should be applied and propagated throughout the calculations. Sorry that this would ruin projections, but that is life.

You will never convince anyone that “adjusting” the data due to unknown, and incalculable biases and errors is going to end up with accurate data. It should be discarded.

You obviously consider yourself to be an accomplished mathematician, but you have also illustrated that you have no idea how to treat measurements in the real world, and that fiddling with measurement data will cost you your job.

A real scientist would simply say we don’t have ocean data that is fit for purpose for determining a global temperature before ARGO and maybe even not then.

Derg
Reply to  bdgwx
January 14, 2022 10:57 am

Just released…extra extra read all about it.

Dave Andrews
Reply to  bdgwx
January 15, 2022 8:58 am

And note they measure in zettajoules so that the red part looks much scarier!

Dave Andrews
Reply to  Dave Andrews
January 15, 2022 9:00 am

That was meant to be a reply to Duane

bdgwx
Reply to  Dave Andrews
January 15, 2022 5:03 pm

It doesn’t look scary to me.

George Daddis
January 14, 2022 7:17 am

Even if it were warming, why is that warmth “existential”?
Logic and historical records show warming is beneficial to man.

In geological time, the earth has experienced much higher temperatures and of course never experienced a “tipping point” (the only way the fear of increasing temperatures can be justified in my opinion). And in future geological time the globe will certainly start cooling which may be “existential” for Chicago and NYC.

Dan Sudlik
Reply to  George Daddis
January 14, 2022 7:37 am

How dare you not want to go back to the conditions of the Little Ice Age, when people lived a “glorious” (if somewhat short) life. I’m sure Mikey Mann would enjoy that type of life for us, but not for him and his guys, of course. (Do I need sarc?)

MarkW
Reply to  George Daddis
January 14, 2022 7:38 am

Apparently civilization is so delicate that temperature changes of only a few hundredths of a degree are going to cause it to collapse.

Anthony Banton
Reply to  MarkW
January 14, 2022 2:31 pm

Would you like to extrapolate the 0.002C OHC increase (which is 2C were that heat applied and retained in the atmosphere) out a few decades, Mr Mark?

Or is that too sensible a concept in terms of a slow trend for you to grasp?
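For what that parenthetical equivalence amounts to, a rough back-of-envelope check in Python using approximate textbook values for the masses and specific heats of the ocean and atmosphere (order-of-magnitude only):

# Rough check of "0.002 C across the whole ocean is ~2 C if that heat sat in the atmosphere".
ocean_mass = 1.4e21        # kg, approximate
ocean_cp   = 3990.0        # J/(kg K), seawater, approximate
atmos_mass = 5.15e18       # kg, approximate
atmos_cp   = 1004.0        # J/(kg K), dry air at constant pressure

energy = ocean_mass * ocean_cp * 0.002                   # ~1.1e22 J
delta_t_atmos = energy / (atmos_mass * atmos_cp)         # ~2.2 K
print(f"{energy:.2e} J is roughly {delta_t_atmos:.1f} C of atmospheric warming")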

Jim Gorman
Reply to  Anthony Banton
January 14, 2022 2:56 pm

From Merriam-Webster, the definition of “extrapolate” (transitive verb):

1a: to predict by projecting past experience or known data (“extrapolate public sentiment on one issue from known public reaction on others”)

1b: to project, extend, or expand (known data or experience) into an area not known or experienced so as to arrive at a usually conjectural knowledge of the unknown area (“extrapolates present trends to construct an image of the future”)

2: to infer (values of a variable in an unobserved interval) from values within an already observed interval

You know a lot of us have made our living by making PREDICTIONS of what was going to happen in the next year, two years, or even five years. Real things like budgets, usage patterns, growth, revenue, people, etc. Things you were held accountable for.

If you made up rosy projections by messing with regressions, you were putting your job on the line. You learned quickly that error limits were your friend. Somehow this has never made it into academia or climate science. These folks believe they can increase measured precision by averaging, they make projections with error limits that are obviously not correct, and they have no mathematics to back up their “pseudo-science” predictions of the future. They continually show graphs of how the log of CO2 and temperature correlate and say, “See, science!” “We can now accurately predict extinction-level heat if we don’t stop our energy use.”

If I projected that the maintenance costs for the next 200,000 miles on your auto would be very small, based on your past costs, would you believe me?
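As a generic illustration of the “error limits were your friend” point (not anyone’s specific method, and with invented numbers), here is a short Python sketch of an ordinary least-squares fit with a textbook prediction interval; the interval widens quickly once the projection moves outside the observed data.

import numpy as np
from scipy import stats

# Invented series: fit a line and compute a 95% prediction interval for future points.
rng = np.random.default_rng(2)
x = np.arange(2010, 2022, dtype=float)
y = 0.5 + 0.02 * (x - 2010) + rng.normal(0, 0.1, x.size)

n = x.size
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
s = np.sqrt(np.sum(resid**2) / (n - 2))                  # residual standard error
sxx = np.sum((x - x.mean())**2)

def prediction_interval(x0, conf=0.95):
    t = stats.t.ppf(0.5 + conf / 2, df=n - 2)
    se = s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / sxx)
    mid = intercept + slope * x0
    return mid - t * se, mid + t * se

print(prediction_interval(2023))   # near the data: relatively tight
print(prediction_interval(2050))   # far extrapolation: much wider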

Carlo, Monte
Reply to  Jim Gorman
January 14, 2022 4:25 pm

But hey! bzx has on-line calculators that agree with his built-in biases!

LdB
Reply to  Anthony Banton
January 14, 2022 5:09 pm

I thought we were going to 2 degrees regardless of what we do because we aren’t all joining the COP net-zero party?

You aren’t going to frighten anyone with that number because we have all accepted it; you will have to come up with a really scary number. Now off you go … we need a really, really big number to frighten us.

angech
Reply to  Anthony Banton
January 14, 2022 6:46 pm

Would you like to extrapolate 0.002C OHC increase, (which is 2C were that heat applied and retained in the atmosphere) out a few decades

Why not ?
OTOH why a few measly decades
Try 500 years for a 1C hotter ocean overall.
According to your theory the atmosphere is now at 15C plus 1000 C
Unbelievable but true Ripley et al

MarkW
Reply to  Anthony Banton
January 15, 2022 4:22 pm

1) The 0.002C increase does not exist.
2) 0.002C over 20 years or so extrapolated out over 100 years only comes to 0.01C. Still a big nothing.
3) Even if it did exist, since it wasn’t caused by CO2, there’s no guarantee that the trend will continue.

Robertvd
January 14, 2022 7:18 am

Could hippos live in the Thames all year long like 130 ky ago ?

HenryP
January 14, 2022 7:24 am

‘Understanding how the planet is changing – and how rapidly that change occurs – is crucial for humanity to prepare for and adapt to a warmer world.’

That understanding is THE problem. It is not the CO2 that is doing it…..It is the greening by the CO2 that is doing it.

https://breadonthewater.co.za/2022/01/10/global-warming-due-to-ehhh-global-greening/

Derg
Reply to  HenryP
January 14, 2022 10:58 am

Rapidly …yawn

Duane
January 14, 2022 7:26 am

And warming is a problem because???

Warming is good, cooling is bad.

They’ve got it entirely backwards.

jeffery p
Reply to  Duane
January 14, 2022 8:19 am

It all hinges on some meritless idea of a tipping point that only exists in computer models. Rather than validating whether the models correctly model the actual climate, the model outputs are taken as facts. Models do not output facts or data. It’s all pure conjecture and it’s politically motivated at heart.

I will concede that for many, funding is the issue. Models and “studies” that seek to “prove” global warming and its effects get funded.

Retired_Engineer_Jim
Reply to  jeffery p
January 14, 2022 8:49 am

Not only is there no interest in verifying or validating their models, there are numerous parameterized parts to the models, and those parameterizations are tuned to get the right answer.

Duane
Reply to  jeffery p
January 14, 2022 9:04 am

Actually, the entire argument over computer models is rather fruitless, and is basically just an unresolvable “he said/she said” argument that will never end.

I prefer to challenge the warmunists on their most fundamental claim, regardless of the effects of CO2 concentration in the lower atmosphere:

That being: warming is GOOD, and cooling is BAD. If we are warming, that is fantastic for humanity and nearly all of the biosphere of the planet. It has always been thus, which is why today, in an era that is indisputably warmer than at the end of the Little Ice Age in 1850, humans are enjoying the best health, mortality, standard of living, lifespan, population density, and quality of life of any time in the history of the human species.

And during cold eras, such as the Little Ice Age, and other provably cool eras in human history such as the last glaciation ending 16 thousand years ago, humans suffered vast starvation, disease pandemics, short lifespans, low standards of living, low quality of life, with most humans spending their short, miserable lives freezing in the dark.

Pat from Kerbob
Reply to  Duane
January 14, 2022 9:47 am

Not only that, cooling is racist, imperialist, colonialist: it is indisputable that the era of European imperialism coincided with the Little Ice Age and the outflow of people to the tropics. It’s why the French were happy to trade all of Lower Canada (Quebec) for a tiny island in the Caribbean where they could grow sugar. Europeans died in droves from tropical diseases to which they had no immunity, and yet they kept coming to Central and South America, Africa, Asia and India, because it was WARM.

And based on the rhetoric of climate scientology, where CO2 controls temp, saying we need to return to pre-industrial CO2 means returning to those temperatures, the worst period in recent human history.

Duane
Reply to  Pat from Kerbob
January 14, 2022 11:41 am

I read a book several years ago, I forget the title now, but its thesis was that Europeans had a really difficult time colonizing North America in the late 16th/early 17th century because of the effects of the Little Ice Age cooling, which caused early attempts at farming to fail, and which also spurred attempts to colonize because of massive crop failures and resultant mass starvation in Europe at that time.

A secondary cause of early colonial failures in eastern North America, such as Roanoke, and the “starving time” in 1608-1609 in Jamestown, was the Europeans’ failure to understand the effects of warm maritime climates generated by westerly winds over the Gulf Stream, such as enjoyed in Western Europe, vs. the cold Continental climate in eastern North America. Europeans assumed that only latitude controlled climate, such that with Virginia colony being far south of London and Paris it should have far milder weather and better crop growing conditions. They learned to their horror that such was not so.

Pat from Kerbob
Reply to  Duane
January 14, 2022 12:56 pm

The planet has a habit of bitch slapping people who make assumptions.

Clyde Spencer
Reply to  jeffery p
January 14, 2022 8:35 pm

It all hinges on some meritless idea of a tipping point that only exists in computer models.

Like an untrapped ‘divide by zero’ error.

MarkW
Reply to  jeffery p
January 15, 2022 4:27 pm

Most of the Holocene Optimum was at least 3 or 4C warmer than today, and these mythical tipping points did not kick in. The claim that a few tenths of a degree of warming is going to cause the planet to hit a tipping point completely fails the laugh test.

bdgwx
Reply to  MarkW
January 15, 2022 4:56 pm

Can you post the global temperature reconstruction you are referring to when you make the claim that the Holocene Optimum was at least 3 or 4C warmer than today?

MarkW
January 14, 2022 7:30 am

A few hundredths of a degree warmer than the coldest period in the last 100 years is an existential threat?
Repeats of storms that have been happening for as far back as we have records is an existential threat?
Temperatures that are still 2 or 3C cooler than the average for the last 10 to 15K years is an existential threat?
Are these guys really this desperate?

JOHN CHISM
January 14, 2022 7:43 am

So… When few people had thermometers – or even kept time records of readings – in the horse and wagon, steam locomotive, sailing ships and steamships of the 1880s – leaving vast expanses without any thermometers – at the end of “The Little Ice Age” the coldest period in our Holocene Interglacial…”The Earth is Warming” and it is detrimental to the existence of all life on Earth?

I am so tired of this narrative that warming equals anything bad happening. Only people who are ignorant of history believe that a warming climate is detrimental. Climate has changed from the beginning of Earth. Anything that cannot adapt to the climate… dies. Climate changed long before humans existed.

Carlo, Monte
January 14, 2022 8:03 am

Quick, send in the Holy Trenders, they are needed PDQ.

Mickey Reno
Reply to  Carlo, Monte
January 14, 2022 9:13 am

The Holy Trenders of Antioch? NO, anyone but them!

If Gavin Schmidt told me water was wet, I’d tell him to stick his head in a bucket full of it to prove it.

Carlo, Monte
Reply to  Mickey Reno
January 14, 2022 10:17 am

How long can he hold his breath?

Bellman
Reply to  Carlo, Monte
January 14, 2022 12:57 pm

Holy Tenders? Do you mean Lord Monckton?

I’ll guess that he would say: the new pause starts in December 2014, making it exactly 7 years and 1 month old, and from that date the trend is -0.025°C / decade. Please don’t mention uncertainties in the trend, or whether this is significantly different from the previous trend.
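For reference, the kind of calculation behind a “-0.025°C / decade since December 2014” figure is just an ordinary least-squares slope over monthly anomalies, converted from per-month to per-decade. A minimal Python sketch (the anomaly values below are placeholders, not actual UAH data):

import numpy as np

anoms = np.array([0.30, 0.19, 0.24, 0.18, 0.28, 0.21])   # placeholder monthly anomalies
months = np.arange(anoms.size)

slope_per_month = np.polyfit(months, anoms, 1)[0]
print(f"{slope_per_month * 12 * 10:+.3f} C/decade")       # per-month slope scaled to a decade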

Carlo, Monte
Reply to  Bellman
January 14, 2022 4:28 pm

<yawn>

Like ants to sugar, they swarm right in…

Bellman
Reply to  Carlo, Monte
January 14, 2022 5:01 pm

Glad to be of service.

MarkW
Reply to  Bellman
January 15, 2022 4:30 pm

I guess there is a first time for everything.

Richard M
Reply to  Bellman
January 15, 2022 8:22 am

The PDO went positive around 2014 leading to reduced cloud thickness and hence more solar energy reaching the surface. No other natural changes since then and hence no warming. Meanwhile, CO2/CH4 have continued to rise.

The same was true prior to 2014. A full 17 years with no warming.

MarkW
Reply to  Bellman
January 15, 2022 4:30 pm

I see the fact that Lord Monckton has time and again cleaned your clock is still causing you no end of emotional pain.

Bellman
Reply to  MarkW
January 15, 2022 4:59 pm

Cleaned my what? Oh, clock. Apparently it means to punch someone in the face and utterly defeat them. Well, if calling me Bellhop and telling me to stop whining is what passes for defeating someone, I can’t argue with that.

Carlo, Monte
Reply to  Bellman
January 15, 2022 6:40 pm

LastWordBellman

Anthony Banton
Reply to  Bellman
January 16, 2022 12:53 am

Yep

Anthony Banton
Reply to  MarkW
January 16, 2022 12:51 am

Mr Mark:
M’lord will always (in your terms) “clean your clock” because he never answers any criticism with a straight answer, and when anyone persists he resorts to politically riddled ad hominem (been there several times).
If that is the sort of advocate you feel the need to worship then it says a lot about you.

The man is impossible to deal with on an integrity-based level.
As such he is rightly ignored by all but those desperate to have their bias bolstered and ignore anything that may disturb it.
That is (largely) those here who lap up his latest snake-oil-selling recipe.

But then again, we know how that goes with the QAnon mob and other conspiracy ideation that requires bizarre down-the-rabbit-hole thinking to keep the dissonance going.
I am well aware that any “push-back” from people such as me who present the true science here will only serve to increase your push-back.
It’s the psychopathy at play.
It doesn’t matter – it’s just a blog – a place for you to wear your ignorance proudly to the cheering crowd and spit out your hatred towards those who wish to shatter your worldview.
And it is why the US is in a very dangerous place currently (democratically).
Have a nice day and continue the good fight; after all, your opponents are just commies committing a fraud (or else they don’t know basic physics) LOL

Carlo, Monte
Reply to  Anthony Banton
January 16, 2022 6:59 am

Baton digs deep, and pulls out … “QAnon”? WTH?

a place for you to wear your ignorance

The irony from the Holy Trenders is thick.

Pauleta
January 14, 2022 8:04 am

Global Warming, aka Climate Change, is the last refuge of the inept, the incompetent and the corrupt. From scientists, politicians, civil servants, NGOs and businesses.

When you cannot plan, project, prepare and get things correctly you run to blame the most powerful molecule in the universe.

RevJay4
January 14, 2022 8:05 am

NASA and NOAA equals government funded equals whatever the government wants to show from whatever data they want used to display it. In other words, bunkus wunkus, BS, propaganda, et al.

jeffery p
January 14, 2022 8:08 am

Eight of the top 10 warmest years on our planet occurred in the last decade…” The planet is over 4 billion years old. That statement is completely false. What a sorry state we’re in when a NASA official spouts such garbage.

Terry
January 14, 2022 8:17 am

They are comparing the temps to those out of the end of the Little Ice Age. We should be thankful it is warmer.

Slowroll
January 14, 2022 8:21 am

This is another of H.L. Mencken’s imaginary hobgoblins from which we must be led to safety…

Garboard
January 14, 2022 8:30 am

Not warmer high temps, but warmer nighttime low temps and winter low temps? UHI?

griff
January 14, 2022 8:30 am

Australia matched its hottest ever record yesterday.

A year of new heat records globally in 2021 continues into 2022.

MarkW
Reply to  griff
January 14, 2022 8:52 am

I see griff is still confusing weather for climate.

Given how short the weather records are, and given that there are tens of millions of places making measurements, the fact that there are thousands of high temperature records is not in the tiniest bit surprising.
One thing that griff always overlooks when he’s hyperventilating about record highs is that there are always as many, if not more, record cold temperatures.

Duane
Reply to  MarkW
January 14, 2022 11:45 am

It’s always 5 o’clock somewhere … and it’s also always above average somewhere in the world at the same time it’s always colder than average somewhere else.

Bill
Reply to  Duane
January 15, 2022 11:02 am

And don’t forget…half of everybody is below average in intelligence!

Clyde Spencer
Reply to  griff
January 14, 2022 9:06 am

Can you say A.N.E.C.D.O.T.E.?

2021 was cooler than 2020. In fact, it was cooler than the preceding 6 years! Get your facts straight.

https://scitechdaily.com/2021-continued-earths-warming-trend-the-past-8-years-have-been-the-warmest-in-the-global-record/

You are welcome to your own irrational opinions, but you don’t get to make up your own facts!

Reply to  griff
January 14, 2022 9:31 am

Griff, the link below is to Tony Heller showing a 53.1 in 1889, along with other readings higher than, or equal to, yesterday’s figure. Go to the 2:45 mark for a re-brief.
https://youtu.be/fsy0ysTDRRc

Pat from Kerbob
Reply to  griff
January 14, 2022 9:33 am

Griff, even the Adjustment Bureau could not make this the hottest year ever for the earth, so despite some hot temps here and there, like our western Canada “heat dome” you love to mention, the earth cooled even according to your scientology shamans. Without adjustments it’s probably a lot lower than that too.
So it was obviously colder in more of the world than it was hot.

But as we now know from new settled science, that is also due to CO2 right?

Back to you

Redge
Reply to  griff
January 14, 2022 11:08 am

Australia matched its hottest ever record yesterday.

So you’re saying yesterday’s temperature was not unprecedented, mate?

Welcome to the real world

BruceC
Reply to  griff
January 14, 2022 8:38 pm

Griff, go do some research on the January 1896 Australian coast-to-coast heat-wave (which lasted the entire month of Jan.). Many temperatures of +51C were recorded by official persons using official equipment during this period.

51.7C – Geraldton, WA
51.1C – New Angledool, QLD
50.5C – Camden, NSW

Later in 1896, heat waves also occurred in India, Burma, Borneo, America, England, Germany and Spain.

LdB
Reply to  griff
January 14, 2022 10:44 pm

We had a cyclone that stopped the built-up heat from the wet season being released; it happens. You don’t see any of us running around claiming the sky is falling.

Perhaps ask those who live here rather than playing guess-a-mole from your UK housing estate.

Hoyt Clagwell
Reply to  griff
January 14, 2022 11:15 pm

Griff, just let me know when any place on Earth exceeds 134°F. That is something I will be interested in.

Richard M
Reply to  griff
January 15, 2022 8:25 am

Sorry griff, I asked you before what was the effect of reduced cloud thickness and you never answered. Why is that? Let’s see, how many years has it been since the PDO shift caused this cloud change? Oh right, that was between 2013-15. Almost 8 years.

Dave Andrews
Reply to  griff
January 15, 2022 9:57 am

You neglected to mention that the previous matched record occurred in January 1962 that is 62 years ago.

Just like you did awhile back with the hottest New Years Day in the UK when the previous record had been set in 1916, 106 years ago.

Dave Andrews
Reply to  Dave Andrews
January 16, 2022 6:56 am

Dang! 60 years ago not 62

Bruce Cobb
January 14, 2022 8:30 am

NASA uses the period from 1951-1980 as a baseline to see how global temperature changes over time.

How convenient. Now they can make cherry pie.

Dave Fair
Reply to  Bruce Cobb
January 14, 2022 12:04 pm

Take the coldest period since the Little Ice Age and use it as a baseline to scare the ignorant. This is your government lying to you. If you fail to understand the implications of that fact, you (not the rest of us) deserve what you get. Let’s Go Brandon!

guest
January 14, 2022 8:53 am

I pointed out on JPL’s Facebook page that the claim is scientifically meaningless and intentionally misleading. First, to make a distinction between these years, the global average temperature has to be stated to a precision of a hundredth of a degree; the measurements themselves are made mostly to a precision of a tenth of a degree. Second is the error introduced by the estimation of temperatures for the overwhelming majority of surface locations that are unmeasured. Finally, the baseline temperature period was 1951-1980, which was a well-known period of cooling. So much so that climate scientists warned of a returning Ice Age.

This is just an attempt to use sensationalistic headlines to gain funding. Perhaps this is why no climate scientists call out this annual charade.

Schrodinger's Cat
January 14, 2022 9:09 am

The Little Ice Age ended around 1870 just as this recordkeeping started. As we gradually warmed up from the LIA it is not surprising that as the years passed by, more and more new records were set. Since we have only recovered about 1.1 degrees Celsius, there may be some more to go.

Pat from kerbob
Reply to  Schrodinger's Cat
January 15, 2022 10:53 am

I certainly hope so

And anyone demanding we return to those temperatures should be locked in a hole for espousing intergenerational crimes against humanity.

Clyde Spencer
January 14, 2022 9:10 am

2021 Tied for 6th Warmest Year in Continued Trend, …

The headline is seriously misleading! The 2021 temperature anomaly is significantly lower than the preceding two years; it is tied for the lowest in the last six years. It is tied with 2018, which was significantly lower than the 2016 El Nino high. I wouldn’t call that continuing the warming trend.

How low would it have had to drop before NASA Earth Observatory would acknowledge that the upward trend was NOT continuing? If it had reached the 2014 level, would they have said it was tied for the 7th highest temperature? They are really being disingenuous!

2021 was a La Niña year, …

However, it didn’t even make the top 24 list!

The long-term global warming trend is largely due to human activities that have increased emissions of carbon dioxide and other greenhouse gases into the atmosphere.

Yet, 2020 appears tied with 2016 (an El Nino year) for the warmest ever. 2020 was a year that saw reductions in anthropogenic CO2 of 7-10% for the full year, and a decline of more than 18% during May, when the pandemic shut everything down. That meant reductions in ALL so-called greenhouse gases, not just CO2. The declines in methane, nitrous oxides, and ozone were measured. The estimated declines in CO2 are not measurable.

Warming may well come back this year or next. However, NASA has had to twist themselves into a logical knot to try to support the meme that anthropogenic CO2 is causing a “continuing” increase in global temperatures. It speaks of desperation.

Pat from Kerbob
Reply to  Clyde Spencer
January 14, 2022 9:29 am

I pointed out to some clown elsewhere that saying 2020 is tied with 2016 is the same as saying there has been no warming in the intervening 5 years, and now it’s dropping further even though CO2 rise continues.
For that I get “deniers don’t understand anything”.

I’m just pointing out the obvious

Carlo, Monte
Reply to  Clyde Spencer
January 14, 2022 9:37 am

In the Church of Climastrology, nothing is more important than the Holy Trends.

Doonman
January 14, 2022 9:32 am

NASA uses the period from 1951-1980 as a baseline to see how global temperature changes over time.

So NASA uses the coldest period since reliable records began as a baseline to see how global temperatures change over time.

Now, in order to do proper science, NASA should also tell us why it was so cold then compared to the earlier 20th century and how much of any warming recorded since is natural.

bdgwx
Reply to  Doonman
January 14, 2022 11:12 am

The baseline does not matter. You get the same amount of warming regardless of which baseline is chosen, whether it be 1851-1880, 1951-1980, 1981-2010, 1991-2020, or whatever else you may see. You can change the baseline of any dataset by subtracting off the average over whatever new baseline period you want to see. In fact, that is the method I often use to normalize multiple datasets to the same baseline for comparison. Try it out.
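A minimal Python sketch of that re-baselining step, with an invented anomaly series: subtracting the mean over a new baseline period shifts every value by the same constant, so the amount of warming (the trend) is unchanged.

import numpy as np

# Invented anomaly series on a 1951-1980 style baseline.
years = np.arange(1951, 2022)
anoms = 0.018 * (years - 1980) + np.random.default_rng(3).normal(0, 0.1, years.size)

def rebaseline(years, anoms, start, end):
    mask = (years >= start) & (years <= end)
    return anoms - anoms[mask].mean()       # shift so the new baseline period averages zero

anoms_9120 = rebaseline(years, anoms, 1991, 2020)

print(np.polyfit(years, anoms, 1)[0])       # same slope...
print(np.polyfit(years, anoms_9120, 1)[0])  # ...only the offset changes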

Pat from Kerbob
Reply to  bdgwx
January 14, 2022 12:52 pm

How warm was it compared to 1021?

bdgwx
Reply to  Pat from Kerbob
January 14, 2022 2:05 pm

GISTEMP only goes back to 1880. You’ll have to use one of the global temperature reconstructions like Osman et al. 2021, Marcott et al. 2013, or Kaufmann et al. 2020 to determine that. Based on those publications it is about 1 C. Unfortunately the temperature proxy reconstructions do not have sufficient resolution to delineate annual means like we can with the instrumental temperature record, so that 1 C difference is going to be at best on decadal timescales.

Pat from kerbob
Reply to  bdgwx
January 15, 2022 10:51 am

Love a good proxy study

Even though it was clearly warmer 1000 years ago, you can make that elephant’s trunk wiggle and even tie itself in a knot

Lots of great reading out there; “A Cultural History of Climate Change” was quite good, with endless hard physical evidence from all over the world showing it was indisputably warmer back then.

You stick with the Scientology hockey sticks, I’ll stick with physical evidence.

It’s why you’ll lose in the end

bdgwx
Reply to  Pat from kerbob
January 15, 2022 2:09 pm

Can you post a link to global temperature reconstruction showing that it was warmer 1000 years ago?

Doonman
Reply to  bdgwx
January 14, 2022 1:18 pm

Except it’s cooler in 2021. But we now know that warming causes cooling too, that’s why we call cooler temp anomalies for a given year warming, to normalize the baseline narrative.

DMacKenzie
January 14, 2022 9:35 am

So does 6th warmest out of the last decade also make it tied for the 5th coldest ?….just to show how meaningless such statements really are… /s

Adam Gallon
January 14, 2022 9:54 am

And according to the European satellite, it’s the 5th warmest & 1.1-1.2 degrees !

Tom.1
January 14, 2022 10:02 am

Leo Smith
January 14, 2022 10:16 am

6th warmest year.
So I guess it’s actually cooling now, since the warmest year?

ResourceGuy
Reply to  Leo Smith
January 14, 2022 11:39 am

That word can get you banned in the new overreach Administration. You must adapt like they are doing in the UK, with phrases like the “warming hole” in the North Atlantic.

Clyde Spencer
Reply to  Leo Smith
January 14, 2022 3:56 pm

Yes, if 2021 had been the warmest year ‘evah,’ they would have been all over it like a horned toad at an ant hill and not pussyfooted around with “6th warmest year.” Based on Monckton’s work, the claimed “continued warming” hasn’t been around for several years, depending on how it is defined.

Bob Clark
January 14, 2022 12:13 pm

1951-1980 as the norm inflates the degree of warming, I should think. 1951-1980 was a period of rather cooler temps in modern times; as, if you remember, the talk at the time was of global COOLING.

Peta of Newark
January 14, 2022 12:40 pm

Isn’t it just The Craziest Thing (OK, apart from Jojo Brandon’s speeches) how deserts have such high temperatures yet are exceedingly cold places.

A warming troposphere means a cooling Earth – that energy can not return whence it came – Entropy says as much.

We all should be immensely thankful that the gases making up the troposphere have such low heat capacities and thermal conductivities.

Alex
January 14, 2022 12:58 pm

Half full or half empty?
 
It’s been recognized here and elsewhere that global temperature has not increased for nearly a decade. Why then should one be surprised that the 2021 average is about the same as prior years?
 
The latest “pause” is a repeat of behavior between earlier El Ninos. Remove the jumps associated with those El Ninos and Voila! Global warming disappears.
 
https://rclutz.com/2022/01/12/uah-confirms-global-warming-gone-end-of-2021/
 
It’s safe to conclude that El Ninos are not caused by increasing CO2.

Bellman
January 14, 2022 1:23 pm

In case anyone’s interested, or even if they are not, here’s the top ten warmest years in the GISTEMP data set, recalculated to the 1991-2020 base period which UAH uses.

  1 2020  0.41 
  2 2016  0.40 
  3 2019  0.37 
  4 2017  0.31 
  5 2015  0.29 
 =6 2018  0.24 
 =6 2021  0.24 
  8 2014  0.13 
  9 2010  0.11 
=10 2013  0.06 
=10 2005  0.06

For comparison here’s the same for UAH

  1 2016  0.39 
  2 2020  0.36 
  3 1998  0.35 
  4 2019  0.30 
  5 2017  0.26 
  6 2010  0.19 
 =7 2015  0.14 
 =7 2021  0.13 
  9 2018  0.09 
 10 2002  0.08 

Note, I’ve rounded these to the nearest 0.01°C, but this is misleading in comparing 2015 and 2021. They only differ by 0.001°C, hence I’ve marked them as being equal above.
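For anyone wanting to reproduce that sort of table, a short Python sketch of the ranking step: round the annual anomalies to 0.01°C and flag ties with “=”. The values below are placeholders, not the actual GISTEMP or UAH numbers.

# Placeholder annual anomalies (C) after re-baselining; not real dataset values.
annual = {2020: 0.412, 2016: 0.398, 2019: 0.367, 2018: 0.239, 2021: 0.243}

rows = sorted(annual.items(), key=lambda kv: -round(kv[1], 2))
rank, prev = 0, None
for i, (year, val) in enumerate(rows, start=1):
    v = round(val, 2)
    if v != prev:
        rank, prev = i, v                    # standard "competition" ranking
    tie = "=" if sum(1 for _, x in rows if round(x, 2) == v) > 1 else " "
    print(f"{tie}{rank:>2} {year}  {v:.2f}")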

Clyde Spencer
Reply to  Bellman
January 14, 2022 4:01 pm

And 2020 was the year that anthro’ CO2 emissions declined, and there is no discernible difference in the Fall-Winter ramp-up phase from 2019. That is not a compelling argument for “This warming trend around the globe is due to human activities that have increased emissions of carbon dioxide and other greenhouse gases into the atmosphere.”

Bellman
Reply to  Clyde Spencer
January 14, 2022 4:59 pm

What a weird set of non sequiturs. Human emissions fell by about 5% in 2020, which resulted in a tiny reduction in the increase in CO2, and you think this should have meant 2020 would be colder.

Clyde Spencer
Reply to  Bellman
January 14, 2022 9:19 pm

Non sequiturs? I think that your perception of the points not following logically says a lot about your cognitive dissonance, or perhaps an inability to connect the dots. Let me see if I can make it clear for you.

For no apparent reason, you listed the ranking of the hottest years from several different sources. I took the opportunity to point out that 2020, which was variously ranked 1st or 2nd by your sources, was a unique year in that it allowed a controlled experiment. That is, anthropogenic emissions were significantly reduced by 7-10% for the entire year (not by the 5% you claim), with at least one month being over 18%, as estimated from sales and taxes.

The alarmist hypothesis is that CO2 is driving the increase in global temperatures. I quoted the exact claim from the article that supports that interpretation. That is, conversely, if the anthro’ CO2 flux should decline, it is predicted that, at the very least, the rate of increase should similarly decline, and perhaps even the full year should show cooling. However, neither happened!

What is the point of reducing anthro’ emissions if there is no measurable effect on either the annual CO2 increase, or especially, the supposed warming resulting from it?

The seasonal ramp-up curve for 2019-2020 was almost indistinguishable from 2018-2019. That is, the predicted decline in the warming rate or peak temperature resulting from a decline in anthro’ emissions did not happen! In summary, the unusual year (2020) that saw a decline in anthro’ emissions, essentially tied with an El Nino year (2016) for the global average temperature. That is, the alarmist hypothesis was falsified!

I do hope you were able to follow that. I have never been accused of having difficulty communicating. You might want to review the graphs here:

https://wattsupwiththat.com/2021/06/11/contribution-of-anthropogenic-co2-emissions-to-changes-in-atmospheric-concentrations/

Bellman
Reply to  Clyde Spencer
January 15, 2022 8:25 am

Your argument, the controlled experiment as you put it, is that slightly reducing CO2 emissions should have caused some sort of reduction in atmospheric CO2 that should in turn have caused a measurable reduction in temperatures in a single year. None of this is correct, and it does not follow from the sentence you quoted. The operative word in that sentence was “trend”.

There are several main problems with your hypothesis.

1) you completly ignore year to year variance will be much larger than any observable effect from a reduction in CO2 over a single year. The article makes it clear that many factors effect a single years temperature, especially ENSO conditions.

2) you keep making an implicit assumption that reducing emissions will result in a reduction in atmospheric CO2. What should happen is a reduction in emissions will cause a reduction in the increase in atmospheric CO2. 2020 would still be expected to have more CO2 than 2019, just by a slightly smaller increase than would have happened without the reduction in emissions.

3) even if you could ignore all variations, the expected change in temperature would be too small to measure in any one year. This is why you have to look at changes over decades to see what is happening.

Bellman
Reply to  Bellman
January 15, 2022 8:47 am

Let’s put some ball-park figures on point 3).

Each year we release a certain amount of CO2, which causes an increase of around 2-3 ppm. Let’s call it 3 ppm.

If you decrease emissions in one year you would expect to see a corresponding reduction in the increase. I said emissions in 2020 were around 5% less than in 2019, based on the figures I saw, but you claim it’s actually up to 10% based on estimates, so let’s say it is a 10% reduction. Atmospheric CO2 then rises by 10% less than would have been expected, so let’s say only 2.7 ppm.

3 ppm is less than a 1% increase, but let’s say it’s a 1% increase this year, and as a result of the 10% reduction it was only a 0.9% increase this year. What effect does this have on warming?

Estimates for the transient climate response are between 1.0 and 2.5C, and unlikely to be more than 3C, so let’s use that figure. Every doubling of CO2 causes an “immediate” warming of 3C. A 1% rise would cause about 1.5% of that, say 0.05C (actually a bit more as it’s logarithmic), and if the increase was 10% less it would cause 0.005C less warming.

So assuming the strongest possible effects, and ignoring all natural variability, I think the largest effect possible would still only be 0.005C in a single year. Compare this with the stated uncertainties for the annual GISS anomaly, which is 0.05C, an order of magnitude greater. You couldn’t measure it given the uncertainty in the global average, let alone expect it to be visible above the annual variation.
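The same ball-park arithmetic, written out with the logarithmic form dT = TCR * log2(C_new / C_old), assuming the deliberately high 3 C-per-doubling transient response used in the comment and round illustrative CO2 figures:

import math

TCR = 3.0            # C per doubling of CO2, the upper-end figure used above
c0 = 415.0           # ppm at the start of the year (illustrative round number)

def warming(increase_ppm):
    return TCR * math.log2((c0 + increase_ppm) / c0)

full = warming(3.0)          # a "normal" ~3 ppm annual rise
reduced = warming(2.7)       # the same year with emissions cut ~10%
print(full, reduced, full - reduced)   # the difference is a few thousandths of a degree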

Carlo, Monte
Reply to  Bellman
January 15, 2022 11:04 am

with the stated uncertainties for the annual GISS anomaly, which is 0.05C

More climastrology milli-Kelvin bollocks (cue bzx).

Bellman
Reply to  Carlo, Monte
January 15, 2022 11:57 am

Feel free to do your own UA, or pull your usual multi thousand milli-Kelvin guess out of thin air. But you do realize in this case, the less certainty in the annual figure, the more it undermines Clyde Spencer’s claim. As I’m saying, there is no way a change of 0.005°C could be detected.

Carlo, Monte
Reply to  Bellman
January 15, 2022 2:15 pm

Feel free to do your own UA

Free clue, that’s not my job to do your hard work—you and the climastrologists make these absurd UA claims, this is YOUR responsibility.

Of course, because you have no clues about what a real UA is, this will never happen.

Clyde Spencer
Reply to  Bellman
January 15, 2022 1:23 pm

Once again, you insist on conveniently ignoring monthly information available in the MLO data by only considering the net annual change.

You said, “Each year we release a certain amount of CO2, which causes a [sic] an increase of around 2-3 ppm.” You are assuming that the increase is entirely anthropogenic. You are ignoring the fact that the total annual anthropogenic emissions are less than 5% of the total CO2 flux. You deny the ability of a 10% decrease of a 5% proportion of CO2 flux to have any discernible decrease in either CO2 concentration or temperature over a period of less than decades. Yet, you claim that a small annual increase in the 5% flux proportion regularly causes an increase in total CO2 concentration of 2-3 PPM and contributes to an annual global increase in temperature of about 0.02 deg C. Do you not see the contradiction? You can’t have it both ways!

You provide us with an implied estimate that the annual anthropogenic warming with a 10% increase should be about 0.005 deg C annually. Yet, the measured increase is about 0.012-0.018 deg C. Actually, with anthro’ CO2 emissions rebounding (10+%) in 2021, the temperature decreased by about 0.25 deg C instead of increasing 0.015+0.005! That is, an order of magnitude greater than the expected increase, and in the opposite direction. Clearly, something other than CO2 is driving the variance. There is a strong suggestion from Fig. 6
[ https://wattsupwiththat.com/2021/06/11/contribution-of-anthropogenic-co2-emissions-to-changes-in-atmospheric-concentrations/ ] that temperature is one of the more important forcing factors. It alone explains almost half of the variance.

Bellman
Reply to  Clyde Spencer
January 15, 2022 3:42 pm

You are assuming that the increase is entirely anthropogenic.

What I’m trying to do is establish a ball park figure for the maximum change in temperature expected from a small reduction in emissions in a single year. For the purpose of this exercise I’m assuming that there is no natural variance, so this indeed means assuming the expected rise is entirely anthropogenic. If we don’t assume that then you have no argument, because as I was saying the natural variation both in CO2 increase and temperature will completely swamp the tiny change in expected temperature.

Yet, you claim that a small annual increase in the 5% flux proportion regularly causes an increase in total CO2 concentration of 2-3 PPM…

No. The change in the amount of CO2 emissions has only a very small effect on the rate of increase. What causes the regular 2-3 ppm is the 5% flux itself.

Do you not see the contradiction?

No I don’t.

Year to year, changes occur in the rise of CO2 because of temporary changes in sinks and sources, but in the longer term it’s the human emissions that dominate the rise.

You provide us with an implied estimate that the annual anthropogenic warming with a 10% increase should be about 0.005 deg C annually.”

That would very much be an upper estimate. But I’m not sure if you are following what I’m saying here. The 10% increase would be a single year’s increase and would cause an additional 0.005°C warming in a single year.

Yet, the measured increase is about 0.012-0.018 deg C.

I’m not sure what the “yet” means here. The increase in temperatures here is, assuming all else is equal and the “alarmist hypothesis” is correct, caused by the year-on-year increase in atmospheric CO2, which is currently somewhat less than 1% a year. A rise of 10% in emissions in a single year would only have a minimal effect on the amount of CO2 in the atmosphere, and a correspondingly unmeasurable effect on temperatures.

Actually, with anthro’ CO2 emissions rebounding (10+%) in 2021, the temperature decreased by about 0.25 deg C instead of increasing 0.015+0.005!

Because as I keep saying there are many other things that affect temperature, including the fact that 2021 was a La Niña year. In any event, atmospheric CO2 did not increase 10% more in 2021 than it did in 2020 – quite the reverse.

…temperature is one of the more important forcing factors. It alone explains almost half of the variance.

Yes, temperature and ENSO conditions are factors in determining year on year variance. That’s why 2021 saw a smaller increase than 2020 despite there being more emissions, and why 2022 is predicted to have the smallest rise in recent years. This is also why I don’t understand how you can claim that 2020 falsifies anything.

Clyde Spencer
Reply to  Bellman
January 15, 2022 2:12 pm

You couldn;t measure it given the uncertainty in the global average, let alone expect it to be visible above the annual variation.

I would submit that if a presumed forcing parameter is so weak that it can be obscured by annual random variations, then it is not the driving parameter. Rather, a forcing parameter that is not obscured, such as temperature, is more likely to be the driving parameter.

Even if the annual variations are not random, but the result of temperature, then it is clear that temperature is more important than CO2.

Bellman
Reply to  Clyde Spencer
January 15, 2022 4:33 pm

In this case, what you couldn’t measure is the amount of temperature changes caused by a single year of emissions reduced by 10%.

But I think you are talking here about the increase in CO2 rather than temperature. Otherwise you are claiming rising temperatures are the driver for rising temperatures.

It’s too late in the day to go over all the reasons why I doubt that temperature could be the dominant reason for the current increase in CO2.

But for one, it makes no sense to ignore the CO2 we know we have been emitting. It has to go somewhere.

Carlo, Monte
Reply to  Bellman
January 15, 2022 11:02 am

you completly [sic] ignore year to year variance will be much larger than any observable

Oh this is rich…the irony is just oozing out of the screen.

Nice hand-waving, though.

Bellman
Reply to  Carlo, Monte
January 15, 2022 11:58 am

My typo aside, just what is your point? Do you think I deny year to year variance?

Carlo, Monte
Reply to  Bellman
January 15, 2022 2:18 pm

Try reel hard, see if you can figger it out.

Clyde Spencer
Reply to  Bellman
January 15, 2022 12:18 pm

You are exhibiting a denseness that doesn’t correspond to other indicators of your intelligence. It would seem that you are so desperate to defend your consensus view that you resort to cherry picking facts and ignore responding to points I raise that undermine your views.

1) You are hung up on looking at the problem from the coarse temporal resolution viewpoint of the net annual changes. In my analysis of the MLO data, I presented graphs for several years, delineating both the ramp-up and draw-down phases. Rather than ignore variance, I was interested in what it might show us. Indeed, ENSO reveals itself very clearly with the anomalous shape and height of the 2016 El Nino, which you imply shouldn’t be visible because it is less than “decades.”

2) Your statement is a strawman argument. I explicitly said that what is expected is a decline in the rate of increase of CO2 during the ramp-up phase. If that were to occur, then the consensus hypothesis claims that the rate of temperature increase should similarly decline.

3) I’ll follow up responding to 3), below:

Bellman
Reply to  Clyde Spencer
January 15, 2022 2:42 pm

In case I wasn’t clear, this is the argument I was responding to.

The alarmist hypothesis is that CO2 is driving the increase in global temperatures. I quoted the exact claim from the article that supports that interpretation. That is, conversely, if the anthro’ CO2 flux should decline, it is predicted that, at the very least, the rate of increase should similarly decline, and perhaps even the full year should show cooling.

You are arguing that if the “alarmist hypothesis” is correct then a decrease in human emissions should reduce the rate of warming, or even cause cooling, and that even the full year should show cooling. You then implied that as 2020 was a warm year and emissions declined in that year, this was in some way a falsification of the “alarmist hypothesis”. Exact quote:

In summary, the unusual year (2020) that saw a decline in anthro’ emissions, essentially tied with an El Nino year (2016) for the global average temperature. That is, the alarmist hypothesis was falsified!

My contention is that nothing has been falsified. It’s illogical to assume that a small drop in the increase in CO2 for one year would produce a change in the increase that would be discernible above the year-to-year variance in either CO2 or temperature, and you certainly would not expect an increase in CO2, in and of itself, to cause a decrease in temperature.

In part 1) I should have been clearer I was talking about the variation in temperature, though it also applies to a lesser extent to the increase in atmospheric CO2.

You complain that I’m only looking at the annual average. But given my point that one year is too short a time to see the effects of a small drop, I’m not sure what looking at monthly figures would achieve. In any event, this argument starts with the annual average temperature for 2020. If you don’t want to look at annual averages you would have to ignore the first few months of 2020, which were the warmest.

You agree with me that there is a lot of variation in the annual CO2 increase and that ENSO plays a strong part. I’m not sure why you think this argues against my point. You need to know what the CO2 increase during 2020 would have been without the reduction in emissions before you can claim that the slowdown had no effect on the rise in CO2.

For example, in May 2020 the Met Office revised their predictions for the annual rise down to 2.48ppm due to the pandemic, compared with an expected 2.80ppm if there had been no reduction in emissions.

https://www.carbonbrief.org/analysis-what-impact-will-the-coronavirus-pandemic-have-on-atmospheric-co2

The actual increase was 2.51ppm.

Bellman
Reply to  Clyde Spencer
January 15, 2022 2:53 pm

Continuing on to point 2). You say

Your statement is a strawman argument. I explicitly said that what is expected is a decline in the rate of increase of CO2 during the ramp-up phase.

I said that your argument implied that CO2 would have to reduce, as you seemed to be saying that you expected 2020 to cool. If you are agreeing that it was only the rate of warming that should have changed in 2020 then fine, I withdraw this argument. I’m just puzzled about what your argument actually is. I really don’t see how 2020 falsifies anything.

Jim Gorman
Reply to  Bellman
January 15, 2022 7:40 am

It should be pointed out that most CAGW adherents, and even those folks who post here, advocate that the human-emitted CO2 IS THE ENTIRE INCREASE in CO2. They claim that the sinks and sources are equal, therefore all the increase occurs because of anthropogenically generated CO2.

If you believe that CO2 is what heats the earth then a reduction should cause a lowering in temperature. You simply can’t have it any other way.

Your claim belies both those theories. You are becoming a sceptic of CO2 GHG warming.

Bellman
Reply to  Jim Gorman
January 15, 2022 8:56 am

It should be pointed out that most CAGW adherents, and even those folks who post here, advocate that the human emitted CO2 IS THE ENTIRE INCREASE in CO2.

Do they? I think the IPCC says it’s very likely to be the dominant reason for the increase.

If you believe that CO2 is what heats the earth then a reduction should cause a lowering in temperature.

1) there hasn’t been a reduction in atmospheric CO2. 2020 was higher than 2019, 2021 is higher still.

2) CO2 causes warming does not imply every year will be slightly warmer than the previous one. Over a short period year to year variance caused by such things as ENSO conditions dominate.

Carlo, Monte
Reply to  Bellman
January 15, 2022 11:05 am

The Trends! But what about the Trends?!??

Bellman
Reply to  Carlo, Monte
January 15, 2022 12:00 pm

What about the trends?

Carlo, Monte
Reply to  Bellman
January 15, 2022 2:19 pm

Bellman absolutely positively must have The Very Last Word On Absolutely Everything.

Bellman
Reply to  Carlo, Monte
January 15, 2022 3:56 pm

So you don’t know what you meant either.

Feel free to make another joke after this so I don’t have the last word.

Gary Pearse
January 14, 2022 1:27 pm

“NASA uses the period from 1951-1980 as a baseline to see how global temperature changes over time.”

Which ‘just happens’ to be the very 30yrs of the “Ice Age Cometh” deep cooling when global temperatures had alarmingly dropped about 0.5°C

https://m.youtube.com/watch?v=0-ZDnSbNIYs

About this baseline reference, Gavin Schmidt shamelessly says

“That baseline includes climate patterns and unusually hot or cold years due to other factors, ensuring that it encompasses natural variations in Earth’s temperature.”

This is simply dishonest. He signals that by ‘admitting’ “…unusually hot or cold (indeed!!) years” to assure us that it isn’t the cherry pick that it clearly is!

Also, the climate wroughters were careful to use the period after the deepest cold of that period, when Arctic ice was greatly expanded, as the baseline from which to measure the decline in ice.

Bellman
January 14, 2022 1:36 pm

For completeness, here are the other data sets I keep track of that have data up to the end of 2021.

JMA

 1 2016  0.35 
 2 2020  0.34 
 3 2019  0.31 
 4 2015  0.30 
 5 2017  0.27 
 6 2021  0.23 
 7 2018  0.17 
 8 2014  0.14 
 9 2010  0.11 
10 2013  0.07

NOAA

 1 2016  0.38 
 2 2020  0.36 
 3 2019  0.33 
 4 2015  0.31 
 5 2017  0.29 
 6 2021  0.23 
 7 2018  0.21 
 8 2014  0.12 
 9 2010  0.10 
10 2013  0.06

BEST

=1 2016  0.40 
=1 2020  0.40 
 3 2019  0.37 
 4 2017  0.30 
 5 2015  0.26 
 6 2021  0.24 
 7 2018  0.23 
=8 2014  0.12 
=8 2010  0.12 
10 2005  0.08

RSS

=1 2020  0.46 
=1 2016  0.46 
 3 2019  0.39 
 4 2017  0.33 
=5 2010  0.26 
=5 2021  0.26 
=5 2015  0.26 
 8 1998  0.22 
 9 2018  0.19 
10 2014  0.13

Tom Abbott
Reply to  Bellman
January 14, 2022 3:47 pm

1998 just can’t get no respect!

Carlo, Monte
Reply to  Bellman
January 14, 2022 6:59 pm

BFD

Peta of Newark
January 14, 2022 1:56 pm

A slightly OT comment but of instant relevance, concerning UK energy supply demand.

Average temp across the UK is now (at nearly 22:00 GMT) about minus 2 Celsius.
Meaning that all of Bojo’s proposed air-source heat-pumps will be frozen solid and not pumping any heat.
Meanwhile the windmills are producing 3.1 GW as a contribution to a (high for the time of day) UK consumption of 37 GW.

Heads are certainly going to roll….

PS And actually on topic, but I already told you all: my Wunderground personal weather stations on the western side of England (not the UK, just England) all recorded 2021 as among their (4th or 5th) coldest in their 20-year record.

Gyan1
January 14, 2022 2:04 pm

“Science leaves no room for doubt: Climate change is the existential threat of our time,” said NASA Administrator Bill Nelson.”

Is he psychotically deluded or just a liar?

Science says otherwise.

“World Atmospheric CO2, Its 14C Specific Activity, Non-fossil Component, Anthropogenic Fossil Component, and Emissions (1750–2018)”
https://journals.lww.com/health-physics/Fulltext/2022/02000/World_Atmospheric_CO2,_Its_14C_Specific_Activity,.2.aspx

Results in this paper and citations in the scientific literature support the following 10 conclusions.

  1. The scientific literature does not appear to provide estimates of either the annual mean values of the anthropogenic fossil component, CF(t), or of the non-fossil component, CNF(t), present in the total atmospheric CO2 concentration, C(t), nor their respective changes from values in 1750.
  2. The annual mean values of all CO2 quantities provided in this paper automatically account for the redistribution of CO2 among its reservoirs, including all of its isotopic forms. Results depend on chosen values in 1750 of 276.44 ppm for C(0) and 16.33 dpm (gC)−1 for S(0), both of which may be somewhat overestimated as indicated in the text. Based on the simple equations used to calculate all CO2 quantities, smaller values for these chosen quantities would yield smaller values of the anthropogenic fossil component, CF(t), and larger values of the non-fossil component, CNF(t).
  3. In 1950, the <CF(t)> value of 4.03 ppm in Table 2a is 1.29% of C(t) and 11.48% of the increase, ΔC(t), of 35.10 ppm since 1750. After 1950, values of the two components of C(t) begin to increase rapidly, and this increase continues through 2018. This rapid increase, however, is not triggered by the greenhouse effect and global warming associated with either the 1950 value of 4.03 ppm for CF(t) or the relatively small increase in the annual change, ΔCNF(t), of 31.07 ppm in the non-fossil component, which is 88.5% of the ΔC(t) value of 35.10 ppm. This ΔCNF(t) value of 31.07 ppm in 1950 results from the annual redistribution of CO2 among its reservoirs, primarily a net release of CO2 from the oceans due to increases in temperatures from solar insolation in 1950 and afterwards.
  4. In 2018, the <CF(t)> value of 46.84 ppm is 11.55% of the C(t) value of 405.40 ppm, 36.32% of the ΔC(t) value of 128.96 ppm, and 57.04% of the ΔCNF(t) value of 82.12 ppm. These results negate claims that the increase, ΔC(t), in C(t), since 1750 has been dominated by the increase of the anthropogenic fossil component, CF(t).
  5. In 2018, the total content of anthropogenic fossil CO2 in the atmosphere is estimated as 3.664 × 10^17 g, which is 23% of the total emissions of 1.590 × 10^18 g since 1750. Thus, in 2018, 77% of the total emissions is estimated to be present in the atmosphere’s exchange reservoirs.
  6. Claims of the dominance of the anthropogenic component, CF(t), in the increase of the CO2 concentration, C(t), first began in 1960 with: “Keeling Curve: Increase in CO2 from burning fossil fuel” (Rubino 2013). Despite the lack of knowledge of the two components of C(t), these claims have continued in the scientific literature.
  7. Calculated much larger reductions in values for the annual mean Δ14C statistic and other reductions in the <S(t)> statistic in Table 1 are required to support the claims that the increase of the total concentration, C(t) above C(0) in 1750 has been dominated by or is equal to the increase in the anthropogenic fossil component, CF(t). These results negate the claims of the dominance of CF(t) in the increase of C(t).
  8. Claims of the dominance of the anthropogenic fossil component have involved (a) the misuse of the δ13C and Δ14C statistics to validate these claims when they are expressed in the common unit of per mil (‰), which causes their slopes in plots to be magnified by a factor of 1,000 above what they otherwise would be; (b) the plot of decreasing values of the δ13C statistic along with the plot of increasing values of the total CO2 concentration, C(t), in the same figure on different vertical axes to infer or claim that the increase in C(t) is due mostly or entirely to the anthropogenic fossil component, CF(t), even though large reductions of the δ13C statistic reflect no significant changes in its (13C/12C) atom ratios and bear no significant relationship with increases in the anthropogenic fossil component, CF(t); and (c) EIA plot of the CO2 concentration, C(t), in the same figure with the plot of the annual emissions, ΔE(t), of anthropogenic fossil-derived CO2 on different vertical scales, which provides the inference that the anthropogenic fossil component, CF(t), is responsible for the increase in C(t) when a plot of the non-fossil component, CNF(t), and of the anthropogenic fossil component, CF(t), are not included in the EIA figure.
  9. An article on Glacial-Interglacial Cycles (NOAA) suggests that recent increases in CO2 and temperatures are due primarily to cyclic changes of solar radiation associated with Earth’s orbit about the sun. The annual change, ΔCNF(t), in the non-fossil component has positive increasing values in Table 2 (https://links.lww.com/HP/A210) after 1764. It will eventually become negative in the next glacial period when average temperatures decrease again as they have done over all of the previous glacial-interglacial cycles.
  10. The assumption that the increase in CO2 since 1800 is dominated by or equal to the increase in the anthropogenic component is not settled science. Unsupported conclusions of the dominance of the anthropogenic fossil component of CO2 and concerns of its effect on climate change and global warming have severe potential societal implications that press the need for very costly remedial actions that may be misdirected, presently unnecessary, and ineffective in curbing global warming.

Chris Hanley
January 14, 2022 2:23 pm

“Science leaves no room for doubt: Climate change is the existential threat of our time,” said NASA Administrator Bill Nelson.

According to Wiki: “Clarence William Nelson II (born September 29, 1942) is an American politician and attorney serving as the administrator of the National Aeronautics and Space Administration (NASA)”.
Nelson is an ex-politician (D) and Biden’s flunky but if that is the official view of NASA then that organization should never be trusted with assessing highly inexact surface temperature data.

Gunga Din
January 14, 2022 3:11 pm

These from NOAA for my little spot on the globe.
Record Highs for the day as listed in July, 2012 compared to what was listed 2007
Newer-April ’12 | Older-’07 (did not include ties)

6-Jan 68 1946 Jan-06 69 1946 Same year but “new” record 1*F lower
9-Jan 62 1946 Jan-09 65 1946 Same year but “new” record 3*F lower
31-Jan 66 2002 Jan-31 62 1917 “New” record 4*F higher but not in ’07 list
4-Feb 61 1962 Feb-04 66 1946 “New” tied records 5*F lower 4-Feb 61 1991
23-Mar 81 1907 Mar-23 76 1966 “New” record 5*F higher but not in ’07 list
25-Mar 84 1929 Mar-25 85 1945 “New” record 1*F lower
5-Apr 82 1947 Apr-05 83 1947 “New” tied records 1*F lower 5-Apr 82 1988
6-Apr 83 1929 Apr-06 82 1929 Same year but “new” record 1*F higher
19-Apr 85 1958 Apr-19 86 1941 “New” tied records 1*F lower 19-Apr 85 2002
16-May 91 1900 May-16 96 1900 Same year but “new” record 5*F lower
30-May 93 1953 May-30 95 1915 “New” record 2*F lower
31-Jul 100 1999 Jul-31 96 1954 “New” record 4*F higher but not in ’07 list
11-Aug 96 1926 Aug-11 98 1944 “New” tied records 2*F lower 11-Aug 96 1944
18-Aug 94 1916 Aug-18 96 1940 “New” tied records 2*F lower 18-Aug 94 1922 18-Aug 94 1940
23-Sep 90 1941 Sep-23 91 1945 “New” tied records 1*F lower 23-Sep 90 1945 23-Sep 90 1961
9-Oct 88 1939 Oct-09 89 1939 Same year but “new” record 1*F lower
10-Nov 72 1949 Nov-10 71 1998 “New” record 1*F higher but not in ’07 list
12-Nov 75 1849 Nov-12 74 1879 “New” record 1*F higher but not in ’07 list
12-Dec 65 1949 Dec-12 64 1949 Same year but “new” record 1*F higher
22-Dec 62 1941 Dec-22 63 1941 Same year but “new” record 1*F lower
29-Dec 64 1984 Dec-29 67 1889 “New” record 3*F lower

Record Lows for the day as listed in July, 2012 compared to what was listed 2007
Newer-’12 | Older-’07 (did not include ties)

7-Jan -5 1884 Jan-07 -6 1942 New record 1 warmer and 58 years earlier
8-Jan -9 1968 Jan-08 -12 1942 New record 3 warmer and 37 years later
3-Mar 1 1980 Mar-03 0 1943 New record 3 warmer and 26 years later
13-Mar 5 1960 Mar-13 7 1896 New record 2 cooler and 64 years later
8-May 31 1954 May-08 29 1947 New record 3 warmer and 26 years later
9-May 30 1983 May-09 28 1947 New tied record 2 warmer same year and 19 and 36 years later 30 1966 30 1947
12-May 35 1976 May-12 34 1941 New record 1 warmer and 45 years later
30-Jun 47 1988 Jun-30 46 1943 New record 1 warmer and 35 years later
12-Jul 51 1973 Jul-12 47 1940 New record 4 warmer and 33 years later
13-Jul 50 1940 Jul-13 44 1940 New record 6 warmer and same year
17-Jul 52 1896 Jul-17 53 1989 New record 1 cooler and 93 years earlier
20-Jul 50 1929 Jul-20 49 1947 New record 1 warmer and 18 years earlier
23-Jul 51 1981 Jul-23 47 1947 New record 4 warmer and 34 years later
24-Jul 53 1985 Jul-24 52 1947 New record 1 warmer and 38 years later
26-Jul 52 1911 Jul-26 50 1946 New record 2 warmer and 35 years later
31-Jul 54 1966 Jul-31 47 1967 New record 7 warmer and 1 years later
19-Aug 49 1977 Aug-19 48 1943 New record 1 warmer and 10, 21 and 34 years later 49 1964 49 1953
21-Aug 44 1950 Aug-21 43 1940 New record 1 warmer and 10 years later
26-Aug 48 1958 Aug-26 47 1945 New record 1 warmer and 13 years later
27-Aug 46 1968 Aug-27 45 1945 New record 1 warmer and 23 years later
12-Sep 44 1985 Sep-12 42 1940 New record 2 warmer and 15, 27 and 45 years later 44 1967 44 1955
26-Sep 35 1950 Sep-26 33 1940 New record 2 warmer and 12 earlier and 10 years later 35 1928
27-Sep 36 1991 Sep-27 32 1947 New record 4 warmer and 44 years later
29-Sep 32 1961 Sep-29 31 1942 New record 1 warmer and 19 years later
2-Oct 32 1974 Oct-02 31 1946 New record 1 warmer and 38 years earlier and 19 years later 32 1908
15-Oct 31 1969 Oct-15 24 1939 New tied record same year but 7 warmer and 22 and 30 years later 31 1961 31 1939
16-Oct 31 1970 Oct-16 30 1944 New record 1 warmer and 26 years later
24-Nov 8 1950 Nov-24 7 1950 New tied record same year but 1 warmer
29-Nov 3 1887 Nov-29 2 1887 New tied record same year but 1 warmer
4-Dec 8 1976 Dec-04 3 1966 New record 5 warmer and 10 years later
21-Dec -10 1989 Dec-21 -11 1942 New tied record same year but 1 warmer and 47 years later -10 1942
31 ? Dec-05 8 1976 December 5 missing from 2012 list

Gunga Din
Reply to  Gunga Din
January 14, 2022 3:18 pm

DANG! I forgot “pre” doesn’t work anymore with the new format.
Basically, of the record highs and lows listed in 2007 and in 2012, about 10% have been “adjusted”. Not new records set, old records changed.

PS The all time recorded high was 106 F set 7-21-1934 and tied 7-14-1936. The all time recorded low was -22 F set 1-19-1994.

Gunga Din
Reply to  Gunga Din
January 14, 2022 3:40 pm

“pre” was a way to put up a table that didn’t show up as a confused mess like mine above! 😎

Mark Hugo
January 14, 2022 5:14 pm

And where are the 22 years of satellite data for the lower troposphere? I thought Anthony Watts pretty much blew this manure away with his SurfaceStations.org work?!

Dan
January 14, 2022 9:18 pm

“NASA compares that global mean temperature to its baseline period of 1951-1980”.
Hmmm, I wonder why they picked those years? Hmmm. They just happen to be the coldest decades of the 20th century. Oh, and don’t forget how they adjusted the temperatures prior to 1950 downward and the post 2000 temperatures upward to a) remove the hot years of the 1930’s and 1940’s from being in the warmest top 5 and b) make sure the 2010’s have the hottest years on record.


Gunga Din
Reply to  Dan
January 15, 2022 3:31 pm

Mr. Layman here.
As I understand it, back in the 1920’s the standard of 30 years was set for the official basis of “average” because they only had 30 years worth of reliable data back then. So 30 year blocks of time.
(Of course, now they’ve decided those first 30 years of data aren’t reliable after all. Too warm so they’ve cooled them.)
There is no reason why they couldn’t switch to an “average” based on 60 years.

bdgwx
Reply to  Gunga Din
January 15, 2022 4:51 pm

You could also form the baseline from all available data. For GISTEMP this is 1880 to 2021, or 142 years. The shape of the graph and the trends remain exactly the same either way, so it doesn’t matter whether you pick a 30, 60, or 142 year period for the baseline.
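A minimal sketch of that point, with synthetic data rather than GISTEMP itself: switching the baseline period only shifts every anomaly by the same constant, so the fitted trend is identical.

# Minimal sketch with synthetic data: changing the anomaly baseline only
# shifts every value by a constant, so the fitted trend is unchanged.
import numpy as np

years = np.arange(1880, 2022)
rng = np.random.default_rng(0)
temps = 14.0 + 0.008 * (years - 1880) + rng.normal(0, 0.1, years.size)

def anomalies(t, yrs, start, end):
    baseline = t[(yrs >= start) & (yrs <= end)].mean()
    return t - baseline

for start, end in [(1951, 1980), (1991, 2020), (1880, 2021)]:
    slope = np.polyfit(years, anomalies(temps, years, start, end), 1)[0]
    print((start, end), round(slope, 6))  # same slope for every baseline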

Clyde Spencer
Reply to  bdgwx
January 15, 2022 6:23 pm

Actually, one could use any number at all for a baseline. However, one of the benefits of using a 30-year period immediately preceding the current decade is that it allows plotting the ‘anomalies’ in red and blue, above and below the baseline respectively, accentuating the difference between the two with a lot of red splashed on the graph.

I suspect you will say that it doesn’t matter. However, why don’t other sciences use that approach? They typically show their time-series as actual measurements, perhaps re-scaled or truncated to remove the white space. Alternatively, they may normalize the data in some fashion. The bottom line is that other sciences plot data in the same manner as mathematicians, so that conventional metrics such as slope are obvious and not subjectively influenced by selection of colors. Objectivity seems to be of more importance to physicists than to alarmist climatologists.

bdgwx
Reply to  Clyde Spencer
January 15, 2022 7:15 pm

I actually hate those red and blue plots so there will be no challenge by me if you want to criticize their use. In fact, I’ll jump right in there with you. I don’t care if it is NASA, NOAA, or whoever that’s doing it. It drives me crazy all the same.

ATheoK
January 14, 2022 10:37 pm

“The complexity of the various analyses doesn’t matter because the signals are so strong,” said Gavin Schmidt, director of GISS.

Must be because use of the word “robust” is so passé nowadays.

The song and their mendacity are the same. No matter how many times global warming predictions have failed, their march continues towards overt false propaganda, rampant rewriting of the past, and aggressive, censorious, despotic racketeering.

S Browne
January 15, 2022 7:26 am

Religion and politics have always been hot topics. But I remember when the weather was a topic we could engage in without apocalyptic political hyperbole such as ‘existential threat’.

I also remember when scientists knew that questioning theories and conclusions was the essence of the scientific method and that nothing was ever absolutely and finally settled because we don’t know what we don’t know. Now the head of the supposedly premier scientific agency of the U.S. government tells us that “science leaves no room for doubt”. That’s the most ignorant, hubristic, unscientific statement one could ever make.

Wharfplank
January 15, 2022 8:16 am

I think we could solve the “climate crisis” by assuring all data are reviewed by 50% Republicans and 50% Democrats…stable temps forevah!

Pat Frank
January 15, 2022 9:17 am

The figure shows the published GMST with (left) the official uncertainty and (right) the uncertainty when the systematic measurement errors from solar irradiance and wind speed effects are included in the land-surface data, and the estimated average errors from bucket thermometers, ship engine intakes, and ARGO floats are included in the SST.

The book chapter is here.

The global GMST is hopelessly lost within the uncertainty envelope resulting from these errors.

[attached image: GASAT&Cor_GASAT.png]

Carlo, Monte
Reply to  Pat Frank
January 15, 2022 11:14 am

And these don’t show the several degrees of variations caused by the standard deviations from all the averaging.

bigoilbob
Reply to  Pat Frank
January 16, 2022 7:18 am

Your “book chapter” link is dead. Please provide the referenced expected value and “estimated average error” data, in usable form.

Thx in advance…

bigoilbob
Reply to  Pat Frank
January 17, 2022 5:22 am

It’s been a day now. Any luck finding the requested expected value and “estimated average error” data, with distribution info, in usable form, for that second plot? I think it’s a decade old, and part of a paper. Albeit one with nada citations.

Or could it be that you would rather that no one check out your undocumented statement that “The global GMST is hopelessly lost within the uncertainty envelope resulting from these errors.“.

Naaaaaah…

Tim Gorman
Reply to  bigoilbob
January 17, 2022 1:37 pm

It is easy to calculate the uncertainty envelope. Assuming most temperature measurements have an uncertainty of +/- 0.5C then what is the overall uncertainty when all the thousands of temperatures are added together to form the average?

The sum of all the temperatures will have an uncertainty of ẟTsum = sqrt[ (ẟT1)^2 + (ẟT2)^2 + … + (ẟTn)^2 ] = sqrt[ n(ẟT)^2 ] = ẟT·sqrt[ n ].

The error band gets so wide so quickly that the sum of all the uncertainties hides any possible trend line – or even better the trend line can be anything!

You can, however, do your own calculation if you wish. Another way would be to use relative uncertainties in which case you would use the formula

ẟTavg/ Tavg = sqrt[ (ẟT1/T1)^2 + (ẟT2/T2)^2 + … + (ẟTn/Tn)^2 ]

but you probably won’t come up with anything very much different.
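A minimal sketch of the two quadrature formulas exactly as written above, using made-up readings and the assumed +/-0.5 uncertainty on each; whether these are the right formulas to apply to an average is precisely what the replies below dispute.

# Sketch of the quadrature sums written above (hypothetical readings, assumed
# +/-0.5 uncertainty each; the replies below dispute whether these formulas
# are the right ones for an average).
import math

temps = [14.2, 15.1, 13.8, 16.0]
u = [0.5] * len(temps)

# Uncertainty of the sum: dTsum = sqrt(sum(dTi^2)) = dT*sqrt(n) when all equal
u_sum = math.sqrt(sum(ui**2 for ui in u))
print(u_sum, 0.5 * math.sqrt(len(temps)))        # both 1.0

# Relative-uncertainty form quoted for the average:
# dTavg/Tavg = sqrt(sum((dTi/Ti)^2))
t_avg = sum(temps) / len(temps)
rel = math.sqrt(sum((ui / ti) ** 2 for ui, ti in zip(u, temps)))
print(rel * t_avg)                               # implied dTavg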

Bellman
Reply to  Tim Gorman
January 17, 2022 2:38 pm

I must have been arguing with you for almost a year now, and you still show zero understanding that an average is not the same as a sum.

Carlo, Monte
Reply to  Bellman
January 17, 2022 3:10 pm

You have a vested interest that requires forcing the uncertainties of these tiny trends as small as possible. If real-world numbers were to emerge, your castle of playing cards collapses into statistical insignificance.

Bellman
Reply to  Carlo, Monte
January 18, 2022 2:11 pm

If the real numbers are going to bring my castle of playing cards down (whatever you think that is), it would be in my interest to exaggerate the uncertainty as much as possible.

bdgwx
Reply to  Tim Gorman
January 17, 2022 2:52 pm

TG said: “ẟTsum = sqrt[ (ẟT1)^2 + (ẟT2)^2 + … + (ẟTn)^2 ] = sqrt[ n(ẟT)^2 ] = ẟT·sqrt[ n ]”

YES. That is correct per Taylor 3.16.

TG said: “ẟTavg/ Tavg = sqrt[ (ẟT1/T1)^2 + (ẟT2/T2)^2 + … + (ẟTn/Tn)^2 ]”

NO. That does not follow from Taylor 3.18. Fix the arithmetic mistake and resubmit your solution. If you would show your work, that would be helpful. Take it slow. Show each step one by one. Watch the order of operations and/or put parentheses around terms to mitigate the chance of a mistake.

Carlo, Monte
Reply to  bdgwx
January 17, 2022 3:13 pm

Your hat size has swelled a lot today.

bigoilbob
Reply to  bdgwx
January 17, 2022 4:08 pm

Stokesian patience. You flatter every commenter you respond to (whether they realize it or not) with the inference that, sooner or later, they’ll snap out of it.

bigoilbob
Reply to  Tim Gorman
January 17, 2022 3:52 pm

“The error band gets so wide so quickly that the sum of all the uncertainties hides any possible trend line – or even better the trend line can be anything!”

That is perhaps part of the way to calculate the standard error of a trend line, which is the relevant parameter here, but it doesn’t go to completion. For a trend line with only expected values, find an earlier comment by Willis E to get schooled. I can provide the expansion of that standard error with any kind of distributed errors for each datum. They don’t need to be equal. They don’t need to be symmetrical. They don’t even need to be the same distribution type. Now, I don’t have stochastic evaluation software like @RISK or Crystal Ball, so I can’t deal with correlations. But since uncorrelated data provides the largest standard errors for trends, I can provide the worst-case answer(s).

But this all raises the larger question of why Pat Frank, who has claimed to have calculated, from the ground up, the data distributions – of whatever kind – that comprise the vertical “error bar” line segments that spread from every data point in his second plot, has not provided them. I have not questioned their provenance (here), but only wish to check out his data-free claim that “The global GMST is hopelessly lost within the uncertainty envelope resulting from these errors.” He might be correct, but with Pat Frank, “Trust but verify” is the best policy.

bigoilbob
Reply to  bigoilbob
January 17, 2022 4:21 pm

H/T to Willis E. This is how we calculate the standard error of a trend, with expected value only data.

[embedded image: formula for the standard error of a trend – not reproduced]
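The embedded image does not survive here. Presumably (an assumption on my part) it showed the standard textbook expression for the standard error of an ordinary least-squares slope, which for data (x_i, y_i) and fitted values ŷ_i is

SE(\hat{b}) = \sqrt{ \frac{ \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 / (n - 2) }{ \sum_{i=1}^{n} (x_i - \bar{x})^2 } }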

Carlo, Monte
Reply to  bigoilbob
January 17, 2022 4:52 pm

Another canned equation, applied without understanding.

Well done blob, you Shirley showed them this time.

bigoilbob
Reply to  Carlo, Monte
January 17, 2022 6:01 pm

“…applied without understanding”

Pray elucidate. What am I missing? The equation is valid, and its underlying assumptions are commonly known. And I would only use it as a base, widening it based on Pat’s data distributions.

Pat’s data can be trended, and the statistical durability of those trends can be quantified. At least if he has the usable density functions for it that he claims are the bases for his plot. He should be eager to post them.

AGAIN, bdgwx and Bellman are doing the heavy lifting here. All I want to see is how the statistical durability of his trends squares with his backup-free assertion.

Carlo, Monte
Reply to  bigoilbob
January 17, 2022 6:05 pm

You cannot increase knowledge by averaging. This is effectively what you are arguing.

bigoilbob
Reply to  Carlo, Monte
January 17, 2022 6:24 pm

We can increase our knowledge by using more data, properly. That is effectively what we are arguing.

Jim Gorman
Reply to  bigoilbob
January 18, 2022 6:22 am

No, you can not increase knowledge of individual measurements by using more and more independent measurements of different things.

Bellman
Reply to  Jim Gorman
January 18, 2022 9:31 am

More moving of goal posts. Carlo said averaging doesn’t increase knowledge, then you change this to averaging doesn’t increase knowledge of individual measurements.

I’d still disagree with this. Knowing the average of a population does increase the knowledge of individual measurements – it tells you if they are above or below average.

Carlo, Monte
Reply to  Bellman
January 18, 2022 3:21 pm

Looks like you’ve chosen to follow bzx on the road down to Pedantry.

Jim Gorman
Reply to  Bellman
January 18, 2022 5:36 pm

The mean tells you nothing about a distribution. Where did you learn about statistics?

You need to know the standard deviation or variance so you know how well the mean represents the spread of the data.

Look at the screen capture. It’s a sampling simulation. The basic distribution (top) is equivalent to the Southern Hemisphere and the Northern Hemisphere in summertime. What does the arithmetic mean tell you about the distribution? Not much.

Do me a favor and take the standard deviation of the two sampling distributions and multiply them by the square root of the sample size. You’ll see they equal the population SD. That is where the equation SEM = σ / √N originates. The standard deviation of the sample means is the SEM!

That is why dividing by the number of data points is ridiculous. Calling stations a sample doesn’t meet the requirement for samples having the same mean and variance as the population. If you want to deal with statistics, at least do so properly.
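A minimal sketch of that relationship, with made-up numbers and a crude two-hump “two hemisphere” population (the sample size is an assumption): the spread of the sample means comes out close to σ/√N.

# Sketch with made-up numbers: the standard deviation of sample means is
# approximately sigma/sqrt(N) (the SEM), even for a bimodal population.
import numpy as np

rng = np.random.default_rng(1)
population = np.concatenate([rng.normal(5, 3, 50_000),    # "SH winter" temps
                             rng.normal(25, 3, 50_000)])  # "NH summer" temps
sigma = population.std()

N = 30                                   # assumed sample size
means = [rng.choice(population, N).mean() for _ in range(5_000)]
print(np.std(means))                     # spread of the sample means
print(sigma / np.sqrt(N))                # SEM = sigma/sqrt(N): about the same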

Bellman
Reply to  Jim Gorman
January 18, 2022 5:57 pm

And the goalposts shift again. No, a mean does not tell you what the distribution is, it doesn’t tell you a lot of things and you can spend the next few comments itemizing all those things – but it does not explain why you think a mean tells you nothing.

Jim Gorman
Reply to  Bellman
January 18, 2022 6:35 pm

An arithmetic mean tells you the central value of a range of numbers. That is all. It tells you nothing about the shape of the distribution.

A variance or standard deviation is needed to describe the variation/dispersion of the data in a distribution.

In a measurement of a single thing with the same device, multiple times, the mean will define the true value, if and only if the distribution of the individual measurements form a Gaussian distribution.

Bellman
Reply to  Jim Gorman
January 18, 2022 7:15 pm

Finally an admission that a mean does tell you something, though it won’t necessarily tell you what the “central value of a range of numbers” is.

Of course a mean on its own does not tell you what the distribution is, nor for that matter do the mean and standard deviation together. There are lots of things that won’t tell you the distribution of a set of numbers but can still be useful. The sum of the numbers won’t tell you the distribution, but nobody here is insisting that adding numbers tells you nothing.

In a measurement of a single thing with the same device, multiple times, the mean will define the true value, if and only if the distribution of the individual measurements form a Gaussian distribution.

Wrong on all counts. The mean won’t define the true value any more than a single measurement will. It will just be more precise assuming there are random errors in the measurement. And it can do this regardless of the distribution of the measurements.

Jim Gorman
Reply to  Bellman
January 19, 2022 5:38 am

“The mean won’t define the true value any more than a single measurement will.”

I thought you were just making stuff up, but I now realize you’re really pulling stuff out of your arse. This doesn’t even deserve a response.

Jim Gorman
Reply to  Bellman
January 18, 2022 6:55 pm

The mean tells you nothing about any individual point of data. It is the simple center of a group of numbers. The mean may not even be the same as any data point. You have no idea of the distribution shape or the range of values used to calculate it. You have no idea if the distribution is skewed or to what extent.

Only the mean and standard deviation together will tell you the dispersal of data surrounding the mean.

Jim Gorman
Reply to  bigoilbob
January 18, 2022 6:20 am

That is the uncertainty in the trend assuming the data is 100% accurate, with no uncertainty in the data points themselves!

It does not address the uncertainty in the data used to generate the trend.

When are you going to learn that the uncertainty in the data is going to make a trend line very, very wide, such that you can not “assign” a definite value to the variable because it can be any value the line touches.

If you are using a point like 75F from 1910 the base uncertainty will be +/- 0.5. That requires any trend line using that value be 1F wide.

bigoilbob
Reply to  Jim Gorman
January 18, 2022 6:28 am

“That is the uncertainty in the trend assuming the data is 100% accurate, with no uncertainty in the data points themselves!
It does not address the uncertainty in the data used to generate the trend.”

No. Pat Frank’s claim is that he has thrown in the kitchen sink and included every potential source of uncertainty. I am willing to include his total uncertainty for every data point that Pat Frank is claiming, to validate his data free claim that the relevant trends are statistically unjustifiable. AGAIN, I am not disputing their provenance (here). I just want to check them out.

Dr. Frank’s dog must’ve eaten his homework again…

“When are you going to learn that the uncertainty in the data is going to make a trend line very, very wide, such that you can not “assign” a definite value to the variable because it can be any value the line touches.”

Uncertainty in the data will indeed “widen” the standard errors of any of its trends. What is missing here is the simple evaluation of how much. Pat Frank expects acceptance of a claim based only on Rorschachian eyeballing. He might be right, but he should provide his numbers and allow proper checking.

Carlo, Monte
Reply to  bigoilbob
January 18, 2022 7:43 am

No. Pat Frank’s claim is that he has thrown in the kitchen sink and included every potential source of uncertainty.

This is a baldfaced lie, blob.

bigoilbob
Reply to  Carlo, Monte
January 18, 2022 8:16 am

“This is a baldfaced lie, blob.”

First, nope.

Second, you’re both goal post moving and losing track of my actual data request. I simply want to see the distribution information that resulted in Pat Frank’s “error band” line segments. Evaluation of them either backs up his claim of trend “unknowability” or it doesn’t.

I suspect you think you know the answer already, from your deflections from Dr. Frank’s radio silence on providing the decade old distribution data. Assuming it even exists, and that the line segments aren’t just rulered into the cartoon. Of course, there might be “But wait, there’s more!”, after that. But let’s do this first.

Reluctance to provide this data channels early-’80s oilfield experience. Both HTC and Christensen used to give you a bottle of (good) whiskey for every one of their bits you ran. We would always prank our back-to-backs (the drilling supervisor who spelled you on your days away) by telling the bit salesman:

“Don’t give Fred any whiskey when he comes on.”

“Why?”

“Hey, dumb ****! He’ll drink it!”

Carlo, Monte
Reply to  bigoilbob
January 18, 2022 8:37 am

“This is a baldfaced lie, blob.”

First, nope.

Yep. The paper analyzed the effects of the 4 W/m2 cloud uncertainty.

A single source of uncertainty.

Rest of your rant unread.

bigoilbob
Reply to  Carlo, Monte
January 18, 2022 9:17 am

“The paper analyzed the effects of the 4 W/m2 cloud uncertainty.”

Again, nope. Here are multiple, systematic (his term) errors mentioned in the abstract of the paper that Pat Frank linked us to. I.e., the one in this post, and the source of his figure 2 cartoon:

  1. Field-calibrations of the “traditional Cotton Regional Shelter (Stevenson screen) and the modern Maximum-Minimum Temperature Sensor (MMTS) shield “
  2.  Marine field calibrations of bucket or engine cooling-water intake thermometers 
  3. Modern floating buoys

https://www.worldscientific.com/doi/abs/10.1142/9789813148994_0026

“Rest of your rant unread.”

I doubt it. More like a deflection from your earlier deflection.

Carlo, Monte
Reply to  bigoilbob
January 18, 2022 3:55 pm

“I doubt it. More like a deflection from your earlier deflection.”

Another unread blob rant.

Jim Gorman
Reply to  bigoilbob
January 18, 2022 10:16 am

Here is what you typed.

“This is how we calculate the standard error of a trend, with expected value only data.”

Here is what I replied.

“That is the the uncertainty in the trend assuming the data is 100% accurate with no uncertainty in the data points themselves!”

My answer has nothing to do with anything but your assertion. The equation you showed has no allowance for any uncertainty or error in the measurement data. It only evaluates how well the trend matches the data points used, i.e. the data is assumed to be 100% accurate with no uncertainty. That is, they are treated like counting numbers, not measurements.

If your data is integer values, the trend line should be at least +/- 0.5 wide. This would cover any 1/1000th measurement multiple times over!

Carlo, Monte
Reply to  Jim Gorman
January 18, 2022 7:42 am

It has the word “error” in the title so it tells them everything they want to know.

bdgwx
Reply to  bigoilbob
January 17, 2022 5:01 pm

You’re a smart guy so you may have already figured this out, but Bellman and I discovered that Pat’s method boils down to these calculations.

(1a) Folland 2001 provides σ = 0.2 “standard error” for daily observations.

(1b) sqrt(N * 0.2^2 / (N-1)) = 0.200 where N is large (ie ~365 for annual and ~10957 for 30yr averages)

(1c) sqrt(0.200^2 + 0.200^2) = 0.283

(2a) Hubbard 2002 provides σ = 0.25 gaussian distribution which Pat calculates to 3 decimal places as 0.254 for MMTS daily observations.

(2b) sqrt(N * 0.254^2 / (N-1)) = 0.254 where N is large as in (1b)

(2c) sqrt(0.254^2 + 0.254^2) = 0.359

(3) sqrt(0.283^2 + 0.359^2) = 0.46
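A minimal sketch reproducing the arithmetic of steps (1b) through (3) exactly as laid out above; whether combining the figures this way is appropriate is what the concerns below question.

# Sketch reproducing the arithmetic above, step by step as described;
# whether this combination is appropriate is questioned below.
import math

N = 365  # daily observations in an annual average

u_1b = math.sqrt(N * 0.2**2 / (N - 1))      # (1b) ~0.200
u_1c = math.sqrt(u_1b**2 + u_1b**2)         # (1c) ~0.283

u_2b = math.sqrt(N * 0.254**2 / (N - 1))    # (2b) ~0.254
u_2c = math.sqrt(u_2b**2 + u_2b**2)         # (2c) ~0.359

print(round(math.sqrt(u_1c**2 + u_2c**2), 2))  # (3) -> 0.46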

Here are our concerns.

1) Folland and Hubbard seem to be describing the same thing, so I question why they are being combined. And Folland is rather terse on how his 0.2 figure was derived, or even exactly what it means.

2) The use of sqrt(N*x^2 / (N-1)) to propagate uncertainty into an annual or 30yr average implies a near perfect correlation for every single month in the average which I question. There’s almost certainly some auto-correlation there, but there’s no way it’s perfect.

3) There is no propagation of uncertainty for the gridding, infilling, or averaging steps, nor is there any discussion of the coverage uncertainty or the spatial and temporal correlations that might require upward adjustments to the uncertainty.

Here are good references for the uncertainty analyses provided by several other groups, each of which performs a significantly more complex uncertainty analysis and gets significantly different results.

Christy et al. 2003 – Error Estimates of Version 5.0 of MSU–AMSU Bulk Atmospheric Temperatures

Mears et al. 2009 – Assessing uncertainty in estimates of atmospheric temperature changes from MSU and AMSU using a Monte‐Carlo estimation technique

Rohde et al. 2013 – Berkeley Earth Temperature Averaging Process

Lenssen et al. 2019 – Improvements in the GISTEMP Uncertainty Model

Huang et al. 2020 – Uncertainty Estimates for Sea Surface Temperature and Land Surface Air Temperature in NOAAGlobalTemp Version 5

Bellman
Reply to  bdgwx
January 17, 2022 6:20 pm

You give me far too much credit for this. All I did was query the use of a misapplied equation from Bevington, and tried to find out what assigned uncertainty meant, and why it wasn’t described in the GUM.

Jim Gorman
Reply to  Bellman
January 18, 2022 6:07 am

It is addressed in the GUM.

JCGM 100:2008

4.3.2 The proper use of the pool of available information for a Type B evaluation of standard uncertainty calls for insight based on experience and general knowledge, and is a skill that can be learned with practice. It should be recognized that a Type B evaluation of standard uncertainty can be as reliable as a Type A evaluation, especially in a measurement situation where a Type A evaluation is based on a comparatively small number of statistically independent observations. 

Bellman
Reply to  Jim Gorman
January 18, 2022 6:31 am

But when I asked if this meant type B uncertainty I was told, no it wasn’t anything used in the GUM, the GUM didn’t cover all types of uncertainty, etc.

If the argument is that assigned uncertainty is Type B, the question then is why you don’t apply the same propagation rules to it. Equation (10) in the GUM section 5.1.2 specifically says it applies to both types of uncertainty:

Each u(xi) is a standard uncertainty evaluated as described in 4.2 (Type A evaluation) or as in 4.3 (Type B evaluation).
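For reference, that equation, the GUM’s law of propagation of uncertainty for uncorrelated input quantities with y = f(x1, …, xN), reads

u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i)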

Bellman
Reply to  Bellman
January 18, 2022 6:44 am

Here’s what Pat Frank said, (my emphasis.)

Bellman, Folland assigned that uncertainty value. It can have no distribution. it has only the value Folland assigned to it. Finished. Period The end.

Assigned uncertainties do not appear in GUM.

And here’s Tim Gorman (his emphasis)

Type B uncertainty is *NOT* the same as an assigned uncertainty.

Type A uncertainty is calculated from observations. Type B uncertainty is estimated using available knowledge, e.g. manufacturer tolerance info, data generated during calibration, reference data from handbooks, etc.

Type A and Type B are not “assigned” uncertainties.

Then when I pointed out that Carlo, Monte said they were the same, Tim replied

Did you even bother to read what I posted? It doesn’t appear so.

Assigned uncertainties are *NOT* Type B uncertainties. Type B uncertainties are not just “assigned”, they are determined from all kinds of inputs, manufacturer specs, handbook values, etc.

Pat told you! You need to learn to read! “Assigned uncertainties do not appear in GUM”. So why do you keep asking where in the GUM are “assigned uncertainties” covered.

THE GUM DOESN’T SPEAK TO ASSIGNED UNCERTAINTIES.



Carlo, Monte
Reply to  Bellman
January 18, 2022 7:46 am

Why do you need the uncertainties of these linear plots to be as small as possible?

Bellman
Reply to  Carlo, Monte
January 18, 2022 9:26 am

What linear plots? I’m just trying to figure out how Pat calculates the uncertainty in the annual anomaly average.

If by linear plot you mean the linear regression, I’ve been arguing for some time that you have to look at the uncertainty, and that this is generally larger than you think because of autocorrelation etc. I tend to use the Skeptical Science Trend Calculator because it shows larger uncertainties.

You, on the other hand, seem to want to ignore all uncertainties when it comes to trends you like, such as Monckton’s cartoon pause, or anyone claiming there has been cooling over the last few years.

Jim Gorman
Reply to  Bellman
January 18, 2022 9:37 am

Why don’t you read the study to see how the uncertainty was determined? What was used to determine the “assigned uncertainty”?

Bellman
Reply to  Jim Gorman
January 18, 2022 2:00 pm

Because it keeps making assertions with no explanation and Pat was on hand so I thought it would be easier to ask him.

There’s little point wading through the whole document, most of which I’m unlikely to understand, when there’s already what appears to be a poor assumption underlying the argument.

Carlo, Monte
Reply to  Bellman
January 18, 2022 7:45 am

But when I asked if this meant type B uncertainty I was told, no it wasn’t anything used in the GUM,

Total nonsense.

the GUM didn’t cover all types of uncertainty, etc.

What I said was that the GUM is NOT the end-all-be-all for the subject. It can’t be.

Go read the title (again). Duh.

Bellman
Reply to  Carlo, Monte
January 18, 2022 9:19 am

I quoted the comments I was talking about. Either the “assigned” uncertainties of which Pat speaks are Type B or they are not. It’s up to you lot to come up with a consistent explanation. All I want to know is where is the explanation for how they propagate.

Carlo, Monte
Reply to  Jim Gorman
January 18, 2022 6:34 am

I am shocked by the revelation that the experts on GUM, NIST, equations etc. do not understand this.

Shocking information.

Jim Gorman
Reply to  bdgwx
January 18, 2022 9:14 am

You need to read the following very critically and with the purpose of expanding your knowledge.

4.1.1 In most cases, a measurand Y is not measured directly, but is determined from N other quantities X1, X2, …, XN through a functional relationship f:

Y = f(X1, X2, …, XN)

NOTE 1 For economy of notation, in this Guide the same symbol is used for the physical quantity (the measurand) and for the random variable (see 4.2.1) that represents the possible outcome of an observation of that quantity. When it is stated that Xi has a particular probability distribution, the symbol is used in the latter sense; it is assumed that the physical quantity itself can be characterized by an essentially unique value (see 1.2 and 3.1.3). 

This should tell you that a measurand can be determined by a combination of other measured variables. In order to do so, you must be able to define a function that determines the value of “Y”. Something like the Ideal Gas Law where

PV = nRT or

for the equation of continuity for incompressible fluids

A1V1 = A2V2

I have asked you to provide the function you are using to determine the value of Y, and you have yet to do so.

Do some soul searching. The GUM and Dr. Taylor both deal with real physical measurements of a MEASURAND and how to determine the uncertainty associated with real physical measurements.

I suspect the best you will have for a function that defines GAT is calculating a mean or average.

THAT IS NOT DETERMINING A PHYSICAL QUANTITY OF A MEASURAND!

Consequently, none of this even comes close to applying most of the GUM or any other metrology technique for determining uncertainty. At best you are using statistics to try to prove a theory. As such, you need to ensure that you are following the assumptions necessary for these statistical calculations when computing a GAT.

The very first decision you must make is if you are using samples (i.e. stations) of a larger population or if you are using the entire population. That very much affects the statistical parameters and their evaluation.

I have also tried to get you to accept Significant Digits rules when dealing with physical measurements. You continue to treat measurements as counted numbers, they are not.

Here is an explanation from a physics course at Bellevue College. https://www.bellevuecollege.edu/physics/resources/measure-sigfigsintro/a-uncert-sigfigs/

“When computing results on a calculator we often end up with many digits displayed. Because computation itself cannot increase our measurement accuracy we must decide how many of these figures are significant and round the result back to the appropriate number of figures.”

Here is a presentation from Purdue Univ.
http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch1/sigfigs.html

“It is important to be honest when reporting a measurement, so that it does not appear to be more accurate than the equipment used to make the measurement allows. We can achieve this by controlling the number of digits, or significant figures, used to report the measurement.”

And from Washington University in St. Louis.
http://www.chemistry.wustl.edu/~coursedev/Online%20tutorials/SigFigs.htm

“Significant Figures: The number of digits used to express a measured or calculated quantity.

By using significant figures, we can show how precise a number is. If we express a number beyond the place to which we have actually measured (and are therefore certain of), we compromise the integrity of what this number is representing. It is important after learning and understanding significant figures to use them properly throughout your scientific career.”

bigoilbob
Reply to  Jim Gorman
January 18, 2022 3:14 pm

By using significant figures, we can show how precise a number is. If we express a number beyond the place to which we have actually measured (and are therefore certain of), we compromise the integrity of what this number is representing.”

True, which is why no one is doing that. But the value can be used to calculate others with more usable, statistically valid sig figs.

If we count you as one when you’re alive and zero when you’re dead, we can’t justify anything on the r.h.s. of the decimal point. But if we look at the trend of the death rate for your city, we would take that same data for all city residents and arrive at a trend with several justifiable digits to the right of the decimal point.
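A toy sketch of that point, with entirely hypothetical numbers: each resident contributes only a 0 or a 1, yet the averaged rate comes with a standard error small enough to support several decimal places.

# Toy sketch with hypothetical numbers: individual data are 0/1, but the
# averaged rate carries a standard error that supports more decimal places.
import math
import random

random.seed(0)
residents = [1 if random.random() < 0.012 else 0 for _ in range(1_000_000)]

rate = sum(residents) / len(residents)
se = math.sqrt(rate * (1 - rate) / len(residents))  # binomial standard error
print(f"death rate = {rate:.5f} +/- {se:.5f}")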

Jim Gorman
Reply to  bigoilbob
January 18, 2022 6:13 pm

Hey, I posted references that justify my assertion. All you can do is say those references don’t matter because you and others aren’t doing that.

“True, which is why no one is doing that.”

What a simplistic claim! Come on, man! How many graphs have you posted showing anomaly values in the early 20th century with 2 or 3 decimal places? You can’t get that from measurements recorded as integers!

See the attached graph. The points have at least 2 decimal places and probably 3. How do you get that from integer resolution?

Show us the Significant Digit rules from an accepted source that let you do that!

Jim Gorman
Reply to  Jim Gorman
January 18, 2022 6:22 pm

Here is the graph for the above.

[attached image: XRecorder_Edited_18012022_200934.jpg]

bigoilbob
Reply to  Jim Gorman
January 18, 2022 7:01 pm

“See the attached graph. The point have at least 2 decimal points and probably 3. How do you get that from integer resolution?
Show us the Significant Digit rules from an accepted source that let you do that!”

And predictably, you haven’t provided the data to back up your cartoon. Do so, and I will be happy to.

Jim Gorman
Reply to  bigoilbob
January 19, 2022 5:41 am

Did you not examine the graph? What does it show for the temperature in 1881? How many decimal places?

bigoilbob
Reply to  Jim Gorman
January 19, 2022 5:49 am

The scale has one decimal place to the right. But as you should know, the underlying data need not be limited to that.

Folks, first Pat Frank, and now Jim Gorman. Both with the usual reticence about actually providing data. They prefer cartoons that they can do Rorschachian interpretations with…

Jim Gorman
Reply to  bigoilbob
January 19, 2022 8:18 am

I’ll ask again! What does the graph have for the temperature value in 1881? How many decimal places? How about 1882? Are the values the same or different in the second decimal place?

Is it beyond you to say that 1881’s temp is shown as approximately -0.25, and probably even to a third decimal place?

bigoilbob
Reply to  Jim Gorman
January 19, 2022 8:37 am

“Is it beyond you to say that 1881’s temp is shown as approximately -0.25, and probably even to a third decimal place?”

That was exactly my point. I have no idea what you’re getting at. If you’re saying that the temperature-measuring processes behind individual 1881 measurements make this many r.h.s. decimal places impossible, I say audit Engineering Statistics 101 at your local CC. The spatially weighted average of enough of those individual measurements, with known error bands, can easily justify more of those significant figures than the individual measurement(s). You seem hysterically blocked on this fundamental truth, understood for over a century. I can now better understand what patient Bellman has been tutoring you on, over and over.

Or are you just confusing me with another poster…?

Jim Gorman
Reply to  bigoilbob
January 19, 2022 9:28 am

I answered your post.

“The spatially weighted average of enough of those individual measurements, with known error bands, can easily justify more of those significant figures than the individual measurement(s). “

You can not do this. Please show a reference from a certified lab or University that allows what you assert. Better yet, provide a rule as to how the number of decimal places is decided upon.

Quality control people, certified labs, and machinists all understand this isn’t possible. If it were, higher and higher precision measuring devices would not be needed.

Why do you think the NWS spent billions changing thermometers from LIG to higher resolution devices?

I’ll give you an example that quality control people would be familiar with. Let’s say you process 10,000 rods of a length that is required to have a 0.01 mm tolerance.

Your proposal is to measure each one and find the average. That would be wrong. The machine, or worse machines, making the rods as they wear could be making rods that are further and further out of spec but if the errors were random, your average would still be ok. Sooner or later, you would find out from customers that your product will no longer work for them.

In fact, by using your process, you could even use measuring devices that don’t have sufficient resolution to precisely measure the individual rods since you say you can increase precision by averaging multiple measurements of different rods.

Quality people know that you must sample sufficient individual rods and measure each accurately, i.e., with enough resolution to verify they meet specs, to ensure all the rods meet requirements. You simply can’t increase precision by averaging. Errors can grow and you’ll never know it by averaging.

bigoilbob
Reply to  Jim Gorman
January 19, 2022 9:36 am

The machine, or worse machines, making the rods as they wear could be making rods that are further and further out of spec but if the errors were random, your average would still be ok.”

You’re confusing/conflating the standard error of any one measurement with the loss of accuracy versus repeated operation, from equipment wear. It’s no wonder that you fail to understand basic statistical laws.

Jim Gorman
Reply to  bigoilbob
January 19, 2022 1:18 pm

I am confusing nothing. I am trying to show you that measurements of different things cannot be used to increase precision.

There is no loss of accuracy in measurements in my example. The measuring devices are not what wears and changes. It is the rods themselves that are changing.

Just what statistical laws has this example violated? It is a real world example of quality control and the statistics needed to maintain quality. Please refute what I have given with some statistical laws.

bigoilbob
Reply to  Jim Gorman
January 19, 2022 1:51 pm

“The measuring devices are not what wears and changes. It is the rods themselves that are changing.”

The loss of accuracy here comes from treating the run as a constant value, instead of a trend of changing values. Ever since we began making interchangeable parts, we noted this process and accounted for it. The average size of each run is:

  1. Somewhere between its initial and final values.
  2. Known.
  3. Substituted as the expected value, instead of what you mistakenly characterize as that for the first rod made in that run.

With more random sampling you will converge closer and closer upon that correct expected value. Also, if you include subsequent runs with new cutting tools between runs, the convergence will continue. To wit, if you sampled 1000 items, your convergence on the actual average value would be 10× closer than if you just sampled 10. Hence the justification for that extra sig fig.

bigoilbob
Reply to  Jim Gorman
January 19, 2022 9:49 am

“Better yet, provide a rule as to how the number of decimal places is decided upon.”

It would be a generalized formulation of the statistical rule that the sum of the variances is equal to the variance of the sum. For measurements with equal standard deviations, 100 or more of them would justify another sig fig for their average. But it would also apply to unequal standard deviations from differing measurement mechanisms, different distributions, whatever. Anything that reduced the averaged standard deviation by a factor of 10 or more below the smallest standard deviation of any datum within the averaged data set would justify it. Same for the next sig fig, with the minimum number of required data points at 10,000.

Do you see what I’m doing? Yet?

Jim Gorman
Reply to  bigoilbob
January 19, 2022 1:12 pm

Word salad with no hard and fast rules or no math.

Show some valid references that confirm the math you are espousing.

bigoilbob
Reply to  Jim Gorman
January 19, 2022 1:37 pm

30 seconds I’ll never get back. The only addition here is noting that once, when sampling without replacement, the standard deviation of the average becomes an order of magnitude lower than that of the datum with the smallest one – if they are not identical – that justifies another sig fig. There is an engineering term for this: “by inspection”.

Ignorance isn’t the problem here. It’s more the willful effort to remain so….

https://www.investopedia.com/terms/l/lawoflargenumbers.asp#:~:text=The%20law%20of%20large%20numbers%2C%20in%20probability%20and%20statistics%2C%20states,average%20of%20the%20whole%20population.

Jim Gorman
Reply to  bigoilbob
January 20, 2022 6:39 pm

Once again you are dealing with numbers not measurements.

No one is going to believe your assertions without some references.

I have provided numerous references from well-known universities supporting mine. You need to do the same and provide some references of your own.

bigoilbob
Reply to  Jim Gorman
January 20, 2022 7:04 pm

I have provided numerous references supporting mine from well known Universities. You need to do the same to provide some references of your own.”

I agree with everything that your references say, because we are saying the same things. None of them, however, discusses significant figures w.r.t. standard deviations.

Here’s one that does.

https://www2.chem21labs.com/labfiles/jhu_significant_figures.pdf

Here’s an excerpt:

Although the maximum number of significant figures for the slope is 4 for this data set, in this case it is further limited by the standard deviation. Since the standard deviation can only have one significant figure (unless the first digit is a 1), the standard deviation for the slope in this case is 0.005. Since this standard deviation is accurate to the thousandths place, the slope can only be accurate to the thousandths place at the most. Therefore, the slope for this data set is 0.169 ± 0.005 L K-1. If the standard deviation is very small such that it is in a digit that is not significant, you should not add additional digits to your slope. For example, if the standard deviation in the above example was two orders of magnitude smaller, you would report it as 0.1691 ± 0.00005 L K-1. Note that here the slope has its maximum number of significant digits based on the data, even though the standard deviation is in the next place. 

If you look at the figures above this excerpt in the link, you can see how he demonstrated increasing the number of significant figures in a slope when its standard deviation was small enough to justify it. The same rule applies to averaging. Since standard deviations tend to drop when averaging more data, whenever they get small enough that their first digit moves one place to the right, the average measurement may be reported with one more sig fig.

Balls in your court. Feel free to provide anything that rebuts this.

bigoilbob
Reply to  bigoilbob
January 20, 2022 7:21 pm

Oh, BTW, in spite of Pat Frank’s radio silence, I found a non-paywalled version of his 2010 paper. His bottom line appeared to be a constant standard deviation of 0.46 degC in every annual reading from 1880 to present.

The good news for Pat is that his singular error bars, replicated by no one, indeed raised the standard trend errors: by a factor of ~5 for the 1980-on data, and by a factor of ~4 for the earlier data. The bad news for the good Dr. is that they still left us a 1980-2018 slope (newer data used to avoid being accused of not using “pause” data) of 1.75 deg/century with a standard error of 0.69 deg/century, versus a pre-1980 slope of 0.36 deg/century with a standard deviation of 0.17 deg/century. Put it all together and the chance that the change in trend was zero or less is all of 2.5%. The chance that the change in slope is >1 deg/century is 70.5%.
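A minimal sketch of the arithmetic behind those two percentages, treating the quoted slope uncertainties as standard errors of independent, normally distributed slope estimates (my assumption, not stated above):

# Sketch of the trend-change arithmetic above, treating both quoted slope
# uncertainties as standard errors of independent normal estimates (assumed).
from math import erf, sqrt

slope_new, se_new = 1.75, 0.69   # 1980-2018 slope, deg/century (from the comment)
slope_old, se_old = 0.36, 0.17   # pre-1980 slope, deg/century (from the comment)

diff = slope_new - slope_old
se_diff = sqrt(se_new**2 + se_old**2)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(norm_cdf((0.0 - diff) / se_diff))        # P(change <= 0)            ~0.025
print(1.0 - norm_cdf((1.0 - diff) / se_diff))  # P(change > 1 deg/century) ~0.71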

Those Rorschachian eyeballs might need some checkin’….

bigoilbob
Reply to  Pat Frank
January 19, 2022 6:02 am

Gosh Pat. Four days now and nada from you. What does it take to get you to provide usable y-scale distribution data for a 10-year-old cartoon that you presented?

I’m not even saying that your data-free “hopelessly lost within the uncertainty envelope” claim is incorrect. I’m just interested to see if it bears normal scrutiny. What’s the harm?

Carlo, Monte
Reply to  bigoilbob
January 19, 2022 6:45 am

You are in no position to make demands, bighatsizeblob.

bigoilbob
Reply to  Carlo, Monte
January 19, 2022 6:58 am

What “demand”? I can’t “demand” anything from Pat Frank. He has a comfortable university/government sinecure from which he is functionally fire proof.

Dr. Frank should be tickled pink to provide the data that backs up his assertion. Unless…..

Carlo, Monte
Reply to  bigoilbob
January 19, 2022 7:27 am

Have a nice day, blob.

Pat from kerbob
January 15, 2022 11:00 am

With upcoming reversals of ocean cycles and likely decreasing temperatures, it occurs to me that we should start watching for the adjustment bureau to begin removing the adjustments they have made to the last 40 years of records that increased the temperatures, saying they have a new algorithm or have “fixed” the old one, thereby cooling the recent past so that, by magic, there is no current cooling.

Am I overthinking it?
Clearly cannot trust climate Scientology?

Pat from kerbob
January 15, 2022 1:24 pm

Griff, a question.

If this year was a tipping point in extreme weather events, and it turns out that it was cooler, doesn’t that mean it’s cooling that is the danger?

Paul Blase
January 15, 2022 1:30 pm

Collectively, the past eight years are the warmest years since modern recordkeeping began in 1880. 

ok, and the problem is …?
1880 is at, or nearly at, the end of the “Little Ice Age”; George Washington hauling cannon across the Potomac and all that.
