The Unstoppable Momentum of Outdated Science (rogerpielkejr.substack.com)
210 points by zmaten on March 8, 2021 | 132 comments


This is a good read, although somewhat depressing.

When you have a topic that has been weaponized politically, as climate change has, people who use that weapon actively work against information they perceive would make the weapon less effective. The author sums it up as follows:

---- from the article:

According to Rayner, in such a context we should expect to see reactions to uncomfortable knowledge that include:

denial -- (that scenarios are off track),

dismissal -- (the scenarios are off track, but it doesn’t matter),

diversion -- (the scenarios are off track, but saying so advances the agenda of those opposed to action) and,

displacement -- (the scenarios are off track but there are perhaps compensating errors elsewhere within scenario assumptions).

---- end include.

Not enough people understand that science is expected to be wrong, and to converge over time to the correct answer. That it is okay to say "We used to think this, but these studies/experiments have shown that understanding to be incorrect. Now we believe this." That is how science works. And sadly, when it is politicized, it becomes much harder to have conversations about it.


The entire opening of the article, discussing papers from 2020 that erroneously use skin cancer data in breast cancer research, an error known for at least 13 years, makes the point very strongly that "... but these studies/experiments have shown that understanding to be incorrect. Now we believe this." happens much more slowly than people perhaps expect.

I understand science "is expected to be wrong" sometimes, but I'm still somewhat boggled that there are 13 years' worth of (presumably) peer-reviewed papers making claims about breast cancer using "wrong science" that was proven wrong in 2007...

(I wonder what _else_ all those peer reviewers let slide or didn't know when reviewing other papers?)


Actually, the cell line issue might not be so bad, or rather, all cell line work is much worse than you all imagine. No cell line actually resembles the real tissue in gene expression if you, say, do an RNA screen and see what actually gets expressed. The cells are changed by the line-establishment process, the use of various media, the number of passages (i.e., how long they've been in culture from thawing a line until the test; it can't be too little or too much, since the line needs to recover after being frozen), the person who's culturing (some people tend to starve cells and passage full plates, and have different parameters of work even despite some lab-wide protocols), adherence to the plate and growing in a flat monoculture, and getting plates out too often or watching them too long under the microscope.

Some effects are temporary and not so serious. Some effects are rather serious, like the fact that flat-surface cell lines will never be a good proxy for an actual tumor growing in 3D in an actual human. That's why people experiment with spheroids, multicultures with other cells, organoid cultures, and animal models, and eventually all the clues from all the models add up to something that could be a theory of some mechanism. And then, in most cases, when applied to humans in real life it might not work, or might be harmful.

Now, the skin cell line was likely arrayed and had features similar to breast cancer cells, so it might have been okay to use for this super-imperfect approximation; I'm not sure about this particular instance. In vitro work is only a part of it, and peer-reviewed journals differ in their authority ratings, with big-boss journals like Cell requiring you to deliver crazy amounts of different models to make any statement. Reading those papers is like reading house ads, scientists kind of learn to cut through bs


I agree with the sentiment expressed here.

> Reading those papers is like reading house ads, scientists kind of learn to cut through bs

In addition to recognizing general BS, in any given field the papers come with an enormous block of implicit fine print. An important part of being an expert in a particular field is knowing what that fine print is.


> I wonder what _else_ all those peer reviewers let slide or didn't know when reviewing other papers?

Peer review is only one layer for identifying errors. Some things will pass through it. Then there will be retractions if the paper proves to be flawed, but, for that to happen, someone needs to identify the flaw and reach out to the authors.

It's a slow process.


13 years for turnaround on bad results isn’t abysmal in my view. Science is hard and will take a long time.

Getting people used to that seems impossible though.


And this, of course, is why double blind clinical trials are so incredibly important.


> Not enough people understand that science is expected to be wrong.

The propaganda for making Science the new-religion is far too advanced sadly - even the classic development of heliocentric model is so riddled with misinformation that it's hard to have much hope for much else.

The effect of all this is that it brings to the fore all the forgotten protestant templates in the form of the vitriolic "anti-science" and "pro-science" communities. One fringe punishes any criticism of the establishment view viciously (the epidemiologists & medical experts who weren't for lockdowns comes to mind), and the other takes this as evidence for "science" being compromised.

The Anglo-Saxon world is too far gone IMO. I hope this at least allows the peripheral colonial vassals like India to finally break free from their destructive clutches... but even that seems unlikely now.


> One fringe punishes any criticism of the establishment view viciously (the epidemiologists & medical experts who weren't for lockdowns comes to mind),

This is rewriting history. At the start of the pandemic, the position that "lockdown is a bad idea" was taken very seriously. Only when it failed to provide results, and some people still clung to it, did they get called out. That's literally the opposite of what the article describes (holding on to outdated science because of "momentum").


Even expecting science to 'converge' to the correct answer is a bit iffy. Don't get me wrong: science should definitely be expected to improve and bring forth better theories and models, but accuracy isn't the only performance metric that's relevant here. The words 'converge' and 'correct' are doing a lot of heavy lifting when you claim science 'converges' upon the 'correct' answer.


I think the problem is that too many people in government and science believe in the "noble lie".

"Yes, we don't actually believe our prediction, but what does it matter if it's exaggerated? We're doing it in the name of XXX which is a great cause."

Sadly, when those predictions fail to materialise, they become the science equivalent of the Boy who Cried Wolf.


This reminds me of a talk I watched about research into human pheromones.

The speaker talked about how literature and studies on pheromones all lead back to one or two old studies that were basically considered bunk, but researchers kept citing them, and other researchers would also cite studies that linked back to the original low-quality ones. His point was that the whole field of research into human pheromones needs to blow away all the literature up to this point and start over, because the whole thing is so tainted, and few are willing to actually sort out what's true or false in what currently exists because so much of it is self-referential.

It wouldn't surprise me in the least that there are other areas of science that border on needing to start from scratch.

EDIT: I believe it may have been this talk, but I don't have time to scan the whole thing just yet. It looks like he begins talking about the problematic research after ~17 minutes. It's still worth watching nonetheless:

https://www.youtube.com/watch?v=zLENtzBbXdY


It would surprise me if there were any areas of science that were not heavily citing falsified papers.

We are perhaps fortunate that most cited papers are not read.


Mathematics and the non-ML parts of computer science should be OK, I guess.

Everything that doesn’t involve data, maybe ? I’m making this on the fly.


I call it "The founder effect" of bad studies. Essentially the first few studies set the tone for the rest of the field, and it's very hard to correct the course due to the momentum. I'm glad people are starting to notice.


The graph shows emissions trajectories projected by the most commonly used climate scenarios (called SSP5-8.5 and RCP8.5, with labels on the right vertical axis), along with other scenario trajectories. Actual emissions to date (dark purple curve) and those of near-term energy outlooks (labeled as EIA, BP and ExxonMobil) all can be found at the very low end of the scenario range, and far below the most commonly used scenarios.

RCP 8.5 was intended as the highest-emissions scenario, not the most likely scenario.

"RCP 8.5—A scenario of comparatively high greenhouse gas emissions (2011)"

https://link.springer.com/article/10.1007/s10584-011-0149-y

This paper summarizes the main characteristics of the RCP8.5 scenario. The RCP8.5 combines assumptions about high population and relatively slow income growth with modest rates of technological change and energy intensity improvements, leading in the long term to high energy demand and GHG emissions in absence of climate change policies. Compared to the total set of Representative Concentration Pathways (RCPs), RCP8.5 thus corresponds to the pathway with the highest greenhouse gas emissions.

Even 10 years ago, when this paper was published, it was apparent that technology and energy intensity were improving more than the RCP 8.5 scenario posited. I don't know if RCP 8.5 has since been misused as a most common scenario, but the original post quoted above doesn't quantify that either.


From the article :

"For instance, O’Neill and colleagues find that “many studies” use scenarios that are “unlikely.” In fact, in their literature review such “unlikely” scenarios comprise more than 20% of all scenario applications from 2014 to 2019. They also call for “re-examining the assumptions underlying” the high-end emissions scenarios that are favored in physical climate research, impact studies and economic and policy analyses. As a result of such high prevalence of such studies in the literature, they are also the most commonly cited within scientific assessments of the Intergovernmental Panel on Climate Change. O’Neill and colleagues find that the highest emission scenarios comprise about 30% of all applications in studies over the past five years, from a family of 35 different scenarios that they surveyed."


This sounds like good science being done. When you are trying to predict something like carbon emissions, standard practice is to run a few different scenarios. If you run 5 models, one with predicted CO2 and the others with higher and lower values, of course some of your scenarios will be unlikely. You're doing that on purpose, so that you have the ability to see how wide the range of possible results is even with uncertainty in CO2.


From the article:

"Correspondingly, the authors “recommend establishing a process for regular updates” to the scenarios and recommend that key variables in the scenarios “be updated now to be consistent with new historical data.”"

So the author is explicitly saying different scenarios should be used, just that they should be updated regularly.


Looking at CO2 measurements, I don't see much sign that the rate of increase is slowing down: https://climate.nasa.gov/vital-signs/carbon-dioxide/. Do the current models assume it will accelerate even faster?


The article is talking about annual CO2 emissions, not total CO2 levels. CO2 levels and their rate of increase are a lagging indicator; it takes many years for a reduction in emissions to show up in CO2 levels.

The models all assume that CO2 emissions will continue to increase; the main difference in the assumptions is how fast emissions are assumed to increase.


Moreover, plenty of scenario work predicts plenty of bad climate effects for RCPs and SSPs tighter than RCP8.5. By any estimation, the results of RCP8.5 would be disastrous.


The emission scenarios were developed decades ago and are proving to be over-estimates. The author interprets that as outdated science which is a rather pessimistic view.

I interpret it differently: Scientists make a spread of climate predictions and their potential consequences. Those listening mustered significant market/public/government pressure and ... it worked. We pushed the outcome towards the low end of the predictions. This is good news and demonstrates the efficacy of political action based on science.

If those predictions are correct, between RCP2.6 and RCP4.5 is not a bad place to be. Climate change is still locked in but life on earth has a chance of adapting to it without major upheaval. RCP8.5 otoh, if it were to happen, we'd be screwed.


Having consulted a graph [0] of coal production in the UK, I find it a difficult sell to argue that there have been any policy changes in the last century.

It seems likely oil will follow the same trajectory. We've been due to hit the global peak sometime soon, more oil has been consumed than discovered for a while now.

If people persist in using the stuff until it runs out, it is hard to see what the effect of all the pressure was.

[0] https://ourworldindata.org/death-uk-coal


I don't understand your comment - there have been many policy interventions that have changed coal consumption in the UK in the last 100 years. There is a large amount of coal still available for mining in the uk - but lack of demand and low prices coupled with high extraction costs have destroyed the industry.

- The Clean Air Act made it illegal to burn many forms of coal and required coal producers to use higher standards. Many areas were banned from coal burning of any sort. This drove the adoption of gas central heating.

- Safety measures introduced after many mining tragedies drove up the costs of the industry significantly.

- The attack on organised labour in the 1980's resulted in the shuttering of large parts of the UK coal industry.

- The development of the EU single market drove the importation of cheap coal from Germany and Poland, dealing a further blow to the domestic industry.

- Many initiatives on home insulation drove down utilization.

But you may be making a point I just haven't understood - can you clarify?


> The attack on organised labour in the 1980's resulted in the shuttering of large parts of the UK coal industry.

My understanding is that Thatcher closed down coal because production was uneconomical and she didn't want government coffers propping it up. Your use of the word "attack" is emotionally charged, as if Thatcher was just doing it for evil lulz.


>My understanding is that Thatcher closed down coal because production was uneconomical and she didn't want government coffers propping it up. Your use of the word "attack" is emotionally charged, as if Thatcher was just doing it for evil lulz.

I don't think she was doing it for "evil lulz", but it was clearly an attack: the government had developed the "Ridley Plan" some years before to prevent and curtail the power of organised labour in the UK. The motivation for this was the capitulation of the Heath government to the NUM in 1972 and the fear of the use of strike action as a military weapon by the Soviet Union; the British unions had extensive ties to the Soviets. The industry wasn't uneconomic, but the installation and development of subsidized oil and nuclear plants and the willingness of the British to import coal made it uncompetitive. There was a prospect of another 50 or 70 years of deep-mined coal for Britain (a highly undesirable prospect), but there is an alternative universe in which the NUM won and we would be trying to deal with this now.


They'd been in a steady linear downtrend of coal production for more than a century. Then they stopped producing the stuff roughly on schedule for a prediction made in 1970 or 1980.

Occam's razor here is they stopped producing coal because they ran out of coal, rather than because of any political will.

> There is a large amount of coal still available for mining in the uk - but lack of demand and low prices coupled with high extraction costs have destroyed the industry.

That is what running out of coal looks like. Based on the 100 year trendline, Occam's razor is they can't sustain their lifestyle using coal. It wouldn't surprise me if they struggled even with new forms, their energy use figures [0] are eye watering.

[0] https://en.wikipedia.org/wiki/Energy_in_the_United_Kingdom#O...


> they stopped producing coal because they ran out of coal

"The UK has identified hard coal resources of 3 910 million tonnes, although total resources could be as large as 187 billion tonnes."

https://euracoal.eu/info/country-profiles/united-kingdom/


I don't get this - is the reduction in energy consumption per head a bad thing in your eyes?

And Occam's razor states "the simplest theory that fits the evidence": there is extensive evidence that the UK has large reserves of coal and could have continued to exploit them for many decades. However, for political reasons, it didn't.


"Discovered" is a pretty flexible term. Geologists will only go out to look for the stuff if there's an expected return on extracting it. Similarly, "reserves" means stuff that (a) we know about and (b) that we think can be extracted economically. In effect, it's tautological to say that we'll "use the stuff until it runs out", because at whatever point we stop using it there'll still be some left that's just slightly too difficult to extract (given, hopefully, alternatives and taxes) for it to be worth it.


But the problem is that they are continuing to use the outdated models for current science.


What exactly was done because of pressure? More solar cells and wind mills? Have those even crossed the point where they produced more energy than what was used in their production yet? In my country, they are shutting down nuclear power plants, relying on coal instead. Electric cars are still a rarity, and global car use seems to be rising still.

I find your line of reasoning maddening. While it is possible that pressure had an effect, this kind of reasoning can be used to prove almost anything: it rained after the rain dances, so rain dances work!


Counter anecdote: South Australia had almost 100% of its energy from coal burning just a few years ago... Today, it's nearly entirely solar-powered: https://www.abc.net.au/news/2020-10-25/all-sa-power-from-sol...

Another one: in Norway, most cars being sold right now are electric: https://www.voanews.com/europe/norway-says-more-50-new-cars-...

These are huge local achievements, but it shows that at least in some parts of the world, things have changed very rapidly in the right direction.


Australia has a lot of land and few people. Norway pays its citizens to buy electric cars with the money they make from their oil fields. So I guess the oil is still being burnt.

Not saying it is not nice progress, but I am not convinced those are making a dent.

Overall I think technology also progresses without government pressure or climate panic. For example batteries became cheaper because people don't want to charge their phones so often. In general people like more efficient technology and appreciate an unpolluted environment.


Worth pointing out with the Norwegian one as well that almost all of their electricity is renewable. So those cars aren’t just going from oil to coal but oil to renewables.


> Have those even crossed the point where they produced more energy than what was used in their production yet

Yes, payback time is quite short.


> What exactly was done because of pressure?

In the past 2 decades:

* Big business on the whole went from ignoring or denying climate change to embracing solutions.

* Innovation in alternative energy tech and its production has driven prices for non-carbon energy down to parity with coal.

* The political world has gone from ignoring it to signing on to international support for climate agreements.

* People are aware of the issue and it's a central issue to billions of voters globally.

* Carbon capture and other methods of reducing atmospheric carbon are being developed and scaled up significantly.

> I find your line of reasoning maddening

I can't prove causality but any good faith scientific argument needs to at least consider the hypothesis that the efforts of the last two decades might have had an effect. Simply ignoring the changes and saying that the original predictions were "wrong" is not a good faith argument, let alone a scientific one.


I don't think the article was about pointing out that the original predictions were wrong. It was about the need to adjust them according to actual developments.

About the other points: it is possible that politics had some effect, but I personally rather doubt it. The general assumption that businesses would just waste energy is incorrect imo. There is always an incentive to become more efficient.


> Have those even crossed the point where they produced more energy than what was used in their production yet?

Yes, and they have since the 1980s. Pardon the snark, but did you try to illustrate the article's topic?


The wind mills that were built in the '80s have already been scrapped; afaik they only last 20 years.

How long does it take for a modern wind mill to produce as much energy as it took to produce it?

Edit: a lot of estimates of 6 to 9 months seem to be floating around, but this one cites at least several years for economic break-even (not exactly the same as CO2 break-even, but perhaps an approximation): https://weatherguardwind.com/how-much-does-wind-turbine-cost...

Of course, pick the estimate that suits your ideology!


"We examine 119 wind turbines from 50 different analyses, ranging in publication date from 1977 to 2007. We extend on previous work by including additional and more recent analyses, distinguishing between important assumptions about system boundaries and methodological approaches, and viewing the EROI as function of power rating. Our survey shows an average EROI for all studies (operational and conceptual) of 25.2 (n = 114; std. dev = 22.3)." https://www.sciencedirect.com/science/article/abs/pii/S09601...

Even a quick glance at Wikipedia would have helped: https://en.wikipedia.org/wiki/Energy_return_on_investment#Wi...

TL;DR: Wind Mill EROI is at least about 20, i.e. they produce about 20x the energy that production and maintenance consume.
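To make the EROI-to-payback conversion concrete, here's a back-of-the-envelope sketch in Python (the 20-year lifetime and the EROI figures are assumptions taken from the discussion above, not measurements):

```python
def payback_months(eroi: float, lifetime_years: float = 20.0) -> float:
    """Months until a turbine has returned the energy embodied in building it.

    EROI = lifetime energy output / energy invested, so the invested
    energy is recovered after (lifetime / EROI) years.
    """
    return lifetime_years * 12.0 / eroi

print(payback_months(25.2))  # survey-average EROI -> ~9.5 months
print(payback_months(20.0))  # conservative EROI of 20 -> 12 months
```

Note this is energy payback, not economic payback, which is why it can come out far shorter than the multi-year financial figures quoted elsewhere in the thread.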


Wikipedia doesn't really prove anything, and I already mentioned that there are many claims of 6 to 9 months. Your paper seems to suggest 12 months (assuming a lifetime of 20 years).


When something becomes widely known as a proxy to measure an effect, it ceases to be a useful measure, because people begin to consciously influence it. It's a well-known thing (Goodhart's law).

When scientific results become a proxy to the correctness of a political agenda (which is often a business interest in disguise), they become less and less scientific.

It's a perfect catch-22.

The same sadly applies to the internal tools of science: impact factor, citation index, discovery of a novel effect, etc. There are a lot of incentives to game these metrics, and gaming them is, well, not unheard of.

In applied research there is more of a reality check: if the device based on your research obviously does not work, it cannot be produced and sold, so you want your research to reflect reality as best you can. But more subtle things, like public policy, lack this immediate feedback loop. So there is a large hazard for a researcher, when asked: "what is the value of 2 × 2 ?" to ask back: "are we selling or buying?" and find a plausible "right" answer that supports a desired case.

But, unlike in elementary arithmetic, nobody knows the certain right answer in many areas of natural science, so an honest mistake is hard to tell from a... less honest one. There must be colossal pressure to exploit this in a situation of a high-stakes political choice.

I have no idea how to solve these problems. But at least the public should be aware that these problems exist.


I agree with you and I want to add one more layer. I program analytics for a living. I often find that people fudge numbers without even being conscious of it. If I know my boss wants the numbers on this report to be high, I'll unconsciously choose an algorithm that gives higher numbers than one that gives lower numbers. Even if both algorithms are "reasonable".
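A toy sketch of how this happens in practice, with entirely made-up numbers: two defensible definitions of an "average conversion rate" give different headline figures from the same data, and nothing flags which one you reached for:

```python
# Hypothetical daily analytics data, invented purely for illustration.
daily_conversions = [3, 5, 4, 40]
daily_visitors    = [100, 100, 100, 200]

# Definition A: mean of the daily rates (each *day* weighted equally).
rate_a = sum(c / v for c, v in zip(daily_conversions, daily_visitors)) / len(daily_visitors)

# Definition B: pooled rate over the period (each *visitor* weighted equally).
rate_b = sum(daily_conversions) / sum(daily_visitors)

print(f"daily-mean: {rate_a:.1%}, pooled: {rate_b:.1%}")  # 8.0% vs 10.4%
```

Both are "reasonable", and if the boss wants a bigger number, it's easy to drift toward definition B without ever consciously deciding to.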


The text says one thing, but the graph they display to support their argument does not say what I thought it said.

The purple line is supposed to be our "real" emissions, and the impression you get from the image and text is that our real emissions are much lower than the projected emissions.

But if you actually look at the values of the purple line vs the other lines now, in 2020/2021, the values are almost exactly the same.

I don't understand how to reconcile that with the text.

https://cdn.substack.com/image/fetch/w_1456,c_limit,f_auto,q...


Great! So it's not just me! As I understand the paper, their main point seems to be that as you extrapolate out in time, you need to not just handle the physics of the CO2 that is already in the atmosphere, but also predict, say, how much populations will grow, how much GDP will grow, and how CO2 emissions will be affected by that. So, say, if there is an economic recession, then there will be less CO2 produced than if you had predicted steady growth. This seems to be a case where I could see different assumptions (some reasonable) leading to different results; it's a far cry from, say, the breast-cancer-vs-skin-cancer mistake that he leads with.


I think the point is that the actual emissions are on the low side of the IPCC scenario bounds, and are tracking to exit the IPCC scenario bounds entirely around 2021.

Full paper in PDF format here btw: https://iopscience.iop.org/article/10.1088/1748-9326/abcdd2/...


I don’t understand the graph at all. Which line is the purple line? Is it labeled?

It seems like real emissions should end at or before 2020 and none of them do that. They all seem to be projections of some sort?


There is a blueish wider line which is below all the others, shows the effect of the 2008 recession, and splits into multiple forecasts around 2019.


Good catch. -1 point to me for not staring hard enough at the x axis also.


In this case it kind of seems like what you'd want to happen is happening: we've been warned about the dangers of climate change under "business as usual" scenarios, so we're not carrying on "business as usual", but reducing our CO2 emissions.

So the question then is quite fundamental: what does it mean to describe a "business as usual" scenario that extrapolates from today? Does that mean carrying on doing what we're currently doing, or does it mean to stop caring about CO2 emissions and go back to "business as usual" ten years ago?


Now we must imagine what the impact on policies would be of prematurely saying we managed to cut emissions more effectively than predicted.


But what's the alternative? Ignoring the fact that our data is based on wrong assumptions? Acting as if nothing has changed?

Sure, this is not good in the short term. But this is already a smoking gun for climate change deniers; if we were to sweep this under the rug for another decade, it would become a smoking nuke. And imagine the impact of that.


We know the assumptions are incorrect. Based on that, do we change course or not?

My understanding is that we are reducing emissions faster than predicted and will hit our politically agreed targets sooner than we expected. The political decision to change targets at any given point will be revisited based on what we understand at that time.


Note that Roger A. Pielke Jr. is a political science professor, not an atmospheric scientist, and has in the past been involved in a number of climate science controversies. The analogy to outdated breast cancer research feels like a straw man. Anyway, I cannot claim that he is wrong, but I would regard this piece with some skepticism.


Somehow, 'feels like' and 'some skepticism' don't add up to much of a critique.

He says: "Evidence indicates the scenarios of the future to 2100 that are at the focus of much of climate research have already diverged from the real world and thus offer a poor basis for projecting policy-relevant variables like economic growth and carbon dioxide emissions."

The question is: is he right or wrong, and if the latter, why? We're spending trillions of dollars and affecting people's lives on a massive scale, betting that he's wrong. The issue at least needs discussion.

One other point: How would atmospheric scientists be especially competent to assess 'key variables in climate scenarios compared with data from the real world'? Would they really want to skip their core science to put economic hats on to study how 'population, economic growth, energy intensity of economic growth and carbon intensity of energy consumption' relate to these key variables? Pielke's competence is in this field and climate policy generally. Some of his publications: https://rogerpielkejr.com/pielkeonclimatechange/
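For what it's worth, the four variables listed there are exactly the terms of the Kaya identity, which is the standard way emissions scenarios are decomposed. A minimal sketch (all input figures are invented round numbers for illustration, not actual scenario data):

```python
def kaya_emissions(population, gdp_per_capita, energy_intensity, carbon_intensity):
    """Kaya identity: CO2 = P * (GDP/P) * (energy/GDP) * (CO2/energy)."""
    return population * gdp_per_capita * energy_intensity * carbon_intensity

# Invented round figures: 8e9 people, $12k GDP per capita,
# 5 MJ of energy per $ of GDP, 0.06 kg CO2 per MJ -> kg CO2 per year.
emissions_kg = kaya_emissions(8e9, 12_000, 5.0, 0.06)
print(emissions_kg / 1e12)  # -> approximately 28.8 Gt CO2/yr
```

A scenario diverges from reality when the assumed trend in any one factor is off, e.g. carbon intensity falling 1%/yr in the real world while a high-end scenario holds it flat, which is why getting these trends right matters as much as the atmospheric physics.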


> Somehow, 'feels like' and 'some skepticism' doesn't add up too much of a critique.

This is not helpful. Pielke has been heavily involved in one side (and only one side) of the political debate on this topic. It's dishonest to pretend that information isn't relevant in the context of a post like this. He's not a neutral academic that decided to look at the evidence and was shocked to see issues.

> How would atmospheric scientists be especially competent to assess 'key variables in climate scenarios compared with data from the real world'? Would they really want to skip their core science to put economic hats on to study how 'population, economic growth, energy intensity of economic growth and carbon intensity of energy consumption' relate to these key variables? Pielke's competence is in this field and climate policy generally.

While technically this is correct, it's deeply misleading (or whatever is the next step above that). Economists study those issues and have their own journals for publishing those results. Pielke, on the other hand, is trained as a political scientist. Trying to make it sound like he's the only one with the qualifications is, as they used to say, not cool.


Nowadays, the scarce resources are humility and intellectual courage, not qualifications. This is especially true when looking at public policy that is guided by scientific results, because the science often needs to take a back seat to more human concerns. It is easy to fall into the trap of thinking that the science is all that matters, but for every scientific paper published on a subject there are an infinite number of unpublished papers and unexamined data sets. See also Frédéric Bastiat's essay "That Which Is Seen, and That Which Is Not Seen".


The article is probably right about the bare facts, but it seems much like one of those that insist that Y2K was a big nothing, and that the Population Bomb fizzled.

In each case, there was a warning that was followed by a phenomenal amount of hard work in back rooms to head off disaster. Disaster was headed off successfully, and then the people who had warned about it were criticized for their failed prediction. In the case of population, we were very, very lucky that the Green Revolution turned out to be possible. There was no reason, beforehand, to expect such an idea to work well enough. (The Green Revolution, with its dependence on fossil-fueled fertilization, has ironically contributed to the present crisis.)

Would CO2 emissions have continued on one of the higher curves, without enormous investment in solar and wind power systems, and in more efficient usage? Counterfactuals are hard to prove, but I don't see any reason to think not.

What I notice about the graph is that lines below current and projected emissions, which would identify the behavior needed to prevent readily predictable disaster, are omitted.

That current climate research must compare improved methods using the same reference numbers that previous papers used, despite those numbers being now obsolete, seems like a small problem compared to others faced in that field, such as the still unpredictable behavior of cloud cover in different scenarios, or of ocean currents.


Exactly. It is like your physician telling you that if you keep drinking, your liver will fail. And a year later you proclaim: this guy is so unscientific. I stopped drinking, so how come his prediction didn't come true?


I'm not sure where this line of thought is headed. I once had to work with the HeLa cell line, coating it in alginate polysaccharide. I thought to myself, "How can I test if the functioning of the cells has been altered by the coating?" A little book research convinced me that any tests would be useless, as a cancer cell has basically shut down all functions except those required for growth. So I just worked on trying to improve the coating, and continued staring through the microscope at the somewhat terrifying malignant growth.

As regards climate studies: where have we heard the term "peak" something-or-other before? Ah yes, "peak oil". Well that didn't happen: we found more oil, we substituted gas, ... "Peak CO2": I nearly choked on my morning biscuit. Please tell China and India and Africa to stop developing and remain underdeveloped. Come on man, are you on drugs? Maybe, when a few billion have been displaced from their homes and invaded our comfortable lifestyles, we all might decide to stop driving and generating power with coal, oil and gas.

I remember the time at uni 20 years ago when the power failed. The cafeteria went dark, the fridges didn't work, the heated food display cabinets didn't function, the cash registers were useless and the cashiers had to write down amounts with pen and paper. When our grandchildren have to make the hard decision to drastically reduce power consumption in a last ditch effort to reduce CO2, they will be turning off the switches in fully-automated societies. Will societies cope with that scenario?


"Peak oil" simply means the year when the maximum amount of oil is extracted; the implication is (was) that if demand > supply when this happened, this would cause a spike in prices. As it happens, it's pretty likely we did hit peak oil in 2019, before COVID reshuffled everything and oil prices hit historic lows in 2020.

Same for "peak CO2": energy consumption is growing and will continue to do so, but if we're swapping out coal and gas for solar and wind, the amount of CO2 generated can still decline overall.


> As regards, climate studies. Where have we heard the term "peak" something-or-other before. Ah yes, "peak oil". Well that didn't happen: we found more oil, we substituted gas, ... "Peak CO2": I nearly choked on my morning biscuit. Please tell China and India and Africa to stop developing and remain underdeveloped. Come on man, are you on drugs? Maybe, when a few billion have been displaced from their homes and invaded our comfortable lifestyles, we all might decide to stop driving and generating power with coal, oil and gas.

It's more plausible than it sounds. I had the same initial reaction as you, but it's worth keeping in mind that:

1.) China is quickly moving away from manufacturing. If you look at province-level GDP, manufacturing has been flat, or even fallen, in many/most provinces over the last couple of years.

2.) China and India are oil constrained. Although they have large coal reserves, at current mining rates, those reserves (at least, in China), will be used up in the next 30-40 years, with all of the policy implications that come along with it.

3.) People there, especially now, are far more environmentally conscious than generally given credit for.


We know how to generate essentially limitless amounts of energy without producing CO2. There is no need to turn off switches, we just need to build enough solar panels, wind turbines, batteries, power-to-gas reactors, and, if we like, nuclear power plants.


(*) CO2 caused by manufacturing and maintenance not included


You get enough energy out to sequester that CO2 back.


I'd prefer pointers to non-handwavy estimations of energy surplus/EROI of solar+batteries+sequestering system.


The effort, in terms of time and money, required to move to greener power generation will continue to be substantial as evidenced by this extract from the Berkshire Hathaway 2020 Annual Report:

"[O]ur country’s electric utilities need a massive makeover in which the ultimate costs will be staggering. The effort will absorb all of [Berkshire Hathaway Energy's] earnings for decades to come. We welcome the challenge and believe the added investment will be appropriately rewarded. Let me tell you about one of BHE’s endeavors – its $18 billion commitment to rework and expand a substantial portion of the outdated grid that now transmits electricity throughout the West. BHE began this project in 2006 and expects it to be completed by 2030 – yes, 2030. The advent of renewable energy made our project a societal necessity.

Historically, the coal-based generation of electricity that long prevailed was located close to huge centers of population. The best sites for the new world of wind and solar generation, however, are often in remote areas. When BHE assessed the situation in 2006, it was no secret that a huge investment in western transmission lines had to be made. Very few companies or governmental entities, however, were in a financial position to raise their hand after they tallied the project’s cost.

BHE’s decision to proceed, it should be noted, was based upon its trust in America’s political, economic and judicial systems. Billions of dollars needed to be invested before meaningful revenue would flow. Transmission lines had to cross the borders of states and other jurisdictions, each with its own rules and constituencies. BHE would also need to deal with hundreds of landowners and execute complicated contracts with both the suppliers that generated renewable power and the far-away utilities that would distribute the electricity to their customers. Competing interests and defenders of the old order, along with unrealistic visionaries desiring an instantly-new world, had to be brought on board.

Both surprises and delays were certain."

https://www.berkshirehathaway.com/


As of 2019, oil, gas and coal were still king. They provided 84% of the world's energy needs. There were some exceptions: France used nuclear for 38% of its needs; Canada, hydro for 24%; and Denmark, wind for 18%.

See graphs from

https://bourne2learn.com/math/energy/consumption-totals.php

https://bourne2learn.com/math/energy/consumption-fuels.php

using data from

https://ourworldindata.org/grapher/energy-consumption-by-sou...


Say hello to my little friend, entropy.


Earth is not a closed system. You might have noticed the giant miasma of incandescent plasma that regularly makes an appearance in the sky.


In the context of collecting pollutants, entropy is huge. Big light in sky is source of juicy low entropy energy, true, and the big win is the flux of infrared out. Ten times as many photons to carry away the same energy.


Perhaps it isn't visible at the moment.


From my time working within an academic environment, I can attest this is absolutely true. Especially since the gatekeepers (older professors, etc) have a general interest to see these older articles cited more and referenced. It's not malicious, it's just a fact of how these things work. It's also why the general public's knowledge of how science works needs to be adjusted. Science is not a "100% correct" or "100% not correct" thing, and unfortunately I've seen too many institutions and individuals use that myth to try and push an agenda.

It's a combination of institutional momentum and the fact that a specific narrative is politically expedient. The unfortunate thing is that bad decisions made from bad science generally lead to bad outcomes, regardless of the reasons external to the science.


As Max Planck observed, science advances one funeral at a time.

https://en.m.wikipedia.org/wiki/Planck%27s_principle


I think we need to make the clear distinction between "Science" - the application of the Scientific Method, and "Academia" - the set of institutions, processes, and people who should be applying the scientific method, but are actually just trying to make a living in a messed-up system like the rest of us.

Science is always right in the long term.

Academia can be very wrong for a very long time.


> Science is always right in the long term.

... may I suggest, "application of the Scientific Method has worked better than other alternatives, so far"


Is this true though? Wouldn't it be rather boring if the standard model continues to work? In the end, Josephson won. Even the example the author mentions here with breast cancer vs. skin cancer was eventually discovered to be wrong. The main question is the time scale.


>It's not malicious, it's just a fact of how these things work

While I agree it's not malicious, I'd say the reason it works this way is that there is a financial incentive to have it work this way.


It is not finance. There are a lot of motivations based on ego or prestige. Scholarship is unfortunately often based on competition, but the standards of winning are different from those of oil tycoons. Boiling everything down to finance will not lead to an accurate understanding of how academia works.


> Boiling everything down to finance will not lead to accurate understanding of how academia works

Why not? Given that only people with the 'correct' views get tenure to begin with, it doesn't surprise me at all that while there is plenty of ego and prestige, I'd classify it as bickering among priests. Point remains, only people who are indoctrinated and interested in advancing the interests of the church get to be priests to begin with.


While this seems to give the impression that science is not functioning, I don't see this as the case. I will give an example from my field. Before Covid struck, I was at a conference in Japan. There was a talk about an interesting superconductor. Someone repeated some of the earlier measurements and determined that they were misinterpreted: the measurement technique led to heating of the material, and thus to misinterpretation of the results, and the material wasn't so interesting after all. I think this was 15-20 years after the original measurement. During the Q&A session, someone asked the original author what he thought. He stood up and said that the new measurement was correct. To me, that says science is working. It may work slowly, but over time, it does correct.

Sure, there are people that take a long time to convince (and some are never convinced), but science itself eventually corrects. The challenge with new measurements that contradict the current consensus, though, is that not every one is correct; some are just measurement errors. For example, there was the case of a recent experiment that seemed to show faster-than-light travel of neutrinos. The authors presented it with the idea that there was a glitch somewhere that they couldn't find. After a flurry of papers (some with exotic new theories), they eventually found the electronics glitch. It would have been exciting if they were correct, but they weren't, and science again worked.


Agreed (10 days late, but still.)

These kinds of issues are issues with how human societies function. Scientific methods can't eliminate that, but they can help manage it. In fact that's largely the point.


Maybe off the wall, but my first thought is certificate revocation. Could there be (or is there already) a semi-centralized database of research that should no longer be cited? With maybe a dependency graph of research that maybe needs to be revisited?
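
The dependency-graph part of this is essentially a reverse-graph traversal. A toy sketch (all paper IDs and the `citations` data are hypothetical, assuming each paper lists the IDs of papers it cites): revoking one paper flags everything that directly or transitively cites it.

```python
from collections import defaultdict, deque

# Hypothetical citation data: paper ID -> IDs of papers it cites.
citations = {
    "A": [],
    "B": ["A"],        # B cites A
    "C": ["B"],        # C cites B, so it depends on A transitively
    "D": ["A", "C"],
    "E": [],
}

def papers_to_revisit(citations, revoked):
    """BFS over the reverse citation graph: flag every paper that
    directly or transitively cites a revoked paper."""
    cited_by = defaultdict(set)
    for paper, refs in citations.items():
        for ref in refs:
            cited_by[ref].add(paper)
    flagged, queue = set(), deque(revoked)
    while queue:
        current = queue.popleft()
        for dependent in cited_by[current]:
            if dependent not in flagged:
                flagged.add(dependent)
                queue.append(dependent)
    return flagged

print(sorted(papers_to_revisit(citations, {"A"})))  # ['B', 'C', 'D']
```

The hard part, of course, isn't the traversal; it's deciding who gets to put entries in `revoked`, and whether "cites a flawed paper" actually means "is itself flawed".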


There's something quite like this in US law. If you are citing a case in a legal brief, you need to make sure it hasn't been overturned along the way, so you 'Shepardize' it. From Wikipedia [0]:

Shepard's Citations is a citator used in United States legal research that provides a list of all the authorities citing a particular case, statute, or other legal authority. The verb Shepardizing refers to the process of consulting Shepard's to see if a case has been overturned, reaffirmed, questioned, or cited by later cases.

[0]: https://en.wikipedia.org/wiki/Shepard%27s_Citations


We're effectively bringing shepardizing to science at scite (scite.ai). Here's a piece I published on the topic recently: https://www.sciencedirect.com/science/article/pii/S259023852...


Hi! I checked out scite and really like the product. However, I can't seem to find relevant papers in my field (ML & AI); e.g., I searched for the batchnorm paper, which there has been a lot of 'reevaluation' of, but it's not available. I suppose at this point you don't have access to (some of) the big AI journals/conferences. Is this something on the horizon?


this sounds like a good approach. right now every citation is an endorsement, and the choice is to support a publication or to ignore it. there should be a way to make a negative citation that lets you declare that you are contradicting the referenced publication. as a result there would be two citation counts for each publication, like upvotes and downvotes. i believe contradicting citations do happen now, but the fact that they are contradicting can't be seen from the citation count.


> right now every citation is an endorsement

This is a valid point, but it's important to note that it only applies when using aggregate citation counts as a rough proxy for importance within a field. That's useful for prioritizing what to read but not for assessing the validity of any given result (of which there are often quite a few, at least in the life sciences).

Within a given paper, it's not at all uncommon for the authors to explicitly call out some detail from another study as being incorrect in their view. That doesn't mean that they necessarily agree or disagree with the rest of the cited work though.

What I'm getting at here is that negative citations would likely be far too coarse to be useful in practice. It's relatively rare that a paper outright disagrees with an entire work.


I've been thinking about how to make this idea work, but the more I think about it, the less viable it seems.

First, you would be centralizing the ability to "cancel" papers. One bad member of the board could wreak havoc. Simultaneously, people in charge of the database would be constantly bombarded by pleas (not all of them honest) to revoke this and that researcher's paper. So it would be bad for both sides.

You could get away with something like Retraction Watch, but you'll always be behind - you could keep track of which papers were retracted (which is already a lot of work), but not every paper that needs to be revised is retracted.

I guess figuring out a perfect system for deciding which ideas are right is hard.


You point out the flaws of one potential implementation of such a system and reject the whole concept? I think it's conceivable that there is an implementation that would work better than not having it.

Example off the top of my head: Don't have one board decide which papers are cancelled. Have multiple. Have git repos with lists of cancelled papers. Have a process for flagging papers that are tainted because they cite cancelled papers. Have a browser extension that tells you if the paper you're opening carries such flaws. Allow users of the extension to pick which board they trust. Have the extension link to a discussion of why the paper is cancelled. Give the ability to challenge such claims in the open. Etc etc.

I'm sure many more such ideas can be found. The problem is probably rather funding of such system or if whoever funds it would have the right incentives to begin with.


This is part of what https://scite.ai/ is doing. They categorize citations as supporting, detracting, or neutral. The theory being you want to know if you’re citing something debunked. Scite also flags citations of retracted papers (and a Twitter bot that tweets when a new paper is published that cites a retracted study). And I think they have zotero integration (?)

I have no affiliation other than I have met the founder and think the product is cool.


WOT signed endorsements. Back when I had time I drafted a P2P application to share papers and execute peer review via GPG based digital signatures. Signed metadata - at least that was the plan - allowed communities to endorse, retract, flag spam in a distributed opt-in manner.

It's rotting on GitHub, never managed to drum up enough interest...


I’ve been toying with a similar idea, but as a means of crowdsourcing a current best understanding of what science suggests. Primarily to weed through all the wild nutrition recommendations floating around and distill them into sensible advice based on what can actually be supported. I’m sure the idea extends to other areas.

In my mind it would be some reddit/Wikipedia kind of thing where advice and understanding would be debated using some formal language elements based on rdf or whatever to build up a knowledge graph usable inside applications.


Sounds like a great idea. WOT did not work for general public, but scientists are very different.


Things that work for the general public need to be far less technical and way more automatic from an end-user-experience perspective.

WoT could work exceedingly well for a decentralized replacement of Facebook... though at that point there's also the issue of competing against a walled garden with moats and most of the population already inside.


Maybe instead of a centralized repository, individuals could be given the ability to create their own partial sets of recommendations that could be aggregated by a user. Instead of trying to globally solve trust and designation of expertise, you explicitly define the group of people whom you designate experts and whose assessments you trust.
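
A minimal sketch of that aggregation idea (the expert names and `flags` data are hypothetical): each person you trust publishes a set of papers they consider unreliable, and your own view is the union of those sets, optionally requiring agreement from more than one of your chosen experts.

```python
from collections import Counter

# Hypothetical per-expert flag lists: expert -> papers they consider unreliable.
flags = {
    "alice": {"paper-12", "paper-47"},
    "bob":   {"paper-47", "paper-90"},
    "carol": {"paper-47"},
}

def aggregate(flags, trusted, min_agreement=1):
    """Combine the flag sets of the experts *you* trust, keeping only
    papers flagged by at least `min_agreement` of them."""
    counts = Counter(p for e in trusted for p in flags.get(e, set()))
    return {p for p, n in counts.items() if n >= min_agreement}

# Trusting alice and bob, and requiring both to agree:
print(aggregate(flags, ["alice", "bob"], min_agreement=2))  # {'paper-47'}
```

The nice property is exactly the one described above: there is no global arbiter of expertise; swapping out the `trusted` list changes whose judgment you inherit.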


Science's decentralized nature is a strength, not a weakness. Do you really think such a database would be immune to coercion from, say, Monsanto or Chevron?


Building that into Zotero would be great. I already like that they flag retracted papers.



You, sir/madam, are awesome.


It happens in theoretical physics, too. Papers using the debunked “minimum entropy production principle” still occasionally appear.

https://www.annualreviews.org/doi/abs/10.1146/annurev.pc.31....


This discussion has been going on for a while. Though it's important to understand what exactly this is about.

Based on current trends, the most pessimistic climate scenarios (most notably RCP 8.5) seem unlikely, mostly because of the large coal use they assume and because renewables have developed better than expected. Important to note: these scenarios became unlikely because human behavior and technology have developed in a different direction from what was projected. It's not about physical predictions being wrong.

There is a big caveat in all of this: There is still a lot of uncertainty in the understanding of the climate system and feedback loops. This may very well mean that a) RCP 8.5 is unlikely, because humanity will never use that much coal, but b) it could still be just as bad in terms of warming, because climate effects are worse than we thought.


>diversion -- (the scenarios are off track, but saying so advances the agenda of those opposed to action) and,

It is probably worthwhile to note: while the above might actually be a defensive response to these data, the argument is not necessarily valid -- the fact that emissions have been overestimated does not necessarily weaken the case for action. In particular, changes in temperature and precipitation continue to be observed even consistent with "wrong" models[1], and as such the sensitivity of the climate to these lower levels of emissions is not "good news".

1: https://climate.nasa.gov/news/2943/study-confirms-climate-mo...


The one that amazes me is the "How the Sugar Industry Shifted Blame to Fat" stuff (https://news.ycombinator.com/item?id=12480733 https://news.ycombinator.com/item?id=26126183)

I mean it dates back to

>In 1965, Mr. Hickson enlisted the Harvard researchers to write a review that would debunk the anti-sugar studies. He paid them a total of $6,500, the equivalent of $49,000 today.

and before, but I think the 50+ year old bad research still refuses to die.


Before you spend five minutes searching for the real emissions line on the graph: it looks like a slight bolding of one of the labelled lines, ending at about 2018.


The blue one with the dip around 2009?


Of course we'd like researchers to always start from the best premises. But the process is such that if you want to update your premises, you have twenty competing theories to choose from, all in the process of getting validated. Do you move and where do you move?

When I read this article I immediately sympathized with the author: these people should update their models. How long can it take? But then again, how do I know the author's chosen studies are the good ones? If I were to talk to somebody in the field, they'd note that those studies are "promising" or "concerning". But maybe they won't change what they're doing just yet. Inertia? Yes. Bad? Oh, where's hindsight when you need it the most!

The book The Golem by Collins and Pinch discusses historical examples of the scientific process for things we take for granted today. They show how shaky the confirming experiments were. How they could've easily gone the other way, and how some positive early results were based on selecting favourable data.


I'm not sure if I understand this graph. The cone seems to represent "no climate policy". Why should we be surprised that we are currently trending out of the "no climate policy" cone since we are implementing climate policy?

Taking a second look, that plot is only CO2 from energy, and the plot's y scale has a discontinuity. If we need to get to net 0 emissions and the rate of CO2 production is still increasing, that would suggest we are still a far ways off from the goal.

I just checked and the IPCC 5th assessment was made in 2014. The 6th assessment is scheduled for 2022. Does anyone know if the baselines get updated with each IPCC assessment?


Having done some modeling myself, I am always stupefied and rankled when I read and hear the certitude with which people, mostly in the media, make claims about future climate scenarios. I am not at all surprised by Burgess' results. This is an extremely complex system being modeled, with the inputs to the model also being modeled...

Also, is it just me, or does this article bend over backwards to avoid the simple, obvious lede? Namely: climate change scenario estimates are too high, and it's even possible CO2 emissions are in decline.


It’s the right move rhetorically: the issue is too emotional and tied up with identity, at least in the US. This article delivers a message that could be treated as supporting climate denialism in a package that lets people on both sides of the issue read it without emotional triggers.

I’d love to have seen my takeaway stated, though: current and past investments in regulation and technology have had an effect on carbon emissions, and future investments in regulation changes and technology may reduce them more. The flattening of the emissions curve didn’t just happen by itself.


Isn’t it better to act as if those edge cases are going to happen, so we can prevent the worst from happening?

Yes, scientific integrity is important, but if the message gets out to the general public that those climate change models are based on outdated research, wouldn’t that make us procrastinate and not get off the fossil-fuel-based economy sooner?

Our biosphere degradation isn’t just about Ghg emissions, but pollution in general caused by our fossil fuel powered economy.


Pst, the "general public" are listening !


Somewhat off topic, but I think there will be a limit when humans have generated so much knowledge that our offspring might one day never be able to catch up, e.g. having to study to a very old age just to reach the cutting-edge stuff.


We develop better abstractions and tools though.

I can write programs without needing to know how the CPU chip works, an engineer can build a new device without having to understand every detail of the new cutting edge materials science etc.

As such we can still effectively divide the labour between people, and people can start from further along the track than before. I mean, most people learn calculus but they don't have to spend years deriving and proving it.


See also: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/

However, our world is not a fable, so there is no reason, a priori, to expect life extension to be a problem that is beyond the asymptotic limit. A more critical issue is that fluid intelligence declines with age. It is for this reason that transhumanists sometimes say that no human has yet reached the boundary of human competence. Once we can accumulate knowledge for a century with no degradation in mental flexibility, we will start to see the true capability of the human neural architecture.

Of course, by that time it's also likely that AI will vastly outpace us...


Why would research take imagined futures into consideration?


There is so much wrong with this article... Where to begin...


Please do begin somewhere, if there really is something factually wrong.


> Evidence indicates the scenarios of the future to 2100...

Your 100 year out model is off? Colour me shocked!


How many times do we have to rediscover Kuhn's 'The Structure of Scientific Revolutions' and write articles about ideas that were well understood 60 years ago?


Kuhn never argued about the misuse of data. He instead argued that the scientific understanding of the data led to models that became increasingly complex. So much so that scientists realized it was time to try a new way of understanding the underlying data. The new models were the "revolution".


Kuhn spent a great deal of time talking about the behavior of (usually older), establishment scientists clinging to outdated science, which is part of the problem here.


Because there will always be people who just happened to never come across those ideas.


I didn't ask "why", I asked how many times.


7


> diversion (the scenarios are off track, but saying so advances the agenda of those opposed to action)

This seems the basis for much of so called “cancel culture”.


It's been a long time since I've read anything so well written that was so devoid of any pertinent points. I'm only reading that page, but the only thing I've come away with is that breast cancer research is flawed, somehow, by what seems to be the most egregious lack of anything that approaches professional standards on the part of researchers.

Oh, and a graph with different lines drawn on it with a large space between some of them.


go to bed before you burn too many neurons


how do you burn neurons and why should i care about them?


Please keep this sort of antagonizing off of HN.



