Energy, data, environment

Adopting low-cost ‘healthy’ diets could cut food emissions by one-third - Carbon Brief


Choosing the “least expensive” healthy food options could cut dietary emissions by one-third, according to a recent study.

In addition to having lower emissions, diets composed of low-cost, healthy foods would cost roughly one-third as much as a diet of each country’s most-consumed foods.

The study, published in Nature Food, compares prices and emissions associated with 440 local food products in 171 countries.

The researchers identify some food groups that are low in both cost and emissions, including legumes, nuts and seeds, as well as oils and fats.

Some of the most widely consumed foods – such as wheat, maize, white beans, apples, onions, carrots and small fish – also fall into this category, the study says.

One of the lead authors tells Carbon Brief that while food marketing has promoted the idea that eating environmentally friendly diets is “very fancy and expensive”, the study shows that such diets are achievable through cheap, everyday foods.

Meanwhile, a separate Nature Food study found that reforming the policies that reduce taxes on meat products in the EU could decrease food-related emissions by up to 5.7%.

Costs and emissions

The study defines a healthy diet using the “healthy diet basket” (HDB), a standard based on nutritional guidelines that includes a range of food groups providing the nutrients needed for long-term health.

Using both data on locally available products and food-specific emissions databases, the authors estimate the costs and greenhouse gas emissions of 440 food products needed for healthy diets in 171 countries.

They examine three different healthy diets: one using the most-consumed food products, one using the least expensive food products and one using the lowest-emitting food products.

Each of these diets is constructed for each country, based on costs, emissions, availability and consumption patterns.

The researchers find that a healthy diet comprising the most-consumed foods within each country – such as beef, chicken, pork, milk, rice and tomatoes – emits an average of 2.44 kilograms of CO2-equivalent (kgCO2e) and costs $9.96 (£7.24) in 2021 prices, per person and per day.

However, they find that a healthy diet with the least-expensive locally available foods in each country – such as bananas, carrots, small fish, eggs, lentils, chicken and cassava – emits 1.65kgCO2e and costs $3.68 (£2.68). That is roughly one-third lower emissions and about one-third of the cost of the most-consumed-foods diet.

In comparison, a healthy diet with the lowest-emissions products – such as oats, tuna, sardines and apples – would emit just 0.67kgCO2e, but would cost nearly double the least-expensive diet, at $6.95 (£5.05).
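
To see how those headline fractions fall out of the per-person, per-day numbers above, here is a minimal arithmetic check in Python, using only the figures quoted in this article:

```python
# Quick check of the "one-third" claims, using the study's headline numbers
# quoted above (per person, per day, 2021 prices).

most_consumed   = {"kgco2e": 2.44, "usd": 9.96}
least_expensive = {"kgco2e": 1.65, "usd": 3.68}

emissions_cut = 1 - least_expensive["kgco2e"] / most_consumed["kgco2e"]
cost_ratio = least_expensive["usd"] / most_consumed["usd"]

print(f"emissions cut: {emissions_cut:.0%}")   # ~32%, i.e. roughly one-third lower
print(f"cost ratio:    {cost_ratio:.0%}")      # ~37%, i.e. roughly one-third of the cost
```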

This reveals the trade-off between affordability and sustainability – and shows that the least-expensive foods tend to produce lower emissions, according to the study.

Dr Elena Martínez, a food-systems researcher at Tufts University and one of the lead authors of the study, tells Carbon Brief this is generally true because lower-cost food production tends to use fewer fossil fuels and require less land-use change, which also cuts emissions.

Ignacio Drake is coordinator of fiscal and economic policies at Colansa, an organisation promoting healthy eating and sustainable food systems in Latin America and the Caribbean.

Drake, who was not involved in the study, tells Carbon Brief that the research is a “step further” than previous work on healthy diets. He adds that the study “integrates and consolidates” previous analyses done by other groups, such as the World Bank and the UN Food and Agriculture Organization.

Food group differences 

The research looks at six food groups: animal-sourced foods, oils and fats, fruits, legumes (as well as nuts and seeds), vegetables and starchy staples.

Animal-sourced foods – such as meat and dairy – are typically the most-emitting, and most-expensive, food group. 

Within this group, the study finds that beef has the highest costs and emissions, while small fish, such as sardines, have the lowest emissions. Milk and poultry are amongst the least-expensive products for a healthy diet.

Starchy staple products also contribute to high emissions, the study adds, because they make up such a large portion of most people’s calories.

Emissions from fruits, vegetables, legumes and oil are lower than those from animal-derived foods.

The following chart shows the energy contributions (top) and related emissions (bottom) from six major food groups in the three diets modelled by the study: lowest-cost (left), lowest-emission (middle) and most-common (right) food items.

The six food groups examined in the study are shown in different colours: animal-sourced foods (red), legumes, nuts and seeds (blue), oils and fats (purple), vegetables (green), fruits (orange) and starchy staples (yellow). The size of each box represents the contribution of that food to the overall dietary energy (top) and greenhouse gas emissions (bottom) of each diet.

Energy (top) and emissions (bottom) contributions from different food groups within the three diets modelled by the study. Each column represents a different diet (left to right): lowest-cost, lowest-emission and most-common items. The boxes are coloured by food group: animal-sourced foods (red), legumes, nuts and seeds (blue), oils and fats (purple), vegetables (green), fruits (orange) and starchy staples (yellow). Source: Bai et al. (2025).

Prof William Masters, a professor at Tufts University and author on the study, tells Carbon Brief that balancing food groups is important for human health and the environment, but local context is also important. For example, he points out that in low-income countries, some people do not get enough animal-sourced foods.

For Drake, if there are foods with the same nutritional quality, but that are cheaper and produce fewer emissions, it is logical to think that the “cost-benefit ratio [of switching] is clear”.

Other studies and reports have also modelled healthy and sustainable diets and, although they do not exclude animal-sourced foods, they do limit their consumption.

A recent study estimated that a global food system transformation – including a diet known as the “planetary health diet”, based on cutting meat, dairy and sugar and increasing plant-based foods, along with other actions – can help limit global temperature rise to 1.85C by 2050.

The latest EAT-Lancet Commission report found that a global shift to healthier diets could cut non-CO2 emissions from agriculture, such as methane and nitrous oxide, by 15%. The report recommends increasing the production of fruits, vegetables and nuts by two-thirds, while reducing livestock meat production by one-third.

Dr Sonia Rodríguez, head of the department of food, culture and environment at Mexico’s National Institute of Public Health, says that unlike earlier studies, which project ideal scenarios, this new study also evaluates real scenarios and provides a “global view” of the costs and emissions of diets in various countries.

Increasing access

The study points out that as people’s incomes increase, their consumption of expensive foods also increases. However, it adds, some people with high incomes who can afford healthy diets often consume other types of food, for reasons such as preferences, time and cooking costs.

The study stresses that nearly one-third of the world’s population – about 2.6 billion people – cannot afford sufficient food products required for a healthy diet.

In low-income countries, primarily in sub-Saharan Africa and south Asia, 75% of the population cannot afford a healthy diet, says the study.

In middle-income countries, such as China, Brazil, Mexico and Russia, more than half of the population can afford such a diet.

To improve the consumption of healthy, sustainable and affordable foods, the authors recommend changes in food policy, increasing the availability of food at the local level and substituting highly emitting products.

Martínez also suggests implementing labelling systems with information on the environmental footprint and nutritional quality of foods. She adds:

“We need strategies beyond just reducing the cost of diets to get people to eat climate-friendly foods.”

Drake notes that there are public and financial policies that can help reduce the consumption of unhealthy and unsustainable foods, such as taxes on unhealthy foods and sugary drinks. This, he adds, would lead to better health outcomes for countries and free up public resources for implementing other policies, such as subsidies for producing healthy food.

Separately, another recent Nature Food study looks at taxes specifically on meat products, which are subject to reduced value-added tax (VAT) in 22 EU member states. 

It finds that taxing meat at the standard VAT rate could decrease dietary-related greenhouse gases by 3.5-5.7%. Such a levy would also have positive outcomes for water and land use, as well as biodiversity loss, according to the study.


Putting solar panels on land used for biofuels would produce enough electricity for all cars and trucks to go electric


The world dedicates a Poland-sized area of land to liquid biofuels. Is there a more efficient way to generate energy?

Electric vehicles might be promoted as the key technological solution for low-carbon transport today, but they weren’t always the obvious option. Back in the early 2000s, it was biofuels.1 Rather than extracting and burning oil, we could grow crops like cereals and sugarcane, and turn them into viable fuels.

While we might expect biofuels to be a solution of the past due to the cost-competitiveness and rise of electric cars, the world produces more biofuels than ever. And this rise is expected to continue.

In this article, we give a sense of perspective on how much land is used to produce biofuels, and what the potential of that land could be if we used it for other forms of energy. We’ll focus on what would happen if we used that land for solar panels, and then how many electric vehicles could be powered as a result.

We’ll mostly focus on road transport, as that is where 99% of biofuels are currently used. The world generates small amounts of “biojet fuel” — used in aviation — but this accounts for only 1% of the total.2 While aviation biofuels will increase in the coming years, in the near-to-medium-term, they’ll still be small compared to fuel for cars and trucks. By 2028, the IEA projects that aviation might consume around 2% of global biofuels.

To be clear: we’re not proposing that we should replace all biofuel land with solar panels. There are many ways we could utilise this land, whether for food production, some biofuel production, or rewilding. Maybe some combination of all of the above. But to make informed decisions about how to use our land effectively, we need to get a perspective on the potential of each option. That’s what we aim to do here for solar power and electrified transport.

For this analysis, we draw on a range of sources and, at times, produce our own estimates. We’ve written a full methodological document that explains our assumptions and guides you through each calculation.

Which countries produce biofuels, and what are the impacts?

Before we get into the calculations, it’s worth a quick overview of where biofuels are produced today, and what their impacts are.

Some might imagine that biofuels have lost their relevance. But historical policies supporting them are still in place. As shown in the chart below, the world produces more biofuels than ever, and this trend is expected to continue. Global production is focused in a relatively small number of markets, with the United States, Brazil, and the European Union dominating. Since there are no signs of policies changing in these regions, we would not expect the rise of biofuels to end.

Most of the world’s biofuels come from sugarcane (mostly grown in Brazil), cereal crops such as corn (mostly grown in the United States and the European Union), and oil crops such as soybean and palm oil (which are grown in the US, Brazil, and Indonesia).

In the map below, you can get a view of where the world’s biofuels are grown.

Collectively, these biofuels supply around 4% of the world’s energy demand for transport. While that does displace some oil from the energy mix, the climate benefits of biofuels are not always as clear as people might assume.

Once we consider the climate impact of growing the food and manufacturing the fuel, the carbon savings relative to petrol can be small for some crops.3 But more importantly, when the opportunity costs of the land used to grow those crops are taken into account, they might be worse for the climate.4 That’s because agricultural land use is not “free”. If we chose not to use it for agriculture, then it could be rewilded and reforested, which would sequester carbon from the atmosphere.

From a climate perspective, freeing up that cropland from biofuels would be one alternative. However, another option is to utilise it for another form of energy, which could offer a much greater climate benefit.

How much land do biofuels use?

This should be easy to estimate. If you know how much land in the United States (or any other country) is used for corn, and what fraction of corn is for biofuels, you can calculate the amount of land used for biofuels.

What makes things complicated is that biofuels often produce co-products that are allocated to other uses, such as animal feed. Not all of the corn or soybeans turns into liquid that can be put in a car; some residues can be fed to pigs and chickens. How you allocate this land between biofuels and their co-products can lead to quite different results.

A recent analysis from researchers at Cerulogy estimated that biofuels are grown on 61 million hectares of land.5 But when they split this allocation between land for biofuels and land for animal feed, the land use for biofuels alone was 32 million hectares. The other 29 million hectares would be allocated for land use for animal feed.
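
As a rough illustration of what such an allocation looks like in practice, here is a minimal sketch. The crop, land area and energy share below are hypothetical round numbers, not Cerulogy’s actual inputs, and their method may weight co-products differently (for example by market value rather than energy):

```python
# Hypothetical example of splitting feedstock land between fuel and co-products.
# A corn field turned into ethanol also yields distillers' grains fed to animals,
# so only part of the land is attributed to the fuel itself.

corn_land_mha = 10.0         # hypothetical land area growing the feedstock
fuel_energy_share = 0.55     # hypothetical share of the crop's output energy in the ethanol

land_for_fuel = corn_land_mha * fuel_energy_share        # ~5.5 Mha attributed to biofuel
land_for_feed = corn_land_mha * (1 - fuel_energy_share)  # ~4.5 Mha attributed to animal feed

print(round(land_for_fuel, 1), round(land_for_feed, 1))
```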

There are much higher published figures. The Union for the Promotion of Oil and Protein Plants estimates that as much as 112 million hectares are “used to supply feedstock for biofuels”.6 By this definition, there is no adjustment for dual use of that land or the land use of co-products. That’s one of the reasons why the figures are much higher. Even taking this into account, the numbers are still higher, and the honest answer is that we don’t know why.

For this article, we’re going to assume a net land use of 32 million hectares. This is conservative, and that is deliberate. As we’ll soon see, the amount of solar power we could generate, or the number of electric vehicles we could power on this land, is extremely large. And that’s with us being fairly ungenerous about the amount of land available. Larger land use figures could also be credible; in that case, the potential would be even higher.

How large is 32 million hectares? That’s 320,000 square kilometers: imagine an area like the one in the box below, 640 kilometers across and 500 kilometers high. For context, that’s about the size of Germany, Poland, the Philippines, Finland, or Italy.

How much solar power could you produce on that land, and how many cars could you run?

Could we use those 32 million hectares of land differently to produce even more energy than we currently get from biofuels?

The answer is yes. If we put solar panels on that land, we could produce roughly 32,000 terawatt-hours of electricity each year.7 That’s 23 times more than the energy that is currently produced in the form of all liquid biofuels.8 You can see this comparison in the chart.

32,000 terawatt-hours is a big number. The world generated 31,000 TWh of electricity in 2024. So, these new solar panels would produce enough to meet the world’s current electricity demand.
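
As a rough cross-check of the 32,000 TWh figure, here is a minimal back-of-envelope sketch. The power density of roughly 11 watts of average electrical output per square metre of land is an assumption on our part, chosen because it is in the range commonly quoted for utility-scale solar farms; it is not necessarily the value used in the methodological document:

```python
# Back-of-envelope check of solar generation on 32 million hectares.
# ASSUMPTION: ~11 W/m2 of average (capacity-factor-adjusted) electrical output
# per unit of land, a typical order of magnitude for utility-scale solar.

land_m2 = 32e6 * 10_000              # 32 million hectares, at 10,000 m2 per hectare
power_density_w_per_m2 = 11          # assumed average output per m2 of land

average_power_w = land_m2 * power_density_w_per_m2
energy_twh_per_year = average_power_w * 8760 / 1e12   # W x hours -> Wh -> TWh

biofuel_twh = 1424                   # today's liquid biofuel energy (see footnote 8)
print(round(energy_twh_per_year))                # ~31,000 TWh, close to the 32,000 quoted
print(round(energy_twh_per_year / biofuel_twh))  # ~22x, in line with the "23 times" claim
```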

Again, our proposal isn’t that we should cover all of this land in solar panels, or that it could easily power the world on its own. We don’t account for the fact that we’d need energy storage and other options to make sure that power is available where and when it’s needed (not just when the sun is shining). We’re just trying to get a sense of perspective for how much electricity could be produced by using that land in more efficient ways.


These comparisons might seem surprising at first. But they can be explained by the fact that growing crops is a very inefficient process. Plants convert less than 1% of sunlight into biomass through photosynthesis.9 Even more energy is then lost when we turn those plants into liquid fuels. Crops such as sugarcane tend to perform better than others, like maize or soybeans, but even they are still inefficient.

By comparison, solar panels convert 15% to 20% of sunlight into electricity, with some recent designs achieving as much as 25%.10 That means replacing crops with solar panels will generate a lot more energy.

Now, you might think that we’re comparing very different things here: energy from liquid biofuels meant to decarbonize transport, and solar, which could decarbonize electricity. But with the rise of affordable and high-quality electric vehicles, solar power can be a way to decarbonize transport, too.

Run the numbers, and we find that you could power all of the world’s cars and trucks on this solar energy if transport were electrified.

Of course, these vehicles would need to be electrified in the first place. This is happening — electric car sales are rising, and electric trucks are now starting to get some attention — but it will take time for most vehicles on the road to be electric. For now, we’ll imagine that they are.

We estimate that the total electricity needed to power all cars and trucks is around 7,000 TWh per year, comprising 3,500 TWh for cars and a similar amount for trucks. We’ve added this comparison to the chart.


That’s less than one-quarter of the 32,000 TWh that solar panels could produce on biofuel land. Consider those options. The world could meet 3% or 4% of transport demand with biofuels. Or it could meet all road transport demand on just one-quarter of that land. The other three-quarters could be used for other things, such as food production, biofuels for aviation, or it could be left alone to rewild.

It’s worth noting that in this scenario – unlike using solar for bulk electricity needs – we would need much less additional energy storage, because every car and truck is essentially a big battery in itself.

The reason these comparisons are even more stark than biofuels versus solar is that most of the energy consumed in a petrol car is wasted, either as heat (if you put your hand over the bonnet, you will often notice that it’s extremely warm after driving) or as friction when braking. An electric car is much more efficient because it has no combustion engine and because regenerative braking uses braking energy to recharge the battery. That means that driving one mile in an electric car uses just one-third of the energy of driving one mile in a combustion engine car.

Put these two efficiencies together, and we find that you could drive 70 times as many miles in a solar-powered electric car as you could in one running on biofuels from the same amount of land.
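
That 70-fold figure can be reconstructed from the two factors already given in this article; the sketch below is our own recombination of those numbers, not the authors’ exact calculation:

```python
# Rough reconstruction of the "~70 times as many miles" claim from the two
# efficiency gaps described above.

solar_vs_biofuel_energy = 23     # the same land yields ~23x more energy as solar electricity
ev_energy_per_mile = 1 / 3       # an electric car uses ~1/3 of the energy per mile

miles_multiplier = solar_vs_biofuel_energy / ev_energy_per_mile
print(round(miles_multiplier))   # ~69, i.e. roughly 70x more miles from the same land
```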

Land use comes at a cost, so we should think carefully about how to use it well

Our point here is not that we should cover all of our biofuel land in solar panels. There are reasons why the comparisons above are simpler than the real world, and why dedicating all of that land to solar power would not be ideal.11


What we do want to challenge is how we think and talk about land use. People rightly question the impact of solar or wind farms on landscapes, but rarely consider the land use of existing biofuel crops, which do very little to decarbonize our energy supplies. Whether we’ll run out of land for solar or wind is a common concern, but when we run the numbers, it’s clear that there is more than enough; we’re just using it for other things. Stacking up the comparative benefits of those other things allows us to make better choices, if they’re available.

In this article, we wanted to run the numbers and get some perspective on how we could use that Germany- or Poland-sized area of land in the most efficient way. What’s clear is that we could produce a huge amount of electricity from solar on just a fraction of that land. We could power an entire global electric car and truck fleet on just one-quarter of it.

Land use comes at a cost: for the climate, ecosystems, and other species we share the planet with. That means we should think carefully about how to use it well. That might mean a mix of biofuels for aviation, and solar power for road transport and electricity grids. It might mean going all-in on solar. Or it could mean using some of it for solar power, and leaving the rest alone. Sometimes, the most thoughtful option is not using land at all and letting it return to nature.

Acknowledgments

We would like to thank Max Roser and Edouard Mathieu for editorial feedback and comments on this article. We also thank Marwa Boukarim for help and support with the visualizations.

Endnotes

  1. Other options didn’t rely on switching fuels, such as improving car efficiency and expanding public transport, but these only go so far.

    Here’s a quote from the Intergovernmental Panel on Climate Change in 2007: “Within the transport sector there are five mitigation options with a clear link between sustainable development, adaptation and mitigation. These areas are biofuels, energy efficient, public transport, non-motorised transport and urban planning.”

  2. In 2024, the International Energy Agency estimates that 1.8 billion litres of liquid biofuel were for “biojet” fuel. Total production was 118 billion litres. That means biojet fuel was only 1%.

    Most of this biojet fuel comes from waste fats and oils, which also don’t have the same land use dilemmas as bioethanol and biodiesel used for road transport.

  3. Carbon savings for sugarcane feedstocks tend to be much larger than they are for corn, wheat, and palm oil feedstocks.

    This can vary a lot, depending on location, crop type, and production system. But this meta-analysis finds that some, such as sugarcane ethanol from Brazil, can achieve more than 60% savings (if no land use change is involved), but some crops produce almost no savings at all.

    Jeswani, H. K., Chilvers, A., & Azapagic, A. (2020). Environmental sustainability of biofuels: a review. Proceedings of the Royal Society A.

    These results can be very sensitive to the methodology and life-cycle assessment tools.

    Pereira, L. G., Cavalett, O., Bonomi, A., Zhang, Y., Warner, E., & Chum, H. L. (2019). Comparison of biofuel life-cycle GHG emissions assessment tools: The case studies of ethanol produced from sugarcane, corn, and wheat. Renewable and Sustainable Energy Reviews.

  4. Searchinger, T. D., Wirsenius, S., Beringer, T., & Dumas, P. (2018). Assessing the efficiency of changes in land use for mitigating climate change. Nature, 564(7735), 249-253.

    Fehrenbach, H., & Bürck, S. (2022). Carbon opportunity costs of biofuels in Germany—An extended perspective on the greenhouse gas balance including foregone carbon storage. Frontiers in Climate.

  5. Sandford et al. (2024). Diverted harvest: Environmental Risk from Growth in International Biofuel Demand. Cerulogy.

  6. They estimate that 8% of global croplands supply feedstock for biofuel production. Using their estimate of 1.4 billion hectares of total cropland, this would be 112 million hectares.

  7. This is based on the power density of modern solar panels — how much energy can be produced for a given area. For more details on these calculations, see our full methodological document.

  8. This 1,424 TWh is based on data from the Energy Institute. We converted it from petajoules (PJ) to TWh using a conversion factor of 0.27778.

  9. Croce, R., Carmo-Silva, E., Cho, Y. B., Ermakova, M., Harbinson, J., Lawson, T., ... & Zhu, X. G. (2024). Perspectives on improving photosynthesis to increase crop yield. The Plant Cell.

  10. Oni, A. M., Mohsin, A. S., Rahman, M. M., & Bhuian, M. B. H. (2024). A comprehensive evaluation of solar cell technologies, associated loss mechanisms, and efficiency enhancement strategies for photovoltaic cells. Energy Reports.

  11. For example, global biofuel land is not located precisely where solar electricity or electric vehicle demand is expected to be.



AI companies will fail. We can salvage something from the wreckage | Cory Doctorow


I am a science-fiction writer, which means that my job is to make up futuristic parables about our current techno-social arrangements to interrogate not just what a gadget does, but who it does it for, and who it does it to.

What I do not do is predict the future. No one can predict the future, which is a good thing, since if the future were predictable, that would mean we couldn’t change it.

Now, not everyone understands the distinction. They think science-fiction writers are oracles. Even some of my colleagues labor under the delusion that we can “see the future”.

Then there are science-fiction fans who believe that they are reading the future. A depressing number of those people appear to have become AI bros. The fact that these guys can’t shut up about the day their spicy autocomplete machine will wake up and turn us all into paperclips has led many confused journalists and conference organizers to try to get me to comment on the future of AI.

That’s something I used to strenuously resist doing, because I wasted two years of my life explaining patiently and repeatedly why I thought crypto was stupid, and getting relentlessly bollocked by cryptocurrency cultists who at first insisted that I just didn’t understand crypto. And then, when I made it clear that I did understand crypto, they insisted that I must be a paid shill.

This is literally what happens when you argue with Scientologists, and life is just too short. That said, people would not stop asking – so I’m going to explain what I think about AI and how to be a good AI critic. By which I mean: “How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.”


An army of reverse centaurs

In automation theory, a “centaur” is a person who is assisted by a machine. Driving a car makes you a centaur, and so does using autocomplete.

A reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.

Take, for example, an Amazon delivery driver, who sits in a cabin surrounded by AI cameras that monitor the driver’s eyes and take points off if they look in a proscribed direction, monitor the driver’s mouth because singing is not allowed on the job, and rat the driver out to the boss if they do not make quota.

The driver is in that van because the van cannot drive itself and cannot get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance.

Obviously, it’s nice to be a centaur, and it’s horrible to be a reverse centaur. There are lots of AI tools that are potentially very centaurlike, but my thesis is that these tools are created and funded for the express purpose of creating reverse centaurs, which none of us want to be.

But like I said, the job of a science-fiction writer is to do more than think about what the gadget does, and drill down on who the gadget does it for and who the gadget does it to. Tech bosses want us to believe that there is only one way a technology can be used. Mark Zuckerberg wants you to think that it is technologically impossible to have a conversation with a friend without him listening in. Tim Cook wants you to think that it is impossible for you to have a reliable computing experience unless he gets a veto over which software you install and takes 30 cents out of every dollar you spend. Sundar Pichai wants you to think that it is impossible for you to find a webpage unless he gets to spy on you from asshole to appetite.

This is all a kind of vulgar Thatcherism. Margaret Thatcher’s mantra was: “There is no alternative.” She repeated this so often they called her “Tina” Thatcher: There. Is. No. Alternative.

“There is no alternative” is a cheap rhetorical sleight: a demand dressed up as an observation. “There is no alternative” means: “stop trying to think of an alternative.”

I’m a science-fiction writer – my job is to think of a dozen alternatives before breakfast.

So let me explain what I think is going on here with this AI bubble and who the reverse-centaur army is serving, and sort out the bullshit from the material reality.


How to pump a bubble

Start with monopolies: tech companies are gigantic and they don’t compete, they just take over whole sectors, either on their own or in cartels.

Google and Meta control the ad market. Google and Apple control the mobile market, and Google pays Apple more than $20bn a year not to make a competing search engine, and of course, Google has a 90% search market share.

Now, you would think that this was good news for the tech companies, owning their whole sector.

But it’s actually a crisis. You see, when a company is growing, it is a “growth stock”, and investors really like growth stocks. When you buy a share in a growth stock, you are making a bet that it will continue to grow. So growth stocks trade at a huge multiple of their earnings. This is called the “price to earnings ratio” or “PE ratio”.

But once a company stops growing, it is a “mature” stock, and it trades at a much lower PE ratio. So for every dollar that Target – a mature company – earns, it is worth $10: it has a PE ratio of 10. Amazon has a PE ratio of 36, which means that for every dollar Amazon earns, the market values it at $36.

It’s wonderful to run a company that has a growth stock. Your shares are as good as money. If you want to buy another company or hire a key worker, you can offer stock instead of cash. And stock is very easy for companies to get, because shares are manufactured right there on the premises: all you have to do is type some zeros into a spreadsheet. Dollars are much harder to come by; a company can only get them from customers or creditors.

So when Amazon bids against Target for a key acquisition or a key hire, Amazon can bid with shares they make by typing zeros into a spreadsheet, and Target can only bid with dollars they get from selling stuff to us or taking out loans, which is why Amazon generally wins those bidding wars.

That’s the upside of having a growth stock. But here is the downside: eventually a company has to stop growing. Like, say you get a 90% market share in your sector, how are you going to grow?

If you are an exec at a dominant company with a growth stock, you have to live in constant fear that the market will decide that you are not likely to grow any further. Think of what happened to Facebook in the first quarter of 2022. They told investors that they experienced slightly slower growth in the US than they had anticipated, and investors panicked. They staged a one-day, $240bn sell-off. A quarter-trillion dollars in 24 hours! At the time, it was the largest, most precipitous drop in corporate valuation in human history.

That’s a monopolist’s worst nightmare, because once you’re presiding over a “mature” firm, the key employees you have been compensating with stock experience a precipitous pay drop and bolt for the exits, so you lose the people who might help you grow again, and you can only hire their replacements with dollars – not shares.

This is the paradox of the growth stock. While you are growing to domination, the market loves you, but once you achieve dominance, the market lops 75% or more off your value in a single stroke if they do not trust your pricing power.

Which is why growth-stock companies are always desperately pumping up one bubble or another, spending billions to hype the pivot to video or cryptocurrency or NFTs or the metaverse or AI.

I am not saying that tech bosses are making bets they do not plan on winning. But winning the bet – creating a viable metaverse – is the secondary goal. The primary goal is to keep the market convinced that your company will continue to grow, and to remain convinced until the next bubble comes along.

So this is why they’re hyping AI: the material basis for the hundreds of billions in AI investment.


AI can’t do your job

Now I want to talk about how they’re selling AI. The growth narrative of AI is that AI will disrupt labor markets. I use “disrupt” here in its most disreputable tech-bro sense.

The promise of AI – the promise AI companies make to investors – is that there will be AI that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself and give the other half to the AI company.

That is the $13tn growth story that Morgan Stanley is telling. It’s why big investors are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family’s financial security.

Now, if AI could do your job, this would still be a problem. We would have to figure out what to do with all these unemployed people.

But AI can’t do your job. It can help you do your job, but that does not mean it is going to save anyone money.

Take radiology: there is some evidence that AI can sometimes identify solid-mass tumors that some radiologists miss. Look, I’ve got cancer. Thankfully, it’s very treatable, but I’ve got an interest in radiology being as reliable and accurate as possible.

Let’s say my hospital bought some AI radiology tools and told its radiologists: “Hey folks, here’s the deal. Today, you’re processing about 100 X-rays per day. From now on, we’re going to get an instantaneous second opinion from the AI, and if the AI thinks you’ve missed a tumor, we want you to go back and have another look, even if that means you’re only processing 98 X-rays per day. That’s fine, we just care about finding all those tumors.”

If that’s what they said, I’d be delighted. But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive, not even if that also makes radiology more accurate. The market’s bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: “Look, you fire nine out of 10 of your radiologists, saving $20m a year. You give us $10m a year, and you net $10m a year, and the remaining radiologists’ job will be to oversee the diagnoses the AI makes at superhuman speed – and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it’s catastrophically wrong.

“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop’. It’s their signature on the diagnosis.”

This is a reverse centaur, and it is a specific kind of reverse centaur: it is what Dan Davies calls an “accountability sink”. The radiologist’s job is not really to oversee the AI’s work, it is to take the blame for the AI’s mistakes.

This is another key to understanding – and thus deflating – the AI bubble. The AI can’t do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can’t do your job. This is key because it helps us build the kinds of coalitions that will be successful in the fight against the AI bubble.

If you are someone who is worried about cancer, and you are being told that the price of making radiology too cheap to meter is that we are going to have to rehouse America’s 32,000 radiologists, with the trade-off that no one will ever be denied radiology services again, you might say: “Well, OK, I’m sorry for those radiologists, and I fully support getting them job training or UBI or whatever. But the point of radiology is to fight cancer, not to pay radiologists, so I know what side I’m on.”

AI hucksters and their customers in the C-suites want the public on their side. They want to forge a class alliance between AI deployers and the people who enjoy the fruits of the reverse centaurs’ labor. They want us to think of ourselves as enemies to the workers.

Now, some people will be on the workers’ side because of politics or aesthetics. But if you want to win over all the people who benefit from your labor, you need to understand and stress how the products of the AI will be substandard. That they are going to get charged more for worse things. That they have a shared material interest with you.

Will those products be substandard? There is every reason to think so.

Think of AI software generation: there are plenty of coders who love using AI. Using AI for simple tasks can genuinely make them more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it’s clear they are not hoping to make some centaurs.

They want to fire a lot of tech workers – 500,000 over the past three years – and make the rest pick up their work using AI coding tools, which is only possible if you let the AI do all the gnarly, creative problem solving, and then you do the most boring, soul-crushing part of the job: reviewing the AI’s code.

And because AI is just a word-guessing program, because all it does is calculate the most probable word to go next, the errors it makes are especially subtle and hard to spot, because these bugs are nearly indistinguishable from working code.

For example: programmers routinely use standard “code libraries” to handle routine tasks. Say you want your program to slurp in a document and make some kind of sense of it – find all the addresses, say, or all the credit card numbers. Rather than writing a program to break down a document into its constituent parts, you’ll just grab a library that does it for you.

These libraries come in families, and they have predictable names. If it’s a library for pulling in an html file, it might be called something like lib.html.text.parsing; and if it’s for a docx file, it’ll be lib.docx.text.parsing.

But reality is messy, humans are inattentive and stuff goes wrong, so sometimes, there will be another library, say, one for parsing pdfs, and instead of being called lib.pdf.text.parsing, it’s called lib.text.pdf.parsing. Someone just entered an incorrect library name and it stuck. Like I said, the world is messy.

Now, AI is a statistical inference engine. All it can do is predict what word will come next based on all the words that have been typed in the past. That means that it will “hallucinate” a library called lib.pdf.text.parsing, because that matches the pattern it’s already seen. And the thing is, malicious hackers know that the AI will make this error, so they will go out and create a library with the predictable, hallucinated name, and that library will get automatically sucked into the AI’s program, and it will do things like steal user data or try to penetrate other computers on the same network.
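
To make that failure mode concrete, here is a minimal sketch using the essay’s hypothetical library names (none of these are real packages): simple pattern completion produces a plausible but nonexistent name, and an unclaimed name is exactly what a malicious, pre-registered package can occupy.

```python
# Illustrative sketch only: the dotted names are the essay's hypothetical
# examples, not packages on any real registry.

KNOWN_LIBRARIES = {
    "lib.html.text.parsing",   # follows the common naming pattern
    "lib.docx.text.parsing",   # follows the common naming pattern
    "lib.text.pdf.parsing",    # the real pdf library broke the pattern
}

def pattern_guess(file_format: str) -> str:
    """Complete the name by analogy, the way a statistical model would."""
    return f"lib.{file_format}.text.parsing"

guess = pattern_guess("pdf")
print(guess)                       # lib.pdf.text.parsing -- plausible, but it doesn't exist
print(guess in KNOWN_LIBRARIES)    # False: the name is unclaimed, so an attacker can
                                   # register it and wait for AI-generated code to pull it in
```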

And you, the human in the loop – the reverse centaur – you have to spot this subtle, hard-to-find error, this bug that is indistinguishable from correct code. Now, maybe a senior coder could catch this, because they have been around the block a few times, and they know about this tripwire.

But guess who tech bosses want to preferentially fire and replace with AI? Senior coders. Those mouthy, entitled, extremely highly paid workers, who don’t think of themselves as workers. Who see themselves as founders in waiting, peers of the company’s top management. The kind of coder who would lead a walkout over the company building drone-targeting systems for the Pentagon, which cost Google $10bn in 2018.

For AI to be valuable, it has to replace high-wage workers, and those are precisely the workers who might spot some of those statistically camouflaged AI errors.

If you can replace coders with AI, who can’t you replace with AI? Firing coders is an ad for AI.

Which brings me to AI art – or “art” – which is often used as an ad for AI, even though it is not part of AI’s business model.

Let me explain: on average, illustrators do not make any money. They are already one of the most immiserated, precarious groups of workers out there. If AI image-generators put every illustrator working today out of a job, the resulting wage-bill savings would be undetectable as a proportion of all the costs associated with training and operating image-generators. The total wage bill for commercial illustrators is less than the kombucha bill for the company cafeteria at just one of OpenAI’s campuses.

The purpose of AI art – and the story of AI art as a death knell for artists – is to convince the broad public that AI is amazing and will do amazing things. It is to create buzz. Which is not to say that it is not disgusting that former OpenAI CTO Mira Murati told a conference audience that “some creative jobs shouldn’t have been there in the first place”.

It’s supposed to be disgusting. It’s supposed to get artists to run around and say: “The AI can do my job, and it’s going to steal my job, and isn’t that terrible?”

But can AI do an illustrator’s job? Or any artist’s job?

Let’s think about that for a second. I have been a working artist since I was 17 years old, when I sold my first short story. Here’s what I think art is: it starts with an artist, who has some vast, complex, numinous, irreducible feeling in their mind. And the artist infuses that feeling into some artistic medium. They make a song, a poem, a painting, a drawing, a dance, a book or a photograph. And the idea is, when you experience this work, a facsimile of the big, numinous, irreducible feeling will materialize in your mind.

But the image-generation program does not know anything about your big, numinous, irreducible feeling. The only thing it knows is whatever you put into your prompt, and those few sentences are diluted across a million pixels or a hundred-thousand words, so that the average communicative density of the resulting work is indistinguishable from zero.

It is possible to infuse more communicative intent into a work: writing more detailed prompts, or doing the selective work of choosing from among many variants, or directly tinkering with the AI image after the fact, with a paintbrush or Photoshop or the Gimp. And if there will ever be a piece of AI art that is good art – as opposed to merely striking, interesting or an example of good draftsmanship – it will be thanks to those additional infusions of creative intent by a human.

And in the meantime, it’s bad art. It’s bad art in the sense of being “eerie”, the word that cultural theorist Mark Fisher used to describe “when there is something present where there should be nothing, or there is nothing present when there should be something”.

AI art is eerie because it seems like there is an intender and an intention behind every word and every pixel, because we have a lifetime of experience that tells us that paintings have painters, and writing has writers. But it is missing something. It has nothing to say, or whatever it has to say is so diluted that it is undetectable.


We should not simply shrug our shoulders and accept Thatcherism’s fatalism: “There is no alternative.”

So what is the alternative? A lot of artists and their allies think they have an answer: they say we should extend copyright to cover the activities associated with training a model.

And I am here to tell you they are wrong. Wrong because this would represent a massive expansion of copyright over activities that are currently permitted – for good reason. I’ll explain:

AI training involves scraping a bunch of webpages, which is unambiguously legal under present copyright law. Next, you perform analysis on those works. Basically, you count stuff on them: count pixels and their colors and proximity to other pixels; or count words. This is obviously not something you need a license for.

And after you count all the pixels or the words, it is time for the final step: publishing them. Because that is what a model is: a literary work (that is, a piece of software) that embodies a bunch of facts about a bunch of other works, word and pixel distribution information, encoded in a multidimensional array.

And again, copyright absolutely does not prohibit you from publishing facts about copyrighted works. And again, no one should want to live in a world where someone else gets to decide which factual statements you can publish.

But hey, maybe you think this is all sophistry. Maybe you think I’m full of shit. That’s fine. It wouldn’t be the first time someone thought that.

After all, even if I’m right about how copyright works now, there’s no reason we couldn’t change copyright to ban training activities, and maybe there’s even a clever way to wordsmith the law so that it only catches bad things we don’t like, and not all the good stuff that comes from scraping, analyzing and publishing – such as search engines and academic scholarship.

Well, even then, you’re not gonna help out creators by creating this new copyright. We have monotonically expanded copyright since 1976, so that today, copyright covers more kinds of works, grants exclusive rights over more uses, and lasts longer.

And today, the media industry is larger and more profitable than it has ever been, and also – the share of media industry income that goes to creative workers is lower than it has ever been, both in real terms, and as a proportion of those incredible gains made by creators’ bosses at the media company.

In a creative market dominated by five publishers, four studios, three labels, two mobile app stores, and a single company that controls all the ebooks and audiobooks, giving a creative worker extra rights to bargain with is like giving your bullied kid more lunch money.

It doesn’t matter how much lunch money you give the kid, the bullies will take it all. Give that kid enough money and the bullies will hire an agency to run a global campaign proclaiming: “Think of the hungry kids! Give them more lunch money!”

Creative workers who cheer on lawsuits by the big studios and labels need to remember the first rule of class warfare: things that are good for your boss are rarely what’s good for you.

A new copyright to train models will not get us a world where models are not used to destroy artists, it will just get us a world where the standard contracts of the handful of companies that control all creative labor markets are updated to require us to hand over those new training rights to those companies. Demanding a new copyright just makes you a useful idiot for your boss.

What they’re really demanding is a world where 30% of the investment capital of the AI companies goes into their shareholders’ pockets. When an artist is being devoured by rapacious monopolies, does it matter how they divvy up the meal?

We need to protect artists from AI predation, not just create a new way for artists to be mad about their impoverishment.

Incredibly enough, there is a really simple way to do that. After more than 20 years of being consistently wrong and terrible for artists’ rights, the US Copyright Office has finally done something gloriously, wonderfully right. All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans. That is why the “monkey selfie” is in the public domain. Copyright is only awarded to works of human creative expression that are fixed in a tangible medium.

And not only has the Copyright Office taken this position, they have defended it vigorously in court, repeatedly winning judgments to uphold this principle.

The fact that every AI-created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works – then anyone else can take those works, copy them, sell them or give them away for nothing. And the only thing those companies hate more than paying creative workers, is having other people take their stuff without permission.

The US Copyright Office’s position means that the only way these companies can get a copyright is to pay humans to do creative work. This is a recipe for centaurhood. If you are a visual artist or writer who uses prompts to come up with ideas or variations, that’s no problem, because the ultimate work comes from you. And if you are a video editor who uses deepfakes to change the eyelines of 200 extras in a crowd scene, then sure, those eyeballs are in the public domain, but the movie stays copyrighted.

But creative workers do not have to rely on the US government to rescue us from AI predators. We can do it ourselves, the way the writers did in their historic writers’ strike. The writers brought the studios to their knees. They did it because they are organized and solidaristic, but also because they are allowed to do something that virtually no other workers are allowed to do: they can engage in “sectoral bargaining”, whereby all the workers in a sector negotiate a contract with every employer in the sector.

That has been illegal for most workers since the late 1940s, when the Taft-Hartley Act outlawed it. If we are gonna campaign to get a new law passed in hopes of making more money and having more control over our labor, we should campaign to restore sectoral bargaining, not to expand copyright.


How to burst the bubble

AI is a bubble and bubbles are terrible.

Bubbles transfer the life savings of normal people who are just trying to have a dignified retirement to the wealthiest and most unethical people in our society, and every bubble eventually bursts, taking their savings with it.

But not every bubble is created equal. Some bubbles leave behind something productive. Worldcom stole billions from everyday people by defrauding them about orders for fiber optic cables. The CEO went to prison and died there. But the fiber outlived him. It’s still in the ground. At my home, I’ve got 2Gbps symmetrical fiber, because AT&T lit up some of that old Worldcom dark fiber.

It would have been better if Worldcom had not ever existed, but the only thing worse than Worldcom committing all that ghastly fraud would be if there was nothing to salvage from the wreckage.

I do not think we will salvage much from cryptocurrency, for example. When crypto dies, what it will leave behind is bad Austrian economics and worse monkey jpegs.

AI is a bubble and it will burst. Most of the companies will fail. Most of the datacenters will be shuttered or sold for parts. So what will be left behind?

We will have a bunch of coders who are really good at applied statistics. We will have a lot of cheap GPUs, which will be good news for, say, effects artists and climate scientists, who will be able to buy that critical hardware at pennies on the dollar. And we will have the open-source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video; describing images; summarizing documents; and automating a lot of labor-intensive graphic editing – such as removing backgrounds or airbrushing passersby out of photos. These will run on our laptops and phones, and open-source hackers will find ways to push them to do things their makers never dreamed of.

If there had never been an AI bubble, if all this stuff arose merely because computer scientists and product managers noodled around for a few years coming up with cool new apps, most people would have been pleasantly surprised with these interesting new things their computers could do. We would call them “plugins”.

It’s the bubble that sucks, not these applications. The bubble doesn’t want cheap useful things. It wants expensive, “disruptive” things: big foundation models that lose billions of dollars every year.

When the AI-investment mania halts, most of those models are going to disappear, because it just won’t be economical to keep the datacenters running. As Stein’s law has it: “Anything that can’t go on forever eventually stops.”

The collapse of the AI bubble is going to be ugly. Seven AI companies currently account for more than a third of the stock market, and they endlessly pass around the same $100bn IOU.

AI is the asbestos in the walls of our technological society, stuffed there with wild abandon by a finance sector and tech monopolists run amok. We will be excavating it for a generation or more.

To pop the bubble, we have to hammer on the forces that created the bubble: the myth that AI can do your job, especially if you get high wages that your boss can claw back; the understanding that growth companies need a succession of ever more outlandish bubbles to stay alive; the fact that workers and the public they serve are on one side of this fight, and bosses and their investors are on the other side.

Because the AI bubble really is very bad news, it’s worth fighting seriously, and a serious fight against AI strikes at its roots: the material factors fueling the hundreds of billions in wasted capital that are being spent to put us all on the breadline and fill all our walls with hi-tech asbestos.

  • Cory Doctorow is a science fiction author, activist and journalist. He is the author of dozens of books, most recently Enshittification: Why Everything Suddenly Got Worse and What To Do About It. This essay was adapted from a recent lecture about his forthcoming book, The Reverse Centaur’s Guide to Life After AI, which is out in June

  • Spot illustrations by Brian Scagnelli


400km Hydrogen Pipeline With No Users Will Raise Germany’s Electricity Prices - CleanTechnica




Germany recently completed and pressurized the first roughly 400km segment of its national hydrogen backbone. The pipes are in the ground, the compressors work, and the system is technically ready. There is only one problem. There are no meaningful hydrogen suppliers connected and no material customers contracted. This is not a commissioning delay or a temporary mismatch. It is a structural failure of demand. The reason this matters far beyond hydrogen policy is simple. The cost of this infrastructure will not disappear. It will persist for decades and will be paid for through higher electricity bills.

The original intent behind Germany’s hydrogen backbone was straightforward and politically appealing. Hydrogen was framed as a future energy carrier that would replace natural gas across multiple sectors. A national transmission network of around 9,000km was proposed, with individual corridors sized at 10GW to 20GW. The idea was to build the infrastructure first and allow supply and demand to follow. Hydrogen would serve steel, chemicals, transport fuels, dispatchable power generation, and heavy industry. In policy documents and commissioned studies, projected hydrogen demand rose quickly into the 100 TWh to 130 TWh range for 2030 and beyond. At that scale, a national backbone looked reasonable.

A note on units, because Germany’s choice of units supports a lot of bad assumptions about hydrogen: the national strategy expresses hydrogen in TWh. At hydrogen’s lower heating value, 1 kg of hydrogen contains about 33.3 kWh of usable chemical energy; the lower heating value convention means the latent heat in the water vapor formed during combustion is not counted, because most real systems do not recover it. On that basis, 1 TWh of hydrogen corresponds to about 30,000 tons of hydrogen.
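
For readers who want to check that conversion, a minimal sketch in Python, using only the 33.3 kWh/kg LHV figure quoted above:

    # Rough unit-conversion check: hydrogen energy (LHV basis) to mass.
    LHV_KWH_PER_KG = 33.3  # lower heating value of hydrogen, kWh per kg

    def twh_to_tons(twh):
        """Convert TWh of hydrogen (LHV basis) to metric tons."""
        kwh = twh * 1e9              # 1 TWh = 1e9 kWh
        kg = kwh / LHV_KWH_PER_KG    # kWh -> kg of hydrogen
        return kg / 1000             # kg -> metric tons

    print(round(twh_to_tons(1)))     # ~30,030 tons per TWh of hydrogen
    print(round(twh_to_tons(130)))   # the 130 TWh projection is ~3.9 million tons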

One recurring analytical error I have highlighted in European hydrogen policy is the persistent misuse of energy units to describe what is fundamentally a material flow problem. Hydrogen is not electricity. It is an industrial feedstock measured and traded in kilograms and tons, yet European strategies repeatedly describe hydrogen demand and infrastructure in TWh, borrowing the language of power systems and gas grids. This unit choice embeds a false analogy, implying hydrogen is a fungible energy carrier moving through the economy like electrons. It obscures mass balance constraints, hides volumetric and compression penalties, and makes pipelines appear comparable to transmission lines.

A further distinction often missed in hydrogen modeling is the difference between a TWh of delivered electricity to a load and a TWh of delivered hydrogen. A TWh of electricity arrives at a customer meter with transmission and distribution losses typically around 5% to 8%, and nearly all of that energy can be converted directly into useful heat or work. A TWh of hydrogen, by contrast, represents chemical energy after a long chain of losses. Producing that hydrogen via electrolysis typically consumes about 1.5 TWh of electricity. Compressing it to pipeline pressures, storing it, and distributing it erodes another 5% to 15%.

If the hydrogen is then used for heating, combustion losses mean that less useful heat reaches the end use than direct electric heating would have delivered from the original electricity. If the hydrogen is used to perform work, such as moving a vehicle, the losses multiply. Fuel cells or engines convert only a fraction of the hydrogen’s chemical energy into motion, leaving overall electricity to wheels efficiency commonly below 30%. In practical terms, a TWh of electricity delivers close to a TWh of service, while a TWh of hydrogen often represents two to three TWh of upstream electricity consumed to deliver the same or less useful outcome. Using TWh embeds the primary energy fallacy in German and European energy policy.
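
To make the chain concrete, here is an illustrative sketch of useful output per TWh of upstream electricity, using the round-number losses quoted above. The specific end-use efficiencies (resistive heat, hydrogen boiler, fuel cell plus drivetrain) are my assumptions for illustration, not figures from the article:

    def direct_electric(elec_twh, grid_loss=0.07):
        """Electricity used directly: only grid losses apply."""
        return elec_twh * (1 - grid_loss)

    def via_hydrogen(elec_twh, end_use_eff,
                     elec_per_h2=1.5,      # TWh of electricity per TWh of hydrogen
                     handling_loss=0.10):  # compression, storage, distribution
        h2 = elec_twh / elec_per_h2
        return h2 * (1 - handling_loss) * end_use_eff

    one = 1.0
    print(f"direct use of electricity: ~{direct_electric(one):.2f} TWh of service")
    print(f"hydrogen burned for heat (assumed 85% boiler): ~{via_hydrogen(one, 0.85):.2f} TWh")
    print(f"hydrogen to wheels (assumed 45% fuel cell + drivetrain): ~{via_hydrogen(one, 0.45):.2f} TWh")
    # roughly 0.93 vs 0.51 vs 0.27: two to three TWh of upstream electricity per TWh of useful hydrogen service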

When hydrogen demand is expressed in tons, it is immediately placed in its proper category as an industrial material rather than an energy flow. Germany’s realistic end state hydrogen requirement is a few hundred thousand tons per year, which is comparable to other specialized chemical feedstocks and entirely inconsistent with the scale implied by national energy infrastructure. Framed this way, hydrogen looks like something to be produced where it is cheapest, shipped where it is needed—likely in intermediate products such as hot briquetted iron, ammonia and methanol—and used sparingly in specific processes, not something that warrants a country-spanning transmission network. When the same quantities are expressed in TWh, they encourage planners to think in terms of power systems and pipelines rather than chemistry and supply chains. This unit choice inflated perceived scale, blurred the distinction between energy and material use, and helped justify a hydrogen backbone that only makes sense if hydrogen is misclassified as a general energy commodity.

The direct problem is that none of the hydrogen volume assumptions, regardless of units, survive contact with physics, economics, or observed market behavior. Start with supply. Germany is not a low cost electricity jurisdiction. Industrial power prices have been persistently high relative to most of the world, and electrolysis only converts electricity into hydrogen with losses. Even optimistic system assumptions require 50kWh to 55kWh of electricity per kilogram of hydrogen. At German power prices, domestic green hydrogen struggles to compete with imports even before compression, storage, and distribution costs are included. Electrolyser buildout has lagged targets, and there is no credible path to producing tens of TWh of hydrogen domestically at competitive cost.
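
To see why domestic electrolysis struggles at German power prices, a rough sketch using the 50 kWh to 55 kWh per kg figure above; the power prices below are illustrative assumptions, not figures from the article:

    # Electricity-only cost of electrolytic hydrogen at roughly 50-55 kWh per kg.
    # Power prices here are assumed for illustration.

    def electricity_cost_per_kg(power_price_usd_per_kwh, kwh_per_kg=52.5):
        """Electricity cost per kg of hydrogen, ignoring capex, compression and storage."""
        return power_price_usd_per_kwh * kwh_per_kg

    for label, price in [("assumed $0.08/kWh", 0.08), ("assumed $0.15/kWh", 0.15)]:
        print(label, "->", round(electricity_cost_per_kg(price), 2), "USD/kg")
    # ~$4.2/kg and ~$7.9/kg for electricity alone, before any other cost in the chain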

Imports were supposed to close the gap. Ports such as Rostock and Wilhelmshaven were highlighted as gateways for hydrogen and hydrogen derivatives. In practice, exporters prefer to ship finished molecules such as ammonia, methanol, or iron products rather than gaseous hydrogen. Dedicated hydrogen pipelines from other countries have been delayed, resized, or quietly abandoned when buyers declined to sign contracts at required prices. Germany built transmission capacity before securing supply at scale, and the suppliers did not appear.

Table: Germany’s hydrogen strategy demand projections contrasted with reality (table by author).

The demand side is where the strategy truly collapses. Oil refining has historically been Germany’s largest hydrogen consumer, using roughly 25 TWh to 30 TWh—750,000 to 900,000 tons—per year for hydrocracking and desulfurization. That demand exists only because Germany refines fossil fuels. In any credible decarbonization pathway, fuel refining declines steadily and eventually disappears. In an end state with no refined fossil fuels, refinery hydrogen demand goes to zero. There is no offsetting growth from petrochemicals, because German refineries are fuel oriented. About 85% to 90% of crude oil processed in Germany becomes fuels, not chemical feedstocks.

Petrochemicals remain, but their hydrogen demand is far smaller than often implied. Steam crackers do not consume hydrogen. They typically produce hydrogen as a byproduct, on the order of 1.5% to 3% of feed by mass. Some hydrogen is required for selective hydrogenation and purification steps in aromatics and specialty chemicals, but the quantities are bounded. A conservative upper estimate is 5 kg to 10 kg of hydrogen per ton of petrochemical product. Applied to Germany’s chemical output, that yields roughly 4 TWh to 8 TWh—120,000 to 240,000 tons—of hydrogen demand. This is the largest durable hydrogen use case in a fuel free Germany, and it is an order of magnitude smaller than what backbone planners assumed.
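
A back-of-envelope check on that petrochemical bound. The 5 kg to 10 kg per ton intensity is from the text; the roughly 24 million tons of annual product volume is my assumption, back-calculated so the result reproduces the range above:

    # Bounding petrochemical hydrogen demand. Product volume is an assumption.
    LHV_KWH_PER_KG = 33.3
    product_tons_per_year = 24e6     # assumed annual petrochemical output, tons

    for kg_per_ton in (5, 10):
        h2_tons = product_tons_per_year * kg_per_ton / 1000
        h2_twh = h2_tons * 1000 * LHV_KWH_PER_KG / 1e9
        print(f"{kg_per_ton} kg/t -> {h2_tons/1e3:.0f} kt of hydrogen, ~{h2_twh:.0f} TWh")
    # 5 kg/t -> 120 kt (~4 TWh); 10 kg/t -> 240 kt (~8 TWh)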

Ammonia is often presented as another anchor customer for domestic hydrogen, but the economics point in a different direction. Ammonia production in Germany has already shown how exposed it is to energy prices, with plants shutting down or idling during periods of high electricity and gas costs. What Germany is competitive at is not bulk ammonia synthesis, but the downstream, higher value manufacturing that uses ammonia as an intermediate, including fertilizers, nitric acid, and specialty chemical products. In a realistic end state, Germany would import green ammonia from regions with abundant low cost electricity and established export logistics, then convert that ammonia domestically into higher value derivatives close to end markets. This preserves industrial employment and value creation while minimizing energy system costs. Under this model, domestic hydrogen demand for ammonia synthesis largely disappears, aside from a few niche or transitional facilities, and treating ammonia as a stable domestic hydrogen sink misreads how chemical value chains and trade actually function.

Steel is the centerpiece of Germany’s hydrogen narrative and one of its many weak links. Strategy documents assume roughly 14 million tons to 15 million tons of domestic hydrogen based direct reduced iron capacity by 2030, corresponding to about 28 TWh to 29 TWh—840,000 to 870,000 tons—of hydrogen demand. This assumes that German steelmakers will run large DRI modules on green hydrogen produced or delivered domestically. That assumption fails on several fronts. Germany already produces about 35 million tons to 37 million tons of crude steel per year while consuming only about 26 million tons to 27 million tons domestically. The rest is exported into competitive global markets. Cost matters.
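
As a rough cross-check on those steel figures, a sketch assuming a hydrogen intensity of about 55 kg to 60 kg per ton of direct reduced iron; that intensity is an illustrative assumption, since the article gives only the aggregate numbers:

    # Cross-check of the hydrogen-based DRI figures. The per-ton hydrogen
    # intensity is assumed for illustration, not taken from the article.
    LHV_KWH_PER_KG = 33.3
    dri_capacity_mt = (14, 15)           # million tons of DRI per year, from the text
    h2_intensity_kg_per_ton = (55, 60)   # assumed kg of hydrogen per ton of DRI

    for mt, kg_per_t in zip(dri_capacity_mt, h2_intensity_kg_per_ton):
        h2_tons = mt * 1e6 * kg_per_t / 1000
        h2_twh = h2_tons * 1000 * LHV_KWH_PER_KG / 1e9
        print(f"{mt} Mt DRI at {kg_per_t} kg/t -> {h2_tons/1e3:.0f} kt of hydrogen, ~{h2_twh:.0f} TWh")
    # roughly 770-900 kt of hydrogen, or about 26-30 TWh, consistent with the 28-29 TWh cited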

Germany currently produces about one third of its steel in electric arc furnaces. The United States operates at roughly 71% EAF. Germany cannot reach that level because of product mix and residual contamination limits, but it can plausibly reach 45% to 55% EAF using better scrap sorting and blending. That shift alone displaces a large share of primary steelmaking without any hydrogen. The remaining need for clean iron units is best met by importing hot briquetted iron produced where electricity is cheap, or by using biomethane based DRI domestically before hydrogen based DRI. Biomethane with carbon capture produces a concentrated biogenic CO2 stream for sequestration and avoids hydrogen entirely. Under this rational pathway, domestic hydrogen demand for steel goes to zero.

That upper bound of roughly 55% EAF is not necessarily permanent, by the way, but it is a realistic constraint under today’s conditions of scrap quality and product mix. Germany’s limitation is not conceptual but material. Its scrap stream is more contaminated and its steel demand skews toward high end flat and precision products. Over time, both of those constraints could soften. One pathway is active scrap triage, where the most copper and tin contaminated scrap is deliberately separated and exported, while the cleanest scrap fractions are retained for domestic EAF use. That approach treats scrap quality as a strategic resource rather than a homogeneous waste stream. Another pathway is the eventual commercialization of impurity removal processes that are currently confined to laboratories or pilot plants. If imported green iron units remain structurally expensive relative to fossil iron and carbon pricing tightens further, processes that selectively remove copper or other residuals from scrap would become competitive at the margin. If either or both of these developments materialize, Germany could push scrap based EAF production beyond today’s plausible ceiling, and it should pursue that strategy. For now, however, that ceiling reflects present economics and metallurgy, not an immutable physical limit.

Transport was another major projected demand wedge. In reality, battery electric vehicles dominate road transport on cost and efficiency. Hydrogen trucks have failed to scale and are being abandoned, while battery electric trucks are taking market share. Hydrogen trains are dead, with Alstom leaving the space entirely and German transit agencies ditching their hydrogen plans. Aviation and shipping fuels, where hydrogen appears indirectly as an input to biofuel hydrotreating, are imported molecules. Germany is not going to produce e fuels domestically at scale using high priced electricity, and e fuels will at best play a niche role filling any biofuel gaps. Setting transport and e fuels to zero domestic hydrogen demand is not aggressive. It reflects market outcomes already visible.

Power generation is often cited as a future hydrogen sink through hydrogen ready gas plants. Capacity is not demand. A plant that runs a few hundred hours per year as insurance does not consume TWh of fuel. Hydrogen is an expensive way to provide dispatchable power compared to batteries, grids, and demand response. Annual hydrogen consumption for power in Germany is likely measured in fractions of a TWh, if it exists at all.

When all of these sectors are examined honestly, Germany’s realistic steady state hydrogen demand collapses. Instead of 110 TWh to 130 TWh, the number is about 4 TWh to 14 TWh—120,000 to 420,000 tons—with the lower end representing petrochemicals only and the upper end including residual ammonia or niche uses. Spread over a year, the lower end corresponds to roughly 0.5 GW of continuous hydrogen flow and the midpoint to about 1 GW. Even allowing for peaks, 2 GW covers the system, but buffering with storage would be more reasonable than a 2 GW pipeline.
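
The conversion from annual energy to continuous flow is just division by the 8,760 hours in a year; for anyone who wants to reproduce the arithmetic, a minimal sketch:

    # Average continuous flow implied by an annual energy demand:
    # GW_average = TWh_per_year * 1000 / 8760 hours.
    HOURS_PER_YEAR = 8760

    def average_gw(twh_per_year):
        return twh_per_year * 1000 / HOURS_PER_YEAR

    for twh in (4, 9, 14):   # low end, midpoint and high end of the realistic range
        print(f"{twh} TWh/yr ~ {average_gw(twh):.2f} GW continuous")
    # ~0.46 GW, ~1.0 GW and ~1.6 GW respectively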

Now compare that to the hydrogen backbone that is being built. The commissioned 400km segment alone is framed as having around 20 GW of capacity. At full utilization, that corresponds to roughly 175 TWh—5.25 million tons—per year. Against a realistic demand of 4 TWh to 8 TWh, this is an overbuild of about 22x to 44x. Even against generous peak assumptions, the system is scaled an order of magnitude too large. This is not a rounding error. It is a fundamental mismatch between infrastructure and need.
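
The overbuild arithmetic, reproduced as a sketch from the figures above:

    # Designed throughput of the 20 GW segment vs. realistic demand.
    HOURS_PER_YEAR = 8760
    LHV_KWH_PER_KG = 33.3
    capacity_gw = 20

    design_twh = capacity_gw * HOURS_PER_YEAR / 1000        # ~175 TWh/year at full utilization
    design_tons = design_twh * 1e9 / LHV_KWH_PER_KG / 1000  # ~5.26 million tons/year

    print(f"design throughput: ~{design_twh:.0f} TWh, ~{design_tons/1e6:.2f} Mt per year")
    for realistic_twh in (4, 8):
        print(f"vs {realistic_twh} TWh of demand: overbuild ~{design_twh/realistic_twh:.0f}x")
    # ~44x against 4 TWh and ~22x against 8 TWh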

Germany’s hydrogen backbone also bakes in a severe unit cost problem that is temporarily hidden by subsidized ramp-up tariffs at the start of the pipeline’s lifetime. In the early years, hydrogen network charges are deliberately set well below full cost recovery to make hydrogen appear affordable to hypothetical users, with the shortfall deferred and socialized through the regulatory asset base. This creates the impression that transport costs are modest, but it is an accounting artifact, not an economic reality. Even with the artificially low transportation charges, there are no takers because production remains expensive. The core network is expected to require on the order of $500 million to $700 million per year to recover capital and regulated returns.

At the designed utilization of roughly 175 TWh per year, equivalent to about 5.25 million tons of hydrogen, that would translate into a network cost of roughly $0.10 to $0.15 per kg. That benign figure is implicitly assumed in strategy documents. In the realistic end state, however, Germany’s domestic hydrogen demand is closer to 120,000 to 240,000 tons per year. Spread across that volume, the same fixed network costs rise to roughly $2 to $5 per kg of hydrogen, before production, compression, storage, or distribution are counted. The initial subsidy merely postpones this outcome. As the deferred costs are eventually recovered, the roughly 44x mismatch between designed capacity and actual usage ensures that pipeline transport becomes prohibitively expensive per unit, reinforcing weak demand and locking in a long-term subsidy burden that electricity consumers must carry for decades.
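
A sketch of that unit cost arithmetic, using the fixed cost range and volumes quoted above:

    # Fixed network cost per kg of hydrogen under design vs. realistic throughput.
    annual_cost_usd = (500e6, 700e6)   # annual capital recovery range from the text

    def cost_per_kg(annual_usd, tons_per_year):
        return annual_usd / (tons_per_year * 1000)   # tons -> kg

    design_tons = 5.25e6               # ~175 TWh/year at full utilization
    realistic_tons = (120e3, 240e3)    # realistic end-state demand range

    print([round(cost_per_kg(c, design_tons), 2) for c in annual_cost_usd])
    # ~$0.10-0.13/kg at design throughput
    print([round(cost_per_kg(c, t), 1) for c in annual_cost_usd for t in realistic_tons])
    # roughly $2 to $6 per kg across these combinations, in line with the $2 to $5 range above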

The financial implications follow from Germany’s regulatory model. Hydrogen pipelines are treated as regulated assets. Transmission system operators finance construction with debt and equity and place the assets into the regulated asset base. They earn an allowed return and recover depreciation over 30 to 40 years. Utilization is not required for cost recovery. During the ramp up period, hydrogen tariffs are deliberately set below cost to attract hypothetical users. The shortfall is accumulated and socialized.

When hydrogen demand does not materialize, the pipes are not written off. There is no stranding trigger. The assets are considered used because they are available. With few hydrogen customers to pay tariffs, costs are shifted across the wider energy system. In practice, this means electricity network charges, levies, and federal budget transfers funded by taxpayers and electricity consumers.

The core hydrogen network is estimated to cost about $20 billion. Spread over 40 years, annualized recovery including returns is on the order of $500 million to $700 million per year. Germany consumes about 500 TWh of electricity annually. Socialized across electricity users, this adds roughly $1 to $1.5 per MWh, or about $0.001 to $0.0015 per kWh. On its own, this looks modest. It is not isolated. It stacks on top of other fixed system costs and raises the baseline price of electricity for decades.
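
The per-consumer arithmetic, as a sketch using the same cost range and roughly 500 TWh of annual electricity consumption:

    # Annualized network cost socialized across German electricity consumption.
    annual_cost_usd = (500e6, 700e6)   # annual capital recovery range
    electricity_twh = 500              # approximate annual German electricity demand

    for cost in annual_cost_usd:
        per_mwh = cost / (electricity_twh * 1e6)   # 1 TWh = 1e6 MWh
        print(f"${cost/1e6:.0f}M per year -> ${per_mwh:.2f}/MWh (~${per_mwh/1000:.4f}/kWh)")
    # roughly $1.0-1.4 per MWh, i.e. about $0.001-0.0014 per kWh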

The more important effect is opportunity cost. $20 billion invested in grid reinforcement, wind, solar, storage, and flexibility would lower wholesale prices, reduce congestion, and speed electrification. Locked into underused pipelines, that capital instead earns regulated returns while delivering no economic value. The result is higher electricity prices than necessary, which slows adoption of heat pumps, electric vehicles, and industrial electrification. Hydrogen overbuild indirectly undermines the energy transition it was supposed to support.

None of this was unpredictable; it was a complete failure of technoeconomic analysis and governance in Germany. In reviewing Germany’s hydrogen assumptions, I examined work from organizations that are widely treated as authoritative in research and policy analysis, including Fraunhofer institutes, Agora Energiewende, Deutsche Energie-Agentur (dena), the Potsdam Institute for Climate Impact Research (PIK), European Commission modeling groups, and consultancies such as DNV working closely with gas transmission operators.

When reviewing the studies, a consistent pattern emerged. Hydrogen prices were routinely assumed to fall to levels that were disconnected from physical reality, often based on optimistic electrolyser learning curves while quietly excluding the costs of compression to pipeline pressures, storage losses, boil off, reconversion, and distribution. Electricity input prices were frequently taken from best hour renewable scenarios rather than system average prices, even though electrolysers require high utilization to be economical. In parallel, demand was rarely grounded in signed contracts or credible purchasing behavior. Instead, models treated hydrogen demand as an outcome of policy intent, assuming that if infrastructure existed, industry would adapt its processes regardless of cost. This inverted causality allowed demand to be assumed into existence rather than earned through competitiveness.

In one memorable case, a bar chart in a report from PIK showed the energy cost of green hydrogen at half the cost per MWh of the electricity used to create it, an energetically impossible outcome, yet none of the researchers involved or the reviewers of the paper noticed the massive and glaring discrepancy. Instead, the researchers assumed that they had entered the correct numbers for electricity and that the hydrogen would adjust accordingly, not realizing that unrealistically low hydrogen prices were hard coded in the models.

A second recurring issue was institutional bias. Gas transmission operators and their affiliated research partners were deeply embedded in scenario development, and unsurprisingly produced pathways in which repurposed gas pipelines became hydrogen backbones. These studies often compared hydrogen transmission to electricity transmission using energy units, masking volumetric inefficiencies and reinforcing the false equivalence between moving electrons and moving molecules. Hydrogen was framed as a system wide energy carrier rather than a constrained chemical input, which inflated perceived scale and justified national infrastructure. Steel, transport, and power generation demand were repeatedly overstated by assuming hydrogen would be chosen even where simpler electrified alternatives were already cheaper or clearly trending that way.

Perhaps most striking was that these assumptions did not converge toward reality over time. As evidence accumulated that hydrogen trucks were failing, that industrial offtakers were unwilling to sign long term contracts at required prices, and that electrolyser projects were stalling, the models were not revised accordingly. Instead, new reports recycled similar assumptions with minor parameter tweaks, reinforcing the same conclusions. The analytical errors were not hidden or technical. They were structural and visible to anyone checking mass balances, cost stacks, or trade dynamics. Those critiques were published, debated, and dismissed. Germany did not lack warning signals. It chose to proceed anyway, and the consequences are now embedded in steel, concrete, and regulated assets that will shape electricity costs for decades.

The deeper failure is conceptual. Hydrogen makes sense where chemistry requires it. It performs poorly as a way to move or store energy compared to moving electrons directly. Germany blurred that distinction, built policy around the blur, and then committed capital at national scale. The result is a pressurized pipeline with no molecules, no customers, and a long tail of costs.

Germany still has a choice. It can stop expanding the hydrogen backbone now, before more capital is sunk into assets that will never be used economically. It can right size hydrogen infrastructure to regional industrial gas networks measured in single digit GW, not national energy corridors. It can redirect investment toward the electricity system, where decarbonization actually happens. If it does not, electricity consumers will keep paying for a hydrogen fantasy that never matched reality.



‘A bombshell’: doubt cast on discovery of microplastics throughout human body

1 Share

High-profile studies reporting the presence of microplastics throughout the human body have been thrown into doubt by scientists who say the discoveries are probably the result of contamination and false positives. One chemist called the concerns “a bombshell”.

Studies claiming to have revealed micro and nanoplastics in the brain, testes, placentas, arteries and elsewhere were reported by media across the world, including the Guardian. There is no doubt that plastic pollution of the natural world is ubiquitous, and present in the food and drink we consume and the air we breathe. But the health damage potentially caused by microplastics and the chemicals they contain is unclear, and an explosion of research has taken off in this area in recent years.

However, micro- and nanoplastic particles are tiny and at the limit of today’s analytical techniques, especially in human tissue. There is no suggestion of malpractice, but researchers told the Guardian of their concern that the race to publish results, in some cases by groups with limited analytical expertise, has led to rushed results and routine scientific checks sometimes being overlooked.

The Guardian has identified seven studies that have been challenged by researchers publishing criticism in the respective journals, while a recent analysis listed 18 studies that it said had not considered that some human tissue can produce measurements easily confused with the signal given by common plastics.

There is an increasing international focus on the need to control plastic pollution but faulty evidence on the level of microplastics in humans could lead to misguided regulations and policies, which is dangerous, researchers say. It could also help lobbyists for the plastics industry to dismiss real concerns by claiming they are unfounded. While researchers say analytical techniques are improving rapidly, the doubts over recent high-profile studies also raise the questions of what is really known today and how concerned people should be about microplastics in their bodies.


‘The paper is a joke’

“Levels of microplastics in human brains may be rapidly rising” was the shocking headline reporting a widely covered study in February. The analysis, published in a top-tier journal and covered by the Guardian, said there was a rising trend in micro- and nanoplastics (MNPs) in brain tissue from dozens of postmortems carried out between 1997 and 2024.

However, by November, the study had been challenged by a group of scientists with the publication of a “Matters arising” letter in the journal. In the formal, diplomatic language of scientific publishing, the scientists said: “The study as reported appears to face methodological challenges, such as limited contamination controls and lack of validation steps, which may affect the reliability of the reported concentrations.”

One of the team behind the letter was blunt. “The brain microplastic paper is a joke,” said Dr Dušan Materić, at the Helmholtz Centre for Environmental Research in Germany. “Fat is known to make false-positives for polyethylene. The brain has [approximately] 60% fat.” Materić and his colleagues suggested rising obesity levels could be an alternative explanation for the trend reported in the study.

Materić said: “That paper is really bad, and it is very explainable why it is wrong.” He thinks there are serious doubts over “more than half of the very high impact papers” reporting microplastics in biological tissue.

Prof Matthew Campen, senior author of the brain study in question, told the Guardian: “In general, we simply find ourselves in an early period of trying to understand the potential human health impacts of MNPs and there is no recipe book for how to do this. Most of the criticism aimed at the body of work to date (ie from our lab and others) has been conjectural and not buffeted by actual data.

“We have acknowledged the numerous opportunities for improvement and refinement and are trying to spend our finite resources in generating better assays and data, rather than continually engaging in a dialogue.”


‘Bombshell’ doubts

But the brain study is far from alone in having been challenged. One, which reported that patients with MNPs detected in carotid artery plaques had a higher risk of heart attacks and strokes than patients with no MNPs detected, was subsequently criticised for not testing blank samples taken in the operating room. Blank samples are a way of measuring how much background contamination may be present.

Another study reported MNPs in human testes, “highlighting the pervasive presence of microplastics in the male reproductive system”. But other scientists took a different view: “It is our opinion that the analytical approach used is not robust enough to support these claims.”

This study was by Prof Campen and colleagues, who responded: “To steal/modify a sentiment from the television show Ted Lasso, ‘[Bioanalytical assays] are never going to be perfect. The best we can do is to keep asking for help and accepting it when you can and if you keep on doing that, you’ll always be moving toward better.’”

Further challenged studies include two reporting plastic particles in blood – in both cases the researchers contested the criticisms – and another on their detection in arteries. A study claiming to have detected 10,000 nanoplastic particles per litre of bottled water was called “fundamentally unreliable” by critics, a charge disputed by the scientists.

The doubts amount to a “bombshell”, according to Roger Kuhlman, a chemist formerly at the Dow Chemical Company. “This is really forcing us to re-evaluate everything we think we know about microplastics in the body. Which, it turns out, is really not very much. Many researchers are making extraordinary claims, but not providing even ordinary evidence.”

While analytical chemistry has long-established guidelines on how to accurately analyse samples, these do not yet exist specifically for MNPs, said Dr Frederic Béen, at Vrije Universiteit Amsterdam: “But we still see quite a lot of papers where very standard good laboratory practices that should be followed have not necessarily been followed.”

These include measures to exclude background contamination, blanks, repeating measurements and testing equipment with samples spiked with a known amount of MNPs. “So you cannot be assured that whatever you have found is not fully or partially derived from some of these issues,” Béen said.


Biologically implausible

A key way of measuring the mass of MNPs in a sample is, perhaps counterintuitively, vaporising it, then capturing the fumes. But this method, pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS), has come under particular criticism. “[It] is not currently a suitable technique for identifying polyethylene or PVC due to persistent interferences,” concluded a January 2025 study led by Dr Cassandra Rauert, an environmental chemist at the University of Queensland in Australia.

“I do think it is a problem in the entire field,” Rauert told the Guardian. “I think a lot of the concentrations [of MNPs] that are being reported are completely unrealistic.”

“This isn’t a dig at [other scientists],” she added. “They use these techniques because we haven’t got anything better available to us. But a lot of studies that we’ve seen coming out use the technique without really fully understanding the data that it’s giving you.” She said the failure to employ normal quality control checks was “a bit crazy”.

Py-GC-MS begins by pyrolysing the sample – heating it until it vaporises. The fumes are then passed through the tubes of a gas chromatograph, which separates smaller molecules from large ones. Last, a mass spectrometer uses the weights of different molecules to identify them.

The problem is that some small molecules in the fumes derived from polyethylene and PVC can also be produced from fats in human tissue. Human samples are “digested” with chemicals to remove tissue before analysis, but if some remains the result can be false positives for MNPs. Rauert’s paper lists 18 studies that did not include consideration of the risk of such false positives.

Rauert also argues that studies reporting high levels of MNPs in organs are simply hard to believe: “I have not seen evidence that particles between 3 and 30 micrometres can cross into the blood stream,” she said. “From what we know about actual exposure in our everyday lives, it is not biologically plausible that that mass of plastic would actually end up in these organs.”

“It’s really the nano-size plastic particles that can cross biological barriers and that we are expecting inside humans,” she said. “But the current instruments we have cannot detect nano-size particles.”

Further criticism came in July, in a review study in the Deutsches Ärzteblatt, the journal of the German Medical Association. “At present, there is hardly any reliable information available on the actual distribution of microplastics in the body,” the scientists wrote.


Fresh blood

Plastic production has ballooned by 200 times since the 1950s and is set to almost triple again to more than a billion tonnes a year by 2060. As a result, plastic pollution has also soared, with 8bn tonnes now contaminating the planet from the top of Mount Everest to the deepest ocean trench. Less than 10% of plastic is recycled.

An expert review published in the Lancet in August called plastics a “grave, growing and underrecognised danger” to human and planetary health. It cited harm from the extraction of the fossil fuels they are made from, to their production, use and disposal, which result in air pollution and exposure to toxic chemicals.

In recent years, the infiltration of the body with MNPs has become a serious concern, and a landmark study in 2022 first reported detection in human blood. That study is one of the 18 listed in Rauert’s paper and was criticised by Kuhlman.

But the study’s senior author, Prof Marja Lamoree, at Vrije Universiteit Amsterdam, rejected suggestions of contamination. “The reason we focused on blood in the first place is that you can take blood samples freshly, without the interference of any plastics or exposure to the air,” she said.

“I’m convinced we detected microplastics,” she said. “But I’ve always said that [the amount estimated] could be maybe twice lower, or 10 times higher.” In response to Kuhlman’s letter, Prof Lamoree and colleagues said he had “incorrectly interpreted” the data.

Prof Lamoree does agree there is a wider issue. “It’s still a super-immature field and there’s not many labs that can do [these analyses well]. When it comes to solid tissue samples, then the difficulty is they are usually taken in an operating theatre that’s full of plastic.”

“I think most of the, let’s say, lesser quality analytical papers come from groups that are medical doctors or metabolomics [scientists] and they’re not driven by analytical chemistry knowledge,” she said.


Scaremongering

Improving the quality of MNP measurements in the human body matters, the scientists said. Poor quality evidence is “irresponsible” and can lead to scaremongering, said Rauert: “We want to be able to get the data right so that we can properly inform our health agencies, our governments, the general population and make sure that the right regulations and policies are put in place.

“We get a lot of people contacting us, very worried about how much plastics are in their bodies,” she said. “The responsibility [for scientists] is to report robust science so you are not unnecessarily scaring the general population.”

Rauert called treatments claiming to clean microplastics from your blood “crazy” – some are advertised for £10,000. “These claims have no scientific evidence,” she said, and could put more plastic into people’s blood, depending on the equipment used.

Materić said insufficiently robust studies might also help lobbyists for the plastics industry downplay known risks of plastic pollution.

The good news, said Béen, is that analytical work across multiple techniques is improving rapidly: “I think there is less and less doubt about the fact that MNPs are there in tissues. The challenge is still knowing exactly how many or how much. But I think we’re narrowing down this uncertainty more and more.”

Prof Lamoree said: “I really think we should collaborate on a much nicer basis – with much more open communication – and don’t try to burn down other people’s results. We should all move forward instead of fighting each other.”


‘On the safe side’

In the meantime, should the public be worried about MNPs in their bodies?

Given the very limited evidence, Prof Lamoree said she could not say how concerned people should be: “But for sure I take some precautions myself, to be on the safe side. I really try to use less plastic materials, especially when cooking or heating food or drinking from plastic bottles. The other thing I do is ventilate my house.”

“We do have plastics in us – I think that is safe to assume,” said Materić. “But real hard proof on how much is yet to come. There are also very easy things that you can do to hugely reduce intake of MNPs. If you are concerned about water, just filtering through charcoal works.” Experts also advise avoiding food or drink that has been heated in plastic containers.

Rauert thinks most of the MNPs that people ingest or breathe in are probably expelled by their bodies, but said it can’t hurt to reduce your plastics exposure. Furthermore, she said, it remains vital to resolve the uncertainty over what MNPs are doing to our health: “We know we’re being exposed, so we definitely want to know what happens after that and we’ll keep working at it, that’s for sure.”


Some of your cells are not genetically yours — what can they tell us about life and death?


Hidden Guests: Migrating Cells and How the New Science of Microchimerism is Redefining Human Identity. Lise Barnéoud, transl. Bronwyn Haslam. Greystone Books (2025).

The chimaera of Greek mythology was “an evil creature with a lion’s head, a goat’s body, and a serpent’s tail”, notes journalist Lise Barnéoud in Hidden Guests. Humans are also chimaeras — thanks to the presence of cells that are not our own inside our bodies.


Mothers carry cells that came from their biological children, passed across the placenta when the baby was in the womb. Likewise, children carry cells that were transferred to them in utero from their mothers — some of which might even be from the child’s maternal grandmothers, older siblings or twin.

These ‘microchimeric’ cells have been found in every organ that has been studied so far. But they are also rare — much rarer, for example, than the trillions of microorganisms that reside in our guts, on our skin and in many other organs. We carry only one microchimeric cell for every 10,000 to 1 million of our own cells.

In Hidden Guests, the author invites readers to learn about the pioneering “microchimerists” — scientists who discovered these fascinating shared cells. She also challenges us to consider the broader implications for health and science, and the philosophy of the fact that we are all chimaeras.

Unplanned discovery

Microchimeric cells were, we learn, discovered through a series of accidental observations. In the late 1800s, pathologist Georg Schmorl described ‘giant cells’ in the lungs of people who had died from eclampsia — a life-threatening inflammatory condition that can occur during pregnancy. These giant cells resembled the cells of the placenta, leading Schmorl to suggest that fetal cells passing into the bloodstream of mothers was the norm, rather than the exception.


Then, in 1969, a team studying immunity in pregnant people detected white blood cells that contained the Y chromosome in the blood of individuals who would eventually give birth to boys [1]. For more than two decades, it was presumed that these microchimeric cells were a temporary feature of pregnancy. It wasn’t until 1993 that geneticist Diana Bianchi found cells with Y chromosomes in women who had given birth to sons between one and 27 years earlier [2].

This finding overturned the dogma that children inherit genes from their parents and never the other way around — these transferred fetal cells move through the family tree, travelling ‘backwards in time’ from children to their mothers. Bianchi and others would go on to show that these cells have remarkable regenerative properties — promoting wound healing in mothers by transforming into blood vessels or skin cells.

Immunological implications

Microchimerism also calls into question a central tenet of immunology: that the immune system works by classifying cells in a binary fashion, as ‘self’ or ‘non-self’. Under this simplistic model, microchimeric cells should trigger an immune response and be rejected by the body — but they do not. Barnéoud challenges readers to consider whether it is appropriate to force these microchimerism-related discrepancies into the existing rules of immunology, rather than letting such cells inform new ones.

At around the same time that Bianchi was making her groundbreaking discovery, rheumatologist Lee Nelson also found Y-chromosome-containing cells in people who had previously given birth to sons [3]. Nelson was studying autoimmune diseases, which disproportionately affect middle-aged women and which, at the time, were thought to be caused by hormone imbalances.
