
How Much Energy Does It Take To Think? | Quanta Magazine


Studies of neural metabolism reveal our brain’s effort to keep us alive and the evolutionary constraints that sculpted our most complex organ.

Introduction

You’ve just gotten home from an exhausting day. All you want to do is put your feet up and zone out to whatever is on television. Though the inactivity may feel like a well-earned rest, your brain is not just chilling. In fact, it is using nearly as much energy as it did during your stressful activity, according to recent research.

Sharna Jamadar, a neuroscientist at Monash University in Australia, and her colleagues reviewed research from her lab and others around the world to estimate the metabolic cost of cognition — that is, how much energy it takes to power the human brain. Surprisingly, they concluded that effortful, goal-directed tasks use only 5% more energy than restful brain activity. In other words, we use our brain just a small fraction more when engaging in focused cognition than when the engine is idling.

It often feels as though we allocate our mental energy through strenuous attention and focus. But the new research builds on a growing understanding that the majority of the brain’s function goes to maintenance. While many neuroscientists have historically focused on active, outward cognition, such as attention, problem-solving, working memory and decision-making, it’s becoming clear that beneath the surface, our background processing is a hidden hive of activity. Our brains regulate our bodies’ key physiological systems, allocating resources where they’re needed as we consciously and subconsciously react to the demands of our ever-changing environments.

“There is this sentiment that the brain is for thinking,” said Jordan Theriault, a neuroscientist at Northeastern University who was not involved in the new analysis. “Where, metabolically, [the brain’s function is] mostly spent on managing your body, regulating and coordinating between organs, managing this expensive system which it’s attached to, and navigating a complicated external environment.”

The brain is not purely a cognition machine, but an object sculpted by evolution — and therefore constrained by the tight energy budget of a biological system. Thinking may make you feel tired, then, not because you are out of energy, but because you have evolved to preserve resources. This study of neural metabolism, when tied to research on the dynamics of the brain’s electrical firing, points to the competing evolutionary forces that explain the limitations, scope and efficiencies of our cognitive capabilities.

The Cost of a Predictive Engine

The human brain is incredibly expensive to run. At roughly 2% of body weight, the organ gorges on 20% of our body’s energetic resources. “It’s hugely metabolically demanding,” Jamadar said. For infants, that number is closer to 50%.
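To put that 20% figure in perspective, here is a minimal back-of-envelope sketch; the resting metabolic rate used is an assumption, not a figure from the article.

```python
# Rough sanity check of the brain's share of resting energy use.
# Assumed value (not from the article): an adult resting metabolic
# rate of ~1,700 kcal/day.
KCAL_TO_KJ = 4.184
SECONDS_PER_DAY = 24 * 3600

resting_kcal_per_day = 1700   # assumed adult resting metabolic rate
brain_share = 0.20            # ~20% of the body's energy, per the article

brain_kj_per_day = resting_kcal_per_day * brain_share * KCAL_TO_KJ
brain_watts = brain_kj_per_day * 1000 / SECONDS_PER_DAY   # J/s

print(f"Brain energy use: ~{brain_kj_per_day:.0f} kJ/day, i.e. ~{brain_watts:.0f} W")
# -> roughly 1,400 kJ/day, on the order of 15-20 W of continuous power
```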

The brain’s energy comes in the form of the molecule adenosine triphosphate (ATP), which cells make from glucose and oxygen. A tremendous expanse of thin capillaries — an estimated 400 miles of vascular wiring — weaves through brain tissue to carry glucose- and oxygen-rich blood to neurons and other brain cells. Once synthesized within cells, ATP powers communication between neurons, which enact the brain’s functions. Neurons carry electrical signals to their synapses, which allow the cells to exchange molecular messages; the strength of a signal determines whether they will release molecules (or “fire”). If they do, that molecular signal determines whether the next neuron will pass on the message, and so on. Maintaining what are known as membrane potentials — stable voltages across a neuron’s membrane that ensure that the cell is primed to fire when called upon — is known to account for at least half of the brain’s total energy budget.

Measuring ATP directly in the human brain is highly invasive. So, for their paper, Jamadar’s lab reviewed studies, including their own findings, that used other estimates of energy use — glucose consumption, measured by positron-emission tomography (PET), and blood flow, measured by functional magnetic resonance imaging (fMRI) — to track differences in how the brain uses energy during active tasks and rest. When performed simultaneously, PET and fMRI can provide complementary information on how glucose is being consumed by the brain, Jamadar said. It’s not a complete measure of the brain’s energy use because neural tissues can also convert some amino acids into ATP, but the vast majority of the brain’s ATP is produced by glucose metabolism.

Jamadar’s analysis showed that a brain performing active tasks consumes just 5% more energy compared to a resting brain. When we are engaged in an effortful, goal-directed task, such as studying a bus schedule in a new city, neuronal firing rates increase in the relevant brain regions or networks — in that example, visual and language processing regions. This accounts for that extra 5%; the remaining 95% goes to the brain’s base metabolic load.

Researchers don’t know precisely how that load is allocated, but over the past few decades, they have clarified what the brain is doing in the background. “Around the mid-’90s we started to realize as a discipline [that] actually there is a whole heap of stuff happening when someone is lying there at rest and they’re not explicitly engaged in a task,” she said. “We used to think about ongoing resting activity that is not related to the task at hand as noise, but now we know that there is a lot of signal in that noise.”

Much of that signal is from the default mode network, which operates while we’re resting or otherwise not engaged in apparent activity. This network is involved in the mental experience of drifting between past, present and future scenarios — what you might make for dinner, a memory from last week, some pain in your hip. Additionally, beneath the iceberg of awareness, our brains are keeping track of the mosaic of physical variables — body temperature, blood glucose level, heart rate, respiration, and so on — that must remain stable, in a state known as homeostasis, to keep us alive. If any of them stray too far, things can get bad pretty quickly.

Theriault speculates that most of the brain’s base metabolic load goes toward prediction. To achieve its homeostatic goals, the brain needs to always be planning for what comes next — building a sophisticated model of the environment and how changes might affect the body’s biological systems. Prediction, rather than reaction, Theriault said, allows the brain to dole out resources to the body efficiently.

The Brain’s Evolutionary Constraints

A 5% increased energy demand during active thought may not sound like much, but in the context of the entire body and the energy-hungry brain, it can add up. And when you consider the strict energetic constraints our ancestors had to deal with, weariness at the end of a taxing day suddenly makes a lot more sense.

“The reason you are fatigued, just like you are fatigued after physical activity, isn’t because you don’t have the calories to pay for it,” said Zahid Padamsey (opens a new tab), a neuroscientist at Weill Cornell Medicine-Qatar, who was not involved in the new research. “It is because we have evolved to be very stingy systems. … We evolved in energy-poor environments, so we hate exerting energy.”

The modern world, in which calories are relatively abundant for many people, contrasts starkly with the conditions of scarcity that Homo sapiens evolved in. That 5% increase in burn rate, over 20 days of persistent, active, task-related focus, can amount to a whole day’s worth of cognitive energy. If food is hard to come by, it could mean the difference between life and death.
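A quick sketch of the article's arithmetic; the absolute brain energy budget used here is an assumption carried over from the estimate above.

```python
# The article's claim: a 5% higher brain burn rate, sustained over 20 days
# of focused work, adds up to roughly one extra day's worth of brain energy.
# The ~340 kcal/day brain budget is an assumption (20% of ~1,700 kcal/day).
extra_fraction = 0.05      # extra energy during effortful cognition (article figure)
days_of_focus = 20
brain_kcal_per_day = 340   # assumed daily brain energy budget

extra_kcal = extra_fraction * days_of_focus * brain_kcal_per_day
print(extra_kcal / brain_kcal_per_day)   # 1.0 -> one full day's brain energy budget
print(extra_kcal)                        # ~340 kcal of additional energy over those 20 days
```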

“This can be substantial over time if you don’t cap the burn rate, so I think it is largely a relic of our evolutionary heritage,” Padamsey said. In fact, the brain has built-in systems to prevent overexertion. “You’re going to activate fatigue mechanisms that prevent further burn rates,” he said.

To better understand these energetic constraints, in 2023 Padamsey summarized research on certain peculiarities of electrical signaling that indicate an evolutionary tendency toward energy efficiency. For one thing, you might imagine that the faster you transmit information, the better. But the brain’s optimal transmission rate is far lower than might be expected.

Theoretically, the top speed for a neuron to feasibly fire and send information to its neighbor is 500 hertz. However, if neurons actually fired at 500 hertz, the system would become completely overwhelmed. The optimal information rate — the fastest rate at which neurons can still distinguish messages from their neighbors — is half that, or 250 hertz.

Our neurons, however, have an average firing rate of 4 hertz, 50 to 60 times less than what is optimal for information transmission. What’s more, many synaptic transmissions fail: Even when an electrical signal is sent to the synapse, priming it to release molecules to the next neuron, it will do so only 20% of the time.

That’s because we didn’t evolve to maximize total information sent. “We have evolved to maximize information transmission per ATP spent,” Padamsey said. “That’s a very different equation.” When the goal is to send the most information for the least energy (bits per ATP), the optimal neuronal firing rate is under 10 hertz.
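As a rough illustration of the bits-per-ATP idea, here is a toy model in the spirit of classic neural energy-efficiency arguments; the time bin, maintenance cost, and per-spike cost are assumptions chosen purely for illustration, not figures from the article or from Padamsey's work.

```python
import numpy as np

# Toy model of "bits per ATP": a neuron emits at most one spike per 1 ms bin
# with probability p = rate * dt. Information per bin is the binary entropy
# H(p); energy per bin is a fixed maintenance cost plus a much larger
# per-spike cost. Costs are in arbitrary units, chosen only to make the shape
# of the trade-off visible.
dt = 1e-3                             # 1 ms time bins
rates = np.linspace(0.5, 250, 2000)   # firing rates in Hz
p = rates * dt                        # spike probability per bin

def binary_entropy(p):
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

maintenance_cost = 1.0    # assumed fixed cost per bin
spike_cost = 1000.0       # assumed cost of one spike (much larger than maintenance)

bits_per_bin = binary_entropy(p)
energy_per_bin = maintenance_cost + spike_cost * p
bits_per_energy = bits_per_bin / energy_per_bin

best = rates[np.argmax(bits_per_energy)]
print(f"Toy-model optimum: ~{best:.1f} Hz")   # lands in the single-digit Hz range
print(f"Bits/energy at 250 Hz is {bits_per_energy[rates > 249][0] / bits_per_energy.max():.0%} of the optimum")
```

With these assumed costs, the most information per unit energy is delivered at a few hertz, not at the 250 Hz information-rate optimum, which is the qualitative point being made.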

Evolutionarily, the large, sophisticated human brain offered an unprecedented level of behavioral complexity — at a great energetic cost. This negotiation, between the flexibility and innovation of a large brain and the energetic constraints of a biological system, defines the dynamics of how our brain transmits information, the mental fatigue we feel after periods of concentration, and the ongoing work our brain does to keep us alive. That it does so much within its limitations is rather astonishing.


Innovation to Impact: Advancing Solid-State Cooling to Market

RMI

Introduction

As the world encounters another hot summer, cooling is becoming an even hotter topic. Cooling demand is skyrocketing, driven primarily by the Global South and fueled by rising income levels, population growth, urbanization, and increasing global temperatures.

For investors, a booming future market — with most AC purchases yet to be made — opens the opportunity to invest now in superior sustainable cooling solutions that will shape our future.

One such solution is solid-state cooling — a technology class with the potential to revolutionize the cooling industry. Why? Solid-state cooling technology can offer improved efficiency, reduce emissions and energy costs, and eliminate the need for super-polluting refrigerants compared to incumbent century-old vapor-compression technology.

In our first article of this series, we explained what solid-state cooling is, the promise it holds, and why it can be an important solution to the cooling challenge. In this article, we’ll dive into its market potential, market drivers, and what it will take to get these innovative cooling solutions to commercialization and scale.


The advantages of solid-state cooling: No refrigerants, higher performance ceilings, simpler systems

Since the advent of modern air conditioning in the late 19th century, vapor compression has remained the dominant technology powering the global cooling market. This refers to first compressing and condensing a refrigerant, releasing heat in the process, and then expanding and evaporating it to absorb heat and produce a cooling effect. Today, approximately 95 percent of all cooling equipment relies on the vapor compression cycle.

The efficiency improvements associated with this technology have been slow and incremental and are effectively capped by the “Carnot Limit” — determined by the temperatures of the hot and cold reservoirs used by the cooling system.

Because they are not dependent on moving heat between reservoirs, solid-state technologies have already demonstrated much higher potential performance ceilings. Some solid-state technologies can have a coefficient of performance (COP) above 10, almost double the COP of incumbent AC systems, where the best score is roughly 5.5.
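For readers who want the formula behind the Carnot limit mentioned above, here is a minimal sketch; the room and outdoor temperatures are illustrative assumptions, not RMI figures.

```python
# The Carnot limit on cooling COP depends only on the reservoir temperatures:
#   COP_max = T_cold / (T_hot - T_cold)   (temperatures in kelvin)
def carnot_cop_cooling(t_cold_c: float, t_hot_c: float) -> float:
    t_cold = t_cold_c + 273.15
    t_hot = t_hot_c + 273.15
    return t_cold / (t_hot - t_cold)

# Example: keeping a room at 22 C while rejecting heat to 35 C outdoor air
print(f"Carnot COP: {carnot_cop_cooling(22, 35):.1f}")   # ~22.7
# Real vapor-compression systems reach only a fraction of this (best ~5.5 per
# the article), which is why higher practical ceilings are attractive.
```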

The challenge, of course, is to translate this potential into reality. Innovators are working to do just that (i.e., to achieve system-level performance comparable to or higher than their vapor compression counterparts). The team at Pascal, for example, has recently demonstrated that barocaloric materials can deliver effective cooling and heating at pressure levels comparable to those used in conventional air conditioners.

This is an exciting breakthrough in terms of material performance, and it offers a potential pathway toward cooling systems that are energy efficient and easier to design and integrate over time with standard components. Similarly, thermoelectric systems can achieve precision cooling without moving parts — no pistons, compressors, or hydraulics — simplifying the components needed.

The other major advantage of solid-state solutions is that they do away with refrigerants, which often have very high global warming potential (GWP) (for example, R-134a, one of the most common refrigerants used today, has a GWP 1,430 times that of carbon dioxide) and are increasingly subject to regulatory phase-outs.

In sum, while industry incumbents will likely continue to move the needle on vapor compression systems in response to regulatory and market pressure, solid-state cooling has the potential to outpace these improvements. There is step-change potential associated with its refrigerant-free operation, higher performance ceiling, and streamlined system design.


The potential market for solid-state cooling technologies is enormous.

The global cooling market is undergoing a dynamic transformation, creating unprecedented opportunities for sustainable cooling solutions, like solid-state technologies. The global active cooling sector, which includes air conditioning (AC), refrigeration, and mobile cooling, was valued at an estimated $633 billion in 2023 and is expected to top $1 trillion by 2050. Much of this growth will be driven by developing economies, which will comprise 60 percent of this demand, creating a $600 billion market in 2050 — more than doubling from its $272 billion size today.

The global cooling market is large enough and segmented enough that solid-state cooling startups could find sizeable beachheads (starting markets). Take two Third Derivative portfolio companies as examples. MIMiC, based in New York, is developing thermoelectric solid-state systems that can replace the standard packaged terminal AC (PTAC) units you often see in hotel rooms and many multifamily buildings. For US hotels alone, this could be a market worth more than $7 billion. Magnotherm, based in Darmstadt, Germany, is developing magnetocaloric refrigerators for supermarkets, grocery stores, and food and beverage retail. In Europe alone, this is an estimated $17 billion market. Even carving out a niche and targeting a few specialized segments in this market presents a multi-billion-dollar opportunity for young, innovative companies.

While the market for cooling solutions is booming overall, some dynamics are creating particularly favorable conditions for solid-state cooling. For example, there is a push in the regulatory landscape that may support the advancement of solid-state cooling technology. Most directly, regulations that tighten allowable refrigerant GWPs — largely being driven by the EU (building on the accelerating global effort to phase down high-GWP synthetic refrigerants)  — would benefit solid-state as it’s free of potent, high-GWP refrigerants.

Additionally, efficiency standards and incentives, including minimum energy performance standards (like Japan’s Top Runner program which sets performance standards for a range of appliances — labeling the most efficient AC and refrigeration models on the market, encouraging competition between companies to be the “Top Runner”), can support efficient solid-state cooling systems. Cities, states, countries, or regions with strong efficiency standards or incentives could become strong beachhead markets for solid-state cooling startups.

All in all, there is a perfect storm brewing to disrupt a global cooling market that has not witnessed a radical and environmentally sustainable innovation for nearly a century — either through innovations in vapor compression systems or alternative approaches, like solid-state cooling.


To enter the mainstream, solid-state cooling still has challenges to overcome

As a nascent technology, solid-state cooling systems are still relatively scarce in the market. While the efficiency potential is significant, there remains a wide gap between having an efficient material and building an efficient, integrated system. Startups often face challenges when it comes to system integration — combining materials with components like heat exchangers, controllers, and power supplies, without significant losses in efficiency. For example, the most efficient elastocaloric materials right now have a material-level coefficient of performance (COP) above 10, but once fully integrated into a cooling system, COPs will likely be more comparable with vapor compression systems to start (a COP of around 3).

Material fatigue is another critical hurdle for some approaches, namely barocaloric and elastocaloric, which generate cooling through the repetitive stretching or compression of materials. Consumers expect their air conditioners and refrigerators to last 15 years or more, and solid-state systems must demonstrate long-term reliability under continuous cycling.

Supply chain limitations — particularly for magnetocaloric systems, which rely on rare earth materials for permanent magnets — pose additional challenges. However, several solid-state startups are proactively working to leverage existing supply chains for components, reducing supply chain risks and offering a pathway toward sustainability in the future. AI can play a role in material discovery, identifying new, more promising materials for solid-state cooling. One Third Derivative portfolio company, Matnex, is working on just this — identifying and scaling new materials using AI and machine learning — which could support solid-state innovators as well.

Above all, the most significant challenge facing solid-state cooling today is cost. Like many emerging technologies, solid-state systems will initially come to market at a premium price.

Solar panels, for example, were once over 100 times more expensive than they are today. With economies of scale, optimized manufacturing processes, and improvements in the cell technology itself, costs fell dramatically. The historical “learning rate” — the fall in costs associated with each doubling of production volumes —  for solar panels is 20 percent.
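A minimal sketch of how a constant learning rate compounds across doublings of cumulative production; the numbers are illustrative, not projections from this article.

```python
# Wright's Law: cost falls by a fixed fraction (the "learning rate") with every
# doubling of cumulative production. A 20% learning rate compounds dramatically.
def cost_after_doublings(initial_cost: float, learning_rate: float, doublings: int) -> float:
    return initial_cost * (1 - learning_rate) ** doublings

initial = 100.0   # arbitrary starting cost index
for d in (5, 10, 15, 20):
    print(d, round(cost_after_doublings(initial, 0.20, d), 1))
# 5 doublings -> ~32.8, 10 -> ~10.7, 15 -> ~3.5, 20 -> ~1.2
# i.e. roughly a 99% reduction after ~20 doublings at a 20% learning rate
```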

Given the urgency of climate change and the global need for efficient, sustainable cooling, there are strong reasons to believe that market forces will drive adoption and therefore open a path for cost reductions for solid-state cooling technologies in the near future.

Some startups are already demonstrating cost effectiveness. For example, UK-based startup Anzen Walls, supported by Third Derivative ecosystem partner Carbon 13, is developing thermoelectric heat pumps for homes that target a price comparable to or lower than traditional heat pumps.

Some recent partnerships between solid-state cooling startups and original equipment manufacturers (OEMs) such as Carrier, Copeland, and Trane show opportunities to cut costs and make solid-state more affordable. OEMs are very experienced in system and cost optimization and have access to large-scale distribution networks that would rapidly streamline this process. If startups can demonstrate performance and reliability, OEM partners can help drive down costs while offering access to established distribution networks, manufacturing infrastructure, and customer relationships, thereby paving the pathway for commercialization and scaling of these technologies.


What are the pathways to market?

There is a collection of exciting startups in the solid-state cooling space. The typical path to market holds true for solid-state cooling as well:

  1. Refinement: continued product performance refinement with grant and VC support.
  2. Validation: startups will build pilot manufacturing facilities on their own to validate performance with real-world pilots. Alternatively, early partnerships with manufacturers build confidence and can open doors for future investment or even acquisition. This approach isn’t new to this sector. In 2020, Emerson acquired 7AC – a startup developing more efficient air conditioning technology through liquid desiccants – after collaborating with the company to commercialize the new technology.
  3. Demand: market interest indicated through demand signals. This is already appearing in the space – Walmart and IKEA have both committed to significantly reducing, or eliminating, high-GWP refrigerants.
  4. Partnerships: a faster, scalable path necessitates partnerships with manufacturers through licensing, direct sales of components, joint ventures, or acquisition to bring their innovations to market. Manufacturers in the cooling space are rather consolidated and very well-established – they not only take up major market shares, but they also have deep expertise in system design, cost reduction and market access. Additional partnerships with testing bodies will need to be established for standards to be developed for solid-state cooling and integrated into existing standards.

Solid-state’s potential has not gone unnoticed—established manufacturers are actively monitoring and engaging with the space. For example, Carrier Ventures recently invested in elastocaloric startup Exergyn, while Copeland has backed thermoacoustic heat pump startup BlueHeart Energy. These moves signal growing industry confidence in solid-state technologies as the next frontier in sustainable cooling.


Solid state’s right to win in the cooling market

Solid-state technologies are emerging as a potential frontrunner to disrupt the cooling market, but what gives solid-state a “right to win” or an unbeatable edge in the market? Some aspects are yet to be fully proven, but there are two exciting edges that solid-state offers: the elimination of potent refrigerants, and the potential for very high performance ceilings. If the performance is actualized (for example, achieving system COPs of at least 3), solid-state will likely have a right to win in certain beachhead markets, and use those footholds to scale beyond.

For investors, this presents a timely opportunity to place an early bet on a rapidly evolving technology with major promise. Interest from major OEMs signals strong industry momentum toward a new era of cooling. Looking ahead, two key areas to watch are how startups improve performance and drive down costs — with effective systems integration being key to both. We see a clear opportunity for early-stage capital — especially pre-seed and seed investments — to play a pivotal role in supporting startups as they scale and commercialize their innovations.

The authors would like to thank Blue Haven Initiative for funding this research, and Ankit Kalanki, Chetan Krishna, and Shruti Naginkumar Prajapati for their contributions.


Eating wild animals might be sustainable for the few, but not for the many


I’ve written a lot about the environmental impacts of our food systems. People often underestimate the impact of what we eat on everything from climate change to land use, biodiversity loss, water use, and pollution. For most people: if you want to reduce the environmental impact of your diet then eating less meat and dairy is the best place to start. That’s where you’ll have the largest impact.

Our livestock systems today are big. Not just in terms of land use, but the number of animals we slaughter every year. It’s over 80 billion animals (or hundreds of millions a day) and that’s just for those on land. Include fish and shrimp, and the total is in the trillions. Most of those animals are factory-farmed, living pretty miserable — and often painful — lives.

Some of those who are concerned about the environmental and welfare impacts of their diet either reduce how much meat and dairy they’re eating, or cut it out completely. I did the former then progressed to the latter.

But another alternative that I get asked about a lot, and goes through popularity waves in the media, is consuming wild meat instead.

“I’ve heard that wild meat is much more sustainable, is it better for me to eat that?”

Usually when I speak to people about this, they start getting bogged down in metrics about carbon footprints. But I think that these standard environmental footprint metrics are often missing the point when it comes to the sustainability of wild meat. Or, at least, it’s not the climate impact that’s the biggest “problem”: it’s the fact that we eat a lot of meat, and there’s not that much wild meat out there. The biggest limitation is not molecules of carbon dioxide, but the risk that we quickly run out.

To test this, I wanted to run some quick numbers on how supply of wild meat might stack up against demand. How long could wild meat keep us going for?

Note that some readers will think that we shouldn’t eat wild animals either, for ethical reasons.1 That’s fine and valid. But this article is not about the morality of eating meat, which is a whole separate discussion. It’s purely trying to get a sense of how current levels of meat consumption compare to how much meat is available from wild sources.

Let’s take the UK as an example. Overpopulation of deer, which can negatively impact landscapes and other wildlife, is seen as a problem. Wild venison, then, is sometimes promoted as a sustainable food choice.

Now, this can make sense for some people. But is it good general (i.e., population-wide) advice?

Some back-of-the-envelope calculations.

Here, I’m going to assume there are around 2 million deer in the UK. Now, this is a particularly contentious figure (I didn’t know how controversial it was in some groups until I started trying to find a reasonable estimate). It appears to be a headline number that is quoted a lot, and too confidently given the uncertainty. The reality is that no one really knows this figure with high accuracy.2 Estimates range from as low as 650,000 to 2.5 million. I’ve seen others claim that it’s even higher.

I am using this 2 million number, knowing it is not perfect and will make some people angry, but as we’ll see, the choice of figure won’t change the overall conclusion. If the truth is closer to 650,000, then my point is even more obvious. If there are actually, say, 4 million deer, we can just double the final result.

You might get 25 to 30 kilograms of meat from one deer.3 That’s around 50 million kilograms (or 50,000 tonnes) of deer meat for the entire UK.

Sounds like a lot.

But how much meat do Brits eat? Around 75 kilograms per person per year, which comes to around 5 billion kilograms for the country as a whole.

If we killed every deer in the UK, it could maybe supply 1% of our meat consumption — or three days’ worth of meat. Of course, there would be no deer left, so after three days, our entire “wild meat” transition would be over.

Realistically, we would never kill all our deer at once. I’ve heard people recommend a “sustainable” cull rate of around 20% of the population (again, this is a very contentious figure and depends on population dynamics). So we need to divide our 1% of meat supply by five, giving just 0.2%. That’s around half a day of the country’s meat.
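Here is the back-of-envelope arithmetic above collected in one runnable sketch; the UK population figure is an assumption added here, and the deer yield uses the low end of the 25 to 30 kg range.

```python
# Back-of-envelope numbers from the article, collected in one place.
deer_population = 2_000_000       # contested estimate for the UK
meat_per_deer_kg = 25             # 25-30 kg per deer; use the low end
uk_meat_per_person_kg = 75        # annual UK meat consumption per person
uk_population = 67_000_000        # rough UK population (assumption)

venison_kg = deer_population * meat_per_deer_kg       # ~50 million kg
uk_meat_kg = uk_meat_per_person_kg * uk_population    # ~5 billion kg

share_if_all_culled = venison_kg / uk_meat_kg          # ~1%
share_at_20pct_cull = share_if_all_culled * 0.20       # ~0.2%

print(f"All deer: {share_if_all_culled:.1%} of UK meat, ~{share_if_all_culled * 365:.1f} days' worth")
print(f"20% cull: {share_at_20pct_cull:.2%} of UK meat, ~{share_at_20pct_cull * 365:.1f} days' worth")
```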

The only way this works as general dietary advice is if:

  • Everyone eats a lot less meat than they currently do. Even then, it’s still going to be a pretty small share of total meat supply.

  • It’s very clear that people can only eat wild venison very rarely. And by that, I mean one meal a year or something.

  • Most of the population are resistant to eating venison, or it’s too expensive, so they would never take this advice anyway (this doesn’t seem that implausible to me).

Again, this can work as advice for some people. I’m not saying that your cut of venison is unsustainable. On many metrics, it’s better than farmed livestock. Clearly, 20% of the UK’s deer would provide quite a bit of meat for some consumers. But it does not scale to the population as a whole, and I therefore think it’s not great general advice to dish out.

As I said above, the exact number of deer in the UK is highly uncertain. If there were only 650,000 deer, the percentages would be even smaller. If there were as many as 4 million (which I’m using as a very high estimate), it’s still just 0.4% of the country’s meat.

Maybe Brits just eat a lot of meat, and live on a small island without many animals. How do the numbers stack up globally?

Not much better.

Let’s be “generous” and say that we don’t just restrict it to deer. All of the world’s wild mammals are available, from elephants and rhinos to mice. How we’re going to catch them all is beyond me, but let’s not ruin the hypothetical.4

Wild mammals on land weigh around 20 million tonnes, equivalent to 3 kilograms per person on Earth.5 Not all of that mass is going to be edible, so let’s take a very rough estimate of a 50% “cut”. That would be 1.5 kilograms per person.

Globally, meat supply is around 44 kilograms per person per year (this doesn’t include fish).6

This means that all of the world’s wild land mammals would meet just two weeks of our demand for meat. Within a fortnight, they’d all be gone, and their populations would not rebuild.

If we were to also include mammals in the ocean (mostly whales), we’d still only have enough for just over one month.7
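The same arithmetic at global scale, using the article's figures; the conversion to days of supply is the only step added here.

```python
# The global version of the back-of-envelope calculation (figures from the article).
wild_land_mammal_kg_per_person = 3.0      # ~20 Mt of wild land mammals / ~8 bn people
edible_fraction = 0.5                     # very rough "cut"
global_meat_per_person_kg_per_year = 44   # excludes fish

edible_kg_per_person = wild_land_mammal_kg_per_person * edible_fraction   # 1.5 kg
days_of_supply = edible_kg_per_person / global_meat_per_person_kg_per_year * 365
print(f"Land mammals only: ~{days_of_supply:.0f} days of global meat demand")   # ~12 days
# Including marine mammals (mostly whales), the article's framing stretches
# this to just over one month.
```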

Let me be clear: some populations do rely heavily on wild meat (but in smaller quantities than people eat in many high and middle-income countries today). Many of our ancestors ate wild game, too. For some, it’s an essential source of nutrition and is not necessarily unsustainable.

My point is not that this is “wrong”. It’s that this simply doesn’t scale for 8 billion people. Especially not with our current levels of meat consumption (which, globally, are still increasing). It might work for thousands, or even millions, but not for billions. If we were to all eat in this way, animal populations would quickly disappear (and we’d be left without any wild animals or any meat).


Why Direct Air Capture Won't Replicate the Solar Revolution - CleanTechnica




The remarkable cost declines in solar photovoltaic (PV) and lithium-ion batteries over the past several decades have fueled optimism in the climate policy and investment community, with many hoping direct air capture (DAC) technologies might follow a similar trajectory. Policymakers, investors, and industry proponents frequently draw analogies between DAC and these wildly successful clean-energy technologies, invoking Wright’s Law—a rule of thumb where costs fall predictably with cumulative doubling of production—to justify unjustifiably bullish projections for DAC’s future costs. Given the unrealistic scenarios requiring carbon removal to achieve net-zero emissions, the attractiveness of DAC scaling cheaply and quickly is understandable, yet such optimism demands critical scrutiny grounded in the realities of technology, markets, and physics.

I’m drawn back to this space, one I’ve been exploring for 15 years, doing technoeconomic assessments of Global Thermostat’s applicability in heavy rail and Carbon Engineering’s natural market of enhanced oil recovery, and speaking to many of the leading researchers and entrepreneurs in the space. Why am I drawn back? Because I wrote about Climeworks’ ongoing trainwreck—105 tons total captured from a 40,000-ton annual homeopathy device and now 22% layoffs—recently, and many commenters made it clear that they thought DAC would follow solar and batteries into the nirvana of cheapness.

Solar PV and batteries achieved their cost revolutions through clear, consistent factors. Foremost was Wright’s Law itself: with each doubling of global cumulative production, solar PV saw about a 20% reduction in cost, while lithium-ion batteries experienced roughly a 19% drop per doubling. Historically, Wright’s Law saw 20% to 27% decreases, depending on the simplicity of the product. These impressive and predictable learning rates emerged for solar and batteries because both technologies quickly found mass-market applications with billions of end-users—solar panels across rooftops worldwide and lithium-ion cells powering consumer electronics and, later, electric vehicles. Such enormous, diverse markets spurred massive economies of scale, standardization of manufacturing processes, and continuous incremental innovation, driving down prices dramatically over relatively short periods.

In solar manufacturing, scale-up to gigawatt-scale factories allowed for unprecedented efficiency gains. Automation, production-line standardization, reduced material usage, and steady incremental improvements in cell efficiency combined to achieve a 99% reduction in costs since the 1970s. Batteries followed a similar pattern. Initially expensive lithium-ion cells rapidly benefited from global consumer electronics markets, then exploded in scale with the electric vehicle boom of the 2010s. Innovations in chemistry, manufacturing methods, and supply chain management drove battery costs down by over 90% since 2010 alone. Crucially, both technologies became genuinely commoditized, their costs falling sufficiently low to be attractive purely on market economics, independent of ongoing subsidies.

In stark contrast, DAC technology faces fundamental structural, thermodynamic, and market constraints that severely limit its potential to emulate these learning-curve successes. While DAC systems like those developed by Climeworks and Carbon Engineering also involve engineered modular units, their scale and replicability differ drastically from solar and batteries. Solar PV and battery units are small, identical, easily mass-produced components numbering in the billions, allowing rapid parallel production and iterative optimization. DAC, conversely, involves large, complex industrial-scale modules that process massive volumes of air. Even highly modularized DAC units like those envisioned by Climeworks represent significant, capital-intensive systems, each processing hundreds of thousands to millions of cubic meters of air per ton of CO₂ captured. Achieving large-scale global deployment would involve thousands of units—not billions—limiting opportunities for rapid learning through repetition and optimization.
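A rough sanity check of the "hundreds of thousands to millions of cubic meters of air per ton of CO₂" claim, assuming ideal-gas behavior and a capture efficiency that is purely illustrative.

```python
# How much air must pass through a DAC contactor per ton of CO2 captured,
# at today's ~420 ppm concentration.
CO2_MOLAR_MASS = 44.01     # g/mol
MOLAR_VOLUME = 0.0245      # m^3/mol of air at ~25 C, 1 atm
co2_ppm = 420e-6           # mole fraction of CO2 in ambient air

mol_co2_per_ton = 1e6 / CO2_MOLAR_MASS        # ~22,700 mol
mol_air_needed = mol_co2_per_ton / co2_ppm    # if every CO2 molecule were captured
air_volume_m3 = mol_air_needed * MOLAR_VOLUME

capture_efficiency = 0.5                      # assumed fraction actually captured
print(f"{air_volume_m3:,.0f} m^3 of air per ton at 100% capture")
print(f"{air_volume_m3 / capture_efficiency:,.0f} m^3 at {capture_efficiency:.0%} capture")
# -> on the order of 1-3 million cubic meters of air per ton of CO2
```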

Further compounding this problem, DAC relies heavily on mature, off-the-shelf technologies. Key components such as large industrial fans, chemical sorbents, heat exchangers, compressors, and pumps are already widely used across industries. Unlike emerging semiconductor processes or battery chemistries that initially featured substantial inefficiencies ripe for innovation, DAC’s hardware components are closer to their optimized cost floors, having already benefited from decades of engineering and scale in other applications. Incremental improvements in sorbent chemistry or component efficiency may yield modest savings, but the potential for radical cost reductions through fundamentally new approaches or extensive technological simplifications is inherently limited.

Perhaps the most stubborn barrier DAC faces in following a PV-like cost curve is rooted in basic physics: the energy-intensive nature of extracting CO₂ from the atmosphere. Unlike solar cells, whose primary cost drivers are fabrication efficiency and material utilization, DAC confronts unavoidable thermodynamic constraints. The fundamental minimum energy required to capture CO₂ at the dilute concentrations found in ambient air sets a hard, non-negotiable energy floor. Current DAC operations use energy at several times the theoretical minimum, but even highly optimistic scenarios still require substantial energy input, typically hundreds to thousands of kilowatt-hours per ton of CO₂. Thus, DAC will always incur significant operational energy costs that place a lower bound on achievable pricing, unlike solar panels and batteries, whose unit costs dropped rapidly with better manufacturing processes and materials science advances.
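The hard floor the author refers to can be estimated from ideal mixing thermodynamics; this is the standard textbook lower bound, not a calculation from the article.

```python
import math

# Minimum (reversible) work to separate CO2 from air at ~420 ppm into a pure
# stream. Real plants run several times above this bound.
R = 8.314      # J/(mol K)
T = 298.15     # K, ambient temperature
x = 420e-6     # CO2 mole fraction in air

# For a dilute component, the minimum work per mole captured is roughly R*T*ln(1/x)
w_min_j_per_mol = R * T * math.log(1 / x)
w_min_kwh_per_ton = w_min_j_per_mol / 44.01e-3 / 3.6e6 * 1000   # per metric ton of CO2

print(f"~{w_min_j_per_mol / 1000:.1f} kJ/mol, ~{w_min_kwh_per_ton:.0f} kWh per ton of CO2")
# -> roughly 19-20 kJ/mol, i.e. on the order of 120 kWh/ton as an absolute floor
```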

Adding complexity, DAC is physically and materially intensive. Capturing millions of tons of CO₂ per year demands enormous amounts of infrastructure—steel, concrete, sorbent materials, and sophisticated capital equipment. Unlike digital technology or small-scale consumer goods, DAC units cannot shrink significantly or dramatically reduce material inputs without sacrificing performance. Indeed, the large physical dimensions of air contactors, substantial volumes of sorbent material needed, and considerable infrastructure for regeneration and compression suggest that DAC systems will remain heavy, complex installations. As DAC scales, rather than benefit from continuously cheaper materials, increased demand for specialty chemicals and industrial materials may drive prices upward, potentially offsetting some manufacturing efficiency gains. This scenario contrasts sharply with the declining per-unit material intensity that helped accelerate solar and battery cost reductions.

Critically, DAC lacks the autonomous, self-sustaining market demand that propelled solar PV and batteries. Solar power and battery storage offered direct economic benefits to millions of end-users, enabling them to become cost-competitive with conventional energy sources over time. DAC, however, provides an environmental service—carbon removal—whose value remains purely policy-dependent. Without robust carbon pricing, governmental incentives, or regulatory mandates, DAC has no inherent private market demand, severely limiting its potential cumulative production growth. Whereas solar panels and batteries rapidly scaled through consumer and business demand, DAC expansion hinges exclusively on sustained public policy support. Such policy-driven markets are vulnerable to political shifts, budget constraints, and public sentiment, making exponential growth in DAC production far less predictable or assured.

Historical analogues from other large-scale industrial and environmental technologies underscore DAC’s challenging trajectory. Technologies such as nuclear power, large-scale carbon capture on fossil plants, and industrial chemical plants have all faced similar complexities and constraints, often resulting in slow, incremental cost reductions—or even cost escalation—as they scaled. These technologies offer more instructive benchmarks for DAC than solar or batteries, highlighting the cautious reality that DAC may experience only modest learning curves of around 10% per cumulative doubling, far slower than the 20% or more seen in clean-energy consumer markets.

All of this leads to cost reductions of 10% or less per doubling, over far fewer doublings, for DAC fan units, the only component that will see any real volumes. The accompanying chart (not reproduced here) ends at just below 10,000 units. For context, a million ton per year Carbon Engineering system might have 250 contactor units, the basic module in a wall two kilometers long and 20 meters high. They’d have to build 64 km of their system to get to 8,000 fans, and that’s exceedingly unlikely. To get another 10%, they’d have to build 128 km of walls of their system with 16,000 units. To get another 10%, 256 km with 32,000 units.

Meanwhile, a single 1 GW solar farm has around 1.8 million solar panels. The volumes are radically different, and the rates of cost decrease per doubling are radically different.
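A minimal sketch of why those unit counts matter: learning-curve gains compound per doubling of cumulative production, and DAC contactor fans simply cannot double very many times. The unit counts come from the two preceding paragraphs; the learning rates and the solar volume-growth factor are illustrative assumptions.

```python
import math

# Cost reduction compounds per *doubling* of cumulative production.
def doublings(first_units: int, total_units: int) -> float:
    return math.log2(total_units / first_units)

def remaining_cost(learning_rate: float, n_doublings: float) -> float:
    return (1 - learning_rate) ** n_doublings

# DAC contactor/fan units: one 1 Mt/yr plant has ~250 units; 32,000 units is
# the far end of the scenario above. Solar panels, by contrast, can grow in
# volume by factors of thousands from a single 1.8-million-panel farm.
dac_doublings = doublings(250, 32_000)                       # 7 doublings
solar_doublings = doublings(1_800_000, 1_800_000 * 1000)     # ~10 doublings (x1000 growth)

print(f"DAC (10% learning): {dac_doublings:.0f} doublings -> cost x{remaining_cost(0.10, dac_doublings):.2f}")
print(f"Solar (20% learning): {solar_doublings:.1f} doublings -> cost x{remaining_cost(0.20, solar_doublings):.2f}")
# DAC fans end up at roughly half their starting cost; solar drops to ~10%.
```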

Looking forward, expert analyses from independent institutions like the International Energy Agency, Harvard’s Belfer Center, and the National Academies broadly agree: DAC costs will likely remain in the triple-digit dollar range per ton even after decades of scaling. Starry-eyed scenarios predict DAC might achieve costs around $150 to $250 per ton by mid-century under aggressive deployment assumptions. More realistic projections settle higher, acknowledging inherent thermodynamic limits, persistent energy costs, and material constraints. Industry-driven forecasts that envision DAC below $100 per ton are simply delusional, hinging on technological breakthroughs that would require changing the laws of physics and ludicrously low energy cost assumptions as a result.

Given these realities, policymakers and investors must fundamentally rethink their near-term engagement with DAC. Aggressively reducing emissions through proven, lower-cost technologies such as electrification, renewable energy, and energy efficiency should remain the clear and unambiguous priority until energy systems are fully decarbonized and surplus renewable electricity is abundant—likely not until after 2040 and probably beyond 2050. DAC, due to its inherently high energy intensity and substantial infrastructure requirements, should not divert limited resources from direct emission-reduction strategies until we reach a point where clean energy is inexpensive and plentiful.

Policymakers and investors should limit current DAC involvement strictly to research and development, aiming to improve technology performance, reduce energy requirements, and better understand realistic long-term potential. Public spending on commercial-scale DAC deployment or infrastructure is premature and risks locking in inefficient, high-cost solutions before cleaner, lower-cost alternatives are fully exploited.

Carbon removal strategies in the immediate decades should instead emphasize nature-based methods and improved soil carbon sequestration—technologies with significantly lower energy demands and clearer short-term scalability. The belief that we can vacuum enough CO2 out of the atmosphere to reach 2050 goals should be abandoned, and more aggressive decarbonization scenarios driven through.




The Neuroscience of Dopamine: How to Triumph Over Constant Wanting


Michael Long is not the typical neuroscience guy. He was trained as a physicist, but is primarily a writer. He coauthored the international bestseller The Molecule of More. As a speechwriter, he has written for members of Congress, cabinet secretaries, presidential candidates, and Fortune 10 CEOs. His screenplays have been performed on most New York stages. He teaches writing at Georgetown University.

What’s the big idea?

Dopamine is to blame for a lot of your misery. It compels us to endlessly chase more, better, and greater—even when our dreams have come true. Thanks to dopamine, we often feel restless and hopeless. So no, maybe it’s not quite accurate to call it the “happiness” molecule, but it has gifted humans some amazing powers. Dopamine is the source of imagination, creativity, and ingenuity. There are practical ways to harness the strengths of our dopamine drives while protecting and nurturing a life of consistent joy.

Below, Michael shares five key insights from his new book, Taming the Molecule of More: A Step-by-Step Guide to Make Dopamine Work for You.


1. Dopamine is not the brain chemical that makes you happy.

Dopamine makes you curious and imaginative. It can even make you successful, but a lot of times it just makes you miserable. That’s because dopamine motivates you to chase every new possibility, even if you already have everything you want. It turns out that brain evolution hasn’t caught up with the evolution of the world.

For early humans, dopamine ensured our survival by alerting us to anything new or unusual. In a world with danger around every corner and resources hard to acquire, we needed an early warning system to motivate us even more. Dopamine made us believe that once we got the thing we were chasing, we’d be safer, happier, or more satisfied. That served humans well, until it didn’t.

Now that we’ve tamed the world, we don’t need to explore every new thing, but dopamine is still on duty, and it works way out of proportion to the needs of the modern world. Since self-discipline has a short shelf life, I share proven techniques that don’t rely on willpower alone.

2. Dopamine often promises more than reality can deliver.

When we obsess over social media or the news, or shop excessively, we feel edgy and restless. This is because dopamine floods us with anticipation and urgency. We desperately scroll for the next hit, searching for the latest story, or watching the porch for that next Amazon package. As this anticipation becomes a normal way of living, the rest of life starts to feel dull and flat. That restarts the cycle of chasing what we think will make us happy. Then we get it, and when it doesn’t make us happy, we experience a letdown, and that makes us restless all over again.

Here’s how that works for love and romance. When we go on date after date and can’t find the right person or a long-term relationship gets stale, we start to feel hopeless. The dopamine chase has so raised our expectations about reality that we no longer enjoy the ordinary. Now we’re expecting some perfect partner, and we won’t find them because they don’t exist. Fight back with three strategies:

  • Rewire your habits to ditch the chase.
  • Redirect your focus to the here and now.
  • Rebuild meaning, so life feels more like it matters.

I describe specific ways to do this through simple planning, relying more on friendships, doing a particular kind of personal assessment, and there’s even a little technology involved that you wouldn’t expect.

3. Dopamine is the source of imagination.

The dopamine system has three circuits. The first has only a little to do with behavior and feeling, so we’ll set that one aside. The second circuit (that early warning system) is called the desire dopamine system because it plays on our desires. The third system is very different. It’s called the control system, and it gives us an ability straight out of science fiction: mental time travel. You can create in your mind any possible future in as much detail as you like and investigate the results without lifting a finger.

We do this all the time without realizing that’s what it is. Little things like figuring out where to go for lunch: we factor in traffic, how long we’ll have to wait for a table, think over the menu, and game it all out to decide where to go. But this system also lets us imagine far more consequential mental time travel, figuring out the best way to build a building, design an engine, or travel to the moon.

“Dopamine really is the source of creativity and analytical power that allows us to create the future.”

The dopamine control circuit lets us think in abstractions and play out various plans using only our minds. That means not only can we imagine a particular future, but we can also imagine entire abstract disciplines, come to understand them, and make use of them in the real world based on what we thought about. Fields like chemistry, quantum mechanics, and number theory exist because of control dopamine. Dopamine really is the source of creativity and analytical power that allows us to create the future. Dopamine brings a lot of dissatisfaction to the modern world, but we wouldn’t have the modern world without dopamine.

4. You’re missing out on the little things.

When my best friend died at age 39, the speaker at his funeral said, “You may not remember much of what you did with Kent, but it’s okay because it happened.”

I did not know what that could mean, but years later, while writing this book, I got it. We don’t live life just to look back on it. The here and now ought to be fun. You may not remember it all, but while it’s happening, enjoy it. That requires fighting back against dopamine because it’s always saying, Never mind what’s in front of you, think about what might be. When Warren Zevon was at the end of his life, David Letterman asked him what he’d learned. Warren said, “Enjoy every sandwich.”

5. A satisfying life requires meaning, and there’s a practical way to find it.

Even if you fix every dopamine-driven problem in your life, you may still feel like something is missing. To find a satisfying balance between working for the future and enjoying the here and now, we must choose a meaning for life and work toward it as we go.

“If you’re making life better for others with something you do well and enjoy, the days feel brighter and life acquires purpose.”

Is it possible to live in the moment, anticipate the future, and have it add up to something? The psychiatrist Viktor Frankl said we need to look beyond ourselves because that’s where a sense of purpose begins. Aristotle gave us a simple formula for taking pleasure in the present, finding a healthy anticipation for the future, and creating meaning. He said it’s found where three things intersect: what we like to do, what we’re good at, and what builds up the world beyond ourselves. Things like working for justice, making good use of knowledge, or simply living a life of kindness and grace.

What you do with your life doesn’t have to set off fireworks, and you don’t have to make history. You can be a plumber, a mail carrier, or an accountant. I’m a writer. I like what I do. I seem to be pretty good at it, and it helps people. The same can be true if you repair the highway, fix cars, or serve lunch in a school cafeteria. If you’re making life better for others with something you do well and enjoy, the days feel brighter and life acquires purpose. Life needs meaning, and that’s the last piece of the puzzle in dealing with dopamine and taming the molecule of more.



How giant concrete balls on ocean floors could store renewable energy


In an effort to reduce the use of precious land to build renewable energy storage facilities, the Fraunhofer Institute has been cooking up a wild but plausible idea: dropping concrete storage spheres down to the depths of our oceans.

Since 2011, the StEnSea (Stored Energy in the Sea) project has been exploring the possibilities of using the pressure in deep water to store energy in the short-to-medium term, in giant hollow concrete spheres sunken into seabeds, hundreds of feet below the surface.

An empty sphere is essentially a fully charged storage unit. Opening its valve enables water to flow into the sphere, and this drives a turbine and a generator that feed electricity into the grid. To recharge the sphere, water is pumped out of it against the surrounding water pressure using energy from the grid.

Each hollow concrete sphere measures 30 ft (9 m) in diameter, weighs 400 tons, and will be anchored to the sea floor at depths of 1,970 - 2,625 ft (600 - 800 m) for optimal performance.

Fraunhofer has previously tested a smaller model in Europe's Lake Constance near the Rhine river, and is set to drop a full-size 3D-printed prototype sphere to the seabed off Long Beach near Los Angeles by the end of 2026. It's expected to generate 0.5 megawatts of power, and have a capacity of 0.4 megawatt-hours. For reference, that should be enough to power an average US household for about two weeks.
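A rough estimate of how much energy one such sphere could store in the ideal case; the sphere dimensions and depth come from the article, while seawater density and the treatment of losses are assumptions added here.

```python
import math

# Upper bound on the energy one sphere can store: emptying the sphere against
# the surrounding hydrostatic pressure stores roughly E = p * V.
rho = 1025      # kg/m^3, seawater density (assumption)
g = 9.81        # m/s^2
depth = 700     # m, middle of the 600-800 m range from the article
diameter = 9    # m, per the article

volume = (4 / 3) * math.pi * (diameter / 2) ** 3   # ~380 m^3
pressure = rho * g * depth                         # ~7 MPa
energy_mwh = pressure * volume / 3.6e9

print(f"Ideal stored energy: ~{energy_mwh:.2f} MWh")   # ~0.7-0.75 MWh
# The quoted 0.4 MWh usable capacity sits plausibly below this ideal figure
# once pump/turbine losses and operating margins are accounted for.
```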

The bigger goal is to test whether this tech can be expanded to support larger spheres with a diameter of nearly 100 ft (30 m). Fraunhofer researchers estimate StEnSea has a massive global storage potential of 817,000 gigawatt-hours in total – enough to power every one of approximately 75 million homes across Germany, France, and the UK put together for a year.

The institute estimates storage costs at around US 5.1¢ (EUR 4.6¢) per kilowatt-hour, and investment costs at $177 (EUR 158) per kilowatt-hour of capacity – based on a storage park with six spheres, a total power capacity of 30 megawatts, and a capacity of 120 megawatt-hours.

According to Fraunhofer, StEnSea spherical storage is best suited for stabilizing power grids with frequency regulation support or operating reserves, and for arbitrage. The latter refers to buying electricity at low prices and selling it at high market prices – which grid operators, utilities providers, and power trading companies can engage in.

Ultimately, StEnSea could rival pumped storage as a way to store surplus grid electricity, with an obvious advantage: it doesn't take up room on land. Plus, pumped storage only really works when you have two reservoirs at different elevations to drive pumps that act as turbines. While pumped storage is cheaper to run and slightly more efficient over an entire storage cycle, StEnSea can potentially be installed across several locations worldwide to support immense storage capacity.

The US Department of Energy has invested $4 million into the project, so it will be keen to see how the 2026 pilot plays out off the California coast.

If you enjoy discovering all the weird ways in which we produce and store energy, see how falling rain can generate electricity, and get a closer look at plans to turn millions of disused mines around the world into massive underground batteries.

Source: Fraunhofer IEE


Climeworks’ capture fails to cover its own emissions


Climeworks in Iceland has only captured just over 2,400 carbon units since it began operations in the country in 2021, out of the twelve thousand units that company officials have repeatedly claimed the company’s machines can capture. This is confirmed by figures from the Finnish company Puro.Earth on the one hand and from the company’s annual accounts on the other. Climeworks has made international news for capturing carbon directly from the atmosphere. For this, the company uses large machines located in Hellisheiði, in South Iceland. They are said to have the capacity to collect four thousand tons of CO2 each year directly from the atmosphere.

According to data available to Heimildin, it is clear that this goal has never been achieved and that Climeworks does not capture enough carbon units to offset its own operations, whose emissions amounted to 1,700 tons of CO2 in 2023. The company's activities therefore emit more CO2 than its machines capture. Since the company began capturing in Iceland, it has captured a maximum of one thousand tons of CO2 in one year.

The company's operations in Iceland rely entirely on funding from its Swiss parent company, Climeworks AG, and the Icelandic subsidiary's equity position was negative by almost $30 million in 2023. Poor performance in direct CO2 capture led to a $1.4 million write-down of the Orca capture plant in 2023, as the plant did not meet expectations, according to the company's annual accounts.

Last year, Climeworks partially commissioned the Mammoth capture plant, which is designed to capture nine times more than the first plant, or 36,000 tons per year. That plant has only managed to capture 105 tons of CO2 in its first ten months of operation, according to information from Puro.Earth, which is responsible for verifying the Swiss company's capture and is paid for that work by Climeworks.

Construction on Mammoth began in June 2022, and in a press release about a year ago for the plant's opening, it was reported that 12 of the 72 machines that are to be in the complex had been installed. According to information from Climeworks, work is currently underway to install an additional twelve machines. That work is in the final stages, according to Sara Lind Guðbergsdóttir, Climeworks' managing director in Iceland. According to her, the installation of the Mammoth plant will be completed this year. The Climeworks website had previously claimed that it would be completed last year.

Sara Lind says she cannot answer questions about why CO2 capture is going so poorly that the company is unable to offset its own carbon footprint. She also cannot say when subscribers to the company's carbon credits can expect to receive them.

“You can definitely send me the questions you have, then I can pass them on. Unfortunately, it is not up to me to answer media questions,” she says, offering to forward Heimildin's questions to Climeworks in Switzerland. That was two weeks ago; by then, eleven weeks had already passed since the same questions were first sent to Climeworks' press representatives in Switzerland. The questions had still not been answered when Heimildin's print edition went to press on May 8.

A professor of environmental and civil engineering at Stanford University in California says the carbon capture and disposal industry is a scam that does harm to genuine climate solutions. More than 20,000 people pay Climeworks monthly for CO2 capture. A retired scientist in the UK says he feels like a gullible idiot after buying carbon credits from Climeworks, which he hopes to receive in about six years. The wait will be much longer, however, unless the company's capture rate improves dramatically: he can expect to receive the two tonnes he has already paid for in a few decades at the earliest.

Optimistic plans

The Swiss founders of Climeworks were ambitious when they started in 2009. In a 2017 interview, they said that by 2025 they planned to capture one percent of all global emissions – around 400 million tons of CO2 a year. Those plans have not been realised, and the company has never come close. They also planned to reduce the cost of capturing each ton of CO2 from the atmosphere to about $100. Today, a ton costs about $1,000, according to the Climeworks website – ten times the target for this year.

Despite not achieving those goals, the company's executives have set new, more ambitious ones. Climeworks now says it plans to capture 1 billion tons of CO2 by 2050. Its operations and ambitious goals have garnered worldwide attention, and the company recently ranked second on Time Magazine's list of the 100 top green tech companies in the world. This is the first time Climeworks has made the list; another, unrelated company that has operated in Iceland, Running Tide, made the list last year.

Big machines, little capture

When Climeworks opened its first capture plant in Iceland in September 2021, company officials said it could capture 4,000 tons of CO2 each year it operated. The company sends the carbon dioxide captured by both plants to Carbfix, a subsidiary of Reykjavík Energy, which then pumps it into the ground. Finally, the company sells carbon credits to other companies that emit CO2 and need or want to offset their carbon emissions.

The capture plants work by using large fans to draw air through filters that capture CO2 from the atmosphere. The operation requires a lot of energy, because CO2 makes up only a tiny fraction of the air. Its concentration is measured in parts per million: current measurements put it at about 427 parts of CO2 for every million parts of atmosphere. Small as that amount is, it has a major impact on the Earth's climate.

In an interview with the Japanese outlet Nikkei, Jan Wurzbacher, CEO and one of the founders of Climeworks, said that capturing each ton of CO2 at the Mammoth plant would require up to 5,000 to 6,000 kilowatt-hours of energy. He also said in the same interview that Mammoth was designed not for energy efficiency but purely for how much CO2 it could capture. That means every 1,000 tons of CO2 captured would require 5 to 6 million kilowatt-hours. To put Climeworks' energy needs in perspective, fully offsetting Iceland's carbon footprint – the country's total emissions were 12.4 million tons of CO2 in 2024 – would take up to 72 terawatt-hours of electricity a year, almost four times Iceland's annual electricity production of about 20 terawatt-hours.
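
A back-of-the-envelope sketch of that arithmetic, using only the figures quoted above (illustrative, not a calculation from Climeworks or Heimildin):

```python
# Rough check of the energy figures above (article numbers, illustrative only).

kwh_per_tonne_range = (5_000, 6_000)   # energy needed per tonne captured at Mammoth, per Wurzbacher
iceland_emissions_t = 12.4e6           # Iceland's 2024 CO2 emissions, tonnes
iceland_generation_twh = 20            # approximate annual electricity production, TWh

for kwh_per_tonne in kwh_per_tonne_range:
    twh_needed = iceland_emissions_t * kwh_per_tonne / 1e9   # kWh -> TWh
    ratio = twh_needed / iceland_generation_twh
    print(f"{kwh_per_tonne} kWh/t -> {twh_needed:.0f} TWh/yr, {ratio:.1f}x Iceland's generation")
# 62-74 TWh a year, i.e. roughly the "up to 72 TWh" and "almost four times" cited above.
```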

Future credits already sold

Climeworks has sold a significant amount of carbon credits – not only credits that have already been captured and certified, but also a large number that it plans to capture in the future. According to the company, one third of all the credits that the Mammoth plant is expected to capture from the atmosphere over the next 25 years have already been sold. About 21 thousand people have a subscription with the company, paying monthly for the capture and storage of CO2. The waiting time to receive these carbon credits can be up to six years, according to the company's terms. If Climeworks' capture figures do not improve, the wait could stretch from years to decades.

Since its founding, the company has focused on capturing CO2 from the atmosphere with capture plants. Recently, however, it has changed direction and begun to focus on so-called enhanced weathering – a method, controversial within the scientific community, in which rock is crushed into small particles so that it binds CO2 much faster than it would naturally. The experts Heimildin has spoken to see this step as a sign that Climeworks' capture projects are not delivering the expected results, and that enhanced weathering is now being used to try to produce carbon credits that the company has already sold but is struggling to deliver.

Failing to offset its own carbon footprint

Climeworks keeps carbon accounts that it publishes on its website. These state that the company is growing rapidly and now employs 387 people – a 45 percent increase year on year. They also say the company has been expanding systematically, entering new markets in 2023 such as the United States, Kenya, Canada, Norway and the United Kingdom. With this rapid expansion, the company's carbon footprint – attributed to travel and its operations – has grown in parallel.

Climeworks calculates that its own carbon footprint was 1,079 tons of CO2 in 2022. The following year it increased by 57 percent, to 1,700 tons. The company's total capture since it began operating – around 2,400 tonnes – is slightly lower than its cumulative footprint, which means Climeworks cannot yet offset its own emissions. Carbon accounts for 2024 are not yet available, but there have been no reports of the company cutting back, so the footprint can be expected to be similar to 2023's. Climeworks captured 876 tonnes of carbon dioxide between December 1, 2023 and October 31, 2024, leaving a shortfall of almost 1,000 tonnes of CO2 against its goal of offsetting its own operations for that year.
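
The rough arithmetic behind that comparison, using only the figures reported above (illustrative):

```python
# Footprint vs. capture, using the figures reported above (illustrative only).

footprint_t = {"2022": 1_079, "2023": 1_700}   # Climeworks' own reported footprint, tonnes CO2
captured_total_t = 2_400                       # total captured in Iceland since 2021
captured_dec23_oct24_t = 876                   # captured 1 Dec 2023 - 31 Oct 2024

print(sum(footprint_t.values()))               # 2,779 t over 2022-2023 alone, more than total capture
print(footprint_t["2023"] - captured_dec23_oct24_t)
# 824 t: roughly the "almost 1,000 tonne" shortfall, assuming the 2024 footprint matches 2023's
```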

In any case, the company does not use the tonnes it captures to offset its own operations; it has sold them to customers, either directly or through subscriptions.

Unfavourable accounts

Climeworks' accounts in Iceland were unfavourable in 2023, with the company's equity position negative by 3.6 billion Icelandic krónur at the end of the year. Its continued operation is nevertheless guaranteed, as the parent company in Switzerland finances the operations entirely; the Icelandic company owes the Swiss one almost 5 billion Icelandic krónur. The value of Climeworks' main asset in Iceland – the Orca machine that captures CO2 from the atmosphere – has also declined significantly because the machine has not met expectations in recent years, according to the annual accounts. The write-down amounts to a total of 2.7 billion krónur over the 2022 and 2023 operating years, and will continue if the machine does not perform better in the future.

The Swiss company says it has raised, or received approval for, around eight hundred million dollars – a sum equivalent to at least one hundred billion Icelandic krónur. The largest part of that capital comes from the US Department of Energy and is tied to the construction of a giant capture plant planned in the United States. The operations in Iceland are split across three companies: the Swiss parent owns the Icelandic holding company Climeworks Operational, which files annual accounts in Iceland, and under it sit two limited liability companies, one holding the Orca machine and the other founded around the larger Mammoth capture plant. The holding company's annual accounts have been submitted to the tax authorities but are under review, according to Iceland Revenue and Customs, and are therefore not accessible at the moment.

Investors are not the only ones financing Climeworks. Since its founding, the company has received approval for over one hundred billion krónur in grants from public sources, according to its website, including from Swiss and American taxpayers. The Swiss government has handed over $5 million, while the US government has promised the company $625 million.

Gullible idiot? 

“I’m 65 years old and retired when I was 60. One of the things I wanted to do in my later years was reduce my carbon footprint,” says Michael de Podesta, who worked for the UK’s National Physical Laboratory for most of his career, where his work included debunking cold fusion. Michael is one of 21,000 subscribers to Climeworks, but after paying his subscription dutifully for about two years, he began to have doubts. Finally, he wrote on his widely read blog and asked a simple question: Am I a gullible idiot?

“I paid £40 [around 7,000 ISK] a month for fifty kilograms of CO2. I was supposed to get around 600 kilograms a year. I paid them right up until October last year and by then I had paid them to dispose of almost 2.2 tonnes of CO2 and got the tonne for around £800 [135 thousand ISK],” says Michael. He says the project seemed promising at first. He saw coverage of the project in the scientific journal Nature, one of the most prestigious in the world. The science was certainly there: CO2 can be sucked out of the atmosphere and Climeworks’ plans were ambitious.

However, Michael quickly noticed that no matter how much he paid, no CO2 had been captured and disposed of in his name. The math didn’t add up when the energy requirements were examined more closely.

“Maybe I should have read the fine print more carefully,” he says; the fine print states that the company plans to deliver the carbon credits within six years. In addition to its commitments to subscribers, the company has committed to capturing CO2 from the atmosphere for various airlines and for the investment bank Morgan Stanley – which alone has been promised 40,000 tons of capture. Based on today’s capture rates of about 860 tons per year, those credits would take about half a century to deliver. So Michael is moderately optimistic that he will one day receive his two tonnes.
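
A quick sketch of how those numbers work out (article figures only, purely illustrative):

```python
# How the figures quoted above fit together (article numbers, purely illustrative).

monthly_fee_gbp = 40
monthly_co2_kg = 50
price_per_tonne_gbp = monthly_fee_gbp / (monthly_co2_kg / 1000)
print(price_per_tonne_gbp)                       # 800.0 -> the roughly £800 per tonne Michael cites

morgan_stanley_commitment_t = 40_000             # tonnes promised to Morgan Stanley alone
current_capture_t_per_year = 860                 # approximate recent capture rate
print(morgan_stanley_commitment_t / current_capture_t_per_year)  # ~46.5 years, about half a century
```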

Michael began asking Climeworks tough questions. He asked for better data but received few answers.

Michael says he was aware that scientists had questioned whether capturing CO2 from the atmosphere was a viable way to tackle the climate problem. The biggest concern was the energy needed to capture each ton: done at scale, the process demands a huge amount of electricity, and it also requires a lot of hot water. Iceland, with its cheap, green energy, is therefore a good place to power machines that capture CO2 from the atmosphere – but few countries in the world have energy like that.

“The company sent out information emails every six months,” says Michael, noting that they were rather sparse in content. So he took the opportunity to send an email back asking how the carbon dioxide capture was going. “I started asking quite specific questions about it all,” explains Michael, who says he received vague answers from the company’s information representatives when he finally received an answer.

“Then the idea came to me; could this be a scam?” says Michael.

So he took to the social media platform X, where he asked directly: Is Climeworks a scam? To his surprise, the company's communications staff responded much more quickly than they had to his emails. The company did not take the question well, he says, and he was criticised for even suggesting such a thing. Subsequently, however, he found information about the real figures for how much the company has captured.

Michael did not let the criticism deter him, but wrote a blog post with the title: Am I a Gullible idiot? In it, he emphasised that the question of whether Climeworks was a scam was not unfair.

“This has all the hallmarks of a scam. There are undoubtedly a lot of highly paid people traveling the world to sell their services to large corporations that want carbon removed on their behalf in the future. They are using a semi-magical technology that doesn't work as well as expected (better known as Orca) but will supposedly work perfectly in a larger version (Mammoth). I am urged to convince my friends to join the project. The answers are scarce and full of PR chatter. Climeworks' operation looks like a scam and talks like one. But is it a scam? I don't know. I think it could work, but the company's answers are so opaque that it's hard to say.”

Michael says his biggest question is simple. “Why are they scaling up so slowly? There’s nothing special about the factories themselves. They’re not even that complicated,” adds Michael, who has dedicated his life to science. The answer is probably that the energy demand is enormous.

He says he can’t answer yet whether he’s a gullible man. Climeworks has until 2027 to deliver the product he paid for, which is to capture and dispose of 2.2 tons of CO2. “But I definitely feel that way, like I’m a gullible idiot,” he concludes.

Carbon capture “the Theranos of the energy industry”

Mark Z. Jacobson, a professor of civil and environmental engineering at Stanford University in the United States, says the carbon capture and storage industry – abbreviated CCS – is nothing more than a scam. “This is the Theranos of the energy industry,” he says in an interview with Heimildin, referring to the notorious fraud of Elizabeth Holmes, whose company claimed it could diagnose diseases from a single drop of blood with a machine that never worked. The analogy is deliberate: Mark means that the carbon capture industry is nothing but a scam.

“Direct capture is a scam, carbon capture is a scam, blue hydrogen is a scam, and electrofuel is a scam. These are all scam technologies that do nothing for the climate or air pollution,” Mark says bluntly.

Mark is well known in his field in the United States and was, among other things, an expert witness in the first climate trial in the country, in which the plaintiffs are sixteen children demanding that their right to clean air and a healthy environment be recognised. He is repeatedly called upon as an expert on environmental issues in the American media. In 2009, he published a scientific article arguing that the most successful path for the world was to switch entirely to renewable energy such as solar, wind and water power. He has been criticised for this, including by his colleagues.

Mark is adamant that carbon capture and storage are designed to delay the energy transition. “This technology perpetuates the business model of oil and gas companies,” he says, adding: “This is what this technology is designed to do. None of this is doing any good for the climate. On the contrary, this kind of technology is making things worse.”

Mark conducted a study on the impact of CCS technology and published it in 2019. “We looked at what would happen if 149 countries adopted CCS technology and compared it to renewable energy. If we used only renewable energy, we would eliminate about 90 percent of global air pollution, with the added benefit that 7.5 million people would not die from air pollution. Carbon capture would not prevent any of these deaths; on the contrary, the technology would add a million deaths a year to the problem if the energy used to power it were not renewable. If it were renewable, nothing would change: seven and a half million would still die annually, and green energy would be wasted on this technology,” Mark explains, asking: “Why would you waste green energy on this instead of replacing fossil fuels?”

He says the conclusion is simple. “By powering this technology with green energy, the CCS industry is adding to the number of deaths each year from pollution, by preventing renewable energy from replacing fossil fuels.”

No benefits

Mark also points out that the energy requirements of this technology, especially direct capture, are enormous. This has its effects. “If this technology is used, the demand for green energy increases, which in turn increases its price. So pollution increases, as does demand and thus the price of renewable energy. That cost is ultimately passed on to consumers, who are also left with the same air pollution, if not higher in the future,” says Mark, adding: “There are no benefits to this technology. It is just harmful.”

He says Climeworks uses traditional methods to embellish its numbers.

“This is a typical embellishment of the real numbers that these types of systems manage to capture. The big picture is rarely considered when carbon dioxide is captured in this way,” says Mark.

“The energy production to power this increases emissions, and no matter how you look at it, they are increasing emissions by not using it to replace fossil fuels,” he adds.

He says he understands that projects of this nature may look like a solution to the climate problem. However, that is not the case. “I’ve been working on this and researching it for decades and I understand it incredibly well for those reasons. But 99 percent of people don’t get this complex interaction and instead grab what they hear on the news or read online. And most of what’s out there is positive. It was initially pushed on the public and the government by oil and gas companies. Then it took on a life of its own. People want to make a difference, they want to do good, and they want to tackle the climate problem. Without understanding the context, these ideas might seem like real solutions. But they aren’t,” says Mark, adding that once people get involved in an industry like this, it’s not easy to turn their backs on it. “Because that’s where the paycheck comes from,” he concludes.



Replacing the Scunthorpe blast furnaces with electric arc furnaces. | Carbon Commentary


Last week the UK government effectively nationalised the blast furnaces at Scunthorpe in North Lincolnshire. These furnaces are the last sites in the UK that can manufacture iron from ore as a precursor to the production of virgin steel. The emergency legislation will help to keep open this important source of local employment and industrial activity.

Nevertheless, I argue that it was an expensive and unnecessary move. Instead of making new virgin steel, the UK should concentrate on recycling the large amounts of scrap steel that are currently exported from this country for reprocessing around the world. The owners of Scunthorpe already have plans to switch to making steel from scrap using electricity. Critically, the price of electricity for electric arc furnaces needs to be roughly the same as in competitor countries, which will require a substantial subsidy of the kind other countries already provide; without that financial support, UK steel-making cannot hope to be competitive or financially self-reliant.

Basic numbers 

The most recent data from the industry body UK Steel gives the following figures for the UK's production and consumption of steel in 2023. (In 1970, the peak year for the country's steel production, output was roughly five times higher.)

UK production of steel: 5.6 million tonnes, of which 4.5 million tonnes came from blast furnaces

UK demand for steel: 7.6 million tonnes

So about 2m tonnes of steel had to be imported in 2023. This number probably rose in 2024 after the closure of the blast furnaces at Port Talbot but the figures are not publicly available yet. 

But at the same time as importing 2m tonnes of finished metal, the UK collected about 10.5 million tonnes of scrap steel, almost three million tonnes more than total steel demand in the country. Some scrap was used in the existing electric arc furnaces here but most was exported; about 8.5m tonnes of scrap was sent abroad for reprocessing elsewhere back into new steel. (Some of this new steel will have eventually come back to the UK).  This makes the UK the world’s second largest exporter of scrap steel for recycling. Expressed in per capita terms, the country is the top source of used steel.  

Put another way, the country's exports of scrap, which can easily be recycled in electric arc furnaces, alone exceeded its total demand for the metal. Self-sufficiency has been a consistent worry over recent weeks, with many expressing the view that the UK needs to retain the capability to make steel from iron ore in blast furnaces. But the UK does not need blast furnaces such as the ones in Scunthorpe to be self-sufficient in steel: simply keeping its used steel at home for recycling would provide enough of the metal for the country's needs.

It may be worth noting that many other countries restrict or block the export of steel scrap in order to ensure adequate supplies for recycling in local electric arc furnaces.

What is stopping the UK switching from blast furnaces to making the metal from scrap steel instead?

·      Large electric arc furnaces (EAFs) for recycling steel are expensive to construct. The EAFs to be constructed by Tata Steel at Port Talbot in South Wales are projected to cost around £1.25bn for a projected capacity of 3m tonnes a year (or potentially around 40% of the UK’s total steel needs). The government has committed £500m to assist the transition there from blast furnaces to EAFs.

British Steel (owned by Jingye of China) has stated that the cost of creating two new EAFs on the north east coast will also be about £1.25 billion. The projected total capacity doesn’t appear to have been published but based on the Tata numbers we can perhaps assume a similar figure of about 3m tonnes a year.

·      UK electricity costs are higher than in nearby countries. Even after the government intervention to reduce the costs of electricity transmission to steelworks, one recent study suggests that the British steel industry pays £66 a megawatt hour (MWh) compared to £50 in Germany and £43 in France.[1] Because electric arc furnaces use about 0.5 MWh per tonne of steel output, these higher costs mean a handicap of up to £11.50 a tonne of steel from an EAF. At current finished steel prices of around £500 a tonne ($660), this imposes a burden of over 2% (see the short calculation after this list). In a low margin industry such as steelmaking, this difference is significant.

·      Falling UK demand for steel has imposed an additional weight on investment enthusiasm. Investing £1.25bn in a shrinking market looks a dangerous decision to take. On the other hand, some demand increases are likely in future; wind turbine columns alone might add 1m tonnes a year to UK needs. 

·      EAFs need far fewer employees per tonne of output, making it politically difficult to allow the closure of a major source of local employment in Scunthorpe. And any new EAFs in that part of the UK will take several years before they begin to hire permanent staff.
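
The electricity-price handicap flagged in the list above works out as follows; this is a purely illustrative sketch using the quoted figures:

```python
# Electricity-price handicap for a UK EAF, using the figures in the list above (illustrative).

price_uk_gbp_mwh = 66      # what UK steelmakers pay per MWh
price_fr_gbp_mwh = 43      # French comparison
mwh_per_tonne = 0.5        # typical EAF electricity use per tonne of steel
steel_price_gbp = 500      # approximate finished steel price per tonne

handicap_per_tonne = (price_uk_gbp_mwh - price_fr_gbp_mwh) * mwh_per_tonne
print(handicap_per_tonne)                                   # 11.5 -> £11.50 a tonne
print(f"{handicap_per_tonne / steel_price_gbp:.1%}")        # ~2.3% of the steel price
```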

The advantages of using EAFs rather than keeping the Scunthorpe blast furnaces open 

·      EAFs use local scrap metal, reducing the amount exported.

·      The UK scrap also contains other metals, such as copper, increasing its value and reducing the need to import materials.

·      EAFs produce much less local air pollution than the older steel-making method.

·      The carbon footprint of EAF steel is about one sixth that of steel originating in blast furnaces. The figures depend on the fossil fuel intensity of the electricity used, but most sources estimate a footprint of about 0.35 tonnes of CO2 per tonne of steel, compared with about 2 tonnes via the blast furnace route. Replacing the 4.5 million tonnes of blast-furnace steel made in 2023 with EAF output would save about 7.4 million tonnes of CO2, or just under 2% of UK emissions (see the sketch after this list).

·      Potentially, the economics of using scrap could be better. The open-market scrap price is around $350 per tonne, equivalent to £263 today, or just over half the value of a tonne of steel in the UK. Raw material prices are also likely to be more stable, avoiding the need to buy much coking coal and iron ore on international markets.[2]

·      EAFs can help stabilise the electricity market, using power mostly at times when the wind is blowing and not at times of scarcity. Unlike blast furnaces, EAFs can decide when to operate. While not a trivial exercise, steel-making can adjust its demand to match national supplies of electricity.
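
And the carbon arithmetic flagged in the list above, as a sketch; the UK-emissions figure used for the percentage is an assumption of roughly 385 Mt CO2 a year, chosen to be consistent with the "just under 2%" claim:

```python
# Carbon saving from replacing 2023 blast-furnace output with EAF steel (illustrative).

bf_output_mt = 4.5            # blast-furnace steel made in 2023, million tonnes
co2_per_t_bf = 2.0            # tonnes CO2 per tonne of steel, blast furnace route
co2_per_t_eaf = 0.35          # tonnes CO2 per tonne of steel, EAF route
uk_emissions_mt = 385         # assumed UK annual CO2 emissions, Mt (not from the article)

saving_mt = bf_output_mt * (co2_per_t_bf - co2_per_t_eaf)
print(round(saving_mt, 2))                         # ~7.4 Mt CO2
print(f"{saving_mt / uk_emissions_mt:.1%}")        # ~1.9%, i.e. just under 2% of UK emissions
```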

In summary, both industrial strategy and carbon reduction aims should push us towards EAFs rather than keeping open the Scunthorpe blast furnaces. It makes very little sense to spend large sums keeping the furnaces open rather than sponsoring the building of new EAFs when the UK has such abundant supplies of metal for recycling and when the carbon savings could be equivalent to at least one per cent of UK emissions. This is not to dismiss the profound social consequences of the reduced employment prospects for steelworkers in the Scunthorpe area.

[1] https://www.uksteel.org/electricity-prices

[2] EAFs use some iron ore and some coal but in much smaller quantities than blast furnaces.


Why is English so weirdly different from other languages? | Aeon Essays


English speakers know that their language is odd. So do people saddled with learning it non-natively. The oddity that we all perceive most readily is its spelling, which is indeed a nightmare. In countries where English isn’t spoken, there is no such thing as a ‘spelling bee’ competition. For a normal language, spelling at least pretends a basic correspondence to the way people pronounce the words. But English is not normal.

Spelling is a matter of writing, of course, whereas language is fundamentally about speaking. Speaking came long before writing, we speak much more, and all but a couple of hundred of the world’s thousands of languages are rarely or never written. Yet even in its spoken form, English is weird. It’s weird in ways that are easy to miss, especially since Anglophones in the United States and Britain are not exactly rabid to learn other languages. But our monolingual tendency leaves us like the proverbial fish not knowing that it is wet. Our language feels ‘normal’ only until you get a sense of what normal really is.

There is no other language, for example, that is close enough to English that we can get about half of what people are saying without training and the rest with only modest effort. German and Dutch are like that, as are Spanish and Portuguese, or Thai and Lao. The closest an Anglophone can get is with the obscure Northern European language called Frisian: if you know that tsiis is cheese and Frysk is Frisian, then it isn’t hard to figure out what this means: Brea, bûter, en griene tsiis is goed Ingelsk en goed Frysk. But that sentence is a cooked one, and overall, we tend to find that Frisian seems more like German, which it is.

We think it’s a nuisance that so many European languages assign gender to nouns for no reason, with French having female moons and male boats and such. But actually, it’s us who are odd: almost all European languages belong to one family – Indo-European – and of all of them, English is the only one that doesn’t assign genders that way.

More weirdness? OK. There is exactly one language on Earth whose present tense requires a special ending only in the third‑person singular. I’m writing in it. I talk, you talk, he/she talk-s – why just that? The present‑tense verbs of a normal language have either no endings or a bunch of different ones (Spanish: hablo, hablas, habla). And try naming another language where you have to slip do into sentences to negate or question something. Do you find that difficult? Unless you happen to be from Wales, Ireland or the north of France, probably.

Why is our language so eccentric? Just what is this thing we’re speaking, and what happened to make it this way?

English started out as, essentially, a kind of German. Old English is so unlike the modern version that it feels like a stretch to think of them as the same language at all. Hwæt, we gardena in geardagum þeodcyninga þrym gefrunon – does that really mean ‘So, we Spear-Danes have heard of the tribe-kings’ glory in days of yore’? Icelanders can still read similar stories written in the Old Norse ancestor of their language 1,000 years ago, and yet, to the untrained eye, Beowulf might as well be in Turkish.

The first thing that got us from there to here was the fact that, when the Angles, Saxons and Jutes (and also Frisians) brought their language to England, the island was already inhabited by people who spoke very different tongues. Their languages were Celtic ones, today represented by Welsh, Irish and Breton across the Channel in France. The Celts were subjugated but survived, and since there were only about 250,000 Germanic invaders – roughly the population of a modest burg such as Jersey City – very quickly most of the people speaking Old English were Celts.

Crucially, their languages were quite unlike English. For one thing, the verb came first (came first the verb). But also, they had an odd construction with the verb do: they used it to form a question, to make a sentence negative, and even just as a kind of seasoning before any verb. Do you walk? I do not walk. I do walk. That looks familiar now because the Celts started doing it in their rendition of English. But before that, such sentences would have seemed bizarre to an English speaker – as they would today in just about any language other than our own and the surviving Celtic ones. Notice how even to dwell upon this queer usage of do is to realise something odd in oneself, like being made aware that there is always a tongue in your mouth.

At this date there is no documented language on earth beyond Celtic and English that uses do in just this way. Thus English’s weirdness began with its transformation in the mouths of people more at home with vastly different tongues. We’re still talking like them, and in ways we’d never think of. When saying ‘eeny, meeny, miny, moe’, have you ever felt like you were kind of counting? Well, you are – in Celtic numbers, chewed up over time but recognisably descended from the ones rural Britishers used when counting animals and playing games. ‘Hickory, dickory, dock’ – what in the world do those words mean? Well, here’s a clue: hovera, dovera, dick were eight, nine and ten in that same Celtic counting list.


The second thing that happened was that yet more Germanic-speakers came across the sea meaning business. This wave began in the ninth century, and this time the invaders were speaking another Germanic offshoot, Old Norse. But they didn’t impose their language. Instead, they married local women and switched to English. However, they were adults and, as a rule, adults don’t pick up new languages easily, especially not in oral societies. There was no such thing as school, and no media. Learning a new language meant listening hard and trying your best. We can only imagine what kind of German most of us would speak if this was how we had to learn it, never seeing it written down, and with a great deal more on our plates (butchering animals, people and so on) than just working on our accents.

As long as the invaders got their meaning across, that was fine. But you can do that with a highly approximate rendition of a language – the legibility of the Frisian sentence you just read proves as much. So the Scandinavians did pretty much what we would expect: they spoke bad Old English. Their kids heard as much of that as they did real Old English. Life went on, and pretty soon their bad Old English was real English, and here we are today: the Scandies made English easier.

I should make a qualification here. In linguistics circles it’s risky to call one language ‘easier’ than another one, for there is no single metric by which we can determine objective rankings. But even if there is no bright line between day and night, we’d never pretend there’s no difference between life at 10am and life at 10pm. Likewise, some languages plainly jangle with more bells and whistles than others. If someone were told he had a year to get as good at either Russian or Hebrew as possible, and would lose a fingernail for every mistake he made during a three-minute test of his competence, only the masochist would choose Russian – unless he already happened to speak a language related to it. In that sense, English is ‘easier’ than other Germanic languages, and it’s because of those Vikings.

Old English had the crazy genders we would expect of a good European language – but the Scandies didn’t bother with those, and so now we have none. Chalk up one of English’s weirdnesses. What’s more, the Vikings mastered only that one shred of a once-lovely conjugation system: hence the lonely third‑person singular –s, hanging on like a dead bug on a windshield. Here and in other ways, they smoothed out the hard stuff.

They also followed the lead of the Celts, rendering the language in whatever way seemed most natural to them. It is amply documented that they left English with thousands of new words, including ones that seem very intimately ‘us’: sing the old song ‘Get Happy’ and the words in that title are from Norse. Sometimes they seemed to want to stake the language with ‘We’re here, too’ signs, matching our native words with the equivalent ones from Norse, leaving doublets such as dike (them) and ditch (us), scatter (them) and shatter (us), and ship (us) vs skipper (Norse for ship was skip, and so skipper is ‘shipper’).

But the words were just the beginning. They also left their mark on English grammar. Blissfully, it is becoming rare to be taught that it is wrong to say Which town do you come from?, ending with the preposition instead of laboriously squeezing it before the wh-word to make From which town do you come? In English, sentences with ‘dangling prepositions’ are perfectly natural and clear and harm no one. Yet there is a wet-fish issue with them, too: normal languages don’t dangle prepositions in this way. Spanish speakers: note that El hombre quien yo llegué con (‘The man whom I came with’) feels about as natural as wearing your pants inside out. Every now and then a language turns out to allow this: one indigenous one in Mexico, another one in Liberia. But that’s it. Overall, it’s an oddity. Yet, wouldn’t you know, it’s one that Old Norse also happened to permit (and which Danish retains).


We can display all these bizarre Norse influences in a single sentence. Say That’s the man you walk in with, and it’s odd because 1) the has no specifically masculine form to match man, 2) there’s no ending on walk, and 3) you don’t say ‘in with whom you walk’. All that strangeness is because of what Scandinavian Vikings did to good old English back in the day.

Finally, as if all this wasn’t enough, English got hit by a firehose spray of words from yet more languages. After the Norse came the French. The Normans – descended from the same Vikings, as it happens – conquered England, ruled for several centuries and, before long, English had picked up 10,000 new words. Then, starting in the 16th century, educated Anglophones developed a sense of English as a vehicle of sophisticated writing, and so it became fashionable to cherry-pick words from Latin to lend the language a more elevated tone.

It was thanks to this influx from French and Latin (it’s often hard to tell which was the original source of a given word) that English acquired the likes of crucified, fundamental, definition and conclusion. These words feel sufficiently English to us today, but when they were new, many persons of letters in the 1500s (and beyond) considered them irritatingly pretentious and intrusive, as indeed they would have found the phrase ‘irritatingly pretentious and intrusive’. (Think of how French pedants today turn up their noses at the flood of English words into their language.) There were even writerly sorts who proposed native English replacements for those lofty Latinates, and it’s hard not to yearn for some of these: in place of crucified, fundamental, definition and conclusion, how about crossed, groundwrought, saywhat, and endsay?

But language tends not to do what we want it to. The die was cast: English had thousands of new words competing with native English words for the same things. One result was triplets allowing us to express ideas with varying degrees of formality. Help is English, aid is French, assist is Latin. Or, kingly is English, royal is French, regal is Latin – note how one imagines posture improving with each level: kingly sounds almost mocking, regal is straight-backed like a throne, royal is somewhere in the middle, a worthy but fallible monarch.

Then there are doublets, less dramatic than triplets but fun nevertheless, such as the English/French pairs begin and commence, or want and desire. Especially noteworthy here are the culinary transformations: we kill a cow or a pig (English) to yield beef or pork (French). Why? Well, generally in Norman England, English-speaking labourers did the slaughtering for moneyed French speakers at table. The different ways of referring to meat depended on one’s place in the scheme of things, and those class distinctions have carried down to us in discreet form today.

Caveat lector, though: traditional accounts of English tend to oversell what these imported levels of formality in our vocabulary really mean. It is sometimes said that they alone make the vocabulary of English uniquely rich, which is what Robert McCrum, William Cran and Robert MacNeil claim in the classic The Story of English (1986): that the first load of Latin words actually lent Old English speakers the ability to express abstract thought. But no one has ever quantified richness or abstractness in that sense (who are the people of any level of development who evidence no abstract thought, or even no ability to express it?), and there is no documented language that has only one word for each concept. Languages, like human cognition, are too nuanced, even messy, to be so elementary. Even unwritten languages have formal registers. What’s more, one way to connote formality is with substitute expressions: English has life as an ordinary word and existence as the fancy one, but in the Native American language Zuni, the fancy way to say life is ‘a breathing into’.

Even in English, native roots do more than we always recognise. We will only ever know so much about the richness of even Old English’s vocabulary because the amount of writing that has survived is very limited. It’s easy to say that comprehend in French gave us a new formal way to say understand – but then, in Old English itself, there were words that, when rendered in Modern English, would look something like ‘forstand’, ‘underget’, and ‘undergrasp’. They all appear to mean ‘understand’, but surely they had different connotations, and it is likely that those distinctions involved different degrees of formality.

Nevertheless, the Latinate invasion did leave genuine peculiarities in our language. For instance, it was here that the idea that ‘big words’ are more sophisticated got started. In most languages of the world, there is less of a sense that longer words are ‘higher’ or more specific. In Swahili, Tumtazame mbwa atakavyofanya simply means ‘Let’s see what the dog will do.’ If formal concepts required even longer words, then speaking Swahili would require superhuman feats of breath control. The English notion that big words are fancier is due to the fact that French and especially Latin words tend to be longer than Old English ones – end versus conclusion, walk versus ambulate.

The multiple influxes of foreign vocabulary also partly explain the striking fact that English words can trace to so many different sources – often several within the same sentence. The very idea of etymology being a polyglot smorgasbord, each word a fascinating story of migration and exchange, seems everyday to us. But the roots of a great many languages are much duller. The typical word comes from, well, an earlier version of that same word and there it is. The study of etymology holds little interest for, say, Arabic speakers.


To be fair, mongrel vocabularies are hardly uncommon worldwide, but English’s hybridity is high on the scale compared with most European languages. The previous sentence, for example, is a riot of words from Old English, Old Norse, French and Latin. Greek is another element: in an alternate universe, we would call photographs ‘lightwriting’. According to a fashion that reached its zenith in the 19th century, scientific things had to be given Greek names. Hence our undecipherable words for chemicals: why can’t we call monosodium glutamate ‘one-salt gluten acid’? It’s too late to ask. But this muttly vocabulary is one of the things that puts such a distance between English and its nearest linguistic neighbours.

And finally, because of this firehose spray, we English speakers also have to contend with two different ways of accenting words. Clip on a suffix to the word wonder, and you get wonderful. But – clip on an ending to the word modern and the ending pulls the accent ahead with it: MO-dern, but mo-DERN-ity, not MO-dern-ity. That doesn’t happen with WON-der and WON-der-ful, or CHEER-y and CHEER-i-ly. But it does happen with PER-sonal, person-AL-ity.

What’s the difference? It’s that -ful and -ly are Germanic endings, while -ity came in with French. French and Latin endings pull the accent closer – TEM-pest, tem-PEST-uous – while Germanic ones leave the accent alone. One never notices such a thing, but it’s one way this ‘simple’ language is actually not so.

Thus the story of English, from when it hit British shores 1,600 years ago to today, is that of a language becoming delightfully odd. Much more has happened to it in that time than to any of its relatives, or to most languages on Earth. Here is Old Norse from the 900s CE, the first lines of a tale in the Poetic Edda called The Lay of Thrym. The lines mean ‘Angry was Ving-Thor/he woke up,’ as in: he was mad when he woke up. In Old Norse it was:

Vreiðr vas Ving-Þórr / es vaknaði.

The same two lines in Old Norse as spoken in modern Icelandic today are:

Reiður var þá Vingþórr / er hann vaknaði.

You don’t need to know Icelandic to see that the language hasn’t changed much. ‘Angry’ was once vreiðr; today’s reiður is the same word with the initial v worn off and a slightly different way of spelling the end. In Old Norse you said vas for was; today you say var – small potatoes.

In Old English, however, ‘Ving-Thor was mad when he woke up’ would have been Wraþmod wæs Ving-Þórr/he áwæcnede. We can just about wrap our heads around this as ‘English’, but we’re clearly a lot further from Beowulf than today’s Reykjavikers are from Ving-Thor.

Thus English is indeed an odd language, and its spelling is only the beginning of it. In the widely read Globish (2010), McCrum celebrates English as uniquely ‘vigorous’, ‘too sturdy to be obliterated’ by the Norman Conquest. He also treats English as laudably ‘flexible’ and ‘adaptable’, impressed by its mongrel vocabulary. McCrum is merely following in a long tradition of sunny, muscular boasts, which resemble the Russians’ idea that their language is ‘great and mighty’, as the 19th-century novelist Ivan Turgenev called it, or the French idea that their language is uniquely ‘clear’ (Ce qui n’est pas clair n’est pas français).

However, we might be reluctant to identify just which languages are not ‘mighty’, especially since obscure languages spoken by small numbers of people are typically majestically complex. The common idea that English dominates the world because it is ‘flexible’ implies that there have been languages that failed to catch on beyond their tribe because they were mysteriously rigid. I am not aware of any such languages.

What English does have on other tongues is that it is deeply peculiar in the structural sense. And it became peculiar because of the slings and arrows – as well as caprices – of outrageous history.


Rail Transit and Population Density in 250 Cities


Good public transit connects people to places. Ideally, this is done efficiently and sustainably, with transit routes and stations serving and connecting as many people as possible. But in reality, there's a lot of variation within and between cities in how effectively this is done.

To look at this, we've created maps of major rail transit lines and stations (rapid transit, regional rail, LRT) overlaid onto population density for 250 of the most populated urban regions around the globe. Click the dropdowns below to view how well transit systems serve their populations in different cities.

Each map has the same geographic scale – a circle 100 km in diameter – so the cities can be compared directly with each other.

Using these maps, we've also computed several metrics examining characteristics of transit oriented development, and ranked how well cities perform relative to each other. Generally, the greater the density and proportion of the population that lives near major rail transit, the better.

Population data for these maps are from GlobPOP, and rail transit data are from OpenStreetMap. At the bottom of this page we describe these data sources, our methodology, and limitations in more detail.


[Interactive maps: major rail transit lines and stations overlaid on population density (people/km²). For each city the maps report: urban population; urban population density (people/km²); population density in the area within 1 km of all major rail transit stations; the % of the urban population within 1 km of a major rail transit station; the % of the urban area within 1 km of a major rail transit station; and a concentration ratio (% of urban population near transit / % of urban area near transit).]

City Rankings

[Interactive table ranking cities, selectable by metric and by region.]

Data & Methods

Our list of cities came from a dataset from Natural Earth. We started with a list of the 300 most populated cities, but then manually removed cases where one city was essentially the suburb of another city at our scale (e.g. Howrah was removed since it is very close to Kolkata), as well as removed cities without any rail transit.

For each city, we then defined the urban region shown on the maps as a circle with a 50km radius from the centre point noted in the Natural Earth dataset. We chose to use a standard circle size for all regions to account for idiosyncrasies in how different parts of the world define metro areas. 50km is approximately the outer range that someone would commute to/from a city centre along a major rail corridor.

We sourced the population density data from GlobPOP, which provides population count and density data at a spatial resolution of 30 arc-seconds (approximately 1 km at the equator) around the globe. Our urban population density metrics are computed after removing areas where population density is less than 400 people per km², to account for how much agricultural land and uninhabitable geography (e.g. mountains, water) different regions contain. 400 people per km² is the same threshold used by Statistics Canada to define populated places.

We downloaded rail and station data from OpenStreetMap (OSM) using overpass turbo with this query. We then calculated 1 km buffers around each station and estimated the population within the buffered area via areal interpolation. OSM is crowd-sourced data, and while its quality and comprehensiveness are quite good in most cities, several cities have missing or incorrect data. If you see any errors, please update OSM! As OSM data is edited and improved, we'll aim to update our maps and metrics in the future.
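
A minimal sketch of that buffering and areal-interpolation step, assuming both layers are in a projected CRS in metres; the file and column names are hypothetical, not the project's actual code:

```python
# Minimal sketch of the 1 km buffer + areal interpolation step described above.
# File and column names are hypothetical; assumes both layers use a projected CRS in metres.
import geopandas as gpd

pop = gpd.read_file("population_grid.gpkg")    # polygon grid cells with a 'pop' count column
stations = gpd.read_file("stations.gpkg")      # major rail transit station points

# Keep only cells at or above the 400 people/km² urban-density threshold
pop = pop[pop["pop"] / (pop.geometry.area / 1e6) >= 400].copy()
pop["cell_area"] = pop.geometry.area

# Dissolve 1 km station buffers into a single service-area polygon
service_area = gpd.GeoDataFrame(geometry=[stations.buffer(1_000).unary_union], crs=stations.crs)

# Areal interpolation: allocate each cell's population in proportion to the
# fraction of its area that falls inside the service area
clipped = gpd.overlay(pop, service_area, how="intersection")
near_pop = (clipped["pop"] * clipped.geometry.area / clipped["cell_area"]).sum()

share_pop = near_pop / pop["pop"].sum()
share_area = clipped.geometry.area.sum() / pop["cell_area"].sum()
print(f"% of urban population within 1 km of a station: {share_pop:.1%}")
print(f"Concentration ratio: {share_pop / share_area:.2f}")
```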

There are two main limitations with this transit data: 1) it only includes rail transit, not Bus Rapid Transit (BRT), which in many cities provides comparable service to rail; and 2) it does not account for the frequency (i.e. headway) of routes. While many transit agencies share their routes and schedules in GTFS format, which includes information about frequency and often technology (bus, rail, etc.), we found that GTFS feeds are not available everywhere at a global scale, particularly outside of Europe and North America.

Now of course, where people live is just one piece; the goal of transit is ultimately to take people where they want to go (work, school, recreation, etc.). It would be great to layer on employment and activity location data onto these maps to also look at the destination side of the equation as well as analyze connectivity of networks. Something to work on in the future!

---

More information about this project, code, data, etc. are available on GitHub.


This robot worm digs for geothermal energy in your backyard


Four billion years ago, Earth was a fiery, tumultuous world of molten rock, volcanic eruptions, and toxic skies, with searing heat and the constant threat of asteroid impacts.

Thankfully, our planet has cooled off a bit since then. Nevertheless, the Earth still radiates vast amounts of geothermal energy. It’s a clean, limitless, always-on power source lying beneath our feet — we just have to dig for it. Or get robots to do the hard work for us. 

Borobotics, a startup from Switzerland, has developed an autonomous drilling machine — dubbed the “world’s most powerful worm” — that promises to make harnessing geothermal heat cheaper and more accessible for everyone. 

“Drilling will become possible on properties where it would be unthinkable today — small gardens, parking lots, and potentially even basements,” Moritz Pill, Borobotics’ co-founder, tells TNW.  

At just 13.5 cm wide and 2.8 metres long, the compact boring robot can silently burrow just about anywhere. It could make geothermal a viable backyard energy source.

The machine — nicknamed “Grabowski” after the famous cartoon mole — is the world’s first geothermal drill that operates autonomously, according to the startup. Sensors in Grabowski’s head mean it can detect which type of material it’s boring through. If it bumps into a water spring or gas reservoir on its way down, the robot worm automatically seals the borehole shut. And unlike the diesel-powered drills typical to the industry, the machine plugs into a regular electrical socket. 

However, Grabowski’s humble frame has a few drawbacks. The device is less powerful than bigger rigs. It’s also slower and can only dig to a maximum depth of 500 metres. But for Borobotics’ target market, that’s more than adequate, it says.

Limitless heat just below our feet 

While most geothermal startups look to produce utility-scale electricity by digging many kilometres below the Earth’s crust, Borobotics is going shallow. 

“In many European countries, at a depth of 250 metres, you have an average temperature of 14 degrees C,” says Pill. “This is ideal for efficient heating in winter, while still being cold enough to cool the building in summer.”

Borobotics wants to tap the burgeoning demand for geothermal heat pumps. These devices use a network of subterranean pipes to transfer heat from below the ground to a building on the surface. Under the right conditions, they double up as air conditioning.

Heating and cooling buildings accounts for half of global energy consumption, the lion’s share of which comes from burning fossil fuels like natural gas. 

To curb emissions, the EU has committed to installing 43 million new heat pumps between 2023 and 2030, as part of the bloc’s €300bn REPowerEU plan. 

The advantages are obvious. Heat pumps use electricity, rather than burning fossil fuels, to move heat into or out of a building. They are up to three times more efficient than the equivalent gas boiler. If they plug into a renewable energy source, even better.

The EU backs both geothermal and air-source heat pumps, but the latter dominate thanks to lower costs and easier installation. That’s despite geothermal heat pumps being more efficient because they rely on stable subterranean heat rather than fluctuating outdoor temperatures.
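
One way to see why a stable subsurface source helps: the ideal (Carnot) heating COP is T_hot / (T_hot - T_cold), with temperatures in kelvin, so a smaller temperature lift means more heat delivered per unit of electricity. A toy sketch with assumed temperatures (the 40 °C supply and -5 °C winter air are illustrative assumptions, not Borobotics figures):

```python
# Ideal (Carnot) heating COP = T_hot / (T_hot - T_cold), temperatures in kelvin.
# Real heat pumps achieve only a fraction of this; all temperatures below are assumptions.

def carnot_cop(source_c: float, supply_c: float) -> float:
    t_hot = supply_c + 273.15
    t_cold = source_c + 273.15
    return t_hot / (t_hot - t_cold)

supply_c = 40.0                               # low-temperature heating circuit, assumed
print(round(carnot_cop(14.0, supply_c), 1))   # ground at 14 C   -> ideal COP ~ 12.0
print(round(carnot_cop(-5.0, supply_c), 1))   # winter air at -5 C -> ideal COP ~ 7.0
# The smaller temperature lift from the stable ground source means more heat
# delivered per unit of electricity, which is the efficiency edge described above.
```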

“The potential of geothermal heat pumps to decarbonise Europe is substantial, as long as the cost comes down,” Torsten Kolind, managing partner at Underground Ventures, tells TNW. “The minute that happens, the market is open.”

Underground Ventures, based in Copenhagen, is the world’s first VC dedicated entirely to funding geothermal tech startups. The firm led Borobotics’ CHF 1.3mn (€1.38mn) pre-seed funding round, announced this week.

Because of its small size, Borobotics says, the drill is "very resource efficient" to produce and maintain. What's more, Grabowski's autonomous capabilities, besides being cool, have a hidden advantage.

Pill paints the following picture:

“A small team arrives at a site with a Sprinter van containing everything necessary to drill,” he explains. “They set up the drill in half a day and from then on it works autonomously.”

Pill predicts that one or two people will be able to handle 10-13 drill sites simultaneously. If correct, this means drilling companies can cover more ground in less time, even if Grabowski is a little more sluggish than its fossil-fuelled relatives. 

Given the EU’s chronic shortage of heat pump installers, an autonomous drilling robot may be a welcome helping hand.  

Despite the apparent potential, it’s still early days for Borobotics. Founded in 2023, the company is currently developing its first working prototype. Fuelled by its first major pot of funding, it looks to test the robot in real conditions this year. 

Geothermal tech is heating up   

In December, the International Energy Agency (IEA) released its first report on geothermal energy in over 10 years. In the report, the IEA predicted that geothermal could cater to 15% of global energy demand by 2050, up from just 1% today. 

Geothermal projects of old were largely state-led, and confined to volcanically active regions like Iceland or New Zealand where hot water bubbles at or near the surface. But the next wave of installations looks to be led by startups armed with state-of-the-art technology that allows them to dig deeper and more efficiently.

Geothermal energy startups attracted $650mn in VC funding in 2024, the highest value ever recorded, according to Dealroom data. One of those is US-based Fervo Energy, backed by Bill Gates’ Breakthrough Energy Ventures. Google has already plugged into Fervo’s geothermal plant in Nevada to power one of its data centres. Another upstart is Canada’s Eavor, which is currently building a giant underground “radiator” in Germany that could heat an entire town.

“The problem has always been geology and economics, but the advances of startups like Fervo and Eavor in recent years have changed the game,” says Kolind.

While US startups are leading the pack, Europe is well poised to compete. 

“Europe has excellent geothermal subsurface conditions, and, unlike America, it also has a strong tradition for district heating,” says Kolind. The investor believes it’s only a matter of time before Europe’s investors and policymakers go all-in on geothermal tech. 

“Unlike natural gas and coal, it is fossil-free. Unlike wind and solar, it is always-on. And unlike nuclear energy, it is geopolitically benign,” he says.
