Energy, data, environment

How Much Energy Does It Take To Think? | Quanta Magazine


Studies of neural metabolism reveal our brain’s effort to keep us alive and the evolutionary constraints that sculpted our most complex organ.

Introduction

You’ve just gotten home from an exhausting day. All you want to do is put your feet up and zone out to whatever is on television. Though the inactivity may feel like a well-earned rest, your brain is not just chilling. In fact, it is using nearly as much energy as it did during your stressful activity, according to recent research.

Sharna Jamadar, a neuroscientist at Monash University in Australia, and her colleagues reviewed research from her lab and others around the world to estimate the metabolic cost of cognition — that is, how much energy it takes to power the human brain. Surprisingly, they concluded that effortful, goal-directed tasks use only 5% more energy than restful brain activity. In other words, we use our brain just a small fraction more when engaging in focused cognition than when the engine is idling.

It often feels as though we allocate our mental energy through strenuous attention and focus. But the new research builds on a growing understanding that the majority of the brain’s function goes to maintenance. While many neuroscientists have historically focused on active, outward cognition, such as attention, problem-solving, working memory and decision-making, it’s becoming clear that beneath the surface, our background processing is a hidden hive of activity. Our brains regulate our bodies’ key physiological systems, allocating resources where they’re needed as we consciously and subconsciously react to the demands of our ever-changing environments.

“There is this sentiment that the brain is for thinking,” said Jordan Theriault, a neuroscientist at Northeastern University who was not involved in the new analysis. “Where, metabolically, [the brain’s function is] mostly spent on managing your body, regulating and coordinating between organs, managing this expensive system which it’s attached to, and navigating a complicated external environment.”

The brain is not purely a cognition machine, but an object sculpted by evolution — and therefore constrained by the tight energy budget of a biological system. Thinking may make you feel tired, then, not because you are out of energy, but because you have evolved to preserve resources. This study of neural metabolism, when tied to research on the dynamics of the brain’s electrical firing, points to the competing evolutionary forces that explain the limitations, scope and efficiencies of our cognitive capabilities.

The Cost of a Predictive Engine

The human brain is incredibly expensive to run. At roughly 2% of body weight, the organ gorges on 20% of our body’s energetic resources. “It’s hugely metabolically demanding,” Jamadar said. For infants, that number is closer to 50%.
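To put that 20% share in rough physical terms (a back-of-envelope sketch using an assumed resting metabolic rate, not a figure from the review), the brain's continuous power draw works out to roughly 15 to 20 watts:

    # Rough sketch: convert the brain's ~20% share of resting energy use into watts.
    # The 1,600 kcal/day resting metabolic rate is an assumed typical value, not from the paper.
    KCAL_TO_J = 4184
    resting_kcal_per_day = 1600
    brain_share = 0.20                            # ~20% of the body's energy budget (from the article)

    brain_joules_per_day = resting_kcal_per_day * KCAL_TO_J * brain_share
    brain_watts = brain_joules_per_day / 86_400   # seconds per day
    print(f"Brain power draw: ~{brain_watts:.0f} W")   # on the order of 15 W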

The brain’s energy comes in the form of the molecule adenosine triphosphate (ATP), which cells make from glucose and oxygen. A tremendous expanse of thin capillaries — an estimated 400 miles of vascular wiring — weaves through brain tissue to carry glucose- and oxygen-rich blood to neurons and other brain cells. Once synthesized within cells, ATP powers communication between neurons, which enact the brain’s functions. Neurons carry electrical signals to their synapses, which allow the cells to exchange molecular messages; the strength of a signal determines whether they will release molecules (or “fire”). If they do, that molecular signal determines whether the next neuron will pass on the message, and so on. Maintaining what are known as membrane potentials — stable voltages across a neuron’s membrane that ensure that the cell is primed to fire when called upon — is known to account for at least half of the brain’s total energy budget.

Measuring ATP directly in the human brain is highly invasive. So, for their paper, Jamadar’s lab reviewed studies, including their own findings, that used other estimates of energy use — glucose consumption, measured by positron-emission tomography (PET), and blood flow, measured by functional magnetic resonance imaging (fMRI) — to track differences in how the brain uses energy during active tasks and rest. When performed simultaneously, PET and fMRI can provide complementary information on how glucose is being consumed by the brain, Jamadar said. It’s not a complete measure of the brain’s energy use because neural tissues can also convert some amino acids into ATP, but the vast majority of the brain’s ATP is produced by glucose metabolism.

Jamadar’s analysis showed that a brain performing active tasks consumes just 5% more energy than a resting brain. When we are engaged in an effortful, goal-directed task, such as studying a bus schedule in a new city, neuronal firing rates increase in the relevant brain regions or networks — in that example, visual and language processing regions. This accounts for the extra 5%; the remaining 95% goes to the brain’s base metabolic load.

Researchers don’t know precisely how that load is allocated, but over the past few decades, they have clarified what the brain is doing in the background. “Around the mid-’90s we started to realize as a discipline [that] actually there is a whole heap of stuff happening when someone is lying there at rest and they’re not explicitly engaged in a task,” she said. “We used to think about ongoing resting activity that is not related to the task at hand as noise, but now we know that there is a lot of signal in that noise.”

Much of that signal is from the default mode network, which operates while we’re resting or otherwise not engaged in apparent activity. This network is involved in the mental experience of drifting between past, present and future scenarios — what you might make for dinner, a memory from last week, some pain in your hip. Additionally, beneath the iceberg of awareness, our brains are keeping track of the mosaic of physical variables — body temperature, blood glucose level, heart rate, respiration, and so on — that must remain stable, in a state known as homeostasis, to keep us alive. If any of them stray too far, things can get bad pretty quickly.

Theriault speculates that most of the brain’s base metabolic load goes toward prediction. To achieve its homeostatic goals, the brain needs to always be planning for what comes next — building a sophisticated model of the environment and how changes might affect the body’s biological systems. Prediction, rather than reaction, Theriault said, allows the brain to dole out resources to the body efficiently.

The Brain’s Evolutionary Constraints

A 5% increased energy demand during active thought may not sound like much, but in the context of the entire body and the energy-hungry brain, it can add up. And when you consider the strict energetic constraints our ancestors had to deal with, weariness at the end of a taxing day suddenly makes a lot more sense.

“The reason you are fatigued, just like you are fatigued after physical activity, isn’t because you don’t have the calories to pay for it,” said Zahid Padamsey, a neuroscientist at Weill Cornell Medicine-Qatar, who was not involved in the new research. “It is because we have evolved to be very stingy systems. … We evolved in energy-poor environments, so we hate exerting energy.”

The modern world, in which calories are relatively abundant for many people, contrasts starkly with the conditions of scarcity that Homo sapiens evolved in. That 5% increase in burn rate, over 20 days of persistent, active, task-related focus, can amount to a whole day’s worth of cognitive energy. If food is hard to come by, it could mean the difference between life and death.

“This can be substantial over time if you don’t cap the burn rate, so I think it is largely a relic of our evolutionary heritage,” Padamsey said. In fact, the brain has built-in systems to prevent overexertion. “You’re going to activate fatigue mechanisms that prevent further burn rates,” he said.

To better understand these energetic constraints, in 2023 Padamsey summarized research on certain peculiarities of electrical signaling that indicate an evolutionary tendency toward energy efficiency. For one thing, you might imagine that the faster you transmit information, the better. But the brain’s optimal transmission rate is far lower than might be expected.

Theoretically, the top rate at which a neuron could feasibly fire and send information to its neighbor is 500 hertz. However, if neurons actually fired at 500 hertz, the system would become completely overwhelmed. The optimal information rate — the fastest rate at which neurons can still distinguish messages from their neighbors — is half that, or 250 hertz.

Our neurons, however, have an average firing rate of 4 hertz, 50 to 60 times less than what is optimal for information transmission. What’s more, many synaptic transmissions fail: Even when an electrical signal is sent to the synapse, priming it to release molecules to the next neuron, it will do so only 20% of the time.

That’s because we didn’t evolve to maximize total information sent. “We have evolved to maximize information transmission per ATP spent,” Padamsey said. “That’s a very different equation.” When the goal is to send the maximum amount of information for as little energy as possible (bits per ATP), the optimal neuronal firing rate is under 10 hertz.
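The bits-per-ATP logic can be made concrete with a toy model (my own illustrative sketch, not Padamsey's calculation): treat a neuron as firing in discrete time bins with probability p, score the information carried as the entropy of that on/off signal, and charge a small fixed cost per bin plus a much larger cost per spike. Maximizing bits per second favors firing half the time; maximizing bits per unit of energy pushes the optimum toward much sparser firing.

    # Toy model (illustrative assumptions): sparse firing maximizes information per unit energy.
    import numpy as np

    def entropy_bits(p):
        """Shannon entropy of a binary spike/no-spike signal, in bits per time bin."""
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    p = np.linspace(1e-4, 0.5, 10_000)   # spike probability per time bin
    fixed_cost = 1.0                     # assumed resting cost per bin (arbitrary energy units)
    spike_cost = 20.0                    # assumed extra cost per spike; spikes are expensive

    bits = entropy_bits(p)
    energy = fixed_cost + spike_cost * p
    efficiency = bits / energy           # bits per unit energy

    print(f"p maximizing bits per bin:    {p[np.argmax(bits)]:.2f}")        # ~0.50
    print(f"p maximizing bits per energy: {p[np.argmax(efficiency)]:.2f}")  # well below 0.50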

Evolutionarily, the large, sophisticated human brain offered an unprecedented level of behavioral complexity — at a great energetic cost. This negotiation, between the flexibility and innovation of a large brain and the energetic constraints of a biological system, defines the dynamics of how our brain transmits information, the mental fatigue we feel after periods of concentration, and the ongoing work our brain does to keep us alive. That it does so much within its limitations is rather astonishing.


Innovation to Impact: Advancing Solid-State Cooling to Market

RMI

Introduction

As the world encounters another hot summer, cooling is becoming an even hotter topic. Cooling demand is skyrocketing, driven primarily by the Global South and fueled by rising income levels, population growth, urbanization, and increasing global temperatures.

For investors, a booming future market — with most AC purchases yet to be made — opens the opportunity to invest now in superior sustainable cooling solutions that will shape our future.

One such solution is solid-state cooling — a technology class with the potential to revolutionize the cooling industry. Why? Compared with the incumbent, century-old vapor-compression technology, solid-state cooling can offer improved efficiency, lower emissions and energy costs, and eliminate the need for super-polluting refrigerants.

In our first article of this series, we explained what solid-state cooling is, the promise it holds, and why it can be an important solution to the cooling challenge. In this article, we’ll dive into its market potential, market drivers, and what it will take to get these innovative cooling solutions to commercialization and scale.


The advantages of solid-state cooling: No refrigerants, higher performance ceilings, simpler systems

Since the advent of modern air conditioning in the late 19th century, vapor compression has remained the dominant technology powering the global cooling market. This refers to first compressing and condensing a refrigerant, releasing heat in the process, and then expanding and evaporating it to absorb heat and produce a cooling effect. Today, approximately 95 percent of all cooling equipment relies on the vapor compression cycle.

The efficiency improvements associated with this technology have been slow and incremental and are effectively capped by the “Carnot Limit” — determined by the temperatures of the hot and cold reservoirs used by the cooling system.

Because they are not dependent on moving heat between reservoirs, solid-state technologies have already demonstrated much higher potential performance ceilings. Some solid-state technologies can reach a coefficient of performance (COP) above 10, almost double the COP of incumbent AC systems, where the best score is roughly 5.5.
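As a quick sense check of what that gap means in electricity terms (a simple sketch using the COP figures above; COP is the heat moved per unit of electricity consumed):

    # Electricity needed to deliver 1 kWh of cooling at different COPs.
    cooling_kwh = 1.0
    for label, cop in [("best vapor-compression (~5.5)", 5.5), ("solid-state potential (>10)", 10.0)]:
        print(f"{label}: {cooling_kwh / cop:.2f} kWh of electricity")
    # ~0.18 kWh vs ~0.10 kWh: roughly 45% less electricity per unit of cooling delivered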

The challenge, of course, is to translate this potential into reality. Innovators are working to do just that (i.e., to achieve system-level performance comparable to or higher than their vapor compression counterparts). The team at Pascal, for example, has recently demonstrated that barocaloric materials can deliver effective cooling and heating at pressure levels comparable to those used in conventional air conditioners.

This is an exciting breakthrough in terms of material performance, and it offers a potential pathway toward cooling systems that are energy efficient and easier to design and integrate over time with standard components. Similarly, thermoelectric systems can achieve precision cooling without moving parts — no pistons, compressors, or hydraulics — simplifying the components needed.

The other major advantage of solid-state solutions is that they do away with refrigerants, which often have very high global warming potential (GWP) (for example, R-134a, one of the most common refrigerants used today, has a GWP of 1,430, meaning it traps 1,430 times more heat than carbon dioxide) and are increasingly subject to regulatory phase-outs.

In sum, while industry incumbents will likely continue to move the needle on vapor compression systems in response to regulatory and market pressure, solid-state cooling has the potential to outpace these improvements. There is step-change potential associated with its refrigerant-free operation, higher performance ceiling, and streamlined system design.


The potential market for solid-state cooling technologies is enormous.

The global cooling market is undergoing a dynamic transformation, creating unprecedented opportunities for sustainable cooling solutions, like solid-state technologies. The global active cooling sector, which includes air conditioning (AC), refrigeration, and mobile cooling, was valued at an estimated $633 billion in 2023 and is expected to top $1 trillion by 2050. Much of this growth will be driven by developing economies, which will comprise 60 percent of this demand, creating a $600 billion market in 2050 — more than doubling from its $272 billion size today.

The global cooling market is large enough and segmented enough that solid-state cooling startups could find sizeable beachheads (starting markets). Take two Third Derivative portfolio companies as examples. MIMiC, based in New York, is developing thermoelectric solid-state systems that can replace the standard packaged terminal AC (PTAC) units you often see in hotel rooms and many multifamily buildings. For US hotels alone, this could be a market worth more than $7 billion. Magnotherm, based in Darmstadt, Germany, is developing magnetocaloric refrigerators for supermarkets, grocery stores, and food and beverage retail. In Europe alone, this is an estimated $17 billion market. Even carving out a niche and targeting a few specialized segments in this market presents a multi-billion-dollar opportunity for young, innovative companies.

While the market for cooling solutions is booming overall, some dynamics are creating particularly favorable conditions for solid-state cooling. For example, there is a push in the regulatory landscape that may support the advancement of solid-state cooling technology. Most directly, regulations that tighten allowable refrigerant GWPs — largely being driven by the EU (building on the accelerating global effort to phase down high-GWP synthetic refrigerants)  — would benefit solid-state as it’s free of potent, high-GWP refrigerants.

Additionally, efficiency standards and incentives, including minimum energy performance standards (like Japan’s Top Runner program which sets performance standards for a range of appliances — labeling the most efficient AC and refrigeration models on the market, encouraging competition between companies to be the “Top Runner”), can support efficient solid-state cooling systems. Cities, states, countries, or regions with strong efficiency standards or incentives could become strong beachhead markets for solid-state cooling startups.

All in all, there is a perfect storm brewing to disrupt a global cooling market that has not witnessed a radical and environmentally sustainable innovation for nearly a century — either through innovations in vapor compression systems or alternative approaches, like solid-state cooling.


To enter the mainstream, solid-state cooling still has challenges to overcome

As a nascent technology, solid-state cooling systems are still relatively scarce in the market. While the efficiency potential is significant, there remains a wide gap between having an efficient material and building an efficient, integrated system. Startups often face challenges when it comes to system integration — combining materials with components like heat exchangers, controllers, and power supplies without significant losses in efficiency. For example, the most efficient elastocaloric materials right now have a material-level coefficient of performance (COP) above 10, but once fully integrated into a cooling system, COPs will likely start out closer to those of vapor compression systems (around 3).

Material fatigue is another critical hurdle for some approaches, namely barocaloric and elastocaloric, which generate cooling through the repetitive stretching or compression of materials. Consumers expect their air conditioners and refrigerators to last 15 years or more, and solid-state systems must demonstrate long-term reliability under continuous cycling.

Supply chain limitations — particularly for magnetocaloric systems, which rely on rare earth materials for permanent magnets — pose additional challenges. However, several solid-state startups are proactively working to leverage existing supply chains for components, reducing supply chain risks and offering a pathway toward sustainability in the future. AI can also play a role in material discovery, identifying new, more promising materials for solid-state cooling. One Third Derivative portfolio company, Matnex, is working on just this — identifying and scaling new materials using AI and machine learning — which could support solid-state innovators as well.

Above all, the most significant challenge facing solid-state cooling today is cost. Like many emerging technologies, solid-state systems will initially come at a premium price.

Solar panels, for example, were once over 100 times more expensive than they are today. With economies of scale, optimized manufacturing processes, and improvements in the cell technology itself, costs fell dramatically. The historical “learning rate” — the fall in costs associated with each doubling of production volumes —  for solar panels is 20 percent.
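Wright's Law is just a power law in cumulative production. A minimal sketch of the arithmetic, using the 20 percent solar learning rate quoted above and illustrative volumes:

    # Wright's Law: cost falls by a fixed fraction (the learning rate) with each doubling
    # of cumulative production.
    import math

    def cost_after(cumulative_units, initial_units, initial_cost, learning_rate):
        doublings = math.log2(cumulative_units / initial_units)
        return initial_cost * (1 - learning_rate) ** doublings

    # Illustrative numbers only: a 1,000-fold increase in cumulative volume (~10 doublings)
    # at a 20% learning rate leaves costs at roughly 11% of where they started.
    print(cost_after(cumulative_units=1_000, initial_units=1, initial_cost=100.0, learning_rate=0.20))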

Given the urgency of climate change and the global need for efficient, sustainable cooling, there are strong reasons to believe that market forces will drive adoption and therefore open a path for cost reductions for solid-state cooling technologies in the near future.

Some startups are already demonstrating cost effectiveness. For example, UK-based startup Anzen Walls, supported by Third Derivative ecosystem partner Carbon 13, is developing thermoelectric heat pumps for homes that target a price comparable to or lower than traditional heat pumps.

Some recent partnerships between solid-state cooling startups and original equipment manufacturers (OEMs) such as Carrier, Copeland, and Trane show opportunities to cut costs and make solid-state more affordable. OEMs are very experienced in system and cost optimization and have access to large-scale distribution networks that would rapidly streamline this process. If startups can demonstrate performance and reliability, OEM partners can help drive down costs while offering access to established distribution networks, manufacturing infrastructure, and customer relationships, thereby paving the pathway for commercialization and scaling of these technologies.


What are the pathways to market?

There is a collection of exciting startups in the solid-state cooling space. The typical path to market holds true for solid-state cooling as well:

  1. Refinement: continued product performance refinement with grant and VC support.
  2. Validation: startups will build pilot manufacturing facilities on their own to validate performance with real-world pilots. Alternatively, early partnerships with manufacturers build confidence and can open doors for future investment or even acquisition. This approach isn’t new to this sector. In 2020, Emerson acquired 7AC – a startup developing more efficient air conditioning technology through liquid desiccants – after collaborating with the company to commercialize the new technology.
  3. Demand: market interest indicated through demand signals. This is already appearing in the space – Walmart and IKEA have both committed to significantly reducing, or eliminating entirely, their use of high-GWP refrigerants.
  4. Partnerships: a faster, scalable path necessitates partnerships with manufacturers, whether through licensing, direct sales of components, joint ventures, or acquisition, to bring their innovations to market. Manufacturers in the cooling space are rather consolidated and very well-established – they not only hold major market shares, but they also have deep expertise in system design, cost reduction and market access. Additional partnerships with testing bodies will also be needed so that standards for solid-state cooling can be developed and integrated into existing frameworks.

Solid-state’s potential has not gone unnoticed—established manufacturers are actively monitoring and engaging with the space. For example, Carrier Ventures recently invested in elastocaloric startup Exergyn, while Copeland has backed thermoacoustic heat pump startup BlueHeart Energy. These moves signal growing industry confidence in solid-state technologies as the next frontier in sustainable cooling.


Solid-state’s right to win in the cooling market

Solid-state technologies are emerging as a potential frontrunner to disrupt the cooling market, but what gives solid-state a “right to win” or an unbeatable edge in the market? Some aspects are yet to be fully proven, but there are two exciting edges that solid-state offers: the elimination of potent refrigerants, and the potential for very high-performance ceilings. If the performance is actualized (for example, achieving system COPs of at least 3), solid-state will likely have a right to win in certain beachhead markets, and use those footholds to scale beyond.

For investors, this presents a timely opportunity to place an early bet on a rapidly evolving technology with major promise. Interest from major OEMs signals strong industry momentum toward a new era of cooling. Looking ahead, two key areas to watch are how startups improve performance and drive down costs — with effective systems integration being key to both. We see a clear opportunity for early-stage capital — especially pre-seed and seed investments — to play a pivotal role in supporting startups as they scale and commercialize their innovations.

The authors would like to thank Blue Haven Initiative for funding this research, and Ankit Kalanki, Chetan Krishna, and Shruti Naginkumar Prajapati for their contributions.


Eating wild animals might be sustainable for the few, but not for the many


I’ve written a lot about the environmental impacts of our food systems. People often underestimate the impact of what we eat on everything from climate change to land use, biodiversity loss, water use, and pollution. For most people: if you want to reduce the environmental impact of your diet then eating less meat and dairy is the best place to start. That’s where you’ll have the largest impact.

Our livestock systems today are big. Not just in terms of land use, but the number of animals we slaughter every year. It’s over 80 billion animals (or hundreds of millions a day) and that’s just for those on land. Include fish and shrimp, and the total is in the trillions. Most of those animals are factory-farmed, living pretty miserable — and often painful — lives.

Some of those who are concerned about the environmental and welfare impacts of their diet either reduce how much meat and dairy they’re eating, or cut it out completely. I did the former then progressed to the latter.

But another alternative that I get asked about a lot, and goes through popularity waves in the media, is consuming wild meat instead.

“I’ve heard that wild meat is much more sustainable, is it better for me to eat that?”

Usually when I speak to people about this, they start getting bogged down in metrics about carbon footprints. But I think that these standard environmental footprint metrics are often missing the point when it comes to the sustainability of wild meat. Or, at least, it’s not the climate impact that’s the biggest “problem”: it’s the fact that we eat a lot of meat, and there’s not that much wild meat out there. The biggest limitation is not molecules of carbon dioxide, but the risk that we quickly run out.

To test this, I wanted to run some quick numbers on how supply of wild meat might stack up against demand. How long could wild meat keep us going for?

Note that some readers will think that we shouldn’t eat wild animals either, for ethical reasons.1 That’s fine and valid. But this article is not about the morality of eating meat, which is a whole separate discussion. It’s purely trying to get a sense of how current levels of meat consumption compare to how much meat is available from wild sources.

Let’s take the UK as an example. Overpopulation of deer, which can negatively impact landscapes and other wildlife, is seen as a problem. Wild venison, then, is sometimes promoted as a sustainable food choice.

Now, this can make sense for some people. But is it good general (i.e., population-wide) advice?

Some back-of-the-envelope calculations.

Here, I’m going to assume there are around 2 million deer in the UK. Now, this is a particularly contentious figure (I didn’t know how controversial it was in some groups until I started trying to find a reasonable estimate). It appears to be a headline number that is quoted a lot, and too confidently given the uncertainty. The reality is that no one really knows this figure with high accuracy.2 Estimates range from as low as 650,000 to 2.5 million. I’ve seen others claim that it’s even higher.

I am using this 2 million number, knowing it is not perfect and will make some people angry, but as we’ll see, the choice of figure won’t change the overall conclusion. If the truth is closer to 650,000, then my point is even more obvious. If there are actually, say, 4 million deer, we can just double the final result.

You might get 25 to 30 kilograms of meat from one deer.3 That’s around 50 million kilograms (or 50,000 tonnes) of deer meat for the entire UK.

Sounds like a lot.

But how much meat do Brits eat? Around 75 kilograms per person per year, which comes to around 5 billion kilograms for the country as a whole.

If we killed every deer in the UK, it could maybe supply 1% of our meat consumption — or three days’ worth of meat. Of course, there would be no deer left, so after three days, our entire “wild meat” transition would be over.

Realistically, we would never kill all our deer at once. I’ve heard people recommend a “sustainable” cull rate of around 20% of the population (again, this is a very contentious figure and depends on population dynamics). So we need to divide our 1% of meat supply by five, giving just 0.2%. That’s around half a day of the country’s meat.
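Putting the back-of-the-envelope above into one place (all figures as assumed in the text, and rough by design), the arithmetic looks like this:

    # UK deer vs UK meat demand, using the contested figures discussed above.
    deer_population = 2_000_000        # contested; estimates range from ~650,000 to 2.5 million or more
    meat_per_deer_kg = 25              # low end of the 25-30 kg range
    uk_meat_per_person_kg = 75         # per year
    uk_population = 68_000_000         # approximate

    deer_meat_kg = deer_population * meat_per_deer_kg            # ~50 million kg
    uk_meat_demand_kg = uk_meat_per_person_kg * uk_population    # ~5 billion kg

    share_if_all_culled = deer_meat_kg / uk_meat_demand_kg       # ~1%
    share_at_20pct_cull = 0.20 * share_if_all_culled             # ~0.2%

    print(f"Every deer culled: {share_if_all_culled:.1%} of annual demand, ~{share_if_all_culled * 365:.1f} days of meat")
    print(f"20% cull:          {share_at_20pct_cull:.2%} of annual demand, ~{share_at_20pct_cull * 365:.1f} days of meat")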

The only way this works as general dietary advice is if:

  • Everyone eats a lot less meat than they currently do. Even then, it’s still going to be a pretty small share of total meat supply.

  • It’s very clear that people can only eat wild venison very rarely. And by that, I mean one meal a year or something.

  • Most of the population are resistant to eating venison, or it’s too expensive, so they would never take this advice anyway (this doesn’t seem that implausible to me).

Again, this can work as advice for some people. I’m not saying that your cut of venison is unsustainable. On many metrics, it’s better than farmed livestock. Clearly, 20% of the UK’s deer would provide quite a bit of meat for some consumers. But it does not scale to the population as a whole, and I therefore think it’s not great general advice to dish out.

As I said above, the exact number of deer in the UK is highly uncertain. If there were only 650,000 deer, the percentages would be even smaller. If there were as many as 4 million (which I’m using as a very high estimate), it’s still just 0.4% of the country’s meat.

Maybe Brits just eat a lot of meat, and live on a small island without many animals. How do the numbers stack up globally?

Not much better.

Let’s be “generous” and say that we don’t just restrict it to deer. All of the world’s wild mammals are available, from elephants and rhinos to mice. How we’re going to catch them all is beyond me, but let’s not ruin the hypothetical.4

Wild mammals on land weigh around 20 million tonnes, equivalent to 3 kilograms per person on Earth.5 Not all of that mass is going to be edible, so let’s take a very rough estimate of a 50% “cut”. That would be 1.5 kilograms per person.

Globally, meat supply is around 44 kilograms per person per year (this doesn’t include fish).6

This means that all of the wild mammals in the world would meet just two weeks of our demand for meat. Within a fortnight, they’d all be gone, and their populations would not rebuild.

If we were to also include mammals in the ocean (mostly whales), we’d still only have enough for just over one month.7
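The same quick arithmetic, at the global scale and with the figures used above:

    # All wild land mammals vs global meat demand (figures as assumed in the text).
    wild_mammal_kg_per_person = 3.0    # ~20 million tonnes of wild land mammals over ~8 billion people
    edible_fraction = 0.5              # very rough "cut" assumption
    meat_supply_per_person_kg = 44     # per year, excluding fish

    edible_kg_per_person = wild_mammal_kg_per_person * edible_fraction         # ~1.5 kg
    days_of_supply = edible_kg_per_person / (meat_supply_per_person_kg / 365)  # ~12 days
    print(f"~{days_of_supply:.0f} days of global meat demand")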

Let me be clear: some populations do rely heavily on wild meat (but in smaller quantities than people eat in many high and middle-income countries today). Many of our ancestors ate wild game, too. For some, it’s an essential source of nutrition and is not necessarily unsustainable.

My point is not that this is “wrong”. It’s that this simply doesn’t scale for 8 billion people. Especially not with our current levels of meat consumption (which, globally, are still increasing). It might work for thousands, or even millions, but not for billions. If we were to all eat in this way, animal populations would quickly disappear (and we’d be left without any wild animals or any meat).


Why Direct Air Capture Won't Replicate the Solar Revolution - CleanTechnica




The remarkable cost declines in solar photovoltaic (PV) and lithium-ion batteries over the past several decades have fueled optimism in the climate policy and investment community, with many hoping direct air capture (DAC) technologies might follow a similar trajectory. Policymakers, investors, and industry proponents frequently draw analogies between DAC and these wildly successful clean-energy technologies, invoking Wright’s Law—a rule of thumb where costs fall predictably with cumulative doubling of production—to justify unjustifiably bullish projections for DAC’s future costs. Given the unrealistic scenarios requiring carbon removal to achieve net-zero emissions, the attractiveness of DAC scaling cheaply and quickly is understandable, yet such optimism demands critical scrutiny grounded in the realities of technology, markets, and physics.

I’m drawn back to this space, one I’ve been exploring for 15 years, doing technoeconomic assessments of Global Thermostat’s applicability in heavy rail and Carbon Engineering’s natural market of enhanced oil recovery, and speaking to many of the leading researchers and entrepreneurs in the space. Why am I drawn back? Because I recently wrote about Climeworks’ ongoing trainwreck—105 tons total captured from a 40,000-ton annual homeopathy device and now 22% layoffs—and many commenters made it clear that they thought DAC would follow solar and batteries into the nirvana of cheapness.

Solar PV and batteries achieved their cost revolutions through clear, consistent factors. Foremost was Wright’s Law itself: with each doubling of global cumulative production, solar PV saw about a 20% reduction in cost, while lithium-ion batteries experienced roughly a 19% drop per doubling. Historically, Wright’s Law saw 20% to 27% decreases, depending on the simplicity of the product. These impressive and predictable learning rates emerged for solar and batteries because both technologies quickly found mass-market applications with billions of end-users—solar panels across rooftops worldwide and lithium-ion cells powering consumer electronics and, later, electric vehicles. Such enormous, diverse markets spurred massive economies of scale, standardization of manufacturing processes, and continuous incremental innovation, driving down prices dramatically over relatively short periods.

In solar manufacturing, scale-up to gigawatt-scale factories allowed for unprecedented efficiency gains. Automation, production-line standardization, reduced material usage, and steady incremental improvements in cell efficiency combined to achieve a 99% reduction in costs since the 1970s. Batteries followed a similar pattern. Initially expensive lithium-ion cells rapidly benefited from global consumer electronics markets, then exploded in scale with the electric vehicle boom of the 2010s. Innovations in chemistry, manufacturing methods, and supply chain management drove battery costs down by over 90% since 2010 alone. Crucially, both technologies became genuinely commoditized, their costs falling sufficiently low to be attractive purely on market economics, independent of ongoing subsidies.

In stark contrast, DAC technology faces fundamental structural, thermodynamic, and market constraints that severely limit its potential to emulate these learning-curve successes. While DAC systems like those developed by Climeworks and Carbon Engineering also involve engineered modular units, their scale and replicability differ drastically from solar and batteries. Solar PV and battery units are small, identical, easily mass-produced components numbering in the billions, allowing rapid parallel production and iterative optimization. DAC, conversely, involves large, complex industrial-scale modules that process massive volumes of air. Even highly modularized DAC units like those envisioned by Climeworks represent significant, capital-intensive systems, each processing hundreds of thousands to millions of cubic meters of air per ton of CO₂ captured. Achieving large-scale global deployment would involve thousands of units—not billions—limiting opportunities for rapid learning through repetition and optimization.

Further compounding this problem, DAC relies heavily on mature, off-the-shelf technologies. Key components such as large industrial fans, chemical sorbents, heat exchangers, compressors, and pumps are already widely used across industries. Unlike emerging semiconductor processes or battery chemistries that initially featured substantial inefficiencies ripe for innovation, DAC’s hardware components are closer to their optimized cost floors, having already benefited from decades of engineering and scale in other applications. Incremental improvements in sorbent chemistry or component efficiency may yield modest savings, but the potential for radical cost reductions through fundamentally new approaches or extensive technological simplifications is inherently limited.

Perhaps the most stubborn barrier DAC faces in following a PV-like cost curve is rooted in basic physics: the energy-intensive nature of extracting CO₂ from the atmosphere. Unlike solar cells, whose primary cost drivers are fabrication efficiency and material utilization, DAC confronts unavoidable thermodynamic constraints. The fundamental minimum energy required to capture CO₂ at the dilute concentrations found in ambient air sets a hard, non-negotiable energy floor. Current DAC operations use energy at several times the theoretical minimum, but even highly optimistic scenarios still require substantial energy input, typically hundreds to thousands of kilowatt-hours per ton of CO₂. Thus, DAC will always incur significant operational energy costs that place a lower bound on achievable pricing, unlike solar panels and batteries, whose unit costs dropped rapidly with better manufacturing processes and materials science advances.
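For a sense of where that thermodynamic floor sits (a standard dilute-limit estimate, not a figure from this article), the minimum work to pull CO2 out of air at roughly 420 parts per million follows from the entropy of mixing:

    # Idealized minimum separation work for CO2 capture from ambient air (dilute-limit approximation).
    import math

    R = 8.314          # gas constant, J/(mol*K)
    T = 298.0          # ambient temperature, K
    x_co2 = 0.00042    # ~420 ppm CO2 in air
    M_CO2 = 0.044      # kg per mol of CO2

    w_min_j_per_mol = R * T * math.log(1 / x_co2)            # ~19 kJ/mol
    w_min_kwh_per_tonne = (w_min_j_per_mol / M_CO2) * 1000 / 3.6e6
    print(f"Theoretical floor: ~{w_min_kwh_per_tonne:.0f} kWh per tonne of CO2")  # ~120 kWh/t

Real systems run at several times this floor, which is consistent with the hundreds to thousands of kilowatt-hours per ton cited above.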

Adding complexity, DAC is physically and materially intensive. Capturing millions of tons of CO₂ per year demands enormous amounts of infrastructure—steel, concrete, sorbent materials, and sophisticated capital equipment. Unlike digital technology or small-scale consumer goods, DAC units cannot shrink significantly or dramatically reduce material inputs without sacrificing performance. Indeed, the large physical dimensions of air contactors, substantial volumes of sorbent material needed, and considerable infrastructure for regeneration and compression suggest that DAC systems will remain heavy, complex installations. As DAC scales, rather than benefit from continuously cheaper materials, increased demand for specialty chemicals and industrial materials may drive prices upward, potentially offsetting some manufacturing efficiency gains. This scenario contrasts sharply with the declining per-unit material intensity that helped accelerate solar and battery cost reductions.

Critically, DAC lacks the autonomous, self-sustaining market demand that propelled solar PV and batteries. Solar power and battery storage offered direct economic benefits to millions of end-users, enabling them to become cost-competitive with conventional energy sources over time. DAC, however, provides an environmental service—carbon removal—whose value remains purely policy-dependent. Without robust carbon pricing, governmental incentives, or regulatory mandates, DAC has no inherent private market demand, severely limiting its potential cumulative production growth. Whereas solar panels and batteries rapidly scaled through consumer and business demand, DAC expansion hinges exclusively on sustained public policy support. Such policy-driven markets are vulnerable to political shifts, budget constraints, and public sentiment, making exponential growth in DAC production far less predictable or assured.

Historical analogues from other large-scale industrial and environmental technologies underscore DAC’s challenging trajectory. Technologies such as nuclear power, large-scale carbon capture on fossil plants, and industrial chemical plants have all faced similar complexities and constraints, often resulting in slow, incremental cost reductions—or even cost escalation—as they scaled. These technologies offer more instructive benchmarks for DAC than solar or batteries, highlighting the cautious reality that DAC may experience only modest learning curves of around 10% per cumulative doubling, far slower than the 20% or more seen in clean-energy consumer markets.

All of this leads to a cost reduction of 10% or less over a lot fewer doublings for DAC fan units, the only component that will see any real volumes. The charted learning curve ends at just below 10,000 units. For context, a million-ton-per-year Carbon Engineering system might have 250 contactor units, the basic module, in a wall two kilometers long and 20 meters high. They’d have to build 64 km of their system to get to 8,000 fans, and that’s exceedingly unlikely. To get another 10%, they’d have to build 128 km of walls of their system with 16,000 units. To get another 10%, 256 km with 32,000 units.

Meanwhile, a single one GW solar farm has around 1.8 million solar panels. The volumes are radically different, and the rate of cost decreases per doubling are radically different.
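That volume gap translates directly into a learning gap. A rough comparison under Wright's Law, using the unit counts above, a 10% learning rate for DAC contactors and 20% for solar panels (illustrative assumptions, not a forecast):

    # How much cost reduction each technology's plausible doublings buy (illustrative only).
    import math

    def remaining_cost_fraction(initial_units, final_units, learning_rate):
        doublings = math.log2(final_units / initial_units)
        return (1 - learning_rate) ** doublings

    # DAC contactor fans: ~250 units in a first 1 Mt/yr plant, ~8,000 after 64 km of walls.
    print(f"DAC fans:     {remaining_cost_fraction(250, 8_000, 0.10):.2f} of original cost")     # ~0.59

    # Solar panels: a single 1 GW farm holds ~1.8 million panels; cumulative global volume is
    # thousands of times larger again.
    print(f"Solar panels: {remaining_cost_fraction(1.8e6, 1.8e9, 0.20):.2f} of original cost")   # ~0.11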

Looking forward, expert analyses from independent institutions like the International Energy Agency, Harvard’s Belfer Center, and the National Academies broadly agree: DAC costs will likely remain in the triple-digit dollar range per ton even after decades of scaling. Starry-eyed scenarios predict DAC might achieve costs around $150 to $250 per ton by mid-century under aggressive deployment assumptions. More realistic projections settle higher, acknowledging inherent thermodynamic limits, persistent energy costs, and material constraints. Industry-driven forecasts that envision DAC below $100 per ton are simply delusional, hinging on technological breakthroughs that would require changing the laws of physics and ludicrously low energy cost assumptions as a result.

Given these realities, policymakers and investors must fundamentally rethink their near-term engagement with DAC. Aggressively reducing emissions through proven, lower-cost technologies such as electrification, renewable energy, and energy efficiency should remain the clear and unambiguous priority until energy systems are fully decarbonized and surplus renewable electricity is abundant—likely not until after 2040 and probably beyond 2050. DAC, due to its inherently high energy intensity and substantial infrastructure requirements, should not divert limited resources from direct emission-reduction strategies until we reach a point where clean energy is inexpensive and plentiful.

Policymakers and investors should limit current DAC involvement strictly to research and development, aiming to improve technology performance, reduce energy requirements, and better understand realistic long-term potential. Public spending on commercial-scale DAC deployment or infrastructure is premature and risks locking in inefficient, high-cost solutions before cleaner, lower-cost alternatives are fully exploited.

Carbon removal strategies in the immediate decades should instead emphasize nature-based methods and improved soil carbon sequestration—technologies with significantly lower energy demands and clearer short-term scalability. The belief that we can vacuum enough CO2 out of the atmosphere to reach 2050 goals should be abandoned, and more aggressive decarbonization scenarios driven through.


The Neuroscience of Dopamine: How to Triumph Over Constant Wanting


Michael Long is not the typical neuroscience guy. He was trained as a physicist, but is primarily a writer. He coauthored the international bestseller The Molecule of More. As a speechwriter, he has written for members of Congress, cabinet secretaries, presidential candidates, and Fortune 10 CEOs. His screenplays have been performed on most New York stages. He teaches writing at Georgetown University.

What’s the big idea?

Dopamine is to blame for a lot of your misery. It compels us to endlessly chase more, better, and greater—even when our dreams have come true. Thanks to dopamine, we often feel restless and hopeless. So no, maybe it’s not quite accurate to call it the “happiness” molecule, but it has gifted humans some amazing powers. Dopamine is the source of imagination, creativity, and ingenuity. There are practical ways to harness the strengths of our dopamine drives while protecting and nurturing a life of consistent joy.

Below, Michael shares five key insights from his new book, Taming the Molecule of More: A Step-by-Step Guide to Make Dopamine Work for You. Listen to the audio version—read by Michael himself—in the Next Big Idea App.


1. Dopamine is not the brain chemical that makes you happy.

Dopamine makes you curious and imaginative. It can even make you successful, but a lot of times it just makes you miserable. That’s because dopamine motivates you to chase every new possibility, even if you already have everything you want. It turns out that brain evolution hasn’t caught up with the evolution of the world.

For early humans, dopamine ensured our survival by alerting us to anything new or unusual. In a world with danger around every corner and resources hard to acquire, we needed an early warning system to motivate us even more. Dopamine made us believe that once we got the thing we were chasing, we’d be safer, happier, or more satisfied. That served humans well, until it didn’t.

Now that we’ve tamed the world, we don’t need to explore every new thing, but dopamine is still on duty, and it works way out of proportion to the needs of the modern world. Since self-discipline has a short shelf life, I share proven techniques that don’t rely on willpower alone.

2. Dopamine often promises more than reality can deliver.

When we find ourselves obsessing over social media or the news, or shopping excessively, we feel edgy and restless. This is because dopamine floods us with anticipation and urgency. We desperately scroll for the next hit, searching for the latest story, or watching the porch for that next Amazon package. As this anticipation becomes a normal way of living, the rest of life starts to feel dull and flat. That restarts the cycle of chasing what we think will make us happy. Then we get it, and when it doesn’t make us happy, we experience a letdown, and that makes us restless all over again.

Here’s how that works for love and romance. When we go on date after date and can’t find the right person or a long-term relationship gets stale, we start to feel hopeless. The dopamine chase has so raised our expectations about reality that we no longer enjoy the ordinary. Now we’re expecting some perfect partner, and we won’t find them because they don’t exist. Fight back with three strategies:

  • Rewire your habits to ditch the chase.
  • Redirect your focus to the here and now.
  • Rebuild meaning, so life feels more like it matters.

I describe specific ways to do this through simple planning, relying more on friendships, doing a particular kind of personal assessment, and there’s even a little technology involved that you wouldn’t expect.

3. Dopamine is the source of imagination.

The dopamine system has three circuits. The first has only a little to do with behavior and feeling, so we’ll set that one aside. The second circuit (that early warning system) is called the desire dopamine system because it plays on our desires. The third system is very different. It’s called the control system, and it gives us an ability straight out of science fiction: mental time travel. You can create in your mind any possible future in as much detail as you like and investigate the results without lifting a finger.

We do this all the time without realizing that’s what it is. Little things like figuring out where to go for lunch: we factor in traffic, how long we’ll have to wait for a table, think over the menu, and game it all out to decide where to go. But this system also lets us imagine far more consequential mental time travel, figuring out the best way to build a building, design an engine, or travel to the moon.

“Dopamine really is the source of creativity and analytical power that allows us to create the future.”

The dopamine control circuit lets us think in abstractions and play out various plans using only our minds. That means not only can we imagine a particular future, but we can also imagine entire abstract disciplines, come to understand them, and make use of them in the real world based on what we thought about. Fields like chemistry, quantum mechanics, and number theory exist because of controlled dopamine. Dopamine really is the source of creativity and analytical power that allows us to create the future. Dopamine brings a lot of dissatisfaction to the modern world, but we wouldn’t have the modern world without dopamine.

4. You’re missing out on the little things.

When my best friend died at age 39, the speaker at his funeral said, “You may not remember much of what you did with Kent, but it’s okay because it happened.”

I did not know what that could mean, but years later, while writing this book, I got it. We don’t live life just to look back on it. The here and now ought to be fun. You may not remember it all, but while it’s happening, enjoy it. That requires fighting back against dopamine because it’s always saying, Never mind what’s in front of you, think about what might be. When Warren Zevon was at the end of his life, David Letterman asked him what he’d learned. Warren said, “Enjoy every sandwich.”

5. A satisfying life requires meaning, and there’s a practical way to find it.

Even if you fix every dopamine-driven problem in your life, you may still feel like something is missing. To find a satisfying balance between working for the future and enjoying the here and now, we must choose a meaning for life and work toward it as we go.

“If you’re making life better for others with something you do well and enjoy, the days feel brighter and life acquires purpose.”

Is it possible to live in the moment, anticipate the future, and have it add up to something? The psychiatrist Viktor Frankl said we need to look beyond ourselves because that’s where a sense of purpose begins. Aristotle gave us a simple formula for taking pleasure in the present, finding a healthy anticipation for the future, and creating meaning. He said it’s found where three things intersect: what we like to do, what we’re good at, and what builds up the world beyond ourselves. Things like working for justice, making good use of knowledge, or simply living a life of kindness and grace.

What you do with your life doesn’t have to set off fireworks, and you don’t have to make history. You can be a plumber, a mail carrier, or an accountant. I’m a writer. I like what I do. I seem to be pretty good at it, and it helps people. The same can be true if you repair the highway, fix cars, or serve lunch in a school cafeteria. If you’re making life better for others with something you do well and enjoy, the days feel brighter and life acquires purpose. Life needs meaning, and that’s the last piece of the puzzle in dealing with dopamine and taming the molecule of more.



How giant concrete balls on ocean floors could store renewable energy


In an effort to reduce the use of precious land to build renewable energy storage facilities, the Fraunhofer Institute has been cooking up a wild but plausible idea: dropping concrete storage spheres down to the depths of our oceans.

Since 2011, the StEnSea (Stored Energy in the Sea) project has been exploring the possibilities of using the pressure in deep water to store energy in the short-to-medium term, in giant hollow concrete spheres sunken into seabeds, hundreds of feet below the surface.

An empty sphere is essentially a fully charged storage unit. Opening its valve enables water to flow into the sphere, and this drives a turbine and a generator that feed electricity into the grid. To recharge the sphere, water is pumped out of it against the surrounding water pressure using energy from the grid.

Each hollow concrete sphere measures 30 ft (9 m) in diameter, weighs 400 tons, and will be anchored to the sea floor at depths of 1,970 - 2,625 ft (600 - 800 m) for optimal performance.

Fraunhofer has previously tested a smaller model in Europe's Lake Constance near the Rhine river, and is set to drop a full-size 3D-printed prototype sphere to the seabed off Long Beach near Los Angeles by the end of 2026. It's expected to generate 0.5 megawatts of power, and have a capacity of 0.4 megawatt-hours. For reference, that should be enough to power an average US household for about two weeks.
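The energy stored is set by the hydrostatic pressure at depth times the volume of water the empty sphere can admit. A rough ideal-case sketch with the dimensions above (ignoring wall thickness and round-trip losses, which is why it lands above the quoted 0.4 megawatt-hours):

    # Ideal energy stored by an evacuated sphere at depth: E = pressure x volume.
    import math

    rho, g = 1025, 9.81        # seawater density (kg/m^3), gravitational acceleration (m/s^2)
    depth_m = 700              # middle of the 600-800 m target range
    diameter_m = 9.0           # outer diameter; usable internal volume is smaller

    pressure_pa = rho * g * depth_m                          # ~7.0 MPa
    volume_m3 = (4 / 3) * math.pi * (diameter_m / 2) ** 3    # ~382 m^3
    energy_mwh = pressure_pa * volume_m3 / 3.6e9             # joules to MWh
    print(f"Ideal capacity: ~{energy_mwh:.2f} MWh")          # ~0.75 MWh before losses and wall volume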

The bigger goal is to test whether this tech can be expanded to support larger spheres with a diameter of nearly 100 ft (30 m). Fraunhofer researchers estimate StEnSea has a massive global storage potential of 817,000 gigawatt-hours in total – enough to power every one of approximately 75 million homes across Germany, France, and the UK put together for a year.

The institute estimates storage costs at around 5.1 US cents (4.6 euro cents) per kilowatt-hour, and investment costs at $177 (EUR 158) per kilowatt-hour of capacity — based on a storage park with six spheres, a total power capacity of 30 megawatts, and a capacity of 120 megawatt-hours.

According to Fraunhofer, StEnSea spherical storage is best suited for stabilizing power grids with frequency regulation support or operating reserves, and for arbitrage. The latter refers to buying electricity at low prices and selling it at high market prices – which grid operators, utilities providers, and power trading companies can engage in.

Ultimately, StEnSea could rival pumped storage as a way to store surplus grid electricity, with an obvious advantage: it doesn't take up room on land. Plus, pumped storage only really works when you have two reservoirs at different elevations, with reversible pump-turbines moving water between them. While pumped storage is cheaper to run and slightly more efficient over an entire storage cycle, StEnSea can potentially be installed across several locations worldwide to support immense storage capacity.

The US Department of Energy has invested $4 million into the project, so it will be keen to see how the 2026 pilot plays out off the California coast.

If you enjoy discovering all the weird ways in which we produce and store energy, see how falling rain can generate electricity, and get a closer look at plans to turn millions of disused mines around the world into massive underground batteries.

Source: Fraunhofer IEE
