Energy, data, environment

How to make the hardest choices of your life

Vox

Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. It’s based on value pluralism — the idea that each of us has multiple values that are equally valid but that often conflict with each other. To submit a question, fill out this anonymous form. Here’s this week’s question from a reader, condensed and edited for clarity:

I’m soon to be a part of the legal profession. I went to law school to advocate for marginalized populations who seldom have their voices heard — people who are steamrolled by unethical landlords, employers, corporations, etc. I will clerk after law school, and then I’ll encounter my first major fork in the road: whether I pursue employment in a corporate firm or nonprofit/government. Corporate firms, ultimately, serve profitable clients, sometimes to the detriment of marginalized populations. Corporate firms also pay significantly better. Nonprofit or government work serves the populations I want to work for and alongside, but often pays under the area median income.

I’ll be 32 by the time I reach this fork, and I don’t know what to do. I’m extremely fortunate in that I won’t have law school debt — I was on a full ride. Still, I’m not “flush.” I want to buy a house one day, have some kids with my partner, feel financially secure enough to do so. I also want to have a morally congruent career and not enable (what I consider) systems of oppression. What do I do?

Dear Fork in the Road,

Your question reminds me of another would-be lawyer: a very bright American woman named Ruth Chang. When she was graduating from college, she felt torn between two careers: Should she become a philosopher or should she become a lawyer?

She loved the learning that life in a philosophy department would provide. But she’d grown up in an immigrant family, and she worried about ending up unemployed. Lawyering seemed like the financially safe bet. She got out some notepaper, drew a line down the middle, and tried to make a pro/con list that would reveal which was the better option.

But the pro/con list was powerless to help her, because there was no better option. Each option was better in some ways and worse in others, but neither was better overall.

So Chang did what many of us do when facing a hard choice: She chose the safe bet. She became a lawyer. Soon enough, she realized that lawyering was a poor fit for her personality, so she made a U-turn and became — surprise, surprise — a philosopher. And guess what she ended up devoting several years to studying? Hard choices! Choices like hers. Choices like yours. The kind where the pro/con list doesn’t really help, because neither option is better on balance than the other.

Here’s what Chang came to understand about hard choices: It’s a misconception to think they’re hard because of our own ignorance. We shouldn’t think, “There is a superior option, I just can’t know what it is, so the best move is always to go with the safer option.” Instead, Chang says, hard choices are genuinely hard because no best option exists.

But that doesn’t mean the two options are equally good. If they were, you could decide by just flipping a coin, because it really wouldn’t matter which you chose. But can you imagine ever choosing your career based on a coin toss? Or flipping a coin to choose whether to live in the city or the country, or whether to marry your current partner or that ex you’ve been pining for?

Of course not! We intuitively sense that that would be absurd, because we’re not simply choosing between equivalent options.

So what’s really going on? In a hard choice, Chang argues, we’re choosing between options that are “on a par” with each other. She explains:

When alternatives are on a par, it may matter very much which you choose. But one alternative isn’t better than the other. Rather, the alternatives are in the same neighborhood of value, in the same league of value, while at the same time being very different in kind of value. That’s why the choice is hard.

To concretize this, think of the difference between lemon sorbet and apple pie. Both taste extremely delicious — they’re in the same league of deliciousness. The kind of deliciousness they deliver, however, is different. It matters which one you choose, because each will give you a very different experience: The lemon sorbet is delicious in a tart and refreshing way, the apple pie in a sweet and comforting way.

Now let’s consider your dilemma, which isn’t really about whether to do nonprofit work or to become a corporate lawyer, but about the values underneath: advocating for marginalized populations on the one hand, and feeling financially secure enough to raise a family on the other. Both of these values are in the same league as each other, because each delivers something of fundamental value to a human life: living in line with moral commitments or feeling a sense of safety and belonging. That means that no matter how long you spend on a pro/con list, the external world isn’t going to supply reasons that tip the scales. Chang continues:

When alternatives are on a par, the reasons given to us — the ones that determine whether we’re making a mistake — are silent as to what to do. It’s here in the space of hard choices that we get to exercise our normative power: the power to create reasons for yourself.

By that, Chang means that you have to put your own agency into the choice. You have to say, “This is what I stand for. I’m the kind of person who’s for X, even if that means I can’t fulfill Y!” And then, through making that hard choice, you become that person.

So ask yourself: Who do you want to be? Do you want to be the kind of person who serves profitable clients, possibly to the detriment of marginalized people, in order to be able to provide generously for a family? Or do you want to advocate for those who most need an advocate, even if it means you can’t afford to own property or send your kids to the best schools?

What is more important to you? Or, to ask this question in a different way: What kind of person would you want your future children to see you as? What legacy do you want to leave?

Only you can make this choice and, by making it, choose who you are to be.

I know this sounds hard — and it is! But it’s good-hard. In fact, it’s one of the most awesome things about the human condition. Because if there were always a best alternative to be found in every choice you faced, you would be rationally compelled to choose that alternative. You would be like a marionette on the fingers of the universe, forced to move this way, not that.

But instead, you’re free — we’re free — and that is a beautiful thing. Because we get the precious opportunity to make hard choices, Chang writes, “It is not facts beyond our agency that determine whether we should lead this kind of life rather than that, but us.”

Bonus: What I’m reading

  • Chang’s paper “Hard Choices” is a pleasure to read — but if you want an easier entry point into her philosophy, check out her TED talk or the two cartoons that she says summarize her research interests. I cannot stop thinking about the cartoon showing a person pulling their own marionette strings.
  • In the AI world, when researchers think about how to teach an AI model to be good, they’ve too often resorted to the idea of inculcating a single ethical theory into the model. So I’m relieved to see that some researchers in the field are finally taking value pluralism seriously. This new paper acknowledges that it’s important to adopt an approach that “does not impose any singular vision of human flourishing but rather seeks to prevent sociotechnical systems from collapsing the diversity of human values into oversimplified metrics.” It even cites our friend Ruth Chang! We love to see it.
  • Nobel-winning Polish poet Wisława Szymborska has a witty poem, “A Word on Statistics,” that asks how many of us, out of every hundred people, exhibit certain qualities. For example: “those who always know better: fifty-two. Unsure of every step: almost all the rest.” It’s a clever meditation on all the different kinds of people we could choose to become.

Why Europe’s train Wi-Fi never works


Europe’s new media freedom law has kicked in. Will it make any difference?

Media experts fear the law may be ignored as illiberal and populist parties erode media independence.

Aug 7 · 5 min read

France burns TikTok over tan lines trend

After the tech company caved to French pressure to ban “SkinnyTok,” politicians are now kicking up a fuss over the app’s newest obsession — sunburn.

Aug 4 · 2 min read

Vague trade deal allows new US attacks on EU tech rules

With both sides claiming victory, Brussels may have to tread carefully when flexing its regulatory muscle against U.S. Big Tech.

Jul 31 · 6 min read

How von der Leyen’s no-confidence vote fueled Russian propaganda

A confidential study found that a known Russian disinformation network had ramped up its posts by 60 percent while pushing the narrative that the European Commission president was “toxic.”

Jul 21 · 5 min read


The truth about the Treasury’s Green Book


How should the UK government choose which projects to invest in? How can it avoid making catastrophic mistakes? Labour’s new 10-year Infrastructure Strategy, published in June, shows the immense amount of public investment our economy needs and the sacrifices that will have to be made to fund it. Meanwhile, as the government sets out its noble intentions, HS2 is costing the Treasury more than £7 billion a year. 

In June, the CEO and the responsible minister for HS2 told parliament that the unfinished high-speed rail project is over-specified, that costs are out of control and that construction is running years late. The most beneficial branches of HS2—which would have helped northern cities—have been withdrawn. What is left will duplicate existing railway lines.

When HS2 was first proposed in 2009, the appraisal, carried out according to the Treasury’s method, suggested that the benefits of the project might justify its vast costs. It also found that HS2 was a risky undertaking that would take many years to produce any return. It turns out that the costs were understated and the benefits have withered. If spent on alternative public investments, the money invested in HS2 would almost certainly give a much better return. With demands for growth-enhancing investment outstripping the limited supply of public funds, it is particularly important to get this right. We cannot afford to go on wasting so much resource on unproductive initiatives for which there is lots of hand-waving aspiration, but not enough supporting analysis.

But the Treasury does have a method for providing just such analysis. The department has a rainbow of official “books”, each with its own purpose: Red, Blue, Magenta, Aqua, Orange and now Teal. Not forgetting the Pink Book, which records statistics on the UK’s balance of international trade. We all watch the chancellor delivering budgets and spending reviews, but few study these supporting documents. That, however, is where the substance—and the devilish detail—is to be found.

HS2 was recommended for investment under the methodology of the Green Book. This particular Treasury book provides guidance on how to appraise spending on policies, programmes and projects. All government departments are supposed to follow it. In principle—if not always in practice—it determines what taxpayers’ money gets spent on. At the recent Spending Review, the Treasury published its audit of the Green Book, which the chancellor commissioned in January.   

The Green Book has, in the past, had a bad rap. There is a common misperception that it is only about cost-benefit analysis; that it is a rigid and technocratic set of pseudo-scientific rules; that it gives undue weight to arbitrary monetary valuations; that it misses many important social dimensions of public policy; and that it is incapable of dealing with large, “transformational” proposals.

When lobbyists and politicians proclaim which investments are the right ones, they can be disappointed when Treasury funding is not forthcoming. Often, they blame the Green Book. One argument goes that looking at value-for-money tends to show good returns to investment in productive, high-income regions (such as London and the southeast) and poor returns in regions that need help (such as the north). But that is a case of “shooting the messenger”. 

The results of the audit commissioned by the chancellor are thoroughly sensible. In essence, the audit found that the Green Book should be tweaked (naturally) but that in general it is just fine. The problems come not from the book’s analysis, but in the ways it can be used and misused.  

The Green Book dates to the 1960s, when the UK Ministry of Transport introduced economic evaluation to help it select projects for investment. This was widely admired, and versions of the Green Book have been adopted in one form or another in many overseas administrations. It has been refined and developed as the underlying scientific disciplines have evolved. It was last updated in 2022.

This is how it works: the Green Book sets out a framework for the considerations to be made when deciding on government investment—a “five case model”. The cost-benefit analysis, one of the five cases, is the economic dimension. It asks, as the Green Book itself says: “What is the net value to society (the social value) of the intervention compared to continuing with business as usual?” The second case is commercial: “Can a realistic and credible commercial deal be struck?” (meaning, who bears the risks for the investment and what are the proposed procurement arrangements?). The third case is the financial aspect. The book asks: “What is the impact of the proposal on the public sector budget in terms of the total cost of both capital and revenue?” The fourth case relates to project management: “Are there realistic and robust delivery plans?”

These four sit with the fifth, strategic question: “What is the case for change, including the rationale for intervention? What is the current situation? What is to be done? What outcomes are expected? How do these fit with wider government policies and objectives?”

The reputation of the case that deals with economic value for money has been damaged because too much is expected of it. The question should not be “is it perfect?” but rather “is it useful?” As Rachel Reeves’s audit points out, cost-benefit analysis is certainly not expected to grind out decisions like a machine. Sensibly used, the five-case model puts the weighing of costs and benefits in a broader context.

Robust analysis of the government’s potential investments will expose assumptions for scrutiny. It will bring the best available evidence to bear, facilitating comparisons within and across departments of state. If done consistently, cost-benefit analysis can be helpful in guiding choices between, say, a road safety measure and spending that same public money on reducing risk of death in the health service. 

Looking at costs and benefits systematically will protect against economically incoherent elephant traps, of which there are a number. It will help to stop the proponents of various schemes “double counting”, which is surprisingly difficult to avoid. A famous example is to claim financial benefits for the lower rate of crowding and faster journeys to a location achieved by introducing a new road or railway, as well as for an increase in the value of the land at that location. But the second is a consequence of the first; one can claim one or the other, but not both.  

At the very least a Green Book appraisal forces proponents to write down and justify estimates of the basics, such as how much a project might cost, how many people might use it over the years and what the risks of the undertaking are. It is surprising how often people will assert that a project is a “good idea” without knowing these details. Often, a project’s champions will reel off an unquantified list of benefits from a scheme, giving little regard to how much public money it is going to cost, as if resources were free, and as if they could not be used in other, perhaps more beneficial, projects, or simply left in taxpayers’ pockets.

One criticism of cost-benefit analysis is that it can yield inequitable results, but there are ways of compensating for this, including the use of a system of weighting. The chancellor’s audit recommends using unweighted analysis, which makes no pretence of dealing with equity, and then discussing equity as part of the strategic case and policy more generally.

A fair charge is that cost-benefit analysis is less plausible for large, “transformational” schemes. But this is a reflection of the fact that large schemes are difficult to analyse, requiring a lot of data. The issue is whether an imperfect appraisal using the best available evidence is more helpful than the alternative—speculation.  

No cost-benefit analysis should be treated as definitive. But a poor estimated rate of benefit, in return for the costs at risk, should be heeded as a warning that there may be wiser ways of spending public money.  That was the view of the unjustly neglected independent Eddington Transport Study (2006, for HM Treasury and Department for Transport). For these reasons Eddington was sceptical of high-speed rail in UK conditions. How right he was.

The recent review of the Green Book rightly promises more thinking on the knotty issue of how to value the future. At the point of decision on a project, how should the government account for a benefit or cost that is expected to accrue in, for instance, 20 years’ time? Typically, once serious spending has started on a project, delays in completion are catastrophic for the overall social value. That is not a weakness of cost-benefit analysis; it is a reflection of a fundamental dilemma. To what extent should any society sacrifice the benefits of current consumption so that investment can be made to deliver benefits in the future?
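The Green Book’s standard tool here is discounting: future pounds are worth less than present ones. Here is a minimal sketch, assuming the Green Book’s 3.5 percent social time preference rate; the cash flows and the delay scenario are invented purely for illustration.

```python
# A minimal sketch of Green Book-style discounting. The 3.5% social time
# preference rate is the Green Book's standard rate; every cash flow
# below is hypothetical.

def present_value(amount_m: float, year: int, rate: float = 0.035) -> float:
    """Value today of amount_m (in £m) arriving `year` years from now."""
    return amount_m / (1 + rate) ** year

# £100m of benefit today versus the same benefit arriving in year 20:
print(f"now:     £{present_value(100, 0):.1f}m")   # £100.0m
print(f"year 20: £{present_value(100, 20):.1f}m")  # ~£50.3m

# Why completion delays are so costly: a £10m-a-year benefit stream
# running from year 10 through year 39, versus the same stream slipping
# two years to start in year 12.
on_time = sum(present_value(10, t) for t in range(10, 40))
delayed = sum(present_value(10, t) for t in range(12, 42))
print(f"on time: £{on_time:.0f}m   delayed: £{delayed:.0f}m")
```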

The review made some other sensible recommendations. A committee will be established to develop guidance for cases where several projects are interdependent. An editorial shortening of the Green Book and its many supporting documents is promised; it is asking a lot of national or local government officials, who may or may not be trained economists, to get to grips with the existing mass of material.

There is also a promise of more transparency, through publishing all the business cases for major projects and programmes. That can only improve the standard to which the work is done, and it will facilitate holding government to account for the decisions it makes—including its justification for proceeding with projects that do not appear to represent good value for money. The problem in the past has not been too much reliance on mechanistic cost-benefit analysis. It has been a failure to show that available information on a proposed project has been given its due weight.

Success in the policy of investing to grow requires hard-headed analysis of the kind set out in the (revised) Green Book. This is how government can select the right projects for investment. We can no longer afford to squander resources on aspirational projects that do not have a firm, evidence-based justification.


The Imperfectionist: Navigating by aliveness


How Much Energy Does It Take To Think? | Quanta Magazine


Studies of neural metabolism reveal our brain’s effort to keep us alive and the evolutionary constraints that sculpted our most complex organ.

Introduction

You’ve just gotten home from an exhausting day. All you want to do is put your feet up and zone out to whatever is on television. Though the inactivity may feel like a well-earned rest, your brain is not just chilling. In fact, it is using nearly as much energy as it did during your stressful activity, according to recent research.

Sharna Jamadar, a neuroscientist at Monash University in Australia, and her colleagues reviewed research from her lab and others around the world to estimate the metabolic cost of cognition — that is, how much energy it takes to power the human brain. Surprisingly, they concluded that effortful, goal-directed tasks use only 5% more energy than restful brain activity. In other words, we use our brain just a small fraction more when engaging in focused cognition than when the engine is idling.

It often feels as though we allocate our mental energy through strenuous attention and focus. But the new research builds on a growing understanding that the majority of the brain’s function goes to maintenance. While many neuroscientists have historically focused on active, outward cognition, such as attention, problem-solving, working memory and decision-making, it’s becoming clear that beneath the surface, our background processing is a hidden hive of activity. Our brains regulate our bodies’ key physiological systems, allocating resources where they’re needed as we consciously and subconsciously react to the demands of our ever-changing environments.

“There is this sentiment that the brain is for thinking,” said Jordan Theriault, a neuroscientist at Northeastern University who was not involved in the new analysis. “Where, metabolically, [the brain’s function is] mostly spent on managing your body, regulating and coordinating between organs, managing this expensive system which it’s attached to, and navigating a complicated external environment.”

The brain is not purely a cognition machine, but an object sculpted by evolution — and therefore constrained by the tight energy budget of a biological system. Thinking may make you feel tired, then, not because you are out of energy, but because you have evolved to preserve resources. This study of neural metabolism, when tied to research on the dynamics of the brain’s electrical firing, points to the competing evolutionary forces that explain the limitations, scope and efficiencies of our cognitive capabilities.

The Cost of a Predictive Engine

The human brain is incredibly expensive to run. At roughly 2% of body weight, the organ gorges on 20% of our body’s energetic resources. “It’s hugely metabolically demanding,” Jamadar said. For infants, that number is closer to 50%.

The brain’s energy comes in the form of the molecule adenosine triphosphate (ATP), which cells make from glucose and oxygen. A tremendous expanse of thin capillaries — an estimated 400 miles of vascular wiring — weaves through brain tissue to carry glucose- and oxygen-rich blood to neurons and other brain cells. Once synthesized within cells, ATP powers communication between neurons, which enact the brain’s functions. Neurons carry electrical signals to their synapses, which allow the cells to exchange molecular messages; the strength of a signal determines whether they will release molecules (or “fire”). If they do, that molecular signal determines whether the next neuron will pass on the message, and so on. Maintaining what are known as membrane potentials — stable voltages across a neuron’s membrane that ensure that the cell is primed to fire when called upon — is known to account for at least half of the brain’s total energy budget.

Measuring ATP directly in the human brain is highly invasive. So, for their paper, Jamadar’s lab reviewed studies, including their own findings, that used other estimates of energy use — glucose consumption, measured by positron-emission tomography (PET), and blood flow, measured by functional magnetic resonance imaging (fMRI) — to track differences in how the brain uses energy during active tasks and rest. When performed simultaneously, PET and fMRI can provide complementary information on how glucose is being consumed by the brain, Jamadar said. It’s not a complete measure of the brain’s energy use because neural tissues can also convert some amino acids into ATP, but the vast majority of the brain’s ATP is produced by glucose metabolism.

Jamadar’s analysis showed that a brain performing active tasks consumes just 5% more energy compared to a resting brain. When we are engaged in an effortful, goal-directed task, such as studying a bus schedule in a new city, neuronal firing rates increase in the relevant brain regions or networks — in that example, visual and language processing regions. This accounts for that extra 5%; the remaining 95% goes to the brain’s base metabolic load.

Researchers don’t know precisely how that load is allocated, but over the past few decades, they have clarified what the brain is doing in the background. “Around the mid-’90s we started to realize as a discipline [that] actually there is a whole heap of stuff happening when someone is lying there at rest and they’re not explicitly engaged in a task,” she said. “We used to think about ongoing resting activity that is not related to the task at hand as noise, but now we know that there is a lot of signal in that noise.”

Much of that signal is from the default mode network, which operates while we’re resting or otherwise not engaged in apparent activity. This network is involved in the mental experience of drifting between past, present and future scenarios — what you might make for dinner, a memory from last week, some pain in your hip. Additionally, beneath the iceberg of awareness, our brains are keeping track of the mosaic of physical variables — body temperature, blood glucose level, heart rate, respiration, and so on — that must remain stable, in a state known as homeostasis, to keep us alive. If any of them stray too far, things can get bad pretty quickly.

Theriault speculates that most of the brain’s base metabolic load goes toward prediction. To achieve its homeostatic goals, the brain needs to always be planning for what comes next — building a sophisticated model of the environment and how changes might affect the body’s biological systems. Prediction, rather than reaction, Theriault said, allows the brain to dole out resources to the body efficiently.

The Brain’s Evolutionary Constraints

A 5% increased energy demand during active thought may not sound like much, but in the context of the entire body and the energy-hungry brain, it can add up. And when you consider the strict energetic constraints our ancestors had to deal with, weariness at the end of a taxing day suddenly makes a lot more sense.

“The reason you are fatigued, just like you are fatigued after physical activity, isn’t because you don’t have the calories to pay for it,” said Zahid Padamsey, a neuroscientist at Weill Cornell Medicine-Qatar, who was not involved in the new research. “It is because we have evolved to be very stingy systems. … We evolved in energy-poor environments, so we hate exerting energy.”

The modern world, in which calories are relatively abundant for many people, contrasts starkly with the conditions of scarcity that Homo sapiens evolved in. That 5% increase in burn rate, over 20 days of persistent, active, task-related focus, can amount to a whole day’s worth of cognitive energy. If food is hard to come by, it could mean the difference between life and death.
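The arithmetic behind that claim is easy to check. A rough sketch, assuming a 2,000-kilocalorie daily intake; the 20 percent brain share and the 5 percent task premium are the figures discussed above:

```python
# Back-of-envelope check on the "20 days of focus ≈ one day's worth of
# cognitive energy" claim. The 2,000 kcal intake is a rough assumption;
# the 20% brain share and 5% task premium come from the text above.

daily_intake_kcal = 2000              # rough adult intake (assumption)
brain_share = 0.20                    # brain's cut of the energy budget
task_premium = 0.05                   # extra cost of focused cognition

brain_kcal_per_day = daily_intake_kcal * brain_share         # ~400 kcal
extra_per_focused_day = brain_kcal_per_day * task_premium    # ~20 kcal

days = brain_kcal_per_day / extra_per_focused_day
print(f"{days:.0f} focused days ≈ one full day of brain energy")  # 20
```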

“This can be substantial over time if you don’t cap the burn rate, so I think it is largely a relic of our evolutionary heritage,” Padamsey said. In fact, the brain has built-in systems to prevent overexertion. “You’re going to activate fatigue mechanisms that prevent further burn rates,” he said.

To better understand these energetic constraints, in 2023 Padamsey summarized research on certain peculiarities of electrical signaling that indicate an evolutionary tendency toward energy efficiency. For one thing, you might imagine that the faster you transmit information, the better. But the brain’s optimal transmission rate is far lower than might be expected.

Theoretically, the top speed for a neuron to feasibly fire and send information to its neighbor is 500 hertz. However, if neurons actually fired at 500 hertz, the system would become completely overwhelmed. The optimal information rate — the fastest rate at which neurons can still distinguish messages from their neighbors — is half that, or 250 hertz.

Our neurons, however, have an average firing rate of 4 hertz, 50 to 60 times less than what is optimal for information transmission. What’s more, many synaptic transmissions fail: Even when an electrical signal is sent to the synapse, priming it to release molecules to the next neuron, it will do so only 20% of the time.

That’s because we didn’t evolve to maximize total information sent. “We have evolved to maximize information transmission per ATP spent,” Padamsey said. “That’s a very different equation.” To send the maximum amount of information for as little energy as possible (bits per ATP), the optimal neuronal firing rate is under 10 hertz.
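A toy model, not Padamsey’s own but a standard way to illustrate the trade-off, shows how the two objectives pull apart. Assume a neuron fires in 2-millisecond bins (the 500-hertz ceiling), that information per bin is the binary entropy of the firing probability, and that each bin costs a fixed upkeep plus a much larger per-spike cost; the 500-to-1 cost ratio is an assumption chosen for illustration:

```python
import numpy as np

# Toy model of energy-efficient coding: a neuron fires in 2 ms bins
# (a 500 Hz ceiling) with probability p per bin. Information per bin is
# the binary entropy H(p); energy per bin is upkeep plus a spike cost.
# The 500x spike-to-upkeep cost ratio is a hypothetical illustration.

BIN_RATE = 500.0                                   # bins per second
p = np.linspace(1e-4, 1 - 1e-4, 100_000)
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))   # bits per bin

# Maximizing raw information rate lands at p = 0.5, i.e., 250 Hz:
p_info = p[np.argmax(BIN_RATE * H)]
print(f"bits/s optimum:   {BIN_RATE * p_info:.0f} Hz")

# Maximizing bits per unit of energy pushes the rate far lower:
upkeep, spike_cost = 1.0, 500.0
p_eff = p[np.argmax(H / (upkeep + spike_cost * p))]
print(f"bits/ATP optimum: {BIN_RATE * p_eff:.0f} Hz")  # single-digit Hz
```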

Evolutionarily, the large, sophisticated human brain offered an unprecedented level of behavioral complexity — at a great energetic cost. This negotiation, between the flexibility and innovation of a large brain and the energetic constraints of a biological system, defines the dynamics of how our brain transmits information, the mental fatigue we feel after periods of concentration, and the ongoing work our brain does to keep us alive. That it does so much within its limitations is rather astonishing.


Innovation to Impact: Advancing Solid-State Cooling to Market

RMI

Introduction

As the world encounters another hot summer, cooling is becoming an even hotter topic. Cooling demand is skyrocketing, driven primarily by the Global South and fueled by rising income levels, population growth, urbanization, and increasing global temperatures.

For investors, a booming future market — with most AC purchases yet to be made — opens the opportunity to invest now in superior sustainable cooling solutions that will shape our future.

One such solution is solid-state cooling — a technology class with the potential to revolutionize the cooling industry. Why? Compared with the incumbent, century-old vapor-compression technology, solid-state cooling can offer improved efficiency, lower emissions and energy costs, and eliminate the need for super-polluting refrigerants.

In our first article of this series, we explained what solid-state cooling is, the promise it holds, and why it can be an important solution to the cooling challenge. In this article, we’ll dive into its market potential, market drivers, and what it will take to get these innovative cooling solutions to commercialization and scale.


The advantages of solid-state cooling: No refrigerants, higher performance ceilings, simpler systems

Since the advent of modern air conditioning in the late 19th century, vapor compression has remained the dominant technology powering the global cooling market. This refers to first compressing and condensing a refrigerant, releasing heat in the process, and then expanding and evaporating it to absorb heat and produce a cooling effect. Today, approximately 95 percent of all cooling equipment relies on the vapor compression cycle.

The efficiency improvements associated with this technology have been slow and incremental and are effectively capped by the “Carnot Limit” — determined by the temperatures of the hot and cold reservoirs used by the cooling system.
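For reference, the Carnot Limit follows from a one-line formula, COP_max = T_cold / (T_hot - T_cold), with temperatures in kelvin. A quick sketch; the formula is standard thermodynamics, while the room and outdoor temperatures are illustrative assumptions:

```python
# Carnot ceiling for a cooling system: COP_max = T_cold / (T_hot - T_cold),
# with temperatures in kelvin. The room/outdoor temperatures below are
# illustrative assumptions.

def carnot_cop(t_cold_c: float, t_hot_c: float) -> float:
    t_cold, t_hot = t_cold_c + 273.15, t_hot_c + 273.15
    return t_cold / (t_hot - t_cold)

# Keeping a 22°C room cool against 35°C outdoor air:
print(f"Carnot COP ceiling: {carnot_cop(22, 35):.1f}")  # ~22.7
# Real systems achieve only a fraction of this theoretical ceiling
# (the best vapor-compression units score roughly 5.5).
```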

Because they are not dependent on moving heat between reservoirs, solid-state technologies have already demonstrated much higher potential performance ceilings. The graphic below shows that some solid-state technologies can have a coefficient of performance (COP) above 10, almost double the COP of incumbent AC systems, where the best score is roughly 5.5.

The challenge, of course, is to translate this potential into reality. Innovators are working to do just that (i.e., to achieve system-level performance comparable to or higher than their vapor compression counterparts). The team at Pascal, for example, has recently demonstrated that barocaloric materials can deliver effective cooling and heating at pressure levels comparable to those used in conventional air conditioners.

This is an exciting breakthrough in terms of material performance, and it offers a potential pathway toward cooling systems that are energy efficient and easier to design and integrate over time with standard components. Similarly, thermoelectric systems can achieve precision cooling without moving parts — no pistons, compressors, or hydraulics — simplifying the components needed.

The other major advantage of solid-state solutions is that they do away with refrigerants, which often have very high global warming potential (GWP) — for example, R-134a, one of the most common refrigerants used today, has a GWP 1,430 times that of carbon dioxide — and are increasingly subject to regulatory phase-outs.

In sum, while industry incumbents will likely continue to move the needle on vapor compression systems in response to regulatory and market pressure, solid-state cooling has the potential to outpace these improvements. There is step-change potential associated with its refrigerant-free operation, higher performance ceiling, and streamlined system design.


The potential market for solid-state cooling technologies is enormous.

The global cooling market is undergoing a dynamic transformation, creating unprecedented opportunities for sustainable cooling solutions, like solid-state technologies. The global active cooling sector, which includes air conditioning (AC), refrigeration, and mobile cooling, was valued at an estimated $633 billion in 2023 and is expected to top $1 trillion by 2050. Much of this growth will be driven by developing economies, which will comprise 60 percent of this demand, creating a $600 billion market in 2050 — more than doubling from its $272 billion size today.

The global cooling market is large enough and segmented enough that solid-state cooling startups could find sizeable beachheads (starting markets). Take two Third Derivative portfolio companies as examples. MIMiC, based in New York, is developing thermoelectric solid-state systems that can replace the standard packaged thermal AC (PTAC) units you often see in hotel rooms and many multifamily buildings. For US hotels alone, this could be a market worth more than $7 billion. Magnotherm, based in Darmstadt, Germany, is developing magnetocaloric refrigerators for supermarkets, grocery stores, and food and beverage retail. In Europe alone, this is an estimated $17 billion market. Even carving out a niche and targeting a few specialized segments in this market presents a multi-billion-dollar opportunity for young, innovative companies.

While the market for cooling solutions is booming overall, some dynamics are creating particularly favorable conditions for solid-state cooling. For example, there is a push in the regulatory landscape that may support the advancement of solid-state cooling technology. Most directly, regulations that tighten allowable refrigerant GWPs — largely being driven by the EU (building on the accelerating global effort to phase down high-GWP synthetic refrigerants)  — would benefit solid-state as it’s free of potent, high-GWP refrigerants.

Additionally, efficiency standards and incentives, including minimum energy performance standards (like Japan’s Top Runner program which sets performance standards for a range of appliances — labeling the most efficient AC and refrigeration models on the market, encouraging competition between companies to be the “Top Runner”), can support efficient solid-state cooling systems. Cities, states, countries, or regions with strong efficiency standards or incentives could become strong beachhead markets for solid-state cooling startups.

All in all, there is a perfect storm brewing to disrupt a global cooling market that has not witnessed a radical and environmentally sustainable innovation for nearly a century — either through innovations in vapor compression systems or alternative approaches, like solid-state cooling.


To enter the mainstream, solid-state cooling still has challenges to overcome

As a nascent technology, solid-state cooling systems are still relatively scarce in the market. While the efficiency potential is significant, there remains a wide gap between having an efficient material and building an efficient, integrated system. Startups often face challenges when it comes to system integration — combining materials with components like heat exchangers, controllers, and power supplies, without significant losses in efficiency. For example, the most efficient elastocaloric materials right now have a material efficiency coefficient of performance (COP) above 10, but once fully integrated into a cooling system, COPs will likely be more comparable with vapor compression systems to start (a COP of around 3).

Material fatigue is another critical hurdle for some approaches, namely barocaloric and elastocaloric, which generate cooling through the repetitive stretching or compression of materials. Consumers expect their air conditioners and refrigerators to last 15 years or more, and solid-state systems must demonstrate long-term reliability under continuous cycling.

Supply chain limitations — particularly for magnetocaloric systems, which rely on rare earth materials for permanent magnets — pose additional challenges. However, several solid-state startups are proactively working to leverage existing supply chains for components, reducing supply chain risks and offering a pathway toward sustainability. AI can also play a role in material discovery, identifying new, more promising materials for solid-state cooling. One Third Derivative portfolio company, Matnex, is working on just this — identifying and scaling new materials using AI and machine learning — which could support solid-state innovators as well.

Above all, the most significant challenge facing solid-state cooling today is cost. Like many emerging technologies, solid-state systems will initially come at a premium price.

Solar panels, for example, were once over 100 times more expensive than they are today. With economies of scale, optimized manufacturing processes, and improvements in the cell technology itself, costs fell dramatically. The historical “learning rate” — the fall in costs associated with each doubling of production volumes —  for solar panels is 20 percent.
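That learning rate compounds quickly. A minimal sketch, with an arbitrary starting cost:

```python
# Learning-rate arithmetic: a 20% learning rate means each doubling of
# cumulative production cuts unit cost by 20%. Starting cost is arbitrary.

def cost_after_doublings(cost0: float, doublings: int,
                         learning_rate: float = 0.20) -> float:
    return cost0 * (1 - learning_rate) ** doublings

for n in (1, 5, 10):
    print(f"{n:2d} doublings: {cost_after_doublings(100.0, n):6.1f}")
# 10 doublings take a 100.0 unit cost down to ~10.7, roughly a 90% decline.
```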

Given the urgency of climate change and the global need for efficient, sustainable cooling, there are strong reasons to believe that market forces will drive adoption and therefore open a path for cost reductions for solid-state cooling technologies in the near future.

Some startups are already demonstrating cost effectiveness. For example, UK-based startup Anzen Walls, supported by Third Derivative ecosystem partner Carbon 13, is developing thermoelectric heat pumps for homes that target a price comparable to or lower than traditional heat pumps.

Some recent partnerships between solid-state cooling startups and original equipment manufacturers (OEMs) such as Carrier, Copeland, and Trane show opportunities to cut costs and make solid-state more affordable. OEMs are very experienced in system and cost optimization and have access to large-scale distribution networks that would rapidly streamline this process. If startups can demonstrate performance and reliability, OEM partners can help drive down costs while offering access to established distribution networks, manufacturing infrastructure, and customer relationships, thereby paving the pathway for commercialization and scaling of these technologies.


What are the pathways to market?

There is a collection of exciting startups in the solid-state cooling space. The typical path to market holds true for solid-state cooling as well:

  1. Refinement: continued product performance refinement with grant and VC support.
  2. Validation: startups will build pilot manufacturing facilities on their own to validate performance with real-world pilots. Alternatively, early partnerships with manufacturers build confidence and can open doors for future investment or even acquisition. This approach isn’t new to this sector. In 2020, Emerson acquired 7AC – a startup developing more efficient air conditioning technology through liquid desiccants – after collaborating with the company to commercialize the new technology.
  3. Demand: market interest indicated through demand signals. This is already appearing in the space – Walmart and IKEA have both committed to significantly reducing, or eliminating, high-GWP refrigerants.
  4. Partnerships: a faster, scalable path necessitates partnerships with manufacturers through licensing, direct sales of components, joint ventures, or acquisition to bring their innovations to market. Manufacturers in the cooling space are rather consolidated and very well-established – they not only hold major market shares, but they also have deep expertise in system design, cost reduction and market access. Additional partnerships with testing bodies will need to be established so that standards can be developed for solid-state cooling and integrated into existing standards.

Solid-state’s potential has not gone unnoticed—established manufacturers are actively monitoring and engaging with the space. For example, Carrier Ventures recently invested in elastocaloric startup Exergyn, while Copeland has backed thermoacoustic heat pump startup BlueHeart Energy. These moves signal growing industry confidence in solid-state technologies as the next frontier in sustainable cooling.


Solid-state’s right to win in the cooling market

Solid-state technologies are emerging as a potential frontrunner to disrupt the cooling market, but what gives solid-state a “right to win,” an unbeatable edge in the market? Some aspects are yet to be fully proven, but there are two exciting edges that solid-state offers: the elimination of potent refrigerants, and the potential for very high performance ceilings. If the performance is actualized (for example, achieving system COPs of at least 3), solid-state will likely have a right to win in certain beachhead markets, and it can use those footholds to scale beyond.

For investors, this presents a timely opportunity to place an early bet on a rapidly evolving technology with major promise. Interest from major OEMs signals strong industry momentum toward a new era of cooling. Looking ahead, two key areas to watch are how startups improve performance and drive down costs — with effective systems integration being key to both. We see a clear opportunity for early-stage capital — especially pre-seed and seed investments — to play a pivotal role in supporting startups as they scale and commercialize their innovations.

The authors would like to thank Blue Haven Initiative for funding this research, and Ankit Kalanki, Chetan Krishna, and Shruti Naginkumar Prajapati for their contributions.
