
How meditation deconstructs your mind

Vox

We’re laying out the latest science of what meditation does to your mind. The better we understand the mechanisms that different meditation practices share, the more meditation science can contribute to our broader understanding of human psychology.

This was first published in More to Meditation

More to Meditation is Vox’s five-day course on deepening your meditation practice. Sign up here!

More relevant for us non-scientists, we’ll get better at developing and fine-tuning styles of practice that can help us get the most out of whatever we’re looking for in taking up meditation. (It’s possible, after all, that there are improvements to be made on the instructions we received a few thousand years ago.)

There’s a lot to get into here, but if you walk away from this with anything, it should be that in the past few years, a breakthrough has begun sweeping across meditation research, delivering science’s first “general theory of meditation.” That means very exciting days — and more to the point, scientifically refined meditation frameworks and practices — are not too far ahead.

Don’t we already know what meditation is?

Over the last decade or two, the rise of mindfulness-related practices as a profitable industry has spread the most accessible forms of meditation — like short, guided stress-relief meditations, or gratitude journals — to millions of Americans.

Which is great — basic mindfulness practices that help us concentrate on the present are both relaxing and useful. But as psychotherapist Miles Neale, who coined the term “McMindfulness,” writes, if stress relief is all we take meditation to be, it’s “like using a rocket launcher to light a candle.” Some meditation practices can help ease the anxious edges of modern life. Others can change your mind forever.

One way to pursue happiness is to try and fill your experience with things that make you happy — loving relationships, prestige, kittens, whatever. Another is to change the way your mind generates experience in the first place. This is where more advanced meditation focuses. It operates on our deep mental habits so that well-being can more naturally arise in how we experience anything at all, kittens or not.

But the deeper terrain of meditation is often shrouded in hazy platitudes. You may hear that meditation is about “awakening,” “liberation,” or jubilantly realizing the inherent emptiness of all phenomena, at which point you’d be forgiven for tuning out. Descriptions of more advanced meditation often sound … weird, and therefore, inaccessible or irrelevant to most people.

Part of my hope for this course is to change that. Even if you don’t want to join a monastery (I do not), there’s still a huge range of more “advanced meditation” practices to explore that go beyond the mainstream basic mindfulness stuff. Some can feel like melting into “a laser beam of intense tingly pleasurable electricity,” and ultimately change the way you relate to pleasure, like the jhānas. Others, like non-dual practices (which I’ll get into later), can plunge you into strange modes of consciousness full of wonder and insight that you might never have known were there.

Which might leave you wondering why it’s mindful relaxation that gets all the attention. For one thing, there’s how much time we imagine deeper meditation practices will take — we’ll get into that later in this course. Another obstacle blocking advanced meditation’s path into the mainstream is that a critical mass of Americans aren’t exactly itching to become full-on Buddhists. But if you turned to science instead of religion for guidance on these meditation practices in the past few decades, you’d mostly find a bunch of scattered neuroscience jargon that doesn’t all hang together.

Buddhism can paint a really elaborate picture of what’s going on with meditation, with ancient models of meditative development still being used today, like the four-path model. Science has struggled to do the same. We know some interesting but scattered things: Meditation makes parts of your brain grow thicker. It changes patterns of electrical activity in key brain networks. It raises the baseline of gamma wave activity. It shrinks your amygdala.

The problem, as Shamil Chandaria, a senior research fellow at the University of Oxford’s Centre for Eudaimonia and Human Flourishing, put it to me, is weaving it all together into a story that shows us the big picture. “In terms of all these neuroscience results,” Chandaria said, “there’s this problem of what does it all mean?”

In a pivotal 2021 paper by cognitive scientists Ruben Laukkonen and Heleen Slagter, that big picture — a model of how meditation affects the mind that can explain the effects of simple breathing practices and the most advanced transformations of consciousness alike — finally began coming together.

A general theory of meditation

Let’s start with plain language. Think of meditation as having four stages of depth, each with a corresponding style of practice: focused attention, open-monitoring, non-dual, and cessation.

Near the surface, “focused attention” practices help settle the mind. By default, our minds are usually snow globes in a constant frenzy. Our attention constantly jumps from one flittering speck to the next, and the storm of activity blocks our view of the whole sphere. By focusing attention on an object — the breath, repeating a mantra, the back of your thigh, how a movement feels in the body — we can train the mind to stop getting yanked around. With the mind settled on just one thing, it’s easier to see through the storm.

“Open-monitoring” practices help us get untangled from focusing on any particular thing happening in the mind, opening the aperture of our attention to notice the wider field of awareness that all those thoughts, feelings, and ideas arise and fall within.

Once you’ve settled the mind and gotten acquainted with the more spacious awareness beneath it, “non-dual” practices help you shift your mental center of gravity so that you identify with that expansive field of awareness itself, rather than everything that arises within it, as we normally do. (I know this probably sounds weird; we’ll get more into it later. Some things in meditation are irreducibly weird, which is part of what makes me think it’s worth paying attention to.)

And finally, for practitioners with serious meditation chops, you can go one step deeper, where even the field of non-dual awareness disappears. If you sink deep enough into the mind, you’ll find that it just extinguishes, like a candle flame blown out by a sudden gust of wind. That can happen for seconds at a time, called nirodhas in Theravada Buddhism, or it can last for days at a time, called nirodha-samāpatti, or cessation attainment.

[Illustration: a ladder with four rungs, labeled “focused attention,” “open monitoring,” “non-dual,” and “cessation.” Pete Gamlen for Vox]

You can think of this progression as four rungs on a ladder that lead from the surface of the mind all the way down to the bottom. Or, from the beginner stages of meditation, all the way through to the very advanced. You can place a huge variety of meditative practices — though not all — somewhere along this spectrum.

And just about everything that’s grown popular under the label of mindfulness is in that first group of focused-attention practices. The idea that meditation can make you “10 percent happier” is talking about these introductory practices that settle the mind.

But the idea that meditation can make you 10 times happier, like meditation teacher Shinzen Young claims, references the next stages: practices that open up once the mind begins to settle.

Once more, with science

Now, bear with me. We’re going to retell that story, but using Laukkonen and Slagter’s innovation — the general theory of meditation. The key to this framework is a theory that’s risen to dominate cognitive science in the past decade or so: predictive processing.

Predictive processing says that we don’t experience the world as it is, but as we predict it to be. Our conscious experience is a construction of layered mental habits acquired through past experiences. We don’t see the world through our eye sockets; we don’t hear the world through our ear canals. These all feed information into our brains, which conjure our experience of the world from scratch — like when we dream — except that in waking consciousness, they’re at least trying to match what they whip up in our experience to what might actually be going on in the world outside our skulls.

The building blocks for these conjured models of the world we experience — the predictive mind — are called “priors,” those beliefs or expectations based on the past. Priors run a spectrum from deep and ancestral to superficial and personal.

For example, say you ventured an opinion in front of your third grade class and everyone laughed. You might have formed a prior that assumes sharing your thoughts leads to ridicule. If that experience was particularly meaningful to you, it could embed deep in your predictive mind, shaping your behavior, and even perception of the world, for the rest of your life.

Similarly, our bodies know how to do some of their most basic functions — like maintaining body temperature around 98.6 degrees Fahrenheit — because we’ve inherited priors from our evolutionary history that holding our body in that range will keep us alive. According to predictive processing, consciousness is constructed via this hierarchy of priors like a house of cards.

With all that in place, science’s new meditation story can be put nice and short: Meditation deconstructs the predictive mind.

But hold on. It took billions of years for evolution to slowly, patiently build us these predictive minds. They’re one of the great marvels of biology. Why would we want to deconstruct them?

Well, evolution doesn’t care whether survival feels good. Conscious experience — as we know it — might be a really useful trick for adapting to our environments and achieving the goals that further life’s crusade against entropy and death. But natural selection cares about ensuring our bodies survive, not that we achieve happiness and well-being.

Which is why you often hear meditation teachers talking about “reprogramming” the mind. We don’t want to just leave the predictive mind in pieces. Again, it’s one of the most useful adaptations life on Earth has ever mustered. But in some departments, we might want to kindly thank evolution, while taking the reins and revising a bit of its work to make this whole business of living feel better.

“Precision weighting” is the volume knob on the predictive mind

Each step, from focused attention through to cessation, is a deeper deconstruction of the predictive mind. But “deconstructing” doesn’t mean, like, breaking it.

Instead, the key idea is “precision weighting,” which you can think of as the volume knob on each of the priors that make up your predictive mind. The higher the precision — or volume — assigned to something, the more attention your mind pays to it. The more your experience warps around it.

Deconstructing the mind means progressively turning down the volume on each layer of stacked priors, releasing the grip they ordinarily hold on awareness. By definition, then, the deeper meditation goes, the stranger (as in, further from ordinary) the resulting experience will be.
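
If it helps to see the volume knob in action, here’s a tiny sketch in code — my own toy illustration, to be clear, not a model from Laukkonen and Slagter’s paper. It uses the standard precision-weighted update that predictive processing accounts build on: a prior belief and a piece of incoming evidence each carry a precision, and the precisions decide which one experience bends toward.

```python
# Toy model of precision weighting. The scenario and numbers are
# invented for illustration; they are not from the paper.

def precision_weighted_update(prior_mean, prior_precision,
                              obs_mean, obs_precision):
    """Blend a prior belief with new evidence, weighted by precision.

    A high-precision (loud) prior dominates the result; turning its
    precision down lets the evidence show through instead.
    """
    total_precision = prior_precision + obs_precision
    return (prior_precision * prior_mean
            + obs_precision * obs_mean) / total_precision

# A loud prior ("sharing opinions leads to ridicule," coded as -1.0)
# barely budges when friendly evidence (+1.0) arrives:
print(precision_weighted_update(-1.0, 10.0, 1.0, 1.0))  # ≈ -0.82

# Turn that prior's volume down, and the same evidence reshapes
# the outcome:
print(precision_weighted_update(-1.0, 0.5, 1.0, 1.0))   # ≈ 0.33
```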

How meditation deconstructs the predictive mind

So let’s go back to our four-step model of meditative depth. We said the first step, focused attention practices, “settle the mind.”

Now, we can say that with a bit more detail. By focusing on one particular thing, like the breath, you’re cranking up the precision weighting assigned to it. You’re holding its volume knob up high so that your experience settles around it.

By doing so, you also turn down the volume on everything else. You can see this happen in real time pretty easily — just try picking out one specific thing in your current experience. Like your left earlobe — how does it feel right now?

Really, take five seconds and tune into it.

Looking back, you might notice that the more you tuned into that earlobe, the more everything else began to fade into the background. That helps explain why focused attention practices like basic mindfulness can be so relaxing. You’re turning down the volume on everything that’s stressing you out.
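
Here’s that trade-off sketched as code — again a hypothetical toy of mine, not anything from the article or the paper: if you treat attention as a fixed budget, cranking up the precision on one object automatically turns everything else down.

```python
# Hypothetical sketch: attention as a fixed budget, so precision
# weights compete once renormalized. Objects and numbers are invented.

def renormalize(weights):
    """Scale the weights so they sum to 1, like a shared budget."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

attention = {"breath": 1.0, "deadline worry": 1.0, "street noise": 1.0}
print(renormalize(attention))   # even spread: about 0.33 each

attention["breath"] = 8.0       # crank the volume knob on the breath...
print(renormalize(attention))   # ...breath gets 0.8; the rest fade to 0.1
```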

Next, in open-monitoring practices, you drop that object of attention and release the volume knob. But it doesn’t twist back to its normal resting position. Since your focusing practice turned down the volume on everything else, the default setting across your mind at large is now lower.

Focused attention settles your mind onto one object of attention. In open-monitoring, you drop into a more settled mind across the board.

It’s not that you no longer have thoughts springing up. But as those thoughts do, your mind reacts less to them. They’re muted, less sticky, so attention clings to them less. They just come and go more easily.

That’s why during the open-monitoring stage, you begin to see the entire snow globe that mental activity is happening inside of. The idea of a “field” of awareness is no longer a metaphor; you can see it directly.

“Advanced practitioners are said to be able to effortlessly observe experience as a whole,” write Laukkonen and Slagter, “without being ‘caught’ by thoughts, emotions, or anything else that arises in one’s sensorium.”

Focused attention practices are an important step in meditation — they help calm your mind before you try to see through it. But on their own, they don’t usually lead to big revelations about how your mind works. Open monitoring is where this “seeing through” process really kicks in.

“There is a space of awareness that’s different from the contents of awareness,” said Chandaria, who’s been meditating for about 37 years. “And that’s something that most people aren’t even aware of. The first time we see that, it’s like, oh, I never knew that there was actually an ocean on which these waves were arising. I never knew the ocean.”

And then there’s non-dual experience

By the time you sink into open-monitoring practice, the predictive mind has loosened its grip on experience. But there are still deep priors at play.

For example, in open-monitoring practice, it probably still feels like there’s a “you” doing the meditating. And that “you” is experiencing “your” awareness. There’s a subject — you — aware of an object, the field of experience.

But according to heaps of meditators and mystics through the millennia, this, too, can be deconstructed.

Non-dual meditation aims at turning down even those deep priors that construct distinctions between subject and object — along with basically every other possible distinction. During non-dual experiences, there’s no self/other, good/bad, here/there, now/later. All these dualities that underlie ordinary cognition basically melt into a big soup of the Now.

This is the thing — the big soupy Now — that you’ll quickly hear a ton of platitudes about in meditation circles. The illusion of separation, the truth of universal oneness.

That’s because there’s just no great way to describe it — it’s either incredibly weird, or incredibly trite. But if you’re after more descriptions anyway, philosopher and meditator Thomas Metzinger recently published a book containing over 500 different accounts of non-duality, or “minimal phenomenal experience” as he calls it, from advanced meditators across 57 countries. Metzinger is usually at least a decade ahead of the field, so it’s worth a read.

If open-monitoring practice is where meditation’s hefty insights begin kicking into gear, non-duality is where they ramp up. It’s often described as “coming home.” One meditator from Metzinger’s research described it as “the realization of having finally found home after an eternal search. The pathological searching, the agony of control, comes to an abrupt end, and for the first time you realize what it means to be alive.”

According to Laukkonen and Slagter’s framework, non-duality is the baseline of all experience. It’s always beneath our ordinary experiences — awareness in its least constructed form. Non-dual meditation practice is about “creating the conditions that reduce ordinary cognition that normally ‘hides’ non-dual awareness.”

But even non-duality isn’t the end of the road. It’s still a mode of consciousness. And according to predictive processing, wherever there’s conscious experience, there’s an underlying prior, or expectation, that’s holding it up. This, too, can be deconstructed.

When the mind has no priors left: Cessation

In the past year, meditation researchers have begun to corroborate long-standing claims from Buddhist scripture that if your meditation goes deep enough, the whole show of consciousness can be extinguished — temporarily, that is — altogether.

Nirodha-samāpatti, or “cessation of thought and feeling,” is a summit of meditative attainment in Theravada Buddhism, the oldest surviving form of Buddhism, most commonly practiced in Southeast Asia. Cessation is like going under general anesthesia, but without any drugs. Consciousness can be switched off from the inside, for — according to the scriptures — up to seven days at a time (though the first lab data on cessation looked at a more modest 90-minute stretch).

Cessation is a bona fide advanced meditation thing — I’ll make zero effort to convince you it’s accessible to us non-monastic folks. But according to neuroscientist Matthew Sacchet, who leads the Meditation Research Program at Harvard Medical School and Massachusetts General Hospital, the early data collected from studying cessations with neuroscience gizmos supports the idea that meditation deconstructs the predictive mind.

“Cessation could thus reflect a final release of the expectation to be aware or alert,” Laukkonen and Slagter write. It’s like a bottoming-out of the predictive mind.

Coming out of cessation, meditators can observe the reconstruction of the predictive mind, prior by prior. “That puts us in a special state,” Chandaria said. “You can call it reprogramming mode. And in reprogramming mode, we can start to reprogram ourselves in ways that could be more conducive to human flourishing.”

Why does this matter?

For those of us who aren’t neuroscientists, or don’t care about “predictive processing,” what good does this model of meditation do?

It’s not the objective truth about what meditation actually does. It’s just a story. It’s not comprehensive — there are styles of meditation that wouldn’t fit neatly onto this framework. And meditation doesn’t always follow this trajectory — you can go straight into non-dual practices, or try out open-monitoring before focused attention.

On a personal note, I find this framework really helpful. As soon as I read Laukkonen and Slagter’s paper, it gave me a way to see my own practice that clicked with my experience better than other stories — which stem from other cultures — about what meditation does.

Now, I usually spend the beginning of my meditation sessions doing focused attention practice to settle the mind. And when I notice my concentration is stable enough, I release the focus and drop into open-monitoring practices. And when my mind falls into an especially weird place that words don’t really capture, I figure, maybe that’s leaning into this non-dual stuff? Just having the labels helped kindle my interest in playing around with things.

And as a scientific framework, this model is generating all sorts of new hypotheses to test. More broadly, it also gives us a way to think through how it’s possible that so many people are trying meditation, but so few are having the big transformative experiences that more advanced practitioners talk about.

Some 60 million Americans may have tried meditation in 2022, but if most of them only ever do some sort of focused attention practice, they’re never trying anything beyond the first step. That’s like concluding that running probably won’t make you significantly healthier because you laced up your sneakers and nothing transformative happened.

When I asked Chandaria how this new scientific model compares to religious models that have been around for ages, like Theravada Buddhism’s four-path model, he said that “Ultimately…all these stories are pointing to the moon. But [contemplative traditions] were pointing with their fingers. Now, we have laser pointers.” And as science progresses, “we’ll be able to work with what we’re finding out about the brain,” he added. “It’s actually about making progress, and by progress, I mean more useful stories.”

Want to dive deeper into meditation?

Check out Vox’s free meditation course. For five days, staff reporter Oshan Jarow breaks down what you need to know to fit meditation into your everyday life, features exclusive interviews with different meditation experts, and offers bite-size meditation practice exercises. Sign up here!


Fish Have a Brain Microbiome. Could Humans Have One Too?


The discovery that other vertebrates have healthy, microbial brains is fueling the still controversial possibility that we might have them as well.

Introduction

Bacteria are in, around and all over us. They thrive in almost every corner of the planet, from deep-sea hydrothermal vents to high up in the clouds, to the crevices of your ears, mouth, nose and gut. But scientists have long assumed that bacteria can’t survive in the human brain. The powerful blood-brain barrier, the thinking goes, keeps the organ mostly free from outside invaders. But are we sure that a healthy human brain doesn’t have a microbiome of its own?

Over the last decade, initial studies have presented conflicting evidence. The idea has remained controversial, given the difficulty of obtaining healthy, uncontaminated human brain tissue that could be used to study possible microbial inhabitants.

Recently, a study published in Science Advances provided the strongest evidence yet that a brain microbiome can and does exist in healthy vertebrates — fish, specifically. Researchers at the University of New Mexico discovered communities of bacteria thriving in salmon and trout brains. Many of the microbial species have special adaptations that allow them to survive in brain tissue, as well as techniques to cross the protective blood-brain barrier.

Matthew Olm, a physiologist who studies the human microbiome at the University of Colorado, Boulder, and was not involved with the study, is “inherently skeptical” of the idea that populations of microbes could live in the brain, he said. But he found the new research convincing. “This is concrete evidence that brain microbiomes do exist in vertebrates,” he said. “And so the idea that humans have a brain microbiome is not outlandish.”

While fish physiology is, in many ways, similar to humans’, there are some key differences. Still, “it certainly puts another weight on the scale to think about whether this is relevant to mammals and us,” said Christopher Link, who studies the molecular basis of neurodegenerative disease at the University of Colorado, Boulder, and was also not involved in the work.

The human gut microbiome plays a critical role in the body, communicating with the brain and maintaining the immune system through the gut-brain axis. So it isn’t totally far-fetched to suggest that microbes could play an even larger role in our neurobiology.

Fishing for Microbes

For years, Irene Salinas has been fascinated by a simple physiological fact: The distance between the nose and the brain is quite small. The evolutionary immunologist, who works at the University of New Mexico, studies mucosal immune systems in fish to better understand how human versions of these systems, such as our intestinal lining and nasal cavity, work. The nose, she knows, is loaded with bacteria, and they’re “really, really close” to the brain — mere millimeters from the olfactory bulb, which processes smell. Salinas has always had a hunch that bacteria might be leaking from the nose into the olfactory bulb. After years of curiosity, she decided to confront her suspicion in her favorite model organisms: fish.

Salinas and her team* started by extracting DNA from the olfactory bulbs of trout and salmon, some caught in the wild and some raised in her lab. They planned to look up the DNA sequences in a database to identify any microbial species.

These kinds of samples, however, are easily contaminated — by bacteria in the lab or from other parts of a fish’s body — which is why scientists have struggled to study this subject effectively. If they did find bacterial DNA in the olfactory bulb, they would have to convince themselves and other researchers that it truly originated in the brain.

To cover their bases, Salinas’ team studied the fishes’ whole-body microbiomes, too. They sampled the rest of the fishes’ brains, guts and blood; they even drained blood from the many capillaries of the brain to make sure that any bacteria they discovered resided in the brain tissue itself.

“We had to go back and redo [the experiments] many, many times just to be sure,” Salinas said. The project took five years — but even in the early days it was clear that the fish brains weren’t barren.

As Salinas expected, the olfactory bulb hosted some bacteria. But she was shocked to see that the rest of the brain had even more. “I thought the other parts of the brain wouldn’t have bacteria,” she said. “But it turned out that my hypothesis was wrong.” The fish brains hosted so many microbes that it took only a few minutes to locate bacterial cells under a microscope. As an additional step, her team confirmed that the microbes were actively living in the brain; they weren’t dormant or dead.

Olm was impressed by their thorough approach. Salinas and her team circled “the same question, from all these different ways, using all these different methods — all of which produced convincing data that there actually are living microbes in the salmon brain,” he said.

But if there are, how did they get there?

Invading the Fortress

Researchers have long been skeptical that the brain could have a microbiome because all vertebrates, including fish, have a blood-brain barrier. These blood vessels and surrounding brain cells are fortified to serve as gatekeepers that allow only some molecules in and out of the brain and keep invaders, especially larger ones like bacteria, out. So Salinas naturally wondered how the brains in her study had been colonized.

By comparing microbial DNA from the brain to that collected from other organs, her lab found a subset of species that didn’t appear elsewhere in the body. Salinas hypothesized that these species may have colonized the fish brains early in their development, before their blood-brain barriers had fully formed. “Early on, anything can go in; it’s a free-for-all,” she said.

But many of the microbial species were also found throughout the body. She suspects that most bacteria in the fishes’ brain microbiomes originated in their blood and guts, and continuously leak into the brain.

“After that first wave of colonization,” she said, “you need to have specific features to go in and out.”

Salinas was able to identify features that let bacteria make the crossing. Some could produce molecules, known as polyamines, that can open and close junctions, which are like little doors in the barrier that allow molecules to pass through. Others could produce molecules that help them evade the body’s immune response or compete with other bacteria.

Salinas even caught a bacterium in the act. Looking under the microscope, she captured an image of a bacterium frozen in time within the blood-brain barrier. “We literally caught it right in the middle of crossing,” she said.

It is possible that the microbes don’t live freely in the brain tissue but are engulfed by immune cells. That would be the “most boring interpretation of this paper,” Olm said, and would suggest that the fish have adapted to bacterial inhabitants by containing them.

However, if the bacteria are free-living, they could be involved in the body’s processes beyond the brain. It’s possible that the microbes actively regulate aspects of the creatures’ physiology, Salinas suggested, the way human gut microbiomes help regulate the digestive and immune systems.

Fish, of course, are not humans, but they allow a fair comparison, Salinas said. And her work suggests that if fish have microbes living in their brains, it’s possible we have them, too.

Impenetrable or Not?

Bacteria have been found living in just about every human organ system, but to many scientists the brain is a step too far. The blood-brain barrier has traditionally been seen as “impenetrable,” said Janosch Heller, who studies the barrier at Dublin City University and was not involved in the new research. Plus, the brain has immune cells working overtime to zap any potentially harmful invaders. When microbes have been found in the human brain, they are typically associated with active infections or linked to a breakdown in the barrier due to diseases such as Alzheimer’s.

This assumption was challenged in 2013, when scientists studying the neurological impacts of HIV/AIDS found genetic hints of bacteria in the brains of both sick and healthy people. The findings were the first to suggest that maybe humans could have a brain microbiome in the absence of disease.

“No one believed it 10 years ago,” Heller said. Follow-up studies — there haven’t been many — have been inconclusive. “It is very easy to trick yourself into thinking microbes are present because microbial DNA is essentially everywhere,” Olm said. “So it would take a lot of evidence to convince me that it does exist.”

The fish experiment did convince him, and other researchers, that a human brain microbiome is not impossible. What is nearly impossible, however, is confirming that without harming healthy people. To build a case, Link suggested repeating the fish experiment in rodents. “This protocol should be able to be adapted really easily to mouse brains,” Salinas said — and indeed her team has started looking into it. They have found early hints that microbes exist in the olfactory bulbs of healthy mice and, to a lesser extent, throughout the brain.

“There’s no reason, if fish have them, that you wouldn’t have them, or that mice wouldn’t have them,” Link said. If microbes have adapted to cross the fish blood-brain barrier and survive in the fish brain, they could do the same in our bodies. It’s unlikely they would be present at the same levels as they are in fish, he added, “but that doesn’t mean there’s none.”

Even in small numbers, Link said, resident microbes could influence our brain metabolism and immune systems. If they are truly present, this would suggest an extra layer of neurological regulation that we didn’t know existed. We already know that microbes influence our neurobiology: Right now, microbes in your gut are modulating your brain activity through the gut-brain axis by producing metabolites that are sensed by enteric neurons winding through your digestive system.

It’s a fascinating, though unproved, proposition that bacteria in the brain are directly impacting our physiology. However, thanks to research like Salinas’, more scientists are open to the idea that healthy human brains might also be home to microbes.

“Why not?” Heller said. “I’m not shocked anymore that they are there.” The more interesting question, he said, is: “Are they all there for a reason, or are they there by mistake?”

* Update: December 5, 2024
Important contributions to the research were made by Amir Mani, the lead author of the paper.


Open Source on its own is no alternative to Big Tech - Bert Hubert's writings


Paris is replacing 60,000 parking spaces with trees


Watching the Generative AI Hype Bubble Deflate – Ash Center


Only a few short months ago, generative AI was sold to us as inevitable by AI company leaders, their partners, and venture capitalists. Certain media outlets promoted these claims, fueling online discourse about what each new beta release could accomplish with a few simple prompts. As AI became a viral sensation, every business tried to become an AI business. Some even added “AI” to their names to juice their stock prices,1 and companies that mentioned “AI” in their earnings calls saw similar increases.2

Investors and consultants urged businesses not to get left behind. Morgan Stanley positioned AI as key to a $6 trillion opportunity.3 McKinsey hailed generative AI as “the next productivity frontier” and estimated gains of $2.6 trillion to $4.4 trillion,4 comparable to the annual GDP of the United Kingdom or all the world’s agricultural production.5 6 Conveniently, McKinsey also offers consulting services to help businesses “create unimagined opportunities in a constantly changing world.”7 Readers of this piece can likely recall being exhorted by news media or their own industry leaders to “learn AI” while encountering targeted ads hawking AI “boot camps.”

While some have long been wise to the hype,8 9 10 11 global financial institutions and venture capitalists are now beginning to ask if generative AI is overhyped.12 In this essay, we argue that even as the generative AI hype bubble slowly deflates, its harmful effects will last: carbon can’t be put back in the ground, workers continue to face AI’s disciplining pressures, and the poisonous effect on our information commons will be hard to undo.

Historical Hype Cycles in the Digital Economy

Attempts to present AI as desirable, inevitable, and as a more stable concept than it actually is follow well-worn historical patterns.13 A key strategy for a technology to gain market share and buy-in is to present it as an inevitable and necessary part of future infrastructure, encouraging the development of new, anticipatory infrastructures around it. From the early history of automobiles and railroads to the rise of electricity and computers, this dynamic has played a significant role. All these technologies required major infrastructure investments — roads, tracks, electrical grids, and workflow changes — to become functional and dominant. None were inevitable, though they may appear so in retrospect.14 15 16 17

The well-known phrase “nobody ever got fired for buying IBM” is a good, if partial, historical analogue to the current feeding frenzy around AI. IBM, while expensive, was a recognized leader in automating workplaces, ostensibly to the advantage of those corporations. IBM famously re-engineered the environments where its systems were installed, ensuring that office infrastructures and workflows were optimally reconfigured to fit its computers, rather than the other way around. Similarly, AI corporations have repeatedly claimed that we are in a new age of not just adoption but of proactive adaptation to their technology. Ironically, in AI waves past, IBM itself over-promised and under-delivered; some described their “Watson AI” product as a “mismatch” for the health care context it was sold for, while others described it as “dangerous.”18 Time and again, AI has been crowned as an inevitable “advance” despite its many problems and shortcomings: built-in biases, inaccurate results, privacy and intellectual property violations, and voracious energy use.

Nevertheless, in the media and — early on at least — among investors and corporations seeking to profit, AI has been publicly presented as unstoppable.19 20 21 This key form of rhetoric came from those eager to pave the way for a new set of heavily funded technologies; it was never a statement of fact about the technology’s robustness, utility, or even its likely future utility. Rather, it reflects a standard stage in the development of many technologies, where a technology’s manufacturers, boosters, and investors attempt to make it indispensable by integrating it, often prematurely, into existing infrastructures and workflows, counting on this entanglement to “save a spot” for the technology to be more fully integrated in the future. The more far-reaching this early integration, the more difficult it will be to disentangle or roll back the attendant changes — meaning that even broken or substandard technologies stand a better chance of becoming entrenched.22

In the case of AI, however, as with many other recent technology booms or boomlets (from blockchain to the metaverse to clunky VR goggles23 24), this stage was also accompanied by severe criticism of both the rhetorical positioning of the technology as indispensable and of the technology’s current and potential states. Historically, this form of critique is an important stage of technological development, offering consumers, users, and potential users a chance to alter or improve upon the technology by challenging designers’ assumptions before the “black box” of the technology is closed.25 It also offers a small and sometimes unlikely — but not impossible — window for partial or full rejection of the technology.

...even as the Generative AI hype bubble slowly deflates, its harmful effects will last: carbon can’t be put back in the ground, workers continue to need to fend off AI’s disciplining effects, and the poisonous effect on our information commons will be hard to undo.

David Gray Widder and Mar Hicks

Deflating the Generative AI Bubble

Talk of a bubble has simmered beneath the surface even as the money faucet continues to flow,26 but we observe a recent inflection point. Interlocutors are beginning to sound the alarm that AI is overvalued. The perception that AI is a bubble, rather than a gold rush, is making its way into wider discourse with increasing frequency and strength. The more industry bosses protest that it’s not a bubble,27 the more people have begun to look twice.

For instance, users and artists slammed Adobe for ambiguous statements about using customers’ creative work to train generative AI, forcing the company to later clarify that it would only do so in specific circumstances. At the same time, the explicit promise of not using customer data for AI training has started to become a selling point for others, with a rival positioning its product as “not a trick to access your media for AI training.”28 Another company boasted a “100% LLM [large-language model]-Free” product, spotlighting that it “never present[s] chatbot[s] that act human or imitate human experts.”29 Even major players like Amazon and Google have attempted to lower business expectations for generative AI, recognizing its expense, accuracy issues, and as yet uncertain value proposition.30 Nonetheless, they have done so in ways that attempt to preserve the hype surrounding AI, which will likely remain profitable for their cloud businesses.

It’s not just technology companies questioning something they initially framed as inevitable. Recently, venture capital firm Sequoia Capital said that “the AI bubble is reaching a tipping point”31 after failing to find a satisfactory answer to a question they posed last year: “Where is all the revenue?”32 Similarly, in Goldman Sachs’ recent report, “Gen AI: too much spend, too little benefit?”,33 their global head of equity research stated, “AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.” Still, the report tellingly notes that even if AI doesn’t “deliver on its promise,” it may still generate investor returns, as “bubbles take a long time to burst.” In short, financial experts are pointing out that capital expenditures on things like graphics cards or cloud compute have not been met by commensurate revenue, nor does there seem to be a clear pathway to remedy this. This shift is a recognizable stage in which a product and its promoters do not suffer swift devaluation but begin to lose their top spots on the NASDAQ and other major exchanges.

Why is this happening? Technically, large-language models (LLMs) continue to produce erroneous but confident text (“hallucinations”) because they are inherently probabilistic machines, and no clear fixes exist because this is a fundamental feature of how the technology works.34 In many cases, LLMs fail to automate the labor that CEOs confidently claimed they could, and instead often decrease employee productivity.35 Economically, interest rates have risen, so “easy money” is no longer available to fund boosters’ loftiest and horrifically expensive generative AI dreams.36 Meanwhile, federal regulators have intensified their scrutiny, even as they struggle to rein in social media platforms. FTC chair Lina Khan has said, “There is no AI exemption to the laws on the books,” encouraging regulators to apply standard regulatory tools to AI.37 Legally, after misappropriating or allegedly stealing much of their training data during early generative AI development, companies now face lawsuits and must pay for their inputs.38 Public discourse is catching up too. We were promised that AI would automate tedious tasks, freeing people for more fulfilling work. Increasingly, users recognize that these technologies are built to “do my art and writing so that I can do my laundry and dishes,” in the words of one user, rather than the reverse.39

Today’s hype will have lasting effects that constrain tomorrow’s possibilities.

David Gray Widder and Mar Hicks

Hype’s Harmful Effects Are Not Easily Reversed

While critics of any technology bubble may feel vindicated by seeing it pop — and when stock markets and the broader world catch up with their gimlet-eyed early critiques — those who have been questioning the AI hype also know that the deflation, or even popping, of the bubble does not undo the harm already caused. Hype has material and often harmful effects in the real world. The ephemerality of these technologies is grounded in real-world resources, bodies, and lives, reminiscent of the destructive industrial practices of past ages. Decades of regulation were required to roll back the environmental and public health harms of technologies we no longer use, from short-lived ones like radium to longer-lived ones like leaded gasoline.40 41 Even ephemeral phenomena can have long-lasting negative effects.

The hype around AI has already impacted climate goals. In the United States, plans to retire polluting coal power plants have slowed by 40%, with politicians and industry lobbyists citing the need to win the “AI war.”42 Microsoft, which had planned to be carbon negative by 2030,43 walked back that goal after its 2023 emissions came in 30% higher than its 2020 emissions.44 Brad Smith, its president, said that this “moonshot” goal was made before the “explosion in artificial intelligence,” and now “the moon is five times as far away,” with AI as the driving factor. After firing employees for raising concern about generative AI’s environmental costs,45 46 Google has also seen its emissions increase and no longer claims to be carbon-neutral while pushing its net-zero emissions goal date further into the future.47 This carbon can’t be unburned, and the breathless discourse surrounding AI has helped ratchet up the existing climate emergency, providing justification for companies to renege on their already-imperiled environmental promises.48

The discourse surrounding AI will also have lasting effects on labor. Some workers will see the scope of their work reduced, while others will face wage stagnation or cuts owing to the threat, however empty, that they might be replaced with poor facsimiles of themselves. Creative industries are especially at risk: as illustrator Molly Crabapple states, while demand for high-end human-created art may remain, generative AI will harm many working illustrators, as editors opt for generative AI’s fast and low-cost illustrations over original creative output.49 Even as artists mobilize with technical and regulatory countermeasures,50 this burden distracts from their artistic pursuits. Unions such as SAG-AFTRA have won meager protections against AI,51 and while this hot-button issue perhaps raised the profile of their strike, it also distracted from other crucial contract negotiations. Even if generative AI doesn’t live up to the hype, its effect on how we value creative work may be hard to shake, leaving creative workers to reclaim every inch lost during the AI boom.

Lastly, generative AI will have long-term effects on our information commons. The ingestion of massive amounts of user-generated data, text, and artwork — often in ways that appear to violate copyright and fair use — has pushed us closer to the enclosure of the information commons by corporations.52 Google’s AI search snippets tool, for example, authoritatively suggested putting glue in pizza and recommended eating at least one small rock per day.53 While these may seem obvious enough to be harmless, most AI-generated misinformation is not so easy to detect. The increasing prevalence of AI-generated nonsense on the internet will make it harder to find trusted information, allow misinformation to propagate, and erode trust in sources we used to count on for reliable information.

A key question remains, and we may never have a satisfactory answer: what if the hype was always meant to fail? What if the point was to hype things up, get in, make a profit, and entrench infrastructure dependencies before critique, or reality, had a chance to catch up?54 Path dependency is well understood by historians of technology and those seeking to profit from AI. Today’s hype will have lasting effects that constrain tomorrow’s possibilities. Using the AI hype to shift more of our infrastructure to the cloud increases dependency on cloud companies, creating dependencies that will be hard to undo even as inflated promises for AI are dashed.

Inventors, technologists, corporations, boosters, and investors regularly seek to create inevitability, in part by encouraging a discourse of inexorable technological “progress” tied to their latest investment vehicle. By referencing past technologies, which now seem natural and necessary, they claim their current developments (tautologically) as inevitable. Yet the efforts to make AI indispensable on a large scale, culturally, technologically, and economically, have not lived up to their promises. In a sense, this is not surprising, as generative AI does not so much represent the wave of the future as it does the ebb and flow of waves past.

Acknowledgements

We are grateful to Ali Alkhatib, Sireesh Gururaja, and Alex Hanna for their insightful comments on earlier drafts.

Political Economy of AI Essay Collection

Earlier this year, the Allen Lab for Democracy Renovation hosted a convening on the Political Economy of AI. This collection of essays from leading scholars and experts raises critical questions surrounding power, governance, and democracy as the authors consider how technology can better serve the public interest.


Citations
  1. Benzinga. “Stocks With ‘AI’ In the Name Are Soaring: Could It Be The Next Crypto-, Cannabis-Style Stock Naming Craze?” Markets Insider, January 31, 2023. https://markets.businessinsider.com/news/stocks/stocks-with-ai-in-the-name-are-soaring-could-it-be-the-next-crypto-cannabis-stock-naming-craze-1032055463.
  2. Wiltermuth, Joy. “AI Talk Is Surging during Company Earnings Calls — and so Are Those Companies’ Shares.” MarketWatch, March 16, 2024. https://www.marketwatch.com/story/ai-talk-is-surging-during-company-earnings-calls-and-so-are-those-companies-shares-f924d91a.
  3. Morgan Stanley. “The $6 Trillion Opportunity in AI.” April 18, 2023. https://www.morganstanley.com/ideas/generative-ai-growth-opportunity.
  4. Chui, Michael, Roger Roberts, Lareina Yee, et al. “The Economic Potential of Generative AI: The Next Productivity Frontier.” McKinsey & Company, June 14, 2023. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#key-insights.
  5. Statista, May 2024. https://www.statista.com/outlook/io/agriculture/worldwide.
  6. World Bank. “United Kingdom.” World Bank Open Data, 2023. https://data.worldbank.org.
  7. McKinsey & Company. “QuantumBlack – AI by McKinsey,” 2024. http://ceros.mckinsey.com/qb-overview-desktop-2-1.
  8. Bender, Emily M., and Alex Hanna. “Mystery AI Hype Theater 3000.” The Distributed AI Research Institute, 2024. https://www.dair-institute.org/maiht3k/.
  9. Marcus, Gary. “The Great AI Retrenchment Has Begun.” Marcus on AI, June 15, 2024. https://garymarcus.substack.com/p/the-great-ai-retrenchment-has-begun.
  10. Marx, Paris. “The ChatGPT Revolution Is Another Tech Fantasy.” Disconnect, July 27, 2023. https://disconnect.blog/the-chatgpt-revolution-is-another/.
  11. Hanna, Alex. “The Grimy Residue of the AI Bubble.” Mystery AI Hype Theater 3000: The Newsletter, July 25, 2024. https://buttondown.email/maiht3k/archive/the-grimy-residue-of-the-ai-bubble/.
  12. Nathan, Allison, Jenny Grimberg, and Ashley Rhodes. “Gen AI: Too Much Spend, Too Little Benefit?” Top of Mind. Goldman Sachs Global Macro Research, June 25, 2024. https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf.
  13. Suchman, Lucy. “The Uncontroversial ‘Thingness’ of AI.” Big Data & Society 10, no. 2 (July 2023): 20539517231206794. https://doi.org/10.1177/20539517231206794.
  14. Oldenziel, Ruth, M. Luísa Sousa, and Pieter van Wesemael. “Designing (Un)Sustainable Urban Mobility from Transnational Settings, 1850-Present.” In A U-Turn to the Future: Sustainable Urban Mobility since 1850, edited by Martin Emanuel, Frank Schipper, and Ruth Oldenziel. Berghahn Books, 2020.
  15. Nye, David E. Electrifying America: Social Meanings of a New Technology, 1880-1940. MIT Press, 2001.
  16. Hicks, Mar. Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. History of Computing. The MIT Press, 2018.
  17. Burrell, Jenna. “Artificial Intelligence and the Ever-Receding Horizon of the Future.” Tech Policy Press, June 6, 2023. https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future.
  18. Strickland, Eliza. “How IBM Watson Overpromised and Underdelivered on AI Health Care.” IEEE Spectrum, April 2, 2019. https://spectrum.ieee.org/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care.
  19. Taylor, Josh. “Rise of Artificial Intelligence Is Inevitable but Should Not Be Feared, ‘Father of AI’ Says.” The Guardian, May 7, 2023. https://www.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says.
  20. Shapiro, Daniel. “Artificial Intelligence: It’s Complicated And Unsettling, But Inevitable.” Forbes, September 10, 2019. https://www.forbes.com/sites/danielshapiro1/2019/09/10/artificial-intelligence-its-complicated-and-unsettling-but-inevitable/.
  21. Raasch, Jon Michael. “In Education, ‘AI Is Inevitable,’ and Students Who Don’t Use It Will ‘Be at a Disadvantage’: AI Founder.” FOX Business, June 22, 2023. https://www.foxbusiness.com/technology/education-ai-inevitable-students-use-disadvantage-ai-founder.
  22. Halcyon Lawrence explores this dynamic with speech recognition technologies that were unable to recognize the accents of the majority of global English speakers for much of their existence.
    Lawrence, Halcyon M. “Siri Disciplines.” In Your Computer Is on Fire, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip. The MIT Press, 2021.
  23. Axon, Samuel. “RIP (Again): Google Glass Will No Longer Be Sold.” Ars Technica, March 16, 2023. https://arstechnica.com/gadgets/2023/03/google-glass-is-about-to-be-discontinued-again/.
  24. Barr, Kyle. “Apple Vision Pro U.S. Sales Are All But Dead, Market Analysts Say.” Gizmodo, July 11, 2024. https://gizmodo.com/apple-vision-pro-u-s-sales-2000469302.
  25. Kline, Ronald, and Trevor Pinch. “Users as Agents of Technological Change: The Social Construction of the Automobile in the Rural United States.” Technology and Culture 37, no. 4 (1996): 763–95. https://doi.org/10.2307/3107097.
  26. Celarier, Michelle. “Money Is Pouring Into AI. Skeptics Say It’s a ‘Grift Shift.’” Institutional Investor, August 29, 2023. https://www.institutionalinvestor.com/article/2c4fad0w6irk838pca3gg/portfolio/money-is-pouring-into-ai-skeptics-say-its-a-grift-shift.
  27. Bratton, Laura, and Britney Nguyen. “The AI Craze Is No Dot-Com Bubble. Here’s Why.” Quartz, April 15, 2024. https://qz.com/ai-stocks-dot-com-bubble-nvidia-google-microsoft-amazon-1851407019.
  28. Gray, Jeremy. “Blackmagic Taunts Adobe Following Terms of Use Controversy.” PetaPixel, June 28, 2024. https://petapixel.com/2024/06/28/blackmagic-taunts-adobe-following-terms-of-use-controversy/.
  29. Inqwire. “Inqwire.” Accessed July 29, 2024. https://www.inqwire.io.
  30. Gardizy, Anissa, and Aaron Holmes. “Amazon, Google Quietly Tamp Down Generative AI Expectations.” The Information, March 12, 2024.
  31. Cahn, David. “AI’s $600B Question.” Sequoia Capital, June 20, 2024. https://www.sequoiacap.com/article/ais-600b-question/.
  32. Cahn, David. “AI’s $200B Question.” Sequoia Capital, September 20, 2023. https://www.sequoiacap.com/article/follow-the-gpus-perspective/.
  33. Nathan, Allison, Jenny Grimberg, and Ashley Rhodes. “Gen AI: Too Much Spend, Too Little Benefit?” Top of Mind. Goldman Sachs Global Macro Research, June 25, 2024. https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf.
  34. Leffer, Lauren. “Hallucinations Are Baked into AI Chatbots.” Scientific American, April 5, 2024. https://www.scientificamerican.com/article/chatbot-hallucinations-inevitable/.
  35. Robinson, Bryan. “77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds.” Forbes, July 23, 2024. https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/.
  36. Karma, Rogé. “The Era of Easy Money Is Over. That’s a Good Thing.” The Atlantic, December 11, 2023. https://www.theatlantic.com/ideas/archive/2023/12/higher-interest-rates-fed-economy/676282/.
  37. Khan, Lina. “Statement of Chair Lina M. Khan Regarding the Joint Interagency Statement on AI.” Federal Trade Commission, April 25, 2023.
  38. O’Donnell, James. “Training AI Music Models Is about to Get Very Expensive.” MIT Technology Review, June 27, 2024. https://www.technologyreview.com/2024/06/27/1094379/ai-music-suno-udio-lawsuit-record-labels-youtube-licensing/.
  39. Maciejewska, Joanna (AuthorJMac). “I Want AI to Do My Laundry and Dishes so That I Can Do Art and Writing…” X (formerly Twitter), March 29, 2024. https://x.com/AuthorJMac/status/1773679197631701238.
  40. Clark, Claudia. Radium Girls, Women and Industrial Health Reform: 1910-1935. Chapel Hill, NC: University of North Carolina Press, 1997.
  41. Nader, Ralph. Unsafe at Any Speed: The Designed-in Dangers of the American Automobile. Grossman, 1965.
  42. Chu, Amanda. “US Slows Plans to Retire Coal-Fired Plants as Power Demand from AI Surges.” Financial Times, May 30, 2024. https://web.archive.org/web/20240702094041/https://www.ft.com/content/ddaac44b-e245-4c8a-bf68-c773cc8f4e63.
  43. Smith, Brad. “Microsoft Will Be Carbon Negative by 2030.” The Official Microsoft Blog, January 16, 2020. https://blogs.microsoft.com/blog/2020/01/16/microsoft-will-be-carbon-negative-by-2030/.
  44. Rathi, Akshat, Dina Bass, and Mythili Rao. “A Big Bet on AI Is Putting Microsoft’s Climate Targets at Risk.” Bloomberg, May 23, 2024. https://www.bloomberg.com/news/articles/2024-05-23/a-big-bet-on-ai-is-putting-microsoft-s-climate-targets-at-risk.
  45. Simonite, Tom. “What Really Happened When Google Ousted Timnit Gebru.” Wired, June 8, 2021. https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/.
  46. Bender, Emily M., and Alex Hanna. “Mystery AI Hype Theater 3000.” The Distributed AI Research Institute, 2024. https://www.dair-institute.org/maiht3k/.
  47. Rathi, Akshat. “Google Is No Longer Claiming to Be Carbon Neutral.” Bloomberg, July 8, 2024. https://www.bloomberg.com/news/articles/2024-07-08/google-is-no-longer-claiming-to-be-carbon-neutral.
  48. Kneese, Tamara, and Meg Young. “Carbon Emissions in the Tailpipe of Generative AI.” Harvard Data Science Review, Special Issue 5 (June 11, 2024). <a href="https://doi.org/10.1162/99608f92.fbdf6128" rel="nofollow">https://doi.org/10.1162/99608f92.fbdf6128</a>.
  49. Crabapple, Molly, and Paris Marx. “Why AI Is a Threat to Artists, with Molly Crabapple.” Tech Won’t Save Us, June 29, 2023. <a href="https://techwontsave.us/episode/174_why_ai_is_a_threat_to_artists_w_molly_crabapple.html" rel="nofollow">https://techwontsave.us/episode/174_why_ai_is_a_threat_to_artists_w_molly_crabapple.html</a>.
  50. Jiang, Harry H., Lauren Brown, Jessica Cheng, et al. “AI Art and Its Impact on Artists.” Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 363–74. AIES ’23. Association for Computing Machinery, 2023. <a href="https://doi.org/10.1145/3600211.3604681" rel="nofollow">https://doi.org/10.1145/3600211.3604681</a>.
  51. Frawley, Chris. “Unpacking SAG-AFTRA’s New AI Regulations: What Actors Should Know.” Backstage, January 18, 2024. <a href="https://www.backstage.com/magazine/article/sag-aftra-ai-deal-explained-76821/" rel="nofollow">https://www.backstage.com/magazine/article/sag-aftra-ai-deal-explained-76821/</a>.
  52. See Noble, Algorithms of Oppression, for a fuller discussion of how the U.S. (and global) online ecosystem has been reconfigured to fall firmly under the control of for-profit companies making billions, mostly through advertising revenue.
  53. Kelly, Jack. “Google’s AI Recommends Glue on Pizza: What Caused These Viral Blunders?” Forbes, May 31, 2024. <a href="https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-glue-to-pizza-viral-blunders/" rel="nofollow">https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-glue-to-pizza-viral-blunders/</a>.
  54. Some financial self-regulatory authorities have even added warnings about AI pump-and-dump schemes. Financial Industry Regulatory Authority. “Avoid Fraud: Artificial Intelligence (AI) and Investment Fraud.” January 25, 2024. <a href="https://www.finra.org/investors/insights/artificial-intelligence-and-investment-fraud" rel="nofollow">https://www.finra.org/investors/insights/artificial-intelligence-and-investment-fraud</a>.

We passed 1.5°C of human-caused warming this year (just not as the Paris agreement measures it)

Human-caused global warming has just nudged past 1.5°C, according to a new method we have developed. That’s almost 0.2°C higher than previously thought.

But this does not mean the goal of keeping warming below 1.5°C is dead, as the Paris agreement and the UN climate summits are based on a different methodology.

This additional warming stems from how we define “pre-industrial”: our method uses bubbles of air trapped in Antarctic ice to gather data reaching back well before the industrial era, whereas current methods exclude some of the early warming. Our approach also radically improves the precision of these numbers.

There are two steps to measuring human-caused warming. The first requires us to compare temperature measurements with their pre-industrial counterpart – we call this the pre-industrial baseline. The second step involves separating the human contribution from the part humans are not responsible for, such as volcanic eruptions, El Niño, or random weather events – we call this natural variability.
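
To make those two steps concrete, here is a minimal sketch in Python. Everything in it is a stand-in for illustration: the synthetic temperature record, the noise level, and the smooth trend fit used in place of the full model-based attribution that real studies perform.

```python
import numpy as np

# Synthetic annual global temperature record, 1850-2023 (°C):
# a gentle warming trend plus random year-to-year weather noise.
rng = np.random.default_rng(42)
years = np.arange(1850, 2024)
temps = 14.0 + 0.008 * (years - 1850) + 0.1 * rng.standard_normal(years.size)

# Step 1: express temperatures as anomalies against the
# 1850-1900 pre-industrial baseline.
baseline = temps[(years >= 1850) & (years <= 1900)].mean()
anomaly = temps - baseline

# Step 2: separate the human-caused part from natural variability.
# Real studies use climate models and regressions against El Niño
# indices and volcanic forcing; a smooth polynomial trend stands in
# here (years are centred to keep the fit well-conditioned).
t = years - 1950
human_caused = np.polyval(np.polyfit(t, anomaly, 3), t)

print(f"Illustrative human-caused warming in 2023: {human_caused[-1]:.2f} °C")
```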

The Intergovernmental Panel on Climate Change (IPCC) chose the period 1850-1900 as the pre-industrial baseline as that’s when we started meaningfully measuring the temperature around the world, even if the Industrial Revolution actually began earlier. Warming since this period is what negotiators would have had in mind when setting up the Paris agreement. Climate models and statistical analysis are then used to tease out the effects of volcanoes and short-term weather fluctuations in the data, leaving just the human-caused bit.

Using these methods, by 2023 there had been 1.31°C of human-caused warming since 1850-1900. However, there is considerable uncertainty in figures like this, and the reality could be somewhere between 1.1°C and 1.6°C. So although we are likely to be around 0.2°C below the 1.5°C limit, using these previous approaches we cannot be certain that we are not already past it.

Unfortunately, the 1850-1900 pre-industrial baseline probably has human-caused warming baked into it because the Industrial Revolution started significantly before then. As a result, the human-caused warming we are currently negotiating on is an underestimate.

A new approach

Fortunately, our new method means we can make significantly more accurate estimates. That’s because of a simple but previously overlooked relationship between the CO₂ concentration we measure in the atmosphere and the temperature change we see.

We treat this relationship as a straight line, meaning a certain amount of additional atmospheric carbon will always be associated with the same amount of warming. This is somewhat controversial, but allows us to do a number of very useful things.

First, it allows us to build from a pre-industrial baseline well before 1850. This is because, unlike global temperature measurements, we have ice-core CO₂ data stretching back thousands of years, well before the start of the Industrial Revolution. This data is gathered by drilling down through the Antarctic ice cap and harvesting the air trapped in the bubbles in the ice. The further you drill down, the older the air.

That data tells us that CO₂ concentrations in the atmosphere were pretty constant for two millennia at 280 parts per million, before they started rising from about 1700. We can then estimate the temperature change associated with that additional carbon, which tells us how much warming was baked into the 1850-1900 baseline currently used by the IPCC.
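
As a back-of-envelope version of that calculation, assuming the straight-line relationship holds, here is a sketch. The concentrations and the warming figure below are rounded, illustrative inputs, not the study’s exact values:

```python
# Straight-line assumption: warming = slope * (CO2 - 280 ppm).
PRE_1700_PPM = 280.0    # ice-core CO2 level, roughly constant for two millennia
BASELINE_PPM = 291.0    # assumed mean CO2 concentration over 1850-1900
PPM_2023 = 420.0        # approximate atmospheric CO2 in 2023
WARMING_2023 = 1.49     # assumed human-caused warming since pre-1700 (°C)

# Warming per ppm of extra CO2 implied by the straight line
slope = WARMING_2023 / (PPM_2023 - PRE_1700_PPM)   # ~0.011 °C per ppm

# Warming already present during the 1850-1900 "pre-industrial" window
baked_in = slope * (BASELINE_PPM - PRE_1700_PPM)
print(f"Warming hidden in the 1850-1900 baseline: {baked_in:.2f} °C")
# Prints ~0.12 °C with these rounded inputs; the fuller analysis
# described in the article finds the figure approaches 0.2 °C.
```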

Second, the CO₂-temperature straight-line relationship also allows us to separate human-caused warming from natural variability, because the warming trend we are after is so closely tied to the increases in CO₂ we measure.
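
Here is a sketch of how that separation falls out of the straight-line fit, again on synthetic data (the slope, noise level, and 280 ppm anchor are illustrative assumptions):

```python
import numpy as np

# Synthetic records: CO2 concentration (ppm) and temperature anomalies (°C)
rng = np.random.default_rng(0)
co2 = np.linspace(290, 420, 120)
temp = 0.0105 * (co2 - 280) + 0.12 * rng.standard_normal(co2.size)

# Fit the straight line between concentration and temperature
slope, intercept = np.polyfit(co2, temp, 1)

human_signal = slope * co2 + intercept   # the CO2-tracking, human-caused part
natural = temp - human_signal            # residuals: El Niño, volcanoes, weather

# Evaluating the line at 280 ppm anchors warming to the pre-1700 baseline
print(f"Human-caused warming at {co2[-1]:.0f} ppm: {slope * (co2[-1] - 280):.2f} °C")
print(f"Natural variability std: {natural.std():.2f} °C")
```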

A more accurate estimate

Using our new method we can estimate human-caused warming either from our pre-1700 baseline or from the IPCC’s 1850-1900 baseline. Using the 1850-1900 baseline, we estimate human-caused warming for 2023 at 1.31°C, agreeing with the IPCC-based best guess. However, our estimate is three times more precise. And although we experienced record warming in 2024, we can be sure human-caused warming has not yet passed 1.5°C when measured from 1850-1900.

When measured from the pre-1700 baseline, human-caused warming almost hit 1.5°C in 2023 and, as of October 2024, stands at 1.53°C (within a range of 0.11°C). This captures a fuller picture of the warming caused by centuries of human activity, as deforestation, farming and early industry all contributed to rising carbon dioxide levels. This result tells us there is nearly 0.2°C of warming baked into the 1850-1900 baseline from ignoring the effects of early emissions released before the temperature records began.

1.5, dead or alive?

As the Paris agreement is based on science that used 1850-1900 as a baseline, the additional early warming we flag may not in the end be counted towards the temperature goals. So it’s unfair to say the 1.5°C limit has been breached by our latest estimate. Yet even if we stick to the 1850-1900 pre-industrial baseline, 1.5°C of human-caused warming is less than a decade away at current warming rates. The 1.5°C Paris limit is certainly critically ill.

But perhaps this is not the right way to see 1.5’s role. The agreed aim is to hold the temperature increase “well below 2 degrees”, and the super tanker that is the global economy will need something strong to pivot from to change course. Keeping 1.5 in reach is currently that anchor point. Knowing precisely where we are in relation to this anchor will be critical. And maybe this is where our research will help most.

