Open Source on its own is no alternative to Big Tech - Bert Hubert's writings


Paris is replacing 60,000 parking spaces with trees


Watching the Generative AI Hype Bubble Deflate – Ash Center

An archival PDF of this essay can be found here.

Only a few short months ago, generative AI was sold to us as inevitable by AI company leaders, their partners, and venture capitalists. Certain media outlets promoted these claims, fueling online discourse about what each new beta release could accomplish with a few simple prompts. As AI became a viral sensation, every business tried to become an AI business. Some even added “AI” to their names to juice their stock prices,1 and companies that mentioned “AI” in their earnings calls saw similar increases.2

Investors and consultants urged businesses not to get left behind. Morgan Stanley positioned AI as key to a $6 trillion opportunity.3 McKinsey hailed generative AI as “the next productivity frontier” and estimated gains of $2.6 trillion to $4.4 trillion,4 comparable to the annual GDP of the United Kingdom or all the world’s agricultural production.5 6 Conveniently, McKinsey also offers consulting services to help businesses “create unimagined opportunities in a constantly changing world.”7 Readers of this piece can likely recall being exhorted by news media or their own industry leaders to “learn AI” while encountering targeted ads hawking AI “boot camps.”

While some have long been wise to the hype,8 9 10 11 global financial institutions and venture capitalists are now beginning to ask if generative AI is overhyped.12 In this essay, we argue that even as the generative AI hype bubble slowly deflates, its harmful effects will last: carbon can’t be put back in the ground, workers continue to face AI’s disciplining pressures, and the poisonous effect on our information commons will be hard to undo.

Historical Hype Cycles in the Digital Economy

Attempts to present AI as desirable, inevitable, and as a more stable concept than it actually is follow well-worn historical patterns.13 A key strategy for a technology to gain market share and buy-in is to present it as an inevitable and necessary part of future infrastructure, encouraging the development of new, anticipatory infrastructures around it. From the early history of automobiles and railroads to the rise of electricity and computers, this dynamic has played a significant role. All these technologies required major infrastructure investments — roads, tracks, electrical grids, and workflow changes — to become functional and dominant. None were inevitable, though they may appear so in retrospect.14 15 16 17

The well-known phrase “nobody ever got fired for buying IBM” is a good, if partial, historical analogue to the current feeding frenzy around AI. IBM, while expensive, was a recognized leader in automating workplaces, ostensibly to the advantage of those corporations. IBM famously re-engineered the environments where its systems were installed, ensuring that office infrastructures and workflows were optimally reconfigured to fit its computers, rather than the other way around. Similarly, AI corporations have repeatedly claimed that we are in a new age of not just adoption but of proactive adaptation to their technology. Ironically, in AI waves past, IBM itself over-promised and under-delivered; some described their “Watson AI” product as a “mismatch” for the health care context it was sold for, while others described it as “dangerous.”18 Time and again, AI has been crowned as an inevitable “advance” despite its many problems and shortcomings: built-in biases, inaccurate results, privacy and intellectual property violations, and voracious energy use.

Nevertheless, in the media and — early on at least — among investors and corporations seeking to profit, AI has been publicly presented as unstoppable.19 20 21 This rhetoric came from those eager to pave the way for a new set of heavily funded technologies; it was never a statement of fact about the technology’s robustness, utility, or even its likely future utility. Rather, it reflects a standard stage in the development of many technologies, where a technology’s manufacturers, boosters, and investors attempt to make it indispensable by integrating it, often prematurely, into existing infrastructures and workflows, counting on this entanglement to “save a spot” for the technology to be more fully integrated in the future. The more far-reaching this early integration, the more difficult it will be to disentangle or roll back the attendant changes — meaning that even broken or substandard technologies stand a better chance of becoming entrenched.22

In the case of AI, however, as with many other recent technology booms or boomlets (from blockchain to the metaverse to clunky VR goggles23 24), this stage was also accompanied by severe criticism of both the rhetorical positioning of the technology as indispensable and of the technology’s current and potential states. Historically, this form of critique is an important stage of technological development, offering consumers, users, and potential users a chance to alter or improve upon the technology by challenging designers’ assumptions before the “black box” of the technology is closed.25 It also offers a small and sometimes unlikely — but not impossible — window for partial or full rejection of the technology.

...even as the Generative AI hype bubble slowly deflates, its harmful effects will last: carbon can’t be put back in the ground, workers continue to need to fend off AI’s disciplining effects, and the poisonous effect on our information commons will be hard to undo.

David Gray Widder and Mar Hicks

Deflating the Generative AI Bubble

While talk of a bubble has simmered beneath the surface as the money faucet continues to flow,26 we observe a recent inflection point. Interlocutors are beginning to sound the alarm that AI is overvalued. The perception that AI is a bubble, rather than a gold rush, is making its way into wider discourse with increasing frequency and strength. The more industry bosses protest that it’s not a bubble,27 the more people have begun to look twice.

For instance, users and artists slammed Adobe for ambiguous statements about using customers’ creative work to train generative AI, forcing the company to later clarify that it would only do so in specific circumstances. At the same time, the explicit promise of not using customer data for AI training has started to become a selling point for others, with a rival positioning their product as “not a trick to access your media for AI training.”28 Another company boasted a “100% LLM [large-language model]-Free” product, spotlighting that it “never present[s] chatbot[s] that act human or imitate human experts.”29 Even major players like Amazon and Google have attempted to lower business expectations for generative AI, recognizing its expense, accuracy issues, and as yet uncertain value proposition.30 Nonetheless, they have done so in ways that attempt to preserve the hype surrounding AI, which will likely remain profitable for their cloud businesses.

It’s not just technology companies questioning something they initially framed as inevitable. Recently, venture capital firm Sequoia Capital said that “the AI bubble is reaching a tipping point”31 after failing to find a satisfactory answer to a question they posed last year: “Where is all the revenue?”32 Similarly, in Goldman Sachs’ recent report, “Gen AI: too much spend, too little benefit?”,33 their global head of equity research stated, “AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.” Still, the report tellingly notes that even if AI doesn’t “deliver on its promise,” it may still generate investor returns, as “bubbles take a long time to burst.” In short, financial experts are pointing out that capital expenditures on things like graphics cards or cloud compute have not been met by commensurate revenue, nor does there seem to be a clear pathway to remedy this. This shift is a recognizable stage in which a product and its promoters do not suffer swift devaluation but begin to lose their top spots on the NASDAQ and other major exchanges.

Why is this happening? Technically, large-language models (LLMs) continue to produce erroneous but confident text (“hallucinations”) because they are inherently probabilistic machines; no clear fixes exist because this behavior is a fundamental feature of how the technology works.34 In many cases, LLMs fail to automate the labor that CEOs confidently claimed they could, and instead often decrease employee productivity.35 Economically, interest rates have risen, so “easy money” is no longer available to fund boosters’ loftiest and horrifically expensive generative AI dreams.36 Meanwhile, federal regulators have intensified their scrutiny, even as they struggle to rein in social media platforms. FTC chair Lina Khan has said, “There is no AI exemption to the laws on the books,” encouraging regulators to apply standard regulatory tools to AI.37 Legally, after misappropriating or allegedly stealing much of their training data during early generative AI development, companies now face lawsuits and must pay for their inputs.38 Public discourse is catching up too. We were promised that AI would automate tedious tasks, freeing people for more fulfilling work. Increasingly, users recognize that these technologies are built to “do my art and writing so that I can do my laundry and dishes,” in the words of one user, rather than the reverse.39

Today’s hype will have lasting effects that constrain tomorrow’s possibilities.

David Gray Widder and Mar Hicks

Hype’s Harmful Effects Are Not Easily Reversed

While critics of any technology bubble may feel vindicated by seeing it pop — and when stock markets and the broader world catch up with their gimlet-eyed early critiques — those who have been questioning the AI hype also know that the deflation, or even popping, of the bubble does not undo the harm already caused. Hype has material and often harmful effects in the real world. The ephemerality of these technologies is grounded in real-world resources, bodies, and lives, reminiscent of the destructive industrial practices of past ages. Decades of regulation were required to roll back the environmental and public health harms of technologies we no longer use, from short-lived ones like radium to longer-lived ones like leaded gasoline.40 41 Even ephemeral phenomena can have long-lasting negative effects.

The hype around AI has already impacted climate goals. In the United States, plans to retire polluting coal power plants have slowed by 40%, with politicians and industry lobbyists citing the need to win the “AI war.”42 Microsoft, which planned to be carbon negative by 2030,43 walked back that goal after its 2023 emissions were 30% higher than in 2020.44 Brad Smith, its president, said that this “moonshot” goal was made before the “explosion in artificial intelligence,” and now “the moon is five times as far away,” with AI as the driving factor. After firing employees for raising concerns about generative AI’s environmental costs,45 46 Google has also seen its emissions increase and no longer claims to be carbon-neutral while pushing its net-zero emissions goal date further into the future.47 This carbon can’t be unburned, and the breathless discourse surrounding AI has helped ratchet up the existing climate emergency, providing justification for companies to renege on their already-imperiled environmental promises.48

The discourse surrounding AI will also have lasting effects on labor. Some workers will see the scope of their work reduced, while others will face wage stagnation or cuts owing to the threat, however empty, that they might be replaced with poor facsimiles of themselves. Creative industries are especially at risk: as illustrator Molly Crabapple states, while demand for high-end human-created art may remain, generative AI will harm many working illustrators, as editors opt for generative AI’s fast and low-cost illustrations over original creative output.49 Even as artists mobilize with technical and regulatory countermeasures,50 this burden distracts from their artistic pursuits. Unions such as SAG-AFTRA have won meager protections against AI,51 and while this hot-button issue perhaps raised the profile of their strike, it also distracted from other crucial contract negotiations. Even if generative AI doesn’t live up to the hype, its effect on how we value creative work may be hard to shake, leaving creative workers to reclaim every inch lost during the AI boom.

Lastly, generative AI will have long-term effects on our information commons. The ingestion of massive amounts of user-generated data, text, and artwork — often in ways that appear to violate copyright and fair use — has pushed us closer to the enclosure of the information commons by corporations.52 Google’s AI search snippets tool, for example, authoritatively suggested putting glue on pizza and recommended eating at least one small rock per day.53 While these may seem obvious enough to be harmless, most AI-generated misinformation is not so easy to detect. The increasing prevalence of AI-generated nonsense on the internet will make it harder to find trusted information, allow misinformation to propagate, and erode trust in sources we used to count on for reliable information.

A key question remains, and we may never have a satisfactory answer: what if the hype was always meant to fail? What if the point was to hype things up, get in, make a profit, and entrench infrastructure dependencies before critique, or reality, had a chance to catch up?54 Path dependency is well understood by historians of technology and those seeking to profit from AI. Today’s hype will have lasting effects that constrain tomorrow’s possibilities. Using the AI hype to shift more of our infrastructure to the cloud increases dependency on cloud companies, creating dependencies that will be hard to undo even as inflated promises for AI are dashed.

Inventors, technologists, corporations, boosters, and investors regularly seek to create inevitability, in part by encouraging a discourse of inexorable technological “progress” tied to their latest investment vehicle. By referencing past technologies, which now seem natural and necessary, they claim their current developments (tautologically) as inevitable. Yet the efforts to make AI indispensable on a large scale, culturally, technologically, and economically, have not lived up to their promises. In a sense, this is not surprising, as generative AI does not so much represent the wave of the future as it does the ebb and flow of waves past.

Acknowledgements

We are grateful to Ali Alkhatib, Sireesh Gururaja, and Alex Hanna for their insightful comments on earlier drafts.

Political Economy of AI Essay Collection

Earlier this year, the Allen Lab for Democracy Renovation hosted a convening on the Political Economy of AI. This collection of essays from leading scholars and experts raises critical questions surrounding power, governance, and democracy as the authors consider how technology can better serve the public interest.

See the collection

Citations
  1. Benzinga. “Stocks With ‘AI’ In the Name Are Soaring: Could It Be The Next Crypto-, Cannabis-Style Stock Naming Craze?” Markets Insider, January 31, 2023. https://markets.businessinsider.com/news/stocks/stocks-with-ai-in-the-name-are-soaring-could-it-be-the-next-crypto-cannabis-stock-naming-craze-1032055463.
  2. Wiltermuth, Joy. “AI Talk Is Surging during Company Earnings Calls — and so Are Those Companies’ Shares.” MarketWatch, March 16, 2024. https://www.marketwatch.com/story/ai-talk-is-surging-during-company-earnings-calls-and-so-are-those-companies-shares-f924d91a.
  3. Morgan Stanley. “The $6 Trillion Opportunity in AI.” April 18, 2023. https://www.morganstanley.com/ideas/generative-ai-growth-opportunity.
  4. Chui, Michael, Roger Roberts, Lareina Yee, et al. “The Economic Potential of Generative AI: The Next Productivity Frontier.” McKinsey & Company, June 14, 2023. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#key-insights.
  5. Statista, May 2024. https://www.statista.com/outlook/io/agriculture/worldwide.
  6. World Bank. “United Kingdom.” World Bank Open Data, 2023. https://data.worldbank.org.
  7. McKinsey & Company. “QuantumBlack – AI by McKinsey.” 2024. http://ceros.mckinsey.com/qb-overview-desktop-2-1.
  8. Bender, Emily M., and Alex Hanna. “Mystery AI Hype Theater 3000.” The Distributed AI Research Institute, 2024. https://www.dair-institute.org/maiht3k/.
  9. Marcus, Gary. “The Great AI Retrenchment Has Begun.” Marcus on AI, June 15, 2024. https://garymarcus.substack.com/p/the-great-ai-retrenchment-has-begun.
  10. Marx, Paris. “The ChatGPT Revolution Is Another Tech Fantasy.” Disconnect, July 27, 2023. https://disconnect.blog/the-chatgpt-revolution-is-another/.
  11. Hanna, Alex. “The Grimy Residue of the AI Bubble.” Mystery AI Hype Theater 3000: The Newsletter, July 25, 2024. https://buttondown.email/maiht3k/archive/the-grimy-residue-of-the-ai-bubble/.
  12. Nathan, Allison, Jenny Grimberg, and Ashley Rhodes. “Gen AI: Too Much Spend, Too Little Benefit?” Top of Mind. Goldman Sachs Global Macro Research, June 25, 2024. https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf.
  13. Suchman, Lucy. “The Uncontroversial ‘Thingness’ of AI.” Big Data & Society 10, no. 2 (July 2023). https://doi.org/10.1177/20539517231206794.
  14. Oldenziel, Ruth, M. Luísa Sousa, and Pieter van Wesemael. “Designing (Un)Sustainable Urban Mobility from Transnational Settings, 1850–Present.” In A U-Turn to the Future: Sustainable Urban Mobility since 1850, edited by Martin Emanuel, Frank Schipper, and Ruth Oldenziel. Berghahn Books, 2020.
  15. Nye, David E. Electrifying America: Social Meanings of a New Technology, 1880–1940. MIT Press, 2001.
  16. Hicks, Mar. Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. History of Computing. The MIT Press, 2018.
  17. Burrell, Jenna. “Artificial Intelligence and the Ever-Receding Horizon of the Future.” Tech Policy Press, June 6, 2023. https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future.
  18. Strickland, Eliza. “How IBM Watson Overpromised and Underdelivered on AI Health Care.” IEEE Spectrum, April 2, 2019. https://spectrum.ieee.org/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care.
  19. Taylor, Josh. “Rise of Artificial Intelligence Is Inevitable but Should Not Be Feared, ‘Father of AI’ Says.” The Guardian, May 7, 2023. https://www.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says.
  20. Shapiro, Daniel. “Artificial Intelligence: It’s Complicated And Unsettling, But Inevitable.” Forbes, September 10, 2019. https://www.forbes.com/sites/danielshapiro1/2019/09/10/artificial-intelligence-its-complicated-and-unsettling-but-inevitable/.
  21. Raasch, Jon Michael. “In Education, ‘AI Is Inevitable,’ and Students Who Don’t Use It Will ‘Be at a Disadvantage’: AI Founder.” FOX Business, June 22, 2023. https://www.foxbusiness.com/technology/education-ai-inevitable-students-use-disadvantage-ai-founder.
  22. Halcyon Lawrence explores this dynamic with speech recognition technologies that were unable to recognize the accents of the majority of global English speakers for much of their existence. Lawrence, Halcyon M. “Siri Disciplines.” In Your Computer Is on Fire, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip. The MIT Press, 2021.
  23. Axon, Samuel. “RIP (Again): Google Glass Will No Longer Be Sold.” Ars Technica, March 16, 2023. https://arstechnica.com/gadgets/2023/03/google-glass-is-about-to-be-discontinued-again/.
  24. Barr, Kyle. “Apple Vision Pro U.S. Sales Are All But Dead, Market Analysts Say.” Gizmodo, July 11, 2024. https://gizmodo.com/apple-vision-pro-u-s-sales-2000469302.
  25. Kline, Ronald, and Trevor Pinch. “Users as Agents of Technological Change: The Social Construction of the Automobile in the Rural United States.” Technology and Culture 37, no. 4 (1996): 763–95. https://doi.org/10.2307/3107097.
  26. Celarier, Michelle. “Money Is Pouring Into AI. Skeptics Say It’s a ‘Grift Shift.’” Institutional Investor, August 29, 2023. https://www.institutionalinvestor.com/article/2c4fad0w6irk838pca3gg/portfolio/money-is-pouring-into-ai-skeptics-say-its-a-grift-shift.
  27. Bratton, Laura, and Britney Nguyen. “The AI Craze Is No Dot-Com Bubble. Here’s Why.” Quartz, April 15, 2024. https://qz.com/ai-stocks-dot-com-bubble-nvidia-google-microsoft-amazon-1851407019.
  28. Gray, Jeremy. “Blackmagic Taunts Adobe Following Terms of Use Controversy.” PetaPixel, June 28, 2024. https://petapixel.com/2024/06/28/blackmagic-taunts-adobe-following-terms-of-use-controversy/.
  29. Inqwire. “Inqwire.” Accessed July 29, 2024. https://www.inqwire.io/.
  30. Gardizy, Anissa, and Aaron Holmes. “Amazon, Google Quietly Tamp Down Generative AI Expectations.” The Information, March 12, 2024.
  31. Cahn, David. “AI’s $600B Question.” Sequoia Capital, June 20, 2024. https://www.sequoiacap.com/article/ais-600b-question/.
  32. Cahn, David. “AI’s $200B Question.” Sequoia Capital, September 20, 2023. https://www.sequoiacap.com/article/follow-the-gpus-perspective/.
  33. Nathan, Allison, Jenny Grimberg, and Ashley Rhodes. “Gen AI: Too Much Spend, Too Little Benefit?” Top of Mind. Goldman Sachs Global Macro Research, June 25, 2024. https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf.
  34. Leffer, Lauren. “Hallucinations Are Baked into AI Chatbots.” Scientific American, April 5, 2024. https://www.scientificamerican.com/article/chatbot-hallucinations-inevitable/.
  35. Robinson, Bryan. “77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds.” Forbes, July 23, 2024. https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/.
  36. Karma, Rogé. “The Era of Easy Money Is Over. That’s a Good Thing.” The Atlantic, December 11, 2023. https://www.theatlantic.com/ideas/archive/2023/12/higher-interest-rates-fed-economy/676282/.
  37. Khan, Lina. “Statement of Chair Lina M. Khan Regarding the Joint Interagency Statement on AI.” Federal Trade Commission, April 25, 2023.
  38. O’Donnell, James. “Training AI Music Models Is about to Get Very Expensive.” MIT Technology Review, June 27, 2024. https://www.technologyreview.com/2024/06/27/1094379/ai-music-suno-udio-lawsuit-record-labels-youtube-licensing/.
  39. Maciejewska, Joanna (@AuthorJMac). “I Want AI to Do My Laundry and Dishes so That I Can Do Art and Writing…” X (formerly Twitter), March 29, 2024. https://x.com/AuthorJMac/status/1773679197631701238.
  40. Clark, Claudia. Radium Girls, Women and Industrial Health Reform: 1910–1935. Chapel Hill, NC: University of North Carolina Press, 1997.
  41. Nader, Ralph. Unsafe at Any Speed: The Designed-in Dangers of the American Automobile. Grossman, 1965.
  42. Chu, Amanda. “US Slows Plans to Retire Coal-Fired Plants as Power Demand from AI Surges.” Financial Times, May 30, 2024. https://web.archive.org/web/20240702094041/https://www.ft.com/content/ddaac44b-e245-4c8a-bf68-c773cc8f4e63.
  43. Smith, Brad. “Microsoft Will Be Carbon Negative by 2030.” The Official Microsoft Blog, January 16, 2020. https://blogs.microsoft.com/blog/2020/01/16/microsoft-will-be-carbon-negative-by-2030/.
  44. Rathi, Akshat, Dina Bass, and Mythili Rao. “A Big Bet on AI Is Putting Microsoft’s Climate Targets at Risk.” Bloomberg, May 23, 2024. https://www.bloomberg.com/news/articles/2024-05-23/a-big-bet-on-ai-is-putting-microsoft-s-climate-targets-at-risk.
  45. Simonite, Tom. “What Really Happened When Google Ousted Timnit Gebru.” Wired, June 8, 2021. https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/.
  46. Bender, Emily M., and Alex Hanna. “Mystery AI Hype Theater 3000.” The Distributed AI Research Institute, 2024. https://www.dair-institute.org/maiht3k/.
  47. Rathi, Akshat. “Google Is No Longer Claiming to Be Carbon Neutral.” Bloomberg, July 8, 2024. https://www.bloomberg.com/news/articles/2024-07-08/google-is-no-longer-claiming-to-be-carbon-neutral.
  48. Kneese, Tamara, and Meg Young. “Carbon Emissions in the Tailpipe of Generative AI.” Harvard Data Science Review, Special Issue 5 (June 11, 2024). https://doi.org/10.1162/99608f92.fbdf6128.
  49. Crabapple, Molly, and Paris Marx. “Why AI Is a Threat to Artists, with Molly Crabapple.” Tech Won’t Save Us, June 29, 2023. https://techwontsave.us/episode/174_why_ai_is_a_threat_to_artists_w_molly_crabapple.html.
  50. Jiang, Harry H., Lauren Brown, Jessica Cheng, et al. “AI Art and Its Impact on Artists.” In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 363–74. AIES ’23. Association for Computing Machinery, 2023. https://doi.org/10.1145/3600211.3604681.
  51. Frawley, Chris. “Unpacking SAG-AFTRA’s New AI Regulations: What Actors Should Know.” Backstage, January 18, 2024. https://www.backstage.com/magazine/article/sag-aftra-ai-deal-explained-76821/.
  52. See Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018, for a fuller discussion of how the U.S. (and global) online ecosystem has been reconfigured to fall firmly under the control of for-profit companies making billions, mostly through advertising revenue.
  53. Kelly, Jack. “Google’s AI Recommends Glue on Pizza: What Caused These Viral Blunders?” Forbes, May 31, 2024. https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-glue-to-pizza-viral-blunders/.
  54. Some financial self-regulatory authorities have even added warnings about AI pump-and-dump schemes. Financial Industry Regulatory Authority. “Avoid Fraud: Artificial Intelligence (AI) and Investment Fraud.” January 25, 2024. https://www.finra.org/investors/insights/artificial-intelligence-and-investment-fraud.

We passed 1.5°C of human-caused warming this year (just not as the Paris agreement measures it)

Human-caused global warming has just nudged past 1.5°C, according to a new method we have developed. That’s approaching 0.2°C higher than previously thought.

But this does not mean the goal of keeping warming below 1.5°C is dead, as the Paris agreement and the UN climate summits are based on different methodology.

This additional warming comes from how we define what counts as pre-industrial: our method uses bubbles of air buried in Antarctic ice to gather data reaching back well before the industrial era. Current methods exclude some of that early warming. We also radically improve how precisely we know these numbers.

There are two steps to measuring human-caused warming. The first requires us to compare temperature measurements with their pre-industrial counterpart – we call this the pre-industrial baseline. The second step involves separating the human contribution from the part humans are not responsible for, such as volcanic eruptions, El Niño, or random weather events – we call this natural variability.

The Intergovernmental Panel on Climate Change (IPCC) chose the period 1850-1900 as the pre-industrial baseline as that’s when we started meaningfully measuring the temperature around the world, even if the Industrial Revolution actually began earlier. Warming since this period is what negotiators would have had in mind when setting up the Paris agreement. Climate models and statistical analysis are then used to tease out the volcanoes and short-term weather fluctuations in the data, to leave just the human-caused bit.

Using these methods, by 2023 there had been 1.31°C of human-caused warming since 1850-1900. However, there is considerable uncertainty in figures like this, and the reality could be somewhere between 1.1°C and 1.6°C. So although we are likely to be around 0.2°C below the 1.5°C limit, using these previous approaches we cannot be certain that we are not already past it.

Unfortunately, the 1850-1900 pre-industrial baseline probably has human-caused warming baked into it because the Industrial Revolution started significantly before then. As a result, the human-caused warming we are currently negotiating on is an underestimate.

A new approach

Fortunately, our new method means we can make significantly more accurate estimates. That’s because of a simple but previously overlooked relationship between the CO₂ concentration we measure in the atmosphere and the temperature change we see.

We treat this relationship as a straight line, meaning a certain amount of additional atmospheric carbon will always be associated with the same amount of warming. This is somewhat controversial, but allows us to do a number of very useful things.

First, it allows us to build from a pre-industrial baseline well before 1850. This is because unlike global temperature measurements we have ice core CO₂ data stretching back thousands of years, well before the start of the Industrial Revolution. This data is gathered by drilling down through the Antarctic ice cap and harvesting the air trapped in the bubbles in the ice. The further you drill down, the older the air.

That data tells us that CO₂ concentrations in the atmosphere were pretty constant for two millennia at 280 parts per million, before they started rising from about 1700. We can then estimate the temperature change associated with that additional carbon, which tells us how much warming was baked into the 1850-1900 baseline currently used by the IPCC.

Second, the CO₂-temperature straight line relationship also allows us to separate the human-caused warming from the natural variability, because the warming trend we are after is so closely related to increases in CO₂ we measure.
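
To make the straight-line idea concrete, here is a minimal sketch in Python. The 280 ppm baseline and the warming figures come from this article; the CO₂ concentrations assumed for 2023 and for 1850-1900 are illustrative guesses, not the authors' fitted values.

```python
# Minimal sketch of the straight-line CO2-to-warming relationship described
# above. Values marked "assumed" are illustrative, not the authors' numbers.

PRE_1700_BASELINE_PPM = 280.0  # ice-core CO2, roughly constant for two millennia

def human_warming(co2_ppm: float, slope_c_per_ppm: float,
                  baseline_ppm: float = PRE_1700_BASELINE_PPM) -> float:
    """Straight line: warming is proportional to CO2 above the baseline."""
    return slope_c_per_ppm * (co2_ppm - baseline_ppm)

# Calibrate the slope so an assumed ~421 ppm in 2023 maps onto the article's
# ~1.49 C of human-caused warming relative to the pre-1700 baseline.
slope = 1.49 / (421.0 - PRE_1700_BASELINE_PPM)  # about 0.011 C per ppm

# Warming already "baked into" the 1850-1900 baseline, assuming ~295 ppm then:
print(f"baked in: {human_warming(295.0, slope):.2f} C")  # ~0.16 C

# Equivalently, the gap between the article's two 2023 estimates:
print(f"gap: {1.49 - 1.31:.2f} C")  # ~0.18 C, i.e. approaching 0.2 C
```

Under this linear view, human-caused warming can be read directly off measured CO₂, which is exactly what lets the method sidestep much of the year-to-year natural variability.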

A more accurate estimate

Using our new method we can estimate human-caused warming either from our pre-1700 baseline or from the IPCC’s 1850-1900 baseline. Using the 1850-1900 baseline we estimate human-caused warming for 2023 at 1.31°C, agreeing with the IPCC-based best guess. However, our estimate is three times better defined. Although we have experienced record warming in 2024, we can be sure human-caused warming has not yet passed 1.5°C if measured from 1850-1900.

When measured from the pre-1700 baseline, humans have caused warming that almost hit 1.5°C in 2023, and as of October 2024 is at 1.53°C (within a range of 0.11°C). This captures a fuller picture of the warming caused by centuries of human activity, as rising levels of deforestation, farming and early industries all contributed to increases in carbon dioxide levels. This result tells us that close to 0.2°C of warming is baked into the 1850-1900 baseline because it ignores the effects of early emissions released before the temperature records began.

1.5, dead or alive?

As the Paris agreement is based on science that used 1850-1900 as a baseline, the additional early warming we flag may not in the end be counted towards the temperature goals. So it’s unfair to say the 1.5°C limit has been breached by our latest estimate. Yet even if we stick to the 1850-1900 pre-industrial baseline, 1.5°C of human-caused warming is less than a decade away at current warming rates. The 1.5°C Paris limit is certainly critically ill.

But perhaps this is not the right way to see 1.5’s role. The agreed aim is to hold the temperature increase “well below 2 degrees”, and the super tanker that is the global economy will need something strong to pivot from to change course. Keeping 1.5 in reach is currently that anchor point. Knowing precisely where we are in relation to this anchor will be critical. And maybe this is where our research will help most.





These Rats Learned to Drive—and They Love It

This article is republished from The Conversation under a Creative Commons license.

We crafted our first rodent car from a plastic cereal container. After trial and error, my colleagues and I found that rats could learn to drive forward by grasping a small wire that acted like a gas pedal. Before long, they were steering with surprising precision to reach a Froot Loop treat.

As expected, rats housed in enriched environments—complete with toys, space, and companions—learned to drive faster than those in standard cages. This finding supported the idea that complex environments enhance neuroplasticity: the brain’s ability to change across the lifespan in response to environmental demands.


After we published our research, the story of driving rats went viral in the media. The project continues in my lab with new, improved rat-operated vehicles, or ROVs, designed by robotics professor John McManus and his students. These upgraded electrical ROVs—featuring ratproof wiring, indestructible tires, and ergonomic driving levers—are akin to a rodent version of Tesla’s Cybertruck.

As a neuroscientist who advocates for housing and testing laboratory animals in natural habitats, I’ve found it amusing to see how far we’ve strayed from my lab practices with this project. Rats typically prefer dirt, sticks, and rocks over plastic objects. Now, we had them driving cars.

But humans didn’t evolve to drive either. Although our ancient ancestors didn’t have cars, they had flexible brains that enabled them to acquire new skills—fire, language, stone tools, and agriculture. And some time after the invention of the wheel, humans made cars.

Although cars made for rats are far from anything they would encounter in the wild, we believed that driving represented an interesting way to study how rodents acquire new skills. Unexpectedly, we found that the rats had an intense motivation for their driving training, often jumping into the car and revving the “lever engine” before their vehicle hit the road. Why was that?

The New Destination of Joy

Concepts from introductory psychology textbooks took on a new, hands-on dimension in our rodent driving laboratory. Building on foundational learning approaches such as operant conditioning, which reinforces targeted behavior through strategic incentives, we trained the rats step-by-step in their driver’s ed programs.

Initially, they learned basic movements, such as climbing into the car and pressing a lever. But with practice, these simple actions evolved into more complex behaviors, such as steering the car toward a specific destination.

The rats also taught me something profound one morning during the pandemic.

It was the summer of 2020, a period marked by emotional isolation for almost everyone on the planet, even laboratory rats. When I walked into the lab, I noticed something unusual: The three driving-trained rats eagerly ran to the side of the cage, jumping up like my dog does when asked if he wants to take a walk.

Had the rats always done this and I just hadn’t noticed? Were they just eager for a Froot Loop, or anticipating the drive itself? Whatever the case, they appeared to be feeling something positive—perhaps excitement and anticipation.

Behaviors associated with positive experiences are associated with joy in humans, but what about rats? Was I seeing something akin to joy in a rat? Maybe so, considering that neuroscience research is increasingly suggesting that joy and positive emotions play a critical role in the health of both human and nonhuman animals.

With that, my team and I shifted focus from topics such as how chronic stress influences brains to how positive events—and anticipation for these events—shape neural functions.

Working with postdoctoral fellow Kitty Hartvigsen, I designed a new protocol that used waiting periods to ramp up anticipation before a positive event. Bringing Pavlovian conditioning into the mix, we required rats to wait 15 minutes after a Lego block was placed in their cage before they received a Froot Loop. They also had to wait in their transport cage for a few minutes before entering Rat Park, their play area. We also added challenges, such as making them shell sunflower seeds before eating.

This became our Wait for It research program. We dubbed this new line of study UPERs—unpredictable positive experience responses—where rats were trained to wait for rewards. In contrast, control rats received their rewards immediately. After about a month of training, we expose the rats to different tests to determine how waiting for positive experiences affects how they learn and behave. We’re currently peering into their brains to map the neural footprint of extended positive experiences.

Preliminary results suggest that rats required to wait for their rewards show signs of shifting from a pessimistic cognitive style to an optimistic one in a test designed to measure rodent optimism. They performed better on cognitive tasks and were bolder in their problem-solving strategies. We linked this program to our lab’s broader interest in behaviorceuticals, a term I coined to suggest that experiences can alter brain chemistry similarly to pharmaceuticals.

This research provides further support of how anticipation can reinforce behavior. Previous work with lab rats has shown that rats pressing a bar for cocaine—a stimulant that increases dopamine activation—already experience a surge of dopamine as they anticipate a dose of cocaine.

The Tale of Rat Tails

It wasn’t just the effects of anticipation on rat behavior that caught our attention. One day, a student noticed something strange: One of the rats in the group trained to expect positive experiences had its tail straight up with a crook at the end, resembling the handle of an old-fashioned umbrella.

I had never seen this in my decades of working with rats. Reviewing the video footage, we found that the rats trained to anticipate positive experiences were more likely to hold their tails high than untrained rats. But what, exactly, did this mean?

Curious, I posted a picture of the behavior on social media. Fellow neuroscientists identified this as a gentler form of what’s called Straub tail, typically seen in rats given the opioid morphine. This S-shaped curl is also linked to dopamine. When dopamine is blocked, the Straub tail behavior subsides.

Natural forms of opiates and dopamine—key players in brain pathways that diminish pain and enhance reward—seem to be telltale ingredients of the elevated tails in our anticipation training program. Observing tail posture in rats adds a new layer to our understanding of rat emotional expression, reminding us that emotions are expressed throughout the entire body.

While we can’t directly ask rats whether they like to drive, we devised a behavioral test to assess their motivation to drive. This time, instead of only giving rats the option of driving to the Froot Loop Tree, they could also make a shorter journey on foot—or paw, in this case.

Surprisingly, two of the three rats chose to take the less efficient path of turning away from the reward and running to the car to drive to their Froot Loop destination. This response suggests that the rats enjoy both the journey and the rewarding destination.

Rat Lessons on Enjoying the Journey

We’re not the only team investigating positive emotions in animals.

Neuroscientist Jaak Panksepp famously tickled rats, demonstrating their capacity for joy.

Research has also shown that desirable low-stress rat environments retune their brains’ reward circuits, such as the nucleus accumbens. When animals are housed in their favored environments, the area of the nucleus accumbens that responds to appetitive experiences expands. Alternatively, when rats are housed in stressful contexts, the fear-generating zones of their nucleus accumbens expand. It is as if the brain is a piano the environment can tune.

Neuroscientist Curt Richter also made the case for rats having hope. In a study that wouldn’t be permitted today, rats swam in glass cylinders filled with water, eventually drowning from exhaustion if they weren’t rescued. Lab rats frequently handled by humans swam for hours to days. Wild rats gave up after just a few minutes. If the wild rats were briefly rescued, however, their survival time extended dramatically, sometimes by days. It seemed that being rescued gave the rats hope and spurred them on.

The driving rats project has opened new and unexpected doors in my behavioral neuroscience research lab. While it’s vital to study negative emotions such as fear and stress, positive experiences also shape the brain in significant ways.

As animals—human or otherwise—navigate the unpredictability of life, anticipating positive experiences helps drive a persistence to keep searching for life’s rewards. In a world of immediate gratification, these rats offer insights into the neural principles guiding everyday behavior. Rather than pushing buttons for instant rewards, they remind us that planning, anticipating, and enjoying the ride may be key to a healthy brain. That’s a lesson my lab rats have taught me well.


Capturing carbon from the air just got easier - Berkeley News

A new type of porous material called a covalent organic framework quickly sucks up carbon dioxide from ambient air

October 23, 2024

Capturing and storing the carbon dioxide humans produce is key to lowering atmospheric greenhouse gases and slowing global warming, but today’s carbon capture technologies work well only for concentrated sources of carbon, such as power plant exhaust. The same methods cannot efficiently capture carbon dioxide from ambient air, where concentrations are hundreds of times lower than in flue gases.

Yet direct air capture, or DAC, is being counted on to reverse the rise of CO2 levels, which have reached 426 parts per million (ppm), 50% higher than levels before the Industrial Revolution. Without it, according to the Intergovernmental Panel on Climate Change, we won’t reach humanity’s goal of limiting warming to 1.5 °C (2.7 °F) above preexisting global averages.

A new type of absorbing material developed by chemists at the University of California, Berkeley, could help get the world to negative emissions. The porous material — a covalent organic framework (COF) — captures CO2 from ambient air without degradation by water or other contaminants, one of the limitations of existing DAC technologies.

“We took a powder of this material, put it in a tube, and we passed Berkeley air — just outdoor air — into the material to see how it would perform, and it was beautiful. It cleaned the air entirely of CO2. Everything,” said Omar Yaghi, the James and Neeltje Tretter Professor of Chemistry at UC Berkeley and senior author of a paper that will appear online Oct. 23 in the journal Nature.

“I am excited about it because there’s nothing like it out there in terms of performance. It breaks new ground in our efforts to address the climate problem,” he added.

According to Yaghi, the new material could be substituted easily into carbon capture systems already deployed or being piloted to remove CO2 from refinery emissions and capture atmospheric CO2 for storage underground.

UC Berkeley graduate student Zihui Zhou, the paper’s first author, said that a mere 200 grams of the material, a bit less than half a pound, can take up as much CO2 in a year — 20 kilograms (44 pounds) — as a tree.

“Flue gas capture is a way to slow down climate change because you are trying not to release CO2 to the air. Direct air capture is a method to take us back to like it was 100 or more years ago,” Zhou said. “Currently, the CO2 concentration in the atmosphere is more than 420 ppm, but that will increase to maybe 500 or 550 before we fully develop and employ flue gas capture. So if we want to decrease the concentration and go back to maybe 400 or 300 ppm, we have to use direct air capture.”

COF vs MOF

Yaghi is the inventor of COFs and MOFs (metal-organic frameworks), both of which are rigid crystalline structures with regularly spaced internal pores that provide a large surface area for gases to stick or adsorb. Some MOFs that he and his lab have developed can adsorb water from the air, even in arid conditions, and when heated, release the water for drinking. He has been working on MOFs to capture carbon since the 1990s, long before DAC was on most people’s radar screens, he said.

Two years ago, his lab created a very promising material, MOF-808, that adsorbs CO2, but the researchers found that after hundreds of cycles of adsorption and desorption, the MOFs broke down. These MOFs were decorated inside with amines (NH2 groups), which efficiently bind CO2 and are a common component of carbon capture materials. In fact, the dominant carbon capture method involves bubbling exhaust gases through liquid amines that capture the carbon dioxide. Yaghi noted, however, that the energy-intensive regeneration and volatility of liquid amines hinder their further industrialization.

Working with colleagues, Yaghi discovered why some MOFs degrade for DAC applications — they are unstable under basic, as opposed to acidic, conditions, and amines are bases. He and Zhou worked with colleagues in Germany and Chicago to design a stronger material, which they call COF-999. Whereas MOFs are held together by metal atoms, COFs are held together by covalent carbon-carbon and carbon-nitrogen double bonds, among the strongest chemical bonds in nature.

As with MOF-808, the pores of COF-999 are decorated inside with amines, allowing uptake of more CO2 molecules.

“Trapping CO2 from air is a very challenging problem,” Yaghi said. “It’s energetically demanding, you need a material that has high carbon dioxide capacity, that’s highly selective, that’s water stable, oxidatively stable, recyclable. It needs to have a low regeneration temperature and needs to be scalable. It’s a tall order for a material. And in general, what has been deployed as of today are amine solutions, which are energy intensive because they’re based on having amines in water, and water requires a lot of energy to heat up, or solid materials that ultimately degrade with time.”

Yaghi and his team have spent the last 20 years developing COFs that have a strong enough backbone to withstand contaminants, ranging from acids and bases to water, sulfur and nitrogen, that degrade other porous solid materials. The COF-999 is assembled from a backbone of olefin polymers with an amine group attached. Once the porous material has formed, it is flushed with more amines that attach to NH2 and form short amine polymers inside the pores. Each amine can capture about one CO2 molecule.

When 400 ppm CO2 air is pumped through the COF at room temperature (25 °C) and 50% humidity, it reaches half capacity in about 18 minutes and is filled in about two hours. However, this depends on the sample form and could be sped up to a fraction of a minute when optimized. Heating to a relatively low temperature — 60 °C, or 140 °F — releases the CO2, and the COF is ready to adsorb CO2 again. It can hold up to 2 millimoles of CO2 per gram, standing out from other solid sorbents.
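
Those figures also allow a rough sanity check of the earlier claim that 200 grams of material can take up about 20 kilograms of CO2 in a year. The back-of-the-envelope sketch below, in Python, assumes the sorbent fills to its full stated capacity on every cycle, which is an idealization:

```python
# Rough plausibility check on the article's numbers. Assumes COF-999 reaches
# its full stated capacity (2 mmol CO2 per gram) on every cycle.

CO2_MOLAR_MASS_G_PER_MOL = 44.01
CAPACITY_MMOL_PER_G = 2.0     # stated uptake of COF-999
SORBENT_G = 200.0             # amount cited in the article
TARGET_KG_PER_YEAR = 20.0     # annual uptake cited in the article

# CO2 captured in one full adsorption cycle:
per_cycle_g = SORBENT_G * (CAPACITY_MMOL_PER_G / 1000.0) * CO2_MOLAR_MASS_G_PER_MOL
print(f"per cycle: {per_cycle_g:.1f} g")  # about 17.6 g of CO2

# Cycles per year needed to reach the 20 kg figure:
cycles_per_year = TARGET_KG_PER_YEAR * 1000.0 / per_cycle_g
print(f"cycles: {cycles_per_year:.0f}/year, ~{cycles_per_year / 365:.1f}/day")
```

That works out to roughly three full cycles per day, consistent with the roughly two-hour fill time and low-temperature regeneration described above.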

Yaghi noted that not all the amines in the internal polyamine chains currently capture CO2, so it may be possible to enlarge the pores to bind more than twice as much.

“This COF has a strong chemically and thermally stable backbone, it requires less energy, and we have shown it can withstand 100 cycles with no loss of capacity. No other material has been shown to perform like that,” Yaghi said. “It’s basically the best material out there for direct air capture.”

Yaghi is optimistic that artificial intelligence can help speed up the design of even better COFs and MOFs for carbon capture or other purposes, specifically by identifying the chemical conditions required to synthesize their crystalline structures. He is scientific director of a research center at UC Berkeley, the Bakar Institute of Digital Materials for the Planet (BIDMaP), which employs AI to develop cost-efficient, easily deployable versions of MOFs and COFs to help limit and address the impacts of climate change.

“We’re very, very excited about blending AI with the chemistry that we’ve been doing,” he said.

The work was funded by King Abdulaziz City for Science and Technology in Saudi Arabia, Yaghi’s carbon capture startup, Atoco Inc., Fifth Generation’s Love, Tito’s, and BIDMaP. Yaghi’s collaborators include Joachim Sauer, a visiting scholar from Humboldt University in Berlin, Germany, and computational scientist Laura Gagliardi from the University of Chicago.
