
Inside Google’s 7-Year Mission to Give AI a Robot Body


It was early January 2016, and I had just joined Google X, Alphabet’s secret innovation lab. My job: help figure out what to do with the employees and technology left over from nine robot companies that Google had acquired. People were confused. Andy “the father of Android” Rubin, who had previously been in charge, had suddenly left. Larry Page and Sergey Brin kept trying to offer guidance and direction during occasional flybys in their “spare time.” Astro Teller, the head of Google X, had agreed a few months earlier to bring all the robot people into the lab, affectionately referred to as the moonshot factory.

I signed up because Astro had convinced me that Google X—or simply X, as we would come to call it—would be different from other corporate innovation labs. The founders were committed to thinking exceptionally big, and they had the so-called “patient capital” to make things happen. After a career of starting and selling several tech companies, this felt right to me. X seemed like the kind of thing that Google ought to be doing. I knew from firsthand experience how hard it was to build a company that, in Steve Jobs’ famous words, could put a dent in the universe, and I believed that Google was the right place to make certain big bets. AI-powered robots, the ones that will live and work alongside us one day, were one such audacious bet.

Eight and a half years later—and 18 months after Google decided to discontinue its largest bet in robotics and AI—it seems as if a new robotics startup pops up every week. I am more convinced than ever that the robots need to come. Yet I doubt that Silicon Valley, with its focus on “minimum viable products” and VCs’ general aversion to investing in hardware, will be patient enough to win the global race to give AI a robot body. And much of the money being invested is focused on the wrong things. Here is why.

The Meaning of “Moonshot”

Google X—the home of Everyday Robots, as our moonshot came to be known—was born in 2010 from a grand idea that Google could tackle some of the world’s hardest problems. X was deliberately located in its own building a few miles away from the main campus, to foster its own culture and allow people to think far outside the proverbial box. Much effort was put into encouraging X-ers to take big risks, to rapidly experiment, and even to celebrate failure as an indication that we had set the bar exceptionally high. When I arrived, the lab had already hatched Waymo, Google Glass, and other science-fiction-sounding projects like flying energy windmills and stratospheric balloons that would provide internet access to the underserved.

What set X projects apart from Silicon Valley startups is how big and long-term X-ers were encouraged to think. In fact, to be anointed a moonshot, X had a “formula”: The project needed to demonstrate, first, that it was addressing a problem that affects hundreds of millions, or even billions, of people. Second, there had to be a breakthrough technology that gave us line of sight to a new way to solve the problem. Finally, there needed to be a radical business or product solution that probably sounded like it was just on the right side of crazy.

The AI Body Problem

It’s hard to imagine a person better suited to running X than Astro Teller, whose chosen title was literally Captain of Moonshots. You would never see Astro in the Google X building, a giant, three-story converted department store, without his signature rollerblades. Top that with a ponytail, always a friendly smile, and, of course, the name Astro, and you might think you’d entered an episode of HBO’s Silicon Valley.

When Astro and I first sat down to discuss what we might do with the robot companies that Google had acquired, we agreed something should be done. But what? Most useful robots to date were large, dumb, and dangerous, confined to factories and warehouses where they often needed to be heavily supervised or put in cages to protect people from them. How were we going to build robots that would be helpful and safe in everyday settings? It would require a new approach. The huge problem we were addressing was a massively global human shift—aging populations, shrinking workforces, labor shortages. Our breakthrough technology was—we knew, even in 2016—going to be artificial intelligence. The radical solution: fully autonomous robots that would help us with an ever-growing list of tasks in our everyday lives.

We were, in other words, going to give AI a body in the physical world, and if there was one place where something of this scale could be concocted, I was convinced it would be X. It was going to take a long time, a lot of patience, a willingness to try crazy ideas and fail at many of them. It would require significant technical breakthroughs in AI and robot technology and very likely cost billions of dollars. (Yes, billions.) There was a deep conviction on the team that, if you looked just a bit beyond the horizon, a convergence of AI and robotics was inevitable. We felt that much of what had only existed in science fiction to date was about to become reality.

It’s Your Mother

Every week or so, I’d talk to my mother on the phone. Her opening question was always the same: “When are the robots coming?” She wouldn’t even say hello. She just wanted to know when one of our robots would come help her. I would respond, “It’ll be a while, Mom,” whereupon she’d say, “They better come soon!”

Living in Oslo, Norway, my mom had good public health care; caregivers showed up at her apartment three times daily to help with a range of tasks and chores, mostly related to her advanced Parkinson’s disease. While these caregivers enabled her to live alone in her own home, my mother hoped that robots could support her with the myriad of small things that had now become insurmountable and embarrassing barriers, or sometimes simply offer her an arm to lean against.

It’s Really Hard

“You do know that robotics is a systems problem, right?” Jeff asked me with a probing look. Every team seems to have a “Jeff”; Jeff Bingham was ours. He was a skinny, earnest guy with a PhD in bioengineering who grew up on a farm and had a reputation for being a knowledge hub with deep insights about … kinda everything. To this day, if you ask me about robots, one of the first things I’ll tell you is that, well, it’s a systems problem.

One of the important things Jeff was trying to reinforce was that a robot is a very complex system and only as good as its weakest link. If the vision subsystem has a hard time perceiving what’s in front of it in direct sunlight, then the robots may suddenly go blind and stop working if a ray of sun comes through a window. If the navigation subsystem doesn’t understand stairs, then the robot may tumble down them and hurt itself (and possibly innocent bystanders). And so on. Building a robot that can live and work alongside us is hard. Like, really hard.

For decades people have been trying to program various forms of robots to perform even simple tasks, like grasping a cup on a table or opening a door, and these programs have always ended up becoming extremely brittle, failing at the slightest change in conditions or variations in the environment. Why? Because of the lack of predictability in the real world (like that ray of sunlight). And we haven’t even gotten to the hard stuff yet, like moving through the messy and cluttered spaces where we live and work.

Once you start thinking carefully about all this, you realize that unless you lock everything down, really tight, with all objects being in fixed, predefined locations, and the lighting being just right and never changing, simply picking up, say, a green apple and placing it in a glass bowl on a kitchen table becomes an all but impossibly difficult problem to solve. This is why factory robots are in cages. Everything from the lighting to the placement of the things they work on can be predictable, and they don’t have to worry about bonking a person on the head.

How to Learn Learning Robots

But all you need, apparently, is 17 machine-learning people. Or so Larry Page told me—one of his classic, difficult-to-comprehend insights. I tried arguing that there was no way we could possibly build the hardware and software infrastructure for robots that would work alongside us with only a handful of ML researchers. He waved his hand at me dismissively. “All you need is 17.” I was confused. Why not 11? Or 23? I was missing something.

Boiling it down, there are two primary approaches to applying AI in robotics. The first is a hybrid approach. Different parts of the system are powered by AI and then stitched together with traditional programming. With this approach the vision subsystem may use AI to recognize and categorize the world it sees. Once it creates a list of the objects it sees, the robot program receives this list and acts on it using heuristics implemented in code. If the program is written to pick an apple off a table, the AI-powered vision system detects the apple, the program picks out an object of “type: apple” from the list, and the traditional robot control software reaches out and grasps it.
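The division of labor in the hybrid approach can be sketched in a few lines. This is an illustrative toy, not Everyday Robots code: the detector below hard-codes its output as a stand-in for the learned vision model, and the actuator functions are hypothetical placeholders for the traditional motion stack.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str       # category assigned by the learned vision model
    position: tuple  # (x, y, z) in the robot's base frame

def detect_objects(camera_image):
    # Stand-in for the AI vision subsystem: a real system would run a
    # learned model over the camera image. Here we hard-code its output.
    return [Detection("cup", (0.2, 0.1, 0.0)),
            Detection("apple", (0.4, -0.1, 0.0))]

def pick(label, camera_image, move_gripper_to, close_gripper):
    """Heuristic glue: find an object of the requested type in the AI's
    output, then hand off to the traditional control stack."""
    matches = [d for d in detect_objects(camera_image) if d.label == label]
    if not matches:
        return False          # brittle by design: no learned fallback
    move_gripper_to(matches[0].position)   # classical motion planning
    close_gripper()
    return True

# Exercise it with fake actuators that just record what they were told:
moves = []
ok = pick("apple", camera_image=None,
          move_gripper_to=moves.append, close_gripper=lambda: None)
print(ok, moves)   # True [(0.4, -0.1, 0.0)]
```

The brittleness the article describes lives in the glue: if the vision model mislabels the apple, or the scene changes mid-reach, the hand-written logic has no way to recover.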

The other approach, end-to-end learning, or e2e, attempts to learn entire tasks like “picking up an object,” or even more comprehensive efforts like “tidying up a table.” The learning happens by exposing the robots to large amounts of training data—in much the way a human might learn to perform a physical task. If you ask a young child to pick up a cup, they may, depending on how young they are, still need to learn what a cup is, that a cup might contain liquid, and then, when playing with the cup, repeatedly knock it over, or at least spill a lot of milk. But with demonstrations, imitating others, and lots of playful practice, they’ll learn to do it—and eventually not even have to think about the steps.
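In miniature, end-to-end learning is regression from observations to actions, with no task logic written down anywhere. The sketch below is illustrative only (the real systems learn deep networks from camera images, not a line from five numbers): it behavior-clones a two-parameter policy from demonstrations of a made-up task whose hidden rule the program never sees.

```python
import random
random.seed(0)

# Behavior cloning in one dimension: learn action = f(observation) purely
# from demonstration pairs. The hidden skill (a fixed linear rule) is never
# written into the program -- the policy only ever sees example data.
demos = [(obs, 2.0 * obs + 0.5) for obs in [0.1, 0.4, 0.7, 1.0, 1.3]]

w, b = 0.0, 0.0              # a two-parameter linear "policy network"
lr = 0.05                    # learning rate
for _ in range(2000):
    obs, act = random.choice(demos)
    err = (w * obs + b) - act          # prediction error on this demo
    w -= lr * err * obs                # gradient step on squared error
    b -= lr * err

print(round(w, 2), round(b, 2))        # converges toward 2.0 and 0.5
```

The point of the toy: nothing in the loop mentions apples, cups, or grasping. Scale the data and the model up, and the same recipe is what "learning an entire task" means.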

What I came to believe Larry was saying was that nothing really mattered unless we ultimately demonstrated that robots could learn to perform end-to-end tasks. Only then would we have a real shot at making robots reliably perform these tasks in the messy and unpredictable real world, qualifying us to be a moonshot. It wasn’t about the specific number 17, but about the fact that big breakthroughs require small teams, not armies of engineers. Obviously there is a lot more to a robot than its AI brain, so I did not discontinue our other engineering efforts—we still had to design and build a physical robot. It became clear, though, that demonstrating a successful e2e task would give us some faith that, in moonshot parlance, we could escape Earth's gravitational pull. In Larry’s world, everything else was essentially “implementation details.”

On the Arm-Farm

Peter Pastor is a German roboticist who received his PhD in robotics from the University of Southern California. On the rare occasion when he wasn’t at work, Peter was trying to keep up with his girlfriend on a kiteboard. In the lab, he spent a lot of his time wrangling 14 proprietary robot arms, later replaced with seven industrial Kuka robot arms in a configuration we dubbed “the arm-farm.”

These arms ran 24/7, repeatedly attempting to pick up objects, like sponges, Lego blocks, rubber ducklings, or plastic bananas, from a bin. At the start they would be programmed to move their claw-like gripper into the bin from a random position above, close the gripper, pull up, and see if they had caught anything. There was a camera above the bin that captured the contents, the movement of the arm, and its success or failure. This went on for months.

In the beginning, the robots had a 7 percent success rate. But each time a robot succeeded, it got positive reinforcement. (Basically meaning, for a robot, that so-called “weights” in the neural network used to determine various outcomes are adjusted to positively reinforce the desired behaviors, and negatively reinforce the undesired ones.) Eventually, these arms learned to successfully pick up objects more than 70 percent of the time. When Peter showed me a video one day of a robot arm not just reaching down to grasp a yellow Lego block but nudging other objects out of the way in order to get a clear shot at it, I knew we had reached a real turning point. The robot hadn’t been explicitly programmed, using traditional heuristics, to make that move. It had learned to do it.
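The reinforcement loop the arm-farm ran can be caricatured in a few lines: an agent repeatedly tries one of several grasp strategies, and each success nudges its learned value of that strategy upward. Everything here (the strategy names, the hidden success rates) is invented for illustration, and the real system adjusted neural-network weights rather than a lookup table.

```python
import random
random.seed(1)

# Caricature of the arm-farm loop: pick a grasp strategy, get a success or
# failure from the (hidden) world, and reinforce accordingly.
true_success = {"top-down": 0.1, "angled": 0.7, "side-on": 0.3}  # hidden
value = {a: 0.0 for a in true_success}   # the agent's learned estimates
counts = {a: 0 for a in true_success}

for _ in range(3000):
    if random.random() < 0.1:                    # explore 10% of the time
        action = random.choice(list(value))
    else:                                        # otherwise exploit
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < true_success[action] else 0.0
    counts[action] += 1
    # Incremental average: success pulls the estimate up, failure down --
    # the neural network's "weight adjustment," in miniature.
    value[action] += (reward - value[action]) / counts[action]

best = max(value, key=value.get)
print(best, round(value[best], 2))
```

No one tells the agent which strategy is good; thousands of trials do. The arm that nudged objects aside to get at the Lego block had discovered, through exactly this kind of feedback, that clearing a path raises the odds of reward.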


But still—seven robots working for months to learn how to pick up a rubber duckling? That wasn’t going to cut it. Even hundreds of robots practicing for years wouldn’t be enough to teach the robots to perform their first useful real-world tasks. So we built a cloud-based simulator and, in 2021, created more than 240 million robot instances in the sim.

Think of the simulator as a giant video game, with a model of real-world physics that was realistic enough to simulate the weight of an item or the friction of a surface. The many thousands of simulated robots would use their simulated camera input and their simulated bodies, modeled after the real robots, to perform their tasks, like picking up a cup from a table. Running at once, they would try and fail millions of times, collecting data to train the AI algorithms. Once the robots got reasonably good in simulation, the algorithms were transferred to physical robots to do final training in the real world so they could embody their new moves. I always thought of the simulation as robots dreaming all night and then waking up having learned something new.
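The sim-at-scale idea reduces to this: run huge numbers of randomized virtual trials per candidate behavior, keep the behavior that works across the randomization, then fine-tune on hardware. A deliberately tiny sketch, with made-up physics in which one friction parameter stands in for the whole world model:

```python
import random
random.seed(0)

# Tiny stand-in for the cloud simulator: for each candidate grip force,
# run 10,000 virtual trials with randomized friction and keep the force
# that succeeds most often. The slip threshold is invented; the real
# simulator modeled whole robot bodies and camera input.
def simulated_grasp(grip_force, friction):
    return grip_force * friction > 1.0   # object slips below the threshold

results = {}
for force in [2.0, 3.0, 4.0, 5.0]:
    trials = [simulated_grasp(force, random.uniform(0.2, 0.8))
              for _ in range(10_000)]
    results[force] = sum(trials) / len(trials)   # success rate in sim

best_force = max(results, key=results.get)
print(best_force, round(results[best_force], 2))
# The winning setting would then be fine-tuned on the physical robots.
```

Randomizing the physics across instances is what makes the result transfer: a behavior that only works at one exact friction value would be as brittle as the hand-programmed robots of old.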

It’s the Data, Stupid

The day we all woke up and discovered ChatGPT, it seemed like magic. An AI-powered system could suddenly write complete paragraphs, answer complicated questions, and engage in an ongoing dialog. At the same time, we also came to understand its fundamental limitation: It had taken enormous amounts of data to accomplish this.

Robots are already leveraging large language models to understand spoken language and vision models to understand what they see, and this makes for very nice YouTube demo videos. But teaching robots to autonomously live and work alongside us is a similarly huge data problem. In spite of simulations and other ways to create training data, it is highly unlikely that robots will “wake up” highly capable one day, with a foundation model that controls the whole system.

The jury is still out on how complex the tasks are that we can teach a robot to perform with AI alone. I have come to believe it will take many, many thousands, maybe even millions of robots doing stuff in the real world to collect enough data to train e2e models that make the robots do anything other than fairly narrow, well-defined tasks. Building robots that perform useful services—like cleaning up and wiping all the tables in a restaurant, or making the beds in a hotel—will require both AI and traditional programming for a long time to come. In other words, don’t expect robots to go running off outside our control, doing something they weren’t programmed to do, anytime soon.

But Should They Look Like Us?

Horses are very efficient at walking and running on four legs. Yet we designed cars to have wheels. Human brains are incredibly efficient biological computers. Yet chip-based computers don’t come close to performing like our brains. Why don’t cars have legs, and why weren’t computers modeled on our biology? The goal of building robots, I mean to say, shouldn’t just be mimicry.

This I learned one day at a meeting with a group of technical leaders at Everyday Robots. We were sitting around a conference table having an animated conversation about whether our robots should have legs or wheels. Such discussions tended to devolve more into religious debates than fact-based or scientific ones. Some people are very attached to the idea that robots should look like people. Their rationale is good. We have designed the places in which we live and work to accommodate us. And we have legs. So maybe robots should too.

After about 30 minutes, the most senior engineering manager in the room, Vincent Dureau, spoke up. He simply said, “I figure that if I can get there, the robots should be able to get there.” Vincent was seated in his wheelchair. The room went quiet. The debate was over.

The fact is, robot legs are mechanically and electronically very complex. They don’t move very fast. They’re prone to making the robot unstable. They’re also not very power-efficient compared to wheels. These days, when I see companies attempting to make humanoid robots—robots that try to closely mimic human form and function—I wonder if it is a failure of imagination. There are so many designs to explore that complement humans. Why torture ourselves reaching for mimicry? At Everyday Robots, we tried to make the morphology of the robot as simple as possible—because the sooner robots can perform real-world tasks, the faster we can gather valuable data. Vincent’s comment reminded us that we needed to focus on the hardest, most impactful problems first.

Desk Duty

I was at my desk when one of our one-armed robots with a head shaped like a rectangle with rounded corners rolled up, addressed me by name, and asked if it could tidy up. I said yes and stepped aside. A few minutes later it had picked up a couple of empty paper cups, a transparent iced tea cup from Starbucks, and a plastic Kind bar wrapper. It dropped these items into a trash tray attached to its base before turning toward me, giving me a nod, and heading over to the next desk.

This tidy-desk service represented an important milestone: It showed that we were making good progress on an unsolved part of the robotics puzzle. The robots were using AI to reliably see both people and objects! Benjie Holson, a software engineer and former puppeteer who led the team that created this service, was an advocate for the hybrid approach. He wasn’t against end-to-end learned tasks but simply had a let’s-try-to-make-them-do-something-useful-now attitude. If the ML researchers solved some e2e task better than his team could program it, they’d just pull the new algorithms into their quiver.

I’d gotten used to our robots rolling around, doing chores like tidying desks. Occasionally I would spot a visitor or an engineer who had just joined the team. They’d have a look of wonder and joy on their face as they watched the robots going about their business. Through their eyes I was reminded just how novel this was. As our head of design, Rhys Newman, would say when a robot rolled by one day (in his Welsh accent), “It’s become normal. That’s weird, isn’t it?”

Just Dance

Our advisers at Everyday Robots included a philosopher, an anthropologist, a former labor leader, a historian, and an economist. We vigorously debated economic, social, and philosophical questions like: If robots lived alongside us, what would the economic impact be? What about the long-term and near-term effects on labor? What does it mean to be human in an age of intelligent machines? How do we build these machines in ways that make us feel welcome and safe?

In 2019, after telling my team that we were looking for an artist in residence to do some creative, weird, and unexpected things with our robots, I met Catie Cuan. Catie was studying for her PhD in robotics and AI at Stanford. What caught my attention was that she had been a professional dancer, performing at places like the Metropolitan Opera Ballet in NYC.

You’ve probably seen YouTube videos of robots dancing—performances where the robot carries out a preprogrammed sequence of timed moves, synchronized to music. While fun to watch, these dances are not much different than what you’d experience on a ride at Disneyland. I asked Catie what it would be like if, instead, robots could improvise and engage with each other like people do. Or like flocks of birds, or schools of fish. To make this happen, she and a few other engineers developed an AI algorithm trained on the preferences of a choreographer. That being, of course, Catie.

Often during evenings and sometimes weekends, when the robots weren’t busy doing their daily chores, Catie and her impromptu team would gather a dozen or so robots in a large atrium in the middle of X. Flocks of robots began moving together, at times haltingly, yet always in interesting patterns, with what often felt like curiosity and sometimes even grace and beauty.

Tom Engbersen is a roboticist from the Netherlands who painted replicas of classic masterpieces in his spare time. He began a side project collaborating with Catie on an exploration of how dancing robots might respond to music or even play an instrument. At one point he had a novel idea: What if the robots became instruments themselves? This kicked off an exploration where each joint on the robot played a sound when it moved. When the base moved it played a bass sound; when a gripper opened and closed it made a bell sound. When we turned on music mode, the robots created unique orchestral scores every time they moved. Whether they were traveling down a hallway, sorting trash, cleaning tables, or “dancing” as a flock, the robots moved and sounded like a new type of approachable creature, unlike anything I had ever experienced.

This Is Only the Beginning

In late 2022, the end-to-end versus hybrid conversations were still going strong. Peter and his teammates, with our colleagues in Google Brain, had been working on applying reinforcement learning, imitation learning, and transformers—the architecture behind LLMs—to several robot tasks. They were making good progress on showing that robots could learn tasks in ways that made them general, robust, and resilient. Meanwhile, the applications team led by Benjie was working on taking AI models and using them with traditional programming to prototype and build robot services that could be deployed among people in real-world settings.

Meanwhile, Project Starling, as Catie’s multi-robot installation ended up being called, was changing how I felt about these machines. I noticed how people were drawn to the robots with wonder, joy, and curiosity. It helped me understand that how robots move among us, and what they sound like, will trigger deep human emotion; it will be a big factor in how, even if, we welcome them into our everyday lives.

We were, in other words, on the cusp of truly capitalizing on the biggest bet we had made: robots powered by AI. AI was giving them the ability to understand what they heard (spoken and written language) and translate it into actions, or understand what they saw (camera images) and translate that into scenes and objects that they could act on. And as Peter’s team had demonstrated, robots had learned to pick up objects. After more than seven years we were deploying fleets of robots across multiple Google buildings. A single type of robot was performing a range of services: autonomously wiping tables in cafeterias, inspecting conference rooms, sorting trash, and more.

Which was when, in January 2023, two months after OpenAI introduced ChatGPT, Google shut down Everyday Robots, citing overall cost concerns. The robots and a small number of people eventually landed at Google DeepMind to conduct research. In spite of the high cost and the long timeline, everyone involved was shocked.

A National Imperative

In 1970, for every person over 64 in the world, there were 10 people of working age. By 2050, there will likely be fewer than four. We’re running out of workers. Who will care for the elderly? Who will work in factories, hospitals, restaurants? Who will drive trucks and taxis? Countries like Japan, China, and South Korea understand the immediacy of this problem. There, robots are not optional. Those nations have made it a national imperative to invest in robotics technologies.

Giving AI a body in the real world is both an issue of national security and an enormous economic opportunity. If a technology company like Google decides it cannot invest in “moonshot” efforts like the AI-powered robots that will complement and supplement the workers of the future, then who will? Will Silicon Valley or other startup ecosystems step up, and if so, will there be access to patient, long-term capital? I have doubts. The reason we called Everyday Robots a moonshot is that building highly complex systems at this scale went way beyond what venture-capital-funded startups have historically had the patience for. While the US is ahead in AI, building the physical manifestation of it—robots—requires skills and infrastructure where other nations, most notably China, are already leading.

The robots did not show up in time to help my mother. She passed away in early 2021. Our frequent conversations toward the end of her life convinced me more than ever that a future version of what we started at Everyday Robots will be coming. In fact, it can’t come soon enough. So the question we are left to ponder becomes: How does this kind of change and future happen? I remain curious, and concerned.




Counterintuitive Secrets for Learning How to Live a Happy Life


Stephanie Harrison is an expert in the science of happiness and founded a company called The New Happy, where she teaches millions of people how to be happier. Through it, she hosts The New Happy podcast. She is also a Harvard Business Review and CNBC contributor, and her work has been featured in other publications such as Fast Company, Forbes, and Architectural Digest. She regularly speaks at Fortune 500 companies, advising on employee wellbeing and company culture.

Below, Stephanie shares five key insights from her new book, New Happy: Getting Happiness Right in a World That’s Got It Wrong. Listen to the audio version—read by Stephanie herself—in the Next Big Idea App.


1. Everything you know about happiness is a lie.

When I was in my early twenties, I had everything that I thought would make me happy. I had a prestigious job, lived in New York City, and had complete freedom. Yet, I was absolutely miserable. At first, I ignored my emotions. Then, over time, I started to experience more challenges: getting physically ill, struggling with my mental health, and feeling lonely. One day, I found myself lying on my bedroom floor sobbing hysterically, wondering why I was so desperately unhappy.

Then, I had a moment of clarity. What if there wasn’t something wrong with me? What if I had been lied to by the world around me? Perhaps everything I had been told about what I needed to do to be happy was wrong.

At that moment, I didn’t know exactly what the lies were, but now, after ten years of research, I do. I call it Old Happy: our society’s incorrect definition of happiness and the culture we’ve created around it. Old Happy begins with the messages we receive as children from our families, all the way through to the media we consume and the institutions that enforce it. It comes from three cultural forces of individualism, capitalism, and domination, which tell us that to be happy, we must perfect ourselves, do more and more, and do it all by ourselves.

The devastating truth is that pursuing these objectives won’t make you happy. In fact, both research and experience show that it will actually make you miserable.

2. A happier life starts with unwinding Old Happy.

Due to Old Happy, Americans are struggling with unprecedented levels of unhappiness, illness, burnout, and loneliness, with no idea what’s wrong or what they need to do to feel better again. The evidence I’ve amassed about the harms of Old Happy is astounding. To live truly happy lives, we start by letting go of our Old Happy beliefs and adopting new ones through three key shifts:

  • Old Happy taught you that you’re not good enough and that there’s something wrong with you. Instead, you need to start seeing that you are worthy exactly as you are.
  • Old Happy taught you that, to prove how good you are, you must achieve a certain set of external goals and succeed. Instead, you need to focus on expressing yourself and growing as a person in whatever way feels authentic to you. You are not defined by your successes or failures.
  • Old Happy taught you that you have to do everything by yourself. Instead, you need to see that you are connected to others and that no one does anything alone. We are social creatures who are wired to need support. You are inextricably connected to others.

The best way to do this is by naming Old Happy when you see it pop up in your life. When you feel the pressure to overwork, say to yourself, “That’s Old Happy, not me.” When you judge your appearance, remind yourself, “I’m comparing myself to Old Happy’s made-up standards.” When you feel like you can’t ask for help, tell yourself that no one ever does anything alone, and it’s perfectly human to need support.

3. The real secret to happiness is counterintuitive.

Once we’ve named Old Happy and started unwinding it from our lives, we can discover the real secret to happiness. If you want to be happy, you need to help other people be happy. This is the proven path to happiness, supported by my research across multiple fields.

Everyone wants to live a happy life. The way to experience that is through finding ways to be of service to one another. Helping others is scientifically proven to benefit our wellbeing; it connects us to one another and helps us find a greater purpose in life. It doesn’t just improve your mental health but your physical health, too. Just like we have a need for food and shelter, we also have a profound need to go beyond ourselves and help others.

“If you want to be happy, you need to help other people be happy.”

Many people have a narrow definition of helping: we think of it as going out and volunteering. While that’s a wonderful way to help, we need a more expansive understanding. You help by listening to your loved ones, holding the door for someone, collaborating at work, sharing your ideas and unique perspective, and encouraging others to be their best. Every day, there are countless ways to help, meaning there are countless opportunities to experience happiness.

4. You possess unique gifts that need to be shared.

I argue that the best way to help others comes from sharing your unique gifts with those around you—whether in your family, communities, at work, or for the broader world.

There are three types of gifts that all human beings possess: humanity, wisdom, and talent.

  • Your humanity is who you are as a person. It’s your character, your best qualities, your good and loving nature. When you call a friend to listen to them talk about a challenge, take time to play with your kids after work, or smile at a stranger on the street, you are using your humanity gifts.
  • Your wisdom is what you have learned. Each of us possesses a completely unique life packed with experiences that teach us important and meaningful lessons. That wisdom can be used to help people in countless ways—from helping others through hard times to preventing them from ever happening.
  • Your talent is what you can do. Talents are cultivated through time, energy, and effort. Every single one of us has the power to either develop new talents or deepen existing ones, using them to inspire others and make powerful contributions.

Your gifts are what make you you. When you use them in service of others, you’ll experience profound joy, purpose, and contentment. That’s what New Happy is all about: being yourself and giving of yourself.

5. Your happiness has the power to change the world.

When we live by Old Happy, we are not only making ourselves miserable; we are also helping to create a world that makes everyone else unhappy. It leads only to competition, judgment, disempowerment, burnout, and isolation. No one wins when Old Happy is our dominant understanding of happiness.

But when you adopt New Happy, all of that changes. Through your daily actions, you’re now contributing to making the world a better place. By helping others experience happiness and by devoting your incredible gifts to the problems we face, you are slowly but surely transforming the world into a place where more and more people get to be happy. Isn’t that what we all long for? A better, more just, more compassionate world?

I often hear from people in my community that they feel so helpless about the state of the world. But you can start making it better right now simply by changing your definition of happiness and living in alignment with it. Working for the greater good facilitates your highest good.

To listen to the audio version read by author Stephanie Harrison, download the Next Big Idea App.

Read the whole story
strugk
6 days ago
reply
Cambridge, London, Warsaw, Gdynia
Share this story
Delete

Nationalise Us


Swedish startup bets big on zinc-ion batteries with world’s first megaplant


Swedish startup Enerpoly has opened the world’s first zinc-ion battery megafactory. Its vision is to scale a better alternative to lithium-ion for storing renewable energy over longer periods of time.

The Enerpoly Production Innovation Center (EPIC) facility is located north of Stockholm. Commissioning has already begun, and the plant is expected to make its first zinc-ion batteries next year. The company aims to reach a maximum production capacity of 100 MWh by 2026 — enough energy to power around 20,000 homes.
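The homes figure is easy to sanity-check. A minimal sketch, assuming roughly 5 kWh of delivered energy per home — about half a day of typical European household consumption, which is our assumption, not a figure from Enerpoly:

```python
# Back-of-the-envelope check of the "100 MWh ~ 20,000 homes" figure.
# ENERGY_PER_HOME_KWH is an assumed allocation, not an Enerpoly spec.

CAPACITY_MWH = 100        # stated production target by 2026
ENERGY_PER_HOME_KWH = 5   # assumed delivered energy per home

total_kwh = CAPACITY_MWH * 1_000          # MWh -> kWh
homes = total_kwh / ENERGY_PER_HOME_KWH   # homes served at that allocation

print(f"{homes:,.0f} homes")  # 20,000 homes
```

Under that assumption the arithmetic lands exactly on the article's figure, which suggests the "20,000 homes" claim is a rough half-day-of-consumption framing rather than continuous supply.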

In 2018, Dr. Mylad Chamoun made a breakthrough in zinc-ion battery chemistry while pursuing his PhD at Stockholm University. Later that year, he teamed up with his former colleague Dr. Samir Nameer and the duo founded Enerpoly. The partners saw a gaping hole in the market where lithium-ion wasn’t competitive — storage durations of 2 to 10 hours. They believed zinc-ion batteries could fill the gap.

Making zinc-ion batteries work

Using zinc in batteries isn’t anything new. The AA batteries that power your most precious (read, junk) toys and gadgets are made from zinc and manganese oxide. This chemistry has made companies like Energizer and Duracell a tonne of money.

However, zinc-ion batteries have historically, for lack of a better word, sucked at recharging. This is because zinc-ion chemistries are plagued by dendrites — crystals that cause short circuits. They also lose capacity fast.


“Enerpoly has innovated across the entire zinc-ion battery system — including anode, cathode, electrolyte, and separator design — to solve these inherent problems,” the company’s CEO, MIT-educated aerospace engineer Eloisa da Castro, told TNW.

Enerpoly uses zinc metal for the battery’s anode, manganese dioxide for the cathode, and a water-based electrolyte to carry charged particles between the two sides.

Unlike lithium, zinc is globally abundant. Moreover, Sweden is home to the largest zinc reserves in Europe — about 2% of the world’s total. Enerpoly hopes to establish a completely European supply chain for its batteries and make the continent a “zinc-ion powerhouse.”

Zinc-ion for energy storage

Unlike lithium-ion battery developers, Enerpoly is targeting the energy storage market – not EVs and smartphones. Use cases include renewable energy storage, shifting energy loads on the grid, and increasing power resiliency — all within that 2–10 hour storage window.

The batteries are modular — multiple packs can be placed in parallel to make larger systems. The company claims the packs are non-toxic, non-flammable, and non-explosive.
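Sizing a system from modular packs is straightforward: wire enough packs in parallel to meet the target capacity. A minimal sketch — the 50 kWh pack size is a hypothetical figure for illustration, since the article gives no pack specifications:

```python
# Sizing a storage system from modular packs placed in parallel.
# PACK_KWH is a hypothetical pack capacity, not a published Enerpoly spec.

import math

PACK_KWH = 50          # assumed capacity of a single pack
required_kwh = 1_200   # target system size

n_packs = math.ceil(required_kwh / PACK_KWH)  # round up to whole packs
system_kwh = n_packs * PACK_KWH               # installed capacity

print(n_packs, system_kwh)  # 24 1200
```

Parallel packs add capacity while keeping the system voltage fixed, which is why modular designs like this scale cleanly from kilowatt-hour to megawatt-hour installations.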

Because the materials it uses are far more abundant, Enerpoly believes it can be cost-competitive with the myriad other short-to-mid-term energy storage technologies under development. These include lithium-ion batteries, thermal heat storage devices, liquid air batteries, iron flow batteries, gravity batteries, and even this CO2 dome.

And investors seem to agree. To date, the company has raised just shy of €15mn. Over €8mn of that came from the Swedish Energy Agency to construct the EPIC factory.

CEO Da Castro told TNW the company also plans to close its Series A this year as it scales toward the 2026 target of 100 MWh. In July, Enerpoly acquired state-of-the-art dry electrode manufacturing equipment from the bankrupt startup Nilar, which it will use in its new plant. Beyond 2026, the startup is eyeing its first gigafactory.


The Uruguayan company teaching people how to turn regular cars into EVs


In 2010, Uruguayan president-elect José Mujica made headlines for the bright blue mini-truck he rode to his inauguration ceremony.

The vehicle, which looked like any ordinary pickup truck, was used to convey a message: Uruguay was serious about its quest to become more environmentally friendly. The gas-powered four-wheeler had been transformed into an electric vehicle by Organización Autolibre, a local retrofitting company.

Viral press coverage of the ceremony put the company in the spotlight, sparking interest from EV enthusiasts inside and outside Uruguay who wanted to convert their gas-guzzling vehicles into economical EVs. 

“This news coverage in many media outlets across Latin America gave a lot of visibility to this technology, and to this day we tour the region every year across Peru, Mexico, Argentina,” Gabriel González Barrios, founder and CEO of Organización Autolibre, told Rest of World. “The same distributors of Autolibre systems permanently invite us to train the necessary technicians to generate the local ecosystem for the local development of this industry.”

Over the years, González Barrios and his team at Organización Autolibre have helped convert thousands of traditional vehicles into e-cars across 14 Latin American countries. The company trains individuals and mechanics through online courses, and supervises conversions for corporate fleets. So far, at least 40 companies have used Organización Autolibre’s services, González Barrios said. While some countries have flagged concerns about the safety of retrofitting vehicles, González Barrios said his company is leading efforts to make it a safer and standardized practice across Latin America.

“We want to show it’s an industrialized process,” Andrés García, the owner of a retrofitting shop in Bogotá, Colombia, which works with Autolibre, told Rest of World. “This is not for hobbyists or people who are inexperienced.”

González Barrios had the idea for the company in 2006 after watching the Al Gore-produced climate change documentary An Inconvenient Truth. A distributor of chemical products for gas-fueled vehicles at the time, he was inspired to address environmental concerns from his corner of the world.

“We decided to change the combustion engines of our own vehicles into zero-emission electric ones,” said González Barrios. The experiment was successful and affordable, and led him to found Organización Autolibre.

González Barrios initially used some American EV kits to retrofit vehicles, but when those became too costly, Autolibre partnered with Zhuhai Enpower Electric, a Chinese electric power system company.

Over the last few years, as the popularity of EVs has grown, so has interest in retrofitting regular vehicles, Bruno González, head of sales at Autolibre, told Rest of World. In 2011, the company retrofitted a fleet of delivery vans for Bimbo, the largest bread-making company in the world. Bimbo did not respond to questions from Rest of World.

In its 2020 report about the practice, the Latin American Association of Sustainable Mobility revealed that at least 145 retrofitted vehicles had been officially registered.

The Latin American Retrofit Association, co-founded by González Barrios, now has more than 30 members across the region. All are either distributors of EV retrofit kits or have workshops specializing in the process.

Retrofitting electric vehicles has been practiced worldwide for over 30 years, with countries like Japan and Australia establishing national guidelines for the process. A report from the Latin American Association of Sustainable Mobility lists 21 companies that currently sell EV retrofit kits for different vehicles across the world.

The biggest incentive to retrofit a vehicle is its affordability, said González Barrios. Most new EVs available in Latin America remain out of reach for regular car users. One of the most popular models, the electric Renault Kwid, costs around $18,100. Converting an existing gas or diesel-engine car into an electric vehicle using Autolibre’s process starts at $6,000.

Since the practice is largely a DIY process, there are no official statistics on the retrofitting industry in Latin America. Many retrofitting jobs are done “by tinkerers who seek to extend the life of their petrol cars since they can’t afford a new electric one,” Adolfo Rojas, president of the Association of Entrepreneurs to Promote Electric Vehicles in Peru, told Rest of World.

The retrofitting process requires skilled EV technicians to remove the engine, gas tank, exhaust, and other components within a regular car, and fit the electric motor, batteries, on-board charger, and computer into the empty space. Weight has to be carefully distributed so the car doesn’t tilt to one side. The original electrical components — such as airbags and sensors — must function properly, and the battery shouldn’t overheat. Autolibre Academy, the company’s educational branch, offers online courses on these basic skills to any EV enthusiast interested in retrofitting, González said.

But Rojas said there are risks associated with the retrofitting process.

Retrofitting kits, many of which are available on online marketplaces like Alibaba or MercadoLibre, often don’t guarantee a “minimum level of safety and quality for the retrofit unit,” Rojas said.

Once they’ve been modified, retrofitted vehicles must get government permits that allow them to be on the road in specific countries, according to retrofitting experts.

In 2021, the Chilean transport ministry passed legislation banning the retrofitting of all used passenger vehicles. “Retrofits were being done, but keeping the car’s safety level was being overlooked,” Rodrigo Salcedo, president of Chile’s Electric Vehicle Association, told Rest of World. A safety compliance regulation is being prepared by the transport and energy ministries.

In Colombia, where retrofitted vehicles face no legal impediment, some are arguing for tighter controls.


García, from the car shop in Bogotá, said he is working with fellow retrofitting experts and enthusiasts to lobby for specific regulation, including meeting with the Colombian transport department and SENA, the country’s professional and technical training service. He said his company sells retrofit kits exclusively to certified technicians.

Jairo Novoa, one of García’s customers, retrofitted a 1981 BMW. He told Rest of World the process made sense for an old car like his because spare or repair parts are expensive and hard to find.

Although most of Colombia’s more than 11,000 electric vehicles are brand-new, retrofitted ones “do not need to envy” them, said Novoa. Except maybe “really expensive ones like Tesla.”


Who killed the world?


This is a prototypical world in a 1950s sci-fi film.

It takes place in a world similar to one in which the viewer lives.

But there’s an existential threat looming in the background. It’s mysterious, scary – and a bit exciting.

In the narrative, the protagonists explore this mysterious phenomenon.

They use science and technology to learn more about it.

And even though the story presents the possibility of failure, the protagonists figure it out.

It feels like the triumph of humanity.

I analyzed the top 200 sci-fi films and TV shows of every decade from the 1950s to the present day.¹ What I found was that sci-fi narratives from yesteryear were quite different from today’s stories.

¹ Based on votes the film or TV show received from IMDb users. More on methodology at the end of the story.
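The selection step — top 200 titles per decade, ranked by vote count — can be sketched in a few lines. The field names and the sample records below are hypothetical stand-ins for the real IMDb dataset:

```python
# Minimal sketch of the per-decade selection described in the footnote:
# group titles by decade, then keep the most-voted TOP_N in each group.
# Records here are illustrative placeholders, not the actual dataset.

from collections import defaultdict

titles = [
    {"title": "Fire Maidens of Outer Space", "year": 1956, "votes": 3_000},
    {"title": "Her", "year": 2013, "votes": 600_000},
    {"title": "Mad Max: Fury Road", "year": 2015, "votes": 1_000_000},
]

TOP_N = 200

# Bucket every title into its decade (1956 -> 1950, 2013 -> 2010, ...).
by_decade = defaultdict(list)
for t in titles:
    by_decade[t["year"] // 10 * 10].append(t)

# Within each decade, rank by vote count and truncate to the top N.
top_per_decade = {
    decade: sorted(group, key=lambda t: t["votes"], reverse=True)[:TOP_N]
    for decade, group in by_decade.items()
}

print(sorted(top_per_decade))  # [1950, 2010]
```

Each decade's list would then be hand-coded for tropes (setting, threat, tone) to produce the trends discussed below.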

In the 1950s, only a few sci-fi films and shows took place in the future, like Fire Maidens of Outer Space (1956), a film about astronauts landing on one of Jupiter’s moons. For the most part, these stories were set in the audience’s present day – so, the 1950s.


In these 1950s stories, the world is often upended by an existential threat.

But in the majority of films, the protagonists figure it out – and leave the world better than it was at the beginning of the story.

Sci-fi is an amazing genre.

It helps us explore our feelings about the unknown, the future, and the possible. It lets us imagine “what if” scenarios, and then build out rich worlds that our minds can occupy. It depicts dystopias we should fend off and utopias we should seek – and it teases us with the scintillating possibility that humans may actually be able to build the world we want.

But over the last few generations, it’s been harder for us to imagine this better world – and our sci-fi reflects that.

This is a prototypical sci-fi setting in more recent years.

We’re in the near future – often a world that looks like ours, but with hints that something has already gone terribly wrong.

Today’s sci-fi is more likely to depict a world that is worse than our reality.

It’s maybe even a dystopian or post-apocalyptic world.

This world is almost always marked by economic inequality, human suffering, and sometimes even a militarized, authoritarian society.

In this world, the protagonists face an existential threat.

And to defeat the threat, we must face societal conflicts that feel insurmountable – and we must face conflicts within ourselves that make us question who we are and what we’re doing.

Ultimately, the story is likely to be a commentary on today’s social issues. It’s a warning of what is to come – or a reflection of a current reality that we’ve tried hard to ignore.

The changes to sci-fi stories didn’t happen overnight. Sci-fi slowly evolved over the last few generations.

There’s been a steady increase in sci-fi stories that take place in the future – and it’s usually the near future, like the 2013 film Her – a world where a man falls in love with an artificial intelligence.

Even plots that take place in the present could be interpreted as the near-future.

The stakes are still the same as before; these sci-fi stories still present existential threats.

But we’re now more likely to face these existential threats in a dystopian or post-apocalyptic world, like Mad Max: Fury Road (2015). In the film, the world is a desert wasteland ruled by a warlord who enslaves several women to produce his offspring. When the women escape, in hopes of finding a preserved paradise, they leave behind a message:

“Who killed the world?”

This dystopian society is more likely to be marked by inequality – gaps in opportunity, wealth, and basic rights.

This often leads to a world marked by great amounts of suffering.

And increasingly, sci-fi stories depict militarized societies – although we might be seeing that trend turn around this decade.

There’s almost always a “bad guy” – a human antagonist who tries to kill the world or at least gets in the way of saving the world.

But these days, it’s much more likely that protagonists also have to overcome societal forces – political movements, systemic inequality, rampant capitalism.

These are basically things that seem too big to fix.

It’s also far more likely that the narrative explores inner conflicts – moral dilemmas, identity crises, and wrestling with our understanding of what it means to be human.

We don’t just face outside threats; we also face threats within ourselves.

Ultimately, today’s sci-fi stories are far more likely to be a commentary on current social issues. These might be critiques of political ideologies, runaway capitalism, irresponsible innovation, human apathy, or eroding mental health.

But even though the narrative arc starts us off in a terrible place, the protagonists make the world better over the course of the story. Jurassic Park author Michael Crichton argued that this is necessary: “Futuristic science fiction tends to be pessimistic. If you imagine a future that’s wonderful, you don’t have a story.”

It’s often framed as the triumph of humanity.

But it certainly doesn’t feel triumphant. It often feels pessimistic – and it’s something that critics have noticed.

I think it’s because today’s sci-fi is set in a world where humans have already screwed up, and the narrative arc is basically the protagonists digging out of that hole.

Line chart of a narrative arc showing stories start at the bottom of the arc.

But as we walk out of the theater, we’re thrust back into reality – a world where we’re still facing existential threats like climate change, authoritarianism, devious technology, and war. And if these sci-fi stories are prescient, it means that we will soon experience those existential threats; the world will soon turn into a dystopian hellscape; and only after that do we figure it out.

In other words, the worst is still ahead of us.

Line chart of a narrative arc showing the bottom of the arc is ahead of us.

News stories constantly remind us that we’re headed for trouble. Children are being murdered, authoritarianism is on the rise, and Earth is inevitably going to warm so much that it will likely kill millions of people. Given this, how could we possibly imagine a less bleak future?

But maybe that’s what sci-fi can explore.

Author Neal Stephenson wrote in 2011: “Good SF supplies a plausible, fully thought-out picture of an alternate reality in which some sort of compelling innovation has taken place.” Journalist Noah Smith argues that optimistic sci-fi needs to have “several concrete features corresponding to the type of future people want to imagine actually living in.”

So, what if we figure it out?

What if we create spaceships that explore further than we could have ever imagined?

What if we embrace our natural curiosity and work toward discovering more and more of this wondrous universe?

What if we ensure that even the least fortunate among us have reliable housing, food, and healthcare?

What if we reject the notion that an economy must produce more and more, but rather embrace the idea that a functioning society is only as successful as its least privileged soul?

What if we build civilizations that don’t try to conquer nature, but rather try to be a part of it?

What if our technological innovations didn’t come from efforts to decimate each other, but rather from a constant desire to better each other’s lives?

I know, I know.

Right now, it’s hard to see that future. We see terrible things all around us – hunger, disease, mass murder, greed, an increasingly uninhabitable planet.

But unlike the world of Mad Max, our world has not yet been killed. There are still monumental efforts to stop hunger, to limit disease, to build more resilient governments, to wake us from the hypnosis of war, to sail deeper into the galaxy and to see closer into the atom. We can still create a world where the patches of paradise blossom into the wastelands.

I admit it’s hard to see. In fact, I admit that I’ve spent most of my journalism career telling a narrative about the wastelands bleeding into our lives – a sort of fear-mongering, I suppose.

But maybe that’s why it’s so important for us to imagine a different future – precisely because people like me made it so hard to see.

After all, if we can’t see paradise, how can we possibly navigate toward it?
