Energy, data, environment

Elephants have names — and they use them with each other

Vox

Wild African elephants call each other by their names, according to a study published today in Nature Ecology & Evolution — making them the only nonhuman animals known to use language like this.

From infancy onward, we learn sounds that represent people, objects, feelings, and concepts. But if you repeat a word — even your own name — too many times, it starts to sound meaningless. Most words, after all, are no more than arbitrary collections of sound.

Our ability to create and share vocal labels, like names, is part of what makes us human. Until now, this kind of arbitrary vocal labeling was thought to be unique to humans. 

A handful of animal species, including bottlenose dolphins and parrots, can also address each other using vocal calls. These calls, or catchphrases, are used to shout out the caller’s own identity, not that of another animal. To get a given individual’s attention, a dolphin can imitate another dolphin’s signature call — it works, but it’s not what we do.

If your friend constantly says, “What’s up, dude?” and you’re both dolphins, you might refer to them in the third person as “Whatsup Dude.” Since you’re not dolphins, you’d probably call them something like “Kyle” instead. Scientists think that this cognitive leap takes more effort than imitation alone, making it an extremely rare phenomenon in the animal kingdom.

If elephants are intelligent enough to learn each other’s names, they may also have deep social bonds, complex thoughts, and a desire to connect with others — just like us. Findings like this pile onto mountains of evidence suggesting that we should rethink our current relationships with animals like elephants.

“I honestly think we just scratched the surface of it,” said behavioral ecologist Mickey Pardo, a postdoctoral fellow at Cornell University and lead author of this study, which was done in collaboration with seven other researchers.

Elephants call one another by their names

Elephants live in close-knit social groups, centered around matriarchal herds of females and their calves. They form strong bonds, with social networks linking 50 or more elephants. “Their social relationships are such an incredibly important part of their ecology,” said Pardo.

Like humans, elephants aren’t always physically close to their best friends and family. They don’t need phones to keep in touch from afar — thanks to their massive vocal tracts, elephants can produce loud, low-frequency rumbles that travel through the ground as seismic waves, reaching elephants up to 6 kilometers (roughly 3.7 miles) away. At that distance, well out of sight, a caller needs to indicate who they’re directing their message to.

Pardo wondered whether elephants’ intricate social relationships, and the need to identify one another from a distance, pushed elephants to learn to call one another by their names.

To find out, Pardo recorded elephant vocalizations from groups of wild adult females and their calves across two field sites in Kenya, taking note of which elephant was calling and who they were calling to. Elephants make lots of sounds in addition to their iconic trumpeting. Here, researchers focused on the rich, low-frequency rumbles elephants use to call out to each other from a distance, to greet each other up close, and to comfort their children.

The team trained a machine-learning algorithm to match rumble calls to the elephant they were directed toward (the “receiver”). When given an unlabeled rumble, the algorithm was able to guess the receiving elephant’s identity with 27.5 percent accuracy — significantly better than chance. That number might look low, but Pardo said they wouldn’t expect the model to be perfectly accurate: elephants probably aren’t saying each other’s names every time they rumble at each other.
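The study’s pipeline isn’t published here, but the logic of the test is easy to sketch. Below is a minimal, hypothetical illustration: train a classifier to predict a rumble’s receiver from acoustic features, then compare its accuracy against a label-shuffled baseline to establish chance. The data, features, and model are placeholders, not the paper’s actual methods.

```python
# Minimal, hypothetical sketch of the supervised-labeling test (not the
# authors' code): train a classifier to predict each rumble's receiver from
# acoustic features, then compare accuracy against a label-shuffled baseline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 600 rumbles x 20 acoustic features, each labeled with
# one of 15 receiver identities. Real features might be spectral summaries.
X = rng.normal(size=(600, 20))
receivers = rng.integers(0, 15, size=600)

model = RandomForestClassifier(n_estimators=100, random_state=0)
accuracy = cross_val_score(model, X, receivers, cv=5).mean()

# Chance baseline: repeat scoring with shuffled receiver labels. If true
# labels score reliably higher, calls carry receiver-specific information.
null_scores = [
    cross_val_score(model, X, rng.permutation(receivers), cv=5).mean()
    for _ in range(10)
]

# With this synthetic, signal-free data, both numbers sit near chance (~1/15);
# the study's real calls scored 27.5 percent, well above its chance level.
print(f"accuracy={accuracy:.3f}  chance≈{np.mean(null_scores):.3f}")
```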

Greeting rumbles — the elephant equivalent of saying “hi” — were the worst at predicting the receiver’s identity, which makes sense. When I meet up with a friend at a bar, I rarely say, “Hello, insert-name-here!” Something like “Hey, good to see you!” usually does the trick, and elephants may do the same. It’s possible that the machine-learning tools used in this study simply couldn’t capture all the rumbles’ nuances. They relied on a supervised learning algorithm, which assigned recordings to predefined name labels, rather than discovering patterns on its own. In the future, other techniques like deep learning could uncover more, but would require a lot more training data. 

Elephants don’t have signature calls like dolphins and parrots, but each elephant’s voice has a unique intonation and character, much like ours do. Pardo’s team used their classification algorithm to see whether elephants were truly using a distinct sound to call for their friends, beyond simply copying the receiver. Indeed, they found that vocal labeling in elephants probably doesn’t rely on imitation — but without an exhaustive understanding of elephant language, it’s hard to know for sure.

Calls to the same receiver were also more similar to each other than calls to different receivers, lending more support to the idea that an elephant’s name represents its identity to the whole group. However, the similarity across callers wasn’t very strong, suggesting that different elephants might refer to a given individual by different names. That said, wild elephants did respond to recordings of calls that were initially addressed to them, which means those calls must carry some form of uniquely identifying information.

While humans usually use the same label for a given person — my name is Celia, and everyone calls me Celia — this isn’t always the case. My partner’s given name is Andrew, but most people who have met him within the last five years call him Roan. To some extent, that vocal label depends on the social context and the depth and nature of their relationship. Elephants may be similar. 

Elephant rumbles are information-dense: One 30-second recording could contain an elephant’s name, but it also might contain a lot more. Given the relatively limited amount of data Pardo’s team had access to, the machine-learning techniques could only assign a recording to the elephant name it was most similar to. That’s just the tip of the iceberg.

Imagine receiving a noisy voice memo in a completely unfamiliar language, and trying to pick out a specific word from that collection of sounds — it’s tricky. Daniela Hedwig, director of the Elephant Listening Project in the K. Lisa Yang Center for Conservation Bioacoustics, thinks that the next step will be to figure out exactly how individual pieces of information are encoded in the acoustics of these recordings. 

“If we can figure out how the elephants are encoding names in the calls,” Pardo said, “it would open up so many other avenues of inquiry.”

Could this be used as evidence for elephant personhood?

In 2022, New York state’s highest court ruled that an elephant was not a legal person. The Nonhuman Rights Project had filed habeas corpus litigation on behalf of Happy, an elephant living in isolation at the Bronx Zoo, arguing for her right to be freed from illegal detainment.

They lost. Monica Miller, Happy’s attorney at the Nonhuman Rights Project, was not surprised. Humans have certain basic rights simply because they’re humans, and in many ways, animals are viewed as property under the law. Miller suspects this deeply ingrained feeling of human exceptionalism would stop a judge from granting an elephant the right to personal autonomy. “Even if an elephant could write a law school essay, they would say ‘No,’ because they’re an elephant.”

Demonstrating that an animal engages in complex forms of communication isn’t necessarily enough to make people care about them. Ants use a highly sophisticated chemical language to coordinate some of the most impressive collective actions in the animal kingdom, but we still kill about a gazillion (rough estimate) ants per day. Ants don’t get lawyers.

They do get signatures from 287 scientists, philosophers, and ethicists, including Pardo. In April, The New York Declaration on Animal Consciousness launched at a conference at New York University, stating that there is “strong scientific support for attributions of conscious experience to other mammals and to birds,” and “at least a realistic possibility of conscious experience” in all vertebrates and most invertebrates. The declaration aims, in part, to encourage people to consider the implications of studies like Pardo’s on animal welfare policy.

To collect recordings of elephant calls in the wild, Pardo spent time in the field at the Samburu National Reserve in Kenya. The biggest cause of elephant mortality in the area, he said, was human-elephant conflict. “The conflict between humans and animals is at its worst. And it gets worse every year,” Mike Lesil, a ranger at Samburu National Reserve, told Sierra. “We used to chase Somali poachers, organized crime groups, and local thieves hired by the ivory traders. Now most of the elephants are murdered by the local herders fighting the wildlife for pastures and water.”

“The more we learn about the elephant’s behavior and needs, the better informed conflict mitigation strategies can be, taking into account the perspective of both humans and elephants,” Joshua Plotnik, a professor studying the evolution of cognition across species at Hunter College at the City University of New York, wrote in an email.

In theory, findings like Pardo’s could open the door to literal human-elephant communication. More realistically, he hopes it will inspire people to invest in conservation efforts and rethink their relationships with elephants — both in their native habitat, and in captivity. “I feel like we really need a major revolution in how we think about other animals,” he told me. Given the complexity of their social lives in the wild, Pardo no longer believes that it’s ethical to keep elephants in captivity at all.

Project CETI (Cetacean Translation Initiative) is currently taking a similar approach to animal cognition research, decoding sperm whale vocalizations to promote conservation efforts. It all hinges on the hope that if scientists can prove that an animal does something we once thought was uniquely human, we’ll be more motivated to care.

As humans, we tend to empathize with animals that feel similar to us. “People often only appreciate what they understand,” Pardo said, “and they often only understand what’s close to them.”

“Evidence that they’re able to name each other, to have that concept of self and then create a symbol for the self, is a level of autonomy that we would recognize in the court as being worthy of protection,” Miller said. “Rights trickle down from this understanding.”

Correction, June 10, 4 pm: In the original version of this story, Professor Plotnik's place of employment was incorrectly stated. He currently teaches at Hunter College at the City University of New York.


Generative AI may be creating more work than it saves


Will artificial intelligence (AI) help move DevOps efforts from fragile to agile? There is speculation across the industry that AI can greatly accelerate not just code generation for software, but all the details that follow -- including specifications, documentation, testing, deployment, and more. 

AI has been used for several years in its operational and predictive form, working behind the scenes to automate workflows and scheduling. Now, IT managers and professionals are embracing the potential of generative AI.


Within the next three years, the number of platform-engineering teams employing AI to augment the software development lifecycle will likely increase from 5% to 40%, according to an analysis published by a team of Gartner analysts led by Manjunath Bhat.


Across the IT industry, there is notable optimism about the potential boost AI provides to DevOps and associated Agile practices. "Combining the DevOps and AI domains can be complementary by enhancing all phases of the software development lifecycle and enabling software to ship to market more rapidly, reliably, and efficiently," Billy Dickerson, principal software engineer with SAS, told ZDNET.


Much activity is taking place around generative AI and the DevOps process. Just about all (97%) of the 408 technology managers in a survey released by automation specialist Stonebranch indicated they are "interested in incorporating generative AI into their automation programs." These professionals "see genAI as a pivotal tool to connect a more diverse set of tools and empower a broader range of users," the survey's authors point out. 

AI boosts DevOps, but DevOps also boosts AI application development, the Stonebranch survey shows. At least 72% of respondents have embraced machine learning pipelines to power their generative AI initiatives.

While there has been a lot of attention on using generative AI to create or modify software code, this is only a fraction of the development process. It's time to look at how AI can assist IT professionals and managers in other ways.

"Developers on average spend anywhere between 10% and 25% of their time writing code," Gartner's Bhat and his co-authors wrote. "The rest of the time goes into reading specifications, writing documentation, doing code reviews, attending meetings, helping co-workers, debugging preexisting code, collaborating with other teams, provisioning environments, troubleshooting production incidents, and learning technical and business concepts -- to name just a few."

Integrating AI with "all phases of the DevOps feedback loop -- plan, code review and development, build, test, deploy, monitor, measure -- increases collaboration in teams and positively improves results," SAS's Dickerson pointed out. With planning, "AI can make the project management process more efficient by autogenerating requirements from user requests, detecting non-aligned timelines, and even identifying incomplete requirements."

Dickerson said AI can also handle the heavy-lifting processes in code review and development: "Not only can AI offer developers suggestions for autogenerating boilerplate code, it can also contribute to the code review process. This approach amplifies collaboration between teams and can lead to more innovation, faster time-to-market, and better alignment with business objectives."

Still, technology managers and professionals need to exercise caution in terms of going too far with AI-fueled DevOps and other Agile practices. "Overreliance poses risks," Ian Ferguson, senior director of SiFive, and formerly vice president of marketing at Lynx Software Technologies, told ZDNET.


"Without an understanding of how an autonomous AI platform arrived at a decision, we lose accountability," Ferguson said. "Without transparency into an AI's reasoning, we risk blindly accepting outcomes without the ability to question or validate them. We face a future in which a very limited set of companies can create complicated systems, or we see a reduction in the quality of systems."

Ferguson urged fostering "a collaborative dynamic between humans and AI in DevOps. The AI can handle rote coding while humans must own the definition of a thorough set of system requirements and behaviors," he explained. 

Dickerson also advised caution when proceeding with AI-driven DevOps: "Since AI can automate many tasks in the DevOps feedback loop, it would be ideal to have human oversight to ensure AI is making the correct automated decisions. The best practice is to ensure human approval of every important business decision."

In their report for Gartner, Bhat and his co-authors said that applying AI to one part of the software development lifecycle "can result in shifting rather than saving effort, creating a false sense of time savings. For example, time saved during coding can be offset by increased time for code reviews and debugging."


There are, however, reasons to be excited about the impact of AI on DevOps. Evidence suggests AI can be applied to assist or accelerate later stages of the DevOps process. When it comes to the software build and test stage, for example, "AI can evaluate the inputs and outputs of the build process and look for failure patterns to assist in optimizing the mean time to recovery," Dickerson said.

In addition, "with its ability to analyze vast amounts of data and make predictions, AI can also assist with analyzing test results. This can help identify patterns of the most impactful and unreliable tests to assist in optimizing the testing process."

At the deployment stage, "AI can automate the provisioning, configuration, and management of common infrastructure resources. In turn, this can trigger deployments that use these autogenerated artifacts, which can then allow engineers to spend more time on complex deployments," Dickerson said.

For monitoring and measuring, "because enterprise deployments can produce a large quantity of data, DevOps teams can struggle to digest the necessary information to resolve issues that arise," Dickerson said. "To assist with this effort, AI can analyze metrics and logs in real-time to detect issues much earlier and allow for faster resolution. By analyzing continuous data and patterns, AI can forecast potential bottlenecks, identify areas of improvement, and assist in optimizing all phases of the DevOps lifecycle."  

Ferguson said that with human oversight, "AI can strengthen approaches like DevOps and Agile." He said the effective combination of AI and humans across the software lifecycle can increase productivity and innovation: "We must proactively shape this future, though, through transparency, trust-building, workflow re-engineering, and skills training."



Analysis: Benefits of UK ‘sustainable aviation fuel’ will be wiped out by rising demand - Carbon Brief


UK government targets for “sustainable aviation fuels” (SAFs) will only cut emissions from the sector to 0.8% below current levels in 2040, Carbon Brief analysis shows.

From 2025, flights taking off from the UK must use a fixed share of SAFs, which are largely made from waste products. This share will gradually rise from 2% next year to 22% in 2040. 

The government says its “SAF mandate” will cut aviation emissions by 6.3m tonnes of carbon dioxide equivalent (MtCO2e) in 2040.

However, Carbon Brief analysis of government forecasts shows this being almost entirely wiped out by rising demand for air travel, meaning emissions would only fall by 0.8% overall.

The SAF mandate is the most substantial policy to date under the UK government’s “jet-zero” strategy for decarbonising air travel, which eschewed efforts to limit demand. The mandate relies heavily on fuels made from used cooking oil and other waste products, which are in limited supply.

No change

The SAF mandate will require jet fuel suppliers to ensure that an increasing share of the product they supply is “sustainable”. This is meant to encourage investment in new facilities to produce SAFs.

Fuels described as SAFs include those made from waste, such as used cooking oil, household waste and offcuts from the forestry sector. 

Despite their name, SAFs produce just as many emissions as fossil fuels when burned to power planes. 

However, they generally – although not always – have a lower overall “lifecycle” carbon footprint than petroleum-based jet fuel. This is due to CO2 emissions absorbed from the atmosphere when growing plants for biofuels, or emissions that are avoided by diverting waste products to be used as fuels. 

These emissions savings are counted towards the UK’s aviation sector as a whole.

(The government says that, for the time being, it will not support SAFs made directly from crops, which tend to have relatively high carbon footprints due to changes in land use.)

The new UK mandate starts in 2025 with a requirement that 2% of total jet fuel demand is SAF, increasing to 10% in 2030 and 22% in 2040. The government says there is currently not enough certainty in the SAF market to set targets beyond that date.

These measures will cut overall aviation emissions by 2.7MtCO2e in 2030 and 6.3MtCO2e in 2040, according to the government.

Based on government forecasts for jet fuel use, this change will be almost totally offset by a growth in flights, leaving UK aviation emissions virtually unchanged between now and 2040. 

Emissions in 2025 are expected to be 36.0MtCO2e, while 15 years later they are set to be 35.7MtCO2e, according to Carbon Brief analysis. This is a drop of just 0.3MtCO2e, or 0.8%. This is illustrated in the chart below, with the SAF mandate merely preventing an increase in emissions resulting from higher jet fuel use in 2040.

These figures are derived from the government’s “central case”, cited in its underlying analysis for the SAF mandate, which sees jet fuel use increasing from 11.5m tonnes (Mt) in 2025 to 13.3Mt in 2040. This, in turn, is based on policies in the government’s “continuation of current trends” scenario, with the SAF mandate included.
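Those forecasts are enough to reproduce the headline numbers. Here is a back-of-envelope check, a sketch using only figures quoted in this article:

```python
# Back-of-envelope check of the 0.8% figure, using only numbers quoted in
# this article (all values in MtCO2e).
emissions_2025 = 36.0           # expected UK aviation emissions in 2025
emissions_2040_with_saf = 35.7  # expected 2040 emissions with the SAF mandate
saf_saving_2040 = 6.3           # government's claimed SAF saving in 2040

# Without the mandate, 2040 emissions would be higher by the claimed saving.
emissions_2040_no_saf = emissions_2040_with_saf + saf_saving_2040   # 42.0

demand_growth = emissions_2040_no_saf / emissions_2025 - 1   # ~ +16.7%
net_change = emissions_2040_with_saf / emissions_2025 - 1    # ~ -0.8%
print(f"growth absorbed by SAFs: {demand_growth:+.1%}, net change: {net_change:+.1%}")
```

In other words, the mandate's entire projected saving goes toward absorbing roughly 17 percent growth in jet fuel burn, leaving emissions almost flat.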

The government expects far more flights to take off from the UK in the coming years, resulting in higher jet fuel use. It has resisted pressure to curb the demand for air travel, despite warnings from experts that such actions are vital for reducing aviation emissions.

In its jet-zero strategy, the government stated that it is aiming for a “high ambition” scenario, which would see aviation emissions fall faster in the coming years. However, as it stands, it has not introduced policies to drive further emissions reductions in planes.

The SAF mandate assumes that SAFs reduce lifecycle emissions from jet fuel by 70%. Certificates will be issued to fuel suppliers for each tonne of SAF produced, using this baseline emissions reduction goal as the standard.

However, Prof Bill Rutherford, an Imperial College London biochemist who contributed to two major assessments of low-carbon aviation fuels in the UK last year, tells Carbon Brief he is sceptical about lifecycle emissions analysis that shows such high emissions benefits:

“Lifecycle analysis is a very fuzzy science…You can basically get what you want out of it.”

For example, in its analysis, the government assumes that SAFs made from used cooking oil –  which are expected to make up virtually the entire UK supply in the short-term – cut lifecycle emissions by roughly 95-98% compared to conventional jet fuel. 

Dr Andrea Fantuzzi, another Imperial College London chemist who also worked on the low-carbon fuel assessments with Rutherford, says such figures seem “way too high”. He estimates that the savings would be closer to 70%. 

Fantuzzi adds that even this does not account for the land originally used to produce the oil, and assumes that the oil would otherwise be thrown away – rather than used to power road vehicles, for example. (For more on waste oils, see: More cooking oil.)

Additionally, Rutherford points out that the use of SAFs has no impact on non-CO2 emissions from planes, which could account for up to two-thirds of their climate impact. He concludes:

“The only way you can make aviation any more sustainable is to do less of it.”

More cooking oil

The only SAFs that are currently available in the UK are fuels made from used cooking oil and other waste oils, which are collected from restaurants and factories. 

However, the SAF mandate includes a limit on the amount of waste oil-based fuels within its overall targets. This is partly to “incentivise the development of new technologies” and partly due to concerns that waste oil supplies will be insufficient.

For the first two years, these fuels will be allowed to make up 100% of UK SAFs. This then falls to 71% in 2030 and 33% in 2040. Overall, waste oil-based SAFs would account for 2% of total jet fuel use in 2025 and up to 7.8% in 2040.

Despite these limits, waste oil-based SAF use is expected to rise around 15-fold from current levels within a decade. This huge increase in demand for waste cooking oil under the SAF mandate is illustrated in the figure below.

The Aviation Environment Federation said in a statement that the amount of waste oil being allowed into UK jet fuel under the UK’s SAF mandate is “much higher than we, and many others, were expecting, and appears to be the result of airline pressure”. The looser cap on these fuels was “welcomed” by industry body Airlines UK.

It raises the question of where the UK will source the required volume of waste oil to meet SAF targets. 

Studies have shown that there is nowhere near enough waste cooking oil produced within the UK to supply jet fuel demand. “We’re not about to start eating more chips, so we will have to start importing more waste oil,” Matt Finch, UK policy manager at the NGO Transport and Environment, tells Carbon Brief.

The government itself acknowledges this, saying that production of these SAFs within the UK is likely to be constrained by the availability of waste cooking oil from 2029 onwards.

It notes that their availability will therefore be “highly dependent” on how much waste oil the UK can import.

As of 2023, waste cooking oil collected in the UK only accounted for 7% of the country’s SAF production. This share has shrunk in recent years, such that imports from other countries – particularly China – have driven most of the growth in production, as the chart below shows.

There is mounting evidence that the demand for imported cooking oil in the UK and Europe is being met with virgin palm oil that has been fraudulently passed off as waste. This would cancel out the fuel’s emissions savings, due to the land clearances for oil palm plantations.

The UK’s aviation sector will have to compete not only with other countries for a limited pool of waste cooking oil, but also with other sectors. 

Most of the UK’s waste cooking oil supplies are currently used to make biofuels for trucks and other road transport. Again, diverting resources from road fuel use would undermine the emissions savings from using them in SAFs.

The government acknowledges this, noting that “the SAF mandate may divert feedstocks which would have been utilised in other sectors of the economy and this may increase emissions in other sectors”. However, it says this is justified because “there are limited alternatives to decarbonise aviation by 2050”.

One of the scenarios modelled by the government assumes that SAF targets are met, but insufficient waste cooking oil means there is not enough biodiesel for road vehicles. This reduces the cumulative emissions savings between 2025-40 from 53.9MtCO2e to 43.0MtCO2e.

New fuels

The government is also supporting new types of SAF production in the UK, including fuels made from “black bin bag waste” and residues from farming or forestry.

In the newly released documents, the government says the UK will be a “leader” and a “first mover” in these technologies, spurred on by the cap on waste oil fuels and supported by the Advanced Fuel Fund.

Unlike waste oil-based fuels, the government says there will be “sufficient” materials available to meet production demand for these advanced fuels until at least 2040. From that point onwards, it says lack of materials “may become a constraining factor”.

However, a 2023 report by the Royal Society highlighted the limited availability of some waste materials to produce SAFs. It estimated that forest offcuts, for example, would be able to provide no more than 1.7% of current jet fuel demand.

Moreover, many waste sources are already recycled or burned to generate electricity and the government has targets in place to cut household waste in the coming years. “Most waste is already used for something that’s not jet fuel, so we know supplies of waste-based SAF will be limited,” Finch tells Carbon Brief.

Finally, the government’s mandate also includes another target, within the overarching SAF goal, for scaling up the production of “power-to-liquid” fuels. 

These fuels can be made using green hydrogen and carbon captured from the air. Unlike most SAFs, they could cut up to 100% of CO2 emissions compared to conventional jet fuel, but they are currently less developed than other options.

The target for power-to-liquid fuels will start in 2028 at 0.2% of total jet fuel demand, reaching 0.5% in 2030 and 3.5% in 2040.

These targets are lower than the ones introduced in the EU, which is aiming for 35% of its jet fuel to be power-to-liquid by 2050. The bloc is also targeting 70% of aviation demand to be met with SAFs by 2050, whereas the UK’s targets stop at 22% by 2040.

In its “balanced net-zero pathway” for UK aviation, government advisors the Climate Change Committee (CCC) proposed that SAFs should make up 25% of jet fuel by 2050, with one-third of this made up of power-to-liquid fuels – roughly 8% of total jet fuel. The government targets are roughly in line with this trajectory.

Thinktank Green Alliance laid out three scenarios for SAF expansion in 2022, including higher ambition goals, with power-to-liquid fuels reaching 28% and 50% of total jet fuel by 2050.

However, it noted that such a rollout could be constrained by the large amounts of additional green hydrogen and renewable power required to produce these fuels. 

The report stated:

“It could be argued that aviation should not be a priority use of renewables as there are other options to cut carbon in the sector, such as managing the number of flights taken.”


This startup is using protein powder to beef up carbon capture


FabricNano


Researchers make a plastic that includes bacteria that can digest it


It's alive! —

Bacterial spores strengthen the plastic, then revive to digest it in landfills.

One reason plastic waste persists in the environment is because there's not much that can eat it. The chemical structure of most polymers is stable and different enough from existing food sources that bacteria didn't have enzymes that could digest them. Evolution has started to change that situation, though, and a number of strains have been identified that can digest some common plastics.

An international team of researchers has decided to take advantage of those strains and bundle plastic-eating bacteria into the plastic. To keep them from eating it while it's in use, the bacteria are mixed in as inactive spores that should (mostly—more on this below) only start digesting the plastic once it's released into the environment. To get this to work, the researchers had to evolve a bacterial strain that could tolerate the manufacturing process. It turns out that the evolved bacteria made the plastic even stronger.

Bacteria meet plastics

Plastics are formed of polymers, long chains of identical molecules linked together by chemical bonds. While they can be broken down chemically, the process is often energy-intensive and doesn't leave useful chemicals behind. One alternative is to get bacteria to do it for us. If they've got an enzyme that breaks the chemical bonds of a polymer, they can often use the resulting small molecules as an energy source.

The problem has been that the chemical linkages in the polymers are often distinct from the chemicals that living things have come across in the past, so enzymes that break down polymers have been rare. But after decades of exposure to plastics, that's starting to change, and a number of plastic-eating bacterial strains have been discovered recently.

This breakdown process still requires that the bacteria and plastics find each other in the environment, though. So a team of researchers decided to put the bacteria in the plastic itself.

The plastic they worked with is called thermoplastic polyurethane (TPU), something you can find everywhere from bicycle inner tubes to the coating on your Ethernet cables. Conveniently, there are already bacteria that have been identified that can break down TPU, including a species called Bacillus subtilis, a harmless soil bacterium that has also colonized our digestive tracts. B. subtilis also has a feature that makes it very useful for this work: It forms spores.

This feature handles one of the biggest problems with incorporating bacteria into materials: The materials often don't provide an environment where living things can thrive. Spores, on the other hand, are used by bacteria to wait out otherwise intolerable conditions, and then return to normal growth when things improve. The idea behind the new work is that B. subtilis spores remain in suspended animation while the TPU is in use and then re-activate and digest it once it's disposed of.

In practical terms, this works because spores only reactivate once nutritional conditions are sufficiently promising. An Ethernet cable or the inside of a bike tire is unlikely to see conditions that will wake the bacteria. But if that same TPU ends up in a landfill or even the side of the road, nutrients in the soil could trigger the spores to get to work digesting it.

The researchers' initial problem was that the manufacturing of TPU products usually involves extruding the plastic at high temperatures, which are normally used to kill bacteria. In this case, they found that a typical manufacturing temperature (130° C) killed over 90 percent of the B. subtilis spores in just one minute.

So, they started out by exposing B. subtilis spores to lower temperatures and short periods of heat that were enough to kill most of the bacteria. The survivors were grown up, made to sporulate, and then exposed to a slightly longer period of heat or even higher temperatures. Over time, B. subtilis evolved the ability to tolerate a half hour of temperatures that would kill most of the original strain. The resulting strain was then incorporated into TPU, which was then formed into plastics through a normal extrusion process.
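This ratcheting selection scheme is easy to caricature in code. The toy simulation below is a sketch with entirely invented parameters (the real experiment measured survival of B. subtilis spores after heat shocks); it shows how repeated kill-and-regrow rounds push a population's mean heat tolerance upward:

```python
# Toy simulation of the directed-evolution protocol described above: each
# round, heat-shock a spore population, keep the survivors, regrow with
# small heritable variation, and raise the dose. All numbers are invented.
import random

random.seed(1)
population = [random.gauss(1.0, 0.2) for _ in range(10_000)]  # heat tolerance, a.u.

dose = 1.0
for generation in range(10):
    survivors = [t for t in population if t > dose]   # heat shock kills the rest
    if not survivors:
        break                                          # population went extinct
    # Regrow to full size from survivors, with small heritable variation.
    population = [random.choice(survivors) + random.gauss(0, 0.05)
                  for _ in range(10_000)]
    dose *= 1.05                                       # longer/hotter next round

print(f"mean tolerance: {sum(population) / len(population):.2f} "
      f"(started near 1.00; dose now {dose:.2f})")
```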

You might expect that putting a bunch of biological material into a plastic would weaken it. But the opposite turned out to be true, as various measures of its tensile strength showed that the spore-containing plastic was stronger than pure plastic. It turns out that the spores have a water-repelling surface that interacts strongly with the polymer strands in the plastic. The heat-resistant strain of bacteria repelled water even more strongly, and plastics made with these spores were tougher still.

To simulate landfilling or littering, the researchers placed samples of the plastic in compost. Even without any added spores, there were organisms present in the compost that could degrade TPU; by five months, plain TPU lost nearly half its mass. But with B. subtilis spores incorporated, the plastic lost 93 percent of its mass over the same time period.

This doesn't mean our plastics problem is solved. Obviously, TPU breaks down relatively easily. There are lots of plastics that don't break down significantly, and may not be compatible with incorporating bacterial spores. In addition, it's possible that some TPU uses would expose the plastic to environments that would activate the spores—something like food handling or buried cabling. Still, it's possible this new breakdown process can provide a solution in some cases, making it worth exploring further.

Nature Communications, 2024. DOI: 10.1038/s41467-024-47132-8  (About DOIs).



A Path to 100 Percent Renewable Energy


On 2 April 2023, one of the main generators on the Hawaiian island of Kauai, an oil-fired turbine, abruptly tripped offline. Normally, such a sudden loss would spell disaster for a small, islanded grid. But the Kauai grid has a feature that many larger grids lack: a technology called grid-forming inverters. An inverter converts direct-current electricity to grid-compatible alternating current. The island’s grid-forming inverters are connected to its battery-storage systems, and they are a special type—in fact, they had been installed with just such a contingency in mind. They improve the grid’s resilience and allow it to operate largely on resources like batteries, solar photovoltaics, and wind turbines, all of which connect to the grid through inverters. On that April day in 2023, Kauai had over 150 megawatt-hours’ worth of energy stored in batteries—and also the grid-forming inverters necessary to let those batteries respond rapidly and provide stable power to the grid. They worked exactly as intended and kept the grid going without any blackouts.

The photovoltaic panels at the Kapaia solar-plus-storage facility, operated by the Kauai Island Utility Cooperative in Hawaii, are capable of generating 13 megawatts under ideal conditions. Photo: Tesla

A solar-plus-storage facility at the U.S. Navy’s Pacific Missile Range Facility, in the southwestern part of Kauai, is one of two on the island equipped with grid-forming inverters. Photo: U.S. Navy

That April event in Kauai offers a preview of the electrical future, especially for places where utilities are now, or soon will be, relying heavily on solar photovoltaic or wind power. Similar inverters have operated for years within smaller off-grid installations. However, using them in a multimegawatt power grid, such as Kauai’s, is a relatively new idea. And it’s catching on fast: At the time of this writing, at least eight major grid-forming projects are either under construction or in operation in Australia, along with others in Asia, Europe, North America, and the Middle East.

Reaching net-zero-carbon emissions by 2050, as many international organizations now insist is necessary to stave off dire climate consequences, will require a rapid and massive shift in electricity-generating infrastructures. The International Energy Agency has calculated that to have any hope of achieving this goal would require the addition, every year, of 630 gigawatts of solar photovoltaics and 390 GW of wind starting no later than 2030—figures that are around four times as great as any annual tally so far.

The only economical way to integrate such high levels of renewable energy into our grids is with grid-forming inverters, which can be implemented on any technology that uses an inverter, including wind, solar photovoltaics, batteries, fuel cells, microturbines, and even high-voltage direct-current transmission lines. Grid-forming inverters for utility-scale batteries are available today from Tesla, GPTech, SMA, GE Vernova, EPC Power, Dynapower, Hitachi, Enphase, CE+T, and others. Grid-forming converters for HVDC, which convert high-voltage direct current to alternating current and vice versa, are also commercially available, from companies including Hitachi, Siemens, and GE Vernova. For photovoltaics and wind, grid-forming inverters are not yet commercially available at the size and scale needed for large grids, but they are now being developed by GE Vernova, Enphase, and Solectria.

The Grid Depends on Inertia

To understand the promise of grid-forming inverters, you must first grasp how our present electrical grid functions, and why it’s inadequate for a future dominated by renewable resources such as solar and wind power.

Conventional power plants that run on natural gas, coal, nuclear fuel, or hydropower produce electricity with synchronous generators—large rotating machines that produce AC electricity at a specified frequency and voltage. These generators have some physical characteristics that make them ideal for operating power grids. Among other things, they have a natural tendency to synchronize with one another, which helps make it possible to restart a grid that’s completely blacked out. Most important, a generator has a large rotating mass, namely its rotor. When a synchronous generator is spinning, its rotor, which can weigh well over 100 tonnes, cannot stop quickly.

The Kauai electric transmission grid operates at 57.1 kilovolts, an unusual voltage that is a legacy from the island’s sugar-plantation era. The network has grid-forming inverters at the Pacific Missile Range Facility, in the southwest, and at Kapaia, in the southeast. Illustration: Chris Philpot

This characteristic gives rise to a property called system inertia. It arises naturally from those large generators running in synchrony with one another. Over many years, engineers used the inertia characteristics of the grid to determine how fast a power grid will change its frequency when a failure occurs, and then developed mitigation procedures based on that information.

If one or more big generators disconnect from the grid, the sudden imbalance of load to generation creates torque that extracts rotational energy from the remaining synchronous machines, slowing them down and thereby reducing the grid frequency—the frequency is electromechanically linked to the rotational speed of the generators feeding the grid. Fortunately, the kinetic energy stored in all that rotating mass slows this frequency drop and typically allows the remaining generators enough time to ramp up their power output to meet the additional load.

Electricity grids are designed so that even if the network loses its largest generator, running at full output, the other generators can pick up the additional load and the frequency nadir never falls below a specific threshold. In the United States, where nominal grid frequency is 60 hertz, the threshold is generally between 59.3 and 59.5 Hz. As long as the frequency remains above this point, local blackouts are unlikely to occur.
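The mechanics here can be sketched with the classic swing equation: the initial rate of change of frequency (ROCOF) after losing generation is df/dt = -ΔP·f0/(2·H·S), where ΔP is the lost power, H the system inertia constant, and S the remaining synchronous rating. The numbers below are illustrative assumptions, not data from any real grid:

```python
# Illustrative single-machine estimate of how fast grid frequency falls after
# losing generation. All numbers are assumptions chosen for scale.
f0 = 60.0     # nominal frequency, Hz
H = 4.0       # system inertia constant, seconds (typically ~2-6 s)
S = 1000.0    # remaining synchronous generation rating, MVA
dP = 100.0    # generation suddenly lost, MW (10% of S)

# Swing equation: initial rate of change of frequency (ROCOF).
rocof = -dP * f0 / (2 * H * S)            # -> -0.75 Hz/s
print(f"ROCOF = {rocof:.2f} Hz/s")

# Time to reach a 59.5 Hz threshold if nothing responded (linear approx.);
# this is roughly the window inertia buys for other generators to ramp up.
t_window = (59.5 - f0) / rocof            # -> ~0.67 s
print(f"time to 59.5 Hz: {t_window:.2f} s")
```

Halve the inertia constant in this sketch and the window halves too, which is why grids dominated by inverter-based resources need something to stand in for spinning mass.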

Why We Need Grid-Forming Inverters

Wind turbines, photovoltaics, and battery-storage systems differ from conventional generators because they all produce direct current (DC) electricity—they don’t have a heartbeat like alternating current does. With the exception of wind turbines, these are not rotating machines. And most modern wind turbines aren’t synchronously rotating machines from a grid standpoint—the frequency of their AC output depends on the wind speed. So that variable-frequency AC is rectified to DC before being converted to an AC waveform that matches the grid’s.

As mentioned, inverters convert the DC electricity to grid-compatible AC. A conventional, or grid-following, inverter uses power transistors that repeatedly and rapidly switch the polarity applied to a load. By switching at high speed, under software control, the inverter produces a high-frequency AC signal that is filtered by capacitors and other components to produce a smooth AC current output. So in this scheme, the software shapes the output waveform. In contrast, with synchronous generators the output waveform is determined by the physical and electrical characteristics of the generator.

Grid-following inverters operate only if they can “see” an existing voltage and frequency on the grid that they can synchronize to. They rely on controls that sense the frequency of the voltage waveform and lock onto that signal, usually by means of a technology called a phase-locked loop. So if the grid goes down, these inverters will stop injecting power because there is no voltage to follow. A key point here is that grid-following inverters do not deliver any inertia.
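Conceptually, a phase-locked loop is a small feedback controller: it compares the inverter's internal phase estimate against the measured grid voltage and nudges its frequency until the error vanishes. Here is a toy sketch, not vendor firmware; the gains are illustrative and a quadrature (two-phase) voltage measurement is assumed:

```python
# Toy phase-locked loop locking onto an off-nominal grid frequency. Real
# inverter PLLs (e.g., synchronous-reference-frame designs) are more
# elaborate, but follow the same feedback idea.
import math

dt = 1e-4              # 10 kHz control loop
f_grid = 60.2          # actual grid frequency (Hz), unknown to the loop
theta_g = 0.0          # true grid phase
theta_e = 0.0          # PLL's phase estimate
kp, ki = 100.0, 2000.0 # proportional and integral loop gains (illustrative)
integ = 0.0            # integrator state; ends up holding the frequency offset
f_est = 60.0

for _ in range(20000):                       # simulate 2 seconds
    theta_g += 2 * math.pi * f_grid * dt
    # Phase detector: with a quadrature voltage measurement this reduces to
    # sin(true phase - estimated phase), a clean error signal.
    err = (math.sin(theta_g) * math.cos(theta_e)
           - math.cos(theta_g) * math.sin(theta_e))
    integ += ki * err * dt
    omega = 2 * math.pi * 60.0 + kp * err + integ
    theta_e += omega * dt
    f_est = omega / (2 * math.pi)

print(f"frequency estimate after 2 s: {f_est:.3f} Hz")  # converges near 60.200
```

Note what the loop needs: an existing grid voltage to measure. Kill the grid and the error signal disappears, which is exactly why a grid-following inverter cannot operate alone.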

Przemyslaw Koralewicz, David Corbus, Shahil Shah, and Robb Wallen, researchers at the National Renewable Energy Laboratory, evaluate a grid-forming inverter used on Kauai at the NREL Flatirons Campus. Photo: Dennis Schroeder/NREL

Grid-following inverters work fine when inverter-based power sources are relatively scarce. But as the levels of inverter-based resources rise above 60 to 70 percent, things start to get challenging. That’s why system operators around the world are beginning to put the brakes on renewable deployment and curtailing the operation of existing renewable plants. For example, the Electric Reliability Council of Texas (ERCOT) regularly curtails the use of renewables in that state because of stability issues arising from too many grid-following inverters.

It doesn’t have to be this way. When the level of inverter-based power sources on a grid is high, the inverters themselves could support grid-frequency stability. And when the level is very high, they could form the voltage and frequency of the grid. In other words, they could collectively set the pulse, rather than follow it. That’s what grid-forming inverters do.

The Difference Between Grid Forming and Grid Following

Grid-forming (GFM) and grid-following (GFL) inverters share several key characteristics. Both can inject current into the grid during a disturbance. Also, both types of inverters can support the voltage on a grid by controlling their reactive power, which is the product of the voltage and the current that are out of phase with each other. Both kinds of inverters can also help prop up the frequency on the grid, by controlling their active power, which is the product of the voltage and current that are in phase with each other.

What makes grid-forming inverters different from grid-following inverters is mainly software. GFM inverters are controlled by code designed to maintain a stable output voltage waveform, but they also allow the magnitude and phase of that waveform to change over time. What does that mean in practice? The unifying characteristic of all GFM inverters is that they hold a constant voltage magnitude and frequency on short timescales—for example, a few dozen milliseconds—while allowing that waveform’s magnitude and frequency to change over several seconds to synchronize with other nearby sources, such as traditional generators and other GFM inverters.

Some GFM inverters, called virtual synchronous machines, achieve this response by mimicking the physical and electrical characteristics of a synchronous generator, using control equations that describe how it operates. Other GFM inverters are programmed to simply hold a constant target voltage and frequency, allowing that target voltage and frequency to change slowly over time to synchronize with the rest of the power grid following what is called a droop curve. A droop curve is a formula used by grid operators to indicate how a generator should respond to a deviation from nominal voltage or frequency on its grid. There are many variations of these two basic GFM control methods, and other methods have been proposed as well.
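In code, a droop curve is a one-line rule. The sketch below shows frequency droop with illustrative parameters; the 5 percent droop and 20 percent headroom are assumptions, not values from any standard:

```python
# Frequency-droop rule: when grid frequency sags, raise active-power output
# in proportion. A 5% droop means a 5% frequency deviation commands a 100%
# swing in power. All parameters are illustrative, not from any grid code.
def droop_power_setpoint(f_meas, f_nom=60.0, p_set=0.5, droop=0.05):
    """Per-unit active power commanded by a frequency-droop controller."""
    p = p_set + (f_nom - f_meas) / (f_nom * droop)
    return min(max(p, 0.0), 1.2)   # clamp to the inverter's physical limits

# A dip from 60.00 to 59.70 Hz (a 0.5% sag) moves a unit idling at 50% of
# rating up to 60%; a deeper dip runs into the 120% hardware clamp.
print(droop_power_setpoint(59.70))   # -> 0.6
print(droop_power_setpoint(57.00))   # -> 1.2 (clamped)
```

Because every droop-controlled source responds to the same shared signal, the grid frequency itself, many inverters can divide up a load change without ever communicating with one another.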


To better understand this concept, imagine that a transmission line shorts to ground or a generator trips due to a lightning strike. (Such problems typically occur multiple times a week, even on the best-run grids.) The key advantage of a GFM inverter in such a situation is that it does not need to quickly sense frequency and voltage decline on the grid to respond. Instead, a GFM inverter just holds its own voltage and frequency relatively constant by injecting whatever current is needed to achieve that, subject to its physical limits. In other words, a GFM inverter is programmed to act like an AC voltage source behind some small impedance (impedance is the opposition to AC current arising from resistance, capacitance, and inductance). In response to an abrupt drop in grid voltage, its digital controller increases current output by allowing more current to pass through its power transistors, without even needing to measure the change it’s responding to. In response to falling grid frequency, the controller increases power.

GFL controls, on the other hand, need to first measure the change in voltage or frequency, and then take an appropriate control action before adjusting their output current to mitigate the change. This GFL strategy works if the response does not need to be superfast (as in microseconds). But as the grid becomes weaker (meaning there are fewer voltage sources nearby), GFL controls tend to become unstable. That’s because by the time they measure the voltage and adjust their output, the voltage has already changed significantly, and fast injection of current at that point can potentially lead to a dangerous positive feedback loop. Adding more GFL inverters also tends to reduce stability because it becomes more difficult for the remaining voltage sources to stabilize them all.

When a GFM inverter responds with a surge in current, it must do so within tightly prescribed limits. It must inject enough current to provide some stability but not enough to damage the power transistors that control the current flow.

Increasing the maximum current flow is possible, but it requires increasing the capacity of the power transistors and other components, which can significantly increase cost. So most inverters (both GFM and GFL) don’t provide current surges larger than about 10 to 30 percent above their rated steady-state current. For comparison, a synchronous generator can inject around 500 to 700 percent more than its rated current for several AC line cycles (around a tenth of a second, say) without sustaining any damage. For a large generator, this can amount to thousands of amperes. Because of this difference between inverters and synchronous generators, the protection technologies used in power grids will need to be adjusted to account for lower levels of fault current.
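To see why this matters for protection engineering, compare the fault-current headroom implied by the ranges above; the 1,000-ampere rating below is hypothetical:

```python
# Rough comparison of fault-current headroom, using the ranges quoted above
# and a hypothetical 1,000-ampere rating for both machines.
rated = 1000.0                       # amperes

inverter_surge = rated * (1 + 0.30)  # 10-30% above rating, upper bound
generator_surge = rated * (1 + 7.0)  # 500-700% above rating, upper bound

print(f"inverter:  up to ~{inverter_surge:,.0f} A, briefly")
print(f"generator: up to ~{generator_surge:,.0f} A for several AC cycles")
# Protection relays tuned to multi-kiloamp generator fault currents may never
# see a trip-worthy surge from an inverter, hence the need to re-tune them.
```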

What the Kauai Episode Reveals

The 2 April event on Kauai offered an unusual opportunity to study the performance of GFM inverters during a disturbance. After the event, one of us (Andy Hoke) along with Jin Tan and Shuan Dong and some coworkers at the National Renewable Energy Laboratory, collaborated with the Kauai Island Utility Cooperative (KIUC) to get a clear understanding of how the remaining system generators and inverter-based resources interacted with each other during the disturbance. What we determined will help power grids of the future operate at levels of inverter-based resources up to 100 percent.

NREL researchers started by creating a model of the Kauai grid. We then used a technique called electromagnetic transient (EMT) simulation, which yields information on the AC waveforms on a sub-millisecond basis. In addition, we conducted hardware tests at NREL’s Flatirons Campus on a scaled-down replica of one of Kauai’s solar-battery plants, to evaluate the grid-forming control algorithms for inverters deployed on the island.


A recording of the frequency responses to two different grid disruptions on Kauai shows the advantages of grid-forming inverters. The red trace shows the relatively contained response with two grid-forming inverter systems in operation. The blue trace shows the more extreme response to an earlier, comparable disruption, at a time when there was only one grid-forming plant online. Chart: National Renewable Energy Laboratory

At 4:25 pm on 2 April, there were two large GFM solar-battery plants, one large GFL solar-battery plant, one large oil-fired turbine, one small diesel plant, two small hydro plants, one small biomass plant, and a handful of other solar generators online. Immediately after the oil-fired turbine failed, the AC frequency dropped quickly from 60 Hz to just above 59 Hz during the first 3 seconds [red trace in the figure above]. As the frequency dropped, the two GFM-equipped plants quickly ramped up power, with one plant quadrupling its output and the other doubling its output in less than 1/20 of a second.

In contrast, the remaining synchronous machines contributed some rapid but unsustained active power via their inertial responses, but took several seconds to produce sustained increases in their output. It is safe to say, and it has been confirmed through EMT simulation, that without the two GFM plants, the entire grid would have experienced a blackout.

Coincidentally, an almost identical generator failure had occurred a couple of years earlier, on 21 November 2021. In this case, only one solar-battery plant had grid-forming inverters. As in the 2023 event, the three large solar-battery plants quickly ramped up power and prevented a blackout. However, the frequency and voltage throughout the grid began to oscillate around 20 times per second [the blue trace in the figure above], indicating a major grid stability problem and causing some customers to be automatically disconnected. NREL’s EMT simulations, hardware tests, and controls analysis all confirmed that the severe oscillation was due to a combination of grid-following inverters tuned for extremely fast response and a lack of sufficient grid strength to support those GFL inverters.

In other words, the 2021 event illustrates how too many conventional GFL inverters can erode stability. Comparing the two events demonstrates the value of GFM inverter controls—not just to provide fast yet stable responses to grid events but also to stabilize nearby GFL inverters and allow the entire grid to maintain operations without a blackout.

Australia Commissions Big GFM Projects

In sunny South Australia, solar power now routinely supplies all or nearly all of the power needed during the middle of the day. Shown here is the chart for 31 December 2023, in which solar supplied slightly more power than the state needed at around 1:30 p.m. Chart: Australian Energy Market Operator (AEMO)

The next step for inverter-dominated power grids is to go big. Some of the most important deployments are in South Australia. As in Kauai, the South Australian grid now has such high levels of solar generation that it regularly experiences days in which the solar generation can exceed the peak demand during the middle of the day [see figure at left].

The most well-known of the GFM resources in Australia is the Hornsdale Power Reserve in South Australia. This 150-MW/194-MWh system, which uses Tesla’s Powerpack 2 lithium-ion batteries, was originally installed in 2017 and was upgraded to grid-forming capability in 2020.

Australia’s largest battery (500 MW/1,000 MWh) with grid-forming inverters is expected to start operating in Liddell, New South Wales, later this year. This battery, from AGL Energy, will be located at the site of a decommissioned coal plant. This and several other larger GFM systems are expected to start working on the South Australia grid over the next year.

The leap from power systems like Kauai’s, with a peak demand of roughly 80 MW, to ones like South Australia’s, at 3,000 MW, is a big one. But it’s nothing compared to what will come next: grids with peak demands of 85,000 MW (in Texas) and 742,000 MW (the rest of the continental United States).

Several challenges need to be solved before we can attempt such leaps. They include creating standard GFM specifications so that inverter vendors can create products. We also need accurate models that can be used to simulate the performance of GFM inverters, so we can understand their impact on the grid.

Some progress in standardization is already happening. In the United States, for example, the North American Electric Reliability Corporation (NERC) recently published a recommendation that all future large-scale battery-storage systems have grid-forming capability.

Standards for GFM performance and validation are also starting to emerge in some countries, including Australia, Finland, and Great Britain. In the United States, the Department of Energy recently backed a consortium to tackle building and integrating inverter-based resources into power grids. Led by the National Renewable Energy Laboratory, the University of Texas at Austin, and the Electric Power Research Institute, the Universal Interoperability for Grid-Forming Inverters (UNIFI) Consortium aims to address the fundamental challenges in integrating very high levels of inverter-based resources with synchronous generators in power grids. The consortium now has over 30 members from industry, academia, and research laboratories.

One of Australia’s major energy-storage facilities is the Hornsdale Power Reserve, at 150 megawatts and 194 megawatt-hours. Hornsdale and another facility, the Riverina Battery, are the country’s two largest grid-forming installations. Photo: Neoen

In addition to specifications, we need computer models of GFM inverters to verify their performance in large-scale systems. Without such verification, grid operators won’t trust the performance of new GFM technologies. Using GFM models built by the UNIFI Consortium, system operators and utilities such as the Western Electricity Coordinating Council, American Electric Power, and ERCOT (Texas’s grid-reliability organization) are conducting studies to understand how GFM technology can help their grids.

Getting to a Greener Grid

As we progress toward a future grid dominated by inverter-based generation, a question naturally arises: Will all inverters need to be grid-forming? No. Several studies and simulations have indicated that we’ll need just enough GFM inverters to strengthen each area of the grid so that nearby GFL inverters remain stable.

How many GFMs is that? The answer depends on the characteristics of the grid and other generators. Some initial studies have shown that a power system can operate with 100 percent inverter-based resources if around 30 percent are grid-forming. More research is needed to understand how that number depends on details such as the grid topology and the control details of both the GFLs and the GFMs.

Ultimately, though, electricity generation that is completely carbon free in its operation is within our grasp. Our challenge now is to make the leap from small to large to very large systems. We know what we have to do, and it will not require technologies that are far more advanced than what we already have. It will take testing, validation in real-world scenarios, and standardization so that synchronous generators and inverters can unify their operations to create a reliable and robust power grid. Manufacturers, utilities, and regulators will have to work together to make this happen rapidly and smoothly. Only then can we begin the next stage of the grid’s evolution, to large-scale systems that are truly carbon neutral.

This article appears in the May 2024 print issue as “A Path to 100 Percent Renewable Energy.”
