
Solar Fuel Production Just Needs a Change in Direction


Normally, such a sudden loss would spell disaster for a small, islanded grid. But the Kauai grid has a feature that many larger grids lack: a technology called grid-forming inverters. An inverter converts direct-current electricity to grid-compatible alternating current. The island’s grid-forming inverters are connected to those battery systems, and they are a special type—in fact, they had been installed with just such a contingency in mind. They improve the grid’s resilience and allow it to operate largely on resources like batteries, solar photovoltaics, and wind turbines, all of which connect to the grid through inverters. On that April day in 2023, Kauai had over 150 megawatt-hours’ worth of energy stored in batteries—and also the grid-forming inverters necessary to let those batteries respond rapidly and provide stable power to the grid. They worked exactly as intended and kept the grid going without any blackouts.

The photovoltaic panels at the Kapaia solar-plus-storage facility, operated by the Kauai Island Utility Cooperative in Hawaii, are capable of generating 13 megawatts under ideal conditions. TESLA

A solar-plus-storage facility at the U.S. Navy’s Pacific Missile Range Facility, in the southwestern part of Kauai, is one of two on the island equipped with grid-forming inverters. U.S. NAVY

That April event in Kauai offers a preview of the electrical future, especially for places where utilities are now, or soon will be, relying heavily on solar photovoltaic or wind power. Similar inverters have operated for years within smaller off-grid installations. However, using them in a multimegawatt power grid, such as Kauai’s, is a relatively new idea. And it’s catching on fast: At the time of this writing, at least eight major grid-forming projects are either under construction or in operation in Australia, along with others in Asia, Europe, North America, and the Middle East.

Reaching net-zero-carbon emissions by 2050, as many international organizations now insist is necessary to stave off dire climate consequences, will require a rapid and massive shift in electricity-generating infrastructures. The International Energy Agency has calculated that to have any hope of achieving this goal would require the addition, every year, of 630 gigawatts of solar photovoltaics and 390 GW of wind starting no later than 2030—figures around four times as great as any annual tally so far.

The only economical way to integrate such high levels of renewable energy into our grids is with grid-forming inverters, which can be implemented on any technology that uses an inverter, including wind, solar photovoltaics, batteries, fuel cells, microturbines, and even high-voltage direct-current transmission lines. Grid-forming inverters for utility-scale batteries are available today from Tesla, GPTech, SMA, GE Vernova, EPC Power, Dynapower, Hitachi, Enphase, CE+T, and others. Grid-forming converters for HVDC, which convert high-voltage direct current to alternating current and vice versa, are also commercially available, from companies including Hitachi, Siemens, and GE Vernova. For photovoltaics and wind, grid-forming inverters are not yet commercially available at the size and scale needed for large grids, but they are now being developed by GE Vernova, Enphase, and Solectria.

The Grid Depends on Inertia

To understand the promise of grid-forming inverters, you must first grasp how our present electrical grid functions, and why it’s inadequate for a future dominated by renewable resources such as solar and wind power.

Conventional power plants that run on natural gas, coal, nuclear fuel, or hydropower produce electricity with synchronous generators—large rotating machines that produce AC electricity at a specified frequency and voltage. These generators have some physical characteristics that make them ideal for operating power grids. Among other things, they have a natural tendency to synchronize with one another, which helps make it possible to restart a grid that’s completely blacked out. Most important, a generator has a large rotating mass, namely its rotor. When a synchronous generator is spinning, its rotor, which can weigh well over 100 tonnes, cannot stop quickly.

The Kauai electric transmission grid operates at 57.1 kilovolts, an unusual voltage that is a legacy from the island’s sugar-plantation era. The network has grid-forming inverters at the Pacific Missile Range Facility, in the southwest, and at Kapaia, in the southeast. CHRIS PHILPOT

This characteristic gives rise to a property called system inertia. It arises naturally from those large generators running in synchrony with one another. Over many years, engineers used the inertia characteristics of the grid to determine how fast a power grid will change its frequency when a failure occurs, and then developed mitigation procedures based on that information.

If one or more big generators disconnect from the grid, the sudden imbalance of load to generation creates torque that extracts rotational energy from the remaining synchronous machines, slowing them down and thereby reducing the grid frequency—the frequency is electromechanically linked to the rotational speed of the generators feeding the grid. Fortunately, the kinetic energy stored in all that rotating mass slows this frequency drop and typically allows the remaining generators enough time to ramp up their power output to meet the additional load.

Electricity grids are designed so that even if the network loses its largest generator, running at full output, the other generators can pick up the additional load and the frequency nadir never falls below a specific threshold. In the United States, where nominal grid frequency is 60 hertz, the threshold is generally between 59.3 and 59.5 Hz. As long as the frequency remains above this point, local blackouts are unlikely to occur.
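The arithmetic behind this can be sketched with a short simulation. The Python snippet below (all numbers are hypothetical, not from any real grid) integrates the aggregate swing relation df/dt = −f0·ΔP / (2·H·S) to show how inertia slows the frequency drop while governors ramp up replacement power:

```python
# Illustrative simulation of grid frequency after losing a generator, using
# the aggregate swing relation df/dt = -f0 * dP / (2 * H * S). All numbers
# are hypothetical, not taken from any real grid.

f0 = 60.0     # nominal frequency, Hz
H = 4.0       # aggregate inertia constant, seconds
S = 1000.0    # remaining synchronized generation capacity, MVA
dP = 30.0     # generation suddenly lost, MW

dt = 0.01     # simulation step, s
f = f0
f_min = f0
t = 0.0
while t < 10.0:
    # Governors linearly ramp up replacement power over ~5 s (simplified;
    # the slower recovery back toward 60 Hz is omitted).
    imbalance = max(dP * (1.0 - t / 5.0), 0.0)   # MW still unserved
    f += -(f0 * imbalance) / (2.0 * H * S) * dt
    f_min = min(f_min, f)
    t += dt

print(f"frequency nadir: {f_min:.2f} Hz")
```

Doubling the inertia constant H halves the initial rate of frequency decline, which is why grids rich in heavy rotating machines ride through generator failures more gracefully.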

Why We Need Grid-Forming Inverters

Wind turbines, photovoltaics, and battery-storage systems differ from conventional generators because they all produce direct current (DC) electricity—they don’t have a heartbeat like alternating current does. With the exception of wind turbines, these are not rotating machines. And most modern wind turbines aren’t synchronously rotating machines from a grid standpoint—the frequency of their AC output depends on the wind speed. So that variable-frequency AC is rectified to DC before being converted to an AC waveform that matches the grid’s.

As mentioned, inverters convert the DC electricity to grid-compatible AC. A conventional, or grid-following, inverter uses power transistors that repeatedly and rapidly switch the polarity applied to a load. By switching at high speed, under software control, the inverter produces a high-frequency AC signal that is filtered by capacitors and other components to produce a smooth AC current output. So in this scheme, the software shapes the output waveform. In contrast, with synchronous generators the output waveform is determined by the physical and electrical characteristics of the generator.

Grid-following inverters operate only if they can “see” an existing voltage and frequency on the grid that they can synchronize to. They rely on controls that sense the frequency of the voltage waveform and lock onto that signal, usually by means of a technology called a phase-locked loop. So if the grid goes down, these inverters will stop injecting power because there is no voltage to follow. A key point here is that grid-following inverters do not deliver any inertia.
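To illustrate the "following" idea, here is a minimal Python sketch of a device estimating grid frequency from the measured voltage waveform (a crude stand-in for a real phase-locked loop; the sampling rate and frequency are made-up values):

```python
import math

# Minimal sketch of the "following" problem: estimating grid frequency from
# zero crossings of a sampled voltage waveform. A real inverter uses a
# phase-locked loop; this crude estimator and its numbers are illustrative.

fs = 10_000                    # sampling rate, Hz
f_grid = 59.8                  # actual (unknown to the device) frequency, Hz
n = int(0.5 * fs)              # half a second of samples
v = [math.sin(2 * math.pi * f_grid * i / fs) for i in range(n)]

# Rising zero crossings mark the start of each AC cycle
crossings = [i for i in range(n - 1) if v[i] < 0 <= v[i + 1]]
periods = [(b - a) / fs for a, b in zip(crossings, crossings[1:])]
f_est = 1.0 / (sum(periods) / len(periods))
print(f"estimated frequency: {f_est:.2f} Hz")
```

If the grid voltage disappears, the list of crossings is empty and there is nothing to estimate; in the same way, a grid-following inverter must stop injecting power when there is no voltage to follow.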

Przemyslaw Koralewicz, David Corbus, Shahil Shah, and Robb Wallen, researchers at the National Renewable Energy Laboratory, evaluate a grid-forming inverter used on Kauai at the NREL Flatirons Campus. DENNIS SCHROEDER/NREL

Grid-following inverters work fine when inverter-based power sources are relatively scarce. But as the levels of inverter-based resources rise above 60 to 70 percent, things start to get challenging. That’s why system operators around the world are beginning to put the brakes on renewable deployment and curtailing the operation of existing renewable plants. For example, the Electric Reliability Council of Texas (ERCOT) regularly curtails the use of renewables in that state because of stability issues arising from too many grid-following inverters.

It doesn’t have to be this way. When the level of inverter-based power sources on a grid is high, the inverters themselves could support grid-frequency stability. And when the level is very high, they could form the voltage and frequency of the grid. In other words, they could collectively set the pulse, rather than follow it. That’s what grid-forming inverters do.

The Difference Between Grid Forming and Grid Following

Grid-forming (GFM) and grid-following (GFL) inverters share several key characteristics. Both can inject current into the grid during a disturbance. Also, both types of inverters can support the voltage on a grid by controlling their reactive power, which is the product of the voltage and the current that are out of phase with each other. Both kinds of inverters can also help prop up the frequency on the grid, by controlling their active power, which is the product of the voltage and current that are in phase with each other.

What makes grid-forming inverters different from grid-following inverters is mainly software. GFM inverters are controlled by code designed to maintain a stable output voltage waveform, but they also allow the magnitude and phase of that waveform to change over time. What does that mean in practice? The unifying characteristic of all GFM inverters is that they hold a constant voltage magnitude and frequency on short timescales—for example, a few dozen milliseconds—while allowing that waveform’s magnitude and frequency to change over several seconds to synchronize with other nearby sources, such as traditional generators and other GFM inverters.

Some GFM inverters, called virtual synchronous machines, achieve this response by mimicking the physical and electrical characteristics of a synchronous generator, using control equations that describe how it operates. Other GFM inverters are programmed to simply hold a constant target voltage and frequency, allowing that target voltage and frequency to change slowly over time to synchronize with the rest of the power grid following what is called a droop curve. A droop curve is a formula used by grid operators to indicate how a generator should respond to a deviation from nominal voltage or frequency on its grid. There are many variations of these two basic GFM control methods, and other methods have been proposed as well.
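A droop curve is simple enough to state in a few lines of code. This Python sketch, with hypothetical ratings and a typical 5 percent droop value, shows how a frequency deviation translates into a power command:

```python
# Sketch of a frequency-droop curve: how a deviation from nominal frequency
# translates into a power command. The rating, setpoint, and 5 percent
# droop value are hypothetical but typical.

F_NOMINAL = 60.0   # Hz
P_RATED = 10.0     # MW
DROOP = 0.05       # a 5 percent frequency error commands 100 percent power

def droop_power(f_measured, p_setpoint):
    """Power command (MW) from the droop curve, clamped to the rating."""
    deviation = (F_NOMINAL - f_measured) / F_NOMINAL  # per-unit frequency error
    p = p_setpoint + (deviation / DROOP) * P_RATED
    return max(0.0, min(P_RATED, p))

print(droop_power(60.0, 5.0))   # at nominal frequency: hold the 5 MW setpoint
print(droop_power(59.7, 5.0))   # frequency sag: ramp up along the curve
```

The same curve read the other way tells a GFM inverter how far to let its own frequency target drift as it synchronizes with its neighbors.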

At least eight major grid-forming projects are either under construction or in operation in Australia, along with others in Asia, Europe, North America, and the Middle East.

To better understand this concept, imagine that a transmission line shorts to ground or a generator trips due to a lightning strike. (Such problems typically occur multiple times a week, even on the best-run grids.) The key advantage of a GFM inverter in such a situation is that it does not need to quickly sense frequency and voltage decline on the grid to respond. Instead, a GFM inverter just holds its own voltage and frequency relatively constant by injecting whatever current is needed to achieve that, subject to its physical limits. In other words, a GFM inverter is programmed to act like an AC voltage source behind some small impedance (impedance is the opposition to AC current arising from resistance, capacitance, and inductance). In response to an abrupt drop in grid voltage, its digital controller increases current output by allowing more current to pass through its power transistors, without even needing to measure the change it’s responding to. In response to falling grid frequency, the controller increases power.

GFL controls, on the other hand, need to first measure the change in voltage or frequency, and then take an appropriate control action before adjusting their output current to mitigate the change. This GFL strategy works if the response does not need to be superfast (as in microseconds). But as the grid becomes weaker (meaning there are fewer voltage sources nearby), GFL controls tend to become unstable. That’s because by the time they measure the voltage and adjust their output, the voltage has already changed significantly, and fast injection of current at that point can potentially lead to a dangerous positive feedback loop. Adding more GFL inverters also tends to reduce stability because it becomes more difficult for the remaining voltage sources to stabilize them all.

When a GFM inverter responds with a surge in current, it must do so within tightly prescribed limits. It must inject enough current to provide some stability but not enough to damage the power transistors that control the current flow.

Increasing the maximum current flow is possible, but it requires increasing the capacity of the power transistors and other components, which can significantly increase cost. So most inverters (both GFM and GFL) don’t provide current surges larger than about 10 to 30 percent above their rated steady-state current. For comparison, a synchronous generator can inject around 500 to 700 percent more than its rated current for several AC line cycles (around a tenth of a second, say) without sustaining any damage. For a large generator, this can amount to thousands of amperes. Because of this difference between inverters and synchronous generators, the protection technologies used in power grids will need to be adjusted to account for lower levels of fault current.
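The two behaviors described above, the natural voltage-source response and the current limit, can be combined in a small phasor sketch. All per-unit values here are illustrative:

```python
# Phasor sketch of a grid-forming inverter: a voltage source E behind a
# small impedance Z. The current follows from the voltage difference alone,
# with no measurement loop, and is then clamped to protect the power
# transistors. All per-unit values are illustrative.

E = 1.0 + 0j     # held internal voltage, per unit
Z = 0.05j        # small (mostly inductive) coupling impedance, per unit
I_LIMIT = 1.2    # roughly 20 percent above rated current, per unit

def gfm_current(v_grid):
    """Injected current: natural voltage-source response, then clamped."""
    i = (E - v_grid) / Z
    if abs(i) > I_LIMIT:
        i *= I_LIMIT / abs(i)   # keep the phase, cap the magnitude
    return i

print(abs(gfm_current(0.99 + 0j)))  # healthy grid: modest current flows
print(abs(gfm_current(0.80 + 0j)))  # 20 percent sag: response hits the clamp
```

A synchronous generator behaves like the unclamped version of this model, which is why it can briefly deliver several times its rated current while an inverter cannot.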

What the Kauai Episode Reveals

The 2 April event on Kauai offered an unusual opportunity to study the performance of GFM inverters during a disturbance. After the event, one of us (Andy Hoke) along with Jin Tan and Shuan Dong and some coworkers at the National Renewable Energy Laboratory, collaborated with the Kauai Island Utility Cooperative (KIUC) to get a clear understanding of how the remaining system generators and inverter-based resources interacted with each other during the disturbance. What we determined will help power grids of the future operate at levels of inverter-based resources up to 100 percent.

NREL researchers started by creating a model of the Kauai grid. We then used a technique called electromagnetic transient (EMT) simulation, which yields information on the AC waveforms on a sub-millisecond basis. In addition, we conducted hardware tests at NREL’s Flatirons Campus on a scaled-down replica of one of Kauai’s solar-battery plants, to evaluate the grid-forming control algorithms for inverters deployed on the island.

A recording of the frequency responses to two different grid disruptions on Kauai shows the advantages of grid-forming inverters. The red trace shows the relatively contained response with two grid-forming inverter systems in operation. The blue trace shows the more extreme response to an earlier, comparable disruption, at a time when there was only one grid-forming plant online. NATIONAL RENEWABLE ENERGY LABORATORY

At 4:25 pm on 2 April, there were two large GFM solar-battery plants, one large GFL solar-battery plant, one large oil-fired turbine, one small diesel plant, two small hydro plants, one small biomass plant, and a handful of other solar generators online. Immediately after the oil-fired turbine failed, the AC frequency dropped quickly from 60 Hz to just above 59 Hz during the first 3 seconds [red trace in the figure above]. As the frequency dropped, the two GFM-equipped plants quickly ramped up power, with one plant quadrupling its output and the other doubling its output in less than 1/20 of a second.

In contrast, the remaining synchronous machines contributed some rapid but unsustained active power via their inertial responses, but took several seconds to produce sustained increases in their output. It is safe to say, and it has been confirmed through EMT simulation, that without the two GFM plants, the entire grid would have experienced a blackout.

Coincidentally, an almost identical generator failure had occurred a couple of years earlier, on 21 November 2021. In this case, only one solar-battery plant had grid-forming inverters. As in the 2023 event, the three large solar-battery plants quickly ramped up power and prevented a blackout. However, the frequency and voltage throughout the grid began to oscillate around 20 times per second [the blue trace in the figure above], indicating a major grid stability problem and causing some customers to be automatically disconnected. NREL’s EMT simulations, hardware tests, and controls analysis all confirmed that the severe oscillation was due to a combination of grid-following inverters tuned for extremely fast response and a lack of sufficient grid strength to support those GFL inverters.

In other words, the 2021 event illustrates how too many conventional GFL inverters can erode stability. Comparing the two events demonstrates the value of GFM inverter controls—not just to provide fast yet stable responses to grid events but also to stabilize nearby GFL inverters and allow the entire grid to maintain operations without a blackout.

Australia Commissions Big GFM Projects

In sunny South Australia, solar power now routinely supplies all or nearly all of the power needed during the middle of the day. Shown here is the chart for 31 December 2023, in which solar supplied slightly more power than the state needed at around 1:30 p.m. AUSTRALIAN ENERGY MARKET OPERATOR (AEMO)

The next step for inverter-dominated power grids is to go big. Some of the most important deployments are in South Australia. As in Kauai, the South Australian grid now has such high levels of solar generation that it regularly experiences days in which the solar generation can exceed the peak demand during the middle of the day [see figure at left].

The most well-known of the GFM resources in Australia is the Hornsdale Power Reserve in South Australia. This 150-MW/194-MWh system, which uses Tesla’s Powerpack 2 lithium-ion batteries, was originally installed in 2017 and was upgraded to grid-forming capability in 2020.

Australia’s largest battery with grid-forming inverters (500 MW/1,000 MWh) is expected to start operating in Liddell, New South Wales, later this year. This battery, from AGL Energy, will be located at the site of a decommissioned coal plant. This and several other large GFM systems are expected to begin operating on the South Australian grid over the next year.

The leap from power systems like Kauai’s, with a peak demand of roughly 80 MW, to ones like South Australia’s, at 3,000 MW, is a big one. But it’s nothing compared to what will come next: grids with peak demands of 85,000 MW (in Texas) and 742,000 MW (the rest of the continental United States).

Several challenges need to be solved before we can attempt such leaps. They include creating standard GFM specifications so that inverter vendors can create products. We also need accurate models that can be used to simulate the performance of GFM inverters, so we can understand their impact on the grid.

Some progress in standardization is already happening. In the United States, for example, the North American Electric Reliability Corporation (NERC) recently published a recommendation that all future large-scale battery-storage systems have grid-forming capability.

Standards for GFM performance and validation are also starting to emerge in some countries, including Australia, Finland, and Great Britain. In the United States, the Department of Energy recently backed a consortium to tackle building and integrating inverter-based resources into power grids. Led by the National Renewable Energy Laboratory, the University of Texas at Austin, and the Electric Power Research Institute, the Universal Interoperability for Grid-Forming Inverters (UNIFI) Consortium aims to address the fundamental challenges in integrating very high levels of inverter-based resources with synchronous generators in power grids. The consortium now has over 30 members from industry, academia, and research laboratories.

One of Australia’s major energy-storage facilities is the Hornsdale Power Reserve, at 150 megawatts and 194 megawatt-hours. Hornsdale and another facility, the Riverina Battery, are the country’s two largest grid-forming installations. NEOEN

In addition to specifications, we need computer models of GFM inverters to verify their performance in large-scale systems. Without such verification, grid operators won’t trust the performance of new GFM technologies. Using GFM models built by the UNIFI Consortium, system operators and utilities such as the Western Electricity Coordinating Council, American Electric Power, and ERCOT (the Texas grid-reliability organization) are conducting studies to understand how GFM technology can help their grids.

Getting to a Greener Grid

As we progress toward a future grid dominated by inverter-based generation, a question naturally arises: Will all inverters need to be grid-forming? No. Several studies and simulations have indicated that we’ll need just enough GFM inverters to strengthen each area of the grid so that nearby GFL inverters remain stable.

How many GFMs is that? The answer depends on the characteristics of the grid and other generators. Some initial studies have shown that a power system can operate with 100 percent inverter-based resources if around 30 percent are grid-forming. More research is needed to understand how that number depends on details such as the grid topology and the control details of both the GFLs and the GFMs.

Ultimately, though, electricity generation that is completely carbon free in its operation is within our grasp. Our challenge now is to make the leap from small to large to very large systems. We know what we have to do, and it will not require technologies that are far more advanced than what we already have. It will take testing, validation in real-world scenarios, and standardization so that synchronous generators and inverters can unify their operations to create a reliable and robust power grid. Manufacturers, utilities, and regulators will have to work together to make this happen rapidly and smoothly. Only then can we begin the next stage of the grid’s evolution, to large-scale systems that are truly carbon neutral.

This article appears in the May 2024 print issue as “A Path to 100 Percent Renewable Energy.”


Green Iron Corridors: A New Way to Transform the Steel Business

RMI

By rethinking the ironmaking process around the world, we can bring clean steel one step closer.

If we are to halt runaway climate pollution while meeting global demand, the way we make steel needs to change. Currently, steel contributes 11 percent of global CO2 emissions, and demand for the product is expected to increase by 12 percent between now and 2050.

Clean steel will only be possible with a rapid shift to low-emission production technologies — such as green hydrogen-based methods. To align with the International Energy Agency’s (IEA) 1.5°C scenario, an estimated 35 percent of ore-based steel must be produced through hydrogen-based methods (specifically hydrogen-based direct reduction, or H2-DRI) in 2050.

To decarbonize steel effectively, we need to look at all the processes that go into its production. Ironmaking — the most carbon-intensive step in the steelmaking process — is critical to this goal.
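The chemistry of hydrogen-based ironmaking sets a useful floor on hydrogen demand. As a quick stoichiometric sketch, the standard reduction reaction Fe2O3 + 3 H2 → 2 Fe + 3 H2O implies a theoretical minimum amount of hydrogen per tonne of iron (the calculation below is ours, for illustration):

```python
# Theoretical minimum hydrogen for direct reduction of iron ore:
#   Fe2O3 + 3 H2 -> 2 Fe + 3 H2O
# Real H2-DRI plants consume more than this stoichiometric floor.

M_FE = 55.845   # molar mass of iron, g/mol
M_H2 = 2.016    # molar mass of hydrogen, g/mol

# 3 mol of H2 reduce enough Fe2O3 to yield 2 mol of Fe
h2_per_t_fe = (3 * M_H2) / (2 * M_FE) * 1000   # kg of H2 per tonne of iron
print(f"~{h2_per_t_fe:.0f} kg H2 per tonne of iron (theoretical minimum)")
```

Multiply that floor by global ore-based steel output and the scale of the required green hydrogen supply becomes clear.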

RMI and the Green Hydrogen Catapult (GHC), a coalition of ambitious green hydrogen market leaders, have embraced the challenge of decarbonizing the steel value chain and are actively working towards this goal.

Introducing Green Iron Corridors

To understand how we get there, we have to understand where we’re coming from. Traditionally, steelmaking all happened under one roof, where large integrated facilities supported both iron and steelmaking and where coal was used as the primary fuel and reductant. Because of this, cheap coal resources drove the geographic patterns of iron and steelmaking in the last century, with an advantage accruing over time to those who moved fastest to integrate into highly optimized facilities with economies of scale.

But with the advent of greener methods whose cost competitiveness is driven by renewable energy availability and scalability, ironmaking will be drawn to new geographies rich in iron ore and renewable resources (demonstrated by new H2-DRI projects underway in Sweden where there is a combination of high-quality ore and cost-competitive wind and hydropower energy). It could prove cost-effective, and win-win, to split these processes up: with ironmaking taking place in one location with abundant ore and renewable energy potential and steelmaking happening somewhere else with steelmaking capabilities — and demand — already in place. This process splitting could open new markets for regions with abundant ore but little steelmaking capability and accelerate the global transition to low emissions iron and steel production, satisfying climate-conscious buyers who are serious about green steel. We call these potential export-import routes green iron corridors.

Location, location, location

The business case for green iron corridors becomes clearer when export and import locations are chosen strategically, based on resources and needs. A sizeable portion of the cost to produce green steel, approximately 15 to 40 percent, is driven by the cost of renewable hydrogen, so locations with cost-competitive hydrogen production and iron ore will have a competitive advantage. Depending on route-specific tradeoffs between the quality of renewable resources, the cost and distance of seaborne transport, and the cost and efficiency of shipping a more finished iron product rather than iron ore, the reduction step could occur either at the mining location or at a secondary location where renewable energy — and thus reduction — is particularly competitive, before final shipment to steelmaking centers. Current subsidies will unlock cost reductions for hydrogen-based iron production in developed countries today, but ultimately the supply chain will be optimized around competitive renewable energy resources.

Another key determining factor on the export side will be the iron ore resource — not all ore is created equal. Typically, DRI with Electric Arc Furnace (EAF) operations prefer iron ore pellets with an iron content of at least 67 percent and lower concentrations of key impurities such as silica, phosphorous, and alumina. To meet this requirement, ores are often beneficiated at the mine to separate the iron oxides from the impurities, producing a concentrate that is then pelletized. This process is already routinely done globally, favoring locations with high-grade ore such as Canada and Brazil, but also with low-grade ore, such as the 20–35 percent iron ore mined and upgraded in the United States.

Alternatively, for ores that are difficult to upgrade to DRI-EAF grade because of their specific composition, work is underway to use additional downstream processing technologies (e.g., the Electric Smelting Furnace, or ESF) that are already proven in other industries (e.g., ferroalloy production) to remove the impurities. This downstream processing could provide a second life and an alternative pathway for Basic Oxygen Furnace use alongside DRIs. Our cost analysis indicates that either option is economically feasible, with the preference driven by existing infrastructure and ore specifics such as starting grade, iron-bearing mineral type (magnetite vs. hematite), and impurity composition. Among major iron ore miners, investments are being made in both options, with Vale, Rio Tinto, and Fortescue increasing DR-grade pellet production via upstream beneficiation, and Rio Tinto and BHP investing in downstream ESF pilot facilities.

Given these various drivers, an understanding of today’s most competitive iron ore reduction locations requires a combined view of subsidies, iron ore quality and management, shipping distances, and renewable resources. RMI, with support from technical advisors and GHC members, has developed technoeconomic modeling to identify these cost-competitive regions and evaluate corridor tradeoffs between these factors. The modeling confirms lowest-cost export options with high-grade iron ore co-located with favorable renewable energy resources and enabling policies can produce iron at $390 per ton — comparable to recent global prices.

Shown in Exhibits 1 and 2, these export locations include the United States (due to IRA subsidies and tax credits), South Africa, Canada (with tax credits), Mauritania, Australia, Brazil, and Chile. Pairing these export regions to importers with strong steelmaking capacity, reliance on iron ore imports, and demand for green hydrogen to meet energy security and decarbonization targets enables a clear business case for top importer regions of Europe, Japan, and South Korea to develop a blueprint for green iron corridors.

Green iron corridors can offer efficiency, cost, and growth opportunities across the steel value chain. As countries establish hydrogen strategies, regions such as Northern Africa and Australia are emerging as promising green hydrogen exporters, while others will rely on imports to supplement their domestic production capabilities; for example, the EU’s Hydrogen Strategy targets importing 10 million tons of green hydrogen by 2030. When these export and import regions overlap with existing iron ore trade flows, transporting finished green iron yields both cost and energy savings compared with transporting hydrogen and ore separately. Few infrastructure changes will be needed on the shipping side, as briquetted green iron, also known as Hot Briquetted Iron (HBI), can be handled and shipped much like existing iron ore and can act as a vector for hydrogen trade.

Lower costs, faster transition

The main benefit of a green iron corridors approach for importers is to lower costs and enable a faster transition for their steel sector. Crucially, it can help avoid some of the domestic H2-DRI production infrastructure spending that is required for 1.5°C alignment. At least $5.5 billion in government funds has been allocated to 10 commercial-scale hydrogen-ready DRI facilities in Europe, but even with these generous subsidies, companies are struggling to reach final investment decisions, citing high costs for domestic hydrogen as a financial barrier. Instead, companies are already considering green iron imports into Europe from regions with lower hydrogen costs as a way to lower their emissions cost-efficiently while still keeping steel production in Europe. To put the $5.5 billion of government investment in perspective: transitioning the entire European integrated steel industry to hydrogen-based steelmaking is estimated to require $105 billion in capital investment for new steel facilities and at least $330 billion for the associated hydrogen and electricity production capacity, a total of $435 billion.

Building out the infrastructure to provide the roughly 5 million tons of renewable hydrogen needed to transition integrated steel production would require 250–350 TWh per year of electricity (a 10 percent increase from current generation in Europe), supplied by 150–350 GW of new renewables and using 4.5–10 million acres of a land-constrained region. Instead, some of this infrastructure spending and buildout can be avoided, while also saving between 5 and 40 percent on the cost of clean steel, by importing green iron rather than producing it domestically (shown in Exhibit 3 with Germany as an example importer). These cost savings can be achieved while still maintaining domestic steelmaking activities, which account for approximately 75 percent of direct iron and steel jobs.
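As a rough sanity check, the electricity requirement follows directly from typical electrolyzer energy intensities. The 50–70 kWh per kilogram of hydrogen assumed below is a commonly cited range for today's electrolyzers, not a figure from this article:

```python
# Sanity check: electricity needed to produce ~5 million tons of hydrogen,
# assuming an electrolyzer intensity of roughly 50-70 kWh per kg of H2
# (an assumed range typical of current systems, not from the article).
H2_TONS = 5e6                     # ~5 million tons of renewable hydrogen

for kwh_per_kg in (50, 70):
    # tons -> kg (x1000), then kWh -> TWh (/1e9)
    twh = H2_TONS * 1000 * kwh_per_kg / 1e9
    print(f"{kwh_per_kg} kWh/kg -> {twh:.0f} TWh/yr")
# The two ends of the range give 250 and 350 TWh/yr, matching the
# 250-350 TWh per year quoted above.
```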

In Europe, the combination of committed buyers willing to pay 20–30 percent premiums for green products and the implementation of carbon taxes makes a strong business case for establishing these trade corridors sooner rather than later. As EU Emissions Trading Scheme (ETS) free allowances phase out and the Carbon Border Adjustment Mechanism (CBAM) phases in by 2034, cost savings can also be realized compared with fossil-based steel either produced in Europe (55% of consumption) or imported into it (34% of consumption). For example, steel made in Germany from the lowest-cost green iron imports will cost roughly the same as domestic fossil-based steel with projected ETS carbon taxes in 2028, and will be significantly cheaper by 2030 (up to 20 percent cost savings). Although cost is important, it is not everything: other factors such as available skilled workforce, geopolitical risk, water and land availability, government support, energy security and equity, hydrogen readiness and enabling policy, and stakeholder engagement will also play a role in selecting key export and import locations.

The idea of green iron trade is gaining momentum, with interest from steel incumbents and on-ground development from new companies. Yet, advancement from concepts and MOUs to construction and production remains to be seen. To accelerate green iron supply chains, RMI and the Green Hydrogen Catapult will unite extensive sector experience, analytical capabilities, and system-level assessments to promote the advantage of corridor networks. Combining our research with supply chain convenings, we aim to kick-start the launch of first-of-a-kind green iron corridors with private and public partnerships. Action is needed across the value chain to make this a reality: governments must show intent, steelmakers in importing regions must show appetite, and iron producers and iron ore miners in exporting regions must begin laying the investment foundations. We can decarbonize steel production efficiently and cost-effectively with green iron corridors: now is the time to accelerate the transition.

To learn more or get involved, please contact Chathu Gamage (cgamage@rmi.org) or Sascha Flesch (sflesch@rmi.org).

About the Green Hydrogen Catapult:

The Green Hydrogen Catapult aims to expedite the global adoption of green hydrogen by increasing production capacity 50-fold, deploying 80 GW of renewables-powered electrolyzers, and cutting costs by 50% to less than $2 per kg of green H2. This coalition, supported by the UN High-Level Climate Champions and RMI, brings together leaders in the green hydrogen market to address the challenge of decarbonizing hard-to-electrify sectors. It encourages collaboration across the green hydrogen value chain, inviting new members to join its mission to scale a green hydrogen economy. For more information please visit: https://greenh2catapult.com


Six innovative ways to float skyscraper-sized wind turbines


Yes, you read that right – float. You may have seen a wind turbine in the sea before, but chances are you were looking at a “fixed” turbine – that is, one that sits on top of a foundation drilled into the seabed. For the new frontier of offshore wind power, the focus is on floating wind turbines. In this case, the turbines are supported by floating structures that bob and sway in response to waves and wind and are moored with chains and anchored to the seafloor.

This is becoming the focus of the sector for the simple reason that most wind blows above deep water, where building fixed platforms would be too expensive or simply impossible. Designing these new floating platforms is a true engineering challenge, and is a focus of my academic research.

These wind turbines are enormous, reaching up to 240m tall – about the size of a skyscraper. Since they are so tall, strong winds far above the sea surface tend to make the turbine want to tilt, so platform designs focus on minimising this tilt while still being cost-competitive with other forms of energy.

There are more than 100 ideas for platform designs, but we can broadly group them into the following six categories:

1. Spar

Spars are narrow, deep platforms with weight added to the bottom to counteract the wind force (this is called “ballast”). They are usually relatively easy to make because they normally consist of just one cylinder.

However, they can extend 100 metres or more underwater, which means they can’t be deployed in normal docks, which are not deep enough. Specialist installation procedures are required to install the turbine once the platform has been towed into deep water.

2. Barge

Barges are wide, shallow platforms that use buoyancy far from the centre of the structure to counteract the wind force on the tower. As they usually extend less than 10 metres underwater, they do not need any specialist deep-water docks or installation vessels.

However, they can be difficult to make because the platform is usually a single, large unit with a complex shape.

3. Tension-leg platform

Tension-leg platforms, or TLPs, use taut mooring lines to connect the platform to the seabed and stop the turbine from tilting in the wind.

These platforms are usually smaller and lighter than the other types, which makes them easier to fit at a standard port. Also, their seabed “footprint” is small due to the taut lines.

However, the platforms are usually not stable until attached to their mooring lines, meaning that a special towing and installation solution is required.

4. Semi-submersible

Semi-submersibles consist of three, four or five connected vertical cylinders, with the turbine in the middle or above one of the columns. The platform utilises buoyancy far from the centre (similar to the barge) and ballast at the base of each column (similar to the spar).

Like barges, semi-submersibles do not require specialist tow-out equipment and work for a wide range of water depths. Manufacturing is again a challenge.
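The stability mechanisms described so far can be compared with a standard naval-architecture quantity, the metacentric height GM: a positive GM means the platform generates a restoring moment when tilted. Below is a minimal sketch for an idealized vertical cylinder, using illustrative dimensions rather than any real platform's:

```python
import math

def metacentric_height(r, draft, kg):
    """GM = KB + BM - KG for a floating vertical circular cylinder.

    KB = draft / 2 is the center of buoyancy above the keel;
    BM = I / V, with waterplane second moment I = pi * r^4 / 4 and
    displaced volume V = pi * r^2 * draft; KG is the center of mass.
    """
    kb = draft / 2
    bm = (math.pi * r**4 / 4) / (math.pi * r**2 * draft)  # = r^2 / (4 * draft)
    return kb + bm - kb * 0 - kg

# Illustrative (made-up) dimensions, in metres:
spar  = metacentric_height(r=5,  draft=90, kg=30)  # deep, ballasted: KB >> KG
barge = metacentric_height(r=30, draft=8,  kg=10)  # wide waterplane: large BM
print(f"spar GM  ~ {spar:.1f} m")
print(f"barge GM ~ {barge:.1f} m")
# Both GMs come out positive, but for different reasons: the spar's
# stability comes almost entirely from its low center of mass, the
# barge's from its wide waterplane.
```

A tension-leg platform is different again: its mooring tension, not hydrostatics, supplies most of the restoring moment, which is why it is unstable before the taut lines are attached.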

5. Combination-type

The four categories above are the more “traditional” platforms, influenced by their predecessors in the oil and gas industry. Since the 1960s, floating platforms have meant huge oil rigs can access deeper water sites (the deepest is over 2,000m). Most of these oil rigs in deep water are either semi-submersibles, anchored to the seabed with chains, or TLPs, connected to the seabed with taut cables.

More recently, there has been a trend towards platforms more specialised to floating wind. Specifically, some use a combination of the stability mechanisms, taking advantages from each of the previous designs.

For example, “lowerable ballast” platforms look like traditional semi-submersible or barge platforms, but with a weight hanging from taut cables.

During turbine installation at the port and tow-out, the weight is raised, so that a traditional (non-deep) dock can be used and no specialist equipment is needed. At the site of installation, the weight is lowered and the platform gets extra stability from a low centre of mass.

Other designs use the benefits of stability from taut mooring lines (similar to a TLP) but are designed to be stable during tow-out and so don’t need a special installation vessel. For example, the picture below shows the X1 Wind platform:

The taut mooring lines are attached to a single column, which is installed initially. The rest of the platform, which is self-stable, is then towed out and connected to the pre-installed column with the taut mooring lines. The platform uses the extra stability from the mooring lines but without the tow-out instability typical of TLPs.

6. Hybrid platforms

These platforms add another type of renewable energy, most commonly a wave energy converter. This increases the overall amount of energy generated, and reduces costs as power cables, maintenance and other infrastructure can be shared.

A wave energy converter also reduces platform motion, which in turn increases the power performance from the turbine.

Room for improvement

Four floating offshore wind farms have already been built, the largest of which was opened in 2023 off the coast of Norway. Two of these farms use the Hywind spar design and two use the WindFloat semi-submersible.

Eighteen other platform designs have reached at-sea testing, including at least one of each of the categories described above. Some plan to build floating farms in the next few years, and additional early-stage designs plan to deploy their own prototype devices in the near future.

Interestingly, platforms are actually diverging in design. After many years, wind turbines have mostly converged on the three-bladed design that you see today, but there has been no such convergence yet on a consensus “best” floating platform. This suggests significant improvements are still possible, especially in terms of reducing motion and decreasing cost.





People Hate the Idea of Car-Free Cities—Until They Live in One


London had a problem. In 2016, more than 2 million of the city’s residents—roughly a quarter of its population—lived in areas with illegal levels of air pollution; areas that also contained nearly 500 of the city’s schools. That same air pollution was prematurely killing as many as 36,000 people a year. Much of it was coming from transport: a quarter of the city’s carbon emissions were from moving people and goods, with three-quarters of that emitted by road traffic.

But in the years since, carbon emissions have fallen. There’s also been a 94 percent reduction in the number of people living in areas with illegal levels of nitrogen dioxide, a pollutant that causes lung damage. The reason? London has spent years and millions of pounds reducing the number of motorists in the city.

It’s far from alone. From Oslo to Hamburg and Ljubljana to Helsinki, cities across Europe have started working to reduce their road traffic in an effort to curb air pollution and climate change.

But while it’s certainly having an impact (Ljubljana, one of the earliest places to transition away from cars, has seen sizable reductions in carbon emissions and air pollution), going car-free is a lot harder than it seems. Not only has it led to politicians and urban planners facing death threats and being doxxed, it has forced them to rethink the entire basis of city life.

London’s car-reduction policies come in a variety of forms. There are charges for dirtier vehicles and for driving into the city center. Road layouts in residential areas have been redesigned, with one-way systems and bollards, barriers, and planters used to reduce through-traffic (creating what are known as “low-traffic neighborhoods”—or LTNs). And schemes to get more people cycling and using public transport have been introduced. The city has avoided the kind of outright car bans seen elsewhere in Europe, such as in Copenhagen, but nevertheless things have changed.

“The level of traffic reduction is transformative, and it’s throughout the whole day,” says Claire Holland, leader of the council in Lambeth, a borough in south London. Lambeth now sees 25,000 fewer daily car journeys than before its LTN scheme was put in place in 2020, even after adjusting for the impact of the pandemic. Meanwhile, there was a 40 percent increase in cycling and similar rises in walking and scooting over that same period.

What seems to work best is a carrot-and-stick approach—creating positive reasons to take a bus or to cycle rather than just making driving harder. “In crowded urban areas, you can’t just make buses better if those buses are still always stuck in car traffic,” says Rachel Aldred, professor of transport at the University of Westminster and director of its Active Travel Academy. “The academic evidence suggests that a mixture of positive and negative characteristics is more effective than either on their own.”

For countries looking to cut emissions, cars are an obvious target. They make up a big proportion of a country’s carbon footprint, accounting for one-fifth of all emissions across the European Union. Of course, urban driving doesn’t make up the majority of a country’s car use, but the kind of short journeys taken when driving in the city are some of the most obviously wasteful, making cities an ideal place to start if you’re looking to get people out from behind the wheel. That, and the fact that many city residents are already car-less (just 40 percent of people in Lambeth own cars, for example) and that cities tend to have better public transport alternatives than elsewhere.

Plus, traffic-reduction programmes also have impacts beyond reducing air pollution and carbon emissions. In cities like Oslo and Helsinki, thanks to car-reduction policies, entire years have passed without a single road traffic death. It’s even been suggested that needing less parking could free up space to help ease the chronic housing shortage felt in so many cities.

But as effective as policies to end or reduce urban car use have been, they’ve almost universally faced huge opposition. When Oslo proposed in 2017 that its city center should be car-free, the backlash saw the idea branded as a “Berlin Wall against motorists.” The plan ended up being downgraded into a less ambitious scheme consisting of smaller changes, like removing car parking and building cycle lanes to try to lower the number of vehicles.

In London, the introduction of LTNs has also led to a massive backlash. In the east London borough of Hackney, one councilor and his family were sent death threats due to their support for the programme. Bollards were regularly graffitied, while pro-LTN activists were accused of “social cleansing.” It was suggested that low-traffic areas would drive up house prices and leave the only affordable accommodation on unprotected roads. “It became very intimidating,” says Holland. “I had my address tweeted out twice, with sort of veiled threats from people who didn’t even live in the borough saying that we knew they knew where I lived.”

Part of that response is a testament to how much our cities, and by extension, our lives are designed around cars. In the US, between 50 and 60 percent of the downtowns of many cities are dedicated to parking alone. While in the UK that figure tends to be smaller, designing streets to be accessible to a never-ending stream of traffic has been the central concern of most urban planning since the Second World War. It’s what led to the huge sprawl of identikit suburban housing on the outskirts of cities like London, each sporting its own driveway and ample road access.

“If you propose this idea to the average American, the response is: if you take my car away from me, I will die,” says J. H. Crawford, the author of the book Carfree Cities and a leading figure in the movement to end urban car use. “If you do that overnight, without making any other provisions, that’s actually approximately correct.” Having the right alternatives to cars is therefore vital to reducing city traffic.

And any attempts to reduce urban car use tend to do better when designed from the bottom up. Barcelona’s “superblocks” programme, which takes sets of nine blocks within its grid system and limits cars to the roads around the outside of the set (as well as reducing speed limits and removing on-street parking), was shaped by resident input at every stage of the process, from design to implementation. Early indicators suggest the policy has been wildly popular with residents; it has seen nitrogen dioxide air pollution fall by 25 percent in some areas and will prevent an estimated 667 premature deaths each year, saving an estimated 1.7 billion euros.

When it comes to design, there’s also the question of access. Whether it’s emergency services needing to get in or small businesses awaiting deliveries, there’s an important amount of “last mile” traffic—transport that gets people or things to the actual end point of their journey—that is vital to sustaining an urban area. If you want to reduce traffic, you have to work around that and think of alternative solutions—such as allowing emergency vehicles access to pedestrianized areas, or even using automatic number plate recognition to exempt emergency vehicles from the camera checks that are used to police through-traffic in LTNs (which is what Lambeth is doing, Holland says).

But even then, it’s often just hard to convince people an entirely different city layout is possible. Getting people to accept that how they live alongside cars can be changed—say, with an LTN—takes time. But government surveys of the UK’s recently implemented LTNs have indicated that support from residents for such schemes increases over time. “If you start seeing more and more of those kinds of things, things become thinkable,” explains Aldred. If you start unpicking the idea that car use can’t be changed, “it starts to become possible to do more and more things without cars for people.”

The other issue is that, to put it simply, cars are never just cars. They’re interwoven into our culture and consumption as symbols of affluence, independence, and success, and the aspiration to achieve those things in future. “A man who, beyond the age of 26, finds himself on a bus can count himself a failure,” the British prime minister Margaret Thatcher reportedly once said. “That’s how we got in this mess in the first place, though,” says Crawford. “Everybody saw that the rich people were driving cars, and they wanted to too.”

That divide goes some way to explaining why the opposition to car-reduction schemes is often so extreme and can devolve into a “culture war”—which is what Holland has found in her experience with LTNs. But that struggle also outlines an important fact about car-free urban areas—that once cities make the decision to reduce or remove cars, they rarely go back. No one I spoke to for this piece could name a recent sizable pedestrianization or traffic-reduction scheme that had been reversed once it had been given time to have an effect.

Many of the cities that pioneered reducing car use—like Copenhagen in the 1970s—are rated today as some of the best places to live in the world. Even with London’s experimental and often unpopular LTN scheme, 100 of the 130 low-traffic areas created have been kept in place, Aldred says.

“Generally speaking, if a sensible program is adopted to really reduce or eliminate car usage in a central urban area, it seems to stick,” says Crawford. “If you go back a year or two later, people will just say: well, this is the best thing we ever did.”


Reaching net zero emissions by 2050 will require innovative solutions at a global scale. In this series, in partnership with the Rolex Perpetual Planet initiative, WIRED highlights individuals and communities working to solve some of our most pressing environmental challenges. It’s produced in partnership with Rolex but all content is editorially independent. Find out more.


UK spends least among major European economies on low-carbon energy policy, study shows


The UK spends less on low-carbon energy policy than any other major European economy, analysis has shown, despite evidence that such spending could lower household bills and increase economic growth more than the tax cuts the government has planned.

Spending on low-carbon measures for the three years from April 2020 to the end of April 2023 was about $33.3bn (£26.2bn) in total for the UK, the lowest out of the top five European economies, according to an analysis by Greenpeace of data from the International Energy Agency.

Italy topped the table for western European economies, having spent $111bn in the period. Germany spent $92.7bn, France $64.5bn and Spain about $51.3bn.

The data includes spending on electricity networks, energy efficiency, innovation on fuels and technology, low-carbon and efficient transport and low-carbon electricity.

In addition to spending on these measures, all the countries spent substantial amounts on holding down energy bills for households, in many cases more than was spent on low-carbon measures. The UK spent about $42bn on energy affordability in the period, through measures such as the energy bills rebate and payments and discounts for the vulnerable.

Only about $13.3bn was spent on energy efficiency for homes and industry, $12.8bn on low-carbon transport and less than $6bn on renewable electricity and innovation in the UK.

When spending on energy affordability was stripped out, per capita spending was also much lower in the UK, at just under $500 per person across the three years, compared with more than $950 in France, $1,115 in Germany and $1,880 in Italy.
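The per-capita figures can be reproduced from the spending totals above, assuming approximate national populations (the population numbers below are my assumption, not from the Greenpeace analysis):

```python
# Reproducing the per-capita spending figures from the three-year totals.
# Populations (millions) are assumed approximate 2022 values, not from
# the article itself.
spend_bn = {"UK": 33.3, "France": 64.5, "Germany": 92.7, "Italy": 111.0}
pop_m    = {"UK": 67.0, "France": 67.8, "Germany": 83.2, "Italy": 59.0}

for country in spend_bn:
    per_capita = spend_bn[country] * 1e9 / (pop_m[country] * 1e6)
    print(f"{country}: ${per_capita:,.0f} per person over three years")
# The UK comes out just under $500 per person and Italy near $1,880,
# in line with the figures quoted above.
```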

On Wednesday, Jeremy Hunt, the chancellor of the exchequer, will deliver the last budget of this parliament, which is likely to centre on tax cuts that economists have said will mainly benefit better-off people.

Hunt is expected to devote little resource to energy or green issues, despite a growing body of evidence and expert opinion suggesting that government spending is needed to kickstart the UK’s flagging economy and dismal productivity, and that green spending could provide a greater boost than tax cuts.

Bob Ward, the policy and communications director at the Grantham Research Institute on Climate Change and the Environment at the London School of Economics and Political Science, said: “There is now very clear evidence that the UK has been investing much less than its competitors across a range of areas, including on tackling climate change, biodiversity loss and environmental degradation.

“This low investment explains why productivity has stagnated in the UK and growth has been so feeble. It also explains why our homes and businesses are vulnerable to climate change impacts, our countryside and seas are becoming depleted of wildlife, our cities have dirty air, and our rivers and beaches are covered in sewage.”

A study from the LSE earlier this year found that investing about £26bn a year in the low-carbon economy would reduce household bills, attract about twice as much additional investment from the private sector, and do more to boost the economy than tax cuts.

Georgia Whitaker, a climate campaigner at Greenpeace UK, said the UK was losing out to international rivals in the race for the economy of the future.

“It’s clear that despite the government’s bluster, we are utterly failing on the world stage when it comes to green investment. Not only are the US and China leaving us in the dust in the race on green technology, we’re also doing terribly compared to our European neighbours,” she said.

She called instead for a green industrial strategy and infrastructure investment. “Jeremy Hunt should use the spring budget to address this embarrassing failure, but instead he’s flirting with tax cuts that disproportionately benefit the wealthiest. Meanwhile, the rest of us struggle on with the cost of living,” she said.

A Department for Energy Security and Net Zero spokesperson said: “This report fails to recognise our progress compared to European allies. We are the first major economy in the world to halve our emissions, and we have the second largest renewables capacity in Europe.

“We have a clear strategy to boost UK industry and reach net zero by 2050 – backed by £300bn in low carbon investment since 2010.”


Will a four-day work week solve Germany’s labour shortage?


Germany, Europe’s industrial powerhouse, is struggling with a critical labour shortage. By some estimates, two million jobs across the economy are vacant, and half of the country’s companies are unable to find enough workers.

Faced with this crisis, dozens of firms are testing a strategy that, on the surface, at least, might appear counterintuitive: getting workers to work fewer days.

In early February, 31 companies in Germany began a “four-day” work week pilot. The initiative is being led by the not-for-profit company 4 Day Week Global (4DWG) and the management consultancy Intraprenör. Another 14 companies are joining the initiative in March.

The University of Münster, a German public research university, will carry out a scientific evaluation of the six-month-long trials, in which up to 600 employees are expected to participate.

The 4DWG, which has been conducting similar trials in many other countries, believes that reducing work days, while keeping pay at the same levels, would result in productivity gains for companies and improved wellbeing of employees, motivating a stretched workforce. The approach could also attract people to the workforce who can’t work five days a week, helping ease the labour crunch.

But how is the German experiment different from a series of efforts in other countries to test a shorter work-week? What have those previous trials shown – are workers more productive when they work fewer hours? Is it possible for the global economy to shift to a four-day work week, and will other countries follow the lead? Al Jazeera spoke to economists, experts and researchers involved in the study to find out.

The short answer: The German test uses more sophisticated techniques to compare more robust data than earlier trials in other countries, say economists, experts and researchers, though it still has shortcomings. Its results could offer the clearest picture yet of the gains and pitfalls of a four-day week. But even the staunchest advocates for the strategy concede that moving all jobs to a shorter work week may not be possible.

The long history of the short work week debate

The demand for a work-life balance emerged from the trade union movement in parts of the world in the 19th century that campaigned for eight hours of work, eight hours of recreation and eight hours of rest.

Then, the modern economy saw its first full test of a shorter work week. Timothy T Campbell, a senior lecturer in corporate social responsibility and business ethics at the United Kingdom-based De Montfort University, traced the origins of a reduced work week to the 1940s when drivers of fuel and gasoline delivery trucks in the United States worked four days a week.

In the decades that followed, especially since the 1960s, several four-day week experiments were conducted, Campbell concluded in a research paper.


“But it was in the early 1970s that interest in the 4DWW (four-day work week) exploded, almost exclusively in the US, in both the popular press and academia,” the study found. “It did not last. By the end of the 1970s very little interest remained.”

Back then, the most popular way of trying out a four-day work week, which was tested in diverse sectors of the economy, including manufacturing, was to work 10 hours a day for four days a week.

“While there were reported advantages such as improved morale, job satisfaction, decreased absenteeism and so on, there was also evidence of increased monitoring by employers and intensified work (due to prolonged daily hours), which could lead to more stress rather than less,” Campbell told Al Jazeera.

Today, the mean weekly hours that a person around the globe works for stands at 44 hours, according to the International Labour Organization’s World Employment and Social Outlook report published in January. Different countries have their own laws capping maximum daily work, beyond which workers are entitled to overtime pay.

An earlier ILO report noted that the average hours of work per week was the highest in South Asia (49 hours), followed by Eastern Asia (48.8 hours) and the lowest in North America (36 hours) and Northern, Southern and Western Europe (37.2 hours).

Around the world, one in three people worked what are considered to be long working hours – more than 48 hours a week – before the COVID-19 pandemic. In some countries, like India, a majority of workers clocked long hours. Only one-fifth of employees around the globe worked less than 35 hours a week.

‘A paradox’

The underlying assumption of the reduced working hours trial, said Julia Backmann, professor and chair for a team looking at the transformation of work at the University of Münster, is that with fewer working hours, workers will have more time to recover from work.

This, according to the hypothesis of the experts who have designed the experiment, could help workers focus more when they go back to their jobs. Backmann is on 4DWG’s research team and is involved with the German trials.

Trial advocates said that one of the main goals is addressing the labour shortage in the German economy by attracting workers towards companies with better work-life balance. They said that it would benefit companies, especially in sectors such as healthcare and education, where the pay is comparatively less attractive, or industries such as law or information technology, where the competition for attracting workers is high.

“It’s kind of a paradox. If you ask politicians, when it comes to labour shortage, they would say ‘everyone has to work more hours and not less’,” Carsten Meier, co-founder and partner at Intraprenör, the Berlin-based consultancy involved in the trials, told Al Jazeera in an interview. “A four-day work week is an attractive concept to solve labour shortage as it makes it easier for companies to gain more attraction with the right talent. That’s the main objective of the participating companies.”

German economy minister Robert Habeck recently said that the biggest hurdle to the country’s economic growth is the labour shortage. He put the number of job vacancies at an estimated two million, even as Germany’s shortage of skilled workers is expected to grow to five million by 2035.

Meier said that the four-day work week is expected to have positive effects on both the mental and physical wellbeing of employees, which will reduce sick leave, as it will leave more room for leisure and physical activities. “For instance, men would be more present towards caretaking activities towards their children or elderly people, helping women to get into more types of full-time work, which will also address the labour shortage,” he said.

Germany lost about 26 billion euros ($28.5bn) of economic value in 2023 due to high levels of sick leave – among the highest in developed countries, according to vfa, the country’s research-based association of pharmaceutical companies.

UK-based research group Autonomy and the 4DWG found encouraging results in the “world’s largest” six-month trials, which took place in the United Kingdom in 2022 with 2,900 workers participating from 61 companies. The trials saw a 65 percent reduction in absenteeism due to illness and personal leave, and reduced levels of stress and burnout, while company revenues were unaffected. However, the trials also saw employees reporting higher work intensity. One year on, nine out of 10 companies are continuing with a four-day work week, while half of the firms have made it permanent.

Designing a four-day work week

The German trials have been designed flexibly keeping in mind the differing needs of various sectors.

“Our principle is based on a 100-80-100 rule, a productivity-focussed meaningful reduction in work time, which means 100 percent pay for 80 percent time and 100 percent productivity,” Charlotte Lockhart, managing director and founder of 4DWG told Al Jazeera. “Different businesses will have different ways of doing that.”

Yet, the German experiment is more complex than a simple exercise in shrinking working hours.

Most companies participating in the German experiment – while reducing weekly work hours from 40 – have not gone down to 32, a number that would fit a four-day work week, with eight hours a day.

“What’s required is that they reduce their working time significantly at least 10 percent (of their current weekly work time) and that the pay remains the same so there is no pay cut,” said Backmann.

Many companies, Backmann said, felt that reducing work hours further would be “too much” to start with.

Since participation is voluntary and the terms of the trials are flexible, some firms are giving employees a day off during the week. By doing so, however, each worker may end up working extra hours on the remaining working days to get three scheduled days off from work.

The 4DWG team has been involved in conducting similar studies in other countries to test the “four-day” work week, including New Zealand, the United Kingdom, the United States, Ireland and Australia. Compressed work hour experiments have been previously conducted in Sweden, Finland, Iceland and Portugal, even as labour unions have, in recent years, been demanding reduced work hours.

“This is done by effectively eliminating some of the unproductive activity that occurs in the workplace on a daily basis,” Andrew Barnes, co-founder of 4DWG told Al Jazeera. “It could be meetings, processes, attitudes, interruptions or people spending too much time on the internet, etc. There’s all sorts of things when you give people more time, then they have time to deal with those things outside the work environment.”

Lockhart said that 90 percent of the firms that have participated in their global trials so far have stayed on “some form of reduced work hour week” after the experiments. The studies conducted by 4DWG have shown a “25 percent” increase in productivity for firms, she said.

However, independent researchers, who have looked into the findings and the methodologies of the trials conducted by 4DWG and other similar pilots conducted in New Zealand and Iceland, have found many flaws, including with sample size, issues with data collection and limited transparency in reporting the trial outcomes.

Design flaws

“There are definitely significant empirical limitations in the four-day week pilots carried out by organisations that have a clear intention to show positive results that most journalists are not taking into consideration,” Hugo Cuello, senior policy analyst at Madrid-based Innovation Growth Lab, told Al Jazeera in an email.

Cuello, who wrote a research paper, Assessing the Validity of Four-day Week Pilots last year, found key problems. In such experiments, companies decide to take part in the trial voluntarily and are not chosen on a randomised basis, which makes the study non-representative across the economy.

Cuello noted that the trials also overrelied on self-reported data, asking employees questions about their wellbeing or productivity before, midway through and after the experiment.

The problem with relying too heavily on self-reporting is that it can produce a phenomenon known as the Hawthorne effect: employees, aware that they are being observed during the short-term trials, may change their behaviour or report positive effects in the hope that the reduced hours will be made permanent.

Research into earlier four-day work week trials has thrown up challenges, too.

As the compressed work week may lead to longer working hours in a day, despite a day off, some researchers have reported fatigue and stress among employees, even as others found evidence of reduced stress.

Cuello’s research also showed how the four-day work trials tried to establish a correlation between reduced working hours and increased productivity or wellbeing of employees over the trial period without considering other factors that could be at play.

As part of the trials, the advocacy groups collected data on key performance indicators from companies and compared it with the same period a year earlier. However, they did not necessarily account for external factors that might have affected productivity before or during the trial period, such as extreme weather or the COVID-19 pandemic.

Overcoming obstacles

That’s where the German trials could be different.

The experiment is attempting to overcome some of the limitations observed in the previous trials by collecting “more objective data”, looking beyond the self-reported data, Backmann said.

The researchers will collect hair samples of employees to determine the level of cortisol in their body before, during and after the trial period – which will in turn be used to measure stress levels and how and if they change.

About 200 workers will also wear fitness trackers throughout the trial period, which will be used to measure other health parameters such as heart rate, sleeping patterns and activity levels. However, the Hawthorne effect cannot be completely ruled out even in this case as employees who are aware they are being monitored might, for instance, engage in increased physical activity, Backmann admitted. Since the trials also began in winter and would end in summer, seasonal change could also affect the mental health of workers, she said.

However, to control for social desirability effects – in other words, to ensure employees do not report being less stressed simply because the trial expects it – the researchers will also collect information from a control group of organisations that will not reduce working hours. Employees in those organisations will also wear fitness trackers and complete short surveys.

The survey will track employee personality traits over the six-month trial period to check whether they reported a significant behaviour change. “This would give us an indication whether the response of employees to the survey are completely truthful as there shouldn’t ideally be a big change in their personality reported over six months,” Backmann said.

Already, some limitations are clear, though.

Lonnie Golden, professor of economics and labour at Penn State University, said that the retail, manufacturing and construction sectors, where workers’ hours are typically longer, have found it hard to switch to a four-day work week. There has been more acceptance in other sectors, Golden, an advisory council member at WorkFour, a non-profit set up in partnership with 4DWG, told Al Jazeera.

For now, the German trial researchers hope to report objective results later this year. And if the news is grim, they’ll still be upfront about it, said Backmann. “I’m not of the opinion that every organisation should now switch to a four-day work week,” Backmann said. “If we see critical aspects or negative effects, I’m happy to also share them.”
