
Green ammonia electrolysis breakthrough could finally kill Haber-Bosch


Scientists at Australia's Monash University claim to have made a critical breakthrough in green ammonia production that could displace the extremely dirty Haber-Bosch process, with the potential to eliminate nearly two percent of global greenhouse gas emissions.

Ammonia is one of the most heavily produced industrial chemicals in the world, and absolutely vital to modern society. Currently, the majority of ammonia is used as an agricultural fertilizer, but it's also used in plastics, fibers, explosives, pharmaceuticals and other areas.

The global ammonia industry pumps out upwards of 230 million tonnes of ammonia annually, and demand may be set to rise as the race to net zero emissions progresses; ammonia stores so much energy that it's being proposed as a high-density green fuel for hard-to-decarbonize sectors like shipping and aviation.

Virtually all the ammonia produced today is made using the Haber-Bosch process. Methane from natural gas is used to produce hydrogen (releasing six tons of carbon dioxide for every 1.1 tons of hydrogen), and this hydrogen is then reacted with atmospheric nitrogen to produce ammonia, typically burning more natural gas to provide the necessary heat and pressure for the reaction.
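That carbon figure falls straight out of the reforming stoichiometry. As a quick sanity check (a minimal sketch; the real process also burns extra gas for process heat, which this mass ratio ignores):

```python
# Steam methane reforming, overall: CH4 + 2 H2O -> CO2 + 4 H2
# Sanity check of the "six tons of CO2 for every 1.1 tons of hydrogen" figure.
M_CO2 = 44.01  # molar mass of CO2, g/mol
M_H2 = 2.016   # molar mass of H2, g/mol

co2_per_h2 = M_CO2 / (4 * M_H2)  # tonnes of CO2 per tonne of H2
print(f"{co2_per_h2:.2f} t CO2 per t H2")            # ~5.46
print(f"{co2_per_h2 * 1.1:.1f} t CO2 per 1.1 t H2")  # ~6.0, matching the article
```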

Not only does this result in an estimated 1.8 percent of global CO2 emissions, it's also responsible for nitrate pollution of groundwater and puts vast amounts of dangerous nitrous oxide emissions into the atmosphere. Not to mention, it consumes between three and five percent of global natural gas production, and the gas extraction process itself spews methane directly into the air, where it acts as an extremely potent greenhouse gas.

Long story short, Haber-Bosch has to be put to bed if we're to get to net zero emissions. And researchers at Monash University say they've more or less stumbled upon a way to remove natural gas from the equation altogether, while still producing ammonia "at room temperature, at high, practical rates and efficiency."

While working on a separate project attempting to make bleach out of salt water through electrolysis, Dr. Bryan Suryanto was working with Professor Doug MacFarlane, an expert on phosphonium salts, and decided to run some side experiments to see if these ionic liquids could be used to produce ammonia in an electrolytic process. To everyone's surprise, they could.

“To be honest, the eureka moment was not really ‘Eureka!’, it was more like, ‘Are you sure? I think you need to do that again,’” Professor MacFarlane says. “It takes a long time to really believe it. I don’t know that we’ve yet really had a proper celebration. The launch of our spin-out company will possibly be the time that we genuinely celebrate all of this.”

The process, says collaborator Dr. Alexandr Simonov, is "very similar to what happens in a water electrolyzer to produce hydrogen – the difference being that we use electrolytes that are familiar in the lithium battery world. When current is applied across an electrolytic cell containing such electrolytes and also dissolved nitrogen gas, a compound called lithium nitride (Li₃N) is found at the cathode surface. The electrolyte should also contain a carrier of the hydrogen ions, or protons ... [in our paper] we have shown that phosphonium salts can act as such proton carriers to produce ammonia in a highly efficient manner."

When the hydrogen ions arrive at the cathode, they displace the lithium atoms in each lithium nitride molecule, creating NH₃ – or ammonia. This is released from the cathode surface and captured. "The phosphonium cycles between the two electrodes," says Simonov, "delivering its protons at the cathode, and being replenished with a fresh proton at the anode, creating a continuous process that we can run for as much as four days."

The process is as clean as the electricity used to power it, and produces around 53 nanomoles of ammonia per second at Faradaic efficiencies around 69 percent. The highest previously reported efficiencies for ammonia electrolysis sat around 60 percent, according to Hollevoet et al. in 2020, with the exception of one other lithium-cycling approach that managed around 88 percent but required high temperatures, around 450 °C (842 °F).
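Faradaic efficiency is the fraction of electrical charge that ends up in the desired product; since the overall reaction N2 + 6H+ + 6e− → 2NH3 transfers three electrons per ammonia molecule, it's straightforward to compute. A minimal sketch (the cell current below is an assumed value for illustration; the article doesn't report it):

```python
# Faradaic efficiency for electrochemical ammonia production.
# Overall reaction: N2 + 6 H+ + 6 e- -> 2 NH3, i.e. 3 electrons per NH3.
F = 96485.0  # Faraday constant, C/mol

def faradaic_efficiency(nh3_rate_mol_s: float, current_A: float) -> float:
    """Fraction of the cell current that ends up as ammonia."""
    return 3 * F * nh3_rate_mol_s / current_A

rate = 53e-9     # 53 nmol of NH3 per second, the rate reported above
current = 22e-3  # assumed cell current of 22 mA (hypothetical, for illustration)
print(f"FE = {faradaic_efficiency(rate, current):.0%}")  # ~70% at these numbers
```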

The team says it's massively scalable, capable of operating either at industrial scale, or in extremely small on-site operations. "They can be as small as a thick iPad," says MacFarlane, "and that could make a small amount of ammonia continuously to run a commercial greenhouse or hydroponics setup, for example."

This kind of distributed production model, as we explored when looking at FuelPositive's modular, container-sized ammonia production units, would have the additional benefit of eliminating the distribution and transport that contribute significantly to the financial and emissions costs of the current ammonia model. The advantage of this new process is that it's a single step, requiring no separate hydrogen electrolysis stage earlier in the chain. Presumably this will make it more energy-efficient, yielding more ammonia per unit of renewable energy.

The team has patented the technology and spun off a business, Jupiter Ionics, to commercialize it, drawing in a seed round of US$1.8 million to get things started.

We'd be interested to know what the cost is going to look like, as well as what goes into these specific phosphonium salts, how long the salt solutions will last under constant production conditions, what the process for replacing them and disposing of them will be, and whether there are any negative environmental issues to be considered in their production.

Still, Haber-Bosch needs to die, and if this Jupiter Ionics technology can help put a fork in it, the world will be better off.

The research is published in the journal Science.

[Video: New Research from the Monash Ammonia Project]

Sources: Monash University, Jupiter Ionics


Why a toaster from 1949 is still smarter than any sold today


My colleague Tom once introduced you to a modern toaster with two seemingly ingenious buttons: one to briefly lift your bread to check its progress, and another to toast it “a bit more.” I respectfully submit you shouldn’t need a button at all.

That’s because in 1948, Sunbeam engineer Ludvik J. Koci invented the perfect toaster, one where the simple act of placing a slice into one of its two slots would result in a delicious piece of toasted bread. No button, no lever, no other input required. Drop bread, get toast.

Some of you are no doubt already connoisseurs who know what I’m referring to: the Sunbeam Radiant Control Toaster, sold from 1949 all the way through the late ‘80s. (It goes by many names, including the T-20A, T-20B, T-20C, T-35, VT-40, AT-W and even the 20-30-AG.) In 2019, the YouTube channel Technology Connections famously explained precisely why the antique Sunbeam Radiant is better than yours, and it might be the smartest thing you watch today.

But if you don’t have the time just now, I’ll summarize: When you stick a piece of bread into this toaster, it pushes down a series of cleverly designed levers that have just enough tension to lower and raise two slices all by themselves — and it’s got a mechanical thermostat inside that stops your bread toasting when it’s toasted and ready, NOT after some arbitrary amount of time.

With the Sunbeam, the heat radiating from the bread itself warms up a bimetal strip (one of the simplest kinds of thermostats) which, being made of two different kinds of metal that expand at different rates, ends up bending backwards to sever the connection and stop the flow of electricity when the toast is done. And here’s the most ingenious part: when the heating wire shrinks as it cools down, that is what triggers the mechanical chain reaction that lifts your bread back up. Here’s how Sunbeam describes it in the toaster’s official service manual:

Raising or lowering of the bread is obtained by making use of the energy of expansion and contraction of the Center Element wire. Of course, this movement is very small and is measured in thousandths of an inch, but more than adequate carriage movement is obtained by a simple linkage which multiplies this movement approximately 175 times.
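To get a feel for that multiplication, here's a rough model using the linear thermal expansion formula ΔL = αLΔT. Every input except the 175× linkage ratio is an illustrative assumption, not a Sunbeam specification:

```python
# Carriage travel from Center Element wire contraction:
# delta_L = alpha * L * delta_T, then multiplied ~175x by the linkage.
ALPHA_NICHROME = 14e-6  # per kelvin, typical for nichrome heating wire (assumed)
WIRE_LENGTH_IN = 12.0   # effective wire length in inches (assumed)
DELTA_T_K = 120.0       # temperature swing over a toasting cycle (assumed)
LINKAGE_RATIO = 175     # from the Sunbeam service manual

wire_movement = ALPHA_NICHROME * WIRE_LENGTH_IN * DELTA_T_K  # inches
carriage_travel = wire_movement * LINKAGE_RATIO

print(f"wire movement:   {wire_movement * 1000:.0f} thousandths of an inch")  # ~20
print(f"carriage travel: {carriage_travel:.1f} inches")                       # ~3.5
```

A few hundredths of an inch of wire movement, multiplied through the linkage, is plenty to raise a slice of bread.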

And that mechanism doesn’t just wear out after nearly three-quarters of a century of use: there’s a single screw underneath the crumb tray to adjust the tension of the wire, and it alone is enough to bring many aging toasters back to life.

So yeah: drop bread, get toast. And like Technology Connections points out, you get toast regardless of whether your bread is room temperature, refrigerated, or frozen when you stick it into the device.

That also makes it remarkably hard to accidentally burn your bread by toasting it too long! Remember the “A bit more” button on Tom’s toaster? The Sunbeam Radiant Control Toaster does that merely by dropping a toasted piece of bread back in the slot — it warms the bread right back to the temperature at which it browns, which browns the bread slightly more, before it trips the thermostat once again and shuts itself off.

By now, you might have guessed I wasn’t satisfied watching a YouTube video — I bought my own off eBay. And then I bought a second and a third, because it turns out a Space Age artifact that produces delicious food is just the kind of wonderful conversation piece that makes for a wonderful gift, too. (Before giving them, I opened them up and replaced their aging power cords with modern grounded three-prong ones, as many of these predate even polarized plugs and are not remotely safe by modern electrocution prevention standards.)

There are good arguments that the Sunbeam Radiant Control Toaster still isn’t perfect. For one thing, there’s no ding to remind you when the toast is done — though these 1275- and 1375-watt toasters are powerful enough you might as well stick around for the minute or two it takes. (Let your tea steep, grab your butter and preserves.)

You’re also not going to toast bagels in these easily, since the thermostat’s aimed at the center of your piece of bread. Frozen waffles come out fantastic, but I have to carefully split English muffins perfectly in half so they don’t catch on the guide wires. And while slices of square sandwich bread crisp up beautifully, including the thin-cut Taiwan toast from my local bakery, thick or oblong breads don’t necessarily fit. (A wide slice of Oroweat Buttermilk or Nature’s Own Brioche Style might require a quick flip-and-retoast to crisp all the way across.)

But when it works, which is most of the time, the result is the kind of crisp-on-the-outside, fluffy-and-moist-on-the-inside piece of toast my mom tells me she hasn’t had since she left her own mother’s kitchen.

I admit I’ve never tried a Balmuda, the $300 toaster oven where you add a splash of water so it “locks in the bread’s inner moisture before the surface is given a golden brown finish.” But I have to wonder if quickly crisping the outside with a dedicated vertical toaster, instead of baking it a second time in a miniature oven, might be a more elegant solution? I do own a Panasonic FlashXpress, which often takes home best toaster oven awards, and its perfectly browned slices definitely don’t have the same taste the Sunbeam can provide.

If you find yourself in the market for a Sunbeam Radiant yourself, you should know they’re not all quite the same — you can read about the differences here and here — and you may have to pay quite a bit. They go for an average of $130 on eBay, with fully restored models fetching two to four times that at auction. (Tim’s Toasters also promises to restore your existing Sunbeam for $250, though I can’t vouch for their work myself.)

Is that actually a lot? The Sunbeam T-20 reportedly retailed for over $22.50 brand new back in 1949. That’s $260 in today’s money, which may be why no other company has seemingly bothered to replicate its fully automatic charms.
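That conversion is just a consumer-price-index ratio. A minimal sketch, using approximate annual-average US CPI-U values (my inputs, not figures from the article):

```python
# Rough CPI-based inflation adjustment for the 1949 retail price.
CPI_1949 = 23.8   # approximate US CPI-U annual average for 1949
CPI_2021 = 271.0  # approximate US CPI-U annual average for 2021

price_1949 = 22.50
price_today = price_1949 * CPI_2021 / CPI_1949
print(f"${price_1949:.2f} in 1949 is roughly ${price_today:.0f} today")  # ~$256
```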

This Thanksgiving, I thought I’d raise a toast to the ultimate toaster. We may never see its like again.


EU strategy seeks to remove carbon from atmosphere – EURACTIV.com


The European Commission will reward green farming practices like afforestation and soil conservation while putting forward rules to identify activities that “unambiguously remove carbon from the atmosphere”, under draft proposals seen by EURACTIV.

A draft EU carbon strategy, first obtained by French news site Contexte, aims to contribute towards the EU’s climate effort by removing CO2 from the atmosphere and “pave the way for a policy of negative emissions in the future”.

The European Climate Law, approved earlier this year, requires that any carbon emissions remaining in Europe by 2050 are to be balanced by removals, “with the aim to achieve negative emissions thereafter,” says the paper.

The EU strategy on “sustainable carbon cycles,” due to be released on 14 December, takes this objective one step further by putting forward plans “to upscale carbon removal solutions that capture CO2 from the atmosphere and store it for the long term”.

This can be done “either in ecosystems through nature-based solutions or in other storage forms through industrial solutions” like carbon capture and storage, says the Commission paper.

Oil and gas companies have been at the forefront of calls to ramp up carbon dioxide removals, attracting criticism from green activists who say they are looking for ways to shirk their responsibility in cutting emissions.

But advocates say carbon removal technologies will eventually be needed because emissions from parts of industry and agriculture will be impossible to eliminate.

Even if nations succeed at cutting CO2 in accordance with the Paris Agreement, there will still be “residual emissions” after 2050 when Europe is supposed to achieve climate neutrality, said Oliver Geden, a German scientist who is one of the lead authors behind the sixth assessment report by the Intergovernmental Panel on Climate Change (IPCC).

“So whenever we talk about achieving climate neutrality by mid-century in a way that complies with the Paris Agreement, it’s assuming that we’re going to remove carbon dioxide from the atmosphere,” Geden told a recent EURACTIV event.

“To me this reads like a very sensible proposal,” he said in reaction to the draft. Until now, governments have only reluctantly dealt with carbon dioxide removal and the strategy will place the EU “among the frontrunners, globally, together with the UK and the US,” he told EURACTIV in emailed comments.

The Commission proposal, he added, “will certainly create a political dynamic” by forcing governments and lawmakers in the European Parliament to devote political attention to the issue.

Natural and industrial CO2 capture

The key to achieving carbon dioxide removals is so-called “carbon farming”, where landowners are rewarded for green farming practices that capture CO2 or prevent the release of carbon into the atmosphere.

The Commission also seeks to replace fossil fuels by promoting wood in the construction sector or by producing electricity using bioenergy in combination with carbon capture and storage (BECCS) technology to sequester the related emissions.

But nature-based solutions cannot be scaled up indefinitely, the EU executive cautions, pointing to other technological solutions like Direct Air Capture (DAC), which sucks CO2 directly from the air.

Although still in their infancy, these technological solutions are more promising because the removals are permanent – unlike carbon captured from forest growth, which can be reversed when trees are cut and burned.

By 2030, 5 million tonnes of CO2 should be removed every year from the atmosphere and permanently stored using technological solutions like DAC, the strategy says.

“Another promising route is to turn the CO2 from a waste to a resource and use it as feedstock for the production of chemicals, plastics or fuels,” the EU executive adds, saying the production of methanol from CO2 could pave the way for the production of green plastics, coolants, and resins.

By 2030, “at least 20% of the carbon used in the chemical and plastic industry should be from non-fossil sources,” the strategy says.

Transparency and certification

All these solutions will be underpinned by “strong requirements on monitoring, reporting and verification” to ensure the carbon is permanently stored.

This will be based on a certification system, which the European Commission plans to table in 2022 and could later serve as a basis for an international standard.

“All carbon removals need to be accounted for in full transparency and by considering criteria such as the duration of the storage, the risk of reversal, the uncertainty of the measurement or the risk of carbon leakages increasing GHG emissions elsewhere,” the communication says.

These will form the basis for creating a new EU market for carbon removals, working alongside the EU’s existing Emissions Trading Scheme, which covers emissions from large industrial plants.

Carbon removal credits – or “offsets” in climate jargon – are already being traded on voluntary markets, and demand already outstrips supply, the Commission notes, saying it is considering options “for creating an EU regulated market for the period after 2030”.

EU plans certification scheme for carbon dioxide removals

The European Commission will publish a policy paper by the end of the year on “the sustainable management of the carbon cycle” – the first step towards an EU-wide certification scheme for negative emissions coming from agriculture, forestry and other sources, that will be tabled in 2022.

Offsetting schemes like tree-planting are controversial, however. Environmental groups have criticised them as a greenwashing tool that allows fossil fuel companies to continue polluting because their emissions would be compensated by withdrawals elsewhere.

Mary S. Booth, an ecological scientist at the Partnership for Policy Integrity (PFPI), a US-based campaign group, also warned about the “astronomical” cost of deploying technologies like Direct Air Capture (DAC) at scale, which involves transporting carbon across borders to available storage sites.

Booth also raised doubts about the Commission’s target of storing 5 million tonnes of CO2 annually using carbon capture technologies, saying the amount vastly exceeds the scale of existing carbon storage facilities currently available.

“For example, the Silverstone project mentioned on page 16 currently captures 12,000 tonnes annually at a cost of millions – about a quarter of 1% of the 5 million tonne goal.”
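That “quarter of 1%” is easy to verify:

```python
# Silverstone's current capture versus the EU's 2030 technological-removals target.
silverstone_t_per_year = 12_000  # tonnes of CO2 captured annually today
eu_2030_target_t = 5_000_000     # 5 Mt of CO2 per year by 2030

share = silverstone_t_per_year / eu_2030_target_t
print(f"{share:.2%} of the 2030 target")  # 0.24%, about a quarter of 1%
```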

Geden, the German scientist, agrees that the 2030 target “might be a bit overambitious in terms of the timelines involved,” especially since setting up a monitoring and verification system for a diverse set of CO2 removal methods won’t be an easy task.

Yet, he says the 5 Mt of technological removals “are an important signal” for the market. “The signal that these removals should be delivered domestically is very important, so that stakeholders and member states do not prematurely combine this with unfounded expectations about international offsets under UNFCCC Art. 6,” he said.

Similarly, Booth said replacing 20% of chemical and plastic production in Europe would require “tens of millions” more tonnes of biomass, or a massive ramp-up of DAC – a solution she said was “an extraordinarily expensive and resource-intensive approach to replace a small proportion of fossil fuel use.”

A new study by PFPI, published on Tuesday (23 November), concluded that EU plans to achieve net-zero emissions by 2050 were based on “unrealistic assumptions” regarding carbon removals.

Under EU climate policy proposals for 2030, new forest growth would absorb 310 million tonnes of CO2 by 2030 while another 250 Mt would be sequestered from biomass energy with carbon capture and storage (BECCS), a technology that Booth says “effectively does not exist”.

If those plans are approved, “use of forest wood for fuel will increase 50%, contributing to the weak land sink target and making the EU reliant on BECCS,” Booth warned.

She argues that a cheaper and more immediate solution to capture carbon is to halt deforestation and stop burning biomass to produce electricity.

EU member states currently pay out €10-€17 billion per year to subsidise biomass burning that contributes to pumping CO2 into the atmosphere, according to calculations by PFPI.

Instead, those subsidies “should be reallocated to zero-emissions energy and restoring the declining forest carbon sink,” she suggests.

“To restore ecosystems and the forest carbon sink, the EU needs to harvest and burn less wood,” the study concludes.

[Edited by Alice Taylor]


The chase for fusion energy


Several fusion researchers who don’t work for private firms told Nature that, although prospects are undeniably exciting, commercial fusion in a decade is overly optimistic. “Private companies say they’ll have it working in ten years, but that’s just to attract funders,” says Tony Donné, programme manager of the EUROfusion consortium, which conducts experiments at the state-run Joint European Torus, established at Culham in the late 1970s. “They all have stated constantly to be about ten years away from a working fusion reactor, and they still do.”

Timelines that companies project should be regarded not so much as promises but as motivational aspirations, says Melanie Windridge, a plasma physicist who is the FIA’s UK director of communications, and a communications consultant for the fusion firm Tokamak Energy, in Culham. “I think bold targets are necessary,” she says. State support is also likely to be needed to build a fusion power plant that actually feeds electricity into the grid, adds Ian Chapman, chief executive of the UK Atomic Energy Authority (UKAEA).

But whether it comes from small-scale private enterprise, huge national or international fusion projects, or a bit of both, practical nuclear fusion finally seems to be on the horizon. “I’m convinced that it’s going to happen,” says Chapman. Chris Kelsall, chief executive of Tokamak Energy, agrees. “Sooner or later this will be cracked,” he says. “And it will be transformative.”

Seventy-year dream

Nuclear fusion, says Klinger, is “the only primary energy source left in the Universe” that we have yet to exploit. Ever since the process that powers the stars was harnessed in the 1950s for hydrogen bombs, technologists have dreamt of unlocking it in a more controlled manner for energy generation.

Existing nuclear power plants use fission: the release of energy when heavy atoms such as uranium decay. Fusion, by contrast, produces energy by merging very light nuclei, typically hydrogen, which can happen only at very high temperatures and pressures. Most efforts to harness it in reactors involve heating the hydrogen isotopes deuterium (D) and tritium (T) until they form a plasma — a fluid state of matter containing ionized atoms and other charged particles — and then fuse (see ‘Fuel mix’). For these isotopes, fusion starts at lower temperatures and densities than for normal hydrogen.

D–T fusion generates some radiation in the form of short-lived neutrons, but no long-lived radioactive waste, unlike fission. It is also safer than fission because it can be switched off easily: if the plasma is brought below critical thresholds of temperature or density, the nuclear reactions stop.


Fuel mix

Many reactors fuse deuterium (D) with tritium (T) to release energy. This mix ignites, or creates a self-sustaining fusion reaction, at around 100 million kelvin. It produces neutrons, which can make the chamber radioactive.

Other reactions, such as fusing protons (p) with boron-11 (¹¹B), don’t produce neutrons, but ignition requires higher temperatures.
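The energy released by each D–T reaction (about 17.6 MeV, a standard figure not quoted in this piece) follows directly from the mass defect, E = Δmc². A quick check using standard atomic masses from public nuclear data tables:

```python
# Energy released by D + T -> He-4 + n, computed from the mass defect.
M_D, M_T = 2.014102, 3.016049    # deuterium and tritium masses, atomic mass units (u)
M_HE4, M_N = 4.002602, 1.008665  # helium-4 and neutron masses, u
U_TO_MEV = 931.494               # energy equivalent of 1 u, in MeV

mass_defect = (M_D + M_T) - (M_HE4 + M_N)
print(f"{mass_defect * U_TO_MEV:.1f} MeV per reaction")  # ~17.6 MeV
```

Most of that energy is carried off by the neutron — the same neutrons that can make the chamber radioactive, as noted above.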


What makes it so difficult to conduct in a controlled manner, however, is the challenge of containing electrically charged plasma that is undergoing fusion at temperatures of around 100 million kelvin — much hotter than the centre of the Sun. Generally, researchers use magnetic fields to confine and levitate the plasma inside the reactor. But instabilities in this infernal fluid make containment very difficult, and have so far prevented fusion from being sustained for long enough to extract more energy than is put in to trigger it.
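“Long enough” and “hot enough” are usually rolled into one yardstick, the Lawson triple product n·T·τE, which for D–T ignition must exceed roughly 3×10²¹ keV·s/m³. A minimal sketch with assumed, ITER-like plasma parameters (illustrative values, not official design figures):

```python
# Lawson triple product n * T * tau_E, the usual yardstick for D-T plasmas.
IGNITION_THRESHOLD = 3e21  # keV * s / m^3, rough D-T ignition requirement

def triple_product(density_per_m3: float, temp_keV: float, tau_E_s: float) -> float:
    return density_per_m3 * temp_keV * tau_E_s

n, T, tau = 1e20, 15.0, 3.0  # assumed: ions per m^3, keV (~170 million K), seconds
ratio = triple_product(n, T, tau) / IGNITION_THRESHOLD
print(f"{ratio:.1f}x the rough ignition threshold")  # 1.5x at these numbers
```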

This is necessarily big science, and until this century, only state-run projects could muster the resources. The scale of the enterprise is reflected today in the world’s biggest fusion effort: ITER, a fusion reactor being constructed in southern France and supported by 35 nations, including China, European Union member states, the United States, Russia, South Korea and Japan, with a price tag of at least $22 billion.


Toronto’s deep lake water cooling (DLWC) is the world’s largest. Here’s how it works.


With just minutes left in Game 5 of the 2019 NBA finals, the Toronto Raptors drained a 16-foot jumper to pull ahead by six points. Hardly a soul was sitting down or silent as fans cheered the team toward Canada’s first basketball championship.

But the sellout crowd also posed a challenge. The National Basketball Association requires arenas to be chilled to between 65 and 72 degrees Fahrenheit. And, left unchecked, the arena’s 20,144 attendees were likely to produce a sweltering mess that would set off alarms at league headquarters.


“People bring with them a lot of body heat,” said Kyle Lamkey, director of engineering for the arena. “Cooling is probably one of the most critical parts of our building.”

But unlike other sports venues, Scotiabank Arena doesn’t keep its temperatures in check using air conditioners. Toronto is home to the world’s largest deep lake water cooling (DLWC) system.

Conceptually, the technology is relatively simple. Instead of relying on energy-intensive compressors and chillers to dissipate heat from buildings, DLWC uses water from nearby Lake Ontario to whisk away the warmth.

[Diagram: 1. Three pipes pull water from the deep lake. 2. Lake water makes its way to the city through a heat transfer station. 3. Pipelines carry cooled water to buildings in downtown Toronto. Scale varies in this perspective.]

The system launched in 2004 with only a handful of customers in the city, but it now cools over 100 downtown buildings, ranging from City Hall and Toronto General Hospital to hotels and even a brewery.

Enwave, the company that owns and operates Toronto’s DLWC, says the system already saves 90,000 megawatt-hours of electricity annually — roughly enough to power a town of 25,000. It is so popular that the city has nearly reached capacity and recently committed to an expansion.

“It’s a big investment,” said Carlyle Coutinho, president of Enwave, of the upcoming CA$100 million project. But, he said, “it would be challenging to keep growing commercially without increasing the baseload.”

Toronto’s cooling process begins about 3.5 miles south of the city and 280 feet underwater, in the depths of Lake Ontario where the water remains cool year-round. The water is first drawn into the city through three massive pipes, spaced about half a mile apart. In the planned expansion, a fourth pipe will be added to increase capacity by 60 percent.

[Diagram: High-density polyethylene pipes lie along the natural slope of the lake bottom, where water is densest at 39°F (4°C) and sinks to become a stable chilled-water source. The three pipes merge before entering the water treatment plant.]

Once the lake water makes it to the city, the DLWC system operates via a series of water loops. There is a loop that moves the lake water; a loop that moves water within the downtown area; and loops in each building the system serves. The water moves itself through these pipes using relatively little energy.

[Diagram: incoming lake water feeds the closed-loop cooling system and an additional cooling facility before continuing on as drinking water into the city potable water system.]

Traditional commercial water-cooling systems often involve towers that evaporate water as a means of expelling heat. DLWC avoids that evaporation, and Enwave estimates that the Toronto system saves roughly 220 million gallons of water annually.

Another way the Toronto system saves is by using largely passive heat exchangers, rather than energy-intensive air conditioners and chillers.

Heat exchangers transfer heat, or coolness, between water loops and are located where those water loops meet — at each customer site and where the lake water pipes meet the city pipes. The latter heat exchanger uses the coolness of the lake water to dissipate heat from the downtown buildings.
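The cooling such an exchanger can deliver is governed by a simple energy balance, Q = ṁ·cp·ΔT: flow rate times water's heat capacity times the temperature rise. A minimal sketch with assumed flow and temperatures (illustrative values, not Enwave's operating figures):

```python
# Cooling power of a lake-water stream: Q = m_dot * c_p * delta_T.
CP_WATER = 4186.0        # specific heat of water, J/(kg*K)

m_dot = 3000.0           # lake-water flow in kg/s (assumed)
t_in, t_out = 4.0, 13.0  # degrees C: drawn at ~4 C, returned warmer (assumed)

q_megawatts = m_dot * CP_WATER * (t_out - t_in) / 1e6
print(f"{q_megawatts:.0f} MW of cooling")  # ~113 MW at these numbers
```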

[Diagram: at the heat transfer station, a heat exchanger sits between the pipe from customer buildings and the pipe carrying lake water onward to the city’s drinking water supply.]

DLWC ultimately allows buildings to consume less electricity. Lamkey says Scotiabank Arena uses some 3 million kilowatt-hours less electricity annually than if it cooled using traditional methods — a reduction of about 70 percent. While he occasionally needs to call in excess cooling from Enwave’s electric-powered chillers, he says it’s rare.

Most of the time, the lake does the job.

[Diagram: chilled water is piped from the station and flows through buildings’ water fan coils; the same water, now warm after absorbing the heat of the space, returns to the station to be cooled again, while chilled water goes on to cool other buildings.]

Finding suitable conditions for a DLWC system isn’t always simple.

Location is the first hurdle to making the technology feasible. Much of the East Coast of the United States, for example, has a shallow, sloped ocean shelf that makes it difficult to position a system at the depths necessary. There also must be enough cooling demand to justify a system.


Then there are the enormous upfront costs. Cornell University’s lake water cooling system — the largest and oldest in the United States — cost $58.5 million. The investment, though, “has easily already paid for itself,” said Todd Cowen, an engineer at the university, because operating and maintenance costs are so low.

Toronto’s system cost CA$170 million, and unlike Cornell, Enwave needed customers. Lou Di Gironimo, general manager for Toronto Water, says the question was, “Would this be a sustainable economic activity?” But any fears of failure were short-lived. Starting with only a few customers in 2004, Enwave’s DLWC customer base has since expanded rapidly.

DLWC doesn’t come without potential pitfalls. Alex Horne, an environmental engineer and lake expert, points out that if the warmer, nutrient-rich water coming from DLWC systems is released too close to the surface of the lake, it can lead to issues such as blooms of algae, including potentially toxic variants. But Horne, a professor emeritus at the University of California Berkeley, says the fix is fairly simple — discharge the water deeper in a lake and through diffusers in the pipes. “It’s sort of common sense,” he said. “But if you’re a heating-cooling engineer, you don’t think about it.”

There’s plenty of potential for source water cooling to keep growing, said Hermann Kugeler, with Makai Ocean Engineering, Inc., a company that designs and installs piping for the systems. He added that there has also been progress on salt water air conditioning (SWAC), which utilizes ocean instead of lake water as coolant.

While they may not have proliferated on the scale of other types of climate-friendly technologies, DLWC and SWAC systems are now up and running in dozens of locations around the world, from Hong Kong to Bahrain. “I think the big thing is informing people that it exists,” Kugeler said. “People don’t know it’s an option.”


Toronto has celebrated the city’s success — not only in the form of DLWC energy savings and a planned expansion, but also with its basketball team. While the team ended up narrowly losing Game 5, it closed out the 2019 NBA title three days later and brought Toronto its first major sports championship in more than a quarter-century.

To commemorate, Coutinho made T-shirts with the Raptors’ claw-mark logo splashed across the front with the words: “Chillin’ the Champs.”

Aaron Steckelberg contributed to this report.

Correction: A previous version of this article misstated the energy savings of Toronto’s system. It’s 90,000 megawatt-hours, not kilowatt-hours.


'Useless Specks of Dust' Turn Out to Be Building Blocks of All Vertebrate Genomes


Originally, they were thought to be just specks of dust on a microscope slide.

Now, a new study suggests that microchromosomes – a type of tiny chromosome found in birds and reptiles – have a longer history, and a bigger role to play in mammals than we ever suspected.

By lining up the DNA sequence of microchromosomes across many different species, researchers have been able to show the consistency of these DNA molecules across bird and reptile families, a consistency that stretches back hundreds of millions of years.

What's more, the team found that these bits of genetic code have been scrambled and placed on larger chromosomes in marsupial and placental mammals, including humans. In other words, the human genome isn't quite as 'normal' as previously supposed.

"We lined up these sequences from birds, turtles, snakes and lizards, platypus and humans and compared them," says geneticist Jenny Graves, from La Trobe University in Australia. "Astonishingly, the microchromosomes were the same across all bird and reptile species.

"Even more astonishingly, they were the same as the tiny chromosomes of Amphioxus – a little fish-like animal with no backbone that last shared a common ancestor with vertebrates 684 million years ago."

By tracing these microchromosomes back to the ancient Amphioxus, the scientists were able to establish genetic links to all of its descendants. These tiny 'specks of dust' are actually important building blocks for vertebrates, not just abnormal extras.

It seems that most mammals have absorbed and jumbled up their microchromosomes as they've evolved, making them seem like normal pieces of DNA. The exception is the platypus, in which several chromosome sections line up with microchromosomes, suggesting the platypus genome may well have acted as a 'stepping stone' for other mammals in this regard, according to the researchers.

[Tree chart outlining the presence of similar DNA in snakes, lizards, birds, crocodiles, and mammals.] Microchromosomes are consistent in birds and reptiles, but mixed up into larger chromosomes in mammals. (Paul Waters)

The study also revealed that as well as being similar across numerous species, the microchromosomes were also located in the same place inside cells.

"Not only are they the same in each species, but they crowd together in the center of the nucleus where they physically interact with each other, suggesting functional coherence," says biologist Paul Waters, from the University of New South Wales (UNSW) in Australia.

"This strange behavior is not true of the large chromosomes in our genomes."

The researchers credit recent advancements in DNA sequencing technology for the ability to sequence microchromosomes end-to-end, and to better establish where these DNA fragments came from and what their purpose might be.

It's not clear whether there's an evolutionary benefit to coding DNA in larger chromosomes or in microchromosomes, and the findings outlined in this paper might help scientists put that particular debate to rest – although a lot of questions remain.

The study suggests that the large chromosome approach that has evolved in mammals isn't actually the normal state, and might be a disadvantage: genes are packed together much more tightly in microchromosomes, for example.

"Rather than being 'normal', chromosomes of humans and other mammals were puffed up with lots of 'junk DNA' and scrambled in many different ways," says Graves.

"The new knowledge helps explain why there is such a large range of mammals with vastly different genomes inhabiting every corner of our planet."

The research has been published in PNAS.
