Krzysztof Strug

Restoring Hearing With Beams of Light

There’s a popular misconception that cochlear implants restore natural hearing. In fact, these marvels of engineering give people a new kind of “electric hearing” that they must learn how to use.

Natural hearing results from vibrations hitting tiny structures called hair cells within the cochlea in the inner ear. A cochlear implant bypasses the damaged or dysfunctional parts of the ear and uses electrodes to directly stimulate the cochlear nerve, which sends signals to the brain. When my hearing-impaired patients have their cochlear implants turned on for the first time, they often report that voices sound flat and robotic and that background noises blur together and drown out voices. Although users can have many sessions with technicians to “tune” and adjust their implants’ settings to make sounds more pleasant and helpful, there’s a limit to what can be achieved with today’s technology.

I have been an otolaryngologist for more than two decades. My patients tell me they want more natural sound, more enjoyment of music, and most of all, better comprehension of speech, particularly in settings with background noise—the so-called cocktail party problem. For 15 years, my team at the University of Göttingen, in Germany, has been collaborating with colleagues at the University of Freiburg and beyond to reinvent the cochlear implant in a strikingly counterintuitive way: using light.

We recognize that today’s cochlear implants run up against hard limits of engineering and human physiology. So we’re developing a new kind of cochlear implant that uses light emitters and genetically altered cells that respond to light. By using precise beams of light instead of electrical current to stimulate the cochlear nerve, we expect our optical cochlear implants to better replicate the full spectral nature of sounds and better mimic natural hearing. We aim to start clinical trials in 2026 and, if all goes well, we could get regulatory approval for our device at the beginning of the next decade. Then, people all over the world could begin to hear the light.

These 3D microscopic images of mouse ear anatomy show optical implants [dotted lines] twisting through the intricate structure of a normal cochlea, which contains hair cells; in deafness, these cells are lost or damaged. At left, the hair cells [light blue spiral] connect to the cochlear nerve cells [blue filaments and dots]. In the middle and right images, the bony housing of the mouse cochlea surrounds this delicate arrangement. Daniel Keppeler

How cochlear implants work

Some 466 million people worldwide suffer from disabling hearing loss that requires intervention, according to the World Health Organization. Hearing loss mainly results from damage to the cochlea caused by disease, noise, or age and, so far, there is no cure. Hearing can be partially restored by hearing aids, which essentially provide an amplified version of the sound to the remaining sensory hair cells of the cochlea. Profoundly hearing-impaired people benefit more from cochlear implants, which, as mentioned above, skip over dysfunctional or lost hair cells and directly stimulate the cochlear, or auditory, nerve.

Today’s cochlear implants are the most successful neuroprosthetic to date. The first device was approved by the U.S. Food and Drug Administration in the 1980s, and nearly 737,000 devices had been implanted globally by 2019. Yet they make limited use of the neurons available for sound encoding in the cochlea. To understand why, you first need to understand how natural hearing works.

In a functioning human ear, sound waves are channeled down the ear canal and set the ear drum in motion, which in turn vibrates tiny bones in the middle ear. Those bones transfer the vibrations to the inner ear’s cochlea, a snail-shaped structure about the size of a pea. Inside the fluid-filled cochlea, a membrane ripples in response to sound vibrations, and those ripples move bundles of sensory hair cells that project from the surface of that membrane. These movements trigger the hair cells to release neurotransmitters that cause an electrical signal in the neurons of the cochlear nerve. All these electrical signals encode the sound, and the signal travels up the nerve to the brain. Regardless of which sound frequency they encode, the cochlear neurons represent sound intensity by the rate and timing of their electrical signals: The firing rate can reach a few hundred hertz, and the timing can achieve submillisecond precision.

Hair cells in different parts of the cochlea respond to different frequencies of sound, with those at the base of the spiral-shaped cochlea detecting high-pitched sounds of up to about 20 kilohertz, and those at the top of the spiral detecting low-pitched sounds down to about 20 Hz. This frequency map of the cochlea is also available at the level of the neurons, which can be thought of as a spiraling array of receivers. Cochlear implants capitalize on this structure, stimulating neurons in the base of the cochlea to create the perception of a high pitch, and so on.
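This frequency map is commonly approximated by the Greenwood function, f(x) = A(10^(ax) − k), where x is the relative position along the cochlea. A minimal sketch using the standard human-cochlea constants from the general hearing literature (not a parameter of the authors' device):

```python
def greenwood_freq(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative position x along the
    human cochlea, from apex (x = 0, low pitch) to base (x = 1, high pitch)."""
    return A * (10 ** (a * x) - k)

# The map spans roughly the full range of human hearing:
print(round(greenwood_freq(0.0)))  # apex → 20 Hz
print(round(greenwood_freq(1.0)))  # base → 20677 Hz, about 20 kHz
```

This is why stimulating a given spot on the spiral creates the perception of a particular pitch: position and frequency are tied together by a smooth, roughly logarithmic map.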

A commercial cochlear implant today has a microphone, processor, and transmitter that are worn on the head, as well as a receiver and electrodes that are implanted. It typically has between 12 and 24 electrodes that are inserted into the cochlea to directly stimulate the nerve at different points. But the saline fluid within the cochlea is conductive, so the current from each electrode spreads out and causes broad activation of neurons across the frequency map of the cochlea. Because the frequency selectivity of electrical stimulation is limited, the quality of artificial hearing is limited, too. The natural process of hearing, in which hair cells trigger precise points on the cochlear nerve, can be thought of as playing the piano with your fingers; cochlear implants are more like playing the piano with your fists. Even worse, this large stimulation overlap limits the way we can stimulate the auditory nerve, as it forces us to activate only one electrode at a time.
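The processor's job of splitting sound into one frequency band per electrode can be sketched as a logarithmically spaced filter bank. The 12 channels and the 200 Hz to 8 kHz range below are illustrative assumptions, not a manufacturer's specification:

```python
def channel_edges(n_channels=12, f_lo=200.0, f_hi=8000.0):
    """Band edges for a log-spaced filter bank: one band per electrode,
    with low bands mapped to the cochlea's apex and high bands to its base."""
    ratio = (f_hi / f_lo) ** (1.0 / n_channels)
    return [f_lo * ratio ** i for i in range(n_channels + 1)]

edges = channel_edges()
print(len(edges) - 1)                      # → 12 bands
print(round(edges[0]), round(edges[-1]))   # → 200 8000
```

With only a dozen such bands, and with the current from each electrode smearing across neighbors, much of the fine spectral detail of speech and music is simply not delivered.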

How optogenetics works

The idea for a better way began back in 2005, when I started hearing about a new technique being pioneered in neuroscience called optogenetics. German researchers were among the first to discover light-sensitive proteins in algae that regulated the flow of ions across a cellular membrane. Then, other research groups began experimenting with taking the genes that coded for such proteins and using a harmless viral vector to insert them into neurons. The upshot was that shining a light on these genetically altered neurons could open their light-gated ion channels, depolarizing the cells and triggering them to fire, or activate, allowing researchers to directly control living animals’ brains and behaviors. Since then, optogenetics has become a significant tool in neuroscience research, and clinicians are experimenting with medical applications including vision restoration and cardiac pacing.

I’ve long been interested in how sound is encoded and how this coding goes wrong in hearing impairment. It occurred to me that stimulating the cochlear nerve with light instead of electricity could provide much more precise control, because light can be tightly focused even in the cochlea’s saline environment.

If we used optogenetics to make cochlear nerve cells light sensitive, we could then precisely hit these targets with beams of low-energy light to produce much finer auditory sensations than with the electrical implant. We could theoretically have more than five times as many targets spaced throughout the cochlea, perhaps as many as 64 or 128. Sound stimuli could be electronically split up into many more discrete frequency bands, giving users a much richer experience of sound. This general idea had been taken up earlier by Claus-Peter Richter from Northwestern University, who proposed directly stimulating the auditory nerve with high-energy infrared light, though that concept wasn’t confirmed by other laboratories.

Our idea was exciting, but my collaborators and I saw a host of challenges. We were proposing a new type of implanted medical device that would be paired with a new type of gene therapy, both of which must meet the highest safety standards. We’d need to determine the best light source to use in the optogenetic system and how to transmit it to the proper spots in the cochlea. We had to find the right light-sensitive protein to use in the cochlear nerve cells, and we had to figure out how best to deliver the genes that code for those proteins to the right parts of the cochlea.

But we’ve made great progress over the years. In 2015, the European Research Council gave us a vote of confidence when it funded our “OptoHear” project, and in 2019, we spun off a company called OptoGenTech to work toward commercializing our device.

Channelrhodopsins, micro-LEDs, and fiber optics

Our early proof-of-concept experiments in mice explored both the biology and technology at play in our mission. Finding the right light-sensitive protein, or channelrhodopsin, turned out to be a long process. Many early efforts in optogenetics used channelrhodopsin-2 (ChR2), which opens an ion channel in response to blue light. We used it in a proof-of-concept experiment in mice that demonstrated that optogenetic stimulation of the auditory pathway provided better frequency selectivity than electrical stimulation did.

In our continued search for the best channelrhodopsin for our purpose, we tried a ChR2 variant called calcium translocating channelrhodopsin (CatCh) from the Max Planck Institute of Biophysics lab of Ernst Bamberg, one of the world pioneers of optogenetics. We delivered CatCh to the cochlear neurons of Mongolian gerbils using a harmless virus as a vector. We next trained the gerbils to respond to an auditory stimulus, teaching them to avoid a certain area when they heard a tone. Then we deafened the gerbils by applying a drug that kills hair cells and inserted a tiny optical cochlear implant to stimulate the light-sensitized cochlear neurons. The deaf animals responded to this light stimulation just as they had to the auditory stimulus.

However, the use of CatCh has two problems: First, it requires blue light, which is associated with phototoxicity. When light, particularly high-energy blue light, shines directly on cells that are typically in the dark of the body’s interior, these cells can be damaged and eventually die off. The other problem with CatCh is that it’s slow to reset. At body temperature, once CatCh is activated by light, it takes about a dozen milliseconds to close the channel and be ready for the next activation. Such slow kinetics do not support the precise timing of neuron activation necessary to encode sound, which can require more than a hundred spikes per second. Many people said the kinetics of channelrhodopsins made our quest impossible—that even if we gained spectral resolution, we’d lose temporal resolution. But we took those doubts as a strong motivation to look for faster channelrhodopsins, and ones that respond to red light.
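The kinetics argument can be put in rough numbers. As a back-of-envelope simplification of my own (ignoring refractory periods and spike jitter), the channel must reset before the next light pulse, so the closing time caps the achievable stimulation rate:

```python
def max_rate_hz(tau_close_ms):
    """Rough spike-rate ceiling set by channel closing time:
    at most one clean activation per reset, i.e. 1000 / tau per second."""
    return 1000.0 / tau_close_ms

# CatCh needs about 12 ms to reset at body temperature, capping
# stimulation well below the >100 spikes/s that sound coding requires:
print(round(max_rate_hz(12.0)))  # → 83 (Hz)
```

By the same estimate, a channel that closes in about 1 ms leaves ample headroom for the firing rates of a few hundred hertz seen in natural hearing.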

We were excited when a leader in optogenetics, Edward Boyden at MIT, discovered a faster-acting channelrhodopsin that his team called Chronos. Although it still required blue light for activation, Chronos was the fastest channelrhodopsin to date, taking about 3.6 milliseconds to close at room temperature. Even better, we found that it closed within about 1 ms at the warmer temperature of the body. However, it took some extra tricks to get Chronos working in the cochlea: We had to use powerful viral vectors and certain genetic sequences to improve the delivery of Chronos protein to the cell membrane of the cochlear neurons. With those tricks, both single neurons and the neural population responded robustly and with good temporal precision to optical stimulation at higher rates of up to about 250 Hz. So Chronos enabled us to elicit near-natural rates of neural firing, suggesting that we could have both frequency and time resolution. But we still needed to find an ultrafast channelrhodopsin that operated with longer wavelength light.

We teamed up with Bamberg to take on the challenge. The collaboration targeted Chrimson, a channelrhodopsin first described by Boyden that’s best stimulated by orange light. The first results of our engineering experiments with Chrimson were fast Chrimson (f-Chrimson) and very fast Chrimson (vf-Chrimson). We were pleased to discover that f-Chrimson enables cochlear neurons to respond to red light reliably up to stimulation rates of approximately 200 Hz. Vf-Chrimson is even faster but is less well expressed in the cells than f-Chrimson is; so far, vf-Chrimson has not shown a measurable advantage over f-Chrimson when it comes to high-frequency stimulation of cochlear neurons.

This flexible micro-LED array, fabricated at the University of Freiburg, is wrapped around a glass rod that’s 1 millimeter in diameter. The array is shown with its 144 diodes turned off [left] and operating at 1 milliamp [right]. University of Freiburg/Frontiers

We’ve also been exploring our options for the implanted light source that will trigger the optogenetic cells. The implant must be small enough to fit into the limited space of the cochlea, stiff enough for surgical insertion, yet flexible enough to gently follow the cochlea’s curvature. Its housing must be biocompatible, transparent, and robust enough to last for decades. My collaborators Ulrich Schwarz and Patrick Ruther, then at the University of Freiburg, started things off by developing the first micro-light-emitting diodes (micro-LEDs) for optical cochlear implants.

We found micro-LEDs useful because they’re a very mature commercial technology with good power efficiency. We conducted several experiments with microfabricated thin-film micro-LEDs and demonstrated that we could optogenetically stimulate the cochlear nerve in our targeted frequency ranges. But micro-LEDs have drawbacks. For one thing, it’s difficult to establish a flexible, transparent, and durable hermetic seal around the implanted micro-LEDs. Also, micro-LEDs with the highest efficiency emit blue light, which brings us back to the phototoxicity problem. That’s why we’re also looking at another way forward.

Instead of getting the semiconductor emitter itself into the cochlea, the alternative approach puts the light source, such as a laser diode, farther away in a hermetically sealed titanium housing. Optical fibers then bring the light into the cochlea and to the light-sensitive neurons. The optical fibers must be biocompatible, durable, and flexible enough to wind through the cochlea, which may be challenging with typical glass fibers. There’s interesting ongoing research in flexible polymer fibers, which might have better mechanical characteristics, but so far, they haven’t matched glass in efficiency of light propagation. The fiber-optic approach could have efficiency drawbacks, because we’d lose some light when it goes from the laser diode to the fiber, when it travels down the fiber, and when it goes from the fiber to the cochlea. But the approach seems promising, as it ensures that the optoelectronic components could be safely sealed up and would likely make for an easy insertion of the flexible waveguide array.

Another design possibility for optical cochlear implants is to use laser diodes as a light source and pair them with optical fibers made of a flexible polymer. The laser diode could be safely encapsulated outside the cochlea, which would reduce concerns about heat, while polymer waveguide arrays [left and right images] would curl into the cochlea to deliver the light to the cells. OptoGenTech

The road to clinical trials

As we consider assembling these components into a commercial medical device, we first look for parts of existing cochlear implants that we can adopt. The audio processors that work with today’s cochlear implants can be adapted to our purpose; we’ll just need to split up the signal into more channels with smaller frequency ranges. The external transmitter and implanted receiver also could be similar to existing technologies, which will make our regulatory pathway that much easier. But the truly novel parts of our system—the optical stimulator and the gene therapy to deliver the channelrhodopsins to the cochlea—will require a good amount of scrutiny.

Cochlear implant surgery is quite mature and typically takes only a couple of hours at most. To keep things simple, we want to keep our procedure as close as possible to existing surgeries. But the key part of the surgery will be quite different: Instead of inserting electrodes into the cochlea, surgeons will first administer viral vectors to deliver the genes for the channelrhodopsin to the cochlear nerve cells, and then implant the light emitter into the cochlea.

Since optogenetic therapies are just beginning to be tested in clinical trials, there’s still some uncertainty about how best to make the technique work in humans. We’re still thinking about how to get the viral vector to deliver the necessary genes to the correct neurons in the cochlea. The viral vector we’ve used in experiments thus far, an adeno-associated virus, is a harmless virus that has already been approved for use in several gene therapies, and we’re using some genetic tricks and local administration to target cochlear neurons specifically. We’ve already begun gathering data about the stability of the optogenetically altered cells and whether they’ll need repeated injections of the channelrhodopsin genes to stay responsive to light.

Our roadmap to clinical trials is very ambitious. We’re working now to finalize and freeze the design of the device, and we have ongoing preclinical studies in animals to check for phototoxicity and prove the efficacy of the basic idea. We aim to begin our first-in-human study in 2026, in which we’ll find the safest dose for the gene therapy. We hope to launch a large phase 3 clinical trial in 2028 to collect data that we’ll use in submitting the device for regulatory approval, which we could win in the early 2030s.

We foresee a future in which beams of light can bring rich soundscapes to people with profound hearing loss or deafness. We hope that the optical cochlear implant will enable them to pick out voices in a busy meeting, appreciate the subtleties of their favorite songs, and take in the full spectrum of sound—from trilling birdsongs to booming bass notes. We think this technology has the potential to illuminate their auditory worlds.

Read the whole story
11 days ago
Warsaw, Poland
Share this story

Forget olive oil. This new cooking oil is made using fermentation

The world runs on vegetable oil. It’s the third-most-consumed food globally after rice and wheat. It’s in your morning croissant and your oat milk, your salad dressing, your afternoon snack bar, and your midnight cookie.

Our obsession with vegetable oil is so big that we use more land—around 20% to 30% of all the world’s agricultural space—for vegetable oil crops than for fruits, vegetables, legumes, and nuts combined. All of this leads to devastating deforestation, biodiversity loss, and climate change. But what if we could grow cooking oil in a lab?

Launching today, Zero Acre’s first product is a cooking oil made by fermentation. High in healthy fats and low in bad fats, its Cultured Oil is produced using 85% less land than canola oil, emits 86% less CO2 than soybean oil, and requires 99% less water than olive oil. At $29.99, it’s significantly more expensive than its vegetable counterpart, but replacing just 5% of vegetable oils used in the U.S. with so-called cultured oil, the company claims, would free up 3.1 million acres of land every year.

Vegetable oils are bad for the environment, but they’ve also been linked with obesity, heart disease, cancer, and other diseases. That’s why Jeff Nobbs, cofounder and CEO of Zero Acre, has been trying to take them out of the food system for years—first with a keto-friendly restaurant called Kitava in San Francisco, then with nutrition-tracking software. Now his company is looking to make cooking oil by fermenting microbes rather than harvesting crops.

Conventionally, vegetable oil is made by crushing parts of a vegetable or seed (like sunflower seeds or olives) and extracting the oil. “Cultured oil,” on the other hand, is made by fermentation.

So, let’s back up a little. Fermentation involves a naturally occurring chemical reaction between two main groups of ingredients: microorganisms and natural sugars. Microorganisms include bacteria, microalgae, yeast, and other fungi; natural sugars can be found in a variety of products, from wheat to milk to grapes.

To make wine, for example, winemakers add yeast to grape juice. The yeast then converts, or ferments, the natural sugars of the grapes into ethanol, and you have yourself a crisp glass of chardonnay. But you can thank fermentation for an abundance of other foods like bread, cheese, yogurt, pickles, and even chocolate.

When it comes to cooking oil, the process is similar. Nobbs won’t disclose the exact kind of microorganism being used to produce Zero Acre’s Cultured Oil, but he says the company works with both non-GMO yeast and microalgae. “We focus on cultures that naturally produce healthy fats, and yeast and microalgae do that efficiently,” he says.

The process starts with a proprietary culture made up of food-producing microorganisms (yeast or microalgae) that is fed natural plants like sugar beet and sugarcane. (The company doesn’t grow these directly, but both are part of its supply chain.)

Over the course of a few days, the microorganisms convert, or ferment, the natural plant sugars into oils or fats. The resulting mixture is then pressed to release the oil, which is separated and filtered, and cultured oil is born. (Nobbs describes the taste as “lightly buttery,” though you can taste it only if you have it straight up with a spoon.)

Nobbs says the entire process takes less than a week, compared to soybean oil (the most widely consumed oil in the U.S.), which requires a six-month period just for the seeds to mature. His company’s Cultured Oil also requires 90% less land to produce than soybean oil. (The only reason the company needs land is to grow sugarcane, though Nobbs aspires to eventually use sugars in existing food waste like corncobs and orange peels, bringing the amount of land needed closer to zero, hence “Zero Acre.”)

That’s if the company manages to scale up. According to Kyria Boundy-Mills, a microbiologist at the University of California, Davis, who has studied yeast oils for the past 10 years, “microbial oils” like the one Zero Acre is producing have been studied for at least 80 years, “mostly for fuel,” she says via email.

Boundy-Mills recalls a biotechnology company called TerraVia (formerly Solazyme), which developed a technology to make biodiesel from microalgae. TerraVia then switched gears and used it to make the first culinary algae oil on the market, which made it to Walmart but was discontinued a few years later.

It’s a cautionary tale for Zero Acre, but “fermentation is a mature technology,” Boundy-Mills says, noting that yeasts and microalgae have been grown in large-scale commercial fermentations for decades. The challenge remains the price.

“Fermentation is faster than growing crops, but the capital and operating costs of fermentation facilities are much, much higher per acre than farmland’s,” she says. (Zero Acre runs a research facility in San Mateo and has raised $37 million to date.)

A bottle of Zero Acre’s Cultured Oil isn’t cheap, but as demand grows, Nobbs hopes that economies of scale will help the company lower the cost. “We want to kick off the flywheel, but it’s going to take a while to replace 200 million metric tons [of vegetable oil],” he says.

Nobbs is also eyeing solid fats that could replace palm shortening, and foods that come with cultured oil as an ingredient, noting, “We want an ecosystem to develop around cultured oil the same way it has developed around olive oil.”

Dark matter: our review suggests it's time to ditch it in favour of a new theory of gravity

We can model the motions of planets in the Solar System quite accurately using Newton’s laws of physics. But in the early 1970s, scientists noticed that this didn’t work for disc galaxies: stars at their outer edges, far from the gravitational force of all the matter at their centre, were moving much faster than Newton’s theory predicted.

This made physicists propose that an invisible substance called “dark matter” was providing extra gravitational pull, causing the stars to speed up – a theory that’s become hugely popular. However, in a recent review my colleagues and I suggest that observations across a vast range of scales are much better explained by an alternative theory of gravity, proposed by Israeli physicist Mordehai Milgrom in 1982 and called Milgromian dynamics, or Mond – requiring no invisible matter.

Mond’s main postulate is that when gravity becomes very weak, as occurs at the edge of galaxies, it starts behaving differently from Newtonian physics. In this way, it is possible to explain why stars, planets and gas in the outskirts of over 150 galaxies rotate faster than expected based on just their visible mass. But Mond doesn’t merely explain such rotation curves; in many cases, it predicts them.

Philosophers of science have argued that this power of prediction makes Mond superior to the standard cosmological model, which proposes there is more dark matter in the universe than visible matter. This is because, according to this model, galaxies have a highly uncertain amount of dark matter that depends on details of how the galaxy formed – which we don’t always know. This makes it impossible to predict how quickly galaxies should rotate. But such predictions are routinely made with Mond, and so far these have been confirmed.

Imagine that we know the distribution of visible mass in a galaxy but do not yet know its rotation speed. In the standard cosmological model, it would only be possible to say with some confidence that the rotation speed will come out between 100km/s and 300km/s on the outskirts. Mond makes a more definite prediction that the rotation speed must be in the range 180-190km/s.
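The definiteness of Mond's prediction comes from its deep-Mond limit, where the flat rotation speed follows from the visible mass alone: v^4 = G · M · a0, with Milgrom's constant a0 ≈ 1.2 × 10⁻¹⁰ m/s². A sketch with an illustrative baryonic mass of my own choosing (not a figure from the review):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10       # Milgrom's acceleration constant a0, m/s^2
M_SUN = 1.989e30   # solar mass, kg

def flat_rotation_speed_kms(m_baryonic_kg):
    """Deep-Mond prediction for the flat outer rotation speed:
    v^4 = G * M * a0, fixed by the visible (baryonic) mass alone."""
    return (G * m_baryonic_kg * A0) ** 0.25 / 1000.0

# An illustrative galaxy with 7e10 solar masses of visible matter:
print(round(flat_rotation_speed_kms(7e10 * M_SUN)))  # → 183 (km/s)
```

Because nothing in the formula is adjustable per galaxy, measuring the visible mass pins the rotation speed down to a narrow range, exactly the kind of tight prediction described above.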

If observations later reveal a rotation speed of 188km/s, then this is consistent with both theories – but clearly, Mond is preferred. This is a modern version of Occam’s razor: the simplest solution is preferable to more complex ones, in this case that we should explain observations with as few “free parameters” as possible. Free parameters are constants: certain numbers that we must plug into equations to make them work. But they are not given by the theory itself – there’s no reason they should have any particular value – so we have to measure them observationally. An example is the gravitational constant, G, in Newton’s gravity theory, or the amount of dark matter in galaxies within the standard cosmological model.

We introduced a concept known as “theoretical flexibility” to capture the underlying idea of Occam’s razor that a theory with more free parameters is consistent with a wider range of data – making it more complex. In our review, we used this concept when testing the standard cosmological model and Mond against various astronomical observations, such as the rotation of galaxies and the motions within galaxy clusters.

Each time, we gave a theoretical flexibility score between –2 and +2. A score of –2 indicates that a model makes a clear, precise prediction without peeking at the data. Conversely, +2 implies “anything goes” – theorists would have been able to fit almost any plausible observational result (because there are so many free parameters). We also rated how well each model matches the observations, with +2 indicating excellent agreement and –2 reserved for observations that clearly show the theory is wrong. We then subtract the theoretical flexibility score from that for the agreement with observations, since matching the data well is good – but being able to fit anything is bad.

A good theory would make clear predictions which are later confirmed, ideally achieving a combined score of +4 in many different tests (+2 - (-2) = +4). A bad theory would score between 0 and -4 (-2 - (+2) = -4): its precise predictions would fail, because precise predictions are unlikely to come out right if the underlying physics is wrong.
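The scoring arithmetic described above can be stated directly (a restatement of the review's scheme, not its actual code):

```python
def combined_score(agreement, flexibility):
    """Combined score for one test: agreement with observations
    (+2 excellent, -2 falsified) minus theoretical flexibility
    (-2 clear prediction, +2 anything goes)."""
    assert -2 <= agreement <= 2 and -2 <= flexibility <= 2
    return agreement - flexibility

print(combined_score(+2, -2))  # confirmed clear prediction → 4
print(combined_score(-2, +2))  # falsified despite maximal flexibility → -4
```

Averaging such scores over many independent tests is what produces the headline numbers reported below.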

We found an average score for the standard cosmological model of –0.25 across 32 tests, while Mond achieved an average of +1.69 across 29 tests. The scores for each theory in many different tests are shown in figures 1 and 2 below for the standard cosmological model and Mond, respectively.

It is immediately apparent that no major problems were identified for Mond, which at least plausibly agrees with all the data (notice that the bottom two rows denoting falsifications are blank in figure 2).

One of the most striking failures of the standard cosmological model relates to “galaxy bars” – rod-shaped bright regions made of stars – that spiral galaxies often have in their central regions (see lead image). The bars rotate over time. If galaxies were embedded in massive halos of dark matter, their bars would slow down. However, most, if not all, observed galaxy bars are fast. This falsifies the standard cosmological model with very high confidence.

Another problem is that the original models that suggested galaxies have dark matter halos made a big mistake – they assumed that the dark matter particles provided gravity to the matter around them, but were not affected by the gravitational pull of the normal matter. This simplified the calculations, but it doesn’t reflect reality. When this was taken into account in subsequent simulations, it was clear that dark matter halos around galaxies do not reliably explain their properties.

There are many other failures of the standard cosmological model that we investigated in our review, with Mond often able to naturally explain the observations. The reason the standard cosmological model is nevertheless so popular could be down to computational mistakes or limited knowledge about its failures, some of which were discovered quite recently. It could also be due to people’s reluctance to tweak a gravity theory that has been so successful in many other areas of physics.

The huge lead of Mond over the standard cosmological model in our study led us to conclude that Mond is strongly favoured by the available observations. While we do not claim that Mond is perfect, we still think it gets the big picture correct – galaxies really do lack dark matter.

Octopus and Human Brains Share the Same “Jumping Genes”

According to a new study, the neural and cognitive complexity of the octopus could originate from a molecular analogy with the human brain.

New research has identified an important molecular analogy that could explain the remarkable intelligence of these fascinating invertebrates.

With its extremely complex brain and cognitive abilities, the octopus is an exceptional organism, unique among invertebrates, so much so that it resembles vertebrates more than invertebrates in several respects. The neural and cognitive complexity of these animals could originate from a molecular analogy with the human brain, as discovered by a research paper that was recently published in BMC Biology and coordinated by Remo Sanges from Scuola Internazionale Superiore di Studi Avanzati (SISSA) of Trieste and by Graziano Fiorito from Stazione Zoologica Anton Dohrn of Naples.

This research shows that the same ‘jumping genes’ are active both in the human brain and in the brains of two octopus species: Octopus vulgaris, the common octopus, and Octopus bimaculoides, the Californian octopus. The discovery could help us understand the secret of the intelligence of these remarkable organisms.

Sequencing the human genome revealed as early as 2001 that over 45% of it is composed of sequences called transposons, so-called ‘jumping genes’ that, through molecular copy-and-paste or cut-and-paste mechanisms, can ‘move’ from one point to another of an individual’s genome, shuffling or duplicating.

In most cases, these mobile elements remain silent: they have no visible effects and have lost their ability to move. Some are inactive because they have, over generations, accumulated mutations; others are intact, but blocked by cellular defense mechanisms. From an evolutionary point of view even these fragments and broken copies of transposons can still be useful, as ‘raw matter’ that evolution can sculpt.
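The copy-and-paste and cut-and-paste mechanisms described above can be sketched in a toy simulation. This is purely illustrative: the “genome” is just a list of placeholder symbols, and the function names are invented for this example.

```python
import random

def copy_paste(genome, element):
    """Retrotransposition: a new copy of the element is inserted at a
    random position while the original stays in place, so copy number grows."""
    pos = random.randrange(len(genome) + 1)
    return genome[:pos] + [element] + genome[pos:]

def cut_paste(genome, element):
    """DNA transposition: the element is excised from its current position
    and reinserted elsewhere, so copy number is unchanged."""
    genome = list(genome)
    genome.remove(element)  # excise the first occurrence
    pos = random.randrange(len(genome) + 1)
    return genome[:pos] + [element] + genome[pos:]

genome = list("AAAA") + ["LINE"] + list("BBBB")
after_copy = copy_paste(genome, "LINE")
after_cut = cut_paste(genome, "LINE")

print(after_copy.count("LINE"))  # 2: copy-and-paste duplicates the element
print(after_cut.count("LINE"))   # 1: cut-and-paste only relocates it
```

Over many generations, repeated copy-and-paste events are what allowed transposon-derived sequences to accumulate to nearly half of the human genome.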


Drawing of an octopus. Credit: Gloria Ros

Among these mobile elements, the most relevant are those belonging to the so-called LINE (Long Interspersed Nuclear Elements) family, found in about a hundred still potentially active copies in the human genome. It was traditionally thought that LINE activity was just a vestige of the past, a remnant of the evolutionary processes that involved these mobile elements, but in recent years new evidence has emerged showing that their activity is finely regulated in the brain. Many scientists believe that LINE transposons are associated with cognitive abilities such as learning and memory: they are particularly active in the hippocampus, the most important structure of our brain for the neural control of learning processes.

The octopus’ genome, like ours, is rich in ‘jumping genes’, most of which are inactive. Focusing on the transposons still capable of copy-and-paste, the researchers identified an element of the LINE family in parts of the brain crucial for the cognitive abilities of these animals. The discovery, the result of the collaboration between Scuola Internazionale Superiore di Studi Avanzati, Stazione Zoologica Anton Dohrn and Istituto Italiano di Tecnologia, was made possible thanks to next-generation sequencing techniques, which were used to analyze the molecular composition of the genes active in the nervous system of the octopus.

“The discovery of an element of the LINE family, active in the brain of the two octopus species, is very significant because it adds support to the idea that these elements have a specific function that goes beyond copy-and-paste,” explains Remo Sanges, director of the Computational Genomics laboratory at SISSA, who began working on this project when he was a researcher at Stazione Zoologica Anton Dohrn of Naples. The study, published in BMC Biology, was carried out by an international team of more than twenty researchers from all over the world.

“I literally jumped out of my chair when, under the microscope, I saw a very strong signal of activity of this element in the vertical lobe, the structure of the brain which in the octopus is the seat of learning and cognitive abilities, just like the hippocampus in humans,” says Giovanna Ponte from Stazione Zoologica Anton Dohrn.

According to Giuseppe Petrosino from Stazione Zoologica Anton Dohrn and Stefano Gustincich from Istituto Italiano di Tecnologia, “This similarity between man and octopus that shows the activity of a LINE element in the seat of cognitive abilities could be explained as a fascinating example of convergent evolution, a phenomenon in which the same molecular process develops independently in two genetically distant species, in response to similar needs.”

“The brain of the octopus is functionally analogous in many of its characteristics to that of mammals,” says Graziano Fiorito, director of the Department of Biology and Evolution of Marine Organisms of the Stazione Zoologica Anton Dohrn. “For this reason, also, the identified LINE element represents a very interesting candidate to study to improve our knowledge on the evolution of intelligence.”

Reference: “Identification of LINE retrotransposons and long non-coding RNAs expressed in the octopus brain” by Giuseppe Petrosino, Giovanna Ponte, Massimiliano Volpe, Ilaria Zarrella, Federico Ansaloni, Concetta Langella, Giulia Di Cristina, Sara Finaurini, Monia T. Russo, Swaraj Basu, Francesco Musacchia, Filomena Ristoratore, Dinko Pavlinic, Vladimir Benes, Maria I. Ferrante, Caroline Albertin, Oleg Simakov, Stefano Gustincich, Graziano Fiorito and Remo Sanges, 18 May 2022, BMC Biology.
DOI: 10.1186/s12915-022-01303-5


This super-tree could help feed the world and fight…


The first miracle was the de-bittering of pongamia. Sikka had hoped it could be made palatable, at least for cattle, though he feared that would require nasty chemical solvents unfit for human consumption. He was stunned when his team figured out a way to do it with a solvent already consumed by quite a few humans: alcohol. 

That’s when Terviva began to pivot from fuel toward its current mission of planting millions of trees to feed billions of people. As a child, Sikka regularly visited relatives in India, and after college, he worked for the U.S. State Department in West Africa, so he had witnessed the developing world’s desperate need for protein and vegetable oil firsthand. Now he had a way to grow a lot without using any productive land.

The problem was finding someone to do the growing, because farmers are notoriously reluctant to gamble on untested crops, especially tree crops that take four years to yield their first harvest. Farmers are only willing to take a risk like that when they are, as Sikka puts it, “totally fucked,” which brings us to Terviva’s second miracle: A bacterial disease began wiping out Florida’s citrus trees, inspiring some totally fucked farmers to take a chance on pongamia on a few of their worst tracts of land.

So far, pongamia has lived up to its billing, producing yields comparable to Midwestern soybeans in much poorer soil with virtually no chemicals or added water. In test fields, some trees are producing yields four to 10 times higher than soybean fields. Pongamia is basically vertical soy, except it doesn’t need to be plowed or sprayed or irrigated. It simply converts sunlight, air and rain into protein and oil — plus an extract from the de-bittering process that Terviva has successfully patented as a bio-fungicide. And the field results should only improve with experience and advanced breeding of the superstars from the test fields. 

Terviva has now raised more than $100 million, hired more than 100 employees, sequenced the entire pongamia genome, and built a solid reputation as an ag-tech startup. Ultimately, though, Sikka is building a food company, which is why he’s so excited about miracle number three: De-bittered pongamia oil turned out to be a golden-colored substitute for olive oil. A glowing analysis by the food consultancy Mattson found it produced “an indulgent mouthfeel” reminiscent of foods fried in butter. Pongamia also has enormous potential as a protein for plant-based milks and meats, as it contains all nine essential amino acids.

The food giant Danone is now partnering with Terviva to develop pongamia as a “climate-friendly, climate-resilient, regenerative,” non-GMO alternative to soy and palm oil, which are increasingly unpopular with consumers who care about sustainability. The first products featuring Terviva’s newly branded “Ponova oil” could hit the market early next year.

“The universe has smiled at us so often. We’ve had so many strokes of dumb luck to get where we are,” Sikka says.

“And we still haven’t made a dent.”

You knew the bad news was coming eventually. After 12 years of miracles, Terviva now has 1,500 acres of pongamia in the ground. But around the world, there are about 300 million acres of soy in the ground. Sikka had hoped farmers would rush to pongamia once the Florida experiment proved the concept, but that has not happened. The world finally has a high-yield, low-impact crop that can grow almost anywhere, and it’s still a rounding error, barely a blip on the global landscape.

“It just shows how hard it is to change agriculture,” Sikka says. “It takes forever, even when everything goes right.”

The thing is, agriculture does need to change for the earth to remain hospitable to humans, and forever is too long to wait.


Sikka continues to hope that more farmers will embrace pongamia. He also hopes that institutional investors with deeper pockets and greater risk tolerance than farmers will finance much larger projects to plant tens of thousands of acres of pongamia. But since hope is not a business plan, Sikka has figured out a way to generate revenue and start selling ingredients without reshaping the agricultural landscape. There are already over 1 million tons of pongamia seeds on trees growing wild in India, and Terviva is now paying impoverished villagers to pick them.

It’s a complex undertaking, requiring delicate negotiations with village elders, direct payments through mobile phones, and sophisticated geolocation technology to trace the seeds. But it’s already produced 5,000 tons of beans, enough to take Ponova oil to market, while injecting $2 million into impoverished rural areas. Sikka believes India can be an economic engine for Terviva, and vice versa. He also believes pongamia could inspire the Danones of the world to invest in other exotic tree crops indigenous to the global South, from ramon seeds to croton nuts to Bambara beans. 

Again, though, Terviva’s 5,000 tons are a pittance compared to the world’s 350 million tons of soybeans, and $2 million barely counts as a drop in the $200 billion global cooking-oil bucket. Unfortunately, feeding the world without frying the world is an almost indescribably gigantic problem.
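A back-of-the-envelope calculation with the figures quoted above makes the gap concrete:

```python
# Scale of pongamia today versus global soy, using the numbers in the text.
pongamia_tons = 5_000          # wild-harvested pongamia seeds processed so far
soy_tons = 350_000_000         # annual world soybean production

pongamia_acres = 1_500         # pongamia planted in Florida
soy_acres = 300_000_000        # soy planted worldwide

revenue = 2_000_000            # injected into rural India, in dollars
oil_market = 200_000_000_000   # global cooking-oil market, in dollars

print(f"tonnage share: {pongamia_tons / soy_tons:.4%}")    # 0.0014%
print(f"acreage share: {pongamia_acres / soy_acres:.4%}")  # 0.0005%
print(f"market share:  {revenue / oil_market:.4%}")        # 0.0010%
```

On every axis the figure rounds to roughly a thousandth of a percent, which is why the article calls it a rounding error.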

By 2050, the agricultural sector will have to produce a couple billion additional tons of food each year without clearing any additional forests. That will require dramatic changes on the demand side, like wasting less food, eating less beef and using less good land to grow biofuels. It will also require dramatic changes on the supply side, like higher crop and livestock yields, more resilience to a warmer world, fewer emissions from fertilizer and manure, and less chemical and mechanical degradation of soils.

Pongamia checks a bunch of those boxes, but not at a large enough scale to matter much yet. One lesson of its miracles is that it will take a lot more miracles to transform global agriculture.


EU pioneers carbon removals certification: Key recommendations and principles


Doubling down on its global climate leadership, the European Union (EU) is pioneering a certification mechanism for carbon dioxide removals (CDR). While policymakers must continue to focus on rapid, near-term reduction of emissions, it has become abundantly clear that the world will need to draw large amounts of carbon dioxide out of the atmosphere through CDR. While several EU Member States have already begun developing policies to support CDR, gaps remain at the EU level. With the forthcoming certification mechanism, the EU has the unique opportunity to safeguard CDR’s climate value by setting a global standard for other countries to follow.

The latest report from Working Group III of the Intergovernmental Panel on Climate Change (IPCC) confirms CDR is an essential tool in the climate toolbox. In fact, even if the world succeeds at dramatically cutting emissions in the near-term, we are still expected to need hundreds of gigatonnes of CDR cumulatively by the end of the century. The IPCC highlights that CDR can help reduce net emissions in the near-term, counterbalance residual emissions, and achieve and sustain net-negative emissions in the long-term.

But for CDR to fulfill these roles in climate mitigation, it must be rigorously measured, well governed, and scaled up through dedicated incentive regimes. Currently, there are no widely accepted standards for CDR certification, and existing informal rules are driven by private initiatives. The absence of regulation creates substantial risks for the environment and for consumer protection, as many CO2 removal claims lack methodological and technical transparency as well as comparability. Furthermore, regulatory intervention is needed to safeguard the climate benefits of scaling permanent CDR. By introducing much-needed clarity, the EU’s certification mechanism is poised to lay the foundation for responsible CDR deployment in Europe as part of the bloc’s path to climate neutrality. A rigorous certification mechanism would provide a strong signal of political support and would bolster public trust that CDR provides reliable climate benefits. Thus, the upcoming EU legislation should ensure that carbon removals certified under this mechanism are real, measurable, additional, permanent, do not result in leakage, and avoid double counting.

Carbon removal certification also presents an excellent opportunity for the EU to break the conventional fault lines between natural vs. technological CDR methods. This unhelpful dichotomy not only fails to provide clarity on the quality of removals, but also distracts from the issue of permanence, creates confusion via various interpretations, and divides stakeholders. The best way forward would be to emulate the IPCC’s approach and clearly categorize and certify CDR methods based on the removal process (e.g., land-based, ocean-based, geochemical, or chemical) and estimated timescale of carbon storage (e.g., decades to centuries, centuries to millennia, beyond ten thousand years). Carbon removal options shouldn’t be a question of picking favored technologies, but a question of the permanent removal of harmful emissions from our atmosphere.

A rigorous certification mechanism would also enable the incentivization of a substantial scaling-up of CDR methods in the EU to help achieve climate neutrality by 2050. Requiring robust monitoring, reporting, and verification as well as public disclosure of credits used in voluntary and compliance markets can help to avoid double-counting, thereby further building public trust and climate benefit alike.

The certification mechanism is also expected to guide the voluntary carbon market in the private sector. The role of the private sector in advancing CDR is growing, and the collaboration between the public and private sector will determine the success of the certification mechanism. It is therefore crucial to provide stakeholders with a clear indication of how the certification mechanism will apply to voluntary and compliance markets, and how this will evolve over the next decades.

CDR represents an essential suite of options for helping to address the climate crisis. If CDR is to fulfill its climate value, it must be stringently regulated with global standards to ensure quality and permanence. As it pioneers the CDR certification mechanism over the months ahead, the European Commission has an unparalleled responsibility to adopt the highest environmental standards, so that it can further bolster the EU’s credibility as a climate leader and provide a global gold standard for CDR certification.

  1. The upcoming EU legislation should ensure that carbon removals certified under this mechanism are real, measurable, additional, permanent, do not result in leakage, and avoid double counting.
  2. The EU should establish clear institutional roles to oversee and implement the certification framework, ensuring robust governance and compliance.
  3. The EU should establish advisory boards at the EU-level to ensure harmony across the bloc and high standards of the certification mechanism. These could include a scientific advisory board to achieve excellence with regards to scientific evidence and rigor; and an advisory board consisting of market participants as well as other civil society stakeholders to inform the evolution of the certification mechanism.
  4. The EU should provide stakeholders with a clear indication on how the certification mechanism will feed into the voluntary and compliance markets, and how this is expected to evolve over the next decades.

In establishing the certification mechanism, it is essential that the Commission ensure the mechanism includes only methods which adhere to basic minimum principles for CDR.

The principles guiding Carbon Dioxide Removal Certification are:

  • Real – CO2 is removed from the atmosphere and durably stored. 
  • Measurable – The removed CO2 is quantified via robust monitoring, reporting and verification rules. 
  • Permanent – Where the durability of CO2 storage is not long-term (beyond 1000 years), legal and financial mechanisms ensure the permanence of CO2 storage in perpetuity.
  • Additionality – CDR activities are additional to those required by existing policies and regulations. For example, these activities will not occur in the absence of financing from selling carbon credits, where such financing is used. 
  • Avoidance of leakage – Removal activities do not cause emissions at other geographical locations due to market changes or other shifts. A robust consequential life cycle assessment with cradle-to-grave system boundaries is required to ensure this. The potential for physical leakage from CO2 storage sites is addressed via legal and financial mechanisms. 
  • Avoidance of double counting – Certificates for removals for the same activity are not issued, used or claimed by more than one entity. This is highly relevant given that the certification for carbon removals is expected to leverage voluntary carbon markets, and that carbon removals are also considered to be integrated into the EU regulatory and compliance frameworks. 
  • Sustainability – Negative externalities which can stem from both social and environmental implications of CDR are addressed through strict regulation to ensure that CDR projects result in no net-harm to the environment and people.
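To illustrate how a registry could enforce the double-counting principle above, here is a minimal sketch. The class and method names are hypothetical, invented for this example and not drawn from any EU specification:

```python
class RemovalRegistry:
    """Toy registry enforcing single issuance and single retirement of
    carbon-removal certificates (illustrative only)."""

    def __init__(self):
        self.issued = {}   # certificate id -> removal activity id
        self.retired = {}  # certificate id -> entity that claimed it

    def issue(self, cert_id, activity_id):
        # One certificate per removal activity: reject any duplicate.
        if cert_id in self.issued or activity_id in self.issued.values():
            raise ValueError("certificate or activity already registered")
        self.issued[cert_id] = activity_id

    def retire(self, cert_id, entity):
        # A certificate may be claimed by exactly one entity, exactly once.
        if cert_id not in self.issued:
            raise ValueError("unknown certificate")
        if cert_id in self.retired:
            raise ValueError("certificate already claimed")
        self.retired[cert_id] = entity

registry = RemovalRegistry()
registry.issue("CERT-001", "activity-A")
registry.retire("CERT-001", "Buyer 1")
try:
    registry.retire("CERT-001", "Buyer 2")  # second claim is rejected
except ValueError as e:
    print(e)  # certificate already claimed
```

A real mechanism would add robust monitoring, reporting, and verification behind each issuance, plus public disclosure so that voluntary and compliance markets can cross-check claims; the sketch shows only the bookkeeping invariant.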

This article is also published by the Clean Air Task Force. Energy Voices is a democratic space presenting the thoughts and opinions of leading Energy & Sustainability writers, their opinions do not necessarily represent those of illuminem.
