
Should Congress save newspapers from Google?


This article is republished with permission from BIG, a newsletter on the history and politics of monopoly power. Subscribe here.

“In America there is scarcely a hamlet which has not its own newspaper.” —Alexis de Tocqueville, 1835

Today’s issue is on how Australia saved its newspapers from Google and Facebook, and whether Congress will follow suit in America.

The Dean of the Columbia School of Journalism engaged in some public introspection in a Politico article last week. A nine-and-a-half-month program to get a journalism degree there costs $121,290. Traditionally a working-class profession, journalism, like fashion or art or publishing, is now a sinecure for the wealthy.

There are a lot of discussions about the media in American politics, but very few about advertising, which is the key pivot point around which the media organizes itself. In America, and throughout the world, the press is dying, starved of ad revenue. Since 2005, we’ve lost more than 2,500 newspapers and tens of thousands of jobs in journalism. Australia, for instance, lost more than 15% of its newspapers from 2008-2019, and you can trace similar declines globally.

A common explanation is, well, the internet killed the news. And yet, ad revenue for newspapers peaked in 2006, which was more than 10 years after the internet became a commercial medium. A different explanation for the decline of news publishing is that, starting in the mid-2000s, Google and Facebook built market power in ad markets, directing revenue away from newspapers and toward themselves. One very clear indication that the market power story has merit is that last year, the Australian government made a significant change to policy to undo part of big tech’s bargaining leverage. If “the internet killed the news,” then changes to ad markets wouldn’t matter. But the result of the new law was a massive increase in journalism.

In fact, in Australia today, it is hard to recruit interns at newspapers because there are so many full-time jobs available, even as Gannett in the U.S. just did yet another round of layoffs. Now the U.S. Senate, as well as legislatures around the world, is poised to copy Australia’s example through an antitrust bill called the Journalism Competition and Preservation Act (JCPA).

That’s what I’m going to write about today. The death of the news, its revival in Australia, and the weird politics around the debate.

Can America Exist without Newspapers?

Let’s start with the problem, which is that newspapers are disappearing en masse across America. So far, one might say, so what? In the 1980s, newspapers became part of big conglomerates and failed to address their business model problems, instead collecting high profit margins due to local monopoly status. That’s certainly Jack Shafer’s view. They are also owned by big private equity predators. Why care about whether some hedge fund magnate has more money versus some Palo Alto magnate? Moreover, the news world hasn’t covered itself in glory. As someone who is still angry about the lies that led us into the war in Iraq, I shrug my shoulders about the bankruptcy of any particular news outlet.

However, newspapers and the news are not the same thing. One typical way to fix the news is by starting new outlets to compete with the old ones. That’s how the alt-weeklies of the 1960s were formed, filling a void that the audience wanted. And yet, despite high traffic to local news, as well as high interest in niche communities, new outlets are mostly not being born. No one is replacing the local newspapers that go out of business, such that today, nearly a third of U.S. counties have no daily paper. People have tried many different innovative strategies, and they can generate traffic and readers. But unlike in any other period in American history, publishers can’t manage to sell ads. And if they can’t sell ads, they can’t finance a diverse set of independent publishing outlets.

This situation, of a newspaper-less nation, is a crisis. America has never existed as an independent nation without lots and lots of local and niche newspapers. “Nothing is easier than to set up a newspaper, and a small number of readers suffices to defray the expenses of the editor,” Alexis de Tocqueville wrote in 1835 in his iconic Democracy in America. “The number of periodical and occasional publications which appears in the United States actually surpasses belief.” Tocqueville actually found it kind of annoying because the papers were often crude. And yet, it was also a source of public order and local control of politics.

In 19th century Europe, aristocrats and kings controlled and financed the news. But most American papers, by contrast, were chock-full of advertising. No one paper was powerful, Tocqueville argued, because there were so many. Anticipating the debate over “disinformation” today, he called it “self-evident” that “the only way to neutralize the effect of public journals is to multiply them indefinitely.” The wide distribution of lots of opinions meant that, in America, no one who was particularly powerful could use papers, as Tocqueville noted, to “excite the passions of the multitude to their own advantage.” In Europe, there were fewer outlets, and not being financed by ads, they were often state-controlled, polarizing, elitist, and destructive.

The European aristocratic system of the press is what American journalism looks like today. Trade publications and elite news centered in New York City and D.C. do pretty well, cable news gets automatic payment from subscriber fees, but the local news is dying. It’s very hard to start a paper these days and have it financed by anyone but foundations or billionaires. Jeff Bezos owns the Washington Post, Marc Benioff owns Time, Laurene Powell Jobs owns The Atlantic, and Miriam Adelson owns the Las Vegas Review-Journal. Meanwhile, private equity funds are squeezing whatever they can out of the remaining local press, laying journalists off en masse. The news increasingly looks like an oligarchy.

The intense paranoia about “disinformation” is a result of the narrowing of the economic and political basis of news. The idea of free expression as a mechanism to promote liberty and correct errors is ebbing in parts of our political spectrum. Even certain left-wing advocacy groups are teaming up with dominant distributors in Silicon Valley to advocate against antitrust rules and for mass censorship, attacking the basic diversity of thought that underpinned American democracy in the name of preventing “disinformation.”

But there’s an economic basis to this shift.

The 257 Billion Reasons for the Collapse of the News

Let’s start with why news collapsed, which has to do with advertising markets. From the early 1900s until the early 2000s, 60% to 80% of the budgets of newspapers came from advertising. And in the 1990s and early 2000s, this model ported reasonably well to the web, with a host of ad intermediaries fostering open markets for internet advertising. But a host of mergers, culminating in 2007 with Google’s purchase of DoubleClick, changed the situation.

What makes advertising valuable is two things. First is the placement. Is there a pair of eyeballs looking at an ad? And second is data. Who is looking at the ad, and are they looking at it when they want to buy something? In 2007, Google was the dominant search engine, and DoubleClick was the dominant system tracking people all over the web. DoubleClick DART software enabled publishers and advertisers to serve ads in standardized formats. The company began brokering advertising, helping to match ad buyers with available ad inventory.

When Google sought to buy DoubleClick, it was a major pivot point in the industry, and highly controversial. The Federal Trade Commission (FTC) reviewed the merger, and voted 4-1 to let it go through. When these firms combined, it “tipped” online advertising into a monopoly. Google could now track every individual everywhere online, and show them ads with more granularity than anyone else. Because of DoubleClick’s market position and its own search data, Google now had a God’s-eye view into what every publishing company, every advertiser, and every user did. (I’m going to tell the Google story here, but the Facebook story is roughly similar, and the two, in fact, entered into an alleged cartel arrangement in the mid-2010s that is now being litigated in an antitrust suit.)

From 2003 onward, Google rolled up much of the online intermediary world. It bought YouTube, Applied Semantics, Keyhole, Admob, Urchin, Android, Neotonic, and hundreds of other firms. Though Google portrayed itself as innovative, in fact, most of its products, from Maps to Gmail, came from acquisitions. By 2014, Google was no longer just a search engine; if you bought advertising, sold advertising, brokered advertising, tracked advertising, etc., you were doing it on Google tools. It tied its products together so you couldn’t get access to Google search data or YouTube ad inventory unless you used Google ad software, which killed rivals in the market. It downgraded newspapers that tried to negotiate different terms.

This leverage came from Google’s control of both the distribution of news and the software and data underpinning online ad markets. Roughly half of Americans report getting news from social media, while 65% get it from a search engine like Google. That means newspapers are getting a lot of their customers from entities who compete with them to sell ads, often to their same audience. And they must use the software offered by Google to sell those ads, and often display content on Google News under the terms that Google demands, which includes allowing Google to display much of the article itself on its own properties. (If you want a good example of how Google steals content, read this piece on what it did to Celebrity Net Worth.)

Over these years, Google introduced Google News and standards for web pages that privileged its own services, cut favorable deals with adblockers, and fought against things like header bidding, which was an attempt by publishers/advertisers to get better prices than Google was offering for ad inventory. Google began demanding terms for data and formatting that publishers had no choice but to supply. In 2017, for instance, the Wall Street Journal refused to allow Google search users to read its content for free, instead locating its content behind a paywall. Google downgraded the status of the newspaper in its search rankings. While subscriptions went up, traffic to the newspaper dropped by 44%.

Over the course of these 20 years, under Republican and Democratic administrations, neither Congress nor the FTC created mandatory public rules over the use of data, and enforcers pursued no meaningful antitrust suits to stop big tech firms. In 2012, the FTC Bureau of Economics, in one of the all-time most embarrassing episodes for economics, actively killed a suit that could have stopped the monopolization of the search market. So Google became a monopoly in the advertising industry, not just over search ads, but over most online advertising markets. Last year, Google’s global revenue amounted to $257.6 billion, which is nearly all from advertising. That’s a huge amount of money, some of which used to go to finance journalism, but now goes to private jets in Palo Alto.

The net effect of decades of bad policy is simple. Newspapers began to die, and private equity is now feeding on the carcasses. This collapse, and the turn toward aristocracy it is fostering, is not driven by some large cultural force, but by shifts in competition and antitrust law that fostered market power in advertising markets.

Why the Australian Law Works

One possible way that newspapers could have fought back would have been to band together and bargain collectively with Google. One newspaper can’t stop Google News from imposing new contractual terms or prevent Google from rolling out new ad-formatting standards, but thousands of them can if they work together. The problem is that independent businesses collectively bargaining against a dominant firm is an antitrust violation, treated as price-fixing by enforcers. In 2012, for instance, book publishers and Apple were sued by the Department of Justice for trying to create a competitor to Amazon’s Kindle e-book reader. The sword of antitrust was perversely used by the Obama administration on behalf of the monopolist.

Prosecuting all collaboration with rivals as price-fixing while legalizing mergers creates a tremendous incentive to merge to monopoly. And that’s exactly what happened. Google’s hundreds of mergers were legal, but newspapers couldn’t band together to address the bargaining power because that was considered price-fixing. (This dynamic is similar to that of Uber drivers, who can’t collectively bargain because, as independent businesses, doing so would be treated as price-fixing.)

Here’s my crude drawing of what this dynamic looked like. On the left, Google’s mergers are legal, so it can combine and form a conglomerate with products like YouTube, AdMob, DoubleClick, and hundreds of other firms. On the right, newspapers cannot work together; that combination is illegal.

I’ve drawn a slightly modified picture to illustrate the resulting bargaining imbalance. On the left, Google gets to put the full weight of its conglomerate power into any negotiation with any supplier or customer or user. On the right, each newspaper must bargain on its own.

Of course, these images only approximate reality. Google is much, much, much bigger than the entire news industry, but the basic bargaining imbalance in ad markets is at the core of the death of news gathering. It’s not just the reason newspapers are falling apart; it’s also why it’s extremely difficult to start new ad-financed publications.

Now, the market power story isn’t obvious. Much of the shift was disguised by two events. Craigslist in the mid-2000s killed classified ads, and then the financial crisis crushed the whole ad market, temporarily. Newspaper publishers were confused, and did not at first understand what was happening. Moreover, advertising is weird and confusing and full of quasi-con artists who spout chatter about data, so most people just accept the false narrative that “the internet killed the news.”

Moreover, depending on your perspective, it often doesn’t look like Google is the bad guy. Google is delivering free services to consumers, and consumers don’t know that the content is stolen from publishers. It just looks like awesome free content. And to newspaper workers, each newspaper—especially because many have been bought by corporate chains—looks powerful, buying back tens of millions of dollars of stock. “I don’t bargain with Google, I bargain with the publishers, so how money comes in is less urgent to me as a union than how it’s spent,” one reporter told me over Twitter to explain skepticism toward the bill. But newspapers are a flea on an elephant when compared to Google, and they are being squeezed by forces much larger than themselves.

And this brings me to how Australia started fixing the problem. I’ve been following the Australian competition authorities for years because they are ahead of the game when it comes to big tech. After the Australian Competition and Consumer Commission (ACCC) did a long series of investigations and reports on how big tech firms operate, Australia passed a law letting newspapers band together and bargain against big tech. Here’s another crude drawing showing the fix.

The government also set certain rules mandating how the bargaining should take place. Newspapers got to form co-ops, but also request arbitration with dominant big tech firms. As I wrote, “the arbitrator doesn’t micromanage the process, but does ‘baseball style’ arbitration, meaning both sides give an offer, and the arbitrator picks one of them. This kind of arbitration is both faster and less intrusive than standard government regulation, and creates the incentive for both sides to offer non-extreme proposals they can live with, for fear the arbitrator will simply pick the other side if they suggest something outlandish.”
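
To make that incentive concrete, here is a minimal sketch of how final-offer ("baseball-style") arbitration plays out. The dollar figures and the arbitrator's notion of "fair value" below are invented for illustration; nothing here is drawn from the Australian code itself.

```python
# Illustrative sketch of final-offer ("baseball-style") arbitration.
# All numbers are hypothetical; this only models the incentive described
# above: the arbitrator picks whichever offer is closer to its own
# (unrevealed) estimate of a fair price, so extreme offers tend to lose.

def arbitrate(publisher_offer: float, platform_offer: float, fair_value: float) -> float:
    """Return the offer closer to the arbitrator's view of fair value."""
    if abs(publisher_offer - fair_value) <= abs(platform_offer - fair_value):
        return publisher_offer
    return platform_offer

fair_value = 100.0                         # arbitrator's private estimate (hypothetical)
print(arbitrate(110.0, 95.0, fair_value))  # 95.0  -> the moderate platform offer wins
print(arbitrate(110.0, 40.0, fair_value))  # 110.0 -> the lowball platform offer loses
```

Because a lowball or sky-high proposal simply hands the win to the other side, both parties are pushed toward offers an arbitrator could plausibly accept.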

Now, the ideal solution is a time machine to prevent Google from becoming a monopoly in the first place. But a temporary exemption from antitrust laws, along with rules that mimic a healthy market where there is transparency of data and a robust set of buyers and sellers instead of a few dominant platforms, is the next best thing, at least for now. As the legislature noted, “This allows the panel, in making their determination, to consider the outcome of a hypothetical scenario where commercial negotiations take place in the absence of the bargaining power imbalance.”

When Australia proposed this legislation, large swaths of the media reform and internet world freaked out. Many tech-friendly lobbyists, like those at Techdirt and various trade associations, obviously opposed it, claiming that the law would place a tax on every single link and destroy the web. This “link tax” idea spread among normally credible actors. For instance, here’s the inventor of the World Wide Web, Tim Berners-Lee, making that point.

He wasn’t the only one. The nonprofit group Public Knowledge, which, though funded by Google and Facebook, has supported certain antitrust laws to address big tech dominance, argued that the legislation would be “a radical change that threatens the fundamental nature of the internet as it exists today.” The left-wing group Free Press argued that the bill simply forced publishers to “pay for links” and would backfire, “wedding an old-media business model to a new-media disinformation engine.” There were many more criticisms, but that’s the gist.

The basic argument from some of these advocates was that for-profit media simply isn’t realistic anymore. “The market-driven model that once helped sustain public-interest news doesn’t function in a world where attention is the main commodity,” wrote Tim Karr of Free Press. “No amount of tinkering with these mechanics can fix that.” Karr went on to note there was “little evidence that any of the money generated through negotiations with Big Tech would go to putting reporters back on local beats where they’re needed most.”

There were many arguments about why the Australian law would devastate the internet, create new and intrusive copyright rules, foster hate speech and disinformation, and entrench the existing business models of big media while not helping the little guy. Instead, these groups proposed taxing the big tech firms and having the government redirect that money to newspapers, which is very similar to how Google and Facebook regularly offer grants to local news outlets.

And what happened? For a time, Google threatened to pull out of Australia, and Facebook actually did pull out. But this bullying of Australia generated anger, not just locally, but globally, as regulators everywhere looked at the power of big tech and got both incensed and afraid that their nations might be blackmailed as well. The law went into effect, Google and Facebook quickly caved, and these two firms began cutting deals with Australian newspapers. None of the scare stories about the new law came true. There were no changes to copyright, no link taxes, no devastation of the internet, and no increase in hate speech. There was no entrenchment of big media business models, and the ACCC continued to move ahead to stop anticompetitive practices in the adtech marketplace.

Big media firms benefitted, but so did small ones, and most of all, so did journalists.

According to Poynter, the main result of this law has not been a link tax, but a flourishing of journalism.

Outlets throughout Australia are hiring new reporters. The Guardian added 50 journalists, bringing their newsroom total up to 150. Journalism professors say their students are getting hired and that there are too many job vacancies to fill.

There are problems with the law, such as a lack of transparency, truculence from Facebook, and a demand from big tech that publishers sign non-disclosure agreements during and after bargaining. The government is reviewing the code, and will make changes. But there’s no way to characterize the code as anything but a stunning success, and the scare stories projected by its opponents did not come true.

Bringing the Australian Law to America

This week or next, the Senate Judiciary Committee is going to be looking at a similar bill, the Journalism Competition and Preservation Act. The JCPA is slightly different from the Australian bargaining code because speech regimes differ between countries. The American version would temporarily suspend narrow applications of antitrust laws for news publishers, letting them band together to bargain with dominant tech platforms that use their content, and imposing an arbitration process for negotiating with big tech firms.

The JCPA mandates certain rules about when publishers can enter co-ops, with no ability to restrict anyone based on viewpoint. It also has a size cap to exempt the biggest newspapers, like the New York Times and Wall Street Journal. Arbitration is baseball-style, similar to Australia. 65% of the payout from these cooperatives would be based on the number of journalists hired by newspapers, and 35% would come from the traffic publishers generated. In some ways, it is similar to agriculture cooperatives, which are bands of farmers exempted from certain antitrust laws so they can collectively bargain with processors.
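
As a rough illustration of that 65/35 split, here is a hypothetical allocation. The co-op pool and the two outlets are invented, and the bill's actual mechanics are more detailed; this only shows how weighting journalists at 65% tilts the money toward outlets that hire.

```python
# Hypothetical illustration of the JCPA's reported payout formula:
# 65% of a co-op's payout allocated by journalists employed, 35% by traffic.
# The pool size and the outlets below are invented for the example.

def outlet_share(journalists, traffic, total_journalists, total_traffic, pool):
    return pool * (0.65 * journalists / total_journalists +
                   0.35 * traffic / total_traffic)

pool = 10_000_000  # hypothetical total negotiated by the co-op, in dollars
outlets = {"Local Daily": (40, 2_000_000), "Niche Weekly": (10, 8_000_000)}
tj = sum(j for j, _ in outlets.values())
tt = sum(t for _, t in outlets.values())
for name, (j, t) in outlets.items():
    print(name, round(outlet_share(j, t, tj, tt, pool)))
# "Local Daily" gets more despite far less traffic, because hiring carries 65% of the weight.
```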

The co-op incentive model would do two important things to newspapers. First, private equity owners, who right now are laying people off and squeezing whatever remains of the customer base until the papers they own die, will have their incentives changed. They will make money not by firing people, but by hiring people; not by killing journalism, but by doing more of it. And second, it would allow people to form media outlets and monetize the traffic. If Alden Capital chooses to kill a newspaper, journalists from that paper can leave, start a local competitor, and make money doing it.

Given all this, you’d think that the law would be a gimme. Yet despite the success of the law in Australia, the bill has kicked up a storm of opposition, and not all of it in bad faith. In a letter signed by a weird mix of groups, the tech lobbyists Chamber of Progress and the Computer & Communications Industry Association joined the left-wing public interest groups Public Knowledge, Common Cause, Free Press, and Consumer Reports to oppose the bill. They argued, echoing the same discredited critiques of the Australian bill, that the JCPA would foster hate speech, impose a link tax, fail to pay journalists, and help conglomerates but not small publishers. It’s a bizarre letter, written as if we don’t have the example of Australia to look at.

Some of the opposition is easy to explain. Obviously, there are a lot of tech lobbyists who dislike it, and groups paid by big tech firms to oppose it. Libertarian Republicans like Jim Jordan are deeply opposed, arguing that the bill would help entrench the left-wing “big media.” It’s impossible to tell where big tech influence stops and libertarian ideas start, but whether good faith or not, that opposition is understandable.

And yet, big tech money and power doesn’t explain it all. In certain parts of the progressive world, there is a genuine ideological opposition to decentralized ad markets and a diverse press. For instance, influential left-wing scholar Victor Pickard regularly critiques the importance of advertising in the American news landscape, arguing that the mid-20th century moment where news gathering was profitable was something of an anomaly. Commercialism, he argues, “degrades journalism.” Pickard, who I like and respect very much, is taking issue with how Tocqueville laid out the basics of the American media landscape in the 1830s. For these left-wing advocates, the “public option is the best model going forward.”

Pickard’s framing resonates widely among groups like Free Press and Public Knowledge, who have made the case for a tax on targeted advertising and a redirect of that money to public interest news organizations. Behind that is a basic distrust for the commercial press. And that’s the reason the Australian success story doesn’t register with large swaths of the media-reform world. For them, it’s not a success. Their basic assumption, like that of Pickard, is that the centralization of bargaining power by Google and the resulting death of newspapers isn’t a problem, but is in fact a solution to what is their long-standing gripe with for-profit ad-funded news.

In other words, baked into the opposition to the JCPA is a preference for large centralized administrative processes. It’s not just the desire for a centralized fund to finance the news. Free Press, for instance, opposes big tech antitrust bills on the grounds that they would not allow Google and Facebook to sufficiently police the internet for hate speech and “disinformation.” They want censorship, and fear a diversity of press funded by advertising, precisely because it fosters speech that is out of their control.

That desire for centralization is not so different from how Google executives see their role, as “organizing the world’s information,” or how Mark Zuckerberg once framed Facebook’s mission, which was to “make the world more open and connected.” Left or right, centralizers share a utopian vision of a world brought to us by our betters, instead of the muck of advertising and the diversity of speech that, while bringing democratic features, also allows racism and whatever crudeness any ordinary person might see fit to print. As FDR’s antitrust chief and later Nuremberg chief prosecutor Robert Jackson once put it, “what the extreme socialist favors, because of his creed, the extreme capitalist favors, because of greed.”

Open ideological debates are generally not common in American politics because there’s an attempt to paint opponents as evil. I do not believe opponents of the JCPA are evil. I have learned a lot from Pickard, and I respect and have worked with many of the people and groups I have highlighted here as opponents. But on a practical level, for anyone who tracks ad markets, what is happening in Australia is perhaps the most important real-world experiment in structuring a healthy news ecosystem. Love it or hate it, you have to reckon with it. And opponents of newspaper co-ops and the JCPA simply haven’t. Hopefully, Congress will.

Matt Stoller is the research director at the American Economic Liberties Project and the author of Goliath: The 100-Year War Between Monopoly Power and Democracy.

This article is republished with permission from BIG, a newsletter on the history and politics of monopoly power. Subscribe here.


A deeper dive into World Wide Wind's colossal, contra-rotating turbines


We interviewed the core team at Norway's World Wide Wind (WWW) to learn more about its floating, tilting, contra-rotating, double turbine design, which it says can unlock unprecedented scale, power and density to radically lower the cost of offshore wind.

To briefly recap our article from August 30, WWW has designed a floating offshore wind turbine unlike any other. Indeed, it's two vertical-axis wind turbines (VAWTs) in one, tuned to rotate in opposite directions. With one turbine attached to the generator's rotor, and the other to the stator, you effectively double your output.

Where conventional large horizontal-axis wind turbines (HAWTs) have to support a large mass of motors and generators in a nacelle at the top of their enormous towers, WWW's design keeps all its heaviest components at the bottom, vastly reducing engineering stresses and materials costs. And where HAWTs need to be anchored right to the sea floor, or mounted on extremely heavy platforms so they won't tip over, WWW can simply put a float partway up its pole, held in place by tethers, and let its own weight balance hold the turbines up, allowing the whole structure to tilt with the wind rather than fighting to stay upright.

This design, says WWW, fundamentally removes the engineering restrictions that are preventing offshore wind turbines from growing larger to reap the benefits that come with scale. Not that today's biggest wind turbines are small, by any means – but WWW says it sees a clear path to gargantuan 400-meter-tall (1,312-ft) machines with 40-megawatt capacities, two and a half times as much as today's biggest turbines can produce, as early as 2029.

What's more, VAWTs are well known to leave considerably less of a turbulent wake behind them than HAWTs. So not only are these things perfectly designed for deep waters, far offshore, they can be placed much closer together than conventional turbines. All of these departures from the status quo, says the company, add up to a projected Levelized Cost of Energy (LCoE) under US$50/MWh, less than half of what cutting-edge HAWT installations are expected to deliver by 2027.
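
For readers who want to see what sits behind a "$/MWh" figure, here is the standard levelized-cost-of-energy calculation in miniature. The capex, opex, lifetime and discount-rate numbers are placeholders chosen for the example, not World Wide Wind's projections.

```python
# Standard levelized-cost-of-energy formula, shown only to make the
# "$/MWh" comparisons above concrete. All inputs are placeholders,
# not World Wide Wind's actual cost data.

def lcoe(capex, annual_opex, annual_mwh, years, discount_rate):
    """Discounted lifetime costs divided by discounted lifetime energy."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / energy

# e.g. a hypothetical 40 MW unit running at a 50% capacity factor:
annual_mwh = 40 * 8760 * 0.5
print(round(lcoe(capex=100e6, annual_opex=3e6, annual_mwh=annual_mwh,
                 years=25, discount_rate=0.07), 2), "$/MWh")
```

Lower capital cost, cheaper maintenance and higher capacity factors all push that ratio down, which is exactly where WWW claims its advantages lie.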

And that's the crux of things. Humanity needs an incomprehensible amount of green energy to power the coming decades of mass electrification, and nothing is going to get built if it's not profitable. Offshore wind is some of the least intrusive, but most expensive energy money can buy. It's a sector desperately in need of a technological overhaul, and if WWW's solution works like it says on the tin, this is massive news and a genuine ray of hope in the race to zero carbon by 2050.

Of course, it's such a radically different design that it demands a closer look. So we reached out to World Wide Wind to dig in further, and the company responded by making its core team available for more than an hour-long video interview. We'll let the team introduce themselves as we launch into a (heavily edited) transcript of the chat below.

Stian Valentin Knutson: I'm the Founder, Chairman of the Board and former CEO. I've been developing products and designs since I was 18. I've sold a few companies, recently I sold four innovations to a big grocery brand owner in Norway, and I've also established a company called Smart Packaging Industries, which holds a worldwide patent on a sustainable packaging design that gets rid of unnecessary air on grocery pallets. On average, we get double the amount of food on a pallet. Last year, I came up with the contra-rotating turbine idea, and made contact with Hans, and began getting the right people on board.

Elsbeth Tronstad: I have a background in the energy sector, in oil and gas at ABB many years ago. I was at SN Power for 12 years, doing hydropower in developing countries. I also have a background as a politician, I've been Deputy Minister several times in the Ministry of Foreign Affairs for the Norwegian government. I'm a communications person and head of PR.

Hans Bernhoff (CTO): I'm a Professor in Engineering at Uppsala University, and a bit of an engineering freak. I also enjoy science, but only when it can be used for something. I did a Ph.D and made a proper scientific career to make my parents happy, but as my hobby I was building sailing yachts, trimarans. I spent many thousands of hours on that, and gained a lot of experience. Now, at Uppsala University, over the last 20 years I've sort of fused these things into a working vertical axis wind turbine. Stian contacted me this fall. I wasn't quite sure if these guys were serious, but I realized he'd put together an excellent group. So I decided to join. I think we have a lot of interesting stuff ahead of us.

Trond Lutdal (CEO): Hans is a fantastic asset to our team. He's number one in Europe when it comes to vertical axis wind, a global expert. So his knowledge and experience is super important to us. I've been heading this team since November last year. I come from a different kind of background – a little bit from energy as well, but more managerial types of positions for private-equity-owned companies. I was an Associate Principal at McKinsey and Company back in the day, I've worked in South Africa, the US and the Nordics. In general I've been leading businesses, and now leading this business, while also being on the board of a couple of private-equity-owned companies. So this is the core team you see today.

Loz: Great. For my part, I've been writing for New Atlas for 15 years or so, as a complete generalist, covering anything that's sparked my interest. But in recent times I'm finding it pretty difficult to get excited about anything that doesn't help deal with decarbonization and climate change. Reading the science news releases on climate every day can be pretty depressing. So right now, I want to be writing about ideas that can help solve this goddamned problem, because without them, we're all screwed.

Trond: I think we share that notion and passion. So 99% of all large-scale industrial wind turbines are horizontal axis, with three propellers. There's been lots of attempts at vertical axis as well. But the novelty here is really two things: one is the counter-rotating concept, having two rotors of the same mass rotating different ways. There are specific advantages to doing that. And then the added perspective is the tilting, floating structure. We feel this represents a game-changer in terms of less cost, as you commented, but also lifetime maintenance, and size, because it's much more stable than conventional turbines.

We don't see how levelized cost can really come down with conventional turbines. They were never meant to be offshore – and certainly not to be floating offshore. A super big turbine with a big nacelle on top, maybe a thousand metric tons? Trying to stabilize that with an incredibly large and heavy and expensive rotor? That doesn't add up, right? The huge commercial turbines are growing in size, they're now 15, 16 megawatts, because taking down costs has really been all about size and scalability.

But there's a limit to how big they can get with that design, and the prices are still high. They're expensive, complex, hard to make, difficult and dangerous to maintain, and it's difficult to see how you can get down to a competitive levelized cost with a conventional design. And there's a debate in Norway about whether floating offshore wind is something we should do at all. The jury's out, because some people don't see how the math adds up. Floating wind currently has a levelized cost around €120-130 per megawatt-hour.

Loz: So why offshore wind, if onshore and solar are inherently cheaper? Is it just because it's out of everyone's way?

Trond: Yes, there's a NIMBY element and some politics and some negative sentiment around onshore wind, whereas offshore's out of sight. But it's also about where the wind is. For Norway, you have to go offshore, and you have to go deep, to 60 m (197 ft) or more, to access the best wind potential.

Hans: Politically, it's almost impossible to build wind power on land in many European countries. And the wind resource offshore is better than on land. You get a larger capacity factor; on land, you might expect a turbine to produce energy as if it was running on full power 30% of the year. Offshore, especially at these fantastic resources off Norway's coast, you can get up to 50%. That's a better source for the grid.
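
That capacity-factor difference translates directly into annual energy. A quick back-of-envelope, with a 15-MW rating chosen purely as an example:

```python
# The capacity-factor point above, in numbers. The 15 MW rating is just an
# example; 30% and 50% are the onshore vs. offshore figures Hans cites.
rating_mw = 15
hours_per_year = 8760
for capacity_factor in (0.30, 0.50):
    print(f"{capacity_factor:.0%}: {rating_mw * hours_per_year * capacity_factor:,.0f} MWh/year")
# 30%: 39,420 MWh/year   vs   50%: 65,700 MWh/year
```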

Trond: So vertical axis turbines have a few intrinsic advantages, which I don't think are controversial. It's a simple design. There's no nacelle on top, and no cooling system. We have a generator towards the bottom, so a lower center of gravity. It's omnidirectional, you don't need a system to turn your turbine to face into the wind. It's better in turbulence, and it produces less of a wake downstream of the turbine. So you can increase your density and have more turbines for the same area, which is super important when it comes to area scarcity and offshore. There's less vibrations, for several reasons. Less forces in general on the structure. Environmental impact is better; it's less of a threat to birds and insects. It doesn't produce the same noise, because it doesn't have blade tips traveling at 300 km/h or so.

And then there's the advantages of the counter-rotating concept. One is that you're neutralizing the torque on the structure; that's important for floating, because otherwise there'll be a twisting force toward the mooring system. So that's important. Secondly, by merging two turbines into one, we're doubling the size and the scale immediately. Instead of having a static stator, you're counter-rotating it, and doubling the relative rotational speed between the two key parts of the generator. You can think of that as a way to double your power generation, or as a way to reduce your generator cost by half. So it's lower cost, it's much more scalable, and any maintenance happens at the bottom and not hundreds of feet up in the air.
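
The counter-rotation argument boils down to one relationship: a generator's induced voltage, and hence its power at a given torque, scales with the relative speed between rotor and stator. A toy sketch of that point, not a model of WWW's actual machine:

```python
# Why counter-rotation helps, in idealized form: electrical output at a given
# torque scales with the *relative* angular speed between rotor and stator.
# Numbers are illustrative only.

def relative_speed(rotor_rpm, stator_rpm):
    # stator_rpm is negative when it spins the opposite way
    return rotor_rpm - stator_rpm

conventional = relative_speed(10, 0)     # fixed stator
contra       = relative_speed(10, -10)   # counter-rotating "stator"
print(conventional, contra)              # 10 vs 20 -> relative speed doubles
# At fixed torque, power ~ torque * relative speed, so the same machine can
# deliver roughly twice the power, or be built smaller and cheaper for the
# same output.
```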

So, Hans, why don't you talk a little more about your experience?

Hans: Yes, I've had a lot of fun! First, that was building boats, and 20 years ago I switched to building turbines. The one you see rotating here is my big vertical axis turbine from about 10 years ago, where we solved a lot of the issues. This was the first modern wood tower in the world, it was later followed by others. It has a very lightweight hub at the center of the turbine. This was 200 kW, so it's not commercial scale, but it's a model of a 3-MW turbine so we could test all the technologies. We have the generator in the foundation, and a very long shaft, 40 m (131 ft). That's a higher tower than the corresponding HAWT would have. The Vestas 27, for example, had a 30-m (98-ft) tower.

So we demonstrated that you can build a higher tower and successfully demonstrated the technology. And what happened? We had a main customer, and a main owner, and both changed their management at the same time and they weren't interested in this. More or less concurrently, thanks to the Lehman Brothers collapse in 2008, all the investment capital in clean tech disappeared. So we didn't have the economics to carry on this project.

Loz: My understanding is that these vertical technologies typically make less energy for a given wind strength than a similar sized HAWT.

Hans: In Swedish, we say you can't compare apples with pears! The swept area for a vertical axis is a rectangle. For horizontal, it's a circle. And then you calculate Cp, the capture efficiency: how much of the kinetic energy flowing through this cross-section you can capture. Historically, when the HAWT technology started off 30 years ago, that was around 40%. Now, after tens of thousands of hours of optimization, they're averaging roughly 50%. Under ideal conditions, the vertical ones are approaching 40%. So that's less – but you know, if you draw a square the height and width of a circle, it's a bigger area, and if you count the energy it'll be more or less the same.
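
That apples-and-pears point can be checked with the standard wind-power formula P = ½·ρ·A·v³·Cp. The 100-meter dimension below is illustrative; the 50% and 40% capture efficiencies are the figures quoted above.

```python
import math

# Back-of-envelope version of the apples-and-pears comparison, using
# P = 0.5 * rho * A * v^3 * Cp. The 100 m dimension is illustrative.
rho, v = 1.225, 10.0                 # air density (kg/m^3), wind speed (m/s)
d = 100.0                            # rotor diameter / VAWT width and height (m)

hawt_area = math.pi * d**2 / 4       # circle swept by a horizontal-axis rotor
vawt_area = d * d                    # rectangle swept by a vertical-axis rotor

hawt_power = 0.5 * rho * hawt_area * v**3 * 0.50
vawt_power = 0.5 * rho * vawt_area * v**3 * 0.40
print(f"HAWT: {hawt_power/1e6:.2f} MW, VAWT: {vawt_power/1e6:.2f} MW")
# The lower Cp is roughly offset by the larger swept rectangle.
```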

And then there's the efficiency of the drivetrain. Horizontal axis turbines normally have drive train efficiencies of 90% or less. We demonstrated a drive train with 98% efficiency. That's because we can put the generator at the bottom, where we can optimize it for cost and performance, rather than weight and volume. Now if you talk to a horizontal guy, they'll never admit that they optimize for weight and volume, but this is a fact.

Trond: We have an ambition and a belief that it's possible to reach around the same efficiency as a HAWT.

Loz: So a layman might look at a vertical-axis turbine and think, well, the blade's going with the wind on one side, but against the wind on the other side. How can that work?

Hans: This is the aerodynamics – some people have said that vertical axis turbines were not developed because the aerodynamics is so difficult to understand. You need to think of a boat sailing in a circle. The boat will catch speed when it's upwind of the circuit, but it'll also catch speed when it's downwind of the circuit. The only time it will not catch speed is when it's going directly upwind, or directly downwind. As a vertical axis blade goes around, it more or less gets torque on 300 degrees of the 360. There's a very small zone where it doesn't contribute to torque.
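
The sailing analogy has a compact textbook form: for a blade at azimuth θ and tip-speed ratio λ, the quasi-steady angle of attack is α = arctan(sin θ / (λ + cos θ)), which vanishes only where the blade heads directly up- or downwind. A simplified sketch of that geometry (a generic Darrieus approximation, not WWW's aerodynamic model):

```python
import math

# Quasi-steady angle of attack for a straight-bladed VAWT:
#   alpha = atan( sin(theta) / (lam + cos(theta)) )
# It is nonzero over most of the revolution and drops to ~0 only near
# theta = 0 and 180 degrees, where the blade moves directly up- or downwind,
# matching the "torque over roughly 300 of 360 degrees" point above.
lam = 3.0  # tip-speed ratio (blade speed / wind speed), illustrative
for theta_deg in range(0, 360, 45):
    theta = math.radians(theta_deg)
    alpha = math.degrees(math.atan2(math.sin(theta), lam + math.cos(theta)))
    print(f"azimuth {theta_deg:3d} deg -> angle of attack {alpha:6.1f} deg")
```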

Loz: Do you need variable blade pitch to do that?

Hans: That has been tested, but it has two drawbacks. You need heavy gear to pitch them, and you have to pitch them all the time, through every rotation. Imagine how many times that would be over a 20-year lifetime! It's not robust, it'll break down. There's a German expression: why make it simple, when you so beautifully can make it complicated? It falls into that category. On the 200-kW turbine, I demonstrated that you can control the rotation speed electrically, by controlling the generator torque on a microsecond basis.
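
The torque-control idea can be sketched as a one-degree-of-freedom toy: hold rotor speed by trimming generator torque rather than pitching blades. All numbers below are invented and the controller is deliberately naive; it only illustrates the principle, not the actual control system.

```python
# Toy speed regulation by generator torque (illustrative values only).
J = 5.0e6          # rotor inertia, kg*m^2 (hypothetical)
omega = 1.0        # current speed, rad/s
omega_target = 1.2 # desired speed, rad/s
kp = 2.0e6         # proportional gain (hypothetical)
dt = 0.01          # time step, s

for step in range(1000):
    aero_torque = 1.5e6                      # torque from the wind (held constant here)
    # trim generator torque around the aerodynamic torque to steer the speed
    gen_torque = max(0.0, kp * (omega - omega_target) + aero_torque)
    omega += (aero_torque - gen_torque) / J * dt

print(round(omega, 3))   # settles near the 1.2 rad/s target
```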

Trond: So we have this manifestation of the technology that can be introduced. We speak to floating operations, and they like it, because with the lower center of gravity, they can have a cheaper and smaller tilter, which is obviously good for them as they scratch their heads in terms of economics. Offshore wind is not solved. Conventional towers kind of work against the forces when they stand erect in this way, with only a minimal amount of sway in strong wind. So why not utilize the wind, go with the forces instead of trying to resist them?

Hans has been thinking about floating wind for some time, and probably had this tilting idea in his drawer when we spoke last fall. Then Stian had the counter-rotating concept, and that's where it matched up. An integrated turbine where the generator is part of the solution, not part of the problem.

Hans: Tilting, floating structures that utilize the wind have been developed over many thousands of years. That's what keeled sailing boats do. It works excellently. That was my inspiration. There was a Swedish company that saw our wood tower and started talking about building a floating turbine. They took the bulk of my concept, degraded the power takeoff on the generator, not very innovative, but they tried to make a floating turbine, and discovered that with a single turbine you get a huge torque on the foundation. So when I met Stian, with his idea of the counter-rotating configuration, I realized, wow, this is the way to do it, adding in the sailboat approach of tilting.

Loz: And so the key advantage is really the fact that you don't need to hold it up?

Hans: Well, the guys that build racing boats, they use fancy materials and carbon fiber, because the thing is to have a relatively light mast. That's what we're doing: a lightweight tower. There's no 700-ton generator at the top, all that generator weight is right down the bottom. It's our keel. It helps to keep the thing upright.

The second important thing is, we get two turbines. Many have tried to do this, there are many European projects where they have a platform and they try to put two turbines on it, or they have a single tower, and they put two arms out to the sides and run two turbines. Here, you get the two turbines almost for free. Two turbines can use the same tower, and the same generator. So we get rid of a lot of the cost.

So you look at the floating part of our towers – the hull and the keel, if you like, below the water. For our 40-MW system, this is almost the size and the amount of materials you'd need for a horizontal axis system that's maybe four times smaller. That tells me that the floater and the foundation is four times as cost-effective, where using two tilting turbines on a single tower is maybe two times as cost-effective.

And then there's all the systems ... For a HAWT, there are hundreds of moving parts. Each wing has to be turned, that takes gears and motors. The hub rotates, and the nacelle rotates. Our system essentially has two large moving parts, maybe another on the very lower end where it attaches to the mooring system. It's an enormous increase in simplicity. And this technology loves to scale, it's the old trick, reduce cost by scaling up.

Trond: Let's talk about the size. Whether you want to call it a moonshot or not, it's obviously difficult to fathom the size of these things. Our 40-MW tower is taller than the Eiffel tower.

Hans: It's HUGE! It's ridiculously large. But if I picked you up in a time machine from the early 90s and showed you one of the modern 15-MW turbines, you'd say that was ridiculously huge too!

Trond: I mean, when it comes to long axis and things like that, this has been done in other industries. It's simple, it's known technology. The blades will be more than 150 m (492 ft), but there's really nothing that's science fiction when it comes to this. But obviously, it's a big construction.

But I think it's a good idea, and it's doable, technically, we're quite confident. So now it's all about execution. And that's not easy. But as you referred to in your article, we need to do the engineering in real life. You need to test this and grow in terms of prototypes, scale those up and test different things along the way. We've already built the first prototype, and we're raising money to get to the second and third ones.

We're also talking to big industrial players like Hydro, and we've had dialogue with Equinor and Aker Solutions. We're building a consortium of players to realize this. We're not going to do it ourselves, but we've got to be the engine and the thrust behind making this happen, and obviously owning the patents and the core of the technology.

So our first prototype, it's just 2 m (6.5 ft) high, to test the counter-rotating generator.

Hans: Yeah, it's 400 W, just to get some first initial experience on a counter-rotating generator. So it's a small step for mankind, but a giant leap for us! And it'll have some applications. We have customers who already want to buy these for special applications, when they're off-grid and desperately need power, and there's really no alternative to wind. So it's meant to be a very robust system.

Loz: And this is the kind of device you're looking to spin out to generate some early revenue along the way?

Trond: Yes. There are many spinoff opportunities and different application areas that we haven't started working on yet, but we see they'll be part of our journey. So it's not only the 10-year investment horizon for a giant floating offshore structure. These could be separate companies in our structure. We're thinking about charging stations combining solar and wind with batteries. We're working with the guys that are trying to combine wind, solar and wave energy, and they like vertical axis because of the weight advantages. We talked to Equinor about this specifically, and just showing how we can run a much higher density of turbines. But the huge game-changer is probably with floating offshore, because that's just not solved today.

Hans: On the topic of density, this is all about dissipating wake. What's the wake? Well, behind the turbine, there's no air speed, so there's no wind. So you can do a few tricks to fix this. A few years ago we published research on one: for a vertical axis turbine you have the wings, but also the struts. And you can pitch those struts to induce a little bit of downward drop in the airflow. That helps the wake behind the turbine go downward and dissipate more quickly.

With a tilted VAWT, with a conical sweep like ours, there's another advantage in that this also develops a little bit of lift, which helps give stability to the structure. But the main point is that you're dragging the air down, which dissipates the wake.

Trond: I know you had a question in your original article about how much science there was behind this idea. Well (plonks stack down on desk) here's a number of publications.

Hans: We've examined a lot of Ph.D students in this area over the last 20 years, so we've had the opportunity to think it through, work through the detail, simulate them and understand them. And that's the fun of this game! I love trying to solve very hard problems, because then it's challenging, then it's fun!

Trond: We look at what Tesla's done to push electric cars forward. And this is the kind of pivotal moment for offshore wind, where there needs to be a shift.

Stian: Tesla and SpaceX, Elon Musk is very inspiring to me.

Trond: We were kind of joking that we should have him on board, but we're also kind of serious. This is the gravity of the situation. The necessity of pursuing this opportunity. This is something the world needs, and we need to pursue it. We need to make it happen.

Loz: What are you going to make them out of?

Hans: Materials? Yeah, we're looking at that. It has to be lightweight materials. And of course, since I was the inventor of the modern wood tower, wood will be an essential part of it. I'm thinking of using wood in the blades.

Trond: We're discussing with Hydro how to incorporate aluminum. Based on the structure, there could be an external coating on the wings. We know the rough characteristics of what we need, and there are different options we're working on.

Hans: Today's turbines tend to use carbon fiber in the blades. Well, compared to carbon fiber, aluminum's very low cost. Another interesting part is in the generator. Most HAWTs use neodymium magnets, which is problematic for two reasons. One is political; China seems to have control of the world market. The second is environmental: neodymium mining, shall we say, isn't done the way we'd like to see it done.

Ferrite, on the other hand, that's a by-product of steel production. It's much lower cost than neodymium, but it weighs more. If you use ferrite in the rotor of your generator, it weighs 10 times as much. But when you have it down in the keel, you want it to weigh 10 times as much! And we can understand them, and fully simulate them, that's part of the advanced physics simulations I've had the opportunity to do over the last year.

And for the tower, well, I made a composite wood mast for my boat all the way back in the 90s. This is an old game, nothing new.

Loz: You're talking about making the whole tower structure out of wood?

Trond: That's one option. Nothing is settled yet, of course.

Loz: You'll need a pretty bloody big tree. (laughs)

Hans: Look into glue-laminated beams, which were developed in Scandinavia like 50 years ago. They use it to build the big football arenas and things like that. You cut down a lot of trees, chop them up into planks and sort them depending on strength and elasticity. You mix them together and glue-laminate them. You can easily have a beam that's 1 m (3.3 ft) high, 20 m (66 ft) long. We used this idea to build the first tower, 10 years ago, it's the most cost-effective way of using wood in big structures.

Loz: How about bearings? How are those going to stand up to such massive structures turning and tilting on top of them?

Hans: Maybe we didn't explain this part, but there's no big bearing at the bottom. The tower is floating, so you don't need a big bearing to allow the tower to rotate. There are some bearings at the top, to take up the torque between the upper turbine and the tower, but at the bottom of the shaft, it's supported and rolling on small wheels, the same as we did 10 years ago. There are some bearings in the generator, which of course will be as small as we can make them. But those bearings will be relatively large for a 40-MW generator!

Loz: So for the push towards commercialization, it's a question of money, speed, manpower, all that sort of stuff. Where are you guys at?

Trond: Early, early phases. We've raised about 10 million NOK (~US$1 million) to fund what we've done so far. We're raising US$5 million toward the next couple of prototypes, then we'll need to raise US$30-40 million for the next steps. So we're in the process of raising cash and building the organization, but also developing partnerships with companies that are fired up about what we're doing.

Loz: OK, well I think this has set my head and my body contra-rotating at sufficient speed. Thank you so much for your time and best of luck with the next steps. We're all counting on you!

Source: World Wide Wind


Restoring Hearing With Beams of Light


There’s a popular misconception that cochlear implants restore natural hearing. In fact, these marvels of engineering give people a new kind of “electric hearing” that they must learn how to use.

Natural hearing results from vibrations hitting tiny structures called hair cells within the cochlea in the inner ear. A cochlear implant bypasses the damaged or dysfunctional parts of the ear and uses electrodes to directly stimulate the cochlear nerve, which sends signals to the brain. When my hearing-impaired patients have their cochlear implants turned on for the first time, they often report that voices sound flat and robotic and that background noises blur together and drown out voices. Although users can have many sessions with technicians to “tune” and adjust their implants’ settings to make sounds more pleasant and helpful, there’s a limit to what can be achieved with today’s technology.

I have been an otolaryngologist for more than two decades. My patients tell me they want more natural sound, more enjoyment of music, and most of all, better comprehension of speech, particularly in settings with background noise—the so-called cocktail party problem. For 15 years, my team at the University of Göttingen, in Germany, has been collaborating with colleagues at the University of Freiburg and beyond to reinvent the cochlear implant in a strikingly counterintuitive way: using light.

We recognize that today’s cochlear implants run up against hard limits of engineering and human physiology. So we’re developing a new kind of cochlear implant that uses light emitters and genetically altered cells that respond to light. By using precise beams of light instead of electrical current to stimulate the cochlear nerve, we expect our optical cochlear implants to better replicate the full spectral nature of sounds and better mimic natural hearing. We aim to start clinical trials in 2026 and, if all goes well, we could get regulatory approval for our device at the beginning of the next decade. Then, people all over the world could begin to hear the light.

These 3D microscopic images of mouse ear anatomy show optical implants [dotted lines] twisting through the intricate structure of a normal cochlea, which contains hair cells; in deafness, these cells are lost or damaged. At left, the hair cells [light blue spiral] connect to the cochlear nerve cells [blue filaments and dots]. In the middle and right images, the bony housing of the mouse cochlea surrounds this delicate arrangement. Image: Daniel Keppeler

How cochlear implants work

Some 466 million people worldwide suffer from disabling hearing loss that requires intervention, according to the World Health Organization. Hearing loss mainly results from damage to the cochlea caused by disease, noise, or age and, so far, there is no cure. Hearing can be partially restored by hearing aids, which essentially provide an amplified version of the sound to the remaining sensory hair cells of the cochlea. Profoundly hearing-impaired people benefit more from cochlear implants, which, as mentioned above, skip over dysfunctional or lost hair cells and directly stimulate the cochlear, or auditory, nerve.

In the 2030s, people all over the world could begin to hear the light.

Today’s cochlear implants are the most successful neuroprosthetic to date. The first device was approved by the U.S. Food and Drug Administration in the 1980s, and nearly 737,000 devices had been implanted globally by 2019. Yet they make limited use of the neurons available for sound encoding in the cochlea. To understand why, you first need to understand how natural hearing works.

In a functioning human ear, sound waves are channeled down the ear canal and set the ear drum in motion, which in turn vibrates tiny bones in the middle ear. Those bones transfer the vibrations to the inner ear’s cochlea, a snail-shaped structure about the size of a pea. Inside the fluid-filled cochlea, a membrane ripples in response to sound vibrations, and those ripples move bundles of sensory hair cells that project from the surface of that membrane. These movements trigger the hair cells to release neurotransmitters that cause an electrical signal in the neurons of the cochlear nerve. All these electrical signals encode the sound, and the signal travels up the nerve to the brain. Regardless of which sound frequency they encode, the cochlear neurons represent sound intensity by the rate and timing of their electrical signals: The firing rate can reach a few hundred hertz, and the timing can achieve submillisecond precision.

Hair cells in different parts of the cochlea respond to different frequencies of sound, with those at the base of the spiral-shaped cochlea detecting high-pitched sounds of up to about 20 kilohertz, and those at the top of the spiral detecting low-pitched sounds down to about 20 Hz. This frequency map of the cochlea is also available at the level of the neurons, which can be thought of as a spiraling array of receivers. Cochlear implants capitalize on this structure, stimulating neurons in the base of the cochlea to create the perception of a high pitch, and so on.
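
To make this frequency map concrete, here is a small Python sketch using the Greenwood place-frequency approximation for the human cochlea; the number of stimulation sites and their even spacing are illustrative assumptions, not a description of any particular device.

```python
import numpy as np

def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Greenwood place-frequency approximation for the human cochlea.
    x is the fractional distance from apex (0.0) to base (1.0);
    the return value is the characteristic frequency in hertz."""
    return A * (10 ** (a * x) - k)

# Map stimulation sites spaced evenly along the cochlea to their
# approximate characteristic frequencies (apex = low, base = high).
n_sites = 16  # illustrative; today's implants use 12 to 24 electrodes
for x in np.linspace(0.0, 1.0, n_sites):
    print(f"site at x = {x:.2f}: ~{greenwood_frequency(x):,.0f} Hz")
# Runs from roughly 20 Hz near the apex to roughly 20,000 Hz at the base.
```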

A commercial cochlear implant today has a microphone, processor, and transmitter that are worn on the head, as well as a receiver and electrodes that are implanted. It typically has between 12 and 24 electrodes that are inserted into the cochlea to directly stimulate the nerve at different points. But the saline fluid within the cochlea is conductive, so the current from each electrode spreads out and causes broad activation of neurons across the frequency map of the cochlea. Because the frequency selectivity of electrical stimulation is limited, the quality of artificial hearing is limited, too. The natural process of hearing, in which hair cells trigger precise points on the cochlear nerve, can be thought of as playing the piano with your fingers; cochlear implants are more like playing it with your fists. Even worse, this large stimulation overlap limits the way we can stimulate the auditory nerve, as it forces us to activate only one electrode at a time.

How optogenetics works

The idea for a better way began back in 2005, when I started hearing about a new technique being pioneered in neuroscience called optogenetics. German researchers were among the first to discover light-sensitive proteins in algae that regulated the flow of ions across a cellular membrane. Then, other research groups began experimenting with taking the genes that coded for such proteins and using a harmless viral vector to insert them into neurons. The upshot was that shining a light on these genetically altered neurons opened those light-gated ion channels, depolarizing the cells and triggering them to fire, or activate, allowing researchers to directly control living animals’ brains and behaviors. Since then, optogenetics has become a significant tool in neuroscience research, and clinicians are experimenting with medical applications including vision restoration and cardiac pacing.

I’ve long been interested in how sound is encoded and how this coding goes wrong in hearing impairment. It occurred to me that stimulating the cochlear nerve with light instead of electricity could provide much more precise control, because light can be tightly focused even in the cochlea’s saline environment.

If we used optogenetics to make cochlear nerve cells light sensitive, we could then precisely hit these targets with beams of low-energy light to produce much finer auditory sensations than with the electrical implant. We could theoretically have more than five times as many targets spaced throughout the cochlea, perhaps as many as 64 or 128. Sound stimuli could be electronically split up into many more discrete frequency bands, giving users a much richer experience of sound. This general idea had been taken up earlier by Claus-Peter Richter from Northwestern University, who proposed directly stimulating the auditory nerve with high-energy infrared light, though that concept wasn’t confirmed by other laboratories.

Our idea was exciting, but my collaborators and I saw a host of challenges. We were proposing a new type of implanted medical device that would be paired with a new type of gene therapy, both of which must meet the highest safety standards. We’d need to determine the best light source to use in the optogenetic system and how to transmit it to the proper spots in the cochlea. We had to find the right light-sensitive protein to use in the cochlear nerve cells, and we had to figure out how best to deliver the genes that code for those proteins to the right parts of the cochlea.

But we’ve made great progress over the years. In 2015, the European Research Council gave us a vote of confidence when it funded our “OptoHear” project, and in 2019, we spun off a company called OptoGenTech to work toward commercializing our device.

Channelrhodopsins, micro-LEDs, and fiber optics

Our early proof-of-concept experiments in mice explored both the biology and technology at play in our mission. Finding the right light-sensitive protein, or channelrhodopsin, turned out to be a long process. Many early efforts in optogenetics used channelrhodopsin-2 (ChR2), which opens an ion channel in response to blue light. We used it in a proof-of-concept experiment in mice that demonstrated that optogenetic stimulation of the auditory pathway provided better frequency selectivity than electrical stimulation did.

In our continued search for the best channelrhodopsin for our purpose, we tried a ChR2 variant called calcium translocating channelrhodopsin (CatCh) from the Max Planck Institute of Biophysics lab of Ernst Bamberg, one of the world pioneers of optogenetics. We delivered CatCh to the cochlear neurons of Mongolian gerbils using a harmless virus as a vector. We next trained the gerbils to respond to an auditory stimulus, teaching them to avoid a certain area when they heard a tone. Then we deafened the gerbils by applying a drug that kills hair cells and inserted a tiny optical cochlear implant to stimulate the light-sensitized cochlear neurons. The deaf animals responded to this light stimulation just as they had to the auditory stimulus.

However, the use of CatCh has two problems: First, it requires blue light, which is associated with phototoxicity. When light, particularly high-energy blue light, shines directly on cells that are typically in the dark of the body’s interior, these cells can be damaged and eventually die off. The other problem with CatCh is that it’s slow to reset. At body temperature, once CatCh is activated by light, it takes about a dozen milliseconds to close the channel and be ready for the next activation. Such slow kinetics do not support the precise timing of neuron activation necessary to encode sound, which can require more than a hundred spikes per second. Many people said the kinetics of channelrhodopsins made our quest impossible—that even if we gained spectral resolution, we’d lose temporal resolution. But we took those doubts as a strong motivation to look for faster channelrhodopsins, and ones that respond to red light.

We were excited when a leader in optogenetics, Edward Boyden at MIT, discovered a faster-acting channelrhodopsin that his team called Chronos. Although it still required blue light for activation, Chronos was the fastest channelrhodopsin to date, taking about 3.6 milliseconds to close at room temperature. Even better, we found that it closed within about 1 ms at the warmer temperature of the body. However, it took some extra tricks to get Chronos working in the cochlea: We had to use powerful viral vectors and certain genetic sequences to improve the delivery of Chronos protein to the cell membrane of the cochlear neurons. With those tricks, both single neurons and the neural population responded robustly and with good temporal precision to optical stimulation at higher rates of up to about 250 Hz. So Chronos enabled us to elicit near-natural rates of neural firing, suggesting that we could have both frequency and time resolution. But we still needed to find an ultrafast channelrhodopsin that operated with longer wavelength light.
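
As a rough way to see why these kinetics matter, the back-of-the-envelope sketch below treats a channel's closing time as the ceiling on how many separate light pulses per second a neuron could follow. This ignores refractoriness, pulse duration, and other real limits, so the numbers are only order-of-magnitude bounds, not measured performance.

```python
# Crude upper bound on pulse-following rate: assume the channel must
# close before the next light pulse can register as a separate event.
# (Illustrative only; real limits also depend on neuronal refractoriness
# and on how long each light pulse lasts.)
closing_times_ms = {
    "CatCh, body temperature": 12.0,
    "Chronos, room temperature": 3.6,
    "Chronos, body temperature": 1.0,
}
for name, tau_ms in closing_times_ms.items():
    print(f"{name}: ~{1000.0 / tau_ms:.0f} pulses per second, at most")
```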

We teamed up with Bamberg to take on the challenge. The collaboration targeted Chrimson, a channelrhodopsin first described by Boyden that’s best stimulated by orange light. The first results of our engineering experiments with Chrimson were fast Chrimson (f-Chrimson) and very fast Chrimson (vf-Chrimson). We were pleased to discover that f-Chrimson enables cochlear neurons to respond to red light reliably up to stimulation rates of approximately 200 Hz. Vf-Chrimson is even faster but is less well expressed in the cells than f-Chrimson is; so far, vf-Chrimson has not shown a measurable advantage over f-Chrimson when it comes to high-frequency stimulation of cochlear neurons.

This flexible micro-LED array, fabricated at the University of Freiburg, is wrapped around a glass rod that’s 1 millimeter in diameter. The array is shown with its 144 diodes turned off [left] and operating at 1 milliamp [right]. Credit: University of Freiburg/Frontiers

We’ve also been exploring our options for the implanted light source that will trigger the optogenetic cells. The implant must be small enough to fit into the limited space of the cochlea, stiff enough for surgical insertion, yet flexible enough to gently follow the cochlea’s curvature. Its housing must be biocompatible, transparent, and robust enough to last for decades. My collaborators Ulrich Schwarz and Patrick Ruther, then at the University of Freiburg, started things off by developing the first micro-light-emitting diodes (micro-LEDs) for optical cochlear implants.

We found micro-LEDs useful because they’re a very mature commercial technology with good power efficiency. We conducted several experiments with microfabricated thin-film micro-LEDs and demonstrated that we could optogenetically stimulate the cochlear nerve in our targeted frequency ranges. But micro-LEDs have drawbacks. For one thing, it’s difficult to establish a flexible, transparent, and durable hermetic seal around the implanted micro-LEDs. Also, micro-LEDs with the highest efficiency emit blue light, which brings us back to the phototoxicity problem. That’s why we’re also looking at another way forward.

Instead of getting the semiconductor emitter itself into the cochlea, the alternative approach puts the light source, such as a laser diode, farther away in a hermetically sealed titanium housing. Optical fibers then bring the light into the cochlea and to the light-sensitive neurons. The optical fibers must be biocompatible, durable, and flexible enough to wind through the cochlea, which may be challenging with typical glass fibers. There’s interesting ongoing research in flexible polymer fibers, which might have better mechanical characteristics, but so far, they haven’t matched glass in efficiency of light propagation. The fiber-optic approach could have efficiency drawbacks, because we’d lose some light when it goes from the laser diode to the fiber, when it travels down the fiber, and when it goes from the fiber to the cochlea. But the approach seems promising, as it ensures that the optoelectronic components could be safely sealed up and would likely make for an easy insertion of the flexible waveguide array.

Another design possibility for optical cochlear implants is to use laser diodes as a light source and pair them with optical fibers made of a flexible polymer. The laser diode could be safely encapsulated outside the cochlea, which would reduce concerns about heat, while polymer waveguide arrays [left and right images] would curl into the cochlea to deliver the light to the cells. Credit: OptoGenTech

The road to clinical trials

As we consider assembling these components into a commercial medical device, we first look for parts of existing cochlear implants that we can adopt. The audio processors that work with today’s cochlear implants can be adapted to our purpose; we’ll just need to split up the signal into more channels with smaller frequency ranges. The external transmitter and implanted receiver also could be similar to existing technologies, which will make our regulatory pathway that much easier. But the truly novel parts of our system—the optical stimulator and the gene therapy to deliver the channelrhodopsins to the cochlea—will require a good amount of scrutiny.
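
To illustrate what splitting the signal into more channels means in practice, here is a minimal Python sketch of a log-spaced filter bank that extracts one envelope per stimulation channel. The channel count, band edges, and smoothing window are illustrative choices, not the coding strategy of our processor or of any commercial device.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_into_channels(audio, fs, n_channels=64, f_lo=100.0, f_hi=7000.0):
    """Split a mono signal into log-spaced frequency channels and return
    one smoothed envelope per channel (a crude, illustrative strategy)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    win = max(1, int(0.01 * fs))  # ~10 ms smoothing window
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, audio)
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        envelopes.append(env)
    return np.stack(envelopes)  # shape: (n_channels, n_samples)

# Example: one second of a synthetic two-tone signal sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
channels = split_into_channels(audio, fs)
print(channels.shape)  # (64, 16000)
```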

Cochlear implant surgery is quite mature and typically takes only a couple of hours at most. To keep things simple, we want to keep our procedure as close as possible to existing surgeries. But the key part of the surgery will be quite different: Instead of inserting electrodes into the cochlea, surgeons will first administer viral vectors to deliver the genes for the channelrhodopsin to the cochlear nerve cells, and then implant the light emitter into the cochlea.

Since optogenetic therapies are just beginning to be tested in clinical trials, there’s still some uncertainty about how best to make the technique work in humans. We’re still thinking about how to get the viral vector to deliver the necessary genes to the correct neurons in the cochlea. The viral vector we’ve used in experiments thus far, an adeno-associated virus, is a harmless virus that has already been approved for use in several gene therapies, and we’re using some genetic tricks and local administration to target cochlear neurons specifically. We’ve already begun gathering data about the stability of the optogenetically altered cells and whether they’ll need repeated injections of the channelrhodopsin genes to stay responsive to light.

Our roadmap to clinical trials is very ambitious. We’re working now to finalize and freeze the design of the device, and we have ongoing preclinical studies in animals to check for phototoxicity and prove the efficacy of the basic idea. We aim to begin our first-in-human study in 2026, in which we’ll find the safest dose for the gene therapy. We hope to launch a large phase 3 clinical trial in 2028 to collect data that we’ll use in submitting the device for regulatory approval, which we could win in the early 2030s.

We foresee a future in which beams of light can bring rich soundscapes to people with profound hearing loss or deafness. We hope that the optical cochlear implant will enable them to pick out voices in a busy meeting, appreciate the subtleties of their favorite songs, and take in the full spectrum of sound—from trilling birdsongs to booming bass notes. We think this technology has the potential to illuminate their auditory worlds.

Forget olive oil. This new cooking oil is made using fermentation

The world runs on vegetable oil. It’s the third-most-consumed food globally after rice and wheat. It’s in your morning croissant and your oat milk, your salad dressing, your afternoon snack bar, and your midnight cookie.

Our obsession with vegetable oil is so big that we use more land—around 20% to 30% of all the world’s agricultural space—for vegetable oil crops than for fruits, vegetables, legumes, and nuts combined. All of this leads to devastating deforestation, biodiversity loss, and climate change. But what if we could grow cooking oil in a lab?

Launching today, Zero Acre’s first product is a cooking oil made by fermentation: high in healthy fats and low in bad fats, its Cultured Oil is produced using 85% less land than canola oil, emits 86% less CO2 than soybean oil, and requires 99% less water than olive oil. At $29.99, it’s significantly more expensive than its vegetable counterpart, but replacing just 5% of vegetable oils used in the U.S. with so-called cultured oil, the company claims, would free up 3.1 million acres of land every year.

Vegetable oils are bad for the environment, but they’ve also been linked with obesity, heart disease, cancer, and other diseases. That’s why Jeff Nobbs, cofounder and CEO of Zero Acre, has been trying to take them out of the food system for years—first with a keto-friendly restaurant called Kitava in San Francisco, then with nutrition-tracking software. Now his company is looking to make cooking oil by fermenting microbes rather than harvesting crops.

Conventionally, vegetable oil is made by crushing parts of a vegetable or seed (like sunflower seeds or olives) and extracting the oil. “Cultured oil,” on the other hand, is made by fermentation.

So, let’s back up a little. Fermentation involves a naturally occurring chemical reaction between two main groups of ingredients: microorganisms and natural sugars. Microorganisms include bacteria, microalgae, yeast, and other fungi; natural sugars can be found in a variety of products, from wheat to milk to grapes.

To make wine, for example, winemakers add yeast to grape juice. The yeast then converts, or ferments, the natural sugars of the grapes into ethanol, and you have yourself a crisp glass of chardonnay. But you can thank fermentation for an abundance of other foods like bread, cheese, yogurt, pickles, and even chocolate.

When it comes to cooking oil, the process is similar. Nobbs won’t disclose the exact kind of microorganism being used to produce Zero Acre’s Cultured Oil, but he says the company works with both non-GMO yeast and microalgae. “We focus on cultures that naturally produce healthy fats, and yeast and microalgae do that efficiently,” he says.

The process starts with a proprietary culture made up of food-producing microorganisms (yeast or microalgae) that is fed sugars from plants like sugar beet and sugarcane. (The company doesn’t grow these directly, but both are part of its supply chain.)

Over the course of a few days, the microorganisms convert, or ferment, the natural plant sugars into oils or fats. The resulting mixture is then pressed to release the oil, which is separated and filtered, and cultured oil is born. (Nobbs describes the taste as “lightly buttery,” though you can taste it only if you have it straight up with a spoon.)

Nobbs says the entire process takes less than a week, compared to soybean oil (the most widely consumed oil in the U.S.), which requires a six-month period just for the seeds to mature. His company’s Cultured Oil also requires 90% less land to produce than soybean oil. (The only reason the company needs land is to grow sugarcane, though Nobbs aspires to eventually use sugars in existing food waste like corncobs and orange peels, bringing the amount of land needed closer to zero, hence “Zero Acre.”)

That’s if the company manages to scale up. According to Kyria Boundy-Mills, a microbiologist at the University of California, Davis, who has studied yeast oils for the past 10 years, “microbial oils” like the one Zero Acre is producing have been studied for at least 80 years, “mostly for fuel,” she says via email.

Boundy-Mills recalls a biotechnology company called TerraVia (formerly Solazyme), which developed a technology to make biodiesel from microalgae. TerraVia then switched gears and used it to make the first culinary algae oil on the market, which made it to Walmart but was discontinued a few years later.

It’s a cautionary tale for Zero Acre, but “fermentation is a mature technology,” Boundy-Mills says, noting that yeasts and microalgae have been grown in large-scale commercial fermentations for decades. The challenge remains the price.

“Fermentation is faster than growing crops, but the capital and operating costs of fermentation facilities is much, much more per acre than farmland,” she says. (Zero Acre runs a research facility in San Mateo and has raised $37 million to date.)

A bottle of Zero Acre’s Cultured Oil isn’t cheap, but as demand grows, Nobbs hopes that economies of scale will help the company lower the cost. “We want to kick off the flywheel, but it’s going to take a while to replace 200 million metric tons [of vegetable oil],” he says.

Nobbs is also eyeing solid fats that could replace palm shortening, and foods that come with cultured oil as an ingredient, noting, “We want an ecosystem to develop around cultured oil the same way it has developed around olive oil.”

Dark matter: our review suggests it's time to ditch it in favour of a new theory of gravity

We can model the motions of planets in the Solar System quite accurately using Newton’s laws of physics. But in the early 1970s, scientists noticed that this didn’t work for disc galaxies: stars at their outer edges, far from the gravitational force of all the matter at their centre, were moving much faster than Newton’s theory predicted.

This made physicists propose that an invisible substance called “dark matter” was providing extra gravitational pull, causing the stars to speed up – a theory that’s become hugely popular. However, in a recent review my colleagues and I suggest that observations across a vast range of scales are much better explained by an alternative theory of gravity, proposed by Israeli physicist Mordehai Milgrom in 1982 and called Milgromian dynamics or Mond, which requires no invisible matter.

Mond’s main postulate is that when gravity becomes very weak, as occurs at the edge of galaxies, it starts behaving differently from Newtonian physics. In this way, it is possible to explain why stars, planets and gas in the outskirts of over 150 galaxies rotate faster than expected based on just their visible mass. But Mond doesn’t merely explain such rotation curves; in many cases, it predicts them.

Philosophers of science have argued that this power of prediction makes Mond superior to the standard cosmological model, which proposes there is more dark matter in the universe than visible matter. This is because, according to this model, galaxies have a highly uncertain amount of dark matter that depends on details of how the galaxy formed – which we don’t always know. This makes it impossible to predict how quickly galaxies should rotate. But such predictions are routinely made with Mond, and so far these have been confirmed.

Imagine that we know the distribution of visible mass in a galaxy but do not yet know its rotation speed. In the standard cosmological model, it would only be possible to say with some confidence that the rotation speed will come out between 100km/s and 300km/s on the outskirts. Mond makes a more definite prediction that the rotation speed must be in the range 180-190km/s.
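
The reason such a definite prediction is possible is that, in Mond's low-acceleration regime, the flat rotation speed follows from the visible mass alone, via the relation v^4 = G M a0 (the baryonic Tully-Fisher relation). The sketch below runs that calculation for a hypothetical galaxy; the baryonic mass is an assumed round number, not a measurement from our review.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10       # Milgrom's acceleration constant, m s^-2
M_sun = 1.989e30   # solar mass, kg

def flat_rotation_speed(baryonic_mass_kg):
    """Deep-Mond limit: v^4 = G * M * a0, so the flat rotation speed
    depends only on the visible (baryonic) mass."""
    return (G * baryonic_mass_kg * a0) ** 0.25  # metres per second

# Hypothetical galaxy with 1e11 solar masses of visible matter:
v = flat_rotation_speed(1e11 * M_sun)
print(f"predicted flat rotation speed: about {v / 1000:.0f} km/s")  # ~200 km/s
```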

If observations later reveal a rotation speed of 188km/s, then this is consistent with both theories – but clearly, Mond is preferred. This is a modern version of Occam’s razor – that the simplest solution is preferable to more complex ones, in this case that we should explain observations with as few “free parameters” as possible. Free parameters are constants: certain numbers that we must plug into equations to make them work. But they are not given by the theory itself – there’s no reason they should have any particular value – so we have to measure them observationally. An example is the gravitational constant, G, in Newton’s theory of gravity, or the amount of dark matter in galaxies within the standard cosmological model.

We introduced a concept known as “theoretical flexibility” to capture the underlying idea of Occam’s razor that a theory with more free parameters is consistent with a wider range of data – making it more complex. In our review, we used this concept when testing the standard cosmological model and Mond against various astronomical observations, such as the rotation of galaxies and the motions within galaxy clusters.

Each time, we gave a theoretical flexibility score between –2 and +2. A score of –2 indicates that a model makes a clear, precise prediction without peeking at the data. Conversely, +2 implies “anything goes” – theorists would have been able to fit almost any plausible observational result (because there are so many free parameters). We also rated how well each model matches the observations, with +2 indicating excellent agreement and –2 reserved for observations that clearly show the theory is wrong. We then subtract the theoretical flexibility score from that for the agreement with observations, since matching the data well is good – but being able to fit anything is bad.

A good theory would make clear predictions which are later confirmed, ideally getting a combined score of +4 in many different tests (+2 - (-2) = +4). A bad theory would get a score between 0 and -4 (-2 - (+2) = -4). Precise predictions would fail in this case – these are unlikely to work with the wrong physics.
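
As a concrete illustration of this bookkeeping, the short snippet below combines agreement and flexibility scores exactly as described above; the individual scores are hypothetical placeholders, not values from the actual tests in our review.

```python
# Combined score = (agreement with observations) - (theoretical flexibility).
# The scores below are hypothetical placeholders, purely to show the
# arithmetic; they are not values from the review.
def combined_score(agreement, flexibility):
    return agreement - flexibility

examples = [
    ("clean prediction, later confirmed", +2, -2),      # -> +4
    ("fits the data, but could fit anything", +2, +2),  # -> 0
    ("flexible model that still fails", -2, +2),        # -> -4
]
for label, agreement, flexibility in examples:
    print(f"{label}: {combined_score(agreement, flexibility):+d}")
```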

We found an average score for the standard cosmological model of –0.25 across 32 tests, while Mond achieved an average of +1.69 across 29 tests. The scores for each theory in many different tests are shown in figures 1 and 2 below for the standard cosmological model and Mond, respectively.

It is immediately apparent that no major problems were identified for Mond, which at least plausibly agrees with all the data (notice that the bottom two rows denoting falsifications are blank in figure 2).

One of the most striking failures of the standard cosmological model relates to “galaxy bars” – rod-shaped bright regions made of stars – that spiral galaxies often have in their central regions (see lead image). The bars rotate over time. If galaxies were embedded in massive halos of dark matter, their bars would slow down. However, most, if not all, observed galaxy bars are fast. This falsifies the standard cosmological model with very high confidence.

Another problem is that the original models that suggested galaxies have dark matter halos made a big mistake – they assumed that the dark matter particles provided gravity to the matter around them, but were not affected by the gravitational pull of the normal matter. This simplified the calculations, but it doesn’t reflect reality. When this was taken into account in subsequent simulations, it was clear that dark matter halos around galaxies do not reliably explain their properties.

There are many other failures of the standard cosmological model that we investigated in our review, with Mond often able to naturally explain the observations. The reason the standard cosmological model is nevertheless so popular could be down to computational mistakes or limited knowledge about its failures, some of which were discovered quite recently. It could also be due to people’s reluctance to tweak a gravity theory that has been so successful in many other areas of physics.

The huge lead of Mond over the standard cosmological model in our study led us to conclude that Mond is strongly favoured by the available observations. While we do not claim that Mond is perfect, we still think it gets the big picture correct – galaxies really do lack dark matter.

Octopus and Human Brains Share the Same “Jumping Genes”

According to a new study, the neural and cognitive complexity of the octopus could originate from a molecular analogy with the human brain.

New research has identified an important molecular analogy that could explain the remarkable intelligence of these fascinating invertebrates.

With its exceptionally complex brain and cognitive abilities, the octopus is unique among invertebrates, so much so that in several respects it resembles vertebrates more than other invertebrates. The neural and cognitive complexity of these animals could originate from a molecular analogy with the human brain, as reported in a research paper recently published in BMC Biology and coordinated by Remo Sanges from Scuola Internazionale Superiore di Studi Avanzati (SISSA) of Trieste and by Graziano Fiorito from Stazione Zoologica Anton Dohrn of Naples.

This research shows that the same ‘jumping genes’ are active both in the human brain and in the brains of two octopus species: Octopus vulgaris, the common octopus, and Octopus bimaculoides, the Californian octopus. The discovery could help us understand the secret of the intelligence of these remarkable organisms.

Sequencing the human genome revealed as early as 2001 that over 45% of it is composed of sequences called transposons, so-called ‘jumping genes’ that, through molecular copy-and-paste or cut-and-paste mechanisms, can ‘move’ from one point of an individual’s genome to another, shuffling or duplicating themselves.

In most cases, these mobile elements remain silent: they have no visible effects and have lost their ability to move. Some are inactive because they have, over generations, accumulated mutations; others are intact, but blocked by cellular defense mechanisms. From an evolutionary point of view, even these fragments and broken copies of transposons can still be useful, as ‘raw material’ that evolution can sculpt.

Drawing of an octopus. Credit: Gloria Ros

Among these mobile elements, the most relevant are those belonging to the so-called LINE (Long Interspersed Nuclear Elements) family, of which around a hundred copies in the human genome are still potentially active. It has traditionally been thought that LINEs’ activity was just a vestige of the past, a remnant of the evolutionary processes that involved these mobile elements, but in recent years new evidence emerged showing that their activity is finely regulated in the brain. There are many scientists who believe that LINE transposons are associated with cognitive abilities such as learning and memory: they are particularly active in the hippocampus, the most important structure of our brain for the neural control of learning processes.

The octopus’ genome, like ours, is rich in ‘jumping genes’, most of which are inactive. Focusing on the transposons still capable of copy-and-paste, the researchers identified an element of the LINE family in parts of the brain crucial for the cognitive abilities of these animals. The discovery, the result of the collaboration between Scuola Internazionale Superiore di Studi Avanzati, Stazione Zoologica Anton Dohrn and Istituto Italiano di Tecnologia, was made possible thanks to next-generation sequencing techniques, which were used to analyze the molecular composition of the genes active in the nervous system of the octopus.

“The discovery of an element of the LINE family, active in the brains of the two octopus species, is very significant because it adds support to the idea that these elements have a specific function that goes beyond copy-and-paste,” explains Remo Sanges, director of the Computational Genomics laboratory at SISSA, who started working on this project when he was a researcher at Stazione Zoologica Anton Dohrn of Naples. The study, published in BMC Biology, was carried out by an international team of more than twenty researchers from all over the world.

“I literally jumped on the chair when, under the microscope, I saw a very strong signal of activity of this element in the vertical lobe, the structure of the brain which in the octopus is the seat of learning and cognitive abilities, just like the hippocampus in humans,” says Giovanna Ponte from Stazione Zoologica Anton Dohrn.

According to Giuseppe Petrosino from Stazione Zoologica Anton Dohrn and Stefano Gustincich from Istituto Italiano di Tecnologia “This similarity between man and octopus that shows the activity of a LINE element in the seat of cognitive abilities could be explained as a fascinating example of convergent evolution, a phenomenon for which, in two genetically distant species, the same molecular process develops independently, in response to similar needs.”

“The brain of the octopus is functionally analogous in many of its characteristics to that of mammals,” says Graziano Fiorito, director of the Department of Biology and Evolution of Marine Organisms of the Stazione Zoologica Anton Dohrn. “For this reason, also, the identified LINE element represents a very interesting candidate to study to improve our knowledge on the evolution of intelligence.”

Reference: “Identification of LINE retrotransposons and long non-coding RNAs expressed in the octopus brain” by Giuseppe Petrosino, Giovanna Ponte, Massimiliano Volpe, Ilaria Zarrella, Federico Ansaloni, Concetta Langella, Giulia Di Cristina, Sara Finaurini, Monia T. Russo, Swaraj Basu, Francesco Musacchia, Filomena Ristoratore, Dinko Pavlinic, Vladimir Benes, Maria I. Ferrante, Caroline Albertin, Oleg Simakov, Stefano Gustincich, Graziano Fiorito and Remo Sanges, 18 May 2022, BMC Biology.
DOI: 10.1186/s12915-022-01303-5
