
Who Really Creates Value in an Economy? by Mariana Mazzucato

Ten years after the global economic crisis, profits have recovered, but investment remains weak. Ultimately, the reason is that economic policy continues to be informed by neoliberal ideology and its academic cousin, “public choice” theory, rather than by historical experience.

LONDON – After the 2008 global financial crisis, a consensus emerged that the public sector had a responsibility to intervene to bail out systemically important banks and stimulate economic growth. But that consensus proved short-lived, and soon the public sector’s economic interventions came to be viewed as the main cause of the crisis, and thus as something that needed to be reversed. This turned out to be a grave mistake.

In Europe, in particular, governments were lambasted for their high debts, even though private debt, not public borrowing, caused the collapse. Many were instructed to introduce austerity, rather than to stimulate growth with counter-cyclical policies. Meanwhile, the state was expected to pursue financial-sector reforms, which, together with a revival of investment and industry, were supposed to restore competitiveness.

But too little financial reform actually took place, and in many countries, industry still has not gotten back on its feet. While profits have bounced back in many sectors, investment remains weak, owing to a combination of cash hoarding and increasing financialization, with share buybacks – to boost stock prices and hence stock options – also at record highs.

The reason is simple: the much-maligned state was permitted to pursue only timid policy responses. This failure reflects the extent to which policy continues to be informed by ideology – specifically, neoliberalism, which advocates a minimal role for the state in the economy, and its academic cousin, “public choice” theory, which emphasizes governments’ shortcomings – rather than historical experience.

Growth requires a well-functioning financial sector, in which long-term investments are rewarded over short-term plays. Yet, in Europe, a financial-transaction tax was introduced only in 2016, and so-called patient finance remains inadequate almost everywhere. As a result, the money that is injected into the economy through, say, monetary easing ends up back in the banks.

The predominance of short-term thinking reflects fundamental misunderstandings about the state’s proper economic role. Contrary to the post-crisis consensus, active strategic public-sector investment is critical to growth. That is why all the great technological revolutions – whether in medicine, computers, or energy – were made possible by the state acting as an investor of first resort.

Yet we continue to romanticize private actors in innovative industries, ignoring their dependence on the products of public investment. Elon Musk, for example, has not only received over $5 billion in subsidies from the US government; his companies, SpaceX and Tesla, have been built on the work of NASA and the Department of Energy, respectively.

The only way to revive our economies fully is for the public sector to reprise its pivotal role as a strategic, long-term, and mission-oriented investor. To that end, it is vital to debunk flawed narratives about how value and wealth are created.

The popular assumption is that the state facilitates wealth creation (and redistributes what is created), but does not actually create wealth. Business leaders, by contrast, are considered to be productive economic actors – a notion used by some to justify rising inequality: because businesses’ (often risky) activities create wealth – and thus jobs – their leaders deserve higher incomes. Such assumptions also result in the misuse of patents, which in recent decades have been blocking rather than incentivizing innovation, as patent-friendly courts have increasingly allowed them to be used too widely, privatizing research tools rather than just the downstream outcomes.

If these assumptions were true, tax incentives would spur an increase in business investment. Instead, such incentives – such as the US corporate-tax cuts enacted in December 2017 – reduce government revenues, on balance, and help to fuel record-high profits for companies, while producing little private investment.

This should not be shocking. In 2011, the businessman Warren Buffett pointed out that capital gains taxes do not stop investors from making investments, nor do they undermine job creation. “A net of nearly 40 million jobs were added between 1980 and 2000,” he noted. “You know what’s happened since then: lower tax rates and far lower job creation.”

These experiences clash with the beliefs forged by the so-called Marginal Revolution in economic thought, when the classical labor theory of value was replaced by the modern, subjective value theory of market prices. In short, we assume that, as long as an organization or activity fetches a price, it is generating value.

This reinforces the inequality-normalizing notion that those who earn a lot must be creating a lot of value. It is why Goldman Sachs CEO Lloyd Blankfein had the audacity to declare in 2009, just a year after the crisis to which his own bank contributed, that his employees were among “the most productive in the world.” And it is also why pharmaceutical companies get away with using “value-based pricing” to justify astronomical drug-price hikes, even when the US government spends more than $32 billion annually on the high-risk links of the innovation chain that results in those drugs.

When value is determined not by specific metrics, but rather by the market mechanism of supply and demand, value becomes simply “in the eye of the beholder” and rents (unearned income) become confused with profits (earned income); inequality rises; and investment in the real economy falls. And when flawed ideological stances about how value is created in an economy shape policymaking, the result is measures that inadvertently reward short-termism and undermine innovation.

A decade after the crisis, the need to address enduring economic weaknesses remains. That means, first and foremost, admitting that value is determined collectively, by business, workers, strategic public institutions, and civil-society organizations. The way these various actors interact determines not just the rate of economic growth, but also whether growth is innovation-led, inclusive, and sustainable. It is only by recognizing that policy must be as much about actively shaping and co-creating markets as it is about fixing them when things go wrong that we may bring this crisis to an end.

Why can't more than four people have a conversation at once? — Quartz at Work

It’s called the “dinner party problem”: A table of four or fewer people may happily converse as one, but a party of five or more will splinter fairly quickly into separate conversations of two, three, or four people each. What is it about the number four?

The question bothered Jaimie Krems, an assistant professor of psychology at Oklahoma State University. Krems had previously studied under Robin Dunbar, the Oxford University evolutionary psychologist who theorized that cohesion in any human social group falls apart once the group reaches 150—a figure now known as Dunbar’s number. But just as the dynamics of large groups start changing around 150, something also happens to the casual conversations of small groups once they surpass four members.

Social psychologists have noted the pattern in group conversations in research stretching back decades. There’s evidence that this four-person limit on conversations has been in place for about as long as humans have been chatting with one another. Shakespeare rarely allowed more than four speaking characters in any scene; ensemble films rarely have more than four actors interacting at once. But why do we max out at four?

In a forthcoming paper in the journal Evolution and Human Behavior, Krems and Jason Wilkes offer one theory rooted in evolutionary psychology.

Pairs (or “dyads,” in psychology research parlance) are the essential building blocks of a society. Let’s imagine a conversation between four hypothetical humans: you, Chris, Pat, and Taylor. In a four-person conversation, there are six possible pairs of people who can be talking to one another at once: you and Chris, you and Pat, you and Taylor, Chris and Pat, Chris and Taylor, and Pat and Taylor. That’s three pairs you’re part of, and three pairs you’re not. Essentially, you have a role in influencing half of the possible conversations that could be happening in that group.

If there are three people in the conversation, there are three possible pairs, only one of which excludes you. If there are five people, there are 10 possible pairs, and the majority—six—don’t include you, which makes it harder to get your point across.
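
The underlying arithmetic is simple pair counting: a group of n people contains n(n−1)/2 possible pairs, of which only n−1 include any given member. A minimal sketch of that calculation (illustrative only; it is not code from the paper) shows why four is the largest group in which you are part of at least half the possible pairs:

```python
from math import comb

# For each group size, count the possible pairs (dyads) and how many include "you".
for n in range(2, 8):
    total_pairs = comb(n, 2)         # n choose 2 possible conversational pairs
    pairs_with_you = n - 1           # you paired with each of the other members
    pairs_without_you = total_pairs - pairs_with_you
    print(f"group of {n}: {total_pairs} pairs, "
          f"{pairs_with_you} include you, {pairs_without_you} do not")

# At n = 4 you are in exactly half the pairs (3 of 6); from n = 5 onward,
# the pairs that exclude you are in the majority (6 of 10, and growing).
```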

What if, the researchers argue, there was an evolutionary advantage to not being “outnumbered” in a conversational group? The physical danger of being an isolated outcast is clear: exclusion from society in early human history could easily be a death sentence, and even most observed cases of lethal chimpanzee violence have happened when aggressive groups encounter a lone chimp.

Perhaps there was an advantage in not being a conversational outcast, either. If a group is trying to reach an important decision—the safest location to build a shelter, for example, or how to allocate scarce food—your position has a better chance of prevailing if you’re able to convince at least half of the group. You also have a better chance of being able to talk your way out of exclusion.

“Just as one may have avoided death by avoiding being individually outnumbered in intergroup interactions, then, perhaps one would have been able to avoid social condemnation and/or exclusion by avoiding being dyadically outnumbered in in-group interactions,” they write. Yikes.

It’s possible our brains evolved to manage only the conversations in which we have a chance of swaying the group to our side. Otherwise, what’s the point of talking?

London Fashion Week and designers are banning fur, but sales are up

When London Fashion Week kicks off on September 14, fur will be missing from the runway. A survey by LFW organizer the British Fashion Council found that no designers in the lineup plan to use fur. The finding is the latest in a series of recent wins for the anti-fur movement and animal rights groups.

Over just the past year, fashion brands, designers, and cities have decided to do away with fur. What’s responsible for the trend? Advocacy groups like PETA (People for the Ethical Treatment of Animals) have taken credit for pressuring public officials to ban fur sales and manufacturing within city limits. They have also assumed responsibility for persuading designers to stop using fur, all while arguing that socially conscious young shoppers find fur outdated and cruel.

But fur sales tell a different story: They continue to rise. While it may no longer be socially acceptable to drape oneself in mink, fur trim has found its way onto coats, shoes, accessories, and furniture. Animal pelts, it would appear, are still very much in demand.

The fur industry is surviving the activism of animal rights groups

Just last week, Burberry said it will stop selling fur products. Since last year, several fashion houses, including Gucci, Michael Kors, Tom Ford, DKNY, Furla, and Versace, have announced plans to cease using fur as well. The same goes for high-end fashion retailers like Net-a-Porter. Even cities are getting in on the trend: In March, San Francisco became the largest city to ban the sale of fur, but that honor may ultimately go to Los Angeles, which took the first steps toward implementing a fur ban in August.

These wins are rooted in the anti-fur movement that took off in the 1980s. PETA was founded in 1980, and the first Fur-Free Friday, observed on Black Friday, took place in 1986. The momentum certainly made an impact on the fur industry, causing fur sales to wane in the 1990s. But by the early aughts, the industry managed to regroup by selling brightly dyed fur that, to the naked eye, was hard to distinguish from fake fur. Today, furriers have managed to attract customers by selling fur as accents on shoes, handbags, and other apparel. Both of these sales strategies have allowed shoppers to avoid the social stigma of wearing fur, since a full mink in brown is much more likely to attract the attention of fur opponents than a blue fur jacket or trim on boots.

Fur accents and accessories function as sort of a gateway drug in the industry. “We start with the young consumer buying a fur key ring. Then maybe a little later she has more money for a fur bag,” Julie Maria Iversen of Kopenhagen Fur in Denmark told National Geographic in 2016. “Eventually she buys a full coat. [It’s] all part of the agenda, to inspire the upcoming generation of women.”

The agenda seems to be working: This year alone, the US is expected to manufacture more than $352 million in fur apparel and accessories, according to the market research firm Euromonitor International. That’s a slight gain from 2014, when $336.9 million of fur was manufactured domestically. But fashion isn’t the only sector responsible for this roughly 4 percent jump in fur manufacturing. The furniture industry has increasingly used fur in chair coverings, and expenditures on fur goods in that sector grew by 10 percent from 2011 to 2016.

Furriers’ ability to weather the ongoing criticism about the ethics of fur has largely been attributed to the global demand for fur, and the perception, even domestically, that wearing fur is a personal choice and not necessarily a moral one. And while animal rights activists have suggested that young people don’t wear fur, National Geographic pointed out that celebrities’ fondness for it — Lady Gaga, Rihanna, Kim Kardashian, and Rita Ora have all worn fur — has likely made it seem hip to young people. Whatever the case, the Fur Information Council of America (FICA) claims that 55 percent of fur shoppers are age 44 or younger.

Beyond any perceived coolness, however, fur also carries cultural weight that varies with region and ethnic heritage. In Detroit, a city known for its fur trading history, furs continue to be a staple. In fact, PETA once offered to pay Detroiter Aretha Franklin’s overdue property taxes if she agreed to give up fur. She refused.

For many indigenous peoples throughout North America, fur has always been a way of life. After the anti-sealskin movement of the 1970s resulted in bans on the animal pelts that disempowered the Inuit both economically and culturally, Inuit communities have in recent years attempted to reclaim and revive the sealskin market. Over the past decade, Canadians in particular have reportedly been more willing to become consumers of sealskin.

Animal rights groups may need to change focus if they want to combat fur sales

Animal rights groups aren’t denying statistics about fur’s rise in popularity. Some have even sounded the alarm about the trend, with activists acknowledging that the sales data reflects what they’ve noticed about fur anecdotally.

“The fur industry’s statistics reflect what we’re seeing in the streets — that fur consumption is on the rise,” Edita Birnkrant, the campaigns director for Friends of Animals, told the animal rights magazine Their Turn. “For the sake of the animals, we have to organize and take a more aggressive approach on their behalf.”

Aggressive moves by animal rights groups have often led to controversy and concerns that these organizations are racist, sexist, and sizeist. The most effective approach could simply be more public education about the conditions in which animals raised for their fur live. Animal rights groups say these animals are killed in grossly inhumane ways and raised in conditions with little to no regard for their welfare, and videos documenting the treatment of such animals net millions of pageviews.

But those viewers are likely people already invested in the animal rights movement in some capacity. Getting average consumers to understand the ins and outs of the trade may lead them to recognize that, whether they’re wearing a full mink coat or fur-trimmed boots, an animal’s life was ultimately sacrificed for their aesthetic choice.

It’s also important to put the fashion world’s seeming embrace of animal rights into more context. London Fashion Week may be fur-free, but the appearance of fur on British catwalks has generally risen in recent years. Although LFW has sworn off fur for now, the material hasn’t disappeared from New York, Paris, and Milan fashion weeks.

Moreover, when fashion houses like Versace swear off fur, it results in positive press for both the brand and animal rights groups. After Versace’s announcement, for example, Nylon ran a headline declaring it “the latest fashion brand to denounce fur.” The use of the word “denounce” frames Versace as having taken an ethical stand.

The same has happened in regard to the California cities banning fur. When San Francisco decided to prohibit fur sales, the Animal Legal Defense Fund ran a headline praising it as “the first major US city to ban fur.” The headline suggested that unlike the nation’s other urban centers, San Francisco actually had a conscience. But is a fur ban in San Francisco very progressive when temperatures in the city rarely dip below freezing?

And though haute couture brands might appear more ethical for dropping fur, their decision to part with it does little to upend the fur industry. That’s because the average consumer is not buying fur at Versace or anywhere similar. According to FICA, 85 percent of US fur retailers and manufacturers are small, family-run businesses. So while the idea that fur sales have dropped persists, it’s rooted in PR wins for animal rights groups rather than a concrete change in fur consumption.

If shoppers don’t realize that sales are up, even as the stigma of wearing fur lingers, animal rights activists can’t disrupt the fur market. First, they’ll have to change the public perception about fur sales. Only then can they change how the public shops.

Cash giving in Africa: program in Uganda works after 4 years, but not 9

I write a lot about the benefits of fighting poverty by giving poor people cash, rather than, say, giving them chickens or food parcels or water pumps. Giving cash directly to the poor is relatively easy, it respects the decisions of poor people as to how to spend it, and it avoids the central planning challenges of some other anti-poverty policies. Moreover, there is, I think, pretty good evidence demonstrating its effectiveness.

But “pretty good evidence” doesn’t mean “all the evidence,” and “effective” doesn’t mean “a panacea.” A new paper from development economists Chris Blattman, Nathan Fiala, and Sebastian Martinez complicates our picture of cash transfer programs, and suggests that the best way to think of cash is as a way to speed up poor people’s escape from poverty, rather than as the key to helping them escape poverty in the first place.

The paper is about a program in northern Uganda. Uganda, with a GDP per capita of $2,352 (compared to $59,495 in the US), is among the poorest countries on earth, and its north has endured years of violence driven by a Christian terrorist organization called the Lord’s Resistance Army.

The cash-transfer intervention, called the Youth Opportunities Program, was offered in 2008 to small groups of young people. Each person in every group was given about $382 to learn a skilled trade (exactly which craft varied from group to group), with the goal of becoming craftsmen with higher earning potential. Twists like this are pretty common with cash programs. Recipients didn’t get the cash just to, say, make it easier for them and their families to eat. The goal was to become more productive and get into higher-paying jobs that could lift them out of poverty permanently.

Groups (of 10 to 40 people each) had to apply together, and the authors note that about half the groups who applied already existed in some form, as “farm cooperatives, or sports, drama, or microfinance clubs.” Each group had to put together a written proposal, which was screened by government officials. The groups also received formal advising from “a local government employee, teacher, or community leader” who helped put together the written application, partly to help applicants who were functionally illiterate and couldn’t compile an application on their own.

The main benefit, though, was the cash. “I started off like many people, 10 to 12 years ago, never having thought seriously about people getting cash as a development intervention and thinking it was vaguely crazy or irresponsible,” Blattman told me. “Having been a 25-year-old semi-skilled person once, the idea that I could get my annual income as one big lump sum and use it responsibly … it didn’t strike me as a good idea.”

After 4 years, the program appeared to work. After 9 years, though …

But in 2012, Blattman, Fiala, and Martinez checked in on the young adults in the Ugandan program four years after the intervention, and the results were incredibly encouraging. “Relative to the control group, the program increases business assets by 57 percent, work hours by 17 percent, and earnings by 38 percent,” they wrote. “Many also formalize their enterprises and hire labor.”

The implication was that giving ambitious young people in poor countries a little bit of cash could transform their lives, propelling them into more profitable careers and even encouraging them to build businesses with other employees.

In a blog post, Blattman argued that the results suggested an answer to “one of the big questions in development: how to create jobs and speed up the shift from agriculture to industry in developing countries?” Their findings suggested that helping countries in Africa go through the rapid development process of China or India could be as simple as handing out cash. Blattman titled the post, “Dear governments: Want to help the poor and transform your economy? Give people cash.”

Now he and his co-authors have checked back in again nine years after the intervention, and the results are a great deal less promising than after four. While the people who got cash were earning 38 percent more money than the control group in year four, the control group caught up to the cash recipients by year nine. Overall income was no higher in the treatment group, and earnings were higher by a small (4.6 percent), statistically insignificant amount.

The recipients did have more assets on average than people not getting the money, which makes sense; they had a sudden influx of money, some of which was sure to go toward buying durable assets like metal roofs, fruit-bearing trees, or work tools.

“The right way to look at these results is that people were richer for a while and then they have nicer houses,” Blattman said. “Consuming that stuff makes you less poor. But I think what a lift out of poverty means is not just that you have some extra savings and a buffer, but actually that you have some real, sustained earnings potential, and that’s not what we’ve seen.”

Women who got the money also reported that their kids were healthier than women who didn’t. But overall, the study strongly suggests that the cash grant wasn’t a catapult out of poverty. It just helped people who were already going to escape dire poverty do it a little bit faster.

The optimistic and pessimistic readings

On first glance, this looks bad for cash transfer enthusiasts like me. Cash had big effects after four years but, in the long run, no effect on income. It’s not a miracle drug.

But that’s a bit too dismissive. Getting out of extreme poverty faster is a really big deal! And indeed, if you look over the full nine years, Blattman, Fiala, and Martinez estimate that beneficiaries earned a total of $665 more than people not getting the grant. So an initial grant of just $382 led to an increase in earnings of nearly twice that size. Not bad — and pretty cost-effective, in that every $1 put in by the program led to another $1.74 in benefits to recipients.
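
A quick back-of-the-envelope check of those figures, using only the numbers quoted above:

```python
# Rough check of the cost-effectiveness figures quoted in the article.
grant = 382.0            # initial grant per recipient, in dollars
extra_earnings = 665.0   # estimated cumulative extra earnings over nine years

print(f"extra earnings per $1 granted: ${extra_earnings / grant:.2f}")
# Prints about $1.74, matching "every $1 put in by the program led to
# another $1.74 in benefits to recipients."
```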

Then there’s the fact that beneficiaries were still doing better after nine years in terms of assets. “I wouldn’t underrate the asset findings,” said Amanda Glassman, chief operating officer and senior fellow at the Center for Global Development, who’s read the study. “Having those assets is also associated with higher earnings over the lifetime, and it’s possible that might be pretty important over the long term.” Blattman and his co-authors didn’t find that assets like livestock and trees were leading to a lot more income nine years out — but that could change in the future.

Michael Faye, the co-founder and president of GiveDirectly, a nonprofit charity that gives cash directly to extremely poor people in sub-Saharan Africa, emphasizes that this is one study among many. A review of the evidence on cash transfers by the Overseas Development Institute, a British think tank, wound up extracting data from 165 different studies, out of a total pool of thousands. Of those, 96 were randomized controlled trials, widely considered the most rigorous possible study design. That’s a huge evidence base, the preponderance of which shows cash works quite well, and nothing like it exists for any other development intervention.

“I don’t update [my views] much for each individual paper,” Faye said. He also emphasizes that a few other long-term papers have shown more optimistic results. An evaluation of Mexico’s Progresa cash transfer program by economists Susan Parker and Tom Vogl released this spring found that after two decades, children whose families received the transfer saw improved “educational attainment, geographic mobility, labor market performance, and household economic outcomes in early adulthood.” The children got, on average, over a year more schooling than they would’ve without the cash.

Some data in the US backs up that conclusion too. Brown’s Anna Aizer, University of Toronto’s Shari Eli, Northwestern’s Joseph Ferrie, and UCLA’s Adriana Lleras-Muney looked at the Mothers’ Pension program, the first government-sponsored welfare program in American history, which ran from 1911 to 1935. They found that male children of mothers who were accepted for the program lived one year longer, got more schooling, and had incomes 14 percent greater than children of mothers who were rejected.

Berk Özler, the lead economist for the poverty cluster of the World Bank’s Development Research Group, is a bit more circumspect. Özler has been sharply critical of GiveDirectly for being unduly boosterish and insufficiently evidence-based in its arguments for cash. He’s not anti-cash, per se – “nobody’s disagreeing that cash is good,” he told me – but he reads the new paper’s findings a bit more negatively than Faye or Glassman do.

Looking at the people who got the cash, Özler said: “You have a third that never really start a business, a third that are disinvesting, and a third that are happy to be small businesses not really growing. That’s kind of disappointing, but it’s surprising to me. I don’t really understand why this is happening.”

In his view, the positive findings in year four were driven by the third of recipients who actually put their money to work – rather than spending it, or investing it for a bit and then disinvesting soon thereafter. And he’s not quite sure why that group gained and the others didn’t.

Özler also raised the issue of spillovers. The point of the program was to get more people into skilled trades, like tailoring. But that doesn’t just affect the people getting the skills; it affects the other tailors already working in the area. “Creating 10 to 15 tailors at once in a parish of 10,000 people, it’s got to affect existing tailors,” he said. “Maybe some went out of business.” It’s hard to know whether the program is cost-effective without knowing what happened to those other tailors. If the program just made some people successful tailors at the expense of others, that’s not really a huge gain.

For Özler, this all isn’t a reason to abandon cash, but it is a reason to think harder about how to do cash. “We’re not arguing ‘cash good versus cash not good.’ Cash is good!” he said. “But the only way to give it isn’t, ‘I’ll drop 1,000 bucks on you and go away.’”

There are a lot of ways to give out cash. The hope was that the Ugandan program had found one that would set up a durable, sustained escape from poverty for beneficiaries. That doesn’t really seem to be the case. But the cash certainly helped the recipients. And it’s possible an even better-designed cash program could help more.

“We don’t want to sound too disappointed,” Blattman told me, summarizing his takeaway. “It’s still better than anything else we’d seen.”

Massive solar and wind farms could bring vegetation back to the Sahara

Switching from fossil fuels to renewable energy is an important and necessary step towards averting climate change. However, in our efforts to go green, we also need to be mindful of other consequences, both intended and unintended – and that includes how a mass deployment of renewable technology might affect its surrounding climate.

What if the Sahara desert was turned into a giant solar and wind farm, for instance? This is the topic of new research published in Science by Yan Li and colleagues. They found that all those hypothetical wind turbines and solar panels would make their immediate surroundings both warmer and rainier, and could turn parts of the Sahara green for the first time in at least 4,500 years.

The scientists behind the research looked at the maximum amount of solar and wind energy that could be generated in the Sahara desert and the transition region to its south, the Sahel. The two regions were picked as they are relatively plausible sites for such an enormous roll-out of renewable energy, being fairly near to substantial demand from Europe and the Middle East, while having limited other demands on the land. Both have substantial potential resources of wind and solar energy. Li and colleagues suggest that the Sahel, in particular, could also benefit from economic development and more energy for desalination, providing water for cities and agriculture.

As the two regions are so large, the solar and wind farms that were simulated in this study are the size of entire countries – 38 times larger than the UK. They would be vastly bigger than any existing solar and wind farms, and could provide up to four times as much energy as is currently consumed globally.

This would prompt quite significant changes in the local environment – massive wind farms would raise temperatures by around 2℃ for instance, similar to the amount of global warming we are concerned about. Solar would cause a smaller temperature change, around 1℃.

Precipitation increases of 0.25 mm per day associated with wind farms sound more modest, yet this would be almost double the previous amount of rainfall. Again, the effect associated with solar parks was smaller – an increase of 0.13 mm/day – but still significant when added up over a year.

Why turbines and panels mean warmth and rain

Wind farms largely cause temperature increases because their turbine blades bring warmer air down to the surface, especially at night. This has been observed in field studies and using remote sensing. They have also been shown to increase moisture in the air.

Solar panels mean more solar radiation is absorbed and less of the sun’s energy is reflected back into space. This causes the land surface to warm up. Several studies have shown this, including one which showed that the effect of warming caused by fossil fuels, via carbon emissions, was 30 times greater than the warming caused by solar photovoltaics absorbing more solar radiation. However, temperature effects may vary within the solar park and with season.

In the Sahara simulation, extra rainfall happens because wind turbines represent an obstacle to free-flowing air, slowing it down and reducing the effect of the Earth’s rotation on air flow. This lowers the air pressure, and the difference in pressure between the Sahara and surrounding areas causes wind to flow there. When the air meets, or converges, in the Sahara it has nowhere else to go but up. As the air rises, water vapour in it condenses and rain drops form.

For solar, the process is slightly different: warmer air, heated by the panels, simply rises. However, this also promotes low pressure, causing air to flow there, converge and rise.

More rainfall also means more vegetation. This increases surface roughness, as with wind turbines, and causes more solar radiation to be absorbed, as with solar panels. This reinforcing cycle is known as a “climate feedback” and incorporating these vegetation feedbacks is a novel aspect of the research by Li and colleagues.

Time to make it a reality?

Not quite. Decisions aren’t made in response to environmental impacts alone – if this was the case we’d have already ditched fossil fuels. It’s certainly true that developing a mega renewable energy site across the Sahara and the Sahel would be a game-changer, but there are lots of other factors to consider first.

These areas may be sparsely populated but people do live there, their livelihoods are there, and the landscapes are of cultural value to them. Can the land really be “grabbed” to supply energy to Europe and the Middle East?

Coherent and stable energy policies are challenging enough within an individual nation, let alone between nations with all the potential political implications and energy security issues. Though mass amounts of cheap Saharan energy sounds like a great thing, it is not clear it would be a secure enough investment for the economics to add up.

It’s also hard to tell what this would mean for desertification, which is caused by poor land management, such as overgrazing, as well as by the climate. The changes to rainfall looked at in this study are regional, not global, and once the wind and solar farms were taken away their effects would disappear and the land could revert back to its previous state.

Overall, this is an interesting and important piece of research, highlighting the need to be mindful of unintended consequences, be these positive or negative, of the energy transition. Integrating these findings with other social, economic, environmental and technical considerations is essential to ensure we don’t leap from the frying pan into the fire.

The real Goldfinger: the London banker who broke the world | News

Every January, to coincide with the World Economic Forum in Davos, Oxfam tells us how much richer the world’s richest people have got. In 2016, their report showed that the wealthiest 62 individuals owned the same amount as the bottom half of the world’s population. This year, that number had dropped to 42: three-and-a-half dozen people with as much stuff as three and a half billion.

This yearly ritual has become part of the news cycle, and the inequality it exposes has ceased to shock us. The very rich getting very much richer is now part of life, like the procession of the seasons. But we should be extremely concerned: their increased wealth gives them ever-greater control of our politics and of our media. Countries that were once democracies are becoming plutocracies; plutocracies are becoming oligarchies; oligarchies are becoming kleptocracies.

Things were not always this way. In the years after the second world war, the trend was in the opposite direction: the poor were getting richer; we were all getting more equal. To understand how and why that changed, we need to go back to the dying days of the conflict, to a resort in New Hampshire, where a group of economists set out to secure humanity’s future.

This is the story of how their dream failed and how a London banker’s bright idea broke the world.


In the years after the first world war, money flowed between countries pretty much however its owners wished, destabilising currencies and economies in pursuit of profit. Many of the wealthy grew wealthier even while economies fell apart. The chaos led to the election of extremist governments in Germany and elsewhere, to competitive devaluations and beggar-my-neighbour tariffs, to trade wars and, ultimately, to the horrors of the second world war.

The allies wanted to prevent this ever happening again. So, at a meeting at the Bretton Woods resort in New Hampshire in 1944, they negotiated the details of an economic architecture that would – in perpetuity – stop uncontrolled money flows. This, they hoped, would keep governments from using trade as a weapon with which to bully neighbours, and create a stable system that would help secure peace and prosperity.

Under the new system, all currencies would be pegged to the dollar, which would in turn be pegged to gold. An ounce of gold cost $35 (that’s about $500/£394 today). In other words, the US Treasury pledged that, if a foreign government turned up with $35, it could always buy an ounce of gold. The United States was promising to keep everyone supplied with enough dollars to fund international trade, as well as to maintain sufficient gold reserves for those dollars to be inherently valuable.

To prevent speculators trying to attack these fixed currencies, cross-border money flows were severely constrained. Money could move overseas, but only in the form of long-term investments, not to speculate short term against currencies or bonds.

To understand how this system worked, imagine an oil tanker. If it has just one huge tank, then the oil can slosh backwards and forwards in ever greater waves, until it destabilises the vessel, which overturns and sinks. At the Bretton Woods conference, the oil was divided between smaller tanks, one for each country. The liquid could slosh back and forth within its little compartments, but would be unable to achieve enough momentum to damage the integrity of the vessel.

Strangely, one of the best evocations of this long-gone system is Goldfinger, the James Bond book. The film of the same name has a slightly different plot, but they both feature a Soviet agent trying to undermine the west’s financial system by interfering with its gold reserves. “Gold and currencies backed by gold are the foundations of our international credit,” a Bank of England official named Colonel Smithers explains to 007.

The trouble is, the colonel continues, that the Bank is only prepared to pay £1,000 for a gold bar, which is the equivalent of the $35 per ounce price paid in America, whereas the same gold is worth 70% more in India, where there is a high demand for gold jewellery. It is thus highly profitable to smuggle gold out of the country and sell it overseas.

The villain Auric Goldfinger’s cunning scheme is to own pawnbrokers all over Britain, buy up gold jewellery and trinkets from ordinary Brits in need of a bit of cash, then melt them down into plates, attach the plates to his Rolls-Royce, drive them to Switzerland, reprocess them and fly them to India. By doing so, Goldfinger will not only undermine the British currency and economy, but also earn profits he could use to fund communists and other miscreants. Hundreds of Bank of England employees are engaged in trying to stop this kind of scam from happening, Smithers tells 007, but Goldfinger is too clever for them. He has secretly become Britain’s richest man, and has £5m-worth of gold bars sitting in the vaults of a bank in the Bahamas.

“We are asking you to bring Mr Goldfinger to book, Mr Bond, and get that gold back,” says Smithers. “You know about the currency crisis and the high Bank rate? Of course. Well, England needs that gold, badly – and the quicker the better.”

By modern standards, Goldfinger wasn’t doing anything wrong, apart perhaps from dodging some taxes. He was buying up gold at a price people were prepared to pay for it, then selling it in another market, where people were prepared to pay more. It was his money. It was his gold. So what was the problem? He was oiling the wheels of commerce, efficiently allocating capital where it could best be used, no?

No, because that wasn’t how Bretton Woods worked. Colonel Smithers considered the gold to belong not only to Goldfinger, but also to Great Britain. The system didn’t consider the owner of money to be the only person with a say in what happened to it. According to the carefully crafted rules, the nations that created and guaranteed the value of money had rights to that money, too. They restricted the rights of money-owners in the interests of everybody else. At Bretton Woods, the allies – desperate to avoid a repeat of the horrors of the inter-war depression and the second world war – decided that, when it came to international trade, society’s rights trumped those of money-owners.

All this is hard to imagine for anyone who has only experienced the world since the 1980s, because the system now is so different. Money flows ceaselessly between countries, nosing out investment opportunities in China, Brazil, Russia or wherever. If a currency is overvalued, investors sense the weakness and gang up on it like sharks around a sickly whale. In times of global crisis, the money retreats into the safety of gold or US government bonds. In boom times, it pumps up share prices elsewhere in its restless quest for a good return. These waves of liquid capital have such power that they can wash away all but the strongest governments. The prolonged speculative attacks on the euro, the rouble or the pound, which have been such a feature of the past few decades, would have been impossible under the Bretton Woods system, which was specifically designed to stop them happening.

And the system was remarkably successful: economic growth in most western countries was almost uninterrupted throughout the 1950s and 1960s, societies became more equal, while governments made massive improvements in public health and infrastructure. All of this did not come cheap, however. Taxes had to be high to pay for it, and rich people struggled to move their money out of the taxman’s reach – thanks to the separate compartments in the oil tanker. Fans of the Beatles will remember George Harrison singing on Taxman about the government taking 19 shillings for every one he could keep; that was an accurate reflection of the amount of his earnings that was going to the Treasury, a 95% marginal tax rate.

It wasn’t only the Beatles who hated this system. So did the Rolling Stones, who relocated to France to record Exile on Main St. And so, too, did Rowland Baring, scion of the Barings bank dynasty, third earl of Cromer and – between 1961 and 1966 – the governor of the Bank of England. “Exchange control is an infringement on the rights of the citizen,” he wrote in a note to the government in 1963. “I therefore regard [it] ethically as wrong.”


One reason Baring hated the restrictions was that they were killing the City of London. “It was like driving a powerful car at 20 miles an hour,” lamented one banker, of his spell in charge of a major British bank. “The banks were anaesthetised. It was a kind of dream life.” In those days, bankers arrived at work late, left early and frittered away much of the time in between having boozy lunches. No one particularly cared, because there wasn’t much to do anyway.

Today, looking over its glass-and-steel skyline, it is hard to imagine that the City of London once almost died as a financial centre. In the 1950s and 1960s, the City played little part in the national conversation. Yet, although few books about the swinging 60s even mention the City, something very significant was brewing there – something that would change the world far more than the Beatles or Mary Quant or David Hockney ever did, something that would shatter the high-minded strictures of the Bretton Woods system.

By the time Ian Fleming published Goldfinger in 1959, there were already some leaks in the compartments of the oil tanker. The problem was that not all foreign governments trusted the US to honour its commitment to use the dollar as an impartial international currency; and their distrust was not unreasonable, since Washington did not always act as a fair umpire. In the immediate post-second-world-war years, the US government had sequestered communist Yugoslavia’s gold reserves. The rattled eastern bloc countries then made a habit of keeping their dollars in European banks rather than in New York.

Similarly, when Britain and France attempted to regain control of the Suez canal in 1956, a disapproving Washington froze their access to dollars and doomed the venture. These were not the actions of a neutral arbiter. Britain at the time was staggering from one crisis to another. In 1957, it raised interest rates and stopped banks using sterling to finance trade in an attempt to keep the pound strong (this was the “currency crisis and the high bank rate” that Smithers told Bond about).

City banks, which could no longer use sterling in the way they were accustomed, began to use dollars instead, and they obtained those dollars from the Soviet Union, which was keeping them in London and Paris so as to avoid becoming vulnerable to American pressure. This turned out to be a profitable thing to do. In the US, there were limits on how much interest banks could charge on dollar loans – but not so in London.

This market – the bankers called the dollars “eurodollars” – gave a bit of life to the City of London in the late 1950s, but not much. The big bond issues were still taking place in New York, a fact which annoyed many bankers in London. After all, many of the companies borrowing the money were European, yet it was American banks that were earning the fat commissions.

One banker in particular was not prepared to tolerate this: Siegmund Warburg. Warburg was an outsider in the cosy world of the City. For one thing, he was German. For another, he hadn’t given up on the idea that a City banker’s job was to hustle for business. In 1962, Warburg learned from a friend at the World Bank that some $3bn was circulating outside the US – sloshing around and ready to be put to use. Warburg had been a banker in Germany in the 1920s and remembered arranging bond deals in foreign currencies. Why couldn’t his bankers do something similar again?

Up to this point, if a company wanted to borrow dollars, it would have to do so in New York. Warburg, however, was pretty confident he knew where he could find a significant chunk of that $3bn – Switzerland. Since at least the 1920s, the Swiss had been in the business of hoarding cash and assets on behalf of foreigners who wanted to avoid scrutiny. By the 1960s, perhaps 5% of all the money in Europe lay under Switzerland’s steel mattresses.

For the City’s most ambitious financiers, this was tantalising: there was all this money squirrelled away, doing nothing much, and it was exactly what they needed in their quest to start selling bonds again. As Warburg saw it, if he could somehow access the money, package it up and lend it, he would be in business. Surely, Warburg thought, he could persuade the people who were paying Swiss bankers to look after their money that they would rather earn an income from it by buying his bonds? And surely he could persuade European companies that they would rather borrow this money from him and avoid paying the steep fees demanded in New York?

It was a great idea, but there was a problem: the compartments of the oil tanker were in the way. It was impossible for Warburg to move that money from Switzerland via London to clients who wanted to borrow it. But he took two of his best men and told them to get it done anyway.


They began their efforts in October 1962, the same month that the Beatles released Love Me Do. The bankers finalised their deal on 1 July the following year, the same day that the Fab Four recorded She Loves You, the song that sparked global Beatlemania. That extraordinary nine months not only revolutionised pop music, but also geopolitics, since they included the Cuban missile crisis and John F Kennedy’s Ich bin ein Berliner speech. Under the circumstances, it is understandable that a simultaneous revolution in global finance passed little remarked.

Warburg’s new bond issue – these bonds became known as “eurobonds”, after the example set by eurodollars – was led by Ian Fraser, a Scottish war hero turned journalist turned banker. He and his colleague Peter Spira had to find ways to defang the taxes and controls designed to prevent hot money flowing across borders, and to find ways to pick and choose different aspects of different countries’ regulations for the various elements of their creation.

If the bonds had been issued in Britain, there would have been a 4% tax on them, so Fraser formally issued them at Schiphol airport in the Netherlands. If the interest were to be paid in Britain, it would have attracted another tax, so Fraser arranged for it to be paid in Luxembourg. He managed to persuade the London Stock Exchange to list the bonds, despite their not being issued or redeemed in Britain, and talked around the central banks of France, the Netherlands, Sweden, Denmark and Britain, all of which were rightly concerned about the eurobonds’ impact on currency controls. The final trick was to pretend that the borrower was Autostrade – the Italian state motorway company – when really it was IRI, a state holding company. If IRI had been the borrower, it would have had to deduct tax at source, while Autostrade did not have to.

The cumulative effect of this game of jurisdictional Twister was that Fraser created a bond paying a good rate of interest, on which no one had to pay tax of any kind, and which could be turned back into cash anywhere. These were what are known as bearer bonds: whoever physically held the bond owned it. There was no register of ownership and no obligation to record your holding anywhere.

Fraser’s eurobonds were like magic. Before eurobonds, hidden wealth in Switzerland couldn’t really do much; but now it could buy these fantastic pieces of paper, which could be carried anywhere, redeemed anywhere and all the while paid interest to their owners, tax free. Dodge taxes and make a profit, worldwide.

So, who was buying Fraser’s magical invention? Who was providing the money he was lending to IRI, via Autostrade? “The main buyers of these bonds were individuals, usually from eastern Europe but often also from Latin America, who wanted to have part of their fortune in mobile form so that if they had to leave they could leave quickly with their bonds in a small suitcase,” Fraser wrote in his autobiography. “There was still a mass migration of the surviving Jewish populations of central Europe heading for Israel and the west. To this was added the normal migration of fallen South American dictators heading east. Switzerland was where all this money was stashed away.”

Later, historians tried to downplay Fraser’s account a little, and to claim that corrupt politicians – those fallen South American dictators – made up just a fifth or so of the demand for these early bond issues. As for the remaining four-fifths of the money that bought up the bonds, this came from standard tax dodgers – “Belgian dentists”, the bankers called them – high-earning professionals who steered a chunk of their earnings to Luxembourg or Geneva, and who welcomed this lovely new investment.

The eurobonds set wealth free and were the first step towards creating the virtual country of the rich that I call Moneyland. Moneyland includes offshore finance, but is much broader than that, since it protects every aspect of a rich person’s life from scrutiny, not just their money. The same money-making dynamic that enticed Fraser to defang capital controls on behalf of his clients, entices his modern-day counterparts to find ways for the world’s richest people to avoid visa controls, journalistic scrutiny, legal liability and much more. Moneyland is a place where, if you are rich enough, whoever you are, wherever your money comes from, the laws do not apply to you.

This is the dirty secret at the heart of the City’s rebirth, the beginning of the process that eventually led to today’s stratospheric inequality. It was all made possible by modern communications – the telegram, the phone, the telex, the fax, the email – and it allowed the world’s richest people to avoid the responsibilities of citizenship.


That first deal was for $15m. But once the way to sidestep the obstacles that stopped cash flowing offshore had been identified, there was nothing to stop more money following behind. In the second half of 1963, $35m of eurobonds were sold. In 1964, the market was $510m. In 1967, the total passed $1bn for the first time, and it is now one of the biggest markets in the world.

The result was that, over time, the system created at Bretton Woods fell apart. More and more dollars were escaping offshore, where they avoided the regulations and taxes imposed upon them by the US government. But they were still dollars, and thus 35 of them were still worth an ounce of gold.

The trouble that followed stemmed from the fact that dollars don’t just sit around doing nothing. They multiply. If you put a dollar in a bank, the bank uses it as security for the money it lends to someone else, meaning there are more dollars – your dollar, and the dollars someone else has borrowed. And if that person puts the money in another bank, and that bank lends it, there are now even more dollars, and so on.

And since every one of those dollars was nominally worth a fixed amount of gold, the US would have needed to keep buying ever more gold to satisfy the potential demand. If the US did that, however, it would have to have bought that gold with dollars, meaning yet more dollars would exist, which would multiply in turn, meaning more gold purchases, and more dollars, until the system would eventually collapse under the weight of the fact that it didn’t make sense; it couldn’t cope with offshore.
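
To see why this was unsustainable, it helps to run the deposit-and-relend loop the passage describes. A toy sketch (with an assumed 10% reserve ratio, a figure chosen purely for illustration and not taken from the article) shows how a single dollar becomes roughly ten dollars of claims, each nominally redeemable in gold at $35 an ounce:

```python
# Toy illustration of the deposit-and-relend cycle described above.
# Assumption (not from the article): banks hold a 10% reserve and lend out the rest,
# and every dollar lent out is eventually re-deposited somewhere in the system.
initial_deposit = 1.0
reserve_ratio = 0.10

total_claims = 0.0
deposit = initial_deposit
for _ in range(50):                  # iterate the re-deposit loop
    total_claims += deposit
    deposit *= (1 - reserve_ratio)   # the lent-out portion comes back as a new deposit

print(f"dollar claims created from $1: about ${total_claims:.2f}")
# Approaches 1 / reserve_ratio = $10 -- ten dollars of claims, each nominally
# redeemable in gold at the fixed $35-an-ounce rate.
```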

The US government tried to defend the dollar/gold price, but every restriction it put on dollar movements just made it more profitable to keep your dollars in London, leading more money to leak offshore, and thus more pressure to build on the dollar/gold price. And where the dollars went, the bankers followed. The City had looser regulations and more accommodating politicians than Wall Street, and the banks loved it. In 1964, 11 US banks had branches in the City of London. In 1975, 58 did.

The US Office of the Comptroller of the Currency, which administered the federal banking system, opened a permanent office in London to inspect what the British branches of American banks were up to. But the Americans had no power in the UK and got no help from the locals. “It doesn’t matter to me,” said Jim Keogh, the Bank of England official responsible for monitoring these banks, “whether Citibank is evading American regulations in London”.

By that time, however, Washington had bowed to the inevitable and stopped promising to redeem dollars for gold at $35 an ounce. It was the first step in a steady dismantling of all the safeguards created at Bretton Woods. The philosophical question over who really owned money – the person who earned it, or the country that created it – had been answered.

If you had money, thanks to the accommodating bankers of London and Switzerland, you could now do what you wanted with it and governments could not stop you. As long as one country tolerated offshore, as Britain did, then the efforts of all the others came to nothing. If regulations stop at a country’s borders, but the money can flow wherever it wishes, its owners can outwit any regulators they choose.

The developments that began with Warburg did not stop with simple eurobonds. The basic pattern was endlessly replicable. Identify a line of business that might make you and your clients money. Look around the world for a jurisdiction with the right rules for that business – Liechtenstein, the Cook Islands, Jersey – and use it as a nominal base.

If you couldn’t find a jurisdiction with the right kind of rules, then you threatened or flattered one until it changed its rules to accommodate you. Warburg himself started this off, by explaining to the Bank of England that if Britain did not make its rules competitive and its taxes lower, then he would take his bank elsewhere, perhaps to Luxembourg.

Hey presto, the rules were changed, and the tax – in this case, stamp duty on bearer bonds – was abolished. The world’s response to these developments has been entirely predictable as well. Time after time, countries have chased after the business they have lost offshore (as the US did by abolishing the regulations the banks were dodging when they moved to London), thus making the onshore world ever more similar to the offshore piratical world that Warburg’s bankers created.

Taxes have fallen, regulations have relaxed, politicians have become friendlier, all in an effort to entice the restless money to settle in one jurisdiction rather than another. The reason for this is simple. Once one jurisdiction lets you do what you want, the business flows there and other jurisdictions have to rush to change, too. It is the Moneyland ratchet, always loosening regulations for the benefit of those with money to move around, and never tightening them.

Different nations are affected by Moneyland in different ways. Wealthy citizens of the rich countries of Europe and North America own the largest total amount of cash offshore, but it is a relatively small proportion of their national wealth, thanks to the large size of their economies. The economist Gabriel Zucman estimates it to be just 4% for the US. For Russia, however, 52% of household wealth is offshore, outside the reach of the government. In the Gulf countries, it is an astonishing 57%.

“It’s very easy for oligarchs of developing countries, non-democratic countries, to hide their wealth. That provides them with huge incentives to loot their countries, and there’s no oversight,” says Zucman.

Come January, we will get another update of how much more of the world’s wealth these oligarchs have taken for themselves: the only surprise will be the precise volume of their new acquisition, and how little they have left for the rest of us. But we shouldn’t wait until then to grasp the urgency of the situation.

We need to act now to shine a light on their wealth, on the dark matter whose gravitational power is bending the fabric of our societies. We may have been ignoring Moneyland, but its nomad citizens have not been ignoring us. If we wish to take back control of our economies, and our democracies, we need to act now. Every day that we wait, more money is stacked against us.

Adapted from Moneyland: Why Thieves & Crooks Now Rule The World & How to Take It Back by Oliver Bullough, published by Profile Books
