Krzysztof Strug
1170 stories

The dream of high speed trains is already coming off the rails

1 Share

On April 3, 2007, a sleek train raced across the countryside of northeast France, pursued by a small jet aircraft. On an open stretch of track between Prény and Bezannes, the train galloped ahead – eventually reaching 500kph. Officially, no train operated by TGV, the French state-owned high speed service, had ever surpassed 515kph, the speed record set by the same firm 17 years earlier. This attempt, christened Operation TGV 150, was aiming to reach 150 metres per second, or 540kph. As the chase aircraft beamed data and video to pensive engineers, the train pushed beyond 540kph, before setting a new world speed record: 574.8kph.

Since then the world has seen a boom in high-speed rail. In 2011, the EU set out to triple the length of the European network by 2030, and since 2000 has invested €23.7bn (£20.42bn) in the development of high-speed rail infrastructure. In the last ten years alone, the number of passenger-kilometres travelled annually on high-speed trains (those operating at speeds over 250kph) has increased 350 per cent, to 845 billion. The lion’s share of this growth has taken place in China. Since the country inaugurated its first high speed service, a 120km route between Beijing and Tianjin, in 2008, it has built over 20,000km of track, served by 1,200 trains.

With air travel under increasing scrutiny as a dangerously indulgent mode of transport, rail is often touted as the greenest form of mass transit available. Across Europe and Asia, ultra-fast trains are racing to capture overland routes back from the air industry. Can high speed rail make long distance travel green again?

“The big issue is power,” says Alan Vardy, emeritus professor of engineering at the University of Dundee. “The power required increases with the cube of the train speed.” That makes squeezing each additional boost in speed exponentially more difficult – and expensive. “You’ve got to have the electricity to provide that power, and the motors of the vehicle have to cope with that power,” he says.
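Vardy's cube law is easy to see with a quick back-of-envelope calculation. The snippet below is purely illustrative – the speeds are arbitrary round numbers, not figures from any operator:

```python
# Aerodynamic power scales with the cube of speed: P ∝ v^3.
def power_ratio(v_new_kph: float, v_old_kph: float) -> float:
    """Factor by which drag power grows when speed rises from v_old to v_new."""
    return (v_new_kph / v_old_kph) ** 3

# Raising cruise speed from 300kph to 360kph - a 20 per cent gain -
# demands roughly 73 per cent more power:
print(f"{power_ratio(360, 300):.2f}x the power")  # 1.73x
# Doubling speed from 300kph to 600kph demands eight times the power:
print(f"{power_ratio(600, 300):.0f}x the power")  # 8x
```

This is why each increment of speed costs disproportionately more than the last.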

Typically, that power is supplied at 15,000 to 25,000 volts by catenaries, overhead wires that the train connects to via a raised arm called a pantograph. These wires are not rigid, but draped between support pillars. “As the train goes under, it distorts the shape of the wire, and the whole thing shifts,” says Vardy. The faster trains go, the more the wire sways. “There is a fair amount of technology just keeping the pantograph in reasonable contact with the wire.”

And as trains get faster, increasing that speed becomes even harder. Air resistance becomes a major factor at higher speeds. “Double the speed leads to four times as much loss to drag,” says Hugh Hunt, a researcher in engineering at Cambridge. “So high speed trains have really sharp pointy noses.” The famously long noses of Japan’s Shinkansen ‘bullet’ trains are actually there for a different purpose, however: preventing sonic booms. As a train enters a tunnel, it acts like a piston, creating a shockwave that races ahead of the train. The aerodynamics of long, narrow tunnels can result in a cacophonous bang at the far end – to the irritation of those living within earshot.
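Hunt's "double the speed, four times the drag" follows from the standard drag equation, where force grows with the square of velocity. A minimal sketch – the drag coefficient and frontal area below are rough, hypothetical figures, not measurements of any real train:

```python
# Drag force grows with the square of speed: F = 0.5 * rho * Cd * A * v^2.
def drag_force(v_ms: float, cd: float = 0.25, area_m2: float = 10.0,
               rho: float = 1.225) -> float:
    """Aerodynamic drag force in newtons at speed v_ms (metres per second)."""
    return 0.5 * rho * cd * area_m2 * v_ms ** 2

f_slow = drag_force(50.0)   # ~180kph
f_fast = drag_force(100.0)  # ~360kph: double the speed...
print(f_fast / f_slow)      # ...four times the drag
```

(The power lost to drag – force times velocity – grows with the cube of speed, which is the same relationship Vardy describes above.)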

The problem is particularly acute in Japan, where tunnels were built before the effect was understood. Engineers designed trains with elongated nose cones to soften the sudden increase in air pressure. High-speed trains in Europe go just as fast as Japanese bullet trains – if not faster – but the phenomenon is rarer due to larger bore tunnels. Where it does occur, engineers usually tackle the problem by adding a long hood to the tunnel. “Just like the long nose makes it possible to operate in tunnels without any hood, the hood makes it possible to operate without a nose cone,” explains Vardy.

Sudden pressure changes in tunnels are also uncomfortable for passengers, and for this reason all high-speed trains are pressurised to some degree. But this creates a new problem: the pressure difference has to be shouldered by the chassis of the train, and over time, leads to fatigue issues.

At high speeds, noise also becomes a big issue. Narrow tunnels are a problem in the UK, where they were built for smaller nineteenth-century vehicles and limit the amount of noise insulation that can be added to a modern train. Noise increases with speed, and high-speed trains are usually fitted with skirts to muffle the shriek of steel wheels on steel track. This sound is especially bad if the rail isn’t regularly ground down. “High speed tracks have got to be very, very smooth,” says Vardy. “You can hear it if you go on the Underground in London, or any city, sometimes it becomes really noisy because that section of track has become corrugated.”

At 400kph, high speed transport is a moving experience for more than just the passengers. The speed of a train is dictated by the rail it runs on, which is set down in stone – literally – when the track is laid. The sharpness of curves – as well as the tilt they are given, to allow the train to lean into the turn – caps the maximum speed at which a train can pass safely. Tilting trains can squeeze a little extra speed from existing track, but beyond that, existing track must be redesigned or replaced with purpose-built high speed line. Costing around $30m (£23m) per kilometre, this soon adds up.
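The speed cap on a curve can be sketched with basic mechanics: at the "balancing" speed, the centripetal demand v²/r is met by the sideways component of gravity from the track's cant (tilt), and tilting trains are allowed a little extra "cant deficiency" on top. The radius, cant and deficiency figures below are illustrative round numbers, not parameters of any real line:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_speed_kph(radius_m: float, cant_mm: float, deficiency_mm: float,
                  gauge_mm: float = 1435.0) -> float:
    """Approximate speed cap for a canted curve, in kph.

    (cant + deficiency) / gauge approximates the tangent of the tilt angle;
    the balancing condition v^2 / r = G * tan(angle) then gives v.
    """
    effective_cant = (cant_mm + deficiency_mm) / gauge_mm
    v_ms = math.sqrt(G * radius_m * effective_cant)
    return v_ms * 3.6  # convert m/s to kph

# A 4km-radius curve with 160mm of cant and 100mm of allowed deficiency:
print(round(max_speed_kph(4000, 160, 100)), "kph")
```

Run the numbers and the cap comes out at roughly 300kph – which is why genuinely high-speed running demands sweeping, multi-kilometre curve radii, and hence expensive new alignments.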

Even the ground below the rail can be affected by the rumble of trains. “There are problems of liquefaction,” says Hunt. “In Belgium, quite a lot of the reclaimed land is quite soft soil, it has to be stiffened for high speed trains.”

Assuming we’re willing to supply a custom-built track – and pay for the constant maintenance required – how fast could a train go? “We consider theoretical maximum speed to be about 600kph,” says Laurent Jarsalé, vice president of the High Speed Product Line at French rail giant Alstom. The hard cap is set by the copper catenary that supplies the train with power: as speed increases, the cable must be held in ever-higher tension. But there is a limit to how tightly you can lace this copper wire before it snaps.
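The catenary limit can be made concrete: the pantograph's disturbance travels along the contact wire at the transverse wave speed c = √(tension / mass per metre), and trains must stay well below c to keep contact. The tension and wire-mass values below are typical-order assumptions, not Alstom figures:

```python
import math

def wave_speed_kph(tension_n: float, mass_per_m_kg: float) -> float:
    """Transverse wave speed along a tensioned wire, converted to kph."""
    return math.sqrt(tension_n / mass_per_m_kg) * 3.6

# A copper contact wire at ~26 kN tension, weighing ~1.08 kg per metre:
c = wave_speed_kph(tension_n=26_000, mass_per_m_kg=1.08)
print(round(c), "kph wave speed")
# Trains typically keep to a fraction of the wave speed for stable contact:
print(round(0.7 * c), "kph practical ceiling")
```

Raising the ceiling means raising the tension – and, as Jarsalé notes, the copper can only be laced so tightly before it snaps.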

Yet it’s unlikely you’ll be able to ride a train running at half the speed of sound any time soon – or perhaps ever. The fastest non-levitating train currently in service is China Railway’s Fùxīng Hào, which hits 400kph between Shanghai and Beijing. This will be rivalled when Japan’s new fleet of ALFA-X bullet trains arrives in 2030, boasting comparable top speeds.

Most of the time though, both will be travelling much slower than that, around 350kph. “There’s a big gap between the maximum record speed, and speed which is considered optimum for everyday operation,” says Jarsalé. “The market is not looking for these kinds of very high speeds. We need to find the technical-economical optimum, that is most often defined to be around 320kph.”

“The key thing is journey time, not speed,” says Mark Smith, the ex-railway manager better known as The Man In Seat 61, after his travel website of the same name. The standard rule within the industry was that rail competed best against air travel for journeys under three hours. But increased air congestion, the use of hard-to-reach satellite airports and tightened security measures have all played in rail’s favour. “The head of the French railways, Guillaume Pepy, said some time ago that this was more like four or five hours now,” says Smith. “Paris to Perpignan on the TGV is five hours and 15 minutes, and SNCF have a 50 per cent share of travellers.”

In fact, high speed lines shorten journey times for those who never even ride on them. “The issue about HS2 is it has been badly publicised by the media,” says Vardy of the UK’s beleaguered £85bn rail project. “Partly to please politicians, everyone shouted about the high speed, but the fundamental of HS2 is increasing capacity.” High speed lines act like a highway bypass: by moving the fastest trains to one line, space is freed up across the existing network to add more slow, regional trains.

Can new technologies push the envelope? Maglev trains – which hover over specialised track on magnetic fields – are freed from catenaries and wheels, and in theory can glide along as fast as air resistance allows. In reality, the record currently stands at 603kph, set by Japan Railway in 2015 – not a compelling advance on what has been achieved by a conventional train travelling on track that costs one third as much to build.

Conventional high-speed trains, moreover, can pick up passengers at centrally located railway stations before switching to high speed lines outside the city. Maglev trains, by contrast, can only run on dedicated track, which confines them to out-of-town terminals, adding to overall journey time.

Already there are signs the high-speed boom may be over. China Railway Corp has debts of ¥4tn (£446bn), some lines are struggling to attract enough passengers and planned extensions have been shelved. The EU’s high-speed rail network was criticised in an audit for being incoherent, overpriced and under-delivered. Upgrading the line between Stuttgart and Munich shaved 36 minutes off the average journey time, but each one of those minutes came at a cost of €369m (£316m).
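The audit's Stuttgart–Munich figures imply a striking total. A quick sanity check on the arithmetic, using only the numbers quoted above:

```python
# Figures from the EU audit quoted above: 36 minutes saved, at a cost
# of EUR 369m per minute of journey time saved.
minutes_saved = 36
cost_per_minute_eur_m = 369

total_eur_bn = minutes_saved * cost_per_minute_eur_m / 1000
print(f"~EUR {total_eur_bn:.1f}bn for the upgrade")  # ~EUR 13.3bn
```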

“We don’t expect significant network increase in Europe,” says Jarsalé. Instead, his company is focussed on building rolling stock that is more flexible, more comfortable, and more environmentally friendly. Across Europe, state-built networks are being opened to competition. Operators such as France’s low-cost Ouigo will compete with flag carriers on price, comfort and timetabling, and no doubt trade on their green credentials to climate-conscious travellers. Strangely enough, it turns out that with high-speed trains, speed isn’t everything.

More from WIRED on Cities

🚫 Inside the software meltdown causing Crossrail’s delay crisis

🚄 The return of the night train

📷 With Ring, Amazon is building a smart city that should worry us all

🏢 Everyone needs to stop building giant glass skyscrapers right now

⛏️Could our future cities be built underground?

Digital Society is a digital magazine exploring how technology is changing society. It's produced as a publishing partnership with Vontobel, but all content is editorially independent. Visit Vontobel Impact for more stories on how technology is shaping the future of society.

Read the whole story
1 day ago
Warsaw, Poland
Share this story

Hackers can use lasers to send voice commands to your devices


In the spring of last year, cybersecurity researcher Takeshi Sugawara walked into the lab of Kevin Fu, a professor he was visiting at the University of Michigan. He wanted to show off a strange trick he'd discovered. Sugawara pointed a high-powered laser at the microphone of his iPad – all inside of a black metal box, to avoid burning or blinding anyone – and had Fu put on a pair of earbuds to listen to the sound the iPad's mic picked up. As Sugawara varied the laser's intensity over time in the shape of a sine wave, fluctuating at about 1,000 times a second, Fu picked up a distinct high-pitched tone. The iPad's microphone had inexplicably converted the laser's light into an electrical signal, just as it would with sound.

Six months later Sugawara – visiting from the Tokyo-based University of Electro-Communications – along with Fu and a group of University of Michigan researchers, has honed that curious photoacoustic quirk into something far more disturbing. They can now use lasers to silently "speak" to any computer that receives voice commands – including smartphones, Amazon Echo speakers, Google Homes, and Facebook's Portal video chat devices. That spy trick lets them send "light commands" from hundreds of feet away; they can open garages, make online purchases, and cause all manner of mischief or malevolence. The attack can easily pass through a window, when the device's owner isn't home to notice a telltale flashing speck of light or the target device's responses.

"It’s possible to make microphones respond to light as if it were sound," says Sugawara. "This means that anything that acts on sound commands will act on light commands."

In months of experimentation that followed Sugawara's initial findings, the researchers found that when they pointed a laser at a microphone and changed the intensity at a precise frequency, the light would somehow perturb the microphone's membrane at that same frequency. The positioning didn't need to be especially precise; in some cases they simply flooded the device with light. Otherwise, they used a telephoto lens and a geared tripod to hit their mark.
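The technique amounts to amplitude-modulating the laser's intensity with an audio waveform, so the microphone "hears" the light. A minimal sketch of generating such a signal – the sample rate and modulation depth are arbitrary illustrative choices, not values from the researchers' setup:

```python
import math

def intensity_samples(freq_hz: float, duration_s: float,
                      sample_rate: int = 48_000, depth: float = 0.5):
    """Laser intensity over time: a DC offset plus a sine at freq_hz.

    Values stay in [0, 1], where 1.0 would be full laser power; the DC
    offset is needed because light intensity cannot go negative.
    """
    n = int(duration_s * sample_rate)
    return [0.5 + depth * 0.5 * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

# One millisecond of the 1,000Hz tone from Sugawara's original demo:
tone = intensity_samples(1000, 0.001)
print(len(tone), min(tone) >= 0.0 and max(tone) <= 1.0)
```

In the full attack, the same scheme carries a recorded voice command instead of a pure tone.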

As a result, the microphone converted the incoming light into an electrical signal, just as it would sound. The researchers then tried changing the intensity of the laser over time to match the frequency of a human voice, aiming the beam at the microphones of a collection of consumer devices that accept voice commands.

When they used a 60 milliwatt laser to "speak" commands to 16 different smart speakers, smartphones, and other voice activated devices, they found that almost all of the smart speakers registered the commands from 164 feet away, the maximum distance they tested. Smartphones proved trickier: An iPhone was only susceptible from a range of around 33 feet, and two Android phones could only be controlled from within around 16 feet.

In a second experiment, the researchers tested the power and range limits of their technique, downgrading to a 5 milliwatt laser – equivalent to a cheap laser pointer – and moving 361 feet away from their targets in a hallway. While their tests mostly failed at that range, they nonetheless found that they could still control a Google Home and a first-generation Echo Plus. In another test, they successfully transmitted their laser commands through a window onto a Google Home's microphone inside a nearby building nearly 250 feet away.

The "voice" commands carried on that laser beam, the researchers point out, would be entirely silent. An observer might notice a flashing blue spot on their microphone – if they were even home to see it. "Your assumptions about blocking sound aren’t true about blocking light," says Daniel Genkin, a professor at the University of Michigan who co-led the team. "This security problem manifests as a laser through the window to your voice activated system."

For even more stealth, the researchers suggest that a voice-assistant hacker could use an infrared laser, which would be invisible to the naked eye. (They tested an infrared laser and found that it worked to control Echo and Google Home speakers at close range, but didn't try longer ranges for fear of burning or blinding someone.) And while voice assistants typically give an audible response, a hacker could send an initial command that turns the volume down to zero. While they haven't tested this specifically, the researchers also suggest that an attacker could use light commands to trigger Amazon's "whisper mode," which allows a user to speak commands and receive answers in a hushed tone.

When it comes to the actual physics of a microphone interpreting light as sound, the researchers had a surprising answer: They don't know. In fact, in the interest of scientific rigour, they refused to even speculate about what photoacoustic mechanics caused their light-as-speech effect.

But at least two different physical mechanisms might be producing the vibrations that make the light commands possible, says Paul Horowitz, a professor emeritus of physics and electrical engineering at Harvard and the co-author of The Art of Electronics. First, a pulse of laser light would heat up the microphone's diaphragm, which would expand the air around it and create a bump in pressure just as sound would. Alternately, Horowitz posits that if the components of the target devices aren't fully opaque, the laser's light may get past the microphone and directly hit the electronic chip that interprets its vibrations into an electrical signal. Horowitz says this could result in the same photovoltaic effect that occurs in diodes in solar cells and at the ends of fiberoptic cables, turning light into electricity or electrical signals. He says this could easily cause the laser to be processed as a voice command.

"There's no dearth of theories, one or more of which is happening here," Horowitz says. The potential havoc encompasses everything from triggering smart home controls like door locks and thermostats to remotely unlocking cars. "It’s the same threat model as any voice system, but with an unusual distance effect," says Fu. Or as University of Michigan researcher Sara Rampazzi puts it: "You can hijack voice commands. Now the question is just how powerful your voice is, and what you’ve linked it to."

A Google spokesperson told WIRED in a statement that it was "closely reviewing this research paper. Protecting our users is paramount, and we're always looking at ways to improve the security of our devices." Apple declined to comment, and Facebook didn't immediately respond. An Amazon spokesperson wrote in a statement that "we are reviewing this research and continue to engage with the authors to understand more about their work.”

Some devices do offer authentication protections that might foil a laser-wielding hacker. iPhones and iPads require a user to prove their identity with TouchID or FaceID before, say, making a purchase. And the researchers acknowledge that for most smartphone voice assistants, the "wake words" that begin a voice command must be spoken in the voice of the phone's owner, which makes their laser attack far more difficult to pull off. But they note that an attacker who obtains or reconstructs just those words – like "hey Siri" or "OK Google" – could then "speak" those words in the target user's own voice as the preamble to their voice commands.

Smart speakers like the Echo and Google Home, however, have none of that voice authentication. And given the physical nature of the vulnerability, no software update may be able to fix it. But the researchers do suggest some less-than-ideal patches, such as requiring a spoken PIN before voice assistants carry out the most sensitive commands. They also suggest future tweaks to the devices' designs to protect them from the attack, such as building light shielding around the microphone, or listening for voice commands from two different microphones on opposite sides of the device, which might be tough to hit simultaneously with a laser.

Until those fixes or design changes arrive, Michigan's Genkin suggests a simple if counterintuitive rule of thumb for anyone concerned by the attack's implications: "Don’t put a voice-activated device in line of sight of your adversary," he says. If they can so much as see your Echo or Google Home through a window, they can talk to it too.

This article was originally published on WIRED US. Videos via University of Michigan

More great stories from WIRED

⏲️ What would happen if we abolished time zones altogether?

🍎 Prepare Yourself for the Biggest Apple Launch of All Time

🏙️ Inside the sinking megacity that can't be saved

💰 Meet the economist with a brilliant plan to fix capitalism

🎮 Long Read: Inside Google Stadia

📧 Get the best tech deals and gadget news in your inbox

Read the whole story
8 days ago
Warsaw, Poland

Libra is falling apart. Let's put it to good use


Yanis Varoufakis. Photo: DTRocks, CC BY-SA 4.0

Zuckerberg's dream of a private global payments monopoly is unlikely to come true – and a good thing too. But great ideas that lead to disaster in the hands of risk-blind private operators should be put to work for the public good.

ATHENS. The Libra Association is falling apart. Visa, Mastercard, PayPal, Stripe, Mercado Pago and eBay have already left the alliance of companies that, under Facebook's leadership, set out to create a new asset-backed cryptocurrency and thereby revolutionise international financial transactions. Other firms will likely follow, as alarmed governments put pressure on them to halt Libra's rollout.

And a good thing too. Humanity would suffer if Facebook succeeded in its plan to privatise the international payments system through Libra. But the authorities now trying to strangle Libra at birth should think ahead and do something innovative, useful and visionary with the currency: hand Libra – or the concept it is built on – to the International Monetary Fund, to be used to reduce global trade imbalances and rebalance financial flows. A Libra-like cryptocurrency could, in fact, help the IMF fulfil its original purpose.

When Facebook chief Mark Zuckerberg announced his plan for Libra to the world, the idea seemed interesting and innocent enough. Anyone with a mobile phone could buy Libra tokens with their national currency, paying by ordinary debit card or electronic transfer. Those tokens could then be used to make payments to other Libra users, to buy goods and services, or to settle debts. Handling every transaction with blockchain technology was meant to guarantee full transparency. And unlike Bitcoin, Libra tokens would be fully backed by solid assets.

To back Libra with tangible assets, the association supporting it promised to invest its revenues, plus the seed capital contributed by member companies (at least $10 million each), in highly liquid, highly rated financial assets (such as US government bonds). With Facebook playing first fiddle, it was easy to imagine the moment when half the adults on the planet – Facebook's 2.4 billion active users – would suddenly have access to a new currency allowing them to transact with one another while bypassing the rest of the financial system.

The authorities' initial reaction was negative, but also clumsy. By emphasising the risk of Libra being used for criminal purposes, they merely confirmed libertarians' suspicion that, faced with losing control of money, regulators, politicians and central bankers would rather smother liberating innovations at birth. That is a pity, because good old-fashioned cash has so far contributed far more to illegal activity – and, more importantly, because Libra would pose a systemic threat to economies even if it were never used to finance terrorism or crime.

Start with Libra's effects on individuals. Recall the effort most countries have put into keeping fluctuations in the purchasing power of their national currency to a minimum. Thanks to those efforts, a hundred euros or dollars buys roughly the same amount of goods today as it will next month. The same would not be true of a hundred euros or dollars exchanged for Libra.

Because Libra would be backed by assets denominated in several currencies, the purchasing power of a Libra token in any given country would fluctuate far more than the national currency. In effect, Libra would resemble the IMF's internal unit of account, the SDR (Special Drawing Rights), which reflects a weighted average of the world's leading currencies.
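The mechanics of a basket-backed token are simple to sketch: its value is a weighted average of its constituent currencies, so its price in any one national currency moves with exchange rates. The weights and rates below are invented for illustration – they are not the real SDR basket or any proposed Libra reserve:

```python
def token_value_in_usd(weights: dict, usd_per_unit: dict) -> float:
    """Value of one basket-backed token in USD, given basket weights
    (summing to 1) and the USD price of one unit of each currency."""
    return sum(w * usd_per_unit[ccy] for ccy, w in weights.items())

weights = {"USD": 0.45, "EUR": 0.35, "JPY": 0.20}
rates_before = {"USD": 1.00, "EUR": 1.10, "JPY": 0.009}
rates_after = {"USD": 1.00, "EUR": 1.25, "JPY": 0.008}  # dollar weakens vs euro

v0 = token_value_in_usd(weights, rates_before)
v1 = token_value_in_usd(weights, rates_after)
print(f"token moved {100 * (v1 / v0 - 1):+.1f}% in dollar terms")
```

An American holding such tokens would see their domestic purchasing power swing even though they never touched a foreign currency themselves.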

To see what this means, recall that in 2015 the US dollar's exchange rate against the SDR moved by around 20%. Had American consumers exchanged a hundred dollars for Libra tokens then, they would have condemned themselves to the torment of watching the domestic purchasing power of those tokens rise and fall like a yo-yo. For residents of developing countries, whose currencies tend to lose value, easy conversion into Libra would accelerate that decline, fuel domestic inflation, and increase both the likelihood and the scale of capital flight.

Since the 2008 financial crash, the authorities have struggled to keep inflation, employment and investment under control using the fiscal and monetary levers that had seemed reasonably effective before the crisis. Libra would further limit states' already meagre capacity to smooth the business cycle. Fiscal policy would suffer, since every payment shifted into Facebook's global currency system would shrink the tax base. Domestic monetary policy would face an even greater shock.

For better or worse, however, central banks manage the quantity and flow of money by withdrawing paper assets from, or injecting them into, private banks' reserves. When they want to stimulate the economy, central banks buy commercial loans, mortgages, deposits and other assets from private banks, leaving those banks with a bigger pool of money to lend to customers. The reverse happens when the government wants to cool the economy.

Libra upsets that balance: the more successful it becomes, the more money will sit in private bank accounts denominated in Libra – and the harder it will be for central banks to stabilise the economy. In other words, the more people switch to Libra, the greater the volatility and the more frequent the crises afflicting both individuals and entire states.

Only the Libra Association would come out ahead, pocketing the enormous interest income from the assets it would accumulate worldwide by drawing a vast share of global savings onto its own payments platform. Before long, the association would be tempted to offer loans to individuals and firms, turning itself from a payments system into a gigantic global bank that no government could bail out, regulate or wind down.

That is why it is good that Libra is falling apart, and with it Zuckerberg's dream of a private global payments monopoly. But we should not throw this technological baby out with its monopolistic bathwater. The trick is to entrust the idea's implementation to the International Monetary Fund, acting on behalf of its member states. The IMF could thereby reform the international monetary system along the lines of John Maynard Keynes's proposal for an International Clearing Union, rejected at the Bretton Woods conference in 1944.

To realise this new Bretton Woods, the IMF would issue a blockchain-based, Libra-like token – call it the "kosmos" – with a free-floating exchange rate against national currencies. People would continue to use their national currencies, but all international trade transactions and cross-border capital transfers would be settled in kosmos and pass through each central bank's account at the IMF. Trade deficits and surpluses would incur a trade-imbalance levy, and private financial institutions would pay a fee proportional to any surge in capital outflows. These charges would be settled in kosmos into an account run by the IMF, which would thus act as a global equivalent of a sovereign wealth fund.
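The clearing-union mechanics can be sketched in a few lines: every member's net trade position is tracked in the common unit, and both surplus and deficit countries pay a small levy on their imbalance into the common fund. The levy rate and balances below are invented for illustration – nothing here reflects an actual IMF proposal:

```python
def imbalance_levy(balances: dict, rate: float = 0.01) -> dict:
    """Levy charged on the absolute trade imbalance of each member.

    Symmetric by design: a surplus is penalised just like a deficit,
    which is the distinctive feature of Keynes's clearing-union idea.
    """
    return {member: abs(balance) * rate for member, balance in balances.items()}

# Net positions in the common unit: one surplus country, two in deficit.
balances = {"A": +500.0, "B": -300.0, "C": -200.0}
levies = imbalance_levy(balances)
print(levies)                # both sides of the imbalance pay
print(sum(levies.values()))  # pooled into the IMF-run fund
```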

Suddenly, all international transactions would run smoothly and with full transparency, while small but meaningful levies would keep imbalances in trade and capital flows in check, and would finance green investment and measures to correct the unequal distribution of wealth between the global North and South.

Great ideas that lead to disaster in the hands of risk-blind private operators should be put to work for the public good. That way we can benefit from their inventors' ingenuity without falling victim to their schemes.

Copyright: Project Syndicate, 2019. Translated from the English by Katarzyna Byłów.

Read the whole story
11 days ago
Warsaw, Poland

I Got Access to My Secret Consumer Score. Now You Can Get Yours, Too.


Kustomer, for example, gave me the runaround. When I first contacted the company from my personal email address, a representative wrote back that I would have the report by the end of the week. After a couple of weeks passed, I emailed again and was told the company was “instituting a new process” and had “hit a few snags.” I never got the report. When I contacted a company spokeswoman, I was told that I would need to get my data instead from the companies that used Kustomer to analyze me.

Most of the companies only recently started honoring these requests in response to the California Consumer Privacy Act. Set to go into effect in 2020, the law will grant Californians the right to see what data a company holds on them. It follows a 2018 European privacy law, called General Data Protection Regulation, that lets Europeans gain access to and delete their online data. Some companies have decided to honor the laws’ transparency requirements even for those of us who are not lucky enough to live in Europe or the Golden State.

“We expect these are the first of many laws,” said Jason Tan, the chief executive of Sift. The company, founded in 2011, started making files available to “all end users” this June, even where not legally required to do so — such as in New York, where I live. “We’re trying to be more privacy conscious. We want to be good citizens and stewards of the internet. That includes transparency.”

I was inspired to chase down my data files by a June report from the Consumer Education Foundation, which wants the Federal Trade Commission to investigate secret surveillance scores “generated by a shadowy group of privacy-busting firms that operate in the dark recesses of the American marketplace.” The report named 11 firms that rate shoppers, potential renters and prospective employees. I pursued data from the firms most likely to have information on me.

One of the co-authors of the report was Laura Antonini, the policy director at the Consumer Education Foundation. At my suggestion, she sought out her own data. She got a voluminous report from Sift, and like me, had several companies come up empty-handed despite their claims to have information on hundreds of millions of people. Retail Equation, the company that helps decide whether customers should be allowed to make a return, had nothing on me and one entry for Ms. Antonini: a return of three items worth $78 to Victoria’s Secret in 2009.

“I don’t really care that these data analytics companies know I made a return to Victoria’s Secret in 2009, or that I had chicken kebabs delivered to my apartment, but how is this information being used against me when you generate scores for your clients?” Ms. Antonini said. “That is what consumers deserve to know. The lack of the information I received back is the most alarming part of this.”

In other words, most of these companies are just showing you the data they used to make decisions about you, not how they analyzed that data or what their decision was.

Read the whole story
11 days ago
Warsaw, Poland

Google and IBM Clash Over Quantum Supremacy Claim


This morning, Google researchers officially made computing history. Or not, depending on whom you ask.

The tech giant announced it had reached a long-anticipated milestone known as “quantum supremacy” — a watershed moment in which a quantum computer executes a calculation that no ordinary computer can match. In a new paper in Nature, Google described just such a feat performed on their state-of-the-art quantum machine, code-named “Sycamore.” While quantum computers are not yet at a point where they can do useful things, this result demonstrates that they have an inherent advantage over ordinary computers for some tasks.

Yet in an eleventh-hour objection, Google’s chief quantum-computing rival asserted that the quantum supremacy threshold has not yet been crossed. In a paper posted online Monday, IBM provided evidence that the world’s most powerful supercomputer can nearly keep pace with Google’s new quantum machine. As a result, IBM argued that Google’s claim should be received “with a large dose of skepticism.”

So why all the confusion? Aren’t major milestones supposed to be big, unambiguous achievements? The episode reminds us that not all scientific revolutions arrive as a thunderclap — and that quantum supremacy in particular involves more nuance than fits in a headline.

Quantum computers have been under development for decades. While ordinary, or classical, computers perform calculations using bits — strings of 1s and 0s — quantum computers encode information using quantum bits, or qubits, that behave according to the strange rules of quantum mechanics. Quantum computers aim to harness those features to rapidly perform calculations far beyond the capacity of any ordinary computer. But for years, quantum computers struggled to match the computing power of a handheld calculator.

In 2012, John Preskill, a theoretical physicist at the California Institute of Technology, coined the phrase “quantum supremacy” to describe the moment when a quantum computer finally surpasses even the best supercomputer. The term caught on, but experts came to hold different ideas about what it means.

And that’s how you end up in a situation where Google says it has achieved quantum supremacy, but IBM says it hasn’t.

Before explaining what quantum supremacy means, it’s worth clarifying what it doesn’t mean: the moment a quantum computer performs a calculation that’s impossible for a classical computer. This is because a classical computer can, in fact, perform any calculation a quantum computer can perform — eventually.

“Given enough time … classical computers and quantum computers can solve the same problems,” said Thomas Wong of Creighton University.

Instead, most experts interpret quantum supremacy to mean the moment a quantum computer performs a calculation that, for all practical purposes, a classical computer can’t match. This is the crux of the disagreement between Google and IBM, because “practical” is a fuzzy concept.

In their Nature paper, Google claims that their Sycamore processor took 200 seconds to perform a calculation that the world’s best supercomputer — which happens to be IBM’s Summit machine — would need 10,000 years to match. That’s not a practical time frame. But IBM now argues that Summit, which fills an area the size of two basketball courts at the Oak Ridge National Laboratory in Tennessee, could perform the calculation in 2.5 days.

Google stands by their 10,000-year estimate, though several computer experts interviewed for this article said IBM is probably right on that point. “IBM’s claim looks plausible to me,” emailed Scott Aaronson of the University of Texas, Austin.

So assuming they’re right, is 2.5 days a practical amount of time? Maybe it is for some tasks, but certainly not for others. For that reason, when computer scientists talk about quantum supremacy, they usually have a more precise idea in mind.
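
Taken at face value, the two runtime estimates imply wildly different quantum speedups. A quick back-of-the-envelope calculation, a sketch using only the figures quoted above, makes the gap concrete:

```python
# Compare the two classical-runtime estimates against Sycamore's
# 200-second quantum runtime (all figures as quoted in the article).
QUANTUM_SECONDS = 200

google_estimate = 10_000 * 365 * 24 * 3600  # 10,000 years, in seconds
ibm_estimate = 2.5 * 24 * 3600              # 2.5 days, in seconds

google_speedup = google_estimate / QUANTUM_SECONDS
ibm_speedup = ibm_estimate / QUANTUM_SECONDS

print(f"Google's estimate implies a ~{google_speedup:,.0f}x speedup")  # ~1.6 billion x
print(f"IBM's estimate implies a ~{ibm_speedup:,.0f}x speedup")        # ~1,080x
```

Either way the quantum machine comes out ahead; the dispute is over whether a roughly thousand-fold advantage, rather than a billion-fold one, still clears the "supremacy" bar.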

Why Green Pledges Will Not Create the Natural Forests We Need

Experts agree: Reforesting our planet is one of the great ecological challenges of the 21st century. It is essential to meeting climate targets, the only route to heading off the extinction crisis, and almost certainly the best way of maintaining the planet’s rainfall. It could also boost the livelihoods of hundreds of millions of inhabitants of former forest lands.

The good news is that, even as deforestation continues in many countries, reforestation is under way in many others. From India to Ethiopia, and China to Costa Rica, there are more trees today than there were 30 years ago, saving species, recycling rain, and sucking carbon dioxide from the air. The Bonn Challenge, an international agreement struck eight years ago to add 1.35 million square miles of forests (an area slightly larger than India) to the planet’s land surface by 2030, is on track.

But what kind of forests are they?

A damning assessment published earlier this month in the journal Nature brought bad news. Forest researchers analyzed the small print of government declarations about what kind of forests they planned to create. They discovered that 45 percent of promised new forests will be monoculture plantations of fast-growing trees like acacia and eucalyptus, usually destined for harvesting in double-quick time to make pulp for paper.

Such forests would often decrease biodiversity rather than increase it, and would only ever hold a small fraction of the carbon that could be captured by giving space for natural forests. Another 21 percent of the “reforestation” would plant fruit and other trees on farms as part of agroforestry programs; just 34 percent would be natural forests.

“Policymakers are misinterpreting the term forest restoration [and] misleading the public,” the study’s two main authors, geographer Simon Lewis of Leeds University and tropical forest researcher Charlotte Wheeler of Edinburgh University, commented in a blog. It is, they say, a “scandal.”

Forestry and climate experts say that monoculture timber plantations have their place, but that they must be in addition to the 1.35 million square miles of restored natural forests, not instead of them. These experts also say that an important component of reforestation is supporting policies that help trashed forests and degraded lands regenerate naturally into jungle or woodlands, thus promoting significant carbon storage and fostering biodiversity. Richard Houghton, an ecologist at the Woods Hole Research Center in Massachusetts, has estimated that if degraded tropical forests were allowed to regrow, they could capture up to 3 billion tons of carbon annually for as much as 60 years, potentially “providing a bridge to a fossil fuel-free world.”

The world’s forests are home to half of all terrestrial species. Their foliage recycles rainfall to keep the interiors of continents from turning into desert, and they store CO2 that would otherwise add to global warming. Their restoration is fast becoming a global clarion call, essential for protecting biodiversity and climate.

Environment groups such as The Nature Conservancy and World Resources Institute (WRI) trumpet the environmental potential — and economic rationale — for putting reforestation at the heart of “natural solutions” to climate change. Such new forests “can provide 37 percent of cost-effective CO2 mitigation needed through 2030” to hold down warming, an international study headed by Bronson Griscom of The Nature Conservancy concluded.

The World Wildlife Fund (WWF) has called for the planting and protection of 1 trillion trees worldwide. The United Nations recently declared the 2020s would be the “Decade of Ecosystem Restoration.” As well as the promises made in Bonn, reforestation is central to fulfilling many countries’ emissions pledges made at the Paris climate conference in 2015.

But there are growing concerns that the reforestation agenda is becoming green cover for a further assault on the world’s ecosystems, and that this will undermine its ability to deliver for climate.

Reforestation is happening. Many countries in temperate lands have been gradually increasing their forest cover for decades. Europe has a third more trees than it did a century ago, as they encroach onto unwanted farmland. Some of the biggest expansion has happened in Eastern European countries such as Romania and Poland since the collapse of communist rule, when state collective farms were abandoned. In New England, forests have recolonized 15,400 square miles, an area one and a half times the size of Massachusetts, since the mid-19th century.

But if there is a date when reforestation took flight as a global policy project, it was probably 20 years ago, when China blamed massive floods along the Yangtze River on deforestation. In 1999, Beijing banned further deforestation on Chinese soil and launched its Conversion of Cropland to Forest Program, sometimes called “grain for green.” Today, it claims the program has paid more than 100 million farmers across the country to plant trees and has restored more than 108,000 square miles of forest.

The effectiveness of the program has often been questioned. One important component, the “great green wall” project, which aims to halt spreading deserts across northern China by planting 100 billion trees by 2050, has been called a “fairy tale” by some Chinese ecologists. They claim five out of six seedlings have died. Geographer David Shankman, professor emeritus at the University of Alabama and a long-time observer of China’s reforestation programs, told Yale Environment 360: “I am not confident of long-term success.”

A 2016 study of the overall Chinese program by Lucas Gutiérrez Rodríguez of the Center for International Forestry Research in Bogor, Indonesia, found the published research was skewed and variable. The impacts on biodiversity were sometimes negative. On Hainan Island, for instance, the reforestation program replaced traditionally biodiverse farming systems with monocultures of eucalyptus and rubber. And many farmers who took money to plant trees on their land said they would cut them down again when the subsidies stopped. But Rodríguez agreed that, despite its failings, China had seen “a substantial increase in forest cover and associated carbon stocks.”

Other developing countries have also made the transition from deforestation to reforestation. Costa Rica saw its forest cover decline from 75 percent in 1940 to 20 percent in the late 1980s, mostly through clearance for cattle ranches. But with the government paying land users to nurture new forests of native tree species, cover has since recovered to more than 50 percent.

Nepal has seen a remarkable development of community forests. Some 17,000 autonomous community forest user groups, with rights to manage their forests and control access, have driven a rise in national forest cover of around 20 percent in the past three decades. Those new woodlands are largely composed of native species.

In Niger, on the edge of the Sahara desert, farmers have overturned decades of advice from government agricultural advisors and begun nurturing rather than removing trees on their land. The grassroots movement began in the mid-1980s in a single village, says Chris Reij, then of the VU University, Amsterdam and now at WRI, who uncovered it. Farmers in Dan Saga in the Maradi region of the country discovered by accident that they got better grain yields if they let trees grow; the trees stabilized soils, helped retain nitrogen, and dropped leaves that maintained soil moisture. Word spread. Today, the practice extends across 12.3 million acres and 200 million trees.

Increasingly, governments are becoming convinced that forests can be a boon for rural livelihoods. Since the Bonn Challenge was launched, 58 countries have made formal pledges on reforestation, covering more than 650,000 square miles that they say will eventually capture the equivalent of almost half a year of global industrial CO2 emissions.

Other non-Bonn commitments — for instance as part of Paris climate pledges — extend the total reforestation in tropical countries alone to 1.1 million square miles, says Lewis. Countries with commitments exceeding 38,000 square miles include Brazil, China, India, Ethiopia, the United States, Nigeria, Indonesia, Mexico, Vietnam, and the Democratic Republic of Congo.

But Lewis says many of these promises are misleading. His and Wheeler’s analysis of plans submitted to the Bonn Challenge secretariat and elsewhere, found that in Brazil, for instance, 82 percent of the promised restoration is actually monoculture plantations rather than natural forest. In China, the figure is 99 percent.

Long-maturing natural forests will typically come to store 40 times more carbon than a plantation harvested once a decade. “Plantations hold little more carbon, on average, than the land cleared to plant them,” says Lewis. The same would apply to proposed plantations of forest to provide biomass for burning in power stations.

Agroforestry — defined as the integration and cultivation of trees on farms — is rather better, holding typically six times more carbon than monoculture plantations, though only a seventh as much as natural forests. Many African countries are committed to reforesting primarily through agroforestry, encouraging smallholders to plant non-timber trees such as mangoes, cashew, or cocoa in their fields.

Backed by $1 billion from the World Bank, the Africa Forest Landscape Restoration Initiative aims to restore 386,000 square miles of forest by 2030, much of it on farmland. The model is Ethiopia, where in the wake of disastrous drought in the 1980s farmers have planted 2.5 million acres of trees among their crops in Tigray province alone.

Some nations have made big promises for restoring natural forests. They include Vietnam and India, both of which plan to meet more than 60 percent of their promised extensive reforestation this way. Under its 2018 draft forest plan, India intends to raise forest cover from the current 24 percent to 33 percent.

Even so, Lewis concludes, the preponderance of countries that plan to meet their commitments primarily through plantations or agroforestry means that they will only capture a fifth as much carbon as they would if they restored natural forests. He estimates they will capture around 16 billion tons, compared to the 200 billion tons that a recent IPCC report estimated would need to be removed from the atmosphere this century to help hold warming to 1.5 degrees Celsius (2.7 degrees Fahrenheit).
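
Lining the article’s carbon figures up side by side shows the scale of the shortfall. A rough comparison, a sketch using only the numbers quoted in this piece (all in billions of tons):

```python
# Carbon figures as quoted in the article, in billions of tons (gigatons).
pledged_capture = 16        # Lewis: capture expected from current national plans
ipcc_need = 200             # IPCC: removal needed this century to hold 1.5 C
houghton_regrowth = 3 * 60  # Houghton: up to 3 Gt/yr for up to 60 years

print(f"Current plans cover ~{pledged_capture / ipcc_need:.0%} of the IPCC target")
print(f"Regrowing degraded tropical forests could add up to {houghton_regrowth} Gt")
```

On these figures, letting degraded forests regrow could on its own approach the IPCC target, while the pledged mix of plantations and agroforestry covers less than a tenth of it.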

But these calculations leave out one additional component, says Edward Mitchard of Edinburgh University, a co-author of Lewis and Wheeler’s paper. Largely unnoticed, many degraded forests are regrowing, capturing carbon and often retaining much of their old biodiversity. Mitchard has tracked how, as African farmers head for jobs in cities, their old fields are consumed by jungle. “If deforestation could be reduced, Africa could quickly become a significant carbon sink,” he told Yale e360.

Other researchers take a similar line. Philip G. Curtis, a consultant with the non-profit Sustainability Consortium, has estimated that only about a quarter of annual deforestation is permanent. Much of the rest — whether lost to fires, shifting cultivation, or logging — will eventually recover. One assessment estimates that the world contains some 7.7 million square miles of degraded land suitable for forest restoration, a quarter of it for closed forests and the remainder for “mosaic” restoration in which forests are embedded into agricultural landscapes.

Others argue that successful forest restoration will require much greater involvement — and control — by forest communities. If badly managed, taking land for reforestation can result in outright “green grab,” as earmarked land is handed over to outside corporations or even NGOs, according to Rebecca McLain, a spokeswoman for the Center for International Forestry Research. “Tenure rights are often key.”

But the bottom line, says Lewis, is that “to stem global warming, deforestation must stop. And restoration programs worldwide should return all degraded lands to natural forests.” The danger, he says, is that by trying to smuggle tree plantations into global agreements on restoring real forests, governments are in danger of undermining what could still become the greatest story of ecological redemption in the 21st century.
