We have an artificial intelligence summit behind us, which produced some celebrity photoshoots at Bletchley Park but also some useful outcomes. One was simply having many of the world’s main policymakers and business leaders in the same room focusing on the challenges and opportunities of AI. Having them spend limited political bandwidth on the next technological upheaval at all, let alone doing so together, amounts to a degree of foresight that is all too rare in our politics. If it becomes the first of a regular series of meetings, so much the better; it also matters that Chinese representatives participated.
Another result is that it may have prompted the White House to get its own first stab at regulating AI over the line, with a US executive order published before the Bletchley gathering. For all this, UK prime minister Rishi Sunak deserves thanks; we are even ready to forgive him for the tech bro talk with Elon Musk.
The Bletchley Park summit suggests that the policy conversation has shifted in a constructive direction since the release of ChatGPT led to a wave of panic about humanity’s end. Last time I wrote about AI, I argued that warnings of existential risks echoed those triggered by previous technological breakthroughs — from books (the supposedly suicide-inducing The Sorrows of Young Werther) to the nuclear bomb (see Dr Strangelove). As I said then:
I have found myself unable to get caught up in much of the excitement . . . I struggle to see how even the worst-case scenarios the experts warn us against are qualitatively different from the big problems humanity has already managed to cause and had to try to solve all by ourselves . . . [L]ying and manipulation, especially in our democratic processes, are problems we humans have been perfectly capable of causing without the need for AI . . .
So I think that the whiff of existential terror the latest AI breakthroughs have whipped up is a distraction. We should instead be thinking on a much more mundane level.
And as far as I can tell, the thinking has taken a good turn for the mundane. The risks of wiping out humanity (the “terminator” challenge) are no longer centre stage. Nor are those of one great power wiping out another (the “robot army” challenge). Instead, we are thinking about just what we should: how AI could cause harm to humanity today and how state power can be used to address that. Tim Wu puts it nicely in a New York Times op-ed: “Actual harm, not imagined risk, is a far better guide to how and when the state should intervene.” I also agree with him that the US executive order mostly, and rightly, focuses on actual harms.
And the fact is that these more mundane problems are much the same as the actual harms we already face. The difference made by AI is that it will be easier to cause them at greater scale and perhaps with less scrutiny. But the problems are qualitatively the same and we can do well by applying solutions that are qualitatively the same as the ones we know, albeit perhaps with greater technological sophistication and speed (no doubt regulators will have to fight AI with AI, at least in part).
One type of problem has to do with fraud and impersonation, as highlighted by both Wu and the White House. Nothing new about this: such abuses date back to whenever humans first gathered in groups big enough not to know everyone personally. Of course, AI provides new methods, such as voice impersonation or deepfake video, particularly suited to a society where much interaction is remote and digital. The solutions are partly legal — defining accountability for communication and requirements for honest dealing — and partly technological — such as the “watermarking” the EU and US are now regulating for. But there is nothing profoundly different from how we have dealt with fraud and counterfeits in the past.
My colleague Rana Foroohar has highlighted another challenge: the manipulative monetisation of personal data and online behaviour. This is the core “production model” of the digital business known as surveillance capitalism, and Rana is, of course, right to warn that AI will turbocharge the kind of abuses already being committed. Stopping this does not require many new tools; it requires that we use the ones we have in earnest. We have let the Facebooks of this world target us on the basis of our personal data for many years, but all it really takes is for governments to ban the practice. This has just happened, on a narrow scale: Norway prohibited Meta from using behavioural advertising in the country a few months ago, and has now persuaded the collective of EU privacy regulators to follow suit.
Now, that wasn’t so hard, was it? Admittedly, this is a limited prohibition: only for one company, only for one country or perhaps region, and only for behavioural advertising rather than using private data more broadly or even collecting it. And it is, of course, being contested. But the point is clear: if a practice is harmful or unfair, you can actually ban it. And — to today’s point — you can ban it when carried out by an AI as well.
Then there are risks related not to intentional harm but to significant unforeseen effects. US Securities and Exchange Commission chair Gary Gensler has warned that the use of AI by financial companies could lead to financial instability if many market participants unwittingly rely on the same model. I am sure analogous risks can emerge in other sectors. Here, I think, the Wu remark I quote above falls short: in cases such as the ones Gensler worries about, we absolutely have to imagine risks. The reason is that we know from experience that there are sectors and activities where previously unsuspected risks have a way of creeping up on us.
In all these cases, in other words, there is work to be done but no need to reinvent the wheel. Update legislation, empower rule-setters, strengthen enforcement — and, above all, give strong political backing to all this work. There is really no excuse for letting AI lead to a flowering of old abuses, like some vampire given access to new blood.
But there is one area that still seems to be missing from these laudably technocratic debates: AI’s effect on incomes, wealth and inequality. There is some concern about the displacement of jobs, another challenge with which we have plenty of (poor) experience — and I should know, because I have written an entire book about how we mishandled the last big job disruption. But beyond labour markets, the impact of AI on the distribution of prosperity could be massive.
Successful AI innovation will no doubt create new fortunes. (The FT has reported forecasts that the market for generative AI will grow from $6bn today to $59bn in five years’ time, and that is surely lowballing it. OpenAI has announced it will open a ChatGPT app store — I suppose the plan is to eat Apple’s lunch by usurping its gatekeeping platform.) But how big these fortunes are, who gets them and how fairly they are distributed depend not on the AI itself but on the economic and regulatory structure in which they emerge — in particular the regulation of ownership rights.
By far the most thought-provoking discussion of this I have seen since last week’s summit comes courtesy of Björn Ulvaeus of Abba. Please don’t miss the pop superstar’s op-ed for the FT (kudos to my colleagues on the opinion desk who came up with the headline “Take a chance on AI”), where he sets out a case for balanced rights between creators using AI and those whose work the AI has been trained on. The key insight is that using AI to create new music is not so different from how he and Benny Andersson took inspiration from The Beatles’ White Album, which they listened to over and over.
The economic point, however, is that property rights — including intellectual property rights — have to be defined. That goes for the “products” of AI, which in large part will be intangible ideas and their application, so we are talking about use rights, royalties, the ability to license and on what terms, and, of course, how AI builders are allowed to use data generated by others in the first place. But it must also go for the ownership of AIs themselves and the rights to control and profit from them.
This, I think, could be the most consequential aspect of how to govern AI — at least in economic terms. How concentrated the control of AI is allowed to get, how tilted towards profits the economic gains are, and how concentrated those profits become could change our societies more profoundly than the potential applications of the technology that have us riveted. And so could the neglect of these questions.
Other readables
Behind the legal arguments, it is mostly fear of the economic consequences that has kept western governments from seizing Russia’s $300bn-plus foreign exchange reserves to help Ukraine. But this fear is misguided, as I explain in my FT column this week.
Michael Pettis sets out the merciless arithmetic of how China can only sustain high growth rates with “a major restructuring of its economy in which a much greater role for domestic consumption replaces its over-reliance on investment and manufacturing”.
Colby Smith interviews Claudia Sahm, one of the smartest thinkers on US macroeconomic policy.
“Death has few virtues except, perhaps, for clarifying the important things in life.” My colleague Emma Jacobs is on top form.
Numbers news
Torsten Slok of Apollo Global Management highlights in an email that foreigners’ share of outstanding US government debt has fallen from a peak of 33 per cent a decade ago to 23 per cent today. The shift from foreign to domestic holders is just as big even if you remove the Treasuries bought by the Federal Reserve.
A poll that puts Donald Trump ahead of Joe Biden in most US battleground states has made Democrats tear their hair out in panic. In the same week, however, state election results in places such as Kentucky, Virginia and Ohio look much more favourable for Democrats. Something to confirm everyone’s prior beliefs!