Call it the “Burning Man” theory of tech. Every so often, the hopes and dreams of a technological visionary are almost torched by those who surround them. In 1985 Steve Jobs was fired from Apple, the company he fathered, and did not return for 11 years. In 2000 Elon Musk’s co-founders ousted him as CEO of X.com, the firm that went on to become PayPal, a digital-payments platform. In 2008 Jack Dorsey’s fellow creators of Twitter ended his short reign as chief executive of the social-media app. On November 17th Sam Altman looked like he would become the Bay Area’s next burnt effigy, ousted from OpenAI, the artificial-intelligence (AI) firm he co-founded in 2015, by a board that accused him of lacking candour. But on November 21st, after four days in which he, his employees and OpenAI’s investors, such as Microsoft, wrangled feverishly for his reinstatement, he was back in control of the firm. “Wow it even took Jesus three days,” one wag tweeted in the midst of the drama. Instead of Mr Altman, three of the four board members who gave him the boot are toast.

It is not the first time in his 38 years on Earth that Mr Altman has been at the centre of such an imbroglio. He is a man of such supreme self-confidence that people tend to treat him as either genius or opportunist—the latter usually in private. Like Jobs, he has a messianic ability to inspire people, even if he doesn’t have the iPhone creator’s God-like eye for design. Like Mr Musk, he has ironclad faith in his vision for the future, even if he lacks the Tesla boss’s legendary engineering skills. Like Mr Dorsey, he has shipped a product, ChatGPT, that has become a worldwide topic of conversation—and consternation.

Yet along the way he has irked people. This started at Y Combinator (YC), a hothouse for entrepreneurs, which he led from 2014 until he was pushed out in 2019 for scaling it up too fast and getting distracted by side hustles such as OpenAI.
At OpenAI, he fell out with Mr Musk, another co-founder, and some influential AI researchers who left in a huff. The latest evidence comes from the four board members who clumsily sought to fire him. The specific reasons for their decision remain unclear. But it would not be a surprise if Mr Altman’s unbridled ambition played a role.

If there is one constant in Mr Altman’s life, it is a missionary zeal that even by Silicon Valley standards is striking. Some entrepreneurs are motivated by fame and fortune. His goal appears to be techno-omnipotence. Paul Graham, co-founder of YC, said of Mr Altman, then still in his early 20s: “You could parachute him into an island full of cannibals and come back in five years and he’d be the king.”

Forget the island. The world is now his domain. In 2021 he penned a utopian manifesto called “Moore’s Law for Everything”, predicting that the AI revolution (which he was leading) would shower benefits on Earth—creating phenomenal wealth, changing the nature of work, reducing poverty. He is an ardent proponent of nuclear fusion, arguing that coupled with ChatGPT-like “generative” AI, falling costs of knowledge and energy will create a “beautiful exponential curve”. This is heady stuff, all the more so given the need to strike a careful balance between speed and safety when rolling out such world-changing technologies. Where Mr Altman sits on that spectrum is hard to gauge.

Mr Altman is a man of contradictions. In 2016, when he still led YC, Peter Thiel, a billionaire venture capitalist, described him to the New Yorker as “not particularly religious but…culturally very Jewish—an optimist yet a survivalist” (back then Mr Altman had a bolt hole in Big Sur, stocked with guns and gold, in preparation for rogue AIs, pandemics and other disasters). As for his enduring optimism, it rang out clearly during an interview he recorded just two days before OpenAI’s boardroom coup, which he did not see coming.
“What differentiates me [from] most of the AI companies is I think AI is good,” he told “Hard Fork”, a podcast. “I don’t secretly hate what I do all day. I think it’s going to be awesome.”

He has sought to have it both ways when it comes to OpenAI’s governance, too. Mr Altman devised the wacky corporate structure at the heart of the latest drama. OpenAI was founded as a non-profit, in order to push the frontiers of AI to a point where computers can out-think people, yet without sacrificing human pre-eminence. But it also needed money. For that it established a for-profit subsidiary that offered investors capped rewards but no say in the running of the company. Mr Altman, who owns no shares in OpenAI, has defended the model. In March he told one interviewer that putting such technologies into the hands of a company that sought to create unlimited value left him “a little afraid”.

And yet he also appears to chafe against its constraints. As he did at YC, he has pursued side projects, including seeking investors to make generative-AI devices and semiconductors, which could potentially be hugely lucrative. The old board is being replaced by a new one that may turn out to be less wedded to OpenAI’s safety-above-all-else charter. The incoming chairman, Bret Taylor, used to run Salesforce, a software giant. On his watch the startup could come to resemble a more conventional, fast-scaling tech company. Mr Altman will probably be happy with that, too.

Mercury rising

If that happens, OpenAI may become an even hotter ticket. With the latest version of its AI model, GPT-5, and other products on the way, it is ahead of the pack. Mr Altman has a unique knack for raising money and recruiting talented individuals, and his task would be all the easier with a more normal corporate structure. But his ambiguities, especially over where to strike the balance between speed and safety, are a lesson.
Though Mr Altman has been welcomed into the world’s corridors of power to provide guidance on AI regulation, his own convictions are still not set in stone. That is all the more reason for governments to set the tone on AI safety, not mercurial tech visionaries. ■
“The mission continues,” tweeted Sam Altman, the co-founder of OpenAI, the startup behind ChatGPT, on November 19th. But precisely where it will continue remains unclear. Mr Altman’s tweet was part of an announcement that he was joining Microsoft. Two days earlier, to the astonishment of Silicon Valley, he had been fired from OpenAI for not being “consistently candid in his communications with the board”. Then Satya Nadella, Microsoft’s boss, announced that Mr Altman would “lead a new advanced AI [artificial intelligence] research team” within the tech giant. At first it looked like Mr Altman would be accompanied by just a few former colleagues. Many more may follow. The vast majority of OpenAI’s 770 staff have signed a letter threatening to resign if the board fails to reinstate Mr Altman.

The shenanigans involving the world’s hottest startup are not over. The Verge, a tech-focused online publication, has reported that Mr Altman may be willing to return to OpenAI if the board members responsible for his dismissal themselves resign. Mr Nadella also seems to allow for that possibility. His manoeuvring could look shrewd either way. If Mr Altman returns, then Microsoft, OpenAI’s biggest investor, would have supported him at a time of crisis, strengthening an important corporate relationship. If Mr Altman and friends do join Microsoft, Mr Nadella could look even smarter. He would have brought in-house the talent and technology that the world’s second-most valuable company is betting its future on.

Microsoft has long invested in various forms of AI. It first announced it was working with OpenAI in 2016, and has since invested $13bn in the startup for what is reported to be a 49% stake. The deal means that OpenAI’s technology has to run on Azure, Microsoft’s cloud-computing arm.
In exchange OpenAI has access to enormous amounts of Microsoft’s processing power, which it needs to “train” its powerful models.

The investment became crucial to Microsoft one year ago with the launch of ChatGPT. The chatbot became the fastest-growing consumer software application in history, reaching 100m users in two months. Since then Microsoft has been busy working out how to infuse the startup’s technology into its software. It has launched ChatGPT-like bots to run alongside many of its offerings, including its productivity tools, such as Word and Excel; Bing, its search engine; and even its Windows operating system.

Bringing parts of OpenAI in-house would be a smart move. The technology is central to Microsoft’s future. Having direct control over it eliminates the risk that OpenAI could take its technology in a different direction. And such influence would have been attained for a bargain. Before he was fired, Mr Altman was hoping to raise fresh funds for OpenAI that would value the firm at around $86bn. Hiring OpenAI’s boffins this way is something antitrust regulators would find harder to challenge than a straightforward acquisition. Investors appear keen. Microsoft’s share price fell slightly on the news of Mr Altman’s firing. That loss was reversed when his new gig was announced.

Yet the move would also entail risks. One is reputational. A pillar of Microsoft’s AI strategy has been to keep the technology at arm’s length, thus insulating the company from any embarrassment caused when ChatGPT goes awry. When Meta, Facebook’s parent company, released Galactica, its science AI chatbot, the tool started to fabricate research. The public response was critical enough for Meta to take it down.

Some analysts think that Microsoft may not need insulating any more. It has invested heavily in managing AI risks, with teams working on issues including security, privacy and limiting inappropriate behaviour.
Microsoft’s versions of OpenAI’s GPT models come with more guardrails than the startup’s do, notes Mark Moerdler of Bernstein, a broker. The firm’s launch of its own array of ChatGPT-like products suggests that it is confident it can manage some of the reputational flak.

A bigger risk is that moving OpenAI in-house could create a “short-term slowdown in the progress of the technology”, argues Mr Moerdler. A team led by Mr Altman within Microsoft would take time to get off the ground, because new models need to be designed and trained. If OpenAI lost its brightest employees in the meantime, that could slow the development of its new products—on which Microsoft still depends to jazz up its software. A third threat is that OpenAI’s talent goes not to Microsoft, but somewhere else entirely. Marc Benioff, the boss of Salesforce, another software firm, has said he will hire any OpenAI researcher who resigns.

Whether they do leave will in part depend on the exact setup of Mr Altman’s new outfit. The early signs are that it will get plenty of independence. Mr Nadella referred to Mr Altman as the “CEO” of the new unit. Barry Briggs, of Directions on Microsoft, a consultancy, points out that Microsoft has given its previous acquisitions plenty of autonomy, citing those of LinkedIn and GitHub in 2016 and 2018.

The stakes this time are far higher: OpenAI’s talent is highly sought after and the company’s technology is key to Microsoft’s future. Mr Nadella will hope that he has secured his firm’s interests, whether Mr Altman takes up his new job or returns to the startup he founded. But the chaos is not over yet. ■
“Which would you have more confidence in? Getting your technology from a non-profit, or a for-profit company that is entirely controlled by one human being?” asked Brad Smith, president of Microsoft, at a conference in Paris on November 10th. That was Mr Smith’s way of praising OpenAI, the startup behind ChatGPT, and knocking Meta, Mark Zuckerberg’s social-media behemoth.

In recent days OpenAI’s non-profit governance has looked rather less attractive. On November 17th, seemingly out of nowhere, its board fired Sam Altman, the startup’s co-founder and chief executive. Mr Smith’s own boss, Satya Nadella, who heads Microsoft, was told of Mr Altman’s sacking only a few minutes before Mr Altman himself. Never mind that Microsoft is OpenAI’s biggest shareholder, having backed the startup to the tune of over $10bn.

By November 20th the vast majority of OpenAI’s 700-strong workforce had signed an open letter giving the remaining board members an ultimatum: resign, or the signatories will follow Mr Altman to Microsoft, where he has been invited by Mr Nadella to head a new in-house AI lab.

The goings-on have thrown a spotlight on OpenAI’s unusual structure, and its even more unusual board. What exactly is it tasked with doing, and how could it sack the boss of the hottest AI startup without any of its investors having a say in the matter?

The firm was founded as a non-profit in 2015 by Mr Altman and a group of Silicon Valley investors and entrepreneurs including Elon Musk, the mercurial billionaire behind Tesla, X (formerly Twitter) and SpaceX. The group collectively pledged $1bn towards OpenAI’s goal of building artificial general intelligence (AGI), as AI experts refer to a program that outperforms humans on most intellectual tasks.

After a few years OpenAI realised that in order to attain its goal, it needed cash to pay for expensive computing capacity and top-notch talent—not least because it claims that just $130m or so of the original $1bn pledge materialised.
So in 2019 it created a for-profit subsidiary. Profits for investors in this venture were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025). Any profits above the cap flow to the parent non-profit. The company also reserves the right to reinvest all profits back into the firm until its goal of creating AGI is achieved. And once it is attained, the resulting AGI is not meant to generate a financial return; OpenAI’s licensing terms with Microsoft, for example, cover only “pre-AGI” technology.

The determination of whether and when AGI has been attained is down to OpenAI’s board of directors. Unlike at most startups, or indeed most companies, investors do not get a seat. Instead of representing OpenAI’s financial backers, the organisation’s charter tasks directors with representing the interests of “humanity”.

Until the events of last week, humanity’s representatives comprised three of OpenAI’s co-founders (Mr Altman, Greg Brockman and Ilya Sutskever) and three independent members (Adam D’Angelo, co-founder of Quora; Tasha McCauley, a tech entrepreneur; and Helen Toner, from the Centre for Security and Emerging Technology, another non-profit). On November 17th four of them—Mr Sutskever and the three independents—lost confidence in Mr Altman. Their reasons remain murky but may have to do with what the board seems to have viewed as pursuit of new products paired with insufficient concern for AI safety.

The firm’s bylaws from January 2016 give its board members wide-ranging powers, including the right to add or remove board members, if a majority concur. The earliest tax filings from the same year show three directors: Mr Altman, Mr Musk and Chris Clark, an OpenAI employee. It is unclear how they were chosen, but thanks to the bylaws they could henceforth appoint others. By 2017 the original trio were joined by Mr Brockman and Holden Karnofsky, chief executive of Open Philanthropy, a charity.
Two years later the board had eight members, though by then Mr Musk had stepped down because of a feud with Mr Altman over the direction OpenAI was taking. Last year it was down to six. Throughout, it was answerable only to itself.

This odd structure was designed to ensure that OpenAI can resist outside pressure from investors, who might prefer a quick profit now to AGI for humankind later. Instead, the board’s amateurish ousting of Mr Altman has piled on the pressure from OpenAI’s investors and employees. That tiny part of humanity, at least, clearly feels misrepresented. ■
Here are some handy rules of thumb. Anyone who calls themselves a thought leader is to be avoided. A man who does not wear socks cannot be trusted. And a company that holds an employee-appreciation day does not appreciate its employees.

It is not just that the message sent by acknowledging staff for one out of 260-odd working days is a bit of a giveaway (there isn’t a love-your-spouse day or a national don’t-be-a-total-bastard week for the same reason). It is also that the ideas are usually so tragically unappreciative. You have worked hard all year so you get a slice of cold pizza or a rock stamped with the words “You rock”?

This approach reveals more about the beliefs of the relevant bosses than it does anything about what actually motivates people at work (the subject of this week’s penultimate episode of Boss Class, our management podcast). In a book published in 1960, called “The Human Side of Enterprise”, Douglas McGregor, a professor at MIT Sloan School of Management, divided managers’ assumptions about workers into two categories. He called them Theory X and Theory Y.

McGregor, who died in 1964, was a product of his time. The vignettes in the book feature men with names like Tom and Harry. But his ideas remain useful.

Theory X managers believe that people have a natural aversion to work; their job is to try to get the slackers to put in some effort. That requires the exercise of authority and control. It relies heavily on the idea of giving and withholding rewards to motivate people. Perks and pizza fit into this picture, but pay is critical to Theory X; work is the price to be paid for wages.

Theory Y, the one McGregor himself subscribed to, is based on a much more optimistic view of humans. It assumes that people want to work hard and that managers do not need to be directive if employees are committed to the goals of the company.
It holds that pay can be demoralising if it is too low or unfair, but that once people earn enough to take care of their basic needs, other sources of motivation matter more. In this, McGregor was a follower of Abraham Maslow, a psychologist whose hierarchy of needs moves from having enough to eat and feeling safe up to higher-order concepts like belonging, self-esteem and purpose.

Theory X is not dead. It lives on in low-wage industries where workers must follow rules to the letter and in high-wage ones where pay motivates people long after they can feed themselves. It surfaces in the fears of managers that working from home is a golden excuse for people to do nothing. It shows up in the behaviour of employees who phone it in and bosses who bully and berate.

Nevertheless, Theory Y is in the ascendant. You cannot move for research showing that if people think what they do matters, they work harder. A meta-analysis of such research, conducted by Cassondra Batz-Barbarich of Lake Forest College and Louis Tay of Purdue University, found that doing meaningful work is strongly correlated with levels of employee engagement, job satisfaction and commitment. Trust is increasingly seen as an important ingredient of successful firms; a recent report by the Institute for Corporate Productivity found that high-performing organisations were more likely to be marked by high levels of trust.

Firms of all kinds are asking themselves Y. Companies in prosaic industries are trying to concoct purpose statements that give people a reason to come into work that goes beyond paying the rent. The appeal of autonomy and responsibility permeates the management philosophy not just of creative firms like Netflix but also of lean manufacturers who encourage employees to solve problems on their own initiative.
Some retailers have raised wages in the Theory Y belief that reducing workers’ financial insecurity will improve employee retention and organisational performance.

McGregor himself wrote that the purpose of his book was not to get people to choose sides but to get managers to make their assumptions explicit. On this score he is less successful. It is still possible to run financially viable firms in accordance with Theory X. It is impossible to admit it.
There is little doubting the dedication of Sam Altman to OpenAI, the firm at the forefront of an artificial-intelligence (AI) revolution. As co-founder and boss he appeared to work as tirelessly for its success as at a previous startup, where his single-mindedness led to a bout of scurvy, a disease more commonly associated with mariners of a bygone era who remained too long at sea without access to fresh food. So his sudden sacking on November 17th was a shock. The reasons why the firm’s board lost confidence in Mr Altman are unclear. Rumours point to disquiet about his side-projects, and fears that he was moving too quickly to expand OpenAI’s commercial offerings without considering the safety implications, in a firm that has also pledged to develop the tech for the “maximal benefit of humanity”.

The company’s investors and some of its employees are now seeking Mr Altman’s reinstatement. Whether they succeed or not, it is clear that the events at OpenAI are the most dramatic manifestation yet of a wider divide in Silicon Valley. On one side are the “doomers”, who believe that, left unchecked, AI poses an existential risk to humanity and hence advocate stricter regulations. Opposing them are “boomers”, who play down fears of an AI apocalypse and stress its potential to turbocharge progress. The camp that proves more influential could either encourage or stymie tighter regulations, which could in turn determine who will profit most from AI in the future.

OpenAI’s corporate structure straddles the divide. Founded as a non-profit in 2015, the firm carved out a for-profit subsidiary three years later to finance its need for expensive computing capacity and brainpower in order to propel the technology forward. Satisfying the competing aims of doomers and boomers was always going to be difficult.

The split in part reflects philosophical differences.
Many in the doomer camp are influenced by “effective altruism”, a movement that is concerned by the possibility of AI wiping out all of humanity. The worriers include Dario Amodei, who left OpenAI to start up Anthropic, another model-maker. Other big tech firms, including Microsoft and Amazon, are also among those worried about AI safety.

Boomers espouse a worldview called “effective accelerationism” which counters that not only should the development of AI be allowed to proceed unhindered, it should be speeded up. Leading the charge is Marc Andreessen, co-founder of Andreessen Horowitz, a venture-capital firm. Other AI boffins appear to sympathise with the cause. Meta’s Yann LeCun and Andrew Ng and a slew of startups including Hugging Face and Mistral AI have argued for less restrictive regulation.

Mr Altman seemed to have sympathy with both groups, publicly calling for “guardrails” to make AI safe while simultaneously pushing OpenAI to develop more powerful models and launching new tools, such as an app store for users to build their own chatbots. Its largest investor, Microsoft, which has pumped over $10bn into OpenAI for a 49% stake without receiving any board seats in the parent company, is said to be unhappy, having found out about the sacking only minutes before Mr Altman did. If he does not return, it seems likely that OpenAI will side more firmly with the doomers.

Yet there appears to be more going on than abstract philosophy. As it happens, the two groups are also split along more commercial lines. Doomers are early movers in the AI race, have deeper pockets and espouse proprietary models. Boomers, on the other hand, are more likely to be firms that are catching up, are smaller and prefer open-source software.

Start with the early winners. OpenAI’s ChatGPT added 100m users in just two months after its launch, closely trailed by Anthropic, founded by defectors from OpenAI and now valued at $25bn.
Researchers at Google wrote the original paper on large language models, software that is trained on vast quantities of data and which underpins chatbots including ChatGPT. The firm has been churning out bigger and smarter models, as well as a chatbot called Bard.

Microsoft’s lead, meanwhile, is largely built on its big bet on OpenAI. Amazon plans to invest up to $4bn in Anthropic. But in tech, moving first doesn’t always guarantee success. In a market where both technology and demand are advancing rapidly, new entrants have ample opportunities to disrupt incumbents.

This may give added force to the doomers’ push for stricter rules. In testimony to America’s Congress in May Mr Altman expressed fears that the industry could “cause significant harm to the world” and urged policymakers to enact specific regulations for AI. In the same month a group of 350 AI scientists and tech executives, including from OpenAI, Anthropic and Google, signed a one-line statement warning of a “risk of extinction” posed by AI on a par with nuclear war and pandemics. Despite the terrifying prospects, none of the companies that backed the statement paused their own work on building more potent AI models.

Politicians are scrambling to show that they take the risks seriously. In July President Joe Biden’s administration nudged seven leading model-makers, including Microsoft, OpenAI, Meta and Google, to make “voluntary commitments” to have their AI products inspected by experts before releasing them to the public. On November 1st the British government got a similar group to sign another non-binding agreement that allowed regulators to test their AI products for trustworthiness and harmful capabilities, such as endangering national security. Days beforehand Mr Biden issued an executive order with far more bite.
It compels any AI company that is building models above a certain size—defined by the computing power needed by the software—to notify the government and share its safety-testing results.

Another fault line between the two groups is the future of open-source AI. Large language models (LLMs) have been either proprietary, like the ones from OpenAI, Anthropic and Google, or open-source. The release in February of Llama, a model created by Meta, spurred activity in open-source AI (see chart). Supporters argue that open-source models are safer because they are open to scrutiny. Detractors worry that making these powerful AI models public will allow bad actors to use them for malicious purposes.

But the row over open source may also reflect commercial motives. Venture capitalists, for instance, are big fans of it, perhaps because they spy a way for the startups they back to catch up to the frontier, or gain free access to models. Incumbents may fear the competitive threat. A memo written by insiders at Google that was leaked in May admits that open-source models are achieving results on some tasks comparable to their proprietary cousins and cost far less to build. The memo concludes that neither Google nor OpenAI has any defensive “moat” against open-source competitors.

So far regulators seem to have been receptive to the doomers’ argument. Mr Biden’s executive order could put the brakes on open-source AI. The order’s broad definition of “dual-use” models, which can have both military and civilian purposes, imposes complex reporting requirements on the makers of such models, which may in time capture open-source models too. The extent to which these rules can be enforced today is unclear. But they could gain teeth over time, say if new laws are passed.

Not every big tech firm falls neatly on either side of the divide.
The decision by Meta to open-source its AI models has made it an unexpected champion of startups by giving them access to a powerful model on which to build innovative products. Meta is betting that the surge in innovation prompted by open-source tools will eventually help it by generating newer forms of content that keep its users hooked and its advertisers happy. Apple is another outlier. The world’s largest tech firm is notably silent about AI. At the launch of a new iPhone in September the company paraded numerous AI-driven features without mentioning the term. When prodded, its executives lean towards extolling “machine learning”, another term for AI.

That looks smart. The meltdown at OpenAI shows just how damaging the culture wars over AI can be. But it is these wars that will shape how the technology progresses, how it is regulated—and who comes away with the spoils. ■
How quickly the mighty fall. Ever since the release of ChatGPT a year ago, Sam Altman has been the human face of the generative artificial-intelligence revolution. As recently as November 16th the co-founder and boss of OpenAI was touting the virtues of AI to executives and world leaders at the Asia-Pacific Economic Co-operation summit in San Francisco. The very next day he was out on his ear. A blog post on OpenAI’s website said the board “no longer has confidence” in Mr Altman’s leadership because “he was not consistently candid in his communications with the board”. Another shock came hours later. Greg Brockman, chairman of the firm’s board and another co-founder, resigned in response to Mr Altman’s sacking.

The defenestration was all the more surprising because Mr Altman seemed at the peak of his powers. He had recently completed a world tour where everyone from Narendra Modi to Emmanuel Macron jockeyed to meet him. On November 6th he had launched a suite of new AI tools at OpenAI’s developer day, drawing comparisons with Steve Jobs—a parallel that now seems ironic, considering that in 1985 Jobs too was booted out of the company he had founded. One startup boss says Mr Altman’s and Mr Brockman’s departures are as serious as if Larry Page and Sergey Brin had been kicked out of Google during its early years.

OpenAI’s employees and investors were blindsided by the move. Mr Brockman later tweeted that he and Mr Altman had not been aware of what was happening until minutes before the ousting. According to Axios, a news website, Microsoft, which has a 49% stake in the firm, was also in the dark until the last minute.
Microsoft’s stock fell by 2% on the news, probably because the firm’s AI ambitions, including a hotly anticipated “copilot” for its Office suite, hinge on access to OpenAI’s technology.

The ousting raises three big issues: what led to the surprise sacking; what it means for the firm that has been at the frontier of generative AI; and what it means for the future of the technology itself. How OpenAI’s employees, the tech world and society writ large respond to the situation will be critical to what happens next.

The board has yet to offer a detailed explanation for its decision. The leading theory, put forward by Kara Swisher of New York Magazine, says that the rest of the board, assembled by chief scientist Ilya Sutskever, disagreed with Mr Altman on how the firm should balance making money with the safe release of its models. An employee at OpenAI flatly calls the situation a “coup d’état”. In a meeting with employees held shortly after the announcement, though, Mr Sutskever denied this characterisation, saying that the board was just doing its duty.

If it is true that Mr Altman’s defenestration resulted from disagreements over AI safety, it would be the most dramatic expression yet of a longstanding debate at the heart of OpenAI’s history, and indeed the wider industry. OpenAI was founded as a non-profit in 2015 by Mr Altman, Mr Brockman and Mr Sutskever, a superstar AI researcher, among others. But in 2019, in need of cash to train models that were demanding ever more computing power, Mr Altman spearheaded the creation of a “capped-profit” company inside the non-profit and raised $1bn from Microsoft. In 2021 a handful of senior employees grew disillusioned with the firm’s more commercial focus and left to form a rival startup called Anthropic. The launch of ChatGPT was only one chapter in Mr Altman’s quest to turn what was once a tiny research lab into a nimble product-oriented company.

What does Mr Altman’s ousting mean for OpenAI?
The immediate effect is chaos. In addition to Mr Brockman, three other senior engineers have left. Others could follow, especially if the ousting resulted purely from a strategic disagreement. There could also be financial repercussions. Speaking to your correspondent in August, an investor in OpenAI called Mr Altman the “only irreplaceable person” at the startup because of his top-notch recruiting and fundraising abilities. “Sam is the greatest fundraiser of all time…after Elon,” he said. But perhaps OpenAI no longer needs Mr Altman to hire staff and raise money, now that the firm is so well known and has the backing of Microsoft. That is why the departure of Mr Brockman, widely considered the engineering brains of the startup, is even more stinging.

OpenAI is currently in talks to raise funds at a valuation of nearly $90bn, which would make it one of the most valuable private tech companies in the world. A private-markets broker says that before the announcement there had been “nothing but demand” for OpenAI’s shares. That valuation will now be tested, he says.

The impact on the wider industry is less clear. Mr Altman pushed OpenAI to “ship” new products into the world, which gave it a first-mover advantage. Its competitors were forced to move faster to keep pace. A more safety-focused OpenAI will therefore slow down the whole industry, allowing competitors to catch up. AI startups that were building products with OpenAI’s technology may now think twice before tying themselves too closely to one company. And Mr Altman himself is a wildcard. Writing on X he promised he would “have more to say about what’s next later”. If what’s next is a new company, the drama could just be getting started. ■