
Call it the “Burning Man” theory of tech. Every so often, the hopes and dreams of a technological visionary are almost torched by those who surround them. In 1985 Steve Jobs was fired from Apple, the company he fathered, and did not return for 11 years. In 2000 Elon Musk’s co-founders ousted him as CEO of X.com, the firm that went on to become PayPal, a digital-payments platform. In 2008 Jack Dorsey’s fellow creators of Twitter ended his short reign as chief executive of the social-media app. On November 17th Sam Altman looked like he would become the Bay Area’s next burnt effigy, ousted from OpenAI, the artificial-intelligence (AI) firm he co-founded in 2015, by a board that accused him of lacking candour. But on November 21st, after four days in which he, his employees and OpenAI’s investors, such as Microsoft, wrangled feverishly for his reinstatement, he was back in control of the firm. “Wow it even took Jesus three days,” one wag tweeted in the midst of the drama. Instead of Mr Altman, three of the four board members who gave him the boot are toast.

It is not the first time in his 38 years on Earth that Mr Altman has been at the centre of such an imbroglio. He is a man of such supreme self-confidence that people tend to treat him as either genius or opportunist—the latter usually in private. Like Jobs, he has a messianic ability to inspire people, even if he doesn’t have the iPhone creator’s God-like eye for design. Like Mr Musk, he has ironclad faith in his vision for the future, even if he lacks the Tesla boss’s legendary engineering skills. Like Mr Dorsey, he has shipped a product, ChatGPT, that has become a worldwide topic of conversation—and consternation.

Yet along the way he has irked people. This started at Y Combinator (YC), a hothouse for entrepreneurs, which he led from 2014 until he was pushed out in 2019 for scaling it up too fast and getting distracted by side hustles such as OpenAI.
At OpenAI, he fell out with Mr Musk, another co-founder, and some influential AI researchers who left in a huff. The latest evidence comes from the four board members who clumsily sought to fire him. The specific reasons for their decision remain unclear. But it would not be a surprise if Mr Altman’s unbridled ambition played a role.

If there is one constant in Mr Altman’s life, it is a missionary zeal that even by Silicon Valley standards is striking. Some entrepreneurs are motivated by fame and fortune. His goal appears to be techno-omnipotence. Paul Graham, co-founder of YC, said of Mr Altman, then still in his early 20s: “You could parachute him into an island full of cannibals and come back in five years and he’d be the king.”

Forget the island. The world is now his domain. In 2021 he penned a Utopian manifesto called “Moore’s Law for Everything”, predicting that the AI revolution (which he was leading) would shower benefits on Earth—creating phenomenal wealth, changing the nature of work, reducing poverty. He is an ardent proponent of nuclear fusion, arguing that coupled with ChatGPT-like “generative” AI, falling costs of knowledge and energy will create a “beautiful exponential curve”. This is heady stuff, all the more so given the need to strike a careful balance between speed and safety when rolling out such world-changing technologies. Where Mr Altman sits on that spectrum is hard to gauge.

Mr Altman is a man of contradictions. In 2016, when he still led YC, Peter Thiel, a billionaire venture capitalist, described him to the New Yorker as “not particularly religious but…culturally very Jewish—an optimist yet a survivalist” (back then Mr Altman had a bolt hole in Big Sur, stocked with guns and gold, in preparation for rogue AIs, pandemics and other disasters). As for his enduring optimism, it rang out clearly during an interview he recorded just two days before OpenAI’s boardroom coup, which he did not see coming.
“What differentiates me [from] most of the AI companies is I think AI is good,” he told “Hard Fork”, a podcast. “I don’t secretly hate what I do all day. I think it’s going to be awesome.”

He has sought to have it both ways when it comes to OpenAI’s governance, too. Mr Altman devised the wacky corporate structure at the heart of the latest drama. OpenAI was founded as a non-profit, in order to push the frontiers of AI to a point where computers can out-think people, yet without sacrificing human pre-eminence. But it also needed money. For that it established a for-profit subsidiary that offered investors capped rewards but no say in the running of the company. Mr Altman, who owns no shares in OpenAI, has defended the model. In March he told one interviewer that putting such technologies into the hands of a company that sought to create unlimited value left him “a little afraid”.

And yet he also appears to chafe against its constraints. As he did at YC, he has pursued side projects, including seeking investors to make generative-AI devices and semiconductors, which could potentially be hugely lucrative. The old board is being replaced by a new one that may turn out to be less wedded to OpenAI’s safety-above-all-else charter. The incoming chairman, Bret Taylor, used to run Salesforce, a software giant. On his watch the startup could come to resemble a more conventional, fast-scaling tech company. Mr Altman will probably be happy with that, too.

Mercury rising

If that happens, OpenAI may become an even hotter ticket. With the latest version of its AI model, GPT-5, and other products on the way, it is ahead of the pack. Mr Altman has a unique knack for raising money and recruiting talented individuals, and his task would be all the easier with a more normal corporate structure. But his ambiguities, especially over where to strike the balance between speed and safety, are a lesson.
Though Mr Altman has been welcomed into the world’s corridors of power to provide guidance on AI regulation, his own convictions are still not set in stone. That is all the more reason for governments to set the tone on AI safety, not mercurial tech visionaries. ■
“The mission continues,” tweeted Sam Altman, the co-founder of OpenAI, the startup behind ChatGPT, on November 19th. But precisely where it will continue remains unclear. Mr Altman’s tweet was part of an announcement that he was joining Microsoft. Two days earlier, to the astonishment of Silicon Valley, he had been fired from OpenAI for not being “consistently candid in his communications with the board”. Then Satya Nadella, Microsoft’s boss, announced that Mr Altman would “lead a new advanced AI [artificial intelligence] research team” within the tech giant. At first it looked like Mr Altman would be accompanied by just a few former colleagues. Many more may follow. The vast majority of OpenAI’s 770 staff have signed a letter threatening to resign if the board fails to reinstate Mr Altman.

The shenanigans involving the world’s hottest startup are not over. The Verge, a tech-focused online publication, has reported that Mr Altman may be willing to return to OpenAI, if the board members responsible for his dismissal themselves resign. Mr Nadella also seems to allow for that possibility. His manoeuvring could look shrewd either way. If Mr Altman returns, then Microsoft, OpenAI’s biggest investor, would have supported him at a time of crisis, strengthening an important corporate relationship. If Mr Altman and friends do join Microsoft, Mr Nadella could look even smarter. He would have brought in-house the talent and technology that the world’s second-most valuable company is betting its future on.

Microsoft has long invested in various forms of AI. It first announced it was working with OpenAI in 2016, and has since invested $13bn in the startup for what is reported to be a 49% stake. The deal means that OpenAI’s technology has to run on Azure, Microsoft’s cloud-computing arm.
In exchange OpenAI has access to enormous amounts of Microsoft’s processing power, which it needs to “train” its powerful models.

The investment became crucial to Microsoft one year ago with the launch of ChatGPT. The chatbot became the fastest-growing consumer software application in history, reaching 100m users in two months. Since then Microsoft has been busy working out how to infuse the startup’s technology into its software. It has launched ChatGPT-like bots to run alongside many of its offerings, including its productivity tools, such as Word and Excel; Bing, its search engine; and even its Windows operating system.

Bringing parts of OpenAI in-house would be a smart move. The technology is central to Microsoft’s future. Having direct control over it eliminates the risk that OpenAI could take its technology in a different direction. And such influence would have been attained for a bargain. Before he was fired, Mr Altman was hoping to raise fresh funds for OpenAI that would value the firm at around $86bn. Hiring OpenAI’s boffins this way is something antitrust regulators would find harder to challenge than a straightforward acquisition. Investors appear keen. Microsoft’s share price fell slightly on the news of Mr Altman’s firing. That loss was reversed when his new gig was announced.

Yet the move would also entail risks. One is reputational. A pillar of Microsoft’s AI strategy has been to keep the technology at arm’s length, thus insulating the company from any embarrassment caused when ChatGPT goes awry. When Meta, Facebook’s parent company, released Galactica, its science AI chatbot, the tool started to fabricate research. The public response was critical enough for Meta to take it down.

Some analysts think that Microsoft may not need insulating any more. It has invested heavily in managing AI risks, with teams working on issues including security, privacy and limiting inappropriate behaviour.
Microsoft’s versions of OpenAI’s GPT models come with more guardrails than the startup’s do, notes Mark Moerdler of Bernstein, a broker. The firm’s launch of its own array of ChatGPT-like products suggests that it is confident it can manage some of the reputational flak.

A bigger risk is that moving OpenAI in-house could create a “short-term slowdown in the progress of the technology,” argues Mr Moerdler. A team led by Mr Altman within Microsoft would take time to get off the ground because new models need to be designed and trained. If OpenAI lost its brightest employees in the meantime, that could slow the development of its new products—on which Microsoft still depends to jazz up its software. A third threat is that OpenAI’s talent goes not to Microsoft, but somewhere else entirely. Marc Benioff, the boss of Salesforce, another software firm, has said he will hire any OpenAI researcher who resigns.

Whether they do leave will in part depend on the exact setup of Mr Altman’s new outfit. The early signs are that it will get plenty of independence. Mr Nadella referred to Mr Altman as the “CEO” of the new unit. Barry Briggs, of Directions on Microsoft, a consultancy, points out that Microsoft has given its previous acquisitions plenty of autonomy, citing its purchases of LinkedIn in 2016 and GitHub in 2018.

The stakes this time are far higher: OpenAI’s talent is highly sought after and the company’s technology is key to Microsoft’s future. Mr Nadella will hope that he has secured his firm’s interests, whether Mr Altman takes up his new job or returns to the startup he founded. But the chaos is not over yet. ■
“WHICH WOULD you have more confidence in? Getting your technology from a non-profit, or a for-profit company that is entirely controlled by one human being?” asked Brad Smith, president of Microsoft, at a conference in Paris on November 10th. That was Mr Smith’s way of praising OpenAI, the startup behind ChatGPT, and knocking Meta, Mark Zuckerberg’s social-media behemoth.

In recent days OpenAI’s non-profit governance has looked rather less attractive. On November 17th, seemingly out of nowhere, its board fired Sam Altman, the startup’s co-founder and chief executive. Mr Smith’s own boss, Satya Nadella, who heads Microsoft, was told of Mr Altman’s sacking only a few minutes before Mr Altman himself. Never mind that Microsoft is OpenAI’s biggest shareholder, having backed the startup to the tune of over $10bn.

By November 20th the vast majority of OpenAI’s 700-strong workforce had signed an open letter giving the remaining board members an ultimatum: resign or the signatories will follow Mr Altman to Microsoft, where he has been invited by Mr Nadella to head a new in-house AI lab.

The goings-on have thrown a spotlight on OpenAI’s unusual structure, and its even more unusual board. What exactly is it tasked with doing, and how could it sack the boss of the hottest AI startup without any of its investors having a say in the matter?

The firm was founded as a non-profit in 2015 by Mr Altman and a group of Silicon Valley investors and entrepreneurs including Elon Musk, the mercurial billionaire behind Tesla, X (formerly Twitter) and SpaceX. The group collectively pledged $1bn towards OpenAI’s goal of building artificial general intelligence (AGI), as AI experts refer to a program that outperforms humans on most intellectual tasks.

After a few years OpenAI realised that in order to attain its goal, it needed cash to pay for expensive computing capacity and top-notch talent—not least because it claims that just $130m or so of the original $1bn pledge materialised.
So in 2019 it created a for-profit subsidiary. Profits for investors in this venture were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025). Any profits above the cap flow to the parent non-profit. The company also reserves the right to reinvest all profits back into the firm until its goal of creating AGI is achieved. And once it is attained, the resulting AGI is not meant to generate a financial return; OpenAI’s licensing terms with Microsoft, for example, cover only “pre-AGI” technology.

The determination of whether and when AGI has been attained is down to OpenAI’s board of directors. Unlike at most startups, or indeed most companies, investors do not get a seat. Instead of representing OpenAI’s financial backers, the organisation’s charter tasks directors with representing the interests of “humanity”.

Until the events of last week, humanity’s representatives comprised three of OpenAI’s co-founders (Mr Altman, Greg Brockman and Ilya Sutskever) and three independent members (Adam D’Angelo, co-founder of Quora; Tasha McCauley, a tech entrepreneur; and Helen Toner, from the Centre for Security and Emerging Technology, another non-profit). On November 17th four of them—Mr Sutskever and the three independents—lost confidence in Mr Altman. Their reasons remain murky but may have to do with what the board seems to have viewed as a pursuit of new products paired with insufficient concern for AI safety.

The firm’s bylaws from January 2016 give its board members wide-ranging powers, including the right to add or remove board members, if a majority concur. The earliest tax filings from the same year show three directors: Mr Altman, Mr Musk and Chris Clark, an OpenAI employee. It is unclear how they were chosen, but thanks to the bylaws they could henceforth appoint others. By 2017 the original trio were joined by Mr Brockman and Holden Karnofsky, chief executive of Open Philanthropy, a charity.
Two years later the board had eight members, though by then Mr Musk had stepped down because of a feud with Mr Altman over the direction OpenAI was taking. Last year it was down to six. Throughout, it was answerable only to itself.

This odd structure was designed to ensure that OpenAI can resist outside pressure from investors, who might prefer a quick profit now to AGI for humankind later. Instead, the board’s amateurish ousting of Mr Altman has piled on the pressure from OpenAI’s investors and employees. That tiny part of humanity, at least, clearly feels misrepresented. ■
Here are some handy rules of thumb. Anyone who calls themselves a thought leader is to be avoided. A man who does not wear socks cannot be trusted. And a company that holds an employee-appreciation day does not appreciate its employees.

It is not just that the message sent by acknowledging staff for one out of 260-odd working days is a bit of a giveaway (there isn’t a love-your-spouse day or a national don’t-be-a-total-bastard week for the same reason). It is also that the ideas are usually so tragically unappreciative. You have worked hard all year so you get a slice of cold pizza or a rock stamped with the words “You rock”?

This approach reveals more about the beliefs of the relevant bosses than it does anything about what actually motivates people at work (the subject of this week’s penultimate episode of Boss Class, our management podcast). In a book published in 1960, called “The Human Side of Enterprise”, Douglas McGregor, a professor at MIT Sloan School of Management, divided managers’ assumptions about workers into two categories. He called them Theory X and Theory Y.

McGregor, who died in 1964, was a product of his time. The vignettes in the book feature men with names like Tom and Harry. But his ideas remain useful.

Theory X managers believe that people have a natural aversion to work; their job is to try to get the slackers to put in some effort. That requires the exercise of authority and control. It relies heavily on the idea of giving and withholding rewards to motivate people. Perks and pizza fit into this picture but pay is critical to Theory X; work is the price to be paid for wages.

Theory Y, the one McGregor himself subscribed to, is based on a much more optimistic view of humans. It assumes that people want to work hard and that managers do not need to be directive if employees are committed to the goals of the company.
It holds that pay can be demoralising if it is too low or unfair, but that once people earn enough to take care of their basic needs, other sources of motivation matter more. In this, McGregor was a follower of Abraham Maslow, a psychologist whose hierarchy of needs moves from having enough to eat and feeling safe up to higher-order concepts like belonging, self-esteem and purpose.

Theory X is not dead. It lives on in low-wage industries where workers must follow rules to the letter and in high-wage ones where pay motivates people long after they can feed themselves. It surfaces in the fears of managers that working from home is a golden excuse for people to do nothing. It shows up in the behaviour of employees who phone it in and bosses who bully and berate.

Nevertheless, Theory Y is in the ascendant. You cannot move for research showing that if people think what they do matters, they work harder. A meta-analysis of such research, conducted by Cassondra Batz-Barbarich of Lake Forest College and Louis Tay of Purdue University, found that doing meaningful work is strongly correlated with levels of employee engagement, job satisfaction and commitment. Trust is increasingly seen as an important ingredient of successful firms; a recent report by the Institute for Corporate Productivity found that high-performing organisations were more likely to be marked by high levels of trust.

Firms of all kinds are asking themselves Y. Companies in prosaic industries are trying to concoct purpose statements that give people a reason to come into work that goes beyond paying the rent. The appeal of autonomy and responsibility permeates the management philosophy not just of creative firms like Netflix but also of lean manufacturers who encourage employees to solve problems on their own initiative.
Some retailers have raised wages in the Theory Y belief that reducing workers’ financial insecurity will improve employee retention and organisational performance.

McGregor himself wrote that the purpose of his book was not to get people to choose sides but to get managers to make their assumptions explicit. On this score he has been less successful. It is still possible to run financially viable firms in accordance with Theory X. It is impossible to admit it. ■