More stories


    Biden Administration Weighs Further Curbs on Sales of A.I. Chips to China

    Reports that the White House may clamp down on sales of semiconductors that power artificial intelligence capabilities sent tech stocks diving.

    The Biden administration is weighing additional curbs on China’s ability to access critical technology, including restricting the sale of high-end chips used to power artificial intelligence, according to five people familiar with the deliberations.

    The curbs would clamp down on the sales to China of advanced chips made by companies like Nvidia, Advanced Micro Devices and Intel, which are needed for the data centers that power artificial intelligence.

    Biden officials have said that China’s artificial intelligence capabilities could pose a national security threat to the United States by enhancing Beijing’s military and security apparatus. Among the concerns is the use of A.I. in guiding weapons, carrying out cyber warfare and powering facial recognition systems used to track dissidents and minorities.

    But such curbs would be a blow to semiconductor manufacturers, including those in the United States, which still generate much of their revenue in China.

    The deliberations were earlier reported by The Wall Street Journal. Nvidia’s shares closed down 1.8 percent on Wednesday after reports of the potential export crackdown. The company has been one of the primary beneficiaries of the enthusiasm over artificial intelligence, with its share price surging by roughly 180 percent this year.

    Such additional restrictions, if adopted, would not have an immediate impact on Nvidia’s financial results, Colette Kress, the chief financial officer of Nvidia, said Wednesday at an event hosted by an investment firm. But over the long term, they “will result in a permanent loss of opportunities for the U.S. industry to compete and lead in one of the world’s largest markets,” she said. She added that China typically generates 20 percent to 25 percent of the company’s data center revenue, which includes other products in addition to chips that enable A.I.

    The stock prices of the chip companies Qualcomm and Intel fell less than 2 percent on Wednesday, while AMD nudged 0.2 percent lower.

    Intel declined to comment, as did the Commerce Department, which oversees export controls. AMD did not respond to a request for comment.

    Curbing the sale of high-end chips would be the latest step in the Biden administration’s campaign to starve China of advanced technology that is needed to power everything from self-driving cars to robotics.

    Last October, the administration issued sweeping restrictions on the types of advanced semiconductors and chip-making machinery that could be sent to China. The rules were applied across the industry, but they had particularly strong consequences for Nvidia. The company, an industry leader, was barred from selling China its top-line A100 and H100 chips — which are adept at running the many processes required to build artificial intelligence — unless it first obtained a special license.

    In response to those restrictions, Nvidia began offering the downgraded A800 and H800 chips in China last year.

    The additional restrictions under consideration, which would come as part of the process of finalizing those earlier rules, would also bar sales of Nvidia’s A800 and H800 chips, and similar advanced chips from competitors like AMD and Intel, unless those companies obtained a license from the Commerce Department to continue shipping to the country.

    The deliberations have touched off an intense lobbying battle, with Intel and Nvidia working to prevent further curbs on their business. Chip companies say cutting them off from a major market like China will substantially eat into their revenues and reduce their ability to spend on research and development of new chips.

    In an interview with The Financial Times last month, Nvidia’s chief executive, Jensen Huang, warned that the U.S. tech industry was at risk of “enormous damage” if it were to be cut off from trading with China.

    The Biden administration has also been internally debating where to draw the line on chip sales to China. Its goal is to limit technological capacity that could aid the Chinese military in guiding weapons, developing autonomous drones, carrying out cyber warfare and powering surveillance systems, while minimizing the impact such rules would have on private companies.

    The measure, which would come as the United States is also considering expanded curbs on U.S. investment in Chinese technology firms, is also likely to ruffle the Chinese government. Biden officials have been working in recent weeks to improve bilateral relations after a falling-out with Beijing this year, when a Chinese surveillance balloon flew over the United States.

    Antony J. Blinken, the secretary of state, traveled to Beijing this month to meet with his counterparts, and Treasury Secretary Janet L. Yellen is also expected to travel to China soon.

    During a Wednesday appearance at the Council on Foreign Relations in New York, Mr. Blinken said that China’s concern that the U.S. sought to slow its economic growth was “a lengthy part of the conversation that we just had in Beijing.” Chinese officials, he said, believe the U.S. seeks “to hold them back, globally, and economically.” But he disputed that notion.

    “How is it in our interest to allow them to get technology that they may turn around and use against us?” he asked, citing China’s expanding nuclear weapons program, its development of hypersonic missiles and its use of artificial intelligence “potentially for repressive purposes.”

    “If they were in our shoes, they would do exactly the same thing,” he said, adding that the U.S. was imposing “very targeted, very narrowly defined controls.”

    Nvidia’s valuation had soared in light of the recent boom in generative artificial intelligence services, which can produce complex written answers to questions and images based on a single prompt. Microsoft has teamed up with OpenAI, which makes the chatbot ChatGPT, to generate results in its Bing search engine, while Google has built a competing chatbot called Bard.

    As companies race to incorporate the technology into their products, demand has risen for chips like Nvidia’s that can handle the complex computing tasks involved. That momentum has helped push Nvidia’s market capitalization past $1 trillion, making the company the world’s sixth largest by value.

    Nvidia said in an August filing that $400 million in revenue from “potential sales to China” could be subject to U.S. export restrictions, including sales of the A100, if “customers do not want to purchase the company’s alternative product offerings” or if the government fails to grant licenses allowing the company to continue selling the chip inside China.

    Since the restrictions were imposed, Chinese chip makers have been trying to overhaul their supply chains and develop domestic sources of advanced chips, but China’s capability to produce the most advanced chips remains many years behind that of the United States.

    Dan Wang, a visiting scholar at Yale Law School, said that the impact of advanced chip restrictions on Chinese tech companies was uncertain. “Most of their business needs are driven by less advanced chips, as fewer of them are playing on the fringes of the most advanced A.I.,” he said.

    Joe Rennison


    Facial Recognition Spreads as Tool to Fight Shoplifting

    Simon Mackenzie, a security officer at the discount retailer QD Stores outside London, was short of breath. He had just chased after three shoplifters who had taken off with several packages of laundry soap. Before the police arrived, he sat at a back-room desk to do something important: capture the culprits’ faces.

    On an aging desktop computer, he pulled up security camera footage, pausing to zoom in and save a photo of each thief. He then logged in to a facial recognition program, Facewatch, which his store uses to identify shoplifters. The next time those people enter any shop within a few miles that uses Facewatch, store staff will receive an alert.

    “It’s like having somebody with you saying, ‘That person you bagged last week just came back in,’” Mr. Mackenzie said.

    Use of facial recognition technology by the police has been heavily scrutinized in recent years, but its application by private businesses has received less attention. Now, as the technology improves and its cost falls, the systems are reaching further into people’s lives. No longer just the purview of government agencies, facial recognition is increasingly being deployed to identify shoplifters, problematic customers and legal adversaries.

    Facewatch, a British company, is used by retailers across the country frustrated by petty crime. For as little as 250 pounds a month, or roughly $320, Facewatch offers access to a customized watchlist that stores near one another share. When Facewatch spots a flagged face, an alert is sent to a smartphone at the shop, where employees decide whether to keep a close eye on the person or ask the person to leave.

    Mr. Mackenzie adds one or two new faces every week, he said, mainly people who steal diapers, groceries, pet supplies and other low-cost goods. He said their economic hardship made him sympathetic, but that the number of thefts had gotten so out of hand that facial recognition was needed. Usually at least once a day, Facewatch alerts him that somebody on the watchlist has entered the store.

    Facial recognition technology is proliferating as Western countries grapple with advances brought on by artificial intelligence. The European Union is drafting rules that would ban many of facial recognition’s uses, while Eric Adams, the mayor of New York City, has encouraged retailers to try the technology to fight crime. MSG Entertainment, the owner of Madison Square Garden and Radio City Music Hall, has used automated facial recognition to refuse entry to lawyers whose firms have sued the company.

    Among democratic nations, Britain is at the forefront of using live facial recognition, with courts and regulators signing off on its use. The police in London and Cardiff are experimenting with the technology to identify wanted criminals as they walk down the street. In May, it was used to scan the crowds at the coronation of King Charles III.

    But the use by retailers has drawn criticism as a disproportionate solution for minor crimes. Individuals have little way of knowing they are on the watchlist or how to appeal. In a legal complaint last year, Big Brother Watch, a civil society group, called it “Orwellian in the extreme.”

    Fraser Sampson, Britain’s biometrics and surveillance camera commissioner, who advises the government on policy, said there was “a nervousness and a hesitancy” around facial recognition technology because of privacy concerns and poorly performing algorithms in the past.

    “But I think in terms of speed, scale, accuracy and cost, facial recognition technology can in some areas, you know, literally be a game changer,” he said. “That means its arrival and deployment is probably inevitable. It’s just a case of when.”

    ‘You can’t expect the police to come’

    Facewatch was founded in 2010 by Simon Gordon, the owner of a popular 19th-century wine bar in central London known for its cellarlike interior and popularity among pickpockets.

    At the time, Mr. Gordon hired software developers to create an online tool to share security camera footage with the authorities, hoping it would save the police time filing incident reports and result in more arrests.

    There was limited interest, but Mr. Gordon’s fascination with security technology was piqued. He followed facial recognition developments and had the idea for a watchlist that retailers could share and contribute to. It was like the photos of shoplifters that stores keep next to the register, but supercharged into a collective database to identify bad guys in real time.

    By 2018, Mr. Gordon felt the technology was ready for commercial use.

    “You’ve got to help yourself,” he said in an interview. “You can’t expect the police to come.”

    Facewatch, which licenses facial recognition software made by Real Networks and Amazon, is now inside nearly 400 stores across Britain. Trained on millions of pictures and videos, the systems read the biometric information of a face as the person walks into a shop and check it against a database of flagged people.

    Facewatch’s watchlist is constantly growing as stores upload photos of shoplifters and problematic customers. Once added, a person remains there for a year before being deleted.

    ‘Mistakes are rare but do happen’

    Every time Facewatch’s system identifies a shoplifter, a notification goes to a person who passed a test to be a “super recognizer” — someone with a special talent for remembering faces. Within seconds, the super recognizer must confirm the match against the Facewatch database before an alert is sent.

    But while the company has created policies to prevent misidentification and other errors, mistakes happen.

    In October, a woman buying milk in a supermarket in Bristol, England, was confronted by an employee and ordered to leave. She was told that Facewatch had flagged her as a barred shoplifter.

    The woman, who asked that her name be withheld because of privacy concerns and whose story was corroborated by materials provided by her lawyer and Facewatch, said there must have been a mistake. When she contacted Facewatch a few days later, the company apologized, saying it was a case of mistaken identity.

    After the woman threatened legal action, Facewatch dug into its records. It found that the woman had been added to the watchlist because of an incident 10 months earlier involving £20 of merchandise, about $25. The system “worked perfectly,” Facewatch said.

    But while the technology had correctly identified the woman, it did not leave much room for human discretion. Neither Facewatch nor the store where the incident occurred contacted her to let her know that she was on the watchlist and to ask what had happened.

    The woman said she did not recall the incident and had never shoplifted. She said she may have walked out after not realizing that her debit card payment failed to go through at a self-checkout kiosk.

    Madeleine Stone, the legal and policy officer for Big Brother Watch, said Facewatch was “normalizing airport-style security checks for everyday activities like buying a pint of milk.”

    Mr. Gordon declined to comment on the incident in Bristol.

    In general, he said, “mistakes are rare but do happen.” He added, “If this occurs, we acknowledge our mistake, apologize, delete any relevant data to prevent reoccurrence and offer proportionate compensation.”

    Approved by the privacy office

    Civil liberties groups have raised concerns about Facewatch and suggested that its deployment to prevent petty crime might be illegal under British privacy law, which requires that biometric technologies have a “substantial public interest.”

    The U.K. Information Commissioner’s Office, the privacy regulator, conducted a yearlong investigation into Facewatch. The office concluded in March that Facewatch’s system was permissible under the law, but only after the company made changes to how it operated.

    Stephen Bonner, the office’s deputy commissioner for regulatory supervision, said in an interview that the investigation had led Facewatch to change its policies: It would put more signage in stores, share among stores only information about serious and violent offenders and send out alerts only about repeat offenders. That means people will not be put on the watchlist after a single minor offense, as happened to the woman in Bristol.

    “That reduces the amount of personal data that’s held, reduces the chances of individuals being unfairly added to this kind of list and makes it more likely to be accurate,” Mr. Bonner said. The technology, he said, is “not dissimilar to having just very good security guards.”

    Liam Ardern, the operations manager for Lawrence Hunt, which owns 23 Spar convenience stores that use Facewatch, estimates the technology has saved the company more than £50,000 since 2020. He called the privacy risks of facial recognition overblown. The only example of misidentification he recalled was when a man was confused for his identical twin, who had shoplifted. Critics overlook that stores like his operate on thin profit margins, he said.

    “It’s easy for them to say, ‘No, it’s against human rights,’” Mr. Ardern said. If shoplifting isn’t reduced, he said, his shops will have to raise prices or cut staff.
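    Facewatch’s matching pipeline is proprietary, but systems of this kind generally reduce each face to a numeric “embedding” vector and compare it against the watchlist with a similarity threshold, which is why borderline matches like the Bristol case get routed to a human “super recognizer.” A minimal sketch of that general idea — the toy embeddings, names and 0.8 threshold below are illustrative assumptions, not Facewatch’s actual system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(face, watchlist, threshold=0.8):
    """Return the watchlist entry most similar to the captured face,
    or None if no entry crosses the similarity threshold."""
    best_id, best_score = None, threshold
    for person_id, stored in watchlist.items():
        score = cosine_similarity(face, stored)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy 4-dimensional "embeddings"; real systems use vectors with hundreds
# of dimensions produced by a neural network.
watchlist = {
    "person-17": [0.9, 0.1, 0.3, 0.4],
    "person-42": [0.1, 0.8, 0.7, 0.2],
}
captured = [0.88, 0.12, 0.33, 0.41]  # a face close to person-17's entry
print(match_against_watchlist(captured, watchlist))  # → person-17
```

    The threshold is the policy lever: setting it too low produces false alerts like the one the Bristol shopper experienced, while setting it too high lets flagged faces pass unnoticed.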


    Generative A.I. Can Add $4.4 Trillion in Value to Global Economy, Study Says

    The report from McKinsey comes as a debate rages over the potential economic effects of A.I.-powered chatbots on labor and the economy.

    “Generative artificial intelligence” is set to add up to $4.4 trillion of value to the global economy annually, according to a report from the McKinsey Global Institute, one of the rosier predictions about the economic effects of the rapidly evolving technology.

    Generative A.I., which includes chatbots such as ChatGPT that can generate text in response to prompts, could boost productivity by saving 60 to 70 percent of workers’ time through automation of their work, according to the 68-page report, which was published early Wednesday. Half of all work will be automated between 2030 and 2060, the report said.

    McKinsey had previously predicted that A.I. would automate half of all work between 2035 and 2075, but the power of generative A.I. tools — which exploded onto the tech scene late last year — accelerated the company’s forecast.

    “Generative A.I. has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities,” the report said.

    McKinsey’s report is one of the few so far to quantify the long-term impact of generative A.I. on the economy. It arrives as Silicon Valley has been gripped by a fervor over generative A.I. tools like ChatGPT and Google’s Bard, with tech companies and venture capitalists investing billions of dollars in the technology.

    The tools — some of which can also generate images and video, and carry on a conversation — have started a debate over how they will affect jobs and the world economy. Some experts have predicted that A.I. will displace people from their work, while others have said the tools can augment individual productivity.

    Last week, Goldman Sachs released a report warning that A.I. could lead to worker disruption and that some companies would benefit more from the technology than others. In April, a Stanford researcher and researchers at the Massachusetts Institute of Technology released a study showing that generative A.I. could boost the productivity of inexperienced call center operators by 35 percent.

    Any conclusions about the technology’s effects may be premature. David Autor, a professor of economics at M.I.T., cautioned that generative A.I. was “not going to be as miraculous as people claim.” “We are really, really in the early stage,” he added.

    For the most part, economic studies of generative A.I. do not take into account other risks from the technology, such as whether it might spread misinformation and eventually escape the realm of human control.

    The vast majority of generative A.I.’s economic value will most likely come from helping workers automate tasks in customer operations, sales, software engineering, and research and development, according to McKinsey’s report. Generative A.I. can create “superpowers” for high-skilled workers, said Lareina Yee, a McKinsey partner and an author of the report, because the technology can summarize and edit content.

    “The most profound change we are going to see is the change to people, and that’s going to require far more innovation and leadership than the technology,” she said.

    The report also outlined challenges that industry leaders and regulators would need to address with A.I., including concerns that the content generated by the tools can be misleading and inaccurate.

    Ms. Yee acknowledged that the report was making prognostications about A.I.’s effects, but said that “if you could capture even a third” of the technology’s potential, “it is pretty remarkable over the next five to 10 years.”


    The AI Boom Is Pulling Tech Entrepreneurs Back to San Francisco

    Doug Fulop’s and Jessie Fischer’s lives in Bend, Ore., were idyllic. The couple moved there last year, working remotely in a 2,400-square-foot house surrounded by trees, with easy access to skiing, mountain biking and breweries. It was an upgrade from their former apartments in San Francisco, where a stranger once entered Mr. Fulop’s home after his lock didn’t properly latch.

    But the pair of tech entrepreneurs are now on their way back to the Bay Area, driven by a key development: the artificial intelligence boom.

    Mr. Fulop and Ms. Fischer are both starting companies that use A.I. technology and are looking for co-founders. They tried to make it work in Bend, but after too many eight-hour drives to San Francisco for hackathons, networking events and meetings, they decided to move back when their lease ends in August.

    “The A.I. boom has brought the energy back into the Bay that was lost during Covid,” said Mr. Fulop, 34.

    The couple are part of a growing group of boomerang entrepreneurs who see opportunity in San Francisco’s predicted demise. The tech industry is more than a year into its worst slump in a decade, with layoffs and a glut of empty offices. The pandemic also spurred a wave of migration to places with lower taxes, fewer Covid restrictions, safer streets and more space. And tech workers have been among the most vocal groups to criticize the city for its worsening problems with drugs, housing and crime.

    But such busts are almost always followed by another boom. And with the latest wave of A.I. technology — known as generative A.I., which produces text, images and video in response to prompts — there’s too much at stake to miss out.

    Investors have already announced $10.7 billion in funding for generative A.I. start-ups within the first three months of this year, a thirteenfold increase from a year earlier, according to PitchBook, which tracks start-ups. Tens of thousands of tech workers recently laid off by big tech companies are now eager to join the next big thing. On top of that, much of the A.I. technology is open source, meaning companies share their work and allow anyone to build on it, which encourages a sense of community.

    “Hacker houses,” where people create start-ups, are springing up in San Francisco’s Hayes Valley neighborhood, known as “Cerebral Valley” because it is the center of the A.I. scene. And every night someone is hosting a hackathon, meet-up or demo focused on the technology.

    In March, days after the prominent start-up OpenAI unveiled a new version of its A.I. technology, an “emergency hackathon” organized by a pair of entrepreneurs drew 200 participants, with almost as many on the waiting list. That same month, a networking event hastily organized over Twitter by Clement Delangue, the chief executive of the A.I. start-up Hugging Face, attracted more than 5,000 people and two alpacas to San Francisco’s Exploratorium museum, earning it the nickname “Woodstock of A.I.”

    Madisen Taylor, who runs operations for Hugging Face and organized the event alongside Mr. Delangue, said its communal vibe had mirrored that of Woodstock. “Peace, love, building cool A.I.,” she said.

    Taken together, the activity is enough to draw back people like Ms. Fischer, who is starting a company that uses A.I. in the hospitality industry. She and Mr. Fulop got involved in the 350-person tech scene in Bend, but they missed the inspiration, hustle and connections of San Francisco.

    “There’s just nowhere else like the Bay,” Ms. Fischer, 32, said.

    Jen Yip, who has been organizing events for tech workers over the past six years, said that what had been a quiet San Francisco tech scene during the pandemic began changing last year in tandem with the A.I. boom. At nightly hackathons and demo days, she watched people meet their co-founders, secure investments, win over customers and network with potential hires.

    “I’ve seen people come to an event with an idea they want to test and pitch it to 30 different people in the course of one night,” she said.

    Ms. Yip, 42, runs a secret group of 800 people focused on A.I. and robotics called Society of Artificers. Its monthly events have become a hot ticket, often selling out within an hour. “People definitely try to crash,” she said.

    Her other speaker series, Founders You Should Know, features leaders of A.I. companies speaking to an audience of mostly engineers looking for their next gig. The last event had more than 2,000 applicants for 120 spots, Ms. Yip said.

    Bernardo Aceituno moved his company, Stack AI, to San Francisco in January to be part of the start-up accelerator Y Combinator. He and his co-founders had planned to base the company in New York after the three-month program ended, but decided to stay in San Francisco. The community of fellow entrepreneurs, investors and tech talent that they found was too valuable, he said.

    “If we move out, it’s going to be very hard to re-create in any other city,” Mr. Aceituno, 27, said. “Whatever you’re looking for is already here.”

    After operating remotely for several years, Y Combinator has started encouraging start-ups in its program to move to San Francisco. Out of a recent batch of 270 start-ups, 86 percent participated locally, the company said.

    “Hayes Valley truly became Cerebral Valley this year,” Gary Tan, Y Combinator’s chief executive, said at a demo day in April.

    The A.I. boom is also luring back founders of other kinds of tech companies. Brex, a financial technology start-up, declared itself “remote first” early in the pandemic, closing its 250-person office in San Francisco’s SoMa neighborhood. The company’s founders, Henrique Dubugras and Pedro Franceschi, decamped for Los Angeles.

    But when generative A.I. began taking off last year, Mr. Dubugras, 27, was eager to see how Brex could adopt the technology. He quickly realized that he was missing out on the coffees, casual conversations and community happening around A.I. in San Francisco, he said.

    In May, Mr. Dubugras moved to Palo Alto, Calif., and began working from a new, pared-down office a few blocks from Brex’s old one. San Francisco’s high office vacancy rate meant the company paid a quarter of what it had been paying in rent before the pandemic.

    Seated under a neon sign in Brex’s office that read “Growth Mindset,” Mr. Dubugras said he had been on a steady schedule of coffee meetings with people working on A.I. since his return. He has hired a Stanford Ph.D. student to tutor him on the topic. “Knowledge is concentrated at the bleeding edge,” he said.

    Mr. Fulop and Ms. Fischer said they would miss their lives in Bend, where they could ski or mountain bike on their lunch breaks. But getting two start-ups off the ground requires an intense blend of urgency and focus.

    In the Bay Area, Ms. Fischer attends multiday events where people stay up all night working on their projects. And Mr. Fulop runs into engineers and investors he knows every time he walks by a coffee shop. They are considering living in San Francisco or in suburbs like Palo Alto and Woodside, which have easy access to nature.

    “I’m willing to sacrifice the amazing tranquillity of this place for being around that ambition, being inspired, knowing there are a ton of awesome people to work with that I can bump into,” Mr. Fulop said. Living in Bend, he added, “honestly just felt like early retirement.”


    New York City Moves to Regulate How AI Is Used in Hiring

European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.

New York City’s focused approach represents an important front in A.I. regulation. At some point, experts say, the broad-stroke principles developed by governments and international organizations must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

“Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety.

Uneasy compromises are inevitable.

Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”

The law applies to companies with workers in New York City, but labor experts expect it to influence practices nationally. At least four states — California, New Jersey, New York and Vermont — and the District of Columbia are also working on laws to regulate A.I. in hiring. And Illinois and Maryland have enacted laws limiting the use of specific A.I. technologies, often for workplace surveillance and the screening of job candidates.

The New York City law emerged from a clash of sharply conflicting viewpoints. The City Council passed it during the final days of the administration of Mayor Bill de Blasio. Rounds of hearings and public comments, more than 100,000 words, came later — overseen by the city’s Department of Consumer and Worker Protection, the rule-making agency.

The result, some critics say, is overly sympathetic to business interests.

“What could have been a landmark law was watered down to lose effectiveness,” said Alexandra Givens, president of the Center for Democracy & Technology, a policy and civil rights organization.

That’s because the law defines an “automated employment decision tool” as technology used “to substantially assist or replace discretionary decision making,” she said. The rules adopted by the city appear to interpret that phrasing narrowly, so that A.I. software will require an audit only if it is the lone or primary factor in a hiring decision or is used to overrule a human, Ms. Givens said.

That leaves out the main way the automated software is used, she said, with a hiring manager invariably making the final choice. The potential for A.I.-driven discrimination, she said, typically comes in screening hundreds or thousands of candidates down to a handful, or in targeted online recruiting to generate a pool of candidates.

Ms. Givens also criticized the law for limiting the kinds of groups measured for unfair treatment. It covers bias by sex, race and ethnicity, but not discrimination against older workers or those with disabilities.

“My biggest concern is that this becomes the template nationally when we should be asking much more of our policymakers,” Ms. Givens said.

The law was narrowed to make sure it was focused and enforceable, city officials said. The Council and the worker protection agency heard from many voices, including public-interest activists and software companies. The goal was to weigh trade-offs between innovation and potential harm, officials said.

“This is a significant regulatory success toward ensuring that A.I. technology is used ethically and responsibly,” said Robert Holden, who was the chair of the Council committee on technology when the law was passed and remains a committee member.

New York City is trying to address new technology in the context of federal workplace laws with guidelines on hiring that date to the 1970s. The main Equal Employment Opportunity Commission rule states that no practice or method of selection used by employers should have a “disparate impact” on a legally protected group like women or minorities.

Businesses have criticized the law. In a filing this year, the Software Alliance, a trade group that includes Microsoft, SAP and Workday, said the requirement for independent audits of A.I. was “not feasible” because “the auditing landscape is nascent,” lacking standards and professional oversight bodies.

But a nascent field is a market opportunity. The A.I. audit business, experts say, is only going to grow. It is already attracting law firms, consultants and start-ups.

Companies that sell A.I. software to assist in hiring and promotion decisions have generally come to embrace regulation. Some have already undergone outside audits. They see the requirement as a potential competitive advantage, providing proof that their technology expands the pool of job candidates for companies and increases opportunity for workers.

“We believe we can meet the law and show what good A.I. looks like,” said Roy Wang, general counsel of Eightfold AI, a Silicon Valley start-up that produces software used to assist hiring managers.

The New York City law also takes an approach to regulating A.I. that may become the norm. The law’s key measurement is an “impact ratio,” or a calculation of the effect of using the software on a protected group of job candidates. It does not delve into how an algorithm makes decisions, a concept known as “explainability.”

In life-affecting applications like hiring, critics say, people have a right to an explanation of how a decision was made. But A.I. like ChatGPT-style software is becoming more complex, perhaps putting the goal of explainable A.I. out of reach, some experts say.

“The focus becomes the output of the algorithm, not the working of the algorithm,” said Ashley Casovan, executive director of the Responsible AI Institute, which is developing certifications for the safe use of A.I. applications in the workplace, health care and finance.
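The “impact ratio” described above lends itself to a simple calculation: the selection rate for each group of candidates, divided by the rate for the most-selected group. Here is a minimal sketch in Python, with entirely hypothetical group names and counts; the city’s rules define the metric in more detail than this.

```python
# Compute per-group impact ratios from hypothetical hiring outcomes.
def impact_ratios(selected, applicants):
    """Selection rate per group, divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical numbers for illustration only.
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 50, "group_b": 15}

print(impact_ratios(selected, applicants))
# group_a is selected at a 25% rate and group_b at 10%, so group_b's
# impact ratio is 0.10 / 0.25 = 0.4, well below the "four-fifths"
# (0.8) benchmark long used in federal disparate-impact analysis.
```

Note that the calculation looks only at outcomes, which is exactly the point the article makes: it measures the output of the algorithm, not its inner workings.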


    How AI and DNA Are Unlocking the Mysteries of Global Supply Chains

At a cotton gin in the San Joaquin Valley, in California, a boxy machine sprays a fine mist containing billions of molecules of DNA onto freshly cleaned Pima cotton.

That DNA will act as a kind of minuscule bar code, nestling amid the puffy fibers as they are shuttled to factories in India. There, the cotton will be spun into yarn and woven into bedsheets before landing on the shelves of Costco stores in the United States. At any time, Costco can test for the DNA’s presence to ensure that its American-grown cotton hasn’t been replaced with cheaper materials — like cotton from the Xinjiang region of China, which is banned in the United States because of its ties to forced labor.

Amid growing concern about opacity and abuses in global supply chains, companies and government officials are increasingly turning to technologies like DNA tracking, artificial intelligence and blockchains to try to trace raw materials from the source to the store.

Companies in the United States are now subject to new rules that require firms to prove their goods are made without forced labor, or face having them seized at the border. U.S. customs officials said in March that they had already detained nearly a billion dollars’ worth of shipments coming into the United States that were suspected of having some ties to Xinjiang. Products from the region have been banned since last June.

Customers are also demanding proof that expensive, high-end products — like conflict-free diamonds, organic cotton, sushi-grade tuna or Manuka honey — are genuine, and produced in ethically and environmentally sustainable ways.

That has forced a new reality on companies that have long relied on a tangle of global factories to source their goods. More than ever before, companies must be able to explain where their products really come from.

The task may seem straightforward, but it can be surprisingly tricky. That’s because the international supply chains that companies have built in recent decades to cut costs and diversify their product offerings have grown astonishingly complex. Since 2000, the value of intermediate goods used to make products that are traded internationally has tripled, driven partly by China’s booming factories.

A large, multinational company may buy parts, materials or services from thousands of suppliers around the world. One of the largest such companies, Procter & Gamble, which owns brands like Tide, Crest and Pampers, has nearly 50,000 direct suppliers. Each of those suppliers may, in turn, rely on hundreds of other companies for the parts used to make its product — and so on, for many levels up the supply chain.

To make a pair of jeans, for example, various companies must farm and clean cotton, spin it into thread, dye it, weave it into fabric, cut the fabric into patterns and stitch the jeans together. Other webs of companies mine, smelt or process the brass, nickel or aluminum that is crafted into the zipper, or make the chemicals that are used to manufacture synthetic indigo dye.

“Supply chains are like a bowl of spaghetti,” said James McGregor, the chairman of the greater China region for APCO Worldwide, an advisory firm. “They get mixed all over. You don’t know where that stuff comes from.”

Given these challenges, some companies are turning to alternative methods, not all of them proven, to try to inspect their supply chains.

Some companies — like the one that sprays the DNA mist onto cotton, Applied DNA Sciences — are using scientific processes to tag or test a physical attribute of the good itself, to figure out where it has traveled on its path from factory to consumer.

Applied DNA has used its synthetic DNA tags, each just a billionth of the size of a grain of sugar, to track microcircuits produced for the Department of Defense, trace cannabis supply chains to ensure the product’s purity and even mist robbers in Sweden who attempted to steal cash from A.T.M.s, leading to multiple arrests.

MeiLin Wan, the vice president for textiles at Applied DNA, said the new regulations were creating a “tipping point for real transparency.”

“There definitely is a lot more interest,” she added.

The cotton industry was one of the earliest adopters of tracing technologies, in part because of previous transgressions. In the mid-2010s, Target, Walmart and Bed Bath & Beyond faced expensive product recalls or lawsuits after the “Egyptian cotton” sheets they sold turned out to have been made with cotton from elsewhere. A New York Times investigation last year documented that the “organic cotton” industry was also rife with fraud.

In addition to the DNA mist it applies as a marker, Applied DNA can figure out where cotton comes from by sequencing the DNA of the cotton itself, or by analyzing its isotopes — variations in the carbon, oxygen and hydrogen atoms in the cotton. Differences in rainfall, latitude, temperature and soil conditions mean these atoms vary slightly across regions of the world, allowing researchers to map where the cotton in a pair of socks or a bath towel has come from.

Other companies are turning to digital technology to map supply chains, creating and analyzing complex databases of corporate ownership and trade.

Some firms, for example, are using blockchain technology to create a digital token for every product that a factory produces. As that product — a can of caviar, say, or a batch of coffee — moves through the supply chain, its digital twin gets encoded with information about how it has been transported and processed, providing a transparent log for companies and consumers.

Other companies are using databases or artificial intelligence to comb through vast supplier networks for distant links to banned entities, or to detect unusual trade patterns that indicate fraud — investigations that could take years to carry out without computing power.

Sayari, a corporate risk intelligence provider that has developed a platform combining data from billions of public records issued globally, is one of those companies. The service is now used by U.S. customs agents as well as private companies. On a recent Tuesday, Jessica Abell, the vice president of solutions at Sayari, ran the supplier list of a major U.S. retailer through the platform and watched as dozens of tiny red flags appeared next to the names of distant companies.

“We’re flagging not only the Chinese companies that are in Xinjiang, but then we’re also automatically exploring their commercial networks and flagging the companies that are directly connected to it,” Ms. Abell said. It is up to the companies to decide what, if anything, to do about their exposure.

Studies have found that most companies have surprisingly little visibility into the upper reaches of their supply chains, because they lack either the resources or the incentives to investigate. In a 2022 survey by McKinsey & Company, 45 percent of respondents said they had no visibility at all into their supply chain beyond their immediate suppliers.

But staying in the dark is no longer feasible for companies, particularly those in the United States, after the congressionally imposed ban on importing products from Xinjiang — where 100,000 ethnic minorities are presumed by the U.S. government to be working in conditions of forced labor — went into effect last year.

Xinjiang’s links to certain products are already well known. Experts have estimated that roughly one in five cotton garments sold globally contains cotton or yarn from Xinjiang. The region is also responsible for more than 40 percent of the world’s polysilicon, which is used in solar panels, and a quarter of its tomato paste.

But other industries, like cars, vinyl flooring and aluminum, also appear to have connections to suppliers in the region and are coming under more scrutiny from regulators.

Having a full picture of their supply chains can offer companies other benefits, like helping them recall faulty products or reduce costs. The information is increasingly needed to estimate how much carbon dioxide is actually emitted in the production of a good, or to satisfy other government rules that require products to be sourced from particular places — such as the Biden administration’s new rules on electric vehicle tax credits.

Executives at these technology companies say they envision a future, perhaps within the next decade, in which most supply chains are fully traceable, an outgrowth of both tougher government regulations and the wider adoption of these technologies.

“It’s eminently doable,” said Leonardo Bonanni, the chief executive of Sourcemap, which has helped companies like the chocolate maker Mars map out their supply chains. “If you want access to the U.S. market for your goods, it’s a small price to pay, frankly.”

Others express skepticism about the limitations of these technologies, including their cost. While Applied DNA’s technology, for example, adds only 5 to 7 cents to the price of a finished piece of apparel, that may be significant for retailers competing on thin margins.

And some express concerns about accuracy: databases, for example, may flag companies incorrectly. Investigators still need to be on the ground locally, they say, speaking with workers and remaining alert for signs of forced or child labor that may not show up in digital records.

Justin Dillon, the chief executive of FRDM, a nonprofit organization dedicated to ending forced labor, said there was “a lot of angst, a lot of confusion” among companies trying to satisfy the government’s new requirements.

Importers are “looking for boxes to check,” he said. “And transparency in supply chains is as much an art as it is a science. It’s kind of never done.”
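The blockchain-backed “digital twin” described above is, at its core, an append-only log in which each entry is bound to the previous one by a cryptographic hash, so later tampering with the recorded history is detectable. Here is a minimal sketch of that idea in plain Python; the event fields and sites are hypothetical, and a production platform would add signatures, distributed consensus and standardized schemas.

```python
import hashlib
import json

def add_event(chain, event):
    """Append an event, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    chain.append({"event": event,
                  "prev_hash": prev_hash,
                  "hash": hashlib.sha256(payload).hexdigest()})

def verify(chain):
    """Recompute every hash; any edit to an earlier entry breaks the links."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev_hash": prev_hash},
                             sort_keys=True).encode()
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

# Hypothetical journey of a batch of cotton.
chain = []
add_event(chain, {"step": "ginned", "site": "San Joaquin Valley"})
add_event(chain, {"step": "spun into yarn", "site": "factory in India"})
assert verify(chain)

# Rewriting the history invalidates every subsequent link.
chain[0]["event"]["site"] = "somewhere else"
assert not verify(chain)
```

The design choice worth noting is that trust comes from the linkage, not from any single record: a later participant cannot quietly rewrite an earlier step without the mismatch showing up on verification.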


    Tinkering With ChatGPT, Workers Wonder: Will This Take My Job?

In December, the staff of the American Writers and Artists Institute — a 26-year-old membership organization for copywriters — realized that something big was happening.

The newest edition of ChatGPT, a “large language model” that mines the internet to answer questions and perform tasks on command, had just been released. Its abilities were astonishing — and squarely in the bailiwick of people who generate content, such as advertising copy and blog posts, for a living.

“They’re horrified,” said Rebecca Matter, the institute’s president. Over the holidays, she scrambled to organize a webinar on the pitfalls and potential of the new artificial-intelligence technology. More than 3,000 people signed up, she said, and the overall message was cautionary but reassuring: Writers could use ChatGPT to complete assignments more quickly, and move into higher-level roles in content planning and search-engine optimization.

“I do think it’s going to minimize short-form copy projects,” Ms. Matter said. “But on the flip side of that, I think there will be more opportunities for things like strategy.”

OpenAI’s ChatGPT is the latest advance in a steady march of innovations that have offered the potential to transform many occupations and wipe out others, sometimes in tandem. It is too early to tally the enabled and the endangered, or to gauge the overall impact on labor demand and productivity. But it seems clear that artificial intelligence will impinge on work in different ways than previous waves of technology.

The positive view of tools like ChatGPT is that they could be complements to human labor, rather than replacements. Not all workers are sanguine, however, about the prospective impact.

Katie Brown is a grant writer in the Chicago suburbs for a small nonprofit group focused on addressing domestic violence. She was shocked to learn in early February that a professional association for grant writers was promoting the use of artificial-intelligence software that would automatically complete parts of an application, requiring the human simply to polish it before submitting.

The platform, called Grantable, is based on the same technology as ChatGPT, and it markets itself to freelancers who charge by the application. That, she thought, clearly threatens opportunities in the industry.

“For me, it’s common sense: Which do you think a small nonprofit will pick?” Ms. Brown said. “A full-time-salary-plus-benefits person, or someone equipped with A.I. that you don’t have to pay benefits for?”

Artificial intelligence and machine learning have been operating in the background of many businesses for years, helping, for example, to evaluate large numbers of possible decisions and better align supply with demand. And plenty of technological advancements over centuries have decreased the need for certain workers — although each time, the jobs created have more than offset the number lost.

ChatGPT, however, is the first to confront such a broad range of white-collar workers so directly, and to be so accessible that people could use it in their own jobs. And it is improving rapidly, with a new edition released this month. According to a survey conducted by the job search website ZipRecruiter after ChatGPT’s release, 62 percent of job seekers said they were concerned that artificial intelligence could derail their careers.

“ChatGPT is the one that made it more visible,” said Michael Chui, a partner at the McKinsey Global Institute who studies automation’s effects. “So I think it did start to raise questions about where timelines might start to be accelerated.”

That’s also the conclusion of a White House report on the implications of A.I. technology, including ChatGPT. “The primary risk of A.I. to the work force is in the general disruption it is likely to cause to workers, whether they find that their jobs are newly automated or that their job design has fundamentally changed,” the authors wrote.

For now, Guillermo Rubio has found that his job as a copywriter has changed markedly since he started using ChatGPT to generate ideas for blog posts, write first drafts of newsletters, create hundreds of slight variations on stock advertising copy and summon research on a subject about which he might write a white paper.

Since he still charges his clients the same rates, the tool has simply allowed him to work less. If the going rate for copy goes down, though — which it might, as the technology improves — he’s confident he’ll be able to move into consulting on content strategy, along with production.

“I think people are more reluctant and fearful, with good reason,” said Mr. Rubio, who is in Orange County, Calif. “You could look at it in a negative light, or you can embrace it. I think the biggest takeaway is you have to be adaptable. You have to be open to embracing it.”

After decades of study, researchers understand a lot about automation’s impact on the work force. Economists including Daron Acemoglu at the Massachusetts Institute of Technology have found that since 1980, technology has played a primary role in amplifying income inequality. As labor unions atrophied, hollowing out systems for training and retraining, workers without college educations saw their bargaining power reduced in the face of machines capable of rudimentary tasks.

The advent of ChatGPT three months ago, however, has prompted a flurry of studies predicated on the idea that this isn’t your average robot.

One team of researchers ran an analysis showing the industries and occupations that are most exposed to artificial intelligence, based on a model adjusted for generative language tools. Topping the list were college humanities professors, legal services providers, insurance agents and telemarketers. Mere exposure, however, doesn’t determine whether the technology is likely to replace workers or merely augment their skills.

Shakked Noy and Whitney Zhang, doctoral students at M.I.T., conducted a randomized, controlled trial with experienced professionals in fields such as human relations and marketing. The participants were given tasks that typically take 20 to 30 minutes, like writing news releases and brief reports. Those who used ChatGPT completed the assignments 37 percent faster on average than those who didn’t — a substantial productivity increase. They also reported a 20 percent increase in job satisfaction.

A third study — using a program developed by GitHub, which is owned by Microsoft — evaluated the impact of generative A.I. specifically on software developers. In a trial run by GitHub’s researchers, developers given an entry-level task and encouraged to use the program, called Copilot, completed their task 55 percent faster than those who did the assignment manually.

Those productivity gains are unlike almost any observed since the widespread adoption of the personal computer.

“It does seem to be doing something fundamentally different,” said David Autor, another M.I.T. economist, who advises Ms. Zhang and Mr. Noy. “Before, computers were powerful, but they simply and robotically did what people programmed them to do.” Generative artificial intelligence, on the other hand, is “adaptive, it learns and is capable of flexible problem solving.”

That’s very apparent to Peter Dolkens, a software developer for a company that primarily makes online tools for the sports industry. He has been integrating ChatGPT into his work for tasks like summarizing chunks of code to aid colleagues who may pick up the project after him, and proposing solutions to problems that have him stumped. If the answer isn’t perfect, he’ll ask ChatGPT to refine it, or try something different.

“It’s the equivalent of a very well-read intern,” said Mr. Dolkens, who is in London. “They might not have the experience to know how to apply it, but they know all the words, they’ve read all the books and they’re able to get part of the way there.”

There’s another takeaway from the initial research: ChatGPT and Copilot elevated the least experienced workers the most. If that holds more generally, it could mitigate the inequality-widening effects of artificial intelligence.

On the other hand, as each worker becomes more productive, fewer workers are required to complete a set of tasks. Whether that results in fewer jobs in particular industries depends on the demand for the service provided, and on the jobs that might be created in helping to manage and direct the A.I. “Prompt engineering,” for example, is already a skill that those who play around with ChatGPT long enough can add to their résumés.

Since demand for software code seems insatiable, and developers’ salaries are extremely high, increasing productivity seems unlikely to foreclose opportunities for people to enter the field.

That won’t be the same for every profession, however, and Dominic Russo is pretty sure it won’t be true for his: writing appeals to pharmacy benefit managers and insurance companies when they reject prescriptions for expensive drugs. He has been doing the job for about seven years, and has built expertise with only on-the-job training, after studying journalism in college.

After ChatGPT came out, he asked it to write an appeal on behalf of someone with psoriasis who wanted the expensive drug Otezla. The result was good enough to require only a few edits before submitting it.

“If you knew what to prompt the A.I. with, anyone could do the work,” Mr. Russo said. “That’s what really scares me. Why would a pharmacy pay me $70,000 a year, when they can license the technology and pay people $12 an hour to run prompts into it?”

To try to protect himself from that possible future, Mr. Russo has been building up his side business: selling pizzas out of his house in southern New Jersey, an enterprise that he figures won’t be disrupted by artificial intelligence.

Yet.


    Economists Pin More Blame on Tech for Rising Inequality

Recent research underlines the central role that automation has played in widening disparities.

Daron Acemoglu, an influential economist at the Massachusetts Institute of Technology, has been making the case against what he describes as “excessive automation.”

The economywide payoff of investing in machines and software has been stubbornly elusive. But he says the rising inequality resulting from those investments, and from the public policy that encourages them, is crystal clear.

Half or more of the increasing gap in wages among American workers over the last 40 years is attributable to the automation of tasks formerly done by human workers, especially men without college degrees, according to some of his recent research.

Globalization and the weakening of unions have played roles. “But the most important factor is automation,” Mr. Acemoglu said. And automation-fueled inequality is “not an act of God or nature,” he added. “It’s the result of choices corporations and we as a society have made about how to use technology.”

Mr. Acemoglu, a wide-ranging scholar whose research makes him one of the most cited economists in academic journals, is hardly the only prominent economist arguing that computerized machines and software, with a hand from policymakers, have contributed significantly to the yawning gaps in incomes in the United States. Their numbers are growing, and their voices add to the chorus of criticism surrounding the Silicon Valley giants and the unchecked advance of technology.

Paul Romer, who won a Nobel in economic science for his work on technological innovation and economic growth, has expressed alarm at the runaway market power and influence of the big tech companies. “Economists taught: ‘It’s the market. There’s nothing we can do,’” he said in an interview last year. “That’s really just so wrong.”

Anton Korinek, an economist at the University of Virginia, and Joseph Stiglitz, a Nobel economist at Columbia University, have written a paper, “Steering Technological Progress,” which recommends steps ranging from nudges for entrepreneurs to tax changes to pursue “labor-friendly innovations.”

Erik Brynjolfsson, an economist at Stanford, is a technology optimist in general. But in an essay to be published this spring in Daedalus, the journal of the American Academy of Arts and Sciences, he warns of “the Turing trap.” The phrase is a reference to the Turing test, named for Alan Turing, the English pioneer in artificial intelligence, in which the goal is for a computer program to engage in a dialogue so convincingly that it is indistinguishable from a human being.

For decades, Mr. Brynjolfsson said, the Turing test — matching human performance — has been the guiding metaphor for technologists, businesspeople and policymakers in thinking about A.I. That leads to A.I. systems designed to replace workers rather than enhance their performance. “I think that’s a mistake,” he said.

The concerns raised by these economists are getting more attention in Washington at a time when the giant tech companies are already being attacked on several fronts. Officials regularly criticize the companies for not doing enough to protect user privacy and say the companies amplify misinformation. State and federal lawsuits accuse Google and Facebook of violating antitrust laws, and Democrats are trying to rein in the market power of the industry’s biggest companies through new laws.

Mr. Acemoglu testified in November before the House Select Committee on Economic Disparity and Fairness in Growth at a hearing on technological innovation, automation and the future of work. The committee, which got underway in June, will hold hearings and gather information for a year and report its findings and recommendations.

Despite the partisan gridlock in Congress, Representative Jim Himes, a Connecticut Democrat and the chairman of the committee, is confident the committee can find common ground on some steps to help workers, like increased support for proven job-training programs.

“There’s nothing partisan about economic disparity,” Mr. Himes said, referring to the harm to millions of American families regardless of their political views.

Economists point to the postwar years, from 1950 to 1980, as a golden age when technology forged ahead and workers enjoyed rising incomes.

But afterward, many workers started falling behind. There was a steady advance of crucial automating technologies — robots and computerized machines on factory floors, and specialized software in offices. To stay ahead, workers required new skills.

Yet the technological shift evolved as growth in postsecondary education slowed and companies began spending less on training their workers. “When technology, education and training move together, you get shared prosperity,” said Lawrence Katz, a labor economist at Harvard. “Otherwise, you don’t.”

Increasing international trade tended to encourage companies to adopt automation strategies. For example, companies worried by low-cost competition from Japan and later China invested in machines to replace workers.

Today, the next wave of technology is artificial intelligence. And Mr. Acemoglu and others say it can be used mainly to assist workers, making them more productive, or to supplant them.

Mr. Acemoglu, like some other economists, has altered his view of technology over time. In economic theory, technology is almost a magic ingredient that both increases the size of the economic pie and makes nations richer. He recalled working on a textbook more than a decade ago that included the standard theory. Shortly after, while doing further research, he had second thoughts.

“It’s too restrictive a way of thinking,” he said. “I should have been more open-minded.”

Mr. Acemoglu is no enemy of technology. Its innovations, he notes, are needed to address society’s biggest challenges, like climate change, and to deliver economic growth and rising living standards. His wife, Asuman Ozdaglar, is the head of the electrical engineering and computer science department at M.I.T.

But as Mr. Acemoglu dug deeply into economic and demographic data, the displacement effects of technology became increasingly apparent. “They were greater than I assumed,” he said. “It’s made me less optimistic about the future.”

Mr. Acemoglu’s estimate that half or more of the increasing gap in wages in recent decades stemmed from technology was published last year with his frequent collaborator, Pascual Restrepo, an economist at Boston University. The conclusion was based on an analysis of demographic and business data that details the declining share of economic output that goes to workers as wages and the increased spending on machinery and software.

Mr. Acemoglu and Mr. Restrepo have published papers on the impact of robots and the adoption of “so-so technologies,” as well as the recent analysis of technology and inequality.

So-so technologies replace workers but do not yield big gains in productivity. As examples, Mr. Acemoglu cites self-checkout kiosks in grocery stores and automated customer service over the phone.

Today, he sees too much investment in such so-so technologies, which helps explain the sluggish productivity growth in the economy. By contrast, truly significant technologies create new jobs elsewhere, lifting employment and wages. The rise of the auto industry, for example, generated jobs in car dealerships, advertising, accounting and financial services.

Market forces have produced technologies that help people do their work rather than replace them. In computing, the examples include databases, spreadsheets, search engines and digital assistants.

But Mr. Acemoglu insists that a hands-off, free-market approach is a recipe for widening inequality, with all its attendant social ills. One important policy step, he recommends, is fair tax treatment for human labor. The tax rate on labor, including payroll and federal income tax, is 25 percent. After a series of tax breaks, the current rate on the costs of equipment and software is near zero.

Well-designed education and training programs for the jobs of the future, Mr. Acemoglu said, are essential. But he also believes that technology development should be steered in a more “human-friendly direction.” He takes inspiration from the development of renewable energy over the last two decades, which has been helped by government research, production subsidies and social pressure on corporations to reduce carbon emissions.

“We need to redirect technology so it works for people,” Mr. Acemoglu said, “not against them.”