More stories

  • Biden to Restrict Investments in China, Citing National Security Threats

    The measure to clamp down on investments in certain industries deemed to pose security risks, set to be issued Wednesday, appears likely to open a new front in the U.S.-China economic conflict.

    The Biden administration plans on Wednesday to issue new restrictions on American investments in certain advanced industries in China, according to people familiar with the deliberations, a move that supporters have described as necessary to protect national security but that will undoubtedly rankle Beijing.

    The measure would be one of the first significant steps the United States has taken amid an economic clash with China to clamp down on outgoing financial flows. It could set the stage for more restrictions on investments between the two countries in the years to come.

    The restrictions would bar private equity and venture capital firms from making investments in certain high-tech sectors, like quantum computing, artificial intelligence and advanced semiconductors, the people said, in a bid to stop the transfer of American dollars and expertise to China.

    The measure would also require firms making investments in a broader range of Chinese industries to report that activity, giving the government better visibility into financial exchanges between the United States and China.

    The White House declined to comment. But Biden officials have emphasized that outright restrictions on investment would narrowly target a few sectors that could aid the Chinese military or surveillance state, as they seek to combat security threats but not disrupt legitimate business with China.

    “There is mounting evidence that U.S. capital is being used to advance Chinese military capabilities and that the U.S. lacks a sufficient means of combating this activity,” said Emily Benson, the director of the project on trade and technology at the Center for Strategic and International Studies, a Washington think tank.

    The Biden administration has recently sought to calm relations with China, dispatching Treasury Secretary Janet L. Yellen and other top officials to talk with Chinese counterparts. In recent speeches, Biden officials have argued that targeted actions taken against China are aimed purely at protecting U.S. national security, not at damaging the Chinese economy.

    At the same time, the Biden administration has continued to push to “de-risk” critical supply chains by developing suppliers outside China, and it has steadily ramped up its restrictions on selling certain technologies to China, including semiconductors for advanced computing.

    The Chinese government has long restricted certain foreign investments by individuals and firms. Other governments, such as those of Taiwan and South Korea, also have restrictions on outgoing investments.

    But beyond screening Chinese investment into the United States for security risks, the U.S. government has left financial flows between the world’s two largest economies largely untouched. Just a few years ago, American policymakers were working to open up Chinese financial markets for U.S. firms.

    In the past few years, investments between the United States and China have fallen sharply as the countries severed other economic ties. But venture capital and private equity firms have continued to seek out lucrative opportunities for partnerships, as a way to gain access to China’s vibrant tech industry.

    The planned measure has already faced criticism from some congressional Republicans and others who say it has taken too long and does not go far enough to limit U.S. funding of Chinese technology. In July, a House committee on China sent letters to four U.S. venture capital firms expressing “serious concern” about their investments in Chinese companies in areas including artificial intelligence and semiconductors.

    Others have argued that the restriction would mainly put the U.S. economy at a disadvantage, because other countries continue to forge technology partnerships with China, and China has no shortage of capital.

    Nicholas R. Lardy, a nonresident senior fellow at the Peterson Institute for International Economics, said the United States was the source of less than 5 percent of China’s inbound direct investment in 2021 and 2022.

    “Unless other major investors in China adopt similar restrictions, I think this is a waste of time,” Mr. Lardy said. “Pushing this policy now simply plays into the hands of those in Beijing who believe that the U.S. seeks to contain China and are not interested in renewed dialogue or a ‘thaw.’”

    Biden officials have talked with allies in recent months to explain the measure and encourage other governments to adopt similar restrictions, including at the Group of 7 meetings in Japan in May. Since then, Ursula von der Leyen, the president of the European Commission, has urged the European Union to introduce its own measure.

    The administration is expected to give businesses and other organizations a chance to comment on the new rules before they are finalized in the months to come.

    Claire Chu, a senior China analyst at Janes, a defense intelligence company, said that communicating and enforcing the measure would be difficult, and that officials would need to engage closely with Silicon Valley and Wall Street.

    “For a long time, the U.S. national security community has been reticent to recognize the international financial system as a potential warfighting domain,” she said. “And the business community has pushed back against what it considers to be the politicization of private markets. And so this is not only an interagency effort, but an exercise in intersectoral coordination.”

  • ‘Training My Replacement’: Inside a Call Center Worker’s Battle With A.I.

    To many people, chatbots and other technology feel like a ticking time bomb, sure to explode their work. But to some, the threat is already here.

    “This A.I. stuff is getting really crazy.”

    The voices of Charlamagne tha God, host of the nationally syndicated radio show “The Breakfast Club,” and his guests Mandii B and WeezyWTF filled Ylonda Sherrod’s car as she sped down Interstate 10 in Mississippi during her daily commute. Her favorite radio show was discussing artificial intelligence, specifically an A.I.-generated sample of Biggie.

    “Sonically, it sounds cool,” Charlamagne tha God said. “But it lacks soul.”

    WeezyWTF replied: “I’ve had people ask me like, ‘Oh, would you replace people that work for you with A.I.?’ I’m like, ‘No, dude.’”

    Ms. Sherrod nodded along emphatically, as she drove past low-slung brick homes and strip malls dotted with Waffle Houses. She arrived at the AT&T call center where she works, feeling unsettled. She played the radio exchange about A.I. for a colleague.

    “Yeah, that’s crazy,” Ms. Sherrod’s friend replied. “What do you think about us?”

    Like so many millions of American workers, across so many thousands of workplaces, the roughly 230 customer service representatives at AT&T’s call center in Ocean Springs, Miss., watched artificial intelligence arrive over the past year both rapidly and assuredly, like a new manager settling in and kicking up its feet.

    Suddenly, the customer service workers weren’t taking their own notes during calls with customers. Instead, an A.I. tool generated a transcript, which their managers could later consult. A.I. technology was providing suggestions of what to tell customers. Customers were also spending time on phone lines with automated systems, which solved simple questions and passed on the complicated ones to human representatives.

    Ms. Sherrod, 38, who exudes quiet confidence at 5-foot-11, regarded the new technology with a combination of irritation and fear. “I always had a question in the back of my mind,” she said. “Am I training my replacement?”

    Ms. Sherrod, a vice president of the call center’s local union chapter, part of the Communications Workers of America, started asking AT&T managers questions. “If we don’t talk about this, it could jeopardize my family,” she said. “Will I be jobless?”

    In recent months, the A.I. chatbot ChatGPT has made its way into courtrooms, classrooms, hospitals and everywhere in between. With it has come speculation about A.I.’s impact on jobs. To many people, A.I. feels like a ticking time bomb, sure to explode their work. But to some, like Ms. Sherrod, the threat of A.I. isn’t abstract. They can already feel its effects.

    When automation swallows up jobs, it often comes for customer service roles first, which make up about three million jobs in America. Automation tends to overtake tasks that repeat themselves; customer service, already a major site for outsourcing of jobs abroad, can be a prime candidate.

    The AT&T call center where Ms. Sherrod works, in Ocean Springs, Miss. The company has increasingly been integrating A.I. into many parts of its customer service work. Bryan Tarnowski for The New York Times

    A majority of U.S. call center workers surveyed this year reported that their employers were automating some of their work, according to a 2,000-person survey from researchers at Cornell. Nearly two-thirds of respondents said they felt it was somewhat or very likely that increased use of bots would lead to layoffs within the next two years.

    Technology executives point out that fears of automation are centuries old — stretching back to the Luddites, who smashed and burned textile machines — but have historically been undercut by a reality in which automation creates more jobs than it eliminates.

    But that job creation happens gradually. The new jobs that technology creates, like engineering roles, often demand complex skills. That can create a gap for workers like Ms. Sherrod, who found what seemed like a golden ticket at AT&T: a job that pays $21.87 an hour and up to $3,000 in commissions a month, she said, and provides health care and five weeks of vacation — all without the requirement of a college degree. (Less than 5 percent of AT&T’s roles require a college education.)

    Customer service, to Ms. Sherrod, meant that someone like her — a young Black woman raised by her grandmother in small-town Mississippi — could make “a really good living.”

    “We’re breaking generational curses,” Ms. Sherrod said. “That’s for sure.”

    In Ms. Sherrod’s childhood home, a one-story, brick A-frame in Pascagoula, money was tight. Her mother died when she was 5. Her grandmother, who took her in, didn’t work, but Ms. Sherrod remembers getting food stamps to take to the corner bakery whenever the family could spare them. Ms. Sherrod cries recalling how Christmas used to be. The family had a plastic tree and tried to make it festive with ornaments, but there was typically no money for presents.

    To students at Pascagoula High School, she recalled, job opportunities seemed limited. Many went to Ingalls Shipbuilding, a shipyard where work meant blistering days under the Mississippi sun. Others went to the local Chevron refinery.

    “It felt like I was going to always have to do hard labor in order to make a living,” Ms. Sherrod said. “It seemed like my lifestyle would never be something with ease, something I enjoyed.”

    When Ms. Sherrod was 16, she worked at KFC, making $6.50 an hour. After graduating from high school, and dropping out of community college, she moved to Biloxi, Miss., to work as a maid at IP Casino, a 32-story hotel, where her sister still works. Within months of working at the casino, Ms. Sherrod felt the toll of the job on her body. Her knees ached, and her back thrummed with pain. She had to clean at least 16 rooms a day, fishing hair out of bathroom drains and rolling up dirty sheets.

    When a friend told her about the jobs at AT&T, the opportunity seemed, to Ms. Sherrod, impossibly good. The call center was air-conditioned. She could sit all day and rest her knees. She took the call center’s application test twice, and on her second time she got an offer, in 2006, starting out making $9.41 an hour, up from around $7.75 at the casino.

    “That $9 meant so much to me,” she recalled.

    So did AT&T, a place where she kept growing more comfortable: “Out of 17 years, my check hasn’t ever been wrong,” she said. “AT&T, by far, is the best job in the area.”

    ‘Your Biggest Nightmare’

    Sam Altman, the chief executive of OpenAI, testified before a Senate subcommittee in May. In recent months, OpenAI’s ChatGPT chatbot has made its way into courtrooms. Win McNamee/Getty Images

    This spring, lawmakers in Washington hauled forward the makers of A.I. tools to begin discussing the risks posed by the products they’ve unleashed.

    “Let me ask you what your biggest nightmare is,” Senator Richard Blumenthal, Democrat of Connecticut, asked OpenAI’s chief executive, Sam Altman, after sharing that his own greatest fear was job loss.

    “There will be an impact on jobs,” said Mr. Altman, whose company developed ChatGPT.

    That reality has already become clear. The British telecommunications company BT Group announced in May that it would cut up to 55,000 jobs by 2030 as it increasingly relied on A.I. The chief executive of IBM said A.I. would affect certain clerical jobs in the company, eliminating the need for up to 30 percent of some roles, while creating new ones.

    AT&T has begun integrating A.I. into many parts of its customer service work, including routing customers to agents, offering suggestions for technical solutions during customer calls and producing transcripts. The company said all of these uses were intended to create a better experience for customers and workers. “We’re really trying to focus on using A.I. to augment and assist our employees,” said Nicole Rafferty, who leads AT&T’s customer care operation and works with staff members nationwide.

    “We’re always going to need in-person engagement to solve those complex customer situations,” Ms. Rafferty added. “That’s why we’re so focused on building A.I. that supports our employees.”

    Economists studying A.I. have argued that it most likely won’t prompt sudden widespread layoffs. Instead, it could gradually eliminate the need for humans to do certain tasks — and make the remaining work more challenging.

    “The tasks left to call center workers are the most complex ones, and customers are frustrated,” said Virginia Doellgast, a professor at the New York State School of Industrial and Labor Relations at Cornell.

    Ms. Sherrod has always enjoyed getting to know her customers. She said she took about 20 calls a day, from 9:30 to 6:30. While she’s resolving technical issues, she listens to why people are calling in, and she hears from customers who just bought new homes, were married or lost family members.

    “It’s sort of like you’re a therapist,” she said. “They tell you their life stories.”

    She is already finding her job growing more challenging with A.I. The automated technology has a hard time understanding Ms. Sherrod’s drawl, she said, so the transcripts from her calls are full of mistakes. Once the technology is no longer in a pilot phase, she won’t be able to make corrections. (AT&T said it was refining the A.I. products it used to prevent these kinds of errors.)

    It seems likely, to Ms. Sherrod, that at some point as the work gets more efficient, the company won’t need quite as many humans answering calls in its centers. Ms. Sherrod wonders, too: Doesn’t the company trust her? For two consecutive years, she won AT&T’s Summit Award, placing her in the top 3 percent of the company’s customer service representatives nationally. Her name was projected on the call center’s wall.

    “They gave everyone a little gift bag with a trophy,” Ms. Sherrod recalled. “That meant a lot to me.”

    ‘Look at My Life’

    Ms. Sherrod at the Communications Workers of America’s regional labor union office where she is a vice president. Bryan Tarnowski for The New York Times

    As companies like AT&T embrace A.I., experts are floating proposals meant to protect workers. There’s the possibility of training programs helping people make the transition to new jobs, or a displacement tax levied on employers when a worker’s job is automated but the person is not retrained.

    Labor unions are wading into these battles. In Hollywood, the unions representing actors and television writers have fought to limit the use of A.I. in script writing and production.

    Just 6 percent of the country’s private-sector workers are represented by unions. Ms. Sherrod is one, and she has begun fighting her company for more information about its A.I. plans, sitting in her union hall nine miles from the call center, where she works under a Norman Rockwell painting of a wireline technician.

    For years, Ms. Sherrod’s demands on behalf of the union have been rote. As a steward, she typically asked the company to reduce penalties for colleagues who got in trouble.

    But for the first time, this summer, she feels that she is taking up an issue that will affect workers beyond AT&T. She recently asked her union to establish a task force focused on A.I.

    In late May, Ms. Sherrod was invited by the Communications Workers of America to travel to Washington, where she and dozens of other workers met with the White House’s Office of Public Engagement to share their experience with A.I.

    A warehouse worker described being monitored with A.I. that tracked how speedily he moved packages, creating pressure for him to skip breaks. A delivery driver said automated surveillance technologies were being used to monitor workers and look for potential disciplinary actions, even though their records weren’t reliable. Ms. Sherrod described how the A.I. in her call center created inaccurate summaries of her work.

    Her son, Malik, was astonished to hear that his mother was headed to the White House. “When my dad told me about it, at first I said, ‘You’re lying,’” he said with a laugh.

    With her pay and commissions, Ms. Sherrod has been able to buy a home and give her son, Malik, the childhood she never had. Bryan Tarnowski for The New York Times

    Ms. Sherrod sometimes feels that her life presents an argument for a type of job that one day might no longer exist.

    With her pay and commissions, she has been able to buy a home. She lives on a sunny street full of families, some of whom work in fields like nursing and accounting. She is down the road from a softball field and playground. On the weekends, her neighbors gather for cookouts. The adults eat snowballs, while the children play basketball and set up splash pads.

    Ms. Sherrod takes pride in buying Malik anything he asks for. She wants to give him the childhood she never had.

    “Call center work — it’s life-changing,” she said. “Look at my life. Will all that be taken away from me?”

  • As Businesses Clamor for Workplace A.I., Tech Companies Rush to Provide It

    Amazon, Box, Salesforce, Oracle and others have recently rolled out A.I.-related products to help workplaces become more efficient and productive.

    Earlier this year, Mark Austin, the vice president of data science at AT&T, noticed that some of the company’s developers had started using the ChatGPT chatbot at work. When the developers got stuck, they asked ChatGPT to explain, fix or hone their code.

    It seemed to be a game-changer, Mr. Austin said. But since ChatGPT is a publicly available tool, he wondered if it was secure for businesses to use.

    So in January, AT&T tried a product from Microsoft called Azure OpenAI Service that lets businesses build their own A.I.-powered chatbots. AT&T used it to create a proprietary A.I. assistant, Ask AT&T, which helps its developers automate their coding process. AT&T’s customer service representatives also began using the chatbot to help summarize their calls, among other tasks.

    “Once they realize what it can do, they love it,” Mr. Austin said. Forms that once took hours to complete needed only two minutes with Ask AT&T, so employees could focus on more complicated tasks, he said, and developers who used the chatbot increased their productivity by 20 to 50 percent.

    AT&T is one of many businesses eager to find ways to tap the power of generative artificial intelligence, the technology that powers chatbots and that has gripped Silicon Valley with excitement in recent months. Generative A.I. can produce its own text, photos and video in response to prompts, capabilities that can help automate tasks such as taking meeting minutes and cut down on paperwork.

    To meet this new demand, tech companies are racing to introduce products for businesses that incorporate generative A.I. Over the past three months, Amazon, Box and Cisco have unveiled plans for generative A.I.-powered products that produce code, analyze documents and summarize meetings. Salesforce also recently rolled out generative A.I. products used in sales, marketing and its Slack messaging service, while Oracle announced a new A.I. feature for human resources teams.

    These companies are also investing more in A.I. development. In May, Oracle and Salesforce Ventures, the venture capital arm of Salesforce, invested in Cohere, a Toronto start-up focused on generative A.I. for business use. Oracle is also reselling Cohere’s technology.

    Salesforce recently rolled out generative A.I. products used in sales, marketing and its Slack messaging service. Jeenah Moon for The New York Times

    “I think this is a complete breakthrough in enterprise software,” Aaron Levie, chief executive of Box, said of generative A.I. He called it “this incredibly exciting opportunity where, for the first time ever, you can actually start to understand what’s inside of your data in a way that wasn’t possible before.”

    Many of these tech companies are following Microsoft, which has invested $13 billion in OpenAI, the maker of ChatGPT. In January, Microsoft made Azure OpenAI Service available to customers, who can then access OpenAI’s technology to build their own versions of ChatGPT. As of May, the service had 4,500 customers, said John Montgomery, a Microsoft corporate vice president.

    Aaron Levie, chief executive of Box, said generative A.I. creates “a complete breakthrough in enterprise software.” Michael Short/Bloomberg

    For the most part, tech companies are now rolling out four kinds of generative A.I. products for businesses: features and services that generate code for software engineers, create new content such as sales emails and product descriptions for marketing teams, search company data to answer employee questions, and summarize meeting notes and lengthy documents.

    “It is going to be a tool that is used by people to accomplish what they are already doing,” said Bern Elliot, a vice president and analyst at the I.T. research and consulting firm Gartner.

    But using generative A.I. in workplaces has risks. Chatbots can produce inaccuracies and misinformation, provide inappropriate responses and leak data. A.I. remains largely unregulated.

    In response to these issues, tech companies have taken some steps. To prevent data leakage and to enhance security, some have engineered generative A.I. products so they do not keep a customer’s data.

    When Salesforce last month introduced AI Cloud, a service with nine generative A.I.-powered products for businesses, the company included a “trust layer” to help mask sensitive corporate information to stop leaks and promised that what users typed into these products would not be used to retrain the underlying A.I. model.

    Similarly, Oracle said that customer data would be kept in a secure environment while training its A.I. model and added that it would not be able to see the information.

    Salesforce offers AI Cloud starting at $360,000 annually, with the cost rising depending on the amount of usage. Microsoft charges for Azure OpenAI Service based on the version of OpenAI technology that a customer chooses, as well as the amount of usage.

    For now, generative A.I. is used mainly in workplace scenarios that carry low risks — instead of highly regulated industries — with a human in the loop, said Beena Ammanath, the executive director of the Deloitte A.I. Institute, a research center of the consulting firm. A recent Gartner survey of 43 companies found that over half the respondents had no internal policy on generative A.I.

    “It is not just about being able to use these new tools efficiently, but it is also about preparing your work force for the new kinds of work that might evolve,” Ms. Ammanath said. “There is going to be new skills needed.”

    Panasonic Connect began using Microsoft’s Azure OpenAI Service to make its own chatbot in February. Panasonic Connect

    Panasonic Connect, part of the Japanese electronics company Panasonic, began using Microsoft’s Azure OpenAI Service to make its own chatbot in February. Today, its employees ask the chatbot 5,000 questions a day about everything from drafting emails to writing code.

    While Panasonic Connect had expected its engineers to be the main users of the chatbot, other departments — such as legal, accounting and quality assurance — also turned to it to help summarize legal documents, brainstorm solutions to improve product quality and handle other tasks, said Judah Reynolds, Panasonic Connect’s marketing and communications chief.

    “Everyone started using it in ways that we didn’t even foresee ourselves,” he said. “So people are really taking advantage of it.”
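    The pattern this story describes, a company wiring an internal assistant like Ask AT&T or Panasonic Connect’s chatbot to a private Azure OpenAI deployment, comes down to sending employee prompts to a chat-completion endpoint inside the company’s own cloud tenant. Here is a minimal sketch using the AzureOpenAI client from the openai Python SDK; the endpoint, deployment name and system prompt are invented placeholders, not details of any company’s actual setup.

        import os

        from openai import AzureOpenAI

        # Placeholder endpoint and deployment: a real company would point these
        # at its own Azure resource rather than the public ChatGPT service.
        client = AzureOpenAI(
            azure_endpoint="https://example-corp.openai.azure.com",
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-02-01",
        )

        def ask_assistant(question: str) -> str:
            """Send one employee question to the company's private deployment."""
            response = client.chat.completions.create(
                model="example-gpt-deployment",  # hypothetical deployment name
                messages=[
                    {"role": "system", "content": "You are an internal company assistant."},
                    {"role": "user", "content": question},
                ],
            )
            return response.choices[0].message.content

        print(ask_assistant("Summarize this customer call transcript: ..."))

    A private deployment like this is the kind of arrangement the article credits with easing the security worry Mr. Austin raised: prompts and completions go to the company’s own cloud resource instead of a consumer chatbot.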

  • Biden Administration Weighs Further Curbs on Sales of A.I. Chips to China

    Reports that the White House may clamp down on sales of semiconductors that power artificial intelligence capabilities sent tech stocks diving.

    The Biden administration is weighing additional curbs on China’s ability to access critical technology, including restricting the sale of high-end chips used to power artificial intelligence, according to five people familiar with the deliberations.

    The curbs would clamp down on the sales to China of advanced chips made by companies like Nvidia, Advanced Micro Devices and Intel, which are needed for the data centers that power artificial intelligence.

    Biden officials have said that China’s artificial intelligence capabilities could pose a national security threat to the United States by enhancing Beijing’s military and security apparatus. Among the concerns is the use of A.I. in guiding weapons, carrying out cyber warfare and powering facial recognition systems used to track dissidents and minorities.

    But such curbs would be a blow to semiconductor manufacturers, including those in the United States, which still generate much of their revenue in China.

    The deliberations were earlier reported by The Wall Street Journal. Nvidia’s shares closed down 1.8 percent on Wednesday after reports of the potential export crackdown. The company has been one of the primary beneficiaries of the enthusiasm over artificial intelligence, with its share price surging by roughly 180 percent this year.

    Such additional restrictions, if adopted, would not have an immediate impact on Nvidia’s financial results, Colette Kress, the chief financial officer of Nvidia, said Wednesday at an event hosted by an investment firm. But over the long term, they “will result in a permanent loss of opportunities for the U.S. industry to compete and lead in one of the world’s largest markets,” she said. She added that China typically generates 20 percent to 25 percent of the company’s data center revenue, which includes other products in addition to chips that enable A.I.

    The stock prices of the chip companies Qualcomm and Intel fell less than 2 percent on Wednesday, while AMD nudged 0.2 percent lower.

    Intel declined to comment, as did the Commerce Department, which oversees export controls. AMD did not respond to a request for comment.

    Curbing the sale of high-end chips would be the latest step in the Biden administration’s campaign to starve China of advanced technology that is needed to power everything from self-driving cars to robotics.

    Last October, the administration issued sweeping restrictions on the types of advanced semiconductors and chip-making machinery that could be sent to China. The rules were applied across the industry, but they had particularly strong consequences for Nvidia. The company, an industry leader, was barred from selling China its top-line A100 and H100 chips — which are adept at running the many processes required to build artificial intelligence — unless it first obtained a special license.

    In response to those restrictions, Nvidia began offering the downgraded A800 and H800 chips in China last year.

    The additional restrictions under consideration, which would come as part of the process of finalizing those earlier rules, would also bar sales of Nvidia’s A800 and H800 chips, and similar advanced chips from competitors like AMD and Intel, unless those companies obtained a license from the Commerce Department to continue shipping to the country.

    The deliberations have touched off an intense lobbying battle, with Intel and Nvidia working to prevent further curbs on their business.

    Chip companies say cutting them off from a major market like China will substantially eat into their revenues and reduce their ability to spend on research and innovation of new chips. In an interview with The Financial Times last month, Nvidia’s chief executive, Jensen Huang, warned that the U.S. tech industry was at risk of “enormous damage” if it were to be cut off from trading with China.

    The Biden administration has also been internally debating where to draw the line on chip sales to China. The goal is to limit technological capacity that could aid the Chinese military in guiding weapons, developing autonomous drones, carrying out cyber warfare and powering surveillance systems, while minimizing the impact such rules would have on private companies.

    The measure, which would come as the United States is also considering expanded curbs on U.S. investment in Chinese technology firms, is also likely to ruffle the Chinese government. Biden officials have been working in recent weeks to improve bilateral relations after a falling-out with Beijing this year, after a Chinese surveillance balloon flew over the United States.

    Antony J. Blinken, the secretary of state, traveled to Beijing this month to meet with his counterparts, and Treasury Secretary Janet L. Yellen is also expected to travel to China soon.

    During a Wednesday appearance at the Council on Foreign Relations in New York, Mr. Blinken said that China’s concern that the U.S. sought to slow its economic growth was “a lengthy part of the conversation that we just had in Beijing.”

    Chinese officials, he said, believe the U.S. seeks “to hold them back, globally, and economically.” But he disputed that notion.

    “How is it in our interest to allow them to get technology that they may turn around and use against us?” he asked, citing China’s expanding nuclear weapons program, its development of hypersonic missiles and its use of artificial intelligence “potentially for repressive purposes.”

    “If they were in our shoes, they would do exactly the same thing,” he said, adding that the U.S. was imposing “very targeted, very narrowly defined controls.”

    Nvidia’s valuation has soared in light of the recent boom in generative artificial intelligence services, which can produce complex written answers to questions and images based on a single prompt. Microsoft has teamed up with OpenAI, which makes the chatbot ChatGPT, to generate results in its Bing search engine, while Google has built a competing chatbot called Bard.

    As companies race to incorporate the technology into their products, demand has increased for chips like Nvidia’s that can handle the complex computing tasks. That momentum has helped to push Nvidia’s market capitalization past $1 trillion, making the company the world’s sixth largest by value.

    Nvidia said in an August filing that $400 million in revenue from “potential sales to China” could be subject to U.S. export restrictions, including sales of the A100, if “customers do not want to purchase the company’s alternative product offerings” or if the government failed to grant licenses to allow the company to continue to sell the chip inside China.

    Since the restrictions were imposed, Chinese chip makers have been trying to overhaul their supply chains and develop domestic sources of advanced chips, but China’s capabilities to produce the most advanced chips remain many years behind those of the United States.

    Dan Wang, a visiting scholar at Yale Law School, said that the impact of advanced chip restrictions on Chinese tech companies was uncertain.

    “Most of their business needs are driven by less advanced chips, as fewer of them are playing on the fringes of the most advanced A.I.,” he said.

  • Facial Recognition Spreads as Tool to Fight Shoplifting

    Simon Mackenzie, a security officer at the discount retailer QD Stores outside London, was short of breath. He had just chased after three shoplifters who had taken off with several packages of laundry soap. Before the police arrived, he sat at a back-room desk to do something important: Capture the culprits’ faces.

    On an aging desktop computer, he pulled up security camera footage, pausing to zoom in and save a photo of each thief. He then logged in to a facial recognition program, Facewatch, which his store uses to identify shoplifters. The next time those people enter any shop within a few miles that uses Facewatch, store staff will receive an alert.

    “It’s like having somebody with you saying, ‘That person you bagged last week just came back in,’” Mr. Mackenzie said.

    Use of facial recognition technology by the police has been heavily scrutinized in recent years, but its application by private businesses has received less attention. Now, as the technology improves and its cost falls, the systems are reaching further into people’s lives. No longer just the purview of government agencies, facial recognition is increasingly being deployed to identify shoplifters, problematic customers and legal adversaries.

    Facewatch, a British company, is used by retailers across the country frustrated by petty crime. For as little as 250 pounds a month, or roughly $320, Facewatch offers access to a customized watchlist that stores near one another share. When Facewatch spots a flagged face, an alert is sent to a smartphone at the shop, where employees decide whether to keep a close eye on the person or ask the person to leave.

    Mr. Mackenzie adds one or two new faces every week, he said, mainly people who steal diapers, groceries, pet supplies and other low-cost goods. He said their economic hardship made him sympathetic, but that the number of thefts had gotten so out of hand that facial recognition was needed. Usually at least once a day, Facewatch alerts him that somebody on the watchlist has entered the store.

    Mr. Mackenzie adds one or two new faces a week to the Facewatch watchlist that stores in the area share. Suzie Howell for The New York Times

    A sign at a supermarket that uses Facewatch in Bristol, England. Suzie Howell for The New York Times

    Facial recognition technology is proliferating as Western countries grapple with advances brought on by artificial intelligence. The European Union is drafting rules that would ban many of facial recognition’s uses, while Eric Adams, the mayor of New York City, has encouraged retailers to try the technology to fight crime. MSG Entertainment, the owner of Madison Square Garden and Radio City Music Hall, has used automated facial recognition to refuse entry to lawyers whose firms have sued the company.

    Among democratic nations, Britain is at the forefront of using live facial recognition, with courts and regulators signing off on its use. The police in London and Cardiff are experimenting with the technology to identify wanted criminals as they walk down the street. In May, it was used to scan the crowds at the coronation of King Charles III.

    But the use by retailers has drawn criticism as a disproportionate solution for minor crimes. Individuals have little way of knowing they are on the watchlist or how to appeal. In a legal complaint last year, Big Brother Watch, a civil society group, called it “Orwellian in the extreme.”

    Fraser Sampson, Britain’s biometrics and surveillance camera commissioner, who advises the government on policy, said there was “a nervousness and a hesitancy” around facial recognition technology because of privacy concerns and poorly performing algorithms in the past.

    “But I think in terms of speed, scale, accuracy and cost, facial recognition technology can in some areas, you know, literally be a game changer,” he said. “That means its arrival and deployment is probably inevitable. It’s just a case of when.”

    ‘You can’t expect the police to come’

    Simon Gordon, the owner of Gordon’s Wine Bar in London, founded Facewatch in 2010. As a business owner, “you’ve got to help yourself,” he said. Suzie Howell for The New York Times

    Facewatch was founded in 2010 by Simon Gordon, the owner of a popular 19th-century wine bar in central London known for its cellarlike interior and popularity among pickpockets.

    At the time, Mr. Gordon hired software developers to create an online tool to share security camera footage with the authorities, hoping it would save the police time filing incident reports and result in more arrests.

    There was limited interest, but Mr. Gordon’s fascination with security technology was piqued. He followed facial recognition developments and had the idea for a watchlist that retailers could share and contribute to. It was like the photos of shoplifters that stores keep next to the register, but supercharged into a collective database to identify bad guys in real time.

    By 2018, Mr. Gordon felt the technology was ready for commercial use.

    “You’ve got to help yourself,” he said in an interview. “You can’t expect the police to come.”

    Facewatch, which licenses facial recognition software made by RealNetworks and Amazon, is now inside nearly 400 stores across Britain. Trained on millions of pictures and videos, the systems read the biometric information of a face as the person walks into a shop and check it against a database of flagged people.

    Facewatch’s watchlist is constantly growing as stores upload photos of shoplifters and problematic customers. Once added, a person remains there for a year before being deleted.

    ‘Mistakes are rare but do happen’

    Every time Facewatch’s system identifies a shoplifter, a notification goes to a person who passed a test to be a “super recognizer” — someone with a special talent for remembering faces. Within seconds, the super recognizer must confirm the match against the Facewatch database before an alert is sent.

    Facewatch is used in about 400 British stores. Suzie Howell for The New York Times

    But while the company has created policies to prevent misidentification and other errors, mistakes happen.

    In October, a woman buying milk in a supermarket in Bristol, England, was confronted by an employee and ordered to leave. She was told that Facewatch had flagged her as a barred shoplifter.

    The woman, who asked that her name be withheld because of privacy concerns and whose story was corroborated by materials provided by her lawyer and Facewatch, said there must have been a mistake. When she contacted Facewatch a few days later, the company apologized, saying it was a case of mistaken identity.

    After the woman threatened legal action, Facewatch dug into its records. It found that the woman had been added to the watchlist because of an incident 10 months earlier involving £20 of merchandise, about $25. The system “worked perfectly,” Facewatch said.

    But while the technology had correctly identified the woman, it did not leave much room for human discretion. Neither Facewatch nor the store where the incident occurred contacted her to let her know that she was on the watchlist and to ask what had happened.

    The woman said she did not recall the incident and had never shoplifted. She said she may have walked out after not realizing that her debit card payment failed to go through at a self-checkout kiosk.

    Madeleine Stone, the legal and policy officer for Big Brother Watch, said Facewatch was “normalizing airport-style security checks for everyday activities like buying a pint of milk.”

    Mr. Gordon declined to comment on the incident in Bristol.

    In general, he said, “mistakes are rare but do happen.” He added, “If this occurs, we acknowledge our mistake, apologize, delete any relevant data to prevent reoccurrence and offer proportionate compensation.”

    Approved by the privacy office

    A woman said Facewatch had misidentified her at the Bristol market. Facewatch said the system had “worked perfectly.” Suzie Howell for The New York Times

    Civil liberties groups have raised concerns about Facewatch and suggested that its deployment to prevent petty crime might be illegal under British privacy law, which requires that biometric technologies have a “substantial public interest.”

    The U.K. Information Commissioner’s Office, the privacy regulator, conducted a yearlong investigation into Facewatch. The office concluded in March that Facewatch’s system was permissible under the law, but only after the company made changes to how it operated.

    Stephen Bonner, the office’s deputy commissioner for regulatory supervision, said in an interview that the investigation had led Facewatch to change its policies: It would put more signage in stores, share among stores only information about serious and violent offenders and send out alerts only about repeat offenders. That means people will not be put on the watchlist after a single minor offense, as happened to the woman in Bristol.

    “That reduces the amount of personal data that’s held, reduces the chances of individuals being unfairly added to this kind of list and makes it more likely to be accurate,” Mr. Bonner said. The technology, he said, is “not dissimilar to having just very good security guards.”

    Liam Ardern, the operations manager for Lawrence Hunt, which owns 23 Spar convenience stores that use Facewatch, estimates the technology has saved the company more than £50,000 since 2020.

    He called the privacy risks of facial recognition overblown. The only example of misidentification that he recalled was when a man was confused for his identical twin, who had shoplifted. Critics overlook that stores like his operate on thin profit margins, he said.

    “It’s easy for them to say, ‘No, it’s against human rights,’” Mr. Ardern said. If shoplifting isn’t reduced, he said, his shops will have to raise prices or cut staff.
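    The pipeline this story describes can be summarized in a short sketch: a camera frame is reduced to a biometric template, compared against a shared watchlist, confirmed by a human “super recognizer,” and only then turned into an alert, with entries expiring after a year. This is an illustration of the general technique, not Facewatch’s actual code; the embedding format and similarity threshold are hypothetical stand-ins for a commercial face recognition model.

        from dataclasses import dataclass
        from datetime import datetime, timedelta

        RETENTION = timedelta(days=365)  # per the article: entries are deleted after a year
        MATCH_THRESHOLD = 0.92           # hypothetical similarity cutoff

        @dataclass
        class WatchlistEntry:
            person_id: str
            embedding: list[float]       # face template from an enrollment photo
            added: datetime

        def cosine_similarity(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            norm_a = sum(x * x for x in a) ** 0.5
            norm_b = sum(y * y for y in b) ** 0.5
            return dot / (norm_a * norm_b)

        def check_face(embedding, watchlist, now, confirm):
            """Return a matched person_id only if a human confirms; else None."""
            for entry in watchlist:
                if now - entry.added > RETENTION:
                    continue  # expired entries no longer trigger alerts
                score = cosine_similarity(embedding, entry.embedding)
                if score >= MATCH_THRESHOLD and confirm(entry, score):
                    return entry.person_id  # alert goes to the shop's smartphone
            return None

    The human confirmation step is the design choice at issue in the Bristol case: a match can be technically correct while the decision to act on it, with no notification or appeal, is where critics say the harm occurs.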

  • Generative A.I. Can Add $4.4 Trillion in Value to Global Economy, Study Says

    The report from McKinsey comes as a debate rages over the potential economic effects of A.I.-powered chatbots on labor and the economy.

    “Generative artificial intelligence” is set to add up to $4.4 trillion of value to the global economy annually, according to a report from the McKinsey Global Institute, in what is one of the rosier predictions about the economic effects of the rapidly evolving technology.

    Generative A.I., which includes chatbots such as ChatGPT that can generate text in response to prompts, can potentially boost productivity by saving 60 to 70 percent of workers’ time through automation of their work, according to the 68-page report, which was published early Wednesday. Half of all work will be automated between 2030 and 2060, the report said.

    McKinsey had previously predicted that A.I. would automate half of all work between 2035 and 2075, but the power of generative A.I. tools — which exploded onto the tech scene late last year — accelerated the company’s forecast.

    “Generative A.I. has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities,” the report said.

    McKinsey’s report is one of the few so far to quantify the long-term impact of generative A.I. on the economy. The report arrives as Silicon Valley has been gripped by a fervor over generative A.I. tools like ChatGPT and Google’s Bard, with tech companies and venture capitalists investing billions of dollars in the technology.

    The tools — some of which can also generate images and video, and carry on a conversation — have started a debate over how they will affect jobs and the world economy. Some experts have predicted that A.I. will displace people from their work, while others have said the tools can augment individual productivity.

    Last week, Goldman Sachs released a report warning that A.I. could lead to worker disruption and that some companies would benefit more from the technology than others. In April, a Stanford researcher and researchers at the Massachusetts Institute of Technology released a study showing that generative A.I. could boost the productivity of inexperienced call center operators by 35 percent.

    Any conclusions about the technology’s effects may be premature. David Autor, a professor of economics at M.I.T., cautioned that generative A.I. was “not going to be as miraculous as people claim.”

    “We are really, really in the early stage,” he added.

    For the most part, economic studies of generative A.I. do not take into account other risks from the technology, such as whether it might spread misinformation and eventually escape the realm of human control.

    The vast majority of generative A.I.’s economic value will most likely come from helping workers automate tasks in customer operations, sales, software engineering, and research and development, according to McKinsey’s report. Generative A.I. can create “superpowers” for high-skilled workers, said Lareina Yee, a McKinsey partner and an author of the report, because the technology can summarize and edit content.

    “The most profound change we are going to see is the change to people, and that’s going to require far more innovation and leadership than the technology,” she said.

    The report also outlined challenges that industry leaders and regulators would need to address with A.I., including concerns that the content generated by the tools can be misleading and inaccurate.

    Ms. Yee acknowledged that the report was making prognostications about A.I.’s effects, but she said that “if you could capture even a third” of the technology’s potential, “it is pretty remarkable over the next five to 10 years.”

  • The AI Boom Is Pulling Tech Entrepreneurs Back to San Francisco

    Doug Fulop’s and Jessie Fischer’s lives in Bend, Ore., were idyllic. The couple moved there last year, working remotely in a 2,400-square-foot house surrounded by trees, with easy access to skiing, mountain biking and breweries. It was an upgrade from their former apartments in San Francisco, where a stranger once entered Mr. Fulop’s home after his lock didn’t properly latch.

    But the pair of tech entrepreneurs are now on their way back to the Bay Area, driven by a key development: the artificial intelligence boom.

    Mr. Fulop and Ms. Fischer are both starting companies that use A.I. technology and are looking for co-founders. They tried to make it work in Bend, but after too many eight-hour drives to San Francisco for hackathons, networking events and meetings, they decided to move back when their lease ends in August.

    “The A.I. boom has brought the energy back into the Bay that was lost during Covid,” said Mr. Fulop, 34.

    The couple are part of a growing group of boomerang entrepreneurs who see opportunity in San Francisco’s predicted demise. The tech industry is more than a year into its worst slump in a decade, with layoffs and a glut of empty offices. The pandemic also spurred a wave of migration to places with lower taxes, fewer Covid restrictions, safer streets and more space. And tech workers have been among the most vocal groups to criticize the city for its worsening problems with drugs, housing and crime.

    But such busts are almost always followed by another boom. And with the latest wave of A.I. technology — known as generative A.I., which produces text, images and video in response to prompts — there’s too much at stake to miss out.

    Investors have already announced $10.7 billion in funding for generative A.I. start-ups within the first three months of this year, a thirteenfold increase from a year earlier, according to PitchBook, which tracks start-ups. Tens of thousands of tech workers recently laid off by big tech companies are now eager to join the next big thing. On top of that, much of the A.I. technology is open source, meaning companies share their work and allow anyone to build on it, which encourages a sense of community.

    “Hacker houses,” where people create start-ups, are springing up in San Francisco’s Hayes Valley neighborhood, known as “Cerebral Valley” because it is the center of the A.I. scene. And every night someone is hosting a hackathon, meet-up or demo focused on the technology.

    In March, days after the prominent start-up OpenAI unveiled a new version of its A.I. technology, an “emergency hackathon” organized by a pair of entrepreneurs drew 200 participants, with almost as many on the waiting list. That same month, a networking event hastily organized over Twitter by Clement Delangue, the chief executive of the A.I. start-up Hugging Face, attracted more than 5,000 people and two alpacas to San Francisco’s Exploratorium museum, earning it the nickname “Woodstock of A.I.”

    More than 5,000 people attended the so-called Woodstock of A.I. in San Francisco in March. Alexy Khrabrov

    Madisen Taylor, who runs operations for Hugging Face and organized the event alongside Mr. Delangue, said its communal vibe had mirrored that of Woodstock. “Peace, love, building cool A.I.,” she said.

    Taken together, the activity is enough to draw back people like Ms. Fischer, who is starting a company that uses A.I. in the hospitality industry. She and Mr. Fulop got involved in the 350-person tech scene in Bend, but they missed the inspiration, hustle and connections in San Francisco.

    “There’s just nowhere else like the Bay,” Ms. Fischer, 32, said.

    Jen Yip, who has been organizing events for tech workers over the past six years, said that what had been a quiet San Francisco tech scene during the pandemic began changing last year in tandem with the A.I. boom. At nightly hackathons and demo days, she watched people meet their co-founders, secure investments, win over customers and network with potential hires.

    “I’ve seen people come to an event with an idea they want to test and pitch it to 30 different people in the course of one night,” she said.

    Ms. Yip, 42, runs a secret group of 800 people focused on A.I. and robotics called Society of Artificers. Its monthly events have become a hot ticket, often selling out within an hour. “People definitely try to crash,” she said.

    Her other speaker series, Founders You Should Know, features leaders of A.I. companies speaking to an audience of mostly engineers looking for their next gig. The last event had more than 2,000 applicants for 120 spots, Ms. Yip said.

    In Founders You Should Know, a series run by Jen Yip, leaders of A.I. companies speak to an audience of mostly engineers looking for their next gig. Ximena Natera

    Bernardo Aceituno moved his company, Stack AI, to San Francisco in January to be part of the start-up accelerator Y Combinator. He and his co-founders had planned to base the company in New York after the three-month program ended, but decided to stay in San Francisco. The community of fellow entrepreneurs, investors and tech talent that they found was too valuable, he said.

    “If we move out, it’s going to be very hard to re-create in any other city,” Mr. Aceituno, 27, said. “Whatever you’re looking for is already here.”

    After operating remotely for several years, Y Combinator has started encouraging start-ups in its program to move to San Francisco. Out of a recent batch of 270 start-ups, 86 percent participated locally, the company said.

    “Hayes Valley truly became Cerebral Valley this year,” Gary Tan, Y Combinator’s chief executive, said at a demo day in April.

    The A.I. boom is also luring back founders of other kinds of tech companies. Brex, a financial technology start-up, declared itself “remote first” early in the pandemic, closing its 250-person office in San Francisco’s SoMa neighborhood. The company’s founders, Henrique Dubugras and Pedro Franceschi, decamped for Los Angeles.

    Henrique Dubugras, a co-founder of Brex, in 2019. After decamping to Los Angeles, he recently returned to the Bay Area. Arsenii Vaselenko for The New York Times

    But when generative A.I. began taking off last year, Mr. Dubugras, 27, was eager to see how Brex could adopt the technology. He quickly realized that he was missing out on the coffees, casual conversations and community happening around A.I. in San Francisco, he said.

    In May, Mr. Dubugras moved to Palo Alto, Calif., and began working from a new, pared-down office a few blocks from Brex’s old one. San Francisco’s high office vacancy rate meant the company paid a quarter of what it had been paying in rent before the pandemic.

    Seated under a neon sign in Brex’s office that read “Growth Mindset,” Mr. Dubugras said he had been on a steady schedule of coffee meetings with people working on A.I. since his return. He has hired a Stanford Ph.D. student to tutor him on the topic.

    “Knowledge is concentrated at the bleeding edge,” he said.

    Ms. Fischer and Mr. Fulop said they would miss Bend but craved the Bay Area’s sense of urgency and focus. Will Matsuda for The New York Times

    Mr. Fulop and Ms. Fischer said they would miss their lives in Bend, where they could ski or mountain bike on their lunch breaks. But getting two start-ups off the ground requires an intense blend of urgency and focus.

    In the Bay Area, Ms. Fischer attends multiday events where people stay up all night working on their projects. And Mr. Fulop runs into engineers and investors he knows every time he walks by a coffee shop. They are considering living in suburbs like Palo Alto and Woodside, which have easy access to nature, in addition to San Francisco.

    “I’m willing to sacrifice the amazing tranquillity of this place for being around that ambition, being inspired, knowing there are a ton of awesome people to work with that I can bump into,” Mr. Fulop said. Living in Bend, he added, “honestly just felt like early retirement.”

  • New York City Moves to Regulate How AI Is Used in Hiring

    European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

    Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

    The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

    The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.

    New York City’s focused approach represents an important front in A.I. regulation. At some point, the broad-stroke principles developed by governments and international organizations, experts say, must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

    “Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

    But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

    The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety. Uneasy compromises are inevitable.

    Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”

    The law applies to companies with workers in New York City, but labor experts expect it to influence practices nationally. At least four states — California, New Jersey, New York and Vermont — and the District of Columbia are also working on laws to regulate A.I. in hiring. And Illinois and Maryland have enacted laws limiting the use of specific A.I. technologies, often for workplace surveillance and the screening of job candidates.

    The New York City law emerged from a clash of sharply conflicting viewpoints. The City Council passed it during the final days of the administration of Mayor Bill de Blasio. Rounds of hearings and public comments, more than 100,000 words, came later — overseen by the city’s Department of Consumer and Worker Protection, the rule-making agency.

    The result, some critics say, is overly sympathetic to business interests.

    “What could have been a landmark law was watered down to lose effectiveness,” said Alexandra Givens, president of the Center for Democracy & Technology, a policy and civil rights organization.

    That’s because the law defines an “automated employment decision tool” as technology used “to substantially assist or replace discretionary decision making,” she said. The rules adopted by the city appear to interpret that phrasing narrowly, so that A.I. software will require an audit only if it is the lone or primary factor in a hiring decision or is used to overrule a human, Ms. Givens said.

    That leaves out the main way the automated software is used, she said, with a hiring manager invariably making the final choice. The potential for A.I.-driven discrimination, she said, typically comes in screening hundreds or thousands of candidates down to a handful or in targeted online recruiting to generate a pool of candidates.

    Ms. Givens also criticized the law for limiting the kinds of groups measured for unfair treatment. It covers bias by sex, race and ethnicity, but not discrimination against older workers or those with disabilities.

    “My biggest concern is that this becomes the template nationally when we should be asking much more of our policymakers,” Ms. Givens said.

    “This is a significant regulatory success,” said Robert Holden, center, a member of the City Council who formerly led its committee on technology. Johnny Milano for The New York Times

    The law was narrowed to sharpen it and make sure it was focused and enforceable, city officials said. The Council and the worker protection agency heard from many voices, including public-interest activists and software companies. Its goal was to weigh trade-offs between innovation and potential harm, officials said.

    “This is a significant regulatory success toward ensuring that A.I. technology is used ethically and responsibly,” said Robert Holden, who was the chair of the Council committee on technology when the law was passed and remains a committee member.

    New York City is trying to address new technology in the context of federal workplace laws with guidelines on hiring that date to the 1970s. The main Equal Employment Opportunity Commission rule states that no practice or method of selection used by employers should have a “disparate impact” on a legally protected group like women or minorities.

    Businesses have criticized the law. In a filing this year, the Software Alliance, a trade group that includes Microsoft, SAP and Workday, said the requirement for independent audits of A.I. was “not feasible” because “the auditing landscape is nascent,” lacking standards and professional oversight bodies.

    But a nascent field is a market opportunity. The A.I. audit business, experts say, is only going to grow. It is already attracting law firms, consultants and start-ups.

    Companies that sell A.I. software to assist in hiring and promotion decisions have generally come to embrace regulation. Some have already undergone outside audits. They see the requirement as a potential competitive advantage, providing proof that their technology expands the pool of job candidates for companies and increases opportunity for workers.

    “We believe we can meet the law and show what good A.I. looks like,” said Roy Wang, general counsel of Eightfold AI, a Silicon Valley start-up that produces software used to assist hiring managers.

    The New York City law also takes an approach to regulating A.I. that may become the norm. The law’s key measurement is an “impact ratio,” or a calculation of the effect of using the software on a protected group of job candidates. It does not delve into how an algorithm makes decisions, a concept known as “explainability.”

    In life-affecting applications like hiring, critics say, people have a right to an explanation of how a decision was made. But A.I. software in the style of ChatGPT is becoming more complex, perhaps putting the goal of explainable A.I. out of reach, some experts say.

    “The focus becomes the output of the algorithm, not the working of the algorithm,” said Ashley Casovan, executive director of the Responsible AI Institute, which is developing certifications for the safe use of A.I. applications in the workplace, health care and finance.
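    The “impact ratio” the law centers on is straightforward arithmetic: for each demographic category, compute a selection rate (candidates the tool advanced divided by candidates it assessed), then divide that rate by the rate of the most-selected category. A minimal sketch with invented counts, following the general disparate impact method the article describes:

        # Illustrative only: these candidate counts are invented.
        assessed = {"group_a": 1000, "group_b": 800}  # scored by the hiring tool
        selected = {"group_a": 200, "group_b": 96}    # advanced to the next round

        rates = {group: selected[group] / assessed[group] for group in assessed}
        top_rate = max(rates.values())

        for group, rate in rates.items():
            print(f"{group}: selection rate {rate:.2%}, impact ratio {rate / top_rate:.2f}")

        # Output:
        # group_a: selection rate 20.00%, impact ratio 1.00
        # group_b: selection rate 12.00%, impact ratio 0.60

    An auditor would report such ratios for sex, race and ethnicity. Under the Equal Employment Opportunity Commission’s longstanding four-fifths rule of thumb, a ratio below 0.80 is a conventional flag for possible disparate impact, which is how output-based auditing can work without ever opening the algorithm itself.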