More stories

  • As Businesses Clamor for Workplace A.I., Tech Companies Rush to Provide It

    Amazon, Box, Salesforce, Oracle and others have recently rolled out A.I.-related products to help workplaces become more efficient and productive.

    Earlier this year, Mark Austin, the vice president of data science at AT&T, noticed that some of the company’s developers had started using the ChatGPT chatbot at work. When the developers got stuck, they asked ChatGPT to explain, fix or hone their code.

    It seemed to be a game-changer, Mr. Austin said. But since ChatGPT is a publicly available tool, he wondered if it was secure for businesses to use.

    So in January, AT&T tried a product from Microsoft called Azure OpenAI Service that lets businesses build their own A.I.-powered chatbots. AT&T used it to create a proprietary A.I. assistant, Ask AT&T, which helps its developers automate their coding process. AT&T’s customer service representatives also began using the chatbot to help summarize their calls, among other tasks.

    “Once they realize what it can do, they love it,” Mr. Austin said. Forms that once took hours to complete needed only two minutes with Ask AT&T, so employees could focus on more complicated tasks, he said, and developers who used the chatbot increased their productivity by 20 to 50 percent.

    AT&T is one of many businesses eager to find ways to tap the power of generative artificial intelligence, the technology that powers chatbots and that has gripped Silicon Valley with excitement in recent months. Generative A.I. can produce its own text, photos and video in response to prompts, capabilities that can help automate tasks such as taking meeting minutes and cut down on paperwork.

    To meet this new demand, tech companies are racing to introduce products for businesses that incorporate generative A.I. Over the past three months, Amazon, Box and Cisco have unveiled plans for generative A.I.-powered products that produce code, analyze documents and summarize meetings. Salesforce also recently rolled out generative A.I. products used in sales, marketing and its Slack messaging service, while Oracle announced a new A.I. feature for human resources teams.

    These companies are also investing more in A.I. development. In May, Oracle and Salesforce Ventures, the venture capital arm of Salesforce, invested in Cohere, a Toronto start-up focused on generative A.I. for business use. Oracle is also reselling Cohere’s technology.

    “I think this is a complete breakthrough in enterprise software,” Aaron Levie, chief executive of Box, said of generative A.I. He called it “this incredibly exciting opportunity where, for the first time ever, you can actually start to understand what’s inside of your data in a way that wasn’t possible before.”

    Many of these tech companies are following Microsoft, which has invested $13 billion in OpenAI, the maker of ChatGPT. In January, Microsoft made Azure OpenAI Service available to customers, who can then access OpenAI’s technology to build their own versions of ChatGPT. As of May, the service had 4,500 customers, said John Montgomery, a Microsoft corporate vice president.

    For the most part, tech companies are now rolling out four kinds of generative A.I. products for businesses: features and services that generate code for software engineers, create new content such as sales emails and product descriptions for marketing teams, search company data to answer employee questions, and summarize meeting notes and lengthy documents.

    “It is going to be a tool that is used by people to accomplish what they are already doing,” said Bern Elliot, a vice president and analyst at the I.T. research and consulting firm Gartner.

    But using generative A.I. in workplaces has risks. Chatbots can produce inaccuracies and misinformation, provide inappropriate responses and leak data. A.I. remains largely unregulated.

    In response to these issues, tech companies have taken some steps. To prevent data leakage and to enhance security, some have engineered generative A.I. products so they do not keep a customer’s data.

    When Salesforce last month introduced AI Cloud, a service with nine generative A.I.-powered products for businesses, the company included a “trust layer” to help mask sensitive corporate information to stop leaks and promised that what users typed into these products would not be used to retrain the underlying A.I. model.

    Similarly, Oracle said that customer data would be kept in a secure environment while training its A.I. model and added that it would not be able to see the information.

    Salesforce offers AI Cloud starting at $360,000 annually, with the cost rising depending on the amount of usage. Microsoft charges for Azure OpenAI Service based on the version of OpenAI technology that a customer chooses, as well as the amount of usage.

    For now, generative A.I. is used mainly in workplace scenarios that carry low risks — instead of highly regulated industries — with a human in the loop, said Beena Ammanath, the executive director of the Deloitte A.I. Institute, a research center of the consulting firm. A recent Gartner survey of 43 companies found that over half the respondents have no internal policy on generative A.I.

    “It is not just about being able to use these new tools efficiently, but it is also about preparing your work force for the new kinds of work that might evolve,” Ms. Ammanath said. “There is going to be new skills needed.”

    Panasonic Connect, part of the Japanese electronics company Panasonic, began using Microsoft’s Azure OpenAI Service to make its own chatbot in February. Today, its employees ask the chatbot 5,000 questions a day about everything from drafting emails to writing code.

    While Panasonic Connect had expected its engineers to be the main users of the chatbot, other departments — such as legal, accounting and quality assurance — also turned to it to help summarize legal documents, brainstorm solutions to improve product quality and other tasks, said Judah Reynolds, Panasonic Connect’s marketing and communications chief.

    “Everyone started using it in ways that we didn’t even foresee ourselves,” he said. “So people are really taking advantage of it.”
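
    The article describes companies using Microsoft’s Azure OpenAI Service to build internal chatbots that, among other things, summarize customer-service calls. As a rough illustration only, and not AT&T’s actual Ask AT&T implementation, a minimal call-summarization request through the Azure OpenAI Python SDK might look like the sketch below; the endpoint, deployment name and API version are placeholders.

        import os
        from openai import AzureOpenAI  # openai>=1.0 SDK with Azure support

        # Placeholder endpoint, key and API version; real values come from an Azure OpenAI resource.
        client = AzureOpenAI(
            azure_endpoint="https://example-resource.openai.azure.com",
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-02-01",
        )

        def summarize_call(transcript: str) -> str:
            """Ask a chat model deployed in Azure OpenAI to summarize a call transcript."""
            response = client.chat.completions.create(
                model="example-gpt-deployment",  # the Azure *deployment* name, not a raw model ID
                messages=[
                    {"role": "system", "content": "Summarize customer-service calls in three bullet points."},
                    {"role": "user", "content": transcript},
                ],
            )
            return response.choices[0].message.content

        print(summarize_call("Customer reports intermittent outages; agent schedules a technician visit."))

    In practice, the point companies like AT&T cite is that the prompt and transcript stay within the customer’s own Azure deployment rather than a public chatbot.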

  • Facial Recognition Spreads as Tool to Fight Shoplifting

    Simon Mackenzie, a security officer at the discount retailer QD Stores outside London, was short of breath. He had just chased after three shoplifters who had taken off with several packages of laundry soap. Before the police arrived, he sat at a back-room desk to do something important: capture the culprits’ faces.

    On an aging desktop computer, he pulled up security camera footage, pausing to zoom in and save a photo of each thief. He then logged in to a facial recognition program, Facewatch, which his store uses to identify shoplifters. The next time those people enter any shop within a few miles that uses Facewatch, store staff will receive an alert.

    “It’s like having somebody with you saying, ‘That person you bagged last week just came back in,’” Mr. Mackenzie said.

    Use of facial recognition technology by the police has been heavily scrutinized in recent years, but its application by private businesses has received less attention. Now, as the technology improves and its cost falls, the systems are reaching further into people’s lives. No longer just the purview of government agencies, facial recognition is increasingly being deployed to identify shoplifters, problematic customers and legal adversaries.

    Facewatch, a British company, is used by retailers across the country frustrated by petty crime. For as little as 250 pounds a month, or roughly $320, Facewatch offers access to a customized watchlist that stores near one another share. When Facewatch spots a flagged face, an alert is sent to a smartphone at the shop, where employees decide whether to keep a close eye on the person or ask the person to leave.

    Mr. Mackenzie adds one or two new faces every week, he said, mainly people who steal diapers, groceries, pet supplies and other low-cost goods. He said their economic hardship made him sympathetic, but that the number of thefts had gotten so out of hand that facial recognition was needed. Usually at least once a day, Facewatch alerts him that somebody on the watchlist has entered the store.

    Facial recognition technology is proliferating as Western countries grapple with advances brought on by artificial intelligence. The European Union is drafting rules that would ban many of facial recognition’s uses, while Eric Adams, the mayor of New York City, has encouraged retailers to try the technology to fight crime. MSG Entertainment, the owner of Madison Square Garden and Radio City Music Hall, has used automated facial recognition to refuse entry to lawyers whose firms have sued the company.

    Among democratic nations, Britain is at the forefront of using live facial recognition, with courts and regulators signing off on its use. The police in London and Cardiff are experimenting with the technology to identify wanted criminals as they walk down the street. In May, it was used to scan the crowds at the coronation of King Charles III.

    But the use by retailers has drawn criticism as a disproportionate solution for minor crimes. Individuals have little way of knowing they are on the watchlist or how to appeal. In a legal complaint last year, Big Brother Watch, a civil society group, called it “Orwellian in the extreme.”

    Fraser Sampson, Britain’s biometrics and surveillance camera commissioner, who advises the government on policy, said there was “a nervousness and a hesitancy” around facial recognition technology because of privacy concerns and poorly performing algorithms in the past.

    “But I think in terms of speed, scale, accuracy and cost, facial recognition technology can in some areas, you know, literally be a game changer,” he said. “That means its arrival and deployment is probably inevitable. It’s just a case of when.”

    ‘You can’t expect the police to come’

    Facewatch was founded in 2010 by Simon Gordon, the owner of a popular 19th-century wine bar in central London known for its cellarlike interior and popularity among pickpockets.

    At the time, Mr. Gordon hired software developers to create an online tool to share security camera footage with the authorities, hoping it would save the police time filing incident reports and result in more arrests.

    There was limited interest, but Mr. Gordon’s fascination with security technology was piqued. He followed facial recognition developments and had the idea for a watchlist that retailers could share and contribute to. It was like the photos of shoplifters that stores keep next to the register, but supercharged into a collective database to identify bad guys in real time.

    By 2018, Mr. Gordon felt the technology was ready for commercial use.

    “You’ve got to help yourself,” he said in an interview. “You can’t expect the police to come.”

    Facewatch, which licenses facial recognition software made by RealNetworks and Amazon, is now inside nearly 400 stores across Britain. Trained on millions of pictures and videos, the systems read the biometric information of a face as the person walks into a shop and check it against a database of flagged people.

    Facewatch’s watchlist is constantly growing as stores upload photos of shoplifters and problematic customers. Once added, a person remains there for a year before being deleted.

    ‘Mistakes are rare but do happen’

    Every time Facewatch’s system identifies a shoplifter, a notification goes to a person who passed a test to be a “super recognizer” — someone with a special talent for remembering faces. Within seconds, the super recognizer must confirm the match against the Facewatch database before an alert is sent.

    But while the company has created policies to prevent misidentification and other errors, mistakes happen.

    In October, a woman buying milk in a supermarket in Bristol, England, was confronted by an employee and ordered to leave. She was told that Facewatch had flagged her as a barred shoplifter.

    The woman, who asked that her name be withheld because of privacy concerns and whose story was corroborated by materials provided by her lawyer and Facewatch, said there must have been a mistake. When she contacted Facewatch a few days later, the company apologized, saying it was a case of mistaken identity.

    After the woman threatened legal action, Facewatch dug into its records. It found that the woman had been added to the watchlist because of an incident 10 months earlier involving £20 of merchandise, about $25. The system “worked perfectly,” Facewatch said.

    But while the technology had correctly identified the woman, it did not leave much room for human discretion. Neither Facewatch nor the store where the incident occurred contacted her to let her know that she was on the watchlist and to ask what had happened.

    The woman said she did not recall the incident and had never shoplifted. She said she may have walked out after not realizing that her debit card payment failed to go through at a self-checkout kiosk.

    Madeleine Stone, the legal and policy officer for Big Brother Watch, said Facewatch was “normalizing airport-style security checks for everyday activities like buying a pint of milk.”

    Mr. Gordon declined to comment on the incident in Bristol.

    In general, he said, “mistakes are rare but do happen.” He added, “If this occurs, we acknowledge our mistake, apologize, delete any relevant data to prevent reoccurrence and offer proportionate compensation.”

    Approved by the privacy office

    Civil liberties groups have raised concerns about Facewatch and suggested that its deployment to prevent petty crime might be illegal under British privacy law, which requires that biometric technologies have a “substantial public interest.”

    The U.K. Information Commissioner’s Office, the privacy regulator, conducted a yearlong investigation into Facewatch. The office concluded in March that Facewatch’s system was permissible under the law, but only after the company made changes to how it operated.

    Stephen Bonner, the office’s deputy commissioner for regulatory supervision, said in an interview that the investigation had led Facewatch to change its policies: It would put more signage in stores, share among stores only information about serious and violent offenders and send out alerts only about repeat offenders. That means people will not be put on the watchlist after a single minor offense, as happened to the woman in Bristol.

    “That reduces the amount of personal data that’s held, reduces the chances of individuals being unfairly added to this kind of list and makes it more likely to be accurate,” Mr. Bonner said. The technology, he said, is “not dissimilar to having just very good security guards.”

    Liam Ardern, the operations manager for Lawrence Hunt, which owns 23 Spar convenience stores that use Facewatch, estimates the technology has saved the company more than £50,000 since 2020.

    He called the privacy risks of facial recognition overblown. The only example of misidentification that he recalled was when a man was confused for his identical twin, who had shoplifted. Critics overlook that stores like his operate on thin profit margins, he said.

    “It’s easy for them to say, ‘No, it’s against human rights,’” Mr. Ardern said. If shoplifting isn’t reduced, he said, his shops will have to raise prices or cut staff.
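
    Facewatch’s own software is proprietary, and the article only describes it at a high level: a camera-captured face is converted to biometric data, compared against a shared watchlist, and a sufficiently confident match triggers an alert that a human “super recognizer” must confirm. A generic, hedged sketch of that kind of embedding-and-threshold matching, with invented names and numbers rather than anything from Facewatch, might look like this:

        import numpy as np

        SIMILARITY_THRESHOLD = 0.80  # illustrative value; real systems tune this carefully

        def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def check_against_watchlist(face_embedding: np.ndarray,
                                    watchlist: dict) -> str | None:
            """Return the ID of the best-matching watchlist entry, or None if nothing clears the threshold."""
            best_id, best_score = None, 0.0
            for entry_id, stored_embedding in watchlist.items():
                score = cosine_similarity(face_embedding, stored_embedding)
                if score > best_score:
                    best_id, best_score = entry_id, score
            return best_id if best_score >= SIMILARITY_THRESHOLD else None

        # Toy example: random vectors stand in for embeddings produced by a face-recognition model.
        rng = np.random.default_rng(0)
        watchlist = {"entry-041": rng.normal(size=128), "entry-107": rng.normal(size=128)}
        live_face = watchlist["entry-041"] + rng.normal(scale=0.05, size=128)  # near-duplicate of a stored face

        match = check_against_watchlist(live_face, watchlist)
        if match:
            # In the system described above, a human reviewer confirms the match before any alert goes out.
            print(f"Possible watchlist match: {match}; route to a human reviewer for confirmation.")

    The Bristol incident described above is a reminder that the hard problems sit outside this loop: who gets added to the list, for how long, and what discretion people have once a match fires.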

  • Generative A.I. Can Add $4.4 Trillion in Value to Global Economy, Study Says

    The report from McKinsey comes as a debate rages over the potential economic effects of A.I.-powered chatbots on labor and the economy.

    Generative artificial intelligence is set to add up to $4.4 trillion of value to the global economy annually, according to a report from the McKinsey Global Institute, in what is one of the rosier predictions about the economic effects of the rapidly evolving technology.

    Generative A.I., which includes chatbots such as ChatGPT that can generate text in response to prompts, can potentially boost productivity by saving 60 to 70 percent of workers’ time through automation of their work, according to the 68-page report, which was published early Wednesday. Half of all work will be automated between 2030 and 2060, the report said.

    McKinsey had previously predicted that A.I. would automate half of all work between 2035 and 2075, but the power of generative A.I. tools — which exploded onto the tech scene late last year — accelerated the company’s forecast.

    “Generative A.I. has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities,” the report said.

    McKinsey’s report is one of the few so far to quantify the long-term impact of generative A.I. on the economy. The report arrives as Silicon Valley has been gripped by a fervor over generative A.I. tools like ChatGPT and Google’s Bard, with tech companies and venture capitalists investing billions of dollars in the technology.

    The tools — some of which can also generate images and video, and carry on a conversation — have started a debate over how they will affect jobs and the world economy. Some experts have predicted that A.I. will displace people from their work, while others have said the tools can augment individual productivity.

    Last week, Goldman Sachs released a report warning that A.I. could lead to worker disruption and that some companies would benefit more from the technology than others. In April, a Stanford researcher and researchers at the Massachusetts Institute of Technology released a study showing that generative A.I. could boost the productivity of inexperienced call center operators by 35 percent.

    Any conclusions about the technology’s effects may be premature. David Autor, a professor of economics at M.I.T., cautioned that generative A.I. was “not going to be as miraculous as people claim.”

    “We are really, really in the early stage,” he added.

    For the most part, economic studies of generative A.I. do not take into account other risks from the technology, such as whether it might spread misinformation and eventually escape the realm of human control.

    The vast majority of generative A.I.’s economic value will most likely come from helping workers automate tasks in customer operations, sales, software engineering, and research and development, according to McKinsey’s report. Generative A.I. can create “superpowers” for high-skilled workers, said Lareina Yee, a McKinsey partner and an author of the report, because the technology can summarize and edit content.

    “The most profound change we are going to see is the change to people, and that’s going to require far more innovation and leadership than the technology,” she said.

    The report also outlined challenges that industry leaders and regulators would need to address with A.I., including concerns that the content generated by the tools can be misleading and inaccurate.

    Ms. Yee acknowledged that the report was making prognostications about A.I.’s effects, but said that “if you could capture even a third” of the technology’s potential, “it is pretty remarkable over the next five to 10 years.”

  • The AI Boom Is Pulling Tech Entrepreneurs Back to San Francisco

    Doug Fulop’s and Jessie Fischer’s lives in Bend, Ore., were idyllic. The couple moved there last year, working remotely in a 2,400-square-foot house surrounded by trees, with easy access to skiing, mountain biking and breweries. It was an upgrade from their former apartments in San Francisco, where a stranger once entered Mr. Fulop’s home after his lock didn’t properly latch.

    But the pair of tech entrepreneurs are now on their way back to the Bay Area, driven by a key development: the artificial intelligence boom.

    Mr. Fulop and Ms. Fischer are both starting companies that use A.I. technology and are looking for co-founders. They tried to make it work in Bend, but after too many eight-hour drives to San Francisco for hackathons, networking events and meetings, they decided to move back when their lease ends in August.

    “The A.I. boom has brought the energy back into the Bay that was lost during Covid,” said Mr. Fulop, 34.

    The couple are part of a growing group of boomerang entrepreneurs who see opportunity in San Francisco’s predicted demise. The tech industry is more than a year into its worst slump in a decade, with layoffs and a glut of empty offices. The pandemic also spurred a wave of migration to places with lower taxes, fewer Covid restrictions, safer streets and more space. And tech workers have been among the most vocal groups to criticize the city for its worsening problems with drugs, housing and crime.

    But such busts are almost always followed by another boom. And with the latest wave of A.I. technology — known as generative A.I., which produces text, images and video in response to prompts — there’s too much at stake to miss out.

    Investors have already announced $10.7 billion in funding for generative A.I. start-ups within the first three months of this year, a thirteenfold increase from a year earlier, according to PitchBook, which tracks start-ups. Tens of thousands of tech workers recently laid off by big tech companies are now eager to join the next big thing. On top of that, much of the A.I. technology is open source, meaning companies share their work and allow anyone to build on it, which encourages a sense of community.

    “Hacker houses,” where people create start-ups, are springing up in San Francisco’s Hayes Valley neighborhood, known as “Cerebral Valley” because it is the center of the A.I. scene. And every night someone is hosting a hackathon, meet-up or demo focused on the technology.

    In March, days after the prominent start-up OpenAI unveiled a new version of its A.I. technology, an “emergency hackathon” organized by a pair of entrepreneurs drew 200 participants, with almost as many on the waiting list. That same month, a networking event hastily organized over Twitter by Clement Delangue, the chief executive of the A.I. start-up Hugging Face, attracted more than 5,000 people and two alpacas to San Francisco’s Exploratorium museum, earning it the nickname “Woodstock of A.I.”

    Madisen Taylor, who runs operations for Hugging Face and organized the event alongside Mr. Delangue, said its communal vibe had mirrored that of Woodstock. “Peace, love, building cool A.I.,” she said.

    Taken together, the activity is enough to draw back people like Ms. Fischer, who is starting a company that uses A.I. in the hospitality industry. She and Mr. Fulop got involved in the 350-person tech scene in Bend, but they missed the inspiration, hustle and connections in San Francisco.

    “There’s just nowhere else like the Bay,” Ms. Fischer, 32, said.

    Jen Yip, who has been organizing events for tech workers over the past six years, said that what had been a quiet San Francisco tech scene during the pandemic began changing last year in tandem with the A.I. boom. At nightly hackathons and demo days, she watched people meet their co-founders, secure investments, win over customers and network with potential hires.

    “I’ve seen people come to an event with an idea they want to test and pitch it to 30 different people in the course of one night,” she said.

    Ms. Yip, 42, runs a secret group of 800 people focused on A.I. and robotics called Society of Artificers. Its monthly events have become a hot ticket, often selling out within an hour. “People definitely try to crash,” she said.

    Her other speaker series, Founders You Should Know, features leaders of A.I. companies speaking to an audience of mostly engineers looking for their next gig. The last event had more than 2,000 applicants for 120 spots, Ms. Yip said.

    Bernardo Aceituno moved his company, Stack AI, to San Francisco in January to be part of the start-up accelerator Y Combinator. He and his co-founders had planned to base the company in New York after the three-month program ended, but decided to stay in San Francisco. The community of fellow entrepreneurs, investors and tech talent that they found was too valuable, he said.

    “If we move out, it’s going to be very hard to re-create in any other city,” Mr. Aceituno, 27, said. “Whatever you’re looking for is already here.”

    After operating remotely for several years, Y Combinator has started encouraging start-ups in its program to move to San Francisco. Out of a recent batch of 270 start-ups, 86 percent participated locally, the company said.

    “Hayes Valley truly became Cerebral Valley this year,” Gary Tan, Y Combinator’s chief executive, said at a demo day in April.

    The A.I. boom is also luring back founders of other kinds of tech companies. Brex, a financial technology start-up, declared itself “remote first” early in the pandemic, closing its 250-person office in San Francisco’s SoMa neighborhood. The company’s founders, Henrique Dubugras and Pedro Franceschi, decamped for Los Angeles.

    But when generative A.I. began taking off last year, Mr. Dubugras, 27, was eager to see how Brex could adopt the technology. He quickly realized that he was missing out on the coffees, casual conversations and community happening around A.I. in San Francisco, he said.

    In May, Mr. Dubugras moved to Palo Alto, Calif., and began working from a new, pared-down office a few blocks from Brex’s old one. San Francisco’s high office vacancy rate meant the company paid a quarter of what it had been paying in rent before the pandemic.

    Seated under a neon sign in Brex’s office that read “Growth Mindset,” Mr. Dubugras said he had been on a steady schedule of coffee meetings with people working on A.I. since his return. He has hired a Stanford Ph.D. student to tutor him on the topic.

    “Knowledge is concentrated at the bleeding edge,” he said.

    Mr. Fulop and Ms. Fischer said they would miss their lives in Bend, where they could ski or mountain bike on their lunch breaks. But getting two start-ups off the ground requires an intense blend of urgency and focus.

    In the Bay Area, Ms. Fischer attends multiday events where people stay up all night working on their projects. And Mr. Fulop runs into engineers and investors he knows every time he walks by a coffee shop. They are considering living in suburbs like Palo Alto and Woodside, which have easy access to nature, in addition to San Francisco.

    “I’m willing to sacrifice the amazing tranquillity of this place for being around that ambition, being inspired, knowing there are a ton of awesome people to work with that I can bump into,” Mr. Fulop said. Living in Bend, he added, “honestly just felt like early retirement.”

  • New York City Moves to Regulate How AI Is Used in Hiring

    European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

    Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

    The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

    The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.

    New York City’s focused approach represents an important front in A.I. regulation. At some point, the broad-stroke principles developed by governments and international organizations, experts say, must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

    “Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

    But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

    The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety.

    Uneasy compromises are inevitable.

    Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”

    The law applies to companies with workers in New York City, but labor experts expect it to influence practices nationally. At least four states — California, New Jersey, New York and Vermont — and the District of Columbia are also working on laws to regulate A.I. in hiring. And Illinois and Maryland have enacted laws limiting the use of specific A.I. technologies, often for workplace surveillance and the screening of job candidates.

    The New York City law emerged from a clash of sharply conflicting viewpoints. The City Council passed it during the final days of the administration of Mayor Bill de Blasio. Rounds of hearings and public comments, more than 100,000 words, came later — overseen by the city’s Department of Consumer and Worker Protection, the rule-making agency.

    The result, some critics say, is overly sympathetic to business interests.

    “What could have been a landmark law was watered down to lose effectiveness,” said Alexandra Givens, president of the Center for Democracy & Technology, a policy and civil rights organization.

    That’s because the law defines an “automated employment decision tool” as technology used “to substantially assist or replace discretionary decision making,” she said. The rules adopted by the city appear to interpret that phrasing narrowly, so that A.I. software will require an audit only if it is the lone or primary factor in a hiring decision or is used to overrule a human, Ms. Givens said.

    That leaves out the main way the automated software is used, she said, with a hiring manager invariably making the final choice. The potential for A.I.-driven discrimination, she said, typically comes in screening hundreds or thousands of candidates down to a handful or in targeted online recruiting to generate a pool of candidates.

    Ms. Givens also criticized the law for limiting the kinds of groups measured for unfair treatment. It covers bias by sex, race and ethnicity, but not discrimination against older workers or those with disabilities.

    “My biggest concern is that this becomes the template nationally when we should be asking much more of our policymakers,” Ms. Givens said.

    The law was narrowed to sharpen it and make sure it was focused and enforceable, city officials said. The Council and the worker protection agency heard from many voices, including public-interest activists and software companies. The goal was to weigh trade-offs between innovation and potential harm, officials said.

    “This is a significant regulatory success toward ensuring that A.I. technology is used ethically and responsibly,” said Robert Holden, who was the chair of the Council committee on technology when the law was passed and remains a committee member.

    New York City is trying to address new technology in the context of federal workplace laws with guidelines on hiring that date to the 1970s. The main Equal Employment Opportunity Commission rule states that no practice or method of selection used by employers should have a “disparate impact” on a legally protected group like women or minorities.

    Businesses have criticized the law. In a filing this year, the Software Alliance, a trade group that includes Microsoft, SAP and Workday, said the requirement for independent audits of A.I. was “not feasible” because “the auditing landscape is nascent,” lacking standards and professional oversight bodies.

    But a nascent field is a market opportunity. The A.I. audit business, experts say, is only going to grow. It is already attracting law firms, consultants and start-ups.

    Companies that sell A.I. software to assist in hiring and promotion decisions have generally come to embrace regulation. Some have already undergone outside audits. They see the requirement as a potential competitive advantage, providing proof that their technology expands the pool of job candidates for companies and increases opportunity for workers.

    “We believe we can meet the law and show what good A.I. looks like,” said Roy Wang, general counsel of Eightfold AI, a Silicon Valley start-up that produces software used to assist hiring managers.

    The New York City law also takes an approach to regulating A.I. that may become the norm. The law’s key measurement is an “impact ratio,” or a calculation of the effect of using the software on a protected group of job candidates. It does not delve into how an algorithm makes decisions, a concept known as “explainability.”

    In life-affecting applications like hiring, critics say, people have a right to an explanation of how a decision was made. But A.I. like ChatGPT-style software is becoming more complex, perhaps putting the goal of explainable A.I. out of reach, some experts say.

    “The focus becomes the output of the algorithm, not the working of the algorithm,” said Ashley Casovan, executive director of the Responsible AI Institute, which is developing certifications for the safe use of A.I. applications in the workplace, health care and finance.
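
    The “impact ratio” described above is, in essence, one group’s selection rate divided by the selection rate of the most-favored group, the same arithmetic behind the federal “disparate impact” guideline. As a hedged sketch of that calculation, with invented numbers rather than figures from any real audit, it might be computed like this:

        def selection_rate(selected: int, applicants: int) -> float:
            return selected / applicants

        def impact_ratios(groups: dict) -> dict:
            """Impact ratio per group: its selection rate divided by the highest group's selection rate."""
            rates = {g: selection_rate(sel, total) for g, (sel, total) in groups.items()}
            best = max(rates.values())
            return {g: rate / best for g, rate in rates.items()}

        # Invented example: (candidates advanced by the tool, total candidates screened) per group.
        example = {"group_a": (48, 400), "group_b": (30, 380)}
        for group, ratio in impact_ratios(example).items():
            # The "four-fifths rule" of thumb often used in disparate-impact analysis flags ratios below 0.8.
            flag = "review" if ratio < 0.8 else "ok"
            print(f"{group}: impact ratio {ratio:.2f} ({flag})")

    The calculation says nothing about why the tool scored candidates the way it did, which is exactly the explainability gap the critics quoted above point to.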

  • Silicon Valley Chosen for $4 Billion Chip Research Center

    Anticipating federal subsidies, Applied Materials said it planned to invest up to $4 billion in the semiconductor project in Sunnyvale, Calif.

    Silicon Valley got its name from computer chips, but no longer plays a central role in shaping how they are made. A major supplier to the industry hopes to change that.

    Applied Materials, the biggest maker of machines for producing semiconductors, said on Monday that it planned to build a massive research facility near its hometown, Santa Clara, Calif., to allow chip makers and universities to collaborate on advances to make more powerful chips. Silicon Valley hasn’t seen a comparable semiconductor construction project in more than 30 years, industry analysts say.

    The company expects to invest up to $4 billion in the project over seven years, with a portion of that money coming from federal subsidies, while creating up to 2,000 engineering jobs.

    The plan is the latest in a string of chip-related projects spurred by the CHIPS Act, a $52 billion package of subsidies that Congress passed last year to reduce U.S. dependence on Asian factories for the critical components. What sets Applied Materials’ move apart is that it focuses on research, rather than manufacturing, and is a substantial new commitment to the industry’s original hub.

    Chip makers that grew up in Silicon Valley have long chosen to build new “fabs,” the sophisticated factories that fabricate chips from silicon wafers, in less costly states and countries. But Applied Materials is betting that technical talent at nearby universities and the local companies that design chips will spur innovation quickly, making up for cost differences with other locations.

    “You can connect more leaders in this ecosystem here than anyplace in the world,” said Gary Dickerson, the chief executive of Applied Materials. “There’s no place like this.”

    Applied Materials has scheduled an event on Monday in Sunnyvale, Calif., to discuss the project, with expected guests including Vice President Kamala Harris.

    Politicians from both parties overwhelmingly supported the CHIPS Act, partly out of fears that China will one day exert control over Taiwan and factories there that produce the most advanced chips. Besides encouraging domestic chip manufacturing, the legislation allocated about $11 billion to spur related research and development.

    Chip research now takes place in several phases in multiple locations, including university labs and collaborative centers such as the Albany NanoTech Complex in New York. Applied Materials participates with other companies in that center and operates a research fab in Silicon Valley where chip makers can work with its machines and those of other toolmakers.

    But many of the core chores in developing new production processes are carried out by chip manufacturers in fabs outfitted with a broad array of equipment. The proposed center, which Applied Materials calls Epic, is set to have ultraclean production space bigger than three football fields and is designed to give university researchers and other engineers comparable resources to experiment with new materials and techniques for creating advanced chips.

    One goal is to reduce the time it takes for new ideas to flow from the research labs to companies designing new manufacturing gear, information that is now often delayed as it is filtered through the chip makers.

    “The trouble is, those customers need time to figure out what they need,” said H.-S. Philip Wong, a Stanford professor of electrical engineering who was briefed on the company’s plans. “There is a big hole in there.”

    Applied Materials also said chip makers would be able to reserve space in the center and try out new tools before they were commercially available.

    The plan hinges partly on whether Applied Materials can win subsidies under the CHIPS Act, which the Commerce Department says has already attracted expressions of interest from more than 300 companies. Mr. Dickerson said that the company planned to build the center in any case, but that government funding could affect the project’s scale.

    Assuming the center evolves as planned, it could substantially bolster Silicon Valley’s role in the evolution of chips, said G. Dan Hutcheson, vice chair at the market research firm TechInsights.

    “It really is a vote of confidence for the Valley,” he said.

  • Commerce Dept. Outlines Its Bid to Fund Cutting-Edge Chip Research

    The Biden administration announced its strategy for the National Semiconductor Technology Center, a string of facilities aimed at propelling U.S. innovation.

    WASHINGTON — The Biden administration outlined plans on Tuesday to propel research on the type of cutting-edge microchips needed to power computers, cars and other devices, saying it would establish a new national organization with locations in various parts of the United States.

    The Commerce Department, which is in charge of the administration’s efforts to revitalize the American chip industry, said its new National Semiconductor Technology Center would bring together companies, universities and others to collaborate on next-generation chip technology. The organization would include a string of research centers, the locations of which have yet to be chosen, and aim to be operational by the end of this year.

    The organization would help “regain America’s leadership in research and development and technologies of the future, and importantly, make sure we stay there for decades to come,” Gina Raimondo, the commerce secretary, said in a briefing Monday.

    “It’s a place where industry and academia and start-ups and investors can come together to solve the biggest, grandest challenges and set priorities,” she added.

    The plans are part of the Biden administration’s effort to reinvigorate semiconductor manufacturing and ensure that the United States has a steady supply of chips necessary to feed its factories and support its national defense. The Commerce Department has been charged with doling out $50 billion to revitalize the industry, including $11 billion devoted to research and development.

    The technology center is expected to be central to that effort. Some of its locations would be capable of end-to-end manufacturing of new chip designs, while others would focus on experimenting with new materials and equipment, or with new ways of putting chips together to make them more powerful, Ms. Raimondo said.

    Laurie Giandomenico, the vice president and chief acceleration officer of MITRE, a nonprofit organization that operates federally funded research centers, called the $11 billion investment by the United States “pretty significant,” given that the semiconductor industry has in past years spent about $70 billion on research and development globally.

    The challenge, she said, would be to ensure that the money was spent to encourage collaborative research to solve the industry’s biggest problems, not the “siloed innovation” now carried out by chip firms that carefully guard their creations from competitors.

    “It should be on areas that no one company can solve alone,” she said.

    Companies, universities, lawmakers and local governments have been lobbying the administration to set up an outpost of the new organization in their area. Ms. Raimondo emphasized that the organization would be an independent “trusted” player, with board members appointed by a separate selection committee and strict controls for protecting intellectual property.

    One of the organization’s primary goals, Ms. Raimondo said, would be making it easier and less expensive for start-ups and other new entrants to develop and commercialize new chip technologies.

    “We want to cut in half the projected cost of moving a new chip from concept to commercialization over the next decade,” she said.

    Chris Miller, the author of “Chip War,” which chronicles the industry’s development, said it was comparatively easy for a researcher to develop a new idea for a chip in a laboratory. But given the high cost of producing chips, researchers can have a hard time getting their inventions manufactured.

    Designing an advanced chip, which may have tens of billions of transistors, can cost hundreds of millions of dollars, according to analysts. The latest systems for defining the smallest circuitry on wafers cost more than $100 million each, while the new factories called “fabs” that make advanced chips can cost $10 billion to $20 billion.

    “The big fabs are interested in producing 100 million chips for an iPhone, not 10 chips for a professor at M.I.T.,” Mr. Miller said.

    Venture capitalists also often shy away from investing in chip start-ups because they require more initial funding than other kinds of tech companies and more time to generate a return on that investment.

    To help address some of these issues, the government’s technology center will establish an investment fund to support start-ups, and provide manufacturing facilities for small players to experiment with new technologies.

    “I see a world where the U.S. can actually revitalize this microelectronics industry because we could bring down the costs of doing a chip start-up by a factor of five to a factor of ten,” said Gilman Louie, a tech investor and chief executive of a nonprofit investment organization called America’s Frontier Fund.

    The center’s research priorities are expected to be refined in the coming months. But the Commerce Department specified several areas it would focus on, including advancing the technology for analyzing the microscopic components of chips and setting technical standards for new kinds of chip packaging.

    As progress slows in squeezing ever-smaller transistors onto each piece of silicon, many companies are now breaking up big products into smaller “chiplets” that are placed side by side or stacked on top of one another.

    The Commerce Department said that setting new standards for these practices would pave the way for the creation of marketplaces in which companies can assemble new products using chiplets from multiple vendors.

  • Do We Know How Many People Are Working From Home?

    New Labor Department numbers indicate that fewer Americans worked remotely last year. But many experts criticize the government’s data collection.

    Millions of workers, employers, square feet of real estate and dollars of downtown economic activity are wrapped up in the question of how many people are working from home — yet there remain large discrepancies in how remote work is measured.

    The Labor Department last week released data indicating a decline in remote work: 72.5 percent of businesses said their employees rarely or never teleworked last year, up from 60.1 percent in 2021 and quite close to the 76.7 percent that had no such work before the pandemic. But while the Labor Department found that remote work was almost back to prepandemic levels, many other surveys show it is up four- to fivefold.

    Outside research, including a monthly survey of workers from researchers at Stanford University and the Census Bureau’s household survey, indicates that remote work remains prevalent, with Stanford’s finding that it accounts for over a quarter of paid full-time workdays in the United States, just slightly down from 33 percent in 2021. Some scholars suggested that the Labor Department’s survey may overcount fully in-person work, though the comparisons among the various surveys aren’t direct.

    “I see this survey as an outlier and not the most reliable measure,” said Adam Ozimek, chief economist of the Economic Innovation Group, a public policy organization, describing the Labor Department’s survey. “We need to think hard as we try to develop better measures of working from home.”

    Remote work is having profound effects on nearly every dimension of the economy: foot traffic to downtown businesses, housing markets in big cities and far-flung areas, methods of assessing productivity and child care. Public transportation ridership sank during the pandemic, and suburban real estate values rose.

    Nearly one billion square feet of office real estate was available but in search of a tenant at the end of 2022. People refashioned their lives and routines, working 28 percent more after traditional hours, according to Microsoft.

    The stakes of measuring remote work’s prevalence are high. And researchers said the wording of the Bureau of Labor Statistics survey on remote work, which was distributed to businesses, might have caused some confusion among respondents.

    “Telework is a work arrangement that allows an employee to work at home, or from another remote location, by using the internet or a computer linked to one’s place of employment, as well as digital communications, such as email and phone,” the survey read. “Do any employees at this location CURRENTLY telework in any amount?”

    By defining telework so broadly — as any worker sending an email or making a call outside the office — the Labor Department’s survey question should most likely have turned up a fully in-person figure lower than the one released last week, said Nick Bloom, an economist at Stanford, suggesting that some businesses may have been confused by the question.

    This particular Labor Department figure on telework also combines fully remote work with hybrid arrangements. But hybrid work has eclipsed fully remote policies, with just over half of the workers who can do their jobs from home combining in-person and remote work, according to Gallup.

    A spokeswoman for the Labor Department said the survey most likely did not reflect informal work-from-home arrangements.

    “Taking into account that the self-employed and the public sector are not included in the sample, and that this is a survey of establishments rather than individuals, our estimates do not appear out of line with other estimates,” the spokeswoman said.

    Stanford’s monthly study on working from home, which surveys 10,000 workers across cities and industries, found that 27 percent of paid full-time days were worked from home in early 2023.

    Much of that remote work came from hybrid setups. Last month, the survey found that 12 percent of workers were fully remote, roughly 60 percent fully in person and 28 percent hybrid.

    Other sources of data confirm that working-from-home patterns remain entrenched in certain industries. The building security firm Kastle, for example, tracks data on office badge swipes and reported this month that offices remained at roughly 48 percent of their prepandemic occupancy.

    A closer look at New York, from the Partnership for New York City, found that 52 percent of Manhattan office workers were working in person on an average day at the start of this year, up from 49 percent in September. But only 9 percent of employees were in the office five days a week, underscoring the reach of hybrid arrangements. And Square, the retail technology company, which tracks payments at food and drink establishments, found that sales growth at bars and restaurants in Brooklyn had recently outpaced growth of those in Manhattan.

    “It’s clear that the work-from-home trends induced by the pandemic have transformed the food and drink scene in the city,” said Ara Kharazian, an economist at Square.

    The Partnership for New York City’s data indicated that financial service firms were back in the office in greater numbers than many other companies. Financial service firms reported 59 percent daily office attendance in late January, according to the partnership. The tech industry, by contrast, was at 43 percent.

    All this data is emerging as hundreds of companies formalize their policies on hybrid work, with many trying to persuade their employees to spend more time at the office.

    Amazon told corporate workers last month that they had to be in the office three days a week starting in May, and Starbucks called its 3,750 corporate workers back three days a week as well. Disney asked employees to return to the office four days a week. Its chief executive, Robert A. Iger, cited the need for in-person creative collaborations.

    Other chief executives have also begun to question the merits of remote work. Even Marc Benioff, chief executive of Salesforce, which told all its employees that they could go permanently remote, began voicing concern this year that productivity among some employees was lower.

    As executives push for more in-person work, worker resistance has become more vocal. At Amazon, more than 29,000 employees joined a Slack channel, called Remote Advocacy, protesting the shift to in-person work. At Starbucks, more than 40 corporate employees signed an open letter opposing the new return-to-office policy.

    Wherever people are doing the jobs they already have, mostly in person according to the Labor Department or over a quarter of the time at home according to others, one metric does indicate that hybrid work is here to stay: job postings.

    A study from researchers at Stanford, Harvard and other institutions analyzing over 50 million job postings last month found that postings explicitly mentioning remote work are at 12.2 percent — a fourfold increase since before the pandemic.
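
    The Stanford figures quoted above fit together with simple arithmetic: the share of paid days worked from home is a weighted average over the fully remote, hybrid and fully in-person worker shares. A back-of-envelope sketch, assuming hybrid workers average roughly half their paid days at home (an assumption for illustration, not a figure from the survey), lands near the reported 27 percent:

        # Worker shares from the Stanford survey cited in the article.
        fully_remote = 0.12
        hybrid = 0.28
        fully_in_person = 0.60

        # Assumption for illustration only: hybrid workers spend about half their paid days at home.
        hybrid_days_at_home = 0.5

        share_of_days_at_home = fully_remote * 1.0 + hybrid * hybrid_days_at_home + fully_in_person * 0.0
        print(f"Estimated share of paid days worked from home: {share_of_days_at_home:.0%}")
        # Prints roughly 26%, close to the 27% of paid full-time days the survey reports.

    The same arithmetic is why an establishment survey that lumps hybrid and fully remote arrangements together, or misses informal ones, can tell such a different story from a survey of workers’ actual days.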