More stories

  • New York City Moves to Regulate How AI Is Used in Hiring

    European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

    Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

    The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

    The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.

    New York City’s focused approach represents an important front in A.I. regulation. At some point, the broad-stroke principles developed by governments and international organizations, experts say, must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

    “Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

    But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

    The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety. Uneasy compromises are inevitable.

    Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”

    The law applies to companies with workers in New York City, but labor experts expect it to influence practices nationally. At least four states — California, New Jersey, New York and Vermont — and the District of Columbia are also working on laws to regulate A.I. in hiring. And Illinois and Maryland have enacted laws limiting the use of specific A.I. technologies, often for workplace surveillance and the screening of job candidates.

    The New York City law emerged from a clash of sharply conflicting viewpoints. The City Council passed it during the final days of the administration of Mayor Bill de Blasio. Rounds of hearings and public comments, more than 100,000 words, came later — overseen by the city’s Department of Consumer and Worker Protection, the rule-making agency.

    The result, some critics say, is overly sympathetic to business interests.

    “What could have been a landmark law was watered down to lose effectiveness,” said Alexandra Givens, president of the Center for Democracy & Technology, a policy and civil rights organization.

    That’s because the law defines an “automated employment decision tool” as technology used “to substantially assist or replace discretionary decision making,” she said. The rules adopted by the city appear to interpret that phrasing narrowly so that A.I. software will require an audit only if it is the lone or primary factor in a hiring decision or is used to overrule a human, Ms. Givens said.

    That leaves out the main way the automated software is used, she said, with a hiring manager invariably making the final choice. The potential for A.I.-driven discrimination, she said, typically comes in screening hundreds or thousands of candidates down to a handful or in targeted online recruiting to generate a pool of candidates.

    Ms. Givens also criticized the law for limiting the kinds of groups measured for unfair treatment. It covers bias by sex, race and ethnicity, but not discrimination against older workers or those with disabilities.

    “My biggest concern is that this becomes the template nationally when we should be asking much more of our policymakers,” Ms. Givens said.

    The law was narrowed to sharpen it and make sure it was focused and enforceable, city officials said. The Council and the worker protection agency heard from many voices, including public-interest activists and software companies. Its goal was to weigh trade-offs between innovation and potential harm, officials said.

    “This is a significant regulatory success toward ensuring that A.I. technology is used ethically and responsibly,” said Robert Holden, who was the chair of the Council committee on technology when the law was passed and remains a committee member.

    New York City is trying to address new technology in the context of federal workplace laws with guidelines on hiring that date to the 1970s. The main Equal Employment Opportunity Commission rule states that no practice or method of selection used by employers should have a “disparate impact” on a legally protected group like women or minorities.

    Businesses have criticized the law. In a filing this year, the Software Alliance, a trade group that includes Microsoft, SAP and Workday, said the requirement for independent audits of A.I. was “not feasible” because “the auditing landscape is nascent,” lacking standards and professional oversight bodies.

    But a nascent field is a market opportunity. The A.I. audit business, experts say, is only going to grow. It is already attracting law firms, consultants and start-ups.

    Companies that sell A.I. software to assist in hiring and promotion decisions have generally come to embrace regulation. Some have already undergone outside audits. They see the requirement as a potential competitive advantage, providing proof that their technology expands the pool of job candidates for companies and increases opportunity for workers.

    “We believe we can meet the law and show what good A.I. looks like,” said Roy Wang, general counsel of Eightfold AI, a Silicon Valley start-up that produces software used to assist hiring managers.

    The New York City law also takes an approach to regulating A.I. that may become the norm. The law’s key measurement is an “impact ratio,” or a calculation of the effect of using the software on a protected group of job candidates. (A toy calculation of such a ratio appears after this article.) It does not delve into how an algorithm makes decisions, a concept known as “explainability.”

    In life-affecting applications like hiring, critics say, people have a right to an explanation of how a decision was made. But A.I. like ChatGPT-style software is becoming more complex, perhaps putting the goal of explainable A.I. out of reach, some experts say.

    “The focus becomes the output of the algorithm, not the working of the algorithm,” said Ashley Casovan, executive director of the Responsible AI Institute, which is developing certifications for the safe use of A.I. applications in the workplace, health care and finance.
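
    For readers who want to see what an “impact ratio” boils down to, here is a minimal, hypothetical sketch in Python. It assumes the ratio is computed as each group’s selection rate divided by the selection rate of the most-selected group, and it borrows the federal “four-fifths” rule of thumb as a flag threshold; the candidate counts are invented for illustration, and none of the numbers below come from the law itself.

        # Illustrative sketch only: a toy "impact ratio" check for an automated
        # screening tool. The counts and the 0.8 threshold are assumptions.
        results = {
            "group_a": {"selected": 120, "applicants": 400},
            "group_b": {"selected": 45, "applicants": 300},
        }

        # Selection rate for each group.
        rates = {g: r["selected"] / r["applicants"] for g, r in results.items()}

        # Impact ratio: each group's rate divided by the highest group's rate.
        best = max(rates.values())
        for group, rate in rates.items():
            ratio = rate / best
            status = "flag for review" if ratio < 0.8 else "ok"
            print(f"{group}: rate {rate:.2f}, impact ratio {ratio:.2f} -> {status}")

    An auditor working in this spirit would report numbers like these for each covered category, rather than explaining how the model arrived at any individual decision.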

  • How AI and DNA Are Unlocking the Mysteries of Global Supply Chains

    At a cotton gin in the San Joaquin Valley, in California, a boxy machine helps to spray a fine mist containing billions of molecules of DNA onto freshly cleaned Pima cotton.

    That DNA will act as a kind of minuscule bar code, nestling amid the puffy fibers as they are shuttled to factories in India. There, the cotton will be spun into yarn and woven into bedsheets, before landing on the shelves of Costco stores in the United States. At any time, Costco can test for the DNA’s presence to ensure that its American-grown cotton hasn’t been replaced with cheaper materials — like cotton from the Xinjiang region of China, which is banned in the United States because of its ties to forced labor.

    Amid growing concern about opacity and abuses in global supply chains, companies and government officials are increasingly turning to technologies like DNA tracking, artificial intelligence and blockchains to try to trace raw materials from the source to the store.

    Companies in the United States are now subject to new rules that require firms to prove their goods are made without forced labor, or face having them seized at the border. U.S. customs officials said in March that they had already detained nearly a billion dollars’ worth of shipments coming into the United States that were suspected of having some ties to Xinjiang. Products from the region have been banned since last June.

    Customers are also demanding proof that expensive, high-end products — like conflict-free diamonds, organic cotton, sushi-grade tuna or Manuka honey — are genuine, and produced in ethically and environmentally sustainable ways.

    That has forced a new reality on companies that have long relied on a tangle of global factories to source their goods. More than ever before, companies must be able to explain where their products really come from.

    The task may seem straightforward, but it can be surprisingly tricky. That’s because the international supply chains that companies have built in recent decades to cut costs and diversify their product offerings have grown astonishingly complex. Since 2000, the value of intermediate goods used to make products that are traded internationally has tripled, driven partly by China’s booming factories.

    A large, multinational company may buy parts, materials or services from thousands of suppliers around the world. One of the largest such companies, Procter & Gamble, which owns brands like Tide, Crest and Pampers, has nearly 50,000 direct suppliers. Each of those suppliers may, in turn, rely on hundreds of other companies for the parts used to make its product — and so on, for many levels up the supply chain.

    To make a pair of jeans, for example, various companies must farm and clean cotton, spin it into thread, dye it, weave it into fabric, cut the fabric into patterns and stitch the jeans together. Other webs of companies mine, smelt or process the brass, nickel or aluminum that is crafted into the zipper, or make the chemicals that are used to manufacture synthetic indigo dye.

    “Supply chains are like a bowl of spaghetti,” said James McGregor, the chairman of the greater China region for APCO Worldwide, an advisory firm. “They get mixed all over. You don’t know where that stuff comes from.”

    Given these challenges, some companies are turning to alternative methods, not all proven, to try to inspect their supply chains.

    Some companies — like the one that sprays the DNA mist onto cotton, Applied DNA Sciences — are using scientific processes to tag or test a physical attribute of the good itself, to figure out where it has traveled on its path from factories to consumer.

    Applied DNA has used its synthetic DNA tags, each just a billionth of the size of a grain of sugar, to track microcircuits produced for the Department of Defense, trace cannabis supply chains to ensure the product’s purity and even to mist robbers in Sweden who attempted to steal cash from A.T.M.s, leading to multiple arrests.

    MeiLin Wan, the vice president for textiles at Applied DNA, said the new regulations were creating a “tipping point for real transparency.”

    “There definitely is a lot more interest,” she added.

    The cotton industry was one of the earliest adopters of tracing technologies, in part because of previous transgressions. In the mid-2010s, Target, Walmart and Bed Bath & Beyond faced expensive product recalls or lawsuits after the “Egyptian cotton” sheets they sold turned out to have been made with cotton from elsewhere. A New York Times investigation last year documented that the “organic cotton” industry was also rife with fraud.

    In addition to the DNA mist it applies as a marker, Applied DNA can figure out where cotton comes from by sequencing the DNA of the cotton itself, or analyzing its isotopes, which are variations in the carbon, oxygen and hydrogen atoms in the cotton. Differences in rainfall, latitude, temperature and soil conditions mean these atoms vary slightly across regions of the world, allowing researchers to map where the cotton in a pair of socks or bath towel has come from.

    Other companies are turning to digital technology to map supply chains, by creating and analyzing complex databases of corporate ownership and trade.

    Some firms, for example, are using blockchain technology to create a digital token for every product that a factory produces. As that product — a can of caviar, say, or a batch of coffee — moves through the supply chain, its digital twin gets encoded with information about how it has been transported and processed, providing a transparent log for companies and consumers.

    Other companies are using databases or artificial intelligence to comb through vast supplier networks for distant links to banned entities, or to detect unusual trade patterns that indicate fraud — investigations that could take years to carry out without computing power.

    Sayari, a corporate risk intelligence provider that has developed a platform combining data from billions of public records issued globally, is one of those companies. The service is now used by U.S. customs agents as well as private companies. On a recent Tuesday, Jessica Abell, the vice president of solutions at Sayari, ran the supplier list of a major U.S. retailer through the platform and watched as dozens of tiny red flags appeared next to the names of distant companies.

    “We’re flagging not only the Chinese companies that are in Xinjiang, but then we’re also automatically exploring their commercial networks and flagging the companies that are directly connected to it,” Ms. Abell said. It is up to the companies to decide what, if anything, to do about their exposure. (A simplified sketch of this kind of network check appears after this article.)

    Studies have found that most companies have surprisingly little visibility into the upper reaches of their supply chains, because they lack either the resources or the incentives to investigate. In a 2022 survey by McKinsey & Company, 45 percent of respondents said they had no visibility at all into their supply chain beyond their immediate suppliers.

    But staying in the dark is no longer feasible for companies, particularly those in the United States, after the congressionally imposed ban on importing products from Xinjiang — where 100,000 ethnic minorities are presumed by the U.S. government to be working in conditions of forced labor — went into effect last year.

    Xinjiang’s links to certain products are already well known. Experts have estimated that roughly one in five cotton garments sold globally contains cotton or yarn from Xinjiang. The region is also responsible for more than 40 percent of the world’s polysilicon, which is used in solar panels, and a quarter of its tomato paste.

    But other industries, like cars, vinyl flooring and aluminum, also appear to have connections to suppliers in the region and are coming under more scrutiny from regulators.

    Having a full picture of their supply chains can offer companies other benefits, like helping them recall faulty products or reduce costs. The information is increasingly needed to estimate how much carbon dioxide is actually emitted in the production of a good, or to satisfy other government rules that require products to be sourced from particular places — such as the Biden administration’s new rules on electric vehicle tax credits.

    Executives at these technology companies say they envision a future, perhaps within the next decade, in which most supply chains are fully traceable, an outgrowth of both tougher government regulations and the wider adoption of technologies.

    “It’s eminently doable,” said Leonardo Bonanni, the chief executive of Sourcemap, which has helped companies like the chocolate maker Mars map out their supply chains. “If you want access to the U.S. market for your goods, it’s a small price to pay, frankly.”

    Others express skepticism about the limitations of these technologies, including their cost. While Applied DNA’s technology, for example, adds only 5 to 7 cents to the price of a finished piece of apparel, that may be significant for retailers competing on thin margins.

    And some express concerns about accuracy, including, for example, databases that may flag companies incorrectly. Investigators still need to be on the ground locally, they say, speaking with workers and remaining alert for signs of forced or child labor that may not show up in digital records.

    Justin Dillon, the chief executive of FRDM, a nonprofit organization dedicated to ending forced labor, said there was “a lot of angst, a lot of confusion” among companies trying to satisfy the government’s new requirements.

    Importers are “looking for boxes to check,” he said. “And transparency in supply chains is as much an art as it is a science. It’s kind of never done.”
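
    The network screening described above can be pictured as a graph search. The sketch below is a simplified, hypothetical version: it walks an invented supplier graph a few tiers up the chain and reports any company that can be reached from a list of banned entities. The company names, the graph and the hop limit are assumptions for illustration, not a description of Sayari’s actual system.

        from collections import deque

        # Hypothetical supplier graph: each company maps to the suppliers it buys from.
        supplier_graph = {
            "us_retailer": ["apparel_maker_1", "apparel_maker_2"],
            "apparel_maker_1": ["fabric_mill_a"],
            "apparel_maker_2": ["fabric_mill_b"],
            "fabric_mill_a": ["yarn_spinner_x"],
            "fabric_mill_b": ["yarn_spinner_y"],
            "yarn_spinner_x": ["cotton_gin_xinjiang"],   # assumed banned-region supplier
            "yarn_spinner_y": ["cotton_gin_california"],
        }

        banned_entities = {"cotton_gin_xinjiang"}

        def flag_exposure(buyer, graph, banned, max_hops=4):
            """Breadth-first search up the chain; return banned entities reachable
            from `buyer` within `max_hops` tiers, with the tier where they appear."""
            seen, flags = {buyer}, {}
            queue = deque([(buyer, 0)])
            while queue:
                company, hops = queue.popleft()
                if hops == max_hops:
                    continue
                for supplier in graph.get(company, []):
                    if supplier in banned:
                        flags[supplier] = hops + 1
                    if supplier not in seen:
                        seen.add(supplier)
                        queue.append((supplier, hops + 1))
            return flags

        print(flag_exposure("us_retailer", supplier_graph, banned_entities))
        # {'cotton_gin_xinjiang': 4} -> exposure four tiers up the chain

    A real platform performs the same kind of traversal over billions of ownership and trade records, which is why such distant links are so hard to find by hand.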

  • Tinkering With ChatGPT, Workers Wonder: Will This Take My Job?

    In December, the staff of the American Writers and Artists Institute — a 26-year-old membership organization for copywriters — realized that something big was happening.

    The newest edition of ChatGPT, a “large language model” that mines the internet to answer questions and perform tasks on command, had just been released. Its abilities were astonishing — and squarely in the bailiwick of people who generate content, such as advertising copy and blog posts, for a living.

    “They’re horrified,” said Rebecca Matter, the institute’s president. Over the holidays, she scrambled to organize a webinar on the pitfalls and potential of the new artificial-intelligence technology. More than 3,000 people signed up, she said, and the overall message was cautionary but reassuring: Writers could use ChatGPT to complete assignments more quickly, and move into higher-level roles in content planning and search-engine optimization.

    “I do think it’s going to minimize short-form copy projects,” Ms. Matter said. “But on the flip side of that, I think there will be more opportunities for things like strategy.”

    OpenAI’s ChatGPT is the latest advance in a steady march of innovations that have offered the potential to transform many occupations and wipe out others, sometimes in tandem. It is too early to tally the enabled and the endangered, or to gauge the overall impact on labor demand and productivity. But it seems clear that artificial intelligence will impinge on work in different ways than previous waves of technology.

    The positive view of tools like ChatGPT is that they could be complements to human labor, rather than replacements. Not all workers are sanguine, however, about the prospective impact.

    Katie Brown is a grant writer in the Chicago suburbs for a small nonprofit group focused on addressing domestic violence. She was shocked to learn in early February that a professional association for grant writers was promoting the use of artificial-intelligence software that would automatically complete parts of an application, requiring the human simply to polish it before submitting.

    The platform, called Grantable, is based on the same technology as ChatGPT, and it markets itself to freelancers who charge by the application. That, she thought, clearly threatens opportunities in the industry.

    “For me, it’s common sense: Which do you think a small nonprofit will pick?” Ms. Brown said. “A full-time-salary-plus-benefits person, or someone equipped with A.I. that you don’t have to pay benefits for?”

    Artificial intelligence and machine learning have been operating in the background of many businesses for years, helping to evaluate large numbers of possible decisions and better align supply with demand, for example. And plenty of technological advancements over centuries have decreased the need for certain workers — although each time, the jobs created have more than offset the number lost.

    ChatGPT, however, is the first to confront such a broad range of white-collar workers so directly, and to be so accessible that people could use it in their own jobs. And it is improving rapidly, with a new edition released this month. According to a survey conducted by the job search website ZipRecruiter after ChatGPT’s release, 62 percent of job seekers said they were concerned that artificial intelligence could derail their careers.

    “ChatGPT is the one that made it more visible,” said Michael Chui, a partner at the McKinsey Global Institute who studies automation’s effects. “So I think it did start to raise questions about where timelines might start to be accelerated.”

    That’s also the conclusion of a White House report on the implications of A.I. technology, including ChatGPT. “The primary risk of A.I. to the work force is in the general disruption it is likely to cause to workers, whether they find that their jobs are newly automated or that their job design has fundamentally changed,” the authors wrote.

    For now, Guillermo Rubio has found that his job as a copywriter has changed markedly since he started using ChatGPT to generate ideas for blog posts, write first drafts of newsletters, create hundreds of slight variations on stock advertising copy and summon research on a subject about which he might write a white paper.

    Since he still charges his clients the same rates, the tool has simply allowed him to work less. If the going rate for copy goes down, though — which it might, as the technology improves — he’s confident he’ll be able to move into consulting on content strategy, along with production.

    “I think people are more reluctant and fearful, with good reason,” Mr. Rubio, who is in Orange County, Calif., said. “You could look at it in a negative light, or you can embrace it. I think the biggest takeaway is you have to be adaptable. You have to be open to embracing it.”

    After decades of study, researchers understand a lot about automation’s impact on the work force. Economists including Daron Acemoglu at the Massachusetts Institute of Technology have found that since 1980, technology has played a primary role in amplifying income inequality. As labor unions atrophied, hollowing out systems for training and retraining, workers without college educations saw their bargaining power reduced in the face of machines capable of rudimentary tasks.

    The advent of ChatGPT three months ago, however, has prompted a flurry of studies predicated on the idea that this isn’t your average robot.

    One team of researchers ran an analysis showing the industries and occupations that are most exposed to artificial intelligence, based on a model adjusted for generative language tools. Topping the list were college humanities professors, legal services providers, insurance agents and telemarketers. Mere exposure, however, doesn’t determine whether the technology is likely to replace workers or merely augment their skills.

    Shakked Noy and Whitney Zhang, doctoral students at M.I.T., conducted a randomized, controlled trial on experienced professionals in such fields as human relations and marketing. The participants were given tasks that typically take 20 to 30 minutes, like writing news releases and brief reports. Those who used ChatGPT completed the assignments 37 percent faster on average than those who didn’t — a substantial productivity increase. They also reported a 20 percent increase in job satisfaction.

    A third study — using a program developed by GitHub, which is owned by Microsoft — evaluated the impact of generative A.I. specifically on software developers. In a trial run by GitHub’s researchers, developers given an entry-level task and encouraged to use the program, called Copilot, completed their task 55 percent faster than those who did the assignment manually.

    Those productivity gains are unlike almost any observed since the widespread adoption of the personal computer.

    “It does seem to be doing something fundamentally different,” said David Autor, another M.I.T. economist, who advises Ms. Zhang and Mr. Noy. “Before, computers were powerful, but they simply and robotically did what people programmed them to do.” Generative artificial intelligence, on the other hand, is “adaptive, it learns and is capable of flexible problem solving.”

    That’s very apparent to Peter Dolkens, a software developer for a company that primarily makes online tools for the sports industry. He has been integrating ChatGPT into his work for tasks like summarizing chunks of code to aid colleagues who may pick up the project after him, and proposing solutions to problems that have him stumped. If the answer isn’t perfect, he’ll ask ChatGPT to refine it, or try something different.

    “It’s the equivalent of a very well-read intern,” Mr. Dolkens, who is in London, said. “They might not have the experience to know how to apply it, but they know all the words, they’ve read all the books and they’re able to get part of the way there.”

    There’s another takeaway from the initial research: ChatGPT and Copilot elevated the least experienced workers the most. If true, more generally, that could mitigate the inequality-widening effects of artificial intelligence.

    On the other hand, as each worker becomes more productive, fewer workers are required to complete a set of tasks. Whether that results in fewer jobs in particular industries depends on the demand for the service provided, and the jobs that might be created in helping to manage and direct the A.I. “Prompt engineering,” for example, is already a skill that those who play around with ChatGPT long enough can add to their résumés.

    Since demand for software code seems insatiable, and developers’ salaries are extremely high, increasing productivity seems unlikely to foreclose opportunities for people to enter the field.

    That won’t be the same for every profession, however, and Dominic Russo is pretty sure it won’t be true for his: writing appeals to pharmacy benefit managers and insurance companies when they reject prescriptions for expensive drugs. He has been doing the job for about seven years, and has built expertise with only on-the-job training, after studying journalism in college.

    After ChatGPT came out, he asked it to write an appeal on behalf of someone with psoriasis who wanted the expensive drug Otezla. The result was good enough to require only a few edits before submitting it.

    “If you knew what to prompt the A.I. with, anyone could do the work,” Mr. Russo said. “That’s what really scares me. Why would a pharmacy pay me $70,000 a year, when they can license the technology and pay people $12 an hour to run prompts into it?”

    To try to protect himself from that possible future, Mr. Russo has been building up his side business: selling pizzas out of his house in southern New Jersey, an enterprise that he figures won’t be disrupted by artificial intelligence. Yet.

  • Economists Pin More Blame on Tech for Rising Inequality

    Recent research underlines the central role that automation has played in widening disparities.

    Daron Acemoglu, an influential economist at the Massachusetts Institute of Technology, has been making the case against what he describes as “excessive automation.”

    The economywide payoff of investing in machines and software has been stubbornly elusive. But he says the rising inequality resulting from those investments, and from the public policy that encourages them, is crystal clear.

    Half or more of the increasing gap in wages among American workers over the last 40 years is attributable to the automation of tasks formerly done by human workers, especially men without college degrees, according to some of his recent research.

    Globalization and the weakening of unions have played roles. “But the most important factor is automation,” Mr. Acemoglu said. And automation-fueled inequality is “not an act of God or nature,” he added. “It’s the result of choices corporations and we as a society have made about how to use technology.”

    Mr. Acemoglu, a wide-ranging scholar whose research makes him one of the most cited economists in academic journals, is hardly the only prominent economist arguing that computerized machines and software, with a hand from policymakers, have contributed significantly to the yawning gaps in incomes in the United States. Their numbers are growing, and their voices add to the chorus of criticism surrounding the Silicon Valley giants and the unchecked advance of technology.

    Paul Romer, who won a Nobel in economic science for his work on technological innovation and economic growth, has expressed alarm at the runaway market power and influence of the big tech companies. “Economists taught: ‘It’s the market. There’s nothing we can do,’” he said in an interview last year. “That’s really just so wrong.”

    Anton Korinek, an economist at the University of Virginia, and Joseph Stiglitz, a Nobel economist at Columbia University, have written a paper, “Steering Technological Progress,” which recommends steps from nudges for entrepreneurs to tax changes to pursue “labor-friendly innovations.”

    Erik Brynjolfsson, an economist at Stanford, is a technology optimist in general. But in an essay to be published this spring in Daedalus, the journal of the American Academy of Arts and Sciences, he warns of “the Turing trap.” The phrase is a reference to the Turing test, named for Alan Turing, the English pioneer in artificial intelligence, in which the goal is for a computer program to engage in a dialogue so convincingly that it is indistinguishable from a human being.

    For decades, Mr. Brynjolfsson said, the Turing test — matching human performance — has been the guiding metaphor for technologists, businesspeople and policymakers in thinking about A.I. That leads to A.I. systems that are designed to replace workers rather than enhance their performance. “I think that’s a mistake,” he said.

    The concerns raised by these economists are getting more attention in Washington at a time when the giant tech companies are already being attacked on several fronts. Officials regularly criticize the companies for not doing enough to protect user privacy and say the companies amplify misinformation. State and federal lawsuits accuse Google and Facebook of violating antitrust laws, and Democrats are trying to rein in the market power of the industry’s biggest companies through new laws.

    Mr. Acemoglu testified in November before the House Select Committee on Economic Disparity and Fairness in Growth at a hearing on technological innovation, automation and the future of work. The committee, which got underway in June, will hold hearings and gather information for a year and report its findings and recommendations.

    Despite the partisan gridlock in Congress, Representative Jim Himes, a Connecticut Democrat and the chairman of the committee, is confident the committee can find common ground on some steps to help workers, like increased support for proven job-training programs.

    “There’s nothing partisan about economic disparity,” Mr. Himes said, referring to the harm to millions of American families regardless of their political views.

    Economists point to the postwar years, from 1950 to 1980, as a golden age when technology forged ahead and workers enjoyed rising incomes.

    But afterward, many workers started falling behind. There was a steady advance of crucial automating technologies — robots and computerized machines on factory floors, and specialized software in offices. To stay ahead, workers required new skills.

    Yet the technological shift evolved as growth in postsecondary education slowed and companies began spending less on training their workers. “When technology, education and training move together, you get shared prosperity,” said Lawrence Katz, a labor economist at Harvard. “Otherwise, you don’t.”

    Increasing international trade tended to encourage companies to adopt automation strategies. For example, companies worried by low-cost competition from Japan and later China invested in machines to replace workers.

    Today, the next wave of technology is artificial intelligence. And Mr. Acemoglu and others say it can be used mainly to assist workers, making them more productive, or to supplant them.

    Mr. Acemoglu, like some other economists, has altered his view of technology over time. In economic theory, technology is almost a magic ingredient that both increases the size of the economic pie and makes nations richer. He recalled working on a textbook more than a decade ago that included the standard theory. Shortly after, while doing further research, he had second thoughts.

    “It’s too restrictive a way of thinking,” he said. “I should have been more open-minded.”

    Mr. Acemoglu is no enemy of technology. Its innovations, he notes, are needed to address society’s biggest challenges, like climate change, and to deliver economic growth and rising living standards. His wife, Asuman Ozdaglar, is the head of the electrical engineering and computer science department at M.I.T.

    But as Mr. Acemoglu dug deeply into economic and demographic data, the displacement effects of technology became increasingly apparent. “They were greater than I assumed,” he said. “It’s made me less optimistic about the future.”

    Mr. Acemoglu’s estimate that half or more of the increasing gap in wages in recent decades stemmed from technology was published last year with his frequent collaborator, Pascual Restrepo, an economist at Boston University. The conclusion was based on an analysis of demographic and business data that details the declining share of economic output that goes to workers as wages and the increased spending on machinery and software.

    Mr. Acemoglu and Mr. Restrepo have published papers on the impact of robots and the adoption of “so-so technologies,” as well as the recent analysis of technology and inequality.

    So-so technologies replace workers but do not yield big gains in productivity. As examples, Mr. Acemoglu cites self-checkout kiosks in grocery stores and automated customer service over the phone.

    Today, he sees too much investment in such so-so technologies, which helps explain the sluggish productivity growth in the economy. By contrast, truly significant technologies create new jobs elsewhere, lifting employment and wages.

    The rise of the auto industry, for example, generated jobs in car dealerships, advertising, accounting and financial services.

    Market forces have produced technologies that help people do their work rather than replace them. In computing, the examples include databases, spreadsheets, search engines and digital assistants.

    But Mr. Acemoglu insists that a hands-off, free-market approach is a recipe for widening inequality, with all its attendant social ills. One important policy step, he recommends, is fair tax treatment for human labor. The tax rate on labor, including payroll and federal income tax, is 25 percent. After a series of tax breaks, the current rate on the costs of equipment and software is near zero. (A back-of-the-envelope comparison appears after this article.)

    Well-designed education and training programs for the jobs of the future, Mr. Acemoglu said, are essential. But he also believes that technology development should be steered in a more “human-friendly direction.” He takes inspiration from the development of renewable energy over the last two decades, which has been helped by government research, production subsidies and social pressure on corporations to reduce carbon emissions.

    “We need to redirect technology so it works for people,” Mr. Acemoglu said, “not against them.”
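
    To make the tax asymmetry concrete, here is a back-of-the-envelope comparison. The $100,000 figure and the simplification that a machine and a worker are interchangeable are assumptions for illustration; only the roughly 25 percent rate on labor and the near-zero rate on equipment and software come from the argument above.

        # Rough illustration of the tax wedge between hiring and automating.
        # Assumption: the same task can be done with $100,000 of wages or
        # $100,000 of software; only the tax treatment differs.
        spend = 100_000

        labor_tax_rate = 0.25      # approximate combined payroll and income tax
        equipment_tax_rate = 0.0   # near-zero effective rate after tax breaks

        taxed_cost_of_worker = spend * (1 + labor_tax_rate)
        taxed_cost_of_software = spend * (1 + equipment_tax_rate)

        print(f"Worker, with labor taxes:    ${taxed_cost_of_worker:,.0f}")
        print(f"Software, with tax breaks:   ${taxed_cost_of_software:,.0f}")
        print(f"Tax advantage to automating: ${taxed_cost_of_worker - taxed_cost_of_software:,.0f}")

    On these simplified assumptions, the tax code alone makes the automated option about $25,000 cheaper, which is the distortion the proposed “fair tax treatment for human labor” is meant to remove.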

  • Hundreds of Google Employees Unionize, Culminating Years of Activism

    The creation of the union, a rarity in Silicon Valley, follows years of increasing outspokenness by Google workers. Executives have struggled to handle the change.

    OAKLAND, Calif. — More than 225 Google engineers and other workers have formed a union, the group revealed on Monday, capping years of growing activism at one of the world’s largest companies and presenting a rare beachhead for labor organizers in staunchly anti-union Silicon Valley.

    The union’s creation is highly unusual for the tech industry, which has long resisted efforts to organize its largely white-collar work force. It follows increasing demands by employees at Google for policy overhauls on pay, harassment and ethics, and is likely to escalate tensions with top leadership.

    The new union, called the Alphabet Workers Union after Google’s parent company, Alphabet, was organized in secret for the better part of a year and elected its leadership last month. The group is affiliated with the Communications Workers of America, a union that represents workers in telecommunications and media in the United States and Canada.

    But unlike a traditional union, which demands that an employer come to the bargaining table to agree on a contract, the Alphabet Workers Union is a so-called minority union that represents a fraction of the company’s more than 260,000 full-time employees and contractors. Workers said it was primarily an effort to give structure and longevity to activism at Google, rather than to negotiate for a contract.

    Chewy Shaw, an engineer at Google in the San Francisco Bay Area and the vice chair of the union’s leadership council, said the union was a necessary tool to sustain pressure on management so that workers could force changes on workplace issues.

    “Our goals go beyond the workplace questions of, ‘Are people getting paid enough?’ Our issues are going much broader,” he said. “It is a time where a union is an answer to these problems.”

    In response, Kara Silverstein, Google’s director of people operations, said: “We’ve always worked hard to create a supportive and rewarding workplace for our work force. Of course, our employees have protected labor rights that we support. But as we’ve always done, we’ll continue engaging directly with all our employees.”

    The new union is the clearest sign of how thoroughly employee activism has swept through Silicon Valley over the past few years. While software engineers and other tech workers largely kept quiet in the past on societal and political issues, employees at Amazon, Salesforce, Pinterest and others have become more vocal on matters like diversity, pay discrimination and sexual harassment.

    Nowhere have those voices been louder than at Google. In 2018, more than 20,000 employees staged a walkout to protest how the company handled sexual harassment. Others have opposed business decisions that they deemed unethical, such as developing artificial intelligence for the Defense Department and providing technology to U.S. Customs and Border Protection.

    Even so, unions have not previously gained traction in Silicon Valley. Many tech workers shunned them, arguing that labor groups were focused on issues like wages — not a top concern in the high-earning industry — and were not equipped to address their concerns about ethics and the role of technology in society. Labor organizers also found it difficult to corral the tech companies’ huge workforces, which are scattered around the globe.

    Only a few small union drives have succeeded in tech in the past. Workers at the crowdfunding site Kickstarter and at the app development platform Glitch won union campaigns last year, and a small group of contractors at a Google office in Pittsburgh unionized in 2019. Thousands of employees at an Amazon warehouse in Alabama are also set to vote on a union in the coming months.

    “There are those who would want you to believe that organizing in the tech industry is completely impossible,” Sara Steffens, C.W.A.’s secretary-treasurer, said of the new Google union. “If you don’t have unions in the tech industry, what does that mean for our country? That’s one reason, from C.W.A.’s point of view, that we see this as a priority.”

    Veena Dubal, a law professor at the University of California, Hastings College of the Law, said the Google union was a “powerful experiment” because it brought unionization into a major tech company and skirted barriers that have prevented such organizing.

    “If it grows — which Google will do everything they can to prevent — it could have huge impacts not just for the workers, but for the broader issues that we are all thinking about in terms of tech power in society,” she said.

    The union is likely to ratchet up tensions between Google engineers, who work on autonomous cars, artificial intelligence and internet search, and the company’s management. Sundar Pichai, Google’s chief executive, and other executives have tried to come to grips with an increasingly activist work force — but have made missteps.

    Last month, federal officials said Google had wrongly fired two employees who protested its work with immigration authorities in 2019. Timnit Gebru, a Black woman who is a respected artificial intelligence researcher, also said last month that Google fired her after she criticized the company’s approach to minority hiring and the biases built into A.I. systems. Her departure set off a storm of criticism about Google’s treatment of minority employees.

    “These companies find it a bone in their throat to even have a small group of people who say, ‘We work at Google and have another point of view,’” said Nelson Lichtenstein, the director of the Center for the Study of Work, Labor and Democracy at the University of California, Santa Barbara. “Google might well succeed in decimating any organization that comes to the floor.”

    The Alphabet Workers Union, which represents employees in Silicon Valley and cities like Cambridge, Mass., and Seattle, gives protection and resources to workers who join. Those who opt to become members will contribute 1 percent of their total compensation to the union to fund its efforts.

    Over the past year, the C.W.A. has pushed to unionize white-collar tech workers. (The NewsGuild, a union that represents New York Times employees, is part of C.W.A.) The drive focused initially on employees at video game companies, who often work grueling hours and face layoffs.

    In late 2019, C.W.A. organizers began meeting with Google employees to discuss a union drive, workers who attended the meetings said. Some employees were receptive and signed cards to officially join the union last summer. In December, the Alphabet Workers Union held elections to select a seven-person executive council.

    But several Google employees who had previously organized petitions and protests at the company objected to the C.W.A.’s overtures. They said they declined to join because they worried that the effort had sidelined experienced organizers and played down the risks of organizing as it recruited members.

    Amr Gaber, a Google software engineer who helped organize the 2018 walkout, said that C.W.A. officials were dismissive of other labor groups that had supported Google workers during a December 2019 phone call with him and others.

    “They are more concerned about claiming turf than the needs of the workers who were on the phone call,” Mr. Gaber said. “As a long-term labor organizer and brown man, that’s not the type of union I want to build.”

    The C.W.A. said it was selected by Google workers to help organize the union and had not elbowed its way in. “It’s really the workers who chose,” Ms. Steffens of C.W.A. said.

    Traditional unions typically enroll a majority of a work force and petition a state or federal labor board like the National Labor Relations Board to hold an election. If they win the vote, they can bargain with their employer on a contract. A minority union allows employees to organize without first winning a formal vote before the N.L.R.B.

    The C.W.A. has used this model to organize groups in states where it said labor laws are unfavorable, like the Texas State Employees Union and the United Campus Workers in Tennessee.

    The structure also gives the union the latitude to include Google contractors, who outnumber full-time workers and who would be excluded from a traditional union. Some Google employees have considered establishing a minority or solidarity union for several years, and ride-hailing drivers have formed similar groups.

    Although it will not be able to negotiate a contract, the Alphabet Workers Union can use other tactics to pressure Google into changing its policies, labor experts said. Minority unions often turn to public pressure campaigns and lobby legislative or regulatory bodies to influence employers.

    “We’re going to use every tool that we can to use our collective action to protect people who we think are being discriminated against or retaliated against,” Mr. Shaw said.

    Members cited the recent N.L.R.B. finding on the firing of two employees and the exit of Ms. Gebru, the prominent researcher, as reasons for the union to broaden its membership and publicly step up its efforts.

    “Google is making it all the more clear why we need this now,” said Auni Ahsan, a software engineer at Google and an at-large member of the union’s executive council. “Sometimes the boss is the best organizer.”