by Arthur Herman
Artificial intelligence (AI) has become the primary driver of the stock market surge in 2023, despite worries on Capitol Hill about the technology’s unregulated status. The fear that it will turn us into prey for Terminator-style robots has faded, although it is not entirely gone. The worry now consuming experts is China’s scramble to become AI’s top nation: in 2022, China published more than twice as many scientific papers on AI as the United States while widening its lead in applications for AI-related patents. These numbers hardly mean game, set, and match for Beijing, but strategically and economically, they point in an alarming direction. Even so, every indication suggests that the United States will remain the world’s dominant power in artificial intelligence and machine learning in 2024.
[Photo: Amanda Dalbjörn/Unsplash]
According to Tracxn Technologies, the United States now hosts more than 18,000 AI startups. A recent Adobe study found that 77 percent of consumers now use AI technology, and the consulting firm McKinsey found that business adoption of AI has more than doubled since 2017, with half of all American businesses using some form of AI. Overall, McKinsey predicted that generative AI alone would add between $2.6 trillion and $4.4 trillion a year to the global economy, with the bulk of that flowing to the United States, the world’s dominant AI power.
America’s dominance in AI did not happen by accident, nor was it the result of government planning and investment, like China’s catch-up effort under President Xi Jinping. Instead, it has been a decades-long saga involving scientists, engineers, business moguls, investors, visionaries, and charlatans, not to mention an American free-market business culture that’s open to new ideas and new technologies and willing to put money where it sees a comfortable return. Nonetheless, in 2023, three figures stood out as harbingers of where the AI sector is headed for good or ill in 2024.
The first is Sam Altman, the CEO and genius behind OpenAI and ChatGPT, the AI-driven app that made Americans, and the world, aware of the explosive potential of generative AI. Generative AI, i.e., artificial intelligence able to generate text, images, or other media, is reshaping how we live, work, and do business. ChatGPT shot onto the world stage because of its seemingly effortless ability to create poems and essays and tackle research problems virtually on command (someone even had ChatGPT write a review of a book I hadn’t written yet).
Altman’s sudden celebrity brought him to Capitol Hill to talk to Congress and the White House about the future of AI, a technology of which he was a strong advocate and optimist. However, others on OpenAI’s board had their doubts. Then, early last November, Altman was ousted as CEO, with the board looking for a successor. The news set off shock waves across AI and business, with some clamoring for Altman’s reinstatement. Even board members who had voted him out began to doubt their decision.
In less than a week, Altman was back in charge while the board underwent a major shakeup. Some analysts compared the incident to Steve Jobs’s 1985 ouster from Apple and his return as CEO in 1997. However, Altman’s firing had nothing to do with his business decisions. As Sarah Kreps, director of Cornell University’s Tech Policy Institute, told Vox, he and his aide Greg Brockman “seem to be of the view that accelerating AI can achieve the most good for humanity.” The old board took a different view, “that the pace of advancement is too fast and could compromise safety and trust,” and so applied the brakes by firing Altman.
Altman’s reinstatement suggests the majority of the board changed its mind about Altman and the future of AI. His return sent a clear signal to AI optimists that, at least for now, we are willing to assume AI’s risks in order to reap its rewards.
Another AI optimist is Lisa Su, CEO of Advanced Micro Devices (AMD). AMD was one of the pioneering semiconductor firms that gave Silicon Valley its name. But while AMD enjoyed a relatively steady market in microchips for gaming devices in the 2000s, it was being eclipsed in market dominance by firms like TSMC, Intel, Qualcomm, and Nvidia. In 2014, it was on the brink of bankruptcy. Then came Su. Her stewardship brought a new generation of advanced processors for AMD customers, the Ryzen line, culminating in the launch of AMD’s MI300X chip this November. Speaking on December 6 at an event in San Jose, California, Su called the new chip “the most advanced AI accelerator in the industry.”
It was a chip aimed directly at the industry’s leader in AI chips, Nvidia. Su’s entire presentation highlighted the superiority of the MI300X to Nvidia’s H100 chip, which companies like Amazon, Meta, Microsoft, and Google have been using to develop their own generative AI tools.
Transitioning to AI chips is a natural move for AMD, since the graphics processors developed for gaming are precisely the kind of chips AI workloads demand. Unlike standard central processing units (CPUs), graphics processing units (GPUs) come with thousands of cores that speed up the machine-learning training process at the heart of generative AI.
GPUs come into play in developing and refining AI algorithms. Another class of microchips, field-programmable gate arrays (FPGAs), applies trained AI algorithms to real-world data inputs to solve specific problems. Unlike most chips, FPGAs can be reprogrammed after manufacture (hence, “in the field”) for niche computing tasks.
AMD’s MI300X chip combines both kinds of units, along with application-specific integrated circuits (ASICs) specialized for AI. Su expects the new chip to hit $1 billion in sales by mid-2024 (a variant, the MI300A, will serve the supercomputing market).
Securing that market means winning over big buyers like Microsoft and Meta. Both companies were at the chip launch, as well as OpenAI, which will use AMD’s new chips in the latest version of its Triton AI software. Whatever happens with AI as a technology, the competition to provide the means of doing it faster, cheaper, and more efficiently will be fiercer than ever, thanks to Lisa Su.
What happens with AI as a technology was the primary concern of the third person on my list of AI revolutionaries, and the most surprising. His name is Henry Kissinger. Most know him as the legendary secretary of state under Richard Nixon, who engineered the opening to China in 1971 and who saved Israel from destruction in the Yom Kippur War in 1973. However, toward the end of his life, Kissinger became fascinated by the possibilities, and dangers, of AI as a transformative technology. He and former Google CEO Eric Schmidt co-wrote a book, The Age of AI: And Our Human Future, which explored the many possibilities of an AI-dominated world, including its application to lethal weapons, terrorism, and health care.
The Age of AI gave the American public its first look at the potential dangers posed by machines that can process information faster than humans, replace humans in many jobs and professions, and perhaps one day become superior to humans altogether. My review of the book at the time was highly critical: I thought its view of the dangers of runaway AI was overstated and its proposed solutions (such as international agreements modeled on those governing nuclear weapons) ill-conceived.
However, this ideological disagreement interfered with neither our growing association, which lasted until his death (he was planning a new book on AI), nor my respect for his willingness to raise the critical and existential issues surrounding AI’s future as a technology. As he wrote: “The advent of AI, with its capacity to learn and process information in ways that human reason alone cannot, may yield progress on questions that have proven beyond our capacity to answer…but success will produce new questions.” Henry Kissinger was no scientist or engineer. Yet, he dared to question not only what we want to do with AI but also why we want to do it. In some ways, that makes him just as important a figure in the making of American AI in 2024 as Sam Altman and Lisa Su.
Arthur Herman is the Director of the Quantum Alliance Initiative at Hudson Institute and a Hudson senior fellow. He is also the author of Freedom’s Forge: How American Business Produced Victory in World War II.