If the race for powerful A.I. is indeed a race among civilizations for control of the future, the United States and European nations should be spending at least 50 times the amount they do on public funding of basic A.I. research. Their model should be the research that led to the internet, funded by the Advanced Research Projects Agency, created by the Eisenhower administration and arguably the most successful publicly funded science project in American history.
To their credit, companies like Google, Amazon, Microsoft and Apple are spending considerable money on advanced research. Google has been willing to lose about $500 million a year on DeepMind, an artificial intelligence lab, and Microsoft has invested $1 billion in OpenAI, an independent laboratory. In these efforts they are part of a tradition of pathbreaking corporate laboratories like Bell Labs, Xerox’s Palo Alto Research Center and Cisco Systems in its prime.
But it would be a grave error to think that we can rest assured that Silicon Valley has it all taken care of. The history of computing research is a story not just of big corporate laboratories but also of collaboration and competition among civilian government, the military, academia and private players both big (IBM, AT&T) and small (Apple, Sun).
When it comes to research and development, each of these actors has advantages and limitations. Compared with government-funded research, corporate research, at its best, can offer a stimulating balance of theory and practice, yielding inventions like the transistor and the Unix operating system. But big companies can also be secretive, occasionally paranoid and sometimes just wrong, as with AT&T’s dismissal of internet technologies.
Big companies can also change their priorities. Cisco, once an industry leader, has spent more than $129 billion on stock buybacks over the past 17 years, while its chief Chinese competitor, Huawei, has developed the world’s leading 5G products.