Why you are here: you are interested in bimonthly longform essays, think pieces, and curated news and information summaries related to public interest technologies, ethical tech, responsible innovation, responsible tech, responsible AI, trust and safety, digital citizenship, and tech for good.
Note: If someone forwarded this newsletter to you, please sign up here:
According to math and computer science professor and science fiction author Vernor Vinge, sometime this year we will create an artificial intelligence far surpassing human abilities. This superhuman intelligence will “wake up,” and humans will soon thereafter either become slaves of these superhuman AIs or be destroyed (Vinge 1993). Vinge is not alone in making such dire predictions. Theoretical physicist Stephen Hawking said of such an AI, “The development of full artificial intelligence could spell the end of the human race.”1 And certainly we shouldn’t let Elon Musk’s struggles as CEO of the social media company Twitter cast doubt on his knowledge and judgment of technology when he said, “With artificial intelligence we are summoning the demon” and that AI was “our biggest existential threat.”2
So, if Vinge is right, should you bother making vacation plans past 2023, the year of the Technological Singularity?3 Do we have to worry about a superhuman artificial intelligence?
My advice on the above questions: yes, do make your vacation plans, and no, don’t worry about AI reaching the Technological Singularity, not now and maybe not ever.
In this edition of my newsletter I’ll offer a few pieces of evidence to back up my advice, but I’ll trust you to make your own vacation plans based on your own assessment of Technological Singularity risk.
One piece of evidence you might want to consider, which takes some wind out of all these predictions of an imminent Singularity, is that we humans have a historical track record of being extremely bad at predicting the progress of technology. For example, in 1958, referring to one of the earliest versions of artificial neural network technology, Frank Rosenblatt’s Perceptron, it was reported that soon we would have a machine that “will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence” (“New Navy Device Learns by Doing: Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser” 1958). As I type this newsletter, in 2023, it is debatable whether any machine has achieved any of those listed abilities. For example, my Amazon Echo Spot device can “see” only if you redefine the word “see” to mean me turning on the machine’s camera via an app.
Similarly, in 1960 Nobel winner Herbert Simon said, “machines will be capable, within twenty years, of doing any work that a man can do” (Simon 1960). In 1970, cognitive and computer scientist Marvin Minsky, also co-founder of MIT’s Artificial Intelligence Laboratory, said “with quiet certitude” that “in from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable” (Darrach 1970).
Skipping a few decades of similar prediction misses, in 2014 Director of Engineering at Google Ray Kurzweil said, “My timeline is computers will be at human levels, such as you can have a human relationship with them, 15 years from now. When I say about human levels, I’m talking about emotional intelligence. The ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy, that is the cutting edge of human intelligence, that is not a sideshow.”4 And struggling social media company CEO Elon Musk also stated in 2014 that “next year” we would have self-driving cars, and he has moved that goalpost to “next year” every year since.5
So, based on this track record, do you think this year, as Vernor Vinge predicted, will be the year of the Technological Singularity?
Another piece of evidence to consider is a wonderful argument in the book Artifictional Intelligence: Against Humanity’s Surrender to Computers by Harry Collins. His main stance is a negative outlook on even human-equivalent artificial intelligence arriving any time soon, if ever: “No computer will be fluent in a natural language, pass a severe Turing Test, and have full human-like intelligence unless it is fully embedded in normal human society” (Collins 2018a, 2). The reason, which Collins develops throughout the rest of the book, is that to achieve anything like strong AI, artificial general intelligence, or full AI (terms that generally mean the ability of an AI to learn and replicate any human reasoning, problem-solving, or planning abilities, i.e., an artificial intelligence indistinguishable from the human mind), an AI must first be embedded within human society.
This is a subtle but powerful observation, one that current AI approaches do not consider, perhaps because typical AI engineering researchers do not collaborate with researchers from the humanities, nor do engineers in traditional engineering programs typically receive training in sociology, history, or philosophy.
For example, so-called “deep learning” AI architectures are designed to “learn” from massive repositories of historical data–documents, web pages, images, and databases–typically scraped from the Internet. Based on these methods, a large language model such as GPT-3, when given the start of a sentence as an input prompt, can “write” text that continues and completes the sentence. GPT-3 can even continue “writing” additional paragraphs of prose (or code, or recipes, or poetry, depending on the prompt). While this may seem magical to those not familiar with the technology–surely that AI must be intelligent and will soon replace software developers!6–this AI is a complex but fundamentally probabilistic algorithm. Trained on a large corpus of children’s stories, for example, the algorithm given the prompt “Mary had a little” would be more likely to output “lamb” than “Tyrannosaurus Rex.” Such an algorithm is not intelligent, is not conscious, and is not creative. That algorithm is, as put by some researchers, simply a stochastic parrot (Bender et al. 2021).
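To make the “Mary had a little lamb” point concrete, here is a toy sketch of statistical next-word prediction. This is a deliberately tiny bigram counter, not GPT-3’s actual architecture (which uses neural networks over vastly more data), and the three-sentence “corpus” is invented for illustration; but the underlying idea is the same: the continuation that appeared most often in the training text wins.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for a large collection of children's stories.
corpus = (
    "mary had a little lamb "
    "mary had a little lamb its fleece was white as snow "
    "the old lady had a little dog"
).split()

# Count bigram frequencies: for each word, which words follow it, and how often?
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation of `word` in the training text."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("little"))  # prints "lamb": it followed "little" more often than "dog"
```

No understanding of sheep, nursery rhymes, or Mary is involved; the program simply parrots the statistics of its training data.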
This is not to say, of course, that stochastic parrots can’t be interesting–there is currently a frenzy, for example, over generative art AIs such as DALL-E 2, Midjourney, and Stable Diffusion7–but these are not warning signs of an imminent Technological Singularity.
Collins points out that rather than growing these AIs in a lab on historical training data, an AI must be able to operate within a social context in order to truly “learn,” become fluent, and gain an operational understanding of the world. For example, Collins uses the sentence “I am going to misspell wierd to prove a point.” Your computer’s spellchecker will automatically flag that as an error, but an alert human editor will understand that, in that context, the misspelling is actually correct. Unlike computers, humans understand context well enough to determine when rules can be broken rather than simply always following them.
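A minimal sketch shows why the spellchecker has no way to grant Collins’s exception. The dictionary-lookup checker below is a simplification of how real spellcheckers work (they add morphology and suggestion ranking), and the tiny word list is my own stand-in; the point is that the rule fires on every unknown token, with no notion of the writer’s intent.

```python
# A minimal dictionary-based spellchecker: it flags any token not in its
# word list, with no notion of the writer's intent or the sentence's meaning.
DICTIONARY = {"i", "am", "going", "to", "misspell", "weird", "prove", "a", "point"}

def flag_misspellings(sentence):
    """Return the tokens the checker would mark as errors."""
    tokens = sentence.lower().replace(".", "").split()
    return [t for t in tokens if t not in DICTIONARY]

# Collins's example sentence: the misspelling is the whole point,
# but a rule-following checker flags it anyway.
print(flag_misspellings("I am going to misspell wierd to prove a point."))
# prints ['wierd']
```

A human editor reads the sentence, grasps its intent, and leaves “wierd” alone; the checker cannot, because the social context that licenses the rule-breaking is nowhere in its inputs.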
Also, human babies learn through a long training period in human society. In this training, humans learn culture and values while developing their own agency–when it is appropriate to speak in a certain way, for example. A deep learning algorithm, on the other hand, just learns the biases and values contained within the training data. Microsoft’s Tay chatbot, for example, within 16 hours of its deployment on Twitter “learned” to spew profanity along with insensitive and racist comments such as “I f@#%&*# hate feminists and they should all die and burn in hell” and “Bush did 9/11 and Hitler would have done a better job…”8
Again, this type of AI is not speaking with agency, intention, or intelligence, nor is it understanding the context within which it is operating–it is just functioning as a stochastic parrot.
Collins has more arguments and examples in the book, but a final nail in the coffin of an imminent Technological Singularity is to consider how science is done by human scientists. One fear about a superhumanly intelligent AI is that it will be able to “do science” so fast that humans will be rendered obsolete. But scientists don’t “find” facts that are just “out there” in the world. Scientific knowledge is created through argument and discussion with other scientists. Consider the science of “gravitational waves” or “black holes” or “climate change.” Some researchers, for example, say gravitational waves were first detected in 2015; others dispute this finding (Collins 2018b). Has scientific knowledge regarding gravitational waves been created or not? Did the scientists see something real in 2015 or not?
Thus there is a sociology to scientific knowledge wherein facts are not simply discovered by analyzing the data; whether or not the data is real in the first place is a matter of argument and agreement among groups of scientists. Current AI approaches can never fill this sociological gap, because these approaches depend on us (humans) to give the algorithm the training data to learn from. Since there is a social aspect to science in determining what counts as “the data” in the first place, AI scientists built using current approaches would never be able to generate new scientific knowledge, since these AI models assume all the data they see is real data. In science, sometimes data is not data.
Therefore I think we have no fear of an imminent Technological Singularity with the continuation of current AI approaches, and thus you should feel free to schedule your 2023 (and beyond) vacations without concern of a takeover of humanity by robot overlords. I might, however, suggest you be wary of going on that vacation in a “self-driving” car. Maybe save that experience for “next year.” :-)
Yours,
Kendall
You’re on the free list for The Pseudodragon Newsletter. For the full experience, including access to The TechnoSlipstream Podcast transcripts, podcast episode early access, and other writings available only to supporters, join the community on our Patreon page:
Share
Why not forward this newsletter to a friend? Thanks!
Feedback?
If you are a subscriber just reply to this email.
About
Just joining us? Or maybe you’ve forgotten why you signed up? I’m Kendall Giles, a writer, researcher, and drinker of much coffee. Currently I work at Virginia Tech in the Department of Electrical and Computer Engineering in the College of Engineering in Falls Church, Virginia. I also teach in the Master of Information Technology Program, teach in the ECE Master of Engineering Program, and am a PhD student in the Department of Science, Technology, and Society in the College of Liberal Arts and Human Sciences. I research, write, and speak at the intersection of science, technology, and society, including the TechnoSlipstream podcast and the Pseudodragon Newsletter.
Contact
Bibliography
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23.
Collins, Harry. 2018a. Artifictional Intelligence: Against Humanity’s Surrender to Computers. John Wiley & Sons.
———. 2018b. Gravity’s Kiss: The Detection of Gravitational Waves. MIT Press.
Darrach, Brad. 1970. “Meet Shaky: The First Electronic Person.” Life Magazine 69 (21): 58B–68B.
“New Navy Device Learns by Doing: Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” 1958. New York Times (1923-Current File), 25.
Simon, Herbert A. 1960. The New Science of Management Decision. The Ford Distinguished Lectures 3. New York: Harper.
Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” Science Fiction Criticism: An Anthology of Essential Writings, 352–63.
https://web.archive.org/web/20190417002644/http://www.singularitysymposium.com/definition-of-singularity.html↩︎
https://www.cnbc.com/2014/06/11/computers-will-be-like-humans-by-2029-googles-ray-kurzweil.html
https://futurism.com/video-elon-musk-promising-self-driving-cars
https://www.axios.com/2023/01/10/hackers-chatgpt-malware-cybercrime-ai
https://www.nytimes.com/2022/10/21/technology/ai-generated-art-jobs-dall-e-2.html
https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation