Hello, World!
One of the first programs anyone writes in a new programming language is usually some variant of a program that prints a slightly optimistic and hopeful message to the computer’s console window: “Hello, World!” Setting up a new programming environment and starting a new software project can be tricky, so seeing your computer print that cheery message is a comforting first step. For example, here is the Python version of “Hello, World!” I use in my classes so that students can verify they have their environments set up correctly:
# This is a comment
print("Hello, World!")
No, having your computer print “Hello, World!” does not mean your computer is sentient, though some Google engineers might disagree.1
What you are reading right now is the first edition of the Pseudodragon Newsletter–a newsletter version of “Hello, World.” I’m similarly optimistic and hopeful that this newsletter will be full of bimonthly longform essays, think pieces, and curated news and information summaries related to public interest technology, ethical tech, responsible innovation, responsible tech, responsible AI, trust and safety, digital citizenship, and tech for good. That’s a lot of ground to cover, but it’s ground we as a society truly need to cover, because in discussing technology as it is and as it could be, we are really talking about ourselves. About us I’m likewise optimistic and hopeful, though not naive to the difficulties of human nature.
Hannah Fry–mathematician, science presenter, and all-round badass (her words2)–takes a similar tack in her book Hello World: Being Human in the Age of Algorithms, examining human-computer conflicts relating to power, data, justice, medicine, cars, crime, and art. We must, though we do so too seldom, take a critical approach to our technology:
It’s about asking if an algorithm is having a net benefit on society. About when you should trust a machine over your own judgement, and when you should resist the temptation to leave machines in control. It’s about breaking open the algorithms and finding their limits; and about looking hard at ourselves and finding our own. About separating the harm from the good and deciding what kind of world we want to live in. Because the future doesn’t just happen. We create it. (Fry 2018, 3)
In March 2018, Elaine Herzberg hurried her bike across a street one night in Tempe, Arizona. Seeing a car approaching, she probably felt she could make it across the street in time. But this wasn’t just any car–unfortunately for Elaine, it was an Uber “self-driving” car. There was a human in the driver’s seat, a human-in-the-decision-making-loop safety feature meant to let a person intervene when the machine learning and artificial intelligence “self-driving” technologies failed–at that time, at least once every 13 miles.3 But Elaine’s fate was sealed in part because the human “safety” driver was looking at a cell phone in his lap and not paying attention to the road. More importantly, a review by the National Transportation Safety Board found that Uber’s “self-driving” system detected Elaine but never classified her as a pedestrian because she was not crossing the road in a crosswalk. Uber had deactivated the 2017 Volvo XC90’s factory-installed forward collision warning and automatic emergency braking systems, and Uber’s ML/AI engineers had instructed the “self-driving” car to consider an obstacle in the roadway a human only if the person was in an official crosswalk.4 Thus, though the car was tracking Elaine crossing in front of the vehicle (but not in a crosswalk), it neither slowed down nor alerted the inattentive human “safety” driver.
When Fry asked Paul Newman5–no, not that one–professor of robotics at Oxford and co-founder of Oxbotica,6 a company that builds autonomous software systems for self-driving vehicles, how his software worked, he said, “It’s many, many millions of lines of code, but I could frame the entire thing as probabilistic inference. All of it” (Fry 2018, 127). Notice that he described his code as performing “probabilistic inference”–he did not use the phrases “machine learning” and “artificial intelligence” that we see plastered across every startup’s VC pitch deck. Since the early days of artificial intelligence, humans have longed to create real, actual in-silico intelligence, but we have been misled and misdirected into thinking that what we were doing with lines of software code, 1s and 0s, approached “intelligence” or “learning.”
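To make Newman’s framing concrete, here is a toy sketch (my own illustration, not Oxbotica’s code) of perception as probabilistic inference: a Bayesian update of the belief that an obstacle is ahead, given repeated noisy sensor readings. The sensor accuracy numbers are made-up assumptions; the point is only that the machinery is Bayes’ rule, not “intelligence.”

```python
def update_belief(prior, reading, p_true_positive=0.9, p_false_positive=0.2):
    """Return P(obstacle | reading) via Bayes' rule.

    prior            -- current belief that an obstacle is ahead
    reading          -- True if the (noisy) sensor reports an obstacle
    p_true_positive  -- P(sensor says obstacle | obstacle really there)
    p_false_positive -- P(sensor says obstacle | road is clear)
    """
    if reading:  # sensor says "obstacle"
        likelihood_obstacle = p_true_positive
        likelihood_clear = p_false_positive
    else:        # sensor says "clear"
        likelihood_obstacle = 1 - p_true_positive
        likelihood_clear = 1 - p_false_positive
    numerator = likelihood_obstacle * prior
    return numerator / (numerator + likelihood_clear * (1 - prior))

belief = 0.01  # start out nearly certain the road is clear
for reading in [True, True, True]:  # three consecutive "obstacle" readings
    belief = update_belief(belief, reading)
    print(f"P(obstacle) = {belief:.3f}")
```

Each noisy reading nudges the probability up or down; nothing here “understands” a pedestrian, which is exactly the gap between the marketing language and the mathematics.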
For example, in 1958 early AI researcher Frank Rosenblatt said he was developing a neural network computer–a 5-ton, room-sized assembly of relays, punch cards, and wires–that would be able to “walk, talk, see, write, reproduce itself and be conscious of its existence” (“New Navy Device Learns by Doing: Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser” 1958). And today the AI hustle continues–Elon Musk hyped the production of self-driving cars by the end of 2021.7 As of this writing, we have neither self-driving cars nor computers that can “walk, talk, see, write, reproduce itself and be conscious of its existence” (again, the opinions of certain Google engineers notwithstanding).
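For a sense of how far the 1958 reality was from the press release, the core of Rosenblatt’s machine–the perceptron learning rule–fits in a few lines. This is a minimal sketch (the dataset, learning rate, and epoch count are my own choices), here learning nothing grander than the logical AND function:

```python
def train_perceptron(data, epochs=10, lr=0.1):
    """Rosenblatt's perceptron rule: nudge weights toward each target."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in data:
            # Step activation: "fire" if the weighted sum exceeds zero
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Truth table for logical AND: output 1 only when both inputs are 1
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
for (x1, x2), target in and_data:
    print(x1, x2, "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

A weighted sum, a threshold, and an error-correction nudge–remarkable engineering for its day, but a long way from “conscious of its existence.”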
Emily Tucker, Executive Director of the Center on Privacy & Technology at Georgetown Law, says that using “intelligence” and “learning” to describe such pattern-matching and computational-statistics algorithms is misleading and dangerous. In fact, in a recent article she announced that the Privacy Center will no longer use the terms “artificial intelligence,” “AI,” and “machine learning” in its work. One reason:
That we are ignorant about, and deferential to, the technologies that increasingly comprise our whole social and political interface is not an accident. The AI demon of speculative fiction is a super intelligence that threatens to dominate by stripping human beings of any agency. The threat of lost agency is real, but not because computers are yet capable of anything similar to, let alone superior to, human intelligence. The threat is real because the satisfaction of corporate greed, and the perfection of political control, requires people to lay aside the aspiration to know what their own minds can do.8
We have so much work ahead of us to wrest the direction of our technological future toward one that is more humanistic, ethical, and responsible. That is a goal of the Pseudodragon Newsletter, so thank you for being a part of the journey. But fair warning: you may want to buckle up–I expect it’s going to be quite a ride. As Scotty often replied to Captain Kirk, “I’m giving her all she’s got, Captain! She cannae take any more.” With your help, we can.
Hello World, indeed.
Yours,
Kendall
You’re on the free list for The Pseudodragon Newsletter. For the full experience, including access to The TechnoSlipstream Podcast transcripts, podcast episode early access, and other writings available only to supporters, join the community on our Patreon page: patreon.com/kendallgiles.
Share
Why not forward this newsletter to a friend? Thanks!
Feedback?
If you are a subscriber just reply to this email.
About
Just joining us? Or maybe you’ve forgotten why you signed up? I’m Kendall Giles, a writer, researcher, and drinker of much coffee. Currently I work at Virginia Tech in the Department of Electrical and Computer Engineering in the College of Engineering in Falls Church, Virginia. I also teach in the Master of Information Technology Program, teach in the ECE Master of Engineering Program, and am a PhD student in the Department of Science, Technology, and Society in the College of Liberal Arts and Human Sciences. I research, write, and speak at the intersection of science, technology, and society, including the TechnoSlipstream podcast and the Pseudodragon Newsletter.
Contact
Bibliography
Fry, Hannah. 2018. Hello World: Being Human in the Age of Algorithms. WW Norton & Company.
“New Navy Device Learns by Doing: Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” 1958. New York Times (1923-Current File), 25.
Elon Musk claimed to investors in January 2021 that he is “highly confident the car will be able to drive itself with reliability in excess of human this year,” https://www.theverge.com/2021/5/7/22424592/tesla-elon-musk-autopilot-dmv-fsd-exaggeration