Why you are here (hopefully!): you are interested in bimonthly longform essays, think pieces, and curated news and information summaries related to public interest technologies, ethical tech, responsible innovation, responsible tech, responsible AI, trust and safety, digital citizenship, and tech for good.
Note: If someone forwarded this newsletter to you, please sign up here:
One myth passed down from each generation of engineers, marketers, and venture capitalists to the next is that “technology is neutral.” When confronted with the wreckage and social upheaval caused by some hastily prototyped and packaged new product or service, such as companies giving surveillance footage to police without a warrant or consent,1 those designing or selling it often simply shrug and echo Paul Daugherty, Chief Technology and Innovation Officer at Accenture: “Is technology, and Artificial Intelligence, good or bad? The answer is NEITHER. Technology is neutral, AI is neutral. The way ‘we’, as humans, apply and use the technology is what defines if the impact is good or bad” (Hare 2022, 30). If that notion were true, it would be good news indeed for the engineers designing and the companies selling surveillance technologies, social media platforms, user data harvesters, and cryptocurrencies–their consciences would be clean, because if the technology is neutral, then the blame, guilt, and responsibility for misuse can only fall on the user.
That would be a cleaner, tidier, and ethically pure world to live in, wouldn’t it? Unfortunately, the real world is not so simple, and the ethics of those products is far from pure.
Consider especially AI and automation technologies–devices that can move independently in the world, sense their surroundings, make decisions based on the data they collect, and then act on those decisions, increasingly without human intervention. The decisions those systems make, and whom they make those decisions about or against, are programmed by us humans. We give those systems our own desires and intentions when we choose the datasets and write the code.
As Marshall McLuhan famously said, “The medium is the message” (McLuhan 1994). The internet, for example, is a technology that connects us to other people and devices. It entertains us, lets us work and learn, and is even critical to the proper operation of devices in our homes and businesses. But the internet is not just a neutral tool–it changes our behavior and shapes how we experience things. It changes our culture; it can connect us with each other, and it can also divide us against one another. It can affect who votes or who has heat or air conditioning. It can surveil us, and it can collect and hand our private, personal information to others.
So technologies shape how we interact with the world and shape our experiences in it–they aren’t just simple, neutral tools or objects. Philosophers of technology such as Peter-Paul Verbeek and Don Ihde would say that technologies mediate our relationship with the world. There is no dimension of human life today where technological mediation doesn’t play a role, from how we learn about and understand the world around us, to how we interact with each other, to our perceptions of right and wrong. And the implications will only grow as AI and automation technologies become more pervasive. Unfortunately, we are increasingly giving these powerful, complex systems agency to act in the world on our behalf, yet we are still writing technology policy and designing these systems using the old way of thinking about technology–that they are neutral tools, as advocated by the so-called Value Neutrality Thesis (Pitt 2014). You’ve heard that one before: guns don’t kill people, people kill people. Proponents of such views haven’t realized that when engineers design and build these powerful, complex AI and automation systems, they build their own faults and biases into the systems. When an engineer draws a box to mark the boundary between what is inside the system and what is outside it, they don’t realize that they themselves are actually inside the box, not outside it as is traditionally taught.
A well-known example of the flaw in this kind of thinking is the AI system Amazon developed to help increase the diversity of its new hires.2 As many of you know, especially in technology companies, the number of women hired has historically been very low. So Amazon set out to build an AI system to help eliminate any possible gender bias in its hiring process. But after the company built a system to screen resumes and recommend candidates, the result was a hiring process that discriminated even more against women. The reason? Amazon trained its AI system to recognize good candidates based on the qualities of past applicants the company had hired. But since the company had historically rejected female candidates in favor of male ones, the AI system learned that behavior–the company actually taught its AI system to identify and reject resumes from women.
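The mechanism at work here is easy to see in miniature. Below is a deliberately toy sketch (the data, words, and scoring method are all invented for illustration; the real Amazon system was of course far more complex) showing how a model that merely learns word-level hiring rates from biased historical decisions will penalize any resume containing a gender-correlated token:

```python
from collections import Counter

# Hypothetical past hiring decisions: each resume is a bag of words,
# label 1 = hired, 0 = rejected. The history skews against the token
# "women's" (e.g. "women's chess club"), echoing the reported case.
history = [
    (["engineer", "java", "captain"], 1),
    (["engineer", "python"], 1),
    (["engineer", "java"], 1),
    (["engineer", "women's", "chess"], 0),
    (["engineer", "women's", "python"], 0),
]

# "Training": count how often each word co-occurs with a hire.
hired, seen = Counter(), Counter()
for words, label in history:
    for w in words:
        seen[w] += 1
        hired[w] += label

def score(resume):
    """Average historical hire rate of the resume's known words."""
    rates = [hired[w] / seen[w] for w in resume if w in seen]
    return sum(rates) / len(rates) if rates else 0.0

# Two otherwise identical candidates; one resume mentions "women's".
print(score(["engineer", "python"]))             # scores higher
print(score(["engineer", "python", "women's"]))  # penalized by history
```

The model never sees a “gender” field at all; it simply reproduces the pattern baked into its training labels, which is exactly why good intentions at design time are not enough.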
Another example: consider AI-based systems used to predict where future crimes are likely to occur, or to suggest sentences for defendants.3 Let’s assume these systems are meant to reduce bias–good intentions all around–but, as in the Amazon example, consider how they are designed and built. If you’re at all familiar with the systemic historical biases against low-income and minority populations in our society, you can see that the results from these “smart” risk assessment systems trained on past data can be deadly. Even worse? Because these AI systems deliver assessments and decisions based on “data,” the judges who use them to impose sentences have cover for their decisions.
The realization that technology is not neutral then brings us to the themes pursued by this newsletter–public interest technologies, ethical tech, responsible innovation, responsible tech, responsible AI, trust and safety, digital citizenship, and tech for good. What then is the implication of realizing technology is not neutral? As related by Stephanie Hare in her book, “‘I could probably write a very good program for choosing people to be killed for some reason, selecting people from a population by a particular criterion,’ Karen Spärck Jones, a computer scientist and professor at Cambridge University, told the British Computing Society in 2007. ‘But you might argue that a true professional would say, I don’t think I should be writing programs about this at all’” (Hare 2022, 15).
As engineers, as technical leaders, as entrepreneurs–we do have agency in the products we choose to design, implement, and sell. As consumers, we have agency to hold accountable both technology and those who brought it into our lives, including ourselves. As educators, we have agency in how we teach technology to our students.
Technology is not neutral, and thankfully that understanding is slowly getting out. Universities such as Cornell, Harvard, MIT, Stanford, University of Texas at Austin, Indiana University, the University of Wisconsin-Madison, Yale, Carnegie Mellon, George Washington, NYU, the University of North Carolina at Chapel Hill, the University of Washington, and others are introducing curricula to bring technology ethics and responsible tech to the attention of their students and future technical leaders. I would love it if one day we could add Virginia Tech to this list, but unfortunately I’m not aware that we can just yet. Thus, my work here is not yet done!
Finally, I’ll leave you with a quote from Apple CEO Tim Cook’s 2019 commencement address at Stanford:4
We see it every day now, with every data breach, every privacy violation, every blind eye turned to hate speech. Fake news poisoning our national conversation. The false promise of miracles in exchange for a single drop of your blood. Too many seem to think that good intentions excuse away harmful outcomes.
But whether you like it or not, what you build and what you create define who you are.
It feels a bit crazy that anyone should have to say this. But if you’ve built a chaos factory, you can’t dodge responsibility for the chaos. Taking responsibility means having the courage to think things through.
Love that: “If you’ve built a chaos factory, you can’t dodge responsibility for the chaos.”
Yours,
Kendall
You’re on the free list for The Pseudodragon Newsletter. For the full experience, including access to The TechnoSlipstream Podcast transcripts, podcast episode early access, and other writings available only to supporters, join the community on our Patreon page:
Share
Why not forward this newsletter to a friend? Thanks!
Feedback?
If you are a subscriber just reply to this email.
About
Just joining us? Or maybe you’ve forgotten why you signed up? I’m Kendall Giles, a writer, researcher, and drinker of much coffee. Currently I work at Virginia Tech in the Department of Electrical and Computer Engineering, College of Engineering, in Falls Church, Virginia. I also teach in the Master of Information Technology Program and the ECE Master of Engineering Program, and am a PhD student in the Department of Science, Technology, and Society in the College of Liberal Arts and Human Sciences. I research, write, and speak at the intersection of science, technology, and society, including through the TechnoSlipstream podcast and the Pseudodragon Newsletter.
Contact
Bibliography
Hare, Stephanie. 2022. Technology Is Not Neutral: A Short Guide to Technology Ethics. London Publishing Partnership.
McLuhan, Marshall. 1994. Understanding Media: The Extensions of Man. MIT Press.
Pitt, Joseph C. 2014. “‘Guns Don’t Kill, People Kill’; Values In and/or Around Technologies.” In The Moral Status of Technical Artefacts, 89–101. Springer.