Why you are here: you are interested in bimonthly longform essays, think pieces, and curated news and information summaries related to public interest technologies, ethical tech, responsible innovation, responsible tech, responsible AI, trust and safety, digital citizenship, and tech for good.
Note: If someone forwarded this newsletter to you, please sign up here:
Complex AI (and ML) technologies are increasingly being integrated into critical social processes. While AI innovation brings perceived benefits, there is also cause for concern, as these systems often have unintended and unfortunate consequences relating to security, safety, bias, privacy, interpretability, and fairness. In one example, a chatbot designed to reduce doctors’ workloads suggested that a patient should commit suicide.[1] In a system designed to provide sentencing recommendations to judges, black defendants were falsely classified as high risk of committing future crimes at roughly twice the rate of white defendants, while white defendants were falsely classified as low risk more often than black defendants.[2] And a self-driving Uber car was designed to recognize pedestrians as objects to be avoided only in crosswalks, resulting in the death of a woman who was jaywalking.[3]
What’s missing in the development of AI systems and machines that is causing so many increasingly impactful failures? According to Reid Blackman, author of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, the missing element is risk mitigation. Current AI software and system development methodologies do not sufficiently assess and mitigate the risks of the systems being developed, if they assess them at all. While the examples above suggest obvious risks a minimally ethical company would want to mitigate (e.g., the death of its customers), others include reputational, regulatory, and legal risks.
Though we can get ahead of ourselves and wistfully dream about Responsible AI, AI for Good, and Ethical AI, Blackman sets the bar at the very least at committing to creating AI for Not Bad. And this means doing the due diligence to vet AI systems for potential vulnerabilities, especially bias, privacy, and explainability. Says Blackman, “while the ethical risks of AI are not novel—discrimination, invasions of privacy, manslaughter, and so on have been around since time immemorial—AI creates novel paths to realize those risks. This means we need novel ways to block those paths from being traveled” (Blackman 2022, 10).
I’ve talked about the AI system vulnerabilities of bias, privacy, and explainability on the TechnoSlipstream Podcast, but note how I am referring to these issues: as vulnerabilities. In cybersecurity, a vulnerability is a weakness in a computer system, network, or piece of software that can be exploited by an attacker. Borrowing that terminology, one way of thinking about the problems of AI systems from a design and development standpoint is to treat these issues as vulnerabilities, weaknesses in how the systems are designed and implemented. Except here, those weaknesses are exploited not by attackers but, perhaps unintentionally, by the designers, engineers, managers, and other AI system stakeholders themselves. There are known problems of security, safety, bias, privacy, interpretability, and fairness in AI systems, and when engineers and corporations skip due diligence during development, they leave those known vulnerabilities open to exploitation.
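To make “due diligence” concrete, here is a minimal sketch of the kind of bias audit the sentencing example above calls for: comparing false positive rates across demographic groups. This is plain Python with made-up data; the function and the numbers are hypothetical illustrations of the idea, not Blackman’s method or any particular vendor’s tool.

```python
# A minimal, illustrative bias audit: compare false positive rates
# across demographic groups, as in the sentencing example above.
# All names and data here are hypothetical.

from collections import defaultdict

def false_positive_rates(y_true, y_pred, groups):
    """Per-group false positive rate: P(pred = 1 | truth = 0, group)."""
    fp = defaultdict(int)   # count of false positives per group
    neg = defaultdict(int)  # count of actual negatives per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 0:          # actual negative (e.g., did not reoffend)
            neg[group] += 1
            if pred == 1:       # but flagged high risk anyway
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Hypothetical audit data: 1 = flagged "high risk", 0 = "low risk".
y_true = [0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b", "a", "b"]

rates = false_positive_rates(y_true, y_pred, groups)
print({g: round(r, 2) for g, r in rates.items()})
# {'a': 0.33, 'b': 0.67} -- group b flagged at twice the rate; worth investigating
```

A real audit would of course use held-out production data and more than one fairness metric, but even a toy check like this shows what vetting a system for bias vulnerabilities can look like in practice.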
Blackman suggests that companies willing to at least develop AI for Not Bad should create an ethics statement and a set of ethical standards, as well as an organizational structure to ensure those standards are being met in practice. To return to my cybersecurity analogy, corporations that take cybersecurity seriously have an organizational structure for cybersecurity training, enforcement, and incident response. The beliefs and guidelines for how that structure operates are set by the corporation’s security policy: the technical and behavioral guidelines the organization has established as important and meaningful regarding cybersecurity issues. Similarly, for a minimally ethical organization, the AI ethics structure includes a team whose roles and responsibilities are focused on AI and ethics oversight. But how does that team know what ethical boundaries and values to train for, measure, and enforce? That is the role of the corporation’s ethics statement and standards, the values the company decides are meaningful: the company’s ethical North Star, so to speak.
As with cybersecurity, the goal of the organization’s AI ethics structure and content is to develop an organizational culture and processes that allow AI systems and services to be sufficiently assessed from a risk standpoint: to minimize physical harm, mental harm, social injustice, unfairness, unintended consequences, or whatever risks the company has decided are important to mitigate. One important point Blackman stresses is that setting up this ethical infrastructure involves hiring people with the appropriate expertise. Says Blackman, “It would be unwise and unfair to charge data scientists, engineers, and product developers or owners with the primary responsibility of identifying and mitigating ethical risks of products. Expert oversight is needed, most obviously in the form of an AI ethics committee” (Blackman 2022, 160). As someone who teaches engineers and is familiar with a number of engineering programs, I can say that instruction in ethics and risk management is sadly just not a priority, or even a consideration, in most engineering programs. Given the growth of AI, and the growing list of AI failures, I think engineering programs should start incorporating AI ethics into their curricula. Until then, I do what I can, for example in the graduate-level AI course I teach at Virginia Tech. But the point is that companies can’t rely on their engineers and developers to handle AI ethics issues; they just don’t have the expertise. Blackman states it firmly: “There is such a thing as ethical expertise. Those with that expertise are called ‘ethicists.’ Involve them” (Blackman 2022, 182).
The larger point of this discussion is that we do have a choice in the types of products we build and release into society. Considering the scale, complexity, and impact of AI technologies, I think we should start taking AI risk management more seriously, and Blackman’s book is a good starting point. Why not be more intentional about building a future we want to live in?
Yours,
Kendall
You’re on the free list for The Pseudodragon Newsletter. For the full experience, including access to The TechnoSlipstream Podcast transcripts, podcast episode early access, and other writings available only to supporters, join the community on our Patreon page:
Share
Why not forward this newsletter to a friend? Thanks!
Feedback?
If you are a subscriber just reply to this email.
About
Just joining us? Or maybe you’ve forgotten why you signed up? I’m Kendall Giles, a writer, researcher, and drinker of much coffee. I work in the Department of Electrical and Computer Engineering in the College of Engineering at Virginia Tech in Falls Church, Virginia. I also teach in the Master of Information Technology Program and the ECE Master of Engineering Program, and I am a PhD student in the Department of Science, Technology, and Society in the College of Liberal Arts and Human Sciences. I research, write, and speak at the intersection of science, technology, and society, including through the TechnoSlipstream podcast and the Pseudodragon Newsletter.
Contact
Bibliography
Blackman, Reid. 2022. Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI. First ebook edition. Boston, Massachusetts: Harvard Business Review Press.
[1] https://syncedreview.com/2021/01/01/2020-in-review-10-ai-failures/
[2] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[3] https://www.nbcnews.com/tech/tech-news/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281