The problem, however, is that as technological tools have become an increasingly indispensable part of our lives, we sometimes forget that they are just that: tools. Nobody is shocked to discover that a hammer isn’t very good at sawing wood, yet when it comes to the more complex technology in our smartphones and PCs, we often get angry when it can’t do things it was never built to do.
Then we yell at the offending device as if it were an incompetent or outright malicious person out to make our lives miserable.
That’s nothing new. Humans have been practicing anthropomorphism since the dawn of time. And you better believe that this proclivity really kicks into overdrive with a technology like Siri, which, unlike that hammer, is actually designed to fool us into thinking it’s intelligent.
It’s not. Siri is only as smart as its programming, its inputs, and the resources it can access to provide answers. It’s not pro-life or pro-choice or much of an instrument of human agency at all, beyond the various manipulations of search engine optimization experts.
These objections apply equally to any Assisted Decision-Making tool.