Tim Wu disagrees with Eugene Volokh:
Protecting a computer’s “speech” is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship. The First Amendment has wandered far from its purposes when it is recruited to protect commercial automatons from regulatory scrutiny. . . . The line can be easily drawn: as a general rule, nonhuman or automated choices should not be granted the full protection of the First Amendment, and often should not be considered “speech” at all. (Where a human does make a specific choice about specific content, the question is different.)
This debate has some relevance to the assisted decision-making ideas I am working on, which rely, in part, on constitutional protection against challenges from occupational licensing. This may be another interesting front in that pursuit.
Update: Eugene Volokh responds:
Prof. Wu’s main other objection is that protecting people’s right to speak using partly computerized algorithms “is a bad idea that threatens the government’s ability to oversee companies and protect consumers.” But the First Amendment itself embodies an idea that often threatens the government’s ability “to oversee” what information is communicated, even when the government is purporting to prevent supposed unfairness. That’s not a bug, as computer programmers say — it’s a feature.