Parler, which describes itself as an unbiased, free-speech alternative to larger platforms like Facebook and Twitter, was kicked off Apple’s app store and the Google Play store for a time following the Jan. 6 attack on the US Capitol.
Amazon also booted the company from its cloud-computing services, effectively shutting down Parler for a time.
These companies claimed that Parler was violating their terms of service by failing to monitor content that appeared on the site, some of which was hate-filled or incited violence and was viewed as potentially contributing to the Capitol insurrection.
To make amends with Apple, Parler executives contracted with a content-moderation company that employs artificial intelligence as the first line in community policing. Posts that the algorithm identifies as “incitement,” or that are deemed to threaten violence or constitute hate speech, will not appear for many users.
Despite the resolution, recent reports indicate that the platform’s popularity was damaged by the controversy.
What happened with Parler underscores how tricky it is for businesses to balance free speech with responsible speech. User bans seem to be happening with greater frequency as social media platforms and digital services attempt to avoid promoting “bad” speech.
Therein lies the rub: what is “bad”? What kind of speech should be restricted? Offensive speech? Hate speech? Irresponsible speech? Speech that incites violence?
While it is not an absolute, it should generally hold true that the best cure for bad speech is more speech.
Still, communication companies must take steps to monitor their content; platforms have an obligation to be aware of what they host. The step beyond awareness is more troublesome, as companies choose what content to host and what (and whom) to ban. Great caution is in order. While private companies have the legal right to control access to their communication channels, special regard must be given to free speech and the free exchange of ideas in the marketplace.
Moderation of content that users post on social media sites is in order. However, care must be taken to avoid meddling too deeply in speech regulation.
This is not merely a matter of legalities. Businesses are private entities and can decide for themselves what content to allow. However, given their importance to and influence on modern public discourse, any limit they place on speech must be carefully scrutinized and weighed.
Social media platforms may not be required to uphold the First Amendment (that is an obligation of government), but their collective power makes any restriction they levy highly consequential. They should, therefore, err on the side of openness and uphold this country’s free speech principles as much as possible, regardless of legal requirement or the lack thereof.
There was an outcry, largely from the political right, when Parler was first expelled from the app stores. Many saw it as hypocritically partisan. There is some truth to that contention: Death threats have appeared on other sites, including Twitter, without adverse consequences for Twitter; protesters and rioters involved in the Jan. 6 attack openly posted on Facebook about weapons and violence ahead of that day, and yet Facebook was not banished from app stores.
Not to be overlooked, however, is the substantive difference in the content-moderation policies of these companies. Facebook and Twitter employ many content moderators in an effort to stay vigilant about language they deem to cross their lines of acceptability.
Parler essentially had been relying on users to police one another. An enterprise such as Parler should have people on its team whose job is to know exactly what speech is being exchanged on its platform. Ignorance of its own content is not acceptable.
Parler and companies like it have a duty to be aware of the content they host. At the same time, they have a civic duty to place the highest value possible on free speech, a hallmark of American values. It is a tricky business.