I’ll bite.
To start with, enforcement of TOS is one potential avenue to explore, but that doesn’t necessarily rule out others. An attorney familiar with federal law would be better qualified to determine what would and wouldn’t fly in a federal court as far as regulation goes; I’m just throwing out ideas at this point. But I reject the idea that we’re left to do nothing in the face of powerful misinformation which, left unchecked, will destroy our democracy. If it means going as far as breaking up big tech companies in order to take away their monopolistic powers, I’d rather not go that route, but if that’s the only option we have, then so be it. Data is the new oil, so to speak; if we could break up Standard Oil, we can break up Facebook. I would of course love nothing more than to have big tech do some soul searching itself, even if it means igniting its own civil war – Tim Cook’s recent shots at FB are a welcome development to that end.
Responding to K9’s questions:
Do you mean regulating what the TOS can say, how it is enforced, or both?
I think that what the TOS actually say should be proposed by the company and agreed to by the consumer. What I would propose is having a regulatory agency like the FTC come in and say, “Okay, Facebook, these are the terms you required your users to abide by; you, too, are bound by those terms and must make a good faith effort to enforce them consistently.” In that scenario, FB would be free to take hate speech out of its TOS, but then it would be up to them to deal with the PR fallout. As it stands, FB, Twitter, and other big platforms enforce their terms highly inconsistently, typically policing speech only when their brand has been embarrassed in some way.
Should I be able to make my own messageboard that forbids the use of the word, “it”?
This question would be better if the example were more realistic.
Should someone be able to sue a social media company if they do not feel as though they were consistent in their moderation? Should there be a govt body that investigates complaints of uneven sanctions?
If someone could prove that they were moderated arbitrarily, I don’t know about a lawsuit per se, but perhaps they could submit a complaint to a regulatory body (be it the FTC, the FCC, or a new agency that deals with social media). I think what would happen over time is that the lawyers and legal departments representing these companies would issue moderation guidance. One preemptive remedy for companies might be to have garden-variety disputes over unfair moderation resolved through a process that begins internally and, if it escalates, ends with arbitration rather than litigation – clear regulatory infractions notwithstanding.
No doubt, this would change how social media operate, and the changes would be noticeable. In some ways, they would feel restrictive, but I personally think the internet needs to be cleaned up for our own good, in the sense that power must be shifted back toward the public interest. At the same time, I’d like to see net neutrality tilt some of that power back away from internet providers.
People look at regulation as though it’s going to be used immediately to censor and stifle communication on the internet. I see it differently. I think it will reduce the incentive for destructive communication on the internet – communication that everybody knows is harmful, but that the companies in a position to stop it refuse to, because they don’t want anything interfering with the way they operate and because they don’t want to risk alienating users who lurk in dark corners of the web.