What are the limits of the First Amendment/freedom of speech in terms of search engine content? I’m specifically thinking of medical/pharmacological disinformation. How vigilant do companies like Google have to be to weed out disinformation? Does it depend on the country they are doing business with? Does the EU have stricter guidelines in terms of search engine content?
Google doesn’t search for “truth”, it searches for websites. The sites linked to from more places and clicked on more often are at the top of the results, along with paid advertisers. It’s up to the reader to evaluate the value and truthfulness of the site. It’s the same in Europe.
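To make that concrete, here is a toy sketch of the classic PageRank idea that popularity-based ranking grew out of. This is illustrative only: the link graph and numbers are invented, and Google’s real pipeline uses many more signals than inbound links.

```python
# Toy PageRank: rank pages by who links to them, not by whether
# their content is true. Graph and damping factor are invented.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

damping = 0.85
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # power iteration until ranks stabilize
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# Pages with more (and better-ranked) inbound links score higher,
# which says nothing about their accuracy.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```

Note that there is no “truth” term anywhere in that computation; a persuasive piece of disinformation that attracts links ranks just like good information does.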
Google is a private company. The First Amendment constrains government suppression of speech; it does not apply to a private company’s decisions.
Also: Google maintains an index of content hosted on external sites. It does not directly control what is or is not posted on those sites. (You may quibble philosophically about Google indirectly controlling content, as site owners manipulate their content in order to improve their search rankings, but I doubt you could get a court to agree with you.)
However you look at it, the First Amendment has nothing to do with Google’s search results.
The rules are a bit different in the EU, because in addition to free speech rules, there are rules about storing and processing personal data under the GDPR, as well as a brand-new Digital Services Act which puts all web platforms on notice that they need to be more transparent in how they manage and police their own content. The immediate concern behind this act is the continuing problem of state-sponsored disinformation coordinated by Russia and China, and the EU’s demand that Facebook and other parties take an active role in scrubbing this material. But there’s also a wider scope to the Act (and its sister legislation, the Digital Markets Act) in which the EU wants to force services like Google to reveal exactly how they process user data to manipulate search results in order to influence people’s decisions.
It’s unclear as yet how your specific example of medical disinformation would be handled under the DSA/DMA regime. Again, however, in the EU, it has nothing to do with “free speech.”
From a user perspective, the only real difference between accessing a site in the EU and accessing it Stateside is that the website first asks you for permission to set cookies when you open it.
I’m pretty certain that Google does have a way of filtering out, or at least demoting, harmful content. This article, from pre-pandemic 2017, agrees about the demoting part.
It’s also known that Google gives different results to different people, even if not logged into a Google account (study from 2018).
The issue is not free speech but liability. Would Google be held liable for the content that it returns? It basically turns on Section 230 of the Communications Decency Act, which says, in part, that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Google’s search results essentially just repeat speech elsewhere, so they cannot be liable. That said, to qualify, the organization has to take down things like copyright infringement.
The EU has more restrictions, to the point that they passed laws saying Google has to pay a fee to show those little snippets of news articles. And they have “the right to be forgotten.”
The OP isn’t talking about Russian spam bots spreading “fake news”, though. He/She is specifically referring to filtering out medical disinformation. I just did a quick Google Search for the “Healing Power of Crystals” and the third result states:
“Crystals emit positive, uplifting, energizing, and calming vibrations that help you achieve a more peaceful mind and a revitalized physical. . .”
Clearly what “Google can do” is different from what “Google is doing”. They’re not filtering for truth, and users are still required to evaluate Google results for currency, credibility, reliability, and bias.
All the First Amendment says is that the government cannot limit free speech.
Court rulings over the years have allowed laws that somewhat limit speech - for example, banning obscenity, disallowing posting of copyrighted materials, allowing offended parties to sue for slander, restricting incitement to violence, etc. They cannot specifically ban false information - after all, that’s a slippery slope. At what point does information become false, and who gets to judge? Banning legal but objectionable or misleading content is all on the service provider.
I saw a discussion the other day revolving around “Twitter and free speech” - the commentator said exactly what I said: allowing complete free speech means your service will fill with steadily worse garbage. Under cover of perceived anonymity, you get spam, harassment, false news, QAnon, kiddie porn, beheading videos, revenge porn, doxxing, etc. This has happened to every online service over the years and prompted them to be ever more vigilant about blatantly illegal and otherwise undesirable content.
But, it’s not always the government that mandates or initiates this censorship. It’s just good business.
In the good old days, newspapers and then TV and radio were liable for what went on the air, because as editors of publications (i.e. “publishers”) they had major input into deciding what to show and not to show in a limited space. Online services, by contrast, have effectively unlimited space and such a flood of content that they cannot hope to gatekeep it all; the sheer volume and variety of content, plus the expectation of timeliness, make real-time gatekeeping impractical.
The only proviso was that when improper content was flagged and reported to them by those with the authority to do so (e.g. by court order or police intervention), they were obliged to remove it - slanderous material, illegal material, etc.
As far as I am aware, there is no law in the U.S. requiring Google to weed out disinformation from its search results. There are laws regarding child pornography, though. And Google has voluntarily set up a copyright takedown system.
Florida attempted to regulate search engine rankings in 2021, sort of to the opposite effect. I think the most relevant provision is this, from Fla. Stat. 501.2041:
(j) A social media platform may not take any action to censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast. Post-prioritization of certain journalistic enterprise content based on payments to the social media platform by such journalistic enterprise is not a violation of this paragraph. This paragraph does not apply if the content or material is obscene as defined in s. 847.001.
Where social media platform is defined to include an “Internet search engine”,
(g) “Social media platform” means any information service, system, Internet search engine […]
“shadow ban” includes removing from search results due to misinformation,
(f) “Shadow ban” means action by a social media platform, through any means, whether the action is determined by a natural person or an algorithm, to limit or eliminate the exposure of a user or content or material posted by a user to other users of the social media platform. This term includes acts of shadow banning by a social media platform which are not readily apparent to a user.
(It is possible to “post” your website to Google Search through the Google Search Console.)
and “journalistic enterprise” could mean any website with over 100,000 words online and at least 50,000 paid subscribers or 100,000 monthly active users (the test is sketched in code after the quote),
(d) “Journalistic enterprise” means an entity doing business in Florida that […] Publishes in excess of 100,000 words available online with at least 50,000 paid subscribers or 100,000 monthly active users […]
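If I am reading that definition right, the word-count condition combines with an either/or on subscribers and users. A minimal sketch of the quoted threshold, with the caveats that the parameter names are invented and the statute’s other qualifying routes (elided in the quote above) are not modeled:

```python
def is_journalistic_enterprise(words_online: int,
                               paid_subscribers: int,
                               monthly_active_users: int) -> bool:
    """Sketch of the (d) definition quoted above; simplified, and
    the statute's other qualifying routes are not modeled here."""
    return (words_online > 100_000
            and (paid_subscribers >= 50_000
                 or monthly_active_users >= 100_000))

# A big forum with no paid subscribers but heavy traffic would qualify:
assert is_journalistic_enterprise(500_000, 0, 120_000)
# A small personal blog would not:
assert not is_journalistic_enterprise(40_000, 0, 200)
```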
A judge issued a preliminary injunction against the state due to First Amendment concerns in the case NetChoice, LLC v. Moody, 546 F. Supp. 3d 1082, 1094 (N.D. Fla. 2021). Oral arguments for an appeal before the Eleventh Circuit are being heard today.
~Max
Of relevance, Judge Hinkle wrote:
Where social media fit in traditional First Amendment jurisprudence is not settled. But three things are clear.
First, […] The First Amendment says “Congress” shall make no law abridging the freedom of speech or of the press. The Fourteenth Amendment extended this prohibition to state and local governments. The First Amendment does not restrict the rights of private entities not performing traditional, exclusive public functions. See, e.g., Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1930 (2019). So whatever else may be said of the [social media/search engine] providers’ actions, they do not violate the First Amendment.
Second, the First Amendment applies to speech over the internet, just as it applies to more traditional forms of communication. See, e.g., Reno v. ACLU, 521 U.S. 844, 870 (1997) (stating that prior cases, including those allowing greater regulation of broadcast media, “provide no basis for qualifying the level of First Amendment scrutiny that should be applied” to the internet).
Third, state authority to regulate speech has not increased even if, as Florida argued nearly 50 years ago and is again arguing today, one or a few powerful entities have gained a monopoly in the marketplace of ideas, reducing the means available to candidates or other individuals to communicate on matters of public interest. In Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), the Court rejected just such an argument, striking down a Florida statute requiring a newspaper to print a candidate’s reply to the newspaper’s unfavorable assertions. A similar argument about undue concentration of power was commonplace as the social-media restrictions now at issue advanced through the Florida Legislature. But here, as in Tornillo, the argument is wrong on the law; the concentration of market power among large social-media providers does not change the governing First Amendment principles. And the argument is also wrong on the facts. Whatever might be said of the largest providers’ monopolistic conduct, the internet provides a greater opportunity for individuals to publish their views—and for candidates to communicate directly with voters—than existed before the internet arrived. To its credit, the State does not assert that the dominance of large providers renders the First Amendment inapplicable.
That brings us to issues about First Amendment treatment of social-media providers that are not so clearly settled. The plaintiffs say, in effect, that they should be treated like any other speaker. The State says, in contrast, that social-media providers are more like common carriers, transporting information from one person to another much as a train transports people or products from one city to another. The truth is in the middle.
[… too much to quote …]
In sum, it cannot be said that a social media platform, to whom most content is invisible to a substantial extent, is indistinguishable for First Amendment purposes from a newspaper or other traditional medium. But neither can it be said that a platform engages only in conduct, not speech. The statutes at issue are subject to First Amendment scrutiny.
~Max
A lot of sites ask that of everyone, including Americans. I’m guessing that it’s just easier to ask everyone than to first figure out where the user is and then ask only Europeans.
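For what it’s worth, the geo-gated alternative is genuinely fiddly. Here is a hedged sketch of the decision a site would have to make, where lookup_country() is a hypothetical stand-in for a GeoIP call, not a real API; real lookups need a maintained database and still misfire on VPNs and proxies.

```python
# Hypothetical sketch: gate the cookie-consent banner by location.
EEA = {"AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
       "DE", "GR", "HU", "IE", "IS", "IT", "LV", "LI", "LT", "LU",
       "MT", "NL", "NO", "PL", "PT", "RO", "SK", "SI", "ES", "SE"}

def lookup_country(ip: str) -> str | None:
    """Stand-in for a GeoIP database query; may fail or be stale."""
    return None  # stub for illustration

def needs_consent_banner(ip: str) -> bool:
    country = lookup_country(ip)
    # VPNs, proxies, and stale databases mean the lookup can be wrong
    # or missing, so the safe fallback is to show the banner anyway...
    return country is None or country in EEA

# ...at which point many sites skip the lookup and just ask everyone:
def needs_consent_banner_simple(ip: str) -> bool:
    return True
```

Given that the failure mode of not asking an EU visitor is a compliance problem, while the failure mode of asking everyone is a mild annoyance, asking everyone is the obvious default.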
Indeed. And Section 230 also works the other way, in terms of taking too much control of the content: if you do that, then you become the publisher. I’m convinced this is going to wind up at SCOTUS, could go either way, and will be very bad whichever way it ends up. Either companies will be forbidden from controlling content that passes through, which will mean most or all sites either turn into cesspools or stop allowing comments etc., as many already have; or they will be required to control content, which will mean many cannot afford to exist (e.g. Facebook - I know, “You say that like it’s a negative thing” - but also Nextdoor, LinkedIn… all the SM sites, basically). The only way they will be able to survive in that scenario is by charging their users.
Meanwhile, I keep hearing Facebook ads touting that they’ve spent (IIRC) “over $13B” in the last few years fighting bogus content, and have taken down over a billion bogus pages in the last few months.
If my business model required me to piss away that amount of money, I’m not sure I’d be advertising it.
And on the flip side, from the European perspective, we run into a fair number of US-hosted sites that block us with a static page: “Sorry, but we don’t show our content to EU visitors because we can’t be arsed to comply with your data protection laws.” Paraphrasing, of course.
I seem to recall this has already been decided - a service does not become a publisher simply because it polices (or tries to police) objectionable (to someone) content.
The main difference is that a publisher selects the material and presents only the material it selects, often a very limited selection of total submissions. A website may (or may not) accept all content without limit and then maybe police it after the fact, and it does not invest a great deal of effort in moderating content. Such services are more often compared to a telephone system or the mail - you can’t blame the phone system or the mail for objectionable content, even if they make some effort to limit it.
This was the precise intent of Section 230. A service that allows millions of people to post cannot effectively screen everything everyone posts before it goes up. Requiring websites to do so would either put them out of business or drive them to operate, less regulated, from foreign countries.
You may well be correct, but I missed it if so (fully possible!).
This seems to be ongoing:
IANAL, just interested in this stuff as a layperson!
I think the First Amendment could apply, in the sense that the government – well, Congress, more particularly, whom the Amendment limits – could set requirements for how vigilant Google had to be, and it would be up to courts whether the Amendment precludes Congress setting such requirements.
Thanks Napier for getting to the heart of the matter (at least for me). I had assumed (wrongly?) that despite Section 230 of the Communications Decency Act, there was already some government oversight of search engine/social media content. Have any bills been introduced to curtail malicious content? Would it necessitate amending the First Amendment to accomplish this or does Congress have the power even with the First Amendment as is to legislate that content providers remove such content?
It’s important to remember that Section 230 just sets certain limits to avoid expensive First Amendment court fights about government limits on speech. The backstop of Section 230 is still the First Amendment.
Lots of bills have been introduced to curtail malicious content, repeal Section 230, and other things. All of these bills have large First Amendment problems. The most recent one that I can think of to pass was FOSTA a few years ago, which puts restrictions on the ability to provide certain information related to sex.
Congress can pass any bills they want, and if the president signs them, they become law. They can pass a law saying “you can’t talk about Bruno,” but until somebody brings a court case over that and a judge rules that the Bruno law violates the First Amendment, the law will stand. Meaning that anybody who doesn’t have deep enough pockets to fight the government in court for years isn’t going to talk about Bruno.
Yes, the heart of the matter is that the US government can set some reasonable limits on free speech - but since speech is protected in the Constitution, those had better be minimal and very, very necessary limits. Some obvious ones involve libel and slander, child porn and revenge porn, privacy concerns, copyright violations, national security, and the dissemination of dangerous information (“How to make a nuclear bomb”).
Google itself is under no obligation to provide fair, balanced, accurate, or universal access to its platform. “Right to free speech” does not apply to what Google gives its participants. It applies to what the Government tells Google it can or cannot show.
As I understand it, Google is only required under Section 230 to remove content if the courts order it to under the authority of those exceptions above. IIRC Section 230 makes a separate process for things like copyright - a holder may notify the service of the perceived violation with a sworn statement, and then there’s a process for resolving the situation. Theoretically, a person who falsely claims a copyright violation commits an offense, but in practice this seems to be ignored all the time.
Censoring content, whether while it is being posted or after the fact, does not convert the service from a carrier into a publisher (who would be liable).
But it seems to me the distinction is this: a publisher actively scrutinizes and selects a limited number of items to present to the public. Each page of a newspaper, each minute of a broadcast, is essentially limited. Conversely, services allow (within practical limits) almost unlimited content from unregulated and unmonitored sources. Google, for example, explicitly tries to provide everything it can find. Twitter, Facebook, TikTok, etc. basically beg people to flood them with content, even if some content is then restricted.
Just to be pedantic - but this is FQ - that part is the DMCA, not Section 230.