ISPs Should Not Police Online Speech—No Matter How Awful It Is.

gsa4555@lemm.ee to Technology@lemmy.world – 786 points –
ISPs Should Not Police Online Speech—No Matter How Awful It Is.
eff.org

Platforms should, like how we had to shut down our shit posting community for CSAM.

ISPs are privatized infrastructure and should really be run as utilities, like trains or water.

The world has been treated as a for-profit endeavor and this has many regrettable consequences.

exactly. switch my packets, and shut the fuck up.

the water company isn't trying to upsell me on premium water services, i would like the same from my isp thankyouverymuch.

I'd say "don't give them ideas", but they only have a few they like and that's one of them already

think of how much money could be saved not having to advertise alone.

we literally have a pretend market for who owns the last mile to force competition into a market that shouldn't exist. insane

Buy our premium package for 40% less microplastics, guaranteed*!

Check out the company Aqua, they have been buying public water companies across the country and running them into the ground with no government oversight since they are a private company.

It’s an unpopular opinion, but crippling platforms due to CSAM is a lot more harmful than what would happen if we did not have such draconian laws around it. Do people think there would be some dramatic explosion of CSAM? I don’t buy that for a second, and the act of producing such material has always been and will always be illegal, so like everything else, it seems ridiculous to prosecute the particular crime of possession.

Seize all funds received for distributing it, throw anyone involved in producing it in prison and throw away the key, and stop holding the threat of social death over anybody’s head if some idiots throw a bunch of digital gunk at them.

it seems ridiculous to prosecute the particular crime of possession

what does this even mean? you mean people hoarding CSAM shouldn't be charged because they're not distributing it?

Do people think there would be some dramatic explosion of CSAM?

Yes. This is not your local backwater town, where you know there are a few visibly shitty and disgusting people, people tell their kids to stay away, and everyone stays safe. And if you think shit doesn't explode on the internet, you might have been living under a rock for the last two decades.

That's stupid on a whole new level, and your made-up scenario doesn't make it any better. No one is threatened for having been sent some questionable content. The person who sent it, however, might be, and today's tech makes it incredibly easy to prove where anything came from, since everyone is being tracked.

Seize all funds received for distributing it, throw anyone involved in producing it in prison and throw away the key,

How about we prevent such things from happening by discouraging them in the first place? Sure, they won't be down to zero, but your solution starting only after distribution has already begun is highly disturbing.

what does this even mean? you mean people hoarding CSAM shouldn’t be charged because they’re not distributing it?

He's just a dirty MAP apologist. Ignore him.

They claim it stands for Minor Attracted Person

… and that’s better? There are already separate terms for those attracted to underage people vs. prepubescent people. The latter is a serious mental condition that needs help. The former is something society has largely agreed upon as being morally wrong. In what world is “MAP” sufficient cover? Weird.


what does this even mean? you mean people hoarding CSAM shouldn't be charged because they're not distributing it?

This means that current prosecution violates the principles of criminal prosecution: namely, the requirement of intent.


As a CSA survivor, who had images taken of me while I was abused... Fuck you.

People wanting to possess it is exactly what encourages people to produce the material. If you let people possess it with no consequences you will let the demand shoot up and basic economics should tell you what happens next with the supply part.

That is disgusting. Seriously. You should feel ashamed of yourself.

It seems I'm getting into a really weird conversation.

If you let people possess it

I think you are missing the point or are a troll. The person above said that creating and/or buying would always be illegal anyway. Or do you want to make it easier for abusers to collect information about their future victims by destroying privacy?

Oh look, another shithead pedo apologist.

Creating and buying is exactly what they're trying to get people to accept, incrementally, by attacking possession first. Exploiting the warped way Americans have been taught to think about morality to do it. And your sorry ass is helping him. You doing so is not acceptable AT ALL and neither is any notion of getting rid of CP possession bans.

You will not make pedophilia socially acceptable and you will not lie to me and say that you aren't, and then go back to doing it like I know you're gonna do.

You're evil.

Dear Faust, another one. It is so much easier to call your opponent Hitler/a pedophile/a terrorist than to counter-argue.

This is a discussion about ISPs' surveillance. This is not just attacking possession. This is attacking the computer that relayed the data stream. Technically it is a message sequence, but the fact that we are even having this discussion shows that you don't care. Should a postman go to jail if a delivered letter contained child pornography? You are saying the postman should open and read every letter.

Your only option to not look like a complete troll is to somehow define possession in a way that excludes the postman (the ISP).

Here's my take on your manipulation:

You will not make espionage socially acceptable and you will not lie to me and say that you aren't, and then go back to doing so like I know you're gonna do.

No. You're a pedo apologist. No one owes you a counter argument and you're not going to get the legitimacy you seek from a debate. Fuck off.

Fuck off, you're just a pedo.

Edit: I angered at least 6 pedos!

Edit 2: We're up to 8 angry pedos now!


ISPs shouldn't be doing anything other than providing internet service.

ISPs are, to me, like infrastructure. They are like roads or power lines. Asking ISPs to block malicious activity is like asking the electrical power grid to be responsible for stopping its electricity from being used for illegal activity, or asking road builders to be responsible for bank robbers and murderers driving on the roads. It's simply ridiculous to assign responsibility like that.

Then ISPs should be public corporations; until that happens, they're not equivalent to public infrastructure.

ISPs should definitely be owned by the public and regulated like a utility.

Well said. It’s really the endpoints where the burden is.

Yup. Too many people don't get it. They are all for heavy regulations so long as it's their side doing the regulating, then five years later they're crying about being regulated.

Bingo. People forget that rules and laws are always double edged and can be used against you.

I understand that censorship can be misused, but I also understand that trying to fight fascistic right-wing propaganda and demagoguery is more important than sticking to general liberal ideals like "free speech", if radically following those ideals leads to bad outcomes.

So yes, I am in favor of not giving people who are against democracy a platform to push their lies and propaganda. With the current level of education and media literacy in the broad population, lies are much more easily spread than countered. Ignoring that means handing them their victories.

Facts are boring and feelings can easily be abused and misled.

I don't have an answer where the line is, and where and how censorship/blocking/deplatforming is effective. I just think that this isn't a simple issue.

But I would mostly agree that this shouldn't be decided by ISP companies. They probably shouldn't have a TOS. And if you ask me, they are infrastructure providers, so they have a monopoly, and therefore they should be non-commercial and under democratic control. Because democracy has proven to be a good way to handle monopolies.

I agree the situation looks hopeless regarding fighting misinformation, but conversation is the only viable tool. Failing that, when the topic is important enough, only the tool of violence remains. A person about to blow themselves up in a crowd likely can't be talked out of it. Hopefully the situation isn't so bad that a lot of people are like that. I think it's better to promote education rather than trusting anyone to draw a line on what speech I can't hear (or say).

You can have conversations with people, but first you have to somehow stop the constant onslaught of propaganda meant to keep people in an easy to manipulate mental state.

You need to give them room to breathe first, then you can start working with them to take their fear away and let them reflect on their preconceptions.

I think improving people's well-being is the best bet to give people the time to potentially become inoculated against manipulation. Stopping propaganda from reaching people may contribute to that. Does getting caught censoring not backfire to a high degree ("they're trying to hide the truth")?

Yes, improving well-being is the best way, but doing that is very difficult and takes time. There are so many reasons why people are unwell in the US: suburbia, car-centric infrastructure, housing, education, ...

If you are transparent about what is censored and why, you aren't hiding it.

After WW2, Germany created a law against Volksverhetzung and later expanded this into laws against hate speech, which was adopted in similar ways in other countries as well.

The law seems to work well. It is not perfect, but at least the right needs to be much more careful in what they utter. Of course, free speech absolutists were concerned about it, with slippery-slope arguments, but their concerns have been unsubstantiated so far.

When it comes to government rather than a business, I am more certain and am of the free-speech-absolutist kind. Even though I believe speech can hurt, indeed has killed (e.g. driven my fellow LGBT people to suicide), there is simply no one I can trust to decide what speech I can hear.

I have little faith that any holocaust denier has anything worth hearing, but I wouldn't say they could never have even a grain of truth. What effect are those laws having which social backlash does not?

You fight them by blocking them and moving on.

Just because you don't want to see something doesn't mean nobody else should.

Blocking people is only useful to protect your own mental health. Ignorance of issues will not solve them on their own.

I don't see a suggestion here to fix the rise of demagoguery in the west. Pushing people out of the general public perception will help against stochastic violence against minorities.

The side who censors is always the side that knows it has the incorrect position


This was literally the Net Neutrality debate from 2013–2016ish... and y'all can correct me if I'm misremembering the argument. IIRC it was about whether an ISP wanted to be classified as a public utility or a private service, and the outcome was something along these lines:

Public utility > protection under Title II of the Communications Act of 1934 (amended in 1996, so it's not that old), meaning they could NOT be sued for the content they transmitted > they could not police the content; otherwise they were liable for what their customers used it for.

The reasoning was that you could not sue a public utility for someone using it to do something illegal. However, if it was a private service:

Private service > not protected under Title II > could police content, as it was private infrastructure. The fear was that Time Warner or someone would throttle connections to streaming platforms to ruin the experience so people would go back to watching cable. This was kicked off when Netflix, Level 3, and Comcast all got into a spat over content usage, data volumes, and who was responsible for paying for hardware upgrades.

The issue was that ISPs were poorly classified at the time (unsure if that has changed) and had a habit of flip-flopping classifications as they saw fit in different cases (ISPs claimed to be both and would only argue in favor of whichever classification was more useful at the time). I don't think this was ever resolved; it was on Chairman Wheeler's to-do list, but 45's nominee to the FCC was a wet blanket and intentionally did nothing. Now the seat is empty, because congressional approval is required for appointees, and we're doing the "think of the kids / ruin the internet" bill again... /sigh

Y'all know the drill: call your congress critter n' shit, remind them not to break the internet again. And if you're in a red state, just fart loudly into the phone; it's funny, and they won't do anything constructive anyway, even if you asked nicely. (Sorry, I'm just tired of this cycle of regulatory lights-on, lights-off.)

Thank you for coming to my TEDtalk.

Man, this is so weird reading from a post-Soviet country. Here, a "red" region in the '90s meant a region with a communist majority. And they probably would have been for a public utility.

Anyway, it doesn't matter now in a personalist resource autocracy.

That's the part that makes this doubly frustrating: by all accounts it should be a public utility. Back in the early '00s the US federal government basically gave all these companies a blank check to provide "broadband internet" to every home in America (see the US Postal Service for the government doing things like this before).

They (the ISPs) have since taken the money and done some of the work (with the promise to get it done someday, eventually, maybe never), and the term "broadband" is borderline useless as a measure of an acceptable internet connection. Every few years there is some scuttlebutt about raising the standard of what "broadband" means, but the last update set it at 25 Mbps down / 3 Mbps up... which in 2023 is pretty embarrassing.

The downlink speed itself is not as embarrassing as the asymmetrical channel.

And not everybody receiving it is also embarrassing, to the point of Articles 19 and 27.1 of the Declaration of Human Rights. AFAIK the only country in the world that does not have to be embarrassed is Finland.

I don't want my local ISP to be making judgments about whether my neighbor is pirating movies or posting hate speech.

But I do want my local ISP to be able to cut off connectivity to a house that is directly abusing neighborhood-level network resources; in order to protect the availability of the network to my house and the rest of the neighborhood.

Back in the early 2000s there was a spate of Windows worms known as "flash worms" or "Warhol worms"¹, which could flood out whole network segments with malware traffic. If an end-user machine is infected by something like this, it's causing a problem for everyone in the neighborhood.

And the ISP should get to cut them off as a defensive measure. Worm traffic isn't speech; it's fully-automated malware activity.


¹ From Andy Warhol's aphorism that "in the future, everyone will be famous for 15 minutes", a Warhol worm is a worm that can take over a large swath of vulnerable machines across the Internet in 15 minutes. https://en.wikipedia.org/wiki/Warhol_worm

Yeah, OK, that's like a gas leak: the gas company comes over and helps you fix it.

that is directly abusing neighborhood-level network resources

First question: how?


A fascinating read. I'm sure there will be plenty of people complaining about their "centrist / fence-sitting" takes, but what they're saying is perfectly valid. These top-level providers shouldn't be interfering in arguably critical infrastructure.

We keep seeing Moral Guardians create more problems than they solve

Users need more control over the kind of content they want to see. The problem Lemmy has is very similar to the main problem with the internet as a whole: the current model is that of a "regulator" who controls the flow of information for us.

What I'd like to see is giving users the tools to filter for themselves, and that means the internet as a whole. Not interested in sports? Let me filter it all out by myself, instead of blocking individual parts piecemeal.

The problem is that no company has an incentive to work on something like that, and I wouldn't even know where to start designing such interface tools on my own. But there is, for example, a keyword blocker for YouTube that prevents videos containing certain terms from appearing on my timeline. I've used it to block everything "Trump", for example. I'd like to see more of that.
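The user-side filtering described above is simple to sketch. Here's a minimal, hypothetical keyword filter in Python; the function names and keyword list are illustrative, not taken from any real extension:

```python
# Minimal sketch of a user-side keyword filter, similar in spirit to
# browser extensions that hide videos whose titles match blocked terms.
# All names and keywords here are illustrative assumptions.

BLOCKED_KEYWORDS = {"trump", "sports"}  # user-chosen terms, lowercase


def is_blocked(title: str) -> bool:
    """Return True if the title contains any blocked keyword."""
    lowered = title.lower()
    return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)


def filter_feed(titles: list[str]) -> list[str]:
    """Keep only items whose titles pass the filter."""
    return [t for t in titles if not is_blocked(t)]


feed = ["Trump rally coverage", "Rust 1.70 released", "Sports roundup"]
print(filter_feed(feed))  # only the Rust item survives
```

The key design point: the blocklist lives with the user, not the platform, so each person draws their own line.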

The idea sounds nice in theory, but there is a reason people bring their car to a shop instead of changing their own oil. There are a lot of things we could or should take responsibility for directly, but they are far too numerous for us to take responsibility for every one of them. Sometimes we just have to place trust in groups we loosely vetted (if at all) and hope for the best. We all do it every day in all sorts of capacities.

To put it another way: do you think we should have the FDA? Or do you think everybody should have to test everything they eat and put on their skin?

I'm talking about internet content. Maybe this is where personal assistants can come into play at some point.

To put it another way: do you think we should have the FDA? Or do you think everybody should have to test everything they eat and put on their skin?

There is a middle ground. The FDA shouldn't have the power to ban a product from the market. They should be able to publish their recommendations, however, and people who trust them can choose to follow those recommendations. Others should be free to publish their own recommendations, and some people will choose to follow those instead.

Applied to online content: Rather than having no filter at all, or relying on a controversial, centralized content policy, users would subscribe to "reputation servers" which would score content based on where it comes from. Anyone could participate in moderation and their moderation actions (positive or negative) would be shared publicly; servers would weight each action according to their own policies to determine an overall score to present to their followers. Users could choose a third-party reputation server to suit their own preferences or run their own, either from scratch or blending recommendations from one or more other servers.
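The weighting scheme described above can be sketched in a few lines. This is a hypothetical illustration of the "reputation server" idea, not any existing protocol; the data model, names, and weights are all assumptions:

```python
# Sketch of the "reputation server" idea: moderation actions from many
# moderators are shared publicly; each server weights each moderator
# according to its own policy and aggregates a score per content item.
# The data model and trust weights are illustrative assumptions.

from collections import defaultdict

# Public moderation actions: (moderator, content_id, vote), vote in {+1, -1}.
actions = [
    ("alice", "post-1", +1),
    ("bob",   "post-1", -1),
    ("carol", "post-1", -1),
    ("alice", "post-2", +1),
]

# Each reputation server trusts moderators differently; this server
# fully trusts alice, half-trusts bob, and ignores carol entirely.
server_policy = {"alice": 1.0, "bob": 0.5, "carol": 0.0}


def score_content(actions, weights):
    """Weight each moderator's vote by the server's trust in them."""
    scores = defaultdict(float)
    for moderator, content_id, vote in actions:
        scores[content_id] += weights.get(moderator, 0.0) * vote
    return dict(scores)


scores = score_content(actions, server_policy)
print(scores)
# A client subscribed to this server might hide anything scoring below
# some threshold; a different server, with different weights, would
# present the same public actions as entirely different scores.
```

Because the raw actions are public and the weighting is local, two users can subscribe to different servers and get different views of the same content, which is exactly the "choose whose recommendations to follow" property being argued for.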

That isn’t a middle ground. You’re just saying the state can publish a recommendation, which it always has been able to. That’s absolutely in the “unregulated” / “no safety nets” camp. It’s caveat emptor as a status quo and takes us back to the gilded age.

To put it another way: The middle ground between “the state has no authority here” and “the state can regulate away a product” isn’t “the state can suggest we don’t buy it.” It still puts the burden on the consumer in an unreasonable way. We can’t assess literally everything we consume. If I go to a grocery store and buy apples, I can reasonably assume they won’t poison me. Without basic regulations this is not possible. You can’t feed 8 billion people without some rules.

Let me be clear: I agree with the EFF on this particular issue. ISPs should not regulate speech or what sites I browse. But it's not the same as having the FDA. For starters, ISPs are private corporations.

You misunderstood. It's not a middle ground between "can regulate" and "cannot regulate". That would indeed be idiotic. It's a middle ground between "must judge everything for yourself" and "someone else determines what you have access to". Someone else does the evaluation and tells you whether they think it's worthwhile, but you choose whose recommendations to listen to (or ignore, if you please).


There is a middle ground. The FDA shouldn't have the power to ban a product from the market. They should be able to publish their recommendations, however, and people who trust them can choose to follow those recommendations. Others should be free to publish their own recommendations, and some people will choose to follow those instead.

That's putting too much responsibility on the average person, who doesn't have the time to become educated enough in biology and pharmacology to understand what every potentially harmful product may do to them. What if they never even hear the FDA recommendation?

Also, though you'd like to think this would only harm the individual in question who purchases a harmful product, there are many ways innocent third parties could be harmed through this. Teratogens are just one example.

This kind of laissez-faire attitude just doesn't work in the real world. There's a reason we ban overtly harmful substances.

What if they never even hear the FDA recommendation?

Then the FDA isn't doing a very good job, are they? Ensuring that people hear their recommendations (and trust them) would be among their core goals.

The rare fringe cases where someone is affected indirectly without personally having chosen to purchase the product can be dealt with through the courts. There is no need for preemptive bans.


If you ask me (and nobody ever does for good reason), one of the only times an ISP should be pulling the plug on online speech is when you start linking actual malicious links that have a good chance of your grandma losing her retirement funds or your tech illiterate uncle getting a crypto miner installed on his laptop or something equally destructive.

I don't know why people are disagreeing with you.

This is like someone setting up a fake stop on a public road to mug people.

You're telling me that the state shouldn't have the right to police the road to prevent that from happening?

Lemmy.people, are you high?

the EFF seems to be suggesting that the private sector is policing itself via censorship because law enforcement doesn't fucking do anything. yea bro

It doesn't help that the ISPs are run by media companies, who put the content on the Internet...

That's why the policing is even happening in the first place.

"ISPs should monitor and censor bad things!"

"ISPs shouldn't be censoring free speech!"

We already see how bad stuff gets censored or punished on social media with things like bots that cast such a wide net that lots of innocents get caught with no human to appeal to.

You only have to look at how the first amendment is abused in the US to understand just how bad free speech without consequence is.

Yuval Noah Harari did a session with The Rest Is Politics podcast. He brought up a concept that was totally alien to me beforehand. In the past, fake news has sought to shock people into taking notice. With AI this will change dramatically: rogue states will use AI to befriend individuals, and then manipulate their thought processes by gentle integration. I found this an immensely scary prospect to dwell on. People rarely think about the person on the other end of a conversation being something other than what they portray. The concept is very credible.

To me, this is one of the reasons why we must police activity online. But it must be done without government interference. Ideally an international effort should be made. An international fact-authentication group would go a long way, too.

Edited for better clarity.

YouTube should not be policing copyright either.

Oof... Yeah this.

When you have a corporation that acts as a stand in for the law, something very wrong has happened.

I'll agree that ISPs should not be in the business of policing speech, buuuut

I really think it's about time platforms and publishers be held responsible for content on their platforms, particularly if in their quest to monetize that content they promote antisocial outcomes like the promulgation of conspiracy theories and hate and straight-up crime

For example, Meta is not modding down outright advertising and sales of stolen credit cards at the moment. Also, Meta is selling information with which to target voters... to foreign entities.

The issue with this is that holding tech companies liable for every possible infraction will mean platforms like Lemmy and Mastodon can't exist, because they could be sued out of existence.

The issue with this is holding tech companies liable for every possible infraction

That concern was the basis for section 230 of the 1996 Communications Decency Act, which is in effect in the USA but is not the law in places like, say the EU. It made sense at the time, but today it is desperately out of date.

Today we understand that in absolving platforms like Meta of their duty of care to take reasonable steps to not cause harm to their customers, their profit motive would guide them to look the other way when their platform is used to disseminate disinformation about vaccines that gets people killed, that the money would have them protecting Nazis, that algorithms intended to promote engagement would become a tool not just to advertisers but to propagandists and information warfare people.

I'm not particularly persuaded that if in the US there is reform to section 230 of the Communications Decency act, that it would doom nonprofit social media like most of the fediverse- if you look around at all, most of it already follows a well-considered duty-of-care standard that provides its operators substantial legal protection from liability for what 3rd parties post to their platforms. Also if you consider even briefly, that is the standard in effect in much of Europe and social media still exists- it's just less-profitable and has fewer nazis.

I feel like you can't really change 230; you need to instead legislate differently. There is room for more criminal liability when things go wrong, I think. But civil suits in the US can be really bogus. Like, someone could likely sue a Mastodon instance for "turning their kid trans" and win without Section 230.

I'm with you on the legislate differently part.

The background of Section 230(c)(2) is an unfortunate 1995 court ruling that held that if you moderate any content whatsoever, you should be regarded as its publisher (and therefore ought to be legally liable for whatever awful nonsense your users put on your platform). This perversely created incentive for web forum operators (and a then-fledgling social media industry) to not moderate content at all in order to gain immunity from liability- and that in turn transformed broad swathes of the social internet into an unmoderated cesspool full of Nazis and conspiracy theories and vaccine disinformation, all targeting people with inadequate critical thinking faculties to really process it responsibly.

The intent of 230(c)(2) was to encourage platform operators to feel safe to moderate harmful content, but it also protects them if they don't. The result is a wild-west, if you will, in which it's perfectly legal for social media operators in the USA to look the other way when known-unlawful use of their platforms (like advertising stolen goods, or sex trafficking, or coordinating a coup attempt, or making porn labeled 'underage' searchable) goes on.

It was probably done in good faith, but in hindsight it was naïve and carved out the American internet as a magical zone of no-responsibility.

This is not really what 230 does; sites still face criminal liability where needed. Like, if I made a site that had illegal content, I could still be arrested and have my server seized. Repealing 230 would legitimately just let Ken Paxton launch a multi-state lawsuit suing a long list of queer Mastodon instances for "transing minors". Without 230 it would be lawsuit land, and sites would censor anything that wasn't cat photos in an effort to avoid getting sued. Lawsuits are expensive even when you win. If you want to make social media companies deal with something, you've got to set up criminal liability, not repeal 230. 230 just protects sites from civil suits, not criminal ones.

I think, if this is the case, we generally need to regulate how algorithms work. We need actual legislation and not just lawsuit buttons. Also, Meta can slither its way out of any lawsuit; this would really only affect small Mastodon instances.

publishers are held responsible.

Internet service providers, as defined in the 1996 cda, are not.

The problem is that your definitions are incredibly vague.

What is a "platform" and what is a "host"?

A host, in the definition of technology, could mean a hosting company where you would "host" a website from. If it's a private website, how would the hosting company moderate that content?

And that's putting aside the legality and ethics of one private company policing not only another private company, but also one that's a client.

Fair point about hosts, I'm talking about platforms as if we held them to the standards we hold publishers to. Publishing is protected speech so long as it's not libelous or slanderous, and the only reason we don't hold social media platforms to that kind of standard is that they demanded (and received) complete unaccountability for what their users put on it. That seemed okay as a choice to let social media survive as a new form of online media, but the result is that for-profit social media, being the de facto public square, have all the influence they want over speech but have no responsibility to use that influence in ways that aren't corrosive to democracy or to the public interest.

Big social media already censor content they don't like, I'm not calling for censorship in an environment that has none. What I'm calling for is some sort of accountability to nudge them in the direction of maybe not looking the other way when offshore troll farms and botnets spread division and disinformation

I'm gonna respectfully disagree. Free speech is a big reason why our world is so fucked right now.

Tell me that you didn’t read the article without telling me you didn’t read the article.


The EFF is going to bat for fucking Kiwifarms? This is unconscionable.

If foundational human rights are for everybody, except those people. Then they're for nobody.

You have to defend basic human rights for everyone, or they mean nothing

IMHO it only counts when you defend the rights of someone you disagree with.

Principles apply to everyone or no one.

People who harass other people to suicide don’t deserve to communicate freely. They have shown they are not responsible with that right.

Yes, and the ACLU has defended the KKK. It's not a right unless it applies universally.

I think that was also a bad thing to do.

How is it a right if it isn't universal? If it's taken away from someone else, it can be taken away from you.

Society revokes rights from people who show that they cannot be trusted with them.

ISPs are private, they can do whatever they want with their service. Create a state run ISP if you want to impose free speech on an ISP.

Also fuck USA's definition of free speech that lets people share hate.

Bring in the downvotes!

Agreed, ISPs should be public services. The Internet is an essential service these days. Shouldn't be left in private hands.

Na-ti-o-na-lize! Na-ti-o-na-lize!

Why wouldn't you want free speech protection to be regulated to private companies as well

Free speech is guaranteed in public space, not private space.

Just because you get to a public park in a privately operated taxi doesn't mean the park is suddenly private. It would also be absurd for the taxi to have a say in how you spend your time in the park.

I also refuse to trust that any private corporation would have my best interests at heart, or that they would not use the excuse of policing hate speech to also interfere with discussion against their corporate interests.

The taxi has a say in how you spend your time in the taxi, however

If you enter the taxi and the driver sees that you've got a shirt with a swastika the driver can tell you they're not taking you for this reason.

I didn't ask that, I asked why wouldn't you want that?

When did I say I didn't want it? Just stating a fact.

I've asked twice now, idk why you're intentionally avoiding the question now

Because you're asking a question that's irrelevant.

it's literally the whole topic of the thread you're replying to, you just misread it

No, I stated a fact and you're asking me why I agree with it when I never said I do.

Bro, you are dense. I can say water is wet when I'm asked a completely different question; that doesn't change the fact that it has nothing to do with the question asked. I didn't bring up whether free speech applies to private spaces; that's an entirely different comment that nobody made, and we already know the answer, hence why it wasn't asked. I asked why you, specifically, don't think it should apply. You didn't read correctly, have been asked three times now, and are still avoiding the question for some reason. You either do or don't support it, and you're avoiding stating either, which is the whole fucking point of this thread, and now it's a waste of time because you don't want to share your opinion for the sake of downvotes?

My comment was top level and a reaction to the article, your ISP doesn't have to guarantee that it will allow you to pass any data you want through its system because it's a private company and as such it doesn't have to give you free speech in the infrastructure it owns. That's just a fact and an answer to the article posted.

You then asked me why I wouldn't want free speech to be guaranteed by private companies, implying that I shared the opinion that I don't want it to be the case, which I didn't.

I won't answer that question either, because my opinion doesn't matter since I'm just stating a fact: the US Constitution applies to public space, and it would be a violation of the Constitution to make it an obligation for private parties to let anyone say and do whatever they please in their place of business. If you want to have an ISP in the USA that allows people to share whatever they want on its infrastructure, then you need it to be run by the government. It's that simple.

The only opinion I shared was my dislike of the American definition of free speech which is just an invitation for people to share their vile opinions.

Also you'll notice in this discussion there are people who want ISPs to not control traffic in order to allow them to freely share CSAM... So... yeah 👍