I like it, but I know that other notionally democratic web schemes (Digg, for example) have been undermined and manipulated by individuals for their own purposes.
In an (well, my) ideal world, webmasters would only seek to promote their content to potential viewers who might be genuinely interested in it, but in reality, there’s money to be made by thrusting yourself in everybody’s face, which presents enough of a motive for people to try to game any system.
How (and this is an earnest question, not a jibe), if at all, would your WOT-like system resist such corruption without compromising its democracy?
What is it with all these off-base counter-arguments? A WOT-like system easily solves every problem anyone has brought up here in this thread. All this blither about movie ratings has no bearing whatsoever on what I proposed! Movie ratings are done by a central authority which has many problems and drawbacks. But NOT the WOT proposal.
There’s no authority involved with WOT whatsoever!
The ratings come from ordinary individual users who visit the site and effectively “vote” on what WOT rating it deserves. Everyone is still free to visit any site, but the WOT color simply gives you a heads up as to what to expect. Also, one’s own personal rating of a particular site is retained in a cookie or something so that if you disagree with the other users’ ratings, you can override them with your own rating.
It is completely democratic. In the WOT-like solution, each and every individual user rates each site (or can choose not to) with their own personal rating.
In the current WOT, when you see a link or an icon or something, there is a colored circle attached to it which shows the other users’ ratings. No HTML or other coding changes are required whatsoever! The circles are inserted automatically without any extra coding at all. And they appear immediately; I’ve never, ever had to wait for a WOT circle to show up.
It would work, and it has none of the drawbacks that the posters here are concerned about.
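For anyone curious about the mechanics, here is a rough sketch of how such a plug-in could work. To be clear, the rating-service URL, the “wotOverrides” storage key, and every function name below are made up for illustration; this is not the actual WOT extension’s code or API.

```typescript
// Rough sketch only -- the rating service, the "wotOverrides" storage key,
// and fetchCommunityRating() are invented for illustration, not the real
// WOT extension's API.

type Rating = "green" | "yellow" | "red" | "unknown";

// Pretend community rating lookup, keyed by hostname.
async function fetchCommunityRating(host: string): Promise<Rating> {
  const res = await fetch(`https://rating-service.example/rating?host=${host}`);
  return res.ok ? ((await res.json()).rating as Rating) : "unknown";
}

// The personal override ("a cookie or something") stored locally.
function personalOverride(host: string): Rating | null {
  const overrides = JSON.parse(localStorage.getItem("wotOverrides") ?? "{}");
  return overrides[host] ?? null;
}

// Decorate every link on the page with a colored dot. The site author changes
// nothing; the plug-in injects the markers itself.
async function annotateLinks(): Promise<void> {
  const links = document.querySelectorAll<HTMLAnchorElement>("a[href^='http']");
  for (const link of Array.from(links)) {
    const host = new URL(link.href).hostname;
    const rating = personalOverride(host) ?? (await fetchCommunityRating(host));
    const dot = document.createElement("span");
    dot.textContent = " \u25CF"; // the colored circle
    dot.style.color =
      { green: "green", yellow: "orange", red: "red", unknown: "gray" }[rating];
    link.after(dot);
  }
}

annotateLinks();
```

The point is simply that all the work happens on the viewer’s side: the crowd supplies the rating, a local override wins if you disagree, and the site owner never touches their HTML.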
I rather like this idea because it gives people in general an incentive (spy/malware protection, transaction security ratings and others) to use it like the Yahoo or Google toolbar, not just parents protecting their precious children. However, lazy parents exist, so it would have to be a mandated feature of web browsers, like seatbelts in cars. Anything short of mandated and we will likely have a ‘Tipper Gore’ sitting in front of Congress demanding more.
On preview, ETA: There is one drawback: it’s subject to Stephen Colbert-style web vote bum rushes. For example, he could implore fans to sink the rating of mainstream Superman sites in favor of Tek Jansen(sp?). Just sayin…
Just an aside, the article I linked to is now the 2nd link in their “Internet Security News”.
All major news sites have a ‘Pictures of the Week’ section; anything with graphic content will come up with a black box where the picture should be, saying “warning: graphic content”. Almost all non-American news sites had the Daniel Pearl beheading linked to somewhere on the site. I’d WAG that was what Mr. Burnham was referring to.
One more thing, I was sipping from the crazy cup when I wrote the OP.
Not WOT! No tags of any kind are needed! There is absolutely NOTHING to change or insert. All you need is a plug-in (available for most if not all browsers), and WOT ratings are seen immediately with ZERO effort.
Your thinking is all wrong! The system I’m proposing merely offers a heads-up, not a way to block sites.
Rubbish! When content changes, those who then view the site’s new content simply change their votes. I’ve seen several sites do this with their WOT ratings, and it works! That’s the beauty of letting site viewers rate sites themselves rather than some authority or other. It really works.
There exist large groups of people who co-operate to subvert the democratic nature of social networks such as Digg and StumbleUpon, so as to promote their own sites.
How would your WOT-alike cope with an organised effort to subvert it?
What the fuck do you mean “off-base counter-arguments”?
In case you hadn’t noticed, the link provided by the OP was talking about a rating system implemented at the governmental level. Here’s a quote from the OP’s article:
No mention of any WOT-style system, and a very clear implication that there will be a central authority of some sort involved.
Great. But most people in this thread were NOT responding to your proposal. They were talking about proposals in which the ratings system would be overseen by some sort of centralized, official authority.
And if the WOT system was what governments like the UK and Australia were proposing, you might have a point. But they’re proposing much more intrusive systems.
Sounds fine, I guess, although I still want to be able to opt out of seeing the ratings altogether. I have no real interest in having my choice of websites rated by overprotective parents, fundamentalist Christians, or the mouthbreathing idiots that make up such a large proportion of the general population.
Thank you for your thoughtful reply. For one thing, the ratings are constantly being updated by subsequent viewers, which means that nothing is permanent or unchangeable. In this way, malicious gaming of the system will tend to correct itself automatically as more viewers visit the site and find the given rating inappropriate.
Another powerful method to deal with the problems you raised is, as I discussed in post #6, that every individual user can over-ride a rating on a personal basis. Your machine keeps a cookie or something that stores your personally over-ridden ratings.
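To illustrate why the self-correction works, here is a toy sketch (not WOT’s actual algorithm; the scoring scale and the half-life are invented) in which recent votes outweigh old ones:

```typescript
// Toy illustration only (not the actual WOT algorithm): if recent votes are
// weighted more heavily, a burst of bogus ratings fades as ordinary visitors
// keep re-rating the site afterwards.

interface Vote {
  score: number;     // e.g. 0 (dangerous) .. 100 (trustworthy)
  timestamp: number; // milliseconds since epoch
}

// Exponentially decayed average: a vote's weight halves every `halfLifeDays`.
function currentRating(votes: Vote[], now: number, halfLifeDays = 30): number {
  const halfLifeMs = halfLifeDays * 24 * 60 * 60 * 1000;
  let weightedSum = 0;
  let totalWeight = 0;
  for (const v of votes) {
    const weight = Math.pow(0.5, (now - v.timestamp) / halfLifeMs);
    weightedSum += v.score * weight;
    totalWeight += weight;
  }
  return totalWeight > 0 ? weightedSum / totalWeight : 50; // neutral default
}
```

A flood of bad-faith votes still shows up at first, but as genuine visitors keep rating the site, the stale votes decay and the rating drifts back toward the honest consensus.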
Beyond that, I’d expect that (rather quickly) a “group of trust” mechanism would evolve, be added as a feature, or be developed as a plug-in or additional layer. (In fact, glancing at the WOT forums, I see that someone was suggesting a “friend” mechanism, which is a first step towards group formation.) That is, for instance, we might all be part of the “Dope” group ('sup, yo), where each of us has the opportunity to rate sites for other Dopers (I trust y’all that much). Others might set up a “MedWeb” group, or “Pron Lovers” group, or “Elm Street Evangelical” group, or whatever. Over time, the group will get a reputation that allows some gauge of its reliability and suitability – exactly what a “web of trust” is all about.
Some of these groups might very well be despotic (a single person controlling white/black lists), some might be democratic (moderation system a la Slashdot), some might be subscription based, etc., etc. To some degree, I’d see it as similar to StumbleUpon’s categories except that, rather than dictating topical content, the groups dictate ratings (and potentially even the ratings categories and scales). By being selective about the groups one joins – assuming those groups are also selective about their members – one would remove the possibility of “mass rating subversion”.
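To make the idea a bit more concrete, here’s a rough sketch of how a rating might be computed only from the groups one has joined. The group structure, the reputation weighting, and the names are all invented to show the shape of the thing, not an existing WOT feature.

```typescript
// Hypothetical "group of trust" layer -- the structure, the reputation
// weighting, and the names are invented to illustrate the idea, not an
// existing WOT feature.

interface Group {
  name: string;                 // e.g. "Dope", "MedWeb"
  ratings: Map<string, number>; // hostname -> 0..100
  reputation: number;           // how much this user trusts the group
}

// A user's effective rating for a site: a reputation-weighted average over
// only the groups they have chosen to join. Votes from groups they never
// joined are simply invisible, which is what blunts mass rating subversion.
function effectiveRating(host: string, joinedGroups: Group[]): number | null {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const group of joinedGroups) {
    const score = group.ratings.get(host);
    if (score !== undefined) {
      weightedSum += score * group.reputation;
      totalWeight += group.reputation;
    }
  }
  return totalWeight > 0 ? weightedSum / totalWeight : null; // null = no opinion
}
```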
Nifty stuff to contemplate; pulls in elements of “bazaar vs. cathedral”, “marketplace of ideas”, complexity and self-organizing systems, social networking, etc. It just needs to hit a critical mass to be comprehensively useful…
I’m not so sure - I think you may be underestimating the persistence and power of the folks who might wish to subvert it. In the case of social networks, they use networks of members who will exchange votes; there are sites where you can buy votes in massive bulk, to be distributed to individuals on a commission-per-vote basis; and they use distributed networks of trojan-infected PCs to do some of the work for them.
Admittedly, that’s all because financial rewards are possible for getting your web page to become popular on a social network, but even in cases where there isn’t an obvious motive, projects that start out as autonomous democracies tend to need to evolve authority structures just to deal with the persistence of misbehaviour. Look at Wikipedia - it used to be a free-for-all, on the basis that yes, anyone could vandalise it, but anyone else could restore it; now it’s evolving into a mini-government.
In the case of a service designed to filter or tag web content, confidence is really important - if I want to take specific action to protect my kids from web nasties, chances are, I want it to work all the time - even a 1% fail rate may be unacceptable - so being able to fix problems after they occur might not be good enough.
Again, this will only protect me/my kids from stuff after they have already been exposed to it. Shutting the stable door after the horse has bolted.
I’m far from knowledgeable on these matters, but my first reaction when I read about the “rating” of the internet was to chuckle. How long does it take for any sort of safeguard or copy protection or internet brick wall to be circumvented? I’ve read that there are games and movies that are available on piracy sites before they are even released. There are many thousands (millions?) of people who live to thwart any attempt to censor or control internet activity. Is there any agency in existence or in theory that can effectively enforce this proposed system?
I’m sick of do-gooder mommies (or daddies) deciding for me what is acceptable. If internetbeheadings.com is available and I consider it acceptable, I can choose to visit that site, and if I consider it unacceptable, I can choose not to visit it. If I don’t want my imaginary children to see it, I can block it or [gasp!] supervise their internet usage.
I have a lot of faith in the ingenuity of the techno-wizards to triumph over the infernal people who insist on trying to save us from ourselves against our will.
You underestimate the power of the Dark Side. There are people who would set up armies of hijacked computers simply to subvert this system, whether for profit, to effectively censor content they don’t like, or just because they want to watch any censorship system burn. (You could argue until you turned blue in the face that it isn’t censorship, and it wouldn’t matter. There are still enough people who would see it as censorship and take action against it to cause problems.) You could get into an arms race with them, but it would be endless, expensive, and cause enough collateral damage to make your system effectively unusable.
You can’t boil the ocean: Any system that assumes people will upgrade their software is broken by design.
I just missed the edit window, and I must emphasize a specific point: The word “inappropriate” means you’re going to have protracted, acrimonious battles over the ratings of whole classes of content. For example, is gay, lesbian, bisexual, and transsexual information (not pornography, just normal stuff that acknowledges the existence of non-straight people) “appropriate” for anyone, ever? There are people who will go to the mat and beyond the bounds of reason for the HELL NO answer to that, and they will mobilize. Then the opposition mobilizes and it never ends. That means some pages will have highly variable ratings, simply based on who is “winning” at the time and how sweeping their victory happens to be.
(You can replace ‘GLBT’ above with ‘the existence of the state of Israel’ or ‘whether Turkey committed genocide against the Armenians’ or ‘whether Macedonians are irredentist swine yearning for a piece of Greece’. It really doesn’t matter all that much.)
I’m pretty sure that’s what we’re actually talking about here - a method by which concerned parents could reliably block sites systematically, rather than one by one.
Which is exactly why the only method of rating that would be acceptable is a community-based, voluntary one. (Note: it’s not clear to me whether you’re advocating a particular position, so I’m not really arguing with you.) Even assuming that the “group of trust” idea I raised above never comes about, what better reflects an “average” rating – one dictated by an authority (potentially, with a vested interest in the results) or one that is democratically decided by a broad swath of people?
Certainly, I’ll grant that there are implementation issues, not the least of which is to make it difficult to hijack. But on a principled level, I can’t see how trust of an authority can possibly be better at reflecting community standards than a “web of trust”.
I agree - more useful than a system of rating would be a system of category tags - that way, anyone could block or permit certain categories of content, based on what they are, not what someone else thinks of them.
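Roughly, I’m picturing something like the following - the category names and the per-site tag table are just invented for illustration:

```typescript
// Rough sketch of tag-based filtering instead of a single rating -- the
// category names and the per-site tag table are invented for illustration.

type Category = "violence" | "nudity" | "gambling" | "news" | "satire";

// Pretend community-supplied tags per site.
const siteTags = new Map<string, Category[]>([
  ["news-example.test", ["news", "violence"]],
  ["casino-example.test", ["gambling"]],
]);

// Each household picks which categories to block; everything else passes,
// regardless of what anyone else thinks of the content.
function isBlocked(host: string, blocked: Set<Category>): boolean {
  return (siteTags.get(host) ?? []).some(tag => blocked.has(tag));
}

// Example: block gambling but allow graphic news coverage.
const policy = new Set<Category>(["gambling"]);
console.log(isBlocked("casino-example.test", policy)); // true
console.log(isBlocked("news-example.test", policy));   // false
```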
The Web of Trust is philosophically the best answer, but I have severe doubts about how well it could work in practice given the number of people bent on either hijacking it outright or twisting it into implementing their policy and nobody else’s.
I refuse to worry too much because this could never be implemented effectively by anyone, simply because the current installed base of software is too large.
Agreed, if one is looking for a 100% bulletproof way to resolve the issue (for reasons in addition to what I snipped).
I was pondering this earlier today – that is, the practicality of a ratings scheme (as opposed to my claim above about the “principled level”). And it struck me that, as far as I can tell, nothing but a WOT scheme will be remotely practical. Here’s my reasoning: to some degree, a movie rating system makes sense; that is, it’s at least feasible. There is a limit on the number of films that are released (due to the time, effort, financing, etc. that is involved), which makes it possible (again, feasible) to assign an authority to rate each one. But websites? Imagine having to submit each one to a central authority…the sheer number is astounding. The internet would be dead in short order if it ever came to pass (well, actually, the authority would be “treated as damage and routed around”).
Looking at two examples will demonstrate the point: the U.S. patent office and web “portals” from about a decade ago (say, Yahoo!, which I seem to remember hired people to create and fill a content hierarchy). The former is notoriously overwhelmed and behind with application review, and the number of submitted applications must be dwarfed by the number of websites created. The latter went out of fashion when it (quickly) became obvious that they flat out couldn’t keep up; either the links were outdated, dead, or too limited in number to do any topic justice.
If one agrees with me, then any non-WOT scheme loses not only on philosophical grounds, but also on a practical level. I honestly don’t see any way around it.