{{rfctag|policy}}
THIS IS A DRAFT RFC THAT IS NOT YET LIVE, PLEASE DO NOT COMMENT YET, BUT EDITING OF THE ARGUMENTS AND FACTS BY INTERESTED PARTIES IS ENCOURAGED
Questions for community consideration
There are two primary, and related, questions that have been raised recently in discussions regarding the spam blacklist:
- Should we blacklist sites that pay authors for contributions based on traffic, such as ehow.com, examiner.com, triond.com and associatedcontent.com?
- Should the reliability of a source be a factor in blacklisting or whitelisting decisions?
Background facts
- The blacklist in question is a page in the MediaWiki namespace. Unlike the Meta blacklist, this blacklist affects pages on the English Wikipedia only.
- Blacklisting operates preemptively at the technical level: it prevents editors from saving any revision that adds a link or reference to a blacklisted site (illustrative entries are sketched after this list).
- As a matter of policy, however, blacklisting is abuse/evidence-based and should not be applied pre-emptively, before abuse has occurred.
- Recent practice has been to refuse de-blacklisting requests based on the reliability or suitability of the source, as judged by the admin(s) handling the request. Examples: 1,2
- Recent practice has been to consider the reliability of a site as one of the factors in blacklisting decisions. Examples: 1,2
- The blacklist extension has been in use on the English Wikipedia since July 2007. Recently, there has been an ArbCom ruling:
As blacklisting is a method of last resort, methods including blocking, page protection, or the use of bots such as XLinkBot are to be used in preference to blacklisting. Blacklisting is not to be used to enforce content decisions.
— ArbCom, Passed 10 to 0, 16:39, 18 May 2009 (UTC)
- It is general practice to consider the status of the account requesting de-blacklisting or whitelisting (among other things, to avoid conflict-of-interest issues)
- It is general practice, in declined de-blacklisting requests, to ask editors to consider whitelisting of the specific documents they need
- It is general practice in whitelist requests to ask editors whether the linked documents are deemed necessary, and/or whether there is consensus among editors that the specific document is needed
- Until recently, XLinkBot could not revert WP:REFSPAM
- XLinkBot can revert the addition of sites on its revert list; however, it will not re-revert if an editor adds them again or undoes the bot's revert, and it will not revert established editors
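For readers unfamiliar with the mechanics: MediaWiki:Spam-blacklist and MediaWiki:Spam-whitelist are plain pages of regular-expression fragments, one per line, matched against every URL an edit attempts to add. A minimal sketch of what entries look like and how they differ in scope (the domains below are hypothetical placeholders, not actual or proposed listings):
<pre>
# MediaWiki:Spam-blacklist -- everything after a "#" is a comment.
# A URL matching any fragment cannot be saved anywhere on the wiki.
\bspamhost\.example\.com\b      # blocks one subdomain only
\bexample\.com\b                # blocks the domain and all its subdomains

# MediaWiki:Spam-whitelist -- overrides the blacklist for specific
# paths, so an individual vetted document can still be linked:
\bexample\.com/some-vetted-article\b
</pre>
This is why the whitelisting practice described above can keep individual documents usable even while a whole site remains blacklisted.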
Typical examples which show the persistence of the problem
Here are some typical examples of the efforts spammers make to escape blacklisting and related measures (I will look for examples related to the sites under scrutiny, but these already show how the incentive of being linked can drive editors):
- E.g. see User:Japanhero, who has created a large number of socks, using easily changed IPs, to spam a large number of subdomains on webs.com. This example shows that specific blacklisting and blocking are insufficient: numerous (full) page protections would be needed to keep the editor from adding the links, since the accounts are quickly autoconfirmed. Blacklisting the whole of webs.com would solve the problem, but the site contains too much good information.
- MediaWiki_talk:Spam-blacklist#brownplanet.com. Besides spamming this domain, the spammer also linked to a large number of redirect sites (and appears to have ready access to them). Again, a clear example of how narrowly targeted blacklisting is not sufficient.
Should we blacklist sites that pay authors for contributions based on traffic, such as ehow.com, examiner.com, triond.com and associatedcontent.com?
Arguments for
The arguments for this are that such sites represent a unique and significant conflict of interest, which will motivate authors to spam these sites to an extent that low-threshold, or even preemptive, blacklisting could be justified. "Low-threshold" here refers to blacklisting the entire site once abuse via a single article or subdomain is detected. This is because new subdomains or documents are easily created to circumvent a specific listing, and blocking accounts or protecting pages is insufficient, since the prospect of earning money will keep producing new spamming editors. XLinkBot listings are unsuitable because the financial motivation to spam will override the small speedbump of being reverted once or of establishing an account. (The difference in listing scope is sketched below.)
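As a concrete illustration of why narrow listings fail against this incentive, here is a hedged sketch in the blacklist's regex-fragment format (hypothetical domains again):
<pre>
# A narrow entry blocks only the subdomain that was actually abused:
\bauthor1\.payperview\.example\b
# ...so the same author can publish the same piece at
# author2.payperview.example and spam on, forcing a new listing each time.

# A "low-threshold" entry blocks the site and every subdomain at once:
\bpayperview\.example\b
</pre>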
Arguments against
The arguments against this are that these sites host a wide range of content from a wide range of sources, with varying reliability, and the decision whether or not to use such links and references should be an editorial one, not an administrative one. XLinkBot listings are appropriate as a reminder to inexperienced users that they should think twice before using such links, while still leaving the ultimate decision on suitability to the editor and article talk-page consensus.
(Problems with the wording:
- When several discussions at WP:RS / WP:RS/N have already concluded, with input from several editors, that a site is generally unreliable, then the decision not to use these links has already been made by editors; administrators merely enforce it. I objected earlier to calling this an administrative decision because, though it may appear that way, it absolutely is not.
)
Should the reliability of a source be a factor in blacklisting or whitelisting decisions?
Arguments for
The reliability of a source is not the primary motivation for blacklisting a site; the fact that it is being spammed is. That said, even a handful of spammed links to an unreliable source can justify blanket blacklisting. Reliability is considered in whitelisting decisions and in decisions about the scope of a blacklist listing. If an unreliable source is blacklisted, the community loses far less than if a reliable source is blacklisted.
(Problems with the wording:
- Nope, the reliability has often already been considered by editors, and the source was found to be unreliable
)
Arguments against
The nature of a wiki is that editors are encouraged and allowed to be bold. Decisions about the suitability of content are editorial, not administrative. Judgment about the reliability of a source should not happen on whitelist request pages; it should happen on article talk pages and through the normal process of bold, revert, discuss. Blacklisting is a last resort; it should not be used as a pre-emptive measure.
(Problems with the wording of this:
- I don't think judgment is being passed during de-blacklisting or whitelisting requests; that judgment has already been passed (see the numerous discussions on WP:RS about examiner.com, where editors have already decided it is not suitable). Moreover, on whitelisting requests a confirmation is generally sought from editors on whether the specific document is reliable. There is time and place enough to discuss whether a specific document is reliable, and when such a discussion is incorporated in the whitelisting request, whitelisting goes easily and quickly. Also, blacklisting is generally not a pre-emptive measure; it exists to stop abuse/spam, and that is what it has been used for: stopping the spam from editors who add ehow.com, associatedcontent.com or examiner.com links.
)