Revision as of 18:23, 2 September 2015
Ideas for discussion that may help address the issues of paid editing. Please feel free to add more ideas. My hope is to sit down with the legal department in November 2015 to discuss possibilities in person.
Notifications of this discussion have been posted at:
- Wikimedia-l
- Wikipedia:Village pump (idea lab)
- WP:COIN
1. Ban some types of paid editing
Ban the type of paid editing that occurs via sites like Elance/Upworks. They have agreed to automatically remove Wikipedia-related jobs from their sites if we do this.
- Discuss
- I think this is definitely a good idea, as the handful of disclosed paid editors don't typically use those sites. As an additional point to this, perhaps we could establish a list of community-endorsed paid editors to promote TOU compliance (added to a list by community consensus). In principle, I hate the idea of promoting paid editing at any level, and I'm not sure that this is feasible, but it is important to give TOU-compliant editors tools to compete against unethical non-TOU-compliant editors. Winner 42 Talk to me! 05:18, 1 September 2015 (UTC)
- This needs much better definition of what "the type of paid editing that occurs via sites like Elance/Upworks" means before anybody can seriously consider it. In theory at least there is no barrier to accepting a job at those sites and completing it while complying perfectly with the terms of use. Thryduulf (talk) 08:45, 1 September 2015 (UTC)
- Yes so this proposal is to ban the type of paid editing that is funded through sites like Elance. There may be one or two people that edit properly through those sites. And yes we are sacrificing those one or two to deal with the thousands that are not following the TOU and have no intention to do so. Doc James (talk · contribs · email) 18:50, 1 September 2015 (UTC)
- What is "the type of paid editing that is funded through sites like Elance" is the crux here. You can't ban something unless you can define it. Thryduulf (talk) 19:01, 1 September 2015 (UTC)
- No, that is not the "crux". The crux is whether we are in principle willing to ban this type of editing. If the majority oppose it no matter what the definition, there is no need to spend our time arguing about definitions. Only once we have dealt with the real crux do we need to discuss the secondary issue of definition, and for that I will get the legal team involved. So that is my question to you User:Thryduulf: do you oppose this type of paid editing? Doc James (talk · contribs · email) 00:48, 2 September 2015 (UTC)
- I'd imagine the rule would be some variation of "The use of online job soliciting sites (such as elance and upworks) to receive payment in return for editing Wikipedia is forbidden." Winner 42 Talk to me! 19:13, 1 September 2015 (UTC)
- I don't think that would actually have the desired effect, as people would just find other venues that are less visible. There is obviously a market for paid editors, what we need to do is make it clear who is doing it according to the rules and make it easy for potential customers of paid editors to find out who is doing it legitimately and who isn't rather than drive everyone underground. Such a rule would also do nothing about the Orangemoody way of working. Thryduulf (talk) 19:21, 1 September 2015 (UTC)
- This is about moving paid editing off of reputable sites. This is about making it more difficult. Yes it is not perfect for all types of paid editing but it is a piece of the solution. Doc James (talk · contribs · email) 00:00, 2 September 2015 (UTC)
- I strongly disagree with you then - we want paid editors to be recruited through reputable sites where we can ensure that they are transparent and following our rules. A reputable site will work with us to ensure that our rules are followed and that those who do not cannot advertise through them, a non-reputable site will not care. Thryduulf (talk) 10:19, 2 September 2015 (UTC)
- If we are going to ban all paid editing then that needs to include via such sites. If we are going to continue to allow it within certain rules and codes then we want those sites to promote the types of paid editing that we allow. Allowing paid editing provided it goes through an organisation such as the one behind OrangeMoody and not if it is a direct relationship between client and legit transparent paid editor, would be the worst of both worlds. ϢereSpielChequers 12:00, 2 September 2015 (UTC)
- I support prohibiting the use of such sites. Of course a proper definition would have to be added, but that should be easy. If people think that goes too far then we could require that the use of a paid recruitment site be disclosed as an affiliation of the paid editor. The proper place to do that would be at WP:PAYDISCLOSE.
I'll add a subsection here for prohibiting most paid editing of BLPs, as it is also one of "some types of paid editing". Smallbones(smalltalk) 14:19, 2 September 2015 (UTC)
1a. Prohibit most paid edits to BLPs
It appears that there are many paid edits to BLPs. This not only distorts our content but also sucks in naive individuals and helps support and encourage paid editing. Most importantly, this rule would be very simple to explain: "You can't pay to get your biography on Wikipedia." Simple explanations are very important since we have had problems getting the public to understand our rules. Proposed rule:
"Paid editing of biographies of living people is prohibited, except to remove content that is otherwise in violation of this policy. Content removal by paid editors must be noted on the article's talk page or at WP:BLPN. Paid editors may add information on BLP talk pages as long as their paid status is disclosed."
The proper place for this rule is at WP:BLP. Smallbones(smalltalk) 14:19, 2 September 2015 (UTC)
2. Increase notability requirements
This is especially important in the areas of corporations and BLPs.
- Discuss
- I think this is more a matter of enforcing existing notability requirements rather than creating new ones (through increased NPP and AfD participation or similar). I'm not convinced that notability can or should be used as a tool to prevent paid editing. Winner 42 Talk to me! 05:20, 1 September 2015 (UTC)
- Kindasortamaybe? Perhaps loosening the CSD for notability? The downside of that is more people are confused about how to create/have a Wikipedia article kept, which is often how people fall for PR schemes. I've thought about this for years and yet to see how this will change what is the overall status quo. We're popular, being on here makes you somebody, and nothing we can do (aside from failing) will change that. Keegan (talk) 06:10, 1 September 2015 (UTC)
- Loosening the notability CSD criteria is something I am very, very strongly opposed to - A7 is already misapplied far more than probably any other criterion. Clarifying the notability criteria and possibly raising the notability requirements for corporations would probably help. The biggest thing that needs to happen though is more eyes on AfDs. Thryduulf (talk) 08:48, 1 September 2015 (UTC)
- The problem in this case is that our existing notability requirements meant that deletionists could credibly threaten to get articles deleted that companies were willing to pay to try and keep up. How would increasing the proportion of articles that needed protection from deletionists reduce the risk of future deletionist scams like OrangeMoody? Clarifying the criteria, introducing some specific rules similar to the ones we have for sport or places would help, but I doubt we could get consensus for an "all companies that have ever directly employed more than y people at the same time" type rule. ϢereSpielChequers 13:06, 1 September 2015 (UTC)
- There is an expression that I actually first heard outside of this project, "notable only on Wikipedia". Our notability criteria are far too low for both biographical articles and for businesses in particular, although it could also be applied to many other aspects of our project. Just as importantly, as Winner 42 points out, even when we have notability standards, people do not routinely apply them. It is rare to find an AfD with more than 5 participants, meaning that it is very easy to sway them toward a default keep/no consensus to delete. Anyone who has even the slightest desire to progress to adminship is very wary of tagging articles for CSD or any other form of deletion because any perceived error in judgment can often come to haunt them years down the road. Instead they use "may not be notable" tags, and we now have tens of thousands of articles with those tags whose actual notability is never really assessed or considered. Our low standards of notability have effectively turned sections of the English Wikipedia project into a cross between the Yellow Pages and Who's Who, both of which are fine for what they do but are simply directories for which the subjects pay for an entry. In many cases, we find that Company X located in Country Y (or entrepreneur X whose businesses are mainly in country Y) does not have an article on the "native language" Wikipedia because it would not meet that project's notability standards, but we'll have one on our project. This does not make sense at all. We need to (a) significantly increase notability standards, and (b) change our community consideration about 'enforcing' notability so that page curators and others do not have to "live down" recommendations for deletion on notability grounds. We should also go through all those articles tagged for notability concerns and either decide they're notable or that they're not; if they're notable, we should fix them and if they're not notable then we should delete them. 
We should aim to have no more than 0.005% of our articles tagged that way, and no article should make it through PROD or AfD with that tag still affixed. Risker (talk) 13:53, 1 September 2015 (UTC)
- I don't know where you find that editors are criticised for ancient errors in deletion tagging, but I can assure you that RFA at least doesn't work that way. We do get RFA candidates with a history of poor quality deletion tagging, but for that alone to derail an RFA the candidate needs to have made multiple recent mistakes. Isolated or ancient mistakes in deletion tagging are not on their own enough to stop an RFA. As for having no more than 0.005% of our articles tagged for notability, that simply isn't realistic. You would have to ration the deletion taggers to 34 notability tagged AFDs per day between them.... ϢereSpielChequers 18:21, 1 September 2015 (UTC)
- I'd support this, but not for any reason to do with stopping paid editing - I just share Risker's concern that our bar is set too low when it comes to businesses, biographies and news, resulting in inherent problems with how we portray living people. In regard to paid editing it would make a dent (a lot of jobs are along the lines of "if my competitor has an article, why can't I"), but I'm not sure how much. - Bilby (talk) 07:33, 2 September 2015 (UTC)
- Totally agree. The "two substantial references" for wp:corp is much too vague since the business press exists to fill its pages with content regardless of the actual importance of the information. I find the business press not unlike the music or film press -- these are essentially "fan zines" that will eventually cover every company/band/movie with an article or two. wp:corp needs to be tightened. As it is, there is an "arms race" for companies to get their information into WP because their competitors are there. This is especially the case for upstarts - I'm sure that Apple and Google are basically unconcerned about being included in WP because they ARE notable. I also assume that no one writes a WP article for a small company or a doctor's practice UNLESS they have a COI -- it's just not inherently interesting. I'd love to have a reason to reject or delete these articles. LaMona (talk) 16:49, 2 September 2015 (UTC)
- Absolutely agree we need tighter notability requirements on business. It used to be that being in the Fortune 500 or traded on the NYSE was not sufficient notability. Now, according to The Independent "several wedding photography companies" were victims of the recent scam. Wedding photographers? Well perhaps if they are known for royal weddings and the like, but I don't see any need for "several wedding photography companies" in the entire encyclopedia. Three independent reliable sources that focus on the subject of the article should be a bare minimum. The single-store cafe on the corner might well come up with 3 reliable sources, especially if they advertise heavily in the local newspapers. Yes, I have seen an article on a 15 seat single-store cafe on Wikipedia. What we really need however is a bright-line rule. Perhaps sales of over $10,000,000 per year as reported in a reliable source or in a linked audited financial statement. That limit would likely let in a few hundred US restaurants - do we really need more than that? Smallbones(smalltalk) 17:09, 2 September 2015 (UTC)
- Comment - agree in general, but it should be sufficient to enforce and clarify the general notability criteria for business topics. One recurring problem in particular: product announcements, feel-good interviews from industry-internal magazines, and pre-reviews published before the product/artist/company is even really established are often given too much weight. Such articles, often with little or no neutrally researched content of the author's own, should not count towards our notability evaluation. They are often accepted with the argument "But it's a reliable source" - that argument is flawed: reliable sources are perfectly capable of writing promotional or other vanity fluff. GermanJoe (talk) 17:45, 2 September 2015 (UTC)
- Agree Notability requirements need to be tightened and specific notability criteria need to be re-examined. Many of the specific criteria are fine until that one criterion like two book reviews or one professional game etc., and WP:NCORP, while detailed, is pretty useless because of the volume of PR press. These loopholes pretty much gut GNG. The community could start by defining the term significant, as its interpretation varies wildly amongst editors and it is a great point to wikilawyer when trying to sway an AfD.
Further, no matter the topic, all articles should have reliable sources, and an expanded BLPPROD should be implemented in all topic areas, regardless of when the article was created and with the initial sourcing requirement the same as the removal requirement, i.e. RS, not 'any source'.
This will help deal with the paid editors trying to push low-notability material into Wikipedia and will allow it to be removed more easily. Paid editors on high-notability subjects are a different matter and likely a different population in general. The best solution is to ban all paid editing, roll back all edits made by paid editors and delete all articles made by paid editors. I find it objectionable that so much volunteer time is spent (a) tracking down bad edits by paid editors, or (b) improving articles of marginal notability, which we cannot delete, created by paid editors, thereby allowing the subject that hired them to get free work from us. JbhTalk 18:20, 2 September 2015 (UTC)
- Agree with Risker. Andreas JN466 18:23, 2 September 2015 (UTC)
3. Increase requirements for new articles
Increase how long someone must be editing before they can create a new article without going through AfC. Maybe 6 months and 500 edits?
- Discuss
- Most articles in the Orangemoody case were actually taken from declined AfCs or Drafts. After that either a redirect was created that was then later populated by article text, or the article was recreated in sandbox and moved to article space. Technical measures around curating such pages would probably help better than raising user requirements. Keegan (talk) 03:21, 1 September 2015 (UTC)
- I would accept implementing the consensus at Wikipedia:Autoconfirmed article creation trial to run a trial to require autoconfirmed status to create or move articles into article space. (Please Doc, use your super WMF powers to make it happen.) Ideally I think a 1-month, 100-edit requirement would work (and perhaps getting one or two articles through AfC could be an alternative way to gain article creation rights). This would prevent most speedy deletions in article space from occurring, as that is sufficient experience to understand the basics of article writing, thereby increasing editor retention as new editors don't have their articles speedy deleted. Winner 42 Talk to me! 05:11, 1 September 2015 (UTC)
- @Winner 42: I've dedicated the greatest portion of my wiki-career to helping new editors create articles, from #wikipedia-en-help, where I was a founder, for several years, to my general CSD deletions. There is no amount of ACTRIAL or other confirmation requirements that will deter this. Money is a far greater motive than meeting arbitrary software targets. Keegan (talk) 05:39, 1 September 2015 (UTC)
- As this sockfarm was sufficiently organised to get their new accounts autoconfirmed, for any extra measures we now take we can forget ACTRIAL or any other solution based on the idea that only good-faith editors stay long enough to get autoconfirmed. ϢereSpielChequers 11:18, 1 September 2015 (UTC)
- The suggestion is to have different levels of "autoconfirmed" with a higher one for new articles. Doc James (talk · contribs · email) 18:52, 1 September 2015 (UTC)
- Currently you don't need to be autoconfirmed to create new articles, and ACTRIAL, the proposal to limit new article creation to autoconfirmed users, was vetoed by the WMF. Since the OrangeMoody sockfarm was sufficiently organised to get accounts autoconfirmed, it is reasonable to assume that they would do what was necessary to get the new, higher level of confirmation. Which raises the question about autoconfirmation-based proposals: what is the point of making things still more difficult for editors who don't use the services of organisations like OrangeMoody? ϢereSpielChequers 09:47, 2 September 2015 (UTC)
4. Create a new group of functionaries
Arbcom is clear that they do not see it as within their remit to enforce the TOU. We need a group of people who can receive concerns around paid editing that cannot be posted on wiki due to issues around privacy.
- Discuss
- I disagree. To me this is about overall juggling of titular responsibility. In practice the communities have no trouble enforcing their remits; it's arguments over remits that pass things to the WMF - or actual out-of-hands things - when a stalemate is reached. We'd rather have an arbitrary judgement than a stalemate. I'd rather have a stalemate than an arbitrary judgement most of the time. Keegan (talk) 06:07, 1 September 2015 (UTC)
- As has been repeatedly explained, the only group of people who can reliably make judgements about many TOU violators are the WMF as they are the only people with all of the time, tools, expertise and access to information to distinguish joe jobs from real humans. As an arbitrator who has been involved in trying to determine a connection between an off-site identity and a Wikipedia editor, it is not possible to do this with sufficient accuracy even with a group of people with arbitrator-level access. Thryduulf (talk) 08:53, 1 September 2015 (UTC)
- This needs to be a meta-level proposal so the group has the remit to tackle spam that runs across several languages. We also have the problem that Arbcom has historically been so busy that it burns members out, so spinning this out as a separate role makes sense. As for whether it should be run by staff or volunteers, first let's establish if the community wants this to happen; second, we need to establish if we have volunteers to make it happen. If the community wants it to happen but can't get volunteers then we need to see if the community wants it enough to add to the list of things that volunteers want to have happen, aren't finding sufficient volunteers for, and therefore are prepared to pay people to do. ϢereSpielChequers 13:16, 1 September 2015 (UTC)
- I hear arbcom members saying they can't do it, so obviously a new group of functionaries is needed. I don't see that this is much different from checkuser functions: private and incomplete data have to be analyzed and a decision has to be made if Wikipedia is going to keep on working. Somebody should take this to the WMF board if necessary. Smallbones(smalltalk) 17:28, 2 September 2015 (UTC)
5. Allow some links on En WP
Allow linking to Elance/Upwork accounts on Wikipedia. Currently it is not clear whether this is allowed.
- Discuss
- People currently link to Elance accounts. Most do not consider it outing. Thus this would not be a change to outing just a confirmation of current practice. For example
- We should not allow the misinterpretation of our guidelines to protect bad actors. Doc James (talk · contribs · email) 06:46, 1 September 2015 (UTC)
- Loosening WP:OUTING is the very last thing we should be doing. It's OK to link in cases of self-disclosure on Wikipedia, and in cases where it is conclusively proven, e.g. by CU, that user:X here is user:X or user:Y there. It is never acceptable to out someone on the basis of suspicion or fishing. Thryduulf (talk) 09:02, 1 September 2015 (UTC)
- I agree that loosening OUTING is not the way we want to go here. However, the community is currently between a rock and a hard place in cases of abusive paid editing: we can't discuss paid editing evidence onwiki, because outing, and we can't submit it to arbcom, because arbcom says paid editing is neither their job nor their problem (and that even if it were, they can't investigate things to their own satisfaction and thus couldn't do anything about it). The community is left with no real option for handling cases involving abusive paid editing. If we don't want exasperated people trying to handle it onwiki for lack of other options, then either Arbcom is going to have to start being willing to deal with these cases, or we are going to have to, as suggested above, consider creating a new group that is empowered to do it instead. A fluffernutter is a sandwich! (talk) 15:07, 1 September 2015 (UTC)
Posting links to other websites is NOT outing per se. Quoting WP:OUTING:
"Posting links to other accounts on other websites is allowable on a case-by-case basis."
....
"If ... personally identifying material is important to the COI discussion, then it should be emailed privately to an administrator or arbitrator – but not repeated on Wikipedia: it will be sufficient to say that the editor in question has a COI and the information has been emailed to the appropriate administrative authority. Issues involving private personal information (of anyone) could also be referred by email to a member of the functionaries team."
So it is not a question of whether we can report links to elance/upworks. The only question is how we can do this in the proper manner. I don't think it is completely clear who to report it to in order to have it acted upon. ArbCom has been very skittish on this - so where exactly should we report it? We can address that question. Smallbones(smalltalk) 14:55, 2 September 2015 (UTC)
- The "case by case" wording was inserted without real consensus. There was a massive discussion a few months ago about it, and there was consensus against allowing it in any of the cases suggested. Speaking as an arbitrator, we do not have the time, skills or resources to carry out investigations of this nature. Thryduulf (talk) 15:42, 2 September 2015 (UTC)
- There has to be somewhere to report it - otherwise you appear to be saying "we don't have a policy on paid editing" which is just not the case. Smallbones(smalltalk) 17:33, 2 September 2015 (UTC)
- Then figure out which body of people does have the required time, skills and resources. Thryduulf (talk) 18:04, 2 September 2015 (UTC)
- I've sent stuff in to functionaries and it seems like a black hole, sorry guys. What would really be better from my point of view is some kind of locked repository with a receipt that could be referred to. In other words, hypothetical example: I email my Elance link and evidence of real-world identity to a certain email address and get back a unique token in reply. Now I can refer to this token in a sockpuppet investigation or COIN case. Users with appropriate credentials can open the file by the token I supplied. Ideally there'd be transparency via audit trail as to who has shared what with whom. I don't see why the audit trail shouldn't be public, though maybe during an investigation it should not be. I think I'll develop the idea of a community investigation toolset a bit farther in a new subsection of this page. — Brianhe (talk) 18:23, 2 September 2015 (UTC)
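The lockbox-with-receipt workflow described above (submit evidence privately, receive an opaque token as a receipt, let credentialed users open the file by token, and keep an audit trail of every access) could be sketched roughly as follows. This is a purely illustrative sketch, not any existing Wikimedia tool: the class name, token format, and in-memory storage are all hypothetical.

```python
import secrets
from datetime import datetime, timezone

class EvidenceLockbox:
    """Illustrative sketch of a 'lockbox with receipt': evidence is
    stored privately, the submitter gets back an opaque token, and
    every later access is recorded in an audit trail that never
    exposes the evidence itself."""

    def __init__(self):
        self._store = {}        # token -> sealed evidence (private)
        self._audit_trail = []  # (event, token, timestamp) records

    def submit(self, evidence: str) -> str:
        # Issue a unique, unguessable token as the submitter's receipt.
        token = secrets.token_urlsafe(16)
        self._store[token] = evidence
        self._audit_trail.append(("submitted", token, self._now()))
        return token

    def open(self, token: str, viewer: str) -> str:
        # Only credentialed users would call this; each access is logged.
        self._audit_trail.append((f"opened by {viewer}", token, self._now()))
        return self._store[token]

    def audit(self):
        # The audit trail records tokens and access events, not contents,
        # so it could in principle be made public.
        return list(self._audit_trail)

    @staticmethod
    def _now() -> str:
        return datetime.now(timezone.utc).isoformat()

# Usage: a reporter submits evidence and cites the token at SPI or COIN;
# a functionary later opens the file by that token.
box = EvidenceLockbox()
receipt = box.submit("freelance-site profile link + identity evidence")
assert box.open(receipt, "functionary-A") == "freelance-site profile link + identity evidence"
assert len(box.audit()) == 2  # one submission event, one access event
```

The design choice here is that the token, not the evidence, is the public handle: discussions on-wiki can reference the receipt without reproducing the identifying material, which is the property the proposal is after.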
6. Allow corporate accounts
These accounts are verified and have a corporate name attached to them. They are not allowed to directly edit articles. Allowing corporations a way to engage may prevent many of them from going to paid editors.
- Discuss
- I think this would have had more merit for en.wiki prior to the updated ToU. Before that, following the German model (flagging role accounts that represent people/companies is what the German Wikipedia does) could have been much more practical. Now it's far easier to register a PR account in your own name and list your employer, and just edit the talk page. The problem is with paid editors that do not choose to follow the ToU. This is the internet; there's no hard solution there. Keegan (talk) 06:03, 1 September 2015 (UTC)
- I would allow accounts operated by individuals to include corporate names in their username, e.g. user:Brian (Notable Corporation) or user:Alison (Llanelli PR Agents Ltd). I don't like the idea of role accounts so much, as one employee should not be punished for the actions of a colleague. Thryduulf (talk) 09:09, 1 September 2015 (UTC)
- These accounts are not speaking on behalf of individuals. They are speaking on behalf of companies. As companies are people too, they deserve their own accounts, no? Doc James (talk · contribs · email) 18:57, 1 September 2015 (UTC)
- Companies are not people though. Brian is an employee of Notable Corporation who can speak on behalf of his employer, and part of his role is to manage the online presence of Notable Corporation. Brian should have an account that is officially linked to Notable Corporation. Catherine works in the same team for the same company; she should also have an account that is officially linked to Notable Corporation. This way Catherine doesn't get punished if Brian is unable to follow the rules. Thryduulf (talk) 19:10, 1 September 2015 (UTC)
- "Corporate account" names are already allowed per WP:ISU--see bullet 4. --Izno (talk) 11:31, 2 September 2015 (UTC)
- Allow "User:Brian (Notable Corporation)" but require an OTRS declaration to prevent joe jobs against Notable Corp., prohibit them from editing article pages on their employer, competitors, and suppliers, and require a paid-editing disclosure. Smallbones(smalltalk) 17:42, 2 September 2015 (UTC)
- See ongoing RfC here: Wikipedia_talk:Username_policy#RfC_-_should_we_allow_company_account_names_with_verification Andreas JN466 18:15, 2 September 2015 (UTC)
7. Delete articles by paid editors
Once an editor is confirmed to be a paid sockpuppet or undisclosed paid editor, we should simply delete the articles they have created. We do not have the editor numbers to fix them all, and typically they are mostly of borderline notability.
- Possibly, but only if they have not been adopted by genuine community editors. All their articles should certainly be examined to see if they meet any speedy criteria, and deleted if they do. Any criteria like this must apply only if the breach of the ToU was intentional. If someone was genuinely unaware of the ToU requirements, or tried and failed to meet them (e.g. because of misunderstanding) and complies when educated then their prior work should not be deleted unless it meets normal criteria - these are the people we want to avoid driving to undisclosed paid editing. Thryduulf (talk) 09:16, 1 September 2015 (UTC)
- Yes exactly, "only if they have not been adopted by genuine community editors". Most of them haven't, as they are non-notable topics. Doc James (talk · contribs · email) 18:56, 1 September 2015 (UTC)
- Actually, there are quite a few Orangemoody article subjects who are or might be notable enough for an article. Not all paid editing is about non-notable subjects. Thryduulf (talk) 19:11, 1 September 2015 (UTC)
This could be added to Criteria for Speedy Deletion, but the articles could be recreated without prejudice by other editors, or kept if other editors have made significant contributions. There is really no way to tell if somebody unintentionally violated ToU, so I'd just leave that out. Smallbones(smalltalk) 15:20, 2 September 2015 (UTC)
- You find out if someone unintentionally violated the TOU by asking them. If they respond that yes they are paid but didn't realise they were breaking the TOU (either through unawareness or misunderstanding) and correct their mistake(s) going forward then the breach was not intentional and their work should not be deleted. I think any speedy criterion for this would have to be related to G5, which normally applies only to pages created after the user was blocked, but could (if there is consensus at WT:CSD for it) be applied retroactively if the user was banned for breaching the TOU regarding paid editing. Thryduulf (talk) 15:40, 2 September 2015 (UTC)
==8. CorpProd for businesses==
BLPprod has raised the bar for BLPs; we could do something similar for businesses. A sticky prod requiring that "from this date all new articles on commercial businesses must have at least one reference to an independent reliable source" would give an uncontentious "source it or it gets deleted" deletion route for those articles that are only sourced to the business's own site. ϢereSpielChequers 09:10, 1 September 2015 (UTC)
- This is the best idea I've seen so far. It would need to disallow news stories that are basically a rewording of the press release and mentions in business directories, but that's detail that should be easily worked out. Thryduulf (talk) 09:18, 1 September 2015 (UTC)
- I'd boost this to at least 3 references that consist of editorial content written by a reliable source, in which the article subject is the primary focus of the editorial content, and which is not mainly focused on "start-up" information. Lots of media that are generally considered "reliable sources" also allow articles that are not written by editorial staff on their sites (i.e., they allow "paid editing" by basically publishing press releases or similar 'reporting'), including just about every single business-related source. The vast majority of organizations that get start-up funding never actually get anywhere; perhaps include a requirement that the entity must have been formally registered as a business for a minimum of 2 years, and have significant reliable-source coverage of at least one successful profit-generating product/service before they are considered notable. Risker (talk) 14:06, 1 September 2015 (UTC)
- Strongly support Risker, i.e. 3 reliable sources where the article subject is the primary focus. It may be difficult to state her start-up criteria clearly, but I support it in principle. There are the occasional start-ups (maybe 1 each year) that come out of nowhere with a groundbreaking product and are reported on internationally in dozens of sources, but probably those could be dealt with as the exception to the rule, e.g. via WP:IAR. Smallbones(smalltalk) 15:43, 2 September 2015 (UTC)
- For corporate, there needs to be further qualification on what "substantial article" means and what "reliable source" means. Much of what comes out in the business press is just fandom - cooing over new products or lauding some company's success. I don't see those as being encyclopedic, just business as usual. What would we expect to see that would make a company worthy of an article? Personally, I'd go for some concept of social/historical impact, but I doubt if we can define that sufficiently. If we treated agriculture the way we treat business, we'd have an article for every farm and every year's new crop. LaMona (talk) 17:02, 2 September 2015 (UTC)
- If we treated agriculture the way we treat business, we'd have an article for every farm and every year's new crop. that bears repeating. Thank you LaMona. Smallbones(smalltalk) 17:47, 2 September 2015 (UTC)
==9. Lower the bar for sockpuppetry investigations of spammers==
We have a difficult balance between maintaining the privacy of our editors and investigating potentially dodgy edits. If "writing a spammy article on a borderline notable commercial entity" became grounds for a checkuser to check for sockpuppetry then we would be much more likely to catch rings like this in future. The vast majority of our editors would be unaffected by such a targeted change. ϢereSpielChequers 09:55, 1 September 2015 (UTC)
- I would support. Doc James (talk · contribs · email) 00:10, 2 September 2015 (UTC)
- Support Smallbones(smalltalk) 15:46, 2 September 2015 (UTC)
- I support this notionally, but the devil's in the details. Getting consensus on what constitutes "spamming" might be difficult. And the problem with socking isn't usually strictly about spamlinks, though that can be part of it. Would the lower bar attach just to the spammer account(s), or does it spread to the whole investigation to which they are a party? Done one way, you encourage throwaway SPAs for spamming; done the other, you have a kind of all-purpose tool for widening SPIs just by naming the right party. So I'm having trouble seeing how this would be reduced to practice. Brianhe (talk) 16:21, 2 September 2015 (UTC)
==10. Keep some IP data for longer==
Currently we keep data about the IPs used by registered accounts for three months; this was a major limitation in this investigation. A general increase would be ill advised, as it would risk exposing more of our editors to harassment via lawyers acting for spammers, but we could do a targeted extension, in particular for edits by bad-faith accounts such as those blocked for sockpuppetry. That should make it easier to find returning bad-faith editors. ϢereSpielChequers 10:03, 1 September 2015 (UTC)
- I don't think this would be technically possible - either the data is available or it isn't. There is already a way for checkusers to record and store details of known sockmasters where this is desirable. However, checkuser data has a high entropy - the older it is, the less useful it is, generally speaking (IP addresses get reassigned, people move, useragents change as people upgrade and switch to different versions of browsers, etc.) - so the benefits of having the data available for longer are not linear. Thryduulf (talk) 10:45, 1 September 2015 (UTC)
- It isn't technically possible to restore deleted data. But it is possible to keep existing data for longer, though this would require changes both to the software and in the legalese of the privacy policy. ϢereSpielChequers 11:13, 1 September 2015 (UTC)
- I mean that it is not possible to do a "targeted extension", either there is a "general increase" or no increase. Thryduulf (talk) 12:39, 1 September 2015 (UTC)
- I can appreciate that a general increase would be easier to implement, but the downside would be excessive in that all accounts would lose privacy and be more vulnerable to spurious court cases. A targeted extension would be more difficult to implement and would require careful thought as to who it targeted, but I don't see anything impossible in that. ϢereSpielChequers 12:45, 1 September 2015 (UTC)
- How would it be targeted though? You don't know which accounts are sockpuppets/sockmasters until you check them, so anything relevant would need to be discovered during the current timeframe or be lost, and there is already a way to record data before it expires where it is known this is needed. I don't see what benefits would be gained here. Thryduulf (talk) 13:09, 1 September 2015 (UTC)
- Glad to hear we are retaining some data where there is due cause to be suspicious; I thought I remembered suggesting this during the review of the privacy policy. Perhaps the FAQ there could be changed to reflect what really happens. My suggestion still stands, though modified to "review the current exceptions with a view to broadening them" ϢereSpielChequers 13:41, 1 September 2015 (UTC)
- One might think checkusers would be the ones most likely to agree with this; however, I for one think that 3 months is sufficient. Cases such as the Orangemoody one are rare and exceptional, and I think the WMF's limitation of 3 months (it's a global setting) is reasonable. It's actually longer than many other organizations as it is. As to the 3-month period being a limitation, that cuts both ways. It also meant we had a hard limit on how much checking we'd have to do with the OM case. If we'd had more CU data to review, we may have found more accounts (although their editing pattern is clear enough that most admins should be able to do duck blocks) but the investigation consumed a vast amount of CU and other functionary time. Investigations like this need to be rare, or we'll need a LOT more CUs. Risker (talk) 14:26, 1 September 2015 (UTC)
==11. Actively look for mutual patrolling==
One feature of the OrangeMoody sockfarm was an interlinked group of accounts that marked each other's articles as patrolled. A computer program that watched for similar patterns in future and notified functionaries when it detected them would give us a chance of finding similar farms faster. ϢereSpielChequers 10:20, 1 September 2015 (UTC)
- This sounds like it would be a good idea. It would not be 100% reliable of course and there would be false positives and false negatives, but as a flag for human investigation it could work. I will have to leave it to others to say whether this is technically possible or not though. Thryduulf (talk) 10:47, 1 September 2015 (UTC)
- We have to be cautious not to succumb to the "reds under the bed" syndrome. We've found a bunch of bad actors, who have leveraged our standard site procedures to their advantage and the disadvantage of article subjects/draft creators and the encyclopedia. Amongst the accounts were those that seemed to "specialize" in page curation, and were responsible for marking reviewed a pile of the articles, and most of those "article reviewers" didn't create articles. However, many of those articles were also reviewed by editors who do a lot of page curation who aren't socks here; they just do a lot of page curation, presumably because it's an activity they enjoy. It's certainly one that we need to keep doing. The interlinking of the page-curation socks and the article-creation socks comes from checkuser evidence to start with, which was then correlated with editing behavioural review; it would not necessarily be obvious without both types of data. Risker (talk) 15:08, 1 September 2015 (UTC)
- But if a program was run and identified a group of editors who were patrolling each other's articles, or all patrolling the articles created by a second group of editors, wouldn't that be worth investigating by checkusers? ϢereSpielChequers 18:05, 1 September 2015 (UTC)
- How would you identify the "article creation" group? Risker (talk) 18:13, 1 September 2015 (UTC)
- By computer matching. If there is a ring of say 20 patrollers, half of whose patrols are random and half concentrate on the same thirty article creators, a large proportion of whose articles are patrolled by those twenty patrollers, then it should be easy for a computer to find that. ϢereSpielChequers 08:17, 2 September 2015 (UTC)
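The co-occurrence matching WereSpielChequers describes could be prototyped roughly as below. This is only a sketch: the function name, the 20-patrol minimum, the top-5 creator set and the 50% share threshold are all invented for illustration (the thread mentions rings of roughly twenty patrollers and thirty creators), and a real tool would need a second pass to cluster flagged patrollers who share the same creator sets.

```python
from collections import Counter, defaultdict

def find_focused_patrollers(patrol_log, min_patrols=20, top_k=5,
                            share_threshold=0.5):
    """Flag patrollers who send an outsized share of their patrols
    to articles by a small, fixed set of creators.

    patrol_log: iterable of (patroller, creator) pairs, one pair per
    patrolled article.
    """
    # Tally, per patroller, how many patrols went to each creator.
    per_patroller = defaultdict(Counter)
    for patroller, creator in patrol_log:
        per_patroller[patroller][creator] += 1

    flagged = {}
    for patroller, creators in per_patroller.items():
        total = sum(creators.values())
        if total < min_patrols:
            continue  # too little activity to judge either way
        top = creators.most_common(top_k)
        # Half of a "random" patroller's patrols should not land on
        # just a handful of creators; if they do, queue for humans.
        if sum(n for _, n in top) / total >= share_threshold:
            flagged[patroller] = [c for c, _ in top]
    return flagged
```

The output is only a candidate list for checkuser and behavioural review, per Risker's caution above that heavy page curators who are not socks produce similar-looking footprints.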
===11a. Limit account creations per IP===
- Could we automatically check the IPs used to create new accounts and flag those that create many new accounts in a short period? The only legit occasion this would happen would be in the educational program. I wouldn't suggest blocking the creation, but if we could at least see the groups of accounts we could check them to see what they're up to. SmartSE (talk) 12:49, 2 September 2015 (UTC)
- See Wikipedia:Sockpuppet_investigations/Tzufun for a contemporary example of what this could detect. It's ridiculous that it is so easy to evade detection by creating accounts that make 1 or 2 edits. SmartSE (talk) 13:00, 2 September 2015 (UTC)
- (edit conflict) This already happens - only 6 accounts can be created per IP per day (this can be overridden by administrators and account creators) - see Wikipedia:Account creator. I've been involved with several editathons where the limit has been hit, so the override is necessary for more than just the education programme. Thryduulf (talk) 13:02, 2 September 2015 (UTC)
- Thanks for the link. As with anti-vandal measures, I guess we have to concede that there will be some false positives. As with our discussion at 11b it would only be necessary to log groups of accounts from IPs creating many accounts e.g. >10 in a month. It would be possible to distinguish good faith editors from socks very easily. SmartSE (talk) 13:16, 2 September 2015 (UTC)
- An additional limit of X account creations per Y days could be very useful. It would need more discussion, e.g. does it need to be customisable (e.g. a university in September may see many legitimate creations)? Would spanning a range (to avoid IP hopping) cause more false positives than benefits? And it will almost certainly need a software update, but it's a good idea. Thryduulf (talk) 14:37, 2 September 2015 (UTC)
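The "X creations per Y days" flag discussed above could look something like the sketch below, which logs (rather than blocks) IPs whose creation count exceeds a threshold inside a sliding window. The >10-in-30-days figure is the illustrative number from the thread; the function and field names are invented.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_busy_ips(creation_log, window_days=30, threshold=10):
    """Flag IPs that created more than `threshold` accounts within
    any `window_days`-long window.

    creation_log: iterable of (ip, username, created_at) tuples,
    with created_at a datetime.
    """
    by_ip = defaultdict(list)
    for ip, user, when in creation_log:
        by_ip[ip].append((when, user))

    flagged = {}
    window = timedelta(days=window_days)
    for ip, events in by_ip.items():
        events.sort()
        start = 0
        # Slide a window over the sorted creation timestamps.
        for end in range(len(events)):
            while events[end][0] - events[start][0] > window:
                start += 1
            if end - start + 1 > threshold:
                # Record the offending batch for human review.
                flagged[ip] = [u for _, u in events[start:end + 1]]
                break
    return flagged
```

As noted above, editathons and university classes in September would trip this legitimately, so the output would need whitelisting for event IPs rather than any automatic action.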
===11b. Score articles on "sockiness"===
- This is a good start, but should be part of a system that scores articles on "sockiness" or "COIfulness" based on a number of indicators. If the scores cross a threshold, the system would bring them to attention for human evaluation. You could even do it on a tool server hooked up to a Twitterbot a la CongressEdits and let the volunteer community self-organize around it. After spending some time at COIN, I have some ideas of what the scores should be based on, but this really deserves some serious thinking and testing of the scoring based on identified cases of undisclosed COI/undisclosed paid editing and socking. It should really be done as a machine learning system with partitioned learning/testing datasets, in essence datamining our own COIN and SPI datasets. As to elements of the scoring system, things I've noticed include:
- Creation of fully formed articles on first edit
- Creation of fully formed articles by new user
- Creation of long articles by new user
- Articles with certain types of content, e.g. business infobox, photo caption, logo
- Approval of an AfC submission by an approver who has interacted with the submitter in certain ways. Categories of interaction to look at: interaction on any talkpage, interaction on the submitter's or approver's talkpage, interaction on XfD pages, interaction on noticeboards.
- Tweak interaction thresholds based on time between co-involvement on whatever page is analyzed.
- Tweak "new user" thresholds
- The above is just off the top of my head, if this gets traction I'll think of some more. Maybe a knowledge engineer could help us conduct good interviews of COI and COIN patrollers, and experienced admins. The neat thing about machine learning is you don't have to be perfect: you can throw a bunch of stuff into the training, and see what works. Poor correlators/classifiers will be trained out. For consideration — Brianhe (talk) 04:12, 2 September 2015 (UTC)
- Other possible factors to weigh in a scoring system include:
- External links quantity/quality metrics. Indicators include:
- Multiple links to same domain
- Social media links
- Inline ELs in general
- Inline ELs without a path
- Distance to deleted article titles
- Density of registered vs unregistered editors (sorry legit IP ed's)
- Editor trust metrics (longevity, blocks, noticeboards, corpname patterns, ...)
- There could also be a voluntary (opt-in) system for various actions that could modify the trust scoring for a particular editor:
- having a WMF-verified identity on file
- participation in a "ring of trust" system along the lines of WP:Personal acquaintances
- community-bestowed reputation points
- Another idea, if this scoring system worked sufficiently well, high scores could trigger automatic reversion of the article to the Pending changes protection model. — Brianhe (talk) 06:43, 2 September 2015 (UTC)
- Thanks Brianhe, but I see that as a separate project targeted at auto-assessing articles for spam, much as we have bots and edit filters to deal with vandalism. This is a complex task for several reasons, but if we can come up with an algorithm that has few false positives it could be used for bot-based spamfighting, much as bots have brought vandalism under control. No-one nowadays would credibly predict that Wikipedia will eventually succumb to the rising tide of vandalism, any more than they'd predict London would eventually be overwhelmed in horse shit; but before the edit filters and vandalfighting bots, that was as valid a prediction as the 1890 prediction that London's traffic problems would bury the city in horse shit. Spam has been rising for years, and I suspect it would eventually swamp the pedia if we don't find ways to automate dealing with it. ϢereSpielChequers 08:17, 2 September 2015 (UTC)
- I'm extremely supportive of this idea. As far as I understand, Cluebot works by giving edits a score based on how likely they are to be vandalism, and we could catch a great deal of promotional editing in a similar manner. We have plenty of SPIs listing confirmed COI accounts and we could presumably use these to train the bot to detect new edits. Special:AbuseFilter/354, Special:AbuseFilter/149 and Special:AbuseFilter/148 are the closest we have at the moment, but they are comparatively crude and have barely changed in the last 5 years. I've not seen them catch any of the paid edits I've cleaned up over the last six months. How to get the ball rolling? SmartSE (talk) 12:46, 2 September 2015 (UTC)
- You need to be careful here - what will be detected is promotional editing, not paid editing. Promotional editing is obviously harmful to the encyclopaedia, and improved detection of it is a good thing, but it will also catch unpaid promotional editing and will not catch paid editing that is not promotional. It is also incorrect to assume that all editors making promotional edits without disclosing that they are being paid are breaking the terms of use - they may not be being paid at all and may just be fans of whatever it is they are promoting. Thryduulf (talk) 12:56, 2 September 2015 (UTC)
- That's the point though - content not contributor. If the content isn't promotional then there isn't so much of a problem (obviously this wouldn't deal with omissions etc. but it would catch a lot). Also, I should have said that unlike Cluebot I would prefer this to only tag edits for scrutiny rather than reverting them. SmartSE (talk) 13:13, 2 September 2015 (UTC)
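A very rough sketch of the scoring idea follows. The feature names are drawn from the indicator lists above, but every weight and the review threshold are hand-picked placeholders standing in for the coefficients a trained classifier would learn from labelled COIN/SPI cases; nothing here is a real implementation.

```python
# Illustrative weights only; a trained model would replace these.
WEIGHTS = {
    "fully_formed_first_edit": 2.0,
    "long_article_by_new_user": 1.5,
    "business_infobox": 1.0,
    "repeated_external_domain": 1.5,
    "social_media_links": 0.5,
    "creator_account_age_days": -0.01,  # account age lowers the score
}
REVIEW_THRESHOLD = 3.0

def sockiness_score(features):
    """features: dict of feature name -> value (bools count as 0/1).
    Unknown feature names are ignored rather than rejected."""
    return sum(WEIGHTS.get(name, 0.0) * float(value)
               for name, value in features.items())

def needs_human_review(features):
    # High scorers get queued for human evaluation (or, per the
    # suggestion above, flipped to pending-changes protection);
    # nothing is auto-reverted on score alone.
    return sockiness_score(features) >= REVIEW_THRESHOLD
```

This matches the "tag for scrutiny rather than revert" point made in the thread: the score only routes articles to humans, and the false positives Thryduulf warns about (unpaid fans writing promotionally) are filtered at that human step.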
==12. Write a Wikipedia article about paid editing==
There's been plenty of reporting in the press, and this is definitely a notable topic. [2] A Wikipedia article that actually names abusive companies and explains their shady tactics could be a deterrent to the people who pay those companies. I like to imagine someone searching the internet either looking for somebody to write an article for them, or researching a company that has approached them, and finding our article near the top of their search results. ~Adjwilley (talk) 02:36, 2 September 2015 (UTC)
- Conflict of interest editing on Wikipedia is pretty close. Ocaasi t | c 03:26, 2 September 2015 (UTC)
- Indeed, it is. Even the major companies like Wiki-PR seem to have their own articles already. I do have another question: Are the checkusers able to link individual sockpuppets to the larger companies? That's actually two questions: do they have that ability, and would they be able to publicly disclose that, say, the orangemoody accounts are from such and such a company? ~Adjwilley (talk) 04:23, 2 September 2015 (UTC)
- Regarding the Orangemoody socks, we do not know who is behind them. Speaking more generally, if there is sufficient evidence to be certain of a link between a sockfarm and/or paid editing ring and a corporation then I suspect that this will be publicly disclosed, although the advice of the WMF will be sought on each occasion. It is also possible that culprits will be identified through non-checkuser means, and I don't doubt there are people right now trying to do this based on the publicly available Orangemoody evidence. Thryduulf (talk) 10:34, 2 September 2015 (UTC)
==13. Bonds for certain types of editing==
Why not adopt a model from other areas and require the posting of a professional paid editor surety bond for good conduct? Bond would be forfeit fully or partially for various types of misconduct to be determined. This could be done in conjunction with one or more of the ideas above; for instance, I see it working well with #6 corporate accounts. A properly functioning market will price the bonds according to the editor's risk: a shady no-reputation actor would have to put up a lot of his own dough, but a reputable org or individual should be able to get a better rate. There would probably be some intrinsic tradeoff of privacy for accountability that the market would also see to. — Brianhe (talk) 06:54, 2 September 2015 (UTC)
- We want to make legitimate interactions with us easy. And then we need to make poor paid promotional editing hard. I do not think this will accomplish either one of those. Doc James (talk · contribs · email) 07:39, 2 September 2015 (UTC)
- Our aim is for a global service, to achieve that we need to tackle systematic bias not encourage it. A bond system on a global site would inevitably be set at a level that some corporate PR types would consider trivial, but in other parts of the world would seem extortionate. It would be less onerous on the full time wikipedia spammer than on someone who only spams about one or two companies. It would lead to court cases by people claiming they shouldn't lose their bond because what they did wasn't excessively promotional. But worst of all, it would squeeze out the retired, neutral and disinterested as they hadn't paid for the privilege of editing Wikipedia. ϢereSpielChequers 09:57, 2 September 2015 (UTC)
- I should clarify, the bond was only intended to apply to "certain types of editing" by which I mean primarily paid editing, though we could extend it at will, e.g. all articles with MEDRS implications could require a bond. There should be zero cost to individuals who are not paid editors.
- Doc, I think it would actually accomplish both. 1) Making poor paid promotional editing hard: There has to be a cost associated with bad content. Right now the cost is all borne by volunteers like us. The bond would be set at a monetary level, and with terms, such that it should scare away poor promotional editing by those with a commercial interest. This is simply a way to make the creator of content have something on the line to offset the incentive that got them to write the content in the first place. Right now it's all a one-way system where the upside is writing content, and the downside is virtually nonexistent: damage to the reputation of a throwaway account. 2) Making good-faith editing easy: This should not affect that side of the equation at all, if the good-faith editor acts appropriately. If they violate the terms they agreed to when they created their account and started editing, and re-affirmed when committing to the bonded editing process, then too bad.
- @WereSpielChequers: I agree, the litigious US system could be against us in this regard. However this might be a consideration better made by WMF legal staff; a lot of the other proposals on the table here would require enforcement that would be at least as onerous on them. If this solution is what we-the-community want, we should ask for it.
- Tweaks to this proposal could include:
- Setting a very low bond value at first to demonstrate our resolve and to clarify process for good-faith people. Raise it over time to drive away the systematic exploiters.
- Setting a variable bond level based on locality. I don't like this because I can see bad content being "offshored", but we may have to live with it for the perception of equity, if the alternative is no system at all.
- Maybe membership in good standing in recognized professional organization (e.g. IEEE, AMA or even recognized PR associations) could be the equivalent of a bond for our purposes.
- The system could be applied across the board, or selectively for certain types of articles. For instance, only to business-related articles or only to startup-related articles. Enforcement would be easy: delete on sight if the editor was in a prohibited editor group/topic area, and was not bonded.
- Submitted for (re)consideration — Brianhe (talk) 18:13, 2 September 2015 (UTC)
==14. Require real-name registration for editing and/or creating certain types of articles: small and medium-sized companies and organisations, biographies of lesser-known people==
Looking at articles on small and medium-sized businesses (law firms, management consultancies etc.), the majority of them seem to have been created by a Wikipedia editor who has done little or nothing else in Wikipedia, suggesting it was a principal, employee or agent of the company in question. Same with other types of organisations and biographies of living people. These are typically articles that get relatively little traffic and scrutiny from regular editors – and the number of highly active editors per 1,000 articles is continuously shrinking (about a fifth of what it was in 2007). The problem will get worse, not better, over time. Andreas JN466 11:15, 2 September 2015 (UTC)
- I strongly oppose requiring a real name on Wikipedia - even ignoring the swathes of privacy issues, we have no way of verifying what a person's real name is or is not. Read the Personal name article to see some of the problems with even defining what someone's "real name" is. Thryduulf (talk) 11:33, 2 September 2015 (UTC)
- I understand why the PR industry wants this, but why should we concede it, especially just after catching some PR outfit out in a massive breach of the TOU? There are two forms of real name registration, one is that used on certain wikis which require your username to be plausible as a real name in the Anglosphere. The other requires verification against an external site such as Facebook. Both systems have the disadvantage that they are a greater deterrent to goodfaith editors than bad, the more effective the system at outing our goodfaith editors the more vulnerable it makes them to litigious companies who don't want anyone changing their PR department's desired wording of the Wikipedia article on them. Of course real name editing is a very different burden depending on how common your name is, and we might well have a few editors with names like John Smith who would still be able to edit PR vulnerable areas. But if we introduce real name editing largely the spammers would have won. ϢereSpielChequers 11:52, 2 September 2015 (UTC)
- I am not sure the PR industry wants this. Certainly, the Orangemoodies are unlikely to want it. As for how to identify, see below (I had neither of the methods you mention in mind). Note too that this would only affect certain types of articles: in particular, those for small and medium-sized businesses, which few unconflicted people want to edit in the first place. Andreas JN466 17:30, 2 September 2015 (UTC)
- I tend to agree with the above two opinions. Since we do require that paid editors disclose their employers, clients, and affiliations, you could say that we have real-name semi-registration for them. But if paid editors don't disclose the real names of their employers, etc., how could we force them to disclose (even in private) their own real names? There might be some use in brainstorming some ideas here - how can we force disclosure from paid editors? I'm not sure it would get far, but let me try two:
- Have the businesses and other payers register an account in their own names (see above proposal) and have them disclose the user name of who they are paying, together with submitting an OTRS ticket identifying themselves to prevent Joe jobs.
- Require that paid editors enable the e-mail link, if serious issues have been raised, and confirm their identity to Arbcom or other people assigned this task.
- Very likely those would be difficult to implement (even just getting through an RfC would be hard), but I think if we bounce around enough ideas we might come up with something. Smallbones(smalltalk) 16:29, 2 September 2015 (UTC)
- One possible method often mentioned in this context is a credit card payment to the Foundation of £0.01. Identification to the Foundation is all that is required here: the account could still have a fantasy name on-wiki. However, the additional user right would only be given in response to the nominal penny payment. Andreas JN466 17:30, 2 September 2015 (UTC)
- Problem: anonymous prepaid credit cards. — Brianhe (talk) 17:54, 2 September 2015 (UTC)
- Another problem is that credit card payments cost the recipient money to process; many places in the UK don't accept credit card payments for transactions below £10 because it's not economical for them to do so. $0.01 transactions would be a significant drain on the WMF's budget (international payments perhaps more so) for at most trivial benefits. Thryduulf (talk) 18:02, 2 September 2015 (UTC)
- If this were implemented, everyone who creates stubs about, say, 7th-place Olympic gymnasts or Paralympic boccia medalists, or perhaps minor female scientists, would be required to out themselves or stop editing. I guarantee you that I would never have created what small number of articles I have if I had had to reveal my real name. BethNaught (talk) 17:58, 2 September 2015 (UTC)