::::I believe (don't ask for a source) that PubMed classifies "reviews" based on a handful of numerical criteria: reviews have few authors and many citations, and so forth. In other words, PubMed uses a computed algorithm to determine which articles are "reviews". I've written a couple which are not characterized as such by PubMed - probably I didn't use enough references :) Anyhow, I could be wrong since I can't find a source to support my statements, but I would suggest that editors may be in as good a position or better to determine which articles are "reviews" than PubMed. '''[[User:MastCell|MastCell]]''' <sup>[[User Talk:MastCell|Talk]]</sup> 21:31, 21 May 2009 (UTC)
:::If there is a difficult case, there is always the idea of looking at the journal and seeing where in the issue the article is listed and how the journal describes it. [[User:TimVickers|Tim Vickers]] ([[User talk:TimVickers|talk]]) 21:53, 21 May 2009 (UTC)
::::Crazy talk! :P '''[[User:MastCell|MastCell]]''' <sup>[[User Talk:MastCell|Talk]]</sup> 22:06, 21 May 2009 (UTC)
Revision as of 22:06, 21 May 2009
Applied kinesiology: confused use of research sources
In this section of the article, there is a confusing use of sources. This situation has occurred because of the edits made by science-based editors on one side, and a board member of the ICAK on the other side. He has attempted to present this pseudoscience as a scientifically legitimate discipline. The best research has consistently shown it to be hogwash. Please take a look and you'll be rewarded with a fascinating glimpse into the 10th most used chiropractic technique. -- Fyslee (talk) 05:30, 17 March 2009 (UTC)
Mediation
A user has requested mediation on this issue. Jmh649 is here to help resolve your dispute. The case page for this mediation is located here.
Popular press accuracy
- (This follows up on the #Medicine and the popular press thread above, which was getting a bit long.)
Above, TimVickers suggested inserting this text somewhere:
- Although some articles in the popular press are accurate, many others are inadequate, misleading, or even completely wrong.
I have a few ideas for making the proposal more concrete and better-supported by the cited source, Dentzer 2009 (PMID 19118299). I just now reread the above thread and Dentzer, and with all this in mind, I suggest that in WP:MEDRS #Popular press, we make the following change (deletions struck out, insertions italicized, and changing the preceding comma to a period):
<s>and</s> ''Although medical news articles often deliver public health messages effectively, they too often convey wrong or misleading information about health care, partly when reporters do not know or cannot convey the results of clinical studies, and partly when they fail to supply reasonable context.''[PMID 19118299]
This captures Dentzer's thrust more accurately (albeit at more length), and mentions a positive thing she has to say about the news, which seems to be the main point of contention here. Eubulides (talk) 18:47, 27 March 2009 (UTC)
- I'm concerned we are placing too much emphasis on trying to work out and reproduce the opinions and words of one author in one paper, which is an opinion piece, not a product of research. The article is not just concerned with accurate reporting (though she does give examples of misleading or incorrect reporting) but whether the underlying health message is being conveyed successfully. The wrong message may be conveyed even if the reporting of the "new" story is accurate, because it is incompletely described, or is not placed in context with other research or health concerns. The statement that "Often these [public health] messages are delivered effectively" is actually the least convincing opinion in the article and she cites no research to back it up. Perhaps nobody questioned that the word "effectively" means "to have an effect". Public health messages are largely ineffective, and I'm sure there's lots of research on the subject. Indeed, newspapers are guilty of effectively spreading anti-health messages such as those from the anti-vaccination lobby, or over-simplistic messages such as the antioxidant theory encouraging the consumption of red wine, or waste-of-time-and-money messages such as the high coverage of many CAM "therapies". We must also remember that "medical news articles" with the highest readership are found in glossy women's magazines, tabloids, and television news. The author of that paper, along with the folk on this talk page, probably read a better quality of news than average. Colin°Talk 20:06, 27 March 2009 (UTC)
- Colin, I agree that we are placing too much emphasis on the Dentzer piece, which you correctly point out is an opinion piece, not supported by data or research. If you want peer-reviewed research supported by data, you can follow the links on Schwitzer's web site. Nbauman (talk) 20:52, 27 March 2009 (UTC)
- Eubulides, I read an article in the BMJ about doctors who murder patients, which cited several examples.
- I don't know about Eubulides, but I would not agree with the statement "doctors all too frequently murder patients". What a ridiculous example. Nbauman, you are seriously in danger of self-highlighting the sort of misunderstanding and misreporting of statistics that we are criticising journalists for. It is a cliché that "one avoidable death is a death too many", but fortunately the world doesn't get paralysed while we wrap it in cotton wool. Colin°Talk 21:34, 27 March 2009 (UTC)
A recent change gave reporter inexperience as the reason for poor medical stories, claiming the cited article excluded "seasoned reporters". It does nothing of the sort. That is a logical error like saying that "weekdays are dry" based on a source that says "often weekends are ruined by rain". Colin°Talk 21:18, 27 March 2009 (UTC)
Nbauman - stop edit warring. You were bold. It got reverted. Now discuss. Your reinsertion of text which is neither supported by the cited article nor supported by consensus of WP editors is extremely bad form. I strongly encourage you to have the decency to self-revert and engage in consensus seeking here before making contentious changes. Colin°Talk 21:37, 27 March 2009 (UTC) WP:RS does not apply outside of article space, and you yourself regard it merely as an "opinion piece, not supported by data or research". Colin°Talk 21:39, 27 March 2009 (UTC)
- Colin, did you read Dentzer's article before making that change? She says:
- Often these messages are delivered effectively by seasoned [experienced] reporters who perform thoughtfully even in the face of breaking news and tight deadlines. But all too frequently, what is conveyed about health news by many other journalists is wrong or misleading.
- We agreed that "seasoned" means "experienced."
- So Dentzer said that "other journalists" are wrong or misleading, not experienced journalists. Nbauman (talk) 21:44, 27 March 2009 (UTC)
- Four statements: [1] There exist seasoned reporters who have delivered messages effectively. [2] There exist other journalists who have conveyed wrong or misleading information. [3] Wrong and misleading information is conveyed by inexperienced journalists. [4] Wrong and misleading information is not conveyed by experienced journalists. 1+2 implies neither 3 nor 4. Please go read up on logic before suggesting I didn't even read the article. Colin°Talk 23:24, 27 March 2009 (UTC)
- "Other" means "those journalists not in the first set". The first set is defined as "seasoned reporters who have delivered messages effectively". The first set is not defined as "all seasoned reporters". For that, the author would have had to write "Seasoned reporters deliver health messages effectively." You are seeing (because you want to) an implication that strictly isn't there. However, I suspect Dentzer didn't worry about whether her statement would be logically analysed or mined for implications, because as I pointed out earlier, her use of the word "effectively" is careless. Colin°Talk 10:44, 28 March 2009 (UTC)
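- Colin's four-statement analysis above can be checked mechanically. The following is a toy model of my own construction (the journalist set and helper names are illustrative, not from the discussion): it exhibits a population of reporters in which statements [1] and [2] both hold while [3] and [4] both fail, confirming that 1+2 implies neither 3 nor 4.

```python
# Each journalist is modelled as (experience_level, reporting_quality).
journalists = [
    ("seasoned", "accurate"),    # delivers messages effectively
    ("seasoned", "misleading"),  # a *different* journalist who misleads
]

# The set from statement [1]: seasoned reporters who delivered effectively.
effective_seasoned = {j for j in journalists if j == ("seasoned", "accurate")}

# [1] There exist seasoned reporters who have delivered messages effectively.
s1 = len(effective_seasoned) > 0
# [2] There exist *other* journalists (not in the set above) who mislead.
s2 = any(j not in effective_seasoned and j[1] == "misleading" for j in journalists)
# [3] Wrong/misleading information is conveyed by inexperienced journalists.
s3 = any(j[0] != "seasoned" and j[1] == "misleading" for j in journalists)
# [4] Wrong/misleading information is never conveyed by experienced journalists.
s4 = all(j[1] != "misleading" for j in journalists if j[0] == "seasoned")

print(s1, s2, s3, s4)  # prints: True True False False
```

In this model both misleaders and effective reporters are seasoned, so [1] and [2] are true while [3] and [4] are false, which is exactly the gap between what Dentzer wrote and what was read into it.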
- I strongly disagree with the recent change "news articles <s>too often</s> can convey wrong or misleading information about health care particularly when they are written by by inexperienced reporters". Not only is it ungrammatical, it focuses on what is a secondary topic of Dentzer's article (lack of training) and misses her main point (lack of context in news articles). Also, it removes a "too often" that is an important point in Dentzer, and should be a point here. I agree with both Colin and Nbauman that this thread is focusing too much on Dentzer and too little on what the guideline should say. I frankly am dismayed by Nbauman's debate tactics on this talk page; they are not conducive to collaboration. Let's work out a change on the talk page first, and not edit-war the project page. Eubulides (talk) 21:57, 27 March 2009 (UTC)
- Perhaps more importantly, this is not a mainspace article on Role of popular press in health care. This page is advice from Wikipedia editors to other Wikipedia editors. We don't need sources to "prove" that our advice is to use high-quality secondary scientific sources for statements of scientific fact, and we don't need to split hairs over the ideally and comprehensively neutral way to represent one person's published opinions.
- There is no consensus for change. Nbauman remains offended that we're not showing enough respect to his favorite colleagues, and will probably always remain offended by our supposed scientism. None of the other involved editors want to lower our standard to accommodate his POV. I think we need to just stop talking at this point. WhatamIdoing (talk) 22:56, 27 March 2009 (UTC)
Vitamin E
- An interesting example of a recent popular press distortion comes in the form of the NYTimes' recent coverage of vitamin E. For example, they painted a misleading picture of the aftermath of the SELECT trial [1] (which our selenium article <s>reflects</s> fixed). Much better commentary appears in Gann's accompanying editorial on the SELECT trial [2] or in the article's discussion itself [3]. The most striking difference is that while earlier trials were focused on selenium-deficient people, the SELECT men were high in selenium. Selenium is toxic at higher doses. Big surprise. Similarly, the NYTimes recently highlighted the "increased mortality" from vitamin E in an article titled "Extra Vitamin E: No Benefit, Maybe Harm". As I pointed out in my comment, they hardly emphasize that the increased mortality occurred only at 400+ IU (27 times the RDA) and excluded the fact that a meta-analysis found a non-statistically significant reduction in deaths at 100 IU. They also excluded the fact that the meta-analysis was hesitant to generalize this to the non-elderly population. What they also don't mention is that, according to the latest NHANES, 90% of Americans don't get the RDA of 15 IU. From that perspective, basic vitamin E supplementation looks prudent, and indeed the ATBC trial found benefits for those with higher serum levels. Despite the issues with the NYTimes coverage, it was added to the orthomolecular medicine article by experienced medicine editor MastCell [4]. Although I pointed out the problems in E's coverage at that article's talk page, no one seems interested in providing a neutral overview of poor E. II | (t - c) 23:35, 27 March 2009 (UTC)
- I'd rather not discuss specific content issues here, especially when they're as historically intractable as vitamin supplementation. From the perspective of WP:MEDRS, you're actually dismissing the New York Times article far too hastily. In fact, it does note that the excess mortality was seen with high doses. An entire subsection of the Times article is entitled "Too Much of a Good Thing", and states:
Recent studies have even suggested that at the high doses many people consume, vitamin E could be hazardous. In November 2004, the American Heart Association warned that while the small amounts of vitamin E found in multivitamins and foods were not harmful, taking 400 International Units a day or more could increase the risk of death. ([5])
That's followed by a fairly detailed breakdown of recent studies on the subject. So I think the Times piece is actually an example of responsible mainstream-media reporting, which is one reason I cited it. Another reason is that the citation occurred in orthomolecular medicine, which specifically involves very large doses of vitamins and where the potential harms of such megadoses are relevant.
I can't fault the Times for "excluding" a non-significant finding - I'd be more inclined to praise them for doing so. And the Times piece reaches the same conclusion as Gann's (excellent) editorial: Vitamin E pills aren't a magic bullet. MastCell Talk 00:40, 28 March 2009 (UTC)
- Dentzer's point is that "when journalists ignore complexities or fail to provide context, the public health messages they convey are inevitably inadequate or distorted". That's exactly what happened there, when the average vitamin E intake is around 7 mg/day [6]; the ATBC study (not covered) found benefits for moderate supplementation, and the meta-analysis found that "low-dose" vitamin E (still several times above the RDA) reduced deaths; the probability that the result was due to chance was 20% -- not statistically significant, but not worth discarding either. Based on the literature, the article could also have reasonably been titled: "Extra vitamin E: Good in moderation". It was quite easy to come away from that article thinking that vitamin E is scary and dangerous. Further, the results of the meta-analysis were challenged [7]; anyway, to get back on topic, I actually think Dentzer is being somewhat selectively cited, but the section is fine overall. One thing to keep in mind is that papers can sometimes be extremely technical; in those cases a lay summary is necessary. As far as OMM goes, I'm disappointed you didn't notice the major problems in the vitamin E section (not least the statement that "initial hopes for OMM were based on observational studies" -- implying that mainstream, rather than OMM fringe, was responsible for the RCTs), and hopefully you'll comment the next time I bring it up. By the way, looking at Gann again, I'm not so impressed by the way he attributes the results of earlier studies to chance when they were clearly different populations. II | (t - c) 07:10, 28 March 2009 (UTC)
- Agree with MastCell that this thread on Vitamin E doesn't belong here; presumably it belongs in Talk:Orthomolecular medicine. Eubulides (talk) 16:38, 28 March 2009 (UTC)
Scientific evidence for guideline
WhatamIdoing, you've got it exactly backwards. I have always been a strong critic of medical journalism, and I agree with most of Schwitzer's work because Schwitzer is based on evidence. Much medical reporting is terrible. I don't know if "most" of it is terrible, but I'll go with the evidence.
I'm flattered when I see articles like Dentzer's, because I'm one of those experienced journalists she talks about. I write for doctors, my work is edited and reviewed by doctors, and if I didn't get the story right in every respect, my work would be tossed out pretty fast.
My difference with you and the other editors here is that I believe that evaluations and criticisms of medical journalism should be based on the same kind of scientific evidence that we use in medicine and elsewhere in science. That's what I'm offended at -- your abuse of science and contempt for scientific method. You're not using high-quality secondary scientific sources at all.
You're making wild claims about journalism based not on scientific evidence, like Schwitzer's work, but on your own personal prejudices. I won't speculate on the psychological roots of those prejudices, but you can't support your broad overstatements with facts.
You cherry-pick articles like Dentzer's to find excerpts that you think support your position, and quote them selectively and misleadingly, even though you misunderstand the whole essay.
I've seen something similar, where alternative medicine advocates will pick out excerpts of medical articles to support their argument that all doctors and all drug companies are harming people.
Your basic fallacy is to condemn all journalism in the popular press, when the scientific data displays a much more interesting pattern: some of the popular press is accurate and some is not. So you're really missing the point.
It's particularly ironic that you claim to be supporting "high-quality secondary scientific sources", when you're doing exactly the opposite. Except for Schwitzer, your entire guideline is based on personal opinion and personal prejudice and ignores those sources. Nbauman (talk) 23:45, 27 March 2009 (UTC)
- I dunno, are any of the other Wikipedia guidelines or policies rigorously supported by scientific evidence? Although the scientific method is quite useful for resolving scientific disputes, one can't use it for everything.
- Hmm, come to think of it, shouldn't any effort to document (and perhaps even scientifically support) medical journalism be put into mainspace? I just now created the stub Medical journalism to do that. I get the feeling that if even 10% of the effort expended on this thread had been put into Medical journalism we might have a useful article there by now. (And no, that's not a scientific estimate....)
- Eubulides (talk) 16:38, 28 March 2009 (UTC)
A secondary source dilemma
Here is a dilemma that I have noticed particularly with books. Whilst it is for obvious reasons preferable to use secondary sources wherever possible, is this always the case? One scenario I saw was when reading a high-quality secondary source in a book which is widely used, especially by medical students, where it stated that benzodiazepines are GABA reuptake inhibitors or increase GABA levels for their therapeutic effects, which is clearly false, as they work at benzodiazepine receptors, enhancing the effect of GABA at the GABA-A receptor. This scenario didn't happen on Wikipedia, but I am using it as an example of inaccurate info in books. Something similar happened in the past few days, where on the surface seemingly high-quality books on psychiatry and psychopharmacology were cited, but on closer inspection the authors were employed by literally over 20 pharmaceutical companies and had cherry-picked 2 or 3 primary sources, then distorted their results, reinterpreted them, and falsely represented the primary sources without any evidence but their opinion. So what would happen in that scenario? To be clearer, though, the sections of the books were not doing an extensive review of the literature but were basically touching a little on every aspect of psychiatry and psychopharmacology. So that brings up another question: what happens if the secondary source is not doing a meta-analysis or extensive review of the literature but is simply touching on one aspect of a condition or drug or syndrome etc? Also, is a peer-reviewed meta-analysis or review superior to a non-peer-reviewed book? Is a peer-reviewed secondary source published by people with ties to a drug company or some other conflict of interest equal to an independent peer-reviewed secondary source? I guess my point and dilemma is this.
The use of secondary sources is viewed more highly on Wikipedia because they are seen as an independent outside opinion, but what if the author(s) are not independent and/or can be shown to be distorting data? My main concern is secondary sources which are not peer reviewed, as this area seems to be most open to abuse by those with a conflict of interest. Currently the recommendation is that books, which are not necessarily peer reviewed and are open to abuse, opinion, synthesis, reinterpreting results, and distorting facts, are declared "excellent" (secondary) sources. I disagree and think that the problems with books need to be addressed. There are some good medical books which are high quality, don't get me wrong, but you see the dilemma that I see and the potential for misinformation getting inserted into articles. I think peer-reviewed secondary sources trump non-peer-reviewed secondary sources, and that many (but not all) books in general are not excellent sources of reliable citeable data. In fact I think primary sources are quite often superior to many books, depending on the subject matter being quoted, for reasons that I have just explained.--Literaturegeek | T@1k? 23:32, 1 April 2009 (UTC)
I have struck out some of my text, as on closer inspection I see that you differentiate between non-reliable medical books and reliable trusted publications, but I still have some concerns mentioned above.--Literaturegeek | T@1k? 15:54, 2 April 2009 (UTC)
- That's kind of a long question, and it sounds like this is really more of a question about a particular case, but in general the answer (I think) is that peer-reviewed sources are generally (although not always) preferred. Conflict of interest is of course an important issue, but in practice almost everybody has a conflict of interest of one form or another, and a review should not be disqualified simply because a reviewer has a conflict of interest. Eubulides (talk) 16:45, 2 April 2009 (UTC)
[Edit conflict with Colin, posting at same time] Sorry for length of questions. This was the book that I was talking about. Would this book be considered superior to an FDA review or Committee on Safety of Medicines review? This book is cited in an article and I have no intention of deleting it so I am not in a conflict with anyone per se. It was just when reading the book and comparing it to their sources I noted how they had distorted or even falsified the conclusions of the sources they were citing. Also they appeared to be giving best practice guidelines based on taking one or two primary sources out of context and also synthesising without any evidence in conflict. I guess I noticed a potential occasional serious problem with using books over more reliable peer reviewed secondary sources. Then of course the above example I mentioned where a trusted book source got the mechanism of benzodiazepines completely wrong is another example. I am not against using books and more often than not they do make good sources to quote and infact I used one today myself. I guess I am asking does a secondary peer reviewed meta-analysis or review source trump this example of a book? Or does FDA review or CSM review or department of health guidelines trump it?--Literaturegeek | T@1k? 17:56, 2 April 2009 (UTC)
- Don't overplay the "independent" aspect of secondary sources. The aspects of secondary sources we like are that they are written by experts and they have undergone some form of expert review (peer or editorial). When wikipedians choose primary sources we end up with the selection & analysis of those primary sources done by nobodies, with no reputations to uphold, and (often) no serious review. So I find your concern about bias in the secondary-source authors to be a red-herring -- there is bias in the primary sources and there most certainly is often bias among wikipedians (which is undeclared, whereas conflicts-of-interest are declared in some sources). A medical book published by a serious publisher will have undergone some form of review. If you have found mistakes in a student textbook, perhaps you should contact the publishers. Colin°Talk 17:51, 2 April 2009 (UTC)
Colin, I think perhaps you misinterpreted me, or perhaps it is my fault in how I worded the issue. I was not in any way, shape, or form saying that primary sources are superior to secondary sources. I was raising an occasional isolated problem of what happens if a secondary source says something which is nonsense. For example, should we change the mechanism of action of benzos to say they are reuptake inhibitors because a good-quality book made a mistake? Or in such a scenario would a peer-reviewed secondary source trump it so we can add accurate data? What happens if the book is demonstrably misrepresenting its sources, etc.? I was talking about isolated examples. These examples didn't happen in real life on Wikipedia, but I noticed the potential from spotting several inaccuracies and thought that I would raise it. I have noticed this is less problematic in the peer-reviewed literature, especially secondary sources which have been published in medical journals. The inaccuracies I have spotted in sources (particularly books, which often briefly skim a topic) are not necessarily POV issues but just a matter of inaccurate factoids. Another one I have noticed is a claim that flunitrazepam is short-acting when it is long-acting.--Literaturegeek | T@1k? 18:03, 2 April 2009 (UTC)
- One of the reasons that User:Paul gene strongly opposed the emphasis on reviews was that, in his words (if I recall correctly), they are often "written by pharmaceutical hacks". These days the tide has begun to turn and, if you look hard enough, you can find unbiased information, as evidenced by the recent Cochrane review on St. John's wort (PMID 18843608) and the recent review on the NEJM on publication bias in antidepressants [8]. Sources without conflicts of interest should be preferred in my mind, or at the least included, and we should probably have a section on it in this guideline. II | (t - c) 18:41, 2 April 2009 (UTC)
- This guideline covers the whole of medicine. Drugs and the pro/anti-pharma controversy is really a small part. Folk get worked up about conflicts of interest and bias when much of what we should be writing about on WP, and pushing towards FA, is bread-and-butter non-controversial medicine. We must always bear this in mind when giving and suggesting changes to guidance on sources. Much of research is done and reviewed by honest folk earnestly trying to work out why we get ill and how to fix it. It is just a little wearying to keep hearing that secondary sources are full of bias, written by hacks in the pay of 20 pharmaceutical companies, and that if you'll only let me choose, weigh and interpret the primary sources instead, then I'd do a better job... Colin°Talk 19:10, 2 April 2009 (UTC)
Speaking of misrepresenting primary sources, you appear again to have misrepresented me. I never said nor implied that I should resynthesise an author's opinion. I was asking what we do in that scenario. Do we accept the demonstrably (intentionally or unintentionally) inaccurate factoids or opinions of a book which only briefly touched on one aspect of medicine, or does a peer-reviewed secondary source such as a meta-analysis or review, or the FDA, Department of Health, etc., trump it? I came here just to ask for guidance on what to do. Initially I misread the book guidelines on this project and struck out my text above, but I still wanted some guidance or discussion on what to do in these isolated scenarios I have mentioned. Instead I appear to be getting responses trivialising and mocking me as an idiot by distorting what I was saying and asking.--Literaturegeek | T@1k? 19:28, 2 April 2009 (UTC)
- Secondary sources are usually more reliable than primary sources. The fact that a handful are not just means that you should, in these cases, use several secondary sources to give a broad view of the range of opinions on the matter. What is not appropriate is for us to use primary sources to cast doubt on the conclusions of secondary sources - that is original research. Tim Vickers (talk) 19:15, 2 April 2009 (UTC)
Thank you Tim, this is what I was asking for, some guidance. Could you also give me your opinion on whether official national reviews by, say, the FDA or Department of Health, or a meta-analysis or review in a journal, would be superior to a book? What happens if a book says, for example, that SSRIs work by stimulating the release of serotonin when the truth is they block its reuptake, or something like that? Do we delete the book ref and replace it with a more accurate secondary source? Or do we leave the mistake of the book in even though we know that it is false? I guess I would like this project to state whether reviews or meta-analyses (secondary sources) in medical journals are superior to a (secondary source) book if content is in dispute.--Literaturegeek | T@1k? 19:28, 2 April 2009 (UTC)
- That's really a question for the RS noticeboard, this page is for discussing this policy, not specific sources or specific disputes. However, the advice given already at Wikipedia:Reliable_sources_(medicine-related_articles)#Choosing_sources seems to apply well to this instance. Tim Vickers (talk) 19:33, 2 April 2009 (UTC)
Okie dokie Tim, problems with specific sources, try and sort out on article talk page or else if that fails go to RS noticeboard is what you are saying I believe. So no need to alter guidance. You have answered my question, thank you.--Literaturegeek | T@1k? 19:47, 2 April 2009 (UTC)
- Any source that is rather general and broad (such as a student textbook) is likely to be inferior to a more specific book or paper. It may well have attempted to condense a complex subject with a simple explanation that is in fact wrong. This may be the case for your benzo textbook. A specialist book aimed at medical professionals would be a better source, as would a relevant secondary article such as those you mention. Where you have several sources in disagreement then it may be relevant to note the disagreement among professionals within the article (as Tim suggests) or you could post a query on the article talk page or WP:MED talk page and ask for the opinions of others. You may find someone has access to a better definitive source. Colin°Talk 19:51, 2 April 2009 (UTC)
Ok thank you Colin for your reply and good advice, makes sense.--Literaturegeek | T@1k? 20:00, 2 April 2009 (UTC)
Just to clarify it wasn't a "benzo book", but a clinical psychopharmacology text book (see above) and also another medical text book. Anyway like I said the inaccurate data wasn't cited on wiki so I was raising a potential issue I had noted with isolated books saying inaccurate things eg benzos are reuptake inhibitors or distorting sources. Anyway like Tim said discuss on article talk page to reach consensus, use better secondary sources or as a last resort go to RS noticeboard in such a scenario.--Literaturegeek | T@1k? 21:52, 2 April 2009 (UTC)
This has been bugging me, and I would like to end this by saying that I apologise for the confusion generated by myself initially misreading (due to skim-reading when half asleep) the policy regarding books, and I now know why Colin reacted the way he did. He is only trying to defend Wikipedia. Ironically I was trying to do the same in my posting and had nothing to gain, and the policy actually reads the way I thought it should read, but I misread it initially. Anyway, I got my questions answered in the end so all is good. :) I would like to just end this conversation here by saying I support Colin's efforts in defending this project and Wikipedia, and I am glad we now understand each other's misinterpretation of each other. No need for anyone to reply.--Literaturegeek | T@1k? 02:15, 6 April 2009 (UTC)
== Review status ==
I hope I can raise a query here without offending! Regarding the determination of 'review' status for a paper, does listing as a review by the NIH generally settle it? The criteria say "Ideal sources for these articles include general or systematic reviews in reputable medical journals" and that "A secondary source in medicine summarizes one or more primary or secondary sources". Not all reviews, particularly the narrative type, necessarily label themselves as reviews, but the NIH might list them as such. Leaving aside, of course, arguments about various journals. Peerev (talk) 22:37, 20 May 2009 (UTC)
- PubMed, in my experience, is not 100% reliable in labeling reviews. Occasionally it labels an article as a review when it is not; more often, it fails to label an article a review when it is indeed a review. (If you can't tell whether an article is a review after reading the article, then you probably shouldn't be citing that article; reliable sources are not supposed to be so confusing.) Eubulides (talk) 23:29, 20 May 2009 (UTC)
- What the author is actually doing matters much more than how some person in the back office happened to code the database.
- I have also found errors in the presentation of articles, including straightforward case reports labeled as "reviews" and things that I thought were good reviews being unlabeled (well, and basically all older reviews aren't tagged as reviews, but you generally shouldn't be using those anyway).
- If you've got a question about a specific article, then please feel free to share it with us. WhatamIdoing (talk) 05:38, 21 May 2009 (UTC)
- Thanks; there are multiple specifics, but in general I have also noticed that some papers presenting, for instance, a hypothesis will on reading be largely a review of the relevant literature with a much smaller expansion of the hypothesis. Such papers are often listed as reviews, but may not be tagged as such; perhaps they are older, as you suggest! I would assume it is safe to cite the review part generally but not necessarily the hypothesis part (unless it appears under a Hypotheses heading). Other papers will of course also contain large reviews within them, and I suspect that is why they are coded as reviews. When a PubMed search is done for a subject with 'and review' included, these can be the only 'reviews' found, regardless of date, hence the need for clarification. Peerev (talk) 21:25, 21 May 2009 (UTC)
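- (A side note on the search mechanics raised above: appending the free-text word "review" to a query is not the same as restricting to PubMed's formal Publication Type tag, which is what the "review" checkbox/filter actually does. A minimal sketch of the two query forms, assuming the standard public NCBI E-utilities esearch endpoint; it only builds the URLs and does not contact the server.)

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (the standard public base URL).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_query_url(topic, reviews_only=False):
    """Build an esearch URL for a topic.

    With reviews_only=True, restrict to records whose indexer-assigned
    Publication Type is "Review"; otherwise just AND in the free-text
    word "review", which matches it anywhere in the record.
    """
    if reviews_only:
        term = topic + " AND Review[Publication Type]"
    else:
        term = topic + " AND review"
    return ESEARCH + "?" + urlencode({"db": "pubmed", "term": term})

free_text = pubmed_query_url("benzodiazepine")        # word match anywhere
tagged = pubmed_query_url("benzodiazepine", True)     # formal tag only
print(free_text)
print(tagged)
```

The two result sets can differ substantially, which matches the observation above that some genuine reviews are never tagged as such.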
- I believe (don't ask for a source) that PubMed classifies "reviews" based on a handful of numerical criteria: reviews have few authors and many citations, and so forth. In other words, PubMed uses an automated algorithm to determine which articles are "reviews". I've written a couple which are not characterized as such by PubMed - probably I didn't use enough references :) Anyhow, I could be wrong since I can't find a source to support my statements, but I would suggest that editors may be in as good a position as, or a better one than, PubMed to determine which articles are "reviews". MastCell Talk 21:31, 21 May 2009 (UTC)
- If there is a difficult case, there is always the idea of looking at the journal and seeing where in the issue the article is listed and how the journal describes it. Tim Vickers (talk) 21:53, 21 May 2009 (UTC)