The dilemma of weak neuroimaging papers

By Daniel Bor, in neuroimaging

Over this week, there has been a striking debate in the blogosphere and on Twitter concerning the flaws in many published neuroimaging studies. It was sparked when Russ Poldrack acknowledged that an early paper he had co-authored would not pass today's statistical standards. His partial explanation was that it was written in a different age, with more lax conventions, and admittedly he was only a minor author on the paper himself.

Late Tuesday night, Neurocritic posted a provocative blog article in response to this, asking whether neuroimaging papers that relied on uncorrected statistics should be retracted. Two key issues quickly surfaced in the discussion that followed. I thought it might help in this discussion to explain one of the main statistical issues that this debate is pinned on, that of corrected versus uncorrected statistics, and how this applies to brain-scanning.

And if many published imaging papers are so flawed, I want to try to explain how the literature became so sloppy. Just to flag up that this post is addressing two audiences.

I want to explain the context of the debate to a general audience, which occurs in the next two sections, and, in the last short section, suggest how they can assess neuroimaging stories in the light of this.

The middle sections, although hopefully understandable and maybe even of some interest to all, are directed more at fellow scientists. So what are corrected and uncorrected statistics?

Imagine that you are running some experiment, say, to see if corporate bankers have lower empathy than the normal population, by giving them and a control group an empathy questionnaire. Suppose the bankers do score a few points lower, on average, than the controls.

How can you tell whether this is just some random result, or whether bankers really do have lower empathy? This is the point where statistical testing enters the frame. Classically, a statistical test will churn out the probability that you would have got the same result just by chance.
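
To make this concrete, here is a minimal sketch in Python of such a test, using scipy's independent-samples t-test. All the numbers are invented for illustration; no claim is being made about real empathy scores.

    # Hypothetical empathy scores for two groups (made-up numbers).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    bankers = rng.normal(loc=48, scale=10, size=30)   # invented group scores
    controls = rng.normal(loc=52, scale=10, size=30)

    t, p = stats.ttest_ind(bankers, controls)
    # p is the probability of seeing a difference at least this large
    # if, in truth, the two groups did not differ at all.
    print(f"t = {t:.2f}, p = {p:.3f}")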

All well and good, but what if you also tested your control group against politicians, estate agents, CEOs and so on?

Run enough tests and sooner or later one will come up significant purely by chance. To illustrate, imagine Joe Superstitious, who believes he can influence coin flips with his mind. He flips a coin four times, hoping for four heads to prove his telekinetic powers, and fails, but feels he came tantalisingly close.

His mojo must be building! So he tries again, and again and again. Then, as if by magic, on the 20th attempt, he gets all 4 heads. Joe Superstitious proudly concludes that he is in fact very skilled at telekinesis, puts the coin in his pocket and saunters off. Joe Superstitious was obviously flawed in his thinking, but the reason is actually that he was using uncorrected statistics, just as the empathy study would have been if it had concluded that bankers are less empathic than normal people after running all those extra comparisons.
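
Joe's fallacy is easy to put in numbers. A single run of four heads is fairly unlikely, but the chance of at least one such run somewhere across 20 attempts is better than even; a quick sketch:

    # Probability of 4 heads in one attempt, and of at least one
    # all-heads run somewhere across 20 attempts.
    p_single = 0.5 ** 4                     # = 0.0625, about a 6% chance
    p_somewhere = 1 - (1 - p_single) ** 20  # ~0.72, more likely than not
    print(p_single, p_somewhere)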

If you do multiple tests, you normally have to apply some mathematical correction to take account of how many tests you ran.

How does this apply to brain-scanning? Moving on to neuroimaging, the data are far more complex and inordinately larger, but in essence exactly the same very common statistical test one might have used for the empathy study, a t-test, is also used here in the vast majority of studies. A standard analysis divides the brain into tens of thousands of small volume elements, called voxels, and runs a separate test at each one.
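
The problem is easy to demonstrate on simulated data. Below is a sketch (with invented dimensions; real scans and designs vary) that runs a t-test at each of 10,000 "voxels" of pure noise: hundreds pass an uncorrected p < 0.05 threshold, while a simple Bonferroni correction, dividing the threshold by the number of tests, removes essentially all of them.

    # Multiple-comparisons problem on pure noise: no voxel has a real effect.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_voxels, n_subjects = 10_000, 20
    cond_a = rng.normal(size=(n_subjects, n_voxels))
    cond_b = rng.normal(size=(n_subjects, n_voxels))

    t, p = stats.ttest_ind(cond_a, cond_b, axis=0)  # one test per voxel
    print((p < 0.05).sum())             # ~500 "significant" voxels by chance
    print((p < 0.05 / n_voxels).sum())  # ~0 after Bonferroni correction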

So there is a vast risk of some of these voxels being classed as significantly active just by chance, unless you are careful to apply some kind of correction for the number of tests you ran. One common fudge is to apply a lenient fixed threshold, typically p < 0.001 at each voxel, without any correction. This is still in relatively common use today, but it has been shown, many times, to be an invalid attempt at solving the problem of just how many tests are run on each brain-scan.
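
Even that lenient fixed cutoff fails on sheer volume: the expected number of false positives is simply the threshold multiplied by the number of tests. A back-of-envelope sketch (the voxel count is an assumed order of magnitude):

    # Expected false positives under an uncorrected p < 0.001 threshold.
    n_voxels = 50_000        # rough order of magnitude for a whole-brain scan
    alpha = 0.001
    print(n_voxels * alpha)  # ~50 voxels "active" in pure noise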

Poldrack himself recently highlighted this issue by showing a beautiful relationship between a brain region and some variable using this threshold, even though the variable was entirely made up. In a hilarious earlier version of the same point, Craig Bennett and colleagues fMRI-scanned a dead salmon with a task involving the detection of the emotional state of a series of photos of people; with uncorrected statistics, the dead fish appeared to show significant task-related activation.

So the take-home message is that we clearly need to be applying effective corrections for the large number of statistical tests we run for each and every brain activation map produced.
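
Many such corrections exist; as one illustration, here is a sketch using the Benjamini-Hochberg false-discovery-rate procedure from statsmodels, applied to simulated p-values (uniform noise standing in for one p-value per voxel):

    # FDR correction across a vector of voxelwise p-values.
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(2)
    pvals = rng.uniform(size=10_000)  # pure noise: nothing should survive

    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    print(reject.sum())  # ~0 voxels pass once the correction is applied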

Except in rare, well-justified circumstances, we should all be using corrected significance, and reviewers should be insisting on it. Should we retract uncorrected neuroimaging papers?

Surprisingly, there is a vast quantity of published neuroimaging papers, even including some currently in press, that use uncorrected statistics.

But I don't believe wholesale retraction is the answer. For one thing, some might have found real, yet weak, results, which might since have been independently replicated, as Jon Simons pointed out. Many may have other useful clues to add to the literature, either in the behavioural component of the study or due to an innovative design.

But whether a large part of the literature should now be discarded is quite a separate question from whether those papers should have been published in the first place.

Ideally, the authors should have been more aware of the statistical issues surrounding neuroimaging, and the reviewers should have barred uncorrected significance.

More on this later. Can any neuroimaging paper do more harm than good? Another point, often overlooked, is the clear possibility that a published study can do more harm than good.

If a published result is wrong, but influential and believed, then this can negatively impact the scientific field.
