I think the point should be that this study should not have been published at all. The sample size was far too small, leaving the results open to misinterpretation.

After skimming it, I think the Boston Globe article is well written.

http://www.bostonglobe.com/lifestyle/health-wellness/2014/04...

The title is "Study finds brain changes in young marijuana users". Maybe it should read "differences" instead of "changes".



The sample size isn't necessarily the problem. For example: if we compared 20 cannabis users to 20 non-users and measured whether or not they felt "high", I bet our total sample of n=40 would be plenty to show a strong effect of pot use on "feeling high". The result would be statistically significant, and it would not be a distortion of the truth at all.
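To make that concrete, here's a toy simulation in Python (the numbers are entirely made up for illustration; the 0-10 "high" scale and group means are my assumptions, not anything from the study). With an effect that large, a two-sample t-test on 20 per group is wildly significant:

    # Toy illustration: a large effect is easily significant at n=20/group.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical self-reported "high" scores on a 0-10 scale (made up).
    users = rng.normal(loc=8.0, scale=1.5, size=20)     # cannabis users
    nonusers = rng.normal(loc=1.0, scale=1.5, size=20)  # non-users

    res = stats.ttest_ind(users, nonusers)
    print(f"t = {res.statistic:.1f}, p = {res.pvalue:.2e}")  # p is tiny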

This is a collision between reporters and exploratory research. Statistical significance works best with confirmatory research, in which there is an a priori hypothesis, a set of falsifiable predictions, and a specific experimental design that manipulates just one or two variables so as to potentially falsify the predictions. However, confirmation is only half of science.

Before the confirmation stage, there should always be exploration, and that's what this study represents. "Let's put cannabis users in an MRI and see what happens!" It's impossible to draw conclusions from exploratory research in the same way we do from confirmatory research. The main problem is that there's usually no hypothesis, nothing to falsify, and therefore no statistical test exists to help sift through the results.
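For a sense of why unguided sifting is dangerous, here's a toy sketch (again, made-up noise data, nothing from the study): run enough uncorrected tests and some will clear p < 0.05 by chance alone.

    # Toy sketch of the exploratory pitfall: with no a priori hypothesis,
    # testing many measures yields "significant" hits from pure noise.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_measures = 100  # hypothetical brain measures, no real effect in any

    false_positives = 0
    for _ in range(n_measures):
        a = rng.normal(size=20)  # "users" -- pure noise
        b = rng.normal(size=20)  # "non-users" -- pure noise
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1

    print(false_positives, "of", n_measures, "noise comparisons hit p < 0.05")
    # Expect ~5 by chance, which is why exploratory hits need confirmatory
    # follow-up or multiple-comparison correction.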

However, significance testing can still suggest whether an exploratory finding is deviant. If one group mean differs from another, it's totally fair to report that difference. I didn't have to dig too deep into the original article to find that the authors weren't making any outrageous claims. It's merely that journalists reported the exploratory results as if they were generated by a confirmatory, falsifiable hypothesis. In fact, the scientists were just reporting an interesting difference they observed.

Anyway, the point is not that the study should not have been published at all. The study was fine, the work is valuable, and the reporting was overzealous/misinformed.

Edit: updating to add that the scientists did have some ideas about what kinds of differences they were expecting, consistent with animal models. I might venture to say they had some clearly falsifiable hypotheses, too. The science is fine, and so are the scientists. It's the reporting that is the problem.


It absolutely should say differences. Changes implies causality (or, if not causality, at least differences over time, which also weren't shown). It is every bit as likely that the brain differences are the reason somebody chose to smoke marijuana. But "Study finds brain differences in young marijuana users" doesn't have any cachet.


I won't pretend to know whether it was properly designed as a medical study. As policy fodder, which it is first and foremost, it might have benefited from additional control groups using substances such as alcohol, nicotine, and caffeine; that could have answered the question of marijuana's harms relative to legal substances.



