A correction on the Bradley and Brand method of estimating effect sizes from published literature
by M. T. Bradley, A. Brand

Theory & Psychology

About

Year
2014
DOI
10.1177/0959354314544920
Subject
Psychology (all) / History and Philosophy of Science

Similar

Cholesteryl ester transfer protein gene effect on CETP activity and plasma high-density lipoprotein in European populations

Authors:
V. Gudnason, S. Kakko, V. Nicaud, M. J. Savolainen, Y. A. Kesaniemi, E. Tahvanainen, S. Humphries, on behalf of the EARS group
1999

Validation of the U.K. diagnostic criteria for atopic dermatitis in a population setting

Authors:
H.C. WILLIAMS, P.G.J. BURNEY, A.C. PEMBROKE, R.J. HAY, ON BEHALF OF THE U.K. DIAGNOST
1996

Text

Theory & Psychology, 2014, Vol. 24(6), 860–862. © The Author(s) 2014.

A correction on the Bradley and Brand method of estimating effect sizes from published literature

Michael T. Bradley

University of New Brunswick

Andrew Brand

Bangor University

Abstract

Kühberger, Scherndl, and Fritz commented on an attempt by Bradley and Brand to adjust sets of exaggerated effect sizes reported in literatures associated with underpowered Null Hypothesis Statistical Tests (NHST). Their comment highlighted two important issues: (a) the senior author, Bradley, made an error in presenting the correction formula, and (b) there is an inherent incompatibility between inferential statistics and accurate measurement. The proper formula is presented here with evidence that it is relatively accurate in estimating effect sizes that have been exaggerated through NHST. The term relatively accurate is used because power cannot be 100%, and thus any attempt to estimate a true effect size will be off by some amount that depends on the alpha level, power, and, of course, the statistical variability of the estimates.

Keywords: effect sizes, eta, Ns, power, publication bias, r

Bradley and Brand (2013) wrote “by using eta, Bradley et al. could take the correlation between eta and N, square that correlation to obtain the variance accounted for by N, subtract that away from the average eta, and obtain a new reduced estimate of the average of eta that hopefully better approximated the actual eta for that area” (p. 799).

Corresponding author:
Michael T. Bradley, Department of Psychology, University of New Brunswick, P.O. Box 5050, Tucker Park Road, Saint John, NB, E2L 4L5, Canada.
Email: Bradley@UNB.ca


This statement, as Kühberger, Scherndl, and Fritz (2013) pointed out, omits a step: the squared correlation between n and eta must be multiplied by the average eta, and it is that product, not the squared correlation itself, that is subtracted from the average eta to obtain the adjusted eta. The paragraph should have read “by using eta, Bradley et al. could take the correlation between eta and N, square that correlation to obtain the variance accounted for by N, multiply the squared correlation by the average of eta, subtract that product away from the average eta, and obtain a new reduced estimate of the average of eta ….” For example, if the average of the etas from experiments pertaining to a topic was .45 and the correlation between experimental ns and etas was .9, then the adjusted effect size would be .45 − (.9² × .45) = .0855.

Fortunately, we included several examples of our work (most notably, Bradley & Stoica, 2004) that allowed Kühberger et al. to understand our intention of removing the variance accounted for by the correlation between n and eta. Kühberger et al. therefore used simulations to examine not only our erroneous approach but also our earlier correct approach. They found, not surprisingly, that the erroneous approach yielded far too strong a correction, whereas the correct formula yielded an approximation much closer to the true effect size.
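In code, the corrected adjustment is simply the average eta scaled by one minus the squared correlation between n and eta. The following sketch is ours, not from either paper; the function name and the use of NumPy are illustrative assumptions:

    import numpy as np

    def adjusted_eta(ns, etas):
        # Bradley and Brand correction: shrink the average eta by the
        # proportion of its variance accounted for by sample size.
        r = np.corrcoef(ns, etas)[0, 1]   # correlation between n and eta
        return float(np.mean(etas)) * (1 - r ** 2)

    # Reproducing the worked example directly from its summary numbers:
    mean_eta, r_n_eta = 0.45, 0.9
    print(mean_eta * (1 - r_n_eta ** 2))  # 0.0855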

Their Monte-Carlo methods, applied to several simulated meta-analysis examples, showed that the correct formula tends to underestimate the true effect size by 10 to 20% (e.g., an estimate of .12 for an input value of .15, or .36 for .40). We argue this is a workable approximation for calculating power and gauging the phenomena under investigation, although we appreciate the diagnostic approach that led Kühberger et al. to discard corrected effect sizes.
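The flavour of such a simulation is easy to reproduce. The sketch below is ours, not Kühberger et al.'s actual code; the true effect, number of studies, sample-size range, and alpha level are assumptions chosen for illustration. It “publishes” only significant results, which both inflates the average effect size and induces the negative n–eta correlation the correction exploits:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def published_studies(rho=0.15, k=5000, n_lo=10, n_hi=100, alpha=0.05):
        # Simulate k studies of a true correlation rho and keep only the
        # statistically significant ones, mimicking NHST-driven
        # publication bias.
        ns, rs = [], []
        for _ in range(k):
            n = int(rng.integers(n_lo, n_hi + 1))
            x = rng.standard_normal(n)
            y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
            r = np.corrcoef(x, y)[0, 1]
            t = r * np.sqrt((n - 2) / (1 - r ** 2))  # t-test of H0: rho = 0
            if 2 * stats.t.sf(abs(t), df=n - 2) < alpha:
                ns.append(n)
                rs.append(r)
        return np.array(ns), np.array(rs)

    ns, rs = published_studies()
    r_n_eta = np.corrcoef(ns, rs)[0, 1]        # negative under publication bias
    corrected = rs.mean() * (1 - r_n_eta ** 2)  # the corrected formula
    print(f"true = 0.15, published mean = {rs.mean():.3f}, "
          f"corrected = {corrected:.3f}")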

A very important point emerges from the work of Kühberger et al. (2013). Inferential testing and measurement can be incompatible in the extreme, and even under the best of circumstances they only approach compatibility. Unless power reaches the virtually impossible level of 100%, there will always be inaccuracy in effect sizes. Thus, we suggest that, for those using a corrective approach, a good approximation is a reasonable achievement and can guide future research until the phenomenon is well known and ready for more accurate and precise measurement.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

References

Bradley, M. T., & Brand, A. (2013). Sweeping recommendations regarding effect size and sample size can miss important nuances: A comment on “A comprehensive review of reporting practices in psychological journals.” Theory & Psychology, 23, 797–800. doi: 10.1177/0959354313491854

Bradley, M. T., & Stoica, G. (2004). Diagnosing estimate distortion due to significance testing in literature on the detection of deception. Perceptual and Motor Skills, 98(3), 827–839. doi: 10.2466/pms.98.3.827-839

Kühberger, A., Scherndl, T., & Fritz, A. (2013). On the correlation between effect size and sample size: A reply. Theory & Psychology, 23, 801–804. doi: 10.1177/0959354313500863

Author biographies

Michael T. Bradley, PhD, is a Professor of Psychology at the University of New Brunswick. He teaches courses on Introductory Psychology, Social Psychology, and Cognition. His research interests include information detection with the polygraph and general problems resulting from a historic emphasis on inferential statistics over accurate measurement. Email: Bradley@UNB.ca

Andrew Brand, PhD, is a data analyst for the North Wales Organisation for Randomised Trials in Health (& Social Care), Institute of Medical & Social Care Research (IMSCaR), at Bangor University. He is also the creator of iPsychExpts (www.ipsychexpts.com), a website that encourages and promotes the use of web experiments for conducting psychological research. Email: a.brand@bangor.ac.uk