The idea of causal inference can be considered in the context of everyday life. We observe many correlations throughout our days; however, although correlation is a necessary condition for causal inference, it is not a sufficient one (Warner, 2021a). This is especially true because real-life correlations are not perfect, so individual outcomes may differ substantially (Warner, 2021a). One familiar example of an imperfect correlation involves communicable viral disease. In light of the post hoc, ergo propter hoc fallacy, we may develop particular feelings about interacting with sick neighbors, reasoning that because we interacted with a sick person and then became sick, the interaction must have caused the sickness (Warner, 2021a). Although we make causal inferences in everyday life, these are usually not based on proper reasoning and amount to little more than opinion.
Additionally, even in research, ensuring that the conditions for causal inference are met is a complicated process. Research in areas such as developmental psychology, which depends heavily on correlational data, must address biases that make it difficult to rule out alternative explanations for an observed effect (Miller et al., 2016). Even in more rigorous randomized trials, researchers contend with dropout and non-compliance, raising the question of whether the results truly support causal inferences and therefore represent an improvement over observational data (VanderWeele et al., 2016). Even replications of studies must be conducted carefully to prevent the perpetuation of a causal inference based on biased statistical analysis (Larzelere et al., 2015). Therefore, whether in everyday life or in the research laboratory, a proper understanding of the conditions needed to make an appropriate causal inference is necessary.
The idea of causal inference can also be applied to the way we approach ministry. The strongest evidence for causal inference is provided when no other explanation can be given for changes in the dependent variable (Warner, 2021a). In a highly personal way, a person who has received the Spirit begins to change. Although these changes are often portrayed externally, internally the person is often cognizant that nothing else could have elicited such a change in behavior, level of repentance, and willingness to die for the Truth. One way to approach ministry, then, is to teach believers the importance of self-reflection regarding their walk. Paul reminds the Corinthian believers to examine themselves as to whether they are in the belief, and to prove themselves (The Scriptures, 2018, 2 Corinthians 13:5). If believers ever credit the change in their lives to anything other than the Father and Son, through the Spirit, then doubt may creep in regarding their own power and strength.
Another basic aspect of adequate research concerns p-values. Adherence to null-hypothesis significance testing (NHST) leads to a dependence on p-values to determine whether the null hypothesis can be rejected or retained (Warner, 2021b). However, misunderstandings about the interpretation of p-values can lead to inaccurate reporting of data (Warner, 2021b). Additionally, obtaining a small p-value in one study does not guarantee that the result will replicate in future studies (Warner, 2021b). One reason is that the p-value is easily influenced by factors other than the dependent variable, such as the number of analyses run and other decisions researchers make during data collection and analysis (Warner, 2021b).
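The point about replicability can be illustrated with a small simulation. The sketch below (stdlib-only, not drawn from the sources; the effect size, sample size, and number of replications are illustrative assumptions) repeatedly runs the same modest-effect study and counts how often p < .05 is reached:

```python
import math
import random

def z_test_p(sample, mu0=0.0, sigma=1.0):
    # Two-sided z-test p-value for H0: mean == mu0, with known sigma
    # (a simplification used here purely for illustration).
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
# Simulate 1,000 replications of one study: true mean 0.3, sigma 1, n = 30.
pvals = [z_test_p([random.gauss(0.3, 1.0) for _ in range(30)])
         for _ in range(1000)]
power = sum(p < .05 for p in pvals) / len(pvals)
# Even though a real effect exists, only a minority of replications
# reach p < .05, so one significant p-value does not guarantee replication.
```

Under these assumptions, fewer than half of the replications come out significant, which is exactly why a single small p-value is weak evidence that future studies will succeed.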
Several major alternatives to NHST with α < .05 have been proposed. These include confidence intervals, effect-size estimates, and the combination of effect sizes through meta-analysis (Warner, 2021b). Given that dependence on NHST is tied to the interpretation of p-values, it is understandable to seek better ways to represent data, which may include the alternatives just listed. However, the underlying issue is education, so that misinterpretations of whichever statistic is used are minimized (Lakens, 2021). Although there is a considerable effort to move away from NHST and p-values, they are still widely used, requiring researchers and readers alike to understand common pitfalls, to improve the work being done, and to sharpen the critical analysis of data presented in academic papers (Lakens, 2021).
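Two of these alternatives are straightforward to compute. The sketch below (stdlib-only; the treatment/control data are invented for illustration) shows a standardized effect size (Cohen's d) and an approximate 95% confidence interval for a mean difference, both of which convey magnitude rather than a bare significance verdict:

```python
import math
import statistics

def cohens_d(a, b):
    # Standardized mean difference using the pooled sample SD:
    # an effect-size alternative to reporting only a p-value.
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(((na - 1) * statistics.variance(a) +
                           (nb - 1) * statistics.variance(b)) / (na + nb - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

def diff_ci95(a, b):
    # Approximate 95% CI for the difference in means,
    # using the normal critical value 1.96 (a simplification).
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    diff = statistics.mean(a) - statistics.mean(b)
    return (diff - 1.96 * se, diff + 1.96 * se)

# Hypothetical scores for two groups (illustrative only).
treatment = [5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.3, 6.2]
control = [4.2, 4.9, 4.1, 5.0, 3.8, 4.5, 4.6, 4.0]
d = cohens_d(treatment, control)
lo, hi = diff_ci95(treatment, control)
```

A confidence interval that excludes zero carries the same reject-the-null message as p < .05, but unlike the p-value it also communicates how large and how precisely estimated the difference is.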
P-hacking can be described as the relentless analysis of data until an arbitrarily defined statistically significant value is reached (Guo & Ma, 2022). Common practices that constitute p-hacking include conducting numerous tests on the data until a small enough p-value is attained, editing observations until the alternative hypothesis is supported, and trying different statistical models until the desired p-value is found; all of these expand the pool of results from which an acceptable p-value may eventually be discovered (Guo & Ma, 2022). As a result, research findings may be compromised by the arbitrary choices researchers make while ransacking the data for the proverbial acceptable value (Wicherts et al., 2016). Moreover, future studies attempting to replicate such work are subject to irreproducibility because the original findings were not authentic (Guo & Ma, 2022). In sum, the search for a p-value below an arbitrary threshold can cause researchers to lose sight of whether findings are meaningful in the real world, not merely statistically significant (Guo & Ma, 2022).
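The danger of one of these practices, running many tests and keeping the best one, can be quantified with a simulation. The sketch below (stdlib-only; the ten "outcome variables" per study are an illustrative assumption) generates data in which the null hypothesis is true for every outcome, then reports the smallest of ten p-values per study:

```python
import math
import random

def z_p(sample):
    # Two-sided z-test p-value against a true mean of 0, sigma = 1
    # (an illustrative simplification).
    n = len(sample)
    z = (sum(sample) / n) / (1 / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(7)
trials, k = 1000, 10  # 1,000 simulated studies, 10 outcomes "tried" per study
false_positives = 0
for _ in range(trials):
    # All k outcomes are pure noise: every null hypothesis is true.
    ps = [z_p([random.gauss(0.0, 1.0) for _ in range(30)]) for _ in range(k)]
    if min(ps) < .05:  # the p-hacker reports only the best-looking test
        false_positives += 1
rate = false_positives / trials
# With 10 uncorrected tests, the chance of at least one "significant"
# result is roughly 1 - 0.95**10, about 40%, far above the nominal 5%.
```

The simulation makes the cost concrete: a researcher who tries ten analyses and reports the winner will claim a "significant" finding in roughly four out of ten studies even when no effect exists, which is precisely why such results fail to replicate.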
References
Guo, D., & Ma, Y. (2022). The "p-hacking-is-terrific" ocean: A cartoon for teaching statistics. Teaching Statistics, 44, 68-72. https://doi.org/10.1111/test.12305
Lakens, D. (2021). The practical alternative to the p value is the correctly used p value. Perspectives on Psychological Science, 16(3), 639-648. https://doi.org/10.1177/1745691620958012
Larzelere, R. E., Cox, R. B., Jr., & Swindle, T. M. (2015). Many replications do not causal inferences make: The need for critical replications to test competing explanations of nonrandomized studies. Perspectives on Psychological Science, 10(3), 380-389. https://doi.org/10.1177/1745691614567904
Miller, P., Henry, D., & Votruba-Drzal, E. (2016). Strengthening causal inference in developmental research. Child Development Perspectives, 10(4), 275-280. https://doi.org/10.1111/cdep.12202
The Scriptures. (2018). Institute for Scripture Research.
VanderWeele, T. J., Jackson, J. W., & Li, S. (2016). Causal inference and longitudinal data: A case study of religion and mental health. Social Psychiatry & Psychiatric Epidemiology, 51, 1457-1466. https://doi.org/10.1007/s00127-016-1281-9
Warner, R. (2021a). Applied statistics I: Basic bivariate techniques (3rd ed.). SAGE Publications, Inc.
Warner, R. (2021b). Applied statistics II: Multivariable and multivariate techniques (3rd ed.). SAGE Publications, Inc.
Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832. https://doi.org/10.3389/fpsyg.2016.01832
