Research Article | Open Access
Volume 13 | Issue 2 | Year 2026 | Article Id. IJHSS-V13I2P103 | DOI: https://doi.org/10.14445/23942703/IJHSS-V13I2P103

Quality Control Strategies for Research Data Collection Instruments
Bostley Muyembe Asenahabi, Titus Mukisa Muhambe
| Received | Revised | Accepted | Published |
|---|---|---|---|
| 10 Feb 2026 | 14 Mar 2026 | 29 Mar 2026 | 13 Apr 2026 |
Citation:
Bostley Muyembe Asenahabi, Titus Mukisa Muhambe, "Quality Control Strategies for Research Data Collection Instruments," International Journal of Humanities and Social Science, vol. 13, no. 2, pp. 26-32, 2026. Crossref, https://doi.org/10.14445/23942703/IJHSS-V13I2P103
Abstract
Quality control in data collection instruments is vital for ensuring the integrity and applicability of research findings. Poorly validated or unreliable tools can compromise measurement accuracy, weaken causal inferences, and limit generalizability. To achieve high-quality studies, researchers should integrate multiple forms of validity testing (face, content, construct, and criterion validity) alongside diverse reliability assessments, including internal consistency, test–retest, and inter-rater reliability. This ensures that instruments comprehensively measure the intended constructs and consistently yield stable results across contexts. At the study level, internal validity can be strengthened through randomization, control groups, standardized procedures, and the elimination of confounders; external validity can be improved through representative sampling, replication across diverse contexts, ecological relevance, and cross-validation. Together, these strategies minimize measurement error, enhance reproducibility, and advance methodological rigor, ultimately safeguarding the credibility and impact of empirical research.
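To illustrate one of the reliability assessments named above, internal consistency, the following is a minimal sketch of Cronbach's alpha in Python. The data matrix and function name are hypothetical, purely illustrative, and not drawn from the article itself.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents, 4 Likert-type items
data = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
print(round(cronbach_alpha(data), 3))  # → 0.962
```

Values of alpha near 1 indicate that items covary strongly (high internal consistency); conventions such as alpha ≥ 0.7 as "acceptable" are discussed in Tavakol and Dennick (2011), cited below.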
Keywords
Control, Validity, Reliability, Internal Validity, External Validity, Data Collection Instruments.
References
- Anne Anastasi, and Susana Urbina, Psychological Testing, 8th Ed., Pearson, 2017.
- Richard P. Bagozzi, and Youjae Yi, “On the Evaluation of Structural Equation Models,” Journal of the Academy of Marketing Science, vol. 40, no. 1, pp. 34-50, 2012.
- Ronald Jay Cohen, and Mark E. Swerdlik, Psychological Testing and Assessment, 8th Ed., McGraw-Hill, 2018.
- Leandre R. Fabrigar et al., “Evaluating the Use of Exploratory Factor Analysis in Psychological Research,” Psychological Methods, vol. 4, no. 3, pp. 272-299, 1999.
- Andy Field, Discovering Statistics Using IBM SPSS Statistics, 4th Ed., Sage Publications, 2013.
- Joseph F. Hair et al., Multivariate Data Analysis, 8th Ed., Cengage, 2019.
- Kevin A. Hallgren, “Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial,” Tutorials in Quantitative Methods for Psychology, vol. 8, no. 1, pp. 23-34, 2012.
- Richard Karnia, “Importance of Reliability and Validity in Research,” Psychology and Behavioral Sciences, vol. 13, no. 6, pp. 137-141, 2024.
- Rex B. Kline, Principles and Practice of Structural Equation Modeling, 4th Ed., Guilford Press, 2016.
- Terry K. Koo, and Mae Y. Li, “A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research,” Journal of Chiropractic Medicine, vol. 15, no. 2, pp. 155-163, 2016.
- Scott E. Maxwell et al., “Is Psychology Suffering from a Replication Crisis? What Does ‘Failure to Replicate’ Really Mean?,” American Psychologist, vol. 70, no. 6, pp. 487-498, 2015.
- Mary L. McHugh, “Interrater Reliability: The Kappa Statistic,” Biochemia Medica, vol. 22, no. 3, pp. 276-282, 2012.
- Samuel Messick, “Validity of Psychological Assessment: Validation of Inferences from Persons' Responses and Performances as Scientific Inquiry into Score Meaning,” American Psychologist, vol. 50, no. 9, pp. 741-749, 1995.
- Denise F. Polit, and Cheryl Tatano Beck, Nursing Research: Generating and Assessing Evidence for Nursing Practice, 9th Ed., Lippincott Williams & Wilkins, 2012.
- Mohsen Tavakol, and Reg Dennick, “Making Sense of Cronbach's Alpha,” International Journal of Medical Education, vol. 2, pp. 53-55, 2011.
- Stephen G. West, and Felix Thoemmes, “Campbell's and Rubin's Perspectives on Causal Inference,” Psychological Methods, vol. 15, no. 1, pp. 18-37, 2010.
- Muhamad Saiful Bahri Yusoff, “ABC of Content Validation and Content Validity Index Calculation,” Education in Medicine Journal, vol. 11, no. 2, pp. 49-54, 2019.