DEVELOPMENT OF COMPUTER-BASED DIAGNOSTIC ASSESSMENT SYSTEM: CASE STUDY OF EQUIVALENCE OF PAPER-AND-PENCIL AND COMPUTER-BASED TESTING

Authors

  • Ģirts Burgmanis University of Latvia
  • Marta Mikīte University of Latvia
  • Ilze France University of Latvia
  • Dace Namsone University of Latvia

DOI:

https://doi.org/10.17770/sie2023vol1.7096

Keywords:

assessment, computer-based assessment, diagnostic, literacy, numeracy, paper-based assessment

Abstract

Over the last two decades, computer-based assessment has become an important part of supporting teaching and learning. It is seen as a way to implement assessment for learning in schools and to provide immediate, real-time feedback on students’ performance. The research literature on computer-based assessment suggests that, before implementing a test, every developer of a measurement instrument has to provide evidence that the computer-based and paper-based versions are equivalent and yield consistent measures. There is a risk that properties of computer-based assessment, including unfamiliarity with the system and students’ level of digital skills, can seriously affect students’ performance. This paper focuses on a computer-based diagnostic assessment system designed to support numeracy and literacy teaching and learning. The aim of this study is to confirm that the literacy and numeracy measurement instruments developed in the diagnostic assessment system provide results consistent with the paper-based versions of both instruments. Data were collected by administering four tests: two computer-based literacy and numeracy diagnostic assessments and two paper-based versions. By analyzing both versions of the assessments using various statistical techniques, we explore differences in students’ performance. Our results showed that, at this phase of the system’s development, students who completed the computer-based test versions performed similarly to or better than their counterparts who completed the paper-based versions.

Author Biographies

  • Ģirts Burgmanis, University of Latvia
    Interdisciplinary Centre for Educational Innovation of University of Latvia
  • Marta Mikīte, University of Latvia
    Interdisciplinary Centre for Educational Innovation of University of Latvia
  • Ilze France, University of Latvia
    Interdisciplinary Centre for Educational Innovation of University of Latvia

References

Bennett, R. E. (1998). Reinventing Assessment. Speculations on the Future of Large-Scale Educational Testing. A Policy Information Perspective. Princeton, NJ: Educational Testing Service, Policy Information Center.

Bennett, R. E. (2015). The changing nature of educational assessment. Review of Research in Education, 39(1), 370-407.

Chan, S., Bax, S., & Weir, C. (2018). Researching the comparability of paper-based and computer-based delivery in a high-stakes writing test. Assessing Writing, 36, 32-48.

Clariana, R., & Wallace, P. (2002). Paper-based versus computer-based assessment: key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602.

Drasgow, F., Luecht, R. M., & Bennett, R. E. (2006). Technology and testing. Educational Measurement, 4, 471-515.

Gallagher, A., Bridgeman, B., & Cahalan, C. (2002). The effect of computer‐based tests on racial‐ethnic and gender groups. Journal of Educational Measurement, 39(2), 133-147.

McClelland, T., & Cuevas, J. A. (2020). A comparison of computer based testing and paper and pencil testing in mathematics assessment. The Online Journal of New Horizons in Education, 10(2), 78-89.

McDonald, A. S. (2002). The impact of individual differences on the equivalence of computer-based and paper-and-pencil educational assessments. Computers & Education, 39(3), 299-312.

Noyes, J., Garland, K., & Robbins, L. (2004). Paper‐based versus computer‐based assessment: is workload another test mode effect? British Journal of Educational Technology, 35(1), 111-113.

Parshall, C. G., & Harmes, J. C. (2008). The design of innovative item types: Targeting constructs, selecting innovations, and refining prototypes. CLEAR Exam Review, 19(2), 18-25.

Popp, E. C., Tuzinski, K., & Fetzer, M. (2015). Actor or Avatar?: Considerations in Selecting Appropriate Formats for Assessment Content. In F. Drasgow (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 79-103). Abingdon, UK: Routledge.

Puhan, G., Boughton, K., & Kim, S. (2007). Examining Differences in Examinee Performance in Paper and Pencil and Computerized Testing. Journal of Technology, Learning, and Assessment, 6(3), 1-21.

Smolinsky, L., Marx, B. D., Olafsson, G., & Ma, Y. A. (2020). Computer-based and paper-and-pencil tests: A study in calculus for STEM majors. Journal of Educational Computing Research, 58(7), 1256-1278.

Way, W. D., Davis, L. L., Keng, L., & Strain-Seymour, E. (2015). From standardization to personalization: The comparability of scores based on different testing conditions, modes, and devices. In F. Drasgow (Ed.), Technology and testing: Improving educational and psychological measurement (Vol. 2, pp. 260-284). Abingdon, UK: Routledge.

Weiss, D. J. (1982). Improving measurement quality and efficiency with adaptive testing. Applied Psychological Measurement, 6(4), 473-492.

Published

2023-07-03