AUTHOR=Curcin Milja, Lee Ming Wei
TITLE=Evaluating accuracy and bias of different comparative judgment equating methods against traditional statistical equating
JOURNAL=Frontiers in Education
VOLUME=10
YEAR=2025
URL=https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2025.1538486
DOI=10.3389/feduc.2025.1538486
ISSN=2504-284X
ABSTRACT=Traditional common-item or common-person statistical equating cannot always be used for standard maintaining or linking between test forms. In some contexts, comparative judgment (CJ) methods, which capture expert judgment of the quality of student work on different test forms, have been trialed for this purpose. While the plausibility, reliability, and replicability of CJ outcomes have been shown to be high, little research has established the extent of CJ accuracy, that is, agreement between CJ outcomes and outcomes established by robust statistical equating. We report on the accuracy of outcomes from several trials and replications of different CJ methods and different associated analytical approaches, compared to operational IRT statistical equating, demonstrating largely close alignment between the two. We also compare different CJ methods (and different analytical approaches) in terms of outcome precision, replicability, and evidence of bias in expert judgment (that is, a tendency to prefer student work on easier test forms). We discuss the advantages and disadvantages of different CJ methods and analytical approaches and their potential for informing standard maintaining in different contexts.