%0 Journal Article
%T Evaluating an Automated Number Series Item Generator Using Linear Logistic Test Models
%A Bao Sheng Loe
%A Filip Simonfy
%A Luning Sun
%A Philipp Doebler
%J Journal of Intelligence
%D 2018
%R 10.3390/jintelligence6020020
%X This study investigates the item properties of a newly developed Automatic Number Series Item Generator (ANSIG). The ANSIG is built on five hypothesised cognitive operators. Thirteen item models were developed using the numGen R package, eleven of which were evaluated in this study. The 16-item ICAR (International Cognitive Ability Resource) short-form ability test was used to evaluate construct validity. The Rasch Model and two Linear Logistic Test Models (LLTM) were employed to estimate and predict the item parameters. Results indicate that a single factor determines performance on tests composed of items generated by the ANSIG. Under the LLTM approach, all the cognitive operators were significant predictors of item difficulty. Moderate to high correlations were evident between the number series items and the ICAR test scores, with a high correlation found for the ICAR Letter-Numeric-Series items, suggesting adequate nomothetic span. Extended cognitive research is, nevertheless, essential for the automatic generation of an item pool with predictable psychometric properties.
%K cognitive models
%K automatic item generation
%K number series
%K Rasch model
%K Linear Logistic Test Models
%U https://www.mdpi.com/2079-3200/6/2/20