Abstract
This study employed repeated measures ANOVA to assess the reliability of an instrument designed to measure utilization, awareness, and perception of AI in research among 150 undergraduate students. Validated instruments with robust psychometric properties were used for the study. Data collection occurred in three phases spaced two weeks apart, following expert recommendations for longitudinal research. Initial findings using Cronbach’s alpha indicated high reliability in the first phase. However, subsequent test-retest analyses revealed reliability coefficients that declined below acceptable thresholds for the utilization, awareness, and perception constructs. Further analysis using repeated measures ANOVA showed significant differences in mean scores across the three phases, suggesting inconsistency in respondents’ perceptions over time. The study underscores the dynamic nature of attitudes towards AI, which must be considered carefully in longitudinal research designs. Methodologically, it highlights the limitations of relying solely on static reliability estimates such as Cronbach’s alpha. Practically, the findings point to the need for continuous refinement of measurement instruments to capture evolving attitudes accurately. Theoretically, the study advances understanding of reliability in dynamic contexts and prompts future research to explore more robust statistical methods and measurement approaches for studying attitudes towards emerging technologies.
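The analysis pipeline summarized above (Cronbach’s alpha within each phase, test-retest correlations between phases, and a repeated measures ANOVA across the three phases) can be illustrated with the minimal sketch below. The data are simulated and all variable names, item counts, and library choices (NumPy, pandas, statsmodels) are assumptions for illustration only; this is not the authors’ instrument or analysis code.

```python
# Minimal sketch of the reliability workflow described in the abstract,
# run on simulated data. All names and values here are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(42)
n_students, n_items, n_phases = 150, 10, 3  # assumed instrument size

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Simulated Likert-style responses (1-5) for each of the three phases.
phases = [rng.integers(1, 6, size=(n_students, n_items)).astype(float)
          for _ in range(n_phases)]

# Internal consistency within each phase.
for i, items in enumerate(phases, start=1):
    print(f"Phase {i} Cronbach's alpha: {cronbach_alpha(items):.3f}")

# Test-retest reliability: correlation of total scores between adjacent phases.
totals = np.column_stack([items.sum(axis=1) for items in phases])
for i in range(n_phases - 1):
    r = np.corrcoef(totals[:, i], totals[:, i + 1])[0, 1]
    print(f"Test-retest r (phase {i + 1} vs {i + 2}): {r:.3f}")

# Repeated measures ANOVA on total scores across the three phases.
long_df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_students), n_phases),
    "phase": np.tile(np.arange(1, n_phases + 1), n_students),
    "score": totals.flatten(),
})
res = AnovaRM(data=long_df, depvar="score",
              subject="subject", within=["phase"]).fit()
print(res.anova_table)
```

With real data, the simulated score matrices would be replaced by the instrument’s item responses for each construct (utilization, awareness, perception) at each phase; the reliability and ANOVA steps remain the same.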
License
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Article Type: Research Article
PEDAGOGICAL RES, Volume 10, Issue 2, April 2025, Article No: em0239
https://doi.org/10.29333/pr/16402
Publication date: 20 May 2025