Numerous AI ethics checklists and frameworks have been proposed, focusing on different dimensions of ethical AI such as fairness, explainability, and safety. Yet no such work has been done on developing transparent AI systems for real-world educational scenarios. This paper presents a Transparency Index framework that was iteratively co-designed with different stakeholders of AI in education, including educators, ed-tech experts, and AI practitioners. We map the transparency requirements of different categories of stakeholders of AI in education and demonstrate that transparency considerations are embedded in the entire AI development process, from data collection through to the point where the AI system is deployed in the real world and iteratively improved. We also demonstrate how transparency enables the implementation of other ethical AI dimensions in education, such as interpretability, accountability, and safety. In conclusion, we discuss directions for future research in this newly emerging field. The main contribution of this study is that it highlights the importance of transparency in developing AI-powered educational technologies and proposes an index framework for its conceptualization for AI in education.