Integration of an Explainable Dashboard to Enhance AutoML Transparency
Explainable Artificial Intelligence (XAI) improves the transparency of machine learning models and fosters user trust. However, many Automated Machine Learning (AutoML) solutions lack accessible explanation mechanisms, which limits their adoption among non-experts and reduces confidence in the generated models. This work integrates an XAI layer into a web-based AutoML platform to enhance interpretability, broaden user adoption, and move toward more responsible AutoML. Local and global explanation techniques (SHAP, LIME, surrogate decision trees, and what-if analyses) were implemented within the existing pipeline. The resulting dashboard was then evaluated through interviews and usability questionnaires with two user groups: data science professionals and domain specialists. The findings, based on Post-Study System Usability Questionnaire (PSSUQ) scores and qualitative feedback, indicate that users valued the system’s ability to clarify predictions and reported greater confidence and understanding when interacting with the explained models. The results suggest that embedding accessible, user-friendly explanations in AutoML platforms empowers users by strengthening their understanding and confidence, and they point to further opportunities for XAI-driven AutoML methods that yield more transparent and explainable AI solutions.
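To make the named explanation techniques concrete, the sketch below illustrates, in a hedged and simplified form, how SHAP, LIME, a surrogate decision tree, and a basic what-if analysis might be wired around a trained model using the shap, lime, and scikit-learn packages. The dataset, the RandomForestClassifier stand-in for an AutoML-produced model, and all variable names are assumptions for illustration only; they do not reflect the platform's actual pipeline or dashboard code.

```python
# Illustrative sketch of the explanation techniques named in the abstract:
# SHAP (global/local attributions), LIME (local explanation), a surrogate
# decision tree (global approximation), and a simple what-if perturbation.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for a model produced by the AutoML pipeline (hypothetical choice).
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# --- SHAP: feature attributions for a tree ensemble. ---
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)

# --- LIME: local explanation of a single prediction. ---
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(lime_exp.as_list())

# --- Surrogate decision tree: a shallow, interpretable tree fitted on the
# black-box model's own predictions to approximate its global behavior. ---
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, model.predict(X_train))
print(export_text(surrogate, feature_names=list(data.feature_names)))

# --- What-if analysis: perturb one feature of an instance and compare the
# predicted probabilities before and after the change. ---
instance = X_test[0].copy()
baseline = model.predict_proba([instance])[0]
instance[0] *= 1.10  # hypothetical 10% increase in the first feature
what_if = model.predict_proba([instance])[0]
print("baseline:", baseline, "what-if:", what_if)
```

In a dashboard setting, the outputs of each step (SHAP attributions, LIME weight lists, the surrogate tree's rules, and the what-if probability deltas) would be rendered as interactive visualizations rather than printed, but the underlying calls follow the same pattern.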