Assistive Mobile Application for Visually Impaired Individuals Using Real-Time Object Recognition with Voice Feedback
Visual impairment significantly affects individuals' independence and mobility, making daily navigation challenging. This paper presents an assistive mobile application leveraging Artificial Intelligence (AI) for real-time object detection and recognition, integrated with voice feedback to enhance accessibility. The application employs the YOLO (You Only Look Once) algorithm, trained on a diverse dataset to ensure accurate detection in various environments. A text-to-speech (TTS) system provides real-time audio descriptions, allowing users to receive essential information about their surroundings. To optimize performance, the system is deployed on mobile devices using TensorFlow Lite, ensuring efficient on-device inference with minimal latency. Extensive testing evaluates accuracy, response time, and usability, demonstrating high object recognition performance across different scenarios. Results show that the system operates effectively in both indoor and outdoor environments, adapting to varying lighting conditions and object types. Additionally, the lightweight implementation ensures that the application runs smoothly on consumer-grade smartphones, making it an accessible and cost-effective solution. The proposed approach contributes to advancing AI-driven assistive technologies, offering a scalable, user-friendly, and practical tool that empowers visually impaired individuals to navigate their surroundings with greater confidence and autonomy. This study highlights the transformative potential of AI in enhancing accessibility and inclusion, paving the way for future advancements in smart assistive systems.
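To make the on-device pipeline described above concrete, the Kotlin sketch below pairs a TensorFlow Lite interpreter with Android's built-in TextToSpeech engine: a camera frame is preprocessed, passed through a detection model, and the highest-confidence detection is announced aloud. This is a minimal illustration, not the paper's actual implementation; the model file name, 320x320 input size, confidence threshold, and output tensor layout are assumptions for exposition, since exported YOLO models vary in shape and post-processing.

```kotlin
// Minimal sketch of TFLite inference + voice feedback. Assumes a YOLO-style
// model ("yolo_detect.tflite", a hypothetical asset name) whose output layout
// matches the parsing below; real exports differ, so adapt accordingly.
import android.content.Context
import android.graphics.Bitmap
import android.speech.tts.TextToSpeech
import org.tensorflow.lite.Interpreter
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.channels.FileChannel

class DetectionAnnouncer(context: Context, private val labels: List<String>) {

    // Memory-map the bundled model file for the TFLite interpreter.
    private val interpreter = Interpreter(
        context.assets.openFd("yolo_detect.tflite").let { fd ->
            fd.createInputStream().channel.map(
                FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength
            )
        }
    )

    // Android's built-in TTS engine provides the audio feedback channel.
    private val tts = TextToSpeech(context) { /* init status handled elsewhere */ }

    fun detectAndSpeak(frame: Bitmap) {
        val input = preprocess(frame)
        // Assumed output layout: [1][MAX_DET][6] = x, y, w, h, score, classId.
        val output = Array(1) { Array(MAX_DET) { FloatArray(6) } }
        interpreter.run(input, output)

        val best = output[0].maxByOrNull { it[4] } ?: return
        if (best[4] > CONF_THRESHOLD) {
            val label = labels.getOrElse(best[5].toInt()) { "object" }
            // QUEUE_FLUSH drops stale announcements so feedback stays current.
            tts.speak("$label ahead", TextToSpeech.QUEUE_FLUSH, null, "det")
        }
    }

    // Scale the frame to the assumed model input size and normalize RGB to [0, 1].
    private fun preprocess(frame: Bitmap): ByteBuffer {
        val size = 320
        val scaled = Bitmap.createScaledBitmap(frame, size, size, true)
        val buf = ByteBuffer.allocateDirect(4 * size * size * 3).order(ByteOrder.nativeOrder())
        val pixels = IntArray(size * size)
        scaled.getPixels(pixels, 0, size, 0, 0, size, size)
        for (p in pixels) {
            buf.putFloat(((p shr 16) and 0xFF) / 255f)
            buf.putFloat(((p shr 8) and 0xFF) / 255f)
            buf.putFloat((p and 0xFF) / 255f)
        }
        return buf
    }

    companion object {
        const val MAX_DET = 25          // assumed maximum detections per frame
        const val CONF_THRESHOLD = 0.5f // assumed announcement threshold
    }
}
```

In a full application this class would be driven by a camera callback (e.g., CameraX's image analysis use case), with the memory-mapped model keeping load time and memory overhead low enough for the consumer-grade smartphones the abstract targets.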