Empowering Accessibility: An AI-Driven Assistive System for the Visually Impaired
We present an AI-powered assistive system that enhances accessibility for visually impaired individuals by integrating real-time object detection, direct convolutional text-to-speech (DCTTS), and a large language model (LLM) for context-aware decision-making. Built on a scalable cloud-based architecture, the system combines YOLOv8 and SSD for object recognition, DCTTS for natural speech synthesis, and LLMs for intelligent navigation support. To evaluate system performance, we analyze detection accuracy, speech latency, and user interaction efficiency, demonstrating a significant improvement in accessibility for visually impaired individuals. Compared to traditional rule-based approaches, the proposed system offers greater adaptability, real-time responsiveness, and enhanced personalization. Future enhancements will focus on on-device inference for reduced latency, integration of vision-language models (VLMs), and personalized user feedback mechanisms to further refine assistive capabilities. This study highlights the potential of AI-powered accessibility solutions in fostering greater independence and safety for individuals with visual impairments.
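To make the detection-to-speech pipeline concrete, the following is a minimal sketch of the loop described above. It assumes the ultralytics YOLOv8 API for object detection; the functions `llm_describe` and `dctts_synthesize` are hypothetical placeholders standing in for the LLM guidance and DCTTS synthesis components, whose exact interfaces are not specified in this abstract.

```python
# Sketch of the perception -> reasoning -> speech loop (not the paper's
# reference implementation). Assumes the ultralytics YOLOv8 API; the
# LLM and DCTTS stages are placeholder stubs.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained YOLOv8 nano weights

def llm_describe(labels):
    """Placeholder: an LLM would turn detected labels into navigation guidance."""
    return "Ahead: " + ", ".join(sorted(set(labels)))

def dctts_synthesize(text):
    """Placeholder: a DCTTS model would render the guidance text as speech."""
    print(f"[speak] {text}")

cap = cv2.VideoCapture(0)  # live camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)  # run object detection on the frame
    labels = [model.names[int(c)] for c in results[0].boxes.cls]
    if labels:
        dctts_synthesize(llm_describe(labels))
cap.release()
```

In a cloud-based deployment such as the one outlined here, the detection and synthesis stages would run as remote services rather than in-process calls, which is what motivates the stated interest in on-device inference to reduce round-trip latency.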