DreamEyes is an AI project addressing the challenges faced by the visually impaired community. As of 2021, approximately 36 million people worldwide live with visual impairment, which affects their daily tasks and social interactions. DreamEyes uses deep learning to generate spoken, narrative descriptions of the surrounding environment, improving the day-to-day experience of visually impaired users.
- Real-World Translation: DreamEyes translates real-world data into rich, understandable paragraphs and converts them into audio using deep learning models.
- Wearable Technology: The project features a "smart" cap equipped with IoT components, including microprocessors that analyze real-world data and provide users with useful information about their surroundings.
- Deep Learning Algorithms: Inspired by the learning mechanisms of the human brain, DreamEyes employs deep learning algorithms. As a subset of machine learning, these algorithms enable the device to learn and make decisions from large, complex datasets.
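The components above form a capture → caption → speech loop. A minimal, hypothetical sketch of that loop is shown below; the function names (`caption_frame`, `speak`, `describe_surroundings`) are illustrative placeholders, not the actual DreamEyes code:

```python
def caption_frame(frame):
    """Stand-in for the deep learning captioning model.

    A real system would run a trained network over the camera frame;
    here we return a fixed description for illustration only."""
    return "a person is crossing the street ahead"

def speak(text):
    """Stand-in for a text-to-speech engine running on the cap's microprocessor."""
    return f"[audio] {text}"

def describe_surroundings(frame):
    # Translate real-world data (one camera frame) into narrated audio.
    caption = caption_frame(frame)
    return speak(caption)

print(describe_surroundings(frame=None))
```

In the wearable device, this loop would run continuously, narrating each new scene as the camera observes it.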
The repository includes:
- Conceptual Design: Describes the assistive tool's conceptual design, including the references and techniques used in developing DreamEyes.
- Production Section: Explores the key concepts behind the creation of DreamEyes, such as video captioning and the encoder-decoder structure.
- Assessment Section: Evaluates the quality of the scientific approaches, providing feedback on the initial part of the report.
- Software Design: Explains how the software was designed, highlighting the approaches followed in the production phase.
- Evaluation Section: Concludes with an assessment of the final results, offering insights into how far the project's goals were met.
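The encoder-decoder structure behind video captioning can be illustrated with a toy sketch: the encoder compresses a sequence of per-frame features into one context vector, and the decoder emits a caption one word at a time. The weights and vocabulary below are random placeholders for illustration, not a trained DreamEyes model:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<start>", "<end>", "a", "person", "walks"]
EMBED_DIM = 8

def encode(frames):
    """Encoder: mean-pool per-frame feature vectors into one context vector.

    frames has shape (num_frames, EMBED_DIM)."""
    return frames.mean(axis=0)

# Random projection from decoder state to vocabulary scores (untrained).
W = rng.normal(size=(EMBED_DIM, len(VOCAB)))

def decode(context, max_len=4):
    """Decoder: greedily emit one word per step, conditioned on the context."""
    words = []
    state = context
    for _ in range(max_len):
        logits = state @ W              # score every vocabulary word
        idx = int(np.argmax(logits))    # greedy choice
        if VOCAB[idx] == "<end>":
            break
        words.append(VOCAB[idx])
        state = np.tanh(state + W[:, idx])  # fold the emitted word back into the state
    return words

frames = rng.normal(size=(10, EMBED_DIM))   # ten fake frame-feature vectors
caption = decode(encode(frames))
print(caption)
```

A real captioning model would replace the mean-pooling encoder with a convolutional or recurrent network and train `W` (and the state update) on paired video/caption data; the sketch only shows the information flow.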
Your feedback and contributions are welcome to further enhance the capabilities of this impactful project. The code for this project is available upon request. All requests are to be made to my email: [email protected]