Project Overview
The project aims to address the communication barriers faced by the roughly 466 million people worldwide living with disabling hearing loss, many of whom cannot communicate through speech. Because sign language is not standardized across regions, building a single universal translation model is difficult.
Key challenges include:
- Lack of Standardization: Different regions use different sign languages, so a model trained for one does not transfer directly to another.
- Hand Gesture Recognition: Difficulty in accurately detecting hand gestures in real time due to variations in lighting, size, and orientation.
- Real-Time Processing: Achieving low enough latency on the live video feed to give users immediate feedback.
Solution
Our solution followed a multi-phase approach to translating sign language into text and speech:
- Dataset Collection: We gathered a comprehensive gesture dataset spanning multiple regional sign languages and dialects.
- AI Model Training: Deep learning models were trained to recognize sign language gestures and translate them into text (a minimal training sketch appears after this list).
- Camera & Image Processing: High-quality cameras and advanced algorithms were used to capture and process gestures in real-time.
- Text-to-Speech: The recognized text was converted into audible speech for communication (a minimal TTS sketch appears after this list).
- Accessibility Features: We incorporated accessibility features like high-contrast mode and audio descriptions.
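For the AI-model-training step, the sketch below shows one plausible shape for the gesture classifier. Everything specific here is an assumption for illustration: the 64x64 grayscale input size, the 26-class output (as in a fingerspelling alphabet), the small CNN architecture, and the random tensors standing in for the collected gesture dataset.

```python
# Minimal gesture-classifier training sketch (PyTorch).
# All sizes and the architecture are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 26  # hypothetical: one class per fingerspelled letter

class GestureNet(nn.Module):
    """Small CNN mapping a 1x64x64 gesture image to class logits."""
    def __init__(self, num_classes: int) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Random stand-in data; in practice this is the collected gesture dataset.
images = torch.randn(256, 1, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

model = GestureNet(NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # a few epochs, just to show the loop shape
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()
        optimizer.step()
```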
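For the camera and image-processing step, a capture-and-predict loop along these lines keeps latency low by classifying each frame as it arrives. It assumes OpenCV (`cv2`), a webcam at device index 0, and the trained `GestureNet` model from the sketch above.

```python
# Minimal real-time capture-and-predict loop (OpenCV + PyTorch).
# Assumes `model` is the trained GestureNet from the training sketch.
import cv2
import torch

model.eval()
cap = cv2.VideoCapture(0)  # assumed webcam at device index 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess to match training input: grayscale, 64x64, scaled to [0, 1].
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (64, 64))
    tensor = torch.from_numpy(small).float().div(255.0).unsqueeze(0).unsqueeze(0)
    with torch.no_grad():
        prediction = model(tensor).argmax(dim=1).item()
    # Overlay the predicted class index for immediate on-screen feedback.
    cv2.putText(frame, f"gesture: {prediction}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("sign-to-text", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Classifying every full frame is the simplest design; a real deployment would likely crop to a detected hand region and smooth predictions over several frames before emitting a word.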
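For the text-to-speech step, an offline library such as pyttsx3 avoids a network round trip; gTTS or a cloud TTS API would be drop-in alternatives. The `CLASS_NAMES` table mapping class indices to words is a hypothetical placeholder for however the dataset labels were encoded.

```python
# Minimal text-to-speech sketch using pyttsx3 (offline TTS).
import pyttsx3

CLASS_NAMES = ["hello", "thank you", "yes"]  # hypothetical gesture labels

def speak(class_index: int) -> None:
    """Convert a predicted gesture label into audible speech."""
    engine = pyttsx3.init()
    engine.say(CLASS_NAMES[class_index])
    engine.runAndWait()

speak(0)  # says "hello"
```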