SIGN LANGUAGE RECOGNITION USING MACHINE LEARNING
Authors: Chandraprabha K, Mohanasharan K, Sivasubramaniam T, Sanjhaikrishna S

Abstract

Sign language serves as a vital means of communication for individuals who are deaf or hard of hearing. However, challenges often arise when engaging with people unfamiliar with sign language, creating communication barriers. This project addresses that issue by developing a Sign Language Recognition System that uses machine learning and computer vision to translate hand gestures into text or speech output, enhancing accessibility and inclusivity.
The system employs Convolutional Neural Networks (CNNs) and MediaPipe Hand Tracking to detect and interpret gestures accurately in real time. Users can choose between text and speech output, ensuring seamless interaction. Designed for efficiency and ease of use, the application features an intuitive interface built with React.js, while the Flask backend manages gesture processing and database interactions. A MySQL database stores user progress and frequently used gestures, enabling a personalized learning experience.
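To make the described pipeline concrete, the following is a minimal sketch of how MediaPipe hand tracking and a CNN classifier could be combined for real-time recognition: MediaPipe locates the hand in each webcam frame, the hand region is cropped and resized, and a small CNN classifies the crop into a sign. The network layout, input resolution, and gesture labels here are illustrative assumptions, not the authors' published configuration.

```python
import cv2
import numpy as np
import mediapipe as mp
from tensorflow import keras

IMG_SIZE = 64                                  # assumed CNN input resolution
LABELS = ["hello", "thanks", "yes", "no"]      # placeholder gesture classes

def build_cnn(num_classes: int) -> keras.Model:
    """Small CNN for classifying cropped hand images (illustrative only)."""
    return keras.Sequential([
        keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])

def hand_crop(frame, landmarks, margin=20):
    """Crop a bounding box around the hand landmarks MediaPipe detected."""
    h, w, _ = frame.shape
    xs = [int(lm.x * w) for lm in landmarks.landmark]
    ys = [int(lm.y * h) for lm in landmarks.landmark]
    x1, x2 = max(min(xs) - margin, 0), min(max(xs) + margin, w)
    y1, y2 = max(min(ys) - margin, 0), min(max(ys) + margin, h)
    return frame[y1:y2, x1:x2]

def main():
    model = build_cnn(len(LABELS))   # in practice, load trained weights instead
    hands = mp.solutions.hands.Hands(max_num_hands=1,
                                     min_detection_confidence=0.5)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR frames
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            crop = hand_crop(frame, results.multi_hand_landmarks[0])
            if crop.size:
                x = cv2.resize(crop, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
                probs = model.predict(x[np.newaxis], verbose=0)[0]
                label = LABELS[int(np.argmax(probs))]
                cv2.putText(frame, label, (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("Sign Language Recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

In the full system described above, the predicted label would be passed to the Flask backend, which could return it as text or route it through a text-to-speech component and log it to the MySQL database for the personalized learning features.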
Beyond its technological aspects, this project is a step toward empowering the deaf and hard-of-hearing community. By leveraging artificial intelligence and deep learning, the system offers a scalable solution that evolves with user needs. It promotes effective communication, fostering a more inclusive digital space where sign language users can engage effortlessly with the wider society.
Key Words: Sign Language Recognition, Machine Learning, Convolutional Neural Networks, MediaPipe, Deep Learning, Text-to-Speech, Computer Vision, Gesture Recognition, Accessibility, Artificial Intelligence

Published On: 2025-03-25