Sign Language Detection using Python
Keywords:
Assistive technology, Computer vision, Deep learning, Hand gesture recognition, Human-computer interaction, Real-time object detection, Sign language detection, YOLOv8

Abstract
Communication barriers faced by hearing-impaired and speech-impaired individuals remain a significant social challenge, particularly in daily interactions with people who do not know sign language. Sign language serves as an effective medium of communication; however, the lack of widespread understanding limits its practical use. To address this issue, this project proposes a real-time sign language detection system using Python and deep learning techniques to automatically recognize hand gestures and convert them into readable text. The primary objective of the proposed system is to enable efficient and accurate interpretation of sign language gestures using commonly available hardware and open-source software tools. The system uses a webcam to capture a real-time video stream, which is processed frame by frame with OpenCV. Image preprocessing techniques such as resizing, normalization, and noise reduction are applied to enhance image quality and improve detection performance under varying lighting and background conditions. A YOLO-based object detection model is employed to localize hand regions within each frame, owing to its high detection speed and suitability for real-time applications. The detected hand gestures are then classified into the corresponding sign language symbols by a trained deep learning model. The recognized gestures are translated into meaningful textual output and displayed on the screen in real time, allowing seamless interaction between hearing-impaired and non-signing users. The proposed approach demonstrates reliable performance with acceptable accuracy and low latency, making it suitable for real-world applications. Experimental observations indicate that the system can effectively recognize common hand gestures while maintaining smooth real-time processing on standard laptop configurations.
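The frame-by-frame pipeline described above (capture, preprocess, detect, classify, display text) can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: the nearest-neighbour resize stands in for OpenCV's `cv2.resize`, and the `detect` and `classify` callables are hypothetical hooks for the trained YOLO hand localizer and gesture classifier, which are not reproduced here.

```python
import numpy as np

def preprocess_frame(frame, size=(640, 640)):
    """Resize a frame (nearest-neighbour) and normalise pixels to [0, 1].

    The full system would use OpenCV here (e.g. cv2.resize, plus a blur
    for noise reduction); plain NumPy keeps this sketch self-contained.
    """
    h, w = frame.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each target row
    cols = np.arange(size[1]) * w // size[1]   # source col for each target col
    resized = frame[rows][:, cols]
    return resized.astype(np.float32) / 255.0  # scale uint8 pixels to [0, 1]

def frames_to_text(frames, detect, classify, labels):
    """Run the detect-then-classify pipeline over a sequence of frames.

    `detect` stands in for YOLO inference (frame -> list of cropped hand
    regions) and `classify` for the gesture classifier (region -> class
    index); `labels` maps class indices to sign-language vocabulary.
    """
    words = []
    for frame in frames:
        pre = preprocess_frame(frame)
        for region in detect(pre):
            words.append(labels[classify(region)])
    return " ".join(words)
```

In the real system the frame sequence would come from `cv2.VideoCapture`, `detect` would wrap the YOLO model's prediction call, and the recognized text would be overlaid on the live video window rather than returned as a string.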
Overall, the proposed sign language detection system offers a cost-effective, user-friendly, and socially impactful solution for improving accessibility and inclusivity. The system can be further extended by incorporating speech synthesis, supporting dynamic gestures, and expanding the dataset to cover a larger sign language vocabulary for improved accuracy and scalability.
References
S. Ghotkar, R. Khatal, S. Khupase, S. Asati, and M. Hadap, “Hand gesture recognition for Indian Sign Language,” IEEE Xplore, Jan. 2012.
A. Pardasani, A. K. Sharma, S. Banerjee, V. Garg, and D. Roy, “Enhancing the Ability to Communicate by Synthesizing American Sign Language using Image Recognition in A Chatbot for Differently Abled,” 2018 7th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Aug. 2018.
J. Ann, J. Macalisang, and R. V. Sevilla, “Implementation of Security Access Control using American Sign Language Recognition via Deep Learning Approach,” 2022 International Conference on Emerging Technologies in Electronics, Computing and Communication (ICETECC), pp. 1–5, Dec. 2022.
T. Nehra, D. Saisanthiya, and A. Modi, “Indian Sign Language (ISL) Recognition and Translation using Mediapipe and LSTM,” 2023 World Conference on Communication & Computing (WCONF), pp. 1–5, Jul. 2023.
S. Kurundkar, A. Joshi, A. Thaploo, S. Auti, and A. Awalgaonkar, “Real-Time Sign Language Detection,” 2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN), May 2023.
S. Dutta, A. Bose, S. Dutta, and K. Roy, “Sign Language Detection Using Action Recognition with Python,” International Journal of Engineering Applied Sciences and Technology, vol. 8, no. 1, pp. 61–67, May 2023.
T. A. Siby, S. Pal, J. Arlina, and S. Nagaraju, “Gesture based Real-Time Sign Language Recognition System,” 2022 International Conference on Connected Systems & Intelligence (CSI), pp. 1–6, Aug. 2022.
H. Adhikari, S. Bin, I. Jahan, M. S. Mia, and M. R. Hassan, “A Sign Language Recognition System for Helping Disabled People,” 2023 5th International Conference on Sustainable Technologies for Industry 5.0 (STI), pp. 1–6, Dec. 2023.
X. He, Y. Lin, Z. Hu, X. Xu, R. Xu, and W. Xiang, “AI Chinese sign language recognition interactive system based on audio-visual integration,” IEEE International Conference on Electrical, Automation and Computer Engineering (ICEACE), Dec. 2023.
S. Sivamohan, S. Anslam Sibi, T. R. Divakar, and S. Jagan, “Hand Gesture Recognition and Translation for International Sign Language Communication using Convolutional Neural Networks,” 2024 2nd International Conference on Advancement in Computation & Computer Technologies, pp. 635–640, May 2024.
A. Acharya, N. Patil, U. Pathak, and S. Bhagwat, “Sign Language Translation with fusion of Emotion Detection,” 2022 6th International Conference on Computing, Communication, Control and Automation (ICCUBEA), pp. 1–6, Aug. 2022.
A. L. Jasmine J, C. Charan, C. A. Reddy, and C. Bala, “Real-Time Sign Language and Audio Conversion Using AI,” International Conference on Communication, Control, and Intelligent Systems (CCIS), pp. 1–6, Dec. 2024.
J. Snajder and J. Krejsa, “Automation-Driven Dataset Preparation for Continuous Czech Sign Language Recognition,” International Congress of Mathematicians, pp. 1–5, Dec. 2024.
A. S. Selvi, G. Ratiraju, A. Jeevan, J. S. K. Reddy, M. V. Krishna, and S. P. S, “Real-Time Sign Language Detection Using Deep Learning,” 2025 International Conference on Computational, Communication and Information Technology (ICCCIT), pp. 99–103, Feb. 2025.
J. David, M. S. S, and S. Revathy, “Indian Sign Language Recogniser with Text and Speech Translation,” 2025 8th International Conference on Trends in Electronics and Informatics (ICOEI), pp. 835–840, Apr. 2025.