Development of an Audio to Sign Language Translator using a Random Forest Classifier
DOI: https://doi.org/10.3126/injet.v3i1.87013

Keywords: Porter, Natural Language Processing, Blender, Sign language, Machine learning

Abstract
The ubiquity of English as a global language underscores the necessity for accessible communication solutions that accommodate diverse linguistic and auditory needs. In this context, the paper presents a novel English-to-American Sign Language (ASL) translation system that leverages Natural Language Processing (NLP) and Machine Learning (ML) to convert English text or speech into animated sign gestures. The system applies the Porter Stemming Algorithm to reduce words to their root forms and removes stop words to improve clarity. Word2Vec embeddings were employed to transform the pre-processed text into vector representations, which were subsequently classified using a Random Forest model trained on a self-curated ASL dataset of 126 videos covering 90 words, 26 letters, and 10 numbers. The model achieves an accuracy of 94.51%, effectively recognizing base words and their synonyms. For out-of-vocabulary terms, the system defaults to letter-by-letter ASL fingerspelling. Developed with Blender for 3D gesture animation and Django for backend processing, the solution offers a scalable and cost-effective model for real-time sign language interpretation.
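A minimal sketch of the pipeline the abstract describes, assuming NLTK's PorterStemmer and stop-word list, Gensim's Word2Vec, and scikit-learn's RandomForestClassifier. The tiny training corpus and the helper names preprocess and translate are illustrative placeholders, not the paper's dataset or code; the fallback branch mirrors the described letter-by-letter fingerspelling for out-of-vocabulary terms.

```python
# Sketch only: assumes NLTK data (punkt, stopwords) has been downloaded.
import numpy as np
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(sentence):
    """Lowercase, drop stop words, and reduce tokens to Porter stems."""
    tokens = word_tokenize(sentence.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop_words]

# Hypothetical training corpus: stemmed tokens paired with ASL gloss labels,
# where each label corresponds to one animated gesture clip.
corpus = [(["hello"], "HELLO"), (["thank"], "THANK-YOU"), (["help"], "HELP")]
sentences = [tokens for tokens, _ in corpus]

# Word2Vec maps each stem to a dense vector; a Random Forest maps vectors to sign labels.
w2v = Word2Vec(sentences, vector_size=50, min_count=1, window=2)
X = np.array([w2v.wv[tokens[0]] for tokens, _ in corpus])
y = [label for _, label in corpus]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def translate(sentence):
    """Map each stem to a sign label, falling back to fingerspelling."""
    signs = []
    for stem in preprocess(sentence):
        if stem in w2v.wv:
            signs.append(clf.predict([w2v.wv[stem]])[0])
        else:
            signs.extend(list(stem.upper()))  # letter-by-letter ASL fingerspelling
    return signs

print(translate("Please help me"))
```

In the full system, each predicted label would index a Blender-rendered gesture animation served through the Django backend; here the output is simply the sequence of sign labels and fingerspelled letters.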
Published: 2025
Copyright (c) 2025 International Journal on Engineering Technology

This work is licensed under a Creative Commons Attribution 4.0 International License.
This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use.