Note: This page is under construction. A working demo is available at Hugging Face Spaces.

Problem Statement

Indian Sign Language (ISL) is a crucial mode of communication for people with hearing and speech impairments. Unlike American Sign Language (ASL) or British Sign Language (BSL), ISL has comparatively little technological support. Our project aims to bridge this gap by building an AI-powered translation system for ISL.

Approach

The system processes each sign video through the following pipeline (a minimal sketch of the final classification stages follows the list):

Sign Language Video Dataset
→ OpenPose Extraction
→ Generated Stick Models (Feature Extraction)
→ Our Model: Neural Network (LSTM / Transformer)
→ Softmax Classification Layer
→ Output / Translation
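The sketch below illustrates the last three stages under stated assumptions: each frame is flattened to 18 keypoints x (x, y, confidence) = 54 features, an LSTM encodes the frame sequence, and a softmax layer scores the 263 sign classes listed in the INCLUDE table further down. The class name SignClassifier, the layer sizes, and the use of the last time step are illustrative choices, not the exact configuration we trained.

import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    """Toy LSTM-plus-softmax classifier over per-frame keypoint features."""

    def __init__(self, in_dim=54, hidden=256, num_classes=263):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                  # x: (batch, frames, in_dim)
        out, _ = self.lstm(x)              # out: (batch, frames, hidden)
        logits = self.head(out[:, -1])     # last time step -> class scores
        return torch.softmax(logits, dim=-1)

model = SignClassifier()
probs = model(torch.randn(4, 64, 54))      # e.g. 4 videos, 64 frames each
print(probs.shape)                         # torch.Size([4, 263])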

Model Selection

The input to the model is a sequence of stick-figure poses built from the following 18 body keypoints:

nose, leftEye, rightEye, leftShoulder, rightShoulder, leftElbow, rightElbow, leftWrist, rightWrist, leftHip, rightHip, leftKnee, rightKnee, leftAnkle, rightAnkle, rightEar, leftEar, neck
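As a minimal sketch, assuming OpenPose-style output where each keypoint carries an (x, y, confidence) triple, the per-frame feature vector fed to the network could be assembled as follows (the dictionary input and the zero-filling of missing keypoints are illustrative assumptions):

import numpy as np

KEYPOINTS = [
    "nose", "leftEye", "rightEye", "leftShoulder", "rightShoulder",
    "leftElbow", "rightElbow", "leftWrist", "rightWrist",
    "leftHip", "rightHip", "leftKnee", "rightKnee",
    "leftAnkle", "rightAnkle", "rightEar", "leftEar", "neck",
]

def frame_features(keypoints):
    """Flatten one frame's keypoints into a fixed-length feature vector."""
    feats = []
    for name in KEYPOINTS:
        x, y, c = keypoints.get(name, (0.0, 0.0, 0.0))  # zero-fill if undetected
        feats.extend([x, y, c])
    return np.asarray(feats, dtype=np.float32)          # shape (18 * 3,) = (54,)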

Dataset: INCLUDE

The INCLUDE dataset contains 4292 videos of ISL signs recorded by deaf students from St. Louis School for the Deaf, Adyar, Chennai. Each video is labeled with the corresponding ISL sign, and the dataset is split into training and testing sets.

Characteristic            Details
Categories                15
Words                     263
Videos                    4292
Avg. videos per class     16.3
Avg. video length         2.57 s
Min. video length         1.28 s
Max. video length         6.16 s
Frame rate                25 fps
Resolution                1920x1080
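To make the split concrete, the sketch below enumerates labeled videos for each split; the directory layout INCLUDE/<split>/<category>/<word>/<video>.mp4 is an assumed layout for illustration, not the actual packaging of the dataset.

from pathlib import Path

ROOT = Path("INCLUDE")  # assumed root folder

def list_samples(split):
    """Return (video_path, word_label) pairs for one split ('train' or 'test')."""
    return [(str(p), p.parent.name) for p in (ROOT / split).glob("*/*/*.mp4")]

train, test = list_samples("train"), list_samples("test")

# Map each word label to an integer class index for the softmax head (263 classes).
label_to_idx = {w: i for i, w in enumerate(sorted({label for _, label in train}))}

print(len(train), "training videos,", len(test), "test videos,",
      len(label_to_idx), "classes")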

Visualizations

Test Videos Preview: an interactive preview of the test videos is available in the Hugging Face Spaces demo.

Extracted Frames: frames extracted from the selected test video (a sketch of the extraction step follows).
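As a rough sketch of the frame-extraction step, frames can be pulled from a test video with OpenCV as below; the sampling interval and the file path are placeholders.

import cv2

def extract_frames(video_path, every_n=5):
    """Return every n-th frame of a video as a list of BGR arrays."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

frames = extract_frames("test_video.mp4")  # placeholder path
print(f"Extracted {len(frames)} frames")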