# Sign Language-to-Speech with DeepStack's Custom API

![](https://github.com/SteveKola/Sign-Language-to-Speech-with-DeepStack-Custom-API/blob/main/scripts/gifs/proof.gif)

This project is an end-to-end working prototype that uses Artificial Intelligence to detect sign language meanings
in images/videos and generate an equivalent, realistic voice rendering of the words communicated through sign language.

## Steps to run the project
### 1. Install DeepStack using Docker. (Skip this if you already have DeepStack installed)
- Docker needs to be installed first. Mac OS and Windows users can install Docker from Docker's website.
Running the above command would return two new files in your project root directory:
1. a copy of the image with a bounding box around the detected sign and its meaning on top of the box,
2. an audio file of the detected sign language.

![image](https://user-images.githubusercontent.com/45284829/123965899-cfde8080-d9ac-11eb-874e-14d69b2e0c0c.png)
![image](https://user-images.githubusercontent.com/45284829/123966073-f4d2f380-d9ac-11eb-8053-80a92130dedc.png)
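
The detection script itself isn't shown here, but the sketch below illustrates roughly how such a step could work against DeepStack's custom-model endpoint. Treat it as an illustration only: the DeepStack URL, the model name `sign-language`, the test image name, and the use of OpenCV and gTTS are assumptions, not necessarily what the repository's script does.

```
# Minimal sketch (not the repository's actual script): send an image to a
# DeepStack custom model, draw the detected sign on a copy of the image,
# and save the sign's meaning as speech.
# Assumed setup: DeepStack on localhost:88 serving a custom model named
# "sign-language"; adjust both to match your own deployment.
import requests
import cv2
from gtts import gTTS

DEEPSTACK_URL = "http://localhost:88/v1/vision/custom/sign-language"
IMAGE_PATH = "test_image.jpg"

with open(IMAGE_PATH, "rb") as f:
    response = requests.post(DEEPSTACK_URL, files={"image": f}).json()

image = cv2.imread(IMAGE_PATH)
for pred in response.get("predictions", []):
    label = pred["label"]
    # Draw the bounding box with the sign's meaning on top of it.
    cv2.rectangle(image, (pred["x_min"], pred["y_min"]),
                  (pred["x_max"], pred["y_max"]), (0, 255, 0), 2)
    cv2.putText(image, label, (pred["x_min"], pred["y_min"] - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    # Convert the detected meaning to speech and save it as an audio file.
    gTTS(text=label, lang="en").save(f"{label}.mp3")

cv2.imwrite("detected_" + IMAGE_PATH, image)
```

DeepStack's custom detection endpoints return a `predictions` list with a `label`, a `confidence` score, and bounding-box coordinates, which is what the loop above relies on.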
### 5. Detect sign language meanings on a live video (via webcam).
- run the livefeed detection script. My default port number is 88. To specify the port on which your DeepStack server is running, use the `--deepstack-port` flag;
```
python livefeed_detection.py --deepstack-port port_number
```
This will spin up the webcam and automatically detect any sign language words in view of the camera,
while also displaying the sign meaning and returning its speech equivalent immediately through the PC's audio system.

![](https://github.com/SteveKola/Sign-Language-to-Speech-with-DeepStack-Custom-API/blob/main/scripts/gifs/proof.gif)
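
For reference, here is a rough sketch of how a livefeed loop like this might be put together. It is not the repository's `livefeed_detection.py`; the model name `sign-language`, the default port of 88, and the use of OpenCV, gTTS, and `playsound` are assumptions made for illustration.

```
# Illustrative sketch of a livefeed loop (not the repo's actual livefeed_detection.py).
# Assumptions: OpenCV webcam capture, a DeepStack custom model named "sign-language",
# and gTTS + playsound for speech; swap in your own model name and audio backend.
import argparse
import cv2
import requests
from gtts import gTTS
from playsound import playsound

parser = argparse.ArgumentParser()
parser.add_argument("--deepstack-port", type=int, default=88,
                    help="port on which the DeepStack server is running")
args = parser.parse_args()

url = f"http://localhost:{args.deepstack_port}/v1/vision/custom/sign-language"

cap = cv2.VideoCapture(0)  # open the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Encode the frame as JPEG and send it to DeepStack for detection.
    _, buf = cv2.imencode(".jpg", frame)
    preds = requests.post(url, files={"image": buf.tobytes()}).json().get("predictions", [])

    for p in preds:
        # Overlay the sign meaning on the frame...
        cv2.rectangle(frame, (p["x_min"], p["y_min"]), (p["x_max"], p["y_max"]), (0, 255, 0), 2)
        cv2.putText(frame, p["label"], (p["x_min"], p["y_min"] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
        # ...and speak it through the PC's audio system.
        gTTS(text=p["label"], lang="en").save("sign.mp3")
        playsound("sign.mp3")

    cv2.imshow("Sign Language Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```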

## Additional Notes