Show simple item record

dc.contributor.authorNgaruiya, Eliud N.
dc.date.accessioned2021-12-17T08:56:55Z
dc.date.available2021-12-17T08:56:55Z
dc.date.issued2021
dc.identifier.urihttp://erepository.uonbi.ac.ke/handle/11295/155923
dc.description.abstractGlobally, over 360 million people live with a hearing disability, being partially or completely deaf. To communicate, these people use sign language. Like any spoken language, sign language has its own grammar and can be translated into spoken language by a sign language interpreter. However, unlike translation between spoken languages, translating sign language is a challenging task. Translation has traditionally been done by humans, but with recent advances in machine learning and artificial intelligence algorithms, these translation tasks are increasingly being taken up by machines. Despite this progress, machine translation of sign language still faces challenges, since it requires data on the hand and finger movements used to sign words and phrases, and such data is largely unavailable. Furthermore, the approaches commonly adopted, such as computer vision, are resource intensive. Embedded systems have evolved from simple transistor circuits to complex microprocessor and microcontroller systems. Though still resource constrained in terms of processing power, memory and power consumption, these embedded systems can now perform tasks that were not possible on earlier generations. Recently, frameworks such as TensorFlow Lite for Microcontrollers have made it possible to run machine learning algorithms on resource-constrained embedded systems, so machine learning inference can now be performed locally on-device. In this study, a machine-learning-on-the-edge approach was used to translate sign language gestures into spoken language by developing an embedded intelligent system that uses sensors to track finger curvature and hand movement. The device was first designed and built using open-source hardware, then used to create a dataset by recording the signing of various sign language gestures. The collected data was curated and used to provide labelled examples for each class to a k-nearest neighbour (KNN) classifier, which then performs on-device inference to classify new gestures as they are signed. An Arduino Nano BLE Sense board is used together with flex sensors: as the flex sensors bend during the signing of different letters and numbers, the readings are collected and logged to a file, and this data is used to train a model with the KNN algorithm. Newly signed numbers are then translated and displayed on the serial monitor. The main contribution of this work is a new machine-learning-on-the-edge approach that uses open-source resources to create a sign language translation device running on resource-constrained hardware without reliance on external inference engines (a minimal illustrative sketch of this pipeline follows the record below).en_US
dc.language.isoenen_US
dc.publisherUniversity of Nairobien_US
dc.rightsAttribution-NonCommercial-NoDerivs 3.0 United States*
dc.rights.urihttp://creativecommons.org/licenses/by-nc-nd/3.0/us/*
dc.subjectSign Language gesture recognition using machine learning on the Edgeen_US
dc.titleSign Language gesture recognition using machine learning on the Edgeen_US
dc.typeThesisen_US
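
The sketch below is a rough illustration of the pipeline described in the abstract, not the thesis's actual code. It shows how flex-sensor readings might be classified on an Arduino-class board with an on-device k-nearest neighbour classifier. The pin assignments, number of sensors, scaling, hard-coded training examples, and use of the Arduino_KNN library's KNNClassifier API are assumptions made for illustration only.

// Illustrative sketch only: classify flex-sensor readings with on-device KNN.
// Assumes five flex sensors wired as voltage dividers on analog pins A0..A4
// and the Arduino_KNN library (KNNClassifier) for nearest-neighbour search.
#include <Arduino_KNN.h>

const int NUM_SENSORS = 5;                       // one flex sensor per finger (assumed)
const int K = 1;                                 // neighbours consulted per classification
const int SENSOR_PINS[NUM_SENSORS] = {A0, A1, A2, A3, A4};

KNNClassifier knn(NUM_SENSORS);                  // classifier over 5-dimensional inputs

// Read all flex sensors and scale raw 10-bit ADC values (0..1023) to 0..1.
void readSensors(float input[NUM_SENSORS]) {
  for (int i = 0; i < NUM_SENSORS; i++) {
    input[i] = analogRead(SENSOR_PINS[i]) / 1023.0;
  }
}

void setup() {
  Serial.begin(9600);
  while (!Serial) {}

  // In the study, curated examples come from logged signing sessions.
  // Here two hypothetical examples are added by hand for the signs "1" and "2".
  float exampleOne[NUM_SENSORS] = {0.12, 0.85, 0.20, 0.18, 0.15};
  float exampleTwo[NUM_SENSORS] = {0.14, 0.82, 0.79, 0.21, 0.17};
  knn.addExample(exampleOne, 1);                 // label 1 = signed number "1"
  knn.addExample(exampleTwo, 2);                 // label 2 = signed number "2"
}

void loop() {
  float input[NUM_SENSORS];
  readSensors(input);

  // On-device inference: nearest-neighbour vote over the stored examples.
  int label = knn.classify(input, K);
  Serial.print("Signed number: ");
  Serial.println(label);
  delay(500);
}

In practice the labelled examples would be loaded from the curated dataset logged during the signing sessions, with many examples per letter and number, rather than hard-coded as above.
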


Files in this item

Thumbnail

This item appears in the following Collection(s)


Attribution-NonCommercial-NoDerivs 3.0 United States
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 United States