Teaching a robot to read sign language
One of the primary use cases of deep learning is image classification. Translated to robotics, this means giving a robot the ability to see and understand its environment. In this article, we will discuss how to teach the Anki Vector robot to understand human sign language. As an example, we will consider the American Sign Language (ASL) signs for the letters of the English alphabet. An excellent video introducing this sign language is available here.
We build on the experimental program and dataset provided by the Anki Vector SDK. Here are the steps:
Gathering Labelled Data: In this step we use the data-gathering program to generate a labelled dataset. We show Vector the human signs for all the letters and label the captured images manually, which is a very labour-intensive effort. To expand the dataset, the program generates 10 copies of every captured image by slightly rotating it about the X and Y axes. This helps increase the volume and variability of the training data.
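The augmentation step above can be sketched in plain Python. This is a minimal, hypothetical illustration (not the SDK's actual code): it models a camera frame as a 2-D grid of pixel values and produces 10 slightly rotated copies using a simple nearest-neighbour in-plane rotation; the function and parameter names are assumptions.

```python
import math
import random

def rotate(image, angle_deg):
    """Rotate a 2-D grid (list of lists) about its centre, nearest-neighbour."""
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse-map each output pixel back to its source location.
            sx = cos_a * (x - cx) + sin_a * (y - cy) + cx
            sy = -sin_a * (x - cx) + cos_a * (y - cy) + cy
            ix, iy = round(sx), round(sy)
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = image[iy][ix]
    return out

def augment(image, copies=10, max_angle=10.0):
    """Generate slightly rotated copies of one labelled capture."""
    return [rotate(image, random.uniform(-max_angle, max_angle))
            for _ in range(copies)]

# Example: expand one stand-in camera frame into 10 labelled variants.
frame = [[(x + y) % 256 for x in range(32)] for y in range(32)]
variants = augment(frame)
print(len(variants))  # 10
```

In a real pipeline, each rotated copy would inherit the label of the original capture, so one manual labelling effort yields ten training samples.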