Saturday, November 7, 2015

Wearable Sensors Could Translate Sign Language Into English

[Image: A prototype of Jafari's sign language recognition technology, which he aims to scale down to the size of a watch.]
Wearable sensors could one day interpret the gestures in sign language and translate them into English, providing a high-tech solution to communication problems between deaf people and those who don’t understand sign language.
Engineers at Texas A&M University are developing a wearable device that can sense movement and muscle activity in a person's arms.
The device works by figuring out the gestures a person is making, using two distinct sensors: one that responds to the motion of the wrist and another that responds to the muscular movements in the arm. A program then wirelessly receives this information and converts the data into the English translation.
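Based on that description, a minimal sketch of such a two-sensor recognition pipeline might look like the following. The feature choices, the classifier, and every function name here are assumptions made for illustration, not the Texas A&M team's actual implementation.

```python
# Illustrative sketch only: the features, classifier, and function names are
# assumptions for explanation, not the Texas A&M team's actual code.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_features(imu_window, semg_window):
    """Summarize one gesture: coarse motion statistics from the wrist-worn
    inertial sensor plus muscle-activity statistics from the forearm sEMG."""
    return np.concatenate([
        imu_window.mean(axis=0), imu_window.std(axis=0),    # hand/arm motion
        semg_window.mean(axis=0), semg_window.std(axis=0),  # finger/muscle activity
    ])

def train_recognizer(recordings, labels):
    """Fit a simple classifier on (imu_window, semg_window) pairs labeled with words."""
    X = np.array([extract_features(imu, semg) for imu, semg in recordings])
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X, labels)
    return clf

def translate(clf, imu_window, semg_window):
    """Predict the English word for one signed gesture."""
    return clf.predict([extract_features(imu_window, semg_window)])[0]
```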

After some initial research, the engineers found that there were devices that attempted to translate sign language into text, but they were not as intricate in their designs.
"Most of the technology ... was based on vision- or camera-based solutions," said study lead researcher Roozbeh Jafari, an associate professor of biomedical engineering at Texas A&M.
These existing designs, Jafari said, are not sufficient, because when someone is communicating in sign language, they are often using hand gestures combined with specific finger movements.
"I thought maybe we should look into combining motion sensors and muscle activation," Jafari told Live Science. "And the idea here was to build a wearable device."
The researchers built a prototype system that can recognize words that people use most commonly in their daily conversations. Jafari said that once the team starts expanding the program, the engineers will include more words that are less frequently used, in order to build up a more substantial vocabulary.
One drawback of the prototype is that the system has to be "trained" to respond to each individual who wears the device, Jafari said. This training process involves asking the user to repeat each hand gesture a few times, which can take up to 30 minutes to complete.
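That per-user training step could be as simple as a scripted calibration loop like the one below. The record_gesture callback and the repetition count are hypothetical stand-ins, since the source does not describe the prototype's actual interface.

```python
# Hedged sketch of the per-user "training" step: the wearer repeats each sign a
# few times so the recognizer can adapt to that person's own muscle structure
# and motion. The record_gesture callback is an illustrative assumption.
def calibrate(signs, record_gesture, repetitions=3):
    """Collect a few labeled repetitions of every sign from the current wearer."""
    recordings, labels = [], []
    for word in signs:
        for i in range(repetitions):
            print(f"Sign '{word}' now (repetition {i + 1} of {repetitions})")
            imu_window, semg_window = record_gesture()  # capture one complete gesture
            recordings.append((imu_window, semg_window))
            labels.append(word)
    return recordings, labels

# Usage, with the hypothetical train_recognizer from the earlier sketch:
#   data, labels = calibrate(["please", "sorry", "name", "work"], read_sensors)
#   clf = train_recognizer(data, labels)
```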
"If I'm wearing it and you're wearing it — our bodies are different … our muscle structures are different," Jafari said.
But, Jafari thinks the issue is largely the result of time constraints the team faced in building the prototype. It took two graduate students just two weeks to build the device, so Jafari said he is confident that the device will become more advanced during the next steps of development.
The researchers plan to reduce the training time of the device, or even eliminate it altogether, so that the wearable device responds automatically to the user. Jafari also wants to improve the effectiveness of the system's sensors so that the device will be more useful in real-life conversations. Currently, when a person gestures in sign language, the device can only read words one at a time.
This, however, is not how people speak. "When we're speaking, we put all the words in one sentence," Jafari said. "The transition from one word to another word is seamless and it's actually immediate."
"We need to build signal-processing techniques that would help us to identify and understand a complete sentence," he added.
Jafari's ultimate vision is to use new technology, such as the wearable sensor, to develop innovative user interfaces between humans and computers.
For instance, people are already comfortable with using keyboards to issue commands to electronic devices, but Jafari thinks typing on devices like smartwatches is not practical because they tend to have small screens.
"We need to have a new user interface (UI) and a UI modality that helps us to communicate with these devices," he said. "Devices like [the wearable sensor] might help us to get there. It might essentially be the right step in the right direction."
Jafari presented this research at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference in June.
Wearable technology, wearables, fashionable technology, wearable devices, tech togs, or fashion electronics are clothing and accessories incorporating computer and advanced electronic technologies. The designs often incorporate practical functions and features, but may also have a purely critical or aesthetic agenda.
Wearable devices such as activity trackers are a good example of the Internet of Things, since they are part of the network of physical objects or "things" embedded with electronics, software, sensors and connectivity to enable objects to exchange data with a manufacturer, operator and/or other connected devices, without requiring human intervention.
We talked about wearables for two classes, but we mostly discussed wearables that make a person's life easier and healthier. Engineers at Texas A&M University are developing a wearable device that helps people with disabilities, specifically people who communicate through sign language. The device can sense movement and muscle activity in a person's arm, mainly around the arm and the wrist. It has to be adjusted to the person wearing it, because different people have different muscle movements as well as gestures. The movements and gestures it detects are translated into English to make it easy for others to understand sign language. Do you see this as a New Market or a Sustaining Innovation? What do you guys think of this wearable? Will this technology work in the future? How would it impact us?
The smart device is the brainchild of a team led by biomedical engineering associate professor Roozbeh Jafari. It uses two separate sensors to translate intricate ASL gestures into plain English. The first, fitted with an accelerometer and a gyroscope, keeps track of the significant movements of the user's hand and arm as he or she tries to communicate.
The second sensor helps distinguish the smaller movements that follow the larger ones. Called an electromyographic sensor (sEMG), it can recognize various hand and finger movements based on muscle activity. The two sensors working in tandem provide an accurate interpretation of the gesture.
For example, when an ASL user is gesturing the words "please" or "sorry," the first sensor will pick up the hand drawing circles to the chest, while the second will ascertain if the fist is open ("please") or closed ("sorry").
Once the device, which is worn on the user's right wrist, has captured the gesture, it transmits the appropriate signals to a laptop via Bluetooth. A complicated algorithm translates them into English and displays the word on the computer screen.
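On the laptop side, the receive-and-display loop might look roughly like the sketch below. It assumes the Bluetooth link appears as a serial port (for example an RFCOMM/SPP device) and that the wearable sends one JSON packet per completed gesture; the port name and packet format are illustrative, not the prototype's actual protocol.

```python
# Laptop-side sketch of the receive-and-display loop. The serial port name and
# the JSON packet layout are assumptions, not the prototype's real protocol.
import json
import numpy as np
import serial  # pyserial

def run_translator(clf, port="/dev/rfcomm0", baud=115200):
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            line = link.readline()
            if not line:
                continue
            packet = json.loads(line)          # one completed gesture per packet
            imu = np.array(packet["imu"])      # accelerometer + gyroscope samples
            semg = np.array(packet["semg"])    # muscle-activity samples
            word = translate(clf, imu, semg)   # classifier from the earlier sketch
            print(word)                        # display the English word on screen
```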
Jafari, who unveiled the prototype at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference this past June, says there is still some work to be done before the technology can be used in the real world.
For one, it currently recognizes just 40 primary ASL signs, which means that it has thousands more to learn. Also, the smart device only translates one word at a time, making ordinary conversations painfully slow.
The research team also realizes that not all communication takes place around a laptop. They hope to eliminate the need for one by incorporating a computer into the wearable. The computer will then send the translation to a smart device, allowing two people "speaking" different languages to have a coherent conversation.
Additionally, each device has to be custom programmed, which means that the individual has to "train" the wearable by repeating every ASL sign a few times. This is a time-consuming process and can only get worse as the translator's vocabulary expands. Jafari hopes to reduce the learning time or eliminate this requirement altogether in the product's next release. Despite the numerous challenges, the researcher is not worried. After all, it took his two graduate students just a few weeks to come up with the first impressive prototype.

The Texas team is not the only one working on making conversation between ASL and non-ASL users easier. MotionSavvy has a product that uses a smart device camera to translate gestures into speech. In China, researchers have created a motion sensing device that translates Chinese Sign Language into both spoken and written words. With so many brilliant minds focused on finding a solution, communication difficulties experienced by ASL users may soon be a thing of the past!

A smart device that translates sign language while being worn on the wrist could bridge the communications gap between the deaf and those who don’t know sign language, says a Texas A&M University biomedical engineering researcher who is developing the technology.
The wearable technology combines motion sensors and the measurement of electrical activity generated by muscles to interpret hand gestures, says Roozbeh Jafari, associate professor in the university’s Department of Biomedical Engineering and researcher at the Center for Remote Health Technologies and Systems.
Although the device is still in its prototype stage, it can already recognize 40 American Sign Language words with nearly 96 percent accuracy, notes Jafari who presented his research at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference this past June. The technology was among the top award winners in the Texas Instruments Innovation Challenge this past summer.
The technology, developed in collaboration with Texas Instruments, represents a growing interest in the development of high-tech sign language recognition systems (SLRs) but unlike other recent initiatives, Jafari’s system foregoes the use of a camera to capture gestures. Video-based recognition, he says, can suffer performance issues in poor lighting conditions, and the videos or images captured may be considered invasive to the user’s privacy. What’s more, because these systems require a user to gesture in front of a camera, they have limited wearability – and wearability, for Jafari, is key.
"Wearables provide a very interesting opportunity in the sense of their tight coupling with the human body,” Jafari says. “Because they are attached to our body, they know quite a bit about us throughout the day, and they can provide us with valuable feedback at the right times. With this in mind, we wanted to develop a technology in the form factor of a watch.”
In order to capture the intricacies of American Sign Language, Jafari’s system makes use of two distinct sensors. The first is an inertial sensor that responds to motion. Consisting of an accelerometer and gyroscope, the sensor measures the accelerations and angular velocities of the hand and arm, Jafari notes. This sensor plays a major role in discriminating different signs by capturing the user’s hand orientations and hand and arm movements during a gesture.
However, a motion sensor alone wasn’t enough, Jafari explains. Certain signs in American Sign Language are similar in terms of the gestures required to convey the word. With these gestures the overall movement of the hand may be the same for two different signs, but the movement of individual fingers may be different. For example, the respective gestures for “please” and “sorry” and for “name” and “work” are similar in hand motion. To discriminate between these types of hand gestures, Jafari’s system makes use of another type of sensor that measures muscle activity.
Known as an electromyographic sensor (sEMG), this sensor non-invasively measures the electrical potential of muscle activities, Jafari explains. It is used to distinguish various hand and finger movements based on different muscle activities. Essentially, it’s good at measuring finger movements and the muscle activity patterns for the hand and arm, working in tandem with the motion sensor to provide a more accurate interpretation of the gesture being signed, he says.

“These two technologies are complementary to each other, and the fusion of these two systems will enhance the recognition accuracy for different signs, making it easier to recognize a large vocabulary of signs,” Jafari says.
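As a concrete illustration of that fusion, one standard way to turn raw sEMG into a usable measure of muscle activity is to remove the offset, rectify the signal, and take a short moving RMS envelope before combining it with the motion features. The sketch below shows that idea; the window length, channel layout, and feature set are assumptions, not the published system's parameters.

```python
# Minimal sketch of common sEMG conditioning (rectify, then a moving RMS
# envelope) followed by fusion with motion features. Parameters are assumed.
import numpy as np

def semg_envelope(raw, window=50):
    """RMS envelope of a (samples, channels) sEMG array."""
    rectified = np.abs(raw - raw.mean(axis=0))      # remove DC offset, rectify
    squared = rectified ** 2
    kernel = np.ones(window) / window
    smoothed = np.vstack([np.convolve(squared[:, c], kernel, mode="same")
                          for c in range(squared.shape[1])]).T
    return np.sqrt(smoothed)

def fused_features(imu, semg):
    """Concatenate coarse motion statistics with muscle-activity statistics,
    mirroring the fusion of the two complementary sensors."""
    env = semg_envelope(semg)
    return np.concatenate([imu.mean(axis=0), imu.std(axis=0),
                           env.mean(axis=0), env.max(axis=0)])
```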

 


In Jafari’s system, both inertial sensors and electromyographic sensors are placed on the right wrist of the user, where they detect gestures and send information via Bluetooth to an external laptop that runs complex algorithms to interpret the sign and display the correct English word for the gesture.
As Jafari continues to develop the technology, he says his team will look to incorporate all of these functions into one wearable device by combining the hardware and reducing the overall size of the required electronics. He envisions the device collecting the data produced from a gesture, interpreting it and then sending the corresponding English word to another person’s smart device so that he or she can understand what is being signed simply by reading the screen of their own device. In addition, he is working to increase the number of signs recognized by the system and to expand the system to both hands.
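If that vision pans out, the hand-off from the wearable to the other person's device could be as simple as pushing each recognized word over a wireless link. The sketch below uses a plain TCP socket purely as a stand-in, since the source does not say which transport the finished product would use, and the address is illustrative.

```python
# Hedged sketch of the envisioned next step: after the wearable itself interprets
# a gesture, it pushes the recognized English word to the other person's smart
# device so they can read it on their own screen. A plain TCP socket stands in
# for whatever wireless link the finished product would actually use.
import socket

def send_word(word, host="192.168.0.42", port=5050):
    """Send one translated word to the listener's device (address is illustrative)."""
    with socket.create_connection((host, port), timeout=2) as conn:
        conn.sendall(word.encode("utf-8") + b"\n")

# A matching listener on the receiving device would read newline-delimited words
# and display each one as the conversation proceeds.
```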
“The combination of muscle activation detection with motion sensors is a new and exciting way of understanding human intent with other applications in addition to enhanced SLR systems, such as home device activations using context-aware wearables,” Jafari says.
Jafari is associate professor in Texas A&M's Department of Biomedical Engineering, associate professor in the Department of Computer Science and Engineering and the Department of Electrical and Computer Engineering, and researcher at the Texas A&M Engineering Experiment Station's Center for Remote Health Technologies and Systems. His research focuses on wearable computer design and signal processing. He is director of the Embedded Signal Processing Laboratory.
About the Center for Remote Health Technologies and Systems (CRHTS)
The Center for Remote Health Technologies and Systems is designing and developing advanced health technologies and systems to enable healthy living through health monitoring and disease diagnosis, management and prevention. The center’s mission is to identify and overcome the unmet needs of patients and health care providers through the development of breakthrough remote health care devices, biosignal mapping algorithms, remote health analytics and information systems that will improve access, enhance quality, and reduce the cost of health care.
About the Texas A&M Engineering Experiment Station (TEES)
As an engineering research agency of Texas, TEES performs quality research driven by world problems; strengthens and expands the state's workforce through educational partnerships and training; and develops and transfers technology to industry. TEES partners with academic institutions, governmental agencies, industries and communities to solve problems to help improve the quality of life, promote economic development and enhance educational systems. TEES, a member of the Texas A&M University System, is in its 100th year of engineering solutions.

Apps like Google Translate make the world a little more accessible by allowing users to communicate regardless of language barriers. An engineering professor at Texas A&M University aims to do the same for the deaf by developing a wearable that translates American Sign Language into English using sensors and a smartphone.
Roozbeh Jafari’s device straps onto an ASL speaker’s arm and uses motion sensors in concert with electromyographic sensors to track large movements as well as subtle muscle activity. Data from the sensors is fed to a tablet computer via Bluetooth, translating the arcs and flicks of the hand and wrist into written English. The combination of sensors allows the system to interpret small differences between similar gestures with great accuracy. Jafari’s current prototype recognizes 40 ASL words and gets them right 96% of the time.
Most commonly used decoding systems employ video feeds to recognize ASL words, but their reliance on good lighting and on pointing cameras at people makes them unwieldy and unreliable. Jafari’s sensor-based system ups the ante, and much of the hardware necessary for the device is already contained in smartwatches and phones. The Apple Watch, for example, already uses motion-sensing accelerometers and gyroscopes and connects to a powerful computer (the iPhone) via Bluetooth. The only thing it lacks is electromyographic sensors.
There are over 300 different sign languages used internationally, so while it may take some time to make the system robust enough for widespread use, Jafari’s work points to a future in which deaf people can easily converse with non-signers, provided they have access to a smartphone.

