Since I began my Puppet Master gestural interface project, I have wanted to study American Sign Language (ASL). I want to design a system that uses a UNIX-style command line language to control networked physical objects, or the "internet of things". In addition, I am exploring new interfaces that could minimize computer-related injuries caused by repetitive input and control motions. Rather than reinventing the wheel with a brand new set of gestures, I wanted to study a working language to explore how gestural grammar functions and which modes of communication are the most ergonomic. I then hope to apply this knowledge to a gestural language for machine control. Yesterday, I began taking a class in ASL.
What I have learned so far:
- ASL is an interpreted language (interpreted by humans)
- The signs convey meaning, not actual words
- Grammar is indicated by facial expressions (often eyebrow lifts, head tilts, and eye movements)
- When signing (for the purposes of storytelling), an individual will "set up the situation" by placing gestures to the left or right to refer to specific people or things.
- Time in storytelling is also subtly indicated by body shifting. Leaning backward indicates that an event took place in the past, leaning forward while signing indicates future events, and shifting the body back to a "straight" position indicates the present.
- I can say "I am called Anna" and ask "You are?"
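
As a thought experiment for Puppet Master, two of the conventions above, spatial placement to bind referents and body lean to encode tense, already suggest a command structure. Here is a minimal sketch of how they might translate into a UNIX-style command; everything in it (the `GesturalShell` class, the zone and lean names, the flag syntax) is hypothetical and invented for illustration, not part of any existing system:

```python
from dataclasses import dataclass

# Hypothetical mapping inspired by ASL storytelling conventions:
# body lean encodes tense, spatial placement binds a referent.
LEAN_TO_TENSE = {"back": "past", "straight": "present", "forward": "future"}

@dataclass
class Gesture:
    sign: str        # the action being signed, e.g. "toggle"
    placement: str   # "left", "right", or "center"
    lean: str        # "back", "straight", or "forward"

class GesturalShell:
    """Toy interpreter: first 'set up the situation' by binding
    objects to spatial zones, then resolve later gestures in those
    zones into UNIX-style command strings."""
    def __init__(self):
        self.referents = {}  # spatial zone -> bound object name

    def bind(self, zone, name):
        # e.g. left = lamp, right = thermostat
        self.referents[zone] = name

    def interpret(self, g):
        target = self.referents.get(g.placement, "unknown")
        tense = LEAN_TO_TENSE.get(g.lean, "present")
        return f"{g.sign} --target={target} --tense={tense}"

shell = GesturalShell()
shell.bind("left", "lamp")
cmd = shell.interpret(Gesture(sign="toggle", placement="left", lean="straight"))
# cmd == "toggle --target=lamp --tense=present"
```

The interesting part, to me, is that ASL's "set up the situation" step maps naturally onto variable binding in a shell: placement acts like a short-lived environment variable, and lean acts like a modifier flag.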
I have only taken one class so far, so my understanding of the above is very limited. I look forward to learning more during the coming semester, and I will be posting extensive video documentation of my progress both for the class and for my Puppet Master project.