Screenless Typing – Exploring the Auditory Keyboard

Research supported and funded through a Google Research Award

Authors: Reeti Mathur | Aishwarya Sheth | Parimal Vyas | Davide Bolchini

Published at: ASSETS 2019, October 28-30

Venue: Pittsburgh, PA, USA

Link to the paper: https://dl.acm.org/doi/10.1145/3308561.3353789

Challenge

While conducting research with blind or visually impaired (BVI) people, we observed that typing messages on a mobile QWERTY keyboard can be difficult and time-consuming, especially when multitasking on the go – a cane in one hand and other belongings in the other.

Technology

Platform: Android

Wearable device: Myo Band

Connector: Bluetooth
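To make this setup concrete, here is a minimal Kotlin sketch of how the Android app might attach to the Myo band over Bluetooth and turn its poses into typing actions. It assumes the Thalmic Labs Myo Android SDK (com.thalmic.myo); the pose-to-action mapping and the action method names are illustrative placeholders, not the mapping published in the paper.

    import android.app.Activity
    import android.os.Bundle
    import com.thalmic.myo.AbstractDeviceListener
    import com.thalmic.myo.Hub
    import com.thalmic.myo.Myo
    import com.thalmic.myo.Pose

    class KeyflowActivity : Activity() {

        // Maps Myo poses to keyflow actions; this particular mapping is hypothetical.
        private val listener = object : AbstractDeviceListener() {
            override fun onConnect(myo: Myo, timestamp: Long) {
                myo.vibrate(Myo.VibrationType.SHORT) // haptic cue: band connected
            }

            override fun onPose(myo: Myo, timestamp: Long, pose: Pose) {
                when (pose) {
                    Pose.FIST -> selectCurrentLetter()
                    Pose.WAVE_OUT -> skipToNextChunk()
                    Pose.WAVE_IN -> stepOneLetterBack()
                    Pose.FINGERS_SPREAD -> deleteLastCharacter()
                    Pose.DOUBLE_TAP -> speakTypedWord()
                    else -> Unit // Pose.REST / Pose.UNKNOWN: no action
                }
            }
        }

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            val hub = Hub.getInstance()
            if (!hub.init(this)) { // Bluetooth LE unavailable on this device
                finish()
                return
            }
            hub.addListener(listener)
            hub.attachToAdjacentMyo() // pair with the nearest band over Bluetooth
        }

        override fun onDestroy() {
            Hub.getInstance().removeListener(listener)
            Hub.getInstance().shutdown()
            super.onDestroy()
        }

        // Keyflow actions (hypothetical names; see the Concept section below).
        private fun selectCurrentLetter() { /* append the spoken letter */ }
        private fun skipToNextChunk() { /* jump ahead in the A-Z loop */ }
        private fun stepOneLetterBack() { /* move one letter backward */ }
        private fun deleteLastCharacter() { /* remove the last typed letter */ }
        private fun speakTypedWord() { /* read the word back via TTS */ }
    }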

Approach

In this study, I contributed to creating an auditory-based concept, prototyped to enable these users to type words and messages without touching a screen.

The characters are spoken aloud to the user, who selects a letter by performing an easy hand gesture. We call this concept a screenless, auditory keyboard.

Research & Design Methods

Adobe XD

Adobe After Effects

Adobe Photoshop

Interviews

Contextual Inquiry

Qualitative Analysis

Quantitative Analysis

Concept

The keyflow loops the letters continuously in A–Z order

The 26 letters are divided into groups of five, called chunks

Chunking lets users jump ahead to reach a later letter faster

Users perform simple combinations of gestures to select a character, skip a chunk forward, step letter-by-letter backward, delete characters, and have the typed letters or words spoken aloud (see the sketch after this list)

Auditory and haptic cues (vibrations of the band) provide feedback to the user
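As a rough sketch of the keyflow logic described above, the Kotlin class below models the looping cursor, chunk skipping, and text buffer. The class and method names are ours, and the exact chunk boundaries (here, five chunks of five letters plus a final chunk containing only Z) are one plausible split, not necessarily the one used in the study.

    // A minimal model of the keyflow: letters loop A-Z; chunks let users jump ahead.
    class Keyflow(private val chunkSize: Int = 5) {
        private val letters = ('A'..'Z').toList()
        private val chunkStarts = letters.indices.filter { it % chunkSize == 0 }
        private var cursor = 0                 // index of the letter currently spoken
        private val typed = StringBuilder()    // characters selected so far

        val currentLetter: Char get() = letters[cursor]
        val text: String get() = typed.toString()

        // Advance to the next letter in the continuous A-Z loop.
        fun tick() { cursor = (cursor + 1) % letters.size }

        // Jump to the start of the next chunk to reach later letters faster.
        fun skipChunk() { cursor = chunkStarts.firstOrNull { it > cursor } ?: 0 }

        // Step one letter backward, e.g. when the target was just missed.
        fun stepBack() { cursor = (cursor - 1 + letters.size) % letters.size }

        // Select the letter currently being spoken.
        fun select() { typed.append(currentLetter) }

        // Delete the most recently typed character.
        fun delete() { if (typed.isNotEmpty()) typed.deleteCharAt(typed.length - 1) }
    }

For example, to type H starting from A, one chunk skip lands on F, two ticks of the loop reach H, and a select gesture appends it; auditory and haptic cues would confirm each step.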

My Role

I joined the team when the application was partly prototyped and the script for the usability test interview sessions was being created. My contributions to this research are as follows.

1. Led my team in completing the concept partially prototyped by the previous team.

2. Designed illustrations and graphics of the prototype.

3. Designed and created animated videos in Adobe After Effects to demonstrate how the keyflow is used to type simple words.

4. Contributed to finishing the script for the structured interview sessions used to carry out usability tests of the screenless typing concept.

5. Conducted 20 interview sessions with participants to test the usability of the keyflow.

6. Researched related wearable technologies, particularly smart rings such as the Magic Ring [1], LightRing [2], Nenya [3], TRing [4], eRing [5], and iRing [6].

7. Analyzed the quantitative and qualitative data collected from the interview sessions.

Contextual inquiry sessions for testing the usability of the designed prototype

References

[1] L. Jing, Z. Cheng, Y. Zhou, J. Wang, and T. Huang, “Magic Ring: A Self-contained Gesture Input Device on Finger,” in Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, New York, NY, USA, 2013, pp. 39:1–39:4.

[2] W. Kienzle and K. Hinckley, “LightRing: Always-available 2D Input on Any Surface,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 2014, pp. 157–160.

[3] D. Ashbrook, P. Baudisch, and S. White, “Nenya: Subtle and Eyes-free Mobile Input with a Magnetically-tracked Finger Ring,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2011, pp. 2043–2046.

[4] S. H. Yoon, Y. Zhang, K. Huo, and K. Ramani, “TRing: Instant and Customizable Interactions with Objects Using an Embedded Magnet and a Finger-Worn Device,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, New York, NY, USA, 2016, pp. 169–181.

[5] M. Wilhelm, D. Krakowczyk, F. Trollmann, and S. Albayrak, “eRing: Multiple Finger Gesture Recognition with One Ring Using an Electric Field,” in Proceedings of the 2nd International Workshop on Sensor-based Activity Recognition and Interaction, New York, NY, USA, 2015, pp. 7:1–7:6.

[6] M. Ogata, Y. Sugiura, H. Osawa, and M. Imai, “iRing: Intelligent Ring Using Infrared Reflection,” in Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 2012, pp. 131–136.