Gesture Vending
-
Tangible Interface Design Project
-
4-week design sprint
-
Tools: Figma, Blender, Adobe Premiere, Adobe After Effects
Gesture Vending is a tangible interface concept that lets customers interact with a vending machine and make purchases without direct physical contact. The design is particularly valuable in the context of the COVID-19 pandemic, when many people want to minimize contact with public machines to reduce their risk of exposure.
Introduction
Gestural interfaces open up a world where we can interact directly with digital devices using our fingers, hands, arms, or even our whole bodies without the need for physical buttons or keys. They offer a broad range of actions and can be so intuitive that they seamlessly blend into the user experience, unlike traditional keyboard input.
In this project, my goal is to design an intuitive gestural interface that empowers users to control and perform specific tasks on a vending machine. The core value of this interface is its ability to allow users to avoid physical contact with public machines, a crucial feature during the COVID-19 pandemic when minimizing touchpoints is essential. The objective is to develop a set of easily learnable gestures through a mobile app, enabling users to complete vending machine transactions without the need for physical contact. For this project, we assume the vending machine's screen utilizes a camera and 3D sensor to accurately capture 3D depth information and interpret user gestures.
During the prototyping phase, I followed a structured process. First, I used Figma to create flow diagrams and wireframe prototypes. Then, I turned to Blender to craft a 3D model of the vending machine and animate it. Afterward, I recorded my own hand gestures and movements. Finally, I employed Adobe Premiere to edit these clips and incorporate voiceovers, resulting in the final video presentation.
Research
-
Process
- For brainstorming and primary research, I explored the meaning of gestures by prompting myself with keywords and recording my own gestures.
- I asked friends to perform hand gestures for the same keywords and observed their gestures.
- I also reviewed videos, such as films, to observe gestures and their meanings.
- I reviewed the recordings and documented intentional and unintentional gestures that I or others made, then selected gestures and organized them into groups.
-
Goal
My goal for this exercise was to observe my own and others' behavior, document the gestures we used intentionally or unintentionally, see what kinds of gestures I use, and better understand some of the intentions or emotions behind them.
-
Findings
- Even unintentional gestures have a lot in common across people, such as yawning and stretching when tired.
- When I feel nervous, I make more unintentional body gestures while speaking.
- When asked to gesture for a specific common action, different people produce highly consistent gestures.
-
Reference Images
1. Personal Gestures
2. Captured Gestures from The Queen's Gambit
Design Process
-
Representing Gestures
To keep the interface simple, I chose six gestures to cover the gesture-based commands: thumb up for "up", thumb down for "down", thumb left for "left", thumb right for "right", palm up for "stop or skip", and the OK sign for "next or confirm". These gestures are widely understood and convey explicit meanings when associated with commands or actions.
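To make the mapping concrete, the six gestures can be represented as a simple lookup table. This is an illustrative sketch only; the `Gesture` and `Command` names are hypothetical, not taken from the actual prototype:

```python
from enum import Enum, auto

class Gesture(Enum):
    THUMB_UP = auto()
    THUMB_DOWN = auto()
    THUMB_LEFT = auto()
    THUMB_RIGHT = auto()
    PALM_UP = auto()
    OK_SIGN = auto()

class Command(Enum):
    UP = auto()
    DOWN = auto()
    LEFT = auto()
    RIGHT = auto()
    STOP = auto()
    CONFIRM = auto()

# One-to-one mapping from recognized gestures to machine commands.
GESTURE_TO_COMMAND = {
    Gesture.THUMB_UP: Command.UP,
    Gesture.THUMB_DOWN: Command.DOWN,
    Gesture.THUMB_LEFT: Command.LEFT,
    Gesture.THUMB_RIGHT: Command.RIGHT,
    Gesture.PALM_UP: Command.STOP,
    Gesture.OK_SIGN: Command.CONFIRM,
}

def interpret(gesture: Gesture) -> Command:
    """Translate a recognized gesture into a vending-machine command."""
    return GESTURE_TO_COMMAND[gesture]
```

Keeping the mapping one-to-one is a deliberate choice: each gesture has exactly one meaning, which makes the set easy to learn and avoids ambiguous recognitions.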
-
Echo and Semantic Feedback
"Echo" feedback and "semantic" feedback are helpful methods for indicating to users what is going on in the interface, and these indications also help avoid accidental triggers. Both aim for seamless transitions and reduced confusion: designers use them to connect the system with the user and make the effect of each action clear, so the user can confidently move forward in the process.
In my project, a light indicator below the selected merchandise informs the user which item is currently selected, and the user's gestures move the indicator. The echo feedback is shown below:
For semantic feedback, the interface shows visual confirmation between two actions so the user knows they have successfully performed and completed an action. An example is shown below:
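Beyond the screenshots, the echo-feedback behavior can be sketched as a tiny model: the highlight moves immediately with each directional command and is clamped at the shelf edges, so the indicator always echoes a valid selection. The `SelectionIndicator` name is hypothetical:

```python
class SelectionIndicator:
    """Echo feedback: a light below the currently selected item that
    moves immediately in response to each directional command."""

    def __init__(self, columns: int):
        self.columns = columns
        self.index = 0  # currently highlighted slot

    def apply(self, command: str) -> int:
        # Move the highlight; clamp at the shelf edges so the echo
        # always reflects a valid, visible selection.
        if command == "left":
            self.index = max(0, self.index - 1)
        elif command == "right":
            self.index = min(self.columns - 1, self.index + 1)
        return self.index
```

Because `apply` returns the new position on every call, the screen can redraw the light as soon as a gesture is recognized, which is the essence of echo feedback.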
-
User Flow Diagramming
A user flow diagram describes the path a typical user takes on the vending machine to complete a task. By laying out each action and the choices that branch from it, the diagram visually illustrates the journey someone takes as they move toward their goal.
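Such a flow can also be written down as a small state machine: each entry maps a (state, event) pair to the next state. The state and event names below are illustrative, not taken from the actual diagram:

```python
# Hypothetical vending-machine flow as a (state, event) -> state table.
FLOW = {
    ("idle", "confirm"): "browsing",       # wake the machine
    ("browsing", "left"): "browsing",      # move the selection light
    ("browsing", "right"): "browsing",
    ("browsing", "confirm"): "checkout",   # pick the highlighted item
    ("checkout", "stop"): "browsing",      # back out of the purchase
    ("checkout", "confirm"): "dispensing", # pay and dispense
    ("dispensing", "done"): "idle",
}

def step(state: str, event: str) -> str:
    """Advance the flow; unknown events leave the state unchanged."""
    return FLOW.get((state, event), state)
```

Writing the flow this way makes the "unknown events do nothing" rule explicit, which mirrors how the diagram keeps users on a defined path.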
Style Guide
Green Screen Filmmaking
A green screen video is shot against a solid green background so that the green can later be keyed out easily and the recorded subject dropped onto whatever background we want to show the audience. This makes it simple to edit the footage and use it as a partially transparent layer on top of another video.
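Conceptually, the keying step that an editor like Premiere performs can be sketched in a few lines. This toy version (the function name and threshold are illustrative, and real keyers are far more sophisticated) replaces any pixel close to pure green with the corresponding background pixel:

```python
def chroma_key(frame, background, threshold=60.0):
    """Composite `frame` over `background`, keying out near-green pixels.

    Both images are lists of rows of (r, g, b) tuples, 0-255 per channel.
    """
    out = []
    for frame_row, bg_row in zip(frame, background):
        row = []
        for (r, g, b), bg_px in zip(frame_row, bg_row):
            # Euclidean distance of this pixel from pure green (0, 255, 0).
            dist = (r * r + (g - 255) ** 2 + b * b) ** 0.5
            # Near-green pixels become "transparent": show the background.
            row.append(bg_px if dist <= threshold else (r, g, b))
        out.append(row)
    return out
```

This also explains the challenges noted below: reflections and shadows shift pixels away from pure green, so they survive the key and have to be corrected with image adjustments.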
-
Process
- Prepare a green screen by using the computer to project a solid green image onto the TV as the background
- Set up a tripod to hold the phone
- Set up the lighting
- Shoot the hand gestures one by one
-
Challenges
- It was hard to predict how long to hold each motion in the recording
- Reflections and shadows were a problem, so I needed to make some image adjustments in Premiere
-
Image References
- Recording hand gestures in front of the green screen
- Editing video clips and voice-over in Adobe Premiere
- Building the vending machine 3D model and animation in Blender
Reflections
Key Takeaways:
- Visual Echo Feedback: Because users do not physically touch a mouse, keyboard, or screen when performing actions that affect the system, clear visual feedback is crucial to help them understand that their gestures are taking effect.
- Green Screen Filmmaking: Green screen filmmaking is an effective technique for seamlessly integrating real-world elements, such as hand gestures, into edited videos, enhancing the authenticity of the presentation.
Strengths of the Project:
The project's standout feature is its high-fidelity representation of the user flow and experience, achieved through 3D models and real hand gestures. Its value proposition aligns well with the context of the pandemic, making it relevant to the target audience. Moreover, the user flows and gestures are simple and easily comprehensible, requiring minimal user training.
Areas for Improvement:
- Onboarding Process: While users can skip the onboarding process, there is room to simplify it further and make it more engaging and enjoyable.
- Gesture Response: The system's responsiveness to gestures could be improved. For instance, it could trigger a reaction once the user holds the same gesture for over one second, rather than requiring them to drop and then raise their hand again before the system responds.
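The proposed dwell-time improvement can be sketched as a small trigger that fires once per continuous hold. The class name and the 1.0-second threshold are illustrative assumptions, not part of the prototype:

```python
class DwellTrigger:
    """Fire a command once the same gesture has been held for a full
    second, instead of requiring the hand to drop and rise again."""

    def __init__(self, hold_seconds: float = 1.0):
        self.hold_seconds = hold_seconds
        self.current = None   # gesture currently being held
        self.since = 0.0      # timestamp when it first appeared
        self.fired = False    # whether we already triggered for this hold

    def update(self, gesture, timestamp: float):
        """Feed one recognition sample; return the gesture to act on, or None."""
        if gesture != self.current:
            # A new gesture (or no gesture) starts a fresh hold.
            self.current, self.since, self.fired = gesture, timestamp, False
            return None
        if (not self.fired and gesture is not None
                and timestamp - self.since >= self.hold_seconds):
            self.fired = True
            return gesture  # fire exactly once per continuous hold
        return None
```

Firing only once per hold is the key detail: without the `fired` flag, a user holding a thumb-up would scroll continuously instead of moving one step.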