SKIN DEEP: A Virtual-Physical Experience

Katherine Ng
4 min read · Dec 1, 2020

The purpose of this installation is to virtually simulate the feeling of holding hands with someone while physically being apart. Users must press the two pressure sensors simultaneously to align the hands on-screen; only when the hands intersect does the video play.

The accompanying video element associates the emotions one feels in nature with the joy and comfort of touching another being. When the hands separate, the video pauses on a still frame, simulating the feeling of capturing a moment in time.

The goal of this simple interaction is to represent the complex feelings of comfort in physical touch. It is difficult to describe our thoughts and emotions verbally, but instinctively we can easily relate and feel closer to people through touch. As Covid-19 continues to push us apart physically, we can still attempt to re-create those emotions and experiences through digital interaction.

Parts & Specifications

Fritzing diagram for the product

Hardware: Arduino Uno Rev3 (1), USB cable (1), 830-point breadboard (1), M/M jumper wires (8), 10 kΩ 5% resistors (2), FSR pressure sensors (2)

Software: Arduino, Processing, Adobe Premiere Pro

Preliminary Ideas & Sketches

In the initial stage of this project, I was exploring new ways of using the force/pressure sensor. In previous experiments, the sensor had trouble detecting the force of an object applied against a flat surface, so I re-evaluated its strengths. Through many trials of blowing on, flicking, and playing with the sensor, I came to the realization that its strength was its simplicity as an object to press.

Having established the sensor’s capabilities, I started to evaluate its use case as a virtual-physical component. I wanted to challenge the current limitations on physical contact due to the COVID-19 global pandemic and thought of re-creating physical touch through digital interaction. Using the pressure sensors as an extension of our hands, I created ‘SKIN DEEP’.

Process & Code

Process 1: I started by controlling elements of a video in Processing using the Arduino sensors. One sensor switched the video footage while the other altered the playback speed.

Code credit to Borzu Talaie
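
The credited code isn’t reproduced here, but a minimal Processing sketch of this step might look like the following. It assumes the Arduino prints both FSR readings as one comma-separated line per loop (e.g. 412,87) at 9600 baud, and that two clips with the hypothetical filenames forest.mp4 and ocean.mp4 sit in the sketch’s data folder.

```
import processing.serial.*;
import processing.video.*;

Serial port;
Movie forest, ocean, current;
int sensorA = 0, sensorB = 0;

void setup() {
  size(640, 360);
  // Assumes the Arduino is the first serial device and sends "a,b\n"
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
  forest = new Movie(this, "forest.mp4");  // hypothetical filename
  ocean  = new Movie(this, "ocean.mp4");   // hypothetical filename
  current = forest;
  current.loop();
}

void draw() {
  // Sensor A picks the footage; sensor B scales the playback speed
  Movie next = (sensorA > 512) ? ocean : forest;
  if (next != current) {
    current.stop();
    current = next;
    current.loop();
  }
  current.speed(map(sensorB, 0, 1023, 0.25, 2.0));
  image(current, 0, 0, width, height);
}

void movieEvent(Movie m) {
  m.read();
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  String[] parts = split(trim(line), ',');
  if (parts.length == 2) {
    sensorA = int(parts[0]);
    sensorB = int(parts[1]);
  }
}
```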

Process 2: My next goal was to control two elements independently. I placed two circles at the same y-coordinate and controlled their movement along the x-axis, which allowed them to overlap. I implemented collision detection in Processing, measuring the distance between the circles’ centers: if the distance was less than the sum of their radii, the circles were touching, and their colour changed from blue to orange.

Code credit to http://jeffreythompson.org/
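
A sketch in the spirit of the credited collision-detection example (my reconstruction, not the original code): the mouse stands in for one sensor, and the circles turn from blue to orange when the distance between their centers drops below the sum of their radii.

```
float x1, x2;           // circle centers (shared y-coordinate)
float r1 = 50, r2 = 50; // circle radii

void setup() {
  size(640, 360);
  noStroke();
}

void draw() {
  background(255);
  // Stand-ins for the mapped sensor values: the mouse drives one
  // circle, the other stays put, just to exercise the collision test.
  x1 = mouseX;
  x2 = width * 0.75;

  // Circle-circle collision: touching when the distance between
  // centers is less than the sum of the radii.
  boolean touching = dist(x1, height/2, x2, height/2) < r1 + r2;

  fill(touching ? color(255, 150, 0) : color(0, 100, 255));
  circle(x1, height/2, r1 * 2);
  circle(x2, height/2, r2 * 2);
}
```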

Process 3: Both Arduino sensors initially mapped their readings to rightward movement, so by default the circles travelled in the same direction and were already touching. The next step was to change the direction of one circle so that they would collide only when both sensors were active. To do this, I reversed the x-axis mapping of one sensor so that its circle moved left across the screen instead of defaulting to the right, as in the snippet below.
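
The reversal itself is just a flipped map() range. A minimal sketch of the idea, with the serial plumbing omitted (sensorA and sensorB stand for the readings from the Process 1 sketch):

```
int sensorA = 0, sensorB = 0; // serial readings, as in the Process 1 sketch

void setup() {
  size(640, 360);
}

void draw() {
  background(255);
  // Sensor A pushes its circle right from the left edge...
  float xA = map(sensorA, 0, 1023, 0, width);
  // ...while sensor B's range is flipped, so pressure pushes its circle
  // left from the right edge. The circles now start apart and can only
  // meet in the middle when both sensors are pressed.
  float xB = map(sensorB, 0, 1023, width, 0);
  circle(xA, height/2, 100);
  circle(xB, height/2, 100);
}
```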

Process 4: Building on the logic developed in the previous processes, I incorporated two images, overlaid on the circles’ dimensions, to visually emulate two hands holding when they touched. Using the collision detection, I added a video in Processing that plays relaxing scenery and sound when the hands touch.

Video credit to https://www.youtube.com/watch?v=BHACKCNDMW8
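
A hedged sketch of this step, combining the pieces above: it assumes two transparent PNGs of hands (hand_left.png and hand_right.png, hypothetical filenames) drawn at the circle positions, and a nature clip (scenery.mp4, also hypothetical) that plays only while the hands overlap.

```
import processing.serial.*;
import processing.video.*;

Serial port;
Movie scenery;
PImage handL, handR;
int sensorA = 0, sensorB = 0;
float r = 60;  // collision radius for each hand

void setup() {
  size(640, 360);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
  scenery = new Movie(this, "scenery.mp4"); // hypothetical filename
  handL = loadImage("hand_left.png");       // hypothetical filename
  handR = loadImage("hand_right.png");      // hypothetical filename
  imageMode(CENTER);
}

void draw() {
  float xL = map(sensorA, 0, 1023, 0, width); // left hand moves right
  float xR = map(sensorB, 0, 1023, width, 0); // right hand moves left
  boolean touching = dist(xL, height/2, xR, height/2) < r * 2;

  if (touching) {
    scenery.play();   // video (and its audio) runs while touching
  } else {
    scenery.pause();  // separating freezes the current frame
  }

  image(scenery, width/2, height/2, width, height);
  image(handL, xL, height/2);
  image(handR, xR, height/2);
}

void movieEvent(Movie m) {
  m.read();
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  String[] parts = split(trim(line), ',');
  if (parts.length == 2) {
    sensorA = int(parts[0]);
    sensorB = int(parts[1]);
  }
}
```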

Process 5: This version tested a full-screen view of the installation. Unfortunately, the increased resolution noticeably slowed the on-screen interaction, so the window size could not be altered.
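
For reference, the full-screen test amounts to swapping the fixed window size for fullScreen(). Whether a hardware-accelerated renderer such as P2D would recover the lost speed is an untested assumption on my part:

```
void settings() {
  // size(640, 360);  // original fixed window, responsive
  fullScreen();       // full-screen test: too slow at this resolution
  // fullScreen(P2D); // untested assumption: a GPU renderer may be faster
}
```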

Final: In the final version of this project, I overlaid a vignette while the video was paused to portray a dream-like experience. I also added logic that jumps to a random time in the video, so that each time the hands touch, viewers are transported to a new location.
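
A sketch of these two final touches, assuming a vignette image with a transparent center (vignette.png, hypothetical filename); the mouse button stands in for the sensor-driven collision test:

```
import processing.video.*;

Movie scenery;
PImage vignette;  // hypothetical: dark edges, transparent center
boolean wasTouching = false;

void setup() {
  size(640, 360);
  scenery = new Movie(this, "scenery.mp4"); // hypothetical filename
  vignette = loadImage("vignette.png");     // hypothetical filename
  scenery.play();
  scenery.pause();
}

void draw() {
  // Stand-in for the sensor-driven collision test: hold the mouse
  // button to simulate the hands touching.
  boolean touching = mousePressed;

  if (touching && !wasTouching) {
    // On the moment of contact, jump somewhere new in the footage
    scenery.jump(random(scenery.duration()));
    scenery.play();
  } else if (!touching && wasTouching) {
    scenery.pause(); // separating freezes a captured moment
  }
  wasTouching = touching;

  image(scenery, 0, 0, width, height);
  if (!touching) {
    // The vignette only frames the paused, dream-like moments
    image(vignette, 0, 0, width, height);
  }
}

void movieEvent(Movie m) {
  m.read();
}
```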

Future Directions

In future developments of this project, I would like to explore alternative options for a full-screen display. At the current resolution, any increase in viewport size compromises the interaction speed; I want to find out whether there is more flexibility without compromising the experience.

When interacting with the product, I found that audio playback stuttered even after constraining the sensor input. I attempted to separate the audio from the video to control each independently, but doing so ended up covering the video element. My next step is to find a way to smooth the audio without disturbing the visuals.


Katherine Ng

A product designer from Toronto dedicated to user experience. katng.io