PROCESS
The Build
The prototype is made of two parts that work together: the physical build and the digital build.
Physical Build
The physical build came first. It involved making a squat rack out of wood, then enclosing the sides with sheets so the inside felt like a small rectangular prism, just like an elevator. This was done following the Buff Dudes tutorial on YouTube.

The next step was displaying the computer screen inside the elevator. The website shown on the computer was mirrored onto a spare iPad; by mounting the iPad onto the squat rack and using Splashtop, the website could be duplicated into the 'elevator' for users to see. After that came getting the button to work like a real elevator button. An Arduino multi-button module was soldered onto a board and connected through a breadboard to the Arduino Uno, so that when a user presses their 'level' it triggers an interaction to load. Clicking quickly opens the single-user prototype, and holding the button opens the multi-user prototype. The Arduino is then connected to the computer alongside a webcam, which acts both as a camera to read the user's poses and as a microphone for the multi-user charades game.

In creating a physical elevator, I wanted to simulate a real elevator experience (since testing in a real elevator was not possible due to COVID-19), so that users could feel how the product would be used in real life. I believe that in doing so I was able to get more of an emotional response from users when testing.
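To make the click-versus-hold behaviour concrete, the sketch below shows one way the Arduino could distinguish a quick click from a held press and report the result over serial (as described in the digital build below). The pin number, the INPUT_PULLUP wiring, and the 800 ms hold threshold are illustrative assumptions, not the exact values used in the prototype.

```cpp
// Hypothetical Arduino sketch: quick click -> single-user prototype (prints 1),
// held press -> multi-user prototype (prints 2). Values are illustrative only.
const int BUTTON_PIN = 2;          // assumed: button wired from pin 2 to ground
const unsigned long HOLD_MS = 800; // assumed: presses longer than this count as a "hold"

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP); // button reads LOW while pressed
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {        // button pressed
    unsigned long pressedAt = millis();
    while (digitalRead(BUTTON_PIN) == LOW) {}  // wait for release
    unsigned long heldFor = millis() - pressedAt;

    // Report which prototype to launch over the serial port.
    Serial.println(heldFor < HOLD_MS ? 1 : 2);
    delay(50);                                 // crude debounce
  }
}
```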
Digital Build
The digital build incorporates HTML, CSS, JavaScript, Arduino code, and Unity, alongside XAMPP and Splashtop. Once the button is pressed, the Arduino code prints either a 1 or a 2 to the computer's serial port. Unity then reads this value from the serial port to choose which website to open. The two prototypes for the different users are set up as separate websites hosted locally. The single-user website uses PoseNet alongside a cosine-similarity algorithm to measure how closely the pose in the live webcam feed matches a reference image. The multi-user website uses a Google speech-recognition API to detect what is being said and compares it to a list of randomly generated words. As stated above, Splashtop is then used to mirror what is on the computer screen so users are able to see what they need to do. The image on the right is a diagram of how the code works.
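The pose comparison itself is a cosine similarity between two keypoint vectors. In the prototype this runs in JavaScript in the browser alongside PoseNet, but the core calculation can be sketched in a few lines. The sketch below is written in C++ with toy coordinate values purely for illustration; it assumes each pose has already been flattened into an [x0, y0, x1, y1, ...] vector, and any normalisation or confidence weighting the prototype performs is omitted.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Cosine similarity between two flattened keypoint vectors
// (e.g. [x0, y0, x1, y1, ...] from PoseNet). Returns a value in [-1, 1];
// values close to 1 mean the live pose closely matches the reference pose.
double cosineSimilarity(const std::vector<double>& a, const std::vector<double>& b) {
  double dot = 0.0, normA = 0.0, normB = 0.0;
  for (size_t i = 0; i < a.size() && i < b.size(); ++i) {
    dot   += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  if (normA == 0.0 || normB == 0.0) return 0.0; // guard against empty poses
  return dot / (std::sqrt(normA) * std::sqrt(normB));
}

int main() {
  // Toy example: a reference pose and a slightly shifted live pose.
  std::vector<double> referencePose = {0.10, 0.20, 0.40, 0.25, 0.55, 0.60};
  std::vector<double> livePose      = {0.12, 0.22, 0.41, 0.24, 0.53, 0.62};

  double score = cosineSimilarity(referencePose, livePose);
  std::cout << "Pose match score: " << score << "\n"; // close to 1 -> "match"
  return 0;
}
```

A score close to 1 is treated as a match; in practice, a chosen threshold decides when the user's pose counts as 'correct'.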