Intended concept & experience
Video that demonstrates the interaction flow
Intended interaction
Problem Definition
Social media makes communicating with each other easier than ever before; however, this low barrier to entry also makes conversations less personal and more superficial. Our intent is to provide users with a more personalised and intimate way of communicating with each other. The relevance of this concept has become especially clear now that social distancing is being enforced due to the COVID-19 pandemic: an influx of new apps and services for communicating with friends and loved ones has surfaced, demonstrating the need for new ways of keeping in touch.
The Concept
E-mories provides users with an easy and engaging way to share emotions with friends and family remotely, without the distractions inherent in complex devices such as smartphones and computers. When a user shares an emotion or experience with a confidant, the sender can trust that the recipient is fully engaged with what they want to convey. The intended effect is that emotions shared on this platform are received with greater impact. Sharing a positive experience paired with a colour, for instance, can lift the recipient's day; conversely, when a negative emotion is shared, the recipient can empathise more with the sender than through conventional channels.
To the left is a video that demonstrates the intended prototype and how users would engage with it. In addition, a visualisation of the intended interaction flow is displayed.
Final product and experience
Video by team member Tuva
The final assembled prototype works both ways: the user can both send and receive data over a functioning server. The input interaction starts with the ball in its initial state. From here the user can either wait for a message to arrive or start composing one of their own. A squeeze starts the process. However, our team was not able to get message recording to work because of a problem with the SD card, so this stage of the interaction is simulated. After the initial squeeze, the ball lights up red, indicating that it is (pretend-)recording a message. When finished, the user squeezes again and the ball turns yellow (pretending to save) and then green (pretending the save succeeded).

Next, the ball enters the colour selection stage, which consists of two states. First, the user rotates the ball to find the colour that best represents the intended message; a squeeze locks the selection. The user then adjusts the brightness of the colour by again turning the ball in various directions, and a final squeeze finishes this stage.

The last interaction is the send/delete state, where the user must decide what to do with the data currently on the ball. To send, they throw the ball in the air and catch it; two small vibrations indicate success. To delete the message, they instead drop the ball into their other hand, which gives one small vibration, likewise indicating success. Both interactions return the ball to its initial state. The data currently being sent over the server is the RGB code that the user chose during the colour selection stage.
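The interaction flow described above can be summarised as a small state machine. The sketch below is an illustrative C++ model of that flow, not the team's actual firmware; the state and event names are my own.

```cpp
#include <cassert>

// States of the ball, following the interaction flow described above.
enum State { IDLE, RECORDING, COLOUR_SELECT, BRIGHTNESS, SEND_OR_DELETE };

// Events the sensors can produce.
enum Event { SQUEEZE, THROW_AND_CATCH, DROP_TO_HAND };

// Advance the state machine on an event; unrelated events leave the state unchanged.
State next(State s, Event e) {
    switch (s) {
        case IDLE:          return e == SQUEEZE ? RECORDING : IDLE;
        case RECORDING:     return e == SQUEEZE ? COLOUR_SELECT : RECORDING; // yellow/green shown in between
        case COLOUR_SELECT: return e == SQUEEZE ? BRIGHTNESS : COLOUR_SELECT;
        case BRIGHTNESS:    return e == SQUEEZE ? SEND_OR_DELETE : BRIGHTNESS;
        case SEND_OR_DELETE:
            if (e == THROW_AND_CATCH) return IDLE; // sent: two short vibrations
            if (e == DROP_TO_HAND)    return IDLE; // deleted: one short vibration
            return SEND_OR_DELETE;
    }
    return s;
}
```

Modelling the flow this way also makes it clear that a squeeze always means "advance", while throw and drop are only meaningful in the final state.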
The second interaction is the output: receiving the message (colour). When a colour has been sent through the server, the ball starts pulsating in the colour chosen by the sending user. The ball continues to pulsate until the receiving user squeezes it, after which it lights up steadily until the message has been played and then turns off. As before, the playback stage is simulated, since no recording could be obtained from the sending ball.
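Since only an RGB code travels over the server, the pulsating effect on the receiving ball comes down to scaling that colour's brightness over time. A minimal sketch of how that could be computed (a triangle-wave pulse; the actual prototype's timing and curve are not documented here):

```cpp
#include <cassert>
#include <cstdint>

// Scale an 8-bit colour channel by a brightness factor in [0, 1].
uint8_t scaleChannel(uint8_t c, float b) {
    return static_cast<uint8_t>(c * b + 0.5f);
}

// Triangle-wave brightness for a pulsating effect: ramps 0 -> 1 -> 0 over `period` ms.
float pulseBrightness(unsigned long ms, unsigned long period) {
    float phase = static_cast<float>(ms % period) / period; // position in cycle, 0..1
    return phase < 0.5f ? 2.0f * phase : 2.0f * (1.0f - phase);
}
```

Applying `scaleChannel` to each of the received R, G and B values with the current `pulseBrightness` gives the pulsating colour; a constant brightness of 1.0 gives the steady light shown after the squeeze.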
The technical & physical make-up of the final product
Arduino Schematic
Physical prototype
Stage 1 – Individual Part
The prototype was developed using an Arduino board with several different components. First and foremost, an accelerometer was needed to detect the throwing interaction. In addition, I decided to include an LED ring so the prototype could display the state from the previous step of the interaction plan. With this LED ring I could give visual confirmation that a message had been sent successfully by turning it off once the message was sent; otherwise the light remained lit. After conducting user testing, I also added a vibration motor, since testers emphasised that visual feedback alone was not sufficient to signal whether sending the message had succeeded or failed.
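One common way to detect a throw with an accelerometer is to watch for free fall: while the ball is in the air, the measured acceleration magnitude drops well below 1 g. The sketch below illustrates that idea; the threshold value is my own assumption, not the one used in the prototype.

```cpp
#include <cassert>
#include <cmath>

// Magnitude below this (in g) suggests the ball is in free fall, i.e. thrown.
// Illustrative threshold, not the prototype's actual tuning.
const float FREE_FALL_G = 0.3f;

// Check whether a raw accelerometer reading (in g per axis) looks like free fall.
bool looksLikeFreeFall(float ax, float ay, float az) {
    float magnitude = std::sqrt(ax * ax + ay * ay + az * az);
    return magnitude < FREE_FALL_G;
}
```

At rest the sensor reads roughly 1 g (gravity), so the check cleanly separates "held" from "airborne" regardless of the ball's orientation.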
Stage 2 – Assembled Prototype
After each team member had finished their part of the prototype, we assembled all of the components into one prototype. In addition to the accelerometer and vibration motor, I needed to swap the LED ring for an LED strip (since this is what the other team members used, which made the code easier to share) and include a bend sensor to detect the squeezing of the ball. In all, four different components were used for this prototype in addition to the Arduino board itself.
The Physical Components
The only physical aspect of the prototype is the ball itself: a transparent Christmas ball decoration cut in half to fit all of the electronics inside and then taped back together. The ball needed to be soft enough to squeeze, but not so soft that it became impossible to interact with correctly. Lastly, sandpaper (or a nail file) was used to roughen the outside so the light would diffuse better through the ball.
Key aspects of the design process
Individual development
At the very beginning of conceptualising the E-mori device, some basic interactions and actions were identified as necessary. Among these, throwing the ball upwards was chosen as an appropriate metaphor for sending the message. One reason for this choice was that it signified the user throwing the message up to the “cloud”.
After implementing the throwing interaction, user testing was conducted to gain insight into what users thought about the prototype and what could be improved. One of the main findings was that users wanted additional feedback to signal that the throwing action was complete; the only feedback at the time was the LEDs visualising the selected colour turning off. The solution was to add haptic feedback similar to notifications on a smartphone, since haptic feedback is commonly used to signal that an action has been performed.
After the initial prototype was constructed, a prototype presentation was held. Several research questions were raised in this presentation, and the conclusions showed that further improvement was needed. Once the user initiated the recording action, there was no turning back: they had to commit to sending something, which made most of the users uncomfortable. Based on this, the opposite of throwing (dropping) was selected as an appropriate and understandable metaphor for deleting the recorded message and the selected colour.
As there were now two possible actions after recording and selecting a colour, I deemed it necessary to distinguish the feedback given when the user had either sent or deleted the message. This did not need to be complicated, as simple patterns are easier to associate with their actions: two distinct vibrations signal that the message was sent, and one vibration that it was deleted.
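The two feedback patterns can be expressed as data rather than duplicated control logic. A minimal sketch (the pulse and gap durations are illustrative assumptions; the prototype's real timings are not documented here):

```cpp
#include <cassert>

// A haptic pattern: how many pulses to play, and for how long.
// Durations in milliseconds are illustrative, not the prototype's values.
struct HapticPattern {
    int pulses;
    int pulseMs;
    int gapMs;
};

// Two pulses confirm "sent", one pulse confirms "deleted".
HapticPattern feedbackFor(bool sent) {
    return sent ? HapticPattern{2, 100, 100} : HapticPattern{1, 100, 0};
}
```

Keeping the patterns in one place makes it easy to tune them later if testing shows the two confirmations are still being confused.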
Putting the parts together
It became apparent quite early on that distinguishing between the throwing action and the shaking actions would be a problem, since both rely on the accelerometer. The team was worried that the accelerometer alone would not be able to tell a throw from a shake. The solution was to require a squeeze to conclude the previous stage (the colour selection state) before the ball was armed to be sent or deleted.
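In other words, throw/drop detection is only "armed" after the confirming squeeze, so accelerometer spikes during colour selection are ignored. A minimal sketch of that gating, under my own naming assumptions:

```cpp
#include <cassert>

// Gate that arms throw/drop detection only after the user has squeezed
// to lock in their colour and brightness. Illustrative sketch, not firmware.
struct ThrowGate {
    bool armed = false;

    void onSqueezeConfirm() { armed = true; }   // colour/brightness locked in
    void onResolved()       { armed = false; }  // message sent or deleted

    // Large accelerometer motion only counts as a throw/drop while armed.
    bool shouldInterpretMotion(bool bigMotion) const { return armed && bigMotion; }
};
```

This sidesteps the classification problem entirely: instead of teaching the accelerometer code to tell a shake from a throw, the interaction sequence guarantees the two can never occur in the same state.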
Some changes also had to be made because the team members had used different libraries. To assemble all of the parts, a new set of libraries was agreed upon; this, however, meant that I had to rewrite part of my code to accommodate them.
In the end, the team successfully assembled the various parts into one functioning code that could be used on all of our prototypes. This made it possible to send data through the server to each other.
Reflect on project outcomes
Working with emotional intelligence has been challenging due to the sensitive nature of sharing one’s emotions. Early on, the team worried that potential users would not be comfortable using our concept. Initial testing showed that people, in general, are not comfortable sharing their emotions, especially publicly, but they agreed that sharing emotions is important and healthy. Based on this we decided to focus on sharing with people’s loved ones, whether friends or family. This, however, limits the device to a select few of a person’s acquaintances, which limits the broad use of our concept. Still, if it encourages people to engage with the concept and teaches them to express their emotions with more confidence, I would consider this project a success.
The concept is both playful and valuable, enhancing people’s everyday lives by encouraging them to share moments and feelings that are significant to them. It enables people to share and empathise with their close ones’ emotions, which can bring people even closer together. Our concept connects one user to another, making the experience both personal and intimate, as each user knows that no one else has access to this communication. Additionally, when a ball lights up, they know it is from that specific person.
Sadly, one of the features we wanted to incorporate in the final product was the possibility of recording messages. This feature was not implemented because the components we had on hand for the task were either faulty or damaged. To remedy the lack of recording, we incorporated delays at appropriate points in the interactions to simulate the user making a recording. Had this feature succeeded, it would have provided an even more meaningful experience and demonstrated the overall concept better. However, since colour is strongly associated with emotion, I am satisfied that the colour-sharing part of the concept worked out well.