Problem Space

We started with Sassy Tech as our team domain, where we explored technologies with sass. To give the tech a purpose, we decided to tackle the overuse of technology. Observing how people today are glued to screens, we narrowed our problem space down to getting people off their screens, i.e. reducing screen time. Hence, we set out to use sassy tech to reduce people's screen time.

The Solution

To stop users from focusing on screens for long periods, we created a sassy robot that annoys users through a variety of mediums and disturbs their TV-watching experience. The robot can talk, play with you and, if angered, take control of your devices. Users can interact with the robot to try to stop it from annoying them. While this gives them temporary relief from the annoyance, the interactions have a long-term impact on the robot's behaviour.

So, how does it work?


SassMobile, with a Roomba as its chariot, zooms around the house looking for people on screens. When a user is detected, the robot warns them about overuse. If the screen is turned off, the robot stops annoying them and starts moving around the house again. But if the advice is ignored, the robot takes the sassy approach. From then on, the robot keeps sassing the user until they get off the TV. The robot can be stopped through interactions, but beware, its sassiness may increase as the user tries to stop it. The robot reacts based on how the user behaves with it, i.e. "the worse the robot is treated, the sassier it becomes".

Through these interactions with the robot, guilt is induced in users to stop them from watching the screen. Alternatively, even if the user doesn't feel guilty, the robot's control over the user's devices eventually forces them off their screens. In the end, the user either gives up or is forced to give up, ultimately reducing their screen time.


What can our robot do?


Talk

SassMobile can talk to the user, advise them and make sassy comments. (see Ben William's portfolio)

Interact

Our robot offers multiple input interactions and reacts to each of them. (my individual focus)

Control Devices

The robot can reduce TV volume, change channels and turn off the TV. (see Tim Harper's portfolio)


Input Interactions


A crucial part of shaping the robot's personality is the input interactions, i.e. the ways a user interacts with the robot. A user may interact with the robot to stop it from annoying them temporarily. The input interactions are designed to provoke self-reflection in users and guilt them. This is my focus in the project: the input interactions that take the user's anger (negative interactions) and shape the robot's personality. As mentioned before, this follows the theme of "you get as you do".

As an alternative, users can also use positive interactions to make the robot happy and stop it without changing its personality, but the positive interactions are more time-consuming and require more effort from the user. It is therefore the user's choice to use positive interactions and make the robot happy, or use negative ones and anger it. There are four interactions on the robot, each of which stops a different functionality of the robot. The effects of interacting with the robot are also visible on its face as its smile turns to a frown through different LED effects. These LED effects and reactions (the smile turning to a frown) provide emotional feedback to the user and induce guilt in them. A minimal sketch of this dynamic is shown below, followed by the four interactions built into the robot.
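To make the split between long-term personality and short-term relief concrete, here is a minimal sketch of that logic. The function names, timings and numbers are my own illustration rather than the project's actual code.

```cpp
// Minimal sketch of the "you get as you do" idea: negative interactions raise a
// persistent sassiness level, while positive ones only pause the robot.
// Names, timings and values here are assumptions, not the project's exact code.
int sassiness = 0;               // long-term personality, never reset by pausing
unsigned long pausedUntil = 0;   // short-term relief from the annoyance

void onNegativeInteraction() {
  sassiness++;                   // the worse the robot is treated, the sassier it gets
  pausedUntil = millis() + 5000; // but the user still gets temporary relief
}

void onPositiveInteraction() {
  // positive interactions take more effort but leave the personality untouched
  pausedUntil = millis() + 5000;
}

void setup() {}

void loop() {
  bool annoying = millis() > pausedUntil;  // resume sassing once the pause ends
  // when annoying is true, the robot nags with an intensity based on sassiness
}
```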


Volume Control

If the user feels the robot is being loud, they can use the volume control located on the robot's mouth to adjust the volume. Turn the volume control to the right to lower the volume and to the left to increase it. As the volume is lowered, its effects are also visible on the robot: the smile slowly turns into a frown through LED effects, similar to a TV volume bar shrinking as the volume is lowered. This interaction is useful when the robot talks a lot and the user wants relief from all the nagging and constant sassing.
No volume = No speaking
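As a rough illustration of how the mouth LEDs could mirror the volume level, a sketch along these lines would produce the shrinking-bar effect; the pin numbers and LED count are assumptions.

```cpp
// Sketch of the volume-control interaction (pin numbers and LED count are assumptions).
const int POT_PIN = A0;      // potentiometer on the robot's mouth
const int MOUTH_LEDS = 8;    // LEDs forming the smile / volume bar
const int LED_PINS[MOUTH_LEDS] = {2, 3, 4, 5, 6, 7, 8, 9};

void setup() {
  for (int i = 0; i < MOUTH_LEDS; i++) pinMode(LED_PINS[i], OUTPUT);
}

void loop() {
  int raw = analogRead(POT_PIN);                  // 0-1023 from the potentiometer
  int level = map(raw, 0, 1023, 0, MOUTH_LEDS);   // how many LEDs stay lit

  // Light LEDs up to the current "volume" level; the bar shrinks as the user
  // turns the volume down, mirroring a TV volume bar reducing.
  for (int i = 0; i < MOUTH_LEDS; i++) {
    digitalWrite(LED_PINS[i], i < level ? HIGH : LOW);
  }
  // At zero volume the robot stops speaking (speech is handled elsewhere).
}
```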

Blinding Robot

If the user is working or just doesn't want to be disturbed, they can stop the robot's movement by blinding it. Simply cover the robot's eye and it is unable to see and move around. This interaction is particularly useful for stopping the robot in a particular part of the house, i.e. stopping it before it finds the user. For LED effects, the robot's smile disappears as it sees less, indicating that the robot's happiness fades as the light in its eye fades. The user also has the option of making the robot happy by placing it somewhere with lots of light (a positive interaction), which makes it smile in the same way as massaging its ears.
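A sketch of how the photocell reading could fade the smile might look like this; the pin numbers and threshold are assumptions.

```cpp
// Sketch of the blinding interaction (pin numbers and threshold are assumptions).
const int PHOTOCELL_PIN = A1;    // photocell acting as the robot's eye
const int SMILE_PIN = 10;        // PWM pin driving the smile LEDs
const int DARK_THRESHOLD = 200;  // below this the robot counts as "blinded"

void setup() {
  pinMode(SMILE_PIN, OUTPUT);
}

void loop() {
  int light = analogRead(PHOTOCELL_PIN);         // 0-1023, more light = higher reading
  int brightness = map(light, 0, 1023, 0, 255);  // the smile fades with the light
  analogWrite(SMILE_PIN, brightness);

  if (light < DARK_THRESHOLD) {
    // Covered eye: the robot would pause its roaming here.
    // In bright light the smile returns (the positive interaction).
  }
}
```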

Scolding

The ear-pulling interaction is one of the aggressive interactions, used to scold the robot. If the robot becomes naughty, the user can scold it to stop it temporarily. Just grab the robot's ear and start pulling it. The harder the ear is pulled, the angrier the robot gets. The robot's discomfort is also expressed through its frown, where the brightness of the frown indicates how hard the ear is being pulled. Alternatively, the user can massage the robot's ear (a slight touch) to make it happy. Massaging takes more effort as it's a constant action, and the smile coming back up indicates the positive interaction.
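A sketch of how the FSR under the ear could separate a hard pull from a gentle massage is shown below; the pins and thresholds are assumptions.

```cpp
// Sketch of the scolding/massaging interaction (pins and thresholds are assumptions).
const int FSR_PIN = A2;        // force sensitive resistor under the ear
const int FROWN_PIN = 11;      // PWM pin driving the frown LEDs
const int MASSAGE_MAX = 300;   // a light, sustained touch counts as a massage
const int PULL_MIN = 600;      // a hard pull counts as scolding

void setup() {
  pinMode(FROWN_PIN, OUTPUT);
}

void loop() {
  int force = analogRead(FSR_PIN);  // 0-1023, harder pull = higher reading

  if (force > PULL_MIN) {
    // Scolding: the harder the ear is pulled, the brighter the frown.
    analogWrite(FROWN_PIN, map(force, PULL_MIN, 1023, 50, 255));
  } else if (force > 0 && force < MASSAGE_MAX) {
    // Massaging: a gentle touch clears the frown and keeps the robot happy.
    analogWrite(FROWN_PIN, 0);
  }
}
```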

Throwing Stuff

If the user is annoyed, they can express their anger by throwing stuff at the robot. Just grab any object and throw it towards the robot to take your anger out on it. Other gestures such as shaking or slapping the robot deliver the same effect, i.e. stopping the robot temporarily. The LED effects here are simple, but the frown stays for a longer period than with the other interactions, indicating the lasting impact of aggression on the robot's psychology. This triggers self-reflection in the user about their behaviour towards the robot.
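A sketch of how the piezo spike could trigger a longer-lasting frown follows; the pin, threshold and timing are assumptions.

```cpp
// Sketch of throwing/shaking detection (pin, threshold and timing are assumptions).
const int PIEZO_PIN = A3;              // piezo element inside the robot's body
const int KNOCK_THRESHOLD = 100;       // vibration level that counts as a hit
const unsigned long FROWN_MS = 10000;  // the frown lingers longer than for other interactions

unsigned long frownUntil = 0;

void setup() {}

void loop() {
  int vibration = analogRead(PIEZO_PIN);

  if (vibration > KNOCK_THRESHOLD) {
    // A thrown object, slap or shake registers as a spike on the piezo.
    frownUntil = millis() + FROWN_MS;
  }

  bool frowning = millis() < frownUntil;  // the persistent frown would drive the face LEDs
}
```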



How is it made?


Sensors Used

Potentiometer

Used in the volume control interaction. The potentiometer's reading sets the number of LEDs displayed to create the volume bar effect.

Photocell

Acting as the robot's eye, the photocell senses light. The amount of light on the photocell determines how bright the LEDs should be.

Force Sensitive Resistor

Located underneath the ears, the FSR reads the pressure applied to it. This sensor is used for the scolding interaction.

Piezo Element

The piezo element detects vibration, so it can tell when the user throws stuff at the robot or slaps/shakes it.

Working Behind

The sensors are combined on a breadboard and placed in different parts of the robot's face: the potentiometer in the lips, the photocell in the eye, the FSR underneath the ears and the piezo element inside the robot's body. Connected through jumper cables, the sensors read their values constantly, and the Arduino code converts those values into the desired LED effects. The code also keeps track of how many times the sensors are used and uses this count to make the robot speak. For example, if the sensors have been used fewer than three times, the robot warns with a sentence like "I think you have watched enough", but after that the robot starts being sassy with sentences such as "It's a beautiful day outside, not that you would know".
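A sketch of that counting logic might look like the following; the threshold, pin and the sayLine() helper are assumptions, and the real speech output lives in Ben's part of the code.

```cpp
// Sketch of counting sensor use and mapping it to speech (threshold, pin and
// the sayLine() helper are assumptions; speech is handled elsewhere in the real build).
const int PIEZO_PIN = A3;
const int KNOCK_THRESHOLD = 100;

int sassCount = 0;          // how many times the user has triggered a sensor
bool wasTriggered = false;  // edge detection so one interaction counts once

void sayLine(const char* line) {
  Serial.println(line);     // placeholder: the real build plays audio instead
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  bool triggered = analogRead(PIEZO_PIN) > KNOCK_THRESHOLD;  // any sensor could feed this
  if (triggered && !wasTriggered) {
    sassCount++;
    if (sassCount < 3) {
      sayLine("I think you have watched enough");                        // gentle warning
    } else {
      sayLine("It's a beautiful day outside, not that you would know");  // full sass
    }
  }
  wasTriggered = triggered;
}
```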

The code used can be viewed here & the Arduino file can be downloaded from here. The code is a combination of my and Ben's functionality.

*Circuit for the final build*


Design Process


The Start:

Starting out, our goal was to use interactions that allowed users to stop the robot from sassing while also inducing guilt in them. So, while brainstorming, we used the robot's body and disabling parts of it as our input interactions. Each interaction stopped a particular functionality of the robot and hence stopped its sassing. Being cruel interactions, we hoped they would guilt users and make them self-reflect.

*Initial Interactions*

While the above interactions fulfilled our purposes, we quickly realised through feedback from teachers and colleagues that the interactions could easily be mistaken for torture, which we didn't want considering teens as potential users. Therefore, we dropped these interactions and moved on to research for inspiration. The idea of manipulating body parts remained intact.


Research:

While researching, my goal was to look into existing robots with personality and observe how they engage with users. Observing the social robot Sophia and the assistant robot Olly, I noted that these robots take the user's emotions as input and use them to change themselves. This is what intrigued people: their ability to change according to the user. This was useful, as it gave me the idea of using the user's anger as the input interaction. It would be great for the robot to change itself based on how the user treats it and to progress its behaviour/personality accordingly. Combining the manipulation of body parts to stop particular functionalities of the robot with anger as the input, I came up with four new interactions that resembled the previous ones but each carried meaning. Two of the interactions input passive anger for people who are passive, and the other two are more confronting/aggressive. Volume Control and Block View are the passive interactions, whereas Ear Pull and Shaking are the aggressive ones.

*New Interactions*

I had laid down a base for the input interactions but wasn't sure whether they made sense to users and whether they were successful in guilting them, so testing was conducted to explore this.


Testing:

In the penultimate stage of design, a test was conducted to further grasp the concept from the user's viewpoint and verify whether they understood it. The test was conducted in the form of a survey with questions revolving around exploring the concept. The survey results can be viewed here. The main takeaways from the survey were –

Following the results, two features were added to the robot -

After testing, we looked forward to the prototype demonstration, where we shared our prototype with classmates.


Prototype Demonstration:

The final stage of concept development consisted of building a physical prototype to get feedback from fellow students. All the features mentioned above were implemented in the prototype and shown to students in an explainer video. A video was used to present the prototype because of COVID-19 restrictions. A complementary document was also provided to students to further their understanding of the concept. The document can be read here; it contains the problem space, design process, interaction plan and success criteria.

*Prototype developed and sensors used*

*Explainer Video*

After the video was reviewed by classmates, we were given appraisals of our concept. The main takeaways from the appraisals were –

  • The positive interactions present made sense, but there should be more of them. Positive interactions improve the user's control over the robot's personality.
  • The smile and frown work well to provide anthropomorphic feedback and give users an emotional connection to the robot. Similar features could be added to help guilt the users.

Considering the appraisals, two changes were made to the concept and prototype -

Final Build:

For the final build, the individual LEDs were taken away and NeoPixel strips were used to get the desired LED effects. The NeoPixel rings also gave the robot a proper smile and frown compared to the prototype build. An eyebrow was added above the robot's eye to give it human characteristics and make users sympathise with it; this addition goes along with the anthropomorphism of the robot. The Arduino code was rewritten to work with the positive interactions and the NeoPixel strips. A teammate's individual aspect was also combined with my build, enabling the robot to talk when it is angry and when a positive interaction is used.
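As a rough sketch of how a NeoPixel mouth could be driven in the final build, something along these lines would show the smile shrinking as the mood drops; the pin, pixel count and the use of the Adafruit_NeoPixel library are assumptions.

```cpp
// Sketch of the NeoPixel smile/frown in the final build (pin, pixel count and
// the Adafruit_NeoPixel library choice are assumptions).
#include <Adafruit_NeoPixel.h>

const int MOUTH_PIN = 6;
const int MOUTH_PIXELS = 12;
Adafruit_NeoPixel mouth(MOUTH_PIXELS, MOUTH_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  mouth.begin();
}

// mood ranges from 0 (angry, full frown) to 255 (happy, full smile)
void showMood(int mood) {
  mouth.clear();
  int lit = map(mood, 0, 255, 1, MOUTH_PIXELS);  // happier = longer smile
  for (int i = 0; i < lit; i++) {
    // green for a smile, shifting towards red as the mood drops
    mouth.setPixelColor(i, mouth.Color(255 - mood, mood, 0));
  }
  mouth.show();
}

void loop() {
  showMood(200);  // example: a mostly happy robot
}
```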

*Finished Product*

For the final deliverable, the build above was combined with each team member's individual aspects. Here we brought the audio feedback and visual hacking together in one code base. A demo of the final product can be seen in the video below -


Reflection on project outcomes


Overall, I am content with the project and what it delivers. Concept development, the design process and bringing everything together were a wonderful experience. The ideal concept and the actual project aren't far apart, though the ideal concept promises the user some features that couldn't be delivered in the final product. With more time, I would have liked to include more effects similar to the smile and frown, such as moving ears. Currently, the robot can process an interaction but, due to a lack of time and resources, it can't take actions like turning off the TV or stopping the Roomba. For example, the robot can detect when its view is blocked, but it can't stop moving (something the concept promises) because the team couldn't get the Roomba working properly with the robot, though Tim tried. These small details are present in the interaction concept but not in the final build.
The project delivers well on its domain of Sassy Tech, where the robot acts as a sass machine. The robot's ability to deliver sass through various mediums such as talk and device control, and its ability to respond with sass based on the user's actions, fits perfectly within Sassy Tech. The underlying problem we are trying to resolve, people overusing screens, is also tackled well by the concept. A weak point of the concept is that it isn't backed by proper background research explaining the theory behind behaviour change and motivation. While the robot is present, a person's screen time may be reduced, but there's no guarantee that this change would be long-term. This is something I wish we had explored more.

Within the overall studio theme of playful and open-ended interactions for everyday life, the robot can easily be seen as something the user interacts with daily and has fun interacting with in the various ways available. The interactions are playful in the sense that they let users interact, and the robot responds to those interactions through reactions and different mediums. I think we covered this area pretty well. The robot is intended for everyday usage, but I personally think our concept can improve here. We haven't thought well about when the robot becomes active and how long it stays active. There is also no mention of how the product is used once the user complies and stops watching TV. These are further considerations I would explore given more time and resources. To conclude, the concept ticks all the required points, but some minor changes could still be made.

Online Exhibit:

The online exhibit was a wonderful experience that brought some further insights into the concept and what it could become. Overall, the concept received positive feedback, with viewers actively engaging with what the concept meant and what features it could include. It was great to see visitors ask questions about the interactions and form an emotional connection to the robot, which was one of our success criteria. We would have loved to measure the other criteria as well, such as whether the robot actually reduced screen time, but those required prolonged usage, which was not possible in an online exhibit. We also had some trouble with failing parts during the exhibit and the individual aspects not coming together as we expected, so we had to roll back to the individual components working independently. We simulated the individual parts and made them look like one so the viewers' experience wasn't degraded. The cause of the failing parts was that they were all connected to a single voltage source, which started smoking after running for a while. We should have anticipated this and prepared for last-minute failures. Seeing the overall reactions, I would say the prototype worked and we took the right approach to the problem space. Some feedback we got on the broader concept relates to how it could develop further into a personal assistant that botches up its tasks as an expression of sass. It would be a great direction to explore more sassy ways to annoy users. Overall, the visitors were interested in the product and content with its interactions.


Outputs