After experimenting with MediaPipe, a framework for running machine learning models on mobile devices, I wanted to explore how this technology could be used in a live performance situation.
Smile, Institute of Modern Art, Brisbane, Australia, 2024
Playing on the idea of toxic positivity, I built a smile machine with MediaPipe in TouchDesigner that turns down the performer's volume unless it detects a strong smile. If the smile is not strong enough, audio samples about smiling are triggered. The samples trigger more and more frequently the longer there is no smile, eventually creating a cacophony of voices. The samples stop when the subject's smile is strong enough, which also restores the performer's volume.
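The control logic can be sketched as a simple function of the smile strength. This is a minimal hypothetical sketch, not the actual TouchDesigner network: the threshold, rates, and the linear speed-up are all assumptions, and the smile score is assumed to arrive from the face tracker as a value between 0 and 1.

```python
# Hypothetical sketch of the smile-machine control logic.
# Assumed inputs: smile_score in [0, 1] from a face tracker,
# no_smile_time (seconds since the last strong smile), dt (tick length).
SMILE_THRESHOLD = 0.6  # assumed cutoff for a "strong" smile
BASE_RATE = 0.5        # assumed sample triggers per second when the smile first fails
MAX_RATE = 8.0         # assumed cap on the trigger rate (the "cacophony")

def step(smile_score: float, no_smile_time: float, dt: float):
    """One control tick.

    Returns (performer_volume, new_no_smile_time, sample_trigger_rate).
    A strong smile restores full volume and silences the samples;
    otherwise the volume follows the smile and the trigger rate
    climbs the longer the smile is absent.
    """
    if smile_score >= SMILE_THRESHOLD:
        return 1.0, 0.0, 0.0  # strong smile: full volume, samples stop
    no_smile_time += dt
    volume = smile_score / SMILE_THRESHOLD          # fade with the smile
    rate = min(MAX_RATE, BASE_RATE * (1.0 + no_smile_time))  # speed up
    return volume, no_smile_time, rate
```

In a real patch this would run once per frame, with the trigger rate driving how often a spoken sample fires.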
To frame the performance, I begin by explaining that I am trying to learn to smile while I play, having recently seen a video of how serious I look while performing. As seen in Figure 5, I start to play with the camera pointed at myself, challenging myself to smile while concentrating on performing, which is not easy. After failing the challenge, I ask audience members to sit in front of the camera and smile so that I can concentrate on my performance. Audience members take turns and try various techniques to control the system's response, including spotlighting the subject and swapping participants.
I repeated this performance at the ACMC night program at Bar Open in Melbourne. The audience responded enthusiastically, experimenting with the parameters of the smile detector to test its limits, including holding up Aphex Twin album artwork, which the system recognised as a smile.