Module 3
Mirroring Mirrored



The metaphor of the mirror is central to thinking about technology: machines reflect human inputs, yet always with slippages, distortions, or reconfigurations.
In Lacanian psychoanalysis, the “mirror stage” frames identity as relational; in media art, mirroring highlights how feedback loops entangle subject and system.
This module explores code and AI as mirrors that are never neutral but co-constructive.
Conceptual Framing
Module 3: Mirroring Mirrored
Over the course of two workshops we explore how bodily interactions are “mirrored” through machine-generated reinterpretations.
By engaging with both literal and metaphorical mirrors, participants reflect on how humans and machines mutually constitute each other in a recursive, entangled loop.
Workshop Description
Module 3: Mirroring Mirrored
Workshop participants work with real-time computer vision and generative systems to create mirrored relationships between self and machine.
With ml5.js, a friendly Machine Learning library for the Web, they generate visual reflections of their movements in p5.js.
These reflections are then progressively manipulated into visual distortions, responsive oddities, or playful encounters.
Software rundown

p5.js
p5.js is a friendly tool for learning to code and make art. It is a free and open-source JavaScript library built by an inclusive, nurturing community. p5.js welcomes artists, designers, beginners, educators, and anyone else!
p5js.org
editor.p5js.org

ml5.js
ml5.js aims to make machine learning approachable for a broad audience of artists, creative coders, and students. The library provides access to machine learning algorithms and models in the browser, building on top of TensorFlow.js with no other external dependencies.
ml5js.org
Session 1
Capture, Follow, Trigger



Session 1
In this first workshop we will explore the technical requirements to capture users' movements and generate visual reflections of those movements in p5.js.
This will provide a foundational understanding of how bodily movement is captured and reflected through code.
Participants will learn how to integrate AI-powered tracking using ml5.js, including body pose detection, hand keypoint tracking, and facial landmark detection.
Session 1
Core concepts:
Capture, Follow, Trigger.
- Capture: capture the body, face, and hands as input data, linking real-world interactions to video streams
- Follow: assign body features as keypoints, received as data streams, to control screen-based visual elements
- Trigger: interact with a screen-based visual system triggered by captured body, face, or hand keypoints
Capture

Download sample-videos folder
This folder contains sample videos of people moving that you can use without needing a webcam.
Capture
Video samples resource
In our first exercise, we will learn two methods to work with video in p5.js:
- loading video files from the sketch folder
- using the webcam as input
Resources
Objectives
- to load and display a video in p5.js
- to capture and display a video stream
Capture Exercise 1
Video file and stream
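A minimal p5.js sketch covering both methods is sketched below; the file name sample-videos/dance.mp4 is a placeholder for any clip from the sample-videos folder. Click once to start playback of the file; press any key to switch between the video file and the webcam.

let fileVideo; // video loaded from the sketch folder
let camVideo;  // live webcam stream
let useWebcam = false;

function setup() {
  createCanvas(640, 480);
  fileVideo = createVideo('sample-videos/dance.mp4'); // placeholder file name
  fileVideo.volume(0); // muted, so the browser allows playback
  fileVideo.hide();    // hide the DOM element, we draw it onto the canvas instead
  camVideo = createCapture(VIDEO);
  camVideo.size(640, 480);
  camVideo.hide();
}

function draw() {
  background(0);
  const src = useWebcam ? camVideo : fileVideo;
  image(src, 0, 0, width, height); // draw the current source onto the canvas
}

function mousePressed() {
  fileVideo.loop(); // most browsers require a user gesture before playback starts
}

function keyPressed() {
  useWebcam = !useWebcam; // toggle between the two capture methods
}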
Slitscanning
Capture
Slitscan
Slitscan imaging techniques are used to create static images of time-based phenomena.
In traditional film photography, slit scan images are created by exposing film as it slides past a slit-shaped aperture.
In the digital realm, thin slices are extracted from a sequence of video frames, and concatenated into a new image with one axis being time.
Visit the Informal Catalogue of Slit-Scan Video Artworks and Research compiled by Golan Levin.
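The digital version can be reproduced in a few lines of p5.js. The sketch below is one possible approach, assuming a webcam is available: every frame, a one-pixel-wide column is copied from the centre of the camera image and placed at a moving x position, so the horizontal axis of the canvas becomes time.

let cam;
let x = 0; // current column on the canvas; the horizontal axis becomes time

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);
  cam.size(640, 480);
  cam.hide();
  background(0);
}

function draw() {
  // copy a one-pixel-wide slice from the centre of the camera image to column x
  copy(cam, cam.width / 2, 0, 1, cam.height, x, 0, 1, height);
  x = (x + 1) % width; // wrap around when the canvas is full
}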

Capture Exercise 2
Slitscanning
In this second exercise, explore the example sketches and customize at least one of them. Then, create a static image and video from your modified version.
Objectives
- to customize example code
- to create screenshots and recordings

In this exercise we will use the mesh of a face captured via the webcam to paint by moving our head.
Capture Exercise 3
Face mesh drawing
Objectives
- to create a face drawing or video
- to capture screenshots and recordings
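One possible starting point, assuming the current ml5.js (v1) faceMesh API, is sketched below: the keypoint index used for the nose tip is an assumption to check against the FaceMesh keypoint map, and the background is never cleared in draw(), so the moving point accumulates into a painting.

let faceMesh;
let video;
let faces = [];

function preload() {
  faceMesh = ml5.faceMesh({ maxFaces: 1 });
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  faceMesh.detectStart(video, results => (faces = results));
  background(255);
}

function draw() {
  if (faces.length > 0) {
    const nose = faces[0].keypoints[1]; // index 1 is assumed to be the nose tip
    noStroke();
    fill(0, 120, 255, 60);
    circle(nose.x, nose.y, 12); // draw() never clears the background, so marks accumulate
  }
}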
This concludes the Capture section; let's move on to the Follow section.
Capture
Follow

Follow
Machine Learning with ml5
Follow
Keypoints
Keypoints are specific points detected on a body, face, or hand.
Think of them like dots placed on important positions on the body, face and hand that can be tracked and used to detect poses, gestures, and movements.
- BodyPose: 17 keypoints (nose, shoulders, elbows, wrists, hips, knees, ankles)
- FaceMesh: 468 keypoints (eyes, nose tip, mouth corners, jawline)
- HandPose: 21 keypoints (thumb tip, fingertips, knuckles)
Follow
Machine Learning with ml5
Body Pose

HandPose


FaceMesh
In this exercise, we will capture the skeletal information of a body and see how 17 keypoints are mapped onto the source video. You can then access individual keypoints and create your own visual interpretation.
Follow Exercise 1
Body Pose

Objectives
- to link visuals to skeletal landmarks
- to capture screenshots and recordings
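A minimal starting sketch, assuming the ml5.js (v1) bodyPose API, is shown below: it draws all 17 keypoints over the webcam image; from there you can index individual keypoints (for example the wrists) to drive your own visuals.

let bodyPose;
let video;
let poses = [];

function preload() {
  bodyPose = ml5.bodyPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  bodyPose.detectStart(video, results => (poses = results));
}

function draw() {
  image(video, 0, 0, width, height);
  for (const pose of poses) {
    for (const kp of pose.keypoints) {
      if (kp.confidence > 0.2) { // skip low-confidence keypoints
        fill(255, 0, 0);
        noStroke();
        circle(kp.x, kp.y, 8);
      }
    }
  }
}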
In this hand tracking exercise, use the example code to control visual elements with your index finger and thumb. Then, replace the given circle with something more exciting.
Follow Exercise 2
Hand Pose

Objectives
- to replace the given circle with a more exciting application
- to capture screenshots and recordings
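A minimal pinch sketch, assuming the ml5.js (v1) handPose API with its named keypoints (index_finger_tip, thumb_tip), is shown below: the circle sits between the two fingertips and its size follows their distance, and it is the element you are asked to replace.

let handPose;
let video;
let hands = [];

function preload() {
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  handPose.detectStart(video, results => (hands = results));
}

function draw() {
  image(video, 0, 0, width, height);
  if (hands.length > 0) {
    const index = hands[0].index_finger_tip; // named keypoints provided by handPose
    const thumb = hands[0].thumb_tip;
    const d = dist(index.x, index.y, thumb.x, thumb.y);
    noStroke();
    fill(0, 255, 180, 180);
    // the circle to replace: centred between the fingertips, sized by the pinch distance
    circle((index.x + thumb.x) / 2, (index.y + thumb.y) / 2, d);
  }
}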
In this face tracking exercise, use the example code to control visual elements with the detected face mesh keypoints. Then, replace the given circle with something more exciting.
Follow Exercise 3
Face Mesh
Objectives
- to replace the given circle with a more exciting application
- to capture screenshots and recordings
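A minimal starting sketch, assuming the ml5.js (v1) faceMesh API, is shown below: it draws all 468 keypoints over the webcam image; pick individual keypoints (for example the mouth corners) to drive the visual element you design.

let faceMesh;
let video;
let faces = [];

function preload() {
  faceMesh = ml5.faceMesh({ maxFaces: 1 });
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  faceMesh.detectStart(video, results => (faces = results));
}

function draw() {
  image(video, 0, 0, width, height);
  for (const face of faces) {
    for (const kp of face.keypoints) {
      fill(0, 255, 0);
      noStroke();
      circle(kp.x, kp.y, 2);
    }
  }
}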

This concludes the Follow section; let's move on to the Trigger section.
Follow
Trigger

In this last exercise we look at how blinking can be used to trigger events.
Trigger Exercise 1
Blink
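One possible blink detector, assuming the ml5.js (v1) faceMesh API, is sketched below: keypoints 159 and 145 are used as the upper and lower eyelid of one eye (an assumption to verify against the FaceMesh keypoint map), and a small hysteresis keeps each blink from firing the trigger more than once. The pixel thresholds will need tuning to your camera distance.

let faceMesh;
let video;
let faces = [];
let eyeClosed = false;
let blinkCount = 0;
let triggerColor;

function preload() {
  faceMesh = ml5.faceMesh({ maxFaces: 1 });
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  faceMesh.detectStart(video, results => (faces = results));
  triggerColor = color(0);
}

function draw() {
  image(video, 0, 0, width, height);
  if (faces.length > 0) {
    const upper = faces[0].keypoints[159]; // assumed upper eyelid keypoint
    const lower = faces[0].keypoints[145]; // assumed lower eyelid keypoint
    const opening = dist(upper.x, upper.y, lower.x, lower.y);
    if (opening < 4 && !eyeClosed) {
      // eye just closed: fire the trigger once
      eyeClosed = true;
      blinkCount++;
      triggerColor = color(random(255), random(255), random(255));
    } else if (opening > 6) {
      // eye opened again: re-arm the trigger
      eyeClosed = false;
    }
  }
  fill(triggerColor);
  circle(width / 2, height - 50, 60); // the triggered event: a colour change per blink
  fill(255);
  textSize(24);
  text('blinks: ' + blinkCount, 10, 30);
}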

This concludes the Trigger section; let's move on to the end of workshop 1.
Trigger
Homework
From the screenshots and screen recordings you have captured, create
- a selection of images that are meaningful to you and that you want to share for the workshop documentation
- a short video compilation of recordings, perhaps accompanied by a self-made soundtrack; use a standard video format with an aspect ratio of 9:16 or 16:9
Session 1 → Session 2
Session 2
Train
Respond
Session 1
Getting Started
Follow Along!
- How to download and set up the Arduino IDE software
- How to connect the Arduino and the Grove Shield
- How to open your first Arduino sketch
- How to upload an Arduino sketch to the Arduino board
- How to make changes to example Arduino sketches
- How to make changes to example p5.js code
Links
Documenting



Product shots: a clean background so that the focus is solely on the subject. Use a good camera, a tripod if necessary, and appropriate lighting.




Product shots with hands: a clean background so that the focus is solely on the subject, with hands added to show interactivity. Use a good camera, a tripod if necessary, and appropriate lighting.



In action, in use: pictures of the work with people interacting with it or looking at it. These photos can be staged and choreographed, or taken during a show and tell or an exhibition with an audience. Use a good camera, a tripod if necessary, and adequate lighting.






Session 2
Project Interaction
In-class development
120
Session 2
Project Interaction
Return of Kits
Session 2
Project Interaction
- Test and demonstrate your outcome: how did others respond to your interactive object?
- Document your outcome: take photos and videos of your peers interacting with it
Finalise
Session 2
Project Interaction
Deliverables, due by the end of Session 2
entangled-agencies_mirroring-mirrored_2026
By Andreas Schlegel



