Week 1
This week's "Performance Media" class was our first. The teacher first discussed the core concept—performance media isn't simply interactive art, but rather a collaborative performance involving people, space, sound, and vision. The audience is no longer just "watching," but participating.
He gave several examples: teamLab's immersive spaces, Nonotak's light and shadow installations, and Filamen's synchronized light and sound system. We learned the three-part structure of interactive works: input (sensors) → processing (logic) → output (visual or audio feedback).
The teacher also explained how spaces "perceive" humans, such as how cameras, microphones, and motion sensors trigger system responses. He then introduced TouchDesigner, our primary tool for this semester. It connects nodes to control visuals, sound, and even physical devices, and will be the core software for our future creations.
Then there was a group sharing session, with each group discussing a work that inspired them. We chose "Detroit: Become Human," in which players drive the plot through their choices, perfectly embodying the "people + space + interaction" spirit of performance media.
The homework assignment was to analyze an interactive installation. We chose a performance by the London FOLD Club that used sand vibrations to create patterns. We analyzed its sensing, logic, and visual response, learning to read the work structurally.
Finally, the teacher briefly introduced the four main tasks: case study, concept proposal, final product, and blog reflection. The entire class was very informative, but the most important takeaway was that technology is merely a medium; the real focus is on how people interact with the medium.
Week 2
This week's "Creative Media" class completely changed my perspective on programming and art. I used to think code was just a tool, but now I realize it can actually be used for "creation." The instructor introduced creative coding—not to solve functional problems, but to express ideas. In other words, programs can generate visuals, sounds, animations, and even entire interactive experiences, rather than simply executing commands.
He mentioned that "generative art" involves setting rules so that systems automatically generate images or motion, rather than manually drawing them step by step. Artists focus on the balance between order and chaos—too rigid rules can be boring, while too much chaos can lead to loss of control. That sense of "the system growing on its own" is the appeal of generative art.
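To get a feel for this, I wrote a tiny sketch in plain Python (not TouchDesigner): a simple rule supplies the order, and a per-cell random choice supplies the chaos. Everything in it is my own illustration, not anything shown in class.

```python
import random

# A minimal generative rule: the grid provides order, and a random
# choice per cell provides controlled chaos (a classic "10 PRINT" idea).
random.seed(42)          # fixing the seed makes the random output repeatable

ROWS, COLS = 12, 40
JITTER = 0.5             # probability of deviating from the default mark

for _ in range(ROWS):
    line = "".join("\\" if random.random() < JITTER else "/" for _ in range(COLS))
    print(line)
```

Raising or lowering JITTER is exactly the order-versus-chaos dial the instructor described: at 0 the output is rigid and boring, at extremes it reads as pure noise.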
In the second half of the class, we learned about TouchDesigner, a visual programming environment that builds logic by connecting "nodes." The instructor explained its three core stages: input (microphone, camera) → processing (logic algorithm) → output (images, lighting, sound). The course also highlighted three types of operators, illustrated in the small sketch after this list:
TOP: Processes visual content such as images and videos
CHOP: Manages audio signals and sensor input
SOP: Responsible for 3D geometric modeling and spatial structure
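As a note to myself, here is a tiny sketch of how these three families sit side by side in TouchDesigner's Python scripting (run from a Text DAT). The create() calls and type names follow the td API as I understand it, so treat this as an assumption rather than something from class:

```python
# Runs inside TouchDesigner (e.g. from a Text DAT). Creates one operator
# from each family so their roles sit side by side.
container = parent()                                 # the COMP holding this script

tex  = container.create(circleTOP, 'demo_circle')    # TOP: pixels and textures
chan = container.create(noiseCHOP, 'demo_noise')     # CHOP: streams of numbers
geo  = container.create(sphereSOP, 'demo_sphere')    # SOP: 3D geometry

for node in (tex, chan, geo):
    print(node.name, '->', node.family)              # prints TOP / CHOP / SOP
```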
The entire class gave me a new understanding of "creative media"—it's not just about images or sounds, but a symbiotic relationship between people, systems, and variables. Art is no longer static, but a dynamic process that responds to the world.
I'm now eager to use TouchDesigner to conduct larger experiments, such as mood lighting walls, sound visualization stages, or spatial installations that "perceive people."
Week 3
This week, I took the "TouchDesigner Deep Dive," truly beginning to grasp the core of creative coding. The instructor first explained the theory, then led us through practical exercises, which helped me understand more clearly that generative art is essentially "creation" using logic and algorithms.
He emphasized that creative coding isn't about writing programs to solve problems, but rather using code to express and experiment. Generative art emphasizes "systematic creation"—the artist sets the rules, and the system generates the work between these rules and randomness. The instructor specifically distinguished between "randomness" and "noise": randomness is completely uncontrollable, while noise is randomness with structure, creating natural, fluid variations.
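To convince myself of the difference, I sketched both in plain Python. The interpolated "value noise" below is a crude stand-in for the structured noise the instructor described, not how a Noise CHOP actually works internally:

```python
import random

random.seed(7)
N = 20

# Pure randomness: each sample is independent, so the sequence jumps around.
pure = [random.random() for _ in range(N)]

# Structured "noise": linearly interpolate between sparse random anchors,
# so neighbouring samples vary smoothly (a crude Perlin-style stand-in).
anchors = [random.random() for _ in range(N // 5 + 2)]

def value_noise(t):
    i = int(t)
    frac = t - i
    return anchors[i] * (1 - frac) + anchors[i + 1] * frac

smooth = [value_noise(k / 5) for k in range(N)]

print("random:", [round(v, 2) for v in pure])
print("noise: ", [round(v, 2) for v in smooth])
```

Printing both sequences side by side makes the instructor's point obvious: the random list flickers, while the noise list flows.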
During the practical phase, we began experimenting with TOPs (texture operators). Nodes used included Constant, Circle, Rectangle, Edge, Blur, Over, and Composite. The instructor had us connect these nodes into compositing chains and feedback loops to produce dynamic effects.
We also used a video clip of a banana to test the visual effects of overlay and feedback.
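Out of curiosity, I also sketched in plain Python (with numpy) what I think the feedback loop is doing underneath: each new frame is blended over a faded copy of the previous output, so moving shapes leave decaying trails. The decay factor is my own guess, not a value from class.

```python
import numpy as np

H, W = 64, 64
DECAY = 0.9            # how strongly the previous frame persists (assumed value)

feedback = np.zeros((H, W))          # the accumulated "previous output"

def render_frame(fresh_input):
    """Blend the new frame over a faded copy of the previous output."""
    global feedback
    feedback = np.maximum(fresh_input, feedback * DECAY)  # trail effect
    return feedback

# Simulate a bright dot moving across the frame for a few steps.
for step in range(5):
    frame_in = np.zeros((H, W))
    frame_in[32, 10 + step * 5] = 1.0
    out = render_frame(frame_in)

print("pixels still glowing from earlier frames:", np.count_nonzero(out > 0.1))
```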
This exercise gave me a more intuitive understanding: in TouchDesigner, each node is a stroke, and each connection is the logic of creation. Artwork isn't "drawn" but "generated."


Week 4
In the fourth week, we gave a group presentation themed "Precedent Study and Critical Analysis," featuring two artists: Rafael Lozano-Hemmer and Anthony McCall. The former uses physiological data such as infrared thermal imaging and heartbeat pulses to translate "human presence" directly into an interactive language of light and sound; the latter constructs walk-in spaces from light, fog, and time, letting viewers experience the flow of time and the weight of existence inside the volume of light.
The teacher said our content preparation was fairly complete, but nervousness and language issues made me stumble in places. That came down to preparation: he hoped I would rehearse more in the future and present with more confidence and energy. This feedback mattered to me, making me realize that delivery is just as crucial as content.
Through this presentation, I gained a more intuitive understanding of the combination of "body data" and "spatial perception"—interactive art doesn't necessarily rely on piling up complex technologies; the key is finding an entry point that can establish an emotional connection with the audience. Therefore, I've begun to realize more clearly that technology is merely language, and what truly gives a work its vitality is how it builds a relationship between "people" and "space."
Week 5
The fifth week of the course focused on an advanced understanding of CHOPs and the application of sound-driven vision. This content was a watershed moment for me in TouchDesigner—a shift from simply creating visuals in TOP to truly bringing visuals to life with data.
The instructor first broke down the essence of CHOPs: they are not for creating images or building models, but are modules designed to handle all kinds of dynamic numerical values. Think of them as the system's neural signal pipeline: any change from the outside world or within the system, whether mouse movement, keyboard input, sensor data, or musical rhythm, can be captured and transmitted through CHOPs. By contrast, TOPs output visuals and SOPs manipulate geometry, while CHOPs act as the bridge between sensory input and visual response.
We also revisited some commonly used nodes: Constant as a starting value, LFO for generating periodic changes, Pattern for constructing repeating sequences, Math for adjusting scale and range, Lag for delaying and softening values, Filter for smoothing signals, and Count for tallying events. Combined, these nodes form "logic chains" that turn visuals from static animations into results driven by real-time data.
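To make sure I understood the chain, I rewrote it as a plain-Python sketch of LFO → Math → Lag. The constants are illustrative, not the values we used in class:

```python
import math

def lfo(t, freq=1.0):
    """Periodic signal in [-1, 1], like an LFO CHOP's sine output."""
    return math.sin(2 * math.pi * freq * t)

def rescale(v, lo=0.0, hi=1.0):
    """Math-CHOP-style range mapping from [-1, 1] to [lo, hi]."""
    return lo + (v + 1) / 2 * (hi - lo)

class Lag:
    """One-pole smoothing: the output drifts toward the input each step."""
    def __init__(self, smoothing=0.2):
        self.value = 0.0
        self.smoothing = smoothing
    def step(self, target):
        self.value += (target - self.value) * self.smoothing
        return self.value

lag = Lag()
for frame in range(10):
    t = frame / 60.0                       # pretend 60 fps
    raw = rescale(lfo(t, freq=2.0))        # LFO -> Math
    print(f"frame {frame}: raw={raw:.3f} lagged={lag.step(raw):.3f}")
```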
The most crucial part this week was audio response. The instructor demonstrated how to break down microphone sound input into "energy values," then convert them into parameters usable for visual control through the Analyze node. Simply put, sound is divided into different numerical dimensions, and these values can be mapped to visual attributes such as brightness, size, and color, allowing the screen to truly "hear" external sounds.
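Here is my rough plain-Python understanding of that pipeline, reducing a buffer of samples to one energy number and mapping it onto a brightness value. The gain used in the mapping is an assumption, not the class patch:

```python
import numpy as np

def rms_energy(samples):
    """Root-mean-square level of an audio buffer, roughly 'loudness'."""
    return float(np.sqrt(np.mean(np.square(samples))))

def map_to_brightness(energy, gain=4.0):
    """Scale energy into a 0..1 brightness value, clamped."""
    return min(1.0, energy * gain)

# Fake one quiet and one loud buffer of samples in [-1, 1].
quiet = 0.05 * np.random.randn(1024)
loud  = 0.40 * np.random.randn(1024)

for name, buf in [("quiet", quiet), ("loud", loud)]:
    e = rms_energy(buf)
    print(f"{name}: energy={e:.3f} brightness={map_to_brightness(e):.2f}")
```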
Overall, this week made me realize that the core of TouchDesigner is not just images, but a whole set of reaction mechanisms driven by CHOPs. Only by mastering these data logics can subsequent interaction design be more natural and free.



The first exercise focused on basic mouse and keyboard interaction. We brought the mouse position and key signals into CHOPs, then used Constant and Math to remap the values so they control a circle's position. For color, we used Switch to toggle between color states and Composite to layer everything into the final image. In practice, whenever I moved the mouse or pressed a key, the circle's color and position changed instantly: the most basic yet intuitive human-computer interaction.
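The remapping step is simple enough to sketch in a few lines of Python; the ranges here are illustrative:

```python
def remap(v, in_lo, in_hi, out_lo, out_hi):
    """Linear range conversion, like a Math CHOP's from/to range."""
    t = (v - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

mouse_x, mouse_y = 0.75, 0.25           # pretend values from a Mouse In CHOP
circle_x = remap(mouse_x, 0, 1, -1, 1)  # ->  0.5 in the render's -1..1 space
circle_y = remap(mouse_y, 0, 1, -1, 1)  # -> -0.5
print(circle_x, circle_y)
```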
The second exercise focused on audio response. We used Audio Device In to capture external sounds, then used Analyze to convert volume and rhythm into numerical values. We then used Math and Count to influence the size or brightness of a circle. The faster and more energetic the music, the larger or brighter the circle became.
This exercise helped me truly understand that the core of TouchDesigner lies in "input → processing → output." CHOP can transform any signal into visual feedback, and the naturalness of the interaction depends on the smoothness of the data and the speed of response. Therefore, I began to realize that visualization is not just an image, but a real-time process of responding to the outside world. In the future, I hope to apply this logic to more complex interactive scenarios, such as having the audience's voice or movements drive the lighting and effects in a space.
Week 6
This week's course focused on re-examining the role of the "body" in interactive systems. The instructor first reviewed the various functions of CHOP—whether it's numerical generation, mathematical processing, or mouse and voice input—reiterating that TouchDesigner is fundamentally built on data flow.
The course then shifted to this week's core theme: embodied interaction. The instructor specifically pointed out that interactive art shouldn't be limited to pressing buttons or clicking, but rather allows the system to react to the presence of the viewer. In the examples presented in the PPT, every step the viewer took changed the light, shadows, or particles, creating a continuous cycle of "body influencing the system, and the system responding to the body."
This made me rethink the meaning of interactive installations: true interaction isn't "what I pressed," but rather "what changes occurred to the artwork after I entered the space." The artwork is activated by the presence of the body, which makes the entire system feel real and alive.
We used a webcam as input via a Video Device In node, then turned human contours, motion, and brightness variations into visual material through a series of TOP adjustments (grayscale, edge, threshold). Next, we used CHOPs to convert these body movements into controllable parameters, so the image changes as we raise our hands, move around, or approach the camera.
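For my notes, here is roughly the same chain in plain OpenCV instead of TOPs; the threshold values are guesses rather than our actual settings:

```python
import cv2

cap = cv2.VideoCapture(0)               # the webcam, like Video Device In

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray  = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)             # grayscale step
    edges = cv2.Canny(gray, 80, 160)                            # edge step
    _, mask = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)  # threshold step

    cv2.imshow("outline", edges)        # white contours on a dark field
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```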
This week's focus was on ensuring the smooth operation of the "body input → system processing → visual output" workflow, laying the foundation for the upcoming actual prototype.
The instructor's demonstration was visually stunning, but I wanted to create an interesting effect of my own. I decided to make an effect where my own "soul" follows me: a deep blue field in which anyone moving inside it is traced by a white outline, like a projection or a soul.
Week 7
Week 7 of the course continued last week's theme of "body engagement," but this time we focused on a more granular level—gesture tracking.



At the beginning of the course, the instructor guided us through a review of the core concepts of "embodied interaction": how the body becomes an input source, how the system responds to the viewer's presence, and the feedback loop formed between the two. Last week, we used a regular camera for motion detection, but this method has obvious limitations: any movement of any object in the frame triggers an effect, and tracking only captures broad pixel changes, so precision is limited and the system cannot respond specifically to a hand being moved.
To improve controllability, this week the instructor introduced MediaPipe. It can recognize specific hand shapes, finger postures, and keypoint coordinates, upgrading the interaction from coarse "whole-screen disturbance" to genuinely precise gesture control. This fine-grained tracking let us experience for the first time the shift from "body presence" to "gesture intention," making the interaction's expression clearer and more directional.
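After class I looked up the MediaPipe Python API; a minimal hand-tracking loop looks roughly like this (the confidence values are my own assumptions):

```python
import cv2
import mediapipe as mp

# Read the webcam, detect hand landmarks, and print the index-fingertip
# position: the kind of keypoint we then map onto visual parameters.
hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.6)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # MediaPipe expects RGB
    result = hands.process(rgb)
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[8]     # index fingertip
        print(f"fingertip at x={tip.x:.2f} y={tip.y:.2f}")   # normalized 0..1
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
hands.close()
```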
Week 8
This week, our teacher guided us in establishing a "conceptual plan" and "prototype priorities," explaining the interactive logic in a way that was easy to understand.
A good plan must clearly articulate the emotional intent, interaction methods, visual and auditory unity, and the feasibility of the system structure. This is especially important for us because our work tells a story through interaction—if the action and meaning don't align, the work becomes empty.
Following this, we completed Activity 1, reorganizing the project along the chain: User Operation → Input Sensor → Processing → Output Behavior → Expected Meaning. We mapped each action to the device's response, answering questions like "What does reaching out represent?", "How does the system judge it?", and "Can the visual changes express repair or disturbance?"
This process moved us beyond the initially vague "the artifact moves" to "why it moves, how it moves, and what its movement means," making the interaction more focused and persuasive.
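I also rewrote our Activity 1 chain as a small data structure, purely as my own reconstruction, so every gesture has an explicit path from sensor to meaning:

```python
# My own reconstruction of the Activity 1 chain, not the official worksheet.
interaction_map = [
    {
        "user_operation": "reach a hand toward the fragments",
        "input_sensor":   "webcam + hand tracking",
        "processing":     "hand distance -> repair strength",
        "output":         "fragments drift back together",
        "meaning":        "attention repairs cultural memory",
    },
    {
        "user_operation": "make noise near the piece",
        "input_sensor":   "microphone",
        "processing":     "audio energy -> disturbance amount",
        "output":         "fragments vibrate and scatter",
        "meaning":        "interference erodes what was restored",
    },
]

for row in interaction_map:
    print(" -> ".join(row.values()))
```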



This is the activity we completed in class, along with the teacher's evaluation. We established the basic concept and interaction logic and are working toward that goal.
At this stage, we found a suitable subject, built a fitting 3D model, and finished the initial prototype.
At this stage, we finalized the concept and implementation through discussion, and began refining and optimizing the initial model.
Because the previous stage's prototype lacked expressiveness and interactivity, and could only be triggered from the keyboard, we decided to look for a better method.
Ultimately, drawing on the artists' works we had studied, we found more expressive interactive processes and worked hard to implement them in TouchDesigner.
Below are the final presentation slides and supporting document, which record the evolution of the project.
Presentation
Supporting document
Week 9
In our Week 9 presentation, we essentially told a story of "prototype growth." The project *Ruins* started with an initial idea, gradually being broken down, validated, and then reassembled. The PowerPoint presentation showed how we started with the concept from Assignment 1, continuously considering the audience's presence, the impact of body movement on space, and the fact that interaction needs time to develop, rather than being completed all at once.
Then, we focused on the actual interaction, detailing the core logic: as viewers approach, they repair the fragments through gestures, with hand distance determining the repair force, while sound acts as a destabilizing factor that disturbs and vibrates the fragments. These actions are not isolated; they form a continuous cycle from repair to interference and back to disintegration.
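For the blog, here is a simplified plain-Python sketch of that logic: hand distance drives repair, audio energy drives disturbance, and the fragments' cohesion is the running balance of the two. All constants are illustrative, not the values in our patch:

```python
def repair_force(hand_distance, max_distance=1.0):
    """Closer hand -> stronger repair, fading to zero at max_distance."""
    return max(0.0, 1.0 - hand_distance / max_distance)

def update_cohesion(cohesion, hand_distance, audio_energy, dt=1/60):
    """One frame of the repair-vs-disturbance cycle."""
    repair  = repair_force(hand_distance) * 0.8
    disturb = audio_energy * 1.2
    cohesion += (repair - disturb) * dt
    return min(1.0, max(0.0, cohesion))   # 0 = scattered, 1 = whole

cohesion = 0.2
for frame in range(5):
    cohesion = update_cohesion(cohesion, hand_distance=0.3, audio_energy=0.1)
    print(f"frame {frame}: cohesion={cohesion:.3f}")
```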
At the end of the presentation, we reviewed some key iterations, such as the transition from simple button controls to a combination of gestures and sound. We also reflected on how the system maintains stability and clear feedback when multiple inputs are present simultaneously; these reflections provided direction for future improvements.
Week 10
In Week 10, the focus of the class shifted away from how the work looks or functions, and instead returned to why it exists in the first place. The teacher encouraged us to rethink the core message of our project and reconsider what kind of experience we want the audience to take away.
Ruins explores the fragile connection between attention and cultural memory. Through simple actions such as movement and sound, participants are able to temporarily restore scattered fragments, creating a brief sense of wholeness. However, this state is never stable—interference quickly causes the fragments to break apart again. The work reflects how culture in the digital age survives only through active engagement, and how easily it disappears when attention fades.
Mr. Max responded positively to the clarity of our concept and felt that the interactions effectively communicated the idea. At the same time, he pointed out that the project still has room to grow and needs further refinement.
He especially stressed that the installation should not rely solely on screens or digital effects. Instead, the idea of ruins should be translated into a physical structure, turning abstract fragments into something the audience can experience with their bodies in space. This shift would strengthen the emotional and spatial impact of the work.
Our next task is to develop physical sketches of the installation and organize a detailed list of materials and equipment.
Week 11
Week 11 is a crucial step in moving from "idea" to "physical form." Based on the teacher's suggestions from the previous week, we began to focus on the physical form of the installation, considering what the work would look like when it actually appears in space.
Initially, we organized our thoughts by hand-drawing sketches, trying to construct the overall structure of the installation, while marking the position of the screen, the direction of the cables, and the spatial relationship between the various parts.
Each draft leans in its own direction, representing different perspectives on how the work's imagery, philosophical ideas, and themes might be presented.
Effect simulation
By combining the strengths of these drafts, the final design was developed, and a simple 3D model was created to facilitate basic judgments regarding dimensions and materials.
Mr. Max deemed our solution feasible and provided us with the necessary stand and monitor.
Material
In terms of materials, we chose a large amount of red thread, several white paper balls, and LED light strips as the main visual elements. The red thread symbolizes the fragile yet continuous connection between the fragments, the white paper balls represent the remnants after being scattered, and the light strips are used to enhance visual guidance and spatial atmosphere. We made purchases...
Mr. Max suggested that we purchase two support frames as the structural foundation and wrap plenty of red wire around the supports so they would not fall or bend.
Week 12
In week 12, we stepped into the GMBB exhibition space for the first time, gaining a direct understanding of the environment in which the work would be situated, and conducted some simple setup tests. Standing on-site, many spatial issues that had previously existed only in our imaginations suddenly became concrete.
Unlike relying on sketches and models for speculation in the classroom, the real space allowed us to clearly feel the direct impact of height, wall distances, and audience positioning on the presentation of the installations.
We first built a simple framework, then arranged the light bulbs and set up the light sources.
This site visit made us realize that space itself is not a neutral background, but a part that profoundly participates in and shapes the experience of the work, and also prompted us to rethink the way the installation is placed and the scale relationship in the exhibition.
Week 13
In week 13, we continued working on the GMBB installation, making several modifications and optimizations to ensure its completion within a controllable scope.
During this phase, we abandoned our previous wiring method. We first strung the longest wire on each support structure, then added the remaining wires. At the same time, we wrote oracle bone script characters on the white paper spheres, letting these abstract cultural symbols enter the installation space in a more intuitive way.
We secured the pieces with alternating layers of red iron wire and thread, whose varying thicknesses and textures create a rich contrast. The threads were originally white, so we sprayed and painted them red to keep the colors consistent.
We originally planned to wrap the white table in cardboard boxes, but Mr. Max thought the boxes would make the whole thing look cluttered, so we abandoned that plan. Instead, we kept the white table, replaced it with a larger one, and added a smaller one to accommodate more screens.
After securing the wires to each bracket, we carefully moved the two brackets to their respective positions and began connecting them with the longer red wire.
Final effect
Finally, we fixed several white and transparent pipes to the platform, letting them scatter across the ground, and connected each one to an LED light strip. Because the strips were too bright and, being rubber, hard to slide into the long pipes, we wrapped them in masking tape; this dimmed the light and made them easier to insert. We also purchased a suitable projector that casts light animations onto the wall, matching our theme.
Exhibition site
TouchDesigner Prototype Video
Exhibition Video
Final presentation
Final submission
Reflection
In creating the interactive prototype for *Ruins*, we progressed from a half-finished piece whose animations could only be triggered by buttons to one where gestures and voice control the aggregation, vibration, and disintegration of the artifact fragments in real time. The process forced me to understand that the core of interaction is a clear, stable causal chain of "user behavior → system response → conceptual intent." To keep gesture tracking smooth and voice input responsive, we spent a great deal of time on smoothing, lag, mapping, and parameter tuning. Only when the fragments finally moved naturally with gestures and voice did I truly realize that a work's "smoothness" comes not from visuals, but from logic.

At the same time, I deeply felt how decisively the concept shapes the interactive structure: since the work speaks of culture being maintained by attention, the whole chain of approach, repair, disturbance, and disintegration has to be something the audience can experience through their bodies. The project taught me to treat input, processing, and output as a complete system, rather than a pile of nodes and effects.

For A3, I want to deepen the layering of the interaction, such as adding two-hand control, mapping different sound frequencies to different disturbances, or showing finer stages of artifact restoration. This is the first time I have achieved such close alignment between behavior and meaning; I still have a long way to go before my interaction design is mature, but the direction is clear.
I was involved in many aspects of this project. It was my first time building a prototype from scratch, and looking back, every step was challenging. Even at the concept stage we eliminated many ideas internally, such as decompression interaction and emotional connection, before settling on our current theme, which we summarized in the phrases "forgetting" and "reproduction." We brainstormed the relationships between these phrases and finally determined the work's final form.

Prototyping was a constant push. We proceeded cautiously and frequently hit dead ends: some components, even after we followed online tutorials to connect and adjust them, changed nothing, and we had to go back to videos covering the software's fundamentals. In the end we made progress, though the prototype still had bugs, such as unresponsive sensors or image jitter when a hand entered the camera's range. These are issues we need to address, and we will keep working to complete this final challenge. I hope to broaden my skills and perspective to better meet future challenges.