Sounding Canvas Project Report

1. Project Overview and Goals

The Sounding Canvas project integrates capacitive sensing technology with a Raspberry Pi 4 Model B for interactive sound generation. The core objective was to develop a system where touch or proximity to the canvas surface, detected by changes in capacitance, triggers corresponding audio output through connected speakers. This report details the hardware setup, software implementation, underlying electronics, and the progression from initial sensor testing to the final integrated artwork.

2. Hardware Architecture and Connections

2.1 Components:

The system comprises the following components, each described in later sections:

  1. Arduino Duemilanove microcontroller board
  2. Raspberry Pi 4 Model B single-board computer
  3. HiFiBerry Amp2 audio amplifier
  4. Loudspeakers
  5. Four capacitive sensors made from aluminum foil pads (approximately 40 cm² each)
  6. Four 1.4 MΩ resistors
  7. Canvas with wooden back panel for mounting the electronics

2.2 Sensor Connections:

Each of the four capacitive sensors utilizes a shared "send" pin (Arduino pin 2) and an individual "receive" pin (Arduino pins 4, 6, 8, and 10). For each sensor, a 1.4 MΩ resistor is connected between the send pin and the respective receive pin.

3. Capacitive Sensing Technology

3.1 Circuit Explanation:

The capacitive sensing circuit operates by measuring the time delay in an RC (resistor-capacitor) network. The Arduino toggles the send pin, and the receive pin charges through the 1.4 MΩ resistor; the library counts how long the receive pin takes to reach the digital logic threshold. When a hand touches or approaches the foil pad, the body adds capacitance to the circuit, lengthening this charge time and thereby signaling an interaction.

3.2 RC Delay and Sensitivity:

The RC time constant (τ = R × C) dictates the charging and discharging speed. Changes in capacitance (C) due to interaction alter this time constant, which is detected by the Arduino. Sensitivity was fine-tuned by adjusting the resistor value and the sampling rate within the CapacitiveSensor library. The aluminum foil sensor area of approximately 40 square centimeters was found to offer reliable detection.
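As a rough illustration of the relationship above, the shift in the time constant caused by a touch can be estimated. The resistor value matches the circuit described in Section 2.2; the capacitance values are illustrative assumptions, not measurements taken on the actual canvas:

```python
# Illustrative estimate of the RC time constant shift caused by a touch.
# R matches the 1.4 MOhm send-to-receive resistor; the capacitances are
# assumed for illustration only.
R = 1.4e6          # resistor between send and receive pin, ohms

C_idle = 10e-12    # assumed baseline pad capacitance, farads (10 pF)
C_touch = 110e-12  # assumed capacitance with a hand near the pad (110 pF)

tau_idle = R * C_idle    # time constant without interaction
tau_touch = R * C_touch  # time constant during a touch

print(f"idle:  tau = {tau_idle * 1e6:.1f} us")
print(f"touch: tau = {tau_touch * 1e6:.1f} us")
```

Even under these assumed values, a touch lengthens the charge time by an order of magnitude, which is why the library's timing measurement separates touches from the idle state so cleanly.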

4. Software Implementation

4.1 Arduino Firmware:

The Arduino Duemilanove utilizes the CapacitiveSensor library by Paul Stoffregen to measure capacitance changes. The core initialization in the Arduino sketch is:

CapacitiveSensor cs_2_4 = CapacitiveSensor(2, 4); // send pin 2, receive pin 4
// Three further instances use receive pins 6, 8, and 10 with the same send pin.

Sensitivity is managed through the number of samples taken by the library. Threshold values for sensor activation were determined experimentally via the Arduino IDE Serial Monitor to ensure accurate detection and minimize false triggers.

4.2 Raspberry Pi Software:

A Python program running on the Raspberry Pi 4 Model B is responsible for processing sensor data received from the Arduino and triggering audio playback. Key features include:

  1. Serial Communication: Establishes communication with the Arduino over USB to receive sensor readings.
  2. Threshold-Based Activation: Detects sensor activation when received values exceed predefined thresholds.
  3. Debouncing: Implements a minimum two-second delay between successive activations of the same sensor to prevent rapid re-triggering.
  4. Randomized Sound Selection: For each of the four sensors, the program randomly selects one of four distinct sound samples from a corresponding folder. The sounds were created using sustained electric guitar recordings to provide sonic variety.
  5. Parallel Playback: Allows for the simultaneous playback of sounds triggered by multiple sensor activations.
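The threshold, debounce, and random-selection behavior described in items 2–4 above can be sketched in pure Python. The thresholds, pin-to-folder mapping, and file names below are illustrative assumptions; the actual program additionally reads values from the serial port and hands files to an audio backend, both omitted here:

```python
import random
import time

# Illustrative per-sensor thresholds and sample folders; the real values
# were tuned experimentally via the Serial Monitor.
THRESHOLDS = {0: 500, 1: 500, 2: 500, 3: 500}
DEBOUNCE_SECONDS = 2.0  # minimum gap between re-triggers of the same sensor
SOUND_FOLDERS = {i: [f"sensor{i}/sample{j}.wav" for j in range(4)]
                 for i in range(4)}  # four distinct samples per sensor

last_triggered = {i: float("-inf") for i in range(4)}

def handle_reading(sensor, value, now=None):
    """Return a sound file to play, or None if the reading is ignored."""
    now = time.monotonic() if now is None else now
    if value <= THRESHOLDS[sensor]:
        return None  # below the activation threshold
    if now - last_triggered[sensor] < DEBOUNCE_SECONDS:
        return None  # still inside the two-second debounce window
    last_triggered[sensor] = now
    return random.choice(SOUND_FOLDERS[sensor])  # one of four samples
```

For parallel playback (item 5), each returned file would be handed to a mixer that supports overlapping channels, so that sounds from multiple sensors can play simultaneously.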

5. Mounting and Running

In "Echoes of a Line", four capacitive sensors, made from aluminum foil pads, are mounted on the back of the canvas using adhesive to ensure they remain flat and secure. A non-conductive layer is placed between the foil and the canvas to prevent unintended triggers due to external electrical interference. The sensors are connected to the Arduino as described in Section 2.2.

All electronic components, including the Arduino, Raspberry Pi with HiFiBerry Amp2, and loudspeakers, are securely mounted to a wooden back panel positioned behind the canvas. The loudspeakers are fixed using screws within circular recesses for stability. Cabling is carefully managed to ensure a clean and safe internal layout, with easy access for potential maintenance.

Initial testing with a single sensor confirmed the successful detection of capacitance changes with fast and consistent readings. Subsequent testing with four sensors demonstrated the system's ability to detect individual sensor activations without interference. Threshold calibration ensured reliable responsiveness and minimized false positives.

This interactive sound canvas represents the fusion of art and technology, offering a dynamic platform for creative expression. It exemplifies the potential of collaborative efforts between humans and AI in the realization of artistic and technical visions.

6. Communication Between Canvases

A unique and powerful aspect of the Sounding Canvas series is its ability to communicate across multiple artworks, creating a networked, collaborative experience. This inter-canvas communication is facilitated by a central Python server that acts as a hub for all connected Sounding Canvases.

6.1 WebSocket-Based Connectivity

Each individual Sounding Canvas establishes a persistent connection to the Python server using WebSockets. WebSockets provide a full-duplex communication channel over a single TCP connection, allowing for real-time, low-latency data exchange between the canvases and the server. This is crucial for the responsive and synchronous interaction desired across the networked artworks.

6.2 Routing Touch Events

When a user interacts with a specific Sounding Canvas (e.g., touches a sensor), the following communication flow occurs:

  1. Event Detection: The touched canvas detects the sensor activation and processes it locally (as described in the "Software Implementation" section).
  2. Event Transmission to Server: The canvas immediately sends a message to the central Python server via its established WebSocket connection. This message contains information about the touch event, such as the canvas ID and the activated sensor.
  3. Server Routing: Upon receiving a touch event from any canvas, the Python server acts as a router. It takes the incoming event and broadcasts it to all other currently connected Sounding Canvases. This ensures that every canvas in the network is aware of interactions happening on any other canvas.
  4. Reception and Response: Each receiving canvas processes the incoming event from the server. This allows for synchronized visual or auditory responses across the entire network of artworks, enabling complex, multi-canvas compositions or shared interactive experiences.
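The server's routing rule in step 3 can be sketched as a pure function. The JSON message fields (canvas_id, sensor) are assumptions about the wire format made for illustration; the actual server also manages the WebSocket connections themselves, which a library such as websockets would provide:

```python
import json

# Sketch of the routing rule: a touch event arriving from one canvas is
# forwarded to every other currently connected canvas. The message fields
# "canvas_id" and "sensor" are illustrative assumptions.
def route_event(raw_message, connected_ids):
    """Return a {recipient_id: outgoing_message} map, excluding the sender."""
    event = json.loads(raw_message)
    sender = event["canvas_id"]
    outgoing = json.dumps(event)
    return {cid: outgoing for cid in connected_ids if cid != sender}
```

In the real server, each recipient ID maps to an open WebSocket connection, and the outgoing message is sent to each one as it is produced.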

This server-client architecture, powered by WebSockets, allows the Sounding Canvas series to transcend individual artworks, transforming them into a cohesive, interactive network where a touch on one canvas can resonate and influence the sonic landscape of all others.

Communication Logs:

Below are terminal logs illustrating the communication flow between the canvases and the central server.

Figure 5.1: Terminal Log - Canvas in Rome (Client 1), showing its connection as the first client and received touch events.
Figure 5.2: Terminal Log - Canvas in Barcelona (Client 2), showing its connection as the second client and received touch events.
Figure 5.3: Terminal Log - Central Python Server, showing incoming client connections and the routing of touch events to all connected canvases.
