Making Sense of Decentralized TUIs: A Modular Sound-Centric Approach

Francesco Di Maggio, Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands, f.di.maggio@tue.nl
Bart Hengeveld, Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands, b.j.hengeveld@tue.nl
Mathias Funk, Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands, m.funk@tue.nl

This studio invites participants to explore tangible and embodied interaction beyond standalone interfaces into interconnected ecologies of smart objects, focusing on sound as a primary design element. Through a series of hands-on experiments, participants will prototype with a modular toolkit, exploring how sound can move beyond simple notification functions to create rich and adaptive soundscapes. Drawing from the practices of digital lutherie and sound design, in particular mapping and interaction design strategies, participants will learn how to map sensor data to sound using interactive machine learning techniques based on artificial neural networks. This studio explores sustainable practices by adopting a modular approach to create adaptable systems, while promoting community engagement, open source contributions, and wider accessibility.

CCS Concepts: • Human-centered computing → User interface toolkits; • Applied computing → Sound and music computing;

Keywords: Tangible User Interfaces, Decentralized Systems, Sound Design, Interactive Machine Learning

ACM Reference Format:
Francesco Di Maggio, Bart Hengeveld, and Mathias Funk. 2025. Making Sense of Decentralized TUIs: A Modular Sound-Centric Approach. In Nineteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '25), March 04--07, 2025, Bordeaux / Talence, France. ACM, New York, NY, USA, 3 pages. https://doi.org/10.1145/3689050.3708332

Figure 1: Overview of the studio components.

1 DETAILED PROPOSAL DESCRIPTION

This studio explores how interconnected, tangible objects can create enriched sound interactions in diverse environments. Participants will engage in prototyping a multimodal system based on modular components, exploring how these can be configured and reconfigured to create complex, responsive, and adaptable soundscapes. Starting with a tangible hardware toolkit composed of a wireless microcontroller, a breadboard with jumper wires, and various sensors, participants will learn how to acquire and send sensor data wirelessly to a centralized network using Plugdata (https://plugdata.org) and OOCSI (https://oocsi.id.tue.nl) middleware. Participants will learn how to visualize and process this mesh of sensor data in Max (https://cycling74.com) using modulo (https://cycling74.com/packages/modulo), a creative toolkit developed for rapid prototyping and sound interaction design. Benefiting from the on-site facilities, participants will engage in sensor-sound mapping and interaction design, directly experimenting with various aspects of digital lutherie, from mapping strategies using artificial neural networks to sound design techniques. The studio will conclude with a short demo that will showcase the skills acquired so far and contribute to further discussion.
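To make the message-passing pattern concrete, the following minimal Python sketch emulates the publish/subscribe flow of channel-based middleware such as OOCSI with an in-memory bus. The class, channel name, and message fields are illustrative stand-ins, not the actual OOCSI client API: a sensor node sends a reading on a channel, and any subscribed sound patch receives it.

```python
from collections import defaultdict
from typing import Callable

class MessageBus:
    """Minimal in-memory publish/subscribe bus, mimicking the pattern of
    message-based middleware such as OOCSI (not its actual API)."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # channel -> list of handlers

    def subscribe(self, channel: str, handler: Callable[[dict], None]):
        self.subscribers[channel].append(handler)

    def send(self, channel: str, message: dict):
        # Broadcast the message to every handler subscribed to the channel.
        for handler in self.subscribers[channel]:
            handler(message)

# A sound patch listens on a channel; a sensor node publishes readings.
bus = MessageBus()
received = []
bus.subscribe("sensors/motion", received.append)
bus.send("sensors/motion", {"device": "node-1", "accel_x": 0.42})
print(received[0]["accel_x"])  # 0.42
```

In the studio itself this role is played by the OOCSI server, which additionally handles wireless transport between microcontrollers and the Max/Plugdata patches.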

The studio is organized around the following themes:

  • Sound as a Core Design Modality: While maintaining a critical stance towards the more conventional focus on tactile and visual modalities, this studio emphasizes sound—its aesthetics and functional roles—as a rich, nuanced modality for creating adaptive, meaningful interactions.
  • Decentralized Ecologies of Interconnected Objects: The studio explores how standalone interfaces can quickly connect and form networked ecologies that respond dynamically to each other and user interactions.
  • Sustainable Design: This studio prioritizes sustainable practices as it is based on a modular ecosystem designed to streamline the prototyping of bespoke interactive systems, encouraging collaboration, open source contributions, and wider accessibility.

2 GROUNDING IN THEORY

Sound is an integral part of how we experience and make sense of the world around us, and as such has great potential for design in many everyday user interactions. However, in the context of TEI, sound remains a rather underexposed (and underutilized) modality outside music applications [8, 12] compared to the predominant tactile and visual modalities, often relegated to simple notification functions rather than fully integrated into the user experience. By adopting a modular system architecture, this studio explores both the challenges and opportunities of how the principles of TEI [11] can be adapted to expand from highly specialized standalone interfaces into decentralized and interconnected ecologies of smart objects, focusing on sound as a critical design material [3, 7, 10].

The studio draws from:

  • Tangible, Embodied and Embedded Interaction: This studio explores beyond isolated interfaces into decentralized systems, considering how interconnected tangible objects can create adaptive and responsive environments at the edge of peripheral interaction [1, 5, 6].
  • Digital Lutherie and Sound Design: By integrating principles from digital lutherie [9], this studio explores the role of sound in interaction design, moving towards context-sensitive soundscapes that adapt to user interactions.
  • Interactive Machine Learning: Following the interactive machine learning paradigm [2, 4], we will look at emerging behaviors between input data (sensors) and sound mediated by collective behavior in prototyping and performance.

3 MATERIALS TO BE EXPLORED

This studio is meant to be an open space for creative exploration. We aim to create a welcoming and inclusive experience for all participants, from beginners to more advanced practitioners, inviting everyone to take part in creation and reflection. This also includes catering to, and constructively working with, any limitations participants might have. We are happy to adapt the workshop to the wishes and specific needs of the participants, creating an inclusive environment that fosters cooperation, collective discussion, and a celebration of the aesthetics of sound.

During this studio, we will explore and experiment with a multimodal toolkit that includes:

  • Hardware: Input components such as motion sensors, wireless microcontrollers, and prototyping tools.
  • Middleware: A message-based communication framework for sending, receiving, and broadcasting data.
  • Software: A modular toolkit for input acquisition and processing, sound interaction design, and mapping.
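As a taste of the input-acquisition and processing layer, the sketch below shows two operations that recur in almost every sensor-to-sound pipeline: exponential smoothing to tame sensor jitter, and linear range scaling to map a raw sensor range onto a sound parameter. The accelerometer range and filter-cutoff range here are hypothetical examples, not values prescribed by the toolkit.

```python
def smooth(prev: float, new: float, alpha: float = 0.2) -> float:
    """Exponential moving average: filters jitter from raw sensor readings."""
    return prev + alpha * (new - prev)

def scale(value, in_min, in_max, out_min=0.0, out_max=1.0):
    """Linearly map a sensor range onto a sound-parameter range, clamped."""
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))
    return out_min + t * (out_max - out_min)

# Raw accelerometer values (hypothetical -2g..2g range) mapped to a
# hypothetical filter cutoff between 100 Hz and 8000 Hz.
readings = [0.1, 0.9, 1.5, 1.4, 2.3]
state = readings[0]
for r in readings[1:]:
    state = smooth(state, r)
cutoff = scale(state, -2.0, 2.0, 100.0, 8000.0)
```

In the studio, the equivalent processing happens inside Max using modulo's processing modules rather than in Python.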

Beyond these materials, participants will explore creative assemblages of materials in the form of decentralized, systemic experiences in instrument design and case-specific applications.

4 LEARNING GOALS

By the end of this workshop, participants will develop critical perspectives on the role of sound in tangible and embodied interaction and explore uncharted territory in the vast space of tangible and embedded design for musical and sound expression. In particular, participants will:

  • Gain practical experience in designing, prototyping, and configuring modular elements to create adaptive sound interfaces relating to a spatial audio context.
  • Learn mapping strategies through traditional and interactive machine learning approaches (e.g., regression and classification tasks), and employ these strategies in creative sound applications.
  • Observe and creatively influence emergent phenomena in networked environments, leveraging connectedness and collaborative music-making practices.
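The regression side of these mapping strategies can be sketched in plain Python: "record" a handful of input-output example pairs, train a tiny one-hidden-layer neural network on them by gradient descent, and then run it as a continuous sensor-to-sound mapping. This is a minimal stand-in for interactive machine learning tooling, not the toolchain used in the studio; the network size, learning rate, and example pairs are all illustrative.

```python
import math, random

random.seed(1)

# One-input, one-output network with a small tanh hidden layer.
H = 8
w1 = [random.uniform(-1, 1) for _ in range(H)]  # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]  # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    y = sum(w2[i] * h[i] for i in range(H)) + b2
    return h, y

def train(pairs, epochs=2000, lr=0.05):
    """Stochastic gradient descent on 0.5 * (y - target)^2."""
    global b2
    for _ in range(epochs):
        for x, target in pairs:
            h, y = forward(x)
            err = y - target
            for i in range(H):
                grad_h = err * w2[i] * (1 - h[i] ** 2)  # backprop through tanh
                w2[i] -= lr * err * h[i]
                w1[i] -= lr * grad_h * x
                b1[i] -= lr * grad_h
            b2 -= lr * err

# "Record" a few input-output examples: e.g. sensor tilt -> synth brightness.
pairs = [(0.0, 0.1), (0.5, 0.8), (1.0, 0.3)]
loss_before = sum((forward(x)[1] - t) ** 2 for x, t in pairs)
train(pairs)
loss_after = sum((forward(x)[1] - t) ** 2 for x, t in pairs)
```

After training, `forward` interpolates smoothly between the recorded examples, which is exactly the record-train-run loop participants will experience interactively during the mapping phase.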

There will NOT be a test. :)

5 SCHEDULE

This studio is a full-day event suitable for around 10 participants, with the possibility of working individually or in pairs. There are no special requirements other than a positive attitude toward challenges, a personal computer, and access to an Internet connection.

We propose the following schedule (09:00 – 17:00):

  • Morning Session (09:00-12:30):
    • Introduction (30 min): Overview of challenges and opportunities in tangible interaction and sound design.
    • Material Exploration (30 min): Experiment with the starter toolkit, exploring various configurations.
    • Prototyping Phase 1. Hardware (30 min): Build a simple sensor-based instrument prototype.
    • Prototyping Phase 2. Middleware (30 min): Connect instruments via OOCSI and Plugdata.
  • Coffee Break (11:00-11:15)
    • Prototyping Phase 3. Software (30 min): Collect, visualize and process sensor data in Max using modulo.
    • Prototyping Phase 4. Sound Design (45 min): Design sound engines and audio effects.
  • Lunch Break (12:30-13:30)
  • Afternoon Session (13:30-17:00):
    • Prototyping Phase 5. Mapping Strategies (60 min): Explore mapping strategies and interactive machine learning through the iterative process of recording, training, editing, and running models based on examples of input-output pairs.
    • Demo Preparation (60 min): Prepare a short demo that includes interconnected tangible objects, custom mappings, distributed sound projection, and spatialization.
  • Coffee Break (15:30-15:45)
    • Demo (15 min): A shared experience of the space through sound interaction.
    • Group Discussion (30 min): Reflect on insights, challenges, and further possibilities. Provide feedback.
    • Documentation (20 min): Participants will document their prototypes, experiences, and thoughts.
    • Wrap-Up (10 min): What have we learned so far and what are we going to do with this now?

6 SUPPORTING DOCUMENTS

To learn more about the studio, visit our website at the following link: https://sites.google.com/view/decentralized-sound-tuis

References

[1] Saskia Bakker, Elise van den Hoven, and Berry Eggen. 2015. Peripheral interaction: characteristics and considerations. Personal and Ubiquitous Computing 19, 1 (2015), 239–254. https://doi.org/10.1007/s00779-014-0775-2
[2] Francisco Bernardo, Mick Grierson, and Rebecca Fiebrink. 2018. User-centred design actions for lightweight evaluation of an interactive machine learning toolkit. Journal of Science and Technology of the Arts 10, 2 (2018), 25–38. https://doi.org/10.7559/citarj.v10i2.509
[3] John Bowers and Sten Olov Hellström. 2000. Simple interfaces to complex sound in improvised music. Conference on Human Factors in Computing Systems - Proceedings (2000), 125–126. https://doi.org/10.1145/633292.633364
[4] Rebecca Fiebrink and Baptiste Caramiaux. 2018. The Machine Learning Algorithm as Creative Musical Tool. The Oxford Handbook of Algorithmic Music (2018), 181–208. https://doi.org/10.1093/oxfordhb/9780190226992.013.23
[5] Giancarlo Fortino, Wilma Russo, Claudio Savaglio, Weiming Shen, and Mengchu Zhou. 2018. Agent-oriented cooperative smart objects: From IoT system design to implementation. IEEE Transactions on Systems, Man, and Cybernetics: Systems 48, 11 (2018), 1949–1956. https://doi.org/10.1109/TSMC.2017.2780618
[6] Mathias Funk and Bart Hengeveld. 2018. Designing within connected systems. DIS 2018 - Companion Publication of the 2018 Designing Interactive Systems Conference (2018), 407–410. https://doi.org/10.1145/3197391.3197400
[7] Bart J. Hengeveld and Mathias Funk. 2021. Designing Group Music Improvisation Systems: A Decade of Design Research in Education. International Journal of Design 15, 2 (2021), 69–81. https://research.tue.nl/en/publications/designing-group-music-improvisation-systems-a-decade-of-design-re
[8] Sergi Jordà, Günter Geiger, Marcos Alonso, and Martin Kaltenbrunner. 2007. The reacTable: Exploring the synergy between live music performance and tabletop tangible interfaces. TEI '07: First International Conference on Tangible and Embedded Interaction (2007), 139–146. https://doi.org/10.1145/1226969.1226998
[9] Nathan Renney, Benedict Gaster, Tom Mitchell, and Harri Renney. 2022. Studying How Digital Luthiers Choose Their Tools. Conference on Human Factors in Computing Systems - Proceedings, Article 72 (2022), 18 pages. https://doi.org/10.1145/3491102.3517656
[10] Toros Senan, Bart Hengeveld, and Berry Eggen. 2022. Sounding Obstacles for Social Distance Sonification. ACM International Conference Proceeding Series (2022), 187–194. https://doi.org/10.1145/3561212.3561239
[11] Orit Shaer and Eva Hornecker. 2009. Tangible User Interfaces: Past, present, and future directions. Foundations and Trends in Human-Computer Interaction 3, 1–2 (2009), 1–137. https://doi.org/10.1561/1100000026
[12] Jens Vetter. 2021. Tangible Signals - Prototyping Interactive Physical Sound Displays. TEI 2021 - Proceedings of the 15th International Conference on Tangible, Embedded, and Embodied Interaction, Article 45 (2021), 6 pages. https://doi.org/10.1145/3430524.3442450

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

TEI '25, March 04–07, 2025, Bordeaux / Talence, France

© 2025 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-1197-8/25/03.
DOI: https://doi.org/10.1145/3689050.3708332