GOVERNANCE

Synthball

With regard to contemporary performance technologies, we believe it is no longer useful to create a binary divide between analogue and digital temporalities, as experience is mediated and symbolized, codified and decodified by each. We advance an ontology of digital memory to rethink the experience of performative real-time in a way that complicates electronic performance as it is enacted by human and nonhuman actors. With our project, Synthball, we seek to question the binary between “liveness” and mediatization in electronic and digital performance techniques. Through the development of contemporary technologies that aim to expand the types of embodied relationships a performer can have with an object, we aim to articulate a type of performance that utilizes recording in a way that does not compromise ‘liveness’, but rather uses existing technologies to heighten the experience of the ‘real time’.

Here, we use the term ‘recording’ broadly, to refer to both long-term storage and temporary buffers. Buffers are delimited units of short-term storage, defined by their constant deletion as part of a real-time process. In most respects, they are identical to stored recordings; samples in a datastream must be stored to be processed, but they must also be constantly deleted to make space for new samples. Stored materials can be transferred to buffers, moving from pastness into the now. Rather than stored materials, our system streams data derived from physical facticity in a performative space. Real-time processes mirror, support and influence performances in the present. We aim to differentiate between human, machine, and hybrid systems in the ways they process and perform with information. As it pertains to liveness, we locate the ontology of the electronic performance in our real-time system, first by examining the performance object as it relates to the record, and second, through analyzing and locating the agency within the performative system.
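The buffer-as-recording idea above can be sketched in a few lines of code. This is a minimal illustration, not our system's implementation: a fixed-length buffer stores incoming samples for processing, and each new sample silently deletes the oldest one, so the buffer always holds only the most recent window of the datastream.

```python
from collections import deque

class SampleBuffer:
    """A short-term store whose contents are constantly deleted."""

    def __init__(self, size):
        # a deque with maxlen discards the oldest item automatically
        self.samples = deque(maxlen=size)

    def push(self, sample):
        self.samples.append(sample)  # the oldest sample is dropped when full

    def snapshot(self):
        # the "now": only the most recent window of the stream survives
        return list(self.samples)

buf = SampleBuffer(size=4)
for s in range(6):          # stream six samples through a four-slot buffer
    buf.push(s)
print(buf.snapshot())       # → [2, 3, 4, 5]
```

Samples 0 and 1 have already been deleted to make space: storage and erasure happen as one continuous real-time process.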

We’ve created a set of devices that each incorporate inertial sensors to transmit data about their individual movement in real time: elastic silicone balls that can be thrown, caught, rolled and bounced. The devices possess their own type of temporality, where human gestures performed while in contact with them are automatically recorded and used to inform and manipulate the datastream as it unfolds in real time, thus acting as a type of mnemonic device of gesture. The transmitted data includes three-dimensional accelerometer readings, gyroscope readings measuring changes in rotational orientation, and magnetometer readings measuring magnetic fields (like the Earth’s). The device streams data to media software to control aspects of audio and visuals in real time. Our initial goal was to remediate physical motion, performance and play from material bodies and objects to abstract digital imagery and sound. In using motion capture (or, more accurately, “motion streaming”) to create non-representational transformations, we’re interested in exploring whether elements of gesture or temporal meaning can be dissociated from the human form. We’ve chosen the spherical form factor because it is ancient and universally associated with physical play; we share an interest in constructing virtual play spaces that augment and inform, rather than replace, physical experiences, as a challenge to the wholly digitized perceptual systems seen in virtual reality products. The objects could be further used to model and manipulate social dynamics. How might they reconfigure the ways that people within a space act together and alone?
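To make the datastream concrete, here is a hedged sketch (not the project's firmware) of one nine-axis sample from the ball's sensors and a simple feature that could be derived from it. Field names, units, and the `motion_energy` function are assumptions for illustration.

```python
import math

# One hypothetical sample from the ball's inertial sensors
sample = {
    "accel": (0.1, -0.2, 9.9),    # accelerometer, m/s^2
    "gyro":  (0.0, 1.5, -0.3),    # gyroscope, rad/s (rotational rate)
    "mag":   (22.0, 5.0, -40.0),  # magnetometer, microtesla
}

def motion_energy(accel, gravity=9.81):
    """Acceleration magnitude with gravity removed: near zero when the
    ball rests in a hand, large during a throw, catch, or bounce."""
    ax, ay, az = accel
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - gravity)

print(round(motion_energy(sample["accel"]), 2))   # → 0.09 (nearly at rest)
```

A derived value like this could drive a single audio or visual parameter; the raw nine-axis stream allows many such mappings at once.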

Research / Exhibition Summary
We’ve done significant qualitative and quantitative research using the original Synthball prototypes. One study involved testing and observation with children of various ages, while our audiovisual performance duo, Governance, regularly uses Synthball and wearable prototypes with standardized audio and video equipment. We’ve experimented with wireless technologies beyond those used in the original prototypes and determined that a direct peer-to-peer datastream (via ad-hoc WiFi, XBee wireless or Bluetooth mesh) is better suited to our application than the cloud/WiFi approach we previously took. We’ve also experimented with emerging rangefinding technologies and found that they are viable to integrate but may substantially increase the size of the device, which raises durability concerns.

sound/play/space installation at the opening of the Rubenstein Arts Center

While we’ve used computers running Max/Jitter patches to date, we see greater usability potential in a dedicated object that receives data and produces sounds and visuals “out of the box”, without setup. Our research into integrating the Synthball into more common audiovisual performance setups has proved very promising, but it has required both a computer and specialized music technology equipment to translate the motion data into the information types commonly used in performance: MIDI, OSC and CV.
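The core of that translation step can be sketched briefly. This is an illustrative assumption, not our Max patch: a raw motion value is clamped to a calibrated range and rescaled to the 7-bit (0–127) data range that MIDI control change messages carry; the input range here is invented for the example.

```python
def to_midi_cc(value, lo, hi):
    """Clamp a raw sensor value to a calibrated range, then scale it
    to MIDI's 7-bit control change data range (0-127)."""
    value = max(lo, min(hi, value))
    return round((value - lo) / (hi - lo) * 127)

# e.g. a gyroscope rate calibrated between 0 and 20 rad/s
print(to_midi_cc(5.0, 0.0, 20.0))    # → 32
print(to_midi_cc(25.0, 0.0, 20.0))   # → 127 (clamped at the top)
```

OSC and CV mappings follow the same clamp-and-rescale pattern, only with different output ranges (OSC commonly uses 0.0–1.0 floats; CV a voltage range).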

Integrating the wearables with MIDI and CV-based instruments during summer 2018 residency

We’ve explored wearable devices communicating with smart objects in real time (using the same internal technologies as the ball, but in a wearable fabric enclosure). We successfully prototyped and tested this system in our Moogfest 2018 performance with the SLIPPAGE lab, which featured five dancers shaping different parameters of an audiovisual performance. Real-time testing of multiple devices simultaneously has revealed the need for a network topology that can support dozens of simultaneous devices without relying on the proprietary cloud platform (Particle Cloud) we’ve used in prototypes so far.
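One candidate for such a cloudless topology can be sketched with standard networking primitives. This is a hedged illustration of the general shape, not a committed design: each device sends UDP datagrams tagged with its own ID to a host on the local network, and the host demultiplexes the streams by ID. The packet format (one unsigned byte for the device ID plus three big-endian floats) is an assumption for the example; the demo runs over loopback standing in for a LAN.

```python
import socket
import struct

def pack_sample(device_id, ax, ay, az):
    # one byte of device ID, then three 32-bit big-endian floats
    return struct.pack(">Bfff", device_id, ax, ay, az)

def unpack_sample(data):
    device_id, ax, ay, az = struct.unpack(">Bfff", data)
    return device_id, (ax, ay, az)

# host side: bind to a local UDP port (the OS picks a free one here)
host = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
host.bind(("127.0.0.1", 0))
port = host.getsockname()[1]

# device side: any number of devices can send to the same host port
device = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
device.sendto(pack_sample(7, 0.1, -0.2, 9.9), ("127.0.0.1", port))

packet, _ = host.recvfrom(64)
device_id, accel = unpack_sample(packet)
print(device_id)   # → 7: the host routes each stream by its ID
device.close()
host.close()
```

Because every device addresses the host directly, the topology scales by adding senders rather than by provisioning cloud accounts, at the cost of handling discovery and packet loss ourselves.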

Moogfest 2018 performance excerpt