WithFeel VR Development
WithFeel VR is part of a PhD research project at Ulster University. The overriding goal of its development is to make music performance and composition more accessible using the immersive technologies we have available today. To do this we use what is called Participatory Design.
Participatory design involves co-design with stakeholders throughout the design process. You may have heard of companies holding focus group workshops to ascertain user needs and so on. Participatory design is more intense than that. The end-users are not merely informants but assume the role of co-designers.
As WithFeel VR is for use by disabled (and non-disabled) musicians we are strongly influenced by the disability rights mantra of…
nothing about us without us
Disability Rights Movement
It is this that makes Participatory Design the perfect methodology for WithFeel VR. But there's a problem…
Often in participatory design projects there's an issue of knowledge imbalance. Let's say the participants/end-users are members of a Norwegian metal workers' union. They're skilled professionals who know their job well, but what do they know about designing software? Not a lot, I would imagine. Academics in the Participatory Design field suggest finding a hybrid or third space that both designers and participants can metaphorically inhabit (Michael J. Muller, Allison Druin).
For the development of WithFeel VR the problem doesn't so much lie within the language of software design but in the musical language used. I'm a musician with over 25 years' experience in performing and composing. The language I use to describe music is very esoteric. Timbre, metric modulations, tritone substitutions, reharmonisation, timbral modulation. These are phrases that come out of my mouth on a fairly regular basis. These are not words used to describe music that someone with, let's say, Down's syndrome, or perhaps someone who has been excluded from mainstream music education because of physical disabilities, would use.
So how do we find a hybrid space when the language we use is so different?
Embodiment
Embodiment is a buzzword in XR technologies, and it's a little more complicated than you might think. You see, we tend to embody abstract ideas despite the fact that they are conceptual and do not have any physical presence. Confused?
Let's look at a certain way we describe music to see what we mean. I mentioned the word timbre earlier on this page. For non-musicians, timbre simply means the quality of a sound. It's what makes a saxophone sound like a saxophone and a violin sound like a violin. Those instruments have certain timbral qualities that we can easily discern.
There are two musical terms a composer might write for a violinist to affect timbre: sul ponticello or flautando. These words simply instruct the violinist to bow at a certain distance from the bridge of the instrument: sul ponticello close to the bridge, flautando further away, over the fingerboard. The result is a ROUGH or SMOOTH sound.
Why the caps? Because rough and smooth in this context are embodied metaphors. They are descriptors based on touch, not sound. It turns out our language is riddled with embodied metaphors. This theory is called embodied cognition and was pioneered by Mark Johnson and George Lakoff.

A shared musical language
This is where WithFeel VR comes in and why it has its name. This is the hybrid space that Muller and Druin talk about. In the development of WithFeel VR we use these embodied descriptors as part of our shared musical language. This is a musical language that many of us share, and it is perfectly suited to VR. WithFeel VR measures the morphology of interactive objects in VR and maps those physical qualities to embodied metaphors of sound and music. We alter the timbral and pitch qualities of audio samples to reflect the object's shape and texture, among other things. Take a look at the video below to see what I mean. You'll notice the audio sample becomes rougher or smoother depending on the roughness of the texture on the object. A small sketch of this kind of mapping follows.
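To make the idea of a morphology-to-sound mapping a little more concrete, here is a minimal sketch. It is not WithFeel VR's actual code: the object properties, parameter names and mapping curves are hypothetical, chosen only to illustrate how embodied descriptors like rough/smooth and big/small might drive timbral and pitch controls.

```python
# Illustrative sketch only: the properties, parameter names and curves below
# are hypothetical, not taken from WithFeel VR itself.

from dataclasses import dataclass

@dataclass
class VrObject:
    surface_roughness: float  # 0.0 = perfectly smooth, 1.0 = very rough
    size: float               # approximate bounding-sphere radius in metres

def map_to_sound(obj: VrObject) -> dict:
    """Map an object's morphology to timbral and pitch controls."""
    # Rough surfaces -> brighter, noisier timbre: open the low-pass filter
    # and blend in more noise; smooth surfaces do the opposite.
    cutoff_hz = 500 + obj.surface_roughness * 7500   # 500 Hz .. 8 kHz
    noise_mix = obj.surface_roughness * 0.6          # up to 60% noise

    # Bigger objects -> lower pitch, echoing the embodied sense that big
    # things sound deep: roughly one octave up or down from the original.
    size_clamped = min(max(obj.size, 0.05), 2.0)
    pitch_shift_semitones = -12.0 * (size_clamped - 1.0)

    return {
        "filter_cutoff_hz": cutoff_hz,
        "noise_mix": noise_mix,
        "pitch_shift_semitones": pitch_shift_semitones,
    }

# Example: a small, rough pebble-like object vs a large, smooth sphere.
print(map_to_sound(VrObject(surface_roughness=0.9, size=0.1)))
print(map_to_sound(VrObject(surface_roughness=0.1, size=1.5)))
```

The point of the sketch is simply that the words a participant might naturally reach for (rough, smooth, big, small) become the control dimensions of the instrument itself.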
Conducting
The spatial nature of the performance within WithFeel VR has also led us to explore conducting using spatial gestures. Below you can see me and one of the participants in this research exploring those possibilities by drawing shapes in the air and sharing them with the performer. A hypothetical sketch of how such a gesture might be interpreted follows.
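For the curious, here is one hypothetical way a drawn gesture could be turned into conducting information. Again, this is only a sketch and not how WithFeel VR actually works: it assumes the controller position can be recorded each frame, and it maps hand height to loudness and hand speed to tempo as simple embodied stand-ins.

```python
# Hypothetical sketch: gesture-to-conducting mapping. The path format,
# frame rate and mappings are assumptions made for illustration.

import math

def gesture_to_conducting(path, frame_dt=1 / 72):
    """Turn a recorded hand path (list of (x, y, z) points, metres)
    into per-frame loudness and tempo suggestions."""
    loudness = []
    tempo_bpm = []
    for prev, curr in zip(path, path[1:]):
        # Height of the hand (y axis) -> loudness, scaled to 0..1.
        height = min(max(curr[1], 0.0), 2.0) / 2.0
        loudness.append(height)

        # Speed of the hand -> tempo: faster sweeps suggest a faster pulse.
        speed = math.dist(prev, curr) / frame_dt        # metres per second
        tempo_bpm.append(60 + min(speed, 2.0) * 60)     # 60..180 BPM

    return loudness, tempo_bpm

# Example: a slow rising arc drawn in the air.
arc = [(0.0, 0.5 + 0.05 * i, 0.3) for i in range(20)]
print(gesture_to_conducting(arc))
```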
Where next?
This is just a small glimpse of the many avenues of exploration and research involved in WithFeel VR. One question that has always come to my mind since I got involved with accessible music making is: why do I always have the compositional control? I've composed works in the past with my disabled colleagues and designed software for those compositions, but ultimately I was the one with compositional control. Another part of this research project is the design of an Accessible Composition Interface that will compose for WithFeel VR. What good is a musical instrument without compositions to perform? This also uses Participatory Design methodologies and shared musical descriptors. When my PhD is complete you'll be sure to see it here.