The future is wireless

[Image: a mobile phone showing a wireless icon on a deep blue background. Photo by Franck / Unsplash]

“Courage”…

In our latest podcast episode, Waiting for Review, Dave Nott and I briefly discussed wireless headphones. For both of us, it seems the future is wireless, and we kind of ‘get’ the direction that Apple and others have been leading things in by eliminating hardware stereo jacks.

I recently sold my Edirol V4 video mixer on eBay. It was analogue, SD resolution, and I hadn’t used it for many years. When I first started “VJing” in 2004, it was the standard mixer for any VJ to use. I had a twinge of sadness in parting with it, but objectively my app GoVJ does everything I used it for over a decade ago with multiple DVD and laptop sources. I’d coded a software version of a hardware product that runs on a device that fits in my pocket. It’s fun living in the future!

Stream all the things

All this set me thinking about wireless video.

I love AirPlay, and GoVJ supports AirPlay output of the video mix the user is performing. I’m looking into supporting Chromecast down the line as well, possibly even alongside AirPlay, to provide dual outputs over Wi-Fi from the application.
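For anyone curious, the external-display side of this is fairly simple in UIKit. Here’s a minimal sketch (not GoVJ’s actual implementation): it watches for a second UIScreen, which appears when the user AirPlays to an Apple TV or plugs in an adapter, and attaches a window to it. The `VideoOutputViewController` is a hypothetical view controller that renders the mix.

```swift
import UIKit

final class ExternalDisplayManager {
    private var externalWindow: UIWindow?

    init() {
        // A second UIScreen appears when the user connects via AirPlay
        // (or a wired adapter); attach an output window to it.
        NotificationCenter.default.addObserver(
            forName: UIScreen.didConnectNotification,
            object: nil, queue: .main) { [weak self] note in
            guard let screen = note.object as? UIScreen else { return }
            let window = UIWindow(frame: screen.bounds)
            window.screen = screen
            window.rootViewController = VideoOutputViewController() // hypothetical: renders the mix
            window.isHidden = false
            self?.externalWindow = window
        }

        // Tear the window down when the screen goes away.
        NotificationCenter.default.addObserver(
            forName: UIScreen.didDisconnectNotification,
            object: nil, queue: .main) { [weak self] _ in
            self?.externalWindow = nil
        }
    }
}
```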

For real-time video applications on the desktop, there are two technologies that allow inter-app transmission of video data: Syphon on macOS, and its Windows counterpart, Spout. These utilise texture-sharing functionality that relies heavily on support from the OS and graphics card drivers. On macOS, I understand this leverages the IOSurface object.

This allows different apps to ‘transmit’ video between each other with extremely low latency. For example, I can create an audio visualiser that generates pretty particle effects in response to a microphone input, and pipe that video straight into another piece of software that controls multiple screen outputs and video mapping. This interoperability is extremely powerful: it provides a whole other level of expression and choice for video artists on the desktop, and it has created a niche ecosystem of apps from separate developers that can all be combined with each other.
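To give a flavour of how lightweight the publishing side is, here’s a rough sketch of a Syphon server in Swift, following the method names of Syphon’s Objective-C API. The `cglContext` and `textureID` are assumed to come from an existing OpenGL renderer; treat this as illustration rather than a verified integration.

```swift
import Syphon

// Create a named server that other Syphon-aware apps can discover.
// Assumes `cglContext` is the CGLContextObj your renderer already uses.
let server = SyphonServer(name: "Visualiser Output",
                          context: cglContext,
                          options: nil)

// Each time a frame is rendered, publish the texture. No pixels are
// copied over a socket; the frame is shared via an IOSurface.
server?.publishFrameTexture(textureID,
                            textureTarget: GLenum(GL_TEXTURE_RECTANGLE_EXT),
                            imageRegion: NSRect(x: 0, y: 0, width: 1280, height: 720),
                            textureDimensions: NSSize(width: 1280, height: 720),
                            flipped: false)
```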

What about mobile?

I’m keen for something similar to exist on iOS; I believe it could open up the iPad as a tool for live video artists in the same way. Unfortunately, due to sandboxing and other restrictions, recreating Syphon is impossible: IOSurface on iOS is a private API, disallowed for non-Apple applications.

I’m currently looking at NewTek’s NDI SDK, which allows for encoding video data and transmitting it over Wi-Fi.
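As a feasibility sketch, the sending side of NDI looks pleasantly small. This is hedged: it assumes the NDI SDK’s C headers are bridged into Swift via a bridging header or module map, and the names follow the NDI C API (NDIlib_send_create, NDIlib_send_send_video_v2); I haven’t confirmed how cleanly the structs import.

```swift
// Assumes the NDI SDK's C headers are exposed to Swift; names follow the C API.
NDIlib_initialize()

var sender: NDIlib_send_instance_t?
"GoVJ Output".withCString { name in
    var desc = NDIlib_send_create_t(p_ndi_name: name,
                                    p_groups: nil,
                                    clock_video: true,
                                    clock_audio: false)
    sender = NDIlib_send_create(&desc)
}

// Describe and send one BGRA frame. `pixels` is assumed to point at
// width * height * 4 bytes produced by the render pipeline.
var frame = NDIlib_video_frame_v2_t()
frame.xres = 1280
frame.yres = 720
frame.FourCC = NDIlib_FourCC_type_BGRA
frame.frame_rate_N = 30
frame.frame_rate_D = 1
frame.line_stride_in_bytes = 1280 * 4
frame.p_data = pixels
NDIlib_send_send_video_v2(sender, &frame)
```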

If iOS apps could support this, presenting available outputs over the network via Bonjour for example, then something similar to Syphon could be created. Between devices it would be subject to network latency; on-device, I believe it would be limited only by the speed of the network stack running locally on the device itself. This could mean an iPad running two apps in split-screen sending video data from one to the other: I could have a ‘master’ video mixing application, and swap between a variety of companion video synth/visualiser apps alongside it, each providing input to the mix.
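Advertising and discovering outputs is the easy part; Bonjour handles it in a few lines. A minimal sketch using Foundation’s NetService, where the `_govj-video._tcp.` service type is made up purely for illustration (NDI ships its own discovery mechanism):

```swift
import Foundation

// Advertise an available video output on the local network.
// The service type here is hypothetical, purely for illustration.
let service = NetService(domain: "local.",
                         type: "_govj-video._tcp.",
                         name: "GoVJ Output",
                         port: 5960)
service.publish()

// A companion app would browse for the same type...
let browser = NetServiceBrowser()
browser.searchForServices(ofType: "_govj-video._tcp.", inDomain: "local.")
// ...and connect via the delegate's netServiceBrowser(_:didFind:moreComing:)
```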

There would be problems, I’m sure. Encoding and decoding like this will thrash the hardware, and it may not be possible yet on existing iPads. It also wouldn’t achieve the low latency that desktop texture sharing offers, but would it be “good enough”?
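One mitigation for the thrashing concern would be to lean on the dedicated hardware encoder rather than the CPU. Here’s a sketch using the standard VideoToolbox VTCompressionSession flow (nothing NDI-specific), assuming frames arrive as CVPixelBuffers; `pixelBuffer` and `pts` are placeholders for the render pipeline’s output.

```swift
import VideoToolbox

var session: VTCompressionSession?
VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1280,
    height: 720,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: { _, _, status, _, sampleBuffer in
        guard status == noErr, let sampleBuffer = sampleBuffer else { return }
        // Compressed frame ready; hand it off to the network layer here.
        _ = sampleBuffer
    },
    refcon: nil,
    compressionSessionOut: &session)

if let session = session {
    // Ask the encoder to favour throughput over quality for live use.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                         value: kCFBooleanTrue)

    // Submit each rendered frame (pixelBuffer: CVPixelBuffer, pts: CMTime).
    VTCompressionSessionEncodeFrame(session,
                                    imageBuffer: pixelBuffer,
                                    presentationTimeStamp: pts,
                                    duration: .invalid,
                                    frameProperties: nil,
                                    sourceFrameRefcon: nil,
                                    infoFlagsOut: nil)
}
```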

Ultimately the NDI SDK is closed source, and I’m unsure whether relying on it for something like this would be the best choice. On the other hand, some desktop VJ software may support NDI, and this could be a route towards a wider ecosystem for video artists across different hardware.

I plan on exploring this further as time allows over this coming year.