Further round and further in

Rebuilding a video mixing engine on iOS

A series of hedges that look a bit like a maze
Photo by Henry & Co. / Unsplash

Over the last month I have been working on a rebuild of the roboEngine. This is the code at the heart of GoVJ and HoloVid.

When I built HoloVid earlier this year, I moved the video mixing code from GoVJ into a static library and kept it largely as it was. The static library was entirely Objective-C based, with a lot of OpenGL boilerplate.

My plan was to bring the library and all roboheadz products over to Swift by mid-2017, and I didn't fancy re-programming all that boilerplate code. My newest project is Swift based, and where possible I'm trying not to write new Objective-C code in my projects, to force myself to learn Swift.

I started bringing Swift code into the library alongside the Objective-C. This was a bad move – Swift cannot be used in a static library! Had I thought it through, I would have realised why: Swift does not yet have a stable ABI, so a static library built with one compiler version could break against another. With Swift you should build a dynamic framework instead.

So I was faced with a choice: rebuild the engine sooner than I'd planned, or keep adding more Objective-C to the base. In the end I decided going all in on Swift would suit me better.

Fast-forward a few weeks, and I finally have the entire library working in Swift. This means I can mix multiple layers of video or images with each other in real time, with custom blending modes. I can also add a filter to each layer (essentially a custom shader).
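To give a flavour of what "custom blending modes" means, here is a simplified, CPU-side sketch. The real engine does this per-fragment in OpenGL shaders; the `Pixel` type, the mode names, and `composite` are illustrative placeholders, not the engine's actual API.

```swift
// A pixel with normalised (0...1) RGBA components.
struct Pixel {
    var r: Double, g: Double, b: Double, a: Double
}

// A blend mode is just a function combining a base pixel and a blend pixel.
typealias BlendMode = (Pixel, Pixel) -> Pixel

// Screen blend: lightens by inverting, multiplying, and inverting again.
let screen: BlendMode = { base, blend in
    Pixel(r: 1 - (1 - base.r) * (1 - blend.r),
          g: 1 - (1 - base.g) * (1 - blend.g),
          b: 1 - (1 - base.b) * (1 - blend.b),
          a: max(base.a, blend.a))
}

// Additive blend: sum the channels, clamped to 1.
let additive: BlendMode = { base, blend in
    Pixel(r: min(base.r + blend.r, 1),
          g: min(base.g + blend.g, 1),
          b: min(base.b + blend.b, 1),
          a: max(base.a, blend.a))
}

// Composite a (non-empty) stack of layers bottom-up, each layer
// carrying its own blend mode.
func composite(layers: [(pixel: Pixel, mode: BlendMode)]) -> Pixel {
    return layers.dropFirst().reduce(layers[0].pixel) { acc, layer in
        layer.mode(acc, layer.pixel)
    }
}
```

Treating a blend mode as a plain function value like this is what makes the modes "custom" – new ones can be dropped in without touching the compositing loop.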

I now feel a lot more comfortable with Swift's general syntax and with concepts such as delegation and extensions. One of my favourite aspects of Swift is how it implements custom setters and getters on properties – it feels very neat. Thomas Hanning's post on custom properties explains this well.

The refactoring process has also left the engine itself better laid out and more efficient. Swift's handling of Core Foundation objects and their allocation/deallocation seems to work fine, and my overall memory usage has come right down.
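Concretely, annotated Core Foundation objects returned to Swift are memory-managed automatically – there is no manual `CFRelease` as there would be in the Objective-C version. A minimal sketch with Core Video (the helper function name is my own, not the engine's):

```swift
import CoreVideo

// Create a BGRA pixel buffer. The returned CVPixelBuffer is released
// automatically by Swift when the last reference goes away - no CFRelease.
func makePixelBuffer(width: Int, height: Int) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_32BGRA,
                                     nil,            // default buffer attributes
                                     &buffer)
    return status == kCVReturnSuccess ? buffer : nil
}
```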

I’m now past porting the old code, and have started adding new features. First on the list is an export routine that saves compositions to the camera roll. This will enable an export feature in HoloVid, and provide the backbone for my new app.
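On current SDKs, the usual shape of such an export path is an `AVAssetWriter` that receives rendered frames, followed by a `Photos` change request to save the finished file. This is a hedged sketch of that pattern, not the engine's actual routine – `exportComposition` and the elided frame-appending step are placeholders:

```swift
import AVFoundation
import Photos

// Sketch: write a composition out as H.264 MP4, then save it to the
// camera roll. Frame appending (via AVAssetWriterInputPixelBufferAdaptor)
// is elided, as it depends on how the engine renders.
func exportComposition(to url: URL, size: CGSize) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    // ... append rendered pixel buffers here ...

    input.markAsFinished()
    writer.finishWriting {
        // Save the finished file into the user's photo library.
        PHPhotoLibrary.shared().performChanges({
            _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
        }, completionHandler: nil)
    }
}
```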

Two videos blended

It may not seem like much, but I’m very happy with the results: two videos composited into one, using a luma key to drop the darkest colours (the black background) from the top layer.
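The luma key itself is simple maths. The engine does this in a fragment shader; the same idea in plain Swift, using the Rec. 601 luma weights (a common choice, though I can't confirm it's the exact formula the engine uses), looks like this:

```swift
// Zero the alpha of any pixel whose luma falls below a threshold,
// making the darkest colours (e.g. a black background) transparent.
func lumaKey(r: Double, g: Double, b: Double, a: Double,
             threshold: Double = 0.1) -> (r: Double, g: Double, b: Double, a: Double) {
    // Rec. 601 luma: perceptual brightness from the RGB channels.
    let luma = 0.299 * r + 0.587 * g + 0.114 * b
    return luma < threshold ? (r, g, b, 0) : (r, g, b, a)
}
```

When the top layer is blended after this step, its keyed-out pixels contribute nothing, so the bottom video shows through.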

I’m fully aware I will have to update a load of this code with Swift 3 and future releases.

Given how clean my code-base now feels, though, it’s effort I’m happy to make as and when it becomes necessary.