Author Archives: davehodder

As I mentioned last time, three of us from the Focusrite/Novation engineering team attended the Music Hackday event alongside the amazing Sonar festival in Barcelona.  We learned lots, slept very little, and had a great time – here’s a quick summary!


Web APIs > everything else

It seems most of the hackday crowd are very web oriented, and we’d failed to make it easy for them to see how to use a MIDI controller (MIDI? What’s that?  Is there a Node.js wrapper for it?  …well, actually, yes, in fact there are two!) in their projects.  Nonetheless, several of them did a great job.

We did point people towards the Web MIDI API and the essential Jazz Plugin, which was included in a particularly interesting hack – we’ll be exploring this further in the coming months.

Going mental

For our own hack, we decided to aim high and use the incredible Enobio EEG sensor from Starlabs.  The team were very helpful getting us up and running with their OS X SDK, despite having so much else to do and having only presented the Windows version!

We wanted to exploit the phenomenon known as Steady State Visually Evoked Potentials, or SSVEP.  Broadly this means that if someone looks at a visual stimulus flashing at a rate between 3.5Hz and 75Hz, you can observe EEG activity at that frequency, with harmonics.  We instantly thought of synthesizers and decided we’d build something that could sonify the EEG trace to play musical notes – effectively using Ross’s brain as an oscillator clocked by a flashing Launchpad.
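
For the curious, the detection side of this can be sketched in a few lines of Python (this is our illustration, not code from the hack – the Goertzel algorithm is just a cheap way to measure power at one frequency, and the function names are ours):

```python
import math

def goertzel_power(samples, freq, fs):
    """Signal power at a single frequency, via the Goertzel algorithm."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / fs)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def ssvep_response(eeg, stimulus_hz, fs, harmonics=3):
    """Power at the flash rate and its first few harmonics,
    normalised by total signal energy."""
    energy = sum(x * x for x in eeg) or 1.0
    return [goertzel_power(eeg, stimulus_hz * h, fs) / energy
            for h in range(1, harmonics + 1)]
```

In principle, a strong response at the flash rate and its harmonics, relative to the overall signal energy, is the SSVEP signature you would look for.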

Science is hard

After an hour or so with the sensor and some hacked Launchpad firmware generating outrageously nauseating flash patterns, we’d captured a number of test traces from Ross’s brain, but had seen little in the way of harmonic content in the spectrum analyser.  I blame Ross’s brain.


At this point, we should probably have given up and decided to make something different, but instead we thought we’d give the EEG trace a helping hand and filter for the frequencies we were expecting.  Combining this with a heart rate sensor to modulate the level, and pitch-shifting the output by an octave or two, we thought we’d be able to get something that was, while not truly “mind music”, at least in some way meaningfully related to brain activity.
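
That processing chain – band-pass the EEG at the stimulus frequency, then use the heart rate as a level control – can be sketched roughly like this (a minimal Python illustration with a textbook biquad filter; the function names are ours, and the real hack lived in C++ inside an Audio Unit):

```python
import math

def bandpass_coeffs(f0, fs, q):
    """RBJ-cookbook band-pass biquad with 0 dB peak gain at f0."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = (alpha / a0, 0.0, -alpha / a0)
    a = (-2 * math.cos(w0) / a0, (1 - alpha) / a0)
    return b, a

def biquad(samples, b, a):
    """Direct form I filter over a list of samples."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:
        out = b[0] * x + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, x
        y2, y1 = y1, out
        y.append(out)
    return y

def sonify(eeg, heart_envelope, stimulus_hz, fs, q=5.0):
    """Band-pass the EEG at the stimulus frequency,
    then modulate the level with the heart-rate envelope."""
    b, a = bandpass_coeffs(stimulus_hz, fs, q)
    return [s * g for s, g in zip(biquad(eeg, b, a), heart_envelope)]
```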

DSP is hard

Ben set up an Audio Unit plugin and started integrating the Enobio SDK, discovering just how much fun* it is to add a dylib with dependencies to a component in a way that won’t bomb out at runtime.

Ross was having much more success modifying the Launchpad to do variable rate flashing, PWM for intensity and colour patterns.  He spent time helping out some of the other hackers and got some great results (see later on).

At this point I lost the plot and charged off into DSP territory.  Unfortunately I’d forgotten just how bad I am at DSP, and spent a long time troubleshooting the various buzzing, popping and aliasing introduced by my code.  By 2am, we had something that made a noise in response to the heart-rate sample data, and decided to call it a day.

*really not much fun at all.

This brain has left the station

The next morning we felt like a result was just within reach – we’d use the heart rate sensor to modulate the processed (but horribly aliased) output from the EEG, in response to visual stimulus triggered from the Launchpad buttons.  It might just work…

…but it didn’t.  With an hour to go, we had nothing that made anything approaching a useful sound, we were struggling to normalise or beat-detect from the heart rate sensor, and had only one usable development machine after the ongoing runtime dependency nightmares had wasted hours.

We decided not to demo our hack (especially as it would mean monopolising one of the precious few Enobio sensors), and spent the remaining time helping our fellow hackers include Launchpads in their work, which was a lot of fun.


In hindsight, we’d have been much better off attacking the problem using something like Pure Data to get going quickly, wasting less time on dependencies, sample rate conversion and C++.  I sincerely doubt I’ll be writing any DSP code in the near future either.

Hacks and demos

The hacks from the event are published here on Hacker League.  Our favourite Launchpad hacks were:

…and some others that really made us smile:

We really enjoyed the event, meeting lots of inspiring people and learning about cutting-edge technologies.  We’ll be back, and next time we’ll demo!

I’m thrilled to announce that a couple of us from the Novation engineering team are coming to Music Hackday Barcelona at Sónar+D!  We’re bringing along some Launchpad S hardware to hack, and we’ll be on hand to help get the most out of the hardware and software, as well as participating in the event ourselves.

This post will serve as a place for us to share some of the content that we’ll bring on the day, but of course it’s not just for Music Hackday participants.

Two Ways to Play

The Launchpad Programmer’s Reference documents the MIDI messages you can send to the Launchpad to set the LEDs, and the messages that Launchpad will send you.  This allows you to do almost anything with a Launchpad, given source-level access to the software you want to control (or if you’re willing to write an intermediate translation service app).  But sometimes, you might want something a little more immediate!

To do that, what you really need to do is get inside the Launchpad…

Open(ish) Firmware

Yes, that’s right – you can now modify the firmware of your Launchpad S!  To do so, you will need several things:

  • Raisonance Ride7 development environment and RKit-ARM (Windows only)
  • The source code (we’ve exposed the fun parts so you can get straight to the action)
  • The libraries (some bits of Launchpad S are not that interesting, so we’ve packaged them up)
  • A MIDI sysex uploader (try MIDI-OX on Windows or Sysex Librarian on OS X)
  • Some guts!  The bootloader is protected so you’d have to try really really hard to brick one, but I’m sure it’s possible.
  • Optional – an RLink hardware debugger.  We’ll have a couple on hand at Music Hackday Barcelona, they can be really helpful if you’ve got nothing else to help you debug!

We’ve put the firmware on GitHub so you can make use of it.  We decided to release it under the 3-clause BSD license, which is very permissive – mainly because the code is not useful for anything other than a Launchpad S!

Those of you around on the day will be able to ask us questions, but until then we’ll try to provide at least partial documentation – we expect it to improve based on your feedback.  It can’t get any worse, as at the moment, this is it:


Ummmm…. this page.  Yup, that’s it.  We’ll be adding documentation to the source code as it matures, so expect to find some useful comments and example code, particularly in sonar.c where we’ll include commented code for some of the key features.

Building the Firmware

Double clicking on build.bat in the root of the project should spit out a .syx file that you can upload to your Launchpad S.  The process is as follows:

  1. Disconnect the Launchpad S
  2. Hold down the “session”, “user 1”, “user 2” and “mixer” buttons
  3. Connect the hardware while holding them down – the device should enter the bootloader screen
  4. Send your .syx file to the device using a MIDI sysex utility (the device will scroll “Updating …” across the LEDs)
  5. When the pads return to the bootloader screen, press the bottom right green pad to exit to the main firmware

Note that if you modify the .rprj file, you must ensure you leave the “Standard Configuration” selected, as otherwise your build won’t work without the RLink debugger connected.


Having recently tweeted my concerns about FogBugz in the face of Trello’s (very interesting) new Business Class feature, I was very pleasantly surprised to get a response from Michael Pryor pointing out that Fog Creek are hiring a FogBugz developer, and that the future was not “bleak” as I’d feared.

Happy Kiwi

This is great news – we love FogBugz at Focusrite and use it for everything from issue tracking to customer support via project management and documentation (not to mention Kiln) – but sometimes we find we need to do things that it just can’t help us with.  While I am now much more optimistic about the future of FogBugz, I wanted to make sure we could make use of the system in ways the core app does not support.

Ways in

There are several options which Fog Creek have helpfully provided – BugMonkey, Plugins, API and even source hacking! Much as I’d love to spend a few days making a plugin, I thought I’d see what could be achieved with the API and Google Docs in a couple of hours, and I was pleasantly surprised.  With a very simple call to Google Spreadsheet’s importXML function, I could easily slurp in pretty much anything I wanted from our FogBugz install.  The key to this is:

=ImportXML(CONCATENATE("https://example.fogbugz.com/api.asp?cmd=logon&email=",B1,"&password=",B2), "/*")

This gives you a login token (given that cell B1 has your email address, B2 has your password, and you substitute your own FogBugz URL for example.fogbugz.com).  If this formula is in cell B3, you can then use the token in further XML queries.


Given that cell A1 contains the name of the field you’re interested in – for example sFixFor for the milestone name or sProject for the project name – a similar ImportXML call using the token, with the field name as the XPath query, will give you a list of milestone or project names.  Note that the XPath query I used here is a problem: it returns unstructured data in which the milestone output contains dependency information, so I need to modify it to return only “top level” milestones.
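
For anyone who’d rather script this than spreadsheet it, the same queries can be sketched in Python.  To hedge heavily: example.fogbugz.com is a placeholder for your own install and the helper names are ours, but cmd=logon and cmd=listFixFors are documented FogBugz API commands:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

BASE = "https://example.fogbugz.com/api.asp"  # substitute your own FogBugz URL

def logon_url(email, password):
    """URL for the logon command; the response contains a <token> element."""
    return BASE + "?" + urlencode({"cmd": "logon", "email": email, "password": password})

def list_milestones_url(token):
    """URL for the listFixFors (milestones) command, using the logon token."""
    return BASE + "?" + urlencode({"cmd": "listFixFors", "token": token})

def milestone_names(response_xml):
    """Pull every <sFixFor> (milestone name) out of a listFixFors response."""
    return [e.text for e in ET.fromstring(response_xml).iter("sFixFor")]
```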

What now?

My next plan is to build a visual representation of milestones, and ultimately to add case/estimate breakdowns so I can see where we’re under-planned or over committed easily (though the EBS Schedules feature is interesting, reading the graphs is a fine art!).

The power of this solution is that I can easily share this information with my less technical colleagues who know Excel better than I do, and help them to integrate fine-grained information from FogBugz in their planning processes.  I hope they’ll like it!

The one thing that’s badly missing from the API is access to the holiday log – this prevents us from hooking our holiday booking system in, and means we have to maintain two calendars by hand.

Anyway, thanks Fog Creek, please keep making FogBugz better, we love it!

As promised in my previous post, here is some information that may prove useful for those attempting to write USB drivers for Saffire 6USB Mk I (MkII is audio class 2 compliant and should not need a driver on Linux).  It’s not going to be that different for our other USB 1.1 products (Ultranova, VRM Box etc.), so it could be extended for those devices later.

Before we get started, a word of warning – this is a work in progress and quite likely to be error prone, so please bear with us – we’ll try to correct any mistakes or omissions as they are discovered.

Finally, you may want to get access to a USB bus analyser, we find them incredibly useful for development!


Audio is transferred to and from Saffire 6USB on Interface 0, alternate setting 1.  Endpoint 0x01 is the output, transmitting four channels, interleaved in 24-bit little-endian format.  Endpoint 0x82 provides two channels of input in the same format.

Saffire 6USB runs from the USB SOF (start of frame) clock, as such it transfers a predictable number of samples for each 1ms USB frame.

At 48kHz, each packet contains 48 samples per channel.  At 44.1kHz, it’s not possible to transfer an integer number of samples per frame, so instead we transmit / receive nine transfers of 44 samples followed by one of 45 samples (hence transferring 441 samples every 10ms).
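
The packet schedule and sample packing described above can be sketched like so (an illustration, not driver code – the function names are ours):

```python
import struct

def samples_per_packet(rate, frame):
    """Samples per channel in USB frame number `frame` (1 ms frames)."""
    if rate == 48000:
        return 48
    if rate == 44100:
        # nine packets of 44 samples, then one of 45 -> 441 samples every 10 ms
        return 45 if frame % 10 == 9 else 44
    raise ValueError("unsupported sample rate")

def pack_frame(channels):
    """Interleave per-channel lists of signed samples as 24-bit little-endian."""
    out = bytearray()
    for frame_samples in zip(*channels):
        for s in frame_samples:
            out += struct.pack("<i", s)[:3]  # low three bytes of a 32-bit LE int
    return bytes(out)
```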

To read and set the sample rate, we use the same requests as for USB audio class devices – the endpoint sampling frequency control documented in the USB Audio Class 1.0 specification.

After changing sample rates, it is advisable to wait for a few hundred milliseconds before attempting to start transfers again (the device needs to resynchronise its PLL).


MIDI is transferred on Interface 1, transmitting up to 8 bytes of data per packet on endpoint 0x03, and receiving up to 16 bytes on endpoint 0x84.  The format is raw MIDI, not USB class formatted.  The hardware does not process the stream in any way, it just passes it directly to / from the physical ports.
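
To illustrate the raw format against the class-compliant one (the names are ours, and the class-packet sketch handles only complete channel-voice messages):

```python
def raw_midi_packet(events, max_len=8):
    """Saffire 6USB style: just concatenate the raw MIDI bytes
    (up to max_len per USB packet on the output endpoint)."""
    data = b"".join(events)
    assert len(data) <= max_len
    return data

def class_midi_packet(events, cable=0):
    """USB MIDI class style, for comparison: each event is wrapped in a
    4-byte packet whose first byte carries the cable number and CIN."""
    out = bytearray()
    for ev in events:
        cin = ev[0] >> 4  # for channel voice messages the CIN equals the status high nibble
        out.append((cable << 4) | cin)
        out += ev.ljust(3, b"\x00")
    return bytes(out)
```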

No data will be transferred unless the device has been set to configuration 1.


Below is a dump of the device descriptor, in case it proves useful!

    Device Descriptor   
        Descriptor Version Number:   0x0100
        Device Class:   0   (Composite)
        Device Subclass:   0
        Device Protocol:   0
        Device MaxPacketSize:   8
        Device VendorID/ProductID:   0x1235/0x0010   (unknown vendor)
        Device Version Number:   0x0100
        Number of Configurations:   1
        Manufacturer String:   1 "Focusrite Audio Engineering"
        Product String:   2 "Saffire 6USB"
        Serial Number String:   0 (none)
    Configuration Descriptor   
        Length (and contents):   64
            Raw Descriptor (hex)    0000: 09 02 40 00 02 01 00 80  F9 09 04 00 00 00 FF 00  
            Raw Descriptor (hex)    0010: 00 00 09 04 00 01 02 FF  00 00 00 07 05 01 01 4C  
            Raw Descriptor (hex)    0020: 02 01 07 05 82 01 26 01  01 09 04 01 00 02 FF 00  
            Raw Descriptor (hex)    0030: 00 00 07 05 03 03 08 00  01 07 05 84 03 10 00 01  
            Unknown Descriptor   0040: 
        Number of Interfaces:   2
        Configuration Value:   1
        Attributes:   0x80 (bus-powered)
        MaxPower:   498 ma
        Interface #0 - Vendor-specific   
            Alternate Setting   0
            Number of Endpoints   0
            Interface Class:   255   (Vendor-specific)
            Interface Subclass;   0   (Vendor-specific)
            Interface Protocol:   0
        Interface #0 - Vendor-specific (#1)   
            Alternate Setting   1
            Number of Endpoints   2
            Interface Class:   255   (Vendor-specific)
            Interface Subclass;   0   (Vendor-specific)
            Interface Protocol:   0
            Endpoint 0x01 - Isochronous Output   
                Address:   0x01  (OUT)
                Attributes:   0x01  (Isochronous no synchronization data endpoint)
                Max Packet Size:   588
                Polling Interval:   1 ms
            Endpoint 0x82 - Isochronous Input   
                Address:   0x82  (IN)
                Attributes:   0x01  (Isochronous no synchronization data endpoint)
                Max Packet Size:   294
                Polling Interval:   1 ms
        Interface #1 - Vendor-specific   
            Alternate Setting   0
            Number of Endpoints   2
            Interface Class:   255   (Vendor-specific)
            Interface Subclass;   0   (Vendor-specific)
            Interface Protocol:   0
            Endpoint 0x03 - Interrupt Output   
                Address:   0x03  (OUT)
                Attributes:   0x03  (Interrupt no synchronization data endpoint)
                Max Packet Size:   8
                Polling Interval:   1 ms
            Endpoint 0x84 - Interrupt Input   
                Address:   0x84  (IN)
                Attributes:   0x03  (Interrupt no synchronization data endpoint)
                Max Packet Size:   16
                Polling Interval:   1 ms

A number of our users have been asking for help using Saffire 6 USB on Linux.  Before we get to that, I thought it would be useful to clarify the status of our interfaces on Linux, then I’ll post up some information that will be useful for brave driver developers wanting to attack the devices that don’t work.

Please note that this is cobbled together from the back of my head, so it might well be inaccurate – I’ll endeavour to correct and update it as best I can.

Finally, please understand that Focusrite does not officially support Linux.  Although some people are seeing positive results in the comments, and some of our products are “known to work”, your mileage may vary.  Good luck!

USB Audio Interfaces

Could work: Scarlett 2i2, 2i4, 8i6, 18i6, 6i6, 18i8, 18i20, Saffire 6 USB MkII (USB audio class 2.0 compatible), Forte and iTrack Solo.

Note that Forte’s display will not function on Linux as its content is rendered by a daemon running on the host.  I don’t think this should affect its operation as a sound card though.

Could work (but no driver): Saffire 6 USB, Novation nio 2|4

VRM Box will work as an audio device, with two outputs (headphones).  However, the VRM processing will not work, as this is embedded in the kernel mode driver on OS X and Windows and would be a very complex task to port (sorry, we have no plans to open source the VRM algorithm any time soon).

FireWire Audio Interfaces

Saffire Pro 40, Pro 24, Pro 24DSP, Liquid Saffire 56: may work via FFADO drivers.

Saffire Pro 40 (second revision): does not work with FFADO driver. These devices can be identified by serial number – any unit with a serial greater than PF0000100000 will not (currently) work.

Saffire, Saffire Tracker, Pro 26i/o, Pro 10 i/o: full support via FFADO drivers

Novation USB Controllers

Impulse, ReMOTE SL MkII (& ZeRO), ReMOTE SL (& ZeRO), Nocturn Keyboard: should work (USB class-compliant MIDI ports).  Note that Impulse and SL/ZeRO MkII have extra vendor-specific USB endpoints that will not work without a driver, these are for communication with the Automap server application which is not essential for MIDI control.

Launchpad: works (thanks to driver by Will Scott)

Nocturn: could work, though not USB class-compliant so would need a driver.  Could probably be adapted from the Launchpad driver with a trivial change of USB PID (see below).

Launchkey: should work – class compliant.

Launchpad S – class compliant!

Novation Synths

Ultranova: requires driver, not known if one exists.  Automap / plug-in editor interaction is complex (routing logic in the Windows / OS X drivers) but MIDI could work easily enough.

MiniNova: requires driver.  The MiniNova (and UltraNova) librarian and soon to be released editor have a back-door to communicate with the hardware (purely to prevent spam MIDI data clogging up your DAW), which can’t work with the class driver.  As with the Nocturn, UltraNova & Launchpad, the format for USB MIDI transfers is simply raw MIDI data (as opposed to the four-byte packeting of USB MIDI class data).

Xio & X-Station: should work (USB audio class-compliant ports).


USB Vendor / Product IDs

Product VID PID
Scarlett 18i6 1235 0x8000
Scarlett 8i6 1235 0x8002
Scarlett 2i2 1235 0x8006
Scarlett 2i4 1235 0x800A
Scarlett 6i6 1235 0x8012
Scarlett 18i8 1235 0x8014
Scarlett 18i20 1235 0x800C
iTrack Solo 1235 0x800E
Forte 1235 0x8010
Saffire 6USB (USB 2.0 version) 1235 0x8008
Remote 1235 0x4661
XStation (old) 1235 0x0001
Speedio 1235 0x0002
RemoteSL + ZeroSL 1235 0x0003
RemoteLE 1235 0x0004
XIOSynth (old) 1235 0x0005
XStation 1235 0x0006
XIOSynth 1235 0x0007
Remote SL Compact 1235 0x0008
nio 1235 0x0009
Nocturn 1235 0x000A
Remote SL MkII 1235 0x000B
ZeRO MkII 1235 0x000C
Launchpad 1235 0x000E
Saffire 6 USB 1235 0x0010
Ultranova 1235 0x0011
Nocturn Keyboard 1235 0x0012
VRM Box 1235 0x0013
VRM Box Audio Class (2-out) 1235 0x0014
Dicer 1235 0x0015
Ultranova 1235 0x0016
Twitch 1235 0x0018
Impulse 25 1235 0x0019
Impulse 49 1235 0x001A
Impulse 61 1235 0x001B
XIO Emergency OS Download Device 1235 0x1005
Nocturn Keyboard Emergency OS Download Device 1235 0x1012
Impulse bootloader 1235 0x1019
nIO DFU 1235 0x3201
6USB DFU 1235 0x3202
VRM Box DFU 1235 0x3203
Twitch DFU 1235 0x3218

To develop the Impulse firmware, we first wrote a software simulation of the hardware in Cocoa.  Using this, we could rapidly redesign the UI and test out the button layout, and we could also write almost all of the device’s application level firmware before the hardware was even built.

Simulating hardware has a number of advantages

We could turn around design changes very rapidly and distribute the simulator to the team, long before there was a firmware update process for the hardware.

On top of this, we had the full benefit of the Xcode tool chain (static analyser, unit test harness, debugger etc.) applied to the embedded firmware, which helped enormously to improve the quality and stability of the device.

Once the hardware arrived, we would regression test any bugs found against the simulator – this helped to pin down hardware / low level firmware issues quickly, for example if the bug did not appear in the simulator.

This was all very useful, and we are already using this technique for new products – this time on iPads and iPhones for a more accurate user experience (one problem with the Impulse simulator was button combinations – we had to make the “shift” button sticky, which made it confusing to operate).

What should we do with the simulators once the hardware has been released?  We still use them for debugging and testing, and I put a simple synth into the Impulse simulator one hackday… but could they become products in their own right?

What else could we do to make simulators more powerful?

Hello everyone, I’ve made this blog public now, so please refer to the About and Disclaimer pages for some ground rules.

It’s worth saying that this is not the place for support – please contact our wonderful tech support team instead, they will give you a faster and simpler response!

If you’re coming to this blog from outside Focusrite, welcome!  We’re trying something new here, so please bear with us if your comments are not approved.  Hopefully this will be an interesting place for us to talk about the things we do here.

As we build more and more products, we share more and more code and components between them.  As we’re always told, code reuse is a good thing – though as we’re starting to learn, it’s not without its own set of challenges.

Our current setup is based on Subversion externals – shared libraries are pulled in and compiled into derived products.  Shared components, for example USB drivers, are pulled in by release tag and built (which feels like the right way to do it) or simply included as binaries, which is the lazy way.  Some products specify a particular revision of a shared library, while unfortunately most just pull the head revision (this is dangerous, see later).

Our automated build system tracks which projects use which components, and rebuilds all the non-versioned dependencies whenever a component changes (to shake out any build problems).

There are a number of problems we need to address if we want to share and reuse more:

Keeping builds repeatable

Pulling the head revision breaks repeatability of builds, so we must use revisions when specifying an external – though it’s a pain to go through and update them for all dependants when an external changes.  A helper script might be a way of doing this, though it could be risky.
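
As a sketch of what such a helper script might do (this only handles the post-Subversion-1.5 “[-rREV] URL PATH” form of svn:externals, and the names are ours):

```python
import re

def parse_externals(prop_text):
    """Parse svn:externals lines of the form '[-rREV] URL PATH';
    returns a list of (url, path, revision-or-None) tuples."""
    result = []
    for line in prop_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = re.match(r"(?:-r\s*(\d+)\s+)?(\S+)\s+(\S+)$", line)
        if m:
            rev, url, path = m.groups()
            result.append((url, path, int(rev) if rev else None))
    return result

def pin(prop_text, revision):
    """Rewrite every unpinned external to -r<revision>; leave pinned ones alone."""
    out = []
    for url, path, rev in parse_externals(prop_text):
        out.append("-r%d %s %s" % (rev if rev is not None else revision, url, path))
    return "\n".join(out)
```

Run against each project’s svn:externals property, this would pin floating externals to a known revision while leaving deliberately pinned ones untouched.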

Predicting the scope of a change

A simple way to identify all derived products of a given shared module lets you make decisions about where, how or whether to make a change (for example, does renaming a badly named function justify rebuilding six derived products?).

To achieve this, including compiled binaries in projects has got to stop, as the dependency tracking system has no way of identifying them.

A nice visual dependency graph would be good, but a flat list of externals would be OK.  There doesn’t seem to be a third-party tool which can do this, though there are several things which can make graphs from your repository – so I decided to write one, which can be found here (internal only, sorry).

Sharing information about library code

Libraries are less useful if you don’t understand them or know about them.  We have a system which generates documentation from key repositories daily, using doxygen.  Now we just need to get into the habit of including doxygen markup in our source, to make using these libraries as easy as possible.

Additionally, changes to shared libraries should be automatically posted to the review server, and reviewed by the entire team (another way of identifying any consequences of changes).  At the moment, only changes to built projects are automatically posted for review.

Keeping dependants up to date with changes to shared code

This is the big one.  When a shared component changes, all dependants must be updated and tested.  One solution might be to always specify the revisions of externals, but have the build server do two builds – one against the head and one against the specified revisions.  Then, if a problem with building against the head of an external is found, it can be identified without losing the repeatability of specified-revision alpha builds.

All you would need is a notification in the build results or a warning flag on the build summary page (“warning – does not build/pass tests against external rev. #67”).

End-user issues

Where a shared component is actually a separate install, for example the USB drivers, this presents problems for end users as well as development – for example, if they install a new driver from our website, then install an older CD from a box for a different product, they will be confused by the “there’s a later version of this driver installed” message.  How can we resolve this?  I think that’s for another day!

Other people have thought about the same problems, and a theme emerges – it’s hard, and it’s often very domain-specific.  It’s not easy to Google for more, as most articles refer to compile-time dependency tracking or runtime dependencies.