Having recently tweeted my concerns about FogBugz in the face of Trello’s (very interesting) new Business Class feature, I was very pleasantly surprised to get a response from Michael Pryor pointing out that Fog Creek are hiring a FogBugz developer, and that the future was not “bleak” as I’d feared.

Happy Kiwi

This is great news – we love FogBugz at Focusrite and use it for everything from issue tracking to customer support via project management and documentation (not to mention Kiln) – but sometimes we find we need to do things that it just can’t help us with.  While I am now much more optimistic about the future of FogBugz, I wanted to make sure we could make use of the system in ways the core app does not support.

Ways in

There are several options which Fog Creek have helpfully provided – BugMonkey, Plugins, API and even source hacking! Much as I’d love to spend a few days making a plugin, I thought I’d see what could be achieved with the API and Google Docs in a couple of hours, and I was pleasantly surprised.  With a very simple call to Google Spreadsheet’s ImportXML function, I could easily slurp in pretty much anything I wanted from our FogBugz install.  The key to this is:

=ImportXML(CONCATENATE("http://your-domain.com/fogbugz/api.asp?cmd=logon&email=",B1,"&password=",B2), "/*")

This gives you a login token (given that cell B1 has your email address and B2 your password).  If this formula is in cell B3, you can then use the token in an XML query, for example:

=ImportXML(CONCATENATE("http://your-domain.com/fogbugz/api.asp?cmd=listFixFors&includeDeleted=0&token=",$B$3), CONCATENATE("//", A1))

Given that cell A1 contains the name of a milestone field you’re interested in – for example sFixFor for the milestone name or sProject for the project name – this will give you a list of milestone or project names.  Note that the XPath query used here is a problem: because the milestone output contains dependency information, it returns nested entries as well, so I need to modify it to return only “top level” milestones.
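To see what a tighter query looks like, here is a minimal Python sketch of the same idea. The element layout below (a nested fixfor carrying dependency info) is an assumption for illustration, not taken from the FogBugz docs; the point is that an unanchored //sFixFor path picks up nested entries too, while anchoring the path at the root returns only top-level milestones:

```python
import xml.etree.ElementTree as ET

# Hypothetical excerpt of a listFixFors response; the nested <fixfor> under
# <dependencies> stands in for the dependency information mentioned above.
SAMPLE = """<response>
  <fixfors>
    <fixfor>
      <sFixFor>Milestone A</sFixFor>
      <dependencies>
        <fixfor><sFixFor>Milestone B</sFixFor></fixfor>
      </dependencies>
    </fixfor>
    <fixfor><sFixFor>Milestone B</sFixFor></fixfor>
  </fixfors>
</response>"""

root = ET.fromstring(SAMPLE)

# The unanchored query ("//sFixFor") also matches the nested dependency entry:
all_names = [e.text for e in root.iter("sFixFor")]

# Anchoring to direct children returns only the top-level milestones:
top_level = [e.find("sFixFor").text for e in root.findall("fixfors/fixfor")]
print(top_level)  # ['Milestone A', 'Milestone B']
```

In spreadsheet terms, this corresponds to replacing the “//” prefix in the ImportXML XPath with a path anchored at the response root.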

What now?

My next plan is to build a visual representation of milestones, and ultimately to add case/estimate breakdowns so I can easily see where we’re under-planned or over-committed (though the EBS Schedules feature is interesting, reading the graphs is a fine art!).

The power of this solution is that I can easily share this information with my less technical colleagues who know Excel better than I do, and help them to integrate fine-grained information from FogBugz in their planning processes.  I hope they’ll like it!

The one thing that’s badly missing from the API is access to the holiday log – this prevents us from hooking our holiday booking system in, and means we have to maintain two calendars by hand.

Anyway, thanks Fog Creek, please keep making FogBugz better, we love it!

A few colleagues and I were having difficulty accessing our SVN repository via https when working on our Macs at home. Specifically, an SVN checkout operation would hang. A subsequent SVN update operation would complete the checkout, but also hang prior to completion. We had just been living with this for a while but there comes a point when an engineer is annoyed so much that he or she feels compelled to do something about it! In this post I explain how I solved the problem.

Step 1: Enable Debug Logging

The first step of course is to use the web search engine of your choice (WSEOYC) to see if anyone else has encountered the problem. I tried this but to no avail. Another worthy port of call is to check your system logs for relevant information. On the Mac, I have lost count of the number of times I have forgotten to check the Console application for trace that ended up being the key to solving the problem, and it can even provide more useful text to feed back into the WSEOYC!

The next step is to vary the parameters of the problem. I found that I was successfully able to check out a codebase belonging to another company over https. But that didn’t help me fix our problem.

After much usage of the WSEOYC and checking of system logs I got no further. It was only then that I discovered from Dominic Mitchell’s Jabbering Giraffe blog that the command-line svn has a debug logging option for its network requests. Edit the file:

~/.subversion/servers

and you will find a line relating to the ‘neon debug mask’. Never one to do things in half-measures, I promptly updated this to enable all the logging features as follows:

[global]
neon-debug-mask = 511
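For the curious, 511 is simply the lowest nine bits all set: neon’s debug channels are single-bit flags that are ORed together into the mask (the exact meaning and number of channels varies between neon versions, so treat this as illustrative):

```python
# neon's debug channels (socket I/O, HTTP, XML, SSL, ...) are single-bit
# flags; ORing the lowest nine bits together yields the 511 used above.
mask = 0
for n in range(9):
    mask |= 1 << n
print(mask)  # 511
```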

Then when I examined the debug trace from the (now very chatty) SVN checkout operation that was hanging, I noticed the following:

    sess: Closing connection.
^C
    sess: Connection closed.

The hanging was occurring just after “Closing connection” was output to the log file.

Step 2: Identify the Faulty Component

I guessed from the debug trace that there was a problem with the networking side of things. But how could I debug this?

My first search revealed a useful tool called DTrace, an incredibly powerful way to investigate almost any aspect of the way a system is running, using a language called D. MacTech’s article Exploring Leopard with DTrace is a great introduction. Unfortunately it looked like overkill for the problem at hand, so I will have to save that one for a thornier problem another day!

But the networking clue allowed me to find the following very helpful page from Davey Shafik: how-to-fix-svn-apache-ssl-breakage-on-os-x. Although relating to a different problem, the page was incredibly helpful so I definitely owe Davey a beer!

In particular, his problem related to a bug with libneon, a library bundled with OS X that handles HTTP and WebDAV requests. It occurred to me that maybe my problem also related to libneon. So I decided to try his first suggestion, “Upgrade the system libneon (bad idea, as OS X can overwrite it in any update)”. I figured that if OS X updates this library then that might even fix the issue; and if it doesn’t then I could always restore my version of the library after the update.

Step 3: Fix or Replace the Faulty Component

So how to update libneon? I didn’t much fancy compiling it from source myself, but again, Davey’s page introduced me to Homebrew, a package manager for OS X. After ensuring that Xcode or the Command Line Tools for Xcode are installed first, Homebrew can be installed just by running:

ruby <(curl -fsSk https://raw.github.com/mxcl/homebrew/go)

Cute!

After installing Homebrew I simply ran:

brew install neon

and then copied the newly compiled library over the system library:

cp -p /usr/local/Cellar/neon/0.29.6/lib/libneon.27.dylib /usr/lib/

after judiciously backing up the original library! This updated libneon from version 0.29.0 to 0.29.6 (see revision history). Then I made sure that the group, owner and permissions of the new library matched those of the old.
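The backup-and-replace procedure can be expressed as a short Python sketch (paths as in the commands above; on later versions of OS X, System Integrity Protection would block writes to /usr/lib, so treat this as illustrative of the era):

```python
import os
import shutil
import stat

# Paths as used above; adjust the Cellar path to the installed neon version.
NEW_LIB = "/usr/local/Cellar/neon/0.29.6/lib/libneon.27.dylib"
SYS_LIB = "/usr/lib/libneon.27.dylib"

def replace_library(new_lib, sys_lib, backup):
    """Back up sys_lib, overwrite it with new_lib, and preserve ownership."""
    shutil.copy2(sys_lib, backup)    # keep the original, just in case
    st = os.stat(sys_lib)            # note owner/group/mode before overwriting
    shutil.copy2(new_lib, sys_lib)
    # Make the replacement match the original's ownership and permissions.
    os.chown(sys_lib, st.st_uid, st.st_gid)
    os.chmod(sys_lib, stat.S_IMODE(st.st_mode))
```

Restoring the backup is just the same call with the arguments swapped around, which is what made the substitution test in Step 4 cheap to run.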

Step 4: Test the Fix

Et voilà! No more hangs when running svn from the command line. Substituting the old library back caused the issue to recur, so it seems all but certain that the libneon component was the culprit.

The likely cause of the issue is the bug that was fixed in libneon 0.29.3: Change ne_sock_close() to no longer wait for SSL closure alert: fixes possible hang with IIS servers when closing SSL connection.

Now the problem is fixed once and for all which makes me a happy engineer as I have solved a technical problem that was causing us pain. It’s surprising what a little bit of applied effort can do even in the face of an initially completely mysterious problem!

Focusrite are hiring!

In case you didn’t know, Focusrite are hiring.  In fact, we’re almost always hiring, and even if we’re not hiring, we’re probably still open to hiring.  I’ve seen a lot of CVs in the past few years, so I thought I’d offer some tips to help you present yourself in the best light.

That might seem a little odd, but I remember what it was like writing my CV (actually, only dimly as it was about ten years ago!), and having spent some time at the other end of the wire I thought it might help to improve the quality of applications we receive.  The last thing I’d want is for us to miss a great candidate.

Before I get started, please excuse me if the tone is patronising – the best applicants teach us things we don’t know, and the interview process is very much two way – but having seen too many terrible CVs, I can’t help but sound off once in a while.

CVs and covering letters

I do not care about your “Career Purpose” – you really shouldn’t have to state that you aim to be the best at what you do, or to make a difference, or that you’re a great communicator – that should shine through!

I really need to see your other jobs, particularly your achievements there.  I’m much less concerned with keywords (languages, technologies and so on) except where specifically required for the role.  I am also interested in some of the non-technical work you’ve done, so leave some of it in unless it’s ancient history.

I do care about your degree or further education, but probably not as much as you think.  In particular, I don’t necessarily mind if you didn’t go to university, or if you did a non-technical subject.  I’m interested in your thesis, final project etc. but not as much as…

…your hobby projects.  We love these!  Above all else, this is the single most important thing you can do to get our interest.  I came back from holiday to find an odd looking box of electronics on my desk, and I was thrilled – I thought I had been sent an electronic CV – but it was just some junk from our WEEE waste that someone thought we could make use of on one of our company hackdays.  Please, please send us your electronic / software / mechanical hobby projects, however crazy or half baked.  If you’ve got the passion to create things in your spare time, we’d love to see what you can do in a full time job!

I also care about your early education (GCSE, A-Level results, subject choices etc.).  I will not reject a CV based on bad early grades, but seeing a dramatic change of tack / results / subjects is interesting background information that offers insight into who you are.

I do not care about your bronze swimming badge from 1984 (from a genuine CV, really)!

Good spelling and grammar are absolutely vital – if English isn’t your first language, or if you’re dyslexic, ask a friend to review your work.  We’re not grammar freaks, but attention to detail is an essential skill, as is clear writing.

I like covering letters that show you know who we are and are interested in what we do.

I dislike templates, especially if you leave “please insert content here” in the body (again, from a real CV!).

The phone screen

So, you’ve sent us a well written, spam free CV, a coupon for your iOS app and a covering letter describing your home studio, the gear you use, what you love and what you’d improve.  At this point I will be really excited, so I’ll remind myself to calm down and give you a call to check three basic things:

  1. You want the job we’re actually offering.
  2. You are available for work, now or soon.
  3. You have realistic pay expectations.

You’d be amazed at how many people fail at one or all of these three steps – it’s very depressing when an interesting candidate says “well, I’m going to finish my studies next year and then take a year out travelling… is there a part time position in Marketing available for £50k?” – ummm… no, there isn’t.

The technical interview

Assuming that goes well, we have a shared understanding of the role, so it’s time to start the fun part – the technical interviews.  This is where we get to talk about (and actually do) all the interesting stuff we work on all day long.  You also get to interview us, to make sure you like the way we work, that we’re competent, that we can give you the support and freedom you need to deliver your best work.

We like to start this process over Skype, as it’s so much quicker and easier for both parties.  We’ll do some simple starter questions, some code / design review, some more general discussion about products and processes, and we’ll hopefully go off on some interesting tangents about the innermost details of something you’ve been working on.  If we get on, we’ll ask you to visit us at Focusrite HQ for the final stage, where you’ll do some more detailed interviewing, have lunch in our canteen and meet the rest of the team.

I look forward to seeing you soon!


Every quarter we have a “Making Things Day” where each employee is invited to work for one day on something innovative, maybe in a personal area of interest or just something different to what they do every day.

Notwithstanding the member of R&D who misinterpreted the day as “Baking Things Day” and produced their first cake (very tasty I might add, thanks Andy), I thought I would write about a project a couple of us worked on using the Microsoft Kinect.

The Kinect of course is an excellent example of how a natural user interface, or NUI, can be implemented, and as such it makes it very easy to use gestures and body movement to control things. So we decided we would try using it to control sound.

As we could not obtain a Kinect for Windows sensor in time, we used the Kinect Xbox sensor. One key difference is that the Windows sensor supports “Near Mode”, allowing it to be used with objects as close as 40 cm, while the Xbox sensor requires a minimum distance of 80 cm.

We already have an app (called simply Automap and available in the App Store) that allows the iPhone to work as a controlling device for Automap Server. The app allows you to use your iPhone to control any parameter of an Automap client (such as an effect plugin, virtual instrument plugin, DAW mixer, external MIDI device…) simply by configuring appropriate mappings in the Automap Server application (which runs on your PC or Mac).

So we thought the easiest way to get up and running would be to create a Windows application using the Kinect C++ API, adapting it to speak the same protocol as the existing iPhone device so that it could connect to Automap Server without too many modifications.

The Kinect SDK supports a vast array of sensor information, including depth frames (where each pixel in the frame is given RGB values and its distance from the sensor), skeletal tracking, microphone, speech recognition etc. We decided to base our implementation on the skeleton tracking API. The returned skeletal data updates at 30 frames per second, and each frame contains the 3D positions of 20 skeletal vertices (head, shoulder left, elbow left etc.) for each of up to 2 people in the scene. Additionally, up to 4 other people can be tracked, but only in passive mode, where only the position of their centre of mass is reported rather than their full skeletal data.

Fortunately, the Kinect SDK includes a sample application called Skeletal Viewer which tracks the image from the Kinect’s camera and superimposes the interpreted skeleton on top of the frame. We adapted this application, adding the SDKs for Automap and Bonjour (for the network communication), and a console window to output debug information in real-time.

We decided to use the vertical positions of the left hand and right hand as continuous controllers, and the left and right foot positions as toggles. Then we used Automap Server to map these as follows:

  • LH vertical position → cutoff frequency
  • RH vertical position → resonance
  • left foot tap 25 cm to the left → next preset
  • right foot tap 25 cm to the right → toggle reverb on/off
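As an illustration of the continuous controllers, here is a minimal Python sketch of mapping a hand’s vertical position to a 0–127 controller value of the kind Automap would pass on; the working range in metres is an assumption, not a value from the project:

```python
def hand_to_cc(y_metres, y_min=0.5, y_max=2.0):
    """Map a hand's vertical position to a 0-127 continuous controller value.

    The y_min/y_max working range is an assumed calibration, chosen so that
    a hand at waist height maps to 0 and a hand fully raised maps to 127.
    """
    clamped = max(y_min, min(y_max, y_metres))
    return round((clamped - y_min) / (y_max - y_min) * 127)

print(hand_to_cc(0.5))   # 0
print(hand_to_cc(2.0))   # 127
print(hand_to_cc(1.25))  # 64
```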

After playing with this for a bit, we thought it would be nice to add some rhythm, with the ability to start and stop it. So we mapped a hand clap to start/stop the transport, and used it to control playback of a simple loop.

  • hand clap → start/stop transport

A hand-clap event was defined as the distance between the left hand and right hand vertices decreasing below 0.5m, provided this event has not occurred within the last second (to prevent spurious toggling).
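That definition can be sketched in a few lines of Python (the frame-by-frame API shape here is hypothetical – the real implementation worked on Kinect skeletal vertices in C++ – but the threshold and debounce logic is as described above):

```python
import math

CLAP_DISTANCE_M = 0.5  # hands closer than this count as a clap
DEBOUNCE_S = 1.0       # ignore claps within a second of the last one

class ClapDetector:
    def __init__(self):
        self.last_clap_time = None

    def update(self, t, left_hand, right_hand):
        """Feed one skeletal frame; returns True when a clap event fires.

        t is the frame time in seconds; hand positions are (x, y, z) tuples.
        """
        if math.dist(left_hand, right_hand) >= CLAP_DISTANCE_M:
            return False
        if self.last_clap_time is not None and t - self.last_clap_time < DEBOUNCE_S:
            return False  # within the debounce window: suppress spurious toggling
        self.last_clap_time = t
        return True

clap = ClapDetector()
print(clap.update(0.0, (0, 1.2, 2.0), (0.3, 1.2, 2.0)))  # True: hands together
print(clap.update(0.5, (0, 1.2, 2.0), (0.3, 1.2, 2.0)))  # False: within debounce
```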

Check out the video to see what we got up to!

Future directions could include:

  • choice of a particular scale for the cutoff frequency rather than a continuous value
  • enhancement of Automap Server to add a custom UI for the Kinect
  • support for additional gestures

What ideas do you have for how a NUI could be used in next generation music and audio production?

To develop the Impulse firmware, we first wrote a software simulation of the hardware in Cocoa.  Using this, we could rapidly redesign the UI and test out the button layout, and we could also write almost all of the device’s application level firmware before the hardware was even built.

Simulating hardware has a number of advantages

We could turn around design changes very rapidly and distribute the simulator to the team, long before there was a firmware update process for the hardware.

On top of this, we had the full benefit of the Xcode tool chain (static analyser, unit test harness, debugger etc.) applied to the embedded firmware, which helped enormously to improve the quality and stability of the device.

Once the hardware arrived, we would regression test any bugs found against the simulator – this helped to pin down hardware / low level firmware issues quickly, for example if the bug did not appear in the simulator.

This was all very useful, and we are already using this technique for new products – this time on iPads and iPhones for a more accurate user experience  (one problem with the Impulse simulator was button combinations – we had to make the “shift” button sticky, which made it confusing to operate).
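The sticky shift behaviour can be sketched as a tiny state machine – this Python sketch is illustrative only, not the simulator’s actual code:

```python
class StickyShift:
    """Sticky 'shift' for a one-pointer simulator: clicking shift latches it
    until the next button press, standing in for holding two buttons at once."""

    def __init__(self):
        self.latched = False

    def press(self, button):
        if button == "shift":
            self.latched = not self.latched  # toggle the latch
            return None
        shifted = self.latched
        self.latched = False                 # latch releases after one use
        return (button, shifted)

s = StickyShift()
s.press("shift")
print(s.press("play"))  # ('play', True)
print(s.press("play"))  # ('play', False)
```

The confusion the post mentions is visible even in this toy: the second press behaves differently from the first, which is exactly why touch screens (where multiple simultaneous presses are possible) give a more accurate user experience.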

What should we do with the simulators once the hardware has been released?  We still use them for debugging and testing, and I put a simple synth into the Impulse simulator one hackday… but could they become products in their own right?

What else could we do to make simulators more powerful?

Internally, we use Trello boards to indicate the status of products, software components etc. and to show the assignment of responsibilities.  We’ve only been doing it for a few weeks, so there’s bound to be better ways to do it – but the beauty of Trello is that it’s so quick and flexible you can change things in a few clicks and drags.

I’ve created a Trello board specifically for Automap. I think this would be a great way to show the world what’s going on in Automap, without having to expose the whole of FogBugz.

We can add cards and discuss features there, indicate where they are in the grand scheme of things, and crucially allow people to vote and comment on them.

We can create links to forum threads, and even to internal cases etc. if we need to.  Have a look – shall we make it public?

As we build more and more products, we share more and more code and components between them.  As we’re always told, code reuse is a good thing – though as we’re starting to learn, it’s not without its own set of challenges.

Our current setup is based on Subversion externals – shared libraries are pulled in and compiled into derived products.  Shared components, for example USB drivers, are pulled in by release tag and built (which feels like the right way to do it) or simply included as binaries, which is the lazy way.  Some products specify a particular revision of a shared library, while unfortunately most just pull the head revision (this is dangerous, see later).

Our automated build system tracks which projects use which components, and rebuilds all the non-versioned dependencies whenever a component changes (to shake out any build problems).

There are a number of problems we need to address if we want to share and reuse more:

Keeping builds repeatable

Pulling the head revision breaks repeatability of builds, so we must use revisions when specifying an external – though it’s a pain to go through and update them for all dependants when an external changes.  A helper script might be a way of doing this, though it could be risky.
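Such a helper script might look something like this minimal Python sketch, which pins every unpinned entry of an svn:externals property to a given revision (it assumes the newer “-rN URL dir” externals syntax, and the parsing is deliberately naive – a real script would want to handle peg revisions and quoting):

```python
def pin_externals(externals_text, revision):
    """Pin every unpinned svn:externals entry to the given revision."""
    pinned = []
    for line in externals_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            pinned.append(line)              # keep blanks and comments
        elif line.startswith("-r"):
            pinned.append(line)              # already pinned: leave as-is
        else:
            pinned.append(f"-r{revision} {line}")
    return "\n".join(pinned)

print(pin_externals("^/libs/usb-driver usb-driver", 1234))
# -r1234 ^/libs/usb-driver usb-driver
```

The risky part is not the rewriting but deciding which revision to pin to for each dependant, which is why it would need careful review before being let loose on the repository.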

Predicting the scope of a change

A simple way to identify all derived products of a given shared module would let you make decisions about where, how or whether to make a change (for example, does renaming a badly named function justify rebuilding six derived products?).

To achieve this, including compiled binaries in projects has got to stop, as the dependency tracking system has no way of identifying them.

A nice visual dependency graph would be good, but a flat list of externals would be OK.  There doesn’t seem to be a third-party tool which can do this, though there are several things which can make graphs from your repository – so  I decided to write one, which can be found here (internal only, sorry).
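Even a flat list makes the “scope of a change” question easy to answer mechanically. A minimal Python sketch, assuming the dependency data has already been extracted into a product → externals map (the product and library names here are made up):

```python
# Hypothetical dependency data, as the build tracking system might record it:
# product name -> shared components it pulls in via svn:externals.
DEPENDENCIES = {
    "ProductA": ["usb-driver", "dsp-lib"],
    "ProductB": ["usb-driver"],
    "ProductC": ["dsp-lib", "ui-lib"],
}

def dependants_of(component, deps):
    """All products that pull in the given shared component."""
    return sorted(p for p, libs in deps.items() if component in libs)

print(dependants_of("usb-driver", DEPENDENCIES))  # ['ProductA', 'ProductB']
```

Inverting the map like this is exactly the query that fails silently when a product includes a compiled binary instead of an external, which is why that practice has to stop.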

Sharing information about library code

Libraries are less useful if you don’t understand them or know about them.  We have a system which generates documentation from key repositories daily, using doxygen.  Now we just need to get into the habit of including doxygen markup in our source, to make using these libraries as easy as possible.

Additionally, changes to shared libraries should be automatically posted to the review server, and reviewed by the entire team (another way of identifying any consequences of changes).  At the moment, only changes to built projects are automatically posted for review.

Keeping dependants up to date with changes to shared code

This is the big one.  When a shared component changes, all dependants must be updated and tested.  One solution might be to always specify the revisions of externals, but have the build server do two builds – one against the head and one against the specified revisions.  Then, if a problem with building against the head of an external is found, it can be identified without losing the repeatability of specified-revision alpha builds.

All you would need is a notification in the build results or a warning flag on the build summary page (“warning – does not build/pass tests against external rev. #67”).
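The double-build idea can be sketched as follows; build_fn stands in for the real build step, and the warning mirrors the suggestion above:

```python
def check_builds(build_fn, pinned_revs):
    """Build against pinned external revisions and against head.

    build_fn takes a revision map (or None for head of each external) and
    returns True on a successful build - a stand-in for the real build step.
    """
    results = {
        "pinned": build_fn(pinned_revs),  # the repeatable alpha build
        "head": build_fn(None),           # early warning of head breakage
    }
    warnings = []
    if results["pinned"] and not results["head"]:
        warnings.append("warning: does not build against head of externals")
    return results, warnings

# Fake build function for illustration: pinned revisions build, head is broken.
results, warnings = check_builds(lambda revs: revs is not None, {"libfoo": 67})
print(warnings)  # ['warning: does not build against head of externals']
```

The key property is that head breakage only ever produces a warning, so the specified-revision alpha builds stay repeatable.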

End-user issues

Where a shared component is actually a separate install, for example the USB drivers, this presents problems for end users as well as development – for example, if they install a new driver from our website, then install an older CD from a box for a different product, they will be confused by the “there’s a later version of this driver installed” message.  How can we resolve this?  I think that’s for another day!

Other people have thought about the same problems, and a theme emerges – it’s hard, and it’s often very domain-specific.  It’s not easy to Google for more, as most articles refer to compile-time dependency tracking or runtime dependencies.