
Control

Four years ago, I was involved with the design of the original Launchpad. Conceived as a clip launcher for Ableton, we quickly realised that it had many other uses, so we prepared a MIDI reference manual so that programmers and hackers could control it easily.

It was abundantly clear from the start that people would like Launchpad. Its success as a product came from two sources: the power that it imparted to Ableton, and the community of programmers who, eager for an affordable tool like this, quickly put together a plethora of third-party products: sequencers, music warpers, and even emulators for other controllers.

More than three years have passed since we sold our first Launchpad, and it feels good to revisit it. It’s not very often an engineer gets the chance to reappraise an old project. Even less often does one get the chance to do so with an excellent assistant, Ross. He’s done most of the hardware design and implementation work this time around.

A short while ago we manufactured our last Launchpad. From now on, we are making Launchpad S. Compared with the other controllers that have emerged in the meantime, it is fundamentally a straightforward device, but this simplicity is part of its strength.

We set out to maintain 100% compatibility with the original Launchpad, so that Launchpad S would continue to work seamlessly with existing programs. We’ve actually added very few new features, for reasons that will become clear. But from an engineering perspective, it is a substantial redesign, and the result is better in every way. There’s much more to S than a superficial makeover.

Faster!

If I have one regret about Launchpad, it’s our choice of microcontroller – the part that interfaces with the computer and provides all of the device’s functionality. We used exactly the same chip that we employed in Nocturn: an ST7 microcontroller. This enabled us to work very quickly, and to improve our buying power to make the device more economical.

One of the biggest frustrations with this microcontroller is its communication speed. Because the processor is based around a fairly old eight-bit core, we were confined to a low-speed variant of USB 1.1 that limited us to 400 MIDI messages per second. Even when we are clever about it (Ed – kernel mode MIDI drivers are a scary place to be “clever” – getting maximum throughput in the Windows driver was extremely difficult – davehodder), it takes at least 100ms to update the status of all 80 LEDs on Launchpad. This complicated the effective control of Launchpad beyond the Ableton environment, and stopped us from using class-compliant MIDI, but these were seen as acceptable compromises.

Our sleight of hand was to employ double-buffering – a trick borrowed from certain home computers of the 1980s. This enables a programmer to set up every LED in advance and to switch them all instantly with a single command. Our compromise makes life more complicated for the software designer, but it permits flicker-free fast updating. We put in a mode so that this feature could be used to flash LEDs too. A lot of programmers ended up making very creative use of double buffering, and started to do other things that we didn’t anticipate.
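For the curious, the idea looks something like this in C. This is a generic illustration of the technique, not the Launchpad’s actual firmware or MIDI protocol:

```c
#include <stdint.h>

#define NUM_LEDS 80

/* Two complete LED states: one on display, one being prepared. */
static uint8_t buffers[2][NUM_LEDS];
static int displayed = 0;   /* which buffer the LEDs currently show */

/* The programmer writes to the hidden buffer at leisure... */
void set_led(int index, uint8_t colour)
{
    buffers[1 - displayed][index] = colour;
}

/* ...then a single "swap" command flips every LED at the same instant. */
void swap_buffers(void)
{
    displayed = 1 - displayed;
}
```

Swap repeatedly between two differently prepared buffers and you get flashing LEDs for free, which is exactly the flash mode described above.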

Faster communication.  Given the opportunity for a rethink, the first thing we changed was the microcontroller. Times have moved on since 2009, and it’s possible to upgrade from the old 8-bit device to a shiny 32-bit ARM core. Thanks to a price war between semiconductor manufacturers, this costs only a few cents more than the old solution. Extra speed carries many advantages. The new Launchpad’s USB can handle MIDI at around forty times the speed of the old one, rendering double buffering unnecessary, but we’ve left it in for backwards compatibility.

Class-compliant MIDI.  Faster USB not only allows Launchpad S to be updated directly at a civilised speed; it also allows us to make it class compliant. So the new Launchpad doesn’t require a third-party driver (except for multi-client operation under Windows, but that’s a Microsoft thing). As well as being great news for Linux and iOS developers, class compliance also allows us to improve our MIDI parsing. So Launchpad S deals gracefully with badly-formatted MIDI, and will respond to System Exclusive device inquiry messages that the old Launchpad didn’t have the capacity to support.
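Device inquiry, for example, is part of the Universal System Exclusive family defined in the MIDI specification: a host can ask “who are you?” with six bytes. The bytes below are standard MIDI; only the variable name is mine:

```c
#include <stdint.h>

/* Universal Non-realtime SysEx: Identity Request. */
static const uint8_t identity_request[] = {
    0xF0,  /* start of System Exclusive        */
    0x7E,  /* Universal Non-realtime           */
    0x7F,  /* device ID (0x7F = broadcast)     */
    0x06,  /* sub-ID #1: General Information   */
    0x01,  /* sub-ID #2: Identity Request      */
    0xF7   /* end of System Exclusive          */
};

/* A class-compliant device replies with sub-IDs 06 02, followed by its
   manufacturer ID and its family, model and version information.      */
```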

Upgrades.  The firmware in the original Launchpad was not upgradeable, but we actually turned this to our advantage. It was a great motivator for keeping the product simple and spurning featuritis, as well as being a spur to get it right first time. Launchpad S is built on upgradeable-firmware technology that we’ve deployed in many other products, which allows us to respond to demand for product enhancements without stranding our existing users.

A great advantage of being able to change the program space is that we can now add a configuration page. This allows a number of new features. One of the most important is that a user with more than one Launchpad S can now label them to give them different IDs, so that they can enumerate differently and software can determine which is which.

Brighter!

Moore’s Law has a counterpart in the LED world: Haitz’s Law. The LEDs we put into the original Launchpad, which led the industry in 2008, look rather dim in 2013. One of our goals was visibility in full sunlight on bus power alone, and it’s now just about possible to see what Launchpad S is doing under those conditions. We have achieved this in two ways: by sourcing the brightest LEDs available, and by finding smarter ways to squeeze as much light from them as we can.

One thing we didn’t do is add a blue LED element. We ran experiments using them, but there are still a couple of problems. The first is that we can’t quite get the device bright enough for our satisfaction using bus power alone. The annals of engineering are littered with the corpses of products released before their core technology was ready. The time is not yet right for a bus-powered device with eighty LEDs, but give it a year or so and Haitz’s Law will help us out.

The second problem is that we’d need to update the MIDI protocol that Launchpad uses to talk to the outside world. That would turn it into a different product, and would break backwards compatibility with certain software that remaps MIDI traffic to deal with Launchpad.

Colour separation.  Brightness was not our only priority. We’ve been listening to our customers, and one of the more interesting feature requests we have received is to improve the device for colour-blind users. Certain people find it particularly difficult to distinguish between the amber and green states of Launchpad. In Launchpad S, we deliberately selected LEDs with a colour spectrum that spaces the red and the green wavelengths much further apart. The green element is an emerald green, closer to the colour of a traffic light, and the wavelength of the new red element is just a few nanometres longer. Amber is balanced so it looks about the same as it does on the existing Launchpad.

If you do confuse reds and greens, Launchpad S will be a substantial improvement. And if you’re not colour blind, there are now more distinct colours within the colour space that Launchpad offers, whereas before you would have had to be content with three.

Launchpad S vs Launchpad, Full-power mode

Photographs cannot do adequate justice to light-emitting devices, but this will give an idea. A Launchpad S battling against a Launchpad under office lighting, both showing the 16 available colours.

Welcoming careful drivers.  It wasn’t sufficient just to find brighter LEDs: we decided to find a more responsible way of putting current through them. The original Launchpad drove its LEDs directly from the 5V USB supply, and LED current was obtained by tempering this voltage with resistors. As we have only 500mA to use, this meant that about 40% of the power we had at our disposal was converted to heat without ever encountering the LEDs. The new scheme uses a switch-mode regulator to divide the input voltage down to 3.6V. Some of our LEDs are driven directly from this, which saves fitting a few resistors, and makes the system about as efficient as it can be: more like 80%. Although the extra regulator costs a little more, we can use fewer components so the cost implications are not severe. The extra efficiency also gives us more current to push through our new LEDs.
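To put rough numbers on it (the forward voltages here are typical figures chosen for illustration, not our actual bill of materials):

$$\eta_{\text{old}} \approx \frac{V_f}{V_{\text{USB}}} \approx \frac{3.0\,\text{V}}{5.0\,\text{V}} = 60\% \qquad\qquad \eta_{\text{new}} \approx \eta_{\text{reg}} \cdot \frac{V_f}{V_{\text{rail}}} \approx 0.9 \times \frac{3.3\,\text{V}}{3.6\,\text{V}} \approx 83\%$$

Of the 2.5W available (500mA at 5V), that is the difference between roughly 1.5W and roughly 2W actually reaching the LEDs.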

Faster multiplexing.  Launchpad multiplexes LEDs: a technique that makes more efficient use of our circuitry by turning different LEDs on at different times. No more than a quarter of the original Launchpad is illuminated at any instant and, because it takes time to set up the next set of columns, no LED is on for more than about 18% of the time. The use of a switching regulator gives us more of a power budget to spend on driving LEDs, so Launchpad S is multiplexed one third at a time by a faster processor. Now each LED is on for about 29% of the time (the faster processor helping to reduce the overhead), allowing them to be quite a lot brighter.
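In outline, the scan is a fast timer interrupt that rotates through the groups. This is a simplified sketch rather than our production firmware, and the leds_* helpers are hypothetical stand-ins for the real drive circuitry:

```c
#include <stdint.h>

/* Hypothetical board-support routines for the LED drive hardware. */
void leds_all_off(int group);
void leds_load(const uint8_t *states);
void leds_enable(int group);

#define NUM_GROUPS     3    /* Launchpad S lights one third at a time */
#define LEDS_PER_GROUP 27   /* illustrative split of the 80 LEDs      */

static uint8_t frame[NUM_GROUPS][LEDS_PER_GROUP];  /* desired states  */
static int group = 0;

/* Called on every timer tick. The time spent loading the next group is
   the overhead that stops the duty cycle reaching a full 1/3; a faster
   processor shrinks that overhead, which is how each LED gets from
   about 18% to about 29% on-time. */
void led_scan_tick(void)
{
    leds_all_off(group);             /* blank before switching         */
    group = (group + 1) % NUM_GROUPS;
    leds_load(frame[group]);         /* set up the next group's states */
    leds_enable(group);              /* and light it                   */
}
```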

Banishing flicker.  We can configure certain LEDs to be lit dimly. When an LED is set to ‘dim’ on Launchpad, it lights for only one multiplexing pass in every five. This was about as small a duty cycle as we could achieve before we could see the device flickering. Unfortunately, video cameras work faster than the human eye. This accounts for the flickering you can see when the LEDs are set to their ‘dim’ mode while the Launchpad is being filmed. One of my pet annoyances with the original device is that it seldom accounts well for itself in YouTube videos.

Launchpad S is so much faster that we have completely changed the way we do multiplexing. We have also changed the way LEDs are dimmed, giving us 64 steps of dimming rather than Launchpad’s five, and with them improved control over contrast and colour balance. The round-trip frequency, at which the dimmest LEDs flicker, has also increased from 56Hz to 400Hz. This is somewhat faster than a video camera, so dim LEDs no longer flicker when they’re filmed.
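A common way of getting 64 levels, and the general shape of what happens here, is to give every LED a 6-bit brightness and compare it against a counter that advances once per multiplexing pass (a sketch of the technique, not a copy of our firmware):

```c
#include <stdint.h>

#define NUM_LEDS 80
#define LEVELS   64     /* 6-bit dimming */

static uint8_t brightness[NUM_LEDS];   /* 0 = off ... 63 = full     */
static uint8_t pass;                   /* counts 0..63, then wraps  */

/* Called once per complete multiplexing pass: an LED is lit on exactly
   'brightness' passes out of every 64, so the dimmest settings repeat
   at (pass rate / 64). Raise the pass rate and the flicker frequency
   of the dimmest LEDs rises with it. */
void dimming_pass(uint8_t lit[NUM_LEDS])
{
    for (int i = 0; i < NUM_LEDS; i++)
        lit[i] = (brightness[i] > pass);
    pass = (pass + 1) % LEVELS;
}
```

On a scheme like this, a 400Hz round trip implies that the passes themselves run at 64 × 400 = 25,600 per second: the sort of housekeeping only the new processor can afford.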

To show you this, we used a bit of magic firmware to slow the old and new Launchpads down to 0.25% of their proper speed (so half a minute of the video corresponds to about 1/14 of a second at full speed). The video below shows both Launchpads displaying the same LED pattern as the still photographs on this page. It shows quite clearly that two very different techniques are used, at very different speeds. This is why the dim settings on Launchpad flickered on some video cameras, while the ones on Launchpad S just won’t. Apologies for the video quality, but it proves the point!

More productive!

iPad compatibility.  The LEDs are sufficiently bright to enable us to provide a special low-power mode for Launchpad S. This uses a maximum of 80mA instead of 450mA, which allows the device to be powered from an iPad. Surprisingly, it’s actually a little brighter than the original Launchpad, while using less than 20% of its power.

Launchpad S vs Launchpad, Low-power mode

Launchpad S in low-power mode. The one on the left consumes less than 20% of the power of its older brother on the right.

Reapportioning cost.  We have been able to reclaim quite a lot of the expense of using a newer processor and higher-spec LEDs by thinking harder about the way that the printed circuit board is laid out. Launchpad’s circuit board has four layers in total: two layers of printed circuitry are buried internally. The new one uses a few tricks that we’ve devised in the meantime, and we’ve safely reduced it to two layers. This simplifies the manufacturing process and saves quite a lot of cost, so that we now provide a much-improved Launchpad for the same retail price as the original one. We have even rethought the packaging, slimming down the gift box considerably, so the units are smaller, lighter, and easier and cheaper to transport.

You’ll also notice that we’ve removed the silk screen legends from the buttons: they’re all blank now. This reflects the fact that Launchpad’s uses became far more expansive than we had anticipated, and transcended both Ableton and Automap.

For the same cost as Launchpad, you can now get hold of Launchpad S: a better-engineered product that takes advantage of four years of hard thinking and technological advancement. We hope you like what we’ve done.

Once upon a time, in the days when computers were mysterious and new, there was no simple way of making electronic musical instruments communicate with each other. Every manufacturer invented their own methods for co-ordinating the various bits and pieces that they sold. These would usually involve fragile bundles of wires passing analogue control voltages from one device to another. On reaching their intended devices, these voltages were amplified, manually scaled and offset in order to render them useful.

The pre-MIDI EMS VCS3 Cricklewood keyboard and Putney synthesiser in various stages of interconnectedness. In those days, it was considered acceptable to name one’s products after unprepossessing but well-to-do corners of Greater London. (Apologies for the suboptimal photography.)

The brutal-looking connector on the 1960s VCS3 is called a Jones connector. It supplies two power voltages and a ground to the keyboard. Two scaled control signals and an envelope trigger are generated and returned on separate terminals. Putney’s backplane has an array of jack sockets that allow control voltages to enter and leave.

MIDI In

In response to this unhelpful situation, the MIDI Manufacturers Association [MMA], a consortium of mostly American and Japanese companies, agreed on a universal specification for a digital interface. This specification was driven entirely by two needs: to encourage interoperability between musical devices, and to keep cost to a minimum. The MMA settled on an asynchronous serial interface, because this reduced the complexity and cost of interconnection. It was specified to run at 31.25kHz, a number chosen because it is easily reached by dividing 1MHz by a power of two (1MHz ÷ 32 = 31.25kHz). At the time, this choice rendered it incompatible with RS-232 (which can usually provide nothing between 19.2kHz and 38.4kHz), preventing existing computers from transmitting or receiving MIDI without extra hardware. MIDI may have ended up on computers only as an afterthought.

Data was communicated in one direction only over 5-pin DIN connectors, which were ubiquitous in the home audio market, and were therefore about the cheapest multi-pin connectors available. (They were so cheap, in fact, that the MIDI specification wantonly squandered two of the connector’s pins by leaving them unconnected: a move that would not be countenanced today.)

The data that travels on the MIDI interface was elegantly designed to embrace the feature set of contemporary microprocessors. Only 8-bit data was employed and, to save memory, no standard message could exceed three bytes in length. One bit of every byte was reserved to frame the data, giving rise to the 7-bit data limitation that causes annoyance today.

By design, MIDI embraced note gating information, key velocity, and standard controller messages for the pitch bend wheel, sustain pedal, and key aftertouch. A loosely-defined framework of controller messages was also provided so that other data could be conveyed besides this. Provision was made for almost every command to be assigned one of 16 separate channels, intended to allow sixteen different sounds to be controlled independently over the same physical cable.
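To make that concrete, here is the anatomy of the most common message of all. The byte values are standard MIDI 1.0; only the helper function is mine:

```c
#include <stdint.h>

/* Build a three-byte MIDI Note On. The top bit marks the status byte,
   whose low nibble selects one of 16 channels; the two data bytes must
   keep their top bits clear, hence the 7-bit (0..127) limitation.     */
void midi_note_on(uint8_t msg[3], uint8_t channel, uint8_t note,
                  uint8_t velocity)
{
    msg[0] = 0x90 | (channel & 0x0F);   /* status: Note On + channel */
    msg[1] = note & 0x7F;               /* data: key number, 0-127   */
    msg[2] = velocity & 0x7F;           /* data: velocity, 0-127     */
}
```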

The first MIDI devices emerged in 1983. Some unintentionally very amusing episodes of Micro Live demonstrated the technology. The rest is history. Synthpop was essentially democratised by this inexpensive serial protocol. Dizzy with the possibilities of MIDI, musicians ganged their synthesisers together and began controlling them from the same keyboard to layer more and more voices, creating fat digital sounds that were very distinctive and dated very quickly. Artists who did not have the resources to employ professional studios, with all their pre-MIDI equipment, connected their Casio keyboards to home computers, running new software that enabled them to build up note sequences, and then quantise, manipulate, and replay them in a manner that would have been unthinkably expensive by any other means.

MIDI Thru

Here we are, nearly thirty years later. The processing power and capacity of a computer is around two million times as great as anything available for similar money in 1983. As a consequence, keyboard controllers, synthesisers, sequencers, and signal processing tools have advanced considerably. And yet, amidst all this change, MIDI still reigns supreme. As a basic protocol, it is just about fit for purpose. With our ten digits, two feet, and a culturally-motivated lack of interest in breath controllers, most of us are still trying to do the same things that we’ve always done in terms of musicianship. Although devices now produce much richer MIDI data at a faster rate, this is not a problem because MIDI is conveyed over faster physical interfaces (such as USB) so we can still capture it.

Aside from the musical data, MIDI has another weapon that has ensured its survival: it allows manufacturer-specific data transmissions. These System Exclusive messages opened a portal that allows modern devices to play with computers in ways that MIDI’s creators could not have imagined. To System Exclusive messages, we owe patch storage and editing software, remote software upgrades, and next-generation protocol extensions like Automap.

And yet … and yet, the specification shows its age to anybody who wants to do more or to delve deeper. MIDI is inherently a single-direction protocol, and its 7-bit data limitation results in an obsession with the number 128 that is now painfully restrictive: 128 velocity gradations; 128 programs in a bank; 128 positions a controller can take. Certain aspects of MIDI were poorly defined at the beginning, and remain unresolved three decades later.

Q. Middle C is conveyed by MIDI note number 60. Should we display this note to the user as C3 or C4?

A. Just choose one at random and provide the other as a configuration option.


Q. How much data might I expect a System Exclusive message to convey?

A. Oh dear, you went and bought a finite amount of memory. Good luck designing that MIDI merge box / USB interface.


Q. I’ve got a MIDI Thru splitter that is supposed to be powered from the MIDI OUT port of my keyboard. Why doesn’t it work?

A. Your keyboard manufacturer and your Thru box manufacturer have both bent the specification. If they’ve bent it in opposite directions, then your box won’t work as advertised.


Q. If the user doesn’t understand MIDI channels, and is attempting to transmit MIDI data on one, and receive MIDI data on another, what will happen?

A. The device at one or other end of the cable will end up back at the shop.


Q. I’m designing a new keyboard. Should my device support Active Sensing?

A. I don’t know. Should it?


Apart from all that, a lack of per-note control data annoys the creators of more expressive instruments. The standard’s rigid genuflection to Western 12-tone chromaticism is an irksome limitation to some (particularly those who use terms such as ‘Western 12-tone chromaticism’). The note model cannot properly handle single-note pitch effects such as glissandi. And for devices like ours, which must accept or transmit a wide variety of control data, the NRPN system constitutes a fairly unpleasant prospect, loaded with parsing irregularities and a padding-to-payload ratio of 2:1.

In retrospect, dealing with MIDI could have been made somewhat easier. The size of a single MIDI instruction depends on the contents of the first byte in a way that is neither obvious nor easy to derive, and the first byte may not necessarily be repeated in subsequent messages, which leads to a fairly onerous parsing process.
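The skeleton of such a parser looks like this. It is a simplified sketch that ignores System and Real-Time messages, which real parsers (including ours) must also handle:

```c
#include <stdint.h>

static uint8_t status;    /* last status byte seen: "running status" */
static uint8_t data[2];
static int     have;      /* data bytes collected so far             */

/* Message length is implied by the status byte: Program Change (0xCn)
   and Channel Pressure (0xDn) carry one data byte; the other channel
   messages carry two.                                                */
static int bytes_needed(uint8_t s)
{
    uint8_t hi = s & 0xF0;
    return (hi == 0xC0 || hi == 0xD0) ? 1 : 2;
}

/* Feed one incoming byte; returns 1 when a complete message is ready
   in 'status' and 'data'.                                            */
int midi_parse(uint8_t byte)
{
    if (byte & 0x80) {        /* status byte: a new message begins    */
        status = byte;
        have = 0;
        return 0;
    }
    if (status == 0)          /* data byte with no status: discard    */
        return 0;
    data[have++] = byte;      /* data, perhaps under an old status    */
    if (have == bytes_needed(status)) {
        have = 0;             /* keep 'status': running status        */
        return 1;
    }
    return 0;
}
```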

The authors of the USB MIDI specification went to the trouble of re-framing all the data into four-byte packages to simplify parsing. Unfortunately, they left a back door open to transmit an individual data byte where this was deemed essential. When is this essential? When you are deliberately trying to send malformed data that’s useless to the device at the other end. Or, to put it another way, never. The inevitable happened: one company now misframes even valid instructions, using this message capriciously to split up standard data into streams of single bytes. The USB MIDI parser thus becomes more, not less, complex, because it has to be able to support both the new four-byte frames and the old-fashioned variable length ones.
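The framing itself, from the USB MIDI 1.0 class specification, is pleasingly simple:

```c
#include <stdint.h>

/* A USB MIDI event: every message travels in exactly four bytes. The
   first byte packs a virtual cable number with a Code Index Number
   (CIN) announcing the contents: 0x9 for Note On, 0x8 for Note Off,
   0xB for Control Change... and 0xF for the troublesome "single byte"
   escape described above.                                            */
typedef struct {
    uint8_t cable_cin;   /* high nibble: cable; low nibble: CIN       */
    uint8_t midi[3];     /* the MIDI bytes, zero-padded if short      */
} usb_midi_event_t;

/* Middle C, velocity 100, on channel 1 of virtual cable 0: */
static const usb_midi_event_t example = { 0x09, { 0x90, 60, 100 } };
```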

In honesty, it’s only slightly inconvenient. The MIDI parser that we embed into our hardware designs is about 640 bytes long. These are 640 very carefully arranged bytes that took several days and a lot of testing to prove, and all they do is allow a device to accept a music protocol invented in the early 1980s, but it might have been a lot worse. Indeed, it is worse once you start trying to respond to the data. Never mind: if even the pettiest problem stings us, we fix it properly. And if any fool could do MIDI properly, we’d all have to find alternative careers.

MIDI Out

There have been attempts, and occasionally there still are, to supplant MIDI with an all-new data format, but these seem doomed to obscurity and ultimately to failure. About twenty years ago, there was ZIPI; today, it’s nothing more than a Wikipedia page. mLAN attempted to place MIDI into an inexpensive studio network. In spite of very wide industry support, it had few adopters. With hindsight, the futurologists were wrong and the world took a different turn. Latterly, there’s the HD-MIDI specification, and Open Sound Control [OSC], soon to be re-christened Open Media Control. We’ve looked into these. I cannot remember if we are prevented from discussing our draft of the HD-MIDI spec, but we probably are. My one-sentence review therefore contains nothing that isn’t already in the public domain.

HD-MIDI promises to be improved and more versatile, and does so by adding complexity in ways that not everybody will find useful. OSC suffers from a superset of this problem: it’s anarchy, and deliberately so. The owners of the specification have been so eager to avoid imposing constraints upon it that it has become increasingly difficult for hardware to cope with it. The most orthodox interpretation of the specification has the data payload transmitted via UDP somewhere in the middle of a TCP/IP stack. (You think that MIDI’s 7-bit limitation creates too many processing overheads and data bottlenecks? Wait until you try TCP/IP as a host-to-device protocol!)

Networking protocols are fine for computer software, phone apps, and for boutique home-brew products, but they are somewhat impractical for a mass-market music device. Most musicians are not IT specialists. Those whose savoir faire extends only as far as the concept of MIDI channels cannot be expected to prevail in a world of firewalls, MAC addresses, subnet masks, and socket pairing. Ethernet being the mess that it is, there are at least two simpler ways of interfacing with computers by using old serial modem protocols, but most new operating systems have all but given up supporting these and the burden of configuration is, again, upon the user.

More severely, there is an interoperability problem. OSC lacks a defined namespace for even the most common musical exchanges, to the extent that one cannot use it to send Middle C from a sequencer to a synthesiser in a standardised manner. There are many parties interested in commercialising OSC, and a few have succeeded in small ways, but it wouldn’t be possible to stabilise the specification and reach a wide audience without garnering a consortium of renegade manufacturers for a smash-and-grab raid. The ostensible cost of entry to the OSC club is currently far higher than MIDI, too. Producing a zero-configuration self-powered Ethernet device, as opposed to a bus-powered USB MIDI device of equivalent functionality, would price us out of the existing market, exclude us from the existing MIDI ecosystem, and require a great deal more support software, and to what advantage? For OSC to gain universal acceptance, it will need to be hybridised, its rich control data combined with more regular musical events, embedded together in a stream of – you’ve guessed it. If we’re going to go through all that palaver, and more or less re-invent OSC as a workable protocol in our own club, why would we start with its strictures at all? This brings us back to the MMA, and the original reason for its existence. HD-MIDI, at least, has industry consensus. If it is sufficiently more effective than MIDI 1.0, it may yet form part of a complete next-generation protocol.

For all its shortcomings, we musicians and manufacturers cannot abandon MIDI. We have had thirty years to invent a better protocol and we have singularly failed. Some of us have already lost sight of what makes MIDI great, and we must strive to remind ourselves how we can make it better. Meanwhile, the very simplicity, flexibility, and ubiquity of MIDI 1.0 make it certain to be an important protocol for some time to come. With this in mind, I confidently predict that, in 2023, MIDI will still be indispensable, unimpeachable, and utterly, utterly everywhere.

In my previous post, I touched on the problems of attempting to copy an acoustic (or electroacoustic) instrument via a MIDI controller keyboard. The conclusion: there are a lot of challenges. We must have the serenity to accept the things we cannot change, and the bloodymindedness to change, or at least to challenge, the things that we can.

It’s time to put this into action, and consider the controller keyboard in more depth. In this posting, I will focus on the piano for two reasons. Firstly, it’s a case study for most acoustic or electroacoustic keyboard instruments because it shares all of their vagaries. Secondly, it’s the instrument with which most people are most familiar, and for which the greatest amount of repertoire exists.

Generally speaking, a MIDI controller keyboard gets its sensitivity to nuance in a fairly unsophisticated way: we keep to trusted mechanical designs. Thus, the speed of finger impact is still measured in the same way it was forty years ago, by counting the time interval between two switches being closed, and this is the only information we have.

Top left: a key mechanism that we use. Top right: the C key has been removed to reveal the two levers and switch membranes for the neighbouring key. Bottom left: Just the circuit board and membranes from the keyboard. Bottom right: the bare circuit board showing each pair of switch contacts underneath.

A keypress on a piano or keyboard constitutes a movement of about half an inch (call it 12.5mm). The key switches on a European keyboard mechanism that I tested actuate at 4.5mm and 7.5mm down a white note’s travel, so they can indicate the average speed of note travel over 3mm.

Pairs of switches are read at high speed: they have to be. In our higher-end controller keyboards, we scan each set of key contacts at 10kHz so that we can detect inter-contact times to a worst-case accuracy of about 200 microseconds. That’s pretty much the state of the art because, although the technology can go quite a lot faster, there are certain inescapable design problems that prevent anyone from doing so economically. Our older synthesisers are a bit slower than this: nuance is less critical when you’re playing an acid house bassline or a fat string pad. Nevertheless, it turns out that 10kHz is just about enough to convey the dynamic range of speeds that a pianist produces from a semi-weighted keyboard. Although weighted and hammer-action keyboards feel more luxurious, their terminal velocities are considerably lower. Thus they can be scanned at a more leisurely pace, so it’s generally less expensive to read them effectively.
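Stripped to its essentials, the measurement is nothing more than a counter. A sketch follows (a real keyboard scans many keys at once through a switch matrix):

```c
#include <stdint.h>

/* Per-key state for two-contact velocity sensing, advanced at the scan
   rate -- at 10kHz, one tick is 100 microseconds.                     */
typedef struct {
    uint32_t ticks;     /* scan ticks since the top contact closed    */
    int      in_flight; /* are we between the two contacts?           */
} key_state_t;

/* Returns the inter-contact time in ticks when the bottom contact
   closes, or -1 while the key is still travelling.                   */
int32_t key_scan(key_state_t *k, int top_closed, int bottom_closed)
{
    if (top_closed && !k->in_flight) {  /* key has started moving     */
        k->in_flight = 1;
        k->ticks = 0;
    }
    if (k->in_flight) {
        k->ticks++;
        if (bottom_closed) {            /* key has bottomed out       */
            k->in_flight = 0;
            return (int32_t)k->ticks;   /* off to the velocity curve  */
        }
    }
    return -1;
}
```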

We spend a long time designing representative velocity curves that feel right. Here’s one from the semi-weighted Impulse keyboard shown in our curve-designing software (every manufacturer who is serious about their craft grows their own). A colleague laboured over this curve for several hours, using different third-party synthesiser modules to develop and prove it:

The graph shows MIDI velocity values on the Y-axis, and inter-contact timings (‘m’ being short for milliseconds) on the X-axis. To produce a white note of velocity 100 (64h) from this curve requires a 5.5ms interval between the top and bottom key contacts. Black notes have their sensors arranged in the same physical places, but the different key size makes them shorter levers, so it takes a 4ms interval to register a velocity of 100. This subtlety is a pain: the black and white curves are always designed separately and, because it’s a matter of subjective feel, no hard rules can be used to relate them.
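In firmware, a curve like this usually ends up as a lookup table with interpolation between points, one table per key colour. The table values below are invented for illustration, anchored only to the 5.5ms figure above; they are not the Impulse’s actual curve:

```c
#include <stdint.h>

#define CURVE_POINTS 8

/* Inter-contact time (in 100us scan ticks) -> MIDI velocity, for white
   notes. 55 ticks = 5.5ms -> velocity 100, as in the curve above.     */
static const uint16_t ticks_in[CURVE_POINTS] =
    {   5,  10,  20,  40,  55,  80, 160, 320 };
static const uint8_t  vel_out[CURVE_POINTS] =
    { 127, 124, 118, 108, 100,  80,  40,   1 };

uint8_t velocity_from_ticks(uint16_t t)
{
    if (t <= ticks_in[0])
        return vel_out[0];               /* faster still: loudest     */
    for (int i = 1; i < CURVE_POINTS; i++) {
        if (t <= ticks_in[i]) {          /* interpolate between points */
            uint16_t span = ticks_in[i] - ticks_in[i - 1];
            uint16_t off  = t - ticks_in[i - 1];
            int16_t  dv   = (int16_t)vel_out[i] - vel_out[i - 1];
            return (uint8_t)(vel_out[i - 1] + dv * off / span);
        }
    }
    return vel_out[CURVE_POINTS - 1];    /* slower still: quietest    */
}
```

A second, separately hand-tuned table would serve the black notes.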

Advances …

At this stage, things should perhaps get more complicated. As I’ve discussed, real pianos possess a double escapement mechanism, meaning that there are two ways in which the hammer can be made to contact the string: one where the hammer gets a kick throughout the entire travel of the note, and another where the key nudges the hammer more gently over a much shorter distance. The Piano Deconstructed is a terrific resource with some fun animations of all this. The first form of attack is the most difficult to control: that’s why piano teachers tell their pupils that all the expression is to be found right at the bottom of the keys.

The initial speed of travel of a piano key being hit for the first time is more important than its later speed: you cannot decelerate the hammer once it’s been given a good shove. For a fast attack, the hammer would impact the string around the same time as the first key sensor would be triggered on an electronic keyboard. So, to get the timing and velocity more representative of a real instrument, having three key sensors would improve matters. An extra contact would be actuated just as the key is depressed, so an extra velocity curve would be generated at the top of the key. There would be some complicated interaction between the two velocity curves thus derived, involving an immediate response for fast initial attacks, and a simpler comparison of the two velocities for slower attacks.
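Purely speculatively, the blend might look something like this. Nothing here is a real product’s algorithm; the threshold and weights are plucked from the air, and velocity_from_ticks is the illustrative lookup sketched earlier:

```c
#include <stdint.h>

uint8_t velocity_from_ticks(uint16_t t);  /* the earlier curve lookup */

/* Hypothetical three-contact key: t_top is the time across the upper
   pair of sensors, t_bot across the lower pair (both in scan ticks). */
uint8_t three_sensor_velocity(uint16_t t_top, uint16_t t_bot)
{
    uint8_t v_top = velocity_from_ticks(t_top);
    uint8_t v_bot = velocity_from_ticks(t_bot);

    /* A very fast initial attack has already committed the hammer, so
       respond to it immediately; slower presses get a blend weighted
       towards the bottom of the travel, as on a real action.          */
    if (v_top > 110)
        return v_top;
    return (uint8_t)((v_top + 3 * v_bot) / 4);
}
```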

I have never seen this design in practice – not even on some of the fancier Italian key mechanisms we’ve tried. Some of those key mechanisms are so lovely that they make me want to retire, take classes in cabinet making, and learn the complete sonatas of Beethoven, but they’re still based on two-contact systems. However, I learned to play on acoustic pianos. After years of coaching, I now approach the keys gently, and exploit the last fraction of an inch of travel to convey my intentions at the right time. I fear for learners playing exclusively on digital instruments, as they may get a surprise when confronted with a real instrument one day, only to find that they cannot get an even tone from it.

A third sensor would make the key mechanism more expensive to build and harder to scan, the input data harder to process, and the velocity curves and scanning firmware more troublesome to design; it puts us into the region of diminishing returns. My inner piano player finds it a bit of a shame that my inner critic can demolish the idea so readily, but perhaps one day I’ll be in a position to experiment. Although it’s too obvious to patent, it might turn out to be a missing link.

If you’ve ever tried to play a real harpsichord, you’ll know how disorientingly high the action is, and how there’s nothing else quite like it. If a keyboard player wants to emulate an organ, harpsichord or a similar Baroque-era mechanism without velocity sensitivity, it would be far more authentic if the actuation for the note happened when the upper key sensor triggered. And yet, I don’t know of any manufacturer that does this: the sound always triggers at the bottom of key travel. This is presumably because a player does not generally want to adjust his or her style just to try a different sound. Nevertheless, it’d be interesting to know if there’s any commercial demand for sensor settings that allow a player to practise as if playing an authentically old instrument. Does anybody out there need an 18th Century performance mode?

(Update: Apparently Clavia do allow triggering from either the top or bottom contact on their Nord keyboards. It also improves the feel of vintage synth emulations. Even more reason why Novation might be overdue an obligation-free firmware update or two. Many thanks to Matt Robertson for this correction, and for being successful enough to own a Nord.)

… Off the end of a plank

There are a few other key mechanisms about. A delightful company called Infinite Response places a Hall Effect sensor underneath every key, so that their instantaneous positions can be monitored throughout the keypress and release. There’s a mode on their controllers so you can see this happening: as a key travels downward it provides a continuous position readout. It’s beautiful to see, and it must take a lot of fast, parallel processing. Their keyboards are priced commensurately, which is one of many reasons why I don’t own one. The problems with this keyboard are the same as the problems with other novel performance interfaces. Firstly, one’s synthesiser or data processing has to be as sophisticated and rich as the keyboard’s data output to make the investment worthwhile; secondly, one has to relearn musicianship skills that have already taken two decades to bring to a modest level in order to exploit these features. There isn’t enough time to re-learn music unless somebody pays you to do it.

In theory, we could already measure the release speed of the key. We actually collect the appropriate data, and MIDI possesses a standard method whereby it could be conveyed to the synthesiser. And yet, we don’t supply this information: every release is conveyed identically. Why is this? There are three reasons, locked in a circular argument. Firstly, although a slow release sounds a little different from a fast one on a real instrument, musicians tend not to use it as an effect because the ear is far less sensitive to offset details than to onsets. Secondly, as release velocity is not supported by most controller manufacturers, hardly any synthesisers support it. Thirdly, if synthesisers don’t generally support release velocity, how do we design a curve for it?
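For reference, the standard method is simply the third byte of a Note Off, which in practice almost everybody pins to a constant (the message format is standard MIDI 1.0; the helper function is mine):

```c
#include <stdint.h>

/* Note Off carries a velocity byte too: the release speed. Most
   transmitters send a constant 64 here, or use Note On with velocity
   zero instead -- which is the homogeneity described above.           */
void midi_note_off(uint8_t msg[3], uint8_t channel, uint8_t note,
                   uint8_t release)
{
    msg[0] = 0x80 | (channel & 0x0F);   /* status: Note Off + channel */
    msg[1] = note & 0x7F;               /* key number                 */
    msg[2] = release & 0x7F;            /* release velocity           */
}
```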

Epilogue

Now I’ve given a glimpse of why our key mechanisms, and everyone else’s, are only precisely good enough for the job, I shall finish by turning my scattergun towards the next part of the signal chain: the latest piano synthesisers. There are still things I’ve never heard a piano synthesiser do. There are some wonderful keyboard mechanisms out there allied to cutting-edge, silicon-devouring modelling algorithms, but I haven’t yet heard a digital instrument that can seduce me away from the real thing. It’s not just sentimentality. Here’s an example of something that no digital piano can render properly: the last four bars of the piano part of Berg’s Vier Stücke for Clarinet and Piano Op.5.

The italic instructions to the pianist, for those whose German is as ropey as mine, are ‘strike inaudibly’ and ‘so quiet as to be barely heard’. The loud staccato clusters in the left hand set up a sympathetic resonance in the strings of the notes that the right hand is holding down. When the dampers finish their work, what remains is an ethereal, disembodied chord. Acoustic modelling just cannot render this yet. (He was a clever chap, Alban Berg. If there can be any silver lining to his tragic death in 1935, it’s that his works are now out of copyright.)

A digital piano synthesiser cannot reproduce this fragment of Berg because it cannot render anything correctly while the sustain pedal is being held down: there’s just not enough power to compute the resonances of every string interacting with every other. Those synthesisers that claim to model string resonances genuinely do so, but model only those strings that are being played, in mutual isolation. Real pianos aren’t so deterministic. This is why digital pianos still sound a little anaemic.

While we’re on the subject of the sustain pedal, it is an auditory event of its own on any real instrument. However, MIDI treats it as a control change message, so we never hear the warm fizz and the quiet wooden choonk as eighty-eight dampers disengage from their strings. We’re already modelling strings, a soundboard, and hammers, but a bit of mechanical noise and simulated felt adhesion are still too much to ask. Perhaps I haven’t researched this recently enough: it’s not so hard to blend a few samples. There seems to be a bit of an arms race going on in piano synthesiser verisimilitude, so things have probably changed recently. Can I download a Glenn Gould piano model yet, that hums along with the middle voice whenever I attempt to play Bach?

Let’s end positively. One thing I’ve heard some piano models begin to manage at last is the ability to flutter the sustain pedal carefully to mess about with the decay of notes. It’s an effect that has its place when used sparingly. It’s taken twenty years, but there may be hope for these algorithms yet.

Eight hundred years ago, a keyboard was a series of pegs or bars used to control a pipe organ, with each key opening a valve to admit air to a particular group of pipes. It started to take its modern shape in tandem with the development of musical notation. Both had become standards that we would recognise today by the middle of the Renaissance Era, around 1500. With the invention of escapement mechanisms, keys were united with strings, and the first spinets, clavichords and harpsichords appeared. The keyboard as a control interface continued to evolve as the instruments that it served proliferated and matured.

Early keyboard instruments were limited in ways that today’s instruments are not. Only a small class of them could be controlled by changing the speed at which a player’s fingers hit the keys, and these devices provided insufficient power to perform to a concert audience. Louder instruments required a mechanical plectrum to pluck the string from a fixed height, so that any manner of keypress resulted in the same sound.

To alter the tone and character of music, concert instruments started to resemble organ consoles, possessing two or more manuals and a large number of drawbars and stops. Octave doubling, Venetian swell, and auxiliary strings were variously employed to provide some dynamic versatility, but it was not until the invention of the fortepiano around 1720 that a keyboard instrument could combine the subtlety of finger-controlled dynamic range with the power of a concert instrument. Early pianos feel and play like development prototypes: they are quiet, feel insubstantial, and fall out of tune if somebody closes a door too quickly. Fortunately, the Industrial Revolution accelerated their development, mutating the fortepiano into a pianoforte by replacing the wooden frame with steel so that strings could be longer, tighter, louder, and maintain better tuning. This provided the strength and stability to withstand the additional tension of two or more extra octaves of strings, and allowed second and third strings to be fixed to the higher notes to balance them with the power of the lower ones. The newer bass strings were overstrung with the others to make the resulting instrument more compact, and the grand piano took its distinctive, curvy shape. Pedals were added: one to lift the dampers, and one to soften the treble by shifting the hammers so that they could not contact the third strings. For keyboard musicians, however, an equally significant improvement was the invention of the double escapement in the 1820s.

Whereas keyboards had formerly required each key to be released to its starting position to sound again, double escapement allows a player to retreat the key of a sounded note by a few millimetres, and then strike again to repeat it. It permits a greater palette of playing styles, and accommodates figurations that are both quiet and fast. Certain elements of compositions, such as the note or chord tremolandi that are favoured by some modern composers, would be physically impossible without it. The double escapement is fiendishly complicated: it relies on a moving assembly poetically called the wippen. This couples the key to its hammer via a number of levers, moving linkages, and adjustable screws. It is sufficiently complex, and setting it up is such an art, that it would probably not have been invented or popularised had Victorian engineers had access to electronics. They didn’t, and their legacy is the brilliant and complicated key mechanism that accounts for much of the cost and labour of a modern concert piano.

Of course, the story doesn’t end there. Our forebears composed for many types of keyboard instrument, and so do we. The electronic era has bestowed upon us electroacoustic instruments such as the Rhodes and Wurlitzer pianos. These were designed, in the spirit of the clavichord, to be portable pianos. Because the electronics did some of the work, the hammers and dampers could be smaller, which made the escapements simpler and lighter. One such instrument, the Hohner Clavinet, is little more than an amplified clavichord, reminding us that innovation can be as retrospective as it is progressive. Add to this other classic electronic instruments with no finger-controlled dynamics – the Hammond organ, the Mellotron, and a plethora of classic analogue synthesisers – and it is clear that keyboard players can now choose from an incredible legacy of beautiful, but very different, instruments.

Today, our customers want the sound of these instruments without the liability of ownership. None of these instruments is simple to tune or maintain, and their scarcity, fragility and complexity make them expensive. Part of Novation’s business is manufacturing MIDI controllers that allow an inexpensive key mechanism to be joined to synthesisers or samplers to reproduce these sounds. Our slogan, It’s The Feel, is also our mission, and it pays no small tribute to five hundred years of progress.

Selling MIDI controllers with such a statement is bold. We run the risk of being compared against not just the responsiveness of a vintage instrument, but its gestalt. A Rhodes piano isn’t just a sound, and its escapement isn’t just a feel. The joy of a Rhodes is just as much in the semiotics of its faded, off-white keys, the weight and rattle of each note as it’s deployed, and the way that the whole keyboard buzzes under the fingers as it is played. It’s the smell of dust and old solder flux, and the black fabric, rounded thermoplastic, and kitsch chrome detailing of a vintage instrument. And, of course, it’s just as much the smoke-filled photographs of jazz and rock legends of the Sixties and Seventies teasing immortal melodies from its keys. In all their flaws, instruments like the Rhodes are evocative and compelling because they are culture, and they are history. Many musicians refuse to play a sampled facsimile, no matter how indistinguishable it is from the original once their track is laid down, because if the performer doesn’t feel the same, the performance won’t be the same.

So, when we make electronic controllers, we find ourselves perched on the shoulders of giants, but sometimes wishing that they’d let us down for a few minutes so that we can take a walk and see the sights ourselves. Meanwhile, we contrive to stage re-enactments of the playing experience of a few favourite keyboards, and to elicit as much cultural meaning from them as possible. The Feel will never be the real thing, but our customer is buying a MIDI controller, rather than trawling classified adverts to become the custodian of an heirloom. What we provide must be an amalgam of everything they need to do, with a cost of ownership that they can afford and an ease of use and portability that mechanical instruments cannot touch. We remember that our art is forever shifting. Our mission is to make the most versatile and playable keyboard we can, and to discard those frustrations of real instruments that performers often forget.

Having set this stage, my next posting will discuss the design of MIDI controller keyboards, the choices we make about them, and what we can do to make them better.