In my previous post, I touched on the problems of attempting to copy an acoustic (or electroacoustic) instrument via a MIDI controller keyboard. In conclusion, there are a lot of challenges. We must have the serenity to accept the things we cannot change, and the bloodymindedness to change, or at least to challenge, the things that we can.
It’s time to put this into action, and consider the controller keyboard in more depth. In this posting, I will focus on the piano for two reasons. Firstly, it’s a case study for most acoustic or electroacoustic keyboard instruments because it shares all of their vagaries. Secondly, it’s the instrument with which most people are most familiar, and for which the greatest amount of repertoire exists.
Generally speaking, a MIDI controller keyboard gets its sensitivity to nuance in a fairly unsophisticated way: we keep to trusted mechanical designs. Thus, the speed of finger impact is still measured in the same way it was forty years ago, by counting the time interval between two switches being closed, and this is the only information we have.
Top left: a key mechanism that we use. Top right: the C key has been removed to reveal the two levers and switch membranes for the neighbouring key. Bottom left: just the circuit board and membranes from the keyboard. Bottom right: the bare circuit board, showing each pair of switch contacts underneath.
A keypress on a piano or keyboard constitutes a movement of about half an inch (call it 12.5mm). The key switches on a European keyboard mechanism that I tested actuate at 4.5mm and 7.5mm down a white note’s travel, so they can indicate the average speed of note travel over 3mm.
Pairs of switches are read at high speed: they have to be. In our higher-end controller keyboards, we scan each set of key contacts at 10kHz so that we can detect inter-contact times to a worst-case accuracy of about 200 microseconds. That’s pretty much the state of the art because, although the technology can go quite a lot faster, there are certain inescapable design problems that prevent anyone from doing so economically. Our older synthesisers are a bit slower than this: nuance is less critical when you’re playing an acid house bassline or a fat string pad. Nevertheless, it turns out that 10kHz is just about enough to convey the dynamic range of speeds that a pianist produces from a semi-weighted keyboard. Although weighted and hammer-action keyboards feel more luxurious, their terminal velocities are considerably lower. Thus they can be scanned at a more leisurely pace, so it’s generally less expensive to read them effectively.
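To make the timing concrete, here’s a toy sketch (in Python, nothing like real scanning firmware) of how an inter-contact interval falls out of a fixed-rate scan. The 10kHz rate comes from above; the data layout is invented for illustration:

```python
# Illustrative sketch only: measure key velocity by counting 10 kHz scan
# ticks between the upper and lower contact closing.
SCAN_RATE_HZ = 10_000          # one scan every 100 microseconds
TICK_MS = 1000 / SCAN_RATE_HZ  # 0.1 ms per tick

def note_on_interval(scans):
    """Given successive (upper_closed, lower_closed) contact readings,
    return the inter-contact time in milliseconds, or None if the key
    never reached the bottom contact."""
    start = None
    for tick, (upper, lower) in enumerate(scans):
        if upper and start is None:
            start = tick               # key has passed the first sensor
        if lower and start is not None:
            return (tick - start) * TICK_MS
    return None

# A key whose upper contact closes at tick 3 and lower contact at tick 58
# gives an interval of 55 ticks, i.e. 5.5 ms:
scans = [(t >= 3, t >= 58) for t in range(100)]
print(note_on_interval(scans))  # 5.5
```

The 100-microsecond tick is also where the quoted worst-case accuracy comes from: each of the two contact events is quantised to the nearest scan, so the two errors can add up.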
We spend a long time designing representative velocity curves that feel right. Here’s one from the semi-weighted Impulse keyboard shown in our curve-designing software (every manufacturer who is serious about their craft grows their own). A colleague laboured over this curve for several hours, using different third-party synthesiser modules to develop and prove it:
The graph shows MIDI velocity values on the Y-axis, and inter-contact timings (‘m’ being short for milliseconds) on the X-axis. To produce a white note of velocity 100 (64h) from this curve requires a 5.5ms interval between the top and bottom key contacts. Black notes have their sensors arranged in the same physical places, but the different key size makes them shorter levers, so it takes a 4ms interval to register a velocity of 100. This subtlety is a pain: the black and white curves are always designed separately and, because it’s a matter of subjective feel, no hard rules can be used to relate them.
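For illustration, here’s roughly what such a curve looks like as code: a piecewise-linear lookup from inter-contact time to MIDI velocity, with separate tables for white and black notes. Only the two velocity-100 points (5.5ms white, 4ms black) come from the graph above; every other break point below is invented, and a real curve is far more carefully shaped:

```python
# Hypothetical velocity curves. Only the velocity-100 points are from the
# text; the other break points are made up for illustration.
WHITE_CURVE = [(1.0, 127), (5.5, 100), (20.0, 40), (60.0, 1)]
BLACK_CURVE = [(0.7, 127), (4.0, 100), (15.0, 40), (45.0, 1)]

def velocity(interval_ms, curve):
    """Piecewise-linear lookup: a faster press (shorter interval)
    produces a higher MIDI velocity."""
    if interval_ms <= curve[0][0]:
        return curve[0][1]              # faster than the curve covers
    if interval_ms >= curve[-1][0]:
        return curve[-1][1]             # slower than the curve covers
    for (t0, v0), (t1, v1) in zip(curve, curve[1:]):
        if t0 <= interval_ms <= t1:
            frac = (interval_ms - t0) / (t1 - t0)
            return round(v0 + frac * (v1 - v0))

print(velocity(5.5, WHITE_CURVE))  # 100
print(velocity(4.0, BLACK_CURVE))  # 100
```

In practice the tables would be tuned point by point against real synthesisers, which is exactly the several hours of labour described above.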
At this stage, things should perhaps get more complicated. As I’ve discussed, real pianos possess a double escapement mechanism, meaning that there are two ways in which the hammer can be made to contact the string: one where the hammer gets a kick throughout the entire travel of the note, and another where the key nudges the hammer more gently over a much shorter distance. The Piano Deconstructed is a terrific resource with some fun animations of all this. The first form of attack is the most difficult to control: that’s why piano teachers tell their pupils that all the expression is to be found right at the bottom of the keys.
The initial speed of travel of a piano key being hit for the first time is more important than its later speed: you cannot decelerate the hammer once it’s been given a good shove. For a fast attack, the hammer would impact the string around the same time as the first key sensor would be triggered on an electronic keyboard. So, to get the timing and velocity more representative of a real instrument, having three key sensors would improve matters. An extra contact would be actuated just as the key is depressed, so an extra velocity curve would be generated at the top of the key. There would be some complicated interaction between the two velocity curves thus derived, involving an immediate response for fast initial attacks, and a simpler comparison of the two velocities for slower attacks.
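A sketch of that hypothetical three-sensor scheme (this is entirely my speculation: the cutoff, the toy curve and the blending rule are all invented):

```python
# Speculative three-sensor velocity logic, not any real product's design.
def v_from(interval_ms, full_scale_ms=20.0):
    """Toy linear curve: 0 ms maps to 127, full_scale_ms or slower to 1."""
    x = max(0.0, min(1.0, interval_ms / full_scale_ms))
    return round(127 - x * 126)

def three_sensor_velocity(top_ms, bottom_ms, fast_cutoff_ms=2.0):
    """For a fast initial attack, trust the top-of-key speed and respond
    immediately (the hammer is already committed); otherwise blend the
    two measurements taken at the top and bottom of the key's travel."""
    if top_ms <= fast_cutoff_ms:
        return v_from(top_ms)
    return round((v_from(top_ms) + v_from(bottom_ms)) / 2)

print(three_sensor_velocity(10.0, 10.0))  # 64
```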
I have never seen this design in practice – not even on some of the fancier Italian key mechanisms we’ve tried. Some of those key mechanisms are so lovely that they make me want to retire, take classes in cabinet making, and learn the complete sonatas of Beethoven, but they’re still based on two-contact systems. However, I learned to play on acoustic pianos. After years of coaching, I now approach the keys gently, and exploit the last fraction of an inch of travel to convey my intentions at the right time. I fear for learners playing exclusively on digital instruments, as they may get a surprise when confronted with a real instrument one day, only to find that they cannot get an even tone from it.
A third sensor would make the key mechanism more expensive to build, harder to scan, and its input data harder to process; it would render velocity curves and the scanning firmware more troublesome to design; and it would put us into the region of diminishing returns. My inner piano player finds it a bit of a shame that my inner critic can demolish the idea so readily, but perhaps one day I’ll be in a position to experiment. Although it’s too obvious to patent, it might turn out to be a missing link.
If you’ve ever tried to play a real harpsichord, you’ll know how disorientingly high the action is, and how there’s nothing else quite like it. If a keyboard player wants to emulate an organ, harpsichord or a similar Baroque-era mechanism without velocity sensitivity, it would be far more authentic if the actuation for the note happened when the upper key sensor triggered. And yet, I don’t know of any manufacturer that does this: the sound always triggers at the bottom of key travel. This is presumably because a player does not generally want to adjust his or her style just to try a different sound. Nevertheless, it’d be interesting to know if there’s any commercial demand for sensor settings that allow a player to practise as if playing an authentically old instrument. Does anybody out there need an 18th Century performance mode?
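As a hypothetical sketch of such a mode: the only change is which contact closure fires the note, and whether a measured or fixed velocity accompanies it. The mode names and the fixed velocity below are invented:

```python
# Invented "18th Century performance mode": trigger from the upper contact
# with a fixed velocity, as an organ or harpsichord action would feel.
BAROQUE_MODE = "upper"   # note speaks at the top sensor, fixed velocity
NORMAL_MODE = "lower"    # note speaks at the bottom sensor, measured velocity
FIXED_VELOCITY = 100     # arbitrary choice for the non-sensitive mode

def note_event(contact, mode, measured_velocity):
    """Return (should_trigger, velocity) for one contact closure."""
    if mode == BAROQUE_MODE and contact == "upper":
        return (True, FIXED_VELOCITY)
    if mode == NORMAL_MODE and contact == "lower":
        return (True, measured_velocity)
    return (False, None)
```

The firmware cost of this would be close to nil, which makes its absence all the more curious.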
(Update: Apparently Clavia do allow triggering from either the top or bottom contact on their Nord keyboards. It also improves the feel of vintage synth emulations. Even more reason why Novation might be overdue an obligation-free firmware update or two. Many thanks to Matt Robertson for this correction, and for being successful enough to own a Nord.)
… Off the end of a plank
There are a few other key mechanisms about. A delightful company called Infinite Response places a Hall Effect sensor underneath every key, so that their instantaneous positions can be monitored throughout the keypress and release. There’s a mode on their controllers so you can see this happening: as a key travels downward it provides a continuous position readout. It’s beautiful to see, and it must take a lot of fast, parallel processing. Their keyboards are priced commensurately, which is one of many reasons why I don’t own one. The problems with this keyboard are the same as the problems with other novel performance interfaces. Firstly, one’s synthesiser or data processing has to be as sophisticated and rich as the keyboard’s data output to make the investment worthwhile; secondly, one has to relearn musicianship skills that have already taken two decades to bring to a modest level in order to exploit these features. There isn’t enough time to re-learn music unless somebody pays you to do it.
In theory, we could already measure the release speed of the key. We actually collect the appropriate data, and MIDI possesses a standard method whereby this could be conveyed to the synthesiser. And yet, we don’t supply this information: all velocities are conveyed homogeneously. Why is this? There are three reasons, locked in a circular argument. Firstly, although a slow release sounds a little different from a fast one on a real instrument, musicians tend not to use it as an effect because the ear is far less sensitive to offset details than to onsets. Secondly, as release velocity is not supported by most controller manufacturers, hardly any synthesisers support it. Thirdly, if synthesisers don’t generally support release velocity, how do we design a curve for it?
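For the record, the standard method is the second data byte of the MIDI Note Off message, which the specification defines as release velocity. Most equipment instead sends a Note On with velocity zero as its note-off, which discards that byte entirely. A minimal sketch:

```python
# MIDI Note Off carries release velocity: status byte 0x80 plus the
# channel number, then the note number, then the release velocity.
def note_off(channel, note, release_velocity):
    """Build a three-byte MIDI Note Off with an explicit release velocity."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= release_velocity < 128
    return bytes([0x80 | channel, note, release_velocity])

print(note_off(0, 60, 45).hex())  # '803c2d'
```

So the wire format has been sitting there since 1983; it's the circular argument above, not the protocol, that keeps the byte stuck at a default.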
Now I’ve given a glimpse of why our key mechanisms, and everyone else’s, are only precisely good enough for the job, I shall finish by turning my scattergun towards the next part of the signal chain: the latest piano synthesisers. There are still things I’ve never heard a piano synthesiser do. There are some wonderful keyboard mechanisms out there allied to cutting-edge, silicon-devouring modelling algorithms, but I haven’t yet heard a digital instrument that can seduce me away from the real thing. It’s not just sentimentality. Here’s an example of something that no digital piano can render properly: the last four bars of the piano part of Berg’s Vier Stücke for Clarinet and Piano Op.5.
The italic instructions to the pianist, for those whose German is as ropey as mine, are ‘strike inaudibly’ and ‘so quiet as to be barely heard’. The loud staccato clusters in the left hand set up a sympathetic resonance in the strings of the notes that the right hand is holding down. When the dampers finish their work, what remains is an ethereal, disembodied chord. Acoustic modelling just cannot render this yet. (He was a clever chap, Alban Berg. If there can be any silver lining to his tragic death in 1935, it’s that his works are now out of copyright.)
The reason a digital piano synthesiser can’t reproduce this fragment of Berg is that it cannot render anything correctly while the sustain pedal is being held down: there’s just not enough power to compute the resonances of every string interacting with every other. Those synthesisers that claim to model string resonances genuinely do so, but model only the strings that are being played, in mutual isolation. Real pianos aren’t so deterministic. This is why digital pianos still sound a little anaemic.
While we’re on the subject of the sustain pedal, it is an auditory event of its own on any real instrument. However, MIDI treats it as a control change message, so we never hear the warm fizz and the quiet wooden choonk as eighty-eight dampers disengage from their strings. We’re already modelling strings, a soundboard, and hammers, but a bit of mechanical noise and simulated felt adhesion are still too much to ask. Perhaps I haven’t researched this recently enough: it’s not so hard to blend a few samples. There seems to be a bit of an arms race going on in piano synthesiser verisimilitude, so things have probably changed recently. Can I download a Glenn Gould piano model yet, that hums along with the middle voice whenever I attempt to play Bach?
Let’s end positively. One thing I’ve heard some piano models begin to manage at last is the ability to flutter the sustain pedal carefully to mess about with the decay of notes. It’s an effect that has its place when used sparingly. It’s taken twenty years, but there may be hope for these algorithms yet.