Last week, I finished my article on an uncharacteristically cruel cliffhanger. Before I let the results slip, I should qualify them a little. I have already said that we’re giving this ZX Spectrum a head start, firstly by disabling the TV modulator, and then by allowing the test setup to exclude a TV, cassette recorder, and associated cabling, as would normally be required by the rules.

December’s snow handed our test subject a third advantage. Traffic accidents caused by bad weather cost us two hours in the test chamber, and we were able to take only a reduced set of readings. Six complete frequency sweeps were taken, at three angles around the equipment in horizontal and vertical polarisations, with the antenna at a height of 1.5 metres.

This is the first stage of a radiated emissions test to EN55022. The next would require us to focus on the five most problematic-looking frequencies, determine the angle, antenna height, and polarisation for which these have maximum strength, and ensure that the signal is still beneath the pass line. This accounts for the isolated red crosses on last week’s graph. Such a test would take a couple of hours for us to perform completely.

What do we already know about a ZX Spectrum’s RF performance? Anecdotally, there are a couple of causes for concern. The first is that we can actually hear the machine chirping at us when it’s switched on. Here’s a recording of it (hosted off-site), with the machine’s loudspeaker playing a suitably Kamikaze tune before it resets itself. This demonstrates how horrendously loud the chirping is, and also how it modulates during reset.

The chirping is caused by the circuitry that generates the −5V power rail for the RAM chips. It works by switching current through a transformer coil several thousand times per second. Because we can hear this, the switching frequency is too low to cause much hassle at radio frequencies. However, it implies that corners have been cut. The transformer has been wound fairly crudely and hasn’t been potted to prevent acoustic noise, and the control circuit changes its switching frequency to compensate for the fluctuating load when the RAM starts working. In fact, the whole system is likely to be amplifying and feeding back its own mechanical vibrations, generating extra nastiness. No expense spent.

A bigger concern is that, when we look at the screen, we can clearly see the chip’s clock breaking through, manifesting as vertical stripes in the background of the image. This possibly occurs in the ULA chip whose purpose is to glue the screen, memory, cassette sockets, and processor together. It’s producing a fluctuation of a few tens of millivolts on the video signal, and this doesn’t look promising for signal integrity.

Stripy Spectrum

The circuit board has two layers, and ‘plated through’ holes that selectively join them together – an unusually exacting specification in the early Eighties, although no board this complex would be made with only two layers today. There is no internal copper plane for the power or ground connections as there would be now, so signal current must flow and return along tracks arranged in wide loops. Large loops are bad from a radiation point of view: the bigger the loop, the better the antenna it makes, and loops constitute resonant systems.

It doesn’t seem great so far. As any engineer who knows this computer would suspect, it doesn’t measure very well either:

ZX Spectrum EMC

Final readings could easily be 10dB or more above the peaks on this graph, so it’s not just a failure; it’s an abject one. Even that fuzz just above 100MHz might prove problematic when we connect more cables and look more closely.

One of the two most problematic frequencies, 42MHz, we could have predicted: it is the third harmonic of the main crystal. The fundamental frequency of 14MHz finds its way into all sorts of places: it’s divided by four to generate the microprocessor’s master clock (3.5MHz); it paints pixels onto the television (14MHz / 625 PAL lines / 25Hz frame rate = 896 clock cycles per line); and it couples, one way or another, into large parts of the circuitry.

The 48.8MHz peak is attributable to a different problem. When the emissions were being measured, we observed that quite a lot of the radiation was vertically polarised. Because PCB tracks and components are horizontal, electromagnetic radiation that originates from the circuit board has hardly any vertical component. Vertically-polarised energy is an indicator that the power supply cable is radiating. The Spectrum’s cable is exactly 1.5m long, which causes it to resonate fairly effectively at about (0.95 × c)/(1.5 × 4) = 48MHz. This 48.8MHz peak is an upper harmonic of the PAL colour clock (4.4336MHz × 11) that just happens to hit the resonant frequency of the power cable. If the cable were a different length, it would just pick out a different harmonic, moving the spike upwards or downwards.
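We can sanity-check that arithmetic in a few lines of C. This is a back-of-envelope sketch, not an EMC tool: the 0.95 velocity factor is the same assumption as above.

#include <stdio.h>

int main(void)
{
    const double c   = 299792458.0;    /* speed of light, m/s */
    const double vf  = 0.95;           /* assumed cable velocity factor */
    const double len = 1.5;            /* power cable length, metres */
    const double fsc = 4.4336e6;       /* PAL colour subcarrier, Hz */

    /* Quarter-wave resonance: the cable behaves like a monopole antenna. */
    double f_res = (vf * c) / (4.0 * len);
    printf("Cable resonates near %.1f MHz\n", f_res / 1e6);

    /* The harmonic of the colour clock nearest that resonance. */
    int n = (int)(f_res / fsc + 0.5);
    printf("Nearest harmonic: %d x 4.4336 MHz = %.1f MHz\n", n, n * fsc / 1e6);
    return 0;
}

Run it and it reports a resonance near 47.5MHz, and picks out the eleventh harmonic at 48.8MHz: exactly the spike on the graph.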

A: Invalid argument

I can’t simply slaughter a piece of computing history on a rotating altar and walk away. Let us perform a proper autopsy. How and why does the Spectrum fail, and what might we do today to improve its RF performance?

It’s a hard question to answer, firstly because I don’t want to test my computer destructively to find out; secondly because we just wouldn’t design a computer in the same way today. In 1982, Sinclair Research worked miracles in compressing the functionality of a home computer into just four main chips plus RAM.

As we have seen, though, the first thing we would have to change is the rather fishy power supply and cable.

ZX Spectrum PSU

As with almost every computer power supply today, its replacement would have to have a ferrite, and both conductors would run through a hollow shield to reduce radiation. Sinclair power supplies are moulded shut and I don’t want to destroy mine, but it is likely that this single step would provide a good deal of radio attenuation: perhaps ten or twenty decibels.

Where the DC jack enters the computer, we would add a common-mode inductor and some extra inductors and capacitors to clean up the power lines. We would need similar components to protect the TV and cassette cables. However, the data lines between chips probably don’t need slew-rate limiting: presciently, the ULA included the facility for building suitable resistances into the chip itself. Given the high board density, even adding these few inductors and capacitors to fix individual radiation problems would entail a non-trivial redesign and there would be no guarantee of success. There is also the problem of cost: these components would probably add a pound to the material cost of the computer. By the time the customer saw it, though, the manufacturer, distributor and store would have added their margins, resulting in a retail price increase of three or four pounds. It doesn’t sound like much, but it would have made Sinclair’s eyes water: he sold millions of these machines on the strength of low retail prices and high margins.

E: Out of DATA

As an epilogue, I shall consider what we might do if we were given a cleaner slate. Firstly, what would happen if we were asked to make a Sinclair-compatible clone that could legally be sold? Those RAM chips are now museum pieces and change hands for a few pounds each. The custom ULA, manufactured by Ferranti, is technically impossible to replace because Ferranti went spectacularly bust in the 1990s. Only the Z80 microprocessor and mask ROM remain obtainable in recognisable incarnations, but economies of scale nowadays render that kind of technology abysmal value for money.

speccyinside

The cheapest and safest solution would actually be to emulate the Z80 processor in software running on a more modern chip. Inside this chip, costing maybe £2.00 or less, would be encapsulated all the ROM and RAM that the Spectrum needs to function. The parts of the ULA that drive the screen and cassette would be designed around special input and output functions that are provided by these chips, although the video generation would still require an extra device of its own.
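To give a flavour of what software emulation involves, here is a minimal sketch of the core idea in C. The names are mine and purely illustrative; a real emulator also needs flags, interrupts, I/O ports, and cycle-accurate timing to keep games playable.

#include <stdint.h>

typedef struct {
    uint16_t pc;        /* program counter */
    uint8_t  a;         /* accumulator */
    /* ... the rest of the Z80 register set ... */
} z80_t;

static uint8_t mem[65536];  /* the Spectrum's entire 64K address space */

static void z80_step(z80_t *cpu)
{
    uint8_t opcode = mem[cpu->pc++];    /* fetch */
    switch (opcode) {                   /* decode and execute */
    case 0x00:            /* NOP */                   break;
    case 0x3C: cpu->a++;  /* INC A, flags omitted */  break;
    /* ... a few hundred more cases ... */
    }
}

int main(void)
{
    z80_t cpu = { 0 };
    mem[0] = 0x3C;          /* plant an INC A at address zero */
    z80_step(&cpu);
    return cpu.a;           /* exits with status 1 */
}

A modern microcontroller runs this loop so much faster than 3.5MHz that the emulated machine has to be deliberately slowed down to feel authentic.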

Meanwhile, the big heatsink and 7805 regulator (surrounding the loudspeaker in the bottom right-hand corner of the photograph above) would be replaced by a modern switch-mode regulator that dissipates barely any heat at all, and all that sheet metal would go. We wouldn’t need the −5V supply any more, and we wouldn’t fit a TV modulator: modern TVs cannot demodulate an analogue signal, and VGA or component video would suffice.

Using surface-mount technology, the circuit board would be shrunk to about a fifth of its existing area, so it would resemble a narrow strip along the back that holds the rear sockets, extra components for EMC, headers for the keyboard membrane, and the familiar edge connector which, if we were expecting our device to last for another thirty years, we might consider gold-plating.

The new design would require a four-layer board for electromagnetic reasons, increasing the cost slightly. If we were making enough of these devices, we’d be able to sell the whole thing for about fifty pounds assembled, with a printed manual – less than half of the actual cost of a Spectrum in 1982 without allowing for inflation, and about the same price as a reasonable second-hand specimen today. We might still make a modest profit.

But I’m considering emulation, and emulation is cheating. Nevertheless, it is the cheapest and quickest way to build a Spectrum today. There are other ways that purists would appreciate more – see, for example, the labour of love that is the Harlequin Project – but these require more complex chips and hence more cost. We may, as an ironic twist, use an ARM chip: these are cheap, powerful, and started life in the offices of Sinclair’s arch rivals at the time, Acorn Computers.

One troublesome question would remain: we’ve built a compatible, but what could we do with all the unused power? With the ROM now held in electronically erasable memory, we could offer operating system upgrades. We could turn the clock speed up and increase the memory practically for free, so why shouldn’t we? We might no longer limit ourselves to emulating a particular 8-bit processor, or to emulating a foreign processor at all. This would allow us to use greedier screen modes, and so provide enhanced graphics. We could change the ROM to allow ourselves to program in a more modern language, attach an SD interface to load and save games quickly and reliably, improve the sound capabilities and the tape modulation scheme, add serial ports and MIDI and USB, even imitate other computers. And so on.

The end of this train of thought always matches the beginning. We’re engineers. We love ideas, and we love experimenting with computers. Were we to reissue the Spectrum today with our vastly improved resources and a commensurately pioneering spirit, we’d create a computer that is nothing like the Spectrum. What we ended up with would look more like a Raspberry Pi, which was conceived with a similar spirit and similar restrictions, and followed a fairly similar path. If we insisted on affixing a keyboard and screen, it would resemble a small laptop.

More likely, though, if we wanted to be the true successors to Sinclair’s aesthetics and vision, our new device would resemble a smartphone. These shrink-wrapped products represent the leading edge of power, versatility, convenience, and style. Supported by a burgeoning industry of home-grown applications, phones rather than personal computers now excite and engage young people. Some just use them to consume, but others have discovered that phone applications can be written using free software and then distributed for free to the world. This has started to democratise computing again in the same way that Sinclair did thirty years ago.

Sir Clive now states in interviews that he doesn’t own a mobile phone. He might wrinkle his nose at the dominance of the Asian hardware consortia and giant American software companies that so swiftly ate his lunch in the Eighties. He might fear the collective infantilisation and lack of peace of a society in which everybody is glued to a telephone. But he must, surely, approve of the renaissance of home-made software.


Why can’t you really use a mobile phone in a petrol station forecourt? Why do you have to turn off digital cameras and MP3 players during take-off? The reason is that every piece of electronic equipment you will ever own has a secret life as a radio transmitter.

I’ll start explaining what happens using sound waves instead of electromagnetic ones, as they’re slower, less mysterious, and you can hear them. An organ pipe has a natural tendency to resonate. If you place a source of turbulent air at one end, a pressure wave will travel to the other end. Some of it will leave the tube, but some will be reflected back: this back-propagation is a kind of shock-front caused by the travelling wave suddenly emerging into the open air. That reflection is reflected again when it reaches the first end of the pipe, and so on: a pressure wave can travel up and down a pipe for some considerable time without dissipating. The resulting periodic pressure also influences the source of air, to the extent that a really good standing wave is set up. As long as the air flow keeps it energised, it is quite stable.

So, organ pipes need three things: a source of energy, a long, thin shape to guide the sound wave along, and an end that’s open to the world to allow the sound to radiate and the shock-front to propagate back down the pipe. Electromagnetic waves in wires share many similarities to acoustic waves in the way that they propagate, and it turns out that these similarities include the propensity for resonance.

So, what does an FM radio antenna look like? It’s generally a piece of wire of a carefully-selected length, with a high-frequency signal being driven into one end, and the other end left unconnected. A signal travels up the piece of wire, and some of it travels back down the wire as a reflection. The fact that the wire is tuned by choosing its length to match the carrier frequency helps the system to resonate, and electromagnetic radiation propagates into the air.

The Hannington transmitter in Hampshire: if you squint, it’s an organ pipe.

Yes, this is a simplification: many antennae look like this; those that don’t are designed so that they focus their radiated energy, or their coverage, in a particular direction. If we’re going to call Hannington an organ pipe, we can call the microwave mushrooms that formerly decorated the BT Tower flugelhorns. The principle is the same.

Anyway, it turns out that a resonant system is created on a printed circuit board whenever we connect two chips that exchange information: one of them produces a high-frequency signal with nice sharp transitions. It travels down a copper track with all its harmonics stretching gloriously to infinity, and into the input pin of the device that’s reading it. Usually, the input pin is designed with efficiency in mind, so that it doesn’t use much current to read the signal (a certain class of engineers will now be uncontrollably murmuring ‘high input impedance’ at the screen). The far end of the track therefore appears to be very similar to a piece of unconnected wire. The signal and its harmonics travel back down the wire and, even if the length of the track isn’t quite matched, we are suddenly transmitting a little bit of radio. This process is reciprocal, so a little current will flow in the wire in response to strong radio waves. In fact, it’s more complicated than that, because the power connection that supplies both chips behaves in a similar manner.
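To put rough numbers on it: a signal travels along a track on FR4 board at about half the speed of light, so the quarter-wave resonances of ordinary track lengths sit right where a fast clock’s harmonics live. A quick sketch, using an assumed velocity factor of 0.5:

#include <stdio.h>

int main(void)
{
    const double c  = 299792458.0;  /* speed of light, m/s */
    const double vf = 0.5;          /* assumed velocity factor on FR4 */

    /* Quarter-wave resonant frequency of a few typical track lengths. */
    for (int mm = 50; mm <= 200; mm += 50) {
        double len = mm / 1000.0;
        printf("%3d mm track resonates near %.0f MHz\n",
               mm, (vf * c) / (4.0 * len) / 1e6);
    }
    return 0;
}

A 100mm track comes out near 375MHz: a frequency that the upper harmonics of even a modest clock reach with ease.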

Why does any of this matter? Well, ordinarily it wouldn’t, until you happen across a piece of equipment that radiates a bit of energy at a certain frequency, and try and make it work at the same time as another piece of equipment that is susceptible to interference at that frequency. Suddenly you have a fault that stops them playing nicely together for mysterious, magical reasons. Neither manufacturer accepts culpability because it’s not entirely their fault and, if you’re really unlucky, the fault kills people.

Of course, we now live in a world full of radio energy, and we rely increasingly on electronic devices for our safety and security, so we are rapidly accumulating anecdotes of near-lethal situations in which electronics fail. A friend of mine managed to crash his heart pacemaker during a compliance test, and it was broken for weeks. His hospital found out during a routine check, discovering that its system clock had stopped at the time he was wandering about inside an EMC test chamber. If it doesn’t kill you, it might just cost you money. That’s why customers are required to turn off mobile phones at the petrol pumps, just in case they mess up the payment system.

So, why does any of this matter to a manufacturer of audio toys? The simple answer is legislation.

That’s the sound of the police

Since the mid-1990s, various laws have ensured that designers and manufacturers need to take electromagnetic radiation seriously. We must certify our own equipment as complying with these regulations in order to sell it, and have to produce documentation on demand to show that we have made the effort. If our products don’t comply, we will be fined. Sometimes we do the compliance testing ourselves; sometimes, when we expect compliance issues to be straightforward, our manufacturer does it for us.

CE/FCC
As well as electromagnetic tests, there are tests to see if equipment can withstand static shocks without crashing or being destroyed, to check that mains-powered equipment can withstand power surges and drop-outs, and to ensure that harmful radiation isn’t propagated down interconnecting cables into neighbouring equipment. This is all very reassuring for the consumer but, interestingly, we often find good, defensive radiofrequency design practice is at odds with optimal audio circuit design: particularly when so much of the audio world is still using those horrible phono cables to transmit its most sensitive signals. This leads us to invent a host of novel circuits that serve both masters, and to a ceaseless and obsessive evaluation of our best practice.

The details of what we must do are not the most exciting subject for a non-specialist, but they have shaped the world around us in a few ways. One thing you’ll have noticed is that more and more pieces of equipment tend to come with separate power supplies, rather than accepting an input directly from the mains as they used to. This allows the designers to buy in a power supply from a third party that has already been tested, and spares them from testing their own against the volume of extra legislation that covers mains-powered equipment.

The second thing is those ubiquitous lumps on computer power supply cables and USB leads.
USB with ferrite

This is a ferrite. It’s just a doughnut of material containing mostly iron powder and resin, which magnetises slightly in response to current flowing through the cable and, in magnetising, generates its own field in opposition. That’s its only function. In doing this, it suppresses nasty electromagnetic problems below about 80MHz surprisingly well. Nobody likes them, but we put them on the cable because the CE marking criteria are very strict, and most equipment will fail by a hair’s breadth without them.

Radio silence

To give a quick indication of what we need to do, here is a picture of a bit of one of our products, with parts marked that we use only to make it pass electromagnetic legislation. If we designed the product without them, everything would work in the same way, but the unit wouldn’t get certification.

Product X

  1. A multi-layer circuit board. Most circuit boards these days have internal layers, and this is often just for electromagnetic compatibility. Multi-layer boards are built up from sandwiches of thin boards glued together, and this makes them more expensive than standard one- or two-layer boards. The internal layers sit only a fraction of a millimetre from the outer ones, and the proximity helps to dampen any resonance: a bit like lining the organ pipe with foam.
  2. Series resistor packs also quell standing waves, by absorbing a little reflected energy.
  3. Gigahertz capacitors and chip ferrites selectively admit certain high-frequency signals, while increasingly absorbing their troublesome harmonics.
  4. Common mode inductors sort out particular classes of problems on power and data lines from the USB and power socket by dispersing unwanted high-frequency energy.

This is an atypically complicated product, which is why I chose it. There’s a DSP on there, some fast RAM for audio, a microcontroller, USB, and quite a few wireforms that join up to other circuit boards using long runs of cabling. That is quite a lot of systems exchanging information quickly, and some signal paths are fairly long. Even with care, it is hard to execute such a system correctly because if something resonates, a signal somewhere will find it, couple into it, and use it as a transmitter.

The first time we took measurements, we found a couple of problems – we expected them. However, after two days and two revisions of circuit board, we could spin the device on a turntable, point a steerable antenna at it, and watch it sail under the pass line provided by EN55022 class B with a couple of decibels to spare.

Product X emissions test

We’d normally expect a far more comfortable pass than this, but a pass is a pass, and a tricky product is a tricky product. Whatever we design, electromagnetic performance has to be considered from the start. We need to know more than we used to about where our electrons are going, and generally take a mature approach to designing products and reviewing best practice, or we risk spending a fortune on making things that we’re not allowed to sell.

ZX Electromagnetic Spectrum

Now, we get to the fun bit. This year is the 30th anniversary of one of my favourite inventions of all time, the Sinclair ZX Spectrum. A few weeks ago, I finally bought one: a non-working one on eBay that I nursed back to health. Fortunately there was very little wrong with it. Unfortunately it’s a 16K model, and a fairly early one at that, which won’t run much software in its native state. This probably accounts for its unusually pristine condition.

We took half an hour in the chamber to perform an approximate series of EN55022 measurements, to check its radiated emissions against today’s standard. The question is, what have we learned as an industry since 1982? Does a 30-year-old computer, which embodies Sinclair’s mastery of cost-engineering and elegant design like nothing else, pass modern legislation that would render it saleable?

We gave it a fighting chance. One of the things I did was to disable its TV modulator which, as well as stopping it from generating a UHF carrier signal, renders it compatible with modern televisions. Machines of this vintage are notorious for their flaky video circuitry, and it needed all the help it could get just to render a yellowish picture.

speccy

The ZX Spectrum, interpreting cyan with modest success.

I replaced all the electrolytic capacitors, trimmed the colour circuitry as best I could, and ensured via some old documents online that its modification state is up to date: not because this would have helped, but because it’s good practice for any piece of old electronics. The only other modification from stock was a 1 megaohm resistor across the 14MHz system crystal, just to allow the aged thing to work properly. I’m trying to help here.

speccyinside

The inside of my refurbished Issue 2 ZX Spectrum. The empty chip sockets would normally contain an extra 32K of RAM.

Remember when all technology was this exciting? Here it is with the radiation detection antenna pointed at it. Normally we would test such a device in a more representative setup: ideally, it would be attached to a cassette recorder and a TV, but we gave it a fighting chance and left most of the cabling in the box. I’ll talk about what happened next week.

speccy emc

So you didn’t train as an electronic engineer or a computer scientist. This has never been an impediment to working in engineering: good engineers often come from the humanities or the arts. The only prerequisites are good numeracy skills and the correct attitude.

In fact, your unusual origins may actually help you because, if you sell them properly, your extra skills and insight provide you with knowledge, techniques, and perspective that set you above the specialist programmers who are competing with you. I recommend a quick look at Valve Software’s handbook for new employees because it contains a description of T-shaped people (page 46). These are the people that innovative organisations love.

What else do you need to work on to get a first job in software? There’s a lot of advice out there, and some of it is dreadful. As an outsider who found a way in, and who now trains and leads other engineers, I have my own list. Forgive me if this is short of depth or justification: I’d have to write a posting for each one if I were to play that game.

Learn C.

There are a lot of sub-specialisms in computer science now, but they all have one gateway in common: a high-level programming language. High-level languages are abstracted enough that it doesn’t matter exactly what your microprocessor is doing with its time, but not so abstracted that you lose touch with the power and memory your program requires. That’s why they are the best place to start learning.

The core elements of high-level languages are essentially the same. Once you’ve learned to think in one, the same techniques can be applied to any other, or broadened into object-oriented programming, or funnelled into the low-ceilinged world of microcontrollers.

Choosing C might be slightly controversial, and maybe seems prescriptive, so here is a little reasoning. Based on all I’ve seen and experienced, I wouldn’t advise starting to program with an object-oriented language: you’ll learn to think in terms of larger-scale forms and, while objects and classes are very useful indeed, many challenges won’t look like that. Nurse Compiler and Nanny Operating System will hide much complexity from you and will expect something in return, and you’ll miss important lessons about how computers function.

A full appreciation of starting constraints is the cornerstone of good design. At the extreme end, however, we have assembly language. This is way too much to learn at once. Addressing modes, register management, memory mapping, the stack, I/O interfacing, and a full instruction set all have to be understood before you’ve written a line of code. Then you immediately need to learn a few algorithms before you can do anything useful with it. It takes considerable thought and insight, especially for a beginner, simply to divide a number by ten and find the remainder.

Launchpad was written in assembly language, because our constraints forced us to get as close to the microcontroller as possible. We wouldn’t have used assembly language if we didn’t have to, and I wouldn’t have inflicted the project on a beginner.

Every flavour of assembly language is different from the next because it’s tailored to its specific class of processor. What you learn in one workshop will therefore be almost useless in the next. The final reason not to learn an assembly language unless you really have to is that it’s less useful than it once was. Many modern processors cannot be programmed in anything but a high-level language, because they’re internally too complicated for a human to keep track of the state they are in.

C is simple to get a feel for, is properly structured, easy to write both badly and well, and it opens many doors. It is the language of choice for embedded electronics. It is a gateway to being able to understand object-oriented enhancements such as C++, C#, Objective-C, and a plethora of C-a-like languages that drive the modern world, but without needing these techniques at your disposal from the start. It’s also very similar to Java, so it won’t take much of a contextual shift to move towards Android or Internet stuff.

There is a classic textbook, Kernighan and Ritchie’s The C Programming Language, that you’ll find at all good bookshops, which is comprehensive without being too long. You can get GCC running in a DOS window for free, or you can buy any number of hardware development kits with change from £50, and start building programs that play around with data within minutes.
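To show how gentle the start is, here’s the sort of complete program you might write in your first hour. Note that the divide-a-number-by-ten task that demands such effort in assembly language is a single line of C:

#include <stdio.h>

int main(void)
{
    int n = 42;

    /* The job that takes real thought in assembly language: */
    printf("%d divided by ten is %d remainder %d\n", n, n / 10, n % 10);

    /* A loop: ten greetings, rather than an infinite stream of them. */
    for (int i = 0; i < 10; i++)
        printf("HELLO\n");

    return 0;
}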

Having said all this, there’s no ‘wrong’ way into this industry. If you just cannot live without knowing how to write for iOS or Android, go right ahead, download the appropriate SDK, and dive in. iOS and Android programming are where many of the jobs happen to be right now, so this experience wouldn’t do you any professional harm. However, it might be dangerous to your morale. The size and scope of Apple and Google’s development environments are dazzling to a novice, and you’ll have to accept that you’ll make glacial progress and blunder about for weeks before you have any idea what you’re doing. This is how professional programmers feel when they turn their skills to mobile computing, and if you are setting out from scratch there is a risk of snuffing out your enthusiasm. I’d still urge anybody who is just setting out to begin in a smaller, more restricted world, and to get a bit of confidence in the simpler disciplines of reading and writing code and using simple libraries before tackling the great state-of-the-art development environments. It’s your sanity at stake.

Solve lots of different problems.

The projects you’ll be working on in your first professional job may be anything. If you need inspiration, have a look at Project Euler (protip: rhymes with ‘boiler’) for a few interesting challenges: working on its short problems will teach you how to think laterally as a programmer.

When you get bored of these, start solving your own problems. For example, every programmer I know has to knock out a short and dirty lump of code about once a week that massages data from Program A so that it fits into Program B, or reads an audio file’s header in order to find out where and how it keeps the data, or outputs a graph as a PostScript file. Designing C libraries to devour small tasks like these can be immensely rewarding.
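As an example of the breed, here is a sketch of the audio-file job: walking a WAV file’s chunks to find where the sample data starts. The error handling is minimal and unusual files will defeat it, but it is the honest shape of a once-a-week program:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint32_t read_u32le(FILE *f)   /* RIFF files are little-endian */
{
    uint8_t b[4] = {0};
    fread(b, 1, 4, f);
    return b[0] | (b[1] << 8) | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    char id[4];
    fread(id, 1, 4, f);     /* "RIFF" */
    read_u32le(f);          /* overall file size: not needed here */
    fread(id, 1, 4, f);     /* "WAVE" */

    while (fread(id, 1, 4, f) == 4) {   /* walk the chunks */
        uint32_t size = read_u32le(f);
        if (memcmp(id, "data", 4) == 0) {
            printf("data chunk: %u bytes at offset %ld\n",
                   (unsigned)size, ftell(f));
            break;
        }
        fseek(f, (long)((size + 1) & ~1u), SEEK_CUR);  /* chunks are word-aligned */
    }
    fclose(f);
    return 0;
}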

You should work up to solving problems that are sufficiently complicated to require drawing out the program’s structure on paper beforehand. One professional programmer, whose name I’ve forgotten, once gave the advice not to release your first three proper bits of software: they’re just for practice. Business pressures don’t always allow such restraint, but the advice is interesting because, even with experience, you often learn to program something well only by doing it badly (or seeing it done badly) first.

Learn a bit about how the Internet works.

A series of exercises for the reader.

  1. Learning from tutorials found using a popular search engine, write a basic HTML document in Notepad. Add some markup to place some headings. Now, write and reference a CSS stylesheet declaring a sans-serif font, tasteful colours, generous line-spacing and margins, so it doesn’t look like it was put together by a college student using Netscape in 1996.
  2. Install Apache and PHP on your own computer, and get a web page up and running in this environment. Use PHP to add an interactive dimension: a simple comment board, something that spews out the first thousand prime numbers, or something that can process an HTML form.
  3. Now learn a bit about how the Internet works. Look at TCP/IP, how DNS resolves domain names into IP addresses, and how HTTP pulls resources over the World Wide Web from one computer to another.

Congratulations! In about half a day, you have obtained Web design experience. No matter what you specialise in, this information is essential for three reasons. Principally, because the Internet is the new world of commerce, and fixing shitty websites is where a good deal of the money in computing is to be made. Secondly, no matter which job you land, knowing how to format arbitrary data as a pretty HTML document is one of the most important, useful, and transferable skills you’ll ever acquire. Lastly, the Web is the gateway to writing your own web site, blogging, and thus to publicising yourself and your work. Knowing what you’re doing won’t do you any harm.

PHP, by the way, is another controversial choice of language, and it may cause a few engineers to baulk: the deeper you get, the nastier and more inconsistent it becomes, and its error reporting can be unhelpful. The definitive rant about it is here, so I needn’t go on. If you want to go for the latest cool language (Python / Ruby on Rails / Go) to see what the bleeding edge looks like, you have everybody’s blessing. However, the sheer ease with which PHP slots into Apache, its simplicity of installation, and the quality of its online documentation will offset any pain that the language itself adds to this exercise.

In one day, you’ve doubled your chances of getting through the Google interview. (On that subject, read this. It isn’t meant to be scary.)

Keep an eye on technical blogs.

Interviews will go better if you are attuned to technology and engineering culture, and regularly read informed opinion. engadget.com is an up-to-the-minute technology site with educated, professional reviewers. Joel Spolsky’s blog (joelonsoftware.com) started as a series of essays on the craft of programming, but it’s now more about running a software business, mirroring the career trajectory of its author. Its spin-off book is of significant cultural importance, and is worth a read if you can find a copy. Jeff Atwood’s codinghorror.com is generally a good read. thedailywtf.com is a bit of light relief, and might even serve as a training aid. Ben Goldacre at badscience.net is one of the best-connected scientists in the UK, and his linklog is full of interesting technical material.

In terms of paper magazines, Wired is often entertaining, but tries too hard to be a chic lifestyle magazine. Make Magazine is more fun: rather like a grown-up Blue Peter.

These are some of my favourites; find your own.

Get your first job.

Here we go. Network like crazy. Talk to everybody. Arrange random encounters. Join the AES or the IET or the BCS, and hang out at their lectures. Don’t take ‘maybe’ for an answer: your career is far more important to you than it is to the person you’re talking to, so make it your responsibility to pursue them. Engineers and managers are very busy people, and they might forget about you. Ask when you can expect to hear back from them, and phone or email them on that day.

I’m not going to tell you how to write your CV. Just make sure it’s up to date, lean, customised to the job you’re applying for, and front-loaded with the most relevant projects you’ve been working on, even if you’ve been doing them only for your own education. Your work experience doesn’t have to align with the specified requirements exactly: a decent employer will invite you for a phone screen because they think you’re interesting, and this is when you can prove to them that you can do the job. If they give you an outright ‘no’ even without phone-screening you, you don’t want to work for them anyway.

In my experience, smaller companies (20 to 100 people) are the best at giving graduates job opportunities without prior experience. This is because they can’t afford Human Resources departments. In their most pernicious form, Human Resources departments doom their organisations to eternal mediocrity. Non-experts vet CVs for jobs they can’t possibly understand. Procedures are put in place in the name of equal opportunities that, perversely, stop people with unusual CVs getting interviews because they do not have the appropriate certification. On the other hand, small companies are generally newer, so they still have agility and risk built into their business model. They have no use for bureaucracy, they’re more likely to see your different qualifications as assets, you’re more likely to speak to the person who can help you directly, and they’ll be far more willing to take a chance.

A few organisations have graduate training schemes for engineers, and will train you to CEng status no matter what your background, but in my experience these are usually defence companies. Many engineers I know started off in such schemes, but just as many people might have ideological problems doing so.

Keeping your first job, and getting your second.

Being a good diplomat will get you further in your career than being good at making stuff. Being a diplomat means making allies, appreciating the different people around you in all their wonder and their flaws, and not losing sight of your integrity. Early on in your career, people around you will be second-guessing your motives, because your bosses and peers will be concerned that anything you do might become their responsibility. You don’t have to be perfect, but if you prove yourself honest, and impart any news you have objectively rather than trying to hide or gild it, people will come to trust you.

The world of work is a subtle environment. More often than not, you’ll be dropped into a world rife with unspoken and undocumented working practices, armed with an insufficiently detailed specification and a series of operating constraints that people already take for granted, and then given some latitude to find your way. Somebody might be managing the architecture of your project so that the different people working on different parts of it eventually produce a coherent whole; meanwhile, your new MD may be interfering with your working practices in an attempt to make your company more competitive. These people will annoy you: that’s their job. However, unless they are genuinely idiots, you will learn to appreciate them and their processes. You will sacrifice some creative freedom, but in return you gain a much greater chance of success, and some protection if it all goes wrong. If you are sinking two or three years of your life into a design project, it’s good to know that it’s being looked after, and what you’re designing will probably sell.

Often, things are going best when it feels most like you’re about to be sacked. How people behave in the twilight of failure is the strongest indicator of their strength of character. One day, you will be shouted or sworn at. Don’t take it personally, and see it for what it is: a sign that a colleague is having a bad day and needs to be left alone. At some point, your boss will tell you off. Unless you’re actually being laid off, turn into receive mode, be thankful that you are receiving professional advice (no matter how unfair or redundant it might seem), and treat it as a great compliment. Everybody is human. If we criticise too rashly, it is because we want to teach but lack the patience. If we are sometimes disappointed, it is only because we have set our expectations so high.

I say this because I’ve genuinely found it to be true, but you might run across a truly crap job, or a truly evil manager. There is plenty of free guidance out there about such circumstances, and I wouldn’t consider myself an expert. But consider this: the drama might have originated in your own head. Sometimes it’s hard to tell the difference between a paranoid incompetent psychopath with an ulterior motive, and a good, honest person with rusty communication skills who has been poorly briefed, has not slept well, and is desperately trying to give the appearance of coping. Your career will depend on giving these people the benefit of the doubt, supporting them, and sometimes managing from below or leading very gently from behind. It will depend on keeping in touch with the colleague who will rescue you from a shambles when they find a better job. It will depend on building your experience, satisfying customers and managers, and knowing how to let them down when you inevitably must.

Be ready for this, and enjoy yourself. It’s fun.

I’ve been asked by an undergraduate, who is not taking a computer science course, how he might become a software developer. It’s a complicated field, and there is no simple answer. So, with apologies, I present a two-part posting. The first part describes the environment that led me to take this path, and why I can no longer advise it. The second part suggests how people might take it today.

Born to RUN

10 PRINT "HELLO"
20 GOTO 10

This program taught me three lessons. The first is that any popular home microcomputer of the 1980s can be made to do your bidding as long as your instructions are suitably precise. The second is the beginning of program structure. Line 10 makes the computer do something, but line 20 just makes it jump back to line 10. The program therefore restates the initial greeting on a new line of the screen, and so on until the user demands that it stop. The third lesson is that programming is a peculiar subset of English: the commands have clearly defined meanings that differ from their everyday English ones. While the meaning of ‘print’ is clear if one sees the screen as a print-out, ‘goto’ is less clear until you realise it’s two separate words. Even then, it is an entirely abstract symbol that exists only in the world of computing.

It was 1984, and I was six years old. Every home computer came with some variant of the BASIC programming language in ROM. A world of possibility would begin to unfurl a second or two after the power was switched on. My program worked equally well on any of my friends’ home computers, and on the BBC Micro at school. What more does a six-year-old need to set him on the path towards becoming an engineer?

Usborne, the children’s publisher, produced wonderful books on writing BASIC that an eight-year old could understand. I sat on the floor of the school bookshop and learned everything I could from them, and talked about little else. My parents reluctantly endured this enthusiasm, still hoping to have begotten a musician or artist rather than an engineer. After about eighteen months they relented, buying me a second-hand ZX Spectrum as a birthday present.

The best thing about the Spectrum, apart from the absurdly low cost of the computer and software, was the BASIC manual. Written by Steven Vickers (now a professor at the University of Birmingham), it remains one of the finest examples of technical writing I have encountered. Mastering Sinclair BASIC didn’t take long: all I had to do was to wait for my grandfather to teach me the fundamentals of binary notation in the car one afternoon, receive my first algebra and trigonometry lessons at school a couple of years later, and I could write and understand real software.

This path led naturally on to assembly language, which is faster and more versatile than BASIC, but considerably more difficult: it is real engineering. Heaven help me, at the age of eleven I actually took a guide to Z80 assembly language on holiday with me and, with difficulty, began to understand it. It wasn’t written very well.

Meanwhile, programming was everywhere. The BBC and ITV ran magazine shows about home computers. Popular magazines would publish listings that readers would type in and run themselves (often masochistically long). The MIDI musicians from last week’s posting were doing things with computers, synths, drum machines, and very gay make-up, and it was changing the world. When they appeared on Top of the Pops, it was accompanied by vertiginous overproduced neon visuals that could only have come from another computer. Debates raged about the future of computers in mass production and commerce. This was the mid-1980s: Thatcher had set her sights on labour-heavy, unionised industries, and computers were rendering some traditional skills redundant. The London Stock Exchange was computerised overnight in 1986. News International and TV-am were among the first big organisations to be dragged by their owners into a part-political, part-technological maelstrom, and they were never the same again. In a very short space of time, computers actually had taken over the world: culture, finance, and political debate.

As a child in this environment, it was impossible not to be excited: here was a technology only slightly older than I was, uprooting everything in its path. I had already learned that it could be tamed, and the rest was waiting to be understood one piece at a time.

Home computing moves on

There were equally significant commercial forces at work in the home computer industry. In about 1988, I remember laughing with derision at an Amstrad PC at a friend’s house when he explained that he had to load BASIC from a floppy disc. In fact, the device was an expensive paperweight until you supplied it with an operating system – CP/M, GEM, or similar – that had to be purchased separately. It was a proper, big computer, and it broke the chain of consequence for me. How can you write software for it? How did they? Why did their instruction manuals contain no example programs? When I eventually retired my Spectrum and got a better computer in the early Nineties, I chose one of the few that still had BASIC contained on a ROM, a feature that was getting rarer as IBM-compatible PCs took over the home computing market.

At about this time, commercial pressures were changing the British home computer industry. Sinclair had run out of money after two projects that were disastrous for different reasons (his QL business computer and his electric car, the C5) and had sold everything to Amstrad. Alan Sugar, then and still a wide boy with an eye for a fast buck, took the Sinclair legacy, threw away everything innovative, closed down all development and, within two years, Sinclair’s original nosedive completed its trajectory. Acorn had overproduced its latest range of computers, overreached itself and, floundering similarly, sold a majority share of the company to Olivetti. Acorn hung around for another decade, quietly doing glorious things including spinning off ARM Holdings (still shaping the world today), but they never recaptured the market they had once dominated. In 1990, the British home computer industry had faded; by 1998 it was dead.

Ten years after a short era when 16-year-olds made the news by writing games in their bedrooms and out-earning their parents, programming had gone from being something that was an easy and inevitable consequence of childhood to something mysterious that had to be sought out, and could be learned only by handing over large sums of money for extra software and extra books. Children who learned programming in this wilderness would have done so using the dreadful implementations of BASIC provided as supplements to MS-DOS, or by messing around in Microsoft Excel, building up simple functions one line at a time.

I moved on, combining and assimilating my computer skills with the audio engineering I picked up at university. My programming hobby became academic project experience, which then became a graduate job just before the dot-com bust, which then turned into a career. I had been lucky to be born at the right time, when these opportunities were unfolding almost in spite of me.

However, I started noticing very young computer geeks again. Two or three different easily-accessible paths into programming had opened up that I had barely noticed, all of which relied on a Web browser combined with open source software. The web page had become a powerfully-interconnected replacement for the computer desktop. Learning the right language could unleash an updated version of those same magical powers I had discovered at a school computer in 1984.

Open-source software and the Web still provide the most cost-effective, high-impact, and compelling route into programming, but not the only one. These possibilities will have to be the subject of my next posting.


The Spectrum’s keyword prompt: once a gateway to a happier world.

Once upon a time, in the days when computers were mysterious and new, there was no simple way of making electronic musical instruments communicate with each other. Every manufacturer invented their own methods for co-ordinating the various bits and pieces that they sold. These would usually involve fragile bundles of wires passing analogue control voltages from one device to another. On reaching their intended devices, these voltages were amplified, manually scaled and offset in order to render them useful.

The pre-MIDI EMS Cricklewood keyboard and VCS3 ‘Putney’ synthesiser in various stages of interconnectedness. In those days, it was considered acceptable to name one’s products after unprepossessing but well-to-do corners of Greater London. (Apologies for the suboptimal photography.)

The brutal-looking connector on the 1960s VCS3 is called a Jones connector. It supplies two power voltages and a ground to the keyboard. Two scaled control signals and an envelope trigger are generated and returned on separate terminals. Putney’s backplane has an array of jack sockets that allow control voltages to enter and leave.

Midi In

In response to this unhelpful situation, the MIDI Manufacturers Association [MMA], a consortium of mostly American and Japanese companies, agreed on a universal specification for a digital interface. This specification was driven entirely by two needs: to encourage interoperability between musical devices, and to keep cost to a minimum. The MMA settled on an asynchronous serial interface, because this reduced the complexity and cost of interconnection. It was specified to run at 31.25kHz, a number chosen because it is easily reachable by dividing 1MHz by a power of two (1MHz / 32). At the time, this choice rendered it incompatible with RS-232 (which can usually provide nothing between 19.2kHz and 38.4kHz), preventing existing computers from transmitting or receiving MIDI without extra hardware. Perhaps as a result, MIDI ended up on computers only as an afterthought.

Data was communicated in one direction only over 5-pin DIN connectors, which were ubiquitous in the home audio market, and were therefore about the cheapest multi-pin connectors available. (They were so cheap, in fact, that the MIDI specification wantonly squandered two of the connector’s pins by leaving them unconnected: a move that would not be countenanced today.)

The data that travels on the MIDI interface was elegantly designed to embrace the feature set of contemporary microprocessors. Only 8-bit data was employed and, to save memory, no standard message could exceed three bytes in length. One bit of every byte was reserved to frame the data, giving rise to the 7-bit data limitation that causes annoyance today.

By design, MIDI embraced note gating information, key velocity, and standard controller messages for the pitch bend wheel, sustain pedal, and key aftertouch. A loosely-defined framework of controller messages was also provided so that other data could be conveyed besides this. Provision was made for almost every command to be assigned one of 16 separate channels, intended to allow sixteen different sounds to be controlled independently over the same physical cable.
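To make the economy of the design concrete, here is a sketch in C of a complete Note On message; everything above fits into patterns like this one. The helper function is mine, but the byte layout is MIDI’s:

#include <stdint.h>

/* A complete Note On event is just three bytes. Only the status byte
   has its top bit set; the data bytes are 7-bit, hence 0-127. */
void midi_note_on(uint8_t msg[3], uint8_t channel, uint8_t note, uint8_t velocity)
{
    msg[0] = 0x90 | (channel & 0x0F);   /* status: Note On, channels 0-15 */
    msg[1] = note & 0x7F;               /* key number: middle C is 60 */
    msg[2] = velocity & 0x7F;           /* how hard the key was struck */
}

Middle C at moderate velocity on the first channel comes out as the bytes 0x90 0x3C 0x40, and that is the whole transmission.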

The first MIDI devices emerged in 1983. Some unintentionally very amusing episodes of Micro Live were created that demonstrated the technology. The rest is history. Synthpop was essentially democratised by this inexpensive serial protocol. Dizzy with the possibilities of MIDI, musicians ganged their synthesisers together and began controlling them from the same keyboard to layer more and more voices, creating fat digital sounds that were very distinctive and dated very quickly. Artists who did not have the resources to employ professional studios with all their pre-MIDI equipment connected their Casio keyboards to home computers, running new software that enabled them to build up note sequences, and then quantise, manipulate, and replay them in a manner that would have been unthinkably expensive by any other means.

Midi thru

Here we are, nearly thirty years later. The processing power and capacity of a computer are around two million times as great as anything available for similar money in 1983. As a consequence, keyboard controllers, synthesisers, sequencers, and signal processing tools have advanced considerably. And yet, amidst all this change, MIDI still reigns supreme. As a basic protocol, it is just about fit for purpose. With our ten digits, two feet, and a culturally-motivated lack of interest in breath controllers, most of us are still trying to do the same things that we’ve always done in terms of musicianship. Although devices now produce much richer MIDI data at a faster rate, this is not a problem because MIDI is conveyed over faster physical interfaces (such as USB), so we can still capture it.

Aside from the musical data, MIDI has another weapon that has ensured its survival: it allows manufacturer-specific data transmissions. These System Exclusive messages opened a portal that allows modern devices to play with computers in ways that MIDI’s creators could not have imagined. To System Exclusive messages, we owe patch storage and editing software, remote software upgrades, and next-generation protocol extensions like Automap.

And yet … and yet, the specification shows its age to anybody who wants to do more or to delve deeper. MIDI is inherently a single-direction protocol, and its 7-bit data limitation results in an obsession with the number 128 that is now painfully restrictive: 128 velocity gradations; 128 programs in a bank; 128 positions a controller can take. Certain aspects of MIDI were poorly defined at the beginning, and remain unresolved three decades later.

Q. Middle C is conveyed by MIDI note number 60. Should we display this note to the user as C3 or C4?

A. Just choose one at random and provide the other as a configuration option.


Q. How much data might I expect a System Exclusive message to convey?

A. Oh dear, you went and bought a finite amount of memory. Good luck designing that MIDI merge box / USB interface.


Q. I’ve got a MIDI Thru splitter that is supposed to be powered from the MIDI OUT port of my keyboard. Why doesn’t it work?

A. Your keyboard manufacturer and your Thru box manufacturer have both bent the specification. If they’ve bent it in opposite directions, then your box won’t work as advertised.


Q. If the user doesn’t understand MIDI channels, and is attempting to transmit MIDI data on one, and receive MIDI data on another, what will happen?

A. The device at one or other end of the cable will end up back at the shop.


Q. I’m designing a new keyboard. Should my device support Active Sensing?

A. I don’t know. Should it?


Apart from all that, a lack of per-note control data annoys the creators of more expressive instruments. The standard’s rigid genuflection to Western 12-tone chromaticism is an irksome limitation to some (particularly those who use terms such as ‘Western 12-tone chromaticism’). The note model cannot properly handle single-note pitch effects such as glissandi. For devices that must accept or transmit a wide variety of control data, including us, the NRPN system constitutes a fairly unpleasant prospect, loaded with parsing irregularities and a padding-to-payload ratio of 2:1.

In retrospect, dealing with MIDI could have been made somewhat easier. The size of a single MIDI instruction depends on the contents of the first byte in a way that is neither obvious nor easy to derive, and the first byte may not necessarily be repeated in subsequent messages, which leads to a fairly onerous parsing process.
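Here, as a sketch, is roughly what that parsing dance looks like in C. The table encodes how the length of each message follows from the status byte’s top nibble, and the persistence of the status variable is the ‘running status’ rule that lets senders omit repeated first bytes. System and real-time messages are glossed over for brevity:

#include <stdint.h>

static const uint8_t midi_len[8] = {
    3, 3, 3, 3,  /* 0x8n note off, 0x9n note on, 0xAn aftertouch, 0xBn CC */
    2, 2, 3, 0   /* 0xCn program, 0xDn pressure, 0xEn pitch bend, 0xFn system */
};

static uint8_t status, data[2], count;

void midi_byte(uint8_t b)
{
    if (b >= 0xF8) return;                /* real-time bytes: ignored here */
    if (b & 0x80) {                       /* a status byte arrives */
        status = (b < 0xF0) ? b : 0;      /* system messages: not handled */
        count = 0;
    } else if (status) {                  /* a data byte continues a message */
        data[count++] = b;
        if (count + 1 >= midi_len[(status >> 4) & 7]) {
            /* handle_message(status, data); */
            count = 0;                    /* status persists: running status */
        }
    }
}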

The authors of the USB MIDI specification went to the trouble of re-framing all the data into four-byte packages to simplify parsing. Unfortunately, they left a back door open to transmit an individual data byte where this was deemed essential. When is this essential? When you are deliberately trying to send malformed data that’s useless to the device at the other end. Or, to put it another way, never. The inevitable happened: one company now misframes even valid instructions, using this message capriciously to split up standard data into streams of single bytes. The USB MIDI parser thus becomes more, not less, complex, because it has to be able to support both the new four-byte frames and the old-fashioned variable length ones.
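For contrast, the USB framing looks like this. Each event travels in a four-byte packet whose first byte carries a virtual cable number and a Code Index Number that duplicates the message type, so a parser always knows the length in advance. This is a sketch of the layout rather than of a full driver:

#include <stdint.h>

typedef struct {
    uint8_t cable_cin;   /* high nibble: virtual cable, low nibble: CIN */
    uint8_t midi[3];     /* the MIDI message itself, zero-padded */
} usb_midi_packet_t;

/* Note On for middle C on cable 0: the CIN 0x9 mirrors the 0x9n status. */
static const usb_midi_packet_t example = { 0x09, { 0x90, 60, 64 } };

The single-byte back door mentioned above has its own Code Index Number, and supporting it drags the whole variable-length state machine back into the parser.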

In honesty, it’s only slightly inconvenient. The MIDI parser that we embed into our hardware designs is about 640 bytes long. These are 640 very carefully arranged bytes that took several days and a lot of testing to prove, and all they do is allow a device to accept a music protocol invented in the early 1980s, but it might have been a lot worse. Indeed, it is worse once you start trying to respond to the data. Never mind: if even the pettiest problem stings us, we fix it properly. And if any fool could do MIDI properly, we’d all have to find alternative careers.

MIDI out

There have been attempts, and occasionally there still are, to supplant MIDI with an all-new data format, but these seem doomed to obscurity and ultimately to failure. About twenty years ago, there was ZIPI; today, it’s nothing more than a Wikipedia page. mLAN attempted to place MIDI into an inexpensive studio network. In spite of very wide industry support, it had few adopters. With hindsight, the futurologists were wrong and the world took a different turn. Latterly, there’s the HD-MIDI specification, and Open Sound Control [OSC], soon to be re-christened Open Media Control. We’ve looked into these. I cannot remember if we are prevented from discussing our draft of the HD-MIDI spec, but we probably are. My one-sentence review therefore contains nothing that isn’t already in the public domain.

HD-MIDI promises to be improved and more versatile, and does so by adding complexity in ways that not everybody will find useful. OSC suffers from a superset of this problem: it’s anarchy, and deliberately so. The owners of the specification have been so eager to avoid imposing constraints upon it that it has become increasingly difficult for hardware to cope with it. The most orthodox interpretation of the specification has the data payload carried in UDP datagrams, dragging the rest of an IP networking stack along beneath it. (You think that MIDI’s 7-bit limitation creates too many processing overheads and data bottlenecks? Wait until you try an IP stack as a host-to-device protocol!)
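To make the comparison concrete, here is what a note-on might look like on the wire under the OSC 1.0 encoding rules. The address is necessarily an invention of mine, because, as I’ll come to shortly, OSC defines no standard one:

    #include <stdint.h>

    /* A null-terminated address string padded to a four-byte boundary,
       a type-tag string (",ii" = two 32-bit integers), then big-endian
       arguments: 24 bytes, against 3 for the MIDI equivalent. */
    static const uint8_t osc_note_on[] = {
        '/','s','y','n','t','h','/','n','o','t','e', 0,  /* address   */
        ',','i','i', 0,                                  /* type tags */
        0, 0, 0, 60,                                     /* note      */
        0, 0, 0, 100                                     /* velocity  */
    };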

Networking protocols are fine for computer software, phone apps, and for boutique home-brew products, but they are somewhat impractical for a mass-market music device. Most musicians are not IT specialists. Those whose savoir faire extends only as far as the concept of MIDI channels cannot be expected to prevail in a world of firewalls, MAC addresses, subnet masks, and socket pairing. Ethernet being the mess that it is, there are at least two simpler ways of interfacing with computers by using old serial modem protocols, but most new operating systems have all but given up supporting these and the burden of configuration is, again, upon the user.

More severely, there is an interoperability problem. OSC lacks a defined namespace for even the most common musical exchanges, to the extent that one cannot use it to send Middle C from a sequencer to a synthesiser in a standardised manner. There are many parties interested in commercialising OSC, and a few have succeeded in small ways, but it wouldn’t be possible to stabilise the specification and reach a wide audience without garnering a consortium of renegade manufacturers for a smash-and-grab raid. The ostensible cost of entry to the OSC club is currently far higher than MIDI’s, too. Producing a zero-configuration self-powered Ethernet device, as opposed to a bus-powered USB MIDI device of equivalent functionality, would price us out of the existing market, exclude us from the existing MIDI ecosystem, and require a great deal more support software, and to what advantage?

For OSC to gain universal acceptance, it will need to be hybridised, its rich control data combined with more regular musical events, embedded together in a stream of – you’ve guessed it. If we’re going to go through all that palaver, and more or less re-invent OSC as a workable protocol in our own club, why would we start with its strictures at all? This brings us back to the MMA, and the original reason for its existence. HD-MIDI, at least, has industry consensus. If it is sufficiently more effective than MIDI 1.0, it may yet form part of a complete next-generation protocol.

For all its shortcomings, we musicians and manufacturers cannot abandon MIDI. We have had thirty years to invent a better protocol and we have singularly failed. Some of us have already lost sight of what makes MIDI great, and we must strive to remind ourselves how we can make it better. Meanwhile, the very simplicity, flexibility, and ubiquity of MIDI 1.0 make it certain to be an important protocol for some time to come. With this in mind, I confidently predict that, in 2023, MIDI will still be indispensable, unimpeachable, and utterly, utterly everywhere.

In my previous post, I touched on the problems of attempting to copy an acoustic (or electroacoustic) instrument via a MIDI controller keyboard. In short, there are a lot of challenges. We must have the serenity to accept the things we cannot change, and the bloody-mindedness to change, or at least to challenge, the things that we can.

It’s time to put this into action, and consider the controller keyboard in more depth. In this posting, I will focus on the piano, for two reasons. Firstly, it serves as a case study for most acoustic or electroacoustic keyboard instruments because it shares all of their vagaries. Secondly, it’s the instrument with which most people are most familiar, and for which the greatest amount of repertoire exists.

Generally speaking, a MIDI controller keyboard gets its sensitivity to nuance in a fairly unsophisticated way: we keep to trusted mechanical designs. Thus, the speed of finger impact is still measured in the same way it was forty years ago, by timing the interval between the closure of two switches, and this is the only information we have.

Top left: a key mechanism that we use. Top right: the C key has been removed to reveal the two levers and switch membranes for the neighbouring key. Bottom left: just the circuit board and membranes from the keyboard. Bottom right: the bare circuit board showing each pair of switch contacts underneath.

A keypress on a piano or keyboard constitutes a movement of about half an inch (call it 12.5mm). The key switches on a European keyboard mechanism that I tested actuate at 4.5mm and 7.5mm down a white note’s travel, so they can indicate the average speed of note travel over 3mm.

Pairs of switches are read at high speed: they have to be. In our higher-end controller keyboards, we scan each set of key contacts at 10kHz, so each contact closure is timestamped to the nearest 100 microseconds, and the interval between the two closures is therefore known to a worst-case accuracy of about 200 microseconds. That’s pretty much the state of the art because, although the technology can go quite a lot faster, there are certain inescapable design problems that prevent anyone from doing so economically. Our older synthesisers are a bit slower than this: nuance is less critical when you’re playing an acid house bassline or a fat string pad. Nevertheless, it turns out that 10kHz is just about enough to convey the dynamic range of speeds that a pianist produces from a semi-weighted keyboard. Although weighted and hammer-action keyboards feel more luxurious, their terminal velocities are considerably lower. Thus they can be scanned at a more leisurely pace, so it’s generally less expensive to read them effectively.
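In outline, the measurement works something like this (a sketch, and very much not our production firmware):

    #include <stdint.h>
    #include <stdbool.h>

    #define SCAN_HZ   10000u   /* scan rate: one tick = 100 us      */
    #define TRAVEL_MM 3.0f     /* distance between the two contacts */

    typedef struct { uint32_t t_top, t_bottom; } key_times_t;

    /* Called from the scan loop with the current tick count; records
       the first closure of each contact. (Assumes tick 0 never sees
       a closure, which a real scanner would handle properly.) */
    static void scan_key(key_times_t *k, bool top_closed,
                         bool bottom_closed, uint32_t tick)
    {
        if (top_closed    && k->t_top    == 0) k->t_top    = tick;
        if (bottom_closed && k->t_bottom == 0) k->t_bottom = tick;
    }

    /* Average key speed over the 3mm between contacts, in mm/s. */
    static float key_speed(const key_times_t *k)
    {
        uint32_t dt = k->t_bottom - k->t_top;   /* in 100us ticks */
        return dt ? (TRAVEL_MM * SCAN_HZ) / (float)dt : 0.0f;
    }

A 5.5ms interval between contacts, for instance, comes out at about 0.55 metres per second.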

We spend a long time designing representative velocity curves that feel right. Here’s one from the semi-weighted Impulse keyboard shown in our curve-designing software (every manufacturer who is serious about their craft grows their own). A colleague laboured over this curve for several hours, using different third-party synthesiser modules to develop and prove it:

The graph shows MIDI velocity values on the Y-axis, and inter-contact timings (‘m’ being short for milliseconds) on the X-axis. To produce a white note of velocity 100 (64h) from this curve requires a 5.5ms interval between the top and bottom key contacts. Black notes have their sensors arranged in the same physical places, but the different key size makes them shorter levers, so it takes a 4ms interval to register a velocity of 100. This subtlety is a pain: the black and white curves are always designed separately and, because it’s a matter of subjective feel, no hard rules can be used to relate them.
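At run time, a designed curve typically boils down to a small interpolation table. Here is one plausible form, not necessarily ours, with invented breakpoints apart from the 5.5ms-to-velocity-100 point mentioned above:

    #include <stdint.h>

    typedef struct { uint16_t us; uint8_t vel; } curve_point_t;

    /* Illustrative breakpoints only: inter-contact interval in
       microseconds against the MIDI velocity to emit. Black notes
       would get their own, separately designed table. */
    static const curve_point_t white_curve[] = {
        { 1000, 127 }, { 5500, 100 }, { 20000, 40 }, { 60000, 1 },
    };

    static uint8_t velocity_from_interval(const curve_point_t *c,
                                          int n, uint16_t us)
    {
        if (us <= c[0].us)   return c[0].vel;
        if (us >= c[n-1].us) return c[n-1].vel;
        for (int i = 1; i < n; i++) {
            if (us < c[i].us) {   /* linear interpolation */
                uint32_t span = c[i].us - c[i-1].us;
                uint32_t off  = us - c[i-1].us;
                return (uint8_t)(c[i-1].vel
                       - (c[i-1].vel - c[i].vel) * off / span);
            }
        }
        return c[n-1].vel;
    }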

Advances …

At this stage, things should perhaps get more complicated. As I’ve discussed, real pianos possess a double escapement mechanism, meaning that there are two ways in which the hammer can be made to contact the string: one where the hammer gets a kick throughout the entire travel of the note, and another where the key nudges the hammer more gently over a much shorter distance. The Piano Deconstructed is a terrific resource with some fun animations of all this. The first form of attack is the most difficult to control: that’s why piano teachers tell their pupils that all the expression is to be found right at the bottom of the keys.

The initial speed of travel of a piano key being hit for the first time is more important than its later speed: you cannot decelerate the hammer once it’s been given a good shove. For a fast attack, the hammer would impact the string around the same time as the first key sensor would be triggered on an electronic keyboard. So, to make the timing and velocity more representative of a real instrument, a third key sensor would improve matters. An extra contact would be actuated just as the key is depressed, providing a second inter-contact interval, and hence a second velocity reading, at the top of the key’s travel. There would be some complicated interaction between the two velocity curves thus derived, involving an immediate response for fast initial attacks, and a simpler comparison of the two velocities for slower attacks.
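Since nobody builds this, the blending logic can only be guessed at. Something like the following, with entirely invented thresholds and weightings, might be a starting point:

    #include <stdint.h>

    #define FAST_ATTACK 110   /* invented threshold velocity */

    /* v_top: velocity from the hypothetical upper contact pair.
       v_bottom: velocity from the existing lower pair. */
    static uint8_t blend_velocities(uint8_t v_top, uint8_t v_bottom)
    {
        if (v_top >= FAST_ATTACK)
            return v_top;   /* hammer committed early: respond at once */
        /* slower attacks: favour the bottom reading, where the
           player's late control lives */
        return (uint8_t)((v_top + 3u * v_bottom) / 4u);
    }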

I have never seen this design in practice – not even on some of the fancier Italian key mechanisms we’ve tried. Some of those key mechanisms are so lovely that they make me want to retire, take classes in cabinet making, and learn the complete sonatas of Beethoven, but they’re still based on two-contact systems. However, I learned to play on acoustic pianos. After years of coaching, I now approach the keys gently, and exploit the last fraction of an inch of travel to convey my intentions at the right time. I fear for learners playing exclusively on digital instruments, as they may get a surprise when confronted with a real instrument one day, only to find that they cannot get an even tone from it.

A third sensor would make the key mechanism more expensive to build and harder to scan, would make the input data harder to process, would render velocity curves and the scanning firmware more troublesome to design, and would put us into the region of diminishing returns. My inner piano player finds it a bit of a shame that my inner critic can demolish the idea so readily, but perhaps one day I’ll be in a position to experiment. Although it’s too obvious to patent, it might turn out to be a missing link.

If you’ve ever tried to play a real harpsichord, you’ll know how disorientingly high the action is, and how there’s nothing else quite like it. If a keyboard player wants to emulate an organ, harpsichord or a similar Baroque-era mechanism without velocity sensitivity, it would be far more authentic if the actuation for the note happened when the upper key sensor triggered. And yet, I don’t know of any manufacturer that does this: the sound always triggers at the bottom of key travel. This is presumably because a player does not generally want to adjust his or her style just to try a different sound. Nevertheless, it’d be interesting to know if there’s any commercial demand for sensor settings that allow a player to practise as if playing an authentically old instrument. Does anybody out there need an 18th Century performance mode?

(Update: Apparently Clavia do allow triggering from either the top or bottom contact on their Nord keyboards. It also improves the feel of vintage synth emulations. Even more reason why Novation might be overdue an obligation-free firmware update or two. Many thanks to Matt Robertson for this correction, and for being successful enough to own a Nord.)

… Off the end of a plank

There are a few other key mechanisms about. A delightful company called Infinite Response places a Hall Effect sensor underneath every key, so that their instantaneous positions can be monitored throughout the keypress and release. There’s a mode on their controllers so you can see this happening: as a key travels downward it provides a continuous position readout. It’s beautiful to see, and it must take a lot of fast, parallel processing. Their keyboards are priced commensurately, which is one of many reasons why I don’t own one. The problems with this keyboard are the same as the problems with other novel performance interfaces. Firstly, one’s synthesiser or data processing has to be as sophisticated and rich as the keyboard’s data output to make the investment worthwhile; secondly, one has to relearn musicianship skills that have already taken two decades to bring to a modest level in order to exploit these features. There isn’t enough time to re-learn music unless somebody pays you to do it.

In theory, we could already measure the release speed of the key. We actually collect the appropriate data, and MIDI possesses a standard method whereby this could be conveyed to the synthesiser. And yet, we don’t supply this information: every release is conveyed at the same nominal velocity. Why is this? There are three reasons, locked in a circular argument. Firstly, although a slow release sounds a little different from a fast one on a real instrument, musicians tend not to use it as an effect because the ear is far less sensitive to offset details than to onsets. Secondly, as release velocity is not supported by most controller manufacturers, hardly any synthesisers support it. Thirdly, if synthesisers don’t generally support release velocity, how do we design a curve for it?
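The standard method in question is the true Note Off message, which has carried a release velocity byte since MIDI 1.0 was published; most transmitters send a Note On with velocity zero instead, and most receivers ignore the field:

    #include <stdint.h>

    /* A true Note Off (status 0x8n) carries a release velocity in its
       second data byte: here channel 1, middle C, release velocity 45. */
    static const uint8_t note_off[3] = { 0x80, 60, 45 };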

Epilogue

Now that I’ve given a glimpse of why our key mechanisms, and everyone else’s, are only precisely good enough for the job, I shall finish by turning my scattergun towards the next part of the signal chain: the latest piano synthesisers. There are some wonderful keyboard mechanisms out there allied to cutting-edge, silicon-devouring modelling algorithms, but I haven’t yet heard a digital instrument that can seduce me away from the real thing. It’s not just sentimentality: there are still things I’ve never heard a piano synthesiser do. Here’s an example of something that no digital piano can render properly: the last four bars of the piano part of Berg’s Vier Stücke for Clarinet and Piano, Op. 5.

The italic instructions to the pianist, for those whose German is as ropey as mine, are ‘strike inaudibly’ and ‘so quiet as to be barely heard’. The loud staccato clusters in the left hand set up a sympathetic resonance in the strings of the notes that the right hand is holding down. When the dampers finish their work, what remains is an ethereal, disembodied chord. Acoustic modelling just cannot render this yet. (He was a clever chap, Alban Berg. If there can be any silver lining to his tragic death in 1935, it’s that his works are now out of copyright.)

The reason a digital piano synthesiser can’t reproduce this fragment of Berg is that it cannot render anything correctly while the sustain pedal is being held down: there’s just not enough power to compute the resonances of every string interacting with every other. (Consider: a grand piano has some 230 strings, so coupling each to every other means tens of thousands of interaction terms, recalculated at the audio sample rate.) Those synthesisers that claim to model string resonances genuinely do so, but model only the strings that are being played, in mutual isolation. Real pianos aren’t so deterministic. This is why digital pianos still sound a little anaemic.

While we’re on the subject of the sustain pedal: on any real instrument, its movement is an auditory event of its own. MIDI, however, treats it as a control change message, so we never hear the warm fizz and the quiet wooden choonk as eighty-eight dampers disengage from their strings. We’re already modelling strings, a soundboard, and hammers, but a bit of mechanical noise and simulated felt adhesion are still too much to ask. Perhaps I haven’t researched this recently enough: it’s not so hard to blend a few samples, and there seems to be a bit of an arms race going on in piano-synthesiser verisimilitude, so things have probably changed. Can I download a Glenn Gould piano model yet, that hums along with the middle voice whenever I attempt to play Bach?

Let’s end positively. One thing I’ve heard some piano models begin to manage at last is the ability to flutter the sustain pedal carefully to mess about with the decay of notes. It’s an effect that has its place when used sparingly. It’s taken twenty years, but there may be hope for these algorithms yet.

Eight hundred years ago, a keyboard was a series of pegs or bars used to control a pipe organ, with each key opening a valve to admit air to a particular group of pipes. It started to take its modern shape in tandem with the development of musical notation. Both had become standards that we would recognise today by the middle of the Renaissance Era, around 1500. With the invention of escapement mechanisms, keys were united with strings, and the first spinets, clavichords and harpsichords appeared. The keyboard as a control interface continued to evolve as the instruments that it served proliferated and matured.

Early keyboard instruments were limited in ways that today’s instruments are not. Only a small class of them could be controlled by changing the speed at which a player’s fingers hit the keys, and these devices provided insufficient power to perform to a concert audience. Louder instruments required a mechanical plectrum to pluck the string from a fixed height, so that any manner of keypress resulted in the same sound.

To alter the tone and character of music, concert instruments started to resemble organ consoles, possessing two or more manuals and a large number of drawbars and stops. Octave doubling, Venetian swell, and auxiliary strings were variously employed to provide some dynamic versatility, but it was not until the invention of the fortepiano around 1720 that a keyboard instrument could combine the subtlety of finger-controlled dynamic range with the power of a concert instrument. Early pianos feel and play like development prototypes: they are quiet and insubstantial, and they fall out of tune if somebody closes a door too quickly.

Fortunately, the Industrial Revolution accelerated their development, mutating the fortepiano into a pianoforte by replacing the wooden frame with cast iron, so that strings could be longer, tighter, and louder, and maintain better tuning. This provided the strength and stability to withstand the tension of two or more additional octaves of range, and allowed second and third strings to be fitted to the higher notes to balance them with the power of the lower ones. The newer bass strings were overstrung across the others to make the resulting instrument more compact, and the grand piano took its distinctive, curvy shape. Pedals were added: one to lift the dampers, and one to soften the treble by shifting the hammers so that they could not contact the third strings. For keyboard musicians, however, an equally significant improvement was the invention of the double escapement in the 1820s.

Whereas keyboards had formerly required each key to be released to its starting position before it could sound again, double escapement allows a player to let the key of a sounded note rise by just a few millimetres, and then strike again to repeat it. It permits a greater palette of playing styles, and accommodates figurations that are both quiet and fast. Certain elements of compositions, such as the note or chord tremolandi favoured by some modern composers, would be physically impossible without it. The double escapement is fiendishly complicated: it relies on a moving assembly poetically called the wippen, which couples the key to its hammer via a number of levers, moving linkages, and adjustable screws. It is sufficiently complex, and setting it up is such an art, that it would probably not have been invented or popularised had Victorian engineers had access to electronics. They didn’t, and their legacy is the brilliant and complicated key mechanism that accounts for much of the cost and labour of a modern concert piano.

Of course, the story doesn’t end there. Our forebears composed for many types of keyboard instrument, and so do we. The electronic era has bestowed upon us electroacoustic instruments such as the Rhodes and Wurlitzer pianos. These were designed, in the spirit of the clavichord, to be portable pianos. Because the electronics did some of the work, the hammers and dampers could be smaller, which made the escapements simpler and lighter. One such instrument, the Hohner Clavinet, is little more than an amplified clavichord, reminding us that innovation can be as retrospective as it is progressive. Add to this other classic electronic instruments with no finger-controlled dynamics – the Hammond organ, the Mellotron, and a plethora of classic analogue synthesisers – and it is clear that keyboard players can now choose from an incredible legacy of beautiful, but very different, instruments.

Today, our customers want the sound of these instruments without the liability of ownership. None of these instruments is simple to tune or maintain, and their scarcity, fragility and complexity make them expensive. Part of Novation’s business is manufacturing MIDI controllers that allow an inexpensive key mechanism to be joined to synthesisers or samplers to reproduce these sounds. Our slogan, It’s The Feel, is also our mission, and it pays no small tribute to five hundred years of progress.

Selling MIDI controllers with such a statement is bold. We run the risk of being compared against not just the responsiveness of a vintage instrument, but its gestalt. A Rhodes piano isn’t just a sound, and its escapement isn’t just a feel. The joy of a Rhodes is just as much in the semiotics of its faded, off-white keys, the weight and rattle of each note as it’s deployed, and the way that the whole keyboard buzzes under the fingers as it is played. It’s the smell of dust and old solder flux, and the black fabric, rounded thermoplastic, and kitsch chrome detailing of a vintage instrument. And, of course, it’s just as much the smoke-filled photographs of jazz and rock legends of the Sixties and Seventies teasing immortal melodies from its keys. For all their flaws, instruments like the Rhodes are evocative and compelling because they are culture, and they are history. Many musicians refuse to play a sampled facsimile, no matter how indistinguishable it is from the original once their track is laid down, because if the performer doesn’t feel the same, the performance won’t be the same.

So, when we make electronic controllers, we find ourselves perched on the shoulders of giants, but sometimes wishing that they’d let us down for a few minutes so that we can take a walk and see the sights ourselves. Meanwhile, we contrive to stage re-enactments of the playing experience of a few favourite keyboards, and to elicit as much cultural meaning from them as possible. The Feel will never be the real thing, but our customer is buying a MIDI controller, rather than trawling classified adverts to become the custodian of an heirloom. What we provide must be an amalgam of everything they need to do, with a cost of ownership that they can afford and an ease of use and portability that mechanical instruments cannot touch. We remember that our art is forever shifting. Our mission is to make the most versatile and playable keyboard we can, and to discard those frustrations of real instruments that performers often forget.

Having set this stage, my next posting will discuss the design of MIDI controller keyboards, the choices we make about them, and what we can do to make them better.