
Monthly Archives: November 2012

So you didn’t train as an electronic engineer or a computer scientist. This has never been an impediment to working in engineering: good engineers often come from the humanities or the arts. The only prerequisites are good numeracy skills and the correct attitude.

In fact, your unusual origins may actually help you because, if you sell them properly, your extra skills and insight provide you with knowledge, techniques, and perspective that set you above the specialist programmers who are competing with you. I recommend a quick look at Valve Software’s handbook for new employees because it contains a description of T-shaped people (page 46). These are the people that innovative organisations love.

What else do you need to work on to get a first job in software? There’s a lot of advice out there, and some of it is dreadful. As an outsider who found a way in, and who now trains and leads other engineers, I offer my own list. Forgive me if it is short on depth or justification: I’d have to write a posting for each item if I were to play that game.

Learn C.

There are a lot of sub-specialisms in computer science now, but they all have one gateway in common: a high-level programming language. High-level languages are abstracted enough that it doesn’t matter exactly what your microprocessor is doing with its time, but not so abstracted that you lose touch with the power and memory your program requires. That’s why they are the best place to start learning.

The core elements of high-level languages are essentially the same. Once you’ve learned to think in one, the same techniques can be applied to any other, or broadened into object-oriented programming, or funnelled into the low-ceilinged world of microcontrollers.

Choosing C might be slightly controversial, and maybe seems prescriptive, so here is a little reasoning. Based on all I’ve seen and experienced, I wouldn’t advise starting to program with an object-oriented language: you’d learn to think in terms of larger-scale forms before understanding what they’re built from. Objects and classes are very useful indeed, but many challenges won’t look like that. Nurse Compiler and Nanny Operating System will hide much complexity from you and will expect something in return, and you’ll miss important lessons about how computers function.

A full appreciation of starting constraints is the cornerstone of good design. At the extreme end, however, we have assembly language. This is way too much to learn at once. Addressing modes, register management, memory mapping, the stack, I/O interfacing, and a full instruction set all have to be understood before you’ve written a line of code. Then you immediately need to learn a few algorithms before you can do anything useful with it. It takes considerable thought and insight, especially for a beginner, simply to divide a number by ten and find the remainder.

Launchpad was written in assembly language, because our constraints forced us to get as close to the microcontroller as possible. We wouldn’t have used it if we didn’t have to, and I wouldn’t have inflicted the project on a beginner.

Every flavour of assembly language is different from the next because it’s tailored to its specific class of processor. What you learn in one workshop will therefore be almost useless in the next. The final reason not to learn an assembly language unless you really have to is that it’s less useful than it once was. Many modern processors cannot be programmed in anything but a high-level language, because they’re internally too complicated for a human to keep track of the state they are in.

C is simple to get a feel for, is properly structured, easy to write in both badly and well, and it opens many doors. It is the language of choice for embedded electronics. It is a gateway to being able to understand object-oriented enhancements such as C++, C#, Objective-C, and a plethora of C-a-like languages that drive the modern world, but without needing these techniques at your disposal from the start. It’s also very similar to Java, so it won’t take much of a contextual shift to move towards Android or Internet stuff.

There is a classic textbook, Kernighan and Ritchie’s The C Programming Language, that you’ll find at all good bookshops, and which is comprehensive without being too long. You can get GCC running in a DOS window for free, or you can buy any number of hardware development kits with change from £50, and start building programs that play around with data within minutes.

Having said all this, there’s no ‘wrong’ way into this industry. If you just cannot live without knowing how to write for iOS or Android, go right ahead, download the appropriate SDK, and dive in. iOS and Android programming are where many of the jobs happen to be right now, so this experience wouldn’t do you any professional harm. However, it might be dangerous to your morale. The size and scope of Apple’s and Google’s development environments are dazzling to a novice, and you’ll have to accept that you’ll make glacial progress and blunder about for weeks before you have any idea what you’re doing. This is how professional programmers feel when they turn their skills to mobile computing, and if you are setting out from scratch there is a risk of snuffing out your enthusiasm. I’d still urge anybody who is just setting out to begin in a smaller, more restricted world, and to get a bit of confidence in the simpler disciplines of reading and writing code and using simple libraries before tackling the great state-of-the-art development environments. It’s your sanity at stake.

Solve lots of different problems.

The projects you’ll be working on in your first professional job may be anything. If you need inspiration, have a look at Project Euler (protip: rhymes with ‘boiler’) for a few interesting challenges: working on its short problems will teach you how to think laterally as a programmer.

When you get bored of these, start solving your own problems. For example, every programmer I know has to knock out a short and dirty lump of code about once a week that massages data from Program A so that it fits into Program B, or reads an audio file’s header in order to find out where and how it keeps the data, or outputs a graph as a PostScript file. Designing C libraries to devour small tasks like these can be immensely rewarding.

You should work up to solving problems that are sufficiently complicated to require drawing out the program’s structure on paper beforehand. One professional programmer, whose name I’ve forgotten, once gave the advice not to release your first three proper bits of software: they’re just for practice. Business pressures rarely allow that luxury, but the advice is interesting because, even with experience, you often learn to program something well only by doing it badly (or seeing it done badly) first.

Learn a bit about how the Internet works.

A series of exercises for the reader.

  1. Learning from tutorials found using a popular search engine, write a basic HTML document in Notepad. Add some markup to place some headings. Now, write and reference a CSS stylesheet declaring a sans-serif font, tasteful colours, generous line-spacing and margins, so it doesn’t look like it was put together by a college student using Netscape in 1996.
  2. Install Apache and PHP on your own computer, and get a web page up and running in this environment. Use PHP to add an interactive dimension: a simple comment board, something that spews out the first thousand prime numbers, or something that can process an HTML form.
  3. Now learn a bit about how the Internet works. Look at TCP/IP, how DNS resolves domain names into IP addresses, and how HTTP pulls resources over the World Wide Web from one computer to another.

Congratulations! In about half a day, you have obtained Web design experience. No matter what you specialise in, this information is essential for three reasons. Principally, because the Internet is the new world of commerce, and fixing shitty websites is where a good deal of the money in computing is to be made. Secondly, no matter which job you land, knowing how to format arbitrary data as a pretty HTML document is one of the most important, useful, and transferable skills you’ll ever acquire. Lastly, the Web is the gateway to writing your own web site, blogging, and thus publicising yourself and your work. Knowing what you’re doing won’t do you any harm.

PHP, by the way, is another controversial choice of language, and it may cause a few engineers to baulk: the deeper you get, the nastier and more inconsistent it becomes, and its error reporting can be unhelpful. The definitive rant about it is here, so I needn’t go on. If you want to go for the latest cool language (Python / Ruby on Rails / Go) to see what the bleeding edge looks like, you have everybody’s blessing. However, the pure ease with which PHP slots into Apache, its simplicity of installation, and the quality of its online documentation will subtract any pain from this exercise that the language itself could add.

In one day, you’ve doubled your chances of getting through the Google interview. (On that subject, read this. It isn’t meant to be scary.)

Keep an eye on technical blogs.

Interviews will go better if you are attuned to technology and engineering culture, and regularly read informed opinion. engadget.com is an up-to-the-minute technology site with educated, professional reviewers. Joel Spolsky’s blog (joelonsoftware.com) started as a series of essays on the craft of programming, but it’s now more about running a software business, mirroring the career trajectory of its author. Its spin-off book is of significant cultural importance, and is worth a read if you can find a copy. Jeff Atwood’s codinghorror.com is generally a good read. thedailywtf.com is a bit of light relief, and might even serve as a training aid. Ben Goldacre at badscience.net is one of the best-connected scientists in the UK, and his linklog is full of interesting technical material.

In terms of paper magazines, Wired is often entertaining, but tries too hard to be a chic lifestyle magazine. Make Magazine is more fun: rather like a grown-up Blue Peter.

These are some of my favourites; find your own.

Get your first job.

Here we go. Network like crazy. Talk to everybody. Arrange random encounters. Join the AES or the IET or the BCS, and hang out at their lectures. Don’t take ‘maybe’ for an answer: your career is far more important to you than it is to the person you’re talking to, so make it your responsibility to pursue them. Engineers and managers are very busy people, and they might forget about you. Ask when you can expect to hear back from them, and phone or email them on that day.

I’m not going to tell you how to write your CV. Just make sure it’s up to date, lean, customised to the job you’re applying for, and front-loaded with the most relevant projects you’ve been working on, even if you’ve been doing them only for your own education. Your work experience doesn’t have to align with the specified requirements exactly: a decent employer will invite you for a phone screen because they think you’re interesting, and this is when you can prove to them that you can do the job. If they give you an outright ‘no’ even without phone-screening you, you don’t want to work for them anyway.

In my experience, smaller companies (between 20 and 100 people) are the best at giving graduates job opportunities without prior experience. This is because they can’t afford Human Resources departments. In their most pernicious form, Human Resources departments doom their organisations to eternal mediocrity. Non-experts vet CVs for jobs they can’t possibly understand. Procedures are put in place in the name of equal opportunities that, perversely, stop people with unusual CVs getting interviews because they do not have the appropriate certification. Small companies, on the other hand, are generally newer, so they still have agility and risk built into their business model. They have no use for bureaucracy, they’re more likely to see your different qualifications as assets, you’re more likely to speak directly to the person who can help you, and they’ll be far more willing to take a chance.

A few organisations have graduate training schemes for engineers, and will train you to CEng status no matter what your background, but in my experience these are usually defence companies. Many engineers I know started off in such schemes, but just as many people might have ideological problems doing so.

Keeping your first job, and getting your second.

Being a good diplomat will get you further in your career than being good at making stuff. Being a diplomat means making allies, appreciating the different people around you in all their wonder and their flaws, and not losing sight of your integrity. Early on in your career, people around you will be second-guessing your motives, because your bosses and peers will be concerned that anything you do might become their responsibility. You don’t have to be perfect, but if you prove yourself honest, and impart any news you have objectively rather than trying to hide or gild it, people will come to trust you.

The world of work is a subtle environment. More often than not, you’ll be dropped into a world rife with unspoken and undocumented working practices, armed with an insufficiently detailed specification and a series of operating constraints that people already take for granted, and then given some latitude to find your way. Somebody might be managing the architecture of your project so that the different people working on different parts of it eventually produce a coherent whole; meanwhile, your new MD may be interfering with your working practices in an attempt to make your company more competitive. These people will annoy you: that’s their job. However, unless they are genuinely idiots, you will learn to appreciate them and their processes. You will sacrifice some creative freedom, but in return you gain a much greater chance of success, and some protection if it all goes wrong. If you are sinking two or three years of your life into a design project, it’s good to know that it’s being looked after, and what you’re designing will probably sell.

Often, things are going best when it feels most like you’re about to be sacked. How people behave in the twilight of failure is the strongest indicator of their strength of character. One day, you will be shouted or sworn at. Don’t take it personally, and see it for what it is: a sign that a colleague is having a bad day and needs to be left alone. At some point, your boss will tell you off. Unless you’re actually being laid off, turn into receive mode, be thankful that you are receiving professional advice (no matter how unfair or redundant it might seem), and treat it as a great compliment. Everybody is human. If we criticise too rashly, it is because we want to teach but lack the patience. If we are sometimes disappointed, it is only because we have set our expectations so high.

I say this because I’ve genuinely found it to be true, but you might run across a truly crap job, or a truly evil manager. There is plenty of free guidance out there about such circumstances, and I wouldn’t consider myself an expert. But consider this: the drama might have originated in your own head. Sometimes it’s hard to tell the difference between a paranoid incompetent psychopath with an ulterior motive, and a good, honest person with rusty communication skills who has been poorly briefed, has not slept well, and is desperately trying to give the appearance of coping. Your career will depend on giving these people the benefit of the doubt, supporting them, and sometimes managing from below or leading very gently from behind. It will depend on keeping in touch with the colleague who will rescue you from a shambles when they find a better job. It will depend on building your experience, satisfying customers and managers, and knowing how to let them down when you inevitably must.

Be ready for this, and enjoy yourself. It’s fun.


I’ve been asked by an undergraduate, who is not taking a computer science course, how he might become a software developer. It’s a complicated field, and there is no simple answer. So, with apologies, I present a two-part posting. The first part describes the environment that led me to take this path, and why I can no longer advise it. The second part suggests how people might take it today.

Born to RUN

10 PRINT "HELLO"
20 GOTO 10

This program taught me three lessons. The first is that any popular home microcomputer of the 1980s can be made to do your bidding as long as your instructions are suitably precise. The second is the beginning of program structure. Line 10 makes the computer do something, but line 20 just makes it jump back to line 10. The program therefore restates the initial greeting on a new line of the screen, and so on until the user demands that it stop. The third lesson is that programming is a peculiar subset of English: the commands have clearly defined meanings that differ from human understanding. While the meaning of ‘print’ is clear if one sees the screen as a print-out, ‘goto’ is less clear until you realise it’s two separate words. Even then, it is an entirely abstract symbol that exists only in the world of computing.

It was 1984, and I was six years old. Every home computer came with some variant of the BASIC programming language in ROM. A world of possibility would begin to unfurl a second or two after the power was switched on. My program worked equally well on any of my friends’ home computers, and on the BBC Micro at school. What more does a six-year-old need to set him on the path towards becoming an engineer?

Usborne, the children’s publisher, produced wonderful books on writing BASIC that an eight-year-old could understand. I sat on the floor of the school bookshop and learned everything I could from them, and talked about little else. My parents reluctantly endured this enthusiasm, still hoping to have begotten a musician or artist rather than an engineer. After about eighteen months they relented, buying me a second-hand ZX Spectrum as a birthday present.

The best thing about the Spectrum, apart from the absurdly low cost of the computer and software, was the BASIC manual. Written by Steven Vickers (now a professor at the University of Birmingham), it remains one of the finest examples of technical writing I have encountered. Mastering Sinclair BASIC didn’t take long: all I had to do was wait for my grandfather to teach me the fundamentals of binary notation in the car one afternoon, receive my first algebra and trigonometry lessons at school a couple of years later, and I could write and understand real software.

This path led naturally on to assembly language, which is faster and more versatile than BASIC, but considerably more difficult: it is real engineering. Heaven help me, at the age of eleven I actually took a guide to Z80 assembly language on holiday with me and, with difficulty, began to understand it. It wasn’t written very well.

Meanwhile, programming was everywhere. The BBC and ITV ran magazine shows about home computers. Popular magazines would publish listings (often masochistically long) that readers would type in and run themselves. The MIDI musicians from last week’s posting were doing things with computers, synths, drum machines, and very gay make-up, and it was changing the world. When they appeared on Top of the Pops, it was accompanied by vertiginous overproduced neon visuals that could only have come from another computer. Debates raged about the future of computers in mass production and commerce. This was the mid-1980s: Thatcher had set her sights on labour-heavy, unionised industries, and computers were rendering some traditional skills redundant. The London Stock Exchange was computerised overnight in 1986. News International and TV-am were among the first big organisations to be dragged by their owners into a part-political, part-technological maelstrom, and they were never the same again. In a very short space of time, computers actually had taken over the world: culture, finance, and political debate.

As a child in this environment, it was impossible not to be excited: here was a technology only slightly older than I was, uprooting everything in its path. I had already learned that it could be tamed, and the rest was waiting to be understood one piece at a time.

Home computing moves on

There were equally significant commercial forces at work in the home computer industry. In about 1988, I remember laughing with derision at an Amstrad PC at a friend’s house when he explained that he had to load BASIC from a floppy disc. In fact, the device was an expensive paperweight until you supplied it with an operating system – CP/M, GEM, or similar – that had to be purchased separately. It was a proper, big computer, and it broke the chain of consequence for me. How can you write software for it? How did they? Why did their instruction manuals contain no example programs? When I eventually retired my Spectrum and got a better computer in the early Nineties, I chose one of the few that still had BASIC contained on a ROM, a feature that was getting rarer as IBM-compatible PCs took over the home computing market.

At about this time, commercial pressures were changing the British home computer industry. Sinclair had run out of money after two projects that were disastrous for different reasons (his QL business computer and his electric car, the C5) and had sold everything to Amstrad. Alan Sugar, then and still a wide boy with an eye for a fast buck, took the Sinclair legacy, threw away everything innovative, closed down all development and, within two years, Sinclair’s original nosedive completed its trajectory. Acorn had overproduced its latest range of computers, overreached itself and, floundering similarly, sold a majority share of the company to Olivetti. Acorn hung around for another decade, quietly doing glorious things including spinning off ARM Holdings (still shaping the world today), but they never recaptured the market they had once dominated. In 1990, the British home computer industry had faded; by 1998 it was dead.

Ten years after a short era when 16-year-olds made the news by writing games in their bedrooms and out-earning their parents, programming had gone from being something that was an easy and inevitable consequence of childhood to something mysterious that had to be sought out, and could be learned only by handing over large sums of money for extra software and extra books. Children who learned programming in this wilderness would have done so using the dreadful implementations of BASIC provided as supplements to MS-DOS, or by messing around in Microsoft Excel, building up simple functions one line at a time.

I moved on, combining and assimilating my computer skills with the audio engineering I picked up at university. My programming hobby became academic project experience, which then became a graduate job just before the dot-com bust, which then turned into a career. I had been lucky to be born at the right time, when these opportunities were unfolding almost in spite of me.

However, I started noticing very young computer geeks again. Two or three easily accessible paths into programming, all relying on a Web browser combined with open-source software, had opened up almost without my noticing. The web page had become a powerfully interconnected replacement for the computer desktop. Learning the right language could unleash an updated version of those same magical powers I had discovered at a school computer in 1984.

Open-source software and the Web still provide the most cost-effective, high-impact, and compelling route into programming, but they are not the only route. These possibilities will have to be the subject of my next posting.


The Spectrum’s keyword prompt: once a gateway to a happier world.