2. Introduction to MIDI

MIDI (Musical Instrument Digital Interface) is a control language that enables electronic instruments to communicate with each other. In days gone by, if you wanted to play a sound from a given synthesiser, you had to physically play that specific synth’s keyboard - this resulted in performers being surrounded by huge arrays of keyboards.

MIDI was originally designed as a means of solving this problem – two MIDI equipped synths could be connected together and either of them played from either keyboard, or even both simultaneously. This effectively separated the keyboard itself from the sound generating electronics, opening up the possibility of a single MIDI keyboard controlling a whole rack of ‘sound modules’. Directing keyboard information to different sound modules was only the beginning though:

MIDI can actually carry many different types of instructions, the most obvious being the ‘Note On’ command, which is sent when you press a key, followed by ‘Note Off’ when you release it. There’s room in the language for up to 128 different commands, primarily covering anything you can do with a keyboard – pitch bend, patch change, modulation, volume, velocity (how hard a key is hit) and aftertouch, for example – but also parameters more specific to the device in question, such as filter cut-off and resonance, oscillator frequency and so on.

Before long, it was realised that you could use this language to control many other things – lighting rigs, mixers and video, for instance. Most exciting, however, was the idea that as MIDI was simply a list of commands, it would be simple to edit if you could store and view the data – it was not long before the ‘MIDI sequencer’ was born.

Early analogue sequencers allowed a series of notes to be set up in a cycle using different voltages for different pitches – these sequences were typically 8 or 16 notes long, and can be found all over early electronic music. A good MIDI sequencer, though, can record any MIDI data at all being sent to it from a keyboard, MIDI equipped drum pad, mixer and so on, and present it for editing in any way you choose – far beyond analogue sequencers’ capabilities, but the name remains nonetheless. MIDI sequencers initially took the form of standalone hardware, but these days, of course, computer software dominates.
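To make the idea of MIDI as a list of commands concrete, here is a small sketch in Python that builds the raw bytes of a Note On and Note Off message. A MIDI channel message is just a status byte (command plus channel) followed by one or two data bytes, each in the range 0–127. The helper function names here are illustrative, not from any particular library.

```python
# Status nibbles from the MIDI 1.0 specification:
NOTE_ON = 0x90   # Note On, channels 0-15 in the low nibble
NOTE_OFF = 0x80  # Note Off

def note_on(channel, note, velocity):
    """Build the three raw bytes of a Note On message.
    channel: 0-15, note: 0-127 (60 = middle C), velocity: 0-127."""
    return bytes([NOTE_ON | channel, note & 0x7F, velocity & 0x7F])

def note_off(channel, note, velocity=0):
    """Build the matching Note Off message."""
    return bytes([NOTE_OFF | channel, note & 0x7F, velocity & 0x7F])

# Middle C on channel 1, struck moderately hard:
msg = note_on(0, 60, 100)
print(msg.hex())  # -> "903c64"
```

Sending those three bytes down a MIDI cable (or through a software port) is all that happens when you press a key – the receiving device decides what sound, if any, to make.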

An important but frequently overlooked or misunderstood point is that MIDI data does not contain any sound. It’s true that if you play a note on a keyboard connected to a sound module you will hear a sound, but what you’re hearing is the result of a MIDI instruction, not MIDI itself. By way of analogy, MIDI information is very much like a sheet of musical notation – the notation contains information like pitch, duration, expression and tempo, just as MIDI does, but try to listen to a sheet of music and it’ll look at you very blankly, or maybe rustle a little if you’re lucky! If you want to listen to the notation then you have to give it to a musician who understands it, who will then be able to play it back on their instrument. The particular instrument the musician uses is not necessarily important – providing it’s capable of reproducing the correct range of pitches, you could choose pretty much any instrument you liked. It’s the same with MIDI – you can’t hear it, and if you tried it would just sound a bit like a fax machine. But you can send MIDI information to a device that understands it, and that device will then be able to respond to it – in the case of Note On/Off commands, by making a sound. The sound it makes is not predetermined (although you can send ‘Patch Change’ or ‘Program Change’ commands with MIDI if you want), so you can choose any sound the device can produce – and, for that matter, any MIDI device to produce it.
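The ‘Program Change’ command mentioned above is even simpler than Note On: just a status byte and a single data byte selecting which of the device’s sounds (programs 0–127) should respond to subsequent notes. As a hedged sketch, again building raw bytes rather than using any real MIDI library:

```python
PROGRAM_CHANGE = 0xC0  # status nibble for Program Change

def program_change(channel, program):
    """Build the two raw bytes of a Program Change message.
    channel: 0-15, program: 0-127 (which sound the device should use)."""
    return bytes([PROGRAM_CHANGE | channel, program & 0x7F])

# Ask whatever is listening on channel 1 to switch to program 40 -
# what that actually sounds like is entirely up to the receiving device:
print(program_change(0, 40).hex())  # -> "c028"
```

Note that the message only names a program number; two different sound modules receiving the same bytes may well produce completely different sounds, which is exactly the point the analogy above is making.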
