NanoMuse Blog by Randy Chance

Music, Art, Guitars and cool stuff

A Brief History of Electronic Music October 4, 2009

Filed under: electronic music — nanomuse @ 5:36 pm

A BRIEF TUTORIAL ON THE DEVELOPMENT OF ELECTRONIC MUSIC

Early Developments – The Tape Recorder –

Although human attempts to create sounds not found in nature predate the invention of the phonograph(!), it was really the development of the analogue tape recorder that created a working medium allowing musicians to edit sound events in a meaningful way for the first time.

The first important contribution of the tape recorder was that it allowed musicians to listen to what they had just done without having to “mother” and “stamp” a master recording. Because direct-to-disc analogue recordings up through the 1940s had to go through this involved process in order to be listenable, musicians had to set a date to come back into the studio and listen to playback. With tape, no processing was needed – just rewind the tape.

Several other advantages of tape recording soon evolved, including “punching in” (re-recording a portion of a piece of music in order to make changes or fix mistakes), “overdubbing” (recording additional tracks in sync with an existing piece of music), using variable-speed oscillators (to change the pitch and speed of a musical passage), running tape backwards, and processing the signal with reverb and other effects. The tape recorder created a whole new era in sound manipulation.

The ability to physically cut tape allowed musicians to remove a sound from its context and place it in a new one, eliminating certain aspects of that sound and letting the ear hear only certain other aspects. For instance, if you record a church bell and eliminate the first part, or “attack”, of the sound, it doesn’t sound like a church bell at all. It sounds like something entirely new. This resulted in a preoccupation with sound events for their own sake, which led to a concept of electronic music that we still work in today.

Tape recorder quality increased dramatically, partly due to the popularity of rock music and its preoccupation with sound, and partly due to the space program’s demand for small, light, durable electronic equipment. From the first tape recorder models developed by the Germans in World War Two, to the first Ampex machines that Bing Crosby used commercially on his radio show, to Les Paul’s innovative eight-track recorder of the late fifties, tape recorders suddenly seemed to be everywhere.

As the 1950s gave way to the ’60s, this preoccupation with tape manipulation introduced a new art form: electronic music. Musicians would often record sounds found in nature and manipulate their characteristics (a musical form called musique concrète, because one was starting with concrete, or physical, sounds), or they would use electronic devices that created sounds not found in nature (a musical form that grew into “synthesized music”).

The Synthesizer –

A musician who plays a traditional instrument can control his sequence of notes, and he has a certain, sometimes mesmerizing control over the tonal quality of those notes. But what if he could get down to a sort of “microscopic” level, as it were, and literally be in complete control of every aspect of each sound event? This is the question that Donald Buchla and Robert A. Moog attempted to answer in the early 1960s as they struggled to expand a musician’s ability to organize sound.

Each in his turn developed a device that contained a separate module to control each individual variable of sound, and then allowed all of those modules to function together under common voltage control. Because this device generated an artificial waveform to create sound, it was dubbed the “synthesizer”.

Sound Construction –

What is sound? Sound is the condensation and rarefaction of air: air “bunching up” and air “thinning out”. If you could see air, it would look something like waves of water “bunching up” and “thinning out” when something disturbs them – for instance, when you throw a pebble into a pond that was previously still and unruffled.

A microphone takes the fluctuations in air that we call “sound” and turns them into fluctuations in an electrical signal. The fluctuations in the electrical signal are just like the fluctuations in the air – in other words, they are “analogous” – the electricity condenses (“bunches up”) and rarefies (“thins out”) in the same way the air did. This kind of signal is called “analogue” because it is analogous to the waveforms of air that it represents.

A speaker vibrates in a way that makes the air move just as it originally did before the microphone picked it up. In that sense, the speaker’s job is exactly the opposite of the microphone’s: to turn electrical signal back into moving air.

A synthesizer simply eliminates the first step. It does not begin by turning moving air waveforms into electrical signal waveforms; it simply begins by generating an electrical signal which the speakers, by their vibration, will turn into sound, or moving air. To do this, it uses an oscillator, which oscillates (goes back and forth) in its electrical output and generates electrical vibration. Once the oscillators have created waveforms, the synthesizer has various components that can modify each aspect of the sound, to give, at least theoretically, “total control” over all parameters of a sound event. The speakers then turn that electrical signal into moving air – sound.
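
For the curious, here is a minimal sketch in Python of what an oscillator is doing: generating a repeating waveform as numbers and handing it to the speakers as an audio file. The numpy library, the 440 Hz pitch, and the one-second length are just assumptions for the example, not a description of any particular synthesizer.

    # A toy oscillator: compute one second of a 440 Hz sine wave and
    # write it out as a mono WAV file that any speaker system can play.
    import wave
    import numpy as np

    sample_rate = 44100                     # samples per second
    frequency = 440.0                       # cycles per second (A above middle C)
    t = np.arange(sample_rate) / sample_rate
    signal = 0.5 * np.sin(2 * np.pi * frequency * t)

    # Scale to 16-bit integers - the "electrical signal" in numeric form.
    samples = (signal * 32767).astype(np.int16)
    with wave.open("sine_a440.wav", "wb") as f:
        f.setnchannels(1)                   # mono
        f.setsampwidth(2)                   # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(samples.tobytes())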

Sound has two primary characteristics: frequency (pitch) and amplitude (volume). Frequency is simply how often the waveform repeats, or oscillates. Amplitude is the amount of air that is moved with each repetition. Every musical pitch has its specific number of repetitions per second. “A440” (the A above middle C) is so named because that pitch repeats 440 times, or “cycles”, per second. The range of human hearing is about 20 to 20,000 cycles per second. Cycles per second are also referred to as Hertz, named in honour of the physicist Heinrich Hertz.
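
As a rough illustration of “every pitch has its own number of cycles per second”, here are a few lines of Python using the standard equal-temperament formula; the helper name and the example notes are mine, chosen only for illustration.

    # Frequencies relative to A440 in equal temperament: each semitone
    # multiplies the frequency by the twelfth root of two.
    def note_frequency(semitones_from_a440: int) -> float:
        return 440.0 * (2 ** (semitones_from_a440 / 12))

    print(note_frequency(0))     # A above middle C -> 440.0 Hz
    print(note_frequency(-9))    # middle C         -> about 261.6 Hz
    print(note_frequency(12))    # A an octave up   -> 880.0 Hz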

The amplitude, or size, of each vibration (in other words, the amount of air moved in each wave) is generally measured in decibels. The pain threshold for sound volume is generally thought to be about 120 decibels.
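
To make the decibel figure a little more concrete, here is a small worked example in Python. Decibels compare a sound pressure to a reference pressure on a logarithmic scale; the 20-micropascal reference used here is the conventional threshold of hearing, and the example pressures are only illustrative.

    # Sound pressure level in decibels: 20 * log10(pressure / reference).
    import math

    def db_spl(pressure_pa: float, reference_pa: float = 20e-6) -> float:
        return 20 * math.log10(pressure_pa / reference_pa)

    print(db_spl(20e-6))   # threshold of hearing      -> 0 dB
    print(db_spl(0.02))    # ordinary conversation     -> about 60 dB
    print(db_spl(20.0))    # around the pain threshold -> about 120 dB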

The Digital Revolution –

The advent of microprocessors in electronic circuitry in the ’60s and ’70s opened up a whole new world of control over sound. A digital synthesizer simply measures the curve of a waveform numerically and stores it as a numerical file. Now synth parameters could be poked into digital memory, stored, and brought up again whenever they were needed!
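
Here is a toy illustration, in Python, of what “poking synth parameters into digital memory” amounts to: a patch becomes a small numeric file that can be stored and recalled at will. The parameter names and values are invented for the example.

    # Store an (invented) patch as a file, then recall it later.
    import json

    patch = {
        "oscillator_waveform": "sawtooth",
        "filter_cutoff_hz": 1200,
        "envelope_attack_ms": 5,
        "envelope_release_ms": 300,
    }

    with open("brass_patch.json", "w") as f:
        json.dump(patch, f)          # parameters into digital memory

    with open("brass_patch.json") as f:
        recalled = json.load(f)      # brought up again whenever needed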

Transferring control of synth parameters into the digital domain meant that changes could take place in real time, as the music was playing. Whether it was the waveform itself, changes in pitch or volume, or just the amount of reverb, a computer could effect changes along the timeline of a musical composition, assigning the different variables of sound to numeric quantities as the piece was being played.
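
A sketch of what that real-time control looks like in practice: a list of timed parameter changes that a computer steps through as the piece plays. The event list here (a simple volume automation) is invented for the example.

    # Volume automation along a timeline: (time in seconds, level).
    volume_events = [(0.0, 0.8), (4.0, 0.5), (8.0, 1.0)]

    def volume_at(time_s: float) -> float:
        """Return the most recently scheduled volume at a point in the piece."""
        level = volume_events[0][1]
        for event_time, event_level in volume_events:
            if event_time <= time_s:
                level = event_level
        return level

    print(volume_at(5.0))   # -> 0.5 (the change scheduled at 4 seconds)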

Sampling –

By the late ’70s and early ’80s, pioneering companies such as New England Digital (makers of the Synclavier), Fairlight, and Kurzweil had developed methods for analyzing incoming analogue signals and storing the data as a digital file. These methods for analyzing, or “sampling”, sound allowed composers to use any recorded sound as their waveform source. The sampler was born! A sampler is exactly like a synthesizer, except that instead of using an artificially generated waveform for its sound source, it uses a digital file of a recorded sound.
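
The core idea of a sampler can be sketched in a few lines of Python: start from a recording instead of a generated waveform, and replay it faster or slower to shift its pitch. The file name “bell.wav” is hypothetical, and the sketch assumes a mono, 16-bit recording and the numpy library.

    # Load a recorded sound and use it as the waveform source.
    import wave
    import numpy as np

    with wave.open("bell.wav", "rb") as f:
        recorded = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)

    def play_at_pitch(sample: np.ndarray, semitones: float) -> np.ndarray:
        """Resample the recording so it plays the given number of semitones higher."""
        ratio = 2 ** (semitones / 12)
        indices = np.arange(0, len(sample), ratio).astype(int)
        return sample[indices]

    fifth_up = play_at_pitch(recorded, 7)   # the same bell, a fifth higher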

As analogue-to-digital (A-to-D) and digital-to-analogue (D-to-A) converters increased in quality, and microprocessors became smaller and less expensive, an entire range of performance synthesizers and samplers emerged in the ’80s. Even the older, analogue approach to synth design could benefit from these advances, because an analogue synth, whose warmth and “realism” many consider superior to digital synths, could be controlled digitally and could have its sound parameters stored in digital memory.

MIDI –

In the early ’80s, several of the key manufacturers of synthesizers and other digital music equipment met to discuss a standardized medium by which all digital music equipment could share a kind of common language. They reasoned that if word processors could save and load ASCII and text files in a way that broke down barriers between different types of word processing software and hardware (for instance, transferring a file between a Macintosh and an IBM computer), why couldn’t, say, middle C on a Yamaha synthesizer have the same numerical value as middle C on an Oberheim synth?

As this format was standardized, MIDI was born. MIDI (Musical Instrument Digital Interface) is a common means by which various computer music devices can “converse” with one another. It is a kind of programming language and music notation at the same time. At first, MIDI was just a way of using one synth to control another. But as programmers quickly began to realize the possibilities, endless programs appeared: everything from patch librarians that allowed musicians to use a personal computer to edit and store synth parameters, to programs that translated a MIDI keyboard performance into music notation and printed it out, to “sequencing” programs that allowed musicians to compose music on a computer, making use of banks of synthesizers to perform the music.

The original MIDI format specified 16 MIDI channels. Information (not sound) passes through MIDI cables (5-pin DIN connectors) from one device to another. If a synth that is programmed to play a sound similar to, say, a piano is set to, say, MIDI channel 1, then all information coming through the MIDI cables and assigned to MIDI channel 1 will be “grabbed” by that particular synth, and the synth will play those notes. Another synth programmed to play, for instance, a violin sound and set to receive on MIDI channel 2 will play only those notes meant for the violin, and so forth. In this way, a composer with a computer and a few synthesizers has a whole orchestra at his disposal.
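
In code, addressing synths by channel looks something like the following Python sketch, using the third-party mido library (note that MIDI channels are numbered 0 to 15 in software, so “channel 1” on a front panel is channel=0 here; the output port is simply whatever your interface reports).

    # Send notes to two different synths by MIDI channel.
    import mido

    out = mido.open_output()   # default MIDI output port

    # Middle C to the "piano" synth listening on channel 1...
    out.send(mido.Message('note_on', note=60, velocity=100, channel=0))
    # ...and the E above it to the "violin" synth listening on channel 2.
    out.send(mido.Message('note_on', note=64, velocity=80, channel=1))

    out.send(mido.Message('note_off', note=60, channel=0))
    out.send(mido.Message('note_off', note=64, channel=1))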

PCM Sampling and General MIDI –

In the maze of equipment that MIDI enabled, it soon became obvious that not every musician wanted to spend the time needed to create and record every sound source necessary for a composition, store those sounds on disc, catalogue them, and so on. By the mid-eighties a new job had evolved: that of “sound designer”. Some musicians developed lucrative careers doing nothing but creating and editing sounds for other musicians. PCM (Pulse Code Modulation) was established as the standard method of encoding sampled sound data. A new generation of samplers arose that had no ability to record their own sounds. This drawback was balanced by the ease with which musicians could call up sounds that had been recorded “at the factory” and simply assign them to MIDI channels, synth parameters, key groupings, etc. Recordable samplers remain popular to this day among musicians who wish to create their own sound sources “from scratch”, but if one just wants “a piano sound”, “a violin sound”, etc., the PCM sample-playback module proves an excellent solution.

This “generic sound source” attitude was taken further in the ’90s, with a format for assigning certain sounds to certain patch numbers on a synthesizer. Thus, patch number one on any synthesizer would be a piano patch, and so on. This system is called “General MIDI”, and it allows compositions to be played on different sequencers without having to first figure out which sound source will play this or that particular sequence within a composition. It is an excellent system for certain applications (for example, computer games, or music “classics” or “standards” downloaded off the Internet). It can be ignored any time it isn’t appropriate, and one can assign whatever patches one wants.
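
In MIDI terms, selecting a General MIDI sound is just a “program change” message; under the GM patch map, patch 1 is always an acoustic grand piano and patch 41 is always a violin. A minimal sketch, again with the mido library and zero-based program numbers:

    # Pick General MIDI patches on two channels.
    import mido

    out = mido.open_output()
    out.send(mido.Message('program_change', program=0, channel=0))    # GM patch 1: Acoustic Grand Piano
    out.send(mido.Message('program_change', program=40, channel=1))   # GM patch 41: Violin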

MIDI interfaces today often have far more than 16 MIDI channels (128 is not uncommon!), and mixing boards, effects units, and digital audio recorders now generally come equipped with MIDI in, out, and thru jacks as a standard feature. Even stage lighting systems can synchronize through the MIDI format. Though many musicians dislike MIDI because it can make music sound less natural (and others dislike it because it’s far too limited technically), one should realize that MIDI was never designed as a method for every musical process or every musical idiom. It has its purposes, and it continues to evolve.

Digital Audio –

From its humble beginnings in the sixties and seventies, digital audio recording has all but eclipsed analogue recording in the nineties in many areas of the music business. Essentially the same concept as sampling (but without assigning sound events to MIDI channels or key groupings), digital audio converts an analogue sound signal into a numeric file through an analogue-to-digital (A-to-D) converter, and then converts it back again through a digital-to-analogue (D-to-A) converter. The standard sampling rate for CD-quality (Compact Disc) music is 44.1 kilohertz. That means the incoming signal is sampled, or analyzed, forty-four thousand one hundred times per second(!). This creates a file of roughly five megabytes per mono channel, per minute (about ten megabytes per minute for stereo). At 16-bit resolution, this produces an acceptable musical quality. In recent years, digital audio has jumped to 20 bits and 24 bits in professional recording environments, for even higher sound quality.
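
The data-rate arithmetic in that paragraph is easy to reproduce; the figures below are simply 44,100 samples per second times two bytes (16 bits) per sample.

    # CD-quality data rate: 44,100 samples/second, 2 bytes per sample.
    sample_rate = 44100
    bytes_per_sample = 2      # 16-bit resolution
    seconds = 60

    mono_bytes = sample_rate * bytes_per_sample * seconds
    print(mono_bytes / 1_000_000)       # about 5.3 MB per minute, mono
    print(2 * mono_bytes / 1_000_000)   # about 10.6 MB per minute, stereo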

Although some people feel that analogue sound has a warmer, bigger quality to it, the digital audio process allows musicians to make use of cut, copy, and paste functions, along with all the other editing tools that the digital process provides. Music recorded as digital files suffers no tape hiss or distortion, and no deterioration from one generation to another (a copy of a digital file is identical to the original – it’s just a series of numbers that are translated into sound by the D-to-A converters). And, as with MIDI, it continues to evolve today. Some would say it’s really only just begun.

Since the first commercial use of the tape recorder in the late forties, the advances in electronic music have been absolutely staggering. Synthesizers in the sixties took up an entire wall, cost many thousands of dollars, could usually produce only one sound event at a time, had keyboards that could trigger only one note at a time (chords had to be built up through overdubbing on a tape recorder) and were not sensitive to changes in volume, and after a sound event had been programmed on a synthesizer and recorded on tape, every parameter of that sound had to be literally written down on a chart with a pencil in order to be reproduced again after the synth had created other sounds! Today, a sound module infinitely more advanced and as small as a book (with a digital memory of over a hundred patches) can be purchased used for a couple hundred dollars! In fact, it’s safe to say that even the most visionary, progressive thinkers in this field thirty years ago couldn’t imagine in their wildest fantasies how far things have progressed today.

And as for the next millennium???

How would you like to go down to the music store and purchase an android like “Data” [from Star Trek: The Next Generation] to be your keyboard player, and sit around talking to him about the kind of “feel” you’d like on a certain piece of music… or maybe go have an operation where a certain type of input jack is installed in the back of your head, so that everything you think is immediately transferred to some kind of digital memory medium…