COMPUTER MUSIC

Bell Labs & Roots of the Digital World

Ancient Origins

Recalling the Ancient Discoveries video from the beginning of the class, there were many devices centuries earlier that had some of the characteristics of computers:  Programmability, automation, and storage/recall.

  • 300 BC
    Indian mathematician/scholar/musician Pingala first described the binary number system. He also conceived the notion of a binary code similar to Morse code.
  • 87 BC
    The Antikythera mechanism: A clockwork, analog computer designed and built in Rhodes. The mechanism contained a differential gear and was capable of tracking the relative positions of all then-known heavenly bodies. It is considered to be the first analog computer.
  • c. 60 AD
    Heron of Alexandria made numerous inventions, including "Sequence Control," in which the operator set a machine running and it then followed a series of instructions on its own. This was, essentially, the first computer program. He also made numerous innovations in the field of automata, which are important steps in the development of robotics.
  • 1206
    Al-Jazari invented numerous automata and other technological innovations. One of these was a design for a programmable human-shaped mannequin. This seems to have been the first serious scientific plan for a robot.
  • 1834
    Charles Babbage and his colleague Ada Lovelace conceived and began to design the Analytical Engine. Programs for it were to be stored in memory in the form of punched cards.

One of the most important centers for the development of digital technology in the 20th century was Bell Telephone Laboratories in New Jersey.  As the research arm of the phone company, Bell's interest was in improving communication technology in general.

 

It was at Bell that researcher Harry Nyquist developed some of the most fundamental principles of handling digital data, including his sampling theorem, which is used in audio to determine how sampling rate affects frequency response.  Amazingly, Nyquist published this work in 1928 (!).  He had no notion of computers or music specifically, but his work became applicable to both years later.
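To make the theorem concrete: a sample rate of fs can faithfully represent frequencies only up to fs/2, now called the Nyquist frequency.  Here is a minimal Python sketch (using a hypothetical 8,000 Hz sample rate chosen purely for illustration, not anything from Nyquist's paper) showing that a tone above this limit produces exactly the same samples as a lower, "aliased" tone:

```python
import numpy as np

# Illustrative sketch of the Nyquist limit (hypothetical values).
fs = 8000                # sample rate in Hz
nyquist = fs / 2         # highest representable frequency: 4000 Hz

t = np.arange(0, 0.01, 1 / fs)            # 10 ms of sample times
too_high = np.cos(2 * np.pi * 5000 * t)   # a 5000 Hz tone, above the limit
alias = np.cos(2 * np.pi * 3000 * t)      # the 3000 Hz tone it "folds down" to (8000 - 5000)

print(nyquist)                            # 4000.0
print(np.allclose(too_high, alias))       # True: once sampled, the two are indistinguishable
```

This is why CD audio, sampled 44,100 times per second, can represent frequencies up to about 22,050 Hz, just beyond the range of human hearing.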

After Nyquist, the work of Claude Shannon laid more groundwork for computer programming.  His 1937 master's thesis (written at age 21!) demonstrated how mathematical logic could be used to control electronic switches and gates; this is essentially how software is used to control a CPU and manage memory.  His 1948 paper A Mathematical Theory of Communication is the basis for the modern field of information theory, and even introduced the term "bit."

Max Mathews

Max Mathews was also a researcher at Bell in the 1950s.  Although music was of no interest to Bell at the time, he was allowed to come in on evenings and weekends to use the facilities to investigate the application of computers to music.  Mathews wrote MUSIC I, the first software to synthesize a musical piece on a computer, and continued developing it over the next 10 years.  Even today, Mathews is often called "the father of computer music."

Mathews and other early practitioners of computer music had to write all their own software.  Since there was no portability of computer code yet, all their work was tied to the specific hardware it was written for.

The IBM 704 computer, on which Mathews' MUSIC I was written.

There were no graphic user interfaces.  Everything was done with text, and the composer had to learn the cryptic commands of the programming language.  While this is not unlike how computer programming is done today, it is very unlike how computer music is made today, which relies heavily on graphics.

The tape storage machines of the IBM 704.

There was also no real-time digital-to-analog conversion.  This meant that the composer could not hear the piece while creating it.  Once a piece was finished, the digital tape holding the computed output had to be taken to a separate building to be converted to analog audio tape, a process that could take up to a week!

Mathews also did early work in voice synthesis using computers.  In fact, author Arthur C. Clarke heard this piece while visiting Bell Labs, and it later found its way into Stanley Kubrick's film 2001: A Space Odyssey.

Other Computer Music Centers

As had been the case with analog electronic music studios, computer music centers began appearing.  Following their historically different funding models, European centers were usually government-run, while U.S. studios were connected with universities, and occasionally industry (such as Bell and IBM).

IRCAM

("eer-cam")

In 1977, the French government established the world's premier center for computer music research, IRCAM (Institute for Research and Coordination in Acoustics/Music).

The first director of IRCAM was composer and conductor Pierre Boulez.  This is significant because it allowed the organization to be led by a musician, not an administrator, bureaucrat, or business leader, as was often the case.

To this day, IRCAM is still recognized as one of the most important places for research and creation of computer music.

The Importance of Research Centers

It is very important to recognize the role of these types of research centers in our creative lives.  The technology we use on a daily basis was developed by artists, scientists, and administrators in facilities like Bell, IRCAM, and others.  As was the case with digital technology at Bell, years of research go into developing a new paradigm.  Although we purchase products from software companies, they are typically not the ones 'inventing' the fundamental technology their products use.

For example, FM (frequency modulation) synthesis is a type of synthesis that is impractical to do without digital technology, because it depends on precise, stable relationships between oscillator frequencies.  Its basic principles were developed over a number of years by John Chowning at the CCRMA (Center for Computer Research in Music and Acoustics) research center at Stanford University.
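The basic principle is simple enough to sketch in a few lines of code.  The Python example below is only an illustration (the carrier frequency, ratio, and decaying modulation index are made-up values, not Chowning's or Yamaha's actual parameters): one sine-wave "modulator" rapidly varies the phase of a sine-wave "carrier," and the depth of that modulation determines how rich the resulting spectrum is.

```python
import numpy as np
import wave

# Illustrative FM (phase-modulation) synthesis sketch; all values are hypothetical.
fs = 44100                         # sample rate in Hz
t = np.arange(int(fs * 2.0)) / fs  # 2 seconds of sample times

carrier_freq = 440.0               # carrier frequency in Hz
ratio = 2.0                        # modulator-to-carrier frequency ratio
index = 5.0 * np.exp(-3 * t)       # modulation index, decaying so brightness fades over time

modulator = index * np.sin(2 * np.pi * carrier_freq * ratio * t)
signal = np.sin(2 * np.pi * carrier_freq * t + modulator)

# Write a 16-bit mono WAV file so the result can be auditioned.
with wave.open("fm_example.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(fs)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```

Varying the ratio and the modulation index over time is what lets just two sine oscillators produce bell-like, brass-like, and electric-piano-like tones.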

When the technology was mature enough, it was licensed from Stanford by Yamaha.  Yamaha took that technology and implemented it in a product people could buy; they scaled it down for a musician to use on hardware that was more affordable than the supercomputers at CCRMA and IRCAM.

The result of that licensing was the DX7 keyboard from 1983, which was one of the first affordable ($2,000) digital synthesizers, and one of the most popular music technology products of all time.

Likewise, many of the plug-ins we use regularly were once cutting-edge research at these types of centers:  Convolution (Altiverb), physical modeling (Pianoteq), and music information retrieval (the Shazam app).

 

Although research centers are usually invisible to consumers, they are the bedrock of the tools those consumers use.  Continued support of these facilities is crucial to the ongoing economic and creative evolution of music technology.

Paul Lansky

Paul Lansky taught at Princeton University from 1969 to 2014.  His career thus spans the period that took computer music from nearly its beginning to its modern paradigm.

One of Lansky's interests was in how computers could analyze a recording, 'fish out' aspects of it, and then apply an algorithm (a process) to them.  He explored doing this with both the rhythmic elements (in Idle Chatter) and harmonic elements (in Table's Clear) of recordings he made.

In Idle Chatter (1985), Lansky unearthed the rhythmic elements of speech.  He explains the inspiration for the piece (which unfortunately gets cut off):

In Table's Clear (1992), Lansky used recordings of his family clearing the table after dinner.  He then processed them with his software to generate harmony.  Here is an edit from the full piece, which runs almost 20 minutes (available on D2L).  Listen for how the plates, pots, and silverware become more 'musical' as the piece evolves:

You may also notice the influence of Minimalism:  Everyday sound sources, longer pieces, and audible process.

Laurie Spiegel