Cinema for the Ears by Georgina Brett
“Cinema for the Ears” was the title of the concert series run by Birmingham University in the 1990s. This was where I acquired my love of electro-acoustic music.
I had heard some electronic works by Stockhausen and Cage, but a year at one of the top educational establishments for this music really got me interested. I went to my first concert alone and sat in the middle of the concert hall. The lights were off and hundreds of speakers were placed around the cinema-theatre space: at the front, back and sides, spread wide, with little tweeters overhead. Cinema for the ears was such a good description for it… music with no overt rhythm, no melody, just noises, all moving along and shifting as time runs its course while the composer sits, spinning his or her creation around the room from speaker to speaker. John Cage once said, “every noise you hear is music.” He was so right: these sounds were beautiful and so fresh.
Over the years I have thought long and hard about the meaning of this music. It is certainly not commercial, hardly anyone has heard of it, and even the wackiest of the new experimenters don’t really seem to get the point of it… but what is the point? Well, the point, it seems to me, is to celebrate sound: to really understand and get into the nature of what it is and what it does to our ears and brains. This is where I think cinema for the ears is a great phrase to describe this music. When you listen and let go of striving to understand it, the sounds become clearer and your imagination runs wild. Visual images that your mind associates with the sounds appear in rapid succession and you go on a journey: sometimes a delicate, refined journey, sometimes a journey so violent that by the end your heart is racing and you feel exhilarated by the surprise of such jerky, aggressive sounds. There is often no subject matter, just a title.
The making of music using electronic means began naturally after the invention of the gramophone and the analogue tape machine. The visionary composer Edgard Varèse talks of a need for new instruments as early as 1916; his philosophy of musical expression, to use his own term, was based on the concept of ‘organised sound’, and his works of the time reflected this: he composed his ‘Airphonic Suite’ for the (newly invented) RCA Theremin and orchestra in 1929. In an article of 1931 he writes: “The growth of musical art of any age is determined by the technological progress which parallels it… Although it is true that musicians may have ideas which hurdle these technical barriers, yet being forced to use existing instruments, their intentions remain unrealised until scientific progress comes to the rescue.” It wasn’t until 1958 that Varèse had the opportunity to write “Poème électronique” for electronic tape, which was not only a brilliant early electronic work but the first to explore the projection of sound in space.
Possibly the first electronic composition was made shortly after the Second World War, in 1948, by Pierre Schaeffer at the Radio Télévision Française in Paris. It was composed of sounds recorded at the Paris Gare du Nord train station and was aptly named “Étude aux chemins de fer”. Pierre Schaeffer, together with Pierre Henry, ran the studio for musical exploration, and this is where the signature French style of electronic music, known as ‘musique concrète’, began. This music is characterised by sounds mostly taken from nature and modernity (recorded sound), which are then collated, processed in some way and mixed together to make a piece. Technology has come a long way since then, but nowadays if a composer uses predominantly samples from the real world he or she is said to be writing in the French tradition.
The German school was where the analogue synthesiser had its infancy, and the music from Germany reflected this in the pure electronic sounds that we these days know so well from dance music and early sci-fi movies. Research began with a visit from Homer Dudley, a research physicist at Bell Telephone Laboratories, to Dr Werner Meyer-Eppler, at the time director of the department of phonetics at Bonn University. He brought with him a newly developed machine called a vocoder (voice coder). The vocoder’s possibilities were explored, and new instruments generating analogue waveforms with tone generators were invented, such as the Melochord and the Trautonium, among the first instruments to use triangle, sawtooth and sine waves to synthesise sounds.
Between 1945 and 1960 these two schools developed new music, but from two different angles. In France, research centred on how to capture natural sound, how to describe it, and how to filter, modify and then mix the sounds together. In Germany the musicians were using equipment to make new sounds from inventions such as the ring modulator, echo, analogue high- and low-pass filtering, and synthesis using multiple tone generators and white-noise generators.
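For the technically curious reader, it may help to see how elemental those German studio operations are. The sketch below is mine, not anything from the Cologne studio: a few lines of Python with arbitrary frequencies and durations, showing that the classic tone-generator waveforms, white noise and ring modulation all come down to very simple arithmetic on a signal.

import numpy as np

# A sketch only: sample rate, frequencies and durations are arbitrary choices.
sample_rate = 44100
t = np.arange(0, 2.0, 1.0 / sample_rate)         # two seconds of sample times

# Classic tone-generator waveforms: sine, sawtooth and triangle.
freq = 110.0
sine = np.sin(2 * np.pi * freq * t)
saw = 2 * (freq * t - np.floor(0.5 + freq * t))  # naive sawtooth in [-1, 1)
triangle = 2 * np.abs(saw) - 1                   # triangle derived from the sawtooth

# Ring modulation: simply multiply the signal by a carrier oscillator.
carrier = np.sin(2 * np.pi * 30.0 * t)
ring_modulated = sine * carrier

# White noise, another staple of the early studios.
noise = np.random.uniform(-1.0, 1.0, len(t))

In the 1950s, of course, all of this was done with valve oscillators, filters and tape rather than arrays of numbers; the point is only that the underlying operations are as simple, and as strange-sounding, as they appear here.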
In 1955 the rivalry and opposition between these two centres of electronic music was finally broken down, as a major new studio for research was set up in Milan. Luciano Berio showed in his piece “Differences” that concrete sound and electronic sound could be successfully married. Berio’s fascination with the human voice led him to compose “Thema (Omaggio a Joyce)”, in which vocal samples were taken from the staggering voice of Cathy Berberian reciting a passage from James Joyce’s Ulysses.
The first American electronic works were by John Cage: ‘Imaginary Landscape No. 5’ (1951–52) and ‘Williams Mix’ (1952) explored musique concrète, but much of the American innovation between 1950 and 1980 centred on the making of the RCA synthesiser. In the 1970s many very basic computer synthesis languages were born (MUSIC IVB, CMUSIC and others, often written in FORTRAN), but it was Barry Vercoe’s CSOUND, created in 1986, which became the mainstream language, and it is still in use today. John Chowning’s synthesis research was taken up by Yamaha, whose 1983 synthesiser was one of the first to use MIDI technology. In the 1970s the UPIC, the first computer system to turn drawings into music, was invented in Paris, and later, in the 1980s, computer music that could generate itself was also being pioneered: a work could be written in computer code which, each time it was performed, would give a different outcome.
All these massive developments, made by musicians at the forefront of sound design, have given us the technology to make the music we make today. Strangely, in recounting the history one seems to lose sight of why they were experimenting: the resulting sound of their compositions is a different entity from the making of the inventions that produce the sound. The common thread of these styles is the challenge to our ears to sidestep our usual referencing habits.
Let me clarify what I am trying to say.
If you look at music in semiotic terms, almost all genres of music are highly iconic. When you listen to anything, you instantly think of something that you can relate to it: a place you heard it, someone it reminds you of, a time in your life when you heard it first. If you have never heard the music before, what does it remind you of? What genre does it fall into? Does it have a fast rhythm? Is the melody pleasing? Is it like anything you already know? Is the instrument a violin? Is the player a good player? Do you recognise the singer? The non-musical associations are endless, and they come to us without us even being aware that we are not listening to the ‘sounds’ per se, but to what those sounds mean to us. I am not saying that these sounds cannot cause pure emotion in us just from the nature of their waveforms, but that the nature of the sound is almost always secondary to the context we immediately put the music into.
No music is totally free from our tendency to classify it. However, electro-acoustic music/university research music goes further than most in eliminating the most obvious of the subliminal questions that occupy us in the moment when we could be focussing on the actual waveform we are hearing.
A different sort of reference can appear. For example: this sounds like a massive cave with water dripping and bats flying out past my head. It seems mad, but how nice to be free to imagine anything… and this is the key: our imagination is rarely given such free rein.
Another aspect of this music is the evocation of strong emotions without any context. The noise/sound itself makes the audience feel calm, sad, frustrated, irritated. This raises the idea of manipulation in sound. We are all aware that Hollywood almost always uses music to make its films work at the most dramatic moments; the music helps us to feel something about the story they are trying to convey/sell. Much music is made purely to manipulate our emotions into making us buy and follow. How wonderful it is to sit in a concert hall and be manipulated into an emotion, yet keep one’s own thoughts to oneself. Maybe this is why electro-acoustic music is not successful… maybe we don’t like to be left alone with our own thoughts.
It does flourish in academic institutions around the globe, where students and lecturers make it for the love of it. Sometimes its practitioners’ techniques slip into the mainstream, with complicated sound design creeping ever faster into the more avant-garde pop music. Perhaps EAM has its function here, and in the programmers who, through the practice of making this music, invent new computer programs which years later become available to everyone.
This is where the forefront of music is. I don’t listen to it often, as it is not possible to truly appreciate it without full involvement with the sounds. I listen to a wide diversity of music; however, listening to EAM in a custom-built sound theatre environment is probably my most loved experience, although there is no dedicated sound system, in the UK at least, that regularly puts on music of this type for audiences to truly immerse themselves in non-referential music.