Into the groove: Lessons from the desktop music revolution

(Originally published in interactions magazine, I’ve expanded this a bit to include more examples.)

Musical instruments provide intriguing examples of user interface design. While mastery can take years of training and no small amount of aptitude, an instrument in the right hands offers highly nuanced control over the many aspects of sound that come together to form one of the highest forms of human expression. And even for those of us who will never achieve such heights of virtuosity, merely using such a “user interface” can result in a great sense of enjoyment, immersion and fulfillment (what is often referred to as a state of “flow”).

Music is almost universally important to human culture, but instruments are not strictly “useful” and it seems strange to think of them as mere tools. That said, from the first bone flutes and stone tools, the evolution of musical instruments has closely paralleled that of more utilitarian technology. As inventor and futurist Ray Kurzweil puts it, “[musical expression] has always used the most advanced technologies available.”

Not surprisingly, then, as with so many other things, the dramatic increase in processor speeds has brought about a revolution in the way people use computers to make music. But while computational power has been a critical enabling factor in this revolution, at least as important has been the ongoing evolution of the user interfaces of these new digital instruments.

The Novation Launchpad, a hardware controller specifically designed to work with Ableton Live running on a computer.

A recent history of musical technology and interactivity

As with the broader universe of technology, musical instruments have co-evolved with the practice of music. New technologies are often first introduced as a way of replicating and incrementally improving upon a previously established way of doing things, and only later point the way to something entirely new. In the same way the first cars were designed as “horseless carriages,” synthesizers were at first largely seen as a means to emulate the sounds of acoustic instruments, and it took decades before electronic sounds became aesthetically appealing in their own right. Starting with the concept of the instrument, of course we still have all the traditional kinds of sound-makers, from percussion to strings to brass to woodwinds. And then we also have the first generation of plugged-in versions of these: electric guitars and pianos, for example. The next step from there was the synthesizer, which brings us closer to where we are today.

Synthesizers are electronic instruments that employ a variety of techniques to allow users to explicitly control different qualities of the sound, including pitch, harmonic content, duration and how the sound changes as it is played. These are typically controlled by a piano keyboard for pitch, as well as other hardware controls (such as knobs, sliders, dials and buttons) for other aspects of the sound.


The Sequential Circuits Prophet 5, a classic analog synthesizer.

The first synthesizers, such as the Moog, used analog circuitry and a style of synthesis referred to as “subtractive synthesis,” in which an oscillator (or several) generates a tone and filters remove harmonic content from that tone (hence “subtractive”). While these machines are capable of creating deep, rich sounds, they have a somewhat limited sonic palette, and are not terribly effective at emulating more traditional instruments.
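
For those who think in code, the subtractive idea is compact enough to sketch: generate a harmonically rich waveform, then filter away its upper harmonics. This is only an illustration of the principle (the waveform, filter and numbers below are arbitrary), not a model of any particular synthesizer.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def sawtooth(freq_hz, duration_s):
    """The 'oscillator': a harmonically rich sawtooth wave."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return 2.0 * (t * freq_hz - np.floor(0.5 + t * freq_hz))

def one_pole_lowpass(signal, cutoff_hz):
    """The 'filter': a simple one-pole low-pass that removes upper harmonics."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / SAMPLE_RATE)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

raw = sawtooth(110.0, duration_s=1.0)
dark = one_pole_lowpass(raw, cutoff_hz=800.0)  # "turning the cutoff knob down"
```

Sweeping cutoff_hz while a note sounds is, in essence, the gesture that the cutoff knob on an analog synthesizer puts directly under the player’s fingers.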

From a user interface perspective, synthesizers provided an interesting leap forward. Where a violinist must train for years to gain the dexterity to adjust the timbre of their sound, someone playing an analog synthesizer can make such an adjustment by twisting a knob or two. By making this control explicit and direct, the synthesizer greatly flattens the learning curve (though to be fair it has proven no shortcut to the highly nuanced styling of a master instrumentalist).

In the ’80s, the first digital synthesizers were created. These machines employed a variety of computational techniques that were made possible by advances in microprocessor technology. While these sophisticated synthesis techniques were (sometimes) capable of more natural and expressive sounds, what was in essence the grafting of a computer onto a piano keyboard resulted in a dramatic increase in user interface complexity. The direct manipulation provided by the filter cutoff knob on an analog synthesizer was completely lost in favor of page after page of parameters presented on the small LCD screen of digital synthesizers like the Yamaha DX7.

The programming screen of the Yamaha DX7.
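
The DX7’s engine was built on frequency modulation (FM) synthesis, in which one oscillator modulates the phase of another. A two-operator sketch hints at both the expanded sonic range and the source of the interface trouble; the real instrument used six operators, each with its own envelope and level settings. The numbers below are purely illustrative.

```python
import numpy as np

SAMPLE_RATE = 44100

def fm_tone(carrier_hz, ratio, mod_index, duration_s):
    """Two-operator FM: a modulator oscillator varies the phase of a carrier."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + mod_index * modulator)

# Small changes to ratio and mod_index yield wildly different timbres; multiply
# that by six operators and dozens of envelope parameters and the DX7's pages
# of menu-driven settings start to make (unfortunate) sense.
bell_like = fm_tone(440.0, ratio=3.5, mod_index=4.0, duration_s=1.0)
```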

Most recently, there have been a couple significant evolutionary steps in the world of synthesis. As the processing power of personal computers increased through the ’90s, it became possible to create synthesizers that run as applications in Windows or MacOS. These “softsynths” are particularly useful because it’s possible to integrate them quite seamlessly into a computer-based production environment (often called a DAW or Digital Audio Workstation, discussed further below). Also, because they are able to rely on significant computing power and large graphical user interfaces, they can present both novel methods of synthesis (such as physical modeling, which uses principles of physics to model the sound emitted by things like plucked strings), and most significantly for this article, novel methods of control (more on this a bit later).

Native Instruments’ FM8, a softsynth loosely based on the DX7.
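
Physical modeling is easier to grasp with a concrete example. The classic textbook case is the Karplus-Strong plucked-string algorithm: a burst of noise circulates through a short, gently averaged delay line and decays much the way a vibrating string does. This is a minimal sketch of that idea, not how any particular softsynth is implemented.

```python
import numpy as np

SAMPLE_RATE = 44100

def plucked_string(freq_hz, duration_s, damping=0.996):
    """Karplus-Strong: noise circulating in an averaged delay line decays like a string."""
    delay = int(SAMPLE_RATE / freq_hz)               # delay length determines the pitch
    buf = np.random.uniform(-1, 1, delay)            # the "pluck": a burst of noise
    out = np.zeros(int(SAMPLE_RATE * duration_s))
    for i in range(len(out)):
        out[i] = buf[i % delay]
        # averaging adjacent samples makes high frequencies die away first,
        # just as they do on a real string
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

note = plucked_string(196.0, duration_s=2.0)  # roughly a guitar's G string
```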

Closely related to the synthesizer is the sampler, which plays back recorded sound in a musical fashion. Some samplers are designed around short samples of individual instruments, which, for example, allows one to use a keyboard to command the sonic diversity of a whole orchestra. Other samplers are useful for manipulating and looping longer phrases of recorded audio, and were first popularized by hip-hop producers in the late ’80s.
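
The core trick of a keyboard sampler can be sketched in a few lines: a single recording is replayed at different speeds so that each key yields a different pitch. Real samplers use many recordings per instrument and far better interpolation; this only illustrates the principle.

```python
import numpy as np

def play_at_pitch(sample, semitones):
    """Resample a recording so it plays back `semitones` higher or lower.
    Faster playback raises the pitch (and shortens the note), as on early samplers."""
    rate = 2.0 ** (semitones / 12.0)                 # equal-tempered pitch ratio
    positions = np.arange(0, len(sample) - 1, rate)  # fractional read positions
    return np.interp(positions, np.arange(len(sample)), sample)

# A stand-in "recording" (one second of a 440 Hz tone) reused across a keyboard:
recorded_note = np.sin(2 * np.pi * 440.0 * np.arange(44100) / 44100)
up_a_fifth = play_at_pitch(recorded_note, semitones=7)
down_an_octave = play_at_pitch(recorded_note, semitones=-12)
```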

Along with the evolution of the synthesizer and sampler came that of the sequencer. With its heritage in the paper rolls and mechanisms of the player piano, the sequencer records, plays back and edits a control signal that tells a synthesizer or sampler what notes to play, along with other information about how they should be played. From an interaction perspective, this was quite revolutionary. A musician or producer could record the notes to be played in a song and, while playing them back with the sequencer, continue to modify and adjust the sonic characteristics without ever having to make an audio recording.
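
Conceptually, a sequence is just an editable list of timed note events, kept separate from the audio itself, which is exactly what lets a producer keep reshaping the sound after the notes are captured. A hypothetical sketch of such a data structure (the fields loosely mirror MIDI, but nothing here is tied to any particular sequencer):

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    beat: float      # when the note starts, in beats
    pitch: int       # MIDI-style note number (60 = middle C)
    velocity: int    # how hard the note is "played" (0-127)
    duration: float  # length in beats

# A captured phrase: one bar of a simple bass line.
sequence = [
    NoteEvent(0.0, 36, 100, 0.5),
    NoteEvent(1.0, 36, 90, 0.5),
    NoteEvent(2.0, 43, 110, 0.5),
    NoteEvent(3.0, 41, 95, 0.5),
]

# Editing never touches any audio: transpose the whole phrase up an octave,
# then hand the same events to whatever synth or sampler is loaded today.
transposed = [NoteEvent(n.beat, n.pitch + 12, n.velocity, n.duration) for n in sequence]
```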

The sequencer also provided musicians with a way of coordinating several synthesizers at once, but unfortunately, this often required sorting out piles of cables and settings between the various devices. Originally taking the form of a standalone hardware box (or part of a drum machine or sampler), the sequencer was the first part of the electronic music production environment to find its way onto the personal computer.

At first glance, mucking around with a sequencer is not a terribly musical activity, and can be a great distraction from the actual playing of music. This strikes at the heart of a typical conundrum faced by today’s product designers: adding sophisticated and (potentially) useful capabilities to a person’s toolset can also add the significant overhead of managing those capabilities. Many musicians complain that if their equipment setup is too complex it becomes easy to lose track of their musical ideas, because so much of their attention goes to managing technology. However, for many electronic musicians and hip-hop producers, the sequencer itself has become an instrument in its own right—after hard practice it is possible to achieve an impressive level of virtuosity in live performance.


Hip-hop producer and performer Exile using the sampler and sequencer capabilities of the Akai MPC 2000XL as a live instrument.

The final bits of the contemporary music technology equation are the components around the recording and manipulation of sounds. Where in a live acoustic setting the qualities of sound are entirely determined by the musician, the instrument and the acoustics of the room, in the contemporary world of recorded music, the number of factors that influence the sound grows enormously to include all flavors of recording and distribution technologies. While the importance of the musician has not necessarily diminished (though this is, I suppose, debatable), what is done with sound after it leaves the instrument has become dramatically more important. While this is not necessarily a new development—the concept of the studio as an instrument goes back at least to the ’60s and the pioneering work done at Abbey Road, as well as by Brian Wilson and Phil Spector—it is at the core of recent evolutions in digital music technology.

These include: the mixer, which combines sounds from multiple sources while adjusting the volume of each source as well as shaping the sounds (by way of an equalizer); the multi-track recorder, which can record several sound sources at once while maintaining their independence (ultimately to be mixed together later); and effects devices, such as distortion and echo boxes, which change the character of the played or recorded sounds. Once again, while there are certainly many prominent producers who stand by their ’60s-era analog mixing desks and still prefer to record to 1/2″ tape, increased processing power has meant that all of these capabilities can be found on a personal computer.
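
In signal terms these components are simple: a mixer is little more than a weighted sum of its inputs, and an effect such as echo just mixes a delayed copy of a signal back in. A stripped-down sketch of both (the gains and delay times here are arbitrary):

```python
import numpy as np

def mix(tracks, gains):
    """Combine several equal-length tracks into one, scaling each by its gain."""
    out = np.zeros_like(tracks[0])
    for track, gain in zip(tracks, gains):
        out += gain * track
    return out

def echo(signal, delay_samples, feedback=0.4):
    """A bare-bones echo effect: add a delayed, quieter copy of the signal."""
    out = np.copy(signal)
    out[delay_samples:] += feedback * signal[:-delay_samples]
    return out

# Three stand-in tracks, blended with the bass pulled down and echo on the lead.
t = np.arange(44100) / 44100
drums = np.sin(2 * np.pi * 220.0 * t)
bass = np.sin(2 * np.pi * 55.0 * t)
lead = echo(np.sin(2 * np.pi * 440.0 * t), delay_samples=11025)
mixdown = mix([drums, bass, lead], gains=[0.8, 0.4, 0.9])
```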

In fact, today, software such as Ableton Live, Apple Logic, Steinberg Cubase and Digidesign Pro Tools provides the capabilities of synthesizers, samplers, sequencers, mixers, effects and recorders, all through a single, integrated environment. (These are commonly referred to as Digital Audio Workstations, or DAWs.) While these are not universally simple to use, in many cases the utility and usability of these applications is a big improvement over the technology that came before them (though there are still plenty of folks who debate this point). This, combined with their relative cost-effectiveness, has meant that musicians of all stripes have brought computers into their bedrooms, studios, practice rooms and even onto the stage, and in the process they are redefining the way recorded music is made. No longer does a band need to work according to a schedule in an expensive studio to record their ideas; rather, they can record as the feeling strikes, wherever and whenever that happens to be. This transformation hasn’t been universally easy or fruitful, but the computer’s new status as a musical instrument surely has some lessons for us.

Designing for fluency

Creating music requires a musician to achieve a sense of fluency with their tools. This is why they have to practice so much. It is incredibly detrimental to the musical experience to stop and think about how to operate a piece of equipment; the fleeting and elusive ideas behind a song are easily lost to technical distractions. For a musician to get an idea out of her head, the use of her instruments must be effortless. And this effortlessness isn’t just a state of minimum exertion and maximum efficiency. The experience must allow for playfulness and spontaneity, the enablers of the best of human capabilities.

So it seems that the musicians and producers who are successfully composing music and producing recordings on their computers achieve a sense of fluency (or even virtuosity) that is an ideal we should hold for all kinds of users of technology, from surgeons to stock traders. The question, then, is what it is about these musical user interfaces that contributes to this sense of fluidity.

Simplicity

One of the most important aspects of a good musical interface is simplicity. Guitars, pianos and trumpets are all quite straightforward in the way they present their capabilities. Obviously, the less interface a user is confronted with, the more mental bandwidth they can devote to hearing and playing music. With fewer choices, the user is able both to explore a nuanced way of using each aspect of an instrument and, ultimately, to play notes more decisively and meaningfully.

To be clear, for a user interface to be simple it need not be basic or unsophisticated. A drum is quite straightforward and direct in the way its capabilities are presented, yet it is obviously capable of being played with nuance and subtlety (though this takes years of practice). For an instrument to facilitate a feeling of fluidity, it must be learnable, but not necessarily focused on novice use.

While not always successful, the drive for simplicity in the creation of digital instruments has always been there. Greg Hendershott, the creator of Cakewalk, one of the first computer-based sequencers, explains the driving force behind his design decisions:

My design philosophy for the early Cakewalk software was to make simple things simple and to have more complicated things require a little more effort. So anything that you’d be doing frequently… I tried to keep that very up-front in the user interface — a single keystroke — and things that were a little less frequent I’d tried to have buried in a dialog box or away from view. I think that’s one thing that we’ve tried to keep up, although that [gets] more challenging as you have more and more functionality and more features to the software.

Unfortunately, as Hendershott alludes, Cakewalk and other early DAWs quickly grew top-heavy under the weight of features; these tools lost their simplicity and started to impose their way of working on their users. Ultimately, a new player emerged with a renewed dedication to simplicity.

In 2001, Live was released by Ableton, a small Berlin-based company focused on creating a simple tool for creative music performance. As Thomas Bangalter of French pop act Daft Punk puts it, “Live is by definition one of the most transparent and simple [pieces of] creative software I have ever used. Its interface is streamlined and its features, though with a minimal approach, offer endless possibilities. The whole process is easy, simple, and fun, yet with accessible possible sophistication and professional results.”

There are several things that contribute to Live’s feeling of simplicity. First, the feature set of the product is fairly limited. Where the more bloated DAWs provide every function that could possibly be required in any production situation, the Ableton design team has very intentionally omitted features that could contribute to complexity without adding to its capability as a creative performance tool. Music-related information (such as notes and control data) is input through performance (e.g. via a keyboard) rather than numerically or through traditional notation in a score (which other DAWs provide).

The Session View of Ableton Live.

Live’s user interface also contributes to its sense of simplicity. The entire interface is contained in two main screen states, one optimized for improvisational performance, the other for composition and more structured performances. A musician is typically able to spend an entire usage session in one view or the other, never with a thought to “navigation” (a uniquely unmusical concept). Both screens follow the same structural patterns, which provide access to files, instruments and all music-related parameters without dialog boxes or overlapping windows. Further contributing to the simplicity are the control elements (such as “knobs” and “faders”), which are rendered in a flat style that minimizes the number of pixels devoted to ornamentation. This makes it easy for users to understand the state of all the controls through effective use of modeless visual feedback (discussed further below), without the visual noise of faux-3D rendering.

Redefinition of the instrument

Another important design decision made by the Ableton team was to respond to and accommodate the changing definition of a “musical instrument.” Here we return to the idea of the studio as an instrument, first explored by rock and R&B producers in the ’60s, and then fully embraced by hip-hop and electronic musicians in the ’80s and ’90s. As Robert Henke, member of seminal electronic group Monolake and co-founder of Ableton, explains:

Software sequencers and hard-disk recording applications were originally designed as studio tools, replacing tape machines. Historically, they were more aimed towards sound engineers than towards musicians. The underlying idea of timeline-based editing is construction and sculpturing, not so much performing. As a result, those tools fail onstage or in any context where improvisation or interaction with musicians is essential. So all the software Gerhard and I wrote for our own purposes enabled us to interact with the music in real time.

To generalize, it seems the lesson here is that many product designers (and business people) become unnecessarily constrained by a rigid product definition based upon existing categories, rather than a holistic understanding of user needs and mental models. Taking a step back to observe the broader context may present opportunities to deliver something that people will find useful and desirable.

Modeless, visual feedback

Since music happens in real time, musicians must be able to quickly understand what’s going on with their instruments. Most traditional instruments don’t maintain state (and if they do, it’s visibly and audibly noticeable). In the world of a computer-based production environment, it’s quite easy for things to get so complex that it isn’t obvious from the sound alone exactly what’s going on with the various components. This is where visual feedback can really help. Visual representations of knobs and faders are effective devices here—a knob not only allows direct manipulation, but also provides a way to quickly understand the current value of a parameter without touching the control or requesting the information (it is, in other words, modeless).

Recent innovations in musical user interfaces have broken from metaphors referring to our mechanical past to achieve many novel ways of providing visual feedback. Many of the instruments that ship with Reaktor, Native Instruments’ modular audio application construction environment, provide ingenious graphical mechanisms for controlling sounds and musical compositions.

The Newscool “ensemble” for Native Instruments’ Reaktor. Newscool features a generative sequencer based on John Horton Conway’s Game of Life.
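
Newscool’s actual implementation lives inside Reaktor, but the underlying idea, stepping a Game of Life grid and sounding a note for every live cell, can be sketched in a few lines. The grid size and pitch mapping below are arbitrary choices, not Newscool’s.

```python
import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life on a wrapping grid."""
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

SCALE = [0, 2, 4, 7, 9]  # a pentatonic scale, so whatever the grid does sounds musical

def notes_for(grid, base_pitch=48):
    """Map each live cell to a pitch: the column picks a scale degree, the row an octave."""
    return [base_pitch + 12 * (row % 3) + SCALE[col % len(SCALE)]
            for row, col in zip(*np.nonzero(grid))]

grid = (np.random.random((8, 8)) > 0.7).astype(int)  # a random seed pattern
for step in range(4):                                # four "bars" of evolving material
    print(sorted(notes_for(grid)))
    grid = life_step(grid)
```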

Reaktor’s Skrewell ensemble.

Physical control

While an elegant visual interface can greatly contribute to a musician’s sense of fluency, focusing too much on the visual can be a problem. After all, music is an auditory experience. Visualizations of music are always an abstraction, and in many ways can stand in the way of a musician trusting their ears and their personal perception of tone and time, which is truly at the heart of music.

One of the things that helps musicians take their eyes off the screen is a physical control surface. It’s very difficult to interact with an onscreen knob with a mouse if one isn’t looking at the screen. Also, music is traditionally a dexterous two-handed affair—there’s usually more than one thing going on at once, and it helps to use two hands to keep everything moving. Computer operating systems have historically offered direct manipulation through the mouse pointer in only a single place at a time (while this has now changed with Windows 7 and OS X support for multi-touch, the software still hasn’t quite caught up).

As legendary musician and composer Brian Eno explains, “I want to use the whole of my body, not just my index finger. One of the problems with a lot of software systems is that they expect you to type, but it uses a part of my brain that I don’t always want to be in the music process. I don’t want to shift between being a musician and being a secretary.”

Quite a number of physical control surfaces have been developed to solve this problem. These range from surfaces with dedicated controls for common functions (such as faders for mixer channels and knobs for equalizers) to banks of knobs and faders that can be flexibly assigned to any aspect of a synthesizer, sequencer or mixer. Not only are there standard piano keyboards, but also pads for triggering samples and even gestural controllers (including interfaces with Nintendo’s Wii Remote).
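
Under the hood, most of these surfaces simply transmit MIDI: mapping a hardware knob to a software parameter means listening for control-change messages and scaling their 0-127 values. Here is a rough sketch using the mido Python library; set_filter_cutoff is a hypothetical stand-in for whatever softsynth parameter is actually being driven, and the controller number will vary by device.

```python
import mido  # one common Python MIDI library; any MIDI API works much the same way

def set_filter_cutoff(normalized):
    """Hypothetical stand-in for the softsynth parameter being controlled."""
    print(f"cutoff -> {normalized:.2f}")

KNOB_CC = 21  # which controller number the hardware knob transmits (device-specific)

with mido.open_input() as port:      # open the default MIDI input device
    for message in port:             # blocks, yielding messages as the knob moves
        if message.type == "control_change" and message.control == KNOB_CC:
            set_filter_cutoff(message.value / 127.0)  # scale 0-127 to 0.0-1.0
```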

The Faderfox, a compact physical controller for use with laptop-based audio software. See the Novation Launchpad at the top of the article for another good example of a physical control surface.

One of the most exciting control surfaces is Jazzmutant’s Lemur, which provides a multitouch screen (beating the iPhone to market by years) that can be flexibly configured to provide a number of different types of touch and gestural controls. It seems pretty much a foregone conclusion at this point that the capabilities of the Lemur will be largely available in some form on the iPad and other tablet platforms in the not-too-distant future, dramatically driving down the price of highly configurable, touch-based controls.

The Jazzmutant Lemur, a highly customizable multi-touch control surface.


Video of musician and sound designer Richard Devine demoing a Lemur-based controller for Native Instruments’ Absynth.

Balance between structure and experimentation

One of the hallmarks of the best of modern music, from jazz to pop, is a simultaneous reference to and departure from conventional forms. In most cases, either slavishly following the rules or blindly breaking them will bore the audience. Similarly, for musicians to work effectively, they require a fair amount of structure, so there’s no need to reconsider first assumptions every time they want to play a note, but also enough flexibility to reconfigure their instruments to chase the sound they hear in their minds.

Most DAWs allow for this with a straightforward architecture for recording and sequencing that also accommodates more arcane, cross-modulated routings. Perhaps the best example of this balance between structure and experimentation comes with Native Instruments’ Reaktor, which is a modular musical toolkit. The product ships with an arsenal of highly innovative synthesizers, samplers, drum machines and other musical inventions, but the inner workings of each are accessible and drastically modifiable through a visual construction environment. Users can also build and share their own devices, either from primitives or from components borrowed from various other devices.
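
The modular idea itself is easy to express in code: an instrument is a graph of small processing units, and “rewiring” means changing which outputs feed which inputs. A toy sketch of the principle (nothing like Reaktor’s actual internals):

```python
import numpy as np

SAMPLE_RATE = 44100

class Osc:
    """A sine oscillator module."""
    def __init__(self, freq):
        self.freq = freq
    def render(self, n):
        t = np.arange(n) / SAMPLE_RATE
        return np.sin(2 * np.pi * self.freq * t)

class Gain:
    """A module that scales whatever module is patched into it."""
    def __init__(self, source, amount):
        self.source, self.amount = source, amount
    def render(self, n):
        return self.amount * self.source.render(n)

class Mixer:
    """A module that sums the outputs of any number of patched-in modules."""
    def __init__(self, *sources):
        self.sources = sources
    def render(self, n):
        return sum(s.render(n) for s in self.sources)

# Start from a working patch...
patch = Mixer(Gain(Osc(220.0), 0.6), Gain(Osc(331.0), 0.3))
audio = patch.render(SAMPLE_RATE)

# ...then "open it up" and rewire: add a detuned third oscillator without
# rebuilding anything else.
patch = Mixer(*patch.sources, Gain(Osc(222.5), 0.2))
audio = patch.render(SAMPLE_RATE)
```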

The underlying structure of Photone, a Reaktor ensemble.

What makes this experience quite effective is that one can start from something useful (and even inspirational) and make small adjustments to come closer to an imagined sound or way of working. This is a great approach for all kinds of interactive tools. Customization is often used as a proxy for design; the fallacy is that while some people can and do tweak products to get them just right, the starting place is hugely important.

Looking forward

Clearly, the evolution of musical technology isn’t over. If anything, it’s accelerating and changing course. Dissatisfaction with the computer as a musical platform has led to renewed interest in musical instruments as physical devices. Yamaha’s Tenori-On is perhaps the most dramatic of recent inventions in this arena. Designed by the renowned media artist Toshio Iwai, this beautiful device provides an LED-lit grid of buttons as a novel user interface for musical improvisation and composition.

The Tenori-On.


Video demo of the Tenori-On.

Even game consoles provide new avenues for innovation. Electroplankton, a game for the Nintendo DS (incidentally also designed by Iwai), allows users to create music by interacting with small onscreen “organisms,” each of which has a unique control idiom and resulting sound.

Video demo of Electroplankton.

And of course, the iPad and other medium-sized touchscreen tablets certainly offer incredible promise as a platform for musical instruments. Getting away from keyboard- and mouse-based conventions has the potential to be a huge enabler of more direct, flow-conducive experiences. If the iPad is really the computer for hanging out on the couch, then it might just be an ideal form factor for noodling on a musical instrument.


Smule’s Magic Piano provides play-along guidance using the familiar Guitar Hero / Rock Band idioms, and also offers a radial keyboard.

The application of these inventions to more utilitarian digital products may not be universally obvious, but there’s a great potential to follow the lead of musical technology. Many attributes of a successful musical experience can be applied to all kinds of venues where critical decisions must be creatively made in real time. With appropriate application, the design strategies discussed here can help us create products that better allow for the kind of fluency and virtuosity that represent our best abilities.

4 Comments

Bob MacNeal
Cool post Dave. The balance of physical vs. visual control is particularly interesting vis-a-vis Apple's strident push of the Cocoa Touch interface on the iPad & iPhone. Perhaps iPad will prove too visual? I hadn’t thought of hardware devices as a sort of instrument. Retrospectively, the parallels are obvious ;-) I liked the Eno quote "I want to use the whole of my body, not just my index finger." Thanks. I’ll recommend your post to friends.
Jonas LaRance
Great analysis and something I've pondered before. As a musician, I think Brian Eno is expressing a desire for an immersive tactile experience with an instrument. I've always found feeling the vibration of the instrument is important. Are haptics being explored to facilitate this?
Adrian Haselhuber
Great article Dave! I strongly believe in the importance of designing simplified tools for idea capture and idea generation, i.e. the music writing process. As you said, the typical DAW (and we're making one of 'em over here at Avid ;-) is too complex and perhaps even a bit slow when you attempt quick context switches between "writing" and "engineering." I too will forward your article to my team and friends.
Ralph Crouch
Hi, great article, except for the omission of Fruity Loops and Reason, which I think were the best at bridging all the gaps in terms of user learning curve, routing, instrument design and variety of pre-packaged sounds. Most producers will tell you many of the last decade’s hits started out on one of those two.
