MusicKit Classes

The following sections briefly describe the topics that the Kit addresses through its classes and protocols. Within the descriptions, class and protocol names are highlighted as they're introduced for easy identification.

Encapsulating Musical Data

Music is represented in a three-level hierarchy of MKScore, MKPart, and MKNote objects. MKScores and MKParts are analogous to orchestral scores and the instrumental parts that they contain: a MKScore represents a musical composition, while each MKPart corresponds to a particular means of realization. Each MKPart consists of a time-sorted collection of MKNotes, each of which contains data that describes a musical event.
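The containment can be sketched as a small data model (a hypothetical Python sketch of the concept, not MusicKit code; the class and method names here are invented for illustration). Note the time-sorted invariant that a MKPart maintains over its MKNotes:

```python
import bisect

class Note:
    """Stands in for MKNote: a timeTag (in beats) plus parameters."""
    def __init__(self, time_tag, **params):
        self.time_tag = time_tag   # time relative to the start of the Part
        self.params = params       # e.g. freq, amp

class Part:
    """Stands in for MKPart: keeps its notes sorted by timeTag."""
    def __init__(self):
        self.notes = []
    def add_note(self, note):
        # Insert at the position that preserves time order.
        i = bisect.bisect([n.time_tag for n in self.notes], note.time_tag)
        self.notes.insert(i, note)

class Score:
    """Stands in for MKScore: a composition, i.e. a collection of Parts."""
    def __init__(self):
        self.parts = []
    def add_part(self, part):
        self.parts.append(part)
```

Adding notes in any order leaves the part's collection sorted, which is what lets a performance walk a MKPart front to back.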

The information in a MKNote object falls into four categories: parameters, a noteType, a noteTag, and a timeTag.

A parameter supplies a value for a particular attribute of a musical sound, such as its frequency or amplitude. A parameter's value can be simple (an integer, a floating-point number, or a character string), or it can be another object. MKNote provides special methods for setting a parameter's value to an MKEnvelope or an MKWaveTable object: with an MKEnvelope you can create a value that varies over time, while an MKWaveTable contains sound or spectrum data that's used in various types of synthesis, such as wavetable synthesis.

The manner in which a parameter is interpreted depends on the object that realizes the MKNote. For example, one object could interpret a heightened brightness parameter by increasing the amplitude of the sound, while another, given the same MKNote, might increase the sound's spectral content. In this way, parameters are similar to Objective-C messages: the precise meaning of each depends on how it's implemented by the object that receives it.

A MKNote's noteType and noteTag are used together to help interpret a MKNote's parameters. There are five noteTypes: noteDur, noteOn, noteOff, noteUpdate, and mute.

noteDurs and noteOns both establish the beginning of a musical note. The difference between them is that the noteDur also has information that tells when the note should end. A note created by a noteOn simply keeps sounding until a noteOff comes along to stop it. In either case, a noteUpdate can change the attributes of a musical note while it's sounding. The mute noteType is used to represent any additional information, such as barlines or rehearsal numbers.
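The equivalence between the two ways of starting a note can be sketched as follows (a conceptual Python model with invented names, not MusicKit code):

```python
def expand_note_dur(start, dur, tag, params):
    """A noteDur is conceptually a noteOn plus a matching noteOff
    dur beats later, linked by the same noteTag."""
    return [("noteOn", start, tag, params),
            ("noteOff", start + dur, tag, {})]
```

A noteUpdate would appear in the same stream as an event carrying the shared noteTag, changing parameters of the note while it sounds.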

A noteTag is an arbitrary integer that's used to identify different MKNotes as part of the same musical note or phrase. For example, a noteOff is paired with a noteOn by matching noteTag values. You can create a legato passage with a series of noteOns, all with the same noteTag, concluded by a single noteOff.

The MusicKit's noteTag system solves many of the problems inherent in MIDI, which uses a combination of key number and channel to identify events that are part of the same musical phrase. For example, the MusicKit can create and manage an unlimited number of simultaneous legato phrases while MIDI can only manage 16 (in MIDI mono mode). Also, with MIDI's tagging system, mixing streams of notes is difficult―notes can easily get clobbered or linger on beyond their appointed end. The MusicKit avoids this problem by reassigning unique noteTag values when streams of MKNotes are mixed together.
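The retagging idea can be modeled in a few lines (a hypothetical Python sketch, not the Kit's API): when streams are mixed, each stream-local tag is mapped to a fresh, globally unique tag, so phrases from different streams can never collide.

```python
import itertools

_fresh = itertools.count(1)   # global supply of unique noteTags

def retag_stream(notes):
    """Map each tag in one stream to a globally unique tag.
    notes is a sequence of (noteType, time, tag, params) tuples."""
    mapping = {}
    out = []
    for note_type, time, tag, params in notes:
        if tag not in mapping:
            mapping[tag] = next(_fresh)
        out.append((note_type, time, mapping[tag], params))
    return out
```

Two streams that both happen to use tag 1 internally come out with distinct tags, while the noteOn/noteOff pairing within each stream is preserved.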

A MKNote's timeTag is relevant only when the MKNote is in a MKPart―it specifies the time of the MKNote relative to the start of its MKPart. timeTag values are measured in beats, where the value of a beat can be set by the user. If the MKNote is a noteDur, its duration is also computed in beats.
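The beat mapping amounts to a simple conversion (a minimal sketch, assuming a fixed seconds-per-beat value; in the Kit the tempo is settable and managed by the performance machinery):

```python
def beats_to_seconds(beats, beat_duration=0.5):
    """With beat_duration seconds per beat (0.5 gives 120 BPM),
    a timeTag or duration expressed in beats maps to clock time."""
    return beats * beat_duration
```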

An entire MKScore can be stored in a scorefile. The scorefile format is designed to represent any information that can be put in a MKNote object, including the MKPart to which the MKNote belongs. Scorefiles are in ASCII format and can easily be created and modified with a text editor. In addition, the MusicKit provides a language called ScoreFile that lets you add simple programming constructs such as variables, assignments, and arithmetic expressions to your scorefile.
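A small scorefile might look something like the following. This is an illustrative sketch only; the exact statement syntax should be checked against the ScoreFile language documentation, and the "Pluck" synthPatch name is just an example.

```
part piano;                        /* declare a part named piano */
piano synthPatch:"Pluck";          /* how the part is to be realized */
BEGIN;
t 0;                               /* set the current time, in beats */
piano (1.0) freq:440.0 amp:0.1;    /* a noteDur lasting one beat */
t 1;
piano (1.0) freq:330.0 amp:0.1;
END;
```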

Note and Event Scheduling, Sequencing and Processing

During a MusicKit performance, MKNote objects are dispatched, in time order, to objects that realize them in some manner―usually by making a sound on the DSP or on an external MIDI synthesizer. This process primarily involves instances of MKPerformer, MKInstrument, and MKConductor: MKPerformers acquire or generate MKNotes and send them to MKInstruments, which realize them, while the MKConductor schedules when each MKNote is dispatched.

This system is useful for designing a wide variety of applications that process MKNotes sequentially. For example, a MusicKit performance can be configured to perform MIDI or DSP sequencing, graphic animation, MIDI real-time processing (such as echo, channel mapping, or doubling), sequential editing on a file, mixing and filtering of MKNote streams under interactive control, and so on.
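The dispatch cycle can be modeled as a priority-queue loop (a Python sketch with invented names; the Kit's actual scheduling also handles real time, tempo, and pausing):

```python
import heapq

def perform(performers, instruments):
    """Pop the earliest pending note across all performers and deliver
    it to every instrument. Each performer is an iterator yielding
    (time, note) pairs in its own time order."""
    queue = []
    for p in performers:
        first = next(p, None)
        if first:
            # id(p) breaks ties so notes themselves are never compared
            heapq.heappush(queue, (first[0], id(p), first[1], p))
    while queue:
        time, _, note, p = heapq.heappop(queue)
        for inst in instruments:
            inst(time, note)          # "realize" the note
        nxt = next(p, None)
        if nxt:
            heapq.heappush(queue, (nxt[0], id(p), nxt[1], p))
```

Mixing, echo, or channel mapping would be modeled here as instruments that transform incoming notes and repost them rather than sounding them directly.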

Both MKPerformer and MKInstrument are abstract classes. This means that you never create and use instances of these classes directly in an application. Rather, they define common protocol (for sending and receiving MKNotes) that's used by their subclasses. The subclasses build on this protocol to generate or realize MKNotes in some application-specific manner.

The MusicKit provides a number of MKPerformer and MKInstrument subclasses. The principal MKPerformer subclasses are:

The MKInstrument subclasses provided by the MusicKit are:

DSP Synthesis

MKOrchestra handles all DSP allocation and time management. MKSynthInstrument is a voice allocator: it manages instances of MKSynthPatch, each of which represents a single sound-producing or sound-processing voice on the DSP. MKSynthPatches are built from MKUnitGenerators, the basic building blocks of DSP synthesis, as well as MKSynthData objects, which represent DSP memory. The MusicKit provides an extensive set of MKSynthPatch and MKUnitGenerator subclasses, in the MKSynthPatch and MKUnitGenerator frameworks, respectively.
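The voice-allocation role can be sketched conceptually (a hypothetical Python model, not the Kit's API; the real MKSynthInstrument has richer allocation policies):

```python
class VoiceAllocator:
    """Conceptual sketch of MKSynthInstrument-style voice allocation:
    each incoming noteTag claims a free voice; a noteOff releases it."""
    def __init__(self, n_voices):
        self.free = list(range(n_voices))
        self.in_use = {}              # noteTag -> voice index
    def note_on(self, tag):
        if tag in self.in_use:        # same tag (legato/update): reuse voice
            return self.in_use[tag]
        if not self.free:
            return None               # out of voices (a real allocator may steal)
        voice = self.free.pop()
        self.in_use[tag] = voice
        return voice
    def note_off(self, tag):
        voice = self.in_use.pop(tag, None)
        if voice is not None:
            self.free.append(voice)
```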