The following sections briefly describe the topics that the Kit addresses through its classes and protocols. Within the descriptions, class and protocol names are highlighted as they're introduced for easy identification.
Music is represented in a three-level hierarchy of MKScore, MKPart, and MKNote objects. MKScores and MKParts are analogous to orchestral scores and the instrumental parts that they contain: an MKScore represents a musical composition while each MKPart corresponds to a particular means of realization. An MKPart consists of a time-sorted collection of MKNotes, each of which contains data that describes a musical event.
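For example, a tiny composition might be assembled along these lines. This is a minimal sketch in Objective-C; it assumes the MusicKit umbrella header and the addNote: and addPart: methods, and exact spellings may vary between MusicKit versions.

    #import <MusicKit/MusicKit.h>

    /* One composition (MKScore) containing one part (MKPart) with one event (MKNote). */
    MKScore *score  = [[MKScore alloc] init];
    MKPart  *melody = [[MKPart alloc] init];
    MKNote  *event  = [[MKNote alloc] init];

    [melody addNote:event];    /* an MKPart keeps its MKNotes sorted by time */
    [score  addPart:melody];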
The information in an MKNote object falls into four categories:
A list of attribute-value pairs called parameters that describe the characteristics of a musical event
A noteType that determines the general character of the MKNote
An identifying integer called a noteTag, used to associate different MKNotes with each other
A timeTag, or the onset time of the MKNote
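Concretely, each of the four categories has its own accessors on MKNote. The following sketch assumes the parameter and noteType constants (MK_freq, MK_amp, MK_noteDur), the setter names shown, and an MKNoteTag() function that returns a fresh, unique tag; these names follow the MusicKit API but may differ slightly by version.

    MKNote *note = [[MKNote alloc] init];

    [note setPar:MK_freq toDouble:440.0];   /* parameters: attribute-value pairs */
    [note setPar:MK_amp  toDouble:0.1];

    [note setNoteType:MK_noteDur];          /* noteType: a complete note with a duration */
    [note setDur:2.0];

    [note setNoteTag:MKNoteTag()];          /* noteTag: associates related MKNotes */

    [note setTimeTag:8.0];                  /* timeTag: onset, in beats, within its MKPart */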
A parameter supplies a value for a particular attribute of a musical sound, its frequency or amplitude, for example. A parameter's value can be simple―an integer, floating point number, or character string―or it can be another object. The MKNote object provides special methods for setting the value of a parameter as an MKEnvelope object or an MKWaveTable object. With the MKEnvelope object you can create a value that varies over time. The MKWaveTable object contains sound or spectrum data that's used in various types of synthesis, such as wavetable synthesis.
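For instance, an amplitude envelope can be attached to a note as sketched below; the setPointCount:xArray:yArray: method and the MK_ampEnv parameter name follow the MusicKit API but should be treated as assumptions here, and aNote stands for an existing MKNote.

    /* A simple amplitude envelope: rise quickly to full level, then decay to silence. */
    double times[]  = {0.0, 0.1, 1.0};
    double levels[] = {0.0, 1.0, 0.0};

    MKEnvelope *ampEnv = [[MKEnvelope alloc] init];
    [ampEnv setPointCount:3 xArray:times yArray:levels];

    [aNote setPar:MK_ampEnv toEnvelope:ampEnv];   /* a parameter whose value is an object */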
The manner in which a parameter is interpreted depends on the object that realizes the MKNote. For example, one object could interpret a heightened brightness parameter by increasing the amplitude of the sound, while another, given the same MKNote, might increase the sound's spectral content. In this way, parameters are similar to Objective-C messages: The precise meaning of either depends on how they are implemented by the object that receives them.
An MKNote's noteType and noteTag are used together to help interpret its parameters. There are five noteTypes:
noteDur represents an entire musical note (a note with a duration).
noteOn establishes the beginning of a note.
noteOff establishes the end of a note.
noteUpdate represents the middle of a note.
mute represents an MKNote not directly associated with sound production.
noteDurs and noteOns both establish the beginning of a musical note. The difference between them is that the noteDur also has information that tells when the note should end. A note created by a noteOn simply keeps sounding until a noteOff comes along to stop it. In either case, a noteUpdate can change the attributes of a musical note while it's sounding. The mute noteType is used to represent any additional information, such as barlines or rehearsal numbers.
A noteTag is an arbitrary integer that's used to identify different MKNotes as part of the same musical note or phrase. For example, a noteOff is paired with a noteOn by matching noteTag values. You can create a legato passage with a series of noteOns, all with the same noteTag, concluded by a single noteOff.
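A two-note legato phrase might be sketched like this, with one noteTag tying the MKNotes together (constants and setters as assumed in the earlier sketches):

    int tag = MKNoteTag();                    /* one tag for the whole phrase */

    MKNote *first = [[MKNote alloc] init];
    [first setNoteType:MK_noteOn];
    [first setNoteTag:tag];
    [first setTimeTag:0.0];
    [first setPar:MK_freq toDouble:440.0];

    MKNote *second = [[MKNote alloc] init];   /* same tag: continues the same phrase legato */
    [second setNoteType:MK_noteOn];
    [second setNoteTag:tag];
    [second setTimeTag:1.0];
    [second setPar:MK_freq toDouble:494.0];

    MKNote *last = [[MKNote alloc] init];     /* a single noteOff concludes the phrase */
    [last setNoteType:MK_noteOff];
    [last setNoteTag:tag];
    [last setTimeTag:2.0];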
The MusicKit's noteTag system solves many of the problems inherent in MIDI, which uses a combination of key number and channel to identify events that are part of the same musical phrase. For example, the MusicKit can create and manage an unlimited number of simultaneous legato phrases while MIDI can only manage 16 (in MIDI mono mode). Also, with MIDI's tagging system, mixing streams of notes is difficult―notes can easily get clobbered or linger on beyond their appointed end. The MusicKit avoids this problem by reassigning unique noteTag values when streams of MKNotes are mixed together.
An MKNote's timeTag is relevant only when the MKNote is in an MKPart―it specifies the time of the MKNote relative to the start of its MKPart. timeTag values are measured in beats, where the value of a beat can be set by the user. If the MKNote is a noteDur, its duration is also computed in beats.
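As a rough sketch, the size of a beat is set on the conductor that will run the performance; the defaultConductor and setTempo: names are assumptions about the MKConductor API, and aNote is an existing MKNote.

    [[MKConductor defaultConductor] setTempo:120.0];   /* 120 beats per minute: one beat lasts half a second */

    [aNote setTimeTag:4.0];   /* onset: four beats into its MKPart */
    [aNote setDur:2.0];       /* duration: two beats */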
An entire MKScore can be stored in a scorefile. The scorefile format is designed to represent any information that can be put in an MKNote object, including the MKPart to which the MKNote belongs. Scorefiles are in ASCII format and can easily be created and modified with a text editor. In addition, the MusicKit provides a language called ScoreFile that lets you add simple programming constructs such as variables, assignments, and arithmetic expressions to your scorefile.
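In Objective-C, conversion in either direction is a single message to an MKScore. The writeScorefile: and readScorefile: methods, and their file-name arguments, are assumed here; score is the MKScore from the earlier sketch.

    /* Save a composition as an ASCII scorefile, then read it back into a new MKScore. */
    [score writeScorefile:@"Example.score"];

    MKScore *copy = [[MKScore alloc] init];
    [copy readScorefile:@"Example.score"];    /* rebuilds the MKParts and MKNotes from the file */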
During a MusicKit performance, MKNote objects are dispatched, in time order, to objects that realize them in some manner―usually by making a sound on the DSP or on an external MIDI synthesizer. This process primarily involves instances of MKPerformer, MKInstrument, and MKConductor:
An MKPerformer acquires MKNotes, either by opening a file, looking in an MKPart or MKScore, or generating them itself, and sends them to one or more MKInstruments. Pseudo-performers such as MKMidi or the application itself may act as MKPerformers, supplying MKNotes in response to asynchronous events.
An MKInstrument receives MKNotes sent to it by one or more MKPerformers and realizes them in some distinct manner.
The MKConductor acts as a scheduler, ensuring that MKNotes are transmitted from MKPerformers to MKInstruments in order and at the right time.
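A minimal performance can be sketched as follows. Connections are made between MKNoteSender and MKNoteReceiver objects; the method names shown (setScore:, noteSenders, connect:, noteReceiver, activate, setFinishWhenEmpty:, startPerformance) follow the MusicKit API but are assumptions here, as is treating noteSenders as an NSArray. DSP setup for the MKSynthInstrument is omitted (see the synthesis sketch at the end of this section).

    /* Send every MKPart of a score to a single instrument, in time order. */
    MKScorePerformer  *performer = [[MKScorePerformer alloc] init];
    MKSynthInstrument *ins       = [[MKSynthInstrument alloc] init];

    [performer setScore:score];

    /* Each MKPartPerformer inside the MKScorePerformer has its own MKNoteSender. */
    for (MKNoteSender *sender in [performer noteSenders])
        [sender connect:[ins noteReceiver]];

    [performer activate];                     /* schedule the MKNotes with the MKConductor */
    [MKConductor setFinishWhenEmpty:YES];
    [MKConductor startPerformance];           /* MKNotes now flow to the instrument at the right times */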
This system is useful for designing a wide variety of applications that process MKNotes sequentially. For example, a MusicKit performance can be configured to perform MIDI or DSP sequencing, graphic animation, MIDI real-time processing (such as echo, channel mapping, or doubling), sequential editing on a file, mixing and filtering of MKNote streams under interactive control, and so on.
Both MKPerformer and MKInstrument are abstract classes. This means that you never create and use instances of these classes directly in an application. Rather, they define common protocol (for sending and receiving MKNotes) that's used by their subclasses. The subclasses build on this protocol to generate or realize MKNotes in some application-specific manner.
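On the receiving side, an MKInstrument subclass defines its behavior by overriding realizeNote:fromNoteReceiver:. The sketch below only logs each MKNote; the method signature and the parAsDouble: accessor follow the MKInstrument and MKNote APIs but should be read as assumptions.

    /* A toy MKInstrument that realizes MKNotes by printing them. */
    @interface NoteLogger : MKInstrument
    @end

    @implementation NoteLogger
    - realizeNote:(MKNote *)aNote fromNoteReceiver:(MKNoteReceiver *)aNoteReceiver
    {
        NSLog(@"note at beat %f, freq %f", [aNote timeTag], [aNote parAsDouble:MK_freq]);
        return self;
    }
    @end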
The MusicKit provides a number of MKPerformer and MKInstrument subclasses. The principal MKPerformer subclasses are:
MKScorePerformer and MKPartPerformer. These read MKNotes from a designated MKScore and MKPart, respectively. MKScorePerformer is actually a collection of MKPartPerformers, one for each MKPart in the MKScore.
MKScorefilePerformer reads a scorefile, forming MKNote objects from the contents of the file. Its only advantage over MKScorePerformer is that no memory-resident representation of the MKScore is used; thus, it can immediately begin performing huge scorefiles that would take some time to read into an MKScore.
MKMidi (a pseudo-MKPerformer) creates MKNote objects from the byte stream generated by an external MIDI synthesizer attached to a serial port.
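MIDI input might be wired up as in the following sketch; the +midi constructor, channelNoteSender:, open, and run are assumptions about the MKMidi API, and anInstrument stands for any MKInstrument whose noteReceiver is to be fed.

    /* Turn incoming MIDI bytes on channel 1 into MKNotes and hand them to an instrument. */
    MKMidi *midi = [MKMidi midi];
    [[midi channelNoteSender:1] connect:[anInstrument noteReceiver]];

    [midi open];
    [midi run];    /* MKNote objects now arrive in response to the external synthesizer */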
The MKInstrument subclasses provided by the MusicKit are:
MKSynthInstrument objects realize MKNotes by synthesizing them on the DSP.
MKMidi (a pseudo-MKInstrument) turns MKNote objects into MIDI commands and sends the resulting byte stream back out to an external MIDI synthesizer connected to a serial port.
MKScoreRecorder and MKPartRecorder receive MKNotes, copy them, and add them to an MKScore and an MKPart, respectively (a short sketch follows this list).
MKScorefileWriter writes scorefiles on the fly during a performance. It is analogous to MKScorefilePerformer.
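Recording is just a matter of placing a recorder at the receiving end of a connection, as in this sketch; setPart: on MKPartRecorder and noteSender on the performer are assumed, and aPerformer stands for any MKPerformer.

    /* Capture whatever a performer sends into an MKPart. */
    MKPart *captured = [[MKPart alloc] init];
    MKPartRecorder *recorder = [[MKPartRecorder alloc] init];
    [recorder setPart:captured];

    [[aPerformer noteSender] connect:[recorder noteReceiver]];
    /* After the performance, captured holds copies of every MKNote it received
       and can be added to an MKScore or written to a scorefile. */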
MKNoteFilter is a subclass of MKInstrument that also implements MKPerformer's MKNote-sending protocol; thus it can both receive and send MKNotes. Any number of MKNoteFilter objects can be interposed between an MKPerformer and an MKInstrument. MKNoteFilter is, itself, abstract; the action an MKNoteFilter object takes in response to receiving an MKNote is defined by the subclass. For example, you can create an MKNoteFilter subclass that creates and activates a new MKPerformer for every MKNote it receives.
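As a different illustration, here is a sketch of an MKNoteFilter subclass that transposes each MKNote up an octave before passing it on; the noteSender accessor, sendNote:, isParPresent:, and copying of MKNotes are assumptions about the MusicKit API.

    /* An MKNoteFilter that doubles the frequency of every MKNote passing through it. */
    @interface OctaveUp : MKNoteFilter
    @end

    @implementation OctaveUp
    - realizeNote:(MKNote *)aNote fromNoteReceiver:(MKNoteReceiver *)aNoteReceiver
    {
        MKNote *out = [aNote copy];
        if ([out isParPresent:MK_freq])
            [out setPar:MK_freq toDouble:2.0 * [out parAsDouble:MK_freq]];
        [[self noteSender] sendNote:out];   /* forward to whatever is connected downstream */
        return self;
    }
    @end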
MKOrchestra handles all allocation and time management on the DSP. MKSynthInstrument is a voice allocator and manages instances of MKSynthPatch, each of which represents a single sound-producing/processing voice on the DSP. MKSynthPatches are composed of MKUnitGenerators, the basic building blocks of DSP synthesis, and MKSynthData objects, which represent DSP memory. The MusicKit provides an extensive set of MKSynthPatch and MKUnitGenerator subclasses, in the MKSynthPatch and MKUnitGenerator frameworks, respectively.
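Putting the synthesis classes together, DSP playback might be set up as sketched below. The open, run, and close methods on MKOrchestra, setSynthPatchClass: and setSynthPatchCount: on MKSynthInstrument, and the Pluck synthpatch class are assumptions about the MusicKit API and its synthpatch library; the performer connections are as in the earlier performance sketch.

    /* Synthesize on the DSP: open the orchestra, choose a synthpatch, then perform. */
    MKOrchestra *orch = [MKOrchestra new];
    [orch open];                              /* claim the DSP and load its runtime system */

    MKSynthInstrument *ins = [[MKSynthInstrument alloc] init];
    [ins setSynthPatchClass:[Pluck class]];   /* Pluck: a plucked-string MKSynthPatch from the library */
    [ins setSynthPatchCount:8];               /* allow up to eight simultaneous voices */

    /* ...connect a performer's MKNoteSenders to [ins noteReceiver] as shown earlier... */

    [orch run];                               /* start the DSP's sample clock */
    [MKConductor startPerformance];
    /* ...when the performance is over... */
    [orch close];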