This chapter describes the classes, methods, and C functions that the MusicKit defines to represent music data.
The MKNote Class
Whether you are composing music, designing a performance
scheme, or building software instruments, you need a thorough
understanding of the MKNote
class. The
MKNote
class provides a means for describing
musical sounds; a MKNote
object is a repository
of musical information that's passed to and acted on by other MusicKit
objects.
A MKNote
contains three categories of
information:
A collection of
parameters. Parameters describe the attributes
of a musical sound, such as its frequency (pitch) and amplitude
(loudness). A MKNote
can contain any number of
parameters, including none.
A single note type that
expresses the basic character of the MKNote
object, whether it defines an entire musical note, or just its
beginning, middle, or end.
An integer identifier called a note
tag. Note tags are used to associate a series of
MKNotes. For example, two separate
MKNote
objects that define the beginning and
the end of a musical note must have the same note tag value.
The three categories of MKNote
information are examined in detail in the following sections.
Parameters are the pith of a MKNote
object. They're used to enumerate and describe the quantifiable
aspects of a musical sound. The most important rule of parameters is
that they don't do anything; in order for a
parameter to have an effect, another object (or your application) must
retrieve and apply it in some way. For example, the subclasses of
MKSynthPatch
provided by the MusicKit are
designed to look for particular sets of parameters when synthesizing a
MKNote. Some common parameters, such as those
for frequency and amplitude, are looked for by all these
classes.
A parameter consists of a unique integer tag, a unique print name (a string), and a value. The tag and name are used to identify the parameter:
The parameter's tag identifies it within an application.
The print name identifies the parameter in a scorefile.
Thus, the tag and the name are simply two ways of identifying the same parameter. To create a new parameter, you pass a print name (that you make up yourself) to the parTagForName: class method. The method returns a unique tag for the parameter:
/* Create a new parameter tag (an int). */
int myPar = [MKNote parTagForName: @"myPar"];
The name of the variable that represents the tag needn't be the same as the string name, although in the interest of clarity this is regarded as good form. The parTagForName: method can also be used to retrieve the tag of a parameter that's already been created: parTagForName: creates a new tag for each unique argument that's passed to it; subsequent invocations with the same argument will return the already-created tag.
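For example, a brief sketch building on the tag created above (the second name, anotherPar, is made up purely for illustration):

/* Asking again with the same name returns the same tag value as myPar. */
int samePar = [MKNote parTagForName: @"myPar"];

/* A new name produces a new, distinct tag. */
int otherPar = [MKNote parTagForName: @"anotherPar"];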
Since the MusicKit MKSynthPatches look
for particular parameters during synthesis, it follows that the
MusicKit must also supply some number of parameter tags. These are
listed and described in Appendix A.
MusicKit parameter tags are represented by integer constants such as MK_freq (for frequency) and MK_amp (for amplitude). The print names are formed by dropping the “MK_” prefix. Thus, MK_freq is represented in a scorefile as “freq” and MK_amp is “amp”.
By definition, the parameter tags supplied by the MusicKit are
sufficient for all uses of its MKSynthPatches and MKMidi. If you create your own
MKSynthPatch
subclass, you can create
additional parameter tags to fully describe its functionality, but you
should use as many of the MusicKit parameter tags as are applicable.
For example, it's assumed that all MKSynthPatch
subclasses will have a settable frequency; rather than create your own
frequency parameter tag, you should use MK_freq. This promotes portability between
MKSynthPatches.
Lest the emphasis on synthesis be misconstrued, keep in mind
that a parameter's purpose is not restricted to that arena.
Parameters can be used in any way that your application sees fit; for
example, a graphic notation program could use parameters to describe
how a MKNote
object is displayed on the screen.
However, you should also keep in mind that a parameter is significant
only if some other object or your application looks for and uses
it.
The method you use to set a parameter's value depends on the
data type of the value. The MKNote
class
provides six value-setting methods. The first three of these
are:
setPar:toDouble: sets the parameter value as a double.
setPar:toInt: sets the value as an int.
setPar:toString: sets the value as a string (an NSString).
The other three methods will be examined later.
The argument to the setPar: keyword is a parameter tag; the second argument is the value that you're setting. For example, to set the value of the bearing parameter (stereophonic location of a DSP synthesized sound) to 45.0 degrees (hard right), you could send any of the following messages:
/* Of course, you have to create the MKNote first. */
MKNote *aNote = [[MKNote alloc] init];

/* Set the bearing. */
[aNote setPar: MK_bearing toDouble: 45.0];
/* or */
[aNote setPar: MK_bearing toInt: 45];
/* or */
[aNote setPar: MK_bearing toString: @"45"];
You generally set bearing as a double―all the MusicKit
MKSynthPatches apply bearing, as well as most
other number-valued parameters, as a value of that type. However,
retrieval methods are provided that perform type conversion for you.
For example, the message
/* Retrieve the bearing parameter value as a double. */
double theBearing = [aNote parAsDouble: MK_bearing];
returns the double 45.0 regardless of which of the three methods you used to set the value. The retrieval methods include:
parAsDouble: returns the value as a double.
parAsInt: returns the value as an int.
parAsString: returns a copy of the value as an NSString.
parAsStringNoCopy: returns the value itself, rather than a copy.
You shouldn't alter the string returned by parAsStringNoCopy:; it's owned by the MKNote object.
If the parameter hasn't been set, the retrieval methods return values as follows:
parAsDouble: returns MK_NODVAL.
parAsInt: returns MAXINT.
The string retrieval methods return an empty string.
Unfortunately, you can't use MK_NODVAL in a simple comparison predicate. To check for this return value, you must call the in-line function MKIsNoDVal(); the function returns nonzero if its argument is MK_NODVAL and 0 if it isn't:
/* Retrieve the value of the amplitude parameter. */
double amp = [aNote parAsDouble: MK_amp];

/* Test for the parameter's existence. */
if (!MKIsNoDVal(amp))
    ...  /* do something with the parameter */
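The int retrieval works the same way; a minimal sketch, using the MAXINT value mentioned above:

/* Retrieve an int-valued parameter and test whether it was set. */
int bearing = [aNote parAsInt: MK_bearing];
if (bearing != MAXINT) {
    /* The parameter is present; use the value. */
}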
For most uses of parameters―in particular, if you're
designing a MKSynthPatch
subclass―it's
important to know whether the parameter was actually set before
applying its value. You can compare the retrieved values with the
values shown above to check whether the parameter had actually been
set, or you can test the BOOL value
returned by the isParPresent:
method:
/* Only retrieve bearing if its value was set. */
if ([aNote isParPresent: MK_bearing]) {
    double theBearing = [aNote parAsDouble: MK_bearing];
    /* ... */
}
To properly set a parameter's value, you need to know the range
of values that are meaningful to the object that applies it. The
MusicKit parameter lists given in Appendix B supply this information.
If you're creating an application (or writing a scorefile) in order to
synthesize MKNotes on the DSP or on an external
MIDI synthesizer, you should consult these lists to make sure you're
setting the MKNotes' parameters to reasonable
and musically useful values.
Three of the most commonly used parameters, those for pitch, begin time, and duration, are special. See the Section called Basic Parameters, later in this chapter, for a discussion of alternative ways to set and retrieve the values of these parameters.
Some parameters take objects as their values. The methods for setting an object-valued parameter are:
setPar:toEnvelope:
sets the value as an MKEnvelope
object.
setPar:toWaveTable:
sets the value as a MKWaveTable
object.
setPar:toObject: sets the value as a non-MusicKit object.
MKEnvelopes and MKWaveTables are
described later in this chapter. The setPar:toObject: method is provided so you can
set a parameter to an instance of one of your own classes. The class
that you define should implement the following methods so its
instances can be written to and read from a scorefile:
writeASCIIStream:
provides instructions for writing the object as
ASCII text. In a scorefile, the text that
represents an object―this includes
MKEnvelopes and MKWaveTables―is enclosed in square
brackets ([]). The ASCII
representation of an object must not include a close bracket. The
method's argument is the NSMutableData
to which the text is written.
readASCIIStream:
provides instructions for creating an object from its ASCII
representation. When the method is called, the argument (an NSMutableData)
is pointing to the first character after the open bracket. You should
leave the argument pointing to the close bracket. In other words, you
should read in whatever you wrote out.
Both of these methods are called automatically when you read a scorefile into your application (scorefile-reading methods are described later in this chapter).
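As a rough sketch of such a class: the class name, its instance variables, and the text format below are invented for illustration; only the writeASCIIStream: method name and its NSMutableData argument come from the discussion above, and the matching readASCIIStream: method (which would parse the same text back) is omitted.

@interface MyDisplayInfo : NSObject
{
    int glyph;        /* hypothetical display attributes */
    double xOffset;
}
- (void) writeASCIIStream: (NSMutableData *) aStream;
@end

@implementation MyDisplayInfo
- (void) writeASCIIStream: (NSMutableData *) aStream
{
    /* Write the object's state as ASCII text; remember: no close bracket. */
    NSString *text = [NSString stringWithFormat: @"%d %g", glyph, xOffset];
    [aStream appendData: [text dataUsingEncoding: NSASCIIStringEncoding]];
}
@end

/* Set an instance as the value of the myPar parameter created earlier. */
[aNote setPar: myPar toObject: [[MyDisplayInfo alloc] init]];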
You can retrieve an object-valued parameter through the following methods:
parAsEnvelope:
returns an MKEnvelope
object.
parAsWaveTable:
returns a MKWaveTable
object.
parAsObject: returns a non-MusicKit object.
Unlike the value retrieval methods shown in the previous section, these methods return nil if the parameter's value isn't the correct type.
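For example, a short sketch of retrieving an MKEnvelope-valued parameter (MK_ampEnv, the amplitude envelope parameter, is described later in this chapter):

/* nil means the parameter isn't set or doesn't hold an MKEnvelope. */
MKEnvelope *ampEnv = [aNote parAsEnvelope: MK_ampEnv];
if (ampEnv != nil) {
    /* Apply the envelope. */
}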
A handful of attributes are common to all musical notes: pitch, loudness, begin time, and duration. Special methods and values are used to set the parameters that represent these attributes, as explained in the following sections.
Frequency and pitch are two terms that refer to the most fundamental aspect of a musical sound: its register or tonal height. Frequency is the exact measurement of the periodicity of an acoustical waveform expressed in hertz. Pitch, on the other hand, is an inexact representation expressed in musical names such as F sharp, A flat, or G natural.
When the DSP synthesizes a musical note, it produces a tone at a specified frequency. However, musicians think in terms of pitch. To bridge the gap between frequency and pitch, the MusicKit defines sets of pitch variables and key numbers that represent particular frequencies.
A pitch variable takes the following form:
pitchLetter[sharpOrFlat]octave
pitchLetter is a lowercase letter from a to g. As in standard music notation, the MusicKit's pitch variables are organized within an octave such that c is the lowest pitch and b is the highest.
The optional sharpOrFlat is s for sharp and f for flat. They raise or lower by a semitone the pitch indicated by pitchLetter.
octave is 00 or an integer from 0 to 9. The octave component of the pitch name variable places the pitch class within a particular octave, where 00 is the lowest octave and 9 is the highest. Octaves are numbered such that c4 is middle C.
Some examples of pitch variables are:
Table 4-1. Pitch variable examples
Pitch Variable | Pitch |
---|---|
ef4 | E flat above middle C |
gs3 | G sharp below middle C |
f00 | F natural in the fifth octave below middle C |
bs8 | B sharp five octaves above middle C (the same as c9) |
Notice that the natural sign isn't represented in the pitch variables. If neither the sharp nor the flat sign is present, the pitch is automatically natural. In addition, key signatures aren't represented; the accidentals that define a key must be present in each pitch that they affect.
Each pitch variable represents a predefined frequency. By default, the frequencies that correspond to the pitch variables define a twelve-tone equal-tempered tuning system, with a4 equal to 440.0 Hz:
Twelve-tone means that there are twelve discrete tones within an octave.
Equal-tempered means that the frequency ratio between any pair of successive tones is always the same.
This is the tuning system used to tune modern fixed-pitch instruments, most notably the piano. The complete table of pitch variables and the corresponding default frequencies is given in the Section called Pitches and Frequencies in Appendix A.
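As a quick numeric sketch of what equal temperament implies (plain C arithmetic, not a MusicKit call): each semitone multiplies the frequency by the twelfth root of two.

#include <math.h>

double a4 = 440.0;                         /* the reference pitch */
double semitone = pow(2.0, 1.0 / 12.0);    /* about 1.059463 */
double as4 = a4 * semitone;                /* a sharp above a4, roughly 466.16 Hz */
double a5  = a4 * pow(semitone, 12);       /* exactly 880.0 Hz, one octave up */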
Another way to specify the pitch is to use a key number. Key numbers are integers that correspond to the keys of a MIDI keyboard. As a MIDI standard, 60 is the key number for the middle C of the keyboard. The MusicKit provides constants to represent key numbers. The form of these constants is like that of the pitch variables, but with the letter k appended; for example, the constant c4k represents middle C (key number 60).
Key numbers are provided primarily to accommodate MIDI
instruments. If you record a MIDI performance (using a
MKMidi
object), the pitch specifications will
all be represented as key numbers. When you realize
MKNotes on a MIDI synthesizer, the actual
frequency represented by a particular key number is controlled by the
synthesizer itself. The standard of “60 equals middle C”
simply means that key number 60 creates a tone at whatever frequency
the synthesizer's middle C key is tuned to produce.
Setting the Pitch of a MKNote
You can specify the pitch of MKNote
objects as a frequency or pitch variable (a double), or as a key number (an int). These are represented by the parameter
tags MK_freq and MK_keyNum. Regardless of how it's synthesized
(on the DSP or on a MIDI instrument), the appropriate value is
converted from whichever parameter is present.
To set a MKNote's pitch, you use the methods described
earlier:
/* You must import this file when using pitch variables. */
#import <MusicKit/pitches.h>

/* Set the MKNote's pitch to middle C as a frequency. */
[aNote setPar: MK_freq toDouble: 261.625];

/* The same using a pitch variable. */
[aNote setPar: MK_freq toDouble: c4];

/* And as a key number. */
[aNote setPar: MK_keyNum toDouble: c4k];
The conversion between frequencies or pitch variables and key
numbers allows you to create MKNote
objects that can be played on both
the DSP and on a MIDI instrument using the same pitch
parameter.
Retrieving the Pitch of a MKNote
Special methods are provided to retrieve pitch:
freq returns a double value as a frequency.
keyNum returns an int as a key number.
If the MK_freq parameter isn't present but MK_keyNum is, the freq method returns a frequency value converted from the MK_keyNum parameter. Similarly, keyNum returns a key number value converted from MK_freq in the absence of MK_keyNum.
The MusicKit MKSynthPatches use freq to retrieve pitch information;
MKMidi uses keyNum.
Keep in mind that either retrieval method converts a value from the opposite parameter only if its own parameter isn't set. In addition, you can set MK_freq and MK_keyNum independently of each other: Setting one doesn't reset the other.
Since frequencies are continuous and key numbers are discrete, the correspondence between them isn't exact; conversion from frequency to key number sometimes requires approximation. The pitch table in the Section called Pitches and Frequencies in Appendix A gives the frequency range that corresponds to particular key numbers (in the default tuning system).
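A brief sketch of the conversion, using the key number constant for middle C:

/* Set only the key number... */
[aNote setPar: MK_keyNum toInt: c4k];     /* middle C, key number 60 */

/* ...and retrieve the pitch either way. */
int    k = [aNote keyNum];                /* 60 */
double f = [aNote freq];                  /* about 261.63 Hz in the default tuning */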
The perceived loudness of a musical note depends on a number of
factors, the most important being the amplitude of the waveform and
its spectral energy, or brightness. All the MusicKit
MKSynthPatches use the amplitude parameter,
MK_amp; most also use MK_bright, the brightness parameter.
Amplitude is fairly straightforward: The value of the amplitude parameter determines the strength of the signal produced by the DSP. The value of MK_amp is retrieved as a double. Its value must be between 0.0 and 1.0, where 0.0 is inaudibly soft and 1.0 is a fully saturated signal. Perceived loudness grows logarithmically with amplitude: Successive MKNotes with incrementally increasing amplitude values are perceived to get louder by successively smaller amounts. For instance, the difference in loudness between amplitudes of 0.1 and 0.2 sounds much greater than the difference between 0.8 and 0.9.
Amplitude is set and retrieved through the normal methods; for example:
/* Set the amplitude of a MKNote. */
[aNote setPar: MK_amp toDouble: 0.2];

/* Retrieve amplitude. */
double myAmp = [aNote parAsDouble: MK_amp];

/* Set the amplitude as a decibel value. */
[aNote setPar: MK_amp toDouble: MKdB(-15.0)];
The MKdB() function used in the last statement converts its decibel argument into the corresponding amplitude value. The range of the decibel scale extends from negative infinity (inaudible) to 0.0 (maximally loud). Decibel scaling creates a linear correspondence between increasing value and perceived loudness: The perceived increase in loudness from -20.0 to -15.0 is the same as that from -15.0 to -10.0 (as well as from -10.0 to -5.0 and from -5.0 to 0.0).
Brightness can be thought of as a tone control. The greater the
value of MK_bright, the brighter the
synthesized sound. As you decrease brightness, the sound becomes
darker. MK_bright is used
differently by the various MKSynthPatch
subclasses; usually it's used to modify the values of other
timbre-related parameters. Some
MKSynthPatches, such as those that perform
MKWaveTable
synthesis, don't use MK_bright at all.
Brightness values are usually set and retrieved through
setPar:toDouble: and parAsDouble: (the MusicKit
MKSynthPatches always retrieve the value of
MK_bright as a double). The range of valid brightness values
is, in general, 0.0 to 1.0; you can actually set MK_bright to a value in excess of 1.0, although
this may cause distortion in some
MKSynthPatches. Specifying brightness in
decibels is possible, but the scale tends to have less meaning here
than it does for amplitude.
The begin time, or time tag, and duration
parameters of a MKNote
are set through the
methods setTimeTag: and setDur:. Both methods take a double argument that's measured in
beats. By default, a beat is one second long;
however, you can change the size of a beat through methods defined in
the MKConductor
class.
The retrieval methods timeTag
and dur return a
MKNote's time tag and duration. Because of the
specialized methods for setting and retrieving these parameters, they
don't have parameter tags to represent them, nor do they have print
names. Their representation in a scorefile is explained in
Reference.
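For example, a minimal sketch of setting and retrieving these values:

/* Set the begin time and duration, both in beats. */
[aNote setTimeTag: 2.0];
[aNote setDur: 0.5];

/* Retrieve them. */
double begin = [aNote timeTag];    /* 2.0 */
double length = [aNote dur];       /* 0.5 */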
For some applications, setting a MKNote's
time tag isn't necessary; for instance, you can design a
MKPerformer
object that creates
MKNote
s and performs them immediately.
However, in many musical applications―in particular, for any
application that adds MKNotes to a
MKPart
―time tags are indispensable. For
convenience, the newSetTimeTag:
method lets you create a new MKNote
and set the
time tag value at the same time:
/* Create a new MKNote and set the time tag value to 3.5 beats. */
MKNote *myNote = [MKNote newSetTimeTag: 3.5];
A newly created MKNote
otherwise has a
time tag value of 1.0. Time tags are typically measured from the
beginning of a performance (the MKPerformer
class provides methods that let you add a begin time offset). The
MKNote
in the example would thus be played
after three and a half beats of a performance.
Duration is also in beats and indicates, ostensibly, the
longevity of a MKNote
during synthesis: When
the duration has expired, the MKNote
doesn't
stop short; instead, its MKEnvelope
objects (if
any) start to wind down. The actual length of the
MKNote
is its duration value plus the amount of
time it takes for its amplitude MKEnvelope
to
finish. This is described in detail in the Section called The MKEnvelope
Class, later in this chapter.
Many MKNotes don't have a duration value.
For example, some MKNote
objects initiate a
synthesized tone that plays until a subsequent
MKNote
object, also lacking a duration,
specifically turns it off. The necessity or superfluity of the
duration value is described in the following sections.
A MKNote's note type describes its
musical function with regard to the life of a synthesized sound.
There are five note types. Briefly they are:
NoteDur represents an entire musical note.
NoteOn establishes the beginning of a note.
NoteOff establishes the end of a note.
NoteUpdate modifies a sounding note.
Mute makes no sound.
Each of the five types is represented by an MKToken constant:
MK_noteDur
MK_noteOn
MK_noteOff
MK_noteUpdate
MK_mute
Every MKNote
has exactly one note type;
the default is MK_mute. You set the
note type with setNoteType: and
retrieve it with noteType.
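For example, a small sketch:

/* Make aNote a noteOn and confirm its type. */
[aNote setNoteType: MK_noteOn];
if ([aNote noteType] == MK_noteOn) {
    /* ... */
}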
There are two styles for creating a complete musical note, either with a single noteDur or with a noteOn/noteOff pair.
Note tags are integers that are used to identify
MKNote
objects that are part of the same
phrase; in particular, matching note tags are used to create
noteOn/noteOff pairs and to associate noteUpdates with other
MKNotes. The actual integer value of a note
tag has no significance. The range of note tag values extends from 0
to 2^31 - 1.
You set a MKNote's note tag through
setNoteTag: and retrieve it with
noteTag. The C function
MKNoteTag()
is provided to create note tag values
that are guaranteed to be unique across your entire
application―you should never create note tag values except
through this function.
The following example, in which a noteOff is paired with a noteOn, demonstrates how to create and administer note tags:
/* Create a noteOn and a noteOff and set their time tags. */
MKNote *aNoteOn = [[MKNote alloc] initWithTimeTag: 1.0];
MKNote *aNoteOff = [[MKNote alloc] initWithTimeTag: 3.5];
[aNoteOn setNoteType: MK_noteOn];
[aNoteOff setNoteType: MK_noteOff];

/* Create a new note tag for the noteOn. */
[aNoteOn setNoteTag: MKNoteTag()];

/* Set the noteOff's note tag to that of the noteOn. */
[aNoteOff setNoteTag: [aNoteOn noteTag]];
The following sections further examine each note type and discuss note tags as they apply to each type.
The information in a noteDur defines an entire musical note. A
noteDur is distinguished by having a duration (“Dur” stands
for duration). Of the five note types, only a noteDur can have a
duration value―invoking setDur: automatically sets a MKNote's note type to
MK_noteDur.
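A minimal sketch of a complete noteDur (the pitch and amplitude values are arbitrary, and the c4 pitch variable assumes the pitches.h import shown earlier):

/* One beat into the performance, play a two-beat middle C. */
MKNote *aNoteDur = [[MKNote alloc] initWithTimeTag: 1.0];
[aNoteDur setDur: 2.0];                      /* note type becomes MK_noteDur */
[aNoteDur setPar: MK_freq toDouble: c4];
[aNoteDur setPar: MK_amp toDouble: 0.1];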
You can associate any number of noteUpdates with a noteDur, thereby changing the attributes of the musical note while it's sounding. To associate a noteUpdate with a noteDur, give both MKNotes the same note tag, as described above. NoteUpdates are described in a subsequent section.
The other way to define a complete musical note is to use a noteOn/noteOff pair. A noteOn starts a musical note and a subsequent noteOff terminates it. Each noteOn/noteOff pair must share a unique note tag.
If the same note tag is given to successive noteOns that aren't
articulated by intervening noteOffs, the second and subsequent noteOns
retrigger the MKNote's MKEnvelopes when it's synthesized on the
DSP.
A noteOff triggers the release portion of a
MKNote's MKEnvelope.
Any parameters that it contains are applied to that portion of the
MKNote, however brief. See the Section called The MKEnvelope
Class, later in this chapter.
NoteUpdates are used to alter the parameters of a musical note
that's already underway. A noteUpdate is associated with another
MKNote
by virtue of matching note tags. In the
following example, a noteUpdate is used to change the pitch of a
musical note represented by a noteDur:
MKNote *myNoteDur, *myNoteUpdate;

/* Create a MKNote with a time tag and set its pitch and duration. */
myNoteDur = [[MKNote alloc] initWithTimeTag: 1.0];
[myNoteDur setPar: MK_freq toDouble: c4];
[myNoteDur setDur: 3.0];

/* Create a noteUpdate with a time tag and set its pitch. */
myNoteUpdate = [[MKNote alloc] initWithTimeTag: 2.5];
[myNoteUpdate setNoteType: MK_noteUpdate];
[myNoteUpdate setPar: MK_freq toDouble: d4];

/* Set the note tags to the same value. */
[myNoteDur setNoteTag: MKNoteTag()];
[myNoteUpdate setNoteTag: [myNoteDur noteTag]];
The effect of the two MKNotes is a single, three-beat-long musical note that changes pitch after one-and-a-half beats.
Only the parameters that are explicitly present in the
noteUpdate are applied to the sounding note: If a particular parameter
is present in the original MKNote
but is absent
in an associated noteUpdate, the value of the original parameter is
retained.
A noteUpdate with no note tag affects all the currently sounding
MKNotes that are being realized through the
same MKSynthInstrument
object.
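For instance, a sketch of a tagless noteUpdate that moves every currently sounding note hard left:

/* No note tag is set, so the update applies to all MKNotes currently
   sounding on the receiving MKSynthInstrument. */
MKNote *anUpdate = [[MKNote alloc] init];
[anUpdate setNoteType: MK_noteUpdate];
[anUpdate setPar: MK_bearing toDouble: -45.0];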
A mute is normally ignored by
MKSynthPatch
and MKMidi
objects, so it can't be used to represent a sound-making event. Mutes
are useful for representing structural breakpoints such as bar lines.
If you send the setNoteTag: message
to a mute, its note type is changed to MK_noteUpdate.
The MKEnvelope Class
An envelope is a function that varies over time. Envelopes are extremely important to synthesized music because they allow continuous control of the attributes of a sound. For example, with an envelope you can specify how quickly a musical note speaks and how long it takes to die away. Without envelopes, a synthesized tone would snap on, maintain a steady amplitude for its entire duration, and then snap off. (“Snap” can be taken literally: Both the arrival and the departure of the sound would be accompanied by an audible click.)
An envelope is depicted as a continuous line on an xy coordinate system, where time moves forward from left to right on the x-axis, and the envelope's value at a particular time is given as y. Figure 4-1 shows some typical envelope shapes. The top two envelopes, with their characteristic initial rise and ultimate fall, are typical of those used to control amplitude. The bottom one, applied to frequency, would introduce some warble at the beginning and end of a note.
Instances of the MusicKit's MKEnvelope
class are used to represent envelope functions. An
MKEnvelope
object contains a series of x,y
coordinates, or breakpoints, that mark a change
in an envelope's direction or trajectory. Figure 4-2 superimposes
breakpoints on the previously illustrated envelope shapes (an open
circle denotes the location of a breakpoint).
An MKEnvelope
object can have any number
of breakpoints, allowing you to create arbitrarily complex
functions.
You can use an MKEnvelope
object to
control virtually any attribute of a sound synthesized on the DSP.
While MKEnvelope
control is indispensable for
amplitude, it can also be used to good effect for frequency and
timbre-related attributes associated with particular synthesis
techniques.
Besides providing continuous control of a sound's attributes, an
MKEnvelope
can also be used to retrieve
discrete values of y for given values of x. The retrieved values can
then be used, for example, to set the same parameter in a series of
MKNotes, allowing you to control the
parameter's evolution over an entire musical phrase.
The following sections examine the methods that define
MKEnvelope
objects and demonstrate how to use
them in DSP synthesis and for discrete-value retrieval.
Defining an MKEnvelope
The (x,y) value pairs that define an
MKEnvelope's shape are set through the
setPointCount:xArray:yArray: method.
The first argument is the number of breakpoints in the
MKEnvelope
; the other two arguments are arrays
of x values and y values:
/* Create an MKEnvelope object. */
MKEnvelope *anEnvelope = [[MKEnvelope alloc] init];

/* Create and initialize arrays for the x and y values. */
double xVals[] = {0.0, 1.0, 4.0, 5.0};
double yVals[] = {0.0, 1.0, 1.0, 0.0};

/* Define the MKEnvelope with the data. */
[anEnvelope setPointCount: 4 xArray: xVals yArray: yVals];
The elements in the x and y arrays are paired in the order
given. Thus, the first breakpoint in an
MKEnvelope
is created from the first element in
the x array and the first element in the y array, the second
breakpoint is created from the second elements of either array, and so
on. Figure 4-3 illustrates the MKEnvelope
object defined in the example. The x and y values for each breakpoint
are shown in parentheses.
The way the x and y values are interpreted depends on the way
the MKEnvelope
is used. In general, an
MKEnvelope
is scaled by other values, allowing
the same MKEnvelope
object to be stretched and
squeezed to fit a number of different contexts.
MKEnvelopes and the DSP
The most important use of an MKEnvelope
is to provide continuous control over a musical attribute of a
MKNote
that's synthesized on the DSP. To do
this, you supply the MKEnvelope
object as a
parameter to the MKNote
. For example, an
MKEnvelope
used to control amplitude is set as
a MKNote's MK_ampEnv parameter:
/* Create a MKNote and an MKEnvelope. */
MKNote *aNote = [[MKNote alloc] init];
MKEnvelope *anEnvelope = [[MKEnvelope alloc] init];

/* Create x and y value arrays and define the MKEnvelope. */
double xVals[] = {0.0, 1.0, 4.0, 5.0};
double yVals[] = {0.0, 1.0, 1.0, 0.0};
[anEnvelope setPointCount: 4 xArray: xVals yArray: yVals];

/* Set the MKEnvelope to control aNote's amplitude. */
[aNote setPar: MK_ampEnv toEnvelope: anEnvelope];
The MKEnvelope
defined here is the same
as the one illustrated in Figure 4-3, above. When aNote is synthesized its amplitude follows the
curve shown in the illustration. It rises from zero, maintains a
steady state, and then falls back to zero.
As with any parameter, an
MKEnvelope-valued parameter is only meaningful
if it's looked for and used by the MKSynthPatch
object that synthesizes the MKNote. Appendix A lists and describes the
MKEnvelope
parameters used by the MusicKit
MKSynthPatch
subclasses.
In addition, the MusicKit MKSynthPatches are designed such that MKEnvelopes are only
significant in a noteOn or a noteDur. Setting an
MKEnvelope
parameter in a noteOff or a
noteUpdate has no immediate effect, although it's used if the phrase
is rearticulated and the rearticulating MKNote
(by definition, a noteOn or noteDur) doesn't specify the
MKEnvelope
parameter itself.
Associated with each MKEnvelope
parameter
provided by the MusicKit are two related parameters that interpret
the MKEnvelope's y values. The names of these
parameters are formed as MK_attribute0 and MK_attribute1:
MK_attribute0 is the value of the
MKEnvelope
when y is 0.0.
MK_attribute1 is the value of the
MKEnvelope
when y is 1.0. As a convenience,
the parameter MK_attribute is defined as
a synonym for MK_attribute1.
The parameters that interpret the amplitude
MKEnvelope, for example, are MK_amp0 and MK_amp1 (which is synonymous with MK_amp). Since amplitude should always rise
from and fall back to 0.0 (to avoid clicks), you'll probably never
need to set the value of MK_amp0―if the parameter isn't set, its
value defaults to 0.0. The amplitude
MKEnvelope
is normally interpreted by setting
the value of MK_amp (only):
/* Set the amplitude MKEnvelope (as previously defined). */
[aNote setPar: MK_ampEnv toEnvelope: anEnvelope];

/* The value of MK_amp sets the value when y is 1.0. */
[aNote setPar: MK_amp toDouble: 0.15];
During synthesis, aNote's amplitude is scaled according to the value of MK_amp, as depicted in Figure 4-4 (notice that the breakpoint values themselves don't change, only their interpretations are affected).
Technically, the interpretation of a particular value of y is calculated according to the following formula:
interpretedValue = (scale * y) + offset
where scale is calculated as MK_attribute1 - MK_attribute0 and offset is simply the value of MK_attribute0.
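A worked instance of the formula, using the amplitude values from the example above:

double amp0 = 0.0;              /* MK_amp0 (defaults to 0.0) */
double amp1 = 0.15;             /* MK_amp, that is, MK_amp1 */
double scale = amp1 - amp0;     /* 0.15 */
double offset = amp0;           /* 0.0 */
double y = 0.5;                 /* some breakpoint's y value */
double interpreted = (scale * y) + offset;   /* 0.075 */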
When a MKSynthPatch
receives a noteOn or
noteDur, it starts processing the MKNote's MKEnvelopes, reading their breakpoints one by
one. The y values are scaled and offset as described above; the x
values are taken as seconds (with modifications described in the next
section). If the MKNote's duration (in
seconds) is greater than the duration of the
MKEnvelope
―in other words, if the
MKEnvelope
runs out of breakpoints before the
DSP is done synthesizing the MKNote
―then
the final y value is maintained for the balance of the
MKNote.
To accommodate MKNotes of different
lengths, the MKEnvelope
object lets you define
one of its breakpoints as a stickpoint. When the
MKSynthPatch
reads an
MKEnvelope's stickpoint, it “sticks”
until a noteOff arrives (or the declared duration of a noteDur
elapses). The MKEnvelope
shown in the previous
example, with its flat middle section, can easily be redefined using a
stickpoint, as follows:
/* Initialize arrays for x and y. */
double xVals[] = {0.0, 1.0, 2.0};
double yVals[] = {0.0, 1.0, 0.0};

/* Define the MKEnvelope and set the MK_amp constant. */
[anEnvelope setPointCount: 3 xArray: xVals yArray: yVals];
[aNote setPar: MK_amp toDouble: 0.15];

/* Set the MKEnvelope's stickpoint. */
[anEnvelope setStickpoint: 1];
The argument to setStickpoint:
is a zero-based index into the MKEnvelope's breakpoints. In the example, anEnvelope's second breakpoint is declared to
be the stickpoint. Figure 4-5 shows how the stickpoint allows the
same MKEnvelope
to be applied to
MKNotes (or MKNote
phrases) with different durations. The stickpoint is shown as a solid
circle; the sustained portion of the MKEnvelope
is indicated as a dashed line. The tempo in the illustration is
assumed to be 60.0.
Notice that the duration between the end of the stickpoint
segment and the following breakpoint is always the same (one second,
as defined by the MKEnvelope
itself),
regardless of the length of the MKNote.
An MKEnvelope
object is divided into
three parts: attack, sustain, and release. The stickpoint defines the
sustain; the attack is the portion that comes before the stickpoint
and the release is the portion that comes after it. An
MKEnvelope
can have any number of breakpoints
in its attack and release segments.
You can specify the absolute duration of the attack portion of
an MKEnvelope
by setting the value of the
MK_attributeAtt parameter; the release is set through
MK_attributeRel. For example, the amplitude attack and
release parameters are MK_ampAtt and
MK_ampRel, respectively. The values
of these parameters are taken as the number of seconds (given as
doubles) to spend in either segment,
as illustrated in Figure 4-6. The x values of the breakpoints in the
two segments are scaled within the given durations to maintain their
defined proportions.
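For example, a short sketch:

/* Pin the amplitude attack to 0.1 seconds and the release to 0.5 seconds,
   whatever the shape of the MKEnvelope itself. */
[aNote setPar: MK_ampAtt toDouble: 0.1];
[aNote setPar: MK_ampRel toDouble: 0.5];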
Since they're set as seconds (not beats), the attack and release times aren't affected by tempo.
Figure 4-7 shows the same (amplitude)
MKEnvelope
used in the previous examples with
various attack and release values.
Figure 4-8 shows what happens when a noteOff arrives (or a
noteDur expires) during the attack portion of the
MKEnvelope
―in other words, before the
stickpoint is reached. For this illustration, both MK_ampAtt and MK_ampRel are assumed to have values of
1.0.
When the noteOff arrives, the MKEnvelope
heads for the first breakpoint in the release (the first breakpoint
after the stickpoint) from wherever it happens to be at the time. The
release takes its full duration (as defined in the
MKEnvelope
itself, or by MK_ampRel, if present) regardless of whether
the noteOff arrives before or after the stickpoint is reached.
A MKNote without Sustain
Not every instrument can create a sustained tone; the amplitude envelope of a piano tone, for example, is characterized by a sharp rise and fall followed by a gradual but steady decay to quiescence. This is depicted in Figure 4-9.
To simulate this sort of envelope shape, yet still accommodate
MKNotes of any length, the
MKEnvelope
object definition would look
something like this:
double xVals[] = {0.0, 0.05, 0.2, 0.5, 8.0, 8.15};
double yVals[] = {0.0, 1.0, 0.5, 0.3, 0.0, 0.0};
[anEnvelope setPointCount: 6 xArray: xVals yArray: yVals];

/* Set the stickpoint to breakpoint 4, xy: (8.0, 0.0). */
[anEnvelope setStickpoint: 4];
The MKEnvelope is depicted in Figure 4-10.
Notice that the MKEnvelope's stickpoint
is, curiously enough, set to a breakpoint that has a y value of 0.0.
Equally curious is the release portion of the
MKEnvelope: a flat piece of seemingly useless
real estate. However, consider the result of the two possible
scenarios:
The noteOff arrives after the stickpoint is reached. In this case, the synthesized sound has already decayed to an amplitude of 0.0. When the noteOff arrives, the release portion is indeed executed, but since the amplitude is already at 0.0, the release portion doesn't produce an audible effect.
The noteOff arrives before the stickpoint is reached. The release portion is triggered, causing the amplitude to decay to 0.0 in 0.15 seconds.
Attack and release durations on a nonsustaining instrument are generally invariant, so you would rarely set the MK_ampAtt and MK_ampRel parameters.
The MusicKit provides an additional parameter, MK_portamento, with which you can further
manipulate your MKEnvelopes' attack times.
Like the MK_attributeAtt parameters, MK_portamento takes a double value that's measured in seconds, but
rather than affect the entire attack portion, it sets the duration
between the first two breakpoints only. Also, as used by the
MKSynthPatches provided by the MusicKit,
MK_portamento affects all the
MKEnvelopes in a MKNote―there aren't individual portamento
parameters for amplitude, frequency, and so on. In a
MKNote
that contains a portamento value and one
or more attack scalers, the attacks of the individual
MKEnvelopes are scaled before the value of
MK_portamento is applied.
MK_portamento is provided so
you can easily and quickly control, to some degree, the rearticulation
of a MKNote's MKEnvelopes. As such, it's only significant in
a MKNote
that rearticulates a phrase―it's
ignored in a noteDur with no note tag, and has no immediate effect in
a noteOn or a noteDur with a previously inactive note tag (although,
in the latter case, the value of MK_portamento is stored in anticipation of
subsequent rearticulations).
You should keep in mind that portamento is optional. It can be
quite useful if you're modelling an instrument that has different
attack characteristics depending on whether a
MKNote
is the beginning of a new phrase or part
of a legato passage. For example, in some instruments, such as a
horn, the attack of an initial musical note―in amplitude,
frequency, and timbre―is more drawn out than in the subsequent
notes of a phrase. To simulate such an instrument, it's convenient to
use MK_portamento to affect all the
MKEnvelopes at once.
The previous examples have shown the lines that connect an
MKEnvelope's breakpoints as straight segments.
In reality, as synthesized by the DSP, these segments follow an
asymptotic curve. In an asymptotic curve, the target is never fully
reached―the curve rises (or falls) in successively smaller steps
as it approaches the target. However, there's a point in the curve
where the target is perceived to have been attained. This point is
controlled by the smoothing value.
By default, smoothing is 1.0, a value that's used to mean that the point at which the target is perceived to have been reached is equal to the difference between the x values in successive breakpoints; in other words, it takes the entire time between breakpoints to reach the target y value. Other values are, similarly, the ratio of curve duration to overall duration between a pair of breakpoints. For example, a smoothing of 0.5 means it takes half the time between a pair of breakpoints to (perceptually) complete the curve between the breakpoints' y values. A smoothing value in excess of 1.0 falls short of the target altogether (it takes longer than the allotted time to reach the target).
You can set the smoothing value for each breakpoint by defining
the MKEnvelope
through an alternative
method:
setPointCount:xArray:orSamplingPeriod:yArray:smoothingArray:orDefaultSmoothing:
Smoothing is set as either an array of values (one for each breakpoint) passed as the argument to the smoothingArray: keyword, or as a default value passed as the argument to the orDefaultSmoothing: keyword. The default smoothing is used only if the argument to smoothingArray: is NULL.
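As a sketch, here's the envelope from Figure 4-3 redefined with per-breakpoint smoothing; the particular smoothing values are arbitrary, and the first one is ignored, as explained below.

double xVals[] = {0.0, 1.0, 4.0, 5.0};
double yVals[] = {0.0, 1.0, 1.0, 0.0};
double smoothing[] = {1.0, 1.0, 0.2, 0.5};

/* The sampling period argument is unused because xArray: is non-NULL;
   likewise the default smoothing is unused because smoothingArray: is non-NULL. */
[anEnvelope setPointCount: 4
            xArray: xVals
            orSamplingPeriod: 1.0
            yArray: yVals
            smoothingArray: smoothing
            orDefaultSmoothing: 1.0];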
Smoothing is, admittedly, a somewhat elusive concept, best
explained by illustration. Figure 4-11 shows the shape of an
MKEnvelope
that uses various smoothing values
between successive breakpoints. The values in parentheses are the x,
y, and smoothing values for each breakpoint.
Notice that the smoothing value is omitted from the first
breakpoint. While a smoothing value must be supplied as the first
element in the smoothing array (if you use the smoothingArray: keyword), this value is
actually ignored when the MKEnvelope
is
synthesized. This is because a breakpoint's smoothing value applies
to the curve leading into it―the curve from the previous
breakpoint to the current one. Since there isn't a previous
breakpoint before the first one, the smoothing value for breakpoint 0
is thrown away.
Returning to the illustration, the smoothing value for the
second breakpoint is 1.0; thus the curve leading from the first
breakpoint into the second breakpoint takes up the entire duration
between the two points. The smoothing value for the third breakpoint
is 0.2; the curve leading into the third breakpoint reaches the target
y value with time to spare. The fourth breakpoint has a smoothing of
0.0. This means that it takes no time to reach the target; the
MKEnvelope
immediately leaps to the target y
value. (Note that a smoothing of 0.0 is the only way to ensure that
the asymptotic curve will, in truth, reach its target.) The final
breakpoint smoothing value is 0.5. Accordingly, the curve reaches the
target halfway between breakpoints.
While smoothing control is provided for completeness, most musical applications will be satisfied with the default smoothing provided by the setPointCount:xArray:yArray: method.
You may have noticed that the MKEnvelope
definition method that brought you smoothing also introduced an
alternate way to set an MKEnvelope's x values.
Rather than define x values in an array, you can also set them as a
default increment by passing a (double) value to the method's orSamplingPeriod: keyword. Again, the default
argument is used only if the array argument (in this case, the
argument to xArray:) is NULL.
If you use a sampling period, the first x value is always 0.0. Successive x values are integer multiples of the sampling period value.
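For example, a sketch that defines the x values implicitly:

double yVals[] = {0.0, 1.0, 1.0, 0.0};

/* With a sampling period of 0.5, the breakpoints fall at
   x = 0.0, 0.5, 1.0, and 1.5. */
[anEnvelope setPointCount: 4
            xArray: NULL
            orSamplingPeriod: 0.5
            yArray: yVals
            smoothingArray: NULL
            orDefaultSmoothing: 1.0];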
The other way to use an MKEnvelope
is to
retrieve a discrete value of y for a given x. This is performed in a
single method, lookupYForX:, which
takes a double argument that
specifies an x value and returns the y value that corresponds to it.
If the x value doesn't lie directly on a breakpoint, a linear
interpolation between the y values of the surrounding breakpoints is
performed to determine the appropriate value. For example, consider a
simple, two-breakpoint MKEnvelope
defined as
follows:
double xVals[] = {0.0, 1.0};
double yVals[] = {0.0, 2.0};
MKEnvelope *anEnvelope = [[MKEnvelope alloc] init];
[anEnvelope setPointCount: 2 xArray: xVals yArray: yVals];
The message
double interpY = [anEnvelope lookupYForX: 0.5];
returns 1.0. The computation is illustrated in Figure 4-12.
With discrete-value lookup, the
MKEnvelope's stickpoint and smoothing values
are ignored. Also, using an MKEnvelope
in this
way doesn't require its presence in a MKNote
object; thus, the parameters that help shape an
MKEnvelope
used for DSP synthesis, such as
MK_amp, MK_ampAtt, and MK_ampRel, aren't applied to discrete-value
lookup.
If you request a discrete y value for an x that's out of bounds,
the lookupYForX: method returns the y
value of the breakpoint at the exceeded boundary. For example (using
the same MKEnvelope), the message
/* Specify an x for which there is no y. */
double interpY = [anEnvelope lookupYForX: 1.5];
returns 2.0, as it also would for any argument greater than 1.0.
Similarly, any argument that's less than 0.0 would return (from this
MKEnvelope) 0.0.
MKEnvelopes in Scorefile Format
When you write a MKScore
to a scorefile,
either through a message to the MKScore
object
or by using a MKScorefileWriter in a performance, the
MKEnvelope
objects that appear in the
MKNotes in the MKScore
are written out as a series of breakpoints in parentheses. The
MKEnvelope's stickpoint, if any, is indicated
by the presence of a vertical bar following the so-designated
breakpoint. The entire MKEnvelope
representation is enclosed by square brackets. For example:
[(0.0, 0.0, 1.0) (0.3, 1.0, 1.0) | (0.5, 0.0, 1.0)]
The three values inside the parentheses are, in order, the breakpoint's x, y, and smoothing values. The smoothing value is always written out―keep in mind that smoothing defaults to 1.0. In this example, the second point is the stickpoint.
If you give the MKEnvelope
a name before
you write the scorefile, the MKEnvelope
is only
written in this long form once; subsequent references (in the
scorefile) are made to the MKEnvelope
object by
its name. To name an MKEnvelope, call the
MKNameObject()
C function:
MKNameObject(@"env1", anEnvelope);
It's a good idea to name your MKEnvelope
objects. This saves space in the scorefile and also makes processing
the file during a performance more efficient.
A named MKEnvelope
appears in a scorefile
statement as:
BEGIN;
. . .
noteOn ... ampEnv:envelope env1=[(0.0, 0.0, 1.0) ... ] ... ;
. . .
(The noteOn type is used here only as an example.) envelope is a keyword that declares the
following name (env1 in the example)
to represent an MKEnvelope.
If you write your own scorefile, you should be aware of the following:
The x, y, and smoothing values can be expressions. Because of this, the three values must be separated by commas.
The smoothing value is sticky; it applies to the
breakpoint in which it appears and to all subsequent breakpoints in
that MKEnvelope
declaration (until another
smoothing value is encountered).
If you don't specify a smoothing value, it defaults to 1.0.
You should declare and set all your
MKEnvelope
objects as envelope variables in the header of the
scorefile. This makes reading the file more efficient.
For more on the scorefile format and ScoreFile language, see Reference.
The MKWaveTable Class
MKWaveTable
objects are used exclusively
in DSP synthesis to describe and create musical timbres. While
MKWaveTable
synthesis has limitations, it's a
particularly easy and direct way to create a library of sounds.
However, to intelligently define a MKWaveTable,
you need to be familiar with a few basic concepts of musical
acoustics. The Section called What is Sound? in Chapter 2 introduces some of these
fundamentals; the cogent points from that section are summarized and
new concepts that pertain to MKWaveTables are
introduced in the next section.
When matter vibrates, a pressure disturbance is created in the surrounding air. The pressure disturbance travels as a wave to your ears and you hear a sound. A sound, particularly if it's a musical sound, can be characterized by its waveform, the shape of the air pressure's rise and fall. Waveforms created by musical instruments are generally periodic; this means that the pressure rises and falls in a cyclical pattern.
A periodic waveform has two basic characteristics, frequency and amplitude:
The number of pattern repetitions, or periods, within a given amount of time determines a sound's frequency (pitch). Frequency is measured in hertz (abbreviated Hz), or cycles per second. For example, a musical sound with a period that repeats itself 440 times a second produces a tone at 440 Hz (A above middle C).
The amplitude of a sound wave is the amount of energy in the air pressure disturbance. Amplitude is heard as loudness―the greater the energy, the louder the sound. (There are other factors that contribute to the loudness of a sound, but amplitude is generally the most important.) A number of different methods are used to measure loudness; of greatest use for musical purposes is to describe the loudness of a sound (or, as we shall see, a component of a sound) in comparison with another sound (or sound component).
A special waveform is the sine wave. Sine waves are important to musical acoustics in that they define the basic component used to describe musical sounds: Any periodic waveform can be broken down into one or more sine waves. The sine waves that make up a musical sound have frequencies that are (usually) integer multiples of a basic frequency called the fundamental. For example, a guitar string that sounds the B above middle C produces a fundamental frequency of approximately 494 Hz. However, the sound that's produced contains sine waves with frequencies that are integer multiples of 494:
494 * 1 = 494
494 * 2 = 988
494 * 3 = 1482
494 * 4 = 1976
and so on
Sine wave components, or partials, that are
related to each other as integer multiples of a fundamental frequency
are said to make up a harmonic series. A musical
sound can also have partials that are inharmonically related to the
fundamental; for example, the shimmer and pungency of a bell's tone are
created by the abundance of inharmonic partials. However, as
explained later, a MKWaveTable
object is best suited to represent
timbres that are created from a harmonic series.
The partials in a sound have amplitudes that are measured in relation to each other. For the guitar, the amplitude of each successive sine wave is generally less than that of the previous partial.
The fundamental (the partial at the fundamental frequency) needn't have the greatest amplitude of all the partials, nor must successive partials decrease in amplitude. Some instruments, such as the bassoon, have very little energy at the fundamental. Nonetheless, your ears decode the information in a harmonic series such that there is rarely confusion about the frequency of the fundamental; in other words, we almost always hear the fundamental as the frequency of which all the partials are whole-number multiples.
Defining a MKWaveTable
A MKWaveTable
object represents one complete period of a musical
waveform. There are two ways to create a MKWaveTable, as embodied by MKWaveTable's subclasses, MKPartials and MKSamples:
With a MKPartials
object, you can define a
MKWaveTable
by specifying the individual partials that make up the
waveform.
A MKSamples
object represents a waveform as a series
of sound samples. It uses a Snd
object
(defined by the SndKit) as its data.
The MKPartials Class
You define a MKPartials object by supplying the frequency,
object by supplying the frequency,
amplitude, and phase information for a series of partials. This is
done through the method setPartialCount:freqRatios:ampRatios:phases:orDefaultPhase:.
The first argument is the number of partials; the next three arguments
are arrays of double data that
specify the frequency ratios, amplitude ratios, and initial phases of
the partials, respectively. You can also set the phase as a constant
by passing a double as the argument
to the orDefaultPhase: keyword. In
this case, you must pass NULL as the argument to phases:.
In the following example, a waveform is created from a series of partials that are integer multiples of a fundamental frequency; the partials decrease in amplitude as they increase in frequency.
/* Create the MKPartials object. */
MKPartials *aPartials = [MKPartials new];
double freqs[] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0};
double amps[] = {1.0, 0.5, 0.25, 0.12, 0.06, 0.03};

/* Fill the object with data. */
[aPartials setPartialCount: 6
           freqRatios: freqs
           ampRatios: amps
           phases: NULL
           orDefaultPhase: 0.0];
Phase is generally unimportant in creating musical timbres, although it can drastically affect a waveform that's used as a low-frequency control signal, such as vibrato.
The frequencies in a MKPartials
object
are specified as ratios, or multiples, of a fundamental frequency (the
fundamental is represented by a frequency ratio of 1.0). The actual
(fundamental) frequency of the waveform created from a
MKPartials
depends on the how the object is
used by the MKSynthPatch
that synthesizes it.
In general, the waveform is “transposed” to produce the
frequency specified in a MKNote's frequency
parameter, MK_freq. Similarly, the
amplitude of each partial is relative to the value of another
parameter, usually MK_amp.
The MKSamples Class
The MKSamples class lets you create a MKWaveTable through
association with a Snd
object (an instance of
the SndKit's Snd
class). This is done by
invoking the MKSamples' setSound:
method:
/* You must import the SndKit's header file. */
#import <SndKit/SndKit.h>
. . .

/* Create a MKSamples object and a Snd object. */
MKSamples *aSamples = [MKSamples new];
Snd *aSound = [Snd new];

/* Fill the Snd with data. */
. . .

/* Associate the Snd with the MKSamples. */
[aSamples setSound: aSound];
A copy of the Snd
object is created and
stored in the MKSamples
object when setSound: is invoked, so it's important that
you fill the Snd
with data before invoking the
method. Chapter 2 describes ways to create Sound data.
You can also associate a MKSamples
with a
Snd
by reading a soundfile, through the
readSoundfile: method. The
MKSamples
object creates a
Snd
object and then reads the soundfile by
sending newFromSoundfile: to the
Snd. The argument is a UNIX pathname that must
include the soundfile-identifying “.snd” extension.
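For example, a sketch of filling a MKSamples from a soundfile; the pathname is made up, and it's passed here as an NSString:

/* Create a MKSamples object and fill it from a soundfile. */
MKSamples *aSamples = [MKSamples new];
[aSamples readSoundfile: @"/LocalLibrary/Sounds/flute.snd"];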
A MKSamples' Snd
object must contain one channel of 16-bit linear, sampled data. The
length of the data (the number of samples) must be a power of two if
it is to be used for an oscillator table.
Using a MKWaveTable in a MKNote
To hear the timbre represented by a
MKWaveTable
object, you set the
MKWaveTable
as a parameter of a
MKNote
and then play the
MKNote
using a
MKSynthPatch
that recognizes the parameter.
Most of the MusicKit
MKSynthPatches recognize the MK_waveform parameter:
/* Create a MKNote. */
MKNote *aNote = [MKNote new];

/* Set its MK_waveform value. */
[aNote setPar: MK_waveform toWaveTable: aPartials];
In this example, the value of MK_waveform is set to the previously defined
MKPartials
object, aPartials. The manner in which the
MKPartials
object is used during synthesis
depends on the MKSynthPatch
to which the
MKNote
is sent.