Doing Real-time Sound Processing

One of the most exciting aspects of the MusicKit is the ability to process sound in real time, under arbitrarily rich control. To do this, you follow all the guidelines given above. There are only three differences:

  1. You must connect hardware that accepts sound input to the DSP serial port.

  2. You must set up the MKOrchestra to do sound processing.

  3. You must have an In1aUG or In1bUG in your MKSynthPatch.

The standard NeXT configuration (at the time of this writing) does not include a high-quality sound input. Therefore, in order to do real-time high-quality sound processing, you need to obtain a device that plugs into the DSP serial port. See the Section called Using the DSP Serial Port above for how to set up the hardware and how to configure the MKOrchestra.
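As a concrete illustration, the hardware setup described above might be reflected in code along these lines. This is a sketch only: the exact selector names (such as setSerialPortDevice: and setSerialSoundIn:) and the ArielProPort device class follow the MKOrchestra serial-port API discussed in the earlier section, and you should confirm them against the class descriptions for your MusicKit release.

```objc
#import <MusicKit/MusicKit.h>

/* Sketch: configure the MKOrchestra for real-time sound processing
 * through a device attached to the DSP serial port.  Assumes an
 * Ariel ProPort-style serial port device; substitute the class that
 * matches your hardware. */
MKOrchestra *orch = [MKOrchestra new];

/* Tell the orchestra which serial port device is attached. */
[orch setSerialPortDevice:[[ArielProPort alloc] init]];

/* Take sound input from, and send output to, the serial port. */
[orch setSerialSoundIn:YES];
[orch setSerialSoundOut:YES];

if (![orch open]) {
    /* Handle the error: DSP unavailable or device misconfigured. */
}
[orch run];
```

With the orchestra opened and running in this configuration, any MKSynthPatch containing an In1aUG or In1bUG will read its samples from the external device rather than from an oscillator.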

Making an MKSynthPatch to do Sound Processing

Making an MKSynthPatch that does sound processing is simple. You just include an instance of In1aUG to get the left sound input channel, or In1bUG to get the right sound input channel. These MKUnitGenerators write their input to an output patchpoint, and each also provides an optional scale factor. Thus, you can convert the MKWaveTable synthesis MKSynthPatch described above into a sound-processing MKSynthPatch by merely replacing the oscillator with an In1aUG.
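The substitution described above might look roughly like this inside the patch template method of your MKSynthPatch subclass. This is a hedged sketch, not the MusicKit's own example: the MKPatchTemplate messages (addUnitGenerator:, addPatchpoint:, to:sel:arg:) and the In1aUG selectors (setOutput:, setScale:) are the usual MusicKit idioms, but verify them against the MKPatchTemplate and In1aUG class descriptions.

```objc
#import <MusicKit/MusicKit.h>
#import <MKUnitGenerators/MKUnitGenerators.h>

/* Sketch: a patch template that reads the left sound input channel
 * instead of running an oscillator.  In the MKWaveTable patch
 * described earlier, the oscillator occupied the position that
 * In1aUG takes here; everything downstream is unchanged. */
+ (MKPatchTemplate *)patchTemplate
{
    static MKPatchTemplate *template = nil;
    unsigned soundIn, stereoOut, patchpoint;

    if (template)
        return template;
    template = [[MKPatchTemplate alloc] init];

    /* The oscillator is replaced by the left sound input channel. */
    soundIn   = [template addUnitGenerator:[In1aUG class]];
    stereoOut = [template addUnitGenerator:[Out2sumUG class]];
    patchpoint = [template addPatchpoint:MK_xPatch];

    /* Route the input's samples to the patchpoint, then to output. */
    [template to:soundIn   sel:@selector(setOutput:) arg:patchpoint];
    [template to:stereoOut sel:@selector(setInput:)  arg:patchpoint];
    return template;
}
```

From here you can apply the optional scale factor (for example, [soundIn setScale:0.5]) or insert further unit generators between the patchpoint and the output to process the incoming sound in real time.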