SF2Lib - a SoundFont (SF2) Library for Parsing and Rendering in C++ (mostly) for iOS, macOS, and tvOS
This library reads SF2 SoundFont files and renders audio samples from them. It parses compliant SF2 files and can be used to obtain metadata such as preset names. It also has an audio rendering engine that generates audio samples for key events coming from (say) a MIDI keyboard. Work on the rendering side is still ongoing, but at present it generates audio at the correct pitch. This library is currently used by my SoundFonts application for SF2 file parsing, and soon for rendering as well.
Although much of the code is generic C++17, some parts expect an Apple platform with the AudioToolbox and Accelerate frameworks available. As such, some code files have the `.mm` extension so that they compile as Obj-C++ instead of C++; these "bridge" files provide a means to interact with SF2Lib from Swift code. However, such cases are fairly isolated. The goal is to be a simple library for reading SF2 files as well as a competent SF2 audio renderer whose output can be fed to any sort of audio processing chain, not just macOS and iOS Core Audio systems.
This package depends on some general DSP and audio classes from my AUv3Support package.
SF2 Spec Support
Currently, all SF2 generators and modulators are supported and/or implemented, except for the following:
- `chorusEffectSend` -- how much of a rendered sample is sent to a chorus effect audio channel (L+R)
- `reverbEffectSend` -- how much of a rendered sample is sent to a reverb effect audio channel (L+R)
Since there are plenty of chorus and reverb effects available, this library will not include any of its own. Rather, the goal is to expose the effect send busses so that other AUv3 nodes can process them as they wish. This is the case now: the render `Engine`'s `renderInto` method takes a `Mixer` instance that supports a main "dry" bus and two additional busses for the "chorus effect send" and the "reverb effect send". These busses are populated with samples from active voices, and their levels are controlled by the `chorusEffectSend` and `reverbEffectSend` parameters mentioned above. One can then connect bus 1 to a chorus effect and bus 2 to a reverb effect, and then connect those outputs together with bus 0 of this library to a mixer to generate the final output.
Here is a rough description of the top-level folders in SF2Lib:
DSP -- various utility functions for signal processing and for converting values from one unit to another. Some of these conversions rely on tables generated by the `DSPTableGenerator` tool mentioned above.
Entity -- representations of entities defined in the SF2 spec. Provides for fast loading of SF2 files.
include -- headers for the library. Note that the library exists mostly in header files with very little to be found in source files.
IO -- performs the reading and loading of SF2 files.
MIDI -- state for a MIDI connection.
Render -- handles rendering of audio samples from SF2 entities.
There are quite a lot of unit tests (yet not enough) that cover much of the code base (currently > 75%). There are even some rendering tests that will play audio at the end if configured to do so. This option is found in the `Package.swift` file, in the line `.define("PLAY_AUDIO", to: "1", .none)`. Change the "1" to "0" to disable the audio output.
All of the code was written by me over the course of several years, but I have benefitted from the existence of other projects, especially FluidSynth and its wealth of knowledge in all things SF2. In particular, when there is any confusion about what the SF2 spec means, I rely on their interpretation in code. That said, any misrepresentations of SF2 functionality are my own doing.