Following the discussion about an API to pin and unpin preset samples in the sample cache here:
https://lists.nongnu.org/archive/html/fluid-dev/2020-10/msg00016.html
Short explanation of the change:
Only the default loader currently supports dynamic sample loading, so I thought it might be a good idea to keep the changes for this feature mostly contained in the default loader as well. I've added two new preset notify flags (FLUID_PRESET_PIN and FLUID_PRESET_UNPIN) that are handled by the preset->notify callback and trigger the loading and possibly unloading of the samples.
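A rough sketch of the idea, assuming the notify callback keeps a `(preset, reason, chan)` shape; the helpers `load_preset_samples()` and `unload_preset_samples()` are hypothetical placeholders, not the actual implementation:

```c
/* Sketch only: how the default loader's notify callback could dispatch the
 * new flags. load_preset_samples()/unload_preset_samples() are hypothetical. */
static int defsfont_preset_notify(fluid_preset_t *preset, int reason, int chan)
{
    (void)chan; /* not relevant for pinning */

    switch (reason)
    {
        case FLUID_PRESET_PIN:
            /* load all samples referenced by this preset and keep them cached */
            return load_preset_samples(preset);

        case FLUID_PRESET_UNPIN:
            /* drop the pin; samples may be unloaded again if unused */
            return unload_preset_samples(preset);

        default:
            return FLUID_OK;
    }
}
```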
I don't see any 'allocation' of the preset. And ALL public synth functions take a mutex lock, which might potentially block when called from the synth context, but only if the client app pessimizes this situation by extensively calling the synth from outside the synth context.
Access to field 'zone' results in a dereference of a null pointer (loaded from variable 'prev_preset') if `size` is negative. The problem is that the parameter `size` comes from `chunk.size` and should be unsigned.
Originally, I had only marked it as deprecated. But since we have an SOVERSION bump in the next release and because this function was only meant for internal usage, I think it's safe to remove it right now.
This PR addresses #665.
1) Add new functions for multi channels support: `fluid_synth_write_float_channels()`, `fluid_synth_write_s16_channels()`
2) The `dsound` and `waveout` drivers make use of this support. Tested on 2 audio devices:
- Creative SB Live! (6 channels).
- Realtek ALC889A (8 channels).
Currently, all fx unit output (in mix mode) is mapped to the `first buffer`.
This is not appropriate for `synth.audio-groups` > 1.
This PR allows the mapping of fx output based on `fx unit index` and `synth.audio-groups` value.
This allows us to get the `fx output` mixed into the respective `buffer` on which a `MIDI channel` is mapped.
For example, with `synth.audio-groups = 3` and `synth.effect-groups = 3`:
- MIDI chan 0 (dry + fx0) is mapped to buf 0
- MIDI chan 1 (dry + fx1) is mapped to buf 1
- MIDI chan 2 (dry + fx2) is mapped to buf 2
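Conceptually, the new mapping boils down to something like the following sketch (illustrative only; the names do not correspond to the actual mixer code):

```c
/* Sketch: with this change, the dry signal and the fx output of a MIDI
 * channel end up in the same output buffer. Names are illustrative only. */
static int fx_output_buffer(int fx_unit_index, int audio_groups)
{
    return fx_unit_index % audio_groups;
}

/* e.g. audio_groups = 3, effects_groups = 3:
 *   chan 0 -> fx unit 0 -> buffer 0
 *   chan 1 -> fx unit 1 -> buffer 1
 *   chan 2 -> fx unit 2 -> buffer 2 */
```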
It could be that during runtime an older version of libinstpatch is used than the one fluidsynth was compiled against. In this case, libinstpatch will fail to load DLS fonts, because libinstpatch's initialization semantics don't match those compiled into fluidsynth.
For SoundFonts bigger than 2 GiB, num_samples becomes negative. When passed to safe_fread() it's promoted to long long, and when passed to fread() it's cast to size_t. This works fine with two's complement, but is still not nice.
Proposing a new event queue for the sequencer, based on prior discussion:
https://lists.nongnu.org/archive/html/fluid-dev/2019-12/msg00001.html
With this change fluidsynth will require a C++98 compliant compiler.
Consider this as RFC, feedback is welcome.
The "pain-points" from the discussion:
#### 1. It is slow.
Done (thanks to heap sort), see runtime of `test_seq_event_queue_sort`.
#### 2. A meaningful ordering for events with the same tick has not been considered.
Done, see comments in `fluid_seq_queue.cpp`.
#### 3. Complicated implementation
Now uses one single event queue, which requires C++98. Implemented similarly to std::priority_queue by using heap sort.
The "queue" I use currently is of type `std::deque`. `deque` does not provide preallocation. `std::vector` does provide it. However, `std::deque` has the huge advantage that appending additional elements is cheap. For `std::vector` appending new elements would require to reallocate all the memory and copy it to the new array. So,
* either use `std::deque` with the risk that memory allocation may occur during `fluid_sequencer_send_at()`, or
* use `std::vector` with a preallocated pool of events and make `fluid_sequencer_send_at()` fail when the `vector` runs out of capacity.
Comments?
#### 4. Events that have been processed are deleted and gone.
After having thought about this more, this is the correct behavior. After events have been dispatched, they must be released to free underlying memory, see point 3. For the very rare case that a client (e.g. fluid_player) may require those events in the future, the client should be responsible for storing the events somewhere.
#### 5. The sequencer supports the system timer as alternative time source.
The conclusion from the mailing list was that the system timer can be removed. This has been done.
#### 6. Time Scaling
Time scaling can now be used for arbitrary tempo changes. The previous implementation was capable of that as well; however, the time scale was limited to 1000. The only restriction on the scale now is that it must be > 0, see `test_seq_scale`.
### Other Points
* `fluid_sequencer_remove_events()` turned out to be broken before, as it did not remove all events from the queue. This has been fixed, see `test_seq_event_queue_remove`.
* Consider the following code executed by `threadA`:
```c
fluid_sequencer_send_at(seq, event0, time0, 1);
fluid_sequencer_set_time_scale(seq, new_scale); // new scale
fluid_sequencer_send_at(seq, event1, time1, 1);
```
The new scale will definitely be applied to `event1`. However, if another concurrently running `threadB` executes `fluid_sequencer_process()`, it was previously not clear whether the new scale would also be applied to `event0`. That depended on whether `event0` was still in the `preQueue`, which in turn depended on `event0.time` and the tick count that `fluid_sequencer_process()` is called with. This has been changed: events are now queued with their timestamp AS-IS, and only the latest call to `fluid_sequencer_set_time_scale()` is considered during `fluid_sequencer_process()`. This makes the implementation very simple, i.e. no events need to be changed and the sequencer doesn't have to be locked down. On the other hand, it means that `fluid_sequencer_set_time_scale()` can only be used for tempo changes when called from the sequencer callback. In other words, if `threadA` executes the code above followed by `fluid_sequencer_process()`, `event0` and `event1` will be executed with the same tempo, namely the latest scale provided to the sequencer. Is this acceptable? The old implementation had the same limitation, and when looking through the internet I only find users who call `fluid_sequencer_set_time_scale()` from the sequencer callback. Still, this is a point I'm raising for discussion.
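For illustration, this is roughly what a tempo change driven from inside a sequencer client callback could look like (a sketch only; the callback would be registered with `fluid_sequencer_register_client()`, and the scale value is an arbitrary example):

```c
#include <fluidsynth.h>

/* Sketch: changing the tempo from the sequencer callback, so the new scale
 * deterministically applies to everything scheduled afterwards. */
static void tempo_callback(unsigned int time, fluid_event_t *event,
                           fluid_sequencer_t *seq, void *data)
{
    /* example: switch to 2000 ticks per second (the default scale is 1000) */
    fluid_sequencer_set_time_scale(seq, 2000.0);

    /* ...schedule follow-up events here; they will use the new scale... */
}
```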
Since sizeof(long) == 4 even on 64-bit Windows, big files cannot be
loaded natively via the ANSI C file API. This change makes fluidsynth's
file callback API use long long, which is guaranteed to be at least 64
bit wide.
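For context, a minimal sketch of the problem and the workaround at the C level (using `_fseeki64`/`fseeko` directly; the actual callback plumbing in fluidsynth differs):

```c
#include <stdio.h>

/* On LLP64 Windows, sizeof(long) == 4, so fseek() with a 'long' offset cannot
 * address positions beyond 2 GiB. A 'long long' offset (>= 64 bit) can. */
static int seek64(FILE *f, long long offset)
{
#ifdef _WIN32
    return _fseeki64(f, offset, SEEK_SET);     /* 64-bit seek on Windows */
#else
    return fseeko(f, (off_t)offset, SEEK_SET); /* 64-bit seek on POSIX   */
#endif
}
```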
This PR allows the reverb to pre-allocate the memory needed for the maximum sample rate of the synth. That means that on a sample-rate change there is no memory allocation, and the function fluid_revmodel_samplerate_change() should always return FLUID_OK.
The PR addresses discussion in #608.
Stopping `dsound` before stopping the `audio thread` avoids dsound playing transient glitches between the time the audio task is stopped and the time dsound is released. These short trailing glitches are particularly audible when captured by a reverb connected to the output.
For some reason, the configure command must be specified explicitly in the `gentables` build step. Otherwise, the ARM target compiler will be used to build `make_tables` rather than the host compiler.
Now that microsoft/vcpkg#10485 has been completed, an ARM CI build can be added to AppVeyor. Also, the build status table in the README has been updated.
Supplying --verbose to the fluidsynth executable now prints debug messages to stdout. Debug messages are still being printed by default when fluidsynth was compiled in debug mode.
Responsibility for calling fluid_sequencer_unregister_client() in case of FLUID_SEQ_UNREGISTERING events has been moved to fluid_sequencer_send_now(). In other words, a FLUID_SEQ_UNREGISTERING event now really unregisters the client, no matter what the client's callback function looks like.
Avoids leaking the sequencer clients if implementations do not unregister them explicitly.
Also fixes another memory leak if fluid_sequencer_register_fluidsynth() clients were unregistered with fluid_sequencer_unregister_client() rather than by sending an unregistering event.
During the creation of a jack audio driver, it is checked whether the sample rate of the settings object matches jack's rate. If not, it was previously adjusted via fluid_synth_set_sample_rate(). Due to the deprecation of that function and the removal of real-time capability of the synth.sample-rate setting, a regression was introduced in 5fbddcecc3, causing the synth's sample rate not to be updated.
This workaround obtains the synth via the settings instance and for now calls the deprecated sample-rate set function.
See discussion in #591 for details. Basically an incorrect size was
being allocated for the CoreAudio buffer list for a device. It was being
allocated by a VLA (which already did not quite fit the semantics of the
list) and the length calculated could be 0 (instead of the size of the
struct with no buffer elements), causing undefined behaviour.
This corrects it to allocate the amount of memory required by the
CoreAudio framework function and adds a check for the size retrieval and
for the dynamic allocation. This change passed UBSan in my test where
before the change it did not.
Fixes #591
This fixes a regression introduced in 907ec27a9e.
When rendering a voice, there are 3 cases to consider: silent, playing,
and finished. When optimizing away the memset, I incorrectly assumed that
a voice cannot switch between playing and silence (like crazy) while
rendering FLUID_MIXER_MAX_BUFFERS_DEFAULT buffers. Apparently this does not
hold true, especially when rendering at sample rates of ~96kHz.
This adds new LFO modulators:
- These modulators are computed on the fly instead of using an LFO lookup table (see the sketch after this list). Advantages:
  - Avoids a loss of 608272 bytes of memory when the LFO speed is low (0.3 Hz).
  - Allows lowering the LFO speed limit to 0.1 Hz instead of 0.3 Hz.
    A speed of 0.1 Hz is interesting for chorus; using a lookup table for 0.1 Hz
    would require too much memory (1824816 bytes).
- Makes use of a first-order all-pass interpolator instead of bandlimited interpolation.
- Although the LFO modulator is computed on the fly, the CPU load is lower than using
  an LFO lookup table with a bandlimited interpolator.
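A minimal sketch of what an on-the-fly sine LFO can look like (a plain two-multiplier recurrence; this only illustrates the idea and is not the modulator code from this PR):

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Incremental sine oscillator: s[n] = 2*cos(w)*s[n-1] - s[n-2],
 * with w = 2*pi*freq/sample_rate. One multiply-add per sample, no table. */
typedef struct
{
    double s1, s2;  /* the two previous output samples */
    double coeff;   /* 2*cos(w) */
} sine_lfo_t;

static void sine_lfo_init(sine_lfo_t *lfo, double freq, double sample_rate)
{
    double w = 2.0 * M_PI * freq / sample_rate;
    lfo->coeff = 2.0 * cos(w);
    lfo->s1 = sin(-w);       /* s[-1] */
    lfo->s2 = sin(-2.0 * w); /* s[-2] */
}

static double sine_lfo_next(sine_lfo_t *lfo)
{
    double out = lfo->coeff * lfo->s1 - lfo->s2;
    lfo->s2 = lfo->s1;
    lfo->s1 = out;
    return out;  /* in [-1, 1] */
}
```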
Also adds a stereo unit controlled by a WIDTH macro. The WIDTH value [0..10] defines the stereo separation between left and right.
- When 0, the output is monophonic.
- When > 0, the output is stereophonic.
WIDTH is currently fixed to the maximum value to provide the maximum stereo effect.
`GTimeVal` has been deprecated in glib 2.62. While switching to `g_get_monotonic_time()`, I realized that we could simply reuse `fluid_utime()` for that purpose.
This provides a less branchy and therefore more instruction-cache-friendly version of fluid_ct2hz(), which also significantly reduces the number of floating-point comparisons.
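For reference, the conversion implemented by `fluid_ct2hz()` is the standard cents-to-Hertz relation; a stripped-down sketch (the real function additionally clamps extreme values, which is where the remaining comparisons come from):

```c
#include <math.h>

/* Absolute cents to Hertz: f = 8.176 Hz * 2^(cents / 1200),
 * where 8.176 Hz corresponds to 0 absolute cents (MIDI key 0). */
static double ct2hz_sketch(double cents)
{
    return 8.176 * pow(2.0, cents / 1200.0);
}
```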
- memset should clear all the memory
- end position should be at the end of sample data
Closes #576
Signed-off-by: Stefan Westerfeld <stefan@space.twc.de>
The synthesizer has been using stdint types for a long time; perhaps for consistency, it would be worth getting rid of the GLib macros for integer-to-pointer conversion and vice versa, and just using a type cast for that purpose.
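For example (a sketch of the proposed replacement, not actual fluidsynth code):

```c
#include <stdint.h>

/* Round-tripping a small integer through a pointer without GLib:
 * was GINT_TO_POINTER(i) / GPOINTER_TO_INT(p), now plain intptr_t casts. */
static void *int_to_pointer(intptr_t i)
{
    return (void *)i;
}

static intptr_t pointer_to_int(void *p)
{
    return (intptr_t)p;
}
```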
The parameters (roomsize, level, etc.) of the reverb effects unit are initialized at the very end of `new_fluid_synth()` with `fluid_synth_set_reverb_full_LOCAL()`.
This however only adds an update-event to the `rvoice_mixer` queue.
The call to `fluid_synth_process_event_queue()` in `new_fluid_synth()` should make sure that this event is dispatched and triggers the actual update.
However, the event is not dispatched immediately, because the rvoice event queue has not been committed yet, that is, a call to `fluid_rvoice_eventhandler_flush()` is missing.
So, although a reverb param initialization event has been queued, the reverb params are still garbage-initialized (on Windows to some `-6.2774385622041925e+66`).
The next call through the synth's public API will flush the queue and finally dispatch the update event, but when will that happen?
1. If the soundfont is specified as a command-line argument, this call will happen before the audio driver starts rendering.
2. If the soundfont is loaded via the shell command `load`, the audio driver will start rendering audio first and only update the reverb afterwards.
Case 1 is trivial: everything works as it should.
Case 2 is interesting. Since the synth has already started rendering audio using that uninitialized reverb unit, the reverb engine's internal buffer is completely filled up with noise. Before outputting that signal to the sound card, the sound is clipped to `1.0f`. That's the click we hear at the beginning. And because the reverb is so loud, the rendered audio signal stays at 1.0f for quite a long time (...or always, can't tell).
Why is it not reproducible on Linux? Because GCC and Clang (AFAIK) leave all values uninitialized after allocating memory, and because malloc() often returns zero-initialized memory, the reverb params seem to be nicely zero-initialized. MSVC, however, always initializes memory with garbage, which is why we "hear" this resonance disaster.
Solution: Just update the reverb params via the public API, which implicitly calls `fluid_rvoice_eventhandler_flush()`.
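A minimal sketch of the idea, using the public reverb setter (the parameter values below are just example defaults, not necessarily the ones used by the fix):

```c
#include <fluidsynth.h>

/* Setting the reverb through the public API queues the rvoice update event
 * AND flushes the event queue, so the reverb never renders with garbage
 * parameters. The values are example defaults only. */
static void init_reverb_params(fluid_synth_t *synth)
{
    fluid_synth_set_reverb(synth, 0.2 /* roomsize */, 0.0 /* damping */,
                           0.5 /* width */, 0.9 /* level */);
}
```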
Fixes#563.
Fixes a use-after-free when the MIDI player is deleted before the audio
driver, because the synthesis thread is still actively making callbacks
on the sample timer, which has already been deleted by the player.
Enabling the option suppresses the default welcome message and
some other text output that normally gets printed to stdout.
It also slightly changes the way the welcome message and argument
errors are handled: in case of an argument error, the welcome message
is never printed.
Previously, sample timers were deleted in fluid_player_stop(), which caused a use-after-free when the sample timers were advanced by the synthesizer thread at the same time. This was incorrectly addressed in 5d3f727547. Deleting sample timers is now done in delete_fluid_player(). A broken application could still crash if it does not respect the order of object creation, though. At least now, this issue is properly documented.
In an out of memory situation, fluid_synth_t::voice and fluid_synth_t::channel may not be fully initialized, causing a NULL dereference and heap corruption in delete_fluid_synth().
By default, libinstpatch silently adds all SF2 default modulators to the
converted DLS voices. Since fluidsynth respects the modulators provided
by libinstpatch, those modulators would conflict with the default
modulator list managed by fluidsynth. This is only noticeable if the
user has used fluidsynth's API to manipulate default modulators.
When compiled with compatibility for WinXP (toolset v141_xp), read() may return 0 (EOF) rather than '\n', which led to an early exit of the shell after a single user input.
Vorbis compressed SF3 samples are always loaded individually and stored
in the sample cache in uncompressed form. When dynamic-sample-loading is
not active (the default), then the uncompressed samples did not get
unloaded when unloading the SF3 font.
This fix makes sure that those samples are also freed. For bulk loaded
samples, the sample->data pointer is always the same as the
font->sampledata pointer. For individually loaded samples, the sample->data
pointer always points to a different memory region. So we can use that
information to determine if and when to unload the samples one by one.
Fixes #530, fixes #528
The old buffering code assumes that synth->cur is between 0 and FLUID_BUFSIZE.
However, since fluid_synth_process() can render more than one buffer at a time,
this isn't always true, and the new code handles other values properly.
Closes #527
This set of changes implements audio drivers for Android: OpenSLES and Oboe. The changes to the original sources are kept minimal so that they should be easy to maintain.