This driver is currently tested and verified to work on:
- Windows Vista x64 VM
- Windows 7 x64 VM
- Windows 10 1909 x64 (VM and Laptop)
- Windows 10 21296 x64 on a ThinkPad X1 Yoga 1st Gen with 3 different sound cards (Conexant CX20753/4, Scarlett Solo Gen 2, Aureon 7.1 USB)
This driver is capable of reaching very low latency in exclusive mode (~6 ms on the Scarlett Solo at 48 kHz).
This PR addresses issue https://github.com/FluidSynth/fluidsynth/issues/758.
It ensures that the internal buffer size used by the waveout device driver and the extra buffer size required by `fluid_synth_process()` are coherent (i.e. they have the same size). To ensure this, both kinds of buffers now depend on the `audio.periods` and `audio.period-size` settings.
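For illustration, a minimal sketch of how a driver can derive a coherent buffer size from those settings (error handling omitted; the variable names are mine, not the patch's):
```c
int periods = 0, period_size = 0;

/* read the user-configurable buffering settings */
fluid_settings_getint(settings, "audio.periods", &periods);
fluid_settings_getint(settings, "audio.period-size", &period_size);

/* total internal buffer size in sample frames; rendering then happens in
 * blocks of period_size frames, so both kinds of buffers stay the same size */
int total_frames = periods * period_size;
```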
Resets the default sample timer properly, allowing the internal
MIDI file player to restart its playlist at any time, not just
the very first time after synth engine initialization.
This lets Qsynth play MIDI files that are drag-and-dropped at any
point after the first playback.
In `fluid_rvoice_mixer.c`:`fluid_rvoice_mixer_process_fx()`:
If an audio processing callback is used, `mix_fx_to_out` would be `FALSE`. As a result, `in_ch` and `out_ch_l` point to the same buffer.
In `fluid_chorus.c`:`fluid_chorus_processreplace()`:
```C
/* process stereo unit */
/* store the chorus stereo unit d_out to left and right output */
left_out[sample_index] = d_out[0] * chorus->wet1 + d_out[1] * chorus->wet2;
right_out[sample_index] = d_out[1] * chorus->wet1 + d_out[0] * chorus->wet2;
/* Write the current input sample into the circular buffer */
push_in_delay_line(chorus, in[sample_index]);
```
Here the chorus processing code writes to the left output buffer (which, in this case, overwrites the input buffer) before the sample from the input buffer is stored into the delay buffer, making the chorus output all zeros. If no audio processing callback is used, `mix_fx_to_out` is `TRUE` and `in` and `left_out` do not point to the same buffer, so the order doesn't matter.
Simply swapping the two steps should be a sufficient fix. This patch also applies the same change to `fluid_chorus_processmix`, purely for the sake of consistency (the two functions are almost exact copies of each other).
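With the two steps swapped, the relevant part of `fluid_chorus_processreplace()` would look roughly like this (a sketch of the reordering, not the exact diff):
```c
/* process stereo unit */
/* ... d_out[0] and d_out[1] are computed here, as before ... */

/* Write the current input sample into the circular buffer FIRST, because
 * left_out may alias the input buffer 'in' when mix_fx_to_out is FALSE */
push_in_delay_line(chorus, in[sample_index]);

/* store the chorus stereo unit d_out to left and right output */
left_out[sample_index] = d_out[0] * chorus->wet1 + d_out[1] * chorus->wet2;
right_out[sample_index] = d_out[1] * chorus->wet1 + d_out[0] * chorus->wet2;
```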
Resolves #751.
When fluid_shell is called from fluid_source(), it currently
prints this message:
Received EOF while reading commands, exiting the shell.
Suppress it.
If polyphony is exceeded and FluidSynth has to allocate a voice by
calling fluid_synth_free_voice_by_kill_LOCAL(), two problems occur:
1) The value returned by fluid_synth_get_active_voice_count() never
returns to 0.
2) SoundFont samples are not unref'd properly, and therefore if an attempt is
made to unload the SoundFont, the deferred unload timer is started, and
fluid_synth_sfunload_callback() unsuccessfully tries to unload the SoundFont forever.
These 2 issues are fixed by this commit.
While `fopen` (used through the macro `FLUID_FOPEN`) uses UTF-8 on *nix, it's restricted to ANSI on Windows. A change to enable using paths containing non-ANSI characters was suggested before in issue #128 but was rejected due to requiring large parts of both the public API and private implementation to be modified to accommodate Windows.
This PR instead changes the macro definition for `FLUID_FOPEN` from `fopen` to a new wrapper, `fluid_fopen`. This wrapper is declared in `fluidsynth_priv.h` and defined in `fluid_sys.c` (following the pattern of `fluid_alloc`). Under Windows, it converts the `const char*` UTF-8 parameters to wide-character `wchar_t*` strings using the Windows API function `MultiByteToWideChar` and opens the file using the Windows-specific `_wfopen`. On all other platforms, it simply calls `fopen`.
The public API is unchanged. This solution will require Windows users of the API to convert UTF-16 strings to UTF-8 (which then get converted back into UTF-16 anyway), but that's still an improvement over only being able to use ANSI paths.
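A minimal sketch of the wrapper's idea (not the exact code of this PR; error handling simplified):
```c
#include <stdio.h>
#include <stdlib.h>

#ifdef _WIN32
#include <windows.h>

FILE *fluid_fopen(const char *filename, const char *mode)
{
    FILE *file = NULL;
    wchar_t *wpath = NULL, *wmode = NULL;
    int len;

    /* query the required buffer size in wide chars (including the terminator) */
    len = MultiByteToWideChar(CP_UTF8, 0, filename, -1, NULL, 0);
    if(len == 0 || (wpath = malloc(len * sizeof(wchar_t))) == NULL)
    {
        goto cleanup;
    }
    MultiByteToWideChar(CP_UTF8, 0, filename, -1, wpath, len);

    len = MultiByteToWideChar(CP_UTF8, 0, mode, -1, NULL, 0);
    if(len == 0 || (wmode = malloc(len * sizeof(wchar_t))) == NULL)
    {
        goto cleanup;
    }
    MultiByteToWideChar(CP_UTF8, 0, mode, -1, wmode, len);

    /* open via the wide-char CRT function, which accepts non-ANSI paths */
    file = _wfopen(wpath, wmode);

cleanup:
    free(wpath);
    free(wmode);
    return file;
}
#else
FILE *fluid_fopen(const char *filename, const char *mode)
{
    /* fopen already accepts UTF-8 encoded paths on non-Windows platforms */
    return fopen(filename, mode);
}
#endif
```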
This PR also adds a new test, `test_utf8_open`, which tests `FLUID_FOPEN` directly and through `fluid_is_soundfont` and `fluid_synth_sfload` using a new soundfont file, `sf2/■VintageDreamsWaves-v2■.sf2`, which is just a copy of `VintageDreamsWaves-v2.sf2` with Unicode characters in the filename.
Some MIDI files that use the GS standard use channels other than channel 10 as percussion channels. Currently FluidSynth ignores the messages that set this up, causing notes meant for a drum instrument to be played with a melodic instrument, or vice versa. This patch partially fixes the issue.
The implementation in this patch does not yet cover a specific "quirk" of Roland GS modules: they seem to remember the last used instrument in the selected mode. This patch simply sets the instrument number to 0 instead.
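For reference, this is the kind of GS SysEx message involved, as I read the Roland GS documentation (the device ID, part number and drum map below are illustrative values, not taken from the patch):
```c
/* GS "Use For Rhythm Part" message: F0 41 <dev> 42 12 40 1n 15 <map> <sum> F7,
 * where n selects the part and <map> is 0x00 = melodic, 0x01/0x02 = drum map 1/2. */
unsigned char gs_rhythm_part_sysex[] =
{
    0xF0, 0x41, 0x10, 0x42, 0x12, /* SysEx start: Roland, device 0x10 (16), GS, DT1 */
    0x40, 0x11, 0x15,             /* address: "use for rhythm part" of part 1       */
    0x02,                         /* data: use drum map 2                           */
    0x18,                         /* Roland checksum over address and data bytes    */
    0xF7                          /* end of SysEx                                   */
};
```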
A test file is attached. If it is played correctly (with `-o synth.device-id=16`), no out-of-place drum or piano sounds should be heard.
[wikipedia_MIDI_sample_gstest.mid.gz](https://github.com/FluidSynth/fluidsynth/files/5610727/wikipedia_MIDI_sample_gstest.mid.gz)
This PR addresses #669 point 2.1.
It proposes set/get API functions to change/read fx unit parameters.
The deprecated shell commands are updated. The command lines now take two parameters:
- the first parameter is the fx unit index.
- the second parameter is the value to apply to that fx unit.
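Hypothetical usage from the API side (the exact function names are an assumption on my part, not quoted from this PR):
```c
double roomsize = 0.0;

/* first parameter after the synth is the fx unit index,
 * followed by the value to apply to / read from that fx unit */
fluid_synth_set_reverb_group_roomsize(synth, 0, 0.8);
fluid_synth_get_reverb_group_roomsize(synth, 0, &roomsize);
```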
Following the discussion about an API to pin and unpin preset samples in the sample cache here:
https://lists.nongnu.org/archive/html/fluid-dev/2020-10/msg00016.html
Short explanation of the change:
Only the default loader currently supports dynamic sample loading, so I thought it might be a good idea to keep the changes for this feature mostly contained in the default loader as well. I've added two new preset notify flags (FLUID_PRESET_PIN and FLUID_PRESET_UNPIN) that are handled by the preset->notify callback and trigger the loading and possibly unloading of the samples.
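From the user's perspective, pinning a preset might then look like this (hypothetical sketch; the exact public function names are an assumption):
```c
/* keep the samples of bank 0, program 0 of soundfont 'sfont_id' in the cache,
 * even while no MIDI channel currently uses that preset */
fluid_synth_pin_preset(synth, sfont_id, 0, 0);

/* ... later, allow the sample cache to unload those samples again ... */
fluid_synth_unpin_preset(synth, sfont_id, 0, 0);
```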
I don't see any 'allocation' of the preset. And ALL public synth functions take a mutex lock, which might potentially block when called from the synth context, but only if the client app pessimizes this situation by extensively calling the synth from outside the synth context.
Access to field 'zone' results in a dereference of a null pointer (loaded from variable 'prev_preset') if `size` is negative. The problem is that the parameter `size` is `chunk.size` and should be unsigned.
Originally, I had only marked it as deprecated. But since we have an SOVERSION bump next release and because this function was only meant for internal usage, I think it's safe to remove it right now.
This PR addresses #665.
1) Add new functions for multi channels support: `fluid_synth_write_float_channels()`, `fluid_synth_write_s16_channels()`
2) The `dsound` and `waveout` drivers make use of this support, tested on two audio devices:
- Creative SB Live! (6 channels).
- Realtek ALC889A (8 channels).
Currently, all fx unit outputs (in mix mode) are mapped to the first buffer.
This is not appropriate for `synth.audio-groups` > 1.
This PR allows the mapping of fx output based on the fx unit index and the `synth.audio-groups` value.
This gets the fx output mixed into the respective buffer on which a MIDI channel is mapped.
For example: with `synth.audio-groups = 3` and `synth.effect-groups = 3`:
- MIDI chan 0 (dry + fx0) is mapped to buf 0
- MIDI chan 1 (dry + fx1) is mapped to buf 1
- MIDI chan 2 (dry + fx2) is mapped to buf 2
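A minimal sketch of the mapping rule (my own illustration, not the patch's code):
```c
/* The output of an fx unit is mixed into the dry buffer of the audio group it
 * serves, wrapping around when there are more fx units than audio groups. */
static int fx_unit_to_buffer(int fx_unit_index, int audio_groups)
{
    return fx_unit_index % audio_groups;
}
```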
It could be that during runtime an older version of libinstpatch is used than the one fluidsynth was compiled against. In this case, libinstpatch will fail to load DLS fonts, because libinstpatch's initialization semantics don't match those compiled into fluidsynth.
For SoundFonts bigger than 2 GiB, num_samples becomes negative. When passed to safe_fread() it's promoted to long long, and when passed to fread(), it's cast to size_t. This works fine on two's-complement machines, but still isn't nice.
Proposing a new event queue for the sequencer, based on prior discussion:
https://lists.nongnu.org/archive/html/fluid-dev/2019-12/msg00001.html
With this change fluidsynth will require a C++98 compliant compiler.
Consider this as RFC, feedback is welcome.
The "pain-points" from the discussion:
#### 1. It is slow.
Done (thanks to heap sort), see runtime of `test_seq_event_queue_sort`.
#### 2. A meaningful ordering for events with the same tick has not been considered.
Done, see comments in `fluid_seq_queue.cpp`.
#### 3. Complicated implementation
Now uses one single event queue, which requires C++98. Implemented similarly to std::priority_queue by using heap sort.
The "queue" I use currently is of type `std::deque`. `deque` does not provide preallocation. `std::vector` does provide it. However, `std::deque` has the huge advantage that appending additional elements is cheap. For `std::vector` appending new elements would require to reallocate all the memory and copy it to the new array. So,
* either use `std::deque` with the risk that memory allocation may occur during `fluid_sequencer_send_at()`, or
* use `std::vector` with a preallocated pool of events and make `fluid_sequencer_send_at()` fail when the `vector` runs out of capacity.
Comments?
#### 4. Events that have been processed are deleted and gone.
After having thought about this more, this is the correct behavior. After events have been dispatched, they must be released to free underlying memory, see point 3. For the very rare case that a client (e.g. fluid_player) may require those events in the future, the client should be responsible for storing the events somewhere.
#### 5. The sequencer supports the system timer as alternative time source.
The conclusion from the mailing list was that the system timer can be removed. This has been done.
#### 6. Time Scaling
Time scaling can now be used for arbitrary tempo changes. The previous implementation was capable of that as well; however, the time scale was limited to 1000. The only limitation on the scale is now > 0, see `test_seq_scale`.
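A hedged usage sketch (the scale is the number of ticks per second, 1000 by default):
```c
/* any positive scale is now accepted, so the tempo can be changed arbitrarily */
fluid_sequencer_set_time_scale(seq, 2000.0); /* ticks advance twice as fast as the default */
fluid_sequencer_set_time_scale(seq, 500.0);  /* ... or at half the default rate */
```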
### Other Points
* `fluid_sequencer_remove_events()` turned out to be broken before, as it did not remove all events from the queue. This has been fixed, see `test_seq_event_queue_remove`.
* Consider the following code executed by `threadA`:
```c
fluid_sequencer_send_at(event0);
fluid_sequencer_set_time_scale(); // new scale
fluid_sequencer_send_at(event1);
```
The new scale will definitely be applied to `event1`. However, if another concurrently running `threadB` executes `fluid_sequencer_process()`, it was previously not clear whether the new scale was also applied to `event0`. This depends on whether `event0` was still in the `preQueue`, which in turn depends on `event0.time` and the tick count that `fluid_sequencer_process()` is being called with. This has been changed. As of now, events are queued with their timestamp AS-IS, and only the latest call to `fluid_sequencer_set_time_scale()` is considered during `fluid_sequencer_process()`. This makes the implementation very simple, i.e. no events need to be changed and the sequencer doesn't have to be locked down. On the other hand, it means that `fluid_sequencer_set_time_scale()` can only be used for tempo changes when called from the sequencer callback. In other words, if `threadA` executes the code above followed by `fluid_sequencer_process()`, `event0` and `event1` will be executed with the same tempo, which is the latest scale provided to the sequencer. Is this acceptable? The old implementation had the same limitation. And when looking through the internet, I only find users who call `fluid_sequencer_set_time_scale()` from the sequencer callback. Still, this is a point I'm raising for discussion.
Since sizeof(long) == 4 even on 64-bit Windows, big files cannot be
loaded natively via the ANSI C file API. This change makes fluidsynth's
file callback API use long long, which is guaranteed to be at least 64
bits wide.
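To illustrate, a user-provided seek callback can now address offsets beyond 2 GiB; a hedged sketch (the callback signature shown is an assumption, not copied from the patch):
```c
#include <stdio.h>

/* The offset is a long long (>= 64 bit), so files larger than 2 GiB can be
 * addressed even where sizeof(long) == 4, e.g. on 64-bit Windows. */
static int my_sfont_seek(void *handle, long long offset, int origin)
{
#ifdef _WIN32
    return _fseeki64((FILE *)handle, offset, origin) == 0 ? 0 : -1;
#else
    return fseeko((FILE *)handle, offset, origin) == 0 ? 0 : -1;
#endif
}
```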
This PR allows the reverb to pre-allocate the memory needed for the maximum sample rate of the synth. That means that on a sample-rate change no memory allocation occurs, and the function fluid_revmodel_samplerate_change() should always return FLUID_OK.
The PR addresses discussion in #608.
Stopping `dsound` before stopping the audio thread avoids dsound playing transient glitches between the time the audio task is stopped and the time dsound is released. These short trailing glitches are particularly audible when captured by a reverb connected to the output.
For some reason, the configure command must be specified explicitly in the `gentables` build step. Otherwise, the ARM target compiler will be used to build `make_tables` rather than the host compiler.
Now that microsoft/vcpkg#10485 has been completed, an ARM CI build can be added to AppVeyor. Also, the build status table in the README has been updated.
Supplying --verbose to the fluidsynth executable now prints debug messages to stdout. Debug messages are still being printed by default when fluidsynth was compiled in debug mode.
Responsibility for calling fluid_sequencer_unregister_client() in case of FLUID_SEQ_UNREGISTERING events has been moved to fluid_sequencer_send_now(). In other words, a FLUID_SEQ_UNREGISTERING event now really unregisters the client, no matter what the client's callback function looks like.
Avoids leaking the sequencer clients if implementations do not unregister them explicitly.
Also fixes another memory leak if fluid_sequencer_register_fluidsynth() clients were unregistered with fluid_sequencer_unregister_client() rather than by sending an unregistering event.
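For example, unregistering a client by event now works reliably; a minimal sketch (assuming `seq` and `client_id` from the usual registration calls):
```c
fluid_event_t *evt = new_fluid_event();

fluid_event_set_source(evt, -1);
fluid_event_set_dest(evt, client_id);  /* the client to unregister */
fluid_event_unregistering(evt);

/* fluid_sequencer_send_now() itself now takes care of the actual unregistration */
fluid_sequencer_send_now(seq, evt);
delete_fluid_event(evt);
```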
During the creation of a jack audio driver, it is checked whether the sample rate of the settings object matches jack's rate. If not, it was previously adjusted via fluid_synth_set_sample_rate(). Due to the deprecation of that function and the removal of real-time capability of the synth.sample-rate setting, a regression was introduced in 5fbddcecc3, causing the synth's sample rate not to be updated.
This workaround obtains the synth via the settings instance and for now calls the deprecated sample-rate set function.
See discussion in #591 for details. Basically an incorrect size was
being allocated for the CoreAudio buffer list for a device. It was being
allocated by a VLA (which already did not quite fit the semantics of the
list) and the calculated length could be 0 (instead of the size of the
struct with no buffer elements), causing undefined behaviour.
This corrects it to allocate the amount of memory required by the
CoreAudio framework function and adds a check for the size retrieval and
for the dynamic allocation. This change passed UBSan in my test where
before the change it did not.
Fixes #591
This fixes a regression introduced in 907ec27a9e.
When rendering a voice, there are 3 cases to consider: silent, playing,
and finished. When optimizing away the memset, I incorrectly assumed that
a voice cannot switch between playing and silence (like crazy) while
rendering FLUID_MIXER_MAX_BUFFERS_DEFAULT buffers. Apparently this does not
hold true, especially when rendering at sample rates around 96 kHz.
This adds new LFO modulators:
- These modulators are computed on the fly, instead of using an LFO lookup table. Advantages:
  - Avoids wasting 608272 bytes of memory when the LFO speed is low (0.3 Hz).
  - Allows lowering the LFO speed limit to 0.1 Hz instead of 0.3 Hz.
    A speed of 0.1 Hz is interesting for chorus. Using a lookup table for 0.1 Hz
    would require too much memory (1824816 bytes).
- Makes use of a first-order all-pass interpolator instead of bandlimited interpolation (see the sketch after this list).
- Although the LFO modulator is computed on the fly, the CPU load is lower than with
  an LFO lookup table and a bandlimited interpolator.
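A minimal, self-contained sketch of a first-order all-pass fractional-delay interpolator of the kind referred to above (my own illustration, not the patch's code):
```c
typedef struct
{
    float coeff; /* all-pass coefficient derived from the fractional delay */
    float x1;    /* previous input sample */
    float y1;    /* previous output sample */
} allpass_interp_t;

/* set the fractional part of the delay, frac in [0..1] */
static void allpass_interp_set_fraction(allpass_interp_t *ap, float frac)
{
    ap->coeff = (1.0f - frac) / (1.0f + frac);
}

/* y[n] = c * x[n] + x[n-1] - c * y[n-1] */
static float allpass_interp_process(allpass_interp_t *ap, float x)
{
    float y = ap->coeff * (x - ap->y1) + ap->x1;
    ap->x1 = x;
    ap->y1 = y;
    return y;
}
```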
Also adds a stereo unit controlled by a WIDTH macro. The WIDTH value [0..10] defines the stereo separation between left and right:
- When 0, the output is monophonic.
- When > 0, the output is stereophonic.
WIDTH is currently fixed to the maximum value to provide the maximum stereo effect.
`GTimeVal` has been deprecated in glib 2.62. While switching to `g_get_monotonic_time()`, I realized that we could simply reuse `fluid_utime()` for that purpose.
This provides a less branchy and therefore more instruction-cache-friendly version of fluid_ct2hz(), which also significantly reduces the number of floating-point comparisons.
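For context, the conversion that fluid_ct2hz() implements is (reference formula only, not the optimized code):
```c
#include <math.h>

/* absolute cents to Hz: 0 cents corresponds to ~8.176 Hz (MIDI key 0),
 * so e.g. 6900 cents yields 440 Hz */
static double ct2hz_reference(double cents)
{
    return 8.176 * pow(2.0, cents / 1200.0);
}
```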
- memset should clear all the memory
- end position should be at the end of sample data
Closes #576
Signed-off-by: Stefan Westerfeld <stefan@space.twc.de>
The synthesizer has been using stdint types for a long time. For consistency, it would be worthwhile to get rid of the GLIB macros for integer-to-pointer conversion and vice versa, and just use a type cast for that purpose.
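A sketch of what that replacement looks like, using `intptr_t` from `<stdint.h>`:
```c
#include <stdint.h>

/* plain-C replacement for GINT_TO_POINTER() / GPOINTER_TO_INT():
 * an int survives the round trip through void* when cast via intptr_t */
static void *int_to_pointer(int value)
{
    return (void *)(intptr_t)value;
}

static int pointer_to_int(void *p)
{
    return (int)(intptr_t)p;
}
```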
The parameters (roomsize, level, etc.) of the reverb effects unit are initialized at the very end of `new_fluid_synth()` with `fluid_synth_set_reverb_full_LOCAL()`.
This however only adds an update-event to the `rvoice_mixer` queue.
The call to `fluid_synth_process_event_queue()` in `new_fluid_synth()` should make sure that this event is dispatched and triggers the actual update.
However, the event is not dispatched immediately, because the rvoice event queue has not been committed yet, that is, a call to `fluid_rvoice_eventhandler_flush()` is missing.
So, although a reverb param initialization event has been queued, the reverb params are still garbage-initialized (on Windows to something like `-6.2774385622041925e+66`).
The next call through the synth's public API will flush the queue and finally dispatch the update event, but when will that happen?
1. If the soundfont is specified as command-line argument, this call will happen before the audio driver starts rendering.
2. If the soundfont is loaded via the shell command `load`, the audio driver starts rendering audio first, and the reverb is only updated afterwards.
Case 1. is trivial: everything works as it should.
Case 2. is interesting. Since the synth has already started rendering audio using that uninitialized reverb unit, the reverb engine's internal buffer is completely filled up with noise. Before outputting that signal to the sound card, the sound is clipped to `1.0f`. That's the click we hear at the beginning. And because the reverb is so loud, the rendered audio signal stays at 1.0f for quite a long time (...or forever, can't tell).
Why is it not reproducible on Linux? Because GCC and Clang (AFAIK) leave all values uninitialized after allocating memory. And because malloc() often returns zero-initialized memory, the reverb params seem to be nicely zero-initialized. MSVC, however, always initializes memory with garbage, which is why we "hear" this resonance disaster.
Solution: Just update the reverb params via public API, which implicitly calls `fluid_rvoice_eventhandler_flush()`.
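A hedged sketch of that idea (parameter values are placeholders): calling the public setter queues the update and flushes the rvoice event queue, so the parameters are really applied before rendering starts.
```c
/* initialize the reverb through the public API instead of only queueing the
 * update internally; the call implicitly flushes the rvoice event queue */
fluid_synth_set_reverb(synth, roomsize, damping, width, level);
```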
Fixes#563.
Fixes a use-after-free when the MIDI player is deleted before the audio
driver, because the synthesis thread is still actively making callbacks
on the sample timer, which has already been deleted by the player.
Enabling the option suppresses the default welcome message and
some other text output that normally gets printed to stdout.
It also slightly changes the way the welcome message and argument
errors are handled: in case of an argument error, the welcome message
is never printed.
Previously, sample timers were deleted in fluid_player_stop(), which caused a use-after-free when the sample timers were advanced by the synthesizer thread at the same time. This was incorrectly addressed in 5d3f727547. Deleting sample timers is now done in delete_fluid_player(). A broken application could still crash if it does not respect the order of object creation, though. At least now this issue is properly documented.
In an out of memory situation, fluid_synth_t::voice and fluid_synth_t::channel may not be fully initialized, causing a NULL dereference and heap corruption in delete_fluid_synth().
By default, libinstpatch silently adds all SF2 default modulators to the
converted DLS voices. Since fluidsynth respects the modulators provided
by libinstpatch, those modulators would conflict with the default
modulator list managed by fluidsynth. This is only noticeable if the
user has used fluidsynth's API to manipulate default modulators.
When compiled with compatibility for WinXP (toolset v141_xp), read() may return 0 (EOF) rather than '\n' which led to an early exit of the shell after a single user input.