Proposing a new event queue for the sequencer, based on prior discussion: https://lists.nongnu.org/archive/html/fluid-dev/2019-12/msg00001.html

With this change fluidsynth will require a C++98 compliant compiler. Consider this as an RFC, feedback is welcome.

The "pain points" from the discussion:

#### 1. It is slow.

Done (thanks to heap sort), see the runtime of `test_seq_event_queue_sort`.

#### 2. A meaningful ordering for events with the same tick has not been considered.

Done, see the comments in `fluid_seq_queue.cpp`.

#### 3. Complicated implementation

The sequencer now uses one single event queue, which requires C++98. It is implemented similarly to `std::priority_queue` by using heap sort. The "queue" I currently use is of type `std::deque`. `std::deque` does not provide preallocation, while `std::vector` does. However, `std::deque` has the huge advantage that appending additional elements is cheap; appending to a full `std::vector` would require reallocating all the memory and copying it to the new array. So, either

* use `std::deque`, with the risk that memory allocation may occur during `fluid_sequencer_send_at()`, or
* use `std::vector` with a preallocated pool of events and make `fluid_sequencer_send_at()` fail when the `vector` runs out of capacity.

Comments?

#### 4. Events that have been processed are deleted and gone.

After having thought about this more, this is the correct behavior. After events have been dispatched, they must be released to free the underlying memory, see point 3. For the very rare case that a client (e.g. fluid_player) may require those events in the future, the client should be responsible for storing them somewhere.

#### 5. The sequencer supports the system timer as alternative time source.

The conclusion from the mailing list was that the system timer can be removed. This has been done.

#### 6. Time Scaling

Time scaling can now be used for arbitrary tempo changes. The previous implementation was capable of that as well; however, the time scale was limited to 1000. The only remaining limitation is that the scale must be > 0, see `test_seq_scale`.

### Other Points

* `fluid_sequencer_remove_events()` turned out to be broken before, as it did not remove all events from the queue. This has been fixed, see `test_seq_event_queue_remove`.
* Consider the following code executed by `threadA`:

  ```c
  fluid_sequencer_send_at(event0);
  fluid_sequencer_set_time_scale(); // new scale
  fluid_sequencer_send_at(event1);
  ```

  The new scale will definitely be applied to `event1`. However, if another concurrently running `threadB` executes `fluid_sequencer_process()`, it was previously not clear whether the new scale was also applied to `event0`. This depended on whether `event0` was still in the `preQueue`, which in turn depended on `event0.time` and the tick count that `fluid_sequencer_process()` is called with.

  This has been changed. As of now, events are queued with their timestamp AS-IS, and only the latest call to `fluid_sequencer_set_time_scale()` is considered during `fluid_sequencer_process()`. This makes the implementation very simple, i.e. no events need to be changed and the sequencer doesn't have to be locked down. On the other hand, it means that `fluid_sequencer_set_time_scale()` can only be used for tempo changes when called from the sequencer callback (see the sketch below). In other words, if `threadA` executes the code above followed by `fluid_sequencer_process()`, `event0` and `event1` will be executed with the same tempo, which is the latest scale provided to the sequencer. Is this acceptable?

  The old implementation had the same limitation. And when searching the internet, I only found users who call `fluid_sequencer_set_time_scale()` from the sequencer callback. Still, this is a point I'm raising for discussion.
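To make that supported pattern concrete, here is a minimal sketch of a client that changes the tempo from within its own sequencer callback. It only uses the existing public sequencer API; the client name, tick values, the two scale values and the manual `fluid_sequencer_process()` loop are made up for illustration.

```c
#include <stdio.h>
#include <fluidsynth.h>

static fluid_seq_id_t client_id;

/* Called by the sequencer whenever one of our timer events is dispatched. */
static void seq_callback(unsigned int time, fluid_event_t *event,
                         fluid_sequencer_t *seq, void *data)
{
    fluid_event_t *next;
    (void)event;
    (void)data;

    printf("callback at tick %u, scale %.1f\n",
           time, fluid_sequencer_get_time_scale(seq));

    /* Tempo change: applies to everything dispatched from now on.
     * With this proposal any scale > 0 is accepted. */
    fluid_sequencer_set_time_scale(seq, 2000.0);

    /* Schedule the next timer event. The queue keeps its own copy and
     * frees it after dispatch (point 4), so we delete ours right away. */
    next = new_fluid_event();
    fluid_event_set_source(next, -1);
    fluid_event_set_dest(next, client_id);
    fluid_event_timer(next, NULL);
    fluid_sequencer_send_at(seq, next, time + 100, 1); /* absolute ticks */
    delete_fluid_event(next);
}

int main(void)
{
    /* No system timer (see point 5): the sequencer is advanced manually below. */
    fluid_sequencer_t *seq = new_fluid_sequencer2(0);
    fluid_event_t *evt;
    unsigned int ms;

    client_id = fluid_sequencer_register_client(seq, "tempo-demo", seq_callback, NULL);
    fluid_sequencer_set_time_scale(seq, 1000.0); /* 1000 ticks per second */

    /* First timer event at tick 100. */
    evt = new_fluid_event();
    fluid_event_set_source(evt, -1);
    fluid_event_set_dest(evt, client_id);
    fluid_event_timer(evt, NULL);
    fluid_sequencer_send_at(seq, evt, 100, 1);
    delete_fluid_event(evt);

    /* Drive the sequencer; in a real application the sample timer or an
     * audio thread does this. */
    for (ms = 0; ms <= 1000; ms += 10)
    {
        fluid_sequencer_process(seq, ms);
    }

    fluid_sequencer_unregister_client(seq, client_id);
    delete_fluid_sequencer(seq);
    return 0;
}
```

The scale set inside the callback is picked up by the next `fluid_sequencer_process()` call and from then on applies to every dispatched event, which is exactly the behavior described above.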
FluidSynth
Build status (CI badges): Linux, FreeBSD, Windows, Windows (vcpkg), MacOSX, Android
FluidSynth is a cross-platform, real-time software synthesizer based on the Soundfont 2 specification.
FluidSynth generates audio by reading and handling MIDI events from MIDI input devices, rendering them with a SoundFont. It is the software analogue of a MIDI synthesizer. FluidSynth can also play MIDI files.
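As a rough illustration of that workflow, the following sketch plays a MIDI file through a SoundFont using the public C API; the file paths are placeholders and error checking is omitted for brevity.

```c
#include <fluidsynth.h>

int main(void)
{
    fluid_settings_t *settings = new_fluid_settings();
    fluid_synth_t *synth = new_fluid_synth(settings);
    fluid_audio_driver_t *driver = new_fluid_audio_driver(settings, synth);
    fluid_player_t *player = new_fluid_player(synth);

    fluid_synth_sfload(synth, "/path/to/soundfont.sf2", 1); /* load a SoundFont */
    fluid_player_add(player, "/path/to/song.mid");          /* queue a MIDI file */
    fluid_player_play(player);
    fluid_player_join(player);                              /* block until playback ends */

    delete_fluid_player(player);
    delete_fluid_audio_driver(driver);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
```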
Documentation
The central place for documentation and further links is our wiki here at GitHub:
https://github.com/FluidSynth/fluidsynth/wiki
If you are missing parts of the documentation, let us know by writing to our mailing list. Of course, you are welcome to edit and improve the wiki yourself. All you need is an account at GitHub. Alternatively, you may send an email to our mailing list along with your suggested changes. Further information about the mailing list is available in the wiki as well.
Latest information about FluidSynth is also available on the web site at http://www.fluidsynth.org/.
License
The source code for FluidSynth is distributed under the terms of the GNU Lesser General Public License, see the LICENSE file. To better understand the conditions under which FluidSynth can be used in e.g. commercial or closed-source projects, please refer to the LicensingFAQ in our wiki.
Building from source
For information on how to build FluidSynth from source, please refer to our wiki.
Links
- FluidSynth's Home Page, http://www.fluidsynth.org
- FluidSynth's wiki, https://github.com/FluidSynth/fluidsynth/wiki
- FluidSynth's API documentation, http://www.fluidsynth.org/api/
Historical background
Why did we do it
The synthesizer grew out of a project, started by Samuel Bianchini and Peter Hanappe, and later joined by Johnathan Lee, that aimed at developing a networked multi-user game.
Sound (and music) was considered a very important part of the game. In addition, users had to be able to extend the game with their own sounds and images. Johnathan Lee proposed to use the Soundfont standard combined with intelligent use of midifiles. The arguments were:
- Wavetable synthesis is low on CPU usage, it is intuitive and it can produce rich sounds
- Hardware acceleration is possible if the user owns a Soundfont-compatible soundcard (important for games!)
- MIDI files are small, and Soundfont2 files can be made small through the intelligent use of loops and wavetables. Together, they are easier to download than MP3 or audio files.
- Graphical editors are available for both file formats: various Soundfont editors are available on PC and on Linux (Smurf!), and MIDI sequencers are available on all platforms.
It seemed like a good combination to use for an (online) game.
In order to make Soundfonts available on all platforms (Linux, Mac, and Windows) and for all sound cards, we needed a software Soundfont synthesizer. That is why we developed FluidSynth.
Design decisions
The synthesizer was designed to be as self-contained as possible for several reasons:
- It had to be multi-platform (Linux, macOS, Win32). It was therefore important that the code didn't rely on any platform-specific library.
- It had to be easy to integrate the synthesizer modules in various environments, as a plugin or as a dynamically loadable object. I wanted to make the synthesizer available as a plugin (jMax, LADSPA, Xmms, WinAmp, Director, ...); develop language bindings (Python, Java, Perl, ...); and integrate it into (game) frameworks (Crystal Space, SDL, ...). For these reasons I decided it would be easiest if the project stayed very focused on its goal (a Soundfont synthesizer), stayed small (ideally one file), and didn't depend on external code.