That describes every music related purchase I’ve made in the past 20 years. I’m glad I’ve done a small part to support some creative people, but aside from that I’ve got a bunch of pedals I’ve used once.
I played piano for 9 years as a child, and then stopped when I went to college (because no piano). But I taught myself to play guitar (boredom + a found guitar + some music know-how). But I'm in my mid-40s now and have done very little musically in 20 years. I really, really, really want to play more music, but I'll probably end up saying that on my deathbed (in the past tense). Maybe downloading this thing and plugging a guitar or piano into a computer will change that.
The only difference is that I have a few keyboards at my disposal, but again, I don't touch them... I need to get back at playing and creating music.
Imagine if they applied something similar to a git versioning system to music projects.... I don't even know if the VST interface can be used or if it's licensed somehow from Steinberg.
Also consider that there are no good audio drivers for Linux (like ASIO, for example), so you're almost forced to stay on Windows or Mac...
No plug-in or DAW has a CLI... I could go on for hours...
I'm doing some digital audio processing for a startup idea and the only thing I've come up with is using sox through a Python API.
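For what it's worth, "sox through Python" usually just means building a sox command line and shelling out. A minimal sketch (the file names and gain value are examples, and it assumes sox is on your PATH):

```python
import subprocess

def sox_gain(in_path, out_path, gain_db):
    """Build a sox invocation applying the 'gain' effect (dB)."""
    return ["sox", in_path, out_path, "gain", str(gain_db)]

# hypothetical file names, just to show the shape of the call
cmd = sox_gain("in.wav", "out.wav", -3.0)
# subprocess.run(cmd, check=True)  # uncomment when sox is installed
```

The upside of this approach is that anything sox can do from the shell is reachable from Python; the downside is that it's file-to-file, not streaming.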
This is false.
> Imagine if they applied something similar to a git versioning system to music projects.
People have done this. Using git itself is a little problematic because it is very line-oriented and most project file formats for DAWs are not.
Regarding plugins, I know that I'm not the only lead developer of a DAW who, if they possibly could, would refuse to support plugins entirely. The problem is that most users want more functionality than a DAW itself could feasibly provide (they also sometimes like to use the same functionality (plugin) in different DAWs or different workflows).
There are things close to DAW functionality that have a CLI (such as ecasound). You can also run plugins from the command line by using standalone plugin hosts. You can use oscsend(1) to control plugins inside several different plugin hosts.
It sounds to me as if you've worked with a relatively small number of DAWs on only Windows and macOS and are not really aware of the breadth or depth of the "field".
> This is false.
This was my immediate thought as well. Not sure what level we're talking here, so sorry if I'm addressing the wrong part of the stack, but JACK on Linux has been a great experience for me in terms of latency and ease of use. I run into way more day-to-day problems on Windows.
What feature specifically are you missing on Linux?
Re: plugins, DAWs with VST sandboxing are great. I use Bitwig, and I've never lost work due to a plugin crash.
Exactly, the original thread reads as someone who hasn't touched a modern DAW from the last 8 years or so. Even Renoise has multicore support with sandboxed plugins so one of my ancient free shitty vsts doesn't bring down the whole system.
Ardour and Reaper use plaintext project formats that work well with Git, at least for basic versioning.
> Regarding plugins, I know that I'm not the only lead developer of a DAW who, if they possibly could, would refuse to support plugins entirely. The problem is that most users want more functionality than a DAW itself could feasibly provide (they also sometimes like to use the same functionality (plugin) in different DAWs or different workflows).
I think the answer to this would be something like Reaper's "JS" plugins, which are written in a small compiled language and distributed as source code. Compared to "JS", it would need to: 1) be open source; 2) be a better language; and 3) support pretty skeuomorphic graphics ('cause people seem to really want that in their plugins). Ardour seems to be working on something like this using Lua (don't know about the graphics, or if the plugins could be supported in other DAWs).
Ardour comes with a small set of basic "curated" plugins written in C or C++, that are "blessed" by us. Writing DSP plugins in Lua is also possible, but generally discouraged and, as you guessed, you can't provide a dedicated GUI for them, nor can they be used elsewhere (same limitation as Reaper's Jesusonic plugins).
However, even if those details were improved, the idea that a DAW manufacturer is going to be able to supply the precise EQ that demanding users want, let alone noise reduction, polyphonic pitch correction, and so, so much more, strikes me as unrealistic.
Git even understands that it's possible to neatly summarise a change in text (for display, e.g. as part of the change log) even when that summary is not actionable (so the data stored to implement the change is different). For example, I believe git's man pages show how you can extract EXIF from a JPEG so that your git tools say the change was from "Photo of Pam and mummy" to "Cropped image of Pam", when actually it's a huge binary change that is unintelligible to the reader.
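This is the textconv mechanism from the gitattributes documentation; a sketch of the setup (assuming some EXIF-dumping tool such as exiftool is installed):

    # .gitattributes
    *.jpg diff=jpg

    # then tell git how to render a text view of the binary for diffs:
    $ git config diff.jpg.textconv exiftool

After that, `git diff` and `git log -p` show the change in the tool's text output, while the stored object is still the full binary.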
You could expose enough GTK bits to provide an event loop to the LGI Lua library. It's gobject-introspection for Lua. Since you already use these libs, it would not make Ardour any bigger.
I am not saying it's a great idea to mix GUI and realtime DSP in the same thread, but it could be supported if you see some demand there.
There's really no technological reason for not allowing Lua to create GUIs within Ardour. It's more a question of whether or not we actually want to. Either way, you would not be mixing GUI and realtime code - the architecture of Ardour doesn't allow that.
The question I was raising (which I think you understand) is whether most users care that this is possible if it can't be done without a rebuild (compiling).
Honestly, the sound quality of most DAWs' built-in effects and synths is garbage. Even the effects sections of most VST synths are bad! Best to allow plugins rather than trying to reinvent the wheel; you'll have to pry my beloved Serum/iZotope/u-he/Softube plugins out of my cold dead hands.
Also, the "sometimes" is an understatement. Anyone who's been doing this a while is likely to be pretty invested in the plugins they have. I would say the majority of working musicians that use DAWs work this way.
Even in the case of developers only targeting one DAW (Pro Tools), at least one company (AIR music) saw that it was worth the extra effort to release its products in other plugin formats like VST.
Honestly, I would not like to see new developers trying to oust the plugin standard. It's one of those quirks about music software that exists for very good reason.
In my opinion, plugins only exist because it's convenient to package a synthesizer/reverb/whatever for users that way, but there is no reason that can't be supplanted with something that is more convenient. Of course if it's less convenient then that wouldn't be worth doing, if that is your concern then I agree with you there.
Do you think those existing developers are just going to drop their code into your environment and hit compile? How do you plan on getting all these musical components into your DAW? It doesn't work like that. Someone is going to have to write something, or you're going to be acquiring the rights to existing code somehow, which is still going to have to be ported to your environment.
Or you could skip all that and implement VST. There are even libraries that present a standard interface and output plugins in all the major formats, so you could target that instead. (I forget the name of the framework I'm thinking of but it was written by the Cockos guy and his DAW's ReaPlugs effects suite targets that library)
How would you motivate a synth/effects developer to spend time on your project? Unless you hire them, and if you're going to ask them to "write reusable code," they're going to point to their existing portfolio of plugins.
I don't think it is currently popular for plugin developers to implement against VST themselves, the libraries you mention seem to be gaining a lot of traction, at least from my experience from trying to catalog open-source plugins on github.
AAX, VST, AU, LV2 (at least)
Merging plugins "with the DAW" is entirely feasible in the open source world where I live and work, and we do that sort of thing when appropriate.
But the reliability issues do not come from the fact that plugins are dynamically loaded shared objects (mostly); they come from the vastly different levels of skill AND very different interpretations of subtle aspects of plugin APIs (notably threading, GUI<->DSP communication and more). These are hard to get rid of if you've really got a diverse, distributed and largely independent "team" of developers working on something.
Maybe someone could come up with another way to do it that works across DAWs, and then plugins could still happen, but I consider this unlikely, because there is never going to be a way for the host to verify compatibility that is more reliable than what we have now (user reports that plugin X works with host Y, etc.).
The problem is getting people to care. For years, on macOS, the "standard" for plugin developers has been "does it work in Logic?" There were even plugins that would fail auval(1) (the command line AudioUnit validator), but somehow pass in Logic. As far as their developers were concerned, the plugin worked. Working on Ardour, I've seen at least a dozen plugins that used ambiguity in the AU spec to justify why "well, it works in Logic even if it doesn't follow your interpretation of the spec" was the end of their interest.
Edit: I also think testing is hard in general for plugin developers to use correctly. I personally would prefer to see plugin APIs designed in a better way that make it so it's hard to accidentally cause race conditions.
Plugin API design is an art, indeed. There was recently a brief bubble of activity on KVR among a number of independent plugin devs who found many things to dislike about VST3. A long discussion ensued, there was some talk of picking up LV2 instead, then it all evaporated and nothing was left. The last time the industry tried this was in around 2003, and nothing came (directly) of that effort either.
To me, if there is any interest in solving that, I would just expect someone to put a new plugin backend in JUCE/DPF that is specific to the DAW and then compile that together with the plugins into a big giant build. That's more what I mean by "dropping plugins", it's how you avoid the MxN problem too. But I think that many DAWs (including Ardour) gain little benefit from doing this at this time, so if that was what your original sentiment was, I agree.
Have you ever tried Reason? Then again I'm actually using Reason as a plugin (via vst3 to vst2 wrapper) so I can sequence it easily with renoise. Because why not just load a DAW in your DAW.
Back in the day producers used to complain that you could recognize a Reason track from a mile away. FL used to have the same problem. 'Garbage' is probably an overstatement nowadays (at least judging by the FLStudio demo tracks, their quality has been steadily improving over time) but the meme persists.
Personally I like using "garbage" plugins in my music. I've gotten some nice, strange sounds out of old free VST plugins run through much nicer effects racks.
Suffice it to say that it's non-obvious to me where to start in getting a stable and mobile (i.e. laptop) experience. I'd like nothing better than to receive a response that makes me feel sheepish for thinking that Linux is the problem, and if anyone can give good pointers, I'd imagine it's you.
On Linux you do not (as a rule) install device drivers for your devices. They come with the system or they (generally) don't exist. I know of only one audio interface manufacturer who ever maintained their own drivers outside of the kernel tree (i.e. not part of mainstream Linux) and even they have had their drivers integrated now.
Next, since you're on a laptop, you're relieved of the unenviable task of figuring out whether to use a PCI(e) bus device or a USB interface. USB is your only option. The good news here is that any USB audio interface that works with an iPad also works on Linux. Why? Because iPad doesn't allow driver installs, and so manufacturers have been forced to make sure their devices work with a generic USB audio class device driver, just like they need to do on Linux. With very few exceptions, you can more or less buy any contemporary USB audio interface these days, just plug it into your Linux laptop (or desktop or whatever), and it will work.
What can be an issue is a lack of ability to configure the internals of the device. Some manufacturers e.g. MOTU have taken the delightful step of doing this by putting an http server on the device, and thus allowing you to configure it from any browser on anything at all. Others have used just generic USB audio class features, allowing it to be controlled from the basic Linux utilities for this sort of thing. And still more continue to only provide Windows/macOS-native configuration utilities. For some devices, dedicated Linux equivalents exist. Best place to check on that would be to start at linuxmusicians.com and use their forums.
Beyond the hardware, it's hard to give more advice because it depends on the experience/workflow you want to use. If you're looking for something Ableton Live-like, Bitwig is likely your best option. If you want a more traditional linear timeline-y DAW ala ProTools, Logic etc., then Reaper, Ardour or Mixbus would probably be good choices. If you want to do software modular, VCV Rack is head and shoulders above anything else (and runs on other platforms too).
There's a very large suite of LV2 plugins on Linux. Stay away from CALF even though they look pretty. The others range from functional to excellent. Your rating will depend on your workflow and aesthetics. You will not find libre plugins that do what deeply-DSP-oriented proprietary plugins do (e.g. Izotope, Melodyne), though you may be satisfied with things in the same ballpark (e.g. Noise Repellent and AutoTalent).
There's a growing body of VST3 plugins for Linux. If you're looking for amazing (non-libre) synths, U-he has all (?) their products available in a perpetual beta for Linux. Great stuff. There are plenty of libre synths too. There's an LV2 version of Vital called Vitalium which is more stable than the VST3 version; this synth has had rave reviews from many different reviewers.
Sample libraries are a problem because most of them are created for Kontakt. You have a choice of running Kontakt inside a Windows VST adapter (e.g. yabridge) or using other formats such as SFZ or DecentSampler, both of which have both free and libre players. pianobook.co.uk has hundreds of somewhat interesting sample libraries, many (but definitely not even most) of them available in DS format.
Hope this helps.
Why, what's wrong with them?
Thanks for a very informative comment!
ASIO, really? Sorry but you couldn’t pay me to go back to that broken piece of crap after switching to Linux and JACK2. I’m actually traumatized by that piece of software, thinking of moments where ASIO would just break and cause my Live session to collapse into a glitchy cacophony of latency-induced noise. I’ve seen this happen on several computers with different Windows installations and external audio hardware and the problem always ends up being ASIO. Some of the producers I knew swore off anything that wasn’t a Mac because of this exact problem.
The problem with audio production on Linux in 2021 isn’t the audio protocol. It’s that most free and open source audio production software for Linux is dreadful to use. UX is actually very important for DAWs. I want to like Ardour but it’s a miserable piece of software to try to make music in. Feels like a chore to perform any action, kills my vibe, would not recommend. After trying really hard to become comfortable using it, I finally gave up and bought Bitwig. It’s a proprietary DAW and kinda expensive but I’ve been producing music with it for a couple of years and it’s a dream to use - sort of a spiritual successor to Ableton IMO.
> No plug-in or DAW has a CLI…
Most people who make music don’t care about this. I’m a software developer and musician who only uses Linux and I don’t care about this. In my opinion, Linux developers of free and open source creative software should spend less time building these features for other developers and focus more on making their software feel good to create with. If I feel bad trying to use your clunky-ass UI to make my art / music / whatever then I’m not going to hold myself back because it has a free software license. I’m going to find a piece of software that gets out of the way and lets me make what I want to make.
As I've mentioned above, I get all kinds of email about Ardour, some declaring their love for it, and some much more condemnatory than anything you've said here.
The point is that "trying to make music" isn't much of a description: people's workflows for "making music" vary dramatically. Not many years ago, more or less the only way to do this was to record yourself playing one or more instruments and/or singing. These days, there are many fundamentally different workflows, and countless minor variations of each one. If Bitwig works for you, it's no surprise that Ardour doesn't. There's a bunch of people for whom the opposite is true. You have to be prepared to try different tools and figure out which ones work for you.
Finally, ASIO and JACK aren't really at the same level. JACK on Windows actually uses ASIO. The comparison to ASIO on Linux is ALSA, and sure, I'd agree that it's better than ASIO in most ways (though maybe not 100%).
Excellent point and apologies if that comment came across as inflammatory. I really respect the work you and the Ardour team have done even if it's not for me (and infinite thanks for your work on JACK, it truly is a special piece of software). My frustration has more to do with there not being a FOSS DAW that gives me that true Ableton-like experience. I understand why though, this stuff is hard to build and one workflow does not fit all as you point out.
Oh wow, Zrythm looks awesome! Thank you for the suggestion, I'll be taking this DAW for a spin sometime soon. :)
> Ardour is really great for recording and mixing.
Yeah, I'm actually warming up to Ardour as a general mix & mastering environment. It reminds me of Logic Pro in that sense, being more suited for final touches than composition (in my personal workflow).
> If you exclusively make electronic music you could also look into LMMS², it's more of an electronic-music-toy than an actual DAW but thats not necessarily a bad thing.
How is LMMS these days? I tried it sometime last year and had a lot of fun but it crashed too much for my personal comfort (tbf that could have just been whatever buggy LV2 / VST plugins I was testing). It comes a bit closer to the "look and feel" I look for in a DAW - kinda reminds me of older versions of FL Studio which is kewl because that's the software I learned how to produce music on.
> I’m a software developer and musician who only uses Linux and I don’t care about this.
I am a software developer and musician who uses Linux and I do care about this. I run headless, and control my audio software through custom logic and hardware while playing live. I ended up writing a custom synthesizer and sequencer because I couldn't find anything that works well for my use case.
(I'm still open to something else; my synth doesn't sound very good. Designing custom sounds is not something I'm great at or something where I really want to focus.)
That's pretty cool. Most modern DAWs allow you to define per-controller triggers for custom logic in the form of MIDI events. I guess you could write a CLI that maps custom commands to MIDI events and allows you to send those events to your DAW when they are called. It's not exactly what you're describing (and maybe it doesn't fit your use case) but is that something you've considered?
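As a sketch of that idea (pure stdlib; the command names, CC numbers, and device path below are made up for illustration, not anything standard):

```python
# Map named commands to raw MIDI Control Change messages.
COMMANDS = {
    "filter_open":  (0, 74, 127),  # (channel, controller, value)
    "filter_close": (0, 74, 0),
}

def midi_cc_bytes(channel, controller, value):
    """Encode a Control Change as its 3 raw MIDI bytes (status, data, data)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

def command_to_midi(name):
    return midi_cc_bytes(*COMMANDS[name])

# A real CLI would write these bytes to a rawmidi device or a virtual
# ALSA/JACK MIDI port, e.g.:
#   open("/dev/snd/midiC1D0", "wb").write(command_to_midi("filter_open"))
```

From there it's a thin argparse wrapper away from being the CLI described above, and the DAW just sees an ordinary MIDI controller.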
Having had zero problems with this (already many years ago, in the days when getting low latency on Linux was extremely hard) on a variety of machines, but all with pretty decent cards, is it possible that the problem has nothing to do with ASIO but rather with crappy drivers / manufacturers? Or perhaps you just had bad luck?
Also there was the night spent recompiling the right version of bison in the middle of Qsampler's dependency hell, so that I could have a piano sound. That was all in 2017.
- Auto restart any important services (with systemd or similar)
- Use JACK/Pipewire session management
- Report the crash to the developers (of a2jmidi in this case, but it could be anything)
I honestly have never used LinuxSampler so I can't comment on that, I believe they have some strange licensing thing going on.
By contrast, JACK1 contains a2jmidid as a builtin client, no extra work is needed. You just start JACK, all your MIDI devices are listed.
- Distro: Arch Linux
- Audio backend: JACK2 and PipeWire
- USB Audio Interface: Behringer U-Phoria UMC404HD
- USB controllers: Akai APC 40 mkii, Casio CTK-6200
- Microphone: Zoom H6
I run this exact setup on my Ryzen desktop and a Thinkpad T480 with no problems. I've also tried routing the audio output of various software directly into my DAW using qjackctl, works perfectly fine.
To use PipeWire in place of JACK, you have to install a specific package (`pipewire-jack` on Arch Linux) and run all of your audio applications using a wrapper command called `pw-jack`. You can update the `.desktop` entries for audio software on your system to automatically run this command; I've done that and everything I use launches correctly, tbh I forget that PipeWire is there. I just use Bitwig, qjackctl, Catia, etc. and they all think they're using JACK but really everything is being handled by PipeWire. Pretty kewl and it's been working perfectly for me for quite some time. :)
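For anyone curious, the .desktop tweak is just a one-line change to the Exec key (the application name and path here are examples; adjust for whatever you actually launch):

    # ~/.local/share/applications/bitwig-studio.desktop (excerpt)
    [Desktop Entry]
    Exec=pw-jack bitwig-studio

Copy the system .desktop file into ~/.local/share/applications first so your edit survives package updates.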
For audio on linux: https://pipewire.org/
For a more code oriented audio workflow (python lib to load VST and AU) https://github.com/spotify/pedalboard
Pedalboard is also not a realtime audio environment (as was clarified by one of its developers here on the HN thread last week). In that sense it is extremely different from Bespoke (and nearly everything else).
Most plugins these days are stable, and don’t crash. I haven’t had issues in like 10 years honestly. At least if you’re paying for them from groups who have a reputation.
Git for a music project would be detrimental, going to be really honest.
Linux can run things just fine. JACK has been a relatively stable audio platform. RT pre-emption can help. The only issue with Linux I see is software support. But with Apple making some less than stellar choices, I expect Linux to become more important to the DAW/digital recording ecosystem.
One fantastic one that does work well on Linux (Reaper) also has a scripting interface. Not a CLI, but actually more useful.
If you’re meaning from crashes, Bitwig has configurable levels of process isolation for plugins - I guess the trade off is performance, I haven’t tested it out so can’t comment on how well it works.
What if you run each plugin in a separate process, or does IPC add too much overhead or not provide enough bandwidth for this to be feasible?
edit: I now also remembered that virtual memory is a thing and you can share a chunk of physical memory between processes to avoid the need to copy anything at all.
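That's roughly how sandboxed plugin hosting works: host and plugin processes map the same audio buffer, so the samples themselves never cross the IPC boundary. A toy stdlib-only sketch (the "DSP" here is just halving byte values; a real host would share float buffers and synchronize once per audio cycle):

```python
import multiprocessing as mp
from multiprocessing import shared_memory

BUFFER_SAMPLES = 64  # one small audio period

def plugin_process(shm_name):
    """'Plugin' side: attach to the host's buffer and process it in place."""
    shm = shared_memory.SharedMemory(name=shm_name)
    for i in range(BUFFER_SAMPLES):
        shm.buf[i] //= 2  # stand-in for real DSP
    shm.close()

ctx = mp.get_context("fork")  # fork: child inherits the code above directly
shm = shared_memory.SharedMemory(create=True, size=BUFFER_SAMPLES)
shm.buf[:BUFFER_SAMPLES] = bytes([100] * BUFFER_SAMPLES)

worker = ctx.Process(target=plugin_process, args=(shm.name,))
worker.start()
worker.join()

processed = bytes(shm.buf[:BUFFER_SAMPLES])  # host sees the result, no copy
shm.close()
shm.unlink()
```

The copy cost disappears, but as noted elsewhere in the thread, the hard part is the per-cycle synchronization: waking the plugin process and waiting for it every buffer is where the IPC overhead actually lives.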
I know of no evidence that Linux context switching on x86/x86_64 is slower than any other OS, and some suggestions that it is faster (Linux does not save/restore FP state, which Windows (at least at one point) does).
Linux is as capable or more capable of realtime work than any other general purpose OS, and the latency numbers from actual measurement are excellent (when using RT_PREEMPT etc).
What are you referring to?
Tanenbaum argued that a microkernel was lighter, and could switch context faster than a macrokernel (the likes of which UNIX was typically reincarnated with). Linus argued that throughput, not latency is what matters to end users. At that time your typical OS switched tasks 18.5 times per second and Linux did substantially better than that. Case closed, the throughput argument won.
But now, many years later the consequences of that mean that we are switching contexts orders of magnitude slower than we could have because the context contains a lot more information than it strictly speaking has to. My own QnX clone switched 10K / second on a 486/33, and yes, the IPC mechanism meant that throughput suffered but for real time applications with a lot of the hard stuff in userspace context switches are far more important than throughput (and incidentally, also for perceived responsiveness of the OS and apps).
The latency numbers are excellent from the perspective of very forgiving applications, a typical DAW runs with 1K or even larger sample buffers which is acceptable, but for many real time applications that is an eternity and so those are not typically built using Linux as the core but some dedicated RTOS.
edit: I had 100K / second before, this was in error. It's been 30 years ;)
You will find that on Linux a context switch takes about 30 usec. More recent measurements that take account of the effect of the TLB flush put the range at 10-300 usec.
That means that in 2010, on Linux, you could reasonably expect to do at least 30k/sec. In 2021, with realistic audio processing workloads, the range is probably 3-50k/sec.
The 486 has a much lower register count than contemporary processors, which accounts for the faster context switching.
Modern audio processing software on Linux can run with 64 sample buffers, not 1k.
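The arithmetic backs this up: even taking the ~30 usec switch cost cited above at face value, it's a small slice of a 64-sample period (48 kHz and 64 samples are common settings here, not anything mandated):

```python
SAMPLE_RATE = 48_000
BUFFER = 64
period_us = BUFFER / SAMPLE_RATE * 1e6  # length of one process() cycle in usec
switch_us = 30                          # context-switch cost cited above
overhead = switch_us / period_us        # fraction of the cycle lost per switch
```

That works out to roughly 1.3 ms per cycle, with a single context switch costing on the order of 2% of it, which is why 64-sample buffers are perfectly feasible.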
This recent paper on RT linux on RPi/Beagleboard single board systems concludes that on some of these relatively "low power" systems, 95% of latencies are in the 40-60usec range, which is completely adequate for the majority of RTOS tasks (but not all).
>"The majority of Linux kernels’ measurements with PREEMPT_RT-patched kernel show the minimum response latency to be below 50 μs, both in user and kernel space. The maximum worst-case response latency (wcrl) reached 147 μs for RPi3 and 160 μs for BBB in user space, and 67 μs and 76 μs, respectively, in kernel space (average values). Most of the latencies are quite below this maximum (90% and 95%, respectively, for user space and kernel space). In general, it seems that maximal latencies do not often cross these values."
[ ... ]
"As an outcome, Linux kernels patched with PREEMPT_RT on such devices have the ability to run in a deterministic way as long as a latency value of about 160 μs, as an upper bound, is an acceptable safety margin. Such results reconfirm the reliability of such COTS devices running Linux with real-time support and extend their life cycle for the running applications."
This slide presentation offers up very similar numbers with graphs, also on ARM systems (I think):
This article shows cyclictest, a very minimal scheduling latency tester, getting the following results on an x86_64 system:
"The average average latency (Avg) is 4.875 us and the average maximum latency (Max) is 20.750 us, with the Max latency on 23 us. So, the average latency raises by 1.875 us, while the average maximum raises by 1.875 us, with the maximum latency raised by 2 us."
> "Maximum observed latency values generally range from a few microseconds on single-CPU systems to 250 microseconds on non-uniform memory access systems, which are acceptable values for a vast range of applications with sub-millisecond timing precision requirements. This way, PREEMPT_RT Linux closely fulfills theoretical fully-preemptive system assumptions that consider atomic scheduling operations with negligible overheads."
I'm not sure where you're getting your current info from, but I'm extremely confident that it's wrong. If I had to guess, you have not kept up with the impact of the PREEMPT_RT patchset on the kernel, nor scheduling improvements in general, but I don't know (obviously).
It might be worth documenting my setup (reproduced across three different machines, a laptop, an 'all-in-one' and a very beefy desktop), to see what could be improved because that difference is substantial.
that sounds very weird, I don't even run a RT kernel and I have no trouble running at 64 with a fair amount of plug-ins and even 32 samples when I just want some live guitar effects (i7 6900k, RME multiface 2). My only configuration is installing this AUR package: https://archlinux.org/packages/community/any/realtime-privil...
There's a wide variety of reasons, all of which can interact. It's one of the few good arguments for buying Apple hardware, where this is not an issue.
Over the years I've been working on pro-audio/music creation on Linux (22+ years), I've had a couple of systems that could perform reliably at 64 samples/buffer. My current, based on a Ryzen Threadripper 2950X, can get down to 256 but not 128 or 64.
If someone were to put together a guaranteed low-latency config and keep it patched via a custom distro (assuming, say, 'Ubuntu Studio' would not be up to the task), would there be a market for that? Are there such suppliers? What specifically is different about Apple hardware that it works there?
I read that page earlier, it's helpful, but more helpful would be a shopping list that says 'get this: it will work, assuming you install this particular distro'. And after independent verification you could then add alternatives for each slot. For me, for instance, a big question would be if NVidia video cards break the latency guarantees (their driver is pretty opaque) by keeping interrupts masked for too long. If that were a deal breaker then I'd have to set up a system only for studio use.
Lots of efforts have been made over the years to create "audio PC" companies. Even with the Windows market in their sights, I don't know of a single one that has lasted more than a year or two. How much of that is a market problem and how much of it is a problem of actually sourcing reliable components, I don't know. I do know that when large scale mixing console companies find mobos that work for them, they buy dozens of them, just to ensure they don't get switched out by the manufacturer.
Apple stuff works because Apple sort of has to care about this workflow functioning correctly. There's no magic beyond careful selection of components and then rigorously sourcing them for the duration of a given Apple product's lifetime.
I have no actual evidence on the video adapter front, but my limited experience would keep me away from NVidia if I were trying to build a low latency audio workstation. Back in the olden days (say, 2002), there were companies like Matrox who deliberately made video adapters that were "2D only, targeting audio professionals". These cards were properly engineered to get the hell off the bus ASAP, and didn't have any of the 3D capabilities that audio professionals (while wearing that hat) really don't tend to need.
You might be interested in MrsWatson. Even though the development on it has been discontinued, there is still a lot of potential for its use:
probably would be a bit much in a complex finished instrument but that's amazingly intuitive for the building phase, or for reading someone else's instrument.
I wish there were a way to translate old Reaktor library stuff into more modern synth GUIs. There's some amazing gold in there, but it is nigh impossible to understand between Reaktor's uh... challenging UI and the total lack of documentation for the signal paths when trying to explain them to a relative novice. You can very easily see _what's_ built, but god help you trying to understand why on your own without adding a ton of scopes everywhere manually.
I wouldn’t be surprised to see it start popping up elsewhere, similar to how Ableton made everyone realize they had somehow been living without retrospective MIDI capture.
>fifteen fewer dollars in your pocket
Definitely gave me a chuckle
Bespoke Plus is $5 and Pro is $15; there should be no check in the $5 line for Pro, OR if both lines are checked, the second line should be $10 and not $15.
As it is it means Pro is $5+$15=$20.
const originalDollars = 30;                 // money in your pocket before buying
const myDollars = originalDollars - 15;     // after paying $15 for Pro
const fiveDollarsLess = myDollars <= (originalDollars - 5);    // $5 less? true
const fifteenDollarsLess = myDollars <= (originalDollars - 15); // $15 less? also true
console.log(fifteenDollarsLess && fiveDollarsLess); // prints true: both checkmarks hold
what you propose wouldn't make sense in my opinion, as it's basically two consecutive boolean declarations
$5 less = pocket - paid <= pocket - 5
$15 less = pocket - paid <= pocket - 15
The difference in interpretation comes from the nature of "features": are they actions or verifications? A verification is (usually) idempotent / has no consequence on the state of the world, but an action isn't.
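As a rough sketch of that distinction (all names here are made up for illustration): a verification only reads the state of the world, so repeating it is harmless, while an action mutates state, so repeating it compounds.

```javascript
// Hypothetical sketch: a "verification" is idempotent -- it can run any
// number of times without changing state -- while an "action" cannot.
let pocket = 30;

// Verification: reads state only.
const hasAtLeast = (amount) => pocket >= amount;

// Action: mutates state -- running it twice has a different outcome.
const pay = (amount) => { pocket -= amount; };

console.log(hasAtLeast(15)); // true, no matter how often we ask
pay(15);
pay(15);
console.log(pocket); // 0 -- repeating the action changed the world twice
```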
I read the lines as "takes $x from your pocket".
Also, add this to your list. :-)
Still, can 100% recommend renoise for what it is, and more, and I doubt I'll ever fully stop using it
I'll have to try Bespoke as well ...
On a more serious note, modular music is an extremely interesting and growing area and just about every module is surprisingly expensive; I'm curious how well this translates to virtual racks.
I want to get into music production but a barrier is that Omnisphere and FL Studio are $500 and have a super-limited trial version. As a grad student I'm not going to spend $500 for a piece of software I might be interested in using.
I would much rather have it be like software development where almost everything is free. And instead of paying upfront, synths / effects can make money by taking a cut of your revenue (I don't think that's like software development but it means synth producers still make revenue).
• Get Reaper. It’s a mainstream DAW, is fully functional, a free download, and only $60 to register after 90 days.
• Valhalla Supermassive for reverb: https://valhalladsp.com/shop/reverb/valhalla-supermassive/
• The VST fork of VCVrack for a modular synth: https://github.com/bsp2/VeeSeeVSTRack#downloads
I would get a keyboard controller with full sized keys and a 5-pin DIN MIDI out for just over $200, but that can come later.
One thing to avoid is the rabbit hole of concentrating on what gear to buy over actually making music with the gear.
No reason to get 5-pin DIN MIDI at this point; almost all devices offer USB MIDI and it's as good as DIN MIDI in almost all scenarios.
[ EDIT: VCV Rack ] 2.0 will be out "soonish" which will offer an "official" VST (and if we're lucky, LV2 also) plugin, though at a price.
People's mileage will vary when it comes to the DAW. As the author of another (libre & open source) DAW, I get emails that vary from "Oh my god, I've used X and Y and Z and yours is so much easier to use and incredibly fast and reliable" to "how can you look at yourself in a mirror when you make such shit software". Reaper works for a bunch of people, but not for another bunch, as is the case for most DAWs.
DAWless setups are definitely a thing, and you need DIN MIDI to connect the keyboard directly to a sound module (USB can only be connected to a computer).
It is a $50-$100 extra investment to get a quality keyboard with DIN MIDI, but those quality keyboards come with better software and better build quality.
It’s a lot better to spend the extra money up front to get a keyboard with a DIN MIDI connection (e.g. an Arturia Keylab or Novation Launchkey) than to have something which will need a hacked-together USB-to-DIN box (and I notice I haven’t seen any makes and models of USB-to-DIN MIDI boxes which supposedly will always work) if they ever want to go DAWless.
In fact, as with a lot of pirated software/media, the experience is superior. Licensing and DRM of music software is a headache - dongles, software centers and other bloat. Scene groups like R2R even optimize performance and patch out bugs in addition to cracking protections, making their releases superior to those of the original developers.
Otherwise have a look at Splice rent-to-own plugin licensing.
Ableton Live Suite has a tremendous amount of tools out of the box and contains everything you need to make music with. Look for second hand copies on forums. You'll likely be able to pick a copy up for $400 or so. You could buy that and never buy any software again.
> And instead of paying upfront, synths / effects can make money by taking a cut of your revenue
Hahahaha, have you asked how much the average electronic music producer makes vs the average software dev? ;)
There are countless free VSTs that are very, very good. KVRAudio is a good news source for what's happening in VST World, both paid and free.
There are also numerous plug-in discount stores (Audio Plugin Deals, Plugin Boutique, VST Buzz, and others) that regularly sell mainstream VSTs at hugely discounted sale prices.
Free DAWs are harder to find, but you can get intro-level DAWs with enough features to get started for less than $200.
On a Mac Logic Pro is $199, which is a full-featured DAW with a solid collection of virtual instruments.
Ableton Live Lite (1 step up from intro) comes free with a lot of music gear. I had a bunch of licenses lying around because it came with my USB audio box, my MIDI keyboard, etc etc.
Also consider FL Studio. Unlike nearly everyone else it comes with free lifetime updates. Like Ableton, every edition except the cheapest has a ton of plugins to cover nearly every need. My license is almost 20 years old, and I'm still running the latest version. Easily the best money my broke-student ass ever spent.
I imagine there are oodles of cheap DAWs that will at least let you sequence and mix everything together. Technically, I don't think you need a DAW if you're just playing (though obviously that's a very limited use case). I feel like you could even record tracks into Audacity straight from the instrument.
There are TONS of great sounding free VSTs and some very good cheap ones. Some of them can be gotten at a deep discount (though you're usually shelling out $50-100+), but the MSRP is something outrageous. I don't want to shill, but you can "rent-to-own" for zero fee now (don't know how many DAWs are available, but plugins for sure).
It doesn't HAVE to be expensive, though there's probably some stuff you'll inevitably be tempted to buy, as with any hobby. Is dropping eg. $100-500 on a hobby once or twice a year "expensive"? It's also pretty easy to get into and buy things as you go. You really don't have to drop thousands of dollars on tons of software.
You don't have to pay a dime. Don't quote me on this, but I think there's a major free modular simulator that's pretty good. You could get some solid vintage synths, some drum machine/groovebox, and effects for like $50-80.
If you think about everything going into it that's a ton of functionality for your buck, especially compared with traditional hardware synths. I have $3-5k in my rig (plus I buy VSTs), and it's still relatively basic. I don't own a single high end synth, my most expensive would be considered midrange at ~$1200... ONE BOX ... you could buy so much fucking software for that, it's crazy.
BTW almost none of this is necessary for you to experiment and create. You need very little... like 1-5 VSTs, some utilities, and a minimal plug-in setup. I encourage OP and anyone reading this to create if they have the urge. I'm 100% positive you can get into it at any price point. Paying more money won't help you as much as you think here. Much better to keep it small and master one box at a time.
>I would much rather have it be like software development where almost everything is free. And instead of paying upfront, synths / effects can make money by taking a cut of your revenue (I don't think that's like software development but it means synth producers still make revenue).
I have to disagree with you there. First of all, I don't agree with this generalization, as it seems very focused on your own background. There are lots of people selling software (including design tools), sometimes for outrageous prices, if it's sought after.
It seems like almost all software of this kind is sold upfront, and I don't know how a revenue-share business model is supposed to work. Small to medium sized software companies have to sell to as many people as possible, TBH, because most of their userbase is going to make $0.
I think your frame of reference is out of whack regarding what stuff costs, because we're getting screaming bargains on soft goods (e.g. media, newspapers, etc.). This "everything must be free" attitude is really toxic and having some troubling effects. I also think it would be hard to prove what people are using and to enforce this, even for recordings, and so much money is made live anyway.
Wow, what a tirade. Sorry folks, but I wanted to get that out there.
Quality DSP from Xfer Records (Serum), Fabfilter (Many amazing data viz plugs and good sounding compressor, limiter, gate, multi band eq, saturator, etc), izoTope (trash2 distortion) etc etc etc have been worth every cent for their clarity and performance.
Carefully shop around for some of these and you will realize that money does actually buy real quality sometimes. But only sometimes. Most quality products have free limited demos that can be very informative.
Sometimes free/libre stuff is also amazing, like Dexed. And perhaps also Bespoke - I’ll definitely be trying it out, and sending $$$ if I think I’ll use it.
That alone makes me want to donate.
Can it work as a VST plugin?
I wouldn't mind feature bounties for a project like this.
This was when I was a "poor" student and I couldn't spend money on music.
Later I could afford real modular/semi-modular synths and I enjoy them a lot but I still appreciate being able to connect cables on a screen where you can save the state and rapidly recall it, rather than having to free up a table, setup all the modular synths, connect them together etc. - So I think I'm going to enjoy Bespoke Synth a lot.
Another alternative that I have enjoyed is VCV Rack and its little brother for iPad/iPhone miRack.
PS I loved the "Feature Matrix" on the Bespoke home page ahah :-D
It was a shame there was the whole thing of losing the source code that killed development for years, and that it never made the leap to cross platform, otherwise I’m sure I’d still use it today!
Have you tried Drambo on iOS? I guess in some ways it’s the closest thing I’ve found to Buzz, in that it operates at the same level of “granularity” in terms of the modules, and in that it has sequencing (step based in this case, though piano roll is coming) built in. It’s easily my most used iOS music making app these days, really brilliant bit of software.
MiRack is also really fun, although I find dealing with the myriad UIs for each module a bit challenging sometimes (though it is a fun touch!) - Drambo has a much more standardised layout for each module (like how Buzz was just sliders IIRC).
Yes, I can remember that very well! Eventually Oskari rebuilt the app basically from scratch, using .NET (at least IIRC) and that's what you get today if you go to the site and download it. It's not the original app but it maintains the same API for modules. Oskari nowadays is a researcher and he published some interesting articles: https://arxiv.org/abs/1407.5042
> Have you tried Drambo on iOS?
Nice, thank you! I had never heard of it but after watching a couple of YouTube videos I decided I'm going to download it
I hope you enjoy Drambo! It can take a little while to get into the swing of it but if you used to use Buzz I really think you’ll enjoy it. There are a tonne of tutorials on YouTube about it to get going. Being able to host AUv3 plugins is awesome too!
On tracker websites I used to be mastazi just like my HN username, but I never released my own music I think. However I made a CD and gave it to friends, I might still have a copy somewhere at my parents' home in Italy. That's the only memory I have, because I lost all of my Buzz project files in a HD failure many years ago.
The modern web is different from what we had back then, but if you look hard enough you can still find corners of the web that look familiar :-)
Also, I really like the way it links inputs and outputs just by dragging the mouse. Does anyone know if there is any general purpose library to do that?
I mean, ideally I create some list nodes, then use that library to link them in a certain order by using the mouse.
This is usually called "MIDI mapping" or similar and is available in basically every DAW these days.
> Also, I like a lot the way it links inputs and outputs just by dragging the mouse. Does anyone know if there is any general purpose library to do that? I mean, Ideally I create some list nodes, then use that library to link them in a certain order by using the mouse.
Something like qjackctl (with Jack, obviously) could do this for you, as you can route things however you want in a drag-and-drop UI.
I'm aware of MIDI mapping; my question was about the possibility of making a standalone synthesizer (with keyboard and knobs) in which the necessary controls would be read from analog inputs while still displaying them on the screen as if they were changed with the mouse, which is not easy to do when playing live.
> Something like qjackctl (with Jack, obviously) could do this for you, as you can route things however you want in a drag-and-drop UI.
Sorry, my second sentence wasn't clear at all. I'd love to see a general purpose (not related to sound or any other specific use) library to allow the graphical representation of structures linked together (list nodes), as this software and others do with generators, effects, etc. Ideally, it should operate on the header of a structure which contains the relevant fields for linking to others. When I alter the links on the screen, it does the same on the represented nodes.
That would be the way I would build for example a drum machine in which each structure contains also a pattern and I can move them at will back and forth, replicate them, set their own fields (number of repeats, etc). This would be again a sound related application, but what I'm looking for is something really general purpose.
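A minimal sketch of what the core of such a general-purpose library could look like (all names here are hypothetical): each node embeds a small link header, the UI layer calls connect/disconnect when the user drags between ports, and traversal never needs to know what the payloads mean.

```javascript
// Hypothetical core of a general-purpose node-linking layer.
class LinkedNode {
  constructor(payload) {
    this.payload = payload;   // e.g. a drum pattern, repeat count, etc.
    this.outputs = new Set(); // link header: downstream nodes
  }
  connect(target) { this.outputs.add(target); }    // called on drag
  disconnect(target) { this.outputs.delete(target); }
}

// Walk the graph in link order, independent of payload type.
function traverse(node, visit, seen = new Set()) {
  if (seen.has(node)) return;
  seen.add(node);
  visit(node.payload);
  for (const next of node.outputs) traverse(next, visit, seen);
}

const a = new LinkedNode('pattern A');
const b = new LinkedNode('pattern B');
a.connect(b); // what the mouse drag would trigger

const order = [];
traverse(a, (p) => order.push(p));
console.log(order); // ['pattern A', 'pattern B']
```

The point of the design is that the graphics layer only ever touches the link header, so the same widget works for drum patterns, effects, or anything else.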
I see, I understood it the opposite way! But yeah, what you are describing also exists! Many (high-end) mixers have motorized faders, also using MIDI if I remember correctly, so you can change the fader in your DAW and it's represented in meat space.
SuperCollider sounds like what you're after. You can even use vim :)
https://github.com/pjagielski/awesome-live-coding-music here's a list of related stuff
My favorite approach is in Sporth: because it's concatenative, you don't need any operator, you just type the things you want to connect. Shameless plug, I made a playground for it: https://audiomasher.org/browse
It's certainly not as powerful nor polished as Bespoke but might be worth a look.
Or at the very least some kind of remedial hot key navigation would be great too.
Linux build instructions would be great. I wasn't able to get the binary running on Debian 11 due to library version differences.
(fwiw, you have to pay for the $1k suite version of Ableton to get Max, so Bespoke could still be a great alternative even if they do a lot of the same things)
The JS support is really weird. It's only JS 1.6 (from 2005), and had weird glitches (like loading two instances of the same device causing the first device to stop working), and I couldn't get the timing tighter than about 30ms. Ideally you could write code that runs at audio rate.
There's also "gen", which is a Max-specific scripting language that is presumably real-time suitable through a JIT. Unfortunately you need a separate Max license to use it, even the full Ableton Live Suite doesn't give you gen support. You can sorta hack around and use it by manually editing the .maxpat files (which are almost JSON), copying from a device that uses gen, but there are lots of weird glitches going this route.
A list of a few annoying things about M4L:
* Documentation is pretty sparse and/or low quality, and weirdly split into two (help and references).
* All variables are global across devices by default, local (device-specific) variables need the prefix "---", which is barely documented
* Tons of annoying UX issues, like entering an invalid value in the inspector just reverts to the old value. You can't enter an empty string for parameter values, that reverts too (you need to enter a literal ""). Certain functionality is only available depending on whether the device is "locked", so you have to lock/unlock the view all the time if you're working with e.g. subpatchers
* Abstraction is quite annoying to do. There are three different types of patches, and it's not really clear what the difference is between them. Creating subpatches and then duplicating them creates two different subpatches--changes in one are not shared with others.
* ...and a ton of other things. I have a big text document of these gripes I was intending to turn into a blog post, but haven't gotten around to it.
Maybe I'm wrong and there's better ways to do some of these things, but overall my experience learning M4L was pretty bad. If it wasn't the only way to do certain advanced things in Ableton, I'd never touch it again.
I will agree however that the JS implementation is neutered, probably because they have to worry about support volumes. This is one of the reasons I created Scheme For Max, which unlike the JS object, allows running in the high priority thread, does hot code reloading, and is open source and can be recompiled to your liking. Now that I have Scheme for Max, I love Max (and Max4Live) to bits, they are a fantastic environment, and I do all the coding in S7 Scheme or C. :-)
I didn't touch too much on the specifics in my post, but there are lots of little design oddities I ran into when doing relatively simple tasks. Like creating a multiplexer for messages: the [switch] object has an "empty" channel for input 0 (and regular inputs are 1-indexed), so you'll often need an extra +1 object for the control input. And the inputs of [switch] have no memory, so every time the control changes, you need to send a bang to resend the message in the now-active channel. Or say you want to multiplex signals. That uses the [selector~] object (why not [switch~]?), and has the same +1 issue as [switch]. But what if you want a short crossfade when switching inputs to avoid harsh transients? "Good luck" is all I'll say here.
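For what it's worth, the short crossfade wished for here is simple to state outside of Max: blend from the old input to the new one over a short window using an equal-power curve. A rough sketch (not Max code, and the function name is made up):

```javascript
// Blend from oldBuf to newBuf over `fadeLen` samples with an
// equal-power curve, so switching inputs avoids a hard transient.
function crossfadeSwitch(oldBuf, newBuf, fadeLen) {
  const out = new Float32Array(newBuf.length);
  for (let i = 0; i < newBuf.length; i++) {
    if (i < fadeLen) {
      const t = i / fadeLen;                     // 0..1 over the fade
      const gainOld = Math.cos(t * Math.PI / 2); // equal-power fade-out
      const gainNew = Math.sin(t * Math.PI / 2); // equal-power fade-in
      out[i] = oldBuf[i] * gainOld + newBuf[i] * gainNew;
    } else {
      out[i] = newBuf[i]; // fade complete: pass the new input through
    }
  }
  return out;
}
```

A fade of a few hundred samples (a handful of milliseconds at 44.1 kHz) is usually enough to tame the click.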
I'm not generally trying to do anything super-complicated with Max. I have indeed used the C extension API when it makes sense (building a wavetable synth), and it was OK. I still contend that the main visual programming environment is not very good, for programmers and non-programmers alike.
...anyways, all that said, Scheme For Max looks really cool :)
I spent some time figuring out nicer ways to work with it in order to build an Octatrack-style parameter crossfader for M4L, it provides some abstractions and setup to make using Typescript with Max a bit more pleasant. Still plenty of limitations but I was able to get my device working pretty well in the end. Apologies for lack of docs!
I've gotten really into Ableton this past year and I've been curious whether I should get into Max for Live. Being a programmer and looking at their marketing materials, it seemed like it should tap into the right parts of my brain. But seeing your comment now ... maybe not the right move. Especially because I'm not looking to accomplish anything special, I just want a sandbox to play with digital audio concepts.
[ I edited it because I realized that VCO's actually go back to about 1910 ]
So for example when I recently went to describe in a README how to install some software that I'm working on, I relied mostly on my memory and secondary sources to give users on various distros a pointer for how to install the dependencies in question. For openSUSE Leap 15.3, for example, both pieces of software are/were not in the official package repos at the time, so I simply stated that, linking to the relevant pages under software.opensuse.org that told me this; but not having run openSUSE myself for years, I am not sure of the reason for it, or indeed if it's even completely correct.
I guess there’d be room for some CI service where instead of a specific Docker image like many use you’d instead list the dependencies in a kind of meta format and the service would install the corresponding packages and run the tests across many distros. Then the service could generate scripts or readme instructions for each distro.
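The meta format could be as simple as a distro-neutral dependency list plus a per-distro package map that the CI service expands into install commands. A toy sketch (every package and distro name here is illustrative, not verified against real repos):

```javascript
// Declare dependencies once by distro-neutral name...
const deps = ['jack', 'python3-dev'];

// ...and map them per distro. A CI service would maintain these maps.
const distros = {
  debian: {
    cmd: 'apt-get install -y',
    map: { jack: 'libjack-jackd2-dev', 'python3-dev': 'python3-dev' },
  },
  fedora: {
    cmd: 'dnf install -y',
    map: { jack: 'jack-audio-connection-kit-devel', 'python3-dev': 'python3-devel' },
  },
};

// Generate the install line for one distro's README or CI job.
function installLine(distro) {
  const { cmd, map } = distros[distro];
  return `${cmd} ${deps.map((d) => map[d]).join(' ')}`;
}

console.log(installLine('debian'));
```

The same expansion step could emit a test matrix, so each distro's generated instructions are actually exercised instead of written from memory.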
At the moment I think realistically in most cases it will need to be that people who run various distros take it upon themselves to sort it out and to submit pull requests to projects about how to install and use on any given distro.
My own preferred Linux distro for desktop is Debian-based too. KDE Neon.
But so, I think it may be worth it that you try and submit a PR to the OP for adding instructions or an install script adapted to your own distro of choice.
Although ultimately, if the software grows big enough eventually someone will add it to the package repositories of each distro and then there will be no need for manually or scriptually installing the deps, because the deps will be specified in the package repos. And for example if you use Arch I guess someone is bound to add it to AUR if it’s there already.
But at least it's a short install list and it's probably not too difficult to install the deps on your distro of choice and fire it up.
Thought I'd try build it and...
> Use the "Projucer" from https://juce.com/ to generate solutions/project files/makefiles for building on your platform. 
Is this really a FOSS project?
I would love to see more prefabs, especially ones designed to emulate some classic synths.
It also might be time to add undo. Everyone (especially me) likes to save it for last.
Keep up the good work!
Reminds me a bit of Reaktor's builder environment, but based on that demo video, it seems like a more useable, better thought out version (and you can't beat that price / feature matrix).
are designed such that you can easily assign fine-grained inputs on the controller to whatever software parameter you like. Traditional modular required you build a lot of things you'd expect from a physical instrument incrementally, and there's some interesting music that comes from that alone, but it's a real joy to have something you can use to construct, say, lifelike vibrato that responds to multidimensional inputs, without a shitton of work.
much more cost-effective too. you can use a single $300 controller and software to make the same shit that'd require several discrete multi-grand dollar setups if fully implemented in hardware
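As a small illustration of the fine-grained mapping described above (a hypothetical sketch, not any particular controller's API): vibrato whose depth and rate track two controller dimensions, say aftertouch and the mod wheel, both as standard 0-127 MIDI values.

```javascript
// Hypothetical mapping: vibrato pitch offset (in semitones) whose depth
// follows aftertouch and whose rate follows the mod wheel.
function vibratoPitchOffset(timeSec, aftertouch, modWheel) {
  const depthSemitones = (aftertouch / 127) * 0.5; // up to +/-0.5 semitone
  const rateHz = 4 + (modWheel / 127) * 4;         // 4-8 Hz
  return depthSemitones * Math.sin(2 * Math.PI * rateHz * timeSec);
}
```

Wiring this up in software is a few lines; building the equivalent response out of hardware modules means dedicating an LFO, a VCA, and attenuverters to it.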
I think with instruments it’s more about what keeps you in the creative flow than anything else. At least for me hardware has been 10x more productive than software in terms of actually making music.
I've just installed it and wanted to try this Python live-coding but it crashes when loading the example file.
This all looks so promising; it could well be what I've been waiting for, for years.
I've been doing research on music AI, inverse synthesis, and the like, and shockingly few open-source software packages were usable for creating training sets.
If you haven't seen them already, SuperCollider, Chuck, PD, Faust and/or the Web Audio API might be better suited to generating training sets in an offline fashion.
Windows protected your PC
Microsoft Defender SmartScreen prevented an unrecognized app from starting. Running this app might put your PC at risk.
(but I'm not suggesting you follow the same rule, I'm not responsible for anything that happens etc. etc.)
(edit: added links to Mac notarizing / Windows certification guides)
Are there any artists today working with pure code?
I'm less of an audio artist than a visual artist (which I do professionally to some extent) but I imagine the reasons are similar to why few people make visual art using scripts and imagemagick or some similar workflow.
Most creative output is birthed in a more freeform state of play, to some extent, rather than being reasoned about and assembled like code. Coding itself can be playful, but when your goal is expressing emotions and ideas in art, having to pipe that through a rigid, logical process is much less expressive than grabbing something, even a virtual something, and making some sort of gesture with it. So unless you're trying to express something algorithmic, code is a layer of abstraction that operates in a very different way with persnickety bounds that just aren't useful in most creative expression.
I've done generative design using action script in Photoshop and made some really cool algorithmic photo collages, but even as a long-time developer, the frequency with which I think "this could look cool if I set up a function to do it like this" is pretty rare.
Once you get past the preliminaries, play can begin...except, you can usually return to structure by adding another layer of it. Harmonization principles in music have layers of structure to them, and "expressive" pop harmony can often be seen as a very calculated thing of crafting maximal tension and release in a short period. Likewise, you can definitely go in the direction of technical visuals, either with detailed illustration or computer renderings. Often there's a desire to digitally emulate analog workflows without literally using those workflows, which results in some additional technical considerations around achieving that.
I think this is how all content creation software eventually grows into a spaceship panel UI - even if you have a specific thing in mind for each process layer and can reduce it to a preset or template, you have to configure the software to get there.
Then again, I'm not an architect or mechanical engineer, so maybe that's different for different folks.
Hey, could you please share the link where you read that? Thanks!
https://www.redbullmusicacademy.com/lectures/tim-hecker-lect... (ctrl f for "P-P-O-O-L-L")
That would probably be PureData.
Can't wait to play around with it!