We came from a world where bytes mattered; they come from a world where gigabytes matter.
In some ways, caring about that stuff can be detrimental. At the back of my mind there is always that little niggle: you could do this at 1/10th the runtime/memory cost, but it'll take twice as long to write and you'll be the only one who understands it.
These days we don't optimise for the machine but for human time, and honestly, that's an acceptable trade-off in many (but not all) cases.
It can be frustrating, when you remember how much of an upgrade a 286 was over what you had before, that I now routinely throw the equivalent of thousands of them at a problem inefficiently and still get it done in under a second.
Oh, so this is like a web browser thing?
> Click here to download FarPlay
Oh, so I _do_ need to install FarPlay. Just not any software that's a third party besides FarPlay. Which wouldn't make any sense.
Heh. "Note for Windows: To use FarPlay on Windows, you must have an ASIO audio driver. We recommend the free ASIO4ALL."
Yep, that's because ASIO4ALL uses WDM-KS, and since Vista, WDM-KS doesn't support multiple sources.
Actual ASIO drivers made by sound card manufacturers usually don't have this limitation, as long as you keep the same sample rate everywhere. But it can also vary depending on who made the driver and/or whether the source apps are using ASIO or a mix of ASIO and WDM/WASAPI. Getting low-latency audio to work nicely on Windows can be messy (compared to macOS, at least).
These days, I can get ~1ms latency with shared-mode WASAPI on a 2012 then-budget i5 desktop, with on-board audio...
I seriously doubt it. I don't think it's possible for shared WASAPI to go below 20-30ms. How are you measuring it? Input, output or round-trip?
For easy RTL measurements, you can use this: https://oblique-audio.com/rtl-utility.php
It's just that I'm more focused on soft synths, and I can get a clean signal out of 64-sample buffers. Granted, that's not what I'd use with any realistic processing (for instance, I use Reaper at [email protected]).
While I haven't measured end-to-end yet, I do hope it stays below 20ms. I'm working on a synth-powered rhythm game, and the whole reason I chose to stick with plain WASAPI was to avoid requiring users to install extra drivers and because of Windows 10's low latency stack, with its advertised 0ms capture and 1.3ms output overhead on top of application and driver buffers.
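For reference, here's a minimal sketch of what I mean by tapping that low-latency path: querying the engine's minimum shared-mode period through IAudioClient3 and opening the stream with it. This is just my reading of the API (device/format handling and error checks trimmed), not FarPlay's or my game's actual code:

    // Minimal sketch: ask the Windows 10 audio engine for its smallest
    // shared-mode period via IAudioClient3, then open the stream with it.
    // Error handling and cleanup omitted for brevity.
    #include <mmdeviceapi.h>
    #include <audioclient.h>
    #include <cstdio>

    int main() {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        IMMDeviceEnumerator *enumr = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void **)&enumr);

        IMMDevice *device = nullptr;
        enumr->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        IAudioClient3 *client = nullptr;
        device->Activate(__uuidof(IAudioClient3), CLSCTX_ALL, nullptr,
                         (void **)&client);

        WAVEFORMATEX *mix = nullptr;
        client->GetMixFormat(&mix);

        // The engine reports default/fundamental/min/max periods in frames.
        UINT32 def, fund, min, max;
        client->GetSharedModeEnginePeriod(mix, &def, &fund, &min, &max);
        std::printf("min shared period: %u frames @ %lu Hz (~%.2f ms)\n",
                    min, mix->nSamplesPerSec,
                    1000.0 * min / mix->nSamplesPerSec);

        // Open the shared-mode stream at the minimum period.
        client->InitializeSharedAudioStream(0, min, mix, nullptr);
        // ... event registration and render loop would go here ...
        return 0;
    }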
Update: I ran RTL on the budget 2012 desktop and got worse results than I expected at [email protected] for shared-mode WASAPI. For some reason, I couldn't select smaller buffers. On the same hardware, exclusive-mode WASAPI managed [email protected], and ASIO4ALL managed [email protected] and [email protected].
Since I had a bunch of software open and the system had been online forever, I restarted and tested again; turns out that squeezed off a couple of ms, down to ~16ms in shared mode.
By the way, I believe the low latency stack is enabled system-wide, so Reaper should already be using it.
For what it's worth, in my game I've been piping a SunVox instance into a 128-sample shared-mode WASAPI stream and haven't encountered distorted audio yet.
At the end of the day, I guess I would indeed default to recommending WASAPI unless one has hardware with great ASIO drivers.
Note: 192 kHz is 5.2 microseconds/sample, 48 kHz is 20.8 microseconds/sample. The 15 cm distance between the ears takes around 100 microseconds to traverse (at the higher speed-of-sound in the head, vs. free-air).
The 1 m of air between two people in a close-by 1:1 conversation takes a full 3 milliseconds to traverse.
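If anyone wants to double-check the arithmetic, here's a quick sanity check (my assumed constants: ~1500 m/s for sound in tissue, ~343 m/s in air):

    // Sanity check of the latency arithmetic above.
    #include <cstdio>

    int main() {
        std::printf("192 kHz: %.1f us/sample\n", 1e6 / 192000.0);       // 5.2
        std::printf(" 48 kHz: %.1f us/sample\n", 1e6 / 48000.0);        // 20.8
        std::printf("0.15 m through the head: %.0f us\n",
                    0.15 / 1500.0 * 1e6);                               // ~100
        std::printf("1 m of air: %.1f ms\n", 1.0 / 343.0 * 1e3);        // ~2.9
    }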
There is a non-profit with hard-realtime applications (including CNC) that runs a few racks of systems with latency monitoring.
For example, the blue rack3slot0 line is a histogram for an almost-standard distribution kernel on an IvyBridge Xeon-E3, running a thread with timer interrupts every 200 microseconds for about 5.5 hours (100 million cycles, specifically) and recording the latency of each interrupt.
As one can see, there were only about 20 samples at or above 20 microseconds of delay, and even those were just barely over.
With remotely decent under/overrun hiding, 10 microsecond latency should be easily usable.
And yes, those systems had background load at normal priority and this realtime thread at high priority:
> Between 7 a.m. and 1 p.m. and between 7 p.m. and 1 a.m., a simulated application scenario is running using cyclictest at priority 99 with a cycle interval of 200 µs and a user program at normal priority that creates burst loads of memory, filesystem and network accesses. The particular cyclictest command is specified in every system's profile referenced above and on the next page. The load generator results in an average CPU load of 0.2 and a network bandwidth of about 8 Mb/s per system.
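For a sense of what such a measurement looks like, here's a minimal cyclictest-style probe: a SCHED_FIFO thread that sleeps on an absolute 200 µs timer and records how late each wakeup is. This is a sketch mirroring the quoted parameters, not the lab's actual harness:

    // Minimal cyclictest-style wakeup-latency probe (Linux; needs rtprio
    // privileges, ideally an RT_PREEMPT kernel). Mirrors the quoted setup:
    // priority 99, 200 us cycle interval. A real run would build a histogram.
    #include <cstdio>
    #include <cstdint>
    #include <ctime>
    #include <pthread.h>
    #include <sched.h>
    #include <sys/mman.h>

    int main() {
        mlockall(MCL_CURRENT | MCL_FUTURE);          // avoid page-fault latency
        sched_param sp{};
        sp.sched_priority = 99;
        pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);

        timespec next{};
        clock_gettime(CLOCK_MONOTONIC, &next);
        const long period_ns = 200000;               // 200 us cycle interval
        int64_t worst_ns = 0;

        for (long i = 0; i < 1000000; ++i) {         // the real runs do 100 M
            next.tv_nsec += period_ns;
            if (next.tv_nsec >= 1000000000L) { next.tv_nsec -= 1000000000L; ++next.tv_sec; }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
            timespec now{};
            clock_gettime(CLOCK_MONOTONIC, &now);
            int64_t late = (now.tv_sec - next.tv_sec) * 1000000000LL
                         + (now.tv_nsec - next.tv_nsec);
            if (late > worst_ns) worst_ns = late;    // how late was the wakeup?
        }
        std::printf("worst wakeup latency: %lld ns\n", (long long)worst_ns);
    }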
That figure is thrown around a lot and is definitely grounded in some solid research ... just ... lower latency numbers (in jackd) "feel" better when playing guitar. There is a lot of subjectivity in the guitar playing world and I am definitely not immune to that.
So ... 2.8 ms round-trip time in jackd, plus 1-2 ms each way for AD/DA conversion, plus the 3 ms the sound takes to travel from the speaker to my ear (plus whatever latency the brain needs to process the sound). 2.8 + 2×(1 to 2) + 3 already gets us very close to 10 ms.
No idea what I am getting at here, but I am on my third generation of modelling amps (cheap M-Audio BlackBox, POD X3 Live, now Guitarix) and while I never really had an issue with the latency ... I feel like I probably would if I went back to a previous generation.
Also, replacing the speaker with headphones frees up enough latency budget to spend on light-speed delay over a 100+ km distance, if the audio stack is optimized for deep-sub-millisecond delays using RT_PREEMPT.
Yes, this precludes USB2, but modern computers have quite decent on-board audio codecs (aka A/D + D/A engines) that end up connected to the southbridge and are accessed via PCIe. That has sub-microsecond latency between the digital side of the A/D + D/A converters and the CPU cache.
I guess I mostly just wanted to say "RT_PREEMPT reduces jitter enough to allow sub-millisecond AD->jackd(mixer only)->DA without much effort using modern onboard audio", and to show how truly little that is in terms of sound-wave path length.
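To make that concrete, here's roughly what the "mixer only" middle of that chain can look like: a bare JACK pass-through client whose process callback just copies input to output. Client and port names are made up for the example:

    // Bare JACK pass-through client: the AD -> jackd -> DA path with nothing
    // in the middle. At, say, 16 frames @ 192 kHz the period buffer itself
    // adds only ~83 us per direction.
    #include <jack/jack.h>
    #include <cstdio>
    #include <cstring>

    static jack_port_t *in_port, *out_port;

    // Called by jackd once per period on its realtime thread.
    static int process(jack_nframes_t nframes, void *) {
        auto *in  = (jack_default_audio_sample_t *)jack_port_get_buffer(in_port, nframes);
        auto *out = (jack_default_audio_sample_t *)jack_port_get_buffer(out_port, nframes);
        std::memcpy(out, in, sizeof(jack_default_audio_sample_t) * nframes);
        return 0;
    }

    int main() {
        jack_client_t *client = jack_client_open("passthru", JackNullOption, nullptr);
        if (!client) { std::fprintf(stderr, "is jackd running?\n"); return 1; }
        in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsInput, 0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_set_process_callback(client, process, nullptr);
        jack_activate(client);
        std::printf("running at %u frames @ %u Hz; press Enter to quit\n",
                    jack_get_buffer_size(client), jack_get_sample_rate(client));
        std::getchar();
        jack_client_close(client);
    }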
That makes perfect sense! I am aware of the issue ... but apparently I never put two and two together there :-)
USB Audio 2.0 on USB 3.x is actually quite decent. But honestly I have no clue what actually changed there over USB Audio 1.0.
Oh, I know USB-attached SCSI (the good USB3 storage protocol) is nice latency-wise because it actually exploits the dedicated TX/RX lanes: it just shoves command packets toward the drive and receives response packets when the drive has them ready.
However, USB still has comparatively severe driver overhead due to the MMIO-level protocols, to a similar (but IIRC worse) extent as AHCI (with NVMe being the better replacement).
Recently I had a cool art project where we ran 48-channel ambisonic sound spatialization plus live video effects, all from a single Dell laptop sending audio over AES67 (so Ethernet) and video to three 1080p outputs. Linux is incredible with the right hardware!
(shameless self-promotion: this was with https://ossia.io score :-))
Now I am jealous! I toyed around during my time at university, but never really got further than my crappy 4.0 setup at home :-)
Maybe I thought it would be more like Facade. Stuff like "Ask galatea her favorite color" and "tell galatea her dress is pretty" aren't working. It's moving really slow (minutes between attempts) as I try to guess what keywords I can use.
I can't even "examine room" like I usually do for conversation. "You can't see any such thing".
This is an exercise in NPC interactivity. There's no puzzle and no set solution, but a number of options with a number of different outcomes.
HINTS: Ask or tell her about things that you can see, that she mentions, or that you think of yourself. Interact with her physically. Pause to see if she does anything herself. Repeat actions. The order in which you do things is critical: the character's mood and the prior state of the conversation will determine how she reacts.
VERBS: Many standard verbs have been disabled. All the sensory ones (LOOK, LISTEN, SMELL, TOUCH, TASTE) remain, as do the NPC interaction verbs ASK, TELL, HELLO, GOODBYE, and SORRY; KISS, HUG, and ATTACK. You may also find useful THINK and its companion THINK ABOUT, which will remind you of the state of conversation on a given topic. The verb RECAP gives a summary list of topics that you've discussed so far; if she's told you that she's said all she knows on that topic, it appears in italics.
SHORTCUT: 'Ask her about' and 'tell her about' may be abbreviated to A and T. So >A CHEESE is the same as >ASK GALATEA ABOUT CHEESE.
There is an assortment of walkthroughs available at http://emshort.home.mindspring.com/cheats.htm, but I suggest not looking at them until you have already experimented somewhat.
For some reason I assumed she would know I was talking to her, but she then said "you might try talking to me".
With CPUs, it's less intuitive: the people with DRM-hamstrung CPUs are supposed to be getting a cheaper price, subsidized by the people buying the product at full price. The cheaper models would otherwise have to be sold at the full price.
That's not to say that Intel's prices are necessarily fair at any tier...
A popular vesting schedule is 25/25/25/25, which means you would be able to sell 25% one year after getting the shares, another 25% after the second year and so on. Typically they would keep giving you more shares as your shares vest so that you always have some shares that you can't sell yet.
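A toy illustration of how the refreshers stack (hypothetical numbers: a fresh 100-share grant at the start of every year, each on a 25/25/25/25 schedule):

    // Overlapping 25/25/25/25 grants: once refreshers stack up, some shares
    // are always still locked. Numbers are made up for illustration.
    #include <algorithm>
    #include <cstdio>

    int main() {
        const int kGrant = 100;                        // shares per annual grant
        for (int year = 1; year <= 8; ++year) {
            int sellable = 0, locked = 0;
            for (int g = 1; g <= year; ++g) {          // grant from year g
                int years_held = year - g + 1;
                int v = std::min(years_held, 4) * kGrant / 4;  // 25% per year
                sellable += v;
                locked += kGrant - v;
            }
            std::printf("end of year %d: %3d sellable, %3d still locked\n",
                        year, sellable, locked);
        }
    }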
There are a bunch of edge cases around self-defense and how to apprehend people, and I don't have the time to get into those here. Most people know what I mean by "Abolish the death penalty".
A few hypotheses come to mind:
1. Walking on your toes vs. on your heels
2. What happens if you slip
If you're facing into the stairs, your toes are gripping the next step. Maybe the extra degree of freedom from your ankle makes it easier to get a steady foothold. Whereas if you're facing away, on narrow steps, your toe may be off the step. If you try to balance on level ground on the balls of your feet vs. on your heels, you can tell heels are harder.
If you're going up and you slip, your moving foot just slides onto the same step as your static foot. If you're going down and you slip, your feet are now separated by 2 steps instead of just 1.
All things that move across ground move best if they lean into the direction they're walking. If you're walking down stairs, leaning forward means leaning away from the stairs. Going up, you're just leaning into the stairs as if you're rock climbing. Also it's likely our bodies are just better at leaning forward safely than backwards.
I heard a quote somewhere: "If you go down facing it, it's a ladder. If you go down facing away, it's stairs." Some narrow stairs might be better treated as strange ladders; two of the possible causes would then be mitigated.
This comment from 9 months ago indicates it's only at the app level. Has it changed? https://news.ycombinator.com/item?id=26090873
It's a fight against entropy, same as road repairs.
It's not that I don't want big sweeping reforms, but I believe in gradient descent: all good progress is good progress. Like the UK restricting conversion therapy: I want it gone entirely, but this is still an improvement.