ReactiveJelly's comments


Yep. Although with the right language, even on cheap hardware, that limit might be 1,000 or so.

1000? Pfft. Just holding open a connection and sending on average a few bytes a second hardly costs anything, and the memory requirements on e.g. Linux are minimal. You can easily do 100k or more with Python and a few hundred megs of memory. Millions are doable with something a little less memory-hungry, or by throwing more memory at it.
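For anyone who wants to see it concretely, here is a minimal asyncio sketch of the "hold lots of mostly-idle connections, send a few bytes a second" pattern (port, payload, and interval are arbitrary placeholders):

    # Minimal sketch: hold many mostly-idle TCP connections with asyncio.
    # Each client costs one coroutine plus socket buffers, not an OS thread.
    import asyncio

    async def handle(reader: asyncio.StreamReader,
                     writer: asyncio.StreamWriter) -> None:
        try:
            while True:
                writer.write(b"tick\n")   # a few bytes...
                await writer.drain()
                await asyncio.sleep(1.0)  # ...roughly once per second
        except (ConnectionResetError, BrokenPipeError):
            pass
        finally:
            writer.close()

    async def main() -> None:
        server = await asyncio.start_server(handle, "0.0.0.0", 9000)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())

The practical ceiling is file descriptors and buffer memory, not CPU; you'd have to raise the default ulimit -n (often 1024) well before getting anywhere near 100k sockets.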

Most programmers these days don't know what computers are capable of.

if you aren't using 14 layers of abstraction you clearly aren't a real programmer /s

In fairness to them, a lot of programmers didn't come up the way we (presumably) did - if you started using computers/programming in the '80s and building computers in the '90s, your worldview is going to be fundamentally different to that of someone who started in 2018.

We came from a world where bytes mattered; they come from a world where gigabytes matter.

In some ways caring about that stuff can be detrimental: at the back of my mind there is always that little niggle - you could do this at 1/10th the runtime/memory cost, but it'll take twice as long to write and you'll be the only one who understands it.

These days we don't optimise for the machine but for human time, and honestly that's an acceptable trade-off in many (but not all) cases.

It can be frustrating to remember how much of an upgrade getting a 286 was over what you had, and to realise that I now routinely throw thousands of those (in equivalent compute) at a problem inefficiently and still get it done in under a second.


> You don't need to install any third-party software

Oh, so this is like a web browser thing?

> Click here to download FarPlay

Oh, so I _do_ need to install FarPlay. Just not any software that's a third party besides FarPlay. Which wouldn't make any sense.


>Just not any software that's a third party besides FarPlay. Which wouldn't make any sense.

Heh. "Note for Windows: To use FarPlay on Windows, you must have an ASIO audio driver. We recommend the free ASIO4ALL."


Coming soon to a Windows automatic update near you, next time you're running a business critical overnight batch job.

Hope you're not planning to play with a backing track; ASIO4ALL notoriously does not play nice with multiple audio sources. It's almost like someone wanted to backport the horrors of ALSA to Windows because they missed how annoying it was having a single pair of inputs and outputs.

>ASIO4ALL notoriously does not play nice with multiple audio sources.

Yep, that's because ASIO4ALL uses WDM-KS, and since Vista WDM-KS doesn't support multiple sources. Actual ASIO drivers made by sound card manufacturers usually don't have this limitation, as long as you keep the same sample rate everywhere. But it can also vary depending on who made the driver and/or on whether the source apps are using ASIO or a mix of ASIO and WDM/WASAPI. Getting low-latency audio to work nicely on Windows can be messy (compared to macOS, at least).


Recommending ASIO first seems more like a holdover from a troubled past to me.

These days, I can get ~1ms latency with shared-mode WASAPI on a 2012 then-budget i5 desktop, with on-board audio...


>I can get ~1ms latency with shared-mode WASAPI

I seriously doubt it. I don't think it's possible for shared WASAPI to go below 20-30ms. How are you measuring it? Input, output or round-trip? For easy RTL measurements, you can use this: https://oblique-audio.com/rtl-utility.php


Oof, I forgot what thread I was in: I just meant the buffer size, not the round-trip latency, nor even the one-way latency to audio output. Not sure why I said "latency", as that's plain wrong, especially when we're talking about capture in this case.

It's just that I'm more focused on soft synths, and I can get a clean signal out of 64-sample buffers. Granted, that's not what I'd use with any realistic processing (for instance, I use Reaper at [email protected]).
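(For reference, buffer duration is just samples divided by sample rate, so a 64-sample buffer at 48 kHz is about 1.3 ms; quick sanity check, sizes illustrative:)

    # Buffer duration = samples / sample_rate.
    def buffer_ms(samples: int, rate_hz: int) -> float:
        return samples / rate_hz * 1000.0

    for n in (64, 128, 256, 512):
        print(f"{n:4d} samples @ 48 kHz = {buffer_ms(n, 48_000):.2f} ms")

Measured round-trip latency stacks several such buffers plus driver and converter overhead, which is why RTL figures come out much larger than the buffer alone.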

While I haven't measured end-to-end yet, I do hope it stays below 20ms. I'm working on a synth-powered rhythm game, and the whole reason I chose to stick with plain WASAPI was to avoid requiring users to install extra drivers and because of Windows 10's low latency stack, with its advertised 0ms capture and 1.3ms output overhead on top of application and driver buffers.

Update: I ran RTL on the budget 2012 desktop and got a worse result than I expected for shared-mode WASAPI: ~18 ms [1]. For some reason, I couldn't select smaller buffers. On the same hardware, exclusive-mode WASAPI managed [email protected], and ASIO4ALL managed [email protected] and [email protected]

[1] https://i.imgur.com/xq9xiNh.png


Thanks for testing. That 18ms result is actually much better than I expected for a shared mode. It got me curious, so I tried it and I was able to replicate it with my Realtek (I got 19ms). I'm still a bit skeptical about its real-world use, because I've experienced some garbled/distorted audio with some low latency modes. I also can't find that mode in Reaper, which usually has everything. Still, it looks promising.

Interesting, that's very close. :)

Since I had a bunch of software open and the system had been online forever, I restarted and tested again; turns out that shaved off a couple of ms, down to ~16ms in shared mode.

By the way, I believe the low latency stack is enabled system-wide, so Reaper should already be using it.

For what it's worth, in my game I've been piping a SunVox instance into a 128-sample shared-mode WASAPI stream and haven't encountered distorted audio yet.

At the end of the day, I guess I would indeed default to recommending WASAPI unless one has hardware with great ASIO drivers.


Wild. Even on beastly computers I'm never able to go below something like 256 samples at 48k with WASAPI

You might want to check out audio processing on Linux with a (soft) real-time kernel. The choice of plugins is limited, but it is reasonable to run a five-man band (including three guitar amp modelers and voice processing) at 2.8 ms (internal) round-trip latency (plus some ms for AD/DA) on a "somewhat beefy, but still just a laptop" laptop.

Actually, the RT_PREEMPT stuff gives you worst-case blips around the 100-300 microsecond mark, and if it's just audio with remotely tolerant handling of buffer under/overrun, you can ignore those and use the more normal latency ceiling around 20-50 microseconds.

Note: 192 kHz is 5.2 microseconds/sample, 48 kHz is 20.8 microseconds/sample. The 15 cm distance between the ears takes around 100 microseconds to traverse (at the higher speed-of-sound in the head, vs. free-air). The 1m distance of air for close-by human 1:1 talking takes a full 3 milliseconds to traverse.
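Those figures all fall out of period = 1/rate and time = distance/speed; a quick check (the air speed is the usual textbook value, and the in-head speed is my assumption of roughly water/tissue):

    # Reproduce the note's numbers: sample periods and acoustic travel times.
    SOUND_AIR = 343.0    # m/s in air at ~20 C
    SOUND_HEAD = 1500.0  # m/s, assumed roughly water/tissue

    for rate in (192_000, 48_000):
        print(f"{rate} Hz -> {1e6 / rate:.1f} microseconds/sample")
    print(f"15 cm through the head: {0.15 / SOUND_HEAD * 1e6:.0f} microseconds")
    print(f"1 m of air: {1.0 / SOUND_AIR * 1e3:.1f} milliseconds")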

There is a non-profit [0] with hard-realtime applications (including CNC) that runs a few racks of systems with latency monitoring. For example, the blue rack3slot0 line [1] is a histogram for an almost-standard distribution kernel on an IvyBridge Xeon-E3, running a thread with timer interrupts every 200 microseconds for about 5.5 hours (100 M times, specifically) and recording the latency of each interrupt. As one can see, there were only about 20 samples at or above 20 microseconds of delay, and even those just barely over. With remotely decent under/overrun hiding, 10 microsecond latency should be easily usable. And yes, those systems had background load at normal priority and this realtime thread at high priority:

> Between 7 a.m. and 1 p.m. and between 7 p.m. and 1 a.m., a simulated application scenario is running using cyclictest at priority 99 with a cycle interval of 200 µs and a user program at normal priority that creates burst loads of memory, filesystem and network accesses. The particular cyclictest command is specified in every system's profile referenced above and on the next page. The load generator results in an average CPU load of 0.2 and a network bandwidth of about 8 Mb/s per system.

[0]: https://www.osadl.org/Realtime-Linux.projects-realtime-linux... [1]: https://www.osadl.org/Optimization-latency-plot-of-selected-...
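If you want a rough feel for this kind of measurement without setting up rt-tests, here is a crude userspace analog in Python. To be clear, this is not what OSADL runs: cyclictest uses clock_nanosleep from a SCHED_FIFO priority-99 thread, while this sketch just busy-waits on the monotonic clock.

    # Crude analog of the described setup: wake every 200 microseconds
    # and histogram how late each wakeup is.
    import time

    INTERVAL_NS = 200_000  # 200 microsecond cycle, as in the OSADL runs
    CYCLES = 100_000       # far fewer than the 100 M cycles described above

    def run() -> dict:
        hist = {}
        next_wake = time.monotonic_ns() + INTERVAL_NS
        for _ in range(CYCLES):
            while time.monotonic_ns() < next_wake:
                pass  # busy-wait; time.sleep() is far too coarse here
            late_us = (time.monotonic_ns() - next_wake) // 1000
            hist[late_us] = hist.get(late_us, 0) + 1
            next_wake += INTERVAL_NS
        return hist

    if __name__ == "__main__":
        for late, count in sorted(run().items()):
            print(f"{late:4d} microsecond(s) late: {count}")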


Not entirely sure what you are getting at; I am guessing "10 ms latency is good enough for audio"? Plus "normally we are so far away from the speaker that it does not really make a difference"?

That figure is thrown around a lot and is definitely grounded in some solid research ... just ... lower latency numbers (in jackd) "feel" better when playing guitar. There is a lot of subjectivity in the guitar playing world and I am definitely not immune to that.

So ... 2.8 ms round-trip time in jackd, plus 1-2 ms for each AD/DA conversion, plus 3 ms for the sound to travel from the speaker to my ear (plus any latency the brain needs to process the sound). 2.8 + 2*(1-2) + 3 already gets us very close to 10 ms.

No idea what I am getting at here, but I am on my third generation of modelling amps (cheap M-Audio BlackBox, POD X3 Live, now Guitarix) and while I never really had an issue with the latency ... I feel like I probably would if I went back to a previous generation.


There is a threshold below which you won't feel the difference anymore.

Also, by replacing a speaker with headphones, you free up enough latency budget to spend on light-speed delay over 100+ km of distance, if you optimize the audio stack for deep-sub-millisecond delays using RT_PREEMPT. Yes, this precludes USB2, but modern computers have quite decent on-board audio codecs (aka A/D + D/A engines) that end up connected to the southbridge and are accessed via PCIe. That has sub-microsecond latency between the digital side of the A/D + D/A converters and the CPU cache.
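Putting numbers on that budget (the fiber speed is my assumption of roughly 2/3 c):

    # The ~3 ms that 1 m of speaker-to-ear air costs, spent on light instead.
    SOUND_AIR = 343.0  # m/s
    FIBER = 2.0e8      # m/s, light in optical fiber (assumed ~2/3 c)

    budget_s = 1.0 / SOUND_AIR  # ~2.9 ms freed up by switching to headphones
    print(f"budget: {budget_s * 1e3:.1f} ms")
    print(f"fiber reach: {budget_s * FIBER / 1e3:.0f} km")  # well over 100 km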

I guess I mostly just wanted to say "RT_PREEMPT reduces jitter enough to allow sub-millisecond AD -> jackd (mixer only) -> DA without much effort using modern onboard audio", and to show that that is truly little in terms of sound-wave path length.


> replacing a speaker with headphones

That makes perfect sense! I am aware of the issue ... but apparently I never put two and two together there :-)

USB Audio 2.0 on USB 3.x is actually quite decent. But honestly I have no clue what actually changed there over USB Audio 1.0.


"the issue" refers to the audio latency of speakers, I assume? If not, please elaborate; my understanding of the matter in practical/human terms is fairly fuzzy due to most of this being very domain-specific knowledge I haven't been in the right places for.

Oh, I know USB-attached-SCSI (the good USB3 storage protocol) has nice latency, thanks to actually exploiting the dedicated TX/RX lanes. It just shoves command packets towards the drive, and receives response packets when the drive has them ready.

However, USB still has comparatively severe driver overhead due to the MMIO-level protocols, to a similar (but IIRC worse) extent as AHCI (with NVMe being the better replacement).


My main platform is Linux :) on it I run an RME Multiface II; with it I can go down to 32 samples of latency and still do some useful stuff, without even using an RT kernel (I can only go down to 64 on Windows w/ ASIO).

Recently I had a cool art project where we ran 48-channel ambisonic sound spatialization + live video effects, all from a single Dell laptop sending audio through AES67 (so Ethernet) and video to three 1080p outputs. Linux is incredible with the right hardware!

(shameless self-promotion: this was with https://ossia.io score :-))


> 48-channel ambisonic sound spatialization

Now I am jealous! I toyed around during my time at university, but never really got further than my crappy 4.0 setup at home :-)


no 4th party

I'm trying it and it's _really_ hard to get any commands to work.

Maybe I thought it would be more like Façade. Stuff like "ask Galatea her favorite color" and "tell Galatea her dress is pretty" isn't working. It's moving really slowly (minutes between attempts) as I try to guess which keywords I can use.

I can't even "examine room" like I usually do for conversation. "You can't see any such thing".


Did you try typing "help", which gives you the following rather verbose block of text:

----

This is an exercise in NPC interactivity. There's no puzzle and no set solution, but a number of options with a number of different outcomes.

HINTS: Ask or tell her about things that you can see, that she mentions, or that you think of yourself. Interact with her physically. Pause to see if she does anything herself. Repeat actions. The order in which you do things is critical: the character's mood and the prior state of the conversation will determine how she reacts.

VERBS: Many standard verbs have been disabled. All the sensory ones (LOOK, LISTEN, SMELL, TOUCH, TASTE) remain, as do the NPC interaction verbs ASK, TELL, HELLO, GOODBYE, and SORRY; KISS, HUG, and ATTACK. You may also find useful THINK and its companion THINK ABOUT, which will remind you of the state of conversation on a given topic. The verb RECAP gives a summary list of topics that you've discussed so far; if she's told you that she's said all she knows on that topic, it appears in italics.

SHORTCUT: 'Ask her about' and 'tell her about' may be abbreviated to A and T. So >A CHEESE is the same as >ASK GALATEA ABOUT CHEESE.

There is an assortment of walkthroughs available at http://emshort.home.mindspring.com/cheats.htm, but I suggest not looking at them until you have already experimented somewhat.


Can I use "Say" if I'm in the same room ?

For some reason I assumed she would know I was talking to her, but she then said "you might try talking to me".


I also had a hard time moving it forward. Mostly asking about single words picked from the text worked best, but as a result it feels a bit like choose your own adventure but with hidden choices. And I'd try to ask about things that seemed important and get "You can't form your question into words" only to try with different words later and finally get some information. (But I always have this problem with IF... maybe I'm not patient enough, or have to learn the typical IF vocabulary, or maybe I just word things in funny ways, I don't know.)

I think most of the commands are of the form “talk about x”, “tell about x”, and “ask about x”. There are also “Galatea, come here”-type commands. It definitely takes some experimentation to figure the parser out, unfortunately.

At least surge pricing makes sense - when there are more fares than drivers, the fares have to pay more to be picked up first.

With CPUs, it's less intuitive - the people with DRM-hamstrung CPUs are supposed to be getting a cheaper price, subsidized by the people buying the product at full price. The cheaper models would otherwise have to be sold at the full price.

That's not to say that Intel's prices are necessarily fair at any tier...


Is holding the shares a condition of working there? Cause I'd rather sell them and diversify. I don't want the risk of the company going under to cost me my salary _and_ my investments.


That's essentially what a vesting period is.

A popular vesting schedule is 25/25/25/25, which means you would be able to sell 25% one year after getting the shares, another 25% after the second year and so on. Typically they would keep giving you more shares as your shares vest so that you always have some shares that you can't sell yet.
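As a toy model of that schedule (annual cliffs only; real plans vary, e.g. monthly vesting after a one-year cliff):

    # 25/25/25/25 with annual cliffs: a quarter of each grant vests
    # per anniversary, so overlapping yearly grants keep some shares locked.
    def vested_fraction(months_since_grant: int) -> float:
        return min(months_since_grant // 12, 4) * 0.25

    for months in (11, 12, 24, 36, 48):
        print(f"month {months:2d}: {vested_fraction(months):.0%} vested")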


I think there's a good balance with the current vesting schedules. You vest the shares at some point in the future (giving you current incentive to align your actions behind the goals of the company). Then, when they vest, you can sell them to diversify (sometimes with a short waiting period if there's a trading window blackout or something).


That's why I'm pre-committing against the death penalty. Even murderers shouldn't be killed by the state. [1] I invite anyone on the right to join me in this stance - it puts a cap on how bad the consequences of our political positions can get.

[1] There's a bunch of edge cases for self-defense and how to apprehend people. I don't have the time to get into these. Most people know what I mean by "Abolish the death penalty".


I don't think it's because of being tired.

A few hypotheses come to mind:

1. Walking on your toes vs. on your heels

2. What happens if you slip

3. Lean

If you're facing into the stairs, your toes are gripping the next step. Maybe the extra degree of freedom from your ankle makes it easier to get a steady foothold. Whereas if you're facing away, on narrow steps, your toe may be off the step. If you try to balance on level ground on the balls of your feet vs. on your heels, you can tell heels are harder.

If you're going up and you slip, your moving foot just slides onto the same step as your static foot. If you're going down and you slip, your feet are now separated by 2 steps instead of just 1.

All things that move across ground move best if they lean into the direction they're walking. If you're walking down stairs, leaning forward means leaning away from the stairs. Going up, you're just leaning into the stairs as if you're rock climbing. Also it's likely our bodies are just better at leaning forward safely than backwards.

I heard a quote somewhere that "If you go down facing it, it's a ladder. If you go down facing away, it's stairs." Some narrow stairs might be better treated as just strange ladders. 2 of the possible causes would be mitigated.


I think I/O isolation will be part of a solution. I'm interested to see how Deno handles that.


Does Deno allow you to scope the I/O permission at a dependency level?

This comment from 9 months ago indicates it's only at the app level. Has it changed? https://news.ycombinator.com/item?id=26090873


agreed. "The struggle for justice is an ongoing and necessary pursuit that should prevail over laws and institutions."

its a fight against entropy, same as road repairs.

its not that I don't want big sweeping reforms, but I believe in gradient descent. all good progress is good progress. like the UK restricting conversion therapy. I want it gone, but this is still an improvement.

