Hacker News new | past | comments | ask | show | jobs | submit | jagger27's comments login

“Winding road” is a nice way to disambiguate from gusty.

The beauty of WebAssembly is that you don't need Google's permission to add support: ship your Wasm blob to the browser and Chrome's existing Wasm runtime will run it.

My knowledge is years out of date, but does Wasm still require the application to request its maximum memory footprint up-front? Granted, that's what Sun/Oracle's JVM has done to allocate its heap from the OS for well over a decade, but I also don't know whether Wasm can use the equivalent of madvise() to tell the browser/OS that it's fine to unmap a region of memory and map it back zeroed out when it's next needed.

Yep, you need to specify the maximum memory amount up-front. It's defined in "WebAssembly memory pages"; each page is 64 KiB. You specify an initial and a maximum amount, and the module can call memory.grow() to grow the memory page by page until it reaches the maximum. You can't "un-grow" or decrease the amount of allocated memory, though.

This is not correct: it is not necessary to specify a maximum memory size. See the WebAssembly specification https://webassembly.github.io/spec/core/syntax/modules.html#.... Due to the 32-bit address space, memory is limited to 4 GiB, however.

(In asm.js, memory was provided by an ArrayBuffer of fixed size, so there memory could truly not grow at runtime.)
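The semantics described above are visible directly from JavaScript. A minimal sketch (the page size and the optional `maximum` are per the spec; the specific numbers are just illustrative):

```javascript
// A Wasm memory page is 64 KiB. `maximum` is optional; without it,
// memory can keep growing up to the 32-bit limit of 4 GiB.
const mem = new WebAssembly.Memory({ initial: 1, maximum: 4 });
console.log(mem.buffer.byteLength); // 65536 (one page)

mem.grow(1); // grow by one page; returns the previous size in pages
console.log(mem.buffer.byteLength); // 131072

// Growing past `maximum` throws a RangeError, and there is no
// corresponding call to shrink ("un-grow") the memory afterwards.
```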


You'd need a pretty solid reason to want users to download the Python engine to run your code in their browser every time they visit an updated version of your site though. "I like writing Python better than JS" would be a sucky excuse.

If anyone does choose to do this, I hope they spend a significant amount of effort making their caching and code splitting optimal.


Given that there were people paying ActiveState for their Python ActiveX, I assume there are enough people that care enough for such use cases.

Many devs already make me download their SPA to display static text and images.


Say what you will about Jupyter notebooks and all that, but the talent pool for Python is still at a higher level. Then again, this could be the worse-is-better equilibrium, but there's also a market-for-lemons situation in web dev these days.

https://en.wikipedia.org/wiki/The_Market_for_Lemons


My impression was that top dollar was being paid for web devs, with competition from some of the biggest tech giants driving the trend. Not sure that's a market for lemons, unless you're talking about the lower end of web dev.

What's the caching story like? Is it possible to cache the Python interpreter in one blob (unchanging between apps) and then send another blob with your app-specific code? I'm imagining a world where lots of apps want to use Wasm Python but don't want to ship the whole interpreter with their page.

Cross-origin resource caching has effectively been disabled in all modern browsers by now: the HTTP cache is partitioned by top-level site. So your own pages could reuse the cached Python interpreter again and again, but you and example.com would each have to download it separately, even if it comes from the same URL.

With normal Wasm blobs that’s more of an issue. It wouldn’t be an issue in this case because you would just pass your Python script in as text.
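Even for plain Wasm blobs, the split the thread describes maps onto the compile/instantiate distinction: the expensive step is compiling the big blob, and one compiled WebAssembly.Module can then be instantiated cheaply per app. A hypothetical sketch, where a tiny hand-assembled module exporting add() stands in for a real interpreter build:

```javascript
// The byte array is a minimal valid Wasm binary that exports
// add(i32, i32) -> i32 -- a stand-in for the "interpreter" blob.
const interpreterBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                                // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body
]);

// Compile once -- this is the part worth downloading and caching once...
const mod = new WebAssembly.Module(interpreterBytes);

// ...then instantiate cheaply, once per page or per app.
const appA = new WebAssembly.Instance(mod);
const appB = new WebAssembly.Instance(mod);
console.log(appA.exports.add(2, 3));  // 5
console.log(appB.exports.add(40, 2)); // 42
```

In the Python case the per-app payload is even lighter: just the script text handed to an already-instantiated interpreter.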

Mazda has been doing the reasonable thing in this area for a while*. Recent models don't have touch screens and require you to use the (funnily enough, BMW iDrive-style) control knob for the radio and CarPlay. Even their most recent model, the CX-50, has a full array of physical knobs and controls.[1]

* With the sad exception of their first EV, the MX-30, which puts climate controls on a touch screen[2]. The upper display is still controlled by the knob, though. Weird choice.

1: https://smartcdn.prod.postmedia.digital/driving/wp-content/u...

2: https://www.netcarshow.com/mazda/2021-mx-30/1600x1200/wallpa...


> I take the even more draconian approach of wrapping to 72 columns when writing comments, after the fashion of PEP 8.[3]

Which also happens to be how Linus wrapped the lines of this particular email.


Plaintext email is wrapped to 72, that's in the RFC - Linus didn't have much choice there.

Sure did:

> Each line of characters MUST be no more than 998 characters, and SHOULD be no more than 78 characters, excluding the CRLF.

https://datatracker.ietf.org/doc/html/rfc5322#section-2.1.1

https://www.arp242.net/email-wrapping.html
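The greedy fill behind that 72-column convention is simple enough to sketch. This is illustrative only; real mail clients and editors also handle quoting, format=flowed, hyphenation, and so on:

```javascript
// Toy greedy wrapper: break on whitespace so no output line exceeds
// `width` columns (72 here, after PEP 8 / classic mail style). A word
// longer than the width ends up on a line of its own.
function wrap(text, width = 72) {
  const lines = [];
  let line = "";
  for (const word of text.split(/\s+/).filter(Boolean)) {
    if (line && line.length + 1 + word.length > width) {
      lines.push(line);
      line = word;
    } else {
      line = line ? line + " " + word : word;
    }
  }
  if (line) lines.push(line);
  return lines.join("\n");
}

const wrapped = wrap("lorem ".repeat(30));
console.log(wrapped.split("\n").every((l) => l.length <= 72)); // true
```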


People clearly don’t die immediately after infection. Consider that most people dying in North America were being kept alive on a ventilator.

Not sure why a two month lag is so hard to believe.


Those people would have been classified as hospitalized within two months, though.

This is why I'm excited about Intel getting into the market.

This is complete FUD.

This really doesn't seem true. Maybe Qualcomm, but not AMD.

When Discord first launched and my usual TeamSpeak friends moved over there I was super annoyed by the extra latency. How a group of fairly serious gamers who otherwise complained about less lag in any other circumstance shrugged it off, I'm not sure.

More recently, I was surprised by how low the latency was between two Asterisk servers in the same city on different ISPs. It was a very ad-hoc setup with a cheap EOL Cisco IP phone on either end, each connected to an Asterisk SIP server on the local network. I'm so used to laggy voice chat that it caught me off guard how nice it was once I actually got it to work, despite being a total nightmare to configure.

I struggle enough in person to find the right time to talk without interrupting, and >100ms of Discord latency makes it that much worse for me. I hope something peer-to-peer like this catches on for remote teams. I could really benefit from it. I don't think I would mind if a screen share lagged behind the speaker's voice. I'm sure Zoom already does that though.


You should be getting considerably less latency than that. Is the Discord server perhaps set to the wrong region?

The server I normally talk on is on US East, which is as close as I can get in Canada.

To be clear, I don't have hard numbers on the actual voice-to-ear lag I'm experiencing, and of course it depends on about a dozen different factors. I just know that it could feel better.


I use it as a bare-metal OS on my blog server and it's great in that scenario too. I've personally found the libc situation (musl instead of glibc) makes it really inconvenient to use as a desktop OS, unfortunately. You have to jump through hoops to get something like VS Code to "just work".

Software like VS Code is in direct conflict with the way Alpine does things. Alpine on the desktop is certainly not for everyone, but for some use cases it works great.
