I like Margin Call's quote - "be first, be smarter, or cheat".

I've never used Jax, but it keeps popping up as an interesting library. How does it fit in and compare to pytorch, numba, and numpy?

Should be lower level than pytorch. Not sure about numba, but it should be pretty similar to numpy.
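Roughly, jax.numpy mirrors the numpy API and you get composable transforms (grad, jit, vmap) layered on top, with jit filling a niche similar to numba's. A toy sketch of the flavor (my own made-up example, so take it with a grain of salt):

    import jax
    import jax.numpy as jnp

    def loss(w, x, y):
        pred = jnp.dot(x, w)                 # numpy-looking math; jnp mirrors numpy's API
        return jnp.mean((pred - y) ** 2)

    x = jnp.ones((8, 3))
    y = jnp.zeros(8)
    w = jnp.array([0.1, 0.2, 0.3])

    grad_loss = jax.grad(loss)               # autodiff with respect to the first argument
    fast_loss = jax.jit(loss)                # XLA-compiled, a niche similar to numba's
    print(grad_loss(w, x, y), fast_loss(w, x, y))

The functional take on autodiff (you transform functions rather than call .backward() on tensors) is the main philosophical difference from pytorch.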

I guess Lenovo has a history of not unlocking full PCIe slots. Way back in the day, I found a footprint (solder pads) for a mini PCIe slot on my Thinkpad's motherboard. Well, I did what any reasonable person would do and soldered on a connector. It worked, surprisingly enough.

It worked, as in "the rest of the PCIe lanes were available", or "it still worked as a PCIe slot"?

I tried swapping the Wifi card to that slot and it worked. This was before M.2 SSDs became popular, so I didn't exactly have much use for the slot; it just sat empty afterwards. Or until I bricked the laptop by trying to flash coreboot, IIRC.

I wouldn’t agree on that

You wouldn't agree that GP did what they said, or what do you mean?

I've wanted to add such an indicator to my car's dash (I already added a boat compass, which I find quite useful and aesthetic). Unfortunately, electronic indicators of any kind are much rarer than vacuum-powered ones or all-glass cockpits.

I am currently pondering the idea of building a modern-electronics "replica" of one of these, with 3D-printed sphere halves containing stepper motors, magnetic rotary encoders, and a 6-DOF compass/gyro IMU. If you put an Arduino or ESP32 inside to drive those, you could have simple slip rings that only needed to supply power through the roll and pitch axes.

(Only pondering though, I have had the same idle thoughts about making my own Russian Soyuz mechanical navigation instrument too from this other writeup of Ken's https://www.righto.com/2023/01/inside-globus-ink-mechanical-... but somehow the idea of making replica soviet vintage tech isn't as appealing as it was a few years back...)


That would be a wonderful project. It'd be cool (albeit inefficient) if there were a way to use induction and remove the slip rings altogether.

That's because vacuum power is the traditional way in small aircraft, and the modern replacements are all glass, based on AHRS.

The number of planes without a vacuum system but with electrical mechanical attitude indicators is quite small. Your best bet is the electric mechanical backup instruments used on earlier installations of the all-glass G1000.

Take a look at the Diamond DA40 and DA42 for electrical backup attitude indicators; their later models (the DA50 and DA62), for example, use all-glass backup instruments.


What you need, my friend, is a ring-laser gyro.

Boat compass on the dash is awesome, I might have to borrow that. Any issue with interference from the vehicle itself?

Yeah. My compass (a Ritchie) has two-axis calibration at the bottom; I ended up maxing out one of the axes (so it's still a bit off). Also, it tends to shift by a decent amount when the car is pointing up or down steep hills.

Also doing the same here. I'm hoping that more places will run power-to-performance analyses like the der8auer 4090 review video.

There's also that video of a guy adding a headphone jack in a similar fashion. I was impressed that he found the internal room for it.

I never understood why this was always seen as such a constraint. It’s three (or four) pins! It doesn’t even have to have the “hole” form factor. You could have them all in a straight line and then place a headphone jack AGAINST them. I always thought a magnetic TRRS set of pins along the side of a phone could work. Add a sleeve to the connector’s side too. Heck you could standardize a flattened TRRS or whatever that would still slip readily into a typical jack.

When you already need an adapter, you might as well use the USB-C connector.

That was on the iPhone 7. Its internals really look like the decision to omit the headphone jack was made pretty late in the development cycle, so they replaced it with a useless piece of plastic.


I'd go digging into this - https://pola.rs/posts/polars-on-gpu/


I remember first seeing Factor more than a decade ago (maybe even when it just launched, if memory serves correctly). It's really neat to see it thriving.


LIME (basically a local linear approximation) is one popular technique for doing so. It still has flaws (such as the sampled neighborhood not necessarily lying close to a decision boundary).
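If you want the gist in code: perturb the input around the instance being explained, query the black-box model, weight the samples by proximity, and fit a weighted linear surrogate whose coefficients serve as the explanation. A rough toy version of that idea (not the actual lime package API; the black-box model and names here are made up):

    import numpy as np
    from sklearn.linear_model import Ridge

    def black_box(X):
        # Stand-in for the model being explained (entirely made up).
        return np.sin(X[:, 0]) + X[:, 1] ** 2

    def local_linear_explanation(x0, n_samples=500, kernel_width=0.75):
        # 1. Perturb around the instance we want explained.
        X = x0 + np.random.normal(scale=0.5, size=(n_samples, x0.size))
        y = black_box(X)
        # 2. Weight samples by proximity to x0 (an exponential kernel, roughly as LIME does).
        dist = np.linalg.norm(X - x0, axis=1)
        weights = np.exp(-(dist ** 2) / kernel_width ** 2)
        # 3. Fit a weighted linear surrogate; its coefficients are the "explanation".
        surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
        return surrogate.coef_

    print(local_linear_explanation(np.array([0.3, -1.2])))

The explanation only holds in that little neighborhood, which is exactly the limitation discussed below.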


LIME and other post-hoc explanatory techniques (deepshap, etc.) only give an explanation for a singular inference, but aren't helpful for the model as a whole. In other words, you can make a reasonable guess as to why a specific prediction was made but you have no idea how the model will behave in the general case, even on similar inputs.


The purpose of post-prediction explanations would be to increase a practitioner's confidence in using said inference.

It's the disconnect between finding a real-life "AI" and trying to find something that works and that you can place a form of trust in.


Is there a study of "smooth"/"stable" "AI" algorithms, i.e. ones where, if you feed them input that is "close", then the output is also "close"? (Smooth as in smoothly differentiable; stable as in stable sort.)


Some embedding models are explicitly trained on cosine similarity. Otherwise, if you have a 512D vector, discarding magnitude is like discarding just a single dimension (i.e. you get 511 independent dimensions).


This is not quite right; you are actually losing information about each of the dimensions and your mental model of reducing the dimensionality by one is misleading.

Consider [1,0] and [x,x]. Normalised, we get [1,0] and [sqrt(.5), sqrt(.5)]: clearly something has changed, because the first vector is now always larger in dimension zero than the second, even though the second started off with an arbitrary value x that could just as well have been larger than 1. As such, we have lost information about x's magnitude, which we cannot recover from just the normalized vector.


Well, depends. For some models (especially two tower style models that use a dot product), you're definitely right and it makes a huge difference. In my very limited experience with LLM embeddings, it doesn't seem to make a difference.


Interesting, I hadn't heard of two tower models before!

Yes, I guess it’s curious that the information lost doesn’t seem very significant (this also matches my experience!)


Two tower models (and various variants thereof) are popular for early stages of recommendation system pipelines and search engine pipelines.


That's exactly the point, no? We lost one dim (magnitude). Not so nice in 2D, but no biggie in 512D.


Magnitude is not a dimension; it's information about each value that is lost when you normalize it. To prove this, normalize any vector and then try to de-normalize it again.


Magnitude is a dimension. Any 2-dimensional vector can be explicitly transformed into the polar (r, theta) coordinate system, where one of the dimensions is magnitude. Any 3-dimensional vector can be transformed into the spherical (r, theta, phi) coordinate system, where one of the dimensions is magnitude. This is high school mathematics. (Okay, I concede that maybe the spherical coordinate system isn't exactly high school material; then just think about longitude, latitude, and distance from the center.)
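Concretely, a throwaway numpy sketch of the 2D case:

    import numpy as np

    v = np.array([3.0, 4.0])

    # Cartesian -> polar: one coordinate is the magnitude, the other the angle.
    r = np.linalg.norm(v)                    # 5.0
    theta = np.arctan2(v[1], v[0])

    # Normalizing pins r to 1 and keeps only theta.
    unit = v / r                             # [0.6, 0.8]
    print(np.allclose(unit, [np.cos(theta), np.sin(theta)]))  # True: the direction survives
    # ...but r is gone: nothing in `unit` says whether v was [3, 4] or [30, 40].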


Impossible because... you lost a dimension.


That’s not mathematically accurate though, is it? We haven’t reduced the dimension of the vector by one.

Pray tell, which dimension do we lose when we normalize, say a 2D vector?


Mathematically, it's fine to say that you've lost the magnitude dimension.

Before normalization, the vector lies in R^n, which is an n-dimensional manifold.

After normalization, the vector lies in the unit sphere in R^n, which is an (n-1)-dimensional manifold.


Magnitude, obviously.

>>> Magnitude is not a dimension [...] To prove this normalize any vector and then try to de-normalize it again.

Say you have the vector (18, -5) in a normal Euclidean x, y plane.

Now project that vector onto the y-axis.

Now try to un-project it again.

What do you think you just proved?


A circle's circumference is a line; is it 1D?


You don't lose anything when you normalize things. Not sure what you are talking about.


There's something wrong with the picture here, but I can't put my finger on it because my mathematical background is too rusty. The space of all normalized k-dimensional vectors isn't a vector space itself. It's well-behaved in many ways, but you lose the zero vector (which may not be relevant). Addition isn't defined anymore, and if you try to keep it closed by normalizing after addition, distributivity becomes weird. I have no idea what this transformation means for word2vec and friends.

But the intuitive notion is that if you take all of 3D and flatten/expand it to be just the surface of a 3D sphere, then paste yourself onto it Flatland-style, it's not the same as if you were to Flatland yourself into the 2D plane. The obvious thing is that a triangle's angles won't sum to 180, but also parallel lines will intersect, and all sorts of other strange things will happen.

I mean, it might still work in practice, but it's obviously different from some method of dimensionality reduction because you're changing the curvature of the space.
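To make the closure/algebra point above concrete, a quick numpy sketch:

    import numpy as np

    u = np.array([1.0, 0.0])
    v = np.array([0.0, 1.0])

    s = u + v
    print(np.linalg.norm(s))                 # ~1.414: the sum has left the unit circle

    # Re-normalizing after every addition restores "closure" but breaks the usual
    # algebra: normalize(a + b) is generally not normalize(a) + normalize(b).
    normalize = lambda a: a / np.linalg.norm(a)
    print(normalize(s), normalize(u) + normalize(v))  # [0.707, 0.707] vs [1.0, 1.0]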


The space of all normalized k-dimensional vectors is just the unit sphere in R^k. You can deal with it directly, or you can use the standard stereographic projection to map every point (except for one) onto a plane.

> triangles won't sum to 180

Exactly. Spherical triangles have the sum of their interior angles exceed 180 degrees.

> parallel lines will intersect

Yes because parallel "lines" are really great circles on the sphere.


So is it actually the case that normalizing down and then mapping to the k-1 plane yields a useful (for this purpose) k-1 space? Something feels wrong about the whole thing but I must just have broken intuition.


I do not understand the purpose that you are referring to in this comment or the earlier comment. But it is useful for some purposes.

