Not possible since Claude is effectively GPT5 level in most tasks (EDIT: coding is not one of them). OpenAI lost the lead months ago. Altman talking about AGI (it may take decades or years, nobody knows) is just the usual crazy Musk-style CEO thing that is totally safe to ignore. What is interesting is the incredibly steady progress of LLMs so far.

> Claude is effectively GPT5 level

Which model? Sonnet 3.5? I subscribed to Claude for a while to test Sonnet/Opus, but never got them to work as well as GPT-4o or o1-preview. Mostly tried it out for coding help (Rust and Python mainly).

Definitely didn't see any "leap" compared to what OpenAI/ChatGPT offers today.


Both, depending on the use case. Unfortunately Claude is better than ChatGPT in almost every regard but coding, so far. So you would not notice improvements if you test it only on code. Where it shines is understanding complex things and ideas in long text, and the context window is AFAIK 2x that of ChatGPT.

Tried it for other things too, but then they just seem the same (to me). Maybe I'll give it another try, if it has improved since last time (2-3 months maybe?). Thanks!

Writing assembly programs for DOS was incredibly easy: BIOS calls via interrupts already let you do a lot. The video memory was at a fixed address, and the BIOS let you enter video modes without poking the video card directly. High level and low level mixed together.

Ok, so the post author is an AI skeptic and this is his retaliation, likely because his work is affected. I believe governments should address the problem with welfare, but being against technical advances means always being on the wrong side of history.


This is a tech site, where >50% of us are programmers who have achieved greater productivity thanks to LLM advances.

And yet we're filled to the gills with Luddite sentiments and AI content fearmongering.

Imagine the hysteria and the skull-vibrating noise of the non-HN rabble when they come to understand where all of this is going. They're going to do their darndest to stop us from achieving post-economy.


I think programmers are in the perfect profession to call LLMs out for just how bad they are. They are fancy auto-complete and I love them in my daily usage, but a big part of that is because I can tell when they are ridiculously wrong. Which is so often you really have to question how useful they would be for anything where they aren’t just fancy auto-complete.

Which isn’t AI's fault. I’m sure they can be great in cancer detection, unless they replace what we’re already doing because they are cheaper than doctors. In combination with an expert, AI is great, but that’s not what’s happening, is it?


I fail to see the difference. Actually, programming was one of the first fields where LLMs showed proficiency. The helper nature of LLMs is true in all fields so far; in the future this may change. I believe that, for instance, in the case of journalism the issue was already there: three euros per post written without a clue by humans.

Anyway, in the long run AI will kill tons of jobs, regardless of blog posts like that. The true key is government assistance.


I don't know what difference you are referring to. I was agreeing with you.

And also agreed: many trumpet the merits of "unassisted" human output. However, they're suffering from ancestor veneration: human writing has always been a vast mine of worthless rock (slop) with a few gems of high-IQ analysis hidden here and there.

For instance, upon the invention of the printing press, it was immediately and predominantly used for promulgating religious tracts.

And even when you got to Newton, who created for us some valuable gems, much of his output was nevertheless deranged and worthless. [1]

It follows that, whether we're a human or an LLM, if we achieve factual grounding and the capacity to reason, we achieve it despite the bulk of the information we ingest. Filtering out sludge is part of the required skillset for intellectual growth, and LLM slop qualitatively changes nothing.

[1] https://www.newtonproject.ox.ac.uk/view/texts/diplomatic/THE...


Sorry, I didn't mean to imply that we disagreed, but that programmers were and are going to be impacted as much as writers, for instance; yet I see an environment where AI is generally more accepted as a tool.

About your last point: sometimes I think that in the future there will be models specifically distilling the essence of selected thinkers, so that not only will their production be preserved, but maybe also something more that is only implicitly contained in their output.


That's a good point: the greatest value that we can glean from one another is likely not epistemological "facts about the world", nor is it even the predictive models seen in science and higher brow social commentary, but in patterns of thinking. That alone is the infinite wellspring for achieving greater understanding, whether formalized with the scientific method or whether more loosely leveraged to succeed with a business endeavor.

Anecdotally, I met success in prompting GPT-3 to "mimic Stephen Pinker" when solving logical puzzles. Puzzles that it would initially fail, it would solve when attempting to mimic his language. GPT-3 seemed to have grokked the pattern of how Stephen Pinker thinks through problems, and it could leverage those patterns to improve its own reasoning. OpenAI o1 needs no such assistance, and I expect that o2 will fully supplant humans with its ability to reason.

It follows that all that we have to offer with our brightest minds will be exhausted, and we will be eclipsed in every conceivable way by our creation. It will mark the end of the Anthropocene; something that likely exceeds the headiest of Nick Bostrom's speculations will take its place.

It seems that this is coming in 2026 if not sooner, and Alignment is the only thing that ought to occupy our minds: the question of whether we're creating something that will save us from ourselves, or whether all that we've built will culminate in something gross and final.

Looking around myself, however, I see impassioned "discourse" about immigration. The merits of DEI. Patriotism. Transgenderism. Religion. Copyright. Vast herds of dinosaurs preying upon one another, giving only idle attention to the glowing object in the sky. Is it an asteroid? Is it a UFO that is coming down to provide dinosaur healthcare? Nope, not even that level of thought is mustered. With 8 billion people on the planet, Deep Utopia by Nick Bostrom hasn't even mustered 100 reviews on Amazon. On the advent of the defining moment of the universe itself, when virtually all that is imaginable is unlocked for us, our species' heads remain buried in the mud, gnawing at one another's filthy toes, and I'm alienated and disgusted.

The only glints of beauty I see in my fellow man are in those with minds which exceed a certain IQ threshold and cognitive flexibility, as well as in lesser minds which exhibit gentleness and humility. There is beauty there, and there is beauty in the staggering possibility of the universe itself. The rest is at best entomology, and I won't mourn its passing.


Related: [rumors] Audible is starting a pilot project to do just that with ebooks.


At this point, this seems more like a question of "how soon", not "if".


Does this mean we could buy an ebook on Kindle and listen to it on Audible?


Terrible article. The author basically does not understand how LLMs work: since an LLM cares a lot about the semantic meaning of a token, this thing about next-word probability is so dumb that we can use it as a "fake AI expert" detector.


Something tells me that the author [0] is probably well aware of how these work under the hood, and the math behind it - when writing scientific articles with a lay audience in mind, you'll often have to use layman's terms. But feel free to enlighten us further!

[0] - https://en.wikipedia.org/wiki/Jim_Waldo


Whatever his credentials, what he says is plain wrong. GPTs don't follow "the grass is" with "green" because it's the most probable continuation; this idea is incredibly naive and breaks down with sentences longer than a few words. And GPTs don't crowdsource the answers to questions: their answers are not necessarily the most common ones, and neither is "the consensus view determined by the probabilities of the co-occurrence of the terms"; there is no such algorithm implemented anywhere.

What LLMs crowdsource is a world model, and they need an incredible amount of language to squeeze one out of it, second hand. We do train them for the ability to predict the next word, which is a task that can only be performed satisfactorily by working at the level of concepts and their relationships, not at the level of words.


> We do train them for the ability to predict the next word, which is a task that can only be performed satisfactorily by working at the level of concepts and their relationships, not at the level of words.

This is just obviously, trivially false.


Obviously, trivially false? Now I'm curious. Can you expand a bit?


I think what they mean (not OP here, so just chiming in to try to interpret and answer your question) is that you don't know what you are talking about.


It's very surprising that the author of this post does 99% of the work and writing and then does not go forward with the other 1%: downloading ollama (or some other llama.cpp-based engine) and testing how some decent local LLM works in this use case. Because maybe a 7B or 30B model will do great in this use case, and that's cheap enough to run: no GPT-4o needed.
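
To make it concrete, here is roughly what such a test could look like, shelling out to the ollama CLI from Rust (a minimal sketch; the model name and prompt are placeholder assumptions, any pulled 7B-class model would do):

    use std::process::Command;

    fn main() {
        // Assumes the ollama CLI is installed and a model was pulled
        // beforehand, e.g. with `ollama pull mistral` (the model name
        // is just an example, not a recommendation).
        let output = Command::new("ollama")
            .args(["run", "mistral", "Summarize the following text: ..."])
            .output()
            .expect("failed to run ollama");
        println!("{}", String::from_utf8_lossy(&output.stdout));
    }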


Not OP, but thanks for the suggestion. I’m starting to play around with LLMs and will explore locally hosted versions.


Writing programs in Rust is not simpler than writing programs in C.


For compilers specifically, I think plenty of people would disagree.

It's not that it's exceedingly hard in C, but programming languages have evolved in the last millennium, and there are indeed language features that make writing compilers easier than it used to be.

I have the most fun when I write x86 MASM assembly. It's a pretty simple language all in all, even with the macro system. Much simpler than C.

But a simple language doesn't always make it simple to write complex programs like compilers.


It is really remarkably sucky to process trees without algebraic datatypes and full pattern matching. Most of your options for that are ML progeny, and the rest are mostly Lisps with a pattern-matching macro. While it’s definitely possible to implement, say, unification in C, I wouldn’t want to—and I happen to actually like C.
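
For example, an expression tree with full pattern matching is a few transparent lines in Rust (a minimal sketch), where the C version needs a tagged union plus manual discriminant checks:

    // An expression tree as an algebraic datatype.
    enum Expr {
        Num(i64),
        Add(Box<Expr>, Box<Expr>),
        Mul(Box<Expr>, Box<Expr>),
    }

    // Exhaustiveness is compiler-checked: forget a variant and the
    // match refuses to compile.
    fn eval(e: &Expr) -> i64 {
        match e {
            Expr::Num(n) => *n,
            Expr::Add(a, b) => eval(a) + eval(b),
            Expr::Mul(a, b) => eval(a) * eval(b),
        }
    }

    fn main() {
        // (1 + 2) * 3
        let e = Expr::Mul(
            Box::new(Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)))),
            Box::new(Expr::Num(3)),
        );
        assert_eq!(eval(&e), 9);
    }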

Given the task is to bootstrap Rust, a Rust subset is a reasonable and pragmatic choice if not literally the only one (Mes, a Lisp, could also work and is already part of the bootstrappable ecosystem).


Sure, for you it isn't. It is for me. Especially if we're talking "working roughly as intended" programs.


Rust feels impossible to use until you "get" it. It eventually changes from fighting the borrow checker to a disbelief how you used to write programs without the assurances it gives.

And once you get past fighting the borrow checker it's a very productive language, with the standard containers and iterators you can get a lot done with high level code that looks more like Python than C.
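
For instance, something like this (a trivial sketch) would be a manual loop with bookkeeping in C, but reads almost like a Python comprehension:

    fn main() {
        let words = ["tree", "borrow", "iterator", "cargo"];
        // Filter, transform and collect in one high-level chain.
        let long_upper: Vec<String> = words
            .iter()
            .filter(|w| w.len() > 5)
            .map(|w| w.to_uppercase())
            .collect();
        println!("{:?}", long_upper); // ["BORROW", "ITERATOR"]
    }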


I agree, but it's no different than C with a decent library of data structures. And even when you become more borrow-checker aware and able to anticipate most of the issues, there are still cases where the solution is either non-obvious or requires doing things in indirect ways compared to C or C++.
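
A small example of the kind of indirection I mean (a sketch): you can't hold a reference into a Vec while growing it, and the idiomatic fix is to switch to an index, something you'd never think about in C:

    fn main() {
        let mut items = vec![1, 2, 3];
        // The direct, C-like version does not compile:
        //     let first = &mut items[0];
        //     items.push(4);  // error[E0499]: cannot borrow `items`
        //     *first += 10;   //   as mutable more than once at a time
        // The indirect fix: keep an index instead of a reference.
        let first = 0;
        items.push(4);
        items[first] += 10;
        println!("{:?}", items); // [11, 2, 3, 4]
    }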


The quality difference between generics and proc macros vs the hoops C jumps through instead is pretty significant. The way you solve this in C is also unobvious, but doesn't seem like it when you have a lot of C experience.

I've been programming in C for 20 years, and didn't realize how much of using it productively wasn't a skilful craft, but busywork that doesn't need to exist.

This may sound harsh, but sensitivity to definition order, and the fragility of headers combined with a global namespace, is just a waste of time. These aren't problems worth caring about.

Every function having its own idea of error handling is also nuts. Having to be diligent about error checking and cleanup is not a point of pride, but a compiler deficiency.

Maintenance of build scripts is not only an unnecessary effort, but it makes everything downstream of them worse. I can literally not have build scripts at all, and be able to work on projects bigger than ever. I can open a large project, with an outrageous number of dependencies, and have it build on the first try, integrate with IDEs, generate API docs, run unit tests out of the box. Usually works on Windows too, because the POSIX vs Windows schism can be fixed with a good standard library and cross-platform dependency management.

Multi-threading can be the default standard for every function (automatically verified through the entire call graph including 3rd party code), and not an adventurous novelty.
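
Concretely, that verification is the Send/Sync machinery; a minimal sketch of what "automatically verified through the entire call graph" means in practice:

    use std::sync::Arc;
    use std::thread;

    fn main() {
        // Rc is not Send, so handing it to another thread is rejected
        // at compile time, no matter how deep in the call graph:
        //     let n = std::rc::Rc::new(42);
        //     thread::spawn(move || println!("{}", n)); // compile error
        // Arc is the thread-safe counterpart and compiles fine.
        let n = Arc::new(42);
        thread::spawn(move || println!("{}", n)).join().unwrap();
    }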


Writing non-trivial programs is easier in Rust than in C, for people that are equally proficient in C as in Rust. Especially if you're allowed to use Cargo and the Rust crates ecosystem.

C isn't even in the same league as Rust when it comes to productivity – again, if you're equally proficient in Rust as in C.


I have 40 years of C muscle memory and it took me many tries and a real investment to get into Rust, but I don’t do any C anymore (even for maintenance- I’d rather rewrite it in Rust first).

Rust isn’t in a different class from C, it’s a different universe!


This does not match my experience.


Try putting everything in Arc<Mutex<>> or allow mutable_transmutes and things get rather comfy.
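
Something like this (a minimal sketch): shared mutable state with no lifetime gymnastics, at the cost of runtime locking:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Clone the Arc freely, lock the Mutex whenever you mutate.
        let counter = Arc::new(Mutex::new(0));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || *counter.lock().unwrap() += 1)
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        println!("{}", *counter.lock().unwrap()); // 4
    }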


Doesn't this defeat the point of using Rust a bit?


You have to consider that those who write the Rust compiler are experts in Rust, but not necessarily experts in C. So even if writing programs in C may be simpler than in writing programs in Rust for some developers, the opposite is more likely in this case, even before we compare the merits of the respective languages.


This is 100% the case. All of the honest-to-god Rust experts I know work on the compiler in some way. Same goes for Lean, which bootstraps from C as well.


Writing programs that compile is much easier in C. It lets me accidentally do all sorts of ill-advised things that the Rust compiler will correctly yell at me about.

I don't remember it being any easier to write C that passes through a static analyzer like Coverity etc. than it is to write Rust. Think of rustc like a built-in static analyzer that won't let you ignore it. Sometimes that means it's harder to sneak bad ideas past the compiler.
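
A tiny example of the compiler being right to yell (a sketch): the C analogue of the commented-out lines compiles happily and hands you a dangling pointer:

    fn main() {
        // rustc rejects this outright, where C would read freed
        // stack memory:
        //     let r;
        //     {
        //         let t = String::from("inner");
        //         r = &t;  // error[E0597]: `t` does not live long enough
        //     }
        //     println!("{}", r);
        // The fix it steers you toward: move ownership out instead.
        let r;
        {
            let t = String::from("inner");
            r = t; // ownership moves out of the block, nothing dangles
        }
        println!("{}", r);
    }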


This is probably true if you assume it doesn't matter whether the program is correct.


Yes it is, why would anyone use it otherwise?


Ignoring failures is a bad idea, but in many applications quitting on malloc() returning NULL is the most sensible thing to do. Many, but not all kinds of applications.
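
Incidentally, that is also Rust's default policy: an allocation failure aborts the process. For the "not all kinds of applications" case there is an explicit fallible path (a sketch using the standard Vec::try_reserve API):

    fn main() {
        let mut buf: Vec<u8> = Vec::new();
        // The default behavior elsewhere in Rust matches "quit on
        // malloc() returning NULL". When you must survive OOM:
        match buf.try_reserve(64 * 1024 * 1024) {
            Ok(()) => println!("reserved 64 MiB"),
            Err(e) => eprintln!("allocation failed, degrading: {e}"),
        }
    }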


The Awair Element works very well in my experience. If you check the sensors list, the sum of the cost of just the sensors (which are very high quality ones) will match the total cost of the device. And the software and the design are great as well.


In case somebody is interested: there are decent ST77xx-based displays for a lot less than $20 in the 2" size. Otherwise, with $20 you can get a much larger one on AliExpress, always based on the same chip and so compatible with the code in this post. If you are ok with paying a bit more and want to be sure to get a quality product, for $20 you can get a solid color display from Pimoroni, if you are in the UK or are willing to wait forever for the shipment in Europe.


I've bought a 1.77" one for 2€ on AliExpress, if somebody needs help on how to wire it properly I wrote a short post about it: http://goran-juric.from.hr/blog/2020/03/connecting-177-inch-...

