
> I always want to see a page with the product details and price before I click "buy". Reducing the number of clicks is not going to make me change my decision.

This is compounded by the multi-headed monster that large orgs like theirs have no choice but to become. If customers could trust that everyday essentials had a relatively stable price and availability pattern, like they trust (rightly or wrongly) from their local grocery store, blind ordering might be more tenable.

But some other head on the beast wants to keep Amazon shaped like an unmonitored digital marketplace where orders are fulfilled dynamically by bidders and algorithms, so your Tide Pods could be anywhere from $6.99 to $64.99, and you might get anywhere between 10 and 100, and they might arrive tomorrow or next week, and they might come in retail packaging or as a bag of tide-pod-resembling-mystery-objects, etc.

Of course, blind ordering won't work when you can't give your customers any assurances (let alone guarantees) about price, quality, volume, etc.


The core issue is that Amazon envisioned Alexa as a product that would help it increase sales. Smart home features were always an afterthought. How convenient would it be if people could shout "Alexa order me Tide Pods" from wherever they were in their home and the order got magically processed? That demo definitely got applause from a boardroom full of execs.

The problem is that consumers don't behave like that. This is also why Amazon's Dash buttons failed. I always want to see a page with the product details and price before I click "buy". Reducing the number of clicks is not going to make me change my decision and suddenly order more things.

If they want to salvage Alexa, they need to forget shopping and start doubling down on the smart home and assistant experience. The tech is still pretty much where it was in 2014. Alexa can set timers and tell me the weather, and...that's basically it. Make it a value add in my life and I wouldn't mind paying a subscription fee for it.


I reported this on their HackerOne many years ago (2018 it seems) and they said it was working as intended. Conclusion: don't use private forks. Copy the repository instead.

Here is their full response from back then:

> Thanks for the submission! We have reviewed your report and validated your findings. After internally assessing the finding we have determined it is a known low risk issue. We may make this functionality more strict in the future, but don't have anything to announce now. As a result, this is not eligible for reward under the Bug Bounty program.

> GitHub stores the parent repository along with forks in a "repository network". It is a known behavior that objects from one network member are readable via other network members. Blobs and commits are stored together, while refs are stored separately for each fork. This shared storage model is what allows for pull requests between members of the same network. When a repository's visibility changes (Eg. public->private) we remove it from the network to prevent private commits/blobs from being readable via another network member.
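To make the implication concrete, here is a minimal sketch (owner, repo, and SHA are placeholders; this just exercises the network-member behavior GitHub describes above): a commit pushed to a private fork may remain readable through the public parent repository.

```ts
// Hypothetical illustration of the shared "repository network" storage:
// a commit SHA pushed to a private fork may still be readable through
// the public parent repo. Owner, repo, and SHA below are placeholders.
const owner = "upstream-owner";
const repo = "example-repo";
const sha = "0123456789abcdef0123456789abcdef01234567"; // commit from a private fork

const res = await fetch(
  `https://api.github.com/repos/${owner}/${repo}/commits/${sha}`,
  { headers: { Accept: "application/vnd.github+json" } },
);

// A 200 response would mean the fork's commit is readable via the parent,
// exactly the network-member behavior described in GitHub's reply.
console.log(res.status, res.ok ? (await res.json()).commit.message : "not accessible");
```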


> As a designer, I feel the need to be original. If you’re a designer, or even if you’re just interested in design, you probably feel the need to be original, too.

I've been a professional designer since 2006, and I got over that thinking pretty quickly. A designer trying to be strikingly original is rarely acting in service of the design. If you want to be strikingly original, you probably want to be an artist instead of a designer. What a designer fundamentally does is communicate the best solution to a problem, given the requirements, goals, and constraints of that problem. Originality is subordinate to that at best.


Users should never be expected to know these gotchas for a feature called "private", documented or not. It's disappointing to see GitHub calling it a feature instead of a bug; to me it just shows a complete lack of care about security. Privacy features should _always_ have a strict, safe default.

In the meantime I'll be calling "private" repos "unlisted", which seems more appropriate.


Then maybe "taking over the market" is a bad metric, and we should be optimizing for making a company that makes the workers' lives better. The US cultural bias is showing here, as it's assumed that profit is above all else, and a company that forgoes profit to make workers happier must thus be less good.

The vast majority of people in companies are workers. Let's stop optimizing for owner wealth and start optimizing for worker happiness instead.


I opted out at Boston International Airport. It involved arguing with the TSA for about 5 minutes while holding up a 150-person line. Then the supervisor came over and told me that I "was required to have my photo taken", and that opting out consisted of checking a box in the software to not save my photo. My alternative was to not get on the flight.

The whole idea of opting out is a scam. They are 100% planning to force mandatory facial recognition on the general public.


This a million times.

I swear I will never understand why Amazon's supply, organization, and pricing for household goods is such a disaster.

Their experience for mainstream books, after all, is mostly perfectly fine -- there's a single listing for each book, and the price doesn't change much, just some discount from list. It works.

But for things like paper towels or Tide or whatever, it's utter chaos. Multiple listings for the same item, sizes and quantities that mysteriously move from one listing to another, prices that vary 10x or more...

It's utterly baffling to me why Amazon created this consumer-hostile nightmare. I buy a lot of stuff from Amazon, but all household goods and toiletries I buy from Target online, simply because the listings and prices are totally consistent. Even though I have Prime! I don't understand why Amazon hasn't figured out that Prime customers like me buy from Target instead because Amazon's household supplies listings are such unpredictable garbage, while Target just works like a normal store.


The city of Cracow in Poland banned billboards (and other visual advertising, quite aggressively) about 2 years ago. Great outcomes. There are still some workarounds that companies use to put this s..t out in public (e.g. covers for renovation works can contain up to 50% advertising area, so we get renovations of just-finished buildings done only to put up covers with ads). Now, when I visit another city where there's no such ban, I cannot stand this visual garbage. This should be banned everywhere.

"Eventually though, open source Linux gained popularity – initially because it allowed developers to modify its code however they wanted ..."

I find the language around "open source AI" to be confusing. With "open source" there's usually "source" to open, right? As in, there is human-legible code that can be read and modified by the user? If so, then how can current ML models be open source? They're very large matrices that are, for the most part, inscrutable to the user. They seem akin to binaries, which, yes, can be modified, but are extremely opaque and require enormous effort to understand and effectively modify.

"Open source" code is not just code that isn't executed remotely over an API, and it seems like maybe its being conflated with that here?


“The Heavy Press Program was a Cold War-era program of the United States Air Force to build the largest forging presses and extrusion presses in the world.” This “program began in 1944 and concluded in 1957 after construction of four forging presses and six extruders, at an overall cost of $279 million. Six of them are still in operation today, manufacturing structural parts for military and commercial aircraft” [1].

$279mm in 1957 dollars is about $3.2bn today [2]. A public cluster of GPUs provided for free to American universities, companies and non-profits might not be a bad idea.

[1] https://en.m.wikipedia.org/wiki/Heavy_Press_Program

[2] https://data.bls.gov/cgi-bin/cpicalc.pl?cost1=279&year1=1957...
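The conversion is just a CPI ratio; with approximate annual CPI-U index values (roughly 28 for 1957 and 314 for 2024 — both from memory, see [2] for the exact figures):

$$\$279\text{M} \times \frac{\text{CPI}_{2024}}{\text{CPI}_{1957}} \approx \$279\text{M} \times \frac{314}{28} \approx \$3.1\text{B}$$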


You could publish it in the Journal of Trial and Error (https://journal.trialanderror.org), which I created with a number of colleagues a couple years ago!

Our editor-in-chief was interviewed for this related Nature article a couple months ago (https://www.nature.com/articles/d41586-024-01389-7).

While it's easy pickings, it's still always worth pointing out the hypocrisy of Nature publishing pieces like this, given that they are key drivers of this phenomenon by rarely publishing null results in their mainline journals. They have very little incentive to change anything about the way scientific publishing works, as they currently profit the most from the existing structures, so them publishing something like this always leaves a bit of a sour taste.


I used to guide in Yellowstone. This has no bearing on the greater Yellowstone Caldera (supervolcano), which spans nearly 30 miles by 40 miles. In my time there I never saw anything like this. If you're ever in a situation similar to this, run as fast and as far as you can.

The interesting thing about geysers and pools is how relatively predictable they are... until they are not. A mathematically and statistically minded person would have a lot of fun building prediction models for all the different geysers.
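As a toy starting point (the sample numbers below are hypothetical stand-ins, loosely in the spirit of the classic Old Faithful duration-vs-wait relationship, not real measurements):

```ts
// Minimal sketch: ordinary least squares predicting the wait until the
// next eruption from the duration of the last one. Sample points are
// hypothetical, not real Yellowstone data.
type Obs = { duration: number; wait: number }; // minutes

function fitLine(data: Obs[]): { slope: number; intercept: number } {
  const n = data.length;
  const mx = data.reduce((s, d) => s + d.duration, 0) / n;
  const my = data.reduce((s, d) => s + d.wait, 0) / n;
  const sxy = data.reduce((s, d) => s + (d.duration - mx) * (d.wait - my), 0);
  const sxx = data.reduce((s, d) => s + (d.duration - mx) ** 2, 0);
  const slope = sxy / sxx;
  return { slope, intercept: my - slope * mx };
}

const sample: Obs[] = [
  { duration: 1.8, wait: 54 },
  { duration: 3.3, wait: 74 },
  { duration: 4.5, wait: 85 },
];
const { slope, intercept } = fitLine(sample);
// With these made-up points: slope ≈ 11.6, intercept ≈ 34,
// i.e. longer eruptions predict longer waits.
console.log(`wait ≈ ${intercept.toFixed(1)} + ${slope.toFixed(1)} × duration`);
```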


So I am extremely hyped about this, but it's not clear to me how much heavy lifting this sentence is doing:

> First, the problems were manually translated into formal mathematical language for our systems to understand.

The non-geometry problems which were solved were all of the form "Determine all X such that…", and the resulting theorem statements are all of the form "We show that the set of all X is {foo}". The downloadable solutions from https://storage.googleapis.com/deepmind-media/DeepMind.com/B... don't make it clear whether the set {foo} was decided by a human during this translation step, or whether the computer found it. I want to believe that the computer found it, but I can't find anything to confirm. Anyone know?
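To illustrate what's at stake in that translation step, here's a toy Lean sketch (a made-up problem, not an actual IMO statement; proof elided): the answer set appears inside the theorem statement itself, so whoever writes the formal statement has already "determined all X".

```lean
import Mathlib

-- Toy example, not an IMO problem: the answer set {0, 1} is written
-- into the statement being proved. If a human produced this line during
-- the translation step, the human supplied the answer. Proof omitted.
theorem determine_all_x : {x : ℝ | x ^ 2 = x} = {0, 1} := by
  sorry
```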


> We’re releasing Llama 3.1 405B, the first frontier-level open source AI model, as well as new and improved Llama 3.1 70B and 8B models.

Bravo! While I don't agree with Zuck's views and actions on many fronts, on this occasion I think he and the AI folks at Meta deserve our praise and gratitude. With this release, they have brought the cost of pretraining a frontier 400B+ parameter model to ZERO for pretty much everyone -- well, everyone except Meta's key competitors.[a] THANK YOU ZUCK.

Meanwhile, the business-minded people at Meta surely won't mind if the release of these frontier models to the public happens to completely mess up the AI plans of competitors like OpenAI/Microsoft, Google, Anthropic, etc. Come to think of it, the negative impact on such competitors was likely a key motivation for releasing the new models.

---

[a] The license is not open to the handful of companies worldwide which have more than 700M users.


The American state of Vermont has banned billboards since 1968. It makes spending time in the state extraordinarily pleasant.

A company that provides a phone service (mobile or other) has to conform to a large amount of regulatory red tape. Why? Because either a company before them tried to monopolise the entire country, or it killed someone.

Now, large tech companies haven't wholesale killed people (unlike, say, tobacco, talc powder, 3M and half of their solvents, weed killer, most car makers, etc.), but they have been trying desperately to stop all competition.

They've also been trying to extract as much personal info as possible for profit. Because regulators in the USA are hamstrung, they are used to being able to do stuff that would be illegal in physical stores or pre-existing industries.


Why is the goal to get people to quit their jobs and get a nice apartment?

Isn't it supposed to be a minimum base level of support? Why do we keep moving the goal posts?

And if everyone quits their job and lives in a nice apartment, where is this money going to come from? The problem with welfare today is that it's a disincentive to work: start working and you lose your transfer payments (earning an extra $1,000 can cost you more than $1,000 in benefits). A lot of people are stuck in this trap and don't want to start working, forsaking valuable on-the-job training and socialization, which hurts them in the long run. That's where the universal part comes in.


Employee stock options are not a new idea, obviously.

If the story is "every company should offer stock options to its employees", then sure, that's often a good business plan. The reason not every company offers them to all its employees is probably that, for those employees, options wouldn't affect incentives much and would make pay subject to the vagaries of the stock market. Your barista at Starbucks is not going to increase the stock price no matter how well he fills your order; at the same time, maybe he wants to know how much he takes home every day.

If the story is "it should be the law that every company offers stock options", then that would be a dumb law for the reasons above.

If the story is "all companies must be fully employee-owned workers' cooperatives", then first, note that you are calling for a restriction on workers' rights: they have to be given part of their pay as stocks, and they can't sell them freely. Second, that will probably make markets work worse. There's a large economics literature on this: worker-owned cooperatives have not taken over the market, although they are an available institutional form, because (a) they find it hard to raise capital (b) they tend to make decisions that maximize worker welfare rather than profit, e.g. they won't sack underperforming divisions or expand in ways that dilute existing workers' stake.


> instead of learning how to start and run a business

Because the stress and risks of running a small business as a solo, inexperienced, first-time business owner are insane compared to a regular 9-5.

Especially if you start hiring other people, your liability increases tenfold, as now anything can happen with them (sick leave, absenteeism, low performance, sabotage, etc.), but you're still bound to the same deadlines you agreed with your customers, or you'll get sued by them for damages.

As an employee you have some basic rights and protections from the state on the limits of what your employer can squeeze from you, more or less, depending on where you live. As a contractor or company, however, you don't, and you can be fully liable in court for your failures to deliver, regardless of your personal circumstances. Unless you know what you're getting into and have the know-how, experience, or mentorship, it's not worth it in most cases.


> # Reddit believes in an open internet, but not the misuse of public content.

Calling it "public" content in the very act of exercising their ownership over it. The balls on whoever wrote that.


Far more likely is that Google was not willing to complete the deal and was pulling the plug after looking at internal data. Wiz, fearing the bad press of Google backing out, rushes to tell journalists that THEY are walking away because they are worth more.

Wiz's valuation is insane. Most people haven't even heard of them. I think it was a >60x ARR multiple on this deal. I'd actually be kinda pissed if I was a Google shareholder and they went through with it.

Something very strange is going on with Wiz. My gut tells me that if they ever IPO, I should go big on puts.


So you’re setting up a multi-region RDS. If region A goes down, do you continue to accept writes to region B?

A bank: No! If region A goes down, do not process updates in B until A is back up! We’d rather be down than wrong!

A web forum: Yes! We can reconcile later when A comes back up. Until then keep serving traffic!

CAP theorem doesn’t let you treat the cloud as a magic infinite availability box. You still have to design your system to pick the appropriate behavior when something breaks. No one without deep insight into your business needs can decide for you, either. You’re on the hook for choosing.
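A hypothetical sketch of that choice (made-up names, not any real AWS or RDS API):

```ts
// Hypothetical sketch of the CP-vs-AP policy decision; not a real AWS API.
type Mode = "CP" | "AP"; // consistency-first (the bank) vs availability-first (the forum)

interface Write { key: string; value: string }

class RegionB {
  private pendingReconciliation: Write[] = [];
  constructor(private mode: Mode, private regionAReachable: () => boolean) {}

  write(w: Write): "committed" | "queued" | "rejected" {
    if (this.regionAReachable()) return "committed"; // normal replicated write
    if (this.mode === "CP") return "rejected";       // the bank: down rather than wrong
    this.pendingReconciliation.push(w);              // the forum: serve now, reconcile later
    return "queued";
  }
}
```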


Surprised at the comments minimizing this.

I've used github for a long time, would not have expected these results, and was unnerved by them.

I'd recommend reading the article yourself. It does a good job explaining the vulnerabilities.


The National Science Foundation has been doing this for decades, starting with the supercomputing centers in the 80s. Long before anyone talked about cloud credits, NSF has had a bunch of different programs to allocate time on supercomputers to researchers at no cost, these days mostly run out of the Office of Advanced Cyberinfrastructure. (The office name is from the early 00s.) - https://new.nsf.gov/cise/oac

(To connect universities to the different supercomputing centers, the NSF funded the NSFnet network, which was basically the backbone of the Internet in the 80s and early 90s. The supercomputing funding has really, really paid off for the USA.)


A well-known anecdote reported by Shannon:

"My greatest concern was what to call it. I thought of calling it 'information,' but the word was overly used, so I decided to call it 'uncertainty.' When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.'"

See the answers to this MathOverflow question (https://mathoverflow.net/questions/403036/john-von-neumanns-...) for references on the discussion of whether Shannon's entropy is the same as the one from thermodynamics.
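For reference, the two formulas differ only by a constant factor. Shannon's

$$H = -\sum_i p_i \log_2 p_i$$

and the Gibbs entropy of statistical mechanics

$$S = -k_B \sum_i p_i \ln p_i = (k_B \ln 2)\, H$$

agree up to the factor $k_B \ln 2$ when the $p_i$ describe the same distribution; whether they *mean* the same thing physically is what the linked discussion is about.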



Personally, I like how HN focuses on content and discussions rather than individual users. If I wanted to follow experts, I'd probably curate a selection on a social network like Mastodon, or kludge together some RSS feeds.

Also, I feel like this tool selects for active commenters, not for knowledgeable experts. Not to mention throwaway accounts.

Still a cool project.


> On Wednesday, some of the people who posted about the gift card said that when they went to redeem the offer, they got an error message saying the voucher had been canceled. When TechCrunch checked the voucher, the Uber Eats page provided an error message that said the gift card “has been canceled by the issuing party and is no longer valid.”

One thing to note is that it is impossible to strip types from TypeScript without a grammar of TypeScript. Stripping types is not a token-level operation, and the TypeScript grammar is changing all the time.

Consider for example: `foo < bar & baz > ( x )`. In TypeScript 1.5 this parsed as `(foo < bar) & (baz > (x))` because `bar & baz` wasn't a valid type expression yet. When the type intersection operator was added, the parse changed to `foo<(bar & baz)>(x)`, which desugared to `foo(x)`. I realise I'm going back in time here but it's a nice simple example.
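Spelled out with enough scaffolding to type-check (assuming `foo` is a generic function and `bar`, `baz` name types, which the original snippet leaves implicit):

```ts
// Enough declarations for the ambiguous line to be a generic call today:
type bar = { a: number };
type baz = { b: number };
declare function foo<T>(arg: unknown): void;
declare const x: unknown;

// Modern TypeScript parses the next line as the generic call foo<(bar & baz)>(x);
// TypeScript 1.5 parsed the identical text as the expression (foo < bar) & (baz > (x)).
foo < bar & baz > (x);

// After type stripping, the generic-call parse leaves just:
foo(x);
```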

If you want to continue to use new TypeScript features you are going to need to keep compiling to JS, or else keep your node version up to date. For people who like to stick on node LTS releases this may be an unacceptable compromise.

