Total Cookie Protection (blog.mozilla.org)
1526 points by todsacerdoti 12 days ago | 427 comments





> Total Cookie Protection makes a limited exception for cross-site cookies when they are needed for non-tracking purposes, such as those used by popular third-party login providers.

It would be great to have some more details about it: in particular, how do I turn it off if I prefer to add any exceptions manually?

Edit 1: Mozilla Hacks blog [1] has a bit more but still doesn't answer the question:

> In order to resolve these compatibility issues of State Partitioning, we allow the state to be unpartitioned in certain cases. When unpartitioning is taking effect, we will stop using double-keying and revert the ordinary (first-party) key.

What are these "certain cases?"

Edit 2: Reading on, there's this bit about storage access grants heuristics [2] linked from the blog. But is that really it, or is there a hardcoded whitelist as well? If so, it'd be great to see it.

This bit in particular is ambiguous in how it's supposed to work exactly (who's "we" here):

> If we discover that an origin is abusing this heuristic to gain tracking access, that origin will have the additional requirement that it must have received user interaction as a first party within the past 30 days.

1. https://hacks.mozilla.org/2021/02/introducing-state-partitio...

2. https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Pri...


(I’m one of the developers of this feature and co-author of the blog posts)

This is a great question, and I'm glad you found the answer; you probably understand that for many blog posts we avoid going into too much technical detail.

To answer your final question, there is no hardcoded allow-list for State Partitioning. The heuristics as described on MDN are accurate.


Have you considered using something like Expounder (https://skorokithakis.github.io/expounder/) in your posts? (Disclosure, I made it but it's a small open source lib).

I don't see why we can have full-blown web apps but our text needs to be very specifically just text these days.


This is super cool!

I've only recently discovered that Markdown has footnotes, and I've gone to town adding footnotes everywhere.

I use Jekyll + markdown on my website, and I now have lots of fun adding footnotes to my writing.

I added a "footnote tutorial" for readers on https://josh.works/turing-backend-prep-01-intro#why-this-rub..., to help them learn how to navigate the footnotes.

I _love_ your library, and I love the problem that you're solving with it.

Along the way, I've looked at Gwern's sidenotes [0] and Nate Berkopec's "footnotes"/sidenotes [1].

I eventually want to do something more "in-line", like what you've done with Expounder, but I've been satiated with markdown footnotes for now.

[0]: https://www.gwern.net/Sidenotes#

[1]: https://www.nateberkopec.com/blog/2017/03/10/how-i-made-self...


Thank you! I used to use footnotes too, but I didn't like how they took you out of the flow of the text. Expounder aims to specifically let users stay in the flow of reading, which is why one of the core instructions is that the text should work in context, as if it were never hidden.

It's good to see experiments along these lines. I really like Wikipedia's recent-ish rich tooltips on link mouseover, and the HTML <summary>/<details> elements deserve to be more widely known.
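For anyone who hasn't used them, they give you native, JS-free folding:

    <details>
      <summary>What is an atom?</summary>
      <p>Atoms are the tiny particles that everything around you is made of.</p>
    </details>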

From the demo it looks as if Expounder is one-way - once you've expanded something, you can't collapse it again. Is that correct?



I miss footnotes on the printed page because, in addition to references (where they're probably better as endnotes, to be honest), I find they're great for parentheticals that bulletproof a point, add some background that's not essential to the point being made, etc. But these latter uses work significantly less well in a blog post or ebook.

Oh, wow. The Sidenotes discussion from Gwern that you linked is _phenomenal_. Thank you for sharing these.

What I dislike about footnotes like that is that they pollute the browser history. If you want to leave the page but clicked on a few footnotes and their backlinks, you have to go “back” through all of them.

Thank you so much for posting gwern’s sidenote article! I want to use sidenotes on my site and this was a very valuable resource!


The back button usually comes with an unfoldable list of jump points.

I am more annoyed by how the jump points are turned into a useless feature by so much JavaScript out there that loads new content without affecting the browsing history.


I love this, but I'm a bit surprised that you do not include the ability to "unexpound" an "expounded" term. Is that intentional?

If I were reading a technical text, I would definitely end up reading most paragraphs at least twice. It would make no sense to keep the expounded terms open the second time through; I'd be tempted to hide them again as soon as I was finished with them the first time.


Yes, it is intentional. The functionality actually exists, it's just not mentioned:

https://github.com/skorokithakis/expounder/blob/master/examp...

It's because, once clicked, the new text should become part of the old, and that's it. Presumably you've already read it, and I don't want to make the viewer have to re-collapse the links every time.

Your use case makes sense, though, which is why the feature was included. Maybe I should mention it in the README.


I think collapsing would also be useful when all you need is a quick reminder, not a full explanation. Like "What's that again? [click to expand] Oh that's right [click to collapse]". That's easier than finding the place to skip to.

Hmm, true, I've added it to the README!

Hi, can you consider adding some accessibility to the library? Currently, I don't have a way to know that a term can be expanded, because the signal seems to be visual only and not detectable via a screen reader. Adding aria-pressed might be the solution, but I'm not an expert, just a user.

Oh, that's a good point! I didn't realize it wouldn't be discoverable, you're right.

Thanks!

I feel like the inserted text should be highlighted with a light yellow background or some indicator. Just appearing like that inline seems a bit funky or unexpected.

But I see there is a css class which is nice.

Just a simple rgba(x, x, x, 0.5) where the x's are the usual values for yellow.


I prefer to leave the styling to the user, the library is intentionally minimally invasive there...
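If you do want the highlight, something along these lines should work; the class name here is my guess, so check the library's docs for the actual hook:

    /* Hypothetical class for Expounder's inserted text */
    .expounded {
      background-color: rgba(255, 255, 0, 0.5); /* light yellow */
    }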

I agree with this. It would be helpful.

I wonder what this does to SEO, does the hidden text get indexed, and is it not picked up as a dark pattern by crawlers?

Why use this instead of footnotes? For example, in the Feynman lectures below, the footnotes and references to formulas and images activate when you hover over them. These footnotes can even include graphics and formulas.

https://www.feynmanlectures.caltech.edu/III_21.html


To me, footnotes serve a different purpose, e.g. linking to papers, like the Feynman lectures site does. Expounder is more about indicating that you don't know something, so the text itself can change to accommodate you.

I like how it unfolds the text, but it doesn't give a visual hint about what was unfolded, and it doesn't seem to provide a way to fold it again.

Be it typographic emphasis or coloring, there should be a hint. And clicking the text thus emphasized should collapse it.

That's my opinion; otherwise, nicely done.


It should animate the text while unfolding, but, other than that, there's no need to know what was unfolded. You just click what you don't know and eventually read the relevant info!

Doesn't HTML have the summary and details elements for this specifically, or am I overlooking something?

<abbr>/<dfn> are also quite relevant, and would fit a number of the example uses better (like the definition of 'atoms').

Not the author, but presumably you're overlooking the fact that the expounded term doesn't necessarily have to be "inside" or even "neighbouring" to the details element.

The author's intent here is to have terms explained in the text explicitly in such a way that it would 'augment' the text with an explanation somewhere further down the line, but not necessarily "in-place".

It is also intended for text specifically, rather than replacing one element with another.

I agree that details/summary are similar in spirit though, I had not come across those before.


As far as I know, those work quite differently.

Yes, this! Your lib looks awesome. Thanks for publishing it and sharing here!

Thank you!

This looks amazing. Would you mind if I packaged this in a WordPress plugin?

Not at all, go for it!

Awesome. Just a heads up, I've already finished it and just submitted it. HOWEVER, the plugin has to be licensed as GPLv2, but it shouldn't affect your license (since it's just using your code as a library). I'd feel better about it (and it will probably be smoother sailing during the review process) if I could submit your names as authors on the plugin.

If you want to be listed as an author, just drop over to https://github.com/withinboredom/expounder-wordpress/tree/ma... and let me know your wordpress.org user names in an issue.


Thanks! I don't think either of us have a Wordpress username, but it'd be great if you could include a link to the repo in the description.

Thanks again for your help!


Will do!

I would like this as well, please share once you do.

I've submitted it to the WordPress.org plugins directory, but you can download it right now from the repo in the sibling comment.

Is there support for an expound-all button on a page? I definitely have days where I just want to also read the details and don’t want to click a dozen times while I’m reading.

Not currently, but it shouldn't be hard to add a button with one line of JS to add the required CSS class to all the elements. This might defeat the purpose, though, as it's kind of intended to save you from reading things you already know.
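Something like this sketch, assuming the expandable links carry a data attribute (the selector is my guess at the markup, not necessarily the real one):

    // Expand everything by simulating a click on each expand link
    document.querySelectorAll('a[data-expounds]')
      .forEach((link) => link.click());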

Cool! I've been thinking of a similar solution to add to my (planned ;) ) longer blog posts. I'm guilty of going into the details too much sometimes.

Same here, and I didn't like the tradeoff, so I figured I'd solve it with the power of T E C H N O L O G Y.

That is FN DOPE. Wikipedia should adopt it in full.

I know that you didn’t mean to completely throw the conversation from Firefox to Expounder, but you succeeded.

Mozilla who? That’s where we are now.


This should have always been the only way it worked. Plus, it should be easier to create whitelists of allowed websites and have all other cookies deleted with every browser restart. I know it is possible with Firefox, but you need to add websites to the whitelist manually in deep settings. At least there are some extensions that make it easier, like CookieAutoDelete: https://addons.mozilla.org/en-US/firefox/addon/cookie-autode...

I would like something like this: each site by default gets a bucket by name.

If cookies from a bucket are to be shared with other sites, or might be seen when requested by a cross-site load from another site, ask the user a four-choice question.

"Allow (site) to see cookies from (site)?"

Always Allow, Just this time, Ask later, Always Deny


Have you considered that "Total Cookie Protection / Isolation Partition" would be a much better name? :D

What I wonder about is how one can decide what counts as legitimate use. This also sounds like a possibility for discriminating against small players with legitimate uses (similar to Microsoft's SmartScreen).

It would be great to know how those concerns are handled.


Thank you for your clarification, and your work on Firefox.

I guess that clears it up.


> you probably understand that for many blog posts we avoid going into too much technical detail.

Not really... for a highly technical issue like this, at a minimum you should link to the technical details.

There really is no excuse for making every reader of your blog who wants to know the details dig for them independently.

imo, at least.


Both the more technical blog post as well as the MDN page are linked shortly after that paragraph.

I agree I wish they had more detail about the exceptions.

I've been an FPI user for years as a best effort to rein in tracking, but there are a few common sites that just break with FPI (50% of the time PayPal checkout doesn't work). Even if "Total Cookie Protection" is only 98% as effective as FPI, I'm making the switch.

EDIT: FPI = first-party isolation


Yes, it’s essentially that, FPI with workarounds for common breakage. You should switch from FPI, this is essentially another take on FPI by some of its original developers, so it should have fewer issues overall, not just site breakage.

It will be interesting to see how many sites break with "Total Cookie Protection". Currently I use what I consider the bare minimum of anti-tracking, that is, what I can make Firefox provide on its own, plus the DuckDuckGo browser extension. Those two things alone break an alarming number of sites. The DDG extension is pretty regularly mistaken for an ad-blocker.

Given Firefox's low adoption, I fear that website owners will just ignore that their excessive tracking breaks their site in Firefox... "Works in Chrome... good enough"


I have strict tracking enabled in Firefox as well as uBlock Origin and I've yet to see a site broken. The only "broken" ones I've seen are badly coded ones that also fail to work in Chrome. Reputable sites tend to be just fine. YMMV.

FF blocked fingerprinting by Visa during a transaction. To my surprise, even that did not break.

FPI?


So if I happen to run a less popular third-party login provider, my fate is sealed?

No, there’s no allow-list, you get the same heuristics as described on that MDN page.

> Total Cookie Protection makes a limited exception for cross-site cookies when they are needed for non-tracking purposes, such as those used by popular third-party login providers.

Facebook and Google will be excepted? This makes it a joke, sadly.


This is basically Google (Chrome) paying Mozilla (Firefox) to kill 3rd party cookies because Google has a better way to fingerprint users without 3rd party cookies, because they have SO MUCH data about us.

This move is aimed at killing other AdTech companies which rely on 3rd party cookies to track users.

They're painting this as a 'PRIVACY' move, after they have already found other ways of tracking users across websites and devices.


It's still a good thing though. Better to be tracked by one company than a whole industry.

One company is the whole industry, Google

No. There's no whitelist. RTFM.

I did not say whitelisted, I said popular.

I wish there was something better than cookies for these use cases. But then, designing something that can't be abused for tracking while still empowering all the legitimate use cases is really hard, maybe even impossible.

> Would be great to have some more details about it: in particular, how do I turn it off if I prefer to add any exceptions manually.

(on mac) Firefox > Preferences > Privacy & Security > Custom


The question is how to use "Total Cookie Protection" without any hardcoded or heuristics-based exceptions.

Your answer seems to be about how to turn off "Enhanced Tracking Protection"/"Total Cookie Protection" or parts of it (resulting in weaker protection). I want to keep it enabled and disable the exceptions (for stronger protection), i.e. the opposite.

I haven't installed the new version yet, so can't say for sure, but as far as I know there is no setting for this in that menu. [1]

If I misunderstood what you meant, please elaborate.

1. https://support.mozilla.org/en-US/kb/enhanced-tracking-prote...


There are a lot of comments in here about how it's bad that cookies haven't always worked this way, but a significant amount of web content to this day still requires third-party cookies to work. And I'm not talking about cookies designed for analytics purposes; the concerns raised in the discussions here revolve around simple things like logins breaking.

For greenhorn web developers, you could say the same thing about TLS certificates. Why weren't they always free?

Well, another reason is because TLS (and formerly SSL) wasn't (weren't) just about encryption, but about a "web of trust." Encryption alone isn't trust.

Many things about web technologies have changed over time; and it's easy to say that any individual piece of functionality should have worked this or that way all along, but the original intent of many web features and how those features are used today can be very different.

One day industry standards may dictate that we don't even process HTTPS requests in a way where the client's IP address is fully exposed to the server. Someone along the way might decide that a trusted agent should serve pages back on behalf of a client, for all clients.

After all, why should a third-party pixel.png request expose my browsing on another website?! How absurd. Don't you think? And yet, we do it every day.


> Well, another reason is because TLS (and formerly SSL) wasn't (weren't) just about encryption, but about a "web of trust." Encryption alone isn't trust.

Which is a nice principle, but given corporate and government incentives, the trust provided was lackluster at best. The PKI is pretty much broken because of it.

In the end, all it did was incur an unaffordable cost for hobbyist bloggers and other netizens.


You used to be able to simply install a Firefox extension[1] or Android app[2] and automatically steal the accounts of everyone on your wifi network on every website. https stopped that.

[1] https://en.wikipedia.org/wiki/Firesheep

[2] http://faceniff.ponury.net/


Widespread HTTPS did that. Firesheep motivated the big players to stop cheaping out and go fully HTTPS, unlike earlier approaches which used HTTPS for login pages only. But it also took Let's Encrypt for HTTPS to become truly widespread.

100% agree

Yeah, in the end it's silly that we ended up with "trust" meaning only "you're connected to someone that controls the domain", which doesn't actually need PKI to accomplish if we just supported an SRV record with the public key(s) and verifiably authoritative DNS queries.

Which, fair, is trading one PKI for another, but web servers vastly outnumber authoritative DNS servers. And DKIM gets along fine without it, so we probably could too.


Well, there is DANE, but browser support is unfortunately missing.

Well, there is nothing that makes it impossible to build logins and the like without 3rd-party cookies. Yes, there are certain patterns out there that use them, but slowly turning 3rd-party cookies off and giving major sites time to adapt might help dump 3rd-party cookies completely one day.

I think the whole idea of sharing cookies across origins was a conceptual mistake right from the beginning, because it is also responsible for quite a lot of security vectors which had to be fixed by other mechanics like the SOP (Same Origin Policy) which in turn required mechanics like CORS (Cross Origin Resource Sharing).

And with all those mechanics in place, modern browsers are pretty tied up and are significantly reduced in their abilities compared to other HTTP/S clients. So when you want to build a PWA (Progressive Web App) that can use a configurable backend (as in federated), you will run into all kinds of problems that can all be traced back to the decision to share cookies across origins.


My point is you can make these arguments all day. Why do we allow iframes? You could argue web servers should simply communicate among themselves and serve subdocuments to clients.

Why is HTTP/2 Server Push being rescinded?

Why do user agents not provide additional types for <script> based on runtime installations?

Why isn't there a renderer API that allows me to use painted trees in <canvas>, but there is a bluetooth API that no one uses?


I am not sure I get your point then. I think it is important to see the patterns of bad decisions in the past to improve decision making in the future.

That those mistakes were made unintentionally and with good intentions is a completely different story, and that in hindsight everything looks so clear is also well known ;-)


> "web of trust."

"Web of trust" is a pretty specific term that doesn't apply to TLS/SSL: https://en.wikipedia.org/wiki/Web_of_trust

Did you mean to say "public key infrastructure" (PKI)?


I'm having a bit of an old person moment, and I'm not really that old, but I recall SSL cert sales back in the day pushing trust over encryption.

I may be confusing the terms "chain of trust" and "web of trust," but to the best of my knowledge, I don't recall EV certs being sold on the former term.

My apologies. I hope there are folks out there who have a better recollection who can piece this together.


> a significant amount of web content to this day still requires third-party cookies to work.

Not in the corners of the web I frequent. I've been blocking 3rd party cookies for years and the only site that's broken was some Pearson online homework site.


A lot of IDPs break. For example any website that presents "Login with Google" will not work or require a reload after completing the Auth flow before the login is accepted.

This isn't simply "blocking third party cookies"; it's "even an iframe has no access to the other state partition". The third-party cookie is allowed to exist, but it cannot leak to other sites. However, this leak prevention breaks plenty of other things if one is not careful (Mozilla was; there is a heuristic).


Total Cookie Protection creates a separate cookie jar for each website you visit.

Why is this not the default behavior already?


Because it breaks a lot of things like SSO providers (although I completely agree with you, screw that, make it the default and add exceptions as necessary like Mozilla is doing now).

I've had third party cookies completely disabled for years, and first party cookies only allowed by exception. It works fine on everything I use except for whatever it was Atlassian were (are?) doing with their very odd collection of about two dozen domains they round tripped through on authentication.

To be honest though, browser fingerprinting makes this mostly irrelevant unless you carefully use a script blocker with a whitelist too. Any domain that includes trackers that drop third party cookies almost certainly includes scripts that can fingerprint you and send results to a server without using a third party cookie.


(A bit OT)... which is why I consider SPAs to be complicit in 'evilness'. All these webpages that require JS for no real reason are generally making the web insecure, implicitly hostile, and difficult to navigate. Very few have the mental overhead to evaluate each site, so most just let any page do whatever it wants. Tracking and miners be damned.

SPA: single page application (I actually had to look it up... Shame on me.)

This is just my hunch, as I work in analytics and deal with cookies a lot, but both Salesforce and Atlassian appear to intentionally accept the third-party-cookie inconvenience because their products are enterprise (you have to log in for work) and they rely on upsell/cross-sell across products they host on different top-level domains. So forcing the third-party cookie helps immensely with their sales and retention, and doesn't hurt usage because it's often required for work, and if you need to work around it, you usually can find a way if you are so inclined.

If they had used the same domain for their products historically and just separate subdomains they wouldn't have to make this trade off, but it probably also helps with third-party ad networks/segmentation to get folks to turn it on anyways.


> makes this mostly irrelevant

Solving a problem isn't irrelevant just because there are other problems; there's definitely more to do, but this still has value.


Weirdly, for me Atlassian doesn't work when I have referrer spoofing enabled in about:config. Why does the referrer, a mere request header, determine whether my login is valid or not?

I've worked on (non-Atlassian) SSO projects where the provider used the referrer to send the client to the page-after-logout (and occasionally page-after-login) if they weren't set as parameters in some circumstances.

Here's a reference to a F5 device providing SAML SSO services and having a similar issue:

https://www.devcentral.f5.com/s/question/0D51T00007npfjw/chr...


I actually had a member of Atlassian's "security dev team" tell me in a support ticket I opened about being unable to login with referer headers disabled that:

> since we cannot discount the possibility of malicious users programatically generating tokens and forcing them upon users, we check the referer header to ensure that the request chain was initiated in the one place that we're comfortable with: id.atlassian.com

Make of that what you will.


Provided this is the reason Atlassian uses the referrer, this seems like reasonable usage. Thanks for clarifying!

I had the same problem and tracked it down to uMatrix's quite reasonable spoof-referrer default, which breaks nothing else. Just Atlassian's sign-in, which seems to bounce you around to several domains before it lets you in.

Some sites use the referer for CSRF protection. If they do that and you spoof your referer, they think you're being CSRF-attacked and block it.

At least based on my usage, it breaks very few sites.

SSO via OAuth still works fine, because OAuth uses redirects instead of cookies.


Not only does redirect based login work, it's an inherently better model than sharing cookies.

With shared cookies nothing stops site A from taking a copy of your cookie and using it to impersonate you on site B. With redirect based login the identity provider has to authorize each application that is being accessed and each site has its own session cookies.

The main problem is dealing with globally revoking access but that's usually solved with shorter termed session cookies that periodically need to be refreshed from the identity provider.


Site A can’t access 3rd-party cookies. Cookies can only be accessed by the domain they are created on. Otherwise any site could toss in a 1x1 image pointing to any website and steal the cookies.

Could a site fix this by delegating a subdomain or CNAME to the SSO provider like sso-company.example.com so that the cookie is still using the same domain, but pointing the IP to the SSO provider? Assuming the SSO provider supports this, that is. I believe OKTA supports this method.
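On the DNS side that would be a one-line delegation, something like this (hypothetical names; the provider would also need to serve a valid certificate for that hostname):

    ; zone file for example.com
    sso-company.example.com.  IN  CNAME  tenant.sso-provider.example.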

I mean, effectively, today hardware you or your boss owns is doing most of the work of tracking you.

This is making them allocate resources to achieve the same effect. Like taking LoJack off your car and phone, and making 'Them' tail you and scour security footage like in the old days. It's more expensive. Expensive things do not scale, so you have to prioritize who is worth the cost: people who are under legitimate suspicion of causing harm. Less 'by-catch', to use a commercial fishing concept.

When it's cheap to harass everyone, nobody is 'safe'. But when terrorists can't be tracked at all, nobody is 'safe' either. So we have checks and balances.


I believe so. That is what ad tech companies are now doing to get past the improved privacy measures.

I regularly use nginx to reverse proxy third-party API calls. I use it to protect API keys.

In my case, I strip all cookies and sensitive headers. One must keep in mind that the browser will treat it as a first-party request and the security implications that has. You may have to filter or modify cookies/headers.

https://jeremypoole.ca/posts/protecting_api_keys_on_the_fron...
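A stripped-down sketch of that kind of config (the upstream host and key are placeholders, and which headers you filter depends on the API):

    # Proxy a third-party API; keep browser cookies out, inject the key server-side
    location /api/ {
        proxy_pass https://api.thirdparty.example/;
        proxy_set_header Cookie "";                           # drop our cookies
        proxy_hide_header Set-Cookie;                         # drop upstream cookies
        proxy_set_header Authorization "Bearer YOUR_API_KEY"; # secret stays on the server
    }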


That is the preferred solution if you're using cookies across a company.

Well, SSO providers would still work if implemented correctly? SSO works without cookies. If I implemented Google SSO, I would not log in via the Google supercookie.

Most seem to require a cookie to pin the session or to match the passed state.

There is a state parameter? So if I want to have a cookie that passes stuff, I can just store my stuff inside a cookie and pass it inside the state param. There are so many possibilities via OpenID (which is super easy); I don't know how SAML2 works, which might be different though.

I know of a token system that some questionable engineers started pushing session state into and since it shipped before anyone noticed, walking that back turned out to be quite a chore. What was supposed to be a couple hundred byte cookie started hitting max cookie length warnings in other parts of the system.

When people need to keep a door open, if they don't see a doorstop in the immediate vicinity after two seconds of looking, some will just use whatever heavy object that is closest and consider the problem 'solved' instead of managed.

I needed data, I didn't know where to put it, this thing can give me data, boom, solved.


Yes, but the solutions I have seen seem to also store the state in a cookie and then check against it on the redirect to verify it didn't change.

SAML also has a RelayState parameter.

Not a huge loss. If you depend on federated logins, it's just a matter of time until Google's or Facebook's algorithms decide to ban your account without explanation or recourse, and then how do your users access your site? All you'll be able to do is try to shame the companies on social media and hope enough people are outraged that the company takes notice.

Bearer tokens via post parameters seems a lot easier / less problematic than cookies.

Disabling cross site cookies breaks many sites.

No it does not. I've had 3rd party cookies disabled for as long as I can remember. I've found less than five sites that had issues.

It's going to break all 3rd-party social layer providers. Most news sites don't have native comments and rely on a 3rd party like Disqus, whose login state is stored as a cookie. It's also going to break all the OpenID stuff that is heavily used in organizations like Walmart. OpenID is all based around cookies. I remember having to rebuild our provider when Safari released an update so that you can't set 3rd-party cookies without user interaction.

>> It's going to break all 3rd party social layer providers

Good. Disqus had it too easy.

>> It also going to break [..]

Good. They had it too easy.

I'm absolutely loving the fact that my switch to Firefox is paying off. Finally!


That type of attitude toward the millions of users that use Disqus just shows why Firefox is a dying browser with an ever-decreasing install base. Funding will keep decreasing, as it is tied to search engine deals, which are based on active users.

Anything that shields me to some extent from the "grab money fast, before anyone notices we're fucking them over" companies out there is a champion, as far as I'm concerned.

> Funding will keep decreasing as it is tied to search engine deals

Good. They had it too easy. I'd pay $20 for clean version of FFX on Mac/iOS App Store.


No, Firefox is dying because browsers aren't a product anymore; they're a feature of the OS.

Thank Microsoft, Google and Apple for that.


there are valid criticisms of firefox but breaking disqus is a bizarre one. when is the last time you used it? my impression is that these days the literal majority of content produced on it is spam, and it's been this way for the better part of the last decade

What did you do instead? Redirects?

Same. I've always had 3rd party cookies disabled for as long as the option has existed (which is a long, long time). Never noticed any problem to me.

I guess we use different sites then. I should specify that I mean it doesn't keep me logged in. I consider this breaking, because if I click a link to that site, it loses the original context once logged in.

It's a shame because local storage and friends aren't quite as secure (no way to block all JS from accessing it like you can with cookies).

What would be the point of localStorage if JS couldn't access it? Cookies can be set and read via HTTP headers, but is localStorage available by other means than JS?

No, it is only accessible from JS. Parent comment does not make sense.

By that logic, we should turn off our computers to improve security.


Is this really an issue? If the attacker has XSS on your site you're already screwed because they can manipulate the DOM to simulate user actions.

It means they can't exfiltrate the cookie, which I think is a pretty nice win, even if they can still perform requests to the domain with that cookie.

For one thing it means they're locked to my session.


How would they steal HTTP-only cookies this way?

They wouldn’t steal the cookie, they’d just have the script send the requests as the user directly.

sounds like a desirable feature to me

Agreed, that's why I use it!

The only sites that really break are organizational websites, which you can whitelist anyway.

why?

Good question. Third-party login sites mostly don't keep me logged in, kick me out, don't let me log in, etc.

Give us some real concrete examples. This does not match my experience at all so I'm dubious.

I have trouble with google login (url must be copied into a google tab) and oracle cloud loses my tenancy home region every few minutes (https://i.imgur.com/ZCsepq3.png). Several other examples like LMS's that use O365 to log in must be manually logged in every time

I use both Google and O365 at the educational institutions I work at, and both platforms work fine across a wide variety of applications. Strange that you are experiencing these issues.


People have been asking that question for twenty-five years.

No one but idiots like me wants to figure out how to unbreak every other site they go to.

What sites does it break for you?

Nice, sounds like I can get rid of the extension I use to toggle `privacy.firstparty.isolate`.

> In addition, Total Cookie Protection makes a limited exception for cross-site cookies when they are needed for non-tracking purposes, such as those used by popular third-party login providers. Only when Total Cookie Protection detects that you intend to use a provider, will it give that provider permission to use a cross-site cookie specifically for the site you’re currently visiting. Such momentary exceptions allow for strong privacy protection without affecting your browsing experience.

That's exactly why I have to toggle it. Anyone that uses auth0, and many publications sites (follow a link to a PDF, get redirected to `/cookie-absent` instead) fall foul.


Moreover, I've heard loud voices before that controlling 3rd party cookies will break login providers - guess what, it turned out if there is a will, there is a way.

I find this very annoying. An OpenID Connect provider is perfectly capable of working without using third-party cookies. The only reason they need them is to allow OIDC authentication without actually redirecting to the provider (by using a hidden iframe to do the OIDC flow on the same site). But if 3rd-party cookies are disabled it should just fall back to the normal OIDC redirect.

The OIDC front channel signout functionality relies on third party cookies to work properly. This feature has the IDP basically loading your app's end session page in a hidden iframe.

Similarly the OpenID Connect Session Management feature (check_session_iframe) also depends on the ability to use third party cookies.
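Roughly, the RP side of that session check looks like this (origins, IDs, and the helper are made up; the "client_id + space + session_state" message format is from the Session Management spec):

    const opOrigin = 'https://idp.example';                  // hypothetical OP
    const opFrame = document.getElementById('op-session-frame').contentWindow;
    const clientId = 'my-client-id';                         // hypothetical
    const sessionState = getSessionStateFromAuthResponse();  // hypothetical helper; value comes from login
    setInterval(() => opFrame.postMessage(clientId + ' ' + sessionState, opOrigin), 5000);
    window.addEventListener('message', (e) => {
      if (e.origin === opOrigin && e.data === 'changed') {
        // the OP session changed: re-authenticate silently or log out locally
      }
    });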

This functionality is needed to be able to detect if user logged out from front-end code without relying on having any back end code that could receive either a front-channel or back-channel signout notification and send it back.

In the absence of that a pure SPA with no backend could only detect the logout if access tokens are stateful, and they get an error message back that the token refers to an ended session.

Some people get really cranky if a single sign out feature does not actually sign you out of everything.


Sorry you're right. I was just thinking about sign in. But at the same time it seems like the cat is already out of the bag on this one. Safari already blocks all third party cookies by default and it seems like other browsers are moving in the same direction.

"Nice, sounds like I can get rid of the extension I use to toggle `privacy.firstparty.isolate` ..."

Forgive me ... do I understand that there is a true/false setting in Firefox named "privacy.firstparty.isolate" that you like to toggle from time to time ... and you use an extension to do that ?

I don't do much browser customization and use only one extension (uBlock Origin) but ... couldn't I toggle a single Firefox setting with a simple command line ?

Why would you need an extension to do that ?

Genuinely curious ...


Toggling it manually requires going to about:config, and searching for it.

On startup it's enabled (i.e. do isolate) via a config file, so I could change it there with a shell script. I think though that I'd have to restart Firefox for it to take effect.

The extension gives me a handy button in the toolbar that's red (danger) when it's off (i.e. not isolating) that I can just click to toggle.

Yes, it's a tiny job for an extension, but do one thing well, right? Also, to be honest, it's easier having it there than switching to or pulling up a new shell.

Afk to confirm, but pretty sure this is the one I use: https://github.com/mozfreddyb/webext-firstpartyisolation


So if I happen to run a less popular third-party login provider, my platform will break and I will need to lobby for an exception...?

No. There’s no hard coded list. You get the same heuristics as everyone.

They don't spell it out here, but I wonder if this means that third-party embedded web software requires the Storage Access API now.

It's not particularly fun to implement. It's not hard, but the heuristics are enough of a nudge that it can create weird experiences for users.

"I thought I already signed in, but after I navigate, I have to click sign in again, and a window pops up and then I'm automatically signed in? Why?"

Edit: Yeah, seems so.

https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Pri...

See also: https://webkit.org/blog/8124/introducing-storage-access-api/
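The embedded-frame side of that flow looks roughly like this (a sketch; the request only resolves when called from a user gesture such as the sign-in click):

    // Inside the third-party iframe
    async function ensureStorageAccess() {
      if (await document.hasStorageAccess()) return true;
      try {
        await document.requestStorageAccess(); // needs a user gesture
        return true;  // this frame's cookies are unpartitioned from here on
      } catch {
        return false; // denied: fall back to a top-level redirect login
      }
    }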


I’ve heard the whole name for this is Total Cookie Protection/Identity Protection, or TCP/IP for short.

/j


What's the difference to setting "privacy.firstparty.isolate = true"?

And what's the migration path for users who have been using that setting previously?

Can I now disable it? Do I have to disable it?


Maybe I don't know enough about cookies, but it's kind of shocking that this wasn't the behavior from day one. I suppose it's one of many things designed for a simpler time, but so many of those have been fixed by now.

Kind of an important point: this appears to be an attempt to make third party cookies useless, without actually disabling them since many sites depend on them. This is achieved in two ways:

1. By allowing third party cookies, but compartmentalizing them by the first-party site that sent the request (a much better name for this feature would be "per-site cookie containers", "total cookie protection" is completely uninformative).

2. By using a heuristic to selectively allow cookies to be accessed across the container boundary if they are actually needed, e.g. for logins.

To answer your question, this doesn't make sense as "day one behavior" because it's basically a patch to work around a historical problem with as little breakage as possible. If you were setting up cookie permissions on day one, knowing what we know now, you wouldn't kneecap third party cookies, you'd disable them entirely. Mozilla is trying to make third party cookies useless for 99% of what they're used for: if that's how you feel about third party cookies, you'd just not implement them.

Incidentally, I do block all third party cookies by default and have for years. That's a much stronger approach than the compartmentalization that Mozilla is attempting. I can count on one hand the number of sites I've seen break because of this, most of them are happy to let these cookies fail silently.


There is so much legacy tech out there that is still working on the trust level from back when DNS was a hosts file you manually copied to your system once in a while.

BGP and SS7 are other famous examples.


Is this really effective for the users' privacy? Won't AdTech networks simply migrate to browser fingerprinting, perhaps with a bit of server-side tracking?

I'm not arguing to give up. Rather, I'm more convinced in investing in privacy NGOs like noyb.eu and make it expensive to toy with my privacy.


> Won't AdTech networks simply migrate to browser fingerprinting, perhaps with a bit of server-side tracking?

they don't even have to. Just store two (or N) sets of cookie trails, as they already do. This will waste a few MB of storage on the client side and change nothing for ads or privacy.

Sites never shared the ID anyway, especially since GDPR et al.

Ad tech works like this: you send a hash of one ID and on the backend attach all the profile info (nobody will ever share that with partners, because that is gold); the other side just assigns their own hash of their ID and also keeps all their targeting info on their backend. The only thing that matters is that party A's ID123 is known to match party B's IDabc. Note that those IDs are transient and set at random, because party A and party B don't want to give up their secret info by matching IDs from multiple sites. That is called cookie matching. It does NOT depend on a single cookie jar. It doesn't even depend on cookies! Why do you think most ads (and Google search result links - ha!) have those weird hashes appended? Zero cookies needed.

Another thing that helps even more than 3rd-party cookies is the cross-site referrer, but Google killed that on both Chromium and Firefox a long time ago (Firefox still has the about:config way to disable it or set it to single-site or domain-only, but good luck finding a single human who changes that setting by selecting magic numbers).


This is wrong: third party cookies are still widely used in the ad industry. Among other things, the cookie matching that you describe is dramatically more effective with third-party cookies than first-party only.

(Disclosure: I work on ads at Google, speaking only for myself)


I never said it is not widely used or not effective.

Just saying that it won't matter much if removed from the equation.

I mean, if something makes your life easier, you would be a fool not to use it. But that is like saying not having a Ferrari prevents you from driving to the store.


Third party cookies are not simply a matter of making adtech developer's lives easier. Imagine you visit shoes.example and are now on news.example. Both of these sites work with ads.example, and the shoe site would like to show you a shoe ad.

With third party cookies this looks like (simplified MVP form):

1. When you visited shoes.example, it loaded a pixel from ads.example. That pixel automatically sent your ads.example cookie, and put you on a remarketing list.

2. When you visit news.example, it sent an ad request to ads.example, which also automatically sent your ads.example cookie. Now the ad tech vendor knows to include the ad from the shoe site because it recognizes the third-party cookie.
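In minimal form, the step-1 pixel is just an image tag, and the browser attaches the ads.example cookie to the request automatically (the query parameter is only an illustration):

    <!-- embedded on shoes.example -->
    <img src="https://ads.example/pixel?list=shoe-remarketing" width="1" height="1" alt="">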

On the other hand, without third-party cookies or any replacement browser APIs, how do these identities get joined? Very occasionally someone will follow a link between a pair of sites, and then you can join first party identities, but you probably don't have a chain of identities that connects a news.example first-party identity to a shoes.example identity.


>On the other hand, without third-party cookies or any replacement browser APIs, how do these identities get joined?

1. When you visit shoes.example, it has an iframe to show an ad from ads.example. This iframe runs some JS to compute a browser fingerprint and then nests an iframe to hxxps://ads.example/?target=shoes.example&client=$fingerprint . The ads.example server records that this fingerprint has visited shoes.example

2. When you visit news.example, it has an iframe to show an ad from ads.example. This iframe runs some JS to compute a browser fingerprint and then nests an iframe to hxxps://ads.example/?target=news.example&client=$fingerprint . The ads.example server recognizes the fingerprint, knows that the client visited shoes.example earlier, and returns a shoes ad.


My parent claimed this was possible to do with link decoration and first party cookie matching, and I'm saying it isn't.

I do agree this is possible to do with fingerprints, though (a) all the browsers are trying to prevent fingerprinting and (b) a reputable ad company would not use fingerprints for targeting. This is my understanding of why Google is putting so much effort into https://github.com/WICG/turtledove

(Still speaking only for myself)


Thanks for the description! I would love to read a longer post on the topic with an MVP implementation / demo.

Right, if you know how cookies and URLs work, all that can happen with zero cookies and some query parameters, like the ones Google search surreptitiously adds to every search result.

Cookie sync. It's a freaking industry standard. And you want us to believe Google's cash cow will dry up as soon as the effort they are leading goes live?


No, it is not possible to remarket at any meaningful scale with "zero cookies and some query parameters" (though Arnavion's sibling comment is correct that it can be done with fingerprinting). Would you be up for describing how you'd do it in the shoes.example/news.example/ads.example case?

> you want us to believe google money cow will dry as soon as the effort they are leading goes live?

"we are confident that with continued iteration and feedback, privacy-preserving and open-standard mechanisms like the Privacy Sandbox can sustain a healthy, ad-supported web in a way that will render third-party cookies obsolete. Once these approaches have addressed the needs of users, publishers, and advertisers, and we have developed the tools to mitigate workarounds, we plan to phase out support for third-party cookies in Chrome. Our intention is to do this within two years." -- https://blog.chromium.org/2020/01/building-more-private-web-...

They are describing adding new capabilities to the browser that would make this possible, in a privacy preserving way. Ex, https://github.com/WICG/turtledove/blob/master/FLEDGE.md

(Still speaking only for myself)


Hint: the same way attribution happened in the early days.

Google sends id abc to shoes.com and id xyz to news.com. Both send those ids back to Google's own ad server. Presto, Google knows you are seeing those two ads.


btw, the only way to fix this mess and not break the internet in the short term is to fix the UI, not the black magic hidden from the user.

Just show 1st class useful controls on the browser UI for cookies and the problem solves itself. what EU cookie law should have been.

Every user understands "site A wants to store a save file" and "site A wants to access a save file". Nobody understands cookies, same-origin, and CORS.


Yeah, the cookie law was a false start. Laypeople don't care about the exact technical implementation (e.g., session cookies vs. persistent cookies vs. local storage vs. browser fingerprinting).

What I care about as an EU citizen: are you collecting and storing information that can directly or indirectly identify me? Yes, tracking and profiling are included in this.

You want to store some session cookies, so you remember my shopping cart? Go ahead!

You want to store some cookies, so you remember I was logged in? Sure!

You want to use every available technological loophole to follow my every path on the Internet? Errrr, no thanks!


I see this as a test of government. A well functioning government will iterate on their laws and see what they got right/wrong and improve it.

I'll keep my fingers crossed for a GDPR 1.1 that patches some of the things they got wrong.


I think the cookie law is somewhat meh, but I feel GDPR is pretty future-proof. I don't expect GDPR to change a lot; rather, our application of it (so-called ECJ recitals) will evolve.

Wouldn't you agree?




> Total Cookie Protection creates a separate cookie jar for each website you visit.

This should have always been the only way it worked. Every website should run as if it were opened in a separate browser.

> third-party login providers

Don't use these, it's a trap.


> Don't use these, it's a trap.

Except if you're setting up SSO for your company's employees. Using a 3rd party login provider is a necessity. You shouldn't trust employees to create unique / strong passwords for every individual service they login to.


Or if you're setting up a SaaS application where some of your customers will want integration with their own SSO. We don't have developer time to spare implementing that sort of thing but Auth0 lets us do it as one of its built-in integrations.

It lets us offer SSO with whatever Auth0 supports as a freebie add-on, instead of "well, we could work with your platform but it's gonna cost you."

I don't see how it's a trap, except that we have to pay auth0 a monthly fee to handle our authentications instead of having some number of hours a month spent maintaining and securing our customers' logins and integrations.


I don't see why OAuth doesn't solve this problem for you.

Would a password manager solve that problem?

Not really, at scale.

SSO is a must in any big organisation, there are tens or hundred of applications.

People are incredibly and consistently bad with security. You really need a way to be able to cancel all accesses in one swoop for any individual.


Not only that. As a user, it's incredibly frustrating to enter a password 5 or more times each morning. This results in users using extremely weak passwords.

The same is true for forcing users to reset their password every 50 days or so, by the way. This outdated password guideline doesn't seem to die. I know way too many cases where people are using a weak base password with a number attached to it because they got sick of trying to remember a new password every month.


> The same is true for forcing users to reset their password every 50 days or so, by the way. This outdated password guideline doesn't seem to die. I know way too many cases where people are using a weak base password with a number attached to it because they got sick of trying to remember a new password every month.

There are people who actually invent a new password every time instead of cycling numbers?

Also, changing your password a few times until the history is flushed and then switching back to the password you started with is a thing.


Well, sadly this rule about password aging made its way into some regulations. We know it is idiotic, but it is the law.

SSO is more than password management. It is instant provisioning and deprovisioning of users. Role management and auditing. Enforcement of security standards like 2FA in a central place.

Not really relevant for the specific topic, but to be more precise, SSO is only the sign on part. Usually the provisioning/de-provisioning is handled by SCIM, which is related but distinct. You have some SaaS products that offer SSO but not SCIM, for example.

Curious what IDP service doesn't provide SCIM and just SSO. Doesn't SAML 2.0 have SCIM support?

Sorry, I should have been clearer. When I typed SaaS products I meant non-IDP products. They might support SSO but not SCIM-based account provisioning, especially if the auth is in-house (not using something like Auth0). I worked on a product that supported SSO but not SCIM for a long time, and even then not all SCIM features were supported.

Who is the best SSO provider?

Where can I learn about best SSO practice/implementation?


I've used Okta to provide gateway access to physical devices and AWS roles in the same deployment. Very impressive when every endpoint and SaaS product is behind a single 2FA login.

Okta is my favorite. One Login is cheaper but have never used it.

If you can enforce that they use the password manager, it solves that one problem.

But SSO centralizes access management. For instance, with one switch I can set password requirements, require 2FA, and grant/revoke access to all of an employee's services when they join the company or leave.


I'm sure there are ways to use 2FA or OTP without externalising access management to Facebook, Google, or another SSO provider, unless you want to pick convenience over privacy and security.

How do you enforce it over a bunch of 3rd-party software which either doesn't support 2FA or doesn't support enforcing it? If they support SSO, which they usually do, it's a non-issue.

There are, but writing your own authn/authz is about as wise as writing your own cipher. https://www.schneier.com/crypto-gram/archives/1998/1015.html...

I'm talking about using a library like privacyIDEA or something else, not writing your own.

How do you centralize your authn and your 2FA provisioning? How do you ensure that your cloud-native apps have access to the auth backend without risking exposing the wrong ports on the wrong VPC?

Just adding a library to application code is not sufficient. What I mean is that organizations should not roll their own SSO provider. At the very least, work with one of the many companies that offer it as a product or service. If your threat model requires it, you can host the product on premises.

No because you want to be able to offboard/disable those accounts without having to manually do it for each one.

> Don't use [third-party login providers], it's a trap.

Pretty hard to avoid in many cases. Logging in to your Microsoft account for Office (Teams, Outlook, et al.) uses a login service, as does Google, and practically all services that span across multiple domains. Which includes all of the major ones, at this point.

Good that Firefox gives us this option, given how the web has evolved!


For what it's worth, I find third-party logins (e.g. Spotify via Facebook) to be a nice convenience feature that I use quite often.

i don't think anyone would deny that third party logins are convenient -- either from the user perspective or from the developer perspective. but they are also a huge vector for privacy-invasive ad-profiling, if that's the login provider's business model.

I'd bet that for the average user the privacy impact of tracking is much less significant than the privacy impact of constant account compromises.

that is true, but that is virtually always because of password re-use. if you use a password manager and randomly-generated passwords unique to each service, this is almost entirely mitigated.

with a single third party login for all services, though, if that third party account gets compromised the results are catastrophic.


> with a single third party login for all services, though, if that third party account gets compromised the results are catastrophic.

The same can be said of the password manager account. It's turtles all the way down.

The fact that we rely on users to not reuse passwords, the fact that using a password manager is all but required to get reasonable security despite being far from convenient, these indicate a major failure to serve the actual needs of users, in my view.

Users have head space for 1-3 strong passwords. They can tolerate carrying maybe 1 security token with them. They can tolerate a little bit of security setup when using a new device for the first time, and they can tolerate a touch or fingerprint scan at authentication time. All authentication systems can and should operate within these parameters.

No web site or app outside of an authentication provider should ever present a user a screen asking them to pick a strong password that they have never used before. That is asking a user to do something that the human brain cannot reasonably do for 99% of the population. At best, a browser or password manager will intervene at that point and pick the password for them. At worst, the user ignores the warning and picks the same password they use for everything else.


> The same can be said of the password manager account. It's turtles all the way down.

What password manager account? What are you talking about? There is no password manager account. Yes, I have heard that some weird people synchronize their passwords to some strange 3rd-party services, but those don't matter. You have one password: the encryption password for the login database, and that one is local and never transmitted over the internet. If you know of a password manager that sends this decryption password to its servers, please open a topic here and it will be bashed to hell for it.

I am a tad more strange: my password manager is synchronized with my SFTP server using a private key, and I randomize not only the passwords for each site but also the email address (imagine sha(user+salt) + delimiter + sha(domain + master password)@mydomain.com; a rough sketch follows below). And I will never in my life use any SSO, as they are mostly spyware designed for tracking users across sites, certainly not for what they are advertised as. They will break with Firefox's latest addition? Fine! At least people will stop using them.

Company self-hosted SSOs are another matter. Sure, I can trust those for company services. For anything else, like "login with Google" or "login with Facebook"? Yeah right, my heart jumps with joy and can barely wait to use it. It actually works in reverse: if you don't allow me to register with a non-SSO account (email, password), I won't use your service/webpage/whatever.
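
A rough sketch of that aliasing scheme (SHA-256, the 12-character truncation, and the "." delimiter here are illustrative choices, not an exact recipe):

    import hashlib

    def _h(s: str) -> str:
        # Illustrative: SHA-256 truncated to keep the address readable.
        return hashlib.sha256(s.encode()).hexdigest()[:12]

    def site_email(user: str, salt: str, domain: str, master_pw: str) -> str:
        # sha(user + salt) + delimiter + sha(domain + master password)
        local_part = _h(user + salt) + "." + _h(domain + master_pw)
        return local_part + "@mydomain.com"

    # Each site gets its own address, so any leak is traceable per site:
    print(site_email("alice", "s3cret-salt", "news.ycombinator.com", "hunter2"))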


What about two-step verification via an authenticator app or SMS? Is that spyware? Or do you have a self-hosted solution for 2FA too?

> but they are also a huge vector for privacy-invasive ad-profiling

Do they actually do this? Also, don't most of the big ones allow you to opt out of personalized ads?

I like this because it's easier to have strong 2FA with backup codes on a few well-protected accounts than to set it up for every tiny site.


With all due respect, have you thought about the consequences of losing access to your login account?

This is a feature in corporate contexts.

A good password manager beats this hands down for convenience, privacy, and security.

It doesn't for corporate usage... having to create accounts for every new employee on every service you use, and then removing those accounts when someone leaves, is not scalable. SSO is needed.

I use 1Password (and the browser extension) for all my passwords, but I still choose "Sign-in with Google" when that's an option.

The "Sign-in with Google" button is makes it much quicker to create an account and slightly quicker to log in.

Also, I can rely on my Google 2FA rather than setting up and filling in a different TOTP for each site. Something like U2F or WebAuthn would make the filling-in part more convenient, but even sites that offer 2FA usually don't offer those. (And many sites don't even offer 2FA.)

Using 1Password's 2FA feature would make TOTP more convenient, but I'm a little nervous about putting 2FA in 1Password. This might be overly conservative thinking, though.
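
For reference, a TOTP code (RFC 6238) is just an HMAC-SHA1 of a 30-second time counter under a shared secret, which is why a password manager can offer it alongside passwords; a minimal Python sketch (the base32 secret below is a throwaway demo value):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        # RFC 6238: HMAC-SHA1 over the number of 30-second steps since epoch.
        key = base64.b32decode(secret_b32.upper())
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # throwaway demo secret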


I agree it can be super convenient, though 'Sign in with Google' is totally broken for me, because I've accumulated a handful of Google accounts.

Every time I log in to a service, I have to guess which account it's associated with (bearing in mind I may have signed up years ago). And if I'm wrong, half the time it immediately attempts to create a new account, and then I'm stuck with a bunch of empty dummy accounts on various services.


>> Total Cookie Protection creates a separate cookie jar for each website you visit.

> This should have always been the only way it worked. Every website should run like if it was opened in a separate browser.

FYI: Extension "Temporary Containers" does this: https://addons.mozilla.org/en-US/firefox/addon/temporary-con...


I have no choice but to. The school services I must use are all tied into O365.

Mozilla is really fighting the good fight for users' privacy. I've been using Firefox for as long as I can remember, even when there were faster and more fancy alternatives available. Their ideology and service to the user are what keep me loyal to them.

I've noticed that Firefox has become even snappier than Chrome.

One big advantage is that I now have way more addons installed on Firefox that would otherwise make Chrome utterly slow and unusable.


I have tried the regular as well as the developer version of Firefox, but no matter what I use, YouTube videos always skip frames every 10-15 seconds or so. So I use Brave for YouTube and other WebGL-heavy stuff, and the Firefox developer version for daily browsing.

That sounds very strange. I certainly don't see that in Firefox on Mac (work laptop) or on Linux and Windows (personal laptops). Try adding the h.264 extension. That forces YouTube to serve h.264 video, which is hardware-accelerated on pretty much any hardware.

Tried the addon. Still nothing. I have also tried clean installing Firefox with no addons, but same issue.

Adding that extension disables 4k video on YouTube.

I don't know if you're on Linux, but I had issues with YouTube as well. Two things helped me: an updated graphics driver and Wayland.

I'm on Windows 10 with the latest drivers for an Nvidia 1050 Ti. Still the same issue.

> even when there were faster and more fancy alternatives available

This seems to indicate there are no faster alternatives around anymore, but the last time I tried FF (4-6 months ago) I couldn't make the transition, because the lag was pretty obvious coming from Chrome-based browsers. Is this not the case anymore?


I use Firefox and Chrome at the same time and I don't really notice any difference. Maybe a bit for Google apps (Hangouts, Docs, Meet, etc) but I just see that as a symptom of Google's attempts at using their market dominance to harm competitors, which makes me want to use Firefox even more.

It seems to me that Google is always trying to make their products run much slower on browsers that aren't Chrome.

It's unlikely they put any effort into intentionally making them run slower; it's just that they are written to work optimally on Chrome, and there are minor differences in the behavior of things like V8 vs. SpiderMonkey and Blink vs. Gecko. Given that each is written with different tradeoffs, it's not surprising things perform differently.

Whether the Google programmers use specific proprietary knowledge about the behavior of Chrome to optimize performance is a different question. If they do, that would be similar to the things that got Microsoft in trouble.


I'd agree with you, except for Google's long and sordid history of doing exactly that, time and time again (found with a 30-second search):

https://tech.co/news/google-slowed-youtube-firefox-edge-2019...

https://www.techspot.com/news/79672-google-accused-sabotagin...

https://www.zdnet.com/article/former-mozilla-exec-google-has...

Google knows that every time they ship a bug that affects Firefox, FF's user percentage goes down a tiny bit. Repeat over dozens of bugs, for years, and you have a strategy.

There's one blog post from another Mozillian that I can't find anywhere that came out within the last year with other examples, I think it was on HN.


> There's one blog post from another Mozillian that I can't find anywhere

You are looking for https://web.archive.org/web/20180728122724if_/https://twitte...


I read that post. It was enough to convince me of malice at the time. I don't have the link though.

What is your opinion of Brave Browser?

I use Brave + Ublock exclusively.


I haven't tried Brave, never understood the point of it. What does Brave + uBlock offer you that Firefox + uBlock doesn't?

I hope you mean uBlock Origin.

Brave and uBO share filter tech and we aim to make uBO unnecessary (this may require setting shields to aggressive). We do much more than any extension can do, and Google has made it clear they will further restrict extension APIs.

https://www.theregister.com/2019/05/29/google_webrequest_api...

https://brave.com/privacy-updates-7/ (latest in series)


What does Google restricting APIs in Chrome have to do with Firefox? I haven't heard of any plans like that from them.

Firefox has had the same API as Chrome for a while.

I think this might be more about perception than anything else.

I've used Firefox since 2006, and Chrome always seemed heavier, laggier and uglier. Maybe it's the snappy iOS-like animation when you scroll to the bottom of the page that makes it seem snappier?


It's not imaginary - for years Firefox drained the battery on MacBooks really fast. Then there is this pesky issue of randomly freezing the whole laptop for a minute or so, usually associated with file uploads or locking the screen [1], [2], [3], ... Fixed in one version, then it appears again in the next version.

I still used Firefox a lot for various reasons (and still do), but I'm not blind to how it performed.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1595998 [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1415923 [3] https://bugzilla.mozilla.org/show_bug.cgi?id=1489785


Firefox is fine and quick as long as you don't need to use any heavy Google apps. Some people might even consider this a plus. For me, between work and personal use I'm effectively married to Gmail, Google Calendar, Google Docs, and Google Hangouts. Unfortunately that makes Firefox a non-starter for me. Not to mention Firefox's privacy settings trigger countless reCAPTCHA gates across most of GSuite. I get that this is not Firefox's "fault" and it's done intentionally by Google, but as a user it becomes my problem.

I really want Firefox to work for me and I'd love to drop Chrome, but last time FF made big noise about performance improvements I tried it out and Gmail was still unusably slow.


I use Google Calendar and Google Docs without any issues in Firefox. I agree Gmail is coded terribly, and I do not use the web site! I stick to using Thunderbird on the computer and checking email on my phone. I have not been using Hangouts for a couple of years, though.

For me, the way Google is keeping Gmail terrible for other browsers is exactly the reason to not use Chrome. No way I'm OK with that.


FWIW I use all of those apps on a daily basis with Firefox and have not noticed any performance issues. It may be worth giving it another try if you haven't in a while.

Indeed. Hangouts is one I find works even better in Firefox! But I observe that it seems to vary. Perhaps Intel Macs have some quirks that make it more performant and reliable in Firefox.

I switched to FF when Quantum came out. I use it exclusively. Not because I hate Chrome, but because I don't see any need for Chrome. Once in a while I see a website that forces me to use something other than FF, but it happens rarely, and it is mostly some WebGL-based, under-development demo website.

I even use it on my phone. The mobile version is definitely worse than Chrome, but it has plugins (or it used to! Nowadays it only supports a few popular ones, which is a shame), and also I can send tabs from my phone to my computer (which is a better place to read articles anyway).


Have not ever noticed any performance problems using FF for Google products, personally. Works great.

I switched back to Firefox last week and I had the same experience -- Google apps and Slack were dog slow. But after a day or so they were working fine; I imagine it's a matter of populating the cache. YMMV.

It also depends on the operating system, among several other variables.

I didn't find a noticeable difference between FF and Chrome-based browsers (Vivaldi, Edge) on macOS (although Safari runs circles around them) after using them extensively. I used each of them for a separate project with several common websites loaded in them; there were different quirks in each browser (especially regarding tab hibernation), but latency was not one of them.

On Linux, FF seems definitely faster than Chromium, although there are occasional DNS errors which stop web pages from loading altogether (likely a result of my own doing). I've stopped using different browsers for different projects and just use FF for all.

On Android, not just Chrome but even WebViews using it are astonishingly fast (e.g. the DDG browser); I presume it's because of the data saver feature. On de-Googled Android like LineageOS, FF/Fennec seems to be on the same level as Chromium, and DDG is faster here as well.

On iOS, everything is Safari.

I don't use Windows much, but I've seen others mentioning Edge seems to be faster than Chrome recently.


How much faster is it for you guys? I legitimately cannot tell the difference.

I find them close enough to be imperceptible for normal HTML, CSS, etc.

The stumbling block for me as a Firefox user is that I am increasingly bumping into web apps that perform poorly in FF but are fine in Chrome for one reason or another. One instance I bump into a lot is Elasticsearch's Kibana, which runs like trash in FF for some reason.


It sounds like the old "nobody uses Firefox because nobody tests on Firefox because nobody uses Firefox" vicious cycle, unfortunately.

I am guessing performance differences might be masked by good hardware? Sometimes performance differences don't show up until you use an underpowered machine.

I don't think it's just that. I have a half-dead Chromebook running Linux, and I use Firefox on it. Some years back I ran Chrome on it because it worked better, but at some point I started seeing issues with Chrome and tried Firefox again. I've been using Firefox since.

No. I still use Firefox, but when I use Edge or Chrome it hurts a bit just how much snappier they are.

I switched from Chrome to Firefox about a year and a half ago. Chrome definitely felt more snappy, but the difference wasn't that much.

Except on Facebook. My Facebook tab is incredibly laggy, and gets more and more laggy the longer I leave it open. I'm one of those users that tends to keep 50+ tabs open, and I have to close and reopen the Facebook tab at least once a day to keep it from becoming a nearly frozen mess. Even then, if a video is playing and I click it to make it fill the window, it takes several seconds for it to happen. And with an i9-9900K, 32 GB of RAM, RTX 3080, and a 1 TB NVMe drive, my computer is definitely no slouch.


But Facebook is a pile of... like, I have a screenshot of FF's task manager showing Facebook using 800 MB of memory!

In a way I see it as a win; I really, really hate opening it on desktop.


Did you have uBlock Origin installed on Firefox?

I feel that most people complaining about slow browsers have no blocker installed.


My CPU immediately jumps to 100% usage after opening Google Docs. Granted, it's on my old laptop, but I can use Electron apps and they run far better than GDocs.

Interesting, I have uBlock Origin and indeed I can't tell the difference between Chrome and Firefox.

Did you see lag on all websites? Or in specific instances? Which platform and on what kind of hardware?

Keep in mind that Firefox opens their website on first run and on every update, and that page includes Google Analytics.

I find the majority of their privacy claims dubious and dangerously misleading for those who don't know any better. If they were serious about privacy, they'd offer uBlock Origin (or equivalent functionality) preinstalled by default.

Their current countermeasures, such as containers, tracking protection, and this cookie thing, are trivial to bypass with browser fingerprinting and IP address tracking if you have a global view of the Internet (which Facebook and Google do have).
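
As a hypothetical illustration of that claim: a party that sees enough traffic can re-identify a browser by hashing a few stable attributes, with no cookies involved at all (the attribute set and function below are invented for the example):

    import hashlib

    # Invented example: hash a few stable attributes into a stable ID.
    def naive_fingerprint(user_agent, screen, timezone, fonts, ip_prefix):
        raw = "|".join([user_agent, screen, timezone,
                        ",".join(sorted(fonts)), ip_prefix])
        return hashlib.sha256(raw.encode()).hexdigest()[:16]

    fp = naive_fingerprint(
        "Mozilla/5.0 (X11; Linux x86_64; rv:86.0) Gecko/20100101 Firefox/86.0",
        "1920x1080", "Europe/Berlin", ["Arial", "DejaVu Sans"], "203.0.113",
    )
    print(fp)  # same inputs -> same ID, across sites, no cookies involved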


I modified the settings long ago to open a blank tab on startup. I use NoScript and do not allow Google Analytics through. No Facebook domains make it through NoScript as far as JavaScript is concerned, and very few Google ones do.

I get you about the updates. It's a risk-reward tradeoff I accept, because Firefox + NoScript + always starting in a private session is way more helpful than the update problem is harmful. Using a VPN a lot of the time helps, too. There is no solution I know of that is perfect. My threat model is pretty relaxed, though, so what I do is mostly for my peace of mind. You have reminded me that I should start spoofing my user agent again.


I don't disagree that it's possible to configure Firefox to respect your privacy. I myself use it sometimes and have a similar configuration.

But it is extremely misleading for them to be shouting "privacy" at every opportunity while the truth is that their browser leaks personal data like a sieve in the default configuration. This would give a false sense of security to non-technical people who don't have the skills to see through these lies.


And here are the FUD-spreaders yet again: rather than tolerate tiny "bad" things like some form of harmless analytics (it is not even that), they would run toward the goddamn gates of Hell itself. Like, what do you imagine Chrome does? Or do you think Brave has everything removed? It's the exact same browser with a different name and logo and a preinstalled ad blocker.

Sorry for the somewhat angry comment, but I honestly can't understand this mentality.


Google Analytics isn't harmless, though. It gives a single party a wide view of the entire Internet (and thus the ability to circumvent cookie-based tracking by just using IP addresses and heuristics), and said party makes its money by tracking people online.

I'm not saying Chrome is any better, but at least Chrome doesn't toot the "privacy" horn at every opportunity.

Brave does have some kind of blocker built-in which might actually help even if it's not perfect.


> Keep in mind that Firefox opens their website [...] on every update

I haven't experienced this since the rapid release schedule started. They're pretty silent now.


What do you think of enabling letterboxing, uBlock, and DoH to prevent fingerprinting?

Are there any other config changes you would recommend to Firefox to harden it?


Not only that, but Firefox for US users will track what websites you visit to target their discover campaign content.

https://discover.buysellads.com/firefox-new-tab


From Mozilla's Firefox New Tab FAQ:

"neither Mozilla nor Pocket ever receives a copy of your browser history. When personalization does occur, recommendations rely on a process of story sorting and filtering that happens locally in your personal copy of Firefox."

https://help.getpocket.com/article/1142-firefox-new-tab-reco...

