> To add the secret to the watchface code, you need to convert it to hexadecimal bytes. This cryptii.com page will allow you to do that conversion. Note you’ll have to enter your TOTP secret in uppercase.
I wouldn't be comfortable entering my TOTP secret into a random web page. In Linux (Ubuntu here, probably other distributions as well) you might have the `base32` and `od` tools already installed (package 'coreutils').
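For example, the coreutils pipeline `echo -n JBSWY3DPEHPK3PXP | base32 -d | od -A n -t x1` does the conversion locally. The same thing in Python, in case `od` output is awkward to read (the secret here is a throwaway example, not a real one):

```python
import base64

# Decode the Base32 TOTP secret (must be uppercase, per RFC 4648)
# and print it as hex bytes -- all offline, no web page involved.
secret = "JBSWY3DPEHPK3PXP"  # example secret only
raw = base64.b32decode(secret)
print(raw.hex())                               # -> 48656c6c6f21deadbeef
print(", ".join(f"0x{b:02x}" for b in raw))    # C-style byte list
```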
Otherwise the project is awesome (just the watch is fugly :))
If you're wondering why you see a weird ⌍ symbol from time to time on the demo: it's a "small 7", because the watch ties the top and bottom segments of the first and third digits (segments A and D) together.
It's really amazing how much efficiency they packed into this display. In normal use, these digits only need to display the numbers 0-5 [for the first digit, the clock only needs 0, 1, 2, but the chronometer goes up to 59:59.99], none of which need to distinguish between those segments. Technically the chronometer could have gone up to 69:59.99 without breaking anything, but I guess "one hour" is sufficient? The numbers 8 and 9 also illuminate both the top and bottom segments, so it's only 7 that is an issue.
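Assuming the conventional 7-segment encodings, 7 really is the only digit whose top and bottom bars differ, so tying A to D only ever distorts a 7 — quick check:

```python
# Conventional 7-segment encodings; A = top bar, D = bottom bar.
SEGS = {
    0: "ABCDEF", 1: "BC",     2: "ABDEG", 3: "ABCDG",   4: "BCFG",
    5: "ACDFG",  6: "ACDEFG", 7: "ABC",   8: "ABCDEFG", 9: "ABCDFG",
}
# Digits where A and D disagree, i.e. where wiring them together
# changes the shape (the "small 7"):
odd_ones_out = sorted(d for d, s in SEGS.items() if ("A" in s) != ("D" in s))
print(odd_ones_out)  # -> [7]
```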
I used TOTP for the first time yesterday on GitHub and I don't understand its point. I had to install the otpclient app (from the Ubuntu repository), where I typed in 4 strings and it spat out one number which I typed back into GitHub. An attacker could do this as well, so the only thing TOTP does is prove I can read and write. What am I missing here?
You are missing that the TOTP secret will only be presented once during setup.
It is now a second factor because you need to prove possession of the secret by entering the current TOTP code during login.
It will not be presented again, so an attacker needs to have been able to intercept the initial secret exchange. (well or phish for it etc.)
You are usually prompted to enter the code during setup to ensure the secret has actually been put into some authenticator and is not immediately going to be lost.
GitHub sent you those 4 strings while you were logged in and they are now stored on your computer. GitHub will not send them to an attacker that is not already logged in.
No they cannot. They should not/will not be able to view that initial TOTP generation code. That is the "secret" that determines what digits are generated at one time.
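To make the mechanism concrete: the code is just an HMAC of the current 30-second counter, keyed by that shared secret, so only parties holding the secret (you and the server, ideally nobody else) can produce a valid code. A minimal RFC 6238 sketch in Python:

```python
import base64
import hmac
import struct

def totp(secret_b32: str, t: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 of the time-step counter, dynamically truncated."""
    key = base64.b32decode(secret_b32.upper())
    counter = struct.pack(">Q", int(t) // step)          # 8-byte big-endian counter
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s -> 94287082
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59, digits=8))  # -> 94287082
```

Calling `totp(secret, time.time())` reproduces what otpclient shows; the server runs the same computation and compares.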
Doesn't have to be. While storing them on your computer does not protect you from an adversary with access to your computer, it still protects you against an adversary that intercepts (or guesses, maybe after a breach) your password.
For what it’s worth, whilst your point somewhat stands, generally just 2 devices are not considered 2 factors.
Usually, the factors are considered as:
- something you know (e.g. a password)
- something you have (e.g. a device token)
- something you are (e.g. a fingerprint or other biometrics)
Single-factor auth uses just one of these, which is why you can unlock your phone with either a passcode or a biometric at the same level of security (when talking about factors).
Two factors should have two unique ones of these, and in this case a TOTP generator on the same computer as you are logging in on is fine because the computer counts as “something you have” and the password you enter counts as “something you know”. An attacker who takes your computer still only gains 1 factor (disregarding secure enclaves and password protection etc) and doesn’t have both.
Of course, if an attacker manages to access both your password manager and your TOTP generator (whether or not they're on the same device), then both factors are compromised, because the "something you know" factor has been broken due to the things you know being stored somewhere.
Of course, the way you practice the security of each of the factors is important and can vary greatly depending on how much effort you want to put into it. For instance, keeping TOTPs only on hardware tokens which you never keep plugged in protects against your device being stolen.
E-mail or SMS codes are not 2FA then either, if the attacker has your device (presumably with the e-mail app logged in already and the password saved). But this seems like a dubious distinction; it's like saying 2FA is no longer 2FA if the attacker has access to the second factor. That's not particularly remarkable.
You can call it 2SV, though: two-step verification. But a user can certainly choose to use it in a way that makes it 2FA, by storing the TOTP secret on a dedicated device. The bottom line for most use cases is that it stops people from getting in even if they guess or crack your password.
With hardware tokens, there are still tradeoffs. What happens when the "user" (read: attacker) claims they lost or damaged the YubiKey? What factor do you use to verify them before sending a new YubiKey in the mail? What happens if someone breaks into the user's mail? Etc. No method is perfect.
The second factor isn't about a second device. It is additional to something you know (password), typically the second factor is something you have (device, yubikey, etc.).
The idea being that the intersection of {people who can get your password, such as through phishing or other digital attack} and {people who have physical proximity and can steal your physical device} are typically much smaller than the set of people in either category.
Conveniently saved in your browser :) Might not be easy to extract from a logged-out device, but grabbing the device quickly can bypass both "factors" simultaneously.
Makes me wonder how functions like CryptProtectData protect against physical disk access with a hex editor. The hash of the login password can be changed to anything, and obviously they cannot access the actual password, since it should be destroyed after hashing. So unless a TPM is involved, I don't see how it can be secure.
Not too long ago I implemented a new interface for defining the TOTP codes from within the source code. Unfortunately that work has invalidated the instructions in this article. It works like this now:
I love this, and have thought of doing the same with a dumb smartwatch, but... is it good opsec to have TOTP so visible/available? What about losing the watch or having it stolen?
What's the threat model here? An attacker is going to read this person's blog post, track them down in real life, and steal their watch to get access to their github account? That seems...unlikely.
Eh, I keep TOTP codes on my Pebble and am fine with it, they are labeled in such a way that doesn't make it obvious what services they're for.
There's basically no lock mechanism or security on a Pebble, but it's just a second factor.
If you have my randomly generated password, have done your intel to know I might have the TOTP on my wrist, and can physically steal my watch, you've got me beat and I'm okay with that for the convenience it provides.
All security is a balance of threat risk against potential loss. I love that you have a mix that works for you while staying reasonable about it.
We all have terrible, terrible tumbler locks on our doors because they are good enough to stop the extremely casual attempts but anywhere with unbarred windows is one rock from "unlocked" and we're generally fine with this for 99% of things.
Early totp devices were designed to look like pocket calculators when these things were less well known. But you are supposed to reset the key if you lose the device.