
Proposal: Add ForceMunlock option #760

Closed
IohannesArnold opened this issue Jun 2, 2024 · 9 comments

IohannesArnold commented Jun 2, 2024

Bbolt has its own Mlock option, which calls mlock on the mmapped db file. But mlock can also be in effect if bbolt is used in a process that previously called mlockall(MCL_FUTURE). If that call was made by a program that then opens a bbolt db, the entire mmapped db is paged into memory immediately upon creation of the mmap, even if the bbolt-specific Mlock option was not set.

This might not be desired. In my case, I'm writing on behalf of OpenBao, a community fork of HashiCorp Vault. Vault has an mlock setting which, if enabled, simply calls mlockall(MCL_FUTURE) early in the program execution. This then causes it to run into the above problem if it uses its embedded bbolt storage, so upstream Vault docs tell users to disable mlock in this case.

We want to fix this better in OpenBao. The most correct fix would be to not call mlockall but only call mlock on the specific regions of memory that contain sensitive data. That's too deep of an overhaul for this project right now though. Another route to fix this would be if bbolt could add an option to munlock the db mmap immediately upon creation. Then, while the rest of OpenBao memory would be mlocked, the pages of the bbolt db mmap would be free to be evicted back to disk.

This should be little more than a +10-line or so PR. The core of it would be adding, around ~bbolt/bolt_unix.go:61:

if db.ForceMunlock {
	munlock(db, sz)
}

I almost opened this as a PR, but ended up starting with an Issue in case someone wanted to discuss design or naming more. What do you think?

ahrtr (Member) commented Jun 3, 2024

I really think the upper applications (e.g. OpenBao in this case) should resolve this issue instead of bbolt.

  • If OpenBao already calls unix.Mlockall(syscall.MCL_CURRENT | syscall.MCL_FUTURE), then bbolt may fail at unix.Mmap. It means that there is no chance to execute munlock at all.
  • I am not a fan of such a proposal. Whoever locks the pages should be responsible for unlocking them. Distributing that responsibility across multiple repositories/projects is inelegant, and also error prone in my view.

One workaround proposal for OpenBao:

  • call unix.Mlockall(syscall.MCL_CURRENT) before you call bbolt.Open, bbolt.Update or bbolt.Commit.
  • call bbolt functions/methods...
  • call unix.Mlockall(syscall.MCL_CURRENT | syscall.MCL_FUTURE) afterwards.

Refer to https://man7.org/linux/man-pages/man2/mlock.2.html,

If a call to mlockall() which uses the MCL_FUTURE flag is
followed by another call that does not specify this flag, the
changes made by the MCL_FUTURE call will be lost.

cipherboy commented Jun 3, 2024

@ahrtr said (and thanks for the reply!!):

  • call unix.Mlockall(syscall.MCL_CURRENT) before you call bbolt.Open, bbolt.Update or bbolt.Commit.

  • call bbolt functions/methods...

  • call unix.Mlockall(syscall.MCL_CURRENT | syscall.MCL_FUTURE) afterwards.

I'm not quite sure I follow how this'd work.

The first mlockall would lock all presently mapped memory, sure, but wouldn't the third one mlock bbolt as well? I think it'd need to be unix.Mlockall(syscall.MCL_FUTURE) only to prevent that.

But if you ever closed and reopened bbolt afterwards, this new database would be covered by syscall.MCL_FUTURE, if I'm not mistaken? In other words, I think we'd need to do it around every bbolt write operation. That gets dicey with concurrent access if the parent doesn't lock, no? You risk mlocking other concurrent bbolt operations (if mlockall(CURRENT) is executed in one thread, then mlockall(FUTURE) in another thread, and then the first thread calls bbolt's update/commit operations and they mmap more memory...). Even then, you'd want to stop the world for database write operations, since any other memory you create concurrently may not be mlock'd (or must otherwise be fine with leaking to disk)...

Or, if you open the database and then update it, with the above pattern, wouldn't the bbolt db get mlock'd under the first mlockall(CURRENT) on the write op?

I think this means that, if we wanted to do this properly, we'd have to push mlocking into every area of OpenBao explicitly, since mlockall risks locking too many things while missing others?


I do agree in general that the mmap may fail before munlock can be called, so I agree it isn't an ideal solution...

So perhaps the real solution is push bbolt into a separate process and mlockall only the main server process?

ahrtr (Member) commented Jun 3, 2024

if you ever closed+reopened

Yes, my proposal also has significant limitations. You must open bbolt after calling unix.Mlockall(syscall.MCL_CURRENT), and close bbolt before calling unix.Mlockall(syscall.MCL_CURRENT | syscall.MCL_FUTURE).

perhaps the real solution is push bbolt into a separate process and mlockall only the main server process?

Yes, that's also a solution, but it may require big refactoring/changes in the upper applications.

tjungblu (Contributor) commented Jun 3, 2024

Maybe I need a quick reality check here with a very stupid set of questions:

Disabling mlock is not recommended unless the systems running OpenBao only use encrypted swap or do not use swap at all.

are there really people who run Vault-like software on unencrypted disk drives? How likely is it that you store the bbolt file on an encrypted drive, but not your swap? Who still uses swap in 2024? 🤔

Just to also leave something constructive: what about exposing an mmap interface where you could implement your own locking/unlocking around the actual mmap syscalls?

IohannesArnold (Author) commented:

Thanks @ahrtr. I agree that this is not the most elegant design, and that in principle lock and unlock shouldn't be separated like this. I wanted to suggest it as an upstream option because it could be an emergency escape valve for others. Vault is a professionally developed piece of software; if its developers could stumble into this design flaw, so might others. So add the option, but put a big scary warning in the docs that using it is a code smell: it exists to help you while you refactor your app. But I understand if you don't want this in upstream at all.

@cipherboy if this won't be added upstream, what do you think about patching a temporary fork of bbolt? I still think that getting munlock into bolt_unix.go is the fastest immediate solution to the problem, although not the best long-term solution. If we used a patched fork, we wouldn't even have to edit the Option struct; we could just add the munlock(db, sz) call directly, so this would be a +1/-0 patch, which should be pretty easy to keep in sync with upstream.

are there really people that do run Vault-like software on unecrypted disk drives? How likely is that you store the bbolt file on an encrypted drive, but not your swap? Who still uses swap in 2024? 🤔

I'm a new contributor to the project, so I don't think my opinion is authoritative, but I would think that, especially as the free community fork, the OpenBao project would want safe defaults for as many systems as could be thrown at it. I believe unencrypted swap is still the default in several Linux distros; Arch, for example.

cipherboy commented Jun 3, 2024

@ahrtr said:

@cipherboy said:

perhaps the real solution is push bbolt into a separate process and mlockall only the main server process?

Yes, it's also a solution, but it may need big refactoring/change on upper applications.

Indeed, but I think it might be the only viable alternative at the minute. In the Raft backend, bbolt is used as the underlying storage, and there tend to be lots of read/write operations running concurrently (well, only a single writer, obviously). We're working on pushing through transactional storage as well, which uses bbolt read-only transactions, so operations will need to be concurrent with memory allocation of secrets. Databases can grow rather large as well (I'm aware of 100s+ GB in production, IIRC).

@tjungblu said:

@IohannesArnold said:

Disabling mlock is not recommended unless the systems running OpenBao only use encrypted swap or do not use swap at all.

are there really people that do run Vault-like software on unecrypted disk drives? How likely is that you store the bbolt file on an encrypted drive, but not your swap? Who still uses swap in 2024? 🤔

Besides what @IohannesArnold has mentioned above, I think the other point is that OpenBao encrypts entries before writing them to bbolt, so encrypting the underlying disk is a lower priority. Not that it's bad to encrypt disks, just that in some scenarios it is hard to do securely or isn't done by default... How can you tell (e.g., as a pod in Kubernetes) whether your workload is running strictly on encrypted storage? My 2c, but I think the overall security posture from using mlock benefits more users, regardless of whether their underlying disk is encrypted.

(as an aside, the threat model of host compromise is outside the scope of OpenBao).

ahrtr (Member) commented Jun 5, 2024

Based on the discussion above, can we close this ticket, since there is no action needed on the bbolt side?

cipherboy commented:
Yes, I think this is good. Thanks @ahrtr and @tjungblu for your thoughts!

IohannesArnold (Author) commented Jun 5, 2024

Yes, thanks for your time and consideration.
