Currently, kube logstream reads agent tokens from the K8s pod spec and then writes to the agent log endpoint. This has a few flaws:
1. The agent token needs to be readable by kube logstream, which means `CODER_AGENT_TOKEN` must be in plaintext on the pod spec and not referenced via a secret (a security issue).
2. The agent token isn't actually usable until after the job is marked completed, as we only insert the resources/agents into the database on job completion.
3. These logs show as "agent startup script" logs, which doesn't make much sense, as they occur before the agent starts.
We should refactor kube logstream to use its own auth method (similar to provisionerd, proxies, etc.) and custom API endpoints for adding entries to the build log. These logs should appear just like provisioner logs, most likely in their own section.
Pods will need to be linked back to the build in some way other than the token to overcome problem 2. One method would be to require a `coder.com/workspace-id` annotation on the pod. Another would be to use a JWT for agent tokens that contains the workspace ID, but this isn't future-proof if cloud K8s supports instance-identity auth (and it also means problem 1 remains an issue).
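A minimal sketch of what the annotation approach might look like on a workspace pod, assuming the `coder.com/workspace-id` key proposed above (the pod name, image, and ID value are placeholders, not anything Coder currently emits):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: workspace-pod  # illustrative name
  annotations:
    # Hypothetical: logstream would resolve this annotation to the build,
    # authenticating to coderd with its own credentials instead of the
    # agent token from the pod spec.
    coder.com/workspace-id: "b9f6d1f0-0000-4000-8000-000000000000"  # placeholder ID
spec:
  containers:
    - name: workspace
      image: ubuntu:22.04  # illustrative image
```

Since the annotation is plain metadata rather than a credential, it sidesteps problem 1 entirely, and it works before job completion because no token lookup is involved.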
Ideally whatever method we use can also be used for customers to add to the build log so they don't encounter problem 3.
We could probably enhance the kube logstream to pull tokens from secrets, assuming it's given the correct permission.
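For reference, granting that permission would be a standard RBAC rule scoped to secret reads; a minimal sketch bound to the logstream service account (all names and the namespace are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: logstream-secret-reader  # illustrative name
  namespace: coder-workspaces    # illustrative namespace
rules:
  - apiGroups: [""]        # core API group contains Secrets
    resources: ["secrets"]
    verbs: ["get"]         # read-only; no list/watch needed if the
                           # secret name comes from the pod spec
```

The trade-off is that logstream can then read any secret in the namespace unless restricted further with `resourceNames`.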
Minting JWTs for agent tokens is attractive in that we wouldn't have to store anything but the public key in the database, but it introduces an operational burden around the private signing keys. Right now provisionerd mints the agent token, and if we kept that model for the JWT, then external provisioner daemons would need signing keys.
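To make the trade-off concrete, here is a minimal, dependency-free sketch of minting and verifying a token that carries the workspace ID. It uses a symmetric HMAC (HS256) for brevity; the proposal above implies an asymmetric algorithm so coderd only needs to store the public key. All function names and claims are illustrative, not Coder's actual API:

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    """JWT-style base64url encoding without padding."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def mint_agent_token(workspace_id: str, key: bytes) -> str:
    # Hypothetical claim set: the workspace ID travels inside the token,
    # so the pod can be linked back to its build without a DB row existing.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"workspace_id": workspace_id}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_agent_token(token: str, key: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

With an asymmetric scheme (e.g. Ed25519), `mint_agent_token` would live wherever the private key does, which is exactly the operational burden noted above when external provisioner daemons are the ones minting tokens.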