
Enforced PodSecurityStandards provide different log information to the user for Deployments than for Pods #125507

Closed
cck1860 opened this issue Jun 14, 2024 · 10 comments
Assignees
Labels
kind/documentation Categorizes issue or PR as related to documentation. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/auth Categorizes an issue or PR as relevant to SIG Auth. sig/security Categorizes an issue or PR as relevant to SIG Security.

Comments

@cck1860

cck1860 commented Jun 14, 2024

What happened?

I set up a namespace as described in the documentation example:

apiVersion: v1
kind: Namespace
metadata:
  name: my-baseline-namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.30

    # We are setting these to our _desired_ `enforce` level.
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.30
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.30
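
The labels can be confirmed with, for example (assuming the manifest above is saved as namespace.yaml; the file name is only an example):

kubectl apply -f namespace.yaml
kubectl get namespace my-baseline-namespace --show-labels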

When trying to make a request to deploy the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      hostNetwork: true
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

I am only getting warnings about violations of the restricted specification:

kubectl apply -f nginx.yml -n my-baseline-namespace
Warning: would violate PodSecurity "restricted:v1.30": host namespaces (hostNetwork=true), allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

The Deployment resource is created, but the pods are not spun up due to the violation of the baseline standard (because of the hostNetwork: true setting).

I expected to also get a message here that the deployment is not possible due to this violation. When I deploy just a pod with the same configuration:
kubectl apply -f nginx-pod.yml -n my-baseline-namespace
Error from server (Forbidden): error when creating "nginx-pod.yml": pods "nginx" is forbidden: violates PodSecurity "baseline:v1.30": host namespaces (hostNetwork=true)

I wonder whether Deployments should behave in the same way as a Pod configured as shown below:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
  - command:
    - sleep
    - 1h
    image: nginx
    name: nginx

What did you expect to happen?

I expected the kubectl output to show the same message for Deployment resources as it does for Pods.

Also, when running kubectl describe deployment nginx-deployment, there is no further hint related to the Pod Security Standards as to why the pods are not started:
kubectl describe deployment nginx-deployment -n my-baseline-namespace
Name:                   nginx-deployment
Namespace:              my-baseline-namespace
CreationTimestamp:      Fri, 14 Jun 2024 11:31:35 +0200
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               2 desired | 0 updated | 0 total | 0 available | 2 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.14.2
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
  Node-Selectors: <none>
  Tolerations:    <none>
Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       False   MinimumReplicasUnavailable
  ReplicaFailure  True    FailedCreate
  Progressing     False   ProgressDeadlineExceeded
OldReplicaSets:   <none>
NewReplicaSet:    nginx-deployment-7cc7dc59f6 (0/2 replicas created)
Events:
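
For completeness, the rejection reason should appear in the events of the ReplicaSet created by the Deployment, since the ReplicaSet controller is what actually attempts the pod creation; for example (ReplicaSet name taken from the NewReplicaSet line above):

kubectl describe replicaset nginx-deployment-7cc7dc59f6 -n my-baseline-namespace
kubectl get events -n my-baseline-namespace --field-selector reason=FailedCreate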

How can we reproduce it (as minimally and precisely as possible)?

The YAML files are posted above. I deployed this on Minikube with Kubernetes 1.30.2.

Anything else we need to know?

No response

Kubernetes version

$ kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.2

Cloud provider

Not applicable

OS version

$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ uname -a
Linux svmseadevel 6.1.0-13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) x86_64 GNU/Linux

Install tools

minikube start --kubernetes-version=v1.30.2 --container-runtime=containerd

Container runtime (CRI) and version (if applicable)

containerd containerd.io 1.6.28 ae07eda36dd25f8a1b98dfbf587313b99c0190bb

Related plugins (CNI, CSI, ...) and versions (if applicable)

n.a

/wg Policy

@cck1860 cck1860 added the kind/bug Categorizes issue or PR as related to a bug. label Jun 14, 2024
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 14, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.


@cck1860
Author

cck1860 commented Jun 14, 2024

/sig Policy

@k8s-ci-robot
Contributor

@cck1860: The label(s) sig/policy cannot be applied, because the repository doesn't have them.

In response to this:

/sig Policy


@cck1860
Author

cck1860 commented Jun 14, 2024

/sig Security

@k8s-ci-robot k8s-ci-robot added sig/security Categorizes an issue or PR as relevant to SIG Security. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jun 14, 2024
@neolit123
Member

/sig auth

@k8s-ci-robot k8s-ci-robot added the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label Jun 14, 2024
@ritazh
Member

ritazh commented Jun 17, 2024

/assign @stlaz

@stlaz
Member

stlaz commented Jun 18, 2024

Enforcement is actually not run on the pod controllers (such as Deployment), meaning that the enforce label is ignored and only the "warn" label applies for client-side warnings.

Conversely, the warn-level admission is not run when the enforcement fails.

@liggitt you added this code originally, does running the warn admission on a pod that already failed the enforcement check make sense? Or, looking at it from the other side, should we run enforcement at warn level for pod controllers, and then run the warn admission again?
Are we even able to convey warnings along with errors to the client side?

@liggitt
Member

liggitt commented Jun 18, 2024

/remove-kind bug
/kind documentation
/close

The reason we only issue warnings at the controller level is that we don't know the pod will be disallowed until a creation is actually attempted and any mutating admission plugins interact with the create attempt.

From https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/2579-psp-replacement#podtemplate-resources

Audit and Warn modes are also checked on resource types that embed a PodTemplate (enumerated below), but enforce mode only applies to actual pod resources.

Since users do not create pods directly in the typical deployment model, the warning mechanism is only effective if it can also warn on templated pod resources. Similarly, for audit it is useful to tie the audited violation back to the requesting user, so audit will also apply to templated pod resources. In the interest of supporting mutating admission controllers, policies will only be enforced on actual pods.

From https://kubernetes.io/docs/concepts/security/pod-security-admission/#workload-resources-and-pod-templates

To help catch violations early, both the audit and warning modes are applied to the workload resources. However, enforce mode is not applied to workload resources, only to the resulting pod objects.
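
As an aside, a server-side dry run of a plain Pod built from the same template should surface the enforcement rejection before any controller gets involved, because admission plugins also run for dry-run requests; for example, reusing the nginx-pod.yml from the report above:

kubectl apply --dry-run=server -f nginx-pod.yml -n my-baseline-namespace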

@k8s-ci-robot k8s-ci-robot added kind/documentation Categorizes issue or PR as related to documentation. and removed kind/bug Categorizes issue or PR as related to a bug. labels Jun 18, 2024
@k8s-ci-robot
Contributor

@liggitt: Closing this issue.

In response to this:

/remove-kind bug
/kind documentation
/close


@liggitt
Member

liggitt commented Jun 18, 2024

@liggitt you added this code originally, does running the warn admission on a pod that already failed the enforcement check make sense?

Rejecting the pod (enforce) takes precedence over warning.
