
[Feature Request] Crossplane CLI should support a standardized testing model for compositions #5710

lindblombr commented May 17, 2024

What problem are you facing?

Introduction

crossplane beta render is super useful for seeing the resulting output of a composition given an input XR plus optional observed resources and context. This makes it invaluable for initial shift-left testing of compositions during development.

Currently, we need to duct-tape together various tools to provide something approximating unit tests that are automated, easy to interpret, and useful for demonstrating composition fitness.

In order to facilitate this, we've implemented some simple shell scripting that allows us to define a test directory structure like so:

tests/
  - <composition_base_file_name>/
    - functions.yaml             # functions to be included during the test
    - <test-suite-directory>/
      - <n>-xr.yaml              # test case represented as an input XR
      - <n>-xr.assertions.yaml   # assertions applied to the `crossplane beta render...` output of <n>-xr.yaml
      - observed.yaml            # optional observed resources to use with `crossplane beta render...`
      - environment.yaml         # optional EnvironmentConfigs to use with `crossplane beta render...` and function-environment-configs
    - <test-suite-directory-2>/
      ...
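
For reference, environment.yaml holds ordinary EnvironmentConfig manifests. A minimal example (the name and data key are hypothetical):

apiVersion: apiextensions.crossplane.io/v1alpha1
kind: EnvironmentConfig
metadata:
  name: example-environment   # hypothetical name
data:
  region: us-east-1           # hypothetical key read by the composition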

The tool then iterates through the tests/ directory tree and, for each <composition_base_file_name>, looks for a corresponding <composition_base_file_name>.yaml within a top-level resources/ directory. If the file is found, we then orchestrate the execution of crossplane beta render... based on the test suite sub-directories present under the <composition_base_file_name> directory.

The orchestration involves injecting function-unit-test as an additional function step, with the contents of <n>-xr.assertions.yaml used as the desired assertions to check. If observed.yaml is present in the suite directory, we include it in the crossplane beta render... command arguments; the same is true for the optional environment.yaml. We then capture return codes and report back the overall status.
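
A minimal sketch of that orchestration, assuming the beta CLI's --observed-resources and --extra-resources flags (names may vary by CLI version); the function-unit-test injection step is elided:

#!/usr/bin/env bash
set -euo pipefail
shopt -s nullglob

status=0
for comp_dir in tests/*/; do
  comp="$(basename "${comp_dir}")"
  composition="resources/${comp}.yaml"
  [[ -f "${composition}" ]] || continue   # no matching composition; skip

  for suite in "${comp_dir}"*/; do
    for xr in "${suite}"*-xr.yaml; do
      # (injecting function-unit-test with the matching assertions file is elided here)
      args=("${xr}" "${composition}" "${comp_dir}functions.yaml")
      [[ -f "${suite}observed.yaml" ]] && args+=(--observed-resources "${suite}observed.yaml")
      [[ -f "${suite}environment.yaml" ]] && args+=(--extra-resources "${suite}environment.yaml")
      crossplane beta render "${args[@]}" >/dev/null || { status=1; echo "FAIL: ${xr}"; }
    done
  done
done
exit "${status}"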

Current Gaps

  1. This approach provides us with a test framework for implementing assertion-based testing of compositions within a given repository. What this experience has exposed, and what I think represents an easily filled gap in the Crossplane CLI, is the lack of an assertion mechanism that we can apply to the output of crossplane beta render....

  2. Invoking crossplane beta render test-xr.yaml ... | crossplane beta test -f test-xr-assertions.yaml is pretty slow if you want to iterate through a number of test cases. A robust suite of test cases is a requirement for producing high-quality APIs for our end users, and being able to rapidly validate all of those test cases helps composition developers get their jobs done more quickly. Secondarily, with a simple UNIX-philosophy pipeline-of-commands approach, the wheel -- with respect to composition testing -- will be re-invented over and over again, leading different teams to design divergent testing frameworks that miss out on the community's shared knowledge of what testing involves.

Now, I'm not claiming to be the expert on composition testing, but my team and I have learned a lot about testing compositions during development and feel that we have a decent handle on the concerns involved. Working with the Crossplane community to articulate a test framework design for composition development could be a huge benefit to the community overall, taking the guesswork out of testing for many teams.

How could Crossplane help solve your problem?

  1. Given the lack of simple support for assertions that are not directly coupled to the Composition implementation (i.e. function-unit-test), a simple tool that incorporates CEL, similar to function-unit-test but implemented as a Crossplane CLI subcommand, would allow us to run tests as a pipeline, i.e.
$ crossplane beta render test-xr.yaml ... | crossplane beta test -f test-xr-assertions.yaml

allowing for seamless offline unit testing of compositions without having to modify the composition during testing.
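
For illustration, test-xr-assertions.yaml might look like the following. The schema, the kind, and the resources CEL variable are all hypothetical; no such format exists today:

# Hypothetical assertion file for the proposed `crossplane beta test` subcommand.
apiVersion: crossplane.io/v1alpha1
kind: AssertionSuite
assertions:
- name: bucket-region
  # CEL expression evaluated against the rendered resources piped in on stdin
  expression: |
    resources.exists(r,
      r.kind == "Bucket" &&
      r.spec.forProvider.region == "us-east-1")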

  2. Given these concerns -- test-execution slowness coupled with the open-endedness of a simple pipeline-of-commands approach -- I'd like to propose a second possible Crossplane command extension, crossplane test-runner.

crossplane test-runner would take as input a test framework file. The test framework file would represent a TestFramework object describing:

  1. A list of compositions
  2. For each composition, a list of required functions
  3. For each composition, a list of suites
  4. For each suite, optional additional context via fileRef or inline
  5. For each suite, optional additional observed data via fileRef or inline
  6. For each suite, a list of input XR and XR assertions via fileRefs or inline

test-runner would then consume the TestFramework input file and orchestrate the execution of the tests, composing crossplane beta render invocations against the input specified in the TestFramework.

Example framework definition

apiVersion: crossplane.io/v1alpha1
kind: TestFramework
spec:
  compositions:
  - fileRef: resources/my-composition.yaml
    functions:
      - docker-upstream.apple.com/crossplane-contrib/function-go-templating:v0.4.1
      - docker-upstream.apple.com/crossplane-contrib/function-auto-ready:v0.2.1
    suites:
      - id: suite-1
        # no observed data means initial reconciliation
        context:
          ...
        extraResources:
          ...
        tests:
          - xr: tests/my-composition/suite-1/1-xr.yaml
            assertions: tests/my-composition/suite-1/1-xr-assertions.yaml
          - xr: tests/my-composition/suite-1/2-xr.yaml
            assertions: tests/my-composition/suite-1/2-xr-assertions.yaml
      - id: suite-2
        # incorporate observed status i.e. pretend like this is a later reconciliation
        observed:
          fileRef: tests/my-composition/suite-2/observed.yaml
        context:
          ...
        extraResources:
          ...
        tests:
          - xr: tests/my-composition/suite-2/1-xr.yaml
            assertions: tests/my-composition/suite-2/1-xr-assertions.yaml
          - xr: tests/my-composition/suite-2/2-xr.yaml
            assertions: tests/my-composition/suite-2/2-xr-assertions.yaml

The model described above would allow a composition author to model the entire lifecycle of an XR's reconciliation against a given composition and apply assertions at each stage, from the initial no-observation reconciliation to subsequent reconciliations containing required data. Running an entire set of unit tests against multiple compositions, and multiple XR input use cases for each composition, would simply require executing crossplane test-runner -f test-framework-input.yaml, causing the expected tests and test cases to be run.
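
For suite-2 above, observed.yaml would carry the observed state of composed resources from a previous reconciliation. A minimal hypothetical example (the kind and values are illustrative; crossplane beta render matches observed resources to composed resources via the crossplane.io/composition-resource-name annotation):

apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  annotations:
    crossplane.io/composition-resource-name: my-bucket   # hypothetical name
status:
  atProvider:
    arn: arn:aws:s3:::my-bucket   # hypothetical observed status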

In the above example, we can allow the explicit definition of xr/assertion tuples, OR we can allow them to be discovered implicitly by looking for files matching an expected convention. This allows folks to skip the painstaking implementation of a TestFramework definition OR to fully control the TestFramework definition, depending on their likes/dislikes.

crossplane test-runner can be made to support different output "drivers" to facilitate integration with existing tools that understand test framework output, likely facilitating easier integration with IDEs and other tooling.

I am happy to contribute and collaborate on the design and implementation of this if the community feels it would be useful.

jtucci commented Jun 17, 2024

@lindblombr I implemented a version of this that I have been using on my team. I would be happy to collaborate on this or share my current work.

lindblombr (Author) commented

A teammate also pointed out to me that this exists: https://github.com/swisscom/crossplane-composition-tester

It seems to hit many of the same requirements and has a couple of added benefits. My main concern involves taking a dependency on a totally different runtime/language ecosystem just to facilitate testing. I'd love to hear what folks think about this.

@jtucci that would be really awesome. Would love to see how others have tackled this.

jtucci commented Jul 4, 2024

@lindblombr sorry for the late reply I've been super busy!

I haven't looked into https://github.com/swisscom/crossplane-composition-tester yet, but I completely agree that having a built-in framework for testing compositions would be incredibly valuable.

Recently, I developed assertion logic to validate rendered output from compositions. Taking it a step further, I've integrated this functionality directly into the Crossplane CLI tool. This integration allows for more seamless testing, whether by piping in rendered output or specifying a rendered file.

My current setup bears similarities to what you've described. I use a script that orchestrates the testing process, which includes running crossplane beta render, followed by crossplane validate, and then crossplane assert (the new command I've added for assertions). This workflow allows me to iterate through various example claims and compositions, render the output, and then apply both validation and assertion checks. I modeled the output of the assert command after the existing validate command for consistency, which I believe helps maintain a familiar interface for Crossplane users.
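
Spelled out as commands, that workflow looks roughly like this. crossplane beta render and crossplane beta validate are existing subcommands; crossplane beta assert is the new command described above, and the file names are placeholders:

crossplane beta render xr.yaml composition.yaml functions.yaml > rendered.yaml
crossplane beta validate provider-schemas/ rendered.yaml
crossplane beta assert -e expected.yaml rendered.yaml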

Your proposed TestFramework object and the crossplane test-runner concept could indeed provide a more comprehensive and standardized approach to composition testing. It would be particularly beneficial for modeling the entire lifecycle of an XR's reconciliation and applying assertions at different stages.

Below is an example run and its output from the assertion logic I created:

→ crossplane beta render xr.yaml ... | crossplane beta assert -e expected.yaml -
[x] storage.azure.upbound.io/v1beta1, Kind=Account, Labels=[crossplane.io/claim-namespace: test-stage-us]
 - spec.forProvider.allowNestedItemsToBePublic: key is missing from map
 - spec.forProvider.blobProperties.[0].containerDeleteRetentionPolicy.[0].days: value mismatch: expected 60, got 30
 - spec.providerConfigRef.name: value mismatch: expected test-stage-us-azure, got test-stage-eu-azure
[x] storage.azure.upbound.io/v1beta1, Kind=Container, Labels=[crossplane.io/claim-namespace: test-stage-us, external-name: storage1]
 - resource is missing
[✓] storage.azure.upbound.io/v1beta1, Kind=Container, Labels=[external-name: storage2] asserted successfully
[✓] storage.azure.upbound.io/v1beta1, Kind=Container, Labels=[external-name: storage3] asserted successfully

Total 4 resources: 1 missing resources, 2 success cases, 1 failure cases

@negz, I would be happy to open a PR with this added logic and flesh it out more if you think it would be valuable for the community. Would you be interested in discussing this further or providing any initial feedback on the concept?

lindblombr (Author) commented

I've been working on a one-pager design for this that I can put up for draft review as well. Perhaps we can iterate on it to get to a clean design? The key part is that, while we have a TestFramework/TestModel spec like the one proposed here, we allow it to be modeled completely via directory layout, so there is no need to define a TestFramework/TestModel explicitly. The absence or presence of expected files in a specified directory automatically "deserializes" to the TestFramework/TestModel and allows the tool to do its thing. I, too, implemented this as a script and use it to great success with several of our compositions today.
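
As a hypothetical sketch of that convention, reusing the paths from the earlier example (the exact mapping rules are the part under design), a suite directory like

tests/my-composition/suite-1/
  1-xr.yaml
  1-xr.assertions.yaml
  observed.yaml

would deserialize to the equivalent TestFramework fragment:

compositions:
- fileRef: resources/my-composition.yaml
  suites:
  - id: suite-1
    observed:
      fileRef: tests/my-composition/suite-1/observed.yaml
    tests:
    - xr: tests/my-composition/suite-1/1-xr.yaml
      assertions: tests/my-composition/suite-1/1-xr.assertions.yaml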
