[pull] main from hashicorp:main #18

Merged 8 commits from hashicorp:main into makesoftwaresafe:main on Jun 1, 2023
Conversation

@pull pull bot commented Jun 1, 2023

See Commits and Changes for more details.


This new concept allows constraining the range of an unknown value beyond
what can be captured in a type constraint. We'll make more use of this
in subsequent commits.
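As an illustrative sketch (hypothetical types, not the real cty API), a refinement can be modeled as extra constraints carried alongside an unknown value, which narrow its range beyond what the type alone says:

```go
package main

import "fmt"

// Refinements models constraints on an unknown value that go beyond its
// type: whatever the final known value turns out to be, it must satisfy
// all of them.
type Refinements struct {
	NotNull      bool   // the final value is guaranteed not to be null
	StringPrefix string // for strings: the final value starts with this
}

// IsNull answers "is this value null?" for an unknown value carrying
// the given refinements: a not-null refinement yields a known "false"
// even though the value itself is still unknown.
func IsNull(r Refinements) string {
	if r.NotNull {
		return "false"
	}
	return "unknown"
}

func main() {
	u := Refinements{NotNull: true, StringPrefix: "https://"}
	fmt.Println(IsNull(u)) // the refinement decides the question
}
```

The point of carrying such constraints is exactly this: operations downstream can sometimes produce known results from unknown inputs.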

If we encounter an interpolated unknown value during template rendering,
we can report the partial buffer completed so far as the refined prefix
of the resulting unknown value. That prefix can then potentially allow
downstream comparisons to produce a known false result instead of
unknown, when the prefix alone is sufficient to decide them.
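A minimal sketch of that deduction, using a hypothetical RefinedString type rather than the real cty API:

```go
package main

import (
	"fmt"
	"strings"
)

// RefinedString is an unknown string known to begin with Prefix, e.g.
// the partial buffer a template renderer completed before reaching an
// unknown interpolation.
type RefinedString struct {
	Prefix string
}

// EqualsKnown compares the refined unknown against a known candidate.
// When the candidate does not share the refined prefix, equality is a
// known false even though the full string is still unknown.
func (r RefinedString) EqualsKnown(candidate string) string {
	if !strings.HasPrefix(candidate, r.Prefix) {
		return "known false"
	}
	return "unknown"
}

func main() {
	// Rendering a template like "https://${var.host}/" stops at the
	// unknown variable, leaving "https://" as the refined prefix.
	u := RefinedString{Prefix: "https://"}
	fmt.Println(u.EqualsKnown("ftp://example.com"))   // prefixes differ
	fmt.Println(u.EqualsKnown("https://example.com")) // can't decide yet
}
```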

When ConditionalExpr has an unknown predicate, it can still often infer
some refinement to the range of its result by noticing characteristics
that the two possible results have in common.

In all cases we can test whether either result could be null and, if
neither can be, return a definitely-not-null unknown value.

For two known numbers we can constrain the range to be between those two
numbers. This is primarily aimed at the common case where the two possible
results are zero and one, which significantly constrains the range.

For two known collections of the same kind we can constrain the length
to be between the two collection lengths.

In these last two cases we can also sometimes collapse the unknown into
a known value if the range gets reduced enough. For example, if choosing
between two collections of the same length we might return a known
collection of that length containing unknown elements, rather than an
unknown collection.
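The number-range and collection-length rules above can be sketched as follows (hypothetical helper functions, not the real implementation):

```go
package main

import "fmt"

// NumRange refines an unknown number: the final value lies in [Min, Max].
type NumRange struct{ Min, Max float64 }

// refineConditionalNumbers computes the range refinement for an unknown
// conditional choosing between two known numbers.
func refineConditionalNumbers(a, b float64) NumRange {
	if a > b {
		a, b = b, a
	}
	return NumRange{Min: a, Max: b}
}

// refineConditionalLengths bounds the length of an unknown conditional
// between two known collections of the same kind. When the bounds meet,
// the length collapses to a single known value, so the result can be a
// known collection of that length containing unknown elements.
func refineConditionalLengths(lenA, lenB int) (min, max int, collapsed bool) {
	min, max = lenA, lenB
	if min > max {
		min, max = max, min
	}
	return min, max, min == max
}

func main() {
	// The common count-style idiom: cond ? 1 : 0.
	fmt.Println(refineConditionalNumbers(1, 0))
	// Choosing between two three-element lists: length is known.
	fmt.Println(refineConditionalLengths(3, 3))
}
```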

If either the given value or the default value is refined as non-null,
then the final attribute value after defaults processing is also
guaranteed non-null, even if we don't yet know exactly what the value
will be.

This rule is pretty marginal on its own, but refining some types of value
as non-null creates opportunities to deduce further information when the
value is used under other operations later, such as collapsing an unknown
but definitely not null list of a known length into a known list of that
length containing unknown values.
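The deduction itself is tiny. Since defaults processing returns the given value when it is non-null and the default otherwise, the result is non-null whenever either input is refined non-null (sketch with a hypothetical helper):

```go
package main

import "fmt"

// resultNotNull models the defaults-processing rule: the final value is
// the given value when that is non-null, otherwise the default. So it
// is definitely non-null whenever either input is refined non-null.
func resultNotNull(givenNotNull, defaultNotNull bool) bool {
	return givenNotNull || defaultNotNull
}

func main() {
	// The given value is unknown and unrefined, but the default is
	// refined non-null: the final attribute value is non-null too.
	fmt.Println(resultNotNull(false, true))
}
```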

We know that a splat expression can never produce a null result, and in
many cases we can also use length refinements from the source collection
to refine the destination collection, because we know that a splat
expression produces exactly one result for each input element.

This also allows us to be a little more precise in the case where the
splat operator is projecting a non-list/set value into a zero or one
element list and we know the source value isn't null. This refinement is
a bit more marginal since it would be weird to apply the splat operator
to a value already known to be non-null anyway, but the refinement might
come from far away from the splat expression and so could still have
useful downstream effects in some cases.
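Both length rules can be sketched as small helpers (hypothetical names, not the real implementation):

```go
package main

import "fmt"

// splatLengthBounds: a splat over a list or set yields exactly one
// result per input element, so the source's length refinement carries
// over to the result unchanged.
func splatLengthBounds(srcMin, srcMax int) (min, max int) {
	return srcMin, srcMax
}

// singleSplatLength: splatting a non-list/set value yields a zero- or
// one-element list. If the source is refined non-null, the length
// collapses to exactly one.
func singleSplatLength(srcNotNull bool) (min, max int) {
	if srcNotNull {
		return 1, 1
	}
	return 0, 1
}

func main() {
	fmt.Println(splatLengthBounds(2, 5)) // bounds pass through
	fmt.Println(singleSplatLength(true)) // collapses to length 1
}
```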

This new spec type allows adding value refinements to the results of
some other spec, as long as the wrapped spec does indeed enforce the
constraints described by the refinements.
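The wrapping pattern can be sketched like this (hypothetical Value, Spec, and RefineSpec types standing in for the real ones):

```go
package main

import "fmt"

// Value is a minimal stand-in for a dynamic value that may be unknown.
type Value struct {
	Unknown bool
	NotNull bool // refinement: guaranteed non-null when true
}

// Spec is a minimal stand-in for a decode spec.
type Spec interface {
	Decode() Value
}

// RefineSpec wraps another spec and attaches a refinement to its result
// when that result is unknown. This is only sound if the wrapped spec
// genuinely enforces the refinement for its known results.
type RefineSpec struct {
	Wrapped Spec
}

func (s RefineSpec) Decode() Value {
	v := s.Wrapped.Decode()
	if v.Unknown {
		v.NotNull = true // the constraint the wrapped spec guarantees
	}
	return v
}

// unknownSpec always decodes to an unrefined unknown value.
type unknownSpec struct{}

func (unknownSpec) Decode() Value { return Value{Unknown: true} }

func main() {
	v := RefineSpec{Wrapped: unknownSpec{}}.Decode()
	fmt.Println(v.NotNull) // the unknown result now carries a refinement
}
```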
@pull pull bot added the pull label Jun 1, 2023
@pull pull bot merged commit 7208bce into makesoftwaresafe:main Jun 1, 2023