Commit

docs: address various typos throughout our documentation (argoproj#8697)

juliev0 committed May 10, 2022
1 parent 1a39ae7 commit 7ae2905
Showing 25 changed files with 50 additions and 50 deletions.
2 changes: 1 addition & 1 deletion Makefile
@@ -624,7 +624,7 @@ docs-linkcheck: /usr/local/bin/markdown-link-check
.PHONY: docs-lint
docs-lint: /usr/local/bin/markdownlint
# lint docs
-markdownlint docs --fix --ignore docs/fields.md --ignore docs/executor_swagger.md --ignore docs/cli
+markdownlint docs --fix --ignore docs/fields.md --ignore docs/executor_swagger.md --ignore docs/cli --ignore docs/walk-through/the-structure-of-workflow-specs.md

/usr/local/bin/mkdocs:
pip install mkdocs==1.2.4 mkdocs_material==8.1.9 mkdocs-spellcheck==0.2.1
4 changes: 2 additions & 2 deletions docs/argo-server-sso.md
@@ -53,7 +53,7 @@ To allow service accounts to manage resources in other namespaces create a role

RBAC config is installation-level, so any changes will need to be made by the team that installed Argo. Many complex rules will be burdensome on that team.

-Firstly, enable the `rbac:` setting in [workflow-controller-configmap.yaml](workflow-controller-configmap.yaml). You almost certainly want to be able configure RBAC using groups, so add `scopes:` to the SSO settings:
+Firstly, enable the `rbac:` setting in [workflow-controller-configmap.yaml](workflow-controller-configmap.yaml). You almost certainly want to be able to configure RBAC using groups, so add `scopes:` to the SSO settings:

```yaml
sso:
@@ -113,7 +113,7 @@ The precedence must be the lowest of all your service accounts.
> v3.3 and after
You can optionally configure RBAC SSO per namespace.
-Typically, on organization has a Kubernetes cluster and a central team manages the cluster who is the owner of the cluster. Along with this, there are multiple namespaces which are owned by individual team. This feature would help namespace owners to define RBAC for their own namespace.
+Typically, an organization has a Kubernetes cluster and a central team (the owner of the cluster) manages the cluster. Along with this, there are multiple namespaces which are owned by individual teams. This feature would help namespace owners to define RBAC for their own namespace.

The feature is currently in beta.
To enable the feature, set env variable `SSO_DELEGATE_RBAC_TO_NAMESPACE=true` in your argo-server deployment.
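A minimal sketch of that last step, assuming a standard `argo-server` Deployment (labels and image tag here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
  namespace: argo
spec:
  selector:
    matchLabels:
      app: argo-server
  template:
    metadata:
      labels:
        app: argo-server
    spec:
      containers:
        - name: argo-server
          image: quay.io/argoproj/argocli:v3.3.0  # illustrative tag
          env:
            # Opt in to the beta namespace-delegated RBAC feature
            - name: SSO_DELEGATE_RBAC_TO_NAMESPACE
              value: "true"
```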
2 changes: 1 addition & 1 deletion docs/artifact-visualization.md
@@ -88,7 +88,7 @@ from the same origin, so normal browser controls are not secure enough.

### Sub-Path Access

-Previously, users can access the artifacts of any workflows they can access. To allow HTML files to link to other files
+Previously, users could access the artifacts of any workflows they could access. To allow HTML files to link to other files
within their tree, you can now access any sub-paths of the artifact's key.

Example:
4 changes: 2 additions & 2 deletions docs/cluster-workflow-templates.md
@@ -5,7 +5,7 @@
## Introduction

`ClusterWorkflowTemplates` are cluster scoped `WorkflowTemplates`. `ClusterWorkflowTemplate`
-can be created cluster scoped like `ClusterRole` and can be accessed all namespaces in the cluster.
+can be created cluster scoped like `ClusterRole` and can be accessed across all namespaces in the cluster.

`WorkflowTemplates` documentation [link](./workflow-templates.md)

@@ -30,7 +30,7 @@ spec:

## Referencing other `ClusterWorkflowTemplates`

-You can reference `templates` from another `ClusterWorkflowTemplates` using a `templateRef` field with `clusterScope: true` .
+You can reference `templates` from other `ClusterWorkflowTemplates` using a `templateRef` field with `clusterScope: true` .
Just as how you reference other `templates` within the same `Workflow`, you should do so from a `steps` or `dag` template.

Here is an example:
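The example itself is collapsed in this view; a minimal sketch of the pattern, assuming a `ClusterWorkflowTemplate` named `cluster-workflow-template-print-message` exists (names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: cluster-workflow-template-hello-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: call-shared-template
            # clusterScope: true resolves the reference against a
            # ClusterWorkflowTemplate rather than a namespaced WorkflowTemplate
            templateRef:
              name: cluster-workflow-template-print-message  # illustrative
              template: print-message
              clusterScope: true
            arguments:
              parameters:
                - name: message
                  value: "hello world"
```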
6 changes: 3 additions & 3 deletions docs/container-set-template.md
@@ -66,11 +66,11 @@ Instead, have a workspace volume and make sure all artifacts paths are on that v

## ⚠️ Resource Requests

-A container set actually starts all containers, and the Emissary only starts the main container process when the containers it depends on have completed. This mean that even though the container is doing no useful work, it is still consume resources and you're still getting billed for them.
+A container set actually starts all containers, and the Emissary only starts the main container process when the containers it depends on have completed. This means that even though the container is doing no useful work, it is still consuming resources and you're still getting billed for them.

If your requests are small, this won't be a problem.

-If your request are large, set the resource requests so the sum total is the most you'll need at once.
+If your requests are large, set the resource requests so the sum total is the most you'll need at once.

Example A: a simple sequence e.g. `a -> b -> c`

@@ -107,6 +107,6 @@ Example B: Lopsided requests, e.g. `a -> b` where `a` is cheap and `b` is expens
* `a` needs 100 cpu, 1Mi memory, runs for 10h
* `b` needs 8Ki GPU, 100 Gi memory, 200 Ki GPU, runs for 5m

-Can you see the problem here? `a` only wont small requests, but the container set will use the total of all requests. So it's as if you're using all that GPU for 10h. This will be expensive.
+Can you see the problem here? `a` only has small requests, but the container set will use the total of all requests. So it's as if you're using all that GPU for 10h. This will be expensive.

Solution: do not use container set when you have lopsided requests.
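To make the sizing advice concrete, a hedged sketch of Example A's shape (images and numbers are illustrative). All containers in the set are scheduled together, so the pod requests the sum of all container requests; size each one so the total matches the most you need at any one time:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: container-set-requests-
spec:
  entrypoint: main
  templates:
    - name: main
      containerSet:
        containers:
          # a, b and c run in sequence, but their requests are summed for
          # the pod, so each asks for a third of the single-step peak.
          - name: a
            image: argoproj/argosay:v2
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
          - name: b
            image: argoproj/argosay:v2
            dependencies: [a]
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
          - name: c
            image: argoproj/argosay:v2
            dependencies: [b]
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
```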
10 changes: 5 additions & 5 deletions docs/cost-optimisation.md
@@ -25,9 +25,9 @@ nodeSelector:

> Suitable if you have a workflow that passes a lot of artifacts within itself.
-Copying artifacts to and from storage outside of a cluster can be expensive. The correct choice is dependent on your artifact storage provider is vs. what volume they are using. For example, we believe it may be more expensive to allocate and delete a new block storage volume (AWS EBS, GCP persistent disk) every workflow using the PVC feature, than it is to upload and download some small files to object storage (AWS S3, GCP cloud storage).
+Copying artifacts to and from storage outside of a cluster can be expensive. The correct choice is dependent on what your artifact storage provider is vs. what volume they are using. For example, we believe it may be more expensive to allocate and delete a new block storage volume (AWS EBS, GCP persistent disk) every workflow using the PVC feature, than it is to upload and download some small files to object storage (AWS S3, GCP cloud storage).

-On the other hand if they are using a NFS volume shared between all their workflows with large artifacts, that might be cheaper than the data transfer and storage costs of object storage.
+On the other hand if you are using a NFS volume shared between all your workflows with large artifacts, that might be cheaper than the data transfer and storage costs of object storage.

Consider:

@@ -39,9 +39,9 @@ Consider:

> Suitable for all.
-A workflow (and for that matter, any Kubernetes resource) will incur a cost as long as they exist in your cluster, even after they are no longer running.
+A workflow (and for that matter, any Kubernetes resource) will incur a cost as long as it exists in your cluster, even after it's no longer running.

-The workflow controller memory and CPU needs increase linearly with the number of pods and workflows you are currently running.
+The workflow controller memory and CPU needs to increase linearly with the number of pods and workflows you are currently running.

You should delete workflows once they are no longer needed, or enable a [Workflow Archive](workflow-archive.md) and you can still view them after they are removed from Kubernetes.
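One hedged way to automate that clean-up is the workflow-level `ttlStrategy` (values here are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: ttl-demo-
spec:
  entrypoint: main
  # Delete the Workflow object automatically after it finishes,
  # keeping failed runs around longer for debugging.
  ttlStrategy:
    secondsAfterSuccess: 300
    secondsAfterFailure: 86400
  templates:
    - name: main
      container:
        image: argoproj/argosay:v2
```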

@@ -90,7 +90,7 @@ Suggestions for operators who installed Argo Workflows.

> Suitable if you have many instances, e.g. on dozens of clusters or namespaces.
-Set a resource requests and limits for the `workflow-controller` and `argo-server`, e.g.
+Set resource requests and limits for the `workflow-controller` and `argo-server`, e.g.

```yaml
requests:
2 changes: 1 addition & 1 deletion docs/default-workflow-specs.md
@@ -10,7 +10,7 @@ If a Workflow has a value that also has a default value set in the config map, t
## Setting Default Workflow Values

Default Workflow values can be specified by adding them under the `workflowDefaults` key in the [`workflow-controller-configmap`](./workflow-controller-configmap.yaml).
-Values can be added as the would under the `Workflow.spec` tag.
+Values can be added as they would under the `Workflow.spec` tag.

For example, to specify default values that would partially produce the following `Workflow`:

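The example itself is collapsed in this view; a minimal sketch of the config map shape it describes (values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  # Merged into every Workflow spec unless the Workflow sets its own value
  workflowDefaults: |
    spec:
      ttlStrategy:
        secondsAfterSuccess: 5
      parallelism: 3
```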
2 changes: 1 addition & 1 deletion docs/events.md
@@ -4,7 +4,7 @@
## Overview

-To support external webhooks, we have this endpoint `/api/v1/events/{namespace}/{discriminator}`. Events can be sent to that can be any JSON data.
+To support external webhooks, we have this endpoint `/api/v1/events/{namespace}/{discriminator}`. Events sent to that can be any JSON data.

These events can submit *workflow templates* or *cluster workflow templates*.
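As a hedged sketch of how an event becomes a submission, a `WorkflowEventBinding` selects matching events and submits a template (the names and selector here are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowEventBinding
metadata:
  name: event-consumer
  namespace: argo
spec:
  event:
    # Fires only for events POSTed with this discriminator and a message field
    selector: payload.message != "" && discriminator == "my-discriminator"
  submit:
    workflowTemplateRef:
      name: my-wf-tmple  # illustrative WorkflowTemplate name
    arguments:
      parameters:
        - name: message
          valueFrom:
            event: payload.message
```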

2 changes: 1 addition & 1 deletion docs/high-availability.md
@@ -6,7 +6,7 @@ Only one controller can run at once. If it crashes, Kubernetes will start anothe

> v3.0
-For many users, a short loss of workflow service maybe acceptable - the new controller will just continue running
+For many users, a short loss of workflow service may be acceptable - the new controller will just continue running
workflows if it restarts. However, with high service guarantees, new pods may take too long to start running workflows.
You should run two replicas, and one of which will be kept on hot-standby.

2 changes: 1 addition & 1 deletion docs/http-template.md
@@ -2,7 +2,7 @@

> v3.2 and after
-`HTTP Template` is a type of template which can execute the HTTP Requests.
+`HTTP Template` is a type of template which can execute HTTP Requests.

```yaml
apiVersion: argoproj.io/v1alpha1
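# (The rest of this example is collapsed in the diff view. A hedged sketch
# of how such a template might continue -- the URL and conditions below are
# illustrative, not the original example's values.)
kind: Workflow
metadata:
  generateName: http-template-
spec:
  entrypoint: main
  templates:
    - name: main
      http:
        method: GET
        url: https://example.com/health  # illustrative
        timeoutSeconds: 20
        successCondition: response.statusCode == 200
```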
2 changes: 1 addition & 1 deletion docs/installation.md
@@ -16,7 +16,7 @@ Determine your base installation option.

⚠️ Double-check you have the right version of your executor configured, it's easy to miss.

-⚠️ If you are using GitOps. Never use Kustomize remote base, this is dangerous. Instead, copy the manifests into your Git repo.
+⚠️ If you are using GitOps, never use Kustomize remote base: this is dangerous. Instead, copy the manifests into your Git repo.

Review the following:

2 changes: 1 addition & 1 deletion docs/rest-api.md
@@ -4,7 +4,7 @@

> v2.5 and after
-Argo Workflows ships with a server that provide more features and security than before.
+Argo Workflows ships with a server that provides more features and security than before.

The server can be configured with or without client auth (`server --auth-mode client`). When it is disabled, then clients must pass their KUBECONFIG base 64 encoded in the HTTP `Authorization` header:

2 changes: 1 addition & 1 deletion docs/rest-examples.md
@@ -7,7 +7,7 @@ Document contains couple of examples of workflow JSON's to submit via argo-serve
Assuming

* the namespace of argo-server is argo
-* authentication is turned off (otherwise provide Authentication header)
+* authentication is turned off (otherwise provide Authorization header)
* argo-server is available on localhost:2746

## Submitting workflow
2 changes: 1 addition & 1 deletion docs/running-at-massive-scale.md
@@ -32,5 +32,5 @@ Where Argo has a lot of work to do, the Kubernetes API can be overwhelmed. There
## Overwhelmed Database

If you're running workflows with many nodes, you'll probably be offloading data to a database. Offloaded data is kept
-for 5m. You can reduce the number of records create by setting `DEFAULT_REQUEUE_TIME=1m`. This will slow reconciliation,
+for 5m. You can reduce the number of records created by setting `DEFAULT_REQUEUE_TIME=1m`. This will slow reconciliation,
but will suit workflows where nodes run for over 1m.
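A minimal sketch of where that setting lives, assuming you configure the controller via Deployment env vars (labels and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workflow-controller
  namespace: argo
spec:
  selector:
    matchLabels:
      app: workflow-controller
  template:
    metadata:
      labels:
        app: workflow-controller
    spec:
      containers:
        - name: workflow-controller
          image: quay.io/argoproj/workflow-controller:v3.3.0  # illustrative tag
          env:
            # Fewer offload records at the cost of slower reconciliation
            - name: DEFAULT_REQUEUE_TIME
              value: "1m"
```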
6 changes: 3 additions & 3 deletions docs/running-locally.md
@@ -100,7 +100,7 @@ make start UI=true PROFILE=sso

### Running E2E tests locally

-Start up the Argo Workflows using the following:
+Start up Argo Workflows using the following:

```bash
make start PROFILE=mysql AUTH_MODE=client STATIC_FILES=false API=true
@@ -114,7 +114,7 @@ Our CI will run those concurrently when you create a PR, which will give you fee
Find the test that you want to run in `test/e2e`

```bash
-make TestArtifactServer'
+make TestArtifactServer
```

#### Running A Set Of Tests
@@ -133,7 +133,7 @@ make test-api

#### Diagnosing Test Failure

-Tests often fail, that's good. To diagnose failure:
+Tests often fail: that's good. To diagnose failure:

* Run `kubectl get pods`, are pods in the state you expect?
* Run `kubectl get wf`, is your workflow in the state you expect?
10 changes: 5 additions & 5 deletions docs/synchronization.md
@@ -21,13 +21,13 @@ metadata:
name: my-config
data:
workflow: "1" # Only one workflow can run at given time in particular namespace
-template: "2" # Two instance of template can run at a given time in particular namespace
+template: "2" # Two instances of template can run at a given time in particular namespace
```

### Workflow-level Synchronization

-Workflow-level synchronization limits parallel execution of the workflow if workflow have same synchronization reference.
-In this example, Workflow refers `workflow` synchronization key which is configured as rate limit 1,
+Workflow-level synchronization limits parallel execution of the workflow if workflows have the same synchronization reference.
+In this example, Workflow refers to `workflow` synchronization key which is configured as rate limit 1,
so only one workflow instance will be executed at given time even multiple workflows created.

Using a semaphore configured by a `ConfigMap`:
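That example is collapsed below; a minimal sketch of the shape, reusing the `my-config` map from above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: synchronization-wf-level-
spec:
  entrypoint: whalesay
  synchronization:
    semaphore:
      configMapKeyRef:
        name: my-config
        key: workflow  # "1" above => only one such workflow runs at a time
  templates:
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [cowsay]
        args: ["hello world"]
```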
@@ -74,9 +74,9 @@ spec:

### Template-level Synchronization

-Template-level synchronization limits parallel execution of the template across workflows, if template have same synchronization reference.
+Template-level synchronization limits parallel execution of the template across workflows, if templates have the same synchronization reference.
In this example, `acquire-lock` template has synchronization reference of `template` key which is configured as rate limit 2,
-so, two instance of templates will be executed at given time even multiple step/task with in workflow or different workflow refers same template.
+so two instances of templates will be executed at a given time: even multiple steps/tasks within workflow or different workflows referring to the same template.

Using a semaphore configured by a `ConfigMap`:

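Again the example is collapsed; a minimal sketch of the template-level variant, reusing the same `ConfigMap`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: synchronization-tmpl-level-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: acquire
            template: acquire-lock
    - name: acquire-lock
      # "2" above => at most two instances of this template run at a time,
      # across all workflows that reference the same key
      synchronization:
        semaphore:
          configMapKeyRef:
            name: my-config
            key: template
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["sleep 10; echo acquired lock"]
```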
6 changes: 3 additions & 3 deletions docs/tls.md
@@ -15,7 +15,7 @@ Defaults to [Plain Text](#plain-text)
Defaults to [Encrypted](#encrypted) if cert is available

-Argo image/deployment defaults to [Encrypted](#encrypted) with a self-signed certificate expires after 365 days.
+Argo image/deployment defaults to [Encrypted](#encrypted) with a self-signed certificate which expires after 365 days.

## Plain Text

@@ -71,8 +71,8 @@ readinessProbe:

Recommended for: production environments.

-Run your HTTPS proxy in front of the Argo Server. You'll need to set-up your certificates and this out of scope of this
-documentation.
+Run your HTTPS proxy in front of the Argo Server. You'll need to set-up your certificates (this is out of scope of this
+documentation).

Start Argo Server with the `--secure` flag, e.g.:

2 changes: 1 addition & 1 deletion docs/walk-through/custom-template-variable-reference.md
@@ -1,7 +1,7 @@
# Custom Template Variable Reference

In this example, we can see how we can use the other template language variable reference (E.g: Jinja) in Argo workflow template.
-Argo will validate and resolve only the variable that starts with Argo allowed prefix
+Argo will validate and resolve only the variable that starts with an Argo allowed prefix
{***"item", "steps", "inputs", "outputs", "workflow", "tasks"***}

```yaml
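# (Example collapsed in the diff view. A hedged sketch of the idea: Argo
# resolves variables with the allowed prefixes above and leaves foreign
# syntax, such as a Jinja-style {{ user.username }}, for another engine.
# Names and images below are illustrative.)
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: custom-template-variable-
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: message
            value: "hello world"
      container:
        image: docker/whalesay
        command: [cowsay]
        # resolved by Argo:          {{inputs.parameters.message}}
        # left for another engine:   {{ user.username }}
        args: ["{{inputs.parameters.message}} {{ user.username }}"]
```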
16 changes: 8 additions & 8 deletions docs/walk-through/the-structure-of-workflow-specs.md
@@ -1,18 +1,18 @@
# The Structure of Workflow Specs

-We now know enough about the basic components of a workflow spec to review its basic structure:
+We now know enough about the basic components of a workflow spec. To review its basic structure:

- Kubernetes header including meta-data
- Spec body
-  - Entrypoint invocation with optionally arguments
-  - List of template definitions
+    - Entrypoint invocation with optional arguments
+    - List of template definitions

- For each template definition
-  - Name of the template
-  - Optionally a list of inputs
-  - Optionally a list of outputs
-  - Container invocation (leaf template) or a list of steps
-    - For each step, a template invocation
+    - Name of the template
+    - Optionally a list of inputs
+    - Optionally a list of outputs
+    - Container invocation (leaf template) or a list of steps
+        - For each step, a template invocation

To summarize, workflow specs are composed of a set of Argo templates where each template consists of an optional input section, an optional output section and either a container invocation or a list of steps where each step invokes another template.
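As a minimal sketch of that outline in YAML (names and images are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1            # Kubernetes header
kind: Workflow
metadata:
  generateName: structure-demo-             # meta-data
spec:
  entrypoint: main                          # entrypoint invocation
  arguments:
    parameters:                             # optional arguments
      - name: message
        value: "hello"
  templates:
    - name: main                            # a steps template
      steps:
        - - name: say
            template: echo                  # each step invokes a template
            arguments:
              parameters:
                - name: message
                  value: "{{workflow.parameters.message}}"
    - name: echo                            # a leaf (container) template
      inputs:
        parameters:                         # optional inputs
          - name: message
      container:
        image: argoproj/argosay:v2
        command: [/argosay]
        args: [echo, "{{inputs.parameters.message}}"]
```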

2 changes: 1 addition & 1 deletion docs/walk-through/volumes.md
@@ -106,7 +106,7 @@ spec:
```

It's also possible to declare existing volumes at the template level, instead of the workflow level.
-This can be useful workflows that generate volumes using a `resource` step.
+Workflows can generate volumes using a `resource` step.

```yaml
apiVersion: argoproj.io/v1alpha1
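# (Continuation collapsed in the diff view. A hedged sketch of declaring an
# existing volume on the template rather than the workflow -- the claim
# name here is illustrative.)
kind: Workflow
metadata:
  generateName: template-level-volume-
spec:
  entrypoint: use-volume
  templates:
    - name: use-volume
      volumes:                             # declared at the template level
        - name: workdir
          persistentVolumeClaim:
            claimName: my-existing-volume  # illustrative
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo hello > /mnt/vol/hello.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
```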
2 changes: 1 addition & 1 deletion docs/widgets.md
@@ -2,7 +2,7 @@

> v3.0 and after
-Widgets are intended to be embedded into other applications using inline frames (`iframe`). This is may not work with your configuration. You may need to:
+Widgets are intended to be embedded into other applications using inline frames (`iframe`). This may not work with your configuration. You may need to:

* Run the Argo Server with an account that can read workflows. That can be done using `--auth-mode=server` and configuring the `argo-server` service account.
* Run the Argo Server with `--x-frame-options=SAMEORIGIN` or `--x-frame-options=`.
2 changes: 1 addition & 1 deletion docs/workflow-archive.md
@@ -10,7 +10,7 @@ Be aware that this feature will only archive the statuses of the workflows (whic

However, the logs of each pod will NOT be archived. If you need to access the logs of the pods, you need to setup [an artifact repository](artifact-repository-ref.md) thanks to [this doc](configure-artifact-repository.md).

-In addition the table specified in the config map above, the following tables are created when enabling archiving:
+In addition to the table specified in the config map above, the following tables are created when enabling archiving:

* `argo_archived_workflows`
* `argo_archived_workflows_labels`
4 changes: 2 additions & 2 deletions docs/workflow-of-workflows.md
@@ -4,7 +4,7 @@
## Introduction

-The Workflow of Workflows pattern involves a parent workflow triggering one or more child workflows, managing them, and acting their results.
+The Workflow of Workflows pattern involves a parent workflow triggering one or more child workflows, managing them, and acting on their results.

## Examples

@@ -39,7 +39,7 @@ spec:

```yaml
# This template demonstrates a workflow of workflows.
-# Workflow triggers one or more workflow and manage it.
+# Workflow triggers one or more workflows and manages them.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
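  # (Rest of the example collapsed in the diff view. A hedged sketch of the
  # pattern: a resource template that creates a child Workflow and waits on
  # its phase. Names are illustrative.)
  generateName: workflow-of-workflows-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: trigger-child
            template: create-child-workflow
    - name: create-child-workflow
      resource:
        action: create
        successCondition: status.phase == Succeeded
        failureCondition: status.phase in (Failed, Error)
        manifest: |
          apiVersion: argoproj.io/v1alpha1
          kind: Workflow
          metadata:
            generateName: child-wf-
          spec:
            entrypoint: whalesay
            templates:
              - name: whalesay
                container:
                  image: docker/whalesay
                  command: [cowsay]
                  args: ["I am a child workflow"]
```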
2 changes: 1 addition & 1 deletion docs/workflow-pod-security-context.md
@@ -1,6 +1,6 @@
# Workflow Pod Security Context

-By default, a workflow pods run as root. The Docker executor even requires `privileged: true`.
+By default, all workflow pods run as root. The Docker executor even requires `privileged: true`.

For other [workflow executors](workflow-executors.md), you can run your workflow pods more securely by configuring the [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for your workflow pod.
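A minimal sketch of the workflow-level setting (the UID here is illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: security-context-
spec:
  entrypoint: main
  # Applied to every pod the workflow creates
  securityContext:
    runAsNonRoot: true
    runAsUser: 8737  # illustrative non-root UID
  templates:
    - name: main
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["id"]
```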

4 changes: 2 additions & 2 deletions docs/workflow-templates.md
@@ -129,7 +129,7 @@ spec:

When working with parameters in a `WorkflowTemplate`, please note the following:

-1. When working with global parameters, you can instantiate your global variables in your `Workflow`
+- When working with global parameters, you can instantiate your global variables in your `Workflow`
and then directly reference them in your `WorkflowTemplate`. Below is a working example:

```yaml
@@ -166,7 +166,7 @@
template: hello-world
```

-1. When working with local parameters, the values of local parameters must be supplied at the template definition inside
+- When working with local parameters, the values of local parameters must be supplied at the template definition inside
the `WorkflowTemplate`. Below is a working example:

```yaml
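# (Example collapsed in the diff view. A hedged sketch of supplying a local
# parameter's value at the template definition -- names are illustrative.)
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: hello-world-template-local-arg
spec:
  templates:
    - name: hello-world
      inputs:
        parameters:
          - name: msg
            value: "hello world"   # local parameter, supplied here
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["{{inputs.parameters.msg}}"]
```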
