[hotfix] [docs] Fix typos in documentation
This closes apache#4885
yew1eb authored and StephanEwen committed Oct 24, 2017
1 parent 5ebe3fb commit 1e7d5be
Showing 6 changed files with 9 additions and 9 deletions.
2 changes: 1 addition & 1 deletion docs/dev/connectors/cassandra.md
@@ -96,7 +96,7 @@ database in an inconsistent state since part of the first attempt may already be
The write-ahead log guarantees that the replayed checkpoint is identical to the first attempt.
Note that that enabling this feature will have an adverse impact on latency.

-<p style="border-radius: 5px; padding: 5px" class="bg-danger"><b>Note</b>: The write-ahead log functionality is currently experimental. In many cases it is sufficent to use the connector without enabling it. Please report problems to the development mailing list.</p>
+<p style="border-radius: 5px; padding: 5px" class="bg-danger"><b>Note</b>: The write-ahead log functionality is currently experimental. In many cases it is sufficient to use the connector without enabling it. Please report problems to the development mailing list.</p>

### Checkpointing and Fault Tolerance
With checkpointing enabled, Cassandra Sink guarantees at-least-once delivery of action requests to C* instance.
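The at-least-once delivery mentioned in this hunk means a recovered job may re-issue writes from a replayed checkpoint. When each write is an idempotent upsert (a deterministic value per primary key), the replay leaves the table unchanged. A minimal sketch in plain Java, not the Cassandra connector API (the `apply` helper is hypothetical):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IdempotentReplayDemo {
    // Hypothetical stand-in for an upsert keyed by primary key: the value is a
    // deterministic function of the key, so re-applying the same batch is a no-op.
    static Map<String, Integer> apply(Map<String, Integer> table, List<String> keys) {
        for (String key : keys) {
            table.put(key, key.length());
        }
        return table;
    }

    public static void main(String[] args) {
        Map<String, Integer> table = new HashMap<>();
        List<String> checkpointedBatch = Arrays.asList("alice", "bob");
        apply(table, checkpointedBatch); // first attempt
        apply(table, checkpointedBatch); // replay after recovery
        System.out.println(table.size()); // 2: the replay did not change the result
    }
}
```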
2 changes: 1 addition & 1 deletion docs/dev/connectors/kafka.md
@@ -648,7 +648,7 @@ we retrieved and emitted successfully. The `committed-offsets` is the last commi
The Kafka Consumers in Flink commit the offsets back to Zookeeper (Kafka 0.8) or the Kafka brokers (Kafka 0.9+). If checkpointing
is disabled, offsets are committed periodically.
With checkpointing, the commit happens once all operators in the streaming topology have confirmed that they've created a checkpoint of their state.
-This provides users with at-least-once semantics for the offsets committed to Zookeer or the broker. For offsets checkpointed to Flink, the system
+This provides users with at-least-once semantics for the offsets committed to Zookeeper or the broker. For offsets checkpointed to Flink, the system
provides exactly once guarantees.

The offsets committed to ZK or the broker can also be used to track the read progress of the Kafka consumer. The difference between
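The at-least-once semantics in this hunk follow from restarting at the last committed offset: any records read after that commit are seen a second time. A plain-Java sketch, independent of the Kafka consumer API (`processedAfterRestart` is a made-up helper):

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetReplayDemo {
    // Suppose offsets 0..totalRecords-1 were already emitted, but only
    // `committedOffset` was durably committed before a failure. After a
    // restart the consumer re-reads from the committed offset, so the
    // offsets committedOffset..totalRecords-1 are processed again.
    static List<Integer> processedAfterRestart(int committedOffset, int totalRecords) {
        List<Integer> replayed = new ArrayList<>();
        for (int offset = committedOffset; offset < totalRecords; offset++) {
            replayed.add(offset);
        }
        return replayed;
    }

    public static void main(String[] args) {
        // 5 records were emitted, but the commit only reached offset 2.
        System.out.println(processedAfterRestart(2, 5)); // [2, 3, 4]
    }
}
```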
4 changes: 2 additions & 2 deletions docs/dev/event_timestamps_watermarks.md
@@ -61,7 +61,7 @@ There are two ways to assign timestamps and generate watermarks:
2. Via a timestamp assigner / watermark generator: in Flink timestamp assigners also define the watermarks to be emitted

<span class="label label-danger">Attention</span> Both timestamps and watermarks are specified as
-millliseconds since the Java epoch of 1970-01-01T00:00:00Z.
+milliseconds since the Java epoch of 1970-01-01T00:00:00Z.

### Source Functions with Timestamps and Watermarks

@@ -338,7 +338,7 @@ Kafka consumer, per Kafka partition, and the per-partition watermarks are merged
For example, if event timestamps are strictly ascending per Kafka partition, generating per-partition watermarks with the
[ascending timestamps watermark generator](event_timestamp_extractors.html#assigners-with-ascending-timestamps) will result in perfect overall watermarks.

-The illustrations below show how to use ther per-Kafka-partition watermark generation, and how watermarks propagate through the
+The illustrations below show how to use the per-Kafka-partition watermark generation, and how watermarks propagate through the
streaming dataflow in that case.


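The millisecond convention fixed in this hunk is the ordinary Java epoch encoding, which `java.time` exposes directly; a small sketch:

```java
import java.time.Instant;

public class EpochMillisDemo {
    static long toEpochMillis(String isoTimestamp) {
        // Instant.toEpochMilli() counts milliseconds since 1970-01-01T00:00:00Z.
        return Instant.parse(isoTimestamp).toEpochMilli();
    }

    public static void main(String[] args) {
        System.out.println(toEpochMillis("1970-01-01T00:00:00Z")); // 0
        System.out.println(toEpochMillis("1970-01-01T00:00:01Z")); // 1000
    }
}
```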
4 changes: 2 additions & 2 deletions docs/dev/java8.md
@@ -46,8 +46,8 @@ env.fromElements(1, 2, 3)
~~~

The next two examples show different implementations of a function that uses a `Collector` for output.
-Functions, such as `flatMap()`, require a output type (in this case `String`) to be defined for the `Collector` in order to be type-safe.
-If the `Collector` type can not be inferred from the surrounding context, it need to be declared in the Lambda Expression's parameter list manually.
+Functions, such as `flatMap()`, require an output type (in this case `String`) to be defined for the `Collector` in order to be type-safe.
+If the `Collector` type can not be inferred from the surrounding context, it needs to be declared in the Lambda Expression's parameter list manually.
Otherwise the output will be treated as type `Object` which can lead to undesired behaviour.

~~~java
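The inference problem this hunk describes can be reproduced without Flink: with a generic functional interface, writing explicit parameter types in the lambda pins the `Collector` element type. A self-contained sketch (the `Collector` and `FlatMapper` interfaces below are minimal stand-ins, not Flink's API):

```java
import java.util.ArrayList;
import java.util.List;

public class CollectorLambdaDemo {
    // Minimal stand-in for Flink's org.apache.flink.util.Collector.
    interface Collector<T> {
        void collect(T record);
    }

    // Minimal stand-in for a flatMap-style function.
    interface FlatMapper<IN, OUT> {
        void flatMap(IN value, Collector<OUT> out);
    }

    static List<String> tokenize(String line) {
        List<String> results = new ArrayList<>();
        // Declaring the types in the parameter list makes the Collector's
        // element type (String) explicit where it cannot be inferred.
        FlatMapper<String, String> splitter = (String value, Collector<String> out) -> {
            for (String token : value.split(" ")) {
                out.collect(token);
            }
        };
        splitter.flatMap(line, results::add);
        return results;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("hello world")); // [hello, world]
    }
}
```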
2 changes: 1 addition & 1 deletion docs/dev/stream/state/queryable_state.md
@@ -261,7 +261,7 @@ val keySerializer = createTypeInformation[Long]
.createSerializer(new ExecutionConfig)
```

-If you don't do this, you can run into mismatches between the serializers used in the Flink job and in your client code, because types like `scala.Long` cannot be caputured at runtime.
+If you don't do this, you can run into mismatches between the serializers used in the Flink job and in your client code, because types like `scala.Long` cannot be captured at runtime.

## Configuration

4 changes: 2 additions & 2 deletions docs/quickstart/scala_api_quickstart.md
@@ -41,7 +41,7 @@ These templates help you to set up the project structure and to create the initi

### Create Project

-You can scafold a new project via either of the following two methods:
+You can scaffold a new project via either of the following two methods:

<ul class="nav nav-tabs" style="border-bottom: none;">
<li class="active"><a href="#sbt_template" data-toggle="tab">Use the <strong>sbt template</strong></a></li>
@@ -53,7 +53,7 @@ You can scafold a new project via either of the following two methods:
{% highlight bash %}
$ sbt new tillrohrmann/flink-project.g8
{% endhighlight %}
-This will will prompt you for a couple of parameters (project name, flink version...) and then create a Flink project from the <a href="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/tillrohrmann/flink-project.g8">flink-project template</a>.
+This will prompt you for a couple of parameters (project name, flink version...) and then create a Flink project from the <a href="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/tillrohrmann/flink-project.g8">flink-project template</a>.
You need sbt >= 0.13.13 to execute this command. You can follow this <a href="http:https://www.scala-sbt.org/download.html">installation guide</a> to obtain it if necessary.
</div>
<div class="tab-pane" id="quickstart-script-sbt">
