
Commit

[hotfix] Fix various typos
This closes apache#5497.
jeis2497052 authored and zentol committed Feb 21, 2018
1 parent 3d22f7c commit b17610e
Showing 13 changed files with 30 additions and 32 deletions.
5 changes: 2 additions & 3 deletions README.md
@@ -15,7 +15,7 @@ Learn more about Flink at [http://flink.apache.org/](http://flink.apache.org/)

* Support for *event time* and *out-of-order* processing in the DataStream API, based on the *Dataflow Model*

-* Flexible windowing (time, count, sessions, custom triggers) accross different time semantics (event time, processing time)
+* Flexible windowing (time, count, sessions, custom triggers) across different time semantics (event time, processing time)

* Fault-tolerance with *exactly-once* processing guarantees

@@ -127,7 +127,7 @@ or in the `docs/` directory of the source code.

## Fork and Contribute

-This is an active open-source project. We are always open to people who want to use the system or contribute to it.
+This is an active open-source project. We are always open to people who want to use the system or contribute to it.
Contact us if you are looking for implementation tasks that fit your skills.
This article describes [how to contribute to Apache Flink](http://flink.apache.org/how-to-contribute.html).

@@ -136,4 +136,3 @@ This article describes [how to contribute to Apache Flink](http://flink.apache.o

Apache Flink is an open source project of The Apache Software Foundation (ASF).
The Apache Flink project originated from the [Stratosphere](http://stratosphere.eu) research project.
-
26 changes: 13 additions & 13 deletions docs/dev/migration.md
@@ -55,8 +55,8 @@ Please visit the [CEP Migration docs]({{ site.baseurl }}/dev/libs/cep.html#migra

In Flink 1.3, to make sure that users can use their own custom logging framework, core Flink artifacts are
now clean of specific logger dependencies.

-Example and quickstart archtypes already have loggers specified and should not be affected.
+Example and quickstart archetypes already have loggers specified and should not be affected.
For other custom projects, make sure to add logger dependencies. For example, in Maven's `pom.xml`, you can add:

~~~xml
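<!-- The diff hunk is truncated at this point. Purely as an illustration (not
     part of this commit), logger entries of the kind the sentence above
     describes look roughly like this; artifacts and versions are assumptions: -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.7</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
~~~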
@@ -145,16 +145,16 @@ public class BufferingSink implements SinkFunction<Tuple2<String, Integer>>,
        bufferedElements.add(value);
        if (bufferedElements.size() == threshold) {
            for (Tuple2<String, Integer> element: bufferedElements) {
-               // send it to the sink
-           }
-           bufferedElements.clear();
-       }
+               // send it to the sink
+           }
+           bufferedElements.clear();
+       }
    }

    @Override
    public ArrayList<Tuple2<String, Integer>> snapshotState(
            long checkpointId, long checkpointTimestamp) throws Exception {
-       return bufferedElements;
+       return bufferedElements;
    }

@Override
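The hunks above show only fragments of the docs' pre-1.2 `BufferingSink` example. For orientation, here is a hedged reconstruction of the whole class, assuming the old `Checkpointed` interface that this part of the migration guide discusses; everything beyond the visible diff lines is an assumption:

~~~java
import java.util.ArrayList;

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.checkpoint.Checkpointed;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class BufferingSink implements SinkFunction<Tuple2<String, Integer>>,
        Checkpointed<ArrayList<Tuple2<String, Integer>>> {

    private final int threshold;

    private ArrayList<Tuple2<String, Integer>> bufferedElements = new ArrayList<>();

    public BufferingSink(int threshold) {
        this.threshold = threshold;
    }

    @Override
    public void invoke(Tuple2<String, Integer> value) throws Exception {
        bufferedElements.add(value);
        if (bufferedElements.size() == threshold) {
            for (Tuple2<String, Integer> element : bufferedElements) {
                // send it to the sink (elided in the docs' example)
            }
            bufferedElements.clear();
        }
    }

    @Override
    public ArrayList<Tuple2<String, Integer>> snapshotState(
            long checkpointId, long checkpointTimestamp) throws Exception {
        return bufferedElements;
    }

    @Override
    public void restoreState(ArrayList<Tuple2<String, Integer>> state) {
        bufferedElements.addAll(state);
    }
}
~~~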
@@ -445,15 +445,15 @@ The code to use the aligned window operators in Flink 1.2 is presented below:

// for tumbling windows
DataStream<Tuple2<String, Integer>> window1 = source
-   .keyBy(0)
-   .window(TumblingAlignedProcessingTimeWindows.of(Time.of(1000, TimeUnit.MILLISECONDS)))
-   .apply(your-function)
+   .keyBy(0)
+   .window(TumblingAlignedProcessingTimeWindows.of(Time.of(1000, TimeUnit.MILLISECONDS)))
+   .apply(your-function)

// for sliding windows
DataStream<Tuple2<String, Integer>> window1 = source
-   .keyBy(0)
-   .window(SlidingAlignedProcessingTimeWindows.of(Time.seconds(1), Time.milliseconds(100)))
-   .apply(your-function)
+   .keyBy(0)
+   .window(SlidingAlignedProcessingTimeWindows.of(Time.seconds(1), Time.milliseconds(100)))
+   .apply(your-function)

{% endhighlight %}
</div>
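`your-function` in the snippet above is a placeholder. A minimal stand-in, assuming the 1.2-era `WindowFunction` signature and that the aligned operators produce `TimeWindow`s (both assumptions):

~~~java
// A pass-through WindowFunction in place of `your-function`.
// Assumes: source is a DataStream<Tuple2<String, Integer>> and keyBy(0)
// yields a Tuple key.
DataStream<Tuple2<String, Integer>> window1 = source
    .keyBy(0)
    .window(TumblingAlignedProcessingTimeWindows.of(Time.of(1000, TimeUnit.MILLISECONDS)))
    .apply(new WindowFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, Tuple, TimeWindow>() {
        @Override
        public void apply(Tuple key,
                          TimeWindow window,
                          Iterable<Tuple2<String, Integer>> input,
                          Collector<Tuple2<String, Integer>> out) {
            for (Tuple2<String, Integer> element : input) {
                out.collect(element); // forward each element unchanged
            }
        }
    });
~~~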
4 changes: 2 additions & 2 deletions docs/quickstart/run_example_quickstart.md
@@ -36,7 +36,7 @@ give you a good foundation from which to start building more complex analysis pr

## Setting up a Maven Project

-We are going to use a Flink Maven Archetype for creating our project stucture. Please
+We are going to use a Flink Maven Archetype for creating our project structure. Please
see [Java API Quickstart]({{ site.baseurl }}/quickstart/java_api_quickstart.html) for more details
about this. For our purposes, the command to run is this:

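The hunk stops just short of the command itself. For orientation, the archetype invocation in the quickstart guide of this era looks roughly like the sketch below; the version value is an assumption, so substitute the one matching your Flink release:

~~~bash
mvn archetype:generate \
    -DarchetypeGroupId=org.apache.flink \
    -DarchetypeArtifactId=flink-quickstart-java \
    -DarchetypeVersion=1.4.0
~~~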
@@ -284,7 +284,7 @@ similar to this:
The number in front of each line tells you on which parallel instance of the print sink the output
was produced.

-This should get you started with writing your own Flink programs. To learn more
+This should get you started with writing your own Flink programs. To learn more
you can check out our guides
about [basic concepts]({{ site.baseurl }}/dev/api_concepts.html) and the
[DataStream API]({{ site.baseurl }}/dev/datastream_api.html). Stick
2 changes: 1 addition & 1 deletion docs/redirects/storm_compat.md
@@ -1,5 +1,5 @@
---
-title: "Storm Compatability"
+title: "Storm Compatibility"
layout: redirect
redirect: /dev/libs/storm_compatibility.html
permalink: /apis/streaming/storm_compatibility.html
6 changes: 3 additions & 3 deletions flink-contrib/flink-storm-examples/README.md
@@ -9,10 +9,10 @@ This module contains multiple versions of a simple Word-Count example to illustr

* how to submit a whole Storm topology to Flink
3. `WordCountTopology` plugs a Storm topology together
-* `StormWordCountLocal` submits the topology to a local Flink cluster (similiar to a `LocalCluster` in Storm)
+* `StormWordCountLocal` submits the topology to a local Flink cluster (similar to a `LocalCluster` in Storm)
(`WordCountLocalByName` accesses attributes by field names rather than index)
-* `WordCountRemoteByClient` submits the topology to a remote Flink cluster (simliar to the usage of `NimbusClient` in Storm)
-* `WordCountRemoteBySubmitter` submits the topology to a remote Flink cluster (simliar to the usage of `StormSubmitter` in Storm)
+* `WordCountRemoteByClient` submits the topology to a remote Flink cluster (similar to the usage of `NimbusClient` in Storm)
+* `WordCountRemoteBySubmitter` submits the topology to a remote Flink cluster (similar to the usage of `StormSubmitter` in Storm)

Additionally, this module package the three example Word-Count programs as jar files to be submitted to a Flink cluster via `bin/flink run example.jar`.
(Valid jars are `WordCount-SpoutSource.jar`, `WordCount-BoltTokenizer.jar`, and `WordCount-StormTopology.jar`)
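A hedged usage sketch for the jar submission mentioned above (the argument list is an assumption; check each example's usage message):

~~~bash
# Submit a packaged Word-Count example to a running Flink cluster.
# <input-path> and <output-path> are placeholders.
bin/flink run WordCount-BoltTokenizer.jar <input-path> <output-path>
~~~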
2 changes: 1 addition & 1 deletion flink-filesystems/flink-s3-fs-hadoop/README.md
@@ -25,7 +25,7 @@ steps are required to keep the shading correct:
}
```
- copy `core-default.xml` to `src/main/resources/core-default-shaded.xml` and
-  - change every occurence of `org.apache.hadoop` into `org.apache.flink.fs.s3hadoop.shaded.org.apache.hadoop`
+  - change every occurrence of `org.apache.hadoop` into `org.apache.flink.fs.s3hadoop.shaded.org.apache.hadoop`
- copy `core-site.xml` to `src/test/resources/core-site.xml` (as is)
2. verify the shaded jar:
- does not contain any unshaded classes except for `org.apache.flink.fs.s3hadoop.S3FileSystemFactory`
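One plausible way to script the copy-and-substitute step this README describes, sketched with standard `cp` and GNU `sed` (the commands are not part of the repo; for the sibling modules below, swap `s3hadoop` for `s3presto` or `openstackhadoop`):

~~~bash
# copy core-default.xml into the shaded resource location, then rewrite
# every occurrence of org.apache.hadoop to the shaded package name
cp core-default.xml src/main/resources/core-default-shaded.xml
sed -i 's/org\.apache\.hadoop/org.apache.flink.fs.s3hadoop.shaded.org.apache.hadoop/g' \
    src/main/resources/core-default-shaded.xml
~~~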
2 changes: 1 addition & 1 deletion flink-filesystems/flink-s3-fs-presto/README.md
@@ -26,7 +26,7 @@ steps are required to keep the shading correct:
}
```
- copy `core-default.xml` to `src/main/resources/core-default-shaded.xml` and
-  - change every occurence of `org.apache.hadoop` into `org.apache.flink.fs.s3presto.shaded.org.apache.hadoop`
+  - change every occurrence of `org.apache.hadoop` into `org.apache.flink.fs.s3presto.shaded.org.apache.hadoop`
- copy `core-site.xml` to `src/test/resources/core-site.xml` (as is)
2. verify the shaded jar:
- does not contain any unshaded classes except for `org.apache.flink.fs.s3presto.S3FileSystemFactory`
2 changes: 1 addition & 1 deletion flink-filesystems/flink-swift-fs-hadoop/README.md
@@ -25,7 +25,7 @@ steps are required to keep the shading correct:
}
```
- copy `core-default.xml` to `src/main/resources/core-default-shaded.xml` and
-  - change every occurence of `org.apache.hadoop` into `org.apache.flink.fs.openstackhadoop.shaded.org.apache.hadoop`
+  - change every occurrence of `org.apache.hadoop` into `org.apache.flink.fs.openstackhadoop.shaded.org.apache.hadoop`
- copy `core-site.xml` to `src/test/resources/core-site.xml` (as is)
2. verify the shaded jar:
- does not contain any unshaded classes except for `org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem`
@@ -60,7 +60,7 @@ class FlinkCalciteSqlValidator(
  // We accept only literal true
case c if null != c =>
  throw new ValidationException(
-   s"Left outer joins with a table function do not accept a predicte such as $c. " +
+   s"Left outer joins with a table function do not accept a predicate such as $c. " +
    s"Only literal TRUE is accepted.")
}
}
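To make the validator's rule concrete, a hedged sketch of an accepted and a rejected query (table, function, and column names are hypothetical; `sqlQuery` assumes a 1.4-era `TableEnvironment` in scope):

~~~java
// Accepted: the lateral table-function join uses literal TRUE as its predicate.
Table ok = tableEnv.sqlQuery(
    "SELECT o.id, t.tag FROM Orders AS o " +
    "LEFT JOIN LATERAL TABLE(split(o.tags)) AS t(tag) ON TRUE");

// Rejected: any other predicate triggers the ValidationException above.
Table bad = tableEnv.sqlQuery(
    "SELECT o.id, t.tag FROM Orders AS o " +
    "LEFT JOIN LATERAL TABLE(split(o.tags)) AS t(tag) ON t.tag = 'x'");
~~~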
@@ -56,7 +56,7 @@ div(ng-if="checkpoint")
thead
tr
td #[strong Name]
-td #[strong Acknowleged]
+td #[strong Acknowledged]
td #[strong Latest Acknowledgment]
td #[strong End to End Duration]
td #[strong State Size]
@@ -1,4 +1,3 @@
-
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
@@ -62,7 +61,7 @@ <h4>Operators</h4>
<thead>
<tr>
<td><strong>Name</strong></td>
-<td><strong>Acknowleged</strong></td>
+<td><strong>Acknowledged</strong></td>
<td><strong>Latest Acknowledgment</strong></td>
<td><strong>End to End Duration</strong></td>
<td><strong>State Size</strong></td>
@@ -168,4 +167,4 @@ <h4>Operators</h4>
<p ng-if="unknown_checkpoint" role="alert" class="alert alert-danger"><strong>Unknown or expired checkpoint ID.</strong></p>
<p ng-if="!unknown_checkpoint" role="alert" class="alert alert-info"><strong>Waiting for response from JobManager with checkpoint details...</strong> <i aria-hidden="true" class="fa fa-circle-o-notch fa-spin fa-fw"></i>
</p>
-</div>
+</div>
@@ -556,7 +556,7 @@ class TaskManager(
"TaskManager was triggered to register at JobManager, but is already registered")
} else if (deadline.exists(_.isOverdue())) {
// we failed to register in time. that means we should quit
-log.error("Failed to register at the JobManager withing the defined maximum " +
+log.error("Failed to register at the JobManager within the defined maximum " +
"connect time. Shutting down ...")

// terminate ourselves (hasta la vista)
2 changes: 1 addition & 1 deletion tools/merge_flink_pr.py
@@ -274,7 +274,7 @@ def get_version_json(version_str):
asf_jira.transition_issue(
jira_id, resolve["id"], fixVersions=jira_fix_versions, comment=comment)

-print "Succesfully resolved %s with fixVersions=%s!" % (jira_id, fix_versions)
+print "Successfully resolved %s with fixVersions=%s!" % (jira_id, fix_versions)


#branches = get_json("%s/branches" % GITHUB_API_BASE)

