[hotfix][docs][javadocs] Remove double "the"
This closes apache#4865.
yew1eb authored and zentol committed Oct 20, 2017
1 parent bc065cd commit 19d484a
Showing 22 changed files with 25 additions and 25 deletions.
2 changes: 1 addition & 1 deletion docs/dev/execution_configuration.md
@@ -79,7 +79,7 @@ Note that types registered with `registerKryoType()` are not available to Flink'

- `disableAutoTypeRegistration()` Automatic type registration is enabled by default. The automatic type registration is registering all types (including sub-types) used by usercode with Kryo and the POJO serializer.

- `setTaskCancellationInterval(long interval)` Sets the the interval (in milliseconds) to wait between consecutive attempts to cancel a running task. When a task is canceled a new thread is created which periodically calls `interrupt()` on the task thread, if the task thread does not terminate within a certain time. This parameter refers to the time between consecutive calls to `interrupt()` and is set by default to **30000** milliseconds, or **30 seconds**.
- `setTaskCancellationInterval(long interval)` Sets the interval (in milliseconds) to wait between consecutive attempts to cancel a running task. When a task is canceled a new thread is created which periodically calls `interrupt()` on the task thread, if the task thread does not terminate within a certain time. This parameter refers to the time between consecutive calls to `interrupt()` and is set by default to **30000** milliseconds, or **30 seconds**.

The `RuntimeContext` which is accessible in `Rich*` functions through the `getRuntimeContext()` method also allows to access the `ExecutionConfig` in all user defined functions.
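
For context, the cancellation interval described in this hunk is set on the `ExecutionConfig`; a minimal sketch, assuming the standard DataStream API (the ten-second value is arbitrary, not part of this commit):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TaskCancellationIntervalExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Wait 10 seconds between consecutive interrupt() calls on a cancelled task thread.
        env.getConfig().setTaskCancellationInterval(10_000L);
    }
}
```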

2 changes: 1 addition & 1 deletion docs/dev/scala_shell.md
@@ -66,7 +66,7 @@ Scala-Flink> benv.execute("MyProgram")

### DataStream API

Similar to the the batch program above, we can execute a streaming program through the DataStream API:
Similar to the batch program above, we can execute a streaming program through the DataStream API:

~~~scala
Scala-Flink> val textStreaming = senv.fromElements(
2 changes: 1 addition & 1 deletion docs/dev/table/streaming.md
@@ -549,7 +549,7 @@ val stream: DataStream[Row] = result.toAppendStream[Row](qConfig)
</div>
</div>

In the the following we describe the parameters of the `QueryConfig` and how they affect the accuracy and resource consumption of a query.
In the following we describe the parameters of the `QueryConfig` and how they affect the accuracy and resource consumption of a query.

### Idle State Retention Time
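
As a rough illustration of the `QueryConfig` parameters this hunk refers to, a minimal Java sketch of obtaining the config and setting an idle state retention window, assuming the `StreamQueryConfig` API of this Flink release (the retention times are arbitrary):

```java
import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.StreamQueryConfig;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class QueryConfigExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

        // Obtain the query configuration from the TableEnvironment ...
        StreamQueryConfig qConfig = tableEnv.queryConfig();
        // ... and keep idle per-key state for at least 12 and at most 24 hours.
        qConfig.withIdleStateRetentionTime(Time.hours(12), Time.hours(24));
    }
}
```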

2 changes: 1 addition & 1 deletion docs/dev/table/tableApi.md
@@ -1445,7 +1445,7 @@ The `OverWindow` defines a range of rows over which aggregates are computed. `Ov

<ul>
<li><code>CURRENT_ROW</code> sets the upper bound of the window to the current row.</li>
<li><code>CURRENT_RANGE</code> sets the upper bound of the window to sort key of the the current row, i.e., all rows with the same sort key as the current row are included in the window.</li>
<li><code>CURRENT_RANGE</code> sets the upper bound of the window to sort key of the current row, i.e., all rows with the same sort key as the current row are included in the window.</li>
</ul>

<p>If the <code>following</code> clause is omitted, the upper bound of a time interval window is defined as <code>CURRENT_RANGE</code> and the upper bound of a row-count interval window is defined as <code>CURRENT_ROW</code>.</p>
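
A hedged sketch of an over window in the Java Table API where the `following` clause is omitted and therefore defaults to `CURRENT_RANGE`; the table and column names are made up, and the string-expression syntax is assumed from this Flink release:

```java
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.Over;

public class OverWindowDefaults {
    // "orders" is assumed to be a Table with an event-time attribute "rowtime".
    static Table overWindowWithDefaultUpperBound(Table orders) {
        return orders
            // No following(...) clause: the upper bound defaults to CURRENT_RANGE.
            .window(Over.partitionBy("a").orderBy("rowtime").preceding("unbounded_range").as("w"))
            .select("a, b.sum over w");
    }
}
```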
2 changes: 1 addition & 1 deletion docs/ops/deployment/aws.md
@@ -55,7 +55,7 @@ when creating an EMR cluster.

After creating your cluster, you can [connect to the master node](http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-connect-master-node.html) and install Flink:

1. Go the the [Downloads Page]({{ download_url}}) and **download a binary version of Flink matching the Hadoop version** of your EMR cluster, e.g. Hadoop 2.7 for EMR releases 4.3.0, 4.4.0, or 4.5.0.
1. Go the [Downloads Page]({{ download_url}}) and **download a binary version of Flink matching the Hadoop version** of your EMR cluster, e.g. Hadoop 2.7 for EMR releases 4.3.0, 4.4.0, or 4.5.0.
2. Extract the Flink distribution and you are ready to deploy [Flink jobs via YARN](yarn_setup.html) after **setting the Hadoop config directory**:

```bash
@@ -420,7 +420,7 @@ public void manualBulkRequestWithAllPendingRequests() {
/**
* On non-manual flushes, i.e. when flush is called in the snapshot method implementation,
* usages need to explicitly call this to allow the flush to continue. This is useful
* to make sure that specific requests get added to the the next bulk request for flushing.
* to make sure that specific requests get added to the next bulk request for flushing.
*/
public void continueFlush() {
flushLatch.trigger();
@@ -29,7 +29,7 @@ public interface KeyedStateStore {
/**
* Gets a handle to the system's key/value state. The key/value state is only accessible
* if the function is executed on a KeyedStream. On each access, the state exposes the value
* for the the key of the element currently processed by the function.
* for the key of the element currently processed by the function.
* Each function may have multiple partitioned states, addressed with different names.
*
* <p>Because the scope of each value is the key of the currently processed element,
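
The hunk is cut off here; as a rough sketch of the per-key scoping this Javadoc describes, a keyed function might use a `ValueState` as below (the counting logic is illustrative only, not part of the commit):

```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class CountPerKey extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {

    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        // Obtained through the RuntimeContext; backed by the keyed state store described above.
        count = getRuntimeContext().getState(new ValueStateDescriptor<>("count", Long.class));
    }

    @Override
    public void flatMap(Tuple2<String, Long> value, Collector<Tuple2<String, Long>> out) throws Exception {
        // Each access sees the value for the key of the element currently processed.
        Long current = count.value();
        long updated = (current == null ? 0L : current) + 1L;
        count.update(updated);
        out.collect(Tuple2.of(value.f0, updated));
    }
}
```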
@@ -906,7 +906,7 @@ public boolean initOutPathDistFS(Path outPath, WriteMode writeMode, boolean crea
private static HashMap<String, FileSystemFactory> loadFileSystems() {
final HashMap<String, FileSystemFactory> map = new HashMap<>();

// by default, we always have the the local file system factory
// by default, we always have the local file system factory
map.put("file", new LocalFileSystemFactory());

LOG.debug("Loading extension file systems via services");
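
From the user side, the factories registered here are what make scheme-based lookups work; a minimal, hedged example resolving the built-in local file system (the path is arbitrary):

```java
import java.net.URI;

import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

public class LocalFsLookup {
    public static void main(String[] args) throws Exception {
        // The "file" scheme resolves to the LocalFileSystemFactory registered in loadFileSystems().
        FileSystem fs = FileSystem.get(new URI("file:///tmp"));
        System.out.println(fs.exists(new Path("/tmp")));
    }
}
```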
@@ -61,7 +61,7 @@ public Option alt(String shortName) {
/**
* Define the type of the Option.
*
* @param type - the type which the the value of the Option can be casted to.
* @param type - the type which the value of the Option can be casted to.
* @return the updated Option
*/
public Option type(OptionType type) {
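
A hedged usage sketch of the `Option` builder documented in this hunk, combined with `RequiredParameters`; the parameter name is made up and the single-argument constructor is an assumption, not something shown in this commit:

```java
import org.apache.flink.api.java.utils.Option;
import org.apache.flink.api.java.utils.OptionType;
import org.apache.flink.api.java.utils.RequiredParameters;

public class OptionTypeExample {
    public static void main(String[] args) throws Exception {
        RequiredParameters required = new RequiredParameters();
        // Declare a required "parallelism" option whose value must be castable to an integer.
        required.add(new Option("parallelism").alt("p").type(OptionType.INTEGER));
    }
}
```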
@@ -205,7 +205,7 @@ public void join(BipartiteEdge<KT, KB, EV> first, BipartiteEdge<KT, KB, EV> seco
* Convert a bipartite graph into a graph that contains only top vertices. An edge between two vertices in the new
* graph will exist only if the original bipartite graph contains at least one bottom vertex they both connect to.
*
* <p>The full projection performs three joins and returns edges containing the the connecting vertex ID and value,
* <p>The full projection performs three joins and returns edges containing the connecting vertex ID and value,
* both top vertex values, and both bipartite edge values.
*
* <p>Note: KT must override .equals(). This requirement may be removed in a future release.
@@ -271,7 +271,7 @@ public void join(Tuple5<KT, KB, EV, VVT, VVB> first, Tuple5<KT, KB, EV, VVT, VVB
* Convert a bipartite graph into a graph that contains only bottom vertices. An edge between two vertices in the
* new graph will exist only if the original bipartite graph contains at least one top vertex they both connect to.
*
* <p>The full projection performs three joins and returns edges containing the the connecting vertex ID and value,
* <p>The full projection performs three joins and returns edges containing the connecting vertex ID and value,
* both bottom vertex values, and both bipartite edge values.
*
* <p>Note: KB must override .equals(). This requirement may be removed in a future release.
@@ -30,7 +30,7 @@ import scala.collection.JavaConverters._
/**
* This class is responsible to connect an external catalog to Calcite's catalog.
* This enables to look-up and access tables in SQL queries without registering tables in advance.
* The the external catalog and all included sub-catalogs and tables is registered as
* The external catalog and all included sub-catalogs and tables is registered as
* sub-schemas and tables in Calcite.
*
* @param catalogIdentifier external catalog name
@@ -19,7 +19,7 @@
package org.apache.flink.optimizer.dataproperties;

/**
* An enumeration of the the different types of distributing data across partitions or
* An enumeration of the different types of distributing data across partitions or
* parallel workers.
*/
public enum PartitioningProperty {
@@ -45,7 +45,7 @@
/**
* Base class for Optimizer tests. Offers utility methods to trigger optimization
* of a program and to fetch the nodes in an optimizer plan that correspond
* the the node in the program plan.
* the node in the program plan.
*/
public abstract class CompilerTestBase extends TestLogger implements java.io.Serializable {

@@ -97,7 +97,7 @@ public interface MasterTriggerRestoreHook<T> {
* This method is called by the checkpoint coordinator prior to restoring the state of a checkpoint.
* If the checkpoint did store data from this hook, that data will be passed to this method.
*
* @param checkpointId The The ID (logical timestamp) of the restored checkpoint
* @param checkpointId The ID (logical timestamp) of the restored checkpoint
* @param checkpointData The data originally stored in the checkpoint by this hook, possibly null.
*
* @throws Exception Exceptions thrown while restoring the checkpoint will cause the restore
@@ -938,7 +938,7 @@ public void run() {
},
futureExecutor);

// from now on, slots will be rescued by the the futures and their completion, or by the timeout
// from now on, slots will be rescued by the futures and their completion, or by the timeout
successful = true;
}
finally {
@@ -1211,7 +1211,7 @@ public void restart(long expectedGlobalVersion) {
*
* @param errorIfNoCheckpoint Fail if there is no checkpoint available
* @param allowNonRestoredState Allow to skip checkpoint state that cannot be mapped
* to the the ExecutionGraph vertices (if the checkpoint contains state for a
* to the ExecutionGraph vertices (if the checkpoint contains state for a
* job vertex that is not part of this ExecutionGraph).
*/
public void restoreLatestCheckpointedState(boolean errorIfNoCheckpoint, boolean allowNonRestoredState) throws Exception {
@@ -31,7 +31,7 @@
* message has to be serialized.
* <p>
* In order to fail fast and report an appropriate error message to the user, the method name, the
* parameter types and the arguments are eagerly serialized. In case the the invocation call
* parameter types and the arguments are eagerly serialized. In case the invocation call
* contains a non-serializable object, then an {@link IOException} is thrown.
*/
public class RemoteRpcInvocation implements RpcInvocation, Serializable {
@@ -45,7 +45,7 @@ public interface StateObject extends Serializable {
void discardState() throws Exception;

/**
* Returns the size of the state in bytes. If the the size is not known, this
* Returns the size of the state in bytes. If the size is not known, this
* method should return {@code 0}.
*
* <p>The values produced by this method are only used for informational purposes and
@@ -697,8 +697,8 @@ object AkkaUtils {
* @param fn The function to retry
* @param stopCond Flag to signal termination
* @param maxSleepBetweenRetries Max random sleep time between retries
* @tparam T Return type of the the function to retry
* @return Return value of the the function to retry
* @tparam T Return type of the function to retry
* @return Return value of the function to retry
*/
@tailrec
def retryOnBindException[T](
@@ -42,7 +42,7 @@
* <p>To finalize the join operation you also need to specify a {@link KeySelector} for
* both the first and second input and a {@link WindowAssigner}.
*
* <p>Note: Right now, the the join is being evaluated in memory so you need to ensure that the number
* <p>Note: Right now, the join is being evaluated in memory so you need to ensure that the number
* of elements per key does not get too high. Otherwise the JVM might crash.
*
* <p>Example:
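
The Javadoc's own example is cut off in this hunk; a minimal sketch of such a windowed join over two keyed streams, with window size, key fields and types chosen only for illustration:

```java
import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WindowJoinSketch {

    static DataStream<Tuple2<String, Integer>> join(
            DataStream<Tuple2<String, Integer>> first,
            DataStream<Tuple2<String, Integer>> second) {

        return first
            .join(second)
            .where(new KeySelector<Tuple2<String, Integer>, String>() {
                @Override
                public String getKey(Tuple2<String, Integer> value) {
                    return value.f0;
                }
            })
            .equalTo(new KeySelector<Tuple2<String, Integer>, String>() {
                @Override
                public String getKey(Tuple2<String, Integer> value) {
                    return value.f0;
                }
            })
            // Joined elements of a key are buffered per window, hence the memory caveat above.
            .window(TumblingEventTimeWindows.of(Time.seconds(5)))
            .apply(new JoinFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, Tuple2<String, Integer>>() {
                @Override
                public Tuple2<String, Integer> join(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
                    return Tuple2.of(a.f0, a.f1 + b.f1);
                }
            });
    }
}
```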
@@ -31,7 +31,7 @@ public class FieldsFromTuple implements Extractor<Tuple, double[]> {
int[] indexes;

/**
* Extracts one or more fields of the the type Double from a tuple and puts
* Extracts one or more fields of the type Double from a tuple and puts
* them into a new double[] (in the specified order).
*
* @param indexes
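
A small, hedged sketch of the extractor this Javadoc documents; the field positions and values are arbitrary, and the extracted fields are assumed to be of type `Double`:

```java
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.functions.windowing.delta.extractor.FieldsFromTuple;

public class FieldsFromTupleExample {
    public static void main(String[] args) {
        // Extract tuple positions 0 and 2, in that order, into a double[].
        FieldsFromTuple extractor = new FieldsFromTuple(0, 2);
        double[] values = extractor.extract(Tuple3.of(1.5, "ignored", 3.0));
        System.out.println(values[0] + ", " + values[1]); // 1.5, 3.0
    }
}
```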
@@ -45,7 +45,7 @@ public interface Output<T> extends Collector<T> {
void emitWatermark(Watermark mark);

/**
* Emits a record the the side output identified by the given {@link OutputTag}.
* Emits a record the side output identified by the given {@link OutputTag}.
*
* @param record The record to collect.
*/
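
The `Output` interface in this hunk is operator-internal; its user-facing counterpart is the side-output mechanism of `ProcessFunction`, sketched here with a made-up tag and filtering logic:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SideOutputSketch {

    static DataStream<String> splitOffNegatives(DataStream<Long> input) {
        final OutputTag<String> negativesTag = new OutputTag<String>("negatives") {};

        SingleOutputStreamOperator<Long> main = input.process(new ProcessFunction<Long, Long>() {
            @Override
            public void processElement(Long value, Context ctx, Collector<Long> out) {
                if (value < 0) {
                    // Routed to the side output identified by the OutputTag.
                    ctx.output(negativesTag, "negative value: " + value);
                } else {
                    out.collect(value);
                }
            }
        });

        return main.getSideOutput(negativesTag);
    }
}
```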
@@ -527,7 +527,7 @@ class DataStream[T](stream: JavaStream[T]) {
* stream of the iterative part.
*
* The input stream of the iterate operator and the feedback stream will be treated
* as a ConnectedStreams where the the input is connected with the feedback stream.
* as a ConnectedStreams where the input is connected with the feedback stream.
*
* This allows the user to distinguish standard input from feedback inputs.
*
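
The Scala variant described above connects the input with a feedback stream of a possibly different type; a hedged Java counterpart using `withFeedbackType` (the step logic and types are made up for illustration):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.IterativeStream.ConnectedIterativeStreams;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

public class ConnectedIterationSketch {

    static DataStream<String> iterate(DataStream<Long> input) {
        // Head of the iteration: the original input connected with a String feedback stream.
        ConnectedIterativeStreams<Long, String> iteration =
            input.iterate().withFeedbackType(String.class);

        DataStream<String> step = iteration.flatMap(new CoFlatMapFunction<Long, String, String>() {
            @Override
            public void flatMap1(Long original, Collector<String> out) {
                out.collect(Long.toString(original));
            }

            @Override
            public void flatMap2(String feedback, Collector<String> out) {
                out.collect(feedback);
            }
        });

        // Feed a subset of the step's output back into the iteration head.
        iteration.closeWith(step.filter(s -> s.length() < 3));
        return step;
    }
}
```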