[FLINK-17352][docs] Fix doc links using site.baseurl w/ link tag
This closes apache#11885
alpinegizmo authored and sjwiesman committed Apr 23, 2020
1 parent e3faf63 commit 401290a
Showing 21 changed files with 111 additions and 112 deletions.
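Every hunk shown below applies the same mechanical substitution. The Flink docs are built with Jekyll, and in Jekyll 4 the `{% link %}` tag prepends `site.baseurl` to the URL it generates on its own, so the handwritten `{{ site.baseurl }}` prefix had become redundant and would emit the base URL twice — this rationale is inferred from the diff; the commit message itself only names the fix. The pattern, shown on a link from the first file:

    Before:  [standalone cluster]({{ site.baseurl }}{% link ops/deployment/cluster_setup.md %})
    After:   [standalone cluster]({% link ops/deployment/cluster_setup.md %})

A side benefit of `{% link %}` over hand-written URLs is that Jekyll fails the build if the target file does not exist, so broken cross-references are caught at build time rather than in the published docs.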
6 changes: 3 additions & 3 deletions docs/concepts/flink-architecture.md
@@ -54,10 +54,10 @@ The Flink runtime consists of two types of processes:
There must always be at least one TaskManager.

The Flink Master and TaskManagers can be started in various ways: directly on
-the machines as a [standalone cluster]({{ site.baseurl }}{% link
+the machines as a [standalone cluster]({% link
ops/deployment/cluster_setup.md %}), in containers, or managed by resource
-frameworks like [YARN]({{ site.baseurl }}{% link ops/deployment/yarn_setup.md
-%}) or [Mesos]({{ site.baseurl }}{% link ops/deployment/mesos.md %}).
+frameworks like [YARN]({% link ops/deployment/yarn_setup.md
+%}) or [Mesos]({% link ops/deployment/mesos.md %}).
TaskManagers connect to Flink Masters, announcing themselves as available, and
are assigned work.

25 changes: 12 additions & 13 deletions docs/concepts/index.md
@@ -27,13 +27,13 @@ specific language governing permissions and limitations
under the License.
-->

-The [Hands-on Tutorials]({{ site.baseurl }}{% link tutorials/index.md %}) explain the basic concepts
+The [Hands-on Tutorials]({% link tutorials/index.md %}) explain the basic concepts
of stateful and timely stream processing that underlie Flink's APIs, and provide examples of how
these mechanisms are used in applications. Stateful stream processing is introduced in the context
-of [Data Pipelines & ETL]({{ site.baseurl }}{% link tutorials/etl.md %}#stateful-transformations)
-and is further developed in the section on [Fault Tolerance]({{ site.baseurl }}{% link
+of [Data Pipelines & ETL]({% link tutorials/etl.md %}#stateful-transformations)
+and is further developed in the section on [Fault Tolerance]({% link
tutorials/fault_tolerance.md %}). Timely stream processing is introduced in the section on
-[Streaming Analytics]({{ site.baseurl }}{% link tutorials/streaming_analytics.md %}).
+[Streaming Analytics]({% link tutorials/streaming_analytics.md %}).

This _Concepts in Depth_ section provides a deeper understanding of how Flink's architecture and runtime
implement these concepts.
@@ -45,17 +45,16 @@ Flink offers different levels of abstraction for developing streaming/batch appl
<img src="{{ site.baseurl }}/fig/levels_of_abstraction.svg" alt="Programming levels of abstraction" class="offset" width="80%" />

- The lowest level abstraction simply offers **stateful and timely stream processing**. It is
-embedded into the [DataStream API]({{ site.baseurl}}{% link
-dev/datastream_api.md %}) via the [Process Function]({{ site.baseurl }}{%
-link dev/stream/operators/process_function.md %}). It allows users to freely
-process events from one or more streams, and provides consistent, fault tolerant
-*state*. In addition, users can register event time and processing time
-callbacks, allowing programs to realize sophisticated computations.
+embedded into the [DataStream API]({% link dev/datastream_api.md %}) via the [Process
+Function]({% link dev/stream/operators/process_function.md %}). It allows
+users to freely process events from one or more streams, and provides consistent, fault tolerant
+*state*. In addition, users can register event time and processing time callbacks, allowing
+programs to realize sophisticated computations.

- In practice, many applications do not need the low-level
abstractions described above, and can instead program against the **Core APIs**: the
-[DataStream API]({{ site.baseurl }}{% link dev/datastream_api.md %})
-(bounded/unbounded streams) and the [DataSet API]({{ site.baseurl }}{% link
+[DataStream API]({% link dev/datastream_api.md %})
+(bounded/unbounded streams) and the [DataSet API]({% link
dev/batch/index.md %}) (bounded data sets). These fluent APIs offer the
common building blocks for data processing, like various forms of
user-specified transformations, joins, aggregations, windows, state, etc.
@@ -69,7 +68,7 @@ Flink offers different levels of abstraction for developing streaming/batch appl

- The **Table API** is a declarative DSL centered around *tables*, which may
be dynamically changing tables (when representing streams). The [Table
-API]({{ site.baseurl }}{% link dev/table/index.md %}) follows the
+API]({% link dev/table/index.md %}) follows the
(extended) relational model: Tables have a schema attached (similar to
tables in relational databases) and the API offers comparable operations,
such as select, project, join, group-by, aggregate, etc. Table API
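One context line in the file above deserves a note: the `<img src="{{ site.baseurl }}/fig/levels_of_abstraction.svg">` reference keeps its explicit prefix. The commit only touches `{% link %}` tags; a raw asset path is plain text with no Liquid tag to supply the base URL, so dropping the prefix there would break sites served under a subpath (again a reading of the diff, not something the commit message states). Side by side:

    [Table API]({% link dev/table/index.md %})                        <- tag supplies the base URL
    <img src="{{ site.baseurl }}/fig/levels_of_abstraction.svg" />    <- plain path keeps the prefix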
14 changes: 7 additions & 7 deletions docs/concepts/index.zh.md
@@ -27,13 +27,13 @@ specific language governing permissions and limitations
under the License.
-->

-The [Hands-on Tutorials]({{ site.baseurl }}{% link tutorials/index.zh.md %}) explain the basic concepts
+The [Hands-on Tutorials]({% link tutorials/index.zh.md %}) explain the basic concepts
of stateful and timely stream processing that underlie Flink's APIs, and provide examples of how
these mechanisms are used in applications. Stateful stream processing is introduced in the context
-of [Data Pipelines & ETL]({{ site.baseurl }}{% link tutorials/etl.zh.md %}#stateful-transformations)
-and is further developed in the section on [Fault Tolerance]({{ site.baseurl }}{% link
+of [Data Pipelines & ETL]({% link tutorials/etl.zh.md %}#stateful-transformations)
+and is further developed in the section on [Fault Tolerance]({% link
tutorials/fault_tolerance.zh.md %}). Timely stream processing is introduced in the section on
-[Streaming Analytics]({{ site.baseurl }}{% link tutorials/streaming_analytics.zh.md %}).
+[Streaming Analytics]({% link tutorials/streaming_analytics.zh.md %}).

This _Concepts in Depth_ section provides a deeper understanding of how Flink's architecture and runtime
implement these concepts.
@@ -54,8 +54,8 @@ Flink offers different levels of abstraction for developing streaming/batch appl

- In practice, many applications do not need the low-level
abstractions described above, and can instead program against the **Core APIs**: the
-[DataStream API]({{ site.baseurl }}{% link dev/datastream_api.zh.md %})
-(bounded/unbounded streams) and the [DataSet API]({{ site.baseurl }}{% link
+[DataStream API]({% link dev/datastream_api.zh.md %})
+(bounded/unbounded streams) and the [DataSet API]({% link
dev/batch/index.zh.md %}) (bounded data sets). These fluent APIs offer the
common building blocks for data processing, like various forms of
user-specified transformations, joins, aggregations, windows, state, etc.
@@ -69,7 +69,7 @@ Flink offers different levels of abstraction for developing streaming/batch appl

- The **Table API** is a declarative DSL centered around *tables*, which may
be dynamically changing tables (when representing streams). The [Table
-API]({{ site.baseurl }}{% link dev/table/index.zh.md %}) follows the
+API]({% link dev/table/index.zh.md %}) follows the
(extended) relational model: Tables have a schema attached (similar to
tables in relational databases) and the API offers comparable operations,
such as select, project, join, group-by, aggregate, etc. Table API
12 changes: 6 additions & 6 deletions docs/concepts/stateful-stream-processing.md
@@ -47,11 +47,11 @@ and [savepoints]({{ site.baseurl }}{%link ops/state/savepoints.md %}).
Knowledge about the state also allows for rescaling Flink applications, meaning
that Flink takes care of redistributing state across parallel instances.

-[Queryable state]({{ site.baseurl }}{% link dev/stream/state/queryable_state.md
+[Queryable state]({% link dev/stream/state/queryable_state.md
%}) allows you to access state from outside of Flink during runtime.

When working with state, it might also be useful to read about [Flink's state
-backends]({{ site.baseurl }}{% link ops/state/state_backends.md %}). Flink
+backends]({% link ops/state/state_backends.md %}). Flink
provides different state backends that specify how and where state is stored.

* This will be replaced by the TOC
@@ -123,7 +123,7 @@ to enable and configure checkpointing.
stream source (such as message queue or broker) needs to be able to rewind the
stream to a defined recent point. [Apache Kafka](https://kafka.apache.org) has
this ability and Flink's connector to Kafka exploits this. See [Fault
-Tolerance Guarantees of Data Sources and Sinks]({{ site.baseurl }}{% link
+Tolerance Guarantees of Data Sources and Sinks]({% link
dev/connectors/guarantees.md %}) for more information about the guarantees
provided by Flink's connectors.

@@ -247,15 +247,15 @@ If state was snapshotted incrementally, the operators start with the state of
the latest full snapshot and then apply a series of incremental snapshot
updates to that state.

-See [Restart Strategies]({{ site.baseurl }}{% link dev/task_failure_recovery.md
+See [Restart Strategies]({% link dev/task_failure_recovery.md
%}#restart-strategies) for more information.

### State Backends

`TODO: expand this section`

The exact data structures in which the key/values indexes are stored depends on
-the chosen [state backend]({{ site.baseurl }}{% link
+the chosen [state backend]({% link
ops/state/state_backends.md %}). One state backend stores data in an in-memory
hash map, another state backend uses [RocksDB](https://rocksdb.org) as the
key/value store. In addition to defining the data structure that holds the
@@ -276,7 +276,7 @@ All programs that use checkpointing can resume execution from a **savepoint**.
Savepoints allow both updating your programs and your Flink cluster without
losing any state.

-[Savepoints]({{ site.baseurl }}{% link ops/state/savepoints.md %}) are
+[Savepoints]({% link ops/state/savepoints.md %}) are
**manually triggered checkpoints**, which take a snapshot of the program and
write it out to a state backend. They rely on the regular checkpointing
mechanism for this.
12 changes: 6 additions & 6 deletions docs/concepts/stateful-stream-processing.zh.md
@@ -47,11 +47,11 @@ and [savepoints]({{ site.baseurl }}{%link ops/state/savepoints.zh.md %}).
Knowledge about the state also allows for rescaling Flink applications, meaning
that Flink takes care of redistributing state across parallel instances.

-[Queryable state]({{ site.baseurl }}{% link dev/stream/state/queryable_state.zh.md
+[Queryable state]({% link dev/stream/state/queryable_state.zh.md
%}) allows you to access state from outside of Flink during runtime.

When working with state, it might also be useful to read about [Flink's state
-backends]({{ site.baseurl }}{% link ops/state/state_backends.zh.md %}). Flink
+backends]({% link ops/state/state_backends.zh.md %}). Flink
provides different state backends that specify how and where state is stored.

* This will be replaced by the TOC
@@ -123,7 +123,7 @@ to enable and configure checkpointing.
stream source (such as message queue or broker) needs to be able to rewind the
stream to a defined recent point. [Apache Kafka](https://kafka.apache.org) has
this ability and Flink's connector to Kafka exploits this. See [Fault
-Tolerance Guarantees of Data Sources and Sinks]({{ site.baseurl }}{% link
+Tolerance Guarantees of Data Sources and Sinks]({% link
dev/connectors/guarantees.zh.md %}) for more information about the guarantees
provided by Flink's connectors.

@@ -247,15 +247,15 @@ If state was snapshotted incrementally, the operators start with the state of
the latest full snapshot and then apply a series of incremental snapshot
updates to that state.

-See [Restart Strategies]({{ site.baseurl }}{% link dev/task_failure_recovery.zh.md
+See [Restart Strategies]({% link dev/task_failure_recovery.zh.md
%}#restart-strategies) for more information.

### State Backends

`TODO: expand this section`

The exact data structures in which the key/values indexes are stored depends on
-the chosen [state backend]({{ site.baseurl }}{% link
+the chosen [state backend]({% link
ops/state/state_backends.zh.md %}). One state backend stores data in an in-memory
hash map, another state backend uses [RocksDB](https://rocksdb.org) as the
key/value store. In addition to defining the data structure that holds the
@@ -276,7 +276,7 @@ All programs that use checkpointing can resume execution from a **savepoint**.
Savepoints allow both updating your programs and your Flink cluster without
losing any state.

-[Savepoints]({{ site.baseurl }}{% link ops/state/savepoints.zh.md %}) are
+[Savepoints]({% link ops/state/savepoints.zh.md %}) are
**manually triggered checkpoints**, which take a snapshot of the program and
write it out to a state backend. They rely on the regular checkpointing
mechanism for this.
2 changes: 1 addition & 1 deletion docs/concepts/timely-stream-processing.md
@@ -183,7 +183,7 @@ evaluation of event time windows.
For this reason, streaming programs may explicitly expect some *late* elements.
Late elements are elements that arrive after the system's event time clock (as
signaled by the watermarks) has already passed the time of the late element's
-timestamp. See [Allowed Lateness]({{ site.baseurl }}{% link
+timestamp. See [Allowed Lateness]({% link
dev/stream/operators/windows.md %}#allowed-lateness) for more information on
how to work with late elements in event time windows.

2 changes: 1 addition & 1 deletion docs/concepts/timely-stream-processing.zh.md
@@ -183,7 +183,7 @@ evaluation of event time windows.
For this reason, streaming programs may explicitly expect some *late* elements.
Late elements are elements that arrive after the system's event time clock (as
signaled by the watermarks) has already passed the time of the late element's
-timestamp. See [Allowed Lateness]({{ site.baseurl }}{% link
+timestamp. See [Allowed Lateness]({% link
dev/stream/operators/windows.zh.md %}#allowed-lateness) for more information on
how to work with late elements in event time windows.

6 changes: 3 additions & 3 deletions docs/dev/event_time.md
@@ -25,13 +25,13 @@ under the License.
-->

In this section you will learn about writing time-aware Flink programs. Please
-take a look at [Timely Stream Processing]({{site.baseurl}}{% link
+take a look at [Timely Stream Processing]({% link
concepts/timely-stream-processing.md %}) to learn about the concepts behind
timely stream processing.

For information about how to use time in Flink programs refer to
-[windowing]({{site.baseurl}}{% link dev/stream/operators/windows.md %}) and
-[ProcessFunction]({{ site.baseurl }}{% link
+[windowing]({% link dev/stream/operators/windows.md %}) and
+[ProcessFunction]({% link
dev/stream/operators/process_function.md %}).

* toc
2 changes: 1 addition & 1 deletion docs/dev/stream/state/broadcast_state.md
@@ -26,7 +26,7 @@ under the License.
{:toc}

In this section you will learn about how to use broadcast state in practise. Please refer to [Stateful Stream
-Processing]({{site.baseurl}}{% link concepts/stateful-stream-processing.md %})
+Processing]({% link concepts/stateful-stream-processing.md %})
to learn about the concepts behind stateful stream processing.

## Provided APIs
2 changes: 1 addition & 1 deletion docs/dev/stream/state/index.md
@@ -27,7 +27,7 @@ under the License.

In this section you will learn about the APIs that Flink provides for writing
stateful programs. Please take a look at [Stateful Stream
-Processing]({{site.baseurl}}{% link concepts/stateful-stream-processing.md %})
+Processing]({% link concepts/stateful-stream-processing.md %})
to learn about the concepts behind stateful stream processing.

{% top %}
4 changes: 2 additions & 2 deletions docs/dev/stream/state/state.md
@@ -24,7 +24,7 @@ under the License.

In this section you will learn about the APIs that Flink provides for writing
stateful programs. Please take a look at [Stateful Stream
-Processing]({{site.baseurl}}{% link concepts/stateful-stream-processing.md %})
+Processing]({% link concepts/stateful-stream-processing.md %})
to learn about the concepts behind stateful stream processing.

* ToC
@@ -499,7 +499,7 @@ val counts: DataStream[(String, Int)] = stream
## Operator State

*Operator State* (or *non-keyed state*) is state that is is bound to one
-parallel operator instance. The [Kafka Connector]({{ site.baseurl }}{% link
+parallel operator instance. The [Kafka Connector]({% link
dev/connectors/kafka.md %}) is a good motivating example for the use of
Operator State in Flink. Each parallel instance of the Kafka consumer maintains
a map of topic partitions and offsets as its Operator State.
10 changes: 5 additions & 5 deletions docs/tutorials/datastream_api.md
@@ -83,7 +83,7 @@ public class Person {
Person person = new Person("Fred Flintstone", 35);
{% endhighlight %}

-Flink's serializer [supports schema evolution for POJO types]({{ site.baseurl }}{% link dev/stream/state/schema_evolution.md %}#pojo-types).
+Flink's serializer [supports schema evolution for POJO types]({% link dev/stream/state/schema_evolution.md %}#pojo-types).

### Scala tuples and case classes

@@ -229,9 +229,9 @@ instructions in the README, do the first exercise:
## Further Reading

- [Flink Serialization Tuning Vol. 1: Choosing your Serializer — if you can](https://flink.apache.org/news/2020/04/15/flink-serialization-tuning-vol-1.html)
-- [Anatomy of a Flink Program]({{ site.baseurl }}{% link dev/api_concepts.md %}#anatomy-of-a-flink-program)
-- [Data Sources]({{ site.baseurl }}{% link dev/datastream_api.md %}#data-sources)
-- [Data Sinks]({{ site.baseurl }}{% link dev/datastream_api.md %}#data-sinks)
-- [DataStream Connectors]({{ site.baseurl }}{% link dev/connectors/index.md %})
+- [Anatomy of a Flink Program]({% link dev/api_concepts.md %}#anatomy-of-a-flink-program)
+- [Data Sources]({% link dev/datastream_api.md %}#data-sources)
+- [Data Sinks]({% link dev/datastream_api.md %}#data-sinks)
+- [DataStream Connectors]({% link dev/connectors/index.md %})

{% top %}
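The Further Reading hunks above and below also show how section anchors interact with the tag: the `#fragment` sits outside `{% link %}` and is concatenated verbatim onto the URL the tag produces, so anchors carry over with no special treatment:

    [Data Sources]({% link dev/datastream_api.md %}#data-sources)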
10 changes: 5 additions & 5 deletions docs/tutorials/datastream_api.zh.md
@@ -83,7 +83,7 @@ public class Person {
Person person = new Person("Fred Flintstone", 35);
{% endhighlight %}

-Flink's serializer [supports schema evolution for POJO types]({{ site.baseurl }}{% link dev/stream/state/schema_evolution.zh.md %}#pojo-types).
+Flink's serializer [supports schema evolution for POJO types]({% link dev/stream/state/schema_evolution.zh.md %}#pojo-types).

### Scala tuples and case classes

@@ -229,9 +229,9 @@ instructions in the README, do the first exercise:
## Further Reading

- [Flink Serialization Tuning Vol. 1: Choosing your Serializer — if you can](https://flink.apache.org/news/2020/04/15/flink-serialization-tuning-vol-1.html)
-- [Anatomy of a Flink Program]({{ site.baseurl }}{% link dev/api_concepts.zh.md %}#anatomy-of-a-flink-program)
-- [Data Sources]({{ site.baseurl }}{% link dev/datastream_api.zh.md %}#data-sources)
-- [Data Sinks]({{ site.baseurl }}{% link dev/datastream_api.zh.md %}#data-sinks)
-- [DataStream Connectors]({{ site.baseurl }}{% link dev/connectors/index.zh.md %})
+- [Anatomy of a Flink Program]({% link dev/api_concepts.zh.md %}#anatomy-of-a-flink-program)
+- [Data Sources]({% link dev/datastream_api.zh.md %}#data-sources)
+- [Data Sinks]({% link dev/datastream_api.zh.md %}#data-sinks)
+- [DataStream Connectors]({% link dev/connectors/index.zh.md %})

{% top %}