[FLINK-17244][docs] Update the Getting Started page (apache#11988)
* Update docs/getting-started/index.md
alpinegizmo committed May 5, 2020
1 parent f90740f commit 4cb58e7
Showing 1 changed file with 32 additions and 13 deletions.
docs/getting-started/index.md (45 changes: 32 additions & 13 deletions)
@@ -28,34 +28,53 @@ under the License.
-->

There are many ways to get started with Apache Flink. Which one is the best for
you depends on your goal and prior experience:
you depends on your goals and prior experience:

* take a look at the **Docker Playgrounds** for a docker-based introduction to
specific Flink concepts
* explore on of the **Code Walkthroughs** if you want to get an end-to-end
introduction to using one of the Flink APIs
* use **Project Setup** if you already know the basics of Flink but want to get a
project setup template for Java or Scala and need help setting up
dependencies
* take a look at the **Docker Playgrounds** if you want to see what Flink can do, via a hands-on,
docker-based introduction to specific Flink concepts
* explore one of the **Code Walkthroughs** if you want a quick, end-to-end
introduction to one of Flink's APIs
* work your way through the **Hands-on Training** for a comprehensive,
step-by-step introduction to Flink
* use **Project Setup** if you already know the basics of Flink and want a
project template for Java or Scala, or need help setting up the dependencies

### Taking a first look at Flink

The **Docker Playgrounds** provide sandboxed Flink environments that you can set up in just a few minutes to explore and play with Flink.

* The [**Operations Playground**](./docker-playgrounds/flink-operations-playground.html) shows you how to operate streaming applications with Flink. You can experience how Flink recovers application from failures, upgrade and scale streaming applications up and down, and query application metrics.
* The [**Operations Playground**]({% link getting-started/docker-playgrounds/flink-operations-playground.md %}) shows you how to operate streaming applications with Flink. You can experience how Flink recovers applications from failures, upgrade and scale streaming applications up and down, and query application metrics.

<!--
* The [**Streaming SQL Playground**]() provides a Flink cluster with a SQL CLI client, tables which are fed by streaming data sources, and instructions for how to run continuous streaming SQL queries on these tables. This is the perfect environment for your first steps with streaming SQL.
-->

### First steps with one of Flink's APIs

The **Code Walkthroughs** are the best way to get started and introduce you step by step to an API.
A walkthrough provides instructions to bootstrap a small Flink project with a code skeleton and shows how to extend it to a simple application.
The **Code Walkthroughs** are a great way to get started quickly with a step-by-step introduction to
one of Flink's APIs. Each walkthrough provides instructions for bootstrapping a small skeleton
project, and then shows how to extend it to a simple application.

* The [**DataStream API**](./walkthroughs/datastream_api.html) code walkthrough shows how to implement a simple DataStream application and how to extend it to be stateful and use timers. The DataStream API is Flink's main abstraction to implement stateful streaming applications with sophisticated time semantics in Java or Scala.
* The [**DataStream API** code walkthrough]({% link getting-started/walkthroughs/datastream_api.md %}) shows how
to implement a simple DataStream application and how to extend it to be stateful and use timers.
The DataStream API is Flink's main abstraction for implementing stateful streaming applications
with sophisticated time semantics in Java or Scala.
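
To give a taste of what the walkthrough builds towards, here is a minimal DataStream
program in Java. It is an illustrative sketch only: the class name and sample values are
made up, and it assumes the Flink streaming Java dependency is on the classpath.

```java
// A tiny DataStream pipeline: bounded source -> stateless transformation -> sink.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DataStreamSketch {

    public static void main(String[] args) throws Exception {
        // Uses a local environment when run from an IDE, or the cluster when submitted.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("flink", "datastream", "walkthrough")  // a tiny bounded source
           .map(String::toUpperCase)                            // a simple, stateless transformation
           .print();                                            // write the results to stdout

        env.execute("DataStream walkthrough sketch");
    }
}
```

The walkthrough then extends this kind of skeleton to use keyed state and timers.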

* The [**Table API**](./walkthroughs/table_api.html) code walkthrough shows how to implement a simple Table API query on a batch source and how to evolve it into a continuous query on a streaming source. The Table API Flink's language-embedded, relational API to write SQL-like queries in Java or Scala which are automatically optimized similar to SQL queries. Table API queries can be executed on batch or streaming data with identical syntax and semantics.
* Flink's **Table API** is a relational API for writing SQL-like queries in Java, Scala, or
Python, which are automatically optimized and can be executed on batch or streaming data
with identical syntax and semantics. The [Table API code walkthrough for Java and Scala]({% link
getting-started/walkthroughs/table_api.md %}) shows how to implement a simple Table API query on a
batch source and how to evolve it into a continuous query on a streaming source. There's also a
similar [code walkthrough for the Python Table API]({% link
getting-started/walkthroughs/python_table_api.md %}).
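
As a quick taste of the API itself, here is a small sketch of a Table API program in Java.
It is illustrative only: it assumes a recent Flink version (1.11 or later) and that a table
named `transactions` with columns `account_id` and `amount` has already been registered
(for example via a `CREATE TABLE ...` DDL statement).

```java
// A minimal batch Table program; the same query also runs on a streaming source.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class TableApiSketch {

    public static void main(String[] args) {
        // A pure Table API environment in batch mode.
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inBatchMode().build());

        // Assumes a table 'transactions' (account_id BIGINT, amount BIGINT)
        // has already been registered with the catalog.
        Table totals = tEnv.sqlQuery(
                "SELECT account_id, SUM(amount) AS total "
                + "FROM transactions GROUP BY account_id");

        // Collect and print the result of the (automatically optimized) query.
        totals.execute().print();
    }
}
```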

### Taking a Deep Dive with the Hands-on Training

The [**Hands-on Training**]({% link training/index.md %}) is a self-paced training course with
a set of lessons and hands-on exercises. This step-by-step introduction to Flink focuses
on learning how to use the DataStream API to meet the needs of common, real-world use cases,
and provides a complete introduction to the fundamental concepts: parallel dataflows,
stateful stream processing, event time and watermarking, and fault tolerance via state snapshots.
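
To make two of those concepts concrete, here is a small illustrative Java sketch (the class
name, keys, and values are made up): it enables periodic state snapshots (checkpoints) and
keeps a running sum per key.

```java
// Stateful stream processing with fault tolerance via periodic state snapshots.
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TrainingConceptsSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Fault tolerance via state snapshots: checkpoint the job state every 10 seconds.
        env.enableCheckpointing(10_000);

        // Stateful stream processing: a keyed, running sum keeps state per key,
        // and that state is included in every checkpoint.
        env.fromElements(
                Tuple2.of("sensor-1", 3L),
                Tuple2.of("sensor-2", 7L),
                Tuple2.of("sensor-1", 4L))
           .keyBy(value -> value.f0)
           .sum(1)
           .print();

        env.execute("Hands-on training concepts sketch");
    }
}
```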

<!--
### Starting a new Flink application
