[FLINK-18654][docs][jdbc] Correct misleading documentation in "Partitioned Scan" section of JDBC connector

This closes apache#14523
xiaoHoly authored Dec 30, 2020
1 parent c04bc9e commit 596bba9
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion docs/dev/table/connectors/jdbc.md
@@ -269,7 +269,7 @@ See [CREATE TABLE DDL]({% link dev/table/sql/create.md %}#create-table) for more
To accelerate reading data in parallel `Source` task instances, Flink provides partitioned scan feature for JDBC table.

All the following scan partition options must be specified if any of them is specified. They describe how to partition the table when reading in parallel from multiple tasks.
- The `scan.partition.column` must be a numeric, date, or timestamp column from the table in question. Notice that `scan.partition.lower-bound` and `scan.partition.upper-bound` are just used to decide the partition stride, not for filtering the rows in table. So all rows in the table will be partitioned and returned.
+ The `scan.partition.column` must be a numeric, date, or timestamp column from the table in question. Notice that `scan.partition.lower-bound` and `scan.partition.upper-bound` are used to decide the partition stride and to filter the rows in the table. For batch jobs, it is also possible to fetch the maximum and minimum values first before submitting the Flink job.

- `scan.partition.column`: The column name used for partitioning the input.
- `scan.partition.num`: The number of partitions.
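The corrected paragraph above says the bounds both determine the partition stride and filter rows. A minimal sketch of that idea, using a hypothetical helper (not Flink's actual implementation) that builds one range query per parallel task:

```python
def partition_queries(table, column, num, lower, upper):
    """Split a bounded column range into `num` per-task range queries.

    The bounds set the stride AND appear in each WHERE clause, so rows
    outside [lower, upper] are filtered out rather than returned.
    """
    stride = (upper - lower) / num
    queries = []
    for i in range(num):
        lo = lower + i * stride
        # The last partition closes the upper bound so no row is dropped.
        if i == num - 1:
            queries.append(
                f"SELECT * FROM {table} WHERE {column} >= {lo} AND {column} <= {upper}"
            )
        else:
            hi = lower + (i + 1) * stride
            queries.append(
                f"SELECT * FROM {table} WHERE {column} >= {lo} AND {column} < {hi}"
            )
    return queries

# Example: 4 partitions over an id column bounded by 0..100.
for q in partition_queries("orders", "id", 4, 0, 100):
    print(q)
```

Each query here corresponds to one `Source` task instance reading its own slice of the table in parallel.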
2 changes: 1 addition & 1 deletion docs/dev/table/connectors/jdbc.zh.md
@@ -269,7 +269,7 @@ See [CREATE TABLE DDL]({% link dev/table/sql/create.zh.md %}#create-table) for m
To accelerate reading data in parallel `Source` task instances, Flink provides partitioned scan feature for JDBC table.

All the following scan partition options must be specified if any of them is specified. They describe how to partition the table when reading in parallel from multiple tasks.
- The `scan.partition.column` must be a numeric, date, or timestamp column from the table in question. Notice that `scan.partition.lower-bound` and `scan.partition.upper-bound` are just used to decide the partition stride, not for filtering the rows in table. So all rows in the table will be partitioned and returned.
+ The `scan.partition.column` must be a numeric, date, or timestamp column from the table in question. Notice that `scan.partition.lower-bound` and `scan.partition.upper-bound` are used to decide the partition stride and to filter the rows in the table. For batch jobs, it is also possible to fetch the maximum and minimum values first before submitting the Flink job.

- `scan.partition.column`: The column name used for partitioning the input.
- `scan.partition.num`: The number of partitions.
