
[FLINK-17425][blink-planner] supportsFilterPushDown rule in DynamicSource. #12866

Closed · wants to merge 13 commits

Conversation

@liuyongvs (Contributor) commented Jul 10, 2020

What is the purpose of the change

Make DynamicTableSource support the FilterPushDown rule.

Verifying this change

This change added tests and can be verified as follows:

  • Added PushFilterIntoTableSourceScanRuleTest to verify the plan
  • Extended TableSourceITCase (batch and stream) to verify the results of filter push-down

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (yes)
  • If yes, how is the feature documented? (JavaDocs)

@flinkbot (Collaborator)

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit dcdd7b7 (Fri Jul 10 08:56:12 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@liuyongvs (Contributor, Author)

@godfreyhe, because of a careless operation I deleted my GitHub repo, so I opened a new PR. The third commit contains the changes for all of your review comments.

@flinkbot (Collaborator) commented Jul 10, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

@liuyongvs (Contributor, Author)

Hi @godfreyhe, you can review it now. The blink planner tests passed.

@liuyongvs (Contributor, Author) commented Jul 14, 2020

Previous PR: #12851. Adding this link here for tracking.


LogicalTableScan scan = call.rel(1);
TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
//we can not push filter twice
Contributor:

nit: add a space after //

@godfreyhe (Contributor) left a comment:

Thanks for the update, I left some comments.
Some comments in #12851 have not been addressed yet, WDYT?

}),
context.getCatalogManager().getDataTypeFactory())
.build();
SupportsFilterPushDown.Result result = ((SupportsFilterPushDown) newTableSource).applyFilters(resolver.resolve(remainingPredicates));
Contributor:

nit: wrap the line, it's too long

Contributor (Author):

ok

getNewFlinkStatistic(oldTableSourceTable, originPredicatesSize, updatedPredicatesSize),
getNewExtraDigests(oldTableSourceTable, result.getAcceptedFilters())
);
TableScan newScan = new LogicalTableScan(scan.getCluster(), scan.getTraitSet(), scan.getHints(), newTableSourceTable);
Contributor:

use LogicalTableScan.create instead of new LogicalTableScan

Contributor (Author):

I'd like to know why?

Contributor:

Because the traits of the Scan may change after filter push-down, it's better that the traits are recomputed, which is done in the LogicalTableScan#create method. Currently new LogicalTableScan is ok, but I think using LogicalTableScan#create is better.

//we can not push filter twice
return tableSourceTable != null
&& tableSourceTable.tableSource() instanceof SupportsFilterPushDown
&& !Arrays.stream(tableSourceTable.extraDigests()).anyMatch(str -> str.contains("filter"));
Contributor:

What if a table contains a field named filter?
Use str.startsWith("filter=[") instead.
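The reviewer's point can be illustrated with plain Java; the digest strings below are illustrative, not the exact format Flink produces (apart from the "filter=[" prefix the reviewer quotes):

```java
import java.util.Arrays;

public class DigestCheck {
    // contains("filter") matches any digest that merely mentions a field
    // named "filter"; startsWith("filter=[") matches only a filter digest.
    static boolean containsCheck(String[] digests) {
        return Arrays.stream(digests).anyMatch(s -> s.contains("filter"));
    }

    static boolean prefixCheck(String[] digests) {
        return Arrays.stream(digests).anyMatch(s -> s.startsWith("filter=["));
    }

    public static void main(String[] args) {
        // Hypothetical digest for a projection of a field named "filter"
        String[] projectionOnly = { "project=[filter, id]" };
        // Hypothetical digest for an actual pushed-down filter
        String[] pushedFilter = { "filter=[>(id, 10)]" };

        System.out.println(containsCheck(projectionOnly)); // true  (false positive)
        System.out.println(prefixCheck(projectionOnly));   // false (correct)
        System.out.println(prefixCheck(pushedFilter));     // true  (correct)
    }
}
```

With contains("filter"), a table with a field named filter would be wrongly treated as already having a pushed-down filter, so the rule would never fire for it.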

@liuyongvs (Contributor, Author)

Hi @godfreyhe. Yes, I found that some types are not Comparable, such as Period from getValueAs(Class clazz).
Thanks for your very detailed review, I learned a lot.

Object lhsValue = getValue(children.get(0), row);
Object rhsValue = getValue(children.get(1), row);
// validate that literal is comparable
if (!isComparable(lhsValue, binExpr) || !isComparable(rhsValue, binExpr)) {
Contributor:

How about checking whether the type of a field is comparable in the shouldPushDownUnaryExpression method? Then the logic of binaryFilterApplies need not change.

Contributor (Author):

good idea


if (expr instanceof ValueLiteralExpression) {
// validate that literal is comparable
Optional value = ((ValueLiteralExpression) expr).getValueAs(((ValueLiteralExpression) expr).getOutputDataType().getConversionClass());
Contributor:

We should also consider whether the type of a FieldReferenceExpression is comparable. Consider the following pattern:
filterable-fields is a and b, and the pushed predicate is a > b.

@godfreyhe (Contributor)

Thanks for the update. LGTM overall. Last comment: please remove the import import static org.apache.flink.runtime.state.CheckpointStreamWithResultProvider.LOG; in TestValuesTableFactory.

@@ -565,14 +562,12 @@ private boolean binaryFilterApplies(CallExpression binExpr, Row row) {
}
}

private boolean isComparable(Class<?> clazz) {
// validate that literal is comparable
private void validateTypeComparable(Class<?> clazz) {
Contributor:

I think this method should be

private boolean isComparable(Class<?> clazz) {
    return Comparable.class.isAssignableFrom(clazz);
}

We do not need to throw an exception if a class is non-comparable; TestValuesTableFactory simply does not support non-comparable types, just like it only supports the UPPER and LOWER UDFs.
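The suggested check can be exercised with plain JDK types; java.time.Period, mentioned earlier in the thread, is indeed a type that does not implement Comparable:

```java
import java.math.BigDecimal;
import java.time.Duration;
import java.time.Period;

public class ComparableCheck {
    // Same check as the suggested isComparable(Class<?>): a type is
    // comparable iff it is assignable to java.lang.Comparable.
    static boolean isComparable(Class<?> clazz) {
        return Comparable.class.isAssignableFrom(clazz);
    }

    public static void main(String[] args) {
        System.out.println(isComparable(Integer.class));    // true
        System.out.println(isComparable(BigDecimal.class)); // true
        System.out.println(isComparable(Duration.class));   // true
        System.out.println(isComparable(Period.class));     // false: Period has no total order
    }
}
```

Period is the interesting case: unlike Duration, a Period mixes years, months, and days, which have no fixed relative length, so the JDK deliberately does not give it a natural ordering.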

Contributor (Author):

@godfreyhe, but should it log some warning info here?

Contributor:

I don't think so. We can add some Javadoc at the class level to explain which patterns this class supports.

Contributor (Author):

good idea +1

* Test implementation of {@link DynamicTableSourceFactory} that creates
* a source that produces a sequence of values.
* Test implementation of {@link DynamicTableSourceFactory} that creates a source that produces a sequence of values.
* And this source {@link TestValuesTableSource} supports FilterPushDown. And it has some limitations.
Contributor:

And {@link TestValuesTableSource} can push a filter down into the table source. A predicate can be pushed down only if it satisfies the following conditions:

  1. the field name is in filterable-fields, which is defined in the WITH properties
  2. the field type is comparable
  3. the UDF is UPPER or LOWER
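The three conditions can be sketched as a small standalone policy check; the names, signatures, and hard-coded sets below are illustrative assumptions, not the actual TestValuesTableFactory API:

```java
import java.util.Set;

public class PushDownPolicy {
    // Hypothetical stand-ins for values that would come from the table's
    // WITH properties and the factory's supported-function list.
    static final Set<String> FILTERABLE_FIELDS = Set.of("a", "b");
    static final Set<String> SUPPORTED_UDFS = Set.of("UPPER", "LOWER");

    static boolean canPushDownFieldRef(String fieldName, Class<?> fieldType) {
        return FILTERABLE_FIELDS.contains(fieldName)            // condition 1
            && Comparable.class.isAssignableFrom(fieldType);    // condition 2
    }

    static boolean canPushDownUdf(String functionName) {
        return SUPPORTED_UDFS.contains(functionName);           // condition 3
    }

    public static void main(String[] args) {
        System.out.println(canPushDownFieldRef("a", Integer.class));          // true
        System.out.println(canPushDownFieldRef("c", Integer.class));          // false: not filterable
        System.out.println(canPushDownFieldRef("a", java.time.Period.class)); // false: not comparable
        System.out.println(canPushDownUdf("UPPER"));                          // true
    }
}
```

A predicate that fails any of these checks stays in the remaining filters and is evaluated by the planner instead of the source.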

Contributor (Author):

thanks

@godfreyhe (Contributor) left a comment:

Thanks for the update, LGTM. cc @wuchong

@liuyongvs (Contributor, Author)

@godfreyhe @wuchong, could it be merged to master? The failing test is test_rocksdb_state_memory_control, which is not relevant to my commits.

@wuchong (Member) left a comment:

Sorry for the late response. LGTM.

I will merge this once the build is passed in my repo.

wuchong pushed a commit to wuchong/flink that referenced this pull request Jul 21, 2020
@wuchong wuchong closed this in fe0d001 Jul 22, 2020
5 participants