
[FLINK-17313]fix type validation error when sink table ddl exists column with precision of decimal/varchar #11848

Merged: 4 commits into apache:master on Apr 27, 2020

Conversation

zjuwangg (Contributor)

What is the purpose of the change

  • The TypeMappingUtils#checkPhysicalLogicalTypeCompatible method currently does not distinguish the validation direction for sources and sinks: for a source, the logical type must be able to cover the physical type, while for a sink, it is the other way around and the physical type must be able to cover the logical type. In addition, decimal types need extra care: when the target type is Legacy(Decimal), it should accept a decimal of any precision.
  • This PR fixes the problem described above.
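The direction of the check described above can be sketched as follows. This is an illustrative sketch only: DecimalType, sourceCompatible, and sinkCompatible are hypothetical names standing in for the real Flink classes, and the rule is reduced to precision/scale comparison.

```java
// Illustrative sketch (hypothetical types, not Flink's actual API): the key
// point of the fix is that the direction of the compatibility check depends
// on whether a source or a sink is being validated.
public class TypeCompatibilitySketch {

    /** Hypothetical stand-in for a DECIMAL(precision, scale) type. */
    public record DecimalType(int precision, int scale) {}

    /** Source side: the logical (DDL) type must be able to cover the physical type. */
    public static boolean sourceCompatible(DecimalType physical, DecimalType logical) {
        return logical.precision() >= physical.precision()
                && logical.scale() >= physical.scale();
    }

    /** Sink side: the physical type must be able to cover the logical (DDL) type. */
    public static boolean sinkCompatible(DecimalType physical, DecimalType logical) {
        return physical.precision() >= logical.precision()
                && physical.scale() >= logical.scale();
    }

    public static void main(String[] args) {
        DecimalType ddl = new DecimalType(10, 2);       // DECIMAL(10, 2) in the DDL
        DecimalType physical = new DecimalType(38, 18); // wide physical decimal

        // A sink whose physical type is wider than the DDL type is fine...
        System.out.println(sinkCompatible(physical, ddl));   // true
        // ...but the reverse direction must fail for a sink.
        System.out.println(sinkCompatible(ddl, physical));   // false
        // For a source, the DDL type must be at least as wide as the physical type.
        System.out.println(sourceCompatible(ddl, physical)); // true
    }
}
```

Before the fix, a single covering check was applied regardless of direction, which rejected valid sink DDLs such as a DECIMAL column written to a wider legacy decimal.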

Brief change log

  • 9b24a40: corrects the validation logic

Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (no)
  • If yes, how is the feature documented? (not applicable)

…dicates precision of decimal/timestamp/varchar
@flinkbot (Collaborator)

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 9b24a40 (Wed Apr 22 03:39:45 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!
  • This pull request references an unassigned Jira ticket. According to the code contribution guide, tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot (Collaborator) commented Apr 22, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

@dawidwys (Contributor) left a comment:

Looks good overall. I had one minor comment that I will address when merging.

logicalFieldName,
physicalFieldType,
physicalFieldName,
"TableSource return type"),
@dawidwys (Contributor): Let's just inline the last argument.

@zjuwangg (Contributor, Author): ok

@dawidwys (Contributor) left a comment:
Actually it would be worth adding a test for the case it fixes. Could you do that @zjuwangg ?

@zjuwangg (Contributor, Author)

Thanks for your detailed review @dawidwys
I'll address your comments and add tests soon.

@zjuwangg zjuwangg changed the title [FLINK-17313]fix type validation error when sink table ddl exists column with precision of decimal/timestamp/varchar [FLINK-17313]fix type validation error when sink table ddl exists column with precision of decimal/varchar Apr 24, 2020
@zjuwangg zjuwangg requested a review from dawidwys April 27, 2020 08:20
@dawidwys (Contributor)

I will review it today.

@dawidwys dawidwys merged commit aed8c19 into apache:master Apr 27, 2020
@dawidwys (Contributor)

Thank you for the update. Merged.

@zjuwangg (Contributor, Author)

Thanks for your review and merge 🌹 @dawidwys

dawidwys pushed a commit to dawidwys/flink that referenced this pull request May 5, 2020
…columns with precision of decimal/varchar (apache#11848)

This is a backport of aed8c19. Because the method LogicalTypeCasts#supportsAvoidingCast was only introduced in 1.11, this commit extracts the necessary logic for comparing char/varchar/binary/varbinary types of different lengths.
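The length rule the backport extracts can be sketched roughly as below. This is a hypothetical simplification: Flink's real LogicalTypeCasts#supportsAvoidingCast covers many more type pairs, and LengthType is an illustrative stand-in for the char/varchar/binary/varbinary types.

```java
// Hypothetical sketch of the length comparison the backport extracts: a cast
// between character/binary types of different declared lengths can be avoided
// when the target length is at least the source length.
public class LengthCastSketch {

    /** Stand-in for a character/binary type with a declared length. */
    public record LengthType(int length) {}

    /** True if writing source values into target needs no narrowing cast. */
    public static boolean supportsAvoidingCast(LengthType source, LengthType target) {
        return target.length() >= source.length();
    }

    public static void main(String[] args) {
        // VARCHAR(10) fits into VARCHAR(20) without a cast...
        System.out.println(supportsAvoidingCast(new LengthType(10), new LengthType(20))); // true
        // ...but VARCHAR(20) into VARCHAR(10) would truncate, so a cast is needed.
        System.out.println(supportsAvoidingCast(new LengthType(20), new LengthType(10))); // false
    }
}
```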