
[FLINK-17148][python] Support converting pandas DataFrame to Flink Table #11832

Merged
merged 2 commits into apache:master from from_pandas
Apr 30, 2020

Conversation

dianfu
Contributor

@dianfu dianfu commented Apr 20, 2020

What is the purpose of the change

This pull request adds support for converting a pandas DataFrame to a Flink Table.

Brief change log

  • Introduce ArrowSourceFunction and ArrowTableSource, which take the serialized byte arrays of Arrow record batches as the source data
  • Add TableEnvironment.from_pandas, which can be used to convert a pandas DataFrame to a Flink Table

Verifying this change

This change added tests and can be verified as follows:

  • Added Java tests ArrowSourceFunctionTest and RowArrowSourceFunctionTest
  • Added Python tests in test_pandas_conversion.py

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (yes)
  • If yes, how is the feature documented? (not documented)

@flinkbot
Collaborator

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit b463662 (Mon Apr 20 15:04:35 UTC 2020)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Apr 20, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

@hequn8128 hequn8128 self-assigned this Apr 21, 2020
@hequn8128 hequn8128 changed the title [FLINK-17148][python] Support converting pandas dataframe to flink table [FLINK-17148][python] Support converting pandas DataFrame to Flink Table Apr 26, 2020
@hequn8128 hequn8128 left a comment (Contributor)

@dianfu Thanks a lot for the PR. The code looks very good. Left some suggestions about improvement below.

class PandasConversionTestBase(object):

@classmethod
def setUpClass(cls):
Contributor

I found we should use lowercase for these test methods. However, that is not related to this PR; maybe we can create another JIRA to address it.

Contributor Author

The name setUpClass is from unittest.TestCase, and I guess we cannot change it.

data_dict = {}
for j, name in enumerate(cls.data_type.names):
data_dict[name] = [cls.data[i][j] for i in range(len(cls.data))]
# need convert to numpy types
Contributor

Why do we need to convert to NumPy types?

Contributor Author

The integer types would be inferred as int64 by default, so we need to specify the dtype explicitly.
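The inference behavior described above can be seen with plain pandas and NumPy — a minimal sketch (column names hypothetical) showing that integer data defaults to int64 unless a dtype is specified:

```python
import numpy as np
import pandas as pd

data = {"f1": [1, 2, 3]}

# Without an explicit dtype, pandas infers int64 for integer data.
default_pdf = pd.DataFrame(data)

# Converting to an explicit NumPy type yields the narrower column type.
typed_pdf = pd.DataFrame({"f1": np.array(data["f1"], dtype=np.int8)})
```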

"1970-01-01 00:00:00.123,[1, 2]"])


class StreamPandasConversionTests(PandasConversionITTests,
Contributor

Can we also cover the batch mode for the old planner?

Contributor Author

Most of the code could not be reused for the batch mode of the old planner, so I'd like to handle it in a separate JIRA if needed.

@@ -1107,6 +1107,63 @@ def _from_elements(self, elements, schema):
finally:
os.unlink(temp_file.name)

def from_pandas(self, pdf,
Contributor

Add detailed Python docs for the API.
BTW, do we plan to add Flink documentation for this API in another PR? If so, we can first create a JIRA under FLINK-17146 to address it.

running = false;
}

public abstract ArrowReader<OUT> createArrowReader(VectorSchemaRoot root);
Contributor

protected

}

@Override
public void initializeState(FunctionInitializationContext context) throws Exception {
Contributor

Maybe add some logging in this method? For example, LOG.info the restored state information.

runner2.join();

Assert.assertNull(error[0]);
Assert.assertEquals(testData.f0.size(), numOfEmittedElements.get());
Contributor

Also verify the content of the data?


Assert.assertNull(error[0]);
Assert.assertNull(error[1]);
Assert.assertEquals(testData.f0.size(), numOfEmittedElements.get());
Contributor

Also verify the content of the data?

arrowSourceFunction.run(sourceContext);
} catch (Throwable t) {
if (!t.getMessage().equals("Fail the arrow source")) {
error[0] = t;
Contributor

Add the corresponding assert to verify that error[0] is not null?

Contributor Author

error[0] should always be null, and that is asserted at the end of the test. I'm not sure what you mean by verifying that error[0] is not null.

Contributor

Ignore my comment here. You are right.

@hequn8128
Contributor

BTW, it would be great if you could rebase onto master. The interface in BaseRow has changed, i.e., getHeader() has been replaced with getRowKind().

@dianfu
Contributor Author

dianfu commented Apr 29, 2020

@hequn8128 Thanks a lot for your valuable feedback. I have updated the PR accordingly (also rebased the PR).

@hequn8128 hequn8128 left a comment (Contributor)

Looks good to me, with some minor comments.

Example:
::

# use the second parameter to specify custom field names
Contributor

Move this comment after the creation of DataFrame.


:param pdf: The pandas DataFrame.
:param schema: The schema of the converted table.
:type schema: RowType or list[str] or list[DataType]
Contributor

duplicate type hint.

If not specified, the default parallelism will be used.
:type splits_num: int
:return: The result table.
:rtype: Table
Contributor

duplicate type hint.

:param splits_num: The number of splits the given Pandas DataFrame will be split into. It
determines the number of parallel source tasks.
If not specified, the default parallelism will be used.
:type splits_num: int
Contributor

duplicate type hint.
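As a sketch of the docstring convention the comments above point at — the function and parameter names are taken from the snippets in this review, the body is a hypothetical stub — each parameter's type should appear in exactly one `:type:` field:

```python
def from_pandas(pdf, schema=None, splits_num=1):
    """
    Creates a table from a pandas DataFrame.

    :param pdf: The pandas DataFrame.
    :param schema: The schema of the converted table.
    :type schema: RowType or list[str] or list[DataType]
    :param splits_num: The number of splits the given pandas DataFrame will be
                       split into. It determines the number of parallel source
                       tasks. If not specified, the default parallelism is used.
    :type splits_num: int
    :return: The result table.
    :rtype: Table
    """
    raise NotImplementedError  # illustrative stub only
```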

@@ -32,5 +32,6 @@ Apache Flink has provided Python Table API support since 1.9.0.
- [Installation]({{ site.baseurl }}/dev/table/python/installation.html): Introduction of how to set up the Python Table API execution environment.
- [User-defined Functions]({{ site.baseurl }}/dev/table/python/python_udfs.html): Explanation of how to define Python user-defined functions.
- [Vectorized User-defined Functions]({{ site.baseurl }}/dev/table/python/vectorized_python_udfs.html): Explanation of how to define vectorized Python user-defined functions.
- [Conversion between PyFlink Table and Pandas DataFrame]({{ site.baseurl }}/dev/table/python/conversion_of_pandas.html): Explanation of how to convert between PyFlink Table and Pandas DataFrame.
Contributor

Conversions?

@@ -32,5 +32,6 @@ Apache Flink has provided Python Table API support since 1.9.0.
- [环境安装]({{ site.baseurl }}/zh/dev/table/python/installation.html): Introduction of how to set up the Python Table API execution environment.
- [自定义函数]({{ site.baseurl }}/zh/dev/table/python/python_udfs.html): Explanation of how to define Python user-defined functions.
- [自定义向量化函数]({{ site.baseurl }}/zh/dev/table/python/vectorized_python_udfs.html): Explanation of how to define vectorized Python user-defined functions.
- [PyFlink Table和Pandas DataFrame互转]({{ site.baseurl }}/zh/dev/table/python/conversion_of_pandas.html): Explanation of how to convert between PyFlink Table and Pandas DataFrame.
Contributor

According to most copywriting guidelines, it's better to leave a space between an English word and a Chinese word.

pdf = pd.DataFrame(np.random.rand(1000, 2))

# Create a PyFlink Table from a Pandas DataFrame
table = t_env.from_pandas(pdf)
Contributor

Maybe add more examples here. For example, how to specify table names, which is commonly required.
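A sketch of what such an example might look like. Only the pandas side runs here; the final `from_pandas` call is shown as a comment because it needs a TableEnvironment, and the field-names argument follows the schema parameter described in the docstring above:

```python
import numpy as np
import pandas as pd

# A DataFrame whose columns carry the field names wanted in the table.
pdf = pd.DataFrame(np.random.rand(4, 2), columns=["a", "b"])

# The PyFlink call would then be (not executed here):
#   table = t_env.from_pandas(pdf, ["a", "b"])
```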

@dianfu
Contributor Author

dianfu commented Apr 29, 2020

@hequn8128 Thanks for the review. Updated.

@dianfu
Contributor Author

dianfu commented Apr 30, 2020

Not sure why the Azure CI wasn't triggered. It has succeeded in my private Azure pipeline: https://dev.azure.com/dianfu/Flink/_build/results?buildId=50&view=results

@hequn8128
Contributor

@dianfu Thanks. Merging...

@hequn8128 hequn8128 merged commit c54293b into apache:master Apr 30, 2020
shuiqiangchen added a commit to shuiqiangchen/flink that referenced this pull request Apr 30, 2020
…7255

* 'master' of https://github.com/apache/flink:
  [FLINK-15591][table] Support create/drop temporary table in both planners
  [FLINK-15591][sql-parser] Support parsing TEMPORARY in table definition
  [FLINK-17148][python] Support converting pandas DataFrame to Flink Table (apache#11832)
  [FLINK-17254][python][docs] Improve the PyFlink documentation and examples to use SQL DDL for source/sink definition.
  [FLINK-17374][travis] Further removals of travis-mentions
  [FLINK-17374][travis] Remove tools/travis directory
  [FLINK-17374][travis] Remove travis-related files
  [FLINK-16423][e2e] Introduce timeouts for HA tests
  [FLINK-17440][network] Resolve potential buffer leak in output unspilling for unaligned checkpoints
  [hotfix][checkpointing] Use practical ChannelStateReader instead of NO_OP
  [hotfix][network] Rename ResultPartitionWriter#initializeState to #readRecoveredState
  [hotfix][tests] Deduplicate code in SlotManagerImplTest
  [FLINK-16605][runtime] Make the SlotManager respect the max limitation for slots
  [FLINK-16605][runtime] Pass the slotmanager.max-number-of-slots to the SlotManagerImpl
  [FLINK-16605][core][config] Add slotmanager.max-number-of-slots config option
  [hotfix][runtime] Add sanity check to SlotManagerConfiguration
  [FLINK-17455][table][filesystem] Move FileSystemFormatFactory to table common
  [FLINK-17391][filesystem] sink.rolling-policy.time.interval default value should be bigger
  [FLINK-17414][python][docs] Update the documentation about PyFlink build about Cython support
@dianfu dianfu deleted the from_pandas branch June 10, 2020 02:50