Pyspark Linting Rules #7272
Comments
I'm generally open to adding package-specific rule sets for extremely popular packages (as with Pandas, NumPy, etc.), and Spark would fit that description. However, it'd be nice to have a few rules lined up before we move forward and add any one of them. Otherwise, we run the risk that we end up with really sparse categories that only contain a rule or two.

Super. I've updated the issue with a couple of rules that we can track.
Hi, I was looking for such a thread. To add to the proposed list, here are some rules we wish we had at my company:

Just to add that I would be interested in this functionality. Also, the first link in the original post is broken, and the pylint extension looks unmaintained?

Did you get going with this? Thinking about jumping on it.

Currently looking at recommending Ruff for a data team, and this feature would be great.

Hey! Did you end up jumping on it? We're also considering starting it, so I'd love to hear how it went for you!

I'm working on getting a first set of rules out there :)
@guilhem-dvr Thanks for your suggestions!

Also note that the import convention is already possible via Ruff's existing settings.
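For illustration, and assuming the convention in question is the style guide's `functions as F` alias (the specific alias is an assumption here; any alias can be configured), the convention being enforced looks like:

```python
# Aliased functions import recommended by the PySpark style guide;
# Ruff's import-convention settings can enforce an alias like this.
from pyspark.sql import functions as F

# rather than an ad-hoc alias such as:
import pyspark.sql.functions as funcs
```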
Sure, here's what I had in mind:

```python
df = spark.createDataFrame(
    [("John", 25, "Engineer"), ("Jane", 30, "Doctor"), ("Jim", 35, "Teacher")],
    ["Name", "Age", "Profession"],
)

# Unnecessary drop
df.drop("Age").select("Name", "Profession")

# Same statement without the drop
df.select("Name", "Profession")
```

But now I see that there's a pattern that shouldn't be flagged: where an 'anti' select is performed, i.e. drop some columns and then select all the remaining ones with `select("*")`:

```python
# Reusing the previous df schema
df.drop("Age").select("*")
```

Edit: this is still a bad pattern, because `drop` already returns the whole DataFrame minus the dropped column, so you should never be chaining `drop` and `select` like this anyway.
Lol, I had completely skimmed over the settings, thank you!

Thanks for clarifying! There is not as much Spark open-source code available as there is for other libraries, so it's hard to tell how frequent this error is - but a good addition nonetheless. Funnily enough, Polars uses this in their tests: https://github.com/pola-rs/polars/blob/main/py-polars/tests/unit/lazyframe/test_lazyframe.py#L1090

No, but looks like @sbrugman has, which is exciting!
Apache Spark is widely used in the Python ecosystem for distributed computing. As a user of Spark, I would like ruff to lint problematic behaviours. The automation that ruff offers is especially useful in projects with varying levels of software engineering skill, e.g. where people have more of a statistics background.

There exists a PySpark style guide and a pylint extension.

I would like to start contributing a rule that checks for repeated use of `withColumn`; a sketch of the pattern is below.
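As a rough illustration only (the toy DataFrame and the derived column names are invented here, and `withColumns` requires Spark 3.3+), the flagged pattern and one possible rewrite would look something like:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# A small toy DataFrame for illustration.
df = spark.createDataFrame(
    [("John", 25, "Engineer"), ("Jane", 30, "Doctor"), ("Jim", 35, "Teacher")],
    ["Name", "Age", "Profession"],
)

# Pattern such a rule might flag: each withColumn call adds another
# projection to the query plan, which hurts readability and, for long
# chains, performance.
flagged = (
    df.withColumn("age_plus_one", F.col("Age") + 1)
    .withColumn("name_upper", F.upper(F.col("Name")))
    .withColumn("is_engineer", F.col("Profession") == "Engineer")
)

# One possible rewrite: a single withColumns call (Spark 3.3+), or an
# equivalent select, applies all the expressions in one pass.
rewritten = df.withColumns(
    {
        "age_plus_one": F.col("Age") + 1,
        "name_upper": F.upper(F.col("Name")),
        "is_engineer": F.col("Profession") == "Engineer",
    }
)
```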
Are you ok with a PR introducing "Spark-specific rules" (e.g. `SPK`)? `ruff` includes rules that are specific to third-party libraries: numpy, pandas and airflow. Spark support would be a nice addition.

I would like to close with the following thought: supporting third-party packages may at first seem to be effort in the long tail of possible rules to add to `ruff`. Why not focus only on rules that affect all Python users? I hope that adding these will lead to creating helper functions that make adding new rules easier. I also think that these libraries will end up with similar API design patterns that can be linted across the ecosystem. As an example, call chaining is common for many packages that perform transformations.