Formatter: Improve single-with item formatting for Python 3.8 or older #10276

Merged 1 commit into main on Mar 8, 2024

Conversation

@MichaReiser (Member) commented Mar 7, 2024

Summary

This PR changes how we format `with` statements that have a single with item when targeting Python 3.8 or older. This change is not compatible with Black.

This is how we format a single-item `with` statement today:

```python
def run(data_path, model_uri):
    with pyspark.sql.SparkSession.builder.config(
        key="spark.python.worker.reuse", value=True
    ).config(key="spark.ui.enabled", value=False).master(
        "local-cluster[2, 1, 1024]"
    ).getOrCreate():
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```

This is different from how we format the same expression when it appears inside any other clause header (`while`, `if`, ...):

```python
def run(data_path, model_uri):
    while (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```

Which seems inconsistent to me.

This PR changes the formatting of a single-item `with` statement for Python 3.8 or older to match that of other clause headers.

```python
def run(data_path, model_uri):
    with (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```

According to our versioning policy, this style change is gated behind a preview flag.
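
For anyone wanting to try the new style before it stabilizes, preview formatting can be enabled on the command line with `ruff format --preview` (as in the ecosystem report below) or in the project configuration. A minimal sketch, assuming the standard `[tool.ruff.format]` settings table (the table and key names are taken from Ruff's settings documentation, not from this PR):

```toml
# pyproject.toml: opt into Ruff's preview formatter style
# (equivalent to running `ruff format --preview`)
[tool.ruff.format]
preview = true
```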

Test Plan

See added tests.


@MichaReiser added the formatter (Related to the formatter) and preview (Related to preview mode features) labels Mar 7, 2024
@MichaReiser changed the base branch from main to fix-with-item-parenthesizing March 7, 2024 15:30
github-actions bot (Contributor) commented Mar 7, 2024

ruff-ecosystem results

Formatter (stable)

ℹ️ ecosystem check detected format changes. (+2 -2 lines in 1 file in 1 projects; 42 projects unchanged)

rotki/rotki (+2 -2 lines across 1 file)

rotkehlchen/rotkehlchen.py~L1213

```diff
         if oracle != HistoricalPriceOracle.CRYPTOCOMPARE:
             return  # only for cryptocompare for now
 
-        with (
-            contextlib.suppress(UnknownAsset)
+        with contextlib.suppress(
+            UnknownAsset
         ):  # if suppress -> assets are not crypto or fiat, so we can't query cryptocompare  # noqa: E501
             self.cryptocompare.create_cache(
                 from_asset=from_asset,
```

Formatter (preview)

ℹ️ ecosystem check detected format changes. (+8 -7 lines in 2 files in 2 projects; 41 projects unchanged)

mlflow/mlflow (+6 -5 lines across 1 file)

ruff format --preview

examples/flower_classifier/score_images_spark.py~L81

```diff
 @cli_args.MODEL_URI
 @click.argument("data-path")
 def run(data_path, model_uri):
-    with pyspark.sql.SparkSession.builder.config(
-        key="spark.python.worker.reuse", value=True
-    ).config(key="spark.ui.enabled", value=False).master(
-        "local-cluster[2, 1, 1024]"
-    ).getOrCreate() as spark:
+    with (
+        pyspark.sql.SparkSession.builder.config(key="spark.python.worker.reuse", value=True)
+        .config(key="spark.ui.enabled", value=False)
+        .master("local-cluster[2, 1, 1024]")
+        .getOrCreate()
+    ) as spark:
         # ignore spark log output
         spark.sparkContext.setLogLevel("OFF")
         print(score_model(spark, data_path, model_uri))
```

rotki/rotki (+2 -2 lines across 1 file)

ruff format --preview

rotkehlchen/rotkehlchen.py~L1211

```diff
         if oracle != HistoricalPriceOracle.CRYPTOCOMPARE:
             return  # only for cryptocompare for now
 
-        with (
-            contextlib.suppress(UnknownAsset)
+        with contextlib.suppress(
+            UnknownAsset
         ):  # if suppress -> assets are not crypto or fiat, so we can't query cryptocompare  # noqa: E501
             self.cryptocompare.create_cache(
                 from_asset=from_asset,
```

@MichaReiser force-pushed the fix-with-item-parenthesizing branch from 6b290c0 to 25ada0a March 7, 2024 16:43
@MichaReiser force-pushed the single-with-item-pre-39 branch from 02bae56 to 4932a2d March 7, 2024 16:56
Comment on lines +714 to +727
```diff
-with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as c:
+with (
+ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
+) as c:
```
@MichaReiser (Member, Author) commented:

This change might be controversial. I would say it's not ideal but the best we can do with the limitations given by Python 3.8, other than not parenthesizing and exceeding the configured line width.
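
For context, a minimal runnable sketch of the Python 3.8 limitation referenced here, reusing the names from the diff above. The version cut-off (the newer PEG-based parser introduced in CPython 3.9 and documented in 3.10) is my reading rather than something stated in this thread:

```python
import ast

# Parentheses around just the expression, with the `as` target outside:
# valid syntax on every Python version the formatter targets.
expression_only = """
with (
    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
) as c:
    ...
"""

# Parentheses around the whole with item, including the `as` target:
# a SyntaxError on Python 3.8, because parenthesized with items need
# the newer PEG-based parser.
item_with_target = """
with (
    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as c
):
    ...
"""

ast.parse(expression_only)   # parses on every supported version
ast.parse(item_with_target)  # raises SyntaxError when run on 3.8
```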

@MichaReiser force-pushed the single-with-item-pre-39 branch from 4932a2d to cde6584 March 7, 2024 17:45
@MichaReiser changed the title from "single with item pre 39" to "Formatter: Improve single-with item formatting for Python 3.8 or older" Mar 7, 2024
@MichaReiser force-pushed the fix-with-item-parenthesizing branch 3 times, most recently from 4100b9f to a42c365 March 8, 2024 11:30
@MichaReiser force-pushed the single-with-item-pre-39 branch from cde6584 to f8937a3 March 8, 2024 11:40
@MichaReiser marked this pull request as ready for review March 8, 2024 11:40
@charliermarsh force-pushed the fix-with-item-parenthesizing branch from a42c365 to f9416ea March 8, 2024 23:41
Base automatically changed from fix-with-item-parenthesizing to main March 8, 2024 23:48
@charliermarsh force-pushed the single-with-item-pre-39 branch from f8937a3 to c00f4bd March 8, 2024 23:49
@charliermarsh (Member) left a comment:

This seems reasonable to me.

@charliermarsh enabled auto-merge (squash) March 8, 2024 23:53
@charliermarsh merged commit b64f2ea into main Mar 8, 2024
17 checks passed
@charliermarsh deleted the single-with-item-pre-39 branch March 8, 2024 23:56
nkxxll pushed a commit to nkxxll/ruff that referenced this pull request Mar 10, 2024
Formatter: Improve single-with item formatting for Python 3.8 or older (astral-sh#10276)

@MichaReiser mentioned this pull request Sep 16, 2024