
Preview minimal f-string formatting #9642

Merged
merged 20 commits on Feb 16, 2024
Conversation

@dhruvmanila (Member) commented on Jan 25, 2024

Summary

This is a preview-only feature and is available via the --preview command-line flag.

With the implementation of PEP 701 in Python 3.12, f-strings can now be broken into multiple lines, can contain comments, and can re-use the same quote character as the enclosing string. Currently, no other Python formatter formats f-strings, so some discussion is needed to define the style used for f-string formatting. Relevant discussion: #9785

The goal of this PR is to add minimal support for f-string formatting: format the expression within each replacement field without introducing any major style changes.

Newlines

The heuristic for adding newlines is similar to Prettier's: the formatter only splits an expression in a replacement field across multiple lines if there was already a line break within that replacement field.

In other words, the formatter does not add any newlines unless they are already present, i.e., they were added by the user. This makes breaking any expression inside an f-string optional and under the user's control. For example,

# We wouldn't break this
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa { aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"

# But, we would break the following as there's already a newline
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa {
	aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"

If any replacement field of the f-string contains a comment, then the f-string must be multi-line, in which case the formatter prefers to break expressions, i.e., introduce newlines. For example,

x = f"{ # comment
    a }"
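The line-break heuristic above can be sketched as a small predicate. This is a hypothetical illustration of the rule, not the actual implementation in Ruff (which is written in Rust):

```python
def should_split(replacement_field_source: str) -> bool:
    """Return True if the replacement field may be split across lines.

    Sketch of the heuristic: the formatter only breaks an expression if
    the user already placed a line break inside the replacement field,
    or a comment forces the f-string to be multi-line.
    """
    return "\n" in replacement_field_source or "#" in replacement_field_source


# No newline and no comment: the expression stays on one line.
print(should_split("aaaaaaaaaaaa + bbbbbbbbbbbb"))        # False
# A user-authored newline opts in to breaking.
print(should_split("\n    aaaaaaaaaaaa + bbbbbbbbbbbb"))  # True
```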

Quotes

The logic for formatting quotes remains unchanged: the existing logic determines the necessary quote character, which is then used accordingly.

Now, if the expression inside an f-string is itself string-like, then we need to make sure to preserve its existing quotes and not change them to the preferred quote character, unless the target version is 3.12 or later. For example,

f"outer {'inner'} outer"

# For pre 3.12, preserve the single quote
f"outer {'inner'} outer"

# While for 3.12 and later, the quotes can be changed
f"outer {"inner"} outer"

But for triple-quoted f-strings, we can re-use the preferred quote character for the inner string unless the inner string is itself triple-quoted.

f"""outer {"inner"} outer"""  # valid
f"""outer {'''inner'''} outer"""  # preserve the single quote char for the inner string
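The quote-selection rule can be sketched as follows. `choose_inner_quote` is a hypothetical helper for illustration only; the real logic lives in the formatter's string handling:

```python
def choose_inner_quote(
    preferred: str, existing: str, target: tuple[int, int], outer_triple: bool
) -> str:
    """Pick the quote character for a string nested inside an f-string.

    Sketch: before Python 3.12, re-using the outer quote character inside
    a replacement field is a syntax error, so the existing quote must be
    preserved. From 3.12 (PEP 701) onward, or when the outer f-string is
    triple-quoted (and the inner string is not), the preferred quote can
    be used.
    """
    if target >= (3, 12) or outer_triple:
        return preferred
    return existing


print(choose_inner_quote('"', "'", (3, 11), outer_triple=False))  # '
print(choose_inner_quote('"', "'", (3, 12), outer_triple=False))  # "
```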

Debug expressions

If a debug expression is present in a replacement field of an f-string, then the whitespace around it needs to be preserved as it will be rendered as-is (for example, f"{ x = }"). If there are any nested f-strings, then the whitespace in them needs to be preserved as well, which means that we stop formatting the f-string as soon as we encounter a debug expression.

f"outer {   x =  !s  :.3f}"
#                  ^^
#                  We can remove this whitespace

Now, the whitespace doesn't need to be preserved around the conversion specifier and the format specifier, so we format them as usual, but we won't format any nested f-string within the format specifier.
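The reason the whitespace must be preserved is that Python renders the source text before the "=" verbatim. A quick demonstration (requires Python 3.8+ for the debug specifier):

```python
x = 1.23456

# The source text up to and including "=" is reproduced verbatim,
# spaces included, so the formatter must not touch it:
print(f"{ x = }")     # prints " x = 1.23456"
print(f"{x = }")      # prints "x = 1.23456"

# A format spec still applies to the value; only the label is literal:
print(f"{x = :.3f}")  # prints "x = 1.235"
```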


Test Plan

  • Add new test cases
  • Review existing snapshot changes
  • Review the ecosystem changes

@dhruvmanila force-pushed the dhruv/fstring-formatting branch 2 times, most recently from 1f2328a to c87cd15 on February 5, 2024 19:25
@dhruvmanila added the formatter (Related to the formatter) and preview (Related to preview mode features) labels on Feb 12, 2024
@dhruvmanila changed the title from "WIP: F-string formatting" to "Preview minimal f-string formatting" on Feb 12, 2024
@dhruvmanila force-pushed the dhruv/fstring-formatting branch 2 times, most recently from c1e8f9d to 870c2a6 on February 12, 2024 15:14
@dhruvmanila changed the base branch from main to dhruv/split-string-part on February 12, 2024 15:19
@dhruvmanila force-pushed the dhruv/split-string-part branch from 55195fd to 8627f40 on February 12, 2024 18:50
@dhruvmanila force-pushed the dhruv/fstring-formatting branch 2 times, most recently from 4216d2e to f839da3 on February 13, 2024 07:32
@dhruvmanila force-pushed the dhruv/split-string-part branch from 8627f40 to d3402d7 on February 13, 2024 07:40
@dhruvmanila reopened this on Feb 13, 2024
@dhruvmanila force-pushed the dhruv/fstring-formatting branch 2 times, most recently from b9e26ee to 5274d78 on February 13, 2024 07:52
github-actions bot (Contributor) commented on Feb 13, 2024

ruff-ecosystem results

Formatter (stable)

ℹ️ ecosystem check encountered format errors. (no format changes; 1 project error)

zulip/zulip (error)

Failed to clone zulip/zulip: error: RPC failed; curl 56 GnuTLS recv error (-54): Error in the pull function.
error: 6838 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output

Formatter (preview)

ℹ️ ecosystem check detected format changes. (+483 -453 lines in 203 files in 22 projects; 21 projects unchanged)

RasaHQ/rasa (+3 -3 lines across 2 files)

ruff format --preview

rasa/core/train.py~L43

                     domain,
                     policy_config,
                     stories=story_file,
-                    output=str(Path(output_path, f"run_{r +1}")),
+                    output=str(Path(output_path, f"run_{r + 1}")),
                     fixed_model_name=config_name + PERCENTAGE_KEY + str(percentage),
                     additional_arguments={
                         **additional_arguments,

tests/graph_components/validators/test_default_recipe_validator.py~L861

 ):
     assert (
         len(policy_types) >= priority + num_duplicates
-    ), f"This tests needs at least {priority+num_duplicates} many types."
+    ), f"This tests needs at least {priority + num_duplicates} many types."
 
     # start with a schema where node i has priority i
     nodes = {

tests/graph_components/validators/test_default_recipe_validator.py~L871

 
     # give nodes p+1, .., p+num_duplicates-1 priority "priority"
     for idx in range(num_duplicates):
-        nodes[f"{priority+idx+1}"].config["priority"] = priority
+        nodes[f"{priority + idx + 1}"].config["priority"] = priority
 
     validator = DefaultV1RecipeValidator(graph_schema=GraphSchema(nodes))
     monkeypatch.setattr(

apache/airflow (+20 -20 lines across 10 files)

ruff format --preview

airflow/dag_processing/processor.py~L546

             <pre><code>{task_list}\n<code></pre>
             Blocking tasks:
             <pre><code>{blocking_task_list}<code></pre>
-            Airflow Webserver URL: {conf.get(section='webserver', key='base_url')}
+            Airflow Webserver URL: {conf.get(section="webserver", key="base_url")}
             """
 
             tasks_missed_sla = []

airflow/providers/google/cloud/hooks/dataproc_metastore.py~L685

                     TBLS.TBL_NAME = '{table}'"""
         if _partitions:
             query += f"""
-                    AND PARTITIONS.PART_NAME IN ({', '.join(f"'{p}'" for p in _partitions)})"""
+                    AND PARTITIONS.PART_NAME IN ({", ".join(f"'{p}'" for p in _partitions)})"""
         query += ";"
 
         client = self.get_dataproc_metastore_client_v1beta()

airflow/providers/google/cloud/triggers/cloud_storage_transfer_service.py~L48

     def serialize(self) -> tuple[str, dict[str, Any]]:
         """Serialize StorageTransferJobsTrigger arguments and classpath."""
         return (
-            f"{self.__class__.__module__ }.{self.__class__.__qualname__}",
+            f"{self.__class__.__module__}.{self.__class__.__qualname__}",
             {
                 "project_id": self.project_id,
                 "job_names": self.job_names,

dev/breeze/src/airflow_breeze/utils/packages.py~L338

     processed_package_filters.extend(get_long_package_names(short_packages))
 
     removed_packages: list[str] = [
-        f"apache-airflow-providers-{provider.replace('.','-')}" for provider in get_removed_provider_ids()
+        f"apache-airflow-providers-{provider.replace('.', '-')}" for provider in get_removed_provider_ids()
     ]
     all_packages_including_removed: list[str] = available_doc_packages + removed_packages
     invalid_filters = [

dev/breeze/src/airflow_breeze/utils/packages.py~L524

     prefix = "apache-airflow-providers-"
     base_url = "https://airflow.apache.org/docs/"
     for dependency in cross_package_dependencies:
-        pip_package_name = f"{prefix}{dependency.replace('.','-')}"
-        url_suffix = f"{dependency.replace('.','-')}"
+        pip_package_name = f"{prefix}{dependency.replace('.', '-')}"
+        url_suffix = f"{dependency.replace('.', '-')}"
         if markdown:
             url = f"[{pip_package_name}]({base_url}{url_suffix})"
         else:

dev/prepare_bulk_issues.py~L228

             except GithubException as e:
                 console.print(f"[red]Error!: {e}[/]")
                 console.print(
-                    f"[yellow]Restart with `--start-from {processed_issues+start_from}` to continue.[/]"
+                    f"[yellow]Restart with `--start-from {processed_issues + start_from}` to continue.[/]"
                 )
         console.print(f"Created {processed_issues} issue(s).")
 

docker_tests/test_prod_image.py~L37

 PROD_IMAGE_PROVIDERS_FILE_PATH = SOURCE_ROOT / "prod_image_installed_providers.txt"
 AIRFLOW_ROOT_PATH = Path(__file__).parents[2].resolve()
 SLIM_IMAGE_PROVIDERS = [
-    f"apache-airflow-providers-{provider_id.replace('.','-')}"
+    f"apache-airflow-providers-{provider_id.replace('.', '-')}"
     for provider_id in AIRFLOW_PRE_INSTALLED_PROVIDERS_FILE_PATH.read_text().splitlines()
     if not provider_id.startswith("#")
 ]
 REGULAR_IMAGE_PROVIDERS = [
-    f"apache-airflow-providers-{provider_id.replace('.','-')}"
+    f"apache-airflow-providers-{provider_id.replace('.', '-')}"
     for provider_id in PROD_IMAGE_PROVIDERS_FILE_PATH.read_text().splitlines()
     if not provider_id.startswith("#")
 ]

tests/jobs/test_scheduler_job.py~L4210

         with dag_maker("test_dagrun_states_are_correct_2", start_date=date) as dag:
             EmptyOperator(task_id="dummy_task")
         for i in range(16):
-            dr = dag_maker.create_dagrun(run_id=f"dr2_run_{i+1}", state=State.RUNNING, execution_date=date)
+            dr = dag_maker.create_dagrun(run_id=f"dr2_run_{i + 1}", state=State.RUNNING, execution_date=date)
             date = dr.execution_date + timedelta(hours=1)
         dr16 = DagRun.find(run_id="dr2_run_16")
         date = dr16[0].execution_date + timedelta(hours=1)
         for i in range(16, 32):
-            dr = dag_maker.create_dagrun(run_id=f"dr2_run_{i+1}", state=State.QUEUED, execution_date=date)
+            dr = dag_maker.create_dagrun(run_id=f"dr2_run_{i + 1}", state=State.QUEUED, execution_date=date)
             date = dr.execution_date + timedelta(hours=1)
 
         # third dag and dagruns

tests/jobs/test_scheduler_job.py~L4223

         with dag_maker("test_dagrun_states_are_correct_3", start_date=date) as dag:
             EmptyOperator(task_id="dummy_task")
         for i in range(16):
-            dr = dag_maker.create_dagrun(run_id=f"dr3_run_{i+1}", state=State.RUNNING, execution_date=date)
+            dr = dag_maker.create_dagrun(run_id=f"dr3_run_{i + 1}", state=State.RUNNING, execution_date=date)
             date = dr.execution_date + timedelta(hours=1)
         dr16 = DagRun.find(run_id="dr3_run_16")
         date = dr16[0].execution_date + timedelta(hours=1)
         for i in range(16, 32):
-            dr = dag_maker.create_dagrun(run_id=f"dr2_run_{i+1}", state=State.QUEUED, execution_date=date)
+            dr = dag_maker.create_dagrun(run_id=f"dr2_run_{i + 1}", state=State.QUEUED, execution_date=date)
             date = dr.execution_date + timedelta(hours=1)
 
         scheduler_job = Job()

tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py~L117

             {"timestamp": current_time, "message": "Third"},
         ]
         assert [handler._event_to_str(event) for event in events] == ([
-            f"[{get_time_str(current_time-2000)}] First",
-            f"[{get_time_str(current_time-1000)}] Second",
+            f"[{get_time_str(current_time - 2000)}] First",
+            f"[{get_time_str(current_time - 1000)}] Second",
             f"[{get_time_str(current_time)}] Third",
         ])
 

tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py~L141

 
         msg_template = "*** Reading remote log from Cloudwatch log_group: {} log_stream: {}.\n{}\n"
         events = "\n".join([
-            f"[{get_time_str(current_time-2000)}] First",
-            f"[{get_time_str(current_time-1000)}] Second",
+            f"[{get_time_str(current_time - 2000)}] First",
+            f"[{get_time_str(current_time - 1000)}] Second",
             f"[{get_time_str(current_time)}] Third",
         ])
         assert self.cloudwatch_task_handler.read(self.ti) == (

tests/system/providers/amazon/aws/example_sagemaker.py~L138

         dockerfile.write(
             f"""
             FROM public.ecr.aws/amazonlinux/amazonlinux
-            COPY {preprocessing_script.name.split('/')[2]} /preprocessing.py
+            COPY {preprocessing_script.name.split("/")[2]} /preprocessing.py
             ADD credentials /credentials
             ENV AWS_SHARED_CREDENTIALS_FILE=/credentials
             RUN yum install python3 pip -y

tests/system/providers/amazon/aws/example_sagemaker.py~L182

     """generates a very simple csv dataset with headers"""
     content = "class,x,y\n"  # headers
     for i in range(SAMPLE_SIZE):
-        content += f"{i%100},{i},{SAMPLE_SIZE-i}\n"
+        content += f"{i % 100},{i},{SAMPLE_SIZE - i}\n"
     return content
 
 

tests/utils/log/test_secrets_masker.py~L211

             The above exception was the direct cause of the following exception:
 
             Traceback (most recent call last):
-              File ".../test_secrets_masker.py", line {line+4}, in test_masking_in_explicit_context_exceptions
+              File ".../test_secrets_masker.py", line {line + 4}, in test_masking_in_explicit_context_exceptions
                 raise RuntimeError(f"Exception: {{exception}}") from exception
             RuntimeError: Exception: Cannot connect to user:***
             """

aws/aws-sam-cli (+31 -31 lines across 20 files)

ruff format --preview

samcli/commands/init/interactive_init_flow.py~L484

     click.echo(f"\n{question}")
 
     for index, option in enumerate(options_list):
-        click.echo(f"\t{index+1} - {option}")
+        click.echo(f"\t{index + 1} - {option}")
         click_choices.append(str(index + 1))
     choice = click.prompt(msg, type=click.Choice(click_choices), show_choices=False)
     return options_list[int(choice) - 1]

samcli/commands/init/interactive_init_flow.py~L497

         click.echo("\nSelect your starter template")
         click_template_choices = []
         for index, template in enumerate(templates):
-            click.echo(f"\t{index+1} - {template['displayName']}")
+            click.echo(f"\t{index + 1} - {template['displayName']}")
             click_template_choices.append(str(index + 1))
         template_choice = click.prompt("Template", type=click.Choice(click_template_choices), show_choices=False)
         chosen_template = templates[int(template_choice) - 1]

samcli/commands/local/invoke/core/command.py~L58

                     ),
                     RowDefinition(
                         name=style(
-                            f"$ echo {json.dumps({'message':'hello!'})} | "
+                            f"$ echo {json.dumps({'message': 'hello!'})} | "
                             f"{ctx.command_path} HelloWorldFunction -e -"
                         ),
                         extra_row_modifiers=[ShowcaseRowModifier()],

samcli/commands/remote/invoke/core/command.py~L38

                         RowDefinition(
                             name=style(
                                 f"${ctx.command_path} --stack-name hello-world -e"
-                                f" '{json.dumps({'message':'hello!'})}'"
+                                f" '{json.dumps({'message': 'hello!'})}'"
                             ),
                             extra_row_modifiers=[ShowcaseRowModifier()],
                         ),

samcli/commands/remote/invoke/core/command.py~L59

                     formatter.write_rd([
                         RowDefinition(
                             name=style(
-                                f"$ echo '{json.dumps({'message':'hello!'})}' | "
+                                f"$ echo '{json.dumps({'message': 'hello!'})}' | "
                                 f"{ctx.command_path} HelloWorldFunction --event-file -"
                             ),
                             extra_row_modifiers=[ShowcaseRowModifier()],

samcli/commands/remote/invoke/core/command.py~L111

                         RowDefinition(
                             name=style(
                                 f"${ctx.command_path} --stack-name mock-stack StockTradingStateMachine"
-                                f" -e '{json.dumps({'message':'hello!'})}'"
+                                f" -e '{json.dumps({'message': 'hello!'})}'"
                             ),
                             extra_row_modifiers=[ShowcaseRowModifier()],
                         ),

samcli/commands/remote/invoke/core/command.py~L149

                     formatter.write_rd([
                         RowDefinition(
                             name=style(
-                                f"$ echo '{json.dumps({'message':'hello!'})}' | "
+                                f"$ echo '{json.dumps({'message': 'hello!'})}' | "
                                 f"${ctx.command_path} --stack-name mock-stack StockTradingStateMachine"
                                 f" --parameter traceHeader=<>"
                             ),

samcli/commands/remote/invoke/core/command.py~L231

                         RowDefinition(
                             name=style(
                                 f"${ctx.command_path} --stack-name mock-stack MyKinesisStream -e"
-                                f" '{json.dumps({'message':'hello!'})}'"
+                                f" '{json.dumps({'message': 'hello!'})}'"
                             ),
                             extra_row_modifiers=[ShowcaseRowModifier()],
                         ),

samcli/commands/remote/test_event/put/core/command.py~L60

                     ),
                     RowDefinition(
                         name=style(
-                            f"$ echo '{json.dumps({'message':'hello!'})}' | "
+                            f"$ echo '{json.dumps({'message': 'hello!'})}' | "
                             f"{ctx.command_path} --stack-name hello-world HelloWorldFunction --name MyEvent "
                             f"--file -"
                         ),

tests/integration/buildcmd/test_build_cmd.py~L2139

             "Function2Handler": "main.second_function_handler",
             "FunctionRuntime": "3.7",
             "DockerFile": "Dockerfile",
-            "Tag": f"{random.randint(1,100)}",
+            "Tag": f"{random.randint(1, 100)}",
         }
         cmdlist = self.get_command_list(parameter_overrides=overrides)
 

tests/integration/buildcmd/test_build_cmd.py~L2826

         overrides = {
             "Runtime": "3.7",
             "DockerFile": "Dockerfile",
-            "Tag": f"{random.randint(1,100)}",
+            "Tag": f"{random.randint(1, 100)}",
             "LocalNestedFuncHandler": "main.handler",
         }
         cmdlist = self.get_command_list(

tests/integration/local/start_lambda/test_start_lambda.py~L450

 class TestImagePackageType(StartLambdaIntegBaseClass):
     template_path = "/testdata/start_api/image_package_type/template.yaml"
     build_before_invoke = True
-    tag = f"python-{random.randint(1000,2000)}"
+    tag = f"python-{random.randint(1000, 2000)}"
     build_overrides = {"Tag": tag}
     parameter_overrides = {"ImageUri": f"helloworldfunction:{tag}"}
 

tests/integration/local/start_lambda/test_start_lambda.py~L480

     template_path = "/testdata/start_api/image_package_type/template.yaml"
     container_mode = ContainersInitializationMode.EAGER.value
     build_before_invoke = True
-    tag = f"python-{random.randint(1000,2000)}"
+    tag = f"python-{random.randint(1000, 2000)}"
     build_overrides = {"Tag": tag}
     parameter_overrides = {"ImageUri": f"helloworldfunction:{tag}"}
 

tests/integration/local/start_lambda/test_start_lambda.py~L510

     template_path = "/testdata/start_api/image_package_type/template.yaml"
     container_mode = ContainersInitializationMode.LAZY.value
     build_before_invoke = True
-    tag = f"python-{random.randint(1000,2000)}"
+    tag = f"python-{random.randint(1000, 2000)}"
     build_overrides = {"Tag": tag}
     parameter_overrides = {"ImageUri": f"helloworldfunction:{tag}"}
 

tests/integration/testdata/sync/code/after/function/app.py~L10

     return {
         "statusCode": 200,
         "body": json.dumps({
-            "message": f"{layer_method()+2}",
+            "message": f"{layer_method() + 2}",
             "message_from_layer": f"{layer_method()}",
             "extra_message": np.array([1, 2, 3, 4, 5, 6]).tolist(),  # checking external library call will succeed
         }),

tests/integration/testdata/sync/code/before/function/app.py~L10

     return {
         "statusCode": 200,
         "body": json.dumps({
-            "message": f"{layer_method()+1}",
+            "message": f"{layer_method() + 1}",
             "message_from_layer": f"{layer_method()}",
             "extra_message": np.array([1, 2, 3, 4, 5, 6]).tolist(),  # checking external library call will succeed
         }),

tests/integration/testdata/sync/infra/after/Python/function/app.py~L10

     return {
         "statusCode": 200,
         "body": json.dumps({
-            "message": f"{layer_method()+2}",
+            "message": f"{layer_method() + 2}",
             "extra_message": np.array([1, 2, 3, 4, 5, 6]).tolist(),  # checking external library call will succeed
         }),
     }

tests/integration/testdata/sync/infra/before/Python/function/app.py~L10

     return {
         "statusCode": 200,
         "body": json.dumps({
-            "message": f"{layer_method()+1}",
+            "message": f"{layer_method() + 1}",
             "extra_message": np.array([1, 2, 3, 4, 5, 6]).tolist(),  # checking external library call will succeed
         }),
     }

tests/integration/testdata/sync/infra/cdk/after/asset.6598609927b272b36fdf01072092f9851ddcd1b41ba294f736ce77091f5cc456/main.py~L10

     return {
         "statusCode": 200,
         "body": json.dumps({
-            "message": f"{layer_method()+2}",
+            "message": f"{layer_method() + 2}",
             "extra_message": np.array([1, 2, 3, 4, 5, 6]).tolist(),  # checking external library call will succeed
         }),
     }

tests/integration/testdata/sync/infra/cdk/after/asset.b998895901bf33127f2c9dce715854f8b35aa73fb7eb5245ba9721580bbe5837/main.py~L10

     return {
         "statusCode": 200,
         "body": json.dumps({
-            "message": f"{layer_method()+2}",
+            "message": f"{layer_method() + 2}",
             "extra_message": np.array([1, 2, 3, 4, 5, 6]).tolist(),  # checking external library call will succeed
         }),
     }

tests/integration/testdata/sync/infra/cdk/before/asset.6598609927b272b36fdf01072092f9851ddcd1b41ba294f736ce77091f5cc456/main.py~L10

     return {
         "statusCode": 200,
         "body": json.dumps({
-            "message": f"{layer_method()+1}",
+            "message": f"{layer_method() + 1}",
             "extra_message": np.array([1, 2, 3, 4, 5, 6]).tolist(),  # checking external library call will succeed
         }),
     }

tests/integration/testdata/sync/infra/cdk/before/asset.b998895901bf33127f2c9dce715854f8b35aa73fb7eb5245ba9721580bbe5837/main.py~L10

     return {
         "statusCode": 200,
         "body": json.dumps({
-            "message": f"{layer_method()+1}",
+            "message": f"{layer_method() + 1}",
             "extra_message": np.array([1, 2, 3, 4, 5, 6]).tolist(),  # checking external library call will succeed
         }),
     }

tests/integration/testdata/sync/nested/after/child_stack/child_functions/child_function.py~L7

 
     return {
         "statusCode": 200,
-        "body": json.dumps({"message": f"{layer_method()+6}"}),
+        "body": json.dumps({"message": f"{layer_method() + 6}"}),
     }

tests/integration/testdata/sync/nested/after/root_function/root_function.py~L17

 
     return {
         "statusCode": 200,
-        "body": json.dumps({"message": f"{layer_method()+6}", "location": ip.text.replace("\n", "")}),
+        "body": json.dumps({"message": f"{layer_method() + 6}", "location": ip.text.replace("\n", "")}),
     }

tests/integration/testdata/sync/nested/before/child_stack/child_functions/child_function.py~L7

 
     return {
         "statusCode": 200,
-        "body": json.dumps({"message": f"{layer_method()+5}"}),
+        "body": json.dumps({"message": f"{layer_method() + 5}"}),
     }

tests/integration/testdata/sync/nested/before/root_function/root_function.py~L17

 
     return {
         "statusCode": 200,
-        "body": json.dumps({"message": f"{layer_method()+6}", "location": ip.text.replace("\n", "")}),
+        "body": json.dumps({"message": f"{layer_method() + 6}", "location": ip.text.replace("\n", "")}),
     }

tests/integration/testdata/sync/nested_intrinsics/before/child_stack/child_function/function/function.py~L19

     return {
         "statusCode": 200,
         "body": json.dumps({
-            "message": f"{layer_method()+1}",
+            "message": f"{layer_method() + 1}",
             "location": ip.text.replace("\n", ""),
             # "extra_message": np.array([1, 2, 3, 4, 5, 6]).tolist() # checking external library call will succeed
         }),

tests/unit/lib/build_module/test_build_graph.py~L311

     handler = "{HANDLER}"
     functions = ["HelloWorldPython", "HelloWorld2Python"]
     [function_build_definitions.{UUID}.metadata]
-    Test = "{METADATA['Test']}"
-    Test2 = "{METADATA['Test2']}"
+    Test = "{METADATA["Test"]}"
+    Test2 = "{METADATA["Test2"]}"
     [function_build_definitions.{UUID}.env_vars]
-    env_vars = "{ENV_VARS['env_vars']}"
+    env_vars = "{ENV_VARS["env_vars"]}"
 
     [layer_build_definitions]
     [layer_build_definitions.{LAYER_UUID}]

tests/unit/lib/build_module/test_build_graph.py~L327

     manifest_hash = "{MANIFEST_HASH}"
     layer = "SumLayer"
     [layer_build_definitions.{LAYER_UUID}.env_vars]
-    env_vars = "{ENV_VARS['env_vars']}"
+    env_vars = "{ENV_VARS["env_vars"]}"
     """
 
     def test_should_instantiate_first_time(self):

bloomberg/pytest-memray (+2 -2 lines across 1 file)

ruff format --preview

src/pytest_memray/marks.py~L159

         stacks_left = num_stacks
         for function, file, line in stack_trace:
             if stacks_left <= 0:
-                text_lines.append(f"{padding*2}...")
+                text_lines.append(f"{padding * 2}...")
                 break
-            text_lines.append(f"{padding*2}{function}:{file}:{line}")
+            text_lines.append(f"{padding * 2}{function}:{file}:{line}")
             stacks_left -= 1
 
     return "\n".join(text_lines)

bokeh/bokeh (+27 -27 lines across 14 files)

ruff format --preview

examples/basic/annotations/legend_two_dimensions.py~L16

 sinx = np.sin(x)
 
 p1 = figure(title="Default legend layout", width=500, height=300)
-[p1.line(x, (1 + i / 20) * sinx, legend_label=f"{1+i/20:.2f}*sin(x)") for i in range(7)]
+[p1.line(x, (1 + i / 20) * sinx, legend_label=f"{1 + i / 20:.2f}*sin(x)") for i in range(7)]
 
 p2 = figure(title="Legend layout with 2 columns", width=500, height=300)
-[p2.line(x, (1 + i / 20) * sinx, legend_label=f"{1+i/20:.2f}*sin(x)") for i in range(7)]
+[p2.line(x, (1 + i / 20) * sinx, legend_label=f"{1 + i / 20:.2f}*sin(x)") for i in range(7)]
 p2.legend.ncols = 2
 
 p3 = figure(title="Legend layout with 3 rows", width=500, height=300)
-[p3.line(x, (1 + i / 20) * sinx, legend_label=f"{1+i/20:.2f}*sin(x)") for i in range(7)]
+[p3.line(x, (1 + i / 20) * sinx, legend_label=f"{1 + i / 20:.2f}*sin(x)") for i in range(7)]
 p3.legend.nrows = 3
 
 show(column(p1, p2, p3))

examples/server/app/fourier_animated.py~L62

 terms_plot.circle(0, 0, radius=A, color=palette, line_width=2, line_dash=dashing, fill_color=None)
 
 for i in range(4):
-    legend_label = f"4sin({i*2+1}x)/{i*2+1}pi" if i else "4sin(x)/pi"
+    legend_label = f"4sin({i * 2 + 1}x)/{i * 2 + 1}pi" if i else "4sin(x)/pi"
     terms_plot.line("x", f"y{i}", color=palette[i], line_width=2, source=lines_source, legend_label=legend_label)
 
 terms_plot.circle("xterm-dot", "yterm-dot", size=5, color="color", source=items_source)

examples/styling/mathtext/latex_schrodinger.py~L51

     p.line(q, y, color="red", line_width=2)
 
     p.add_layout(Label(x=-5.8, y=E_v, y_offset=-21, text=rf"$$v = {v}$$"))
-    p.add_layout(Label(x=3.9, y=E_v, y_offset=-25, text=rf"$$E_{v} = ({2*v+1}/2) \hbar\omega$$"))
+    p.add_layout(Label(x=3.9, y=E_v, y_offset=-25, text=rf"$$E_{v} = ({2 * v + 1}/2) \hbar\omega$$"))
 
 V = q**2 / 2
 p.line(q, V, line_color="black", line_width=2, line_dash="dashed")

examples/topics/hierarchical/crosstab.py~L37

     p.hbar(y=value(y), left=left, right=right, source=source, height=0.9, color=factor_cmap("Region", "MediumContrast4", regions))
 
     pcts = source.data[y]
-    source.data[f"{y} text"] = [f"{r}\n{x*100:0.1f}%" for r, x in zip(regions, pcts)]
+    source.data[f"{y} text"] = [f"{r}\n{x * 100:0.1f}%" for r, x in zip(regions, pcts)]
 
     p.text(y=value(y), x=left, text=f"{y} text", source=source, x_offset=10, text_color="color", text_baseline="middle", text_font_size="15px")
 

examples/topics/hierarchical/crosstab.py~L45

 
 p.hbar(right=0, left=-totals, y=totals.index, height=0.9, color="#dadada")
 
-text = [f"{name} ({totals.loc[name]*100:0.1f}%)" for name in cats]
+text = [f"{name} ({totals.loc[name] * 100:0.1f}%)" for name in cats]
 p.text(y=cats, x=0, text=text, text_baseline="middle", text_align="right", x_offset=-12, text_color="#4a4a4a", text_font_size="20px", text_font_style="bold")
 
 show(p)

examples/topics/stats/pyramid.py~L34

 for i, (count, age) in enumerate(zip(f_hist, edges[1:])):
     if i % 2 == 1:
         continue
-    p.text(x=count, y=edges[1:][i], text=[f"{age-bin_width}-{age}yrs"], x_offset=5, y_offset=7, text_font_size="12px", text_color=DarkText[5])
+    p.text(x=count, y=edges[1:][i], text=[f"{age - bin_width}-{age}yrs"], x_offset=5, y_offset=7, text_font_size="12px", text_color=DarkText[5])
 
 # customise x-axis and y-axis
 p.xaxis.ticker = (-80, -60, -40, -20, 0, 20, 40, 60, 80)

setup.py~L83

             stamp, txt = m.groups()
             msg.append(f"   {dim(green(stamp))} {dim(txt)}")
     print(BUILD_SUCCESS_MSG.format(msg="\n".join(msg)))
-    print(f"\n Build time: {bright(yellow(f'{t1-t0:0.1f} seconds'))}\n")
+    print(f"\n Build time: {bright(yellow(f'{t1 - t0:0.1f} seconds'))}\n")
 
     print("Build artifact sizes:")
     try:

src/bokeh/colors/color.py~L334

 
         """
         if self.a < 1.0:
-            return f"#{self.r:02x}{self.g:02x}{self.b:02x}{int(round(self.a*255)):02x}"
+            return f"#{self.r:02x}{self.g:02x}{self.b:02x}{int(round(self.a * 255)):02x}"
         else:
             return f"#{self.r:02x}{self.g:02x}{self.b:02x}"
 

src/bokeh/colors/color.py~L458

 
         """
         if self.a == 1.0:
-            return f"hsl({self.h}, {self.s*100}%, {self.l*100}%)"
+            return f"hsl({self.h}, {self.s * 100}%, {self.l * 100}%)"
         else:
-            return f"hsla({self.h}, {self.s*100}%, {self.l*100}%, {self.a})"
+            return f"hsla({self.h}, {self.s * 100}%, {self.l * 100}%, {self.a})"
 
     def to_hsl(self) -> HSL:
         """Return a HSL copy for this HSL color.

src/bokeh/core/property/either.py~L98

         if any(param.is_valid(value) for param in self.type_params):
             return
 
-        msg = "" if not detail else f"expected an element of either {nice_join([ str(param) for param in self.type_params ])}, got {value!r}"
+        msg = "" if not detail else f"expected an element of either {nice_join([str(param) for param in self.type_params])}, got {value!r}"
         raise ValueError(msg)
 
     def wrap(self, value):

src/bokeh/core/property_mixins.py~L233

 _hatch_pattern_help = f"""
 Built-in patterns are can either be specified as long names:
 
-{', '. join(HatchPattern)}
+{", ".join(HatchPattern)}
 
 or as one-letter abbreviations:
 
-{', '. join(repr(x) for x in HatchPatternAbbreviation)}
+{", ".join(repr(x) for x in HatchPatternAbbreviation)}
 """
 
 _hatch_weight_help = """

src/bokeh/sphinxext/bokeh_model.py~L119

         name_prefix, model_name, arglist, retann = m.groups()
 
         if getenv("BOKEH_SPHINX_QUICK") == "1":
-            return self.parse(f"{model_name}\n{'-'*len(model_name)}\n", "<bokeh-model>")
+            return self.parse(f"{model_name}\n{'-' * len(model_name)}\n", "<bokeh-model>")
 
         module_name = self.options["module"]
 

src/bokeh/util/sampledata.py~L212

             file.write(data)
 
             if progress:
-                status = f"\r{fetch_size:< 10d} [{fetch_size*100.0/file_size:6.2f}%%]"
+                status = f"\r{fetch_size:< 10d} [{fetch_size * 100.0 / file_size:6.2f}%%]"
                 stdout.write(status)
                 stdout.flush()
 

tests/integration/widgets/test_select.py~L143

             opts = grp.find_elements(By.TAG_NAME, "option")
             assert len(opts) == i
             for j, opt in enumerate(opts, 1):
-                assert opt.text == f"Option {i*10 + j}"
-                assert opt.get_attribute("value") == f"Option {i*10 + j}"
+                assert opt.text == f"Option {i * 10 + j}"
+                assert opt.get_attribute("value") == f"Option {i * 10 + j}"
 
         assert page.has_no_console_errors()
 

tests/integration/widgets/test_select.py~L167

             opts = grp.find_elements(By.TAG_NAME, "option")
             assert len(opts) == i
             for j, opt in enumerate(opts, 1):
-                assert opt.text == f"Option {i*10 + j}"
-                assert opt.get_attribute("value") == f"Option {i*10 + j}"
+                assert opt.text == f"Option {i * 10 + j}"
+                assert opt.get_attribute("value") == f"Option {i * 10 + j}"
 
         assert page.has_no_console_errors()
 

tests/integration/widgets/test_select.py~L189

             opts = grp.find_elements(By.TAG_NAME, "option")
             assert len(opts) == i
             for j, opt in enumerate(opts, 1):
-                assert opt.text == f"Option {i*10 + j}"
-                assert opt.get_attribute("value") == f"{i*10 + j}"
+                assert opt.text == f"Option {i * 10 + j}"
+                assert opt.get_attribute("value") == f"{i * 10 + j}"
 
         assert page.has_no_console_errors()
 

tests/integration/widgets/test_select.py~L213

             opts = grp.find_elements(By.TAG_NAME, "option")
             assert len(opts) == i
             for j, opt in enumerate(opts, 1):
-                assert opt.text == f"Option {i*10 + j}"
-                assert opt.get_attribute("value") == f"{i*10 + j}"
+                assert opt.text == f"Option {i * 10 + j}"
+                assert opt.get_attribute("value") == f"{i * 10 + j}"
 
         assert page.has_no_console_errors()
 

tests/test_examples.py~L230

     result = run_in_chrome(url)
     end = time.time()
 
-    info(f"Example rendered in {(end-start):.3f} seconds")
+    info(f"Example rendered in {(end - start):.3f} seconds")
 
     success = result["success"]
     timeout = result["timeout"]

tests/unit/bokeh/core/property/test_bases.py~L113

         def raise_(ex):
             raise ex
 
-        p.asserts(False, lambda obj, name, value: raise_(ValueError(f"bad {hp==obj} {name} {value}")))
+        p.asserts(False, lambda obj, name, value: raise_(ValueError(f"bad {hp == obj} {name} {value}")))
 
         with pytest.raises(ValueError) as e:
             p.prepare_value(hp, "foo", 10)

commaai/openpilot (+9 -9 lines across 8 files)

ruff format --preview

selfdrive/car/vin.py~L67

                 except Exception:
                     cloudlog.exception("VIN query exception")
 
-        cloudlog.error(f"vin query retry ({i+1}) ...")
+        cloudlog.error(f"vin query retry ({i + 1}) ...")
 
     return -1, -1, VIN_UNKNOWN
 

selfdrive/controls/lib/events.py~L382

 def joystick_alert(CP: car.CarParams, CS: car.CarState, sm: messaging.SubMaster, metric: bool, soft_disable_time: int) -> Alert:
     axes = sm["testJoystick"].axes
     gb, steer = list(axes)[:2] if len(axes) else (0.0, 0.0)
-    vals = f"Gas: {round(gb * 100.)}%, Steer: {round(steer * 100.)}%"
+    vals = f"Gas: {round(gb * 100.0)}%, Steer: {round(steer * 100.0)}%"
     return NormalPermanentAlert("Joystick Mode", vals)
 
 

selfdrive/debug/count_events.py~L65

     for k, v in cnt_cameras.items():
         s = SERVICE_LIST[k]
         expected_frames = int(s.frequency * duration / cast(float, s.decimation))
-        print("  ", k.ljust(20), f"{v}, {v/expected_frames:.1%} of expected")
+        print("  ", k.ljust(20), f"{v}, {v / expected_frames:.1%} of expected")
 
     print("\n")
     print("Alerts")

selfdrive/modeld/tests/timing/benchmark.py~L35

     print("\n\n")
     print(f"ran modeld {N} times for {TIME}s each")
     for _, t in enumerate(execution_times):
-        print(f"\tavg: {sum(t)/len(t):0.2f}ms, min: {min(t):0.2f}ms, max: {max(t):0.2f}ms")
+        print(f"\tavg: {sum(t) / len(t):0.2f}ms, min: {min(t):0.2f}ms, max: {max(t):0.2f}ms")
     print("\n\n")

selfdrive/navd/navd.py~L144

             waypoint_coords = json.loads(waypoints)
 
         coords = [(self.last_position.longitude, self.last_position.latitude), *waypoint_coords, (destination.longitude, destination.latitude)]
-        params["waypoints"] = f"0;{len(coords)-1}"
+        params["waypoints"] = f"0;{len(coords) - 1}"
         if self.last_bearing is not None:
             params["bearings"] = f"{(self.last_bearing + 360) % 360:.0f},90" + (";" * (len(coords) - 1))
 

selfdrive/test/test_onroad.py~L394

                 result += f"{s} - failed RSD timing check\n"
                 passed = False
 
-            result += f"{s.ljust(40)}: {np.array([np.mean(ts), np.max(ts), np.min(ts)])*1e3}\n"
-            result += f"{''.ljust(40)}  {np.max(np.absolute([np.max(ts)/dt, np.min(ts)/dt]))} {np.std(ts)/dt}\n"
+            result += f"{s.ljust(40)}: {np.array([np.mean(ts), np.max(ts), np.min(ts)]) * 1e3}\n"
+            result += f"{''.ljust(40)}  {np.max(np.absolute([np.max(ts) / dt, np.min(ts) / dt]))} {np.std(ts) / dt}\n"
         result += "=" * 67
         print(result)
         self.assertTrue(passed)

system/loggerd/tests/test_encoder.py~L141

             for i in trange(num_segments):
                 # poll for next segment
                 with Timeout(int(SEGMENT_LENGTH * 10), error_msg=f"timed out waiting for segment {i}"):
-                    while Path(f"{route_prefix_path}--{i+1}") not in Path(Paths.log_root()).iterdir():
+                    while Path(f"{route_prefix_path}--{i + 1}") not in Path(Paths.log_root()).iterdir():
                         time.sleep(0.1)
                 check_seg(i)
         finally:

tools/tuning/measure_steering_accuracy.py~L110

             for group in self.display_groups:
                 if len(self.speed_group_stats[group]) > 0:
                     print(f"speed group: {group:10s} {self.all_groups[group][1]:>96s}")
-                    print(f"  {'-'*118}")
+                    print(f"  {'-' * 118}")
                     for k in sorted(self.speed_group_stats[group].keys()):
                         v = self.speed_group_stats[group][k]
                         print(

demisto/content (+218 -193 lines across 62 files)

ruff format --preview --exclude Packs/ThreatQ/Integrations/ThreatQ/ThreatQ.py

Packs/AccentureCTI_Feed/Integrations/ACTIIndicatorFeed/ACTIIndicatorFeed.py~L69

         except ConnectionError as exception:  # pragma: no cover
             # Get originating Exception in Exception chain
             error_class = str(exception.__class__)  # pragma: no cover
-            err_type = f"""<{error_class[error_class.find("'") + 1: error_class.rfind("'")]}>"""  # pragma: no cover
+            err_type = f"""<{error_class[error_class.find("'") + 1 : error_class.rfind("'")]}>"""  # pragma: no cover
             err_msg = (
                 "Verify that the server URL parameter"
                 " is correct and that you have access to the server from your host."

Packs/Anomali_ThreatStream/Integrations/AnomaliThreatStreamv3/AnomaliThreatStreamv3.py~L1782

     readable_output: str = ""
     if associated_entity_ids_results == num_associated_entity_ids:
         readable_output = (
-            f'The {associated_entity_type} entities with ids {", ".join(map(str, res.get("ids",[])))} '
+            f'The {associated_entity_type} entities with ids {", ".join(map(str, res.get("ids", [])))} '
             f'were associated successfully to entity id: {entity_id}.'
         )
     elif associated_entity_ids_results > 0:
         readable_output = (
-            f'Part of the {associated_entity_type} entities with ids {", ".join(map(str, res.get("ids",[])))} '
+            f'Part of the {associated_entity_type} entities with ids {", ".join(map(str, res.get("ids", [])))} '
             f'were associated successfully to entity id: {entity_id}.'
         )
     else:

Packs/Anomali_ThreatStream/Scripts/ThreatstreamBuildIocImportJson/ThreatstreamBuildIocImportJson.py~L60

         if not re.match(domainRegex, domain):
             invalid_indicators.append(domain)
     if len(invalid_indicators) > 0:
-        raise DemistoException(f'Invalid indicators values: {", ".join(map(str,invalid_indicators))}')
+        raise DemistoException(f'Invalid indicators values: {", ".join(map(str, invalid_indicators))}')
 
 
 def get_indicators_from_user(args: dict, indicators_types: dict) -> list:

Packs/ApiModules/Scripts/NGINXApiModule/NGINXApiModule.py~L192

             start = 1
             for lines in batch(f.readlines(), 100):
                 end = start + len(lines)
-                demisto.info(f"nginx access log ({start}-{end-1}): " + "".join(lines))
+                demisto.info(f"nginx access log ({start}-{end - 1}): " + "".join(lines))
                 start = end
         Path(old_access).unlink()
     if log_error:

Packs/ApiModules/Scripts/NGINXApiModule/NGINXApiModule.py~L200

             start = 1
             for lines in batch(f.readlines(), 100):
                 end = start + len(lines)
-                demisto.error(f"nginx error log ({start}-{end-1}): " + "".join(lines))
+                demisto.error(f"nginx error log ({start}-{end - 1}): " + "".join(lines))
                 start = end
         Path(old_error).unlink()
 

Packs/ArcusTeam/Integrations/ArcusTeam/ArcusTeam.py~L71

     markdown += f"**Series**: {device.get('series')}{nl}"
     markdown += f"**Categories**: {','.join(device.get('categories'))}{nl}"
     markdown += f"**DeviceID**: {device.get('device_key')}{nl}"
-    markdown += f"**Match Score**: {round(device.get('score')*100,2)}%{nl}"
+    markdown += f"**Match Score**: {round(device.get('score') * 100, 2)}%{nl}"
     firmwares = device.get("firmware")
     markdown += tableToMarkdown("Firmwares", firmwares, headers=["firmwareid", "version", "name"])
     return markdown

Packs/BmcITSM/Integrations/BmcITSM/BmcITSM.py~L3038

     """
 
     stmt = oper_between_filters.join(
-        f"'{filter_key}' {oper_in_filter} \"{wrap_filter_value(filter_val,oper_in_filter)}\""
+        f"'{filter_key}' {oper_in_filter} \"{wrap_filter_value(filter_val, oper_in_filter)}\""
         for filter_key, filter_val in (filter_mapper).items()
     )
     return stmt

Packs/BreachRx/Integrations/BreachRx/BreachRx.py~L158

         description = f"""An Incident copied from the Palo Alto Networks XSOAR platform.
             <br>
             <br>
-            XSOAR Incident Name: {demisto.incident().get('name')}"""
+            XSOAR Incident Name: {demisto.incident().get("name")}"""
 
     response = client.create_incident(incident_name, description)
 

Packs/Campaign/Scripts/GetCampaignIndicatorsByIncidentId/GetCampaignIndicatorsByIncidentId.py~L26

     Returns:
         List of the campaign indicators.
     """
-    indicators_query = f"""investigationIDs:({' '.join(f'"{id_}"' for id_ in incident_ids)})"""
+    indicators_query = f"""investigationIDs:({" ".join(f'"{id_}"' for id_ in incident_ids)})"""
     fields = ["id", "indicator_type", "investigationIDs", "investigationsCount", "score", "value"]
     search_indicators = IndicatorsSearcher(query=indicators_query, limit=150, size=500, filter_fields=",".join(fields))
     indicators: list[dict] = []

Packs/CommonScripts/Scripts/BMCTool/BMCTool.py~L1170

                             self.pal = True
                             t_bmp = self.PALETTE + self.bdat[len(t_hdr) : len(t_hdr) + cf * t_width * t_height]
                         else:
-                            self.b_log("error", False, f"Unexpected bpp {8*cf} found during processing; aborting.")
+                            self.b_log("error", False, f"Unexpected bpp {8 * cf} found during processing; aborting.")
                     bl = cf * 64 * 64
             if len(t_bmp) > 0:
                 self.bmps.append(t_bmp)

Packs/CommonScripts/Scripts/GetIndicatorDBotScoreFromCache/GetIndicatorDBotScoreFromCache.py~L7

     values: list[str] = argToList(demisto.args().get("value", None))
     unique_values: set[str] = {v.lower() for v in values}  # search query is case insensitive
 
-    query = f"""value:({' '.join([f'"{value}"' for value in unique_values])})"""
+    query = f"""value:({" ".join([f'"{value}"' for value in unique_values])})"""
     demisto.debug(f"{query=}")
 
     res = demisto.searchIndicators(

Packs/CommonScripts/Scripts/ParseEmailFilesV2/ParseEmailFilesV2.py~L30

     if parent_email_file:
         md += f"### Containing email: {parent_email_file}\n"
 
-    md += f"""* From:\t{email_data.get('From') or ""}\n"""
-    md += f"""* To:\t{email_data.get('To') or ""}\n"""
-    md += f"""* CC:\t{email_data.get('CC') or ""}\n"""
-    md += f"""* BCC:\t{email_data.get('BCC') or ""}\n"""
-    md += f"""* Subject:\t{email_data.get('Subject') or ""}\n"""
+    md += f"""* From:\t{email_data.get("From") or ""}\n"""
+    md += f"""* To:\t{email_data.get("To") or ""}\n"""
+    md += f"""* CC:\t{email_data.get("CC") or ""}\n"""
+    md += f"""* BCC:\t{email_data.get("BCC") or ""}\n"""
+    md += f"""* Subject:\t{email_data.get("Subject") or ""}\n"""
     if email_data.get("Text"):
         text = email_data["Text"].replace("<", "[").replace(">", "]")
         md += f'* Body/Text:\t{text or ""}\n'
     if email_data.get("HTML"):
-        md += f"""* Body/HTML:\t{email_data['HTML'] or ""}\n"""
+        md += f"""* Body/HTML:\t{email_data["HTML"] or ""}\n"""
 
-    md += f"""* Attachments:\t{email_data.get('Attachments') or ""}\n"""
+    md += f"""* Attachments:\t{email_data.get("Attachments") or ""}\n"""
     md += "\n\n" + tableToMarkdown("HeadersMap", email_data.get("HeadersMap"))
     return md
 

Packs/CommunityCommonScripts/Scripts/RetrievePlaybooksAndIntegrations/RetrievePlaybooksAndIntegrations.py~L66

 def retrieve_playbooks_and_integrations(args: Dict[str, Any]) -> CommandResults:
     playbooks: List[str] = []
     integrations: List[str] = []
-    query = f'''name:"{args['playbook_name']}"'''
+    query = f'''name:"{args["playbook_name"]}"'''
     body = {"query": query}
     playbooks_json = perform_rest_call("post", "playbook/search", body)
     for playbook_json in playbooks_json["playbooks"]:

Packs/CommunityCommonScripts/Scripts/RetrievePlaybooksAndIntegrations/RetrievePlaybooksAndIntegrations.py~L84

     outputs = {"Playbooks": playbooks, "Integrations": integrations}
 
     return CommandResults(
-        readable_output=f'''Retrieved Playbooks and Integrations for Playbook "{playbook_json['name']}"''',
+        readable_output=f'''Retrieved Playbooks and Integrations for Playbook "{playbook_json["name"]}"''',
         outputs_prefix="RetrievePlaybooksAndIntegrations",
         outputs_key_field="",
         outputs=outputs,

Packs/CommvaultSecurityIQ/Integrations/CommvaultSecurityIQ/CommvaultSecurityIQ.py~L881

             self.validate_session_or_generate_token(self.current_api_token)
             response = self.http_request("GET", f"/V4/SAML/{identity_server_name}")
             if "error" in response:
-                demisto.debug(f"Error [{response.get('error',{}).get('errorString','')}]")
+                demisto.debug(f"Error [{response.get('error', {}).get('errorString', '')}]")
                 return False
             if response.get("enabled"):
                 demisto.debug(f"SAML is enabled for identity server [{identity_server_name}]. Going to disable it")

Packs/ContentTesting/Scripts/UnitTestCoverage/UnitTestCoverage.py~L34

         markdown += "|No Tasks Found||||\n"
         return markdown
     for _key, val in tasks.items():
-        markdown += f"|{val['name']}|{val['count']}|{val['completed']}|{val['completed']/val['count']*100}%|\n"
+        markdown += f"|{val['name']}|{val['count']}|{val['completed']}|{val['completed'] / val['count'] * 100}%|\n"
 
     return markdown
 

Packs/CovalenceForSecurityProviders/Integrations/CovalenceForSecurityProviders/CovalenceForSecurityProviders.py~L168

                 created_time_str = created_time.strftime(DATE_FORMAT)
 
                 if BROKER:
-                    incident_name = f"""[{target_org}] [{a.get('type', 'No alert type')}] {a.get('analystTitle', 'No title')}"""
+                    incident_name = f"""[{target_org}] [{a.get("type", "No alert type")}] {a.get("analystTitle", "No title")}"""
                 else:
-                    incident_name = f"""[{a.get('type', 'No alert type')}] {a.get('analystTitle', 'No title')}"""
+                    incident_name = f"""[{a.get("type", "No alert type")}] {a.get("analystTitle", "No title")}"""
                 incident: Dict[str, Any] = {"name": incident_name, "occured": created_time_str, "rawJSON": json.dumps(a)}
                 if a.get("severity", None):
                     #  XSOAR mapping

Packs/CovalenceManagedSecurity/Integrations/CovalenceManagedSecurity/CovalenceManagedSecurity.py~L236

                 if a.get("steps", None) and len(a["steps"]) > 0:
                     incident["details"] += "\n\nMitigation Steps\n"
                     for step in a["steps"]:
-                        incident["details"] += f"""- {step['label']}\n"""
+                        incident["details"] += f"""- {step["label"]}\n"""
                 if org_id:
                     active_response_profile = p.get_active_response_profile(org_id)
                     if active_response_profile:

Packs/CrowdStrikeFalcon/Integrations/CrowdStrikeFalcon/CrowdStrikeFalcon.py~L6073

 def ODS_create_scan_request(args: dict, is_scheduled: bool) -> dict:
     body = make_create_scan_request_body(args, is_scheduled)
     remove_nulls_from_dictionary(body)
-    return http_request("POST", f'/ods/entities/{"scheduled-"*is_scheduled}scans/v1', json=body)
+    return http_request("POST", f'/ods/entities/{"scheduled-" * is_scheduled}scans/v1', json=body)
 
 
 def ODS_verify_create_scan_command(args: dict) -> None:

Packs/Cryptosim/Integrations/Cryptosim/Cryptosim.py~L132

             message = "ok"
         else:
             raise Exception(f"""StatusCode:
-                            {client.correlations().get('StatusCode')},
-                            Error: {client.correlations().get('ErrorMessage')}
+                            {client.correlations().get("StatusCode")},
+                            Error: {client.correlations().get("ErrorMessage")}
                             """)
     except DemistoException as e:
         if "401" in str(e):

Packs/Cybereason/Integrations/Cybereason/Cybereason.py~L875

         response = get_remediation_action(client, malop_guid, machine_name, target_id, remediation_action)
         action_status = get_remediation_action_status(client, user_name, malop_guid, response, comment)
         if dict_safe_get(action_status, ["Remediation status"]) == "SUCCESS":
-            success_response = f"""Kill process remediation action status is: {dict_safe_get(
-                action_status, ['Remediation status'])} \n Remediation ID: {dict_safe_get(action_status, ['Remediation ID'])}"""
+            success_response = f"""Kill process remediation action status is: {
+                dict_safe_get(action_status, ["Remediation status"])
+            } \n Remediation ID: {dict_safe_get(action_status, ["Remediation ID"])}"""
             return CommandResults(readable_output=success_response)
         elif dict_sa...*[Comment body truncated]*

@dhruvmanila dhruvmanila force-pushed the dhruv/fstring-formatting branch 2 times, most recently from 77830be to 7b2a6b2 Compare February 13, 2024 09:59
@dhruvmanila dhruvmanila force-pushed the dhruv/split-string-part branch 2 times, most recently from a431f6a to 334616d Compare February 13, 2024 12:28
@dhruvmanila dhruvmanila force-pushed the dhruv/fstring-formatting branch from 7b2a6b2 to c80ab7f Compare February 13, 2024 12:38
Base automatically changed from dhruv/split-string-part to main February 13, 2024 12:44
@dhruvmanila dhruvmanila force-pushed the dhruv/fstring-formatting branch from c80ab7f to 18b88c4 Compare February 13, 2024 12:45

codspeed-hq bot commented Feb 13, 2024

CodSpeed Performance Report

Merging #9642 will not alter performance

Comparing dhruv/fstring-formatting (7eed676) with main (fe79798)

Summary

✅ 30 untouched benchmarks

// } bbbbbbbbbbbbb"
// ```
// This isn't decided yet, refer to the relevant discussion:
// https://github.com/astral-sh/ruff/discussions/9785
} else if AnyString::FString(self).is_multiline(context.source()) {

I think we need to change this to only return true when the string literals are multiline or we risk instability if an f-string becomes single line because we collapse the expression part. Or is this handled somewhere else?

@dhruvmanila dhruvmanila force-pushed the dhruv/fstring-formatting branch from 3d31f69 to 4c62227 Compare February 14, 2024 19:10
@dhruvmanila dhruvmanila marked this pull request as ready for review February 14, 2024 20:28
@MichaReiser
Member

I have some clarifying questions first:

The heuristic for adding newlines is similar to that of Prettier, where the formatter only splits an expression in the replacement field across multiple lines if there was already a line break within the replacement field.

Is the decision of whether replacement expressions are allowed to expand over multiple lines made globally for the entire f-string literal (expand if any replacement expression contains a line break), or is it made per replacement expression (only expand the replacement expressions that already contain line breaks, and keep the others flat)?

But, for triple-quoted strings, we can re-use the same quote char unless the inner string is itself a triple-quoted string.

Does this also apply to Python 3.12, or does Python 3.12 allow reusing the same triple quotes?

If debug expressions are present in the replacement field of an f-string, then the whitespace needs to be preserved as it will be rendered as-is (for example, f"{ x = }"). If there are any nested f-strings, then the whitespace in them needs to be preserved as well, which means that we'll stop formatting the f-string as soon as we encounter a debug expression.

Does that mean we do not format any f-string that contains a debug expression, or do we only disable formatting for replacement fields that contain debug expressions?

@dhruvmanila
Member Author

Is the logic if replacement expressions are allowed to expand over multiple lines globally for the entire f-string-literal (expand if any replacement expression contains a line break), or is the decision made for each replacement expression (only expand the replacement expressions that already contain line breaks, keep the other ones flat)?

The decision is made globally for an entire f-string by looking at each replacement field and checking if any of them contains a line break.
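To make the global decision concrete, here is a plain-Python sketch of the inputs in question (independent of ruff itself; the formatting outcome described in the comments is the formatter's behavior, not the snippet's): a line break inside any one replacement field marks the whole f-string as multiline, even though it doesn't change the string's value.

```python
a, b, c = 1, 2, 3

# No line break in any replacement field: the formatter keeps this flat.
flat = f"{a + b} and {c}"

# A line break in the first replacement field marks the *entire* f-string
# as multiline, so the formatter may expand every replacement field in it.
multiline = f"""{
    a + b} and {c}"""

# Where the line breaks sit has no effect on the resulting value.
assert flat == multiline == "3 and 3"
```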

Does this also apply to Python 3.12 or does Python 3.12 allow to reuse the same triple quotes?

Python 3.12 allows re-use of the same quotes irrespective of whether the string is single- or triple-quoted. The logic basically ORs the existing check with whether the target version is 3.12 or later.

Does that mean we do not format any f-string that contains any debug expression or is it that we only disable formatting for replacements that contain debug expressions?

The latter. So, given f"a { x = } b { y } c", we need to preserve the whitespace around x but not around y.
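The distinction is observable in plain Python (a minimal sketch, independent of the formatter): the source text of a debug expression, including its whitespace, is reproduced verbatim in the output, while a plain replacement field only contributes its value.

```python
x, y = 1, 2

# Debug expression: everything up to and including the `=` (whitespace
# included) is copied into the result verbatim, so the formatter must
# preserve it.
assert f"a { x = } b" == "a  x = 1 b"
assert f"a {x=} b" == "a x=1 b"

# Plain replacement field: only the value is rendered, so the formatter
# is free to normalize the whitespace around `y`.
assert f"c { y } d" == f"c {y} d" == "c 2 d"
```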

@MichaReiser left a comment

Excellent work @dhruvmanila

We can follow up with the indent handling in a separate PR if we decide to change it.

I have a few comments about the implementation; most are small nits.

My main concerns are about combining the RemoveSoftlinesBuffer with the Printer approach and how trailing comments are now associated as dangling comments.

I suggest that we either use the RemoveSoftlinesBuffer or the Printer approach but that we avoid using both to keep things "simple". If you prefer to keep RemoveSoftlinesBuffer, then I would prefer changing the magic trailing comma handling by checking some state in the context over using the Printer. It avoids the performance penalty and doesn't require a visitor to determine if it is necessary to use it or not.

crates/ruff_python_formatter/src/context.rs
crates/ruff_python_formatter/src/other/string_literal.rs
crates/ruff_python_formatter/src/string/normalize.rs
crates/ruff_python_formatter/src/other/f_string_element.rs (5 review threads)
@dhruvmanila
Member Author

Thanks for the detailed review @MichaReiser, really appreciate all the suggestions you've provided.

Regarding the magic trailing comma, I've removed the Printer usage and updated the builder to avoid adding the trailing comma if the f-string layout is flat. We already have that information in the context so it was an easy change.

pub(crate) fn finish(&mut self) -> FormatResult<()> {
    // If the formatter is inside an f-string expression element, and the layout
    // is flat, then we don't need to add a trailing comma.
    if let FStringState::InsideExpressionElement(context) = self.fmt.context().f_string_state() {
        if context.layout().is_flat() {
            return Ok(());
        }
    }

I'll look at the final ecosystem changes and then merge this PR.

@dhruvmanila
Member Author

dhruvmanila commented Feb 16, 2024

For posterity...

Currently, to avoid breaking the expression, we remove the line breaks after the expression has been formatted. This creates a problem w.r.t. the magic trailing comma. For example,

f"aaaaaaa {['aaaaaaaaaaaaaaa', 'bbbbbbbbbbbbb', 'ccccccccccccccccc', 'ddddddddddddddd', 'eeeeeeeeeeeeee']} aaaaaaa"

The formatter will break the list, but as there were no line breaks in the original source code, we'd collapse it back. The trailing comma would still remain:

f"aaaaaaa {['aaaaaaaaaaaaaaa', 'bbbbbbbbbbbbb', 'ccccccccccccccccc', 'ddddddddddddddd', 'eeeeeeeeeeeeee',]} aaaaaaa"

The current approach to solving this problem is to update the builder to not add the trailing comma in this specific context:

pub(crate) fn finish(&mut self) -> FormatResult<()> {
    // If the formatter is inside an f-string expression element, and the layout
    // is flat, then we don't need to add a trailing comma.
    if let FStringState::InsideExpressionElement(context) = self.fmt.context().f_string_state() {
        if context.layout().is_flat() {
            return Ok(());
        }
    }

An alternative approach, which is what this comment documents, is to use the Printer in the f-string formatting itself and avoid updating the builder. That would also remove the use of the RemoveSoftlineBreak buffer. This commit has a version of it which has since been removed. There are a few more changes which would need to be made along with that:

  1. Update LineWidth to use u32, as u16 is reasonably large but a line can go beyond it.
  2. Remove the RemoveSoftlineBreak buffer usage, as the printer won't add the line breaks in the first place.

This also has an advantage that the logic is local to the f-string formatting.

@dhruvmanila dhruvmanila merged commit 72bf1c2 into main Feb 16, 2024
17 checks passed
@dhruvmanila dhruvmanila deleted the dhruv/fstring-formatting branch February 16, 2024 14:58
@dhruvmanila dhruvmanila mentioned this pull request Feb 16, 2024
3 tasks
nkxxll pushed a commit to nkxxll/ruff that referenced this pull request Mar 10, 2024
## Summary

_This is a preview-only feature and is available using the `--preview`
command-line flag._

With the implementation of [PEP 701] in Python 3.12, f-strings can now
be broken into multiple lines, can contain comments, and can re-use the
same quote character. Currently, no other Python formatter formats
f-strings, so there's some discussion that needs to happen in defining
the style used for f-string formatting. Relevant discussion:
astral-sh#9785

The goal for this PR is to add minimal support for f-string formatting.
This would be to format expression within the replacement field without
introducing any major style changes.

### Newlines

The heuristic for adding newlines is similar to that of
[Prettier](https://prettier.io/docs/en/next/rationale.html#template-literals),
where the formatter only splits an expression in the replacement
field across multiple lines if there was already a line break within the
replacement field.

In other words, the formatter would not add any newlines unless they
were already present, i.e., they were added by the user. This makes
breaking any expression inside an f-string optional and in control of
the user. For example,

```python
# We wouldn't break this
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa { aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"

# But, we would break the following as there's already a newline
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa {
	aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"
```


If there are comments in any of the replacement fields of the f-string,
then it will always be formatted as a multi-line f-string, in which case
the formatter prefers to break expressions, i.e., introduce newlines.
For example,

```python
x = f"{ # comment
    a }"
```
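The two rules above can be condensed into a small standalone check. This is an illustrative simplification rather than Ruff's implementation: `replacement_fields` and `may_split` are hypothetical helpers, and the scan deliberately ignores braces and `#` characters inside nested string literals.

```python
def replacement_fields(fstring_body: str) -> list[str]:
    """Extract the raw source of each replacement field from an
    f-string body, honoring the ``{{`` / ``}}`` escape sequences."""
    fields: list[str] = []
    depth = 0
    start = 0
    i = 0
    while i < len(fstring_body):
        ch = fstring_body[i]
        if ch == "{" and depth == 0 and fstring_body[i + 1 : i + 2] == "{":
            i += 2  # escaped literal "{"
            continue
        if ch == "}" and depth == 0 and fstring_body[i + 1 : i + 2] == "}":
            i += 2  # escaped literal "}"
            continue
        if ch == "{":
            if depth == 0:
                start = i
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                fields.append(fstring_body[start : i + 1])
        i += 1
    return fields


def may_split(field: str) -> bool:
    """A field may be broken across lines only if the user already broke
    it; a ``#`` comment forces the f-string to be multi-line anyway."""
    return "\n" in field or "#" in field
```

With this sketch, the single-line example earlier reports `may_split(...) == False`, while the variant with a user-added newline (or a comment) reports `True`.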

### Quotes

The logic for formatting quotes remains unchanged. The existing logic is
used to determine the necessary quote char and is used accordingly.

Now, if the expression inside an f-string is itself string-like, then
we need to make sure to preserve the existing quote and not change it to
the preferred quote unless the target version is Python 3.12 or later.
For example,

```python
# Input
f"outer {'inner'} outer"

# For pre-3.12 target versions, the single quotes are preserved
f"outer {'inner'} outer"

# While for 3.12 and later, the quotes can be changed
f"outer {"inner"} outer"
```

But for triple-quoted f-strings, we can re-use the same quote char
unless the inner string is itself a triple-quoted string.

```python
f"""outer {"inner"} outer"""  # valid
f"""outer {'''inner'''} outer"""  # preserve the single quote char for the inner string
```
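These quote rules can be condensed into a small decision function. This is a hedged sketch, not Ruff's actual API: the name `nested_string_quote` and the `pep701` flag are hypothetical, and quotes are passed as strings such as `'"'` or `"'''"`.

```python
def nested_string_quote(preferred: str, current: str, outer: str,
                        *, pep701: bool) -> str:
    """Pick the quote for a string literal nested inside an f-string.

    ``preferred``/``current`` are the preferred and existing quotes of
    the nested literal (e.g. '"' or "'''"); ``outer`` is the quote of
    the enclosing f-string.
    """
    if pep701:
        # Python 3.12+ allows re-using the outer quote char, so the
        # nested literal can always be normalized.
        return preferred
    if len(outer) == 3 and len(current) == 1:
        # Inside a triple-quoted f-string, a lone quote char cannot
        # terminate the outer string, so it's safe to normalize too.
        return preferred
    # Otherwise, changing the quote could clash with the outer
    # f-string's quote, so preserve what the user wrote.
    return current
```

For a pre-3.12 target, this keeps the single quotes in `f"outer {'inner'} outer"` but normalizes the inner string of `f"""outer {'inner'} outer"""` to the preferred double quote.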

### Debug expressions

If a debug expression is present in the replacement field of an
f-string, then the whitespace needs to be preserved as it will be
rendered as-is (for example, `f"{ x = }"`). If there are any nested
f-strings, then the whitespace in them needs to be preserved as well,
which means that we stop formatting the f-string as soon as we encounter
a debug expression.

```python
f"outer {   x =  !s  :.3f}"
#                  ^^
#                  We can remove these whitespaces
```

Now, the whitespace around the conversion and the format specifier
doesn't need to be preserved, so we format them as usual, but we won't
format any nested f-string within the format specifier.
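The split between the verbatim part and the formattable part of a debug field can be sketched as below. This is a simplification: the regex doesn't handle nested braces or `=` inside string literals, and the helper name `format_debug_field` is hypothetical.

```python
import re

# The expression text plus the "=" and the whitespace after it are
# rendered verbatim at runtime, so only the whitespace between the
# conversion and the format spec can be normalized.
FIELD = re.compile(
    r"^\{"
    r"(?P<expr>.*?=\s*)"         # expression, "=", trailing space: verbatim
    r"(?:!(?P<conv>[sra])\s*)?"  # optional conversion; space after it is removable
    r"(?::(?P<spec>[^{}]*))?"    # optional format spec, kept as written
    r"\}$",
    re.DOTALL,
)


def format_debug_field(field: str) -> str:
    match = FIELD.match(field)
    if match is None:
        return field  # not a debug expression; formatted elsewhere
    out = "{" + match["expr"]  # whitespace preserved exactly
    if match["conv"]:
        out += "!" + match["conv"]
    if match["spec"] is not None:
        out += ":" + match["spec"]
    return out + "}"
```

Applied to the example above, `"{   x =  !s  :.3f}"` becomes `"{   x =  !s:.3f}"`: only the spaces between the conversion and the format spec are dropped.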

### Miscellaneous

- The
[`hug_parens_with_braces_and_square_brackets`](astral-sh#8279)
preview style isn't implemented with respect to the f-string curly
braces.
- The
[indentation](astral-sh#9785 (comment))
is always relative to the statement containing the f-string.

## Test Plan

* Add new test cases
* Review existing snapshot changes
* Review the ecosystem changes

[PEP 701]: https://peps.python.org/pep-0701/
Labels: formatter (Related to the formatter), preview (Related to preview mode features)
Successfully merging this pull request may close these issues.

Formatter: handle comments, quotes, and expressions inside f-strings