
in keyword in class method causes indentation problem #1846

Closed
laggingreflex opened this issue Oct 11, 2020 · 5 comments

Comments

@laggingreflex

Description

A class with a method named "in" causes an indentation issue:

Input

The code looked like this before beautification:

class {
  get a() {

  }

  in () {

  }

  b () {

  }
}

Expected Output

The code should have looked like this after beautification:

(should be unchanged)

Actual Output

The code actually looked like this after beautification:

class {
    get a() {

        }

        in () {

        }

    b() {

    }
}

Steps to Reproduce

Tried on https://beautifier.io/

@kcamsanc
Contributor

@bitwiseman Hi! I'm new to open source contributions and am considering addressing this issue. Before I take it on, though, I was wondering: is it still a requirement to ensure feature parity as described in Contributions.md?

If so, what's the best way to reproduce this error in the Python implementation after forking and cloning the repo locally? Thanks.

@kcamsanc
Contributor

If so, what's the best way to reproduce this error in the Python implementation after forking and cloning the repo locally? Thanks.

I believe I figured out the answer to my question above.

I am now working on this change. I will keep you all updated on my progress!

@kcamsanc
Contributor

kcamsanc commented Mar 29, 2022

Hi all, I just opened up PR #2004. Let me know what you think. There is one thing that I wanted to bring up though (also under "CURRENT DISCUSSION" in the PR):

  • I'm under the impression that when I add unit tests to /test/data/*/test.js, corresponding tests are automatically generated for the Python implementation. Is this correct?
  • If so, I found that my unit tests did not fail for the Python implementation before I made any changes to that implementation. However, I created a .js file myself, ran ./tools/python-dev js-beautify on my file, and found that the issue was actually reproducible in the Python implementation.
  • How can I add unit tests for the Python implementation to incur a test failure before making changes to the Python implementation?

@bitwiseman
Member

Hi all, I just opened up PR #2004. Let me know what you think. There is one thing that I wanted to bring up though (also under "CURRENT DISCUSSION" in the PR):

  • I'm under the impression that when I add unit tests to /test/data/*/test.js, corresponding tests are automatically generated for the Python implementation. Is this correct?

Yes.

  • If so, I found that my unit tests did not fail for the Python implementation before I made any changes to that implementation. However, I created a .js file myself, ran ./tools/python-dev js-beautify on my file, and found that the issue was actually reproducible in the Python implementation.
  • How can I add unit tests for the Python implementation to incur a test failure before making changes to the Python implementation?

Did you run make all? I'd be very surprised if that didn't generate the test for both js and python, and result in a failure in the python test run.

@kcamsanc
Contributor

kcamsanc commented Mar 30, 2022

Did you run make all? I'd be very surprised if that didn't generate the test for both js and python, and result in a failure in the python test run.

Yes, to test this, I reverted my changes to tokenizer.py and ran both make and make all (not sure if there's a difference there). Neither resulted in a failure, and within the make all output, I do see:

...
Test unpack() function. ... ok
test_detect (jsbeautifier.unpackers.tests.testurlencode.TestUrlencode)
Test detect() function. ... ok
test_unpack (jsbeautifier.unpackers.tests.testurlencode.TestUrlencode)
Test unpack function. ... ok

----------------------------------------------------------------------
Ran 36 tests in 0.056s

which I think means that no Python tests failed even after I reverted my changes in tokenizer.py. I also ran make py, which resulted in "PASSED".

Looking into this further, I can actually see that my unit tests were automatically generated in python/jsbeautifier/tests/generated/tests.py like you said, so I find it strange that no Python tests failed. FWIW, I know the JavaScript unit tests I wrote failed before I made changes to the JavaScript implementation.

I know I can't make changes to python/jsbeautifier/tests/generated/tests.py as it is automatically generated. So, what's the best way to write good unit tests to make sure this bug doesn't reoccur in the future? (For now, like I mentioned, I've just had to run ./tools/python-dev js-beautify on a local .js file I made to see if the bug is still present in the Python implementation.)

@bitwiseman bitwiseman added this to the 1.14.x milestone Apr 21, 2022