Misc Dependency Updates #362

Merged
merged 9 commits into from Feb 7, 2023
Conversation

confused-Techie
Member

This PR goes through some of our many dependencies and upgrades where possible.

I have in no way covered everything, but this covers at least some of it. For most updates I tried to avoid major semver version changes, and only felt comfortable making such a jump in a package's dependencies when the new version matched the one already used in Pulsar.

The following dependencies have received updates:

Pulsar
  • yargs: 16.1.0 -> 17.6.2
  • temp: 0.9.2 -> 0.9.4
  • semver: 7.3.2 -> 7.3.8
  • eslint: 8.27.0 -> 8.33.0
  • eslint-plugin-jsdoc: 39.6.4 -> 39.7.4
About Package
  • etch: 0.9.0 -> 0.14.1
  • semver: 5.5.0 -> 7.3.8
Archive-View Package
  • etch: 0.9.0 -> 0.14.1
  • temp: 0.8.1 -> 0.9.4

Quick note: while doing this, I realized just how many of our dependencies are actually much more up to date than I thought they were. Still, it might be a good time to revisit the idea of setting up Renovate or some other dependency-management tool, as I quickly realized how impractical it would be to manually go through every package, find outdated dependencies, read changelogs, upgrade, and then test. Even if we need to disable checks for some packages that we know we can't touch, such as tree-sitter, it may be a good idea.
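
For illustration, a minimal renovate.json5 sketch along these lines (the "config:base" preset, the ignored dependency name, and the PR limit below are assumptions to illustrate the idea, not a tested config):

  // renovate.json5 -- Renovate also accepts JSON5, so comments are allowed
  {
    // start from Renovate's recommended defaults
    "extends": ["config:base"],
    // skip dependencies we know we can't touch yet, e.g. tree-sitter
    "ignoreDeps": ["tree-sitter"],
    // keep the PR noise manageable
    "prConcurrentLimit": 5
  }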

@DeeDeeG
Member

DeeDeeG commented Feb 2, 2023

+1 for setting up renovate or similar. (I prefer something that doesn't post PRs, since it's a lot of noise, but at the same time, if folks intend to review/merge those quickly, then that'd be a non-issue, really...)

Can you comment on the CI results? Any new failures that you notice? (It's good to see most things are green! And we can re-run failed jobs if there are tests that are just flaky but aren't truly broken/perma-failing right now.)

@confused-Techie
Member Author

@DeeDeeG I appreciate you taking a quick look at this!

For our CI here's what we have:

  • atom-dark-syntax: Failed when downloading RipGrep
  • autocomplete-atom-api: Failed when downloading RipGrep
  • find-and-replace: 47 Failures - Expected 51 ± ~10 (This test is flaky, and its failure count is unstable)
  • symbols-view: 2 Failures - 2 Expected
  • timecop: Failed when downloading RipGrep
  • tree-view: 2 Failures - 2 Expected
  • language-php: Failed with a Segmentation Fault when building
  • language-property-list: Failed when downloading RipGrep

As a note, the above failures during the download of RipGrep have been resolved as of #357, but those commits are not part of this PR's history, so the fixes are not available here.

As for the segmentation fault, I'm a bit unsure how that happened. I'm inclined to say it was a one-time error we don't have to worry about, but it's still good to keep in mind.

So overall the tests look pretty good to me. A little wonky, but nothing I'd be seriously concerned about.


On a dependency bot: that's definitely something we could look into implementing. There is a still-open poll on this, and that option is currently winning, so we would be on track to do it.

@DeeDeeG

This comment was marked as outdated.

@DeeDeeG
Member

DeeDeeG commented Feb 5, 2023

Okay, I can confirm there are no new issues with this branch in terms of the GitHub Actions tests (editor + package tests) -- all of the consistent spec failures here are also happening just the same on the master branch.

Weirdly enough, a visual test in the Cirrus job on Linux is getting tripped up somehow.

Test scenario: opening the editor and expecting to see the Welcome page. Within said scenario (if I'm reading this correctly), it waits for some element matching the CSS selector .tab-bar to be visible, and times out after 25 seconds of waiting.

Test output (click to expand):
  1) workspace.spec.js:53:3 › Opening Atom for the first time › the editor opens at the welcome page 
    Error: expect(received).toBeVisible()
    Call log:
      - expect.toBeVisible with timeout 25000ms
      - waiting for selector ".tab-bar >> nth=0"
      43 | test.describe('Opening Atom for the first time', () => {
      44 |   test.beforeAll(async () => {
    > 45 |     editor = await openAtom("atom-home-tests", "opening-first-time")
         |              ^
      46 |   })
      47 |
      48 |   test.afterAll(async () => {
        at openAtom (/tmp/cirrus-ci-build/integration/helpers.js:24:50)
        at /tmp/cirrus-ci-build/integration/workspace.spec.js:45:14
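
For reference, the assertion that times out boils down to a Playwright visibility check roughly like the sketch below (the require path and the shape of the handle returned by openAtom are assumptions based on the stack trace above):

  const { test, expect } = require('@playwright/test');
  const { openAtom } = require('./helpers'); // assumed export from integration/helpers.js

  test('the editor opens at the welcome page', async () => {
    // openAtom launches the editor; assumed to resolve to a Playwright page/window handle
    const editor = await openAtom('atom-home-tests', 'opening-first-time');

    // The step that fails in CI: the first ".tab-bar" element
    // must become visible within 25 seconds.
    await expect(editor.locator('.tab-bar').first()).toBeVisible({ timeout: 25000 });
  });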

@DeeDeeG
Member

DeeDeeG commented Feb 7, 2023

Going to try and see if the Cirrus Linux failure happens on master branch (without this PR) as well...

Could be just an intermittent CI issue, maybe. Or caused by something outside of this PR. Just trying to rule some things out.

@confused-Techie
Member Author

Going to try and see if the Cirrus Linux failure happens on master branch (without this PR) as well...

Could be just an intermittent CI issue, maybe. Or caused by something outside of this PR. Just trying to rule some things out.

I appreciate any double-checking you can do. That does seem quite odd, as I've been unable to determine how these changes would cause the errors reported, and at least when running locally the editor obviously opens just fine. So let me know what you find out!

@confused-Techie
Member Author

@DeeDeeG Is there something you did on your end? It looks like the Linux test on Cirrus is now passing here?

@DeeDeeG
Member

DeeDeeG left a comment

Re-ran it today and CI passed in the Linux job this time, no issue.

Meanwhile, a run on the master branch had the same failure on Linux (run https://cirrus-ci.com/task/5949221833015296 for commit 3851a91), so it's not caused by this branch. Dunno if it's just a flaky test or what.

But I'm comfortable approving this since it's not the cause of the Cirrus issue and doesn't hurt test results in the GitHub Actions either.

Thanks for doing this!

@confused-Techie
Member Author

@DeeDeeG Looks like (if memory serves) the same tests failed over on #346.
I've already restarted the Intel Mac Cirrus job there to see if it just needs a rerun; it very well might be a flaky test. That isn't something we've seen before with the visual tests, so it's a little worrisome.

But otherwise thanks a ton (again) for your review! I appreciate the effort, and will be glad to get things just a tad bit more up to date here. I'll go ahead and resolve the conflicts and get this merged!

@confused-Techie confused-Techie merged commit a6253be into master Feb 7, 2023
@Spiker985 Spiker985 deleted the misc-update-deps branch February 24, 2023 07:21