
Prepare the Worker for Production Adoption #74

Open
9 of 19 tasks
ovflowd opened this issue Nov 15, 2023 · 10 comments

ovflowd (Member) commented Nov 15, 2023

There are still a few loose ends to fix before fully releasing our Worker to production; this issue tracks all the pending work:

Staged Tests

  • Wipe R2 once again
  • Resync everything
  • Remove v21.2.0 from R2
  • Run the promote script and ensure we have a 1:1 match between direct.nodejs.org and r2.nodejs.org (see the verification sketch after this list)
  • Ensure that a new "Release" tag on GitHub nodejs/node correctly triggers our release workflow
  • We might be able to test all this functionality with the next v18.x LTS release (cc @targos)
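
As a rough illustration of the 1:1 check (this is not the actual promote/verification tooling, and the release version is only an example), something like the following could compare a single file between the two origins. It assumes Node 18+ running the script as an ES module, so fetch and top-level await are available:

```ts
// Hypothetical verification sketch -- not the real promote/sync tooling.
// Fetch the same SHASUMS256.txt from both origins and check they match.
const version = 'v18.19.0'; // example version only
const path = `/dist/${version}/SHASUMS256.txt`;

const [direct, r2] = await Promise.all([
  fetch(`https://direct.nodejs.org${path}`).then((res) => res.text()),
  fetch(`https://r2.nodejs.org${path}`).then((res) => res.text()),
]);

console.log(direct === r2 ? `${path}: 1:1 match` : `${path}: MISMATCH`);
```

A fuller check would of course walk every path served from R2, but the idea is the same: the response from r2.nodejs.org must be byte-for-byte identical to direct.nodejs.org.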

Current Rollout

The checked items are now being served through Cloudflare Workers and our R2 (S3-compatible) bucket instead of our DigitalOcean server + Cloudflare Load Balancer:

  • /docs
  • /api
  • /download
  • /metrics
  • /dist

Only after all of these are acknowledged and confirmed should we switch traffic back to the Worker for /download and /dist.
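
For context, here is a rough sketch of the serving model described above: the Worker answers these paths from R2 and falls back to the DigitalOcean origin when an object is missing. The binding and variable names (RELEASES_BUCKET, ORIGIN_HOST) are made up for illustration, this is not the actual worker code, and it assumes @cloudflare/workers-types for the R2Bucket type:

```ts
// Illustrative sketch only -- not the actual release-cloudflare-worker code.
interface Env {
  RELEASES_BUCKET: R2Bucket; // hypothetical R2 binding name
  ORIGIN_HOST: string; // hypothetical var, e.g. 'direct.nodejs.org'
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1); // '/dist/...' -> 'dist/...'

    // Serve from the R2 bucket when the object exists.
    const object = await env.RELEASES_BUCKET.get(key);
    if (object !== null) {
      return new Response(object.body, { headers: { etag: object.httpEtag } });
    }

    // Otherwise fall back to the DigitalOcean origin.
    return fetch(`https://${env.ORIGIN_HOST}${url.pathname}`, request);
  },
};
```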

ovflowd (Member, Author) commented Nov 23, 2023

So @targos, AFAIK the open question now is why a new release did not trigger the GitHub Action, right?

And there's also the issue that when the promotion script runs, the R2 uploads start to fail because CPU/memory usage spikes, or maybe because the network traffic at that moment is huge (due to the cache being invalidated on Cloudflare?).

I feel that once we switch the traffic to R2, we wouldn't have this issue with uploading to R2 anymore, no?

ovflowd (Member, Author) commented Nov 23, 2023

cc @MattIPv4, your input here would be nice. Is there a way we can give the droplet a 2nd network interface, so that when the first one is congested, the 2nd one can be used just for DO <> CF comms?

ovflowd (Member, Author) commented Nov 23, 2023

That would be great, because if we have a 2nd network interface with another public IP, we could assign a domain to it (e.g. direct-cf.nodejs.org), and that could also be the domain CF uses for fetching/caching things. I don't know if that would even make a difference.

MattIPv4 (Member) commented:

Is there a way we can give the droplet a 2nd network interface, so that when the first one is congested, the 2nd one can be used just for DO <> CF comms?

I'm not sure if it'd give you a second interface, but a Droplet can definitely have two public IPs -- the default anchor address of the Droplet itself, and then a reserved IP as well. https://docs.digitalocean.com/products/networking/reserved-ips/

MattIPv4 (Member) commented:

That being said, I'm not sure how having a second IP/hostname will achieve anything? As soon as that hostname becomes publicly known, folks will start abusing it just like they do the current one...

Maybe I've missed a discussion somewhere, but why isn't all HTTP traffic on the Droplet just restricted to Cloudflare only? There shouldn't be anything that needs direct access rather than being proxied through Cloudflare with some level of caching, or the ability for us to lock down access there (we could still have a hostname with little caching if needed, but have full access control in Cloudflare).
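
Assuming the droplet's origin is fronted by nginx (an assumption on my part, not something confirmed in this thread), a Cloudflare-only lockdown could look roughly like the sketch below; the ranges shown are examples only and the full, current list lives at https://www.cloudflare.com/ips/:

```nginx
# Hypothetical sketch: only accept traffic from Cloudflare's edge ranges.
# The two ranges below are examples, not the complete list, and they need
# to be refreshed from https://www.cloudflare.com/ips/ periodically.
server {
    listen 443 ssl;
    server_name direct.nodejs.org;

    allow 173.245.48.0/20;  # example Cloudflare IPv4 range
    allow 103.21.244.0/22;  # example Cloudflare IPv4 range
    deny  all;

    # ... existing location blocks unchanged ...
}
```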

targos (Member) commented Nov 24, 2023

I'm not sure it's related to people accessing the server directly. When there's a new release, we just have a lot of load coming from Cloudflare servers.
One solution might be to disable concurrency in the aws S3 commands.
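
If the promotion/sync step uses the AWS CLI for the R2 uploads (an assumption on my part), its transfer concurrency can be reduced via the s3 settings, e.g.:

```sh
# Hypothetical tuning, assuming the aws CLI is what does the R2 uploads:
# lower the number of parallel transfers (the default is 10) so a release
# promotion doesn't saturate the droplet's CPU/network.
aws configure set default.s3.max_concurrent_requests 2
```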

ovflowd (Member, Author) commented Dec 3, 2023

I think we've merged the last bug fixes on our side. Now we only need to ensure the sync script works correctly and the links job is triggered correctly.

@MoLow any update on that front?

flakey5 (Member) commented Jan 10, 2024

Ensure that a new "Release" tag on GitHub nodejs/node correctly triggers our release workflow

https://github.com/nodejs/release-cloudflare-worker/actions/runs/7475122417 - seems to be working

ovflowd (Member, Author) commented Jan 10, 2024

Ensure that a new "Release" tag on GitHub nodejs/node correctly triggers our release workflow

https://github.com/nodejs/release-cloudflare-worker/actions/runs/7475122417 - seems to be working

Wdym? That action was triggered by a commit to main on the Cloudflare worker repo 🤔

MattIPv4 (Member) commented:

No, that run was manually triggered by the bot, against the latest commit in main
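
For anyone following along: a run that was "manually triggered by the bot" usually means the workflow exposes a workflow_dispatch trigger. A purely illustrative sketch, not the repository's actual workflow file:

```yaml
# Illustrative sketch only -- not the real release-cloudflare-worker workflow.
name: Sync release to R2

on:
  workflow_dispatch: # lets the bot (or a human) start a run manually
    inputs:
      version:
        description: Release version to sync (hypothetical input)
        required: false

jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "sync logic would live here"
```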
