Server Setup #244

Closed · 1 task · StefanS-O opened this issue Aug 8, 2022 · 21 comments

Labels: in progress (currently being worked on) · infrastructure (related to infrastructure) · urgent

@StefanS-O (Collaborator) commented Aug 8, 2022

Set up the server that JB provided for delivering the new site:

  • finish Ansible scripts (a rough sketch follows the list below)

List of what's needed:

  • rsync
  • nginx (vhost for jupiterbroadcasting.net / archive.jupiterbroadcasting.net -> just static sites; we'll need more once we migrate Fireside)
  • letsencrypt
  • user to deploy from GitHub Actions (private key needs to be set as a secret in GitHub Actions, public key needs to be on the server)
  • docker (for later stuff like search)
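Purely as illustration, a minimal Ansible sketch of the list above; the play name, inventory group, key path, and exact package names are assumptions, not taken from the actual scripts:

```yaml
# Hypothetical sketch of the base setup described above (Debian/Ubuntu assumed)
- name: Base setup for the JB web server
  hosts: jb_web                        # assumed inventory group
  become: true
  tasks:
    - name: Install required packages
      ansible.builtin.apt:
        name:
          - rsync
          - nginx
          - certbot                    # letsencrypt client
          - python3-certbot-nginx
          - docker.io                  # for later stuff like search
        state: present
        update_cache: true

    - name: Create the deploy user for GitHub Actions
      ansible.builtin.user:
        name: deploy
        shell: /bin/bash

    - name: Authorize the GitHub Actions public key
      ansible.posix.authorized_key:
        user: deploy
        key: "{{ lookup('file', 'files/deploy.pub') }}"  # private half lives in a GitHub secret
```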
@StefanS-O added this to the JB.com 1.0 milestone Aug 8, 2022
@StefanS-O self-assigned this Aug 8, 2022
@elreydetoda (Collaborator) commented Aug 8, 2022

Oooh, if you want some help with this, I do this for work all the time 😁

Are you just doing a vhost with nginx @StefanS-O ?

We could commit the playbook to this repo to collaborate 🙃

Honestly, I don't typically like doing 🔼 (committing the infra code beside the app code), because it can hold back either the app or the infra if one doesn't iterate as fast as the other, but I think this is a nuanced case that might be an exception (plus I don't think anyone has enough rights to create something like an IaC (Infrastructure as Code) repo 😅)

@gerbrent (Collaborator) commented Aug 8, 2022

@ironicbadger wants to sink his teeth into helping with the website; this might be the perfect place...

@ironicbadger (Collaborator) commented Aug 8, 2022

I will create an infra repo, as I agree putting infra code and app code next to each other is not the way. I will migrate over the code from here where it makes sense and then provide docs on how to use it.

Might be a day or two.

@gerbrent (Collaborator) commented Aug 8, 2022

@StefanS-O is also in the process of writing up some requirements to help that process. Go team!

@gerbrent added the "in progress" and "infrastructure" labels Aug 8, 2022
@StefanS-O (Collaborator, Author) commented

@ironicbadger nice, I just added the required packages. Let's take an as-little-as-possible approach and extend from there.

@ironicbadger (Collaborator) commented

The site will be hosted on an existing Linode server that we use for several core JB services already.

I need to refactor a few things now that the server's scope has increased somewhat over its original purpose, but the fundamentals will remain the same as described here.

In order to proceed with deploying even the test site to this box, I need to get a handle on how we intend to build the site into an artifact. I assume (and hope) this will be a container with Hugo, as discussed in #249, built using GitHub Actions. We need to agree upon how to publish that artifact - is the GitHub Container Registry agreeable to all? Or should we clone this repo locally to the Linode box and build a local image there? I don't mind either solution, but we need to make the decision.
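For the GHCR option, a minimal sketch of what the publish workflow could look like (the file name, tag scheme, and Dockerfile location are assumptions, not the actual setup):

```yaml
# .github/workflows/publish.yml — hypothetical sketch, not the actual workflow
name: Build and publish site image
on:
  push:
    branches: [main]

jobs:
  build-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write                  # needed to push to ghcr.io
    steps:
      - uses: actions/checkout@v3

      - name: Log in to GHCR
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
```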

@elreydetoda (Collaborator) commented

In order to proceed with deploying even the test site...

I'd imagine just using the GH Action + container registry (GHCR) would be fine, but it'll ultimately depend on whether (and how much) it costs JB anything (I've never hosted on GHCR before, only quay & Docker Hub). I'd imagine not, since it's going to be a public image, but I don't know for certain.

I think my biggest concern is the frequency of updates, because currently the JB.net site is updated instantaneously after the action runs with an rsync. So, is that going to end up being the same situation for the dev.JB.com server (or wherever it'll be hosted)? If not, we'll need to note it somewhere in the contributing docs and let people know that the update frequency won't be as fast as it previously was.

To offset the latency of PR -> merge -> publish (if it isn't as fast), if we end up doing #257 it'll give people fast feedback (w/o having to wait for the deployment) to see what things will look like, and let them revert before they go to try and deploy.
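For reference, the current "instantaneous" deploy amounts to something like this step in the Action (a sketch; the key handling, paths, and host are assumptions):

```yaml
# Hypothetical sketch of the current rsync deploy step
- name: Deploy static site via rsync
  run: |
    rsync -avz --delete \
      -e "ssh -o StrictHostKeyChecking=no -i ~/.ssh/deploy_key" \
      public/ deploy@jupiterbroadcasting.net:/var/www/jupiterbroadcasting.net/
```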

@ironicbadger (Collaborator) commented

Deploying production and dev/PR iterations are two different things, imho. Do we need a separate discussion?

@StefanS-O (Collaborator, Author) commented

I don't have an issue with using a container and the GitHub registry. I agree with @elreydetoda that it is a question of pricing. The free tier only covers 500MB of storage and 1GB of transfer, which isn't much. We could use Docker Hub, which is free for public repos.

If we tag the images correctly we could also use https://github.com/containrrr/watchtower/ to autodeploy, or a webhook, or just SSH into the server and pull the new image.

@ironicbadger (Collaborator) commented

Please not watchtower. I have a strong aversion to that thing, as updates should be done atomically as part of a CI process, not just whenever something changes upstream (deliberately or otherwise).

The action can easily SSH into the remote box and deploy the new container. I've been doing this for years with several other sites quite successfully running on this box.
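A sketch of that pattern, assuming a docker-compose setup on the host; the action choice, secret names, and compose directory are all made up for illustration:

```yaml
# Hypothetical deploy job: SSH into the box, pull the new image, restart
deploy:
  needs: build-push
  runs-on: ubuntu-latest
  steps:
    - name: Pull and restart the container on the server
      uses: appleboy/ssh-action@v0.1.10
      with:
        host: ${{ secrets.DEPLOY_HOST }}
        username: deploy
        key: ${{ secrets.DEPLOY_SSH_KEY }}
        script: |
          cd /opt/jupiterbroadcasting.com    # assumed compose directory
          docker compose pull website
          docker compose up -d website
```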

@ironicbadger (Collaborator) commented

@RealOrangeOne might have some opinions?

@elreydetoda (Collaborator) commented

So if we don't use GHCR, then I'd actually prefer to stay away from Docker Hub. I know it's the default for a lot of things (because it's the most well known & it's docker), but they keep making changes to Docker Hub's terms of use and keep restricting things behind paywalls (i.e. rate limiting the number of pulls you can do from their registry).

I've personally moved all my stuff over to quay.io and have been pretty happy with it. I'd probably recommend that registry (unless someone else has another registry they'd use for a specific reason), and you can use the red hat GH action to push to it: https://github.com/redhat-actions/push-to-registry
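For example, roughly like this (the image name, secret names, and Containerfile path are assumptions):

```yaml
# Hypothetical steps pushing to quay.io with the Red Hat actions
- name: Log in to quay.io
  uses: redhat-actions/podman-login@v1
  with:
    registry: quay.io
    username: ${{ secrets.QUAY_USERNAME }}
    password: ${{ secrets.QUAY_TOKEN }}

- name: Build the image
  uses: redhat-actions/buildah-build@v2
  with:
    image: jupiterbroadcasting/website      # assumed image name
    tags: latest ${{ github.sha }}
    containerfiles: ./Containerfile

- name: Push to quay.io
  uses: redhat-actions/push-to-registry@v2
  with:
    image: jupiterbroadcasting/website
    tags: latest ${{ github.sha }}
    registry: quay.io
```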

The action can easily SSH into the remote box and...

Interesting, I hadn't thought of that 😅. Do you mind pointing to a repo that has that? (just personally curious how you've done it before, i.e. an argument passed to ssh, cat'ing a script into ssh stdin, etc.)

That sounds like it'll work perfectly fine though 😁🚀

@ironicbadger (Collaborator) commented

perfectmediaserver.com is deployed this way and is a static site built using mkdocs.

https://github.com/ironicbadger/pms-wiki/blob/main/.github/workflows/deploy.yml

@RealOrangeOne commented

I think we should do this properly. The way the PMS wiki and selfhosted.show wiki are deployed definitely works, but it's not ideal.

Deploying as a docker container is, I think, something we can universally agree is the right way to go, for so many reasons. Building the container in CI, pushing it to GHCR, and having some docker config in a separate repo ought to be ample. The tricky question, I agree, is how to deploy it.

watchtower doesn't play especially well with docker-compose in my experience, not to mention it's a bit of a hack. Instead of having the server poll for updates, we should tell the server to update.

I've had success in the past using webhook to automatically deploy containers. Once the container is pushed to a registry, we can poke the server to pull and restart the container, something like my update-all script.
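With adnanh/webhook, for instance, the server side can be a single hook definition; this is only a sketch, with a made-up hook id, script path, and header name:

```yaml
# hooks.yaml for adnanh/webhook — hypothetical example
- id: deploy-website
  execute-command: /opt/scripts/update-website.sh   # would pull + restart the container
  command-working-directory: /opt/jupiterbroadcasting.com
  trigger-rule:
    match:
      type: value
      value: "shared-deploy-token"                  # placeholder; keep the real one out of the repo
      parameter:
        source: header
        name: X-Deploy-Token
```

The CI job would then just hit `https://<host>:9000/hooks/deploy-website` with that header after the push.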

Alternatively, we could bypass this entirely. Linode supports serving static websites from their object storage, which removes all server management and deployment issues. It is much less "fun", yes, but it's also pretty simple to configure. And given the website is static, we might as well lean on the fact that it makes deployment much simpler.

@ironicbadger (Collaborator) commented

#274 addresses the production deployments. Dev is still being figured out but this issue can be closed.

@elreydetoda (Collaborator) commented

Alternatively, we could bypass this entirely. Linode supports serving static websites using their object storage, which has the benefit of removing all server management and deployment issues, with much simpler deployment. It is much less "fun", yes, but also is pretty simple to configure. And given the website is static, we might as well lean on the fact it makes deployment much simpler.

I actually really like this answer... since we'll probably be using their object storage for #23 anyway, why not use it for the whole site? 🤔😅

I will say, though, that it does raise at least one comment/question/issue. I know (at least on AWS) you can't have TLS on your S3 buckets; you'll normally use CloudFront for that. So we'd probably have to use Cloudflare to handle the encryption. Another question would be cleanup tasks: if something gets "removed" on the Hugo side, how does it get cleaned up from the object storage?

#274 addresses the production deployments. Dev is still being figured out but this issue can be closed.

I know that this is currently how we are doing things, but one thing I thought about is that we never end up publishing a container artifact to a registry. We're only building it on the production host. So only the production host has access to the image it's using, and if we want to try to debug an issue with the image, we can't.

@ironicbadger (Collaborator) commented

I tried building the site and pushing to S3 earlier, but it took 17 mins for one build. That's too long and unacceptable when our current build and deploy is under 1 minute. I used s3cmd from a Linode, so bandwidth was not the limiting factor; the process of transacting 20k files was.

@RealOrangeOne commented

Have you tried something like rclone? It should let you do parallel uploads, which should sidestep some of the processing overhead from the S3 API
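Something along these lines (the remote and bucket names are made up); as a bonus, `sync` would also answer the earlier cleanup question, since it deletes remote files that no longer exist locally:

```yaml
# Hypothetical workflow step: parallel upload of the built site with rclone
- name: Sync site to Linode Object Storage
  run: |
    # --transfers raises the number of parallel uploads (default is 4)
    rclone sync public/ linode:jb-website --transfers 32 --checksum
```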

@ironicbadger (Collaborator) commented

Haven't tried rclone yet - for the 1.0 milestone we'll stick with the method the sysadmin here understands best (container). Not ruling out changes to S3 etc. in the future, but I was advised by @gerbrent that a real web server is needed for some more advanced functionality somewhere? Not sure what, though.

@elreydetoda (Collaborator) commented

Based on @ironicbadger's comment here & the last one, I'm closing this issue and opening a new one (#307) for future investigations.

@elreydetoda (Collaborator) commented

Will need to create a separate issue, but wanted to note this before I forget: during this deployment issue, @ironicbadger and I talked about how it would probably still be a good idea to create a docker container artifact (image) and push it to a registry somewhere. That way, if we do have any issues, we can simply pull the previous artifact and start the server with that until the issue is figured out (see the sketch below).
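i.e. rolling back could be as simple as repointing the compose file at a previous tag (a sketch; the image path and tag are made up):

```yaml
# docker-compose.yml fragment — hypothetical rollback to a previous image
services:
  website:
    image: ghcr.io/jupiterbroadcasting/jupiterbroadcasting.com:previous-sha  # pin the last known-good tag
    restart: unless-stopped
    ports:
      - "8080:80"
```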
