Development has moved to:
https://github.com/JupiterBroadcasting/jupiterbroadcasting.com
Built with Hugo and deployed with GitHub Actions.
Demo: https://jb.codefighters.net
See the discussion in JupiterBroadcasting/jupiterbroadcasting.com#8.
- Static site using Hugo
- Complete publishing workflow using GitHub and GitHub Actions
- Template using SCSS (no Node dependencies, thanks to Hugo extended)
- Only vanilla JS is used (single files with a concat workflow)
- Highly configurable via `config.toml` and the config folder (a hedged example follows this list)
- Hosts (via data folder and frontmatter)
- Video player
- HTML5 audio player
- Multishow capable
- Tags (via frontmatter)
- Guests (via data folder and frontmatter)
- Sponsors (via data folder and frontmatter)
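For instance, a show-specific config such as `config.coderradio.toml` presumably overrides a handful of site-level values; the keys below are assumptions for illustration, not copied from the repo:

```toml
# Hypothetical show-specific override, loaded via:
#   hugo server -D --config config.coderradio.toml
baseURL = "https://coderradio.example.com/"  # assumed value
title = "Coder Radio"

[params]
  show = "coderradio"  # assumed param name used by the templates
```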
A wishlist of features and remaining work is tracked here: https://github.com/StefanS-O/jupiterbroadcasting-hugo-mvp/issues
Install Hugo: https://gohugo.io/getting-started/installing/
Start the development server (rebuilds on every filesystem change):

```sh
hugo server -D
```
To build and run the Docker image:

```sh
make run
```
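If you want to run Docker by hand instead, the make target is roughly equivalent to something like the following; the image name and port mapping are assumptions:

```sh
# Approximate manual equivalent of `make run` (image name and port assumed)
docker build -t jb-hugo-mvp .
docker run --rm -it -p 1313:1313 jb-hugo-mvp
```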
To run the development server with a show-specific config (here Coder Radio):

```sh
hugo server -D --config config.coderradio.toml
```
To clean the module config:

```sh
hugo mod clean --all
```
To build:

```sh
hugo -D --config config.coderradio.toml
```
There is currently a Hugo issue regarding overlapping mounts, so for now only subdirectories work.
Deployment is done with GitHub Actions; see the workflow file in `.github/workflows/main.yml`.
At the moment it is only triggered when something changes on the `main` branch, but it could also be set up to run at certain times. That would enable scheduled publishing, since by default Hugo only builds pages whose frontmatter date is <= now.
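As a hedged sketch, the trigger section of such a workflow could combine the push trigger with a cron schedule so that pages whose date has passed get picked up; the schedule below is an assumption, not the repo's actual workflow:

```yaml
# Hypothetical trigger block for .github/workflows/main.yml
on:
  push:
    branches: [main]
  schedule:
    - cron: '0 * * * *'  # assumed: rebuild hourly so pages dated <= now get published
```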
The fireside-scraper is based on JB Show Notes, which was written by ironicbadger.
It goes over all the JB Fireside shows and scrapes each episode into the format that Hugo (using this template) expects.
Besides the episodes it also scrapes and creates the JSON files for:
- sponsors
- hosts
- guests (every host is symlinked into the guests dir, since a host of one show could be a guest on an episode of a different show; see the sketch below)
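A minimal sketch of that symlinking idea, with a hypothetical host name and an assumed directory layout:

```sh
# Hypothetical example: make the host "alice" resolvable as a guest too
ln -s ../hosts/alice.json data/guests/alice.json
```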
There are Makefile targets that should be used to run it. The command below builds and starts the container, which saves all the data into the scraped-data dir:

```sh
make scrape
```
The files are organised in the same way as the files in the root of this project, which makes it trivial to copy the contents of scraped-data over to the root dir of the repo to include all the scraped content:

```sh
make scrape-copy
```
Or run the following to scrape and copy over to the root dir all at once:

```sh
make scrape-full
```
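Under the hood, the copy step presumably amounts to something like the following; the exact paths are assumptions:

```sh
# Rough equivalent of `make scrape-copy` (paths assumed)
cp -r scraped-data/. .
```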
Configure the scraper by modifying its `config.yml` file.
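The config's schema isn't reproduced here; as a hedged sketch it might look something like this, with every field name assumed:

```yaml
# Hypothetical config.yml for the fireside-scraper; all field names are assumed
shows:
  - name: coder-radio
    fireside_url: https://coderradio.com
  - name: self-hosted
    fireside_url: https://selfhosted.show
```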
- I took parts of the functionality from the Castanet theme (https://github.com/mattstratton/castanet), mainly the RSS feed generation and the managing of hosts/guests.
- ironicbadger and the JB Show Notes project, which was used as the base for the fireside-scraper.
Time spent so far: 13h+