docs: Remove references to redis and workers from docs
MohamedBassem committed Aug 31, 2024
1 parent 25b61cc commit 83bc5bd
Showing 6 changed files with 4 additions and 24 deletions.
docs/docs/02-Installation/02-unraid.md — 2 changes: 0 additions & 2 deletions

```diff
@@ -15,7 +15,5 @@ Hoarder can be installed on Unraid using the community application plugins.
 Here's a high level overview of the services you'll need:
 
 - **Hoarder** ([Support post](https://forums.unraid.net/topic/165108-support-collectathon-hoarder/)): Hoarder's main web app.
-- **hoarder-worker** ([Support post](https://forums.unraid.net/topic/165108-support-collectathon-hoarder/)): Hoarder's background workers (for running the AI tagging, fetching the content, etc).
-- **Redis**: Currently used for communication between the web app and the background workers.
 - **Browserless** ([Support post](https://forums.unraid.net/topic/130163-support-template-masterwishxbrowserless/)): The chrome headless service used for fetching the content. Hoarder's official docker compose doesn't use browserless, but it's currently the only headless chrome service available on unraid, so you'll have to use it.
 - **MeiliSearch** ([Support post](https://forums.unraid.net/topic/164847-support-collectathon-meilisearch/)): The search engine used by Hoarder. It's optional but highly recommended. If you don't have it set up, search will be disabled.
```
docs/docs/07-Development/01-setup.md — 9 changes: 1 addition & 8 deletions

```diff
@@ -9,17 +9,12 @@
 - The most important env variables to set are:
   - `DATA_DIR`: Where the database and assets will be stored. This is the only required env variable. You can use an absolute path so that all apps point to the same dir.
   - `NEXTAUTH_SECRET`: Random string used to sign the JWT tokens. Generate one with `openssl rand -base64 36`. Logging in will not work if this is missing!
-  - `REDIS_HOST` and `REDIS_PORT` default to `localhost` and `6379` change them if redis is running on a different address.
   - `MEILI_ADDR`: If not set, search will be disabled. You can set it to `http://127.0.0.1:7700` if you run meilisearch using the command below.
   - `OPENAI_API_KEY`: If you want to enable auto tag inference in the dev env.
 - run `pnpm run db:migrate` in the root of the repo to set up the database.
 
 ### Dependencies
 
-#### Redis
-
-Redis is used as the background job queue. The easiest way to get it running is with docker `docker run -p 6379:6379 redis:alpine`.
-
 #### Meilisearch
 
 Meilisearch is the provider for the full text search. You can get it running with `docker run -p 7700:7700 getmeili/meilisearch:v1.6`.
@@ -35,14 +30,12 @@ The worker app will automatically start headless chrome on startup for crawling
 - Run `pnpm web` in the root of the repo.
 - Go to `http://localhost:3000`.
 
-> NOTE: The web app kinda works without any dependencies. However, search won't work unless meilisearch is running. Also, new items added won't get crawled/indexed unless redis is running.
+> NOTE: The web app kinda works without any dependencies. However, search won't work unless meilisearch is running. Also, new items added won't get crawled/indexed unless workers are running.
 ### Workers
 
 - Run `pnpm workers` in the root of the repo.
 
-> NOTE: The workers package requires having redis working as it's the queue provider.
 ### iOS Mobile App
 
 - `cd apps/mobile`
```
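With redis gone, the dev setup reduces to the env variables and commands the doc above already lists. The following strings them into one illustrative shell session; the `docker`, `openssl`, and `pnpm` commands are taken verbatim from the doc, while the `DATA_DIR` path and the backgrounding of `pnpm web` are arbitrary choices for the sketch:

```shell
# Sketch of the post-change dev setup, run from the repo root.
# The .dev-data path is an arbitrary example, not a project convention.
export DATA_DIR="$PWD/.dev-data"                      # only required variable
export NEXTAUTH_SECRET="$(openssl rand -base64 36)"   # login fails without it
export MEILI_ADDR="http://127.0.0.1:7700"             # optional; enables search
mkdir -p "$DATA_DIR"

# Optional dependency: full text search (command from the doc above).
docker run -d -p 7700:7700 getmeili/meilisearch:v1.6

pnpm run db:migrate   # set up the sqlite database
pnpm web &            # web app on http://localhost:3000
pnpm workers          # background jobs; redis is no longer needed
```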
docs/docs/07-Development/04-architecture.md — 3 changes: 1 addition & 2 deletions

```diff
@@ -3,8 +3,7 @@
 ![Architecture Diagram](/img/architecture/arch.png)
 
 - Webapp: NextJS based using sqlite for data storage.
-- Redis: Used with BullMQ for scheduling background jobs for the workers.
-- Workers: Consume the jobs from redis and executes them, there are three job types:
+- Workers: Consume the jobs from the sqlite-based job queue and execute them; there are three job types:
   1. Crawling: Fetches the content of links using a headless chrome browser running in the workers container.
   2. OpenAI: Uses OpenAI APIs to infer the tags of the content.
   3. Indexing: Indexes the content in meilisearch for faster retrieval during search.
```
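The architecture change above replaces redis/BullMQ with a job queue persisted in sqlite. The commit doesn't show Hoarder's actual queue implementation, so the following is only a hypothetical sketch of the idea — the web app enqueues rows, and a worker claims the oldest pending one atomically. The table and column names are invented for illustration:

```shell
# Hypothetical sqlite-backed job queue; Hoarder's real schema may differ.
db=/tmp/queue-demo.db
rm -f "$db"

sqlite3 "$db" "CREATE TABLE jobs (
  id INTEGER PRIMARY KEY,
  type TEXT NOT NULL,               -- 'crawl' | 'openai' | 'index'
  status TEXT DEFAULT 'pending');"

# The web app enqueues work by inserting rows.
sqlite3 "$db" "INSERT INTO jobs (type) VALUES ('crawl'), ('openai'), ('index');"

# A worker claims the oldest pending job in one atomic statement
# (RETURNING requires sqlite >= 3.35).
sqlite3 "$db" "UPDATE jobs SET status = 'running'
  WHERE id = (SELECT id FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1)
  RETURNING id, type;"
# prints: 1|crawl
```

Because sqlite serializes writers, the claim step needs no separate lock service, which is what lets the redis dependency drop out entirely.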