feat(aleph#monitorization): Add note on aleph and prometheus
Aleph now exposes Prometheus metrics on port 9100.
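
A Prometheus scrape config sketch for it (the job name and the target host are assumptions, adjust them to your setup):

```yaml
scrape_configs:
  - job_name: aleph
    static_configs:
      - targets:
          - aleph:9100
```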

feat(bash_snippets#Do relative import of a bash library): Do relative import of a bash library

If you want to import a file `lib.sh` that lives in the same directory as the file that is importing it, you can use the next snippet:

```bash
# shellcheck source=lib.sh
source "$(dirname "$(realpath "$0")")/lib.sh"
```

If you use `source ./lib.sh` instead, you will get an import error whenever you run the script from any directory other than the one where `lib.sh` lives.
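
A quick way to see this in action (the paths under `/tmp` are just for the demo):

```bash
# Demo: lib.sh and main.sh live together in /tmp/libdemo,
# but main.sh is executed from a different working directory.
mkdir -p /tmp/libdemo

cat > /tmp/libdemo/lib.sh <<'EOF'
greet() { echo "hello from lib"; }
EOF

cat > /tmp/libdemo/main.sh <<'EOF'
#!/bin/bash
# Resolve lib.sh relative to this script, not to $PWD
source "$(dirname "$(realpath "$0")")/lib.sh"
greet
EOF

chmod +x /tmp/libdemo/main.sh
cd / && /tmp/libdemo/main.sh
```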

feat(bash_snippets#Check the battery status): Check the battery status

This [article gives many ways to check the status of a battery](https://www.howtogeek.com/810971/how-to-check-a-linux-laptops-battery-from-the-command-line/); for my purposes the next one is enough:

```bash
cat /sys/class/power_supply/BAT0/capacity
```
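
If you have more than one battery, or want to guard against the file not existing, a small loop over the same sysfs layout works too:

```bash
# Print the capacity of every battery the kernel exposes
for bat in /sys/class/power_supply/BAT*/capacity; do
  if [ -r "$bat" ]; then
    echo "$bat: $(cat "$bat")%"
  fi
done
```
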

feat(bash_snippets#Check if file is being sourced): Check if file is being sourced

Assuming that you are running bash, put the following code near the start of the script that you want to be sourced but not executed:

```bash
if [ "${BASH_SOURCE[0]}" -ef "$0" ]
then
    echo "Hey, you should source this script, not execute it!"
    exit 1
fi
```

Under bash, `${BASH_SOURCE[0]}` will contain the name of the current file that the shell is reading regardless of whether it is being sourced or executed.

By contrast, `$0` is the name of the current file being executed.

`-ef` tests if these two files are the same file. If they are, we alert the user and exit.

Neither `-ef` nor `BASH_SOURCE` is POSIX. While `-ef` is supported by ksh, yash, zsh and Dash, `BASH_SOURCE` requires bash. In zsh, however, `${BASH_SOURCE[0]}` could be replaced by `${(%):-%N}`.

feat(bash_snippets#Parsing bash arguments): Parsing bash arguments

Long story short: it's nasty; consider using a python script with [typer](typer.md) instead.

There are some possibilities to do this:

- [The old getopts](https://www.baeldung.com/linux/bash-parse-command-line-arguments)
- [argbash](https://github.com/matejak/argbash) library
- [Build your own parser](https://medium.com/@Drew_Stokes/bash-argument-parsing-54f3b81a6a8f)
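
If you still decide to stay in bash, a minimal sketch with the `getopts` builtin (the flag names are made up for the example) looks like this:

```bash
# Parse -v (verbose) and -o FILE (output); everything left is positional
parse_args() {
  local OPTIND opt verbose=0 output=""
  while getopts "vo:" opt; do
    case "$opt" in
      v) verbose=1 ;;
      o) output="$OPTARG" ;;
      *) echo "usage: parse_args [-v] [-o file] args..." >&2; return 1 ;;
    esac
  done
  shift $((OPTIND - 1))
  echo "verbose=$verbose output=$output rest=$*"
}

parse_args -v -o out.txt a b
```

Note that `getopts` only handles short options, which is one of the reasons the whole business is nasty.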

ci: also commit the not by ai badge in the CI

fix(alertmanager): Add another source on how to silence alerts

If the previous guidelines don't work for you, you can use the [sleep peacefully guidelines](https://samber.github.io/awesome-prometheus-alerts/sleep-peacefully) to tackle it at query level.

feat(documentation#references): Add diátaxis as documentation writing guideline

[Diátaxis: A systematic approach to technical documentation authoring](https://diataxis.fr/)

feat(ecc): Check if system is actually using ECC

Another way is to run `dmidecode`. For ECC support you'll see:
```bash
$ dmidecode -t memory | grep ECC
  Error Correction Type: Single-bit ECC
  # or
  Error Correction Type: Multi-bit ECC
```

No ECC:

```bash
$ dmidecode -t memory | grep ECC
  Error Correction Type: None
```

You can also test it with [`rasdaemon`](rasdaemon.md).

feat(faster#Prometheus metrics): Prometheus metrics

Use [`prometheus-fastapi-instrumentator`](https://github.com/trallnag/prometheus-fastapi-instrumentator)

feat(privileges#Videos): Add nice video on male privileges

[La intuición femenina, gracias al lenguaje](https://twitter.com/almuariza/status/1772889815131807765?t=HH1W17VGbQ7K-_XmoCy_SQ&s=19)

feat(ffmpeg#Reduce the video size): Reduce the video size

If you don't mind using `H.265`, replace the libx264 codec with libx265 and push the compression lever further by increasing the CRF value: add, say, 4 or 6, since a reasonable range for H.265 may be 24 to 30. Note that lower CRF values correspond to higher bitrates and hence produce higher quality videos.

```bash
ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4
```

If you want to stick to H.264, reduce the bitrate. You can check the current one with `ffprobe input.mkv`. Once you've chosen the new rate, change it with:

```bash
ffmpeg -i input.mp4 -b:v 3000k output.mp4
```

An additional option worth considering is setting the Constant Rate Factor, which lowers the average bitrate but retains better quality. Vary the CRF between around 18 and 24; the lower the value, the higher the bitrate.

```bash
ffmpeg -i input.mp4 -vcodec libx264 -crf 20 output.mp4
```

feat(icsx5): Introduce ICSx5

[ICSx5](https://f-droid.org/packages/at.bitfire.icsdroid/) is an Android app to sync calendars.

**References**

- [Source](https://github.com/bitfireAT/icsx5)
- [F-droid](https://f-droid.org/packages/at.bitfire.icsdroid/)

feat(haproxy#Automatically ban offending traffic): Automatically ban offending traffic

Check these two posts:

- https://serverfault.com/questions/853806/blocking-ips-in-haproxy
- https://www.loadbalancer.org/blog/simple-denial-of-service-dos-attack-mitigation-using-haproxy-2/

feat(haproxy#Configure haproxy logs to be sent to loki): Configure haproxy logs to be sent to loki

In the `frontend` config add the following line:

```
  # For more options look at https://www.chrisk.de/blog/2023/06/haproxy-syslog-promtail-loki-grafana-logfmt/
  log-format 'client_ip=%ci client_port=%cp frontend_name=%f backend_name=%b server_name=%s performance_metrics=%TR/%Tw/%Tc/%Tr/%Ta status_code=%ST bytes_read=%B termination_state=%tsc haproxy_metrics=%ac/%fc/%bc/%sc/%rc srv_queue=%sq  backend_queue=%bq user_agent=%{+Q}[capture.req.hdr(0)] http_hostname=%{+Q}[capture.req.hdr(1)] http_version=%HV http_method=%HM http_request_uri="%HU"'
```

At the bottom of [chrisk's post](https://www.chrisk.de/blog/2023/06/haproxy-syslog-promtail-loki-grafana-logfmt/) there is a table with all the available fields.

[Programming VIP also has an interesting post](https://programming.vip/docs/loki-configures-the-collection-of-haproxy-logs.html).

feat(haproxy#Reload haproxy): Reload haproxy

- Check the config is alright
  ```bash
  service haproxy configtest
  # Or
  /usr/sbin/haproxy -c -V -f /etc/haproxy/haproxy.cfg
  ```
- Reload the service
  ```bash
  service haproxy reload
  ```

If you want to do a better reload you can [drop the SYN packets before a restart](https://serverfault.com/questions/580595/haproxy-graceful-reload-with-zero-packet-loss), so that clients will resend the SYN until it reaches the new process.

```bash
iptables -I INPUT -p tcp -m multiport --dports 80,443 --syn -j DROP
sleep 1
service haproxy reload
iptables -D INPUT -p tcp -m multiport --dports 80,443 --syn -j DROP
```

feat(linux_snippets#Get info of a mkv file): Get info of a mkv file

```bash
ffprobe file.mkv
```

feat(loki#Alert when query returns no data): Alert when query returns no data

Sometimes the queries you want to alert on return `NaN` or no data at all. For example, you may want to monitor the happy path by setting an alert if a string is not found in some logs in a period of time.

```logql
count_over_time({filename="/var/log/mail.log"} |= `Mail is sent` [24h]) < 1
```

This won't trigger the alert because `count_over_time` doesn't return a `0` but a `NaN`. One way to solve it is to use [the `vector(0)` operator](grafana/loki#7023) with [the operation `or on() vector(0)`](https://stackoverflow.com/questions/76489956/how-to-return-a-zero-vector-in-loki-logql-metric-query-when-grouping-is-used-and):

```logql
(count_over_time({filename="/var/log/mail.log"} |= `Mail is sent` [24h]) or on() vector(0)) < 1
```

feat(loki#Monitor loki metrics): Monitor loki metrics

Since Loki reuses the Prometheus code for recording rules and WALs, it also gains all of Prometheus’ observability.

To scrape Loki metrics with Prometheus, add the next snippet to the Prometheus configuration:

```yaml
  - job_name: loki
    metrics_path: /metrics
    static_configs:
    - targets:
      - loki:3100
```

This assumes that `loki` is a Docker container in the same network as `prometheus`.

There are some rules in the [awesome prometheus alerts repo](https://samber.github.io/awesome-prometheus-alerts/rules#loki):

```yaml
---
groups:
- name: Awesome Prometheus loki alert rules
  # https://samber.github.io/awesome-prometheus-alerts/rules#loki
  rules:
  - alert: LokiProcessTooManyRestarts
    expr: changes(process_start_time_seconds{job=~".*loki.*"}[15m]) > 2
    for: 0m
    labels:
      severity: warning
    annotations:
      summary: Loki process too many restarts (instance {{ $labels.instance }})
      description: "A loki process had too many restarts (target {{ $labels.instance }})\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
  - alert: LokiRequestErrors
    expr: 100 * sum(rate(loki_request_duration_seconds_count{status_code=~"5.."}[1m])) by (namespace, job, route) / sum(rate(loki_request_duration_seconds_count[1m])) by (namespace, job, route) > 10
    for: 15m
    labels:
      severity: critical
    annotations:
      summary: Loki request errors (instance {{ $labels.instance }})
      description: "The {{ $labels.job }} and {{ $labels.route }} are experiencing errors\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
  - alert: LokiRequestPanic
    expr: sum(increase(loki_panic_total[10m])) by (namespace, job) > 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: Loki request panic (instance {{ $labels.instance }})
      description: "The {{ $labels.job }} is experiencing {{ printf \"%.2f\" $value }}% increase of panics\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
  - alert: LokiRequestLatency
    expr: (histogram_quantile(0.99, sum(rate(loki_request_duration_seconds_bucket{route!~"(?i).*tail.*"}[5m])) by (le)))  > 1
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: Loki request latency (instance {{ $labels.instance }})
      description: "The {{ $labels.job }} {{ $labels.route }} is experiencing {{ printf \"%.2f\" $value }}s 99th percentile latency\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
```

There are some guidelines on the rest of the metrics in [the grafana documentation](https://grafana.com/docs/loki/latest/operations/observability/).

**[Monitor the ruler](https://grafana.com/docs/loki/latest/operations/recording-rules/)**

Prometheus exposes a number of metrics for its WAL implementation, and these have all been prefixed with `loki_ruler_wal_`.

For example: `prometheus_remote_storage_bytes_total` → `loki_ruler_wal_prometheus_remote_storage_bytes_total`

Additional metrics are exposed, also with the prefix `loki_ruler_wal_`. All per-tenant metrics contain a tenant label, so be aware that cardinality could begin to be a concern if the number of tenants grows sufficiently large.

Some key metrics to note are:

- `loki_ruler_wal_appender_ready`: whether a WAL appender is ready to accept samples (1) or not (0)
- `loki_ruler_wal_prometheus_remote_storage_samples_total`: number of samples sent per tenant to remote storage
- `loki_ruler_wal_prometheus_remote_storage_samples_pending_total`: samples buffered in memory, waiting to be sent to remote storage
- `loki_ruler_wal_prometheus_remote_storage_samples_failed_total`: samples that failed when sent to remote storage
- `loki_ruler_wal_prometheus_remote_storage_samples_dropped_total`: samples dropped by relabel configurations
- `loki_ruler_wal_prometheus_remote_storage_samples_retried_total`: samples re-sent to remote storage
- `loki_ruler_wal_prometheus_remote_storage_highest_timestamp_in_seconds`: highest timestamp of sample appended to WAL
- `loki_ruler_wal_prometheus_remote_storage_queue_highest_sent_timestamp_seconds`: highest timestamp of sample sent to remote storage.
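
As an illustration, an alert on failed remote writes from the ruler WAL could look like this (the threshold and severity are assumptions):

```yaml
- alert: LokiRulerWALRemoteWriteFailing
  expr: rate(loki_ruler_wal_prometheus_remote_storage_samples_failed_total[5m]) > 0
  for: 10m
  labels:
    severity: warning
  annotations:
    summary: Loki ruler WAL is failing to remote write samples (instance {{ $labels.instance }})
```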

feat(loki#Get a useful Source link in the alertmanager): Get a useful Source link in the alertmanager

[This still doesn't work](grafana/loki#4722). Currently, if you set the ruler's `external_url` to the URL of your Grafana installation (e.g. `external_url: "https://grafana.example.com"`), it creates a Source link in Alertmanager similar to https://grafana.example.com/graph?g0.expr=%28sum+by%28thing%29%28count_over_time%28%7Bnamespace%3D%22foo%22%7D+%7C+json+%7C+bar%3D%22maxRetries%22%5B5m%5D%29%29+%3E+0%29&g0.tab=1, which isn't valid.

This URL templating (via `/graph?g0.expr=%s&g0.tab=1`) appears to come from Prometheus. There is no workaround yet.

feat(orgmode#How to deal with recurring tasks that are not yet ready to be acted upon): How to deal with recurring tasks that are not yet ready to be acted upon

By default, when you mark a recurrent task as `DONE` it will transition the date (either appointment, `SCHEDULED` or `DEADLINE`) to the next date and change the state back to `TODO`. I found this confusing, because for me `TODO` actions are the ones that can be acted upon right now. That's why I'm using the next states instead:

- `INACTIVE`: Recurrent task whose date is not yet close, so you should not take care of it.
- `READY`: Recurrent task whose date [is overdue](#how-to-deal-with-overdue-SCHEDULED-and-DEADLINE-tasks); we acknowledge the fact and mark the date as inactive (so that it doesn't clobber the agenda).

The idea is that once an `INACTIVE` task reaches your agenda, either because the warning days of the `DEADLINE` make it show up or because its `SCHEDULED` date has arrived, you need to decide whether to change it to `TODO`, if it's to be acted upon immediately, or to `READY`, deactivating the date.

`INACTIVE` should then be the default state transition for recurring tasks once you mark them as `DONE`. To do this, set in your config:

```lua
org_todo_repeat_to_state = "INACTIVE",
```

If a project gathers a list of recurrent subprojects or subactions, it can have the next states:

- `READY`: If there is at least one subelement in state `READY` and the rest are `INACTIVE`.
- `TODO`: If there is at least one subelement in state `TODO`; the rest may be `READY` or `INACTIVE`.
- `INACTIVE`: The project is not planned to be acted upon soon.
- `WAITING`: The project is planned to be acted upon, but all its subelements are in `INACTIVE` state.
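
These rules can be sketched as a tiny state resolver (illustrative only; the `planned` flag encodes whether the project is meant to be acted upon soon, which can't be derived from the substates alone):

```python
def project_state(substates: list[str], planned: bool = True) -> str:
    """Derive a project's state from the states of its subelements."""
    if "TODO" in substates:
        # At least one subelement is actionable right now
        return "TODO"
    if "READY" in substates:
        # At least one subelement is ready, the rest are inactive
        return "READY"
    # Everything is INACTIVE: WAITING if the project is planned, else INACTIVE
    return "WAITING" if planned else "INACTIVE"


print(project_state(["READY", "INACTIVE"]))  # READY
```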

feat(promtail#Set the hostname label on all logs): Set the hostname label on all logs

There are many ways to do it:

- [Setting the label in the promtail launch command](https://community.grafana.com/t/how-to-add-variable-hostname-label-to-static-config-in-promtail/68352/11)
  ```bash
  sudo ./promtail-linux-amd64 --client.url=http://xxxx:3100/loki/api/v1/push --client.external-labels=hostname=$(hostname) --config.file=./config.yaml
  ```

  This won't work if you're using promtail within a docker-compose, because you can't use bash expansion in the `docker-compose.yaml` file.
- [Allowing env expansion and setting it in the promtail conf](grafana/loki#634). You can launch the promtail command with `-config.expand-env` and then set in each scrape job:
  ```yaml
  labels:
      host: ${HOSTNAME}
  ```
  This won't work either if you're using `promtail` within a Docker container, as it will give you the ID of the container.
- Set it in the `promtail_config_clients` field as `external_labels` of each promtail config:
  ```yaml
  promtail_config_clients:
    - url: "http://{{ loki_url }}:3100/loki/api/v1/push"
      external_labels:
        hostname: "{{ ansible_hostname }}"
  ```
- Hardcode it in each promtail scraping config as a static label. If you're using Ansible or any deployment method that supports Jinja expansion, set it that way:
  ```yaml
  labels:
      host: {{ ansible_hostname }}
  ```

fix(roadmap_adjustment): Change the concept of `Task` for `Action`

To remove the capitalist productivity mindset from the concept.

fix(roadmap_adjustment#Action cleaning): Action cleaning

Marking steps as done can help you get an idea of the evolution of the action. It can also be useful if you want to do some kind of reporting. On the other hand, having a long list of done steps (especially if you have many levels of step indentation) may make finding the next actionable step difficult. It's a good idea, then, to clean up the done items often.

- For non-recurring actions, move the done steps into the `LOGBOOK`. For example:
  ```orgmode
  ** DOING Do X
     :LOGBOOK:
     - [x] Done step 1
     - [-] Doing step 2
       - [x] Done substep 1
     :END:
     - [-] Doing step 2
       - [ ] substep 2
  ```

  This way the `LOGBOOK` will be automatically folded, so you won't see the progress, but it's at hand in case you need it.

- For recurring actions:
  - Mark the steps as done
  - Archive the todo element.
  - Undo the archive.
  - Clean up the done items.

This way you have a snapshot of the state of the action in your archive.

feat(roadmap_adjustment#Project cleaning): Project cleaning

Similar to [action cleaning](#action-cleaning), we want to keep the state clean. If there are not that many actions under the project, we can leave the done elements as `DONE`; once they start to clutter the view, we can create a `Closed` section.

For recurring projects:

- Mark the actions as done.
- Archive the project element.
- Undo the archive.
- Clean up the done items.

feat(vim_autosave): Manually toggle the autosave function

Besides running auto-save at startup (if you have `enabled = true` in your config), you can also:

- `ASToggle`: toggle auto-save
lyz-code committed Apr 4, 2024
1 parent 91659f3 commit 1377503
Showing 23 changed files with 509 additions and 68 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/gh-pages.yml
@@ -40,9 +40,9 @@ jobs:
git config --local user.email "[email protected]"
git config --local user.name "GitHub Action"
git add pdm.lock
git add docs/newsletter
git add .
git diff-index --quiet HEAD \
|| git commit -m "chore: update dependency and publish newsletters"
|| git commit -m "chore: update dependency, publish newsletters and add the not by ai badge"
- name: Make the site
run: make build-docs
2 changes: 1 addition & 1 deletion .scripts/footer.sh
@@ -4,6 +4,6 @@ echo "Checking the Not by AI badge"
find docs -iname '*md' -print0 | while read -r -d $'\0' file; do
if ! grep -q not-by-ai.svg "$file"; then
echo "Adding the Not by AI badge to $file"
echo "[![](not-by-ai.svg){: .center}](https://notbyai.fyi)" >>"$file"
echo "\n\n[![](not-by-ai.svg){: .center}](https://notbyai.fyi)" >>"$file"
fi
done
3 changes: 3 additions & 0 deletions docs/aleph.md
@@ -241,7 +241,10 @@ Sometimes you have two traces at the same time, so each time you run a PDB
command it jumps from pdb trace. Quite confusing. Try to `c` the one you don't
want so that you're left with the one you want. Or put the `pdb` trace in a
conditional that only matches one of both threads.
# Monitorization
## [Prometheus metrics](https://github.com/alephdata/aleph/pull/3216)

Aleph now exposes prometheus metrics on the port 9100
# References

- [Source](https://github.com/alephdata/aleph)
45 changes: 45 additions & 0 deletions docs/bash_snippets.md
@@ -4,6 +4,51 @@ date: 20220827
author: Lyz
---

# [Do relative import of a bash library](https://code-maven.com/bash-shell-relative-path)
If you want to import a file `lib.sh` that lives in the same directory as the file that is importing it you can use the next snippet:

```bash
# shellcheck source=lib.sh
source "$(dirname "$(realpath "$0")")/lib.sh"
```

If you use `source ./lib.sh` you will get an import error if you run the script on any other place that is not the directory where `lib.sh` lives.
# [Check the battery status](https://www.howtogeek.com/810971/how-to-check-a-linux-laptops-battery-from-the-command-line/)
This [article gives many ways to check the status of a battery](https://www.howtogeek.com/810971/how-to-check-a-linux-laptops-battery-from-the-command-line/), for my purposes the next one is enough

```bash
cat /sys/class/power_supply/BAT0/capacity
```
# [Check if file is being sourced](https://unix.stackexchange.com/questions/424492/how-to-define-a-shell-script-to-be-sourced-not-run)


Assuming that you are running bash, put the following code near the start of the script that you want to be sourced but not executed:

```bash
if [ "${BASH_SOURCE[0]}" -ef "$0" ]
then
echo "Hey, you should source this script, not execute it!"
exit 1
fi
```

Under bash, `${BASH_SOURCE[0]}` will contain the name of the current file that the shell is reading regardless of whether it is being sourced or executed.

By contrast, `$0` is the name of the current file being executed.

`-ef` tests if these two files are the same file. If they are, we alert the user and exit.

Neither `-ef` nor `BASH_SOURCE` are POSIX. While `-ef` is supported by ksh, yash, zsh and Dash, BASH_SOURCE requires bash. In zsh, however, `${BASH_SOURCE[0]}` could be replaced by `${(%):-%N}`.
# Parsing bash arguments

Long story short, it's nasty, think of using a python script with [typer](typer.md) instead.

There are some possibilities to do this:

- [The old getops](https://www.baeldung.com/linux/bash-parse-command-line-arguments)
- [argbash](https://github.com/matejak/argbash) library
- [Build your own parser](https://medium.com/@Drew_Stokes/bash-argument-parsing-54f3b81a6a8f)

# [Compare two semantic versions](https://www.baeldung.com/linux/compare-dot-separated-version-string)

[This article](https://www.baeldung.com/linux/compare-dot-separated-version-string) gives a lot of ways to do it. For my case the simplest is to use `dpkg` to compare two strings in dot-separated version format in bash.
2 changes: 1 addition & 1 deletion docs/detox.md
@@ -1,5 +1,5 @@

detox cleans up filenames from the command line.
detox clean up filenames from the command line.
# Installation
```bash
apt-get install detox
4 changes: 2 additions & 2 deletions docs/devops/prometheus/alertmanager.md
@@ -306,6 +306,8 @@ time_intervals:
end_time: 07:00
```

If that doesn't work for you, you can use the [sleep peacefully guidelines](https://samber.github.io/awesome-prometheus-alerts/sleep-peacefully) to tackle it at query level.

## Alert rules

Alert rules are a special kind of Prometheus Rules that trigger alerts based on
@@ -343,8 +345,6 @@ alerting:
static_configs:
- targets: [ 'alertmanager:9093' ]
```


# Silences

To silence an alert with a regular expression use the matcher
1 change: 1 addition & 0 deletions docs/documentation.md
@@ -152,6 +152,7 @@ about those in order to make progress.

# References

* [Diátaxis: A systematic approach to technical documentation authoring](https://diataxis.fr/)
* [divio's documentation wiki](https://documentation.divio.com/introduction/)
* [Vue's guidelines](https://v3.vuejs.org/guide/contributing/writing-guide.html#principles)
* [FastAPI awesome docs](https://fastapi.tiangolo.com/tutorial/)
17 changes: 17 additions & 0 deletions docs/ecc.md
@@ -61,3 +61,20 @@ Other people ([1](https://www.memtest86.com/ecc.htm), [2](https://www.reddit.com
[They also suggest](https://www.memtest86.com/ecc.htm) to disable "Quick Boot". In order to initialize ECC, memory has to be written before it can be used. Usually this is done by BIOS, but with some motherboards this step is skipped if "Quick Boot" is enabled.

The people behind [memtest](memtest.md) have a [paid tool to test ECC](https://www.passmark.com/products/ecc-tester/index.php)

Another way is to run `dmidecode`. For ECC support you'll see:
```bash
$: dmidecode -t memory | grep ECC
Error Correction Type: Single-bit ECC
# or
Error Correction Type: Multi-bit ECC
```

No ECC:

```bash
$: dmidecode -t memory | grep ECC
Error Correction Type: None
```

You can also test it with [`rasdaemon`](rasdaemon.md)
4 changes: 4 additions & 0 deletions docs/fastapi.md
@@ -874,6 +874,10 @@ FastAPI can
or similar [application loggers](python_logging.md) through the
[ASGI middleware](https://fastapi.tiangolo.com/advanced/middleware/#other-middlewares).
# [Prometheus metrics](https://github.com/trallnag/prometheus-fastapi-instrumentator)
Use [`prometheus-fastapi-instrumentator`](https://github.com/trallnag/prometheus-fastapi-instrumentator)
# [Run a FastAPI server in the background for testing purposes](https://stackoverflow.com/questions/57412825/how-to-start-a-uvicorn-fastapi-in-background-when-testing-with-pytest)
Sometimes you want to launch a web server with a simple API to test a program
4 changes: 4 additions & 0 deletions docs/feminism/privileges.md
@@ -92,4 +92,8 @@ to the [Michael Kimmel book](https://www.goodreads.com/book/show/7400069-privile
## Essays

* [White Privilege: Unpacking the Invisible Knapsack by Peggy McIntosh](https://www.racialequitytools.org/resourcefiles/mcintosh.pdf)

## Videos

- [La intuición femenina, gracias al lenguaje](https://twitter.com/almuariza/status/1772889815131807765?t=HH1W17VGbQ7K-_XmoCy_SQ&s=19)
[![](not-by-ai.svg){: .center}](https://notbyai.fyi)
19 changes: 18 additions & 1 deletion docs/ffmpeg.md
@@ -80,6 +80,24 @@ ffmpeg -i video.mp4 -an mute-video.mp4

# Convert

## [Reduce the video size](https://unix.stackexchange.com/questions/28803/how-can-i-reduce-a-videos-size-with-ffmpeg)
If you don't mind using `H.265` replace the libx264 codec with libx265, and push the compression lever further by increasing the CRF value — add, say, 4 or 6, since a reasonable range for H.265 may be 24 to 30. Note that lower CRF values correspond to higher bitrates, and hence produce higher quality videos.

```bash
ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4
```

If you want to stick to H.264 reduce the bitrate. You can check the current one with `ffprobe input.mkv`. Once you've chosen the new rate change it with:

```bash
ffmpeg -i input.mp4 -b 3000k output.mp4
```

Additional options that might be worth considering is setting the Constant Rate Factor, which lowers the average bit rate, but retains better quality. Vary the CRF between around 18 and 24 — the lower, the higher the bitrate.

```bash
ffmpeg -i input.mp4 -vcodec libx264 -crf 20 output.mp4
```
## Convert video from one format to another

You can use the `-vcodec` parameter to specify the encoding format to be used for
@@ -148,7 +166,6 @@ done
output.mkv
```


## [Convert a video into animated GIF](https://superuser.com/questions/556029/how-do-i-convert-a-video-to-gif-using-ffmpeg-with-reasonable-quality)

```bash
4 changes: 2 additions & 2 deletions docs/gitea.md
@@ -85,8 +85,8 @@ Then create the secrets file with the command `sops secrets.enc.json` somewhere
terraform {
required_providers {
gitea = {
source = "Lerentis/gitea"
version = "~> 0.12.1"
source = "go-gitea/gitea"
version = "~> 0.3.0"
}
sops = {
source = "carlpett/sops"
5 changes: 5 additions & 0 deletions docs/icsx5.md
@@ -0,0 +1,5 @@
[ICSx5](https://f-droid.org/packages/at.bitfire.icsdroid/) is an android app to sync calendars.

# References
- [Source](https://github.com/bitfireAT/icsx5)
- [F-droid](https://f-droid.org/packages/at.bitfire.icsdroid/)
5 changes: 2 additions & 3 deletions docs/life_planning.md
@@ -59,8 +59,7 @@ Create the month objectives in your roadmap file after addressing each element o

- Your last month review document.
- The trimester objectives of your roadmap.

Once they are ready, copy them to the description of your `todo.org` file. That way you'll see it every day.
- You can add notes on the trimester objectives

## Decide the next steps

@@ -72,7 +71,7 @@ Once they are ready, copy them to the description of your `todo.org` file. That
- Taking into account the month objectives select the are you want to work on in each week day.
- Document the week distribution in your roadmap document and make it visible in your weekly planning process.

- Refine the roadmap of each of the selected areas
- Refine the roadmap of each of the selected areas (maybe for the trimestral? too soon to do it in the monthly planning IMHO)
- Define the todo of each device (mobile, tablet, laptop)
- Tweak your *things to think about list*.
- Tweak your *investigations list*.
149 changes: 148 additions & 1 deletion docs/linux/haproxy.md
@@ -10,7 +10,28 @@ HTTP-based applications that spreads requests across multiple servers. It is
written in C and has a reputation for being fast and efficient (in terms of
processor and memory usage).

# Use HAProxy as a reverse proxy

# Installation

## Automatically ban offending traffic
Check these two posts:

- https://serverfault.com/questions/853806/blocking-ips-in-haproxy
- https://www.loadbalancer.org/blog/simple-denial-of-service-dos-attack-mitigation-using-haproxy-2/
## [Configure haproxy logs to be sent to loki](https://www.chrisk.de/blog/2023/06/haproxy-syslog-promtail-loki-grafana-logfmt/)

In the `fronted` config add the next line:

```
# For more options look at https://www.chrisk.de/blog/2023/06/haproxy-syslog-promtail-loki-grafana-logfmt/
log-format 'client_ip=%ci client_port=%cp frontend_name=%f backend_name=%b server_name=%s performance_metrics=%TR/%Tw/%Tc/%Tr/%Ta status_code=%ST bytes_read=%B termination_state=%tsc haproxy_metrics=%ac/%fc/%bc/%sc/%rc srv_queue=%sq backend_queue=%bq user_agent=%{+Q}[capture.req.hdr(0)] http_hostname=%{+Q}[capture.req.hdr(1)] http_version=%HV http_method=%HM http_request_uri="%HU"'
```

At the bottom of [chrisk post](https://www.chrisk.de/blog/2023/06/haproxy-syslog-promtail-loki-grafana-logfmt/) is a table with all the available fields.

[Programming VIP also has an interesting post](https://programming.vip/docs/loki-configures-the-collection-of-haproxy-logs.html).

## Use HAProxy as a reverse proxy

[reverse proxy](https://en.wikipedia.org/wiki/Reverse_proxy) is a type of proxy
server that retrieves resources on behalf of a client from one or more servers.
@@ -58,8 +79,134 @@ Other useful examples can be retrieved from [drmalex07
](https://gist.github.com/drmalex07/10d09c299245e3ab333c) or
[ferdinandosimonetti](https://gist.github.com/ferdinandosimonetti/23d0d9e468314a85d803bf5e2576be4d)
gists.
# Usage

## Reload haproxy
- Check the config is alright
```bash

service haproxy configtest
# Or
/usr/sbin/haproxy -c -V -f /etc/haproxy/haproxy.cfg
```
- Reload the service
```bash
service haproxy reload
```

If you want to do a better reload you can [drop the SYN before a restart](https://serverfault.com/questions/580595/haproxy-graceful-reload-with-zero-packet-loss), so that clients will
resend this SYN until it reaches the new process.

```bash
iptables -I INPUT -p tcp --dport 80,443 --syn -j DROP
sleep 1
service haproxy reload
iptables -D INPUT -p tcp --dport 80,443 --syn -j DROP
service haproxy reload
```

# [Comparison between haproxy and varnish](http://blog.haproxy.com/2012/07/04/haproxy-and-varnish-comparison/)

In the open source world, there are some very smart products which are very
often used to build a high performance, reliable and scalable
architecture.\
**HAProxy** and **Varnish** are both in this category.

Since we can’t really compare a reverse-proxy cache and a reverse-proxy
load-balancer, I’m just going to focus on what both pieces of software have
in common, as well as the advantages of each of them.\
The list is not exhaustive and only covers the most used /
interesting features. So feel free to add a comment if you want me to
complete the list.

## Common points between HAProxy and Varnish

Before comparing the differences, we can summarize the points in common:
* Reverse-proxy mode
* Advanced HTTP features
* No SSL offloading
* Client-side HTTP 1.1 with keepalive
* Tunnel mode available
* High performance
* Basic load-balancing
* Server health checking
* IPv6 ready
* Management socket (CLI)
* Professional services and training available

## Features available in HAProxy and not in Varnish

The features below are available in **HAProxy**, but aren’t in
**Varnish**:
* Advanced load-balancer
* Multiple persistence methods
* DOS and DDOS mitigation
* Advanced and custom logging
* Web interface
* Server / application protection through queue management, slow
  start, etc…
* SNI content switching
* Named ACLs
* Full HTTP 1.1 support on the server side, but no keep-alive
* Can work at TCP level with any L7 protocol
* Proxy protocol for both client and server
* Powerful log analyzer tool (halog)
* <private joke> 2002 website design </private joke>

## Features available in Varnish and not in HAProxy

The features below are available in **Varnish**, but aren’t in
**HAProxy**:
* Caching
* Grace mode (stale content delivery)
* Saint mode (manages origin server errors)
* Modular software (with a lot of modules available)
* Intuitive VCL configuration language
* HTTP 1.1 on server side
* TCP connection re-use
* Edge side includes (ESI)
* A few command line tools for stats (varnishstat, varnishhist, etc…)
* Powerful live traffic analyzer (varnishlog)
* <private joke> 2012 website design </private joke>

## Conclusion

Even if **HAProxy** can do TCP proxying, it is often used in front of
web applications, exactly where we find **Varnish** :).\
They complement each other very well: **Varnish** makes the website
faster by offloading static object delivery to itself, while **HAProxy**
ensures smooth load-balancing with smart persistence and DDOS
mitigation.

Basically, **HAProxy** and **Varnish** complement each other very well;
despite being “competitors” on a few features, each of them has its own
domain of expertise where it performs very well: **HAProxy is a
reverse-proxy load-balancer** and **Varnish is a reverse-proxy cache**.

To be honest, when, at HAProxy Technologies, we work on infrastructures
where [Aloha Load
balancer](http://www.haproxy.com/en/aloha-load-balancer-appliance-rack-1u "Aloha load-balancer")
or **HAProxy** is deployed, we often see **Varnish** deployed as well. And if it
is not the case, we often recommend that the customer deploy one if we
feel it would improve their website performance.\
Recently, I had a discussion with
[Ruben](https://twitter.com/#!/ruben_varnish "ruben_varnish") and
[Kristian](https://twitter.com/#!/kristianlyng "kristian") when they
came to Paris and they told me that they also often see an **HAProxy**
when they work on infrastructure where **Varnish** is deployed.

So the real question is: **Since Varnish and HAProxy are a bit similar
but complement each other so well, how can we use them together?**\
The response could be very long, so stay tuned, I’ll try to answer this
question in an article coming soon.

# Update letsencrypt certificates with zero downtime

Use the [haproxy-acme-validation-plugin](https://github.com/janeczku/haproxy-acme-validation-plugin) to renew the certificates without interrupting traffic.
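The usual pattern (a hedged sketch; the plugin's README is the authoritative source, and the local port is an assumption) is to route ACME HTTP-01 challenge requests to a local validation server while all other traffic keeps flowing:

```
frontend http-in
    bind *:80
    acl letsencrypt_acl path_beg /.well-known/acme-challenge/
    use_backend letsencrypt if letsencrypt_acl
    default_backend app

backend letsencrypt
    # Standalone ACME client (e.g. certbot --standalone) listening locally
    server certbot 127.0.0.1:8888
```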

# References

* [Homepage](http://www.haproxy.com/ "HAProxy Technologies")
* [Aloha load balancer: HAProxy based LB appliance](http://www.haproxy.com/en/aloha-load-balancer-appliance-rack-1u "Aloha load balancer")
* [HAPEE: HAProxy Enterprise Edition](http://www.haproxy.com/en/haproxy-enterprise-edition-hapee "HAPEE: HAProxy Enterprise Edition")
* [Guidelines for HAProxy termination in AWS](https://github.com/jvehent/haproxy-aws)
[![](not-by-ai.svg){: .center}](https://notbyai.fyi)