
How does argocd support patching of existing resources #2437

Open
raffaelespazzoli opened this issue Oct 8, 2019 · 41 comments
Labels: enhancement (New feature or request), type:enhancement

@raffaelespazzoli

This is really more of a question than a feature request.
The use case for patching existing resources comes up pretty often when dealing with Kubernetes distributions that ship with some default settings.
It would be nice to have a way to declaratively patch those default settings into a desired state. Examples are typically Kubernetes distribution-dependent, but one use case across the board is node labeling.

@raffaelespazzoli raffaelespazzoli added the enhancement New feature or request label Oct 8, 2019
@simster7
Member

simster7 commented Oct 8, 2019

What about:

  1. argocd app patch:

     $ argocd app patch
     Examples:
       # Update an application's source path using json patch
       argocd app patch myapplication --patch='[{"op": "replace", "path": "/spec/source/path", "value": "newPath"}]' --type json

       # Update an application's repository target revision using merge patch
       argocd app patch myapplication --patch '{"spec": { "source": { "targetRevision": "master" } }}' --type merge

     Usage:
       argocd app patch APPNAME [flags]

  2. Deploying local manifest files by doing argocd app sync APPNAME --local [PATH_TO_LOCAL_DIRECTORY]. See the docs here: https://github.com/argoproj/argo-cd/blob/master/docs/user-guide/application_sources.md#development

Would either of these help?

@raffaelespazzoli
Author

@simster7 I am not trying to patch an application. I am asking if it is possible to patch an existing object.
For example, let's say I need to create an Application which will patch node abc with the label cde=123.
How do I do that?

@simster7
Member

simster7 commented Oct 8, 2019

Sorry, I meant to point you to argocd app patch-resource:

$ argocd app patch-resource
Usage:
  argocd app patch-resource APPNAME [flags]

Flags:
      --all                    Indicates whether to patch multiple matching of resources
      --group string           Group
  -h, --help                   help for patch-resource
      --kind string            Kind
      --namespace string       Namespace
      --patch string           Patch
      --patch-type string      Which Patching strategy to use: 'application/json-patch+json', 'application/merge-patch+json', or 'application/strategic-merge-patch+json'. Defaults to 'application/merge-patch+json' (default "application/merge-patch+json")
      --resource-name string   Name of resource

An example of this in action: https://github.com/argoproj/argocd-example-apps/tree/master/blue-green

You can also consider:

  1. Updating your deployment manifests from your GitOps repo and performing a new sync. This is the preferred way to do it when using Argo.
  2. Parametrizing the fields you want to change and editing them with argocd app set (the example above also uses this).
  3. Overriding your manifests with argocd app sync APPNAME --local as mentioned above. Keep in mind that this is an anti-pattern and should only be done for development purposes. Furthermore, this will only affect the current deployment and changes will be lost after a sync.
  4. As a last resort maybe consider patching the objects directly using kubectl patch.

@raffaelespazzoli
Author

@simster7 I am probably not explaining myself. A node object already exists and is not controlled by an argocd Application.
I'd like to be able to create an argocd Application that has the effect of patching a node.
So, basically we don't know what this node will look like and we want to change part of it, for example add a label.

@simster7
Member

simster7 commented Oct 8, 2019

Got it, I had misunderstood what you had meant.

The team should correct me if I'm wrong, but I am fairly certain that you won't be able to use Argo to patch resources that are not controlled by it—i.e. resources that are not declared on the deployment repo and deployed by Argo.

@raffaelespazzoli
Author

Then, if this is not possible, I'd like to formally request that this feature be added. Basically, argocd should support a different templating model where the templates are actually patch fragments that are applied and enforced on existing resources.

@alexmt
Collaborator

alexmt commented Oct 11, 2019

This looks like a config management issue. Argo CD intentionally avoids making any changes to user-provided manifests and only injects one label to support resource pruning.

It is really difficult to implement config management the right way, so instead Argo CD integrates with existing config management tools like Helm, Kustomize, etc.

@alexmt
Collaborator

alexmt commented Oct 11, 2019

So I would suggest using kustomize to implement resource patching. @raffaelespazzoli please feel free to reopen the ticket if necessary.
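For illustration, a minimal sketch of the kustomize-based approach being suggested (assuming you already have a deployment.yaml for the resource in your repo; the names below are hypothetical):

```yaml
# kustomization.yaml (sketch): patch a label onto a resource that is
# part of the kustomize build.
resources:
  - deployment.yaml        # hypothetical manifest you already own
patches:
  - target:
      kind: Deployment
      name: nginx          # hypothetical name
    patch: |-
      - op: add
        path: /metadata/labels/foo
        value: bar
```

Note that kustomize only patches resources included in its own build, which is exactly the limitation raised in the next comment.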

@alexmt alexmt closed this as completed Oct 11, 2019
@raffaelespazzoli
Author

@alexmt I don't understand the answer you gave me. Kustomize is certainly not the answer to my problem, because Kustomize can only patch resources you own (and own the definition of).
I think having the ability to patch a resource you don't own is a good feature for a gitops operator. And I'd like to reopen this issue as a feature request. How do I do it?

@alexmt alexmt reopened this Oct 15, 2019
@alexmt
Collaborator

alexmt commented Oct 15, 2019

Sorry @raffaelespazzoli, I did not realize you cannot reopen the ticket. Looks like I misunderstood your question too. Do you mean patching existing resources in a cluster that were not created by Argo CD originally? This should be supported as long as such resources can be modified using "kubectl apply".

You would have to create an application which includes a resource manifest with the apiVersion/kind/name of an existing resource, and add the fields which you want to manage. Argo CD should detect that the object exists and run kubectl apply against it.
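For illustration, a minimal sketch of such a partial manifest for the node-labeling example above (assuming the node is named abc) might look like this:

```yaml
# Sketch: only the fields we want Argo CD to manage on the pre-existing node,
# not a full Node definition.
apiVersion: v1
kind: Node
metadata:
  name: abc
  labels:
    cde: "123"
```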

@raffaelespazzoli
Author

@alexmt So yes, the request is to be able to patch resources that are pre-existing and not originally created by argocd. If I understand your suggestion, you are proposing to have an application that includes those resources in full (original fields plus the added fields that we want to patch).

There are two problems with this approach:

  1. one might not know the full resource definition at the time of writing the application resource (think about the use case of adding a label to a node resource)
  2. this would not work well with upgrades, because if we give argocd the full definition of a resource and the actual owner needs to change that resource (for an upgrade, for example), argocd would not allow it and would reset the resource to the desired state declared in the application.

So, I believe, we need the concept of patching, or, if you will, the idea that argocd does not fully own a resource but instead owns only some of the fields of that resource.

@alexec
Contributor

alexec commented Nov 14, 2019

@raffaelespazzoli is your question answered now? Can I close this?

@alexec alexec closed this as completed Nov 14, 2019
@raffaelespazzoli
Author

Yes, the answer was that there is no support for enforcing a patch on a pre-existing resource. Then I asked to add this as a feature. If you think there is value in a feature like that, then this issue should be left open.

@BostjanBozic

This would definitely come in handy. Like was mentioned, setting up node labels via ArgoCD would be great. Currently the only option I have found so far is basically to set up e.g. an ansible playbook which performs kubectl patch.

@raffaelespazzoli
Author

@BostjanBozic I created an operator to enforce a patch [1]; you can have argocd create the CR that informs that operator how to create the patch.
[1]: https://github.com/raffaelespazzoli/resource-locker-operator

@BostjanBozic

@raffaelespazzoli nice one! I will take a look. If I understand correctly, you basically feed ArgoCD a ResourceLocker CR, Argo syncs it, and then ResourceLocker actually patches the resource?

@raffaelespazzoli
Author

raffaelespazzoli commented Jul 9, 2020 via email

@yuha0

yuha0 commented Sep 4, 2020

I think it would be nice to support patch. There are valid use cases. For example, some resources just cannot be created by users under any circumstance, like the kube-system and default namespaces, or the kubernetes.default.svc.cluster.local service...

If I want to add a label to such an object, shouldn't the GitOps concept cover this use case?

@peterbosalliandercom

peterbosalliandercom commented Nov 12, 2020

@alexmt Within OpenShift, for example, the default SCCs (security context constraints) are owned by OpenShift, but there is a need to patch those SCCs with additional users and serviceaccounts (which is documented as a manual process done with oc adm or with oc patch, e.g. oc patch scc privileged --type=json -p '[{"op": "add", "path": "/users/0", "value":"system:serviceaccount:default:router"}]'). We need this to be possible within argocd because we want to add users and SAs dynamically based on the teams that have namespaces rolled out by argo.

@arikmaor

I would also love this feature.
My use case is configuring workload identity for the Stackdriver Adapter that comes pre-installed in GKE clusters.

Currently, I have to manually run this alongside argocd:

kubectl annotate serviceaccount --namespace custom-metrics \
  custom-metrics-stackdriver-adapter \
  iam.gke.io/gcp-service-account=<google-service-account>@<project-id>.iam.gserviceaccount.com

@raffaelespazzoli
Author

@arikmaor take a look at: https://github.com/redhat-cop/resource-locker-operator

@kxr

kxr commented May 25, 2021

+1 to this feature request. resource-locker-operator seems like a viable option, but it feels counterintuitive that we have to run a separate operator/controller to do this while argocd is present.

@rgordill

+1. It would be very helpful if we could deploy a cluster, then argocd, and then have everything else set up a complete operational cluster, including not only the applications but also patches to what was set up "as-is" in the original cluster deployment but has to be customized.

Examples: authentication methods, retention params in monitoring, api-server customizations, etc.

@raffaelespazzoli
Author

I created an operator to specifically solve this issue: https://github.com/redhat-cop/patch-operator

@r0bj

r0bj commented May 13, 2022

Another use case. After creating a new GKE cluster, I want to set a particular StorageClass as the default by adding the annotation storageclass.beta.kubernetes.io/is-default-class=true. I cannot have the entire StorageClass object stored in argocd (git) because there are fields managed by GKE, e.g. metadata.annotations.components.gke.io/component-version and maybe some others. So ideally, I would like to just add an annotation.
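For illustration, the kind of minimal, partial manifest being asked for might look like this (a sketch; the class name standard is an assumption, and it would still need a mechanism such as the Server-Side Apply support discussed below to be applied without owning the whole resource):

```yaml
# Sketch: only the desired annotation, not the full GKE-managed StorageClass.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                 # hypothetical: the GKE-provisioned class name
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
```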

@priggad

priggad commented Jun 16, 2022

+1. Would be great to have all configuration after cluster bootstrap to be defined and managed in argocd.

@Ninsbean

@alexmt Within openshift for example the default scc's (security context constraints) are owned by Openshift but there is a need to patch those scc with additional users and serviceaccounts (which is documented as a manual proces by doing oc adm or by oc patch (oc patch scc privileged --type=json -p '[{"op": "add", "path": "/users/0", "value":"system:serviceaccount:default:router"}]'). We need this to be possible within argocd because we want to add users and sa's dynamically based on the teams that have namespaces rolled out by argo.

Does anyone have a solution for this? I'm having this exact problem.

@raffaelespazzoli
Author

@Ninsbean I use the patch operator to solve for this issue: https://github.com/redhat-cop/patch-operator

@b-a-t

b-a-t commented Sep 20, 2022

Another use case. After new GKE cluster creation I want to set particular StorageClass as a default by adding annotation storageclass.beta.kubernetes.io/is-default-class=true. I cannot have entire StorageClass object stored in argocd (git) because there are fields managed by GKE, e.g. metadata.annotations.components.gke.io/component-version and maybe some other. So ideally, I would like to just add an annotation.

Exactly the same problem exists for the AWS EKS default storageclass: if you want to redefine it to gp3, for example, you need to patch the corresponding class annotation.

@iam-veeramalla
Member

iam-veeramalla commented Oct 3, 2022

This issue is solved by #9711 .

Note: Server-Side Apply is a feature that is not released yet. It is part of the milestone v2.5.
If you would like to test this feature you can either build Argo CD from master or use
docker.io/abhishekf5/argocd@sha256:ac5d7b74e71eb6453944d564e1b5f056a4f3d8c4447141cde0e9f540a7115afc

TL;DR
Enable Server-Side Apply and turn off Schema Validation (not required in the example below, but this is similar to kubectl apply --server-side --validate=false, which is required when you try to update anything in .spec) as shown below.

[Screenshot: the Application sync options dialog with Server-Side Apply enabled and schema validation turned off]

Server-Side Apply is a feature that became stable in k8s v1.22 and was introduced into Argo CD by @leoluz with PR #9711. However, this feature is not released yet; it is set to milestone v2.5.

Server-Side Apply helps users and controllers manage their resources through declarative configurations. Clients can create and modify their [objects](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/) declaratively by sending their fully specified intent.

A fully specified intent is a partial object that only includes the fields and values for which the user has an opinion. That intent either creates a new object or is [combined](https://kubernetes.io/docs/reference/using-api/server-side-apply/#merge-strategy), by the server, with the existing object.

GOAL:

We will try to label a simple nginx deployment that is already available on the cluster and not managed by Argo CD. Let's start with creating the nginx deployment.

  1. Create an nginx deployment:
     kubectl create deployment nginx --image=nginx
  2. Write a partial object that only includes the fields and values for which the user has an opinion. We will patch the nginx deployment with the label foo: bar.
  • Place the below nginx deployment in a git repo:

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       labels:
         app: nginx
         foo: bar
       name: nginx
       namespace: default
  3. Create an Argo CD Application that points to the git repo and path, as shown below.

[Screenshot: creating the Argo CD Application in the UI, pointing at the git repository and path]
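Declaratively, the equivalent Application might look roughly like this (a sketch; the application name, repository URL and path are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx-label                                    # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/my/gitops-repo.git    # hypothetical repo
    targetRevision: HEAD
    path: nginx-label                                  # hypothetical path
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    syncOptions:
      - ServerSideApply=true
      - Validate=false   # same effect as disabling schema validation in the UI
```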

  4. Finally, you will see that the new label foo: bar is attached to the nginx deployment.

[Screenshot: the nginx deployment details in the Argo CD UI showing the new foo: bar label]

@leotomas837

leotomas837 commented Nov 27, 2022

@iam-veeramalla

Thanks for the tip, especially now that it is released. Yet that is based on the strategic merge patch, I assume; is there any way to apply a JSON patch? JSON patches offer more complex patch possibilities that the strategic merge patch does not support, and I am typically using one that a lot of people are certainly using: a JSON patch on the aws-auth ConfigMap to map AWS roles to Kubernetes roles and configure access to the cluster. It adds a multiline string to a multiline string, with some Go template code to handle the different cases.
I am currently using the Red Hat patch-operator for this purpose. Argocd supports strategic merge patches from now on thanks to #9711, and it would be nice to support other patch types as well, especially the JSON one.

@leotomas837

Why is this closed, can the issue be re-opened ?

@b-a-t

b-a-t commented Apr 17, 2023

Why is this closed, can the issue be re-opened ?

Maybe cause ArgoCD now supports server-side applies?

@leotomas837

leotomas837 commented Apr 17, 2023

Maybe cause ArgoCD now supports server-side applies?

@b-a-t
As mentioned, server-side apply operates as a strategic merge patch, see the Argocd doc.
Strategic merge patches handle only simple cases, such as adding a new label (such a case is mentioned in this thread). But this is far from handling all cases. Many very common cases are not supported by strategic merge patches; I shared one above from AWS.
JSON patches cover many more cases. Any plan to implement them, i.e. server-side apply JSON patches?

I found a workaround using an Argocd CMP with helmfile, but that is not really intuitive. Another solution is to use the patch-operator, but it does not look maintained anymore and no enhancements are made to the chart, even from external PRs.

@dellnoantechnp

The patch-operator project is not production ready.

Can anyone find another solution?

@crenshaw-dev
Member

Reopening, because I think @leotomas837's description of how SSA is limited to relatively simple patches makes sense.

@crenshaw-dev crenshaw-dev reopened this Aug 18, 2023
@leotomas837

leotomas837 commented Aug 18, 2023

patch-operator project is not production ready.

Can any one find other solution ?

There is another solution: creating an Argocd CMP with helmfile. As you can see in this helmfile.yaml example, helmfile supports JSON patches. Also add kubectl to the CMP sidecar.

Simply create an Argocd Application syncing a helmfile repository, and the trick is to use helmfile's Go template function called exec to get the yaml of the resources Argocd doesn't own with kubectl, then inject them through a raw helm chart into the helmfile.yaml (you can create your own raw helm chart, the link is just an example), and apply any helmfile JSON patch you like.

Don't forget to also create the right RBAC resources to give the repo-server read access to the resources.

Helmfile has a range of other very useful features anyway, have a look at it. I can patch any resource via GitOps in this way with Argocd using a helmfile CMP.
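A rough sketch of the idea, assuming helmfile's release-level jsonPatches support and a raw chart into which the live object is injected (the chart, values file, target and patch values below are hypothetical and would need to be adapted):

```yaml
# helmfile.yaml (sketch, not a verified configuration)
releases:
  - name: aws-auth-patch
    namespace: kube-system
    chart: incubator/raw            # a raw chart rendering the object fetched via the exec trick
    values:
      - values.yaml.gotmpl          # hypothetical file where the live ConfigMap is injected
    jsonPatches:
      - target:
          version: v1
          kind: ConfigMap
          name: aws-auth
          namespace: kube-system
        patch:
          - op: add
            path: /data/mapRoles
            value: |
              - rolearn: arn:aws:iam::123456789012:role/example   # hypothetical role
                username: example
                groups:
                  - system:masters
```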

@dellnoantechnp

I used the Argo CD Sync Options annotation, which resolved my problem.

Examples:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  annotations:
    argocd.argoproj.io/sync-options: ServerSideApply=true       # sync-options ServerSideApply
  name: k8s
  namespace: kubesphere-monitoring-system
spec:
  scrapeInterval: 30s    # changed from 1m to 30s
  ......

Grafana dashboard ConfigMap JSON (huge JSON data):

apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    argocd.argoproj.io/sync-options: Replace=true,Prune=true,PruneLast=true    # sync-options set Replace,Prune,PruneLast true
  name: kubernetes-dashboards
data:
  CoreDNS.json: |
    {"annotations":{"list":[{"builtIn .....
  Kubernetes-SkyDNS.json: |
    {"__inputs":[{"name":" .......
  Pod-Stats-Info-dashboard.json: |
    {"annotations":{"lis .......

It's great.

@fredleger

+1

Another valid use case is to patch the kubernetes API service with annotations, as indicated here: https://docs.datadoghq.com/containers/kubernetes/control_plane/?tab=helm#EKS

It makes sense to me to have everything in the same tool and repo.

@gete76

gete76 commented Jul 10, 2024

Let's say I want to add an initContainer to the aws-node DaemonSet on EKS. Is there a solution to do that? I don't believe the Argo strategic merge will work here because it requires a chart or templates to point to. What if we didn't deploy the object?

@christianh814
Member

Since we have SSA, but with the limitation described, should we close this one in favor of another issue?

Supporting, say, JSON patching would be an enhancement to the SSA functionality. But AFAIC, the SSA feature satisfies the original request.

lyz-code added a commit to lyz-code/blue-book that referenced this issue Oct 29, 2024
alephclient is a command-line client for Aleph. It can be used to bulk import structured data and files and more via the API, without direct access to the server.

**[Installation](https://docs.aleph.occrp.org/developers/how-to/data/install-alephclient/#how-to-install-the-alephclient-cli)**

You can now install `alephclient` using pip, although I recommend using `pipx` instead:

```bash
pipx install alephclient
```

`alephclient` needs to know the URL of the Aleph instance to connect to. For privileged operations (e.g. accessing private datasets or writing data), it also needs your API key. You can find your API key in your user profile in the Aleph UI.

Both settings can be provided by setting the environment variables `ALEPHCLIENT_HOST` and `ALEPHCLIENT_API_KEY`, respectively, or by passing them in with `--host` and `--api-key` options.

```bash
export ALEPHCLIENT_HOST=https://aleph.occrp.org/
export ALEPHCLIENT_API_KEY=YOUR_SECRET_API_KEY
```

You can now start using `alephclient` for example to upload an entire directory to Aleph.

**[Upload an entire directory to Aleph](https://docs.aleph.occrp.org/developers/how-to/data/upload-directory/)**
While you can upload multiple files and even entire directories at once via the Aleph UI, using the `alephclient` CLI allows you to upload files in bulk much more quickly and reliably.

Run the following `alephclient` command to upload an entire directory to Aleph:

```bash
alephclient crawldir --foreign-id wikileaks-cable /Users/sunu/data/cable
```

This will upload all files in the directory `/Users/sunu/data/cable` (including its subdirectories) into an investigation with the foreign ID `wikileaks-cable`. If no investigation with this foreign ID exists, a new investigation is created (in theory, but it didn't work for me, so manually create the investigation and then copy its foreign ID).

If you’d like to import data into an existing investigation and do not know its foreign ID, you can find the foreign ID in the Aleph UI. Navigate to the investigation homepage. The foreign ID is listed in the sidebar on the right.

feat(aleph#Other tools for the ecosystem): Other tools for the ecosystem
[Investigraph](https://investigativedata.github.io/investigraph/) is an ETL framework that allows research teams to build their own data catalog themselves as easily and reproducibly as possible. The investigraph framework provides logic for extracting, transforming and loading any data source into followthemoney entities.

For most common data source formats, this process is possible without programming knowledge, by means of an easy YAML specification interface. However, if it turns out that a specific dataset cannot be parsed with the built-in logic, a developer can plug in custom Python scripts at specific places within the pipeline to handle even the most exotic edge cases in data processing.

feat(antiracism#Referencias): New interesting article

- [The anti-racist origin of the word `woke`](https://www.lamarea.com/2024/08/27/el-origen-antirracista-de-lo-woke/)

feat(antitourism#Libros): New interesting books

- [Verano sin vacaciones. Las hijas de la Costa del Sol by Ana Geranios](https://piedrapapellibros.com/producto/verano-sin-vacaciones-las-hijas-de-la-costa-del-sol/)
- [Estuve aquí y me acordé de nosotros by Anna Pacheco](https://www.anagrama-ed.es/libro/nuevos-cuadernos-anagrama/estuve-aqui-y-me-acorde-de-nosotros/9788433922304/NCA_68)

feat(apprise): Introduce Apprise

[Apprise](https://github.com/caronc/apprise) is a notification library that offers a unified way to send notifications across various platforms. It supports multiple notification services and simplifies the process of integrating notifications into your applications.

Apprise supports various notification services including:

- [Email](https://github.com/caronc/apprise/wiki/Notify_email#using-custom-servers-syntax)
- SMS
- Push notifications
- Webhooks
- And more

Each service requires specific configurations, such as API keys or server URLs.

**Installation**

To use Apprise, you need to install the package via pip:

```bash
pip install apprise
```

**Configuration**

Apprise supports a range of notification services. You can configure notifications by adding service URLs with the appropriate credentials and settings.

For example, to set up email notifications, you can configure it like this:

```python
import apprise

apobj = apprise.Apprise()

apobj.add("mailto://user:password@mail.example.com:587/")  # replace with your SMTP credentials and server

apobj.notify(
    body="This is a test message.",
    title="Test notification",
)
```

**Sending notifications**

To send a notification, use the `notify` method. This method accepts parameters such as `body` for the message content and `title` for the notification title.

Example:

```python
apobj.notify(
    body="Here is the message content.",
    title="Notification title",
)
```

**References**
- [Home](https://github.com/caronc/apprise)
- [Docs](https://github.com/caronc/apprise/wiki)
- [Source](https://github.com/caronc/apprise)

feat(argocd): Reasons to use it

I'm using Argo CD as the GitOps tool, because:

1. It is a CNCF project, so it is a well-maintained project.
2. I have positive feedback from other mates that are using it.
3. It is a mature project, so you can expect good support from the community.

I also took into consideration other tools like
[Flux](https://fluxcd.io/), [spinnaker](https://spinnaker.io/) or
[Jenkins X](https://jenkins-x.io/) before taking this decision.

feat(argocd#Difference between sync and refresh): Difference between sync and refresh

Some good articles to understand it are:

- https://danielms.site/zet/2023/argocd-refresh-v-sync/
- https://argo-cd.readthedocs.io/en/stable/core_concepts/
- https://github.com/argoproj/argo-cd/discussions/8260
- https://github.com/argoproj/argo-cd/discussions/12237

feat(argocd#Configure the git webhook to speed up the sync): Configure the git webhook to speed up the sync

It still doesn't work [for git webhooks on ApplicationSets for gitea/forgejo](https://github.com/argoproj/argo-cd/issues/18798).

feat(argocd#Import already deployed helm): Import already deployed helm

Some good articles to understand it are:

- https://github.com/argoproj/argo-cd/issues/10168
- https://github.com/argoproj/argo-cd/discussions/8647
- https://github.com/argoproj/argo-cd/issues/2437#issuecomment-542244149

feat(argocd#Migrate from helmfile to argocd ): Migrate from helmfile to argocd

This section provides a step-by-step guide to migrate an imaginary deployment. It is not real and should be adapted to the real deployment you want to migrate; it tries to be as simple as possible. There are some tips and tricks later in this document for complex scenarios.

1. **Select a deployment to migrate**
    Once you have decided the deployment to migrate, you have to decide where it belongs to (bootstrap, kube-system, monitoring, applications or is managed by a team).
    Go to the helmfile repository and find the deployment you want to migrate.
2. **Use any of the previous created deployments in the same section as a template**
    Just copy it with the new name, ensure it has all the components you will need:
      - The `Chart.yaml` file will handle the chart repository, version, and, in some cases, the name.
      - The `values.yaml` file will handle the shared values among environments for the deployment.
      - The `values-<env>.yaml` file will handle the environment-specific values.
      - The `secrets.yaml` file will handle the secrets for the deployment (for the current environment).
      - The `templates` folder will handle the Kubernetes resources for the deployment, in helmfile we use the raw chart for this.
3. **Create the `Chart.yaml` file**
    This file is composed by the following fields:
    ```yaml
    apiVersion: v2
    name: kube-system # The name of the deployment
    version: 1.0.0 # The version of the deployment
    dependencies: # The dependencies of the deployment
      - name: ingress-nginx # The name of the chart to deploy
        version: "4.9.1" # The version of the chart to deploy
        repository: "https://kubernetes.github.io/ingress-nginx" # The repository of the chart to deploy
    ```
    You can find the name of the chart in the `helmfile.yaml` file in the helmfile repository; it is under the `chart` key of the release. If it is named something like `ingress-nginx/ingress-nginx`, it is the second part of the value, the first part being the local alias for the repository.
    For the version and the repository, the most straightforward way is to go to the `helmfile.lock` next to the `helmfile.yaml` and search for its entry. The version is under the `version` key and the repository is under the `repository` key.

4. **Create the `values.yaml` and `values-<env>.yaml` files**
    For the `values.yaml` file, you can copy the `values.yaml` file from the helmfile repository, but it has to be under a key named like the chart name in the `Chart.yaml` file.
    ```yaml
    ingress-nginx:
      controller:
        service:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    [...]
    ```
    With the migration we have lost the Go templating capabilities, so I would recommend opening the new `values.yaml` side by side with the new `values-<env>.yaml`, moving values from `values.yaml` to `values-<env>.yaml` when needed, and filling the templated values with the real values. It is a pity, we know. Also remember that the `values-<env>.yaml` content needs to be under the same key as the `values.yaml` content.
    ```yaml
    ingress-nginx:
      controller:
        service:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:123456789012:certificate/12345678-1234-1234-1234-123456789012
    [...]
    ```
    After this you can copy the content of the environment-specific values from the helmfile to the new `values-<env>.yaml` file. Remember to resolve the templated values with the real values.
5. **Create the `secrets.yaml` file**
    The `secrets.yaml` file is a file that contains the secrets for the deployment. You can copy the secrets from the helmfile repository to the `secrets.yaml` file in the Argo CD repository, but you have to do the same as we did in the `values.yaml` and `values-<env>.yaml` files: everything that configures the deployment of the chart has to be under a key named like the chart name.
    Just a heads up, the secrets are not shared among environments, so you have to create this file for each environment you have (staging, production, etc.).
6. **Create the `templates` folder**
    If there is any use of the raw chart in the helmfile repository, you have to copy the content of the values file used by the raw chart into one file per resource in the `templates` folder. Remember that the raw chart requires everything to be under a key, and this is now a template, so you have to remove that key and unindent the file.
    As a best practice, if there were variables in the raw chart, you can still use them here; you just have to create the variables in the `values.yaml` or `values-<env>.yaml` files at the top level of the YAML hierarchy, and the templates will be able to use them. This also works for the secrets. It helps a lot to avoid repeating ourselves. As an example you can check the next template:

    ```yaml
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        email: admin@example.org   # replace with your contact email
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-prod
        solvers:
        - selector:
            dnsZones:
              - service-{{.Values.environment}}.example.org
          dns01:
            route53:
              region: us-east-1
    ```

    And this `values-staging.yaml` file:

    ```yaml
    environment: staging
    cert-manager:
      serviceAccount:
        annotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::XXXXXXXXXXXX:role/staging-cert-manager
    ```
7. **Commit your changes**
    Once you have created all the files, you have to commit them to the Argo CD repository. You can use the following commands to commit the changes:
    ```bash
    git add .
    git commit -m "Migrate deployment <my deployment> from Helmfile to Argo CD"
    git push
    ```
8. **Create the PR and wait for the review**
    Once you have committed the changes, you have to create a PR in the Argo CD repository.
    After creating the PR, you have to wait for the review and approval from the team.
9. **Merge the PR and wait for the deployment**
    Once the PR has been approved, you have to merge it and wait for the refresh to be triggered by Argo CD.
    We don't have auto-sync yet, so you have to go to the deployment, manually check the diff and sync the deployment if everything is fine.
10. **Check the deployment**
    Once the deployment has been synced, you have to check the deployment in the Kubernetes cluster to ensure that everything is working as expected.

feat(argocd#You need to deploy a docker image from a private registry): You need to deploy a docker image from a private registry

This is a common scenario: you have to deploy a chart that uses a docker image from a private registry. You have to create a template file with the credentials secret and keep the secret in the `secrets.yaml` file.

`registry-credentials.yaml`:
```yaml
---
apiVersion: v1
data:
  .dockerconfigjson: {{ .Values.regcred }}
kind: Secret
metadata:
  name: regcred
  namespace: drawio
type: kubernetes.io/dockerconfigjson
```

`secrets.yaml`:

```yaml
regcred: XXXXX
```

feat(argocd#You have to deploy multiple charts within the same deployment): You have to deploy multiple charts within the same deployment

As a limitation of our deployment strategy, in some scenarios the name of the namespace is set to the directory name of the deployment, so you have to deploy any chart within the same deployment in the same `namespace/directory`. You can do this by using multiple dependencies in the `Chart.yaml` file. For example, if you want an internal docker-registry and also a docker-registry-proxy to avoid the rate limiting of Docker Hub, you can have:

```yaml
---
apiVersion: v2
name: infra
version: 1.0.0
dependencies:
  - name: docker-registry
    version: 2.2.2
    repository: https://helm.twun.io
    alias: docker-registry
  - name: docker-registry
    version: 2.2.2
    repository: https://helm.twun.io
    alias: docker-registry-proxy
```

values.yaml

```yaml
docker-registry:
  ingress:
    enabled: true
    className: nginx
    path: /
    hosts:
      - registry.example.org
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
      cert-manager.io/cluster-issuer: letsencrypt-prod
      cert-manager.io/acme-challenge-type: dns01
    tls:
      - secretName: registry-tls
        hosts:
          - registry.example.org
docker-registry-proxy:
  ingress:
    enabled: true
    className: open-internally
    path: /
    hosts:
      - registry-proxy.example.org
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
      cert-manager.io/cluster-issuer: letsencrypt-prod
      cert-manager.io/acme-challenge-type: dns01
    tls:
      - secretName: registry-proxy-tls
        hosts:
          - registry-proxy.example.org
```

feat(argocd#You need to deploy a chart in an OCI registry): You need to deploy a chart in an OCI registry

It is pretty straightforward; you just have to keep in mind that the helmfile repository specifies the chart in the URL, while our ArgoCD definition just needs the repository, and the chart name is defined in the name of the dependency. So in helmfile you will find something like this:
```yaml
  - name: karpenter
    chart: oci://public.ecr.aws/karpenter/karpenter
    version: v0.32.7
    namespace: kube-system
    values:
      - karpenter/values.yaml.gotmpl
```

And in the ArgoCD repository you will find something like this:

```yaml
dependencies:
  - name: karpenter
    version: v0.32.7
    repository: "oci://public.ecr.aws/karpenter"
```

feat(argocd#A object is being managed by the deployment and ArgoCD is trying to manage delete it): A object is being managed by the deployment and ArgoCD is trying to manage (delete) it

Some deployments create their own objects and add their tags to them, so ArgoCD tries to manage them, but as they are not defined in the ArgoCD repository, it tries to delete them. You can handle this situation by telling ArgoCD to ignore those objects. For example, you can exclude the management of backups:

```yaml
argo-cd:
  # https://github.com/argoproj/argo-helm/blob/main/charts/argo-cd/values.yaml
  configs:
    # General Argo CD configuration
    ## Ref: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-cm.yaml
    cm:
      resource.exclusions: |
        - apiGroups:
          - "*"
          kinds:
          - Backup
          clusters:
          - "*"
```
feat(argocd#When something is not syncing): When something is not syncing

If something is not syncing, you can check the logs through the `sync status` button in the Argo CD UI; this will give you a hint of what is happening. For common scenarios you can:

- Delete the failing resource (deployment, configmap, secret) and sync it again. **Never delete a statefulset** as it will delete the data.
- Set some "advanced" options in the sync, like `force`,  `prune` or `replace` to force the sync of the objects unwilling to sync.

feat(argocd#You have to deploy the ingress so you will lost the access to the Argocd UI): You have to deploy the ingress, so you will lose access to the ArgoCD UI

This is tricky, because the ingress is one of these cases where you have to delete the deployments and sync them again, but once you delete the deployment there is no ingress, and so no way to access the Argo CD UI. You can handle this situation in at least two ways:
- Set a retry option in the synchronization of the deployment, so you can delete the deployment and the sync will happen again in a few seconds.
- Force a sync using kubectl, instead of the UI. You can do this by running the following command:
  ```bash
  kubectl patch application <yourDeployment> -n argocd --type=merge -p '{"operation": {"initiatedBy": { "username": "<yourUserName>"},"sync": { "syncStrategy": null, "hook": {} }}}'
  ```

fix(bash_snippets#Fix docker error: KeyError ContainerConfig): Fix docker error: KeyError ContainerConfig

A quick workaround is to run `docker-compose down` and then up again. The real solution is to upgrade Docker and use `docker compose` instead.

feat(board_games#Online board gaming pages): Online board gaming pages

- [Roll20](https://roll20.net/)
- [Foundry](https://foundryvtt.com/)

feat(book_management#Convert pdf to epub): Convert pdf to epub

This is a nasty operation; my suggestion is to convert it with Calibre and then play with the [Search and replace](https://manual.calibre-ebook.com/conversion.html#search-replace) regular expressions using the wand. With this tool you can remove headers, footers, or other arbitrary text. Remember that they operate on the intermediate XHTML produced by the conversion pipeline. There is a wizard to help you customize the regular expressions for your document. Click the magic wand beside the expression box, and click the ‘Test’ button after composing your search expression. Successful matches will be highlighted in yellow.

The search works by using a Python regular expression. All matched text is simply removed from the document or replaced using the replacement pattern. The replacement pattern is optional, if left blank then text matching the search pattern will be deleted from the document.

feat(cadvisor): Introduce cAdvisor

[cAdvisor](https://github.com/google/cadvisor) (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.

cAdvisor has native support for Docker containers and should support just about any other container type out of the box.

**Try it out**

To quickly try out cAdvisor on your machine with Docker, we have a Docker image that includes everything you need to get started. You can run a single cAdvisor instance to monitor the whole machine. Simply run:

```bash
VERSION=v0.49.1 # use the latest release version from https://github.com/google/cadvisor/releases
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  --privileged \
  --device=/dev/kmsg \
  gcr.io/cadvisor/cadvisor:$VERSION
```
**Installation**

You can check all the configuration flags [here](https://github.com/google/cadvisor/blob/master/docs/runtime_options.md#metrics).

**With docker compose**

* Create the data directories:
  ```bash
  mkdir -p /data/cadvisor/
  ```
* Copy the `docker/docker-compose.yaml` to `/data/cadvisor/docker-compose.yaml`.
  ```yaml
  ---
  services:
    cadvisor:
      image: gcr.io/cadvisor/cadvisor:latest
      restart: unless-stopped
      privileged: true
      # command:
      # # tcp and udp create high CPU usage, disk does CPU hungry ``zfs list``
      # - '--disable_metrics=tcp,udp,disk'
      volumes:
        - /:/rootfs:ro
        - /var/run:/var/run:ro
        - /sys:/sys:ro
        - /var/lib/docker/:/var/lib/docker:ro
        - /dev/disk:/dev/disk:ro
      # ports:
      #   - "8080:8080"
      devices:
        - /dev/kmsg:/dev/kmsg
      networks:
        - monitorization

  networks:
    monitorization:
      external: true
  ```

  If Prometheus is not running on the same instance as cAdvisor, expose the port and remove the network.
* Create the docker networks (if they don't exist):
    * `docker network create monitorization`
* Copy the `service/cadvisor.service` into `/etc/systemd/system/`
  ```
  [Unit]
  Description=cadvisor
  Requires=docker.service
  After=docker.service

  [Service]
  Restart=always
  User=root
  Group=docker
  WorkingDirectory=/data/cadvisor
  TimeoutStartSec=100
  RestartSec=2s
  ExecStart=/usr/bin/docker compose -f docker-compose.yaml up
  ExecStop=/usr/bin/docker compose -f docker-compose.yaml down

  [Install]
  WantedBy=multi-user.target
  ```
* Start the service `systemctl start cadvisor`
* If needed enable the service `systemctl enable cadvisor`.
- Scrape the metrics with prometheus
  - If both dockers share machine and docker network:
    ```yaml
    scrape_configs:
      - job_name: cadvisor
        metrics_path: /metrics
        static_configs:
          - targets:
            - cadvisor:8080
        # Relabels needed for the grafana dashboard
        # https://grafana.com/grafana/dashboards/15798-docker-monitoring/
        metric_relabel_configs:
          - source_labels: ['container_label_com_docker_compose_project']
            target_label: 'service'
          - source_labels: ['name']
            target_label: 'container'
    ```

**[Deploy the alerts](https://samber.github.io/awesome-prometheus-alerts/rules#docker-containers)**

```yaml
---
groups:
- name: cAdvisor rules
  rules:
    # This rule can be very noisy in dynamic infra with legitimate container start/stop/deployment.
    - alert: ContainerKilled
      expr: min by (name, service) (time() - container_last_seen{container=~".*"}) > 60
      for: 0m
      labels:
        severity: warning
      annotations:
        summary: Container killed (instance {{ $labels.instance }})
        description: "A container has disappeared\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # This rule can be very noisy in dynamic infra with legitimate container start/stop/deployment.
    - alert: ContainerAbsent
      expr: absent(container_last_seen{container=~".*"})
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: Container absent (instance {{ $labels.instance }})
        description: "A container is absent for 5 min\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    - alert: ContainerHighCpuUtilization
      expr: (sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (pod, container) / sum(container_spec_cpu_quota{container!=""}/container_spec_cpu_period{container!=""}) by (pod, container) * 100) > 80
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: Container High CPU utilization (instance {{ $labels.instance }})
        description: "Container CPU utilization is above 80%\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

      # See https://medium.com/faun/how-much-is-too-much-the-linux-oomkiller-and-used-memory-d32186f29c9d
    - alert: ContainerHighMemoryUsage
      expr: (sum(container_memory_working_set_bytes{name!=""}) BY (instance, name) / sum(container_spec_memory_limit_bytes > 0) BY (instance, name) * 100) > 80
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: Container High Memory usage (instance {{ $labels.instance }})
        description: "Container Memory usage is above 80%\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # I feel that this is monitored well with the node exporter
    # - alert: ContainerVolumeUsage
    #   expr: (1 - (sum(container_fs_inodes_free{name!=""}) BY (instance) / sum(container_fs_inodes_total) BY (instance))) * 100 > 80
    #   for: 2m
    #   labels:
    #     severity: warning
    #   annotations:
    #     summary: Container Volume usage (instance {{ $labels.instance }})
    #     description: "Container Volume usage is above 80%\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    - alert: ContainerHighThrottleRate
      expr: sum(increase(container_cpu_cfs_throttled_periods_total{container!=""}[5m])) by (container, pod, namespace) / sum(increase(container_cpu_cfs_periods_total[5m])) by (container, pod, namespace) > ( 25 / 100 )
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: Container high throttle rate (instance {{ $labels.instance }})
        description: "Container is being throttled\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    - alert: ContainerLowCpuUtilization
      expr: (sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (pod, container) / sum(container_spec_cpu_quota{container!=""}/container_spec_cpu_period{container!=""}) by (pod, container) * 100) < 20
      for: 7d
      labels:
        severity: info
      annotations:
        summary: Container Low CPU utilization (instance {{ $labels.instance }})
        description: "Container CPU utilization is under 20% for 1 week. Consider reducing the allocated CPU.\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    - alert: ContainerLowMemoryUsage
      expr: (sum(container_memory_working_set_bytes{name!=""}) BY (instance, name) / sum(container_spec_memory_limit_bytes > 0) BY (instance, name) * 100) < 20
      for: 7d
      labels:
        severity: info
      annotations:
        summary: Container Low Memory usage (instance {{ $labels.instance }})
        description: "Container Memory usage is under 20% for 1 week. Consider reducing the all"

    - alert: Container (Compose) Too Many Restarts
      expr: count by (instance, name) (count_over_time(container_last_seen{name!="", container_label_restartcount!=""}[15m])) - 1 >= 5
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "Too many restarts ({{ $value }}) for container \"{{ $labels.name }}\""
```

**Deploy the dashboard**

There are many Grafana dashboards for cAdvisor; of them all I've chosen [this one](https://grafana.com/grafana/dashboards/15798-docker-monitoring/).

Once you've imported it and selected your Prometheus datasource, you can press "Share" to get the JSON and add it to your provisioned dashboards.

**Make it work with ZFS**

There are many issues about it ([1](https://github.com/google/cadvisor/issues/1579))

The solution seems to be to use `--device /dev/zfs:/dev/zfs`.
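In the docker-compose setup above, that would presumably translate to an extra entry under `devices` (a sketch; it assumes `/dev/zfs` exists on the host):

```yaml
# Sketch: add the zfs device to the cadvisor service defined earlier.
services:
  cadvisor:
    devices:
      - /dev/kmsg:/dev/kmsg
      - /dev/zfs:/dev/zfs
```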

**References**
- [Source](https://github.com/google/cadvisor)

feat(changedetection): Introduce Changedetection

[Changedetection](https://changedetection.io/) is a free open source web page change detection, website watcher, restock monitor and notification service.

Note: even though it has a nice web interface, if you have some basic Python skills it may be better to run your own script on a cron job.

**Installation**
With Docker compose, just clone this repository and:
- Copy the [default docker-compose](https://github.com/dgtlmoon/changedetection.io/blob/master/docker-compose.yml) and tweak it to your needs.

```bash
$ docker compose up -d
```

**References**
- [Home](https://changedetection.io/)
- [Docs](https://github.com/dgtlmoon/changedetection.io/wiki)
- [Source](https://github.com/dgtlmoon/changedetection.io)

feat(pytest#freezegun): Deprecate freezegun

[pytest-freezegun has been deprecated](https://github.com/ktosiek/pytest-freezegun/issues/19#issuecomment-1500919278) in favour of [`pytest-freezer`](https://github.com/pytest-dev/pytest-freezer)

feat(csvlens): Introduce csvlens

`csvlens` is a command line CSV file viewer. It is like less but made for CSV.

**Usage**

Run `csvlens` by providing the CSV filename:

```
csvlens <filename>
```

Pipe CSV data directly to `csvlens`:

```
<your commands producing some csv data> | csvlens
```

**Key bindings**

Key | Action
--- | ---
`hjkl` (or `← ↓ ↑ →`) | Scroll one row or column in the given direction
`Ctrl + f` (or `Page Down`) | Scroll one window down
`Ctrl + b` (or `Page Up`) | Scroll one window up
`Ctrl + d` (or `d`) | Scroll half a window down
`Ctrl + u` (or `u`) | Scroll half a window up
`Ctrl + h` | Scroll one window left
`Ctrl + l` | Scroll one window right
`Ctrl + ←` | Scroll left to first column
`Ctrl + →` | Scroll right to last column
`G` (or `End`) | Go to bottom
`g` (or `Home`) | Go to top
`<n>G` | Go to line `n`
`/<regex>` | Find content matching regex and highlight matches
`n` (in Find mode) | Jump to next result
`N` (in Find mode) | Jump to previous result
`&<regex>` | Filter rows using regex (show only matches)
`*<regex>` | Filter columns using regex (show only matches)
`TAB` | Toggle between row, column or cell selection modes
`>` | Increase selected column's width
`<` | Decrease selected column's width
`Shift + ↓` (or `Shift + j`) | Sort rows or toggle sort direction by the selected column
`#` (in Cell mode) | Find and highlight rows like the selected cell
`@` (in Cell mode) | Filter rows like the selected cell
`y` (in Cell Mode) | Copy the selected cell to clipboard
`Enter` (in Cell mode) | Print the selected cell to stdout and exit
`-S` | Toggle line wrapping
`-W` | Toggle line wrapping by words
`r` | Reset to default view (clear all filters and custom column widths)
`H` (or `?`) | Display help
`q` | Exit

**Installation**

Download the binary directly from the [releases](https://github.com/YS-L/csvlens/releases) or if you have cargo installed do:

```bash
cargo install csvlens
```
**References**
- [Source](https://github.com/YS-L/csvlens)

feat(deltachat): Introduce Delta Chat

Delta Chat is a decentralized and secure messenger app.

- Reliable instant messaging with multi-profile and multi-device support
- Sign up to secure fast chatmail servers or use classic e-mail servers
- Interactive web apps in chats for gaming and collaboration
- Audited end-to-end encryption safe against network and server attacks
- FOSS software, built on Internet Standards, avoiding xkcd927 :)

**[Installation](https://delta.chat/en/download)**

If you don't want to use snap, flatpak or nix, download the deb package under "Download options without automatic updates".

Install it with `sudo dpkg -i deltachat.deb`.

Be careful that by default it uses

**References**
- [Home](https://delta.chat/en/)
- [Source](https://github.com/deltachat/deltachat-desktop)
- [Docs](https://github.com/deltachat/deltachat-desktop/tree/main/docs)
- [Blog](https://delta.chat/en/blog)

feat(docker#Monitorization): Monitorization

You can [configure Docker to export prometheus metrics](https://docs.docker.com/engine/daemon/prometheus/), but they are not very useful.

**Using [cAdvisor](https://github.com/google/cadvisor)**
cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.

**References**
- [Source](https://github.com/google/cadvisor?tab=readme-ov-file)
- [Docs](https://github.com/google/cadvisor/tree/master/docs)

**Monitor continuously restarting dockers**
Sometimes containers are stuck in a never-ending loop of crash and restart. The official Docker metrics don't help here, and even though [in the past there was a `container_restart_count`](https://github.com/google/cadvisor/issues/1312) (with a pretty issue number, btw) in cadvisor, I've tried activating [all metrics](https://github.com/google/cadvisor/blob/master/docs/runtime_options.md#metrics) and it still doesn't show up. I've opened [an issue](https://github.com/google/cadvisor/issues/3584) to see if I can activate it.

feat(furios): Introduce FuriOS

The people of [FuriLabs](https://furilabs.com/) have created a phone that runs on Debian and runs Android applications in a sandbox.

**References**
- [Home](https://furilabs.com/)
- [Source](https://github.com/FuriLabs)

feat(gancio#References): Add new list of gancio instances

- [List of gancio instances](http://demo.fedilist.com/instance?q=&ip=&software=gancio&registrations=&onion=)

feat(gitops): Introduce gitops

GitOps is a popular approach for deploying applications to Kubernetes
clusters because it provides several benefits. Some of the reasons why
we might want to implement GitOps in our Kubernetes deployment process include:

1. Git is a powerful and flexible version control system that can help
  us to track and manage changes to our infrastructure and application
  configuration. This can make it easier to roll back changes or compare
  different versions of the configuration, and can help us to ensure that
  our infrastructure and applications are always in the desired state.

2. GitOps provides a declarative approach to manage the infrastructure
  and applications. This means that we specify the desired state of our
  infrastructure and applications in configuration/definition files, and
  the GitOps tool ensures that the actual state of our infrastructure
  matches the desired state. This can help to prevent configuration drift
  and ensure that our infrastructure and applications are always in the
  desired state.

3. GitOps can automate the deployment process of our applications and
  infrastructure, which can help to reduce the time and effort required to
  roll out changes. This can improve the speed and reliability of our
  deployment process, and can help us to quickly and easily deliver changes
  to our applications and infrastructure.

4. GitOps can provide a central source of truth for our infrastructure
  and application configuration. This can help to ensure that everyone on
  the team is working with the same configuration, and can prevent conflicts
  and inconsistencies that can arise when multiple people are making changes
  to the configuration or infrastructure.

feat(grapheneos#Add call screening): Add call screening

If you're tired of getting spam calls even if you've signed up in a no-spam list such as the Robinson list, then try out ["yetanothercallblocker"](https://f-droid.org/en/packages/dummydomain.yetanothercallblocker/).

You can also enable blocking of unknown numbers in the Phone Settings, but [it only blocks calls with a hidden caller ID, not the ones that aren't in your contacts](https://www.reddit.com/r/GrapheneOS/comments/13yat8e/i_miss_call_screening/).
A friend is using "carrion", although he says it's not very effective.

feat(hacktivist_collectives): Gather some collectives

**Germany**

- Chaos Computer Club: [here](https://fediverse.tv/w/g76dg9qTaG7XiB4R2EfovJ) is a documentary on its birth

**Galicia**

Some collectives in Galiza are:

- [Hackliza](https://hackliza.gal/)
- [GALPon](https://www.galpon.org/): Linux and free software in Vigo/Pontevedra
- [GPUL](https://gpul.org/): The same for Coruña
- [Proxecto Trasno](https://trasno.gal/): Dedicated to translating software into Galician
- [La molinera](https://lamolinera.net/): They do 3D printing
- [A Industriosa](https://aindustriosa.org/)
- Enxeñeiros sen fronteiras: They've done hardware recycling projects to give the devices to people without resources
- [PonteLabs](https://pontelabs.org/)
- [Mancomun](https://mancomun.gal/a-nosa-rede/): A site that tries to list collectives, although they are all very official associations.

feat(hacktivist_gatherings): Gather some gatherings

**Europe**

- [Chaos Communication Congress](https://events.ccc.de/en/): Best gathering ever, it's a must at least once in your life.
- Chaos Communication Camp
- [Italian hackmeeting](https://www.hackmeeting.org/)
- [Trans hackmeeting](https://trans.hackmeeting.org/)

**Spanish state**

- [Spanish Hackmeeting](https://es.hackmeeting.org)
- [TransHackFeminist](https://zoiahorn.anarchaserver.org/thf2022/)

feat(imap_tools): Introduce imap tools python library

`imap-tools` is a high-level IMAP client library for Python, providing a simple and intuitive API for common email tasks like fetching messages, flagging emails as read/unread, labeling/moving/deleting emails, searching/filtering emails, and more.

Features:

- Basic message operations: fetch, uids, numbers
- Parsed email message attributes
- Query builder for search criteria
- Actions with emails: copy, delete, flag, move, append
- Actions with folders: list, set, get, create, exists, rename, subscribe, delete, status
- IDLE commands: start, poll, stop, wait
- Exceptions on failed IMAP operations
- No external dependencies, tested

**Installation**

```bash
pip install imap-tools
```

**Usage**

Both the [docs](https://github.com/ikvk/imap_tools) and the [examples](https://github.com/ikvk/imap_tools/tree/master/examples) are very informative on how to use the library.

**[Basic usage](https://github.com/ikvk/imap_tools/blob/master/examples/basic.py)**
```python
from imap_tools import MailBox, AND

"""
Get date, subject and body len of all emails from INBOX folder

1. MailBox()
    Create IMAP client, the socket is created here

2. mailbox.login()
    Login to mailbox account
    It supports context manager, so you do not need to call logout() in this example
    Select INBOX folder, cause login initial_folder arg = 'INBOX' by default (set folder may be disabled with None)

3. mailbox.fetch()
    First searches email uids by criteria in current folder, then fetch and yields MailMessage
    Criteria arg is 'ALL' by default
    Current folder is 'INBOX' (set on login), by default it is INBOX too.
    Fetch each message separately per N commands, cause bulk arg = False by default
    Mark each fetched email as seen, cause fetch mark_seen arg = True by default

4. print
    msg variable is MailMessage instance
    msg.date - email data, converted to datetime.date
    msg.subject - email subject, utf8 str
    msg.text - email plain text content, utf8 str
    msg.html - email html content, utf8 str
"""
with MailBox('imap.mail.com').login('[email protected]', 'pwd') as mailbox:
    for msg in mailbox.fetch():
        print(msg.date, msg.subject, len(msg.text or msg.html))

mailbox = MailBox('imap.mail.com')
mailbox.login('[email protected]', 'pwd', 'INBOX')  # or use mailbox.folder.set instead initial_folder arg
for msg in mailbox.fetch(AND(all=True)):
    print(msg.date, msg.subject, len(msg.text or msg.html))
mailbox.logout()
```

**[Action with emails](https://github.com/ikvk/imap_tools?tab=readme-ov-file#actions-with-emails)**

The action's `uid_list` arg may take:

- a str of comma-separated uids
- a Sequence containing str uids

To get uids, use the mailbox methods `uids` or `fetch`.

For actions on a large number of messages the IMAP command may become too long and raise an exception on the server side; use the `limit` argument of `fetch` in that case.

```python
import imap_tools
from imap_tools import MailBox, AND

with MailBox('imap.mail.com').login('[email protected]', 'pwd', initial_folder='INBOX') as mailbox:

    # COPY messages with uid in 23,27 from current folder to folder1
    mailbox.copy('23,27', 'folder1')

    # MOVE all messages from current folder to INBOX/folder2
    mailbox.move(mailbox.uids(), 'INBOX/folder2')

    # DELETE messages with 'cat' word in its html from current folder
    mailbox.delete([msg.uid for msg in mailbox.fetch() if 'cat' in msg.html])

    # FLAG unseen messages in current folder as \Seen, \Flagged and TAG1
    flags = (imap_tools.MailMessageFlags.SEEN, imap_tools.MailMessageFlags.FLAGGED, 'TAG1')
    mailbox.flag(mailbox.uids(AND(seen=False)), flags, True)

    # APPEND: add message to mailbox directly, to INBOX folder with \Seen flag and now date
    with open('/tmp/message.eml', 'rb') as f:
        msg = imap_tools.MailMessage.from_bytes(f.read())  # *or use bytes instead MailMessage
    mailbox.append(msg, 'INBOX', dt=None, flag_set=[imap_tools.MailMessageFlags.SEEN])
```

**[Run search queries](https://github.com/ikvk/imap_tools/blob/master/examples/search.py)**

You can get more information on the search criteria [here](https://github.com/ikvk/imap_tools?tab=readme-ov-file#search-criteria)
```python
"""
Query builder examples.

NOTES:

    NOT ((FROM='11' OR TO="22" OR TEXT="33") AND CC="44" AND BCC="55")
    NOT (((OR OR FROM "11" TO "22" TEXT "33") CC "44" BCC "55"))
    NOT(AND(OR(from_='11', to='22', text='33'), cc='44', bcc='55'))

1. OR(1=11, 2=22, 3=33) ->
    "(OR OR FROM "11" TO "22" TEXT "33")"
2. AND("(OR OR FROM "11" TO "22" TEXT "33")", cc='44', bcc='55') ->
    "AND(OR(from_='11', to='22', text='33'), cc='44', bcc='55')"
3. NOT("AND(OR(from_='11', to='22', text='33'), cc='44', bcc='55')") ->
    "NOT (((OR OR FROM "1" TO "22" TEXT "33") CC "44" BCC "55"))"
"""

import datetime as dt
from imap_tools import AND, OR, NOT, A, H, U

q1 = OR(date=[dt.date(2019, 10, 1), dt.date(2019, 10, 10), dt.date(2019, 10, 15)])

q2 = NOT(OR(date=[dt.date(2019, 10, 1), dt.date(2019, 10, 10), dt.date(2019, 10, 15)]))

q3 = A(subject='hello', date_gte=dt.date(2019, 10, 10))

q4 = OR(from_=["@spam.ru", "@tricky-spam.ru"])

q5 = AND(seen=True, flagged=False)

q6 = OR(AND(text='tag15', subject='tag15'), AND(text='tag10', subject='tag10'))

q7 = OR(OR(text='tag15', subject='tag15'), OR(text='tag10', subject='tag10'))

q8 = A(header=[H('IsSpam', '++'), H('CheckAntivirus', '-')])

q9 = A(uid=U('1034', '*'))

q10 = A(OR(from_='[email protected]', text='"the text"'), NOT(OR(A(answered=False), A(new=True))), to='[email protected]')
```

**[Save attachments](https://github.com/ikvk/imap_tools/blob/master/examples/email_attachments_to_files.py)**

```python
from imap_tools import MailBox

with MailBox('imap.my.ru').login('acc', 'pwd', 'INBOX') as mailbox:
    for msg in mailbox.fetch():
        for att in msg.attachments:
            print(att.filename, att.content_type)
            with open('C:/1/{}'.format(att.filename), 'wb') as f:
                f.write(att.payload)
```

**[Action with directories](https://github.com/ikvk/imap_tools?tab=readme-ov-file#actions-with-folders)**

```python
from imap_tools import MailBox

with MailBox('imap.mail.com').login('[email protected]', 'pwd') as mailbox:

    # LIST: get all subfolders of the specified folder (root by default)
    for f in mailbox.folder.list('INBOX'):
        print(f)  # FolderInfo(name='INBOX|cats', delim='|', flags=('\\Unmarked', '\\HasChildren'))

    # SET: select folder for work
    mailbox.folder.set('INBOX')

    # GET: get selected folder
    current_folder = mailbox.folder.get()

    # CREATE: create new folder
    mailbox.folder.create('INBOX|folder1')

    # EXISTS: check is folder exists (shortcut for list)
    is_exists = mailbox.folder.exists('INBOX|folder1')

    # RENAME: set new name to folder
    mailbox.folder.rename('folder3', 'folder4')

    # SUBSCRIBE: subscribe/unsubscribe to folder
    mailbox.folder.subscribe('INBOX|папка два', True)

    # DELETE: delete folder
    mailbox.folder.delete('folder4')

    # STATUS: get folder status info
    stat = mailbox.folder.status('some_folder')
    print(stat)  # {'MESSAGES': 41, 'RECENT': 0, 'UIDNEXT': 11996, 'UIDVALIDITY': 1, 'UNSEEN': 5}

```

**[Fetch by pages](https://github.com/ikvk/imap_tools/blob/master/examples/fetch_by_pages.py)**

```python
from imap_tools import MailBox

with MailBox('imap.mail.com').login('[email protected]', 'pwd', 'INBOX') as mailbox:
    criteria = 'ALL'
    found_nums = mailbox.numbers(criteria)
    page_len = 3
    pages = int(len(found_nums) // page_len) + 1 if len(found_nums) % page_len else int(len(found_nums) // page_len)
    for page in range(pages):
        print('page {}'.format(page))
        page_limit = slice(page * page_len, page * page_len + page_len)
        print(page_limit)
        for msg in mailbox.fetch(criteria, bulk=True, limit=page_limit):
            print(' ', msg.date, msg.uid, msg.subject)
```

**References**
- [Source](https://github.com/ikvk/imap_tools)
- [Docs](https://github.com/ikvk/imap_tools)
- [Examples](https://github.com/ikvk/imap_tools/tree/master/examples)

fix(kubernetes_debugging#Network debugging): Network debugging with kubeshark

NOTE: maybe [kubeshark](https://github.com/kubeshark/kubeshark) is a better solution

feat(wireguard#NixOS): Install in NixOS

Follow the guides of the next references:

- https://nixos.wiki/wiki/WireGuard
- https://wiki.archlinux.org/title/WireGuard
- https://alberand.com/nixos-wireguard-vpn.html

feat(zfs#Clean the space of a ZFS pool): Clean the space of a ZFS pool

It doesn't matter how big your disks are, you'll eventually reach their limit before you can buy new disks. It's then time to clean up some space.

**Manually remove data**

*See which datasets are using more space for their data*

To sort the datasets by the amount of space they use for their data, use `zfs list -o space -s usedds`

*Clean it up*

Then you can go dataset by dataset using `ncdu` cleaning up.

**See which datasets are using more space for their backups**

To sort the datasets by the amount of space they use for their backups, use `zfs list -o space -s usedsnap`

**See the differences between a snapshot and the contents of the dataset**

To compare the contents of a ZFS snapshot with the current dataset and identify files or directories that have been removed, you can use the `zfs diff` command. Here's how you can do it:

- First, find the snapshot name using the following command:

```bash
zfs list -t snapshot dataset_name
```

- Then, compare the contents of the snapshot with the current dataset (replace `<snapshot_name>` with your snapshot name):

```bash
zfs diff <dataset>@<snapshot_name> <dataset>
```

For example:

```bash
zfs diff tank/mydataset@snap1
```

The output will show files and directories that have been removed (`-`), modified (`M`), or renamed (`R`). Here's an example:

```
-     4 /path/to/removedfile.txt
```

If you want to see only the deleted files, you can pipe the output through `grep`:

```bash
zfs diff <dataset>@<snapshot_name> | grep '^-'
```

This will help you identify which files or directories were in the snapshot but are no longer in the current dataset.

feat(logql#Make a regexp case insensitive): Make a regexp case insensitive

To make a regex filter case insensitive, you can use the `(?i)` flag within the regex pattern.

```
(?i)(error|warning)
```

This pattern will match "error" or "warning" in any case (e.g., "Error", "WARNING", etc.).

When using it in a Loki query, it would look like this:

```plaintext
{job="your-job-name"} |=~ "(?i)(error|warning)"
```

This query will filter logs from `your-job-name` to show only those that contain "error" or "warning" in a case-insensitive manner.

fix(mediatracker#Add missing books): Add required steps to add missing books

- Register an account in openlibrary.org
- [Add the book](https://openlibrary.org/books/add)
- Then add it to mediatracker

feat(memoria_historica#Movimiento obrero): Recommend a podcast about the workers' movement

- [La Olimpiada Popular, rebeldía obrera contra los fascismos](https://www.rtve.es/play/audios/documentos-rne/olimpiada-popular-rebeldia-obrera-contra-fascismos-19-07-24/16192458/)

feat(openwebui): Introduce Open WebUI

Open WebUI is an extensible, self-hosted interface for AI that adapts to your workflow, all while operating entirely offline.

Pros:

  - The web UI works both with llama and the ChatGPT API
  - Made with Python
  - They recommend watchtower

**[Installation](https://docs.openwebui.com/getting-started/)**

**Troubleshooting**

**OAuth returns errors when logging in**

What worked for me was to repeat the login process until it went through.

But I'm not the only one having this issue [1](https://github.com/open-webui/open-webui/discussions/4940), [2](https://github.com/open-webui/open-webui/discussions/4685)

**References**
- [Home](https://openwebui.com/)
- [Docs](https://docs.openwebui.com/)
- [Source](https://github.com/open-webui/open-webui)

feat(palestine): Add a site aggregating news and mobilisations for Palestine

- [Actua por Palestina](https://porpalestina.org/)

feat(parkour): Add funny parkour parody

- [The office parkour parody](https://www.youtube.com/watch?v=0Kvw2BPKjz0)

feat(pentesting#Tools): Add vulnhuntr

- [vulnhuntr](https://github.com/protectai/vulnhuntr): Vulnhuntr leverages the power of LLMs to automatically create and analyze entire code call chains starting from remote user input and ending at server output for detection of complex, multi-step, security-bypassing vulnerabilities that go far beyond what traditional static code analysis tools are capable of performing.

  It creates the 0days directly using LLMs

feat(playwright): Introduce playwright

[Playwright](https://playwright.dev/python/) is a modern automation library developed by Microsoft (buuuuh!) for testing web applications. It provides a powerful API for controlling web browsers, allowing developers to perform end-to-end testing, automate repetitive tasks, and gather insights into web applications. Playwright supports multiple browsers and platforms, making it a versatile tool for ensuring the quality and performance of web applications.

**Key features**

*Cross-browser testing*

Playwright supports testing across major browsers including:

- Google Chrome and Chromium-based browsers
- Mozilla Firefox
- Microsoft Edge
- WebKit (the engine behind Safari)

This cross-browser support ensures that your web application works consistently across different environments.
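
As a minimal sketch (assuming the corresponding engines have been installed with `playwright install`), switching browsers only means changing the launcher:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # The same flow can be run against Chromium, Firefox and WebKit
    for browser_type in (p.chromium, p.firefox, p.webkit):
        browser = browser_type.launch()
        page = browser.new_page()
        page.goto("https://example.com")
        print(browser_type.name, page.title())
        browser.close()
```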

*Headless mode*

Playwright allows you to run browsers in headless mode, which means the browser runs without a graphical user interface. This is particularly useful for continuous integration pipelines where you need to run tests on a server without a display.

*Auto-waiting*

Playwright has built-in auto-waiting capabilities that ensure elements are ready before interacting with them. This helps in reducing flaky tests caused by timing issues and improves test reliability.

*Network interception*

Playwright provides the ability to intercept and modify network requests. This feature is valuable for testing how your application behaves with different network conditions or simulating various server responses.
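
A small sketch of what this looks like with the sync API (the URL pattern and the mocked payload are made up for illustration):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    # Abort all image requests to speed up the run
    page.route("**/*.{png,jpg,jpeg,svg}", lambda route: route.abort())

    # Answer a (hypothetical) API endpoint with a canned JSON response
    page.route(
        "**/api/items",
        lambda route: route.fulfill(
            status=200, content_type="application/json", body='[{"id": 1}]'
        ),
    )

    page.goto("https://example.com")
    browser.close()
```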

*Powerful selectors*

Playwright offers a rich set of selectors to interact with web elements. You can use CSS selectors, XPath expressions, and even text content to locate elements. This flexibility helps in accurately targeting elements for interaction.
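
For example (using the same site as the testing example below), the different selector styles can be mixed freely:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://playwright.dev/")

    # Role-based selector
    page.get_by_role("link", name="Get started").click()

    # CSS selector
    print(page.locator("h1").first.inner_text())

    # Text content
    print(page.get_by_text("Installation").first.is_visible())

    browser.close()
```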

*Multiple language support*

Playwright supports multiple programming languages including:

- JavaScript/TypeScript
- Python
- C#
- Java

This allows teams to write tests in their preferred programming language.

**Installation**

To get started with Playwright, you'll need to install it via pip. Here's how to install Playwright for Python:

```bash
pip install playwright
playwright install chromium
```

The last line installs the browsers inside `~/.cache/ms-playwright/`.

**Usage**

**Basic example**

Here's a simple example of using Playwright with Python to automate a browser:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Launch a new browser instance
    browser = p.chromium.launch()

    # Create a new browser context and page
    context = browser.new_context()
    page = context.new_page()

    # Navigate to a webpage
    page.goto('https://example.com')

    # Take a screenshot
    page.screenshot(path='screenshot.png')

    # Close the browser
    browser.close()
```

**[A testing example](https://playwright.dev/python/docs/intro#add-example-test)**

```python
import re
from playwright.sync_api import Page, expect

def test_has_title(page: Page):
    page.goto("https://playwright.dev/")

    # Expect a title "to contain" a substring.
    expect(page).to_have_title(re.compile("Playwright"))

def test_get_started_link(page: Page):
    page.goto("https://playwright.dev/")

    # Click the get started link.
    page.get_by_role("link", name="Get started").click()

    # Expects page to have a heading with the name of Installation.
    expect(page.get_by_role("heading", name="Installation")).to_be_visible()
```

**References**

- [Home](https://playwright.dev/python/)
- [Docs](https://playwright.dev/python/docs/intro)
- [Source](https://github.com/microsoft/playwright-python)
- [Video tutorials](https://playwright.dev/python/community/learn-videos)

feat(privacy_threat_modeling): Introduce Linddun privacy framework

- [Linddun privacy framework](https://linddun.org/)

feat(python_imap): Introduce python libraries to interact with IMAP

In Python, there are several libraries available for interacting with IMAP servers to fetch and manipulate emails. Some popular ones include:

**imaplib**

This is the built-in IMAP client library in Python's standard library (`imaplib`). It provides basic functionality for connecting to an IMAP server, listing mailboxes, searching messages, fetching message headers, and more.

The [documentation](https://docs.python.org/3/library/imaplib.html) is awful to read. I'd use it only if you can't or don't want to install other, more friendly libraries.

*Usage*

```python
import imaplib

mail = imaplib.IMAP4_SSL('imap.example.com')
mail.login('username', 'password')
mail.select('INBOX')
status, message_ids = mail.search(None, 'ALL')
```

*References*

- [Docs](https://docs.python.org/3/library/imaplib.html)
- [Usage article](https://medium.com/@juanrosario38/how-to-use-pythons-imaplib-to-check-for-new-emails-continuously-b0c6780d796d)

**imapclient**

This is a higher-level library built on top of imaplib. It provides a more user-friendly API, reducing the complexity of interacting with IMAP servers.

Its docs are better than the standard library's, but they are old-fashioned and not very extensive. It has 500 stars on GitHub, the last commit was 3 months ago, and the last release was in December 2023 (as of October 2024).
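
*Usage*

A minimal sketch of the basics (the hostname and credentials are placeholders):

```python
from imapclient import IMAPClient

with IMAPClient('imap.example.com') as client:
    client.login('username', 'password')
    client.select_folder('INBOX')
    # Search for unseen messages and print their subjects
    for uid, data in client.fetch(client.search('UNSEEN'), ['ENVELOPE']).items():
        print(uid, data[b'ENVELOPE'].subject)
```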

*References*

- [Source](https://github.com/mjs/imapclient/)
- [Docs](https://imapclient.readthedocs.io/en/3.0.0/)

**[`imap_tools`](imap_tools.md)**

`imap-tools` is a high-level IMAP client library for Python, providing a simple and intuitive API for common email tasks like fetching messages, flagging emails as read/unread, labeling/moving/deleting emails, searching/filtering emails, and more.

Its interface looks the most pleasant and it has the most powerful features; the last commit was 3 weeks ago, it has 700 stars, the last release was in August 2024, and it has type hints.

*Usage*

```python
from imap_tools import MailBox

with MailBox('imap.example.com').login('username', 'password') as mailbox:
    # Fetch the messages in the selected folder (INBOX by default)
    for msg in mailbox.fetch():
        print(msg.uid, msg.subject)
```

*References*
- [Source](https://github.com/ikvk/imap_tools)
- [Docs](https://github.com/ikvk/imap_tools)
- [Examples](https://github.com/ikvk/imap_tools/tree/master/examples)

**pyzmail**

`pyzmail` is a powerful library for reading and parsing mail messages in Python, supporting both POP3 and IMAP protocols.

It has 60 stars on GitHub and the last commit was 9 years ago, so it's a dead project.

*Usage*

```python
import imaplib

import pyzmail

# pyzmail parses raw messages; fetch them first with imaplib
mail = imaplib.IMAP4_SSL('imap.example.com')
mail.login('username', 'password')
mail.select('INBOX')
status, message_ids = mail.search(None, 'ALL')
status, data = mail.fetch(message_ids[0].split()[0], '(RFC822)')

message = pyzmail.PyzMessage.factory(data[0][1])
print(message.get_subject())
```

*References*
- [Home](https://www.magiksys.net/pyzmail/)
- [Source](https://github.com/aspineux/pyzmail)

**Conclusion**

If you don't want to install any additional library, go with `imaplib`; otherwise use [`imap_tools`](imap_tools.md).

feat(python_logging#Configure the logging module to use logfmt): Configure the logging module to use logfmt

To configure the Python `logging` module to use `logfmt` for logging output, you can use a custom logging formatter. The `logfmt` format is a structured logging format that uses key-value pairs, making it easier to parse logs. Here’s how you can set up logging with `logfmt` format:

```python
import logging

class LogfmtFormatter(logging.Formatter):
    """Custom formatter to output logs in logfmt style."""

    def format(self, record: logging.LogRecord) -> str:
        log_message = (
            f"level={record.levelname.lower()} "
            f"logger={record.name} "
            f'msg="{record.getMessage()}"'
        )
        return log_message

def setup_logging() -> None:
    """Configure logging to use logfmt format."""
    # Create a console handler
    console_handler = logging.StreamHandler()

    # Create a LogfmtFormatter instance
    logfmt_formatter = LogfmtFormatter()

    # Set the formatter for the handler
    console_handler.setFormatter(logfmt_formatter)

    # Get the root logger and set the level
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    logger.addHandler(console_handler)

if __name__ == "__main__":
    setup_logging()

    # Example usage
    logging.info("This is an info message")
    logging.warning("This is a warning message")
    logging.error("This is an error message")
```

feat(renfe): Monitor Renfe ticket availability

Renfe sometimes takes a long time to release its tickets, and it's a pain to keep checking the site to see whether they're out yet, so I've automated it.

**Installation**

If you want to use it you'll have to tweak at least the following lines:

- Where the email addresses are defined (`@example.org`)
- The travel dates: look for the string `1727992800000`; you can build your own with a command like `echo $(date -d "2024-10-04" +%s)000`
- The apprise configuration (`mailtos`)
- The text to type into the origin (`#origin`) and destination (`#destination`) fields
- The month you want to travel in (`octubre2024`)

At some point I may feel like making this a bit more usable.

```python
import time
import logging
import traceback
from typing import List
import apprise
from playwright.sync_api import sync_playwright

class LogfmtFormatter(logging.Formatter):
    """Custom formatter to output logs in logfmt style."""

    def format(self, record: logging.LogRecord) -> str:
        log_message = (
            f"level={record.levelname.lower()} "
            f"logger={record.name} "
            f'msg="{record.getMessage()}"'
        )
        return log_message

def setup_logging() -> None:
    """Configure logging to use logfmt format."""
    # Create a console handler
    console_handler = logging.StreamHandler()

    # Create a LogfmtFormatter instance
    logfmt_formatter = LogfmtFormatter()

    # Set the formatter for the handler
    console_handler.setFormatter(logfmt_formatter)

    # Get the root logger and set the level
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    logger.addHandler(console_handler)

def send_email(
    title: str, body: str, recipients: List[str] = ["[email protected]"]
) -> None:
    """
    Sends an email notification using Apprise if the specified text is not found.
    """
    apobj = apprise.Apprise()
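    # NOTE: this URL is a placeholder (it is not an f-string): replace user, password,
    # domain, smtp_server and the recipient list with your own values, as explained in
    # the installation notes above.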
    apobj.add(
        "mailtos://{user}:{password}@{domain}:587?smtp={smtp_server}&to={','.join(recipients)}"
    )
    apobj.notify(
        body=body,
        title=title,
    )
    log.info("Email notification sent")

def check_if_trenes() -> None:
    """
    Main function to automate browser interactions and check for specific text.
    """
    log.info("Arrancando el navegador")
    pw = sync_playwright().start()
    chrome = pw.chromium.launch(headless=True)
    context = chrome.new_context(viewport={"width": 1920, "height": 1080})
    page = context.new_page()

    log.info("Navigating to https://www.renfe.com/es/es")
    page.goto("https://www.renfe.com/es/es")
    page.click("#onetrust-reject-all-handler")
    page.click("#origin")
    page.fill("#origin", "Almudena")
    page.click("#awesomplete_list_1_item_0")

    page.click("#destination")
    page.fill("#destination", "Vigo")
    page.click("#awesomplete_list_2_item_0")
    page.evaluate("document.getElementById('first-input').click()")

    while True:
        months = page.locator(
            "div.lightpick__month-title span.rf-daterange-alternative__month-label"
        ).all_text_contents()
        if months[0] == "octubre2024":
            break

        page.click("button.lightpick__next-action")

    # To get other dates use: echo $(date -d "2024-10-04" +%s)000
    page.locator('div.lightpick__day[data-time="1727992800000"]').click()
    page.locator('div.lightpick__day[data-time="1728165600000"]').click()
    page.click("button.lightpick__apply-action-sub")
    page.evaluate("window.scrollTo(0, 0);")
    page.locator('button[title="Buscar billete"]').click()
    page.locator("div#trayectoiSinTren p").wait_for(state="visible")

    time.sleep(1)
    no_hay_trenes = page.locator(
        "div", has_text="No hay trenes para los criterios seleccionados"
    ).all_text_contents()

    if len(no_hay_trenes) != 5:
        send_email(
            title="Puede que haya trenes para vigo",
            body="Corred insensatos!",
            recipients=["[email protected]", "[email protected]"],
        )
        log.warning("Puede que haya trenes")
    else:
        log.info("Sigue sin haber trenes")

def main():
    setup_logging()
    global log
    log = logging.getLogger(__name__)
    try:
        check_if_trenes()
    except Exception as error:
        error_message = "".join(
            traceback.format_exception(None, error, error.__traceback__)
        )
        send_email(title="[ERROR] Corriendo el script de renfe", body=error_message)
        raise error

if __name__ == "__main__":
    main()
```

**Cron**

Create a virtualenv and install the dependencies:

```bash
cd renfe
virtualenv .env
source .env/bin/activate
pip install apprise playwright
```

Install the browsers:

```bash
playwright install chromium
```

Create the script for the cron job (`renfe.sh`):

```bash
#!/bin/bash

source /home/lyz/renfe/.env/bin/activate

systemd-cat -t renfe python3 /home/lyz/renfe/renfe.py

deactivate
```

And edit the crontab:

```cron
13 */6 * * * /bin/bash /home/lyz/renfe/renfe.sh
```

This will run it every 6 hours.

**Monitoring**

To make sure everything is working properly you can use the following [loki](loki.md) alerts:

```yaml
groups:
  - name: cronjobs
    rules:
      - alert: RenfeCronDidntRun
        expr: |
          (count_over_time({job="systemd-journal", syslog_identifier="renfe"} |= `Sigue sin haber trenes` [24h]) or on() vector(0)) == 0
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: "El checkeo de los trenes de renfe no ha terminado en las últimas 24h en {{ $labels.hostname}}"
      - alert: RenfeCronError
        expr: |
          count(rate({job="systemd-journal", syslog_identifier="renfe"} | logfmt | level != `info` [5m])) or vector(0)
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: "Se han detectado errores en los logs del script {{ $labels.hostname}}"

```

feat(roadmap_adjustment#Area review): Area review

It may be useful to ask the following questions of your own life. It doesn't matter if answers aren't immediately forthcoming; the point is to "live the questions". Even asking them with any sincerity is already a great step.

**What does your desire tell you about the area?**

Stop and really ask your g…