From 6a69f956696ccaecd83504a32865529e42ec6197 Mon Sep 17 00:00:00 2001
From: Lyz
Date: Fri, 10 Nov 2023 20:47:06 +0100
Subject: [PATCH] feat(anki#What to do with unneeded cards): What to do with unneeded cards
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

You have three options:

- Suspend: Stops the card from showing up until you reactivate it through the browser.
- Bury: Just delays it until the next day.
- Delete: Deletes it forever.

Unless you're certain that you are no longer going to need it, suspend it.

feat(anki#Configure self hosted synchronization): Configure self hosted synchronization

Explain how to install `anki-sync-server` and how to configure Ankidroid and Anki. In the end I dropped this path and used Ankidroid alone with syncthing, as I didn't need to interact with the decks from the computer. Also, the Anki synchronization ecosystem as of 2023-11-10 is confusing: there are many servers available, not all of them are compatible with the clients, and Anki itself has released its own, so some of the community ones will eventually die.

feat(bash_snippets#Loop through a list of files found by find): Loop through a list of files found by find

For simple loops use the `find -exec` syntax:

```bash
find . -name '*.txt' -exec process {} \;
```

For more complex loops use a `while read` construct:

```bash
find . -name "*.txt" -print0 | while read -r -d $'\0' file
do
  …code using "$file"
done
```

The loop will execute while the `find` command is running. This command will also work even if a file name is returned with whitespace in it, and you won't overflow your command line buffer. The `-print0` makes `find` use NULL as the file separator instead of a newline, and the `-d $'\0'` makes `read` use NULL as the separator while reading.

How not to do it. If you try to run the next snippet:

```bash
for file in $(find .
-name "*.txt")
do
  …code using "$file"
done
```

You'll get the next [`shellcheck`](shellcheck.md) warning:

```
SC2044: For loops over find output are fragile. Use find -exec or a while read loop.
```

You should not do this for three reasons:

- For the `for` loop to even start, the `find` must run to completion.
- If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names.
- Although unlikely nowadays, you can overrun your command line buffer. Imagine your command line buffer holds 32KB and your `for` loop returns 40KB of text: the last 8KB will be dropped right off your loop and you'll never know it.

feat(pytest#Stop pytest right at the start if condition not met): Stop pytest right at the start if condition not met

Use the `pytest_configure` [initialization hook](https://docs.pytest.org/en/4.6.x/reference.html#initialization-hooks). In your global `conftest.py`:

```python
import requests
import pytest


def pytest_configure(config):
    try:
        requests.get('http://localhost:9200')
    except requests.exceptions.ConnectionError:
        msg = 'FATAL. Connection refused: ES does not appear to be installed as a service (localhost port 9200)'
        pytest.exit(msg)
```

- Note that the single argument of `pytest_configure` has to be named `config`.
- Using `pytest.exit` makes the output look nicer.

feat(python_docker#Using PDM): Dockerize a PDM application

It is possible to use PDM in a multi-stage Dockerfile to first install the project and dependencies into `__pypackages__` and then copy this folder into the final stage, adding it to `PYTHONPATH`.
```dockerfile
FROM python:3.11-slim-bookworm AS builder

RUN pip install pdm

COPY pyproject.toml pdm.lock README.md /project/
COPY src/ /project/src

WORKDIR /project
RUN mkdir __pypackages__ && pdm sync --prod --no-editable

FROM python:3.11-slim-bookworm

ENV PYTHONPATH=/project/pkgs
COPY --from=builder /project/__pypackages__/3.11/lib /project/pkgs
COPY --from=builder /project/__pypackages__/3.11/bin/* /bin/

CMD ["python", "-m", "project"]
```

feat(python_snippets#Configure the logging of a program to look nice): Configure the logging of a program to look nice

```python
import logging
import sys

import telebot


def load_logger(verbose: bool = False) -> None:  # pragma: no cover
    """Configure the Logging logger.

    Args:
        verbose: Set the logging level to Debug.
    """
    logging.addLevelName(logging.INFO, "\033[36mINFO\033[0m")
    logging.addLevelName(logging.ERROR, "\033[31mERROR\033[0m")
    logging.addLevelName(logging.DEBUG, "\033[32mDEBUG\033[0m")
    logging.addLevelName(logging.WARNING, "\033[33mWARNING\033[0m")

    if verbose:
        logging.basicConfig(
            format="%(asctime)s %(levelname)s %(name)s: %(message)s",
            stream=sys.stderr,
            level=logging.DEBUG,
            datefmt="%Y-%m-%d %H:%M:%S",
        )
        telebot.logger.setLevel(logging.DEBUG)  # Outputs debug messages to console.
    else:
        logging.basicConfig(
            stream=sys.stderr, level=logging.INFO, format="%(levelname)s: %(message)s"
        )
```

feat(python_snippets#Get the modified time of a file with Pathlib): Get the modified time of a file with Pathlib

```python
from pathlib import Path

file_ = Path('/to/some/file')
file_.stat().st_mtime
```

You can also access:

- Created time: with `st_ctime` (beware: on Unix this is actually the inode change time, not the creation time)
- Accessed time: with `st_atime`

They are timestamps, so if you want to compare them with a datetime object use the `timestamp` method:

```python
assert datetime.now().timestamp() - file_.stat().st_mtime < 60
```

feat(collaborating_tools): Introduce collaborating tools

Collaborating document creation:

- https://pad.riseup.net
- https://rustpad.io.
[Can be self hosted](https://github.com/ekzhang/rustpad)

Collaborating through terminals:

- [sshx](https://sshx.io/) looks promising, although I think it uses their servers to do the connection, which is troublesome.

fix(kubernetes_tools#Tried): Recommend rke2 over k3s

A friend told me that it works better.

feat(emojis#Most used): Create a list of most used emojis

```
¯\(°_o)/¯
¯\_(ツ)_/¯
(╯°□°)╯ ┻━┻
\\ ٩( ᐛ )و //
(✿◠‿◠)
(/゚Д゚)/
(¬º-°)¬
(╥﹏╥)
ᕕ( ᐛ )ᕗ
ʕ•ᴥ•ʔ
( ˘ ³˘)♥
❤
```

feat(gitea#Run jobs if other jobs failed): Run jobs if other jobs failed

This is useful to send notifications if any of the jobs failed. [Right now](https://github.com/go-gitea/gitea/issues/23725) you can't run a job if other jobs fail; all you can do is add a last step on each workflow to do the notification on failure:

```yaml
- name: Send mail
  if: failure()
  uses: https://github.com/dawidd6/action-send-mail@v3
  with:
    to: ${{ secrets.MAIL_TO }}
    from: Gitea
    subject: ${{ gitea.repository }} ${{gitea.workflow}} ${{ job.status }}
    priority: high
    convert_markdown: true
    html_body: |
      ### Job ${{ job.status }}

      ${{ github.repository }}: [${{ github.ref }}@${{ github.sha }}](${{ github.server_url }}/${{ github.repository }}/actions)
```

feat(grapheneos#Split the screen): Split the screen

Go into the app switcher, tap on the app icon above the active app and then select "Split top".

feat(how_to_code): Personal evolution on how I code

Over the years I've tried different ways of developing my code:

- Mindless coding: write code as you need to make it work, with no tests, documentation or any quality measure.
- TDD.
- Try to abstract everything to minimize the duplication of code between projects.

Each has its advantages and disadvantages. After trying them all, and given that right now I only have short spikes of energy and time to invest in coding, my plan is to:

- Make the minimum effort to design the minimum program able to solve the problem at hand.
This design will be represented in an [orgmode](orgmode.md) task.
- Write the minimum code to make it work without thinking of tests or generalization, but with the [domain driven design](domain_driven_design.md) concepts so the code remains flexible and maintainable.
- Once it's working, see if I have time to improve it:
  - Create the tests to cover the critical functionality (no more 100% coverage).
  - If I need to make a package or the program evolves into something complex, I'd use [this scaffold template](https://github.com/lyz-code/cookiecutter-python-project).

Once the spike is over I'll wait for a new spike to come, either because I have time or because something breaks and I need to fix it.

feat(life_analysis): Introduce the analysis of life process

It's interesting to do the analyses at representative moments of the year: it gives them an emotional weight. You can for example use the solstices, or my personal version of them:

- Spring analysis (1st of March): For me spring is the real start of the year; it's when life explodes after the stillness of winter. The sun starts to set late enough that you have light in the afternoons, the climate gets warmer, inviting you to be outside more, and nature is blooming with new leaves and flowers. It is then a moment to build new projects and set the current year on track.
- Summer analysis (1st of June): I hate heat, so summer is a moment of retreat. Everyone temporarily stops their lives, we go on holidays and all social projects slow their pace. Even the news has fewer interesting things to report. It's so hot outside that some of us seek the cold refuge of home or of remote holiday places. Days are long and people love to hang out till late, so usually you wake up later, having less time to actually do stuff. Even in the moments when you are alone, the heat drains your energy to be productive. It is then a moment to relax and gather forces for the next trimester.
It's also perfect to develop *easy* and *chill* personal projects that have been forgotten in a drawer. Lower your expectations and just flow with what your body asks you.
- Autumn analysis (1st of September): September is another key moment for many people. We have it hardcoded in our lives since we were children, as it was the start of school. People feel energized after the summer holidays and are eager to get back to their lives and stopped projects. You're already 6 months into the year, so it's a good moment to review your year plan and decide how you want to invest your energy reserves.
- Winter analysis (1st of December): December is the cue that the year is coming to an end. The days grow shorter and colder; they basically invite you to enjoy a cup of tea under a blanket. It is then a good time to get into your cave, do an introspective analysis of the whole year and prepare the ground for the coming year.

We see then that the year is divided into two pairs of an expansion trimester and a retreat one. We can use this information to plan our tasks accordingly: in the expansion trimesters we can invest more energy in planning, and in the retreat ones we can do more thorough reviews.

feat(life_planning#month-plan): Introduce the month planning process

The objectives of the month plan are:

- Define the month objectives according to the trimester plan and the insights gathered in the past month review.
- Make your backlog and todo list match the month objectives.
- Define the philosophical topics to address.
- Define the topics to learn.
- Define the habits to incorporate.
- Define the checks you want to do at the end of the month.
- Plan when the next review is going to take place.

It's interesting to do the planning on meaningful days, such as the first day of the month. Usually we don't have enough flexibility in our lives to do it exactly that day, so schedule it as close as you can to that date.
It's a good idea to do both the review and the planning on the same day.

We'll divide the planning process into these phases:

- Prepare
- Clarify your state
- Decide the month objectives

Prepare:

It's important that you prepare your environment for the planning. You need to be present and fully focused on the process itself. To do so you can:

- Make sure you don't get interrupted:
  - Check your task manager tools to make sure that you don't have anything urgent to address in the next hour.
  - Disable all notifications.
- Set your analysis environment:
  - Put on the music that helps you get *in the zone*.
  - Get all the things you may need for the review:
    - The checklist that defines the process of your planning (this document in my case).
    - Somewhere to write down the insights.
    - Your task manager system.
    - Your habit manager system.
    - Your *Objective list*.
    - Your *Thinking list*.
    - Your *Reading list*.
  - Remove from your environment everything else that may distract you.

Clarify your state:

To be able to make a good decision on your month's path you need to sort out what your current state is. To do so:

- Clean your inbox: Refile each item until it's empty.
- Clean your todo: Review each todo element and decide whether it should still be there. If it should and it belongs to a month objective, add it to that objective. If it doesn't need to be in the todo, refile it.
- Clean your someday: Review each relevant someday element (not the ones that are archived at levels greater than the month) and decide whether it should be refiled elsewhere and whether it's part of a month objective that should be dealt with this month.
- Address each of the trimester objectives by creating month objectives that get you closer to the desired objective.

Decide the next steps:

For each of your month objectives:

- Decide whether it makes sense to address it this month. If not, archive it.
- Create a clear plan of action for this month on that objective.
- Tweak your *things to think about list*.
- Tweak your *reading list*.
- Tweak your *habit manager system*.

feat(linux_snippets#Accept new ssh keys by default): Accept new ssh keys by default

While common wisdom is not to disable host key checking, there is a built-in option in SSH itself to do something less drastic. It is relatively unknown, since it's new (added in OpenSSH 6.5). This is done with `-o StrictHostKeyChecking=accept-new`. Or, if you want to use it for all hosts, you can add the next lines to your `~/.ssh/config`:

```
Host *
  StrictHostKeyChecking accept-new
```

WARNING: use this only if you absolutely trust the IP/hostname you are going to SSH to:

```bash
ssh -o StrictHostKeyChecking=accept-new mynewserver.example.com
```

Note that `StrictHostKeyChecking=no` will add the public key to `~/.ssh/known_hosts` even if the key was changed; `accept-new` is only for new hosts. From the man page:

> If this flag is set to “accept-new” then ssh will automatically add new host keys to the user known hosts files, but will not permit connections to hosts with changed host keys. If this flag is set to “no” or “off”, ssh will automatically add new host keys to the user known hosts files and allow connections to hosts with changed hostkeys to proceed, subject to some restrictions. If this flag is set to ask (the default), new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed. The host keys of known hosts will be verified automatically in all cases.

feat(linux_snippets#Do not add trailing / to ls): Do not add trailing / to ls

Probably, your `ls` is aliased or defined as a function in your config files. Use the full path to `ls` like:

```bash
/bin/ls /var/lib/mysql/
```

feat(linux_snippets#Convert png to svg): Convert png to svg

Inkscape has got an awesome auto-tracing tool.
- Install Inkscape with `sudo apt-get install inkscape`.
- Import your image.
- Select it.
- From the menu bar, select Path > Trace Bitmap.
- Adjust the tracing parameters as needed.
- Save as svg.

Check their [tracing tutorial](https://inkscape.org/en/doc/tutorials/tracing/tutorial-tracing.html) for more information. Once you are comfortable with the tracing options, you can automate the process with the [Inkscape CLI](https://inkscape.org/en/doc/inkscape-man.html).

feat(linux_snippets#Redirect stdout and stderr of a cron job to a file): Redirect stdout and stderr of a cron job to a file

```
*/1 * * * * /home/ranveer/vimbackup.sh >> /home/ranveer/vimbackup.log 2>&1
```

feat(linux_snippets#Error when unmounting a device Target is busy): Error when unmounting a device Target is busy

- Check the processes that are using the mountpoint with `lsof /path/to/mountpoint`.
- Kill those processes.
- Try the umount again.

If that fails, you can use `umount -l`.

feat(loki#installation): How to install loki

There are [many ways to install Loki](https://grafana.com/docs/loki/latest/setup/install/). We're going to do it using `docker-compose`, taking [their example as a starting point](https://raw.githubusercontent.com/grafana/loki/v2.9.1/production/docker-compose.yaml) and complementing our already existing [grafana docker-compose](grafana.md#installation).

It makes use of the [environment variables to configure Loki](https://grafana.com/docs/loki/latest/configure/#configuration-file-reference), that's why we have the `-config.expand-env=true` flag in the command line launch.
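The Loki side of that `docker-compose.yaml` could look roughly like this (a sketch based on their linked v2.9.1 example; the config file path, volume name and port mapping are assumptions you should adapt to your own grafana compose file):

```yaml
services:
  loki:
    image: grafana/loki:2.9.1
    # -config.expand-env=true lets the config file reference environment variables
    command: -config.file=/etc/loki/local-config.yaml -config.expand-env=true
    ports:
      - "3100:3100"
    volumes:
      - loki-data:/loki
    restart: unless-stopped

volumes:
  loki-data: {}
```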
In the grafana datasources directory add `loki.yaml`:

```yaml
---
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    orgId: 1
    url: http://loki:3100
    basicAuth: false
    isDefault: true
    version: 1
    editable: false
```

[Storage configuration](https://grafana.com/docs/loki/latest/storage/):

Unlike other logging systems, Grafana Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels). Log data itself is then compressed and stored in chunks in object stores such as S3 or GCS, or even locally on the filesystem. A small index and highly compressed chunks simplify the operation and significantly lower the cost of Loki.

Loki 2.0 brings an index mechanism named 'boltdb-shipper', which is what we now call Single Store. This type only requires one store, the object store, for both the index and chunks. Loki 2.8 adds TSDB as a new mode for the Single Store; it is now the recommended way to persist data in Loki as it improves query performance, reduces TCO and has feature parity with 'boltdb-shipper'.

feat(orgzly#Avoid the conflicts in the files edited in two places): Avoid the conflicts in the files edited in two places

If you use syncthing you may be seeing conflicts in your files. This happens especially if you use the Orgzly widget to add tasks, because it doesn't synchronize the files to the directory when using the widget. If you have a file that changes a lot on a device, for example the `inbox.org` of my mobile, it's interesting to have a specific file that's edited mainly on the mobile; when you want to edit it elsewhere, sync as specified below and then proceed with the editing. Once you're done, manually sync the changes in Orgzly again. The rest of the files synced to the mobile are for read-only reference, so they rarely change.

If you want to sync while reducing the chance of conflicts:

- Open Orgzly and press Synchronize.
- Open Syncthing.
If that's not enough, [check these automated solutions](https://github.com/orgzly/orgzly-android/issues/8):

- [Orgzly auto syncronisation for sync tools like syncthing](https://gist.github.com/fabian-thomas/6f559d0b0d26737cf173e41cdae5bfc8)
- [watch-for-orgzly](https://gitlab.com/doak/orgzly-watcher/-/blob/master/watch-for-orgzly?ref_type=heads)

Other interesting solutions:

- [org-orgzly](https://codeberg.org/anoduck/org-orgzly): Script to parse a chosen org file or files, check if an entry meets required parameters, and, if it does, write the entry in a new file located inside the folder you desire to sync with orgzly.
- [Git synchronization](https://github.com/orgzly/orgzly-android/issues/24): I find it more cumbersome than syncthing, but maybe it's interesting for you.

feat(orgzly#references): add new orgzly fork

[Alternative fork maintained by the community](https://github.com/orgzly-revived/orgzly-android-revived)

feat(pytelegrambotapi): Introduce pytelegrambotapi

[pyTelegramBotAPI](https://github.com/eternnoir/pyTelegramBotAPI) is a synchronous and asynchronous implementation of the [Telegram Bot API](https://core.telegram.org/bots/api).

[Installation](https://pytba.readthedocs.io/en/latest/install.html):

```bash
pip install pyTelegramBotAPI
```

feat(pytelegrambotapi#Create your bot): Create your bot

Use the `/newbot` command to create a new bot. `@BotFather` will ask you for a name and username, then generate an authentication token for your new bot.

- The `name` of your bot is displayed in contact details and elsewhere.
- The `username` is a short name, used in search, mentions and t.me links. Usernames are 5-32 characters long and not case sensitive, but may only include Latin characters, numbers, and underscores. Your bot's username must end in 'bot', like `tetris_bot` or `TetrisBot`.
- The `token` is a string, like `110201543:AAHdqTcvCH1vGWJxfSeofSAs0K5PALDsaw`, which is required to authorize the bot and send requests to the Bot API. Keep your token secure and store it safely; it can be used by anyone to control your bot.

To edit your bot, you have the next available commands:

- `/setname`: change your bot's name.
- `/setdescription`: change the bot's description (short text up to 512 characters). Users will see this text at the beginning of the conversation with the bot, titled 'What can this bot do?'.
- `/setabouttext`: change the bot's about info, a shorter text up to 120 characters. Users will see this text on the bot's profile page. When they share your bot with someone, this text is sent together with the link.
- `/setuserpic`: change the bot's profile picture.
- `/setcommands`: change the list of commands supported by your bot. Users will see these commands as suggestions when they type `/` in the chat with your bot. See commands for more info.
- `/setdomain`: link a website domain to your bot. See the login widget section.
- `/deletebot`: delete your bot and free its username. Cannot be undone.

feat(pytelegrambotapi#Synchronous TeleBot): Synchronous TeleBot

```python
import telebot

API_TOKEN = ''

bot = telebot.TeleBot(API_TOKEN)


@bot.message_handler(commands=['help', 'start'])
def send_welcome(message):
    bot.reply_to(message, """\
Hi there, I am EchoBot.
I am here to echo your kind words back to you.
Just say anything nice and I'll say the exact same thing to you!\
""")


@bot.message_handler(func=lambda message: True)
def echo_message(message):
    bot.reply_to(message, message.text)


bot.infinity_polling()
```

feat(pytelegrambotapi#Asynchronous TeleBot): Asynchronous TeleBot

```python
from telebot.async_telebot import AsyncTeleBot

bot = AsyncTeleBot('TOKEN')


@bot.message_handler(commands=['help', 'start'])
async def send_welcome(message):
    await bot.reply_to(message, """\
Hi there, I am EchoBot.
I am here to echo your kind words back to you.
Just say anything nice and I'll say the exact same thing to you!\
""")


@bot.message_handler(func=lambda message: True)
async def echo_message(message):
    await bot.reply_to(message, message.text)


import asyncio
asyncio.run(bot.polling())
```

feat(pytest-xprocess): Introduce pytest-xprocess

[`pytest-xprocess`](https://github.com/pytest-dev/pytest-xprocess) is a pytest plugin for managing external processes across test runs.

[Installation](https://pytest-xprocess.readthedocs.io/en/latest/#quickstart):

```bash
pip install pytest-xprocess
```

[Usage](https://pytest-xprocess.readthedocs.io/en/latest/#quickstart):

Define your process fixture in `conftest.py`:

```python
import pytest
from xprocess import ProcessStarter


@pytest.fixture
def myserver(xprocess):
    class Starter(ProcessStarter):
        # startup pattern
        pattern = "[Ss]erver has started!"

        # command to start process
        args = ['command', 'arg1', 'arg2']

    # ensure process is running and return its logfile
    logfile = xprocess.ensure("myserver", Starter)

    # create a connection or url/port info to the server
    conn = ...
    yield conn

    # clean up whole process tree afterwards
    xprocess.getinfo("myserver").terminate()
```

Now you can use this fixture in any test function where `myserver` needs to be up, and `xprocess` will take care of it for you.

[Matching process output with pattern](https://pytest-xprocess.readthedocs.io/en/latest/starter.html#matching-process-output-with-pattern):

In order to detect that your process is ready to answer queries, `pytest-xprocess` allows the user to provide a string pattern by setting the class variable `pattern` in the `Starter` class. `pattern` will be waited for in the process `logfile` for a maximum time defined by `timeout` before timing out in case the provided pattern is not matched. It's important to note that `pattern` is a regular expression and will be matched using Python's `re.search`.
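As a toy illustration of how that pattern is evaluated (plain `re.search` from the standard library, outside of pytest; the log line is hypothetical process output):

```python
import re

# The same startup pattern used in the fixture above.
pattern = "[Ss]erver has started!"

# A line the process might write to its logfile (hypothetical output).
log_line = "2023-11-10 20:47:06 INFO Server has started!"

# xprocess matches the pattern against each log line with re.search,
# so the pattern can appear anywhere in the line, not only at the start.
match = re.search(pattern, log_line)
print(bool(match))  # True once the process is considered ready
```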
[Controlling Startup Wait Time with timeout](https://pytest-xprocess.readthedocs.io/en/latest/starter.html#controlling-startup-wait-time-with-timeout):

Some processes naturally take longer to start than others. By default, `pytest-xprocess` will wait for a maximum of 120 seconds for a given process to start before raising a `TimeoutError`. Changing this value may be useful, for example, when the user knows that a given process would never take longer than a known amount of time to start under normal circumstances, so if it goes over this known upper boundary, something is wrong and the waiting process must be interrupted. The maximum wait time can be controlled through the class variable `timeout`:

```python
@pytest.fixture
def myserver(xprocess):
    class Starter(ProcessStarter):
        # will wait for 10 seconds before timing out
        timeout = 10
```

Passing command line arguments to your process with `args`:

In order to start a process, `pytest-xprocess` must be given a command to be passed into the `subprocess.Popen` constructor. Any arguments passed to the process command can also be passed using `args`. As an example, if I usually use the following command to start a given process:

```bash
$> myproc -name "bacon" -cores 4
```

That would look like:

```python
args = ['myproc', '-name', '"bacon"', '-cores', '4']
```

when using `args` in `pytest-xprocess` to start the same process:

```python
@pytest.fixture
def myserver(xprocess):
    class Starter(ProcessStarter):
        # will pass "myproc -name "bacon" -cores 4" to the
        # subprocess.Popen constructor so the process can be started
        # with the given arguments
        args = ['myproc', '-name', '"bacon"', '-cores', '4']

        # ...
```

feat(python_prometheus): How to create a prometheus exporter with python

[prometheus-client](https://github.com/prometheus/client_python) is the official Python client for [Prometheus](prometheus.md).
Installation:

```bash
pip install prometheus-client
```

Here is a simple script:

```python
import random
import time

from prometheus_client import start_http_server, Summary

# Create a metric to track the time spent processing requests.
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')


@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)


if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())
```

Then you can visit http://localhost:8000/ to view the metrics. From one easy-to-use decorator you get:

- `request_processing_seconds_count`: Number of times this function was called.
- `request_processing_seconds_sum`: Total amount of time spent in this function.

Prometheus's `rate` function allows calculation of both requests per second and latency over time from this data. In addition, if you're on Linux the process metrics expose CPU, memory and other information about the process for free.

feat(python-telegram): Analyze the different python libraries to interact with telegram

There are two ways to interact with Telegram through Python:

- Client libraries
- Bot libraries

Client libraries:

Client libraries use your account to interact with Telegram itself through a developer API token. The most popular one is [Telethon](https://docs.telethon.dev/en/stable/index.html).

Bot libraries:

[Telegram lists many libraries to interact with the bot API](https://core.telegram.org/bots/samples#python), the most interesting being:

- [python-telegram-bot](#python-telegram-bot)
- [pyTelegramBotAPI](#pytelegrambotapi)
- [aiogram](#aiogram)

If there comes a moment when we have to create the messages ourselves, [telegram-text](https://telegram-text.alinsky.tech/api_reference) may be an interesting library to check.
[python-telegram-bot](https://github.com/python-telegram-bot/python-telegram-bot):

Pros:

- Popular: 23k stars, 4.9k forks.
- Maintained: last commit 3 days ago.
- They have a developers community to get help in [this telegram group](https://telegram.me/pythontelegrambotgroup).
- I like how they try to minimize third party dependencies, and how you can install the complements if you need them.
- Built on top of asyncio.
- Nice docs.
- Fully supports the [Telegram bot API](https://core.telegram.org/bots/api).
- Has many examples.

Cons:

- The interface is a little verbose and complicated at first glance.
- Only to be run in a single thread (not a problem).

References:

- [Package documentation](https://docs.python-telegram-bot.org/) is the technical reference for python-telegram-bot. It contains descriptions of all available classes, modules, methods and arguments as well as the changelog.
- [Wiki](https://github.com/python-telegram-bot/python-telegram-bot/wiki/) is home to a number of more elaborate introductions to the different features of python-telegram-bot and other useful resources that go beyond the technical documentation.
- [Examples](https://docs.python-telegram-bot.org/examples.html) section contains several examples that showcase the different features of both the Bot API and python-telegram-bot.
- [Source](https://github.com/python-telegram-bot/python-telegram-bot)

[pyTelegramBotAPI](https://github.com/eternnoir/pyTelegramBotAPI):

Pros:

- Popular: 7.1k stars, 1.8k forks.
- Maintained: last commit 3 weeks ago.
- Both sync and async.
- Nicer interface, with decorators and a simpler setup.
- [They have an example on how to split long messages](https://github.com/eternnoir/pyTelegramBotAPI#sending-large-text-messages).
- [Nice docs on how to test](https://github.com/eternnoir/pyTelegramBotAPI#testing).
- They have a developers community to get help in [this telegram group](https://telegram.me/joinchat/Bn4ixj84FIZVkwhk2jag6A).
- Fully supports the [Telegram bot API](https://core.telegram.org/bots/api).
- Has examples.

Cons:

- Uses lambdas inside the decorators; I don't know why it does that.
- The docs are not as thorough as `python-telegram-bot`'s.

References:

- [Documentation](https://pytba.readthedocs.io/en/latest/index.html)
- [Source](https://github.com/eternnoir/pyTelegramBotAPI)
- [Async Examples](https://github.com/eternnoir/pyTelegramBotAPI/tree/master/examples/asynchronous_telebot)

[aiogram](https://github.com/aiogram/aiogram):

Pros:

- Popular: 3.8k stars, 717 forks.
- Maintained: last commit 4 days ago.
- Async support.
- They have a developers community to get help in [this telegram group](https://t.me/aiogram).
- Has type hints.
- Cleaner interface than `python-telegram-bot`.
- Fully supports the [Telegram bot API](https://core.telegram.org/bots/api).
- Has examples.

Cons:

- Less popular than `python-telegram-bot`.
- Docs are written at a developer level, making for a difficult initial barrier to understand how to use it.
References:

- [Documentation](https://docs.aiogram.dev/en/dev-3.x/)
- [Source](https://github.com/aiogram/aiogram)
- [Examples](https://github.com/aiogram/aiogram/tree/dev-3.x/examples)

Conclusion:

Even if `python-telegram-bot` is the most popular and has the best docs, I prefer one of the others due to their easier interfaces. `aiogram`'s documentation is kind of crap, and as this is the first time I'm making a bot I'd rather have somewhere good to look at. So I'd say to go first with `pyTelegramBotAPI` and, if it doesn't go well, fall back to `python-telegram-bot`.

feat(rocketchat): Introduce Rocketchat integrations

Rocket.Chat supports webhooks to integrate the tools and services you like into the platform. Webhooks are simple event notifications via HTTP POST. This way, any webhook application can post a message to a Rocket.Chat instance and much more. With scripts, you can point any webhook to Rocket.Chat and process the requests to print customized messages, define the username and avatar of the user of the messages and change the channel for sending messages, or you can cancel the request to prevent undesired messages.

Available integrations:

- Incoming Webhook: Let an external service send a request to Rocket.Chat to be processed.
- Outgoing Webhook: Let Rocket.Chat trigger and optionally send a request to an external service and process the response.

By default, a webhook is designed to post messages only. The message is part of a JSON structure, which has the same format as that of a .

[Incoming webhook script](https://docs.rocket.chat/use-rocket.chat/workspace-administration/integrations#incoming-webhook-script):

To create a new incoming webhook:

- Navigate to Administration > Workspace > Integrations.
- Click +New at the top right corner.
- Switch to the Incoming tab.
- Turn on the Enabled toggle.
- Name: Enter a name for your webhook. The name is optional; however, providing a name to manage your integrations easily is advisable.
- Post to Channel: Select the channel (or user) where you prefer to receive the alerts. It is possible to override this from the message itself. - Post as: Choose the username that this integration posts as. The user must already exist. - Alias: Optionally enter a nickname that appears before the username in messages. - Avatar URL: Enter a link to an image as the avatar URL if you have one. The avatar URL overrides the default avatar. - Emoji: Optionally enter an emoji to use as the avatar. [Check the emoji cheat sheet](https://github.com/ikatyang/emoji-cheat-sheet/blob/master/README.md#computer) - Turn on the Script Enabled toggle. - Paste your script inside the Script field (check below for a sample script). - Save the integration. - Use the generated Webhook URL to post messages to Rocket.Chat. The Rocket.Chat integration script should be written in ES2015 / ECMAScript 6. The script requires a global class named Script, which is instantiated only once during the first execution and kept in memory. This class contains a method called `process_incoming_request`, which is called by your server each time it receives a new request. The `process_incoming_request` method takes an object as a parameter with the request property and returns an object with a content property containing a valid Rocket.Chat message, or an object with an error property, which is returned as the response to the request in JSON format with a 400 status code. A valid Rocket.Chat message must contain a text field that serves as the body of the message. If you redirect the message to a channel other than the one indicated by the webhook token, you can specify a channel field that accepts a room id or, if prefixed with "#" or "@", a channel name or username, respectively. You can use the console methods to log information to help debug your script. More information about the console can be found [here](https://developer.mozilla.org/en-US/docs/Web/API/Console/log). 
To view the logs, navigate to Administration > Workspace > View Logs. ``` /* exported Script */ /* globals console, _, s */ /** Global Helpers * * console - A normal console instance * _ - An underscore instance * s - An underscore string instance */ class Script { /** * @params {object} request */ process_incoming_request({ request }) { // request.url.hash // request.url.search // request.url.query // request.url.pathname // request.url.path // request.url_raw // request.url_params // request.headers // request.user._id // request.user.name // request.user.username // request.content_raw // request.content // console is a global helper to improve debug console.log(request.content); return { content:{ text: request.content.text, icon_emoji: request.content.icon_emoji, channel: request.content.channel, // "attachments": [{ // "color": "#FF0000", // "author_name": "Rocket.Cat", // "author_link": "https://open.rocket.chat/direct/rocket.cat", // "author_icon": "https://open.rocket.chat/avatar/rocket.cat.jpg", // "title": "Rocket.Chat", // "title_link": "https://rocket.chat", // "text": "Rocket.Chat, the best open source chat", // "fields": [{ // "title": "Priority", // "value": "High", // "short": false // }], // "image_url": "https://rocket.chat/images/mockup.png", // "thumb_url": "https://rocket.chat/images/mockup.png" // }] } }; // return { // error: { // success: false, // message: 'Error example' // } // }; } } ``` To test if your integration works, use curl to make a POST request to the generated webhook URL. ```bash curl -X POST \ -H 'Content-Type: application/json' \ --data '{ "icon_emoji": ":smirk:", "text": "Example message" }' \ https://your-webhook-url ``` If you want to send the message to another channel or user use the `channel` argument with `@user` or `#channel`. Keep in mind that the user of the integration needs to be part of those channels if they are private. 
```bash curl -X POST \ -H 'Content-Type: application/json' \ --data '{ "icon_emoji": ":smirk:", "channel": "#notifications", "text": "Example message" }' \ https://your-webhook-url ``` If you want to do more complex things, uncomment the attachments section of the script. feat(siem): Add Wazuh SIEM [Wazuh](https://wazuh.com/) feat(tails): Add interesting operations on tails - [Upgrading a tails USB](https://tails.net/upgrade/tails/index.en.html) - [Change the window manager](https://www.reddit.com/r/tails/comments/qzruhv/changing_window_manager/): Don't do it, they say it will break Tails, although I don't understand why feat(vim#Email inside nvim): Email inside nvim The best looking one is himalaya - [Home](https://pimalaya.org/himalaya/index.html) - [Nvim plugin](https://git.sr.ht/%7Esoywod/himalaya-vim) - [Source](https://github.com/soywod/himalaya) --- docs/anki.md | 142 ++++++++++ docs/bash_snippets.md | 48 ++++ docs/coding/python/pytest.md | 21 ++ .../python_project_template/python_docker.md | 40 ++- docs/coding/python/python_snippets.md | 46 ++++ docs/collaborating_tools.md | 9 + docs/devops/kubernetes/kubernetes_tools.md | 2 +- docs/emojis.md | 28 ++ docs/gitea.md | 22 ++ docs/grapheneos.md | 6 + docs/how_to_code.md | 15 + docs/life_analysis.md | 8 + docs/life_planning.md | 257 ++++++++++++++++++ docs/linux_snippets.md | 60 ++++ docs/loki.md | 34 +++ docs/orgzly.md | 23 ++ docs/parsing_data.md | 10 + docs/pytelegrambotapi.md | 99 +++++++ docs/pytest-xprocess.md | 99 +++++++ docs/python-prometheus.md | 49 ++++ docs/python-telegram.md | 100 +++++++ docs/rocketchat.md | 137 ++++++++++ docs/sed.md | 2 +- docs/siem.md | 4 + docs/tails.md | 16 ++ docs/vdirsyncer.md | 9 + docs/vim.md | 74 ++--- mkdocs.yml | 17 +- 28 files changed, 1341 insertions(+), 36 deletions(-) create mode 100644 docs/collaborating_tools.md create mode 100644 docs/how_to_code.md create mode 100644 docs/life_analysis.md create mode 100644 docs/life_planning.md create mode 100644 
docs/parsing_data.md create mode 100644 docs/pytelegrambotapi.md create mode 100644 docs/pytest-xprocess.md create mode 100644 docs/python-prometheus.md create mode 100644 docs/python-telegram.md create mode 100644 docs/rocketchat.md create mode 100644 docs/siem.md create mode 100644 docs/tails.md diff --git a/docs/anki.md b/docs/anki.md index 89eee02d7e5..e4d0841e715 100644 --- a/docs/anki.md +++ b/docs/anki.md @@ -49,6 +49,16 @@ If you're afraid to be stuck in a loop of reviewing "hard" cards, don't be. In r * The card has too much information that should be subdivided in smaller cards. * You're not doing a good process of memorizing the contents once they show up. +## [What to do with unneeded cards](https://www.reddit.com/r/medicalschoolanki/comments/9dwjia/difference_between_suspend_and_bury_card/) + +You have three options: + +- Suspend: It permanently stops it from showing up until you reactivate it through the browser. +- Bury: Just delays it until the next day. +- Delete: It deletes it forever. + +Unless you're certain that you are no longer going to need it, suspend it. + # Interacting with python ## Configuration @@ -158,6 +168,138 @@ curl localhost:8765 -X POST -d '{"action": "deckNames", "version": 6}' self.requests("createDeck", {"deck": deck}) ``` +# Configure self hosted synchronization + +NOTE: In the end I dropped this path and used Ankidroid alone with syncthing as I didn't need to interact with the decks from the computer. Also the ecosystem of synchronization in Anki at 2023-11-10 is confusing as there are many servers available, not all are compatible with the clients and Anki itself has released its own so some of the community ones will eventually die. 
+ +## [Install the server](https://github.com/ankicommunity/anki-devops-services#about-this-docker-image) + +I'm going to install `anki-sync-server` as it's simpler than [`djankiserv`](https://github.com/ankicommunity/anki-api-server): + +* Create the data directories: + ```bash + mkdir -p /data/apps/anki/data + ``` + +* Copy the `docker/docker-compose.yaml` to `/data/apps/anki`. + ```yaml + --- + version: "3" + + services: + anki: + image: kuklinistvan/anki-sync-server:latest + container_name: anki + restart: always + networks: + - nginx + volumes: + - data:/app/data + + networks: + nginx: + external: + name: nginx + + volumes: + data: + driver: local + driver_opts: + type: none + o: bind + device: /data/apps/anki + ``` +* Copy the nginx config into your `site-confs`: + + ``` + # make sure that your dns has a cname set for anki and that your anki container is not using a base url + + server { + listen 443 ssl; + listen [::]:443 ssl; + + server_name anki.*; + + include /config/nginx/ssl.conf; + + client_max_body_size 0; + + # enable for ldap auth, fill in ldap details in ldap.conf + #include /config/nginx/ldap.conf; + + location / { + # enable the next two lines for http auth + #auth_basic "Restricted"; + #auth_basic_user_file /config/nginx/.htpasswd; + + # enable the next two lines for ldap auth + #auth_request /auth; + #error_page 401 =200 /login; + + include /config/nginx/proxy.conf; + resolver 127.0.0.11 valid=30s; + set $upstream_anki anki; + proxy_pass http://$upstream_anki:27701; + } + } + ``` + +* Copy the `service/anki.service` into `/etc/systemd/system/` + ```ini + [Unit] + Description=anki + Requires=docker.service + After=docker.service + + [Service] + Restart=always + User=root + Group=docker + WorkingDirectory=/data/apps/anki + # Shutdown container (if running) when unit is started + TimeoutStartSec=100 + RestartSec=2s + # Start container when unit is started + ExecStart=/usr/local/bin/docker-compose -f docker-compose.yaml up + # Stop 
container when unit is stopped + ExecStop=/usr/local/bin/docker-compose -f docker-compose.yaml down + + [Install] + WantedBy=multi-user.target + ``` +* Start the service `systemctl start anki` +* If needed, enable the service `systemctl enable anki`. +* Create your user by: + * Getting a shell inside the container: + ```bash + docker exec -it anki sh + ``` + * Creating the user: + ```bash + ./ankisyncctl.py adduser kuklinistvan + ``` + +`ankisyncctl.py` has more commands to manage your users: + +* `adduser <username>`: add a new user +* `deluser <username>`: delete a user +* `lsuser`: list users +* `passwd <username>`: change password of a user + +## [Configure AnkiDroid](https://github.com/ankicommunity/anki-sync-server#ankidroid) + +* Add the DNS you configured in your nginx reverse proxy into Advanced → Custom sync server. +* Then enter the credentials you created before in Advanced → AnkiWeb account. + +## [Configure Anki](https://github.com/ankicommunity/anki-sync-server#setting-up-anki) + +Install the add-on from AnkiWeb (supports Anki 2.1): + +- On the add-on window, click Get Add-ons and fill in the textbox with the code 358444159 +- There you get the custom sync server redirector add-on; choose it. Then click Config at the bottom right +- Apply your server DNS address +- Press Sync in the main application page and enter your credentials + # References * [Homepage](https://apps.ankiweb.net/) diff --git a/docs/bash_snippets.md b/docs/bash_snippets.md index eceadd9a9ed..1399b1f19bf 100644 --- a/docs/bash_snippets.md +++ b/docs/bash_snippets.md @@ -4,6 +4,54 @@ date: 20220827 author: Lyz --- +# [Loop through a list of files found by find](https://stackoverflow.com/questions/9612090/how-to-loop-through-file-names-returned-by-find) + +For simple loops use the `find -exec` syntax: + +```bash +# execute `process` once for each file +find . -name '*.txt' -exec process {} \; +``` + +For more complex loops use a `while read` construct: + +```bash +find . 
-name "*.txt" -print0 | while read -r -d $'\0' file +do + …code using "$file" +done +``` + +The loop will execute while the `find` command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer. + +The `-print0` makes `find` use NULL as the file separator instead of a newline, and the `-d $'\0'` makes `read` use NULL as the separator while reading. + +## How not to do it + +If you try to run the next snippet: + +```bash +# Don't do this +for file in $(find . -name "*.txt") +do + …code using "$file" +done +``` + +You'll get the next [`shellcheck`](shellcheck.md) warning: + +``` +SC2044: For loops over find output are fragile. Use find -exec or a while read loop. +``` + +You should not do this for three reasons: + +- For the for loop to even start, the `find` must run to completion. +- If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names. +- Although now unlikely, you can overrun your command line buffer. Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it. + # [Remove the lock screen in ubuntu](https://askubuntu.com/questions/1140079/completely-remove-lockscreen) Create the `/usr/share/glib-2.0/schemas/90_ubuntu-settings.gschema.override` file with the next content: diff --git a/docs/coding/python/pytest.md b/docs/coding/python/pytest.md index a846fb7d349..cf55d6b789a 100644 --- a/docs/coding/python/pytest.md +++ b/docs/coding/python/pytest.md @@ -1097,6 +1097,27 @@ components in case it's too verbose. Check the [asyncio article](asyncio.md#testing). 
+# [Stop pytest right at the start if condition not met](https://stackoverflow.com/questions/70822031/stop-pytest-right-at-the-start-if-condition-not-met) + +Use the `pytest_configure` [initialization hook](https://docs.pytest.org/en/4.6.x/reference.html#initialization-hooks). + +In your global `conftest.py`: + +```python +import requests +import pytest + +def pytest_configure(config): + try: + requests.get('http://localhost:9200') + except requests.exceptions.ConnectionError: + msg = 'FATAL. Connection refused: ES does not appear to be installed as a service (localhost port 9200)' + pytest.exit(msg) +``` + +- Note that the single argument of `pytest_configure` has to be named `config`. +- Using `pytest.exit` makes the error message look nicer. + # Pytest integration with Vim Integrating pytest into your Vim workflow enhances your productivity while diff --git a/docs/coding/python/python_project_template/python_docker.md b/docs/coding/python/python_project_template/python_docker.md index 331e62e4169..278ab2ce5cd 100644 --- a/docs/coding/python/python_project_template/python_docker.md +++ b/docs/coding/python/python_project_template/python_docker.md @@ -4,7 +4,45 @@ date: 20200602 author: Lyz --- -Docker is a popular way to distribute applications. Assuming that you've set all +Docker is a popular way to distribute applications. + +# [Using PDM](https://pdm.fming.dev/latest/usage/advanced/#use-pdm-in-a-multi-stage-dockerfile) + +It is possible to use PDM in a multi-stage Dockerfile to first install the project and dependencies into `__pypackages__` and then copy this folder into the final stage, adding it to `PYTHONPATH`. 
+ +```dockerfile +# build stage +FROM python:3.11-slim-bookworm AS builder + +# install PDM +RUN pip install pdm + +# copy files +COPY pyproject.toml pdm.lock README.md /project/ +COPY src/ /project/src + +# install dependencies and project into the local packages directory +WORKDIR /project +RUN mkdir __pypackages__ && pdm sync --prod --no-editable + + +# run stage +FROM python:3.11-slim-bookworm + +# retrieve packages from build stage +ENV PYTHONPATH=/project/pkgs +COPY --from=builder /project/__pypackages__/3.11/lib /project/pkgs + +# retrieve executables +COPY --from=builder /project/__pypackages__/3.11/bin/* /bin/ + +# set command/entrypoint, adapt to fit your needs +CMD ["python", "-m", "project"] +``` + +# Using setup.py + +Assuming that you've set all required dependencies in the `setup.py`, we're going to create an image with these properties: diff --git a/docs/coding/python/python_snippets.md b/docs/coding/python/python_snippets.md index e796604a333..e97776ac708 100644 --- a/docs/coding/python/python_snippets.md +++ b/docs/coding/python/python_snippets.md @@ -4,6 +4,52 @@ date: 20200717 author: Lyz --- +# Configure the logging of a program to look nice + +```python +import logging +import sys + +import telebot + +def load_logger(verbose: bool = False) -> None: # pragma: no cover + """Configure the Logging logger. + + Args: + verbose: Set the logging level to Debug. + """ + logging.addLevelName(logging.INFO, "\033[36mINFO\033[0m") + logging.addLevelName(logging.ERROR, "\033[31mERROR\033[0m") + logging.addLevelName(logging.DEBUG, "\033[32mDEBUG\033[0m") + logging.addLevelName(logging.WARNING, "\033[33mWARNING\033[0m") + + if verbose: + logging.basicConfig( + format="%(asctime)s %(levelname)s %(name)s: %(message)s", + stream=sys.stderr, + level=logging.DEBUG, + datefmt="%Y-%m-%d %H:%M:%S", + ) + telebot.logger.setLevel(logging.DEBUG) # Outputs debug messages to console. 
+ else: + logging.basicConfig( + stream=sys.stderr, level=logging.INFO, format="%(levelname)s: %(message)s" + ) +``` + +# Get the modified time of a file with Pathlib + +```python +from pathlib import Path + +file_ = Path('/to/some/file') +file_.stat().st_mtime +``` + +You can also access: + +- Creation time: with `st_ctime` (on Unix this is actually the time of the last metadata change) +- Accessed time: with `st_atime` + +They are timestamps, so if you want to compare them with a datetime object use the `timestamp()` method: + +```python +from datetime import datetime + +assert datetime.now().timestamp() - file_.stat().st_mtime < 60 +``` + # Read file with Pathlib ```python diff --git a/docs/collaborating_tools.md b/docs/collaborating_tools.md new file mode 100644 index 00000000000..dbbc4ea177e --- /dev/null +++ b/docs/collaborating_tools.md @@ -0,0 +1,9 @@ + +# Collaborating document creation + +- https://pad.riseup.net +- https://rustpad.io . [Can be self hosted](https://github.com/ekzhang/rustpad) + +# Collaborating through terminals + +- [sshx](https://sshx.io/) looks promising although I think it uses their servers to do the connection, which is troublesome. diff --git a/docs/devops/kubernetes/kubernetes_tools.md b/docs/devops/kubernetes/kubernetes_tools.md index 76687ef0440..8e75537bb02 100644 --- a/docs/devops/kubernetes/kubernetes_tools.md +++ b/docs/devops/kubernetes/kubernetes_tools.md @@ -9,7 +9,7 @@ Kubernetes. ## Tried -* [K3s](https://k3s.io): Recommended small kubernetes, like hyperkube. +* [K3s](https://k3s.io): Recommended small kubernetes, like hyperkube. A friend told me that rke2 works better than k3s. ## To try diff --git a/docs/emojis.md b/docs/emojis.md index ba5bdc6fe92..2663d73b44b 100644 --- a/docs/emojis.md +++ b/docs/emojis.md @@ -6,6 +6,34 @@ Date: 20170302 Curated list of emojis to copy paste. 
+# Most used + +``` +¯\(°_o)/¯ + +¯\_(ツ)_/¯ + +(╯°□°)╯ ┻━┻ + +\\ ٩( ᐛ )و // + +(✿◠‿◠) + +(/゚Д゚)/ + +(¬º-°)¬ + +(╥﹏╥) + +ᕕ( ᐛ )ᕗ + +ʕ•ᴥ•ʔ + +( ˘ ³˘)♥ + +❤ +``` + # Angry ``` diff --git a/docs/gitea.md b/docs/gitea.md index e3b98639208..85e559afae9 100644 --- a/docs/gitea.md +++ b/docs/gitea.md @@ -380,6 +380,28 @@ jobs: The only downside is that if you set this pipeline as required in the branch protection, the merge button will look yellow instead of green when the pipeline is skipped. +### [Run jobs if other jobs failed](https://github.com/go-gitea/gitea/issues/23725) + +This is useful to send notifications if any of the jobs failed. + +[Right now](https://github.com/go-gitea/gitea/issues/23725) you can't run a job if other jobs fail, all you can do is add a last step on each workflow to do the notification on failure: + +```yaml +- name: Send mail + if: failure() + uses: https://github.com/dawidd6/action-send-mail@v3 + with: + to: ${{ secrets.MAIL_TO }} + from: Gitea + subject: ${{ gitea.repository }} ${{gitea.workflow}} ${{ job.status }} + priority: high + convert_markdown: true + html_body: | + ### Job ${{ job.status }} + + ${{ github.repository }}: [${{ github.ref }}@${{ github.sha }}](${{ github.server_url }}/${{ github.repository }}/actions) +``` + ## [Disable the regular login, use only Oauth](https://discourse.gitea.io/t/solved-removing-default-login-interface/2740/2) Inside your [`custom` directory](https://docs.gitea.io/en-us/customizing-gitea/) which may be `/var/lib/gitea/custom`: diff --git a/docs/grapheneos.md b/docs/grapheneos.md index 46661b06a78..2c42cd2e700 100644 --- a/docs/grapheneos.md +++ b/docs/grapheneos.md @@ -159,6 +159,12 @@ Auditor provides attestation for GrapheneOS phones and the stock operating syste Attestation can be done locally by pairing with another Android 8+ device or remotely using the remote attestation service. 
To make sure that your hardware and operating system is genuine, perform local attestation immediately after the device has been set up and prior to any internet connection. +# Tips + +## [Split the screen](https://www.reddit.com/r/GrapheneOS/comments/134iqr3/split_screen/) + +Go into app switcher, tap on the app icon above the active app and then select "Split top". + # References - [Home](https://grapheneos.org/) diff --git a/docs/how_to_code.md b/docs/how_to_code.md new file mode 100644 index 00000000000..b6adbd2a646 --- /dev/null +++ b/docs/how_to_code.md @@ -0,0 +1,15 @@ +Over the years I've tried different ways of developing my code: + +- Mindless coding: write code as you need to make it work, with no tests, documentation or any quality measure. +- TDD. +- Try to abstract everything to minimize the duplication of code between projects. + +Each has its advantages and disadvantages. After trying them all, and given that right now I only have short spikes of energy and time to invest in coding, my plan is to: + +- Make the minimum effort to design the minimum program able to solve the problem at hand. This design will be represented in an [orgmode](orgmode.md) task. +- Write the minimum code to make it work without thinking of tests or generalization, but with the [domain driven design](domain_driven_design.md) concepts so the code remains flexible and maintainable. + +- Once it's working, see if I have time to improve it: + - Create the tests to cover the critical functionality (no more 100% coverage). + - If I need to make a package or the program evolves into something complex, I'd use [this scaffold template](https://github.com/lyz-code/cookiecutter-python-project). + +Once the spike is over I'll wait for a new spike to come, either because I have time or because something breaks and I need to fix it. 
diff --git a/docs/life_analysis.md b/docs/life_analysis.md new file mode 100644 index 00000000000..90b656a49a6 --- /dev/null +++ b/docs/life_analysis.md @@ -0,0 +1,8 @@ +It's interesting to do analysis at representative moments of the year. It gives it an emotional weight. You can for example use the solstices or my personal version of the solstices: + +- Spring analysis (1st of March): For me the spring is the real start of the year, it's when life explodes after the stillness of the winter. The sun starts to set late enough that you have light in the afternoons, the climate gets warmer thus inviting you to be more outside, nature is blooming with new leaves and flowers. It is then a moment to build new projects and set the current year on track. +- Summer analysis (1st of June): I hate heat, so summer is a moment of retreat. Everyone temporarily stops their lives, we go on holidays and all social projects slow their pace. Even the news has less interesting things to report. It's so hot outside that some of us seek the cold refuge of home or remote holiday places. Days are long and people love to hang out till late, so usually you wake up later, thus having less time to actually do stuff. Even in the moments when you are alone the heat drains your energy to be productive. It is then a moment to relax and gather forces for the next trimester. It's also perfect to develop *easy* and *chill* personal projects that have been forgotten in a drawer. Lower your expectations and just flow with what your body asks you. +- Autumn analysis (1st of September): September is another key moment for many people. We have it hardcoded in our lives since we were children, as it was the start of school. People feel energized after the summer holidays and are eager to get back to their lives and stopped projects. You're already 6 months into the year, so it's a good moment to review your year plan and decide how you want to invest your energy reserves. 
+- Winter analysis (1st of December): December is the cue that the year is coming to an end. The days grow shorter and colder, they basically invite you to enjoy a cup of tea under a blanket. It is then a good time to get into your cave and do an introspection analysis on the whole year and prepare the ground for the coming year. + +We see then that the year is divided into two sets of an expansion trimester and a retreat one. We can use this information to plan our tasks accordingly. In the expansion trimesters we could invest more energy in the planning, and in the retreat ones we can do more thorough reviews. diff --git a/docs/life_planning.md b/docs/life_planning.md new file mode 100644 index 00000000000..d320c789aed --- /dev/null +++ b/docs/life_planning.md @@ -0,0 +1,257 @@ +Life planning can be done at different levels. All of them help you in +different ways to reduce the mental load, and each also gives you extra benefits +that can't be gained by the others. Going from lowest to highest abstraction +level we have: + +- Task plan. +- Pomodoro. +- Day plan. +- Week plan. +- Fortnight plan. +- Month plan. +- Trimester plan. +- Year plan. + +If you're starting your task management career, start with the first level. Once +you're comfortable, move one step up until you reach the sweet spot between time +invested in management and the profit it returns. + +Each of the plans defined below describes the most complete process; use them as +a starting point to define the plan that works for you depending on your needs +and how much time you want to invest at that particular moment. Even I don't +follow them strictly. As they change over time, it's useful to find a way to be +able to follow them without thinking too much about what the specific steps are, +for example having a checklist or a script. + +# Task plan + +The task plan defines the steps required to finish a task. 
It's your most basic +roadmap to address a task, and a good starting point if you feel overwhelmed +when faced with an assignment. + +When done well, you'll better understand what you need to do, it will prevent +you from wasting time at dead ends as you'll think before acting, and you'll +develop the invaluable skill of breaking big problems into smaller ones. + +To define a task plan, follow the next steps: + +- Decide what you want to achieve when the task is finished. +- Analyze the possible ways to arrive at that goal. Try to assess different + solutions before choosing one. +- Once you have it, split it into steps small enough to be comfortable following + them without further analysis. + +Some people define the task plan whenever they add the task to their task +manager. Others prefer to save some time each month to refine the plans of the +tasks to be done the next one. + +The plan is a living document that changes each [Pomodoro cycle](#pomodoro) and +that you'll need to check often. It has to be accessible and it should be easy +for you to edit. If you don't know where to start, use the +[simplest task manager](task_tools.md#divide-a-task-in-small-steps). + +Try not to overplan though: if in the middle of a task you realize that the rest +of the steps don't make sense, all the time invested in their definition will be +lost. That's why it's a good idea to have great detail for the first steps and +gradually move to rougher definitions on later ones. + +# Pomodoro + +Pomodoro is a technique used to ensure that for short periods of time, you +invest all your mental resources in doing the work needed to finish a task. It's +your main unit of work and a good starting point if you have concentration +issues. + +When done well, you'll start moving faster on your tasks, because +[uninterrupted work](interruption_management.md) is the most efficient. 
You'll +also begin to know if you're drifting from your [day's plan](#day-plan), and +will have space to adapt it or the [task plan](#task-plan) to time constraints or +unexpected events. + +!!! note "" If you don't yet have a [task plan](#task-plan) or +[day plan](#day-plan), don't worry! Ignore the steps that involve them until you +do. + +The next steps define a Pomodoro cycle: + +- Select the cycle time span. Either 20 minutes or until the next interruption, + whichever is shorter. +- Decide what you're going to do. +- Analyze yourself to see if your state of mind is ready to only do that for + the chosen time span. If it's not, maybe you need to take a "Pomodoro break", + take 20 minutes off doing something that replenishes your willpower or whatever + personal attribute is preventing you from working. +- Start the timer. +- Work uninterruptedly on what you've decided until the timer goes off. +- Take 20s to look away from the screen (this is good for your eyes). +- Update your [task](#task-plan) and [day](#day-plan) plans: + - Tick off the done task steps. + - Refine the task steps that can be addressed in the next cycle. + - Check if you can still meet the day's plan. +- Check the + [interruption channels that need to be checked each 20 minutes](interruption_management.md#define-your-interruption-events). + +At the fourth Pomodoro cycle, you'll have finished a Pomodoro iteration. At the +end of the iteration: + +- Check if you're going to meet the [day plan](#day-plan); if you're not, + change it or the [task plan](#task-plan) to meet the time constraint. +- Get a small rest, you've earned it! Get off the chair, stretch or take a short + walk. What's important is that you take your mind off the task at hand and let + your body rest. Remember, this is a marathon, you need to take care of + yourself. +- Start a new Pomodoro iteration. 
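The iteration above can be sketched as a small shell helper. This is an illustrative sketch, not the script I use: the function name and the messages are made up, and the real countdown (`sleep`) is commented out so the sketch runs instantly.

```bash
#!/bin/bash
# Sketch of a Pomodoro iteration: a number of work cycles, each followed
# by a 20s eye rest and a reminder to update the task and day plans.
pomodoro_iteration() {
    local cycles=${1:-4} work_minutes=${2:-20}
    for ((cycle = 1; cycle <= cycles; cycle++)); do
        echo "Cycle $cycle: work uninterruptedly for $work_minutes minutes"
        # sleep "${work_minutes}m"  # the actual countdown, disabled in this sketch
        echo "Cycle $cycle: look away from the screen for 20s"
        echo "Cycle $cycle: update the task and day plans"
    done
    echo "Iteration done: get off the chair and rest"
}

pomodoro_iteration "$@"
```

Swapping the `echo` placeholders for calls to tools like `timer` and `safeeyes` turns it into an actual Pomodoro runner.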
+ +If you're super focused at the end of a Pomodoro cycle, you can skip the task +plan update until the end of the iteration. + +To make it easy to follow the pomodoro plan I use a script that: + +- Uses [timer](https://github.com/pando85/timer) to show the countdown. +- Uses [safeeyes](https://github.com/slgobinath/SafeEyes) to track the eye + rests. +- Asks me to follow the list of steps I've previously defined. + +# Day plan + +This plan defines at day level which tasks you're going to work on and +schedules when you're going to address them. It's the most basic roadmap to +address a group of tasks. The goal is to survive the day. It's a good starting +point if you forget to do tasks that need to be done in the day or if you miss +appointments. + +It's also the next step of advance awareness: if you have a day plan, on each +[Pomodoro](#pomodoro) iteration you'll get a feeling of whether you're going to +finish what you planned. + +You can make your plan at the start of the day. Start by getting an idea of: + +- What you need to do by checking: + - The last day's plan. + - Calendar events. + - The [week's plan](#week-plan) if you have it, or the prioritized list of + tasks to do. +- How much uninterrupted time you have between calendar events. +- Your state of mind. + +Then create the day schedule: + +- Add the calendar events. +- Add the + [interruption events](interruption_management.md#define-your-interruption-events). +- Set up an alert for the closest calendar event. + +And the day tasks plan: + +- Decide the tasks to be worked on and think when you want to do them. + +To follow it throughout the day, and when it's coming to an end: + +- Update your [week](#week-plan) and/or [task](#task-plan) plans to meet the + time constraints. +- Optionally sketch the next day's plan. 
+ +When doing the plan keep in mind to minimize the number of tasks and calendar +events so as not to get overwhelmed, and not to schedule a new task before you +finish what you've already started. It's better to eventually fall short on +tasks than to never reach your goal. + +To make it easy to follow I use a script that: + +- Asks me to check the weather forecast. +- Uses [timer](https://github.com/pando85/timer) to show the countdown. +- Uses [safeeyes](https://github.com/slgobinath/SafeEyes) to track the eye + rests. +- Asks me to follow the list of steps I've previously defined. + +# Week plan + +The plan defines at week level which tasks you're going to work on. It's the next roadmap level to +address a group of tasks. The goal changes from surviving the day to starting to +plan your life. It's a good starting point if you are comfortable working +with the pomodoro, task and day plans, and want to start deciding where you're +heading. + +It's also the next step of advance awareness: if you have a week plan, each day +you'll get a feeling of whether you're going to finish what you planned. + +You can make your plan at the start of the week. First you need to clarify your state at week level by: + +- Cleaning your calendar for the next 9 days: Refiling or rescheduling items as you need. If you are using your calendar well you shouldn't need to make any changes, just load in your mind the things you are meant to do. +- Cleaning your inbox: Refile each item until it's empty. +- Refining your month objective plans: For each objective: + - + +To make it easy to follow I use a script that asks me to follow the list of steps I've previously defined. + +# Month plan + +The objectives of the month plan are: + +- Define the month objectives according to the trimester plan and the insights gathered in the past month review. 
+- Make your backlog and todo list match the month objectives.
+- Define the philosophical topics to address.
+- Define the topics to learn.
+- Define the areas of habits to incorporate.
+- Define the checks you want to do at the end of the month.
+- Plan when the next review is going to be.
+
+It's interesting to do the planning on meaningful days such as the first one of the month. Usually we don't have enough flexibility in our lives to do it exactly on that day, so schedule it as close as you can to that date. It's a good idea to do both the review and the planning on the same day.
+
+We'll divide the planning process into these phases:
+
+- Prepare
+- Clarify your state
+- Decide the month objectives
+
+## Prepare
+
+It's important that you prepare your environment for the planning. You need to be present and fully focused on the process itself. To do so you can:
+
+- Make sure you don't get interrupted:
+  - Check your task manager tools to make sure that you don't have anything urgent to address in the next hour.
+  - Disable all notifications.
+- Set your analysis environment:
+  - Put on the music that helps you get *in the zone*.
+  - Get all the things you may need for the review:
+    - The checklist that defines the process of your planning (this document in my case).
+    - Somewhere to write down the insights.
+    - Your task manager system.
+    - Your habit manager system.
+    - Your *Objective list*.
+    - Your *Thinking list*.
+    - Your *Reading list*.
+  - Remove from your environment everything else that may distract you.
+
+## Clarify your state
+
+To be able to make a good decision on your month's path you need to sort out what your current state is. To do so:
+
+- Clean your inbox: Refile each item until it's empty.
+- Clean your todo: Review each todo element and decide whether it should still be in the todo. If it should and it belongs to a month objective, add it there. If it doesn't need to be in the todo, refile it.
+- Clean your someday: Review each relevant someday element (not the ones that are archived at levels greater than month) and decide if it should be refiled elsewhere and if it's part of a month objective that should be dealt with this month.
+- Address each of the trimester objectives by creating month objectives that get you closer to the desired objective.
+
+## Decide the next steps
+
+For each of your month objectives:
+
+- Decide whether it makes sense to address it this month. If not, archive it.
+- Create a clear plan of action for this month on that objective.
+- Tweak your *things to think about list*.
+- Tweak your *reading list*.
+- Tweak your *habit manager system*.
+
+# Trimester plan
+
+The objectives of the trimester plan are:
+
+- Define the objectives of the trimester according to the year plan and the past trimester review.
+- Define the philosophical topics to address.
+- Define the topics to learn.
+- Define the areas of habits to incorporate.
+
+# References
+
+- [Pomodoro article](https://en.wikipedia.org/wiki/Pomodoro_Technique).
diff --git a/docs/linux_snippets.md b/docs/linux_snippets.md
index dcb3b93c6e4..cd46b5aa3aa 100644
--- a/docs/linux_snippets.md
+++ b/docs/linux_snippets.md
@@ -4,6 +4,66 @@ date: 20200826
 author: Lyz
 ---
 
+# [Accept new ssh keys by default](https://stackoverflow.com/questions/21383806/how-can-i-force-ssh-to-accept-a-new-host-fingerprint-from-the-command-line)
+
+While common wisdom is not to disable host key checking, there is a built-in option in SSH itself to do this. It is relatively unknown, since it's new (added in OpenSSH 6.5).
+
+This is done with `-o StrictHostKeyChecking=accept-new`.
Or if you want to use it for all hosts, add the following lines to your `~/.ssh/config`:
+
+```
+Host *
+  StrictHostKeyChecking accept-new
+```
+
+WARNING: use this only if you absolutely trust the IP/hostname you are going to SSH to:
+
+```bash
+ssh -o StrictHostKeyChecking=accept-new mynewserver.example.com
+```
+
+Note, `StrictHostKeyChecking=no` will add the public key to `~/.ssh/known_hosts` even if the key was changed. `accept-new` is only for new hosts. From the man page:
+
+> If this flag is set to “accept-new” then ssh will automatically add new host keys to the user known hosts files, but will not permit connections to hosts with changed host keys. If this flag is set to “no” or “off”, ssh will automatically add new host keys to the user known hosts files and allow connections to hosts with changed hostkeys to proceed, subject to some restrictions. If this flag is set to ask (the default), new host keys will be added to the user known host files only after the user has confirmed that is what they really want to do, and ssh will refuse to connect to hosts whose host key has changed. The host keys of known hosts will be verified automatically in all cases.
+
+# [Do not add trailing / to ls](https://stackoverflow.com/questions/9044465/list-of-dirs-without-trailing-slash)
+
+Probably, your `ls` is aliased or defined as a function in your config files.
+
+Use the full path to `ls` like:
+
+```bash
+/bin/ls /var/lib/mysql/
+```
+
+# [Convert png to svg](https://askubuntu.com/questions/470495/how-do-i-convert-a-png-to-svg)
+
+Inkscape has got an awesome auto-tracing tool.
+
+- Install Inkscape using `sudo apt-get install inkscape`
+- Import your image
+- Select your image
+- From the menu bar, select Path > Trace Bitmap
+- Adjust the tracing parameters as needed
+- Save as SVG
+
+Check their [tracing tutorial](https://inkscape.org/en/doc/tutorials/tracing/tutorial-tracing.html) for more information.
+
+Once you are comfortable with the tracing options,
You can automate it by using the [Inkscape CLI](https://inkscape.org/en/doc/inkscape-man.html).
+
+# [Redirect stdout and stderr of a cron job to a file](https://unix.stackexchange.com/questions/52330/how-to-redirect-output-to-a-file-from-within-cron)
+
+```
+*/1 * * * * /home/ranveer/vimbackup.sh >> /home/ranveer/vimbackup.log 2>&1
+```
+
+# Error when unmounting a device: Target is busy
+
+- Check the processes that are using the mountpoint with `lsof /path/to/mountpoint`
+- Kill those processes
+- Try the umount again
+
+If that fails, you can use `umount -l` (lazy unmount).
+
 # Wipe a disk
 
 Overwrite it many times [with badblocks](hard_drive_health.md#check-the-health-of-a-disk-with-badblocks).
diff --git a/docs/loki.md b/docs/loki.md
index aae74139c01..b91512fde5b 100644
--- a/docs/loki.md
+++ b/docs/loki.md
@@ -6,6 +6,40 @@ A small index and highly compressed chunks simplifies the operation and signific
 # [Installation](https://grafana.com/docs/loki/latest/setup/install/docker/)
 
+There are [many ways to install Loki](https://grafana.com/docs/loki/latest/setup/install/). We're going to do it with `docker-compose`, taking [their example as a starting point](https://raw.githubusercontent.com/grafana/loki/v2.9.1/production/docker-compose.yaml) and complementing our already existing [grafana docker-compose](grafana.md#installation).
+
+```yaml
+```
+
+It makes use of the [environment variables to configure Loki](https://grafana.com/docs/loki/latest/configure/#configuration-file-reference); that's why we have the `-config.expand-env=true` flag in the command line launch.
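+
+A minimal sketch of what the compose block could contain, based on the upstream example (the image tag, config path and network wiring are assumptions, adapt them to your grafana setup):
+
+```yaml
+---
+version: "3"
+
+services:
+  loki:
+    image: grafana/loki:2.9.1
+    container_name: loki
+    # -config.expand-env=true lets the config file reference environment variables
+    command: -config.file=/etc/loki/local-config.yaml -config.expand-env=true
+    ports:
+      - "3100:3100"
+    restart: unless-stopped
+    networks:
+      - grafana
+
+networks:
+  grafana:
+    external: true
+```
+
+Attach it to the same docker network as the grafana container so that the `http://loki:3100` datasource URL resolves.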
+
+In the grafana datasources directory add `loki.yaml`:
+
+```yaml
+---
+apiVersion: 1
+
+datasources:
+  - name: Loki
+    type: loki
+    access: proxy
+    orgId: 1
+    url: http://loki:3100
+    basicAuth: false
+    isDefault: true
+    version: 1
+    editable: false
+```
+
+## [Storage configuration](https://grafana.com/docs/loki/latest/storage/)
+
+Unlike other logging systems, Grafana Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels). Log data itself is then compressed and stored in chunks in object stores such as S3 or GCS, or even locally on the filesystem. A small index and highly compressed chunks simplifies the operation and significantly lowers the cost of Loki.
+
+Loki 2.0 brings an index mechanism named ‘boltdb-shipper’ and is what we now call Single Store. This type only requires one store, the object store, for both the index and chunks.
+
+Loki 2.8 adds TSDB as a new mode for the Single Store and is now the recommended way to persist data in Loki as it improves query performance, reduces TCO and has the same feature parity as “boltdb-shipper”.
+
 # Usage
diff --git a/docs/orgzly.md b/docs/orgzly.md
index 35ea4812e15..8e6d7d169e8 100644
--- a/docs/orgzly.md
+++ b/docs/orgzly.md
@@ -1,5 +1,27 @@
 [Orgzly](https://orgzly.com/) is an android application to interact with [orgmode](orgmode.md) files.
 
+# Troubleshooting
+
+## Avoid the conflicts in the files edited in two places
+
+If you use syncthing you may be seeing conflicts in your files. This happens especially if you use the Orgzly widget to add tasks, because it doesn't synchronize the files to the directory when using the widget. If you have a file that changes a lot on one device, for example the `inbox.org` of my mobile, it's interesting to have a specific file that's edited mainly on the mobile, and when you want to edit it elsewhere, sync as specified below and then proceed with the editing.
Once you're done, manually sync the changes in Orgzly again. The rest of the files synced to the mobile are for read-only reference, so they rarely change.
+
+If you want to sync while reducing the chance of conflicts:
+
+- Open Orgzly and press Synchronize.
+- Open Syncthing.
+
+If that's not enough, [check these automated solutions](https://github.com/orgzly/orgzly-android/issues/8):
+
+- [Orgzly auto syncronisation for sync tools like syncthing](https://gist.github.com/fabian-thomas/6f559d0b0d26737cf173e41cdae5bfc8)
+- [watch-for-orgzly](https://gitlab.com/doak/orgzly-watcher/-/blob/master/watch-for-orgzly?ref_type=heads)
+
+Other interesting solutions:
+
+- [org-orgzly](https://codeberg.org/anoduck/org-orgzly): Script to parse a chosen org file or files, check if an entry meets required parameters, and if it does, write the entry in a new file located inside the folder you desire to sync with orgzly.
+- [Git synchronization](https://github.com/orgzly/orgzly-android/issues/24): I find it more cumbersome than syncthing but maybe it's interesting for you.
+
 # References
 
 - [Docs](https://orgzly.com/docs)
@@ -7,3 +29,4 @@
 - [Source](https://github.com/orgzly/orgzly-android)
 - [Home](https://orgzly.com/)
 - [Alternative docs](https://github.com/orgzly/documentation)
+- [Alternative fork maintained by the community](https://github.com/orgzly-revived/orgzly-android-revived)
diff --git a/docs/parsing_data.md b/docs/parsing_data.md
new file mode 100644
index 00000000000..5f771d8d01c
--- /dev/null
+++ b/docs/parsing_data.md
@@ -0,0 +1,10 @@
+# Parsing passport data
+
+Random references:
+
+- https://www.daphne.foundation/passport-papers/
+- Mindee solution:
+  - https://mindee.com/live-test
+  - https://mindee.com/blog/create-ocrized-pdfs-in-2-steps/
+  - They open-sourced their PDF OCR code: https://github.com/mindee/doctr
+  - Docs: https://mindee.github.io/doctr/getting_started/installing.html
diff --git a/docs/pytelegrambotapi.md b/docs/pytelegrambotapi.md
new file mode 100644
index 00000000000..7d6c393ba07
--- /dev/null
+++ b/docs/pytelegrambotapi.md
@@ -0,0 +1,99 @@
+[pyTelegramBotAPI](https://github.com/eternnoir/pyTelegramBotAPI) is a synchronous and asynchronous implementation of the [Telegram Bot API](https://core.telegram.org/bots/api).
+
+# [Installation](https://pytba.readthedocs.io/en/latest/install.html)
+
+```bash
+pip install pyTelegramBotAPI
+```
+
+# Quickstart
+
+## [Create your bot](https://core.telegram.org/bots/features#botfather)
+
+Use the `/newbot` command to create a new bot. `@BotFather` will ask you for a name and username, then generate an authentication token for your new bot.
+
+- The `name` of your bot is displayed in contact details and elsewhere.
+- The `username` is a short name, used in search, mentions and t.me links. Usernames are 5-32 characters long and not case sensitive – but may only include Latin characters, numbers, and underscores. Your bot's username must end in 'bot’, like `tetris_bot` or `TetrisBot`.
+
+- The `token` is a string, like `110201543:AAHdqTcvCH1vGWJxfSeofSAs0K5PALDsaw`, which is required to authorize the bot and send requests to the Bot API. Keep your token secure and store it safely; it can be used by anyone to control your bot.
+
+To edit your bot, you have the following commands available:
+
+- `/setname`: change your bot's name.
+- `/setdescription`: change the bot's description (short text up to 512 characters). Users will see this text at the beginning of the conversation with the bot, titled 'What can this bot do?'.
+- `/setabouttext`: change the bot's about info, a shorter text up to 120 characters. Users will see this text on the bot's profile page. When they share your bot with someone, this text is sent together with the link.
+- `/setuserpic`: change the bot's profile picture.
+- `/setcommands`: change the list of commands supported by your bot. Users will see these commands as suggestions when they type / in the chat with your bot. See commands for more info.
+- `/setdomain`: link a website domain to your bot. See the login widget section.
+- `/deletebot`: delete your bot and free its username. Cannot be undone.
+
+## [Synchronous TeleBot](https://pytba.readthedocs.io/en/latest/quick_start.html#synchronous-telebot)
+
+```python
+#!/usr/bin/python
+
+# This is a simple echo bot using the decorator mechanism.
+# It echoes any incoming text messages.
+
+import telebot
+
+API_TOKEN = ''
+
+bot = telebot.TeleBot(API_TOKEN)
+
+
+# Handle '/start' and '/help'
+@bot.message_handler(commands=['help', 'start'])
+def send_welcome(message):
+    bot.reply_to(message, """\
+Hi there, I am EchoBot.
+I am here to echo your kind words back to you.
Just say anything nice and I'll say the exact same thing to you!\ +""") + + +# Handle all other messages with content_type 'text' (content_types defaults to ['text']) +@bot.message_handler(func=lambda message: True) +def echo_message(message): + bot.reply_to(message, message.text) + + +bot.infinity_polling() +``` + +## [Asynchronous TeleBot](https://pytba.readthedocs.io/en/latest/quick_start.html#asynchronous-telebot) + +```python +#!/usr/bin/python + +# This is a simple echo bot using the decorator mechanism. +# It echoes any incoming text messages. + +from telebot.async_telebot import AsyncTeleBot +bot = AsyncTeleBot('TOKEN') + + + +# Handle '/start' and '/help' +@bot.message_handler(commands=['help', 'start']) +async def send_welcome(message): + await bot.reply_to(message, """\ +Hi there, I am EchoBot. +I am here to echo your kind words back to you. Just say anything nice and I'll say the exact same thing to you!\ +""") + + +# Handle all other messages with content_type 'text' (content_types defaults to ['text']) +@bot.message_handler(func=lambda message: True) +async def echo_message(message): + await bot.reply_to(message, message.text) + + +import asyncio +asyncio.run(bot.polling()) +``` + +# References + +- [Documentation](https://pytba.readthedocs.io/en/latest/index.html) +- [Source](https://github.com/eternnoir/pyTelegramBotAPI) +- [Async Examples](https://github.com/eternnoir/pyTelegramBotAPI/tree/master/examples/asynchronous_telebot) diff --git a/docs/pytest-xprocess.md b/docs/pytest-xprocess.md new file mode 100644 index 00000000000..d0419d60942 --- /dev/null +++ b/docs/pytest-xprocess.md @@ -0,0 +1,99 @@ +[`pytest-xprocess`](https://github.com/pytest-dev/pytest-xprocess) is a pytest plugin for managing external processes across test runs. 
+
+# [Installation](https://pytest-xprocess.readthedocs.io/en/latest/#quickstart)
+
+```bash
+pip install pytest-xprocess
+```
+
+# [Usage](https://pytest-xprocess.readthedocs.io/en/latest/#quickstart)
+
+Define your process fixture in `conftest.py`:
+
+```python
+import pytest
+from xprocess import ProcessStarter
+
+@pytest.fixture
+def myserver(xprocess):
+    class Starter(ProcessStarter):
+        # startup pattern
+        pattern = "[Ss]erver has started!"

+        # command to start process
+        args = ['command', 'arg1', 'arg2']
+
+    # ensure process is running and return its logfile
+    logfile = xprocess.ensure("myserver", Starter)
+
+    # create a connection or url/port info to the server (placeholder values)
+    conn = ("localhost", 6777)
+    yield conn
+
+    # clean up whole process tree afterwards
+    xprocess.getinfo("myserver").terminate()
+```
+
+Now you can use this fixture in any test functions where `myserver` needs to be up and `xprocess` will take care of it for you.
+
+## [Matching process output with pattern](https://pytest-xprocess.readthedocs.io/en/latest/starter.html#matching-process-output-with-pattern)
+
+In order to detect that your process is ready to answer queries,
+`pytest-xprocess` allows the user to provide a string pattern by setting the
+class variable `pattern` in the Starter class. `pattern` will be waited for in
+the process `logfile` for a maximum time defined by `timeout` before timing out in
+case the provided pattern is not matched.
+
+It's important to note that `pattern` is a regular expression and will be matched using Python `re.search`.
+
+## [Controlling Startup Wait Time with timeout](https://pytest-xprocess.readthedocs.io/en/latest/starter.html#controlling-startup-wait-time-with-timeout)
+
+Some processes naturally take longer to start than others. By default,
+`pytest-xprocess` will wait for a maximum of 120 seconds for a given process to
+start before raising a `TimeoutError`.
Changing this value may be useful, for
+example, when the user knows that a given process would never take longer than
+a known amount of time to start under normal circumstances, so if it does go
+over this known upper boundary, that means something is wrong and the waiting
+process must be interrupted. The maximum wait time can be controlled through the
+class variable `timeout`.
+
+```python
+@pytest.fixture
+def myserver(xprocess):
+    class Starter(ProcessStarter):
+        # will wait for 10 seconds before timing out
+        timeout = 10
+```
+
+## Passing command line arguments to your process with `args`
+
+In order to start a process, pytest-xprocess must be given a command to be passed into the `subprocess.Popen` constructor. Any arguments passed to the process command can also be passed using `args`. As an example, if I usually use the following command to start a given process:
+
+```bash
+$> myproc -name "bacon" -cores 4
+```
+
+That would look like:
+
+```python
+args = ['myproc', '-name', '"bacon"', '-cores', '4']
+```
+
+And when using `args` in `pytest-xprocess` to start the same process:
+
+```python
+@pytest.fixture
+def myserver(xprocess):
+    class Starter(ProcessStarter):
+        # will pass "myproc -name "bacon" -cores 4" to the
+        # subprocess.Popen constructor so the process can be started with
+        # the given arguments
+        args = ['myproc', '-name', '"bacon"', '-cores', '4']
+
+        # ...
+```
+
+# References
+
+- [Source](https://github.com/pytest-dev/pytest-xprocess)
+- [Docs](https://pytest-xprocess.readthedocs.io/en/latest/)
diff --git a/docs/python-prometheus.md b/docs/python-prometheus.md
new file mode 100644
index 00000000000..3fe1827c5d1
--- /dev/null
+++ b/docs/python-prometheus.md
@@ -0,0 +1,49 @@
+[prometheus-client](https://github.com/prometheus/client_python) is the official Python client for [Prometheus](prometheus.md).
+ +# Installation + +```bash +pip install prometheus-client +``` + +# Usage + +Here is a simple script: + +```python +from prometheus_client import start_http_server, Summary +import random +import time + +# Create a metric to track time spent and requests made. +REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request') + +# Decorate function with metric. +@REQUEST_TIME.time() +def process_request(t): + """A dummy function that takes some time.""" + time.sleep(t) + +if __name__ == '__main__': + # Start up the server to expose the metrics. + start_http_server(8000) + # Generate some requests. + while True: + process_request(random.random()) +``` + +Then you can visit http://localhost:8000/ to view the metrics. + +From one easy to use decorator you get: + +- `request_processing_seconds_count`: Number of times this function was called. +- `request_processing_seconds_sum`: Total amount of time spent in this function. + +Prometheus's rate function allows calculation of both requests per second, and latency over time from this data. + +In addition if you're on Linux the process metrics expose CPU, memory and other information about the process for free. + +# References + +- [Source](https://github.com/prometheus/client_python) +- [Docs](https://github.com/prometheus/client_python) diff --git a/docs/python-telegram.md b/docs/python-telegram.md new file mode 100644 index 00000000000..28e642f12de --- /dev/null +++ b/docs/python-telegram.md @@ -0,0 +1,100 @@ +There are two ways to interact with Telegram through python: + +- Client libraries +- Bot libraries + +# Client libraries + +Client libraries use your account to interact with Telegram itself through a developer API token. + +The most popular to use is [Telethon](https://docs.telethon.dev/en/stable/index.html). 
+
+# Bot libraries
+
+[Telegram lists many libraries to interact with the bot API](https://core.telegram.org/bots/samples#python); the most interesting are:
+
+- [python-telegram-bot](#python-telegram-bot)
+- [pyTelegramBotAPI](#pytelegrambotapi)
+- [aiogram](#aiogram)
+
+If there comes a moment when we have to create the messages ourselves, [telegram-text](https://telegram-text.alinsky.tech/api_reference) may be an interesting library to check.
+
+## [python-telegram-bot](https://github.com/python-telegram-bot/python-telegram-bot)
+
+Pros:
+
+- Popular: 23k stars, 4.9k forks
+- Maintained: last commit 3 days ago
+- They have a developers community to get help in [this telegram group](https://telegram.me/pythontelegrambotgroup)
+- I like how they try to minimize third party dependencies, and how you can install the complements if you need them
+- Built on top of asyncio
+- Nice docs
+- Fully supports the [Telegram bot API](https://core.telegram.org/bots/api)
+- Has many examples
+
+Cons:
+
+- Interface is a little verbose and complicated at first glance
+- Only to be run in a single thread (not a problem)
+
+References:
+
+- [Package documentation](https://docs.python-telegram-bot.org/) is the technical reference for python-telegram-bot. It contains descriptions of all available classes, modules, methods and arguments as well as the changelog.
+- [Wiki](https://github.com/python-telegram-bot/python-telegram-bot/wiki/) is home to a number of more elaborate introductions to the different features of python-telegram-bot and other useful resources that go beyond the technical documentation.
+
+- [Examples](https://docs.python-telegram-bot.org/examples.html) section contains several examples that showcase the different features of both the Bot API and python-telegram-bot
+- [Source](https://github.com/python-telegram-bot/python-telegram-bot)
+
+## [pyTelegramBotAPI](https://github.com/eternnoir/pyTelegramBotAPI)
+
+Pros:
+
+- Popular: 7.1k stars, 1.8k forks
+- Maintained: last commit 3 weeks ago
+- Both sync and async
+- Nicer interface with decorators and simpler setup
+- [They have an example on how to split long messages](https://github.com/eternnoir/pyTelegramBotAPI#sending-large-text-messages)
+- [Nice docs on how to test](https://github.com/eternnoir/pyTelegramBotAPI#testing)
+- They have a developers community to get help in [this telegram group](https://telegram.me/joinchat/Bn4ixj84FIZVkwhk2jag6A)
+- Fully supports the [Telegram bot API](https://core.telegram.org/bots/api)
+- Has examples
+
+Cons:
+
+- Uses lambdas inside the decorators; I don't know why it does that.
+- The docs are not as thorough as `python-telegram-bot`'s.
+
+References:
+
+- [Documentation](https://pytba.readthedocs.io/en/latest/index.html)
+- [Source](https://github.com/eternnoir/pyTelegramBotAPI)
+- [Async Examples](https://github.com/eternnoir/pyTelegramBotAPI/tree/master/examples/asynchronous_telebot)
+
+## [aiogram](https://github.com/aiogram/aiogram)
+
+Pros:
+
+- Popular: 3.8k stars, 717 forks
+- Maintained: last commit 4 days ago
+- Async support
+- They have a developers community to get help in [this telegram group](https://t.me/aiogram)
+- Has type hints
+- Cleaner interface than `python-telegram-bot`
+- Fully supports the [Telegram bot API](https://core.telegram.org/bots/api)
+- Has examples
+
+Cons:
+
+- Less popular than `python-telegram-bot`
+- Docs are written at a developer level, so there's a difficult initial barrier to understanding how to use it.
+
+References:
+
+- [Documentation](https://docs.aiogram.dev/en/dev-3.x/)
+- [Source](https://github.com/aiogram/aiogram)
+- [Examples](https://github.com/aiogram/aiogram/tree/dev-3.x/examples)
+
+## Conclusion
+
+Even if `python-telegram-bot` is the most popular and has the best docs, I prefer one of the others due to the easier interface. `aiogram`'s documentation is kind of crap, and since it's the first time I'm making a bot, I'd rather have somewhere good to look.
+
+So I'd say to go first with `pyTelegramBotAPI`, and if it doesn't go well, fall back to `python-telegram-bot`.
diff --git a/docs/rocketchat.md b/docs/rocketchat.md
new file mode 100644
index 00000000000..d65cfad9cc7
--- /dev/null
+++ b/docs/rocketchat.md
@@ -0,0 +1,137 @@
+# [Install](https://github.com/RocketChat/Rocket.Chat.Electron/releases)
+
+- Download the latest [deb package](https://github.com/RocketChat/Rocket.Chat.Electron/releases)
+- `sudo dpkg -i file.deb`
+
+# [Integrations](https://docs.rocket.chat/use-rocket.chat/workspace-administration/integrations)
+
+Rocket.Chat supports webhooks to integrate tools and services you like into the platform. Webhooks are simple event notifications via HTTP POST. This way, any webhook application can post a message to a Rocket.Chat instance and much more.
+
+With scripts, you can point any webhook to Rocket.Chat and process the requests to print customized messages, define the username and avatar of the user of the messages and change the channel for sending messages, or you can cancel the request to prevent undesired messages.
+
+Available integrations:
+
+- Incoming Webhook: Let an external service send a request to Rocket.Chat to be processed.
+- Outgoing Webhook: Let Rocket.Chat trigger and optionally send a request to an external service and process the response.
+
+By default, a webhook is designed to post messages only. The message is part of a JSON structure, which has the same format as that of a .
+ +## [Incoming webhook script](https://docs.rocket.chat/use-rocket.chat/workspace-administration/integrations#incoming-webhook-script) + +To create a new incoming webhook: + +- Navigate to Administration > Workspace > Integrations. +- Click +New at the top right corner. +- Switch to the Incoming tab. +- Turn on the Enabled toggle. +- Name: Enter a name for your webhook. The name is optional; however, providing a name to manage your integrations easily is advisable. +- Post to Channel: Select the channel (or user) where you prefer to receive the alerts. It is possible to override messages. +- Post as: Choose the username that this integration posts as. The user must already exist. +- Alias: Optionally enter a nickname that appears before the username in messages. +- Avatar URL: Enter a link to an image as the avatar URL if you have one. The avatar URL overrides the default avatar. +- Emoji: Enter an emoji optionally to use the emoji as the avatar. [Check the emoji cheat sheet](https://github.com/ikatyang/emoji-cheat-sheet/blob/master/README.md#computer) +- Turn on the Script Enabled toggle. +- Paste your script inside the Script field (check below for a sample script) +- Save the integration. +- Use the generated Webhook URL to post messages to Rocket.Chat. + +The Rocket.Chat integration script should be written in ES2015 / ECMAScript 6. The script requires a global class named Script, which is instantiated only once during the first execution and kept in memory. This class contains a method called `process_incoming_request`, which is called by your server each time it receives a new request. The `process_incoming_request` method takes an object as a parameter with the request property and returns an object with a content property containing a valid Rocket.Chat message, or an object with an error property, which is returned as the response to the request in JSON format with a Code 400 status. 
+
+A valid Rocket.Chat message must contain a `text` field that serves as the body of the message. If you redirect the message to a channel other than the one indicated by the webhook token, you can specify a `channel` field that accepts a room id or, if prefixed with "#" or "@", a channel name or user, respectively.
+
+You can use the console methods to log information to help debug your script. More information about the console can be found [here](https://developer.mozilla.org/en-US/docs/Web/API/Console/log). To view the logs, navigate to Administration > Workspace > View Logs.
+
+```
+/* exported Script */
+/* globals console, _, s */
+
+/** Global Helpers
+ *
+ * console - A normal console instance
+ * _ - An underscore instance
+ * s - An underscore string instance
+ */
+
+class Script {
+  /**
+   * @params {object} request
+   */
+  process_incoming_request({ request }) {
+    // request.url.hash
+    // request.url.search
+    // request.url.query
+    // request.url.pathname
+    // request.url.path
+    // request.url_raw
+    // request.url_params
+    // request.headers
+    // request.user._id
+    // request.user.name
+    // request.user.username
+    // request.content_raw
+    // request.content
+
+    // console is a global helper to improve debug
+    console.log(request.content);
+
+    return {
+      content:{
+        text: request.content.text,
+        icon_emoji: request.content.icon_emoji,
+        channel: request.content.channel,
+        // "attachments": [{
+        //   "color": "#FF0000",
+        //   "author_name": "Rocket.Cat",
+        //   "author_link": "https://open.rocket.chat/direct/rocket.cat",
+        //   "author_icon": "https://open.rocket.chat/avatar/rocket.cat.jpg",
+        //   "title": "Rocket.Chat",
+        //   "title_link": "https://rocket.chat",
+        //   "text": "Rocket.Chat, the best open source chat",
+        //   "fields": [{
+        //     "title": "Priority",
+        //     "value": "High",
+        //     "short": false
+        //   }],
+        //   "image_url": "https://rocket.chat/images/mockup.png",
+        //   "thumb_url": "https://rocket.chat/images/mockup.png"
+        // }]
+      }
+    };
+
+    // return {
+    //   error:
{
+    //     success: false,
+    //     message: 'Error example'
+    //   }
+    // };
+  }
+}
+```
+
+To test if your integration works, use `curl` to make a POST request to the generated webhook URL.
+
+```bash
+curl -X POST \
+  -H 'Content-Type: application/json' \
+  --data '{
+      "icon_emoji": ":smirk:",
+      "text": "Example message"
+  }' \
+  https://your-webhook-url
+```
+
+If you want to send the message to another channel or user, use the `channel` argument with `@user` or `#channel`. Keep in mind that the user of the integration needs to be part of those channels if they are private.
+
+```bash
+curl -X POST \
+  -H 'Content-Type: application/json' \
+  --data '{
+      "icon_emoji": ":smirk:",
+      "channel": "#notifications",
+      "text": "Example message"
+  }' \
+  https://your-webhook-url
+```
+
+If you want to do more complex things, uncomment the attachments section.
diff --git a/docs/sed.md b/docs/sed.md
index 21ea2295f01..620d2a5e4e2 100644
--- a/docs/sed.md
+++ b/docs/sed.md
@@ -30,7 +30,7 @@ find {{ directory }} -type f -exec sed -i 's/nano/vim/g' {} +
 
 Sed doesn't support non greedy, use `.[^{{ character }}]*` instead
 
-## Delete match
+## Delete lines that match
 
 ```bash
 sed '//d' file
diff --git a/docs/siem.md b/docs/siem.md
new file mode 100644
index 00000000000..fe6b31b29cd
--- /dev/null
+++ b/docs/siem.md
@@ -0,0 +1,4 @@
+
+Open source SIEMs:
+
+- [Wazuh](https://wazuh.com/)
diff --git a/docs/tails.md b/docs/tails.md
new file mode 100644
index 00000000000..e28055b820b
--- /dev/null
+++ b/docs/tails.md
@@ -0,0 +1,16 @@
+[Tails](https://tails.net/install/linux/index.en.html) is a portable operating system that protects against surveillance and censorship.
+
+# [Installation](https://tails.net/install/linux/index.en.html)
+
+# [Upgrading a tails USB](https://tails.net/upgrade/tails/index.en.html)
+
+# Troubleshooting
+
+## [Change the window manager](https://www.reddit.com/r/tails/comments/qzruhv/changing_window_manager/)
+
+Don't do it: they say it will break Tails, although I don't understand why.
+
+# References
+
+- [Home](https://tails.net/index.en.html)
+- [Docs](https://tails.net/doc/index.en.html)
diff --git a/docs/vdirsyncer.md b/docs/vdirsyncer.md
index da26319a46d..e27ee451a10 100644
--- a/docs/vdirsyncer.md
+++ b/docs/vdirsyncer.md
@@ -324,7 +324,16 @@ If the official steps failed for you, try these ones:
         redirect_uri="http://127.0.0.1:8088",
     ```
 
+    You also need to find where the `wsgi` server is started (`wsgiref.simple_server.make_server`) and set the port to `8088`:
+
+    ```python
+    local_server = wsgiref.simple_server.make_server(
+        host, 8088, wsgi_app, handler_class=_WSGIRequestHandler
+    )
+    ```
+
 * Run `vdirsyncer discover my_calendar`.
+
 * Opened the link in my browser (on my desktop machine).
 * Proceeded with Google authentication until "Firefox can not connect to 127.0.0.1:8088." was displayed.
 from the browser's address bar that looked like:
diff --git a/docs/vim.md b/docs/vim.md
index b6e9d72fea5..63e677eb99c 100644
--- a/docs/vim.md
+++ b/docs/vim.md
@@ -396,6 +396,8 @@ require('telescope').load_extension('fzf')
 
 It also needs [`fd`](https://github.com/sharkdp/fd#installation) for further features. You should be using it too for your terminal.
 
+NOTE: To [search for exact words](https://github.com/nvim-telescope/telescope.nvim/issues/1083), start the search with `'`.
+
 To check that everything is fine run `:checkhealth telescope`.
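Once the health check passes, usage boils down to calling the built-in pickers. A minimal keymap sketch (the `<leader>` bindings are illustrative choices, not part of the patch; `telescope.builtin` and its `find_files`, `live_grep` and `buffers` pickers are the plugin's standard API):

```lua
-- Illustrative mappings for the most common telescope pickers
local builtin = require('telescope.builtin')
vim.keymap.set('n', '<leader>ff', builtin.find_files, { desc = 'Find files' })
vim.keymap.set('n', '<leader>fg', builtin.live_grep, { desc = 'Grep in the project' })
vim.keymap.set('n', '<leader>fb', builtin.buffers, { desc = 'List open buffers' })
```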
### [Usage](https://github.com/nvim-telescope/telescope.nvim#usage)
@@ -708,9 +710,9 @@ I've been using `vim-fugitive` for some years now and it works very well but is
 
 At a first look `lazygit` is too much and `neogit` a little more verbose than `vim-fugitive` but it looks closer to my current workflow. I'm going to try `neogit` then.
 
-## [Neogit](https://github.com/Neogit/neogit)
+### [Neogit](https://github.com/Neogit/neogit)
 
-### [Installation](https://github.com/TimUntersberger/neogit#installation)
+#### [Installation](https://github.com/TimUntersberger/neogit#installation)
 
 ```lua
 use { 'TimUntersberger/neogit', requires = 'nvim-lua/plenary.nvim' }
@@ -730,6 +732,7 @@ That uses the default configuration, but there are [many options that can be set
 neogit.setup({
   disable_commit_confirmation = true
 })
+```
 
 ### Improve the commit message window
 
@@ -740,7 +743,7 @@ https://neovim.discourse.group/t/how-to-create-an-auto-command-for-a-specific-fi
 
 [autocmd events](https://neovim.io/doc/user/autocmd.html#autocmd-events)
 
-# [Abbreviations](https://davidxmoody.com/2014/better-vim-abbreviations/)
+## [Abbreviations](https://davidxmoody.com/2014/better-vim-abbreviations/)
 
 In order to reduce the amount of typing and fix common typos, I use the Vim abbreviations support. Those are split into two files,
@@ -806,7 +809,7 @@ Check the
 [README](https://github.com/tpope/vim-abolish/blob/master/doc/abolish.txt) for more details.
 
-## Troubleshooting
+### Troubleshooting
 
 Abbreviations with dashes or if you only want the first letter in capital need to be specified with the first letter in capital letters as stated in [this
@@ -825,7 +828,7 @@ Abolish Knobas Knowledge-based
 Abolish W What
 ```
 
-# Auto complete prose text
+## Auto complete prose text
 
 Tools like [YouCompleteMe](https://github.com/ycm-core/YouCompleteMe) allow you to auto complete variables and functions. If you want the same functionality for
@@ -853,7 +856,7 @@ au FileType markdown let g:ycm_max_num_candidates = 1
 au FileType markdown let g:ycm_max_num_identifier_candidates = 1
 ```
 
-# Find synonyms
+## Find synonyms
 
 Sometimes the prose linters tell you that a word is wordy or too complex, or you may be repeating a word too much. The [thesaurus query
@@ -896,7 +899,7 @@ Type number and (empty cancels; 'n': use next backend; 'p' use previous
 
 If for example you type `45` and hit enter, it will change it for `thus`.
 
-# [Keep foldings](https://stackoverflow.com/questions/37552913/vim-how-to-keep-folds-on-save)
+## [Keep foldings](https://stackoverflow.com/questions/37552913/vim-how-to-keep-folds-on-save)
 
 When running fixers usually the foldings go to hell. To keep the foldings add the following snippet to your vimrc file
@@ -909,14 +912,14 @@ augroup remember_folds
 augroup END
 ```
 
-## [Python folding done right](https://github.com/tmhedberg/SimpylFold)
+### [Python folding done right](https://github.com/tmhedberg/SimpylFold)
 
 Folding Python in Vim is not easy, the python-mode plugin doesn't do it for me by default and after fighting with it for 2 hours...
 
 SimpylFold does the trick just fine.
 
-# [Delete a file inside vim](https://vim.fandom.com/wiki/Delete_files_with_a_Vim_command)
+## [Delete a file inside vim](https://vim.fandom.com/wiki/Delete_files_with_a_Vim_command)
 
 ```vim
 :call delete(expand('%')) | bdelete!
@@ -932,7 +935,7 @@ endfunction
 
 Now you need to run `:call Rm()`.
 
-# Task management
+## Task management
 
 Check the [`nvim-orgmode`](orgmode.md) file.
 
@@ -942,6 +945,35 @@
 * [Getting started guide](https://github.com/nvim-orgmode/orgmode/wiki/Getting-Started)
 * [Docs](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md)
 
+## [Email inside nvim](https://www.reddit.com/r/neovim/comments/zh0nx9/email_client/)
+
+The best looking one is Himalaya:
+
+- [Home](https://pimalaya.org/himalaya/index.html)
+- [Nvim plugin](https://git.sr.ht/%7Esoywod/himalaya-vim)
+- [Source](https://github.com/soywod/himalaya)
+
+# Tips
+
+## [Run a command when opening vim](https://vi.stackexchange.com/questions/846/how-can-i-start-vim-and-then-execute-a-particular-command-that-includes-a-fro)
+
+```bash
+nvim -c ':DiffViewOpen'
+```
+## Run lua snippets
+
+Run a Lua snippet within neovim with `:lua <snippet>`. Useful to test commands before binding them to keys.
+
+## Bind a lua function to a key binding
+
+```lua
+key.set({'n'}, 't', ":lua require('neotest').run.run()<cr>", {desc = 'Run the closest test'})
+```
+
+## [Use relativenumber](https://koenwoortman.com/vim-relative-line-numbers/)
+
+If you enable the `relativenumber` setting, line numbers are shown relative to the cursor line, which makes it easy to move around with `10j` or `10k`.
+
 # Troubleshooting
 
 When you run into problems run `:checkhealth` to see if it rings a bell
@@ -985,28 +1017,6 @@ require('telescope').setup{
 ...
 ```
-# Tips
-
-## [Run a command when opening vim](https://vi.stackexchange.com/questions/846/how-can-i-start-vim-and-then-execute-a-particular-command-that-includes-a-fro)
-
-```bash
-nvim -c ':DiffViewOpen'
-```
-## Run lua snippets
-
-Run lua snippet within neovim with `:lua `. Useful to test the commands before binding it to keys.
-
-## Bind a lua function to a key binding
-
-```lua
-key.set({'n'}, 't', ":lua require('neotest').run.run()", {desc = 'Run the closest test'})
-```
-
-## [Use relativenumber](https://koenwoortman.com/vim-relative-line-numbers/)
-
-If you enable the `relativenumber` configuration you'll see how to move around with `10j` or `10k`.
- -# Troubleshooting ## Telescope changes working directory when opening a file diff --git a/mkdocs.yml b/mkdocs.yml index 17f90d1b4c8..21869dfa352 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -21,9 +21,14 @@ nav: - Free Knowledge: free_knowledge.md - Environmentalism: environmentalism.md - Laboral: laboral.md + - Collaborating tools: collaborating_tools.md - Life Management: - life_management.md - Time Management: time_management.md + - Life analysis: + - life_analysis.md + - Life planning: life_planning.md + - Life review: life_review.md - Task Management: - task_management.md - Getting Things Done: gtd.md @@ -32,7 +37,6 @@ nav: - Org Mode: orgmode.md - OpenProject: openproject.md - Task Management Workflows: task_workflows.md - - Life review: life_review.md - Interruption Management: - interruption_management.md - Work Interruption Analysis: work_interruption_analysis.md @@ -146,6 +150,10 @@ nav: - python-gnupg: python_gnupg.md - Python Mysql: python_mysql.md - pythonping: pythonping.md + - Python Prometheus: python-prometheus.md + - Python Telegram: + - python-telegram.md + - pytelegrambotapi: pytelegrambotapi.md - Python VLC: python_vlc.md - Plotly: coding/python/plotly.md - questionary: questionary.md @@ -184,6 +192,7 @@ nav: - Parametrized testing: coding/python/pytest_parametrized_testing.md - Pytest-cases: coding/python/pytest_cases.md - Pytest-HttpServer: pytest_httpserver.md + - Pytest-xprocess: pytest-xprocess.md - Internationalization: python_internationalization.md - Python Snippets: coding/python/python_snippets.md - Data Classes: coding/python/data_classes.md @@ -214,6 +223,7 @@ nav: - JWT: devops/jwt.md - React: coding/react/react.md - Generic Coding Practices: + - How to code: how_to_code.md - Program Versioning: - versioning.md - Semantic Versioning: semantic_versioning.md @@ -366,6 +376,7 @@ nav: - Blackbox Exporter: devops/prometheus/blackbox_exporter.md - Elasticsearch Exporter: elasticsearch_exporter.md - Node Exporter: 
devops/prometheus/node_exporter.md + - Python Prometheus: python-prometheus.md - Instance sizing analysis: devops/prometheus/instance_sizing_analysis.md - Prometheus Troubleshooting: >- devops/prometheus/prometheus_troubleshooting.md @@ -374,6 +385,7 @@ nav: - Loki: loki.md - Graylog: graylog.md - Elastic Security: elastic_security.md + - SIEM: siem.md - Authentication: - Authentik: authentik.md - Self-hosted services: @@ -443,9 +455,11 @@ nav: - Profanity: profanity.md - retroarch: retroarch.md - rm: linux/rm.md + - Rocketchat: rocketchat.md - sed: sed.md - Syncthing: linux/syncthing.md - Libreelec: libreelec.md + - Tails: tails.md - Torrents: - torrents.md - qBittorrent: qbittorrent.md @@ -532,6 +546,7 @@ nav: - Data Analysis: - Recommender Systems: >- data_analysis/recommender_systems/recommender_systems.md + - Parsing Data: parsing_data.md - Psychology: - The XY Problem: psychology/the_xy_problem.md - Botany: