
Cleaning Metadata from Multimedia Files

Theory

Multimedia files carry metadata (EXIF, XMP, UserData, etc.) that can reveal sensitive information. For privacy, removing it is recommended.
exiftool edits metadata directly, but fails on complex structures (such as sidx). ffmpeg remuxes the file without modifying the content, which makes it ideal for the problematic cases.

Commands

1. Remove all metadata (ExifTool)

exiftool -all= -m -o ./salida/ -ext mp4 -ext webp -r /ruta/entrada/
  • -all=: removes all metadata.
  • -m: ignores minor errors (such as Terminator found in Meta).
  • -r: processes the input recursively.
  • -o ./salida/: writes the cleaned files there.
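
To confirm the cleanup worked, ExifTool itself can list whatever survived (a sketch; `video.mp4` stands for any cleaned file in the output folder):

```shell
# Show every remaining tag with its group; a clean file should list only
# structural properties (duration, codec, dimensions), not EXIF/XMP data.
exiftool -a -G1 ./salida/video.mp4

# Or restrict the listing to the sensitive groups; ideally this prints nothing:
exiftool -EXIF:all -XMP:all ./salida/video.mp4
```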

2. Remux the file (FFmpeg)

When exiftool fails (e.g. due to sidx):

ffmpeg -i entrada.mp4 -c copy -map_metadata -1 -f mp4 salida.mp4

For .webp:

ffmpeg -i entrada.webp -c copy -metadata:s:v:0 "" salida.webp
  • -c copy: copies the streams without re-encoding.
  • -map_metadata -1: drops all metadata.
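
To check the result, ffprobe (shipped with ffmpeg) can dump whatever container tags remain; after `-map_metadata -1` the `tags` object should be empty or absent:

```shell
# Print the container-level metadata of the remuxed file as JSON.
ffprobe -v quiet -show_entries format_tags -of json salida.mp4
```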

3. Batch processing with FFmpeg

for file in *.mp4; do ffmpeg -i "$file" -c copy -map_metadata -1 -f mp4 "salida/$file"; done
for file in *.webp; do ffmpeg -i "$file" -c copy -metadata:s:v:0 "" "salida/$file"; done
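
A slightly more defensive variant of the loops above (a sketch: it creates the output folder first and reports files that fail instead of silently overwriting or aborting):

```shell
mkdir -p salida
for file in *.mp4; do
  [ -e "$file" ] || continue   # no matches: the glob stays literal, so skip it
  ffmpeg -i "$file" -c copy -map_metadata -1 -f mp4 "salida/$file" \
    || echo "failed: $file" >&2
done
```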

References

Post-installation guide for a fresh Debian 13 workstation

Changelog

Version Date Comment Author
1.0 10/01/2025 First edition. Theory and cheat sheet. Jes

Summary

This guide is a step-by-step walkthrough meant to reproduce my workstation environment after a full wipe of the disk. It covers: the official Debian 13 free repos, the Librewolf repo, the Mullvad Browser repo, the full Proton desktop suite, Neovim, tmux, and a set of my favorite apt packages; some for work/productivity, some for customization, and some for fun.

My Proton referral code

Debian 13 Graphical Installer

Do not use any network. Use the entire disk with guided partitioning and an encrypted Logical Volume. Use a separate /home partition. Select the Debian desktop environment and GNOME.

Setup sudo

Grant sudo rights to the everyday user.

su -
usermod -aG sudo jes

Reboot now.

VPN Secure Core

Load a pre-downloaded WireGuard profile for Proton VPN.

If wg-quick is available:

sudo mkdir -p /etc/wireguard
sudo cp /mnt/$USER/external_drive_mount_point/Wireward/CH-MX.conf /etc/wireguard/
sudo wg-quick up CH-MX
sudo systemctl enable wg-quick@CH-MX

Reload if you edit the profile:

sudo wg-quick down CH-MX && sudo wg-quick up CH-MX

But wg-quick is not available if we didn't connect to the internet during installation (for example because we only had access to an unsecured network), so instead we use the GNOME NetworkManager GUI.

First, make the file accessible only by our everyday user:

sudo mkdir -p /etc/wireguard
sudo cp /mnt/$USER/external_drive_mount_point/Wireward/CH-MX.conf /etc/wireguard/
sudo chmod 600 /etc/wireguard/CH-MX.conf
sudo chown $USER:$USER /etc/wireguard/CH-MX.conf

Then proceed with the GUI.

Turn on Wi-Fi or connect an Ethernet cable, and check your public IP.

Prepare for package installation

Add official Debian APT sources

I run this snippet as root; you could also create the file and open it with gedit/nano under sudo and paste the content in.

sudo su -
cat << EOF > /etc/apt/sources.list.d/debian.sources
Types: deb deb-src
URIs: https://deb.debian.org/debian
Suites: trixie trixie-updates
Components: main non-free-firmware
Enabled: yes
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg

Types: deb deb-src
URIs: https://security.debian.org/debian-security
Suites: trixie-security
Components: main non-free-firmware
Enabled: yes
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF

Trust Non-Official Repositories

Curl and other utilities are installed here instead of in the next section because we use them right away.

sudo apt update && \
sudo apt install extrepo curl wget apt-transport-https -y && \
sudo extrepo enable librewolf && \
sudo wget -O /usr/share/keyrings/element-io-archive-keyring.gpg https://packages.element.io/debian/element-io-archive-keyring.gpg && \
echo "deb [signed-by=/usr/share/keyrings/element-io-archive-keyring.gpg] https://packages.element.io/debian/ default main" | sudo tee /etc/apt/sources.list.d/element-io.list && \
sudo curl -fsSLo /usr/share/keyrings/mullvad-keyring.asc https://repository.mullvad.net/deb/mullvad-keyring.asc && \
echo "deb [signed-by=/usr/share/keyrings/mullvad-keyring.asc arch=$( dpkg --print-architecture )] https://repository.mullvad.net/deb/stable stable main" | sudo tee /etc/apt/sources.list.d/mullvad.list

Update installed packages

Before continuing, we upgrade the packages that were installed offline from the installation media:

sudo apt update && \
sudo apt upgrade -y 

Install Software

System update & unattended upgrades

sudo apt update && \
sudo apt full-upgrade -y && \
sudo apt install -y unattended-upgrades && \
sudo dpkg-reconfigure unattended-upgrades
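
The dpkg-reconfigure step is interactive; the same result can be sketched non-interactively by writing the standard periodic settings file directly (the file name and keys below are Debian's conventional defaults for this mechanism):

```shell
# Enable daily package-list updates and unattended upgrades without the TUI prompt.
printf '%s\n' \
  'APT::Periodic::Update-Package-Lists "1";' \
  'APT::Periodic::Unattended-Upgrade "1";' \
  | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
```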

Install dependencies and favorite packages from apt sources

sudo apt install -y git exiftool vim tar btop ufw apparmor apparmor-utils gnupg gnupg2 gnupg1 gnupg-agent libayatana-appindicator3-1 gir1.2-ayatanaappindicator3-0.1 gnome-shell-extensions gnome-shell-extension-manager gnome-shell-extension-prefs gnome-shell-extension-appindicator gnome-shell-extension-blur-my-shell gnome-shell-extension-impatience gnome-shell-extension-system-monitor gnome-shell-extension-hard-disk-led gnome-shell-extension-user-theme flatpak gnome-software-plugin-flatpak xfce4-terminal clamav clamav-daemon clamtk clamtk-gnome rkhunter fail2ban mullvad-browser librewolf debsig-verify debian-keyring nicotine keepassxc-full webext-keepassxc-browser keepassxc nextcloud-desktop tmux rhythmbox easyeffects perl jq element-desktop
sudo apt update && \
sudo apt autoremove --purge && \
sudo apt clean

Enable basic firewall

sudo ufw default deny incoming && \
sudo ufw default allow outgoing && \
sudo ufw enable

NerdFetch

sudo curl -fsSL https://raw.githubusercontent.com/ThatOneCalculator/NerdFetch/main/nerdfetch -o /usr/local/bin/nerdfetch && \
sudo chmod +x /usr/local/bin/nerdfetch && \
echo 'nerdfetch' >> /home/$USER/.bashrc

Full Proton suite for Gnome

Links on this website point to old versions: Proton Apps

Individual product websites do point to latest versions... mostly.

But I figured the best way to get the newest links is to check the .json files that contain all version numbers, checksums and download URLs.

For some reason Proton VPN has the file URL and the checksum published in this support article: How to install the Proton VPN GUI app on Debian. I failed to find a .json file similar to the other products'. I did find a Release GPG key in the protonvpn repository, but the checksum in the support article works fine, so we use that.

For Mail Bridge they use a different integrity verification method: Proton Mail Bridge Signature

Export Variables
export PROTON_VPN_CHECKSUM="0b14e71586b22e498eb20926c48c7b434b751149b1f2af9902ef1cfe6b03e180" && \
export PROTON_VPN_VERSION="1.0.8_all" && \
export PROTON_PASS_CHECKSUM="c742f302f73d59484a11055d58229240c7a0f648b2b9778968b6aac9e814e33c316f2a658cac58b55c518d228b59d92e1215e3eda07a5957fd5c4dddc086c8d0" && \
export PROTON_PASS_VERSION="1.33.5_amd64" && \
export PROTON_MAIL_CHECKSUM="eb24eeee66adc282ad7127a3e90cfd075353188f1a259c54d63ceb1883a6bfd4c5de4931a614836c20fedb18f3be1d972cf69486965e973f7f66d87e0120e28e" && \
export PROTON_MAIL_VERSION="1.12.0" && \
export PROTON_BRIDGE_VERSION="3.21.2-1_amd64" && \
export PROTON_AUTH_CHECKSUM="f442dbf6c798586316f0e49cdfec999787cee3a0e1b42d43a868405e4f1f33496eb9e20a15209ad31595b26ac795eb7808bc10c341658c89732727bef5ca55de" && \
export PROTON_AUTH_VERSION="1.1.4_amd64"
Download packages
wget https://repo.protonvpn.com/debian/dists/stable/main/binary-all/protonvpn-stable-release_${PROTON_VPN_VERSION}.deb && \
wget https://proton.me/download/pass/linux/proton-pass_${PROTON_PASS_VERSION}.deb && \
wget https://proton.me/download/authenticator/linux/ProtonAuthenticator_${PROTON_AUTH_VERSION}.deb && \
wget https://proton.me/download/mail/linux/${PROTON_MAIL_VERSION}/ProtonMail-desktop-beta.deb && \
wget https://proton.me/download/bridge/protonmail-bridge_${PROTON_BRIDGE_VERSION}.deb
Check Integrity
echo "$PROTON_VPN_CHECKSUM protonvpn-stable-release_${PROTON_VPN_VERSION}.deb" | sha256sum --check - && \
echo "$PROTON_PASS_CHECKSUM proton-pass_${PROTON_PASS_VERSION}.deb" | sha512sum --check - && \
echo "$PROTON_AUTH_CHECKSUM ProtonAuthenticator_${PROTON_AUTH_VERSION}.deb" | sha512sum --check - && \
echo "$PROTON_MAIL_CHECKSUM ProtonMail-desktop-beta.deb" | sha512sum --check -

We expect every check to report OK up to this point.

Now proceed with the verification mechanism for the bridge:

wget https://proton.me/download/bridge/bridge_pubkey.gpg  && \
gpg --dearmor --output debsig.gpg bridge_pubkey.gpg  && \
sudo mkdir -p /usr/share/debsig/keyrings/E2C75D68E6234B07  && \
sudo mv debsig.gpg /usr/share/debsig/keyrings/E2C75D68E6234B07  && \
wget https://proton.me/download/bridge/bridge.pol  && \
sudo mkdir -p /etc/debsig/policies/E2C75D68E6234B07  && \
sudo cp bridge.pol /etc/debsig/policies/E2C75D68E6234B07  && \
debsig-verify protonmail-bridge_${PROTON_BRIDGE_VERSION}.deb
Should yield: debsig: Verified package from 'Proton AG (Proton Mail Bridge developers) <bridge@protonmail.ch>' (Proton AG)

Install Proton Suite
sudo dpkg -i ./protonvpn-stable-release_${PROTON_VPN_VERSION}.deb && \
sudo apt update && \
sudo apt install -y proton-vpn-gnome-desktop
sudo apt update && \
sudo dpkg -i ./proton-pass_${PROTON_PASS_VERSION}.deb  && \
sudo dpkg -i ./ProtonAuthenticator_${PROTON_AUTH_VERSION}.deb && \
sudo dpkg -i ./ProtonMail-desktop-beta.deb && \
sudo dpkg -i ./protonmail-bridge_${PROTON_BRIDGE_VERSION}.deb

I wouldn't use the unofficial RClone plugin for Proton Drive because I'm paying Proton AG for an enterprise-level office suite, and that's what I expect...

For the time being I'm using the Proton Drive web app for my personal files, and a Nextcloud instance on my own VPS for work-related files.

Post-installation: Hardening, Security, and other configurations

Firewall hardening

Deny everything by default and only allow known traffic. Add a fallback "kill switch" just in case.

export WG_IFACE="CH-MX"
export PROTON_IFACE="proton0"
export ETH_IFACE="en0"
export WIFI_IFACE="wl0"
export PROTON_DNS="10.2.0.1"  # Proton VPN's in-tunnel DNS; adjust to match your profile
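
Interface names vary per machine (enp3s0, wlp2s0, wg0, …), so list the real ones before filling in the variables above:

```shell
# Every network interface the kernel knows about, one directory per device.
ls /sys/class/net
```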

sudo ufw reset
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw default deny routed

sudo ufw allow in on lo
sudo ufw allow in on "$ETH_IFACE" to any port 67 proto udp comment 'DHCP server (eth)'
sudo ufw allow in on "$ETH_IFACE" to any port 68 proto udp comment 'DHCP client (eth)'
sudo ufw allow in on "$WIFI_IFACE" to any port 67 proto udp comment 'DHCP server (wifi)'
sudo ufw allow in on "$WIFI_IFACE" to any port 68 proto udp comment 'DHCP client (wifi)'

# DNS
sudo ufw allow out on "$ETH_IFACE" to 127.0.0.53 port 53 proto udp comment 'DNS (eth)'
sudo ufw allow out on "$WIFI_IFACE" to 127.0.0.53 port 53 proto udp comment 'DNS (wifi)'
sudo ufw allow out on "$ETH_IFACE" to "$PROTON_DNS" port 53 proto udp comment 'Proton DNS (eth)'
sudo ufw allow out on "$ETH_IFACE" to "$PROTON_DNS" port 53 proto tcp comment 'Proton DNS (eth)'
sudo ufw allow out on "$WIFI_IFACE" to "$PROTON_DNS" port 53 proto udp comment 'Proton DNS (wifi)'
sudo ufw allow out on "$WIFI_IFACE" to "$PROTON_DNS" port 53 proto tcp comment 'Proton DNS (wifi)'

# Proton VPN API
sudo ufw allow out on "$ETH_IFACE" to any port 443 proto tcp comment 'Proton VPN API (eth)'
sudo ufw allow out on "$WIFI_IFACE" to any port 443 proto tcp comment 'Proton VPN API (wifi)'

# VPN tunnels
sudo ufw allow out on "$WG_IFACE" comment "WireGuard tunnel ($WG_IFACE) – outbound"
sudo ufw allow out on "$PROTON_IFACE" comment "Proton VPN tunnel ($PROTON_IFACE) – outbound"
sudo ufw allow in on "$PROTON_IFACE" comment "Allow inbound from Proton tunnel (established)"
sudo ufw allow in on "$WG_IFACE" comment "Allow inbound from WireGuard tunnel (established)"

sudo ufw enable

AppArmor

sudo systemctl enable --now apparmor

Sysctl hardening

  • net.ipv4.tcp_syncookies = 1 – enables SYN-cookies to mitigate SYN-flood attacks that could exhaust the TCP connection table.
  • net.ipv4.conf.all.rp_filter = 1 – activates reverse-path filtering, discarding packets with source addresses that do not match the expected output path, avoiding spoofing and reflections.
  • net.ipv4.conf.all.accept_redirects = 0 – ignores ICMP redirects, preventing an attacker from modifying the host's route table.
  • net.ipv4.icmp_echo_ignore_broadcasts = 1 – blocks responses to broadcast pings, preventing the machine from participating in DDoS amplification attacks.
  • kernel.randomize_va_space = 2 – enables full ASLR, increasing the randomness of memory layout and making it more difficult to exploit vulnerabilities.
  • kernel.kptr_restrict = 2 – Hides kernel symbols in /proc/kallsyms from unprivileged users, reducing the information available to local attackers.
  • fs.suid_dumpable = 0 – prevents processes with SUID/SGID from generating core dumps, avoiding the exposure of sensitive data in case of failures.

As root (or via sudo with gedit/nano), write:

cat << EOF > /etc/sysctl.d/99-hardening.conf
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
kernel.randomize_va_space = 2
kernel.kptr_restrict = 2
fs.suid_dumpable = 0
EOF

sudo sysctl --system
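
To verify the settings took effect, read the values back from /proc (each should match 99-hardening.conf):

```shell
# Each sysctl key maps to a path under /proc/sys with dots replaced by slashes.
for key in net.ipv4.tcp_syncookies kernel.randomize_va_space kernel.kptr_restrict fs.suid_dumpable; do
  path="/proc/sys/$(printf '%s' "$key" | tr . /)"
  printf '%s = %s\n' "$key" "$(cat "$path")"
done
```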

ClamAV

Enable automatic signature updates

sudo freshclam
sudo systemctl enable --now clamav-freshclam
sudo systemctl status clamav-freshclam

Fail2Ban

sudo systemctl enable --now fail2ban
sudo systemctl status fail2ban

Disable direct root login

sudo passwd -l root

Import SSH key pairs from the external drive

TODO: pending

RKHunter

TODO: pending

First-open librewolf setup

KeepassXC Browser Extension

Allow the extension in the KeepassXC GUI.

Add the Flathub repository for flatpak (to install some extra software later)

flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo

Clone favorite repositories

Xfce terminal theme

mkdir -p ~/.local/share/xfce4/terminal/colorschemes
git clone https://github.com/rose-pine/xfce-terminal.git ~/Repositories/rose-pine/
cp ~/Repositories/rose-pine/dist/*.theme /home/$USER/.local/share/xfce4/terminal/colorschemes/
chown -R $USER:$USER ~/.local/share/xfce4

Neovim & Friends

git clone https://github.com/neovim/neovim.git
git clone ssh://git@codeberg.org/jesmx/tmux.conf.git
git clone ssh://git@codeberg.org/jesmx/tmux.powerline.conf.git
git clone ssh://git@codeberg.org/jesmx/nvim.conf.git

Build Neovim from source on the nightly (main) branch. Then apply my personal tmux and Neovim configurations.

My personal knowledge base and blog

git clone ssh://git@codeberg.org/jesmx/Knowledge.git

Easy Effects Presets (Better Audio)

Easy Effects wiki Community Presets

mkdir -p ~/Music/eq_presets/
cd ~/Music/eq_presets/
git clone https://github.com/Bundy01/EasyEffects-Presets
git clone https://github.com/JackHack96/EasyEffects-Presets.git
git clone https://github.com/qbarbosa/PulseEffects-Presets.git

Converting PulseEffects presets to EasyEffects presets:

In EasyEffects 6.0.0 the preset structure changed a little. There is a bash script that can convert all presets in a directory at once; it was provided in this issue comment by AbsurdlySuspicious.

cat << 'EOF' > ~/Music/eq_presets/fix_ef_pe.sh
#!/usr/bin/env bash

# Replace with 'input' if you want to convert input preset
section='output'

# Convert boolean and numeric strings + replace invalid empty blocklist
perl -i -pe 's/"(true|false|[\d\.-]+)"/$1/g; s/(?<="blocklist": )""/[]/g' "$@"

# Fix plugins order using v5 state field (your set up plugin order is preserved)
for f in "$@"; do

  # Extract active plugins in right order
  po_src=$(jq ".$section"' | . as $out | .plugins_order | .[] | . as $pn | select($out | to_entries | .[] | .key as $key | select(["blocklist", "plugins_order"] | any(. == $key) | not) | select(.value.state != false) | .key == $pn)' "$f" -r)

  # Replace order array in config with new one
  jq --arg po "$po_src" '($po | split("\n")) as $poa | '".$section.plugins_order"' = $poa' "$f" >tmp
  mv tmp "$f"
done
EOF
chmod +x ~/Music/eq_presets/fix_ef_pe.sh
cd ~/Music/eq_presets/
./fix_ef_pe.sh *.json

Final sanity‑check

Check hardening status

sudo aa-status | grep "enforced"
sudo systemctl is-active clamav-freshclam
sudo systemctl is-active fail2ban
id $USER
sudo -lU $USER

Check Network status

sudo ufw status verbose
nmcli connection show --active

Setup git for making commits

git config --global user.email "$USER@jes.mx"
git config --global user.name "$USER"

Clean apt cache

sudo apt update
sudo apt upgrade -y
sudo apt clean
sudo apt autoremove --purge

Reboot now.

Set up the anime girl background and a transparent Xfce terminal. Make the terminal maximize on start, remove the scrollbar, and remove all terminal key-binds except full-screen. Set the rose-pine color theme in settings.

Make the VPN GUI launch at startup; set up the kill switch, NetShield, IPv6 support, and automatic connection to México.

Enable and customize gnome shell extensions.

Install Top Bar Customizer from gnome user extensions and sort the top bar.

Log into Nextcloud accounts via the desktop app. Log into Nextcloud accounts via GNOME Online Accounts.

Set up Proton Mail Bridge.

Log into IMAP & SMTP accounts via GNOME accounts.

Blast some music and get ready to work ;)

Installing Nginx with Docker Compose on a Debian/Ubuntu VPS

Changelog

Version Date Comment Author
1.0 03/02/2025 First edition. Theory and cheat sheet. Jes

Introduction

Docker Compose is a tool that simplifies container orchestration, allowing you to define and manage multi-container services through a YAML file. Combined with Nginx, an efficient web server and reverse proxy, it provides a robust solution for deploying web applications in a reproducible, isolated way. This article explains how to deploy Nginx on a Debian/Ubuntu VPS using Docker Compose, including practical examples and advanced configurations.

1. Theory

1.1 Docker Compose

Docker Compose lets you define services, networks, and volumes in a docker-compose.yml file, simplifying the management of containerized applications. Its advantages include:

  • Reproducibility: declarative configuration of the environment.
  • Isolation: independent services with dedicated resources.
  • Portability: runs on any system with Docker installed.

1.2 Official Nginx Image

Available on Docker Hub, it includes:

  • Support for static content (HTML, CSS, JS).
  • Configuration customization through volume mounts.
  • Environment-variable substitution at runtime (via envsubst).
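
The envsubst mechanism can be sketched like this: official nginx images (1.19+) render any `/etc/nginx/templates/*.template` file into `/etc/nginx/conf.d/` at startup, substituting environment variables. The host name and port below are made-up examples:

```shell
# A config template with environment placeholders:
cat > default.conf.template <<'EOF'
server {
    listen ${NGINX_PORT};
    server_name ${NGINX_HOST};
}
EOF

# The image's entrypoint substitutes the variables before nginx starts:
docker run -d --name nginx-demo \
  -e NGINX_HOST=example.com -e NGINX_PORT=8080 \
  -v "$PWD/default.conf.template:/etc/nginx/templates/default.conf.template:ro" \
  nginx:alpine
```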

1.3 Common Use Cases

  • Hosting static content.
  • Reverse proxying applications.
  • Load balancing.

2. Project Structure

.
├── nginx/                    # Central Nginx + Certbot configuration
│   ├── docker-compose.yml    # Defines Nginx + Certbot (shared network and volumes)
│   └── conf.d/               # Folder for manual configs (optional)
└── odoo/                     # Odoo project (example)
    ├── docker-compose.yml    # Defines Odoo + PostgreSQL
    └── nginx.conf           # Odoo-specific config (proxy_pass, SSL, etc.)

3. Base Setup (done only once)

3.1. Create the shared network and volumes

docker network create nginx_net  # Network that connects all services
docker volume create shared_confs  # Volume for Nginx configs
docker volume create certbot_www  # Volume for ACME challenges (Certbot)
docker volume create certbot_conf  # Volume for SSL certificates

3.2. nginx/docker-compose.yml

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - shared_confs:/etc/nginx/conf.d  # Centralized configs
      - certbot_www:/var/www/certbot   # Certbot challenges
      - certbot_conf:/etc/letsencrypt  # SSL certificates
    networks:
      - nginx_net
    restart: unless-stopped

  certbot:
    image: certbot/certbot
    volumes:
      - certbot_www:/var/www/certbot
      - certbot_conf:/etc/letsencrypt
    networks:
      - nginx_net
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
    depends_on:
      - nginx
    restart: unless-stopped

volumes:
  shared_confs:
    external: true  # Uses the manually created volume
  certbot_www:
    external: true
  certbot_conf:
    external: true

networks:
  nginx_net:
    external: true  # Uses the manually created network

3.3. Start Nginx + Certbot

cd nginx
docker compose up -d

4. Adding a Project (Example: Odoo)

4.1 Configuration files

The file odoo.tudominio.com.conf:

server {
    listen 443 ssl http2;
    server_name odoo.tudominio.com;

    # SSL certificates (generated by Certbot)
    ssl_certificate /etc/letsencrypt/live/odoo.tudominio.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/odoo.tudominio.com/privkey.pem;

    # Recommended SSL settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    # Proxy to Odoo
    location / {
        proxy_pass http://web_aledev:8069;  # Docker service name
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # WebSocket (for live chat)
    location /websocket {
        proxy_pass http://web_aledev:8072;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

# HTTP → HTTPS redirect
server {
    listen 80;
    server_name odoo.tudominio.com;
    return 301 https://$host$request_uri;
}

The file docker-compose.odoo.yaml:

services:
  web_aledev:
    image: odoo:16
    depends_on:
      - db_aledev
    ports:
      - "8069:8069"  # Odoo HTTP
      - "8072:8072"  # Longpolling (WebSocket)
    volumes:
      - odoo-data:/var/lib/odoo
    environment:
      - HOST=db_aledev  # PostgreSQL connection
    networks:
      - nginx_net  # Same network as Nginx

  db_aledev:
    image: postgres:15
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=odoo
      - POSTGRES_PASSWORD=odoo
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - nginx_net

  config_loader:
    image: alpine
    command: sh -c "cp /config/odoo.conf /etc/nginx/conf.d/"
    volumes:
      - ./nginx.conf:/config/odoo.conf
      - shared_confs:/etc/nginx/conf.d
    networks:
      - nginx_net

volumes:
  odoo-data:
  postgres-data:

networks:
  nginx_net:
    external: true  # Uses the shared network

4.2 Generate an SSL Certificate for Odoo

First do a dry run to validate the challenge setup:

docker compose -f docker-compose.yml run --rm --entrypoint "" certbot sh -c "certbot certonly --webroot -w /var/www/certbot -d dominio.com --email usuario@dominio.com --agree-tos --no-eff-email --dry-run --http-01-port 80"

Then issue the real certificate:

docker compose -f docker-compose.yml run --rm --entrypoint "" certbot sh -c "certbot certonly --webroot -w /var/www/certbot -d dominio.com --email usuario@dominio.com --agree-tos --no-eff-email --http-01-port 80"

4.3 Start Odoo

cd odoo
docker compose up -d

5. Key Maintenance Commands

Command Description
docker exec nginx nginx -t Check Nginx config syntax
docker exec nginx nginx -s reload Reload configuration without downtime
docker compose -f nginx/docker-compose.yml run --rm certbot renew Renew certificates manually
docker volume ls List volumes
docker network inspect nginx_net Check connectivity between services

6. Common Troubleshooting

6.1. Error: network nginx_net declared as external, but not found

docker network create nginx_net

6.2. Certbot does not generate certificates

Check that:
- The domain points to the correct server.
- Ports 80 and 443 are open in the firewall.

6.3. Odoo cannot connect to PostgreSQL

Run:

docker exec -it odoo_web_aledev_1 ping db_aledev

If it fails, check that both services are on the same network (nginx_net).

Final note: This structure scales to N projects without touching the base Nginx configuration. Each project lives in its own folder with its own docker-compose.yml and connects through the shared network.

7. Uses

8. Bibliography

Essential Dependencies for Building Python with pyenv

Introduction

When building Python from source using pyenv, several dependencies are required to ensure a successful installation. This document provides a detailed overview of these dependencies, their purposes, and links to their official documentation. Building Python from source can provide flexibility and control over the Python environment. However, it requires certain libraries and tools to be installed on your system. This document outlines the essential dependencies needed for a successful build on Debian-based systems.

Dependency Overview

sudo apt update
sudo apt install -y build-essential libssl-dev libbz2-dev libreadline-dev libsqlite3-dev libffi-dev zlib1g-dev libgdbm-dev liblzma-dev libncurses5-dev libncursesw5-dev

The following table summarizes the essential dependencies required for building Python, along with links to their official documentation:

Dependency Description Documentation Link
build-essential A package that includes the GCC compiler and other essential tools for building software. Debian Build-Essential
libssl-dev Development files for OpenSSL, which provides SSL and TLS protocols. OpenSSL Documentation
libbz2-dev Development files for bzip2, a compression library. bzip2 Documentation
libreadline-dev Development files for the GNU Readline library, which provides line-editing and history capabilities. GNU Readline
libsqlite3-dev Development files for SQLite, a C library that provides a lightweight disk-based database. SQLite Documentation
libffi-dev Development files for the Foreign Function Interface library, which allows calling C functions from other languages. libffi Documentation
zlib1g-dev Development files for zlib, a compression library used in many applications. zlib Documentation
libgdbm-dev Development files for GNU dbm, a database library that uses a hash table. GNU GDBM Documentation
liblzma-dev Development files for LZMA compression, used in various applications for data compression. XZ Utils Documentation
libncurses5-dev Development files for ncurses, a library for text-based user interfaces in a terminal. ncurses Documentation
libncursesw5-dev Development files for the wide-character version of ncurses, supporting internationalization. ncursesw Documentation

Detailed Dependency Descriptions with Context

1. build-essential

The build-essential package is crucial for compiling software from source. It includes the GCC compiler, make utility, and other essential tools.

  • Installation Command:
    sudo apt install build-essential
    

2. libssl-dev

This library provides the development files for OpenSSL, which is used for implementing SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols.

Context of Use:
  • Secure Communications: While building Python, the libssl-dev library is necessary for enabling secure communications in various Python modules and libraries that rely on SSL/TLS for secure data transmission. This includes:
  • HTTP Requests: Libraries like urllib and requests use SSL/TLS to secure HTTP connections (HTTPS).
  • Secure Sockets: The socket library in Python can create secure connections using SSL/TLS, which is essential for applications that require secure data transfer over networks.
  • Cryptography: Many Python packages that deal with cryptography or secure data handling require OpenSSL for encryption and decryption processes.

  • Installation Command:

    sudo apt install libssl-dev
    

3. libbz2-dev

This library provides support for bzip2 compression, which is often used in Python's standard library for handling compressed files.

  • Installation Command:
    sudo apt install libbz2-dev
    

4. libreadline-dev

This library allows for advanced input handling, including command history and line editing, which enhances the user experience in interactive Python sessions.

  • Installation Command:
    sudo apt install libreadline-dev
    

5. libsqlite3-dev

SQLite is a lightweight database engine that is often used in Python applications for data storage.

  • Installation Command:
    sudo apt install libsqlite3-dev
    

6. libffi-dev

This library allows Python to call C functions directly, which is essential for many Python packages that interface with C libraries.

  • Installation Command:
    sudo apt install libffi-dev
    

7. zlib1g-dev

This library is used for data compression and is commonly required for handling compressed data in Python.

  • Installation Command:
    sudo apt install zlib1g-dev
    

8. libgdbm-dev

This library provides a database management system that is often used in Python applications for storing data.

  • Installation Command:
    sudo apt install libgdbm-dev
    

9. liblzma-dev

This library supports LZMA compression, which is used in various applications for efficient data storage.

  • Installation Command:
    sudo apt install liblzma-dev
    

10. libncurses5-dev

This library is used for creating text-based user interfaces in terminal applications, which can be useful for command-line tools.

  • Installation Command:
    sudo apt install libncurses5-dev
    

11. libncursesw5-dev

Similar to libncurses5-dev, but supports wide characters, making it suitable for internationalization.

  • Installation Command:
    sudo apt install libncursesw5-dev
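
Once these packages are installed and Python has been built (e.g. via `pyenv install`), a quick smoke test confirms that the optional stdlib modules depending on them were actually compiled:

```shell
python3 - <<'EOF'
# Each import raises ModuleNotFoundError if its C dependency
# was missing at build time.
import bz2, ctypes, curses, dbm, lzma, readline, sqlite3, ssl, zlib
print("all optional modules built")
EOF
```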
    

Technical Reflection on Dependencies Required for Building Python

The Nature of Python

Python is a high-level, interpreted programming language known for its readability and versatility. It is primarily written in C, which allows it to achieve performance close to that of lower-level languages while maintaining the ease of use associated with higher-level languages. The core of Python, often referred to as CPython, is implemented in C and provides the foundational runtime environment for executing Python code.

Importance of Dependencies at Build Time

When building Python from source, several external libraries and development tools are required to ensure that the resulting binary is fully functional and capable of supporting various features. These dependencies are crucial for several reasons:

  1. Compilation and Linking: The build process compiles C code into machine code. The build-essential package provides the necessary compilers (like GCC) and tools (like make) to perform this task. Without these tools, the source code cannot be transformed into an executable binary.

  2. Standard Library Functionality: Python's standard library is extensive and includes modules that rely on external libraries for specific functionalities. For example:

    • libssl-dev: Essential for enabling secure communications through SSL/TLS protocols. Standard library modules such as http.client and ssl depend on OpenSSL for secure socket connections, which is critical for applications that transmit data over networks, such as web applications and APIs.
    • libsqlite3-dev: SQLite is embedded within Python as a lightweight database engine. The sqlite3 module allows for database operations, and its functionality is contingent upon the presence of this library during the build process.

  3. Data Compression and File Handling: Libraries like libbz2-dev and zlib1g-dev are required for handling compressed files. Python's standard library includes modules such as bz2 and gzip, which facilitate reading and writing compressed data. These modules are common in data-processing applications, making these libraries essential for full functionality.

  4. Input Handling and User Interaction: The libreadline-dev library enhances the interactive experience by providing line-editing capabilities and command history. This is particularly important for the Python REPL (Read-Eval-Print Loop) and interactive shells, where user input is a fundamental part of the programming experience.

  5. Foreign Function Interface: The libffi-dev library allows Python to interface with C libraries directly (via ctypes). This capability is crucial for many third-party packages that need performance optimizations or want to leverage existing C libraries.

  6. Database Management: The libgdbm-dev library provides support for the GNU database manager, which some Python applications use for persistent key-value storage.

  7. Terminal Interfaces and Internationalization: The libncurses5-dev and libncursesw5-dev libraries are used for creating text-based user interfaces in terminal applications (the curses module); the wide-character variant is needed in internationalized contexts.

  8. Compression Algorithms: The liblzma-dev library supports LZMA (xz) compression. Python's lzma module relies on it to provide compression and decompression functionality.
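On Debian/Ubuntu the libraries above map to a single apt install line, and after configuring and building, a quick import check confirms the optional modules were actually compiled in. A minimal sketch, assuming a working `python3` on the PATH:

```shell
# Debian/Ubuntu package names for the dependencies discussed above:
#   sudo apt install build-essential libssl-dev libsqlite3-dev libbz2-dev \
#       zlib1g-dev libreadline-dev libffi-dev libgdbm-dev \
#       libncurses-dev liblzma-dev
# After ./configure && make && sudo make altinstall, verify that the
# optional stdlib modules were compiled in (each import fails loudly
# if its C library was missing at build time):
python3 -c "import ssl, sqlite3, bz2, zlib, lzma, ctypes; print('optional modules OK')"
```

If any of these imports raises ModuleNotFoundError, install the matching -dev package and rebuild.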

Syncing a Local Folder with Nextcloud (WebDAV) on Linux

Theory

1. Key Concepts

  • WebDAV: An HTTP/S extension protocol for collaborative file management on remote servers.
  • Self-Hosted Nextcloud: A self-managed cloud storage platform that exposes WebDAV (/remote.php/dav/files/USER/).
  • Synchronization: One-way or two-way transfer of files between a client and a server.

2. Recommended Tools

  • cadaver: A CLI WebDAV client for manual transfers.
  • rclone: A utility for automated synchronization and storage mounting.
  • davfs2: A filesystem driver that integrates WebDAV as a local disk.
  • curl: A direct method for one-off operations.

3. Security Considerations

  • SSL/TLS: Nextcloud uses HTTPS by default (verify the certificate).
  • Authentication: Nextcloud user credentials (username + password, or an app token).

Practice

1. Initial Setup

Get the Nextcloud WebDAV URL

Base URL: https://[YOUR-DOMAIN]/remote.php/dav/files/[USER]/
Example: https://cloud.example.com/remote.php/dav/files/juan/
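The base URL can be assembled once in a shell variable so later commands reuse it. A trivial sketch using the example domain and user above:

```shell
# Placeholders taken from the example above: cloud.example.com / juan
DOMAIN="cloud.example.com"
NC_USER="juan"
WEBDAV_URL="https://${DOMAIN}/remote.php/dav/files/${NC_USER}/"
echo "$WEBDAV_URL"   # → https://cloud.example.com/remote.php/dav/files/juan/
```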

2. Method 1: cadaver (Manual CLI)

Installation
sudo apt install cadaver  # Debian/Ubuntu
sudo dnf install cadaver  # Fedora
Basic Commands
cadaver https://[WEBDAV-URL]
> ls                   # List files
> put archivo.txt      # Upload a file
> get documento.pdf    # Download a file
> mkdir carpeta        # Create a directory

3. Method 2: rclone (Automated Sync)

Installation and Configuration

sudo apt install rclone  # Debian/Ubuntu
rclone config
  • Select the webdav type.
  • URL: https://[WEBDAV-URL]
  • Authentication: user/password (Nextcloud credentials).

Sync Local Folder → Nextcloud
rclone sync /local/path/ [RCLONE_REMOTE_NAME]:/target/folder/ --progress
Mount WebDAV as a Local Disk
rclone mount [RCLONE_REMOTE_NAME]:/ /mnt/nextcloud/ --vfs-cache-mode writes
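Instead of the interactive `rclone config` wizard, the remote can be written directly to ~/.config/rclone/rclone.conf. A sketch of the stanza, assuming a remote named nextcloud and the example account above (the pass value must be the output of `rclone obscure`, not the plaintext password):

```ini
[nextcloud]
type = webdav
url = https://cloud.example.com/remote.php/dav/files/juan/
vendor = nextcloud
user = juan
# value produced by: rclone obscure 'your-app-password'
pass = OBSCURED_PASSWORD
```

With this in place, the remote name before the colon in sync/mount commands is nextcloud.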

4. Method 3: davfs2 (Filesystem)

Installation and Mounting
sudo apt install davfs2  # Debian/Ubuntu
sudo mkdir /mnt/nextcloud
sudo mount -t davfs https://[WEBDAV-URL] /mnt/nextcloud
# Enter credentials when prompted
Unmounting
sudo umount /mnt/nextcloud
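To avoid typing credentials on every mount, davfs2 can read them from /etc/davfs2/secrets, and an /etc/fstab entry lets a regular user mount the share. A sketch with the example URL (the secrets file must be chmod 600, and the user needs to be in the davfs2 group):

```text
# /etc/davfs2/secrets  (chmod 600) — URL, username, password
https://cloud.example.com/remote.php/dav/files/juan/  juan  app-password

# /etc/fstab — allows: mount /mnt/nextcloud (as a regular user)
https://cloud.example.com/remote.php/dav/files/juan/  /mnt/nextcloud  davfs  user,rw,noauto  0  0
```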

5. Method 4: curl (Direct Transfer)

Upload a File
curl -u user:password -T /local/path/archivo.txt https://[WEBDAV-URL]/archivo.txt

6. Automation with Cron

Example: Hourly Sync
crontab -e
# Add this line:
0 * * * * rclone sync /local/path/ [RCLONE_REMOTE_NAME]:/target/folder/ --quiet
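If one sync run overruns the hour, cron will happily start a second copy on top of it. Wrapping the command in flock (util-linux) prevents overlapping runs; a minimal sketch, with an arbitrary lock-file path and an echo standing in for the real sync command:

```shell
# flock -n exits immediately when another process still holds the lock,
# so at most one sync is ever active. In the crontab this becomes:
#   0 * * * * flock -n /tmp/rclone-sync.lock rclone sync ... --quiet
# Demonstration with a placeholder command instead of the real sync:
flock -n /tmp/rclone-sync.lock echo "sync would run here"
```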

Best Practices

  1. Initial Testing: Verify the connection with cadaver or curl before automating anything.
  2. Logs: Use --log-file=/path/log.txt with rclone for diagnostics.
  3. Backups: Do not sync critical folders without a backup copy.
  4. Security: Use strong passwords and 2FA in Nextcloud.

Troubleshooting

  • Connection Error: Check the URL, firewall, and SSL certificate.
  • SSL/TLS Issues: Use -k/--insecure with curl (for testing only).
  • Permission Denied: Make sure the Nextcloud user has write rights.
  • Sync Conflicts: Avoid editing the same file on multiple devices.