Unified log levels

- Logger no longer starts at log level DEBUG by default; it now reads the level from an environment variable
- Secret handler raises exceptions instead of calling sys.exit()
- Added extensive debug logging
- Notifications: Added timestamp and category
This commit is contained in:
Florian 2025-10-18 20:59:50 +02:00
parent 3a722eca04
commit f2ffe394ad
9 changed files with 249 additions and 99 deletions

View File

@@ -1,4 +1,90 @@
# docker-repository-query
# Docker Repository Query Service
A FastAPI-based service that monitors Docker repositories (Docker Hub and GitHub Container Registry) for new image versions and sends notifications when updates are detected.
FastAPI service with endpoints that check for updated versions of two applications so far, Suwayomi and Pi-hole.
## Features
- **Multi-Registry Support**: Queries both Docker Hub and GitHub Container Registry (GHCR)
- **Version Tracking**: Maintains local state of latest known versions in MySQL
- **Automatic Notifications**: Sends alerts when new versions are detected
- **Prometheus Metrics**: Built-in request monitoring
- **Scheduled Queries**: Designed to be called by external schedulers (cron, Kubernetes CronJob, etc.)
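Since the service does no scheduling of its own, an external scheduler triggers the checks. A minimal sketch as crontab entries, assuming the service runs at `localhost:5000` (hypothetical host/port):

```bash
# Hypothetical crontab entries: poll each endpoint once per hour.
# curl -sf exits non-zero on HTTP errors, so failures surface via cron's mail.
0 * * * *  curl -sf -o /dev/null http://localhost:5000/suwayomi
30 * * * * curl -sf -o /dev/null http://localhost:5000/pihole
```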
## Use Case
This service is ideal for:
- Monitoring self-hosted applications for updates
- Automating update notifications for Docker-based infrastructure
- Tracking specific image tags across different registries
## Prerequisites
- Python 3.8+
- MySQL database
- Required Python packages (see Installation)
## Installation
```bash
pip install -r requirements.txt
```
## Running the Application
```bash
python main.py
```
This starts two servers:
- **Main API**: `http://0.0.0.0:5000`
- **Metrics Server**: `http://0.0.0.0:9000`
## API Endpoints
### Health Check
```
GET /health
```
Returns 200 status code if the service is running.
### Check Suwayomi Updates
```
GET /suwayomi
```
Queries GitHub Container Registry for the latest stable version of Suwayomi/Tachidesk.
**Response:**
- `200 OK`: New version detected and notification sent
- `204 No Content`: No new version available
- `500 Internal Server Error`: Query or database error
**Registry:** GitHub Container Registry (ghcr.io)
**Repository:** Suwayomi/tachidesk
**Tag:** stable
### Check Pi-hole Updates
```
GET /pihole
```
Queries Docker Hub for the latest version of Pi-hole.
**Response:**
- `200 OK`: New version detected and notification sent
- `204 No Content`: No new version available
- `500 Internal Server Error`: Query or database error
**Registry:** Docker Hub
**Repository:** pihole/pihole
**Tag:** latest
## Monitoring
### Prometheus Metrics
Prometheus metrics are exposed on port 9000 and include:
- HTTP request counters by method, path, and status code
- Success/failure rates for version checks
Access metrics at `http://localhost:9000/metrics` (the server binds to `0.0.0.0`).
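To have Prometheus collect these counters, a scrape job can point at the metrics port. A sketch, where the job name and interval are assumptions:

```yaml
scrape_configs:
  - job_name: docker-repository-query   # assumed job name
    scrape_interval: 60s
    static_configs:
      - targets: ["localhost:9000"]     # metrics server port from above
```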

View File

@@ -1,4 +1,3 @@
import mysql.connector
from mysql.connector import pooling, Error
import threading
from secret_handler import return_credentials
@@ -34,6 +33,7 @@ def create_connection_pool():
global _connection_pool
with _pool_lock:
if _connection_pool is not None:
logger.debug("[MySQL] Pool already exists, returning existing pool")
return
for attempt in range(1, MAX_RETRIES + 1):
try:
@@ -52,6 +52,7 @@ def create_connection_pool():
except Error as e:
logger.warning(f"[MySQL] Attempt {attempt} failed: {e}")
if attempt < MAX_RETRIES:
logger.debug(f"[MySQL] Retrying in {RETRY_DELAY} seconds...")
time.sleep(RETRY_DELAY)
logger.critical(f"[MySQL] Failed to connect after {MAX_RETRIES} attempts.")
@@ -61,34 +62,43 @@ def close_connection_pool():
global _connection_pool
with _pool_lock:
if _connection_pool:
logger.debug(f"[MySQL] Closing pool: {_connection_pool}")
_connection_pool = None
logger.info("[MySQL] Connection pool closed")
_stop_healthcheck.set()
logger.debug("[MySQL] Healthcheck stop flag set")
def get_connection_pool():
global _connection_pool
with _pool_lock:
if _connection_pool is None:
logger.debug("[MySQL] No pool found, creating one")
create_connection_pool()
else:
logger.debug(f"[MySQL] Returning existing pool: {_connection_pool}")
return _connection_pool
def get_db():
pool = get_connection_pool()
logger.debug(f"[MySQL] Acquiring connection from pool: {pool}")
conn = pool.get_connection()
try:
conn.ping(reconnect=True, attempts=3, delay=1)
logger.debug("[MySQL] Connection alive")
except Error as e:
logger.warning(f"[MySQL] Connection dead, recreating pool: {e}")
create_connection_pool()
pool = get_connection_pool()
conn = pool.get_connection()
logger.debug("[MySQL] Reconnected successfully")
try:
yield conn
finally:
if conn and conn.is_connected():
conn.close()
logger.debug("[MySQL] Connection returned to pool")
def _pool_healthcheck():
while not _stop_healthcheck.is_set():
@@ -96,11 +106,13 @@ def _pool_healthcheck():
with _pool_lock:
pool = _connection_pool
if not pool:
logger.debug("[MySQL] Healthcheck skipped, pool not initialized")
continue
try:
conn = pool.get_connection()
conn.ping(reconnect=True, attempts=3, delay=1)
conn.close()
logger.debug("[MySQL] Pool healthcheck succeeded")
except Error as e:
logger.warning(f"[MySQL] Pool healthcheck failed: {e}")
create_connection_pool()
@@ -109,6 +121,7 @@ def _pool_healthcheck():
def start_healthcheck_thread():
global _health_thread
if _health_thread and _health_thread.is_alive():
logger.debug("[MySQL] Healthcheck thread already running")
return
_stop_healthcheck.clear()
_health_thread = threading.Thread(target=_pool_healthcheck, daemon=True)
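The pool logic above follows a lock-guarded lazy-singleton pattern. Stripped of the MySQL specifics, it can be sketched as follows, with `object()` standing in for the real `MySQLConnectionPool`:

```python
import threading

_pool = None
_pool_lock = threading.Lock()

def get_pool():
    # Lazy, thread-safe singleton in the style of get_connection_pool():
    # the lock guarantees only one thread ever creates the pool, and
    # every caller receives the same instance afterwards.
    global _pool
    with _pool_lock:
        if _pool is None:
            _pool = object()  # stand-in for creating the connection pool
        return _pool

print(get_pool() is get_pool())  # True
```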

View File

@@ -1,29 +1,67 @@
import requests
from secret_handler import return_credentials
from logger_handler import setup_logger
dockerhub_token = return_credentials("/etc/secrets/dockerhub_token")
dockerhub_username = return_credentials("/etc/secrets/dockerhub_username")
logger = setup_logger(__name__)
def login_and_get_token()->str:
def login_and_get_token() -> str:
"""
You have to first login with credentials to get a token for subsequent api requests.
[DockerHub] Login to Docker Hub and retrieve a JWT token.
Returns:
A string JWT token for authenticated requests.
Returns None and logs error if login fails.
"""
login_url = "https://hub.docker.com/v2/users/login/"
response = requests.post(login_url,
json={"username": dockerhub_username, "password": dockerhub_token})
if response.status_code == 200:
token = response.json()["token"]
return token
else:
print(f"Login failed: {response.status_code} - {response.text}")
response = requests.post(
login_url,
json={"username": dockerhub_username, "password": dockerhub_token}
)
def find_package_version_with_tag(repo, tag):
if response.status_code == 200:
token = response.json().get("token")
if token:
logger.debug("[DockerHub] Login successful, token obtained")
return token
else:
logger.error("[DockerHub] Login response did not contain token")
return None
else:
logger.error(f"[DockerHub] Login failed: {response.status_code} - {response.text}")
return None
def find_package_version_with_tag(repo: str, tag: str) -> str:
"""
[DockerHub] Fetch the tag ID for a repository tag.
Authenticates with Docker Hub, queries the repository for the tag,
returns the tag ID if found, else None.
"""
token = login_and_get_token()
if not token:
logger.error(f"[DockerHub] Cannot fetch tag '{tag}' for repo '{repo}' without token")
return None
headers = {"Authorization": f"JWT {token}"}
tags_url = f"https://hub.docker.com/v2/repositories/{repo}/tags/{tag}?page_size=1"
tags_response = requests.get(tags_url, headers=headers)
id = tags_response.json()["id"]
return id
response = requests.get(tags_url, headers=headers)
if response.status_code != 200:
logger.error(f"[DockerHub] Failed to fetch tag '{tag}' for repo '{repo}': {response.status_code} - {response.text}")
return None
data = response.json()
tag_id = data.get("id")
if tag_id:
logger.info(f"[DockerHub] Found tag '{tag}' for repo '{repo}', ID: {tag_id}")
return tag_id
else:
logger.debug(f"[DockerHub] Tag '{tag}' not found for repo '{repo}'")
return None
if __name__ == "__main__":
print(find_package_version_with_tag("pihole/pihole", "latest"))

View File

@@ -1,12 +1,16 @@
import requests
from secret_handler import return_credentials
from logger_handler import setup_logger
github_token = return_credentials("/etc/secrets/github_token")
logger = setup_logger(__name__)
def find_package_version_with_tag(org:str,package:str,target_tag:str)->str:
def find_package_version_with_tag(org: str, package: str, target_tag: str) -> str:
"""
Iterates through the available pages looking for the supplied target_tag
Either returns None or a string when successful
[GitHub] Search for a container package version with the specified tag.
Iterates through GitHub Package Registry pages for a given org and package,
returns the version ID matching the target_tag, or None if not found.
"""
headers = {
"Authorization": f"Bearer {github_token}",
@@ -22,21 +26,23 @@ def find_package_version_with_tag(org:str,package:str,target_tag:str)->str:
response = requests.get(url, headers=headers, params=params)
if response.status_code != 200:
print(f"Error {response.status_code}: {response.text}")
logger.error(f"[GitHub] Error {response.status_code} fetching {package}: {response.text}")
return None
versions = response.json()
if not versions:
print(f"Reached end of pages — tag '{target_tag}' not found.")
logger.debug(f"[GitHub] Reached end of pages — tag '{target_tag}' not found for {package}.")
return None
for version in versions:
tags = version.get("metadata", {}).get("container", {}).get("tags", [])
if target_tag in tags:
print(f"Found tag '{target_tag}' on page {page}:\n")
logger.info(f"[GitHub] Found tag '{target_tag}' for {package} on page {page}, version ID: {version['id']}")
return version["id"]
page += 1
logger.debug(f"[GitHub] Tag '{target_tag}' not found on page {page-1}, moving to page {page}")
if __name__ == "__main__":
find_package_version_with_tag("Suwayomi", "tachidesk", "stable")
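The pagination loop can be sketched against an in-memory stand-in for the registry pages. The `pages` dict below is fabricated test data, not the GitHub API:

```python
def find_version_id(pages, target_tag):
    # Mirrors the loop in github_api: walk pages until the tag is found
    # or an empty page signals the end of the listing.
    page = 1
    while True:
        versions = pages.get(page, [])
        if not versions:
            return None
        for version in versions:
            tags = version.get("metadata", {}).get("container", {}).get("tags", [])
            if target_tag in tags:
                return version["id"]
        page += 1

pages = {
    1: [{"id": 101, "metadata": {"container": {"tags": ["latest"]}}}],
    2: [{"id": 102, "metadata": {"container": {"tags": ["stable"]}}}],
}
print(find_version_id(pages, "stable"))  # 102
```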

View File

@@ -1,4 +1,9 @@
import logging
import os
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper()
if LOG_LEVEL not in {"ERROR", "DEBUG", "INFO", "WARNING", "CRITICAL"}:
LOG_LEVEL = "INFO"
def setup_logger(name: str) -> logging.Logger:
logger = logging.getLogger(name)
@@ -9,5 +14,6 @@ def setup_logger(name: str) -> logging.Logger:
)
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.setLevel(getattr(logging, LOG_LEVEL))
logger.debug(f"Logger {name} initialized with level {LOG_LEVEL}")
return logger
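The level-selection logic this commit introduces reduces to a small pure function. A sketch with the same fallback behavior (`resolve_log_level` is a hypothetical helper, not in the diff):

```python
import logging
from typing import Optional

VALID_LEVELS = {"ERROR", "DEBUG", "INFO", "WARNING", "CRITICAL"}

def resolve_log_level(env_value: Optional[str]) -> int:
    # Same rule as the diff: unknown or missing values degrade to INFO
    # rather than raising.
    level = (env_value or "INFO").upper()
    if level not in VALID_LEVELS:
        level = "INFO"
    return getattr(logging, level)

print(resolve_log_level("debug") == logging.DEBUG)  # True
print(resolve_log_level("bogus") == logging.INFO)   # True
```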

View File

@@ -5,26 +5,29 @@ from dockerhub_api import find_package_version_with_tag as find_package_version_
import uvicorn
from contextlib import asynccontextmanager
from db import get_db, create_connection_pool, close_connection_pool, start_healthcheck_thread
import logging
from logger_handler import setup_logger, LOG_LEVEL
from send_notification import send_notification
from metrics_server import REQUEST_COUNTER
import asyncio
logger = logging.getLogger(__name__)
logger = setup_logger(__name__)
@asynccontextmanager
async def lifespan(app: FastAPI):
logger.info("Starting application...")
logger.info("[App] Starting application...")
logger.info("Creating MySQL connection pool...")
logger.info("[DB] Creating MySQL connection pool...")
create_connection_pool()
start_healthcheck_thread()
logger.info("MySQL healthcheck thread started.")
logger.info("[DB] MySQL healthcheck thread started.")
yield
logger.info("Closing MySQL connection pool...")
logger.info("[App] Closing MySQL connection pool...")
close_connection_pool()
logger.info("[App] Shutdown complete.")
api = FastAPI(
title="Docker Repository Query",
@@ -35,17 +38,22 @@ api = FastAPI(
@api.middleware("http")
async def prometheus_middleware(request: Request, call_next):
logger.debug(f"[Metrics] Incoming request: {request.method} {request.url.path}")
status = 500
try:
response = await call_next(request)
status = response.status_code
logger.debug(f"[Metrics] Request processed with status {status}")
except Exception as e:
status = 500
logger.error(f"[Metrics] Exception occurred: {e}", exc_info=True)
raise
finally:
REQUEST_COUNTER.labels(request.method, request.url.path, status).inc()
logger.debug(f"[Metrics] Counter incremented for {request.method} {request.url.path} [{status}]")
return response
@api.get("/health")
def return_health():
return Response(status_code=200)
@@ -58,22 +66,30 @@ def handle_suwayomi(
):
try:
logger.info("Suwayomi handler invoked")
logger.info("[App] Suwayomi handler invoked")
latest_online_version = find_package_version_with_tag_github("Suwayomi", "tachidesk", "stable")
logger.debug(f"[App] Fetched latest Suwayomi version from GitHub: {latest_online_version}")
cursor = db.cursor()
cursor.execute("SELECT latest_version FROM docker_repositories WHERE app='suwayomi'")
local_state = cursor.fetchone()
logger.debug(f"[App] Local Suwayomi version from DB: {local_state}")
if local_state and latest_online_version != local_state[0]:
logger.debug("[App] Version mismatch detected. Updating database.")
cursor.execute("UPDATE docker_repositories SET latest_version=%s WHERE app='suwayomi'",
(latest_online_version,))
(latest_online_version,))
db.commit()
logger.info("New Suwayomi version has been found")
logger.info("[App] New Suwayomi version recorded in database")
send_notification("New Suwayomi version has been found")
logger.debug("[App] Notification sent for Suwayomi update")
return Response(status_code=200)
logger.debug("[App] No new Suwayomi version found")
return Response(status_code=204)
except Exception as e:
logger.error(f"[App] Error in Suwayomi handler: {e}", exc_info=True)
raise HTTPException(status_code=500, detail=str(e))
@@ -84,33 +100,43 @@ def handle_pihole(
db = Depends(get_db)
):
try:
logger.info("Pi-hole handler invoked")
logger.info("[App] Pi-hole handler invoked")
latest_online_version = find_package_version_with_tag_dockerhub("pihole/pihole", "latest")
logger.debug(f"[App] Fetched latest Pi-hole version from Docker Hub: {latest_online_version}")
cursor = db.cursor()
cursor.execute("SELECT latest_version FROM docker_repositories WHERE app='pihole'")
local_state = cursor.fetchone()
logger.debug(f"[App] Local Pi-hole version from DB: {local_state}")
if local_state and latest_online_version != local_state[0]:
logger.debug("[App] Version mismatch detected. Updating database.")
cursor.execute("UPDATE docker_repositories SET latest_version=%s WHERE app='pihole'",
(latest_online_version,))
db.commit()
logger.info("New Pi-hole version has been found")
logger.info("[App] New Pi-hole version recorded in database")
send_notification("New Pi-hole version has been found")
logger.debug("[App] Notification sent for Pi-hole update")
return Response(status_code=200)
logger.debug("[App] No new Pi-hole version found")
return Response(status_code=204)
except Exception as e:
logger.error(f"[App] Error in Pi-hole handler: {e}", exc_info=True)
raise HTTPException(status_code=500, detail=str(e))
async def start_servers():
config_main = uvicorn.Config("main:api", host="0.0.0.0", port=5000, log_level="info")
config_metrics = uvicorn.Config("metrics_server:metrics_api", host="0.0.0.0", port=9000, log_level="info")
logger.info("Starting main and metrics servers")
config_main = uvicorn.Config("main:api", host="0.0.0.0", port=5000, log_level=LOG_LEVEL.lower())
config_metrics = uvicorn.Config("metrics_server:metrics_api", host="0.0.0.0", port=9000, log_level=LOG_LEVEL.lower())
server_main = uvicorn.Server(config_main)
server_metrics = uvicorn.Server(config_metrics)
logger.debug("Launching servers concurrently")
await asyncio.gather(server_main.serve(), server_metrics.serve())
logger.info("Both servers have stopped")
if __name__ == "__main__":
asyncio.run(start_servers())

View File

@@ -1,12 +1,15 @@
import sys
from logger_handler import setup_logger
logger = setup_logger(__name__)
def return_credentials(path: str) -> str:
logger.debug(f"[Secrets] Opening file: {path}")
try:
with open(path) as file:
return file.read().strip()
except FileNotFoundError:
print(f"[FATAL] Secret file not found: {path}")
sys.exit(1)
logger.critical(f"[FATAL] Secret file not found: {path}")
raise
except Exception as e:
print(f"[FATAL] Failed to read secret file {path}: {e}")
sys.exit(1)
logger.critical(f"[FATAL] Failed to read secret file {path}: {e}")
raise
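Per the commit message, the secret handler now raises instead of exiting. The effect for callers can be sketched as follows (path and logger name are illustrative):

```python
import logging

logger = logging.getLogger("secret_handler")

def return_credentials(path: str) -> str:
    # The commit's pattern: log at CRITICAL and re-raise instead of
    # sys.exit(1), so the caller (e.g. FastAPI startup) decides what a
    # missing secret means.
    try:
        with open(path) as file:
            return file.read().strip()
    except FileNotFoundError:
        logger.critical(f"Secret file not found: {path}")
        raise

try:
    return_credentials("/nonexistent/secret")
except FileNotFoundError:
    print("caller handled the missing secret")
```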

View File

@@ -12,10 +12,14 @@ api_key= return_credentials("/etc/secrets/api_key")
logger = setup_logger(__name__)
def send_notification(notification:str,
max_retries: int = 5,
timeout: int = 5
)->str:
def send_notification(notification: str, max_retries: int = 5, timeout: int = 5) -> str:
"""
Sends a notification to the internal backend service when a new docker image is released.
Parameters:
notification: Name of the docker repository.
"""
headers = {
"X-API-Key-Internal": api_key,
"Content-Type": "application/json"
@@ -26,28 +30,35 @@ def send_notification(notification:str,
"message": {
"title": notification,
"info": "A new version is available.",
"link": None
"link": None,
"category": "utility",
"timestamp": int(time.time())
}}
logger.debug(f"[Notify] Preparing to send notification: title='{notification}'")
with requests.Session() as session:
for attempt in range(1,max_retries+1):
for attempt in range(1, max_retries + 1):
try:
logger.debug(f"[Notify] Sending request to backend (attempt {attempt}/{max_retries})")
response = session.post(backend_api_url, headers=headers, json=data, timeout=timeout)
response.raise_for_status()
logger.info("Notification sent successfully")
logger.info(f"[Notify] Notification sent successfully for '{notification}'")
return response.text
except (Timeout,ConnectionError) as e:
logger.warning(f"Attempt {attempt}/{max_retries} failed: {type(e).__name__}")
except (Timeout, ConnectionError) as e:
logger.warning(f"[Notify] Attempt {attempt}/{max_retries} failed: {type(e).__name__}")
if attempt == max_retries:
logger.error(f"[Notify] All retry attempts failed for '{notification}'")
raise HTTPException(status_code=503, detail=f"Notification service unavailable: {type(e).__name__}")
time.sleep(2 ** (attempt - 1))
sleep_time = 2 ** (attempt - 1)
logger.debug(f"[Notify] Retrying in {sleep_time} seconds...")
time.sleep(sleep_time)
except HTTPError as e:
logger.error(f"HTTP {e.response.status_code}: {e.response.text}")
logger.error(f"[Notify] HTTP {e.response.status_code}: {e.response.text}")
raise HTTPException(status_code=e.response.status_code, detail=e.response.text)
except RequestException as e:
logger.error(f"Unexpected request failure: {e}")
logger.error(f"[Notify] Unexpected request failure: {e}")
raise HTTPException(status_code=500, detail=f"Request failed: {str(e)}")
return ""
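The retry loop sleeps `2 ** (attempt - 1)` seconds between failed attempts. With the default `max_retries=5` the delays work out as below (`backoff_delays` is a hypothetical helper for illustration):

```python
def backoff_delays(max_retries: int = 5):
    # One sleep after each failed attempt except the last, which raises
    # HTTPException instead of retrying: 1, 2, 4, 8 seconds by default.
    return [2 ** (attempt - 1) for attempt in range(1, max_retries)]

print(backoff_delays())  # [1, 2, 4, 8]
```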

View File

@@ -1,39 +0,0 @@
LOGGING_CONFIG = {
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"default": {
"format": "%(asctime)s - %(levelname)s - %(name)s - %(message)s",
"datefmt": "%Y-%m-%d %H:%M:%S",
}
},
"handlers": {
"default": {
"class": "logging.StreamHandler",
"formatter": "default",
"stream": "ext://sys.stdout"
}
},
"loggers": {
"": { # root logger
"handlers": ["default"],
"level": "INFO",
"propagate": False
},
"uvicorn": {
"handlers": ["default"],
"level": "INFO",
"propagate": False
},
"uvicorn.error": {
"handlers": ["default"],
"level": "INFO",
"propagate": False
},
"uvicorn.access": {
"handlers": ["default"],
"level": "INFO",
"propagate": False
}
}
}