- Chore: ran `make npm@update` to update deps and lockfile
- Chat: new plugin: Socket.ts that allows the use of socket.io with
Fastify (fastify-socket.io is no longer updated)
- Chat: Moved everything from `src/socket.ts` that needed to be kept into
`src/app.ts`
- Removed the `auth` table and merged its data into the `user`
table
- Renamed several database fields
- Changed the Create*User functions to use separate functions instead of
overloads
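The Create*User change could look like the following sketch; all types, fields, and function names here are hypothetical, not the project's actual API:

```typescript
// Hypothetical user shape after the auth/user table merge:
// credentials now live on the user record itself.
interface User {
  name: string;
  email: string;
  passwordHash?: string;   // local accounts
  oauthProvider?: string;  // OAuth accounts
}

// One dedicated function per creation path, instead of a single
// CreateUser with overloaded signatures.
function createLocalUser(name: string, email: string, passwordHash: string): User {
  return { name, email, passwordHash };
}

function createOAuthUser(name: string, email: string, oauthProvider: string): User {
  return { name, email, oauthProvider };
}

const local = createLocalUser("alice", "alice@example.com", "argon2$hash");
const remote = createOAuthUser("bob", "bob@example.com", "github");
```

Separate functions keep each call site unambiguous and avoid the implementation-side type narrowing that a single overloaded function requires.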
## 🦌 Centralized Logging Stack Integration
### ELK Stack Online
- Added **`elasticsearch`**, **`logstash`**, and **`kibana`** services to `docker-compose.yml`:
- **Elasticsearch** for log storage and indexing with persistent volumes.
- **Logstash** as the GELF entrypoint, handling log ingestion and transformation.
- **Kibana** as the web UI for log exploration, dashboards, and saved searches.
- Each ELK service is wired with:
- **Persistent storage** to survive restarts.
- **Environment variables** for credentials and tuning.
- **Bootstrap scripts** to perform initial setup (policies, templates, dashboards, etc.).
### Global GELF Logging
- All existing services now use the **GELF logging driver** in `docker-compose.yml`:
- Containers send their logs to **Logstash** instead of stdout-only.
- Provides **structured**, centralized logs ready for querying in Elasticsearch/Kibana.
- Result: no more log hunting across containers — everything lands in one searchable place.
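A minimal sketch of how a service might be pointed at Logstash with the GELF driver; the service name, port, and tag are assumptions, not the repo's actual values:

```yaml
services:
  user:
    # ...existing service definition...
    logging:
      driver: gelf
      options:
        # Logstash GELF input endpoint (assumed port)
        gelf-address: "udp://localhost:12201"
        # Tag lets the pipeline tell containers apart
        tag: "user"
```

Note that `gelf-address` is resolved by the Docker daemon on the host, not inside the container's network, which is why `localhost` (or the host's address) is used rather than the compose service name.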
---
## 🔁 Log Lifecycle & Visualization Automation
### Elasticsearch & Kibana Bootstrap
- Introduced **bootstrap scripts and config files** to automate:
- **Index Lifecycle Management (ILM)** policies for log retention and rollover.
- **Index templates** for log indices (naming, mappings, and settings).
- **Kibana imports** (index patterns / data views, dashboards, visualizations).
- This turns ELK setup from a manual ritual into a **single-command provisioning step**.
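As an illustration of what such a bootstrap might install, here is a minimal ILM policy body; the retention and rollover values are assumptions, not the project's actual settings:

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d", "max_primary_shard_size": "10gb" }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

A bootstrap script would `PUT` a body like this to Elasticsearch's `_ilm/policy/<name>` endpoint, then reference the policy from the index template so new log indices pick it up automatically.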
### Logstash Pipeline Upgrade
- Added a **Logstash pipeline configuration** to:
- Ingest **GELF** logs from Docker.
- **Normalize/rename fields** for consistent querying across services.
- Index logs into **Elasticsearch** with **daily rotation per container** pattern.
- Outcome: logs are structured, tagged by container, and auto-rotated to keep storage sane.
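The "daily rotation per container" pattern can be made concrete with a small helper; the `logs-<container>-<date>` naming is an assumption, not necessarily the pipeline's actual pattern:

```typescript
// Hypothetical helper mirroring daily-per-container index naming,
// e.g. "logs-user-2024.01.31". Logstash would do the equivalent
// with a sprintf-style index pattern in its elasticsearch output.
function logIndexName(container: string, date: Date): string {
  const y = date.getUTCFullYear();
  const m = String(date.getUTCMonth() + 1).padStart(2, "0");
  const d = String(date.getUTCDate()).padStart(2, "0");
  return `logs-${container}-${y}.${m}.${d}`;
}

console.log(logIndexName("user", new Date(Date.UTC(2024, 0, 31))));
// → logs-user-2024.01.31
```

One index per container per day is what lets ILM roll over and delete old data per service without touching fresh logs.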
---
## 🛠 Makefile & Docker.mk Enhancements
### Logs Setup Targets
- Added a new **`logs`** target in `Makefile` (with `.PHONY` declaration) to manage logging setup from the top level.
- Added a **`logs-setup`** target in `Docker.mk` to:
- Initialize **ILM policies** in Elasticsearch.
- Apply **index templates** for logs.
- Create **Kibana index patterns** so logs are immediately visible in the UI.
- These targets plug into the existing tooling, making logging setup part of the **standard dev/ops workflow**.
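A sketch of how the two targets might chain together; the script names are hypothetical, not the repo's actual files:

```make
# Top-level Makefile
.PHONY: logs
logs:
	$(MAKE) -f Docker.mk logs-setup

# Docker.mk (hypothetical script names)
.PHONY: logs-setup
logs-setup:
	./scripts/es-ilm-policy.sh         # initialize ILM policies
	./scripts/es-index-templates.sh    # apply index templates
	./scripts/kibana-index-patterns.sh # create Kibana index patterns
```

Keeping the heavy lifting in `Docker.mk` means the top-level `logs` target stays a thin entry point, consistent with the rest of the tooling.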
---
## 🔐 Environment Configuration
### Secure Elasticsearch Access
- Updated `env.example` to include:
- **`ELASTIC_PASSWORD`**: central password for Elasticsearch authentication.
- Encourages **secure-by-default** deployments and aligns local/dev with production-style security.
---
## 📈 Monitoring Configuration Updates
### Grafana Alerting & Prometheus Cleanup
- Added a **basic alerting policy for Grafana**:
- Provides a default routing tree for alerts.
- Acts as a foundation for future, more granular alert rules.
- Cleaned up **Prometheus scrape configuration**:
- Removed obsolete backend scrape targets.
- Keeps monitoring config focused on **live** and relevant services.
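For reference, a trimmed scrape configuration keeping only live services could look like this; job and target names are assumptions:

```yaml
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "containers"
    static_configs:
      # only targets for services that are still deployed;
      # obsolete backend targets removed
      - targets: ["cadvisor:8080"]
```

Dropping dead targets avoids permanent `up == 0` noise in dashboards and alert rules.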
* feat(docker/monitoring): adding the first monitoring tools from the docker file
- The first tool is Grafana, basically the tool to supervise all the data
* feat(monitoring/blackbox): adding initial configuration
* feat(monitoring/grafana): adding the configuration to alerting on the discord channels
* feat(monitoring/grafana): adding the grafana dashboard (docker monitoring)
* feat(monitoring/grafana): adding the grafana dashboard (global monitoring)
* feat(monitoring/grafana): adding the global configuration for dashboards
* feat(monitoring/grafana): adding the prometheus configuration
* feat(monitoring/prometheus): adding the configuration of prometheus as the main grafana sources
* core(docker-compose): adding the monitoring part for the docker files
* feat(monitoring/grafana): removing the monitoring global
* feat(monitoring/prometheus): removing the blackbox
- The self-signed certificate is ruining everything
* core(docker-compose): removing the blackbox container
* core(env/example): adding an env example
* feat(monitoring/blackbox): adding initial configuration
* test(nginx/location): adding a test to test blackbox
* feat(monitoring/prometheus): adding blackbox to the prometheus configuration
* feat(monitoring/grafana): adding the start of the global dashboard
* feat(monitoring/blackbox): adding tls_configuration skip
- The SSL certificate has to be self-signed
* feat(monitoring/grafana): global is now checked and works with other services
* feat(monitoring/prometheus): checking that other services are running
* feat(nginx/conf): the HTTP port now mirrors the HTTPS one
- Useful for internal Docker communication
* feat(auth/app): adding the /monitoring routes
* feat(icons/app): adding the /monitoring routes
* feat(user/app): adding the /monitoring routes
* refactor(auth/src): linting the app.ts
* refactor(icons/src): linting the app.ts
* refactor(user/src): linting the app.ts
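The nginx change that makes HTTP mirror HTTPS presumably boils down to serving the same locations on both listeners; a hedged sketch, with the upstream name and certificate paths as assumptions:

```nginx
server {
    listen 80;
    # Same routes as the HTTPS server, so other containers can
    # talk over plain HTTP inside the Docker network without
    # tripping over the self-signed certificate.
    location / {
        proxy_pass http://backend;
    }
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/self-signed.crt;
    ssl_certificate_key /etc/nginx/certs/self-signed.key;
    location / {
        proxy_pass http://backend;
    }
}
```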
- Router: client-side route handling with client-side rendering
- Toast: rough Toast handling for better UX and messaging
- Auth: single source of truth for the logged-in user
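The Router can be reduced to a path-to-handler lookup; a minimal sketch assuming a string-rendering handler API, not the project's actual implementation:

```typescript
// Minimal client-side router sketch: map a path to a render function.
type Handler = () => string;

class Router {
  private routes = new Map<string, Handler>();

  register(path: string, handler: Handler): void {
    this.routes.set(path, handler);
  }

  // Resolve a path to its handler, falling back to a 404 view.
  resolve(path: string): string {
    const handler = this.routes.get(path) ?? (() => "404");
    return handler();
  }
}

const router = new Router();
router.register("/", () => "home");
router.register("/profile", () => "profile");
console.log(router.resolve("/profile")); // → profile
console.log(router.resolve("/nope"));    // → 404
```

In a browser this would typically be wired to `popstate`/`hashchange` events so navigation never triggers a full page load.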
This commit does not include the OpenAPI-generated code
This template is modified from the original one to handle multiple
status codes in responses.
Don't ask me how they work, I don't quite understand them in depth...
- Updated to TypeBox 1.0.0 to better support OpenAPI type generation
- Changed the Dockerfile to fetch dependencies only once
- Fixed routes to properly handle OpenAPI
- Fixed routes to respond with multiple status codes (no longer only 200)
- Fixed schemas so the auth-gated endpoints properly reflect that
- Added a Makefile rule to generate the OpenAPI client (not working due to
missing files)
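The multiple-status-code fix can be illustrated with a Fastify-style `response` schema keyed by status code; plain objects stand in here for the project's TypeBox schemas, and the route and field names are hypothetical:

```typescript
// Sketch of a per-status-code response schema, Fastify-style.
// Keys are HTTP status codes; each maps to a JSON-schema-like shape.
// Declaring 401 alongside 200 is what lets the generated OpenAPI
// client know an auth-gated endpoint can reject the request.
const loginResponseSchema = {
  200: {
    type: "object",
    properties: { token: { type: "string" } },
  },
  401: {
    type: "object",
    properties: { error: { type: "string" } },
  },
} as const;

// A route schema declaring every status it can answer with,
// instead of only documenting the 200 case.
const routeSchema = { response: loginResponseSchema };

console.log(Object.keys(routeSchema.response)); // → [ '200', '401' ]
```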