
Linux API

23 devlogs
45h 11m 34s

Updated Project: Linux-API is a REST API for monitoring and managing Linux systems. It provides system statistics, process and user information, and supports secure access with authentication and rate limiting. Designed to be easy to deploy and integrate into dashboards or automation workflows.
Update details: I refactored the full codebase, moved the old implementation into legacy routes, and introduced a more structured routing architecture along with a new PostgreSQL-based database system. I also implemented database migrations to ensure reliable schema versioning and deployments, added robust input validation using Pydantic, and introduced metrics & monitoring endpoints that expose database health as well as statistics such as average response time and per-route performance insights.

This project uses AI

Used ChatGPT to refine devlogs, support structural design decisions, and correct spelling and grammar throughout the documentation.

Ivole32

Shipped this project!

Hours: 45.15
Cookies: πŸͺ 68
Multiplier: 1.5 cookies/hr

🚀 Linux-API is Shipping

After extensive development and testing, Linux-API is now ready to ship.

What Linux-API does

Linux-API provides a structured interface to monitor and manage Linux servers through a modern REST API.

Key capabilities include:

  • 📊 Metrics & observability — request metrics, response times, error rates, and health monitoring
  • ❤️ Health & readiness checks — database status, flush worker health, and system readiness
  • 🗄 Time-series metrics storage powered by PostgreSQL/TimescaleDB
  • 🔐 Secure API access with key-based authentication
  • ⚙️ Container-friendly deployment using Docker and Compose
  • 📚 Complete documentation & setup tooling (manual + automated)

What I learned building this

This project became a deep dive into backend architecture and operations:

  • structuring a real-world API project for maintainability and clarity
  • designing observability and health monitoring from scratch
  • working more extensively with PostgreSQL, connection pooling, and migrations
  • improving documentation quality and deployment workflows
  • building reproducible setup and startup automation

Future possibilities

Linux-API opens the door to more advanced server automation:

  • linking Linux user accounts with API identities
  • executing remote maintenance and administrative tasks securely
  • integrating with my project remote workflow to automate tasks
  • performing automated updates, backups, and system health remediation
  • centralized monitoring across multiple hosts

Status

I consider the project feature-complete for its current scope and ready for real-world use.

Further improvements will focus on performance, automation, and deeper system integration.


Linux-API started as a tooling experiment and evolved into a full observability and server management foundation — and this is just the beginning.

Ivole32

2026-02-22 [1]

Today I focused on improving the installation experience for Linux-API.

I created setup and startup scripts to simplify deployment and reduce the chance of configuration mistakes. The setup script guides users through the installation process, while the startup script makes running the API predictable and convenient in both foreground and background modes.

Alongside the automation work, I wrote the complete project documentation. It now includes:

  • a fully automated setup path
  • a detailed manual setup guide (recommended)
  • step-by-step server preparation instructions

The manual setup is recommended because it provides better transparency, helps users understand the system, and makes debugging easier if something goes wrong.

Additionally, every configuration option is now documented. Each environment variable and setting is explained so administrators can confidently customize deployments for their own infrastructure.

With deployment streamlined and documentation in place, I consider the project functionally complete for now. The current state is stable, understandable, and ready for real-world use.

I'm calling this milestone done — time to ship 🚀

Ivole32

2026-02-21 [1]

Today I added a new feature that allows routes to be disabled directly through the configuration. This makes it easy to control access to specific endpoints without modifying the codebase, which is especially useful for maintenance, staged rollouts, or demo environments.

I also started rewriting the documentation from scratch because the previous version was outdated. I am currently working on making the setup process clearer and ensuring the API is easy to understand and use.

Ivole32

2026-02-19 [1]

I focused on performance improvements and infrastructure stability in this update.

Database Optimization

I significantly accelerated database performance by introducing proper indexing. Queries that previously required full table scans are now resolved much more efficiently, reducing latency and CPU load.
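As an illustration of the kind of indexing involved (table and column names here are invented, not the project's real schema), a composite index covering per-route, time-ranged queries can be declared with SQLAlchemy:

```python
import sqlalchemy as sa

# Hypothetical metrics table; the real schema may differ.
metadata = sa.MetaData()
request_metrics = sa.Table(
    "request_metrics",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("route", sa.Text, nullable=False),
    sa.Column("recorded_at", sa.DateTime, nullable=False),
)

# Composite index so "this route, this time window" queries
# stop requiring full table scans.
sa.Index(
    "ix_request_metrics_route_time",
    request_metrics.c.route,
    request_metrics.c.recorded_at,
)

engine = sa.create_engine("sqlite://")  # in-memory stand-in for PostgreSQL
metadata.create_all(engine)
```

The column order matters: putting `route` first lets the index serve both route-only filters and route-plus-time-range filters.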

Pgpool Integration with Caching

Pgpool is now integrated directly into the Docker Compose setup with caching enabled. This allows frequently requested data to be served faster without repeatedly hitting the database.

Performance Improvements

These two changes together resulted in a major performance boost.

Previously, a ZAP scan would generate enormous load on the system; during heavy scans, response times could spike as high as 10 seconds.

Now, even under very high load, response times rarely exceed 600 ms.

Stability & Release Update

The dev version has been merged into main, as it now feels reliable and production-ready.

Documentation Rewrite

Outdated documentation was removed to avoid confusion.

Ivole32

2026-02-18 [2]

Devlog — Metrics & Health System Completed

Today I completed the implementation of the API's metrics and health monitoring system. The goal was to improve production observability and detect failures, performance regressions, and infrastructure issues quickly.

The system now collects request metrics, response times, and status code distributions. Aggregated data is periodically flushed into TimescaleDB for efficient time-series storage and long-term analysis. I also added flush worker health tracking, exposing success rates, error counts, consecutive failures, and timestamps of the last successful run.
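The aggregate-then-flush pattern described above can be sketched roughly like this (a simplified, thread-safe sketch; the project's actual aggregator and TimescaleDB writer are not shown):

```python
import threading
from collections import defaultdict


class MetricsAggregator:
    """In-memory per-route aggregation; a background flush worker
    periodically drains it and writes the snapshot to the database."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._counts: dict[str, int] = defaultdict(int)
        self._total_ms: dict[str, float] = defaultdict(float)

    def record(self, route: str, duration_ms: float) -> None:
        """Called per request; cheap enough to sit in the hot path."""
        with self._lock:
            self._counts[route] += 1
            self._total_ms[route] += duration_ms

    def drain(self) -> dict[str, tuple[int, float]]:
        """Atomically hand the current aggregates to one flush cycle."""
        with self._lock:
            snapshot = {r: (self._counts[r], self._total_ms[r])
                        for r in self._counts}
            self._counts.clear()
            self._total_ms.clear()
            return snapshot
```

The key property is that `drain()` swaps out the whole state under the lock, so a flush cycle never races with in-flight `record()` calls within one process.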

New monitoring endpoints now expose:

  • flush worker health and error rates
  • database readiness and migration state
  • route performance metrics and status code distributions
  • global request statistics and error rates

These endpoints provide valuable operational insight and make production debugging significantly easier.

After finishing, I ran a stress test with OWASP ZAP. That's when I realized performance wasn't as great as expected. Naturally, I increased the number of Uvicorn workers.

Everything immediately broke.

Multiple workers do not play nicely with the in-memory aggregation and flush process. Watching the metrics system fight itself was… educational.

For now, run the API with a single worker. I currently have zero motivation to debug multi-worker synchronization chaos.

Next Steps

  • Verify monitoring endpoint correctness and stability
  • Validate metrics accuracy under sustained load
  • Perform a comprehensive performance optimization pass
  • Revisit multi-worker support (when motivation returns)

Despite the chaos, the observability foundation is now in place — and that's a big step forward.

Changelog

The full changelog exceeded the character limit

Ivole32

2026-02-18 [1]

Monitoring & Observability Progress

Today I focused on improving the operational visibility of the API. The goal is to introduce monitoring and health endpoints that make it easy to detect problems quickly in production and understand system behavior under load.

A major milestone was laying the foundation for persistent metrics storage. I finalized the initial database structures for metrics and integrated TimescaleDB to efficiently handle time-series data, providing a scalable backbone for performance data and trends.

I also worked on routes to expose database health information and detailed API usage statistics. These endpoints will help inspect request metrics, error code distributions, and overall system health for faster debugging in production.

I haven't committed most of this work yet, as I'm still refining the implementation before pushing it to the repository.

Although work remains, today marked strong progress toward a more observable system. I am wrapping up after a long session, but the core groundwork is now in place.

Next Steps

  • Implement monitoring endpoints for API health and metrics
  • Expose database health status and connection pool state
  • Provide detailed statistics for request performance and error codes

Overall, a productive day with meaningful progress toward production-ready monitoring.

Ivole32

2026-02-17 [1]

Today I focused on improving compatibility and maintainability by migrating the old system statistics routes from the legacy routing layer into the v1 API.

Migration of Legacy Statistics Routes

I moved the existing system statistics endpoints from the legacy routes into the v1 structure. Fortunately, this process was straightforward:

  • The legacy implementation was already well-structured
  • Performance was still solid and efficient
  • Only minimal adjustments were required to match the v1 routing conventions

Because the original codebase was clean and modular, the migration did not introduce regressions or performance issues.

Configuration Toggle for Legacy Routes

After completing the migration, I added a new configuration option that allows enabling or disabling legacy routes.

New config option:

  • Toggle legacy routes on/off
  • Provides flexibility during transition and testing
  • Allows gradual deprecation without breaking compatibility

This addition makes it easier to phase out legacy functionality while maintaining backward compatibility when needed.

Next Steps

Next, I plan to implement dedicated health endpoints and supporting middleware to improve system maintainability and enable more efficient debugging and monitoring.

Ivole32

2026-02-16 [1]

In this development cycle, I focused primarily on identifying and fixing bugs, especially within the user management routes. Several edge cases and unintended behaviors surfaced during testing, particularly around user deletion, role changes, and state transitions. Resolving these issues significantly improved the stability and predictability of the system.

While reviewing safety concerns, I intentionally skipped implementing protections against self-destructive actions for now. The following note reflects that decision:

“To prevent these scenarios, I will implement safeguards to block self-destructive actions in the next iteration.”

I chose to postpone this because it did not feel critical at the current stage of development and would have slowed down progress on more immediate stability fixes.

With the most disruptive bugs addressed, the next step is to begin migrating the core system statistics logic from the legacy API routes. During this process, I plan not only to port the functionality but also to refactor and improve it to better fit the current architecture and performance goals.

Next focus:

  • Port system statistics logic from legacy routes
  • Refactor and optimize the implementation
  • Ensure consistency with the new API structure

This marks the transition from stabilization work to enhancing core functionality.

Ivole32

2026-02-15 [1]

Today I implemented new endpoints for managing user states and permissions.
Admins can now:

  • activate users
  • deactivate users
  • change user roles (admin ↔ non-admin)

While working on these features, I realized that administrators could potentially sabotage themselves, for example by:

  • removing their own admin privileges
  • deactivating their own account
  • accidentally deleting critical accounts

To prevent these scenarios, I will implement safeguards to block self-destructive actions in the next iteration.

To ensure system integrity and protect critical accounts, I introduced an immutable attribute to the user database. When a user is marked as immutable:

  • the account cannot be deleted
  • the account cannot be modified
  • critical permissions cannot be changed

This guarantees that the main administrator account remains protected and that essential system access cannot be lost.
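A minimal sketch of the guard this implies (field and exception names are my own, not the project's): every mutating operation checks the flag before touching the record.

```python
from dataclasses import dataclass


class ImmutableUserError(Exception):
    """Raised when a protected account would be modified or deleted."""


@dataclass
class User:
    id: str
    is_admin: bool = False
    immutable: bool = False


def ensure_mutable(user: User) -> None:
    """Guard called before any delete/update of a user record."""
    if user.immutable:
        raise ImmutableUserError(f"user {user.id} is protected")
```

Centralizing the check in one helper means a new endpoint cannot accidentally forget the protection for immutable accounts.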

These changes significantly improve the safety and robustness of the user management system.

Ivole32

2026-02-14 [2]

Today I started auditing the existing user management routes to identify bugs and potential security issues.

For this process I used OWASP ZAP. From past experience, its automated scanning is very effective at uncovering edge cases, malformed requests, and improper error handling.

During testing I discovered that the user deletion endpoint accepted arbitrary strings instead of strictly validating UUIDs. Supplying invalid values caused database errors due to failed UUID parsing. Input validation has now been tightened to ensure only valid UUIDs (or the special "me" value) are accepted.
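The tightened validation boils down to something like this (function name is mine; the real endpoint wires this into FastAPI): accept a canonical UUID or the special "me" value, and reject everything else before it reaches the database.

```python
import uuid


def parse_user_id(raw: str) -> str:
    """Accept either a canonical UUID string or the special value 'me'."""
    if raw == "me":
        return raw
    try:
        return str(uuid.UUID(raw))  # raises ValueError on garbage input
    except ValueError:
        raise ValueError(f"invalid user id: {raw!r}") from None
```

Failing fast here turns what used to be an opaque database error into a clean 4xx-style validation failure.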

Additionally, I rediscovered an old host-header parsing issue that I had previously reported but which was not accepted upstream.
Issue reference: https://github.com/pallets/werkzeug/issues/3063

To prevent crashes caused by malformed host headers, I implemented a manual mitigation in the middleware layer.
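The core of such a mitigation is a conservative validity check applied before the framework parses the header. This is a sketch of the idea under my own assumptions (the pattern below allows hostname-or-IPv4 with an optional port and deliberately rejects IPv6 literals; the project's actual rule may differ):

```python
import re

# Conservative by design: anything this rejects gets a 400 in the
# middleware instead of crashing downstream host-header parsing.
_HOST_RE = re.compile(
    r"^[A-Za-z0-9](?:[A-Za-z0-9.-]*[A-Za-z0-9])?(?::\d{1,5})?$"
)


def host_header_ok(value: str) -> bool:
    """Return False for Host header values that should be rejected early."""
    return bool(_HOST_RE.fullmatch(value))
```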

Next Steps

  • Continue the security review of existing endpoints
  • Improve error handling to safely manage malformed requests
  • Continue implementing new API routes in parallel with the hardening work

Ivole32

2026-02-14 [1]

Added a new /users endpoint that allows administrators to retrieve a complete list of registered users.

This endpoint is protected by admin permission checks and includes pagination support to ensure performance and prevent large response payloads. Rate limiting has also been applied to reduce potential abuse and protect system resources.
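The pagination part reduces to a slice plus metadata. A sketch (parameter and field names are assumptions, not the API's documented contract):

```python
def paginate(items: list, page: int = 1, per_page: int = 50) -> dict:
    """Return one page of a result set plus paging metadata."""
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be >= 1")
    start = (page - 1) * per_page
    return {
        "items": items[start:start + per_page],
        "page": page,
        "per_page": per_page,
        "total": len(items),
    }
```

Returning `total` alongside the slice lets clients compute the page count without a second request.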

Highlights

  • Admin-only access control
  • Paginated responses for scalability
  • Rate limiting for stability and abuse prevention
  • Prepared for future filtering and search capabilities

This addition improves system management by giving administrators a clear overview of all users while maintaining performance and security.

Ivole32

2026-02-13 [1]

Added User Endpoints to v1 Router

Today I added two new endpoints to the v1 router to improve user self-management:

  • /me — allows authenticated users to retrieve their own account information
  • /delete — allows users to delete their own account

These endpoints are intended to simplify common user actions and prepare the API for more structured account management.

Issues Encountered

During implementation I ran into several problems that were difficult to diagnose. Because the current logging is minimal, I was unable to clearly identify the root causes of some failures and unexpected behaviors.

Next Steps

In the coming days I plan to:

  • improve logging and error visibility across the API
  • continue building additional user management routes
  • migrate legacy routers related to system statistics into the new structure

These steps should improve maintainability, observability, and overall API consistency moving forward.

Ivole32

Devlog 2026-02-11 [1]

New auth system for v1 routes

In the past few days, I started implementing the new auth dependencies for the new v1 routes.

Current Implementation

I implemented the auth functionality in the only existing route of the v1 API (which is currently in development).

Issue with Demo Mode Admin API Key

I faced issues when trying to show the default admin API key in demo mode. The bug was that my FastAPI auth dependency was constructed before the default API key had been created and stored, so the dependency captured a stale value.

Fix

After I found that out, I was able to fix it easily.

Next Steps

Next, I'll implement more user management endpoints in v1.

Ivole32

Devlog 2026-02-05 [2]

Devlog - Legacy Endpoint Warning Middleware

I added a custom middleware that detects requests to legacy API endpoints and adds a warning flag to the response headers. This helps clients recognize when they are calling routes that still rely on the old database layer.

Changes

  • Implemented a custom FastAPI middleware
  • Middleware checks requests against the legacy API prefix
  • Automatically adds deprecation headers for legacy routes

Purpose

The goal is to clearly signal that certain endpoints are still using the legacy database logic and are not connected to the new v1 database system. This supports the ongoing migration.
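Stripped of the FastAPI wiring, the middleware's decision is just a prefix check plus header stamping. A sketch (the prefix and header names here are assumptions, not the project's actual values):

```python
LEGACY_PREFIX = "/legacy"  # assumed prefix for the old routes


def add_legacy_warning(path: str, headers: dict) -> dict:
    """Stamp responses from legacy routes so clients can detect them.
    The project does this inside a FastAPI middleware; this is only
    the bare header logic."""
    if path.startswith(LEGACY_PREFIX):
        headers["X-Legacy-Endpoint"] = "true"  # header names assumed
        headers["Warning"] = '299 - "legacy endpoint, migrate to v1"'
    return headers
```

Using response headers (rather than changing the body) keeps legacy clients working while still making the deprecation machine-readable.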

More legacy routes will be migrated step by step.

Ivole32

2026-02-05 [1]

Devlog – PostgreSQL Migration & API Refactor

Today I continued integrating the PostgreSQL database system into the active backend. The PostgreSQL layer itself was already developed in earlier devlogs. This session focused on connecting it to the running API, starting with the registration flow.

Progress

  • Connected the PostgreSQL system to the backend
  • Started integrating it into the registration endpoint
  • Began rebuilding API routes to use the new database system

Legacy Compatibility

  • The old SQLite3 database functionality is still included (no sync to new database)
  • Existing SQLite-based endpoints are kept as legacy API routes
  • Legacy routes remain functional as fallback
  • New PostgreSQL routes are being built in parallel
  • Migration is happening step by step to avoid breaking changes

Issues Encountered

  • Some queries failed due to schema and type differences
  • Constraint handling caused unexpected errors
  • Multiple migrations produced schema conflicts/bugs

Current Status

  • PostgreSQL connection is working
  • Registration route migration is in progress
  • Legacy SQLite routes are still active
  • New PostgreSQL-based API routes are partially implemented

Next Steps

  • Continue rebuilding API routes for PostgreSQL
  • Gradually replace legacy SQLite endpoints
  • Improve database validation and error handling
Ivole32

Devlog 2026-02-01 [1]

Devlog - Database System Refactor & Error Handling Improvements

Today I continued working on the new database system and focused on stabilizing the overall structure and reliability.

✅ Database Layer Progress

I expanded and refined the new database access layer. The repository-style methods for user records, authentication data, and permission management are now more consistent and better structured. Query execution and transaction handling were reviewed and cleaned up.

✅ Improved Error Handling with Custom Exceptions

I significantly improved error handling by introducing dedicated custom exception classes across the database and service layers. Instead of using generic exceptions, the code now raises domain-specific errors, which makes debugging and API responses much clearer and more predictable.

This includes cases such as:

  • user not found
  • permission record missing
  • last admin protection
  • creation and deletion failures
  • permission update failures
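A sketch of what such an exception hierarchy can look like (class names are illustrative, modeled on the cases listed above, not copied from the project):

```python
class DatabaseError(Exception):
    """Base class so callers can catch all domain errors at once."""


class UserNotFoundError(DatabaseError):
    pass


class PermissionRecordMissingError(DatabaseError):
    pass


class LastAdminError(DatabaseError):
    """Raised when an operation would remove the last remaining admin."""


def demote_admin(admin_count: int) -> None:
    # Guard sketch: refuse to demote when only one admin is left.
    if admin_count <= 1:
        raise LastAdminError("cannot demote the last remaining admin")
```

A shared base class keeps API error handlers simple: one `except DatabaseError` can map every domain failure to a structured response, while specific subclasses still drive specific status codes.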

✅ Full Documentation Added

I wrote complete docstrings for the entire new database functionality. All core database methods are now documented with:

  • purpose
  • arguments
  • return values
  • raised exceptions

This should make future maintenance and extension much easier.

✅ Dependencies Updated

Project dependencies were updated to the newest compatible versions to ensure current features, security fixes, and long-term support.

🔜 Next Steps

Based on current progress, I expect to integrate and test the new database system tomorrow or the day after tomorrow.

Ivole32

2026-01-31 [2]

Devlog – Database Initialization Fix and Migration Cleanup

🛠 Fixed PostgreSQL Initialization Issue

I fixed an issue where the PostgreSQL database was not being created automatically when starting the Docker container. The initialization process has now been corrected to ensure that the required database is reliably created during container startup. This improves the setup process and prevents manual database creation steps.


🧹 Migration Code Cleanup

I also removed outdated and unnecessary migration code from the new database structure. This cleanup helps reduce technical debt, improves maintainability, and ensures that the migration system remains clean and easier to manage moving forward.


🗄 Continued Work on the New Database System

Development on the new database architecture is still ongoing. Several core functionalities still need to be implemented before the system is fully ready. I am currently working on expanding these features, improving stability, and preparing the database for integration into the production system. Further testing and optimization are in progress to ensure a smooth transition.


🚀 Next Steps

The focus now is on finalizing the remaining database features and preparing the implementation for deployment into the live environment.

Ivole32

2026-01-31 [1]

Automatic Migrations, Backups and Migration Logging

✅ Automatic Database Migrations

I have successfully implemented a fully automatic database migration system.
The system now detects schema changes and applies migrations without requiring manual intervention. This significantly reduces setup time, prevents version mismatches, and ensures that all environments remain synchronized.

The migration process has been designed to be safe and consistent, making it easier to deploy updates and maintain database integrity across development and production environments.
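The detect-and-apply loop can be illustrated with a toy runner (the project itself drives Alembic against PostgreSQL; this stand-in uses SQLite and hand-written revisions purely to show the shape of the mechanism):

```python
import sqlite3

# Ordered revisions; in the real system Alembic owns this.
MIGRATIONS = {
    1: "CREATE TABLE users (id TEXT PRIMARY KEY)",
    2: "ALTER TABLE users ADD COLUMN role TEXT DEFAULT 'user'",
}


def migrate(conn: sqlite3.Connection) -> int:
    """Apply every pending revision; return the resulting schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    row = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()
    current = row[0] or 0
    for rev in sorted(MIGRATIONS):
        if rev > current:
            conn.execute(MIGRATIONS[rev])
            conn.execute("INSERT INTO schema_version (v) VALUES (?)", (rev,))
            current = rev
    conn.commit()
    return current
```

Recording each applied revision in the database itself is what keeps all environments synchronized: re-running the migration step is a no-op once the schema is current.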


💾 Automatic Backup System

Alongside automatic migrations, I implemented a reliable backup system that runs automatically before migrations are executed.

This ensures that:

  • The database state is preserved before any structural changes are applied
  • Recovery is possible in case of migration failures
  • Data integrity risks are minimized

The backup process is fully integrated into the migration workflow and requires no manual interaction.
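A file-copy sketch of the pre-migration backup step (a PostgreSQL deployment would shell out to pg_dump instead of copying a file; paths and naming scheme here are my own):

```python
import pathlib
import shutil
import time


def backup_before_migration(db_file: str,
                            backup_dir: str = "backups") -> pathlib.Path:
    """Copy the database aside before migrations run, so a failed
    migration can be rolled back to a known-good state."""
    dest_dir = pathlib.Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    # Timestamped name so successive migration runs never overwrite
    # an earlier backup.
    dest = dest_dir / f"{pathlib.Path(db_file).name}.{int(time.time())}.bak"
    shutil.copy2(db_file, dest)
    return dest
```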


📊 Migration Logging System

A persistent and reliable migration logging system has now been added to the database.
This system records detailed information about every migration attempt, including:

  • Migration direction (upgrade/downgrade)
  • Target revision
  • Execution status (success or failure)
  • Error information when failures occur
  • Timestamp of execution

This logging infrastructure provides full transparency and traceability for database changes and greatly simplifies debugging and maintenance.


🚀 Summary

With these features implemented, the database management system is now significantly more robust and production-ready. Automatic migrations, integrated backups, and detailed migration logging together provide a safe and maintainable workflow for future development.

Ivole32

2026-01-25 [3]

Progress on Database Migration System and Automatic Backups

I continued working on the migration logic of the new database system.

The main focus is currently on implementing reliable backup functionality and automatic database migrations using Alembic. The goal is to ensure that schema changes can be applied safely, while minimizing the risk of data loss by creating backups only when migrations are actually required.

While the core functionality is coming together, I am still running into some difficulties regarding overall code design decisions. In particular, deciding where certain responsibilities should live (for example, how much logic belongs in startup code versus dedicated database or service classes) is proving to be non-trivial.

For now, the priority is correctness and safety over perfect structure. Once the migration and backup flow is stable, I plan to revisit and refine the architecture to make it cleaner and more maintainable.

Ivole32

2026-01-25 [2]

Devlog - January 25, 2026

Started implementing backup and migration functions directly in the code.
The goal is to make database upgrades safer and more automated without relying solely on the CLI.

Tomorrow, the plan is to attempt integrating the new PostgreSQL database system into the existing codebase and ensure everything works with the updated setup.

Ivole32

Devlog 2026-01-25 [1]

Devlog: Implementing PostgreSQL Database Migrations with Alembic

Today I spent some time setting up database migrations for our PostgreSQL backend using Alembic.

The main goal was to streamline the development process for our new database system, which currently can only be tested after committing changes on my Linux server. By integrating Alembic migrations, I can now:

  • Keep track of schema changes in a structured way
  • Apply updates to the database reliably without manual intervention
  • Simplify testing and deployment, reducing errors caused by manual schema updates

This setup should make future development much faster and safer, especially when iterating on new models and schema modifications.

Ivole32

Devlog 2026-01-23 [1]

I continued restructuring the project by moving files into more appropriate and maintainable folder structures.
Alongside this, I kept improving code documentation to make the codebase easier to understand and work with.

This ongoing cleanup aims to improve overall project organization, readability, and long-term maintainability.

Ivole32

Devlog 2026-01-18 [2]

This is a small interim update reflecting recent committed changes.

I continued working on improving the structure and readability of the codebase, with several refactors now fully committed to keep the project cleaner and easier to maintain.

Development on the new PostgreSQL database implementation is also progressing. The database setup and related changes have been committed, but the new database is not yet integrated into the main application.

Additionally, I worked on general organization and cleanup tasks. More functionality and behavior are now configurable via the config files, improving flexibility without requiring code changes.

This update focuses on internal improvements and preparation for future features rather than a full release.

Ivole32

Devlog 2026-01-18 [1]

I decided to continue my old SoM project called Linux API as part of the Flavortown competition.

Since working on the original version of Linux API, my technical skills and my approach to clean and maintainable software design have improved significantly. Because of this, my current focus is not on adding new features yet, but on refactoring, cleaning up, and restructuring the project.

Project Cleanup and Refactoring

At the moment, I am going through the project and removing outdated or unnecessary parts to bring the codebase up to my current standards.

So far, I have:

  • Completely removed the old pip package from the repository
  • Updated the versions in the requirements file to use more recent and supported dependencies
  • Started redesigning the folder structure to make the project clearer and easier to maintain
  • Begun improving the documentation alongside the structural changes

This cleanup phase is important to reduce technical debt and ensure the project is easy to work on in the long term.

New Database System

In parallel, I am working on a new database system using PostgreSQL. Instead of modifying the old data layer, I decided to redesign the database from scratch with a cleaner schema and better scalability in mind.

I am currently designing the tables and their relationships.
An image showing the current database table structure is attached to this update.

Next Steps

Right now, the focus is on finalizing the new folder structure, improving documentation, and finishing the database design. Once this foundation is solid, I will move on to rebuilding and extending features on top of it.

More updates coming soon.
