Urja AI banner

Urja AI

11 devlogs
21h 26m 7s

Urja AI is an intelligent forecasting system designed to predict future electricity demand in Nepal using historical energy consumption data. By analyzing past trends and patterns, it helps improve energy planning, optimize resource allocation, and support more reliable and efficient power management across the country.

This project uses AI

I used GitHub Copilot for basic debugging, error solving, writing the README, and improving the basic web UI.

Demo Repository

adhikarisubodh999

Shipped this project!

Shipped URJA AI final snapshot with the full forecasting stack, cleaned docs, and Vercel deployment setup. Also aligned reported metrics with actual model output so the dashboard and README now match.

adhikarisubodh999

Packed the full snapshot from the finished URJA-AI project.

Includes complete stack:

  • Flask app with full API surface
  • trained model artifacts
  • extended + serving datasets
  • training script + notebook
  • full dashboard template and docs

Main goal here was not adding new features, just freezing the exact finished state so the whole progression has a clean end point.

adhikarisubodh999

Production-oriented checkpoint.

Backend and retraining script are now both in decent shape:

  • dynamic latest model folder support
  • full endpoint set (forecast, historical, metrics, annual, status)
  • holdout + sequential CV in retraining
  • cleaner dashboard flow for quick checking

This stage is where I’d normally hand over for testing before documentation and deployment files.
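The holdout + sequential CV mentioned above can be sketched with scikit-learn's TimeSeriesSplit; the series length and fold count below are made-up stand-ins, not the project's actual configuration:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Toy stand-in for a monthly demand series (values are illustrative)
y = np.arange(24, dtype=float)

# Sequential CV: every fold trains on the past and validates on the future
tscv = TimeSeriesSplit(n_splits=4)
for fold, (train_idx, val_idx) in enumerate(tscv.split(y)):
    # No future leakage: validation always starts after training ends
    print(f"fold {fold}: train 0-{train_idx.max()}, val {val_idx.min()}-{val_idx.max()}")
```

Unlike shuffled k-fold, each fold here only ever validates on data that comes after everything it trained on, which is what keeps the reported metrics honest for forecasting.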

adhikarisubodh999

Created standalone retraining script so model updates are reproducible.

This first version trains and evaluates on the same set (not ideal), but it establishes:

  • feature engineering pipeline
  • artifact saving flow (model/scalers/config/metrics)
  • version-based model output folder

Next step is proper holdout and sequential CV because current metrics are too optimistic.
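The artifact-saving flow could look roughly like this minimal sketch; the file names, joblib serialization, and `nepal_<version>` folder convention are assumptions for illustration, not lifted from the repo:

```python
import json
import tempfile
from pathlib import Path

import joblib
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

def save_artifacts(model, scaler, config, metrics, version, root="models"):
    """Write model, scaler, config, and metrics into a version-named folder."""
    out = Path(root) / f"nepal_{version}"
    out.mkdir(parents=True, exist_ok=True)
    joblib.dump(model, out / "model.joblib")
    joblib.dump(scaler, out / "scaler.joblib")
    (out / "config.json").write_text(json.dumps(config))
    (out / "metrics.json").write_text(json.dumps(metrics))
    return out

# Usage with toy objects, written to a temp directory
folder = save_artifacts(LinearRegression(), StandardScaler(),
                        config={"lags": 12}, metrics={"mae": 0.0},
                        version=2025, root=tempfile.mkdtemp())
```

Keeping model, scalers, config, and metrics in one versioned folder means a serving process can later load a consistent set with a single path.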

adhikarisubodh999

Expanded the backend API surface to something close to the final product:

  • /api/metrics
  • /api/annual
  • richer /api/status
  • dynamic latest model folder selection

This made it easier to expose model quality and fiscal-year trends in the UI.

The model folder auto-detect logic was a bit tricky because I wanted numeric sorting (nepal_2025 ranks above nepal_2024), not a simple string sort.
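One way to get that numeric ordering is to pull the trailing number out of each folder name and compare it as an integer; this is a hedged sketch, and the `latest_model_dir` name and regex are mine, not the project's:

```python
import re
from pathlib import Path

def latest_model_dir(root="models"):
    """Return the model folder with the highest numeric suffix.

    A plain string sort would rank 'nepal_999' above 'nepal_2025' because
    '9' > '2' character-wise; extracting the trailing digits and comparing
    them as ints avoids that.
    """
    candidates = [p for p in Path(root).iterdir() if p.is_dir()]

    def version_key(path):
        match = re.search(r"(\d+)$", path.name)
        return int(match.group(1)) if match else -1

    return max(candidates, key=version_key, default=None)
```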

adhikarisubodh999

Added a proper browser dashboard so I can visually inspect both historical and forecast series.

This stage finally feels interactive:

  • / renders HTML
  • chart updates by calling API
  • forecast horizon selectable (short/medium/long)

UI is still simple and not polished, but useful for validating model behavior quickly.

adhikarisubodh999

Switched forecast endpoint from naive baseline to actual ML inference.

Key things implemented:

  • iterative step-by-step forecasting loop
  • feature generation per target step
  • scaler in/out transform around model prediction

Main headache was making sure lag features use updated predicted values in sequence. Once that was fixed, long-horizon forecast looked consistent.
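That feedback loop can be sketched as below. The 3-lag linear model on a toy series stands in for the real feature pipeline, and the scaler in/out transforms are omitted to keep the loop itself visible:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy setup: learn "next value from the last 3 lags" on a linear series
history = [float(v) for v in range(1, 13)]
X = np.array([history[i:i + 3] for i in range(len(history) - 3)])
y = np.array(history[3:])
model = LinearRegression().fit(X, y)

def iterative_forecast(model, history, steps, n_lags=3):
    """Step-by-step forecasting: each prediction is appended to the window,
    so later steps build their lag features from predicted values."""
    window = list(history)
    preds = []
    for _ in range(steps):
        features = np.array(window[-n_lags:]).reshape(1, -1)
        next_val = float(model.predict(features)[0])
        preds.append(next_val)
        window.append(next_val)  # feed the prediction back as a lag
    return preds

forecast = iterative_forecast(model, history, steps=6)
```

The `window.append(next_val)` line is the part the devlog describes: without it, every step would reuse the last observed lags and the long-horizon forecast would go flat.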

adhikarisubodh999

Connected app to trained model artifacts for the first time.

This stage only proves loading works: model, scalers, config, and dataset.

I kept directory lookup hardcoded (models/nepal_2025) for now because I wanted a stable checkpoint before adding dynamic model version detection.
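A loading step along these lines would prove the artifacts resolve; the file names and `load_artifacts` helper are illustrative assumptions, not the repo's actual code:

```python
import json
from pathlib import Path

import joblib

MODEL_DIR = Path("models/nepal_2025")  # hardcoded checkpoint, as in the log

def load_artifacts(model_dir=MODEL_DIR):
    """Load model, scaler, and config, failing loudly if anything is missing."""
    required = ["model.joblib", "scaler.joblib", "config.json"]
    missing = [name for name in required if not (model_dir / name).exists()]
    if missing:
        raise FileNotFoundError(f"missing artifacts in {model_dir}: {missing}")
    model = joblib.load(model_dir / "model.joblib")
    scaler = joblib.load(model_dir / "scaler.joblib")
    config = json.loads((model_dir / "config.json").read_text())
    return model, scaler, config
```

Failing fast on a missing file makes the "loading works" checkpoint verifiable instead of silently serving a half-loaded model.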

adhikarisubodh999

Before jumping to ML model loading, I added a naive forecast baseline (rolling mean of recent values).

This gives me a working /api/forecast and helps test the chart flow without model artifacts.

Forecast quality is obviously weak for seasonal spikes, but this stage was about plumbing, not accuracy.
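A rolling-mean baseline of this kind fits in a few lines; the function name and window size here are placeholders, not the project's values:

```python
import numpy as np

def naive_forecast(history, steps, window=7):
    """Naive baseline: repeat the rolling mean of the most recent values."""
    recent_mean = float(np.mean(history[-window:]))
    return [recent_mean] * steps

print(naive_forecast([100, 110, 120, 130], steps=3, window=2))  # [125.0, 125.0, 125.0]
```

Because the output is a flat line, it exercises the full /api/forecast-to-chart plumbing while making the baseline's blindness to seasonal spikes obvious on sight.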

adhikarisubodh999

Added dataset loading and basic status endpoint.

Now the app can confirm whether the CSV is present and show the row count and date range.

Small issue: parsing looked wrong at first because I forgot parse_dates=['date']. Fixed that and sorting is stable now.
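The difference parse_dates makes can be shown on a tiny inline CSV (the column names here are assumed, not taken from the project's dataset):

```python
import io

import pandas as pd

csv_text = "date,demand\n2024-02-01,120\n2024-01-01,100\n"

# Without parse_dates, 'date' stays a plain string column; with it,
# pandas gives a real datetime dtype that sorts and ranges correctly.
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["date"])
df = df.sort_values("date").reset_index(drop=True)

print(df["date"].dtype)  # datetime64[ns]
```

With a datetime column, row count and data range for the status endpoint fall out of `len(df)`, `df["date"].min()`, and `df["date"].max()`.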

adhikarisubodh999

Started with the smallest Flask app possible to make sure environment and routing are fine.

Nothing fancy yet, just a health-like response at /.

I almost started with modeling directly, but I’ve burned effort before when basic server setup was broken, so I’m keeping this boring on purpose.
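A smallest-possible Flask app of that shape is only a few lines; the response payload below is a guess at what a health-style route might return, not the actual one:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # Health-style response: proves the environment and routing work
    return jsonify(status="ok", service="urja-ai")
```

Run it with `flask run` and hit `/`; if the JSON comes back, the boring-on-purpose foundation is confirmed before any modeling code exists.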
