
Adrian Experiment (AI Experiment)

1 devlog
9h 22m 41s

A Discord selfbot that simulates a persistent, AI-driven human persona named Adrian, a 16-year-old developer from Germany. Adrian has a memory system, a circadian rhythm, dynamic moods, and a consistent personality powered by a locally-running LLM.

This project uses AI

I used Mistral Nemo Instruct 12B for his brain; as for building him, it was mostly by hand, and I sometimes debugged with GitHub Copilot.

Demo Repository


Gabs

This is the devlog for this project; here are some details:

Experiment window: 7 days
Test environment: Discord server Snug Nook (snugnook.org)

This was a 7-day experiment to see if a single main.ts process could feel like a persistent person instead of a command bot. The goal was technical: stable behavior, memory over time, and believable pacing in chat.

Core setup:

Adrian runs on a simple stack:

  • Deno runtime
  • discord.js-selfbot-v13 for Discord events and presence
  • Local MLX endpoint (http://127.0.0.1:8080/v1/chat/completions) for generation
  • SQLite for user memory (users, user_facts)

Everything is wired together inside main.ts.
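To make the wiring concrete, here is a minimal sketch of how a process like this might talk to the local MLX endpoint. The function names (`buildChatRequest`, `askAdrian`), the model string, and the sampling parameters are my assumptions for illustration, not taken from the actual main.ts; only the endpoint URL comes from the setup above.

```typescript
// Hypothetical sketch: calling the local OpenAI-compatible MLX endpoint.
// buildChatRequest/askAdrian and all parameters are illustrative guesses.
const MLX_URL = "http://127.0.0.1:8080/v1/chat/completions";
const PERSONA_PROMPT = "You are Adrian, a 16-year-old developer from Germany.";

function buildChatRequest(userText: string) {
  return {
    model: "mistral-nemo-instruct-12b", // assumed model id
    messages: [
      { role: "system", content: PERSONA_PROMPT },
      { role: "user", content: userText },
    ],
    temperature: 0.8, // assumed: a bit loose, for a human-ish voice
    max_tokens: 200,  // assumed: short chat-length replies
  };
}

async function askAdrian(userText: string): Promise<string> {
  const res = await fetch(MLX_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(userText)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the endpoint is OpenAI-compatible, swapping in a different local server would only mean changing `MLX_URL`.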

How it behaved:

The flow is straightforward:

  • Listen to incoming messages and filter obvious noise.
  • Score whether a message is worth replying to (ping, DM, question, reply context, etc.).
  • Buffer user input for ~5 seconds so split messages can be merged.
  • Generate a response with persona constraints.
  • Clean and validate output before sending.
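The scoring and buffering steps could look roughly like this. The signal names, weights, and threshold are invented for the sketch; the real main.ts presumably weighs the same signals (ping, DM, question, reply context) differently.

```typescript
// Illustrative reply-scoring heuristic; weights/threshold are made up.
interface Signals {
  isPing: boolean;
  isDM: boolean;
  isQuestion: boolean;
  isReplyToAdrian: boolean;
}

function replyScore(s: Signals): number {
  let score = 0;
  if (s.isPing) score += 5;          // direct mention: almost always answer
  if (s.isDM) score += 5;
  if (s.isReplyToAdrian) score += 4; // ongoing conversation
  if (s.isQuestion) score += 2;      // questions alone don't force a reply
  return score;
}

const shouldReply = (s: Signals) => replyScore(s) >= 4;

// ~5 s per-user buffer so split messages are merged before generation.
const BUFFER_MS = 5000;
type Pending = { parts: string[]; timer?: ReturnType<typeof setTimeout> };
const pending = new Map<string, Pending>();

function bufferMessage(
  userId: string,
  text: string,
  flush: (merged: string) => void,
  delayMs = BUFFER_MS,
) {
  const entry = pending.get(userId) ?? { parts: [] };
  entry.parts.push(text);
  if (entry.timer) clearTimeout(entry.timer); // each new fragment restarts the window
  entry.timer = setTimeout(() => {
    pending.delete(userId);
    flush(entry.parts.join(" "));
  }, delayMs);
  pending.set(userId, entry);
}
```

The debounce-style buffer is what makes "hey" / "quick question" / "does deno cache deps?" read as one prompt instead of three.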

On top of replies, Adrian also:

  • Updates mood/presence every 10 minutes from recent context.
  • Posts occasional autonomous “icebreaker” thoughts.
  • Reacts to hype/meme messages and sometimes joins reaction bandwagons.
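The 10-minute mood loop might be structured like this. The `Mood` values, the presence strings, and `summarizeMood` are all hypothetical; only the interval comes from the description above.

```typescript
// Sketch of the periodic mood/presence update. Mood names, presence
// strings, and summarizeMood() are invented for illustration.
const MOOD_INTERVAL_MS = 10 * 60 * 1000;

type Mood = "chill" | "hyped" | "tired" | "focused";

function pickPresence(mood: Mood): string {
  const map: Record<Mood, string> = {
    chill: "vibing",
    hyped: "LETS GO",
    tired: "zzz",
    focused: "grinding on a side project",
  };
  return map[mood];
}

// Hypothetical wiring inside the client, commented out since
// summarizeMood/recentMessages/client are not defined here:
// setInterval(async () => {
//   const mood = await summarizeMood(recentMessages);
//   client.user?.setActivity(pickPresence(mood));
// }, MOOD_INTERVAL_MS);
```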

Memory and state:

Two memory layers were used:

  • Persistent (SQLite): trust score, vibe summary, extracted user facts.
  • In-memory: short rolling context, pending reply buffers, recent ping history.
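A plausible shape for the persistent layer, given the `users` and `user_facts` tables named above; the actual column names and types in the repo may differ.

```sql
-- Guessed schema for the SQLite memory layer; real columns may differ.
CREATE TABLE IF NOT EXISTS users (
  user_id TEXT PRIMARY KEY,
  trust   INTEGER DEFAULT 0,  -- trust score built up over chats
  vibe    TEXT                -- short model-written vibe summary
);

CREATE TABLE IF NOT EXISTS user_facts (
  user_id    TEXT NOT NULL REFERENCES users(user_id),
  fact       TEXT NOT NULL,   -- e.g. "plays bass", extracted from chat
  learned_at TEXT DEFAULT CURRENT_TIMESTAMP
);
```

Keeping facts in a separate table means each user can accumulate any number of them without schema changes.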

This gave decent continuity across chats while keeping runtime logic lightweight.

Guardrails that helped:

A few practical safeguards carried most of the stability:

  • Coherence checks to reject weird/noisy outputs.
  • Strict formatting prompts for mood, vibe, and fact extraction.
  • Fallbacks when generation failed or came back too long.
  • Sleep/wake scheduling so behavior had natural downtime.
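As a rough sketch of the output-validation step: the specific checks below (length cap, leaked role labels, persona-break phrases) are my guesses at what the coherence checks and fallbacks might cover, and `FALLBACK` is an invented placeholder.

```typescript
// Hedged sketch of output cleaning/validation; the concrete checks
// and the fallback string are assumptions, not the repo's actual code.
const MAX_LEN = 400;
const FALLBACK = "lol wait what";

function cleanOutput(raw: string): string {
  let text = raw.trim();
  // strip leaked role labels or surrounding quotes the model sometimes adds
  text = text.replace(/^(Adrian:|")/, "").replace(/"$/, "").trim();
  const incoherent =
    text.length === 0 ||
    text.length > MAX_LEN ||                // came back too long
    /as an ai|language model/i.test(text);  // persona break
  return incoherent ? FALLBACK : text;
}
```

Cheap string checks like these catch most bad generations before they ever reach Discord, which is why no heavier infrastructure was needed.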

Limits we hit:

The design worked, but it has clear limits:

  • Persona consistency is mostly prompt-based.
  • One global isProcessing lock can bottleneck multiple concurrent conversations.
  • Some conversational state is process-local and resets on restart.
  • If the local LLM server is down, behavior degrades hard.
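To illustrate the lock bottleneck: with one global `isProcessing` flag, a message from any channel queues behind whatever is currently generating. This is a minimal reconstruction of the pattern, not the repo's code; `handle` and `queue` are illustrative names.

```typescript
// Minimal illustration of the global-lock bottleneck: a single
// isProcessing flag serializes work across ALL channels.
let isProcessing = false;
const queue: string[] = [];

async function handle(msg: string, reply: (m: string) => Promise<void>) {
  if (isProcessing) {
    queue.push(msg); // every other conversation waits here
    return;
  }
  isProcessing = true;
  try {
    await reply(msg);
  } finally {
    isProcessing = false;
    const next = queue.shift();
    if (next) await handle(next, reply);
  }
}
```

A per-channel lock (a `Map<channelId, queue>`) would remove the cross-channel stalls, at the cost of more bookkeeping.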

Final takeaway (7 days in Snug Nook):

In Snug Nook (snugnook.org), this looked less like a classic chatbot and more like a small autonomous loop: ingest events, reason with local inference, store lightweight memory, and keep running. It stayed technically coherent for the full 7-day window, with most failures handled by simple guardrails rather than heavy infrastructure.


PS: I've attached some of his funniest moments while talking with fellow Discord members.
