
memorA - Offline Alzheimer's AI Companion


memorA is a fully offline, voice-activated AI companion built to assist Alzheimer’s patients on edge hardware like the Raspberry Pi 5. All processing runs 100% locally after setup — no cloud, no internet, no privacy concerns.

This project uses AI

Used GitHub Copilot for inline completions, autonomous code writing, and debugging.

Demo Repository


team1

Just finished the initial working build of memorA - a fully offline AI companion for Alzheimer's patients on a Raspberry Pi 5.

The core loop: WebRTC VAD listens on the mic, segments speech, pipes audio through Faster-Whisper (base, INT8) for transcription, sends text to Qwen 0.8B via Ollama, and speaks back with Piper TTS. 100% offline, zero cloud, full patient privacy.
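The segmentation step of that loop can be sketched as a small state machine that buffers VAD-flagged frames and emits an utterance once silence persists (a minimal sketch; `Segmenter`, `max_silence`, and the frame handling are illustrative, not memorA's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class Segmenter:
    """Buffers 30 ms mic frames while WebRTC VAD reports speech; emits the
    joined utterance after `max_silence` consecutive silent frames."""
    max_silence: int = 10  # ~300 ms of silence ends an utterance
    _buf: list = field(default_factory=list)
    _silence: int = 0

    def push(self, frame: bytes, is_speech: bool):
        if is_speech:
            self._buf.append(frame)
            self._silence = 0
        elif self._buf:
            self._silence += 1
            if self._silence >= self.max_silence:
                utterance = b"".join(self._buf)
                self._buf, self._silence = [], 0
                return utterance  # ready for Faster-Whisper transcription
        return None
```

In the real loop, `is_speech` would come from `webrtcvad.Vad.is_speech(frame, 16000)` and the returned utterance would be fed to the INT8 Whisper model, then on to Ollama and Piper.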

First-boot orientation asks three questions (name, last meal, day of week) and saves a local profile that is injected into every LLM system prompt.
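The profile injection might look like this (the field names and prompt wording are assumptions, not memorA's actual schema):

```python
def build_system_prompt(profile: dict) -> str:
    """Fold the locally stored first-boot profile into the system prompt
    sent with every LLM request (illustrative fields, not the real ones)."""
    return (
        "You are a gentle companion for an Alzheimer's patient. "
        f"Their name is {profile['name']}. "
        f"Their last recorded meal was {profile['last_meal']}. "
        f"Today is {profile['day_of_week']}. "
        "Answer briefly and warmly."
    )
```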

Incident detection: 30 ms RMS checks catch sudden loud noises (falls, glass breaking, screams). Bilingual distress keywords (EN + RO) trigger an emergency dialogue - if the patient is unresponsive after 8 s, the system logs the incident and simulates a 112 call.
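A minimal sketch of both checks, assuming 16-bit mono audio at 16 kHz; the RMS threshold and keyword lists here are illustrative, not the real values:

```python
import math
import struct

LOUD_RMS = 8000  # illustrative threshold for 16-bit samples
DISTRESS = {"en": {"help", "fall"}, "ro": {"ajutor"}}  # illustrative subset

def frame_rms(frame: bytes) -> float:
    """RMS of one 30 ms frame of 16-bit little-endian mono audio."""
    samples = struct.unpack(f"<{len(frame) // 2}h", frame)
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_loud_incident(frame: bytes) -> bool:
    """Sudden loud noise: fall, breaking glass, scream."""
    return frame_rms(frame) >= LOUD_RMS

def has_distress(text: str, lang: str) -> bool:
    """Check transcribed speech for distress keywords in EN or RO."""
    return any(word in DISTRESS[lang] for word in text.lower().split())
```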

Caretaker web UI (Flask, port 5000): view patient profile + incident logs, trigger re-evaluation, set Bluetooth MAC, toggle EN/RO language. Background thread, never blocks the AI loop.

Systemd service + 3AM auto-updater + Bluetooth autoconnect. install.sh handles everything end-to-end.
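The service wiring could look roughly like this (paths, unit names, and the user are assumptions, not the repo's actual files):

```ini
# /etc/systemd/system/memora.service - illustrative unit
[Unit]
Description=memorA offline AI companion
After=bluetooth.target sound.target

[Service]
ExecStart=/usr/bin/python3 /opt/memora/main.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target

# memora-update.timer - fires the 3 AM auto-updater
[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true
```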

Next: RPi 5 hardware testing, VAD/RMS tuning, ESP32 wristband for fall detection.


Comments

D-Pod about 1 month ago

amazing project! this has lots of potential, and I’m looking forward to seeing this become an actual product!