Activity

team1

Just finished the initial working build of memorA - a fully offline AI companion for Alzheimer's patients, running on a Raspberry Pi 5.

The core loop: WebRTC VAD listens on the mic, segments speech, pipes audio through Faster-Whisper (base, INT8) for transcription, sends text to Qwen 0.8B via Ollama, and speaks back with Piper TTS. 100% offline, zero cloud, full patient privacy.
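A minimal sketch of the VAD segmentation step in that loop. In the real build `is_speech` would be `webrtcvad.Vad.is_speech(frame, sample_rate)`; here a stand-in predicate keeps the example self-contained, and the hangover length is an assumed tuning value.

```python
SAMPLE_RATE = 16_000
FRAME_MS = 30
FRAME_SAMPLES = SAMPLE_RATE * FRAME_MS // 1000   # 480 samples per 30 ms frame
HANGOVER_FRAMES = 10                             # ~300 ms of silence ends a segment (assumed)

def segment_speech(frames, is_speech):
    """Group consecutive speech frames into utterances.

    frames    -- iterable of fixed-size audio frames
    is_speech -- predicate frame -> bool (webrtcvad in the real pipeline)
    Yields one list of frames per detected utterance.
    """
    current, silence = [], 0
    for frame in frames:
        if is_speech(frame):
            current.append(frame)
            silence = 0
        elif current:
            silence += 1
            current.append(frame)              # keep trailing context for now
            if silence >= HANGOVER_FRAMES:
                yield current[:-silence]       # drop the trailing silence
                current, silence = [], 0
    if current:
        yield current[:len(current) - silence]

# Toy run: 1 = speech frame, 0 = silence frame
frames = [1] * 5 + [0] * 12 + [1] * 3 + [0] * 12
utterances = list(segment_speech(frames, lambda f: f == 1))
```

Each yielded utterance would then be concatenated and handed to Faster-Whisper for transcription.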

First-boot orientation asks 3 questions (name, last meal, day of week) and saves a local profile that is injected into every LLM system prompt.
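A hypothetical sketch of that injection step - the field names and prompt wording below are assumptions, not the actual memorA prompt:

```python
def build_system_prompt(profile: dict, lang: str = "EN") -> str:
    """Fold the saved first-boot profile into the LLM system prompt."""
    base = (
        "You are a gentle, patient companion for a person with Alzheimer's. "
        "Speak in short, calm sentences."
    )
    # Flatten the stored profile answers into one line of known facts.
    facts = "; ".join(f"{k}: {v}" for k, v in profile.items())
    return f"{base}\nKnown about the patient -> {facts}\nRespond in {lang}."

# Example profile as saved by first-boot orientation (illustrative values)
profile = {"name": "Maria", "last_meal": "breakfast", "day": "Tuesday"}
prompt = build_system_prompt(profile, lang="RO")
```

The resulting string would be sent as the system prompt on every Ollama request, so the model always knows the patient's basic orientation answers.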

Incident detection: 30 ms RMS checks catch sudden loud noises (falls, breaking glass, screams). Bilingual distress keywords (EN + RO) trigger an emergency dialogue; if the patient is unresponsive after 8 s, the system logs the incident and simulates a 112 call.

Caretaker web UI (Flask, port 5000): view the patient profile and incident logs, trigger a re-evaluation, set the Bluetooth MAC, and toggle EN/RO language. It runs in a background thread and never blocks the AI loop.
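The Flask app itself needs Flask installed, so here is just the shared-state pattern behind the "never blocks the AI loop" claim, sketched with the stdlib: one thread writes incidents behind a lock while the other takes cheap snapshot copies. (In the real build the web UI is the background thread; the roles here are illustrative.)

```python
import threading

class IncidentLog:
    """Thread-safe incident log shared by the AI loop and the web UI."""
    def __init__(self):
        self._lock = threading.Lock()
        self._events = []

    def append(self, event: str) -> None:
        with self._lock:
            self._events.append(event)

    def snapshot(self) -> list:
        with self._lock:
            return list(self._events)   # return a copy; callers can't mutate

log = IncidentLog()

def writer():
    # Stand-in for the AI loop recording detections.
    for i in range(3):
        log.append(f"incident-{i}")

t = threading.Thread(target=writer)
t.start()
t.join()
events = log.snapshot()
```

Because the lock is only held for the append or the copy, neither side can stall the other for longer than one tiny critical section.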

Systemd service + 3AM auto-updater + Bluetooth autoconnect. install.sh handles everything end-to-end.
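For readers unfamiliar with systemd timers, a hypothetical sketch of what install.sh might lay down - unit names, paths, and the service user are all assumptions:

```ini
# /etc/systemd/system/memora.service (hypothetical path)
[Unit]
Description=memorA offline AI companion
After=network.target bluetooth.target

[Service]
ExecStart=/usr/bin/python3 /opt/memora/main.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/memora-update.timer (hypothetical path)
[Unit]
Description=Nightly memorA self-update

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

`OnCalendar=*-*-* 03:00:00` fires the updater at 3 AM daily, and `Persistent=true` runs a missed update at next boot if the Pi was off overnight.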

Next: RPi 5 hardware testing, VAD/RMS tuning, ESP32 wristband for fall detection.

Comments

D-Pod 22 days ago

amazing project! this has lots of potential, and I’m looking forward to seeing this become an actual product!