Priority Tokens is a research project exploring whether a fine-tuned LLM can be taught to treat certain parts of its context as more important than others, based on explicit user-defined tags. The idea is to give users a simple way to mark text with priority levels — «Priority1» through «Priority10» — so the model reliably recalls and follows high-priority content even when it’s buried deep in a long context. The project uses QLoRA fine-tuning on Qwen3-8B to teach this behavior through supervised examples, without modifying the model’s architecture. It’s a hobbyist-scale experiment, but one grounded in a real and documented weakness of current LLMs.
I am using Claude (the chat assistant, not Claude Code) to help me learn and develop the concepts.