Activity

housey2k

Fixed race condition

When the user sent a message at the same time the EngagementTicker ticked, the program would crash. I fixed that by adding a boolean “thinking” inside LlamaClient: if it's true, the SendMessage function returns early with the string “Error: Race condition”.
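The guard above can be sketched like this (a Python sketch of the idea; the real project is C#, and the lock around the check-and-set is my addition, without it two callers could both see the flag as false):

```python
import threading

class LlamaClient:
    """Minimal sketch of the "thinking" guard described above."""

    def __init__(self):
        self.thinking = False          # set while a request is in flight
        self._lock = threading.Lock()  # makes the check-and-set atomic

    def send_message(self, text):
        with self._lock:
            if self.thinking:
                # another caller (e.g. the engagement ticker) got here first
                return "Error: Race condition"
            self.thinking = True
        try:
            return self._query_llm(text)
        finally:
            self.thinking = False

    def _query_llm(self, text):
        # stand-in for the actual HTTP call to the LLM server
        return "reply to: " + text
```

Returning an error string drops the losing message instead of queueing it, which matches the devlog's fix; a queue would be the fancier alternative.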

Fixed save multiline bug

There was a bug where, when reloading, the program wouldn't fully recover memories with more than one line. I fixed it by adding the “U:” or “B:” prefix to the beginning of every line of multiline messages.
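The save format can be sketched like this (Python sketch; the exact prefix spelling and separator in the real C# code are assumptions):

```python
# Every line of a message gets a "U: " (user) or "B: " (bot) prefix,
# so multiline messages can be stitched back together on reload.

def save_chat(messages):
    """messages: list of (role, text) pairs, role is "U" or "B"."""
    lines = []
    for role, text in messages:
        for line in text.split("\n"):
            lines.append(f"{role}: {line}")
    return "\n".join(lines)

def load_chat(data):
    """Rebuild messages by merging consecutive lines with the same prefix."""
    messages = []
    for line in data.split("\n"):
        role, text = line.split(": ", 1)
        if messages and messages[-1][0] == role:
            messages[-1] = (role, messages[-1][1] + "\n" + text)
        else:
            messages.append((role, text))
    return messages
```

One caveat of this scheme: two back-to-back messages from the same speaker merge into one on reload, since the prefix alone can't tell them apart; a per-message separator would fix that.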

Changed memory system

Previously I was sending the entire chat history to the bot; now I send only the last 10 messages (by default) plus the master prompt with every request.
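Assembling the context then looks roughly like this (Python sketch; the message dict shape follows the OpenAI-style API the project talks to, and the function name is mine):

```python
def build_context(history, master_prompt, window=10):
    """Every request starts with the master prompt as a system message,
    followed by only the last `window` chat messages (10 by default)."""
    recent = history[-window:]  # slicing handles histories shorter than window
    return [{"role": "system", "content": master_prompt}] + recent
```

Trimming to a fixed window keeps the prompt size (and response latency) roughly constant as the chat grows, at the cost of the bot forgetting anything older than the window.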

I also tuned the master prompt. It's available in the source code and isn't defined in the cfg file by default. The new master prompt makes the LLM behave more consistently, stay more in character, and include more personal details.

Attachment
housey2k

Added configuration file

I removed the hardcoded values and created a really simple configuration file structure. Now you can change the llama directory, executable name, launch args, server window visibility, master prompt, and memory files.
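A config covering those settings could look something like this (the key names and values here are illustrative guesses, not the project's actual file):

```json
{
  "llama_dir": "C:\\llama.cpp",
  "executable": "llama-server.exe",
  "launch_args": "-m model.gguf --port 8080",
  "show_server_window": false,
  "master_prompt": "",
  "memory_file": "memory.txt"
}
```

An empty `master_prompt` would fall back to the default baked into the source, matching the devlog's note that it isn't defined in the cfg by default.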

Attachment
housey2k

Talking

Added code so the TalkForm can send text to be shown on the Bitty window: it writes to the label next to him and swaps the image for one with the mouth open.

Engagement timer

I also added a timer that fires once every 5 minutes to draw the user's attention. It sends a system message that makes Bitty ask a question based on previous chats; if there is no topic, he can make something up.
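A re-arming timer like that can be sketched as follows (Python sketch of the C# timer; the prompt wording is paraphrased from the devlog, not the project's actual string):

```python
import threading

def start_engagement_timer(send_system_message, interval=300):
    """Every `interval` seconds (5 minutes by default), send a system
    message nudging the bot to re-engage the user, then re-arm."""
    def tick():
        send_system_message(
            "Ask the user a question based on previous chats; "
            "if there is no topic, make something up."
        )
        start_engagement_timer(send_system_message, interval)  # re-arm
    timer = threading.Timer(interval, tick)
    timer.daemon = True  # don't keep the process alive just for the timer
    timer.start()
    return timer
```

Routing the nudge through a system message (rather than faking a user message) keeps it out of the visible chat history while still steering the model.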

Attachment
Attachment
housey2k

Rewrote back-end code - I was trying to implement features like embedding text files and moving hardcoded values into a JSON configuration file, but the code quickly became spaghetti and eventually stopped working. So I deleted most of it and spent the last 5 hours rewriting it by hand with less AI help. Now I have much better knowledge of the structure, the code is more organized, and it's easier to implement what I wanted. LlamaClient.cs turned out so good, in my opinion, that I may reuse it in my next projects: all you need to do is call StartServer, and then SendUserMessage/SendSystemMessage handle all the server communication and memory through the SendMessage method.
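The public surface described there can be sketched like this (a Python skeleton mirroring the C# class; the method names follow the devlog, the bodies are placeholders):

```python
class LlamaClient:
    """Sketch of the LlamaClient.cs shape: StartServer boots the backend,
    and both public send methods funnel into one shared send_message."""

    def start_server(self):
        self.history = []    # chat memory lives inside the client
        self.running = True  # stand-in for launching llama-server.exe

    def send_user_message(self, text):
        return self.send_message("user", text)

    def send_system_message(self, text):
        return self.send_message("system", text)

    def send_message(self, role, text):
        # real version: build the context window and POST it to the server
        self.history.append((role, text))
        reply = f"(reply to {role}: {text})"  # placeholder response
        self.history.append(("assistant", reply))
        return reply
```

Funneling both entry points through one method is what makes the class reusable: memory handling and server communication live in exactly one place.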
My friend sent me the finished art yesterday and he looks so cute :33

Attachment
housey2k

Implemented LLM. I implemented the actual AI code. For this project I am using llama.cpp: by running the executable “llama-server.exe”, I can interact with the LLM through an OpenAI-compatible API.
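Talking to that API means POSTing a JSON body to the server's chat endpoint. A sketch of building such a request (the port and model name are assumptions, they depend on how the server is launched):

```python
import json

def chat_request(messages, model="local"):
    """Build the URL and JSON body for llama-server's OpenAI-compatible
    /v1/chat/completions endpoint. messages: list of {"role", "content"}."""
    url = "http://localhost:8080/v1/chat/completions"
    body = {"model": model, "messages": messages}
    return url, json.dumps(body)
```

The actual send is then an ordinary HTTP POST of `body` to `url` with a `Content-Type: application/json` header, and the reply text sits in the response's `choices[0].message.content`.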
I also implemented code that closes the server executable when the program closes, because during early development three debug tests ate up all my RAM as the program kept opening new servers.
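That cleanup can be sketched as tying the child process's lifetime to ours (Python sketch using `atexit`; the C# version would hook the form's close event instead):

```python
import atexit
import subprocess

def launch_server(exe_path, args=()):
    """Start the LLM server and register a shutdown hook so repeated
    debug runs don't leave orphan servers eating RAM."""
    proc = subprocess.Popen([exe_path, *args])
    atexit.register(proc.terminate)  # kill the child when we exit normally
    return proc
```

Note that `atexit` hooks don't run if the process is hard-killed, so a crash can still leave a server behind; the hook covers the common case of normal exits and stopped debug sessions.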
Then I implemented code to write the chats to a .txt file when the program closes and read them back when it opens.
And I learned that LLMs are stateless, so for every message I need to resend the recent messages for context (I implemented a loop that sends the last 10 messages), and I need to send the master prompt again every time. Next I'll be working on a set of commands to read files from the computer, plus optimization, because everything is messy and hardcoded; I want to implement a configuration file.

Attachment
housey2k

Wrote chat form. It has a textbox for you to type your message, and it displays the chat history.

Attachment
housey2k

I wrote the initial interface for the program. This will be the mascot that sits in a corner of the screen and can be moved around. I'm already forcing a friend to design the character since I suck at drawing; for now I put a moth as a placeholder (Bitty will be a furry moth thing, you'll see it later).
