Since the day I started this project I have been trying to make it as easy to set up and use as possible, and I believe with the PyPI package and the VS Code extension I have really achieved this!
Finally, I managed to refactor the code to be clean and DRY enough, and created a Python package.
Now installation is as easy as installing any Python package!
Things done:
- ~/spiberry/); now, with the config file, it is much more flexible
- spiberry install-service if you are using the CLI, or it just asks you if you are using the old version
- __main__.py was trying to import the main module from the wrong location. Fixed that.
- set-pins CLI argument (and other args necessary for it); it wasn't reasonable to remove them in the first place
- spiberry directory

Devlog note: there are no visible changes for an image.
For some reason, I was depending entirely on CLI arguments for every kind of configuration. I realized that was not easy to maintain, so I migrated to an INI config file. It lives in the home directory and contains all the configuration Spiberry needs.
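The INI approach can be sketched with the standard library's configparser. This is a minimal example, not Spiberry's actual code: the file name, section, and keys below are assumptions for illustration.

```python
import configparser
from pathlib import Path

# Hypothetical path and keys; the real Spiberry config layout may differ.
CONFIG_PATH = Path.home() / ".spiberry.ini"

DEFAULTS = {"serial": {"port": "/dev/ttyAMA0", "baudrate": "115200"}}

def load_config(path: Path = CONFIG_PATH) -> configparser.ConfigParser:
    """Load the INI config; missing keys fall back to built-in defaults."""
    config = configparser.ConfigParser()
    config.read_dict(DEFAULTS)  # defaults first
    config.read(path)           # the user's file overrides them
    return config

def save_config(config: configparser.ConfigParser, path: Path = CONFIG_PATH) -> None:
    """Write the current configuration back to disk."""
    with open(path, "w") as f:
        config.write(f)
```

The nice part of this pattern is that `read()` silently skips a missing file, so first runs just get the defaults.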
Extension update and bugfix.
Added a command to insert Raspberry Pi utility classes at the cursor location.
Also refactored raspi-util-class.py a bit.
Correct video for this devlog: https://tinyurl.com/52bc7jun
I accidentally uploaded one of the previous videos
I used a bit of AI to refactor the function parsing logic.
I built this because I wanted to connect a Raspberry Pi to a LEGO SPIKE and there was nothing that let me do that (there is a Raspberry Pi HAT, but it doesn't have enough ports). The most challenging part of making these updates was the computer vision; I had to remember cv2!
Introduced a VS Code extension to more easily use Spiberry.
IGNORE_STARTER_PIN=1 environment variable
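Only the variable name above comes from the devlog; how Spiberry actually checks it is not shown. A minimal sketch of how such a flag is typically read:

```python
import os

def starter_pin_ignored() -> bool:
    """True when the IGNORE_STARTER_PIN=1 environment variable is set.

    Only the exact value "1" enables the flag, so IGNORE_STARTER_PIN=0
    behaves the same as leaving it unset. (Hypothetical helper; the real
    check in Spiberry may differ.)
    """
    return os.environ.get("IGNORE_STARTER_PIN") == "1"
```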
Added remote drive capabilities: now you can control the robot from the desktop app or via a game controller, and also get a record of the actions the robot took.
- RemoteDriveController, which can run in two modes: socket (desktop app) or controller.
- WIP: still need to compile the received “code” into actual code.
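Only the class name, the two mode names, and the action record come from the devlog; the structure below is an assumed sketch of how such a dual-mode controller could be laid out.

```python
class RemoteDriveController:
    """Sketch of a dual-mode remote drive controller (assumed internals)."""

    MODES = ("socket", "controller")

    def __init__(self, mode: str):
        if mode not in self.MODES:
            raise ValueError(f"mode must be one of {self.MODES}, got {mode!r}")
        self.mode = mode
        self.actions = []  # record of the actions the robot took

    def handle(self, action: str) -> None:
        """Record an incoming action regardless of its source."""
        self.actions.append(action)

    def run(self) -> None:
        """Dispatch to the input source chosen at construction time."""
        if self.mode == "socket":
            self._run_socket_server()      # commands from the desktop app
        else:
            self._poll_game_controller()   # commands from a game controller

    def _run_socket_server(self):
        raise NotImplementedError  # would accept commands over a socket

    def _poll_game_controller(self):
        raise NotImplementedError  # would read a game controller
```

Keeping a single `handle()` path means the action log looks the same no matter which input mode produced it.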
Devlog note: this was my first time video editing; apologies for the quality.
I learnt that you should always test before shipping. Turns out there were some issues with the previous shipment. This shipment,
This devlog mostly consists of two 11:59 PM sleep-deprived coding sessions, so I don't remember much other than fixing some issues with the vision module. This is an AI review of the last changes:
- merge_close_contours function with the help of Claude
- Camera module compatible with both rpicam-apps and picamera2
- ContourDetector class
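The entry names a merge_close_contours function. The real one presumably operates on cv2 contours; here is a pure-Python sketch of the same idea, greedily merging contours whose bounding boxes lie within a gap threshold (a simplification: a single greedy pass, not a full transitive merge).

```python
def bbox(contour):
    """Axis-aligned bounding box (min_x, min_y, max_x, max_y) of a point list."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_close(a, b, gap):
    """True if two bounding boxes are within `gap` pixels of each other."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return not (ax1 + gap < bx0 or bx1 + gap < ax0 or
                ay1 + gap < by0 or by1 + gap < ay0)

def merge_close_contours(contours, gap=10):
    """Greedily merge contours whose bounding boxes are within `gap` pixels."""
    merged = []
    for contour in contours:
        box = bbox(contour)
        for group in merged:
            if boxes_close(box, bbox(group), gap):
                group.extend(contour)  # fold the points into an existing group
                break
        else:
            merged.append(list(contour))
    return merged
```

With cv2, the same logic would use `cv2.boundingRect` per contour and `np.vstack` to combine point arrays.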
I realized that with the last commit I introduced a critical bug: when tools are called, the response doesn't fit the response schema and errors out. I made a temporary fix for now: it tries JSON, XML, and Markdown parsing, and if they all fail, it calls the LLM again with formatting instructions and no tools. I'm sure this isn't very efficient, but it's the best I have right now.
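The fallback chain described above can be sketched like this. The function name is made up, and the "re-call the LLM without tools" step is represented by returning None; only the try-JSON-then-XML-then-Markdown order comes from the devlog.

```python
import json
import re
import xml.etree.ElementTree as ET

def try_parse_response(text: str):
    """Try JSON, then XML, then a fenced Markdown block.

    Returns the parsed payload, or None when every parser fails
    (at which point the real code re-calls the LLM with formatting
    instructions and no tools).
    """
    # 1. Plain JSON.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # 2. XML: flatten one level of children into a dict.
    try:
        root = ET.fromstring(text)
        return {child.tag: child.text for child in root}
    except ET.ParseError:
        pass
    # 3. A ```json fenced block buried in Markdown prose.
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(1))
        except json.JSONDecodeError:
            pass
    return None
```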
I started working on this project again. I am planning to add object detection and maybe some other CV capabilities.
Since the day I started this project I have been trying to make it portable and easy to set up. I changed the project structure a bit, and now I will be producing a .pyz file that will be the only thing necessary to run this project!
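Building a .pyz is handled by the standard library's zipapp module. A minimal, self-contained demo with a throwaway app directory (the directory names here are just for the demo):

```python
import pathlib
import tempfile
import zipapp

# Build a throwaway "app" package with an entry point, then pack it.
src = pathlib.Path(tempfile.mkdtemp()) / "app"
src.mkdir()
(src / "__main__.py").write_text("print('hello from the .pyz')\n")

target = src.parent / "app.pyz"
# The interpreter line makes the archive directly executable on Unix.
zipapp.create_archive(src, target, interpreter="/usr/bin/env python3")
# The result is a single file runnable as `python app.pyz`.
```

One caveat worth knowing: zipapp only bundles your own source tree, so pure-Python dependencies must be vendored into the directory first, and C-extension dependencies can't ship this way at all.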
I contributed to lapse because I thought it was something truly necessary for Hack Club, and I could help.
Working with lapse made me more confident in coding, I think, and more familiar with PR review processes.
The hardest part was definitely getting to know what everything does and generally understanding the codebase.
I made 12 PRs to Lapse. Currently, 6 of them are merged, 1 is closed and 5 are waiting for review.
Short summary of all of them (as much as I remember):
This version adds support for edit requests and better prompting style options.
I started working on a feature that lets the user request AI edits on the enhanced prompt.
Why is frontend so hard! 😭
I forgot to devlog and now I don’t remember what I exactly had done! If I remember correctly,
Updated the prompt. It was fully XML; I learned that this is not best practice (I hope that information is correct). So now it is a hybrid: Markdown for the guidelines and XML for the variables.
Also improved the prompt style section a little.
Added prompt style options: you can now choose your prompt's format (XML, Markdown, long, short, few-shot, etc.).
Divided prompt.py into smaller functions for readability.
And I unfortunately (or maybe fortunately) learned that full-XML prompts are counterproductive, which means I need to refactor the prompt agaaaaiiinnnn!
I built this to more easily and effectively interact with AI. I believe I learned a lot about prompt engineering.
Added user messages to display when HCAI is down, when the fallback is being used, and when the fallback is also down.
Implemented a better fallback mechanism
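A sketch of how those two pieces fit together: call the primary (HCAI, per the devlog) first, then a fallback, surfacing a user-visible message at each degradation step. The function signature, provider callables, and message wording are assumptions.

```python
def complete_with_fallback(prompt, primary, fallback, notify):
    """Try the primary model, then the fallback, notifying the user
    at each degradation step. Returns None when both are down.

    `primary` and `fallback` are callables taking the prompt;
    `notify` delivers a user-facing status message.
    """
    try:
        return primary(prompt)
    except Exception:
        notify("HCAI is down, using the fallback model.")
    try:
        return fallback(prompt)
    except Exception:
        notify("The fallback is also down. Please try again later.")
        return None
```

Passing `notify` in as a callable keeps the fallback logic testable and independent of whatever channel (toast, websocket message) actually shows the text.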
Updated the README
(I know with all the emojis it looks like AI generated, but I like it this way)
Why does everything keep breaking! Ahhaaaa…
Today I tried to use it for something and the backend gave an error: JSON parsing.
Why?! It was working yesterday. I checked the prompt; maybe there was something random making the LLM go crazy. I couldn't figure it out.
Finally I added json-repair and it works fine for now.
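The json-repair package handles this for real; the homemade miniature below just illustrates the kind of fixes such a repairer performs (extracting the outermost object, stripping trailing commas). It is not the library's implementation.

```python
import json
import re

def repair_and_load(text: str):
    """Tiny stand-in for a JSON repairer (illustration only).

    Extracts the outermost {...} and strips trailing commas before
    parsing. The real json-repair library handles far more failure
    modes (unquoted keys, truncation, single quotes, ...).
    """
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found")
    candidate = text[start:end + 1]
    # Drop commas that sit right before a closing brace/bracket.
    candidate = re.sub(r",\s*([}\]])", r"\1", candidate)
    return json.loads(candidate)
```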
And now the question Cancel button actually does something.
Please DON’T break again.
Made the project ready for production.
Deployed the frontend to Vercel and the backend to Railway.
Well, turns out you shouldn’t use CoT prompting with reasoning models.
Learning this, I had to go over the prompt again.
I learned that gemini-3-flash works best with XML, so I switched to XML.
Overall, I spent (I hope not wasted) a lot of time on the prompt again.
Updated the frontend: prompts are auto-saved, stored in local storage, and displayed neatly in the sidebar.
Now questions are received, sent, and answered as arrays instead of plain strings.
Fixed bugs with web sockets.
Added rate limiting.
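The devlog doesn't say which rate-limiting scheme was used; a token bucket is a common choice, sketched here with assumed parameters (the real limiter is presumably middleware, but the accounting is the same).

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: refills `rate` tokens per
    second, allowing bursts up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; False means 'rate limited'."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In a Django setup this would typically live in middleware or a per-connection check on the websocket consumer, keyed by user or IP.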
Made significant improvements on the prompt: now it goes through a prompted chain-of-thought process.
Moved everything about the prompts to a new file.
(I want to apologize to HCAI for the immense costs to come.)
Removed Celery and Redis so that deployment would be cheaper.
Realized “the prompt” needs a lot of improvement.
Created a bat script to start the necessary services.
Made minor improvements to the prompt.
Also attempted to package the app in a .pyz file but failed :(
Implemented channels and web sockets to allow the LLM to ask the user questions during prompt enhancement.
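The real code presumably uses Django Channels; this is a transport-agnostic asyncio sketch of just the question/answer round trip (the enhancement flow asks a clarifying question, waits for the user's reply, then continues). Queues stand in for the websocket, and all names and the question text are made up.

```python
import asyncio

async def enhance(prompt: str,
                  to_user: asyncio.Queue,
                  from_user: asyncio.Queue) -> str:
    """Pretend enhancement flow: ask the user a clarifying question,
    block until the answer arrives over the 'socket', then finish."""
    await to_user.put({"type": "question", "text": "What tone do you want?"})
    answer = await from_user.get()
    return f"{prompt} (tone: {answer})"

async def demo() -> str:
    # The two queues model the two directions of a websocket.
    to_user, from_user = asyncio.Queue(), asyncio.Queue()
    task = asyncio.create_task(enhance("write a haiku", to_user, from_user))
    question = await to_user.get()   # frontend receives the question
    assert question["type"] == "question"
    await from_user.put("playful")   # user replies
    return await task

result = asyncio.run(demo())
```

The key property this models is that the LLM-side coroutine stays suspended on `from_user.get()` until the user answers, which is exactly what the websocket makes possible over HTTP.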
Migrated to Django + Vite (frontend looks almost exactly the same)
Fixed the bug with the additional web search query.
The first primitive version.
Can save and load prompts.
Streamlit interface.
Has web search capabilities, though they're not sufficient yet.