Shipped this project!
I built this to keep my code DRY. For some reason I really hate repeating code; it feels inefficient.
I think after improving the normalization and adding multi-file support, this project will be ready to ship.
But there are only 17 hours left.
I implemented the embedding creation, storage, caching, and comparison.
Embeddings are calculated thanks to HCAI and then stored locally. When something changes or a new function is added, they are recalculated (of course, only the ones that need it).
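For context, the cache-and-compare flow looks roughly like this. This is just a sketch, not the extension's actual code: the get_embedding callable stands in for the HCAI call, and the cache file name is made up.

```python
import hashlib
import json
import math
from pathlib import Path

CACHE = Path(".dry-cache.json")  # hypothetical local cache file

def cached_embedding(source: str, get_embedding) -> list:
    """Return the embedding for a function's source, recalculating only if it changed."""
    key = hashlib.sha256(source.encode()).hexdigest()
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    if key not in cache:
        cache[key] = get_embedding(source)  # e.g. the call to HCAI
        CACHE.write_text(json.dumps(cache))
    return cache[key]

def cosine_similarity(a, b) -> float:
    """Similarity between two embeddings; values near 1 mean near-duplicate code."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```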
The problem I have is that I was using a hard-coded key for development. I'm probably going to need to make a simple proxy backend too.
In order to compare functions and classes, I first need to identify them.
Getting this part to work was much harder than I expected. I first tried doing it with VS Code's language server, but that didn't go as I expected. So, I switched to tree-sitter. To be honest, I relied a bit more on AI than usual for this part.
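The project uses tree-sitter so it can handle multiple languages, but the identification step itself can be illustrated with Python's standard-library ast module. This is an analogous sketch, not the extension's code:

```python
import ast

def extract_definitions(source: str):
    """Yield (name, kind, source_text) for every function/class in a Python file."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            kind = "class" if isinstance(node, ast.ClassDef) else "function"
            yield node.name, kind, ast.get_source_segment(source, node)
```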
I always try to accomplish something, to have something done, before posting a devlog. But this time I will be posting my failure.
I thought I was done with this project but…
…
I was helping a friend set up the project on his device; he had tried to pick and choose pieces of the code and reconstruct the project. While talking, I realized we didn't have to send the code from the Raspberry Pi every time: the code could just stay on the SPIKE and communicate with the Raspberry Pi over serial normally. So, I implemented that.
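Roughly, the Pi side now only exchanges short messages with the SPIKE instead of shipping the whole program each time. A minimal sketch with pyserial, where the port, baud rate, and command format are assumptions:

```python
import serial  # pip install pyserial

# The program stays on the SPIKE; the Pi just sends commands and reads replies.
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as link:  # assumed port/baud
    link.write(b"drive 50 50\n")              # hypothetical command format
    reply = link.readline().decode().strip()  # wait for the SPIKE's response
    print("SPIKE replied:", reply)
```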
Devlog note: there are no visible changes; hence there is no image
I built this to stay awake and concentrated. It tracks your Eye Aspect Ratio (EAR). If it is too low, which means you are not really concentrated, it plays a 50 Hz tone for 10 seconds. Believe me, after listening to that you WILL BE AWAKE.
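For reference, the Eye Aspect Ratio is the standard Soukupová & Čech formula over six eye landmarks. A minimal sketch, assuming you already have the six (x, y) landmarks from a face landmark detector:

```python
import math

def eye_aspect_ratio(p):
    """p: six (x, y) eye landmarks ordered p1..p6 around the eye."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # two vertical distances over the horizontal distance
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

# If the EAR stays below a threshold (around 0.2) for a while, the eyes are
# closing, and that is when the tone gets triggered.
```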
Since the day I started this project, I have been trying to make it as easy to set up and use as possible. And I believe with the PyPI package and the VS Code extension I really achieved this!
Finally, I managed to refactor the code to be clean and DRY enough and created a Python package.
Now installation is as easy as installing any Python package!
Things done:
- ~/spiberry/), now, with the config file it is much more flexible
- spiberry install-service if you are using the cli, or it just asks you if you are using the old version.
- __main__.py was trying to import the main module from the wrong location. Fixed that.
- set-pins cli argument (and other args necessary for it); it wasn't reasonable to remove them in the first place
- spiberry directory
Devlog note: there are no visible changes, so there is no image
For some reason, I was completely depending on CLI arguments for configuration of all kinds. I realized that was not easy to maintain. So, I migrated to an INI config file. It lives in the home directory and contains all the configuration Spiberry needs.
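A minimal sketch of what reading that config could look like with the standard-library configparser; the filename, section, and keys here are made up, not Spiberry's actual ones:

```python
import configparser
from pathlib import Path

config = configparser.ConfigParser()
config.read(Path.home() / "spiberry.ini")  # hypothetical filename in the home directory

# Hypothetical keys, with the old CLI-era defaults as fallbacks
port = config.get("serial", "port", fallback="/dev/ttyACM0")
baud = config.getint("serial", "baudrate", fallback=115200)
```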
extension update and bugfix
Added a command to insert Raspberry Pi utility classes at the cursor location.
Also refactored raspi-util-class.py a bit.
Correct video for this devlog: https://tinyurl.com/52bc7jun
I accidentally uploaded one of the previous videos
I used a bit of AI to refactor the function parsing logic.
I built this because I wanted to connect a Raspberry Pi to a LEGO SPIKE and there was nothing I could do that with (there is a Raspberry Pi HAT, but it doesn't have enough ports). The most challenging part of making these updates was the computer vision part; I had to remember cv2!
Introduced a VS Code extension to more easily use Spiberry.
- IGNORE_STARTER_PIN=1 environment variable
Added remote drive capabilities: now you can control the robot from the desktop app or via a game controller, and also get a record of the actions the robot took.
- RemoteDriveController which can run in two modes: socket (desktop app) or controller.
- WIP: Still need to compile the received “code” into actual code
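A rough sketch of the two-mode idea; the method names and internals are hypothetical, only the socket/controller split and the action log come from the actual project:

```python
class RemoteDriveController:
    def __init__(self, mode: str):
        if mode not in ("socket", "controller"):
            raise ValueError("mode must be 'socket' or 'controller'")
        self.mode = mode
        self.actions = []  # record of every action the robot took

    def run(self):
        # Dispatch to the right input source
        handler = self._socket_loop if self.mode == "socket" else self._controller_loop
        handler()

    def _socket_loop(self):
        ...  # receive commands from the desktop app over a socket

    def _controller_loop(self):
        ...  # poll a game controller for input

    def execute(self, action: str):
        self.actions.append(action)  # log it so the run can be replayed later
```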
Devlog note: First time video editing, apologies for the quality.
I learnt that you should always test before shipping. Turns out there were some issues with the previous shipment. This shipment,
This devlog mostly consists of two 11:59 PM sleep-deprived coding sessions, so I don't remember much other than fixing some issues with the vision module. This is an AI review of the last changes:
- merge_close_contours function with the help of Claude
- Camera module compatible with both rpicam-apps and picamera2
- ContourDetector class
I realized that with the last commit I introduced a critical bug: when tools are called, the response doesn't fit the response schema and errors out. I made a temporary fix for now: it tries JSON, XML, and Markdown parsing, and if they all fail, it calls the LLM again with formatting but no tools. I am sure this is not very efficient, but it is the best I have right now.
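The temporary fix is roughly this kind of cascade. A sketch, not the real code; the re-call with formatting and no tools happens in the caller when this returns None:

```python
import json
import re
import xml.etree.ElementTree as ET

def parse_llm_response(raw: str):
    """Try JSON, then XML, then a fenced JSON block; return None if all fail."""
    try:
        return json.loads(raw)                      # 1. plain JSON
    except json.JSONDecodeError:
        pass
    try:
        root = ET.fromstring(raw)                   # 2. XML
        return {child.tag: child.text for child in root}
    except ET.ParseError:
        pass
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", raw, re.DOTALL)
    if fenced:                                      # 3. JSON inside a Markdown fence
        try:
            return json.loads(fenced.group(1))
        except json.JSONDecodeError:
            pass
    return None  # caller re-prompts the LLM with strict formatting and no tools
```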
I started working on this project again. I am planning to add object detection and maybe some other cv capabilities.
Since the day I started this project, I have been trying to make it portable and easy to set up. I changed the project structure a bit, and now I will be producing a .pyz file that will be the only thing necessary for running this project!
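The .pyz idea in a nutshell, using the standard-library zipapp module. The directory name is an assumption, and the package's own __main__.py would act as the entry point:

```python
import zipapp

zipapp.create_archive(
    "spiberry/",                         # assumed source package directory
    target="spiberry.pyz",               # the single file users need
    interpreter="/usr/bin/env python3",  # makes the archive directly executable on Linux
    compressed=True,
)
# Then the whole project runs with just: python spiberry.pyz
```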
I contributed to lapse because I thought it was something truly necessary for Hack Club, and I could help.
Working with lapse made me more confident in coding, and I think I got more familiar with PR review processes.
The hardest part was definitely getting to know what everything does and generally understanding the codebase.
I made 12 PRs to Lapse. Currently, 6 of them are merged, 1 is closed and 5 are waiting for review.
Short summary of all of them (as much as I remember):
This version adds support for edit requests and better prompting style options.
I started working on a feature that would allow the user to request AI edits on the enhanced prompt.
Why is frontend so hard! 😭
I forgot to devlog and now I don't remember exactly what I had done! If I remember correctly,
Updated the prompt. It was fully XML; I learnt that this was not the best practice (I hope this information was correct). So now it is a hybrid: Markdown for guidelines and XML for variables.
Also, improved the prompt style section a little bit.
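To illustrate the hybrid idea (this is just a toy example, not the real prompt): Markdown carries the guidelines, and XML tags are used only for the injected variables.

```python
user_prompt = "write a blog post about my robot"
style = "short"

prompt = f"""## Guidelines
- Rewrite the user's prompt to be clear and specific.
- Keep the user's intent; do not invent new requirements.

<user_prompt>{user_prompt}</user_prompt>
<style>{style}</style>
"""
```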
Added prompt style options: now you can choose the format of your prompt: XML, Markdown, long, short, few-shot, etc.
Divided the prompt.py file into smaller functions for readability.
And I unfortunately (or maybe fortunately) learned that full XML prompts are counterproductive, which means I need to refactor the prompt agaaaaiiinnnn!
I built this to more easily and effectively interact with AI. I believe I learned a lot about prompt engineering.
Added user messages to display when HCAI is down, when the fallback is being used, and when the fallback is also down.
Implemented a better fallback mechanism
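Roughly, the fallback works like this. A sketch with hypothetical names: primary and fallback stand in for the HCAI call and the backup model call.

```python
def enhance(prompt, primary, fallback):
    """primary/fallback are callables taking a prompt and returning text.
    Returns (result, user_message); user_message is None when everything is fine."""
    try:
        return primary(prompt), None
    except Exception:
        pass
    try:
        return fallback(prompt), "HCAI is down, using the fallback model."
    except Exception:
        return None, "Both HCAI and the fallback are down. Please try again later."
```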
Updated the README
(I know with all the emojis it looks like AI generated, but I like it this way)
Why does everything keep breaking?! Ahhaaaa…
Today I tried to use it for something and the backend gave an error: JSON parsing.
Why?!! It was working yesterday. I checked the prompt, thinking maybe there was something random making the LLM go crazy. Couldn't figure it out.
Finally added json-repair and it works fine for now.
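For reference, json-repair is used roughly like this (the broken string below is just an example):

```python
import json
from json_repair import repair_json  # pip install json-repair

raw = '{"questions": ["What framework?", "What tone?",]}'  # trailing comma breaks json.loads
fixed = repair_json(raw)   # returns a repaired JSON string
data = json.loads(fixed)
```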
And now the Cancel button for questions actually does something.
Please DON’T break again.
Made the project ready for production.
Deployed the frontend to Vercel and backend to Railway
Well, turns out you shouldn’t use CoT prompting with reasoning models.
Learning this, I had to go over the prompt again.
I learned that gemini-3-flash works best with XML, so I switched to XML.
Overall spent (I hope not wasted) a lot of time on the prompt again.
Updated the frontend: prompts are auto-saved, stored in local storage, and displayed neatly on the sidebar.
Now questions are received, sent, and answered as arrays instead of plain strings.
Fix bugs with web sockets
Add rate limiting
Made significant improvements on the prompt: now it goes through a prompted chain-of-thought process.
Moved everything about the prompts to a new file.
(I want to apologize to HCAI for the immense costs to come)
Removed Celery and Redis so that deployment is cheaper.
Realized “the prompt” needs a lot of improvement.
Created a bat script to start the necessary services.
Made minor improvements to the prompt.
Also attempted to package the app in a .pyz file but failed :(
Implemented channels and web sockets to allow the LLM to ask the user questions during prompt enhancement.
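The shape of it with Django Channels, roughly. The event names and payloads here are assumptions, not the project's actual protocol:

```python
from channels.generic.websocket import AsyncJsonWebsocketConsumer

class EnhanceConsumer(AsyncJsonWebsocketConsumer):
    async def connect(self):
        await self.accept()

    async def receive_json(self, content, **kwargs):
        if content.get("type") == "enhance":
            # The LLM can ask clarifying questions before enhancing;
            # they go back to the browser over the same socket.
            await self.send_json({
                "type": "questions",
                "questions": ["What is the target audience?",
                              "Should the tone be formal or casual?"],
            })
        elif content.get("type") == "answers":
            # content["answers"] lines up with the questions array;
            # here it would be fed back into the enhancement chain.
            await self.send_json({"type": "result", "prompt": "..."})
```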
Migrated to Django + Vite (frontend looks almost exactly the same)
Fixed the bug with the additional web search query.
The first primitive version.
Can save and load prompts.
Streamlit interface.
Has web search capabilities, though not sufficient yet.