Activity

Efe

Refactor distance calculations

Introduced 3D distance calculations to mitigate the effects of head rotation. Now turning your head away no longer causes false positives.

Settings

Added a simple settings menu and SettingsItem component

Efe

I created a simple setup flow to adjust the EAR values for each person. It is not perfect yet, but it works.
Also, a 50 Hz tone is now played when drowsiness is detected, to wake the person up.

Efe

UI update

Added a simple modal for displaying information.

Other

I actually tried to make a setup process to personalize the EAR thresholds, but I gave up a bit too fast.

Efe

Initial setup

I implemented a basic face mesh with MediaPipe in React.
Currently it only calculates your eye aspect ratio (EAR), averages it over 5 seconds, and alerts if it is lower than 0.5 (a rather arbitrary value for now; I will make it adapt to the user).
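The actual implementation is React + MediaPipe, but the logic is easy to sketch in Python. The window length and threshold below are the ones from this post; the class and point names (p1…p6, following the standard EAR formula) are illustrative, not the project's code.

```python
from collections import deque
import math
import time

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Classic EAR: two vertical eyelid distances over the horizontal one.
    Roughly 0.3 for an open eye, approaching 0 as the eye closes."""
    return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4))

class DrowsinessMonitor:
    """Averages EAR samples over a sliding time window and flags low values."""
    def __init__(self, window_s=5.0, threshold=0.5):
        self.window_s = window_s
        self.threshold = threshold
        self.samples = deque()  # (timestamp, ear) pairs

    def update(self, ear, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, ear))
        # drop samples that have fallen out of the window
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        avg = sum(e for _, e in self.samples) / len(self.samples)
        return avg < self.threshold  # True → raise the drowsiness alert
```

Feeding each frame's EAR into `update()` gives the averaged-threshold behaviour described above.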

Efe

Shipped this project!

Hours: 9.41
Cookies: 🍪 222
Multiplier: 23.56 cookies/hr

Since the day I started this project I have been trying to make it as easy to set up and use as possible. And I believe with the PyPI package and the VS Code extension I really achieved this!

Efe

RGBLED wrapper class

Added a wrapper class for RGBLED so the RGB LED can be disabled from the config file. If you set rgb_led_enabled to false in the GPIO section of the config, the RGB LED won’t be used.
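A minimal sketch of how such a config-gated wrapper might look. Only `RGBLED`, the GPIO section, and `rgb_led_enabled` come from the post; the class name, injected factory, and helper are illustrative (injecting the factory keeps it testable without hardware):

```python
import configparser

class OptionalRGBLED:
    """No-ops every call when the LED is disabled in config.

    `led_factory` builds the real LED object (e.g. gpiozero's RGBLED);
    it is only called when the LED is enabled.
    """
    def __init__(self, enabled, led_factory):
        self._led = led_factory() if enabled else None

    @property
    def enabled(self):
        return self._led is not None

    def set_color(self, r, g, b):
        if self._led is not None:
            self._led.color = (r, g, b)

    def off(self):
        if self._led is not None:
            self._led.off()

def rgb_enabled(config_text):
    """Read the flag the post describes: [GPIO] rgb_led_enabled."""
    config = configparser.ConfigParser()
    config.read_string(config_text)
    return config.getboolean("GPIO", "rgb_led_enabled", fallback=True)
```

With the flag off, every call on the wrapper is silently ignored, so the rest of the code never has to check it.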

Efe

PyPI package

Finally, I managed to refactor the code to be clean and DRY enough and created a Python package.
Now installation is as easy as installing any Python package!
Things done:

  • Refactored the virtual-environment and code-path handling. Everything used to live in a fixed location (~/spiberry/); with the config file it is now much more flexible.
  • The main stuff: created pyproject.toml and added CLI commands with cli.py.
  • Moved service installation into a standalone step. You can run spiberry install-service if you are using the CLI, or it simply asks you if you are on the old version.
  • Fixed some bugs with the config file.
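For illustration, the CLI part of such a packaging setup might look roughly like this pyproject.toml fragment. Only the package name and the `spiberry install-service` subcommand come from the post; the entry-point path `spiberry.cli:main`, the version, and the build backend are assumptions:

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "spiberry"
version = "0.1.0"

[project.scripts]
# exposes a `spiberry` command (e.g. `spiberry install-service`) on the PATH
spiberry = "spiberry.cli:main"
```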
Efe

Core engine changes

  • __main__.py was trying to import the main module from the wrong location. Fixed that.
  • Brought back the set-pins CLI argument (and the other args necessary for it); it wasn’t reasonable to remove them in the first place.
  • Moved the code and raspi_functions into the parent spiberry directory.

Extension changes

  • Improved code upload path handling: configuration file from the raspberry pi is now used to determine the correct path

Devlog note: there are no visible changes for an image

Efe

Better Configuration

For some reason, I was completely depending on CLI arguments for configuration of all kinds. I realized that was not easy to maintain, so I migrated to an INI config file. It lives in the home directory and contains all the configuration Spiberry needs.
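A sketch of what loading such a home-directory INI file could look like with Python's configparser, using baked-in defaults that the user's file overrides. The filename, section names, and keys here are hypothetical (except rgb_led_enabled, which appears elsewhere in this log):

```python
import configparser
from pathlib import Path

CONFIG_PATH = Path.home() / ".spiberry.ini"  # hypothetical filename

DEFAULTS = """\
[paths]
venv_dir = ~/spiberry/venv
code_dir = ~/spiberry/code

[GPIO]
rgb_led_enabled = true
"""

def load_config(path=CONFIG_PATH):
    """Defaults are baked in; the user's file, if present, overrides them."""
    config = configparser.ConfigParser()
    config.read_string(DEFAULTS)
    config.read(path)  # a missing file is silently ignored
    return config
```

This pattern means the code always finds every key, while users only write the settings they want to change.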

Efe

Wrapping up

Just made the final touches to the READMEs, updated the GitHub release, and published a YouTube video for the demo!

Other Changes:

  • Added build-pyz.bat for easier deployment
  • Added logo to VS Code extension
Efe

VS Code extension update and bugfix

  • Added a service section into the sidebar (and commands)
    • Start/stop the service
    • Enable/disable the service
    • Follow the service logs (journal)
  • Fixed the utility-class insertion. For some reason the markers didn’t work when they weren’t at the top of the file, so I moved them there!
Efe

VS Code extension update

  • Added a sidebar
  • Created new icon for the sidebar
  • Separated the SSH logic
Efe

Extension update

Added a command to insert Raspberry Pi utility classes at the cursor location.
Also refactored raspi-util-class.py a bit.


Comments

Efe, 22 days ago

Correct video for this devlog: https://tinyurl.com/52bc7jun
I accidentally uploaded one of the previous videos

Efe

Some refactoring

I used a bit of AI to refactor the function-parsing logic.

  • JSON is now used for serial communication.
  • Added a util class for the vision module; I had forgotten this previously!

Devlog note: This is a code refactor, so there is *literally* nothing I can use for a devlog image other than images of the code.
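The post doesn't show the wire format, but a common way to do JSON over a serial link is newline-delimited frames. Here is a small sketch of that approach; it is purely illustrative, not Spiberry's actual protocol:

```python
import json

def encode_message(payload):
    """Serialize one message as a newline-delimited JSON frame."""
    return (json.dumps(payload, separators=(",", ":")) + "\n").encode("utf-8")

def decode_messages(buffer):
    """Split a receive buffer on newlines and parse each complete frame.

    Returns (messages, leftover_bytes) so a partially received frame
    survives until the next read completes it.
    """
    *frames, leftover = buffer.split(b"\n")
    messages = [json.loads(f) for f in frames if f.strip()]
    return messages, leftover
```

The delimiter makes framing trivial on a byte stream: anything after the last newline is simply kept for the next read.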
Efe

Shipped this project!

Hours: 18.91
Cookies: 🍪 343
Multiplier: 18.11 cookies/hr

I built this because I wanted to connect a Raspberry Pi to a LEGO SPIKE and there was nothing I could do that with (there is a Raspberry Pi HAT, but it doesn’t have enough ports). The most challenging part of making these updates was the computer vision; I had to remember cv2!

Efe

VS Code extension

Introduced a VS Code extension to more easily use Spiberry.

  • Easy installation
  • No need for the SPIKE IDE for intellisense any more
  • Easy code transfer to Raspberry Pi

Core Engine changes

  • Added a starter pin (GPIO21) that needs to be grounded to start the engine. This can be bypassed with the IGNORE_STARTER_PIN=1 environment variable.
  • Restructured raspi_functions as a module.
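The starter-pin gate can be sketched as follows. The GPIO read is stubbed out so the logic runs without hardware (with gpiozero it would be roughly `Button(21, pull_up=True).is_pressed`, since grounding the pin pulls it low); the function names are illustrative, not Spiberry's:

```python
import os

def starter_pin_grounded():
    """Stand-in for the real GPIO21 read; replace with gpiozero on the Pi."""
    return False  # pretend the pin is not grounded

def may_start(pin_grounded=None, env=os.environ):
    """Engine starts only if GPIO21 is grounded or the bypass env var is set."""
    if env.get("IGNORE_STARTER_PIN") == "1":
        return True
    if pin_grounded is None:
        pin_grounded = starter_pin_grounded()
    return pin_grounded
```

Checking the bypass variable first means development machines without the pin wired up can still run the engine.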
Efe

TL;DR

Added remote drive capabilities: now you can control the robot from the desktop app or via a game controller, and also get a record of the actions the robot took.

Remote Drive

  • Refactored the main code to object oriented style so it could also be used in the remote drive module.
  • Added RemoteDriveController which can run in two modes: socket (desktop app) or controller.
  • Introduced device code that enables robot control and records the actions to be run later.
  • Created a simple desktop app with tkinter that is used for the socket-based control.

WIP: Still need to compile the received “code” into actual code
Devlog note: First time video editing, apologies for the quality.

Efe

Shipped this project!

Hours: 32.53
Cookies: 🍪 30
Multiplier: 7.88 cookies/hr

I learned that you should always test before shipping. Turns out there were some issues with the previous shipment. This shipment:

  • Fixes bugs
  • Adds a reasoning-effort option
  • Updates the README
  • Fixes a little issue with the site tour
Efe

This devlog mostly consists of two 11:59 PM sleep-deprived coding sessions, so I don’t remember much other than fixing some issues with the vision module. This is an AI review of the last changes:

  • Added a retry mechanism that automatically extends HSV color ranges if no objects are detected.
  • Refactored color range extension logic to correctly handle multi-part ranges (e.g., red hue wrapping).
  • Enhanced detect_contours to support specific color filtering and more robust area-based filtering.
  • Added explicit error handling for unsupported picture-taking methods in the camera module.
  • Updated fallback vision configurations and improved type safety for detection filters.
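The retry-with-wider-ranges idea from the first two bullets can be sketched like this, using OpenCV-style HSV where hue lives in [0, 179]. The step sizes and function names are assumptions, and a red range crossing the hue wrap is assumed to already be split into two parts (each widened separately), as the multi-part handling above describes:

```python
def widen_hsv_range(lower, upper, hue_step=5, sv_step=20):
    """Widen one (lower, upper) HSV range, clamping H to [0, 179]
    and S/V to [0, 255]."""
    lo = (max(0, lower[0] - hue_step),
          max(0, lower[1] - sv_step),
          max(0, lower[2] - sv_step))
    hi = (min(179, upper[0] + hue_step),
          min(255, upper[1] + sv_step),
          min(255, upper[2] + sv_step))
    return lo, hi

def detect_with_retry(detect, ranges, max_retries=3):
    """Run `detect(ranges)`; if nothing is found, widen every part and retry."""
    for _ in range(max_retries + 1):
        objects = detect(ranges)
        if objects:
            return objects
        ranges = [widen_hsv_range(lo, hi) for lo, hi in ranges]
    return []
```

Each miss loosens the color filter a little, trading precision for recall until something is detected or the retry budget runs out.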
Efe
  • Added interactive pin configuration
  • There were a few issues with installation paths and virtual-environment checks. Fixed those. It was much harder than it sounds.
  • Some enhancements to the vision module
Efe
  • Created a setup interface that lets the user select which libraries they want to install. Currently it only has the libraries used in the vision module, but many integrations will come.
  • Optimized the merge_close_contours function with the help of Claude
  • Added configuration options for the Camera module, compatible with both rpicam-apps and picamera2
  • Added examples for the vision and camera modules
  • Updated the README to include the new camera capabilities
  • Made some bug fixes and improvements in the ContourDetector class
Efe

Vision Module

I started working on the vision module. Added,

  • Object detection
  • Contour detection (to find a single-colored continuous area)

Also fixed a few bugs (I don’t remember exactly what) and handled the installation of the vision libraries.
Efe

Bug fix

I realized that with the last commit I introduced a critical bug: when tools are called, the response doesn’t fit the response schema and errors out. I made a temporary fix for now: it tries JSON, XML, and Markdown parsing, and if all of those fail it calls the LLM again with formatting enforced and no tools. I am sure this is not very efficient, but it is the best I have right now.
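The temporary fix described above — try several parsers in order, then fall back to a formatting-only LLM call — might look roughly like this. The XML and Markdown extraction patterns are illustrative stand-ins, not the project's actual parsers:

```python
import json
import re

def try_json(text):
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None

def try_xml(text):
    """Very loose XML-ish fallback: pull an <answer>…</answer> block."""
    m = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    return {"answer": m.group(1).strip()} if m else None

def try_markdown(text):
    """Pull the first fenced code block, if any."""
    m = re.search(r"```(?:\w+)?\n(.*?)```", text, re.DOTALL)
    return {"answer": m.group(1).strip()} if m else None

def parse_response(text, reformat_with_llm):
    """Try each parser in order; on total failure, ask the LLM to reformat
    the text (response format enforced, tools disabled) and parse that."""
    for parser in (try_json, try_xml, try_markdown):
        result = parser(text)
        if result is not None:
            return result
    return json.loads(reformat_with_llm(text))
```

The extra LLM round trip is what makes this inefficient, which matches the post's own assessment: it only happens when every cheap parser has already failed.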

Efe

I started working on this project again. I am planning to add object detection and maybe some other cv capabilities.

Current change

Since the day I started this project I have been trying to make it portable and easy to set up. I changed the project structure a bit, and now I will be producing a .pyz file that will be the only thing necessary for running this project. Yay!

Efe

Bug fixes

  • If the LLM didn’t ask you questions, the output wasn’t structured, causing parsing errors. This is now (almost) handled.
  • Also added structured output to all LLM calls.

Choose your reasoning effort

  • Now you can choose the reasoning effort the enhancer agent will use while enhancing your prompt. Hoping this will reduce the time enhancements take (even medium reasoning takes a lot of time).
Efe

Shipped this project!

Hours: 26.75
Cookies: 🍪 364
Multiplier: 13.62 cookies/hr

I contributed to Lapse because I thought it was something truly necessary for Hack Club, and I could help.
Working on Lapse made me more confident in coding, and I think I got more familiar with the PR review process.
The hardest part was definitely getting to know what everything does and generally understanding the codebase.

Efe

I made 12 PRs to Lapse. Currently, 6 of them are merged, 1 is closed and 5 are waiting for review.

My PRs

Short summary of all of them (as much as I remember):

  • Added comprehensive testing for the API routes
  • Fixed a bug where authentication failed when the Hackatime login wasn’t done properly
  • Added duration displays to timelapse corners, like on YouTube
  • Implemented a fallback mechanism for when re-muxing fails
  • Implemented a *really* fancy setup script that helps devs set up the environment more easily: it walks them through database setup (automatically setting everything up if using the localstack option) and Slack setup
  • Sometimes after stopping and resuming, a black screen was recorded instead of the actual source. Fixed that issue by not letting the recorder be left null.
  • Fixed a very minor issue in the build script
  • Added a time display to the recording
  • You couldn’t change recording sources or change the resolution of the screen you were recording (the output would be corrupted). Fixed that.
  • IndexedDB was sometimes closing unexpectedly. Fixed that.
  • On the first use of a device, the passkey was set to 000000 until the first recording. Now it is generated when the user checks it.
Efe

Shipped this project!

Hours: 11.49
Cookies: 🍪 85
Multiplier: 7.4 cookies/hr

This version adds support for edit requests and better prompting style options.

  • You can now choose what your prompt will look like: Markdown, XML, few-shot, CoT, etc.
  • And if you don’t like it or something is missing, you can tell the AI what’s missing and it will make the necessary updates to your prompt.
Efe

App Tour

I have been putting this off for a very long time. Today, I implemented it with reactour.
I thought it would be easy, but the insufficient documentation (IMO, no offence to the devs) had other plans.

Also did some bug fixes and mobile-view adjustments.
Efe

A lot of bug fixes

  • Well, turns out I got the message types and the edit-request parameter mixed up in the edit web-socket payload. I wasted a lot of time finding this issue.
  • Sometimes the LLM didn’t call a tool, didn’t produce an output, and just did reasoning. In this case the output was None and everything was failing. I changed the loop conditions a little; I hope it works.

Structured Output

  • I was telling the LLM to output XML and parsing it with regex; that was all the output validation (if the LLM didn’t output XML, it would just fail). So, I added a response format with Pydantic.
  • I adjusted the prompt a little to work with structured output, but it still needs some work.
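The move from regex-parsed XML to a validated response format can be illustrated with a dependency-free sketch. The post uses Pydantic, where the equivalent is roughly `EnhancedPrompt.model_validate_json(raw)`; the schema fields here are hypothetical, not the project's:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class EnhancedPrompt:
    """Illustrative schema; field names are hypothetical."""
    prompt: str
    questions: list

def parse_structured(raw: str) -> EnhancedPrompt:
    """Validate the LLM's JSON against the schema instead of regexing XML.

    This stdlib version only checks for required keys; Pydantic would also
    coerce and validate the field types.
    """
    data = json.loads(raw)
    expected = {f.name for f in fields(EnhancedPrompt)}
    missing = expected - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return EnhancedPrompt(prompt=str(data["prompt"]),
                          questions=list(data["questions"]))
```

The win over regex parsing is that a malformed response fails loudly at the boundary, with a clear error, instead of propagating None deeper into the app.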
Efe

New feature: Edits

I started working on a feature that allows the user to request AI edits on the enhanced prompt.

  • Implemented the edit-request input and the web-sockets logic on the front end
  • Created an edit consumer to handle the edits
  • Improved and adjusted the prompts to handle edits

Still didn’t test anything. I’m soooo tired.
Efe

Why is frontend so hard! 😭

  • Made the website mobile compatible
  • Now you can manually edit the prompt (in the future you will be able to request AI edits)
  • Added copy and save prompt buttons
Efe

I forgot to devlog and now I don’t remember exactly what I had done! If I remember correctly,

  • I added a few more options to the prompt style: namely, Plain formatting, and CoT to techniques
  • The LLM was confusing the requested prompt style with the ultimate output the user wants from the AI. I made it more explicit in the prompt, but time will show if it worked.
  • Finally, added a checkbox to specify whether the target model is reasoning-native, so we don’t depend on the LLM’s knowledge to identify that (also updated the prompt accordingly).
Efe

Updated the prompt. It was fully XML; I learned that this is not best practice (I hope that information is correct). So now it is a hybrid: Markdown for guidelines and XML for variables.
Also improved the prompt-style section a little bit.

Efe

Added prompt style options: now you can choose the format of your prompt; XML, Markdown, long, short, few-shot, etc.
Divided the prompt.py file into smaller functions for readability.
And I unfortunately (or maybe fortunately) learned that fully-XML prompts are counterproductive, which means I need to refactor the prompt agaaaaiiinnnn!

Efe

Shipped this project!

Hours: 18.48
Cookies: 🍪 489
Multiplier: 26.43 cookies/hr

I built this to more easily and effectively interact with AI. I believe I learned a lot about prompt engineering.

Efe

Added user messages that display when HCAI is down, when the fallback is being used, and when the fallback is also down.
Implemented a better fallback mechanism.

Efe

Updated the README
(I know with all the emojis it looks like AI generated, but I like it this way)

Efe

Why does everything keep breaking! Ahhaaaa…
Today I tried to use it for something and the backend gave an error: JSON parsing.
Why?!! It was working yesterday. I checked the prompt, thinking maybe there was something random making the LLM go crazy. Couldn’t figure it out.
Finally added json-repair, and it works fine for now.
And now the question Cancel button actually does something.

Please DON’T break again.

Efe

Made the project ready for production.
Deployed the frontend to Vercel and the backend to Railway.

Efe

Well, turns out you shouldn’t use CoT prompting with reasoning models.
Learning this, I had to go over the prompt again.
I learned that gemini-3-flash works best with XML, so I switched to XML.
Overall, I spent (I hope not wasted) a lot of time on the prompt again.

Updated the frontend: prompts are auto saved and displayed neatly on the sidebar, stored in local storage.

Efe

Questions are now received, sent, and answered as arrays instead of a plain string.
Fixed bugs with web sockets.
Added rate limiting.

Efe

Made significant improvements to the prompt: it now goes through a prompted chain-of-thought process.
Moved everything about the prompts to a new file.
(I want to apologize to HCAI for the immense costs to come.)

Efe

Removed Celery and Redis so that deployment is cheaper.
Realized “the prompt” needs a lot of improvement.

Efe

Created a .bat script to start the necessary services.
Made minor improvements to the prompt.
Also attempted to package the app in a .pyz file but failed :(

Efe

Implemented channels and web sockets to allow the LLM to ask questions to the user in prompt enhancement

Efe

Migrated to Django + Vite (the frontend looks almost exactly the same).
Fixed the bug with the additional web-search query.

Efe

The first primitive version.
Can save and load prompts.
Streamlit interface.
Has web search capabilities, though not sufficient yet.
