Activity

shreeyans.vichare

Shipped this project!

Hours: 24.88
Cookies: 🍪 21
Multiplier: 4.44 cookies/hr

I built my very own blog site. Added a ton more posts and new features to the initial site, like comments, likes, reading time, a Table of Contents, a working contact form embed, etc. The initial site was a little more simplistic; I tried to make it more interactive so I can actually see what people think of the silly little posts I write :)) This was a really fun update and now I feel like The Active Record has all the features any blog site would have. Super proud of how I didn’t need to run any backend since the forms are managed by Tally and the reactions/comments are managed by Giscus. Hope you enjoy it, happy reading :))

shreeyans.vichare

So this is the final updates list, a lot of stuff that was in the works before has now been implemented and deployed!!
Headings now appear on each post, letting people easily navigate to any section of the post.
Reading times are now auto-calculated, so I don’t have to guess how long it will take for people to read an article.
Embedded a Tally form, since it’s way easier than depending on Supabase (which might shut down my project’s backend anytime due to inactivity).
Also added likes and comments without bothering with Supabase; there’s this really cool tool called Giscus that lets you use GitHub Discussions for reactions and comments. It came in so handy and saved a lot of effort of having to implement it all from scratch!!
And finally made the category page look a little better (used GitHub Copilot and Claude for the UI to make it look more like a news bulletin).
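The reading-time auto-calculation is essentially a word count divided by an average reading speed. A minimal Python sketch of the idea (the blog itself is React, and the 200 wpm figure and function name are my assumptions for illustration):

```python
import re

WORDS_PER_MINUTE = 200  # assumed average reading speed, not necessarily what the site uses

def reading_time_minutes(markdown_text: str) -> int:
    """Estimate reading time by counting words, rounding to a 1-minute minimum."""
    # Crudely drop fenced code blocks before counting words.
    text = re.sub(r"```.*?```", "", markdown_text, flags=re.DOTALL)
    words = re.findall(r"\w+", text)
    return max(1, round(len(words) / WORDS_PER_MINUTE))

print(reading_time_minutes("word " * 400))  # 2
```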

Attachment
Attachment
Attachment
Attachment
shreeyans.vichare

Added search functionality to the blog site. This was a really highly requested feature, and I hope it’ll keep coming in handy as I keep writing more blogs :)) The search bar expands on clicking and hides the navigation bar. More fun features incoming, hope y’all enjoy ^^
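Under the hood, a blog search like this can be as simple as a case-insensitive substring match over titles and bodies. A Python sketch of the idea (the posts here are made up; the real site is React):

```python
# Toy post data standing in for the blog's real content.
posts = [
    {"title": "Why I Love Chess", "body": "openings and endgames"},
    {"title": "Training an LLM", "body": "tokenizers, data, compute"},
]

def search(query, posts):
    """Return posts whose title or body contains the query, case-insensitively."""
    q = query.lower()
    return [p for p in posts if q in p["title"].lower() or q in p["body"].lower()]

print([p["title"] for p in search("llm", posts)])  # ['Training an LLM']
```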

Attachment
Attachment
shreeyans.vichare

Shipped this project!

Hours: 12.18
Cookies: 🍪 257
Multiplier: 21.14 cookies/hr

I built SurvivorLM, a model that is fine-tuned to aid you in survival and help you in times of need. The hardest part was the actual training, which ended up taking hours, just to break in the middle sometimes. While trying to train from scratch, the model would often produce gibberish, so I ended up fine-tuning a pre-existing model instead, which ran into its own set of issues. The base model was huge, so I used QLoRA to quantize it, gathered a lot of survival data, and fine-tuned it on that. Turns out mlx-lm doesn’t work well with Gemma models, so I had to start over entirely in Google Colab, where it would fail often. But after a few runs (and a little help from AI), I finally got it to work. Then, when it came to downloading, Colab split the model into a bunch of safetensor shards, which had to be reassembled into a gguf file. Getting it deployed was another task; after many failed deployments it finally came through, and GCP is now serving the model, using the UI that Claude designed. Since I don’t have the resources of a large company, I can’t keep it running for too long and have to limit the responses, but I hope you enjoy chatting with it. It took a lot of work and was very tedious, but this has to be the most satisfying project I’ve ever built.
Thank you for following along!! Hope you like it :)

shreeyans.vichare

Managed to deploy it using Google Cloud Platform. That took a lot of failed builds and an entire evening, but it’s done. I used AI to make the UI, but to anyone voting/reviewing, please note that the real implementation and work consists of fine-tuning and trying to train the LLM from scratch. This was one hell of a journey but I’m glad it happened. I have learnt so much about how these large language models work and what we can tweak ourselves to make them fit our needs.
Hope you enjoy the tips provided by SurvivorLM.

NOTE

(The website may take some time to load, please be patient since I don’t have the hosting resources to keep multiple instances running at once, hope you get a clear view of the project from the devlogs and not just the website)

Please note that because running this model is kinda expensive, I am using free compute as of now so I had to cap the daily messages to 10. I’m afraid I don’t have the resources that large companies do.

Thank you for following along. Hope you like the SurvivorLM :))

PS: Fixed the truncated output and added the design file to the root of the repository :)

You can download the model @ https://huggingface.co/Shreyy2305/survival-gguf

Attachment
shreeyans.vichare

FINALLYYYYYY!!!! 😭😭😭😭. It works. Ahhhh. It took soo long and so many hours of debugging, the devlog time doesn’t reflect half of it. But yeayyyyyyy!!!! The gguf file compiled successfully. I thought I lost hours of training when Google Colab downloaded all the safetensors separately in shards, but thankfully it all recompiled. AHHHH. This was easily the hardest project ever.
Please note that this project is mainly about the theory and the actual functioning and fine-tuning of the LLM, not the UI. Now the only thing that’s left is actually deploying it. It works in the command line right now, but making it so that anyone can use it on their own website/CLI is going to be difficult. Right now it only works by downloading the 3GB model and running it with llama.cpp, but I’ll try to host it for people to test out. Can’t wait :)) Thank you for being a part of this!!

Attachment
Attachment
Attachment
shreeyans.vichare

So I’ve decided to completely pivot. I am still training a model, but now this is going to consist of fine-tuning the Gemma 4 model for survival. It includes image/video processing as well, which would be a lot more helpful, and training one from scratch for all of that would take up more resources and brain power than I have. The one I did train does work, only it produces gibberish because I couldn’t gather enough data and don’t have enough compute to train everything. I’m sure the output with this fine-tuning approach will be 1000% more valuable and faster. I’m sorry I can’t attach photos of the actual output just yet since the training process is so lengthy. Right now I’ve gathered about 2000 examples and am fine-tuning the model on those. It still takes a lot of time, but it works!! Hope I can turn this into an app or a CLI that people can use since the Gemma 4 2b model is relatively lightweight.

Attachment
shreeyans.vichare

Shipped this project!

Hours: 9.27
Cookies: 🍪 74
Multiplier: 6.68 cookies/hr

I built a Path Finding Algorithm visualization tool that lets anyone observe how pathfinding algorithms like Dijkstra’s, A*, GBFS, BFS, and DFS actually work on real maps. The hardest part was actually getting the map data from OpenStreetMap and the OSRM router, since I kept hitting the rate limit. To counter that, I decided to download the maps of a bunch of major cities as grids, so they don’t have to be requested each time. And I added a worldwide mode (experimental feature) that (for short distances) will still fetch the data live from OSRM, ideally without hitting the rate limit. Seeing the visualization for the first time was genuinely thrilling, and it helped me nurture an appreciation for how the technology all around us really works. Hope you enjoy it ^^

shreeyans.vichare

The UI now looks nice and polished and more visually appealing. Added a new worldwide feature which I felt was really necessary since I wanted the user to pick any location they wanted. Also added a way for the person to learn how the algorithm actually works. Overall, this was a really huge update and I’m super proud of how this turned out. Hope you enjoy seeing these things work and find it just as beautiful as I did. Thank you for following along!! :) ^^

Attachment
Attachment
Attachment
shreeyans.vichare

Added support for more cities like Tokyo and Berlin, but I still keep running into a bug where it chooses a completely different path that hasn’t been explored and isn’t the shortest path. The visuals still look good, so maybe there’s something wrong with the logic. I still need to work on the website’s UI, having trouble getting it to work perfectly. Steady progress is still good I guess. Hope you enjoy it :))

Attachment
shreeyans.vichare

I added a lot more posts and a bunch of ways to reach out and contact! Hope people find this stuff cool. There’s a lot more to do still. I can improve on the loading time for the site because it is currently a little slow, but upwards and onwards. I hope I can add more themes and maybe some AI to my website to help users navigate and know more about me, hope you enjoy :))

Attachment
Attachment
shreeyans.vichare

So, the algorithms are working correctly. The only issue right now is getting them to work on any place in the world. I keep hitting the rate limit for OpenStreetMap routing, which I need in order to make a grid out of the map. It’s a little slow, and sometimes straight up doesn’t work. I’m thinking of a workaround wherein I download the maps of major cities as grids and just load them in. This would make everything significantly faster, and as an experimental feature maybe I can also support short distances placed anywhere in the world with less dense roads. Altogether though, this is coming along really well. Hope y’all like it :))
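That caching workaround is simple to sketch: fetch once, save to disk, and read the file on every later run. A rough Python sketch of the idea (the file layout and names are hypothetical, not the project’s actual code):

```python
import json
import os
import tempfile

CACHE_DIR = tempfile.mkdtemp()  # the real site would use a fixed folder or bundled files

def load_city_grid(city, fetch_fn):
    """Return road data for a city, hitting the live API only on a cache miss."""
    path = os.path.join(CACHE_DIR, f"{city}.json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    grid = fetch_fn(city)  # live OSRM/Overpass call, only once per city
    with open(path, "w") as f:
        json.dump(grid, f)
    return grid

calls = []
def fake_fetch(city):
    calls.append(city)
    return {"nodes": [1, 2, 3]}

load_city_grid("tokyo", fake_fetch)
grid = load_city_grid("tokyo", fake_fetch)
print(grid, len(calls))  # second call is served from cache, so fetch ran once
```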

Attachment
Attachment
shreeyans.vichare

Got OpenStreetMap to work, alongside the Overpass API. The Overpass API gets all the available road routes, and the algorithm (BFS for now) explores all the possible roads one by one, in a queue, and finds the shortest path between two points. Oftentimes, if large distances are entered, it glitches out and the API doesn’t respond (since it is all free), but on small distances like 30-40 kilometers it should work fine. I need to add more pathfinding algorithms and make the UI look nicer, but overall everything is coming along quite well :)
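The BFS-on-roads idea above, as a tiny self-contained sketch (the graph here is a made-up toy standing in for Overpass data, so the first time a node is reached is via the fewest road hops):

```python
from collections import deque

# Toy road graph: node -> neighbouring nodes (real data would come from Overpass).
roads = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"],
}

def bfs_shortest_path(graph, start, goal):
    """Explore roads level by level in a queue; the first arrival at the goal
    is the shortest path in number of hops."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

print(bfs_shortest_path(roads, "A", "E"))  # ['A', 'B', 'D', 'E']
```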

Attachment
shreeyans.vichare

Implemented both informed and uninformed search algorithms: Breadth First Search, Depth First Search, Greedy Best First Search, and the A* algorithm. Getting them to work on an actual map is going to be the difficult part, but right now I’m just focusing on getting a conceptual understanding of all of them. In the attached video, I’ve implemented the A* algorithm. Let’s hope I get to scale this project up to include maps and everything. Also, I need to implement Dijkstra’s algorithm, but I’m still working on understanding that one. Thank you for following along :))
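For anyone curious, the A* idea can be shown in a few lines: it always expands the node with the lowest f = g + h, where g is the cost so far and h is a heuristic like Manhattan distance. A toy grid sketch (not the project’s actual code):

```python
import heapq

def manhattan(a, b):
    """Heuristic: grid distance ignoring walls."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(start, goal, walls, size=5):
    """A* on a small grid: pop the frontier node with the lowest f = g + h."""
    open_set = [(manhattan(start, goal), 0, start, [start])]
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in walls:
                heapq.heappush(
                    open_set,
                    (g + 1 + manhattan((nx, ny), goal), g + 1, (nx, ny), path + [(nx, ny)]),
                )
    return None

path = a_star((0, 0), (2, 2), walls={(1, 1)})
print(len(path) - 1)  # 4
```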

shreeyans.vichare

Made the service navigator website clearer and added a demo video :) I wish I could’ve made it easier for people to test it out, since I know downloading Android APKs isn’t the easiest and the iOS setup is even more tedious, but the app itself works splendidly and I hope the demo video does some justice to it. Hope you enjoy using PragatiConnect :)

Attachment
Attachment
shreeyans.vichare

Working on getting a simple Python version up and running so I get the hang of how pathfinding algorithms actually work. Later on I plan on implementing all this using OpenStreetMap and an actual web interface with real maps, but for now I’m just working on the logic of each algorithm and focusing on gaining a deeper understanding of them :)

Attachment
shreeyans.vichare

Shipped this project!

Hours: 4.15
Cookies: 🍪 45
Multiplier: 10.79 cookies/hr

I built a Wiki page generator that lets people generate and share their own (or someone else’s) wiki page. It is fun, and I always wanted to have my own Wiki page, just as a milestone or even as comedy/satire. The hardest part was setting up the backend and rendering user pages with user-generated slugs, but it all worked out, and now images can also appear in the wiki page (used Supabase’s S3 storage for that, they have a generous free tier). Hope you have fun creating your own pages and sharing them, thank you for your time :)

shreeyans.vichare

Finally done with the project!! Really happy with how everything turned out! Added some disclaimers (wouldn’t want to get into trouble) and a README. The website is meant to be fun, funny and satirical, and at the same time a platform where people can showcase themselves (like a WikiCV; I actually like the idea of a WikiCV). Hope you all enjoy making pages as much as I enjoyed building the website, have fun :)

Attachment
shreeyans.vichare

Finally everything is up and running!! You can make a wiki page yourself, or generate one through a prompt to your favourite AI. It’s been fun seeing all this come together. Deployed the frontend on Vercel, and all the data is stored in Supabase. The free tiers are generous and it’s easy to set up. Images can be added as well (up to 5 MB), and the pages can be edited if a password was set at the time of creation. I’ll probably have to work on the request-to-delete-page feature, in case anyone doesn’t want their page to be seen anymore. But overall it works, and all this was really fulfilling. Hope people make tons of pages and share them with their friends!!
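The user-generated slugs mentioned earlier boil down to normalizing a page title into something URL-safe. A minimal sketch of the idea (the site’s actual slug rules may differ):

```python
import re

def slugify(title: str) -> str:
    """Turn a page title into a URL slug: lowercase, hyphens, alphanumerics only."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

print(slugify("Shreeyans' Totally Real Wiki Page!"))  # shreeyans-totally-real-wiki-page
```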

Attachment
Attachment
1

Comments

Airin a
Airin a about 1 month ago

IT JUST SO PEAK, love it 🙌

shreeyans.vichare

Alright, getting somewhere. The routing and page creation have been added, alongside an AI wiki generator that generates a prompt that can be given to any AI to quickly generate your wiki for you. Images can be added and the wiki page actually looks complete. Now it’s all about refining it and making it work. Once created, the wiki page can be edited only if a password was set at the time of creation.

Attachment
shreeyans.vichare

Hii!! Just a quick note for anyone testing with the demo. The demo link I’ve provided here is just a website that lets you navigate to the service you want to use, since there is an Android app and also a website calling interface. Feel free to use whichever one you want. I did make the navigator website through AI, but the Android app and the web-call assistant website are the actual products; the demo link just helps you navigate to the service most suitable for you.
Thank you for your time!! Hope PragatiConnect helps you with your journey!!

Attachment
Attachment
shreeyans.vichare

Getting somewhere. I’ve got a rough layout and a framework of how everything should be working. Now I’ve just got to figure out how to route to different pages (with slugs), authenticate users, and make the actual homepage look good. The last one is a work in progress, as you can see, but the path is clear. The user decides the name of their page and generates it through AI or by writing it themselves. Let’s see where this goes…

Attachment
shreeyans.vichare

Shipped this project!

Hours: 12.33
Cookies: 🍪 173
Multiplier: 14.01 cookies/hr

We built a platform (app and web-call assistant interface) to empower the informal sector. It provides features like a visual price estimator for artists and craftsmen, and an AI voice assistant with local language support for a lot of Indian languages plus English. This is meant to be an outlet where people like maids, artisans, small businesses, and pretty much anyone with a non-traditional job profile can grow. The hardest part was getting everything connected to the hosted AWS architecture and making subtle UI changes that would work on one OS but not on the other. But by far, the in-app voice assistant feature took the most time. I learnt a lot about how hosted systems work and what goes into actually building a production-grade application. To check out the working app, make sure to download the Android APK and test it out for yourself. I’m really proud of what we’ve built as a team in this 7-day prototyping phase; it is a really meaningful project that can have a huge impact on society. Thank you for your time, I appreciate it.

shreeyans.vichare

Implemented a web calling interface, so if someone doesn’t want to download the Android app, they can use the voice calling interface. Turns out it’s really hard to do public iOS testing unless you have a paid Apple Developer account. So now the product consists of 2 services: the mobile app and the calling simulator. It would’ve been cool if I could rent a toll-free number and connect it there, but that’s fine for now. Finally, everything that was planned has been implemented. Really feel good about what the PragatiConnect product has come to be!!!!

Attachment
Attachment
Attachment
shreeyans.vichare

Trying to get the price estimator to work so people can upload images of their craft and get a fair price for it. It’s not processing quite properly yet, but at least the user can add any image and send it to the backend, where the AI should analyse it. Also got the email authentication working through the SMTP library, so it sends and validates OTPs.
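The OTP flow is roughly: generate a random code, email it via SMTP, then compare what the user types back. A hedged Python sketch (the host, credentials, and function names are placeholders, not the app’s real config):

```python
import secrets
import smtplib
from email.message import EmailMessage

def generate_otp(digits: int = 6) -> str:
    """Cryptographically random numeric OTP, zero-padded to a fixed width."""
    return str(secrets.randbelow(10 ** digits)).zfill(digits)

def send_otp(to_addr: str, otp: str, host: str, user: str, password: str):
    """Send the OTP over SMTP with STARTTLS; host/credentials are placeholders."""
    msg = EmailMessage()
    msg["Subject"] = "Your verification code"
    msg["From"] = user
    msg["To"] = to_addr
    msg.set_content(f"Your OTP is {otp}. It expires in 10 minutes.")
    with smtplib.SMTP(host, 587) as server:
        server.starttls()
        server.login(user, password)
        server.send_message(msg)

otp = generate_otp()
print(len(otp), otp.isdigit())  # 6 True
```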

Attachment
Attachment
shreeyans.vichare

Added support for a few more Indian languages (Gujarati and Punjabi). The backend is a lot to configure, but testing has been relatively smooth. The folder and all the files are very messy and hard to keep track of. The text-to-speech and speech-to-text keep breaking often; I’m trying to work on that with the Google APIs. The voice assistant would lose context every time a new message was sent; now the memory persists until either the language is chosen or the voice assistant session is closed.

Attachment
shreeyans.vichare

Worked on the iOS and Android application. We have a lot of features to implement, currently working on one thing at a time. The frontend UI looks good; the backend AWS and Google Cloud part is a little tricky to deploy. Regardless, we’re moving forward!!

Attachment
Attachment
shreeyans.vichare

Seems like I’ve hit a wall, or rather the wall has hit me. The model is producing incomprehensible text. I’ve come to understand that it’s a scaling problem. Right now, all the data together is ~30 MB. I need to gather way more data, and more importantly, gather clean data. Seeing the loss drop did feel good; the model is doing the best it can with the amount of data it has, but it doesn’t have much to work with. Regardless, I’m not abandoning this. I hope I can come up with and find more data for it to be comprehensible!! Onwards and upwards we go!!!!

Attachment
Attachment
shreeyans.vichare

Ok so, I completed the tokenizer, and followed that up by writing and training the model, which took a long time to train (devlog time does not count the time spent on training). The vocab size is 8000 words and the maximum sequence length allowed is 512. All of this was done with PyTorch (no Hugging Face). And after all of that effort, the output I got is in the picture (it is just text that does not make sense)…
Really regret adding Shakespearean text to the training data but hey… you live and you learn!

Attachment
shreeyans.vichare

Ok so I finally got the data together; it took a lot of time, but it’s done. Now the actual work begins. I got the transformer trained, and it is producing a sane amount of tokens, and common words have stayed intact. The data gathering was split into 3 parts: Survival Data (5.5 MB), Warm-up Data (28.1 MB) [English language], and Instruction Data (254 KB) [QnA-formatted data]. It was all combined into a tokenizer corpus (one large .txt file) and trained using SentencePiece. Next up is the architecture and the actual implementation! I’m really excited!!
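The corpus-building step is just concatenating the three datasets into one big .txt file before handing it to SentencePiece. A sketch with throwaway stand-in files (the names and contents here are made up):

```python
import os
import tempfile
from pathlib import Path

def build_corpus(part_paths, out_path):
    """Concatenate all data sources into one tokenizer corpus file."""
    with open(out_path, "w", encoding="utf-8") as out:
        for p in part_paths:
            out.write(Path(p).read_text(encoding="utf-8").strip() + "\n")
    return out_path

# Demo with temp files standing in for the real survival/warm-up/instruction data.
tmp = tempfile.mkdtemp()
names = []
for name, text in [("survival.txt", "boil water before drinking"),
                   ("warmup.txt", "general english text"),
                   ("instruction.txt", "Q: fire? A: friction")]:
    path = os.path.join(tmp, name)
    Path(path).write_text(text, encoding="utf-8")
    names.append(path)

corpus = build_corpus(names, os.path.join(tmp, "corpus.txt"))
print(len(Path(corpus).read_text(encoding="utf-8").splitlines()))  # 3

# SentencePiece training on the corpus would then be roughly:
#   spm.SentencePieceTrainer.train(input=corpus, model_prefix="survival",
#                                  vocab_size=8000)
```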

Attachment
Attachment
shreeyans.vichare

To say finding, cleaning, and processing data for training has been difficult would be an understatement. There’s always the fear: is this enough? Is this useful? And so far it has been very messy, but step by step I think I’m getting closer to actually usable data. However, the folder structure is still a mess. An LLM really needs a lot of data, and for survival tips I have been using Wikipedia and openly available survival manuals and PDFs. Data gathering is a really important step and I don’t want to take it for granted, so let’s hope all of this works!!

shreeyans.vichare

Shipped this project!

Hours: 1.24
Cookies: 🍪 2
Multiplier: 1.5 cookies/hr

Built a website that lets chess enthusiasts play chess with their friends quickly and stress-free. It was a hassle to keep signing into random PCs at college, so VelozChess helps you play chess with your friend easily with a single code! It was really difficult to deploy this project since it involves a lot of moving parts, such as Vercel for the frontend, Render for the backend, and Supabase for database management. Overall it was a really fulfilling project since it included building something that me and the people around me actually needed, so yeah, that was super fun! I really hope you enjoy playing with your friends here and have fun messing around with the code. Happy chessing!!

shreeyans.vichare

This was an existing project that I worked on, and I finally finished the deployment! The backend is now running on Render instead of running locally, and the database is connected to a remote Supabase project. It was really fun to work on something that me and my friends could actually use and need. Colleges/schools keep changing PCs, and chess.com and lichess being blocked was just getting too much. That’s where VelozChess comes in. It was fun to work on this project, but since it was an existing project, it will probably be short on the number of devlogs.
Please feel free to mess around with the code and of course play with your friends!! Enjoyy 🥳🥳

Attachment
Attachment
Attachment
shreeyans.vichare

Please visit: https://ather-docs.vercel.app for a DETAILED DEMO and DOCUMENTATION

For installation please visit the latest release: https://github.com/Shreeyans2305/AtherOps/releases/tag/v1.0.7 and follow the steps there

(So far, I have only tested on macOS, and that works on the latest version; however, the builds for Ubuntu and Windows were also successful)
The objective is to create a smart and user-friendly system that benefits e-commerce platforms and SaaS businesses going through transitions.
Broken API endpoints and payment gateway errors are real and way too common. These small bugs end up costing businesses a lot of money.
To resolve this, AtherOps introduces a Smart Multi-Agent Pipeline that monitors all the signals, logs, messages, console output, and tickets from a website, provides a smart diagnosis, and self-fixes issues based on severity. For harder issues, it sends an email to engineers and merchants informing them of the error.
This system vastly optimizes current monitoring services and ticketing systems, saving companies a lot of money and resources by providing a 24/7 support team of agents!!
The setup for a business is as easy as adding a JavaScript SDK through a code snippet in the script tag of the HTML file!!
Really proud of this project and I hope it goes on to make real world impact.
Thank you for your time.

PS: I finally got the package working so now you can download it on your own system and run it. Just make sure to install ollama first and read the setup steps. Have Fun!! I hope you enjoy it.

Attachment
shreeyans.vichare

Shipped this project!

Hours: 5.09
Cookies: 🍪 123
Multiplier: 24.15 cookies/hr

I built a multi-agent AI system that monitors websites (mainly e-commerce) to help them monitor and fix (in real time) anything that might be halting business operations. The hardest part was designing the pipeline for the agents and getting them to give a detailed and accurate output; a detailed system prompt helped fix this issue. All the models run locally and are very lightweight, though shipping them requires a lot of formalities. The frontend is minimal and user-friendly, with a professional look resembling a stock market monitor (Bloomberg Terminal). I am most proud of how easy it is for a hypothetical customer to adopt this framework into their website: it’s as easy as adding a JavaScript SDK in their HTML page in a script tag, and then done! One step and they are ready to have a team of smart personal assistants and monitors. This project was really fulfilling and I’m honestly super glad and grateful that I got to complete it and share it with you all.

shreeyans.vichare

Some finishing touches were added, and I quickly whipped up a documentation page, since deploying something like this is really tedious and would need money and probably a large team. The locally running lightweight LLMs give this project a huge boost, separating it from the existing diagnosis platforms; something like AtherOps is in a completely different category since it’s smart and self-healing! Really proud of the way this project turned out, and I hope it can be shipped to real companies in need of monitoring during a transition to headless architecture, and also as a general smart tool!

Attachment
shreeyans.vichare

The core of the project has been built out. After an error/ticket/API endpoint failure, the site correctly diagnoses and fixes the issue, identifies patterns, notifies the relevant authorities (via email), and asks for human intervention on serious or critical errors. This project requires the installation of a lot of dependencies, so I am thinking I might just use a demo video instead of the working agentic pipeline. The setup is rather easy but heavy. I’ll probably include everything in the README file. Make sure to check out the README file!!
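The severity-based handling described above can be pictured as a simple dispatch. This is an illustrative sketch only; the severity labels and action names are my assumptions, not the exact AtherOps schema:

```python
def route_incident(incident):
    """Dispatch by severity: auto-fix minor issues, email engineers on medium ones,
    and escalate serious/critical ones to a human. (Illustrative labels only.)"""
    severity = incident["severity"]
    if severity == "low":
        return "auto_fix"
    if severity == "medium":
        return "auto_fix_and_email"
    return "human_intervention"  # serious / critical

print([route_incident({"severity": s}) for s in ("low", "medium", "critical")])
# ['auto_fix', 'auto_fix_and_email', 'human_intervention']
```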

Attachment
shreeyans.vichare

Created the frontend and the backend. The purpose of this is to monitor a website’s console, websocket data, and customer support tickets to provide real-time healing and assistance. So far, the data about a business’ website, such as API errors, payment failures, and websocket errors, goes into the backend and follows an agentic AI pipeline which observes and reasons about these errors and establishes a pattern. In this devlog, I completed the implementation of the entire site. I will be sharing the theoretical framework behind this soon. Stay tuned!!

Attachment
shreeyans.vichare

I got a simple demo orbit simulation running with Pygame. Still exploring the physics side of it all, but for now this is a good baseline to start from. I am planning to make it more accurate and better looking. Mimicking Newtonian physics for now, but I might upgrade to general relativity depending on how much I am able to grasp and implement, because some of the topic matter is really dense. Anyway, that’s it, thanks for following along with the journey!

Attachment
Attachment
shreeyans.vichare

Shipped this project!

Hours: 20.19
Cookies: 🍪 106
Multiplier: 5.25 cookies/hr

I made a personal blog website.
The Active Record is something I’ve worked on for a long time.
It provides an extremely personalized and tailored UI that I just couldn’t find on traditional website makers. Shoutout to React, this library really is very helpful and beginner friendly.
I had fun working on the design from the ground up.
The posts are rendered by parsing the markdown files for things such as categories, featured posts, etc. And after uploading a blog as a Markdown file, it dynamically adjusts the UI and adds it at the desired location with the desired route.
I really gained an appreciation for functional programming and how I don’t have to make completely separate instances of things just to fit one test case.
Overall it was a really fun experience and I hope everyone enjoys it!!
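The category/featured parsing described above comes down to reading frontmatter out of each Markdown file. A minimal Python sketch of the idea (the real site uses gray-matter in React; the sample post here is invented):

```python
def parse_frontmatter(md_text):
    """Split a markdown file into frontmatter (key: value pairs between ---
    fences) and body. A bare-bones sketch of what gray-matter does."""
    lines = md_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, md_text  # no frontmatter block at all
    meta = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            return meta, "\n".join(lines[i + 1:])
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, md_text  # opening fence never closed

post = """---
title: My First Post
category: tech
featured: true
---
Hello, readers!"""
meta, body = parse_frontmatter(post)
print(meta["category"], meta["featured"], body.strip())  # tech true Hello, readers!
```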

shreeyans.vichare

I finally finished the Project!
It took a lot of time to perfect, added dark mode and I really like how it turned out!
The progress bar feels smooth while reading a post! Can’t wait to put up more content and perhaps, in the future, my own merch store (as teased on the website)
Managed to make the categories section a little more interactive, added social links for contacting and tons and tons of tiny details that had to be worked out but I am super proud of it.
Hope you’ll enjoy reading my blogs as much as I’ll enjoy writing them

(P.S. I am still unable to upload a video in the devlog; to check out the implementation please visit the demo link. I have tried my best with GIFs but they don’t fully convey the UI, thank you!!)

Attachment
Attachment
Attachment
1

Comments

shahaarav2806
shahaarav2806 4 months ago

Great Work Bro

shreeyans.vichare

So I have completed the page for individual posts, and for specific categories of posts.
The design is kept consistent throughout the entire website to give it a better feel.
The specific category page conditionally renders by parsing through the .md files, so I don’t have to manually add anything while uploading blogs. I also ended up writing a blog, which was a fun break!
I really feel good about how this project is coming along, and now all I have left to implement is the Light Mode/Dark Mode switching and the ability to contact me and like individual posts!
(P.S. I am having an issue uploading videos in devlogs, so if you want to see how the final page looks, please check the demo link)

Attachment
Attachment
Attachment
shreeyans.vichare

Perfecting the landing page is taking longer than I expected! But I am all for the customizability, that’s the fun part about making a blog website on your own, the possibilities are endless. While there aren’t any insane animations/transitions, I have done my best to make the website feel interactive and lively.
A lot of useful implementation was added during this session. The blog posts now read from Markdown files instead of JSON. The data and content are extracted using the react-markdown and gray-matter packages. Powerful yet subtle animations have been added using the Framer Motion library and GSAP.
I’m really proud of how this page is turning out to be and hope to keep adding features as I keep adding blog posts.
What’s left for now is the page for specific categories, adopting a mobile friendly design and light mode/dark mode integration.
Thanks for reading!

Attachment
Attachment