Activity

satya//7YA

Shipped this project!

just deployed…

it’s a simple ai-powered study tool i’ve been building to make learning a bit more interactive and less boring. you can go through notes, generate quizzes, and try different ways of studying all in one place.

it’s still a work in progress, especially the ui and tracking parts, but it’s usable right now.

please don’t misuse the api keys. if you want to test properly, you can plug in your own keys.

if you try it out, i’d really appreciate any feedback 🙌

satya//7YA

Finally after all of this, I moved into a phase where I just kept building on top of everything and trying to make the app feel more alive and actually fun to use, not just useful.

So the first thing I did was improve the tables design. Earlier it was working but it felt kinda plain and boring, so I redesigned it a bit to make it cleaner and more readable. Spacing, alignment, lots of small visual tweaks, all that stuff. Nothing too flashy but enough to make it feel nicer when you actually look at data.

Then I started thinking about something more interesting. Like whenever a user asks something where a normal text answer is not enough, for example history, flows, step-by-step processes, tree structures or anything like that, it would actually be better if we could show it visually. So I added a flowchart/diagram-type system. Basically if the question feels like it needs a visual explanation, the app can generate a graph-style diagram.

For that I used graph-based rendering and integrated React Flow. Setting that up was a bit tricky ngl, handling nodes, edges, layout and all, but once it started working it looked really good. It makes explanations feel way clearer compared to just paragraphs.
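
Roughly, the glue looks something like this. The input shape (`{ nodes, links }`) is just my assumed format for what the model is prompted to emit; only the output matches React Flow's documented nodes/edges shape:

```javascript
// Sketch: turn a model-produced graph spec into React Flow's
// `nodes` / `edges` arrays. The { nodes, links } input is a hypothetical
// prompt contract, not the app's real one.
function toReactFlowGraph(spec) {
  const nodes = spec.nodes.map((label, i) => ({
    id: String(i),
    data: { label },
    // naive vertical layout; a real app would use dagre/elk for auto-layout
    position: { x: 0, y: i * 90 },
  }));
  const edges = spec.links.map(([from, to]) => ({
    id: `e${from}-${to}`,
    source: String(from),
    target: String(to),
  }));
  return { nodes, edges };
}
```

The resulting arrays can be handed straight to the `<ReactFlow nodes={...} edges={...} />` component.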

After that I thought okay reading is fine but how do we know if the user is actually understanding anything. So I added a quick test feature. For every question, there are like 4 small questions generated. These act like a mini test so the user can check if they actually understood what they just read. It is simple but kinda powerful because it turns passive reading into something a bit interactive.
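
The grading side is tiny, something like this (the `{ question, options, answer }` shape is an assumed prompt contract, not the app's exact schema):

```javascript
// Sketch: validate the quiz JSON the model returns and grade the user's
// picks. Assumes 4 questions, each with an options array and the index
// of the right answer.
function gradeQuiz(quiz, picks) {
  if (!Array.isArray(quiz) || quiz.length !== 4) {
    throw new Error("expected exactly 4 generated questions");
  }
  let correct = 0;
  quiz.forEach((q, i) => {
    if (!Array.isArray(q.options) || q.options[q.answer] === undefined) {
      throw new Error(`malformed question ${i}`);
    }
    if (picks[i] === q.answer) correct++;
  });
  return { correct, total: quiz.length };
}
```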

Then I also made sure all of this works properly in dark mode. Graphs, tests, UI elements, everything. Earlier some parts looked off in dark theme so I refined that as well. Now both light and dark feel consistent.

I also updated the sources system a bit more to make it smoother and more reliable. And yeah I went back again to model settings and improved it slightly, tweaking how things behave and making it a bit more usable overall.

And one more thing I have started experimenting with is something like an Obsidian-style canvas. Still in progress but the idea is that users can create their own notes, connect them, maybe build a small knowledge graph kind of thing. Not fully done yet but trying it out and seeing how it fits into the system.

So yeah this part was less about one big feature and more like continuous building, improving, trying new ideas, fixing things, adding small details again and again. Kinda messy process but also the most fun part honestly.

satya//7YA

After building out the core system and getting the main flow working, I shifted my focus toward making the experience more complete and polished. This phase was all about adding depth, improving usability, and fixing the small things that actually matter a lot in real usage.

I started by expanding the rendering capabilities even further. I added proper LaTeX rendering so mathematical expressions can be displayed cleanly, which is especially useful for study-related content. Along with that, I also implemented code rendering, making sure code blocks are formatted nicely and remain readable. This made the platform much more versatile because now it can handle technical, academic, and general content all in one place.
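
The dispatch idea, very roughly: split the raw model output into segments and hand each piece to the right renderer. This is a simplified stand-in for what a real pipeline (e.g. react-markdown plus remark-math) does; it only handles fenced code blocks and $$display math$$:

```javascript
// Sketch: segment model output into markdown / code / latex pieces.
const FENCE = "`".repeat(3); // triple backtick, built here to keep this snippet fence-safe

function segmentOutput(text) {
  const pattern = new RegExp(
    FENCE + "(\\w*)\\n([\\s\\S]*?)" + FENCE + "|\\$\\$([\\s\\S]*?)\\$\\$",
    "g"
  );
  const segments = [];
  let last = 0;
  let m;
  while ((m = pattern.exec(text)) !== null) {
    if (m.index > last) {
      segments.push({ type: "markdown", value: text.slice(last, m.index) });
    }
    if (m[2] !== undefined) {
      segments.push({ type: "code", lang: m[1], value: m[2] });
    } else {
      segments.push({ type: "latex", value: m[3] });
    }
    last = pattern.lastIndex;
  }
  if (last < text.length) {
    segments.push({ type: "markdown", value: text.slice(last) });
  }
  return segments;
}
```

Each `code` segment then goes to a syntax highlighter and each `latex` segment to a math renderer like KaTeX.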

After that, I spent time polishing the overall system. One thing I noticed was that the model responses were not always as good as I wanted. So I worked on improving that by refining how the models are used. I adjusted prompts, improved the message structure, and kept tuning things to get better and more consistent outputs. I also focused on optimizing response behavior so the answers feel more useful and less random.

Then I moved back to the UI and started enhancing the sidebar again. I made it collapsible so users can choose when they want more space versus when they want quick access to tools. On top of that, I added a small but useful feature showing the current time and date at the top of the sidebar. It is a simple addition, but it makes the interface feel more alive and practical.

While working with sources, I ran into an issue where sometimes content fetched through the Jina API would fail or not load properly. Instead of leaving users stuck with broken entries, I added a one-click remove option so they can easily clean up their sources. This made the system feel more controlled and user-friendly.
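
In sketch form (the source object shape here is illustrative, not the app's real schema):

```javascript
// Sketch: load a source and mark it failed instead of leaving a broken
// entry; removeSource is the one-click cleanup.
async function loadSource(source, fetchImpl = fetch) {
  try {
    const res = await fetchImpl(source.url);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return { ...source, content: await res.text(), status: "ready" };
  } catch (err) {
    // keep the entry around but flagged, so the UI can offer "remove"
    return { ...source, content: null, status: "failed", error: String(err) };
  }
}

function removeSource(sources, id) {
  return sources.filter((s) => s.id !== id);
}
```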

To give users more flexibility, I added a model settings option where they can choose which model they want to use. This opens up more control depending on the type of task or response they expect. Along with that, I also introduced a font customization feature so users can change how the text looks based on their preference. Whether they like cleaner fonts or something more stylized, they can adjust it accordingly.

Another important addition was the ability to include custom links and text as context. This allows users to manually add their own sources and send them to the model, making the system more flexible and powerful for personalized workflows.

Finally, I implemented a proper theme toggle with well-designed dark and light modes. Instead of just switching colors, I made sure both themes feel consistent and visually appealing.

Overall, this stage was about turning the project from something that works into something that feels good to use. A lot of small improvements, fixes, and enhancements came together to create a much smoother and more complete experience.

satya//7YA

After setting up the search and context flow, I continued working further on improving the core experience of the app, especially around how content is rendered and processed.

One of the first things I focused on was implementing full markdown rendering across the app. Instead of treating content as plain text, I made sure everything could be properly parsed and displayed with rich formatting. This made a huge difference because now notes, articles, and extracted content actually feel readable and structured rather than raw data.

After getting markdown working, I moved on to improving the overall typography and visual presentation. For this, I used the Gemini model to help guide how content should be structured and displayed. I ended up designing a fully custom typography system, thinking through how headings, paragraphs, spacing, and emphasis should look. The goal was to make reading feel smooth and intentional, not just technically correct.

Once the rendering and design side felt solid, I shifted focus to the backend intelligence. I integrated the Gemma model and set up its API properly. This involved defining a clear message structure so inputs and outputs stay consistent and predictable. I spent time making sure the flow between user input, processing, and response generation was clean and scalable.
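
The message structure is basically the usual chat-completion shape, something like this (the exact fields the real API expects may differ, so treat this as the general idea):

```javascript
// Sketch: assemble a consistent message array before calling the model,
// folding the selected sources in as labeled context.
function buildMessages({ system, sources = [], question }) {
  const context = sources
    .map((s, i) => `[Source ${i + 1}: ${s.title}]\n${s.content}`)
    .join("\n\n");
  return [
    { role: "system", content: system },
    // only include a context message when there are sources
    ...(context ? [{ role: "user", content: `Context:\n${context}` }] : []),
    { role: "user", content: question },
  ];
}
```

Keeping this in one function is what makes inputs and outputs predictable: every request goes through the same shape.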

After that, I worked on designing the complete page layout around this system. I built a sidebar that can display all the sources being used, so users can clearly see where their context is coming from. This helps make the experience more transparent and organized.

Finally, I implemented a chunking mechanism for handling files. Whenever files are uploaded, they are automatically split into smaller chunks and then processed step by step. This makes it easier to handle large inputs and also prepares the system for more advanced features later on.
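
The chunking itself is simple, roughly like this (the sizes are made up; a small overlap keeps context from being lost at chunk boundaries):

```javascript
// Sketch: split uploaded text into fixed-size chunks with overlap.
function chunkText(text, { size = 1000, overlap = 100 } = {}) {
  if (overlap >= size) throw new Error("overlap must be smaller than size");
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```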

Overall, this phase of development was less about adding one single feature and more about strengthening the foundation. From markdown rendering to AI integration and file processing, everything now feels more connected and closer to a complete system rather than separate parts.

satya//7YA

After working on the sidebar and making it feel more structured and interactive, I decided to push it a bit further by adding something more functional instead of just visual improvements.

So what I did next was introduce a “search context” button directly inside the sidebar. The idea behind this was to make it easier for users to actually find useful content like notes, articles, blogs, or anything relevant without leaving the app.

To make this work, I built a backend API for it. Instead of relying on just one source, I experimented with combining results from multiple search providers. I used Exa for more context-aware results and also integrated Brave Search API, then merged both responses together to get more meaningful and diverse outputs. This part was interesting because I had to think about how to normalize and combine the data so it actually feels useful instead of messy.
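
The normalize-and-merge step looks roughly like this (the input field names are assumptions about each API's response shape; interleaving keeps either provider from dominating the top of the list):

```javascript
// Sketch: normalize two providers' results into one shape and dedupe by URL.
function mergeResults(exaResults, braveResults) {
  const norm = (r, provider) => ({ title: r.title, url: r.url, provider });
  const a = exaResults.map((r) => norm(r, "exa"));
  const b = braveResults.map((r) => norm(r, "brave"));
  const seen = new Set();
  const merged = [];
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    for (const r of [a[i], b[i]]) {
      if (r && !seen.has(r.url)) {
        seen.add(r.url);
        merged.push(r);
      }
    }
  }
  return merged;
}
```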

Once the search results were coming in properly, I connected it back to the UI and turned it into a proper feature. Users can now search for content directly from the sidebar and explore different sources in one place.

Then I added a simple “Add” button for each result so users can select what they want to include. The goal here was to make the flow feel smooth, like you search, pick what you need, and move forward without friction.

After that, I went a step deeper and worked on extracting actual content from the selected sources. Using Jina API, I wrote logic to retrieve the markdown content from the selected websites. So instead of just showing links, the system can actually pull in structured content that can be used inside the app itself.
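
Jina's reader works by prefixing the target URL with r.jina.ai, which returns the page as markdown. A minimal sketch (real usage would add the API key header, timeouts, and retries):

```javascript
// Sketch: fetch a page's markdown via the Jina reader endpoint.
function readerUrl(target) {
  return "https://r.jina.ai/" + target;
}

async function fetchMarkdown(target, fetchImpl = fetch) {
  const res = await fetchImpl(readerUrl(target));
  if (!res.ok) throw new Error(`reader failed: HTTP ${res.status}`);
  return res.text();
}
```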

Overall, this turned out to be more than just a small feature. It started as a simple idea but ended up becoming a full flow where users can search, select, and import useful context into their workspace. It definitely made the project feel more practical and closer to something people can genuinely use for studying or research.

satya//7YA

Shipped this project!

Hours: 1.6
Cookies: 🍪 16
Multiplier: 9.8 cookies/hr

Just deployed my latest project 🚀

It’s a minimal AI-powered study tool built to make learning a bit more interactive and less boring. You can explore notes, generate quizzes, and experiment with different ways of studying — all in one place.

⚠️ Note: Please don’t misuse the API keys. If you want, feel free to use your own keys while testing.

I’m still working on polishing things (UI tweaks, tracker improvements, etc.), but it’s in a usable state now.

Would really appreciate it if you try it out and share your feedback 🙌

Meanwhile, I’m off watching my favorite anime 😄

satya//7YA

So today I spent a solid chunk of time refining the sidebar of my project. Initially the goal was simple: just make it clean and usable. But as I got into it, I ended up pushing it much further to give it a more polished and professional feel overall.

I started by properly structuring the sidebar layout, making sure spacing, alignment, and hierarchy all felt intentional rather than just “placed there.” Once that base felt stable, I experimented with adding drag-and-drop functionality. That part was actually pretty fun to work on — getting elements to move smoothly and behave predictably took some tweaking, but it definitely made the sidebar feel more interactive and dynamic instead of static.

After that, I added a theme toggling option at the bottom of the sidebar. I wanted it to feel like a complete experience rather than just a UI component, so giving users control over themes felt like an important touch. It also helped me think more about consistency in design — making sure both light and dark modes looked equally clean.

Overall, the main intention throughout was to move away from something that looks “just functional” to something that feels thoughtfully designed and slightly more production-ready.

Apart from the UI work, I also explored how indexing should work in the project. I added options to control how files are split into chunks — like how many chunks should be created and how the data is processed. Honestly, I’m not 100% sure yet if I’ll end up using this feature in the final version, but I still went ahead and implemented the option anyway. I figured it’s better to have that flexibility early on rather than regret not designing for it later.

So yeah, this update was a mix of UI refinement, interactivity improvements, and a bit of forward-thinking experimentation. Not everything I built today might make it into the final product, but it definitely helped me understand the system better and push the project a step closer to something more complete.

satya//7YA

Shipped this project!

Hours: 0.47
Cookies: 🍪 9
Multiplier: 18.3 cookies/hr

Cheminformatic is a lightweight chemical analyzer built to test and validate the capabilities of my own hybrid Epoxy programming language. It parses user-provided molecular formulas, estimates atomic composition, detects bond types, computes bond-to-atom ratios, and predicts relative molecular stability using heuristic rules. The system dynamically generates a styled chemical analysis report and serves it through a Node.js HTTP server.

satya//7YA

fixed the api problems and image loading issues.. and optimized it for mobile devices a lil bit

satya//7YA

completely rebuilt the system using the groq api, which now handles all backend logic and data processing. i overhauled the entire website structure and UI for a cleaner look, rewrote the epoxy code into javascript from scratch to improve the core logic and finally deployed the live version on vercel.

satya//7YA

Shipped this project!

Hours: 4.5
Cookies: 🍪 87
Multiplier: 19.4 cookies/hr

Cheminformatic is a lightweight cheminformatics experiment created to test the parsing power and runtime behavior of the Epoxy programming language. The project analyzes user-provided molecular strings, estimates atomic composition, detects bond types (double and triple), approximates hydrogen counts, and evaluates a heuristic stability score based on the bond-to-atom ratio.
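
A JavaScript sketch of the same idea (the real project implements it in Epoxy; the bond markers and stability thresholds here are made-up heuristics, not the project's actual rules):

```javascript
// Sketch: count atoms in a simple formula, count explicit double ("=")
// and triple ("#") bond markers, and score stability by bond-to-atom ratio.
function analyzeFormula(formula) {
  const atoms = {};
  let atomCount = 0;
  // element symbol = one uppercase letter plus optional lowercase letter,
  // followed by an optional count, e.g. "C6", "Cl2", "H"
  for (const [, el, num] of formula.matchAll(/([A-Z][a-z]?)(\d*)/g)) {
    const n = num ? parseInt(num, 10) : 1;
    atoms[el] = (atoms[el] || 0) + n;
    atomCount += n;
  }
  const doubleBonds = (formula.match(/=/g) || []).length;
  const tripleBonds = (formula.match(/#/g) || []).length;
  const ratio = atomCount ? (doubleBonds + tripleBonds) / atomCount : 0;
  return {
    atoms,
    atomCount,
    doubleBonds,
    tripleBonds,
    stability: ratio < 0.1 ? "stable" : ratio < 0.3 ? "moderate" : "reactive",
  };
}
```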

satya//7YA

Resolved several bugs related to chemical parsing and optimized the logic for reading chemical formulas. Also updated the design of the Node HTTP server page for better readability.

satya//7YA

enhanced the ui by integrating a header image for improved chemical structure visualization… refined the overall layout and optimized additional system parameters.. :)

satya//7YA

experimented with building a lightweight cheminformatics engine in my hybrid language, Epoxy. implemented a basic parser to analyze chemical formulas and compute structural details, then designed a simple server-rendered tailwindcss + html interface using epoxy's best feature, javascript interpolation, to present the output in a structured format…

satya//7YA

Shipped this project!

Hours: 2.17
Cookies: 🍪 17
Multiplier: 8.05 cookies/hr

Added a new feature that lets users see how many pages the website has and how many URLs and JavaScript files are linked to it :)

satya//7YA

Improved the overall design and refined the UI for a cleaner look. Fixed several bugs, including handling cases where result fetching fails.. it now properly shows an error instead of breaking. Reduced unnecessary re-rendering to improve performance and optimized parts of the codebase for better structure and readability. Also added a new feature that lets users see how many pages the website has and how many URLs and JavaScript files are linked to it.
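
The counting part can be sketched with regexes like this (a real version would use a proper HTML parser, e.g. the browser's DOMParser):

```javascript
// Sketch: count distinct linked URLs and script files in an HTML string.
function pageStats(html) {
  const hrefs = [...html.matchAll(/<a\s[^>]*href=["']([^"']+)["']/gi)].map(
    (m) => m[1]
  );
  const scripts = [...html.matchAll(/<script\s[^>]*src=["']([^"']+)["']/gi)].map(
    (m) => m[1]
  );
  return {
    linkCount: new Set(hrefs).size, // deduped
    scriptCount: new Set(scripts).size,
  };
}
```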

satya//7YA

Shipped this project!

Hours: 8.0
Cookies: 🍪 125
Multiplier: 15.62 cookies/hr

Paradox is an advanced web analysis tool designed to evaluate the security and performance of modern web endpoints.

satya//7YA

successfully migrated the entire site's ui to react, moving to a more modular component-based architecture for easier code maintenance … after finalizing the code, i pushed the latest version to github and configured an automated deployment pipeline via vercel for seamless live updates :)

satya//7YA

designed the search results page with a much cleaner look :) integrated the api to make everything functional, then made some manual css adjustments here and there to improve the overall ui.. had to tweak a few things, but it was totally worth it… it's looking much better now.

satya//7YA

made the basic landing page ui of the website ::))

satya//7YA

Shipped this project!

Hours: 3.46
Cookies: 🍪 24
Multiplier: 7.03 cookies/hr

made this website, generated all the images for each status code, designed the background, wrote the code and finally deployed the website

satya//7YA

made this website, generated all the images for each status code, designed the background, wrote the code, and finally deployed the website :)

satya//7YA

Shipped this project!

Hours: 7.0
Cookies: 🍪 18
Multiplier: 2.53 cookies/hr

lol finalllyyy finished building this dog breed guessing game.. :) it was fun making this and i learned a lot about the dog breeds too.. hope you find it fun.. although it's not useful, it's definitely playful XD

satya//7YA

finally finished this project :)
polished the ui.. added some background patterns and wrote the function to set the colours to red or green for chosen options based on right or wrong.. and also added the full breed array.. it's done :) themksuuuuuu

satya//7YA

designed the ui of the homepage and the background of the website… and added the full list of dog breeds

satya//7YA

yesterday i wrote the basic functioning of the website, like the random options generator and answer checking function.. and also implemented the dog breeds api for random dog images.. now i will make an array of dog breeds for the options too..
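
the options generator and answer check are basically this (breed names and option count here are just examples):

```javascript
// Sketch: build one round's multiple-choice options and check an answer.
function shuffle(arr, rand = Math.random) {
  // Fisher-Yates shuffle on a copy
  const a = [...arr];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

function makeOptions(correctBreed, allBreeds, count = 4, rand = Math.random) {
  const wrong = shuffle(
    allBreeds.filter((b) => b !== correctBreed),
    rand
  ).slice(0, count - 1);
  return shuffle([...wrong, correctBreed], rand);
}

function checkAnswer(chosen, correctBreed) {
  // colour used for the chosen option's highlight
  return chosen === correctBreed ? "green" : "red";
}
```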


Comments

chefpenguino 3 months ago

HAHA this project idea is so cute