NoteForge banner

NoteForge

10 devlogs
39h 0m 56s


A minimal and simple study tool designed to make learning from your own content easier and more flexible.
You can explore content from the web, upload PDFs and images, add YouTube videos, create graphs, and even take quizzes all in one place. It’s built to keep everything you need for studying simple, organized and actually useful.

Start where you are. Use what you have. Do what you can.

This project uses AI

Used GPT-5.3-codex for autosuggestions, devlogs, README generation and debugging :)

Demo Repository


satya//7YA

Shipped this project!

Hours: 37.41
Cookies: 🍪 807
Multiplier: 21.58 cookies/hr

just deployed…

it’s a simple ai-powered study tool i’ve been building to make learning a bit more interactive and less boring. you can go through notes, generate quizzes, and try different ways of studying all in one place.

it’s still a work in progress, especially the ui and tracking parts, but it’s usable right now.

please don’t misuse the api keys. if you want to test properly, you can plug in your own keys.

if you try it out, i’d really appreciate any feedback 🙌

satya//7YA

The first thing I focused on was improving the table design. It was functional before, but honestly it looked a bit dull. So I reworked it to feel cleaner and easier on the eyes — adjusted spacing, fixed alignment, and added a few subtle visual improvements. Nothing over-the-top, just enough to make browsing data feel smoother and less tiring.

After that, I shifted into more of an iterative phase — less about one big change and more about continuously building on top of what was already there. The goal was simple: make the app feel more alive and engaging, not just something that gets the job done.

Then I started thinking about how plain text isn’t always the best way to explain things. For topics like history, processes, flows, or hierarchical ideas, visuals just make more sense. So I built a system that can generate flowcharts or diagram-like structures whenever the question calls for it. Instead of forcing everything into paragraphs, the app can now switch to a more visual explanation when needed.

To make that work, I used a graph-based approach with React Flow. Setting it up wasn’t exactly straightforward — managing nodes, edges, and layouts took some effort — but once it clicked, the results were really solid. The explanations feel much clearer and more intuitive compared to just reading blocks of text.
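The data shaping behind a setup like this can be sketched in a few lines. The `FlowNode`/`FlowEdge` shapes below mirror React Flow's node and edge model, but `stepsToFlow` is an illustrative helper (a linear process laid out as a vertical column), not the app's actual code:

```typescript
// Turn a linear list of steps into React Flow-style nodes and edges.
// The shapes mirror React Flow's data model; the layout is a simple
// vertical column (real layouts often use dagre or elk instead).
interface FlowNode {
  id: string;
  position: { x: number; y: number };
  data: { label: string };
}
interface FlowEdge {
  id: string;
  source: string;
  target: string;
}

function stepsToFlow(steps: string[]): { nodes: FlowNode[]; edges: FlowEdge[] } {
  // One node per step, spaced 80px apart vertically.
  const nodes = steps.map((label, i) => ({
    id: `n${i}`,
    position: { x: 0, y: i * 80 },
    data: { label },
  }));
  // Connect each step to the next one with a directed edge.
  const edges = steps.slice(1).map((_, i) => ({
    id: `e${i}`,
    source: `n${i}`,
    target: `n${i + 1}`,
  }));
  return { nodes, edges };
}
```

The resulting `nodes` and `edges` arrays can be passed straight to a `<ReactFlow>` component.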

Next, I tackled another problem: understanding. Just because someone reads something doesn’t mean they actually get it. So I added a small testing feature. For each question, the app generates a few quick follow-up questions — kind of like a mini quiz. It’s a simple addition, but it changes the experience from passive reading to something more interactive and self-checking.

I also spent some time making sure everything works properly in dark mode. Earlier, a few elements didn’t look quite right, especially graphs and UI components. I refined those so now both light and dark themes feel consistent and polished.

Along the way, I improved the sources system to make it smoother and more reliable. I also revisited the model settings, tweaking things slightly to make the overall behavior more usable and predictable.

Recently, I’ve started experimenting with something more ambitious — an Obsidian-style canvas. It’s still a work in progress, but the idea is to let users create notes, connect them, and maybe build their own small knowledge graph. Not fully there yet, just exploring how it fits into the app.

Overall, this phase wasn’t about a single standout feature. It was more about constant iteration — building, refining, experimenting, fixing, and repeating. A bit chaotic at times, but easily the most enjoyable part of the process.

satya//7YA

Once the core system was working, I shifted focus to making the app feel more complete and polished. This phase was mainly about improving usability and adding depth.

I started by upgrading rendering. I added proper LaTeX support so math expressions display cleanly, and also improved code rendering so code blocks look neat and readable. This made the app much more flexible for both technical and general use.
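As a rough illustration of the rendering side: inline math has to be separated from surrounding prose before a LaTeX renderer such as KaTeX can typeset it. Real markdown pipelines usually delegate this to remark-math; `splitMath` below is a simplified, hypothetical stand-in that only handles single-dollar delimiters:

```typescript
// Split a line of prose into plain-text and inline-math segments,
// so a LaTeX renderer can typeset only the math spans.
type Segment = { kind: "text" | "math"; value: string };

function splitMath(input: string): Segment[] {
  const segments: Segment[] = [];
  const re = /\$([^$\n]+)\$/g; // $...$ spans, no nesting or newlines
  let last = 0;
  let m: RegExpExecArray | null;
  while ((m = re.exec(input)) !== null) {
    if (m.index > last) {
      segments.push({ kind: "text", value: input.slice(last, m.index) });
    }
    segments.push({ kind: "math", value: m[1] });
    last = m.index + m[0].length;
  }
  if (last < input.length) {
    segments.push({ kind: "text", value: input.slice(last) });
  }
  return segments;
}
```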

Then I worked on improving response quality. I refined prompts, adjusted how messages are structured, and tuned the model behavior to get more consistent and useful outputs instead of random ones.

After that, I went back to the UI and improved the sidebar. I made it collapsible for better space control and added a small time and date display at the top. It is a simple detail but makes the interface feel more practical.

While working with sources, I noticed that content from the Jina API could sometimes fail. To handle that, I added a one-click remove option so users can quickly clean up broken entries.

I also added more customization. Users can now choose which model they want to use, and I included font settings so they can adjust how text looks based on their preference.

Another useful feature was allowing custom links and text as context. This lets users bring their own data into the system, making it more flexible for different workflows.

Finally, I implemented a proper light and dark theme system and made sure both look consistent and clean.

Overall, this phase was about refining everything. Small improvements, fixes, and additions that together made the app feel smoother and much better to use.

satya//7YA

After the search and context flow, I kept pushing on improving the core experience, mainly around how content is handled and displayed.

I started with full markdown support across the app. Instead of showing plain text, everything is now properly formatted, which makes notes and extracted content feel much more structured and readable.

Then I focused on typography and overall presentation. With some help from the Gemini model, I worked on designing a custom text system, defining how headings, spacing, and content should look. The aim was to make reading feel smooth and natural, not just correct.

Once that felt stable, I moved to the backend side. I integrated the Gemma model and set up a clear API structure so inputs and outputs stay consistent. I made sure the flow from user input to response generation is clean and reliable.
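A consistent message structure for a chat-style model API might look like the sketch below. The role names follow common chat-completion conventions; `buildMessages` and the system prompt text are assumptions for illustration, not the project's confirmed schema:

```typescript
// A minimal, consistent chat-message structure: one system prompt,
// one message per context source, then the user's question last.
type Role = "system" | "user" | "assistant";
interface ChatMessage {
  role: Role;
  content: string;
}

function buildMessages(context: string[], question: string): ChatMessage[] {
  return [
    {
      role: "system",
      content: "You are a study assistant. Answer using the provided sources.",
    },
    // Each source becomes its own context block so they stay separable.
    ...context.map((c): ChatMessage => ({ role: "user", content: `Source:\n${c}` })),
    { role: "user", content: question },
  ];
}
```

Keeping the question as the final message means the same builder works no matter how many sources are attached.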

After that, I designed the layout around this system. I added a sidebar that shows all active sources, so users can easily see where the information is coming from.

Finally, I implemented file chunking. Uploaded files are split into smaller parts and processed step by step, which helps in handling large data and sets up the system for future improvements.
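Fixed-size chunking with a small overlap is one common way to implement this; the overlap keeps sentences that straddle a boundary visible in both chunks. `chunkText` below is a minimal sketch under that assumption, not the app's actual splitter:

```typescript
// Split text into fixed-size chunks with a small overlap between
// consecutive chunks, so boundary sentences appear in both.
function chunkText(text: string, size: number, overlap: number): string[] {
  const chunks: string[] = [];
  const step = size - overlap;
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
    start += step;
  }
  return chunks;
}
```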

Overall, this phase was about strengthening the foundation. Everything from rendering to AI integration now feels more connected and stable.

satya//7YA

After working on the sidebar and making it feel more structured and interactive, I decided to push it a bit further by adding something more functional instead of just visual improvements.

So what I did next was introduce a “search context” button directly inside the sidebar. The idea behind this was to make it easier for users to actually find useful content like notes, articles, blogs, or anything relevant without leaving the app.

To make this work, I built a backend API for it. Instead of relying on just one source, I experimented with combining results from multiple search providers. I used Exa for more context-aware results and also integrated Brave Search API, then merged both responses together to get more meaningful and diverse outputs. This part was interesting because I had to think about how to normalize and combine the data so it actually feels useful instead of messy.
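The normalize-and-combine step can be sketched as below. The `SearchResult` field names are assumptions (each provider's real response shape differs and needs its own adapter); the interleaving and URL-based dedupe are one plausible strategy, not the app's confirmed logic:

```typescript
// Merge results from multiple search providers (e.g. Exa and Brave)
// into one normalized, deduplicated list.
interface SearchResult {
  title: string;
  url: string;
  source: string;
}

function mergeResults(...lists: SearchResult[][]): SearchResult[] {
  const seen = new Set<string>();
  const merged: SearchResult[] = [];
  // Interleave providers so one source doesn't dominate the top of the list.
  const max = Math.max(0, ...lists.map((l) => l.length));
  for (let i = 0; i < max; i++) {
    for (const list of lists) {
      const r = list[i];
      if (!r) continue;
      // Normalize the URL (strip trailing slash, lowercase) so near-duplicates collapse.
      const key = r.url.replace(/\/$/, "").toLowerCase();
      if (seen.has(key)) continue;
      seen.add(key);
      merged.push(r);
    }
  }
  return merged;
}
```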

Once the search results were coming in properly, I connected it back to the UI and turned it into a proper feature. Users can now search for content directly from the sidebar and explore different sources in one place.

Then I added a simple “Add” button for each result so users can select what they want to include. The goal here was to make the flow feel smooth, like you search, pick what you need, and move forward without friction.

After that, I went a step deeper and worked on extracting actual content from the selected sources. Using Jina API, I wrote logic to retrieve the markdown content from the selected websites. So instead of just showing links, the system can actually pull in structured content that can be used inside the app itself.
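Jina's Reader endpoint returns a page as markdown when the target URL is prefixed with `r.jina.ai`. The helpers below are illustrative, not the app's code: `jinaReaderUrl` builds that request URL, and `classifyFetch` shows one way a failed extraction could be flagged so the entry can later be removed:

```typescript
// Build a Jina Reader request URL: prefixing a page URL with r.jina.ai
// returns that page's content as markdown.
function jinaReaderUrl(target: string): string {
  return `https://r.jina.ai/${target}`;
}

type SourceState = "ok" | "broken";

// Decide whether an extraction succeeded. Non-2xx responses and empty
// bodies are treated as broken, removable sources.
function classifyFetch(status: number, body: string): SourceState {
  if (status < 200 || status >= 300) return "broken";
  if (body.trim().length === 0) return "broken";
  return "ok";
}
```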

Overall, this turned out to be more than just a small feature. It started as a simple idea but ended up becoming a full flow where users can search, select, and import useful context into their workspace. It definitely made the project feel more practical and closer to something people can genuinely use for studying or research.

satya//7YA

Shipped this project!

Hours: 1.6
Cookies: 🍪 16
Multiplier: 9.8 cookies/hr

Just deployed my latest project 🚀

It’s a minimal AI-powered study tool built to make learning a bit more interactive and less boring. You can explore notes, generate quizzes, and experiment with different ways of studying — all in one place.

⚠️ Note: Please don’t misuse the API keys. If you want, feel free to use your own keys while testing.

I’m still working on polishing things (UI tweaks, tracker improvements, etc.), but it’s in a usable state now.

Would really appreciate it if you try it out and share your feedback 🙌

Meanwhile, I’m off watching my favorite anime 😄

satya//7YA

Finally, after all of this, I moved into a phase where I just kept building on top of everything, trying to make the app feel more alive and actually fun to use, not just useful.

So the first thing I did was improve the table design. Earlier it worked, but it felt kinda plain and boring, so I redesigned it a bit to make it cleaner and more readable. Spacing, alignment, small visual tweaks, all that stuff. Nothing too flashy, but enough to make it feel nicer when you actually look at data.

Then I started thinking about something more interesting. Whenever a user asks something where a normal text answer is not enough, for example history, flows, step-by-step processes, or tree structures, it would actually be better to show it visually. So I added a flowchart/diagram-type system. Basically, if the question feels like it needs a visual explanation, the app can generate a graph-style diagram.

For that I used graph-based rendering and integrated React Flow. Setting that up was a bit tricky ngl, handling nodes, edges, layout and all, but once it started working it looked really good. It makes explanations feel way more clear compared to just paragraphs.

After that I thought, okay, reading is fine, but how do we know if the user is actually understanding anything? So I added a quick test feature. For every question, around 4 small follow-up questions are generated. These act like a mini test so the user can check if they actually understood what they just read. It is simple but kinda powerful, because it turns passive reading into something a bit interactive.

Then I also made sure all of this works properly in dark mode. Graphs, tests, UI elements, everything. Earlier some parts looked off in dark theme so I refined that as well. Now both light and dark feel consistent.

I also updated the sources system a bit more to make it smoother and more reliable. And yeah, I went back to the model settings again and improved them slightly, tweaking how things behave and making the app a bit more usable overall.

And one more thing I have started experimenting with is something like an Obsidian-style canvas. Still in progress but the idea is that users can create their own notes, connect them, maybe build a small knowledge graph kind of thing. Not fully done yet but trying it out and seeing how it fits into the system.

So yeah this part was less about one big feature and more like continuous building, improving, trying new ideas, fixing things, adding small details again and again. Kinda messy process but also the most fun part honestly.

satya//7YA

After building out the core system and getting the main flow working, I shifted my focus toward making the experience more complete and polished. This phase was all about adding depth, improving usability, and fixing the small things that actually matter a lot in real usage.

I started by expanding the rendering capabilities even further. I added proper LaTeX rendering so mathematical expressions can be displayed cleanly, which is especially useful for study-related content. Along with that, I also implemented code rendering, making sure code blocks are formatted nicely and remain readable. This made the platform much more versatile because now it can handle technical, academic, and general content all in one place.

After that, I spent time polishing the overall system. One thing I noticed was that the model responses were not always as good as I wanted. So I worked on improving that by refining how the models are used. I adjusted prompts, improved the message structure, and did some level of fine-tuning to get better and more consistent outputs. I also focused on optimizing response behavior so the answers feel more useful and less random.

Then I moved back to the UI and started enhancing the sidebar again. I made it collapsible so users can choose when they want more space versus when they want quick access to tools. On top of that, I added a small but useful feature showing the current time and date at the top of the sidebar. It is a simple addition, but it makes the interface feel more alive and practical.

While working with sources, I ran into an issue where sometimes content fetched through the Jina API would fail or not load properly. Instead of leaving users stuck with broken entries, I added a one-click remove option so they can easily clean up their sources. This made the system feel more controlled and user-friendly.

To give users more flexibility, I added a model settings option where they can choose which model they want to use. This opens up more control depending on the type of task or response they expect. Along with that, I also introduced a font customization feature so users can change how the text looks based on their preference. Whether they like cleaner fonts or something more stylized, they can adjust it accordingly.

Another important addition was the ability to include custom links and text as context. This allows users to manually add their own sources and send them to the model, making the system more flexible and powerful for personalized workflows.

Finally, I implemented a proper theme toggle with well-designed dark and light modes. Instead of just switching colors, I made sure both themes feel consistent and visually appealing.

Overall, this stage was about turning the project from something that works into something that feels good to use. A lot of small improvements, fixes, and enhancements came together to create a much smoother and more complete experience.

satya//7YA

After setting up the search and context flow, I continued working further on improving the core experience of the app, especially around how content is rendered and processed.

One of the first things I focused on was implementing full markdown rendering across the app. Instead of treating content as plain text, I made sure everything could be properly parsed and displayed with rich formatting. This made a huge difference because now notes, articles, and extracted content actually feel readable and structured rather than raw data.

After getting markdown working, I moved on to improving the overall typography and visual presentation. For this, I used the Gemini model to help guide how content should be structured and displayed. I ended up designing a fully custom typography system, thinking through how headings, paragraphs, spacing, and emphasis should look. The goal was to make reading feel smooth and intentional, not just technically correct.

Once the rendering and design side felt solid, I shifted focus to the backend intelligence. I integrated the Gemma model and set up its API properly. This involved defining a clear message structure so inputs and outputs stay consistent and predictable. I spent time making sure the flow between user input, processing, and response generation was clean and scalable.

After that, I worked on designing the complete page layout around this system. I built a sidebar that can display all the sources being used, so users can clearly see where their context is coming from. This helps make the experience more transparent and organized.

Finally, I implemented a chunking mechanism for handling files. Whenever files are uploaded, they are automatically split into smaller chunks and then processed step by step. This makes it easier to handle large inputs and also prepares the system for more advanced features later on.

Overall, this phase of development was less about adding one single feature and more about strengthening the foundation. From markdown rendering to AI integration and file processing, everything now feels more connected and closer to a complete system rather than separate parts.

satya//7YA

So today I spent a solid chunk of time refining the sidebar of my project. Initially the goal was simple: just make it clean and usable. But as I got into it, I ended up pushing it much further to give it a more polished and professional feel overall.

I started by properly structuring the sidebar layout, making sure spacing, alignment, and hierarchy all felt intentional rather than just “placed there.” Once that base felt stable, I experimented with adding drag-and-drop functionality. That part was actually pretty fun to work on — getting elements to move smoothly and behave predictably took some tweaking, but it definitely made the sidebar feel more interactive and dynamic instead of static.

After that, I added a theme toggling option at the bottom of the sidebar. I wanted it to feel like a complete experience rather than just a UI component, so giving users control over themes felt like an important touch. It also helped me think more about consistency in design — making sure both light and dark modes looked equally clean.

Overall, the main intention throughout was to move away from something that looks “just functional” to something that feels thoughtfully designed and slightly more production-ready.

Apart from the UI work, I also explored how indexing should work in the project. I added options to control how files are split into chunks — like how many chunks should be created and how the data is processed. Honestly, I’m not 100% sure yet if I’ll end up using this feature in the final version, but I still went ahead and implemented the option anyway. I figured it’s better to have that flexibility early on rather than regret not designing for it later.
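One way to expose a "how many chunks" option is to derive the chunk size from the desired count rather than fixing the size up front. `chunkByCount` is a hypothetical sketch of that idea; the ceiling-division rounding is an assumption:

```typescript
// Split text into roughly `count` chunks by deriving the chunk size
// from the desired count (ceiling division, so no text is dropped).
function chunkByCount(text: string, count: number): string[] {
  const size = Math.ceil(text.length / count);
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}
```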

So yeah, this update was a mix of UI refinement, interactivity improvements, and a bit of forward-thinking experimentation. Not everything I built today might make it into the final product, but it definitely helped me understand the system better and push the project a step closer to something more complete.
