
LectureVault

14 devlogs
30h 34m 50s

It takes your photos of notes and auto-classifies them into their subject folders, so you won't have to hunt for lecture notes in tons of camera pictures!
It works well for clean handwriting and digital notes (a screenshot of a slide, an AI explanation, a textbook page)! but it won't work well with messy handwriting for now! I will work more on that in the next increment!

This project uses AI

Used AI for debugging.
Also used AI to write the prompt I used in the Gemini classifier service.


aneezakiran07

search feature complete, index, quality, and navigation

Before

  • OCR text saved to the search index was capped at 8 display keywords per photo

  • a digital screenshot with 200 words was only searchable by 8 of them

  • multi-word queries like “artificial intelligence” only matched if the exact phrase appeared consecutively in the OCR text

  • there was no search screen

Now

  • a separate ocrFull field stores cleanedText plus the entire keyword list, so every word on a digital screenshot is indexed
  • _confirmAndSave passes ocrFull to SearchService.indexPhoto instead of the 8-keyword display string
  • search() splits the query into individual words and requires every word to appear somewhere in the text, in any order
  • search is the second tab in the bottom navbar
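The every-word-in-any-order matching rule can be sketched as a tiny standalone toy (not the real SearchService code, just the idea):

```dart
// Toy sketch of the new search() matching rule: split the query into words
// and require each word to appear somewhere in the indexed text, in any order.
bool matchesAllWords(String query, String ocrFull) {
  final text = ocrFull.toLowerCase();
  return query
      .toLowerCase()
      .split(RegExp(r'\s+'))
      .where((word) => word.isNotEmpty)
      .every((word) => text.contains(word));
}
```

So “artificial intelligence” now matches text containing “intelligence that is artificial”, which the old consecutive-substring match missed.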

PS: I'm never making a mobile app again :”) just testing it takes 15 to 20 min, first you have to transfer it to your phone, then run it and all :”( I can't run it on the emulator cuz it has to take photos on a real Android device!

aneezakiran07

full-text search across all notes - backend and frontend devlog

Before

  • OCR text was thrown away after Gemini classified each photo
  • no index existed anywhere on the device to search against
  • deletePhoto in storage_service removed the file but left ghost index entries with nowhere to clean them up
  • clearAll in storage_service wiped prefs but had no knowledge of the search index
  • no screen existed to type a query or see results

Now

  • SearchService.indexPhoto() saves OCR text + subject under a single ocr_index key in SharedPreferences, immediately after every successful save in _confirmAndSave()
  • SearchService.search() scans the entire index for a query substring and returns List<SearchResult> sorted alphabetically by subject
  • SearchService.removePhoto() is called inside StorageService.deletePhoto() so deleting a photo cleans the index atomically
  • SearchService.removeSubject() exists for future subject-deletion cleanup
  • SearchService.clearIndex() is called inside StorageService.clearAll() so a full app reset leaves no orphaned (fatherless, funny word ik) data
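A rough sketch of the single-key index idea, with a plain Map standing in for SharedPreferences (names and the entry shape are illustrative, not the actual service code):

```dart
import 'dart:convert';

// Stand-in for SharedPreferences: everything lives under one 'ocr_index' key.
final Map<String, String> _prefs = {};

Map<String, dynamic> _loadIndex() =>
    jsonDecode(_prefs['ocr_index'] ?? '{}') as Map<String, dynamic>;

void _saveIndex(Map<String, dynamic> index) =>
    _prefs['ocr_index'] = jsonEncode(index);

// Called right after a successful save: record the photo's OCR text + subject.
void indexPhoto(String path, String subject, String ocrText) {
  final index = _loadIndex();
  index[path] = {'subject': subject, 'ocr': ocrText};
  _saveIndex(index);
}

// Called from deletePhoto so no ghost entries survive the file deletion.
void removePhoto(String path) {
  final index = _loadIndex()..remove(path);
  _saveIndex(index);
}

// Called from clearAll so a full reset leaves no orphaned data.
void clearIndex() => _prefs.remove('ocr_index');
```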
aneezakiran07

Splash screen, UI and some fixes!
Hi! In this devlog I improved the UI of LectureVault.
I built an animated splash screen!!! Also made a logo for LV which is still so meh, but I've got no artistic sense so :”)
Fixed all the dim buttons across the app. Bottom nav icons, filter pills, the sort button, the Change storage location link in settings, Move and Delete in the selection bar, and all dialog action buttons are now solid vivid colors!! Voters complained before that my buttons weren't good :) Before, I just focused on getting the functionality right and didn't bother much with the UI, but many voters said my UI is trash, so I'm fixing it now!!!

Oliver

Tagged your project as well cooked!

🔥 Oliver marked your project as well cooked! As a prize for your nicely cooked project, look out for a bonus prize in the mail :)

aneezakiran07

Shipped this project!

Hours: 25.89
Cookies: 🍪 662
Multiplier: 25.57 cookies/hr

HI!!
So I built an app called LectureVault that lets you organize/classify your lecture note photos using AI!!

You pick a storage folder, create your subjects, and whenever you upload photos the app automatically scans them with OCR and uses Gemini AI to classify them into the right subject folder. It also shows you a confidence score so you know how sure the AI is, and you can manually override if it gets it wrong.
You can view photos in a fullscreen viewer with pinch to zoom, swipe between photos, and share them directly. There is also an Unclassified folder for anything the AI cannot figure out.
The hardest part was getting the classification pipeline right :)
I am happy with how it turned out! Suggest more features if you guys have any, and let me know what you think!!
If this does well in voting, I will also make a sort-by-topics feature!!!
I put so much effort into this, so I hope you like it!!
if you don't want to download the APK, please watch the demo video in the latest devlog!

aneezakiran07

Sorry, kinda long devlog, but no worries, I still document everything here :)
Hi!!
in this devlog, all I did was suffer :”))

so I tried adding a catch block in the upload screen that catches Gemini errors, like hitting the API limit!
but implementing this feature broke my whole flow; I tried to fix it many times but in vain, so I had to switch back to the previous commit :)
there was also an error with the toast message for undoing a delete action: it followed the user everywhere, so I fixed it by clearing the toast when moving to another activity!
Also, I added move-photos functionality, so users can now move images between folders
also updated the readme!
moreover, users now can't delete the folder named Unclassified, cuz obviously unclassified images need to go there!
also, images sometimes weren't loading!
Android returns content:// URIs from the image picker, and Image.file() can't read those, as it only works with real file paths.
so I cache the image bytes immediately after picking (while the URI permission is still active) using XFile.readAsBytes(), then render with Image.memory() instead.
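A minimal sketch of that fix, assuming image_picker's XFile API (Flutter widget code, so just the shape of it, not a drop-in snippet):

```dart
import 'dart:typed_data';
import 'package:flutter/widgets.dart';
import 'package:image_picker/image_picker.dart';

// Read the bytes while the content:// URI permission is still active,
// then render from memory instead of Image.file(), which needs a real path.
Future<Widget?> pickAndRender() async {
  final XFile? picked =
      await ImagePicker().pickImage(source: ImageSource.gallery);
  if (picked == null) return null;
  final Uint8List bytes = await picked.readAsBytes(); // cache immediately
  return Image.memory(bytes);
}
```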

Also, good news:
before I was using the Gemini 2.5 Flash Lite model and it only allowed 20 requests per day, but now I've found out about the 3.1 Flash Lite model and it can do 500 requests per day
so I am using it right now
also, my app sends 1 request for 20 pics in one session, so I am saving a lot of Gemini credit this way!! so you can use the application in peace cuz
if you upload 20 images per session, that's 500 × 20 = 10,000 pics a day!!!

also, if someone doesn't wanna worry about it at all, they can just clone my repo and put their own Gemini key in the .env file. but I don't think you'll need to do this

the demo video is attached; I can assure you the images I classified in the app are 100% correct :”)

aneezakiran07

Pinch to zoom feature and some UI improvements!
Hi! In this devlog I fixed and added a few things.

I added an Unclassified folder that gets created automatically during onboarding setup, so photos that the AI cannot confidently sort can go there

made the delete-folder flow so users now get a choice when deleting. They can either remove the folder from the app only, keeping all their files on the device, or delete everything including the physical folder and the photos inside it.

Improved the pinch to zoom in the image viewer.
Replaced the keep-original-copy toggle in settings with a clean info card explaining that my app never touches the camera roll.

I also attached a video testing every feature, check it out!

aneezakiran07

debugging and debugging and image view feature minimal implementation

Hi! In this devlog I fixed several issues

the first issue was that the app kept showing the onboarding screen every time it launched, even after setup was done. Fixed it by calling markSetupDone() at the end of the folder setup flow, so the app now remembers that setup was completed.
second, the Reset App button in settings was not properly clearing the setup flag. I fixed it by replacing the individual save calls with clearAll(), which wipes everything including the setup-done flag.
third, the classification toggles in settings were hardcoded before. so I made them save to SharedPreferences and read back on load, then wired each toggle into the actual upload logic
fourth, deleting a folder from the folder view screen did nothing. Fixed it by implementing the delete-folder flow properly with a warning dialog that tells you the folder will be removed from the app but won't be deleted from the device.
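The setup-flag logic behind fixes one and two boils down to something like this (a plain map stands in for SharedPreferences; the names mirror the devlog but the code is illustrative):

```dart
// Stand-in for SharedPreferences.
final Map<String, Object> _prefs = {};

// Fix 1: called at the end of the folder setup flow, so onboarding
// doesn't reappear on the next launch.
void markSetupDone() => _prefs['setup_done'] = true;

bool isSetupDone() => _prefs['setup_done'] == true;

// Fix 2: Reset App wipes everything, including the setup flag,
// instead of overwriting individual keys one by one.
void clearAll() => _prefs.clear();
```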

moreover, I added a new image viewer screen that opens when you tap a photo in the folder view. It supports swiping between photos, so the user can see the actual image we are classifying
I also removed the About section in history as I really don't need it!
also, I added create and delete folder options on the homepage too, and fixed navbar issues :)
in the next devlog, I will implement the zoom logic more correctly and will also implement picture-sharing functionality!!!
And will remove the redundant icons I placed before :”)

aneezakiran07

OCR + Gemini Classification
hi!
in this devlog, I got the full AI classification pipeline working. here's how it actually went

the OCR part was kinda smooth. I used Google ML Kit's text recognizer and ran it locally on each image, so no internet is needed. on top of that, I added a small OCR enhancer that fixes common mistakes like “Ca1cu1us” -> “Calculus”, pulls out keywords by removing stop words, and detects academic patterns like “derivative calculus” or “acid base chemistry” using regex. I also enhance the images before sending them to OCR (and rotate them in case they're in, yk, a weird position)
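The enhancer idea can be sketched as a tiny standalone toy (the substitution and stop-word lists here are illustrative, not the real ones):

```dart
// Fix common OCR digit/letter confusions, but only inside words that mix
// digits and letters, so real numbers like "2024" are left alone.
String fixOcrWord(String word) {
  final mixed =
      RegExp(r'\d').hasMatch(word) && RegExp(r'[A-Za-z]').hasMatch(word);
  if (!mixed) return word;
  return word.replaceAll('0', 'o').replaceAll('1', 'l').replaceAll('5', 's');
}

const _stopWords = {'the', 'a', 'an', 'of', 'and', 'is', 'to', 'in'};

// Pull out keywords: fix each word, then drop stop words.
List<String> extractKeywords(String text) => text
    .split(RegExp(r'\s+'))
    .map(fixOcrWord)
    .where((w) => w.isNotEmpty && !_stopWords.contains(w.toLowerCase()))
    .toList();
```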
for Gemini,
at first, I was sending one request per image, so 10 photos meant 10 requests. this instantly hit the rate limit (the free tier only allows 10 per minute). then I switched to batching (meaning many images -> extract keywords -> send as one prompt to classify them all). sounds easy but it's not: everything came back as Unclassified with 0% confidence, even though the OCR output looked fine, while single-image requests still worked perfectly.
so I added a debug panel on the upload screen to actually see what was going on. turns out the batch prompt was structured completely differently from the single-image prompt.
another issue was maxOutputTokens being set to 1000, which was cutting off the JSON response for larger batches. increasing it to 4000 fixed that part.

then I gave up and used NotebookLM to refine the batch prompt. the fix was putting the reasoning field before the subject field in the JSON, which makes Gemini process the text before deciding the label. combined with making the batch prompt match the single-prompt style (same formatting, examples, tone), it finally started working.

final setup: OCR runs locally in parallel, then a single Gemini request classifies everything and returns a JSON array, mapped back to photos using IDs.
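The mapping-back-by-ID step might look roughly like this (field names like id, reasoning, and subject follow the devlog's description; the exact response schema is an assumption):

```dart
import 'dart:convert';

// Parse the JSON array Gemini returns for a batch and map each result back
// to its photo by id. Note the field order in the response: reasoning comes
// before subject, so the model "thinks" before it labels.
Map<String, String> mapBatchResponse(String responseJson) {
  final items = jsonDecode(responseJson) as List<dynamic>;
  return {
    for (final raw in items)
      (raw as Map<String, dynamic>)['id'] as String: raw['subject'] as String,
  };
}
```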

aneezakiran07

Settings Screen
hi!
in this devlog, I integrated the settings screen to save and load everything from SharedPreferences. before this it was all in memory, so changes were lost when you left the screen.
you can now add a new subject, rename an existing one, or delete one.
it was mostly just replacing the local list mutations with calls to StorageService.saveSubjects() after every change. add, rename, and delete all update the list, then immediately write the whole thing back to prefs.
hmm, the tricky part was making sure the home screen reflects the changes after you come back from settings. fixed it by calling _loadData() inside didChangeDependencies!!!
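The mutate-then-persist pattern looks roughly like this (a plain map stands in for SharedPreferences; the helper names are illustrative):

```dart
import 'dart:convert';

final Map<String, String> _prefs = {}; // stand-in for SharedPreferences
List<String> subjects = [];

// Write the whole subject list back to prefs after every mutation,
// so leaving the settings screen never loses a change.
void saveSubjects() => _prefs['subjects'] = jsonEncode(subjects);

void addSubject(String name) {
  subjects.add(name);
  saveSubjects();
}

void renameSubject(String oldName, String newName) {
  final i = subjects.indexOf(oldName);
  if (i != -1) subjects[i] = newName;
  saveSubjects();
}
```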
next devlog: OCR using ML Kit and maybe Gemini integration too
Note: the whole About section is for future implementation, as I haven't made any release on the Play Store yet

aneezakiran07

Folder View
hi! in this devlog, I made the folder view screen show real photos from the device instead of grey placeholder boxes. users can tap any subject card on the home screen and see the actual lecture photos they uploaded
it reads the files directly from the subject folder on device storage and pulls real metadata like file name, size, and modified date. deleting a photo removes the file. long-pressing enters selection mode where you can delete multiple at once. the bottom sheet also shows a thumbnail of the photo.
I did it by using getPhotosInSubject from the storage service to list all jpg and png files in the folder, then used Image.file() to render them. added a deletePhoto method that calls file.delete() on the actual path.
the tricky part was reading the args passed from the home screen; I had to make sure they were read inside didChangeDependencies so the photos reload every time you navigate back to the screen.
next devlog: settings screen maybe TT

aneezakiran07

Photo Upload
hi!
in this devlog, i made photo upload work.
when you pick photos, they show up in the photo strip and the classification list with your actual images. you can tap the arrow on any photo to expand it and manually override which subject it belongs to. once you hit confirm and save, the photo gets physically copied to the correct subject folder on your device storage.
it was straightforward for the most part, just wired up image_picker for gallery and camera, and used dart:io to copy files into the right folder path.
the tricky part was permissions. Android 13+ and older versions handle storage permissions completely differently, so I had to request the photos permission first and fall back to the storage permission on older versions, otherwise it silently failed.
next devlog: folder view screen showing photos from device

aneezakiran07

Hi,
In this devlog, I got the persistent data layer working. the app needed to remember your subjects and storage folder between sessions.
So,
I used SharedPreferences to save and load everything locally on the device. So now when you go through onboarding, pick your storage folder, and add your subjects, all of that gets saved.
Next time you open the app, it reads that data and takes you straight to the home screen with your subjects.
In next devlog, i will integrate the photo upload system

aneezakiran07

HI!!!
In this devlog
I made the full structure of my application. I built only the UI of the whole system, cuz this way it's easy to visualize the flow
and also, I linked all the screens together, so now all that's left is the backend work. I also figured out what services to make, so basically I made the whole ARCHITECTURE of the system!
also, before, I was doing it in Java, but someone suggested I make it in Flutter because Flutter is easier to learn than Java
so here I am! I also made a new repo for the Flutter code cuz I pushed it into the prev repo with the Java code and it was causing errors, so I made a new one!
Also, the upload-pic functionality isn't working yet, as I haven't implemented its backend :”)

aneezakiran07

Hi, this is my second devlog on LectureVault! Today I worked on the main page where you can select images from your gallery to start organizing them. I used the Activity Result API with a launcher bcz the old startActivityForResult way is now deprecated. I learned that you have to use EXTRA_ALLOW_MULTIPLE to let the user pick more than one photo at a time.

Next update: letting the user make the folders first so it will be easy for my classifier to classify images!!

aneezakiran07

Hi, this is my first devlog on LectureVault (might change the name later)
and this is also my first time working on Android app development, so I will be learning it along with doing this project!!! I do know Java tho, so I hope it's easy to learn. I will be using Java for this project.
What I learned today:
folder structuring mainly
how to push from Android Studio (tho I still need more details on this one, will research it tomorrow)
XML is for all the layouts here, and it also gives you a cool design view.
In the attached video you can see that if we move things in the design layout, like moving the icons around, the code changes accordingly, no worries abt maintaining the x axis and y axis TT
Also, I'm finding this way harder than web dev :”)
