Activity

animesh_varma

woooo, I forgot to write a devlog, guess it will show up after my ship… Oh well

anyway, so flavortown just ended and I managed to slip in my ship exactly 2 min before it did…
I am so filled with adrenaline now I can’t express ittt

also the worst possible thing happened right before I was about to ship (~15 mins remaining to the deadline): I realized I hadn’t cleared the 15-vote deficit from the last ship of another project and had to provide 15 comprehensive reviews in less than 15 mins. To say it was a trip through hell would be an understatement (but don’t burn me, I still provided valid feedback to people, just mostly positive as I had no time to critique their work…), and that is the story of my very weird ship! ;(

anyway, as I said in my ship, the project itself is pretty easy to use; if you don’t understand something, please feel free to mail me. Now, with that out of the way, attached herewith are some screenshots from the demo script, enjoy!!!

[also, for some context, the video pre-processing is required because V-JEPA is pretty strict about what it takes in and how long clips should be to yield optimal results. The bounding box for the frames is resizable; if you increase its size, the extract is downsampled so V-JEPA can still ingest it…]
[can you tell I am burned out, is my speech coherent? NVM, please refer to the readme if you need to understand it better… sorry again…]
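For what it’s worth, the frame-count downsampling mentioned above boils down to picking evenly spaced frame indices from the extracted clip. A toy sketch of the idea (the function name and the 16-frame target are my assumptions, not STATERA’s actual values):

```python
def sample_frame_indices(total_frames: int, num_target: int) -> list[int]:
    """Pick num_target evenly spaced frame indices out of total_frames,
    so an arbitrarily long extract fits a fixed-length model input."""
    step = total_frames / num_target
    return [min(total_frames - 1, int(i * step)) for i in range(num_target)]

# e.g. a 300-frame extract downsampled to a 16-frame clip
indices = sample_frame_indices(300, 16)
```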

Attachment
Attachment
Attachment
Attachment
Attachment
Attachment
1

Comments

animesh_varma
animesh_varma 2 days ago

Clarification on Usability!! Just a quick note on usability! You might notice there is no live website link, and that is simply because STATERA is a heavy framework that needs 8GB+ of VRAM just to run a single forward pass. Since I don’t have the budget for dedicated cloud GPU hosting right now, I made it as easy as possible to run locally with an automated setup script. If you don’t have the hardware to run it yourself, no worries at all, I put full video demos right at the top of the README so you can see the UI and physics tracking in action without downloading a thing!

(P.S. If you want any specific video tested, please feel free to email it to me and I will provide you with the heatmap and the exact extracted metrics!)

animesh_varma

Shipped this project!

Woooooooooo, I am so very short on time, only ~2 mins till flavortown ends, so please I am so very very sorry that I can’t explain anything, just please look at the top of my readme and my devlogs and you shall understand what it is easily enough, there are videos in the readme if you want to see it in action without downloading anything, see ya, thankyouuuu!!!

and if you want to use it:
Just clone the repo on a Linux (preferably Arch Linux) or macOS machine and run setup.py. It’ll ask what you want to do; reply 2 to test out the demo. It will download everything required, and then you can run demo_app in the demo subfolder to try the model!
[also, on macOS the webpage won’t load in Safari, IDK why but it is what it is. You will also need a decently sized dedicated GPU for the pass; I assume ~8GB of VRAM should do the trick. Any MacBook or Mac mini with Apple silicon should work… at least I hope so…]

AAAAA, 1 minute left!!!!!

animesh_varma

Whewww, the final night before the deadline!

I just finished running the absolute final physics metrics on the new 50K-sigma model I spent the last 4 days training. And guess what? It turned into a complete coward!

Let me explain… To fix the “gravity bias” I talked about in the last log, I trained a new model that completely ignored angular orientation and used standard sigma, not my (potentially) novel crescent curriculum. It turns out, by removing that penalty, the network just took the easy way out. It parked its prediction right in the geometric center of the box and refused to move. It played it way too safe to actually track the physics. And while the average prediction was parked in the center, the heatmap covered the entire object edge to edge, making the prediction pointless (img attached).

But here is the crazy part: when I ran a custom “Physics Capture” script to evaluate the vector distances, my original 50K Crescent model emerged as the undisputed champion. Despite the occasional bimodal split, it actually pushed its prediction away from the geometric center and consistently tracked the true hidden mass. It earned its accuracy through real, physically grounded directionality, not just guessing the middle or flailing wildly like the baselines! [even though the 1K crescent and one of the baselines beat it there, they suffered very badly in other metrics, making the 50K crescent the only one to perform “well enough” on all of them, and of course its predictions look so cool and dynamic :)]

So, the original 50K Crescent model is officially the SOTA (for now at least; I won’t know for sure until I write the final paper)! I am spending tonight doing a massive cleanup of my GitHub repo [deleting gigabytes of local logs, fixing hardcoded paths, and backing up all the weights and datasets]. I’ll whip up the final README tomorrow morning and hit that SHIP button right before the Flavortown deadline (I am in IST, so adjust accordingly).

See you on the other side!
[again, the attached img is of the sigma 50K model just guessing the entire box to be safe…]

Attachment
0
animesh_varma

Whewww, 23h 12m logged! Truth be told, I didn’t believe I would make STATERA presentable before Flavortown ended (less than 2 days left, yikes), but here we are, with a single devlog for all 23h 12m no less (as I said, I never expected it to be completed in time)!

So what is this massive project? STATERA (Spatio-Temporal Analysis of Tensor Embeddings for Rigid-body Asymmetry) is basically a zero-shot physics engine inside a neural network. It watches an opaque box tumble in the air and mathematically calculates exactly where its hidden internal center of mass (CoM) is, just from the raw video.

What have I been doing for 23 hours? Fighting physics and neural networks. I generated 50,000 simulated tumbling videos to train a frozen V-JEPA vision model on my single RTX 5070 Ti. I had to build a custom 2.5D decoder and a mathematical extraction pipeline using OpenCV just to track the physics accurately. (and yes obv the training time is not in the hours of work, hackatime sends heart beats only when you’re actively coding… the cumulative training time including all the ablations is well over 2 weeks!)

The biggest challenge yet? The model actually overfit to gravity! It learned a “Settling-State Bias”: it realized heavy things usually face the floor when they land in the simulation. So when I tested it in the real world in my bedroom and the box bounced with the heavy side UP, the network’s brain literally split in half trying to decide between the visual bounce and its gravity bias [I call this Visual-Kinematic Aliasing for now]. Fixing this by changing the training curriculum to a phase-agnostic Gaussian dot took some time (but nothing compared to the time it took to come up with the previous 50K curriculum, which failed with the bias. It used a crescent target that forced the model to learn angular orientation, which accidentally taught it the gravity trick)!

What’s next? The final 50K model is finishing its run tonight around 9pm. While that finishes, I am going to build a slick local web UI for the demo so you guys can just upload a video and watch the physics tracking happen live. See you in the next devlog! [The attached video shows the last 50K model splitting its prediction as it fights internally between the face it thinks is correct from observation and the one facing down, which it learned to be correct most of the time. The crosshair is the actual ground truth from my tracking, and you can see where the model thinks the CoM is from the heatmap density! I have 76 such videos as my real-world validation set. Also, did I mention I’ll be releasing a 50K benchmark for other people who want to test against my baseline? I also believe the accuracy will scale beautifully with more compute, which I simply don’t have right now (fun fact: each 50K run takes around 4.5 days and a 1K run around 6-9 hours on my 5070 Ti; also, the 50K dataset alone is over 200GB!)]
(Also, the first video is where the prediction splits and the wrong face wins; in the second, the correct one is selected. You might think it is just guessing which face and is bound to get it right 50% of the time, but the other 74 runs not shared here make me believe otherwise…)

0
animesh_varma

Shipped this project!

Hours: 17.12
Cookies: 🍪 401
Multiplier: 23.41 cookies/hr

Eidolon v0.0.1-poc — Native Bluetooth AI Call Proxy!

Whewww, it’s finally here! After wrangling with low-level OS kernel sockets and
ESP32 C firmware, I am officially shipping the first Proof-of-Concept for
Eidolon!

What is it?
Eidolon is an open-source Bluetooth call proxy. It makes your
computer (or an ESP32) pretend to be a standard Bluetooth headset (HFP). This
lets you seamlessly intercept live cellular phone calls at the native hardware
level, pipe the audio into a terminal, and eventually let an AI handle the
entire conversation for you.

What did I make for this PoC?

  • Direct Linux Kernel Injection: Built a custom controller that bypasses
    PulseAudio/PipeWire entirely. It binds directly to the Linux kernel via
    AF_BLUETOOTH sockets to read/write raw audio packets straight into the call!
  • ESP32 Hardware Delegate: Since macOS and Windows heavily restrict Bluetooth
    APIs, I built a custom ESP-IDF C firmware to offload the hardware
    interaction to an ESP32, streaming the audio back over a Wi-Fi TCP bridge.
  • The Audio Loop: Hooked up a Curses TUI, Faster-Whisper for offline STT, and
    Edge-TTS to prove the audio injection actually works.

What was challenging?
Low-level Bluetooth audio is a nightmare to standardize.
Apple completely locks down their kernel (forcing me to build an Objective-C
“acoustic bypass” for Mac). As for the ESP32, getting 16kHz audio over Wi-Fi
without awful packet latency was tough; it’s currently functional but still pretty choppy! (and don’t get me started on the brownout detector ;()
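For a sense of the numbers involved: 16kHz 16-bit mono PCM is a fixed 32 kB/s, so the frame length you packetize at directly trades latency against per-packet overhead. A back-of-the-envelope sketch (not the actual firmware code; the names are mine):

```python
SAMPLE_RATE = 16_000   # Hz, the HFP-ish wideband rate used here
BYTES_PER_SAMPLE = 2   # 16-bit PCM, mono

def packet_bytes(frame_ms: int) -> int:
    """Payload size of one audio packet for a given frame length."""
    return SAMPLE_RATE * BYTES_PER_SAMPLE * frame_ms // 1000

# 20 ms frames -> 640-byte payloads at 50 packets/s;
# halving the frame halves latency but doubles the packet rate
```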

What’s next?
This release was all about proving the core hardware routing works.
The immediate next step is ripping out the basic STT/TTS and plugging in true,
full-duplex conversational models (like Gemini Live or OpenAI Realtime),
followed by building a mobile companion app to control it!

animesh_varma

Whewwww, it’s finally done (the PoC at least…)! I will be shipping the project really soon. As for the changes: I just polished up the setup script, tested it to make sure it runs on both Linux and macOS, and recorded the videos for the README.
[the img is from the testing video…]

Attachment
0
animesh_varma

Whewww, finally managed to make the audio on the ESP32 barely work. The audio comes out fragmented but is finally legible, so that’s a win… I guess!!
Anyway, now I am working on polishing the repo, making the setup work correctly, drafting a README and setting up some demos, see you soon in the ship!!

Attachment
0
animesh_varma

Well, first of all, welcome! This is the first devlog I am going to write completely myself without the use of any AI, so please excuse any grammatical or spelling mistakes, as I am going to make quite a few of them. Anyway, that’s beside the point. The thing is, I have been working on a new project for quite some time [as you can see, around 12 hours logged]. Now, why were there no devlogs in that time? Two of my friends had committed to building this project together, and I was waiting for them to join; as it was not my own project, I decided against putting it on Flavortown. But guess what… they both bailed on me (hence the project now being posted here; rest assured, all the commits were by me)!
Anyway, Eidolon is meant to function as a bridge that lets you deploy a live AI model [full duplex, to be exact] as a personal assistant that can send and receive phone calls on your mobile device on your behalf. It works by using the host to set up a Bluetooth HFP profile that the phone treats as a headset and sends audio to; since the host can both send and receive audio, it can interact natively with the call.
Anyway, it’s currently in a very early PoC stage where I have just gotten the Linux version working correctly [the macOS version is also in the final stage of the PoC but will not work as intended because of kernel-level constraints]. I am currently working on the ESP32 version [oh, did I mention I will use the ESP32 I bought from the shop to expand it to work without any desktop devices? It’s going to be sooo flipping cool!!]
Also, currently the Linux version (and any other working version) just has a basic TTS and STT implementation meant to function as a PoC. In the next (first) ship I’ll push all the working versions [ESP32, Linux, Windows and Mac] with the PoC TTS and STT. After that is done, I’ll begin actually implementing a live AI API [probably OpenAI, as I can get their credits from the shop and they are the only ones who provide a live API].
Also, did I mention Eidolon will also have a mobile app operating over Wi-Fi to share live data about calls, like the transcription and what the agent is saying, and to let you guide it? Again, it’ll be sooo fun, can’t wait!! [The attached screenshot is from the curses TUI I am using in the PoC on Linux (me connecting to my Pixel 9 and testing it with a call; the debug flag was set to true, hence so many logs)!]

Attachment
Attachment
Attachment
0
animesh_varma

Shipped this project!

Hours: 4.1
Cookies: 🍪 74
Multiplier: 17.99 cookies/hr

Sigil v0.5.0-dev3 — The “Zero-Trust Screen Shield” Update!

[The iOS build is also planned and pretty much guaranteed to come, so please don’t repeat that it isn’t on iOS in the feedback, thankyouu :D]
As promised, the bleeding-edge dev builds keep coming! This update tackles a roadmap issue and fundamentally overhauls how Sigil defends your data while you’re looking at it.

What did I make?
I built a “Zero-Trust Screen Shield” to defend against tapjacking, unauthorized snapshots, and display mirroring.

Anti-Hooking: Instead of relying just on Android’s native FLAG_SECURE (which can be bypassed by Xposed/LSPosed frameworks), Sigil now aggressively re-applies the secure flag whenever window focus is regained.

Visual Defense: A reactive Compose blur protects data when the notification shade is pulled down, and a native FrameLayout overlay blocks the OS Recents menu from capturing app state.

Hardware Defense: The app now actively detects screen recording and hardware capture attempts, clearing the clipboard and notifying the user. It also enforces filterTouchesWhenObscured to automatically drop input events from invisible malicious overlays (Anti-Tapjacking).

(Meta note: Sorry for the tag confusion last time! Just to reassure everyone, all dev builds post-0.4.5 are properly using the 0.5.0-devX base! This build bumps the versionName to 0.5.0-dev3 and versionCode to 453.)

What was challenging?
Dialog rendering was my biggest enemy. The new shielding mechanism conflicted profusely with legitimate in-app dialogs, causing the screen to lock up or render black. I wasted a ton of time trying to fix it by pushing back features and attempting to treat the shield screen itself as a high-precedence dialog. It was incredibly unreliable. I eventually had to rewrite how Sigil handles dialogs globally, creating new SecureAlertDialog wrappers that keep the app safe without breaking the UI.

🔗 Download Link: https://drive.google.com/file/d/15Najuf76L5rUdvZOnmXmAEjW5lk6DSmX/view?usp=sharing

animesh_varma

First off, a quick apology and clarification! 😅 I know there was some confusion regarding the version tags in my last devlog. To clear things up: I reassure you that all dev versions following v0.4.5 are correctly labeled with the 0.5.0-devX base in the dev branch. The math schema holds strong!

Now, onto the fun stuff. I just merged a massive PR for v0.5.0-dev3, and this one was an absolute beast to build. Remember that “Zero-Trust Screen Shield” I mentioned was breaking native dialogs in dev2? I finally beat it into submission!

The “Zero-Trust” Screen Shield Architecture:
Standard FLAG_SECURE is great, but it’s easily stripped by Xposed/LSPosed modules or malicious root apps. I built a custom, highly resilient alternative to aggressively protect the UI state:

Reactive UI Blur & Native Recents Defense: When the app loses window focus (like pulling down the notification shade), it triggers a Compose blur. For the OS Recents menu, I implemented a synchronous native FrameLayout shield during onPause to block snapshots entirely, defeating OS-level FLAG_SECURE strippers.

Capture Detection & Anti-Tapjacking: Added DETECT_SCREEN_CAPTURE to actively log hardware screenshot attempts (and clear the clipboard if it detects one!). More importantly, I enforced filterTouchesWhenObscured on the root view. If an invisible or malicious overlay tries to hijack your taps, Sigil drops the input entirely.

The Current Struggle (Why this took so long):
The challenges on this one were endless. The shield conflicted profusely with in-app dialogs. I initially tried engineering a system that treated the shield screen itself as a dialog with the highest precedence. Spoiler: It was a disaster. It was incredibly unreliable, completely messed up the Android lifecycle, and wasted a lot of my time.

I ended up scrapping that approach and building SecureAlertDialog and SecureDialog wrappers globally via SigilUiElements.kt. This whitelists legitimate in-app dialogs safely without compromising the underlying shield layer.

I’m packing up the APK now. The dev build is dropping right after this!

1

Comments

animesh_varma
animesh_varma 20 days ago

[Note: most of the time is unlogged, as I started Sigil 3+ months ago and didn’t know about Hack Club]

animesh_varma

Shipped this project!

Hours: 2.22
Cookies: 🍪 19
Multiplier: 7.07 cookies/hr

Appraise v0.0.1-poc — The Visual Extraction Engine!
The devlog before this covers most of the details… but to sum it up: I am starting a new project! It’s called Appraise, and as the name suggests, it allows you to appraise things around you. I have major plans for this going forward, so read more if you’re interested!

What did you make?
Before I can turn real life into a game, I had to figure out how to make real-life objects actually look pristine and cool. This v0.0.1 Proof-of-Concept is entirely focused on building a premium visual extraction pipeline.

The Extraction Engine: I hooked up Google ML Kit via Play Services to instantly identify and extract the primary object from a live CameraX preview.

Strict Subject Isolation: AI models can be messy and highlight background artifacts. To fix this, I wrote a blazing-fast, custom Kotlin Flood-Fill algorithm that isolates the single largest contiguous visual mass, completely deleting stray pixels and enforcing a one-object-only rule.

Dynamic VFX Compositing: The UI isn’t just an image slapped on a background! Using GPU-accelerated BlurMaskFilter passes and SRC_IN blending, a massive neon glow visually bleeds through the edges of the object.

What was challenging?
Getting the ML Kit segmentation to run without crashing on modern hardware! I had to specifically migrate the core ML pipeline to properly support the new Android 15 and 16KB memory page hardware architectures to prevent legacy native crashes on newer Pixel devices. Also, figuring out how to run a custom flood-fill algorithm and heavy GPU compositing math instantly on the main UI thread without causing massive lag was a huge headache.

What are you proud of?
I am super proud of the edge light-wrapping math. Normally, when you cut an object out of a photo using AI, it looks like a cheap, jagged Photoshop job. By getting the background neon glow to wrap and bleed through the edges of the AI mask, it perfectly hides real-world light reflections and jagged cuts. It actually makes mundane things look like premium collectible items!

Download Link: You can grab the debug APK directly from GitHub here: https://github.com/Animesh-Varma/Appraise/releases/download/v0.0.1-poc/Appraise-v0.0.1-poc-debug.apk

animesh_varma

I’m taking a quick breather from my other projects to drop a brand new, wildly ambitious app! Meet Appraise!!

Appraise is a gamified camera app that turns the mundane real world into a massive, globally deduplicated collectible RPG. Point your camera at everyday objects to instantly extract them from reality and add them to your personal digital catalogue with procedurally generated stats and rarities! It isn’t just a scanner; it’s a collaborative project aiming to build a global encyclopedia of everyday things, complete with a creator economy for artists who claim and illustrate the items you find (not the AI slop you find these days)!

What did I make?
Before I can turn real life into a game, I had to figure out how to make real-life objects actually look pristine and cool. This v0.0.1 Proof-of-Concept is entirely focused on building a premium visual extraction pipeline:

  1. The Extraction Engine: I hooked up Google ML Kit (via Play Services) to instantly identify and extract the primary object from a live CameraX preview. I also had to make the pipeline support modern Android 15 / 16KB memory page architectures to stop native hardware crashes on newer Pixels!
  2. Strict Subject Isolation: AI models can be messy and highlight background artifacts. To fix this, I cooked up a blazing-fast, custom Kotlin Flood-Fill algorithm that isolates the single largest contiguous visual mass, completely deleting stray pixels and enforcing a “one-object-only” rule.
  3. Dynamic VFX Compositing: The UI isn’t just an image slapped on a background! Using GPU-accelerated BlurMaskFilter passes and SRC_IN blending, a massive neon glow visually “bleeds” through the edges of the object. It perfectly occludes real-world light reflections and imperfect AI jagged edges!
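The “one-object-only” rule above boils down to a connected-component search: flood-fill every blob in the segmentation mask and keep only the biggest one. A minimal pure-Python sketch of the idea (the real implementation is in Kotlin and works on bitmaps; this is just the algorithm):

```python
from collections import deque

def largest_component(mask: list[list[int]]) -> list[list[int]]:
    """Keep only the largest 4-connected blob of 1s in a binary mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best: set[tuple[int, int]] = set()
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # BFS flood fill from this unvisited foreground pixel
                comp, queue = set(), deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:
                    y, x = queue.popleft()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    # zero out every stray pixel outside the largest blob
    return [[1 if (y, x) in best else 0 for x in range(w)] for y in range(h)]
```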

This project is my absolute first foray into something that actually requires me to handle complex backends and APIs, so it’s going to be a massive learning curve!

Repo is public, but APKs are GitHub-only for now while I build out the backend and iron out the bugs!

[BTW the attached screenshots give a quick look at the final edge-blending math!]

Attachment
0
animesh_varma

Shipped this project!

Hours: 4.63
Cookies: 🍪 64
Multiplier: 13.77 cookies/hr

Atropos v0.0.3 — The Rolling-Buffer Camera PoC!

As mentioned in my devlog, I am officially shipping the very first playable alpha of my new project, Atropos! You can grab the APK directly from the GitHub Releases page.

What did I make?
I built a continuous rolling-buffer camera app designed to act like a personal dashcam for your phone.

The Engine: A CameraX pipeline that continuously records and auto-deletes old 3-minute chunks in the background so your storage never fills up.

Eco-Mode: Because camera apps melt batteries, I built a feature that physically unbinds the screen preview after 1 minute of inactivity, plunging the screen into a true-black OLED power-saving mode while the camera keeps quietly recording.

Built-in Editor: A Media3 ExoPlayer interface that lets you scrub through your recent footage and instantly stitch/trim your desired clip using Media3 Transformer.

Hardware Polish: Integrated Camera2Interop to natively force Optical Image Stabilization (OIS), 10-bit HDR, and target FPS ranges directly from the Material 3 UI.

What was challenging?
Fighting Android’s media APIs to make the recording actually gapless! When one 3-minute chunk ends and another begins, Android usually drops 1-2 seconds of frames. I had to build an aggressive MediaCodec reconnection loop synced perfectly with VideoRecordEvent.Start to eliminate that gap.
Building the editor was also brutal. I had to map a global 6-minute timeline across multiple underlying MP4 files. To make scrubbing smooth without UI lag, I implemented dynamic seek swapping (it uses fast CLOSEST_SYNC while you drag the slider, but snaps to EXACT the millisecond you release it for frame-perfect cuts).
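Mapping that global timeline onto the underlying MP4 chunks is conceptually a prefix-sum walk over the chunk durations; a toy sketch of the lookup (function and variable names are my own, not the app’s):

```python
def locate(global_t: float, chunk_durations: list[float]) -> tuple[int, float]:
    """Map a global timeline position (seconds) to (chunk_index, local_offset)."""
    for i, duration in enumerate(chunk_durations):
        if global_t < duration:
            return i, global_t
        global_t -= duration  # skip past this chunk
    raise ValueError("position is past the end of the buffer")

# e.g. 200 s into two 3-minute chunks lands 20 s into the second file
```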

What are you proud of?
I am super proud of the “Virtual Cropping” logic! To give the user a perfectly seamless 6-minute editing window, the app actually maintains a 9-minute hardware buffer in memory. It hides the active chunk-deletion process entirely, so your editing timeline never jumps, drops, or glitches. Getting multiple independent video files to scrub and stitch as one continuous timeline was a massive win for me :D

🔗 Download Link: https://github.com/Animesh-Varma/Atropos/releases/download/v0.0.3/Atropos-v0.0.3-debug.apk

animesh_varma

I mentioned in my last Sigil update that I’d be dropping a brand new project soon… well, here it is! Meet Atropos!!

In Greek mythology, Atropos is the Fate who cuts the thread of time. I thought that was the perfect name for an open-source, continuous rolling-buffer camera! Think of it like a personal dashcam for your phone. You leave the app open, and it continuously records, auto-deleting old footage so your storage never fills up. When something cool happens, you simply cut and save the exact moment you need!!

This log covers the absolute birth of the app from an empty Activity to the v0.0.3 PoC Alpha!

What did I make?
I built out the entire core MVP pipeline across three rapid-fire versions:

v0.0.1 (The Engine): Built the CameraX rolling buffer and the Material 3 UI. Because leaving a camera app open melts your battery, I built Eco-Mode: after 1 minute of inactivity, the app physically unbinds the screen preview and plunges into a true-black OLED power-saving mode, while the camera keeps quietly recording behind the scenes.

v0.0.2 (The Seamless Illusion): Implemented the multi-chunk timeline. The app records in continuous 3-minute chunks.

v0.0.3 (Hardware Polish): Added Camera2Interop to bypass standard APIs and natively force Optical Image Stabilization (OIS), 10-bit HDR, and target FPS ranges. Also added a sleek M3 side drawer for quick toggles and pinch-to-zoom!

What’s next?
This is an early PoC Alpha (v0.0.3). The UI controls don’t currently poll hardware availability, so forcing 4K on a 1080p sensor will crash it. I’m moving to a standard Gitflow workflow next, fixing the hardware polling, and aiming for a stable v0.1.0 release soon, but there will definitely be more patch versions before the minor version.

Repo is public, but APKs are GitHub-only for now while I iron out the bugs and add more features!

[BTW the attached video gives a short demo of the app for you to see!]

0
animesh_varma

Shipped this project!

Hours: 28.05
Cookies: 🍪 460
Multiplier: 25.35 cookies/hr

Sigil v0.4.5-dev2 — The Crypto Expansion & Bleeding-Edge APKs!

As promised in my devlog, I am officially shipping bleeding-edge dev builds here on HackClub! To test this specific build, you’ll need to grab the APK from my Google Drive (link in the review instructions). It’s an alpha build, but it is fully signed!

What did I make?
I merged two massive PRs to kickstart the road to the v0.5.0 release:

The Crypto Expansion: I implemented a reusable Bouncy Castle wrapper to bring GCM (AEAD) support to major 128-bit block ciphers. You can now encrypt text using Camellia-GCM, Serpent-GCM, Twofish-GCM, and SM4-GCM!

The “Sigil Chain” Upgrade: The default 4-layer auto-encryption profile is now a 100% authenticated stack (XChaCha20-Poly1305 -> Serpent-GCM -> Twofish-GCM -> AES-GCM).

Legacy Theme Fixes: Fixed a critical bug on older devices (Android 11 and below) where custom seed colors caused invisible white-on-white text.

What was challenging?
Honestly, figuring out a versioning system that doesn’t break Google Play Console if I need to backport a hotfix! I ended up adopting a brand-new mathematical schema for version codes:

Positional logic: Major*10000 + Minor*100 + Patch (e.g., v1.0.0 = 10000).

Dev Schema: Tens place >= 5 designates alpha/dev builds. Dev builds use the last production base to ensure incremental ordering.

So, this build is v0.4.5-dev2, which equals versionCode 452 (400 for the prior v0.4.x base + 50 dev offset + 2). By basing both the name and code on the old base, it perfectly reserves 406–449 for any emergency v0.4.x production hotfixes, while incrementing smoothly toward the final v0.5.0 (which will be 500)!

What are you proud of?
I’m super proud of how fast I was able to implement a community feature request (Issue #15) right after coming back from my exam hiatus. Building out the dynamic contrast math for the legacy UI fixes was also super satisfying.

🔗 Download Link: https://drive.google.com/file/d/1I3Gx5KAPirKcIPwWHBICz3sDTFHbpFtE/view?usp=sharing

animesh_varma

I had to pause development for my Class 11 finals, but I’m officially back with MASSIVE news!

(Meta note: I actually had to use Gemini to shorten this log because my original draft somehow ballooned to over 3,000 characters!)

I just received an Emergent Ventures Grant! This is funding the huge v0.5.0 Steganography update and a Native iOS Port (just got an M5 Mac & iPhone 15 for it!). iOS dev starts in a new repo soon.

The New Plan: Bleeding-Edge APKs & Atropos
Going forward, I’m shipping every single dev build right here! I’ll link signed APKs so you can test features (and catch bugs) early. I’ll also be dropping updates for my other project, Atropos, alongside Sigil soon! 👀

What’s in the dev branch:

  • Crypto Expansion (PR #18): @marek22k requested Camellia-GCM (Issue #15). I expanded our AEAD support and upgraded the “Sigil Chain” to a 100% authenticated 4-layer stack (XChaCha20-Poly1305 -> Serpent-GCM -> Twofish-GCM -> AES-GCM) using a Bouncy Castle wrapper!

  • Legacy Theme Fixes (PR #17): Fixed a critical UI bug on Android 11 & below where custom colors caused invisible text. Added smart auto-contrast calculations.

  • The Current Struggle (dev3): I’m building a “Zero-Trust Screen Shield” to block screenshots & tapjacking. It’s breaking native dialog rendering, so the PR is currently a Draft while I rethink it. (Check the attached screenshot to see the shield I’m working on in action!)

Community & Stats:

Issue #10: A cross-device decryption bug turned out to just be a KDF mismatch! Resulted in a great new UI feature request.

Stats: 38 Stars (Doubled again with zero marketing?!), 2 Watchers, 1 Fork.

The official public GitHub Project Board is finally live with 16 issues scheduled.

[I will be shipping v0.4.5-dev2 immediately after this post since the build is already 100% completed!]

Attachment
Attachment
0
animesh_varma

Shipped this project!

Hours: 16.41
Cookies: 🍪 247
Multiplier: 15.07 cookies/hr

Sigil v0.4.5 - Encryption Profiles & The “Raw Mode” Update!

I just shipped v0.4.5, a massive overhaul focused on flexibility! Sigil is an Android app for multi-layered text encryption, and this update moves it from a strict tool to a customizable platform.

What did I make?
I built Encryption Profiles and Raw Mode. Previously, users were locked into my specific “Sigil Chain.” Now, you can save your own cipher configurations via the custom tab or use a Raw Mode profile to output standard, header-less ciphertext compatible with generic tools (like OpenSSL). I also finally ripped out the numeric-only restriction to support full alphanumeric passwords for the app lock!

What was challenging?
The hardest part was definitely fighting the Android Lifecycle and GitHub Actions simultaneously.

  1. State Management: Swapping between Numpad (for PINs) and QWERTY (for Passwords) dynamically without breaking the secure input flow was trickier than expected.
  2. CI/CD: autobuild kept failing on Android, so I had to rewrite the workflows to use manual Gradle modes to keep CodeQL and Linting functional.

What are you proud of?
I’m super proud of the community growth! Since the last update, the star count doubled (even though I didn’t market it anywhere!), and I received my first legitimate bug report (Issue #10) regarding cross-device decryption.

v1.0.0 is getting closer!!

animesh_varma

Okay, I know I said in the last log that “each merge into the dev branch will be followed by a devlog.” Clearly, I lied. I am absolutely terrible at keeping that promise.

I’ve been heads-down working towards v0.4.5. This covers roughly 16h 12m of work, mostly fighting state management and GitHub Actions.

Here is what I’ve been busy doing (shortened because of the 2000-character limit):

I) The Auth Overhaul (Passwords are here!)
I ripped out the old logic and replaced it with a system supporting full alphanumeric passwords.

  • Dynamic UI: The Lock Screen detects PIN vs. Password and swaps keyboards (Numpad vs. QWERTY) automatically.
  • Security: Still backed by TEE and Salted Argon2id.

II) Encryption Profiles
The biggest change. Not everyone wants the paranoid “Quad-Layer Cascade.”

  • Raw Mode: Output standard AES-GCM (no metadata) for OpenSSL compatibility.
  • Custom Chains: Save algo configs as “Profiles” with custom KDF overrides to switch instantly.
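
For illustration, a profile could be modeled as a small serializable record. The sketch below is a generic Python mock-up; the field names and values are hypothetical, not Sigil’s actual schema:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical profile shape -- field names are illustrative only.
@dataclass
class Profile:
    name: str
    chain: list  # cipher layers applied in order
    kdf: dict = field(default_factory=lambda: {"algo": "argon2id", "memory_kib": 65536})
    raw_mode: bool = False  # True -> emit header-less ciphertext

raw = Profile(name="openssl-compat", chain=["aes-256-gcm"], raw_mode=True)
paranoid = Profile(name="quad-cascade",
                   chain=["aes-256-gcm", "chacha20-poly1305", "serpent", "twofish"])

# Profiles serialize cleanly, so saving and switching is just a JSON round-trip:
restored = Profile(**json.loads(json.dumps(asdict(raw))))
assert restored == raw and restored.raw_mode
```

Keeping the KDF overrides inside the profile is what makes instant switching possible: everything needed to reproduce a ciphertext travels as one unit.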

III) CI/CD & Infrastructure
Spent ages fixing GitHub Actions. autobuild failed on Android, so I switched to manual Gradle with JDK 17 to restore CodeQL and Linting.
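
For anyone hitting the same wall, the workaround looks roughly like this in a CodeQL workflow (step names and action versions below are illustrative, not Sigil’s exact config):

```yaml
# Sketch: replace CodeQL's autobuild step with an explicit Gradle build.
- uses: actions/setup-java@v4
  with:
    distribution: temurin
    java-version: "17"
- uses: github/codeql-action/init@v3
  with:
    languages: java-kotlin
# - uses: github/codeql-action/autobuild@v3   # <- kept failing on Android
- name: Manual Gradle build
  run: ./gradlew assembleDebug --no-daemon
- uses: github/codeql-action/analyze@v3
```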

The Bug Report (Issue #10)
First real bug report! @hulkspec noted cross-device decryption failed.

  • The Cause: Likely a feature working as intended: differing Argon2 settings (e.g., 64 MB vs. 128 MB of memory) mean the derived keys won’t match. Waiting on confirmation.
  • The Fix: Manual syncing for now. Future: embed KDF params or add Profile sharing.
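
The failure mode is easy to reproduce with any password-based KDF. The sketch below uses PBKDF2 from the Python stdlib as a stand-in for Argon2id (purely so it runs without dependencies); the mechanics are the same: identical password and salt, different cost parameters, different keys:

```python
import hashlib

def derive_key(password: bytes, salt: bytes, iterations: int) -> bytes:
    # Stand-in for Sigil's Argon2id -- PBKDF2 is used here only because it
    # ships with the Python stdlib. The mismatch mechanics are identical.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

password, salt = b"correct horse", b"\x00" * 16

# Device A derives with one cost setting, device B with another:
key_a = derive_key(password, salt, 100_000)
key_b = derive_key(password, salt, 200_000)
assert key_a != key_b  # same secret, different KDF params -> different keys

# The proposed fix: ship the KDF parameters alongside the ciphertext so the
# receiving device re-derives with the sender's settings.
params = {"kdf": "pbkdf2-sha256", "iterations": 100_000, "salt": salt.hex()}
key_b2 = derive_key(password, bytes.fromhex(params["salt"]), params["iterations"])
assert key_b2 == key_a
```

This is why embedding KDF params (or sharing whole Profiles) fixes the bug for good: the receiver no longer has to guess the sender’s cost settings.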

Project Status:

  • Stars: 15 (Doubled since last devlog despite no marketing!)
  • Watchers: 2
  • Issues: 1 (Investigating KDF sync UX)

Polishing final docs now; v0.4.5 is imminent. After that, I’ll be setting up a public GitHub Project Board so you can actually see the roadmap instead of me just rambling about it here.

Attachment
animesh_varma

Shipped this project!

Hours: 0.28
Cookies: 🍪 4
Multiplier: 15.07 cookies/hr

My first ship! Honestly, I have no idea what I’m doing here (yet), but I’m excited to finally share what I’ve been working on for the last two months!

What is Sigil?
It’s an Android app designed for multi-layered text encryption. I wanted to build something that uses high-level cryptography but actually looks good with a modern Material 3 interface. I plan to expand it into a complete cryptography app, a one-stop shop for all your mobile cryptography needs!

What I did for v0.4.1

This update was all about “Transparency.” I worked on stripping out Google metadata blobs to make the app compliant with IzzyOnDroid and F-Droid standards.

What I learned

Android security is hard. Integrating biometrics (especially keys that are invalidated when a new biometric is enrolled!) and custom PINs separate from the system lock taught me a lot about how sensitive data is actually handled in memory (and how to handle Sigil’s own data as well!).

It’s still technically a pre-release, but it’s functional and very stable (with no known crashes) and live on Google Play and IzzyOnDroid. Looking forward to learning the ropes here :D

animesh_varma

This first devlog will cover what I have already done for Sigil, as I have been working on it for well over two months (before I joined Hack Club yesterday).
So far, I have managed to:

  1. Keep a very high security standard across the app
  2. Pass numerous verifications [Displayed in README as badges]
  3. Set up a CI/CD pipeline along with PR checks
  4. Publish the app on IzzyOnDroid and Google Play
  5. Implement 15+ algorithms
  6. Make an Auto and Custom tab for different levels of users
  7. Create a comprehensive onboarding
  8. Add TEE and hardware integration with a Keystore tab for key storage
  9. Implement a release tab
  10. Create a settings tab allowing tweaking of encryption parameters and other parameters
  11. Add applock with biometrics and custom pin separate from screen lock
  12. Allow changing of appearance
  13. Screen shield and Clipboard auto wipe

And that about winds it up. This was until v0.4.1, which is a pre-release (though not marked as such because of IzzyOnDroid publishing guidelines; this will be fixed with the release of v1.0.0).

The next release will be v0.5.0 with the following updates:

  1. Steganography Tab: A dedicated tab for hiding encrypted data inside other media.
  2. Biometric Upgrade: Transitioning authentication keys to AES-GCM.
  3. Custom Chains: The “Auto Mode” encryption chain (currently fixed at 4 layers) will become fully configurable. Users will be able to define their own custom cascades of ciphers for quick access.
  4. New Algorithms: Implementation of XChaCha20-Poly1305 and AEGIS-256 (addressing issue #3).
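
As a taste of what the steganography item involves, here is a minimal least-significant-bit (LSB) sketch in Python. It is a generic illustration of the technique, not Sigil’s planned implementation:

```python
def embed_lsb(cover: bytes, payload: bytes) -> bytes:
    # Write each payload bit into the least-significant bit of a cover byte.
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(cover), "cover too small"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    # Reverse the embedding: collect one bit per cover byte, LSB-first.
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

cover = bytes(range(256))
secret = b"hi"
assert extract_lsb(embed_lsb(cover, secret), len(secret)) == secret
```

A real implementation would embed into image pixel data rather than raw bytes, and would hide ciphertext (not plaintext) so the carrier reveals nothing even if the embedding is detected.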

I am keeping this devlog just to keep track of the changes I make; each merge into the dev branch will be followed by a devlog documenting everything (at least I hope so, as I am the worst at staying consistent :( )
Sigil currently has: 7 Stargazers, 1 Watcher (me), and 1 issue open (Add encryption algorithm).

Attachment

Comments

animesh_varma
animesh_varma 20 days ago

[Note: most of the time is unlogged, as I started Sigil 3+ months ago and didn’t know about Hack Club]