Jester AI banner

Jester AI

4 devlogs
15h 19m 7s

Jester AI is a neural defense extension that uses Large Language Models to scan web-page elements for phishing patterns, automatically blocking malicious sites to ensure a secure browsing experience.

This project uses AI

i used gemini ai to help with the technical setup, specifically for connecting the api to the chrome extension scripts

Demo Repository


jamallyemin

Shipped this project!

i built jester ai, an extension that uses a dual-layer architecture to block phishing sites. it doesn't just check urls; it scrapes the dom and uses gemini 2.5 flash to analyze screenshots for brand impostors in real time.

the hardest part was the quarantine logic, making sure the 'gray-out' filter stops the user from clicking anything while the ai is still thinking. i also built it so you can use your own api key, which keeps it decentralized and free for everyone. really happy with how the rebrand and the visual scanning turned out :3
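a minimal sketch of what that 'gray-out' quarantine could look like in content.js. the function name setQuarantine matches the devlog; quarantineCss and the style-element id are illustrative names, not necessarily the extension's actual ones.

```javascript
// id for the injected <style> tag so it can be found and removed later (assumed name)
const QUARANTINE_STYLE_ID = 'jester-quarantine';

function quarantineCss() {
  // desaturate the page and swallow every click while the AI is still thinking
  return (
    'html { filter: grayscale(100%) !important; }\n' +
    'body * { pointer-events: none !important; }'
  );
}

function setQuarantine(on) {
  let style = document.getElementById(QUARANTINE_STYLE_ID);
  if (on && !style) {
    style = document.createElement('style');
    style.id = QUARANTINE_STYLE_ID;
    style.textContent = quarantineCss();
    document.documentElement.appendChild(style);
  } else if (!on && style) {
    style.remove(); // verdict came back: restore normal interaction
  }
}
```

removing the style element on `setQuarantine(false)` undoes both the filter and the pointer-events block in one step, which keeps the teardown simple.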

jamallyemin

new devlog
the rebranding: officially swapped from aegis to jester. updated manifest.json and all UI components to reflect the new identity.

dual-layered detection architecture:

layer 1 (dom/static): content.js doesn't just look for keywords. it checks for hasLogin (password fields), pageData.url, and suspicious headers. if the local regex or DOM scan hits a threshold, it triggers the block.

layer 2 (visual/gemini): if the site looks "ok" but suspicious, background.js triggers captureAndAnalyze. it sends a base64 screenshot + the scraped DOM to gemini 2.5 pro. the prompt forces a JSON output to check for brand impostors (e.g. a site looking like Instagram but hosted on a random domain).

quarantine & interstitial:

the gray-out: while checkVisualSafety is running, I implemented a setQuarantine(true) function. it injects a CSS filter (grayscale(100%)) and a pointer-events block so the user can't interact with the site until the AI returns a verdict.

the block page: if verdict.rating >= 4, the extension stops the window (window.stop()).

security & api decentralization:

custom key integration: moved away from a shared key. built a system where users input their own google ai studio key. this keeps the project free for me and provides higher quotas for each user.

btoa encoding: added a layer of obfuscation by btoa encoding the key before storing it in chrome.storage.

stealth settings: designed the settings panel to be hidden by default, accessible via the ⋮ trigger in the popup header.

async & storage:

storage wrapper: created a getStorage helper to wrap chrome.storage.local.get into a promise, making the background script much cleaner with await.

messaging reliability: refactored the communication between content.js and background.js to prevent the "undefined" response error when fetching the API key during a live scan.

openphish sync: added a setInterval to background.js that pulls the OpenPhish public feed every 12 hours and stores it locally for instant blacklisting without needing the AI.
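a rough sketch of how layer 1's threshold check could be wired up. the field names (hasLogin, url, headers) follow the devlog, but the weights, the keyword regex, and the threshold of 3 are made-up illustration values, not the real ones.

```javascript
// illustrative phishing keywords; the real list would be much longer
const SUSPICIOUS_WORDS = /verify|suspended|urgent|confirm your/i;

function staticScore(pageData) {
  let score = 0;
  if (pageData.hasLogin) score += 2; // password field present
  if (SUSPICIOUS_WORDS.test(pageData.headers.join(' '))) score += 2;
  // raw-IP hostnames are a classic phishing tell
  if (/^\d+\.\d+\.\d+\.\d+$/.test(new URL(pageData.url).hostname)) score += 3;
  return score;
}

function shouldBlockStatically(pageData) {
  return staticScore(pageData) >= 3; // assumed threshold before escalating to layer 2
}
```

keeping layer 1 as a pure scoring function like this makes it cheap to run on every page and easy to unit test without any chrome APIs.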

jamallyemin

new devlog!! :3
custom api key integration: moved away from a hardcoded key. i implemented a system where users can input their own google ai studio key directly into the extension. this makes the scanning process decentralized and gives every user their own api quota so the system never hits a limit.
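a sketch of the save/load path for the user-supplied key, assuming the btoa obfuscation the newer devlog above describes. 'geminiKey' is an assumed storage key name, and chrome.storage.local is used promise-style (supported in modern Chrome).

```javascript
// store the key btoa-obfuscated; this hides it from casual inspection only,
// it is not encryption
async function saveApiKey(rawKey) {
  await chrome.storage.local.set({ geminiKey: btoa(rawKey.trim()) });
}

async function loadApiKey() {
  const { geminiKey } = await chrome.storage.local.get('geminiKey');
  return geminiKey ? atob(geminiKey) : null; // null → prompt the user to add one
}
```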

hidden settings ui: designed a sleek options menu. i added a three-dot (⋮) trigger in the header that slides down a hidden settings panel. it includes a password-masked input for the api key and a direct link to ai studio for onboarding.

smart state management: overhauled popup.js to handle real-time UI updates. if the user saves a new key, the extension immediately recognizes it, updates the "SAVE" button state with a "SAVED!" feedback, and triggers a fresh scan of the current tab.

notification layer: added a background listener for chrome.notifications. if a user tries to scan a site without a key, aegis now sends a system-level alert explaining exactly how to fix it instead of just failing silently in the console.
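a sketch of what that system-level alert could look like. chrome.notifications.create is the real API being called; the notification id, icon path, and wording here are illustrative.

```javascript
// options object for chrome.notifications.create when no API key is configured
function missingKeyNotification() {
  return {
    type: 'basic',
    iconUrl: 'icons/icon128.png', // assumed asset path
    title: 'No API key set',
    message: 'Open the extension settings and paste your Google AI Studio key.',
  };
}

// background.js usage (sketch):
// chrome.notifications.create('missing-key', missingKeyNotification());
```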

asynchronous flow fix: debugged the messaging between background.js and chrome.storage.local. used await patterns to ensure the api key is fully retrieved before the gemini fetch call is even attempted. no more "undefined" errors during scans.
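the getStorage helper mentioned in the later devlog can be sketched like this, wrapping the callback form of chrome.storage.local.get in a promise. requireApiKey and the 'geminiKey' storage name are assumed for illustration.

```javascript
// promise wrapper so background.js can simply `await` storage reads
function getStorage(key) {
  return new Promise((resolve) => {
    chrome.storage.local.get([key], (result) => resolve(result[key]));
  });
}

// usage sketch: the gemini fetch only happens after the key has fully resolved
async function requireApiKey() {
  const key = await getStorage('geminiKey'); // assumed storage key name
  if (!key) throw new Error('no API key configured'); // fail loudly, not with "undefined"
  return key;
}
```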


Comments

Shuflentity, 24 days ago

holy shit Aegis like from Risk of Rain 2

jamallyemin

here’s a breakdown of what i did in these 4 hours:

gemini api implementation: i connected the extension to the gemini 2.5 pro model (a completely free model from google ai studio, so users of this extension don't have to pay for anything). now, instead of just checking a list of bad urls, the extension scrapes the dom (headers, buttons, links) and asks the ai to summarize the safety of the page.

access denied ui: designed a custom "interstitial" block page. if the ai gives a threat rating of 4/10 or higher, the extension injects a full-screen overlay that prevents the user from interacting with the malicious site. (i was gonna add a screenshot of this block page but i couldn't find any phishing sites D:)
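turning the model's reply into a block decision might look like the sketch below. the response shape ({ rating, reason }) and parseVerdict are assumptions; the 4/10 threshold is the one the devlog states.

```javascript
// the model sometimes pads its JSON with prose or code fences,
// so slice out the outermost object before parsing
function parseVerdict(raw) {
  const start = raw.indexOf('{');
  const end = raw.lastIndexOf('}');
  if (start === -1 || end === -1) throw new Error('no JSON in model reply');
  const v = JSON.parse(raw.slice(start, end + 1));
  return { rating: Number(v.rating), reason: String(v.reason || '') };
}

function shouldShowBlockPage(verdict) {
  return verdict.rating >= 4; // threat rating of 4/10 or higher → interstitial
}
```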

dynamic whitelisting: implemented a system using chrome.storage.local. users can now manually trust a domain through the popup. once a site is whitelisted, the content script skips the ai scan to save api quota and make browsing faster.
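a sketch of that whitelist flow over chrome.storage.local. the storage key 'whitelist' and both function names are assumptions; the promise-style storage API is used.

```javascript
// popup.js calls this when the user manually trusts the current domain
async function trustDomain(hostname) {
  const { whitelist = [] } = await chrome.storage.local.get('whitelist');
  if (!whitelist.includes(hostname)) {
    whitelist.push(hostname);
    await chrome.storage.local.set({ whitelist });
  }
}

// content.js checks this first and skips the AI scan on a hit,
// saving api quota and making browsing faster
async function isWhitelisted(hostname) {
  const { whitelist = [] } = await chrome.storage.local.get('whitelist');
  return whitelist.includes(hostname);
}
```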

logic flow: structured the messaging between popup.js, content.js, and background.js so the safety verdict stays consistent even if you refresh the page.
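one way to keep the verdict consistent across refreshes is for background.js to cache the latest verdict per tab and answer popup queries from that cache; this is a sketch of that idea, with the message type strings being assumed names.

```javascript
// background.js keeps the authoritative per-tab verdict state
const verdictByTab = new Map();

function handleMessage(msg) {
  switch (msg.type) {
    case 'VERDICT_READY': // content.js reports a fresh scan result
      verdictByTab.set(msg.tabId, msg.verdict);
      return { ok: true };
    case 'GET_VERDICT':   // popup.js asks on open or after a refresh
      return { verdict: verdictByTab.get(msg.tabId) ?? null };
    default:
      return { ok: false };
  }
}

// background.js wiring (sketch):
// chrome.runtime.onMessage.addListener((msg, _sender, sendResponse) => {
//   sendResponse(handleMessage(msg));
// });
```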
