
Prompt Enhancer

22 devlogs
32h 31m 57s

As everybody knows, the better you prompt an LLM, the better the response will be.
This project helps you with that. If you don't know exactly what to include in a prompt, it will:
1. ask you questions to clarify your intent and get more information about the task,
2. optionally search the web and gather relevant context to enhance the prompt.

And finally, it will apply the best practices of prompt engineering for your target model to produce the best possible prompt.

This project uses AI

GitHub Copilot and Antigravity for code generation and debugging

Demo Repository


Efe

Shipped this project!

Hours: 32.48
Cookies: 🍪 0
Multiplier: 7.88 cookies/hr

I learnt that you should always test before shipping. Turns out there were some issues with the previous shipment. This shipment:

  • Fixes bugs
  • Adds a reasoning effort option
  • Updates the README
  • Fixes a little issue with the site tour
Efe

Bug fix

I realized that with the last commit I introduced a critical bug: when tools are called, the response doesn't fit the response schema and errors out. I made a temporary fix for now: it tries JSON, XML, and markdown parsing, and if they all fail, it calls the LLM again with formatting instructions and no tools. I'm sure this is not very efficient, but it's the best I have right now.
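Roughly, that parsing cascade might look like the sketch below. The function and field names (like a top-level `prompt` key or `<prompt>` tag) are my own assumptions for illustration, not the project's actual code:

```python
import json
import re
import xml.etree.ElementTree as ET

def parse_enhanced_prompt(text):
    """Try JSON, then XML, then a fenced markdown block.
    Returns the extracted prompt string, or None if every parser
    fails (the caller would then re-ask the LLM with formatting
    instructions and tools disabled)."""
    # 1. JSON: expect something like {"prompt": "..."}
    try:
        data = json.loads(text)
        if isinstance(data, dict) and "prompt" in data:
            return data["prompt"]
    except json.JSONDecodeError:
        pass
    # 2. XML: expect a <prompt>...</prompt> element
    try:
        root = ET.fromstring(text)
        node = root if root.tag == "prompt" else root.find(".//prompt")
        if node is not None and node.text:
            return node.text.strip()
    except ET.ParseError:
        pass
    # 3. Markdown: take the first fenced code block
    match = re.search(r"```(?:\w+\n)?(.*?)```", text, re.DOTALL)
    if match:
        return match.group(1).strip()
    return None
```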

Efe

Bug fixes

  • If the LLM hadn't asked you questions, the output wouldn't be structured, causing parsing errors. This is now (almost) handled.
  • Also added structured output to all LLM calls.

Choose your reasoning effort

  • Now you can choose the reasoning effort the enhancer agent will use while enhancing your prompt. Hoping this will reduce the time enhancements take (even medium reasoning takes a lot of time).
Efe

Shipped this project!

Hours: 11.49
Cookies: 🍪 85
Multiplier: 7.4 cookies/hr

This version adds support for edit requests and better prompting style options.

  • You can now choose what your prompt will look like: Markdown, XML, few-shot, CoT, etc.
  • And if you don't like it or something is missing, you can tell the AI what's missing and it will make the necessary updates to your prompt
Efe

App Tour

I have been putting this off for a very long time. Today, I implemented this with reactour.
I thought it would be easy, but the insufficient documentation (IMO, no offence to the devs) had other plans

  • Also did some bug fixes and mobile-view adjustments.
Efe

A lot of bug fixes

  • Well, turns out I had the message types and the edit-request parameter mixed up in the edit web socket payload. I wasted a lot of time finding this issue.
  • Sometimes the LLM didn't call a tool and didn't produce an output; it just did reasoning. In that case the output was None and everything failed. I changed the loop conditions a little; hope it works.
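The loop-condition fix could look something like this sketch, where `call_llm` stands in for the real model call and the round limit is a hypothetical safeguard:

```python
def run_agent(call_llm, max_rounds=5):
    """Drive the model until it yields a final answer.
    `call_llm` returns a (tool_calls, output) pair.  The old loop
    assumed `output` was always set, which crashed when the model
    only emitted reasoning; now a None output with no tool calls
    simply triggers another round instead of failing."""
    for _ in range(max_rounds):
        tool_calls, output = call_llm()
        if tool_calls:
            continue            # execute tools here, then loop again
        if output is not None:
            return output       # genuine final answer
        # reasoning-only turn: no tools, no output -> retry
    raise RuntimeError("model never produced an output")
```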

Structured Output

  • I was telling the LLM to output XML and parsing it with regex, and that was all the output validation (in other words, there was none: if the LLM didn't output XML, it would just fail). So I added a response format with pydantic.
  • I adjusted the prompt a little to work with structured output, but it still needs some work.
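A pydantic response format of the kind described might be sketched like this. The field names are illustrative, and I'm assuming the pydantic v2 API:

```python
from pydantic import BaseModel, ValidationError

class EnhancerOutput(BaseModel):
    """Illustrative response schema; the real field names may differ."""
    questions: list[str] = []   # clarifying questions, possibly empty
    prompt: str                 # the enhanced prompt itself

def parse_output(raw: str):
    """Validate the model's raw text against the schema.
    Returns None instead of crashing on malformed output."""
    try:
        return EnhancerOutput.model_validate_json(raw)
    except ValidationError:
        return None
```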
Efe

New feature: Edits

I started working on a feature that would allow the user to request AI edits on the enhanced prompt.

  • Implemented the edit request input and web sockets logic on the front end
  • Created an edit consumer to handle the edits
  • Improved and adjusted prompts to handle edits

Still didn't test anything. I'm soooo tired
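The socket-side dispatch for edit requests could look roughly like this pure-function stand-in for the actual consumer; the message `type` and field names are assumptions:

```python
import json

def handle_socket_message(raw, apply_edit):
    """Dispatch one incoming websocket frame.
    Keeping the message `type` and the edit-request payload explicit
    makes them harder to mix up.  `apply_edit` stands in for the
    LLM call that rewrites the prompt."""
    msg = json.loads(raw)
    if msg.get("type") == "edit_request":
        # payload carries the current prompt and the user's instruction
        edited = apply_edit(msg["prompt"], msg["instruction"])
        return json.dumps({"type": "edit_result", "prompt": edited})
    return json.dumps({"type": "error", "detail": "unknown message type"})
```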
Efe

Why is frontend so hard! 😭

  • Made the website mobile compatible
  • Now you can manually edit the prompt (in the future you will be able to request AI edits)
  • Added copy and save prompt buttons
Efe

I forgot to devlog and now I don't remember what exactly I had done! If I remember correctly,

  • I added a few more options to prompt style, namely Plain formatting, and CoT to techniques
  • The LLM was confusing the requested prompt style with the ultimate output the user wants from the AI. I made it more explicit in the prompt, but time will tell if it worked.
  • Finally, added a checkbox to specify whether the target model is reasoning-native, so we don't depend on the LLM's knowledge to identify that (also updated the prompt accordingly).
Efe

Updated the prompt. It was fully XML; I learnt that this is not best practice (I hope that information was correct). So now it's a hybrid: markdown for guidelines and XML for variables.
Also improved the prompt style section a little bit.
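The hybrid layout might look roughly like this; the section names and tags are illustrative, not the project's actual prompt:

```python
def build_prompt(task: str, context: str) -> str:
    """Hybrid prompt layout: markdown for the static guidelines,
    XML tags for the variable parts that get filled in per request."""
    return f"""## Guidelines
- Clarify the user's intent before enhancing.
- Apply the target model's preferred formatting style.

<task>
{task}
</task>

<context>
{context}
</context>"""
```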

Efe

Added prompt style options: now you can choose the format of your prompt: XML, Markdown, long, short, few-shot, etc.
Divided the prompt.py file into smaller functions for readability.
And I unfortunately (or maybe fortunately) learned that full-XML prompts are counterproductive, which means I need to refactor the prompt agaaaaiiinnnn!

Efe

Shipped this project!

Hours: 18.48
Cookies: 🍪 489
Multiplier: 26.43 cookies/hr

I built this to interact with AI more easily and effectively. I believe I learned a lot about prompt engineering.

Efe

Added user messages to display when HCAI is down, when the fallback is being used, and when the fallback is also down.
Implemented a better fallback mechanism.

Efe

Updated the README
(I know all the emojis make it look AI-generated, but I like it this way)

Efe

Why does everything keep breaking?! Ahhaaaa…
Today I tried to use it for something and the backend gave an error: JSON parsing.
Why?! It was working yesterday. I checked the prompt, thinking maybe something random was making the LLM go crazy. Couldn't figure it out.
Finally added json-repair and it works fine for now.
And now the question Cancel button actually does something.

Please DON’T break again.
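For illustration, here's a tiny stdlib-only stand-in for the kind of cleanup json-repair automates; the real library handles far more failure modes than these two:

```python
import json
import re

def lenient_loads(text):
    """Fix two common LLM output quirks before parsing:
    a surrounding markdown fence and trailing commas.
    A toy stand-in for a library like json-repair."""
    # drop a surrounding ```json ... ``` fence if present
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    # remove trailing commas before } or ]
    text = re.sub(r",\s*([}\]])", r"\1", text)
    return json.loads(text)
```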

Efe

Made the project ready for production.
Deployed the frontend to Vercel and the backend to Railway.

Efe

Well, it turns out you shouldn't use CoT prompting with reasoning models.
Learning this, I had to go over the prompt again.
I learned that gemini-3-flash works best with XML, so I switched to XML.
Overall, spent (I hope not wasted) a lot of time on the prompt again.

Updated the frontend: prompts are auto-saved, stored in local storage, and displayed neatly in the sidebar.

Efe

Questions are now received, sent, and answered as arrays instead of a plain string.
Fixed bugs with web sockets.
Added rate limiting.
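As one possible shape for that rate limiting, here's a minimal fixed-window limiter; the project may well use something else entirely:

```python
import time

class FixedWindowLimiter:
    """Allow `limit` requests per `window` seconds for each key
    (e.g. a client IP or session id).  Minimal in-memory sketch."""
    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.hits = {}  # key -> (window_start, count)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.hits.get(key, (now, 0))
        if now - start >= self.window:
            start, count = now, 0      # window expired: start fresh
        if count >= self.limit:
            return False               # over the limit for this window
        self.hits[key] = (start, count + 1)
        return True
```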

Efe

Made significant improvements to the prompt: it now goes through a prompted chain-of-thought process.
Moved everything about the prompts to a new file.
(I want to apologize to HCAI for the immense costs to come)

Efe

Removed Celery and Redis so that deployment would be cheaper.
Realized "the prompt" needs a lot of improvement.

Efe

Created a .bat script to start the necessary services.
Made minor improvements to the prompt.
Also attempted to package the app into a .pyz file, but failed :(

Efe

Implemented Channels and web sockets to allow the LLM to ask the user questions during prompt enhancement.

Efe

Migrated to Django + Vite (the frontend looks almost exactly the same).
Fixed the bug with the additional web search query.

Efe

The first primitive version.
Can save and load prompts.
Streamlit interface.
Has web search capabilities, though they aren't sufficient yet.
