Comments, support and feedback
- Harshil Siyani (Maker) · 3 days ago
As an AI developer, I spend a lot of time running prompts across different models and configs: tweaking temperature, comparing outputs, and manually checking which one gets it right. It's repetitive. Time-consuming. And easy to mess up.

So I built PromptPerf: a tool that tests a single prompt across GPT-4o, GPT-4, and GPT-3.5, runs it multiple times, and compares the results to your expected output using similarity scoring.

⚡ No more guessing which prompt or model is better
⚡ No more switching between tabs
⚡ Just clean, fast feedback, plus a CSV export if you want it

This started as a scratch-my-own-itch tool, but now I'm opening it up to anyone building with LLMs. Unlimited free runs. More models coming soon. Feedback shapes the roadmap.

I'd love to hear what you think! Your feedback and help will make sure I build a product that solves your problems 👉 promptperf.dev
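For anyone curious what that workflow looks like in code, here is a rough sketch of the same loop: run one prompt against several models a few times, score each output against an expected answer, and write a CSV. This is not PromptPerf's implementation; it assumes the official OpenAI Python SDK (openai>=1.0) and uses difflib's SequenceMatcher as a placeholder similarity metric, which may differ from PromptPerf's actual scoring.

```python
# Sketch of the prompt-vs-models comparison loop (not PromptPerf's code).
import csv
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODELS = ["gpt-4o", "gpt-4", "gpt-3.5-turbo"]
RUNS_PER_MODEL = 3


def similarity(a: str, b: str) -> float:
    """Rough 0..1 similarity between two strings (stand-in metric)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def run_prompt(prompt: str, expected: str, temperature: float = 0.7):
    """Run the prompt on each model several times and score every output."""
    rows = []
    for model in MODELS:
        for run in range(1, RUNS_PER_MODEL + 1):
            resp = client.chat.completions.create(
                model=model,
                temperature=temperature,
                messages=[{"role": "user", "content": prompt}],
            )
            output = resp.choices[0].message.content or ""
            rows.append({
                "model": model,
                "run": run,
                "score": round(similarity(output, expected), 3),
                "output": output,
            })
    return rows


if __name__ == "__main__":
    results = run_prompt(
        prompt="Summarize in one sentence: the quick brown fox jumps over the lazy dog.",
        expected="A quick brown fox jumps over a lazy dog.",
    )
    # CSV export, similar in spirit to the download PromptPerf offers.
    with open("results.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["model", "run", "score", "output"])
        writer.writeheader()
        writer.writerows(results)
    for row in results:
        print(f'{row["model"]} run {row["run"]}: score={row["score"]}')
```

The point of the tool is that you don't have to maintain this boilerplate yourself, and the scoring, model list, and export are handled for you.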
About this launch
PromptPerf was launched by Harshil Siyani 3 days ago.
- 5 Upvotes
- 1,134 Impressions
- #3 Week rank