Defining the MVP
The first FlipDar prototype started as a notebook sketch after one too many nights scrolling Facebook Marketplace and eBay. The guiding question was simple: what is the smallest thing that makes a flipper’s decision faster today?
We forced the MVP into a single, honest flow: type an item, pull real sold data, and show a compact snapshot: average and median sold price, typical time to sell, and a short take on whether it’s worth chasing. No logins, no dashboards, no heatmaps; just a clean input and a straight answer.
That constraint cut scope to the bone. It also forced a data model that could be expanded later without rewriting everything.
- Backend: Supabase for auth + database
- Logic: Python scraping and analysis to clean marketplace data
- Frontend: Next.js on Vercel for fast iteration
Building the deal brain
Early on we realized that inconsistent marketplace data would drown any good UI. We standardized titles, prices, conditions, and timestamps, and then built a pricing engine to throw out lowball outliers and scam-looking spikes.
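The outlier filter can be sketched with a standard interquartile-range cut; the real engine's thresholds and scam heuristics aren't spelled out here, so `k=1.5` and the IQR approach are assumptions.

```python
def filter_outliers(prices, k=1.5):
    """Drop lowball outliers and spike prices outside k * IQR of the middle band.

    Illustrative sketch of the cleaning step described above; the actual
    FlipDar thresholds and scam checks are assumptions.
    """
    if len(prices) < 4:          # too few points to estimate quartiles
        return sorted(prices)
    s = sorted(prices)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]   # rough quartiles by index
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return [p for p in s if lo <= p <= hi]
```

A $5 scam listing and a $900 spike both fall outside the band and get dropped before any averaging happens.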
The engine computes average and median sold price, a “safe buy” number, and a typical time-to-sell window. With that, a flipper can decide in seconds instead of juggling 20 tabs and gut feelings.
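Those numbers reduce to a small summary function. "Safe buy" is sketched here as the 25th percentile of sold prices, and the sell window as the middle spread of days-to-sell; both formulas are assumptions, not the engine's actual math.

```python
from statistics import mean, median

def price_snapshot(sold_prices, days_to_sell):
    """Summarize cleaned sold data into the numbers a flipper needs.

    "safe_buy" (25th-percentile price) and the "sell_days" window are
    illustrative assumptions about how the engine might define them.
    """
    s = sorted(sold_prices)
    d = sorted(days_to_sell)
    return {
        "avg_price": round(mean(s), 2),
        "median_price": round(median(s), 2),
        "safe_buy": s[len(s) // 4],                     # buy at or below this
        "sell_days": (d[len(d) // 4], d[(3 * len(d)) // 4]),  # typical window
    }
```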
Tagging was kept light but useful: items with high competition and tight margins were called out, while fast movers were flagged so you could lean into quick-turn inventory.
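Light tagging like this is just a couple of rules over the computed stats. The thresholds below (50 active listings, 15% margin, a week to sell) are invented for illustration.

```python
def tag_item(active_listings, margin_pct, median_sell_days):
    """Rule-of-thumb tags over the engine's stats; thresholds are
    illustrative assumptions, not FlipDar's real cutoffs."""
    tags = []
    if active_listings > 50 and margin_pct < 15:
        tags.append("crowded, thin margin")   # high competition, tight margin
    if median_sell_days <= 7:
        tags.append("fast mover")             # quick-turn inventory
    return tags
```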
What shipped by end of August
By late August, FlipDar could take a keyword, fetch sold listings (starting with eBay), clean the results, and return price bands, a sell-time estimate, and a short FlipDar take.
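The whole flow fits in one function. `fetch_sold` below is a hypothetical stand-in for the eBay scraping step, returning (price, days-to-sell) pairs; the one-line "take" logic is likewise an assumption.

```python
from statistics import mean, median

def flipdar_report(keyword, fetch_sold):
    """End-to-end sketch: keyword -> sold data -> snapshot -> one-line take.

    `fetch_sold` is a hypothetical scraper stub; the "take" rule is an
    illustrative assumption, not the shipped heuristic.
    """
    rows = fetch_sold(keyword)
    prices = sorted(p for p, _ in rows)
    days = sorted(d for _, d in rows)
    take = "worth chasing" if median(days) <= 14 else "slow mover, pass"
    return {
        "keyword": keyword,
        "avg_price": round(mean(prices), 2),
        "median_price": median(prices),
        "sell_days": median(days),
        "take": take,
    }
```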
It wasn’t fancy, but it answered the one question that mattered: is this worth flipping? And as soon as that worked, another request was obvious: “Tell me when a good deal near me shows up.” That became the next focus.