Remove radius bloat from the scraper
Early versions of the scraper tried to handle radius logic, filtering, and decision-making all at once. That made each run slower and harder to tweak.
We pulled radius handling out entirely. The scraper now focuses on fetching and normalizing data fast, then hands it to the backend to decide what’s relevant.
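The split described above can be sketched roughly like this (names and fields are illustrative, not the actual Flipdar code): the scraper only fetches and normalizes listings, and leaves relevance decisions such as radius to the backend.

```python
# Hypothetical sketch of the fetch-and-normalize split: no radius math
# or filtering here -- the backend decides what's relevant.

def normalize(raw: dict) -> dict:
    """Flatten a raw marketplace listing into the fields the backend expects."""
    return {
        "id": raw.get("id"),
        "title": (raw.get("title") or "").strip(),
        "price": float(raw.get("price") or 0),
        "lat": raw.get("location", {}).get("lat"),
        "lon": raw.get("location", {}).get("lon"),
    }

def scrape(raw_listings: list[dict]) -> list[dict]:
    # Normalize everything and hand it off; no decision-making in this layer.
    return [normalize(item) for item in raw_listings]
```

Keeping this layer dumb is what makes each run fast and easy to tweak: changing the radius rules no longer touches the scraper at all.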
Make it compute-efficient
Profiling showed wasted time in repeated requests and extra parsing. We batched network calls where possible, added controlled concurrency to stay within rate limits, and cut duplicate hits to the same endpoints.
The payoff was clear: less CPU per run, faster alerts, and a lower bill when this runs around the clock.
- Batched + concurrent fetching (within rate limits)
- Caching repeated results where safe
- Less CPU per run = faster alerts + lower cost
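A minimal sketch of that batching pattern, assuming an async HTTP layer (`fetch_one` stands in for the real request call, which isn't shown in this post): a semaphore caps concurrency to stay inside rate limits, duplicate URLs are collapsed before any request goes out, and a cache dict lets repeated endpoints be skipped on later batches.

```python
import asyncio

async def fetch_all(urls, fetch_one, cache=None, max_concurrent=5):
    """Fetch a batch of URLs concurrently, skipping duplicates and cached hits."""
    cache = {} if cache is None else cache
    sem = asyncio.Semaphore(max_concurrent)  # rate-limit guard

    # Deduplicate while preserving order; only fetch what isn't cached.
    pending = [u for u in dict.fromkeys(urls) if u not in cache]

    async def worker(url):
        async with sem:  # at most max_concurrent requests in flight
            cache[url] = await fetch_one(url)

    await asyncio.gather(*(worker(u) for u in pending))
    return {u: cache[u] for u in dict.fromkeys(urls)}
```

Passing the same `cache` across runs is what turns repeated endpoint hits into free lookups.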
Prep for Azure, 24/7 monitoring
We containerized the scraper and moved config into environment variables so dev and prod stay cleanly separated. Scheduling now respects each watchlist's cadence instead of hammering marketplaces on every tick.
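In sketch form (the variable names and watchlist schema here are assumptions, not the real config): defaults come from environment variables so the same container image works everywhere, and each tick only runs the watchlists whose interval has actually elapsed.

```python
import os

# Hypothetical env-driven default; override per environment, never hard-code.
SCRAPE_INTERVAL_DEFAULT = int(os.environ.get("SCRAPE_INTERVAL_SECONDS", "300"))

def due_watchlists(watchlists, now):
    """Return only the watchlists whose per-list cadence has elapsed.

    Each watchlist is a dict with 'name', optional 'interval' (seconds),
    and 'last_run' (epoch seconds). Running just the due ones keeps the
    scheduler from hammering marketplaces on every tick.
    """
    return [
        w for w in watchlists
        if now - w["last_run"] >= w.get("interval", SCRAPE_INTERVAL_DEFAULT)
    ]
```

The same function works unchanged in dev and prod; only the environment variable differs.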
Better logging and error handling make it clear when something stalls. Combined, this sets Flipdar up for Azure-powered, always-on monitoring without wasting cycles.
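One way to get that stall visibility, sketched with the standard `logging` module (the step names and threshold are illustrative): wrap each pipeline step so failures are logged with context and slow steps emit a warning instead of failing silently.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("scraper")

def run_step(name, fn, *args, warn_after=30.0, **kwargs):
    """Run one pipeline step, logging failures and suspiciously slow runs."""
    start = time.monotonic()
    try:
        return fn(*args, **kwargs)
    except Exception:
        log.exception("step %s failed", name)  # full traceback, with step name
        raise
    finally:
        elapsed = time.monotonic() - start
        if elapsed > warn_after:
            log.warning("step %s took %.1fs (possible stall)", name, elapsed)
```

For always-on monitoring, structured logs like these are what an Azure alert rule can actually latch onto.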