Wildermine Phase 4: Frontend & Backend Performance Optimization
After Phase 3’s architectural cleanup (432× memory reduction, 97% code cleanup), we turned our attention to runtime performance. Phase 4 targets the remaining hotspots: unnecessary re-renders, inefficient database queries, sequential I/O, and idle CPU usage.
The approach: Measure first, optimize second, validate third. Each task gets baseline metrics before we touch code, then after-measurements to prove the improvement.
The Four Tasks
We identified four optimization opportunities, ordered from lowest to highest risk:
Task 1: Ghost Highlight JSON.stringify Trap ⚠️ Low Risk
The Problem: Hovering over the level editor with a building selected triggers ghost highlights. The rendering code uses `JSON.stringify()` in React dependency arrays, creating new strings on every mouse move and completely bypassing memoization.
Impact: 663 renders in 10 seconds (~66/sec), 20 MB of string allocations, 6-8 garbage collection cycles during a simple hover interaction.
Expected Fix: Replace `JSON.stringify()` with a `useStableArray` hook that provides stable array references (sketched below).
Target Improvement: 30-50% fewer renders, ~20 MB memory reduction, fewer GC pauses.
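For illustration, here is a minimal sketch of what such a hook could look like, assuming a shallow element-by-element comparison. The `useStableArray` name comes from the plan above, but the implementation details and the `computeGhostTiles` call in the usage comment are hypothetical.

```tsx
import { useRef } from "react";

// Returns a referentially stable array: the same instance is handed back as
// long as the contents are shallow-equal to the previous render's array.
// This lets the array sit directly in a dependency array without being
// serialized via JSON.stringify() on every mouse move.
export function useStableArray<T>(next: readonly T[]): readonly T[] {
  const ref = useRef<readonly T[]>(next);

  const prev = ref.current;
  const changed =
    prev.length !== next.length || next.some((value, i) => value !== prev[i]);

  if (changed) {
    ref.current = next;
  }

  return ref.current;
}

// Usage sketch (computeGhostTiles is a hypothetical memoized computation):
//
//   // Before: new string every render, memoization effectively disabled
//   useMemo(() => computeGhostTiles(tiles), [JSON.stringify(tiles)]);
//
//   // After: stable reference, recompute only when tile contents change
//   const stableTiles = useStableArray(tiles);
//   useMemo(() => computeGhostTiles(stableTiles), [stableTiles]);
```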
✅ Task 1 Results: 36% Memory Reduction →
Task 2: Scoped Completion Queries ⚠️⚠️ Medium Risk
The Problem: Each API endpoint (`/api/levels`, `/api/levels/user`, `/api/levels/community`) fetches all completed levels for a player, even when only displaying a single page of 10-50 levels. Players with large completion histories trigger full table scans on every request.
Impact: Unnecessary database load, slower API responses for players with 50+ completions, wasted bandwidth.
Expected Fix: Add targeted completion queries that accept the current page’s level IDs and only fetch what’s needed.
Target Improvement: 2-3× faster query execution for users with large histories, 90%+ reduction in rows scanned.
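As a rough sketch of the intended query shape (the table and column names and the `db.query` client below are assumptions, not Wildermine’s actual schema), the scoped query only touches rows for the level IDs on the current page:

```ts
interface CompletionRow {
  level_id: string;
}

interface Db {
  query(sql: string, params: unknown[]): Promise<{ rows: CompletionRow[] }>;
}

// Fetch completion status only for the level IDs visible on the current page,
// instead of loading the player's entire completion history.
async function getCompletionsForPage(
  db: Db,
  playerId: string,
  pageLevelIds: string[],
): Promise<Set<string>> {
  if (pageLevelIds.length === 0) return new Set();

  // Parameterized IN list: $2, $3, ... for each visible level ID.
  const placeholders = pageLevelIds.map((_, i) => `$${i + 2}`).join(", ");
  const { rows } = await db.query(
    `SELECT level_id
       FROM level_completions
      WHERE player_id = $1
        AND level_id IN (${placeholders})`,
    [playerId, ...pageLevelIds],
  );

  return new Set(rows.map((row) => row.level_id));
}
```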
📊 Task 2 Baseline Analysis (coming soon)
Task 3: Parallel Community Level Fetch ⚠️⚠️ Medium-High Risk
The Problem: Loading community levels (up to 50 per request) happens serially: each loop iteration awaits file I/O from Azure/disk, then awaits a separate database query for vote status. Sequential I/O multiplies latency.
Impact: Long spinner times on the community tab, unnecessary latency even though most operations are independent.
Expected Fix: Parallelize file existence checks with `Promise.all()`, and add a bulk vote query that fetches all votes in one database call.
Target Improvement: 50-70% faster community tab load times, especially on high-latency storage.
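A minimal sketch of the parallel shape, assuming hypothetical `loadLevelFile` and `getVotesForLevels` helpers (the real function names and data shapes will differ):

```ts
// Hypothetical helpers: file I/O against Azure/disk and a single bulk vote query.
declare function loadLevelFile(levelId: string): Promise<Uint8Array | null>;
declare function getVotesForLevels(
  playerId: string,
  levelIds: string[],
): Promise<Set<string>>;

async function loadCommunityPage(levelIds: string[], playerId: string) {
  // Start every file load at once instead of awaiting them one by one.
  const filesPromise = Promise.all(levelIds.map((id) => loadLevelFile(id)));

  // One database round trip for vote status across the whole page,
  // instead of one query per level inside the loop.
  const votesPromise = getVotesForLevels(playerId, levelIds);

  const [files, votes] = await Promise.all([filesPromise, votesPromise]);

  return levelIds.map((id, i) => ({
    id,
    data: files[i],
    hasVoted: votes.has(id),
  }));
}
```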
📊 Task 3 Baseline Analysis (coming soon)
Task 4: Map Animation Loop Throttling ⚠️⚠️⚠️ High Risk
The Problem: The idle menu scene’s map animation loop tears down and recreates at 60 FPS (whenever pan offsets change). The `MAP_DRAW_INTERVAL` throttle never actually skips work because the `else` branch still calls `drawDynamicLayer`, which walks the entire 3,600-tile grid every frame instead of at the intended 10 FPS.
Impact: Unnecessary CPU/GPU usage on the idle menu, causing fan spin, battery drain, and dropped frames elsewhere.
Expected Fix: Store offsets in refs instead of dependency arrays (stabilize the RAF loop), remove `drawDynamicLayer` from the `else` branch (respect the throttle), and optionally split static terrain into a background canvas.
Target Improvement: 70%+ reduction in idle menu CPU usage, actual 10 FPS cadence instead of 60 FPS.
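A sketch of the intended loop shape, assuming a React hook around `requestAnimationFrame`; the `drawDynamicLayer` signature and the 100 ms interval value are illustrative assumptions, not Wildermine’s actual code.

```tsx
import { useEffect, useRef } from "react";

const MAP_DRAW_INTERVAL = 100; // ms, i.e. the intended ~10 FPS cadence

// drawDynamicLayer's signature is assumed; in the game it walks the tile grid.
export function useMapAnimation(
  drawDynamicLayer: (offset: { x: number; y: number }) => void,
) {
  // Pan offsets live in a ref, so panning no longer tears down and
  // recreates the requestAnimationFrame loop.
  const offsetRef = useRef({ x: 0, y: 0 });
  const lastDrawRef = useRef(0);

  useEffect(() => {
    let rafId = 0;

    const tick = (now: number) => {
      // Respect the throttle: when the interval hasn't elapsed, do nothing.
      // There is no else branch calling drawDynamicLayer, so the
      // 3,600-tile walk runs at ~10 FPS instead of 60 FPS.
      if (now - lastDrawRef.current >= MAP_DRAW_INTERVAL) {
        lastDrawRef.current = now;
        drawDynamicLayer(offsetRef.current);
      }
      rafId = requestAnimationFrame(tick);
    };

    rafId = requestAnimationFrame(tick);
    return () => cancelAnimationFrame(rafId);
  }, [drawDynamicLayer]); // offsets are intentionally absent from this array

  return offsetRef; // pan handlers mutate offsetRef.current directly
}
```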
✅ Task 4 Results: 87% CPU Reduction →
The Measurement-First Methodology
Phase 3 taught us that comprehensive testing and measurement prevent regressions. Phase 4 doubles down on that approach.
For every task:
1. Baseline Measurements - Profile the problem before touching code
   - React DevTools Profiler (render counts, component timings)
   - Chrome Performance panel (CPU usage, function traces)
   - Chrome Memory Profiler (allocations, GC events)
   - Network tab (API response times)
   - Database query logging (execution time, rows scanned)
2. Implementation - Make the targeted change in isolation
   - One task at a time
   - TypeScript checks after each change
   - Functional testing before moving on
3. After Measurements - Repeat the exact same profiling process
   - Compare before/after metrics
   - Validate expected improvements
   - Check for regressions
4. Validation - Prove it works in real usage
   - Manual testing of affected workflows
   - Visual regression checks
   - Performance feels better, not just measures better
Why this works:
- Baseline data proves the problem exists
- Metrics guide implementation decisions
- After-measurements prove the fix works
- Rollback decisions become objective (if numbers don’t improve or regressions occur, we know immediately)
This approach delivered Phase 3’s 432× memory reduction with zero regressions. We’re applying the same discipline to Phase 4.
Risk Management
Tasks are ordered by risk level for a reason:
- Task 1 (Low Risk): Isolated to UI layer, easy to test, straightforward rollback
- Task 2 (Medium Risk): Requires database knowledge, but queries are well-scoped
- Task 3 (Medium-High Risk): Concurrency can introduce bugs, needs thorough testing
- Task 4 (High Risk): Affects core rendering, most visible to users, most complex
We’ll tackle them one at a time, capturing full baseline and after measurements for each. If any task introduces regressions or doesn’t meet its improvement targets, we’ll roll back and reassess before continuing.
Expected Overall Impact
By the end of Phase 4:
Frontend:
- 30-50% fewer renders during building placement interactions
- Smoother hover experience (no GC pauses)
- 70%+ reduction in idle menu CPU usage
Backend:
- 2-3× faster completion queries for players with large histories
- 50-70% faster community tab load times
- Reduced database load and bandwidth usage
Developer Experience:
- Cleaner performance baselines for future features
- Documented optimization patterns (like `useStableArray`)
- Confidence that optimizations don’t break functionality
Progress Tracking
Each task will get its own detailed article series:
- Task 1: Baseline analysis → Implementation & results
- Task 2: Baseline analysis → Implementation & results
- Task 3: Baseline analysis → Implementation & results
- Task 4: Baseline analysis → Implementation & results
All performance evidence (screenshots, profiler traces, database logs) lives in `/docs/perf/baselines/04-taskN-before/` and `/docs/perf/baselines/04-taskN-after/` for reproducibility.
What This Enables
Phase 4 isn’t just about making existing features faster—it’s about removing performance barriers to future development:
With faster rendering:
- More complex building placement previews
- Real-time collaborative editing becomes feasible
- Advanced selection tools and multi-tile operations
With optimized queries:
- Live leaderboards and activity feeds
- Richer player progression tracking
- Faster level discovery and search
With parallelized I/O:
- Support for larger community level catalogs
- Faster initial page loads
- Better scalability as player base grows
With efficient animation:
- More elaborate menu scenes
- Animated UI elements without performance penalty
- Better battery life for laptop players
The Journey Continues
Phase 1: Foundation cleanup ✅
Phase 2: Copy-on-Write Revolution (248× faster) ✅
Phase 3: Architectural Transformation (432× memory, 97% code cleanup) ✅
Phase 4: Runtime Performance Optimization (in progress)
After three successful phases, Wildermine’s level editor is faster, leaner, and built on solid foundations. Phase 4 extends those wins to runtime performance—making every interaction smoother, every API call faster, and every idle moment more efficient.
📊 Start with Task 1: Measuring the JSON.stringify Performance Trap →
Phase 4 optimization work powered by document-driven development and AI collaboration (Claude Code, Codex CLI).