Deployment Hell
February 22–24, 2026 — When building is easy but shipping is hard
Building the Secure Sleuths bug bounty platform with Jarvis took one day. Getting it deployed took the same day. Getting it functional — that’s still in progress.
This wasn’t a failure. It was expensive research.
February 24, 2026: Build and Deploy
I gave Jarvis a product specification document and asked it to build a complete bug bounty platform. In 6 hours, it generated:
- 39 files: Complete Next.js application with TypeScript
- 4 user dashboards: Hunter, Client, Triage, Developer roles
- 2 database migrations: Full PostgreSQL schema with Row Level Security
- Advanced features: CVSS scoring, duplicate detection, real-time messaging
The code compiled locally. All tests passed. It looked production-ready.
I was wrong.
The Same Day: Deploy vs. Functional
The platform deployed to Netlify successfully the same day. Live URL: https://secure-sleuths-platform.netlify.app
But deployed doesn’t mean functional. There’s a gap between “it deploys” and “it works for real users.”
The TypeScript Error Cascade
The first Netlify build failed with a TypeScript error:

```
Property 'program' does not exist on type 'Report'.
Did you mean 'programs'?
```
I fixed it manually. The next build failed with a different error:

```
Property 'sender_id' does not exist on type 'Message'.
Did you mean 'sender'?
```
This pattern repeated 30+ times across 8 different files. Each fix revealed another type mismatch. The AI had generated code that looked internally consistent, but the TypeScript compiler kept surfacing mismatches between the interfaces it had defined and the data shapes the code actually used.
When AI Meets Reality
The problems fell into categories:
Database relationship mismatches: The AI generated interfaces assuming single relationships, but the database returned arrays. The code accessed report.programs.name when the data actually required report.programs[0].name.
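A minimal sketch of that mismatch, with hypothetical Report and Program shapes (the real generated interfaces were more elaborate):

```typescript
// Hypothetical shapes illustrating the relationship mismatch.
interface Program {
  name: string;
}

// What the generated interface assumed: a single related row.
// interface Report { programs: Program }

// What the database query actually returned: an array of related rows.
interface Report {
  programs: Program[];
}

const report: Report = { programs: [{ name: "Acme Web" }] };

// report.programs.name fails to compile against the array type;
// the access has to index into the relationship instead:
const programName = report.programs[0]?.name ?? "unknown";
console.log(programName);
```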
Interface duplication: Two files defined different Message interfaces with different properties, causing type conflicts when components interacted.
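One way out of the duplication, sketched with invented fields (only sender_id comes from the build errors quoted earlier): a single canonical Message type, with per-component views derived from it rather than redeclared:

```typescript
// A single source of truth for the Message shape.
interface Message {
  id: string;
  sender_id: string;
  body: string;
  created_at: string;
}

// Components that only need part of the shape derive a view from the
// canonical type instead of declaring a second, divergent interface:
type MessagePreview = Pick<Message, "id" | "sender_id" | "body">;

const preview: MessagePreview = { id: "1", sender_id: "u42", body: "hi" };
console.log(preview.sender_id);
```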
Next.js 14 compatibility: The useSearchParams() hook required Suspense boundaries for server-side rendering, which the AI hadn’t included.
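The fix, sketched for a hypothetical search page (the component names are mine, not from the generated code): wrap the component that calls useSearchParams() in a Suspense boundary so the production build can prerender the page:

```tsx
'use client';
import { Suspense } from 'react';
import { useSearchParams } from 'next/navigation';

// The hook must live in a child component below the boundary.
function SearchResults() {
  const params = useSearchParams();
  return <p>Query: {params.get('q')}</p>;
}

export default function SearchPage() {
  // Without this boundary, next build fails while statically
  // prerendering the page.
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <SearchResults />
    </Suspense>
  );
}
```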
The Rate Limit Reality
Here’s what nobody talks about: AI-native development burns through rate limits fast.
I’m paying $200/month for Claude Code Max. That’s supposed to be the “unlimited” tier. But after 4-5 consecutive build-fix-deploy cycles, I hit rate limits. The agent would pause for 10-15 minutes between fixes.
When you’re debugging 30+ TypeScript errors iteratively, this kills flow state. The economics don’t work yet.
Building an Auto-Fixer
After manually fixing 15+ similar errors, I had Jarvis write a TypeScript error auto-fixer. It would:
- Parse build output to extract error messages and file locations
- Read the problematic files
- Send the code and errors to GPT-4 with fix instructions
- Apply the corrected code automatically
- Rebuild and repeat until no errors remained
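Step one of those five is the most mechanical part. A sketch, assuming the standard `file(line,col): error TSxxxx: message` diagnostic format that tsc emits (the LLM call and file-rewrite loop are omitted):

```typescript
// Parse tsc build output into structured errors.
interface TsError {
  file: string;
  line: number;
  column: number;
  code: string;
  message: string;
}

function parseTscOutput(output: string): TsError[] {
  // Matches lines like: src/foo.tsx(42,18): error TS2339: ...
  const pattern = /^(.+?)\((\d+),(\d+)\): error (TS\d+): (.+)$/gm;
  const errors: TsError[] = [];
  let m: RegExpExecArray | null;
  while ((m = pattern.exec(output)) !== null) {
    errors.push({
      file: m[1],
      line: Number(m[2]),
      column: Number(m[3]),
      code: m[4],
      message: m[5],
    });
  }
  return errors;
}

const sample =
  "src/components/ReportCard.tsx(42,18): error TS2339: " +
  "Property 'program' does not exist on type 'Report'.";
console.log(parseTscOutput(sample));
```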
This worked for about 60% of the errors. The remaining 40% required manual intervention — usually interface design decisions that needed human judgment.
Cross-Service Configuration Hell
Even after deployment succeeded, the platform wasn’t functional. Users who signed up received email verification links that pointed to localhost:3000 instead of the production URL.
The problem: 7 different configuration locations across services:
- Supabase Dashboard: Auth URL configuration
- Netlify: Environment variables
- GitHub: Repository secrets
- Local config: supabase/config.toml
- Application config: Next.js environment detection
- DNS: Domain pointing
- SSL: Certificate configuration
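The localhost bug above could have been caught by a trivial pre-deploy check. A sketch, with made-up keys standing in for the seven locations (a real version would pull each value from its service’s API or CLI):

```typescript
// Return the names of services whose URL disagrees with the first one.
function findUrlMismatches(configs: Record<string, string>): string[] {
  const expected = Object.values(configs)[0];
  return Object.entries(configs)
    .filter(([, url]) => url !== expected)
    .map(([service]) => service);
}

// Illustrative values; the stale local entry is the one that broke
// the email verification links.
const configs = {
  supabaseAuthUrl: "https://secure-sleuths-platform.netlify.app",
  netlifySiteUrl: "https://secure-sleuths-platform.netlify.app",
  localConfigUrl: "http://localhost:3000",
};

console.log(findUrlMismatches(configs)); // flags the stale entry
```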
Each service has its own interface, its own way of handling secrets, its own validation rules. There’s no unified configuration management. When something breaks, you debug across multiple dashboards.
This is what I call “the plumbing problem.”
What We Learned
Building vs Shipping vs Functional
The AI can generate a complete, working application in hours. Deployment pipelines can get it live in hours too. But making it functional for real users requires navigating a maze of service configurations that weren’t designed to work together.
The bottleneck isn’t code generation or deployment. It’s cross-service coordination.
As of this writing, Secure Sleuths is deployed and accessible, but the login flow is broken due to authentication URL mismatches across services. The platform exists, but users can’t actually use it.
The 80/20 Rule
AI automation works for about 80% of TypeScript errors. The remaining 20% require human judgment about interface design, database relationships, and architectural decisions.
This isn’t a failure of AI. It’s the nature of the problem. Some decisions require context that the AI doesn’t have.
Rate Limits Are Real
Even premium AI subscriptions have usage limits. When you’re doing intensive debugging sessions, you hit them. The tools aren’t designed for the sustained, iterative work that AI-native development requires.
Ideas for March
Every frustration became a product idea for the 31-day sprint:
TypeScript Error Auto-Fixer (prototype built)
Automatically detect and fix common TypeScript compilation errors. Integration with CI/CD pipelines. Pattern learning from successful fixes.
Cross-Service Configuration Orchestrator
Single interface to manage configuration across Netlify, Supabase, Railway, etc. Automatic environment variable synchronization. Production URL propagation.
Rate Limit Orchestration System
Automatic failover across multiple AI providers. Cost-aware task routing (use cheaper models for repetitive tasks). Queue management for sustained workloads.
Database Schema Consistency Checker
Validate TypeScript interfaces against database schema. Catch relationship mismatches before deployment. Auto-generate types from database migrations.
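The core of that checker is a small comparison. A toy sketch (the field lists are written by hand here; the idea is to derive them from the TypeScript AST and the migration files):

```typescript
// Does every field the interface expects actually exist as a column?
function missingColumns(interfaceKeys: string[], dbColumns: string[]): string[] {
  const have = new Set(dbColumns);
  return interfaceKeys.filter((key) => !have.has(key));
}

// Invented example mirroring the sender_id/sender error from the build:
const messageInterfaceKeys = ["id", "sender_id", "body", "created_at"];
const messagesTableColumns = ["id", "sender", "body", "created_at"];

console.log(missingColumns(messageInterfaceKeys, messagesTableColumns));
```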
The Deployment Abstraction Layer
One-command deployment that handles all service configuration. Rollback capability when deployments partially fail. Unified secrets management.
The Pattern
This is the pattern that will define the March sprint: Every challenge in February becomes a product in March.
The goal isn’t to avoid deployment hell. It’s to build tools that make it systematic and repeatable.
Shipping is hard. But shipping tools that make shipping easier — that’s a software factory.