Python, Go, Rust, TypeScript and AI — Armin Ronacher
Original source: Python, Go, Rust, TypeScript and AI with Armin Ronacher - YouTube
TL;DR
- Tools and languages shape how we build software; choose by domain, team strengths, and feedback-loop speed — not hype.
- Rust excels for performance and safety, but its learning curve and iteration cost make it a poor default for early-stage startups.
- Python, Go, and TypeScript each offer practical trade-offs that map to different kinds of work.
- AI tools and agentic coding are changing workflows: more orchestration/review, less typing — but they require strong tests and guardrails.
Practical cheat sheet
- Early startup default: Go for services + Python for ML/data + TypeScript in the browser; add Rust only for clear perf/safety/WebAssembly or Python-extension hotspots.
- Lean into AI: use agents for scaffolding, repros, dashboards, and infra (e.g., Pulumi), while humans own architecture and tests.
- Design for observability: standardize correlation IDs, adopt context-locals/async-local-storage, and treat error payloads as product features.
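A minimal sketch of the correlation-ID advice, assuming a Node/Express-style service; the header name, middleware shape, and helper names are illustrative, not from the talk:

```ts
// Correlation-ID propagation via Node's built-in AsyncLocalStorage.
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

const requestContext = new AsyncLocalStorage<{ correlationId: string }>();

// Express-style middleware: bind one ID to the whole async call chain.
function withCorrelation(req: any, _res: any, next: () => void) {
  const correlationId =
    (req.headers["x-correlation-id"] as string | undefined) ?? randomUUID();
  requestContext.run({ correlationId }, next);
}

// Downstream code (including inside promises) recovers the ID without
// threading it through every function signature.
function log(message: string) {
  const ctx = requestContext.getStore();
  console.log(JSON.stringify({ correlationId: ctx?.correlationId, message }));
}
```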
Language trade-offs
- Python: fastest iteration and a rich ecosystem (data/ML/dev tooling). Great for prototyping and research; slower runtime, but often “fast enough.”
- Go: simple concurrency model, fast builds, easy deployment; excellent for services and infra. Some expressiveness trade-offs; explicit error handling.
- Rust: performance + memory safety + correctness with zero-cost abstractions. Steep learning curve; slower iteration — use when correctness/perf matter most.
- TypeScript: brings type safety to JS across frontends/backends; improves maintainability at scale. Tooling/build complexity to manage.
AI impact on workflows
- Less pressure to keep a single unified stack — tools can navigate heterogeneous codebases.
- Agentic coding increases throughput but can drive overwork and quality drift without tests/specs.
- Developer work shifts toward architecture, prompts/specs, and code review; tests become executable contracts.
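One way to make “tests as executable contracts” concrete: the human writes the spec as tests, and the agent fills in the implementation. A minimal sketch using Node's built-in test runner; parsePrice is a hypothetical unit under test, not from the talk:

```ts
// Tests as executable contracts: the assertions pin the behavior the
// human owns, regardless of who (or what) wrote the implementation.
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical agent-generated unit under test.
function parsePrice(input: string): number {
  const value = Number(input.replace(/[$,]/g, ""));
  if (!Number.isFinite(value) || value < 0) {
    throw new RangeError(`invalid price: ${input}`);
  }
  return value;
}

test("accepts formatted currency", () => {
  assert.equal(parsePrice("$1,234.50"), 1234.5);
});

test("rejects garbage instead of returning NaN", () => {
  assert.throws(() => parsePrice("free"), RangeError);
});
```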
AI tooling (Claude/Cursor/Codex)
- Ronacher flipped from skeptic to heavy user; he treats models as “AI interns” for scaffolding, bespoke tools (e.g., log visualizers), and reliable repros.
- A non-technical cofounder can ship idea-validating prototypes using agents.
- With strong tests/architecture, >80% of non-critical code can be agent-generated.
Unified codebases matter less
- Strong codegen plus explicit API boundaries often beats forcing server and client into a single TS monorepo.
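A minimal sketch of one such boundary, using zod as an example of a schema both sides can derive types from; the User shape and endpoint are illustrative:

```ts
// An explicit API boundary: one schema defines the wire contract, and
// client and server derive their types from it instead of sharing a
// monorepo. zod is one option among several.
import { z } from "zod";

export const UserSchema = z.object({
  id: z.string(),
  email: z.string().email(),
  createdAt: z.string().datetime(),
});
export type User = z.infer<typeof UserSchema>;

// Client side: validate at the boundary, then trust the types inside.
export async function fetchUser(id: string): Promise<User> {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`fetchUser failed: ${res.status}`);
  return UserSchema.parse(await res.json()); // runtime + compile-time safety
}
```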
Languages will still matter (for agents)
- Some languages are easier for agents to reason about (thinner abstractions, clearer structure). Expect new languages tuned for human+agent co-dev.
Error handling & observability lessons (Sentry)
- Production errors need rich, cheap metadata by default; most ecosystems underinvest here (see the sketch after this list).
- Context propagation is crucial: use context-locals/async-local-storage to keep correlation across async/promises.
- Type safety (e.g., TypeScript) reduced certain bugs but didn’t measurably cut JS error rates overall; rising app complexity introduces new error classes (e.g., hydration issues).
- Different stacks “fail” differently: JS surfaces many low-signal errors; C++ games crash less often, but their reports are high-signal.
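A minimal sketch of the “rich, cheap metadata” idea referenced above; the AppError class and field names are assumptions, not Sentry's actual payload format:

```ts
// Error payloads as product features: errors carry structured,
// queryable metadata by default instead of bare messages.
class AppError extends Error {
  constructor(
    message: string,
    public readonly context: Record<string, unknown>,
  ) {
    super(message);
    this.name = "AppError";
  }
}

// Throw sites attach the metadata that makes a report actionable...
function chargeCard(userId: string, amountCents: number) {
  throw new AppError("card declined", {
    userId,
    amountCents,
    gateway: "stripe",
    retryable: false,
  });
}

// ...and the reporter serializes it alongside the stack trace.
function report(err: unknown) {
  const payload =
    err instanceof AppError
      ? { message: err.message, stack: err.stack, ...err.context }
      : { message: String(err) };
  console.error(JSON.stringify(payload));
}
```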
Practical takeaways
- Optimize for iteration speed early; pick the simplest stack that delivers value quickly.
- Reach for Rust when you need performance/safety guarantees; otherwise prefer higher-level stacks.
- Add guardrails for AI-assisted coding: tests, linters, CI, and thoughtful reviews.
- Watch for workload creep from agentic tools; measure outcomes beyond LOC and set boundaries.