Ask HN: Is AI code assistance fundamentally unenforceable without hooks?
2 points by meloncafe | 1 comment on Hacker News.
I spent ~$2k on Claude Code this year (10+ years as a dev, non-dev job now, side projects only). The hard lesson: markdown instructions don't work. AI needs enforcement.

The breaking point was Auto Compact. After context compression, Claude consistently ignores CLAUDE.md, the very file Anthropic tells you to create. It's like hiring someone who forgets their job description every two hours.

Core issues I couldn't solve with instructions alone:

- Post-compact amnesia: "interprets" the previous session, often destructively
- Session memory loss: asks the same questions daily, like a new intern
- TODO epidemic: "I implemented it!" (narrator: it was just a TODO)
- Command chaos: rm -rf, repetitive curl prompts, git commits signed "by Claude"
- Guidelines = suggestions: follows them... when it feels like it

After six months of struggling with this, I built enforcement hooks (command restrictor, auto-summarizer before compact, TODO detector, commit validator). They work, but I feel like I'm working against the tool rather than with it.

Questions for the community:

1. Is this everyone's experience, or am I using Claude Code fundamentally wrong?
2. For those on Cursor/Copilot/etc.: same enforcement issues?
3. Is "markdown guidelines → AI follows them" just... not viable at scale?

The hooks are on GitHub (mostly in Korean but universally functional): https://ift.tt/2th5xZq

Really curious whether this is a universal AI coding problem or a skill issue on my end.
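For readers who haven't used them: Claude Code supports PreToolUse hooks, which receive the pending tool call as JSON on stdin and can veto it by exiting with code 2 (stderr is fed back to the model as the reason). A minimal sketch of the command-restrictor idea, not the poster's actual implementation; the blocked patterns here are illustrative examples you would tune for your own project:

```python
import json
import re
import sys

# Commands to refuse outright -- hypothetical examples, adjust to taste.
BLOCKED_PATTERNS = [
    r"\brm\s+-rf\b",
    r"\bgit\s+push\s+--force\b",
]

def is_blocked(command: str) -> bool:
    """Return True if the shell command matches any blocked pattern."""
    return any(re.search(p, command) for p in BLOCKED_PATTERNS)

if __name__ == "__main__":
    # Claude Code passes the pending tool call as JSON on stdin;
    # for the Bash tool, the command string lives in tool_input.command.
    payload = json.load(sys.stdin)
    command = payload.get("tool_input", {}).get("command", "")
    if is_blocked(command):
        # Exit code 2 blocks the tool call; stderr is shown to the model.
        print(f"Blocked by policy: {command}", file=sys.stderr)
        sys.exit(2)
    sys.exit(0)
```

You would register a script like this under the `PreToolUse` event (matched to the `Bash` tool) in your Claude Code settings, which is what makes it enforcement rather than a suggestion the model can skip.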
