New ask Hacker News story: LLMs learn what programmers create, not how programmers work

3 by noemit | 0 comments on Hacker News.
I ran an experiment to see whether a CLI really is the most intuitive format for tool calling, as claimed by an ex-Manus AI backend engineer. I gave my model random scenarios and a single tool, "run", told it that the tool worked like a CLI, and asked it to guess commands. It guessed great commands, but it always formatted them with a colon up front, like :help :browser :search :curl. It was trained on how terminals look, not on what you actually type (you don't type the ":").

I have since updated my agent tool to stop fighting this intuition. LLMs learn what commands look like in documentation and other artifacts, not what the human actually typed on the keyboard. Seems so obvious. This is why you have to test your LLM and see how it naturally behaves, so you don't have to fight it with your system prompt. This is Kimi K2.5, btw.
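One way to "stop fighting the intuition" is to normalize the model's output instead of prompting the colon away. A minimal sketch (the function name and tool wiring are hypothetical, not from the post): strip a leading ":" before dispatching the command to the "run" tool.

```python
def normalize_command(raw: str) -> str:
    """Accept the model's ':command'-style output and return the command
    a terminal would actually receive. Hypothetical helper: strips the
    leading ':' LLMs tend to emit (mimicking docs like ':help'), then
    trims surrounding whitespace."""
    cmd = raw.strip()
    if cmd.startswith(":"):
        cmd = cmd[1:]
    return cmd

# The agent's dispatcher would call this before executing:
#   run(normalize_command(model_output))
print(normalize_command(":curl https://example.com"))  # curl https://example.com
print(normalize_command("help"))                        # help (unchanged)
```

This keeps the system prompt short: the model writes commands in the format it finds natural, and a one-line adapter bridges the gap.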
