
"Right now we have the process of writing code, reviewing it, compiling it, and running it. We've added an extra layer: explaining our intentions to an agent that translates them into code. If that is a cow path (and the more I think about it, the more it does seem a rather indirect way to get from point A to point B), then what will be the most direct way to get from here to there?"
"Every day, our coding agents get better. The better they get, the more we'll trust them, and the less we'll need to review their code before committing it. Someday, we might expect, agents will review the code that agents write. What happens to code when humans eventually don't even read what the agents write anymore? Will code even matter at that point?"
"Will we write unit tests, or have our agents write unit tests, only for our benefit? Will coding agents even need tests? It's not hard to imagine a future where agents just test their output automatically, or build things that just work without testing because they can 'see' what the outcome of the tests would be."
Software development currently follows a cycle of writing, reviewing, compiling, and running code, with an added layer in which humans explain their intentions to agents that translate them into code. That extra layer can make the path between intent and outcome indirect. As coding agents improve, trust in their output will grow and the demand for human review will decline, eventually allowing agents to review other agents' work. Humans may stop reading agent-produced code altogether, raising the question of whether code itself will still matter. Unit tests may likewise be automated by agents, or become unnecessary if agents can predict and validate outcomes directly.
Read at InfoWorld