> The hackathon winners understood something that most developers do not: the hard part of building useful AI is not the code, it is knowing what the system should do in the first place.
This has always been true of all systems. That doesn't make it less of an insight, though, since not enough people seem to get it. To build a system, with an LLM or without, you must know what the system needs to do. Whether you define it in C or in a markdown file, it must still be defined. The advantage of LLMs is that they bridge the gap between defining a system and simulating that system on a processor. The definition of the system is still required, and it still must be precise. Even with “AGI” that will remain true, just as it is true today of the humans who translate between those who deeply understand a system and the software that implements it.
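To make that concrete: the rule has to be pinned down either way; only the medium changes. A toy sketch (all names and thresholds here are invented for illustration, not taken from the article):

```typescript
// The system definition is the same either way; only the medium differs.

// Defined in code: precise by construction.
function isPermitRequired(squareFeet: number, isStructural: boolean): boolean {
  // Hypothetical rule: structural work, or anything over 120 sq ft, needs a permit.
  return isStructural || squareFeet > 120;
}

// Defined in a markdown/prompt file handed to an LLM: the same precision
// still has to be written down, or the system is undefined.
const spec = `
A permit is required if the work is structural,
or if the structure exceeds 120 square feet.
Never answer "maybe"; return exactly "required" or "not required".
`;
```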
This appears to be an advertisement for a (somewhat inscrutable) AI product they're selling called CANONIC, that also has a cryptocoin bolted on to it somehow.
Author here. The blog argues that the real story from Anthropic's hackathon isn't that domain experts can build AI (they can) but that hackathon demos and production systems require fundamentally different things. A permit app that works on demo day and a permit system that survives when California revises the code, when the builder leaves, when a municipality asks for an audit trail — those are different problems. We're building a governance framework (CANONIC — CANONIC.org) where every AI capability is declared in a versioned contract. Curious what HN thinks about the gap between "domain expert can build" and "institution can trust what they built."
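To make the "versioned contract" idea concrete, here is a minimal sketch of what declaring a capability might look like. Every name and field below is hypothetical; this is not CANONIC's actual schema, just the general shape of a declared, versioned, auditable capability:

```typescript
// Hypothetical sketch of a versioned AI-capability contract.
// None of these names come from CANONIC; they only illustrate the idea.

interface CapabilityContract {
  name: string;                     // stable identifier for the capability
  version: string;                  // semver; bumped when behavior changes
  inputs: Record<string, string>;   // field name -> expected type/format
  outputs: Record<string, string>;
  invariants: string[];             // human-readable rules the output must satisfy
  auditLog: boolean;                // whether every invocation is recorded
}

const permitChecker: CapabilityContract = {
  name: "permit-checker",
  version: "2.1.0", // revised when, say, California updates the building code
  inputs: { parcelId: "string", projectDescription: "string" },
  outputs: { requiredPermits: "string[]", citations: "string[]" },
  invariants: [
    "every required permit must cite a current code section",
    "unknown project types must be flagged, never guessed",
  ],
  auditLog: true,
};
```

The point of pinning this down in a versioned artifact is that it answers the "institution can trust what they built" question: when the builder leaves or the code is revised, the contract, not the demo, is what gets audited.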
Simplifying here… sounds like essentially the split between (great) product managers and engineers
We need both! Hurrah
The AI editing of the article makes it a painful read. A shame, because the point they were making, about AI coding tools empowering domain experts, is a good one.