> The structural correspondence is the point.
> The choice of @form(vec) here is itself a real design decision, not an arbitrary one.
> The point of the surface isn’t completeness — it’s that every distinct kind of structural commitment a unit can make has a syntactic home. ... Each commitment is declared, not inferred from code.
> type is pure shape. A record. No lifecycle, no flow, no state machine, no bus participation.
And so on and so forth. Every paragraph, every sentence was transparently written by an LLM (sounds like Claude to me). It's difficult to get interested when the humans involved couldn't even be bothered to write down their own thoughts and make them coherent (and much of this text isn't, though it appears so at a glance).
As for the locus concept (https://aperio-lang.github.io/aperio/concepts/the-locus.html), the entire page reads like one of those LLM fever dreams in which it can't stop praising an idea you've pasted into the chat window. It's a kitchen sink primitive that codifies a specific architectural pattern. It's a program structure that probably fits the kind of problem the author has been seeing a lot lately.
Just look at the PR: it literally says Claude wrote it.
Good catch! You want me to rewrite the paragraph to sound less like an LLM? (sigh)
Claude, make me a language that's optimized for the LLM era and matches both a human and Agent's mental model of the system the code is specifying. Like, it encodes systems with data that communicate with each other, no need to wire up implementation details like Rust's channels or specify wire formats and stuff. Give it a cool Latin name.
> The user wants me to re-invent Smalltalk—but with Latin names—for the "LLM era." Plan: I'll use the word "loci" in place of "actor" or "object" and call the language "Aperio" (Latin for "to explain something unknown," in this case referring to explaining Smalltalk and Actor systems to the user who has apparently never heard of them).
Can we not post LLM generated prose on topics as subtle as programming language design? Am I alone when I see this type of stuff and immediately react in anger?
These things are not good technical writers -- so why do people keep doing this? It is not possible to take a proposal seriously from a scientific perspective if the arguments are written by LLMs. I'm sorry, but it's just terrible writing and terrible argumentation.
> Every language designed before 2023 was optimized for a single tradeoff: minimize friction between human cognitive capacity and machine execution. Assembly to C to managed runtimes to DSLs were different points on the same line. In an LLM-driven workflow, those languages don’t get cheaper to use — they get more expensive.
What does this mean? Why do they get more expensive? The claim is "the cost just hides in the LLM’s token count, its retry rate, and the latency it eats per turn" -- what is the cost? Am I supposed to infer what the fuck you are talking about?
Why don't you send the prompt for your programming language instead?
Also, the concept of "locus" has already been invented, it goes by the name of "entity" in the syndicated actor model: https://syndicate-lang.org/
I don't want to be seen as a hater of LLM-driven language design -- totally go for it! I'm not sure whether this language is by OP (if not, ignore this), but my advice is to take some time to sharpen up the writing and argumentation, or else you risk not being taken seriously.
I feel dirty using the em-dash as the discriminator between human effort and non-effort. But this sure has quite a few.
I've been using the em-dash for years, having started well before the dawn of LLMs -- needless to say, the fact that it's now treated as a telltale sign of LLM writing doesn't gladden me one bit. All the more so because I value concise writing through picking the correct grammatical elements -- like the em-dash.
Yep, me too. I’m annoyed that proper punctuation and even grammar have been co-opted by LLMs and now serve as a “tell.”
This feels a lot like P / Pony / rospy. Beyond that, the "loci as hypergraph" framing also has a faint flavor of Milner's bigraphs.
Seems interesting, but the GitHub repos appear to be private, and the GitHub links from the docs don't work.
Read through the intro and didn't understand a thing. Either I am dumb or this is dumb or both.
You’re not dumb.
I personally wish we could make a language LLMs would stay away from, rather than make it easier...
That’s easy – just use a language that is not popular enough to have a lot of training data.
Clojure, baby!
This seems quite novel? Not encountered the concept of a “locus” like this in code before.