> Consider what happens when you build software professionally. You talk to stakeholders who do not know what they want and cannot articulate their requirements precisely. You decompose vague problem statements into testable specifications. You make tradeoffs between latency and consistency, between flexibility and simplicity, between building and buying. You model domains deeply enough to know which edge cases will actually occur and which are theoretical. You design verification strategies that cover the behaviour space. You maintain systems over years as requirements shift.
I'm not sure why he thinks current LLM technologies (with better training) won't be able to do more and more of this as time passes.
Meaning and thought are social all the way down.
To genuinely "talk to stakeholders" requires being part of their social world. To be part of their social world you have to have had a social past - to have been a vulnerable child, to have experienced frustration and joy. Efforts to decouple human development from human cognition betray a fundamental misunderstanding.
But surely you see the core LLM innovation is that computers can now TALK to you.
Well, people can talk, yet stakeholders most of the time still cannot explain what they want.
> The value of knowing syntax, APIs, and framework conventions approaches zero. An LLM can look these up faster than you can remember them.
The irony here is that while the article does a good job of pointing out how people have made incorrect judgment calls based on what comes down to personal experience, this claim itself also comes down to personal experience.
An LLM can look these up and still get them wrong, or it can get them right but still pick the wrong conventions to use. More importantly, LLM code assistants will not always be performing lookups: you cannot assume the same IDE and tool configuration for everyone, and you cannot even assume that everyone is using an IDE with an embedded chatbot.
Using an LLM to look up syntax, common APIs, and conventions seems to me like using a calculator to do basic arithmetic. It’s useful to memorize these things because it’s faster.
Moreover, if I know a key term or phrase (which covers most cases) I can look those things up with Google or IDE search, which is also faster than an LLM.
EDIT: to be clear, I’m still writing code. I can do many small tasks and fixes by hand faster than I can describe them to an LLM and check or fix its output. I also figure out how to structure a project partly by writing code. Many small fixes and structuring by experimentation probably aren’t ideal software development, and maybe soon I’ll figure out LLMs (or they’ll improve) such that I end up writing better code faster with them. But right now I believe LLMs struggle with good APIs and especially with modularity, because the only largely-LLM projects I’ve seen are small, and they get abandoned and/or fall apart when the developer tries to extend them.
Computer Science education has always seemed like a luxury to me. You go to college and get a very high-level view of the computer field, but never enough to be able to get a job without additional training. That has changed: CS graduates will now have enough know-how to be useful straight out of college. Their role now is to figure out how to turn specs into a usable system using AI.
My question now is: given that there are only a limited number of types of system, why not have templates capturing the know-how for most of these systems? An LLM can just fill in the blanks and produce a working system in no time for most use cases.
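To make the template idea concrete, here is a rough sketch of what I have in mind; everything in it (CRUD_TEMPLATE, the ask_llm stub, generate_service) is made up for illustration and not any existing tool:

    # Hypothetical sketch: the fixed know-how for one "type of system" lives in a
    # template, and an LLM is only asked to fill in the blanks.
    from string import Template

    CRUD_TEMPLATE = Template("""\
    from fastapi import FastAPI

    app = FastAPI()

    $models

    @app.get("/$resource")
    def list_items():
    $list_body
    """)

    def ask_llm(prompt: str) -> str:
        # Placeholder for whichever model or API you actually use.
        raise NotImplementedError("plug in your own LLM call here")

    def generate_service(spec: str) -> str:
        # The skeleton stays fixed; only the domain-specific blanks come from the LLM.
        return CRUD_TEMPLATE.substitute(
            models=ask_llm(f"Pydantic models for: {spec}"),
            resource=ask_llm(f"A one-word resource name for: {spec}"),
            list_body=ask_llm(f"Body of a list endpoint (indented 4 spaces) for: {spec}"),
        )

Whether the blanks stay small enough for this to beat just generating the whole system is exactly my question.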
The only thing I can see happening is that we will get new, creative systems for use cases we have never even thought about. I doubt AI will take over. Human creativity has no bounds, so we will see an explosion of new ideas and problems that only humans can tackle, not a capitulation to AI.
I'm tired of this naive view that college is for training people for jobs.
College is for growing individuals that can handle the complexities required in a field. That is the real value.
You don't do CS or SE to get out of college with knowledge of the latest hype; you get out of college armed with the tools that let you learn and handle whatever the latest hype is for decades to come.
This field especially moves way too fast for anything to still be current by the time you graduate. That's why you focus on the fundamentals and problem solving, and in a few courses here and there you get a taste of different fields (data, machine learning, etc.).
A new definition of object-oriented software should be around the corner. Imagine having a few thousand objects that fit together but need AI tweaking for the system to work. Imagine people putting the blocks in place and having LLMs glue them together. We will have bug-free software in no time. We will go from script-style code to full custom OSes in days. It's bound to happen.
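For what it's worth, here is a toy sketch of what I mean by blocks plus LLM glue; the Block protocol and the example blocks are invented for illustration, not an existing framework:

    # Toy sketch: pre-built blocks with a common interface, plus a glue layer
    # that is the part an LLM would generate or tweak per system.
    from typing import Protocol

    class Block(Protocol):
        def run(self, value: object) -> object: ...

    class CsvReader:
        def run(self, value: object) -> object:
            # value is a file path; return a list of rows.
            with open(str(value)) as f:
                return [line.rstrip("\n").split(",") for line in f]

    class RowCounter:
        def run(self, value: object) -> object:
            # value is an iterable of rows; return how many there are.
            return sum(1 for _ in value)

    def glue(blocks: list[Block], value: object) -> object:
        # The ordering, adapters, and error handling here are the "AI tweaking" part.
        for block in blocks:
            value = block.run(value)
        return value

    # A person puts the blocks in place; the glue is what gets tweaked per system:
    # glue([CsvReader(), RowCounter()], "data.csv")

That is roughly the shape I imagine, scaled up from two blocks to thousands.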
Blah blah blah I'm so bored of this take. Even if it's correct, it's still worn out. Gimme something interesting to read about LLMs or GTFO.
Waiting to be blessed by the bounty of software now that it's just a couple of prompts away.
If you are thinking about not reading AI-generated code at all, i.e. vibe coding on production software, just look at this simple case study. [0]
For future incidents, I now expect post-mortems like this one [0] to go along the lines of: "An AI code generator was used, it passed all the tests, we checked everything, and we still got this error."
There is still one fundamental lesson in [0]: English as a 'programming language' cannot be formally verified, and probabilistic AI generators can still produce perfect-looking code that causes an incident.
This time the engineers will have no understanding of the AI-generated code itself.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...