The newest structural diff tool is RefactoringMiner; there's a paper and a GitHub repo that works out of the box, which is rare for this space. Excellent results, but the mainline is limited to Java IIRC, with a couple of ports for other languages.
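If you want to poke at it, the Java API looks roughly like the following - a minimal sketch from memory of the project's README, so treat the package names, exact signatures, and the placeholder repo path/URL as assumptions to check rather than gospel:

    // Minimal sketch, assuming the RefactoringMiner API roughly as shown in its README;
    // package names and signatures may differ slightly between releases.
    import org.eclipse.jgit.lib.Repository;
    import org.refactoringminer.api.GitHistoryRefactoringMiner;
    import org.refactoringminer.api.GitService;
    import org.refactoringminer.api.Refactoring;
    import org.refactoringminer.api.RefactoringHandler;
    import org.refactoringminer.rm1.GitHistoryRefactoringMinerImpl;
    import org.refactoringminer.util.GitServiceImpl;

    import java.util.List;

    public class DetectRefactorings {
        public static void main(String[] args) throws Exception {
            GitService gitService = new GitServiceImpl();
            GitHistoryRefactoringMiner miner = new GitHistoryRefactoringMinerImpl();

            // Clone (or reuse) a local checkout; path and URL are placeholders.
            Repository repo = gitService.cloneIfNotExists(
                "tmp/some-java-project",
                "https://github.com/example/some-java-project.git");

            // Walk the history of a branch and report detected refactorings per commit.
            miner.detectAll(repo, "master", new RefactoringHandler() {
                @Override
                public void handle(String commitId, List<Refactoring> refactorings) {
                    System.out.println("Refactorings at " + commitId);
                    for (Refactoring ref : refactorings) {
                        System.out.println("  " + ref);
                    }
                }
            });
        }
    }

The handler is invoked once per commit with whatever refactorings were detected, which makes it easy to dump or post-process the results however you like.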
Incidentally, this describes what I believe to be the great difficulty of PhD research. You have to take a topic you find interesting and read all possible related work on it, which tends to result in significant scope creep as you realize just how much there is that already does what you want to do. Having exhausted your initial energy and excitement for the project, you have to force yourself the remaining 20-30% of the way to the finish line to get that work to a publishable state.
Day 1: We aim to demonstrate the effectiveness of an existing industrial catalyst in a novel application that has not seen commercial usage, potentially lowering the cost of production of precursors for essential medications.
Day 400: Having thoroughly described a universal theory of everything, we set out to build an experimental apparatus in orbit at a Lagrange point capable of detecting a universal particle which acts as a mediator for all observable forces in the known universe.
Damn, that's an incredible amount of progress in just 400 days
That is the power of AI.
Hahaha so well said, can relate during my thesis
> You have to take a topic you find interesting and read all possible related work in it
This is definitely the wrong way of going about a research project, and I have rarely seen anyone approach research projects this way. You should read two or at most three papers and build upon them. You only do a deep review of the research literature later in the project, once you have some results and you have started writing them down.
The usual justification is that if you don't do at least a breadth-first literature review, you can get burned by missing a paper that already does substantially what you do in your work. I've heard of an extreme case where it happened a week before someone went to defend their dissertation!
Excuse my naivety, but isn't it good if the same results get proven in slightly different ways? This is effectively a replication, but instead of just rerunning the experiments, you also replicate the thought process by taking a slightly different approach.
It would be good (especially with the replication crisis), but historically, to earn a PhD, especially at a top-tier institution, the criterion is conducting original research that produces new knowledge or unique insights.
Replicating existing results doesn't meet that criterion, so unknowingly repeating someone's work is an existential crisis for PhD students. It can mean you worked for 4-6 years on something the committee then can't/won't grant a doctorate for, effectively forcing you to start over.
Theoretically, your advisor is supposed to help prevent this as well by guiding you in good directions, but not all advisors are created equal.
And here we once again see an example of misaligned incentives baked into another one of our most hallowed institutions.
For humanity? Yes, it's generally good. For that particular researcher's career? Not really. Who wants to pay for research into something that's already known?
My imagination was leaning more toward the educational side than the research side of university. I see how that wouldn't be appreciated by a patron, but when you get research grants, isn't the topic discussed before starting and paying for the research? That is also kind of the point of why topics are cleared with the chair-holding professor, who is expected to already be experienced in the subject and to know where the knowledge needs to be expanded.
The majority of PhD candidates deal with this because the point of a PhD is to prove you can do “normal science” [1], which boils down to “how do I make this system go from 1% observable to 1.001% observable” and which is just a gate for entering the academic career field.
You’ll almost never see a PhD thesis that has anything particularly interesting, novel or directly applicable to the sciences.
[1] https://en.wikipedia.org/wiki/Normal_science
Oh man I feel that in my bones.
Any advice on how to mitigate this?
I worked at a chair for 12 years - in that time I've seen a lot of PhD students go through this.
If it helps anything at all: It's normal. At this point, you've already proven you're smart and knowledgeable. Now, the universe wants to see if you can also finish what you've started. That's the main thing a PhD proves: That you can take an incredibly interesting topic and then do all the boring stuff that they need you to do to be formally compliant with arbitrary rules.
Focus on finishing. Reduce the scope as much as possible again. Down to your core message (or 3-4 core messages, I guess, for paper-based dissertations).
Listen to the feedback you get from your advisor.
You got this!
This is spot on. My dad was a professor and had dozens of PhD students. The only thing differentiating them (as I remember him telling me) was the resolve to keep the work as /tiny/ as possible. Who is remembered for his/her PhD? Only the smallest cream of the crop. He even made good fun of worthless theses by (then) well-known professors. It’s not about your PhD.
When I did my MSc thesis he told me it was a pretty good PhD. (Before giving me a month’s work in corrections.) I didn’t understand back then, but I understand now. It was small, replicable and novel (still is)! Just replicate three times and be done with it. You’ve proven your mastery. Now start something serious.
It's been a long, long time since I was in the academic research world - but isn't three published papers pretty much the expected quantity of research for a PhD?
Really depends on the field. Computer science research usually has pretty short cycle times. If you're working on, say, biology or anthropology, collecting data can take substantially longer.
Switch back and forth between trying and reviewing. Often it can be good to just try before reviewing, to get your feet wet. Don't spend too much time. Then when reviewing you're going to understand it more. Repeat this process.
But there are some things to remember that are incredibly important:
- a paper doesn't *prove* something; it suggests it is *probably* right
- under the conditions of the paper's settings, which aren't yours
- just because someone had X outcome before doesn't mean you won't get Y outcome
- those small details usually dominate success
- sometimes a one-liner, seemingly throwaway sentence is what you're missing
- sometimes the authors don't know, and the answer is 5 papers back that they've been building on
- DO NOT TREAT PAPERS AS *ABSOLUTE* TRUTH
- no one is *absolutely* right; everyone is *some* degree of wrong
- other researchers are just like you, writing papers just like you
- they also look back at their old papers and say "I'm glad I'm not that bad anymore"
- a paper demonstrating your idea is a positive signal; you're thinking in the right direction

As soon as you start treating papers as "this is fact", you tend to over-generalize the results. But the details dominate, so you just kill your own creativity. You kill your own ideas before you know whether they're right or wrong. More importantly, you don't know how right or how wrong.

For me, it wasn't so much about mitigating this cycle as recognizing that the grit of pushing through that last 20-30% is actually a valuable life skill that the PhD could teach me, and that projects I felt I would never want to touch again actually started to become interesting again after I had left them for a year or so.
My choice is to not do a PhD and just invest as much or as little effort in the topic as you like
It seems almost inevitable...
Acknowledge it is normal? Attempt to buy deeper into the delusion ("Yeah, my work is awesome and unique!")? Use stimulants to force enthusiastic days every once in a while?
In one of his speeches, Obama said "Better is good". I think about this a lot. It feels like better compounds over time, too. Small improvements add up. From experience, nothing new is perfect the first go round, so sitting around trying to come up with a perfect design is counterproductive because there's no such thing.
"impediment to action advances action. what stands in the way, becomes the way".
Obama - what a time to be alive
I'm _exactly_ in this situation right now with a side project.
It's in a field that I have little experience with (Information Retrieval). So there is obviously prior art that I could learn from or even integrate with.
This article motivates me further to learn things by focusing on building my own and peeking into prior art as I go, when I'm stuck or need ideas.
Recently a Clojure documentary came out and the approach of Rich Hickey was seemingly the opposite: Deep research of prior art, papers, other languages over a long period of time.
However, he also mentioned that he made other languages before. So the larger story starts earlier, by making things and learning from practice.
Maybe that's also the bigger lesson: don't overthink, start by making the thing. But later, once you've learned a bunch of practical lessons and maybe hit a wall or two, you might need that deeper research to push further.
> Recently a Clojure documentary came out and the approach of Rich Hickey was seemingly the opposite: Deep research of prior art, papers, other languages over a long period of time.
That was also on my mind thanks to the documentary. Then I followed up with "Simple Made Easy" and "Hammock Driven Development", and it makes me want to learn Clojure.
Clojure documentary on CultRepo channel: https://www.youtube.com/watch?v=Y24vK_QDLFg
Simple Made Easy: https://www.youtube.com/watch?v=SxdOUGdseq4
Hammock Driven Development: https://www.youtube.com/watch?v=f84n5oFoZBc
Our CEO at Rec Room put this in a way I really like: "Teams are always telling me they wish they did shorter projects. I've almost never heard a team say, 'we wish we delayed launch, did something more complex, polished more.'"
I don't think it holds in 100% of situations but I do think if you're going to make an error one way or the other, I'd rather do something smaller and release too early than do something bigger and waste time.
I think the author is really just getting at the fact that humans are by nature intelligent and by nature tend to think of similar ideas. So you can either unknowingly complete a project which is inevitably, in some sense, a replication of another project, or you can do the research first and realize it's partially a replication, which is a bit disheartening. I think the solution might lie in realizing that completing a project for the sake of your own learning might be the most important factor. (This is easier said than done when you are trying to complete novel academic research or when you are trying to make a profit off of your unique project.) But those fields, too, are more than forgiving of research that seems only to slightly tweak something that already exists.
I found that setting deadlines solves most scope creep problems. Anecdotally, I am more likely to complete a project for a game jam or programming contest (which come with hard deadlines) than finishing an open-ended project.
See also "Why does the C++ standard ship every three years" (as opposed to ship when the desired features are ready):
https://news.ycombinator.com/item?id=20428703 (2019-07-13, 220 comments)
My answer is both #1 and #2
Prototype a minority of the time. Research a majority of the time. At some point the ratio flips as research fades out and producing increases.
Interesting read but the author's thoughts were all over the place.
There is something to be said about scope creep here
This isn't a blogpost with a particular focus; it's a newsletter update for people who follow this person.
Tell me you expect to be told what to think.
Sabotage is intentional, but the problem is unintended excursions, which are endemic to any scouting.
The real problem is avoidance, when cuts are warranted and you don't want them, so you ... hide, often by working hard on something else.
The solution is to value your time. Most don't, so (self-) managers instead need to dangle other opportunities: finish this so you can do that. You can't take candy from a baby without trouble; instead, you trade for something else.
Over-planning and scope creep are a problem, but let's not swing the pendulum too far the other way. Some of my most successful projects were ones where I planned out and worked through most of the features ahead of time by modeling my data, without any working software to try out. When I'm in that phase, I often don't really know what is too much. If I leave out features I think I or the users will probably want, I spend a lot of time on significant redesign of core aspects of the code. If I'm wrong, the project gets too big and we chalk it up to scope creep.
My ability to get this right is often a matter of how well I know the domain. If I don't know the domain as well as I think I do, I fall into a lot of rework. If I know the domain better than I imagine, then I waste my time with a baby-step process when I could have run. All of this is a big judgement call, and I have "regrets" in both directions.
I think the ideal solution is to spend a lot of time in the analysis phase to load your brain up with the correct context, but then be ready to throw out the overengineered solution and just build what feels right.
Don't fall prey to sunk cost fallacy. Just because you spent hours researching a PhD level topic doesn't mean you now have to use it in your project, if it's not quite the right application.
You worry too much about being wrong. Just try something and adjust as needed.
This is a pretty common failure mode in engineering too.
You start with a simple goal → then research → then keep expanding scope → and never ship.
The people who actually finish things do the opposite: lock scope early, ignore “better ideas”, ship v1.
Most projects don’t fail due to lack of ideas, they fail because they never converge.
Firstly - greetings! It’s so rare to see a Clojure person in the wild! And secondly, I really resonated with this! It feels like we computer programmers typically overthink too much to begin with, and then LLMs come along and actually help us overthink even more!
> Perhaps there’s some kind of conservation law here: Any increases in programming speed will be offset by a corresponding increase in unnecessary features, rabbit holes, and diversions.
This resonates hard. LLMs enable true perfectionism, the ability to completely fulfil your vision for a project. This lets you add many features without burning out due to fatigue or boredom. However (as the author points out), most projects' original goal does not require these complementary features.
Definitely have found myself in a similar situation; in fact, most of the time option 2 happens. I too have caught myself just thinking rather than building, and I'm glad I am not the only one who repeatedly tells himself to just build it rather than enter the rabbit hole of what is out there.
As usual it's not so black and white and is all about balance.
Project where the sole user is you in your kitchen? Sure, hack it together.
Project where you actually want other people to use the product? A research phase matters and helps here.
Consider what the goal is and the amount of effort to invest typically becomes more evident.
I feel for this a lot, but it's because I don't want to actually write code or build something if there is something workable already out there.
Maybe I lack imagination or curiosity, but it makes it difficult to come up with an idea and follow it through.
Funnily enough, this aligns perfectly with the WW2-era CIA sabotage handbook https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/...
Organizations and Conferences:
1. Insist on doing everything through “channels.” Never permit short-cuts to be taken in order to expedite decisions.
2. Make “speeches.” Talk as frequently as possible and at great length. Illustrate your “points” by long anecdotes and accounts of personal experiences.
3. When possible refer all matters to committees, for “further study and consideration”. Attempt to make the committees as large as possible – never less than five.
4. Bring up irrelevant issues as frequently as possible.
5. Haggle over precise wordings of communications, minutes, resolutions.
6. Refer back to matters decided upon at the last meeting and attempt to re-open the question of the advisability of that decision.
7. Advocate “caution.” Be “reasonable” and urge your fellow-conferees to be “reasonable” and avoid haste which might result in embarrassments or difficulties later on.
8. Be worried about the propriety of any decision – raise the question of whether such action as is contemplated lies within the jurisdiction of the group or whether it might conflict with the policy of some higher echelon.
Managers and Supervisors:
1. Demand written orders.
2. “Misunderstand” orders. Ask endless questions or engage in long correspondence about such orders. Quibble over them when you can.
3. Do everything possible to delay the delivery of orders. Even though parts of the order may be ready beforehand, don’t deliver it until it’s completely ready.
4. Don’t order new working materials until your current stocks have been virtually exhausted, so that the slightest delay in filling your order will mean a shutdown.
5. Order high-quality materials which are hard to get. If you don’t get them argue about it. Warn that inferior materials will mean inferior work.
6. In making work assignments, always sign out the unimportant jobs first. See that important jobs are assigned to inefficient workers with poor equipment.
7. Insist on perfect work in relatively unimportant products; send back for refinishing those which have the least flaws. Approve other defective parts whose flaws are not visible to the naked eye.
8. Make mistakes in routing so that parts and materials will be sent to the wrong place in the plant.
9. When training new workers, give incomplete or misleading instructions.
10. To lower morale and with it production, be pleasant to inefficient workers; give them undeserved promotions. Discriminate against efficient workers; complain unjustly about their work.
11. Hold meetings when there is critical work to be done.
12. Multiply paperwork in plausible ways. Start duplicating files.
13. Multiply the procedures and clearances involved in issuing instructions, making payments, and so on. See that three people have to approve everything where one would do.
14. Apply all regulations to the last letter.
This reads like satire!
also, if you're in a large organization, this is a great way to sabotage other people's projects while elevating your stature. Require that they go evaluate alternatives and prior art, and write a slew of analysis and decision documentation.
i feel a lot are missing the point here of identifying the "why" in why you want to build a project.
do you want to learn a new skill? do you want to scratch a very specific personal itch for just yourself? do you want to solve problems for others as well? do you want to build a startup/business around the idea?
all of these necessitate different approaches and strategies to research and coding. scratching an itch? maybe fully vibe coding is fine. want to learn? ditch the vibes and write by hand and ignore prior art. want to build a business? do some actual market research first and decide if this is something you actually want to pursue.
this post was a good reminder for me to identify the why as early on as possible and to be ok with just building something for myself without always having to monetize a side project which, for me, just zaps all joy from it.
Just code using C++ (or a language with similar syntax complexity or a massive runtime: Java, Microsoft Rust, etc.). It gets even better with regular ISO feature creep: you'll always find a dev who manages to make things hard-depend on the latest "standard".
Basically, you will end up dependent on the massive complexity of a compiler due to the syntax complexity, and as the cherry on top, thanks to ISO, you'll get feature creep creating a cycle of planned obsolescence of around 5 to 10 years.
Oh, sorry, "they" called that "innovation".
I think this should've been two separate blog posts.
Yeah, it’s funny how all the comments so far are only talking about the over-engineering and scope creep, when the bulk of the blog was dedicated to a totally separate rant (but a good one!) on structural diffing.
Looks like this was a newsletter by the author, not a blogpost
That makes more sense!
Scope creep is scary when you have the wrong pretext: to "just" implement a small feature or a project, when in reality the prerequisites to do so are enormous.
Scope creep is scary when you have the wrong pretext.
I mean if you don't reconsider the foundation of computer science, mathematics or what even is information, can you truly be building a cool CRM?