For me, practical knowledge comes from trying to figure things out. The more polished and "ELI5" the material is, the less I retain. I've played with quite a few LLM tools that promised to help me "understand anything", but I don't think they help with intuition all that much. For what it's worth, it's not an LLM-specific problem. I like YouTube content like 3Blue1Brown, but I don't think that I retained anything useful from any of it.
I don't question that LLMs are useful for answering questions about codebases, but this is closer to "turn a codebase into a curriculum", and... does that actually work?
An important tenet of modern education is that true knowledge is that which the learner (re)constructs in their mind. Heuristic learning (i.e. "trying to figure things out") is often a great way to do this.
Definitely. As instructors, we see this in action all the time. We describe stuff in writing and in lecture, discuss it with the students, and everybody seems to have good understanding. And then we have them implement it.
And that's when the shit hits the fan. :-D Only after concerted effort do the students actually gain understanding.
Are those 9.7k real users? Maybe I'm too old-fashioned, but whenever I tried to use such tools, long before AI, they didn't actually help much. It was much easier to read the codebase and find the connections I needed on my own.
It reminds me of NX graphs, which are helpful for finding circular dependencies, but beyond that they don't provide much value, since I can see the same kind of structure just by looking at the codebase.
Am I doing something wrong with these tools?
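For what it's worth, the circular-dependency check that graph tools like Nx's project graph perform is just cycle detection over a directed graph. A minimal sketch in plain Python (module names here are made up for illustration):

```python
# Detect a circular dependency in a module graph via depth-first search.
# deps maps each module to the modules it imports (hypothetical names).

def find_cycle(deps):
    """Return one dependency cycle as a list of nodes, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {n: WHITE for n in deps}
    stack = []                             # current DFS path

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for nxt in deps.get(node, ()):
            if color.get(nxt, WHITE) == GRAY:
                # Back edge: nxt is already on the path, so we found a cycle.
                return stack[stack.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                found = dfs(nxt)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for n in list(deps):
        if color[n] == WHITE:
            found = dfs(n)
            if found:
                return found
    return None

deps = {
    "app":   ["auth", "utils"],
    "auth":  ["db"],
    "db":    ["app"],    # closes the cycle: app -> auth -> db -> app
    "utils": [],
}
print(find_cycle(deps))  # ['app', 'auth', 'db', 'app']
```

The point of the comment stands, though: the algorithm is trivial, and on a codebase you can hold in your head, reading the imports directly gets you the same answer.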
Just look at the star graph at the bottom of the readme (itself a sign of a hype-driven project with little substance). I highly doubt that hockey stick is organic.
The hockey stick was probably from when they paid for fake stars: https://awesomeagents.ai/news/github-fake-stars-investigatio...
Did anyone actually use this on a complex codebase and build any kind of intuition from it?
Like, having looked at the demo, it feels less intuitive and more complex than going through the codebase myself with tmux + codex and reading it. For a tool to help you understand a codebase, it should make interacting with it easier; this seems to introduce way too many steps instead.
Interesting approach. I built something similar, https://github.com/nilbuild/diffity, to understand unknown codebases. The difference is that it gives you an interactive walk-through with mermaid diagrams, guiding you through the feature or part of the codebase you're looking at.
Is this like Obsidian's graph view? Looks pretty and makes cool screenshots, but has no actual value and is cumbersome to use? (Btw, this isn't meant to be a mean comment, just a question after looking at the output.)
What evidence is there that this makes any difference at all? There are a gazillion (and one) codebase understanding solutions using knowledge graphs. How do I know if it's any good compared to just using Codex or Claude Code?
It depends on personal taste in how someone understands things: some people like to YOLO and tinker, some like to read the docs before looking at any code, and some do both at once. To me, the fact that there's no evidence any one solution works better than the others is exactly why it's so easy to make trending/popular repos these days.
Fake GitHub stars. Move along.
Vibe-coded projects and fake GitHub stars. Name a more iconic duo.
Provocative title. Then seeing the 8+ dot folders in the repo really made this seem like some kind of obscure satire at first.