This does way less than I'd expect. Converting images to TikZ is nice, but some of the other applications demonstrated were horrible. There is no way anyone should be using AI to cite.
The quality and usefulness of it aside, the primary question is: are they still collecting chats for training data? If so, it limits how comfortable, and sometimes even permitted, people would be working on their yet-to-be-public work using this tool.
That's probably one good use of LLMs. LaTeX was a relief to deal with after I started using an LLM. I had been using LaTeX for years before, but it was never easy. Aside from making the LLM write the text, it is a very good grammar and typo checker.
Even with Overleaf and the browser's default spell checker, my advisor's feedback still flagged a few typos in my thesis each time I sent it to him for review.
BTW: Overleaf added LLM integration which helps with solving LaTeX errors, but you pay for it separately. Still, I feel like Overleaf, with their good docs on LaTeX, is something positive for science, and I hope they manage to adapt to this new competition.
> Prism is free to use, and anyone with a ChatGPT account can start writing immediately.
Maybe it's cynical, but how does the old saying go? If the service is free, you are the product. The goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks or send their agents to wrap up the IP or something.
It's interesting how quickly the quest for the "Everything AI" has shifted. It's much more efficient to build use-case specific LLMs that can solve a limited set of problems much more deeply than one that tries to do everything well.
I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.
All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications - each marketed and developed around a specific target audience, and priced respectively according to value.
Yes, but there's a really large number of users who don't want to have to setup vscode, git, texlive, latex workshop, just to collaborate on a paper. You shouldn't have to become a full stack software engineer to be able to write a research paper in LaTeX.
In the end we're going to end up with papers written by AI, proofread by AI... summarized for readers by AI. I think this is just for them to remain relevant and be seen as still pushing something out.
Way too much work having AI generate slop which gets dumped on a human reviewer to deal with. Maybe switch some of that effort into making better review tools.
From my perspective as a journal editor and a reviewer these kinds of tools cause many more problems than they actually solve. They make the 'barrier to entry' for submitting vibed semi-plausible journal articles much lower, which I understand some may see as a benefit. The drawback is that scientific editors and reviewers provide those services for free, as a community benefit. One example was someone using their undergraduate affiliation (in accounting) to submit a paper on cosmology, entirely vibe-coded and vibe-written. This just wastes our (already stretched) time. A significant fraction of submissions are now vibe-written and come from folks who are looking to 'boost' their CV (even having a 'submitted' publication is seen as a benefit), which is really not the point of these journals at all.
I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.
This keeps repeating in different domains: we lower the cost of producing artifacts and the real bottleneck is evaluating them.
For developers, academics, editors, etc., in any review-driven system the scarcity is good human judgement, not text volume. AI doesn't remove that constraint, and arguably puts more of a spotlight on the ability to separate the shit from the quality.
Unless review itself becomes cheaper or better, this just shifts work further downstream while disguising the change as "efficiency".
I'm scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We're truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it's drowning out everything of value.
I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.
Maybe you get reimbursed for half as long as there are no obvious hallucinations.
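Spelled out as a toy model combining the deposit idea with the half-refund rule (purely illustrative; the function name, amounts, and thresholds are invented, not from the thread):

```python
def refund(deposit: float, accepted: bool, hallucinations: int) -> float:
    """Toy model of the proposed submission deposit.

    Full refund if the paper is accepted; half refund if it is
    rejected but contains no obvious hallucinations; nothing
    otherwise. All rules here are a sketch of the thread's idea.
    """
    if accepted:
        return deposit
    if hallucinations == 0:
        return deposit / 2
    return 0.0

# A confident, careful author risks little; a spammer pays.
print(refund(100.0, accepted=True, hallucinations=0))   # 100.0
print(refund(100.0, accepted=False, hallucinations=0))  # 50.0
print(refund(100.0, accepted=False, hallucinations=3))  # 0.0
```

The incentive structure is the point: the expected cost scales with how carelessly you submit, while a genuine author treats it as a recoverable security deposit.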
Better yet, make a "polymarket" for papers where people can bet on which papers will make it, and rely on "expertise arbitrage" to punish spam.
The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.
That would be tricky. I often submitted to multiple high-impact journals, going down the list until someone accepted it. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem, and there should be payment for the effort to screen the paper, but then I would expect the reviewers to be paid for their time.
I mean, your methodology also sounds suspect. You're just going down a list until it sticks. You don't care where it ends up (I'm sure within reason), just as long as it is accepted and published somewhere (again, within reason).
I’d worry about creating a perverse incentive to farm rejected submissions. Similar to those renter application fee scams.
Pay to publish journals already exist.
This is sorta the opposite of pay to publish. It's pay to be rejected.
I would think it would act more like a security deposit, and you'd get back 100%, no profit for the journal (at least in that respect).
> There could be a submission fee that would be fully reimbursed if the submission is actually accepted for publication.
While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!
I generally agree.
On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.
Completely agree. Look at the independent research that gets submitted under "Show HN" nowadays:
https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
I am very sympathetic to your point of view, but let me offer another perspective. First off, you can already vibe-write slop papers with AI, even in LaTeX format--tools like Prism are not needed for that. On the other hand, it can really help researchers improve the quality of their papers. I'm someone who collaborates with many students and postdocs. My time is limited and I spend a lot of it on LaTeX drudgery that can and should be automated away, so I'm excited for Prism to save time on writing, proofreading, making TikZ diagrams, grabbing references, etc.
AI generating references seems like a hop away from absolute unverifiable trash.
I appreciate and sympathize with this take. I'll just note that, in general, journal publications have gone considerably downhill over the last decade, even before the advent of AI. Frequency has gone up, quality has gone down, and the ability to check whether everything in an article is actually valid becomes quite challenging as frequency goes up.
This is a space that probably needs substantial reform, much like grad school models in general (IMO).
Anybody else notice that half the video was just finding papers to decorate the bibliography with? Not like "find me more papers I should read and consider", but "find papers that are relevant that I should cite--okay, just add those".
This is all pageantry.
It's all performance over practice at this point. Look to the current US administration as the barometer by which many are measuring their public perceptions
I've noticed this pattern, and it really drives me nuts. You should really be doing a comprehensive literature review before starting any sort of review or research paper.
We removed the authorship of a former co-author on a paper I'm on because his workflow was essentially this--with AI-generated text--and a not-insignificant amount of straight-up plagiarism.
Plus, this practice (just inserting AI-proposed citations/sources) is what has recently been behind some very embarrassing "editing" mistakes, notably in reports from public institutions. Now OpenAI lets us do pageantry even faster! <3
The hand-drawn diagram to LaTeX is a little embarrassing. If you load up Prism and create your first blank project you can see the image. It looks like it's actually a LaTeX rendering of a diagram drawn in a hand-drawn style and then overlaid on a very clean image of a napkin. So you've proven that you can go from a rasterized LaTeX diagram back to equivalent LaTeX code. Interesting, but probably will not hold up when it meets real-world use cases.
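The "hand-drawn" look itself is cheap to produce from clean vector code in TikZ, which supports the suspicion above. A minimal sketch, assuming only the standard `decorations.pathmorphing` library:

```latex
% A wobbly, sketch-like rectangle and arrow generated from
% precise coordinates -- i.e. a "napkin drawing" that was
% never drawn by hand.
\documentclass[tikz]{standalone}
\usetikzlibrary{decorations.pathmorphing}
\begin{document}
\begin{tikzpicture}[
    sketchy/.style={decorate,
      decoration={random steps, segment length=4pt, amplitude=0.6pt}}]
  \draw[sketchy] (0,0) rectangle (3,1.5);
  \draw[sketchy, ->] (3,0.75) -- (4.5,0.75);
  \node at (1.5,0.75) {input};
\end{tikzpicture}
\end{document}
```

If the demo image was produced this way, round-tripping it back to TikZ is a much easier task than parsing a genuine photo of a napkin.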
Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won’t be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.
On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.
[0] https://crixet.com
[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...
[2] https://news.ycombinator.com/item?id=42009254
[3] https://news.ycombinator.com/item?id=46394937
I'm curious how it compares to Overleaf in terms of features? Putting aside the AI aspect entirely, I'm simply curious if this is a viable Overleaf competitor -- especially since it's free.
I do self-host Overleaf which is annoying but ultimately doable if you don't want to pay the $21/mo (!).
I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.
Overleaf is a little curious to me. What's the point? Just install LaTeX. Claude is very good at manipulating LaTeX documents and I've found it effective at fixing up layouts for me.
In my circles the killer features of Overleaf are the collaborative ones (easy sharing, multi-user editing with track changes/comments). Academic writing in my community basically went from emailed draft-new-FINAL-v4.tex files (or a shared folder full of those files) to basically people just dumping things on Overleaf fairly quickly.
collaboration is the killer feature tbh. overleaf is basically google docs meets latex.. you can have multiple coauthors editing simultaneously, leave comments, see revision history, etc.
a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their paper.
overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.
also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.
I can code in monospace (of course) but I just can't write in monospace markup. I need something approaching WYSIWYG. It's just how my brain works -- I need the italics to look like italics, I need the footnote text to not interrupt the middle of the paragraph.
The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.
(And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)
The deeper I got, the more I realized that really supporting the entire LaTeX toolchain in WASM would mean simulating an entire Linux distribution :( We wanted to support Beamer, LuaLaTeX, mobile (which wasn't working with WASM because of resource limits), etc.
So this is the product of an acquisition?
> Prism builds on the foundation of Crixet, a cloud-based LaTeX platform that OpenAI acquired and has since evolved into Prism as a unified product. This allowed us to start with a strong base of a mature writing and collaboration environment, and integrate AI in a way that fits naturally into scientific workflows.
They’re quite open about Prism being built on top of Crixet.
Is anyone else having trouble using even some of the basic features? For example, I can open a comment, but there doesn't seem to be any way to close it (I try clicking the checkmark and nothing happens). You also can't seem to edit comments once typed.
I remember, something like a month ago, Altman tweeting that they were stopping all product work to focus on training. Was that written on water?
Seems like they have only announced products since, and no new model trained from scratch. Are they still having pre-training issues?
what better training data than cutting edge research handed over in exchange for a robot proof-reader
This seems like a very basic Overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.
I could see it seeming that way because the UI is quite minimalist, but the AI capabilities are very extensive, imo, if you really play with it.
You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all works), but most researchers don't want to, and really shouldn't have to, figure out how to make all that work for their specific workflows.
Accessibility does matter.
> In 2025, AI changed software development forever. In 2026, we expect a comparable shift in science,
I can't wait
Very unfortunately named. OpenAI probably (and likely correctly) estimated that 13 years is enough time after the Snowden leaks to use "prism" for a product but, for me, the word is permanently tainted.
Anecdotally, I have mentioned PRISM to several non-techie friends over the years and none of them knew what I was talking about; they know 'Snowden' but not 'PRISM'. The number of people who actually cared about the Snowden leaks is practically a rounding error.
Given current events, I think you’ll find many more people care in 2026 than did in 2024.
(See also: today’s WhatsApp whistleblower lawsuit.)
This was my first thought as well. Prism is a cool name, but I'd never ever use it for a technical product after those leaks, ever.
Guessing that AI came up with the name based on the description of the product.
Perhaps, like the original PRISM programme, behind the door is a massive data harvesting operation.
I'd think that most people in science would associate the name with an optical prism. A single large political event can't override an everyday physical phenomenon in my head.
I suspect that name recognition for PRISM as a program is not high at the population level.
2027: OpenAI Skynet - "Robots help us everywhere, It's coming to your door"
Pretty much every company I’ve worked for in tech over my 25+ year career had a (different) system called prism.
(plot twist: he works for NSA contractors)
Surprised they didn't do something trendy like Prizm or OpenPrism while keeping it closed source code.
Or the JavaScript ORM.
I never thought of that association, not in the slightest, until I read this comment.
this was my first thought as well.
I followed the Snowden stuff fairly closely and forgot, so I bet they didn't think about it at all and if they did they didn't care and that was surely the right call.
I don't see the use. You can easily do everything shown in the Prism intro video with ChatGPT already. Is it meant to be an overleaf killer?
I’ve been “testing” LLM willingness to explore novel ideas/hypotheses for a few random topics[0].
The earlier LLMs were interesting, in that their sycophantic nature eagerly agreed, often lacking criticality.
After reducing said sycophancy, I’ve found that certain LLMs are much more unwilling (especially the reasoning models) to move past the “known” science[1].
I’m curious to see how/if we can strike the right balance with an LLM focused on scientific exploration.
[0]Sediment lubrication due to organic material in specific subduction zones, potential algorithmic basis for colony collapse disorder, potential to evolve anthropomorphic kiwis, etc.
[1]Caveat: it's very easy for me to tell when an LLM is "off the rails" on a topic I know a lot about; much less so, and much more dangerous, for these "tests" where I'm certainly no expert.
Not an academic, but I used LaTeX for years and it doesn't feel like what the future of publishing should use. It's finicky and takes so much markup to do simple things. A lab manager once told me about a study showing that people who used MS Word to typeset were more productive, and I can see that…
100% completely agreed. It's not the future, it's the past.
Typst feels more like the future: https://typst.app/
The problem is that so many journals require certain LaTeX templates so Typst often isn't an option at all. It's about network effects, and journals don't want to change their entire toolchain.
Agreed. TeX/LaTeX is very old tech. Error recovery and messages are very bad. Developing new macros in TeX is about as fun as you'd expect developing in a 70s-era language to be (i.e. probably similar to COBOL or old Fortran).
I haven't tried it yet but Typst seems like a promising replacement: https://typst.app/
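To make the "so much markup to do simple things" complaint concrete: even a trivial one-heading LaTeX document needs a preamble and environment boilerplate before any content appears (a minimal sketch; the rough Typst equivalent is just `= A heading` followed by plain prose with `*bold*` markup):

```latex
% Minimal LaTeX document: class declaration and a document
% environment are required before any text can be typeset.
\documentclass{article}
\begin{document}
\section{A heading}
Some \textbf{bold} text.
\end{document}
```

None of this boilerplate carries content; it exists to satisfy the tool, which is exactly the friction the comments above describe.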
With a tool like this, you could imagine an end-to-end service for restoring and modernizing old scientific books and papers: digitization, cleanup, LaTeX reformatting, collaborative or volunteer-driven workflows, OCR (like Mathpix), and side-by-side comparison with the original. That would be useful.
Don’t forget replication!
I'm curious how you think AI would aide in this.
Replicate this <slop>
Ok! Here's <more slop>
I don't think you understand what replication means in this context.
This does way less than I'd expect. Converting images to TikZ is nice, but some of the other applications demonstrated were horrible. There is no way anyone should be using AI to cite.
Check out MonsterWriter if you are concerned about the recent acquisition of this.
It also offers LaTeX workspaces
see video: https://www.youtube.com/watch?v=feWZByHoViw
The quality and usefulness of it aside, the primary question is: are they still collecting chats for training data? If so, it limits how comfortable, and sometimes even how permitted, people would be working on their yet-to-be-public work using this tool.
That is probably one good use of LLMs. LaTeX was a relief to deal with after using an LLM. I had been using LaTeX for years before, but it was never easy. Aside from having the LLM write the text, it is a very good grammar and typo checker.
Even with Overleaf and the browser's default spell checker, my thesis feedback from my advisor contained a few typos each time I sent it to him for review.
BTW: Overleaf added LLM integration which helps with solving LaTeX errors, but you pay for it separately. Still, I feel like Overleaf, with their good docs on LaTeX, is something positive for science, and I hope they manage to adapt to this new competition.
What's the goal here?
There was an idea of OpenAI charging commission or royalties on new discoveries.
What kind of researcher wants to potentially lose, or get caught up in legal issues because of a free ChatGPT wrapper, or am I missing something?
> Prism is free to use, and anyone with a ChatGPT account can start writing immediately.
Maybe it's cynical, but how does the old saying go? If the service is free, you are the product. The goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks or send their agents to wrap up the IP or something.
It's interesting how quickly the quest for the "Everything AI" has shifted. It's much more efficient to build use-case specific LLMs that can solve a limited set of problems much more deeply than one that tries to do everything well.
I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.
All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications - each marketed and developed around a specific target audience, and priced respectively according to value.
I postulate that 90% of the reason OpenAI now has "variants" for different use cases is just to capture training data...
Very underwhelming.
Was this not already possible in the web ui or through a vscode-like editor?
Yes, but there's a really large number of users who don't want to have to setup vscode, git, texlive, latex workshop, just to collaborate on a paper. You shouldn't have to become a full stack software engineer to be able to write a research paper in LaTeX.
All your papers are belong to us
> Users have full control over whether their data is used to help improve our models
In the end we're going to end up with papers written by AI, proofread by AI... summarized for readers by AI. I think this is just for them to remain relevant and to be seen as still pushing something out.
Way too much work having AI generate slop that then gets dumped on a human reviewer to deal with. Maybe switch some of that effort into making better review tools.
"Accelerating science writing and collaboration with AI"
Uhm ... no.
I think we need to put an end to AI as it is currently used (not all of it but most of it).
Does "as it is currently used" include what this apparently is (brainstorming, initial research, collaboration, text formatting, sharing ideas, etc)?
Yeah, there are already way more papers being published than we can reasonably read. Collaboration, ok, but we don’t need more writing.
> Introducing Prism Accelerating science writing and collaboration with AI.
I thought this was introduced by the NSA some time ago.