Tuesday, March 10, 2026

Science Papers are now mainly read by AI LLMs

Alexander Kustov, with Claude AI assistance, writes Academics Need to Wake Up on AI and Part II:

1. AI can already do social science research better than most professors. In particular: Most papers are already mostly read by AI, not humans. Your primary audience is increasingly LLMs.
2. The academic paper is a dead format walking.
3. The commercial journal system may not survive this.
4. Academics hold AI to absurd double standards.
If this is true, then I find it hilarious. Goodness knows how many so-called 'scientists' are utterly useless; they survive like parasites in a system where publishing and citation are considered 'accomplishment'. No actual discovery or utility, no innovation, just more citation and publication in a system that rewards those who worship it. It's an elitist paradise with delusions of grandeur at taxpayer expense.
Good pieces. I agree with him on the matters relevant to me.
What's more, the pieces go well above the following kind of typical exchange I see around:
``Hey, hey, hey! You are going to lose [/ have lost] your job to AI. We don't need to deal with a piece of sh*t like you. Never did. Got that?'' [Repeat.] [Have been hearing that, also with the word ``tsunami'' for about 3 years by now. Before ChatGPT, the excuse was: a prediction for the gig economy not stable jobs (the speaker had made himself rich enough already, and wouldn't need it, and seemed to take a hidden delight in delivering such lines after noting my presence). After ChatGPT, the excuse has become: AI. But let's come to the more recent times.]
In response, these days:
Person 1: ``Hmmm... 4 - 1.x years to go.''
Person 2: ``Which VCs from the SF Bay Area have been quietly funding an AI-generated Excel? When is it likely to hit the market? The time-to-market must've decreased from about 4 months 1 month ago, to about 2 months by now, no?''
Person 3 [to Person 2]: ``Hey, people! Any idea? Is any of us [IIT Bombay BTech graduates] involved in it? Can they outsource the development to us?''
---
Of course, someone can always argue that looking for something that goes above the level of this kind of exchange is, frankly speaking, setting the bar too low. He would be right.
But what I mean to say is that there are certain points which he makes, and these certainly would require real insight for them to be included in the thought process that goes on before the article begins to get written. And that's why the write-ups turned out to be good. Good enough for bookmarking and sharing. I especially appreciated some observations made in point nos. 1, 5, 9, 11, 12, 13, 16, 17, and to a somewhat lesser extent, 20.
Thanks for pointing it out.
--Ajit