Anyone Seen AI Submissions?

They get more. The student gets about 25K, plus the benefits cost about 5K, and tuition is near 20K per year. The University gets overhead on the salary and bennies (but not the tuition), which amounts to an additional 18K-ish.

I’m generally funded by the NIH, which throws the overhead on top of the direct costs, so I don’t have to worry about that 18K coming out of my research budget, but other funders don’t do that.

So the University, for one grad student making 25K per year, gets almost 40K in money from tuition and overhead.

None of this includes the money the student will need for actual research supplies, equipment, travel, etc…

Postdocs make about 55K per year (more in some cases). Bennies are another 14K or so. Overhead to the University is about 40K.

So the Uni gets about the same total from a student or a postdoc, just split in different ways - which does affect where the money goes in upper admin. My College wants me to put tuition on the grant because it keeps those dollars, while overhead goes to central admin and the college only sees 12% of it. So the college makes more money off tuition.

End result, postdocs cost a bit more than a grad student but not orders of magnitude more.

For my back of the envelope budgeting, I budget 100K per year in direct costs (not including the overhead) for any researcher in my lab, which includes salary, bennies, tuition (if a student), and research supplies. The devil is in the details of course but as a rule of thumb it seems to work.
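The arithmetic in the posts above can be sketched as a few lines of Python. All figures are the thread's own round numbers (25K/5K/20K for a grad student, 55K/14K for a postdoc, 100K direct-cost rule of thumb); the overhead rates are back-solved from those numbers for illustration, not official institutional rates.

```python
# Back-of-the-envelope lab budgeting, using the rough figures from this thread.
# All numbers are illustrative assumptions, not institutional rates.

def direct_costs(salary, benefits, tuition=0, supplies=0):
    """Direct costs per year: what comes out of the grant before overhead."""
    return salary + benefits + tuition + supplies

def university_take(salary, benefits, tuition, overhead_rate):
    """What the university collects: tuition plus overhead on salary + benefits
    (tuition is excluded from the overhead base, per the post above)."""
    return tuition + overhead_rate * (salary + benefits)

# Grad student: ~25K salary, ~5K benefits, ~20K tuition, ~18K overhead
grad_rate = 18_000 / (25_000 + 5_000)            # ~60% back-solved overhead rate
grad_take = university_take(25_000, 5_000, 20_000, grad_rate)
print(grad_take)                                  # 38_000 - "almost 40K"

# Postdoc: ~55K salary, ~14K benefits, no tuition, ~40K overhead
postdoc_rate = 40_000 / (55_000 + 14_000)        # ~58% - roughly the same rate
print(university_take(55_000, 14_000, 0, postdoc_rate))

# The 100K-per-researcher rule of thumb: salary + bennies + tuition + ~50K supplies
print(direct_costs(25_000, 5_000, 20_000, 50_000))   # 100_000
```

The point the code makes explicit: the two overhead rates come out nearly equal, so the university's take is similar for either kind of researcher; only the tuition/overhead split differs.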

Lol 25 years ago I made about $1000 a month in CA. I was able to live but was not, however, able to save much of it.

Pre-Industrial age, it was attractive to be fat because it was hard to become fat. Industrial excess made it easy for poor people to become fat and the norms shifted rather quickly such that it was attractive to be thin.

Academic writing is turgid because it used to take a lot of work to learn how to write as turgidly as an academic. Once that becomes de rigueur, who counts as the cool kids will suddenly shift. AI trained on the corpus of pre-AI academic writing will still be able to generate turgid 2020s-style papers at zero cost and increasing quality. Academic writing will join the rest of the world, where genuinely good writing is impossible to generate purely with AI because it requires depth and clarity of thought.

Like power tools in woodworking: many woodworkers will adopt them enthusiastically, some will stubbornly cling to old methods and make it work for them, and a lot will be obsoleted and replaced by a new generation. Power tools made certain methods of woodworking a lot more accessible, but they didn't change the fundamental difficulty of being a woodworker, or the mindset and problem-solving skills that cause one person to succeed where others fail.

Brevity, clarity, and wit have all been prized in writing, and will continue to be.

Mass production is mass production. If the product is sufficient quality, mass production is going to outcompete custom work due to price point.

The question is, what is the price point for writing? If the submission process is effectively free, then the submission process is going to be swamped by AI produced spam. The AI produced spam isn’t necessarily going to get better, because it getting better would actually require work, and who has time for that. The point is for AI to write about everything ever to fill up the available space with content, regardless of value.

Fact checking takes time and resources. Perhaps our world of information will evolve back to gatekeeping, trusted sources, that you will need to pay for. You can do that, or you can make your way through the free sea of falsehoods.

Who/what was the source of the article?
And, in general, what is the source for all the submissions which this thread is talking about?
Who is cheating professionally by using AI the same way a high-school kid or undergrad cheats when handing in an assignment? And what is the penalty for being caught? (Analogous to the high-school kid getting a bad grade.)

If AI is flooding academic journals, would it be possible to stop the flood by demanding that the person submitting each paper first get the paper approved by his institution? Show it to your boss (professor) before you submit it. If the journal rejects it for being too much AI, don’t just reject it…maybe the journal should publicly humiliate the boss and the institution, too, so their reputation is hurt.
(Disclaimer: I know nothing about the world of serious academic life. Now, my dorm-and-fraternity life on campus I can tell ya about. But it ain't gonna get submitted to no journal. :slight_smile: )

It was an article submitted to a peer-reviewed scientific journal for publication. The editor sent it to me to review it.

This annoyed me; I am an editor for several journals and the first thing I do when getting an article is to give it a cursory once-over to see whether it is complete nonsense or not. If it is, I desk reject it without review.

A paper being submitted by a graduate researcher or a postdoc should be reviewed by the principal investigator (PI) who is responsible for the research program or lab (and who will typically be listed as an author even if they did no real work), although depending on how many researchers are under the PI, the review may be barely more than cursory. Expecting it to be reviewed in depth at a higher level within the institution (by, say, a department chair or at the administrative level) is kind of pointless because there is likely no one there who is really qualified to review the work on even a superficial technical level. Some institutions have, or are discussing, independent internal academic review, not because of the use of AI but because of increasing and widespread evidence of research fraud (modifying images, fabricating or falsifying data, plagiarism, misattribution, et cetera), but this is costly, problematic for ongoing grants, and by no means foolproof. Frankly, a lot of journals are disinclined to investigate research fraud claims anyway, not only because it opens the door to even more investigations but also because they are dependent upon the same institutions as their major subscribers.

Realistically speaking, as AI tools get more integrated into general and technical writing workflows, there isn't going to be a distinct line between a human author and an AI 'assistant'. Many people already use chatbots to 'organize their thoughts' (a practice I decry because I think it erodes the fundamental skill of thinking through the process of outlining or diagramming a paper), and this will likely evolve into having a generative AI write an initial draft. One can understand why a researcher under pressure to publish would do so: sitting down and writing up research takes a lot of time and focus (reviewing data, generating and organizing figures and plots, mapping out a flow, coordinating between authors and contributors, performing a post-hoc lit search for similar work, constructing and verifying a bibliography, et cetera), and if you could just dump all of this into a funnel and let a bot do it, you could focus on all of the other work of academia and administrivia. But it also means there is even more potential for unintentional 'errors' from a generative AI that neither knows nor cares about intellectual rigor, and that may be 'aligned' to produce the desired result rather than frame the paper to reflect an inconclusive or adverse result.

Stranger

I don’t know. I think she treats it similarly to the blatant cheating she sees all the time. She gives them a zero for the assignment. Accusing someone of cheating just opens up a pile of additional work which she doesn’t get paid for. And the school makes it pretty clear that their primary concern is in keeping the tuition dollars coming in.

When the school went to remote learning a while back, and ramped it up further during COVID, her observation was, "This is just an exercise in creative cheating." The decent students do the work and learn, whether in person or online. The worst students cheat and lie. When you get paid relatively little, it is tough to motivate yourself to spend considerable additional time and effort on the worst students.

The one I saw came from what looks like a diploma mill. The web page just had a registrar.
I agree with Stranger that it is impractical for an institution to review all papers. But I disagree about the PI doing it. The PI’s reputation is going to suffer (at least) if their students are committing fraud, let alone sending in AI generated drivel. The PI should know what the student is working on, and if the paper is on something totally different red flags should go up. If the PI does not have time to at least read all the papers their name is going on, the PI needs fewer students.

I’m in favor of the death penalty for authors who send this stuff in, but you need a policy first. Departments that clean up their acts get a second chance. Spam institutions get zapped totally. Might be harsh, but even the fakers clearly hope to get something published and might want their money back if their submissions bounce really fast. The idea is to get these things bounced automatically, before an editor has to see them.
There will be fake journals accepting fake papers. Can’t do anything about that.

In an ideal world I would agree that the PI should absolutely be informed, involved, and knowledgeable about the work of the students and postdocs under their supervision, mentoring them on not just the technical aspects but also the ethics of scientific integrity, and should be the primary internal peer reviewer (along with their actual peers who are not direct contributors to the paper). The reality is that PIs often spend the bulk of their time on grant proposals, financial audits, various administrative functions, and whatever academic politics are expected at their institution, on top of teaching, and they are pushing their researchers to “publish or perish” to demonstrate their value. The researchers (especially doctoral candidates) are often under their own pressure to produce results, and in academic culture negative results are typically interpreted as no results, so both shortcuts and fraud are shockingly common in many areas of research. In some cases, PIs have been held to pulling in enough grant money to cover a portion of their own salary despite being tenured, so their focus isn’t so much on mentoring students, vetting work, or contributing to research as it is on bringing in the bucks. It has become a highly dysfunctional system with perverse incentives to fabricate or exaggerate successful results in order to get ahead or even just stay above water, so it isn’t surprising that researchers are turning to unscrupulous methods to satisfy their supervisors.

Unfortunately, even the real journals aren’t doing a very good job of vetting obvious fraud, and in many cases are delaying or refusing to investigate claims. There is no general clearinghouse for fraudulent papers or authors, and as fraud has found its way even into The Lancet, Nature, and Science (and many other respected journals), relying on the source as a verification of quality or truth is increasingly problematic. I don’t know what to do about a flood of fake papers from fake journals (or fake references from real journals), but simply going and verifying every source is going to overwhelm researchers. I suspect the answer is going to be to use AI to fight AI, and it’ll be ‘deep learning’ all the way down to the bottom of the Pit of Dank Ignorance.

Stranger

I know the pressures well - my daughter is a professor on the tenure track. But PIs who put their name on stuff they haven’t vetted are asking for trouble, and when enough of them get to be on the front page of the NY Times, they will make time to figure stuff out. Fraudsters are unlikely to have to worry about spending time on grant proposals.
It is really not that different from the Enron situation. CEOs can no longer easily say “why should I have to worry if our quarterly statement is full of lies?”
About 50 years ago there was a prof at UT Austin who had so many students that he met with them once a quarter at best. (I started working on a musical called “Best Little Department in Texas” with an opening number “50 centrifuges a turning, they were turning in every lab…”) There are ways of developing organizational hierarchies to deal with this. Alas, many professors are not well organized.

The researchers in most fields should be wary of referencing papers in journals they’ve never heard of before, and there are indexes of influence. The journal I’m on the editorial board of cares a lot about those numbers.
As for fraud, no one is going to reproduce the experiment. The difference between a dumb fraudster and a smart fraudster is that the smart fraudster is smart enough to have the made up numbers be internally consistent. And the worst thing that can happen to them is that someone pays attention to the paper, since most papers in the most eclectic journals (not Science, of course) are write-only.
I know of someone who got in trouble for this, because their co-author who was responsible for collecting the data thought it would be faster to make it up. It is not something they want to go through again.

That’s a problem, too, because in a lot of fields research has become collaborative across institutional and frequently international lines. In some cases, the collaborators have never met each other face to face, or at most briefly at a conference, and may not have access to each other’s experimental setup or raw data, so it is easy for one researcher’s perfidy to smear a group of well-intended researchers.

The problem with PIs not uncovering fraud or misbehavior among their researchers goes the other way, too; grad students being supervised by a well-regarded supervisor may be reluctant to report questionable or even openly fraudulent research from above in order to just get through their program without getting delayed or defunded. So, there are a lot of incentives to not dig too hard into academic misconduct.

As for “indexes of influence”, many of the most influential journals have demonstrated deficient (to say the least) review processes, and with the consolidation of long-standing journals by publishers with less-than-stellar motives, and the emergence of new journals and “preprint servers” that get cited for research that never undergoes peer review, it just becomes more difficult to sort the wheat from the chaff, especially when genuine research is presented in lower-tier journals. Pay-to-publish used to be the bane of genuine science but is now sometimes the only way that some original research gets published, while obvious crap somehow gets filtered through well-regarded publications.

As AI gets better at being a fabulist bullshit machine, it will complicate this further; right now it is easy to recognize the ‘style’ and nonsense of LLMs with their ‘predict the next word’ capability, but once they get to the point of being able to construct cromulent nonsense sufficient to fool a specialist, it is going to require so much concerted effort to filter out that there will have to be a cadre of specialist reviewers just to look for the telltale signs of AI-generated ‘research’. And, again, I assume that the answer is going to be to turn to AI ‘filtering’ bots, making the entire system even more dependent on inexhaustible AI and less on exhaustible humans.

Stranger

PIs are also under a great deal of pressure to publish, which means having the people under them publish. On any garbage paper, my suspicion is that the PI is probably a willing co-conspirator.

In that case, you reap what you sow. The PI is not in the same position as the graduate students, who are still culpable but were kept in the dark and/or bullied into not blowing the whistle in some of those cases of fabricated data and results, e.g. I have in mind the semi-recent room-temperature-superconductivity fraud.

Definitely agree that grad students get screwed also.
I mentioned influence because it means that journals accepting AI-generated papers without checking are not going to help anyone get tenure at any respectable institution. There are plenty of not-so-respectable institutions that might brag about their faculty’s publications, hoping no one notices they are all in no-influence rags. That does not mean the high-influence journals are error- or fraud-free. In fact, fraud is more likely to be caught when a lot of eyeballs read a journal, and won’t be caught when it appears in write-only journals.
I can go on and on (and have) about how peer review is not the ultimate standard of correctness some naive people think it is.
However, I’m not at all sure the problem has gotten as much bigger as people think. It is a lot easier to check for plagiarism when everything is online. I know of a case from 40 years ago that got caught only because someone who happened to have just read the copied paper then read the plagiarized one. That’s not all that likely to happen, versus today, when IEEE does a search as part of the submission process.

Agreed; peer review is intended to verify that the paper meets minimum standards for citations, that the methodology is validated as sensible and assumptions are clearly defined and reasonable, and that the conclusions follow from the work and observations. It doesn’t review raw data or the experimental setup, verify calculations or data analysis, and certainly doesn’t independently replicate results. Peer review can fail to catch unintended errors or methodological problems for a number of reasons (lack of specific knowledge by the reviewers, exclusion of critical assumptions or experimental detail from the paper, ambiguity in jargon or labeling), and shouldn’t be expected to catch intentional fraud by a competent fabulist (although it is shocking just how bad some efforts at fabrication are).

Outright plagiarism is easier to catch with online publication (I think the recent spate of plagiarism in scientific literature reflects better methods for catching instances rather than it being a rare phenomenon in the past), but it is far easier now to fabricate complex data and images than it has ever been, often to the point that the only way to catch a fabrication would be to review the original raw data and the experimental/observational setup. That so many fraudulent images and data sets are produced by what are essentially cut-and-paste methods and still get past review shows just how inadequate basic peer review is at catching deliberate scientific fraud.

“A bit of advice; always…no, never…forget to check your references.”

Stranger

She says no, it is left up to the individual instructor.

It can be worse than that. An editor controls who does the review, and often knows how the reviewers feel about particular positions in the field. For a really good or really bad paper this is not going to matter, but a marginal one, like most, can be directed to reviewers who will tend to accept or tend to reject it.
The most blatant case was a creationist topic editor who was working at a journal which had nothing to do with biology, and got a creationist paper published by having it reviewed by people who knew nothing about the subject. It did get withdrawn, but that was a high visibility case.

Wife was grading papers this morning - said she is seeing more and more AI assignments. Said the students are getting better at figuring out how to do them, but not at how to make them convincing.

Just to put it into perspective, $400/mo in 1974 is equal to about $2700/mo today. (If you get that for 12 months, then it’s actually more in real terms than the current $25k a year mentioned.)
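The conversion above can be checked in a couple of lines. The CPI multiplier is an assumed round figure of about 6.75x for 1974 to mid-2020s dollars (check the BLS CPI tables for exact values); the $400/mo and $25K figures come from the thread.

```python
# Rough inflation check for the comparison above.
# cpi_multiplier is an assumed ~6.75x for 1974 -> mid-2020s dollars.
cpi_multiplier = 6.75
monthly_1974 = 400
monthly_today = monthly_1974 * cpi_multiplier   # ~$2,700/mo in today's dollars
annual_today = monthly_today * 12               # ~$32,400/yr
print(monthly_today, annual_today)              # compare against the ~$25K stipend
```

On these assumptions the 1974 stipend works out to roughly $32K/yr in current dollars, which is indeed above the ~$25K grad stipend mentioned earlier in the thread.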