Kyle Taylor on Nostr: ~1% of papers use LLMs to generate filler content. Most #research articles use ...
Published at
2024-05-02 03:44:13
Event JSON
{
"id": "c1dc5d8196e8002863ad7faa2ee1624eb305769271328f0cf5b4641910b7f0cf",
"pubkey": "dc8075f209324dd5a53b11fe91ee88636c2e2e595fce780474f6a4454b0abc88",
"created_at": 1714621453,
"kind": 1,
"tags": [
[
"t",
"Research"
],
[
"t",
"llm"
],
[
"proxy",
"https://hostux.social/users/kta/statuses/112369431571279708",
"activitypub"
]
],
"content": "~1% of papers use LLMs to generate filler content. Most #research articles use involved figures and tables in the results sections that cannot be faked. But the introduction and discussion sections can be. So can reviews. The worry is #LLM hallucination contaminating papers with meaningless conclusions. But the results sections, with their involved figures, and the methods sections, can't be convincingly faked.\n\n[1] https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing\n[2] https://arxiv.org/abs/2403.16887",
"sig": "8c93854199fb3e13eca271e477d245cd0f0fa2aff0d7be3d899a231d71be91e217d3c045c41e9f959243be94e6615dea3a6391a89f017b8fd5c123e98235c257"
}