
What does it mean that I received this after my article was published?

Posted 2023-05-30 · 581 views · IP: Guangxi

After my article was published, I received the letter below. I wrote the article entirely myself and have never used anything like ChatGPT. Do I need to respond to this?


This correspondence is issued by the Academic Language Integrity Group (ALIG), an academic committee affiliated with the U.S. Department of Higher Education, endorsed by the National Science Foundation, and in unison with academic monitoring platforms such as PubPeer and Retraction Watch.

We regret to inform you that your publication titled "******" is in potential violation of the Algorithmic Accountability Act of 2022 (AAA 2022, Section 4, Paragraph 3) [1] due to suspected usage of Large Language Models (LLMs) at a rate of 12%. The statute regulates LLM usage, including but not limited to OpenAI’s ChatGPT, in academic research, concurring with submission guidelines set forth by major academic publishers like AAAS, Springer Nature, Taylor & Francis, and Elsevier [2-4].

The remit of ALIG involves verifying the authenticity of academic content disseminated within the United States, and thus the publication in question is subject to ALIG’s evaluation. Breach of the specified regulations could compromise the integrity of the associated research, incur AAA 2022 sanctions, and potentially affect the standing of the implicated author and their institution within the U.S. higher education sector.

ALIG’s LLM detection system, designed in line with the AI Transparency and Accountability Act of 2023 (ATAA 2023, Article 6, Section 1, Clause f) [5], deploys advanced logistic regression models for precise LLM detection. The system classifies LLM usage into the following categories:

  1. Negligible LLM Usage (LLM content <5%): Publications in this category receive an automatic verification status in our database.
  2. Mild LLM Usage (LLM content 5-20%): Publications within this range may be subject to misidentification, and as a result, they will be granted the benefit of doubt. In potential false positive scenarios caused by overly formal or non-native English writing styles, ALIG provides a Language Verification Service. This service conducts a double-blind peer review to definitively determine and legally validate the human authorship of the manuscript, thus resolving any existing or potential disputes.
  3. Moderate LLM Usage (LLM content >20%): The core argument's validity may be compromised for publications in this category. Under this circumstance, ALIG is granted the authority to request a comprehensive peer review process involving field-specific experts to assess the manuscript’s AI involvement and credibility.
  4. Significant LLM Usage (LLM content 30-50%): Publications in this category may be temporarily retracted pending author clarification and reassessment of the LLM content.
  5. Heavy LLM Usage (LLM content >50%): Such cases lead to an immediate retraction. The author and their institution are alerted, and the matter is reported to the U.S. Department of Higher Education for possible sanctions under AAA 2022.

Last edited 2023-05-30


All replies (0)
