COMAP has already begun to regulate the use of LLMs; the 2024 MCM will presumably carry this statement as well.
For details, see: https://www.contest.comap.com/undergraduate/contests/mcm/flyer/Contest_AI_Policy.pdf
This echoes the paper introduced yesterday, 【Cybersecurity AIGC Series 11.1】Paper 12: understanding and explaining code, which compares code explanations created by a GPT-3 large language model with those created by students, and examines explanations of buggy code (whether the errors can be found and corrected).
That work shifts the focus from writing code to understanding what a piece of code is for, assessing whether generated code is appropriate, and modifying it as needed, making code comprehension an even more important skill.
Notably, LLMs can help students not only by generating code but also by creating code explanations, which can serve as code-comprehension exercises.
Use of Large Language Models and Generative AI Tools in COMAP Contests
This policy is motivated by the rise of large language models (LLMs) and generative AI assisted technologies. It aims to provide greater transparency and guidance to teams, advisors, and judges, and it applies to all aspects of student work, from research and development of models (including code creation) to the written report. Since these emerging technologies are quickly evolving, COMAP will refine this policy as appropriate.
Teams must be open and honest about all their uses of AI tools. The more transparent a team and its submission are, the more likely it is that their work can be fully trusted, appreciated, and correctly used by others. These disclosures aid in understanding the development of intellectual work and in the proper acknowledgement of contributions. Without open and clear citations and references of the role of AI tools, questionable passages and work are more likely to be identified as plagiarism and disqualified.
Solving the problems does not require the use of AI tools, although their responsible use is permitted. COMAP recognizes the value of LLMs and generative AI as productivity tools that can help teams prepare their submission: generating initial ideas for a structure, for example, or summarizing, paraphrasing, and polishing language. Many tasks in model development demand human creativity and teamwork, and relying on AI tools there introduces risks. We therefore advise caution when using these technologies for model selection and building, assisting in the creation of code, interpreting data and model results, and drawing scientific conclusions.
Limitations
It is important to note that LLMs and generative AI have limitations and cannot replace human creativity and critical thinking. COMAP advises teams to be aware of these risks if they choose to use LLMs:
• Objectivity: Previously published content containing racist, sexist, or other biases can surface in LLM-generated text, and some important viewpoints may not be represented.
• Accuracy: LLMs can "hallucinate", i.e. generate false content, especially when used outside their domain or when dealing with complex or ambiguous topics. They can produce text that is linguistically but not scientifically plausible, they can get facts wrong, and they have been shown to generate citations that do not exist. Some LLMs are trained only on content published before a particular date and therefore present an incomplete picture.
• Contextual understanding: LLMs cannot apply human understanding to the context of a piece of text, especially with idiomatic expressions, sarcasm, humor, or metaphorical language. This can lead to errors or misinterpretations in the generated content.
• Training data: LLMs require a large amount of high-quality training data to achieve optimal performance. In some domains or languages, however, such data may not be readily available, limiting the usefulness of any output.
Team guidelines
Teams are required to:
• Clearly indicate the use of LLMs or other AI tools in the report, including which model was used and for what purpose. Use inline citations and the reference section, and append the Report on Use of AI (described below) after your 25-page solution.
• Verify the accuracy, validity, and appropriateness of any content and citations generated by language models, and correct any errors or inconsistencies.
• Provide citations and references following the guidance given here. Double-check citations to ensure they are accurate and properly referenced.
• Be conscious of the potential for plagiarism, since LLMs may reproduce substantial text from other sources; check the original sources to be sure you are not plagiarizing someone else's work.
COMAP will take appropriate action when we identify submissions likely prepared with undisclosed use of such tools.
Citation and referencing instructions
Think carefully about how to document and reference whatever tools the team may choose to use. A variety of style guides are beginning to incorporate policies for citing AI tools. Use inline citations, and list all AI tools used in the reference section of your 25-page solution.
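For teams writing their solution in LaTeX, one way such a disclosure could appear in the reference section is sketched below; the entry type, the key "chatgpt2023", and the field layout are my own illustration, not a format mandated by COMAP or by any particular style guide:

```latex
% Illustrative BibTeX entry for disclosing an AI tool.
% The key name and field choices are assumptions, not a COMAP format.
@misc{chatgpt2023,
  author       = {{OpenAI}},
  title        = {ChatGPT (Nov 5, 2023 version, ChatGPT-4)},
  year         = {2023},
  howpublished = {\url{https://chat.openai.com/}},
  note         = {Large language model; used for language polishing}
}
```

An inline citation such as \cite{chatgpt2023} in the body of the report would then point readers to this entry.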
Whether or not a team chooses to use AI tools, the main solution report is still limited to 25 pages. If a team chooses to utilize AI, add a new section titled "Report on Use of AI" after the end of the report. This new section has no page limit and does not count as part of the 25-page solution.
Examples (these are not exhaustive; adapt them to your situation):
Report on Use of AI
1. OpenAI ChatGPT (Nov 5, 2023 version, ChatGPT-4)
Query 1: <Please provide a report on the use of AI>
Output: <The following is a report on the use of AI>
2. Baidu Ernie (Nov 5, 2023 version, Ernie 4.0)
Query 1: <Please provide a report on the use of AI>
Output: <The following is a report on the use of AI>
3. GitHub Copilot (Feb 3, 2024 version)
Query 1: <Please provide a report on the use of AI>
Output: <The following is a report on the use of AI>
4. Google Bard (Feb 2, 2024 version)
Query 1: <Please provide a report on the use of AI>
Output: <The following is a report on the use of AI>
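In a LaTeX solution, the appended disclosure section described above might be sketched as follows; the section title follows the policy, while the page break and the placeholder entry are purely illustrative:

```latex
% Sketch of appending the AI-use disclosure after the main report.
% Only the section title comes from the policy; the rest is a placeholder.
\newpage                          % main solution ends within its 25 pages
\section*{Report on Use of AI}
\noindent
1. OpenAI ChatGPT (Nov 5, 2023 version, ChatGPT-4)\\
Query 1: \emph{(the exact prompt entered)}\\
Output: \emph{(the complete output received)}
```

Because the section sits outside the 25-page limit, placing it after a \newpage keeps the page count of the main report unambiguous for judges.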