Generative AI Usage Guidance for the Research Community

To the UNC research community:

As you are likely aware, the use of generative AI tools such as ChatGPT has sparked numerous inquiries related to research and scholarly practices. To address these and other emerging concerns, the Provost and Deans established the UNC Generative AI Committee, with representatives from every academic unit.

The Committee has developed the following guidance for research and scholarly applications of generative AI. This guidance applies to all members of the research community, including faculty, staff (SHRA and EHRA non-faculty), students (undergraduate, graduate, and professional), guest researchers (e.g., unpaid volunteers, interns, and visiting scholars), collaborators, and consultants involved in research occurring under the auspices of the University. This guidance aims to establish a framework for the ethical and responsible use of AI tools in research and scholarship.

Please review this guidance and integrate it into your research and scholarly practices, tailoring it as necessary to the accepted research and scholarly practices of your specific discipline. Mentors and supervisors should have regular conversations with mentees and other research trainees about the intended use of generative AI in their research programs.

Given the rapid pace of advancements in generative AI, we anticipate that this guidance will continue to evolve. If you have any questions or feedback, please do not hesitate to reach out to Eric Everett (phone: 919-962-0988; email: eric_everett@unc.edu).

Research Use Guidance for Generative Artificial Intelligence

Introduction

Generative AI (artificial intelligence) systems such as ChatGPT have emerged from a number of technologies, including machine learning, and are capable of creating images, text, and other products in response to queries and “prompts.” As a result, generative AI has become a powerful tool for research and scholarship. Systems like ChatGPT can be task-specific and are capable of evolving as users and other sources provide new information.

Use of generative AI in research involves (at least) the following limitations and risks.

  • The operations that produce AI output are often opaque to end users. As a result, these tools may generate content that cannot be verified against primary sources.
  • AI-generated output is based on previously existing data and thus reflects the biases and other limitations of that data. These biases need to be interrogated and acknowledged. The output may also be inaccurate or entirely fabricated, even when it appears reliable or factual.
  • It cannot be assumed that generative AI tools are compliant with rules and laws designed to ensure the confidentiality of private information, such as HIPAA (Health Insurance Portability and Accountability Act of 1996) and FERPA (Family Educational Rights and Privacy Act). Uploading information (e.g., research data, grant proposals, unpublished manuscripts, or analytical results) to a public AI tool is equivalent to releasing it publicly; thus, before any information from you or another individual is uploaded to a public AI tool, appropriate steps must be taken to ensure that the disclosure of that information is consistent with all rules and laws related to the handling of private information.
  • Generative AI may present other privacy risks to both individuals and the institution, such as those implicated by data breaches, exposure of intellectual property (IP), and Science & Security concerns.
  • Generative AI raises a range of intellectual property concerns regarding the ownership of its output, as well as questions about whether its output is properly treated as equivalent to a published resource. In particular, generative AI may create content that infringes on others’ intellectual property (IP) or copyright-protected works. Generative AI may also create content that leads to allegations of plagiarism or other forms of misconduct against the researcher/scholar.
  • Norms and requirements surrounding the citation of AI-generated output, as well as disclosure of the use of AI technologies, are complex, rapidly evolving, and often unclear. In addition, these norms and requirements may vary substantially depending on the source; for example, publishers, journals, professional organizations, and funding organizations all may have (or be in the process of developing) policies surrounding the use of AI that need to be navigated.

Usage Philosophy

It is University of North Carolina at Chapel Hill policy that its research be carried out with the highest standards of integrity and ethical behavior. To that end, everyone involved in conducting research under the auspices of the University is responsible for using best practices in proposing, performing, and reviewing research, as well as in reporting research results. Authors are ultimately responsible and accountable for the content and methodology of their published and disseminated work.

Lead Principal Investigators and other key personnel involved in the preparation of grant applications submitted through the University’s Research Administration Management System & eSubmission (RAMSeS) certify each submission before it is sent to the sponsor. This entails attesting to the following:

  • The information submitted within this application is true, complete, and accurate to the best of my knowledge. Any false, fictitious, or fraudulent statements or claims may subject the Organization and the Investigators to criminal, civil, or administrative penalties.
  • I have the responsibility for the scientific, fiscal, and ethical conduct of the project and to provide the required progress reports if an award is made.
  • I will comply with all relevant state and federal regulations, University policies, and contractual obligations in administering the resultant award.
  • I have reviewed applicable U.S. Export Control requirements and University policy on Export Controls and will comply with the export control requirements.
  • If this is an NIH application, I will comply with NIH Policy on Public Access.
  • I will work to ensure that my relationship with the sponsor of this project is free of conflict of interest or consistent with a previously disclosed conflict of interest management plan.
  • I can ensure that proper citation practices have been followed in this submission; and
  • The information that is proposed in this application represents original work by the investigators named in the application.

Usage of generative AI in research should be based on the following principles:

  • Entering private or confidential information into a public AI tool is tantamount to publicly releasing that information. Uploading information to public AI tools, including by entering prompts or queries into tools like ChatGPT, is a form of release of that information to a third party. The same rules that apply to other forms of public disclosure of private or confidential information apply as well to interactions with public generative AI tools. Disclosure of this type of information to public AI tools exposes both individuals and the institution to potential breaches of privacy and security. Similarly, uploading research data, grant proposals, or analytical results into a public AI tool effectively discloses that content publicly.
  • The norms of appropriate use of generative AI are constantly evolving, and vary enormously depending on application, context, and discipline. A particular use of generative AI in one research context might be perfectly appropriate, whereas an identical use of generative AI in a different research context might be a serious breach of professional research standards.
  • Those who are involved in proposing, reviewing, performing, or disseminating research bear the responsibility for familiarizing themselves with the policies and standards governing the use of generative AI in their research, and are ultimately responsible for the work that they produce and disseminate. This responsibility includes properly attributing ideas and credit, ensuring the accuracy of facts, relying on authentic sources, and appropriately disclosing the use of AI in research. This responsibility applies to all members of the research community, including faculty, students, staff, postdoctoral research scholars, and other research trainees. Individuals with supervisory responsibility for research activities should initiate discussions with their teams to ensure that all members of the team understand both opportunities and responsibilities surrounding the use of generative AI.
  • The use of generative AI should be clearly and transparently disclosed and documented. Documentation requirements will vary substantially from one research context and discipline to another, but researchers should err on the side of explicitly disclosing any material use of generative AI in their research activities; an illustrative disclosure statement appears after this list. Here is one perspective, from the Association for Computing Machinery (ACM):
    • “If you are using generative AI software tools such as ChatGPT, Jasper, AI-Writer, Lex, or other similar tools to generate new content such as text, images, tables, code, etc. you must disclose their use in either the acknowledgements section of the Work or elsewhere in the Work prominently. The level of disclosure should be commensurate with the proportion of new text or content generated by these tools.
    • If entire sections of a Work, including tables, graphs, images, and other content were generated by one of these tools, you should disclose which sections and which tools and tool versions you used to generate those sections by preparing an Appendix or a Supplementary Material document that describes the use, including but not limited to the specific tools and versions, the text of the prompts provided as input, and any post-generation editing (such as rephrasing the generated text). Authors should also note that the amount or type of generated text allowable may vary depending on the type of the section or paper affected. For example, using such tools to generate portions of a Related Work section is fundamentally different than generating novel results or interpretations.
    • If the amount of text being generated is small (limited to phrases or sentences), then it would be sufficient to add a footnote to the relevant section of the submission utilizing the system(s) and include a general disclaimer in the Acknowledgements section.
    • If you are using generative AI software tools to edit and improve the quality of your existing text in much the same way you would use a typing assistant like Grammarly to improve spelling, grammar, punctuation, clarity, engagement or to use a basic word processing system to correct spelling or grammar, it is not necessary to disclose such usage of these tools in your Work.”
    • Researchers should also be aware of, or anticipate, other positions coming from journals, publishers, professional associations/societies, and funders/sponsors regarding the use of generative AI in the creation of new content.
  • It is a shared responsibility to stay informed about relevant developments surrounding generative AI. Both the technical capabilities of generative AI tools and the rules and norms surrounding their use are constantly evolving, and responsible research requires up-to-date awareness of changes in AI technology and best practices within specific fields of research and scholarship. Everyone involved in research should make efforts to stay informed about relevant emerging AI tools, research studies, and ethical guidelines, and should take advantage of professional development opportunities to enhance their AI integration skills.
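
As an illustration of the disclosure principle above, a disclosure statement in a manuscript might read as follows (the wording is hypothetical, offered only as an example, and is not a form prescribed by the University or any publisher): “ChatGPT (GPT-4) was used to generate a first draft of portions of the Related Work section. The authors reviewed all generated text, verified it against primary sources, and edited it; the prompts used are provided in Appendix A.”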

FAQs

How is authorship determined?

Different disciplines, PIs, units, and departments may or may not have established authorship guidelines. However, guidelines do exist in journals (e.g., the PLOS and Nature Portfolio authorship guidance) and in professional organizations (e.g., the APA and the ICMJE).

Can ChatGPT or other forms of generative AI be an author?

Authors are generally required to meet multiple criteria, including accountability, responsibility, and approval of the work to be published. Generative AI systems such as ChatGPT are unlikely to fulfill the requirements for authorship. Elsevier’s authorship policy takes the position that generative AI and AI-assisted technologies cannot be an author on a published work. Similar positions prohibiting generative AI tools such as ChatGPT from being listed as an author on a paper have been taken by the Committee on Publication Ethics (COPE), the World Association of Medical Editors (WAME), the Journal of the American Medical Association (JAMA Network), and the Science journals, among others.

Can I subject generative AI output (e.g., ChatGPT) to iThenticate screening?

Yes, although there is no assurance that iThenticate screening will be foolproof. All researchers at UNC can use their Onyen login to access iThenticate and create a personal workspace.

If I use generative AI like ChatGPT to create new content, should I or do I need to cite it?

Content produced with generative AI may require proper citation, depending on the context and how the tool is used. Examples of how to cite ChatGPT:
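
  • APA Style, for example, recommends treating the tool’s developer as the author; a reference in that format (the version and date shown are illustrative) would look like: OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
  • Other style guides, such as MLA and The Chicago Manual of Style, have issued their own recommendations for citing generative AI; consult the style guide, journal, or publisher relevant to your work, as these formats continue to evolve.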
