Plagiarism
"Plagiarism" means using another person's literal expressions (words, images, etc.), or representing their ideas or concepts as your own, in place of your own work.
Any amount of misrepresented work, large or small, passed off as your own, is plagiarism as a form of academic dishonesty.
In academic work you are expected to use your own words, and represent your own thoughts and ideas and concepts, which you have developed in the process of engaging with the work of many other people.
You are therefore expected to keep track of all of the other work you have read, and the expressions and ideas you have found there, and to be able to say clearly whose they are, and where they came from.
You are expected to be able to support your own words, your own thoughts and ideas and concepts, through exact references to all of that other work where it agrees with you, and to be able to argue with it in detail where you think something different.
The JKM Library is here to help with your research, and McCormick and LSTC also offer writing help through their own writing centers, including help with English, editing, and style guides.
Citation Guides
Accurately crediting and regularly citing your sources is an essential aspect of avoiding plagiarism.
It is important to know what information you need to quote, to paraphrase, and to cite, and how to do so properly for your sources and project.
The JKM Library is here to support you. We provide access to the following style guides, which will help you with a variety of sources and projects:
- Turabian: A Manual for Writers Citation Quick Guide (basic, for all papers)
- Chicago Manual of Style Online (18th Edition) (more kinds of projects and sources)
- The SBL Handbook of Style (for Bible scholarship)
If you are not sure which to use, or how to use them for your project, please reach out first to your professor, advisor, or the supervisor of your project for specific advice.
You can also email JKM staff at ihaveaquestion@jkmlibrary.org for more general information.
School Policies
McCormick and LSTC each have their own descriptions of what constitutes academic integrity and plagiarism, and policies for dealing with it:
- McCormick Academic Catalog 2024-25 (see Faculty Policy on Proper Use of Sources & Faculty Procedure for Dealing with Misuse of Sources and Plagiarism, pp. 43-47)
- LSTC Student Handbook 2024-25 (see Section 4 - Academic Integrity, pp. 28-30)
Other schools and programs have their own helpful descriptions and advice for dealing with plagiarism:
- University of Chicago Libraries on Academic Integrity
- CTU's Bechtold Library on Citing your Sources
- Saint Xavier University (a CARLI member) on Plagiarism
Plagiarism and "AI"
Large Language Model (LLM) "AI," also called "generative AI," is very popular today, and presents some serious problems from the standpoint of academic integrity.
These "AI" systems are capable of taking a brief prompt, and generating images or text on the basis of their training data.
Those images will look mostly like other images you see in artworks and on the internet. That text will sound mostly like other text you might read in books or on the internet.
That may sound tempting to you! Every author struggles to find the right words, after all. But we expect you to resist that temptation.
Remember: the point of academic work is that we expect your words, your work. You are the author, you are the artist, and any other source must be cited explicitly.
If you're worried about quality, just keep writing. Work with your teachers and your peers. You will find your voice. Your words and your work will only get better through practice!
"AI" companies engage in systematic copyright violation and plagiarism
It is important to recognize that the output from these "AI" systems looks the way it does because they are trained on data taken from existing authors and artists. Overwhelmingly, from the very beginning, this has been done without their consent, without compensation for their hard work, and without citation.
This means that "generative" or LLM "AI" is built on copyright violation, on a massive scale, and also plagiarizes the works it is based on. Everything it does is built on this basis.
But while that is a major problem, it is not the core complaint when it comes to whether or how you use it in your academic work.
"AI" generated words are not your words
When it comes to LLM "AI" as a source of text, you should treat its output like any other source of text you did not write.
No matter how much work you put into designing your prompt, you did not write the "AI" output. Its words are not your words, and do not represent your thoughts and ideas.
You are therefore not responsible for what it says. But you are responsible for how you use it!
If you receive LLM "AI" output text, and present that text as your own, you are engaging in plagiarism as a form of academic dishonesty.
And because LLM "AI" systems plagiarize their training data without telling you, you may also be engaging in plagiarism from real authors, without being aware of it. In addition to academic penalties, this may create the risk of a lawsuit for copyright violation if you publish such material.
Citation does not solve the problem of "AI" text
Unfortunately, the problem of plagiarism when using "AI" output cannot be solved by treating it like any other source needing proper citation.
You can and should admit where you got these words, as they are not yours. However, LLM "AI" cannot give you a text for which citation is in any way meaningful.
The point of citation is to demonstrate the authorship and origin of the work that you are citing. This enables your reader to check your work against those sources, which still exist outside of your own work.
Setting aside its many other problems, LLM "AI" output does not exist outside of your interaction with that system. LLM "AI" systems do not give consistent responses. Your reader cannot follow your citation and get any useful information.
Additionally, LLM "AI" systems routinely make up answers that bear no resemblance to reality, and provide their own "citations" to work which simply does not exist—they made it up.
LLM "AI" systems make a mockery of academic work. They are not a reliable source, and they do not even function as a search engine pointing to other reliable sources.
LLM "AI" systems serve only as a substitute for your work—and it will always take far more work from you to responsibly evaluate and modify their output, than if you just did the work yourself.
Why does "AI" do this? How does it work?
There are two aspects to the problem. The first we have covered above: LLM "AI" by nature does not, and cannot, give you a reproducible answer or direct you reliably to other sources.
The second, more basic aspect is that LLM "AI" by nature does not know or care about matters of fact.
No matter how many facts may be in its training data, LLM "AI" is not programmed to understand that questions have factual answers.
The only thing that LLM "AI" is designed to do is reproduce patterns. Its only concern is the statistical likelihood that similar-looking data appears in similar patterns elsewhere.
LLM "AI" does not have any sense of the meaning of the data it was trained on, only its patterns.
These systems analyze your question for its patterns, observe similar patterns in other questions, observe the patterns their answers tend to fall into, and then manufacture responses that look like those patterns, in the hope that you will accept them. They do this whether or not this actually answers your question in any way, let alone correctly, because these systems have no standard of factual correctness.
You therefore cannot rely on the accuracy of the "data" LLM "AI" makes up. There is a chance that the output might be correct. Much more likely, however, it will only be linguistically (and not factually) "similar" by some standard. It may contain a variety of errors. And it may even bear no resemblance to reality at all. There is no guarantee that this "data" exists, outside of the "AI" making up language that matches patterns it has seen before.
LLM "AI" cannot reliably summarize text for you because, again, it does not understand the meaning of the text, only its patterns. Its summary is likely to be linguistically, but not factually, "similar" to the text—and again, it may bear no resemblance to the text on key points, even introducing language that does not exist in the text, but does exist in the LLM "AI" training data.
(Additionally, many LLM "AI" systems "sanitize" their inputs and so will not handle any topics the corporations in question consider to be "controversial," including many current political and world-historical events you may need information on for your research.)
The more difficulty you have with a piece of text, because of its complexity or uniqueness, the less likely any standard LLM "AI" system will be able to give you a correct summary. The more interesting your source, and the more unique your question, the less "similar-looking data in similar patterns elsewhere" is a useful approach.
LLM "AI" systems also rely on you to correct their output until it is satisfactory to you, meaning that your knowledge is their only standard of truth. This makes them radically unsuitable for any research work, where you are trying to learn things you do not yet know.
LLM "AI" is meant to profit from you, not benefit you
It is possible for companies to spend money and engineering time to force any LLM "AI" system to operate in ways that reduce (but cannot eliminate) its inherent weaknesses as outlined above.
None of that, however, is standard in the free versions offered to you online, which exist to generate popularity and so use your engagement to drive investment income for the company.
These systems are not offered for your benefit, but to use you to create profit for the already-wealthy. They are not intended to meet academic standards, but to weaken them, in order to make academic systems reliant on this unsuitable product instead of our existing systems of developing expertise and knowledge.
Please be careful not to get stuck in the trap of arguing with an LLM "AI" system. It is not important that you correct it; it is a piece of software. You will not get paid by the company, and the time you spend on it will not advance your project or your field of study.
It is important that you do your research, that you talk with your fellow students and teachers, that you learn, and that what you write expresses what you have learned in your own words!
Your library is here to help!
You have libraries at your disposal, including the JKM Library and all of our partner libraries, which are full of reliable texts, which will still be there when you or your readers come back to them.
These are texts you must cite properly, but they are also texts of which you may be critical. They may also be wrong, but their authors were trying, in the best case, to learn the truth and to be right. And you may even help prove some of them wrong, but that will be a meaningful disagreement, and maybe even a very important one!
JKM Library staff, alongside the faculty and staff of McCormick and LSTC, will gladly help you through your process of becoming a capable and talented scholar using reliable resources.
We look forward to being able to share with you in the pride of your legitimate accomplishments. Do not cheat yourself, and us, of that joy!