AUTHOR=Paustian, Timothy; Slinger, Betty
TITLE=Students are using large language models and AI detectors can often detect their use
JOURNAL=Frontiers in Education
VOLUME=Volume 9 - 2024
YEAR=2024
URL=https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2024.1374889
DOI=10.3389/feduc.2024.1374889
ISSN=2504-284X
ABSTRACT=Large language models (LLMs) have been developing for many years. OpenAI thrust them into the spotlight in late 2022 when it released ChatGPT to the public. The wide availability of LLMs resulted in various reactions, from jubilance to fear. In academia, the potential for LLM abuse in written assignments was immediately recognized, with some instructors fearing they would have to eliminate this mode of evaluation. In this paper, we seek to answer two questions. First, how are students using LLMs in their college work? Second, how well do AI detectors function in detecting AI-generated text? We organized 153 students from an introductory microbiology course to write essays on the regulation of the tryptophan operon, pose the same question to an LLM, and then try to disguise the AI-generated answer. We also surveyed students about their use of LLMs. The survey found that 46.9% of students use LLMs in their college work, but only 11.6% use them more than once a week. Students are unclear about what constitutes unethical use of LLMs. Unethical use of LLMs is a problem, with 39% of students admitting to using LLMs to answer assessments and 7% using them to write entire papers. We also tested their prose against five AI detectors. Overall, AI detectors can differentiate between human- and AI-written text, identifying 88% correctly. Given the stakes, a 12% error rate indicates that we cannot rely on AI detectors alone to check for LLM use, but they may still have value.