

AI Education

WVU Parkersburg Professor Highlights AI’s Classroom Impact and Calls for Policy Reform

WVU Parkersburg’s Joel Farkas reports students averaging just 40% on tests despite perfect AI-assisted homework scores, and urges policy reforms to uphold academic integrity.

SOUTH CHARLESTON, W.Va. – During a recent state Higher Education Policy Commission meeting, Joel Farkas, an assistant biology professor at West Virginia University at Parkersburg, addressed the growing influence of artificial intelligence (AI) on students and faculty. Farkas, who also chairs the state Advisory Council of Faculty, presented a report detailing how students are increasingly integrating AI into their academic work.

Farkas noted that while early AI models primarily served to quickly gather facts from the internet, advancements have enabled these tools to execute more complex tasks, including reasoning and problem-solving in subjects such as mathematics and genetics. “It used to make mistakes in problems, and I could predictably beat ChatGPT, and then every year, systematically, everything I beat it on it automatically figures out how to beat me back,” he said, highlighting the evolving capabilities of AI technology.

In his discussions with students, Farkas emphasizes appropriate use of AI, encouraging them to use these tools to generate resources and to get immediate feedback through dialogue. However, he expressed concern over ongoing misuse among some students. “It’s at the point now where you can’t trust the authenticity of anything that students do unless you watch them do it in the classroom without a computer,” he remarked. This has prompted some faculty members to revert to traditional methods, using “literally pencil and paper” to preserve academic integrity when assessing students’ work outside the classroom.

Farkas cited a specific case involving a math professor at WVU Parkersburg who altered her teaching strategy as students began relying on AI to complete homework. According to Farkas, performance metrics have shifted notably over the past three years: some students earn perfect scores on homework while averaging only 40% on tests. The professor has since adopted a new model that requires students to complete readings before class and work collaboratively on problems during instructional time.

In light of these developments, Farkas believes educators will need to reassess their teaching methodologies. “I think a lot of us are probably going to rethink our teaching in that kind of way where I think we’re going to have to go back to that more or less flipped model where every assessment is done in class,” he stated. He further suggested that traditional grading schemes, which have been in place for decades, may need revisions to account for the evolving educational landscape driven by AI.

Farkas highlighted that online classes face the most significant challenges, since students can easily use AI to complete their assignments. He proposed that online assessments may require stricter measures, such as proctoring or time limits, to reduce the likelihood of students seeking outside help during tests.

He also raised concerns regarding credit transferability, suggesting that institutions may need to implement restrictions. He has seen students intentionally avoid challenging courses by taking them online at other schools and then transferring the credits back to their home institution. “I’ve seen a lot of instances where somebody has one class they specifically want to avoid, and so they take that one class online somewhere else and transfer back,” he explained, citing WVU Parkersburg examples in which students take demanding courses such as anatomy and macroeconomics elsewhere rather than on their home campus.

Farkas advocates for the development of formal AI policies within academic institutions. While WVU Parkersburg allows departments to create their own AI guidelines, he noted that not all have taken action. “Most people that I’ve talked to at my school don’t have any kind of statement on AI; you just kind of informally talk in the classroom about what you should do and what you shouldn’t,” he said. He expressed a desire for a more structured approach, calling for “top-down guidance” to establish clearer standards on AI usage.

As educational systems adapt to the rapid advancement of AI technologies, comprehensive strategies and policies will be vital to maintaining academic integrity and ensuring that learning outcomes are met. The ongoing dialogue among educators like Farkas serves as a crucial step in navigating this transformative landscape.

Written By David Park

At AIPressa, my work focuses on discovering how artificial intelligence is transforming the way we learn and teach. I've covered everything from adaptive learning platforms to the debate over ethical AI use in classrooms and universities. My approach: balancing enthusiasm for educational innovation with legitimate concerns about equity and access. When I'm not writing about EdTech, I'm probably exploring new AI tools for educators or reflecting on how technology can truly democratize knowledge without leaving anyone behind.

