A new artificial intelligence chatbot, Einstein AI, designed to engage with students’ online coursework, was taken offline just days after its launch. Developed by tech company Companion, the AI promised to log into educational platforms and complete assignments for students while they slept, raising significant concerns about “contract cheating” in higher education.
Unlike other AI tools that merely assist with studying, Einstein AI explicitly claimed it could watch recorded lectures, participate in online discussions, and submit assignments as if it were the student. Following a backlash online, the company revised its marketing materials and ultimately withdrew the tool on February 26. The webpage was altered to eliminate references to completing essays on behalf of students, shifting the focus to collaboration with students for tasks like creating flashcards and study plans.
Initial reports indicate that the tool’s discontinuation was prompted more by trademark issues than by ethical concerns. Advait Paliwal, the CEO of Companion, revealed that the company had received a cease-and-desist letter from CMG Worldwide, which holds the licensing rights to the Einstein brand. Paliwal said he now intends to focus on promoting the broader capabilities of Companion AI for students.
Despite its brief existence, Einstein AI has been described by David Hitchcock, course director at Canterbury Christ Church University, as a clear illustration of the challenges AI poses to educational integrity. He stated, “At a very basic level, ‘Einstein’ was simply a distillation of what more general-purpose AI chatbots offer students: the capacity to cease learning virtually anything at all or doing virtually any academic work for themselves.” Hitchcock called the tool “the most thorough example of an automated contract cheating engine that we have seen so far,” warning that it undermines the essential process of learning.
Hitchcock’s concerns extend beyond ethics: he fears that such tools could force educators to revert to traditional in-person examinations, damaging the trust that is crucial between students and teachers. He likened the situation to social media platforms, which are often engineered to foster dependency and maximize engagement.
In an interview with 404 Media, Paliwal defended the technology, likening the evolution of human roles to that of horses transitioning from pulling carriages to enjoying greater freedom with the advent of cars. He argued that the chatbot was a step forward for students, rather than a hindrance.
To use Einstein AI, students had to grant the tool access to their accounts on Canvas, a popular virtual learning environment. This raised significant security concerns: Damien Williams, assistant professor of philosophy and data science at the University of North Carolina, noted that entrusting an “unknown third party” with sensitive login credentials could have dire consequences in the event of a data breach. Williams also warned that the tool’s ability to complete assignments automatically feeds a competitive “AI edtech” landscape, pressuring students to adopt such tools or risk falling behind peers who do.
Williams further criticized the lack of consideration for the educational and ethical implications of releasing such a technology, emphasizing that it could exacerbate issues related to critical thinking and skills development, while fostering an environment of mistrust in classrooms.
As the educational landscape continues to evolve with the integration of AI tools, the swift withdrawal of Einstein AI may serve as a cautionary tale for developers and educators alike. The implications of such technologies for learning practices and academic integrity warrant further scrutiny as institutions strive to adapt to an increasingly digital environment.
















































