COLUMBIA — The Columbia School Board approved an artificial intelligence policy and accompanying regulations for both students and employees at Monday’s regular meeting. The district will also create a new position, “AI Coordinator,” to oversee AI implementation across Columbia Public Schools. The decision comes after an initial proposal in early 2024 was not adopted because it needed revisions to reflect the rapidly evolving landscape of AI technology.
Board Vice President Paul Harper emphasized the necessity of establishing a clear policy, stating, “We all know AI is not going away, and from the board level we need to have a policy that sets forth what we can expect from the administration with regard to that issue.” The revised policy focuses primarily on generative AI, a subset of artificial intelligence capable of producing text, images, videos, and other data forms using advanced algorithms. It encompasses platforms like ChatGPT, Google Gemini, and Canva AI.
The new policy encourages users of AI resources to prioritize privacy and security settings, with Harper noting, “I think the biggest risk, quite frankly, is that our data leaves the district, and we’re always concerned with ensuring that our data remains secure.” To keep parents and guardians informed, the district will provide annual updates summarizing AI usage, including any significant changes to its AI plans. Training will also be implemented for both employees and students on managing data privacy and acceptable uses of AI.
The AI policy delineates specific “use cases” where artificial intelligence may enhance efficiency or solve particular problems within the district. These AI use plans will be regularly updated to reflect emerging issues while ensuring the safety and security of students, employees, and the district itself. Harper reiterated the importance of compliance with existing regulations, particularly FERPA, which governs the privacy of student educational records.
A significant component of the policy requires the superintendent to appoint at least one AI coordinator responsible for regulating and monitoring AI usage throughout the district. This individual will develop the district’s AI Use Plan and serve as a professional resource on AI-related matters. According to the policy, requests for new AI applications may be submitted to the AI coordinator, who must be well-versed in privacy policies associated with AI products and services.
Once a year, the AI coordinator will conduct a review of the district’s AI usage, assessing its safety, data privacy, appropriateness, and effectiveness. Harper described the policy as an outline detailing expectations for administrative AI use, ensuring that the district remains responsive to the changing technological landscape. “That there will be an AI plan that will be regularly reviewed by our AI coordinators so that we are constantly keeping up on an ever-changing landscape,” he stated.
The policy also establishes key definitions applicable to all district policies regarding AI. “Artificial Intelligence” is defined as any hardware or software capable of adapting its output through probabilistic algorithms. “Confidential Data/Information” refers to legally protected information, including personal details about students and employees, while “Critical Data/Information” covers essential operational information that must be securely maintained. “Operational generative AI” describes generative AI applications used for operational needs and based on nonconfidential data inputs.
This initiative reflects a growing trend in educational institutions to adopt structured approaches to AI, balancing innovation with the imperative of safeguarding sensitive information. As schools increasingly integrate technology into their curricula, policies like Columbia’s aim to harness the potential of AI while addressing the associated risks, setting a precedent for responsible AI use in education.