The Canadian Heritage committee has called for clear labeling of AI-generated content, including videos on digital platforms, to help distinguish between real and fake material. This recommendation is part of a report presented to the House of Commons this week, amid growing concern about the impact of generative AI tools such as OpenAI’s ChatGPT on creative industries.
The committee’s report suggests that the scope of Canada’s copyright law should be expanded to encompass AI-generated content, aiming to safeguard the integrity of creators’ work. It emphasizes that prior consent should be mandatory for the use of copyrighted materials—such as literature, art, and music—in training AI models.
This report precedes the upcoming release of a government AI strategy by Artificial Intelligence Minister Evan Solomon. It also recommends that the federal government invest in Canadian AI infrastructure to protect digital sovereignty, a measure Mr. Solomon has indicated he is contemplating.
For years, artists have urged the government to legislate copyright protections applicable to AI-generated content, such as music that mimics a songwriter’s style or paintings that replicate an artist’s technique. Currently, the Copyright Act only protects works created by human beings.
At a recent summit on AI and culture in Banff, Alberta, attended by Solomon and Canadian Identity Minister Marc Miller, artists reiterated the need for legislation to enhance copyright protection against AI exploitation. They argued that current laws inadequately address the challenges posed by AI technologies that analyze and reproduce copyrighted works without consent from creators.
While half of the witnesses who addressed the committee acknowledged the potential of AI tools to boost efficiency and creativity in cultural fields, the report urged the government to regulate “the harmful outcomes of AI” to protect Canadians. Some experts expressed concerns that AI might overstep its role in supporting human creativity and instead begin to replace it entirely.
Eric Chan, an artist and creator in residence at Library and Archives Canada, described AI as the “printing press of our era,” asserting that every major technological advance in reproduction has been met with fears of apocalyptic consequences. He suggested that AI should not be viewed as a threat but rather as a new form of infrastructure.
The committee found that most representatives from the creative industries opposed granting copyright protection to AI-generated content without a significant standard of human intervention. The witnesses highlighted that AI can produce unreliable and misleading information, underscoring the necessity for clear labeling to protect the value of human creative work and promote transparency.
Taylor Owen, founding director of McGill University’s Centre for Media, Technology and Democracy, advocated for mandatory watermarking of AI-generated videos and images. He emphasized the importance of labeling AI content on social media platforms to enable the public to differentiate between machine-generated and human-created material. “In order to help us adapt to this amount of AI content that’s now in our world, we need to be able to know,” he stated. “I think it’s critically important.”
The committee’s report also urged the federal government to demand greater transparency from AI developers regarding the use of copyrighted works in training their models. This includes disclosing the sources of training data to facilitate proper authorization and licensing. The issue has gained traction, with numerous lawsuits filed by creators against tech companies for unauthorized use of their copyrighted works.
In 2024, Google faced a class-action lawsuit in the U.S. for allegedly using registered copyrighted works from visual artists and authors without authorization to develop its generative AI models. Similarly, a lawsuit has been initiated against OpenAI, challenging its use of various copyrighted literary, dramatic, musical, and artistic works. This legal action includes claims from media organizations such as The Globe and Mail and CBC, which allege that OpenAI violated copyright law by scraping proprietary news content without permission or compensation for training its AI models, including those that power ChatGPT.
As the landscape of AI-generated content continues to evolve, the call for regulatory measures and copyright adjustments may signify a pivotal moment for creators seeking protection in a rapidly changing digital environment.