[ARH Learns] Takeaways from LawWorks Legal Primer Panel
Key takeaways on how independent arts practitioners can navigate the legal risks of generative AI
Unlocking Gen AI: Legal Risks, Rights, and Responsibilities for Arts Freelancers
Artificial Intelligence (“AI”) is rapidly transforming the way we work and create, but it also raises new legal and ethical questions. To address these, Pro Bono SG (PBSG) and the National Trades Union Congress (NTUC) convened a recent LawWorks Legal Primer on “Unlocking Gen AI: Legal Risks, Rights, and Responsibilities.”
The session brought together legal and industry experts to unpack the implications of Gen AI for Singapore’s workforce, including freelancers in the arts. Arts Resource Hub (ARH) has distilled some of the key insights from the discussion and hands-on workshop, with a focus on what matters most for arts self-employed practitioners (SEPs).

Moderated by Bozy Lu (Han & Lu Law Chambers), the panel featured Patrick Tay (NTUC), Dr Stanley Lai (Allen & Gledhill) and Yang Yen Thaw (2iB Partners).
Copyright and Ownership: Who Holds the Rights?
One of the most pressing questions for creative freelancers is whether AI-generated works can be copyrighted. Current Singapore law does not recognise AI as an “author” — meaning copyright protection applies only if a human demonstrates sufficient creativity and skill in shaping the output.
For SEPs, this means:
Protect your role by documenting your input, prompts, and edits.
Be mindful when using AI outputs in client work — clarify with contracts who owns the final product.
Avoid unlicensed material: AI tools trained on copyrighted datasets may carry hidden infringement risks.
Data Privacy and Security: Guard Your Information
Gen AI tools often rely on large datasets, which can inadvertently expose sensitive or personal information. Under the Personal Data Protection Act (PDPA), freelancers remain responsible for safeguarding client data.
Practical steps:
Do not input confidential client or personal information into public AI platforms.
Use enterprise or “walled garden” versions of AI tools when available.
Always cross-check outputs — hallucinations and fabricated references are common.
Contracts and Clarity: Managing Client Expectations
When AI is involved, whether in the work itself or through collaborators who use AI, additional considerations arise. Questions around authorship, copyright ownership, and accountability for outputs may not be fully addressed by standard contract templates. Freelancers should therefore:
Clarify with collaborators which parts of the work are human-generated and which are AI-assisted.
Define how any AI-generated contributions may be used, credited, or licensed.
Specify authorship, copyright ownership, and liability for outputs—including potential errors or copyright issues.
Ensure contracts explicitly outline client expectations regarding AI involvement.
Update template contracts to include terms on AI use and responsibility.
By addressing these points upfront, artists can reduce misunderstandings and protect both their creative contributions and professional relationships.
Legal Frameworks and Safeguards
Stanley Lai reminded participants that “data is valuable but intangible — making it especially vulnerable.” In Singapore, a number of laws already set the baseline for protection: the Personal Data Protection Act (PDPA), the Computer Misuse Act, and the Cybersecurity Act form the key pillars.
At the same time, the Government’s Model AI Governance Framework highlights accountability, transparency, and incident reporting as best practices for anyone deploying AI tools.
Statutory protections, however, are only part of the equation. Lai emphasised that cyber hygiene—from employee training and resilience planning to internal safeguards—remains as crucial as legal compliance.
Translating these principles into day-to-day workplace practice, Patrick Tay underlined how AI risks play out for SEPs and employers alike. Employers, he suggested, should establish clear internal guidelines on how Gen AI is to be used across their teams. On the flip side, employees should avoid feeding sensitive or proprietary company data into public AI platforms.
For SEPs and smaller organisations, these practices are especially critical. SMEs and startups, often lacking extensive legal or IT resources, must be doubly mindful of what data is exposed and how AI tools are deployed. Simple steps like setting internal do’s and don’ts, reviewing contracts for AI-related clauses, and training staff on secure practices can help mitigate risks before they escalate.
Practical Use Cases: AI as a Productivity Tool
During the workshop, Yang Yen Thaw demonstrated how Microsoft Copilot can generate reports, presentations, and summaries within minutes. For SEPs juggling multiple projects, AI can streamline routine tasks — freeing up time for higher-value creative work.
Tips for effective use:
Structure prompts with goal, context, expectations, and sources.
Always fact-check outputs before use.
Treat AI as a “co-pilot” — it assists but does not replace your expertise.
Risks and Red Flags: Stay Alert
Beyond copyright and data, panelists flagged broader risks:
Phishing and scams — AI can mimic voices or emails convincingly.
Over-reliance — outputs may look polished but be legally or factually flawed.
Ethics — fairness, accountability, transparency, and explainability remain core to responsible use.
Awareness & Vigilance (Yang Yen Thaw):
“Don’t beware, be aware” – paranoia isn’t useful, awareness is.
Use practical checks: verify official numbers, cross-check with trusted sites, understand how forgery works.
AI accelerates malicious actors just as much as it helps users.
90% of cybersecurity breaches are due to human error, not tech failure — human vigilance is key.
The Human Edge: Creativity and Wisdom
As Patrick Tay reminded the audience, “It’s not AI that takes your job, but the person who knows how to use AI.” For freelancers, this means investing in upskilling while leaning into what makes human creativity irreplaceable: judgment, intuition, and originality.
Patrick Tay emphasised:
Workplace: Workers need operating knowledge of Gen AI – its capabilities, limitations, and ethics. Not everyone must be an expert, but all should know the risks.
Community: The labour movement and its partners are actively educating workers, seniors, and youths about fact-checking, scam risks, and responsible use.
Human value-add remains critical – creativity, experience, and context cannot be replaced by AI.
AI is a powerful tool — but it is wisdom, not just knowledge, that keeps us on the right side of both innovation and the law.
Additional Resources:
Model AI Governance Framework (Generative AI) can be accessed here: https://aiverifyfoundation.sg/resources/mgf-gen-ai/
Digital Forum for Small States (Digital FOSS) AI Governance Playbook can be accessed here: https://www.imda.gov.sg/-/media/imda/files/news-and-events/media-room/media-releases/2024/09/ai-playbook-for-small-states/imda-ai-playbook-for-small-states.pdf
Nine proposed dimensions to support a comprehensive and trusted AI ecosystem: Annex A of the Model AI Governance Framework for Generative AI.
👉 For more guidance on protecting your creative work, explore the Copyright Resources by IPOS, including updates on the Copyright Act 2021 and the CMO Class Licensing Scheme.
