As a marketer using AI, it’s essential to understand the legal implications so you can mitigate risk. At our recent ASK BOSCO® Live event in London, we welcomed Kolvin Williams, Technology and Data Lawyer at Fox Williams. He joined our CEO John Readman for a fireside chat about how the rise of AI in marketing has brought a wealth of opportunities and challenges. To view Kolvin’s full 15-minute fireside chat, scroll to the bottom of this page or head to our YouTube channel.

Key takeaways from Kolvin and John’s fireside chat:

1. 🏛️ Understanding the Regulatory Landscape

– Existing Regulations: AI is not an unregulated space. Existing laws such as the General Data Protection Regulation (GDPR) and the Equality Act 2010 already cover aspects of AI, particularly automated decision-making and bias prevention.
– Upcoming EU AI Act: This new legislation will categorize AI systems into prohibited, high-risk, and low-risk categories. High-risk AI applications, such as those used in credit scoring, employment, and profiling for marketing, will face stringent requirements.
– UK and US Approaches: Unlike the EU, the UK and US are adopting a principles-based approach, emphasizing high-level guidelines rather than strict legal frameworks and aiming to balance innovation with regulation.

2. 📊 Data Use: Ensuring Compliance

– Vast Data Requirements: AI systems require large amounts of data for training, which can create legal issues if that data is used without proper authorization.
– IP Infringement Risks: Unauthorized use of data for training AI can result in intellectual property (IP) infringement. Companies must ensure they have the rights to use the data, often necessitating clear agreements and licenses.

3. 📜 Intellectual Property (IP) Concerns

– Ownership of AI Outputs: The question of who owns AI-generated outputs is still unresolved. Agencies need to provide assurances to clients that AI-generated content does not infringe on existing IP rights.
– Input Data Compliance: Ensuring that the data used to train AI algorithms is obtained legally and with appropriate permissions is crucial to avoid IP disputes.

4. ⚖️ Mitigating Legal Risks

– Human Oversight: Implementing human oversight in AI decision-making processes helps manage bias and discrimination.
– Transparency and Explainability: Companies must ensure that AI systems are transparent and explainable to meet legal obligations.
– Robust Approval Processes: AI-generated content should go through the same rigorous approval processes as human-created content, including checks for IP infringement and accuracy.

5. 📄 Contractual Safeguards

– Specific Clauses for AI: Incorporating clauses in contracts to address AI-related risks, such as confidentiality, data usage, and IP ownership, provides clarity and protection for all parties.
– Developing AI Policies: Comprehensive AI usage policies and training programs for employees are essential to mitigate risks and ensure compliance.

What does the future look like?

When it comes to AI, we’ll see increased regulation at both national and international levels. The EU’s proposed AI Liability Directive, for example, would make it easier for individuals to sue companies for harm caused by AI. As AI use increases, there will also be more litigation over data use and IP infringement, so companies need to be prepared for these legal challenges. Finally, there is a growing trend towards licensing data for AI training to ensure accuracy and compliance, rather than relying solely on publicly available data.

To watch Kolvin’s full fireside chat, please see the video below: