Regulatory Compliance Chasing Evolving AI Technologies

Author: Bryan Szekely, Head of Ad Strategy, Sigma Software
Published: December 20, 2024

Editor’s Note: Bryan Szekely, Head of Ad Strategy at Sigma Software, was a keynote speaker at the DanAds Summit in October 2024. He provided so much information so quickly that we asked him to translate his presentation into an article.

The digital advertising industry has a knack for jumping on the latest buzzword bandwagon, fully embracing each year's trend. From the "Year of Mobile" to buzzwords like bid caching, SPO, and cookieless identity solutions, the cycle is relentless. In 2024, the spotlight was firmly on AI. News outlets were brimming with AI-related content, and nearly every vendor was touting AI-powered products. If you're not exploring AI at this point, you risk losing market share to those who are. This explosive growth has raised eyebrows beyond the commercial space: regulatory bodies have taken notice of the rapidly changing landscape.

AI stands as one of the most disruptive and transformative forces in human history. With the power to revolutionize fields such as medicine, materials science, energy storage, and business process automation, its potential seems boundless. Yet alongside these remarkable possibilities lies a darker reality: AI can also cause harm. From enabling fraud and data theft to amplifying the spread of misinformation, AI’s misuse can escalate to severe consequences, including threats to national security, incitement of violence, and even destabilization of governments. While digital advertising may not be the epicenter of AI regulation, AI systems are indiscriminate by nature and can operate without regard for the implications of their inputs and outputs, drawing the attention of regulators aiming to prevent abuse across all applications.

Explosive Growth of the AI Market

What many may not realize is that AI has been a cornerstone of digital advertising for over a decade. I remember the early days of ad network optimization and real-time pricing algorithms—technologies that laid the groundwork for innovations like Header Bidding. These systems relied on machine learning to predict monetization outcomes and enable real-time decision-making, showcasing AI’s value long before it became a buzzword.

So, if AI has been integral to digital advertising for years, why is it now experiencing such explosive growth and momentum across all channels within and beyond advertising?

[Image: chart of investment growth in AI]

Several factors have played into AI’s explosive growth in recent years, serving to bring it to the attention of regulators:

  • Computational Power: Advancements in graphics processing units (GPUs) have significantly increased the processing power available to train and run models on vast volumes of data.
  • Advancements in Generative Models: ChatGPT and similar models have demonstrated significantly improved abilities to generate human-like text, translate languages, write different kinds of creative content, and respond in an informative and user-friendly way.
  • Ubiquity of Data: Exponential growth of data from social media, digital platforms, IoT devices, and third-party sources provides the essential fuel for AI systems to make better predictions and generate new content.
  • Funding and Investments: Venture capital and government funding have poured capital into AI research and development, while Big Tech (Google, Facebook, Amazon, etc.) increased investments in AI projects as noted in the chart above.
  • Democratization of AI Tools and Systems: AI has permeated our daily lives. Access to AI tools such as ChatGPT increased visibility of their efficiencies and potential. Cloud platforms now offer AI as a service, making advanced AI tools accessible to businesses of all sizes without requiring significant in-house expertise or infrastructure.
  • Demand for Automation: The need for cost reduction, operational efficiencies and innovation has driven businesses to automate complex processes, as we have seen in digital advertising.

Expanding Use-Cases in Advertising

The digital advertising industry’s long-standing experience with AI technologies, combined with access to vast datasets and advancements in computational power, has driven an acceleration in AI adoption and the expansion of its use cases. Privacy regulations and initiatives are reshaping traditional models, compelling the industry to leverage predictive algorithms and AI-driven insights to meet new compliance standards. Beyond adapting to existing challenges, AI is also unlocking entirely new opportunities, creating markets, such as generative AI products, that previously did not exist.

Here are some examples of how AI is being applied in digital advertising:

[Image: examples of AI applications in digital advertising]

Pace of Change vs Regulation

Rapid technological evolution inherently creates friction with regulation. AI and other technologies advance faster than governments can legislate, creating a regulatory lag, and governments often lack the proficiency and expertise in cutting-edge technologies to determine the best methods of regulation.

If we look to privacy laws as a baseline for how technology advancement outpaces regulation, the EU first held a public consultation on user data protection in 2009, yet GDPR did not take full effect across all EU member states until nine years later, in 2018.

Rising AI-related litigation puts additional, significant pressure on governments to regulate AI by exposing risks, gaps, and ambiguities in existing legal frameworks. The lack of precedent in cases involving intellectual property ownership of AI-generated content, personal injury caused by automated systems, biased AI-derived results, and psychological manipulation by AI chat systems demonstrates the real-world consequences of unregulated AI, compelling governments to act.

Race to Regulate AI Systems

Now that governments have started to heed the warning signs of AI’s impact, the race to regulation has begun. Governments are increasingly involving technology leaders and experts to ensure policies are both practical and forward-looking, while balancing innovation with safety and ethical considerations.

While executives from companies like Google, Microsoft, and OpenAI are often invited to share expertise and shape the conversation on ethical and technical aspects, they recognize that governments cannot move quickly enough. As companies push the technical boundaries of AI, they initiate internal efforts to self-regulate, creating ethics frameworks and internal governance bodies such as ethics boards and committees. For example, Google publishes its AI Principles, publicly disclosing its commitment to responsible AI practices, and OpenAI has established a Safety and Security Committee to assuage public concerns.


Snapshot of Current AI Regulation

The current landscape of AI regulation is dynamic, being shaped by global efforts to balance innovation with safety and ethical considerations. Lacking a global regulatory body, regulation remains fragmented, with jurisdictions adopting tailored approaches depending on their socio-economic priorities and risk profile.

In 2023, the G7 nations agreed on the AI Principles and Code of Conduct (based on the OECD AI Principles), playing a pivotal role in encouraging governments to regulate AI systems. The G7 AI Principles and Code of Conduct serve as a template for governments to establish their own AI laws and regulations. The most pressing principles include taking a risk-based approach to AI, securing AI systems, ensuring transparency around generative AI tools and breaches, and mitigating public safety concerns.

While comprehensive federal legislation has yet to pass in the United States, Congress has seen several proposals addressing AI governance:

  • Algorithmic Accountability Act – Directing the FTC to require impact assessments of automated decision systems and processes
  • Deep Fakes Accountability Act – Protecting national security against threats posed by deepfake technology and providing legal recourse for victims of deepfakes
  • Protect Elections from Deceptive AI Act – Another deepfake regulation, prohibiting the distribution of materially deceptive AI-generated audio or video media relating to federal candidates
  • While not a regulation, the AI Bill of Rights is a framework intended to help protect Americans’ civil liberties in the age of AI

Absent federal regulation, states took it upon themselves to enact their own rules, the end result being disparate regulatory enforcement, where fragmented rules complicate compliance because many AI systems operate nationwide. Innovation may be impacted in states with stricter AI rules, pushing companies to relocate or avoid doing business in certain regions. Conversely, smaller states may struggle to develop and/or enforce comprehensive AI regulation, creating pockets with no oversight.

[Image: US state-by-state AI legislation snapshot]

Source: US State-by-State AI Legislation Snapshot

Across the globe, several countries have introduced proposals to regulate AI, but none have progressed as significantly as the European Union. The EU's landmark legislation, the EU AI Act, represents the first comprehensive legal framework for governing AI systems globally. Much like how the GDPR set a global benchmark for user data privacy, the EU AI Act establishes a gold standard for regulating AI by introducing a risk-based approach and emphasizing transparency, accountability, and safety.

EU AI Act

The EU AI Act aims to ensure the safety and fundamental rights of people and businesses when it comes to AI systems.

At the most basic level, the AI Act seeks to verify that AI systems:

  • Are not used to break any laws
  • Collect and use data legally and ethically
  • Do not discriminate against a group or individual
  • Do not manipulate or deceive in any way
  • Do not invade an individual’s privacy or cause them harm
  • Are employed responsibly and in a way that benefits society
  • Can be developed in regulatory sandboxes, under relaxed requirements, to promote innovation

The AI Act leverages a risk-based approach to regulate AI systems, ensuring regulatory requirements are proportional to the risks posed by each system. This framework categorizes AI systems into four distinct risk levels, with corresponding obligations and restrictions:

Unacceptable / Prohibited: AI systems deemed to pose an unacceptable risk to safety, fundamental rights, or social values
  • Examples: social scoring; behavioral manipulation; emotion recognition in the workplace or education systems; biometric surveillance
  • Requirements and obligations: banned, with the exception of personal use, research and development, military and national security, law enforcement, and existing systems (subject to enforcement timelines)

High Risk: systems that pose a significant risk to safety or fundamental rights
  • Examples: critical infrastructure (e.g. transport); safety components of products; law enforcement and justice systems; educational or vocational training
  • Requirements and obligations: adequate risk assessment; ensuring systems do not produce discriminatory outcomes; detailed documentation; appropriate human oversight; robustness, security, and accuracy

Limited Risk: moderate-risk systems
  • Examples: chatbots; AI-generated or manipulated media (images, video, or text); virtual assistants
  • Requirements and obligations: informing the user they are interacting with AI; informing the user that content is AI-generated or manipulated

Minimal Risk: low-risk systems
  • Examples: price floor optimization; traffic shaping; spam filters
  • Requirements and obligations: none
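
To make the risk-based approach concrete, here is a minimal sketch in Python of how a compliance team might tag the AI systems in its inventory with an AI Act risk tier and the headline obligation attached to it. The tier names condense the table above; the system inventory, RISK_OBLIGATIONS mapping, and obligation_for helper are illustrative assumptions, not part of any official tooling.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"  # e.g. social scoring, biometric surveillance
    HIGH = "high"                # e.g. critical infrastructure, law enforcement
    LIMITED = "limited"          # e.g. chatbots, AI-generated media
    MINIMAL = "minimal"          # e.g. spam filters, price floor optimization

# Headline obligation per tier, condensed from the table above.
RISK_OBLIGATIONS = {
    RiskTier.PROHIBITED: "banned (narrow exceptions, e.g. research, national security)",
    RiskTier.HIGH: "risk assessment, documentation, human oversight, robustness",
    RiskTier.LIMITED: "transparency: disclose AI interaction / AI-generated content",
    RiskTier.MINIMAL: "no specific obligations",
}

# Hypothetical inventory of ad-tech systems mapped to tiers.
SYSTEM_INVENTORY = {
    "price_floor_optimizer": RiskTier.MINIMAL,
    "traffic_shaper": RiskTier.MINIMAL,
    "support_chatbot": RiskTier.LIMITED,
    "gen_ai_creative_builder": RiskTier.LIMITED,
}

def obligation_for(system_name: str) -> str:
    """Look up a system's tier and return its headline obligation."""
    tier = SYSTEM_INVENTORY[system_name]
    return f"{system_name}: {tier.value} risk -> {RISK_OBLIGATIONS[tier]}"

for name in SYSTEM_INVENTORY:
    print(obligation_for(name))
```

Even a toy inventory like this forces the useful question of which tier each production system falls into before any obligations attach.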

The European Commission has employed a structured timeline to ensure gradual implementation. A staggered timeline allows businesses and public authorities time to adapt and collect feedback from stakeholders, balancing regulation with innovation.

image

Recognizing a regulatory gap between the various legislative phases, the European Commission established a voluntary agreement, the EU AI Pact, under which participants pledge to apply the principles of the AI Act during the roll-out phases. The Pact calls on signatories to commit to three core actions: adopting an AI governance strategy within the organization, mapping high-risk AI systems, and promoting AI literacy and awareness among staff responsible for AI. To date, more than a hundred companies from a diverse group of sectors have signed.

Enforcement and Non-Compliance

The EU AI Act and the GDPR are landmark regulatory frameworks with similar approaches to enforcement and penalties, despite addressing different areas of concern. Both set global standards, but their mechanisms for oversight and consequences for non-compliance differ in key aspects:

Enforcement Mechanisms

  • GDPR: Enforcement is decentralized, with each EU member state designating a Data Protection Authority (DPA) responsible for ensuring compliance with the regulation. These authorities have the power to investigate breaches, issue penalties, and provide guidance on data protection practices.
  • EU AI Act: Enforcement will be carried out by Market Surveillance Authorities (MSAs), often operating within the existing DPAs of member states. Additionally, the European Commission will maintain centralized oversight through its AI Office, which holds exclusive powers to ensure consistent application of the law across the EU.

Penalties for Non-Compliance

  • GDPR: Violators face fines of up to 4% of global annual revenue or €20 million, whichever is higher. This substantial penalty structure underscores the importance of adhering to data protection principles.
  • EU AI Act: Penalties are even more stringent for certain violations (a worked comparison of both regimes follows this list):
    • Up to 7% of global annual revenue or €35 million, whichever is higher, for serious infractions involving prohibited AI practices
    • Up to 3% of global annual revenue or €15 million, whichever is higher, for non-compliance with other obligations, including those placed on high-risk systems
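
As a rough illustration of how these caps interact, the sketch below (Python) computes the maximum exposure under each regime as the higher of the percentage-of-revenue figure and the fixed amount; the function name and sample revenue are hypothetical.

```python
def max_fine(global_annual_revenue_eur: float, pct: float, floor_eur: float) -> float:
    """Maximum fine: the higher of pct * revenue and the fixed amount."""
    return max(pct * global_annual_revenue_eur, floor_eur)

revenue = 2_000_000_000  # hypothetical company with EUR 2B global annual revenue

print(f"GDPR cap:                EUR {max_fine(revenue, 0.04, 20_000_000):,.0f}")  # 4% or EUR 20M
print(f"AI Act, prohibited uses: EUR {max_fine(revenue, 0.07, 35_000_000):,.0f}")  # 7% or EUR 35M
print(f"AI Act, other breaches:  EUR {max_fine(revenue, 0.03, 15_000_000):,.0f}")  # 3% or EUR 15M
```

For a company of that size, the percentage branch dominates in every case; the fixed amounts matter mainly for smaller operators.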

The EU AI Act imposes stricter financial penalties in some cases, reflecting the high stakes associated with AI technologies such as the potential for misuse or harm. By creating distinct enforcement authorities and escalating penalties, the EU aims to ensure that both frameworks maintain accountability while addressing their respective domains of data protection and AI governance.

Recommendations

The digital advertising industry is uniquely positioned as a relatively lower-risk sector under AI regulatory frameworks, insulating it from the more stringent obligations seen in high-risk categories. However, this advantageous position does not exempt the industry from its responsibility to ensure compliance in an evolving regulatory landscape. To navigate AI regulations effectively and ethically, consider the following recommendations:

  1. Collaborate with Industry Bodies: Engage with organizations like the IAB to contribute to and adopt emerging AI standards and guidelines (e.g. Gen AI legal considerations).
  2. Partner with Legal Teams: Work closely with legal counsel to interpret AI-specific requirements, assess compliance risks, and develop clear contractual terms.
  3. Select Training Data Carefully: Ensure the legality and ethical sourcing of training datasets and synthesized AI outputs. Avoid using data that could lead to privacy violations or IP disputes.
  4. Implement Oversight and Monitoring: Maintain human oversight for critical AI decision-making processes and regularly monitor systems for unintended outcomes.
  5. Keep Accurate Logs: Record system decisions to enhance transparency and support compliance audits (see the sketch after this list).
  6. Minimize Bias: Design and test AI models to identify and eliminate implicit or explicit biases in decision-making.
  7. Prepare for Future Changes: Build flexibility into AI systems to accommodate evolving regulations.
  8. Understand Your Market: Adapt AI systems to align with regional regulatory variations, akin to GDPR's differentiated impact across countries and states.
  9. Conduct Vendor Due Diligence: Ensure third-party vendors are compliant in relevant markets and adhere to similar standards of accountability.
  10. Leverage Tools for Compliance: Utilize resources like the EU AI Act Compliance Checker to assess and ensure alignment with European regulatory requirements.
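
To ground recommendations 4 and 5, here is a minimal sketch of a decision audit log: each AI-driven decision is written out as a JSON line with its inputs, output, model version, and whether a human reviewed it, so it can be replayed during a compliance audit. The record fields, file name, and log_decision helper are illustrative assumptions, not an industry-standard schema.

```python
import json
import logging
from datetime import datetime, timezone

# Standard-library logger writing one JSON record per AI decision.
logging.basicConfig(filename="ai_decisions.log", level=logging.INFO,
                    format="%(message)s")
logger = logging.getLogger("ai_audit")

def log_decision(system: str, model_version: str, inputs: dict,
                 output: str, human_reviewed: bool) -> None:
    """Append an auditable record of one AI-driven decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewed": human_reviewed,
    }
    logger.info(json.dumps(record))

# Hypothetical usage: a price floor optimizer adjusting a floor.
log_decision(
    system="price_floor_optimizer",
    model_version="2024-12-01",
    inputs={"placement": "homepage_banner", "prior_floor": 1.20},
    output="new_floor=1.35",
    human_reviewed=False,
)
```

Structured, append-only records like these are cheap to produce and make both internal oversight (recommendation 4) and external audits far easier than reconstructing decisions after the fact.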

Conclusions

As AI technologies continue to advance at an unprecedented pace, the challenge of regulating them grows more complex. Governments and industries are racing to keep up with the rapid development of AI, balancing its transformative potential with its inherent risks. We live in exciting times, where AI technologies can push the boundaries of once-unthinkable tasks. The intersection of human interaction with these technologies carries serious consequences, from data privacy rights to behavioral manipulation, and the advertising industry must be proactive in staying ahead of AI compliance standards. With great opportunity comes great responsibility.