In AI We Trust

Or at least, we would like to. Artificial Intelligence (AI) has become ubiquitous in our daily lives, and, for better or for worse, we are all guinea pigs 🐹. We are experiencing the kinks of an immature regulatory and compliance framework. Sometimes it’s funny, and sometimes it’s not, not, and NOT. (See also my quick guide to the AIpocalypse 🤖)

The Uneven Landscape of AI Safety & Regulation

In the American financial industry, the use of AI models to design and trade securities has been scrutinized by FINRA, in part due to conflicts of interest, and in part due to the systemic risk these models pose.

Other, less regulated industries deserve more attention from regulators, because of the failings of internal governance and ethics, and the increasing number of harmful incidents caused by AI models.

Despite the continued harmful incidents, the lag is not surprising. It took more than 20 years between the Wright brothers’ first flight and the establishment of the first regulatory body for aviation. Since March 2024, AI regulation is finally in place in the European Union, while the US keeps side-stepping any prescriptive efforts.

A Very Quick Introduction to the European Union AI Act

The EU AI Act is a commendable effort to define and frame AI systems. In particular, it is very specific in terms of the risks that it covers. These are:

  • Unacceptable risks: broad facial recognition in CCTV footage, government scoring systems using biometric and socio-economic data. Not autonomous weapons though… but more on that later.
  • High risks: systems related to infrastructure, education, social and public services, including the justice system.
  • Limited risks: everything else that might be affected by a lack of transparency in the AI system, for instance, unclear provenance of AI generated images.
  • Minimal risks: video games, spam filters.

However - and this is the important bit - it depends on self-assessment of said risks. I feel safer already…

“We investigated ourselves and found we did nothing wrong.”

A Very Quick Introduction to the US AI Regulatory Landscape

The US legislative and executive bodies have carefully avoided any prescriptive steps in the past few years, settling instead for a few innocuous money-spending bills such as the National Artificial Intelligence Initiative Act or the AI in Government Act. The use of the American public as guinea pigs for untested technology is shocking, but it is not surprising given the amount of money tech giants (which own all the problematic AI systems) throw at the US Congress.

An example of stepping around any tangible controls is the US Department of Commerce’s National Institute of Standards and Technology (NIST) AI Risk Management Framework. This framework is voluntary (again, I feel so much safer already) and offers a description of what AI Governance should look like in your organization, in terms of risk management. It does not tell you how to do it, nor which metrics or standards to hit. I will delve into the framework’s definitions in the next section.

On a side note, I recently discovered NIST’s definition of AI, which I like a lot: “An AI system is an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy (Adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022).”

A similar technology revolution is silently happening in cryptography. Quantum computers are predicted to be able to crack most encryption algorithms, which poses an enormous national security issue. Post-Quantum Cryptography exists, but the US government has only recently started pushing for a migration to those algorithms. If we apply the same excruciatingly slow pace to AI, by the time the US government is forced to legislate because of many harmful (or fatal) incidents, AI systems might look (too?) different from what they are today.

The absence of prescriptive legislation to protect the American public is supposedly “protecting innovation”. I wonder what Elaine Herzberg would think about that… Instead, this is just another example of large corporations lobbying to protect their profit, even when their “innovation” is stealing from others, even you.

AI Regulation is good for everyone, even you, dirty capitalist!

We need AI regulation because many AI applications are detrimental, or even critically dangerous, to Humanity and the planet. Regulations are not just about saying “don’t do this or that”. In the context of technology, regulations come with standards, benchmarks, metrics, and frameworks. They tell you what level of safety a system should have. These standards are a public good that produces positive externalities for everyone… even for large tech companies.

Imagine a very large tech company, let’s call it Giggles. Giggles launches a new AI technology on the market. Something goes wrong and sales plummet because consumers are asking for guarantees about the safety of Giggles’ product. In any other industry, Giggles could fall back on standards like ISO’s. In the absence of such standards, Giggles will have to develop its own understanding of the safety of its product to avoid - at the very least - further reputational damage.

This is very expensive to do for AI. Why is it so expensive? The absence of standards is like being a life insurance company without actuarial tables to calculate insurance rates. The life insurance company would have to do all the work of computing the actuarial tables (collecting death certificates, birth certificates, medical records, etc.) to determine the rates. Enormous, tedious, colossal, and exorbitantly expensive work.

When standards are freely available to all, innovative start-ups and gargantuan industrial players can benefit from them equally. Indeed, the absence of regulation stifles innovation, because smaller players cannot easily enter a market where the cost of entry is artificially high (very high cost of calculating AI risks without standards, very high risk of liability and/or reputational damage, etc.). Standards also help large corporations, because they set the bar for the competition. If Giggles has a competitor who is able to guarantee its product’s safety to level X, then Giggles might have to prove X+1. With standards, competitors tend to align to that standard (e.g., the automobile industry).

While we wait for the polling stations to open, let’s talk about Trust. 🗳️

Trust But Verify

The goal of regulation is to define and enforce standards to build and maintain trust. In the human-AI relationship, “Trust in AI” is the term used to describe the level of trust a human has in the decisions made by an AI (A quick introduction to what is and isn’t an AI). Trust in AI can be fostered by demonstrating properties such as robustness, fairness, safety, security, resilience, transparency, privacy, explainability, and interpretability. Many governmental agencies and related entities, such as NIST, the OECD, or the EU, have created comprehensive nomenclatures.

Robustness

The robustness of an AI system is its ability to maintain certain performance levels despite challenges, such as noisy or unexpected inputs.

Robustness in AI
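
For the technically curious, here is a minimal sketch of what a robustness check can look like in practice: take a model, add increasing amounts of noise to its inputs, and watch how far the accuracy falls. Everything below (the toy “model”, the data, the noise levels) is invented for illustration; it is not a real auditing tool.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "model": label a point 1 if the sum of its features is positive.
# A stand-in for any trained model -- purely illustrative.
def model(x):
    return (x.sum(axis=1) > 0).astype(int)

# Toy dataset: 1,000 points, 5 features, labels matching the rule above.
X = rng.normal(size=(1000, 5))
y = (X.sum(axis=1) > 0).astype(int)

def accuracy(pred, labels):
    return float((pred == labels).mean())

# Robustness check: how much does accuracy drop as inputs get noisier?
baseline = accuracy(model(X), y)
for noise in [0.1, 0.5, 1.0, 2.0]:
    X_noisy = X + rng.normal(scale=noise, size=X.shape)
    print(f"noise={noise}: accuracy {baseline:.2f} -> {accuracy(model(X_noisy), y):.2f}")
```

A real audit would use domain-relevant perturbations (typos, sensor glitches, adversarial examples) rather than plain Gaussian noise, but the principle is the same: performance should degrade gracefully, not collapse.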

Resilience

Resilience, very similar to robustness, is the ability of an AI system to maintain (or return to) its level of performance under unexpected or adversarial circumstances.

Resilience in AI

Fairness

Fairness in AI is the measure of biases in the data. How the data is produced, collected, treated, and used has implications for the unfair and inequitable consequences of an AI’s decisions. The canonical example of the dramatic consequences of unchecked bias in AI models is the 2016 ProPublica study on the algorithm used by the justice system to calculate recidivism risks. Their article should be required reading in any data-related course.

Mandatory “Better Off Ted” reference. This show was a documentary.
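
Joking aside, here is a tiny sketch of what “measuring bias” can mean in practice: compare, across groups, how often people who did not reoffend were still flagged as high-risk. The numbers below are invented for illustration; they are NOT the real COMPAS data, and a real fairness audit uses many more metrics than this one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Invented data: a protected attribute (group A or B), the true outcome
# (did the person reoffend?), and a model's "high risk" label.
# NOT real COMPAS data -- purely illustrative numbers.
group = rng.choice(["A", "B"], size=n)
reoffended = rng.random(n) < 0.3
flag_probability = np.where(group == "B", 0.55, 0.35)  # deliberately skewed "model"
flagged_high_risk = rng.random(n) < flag_probability

# False positive rate per group: how often people who did NOT reoffend
# were still labelled high-risk. A large gap between groups is the kind
# of disparity ProPublica reported.
for g in ["A", "B"]:
    mask = (group == g) & ~reoffended
    print(f"Group {g}: false positive rate = {flagged_high_risk[mask].mean():.2f}")
```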



Safety

An AI system should, under no circumstances, lead to harm or near-harm to a person, property, or the ecosystem.

Security

Security, similarly to resilience, is the ability of an AI system to withstand and defend against attacks.

Transparency

Is information about the decision available?

Explainability

How was the decision taken? Note that humans will trust an AI more if the AI model can explain its decision (even if the explanation is random).

Interpretability

Why was the decision taken?

Privacy

Privacy has been the most regulated aspect of the tech industry in recent years. And for good reason. Our personal habits, our health, our finances, our movements, our political beliefs and opinions: everything is collected, coalesced, and sold by data brokers to banks, advertisers, and even law enforcement agencies.

Even the coupons you receive are tailored for you by an algorithm that might know you better than you know yourself.

AI Governance

The goal of regulation is to define and enforce standards to build and maintain trust. Many properties have been defined to build that trust. So what do we do now? In practice, regulation is enforced through audits, where models must demonstrate robustness, safety, etc. (similar to stress tests for banks). Thus, we need AI Governance.

AI Governance is the structure within a company or entity that develops and orchestrates the infrastructure of people and tools needed to provide and perform this control BEFORE a model is deployed. In AI Governance, trust is achieved when AI models have been documented such that an external auditor can understand how an AI was built and what risks and biases are associated with the model. After a model is deployed, AI Governance ensures that measurements (i.e., drift in model performance, extrinsic impact, intrinsic biases) are being tracked, monitored, and documented.

AI Governance Tools

As the whooshing sound of regulatory deadlines rapidly approaches, a new ecosystem of AI governance tools has emerged. One of the market leaders in AI Governance tools is IBM FactSheets. The success of this product, produced by IBM Research, is a strong signal that large corporations are adapting their workflows to the requirements of AI governance.

In a similar, though not identical, fashion, Google Cloud Model Cards offers another viewpoint on AI governance through model documentation. Credo AI is another good example of what is available.
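
To give a feel for what this kind of documentation contains, here is a minimal, made-up “model card”. The field names and values are entirely my own invention (not the actual IBM FactSheets or Google Cloud Model Cards schema), but they capture the spirit: write down what the model is for, what it was trained on, how it was evaluated, and how it will be monitored.

```python
# A hypothetical, minimal "model card". Field names and values are
# invented for illustration; they do not follow any vendor's schema.
model_card = {
    "name": "loan_default_classifier",
    "version": "2.3.1",
    "intended_use": "pre-screening of consumer loan applications",
    "out_of_scope_uses": ["mortgage decisions", "employment screening"],
    "training_data": {
        "source": "internal applications, 2018-2023",
        "known_biases": "under-represents applicants under 25",
    },
    "evaluation": {
        "accuracy": 0.87,
        "false_positive_rate_gap_between_groups": 0.04,
        "accuracy_drop_under_input_noise": 0.05,
    },
    "monitoring": {
        "drift_metric": "population stability index",
        "alert_threshold": 0.2,
        "review_cadence": "quarterly",
    },
    "approvals": ["model risk committee", "legal", "privacy office"],
}

# An auditor (internal or external) can then check, before deployment,
# that every required section has actually been filled in.
required = ["intended_use", "training_data", "evaluation", "monitoring"]
missing = [field for field in required if field not in model_card]
print("Ready for audit" if not missing else f"Missing sections: {missing}")
```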

If you are still hungry for more, here is some additional academic and regulatory work on AI Governance.

Measuring Safety

Measuring the safety of an algorithm is a very expensive and tedious task. Imagine a black box with 100 buttons. Each combination of buttons pressed does something different. Sometimes, the outcome changes every time. Multiply that by a billion, and you have AI auditing…
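
A quick back-of-the-envelope calculation shows why the 100-button analogy is so daunting: even if each button is simply on or off, there are 2^100 configurations to probe.

```python
# Back-of-the-envelope for the "black box with 100 buttons" analogy:
# if each button is simply on or off, an exhaustive audit would need
# to probe 2**100 distinct configurations.
configurations = 2 ** 100
print(f"{configurations:.2e} configurations")   # ~1.27e+30

# Even at a billion configurations tested per second, exhausting them
# all would take vastly longer than the age of the universe.
seconds = configurations / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")                     # ~4.02e+13 years
```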

I noted the recent effort from UL Laboratories (their non-profit branch) to develop benchmarking and auditing standards by opening the Digital Safety Research Institute (DSRI). DSRI is also partnering with the AI Incident Database, which is an excellent example of open-source collaboration for digital safety. This type of effort aims at creating benchmarking datasets and auditing software that allow for apples-to-apples comparisons between algorithms and use-cases. But it will take a lot of time and resources, because this is not a one-size-fits-all problem.

Another example of such an effort is the AI Alliance, a consortium of private companies, universities, and non-profit organizations, or similarly MLCommons, which published its first AI Safety benchmark.

Reflecting on what the history of the aviation industry has taught us, DSRI’s Director, Dr. Sean McGregor, said:

Failing to share insights with competitors for how to save lives is a moral failing and not possible in modern aviation.

✈️ 🪂

Why Should You Care about AI Regulation?

Let’s recap. ☕

We need regulation to support and enforce standards that protect people, property, and the environment from the many harmful outcomes of unregulated AI systems. Our society must demand those standards for the AI models released in our midst. Companies will need to care about robustness, fairness, privacy, etc., as those areas are slowly but surely making their way into the regulatory sphere. But what about you - dear reader - as a private citizen?

In 2017, I attended the inaugural speech of Dr. Dietterich as President of the Association for the Advancement of Artificial Intelligence (AAAI). AAAI is one of the main scientific societies researching, developing, and promoting AI. In his speech, Dr. Dietterich said:

The final high-stakes application that I wish to discuss is autonomous weaponry. Unlike all of the other applications I’ve discussed, offensive autonomous weapons are designed to inflict damage and kill people. These systems have the potential to transform military tactics by greatly speeding up the tempo of the battlefield. […] My view is that until we can provide strong robustness guarantees for the combined human-robot system, we should not deploy autonomous weapons systems on the battlefield. I believe that a treaty banning such weapons would be a safer course for humanity to follow.

Weaponized AI exists, and it is well on its way to being integrated into our daily lives. It will seem innocuous at first, but it is insidiously advancing, even if “hypothetically”.

🤖 💀