
Experts warn AI bias could undermine safety and security


MyJoyOnline

published: Jun 18, 2025


Artificial Intelligence (AI) systems are increasingly becoming tools of efficiency and innovation, but they could also become instruments of unfairness and insecurity if left unchecked.

That was the key warning at a DIPPER Lab Research Masterclass organised virtually.

The masterclass was themed “Bias in AI at the Crossroads of Safety and Security” and brought together researchers, policy analysts, tech developers and students.

The event forms part of a larger AI research capacity-building initiative funded by the UK’s Foreign, Commonwealth and Development Office (FCDO) and Innovate UK through a Knowledge Transfer Partnership project involving KNUST, Manchester Metropolitan University and Sesi Technologies.


Delivering the keynote address was Dr. Mohammed Al-Khalidi, Co-Lead of the AI Safety and Security team at Manchester Metropolitan University’s Turing Network.

He cautioned that AI systems, if not properly audited, can be manipulated to produce biased and potentially dangerous results.

“Bias in AI isn’t just about fairness anymore. It’s about safety. It’s about security. We’ve shown how easily a model can be pushed beyond acceptable fairness thresholds through simple adversarial attacks,” Dr. Al-Khalidi said.

Dr. Al-Khalidi presented findings from an experiment involving credit scoring algorithms.

Using a fairness metric called Average Odds Difference, he demonstrated how AI models that initially performed within acceptable fairness ranges could be manipulated through techniques like poisoning, black-box, and white-box attacks.
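
For readers unfamiliar with the metric, Average Odds Difference is the average of the gaps in true-positive and false-positive rates between an unprivileged and a privileged group, with values near zero indicating parity. A minimal sketch of the computation (illustrative code, not the team’s own) might look like this; on this scale, the 10% threshold cited in the talk corresponds to keeping the value within roughly ±0.10.

```python
import numpy as np

def average_odds_difference(y_true, y_pred, group):
    """Average Odds Difference: mean of the gaps in false-positive and
    true-positive rates between the unprivileged (group == 0) and
    privileged (group == 1) groups. Values near 0 indicate parity."""
    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        tpr = yp[yt == 1].mean() if (yt == 1).any() else 0.0  # true-positive rate
        fpr = yp[yt == 0].mean() if (yt == 0).any() else 0.0  # false-positive rate
        return tpr, fpr

    tpr_u, fpr_u = rates(group == 0)
    tpr_p, fpr_p = rates(group == 1)
    return 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p))
```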

“White-box attacks were the most damaging, pushing fairness metrics up to 25% bias, far beyond the 10% threshold for acceptable systems.


“That means the model is not only unfair, it’s unsafe and insecure,” he explained.

The study found that even minor data manipulations could cause significant disparities in how AI systems treat different demographic groups.
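
The article does not reproduce the experiment, but a label-flipping poisoning sketch of the kind described can be put together in a few lines. The synthetic data, group attribute and flip rate below are purely illustrative assumptions, and average_odds_difference is the helper sketched above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative synthetic "credit" data: two features and a binary group label.
n = 2000
X = rng.normal(size=(n, 2))
group = rng.integers(0, 2, size=n)          # 0 = unprivileged, 1 = privileged
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Poison the training labels: flip a fifth of the unprivileged group's
# positives, only a few percent of the overall dataset.
y_poisoned = y.copy()
candidates = np.flatnonzero((group == 0) & (y == 1))
flipped = rng.choice(candidates, size=candidates.size // 5, replace=False)
y_poisoned[flipped] = 0

for labels, tag in [(y, "clean"), (y_poisoned, "poisoned")]:
    model = LogisticRegression().fit(X, labels)
    aod = average_odds_difference(y, model.predict(X), group)  # helper above
    print(f"{tag:8} AOD = {aod:+.3f}")
```

In toy runs like this, the poisoned model’s odds gap typically widens against the targeted group, echoing the disparities the study reported.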

The masterclass also explored strategies for building more robust and fair AI models.

Dr. Al-Khalidi stressed the importance of adversarial training, which involves teaching AI systems to recognise and reject malicious or misleading inputs.

“It’s like teaching a child what’s inappropriate. You have to show it the bad examples to learn what to avoid,” he noted.
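
In practice, “showing the bad examples” usually means augmenting each training batch with deliberately perturbed inputs. The sketch below assumes a PyTorch-style classifier and an FGSM-style perturbation with an illustrative epsilon; it shows the general technique, not the team’s actual setup.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, epsilon=0.05):
    """One training step on clean plus FGSM-perturbed inputs, so the
    model also learns from the 'bad examples' it should resist."""
    # Craft adversarial inputs by nudging x along the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Update on the clean and adversarial batches together.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```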

Other recommendations included regular audits of AI models, stronger data sanitisation methods, and carefully balanced explainability features, ensuring that AI remains transparent without giving hackers the tools to exploit its design.
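
The talk did not name specific sanitisation tooling, but a first-pass data sanitisation step often amounts to removing exact duplicates and extreme outliers, both common carriers of poisoned records. The helper below is one hedged illustration.

```python
import numpy as np

def sanitise(X, y, z_max=4.0):
    """Basic pre-training sanitisation: drop exact duplicate rows and
    extreme feature outliers before the data reaches the model."""
    # Deduplicate, keeping first occurrences in their original order.
    _, keep = np.unique(np.column_stack([X, y]), axis=0, return_index=True)
    X, y = X[np.sort(keep)], y[np.sort(keep)]

    # Drop rows with any feature more than z_max standard deviations out.
    z = np.abs((X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12))
    inliers = (z < z_max).all(axis=1)
    return X[inliers], y[inliers]
```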

The session also delved into different forms of bias, including confirmation bias, selection bias, and bias arising from underrepresented demographics.

Dr. Al-Khalidi called for broader experimentation across more complex AI models and data scenarios.

He noted that his team’s study used logistic regression but said further work could test the impact of adversarial attacks on neural networks and other advanced systems.

“AI is here to stay. We’re not saying don’t use it. But human oversight is still essential. We need to balance automation with ethical and secure design,” he said.

Dr. Eric Tutu Tchao, Scientific Director of the DIPPER Lab and host of the event, reiterated the Lab’s commitment to advancing research in AI Safety and Security.

The DIPPER Lab Masterclass forms part of ongoing efforts to strengthen Africa’s voice in the global AI Security conversation.
