Nigeria’s AI Future at a Crossroads: Experts Urge Swift Regulation to Unlock Prosperity

0 78

As Nigeria rapidly embraces Artificial Intelligence (AI), particularly generative models, experts warn that the country risks falling behind in safety and ethical standards without clear, enforceable regulations. Despite being a continental leader in AI adoption, Nigeria lacks a dedicated legal framework governing AI systems or large language models (LLMs). This legislative void could stifle innovation and increase vulnerability to the risks associated with advanced technologies, including misinformation and algorithmic bias.

The government has taken preliminary steps through initiatives like the National Artificial Intelligence Policy (NAIP), introduced by the National Information Technology Development Agency (NITDA), and Senate Bill 731, which proposes establishing a National AI Commission. While still in draft stages, these frameworks aim to regulate AI applications in vital sectors such as agriculture, healthcare, and education. Legal experts from Balogun Harold law firm emphasize the urgency of codifying these proposals into law to provide much-needed guidance and accountability.

Key regulatory concerns include data privacy, explainability of AI decisions, and model transparency. The Nigeria Data Protection Act (NDPA) of 2023 has extraterritorial reach, making it relevant to foreign AI entities processing Nigerian data. Specific risks were demonstrated during a recent ‘Red Teaming’ exercise with TelecomGPT at MWC25 Barcelona, where experts showed how easily an AI model could be manipulated using roleplay-based jailbreak tactics. These findings underline the need for enforceable standards around AI safety and explainability.

The testing also revealed the risk of bias, hallucinations (false but confident answers), and data quality concerns, especially when AI models are trained on poorly sourced or non-consensual data. With Nigeria’s diverse population and socio-economic disparities, unchecked AI systems could unintentionally reinforce existing inequalities in healthcare diagnostics, job recruitment, or access to information. Experts call for mandatory fairness audits and greater scrutiny of training data provenance to mitigate these issues.

Although specific liability laws for AI-generated content don’t yet exist in Nigeria, legal experts advise LLM developers to prepare for future enforcement under consumer protection and defamation laws. As AI continues to permeate critical national functions, including customer service and agricultural monitoring, the importance of rigorous regulatory oversight becomes more urgent. Experts conclude that collaborative governance—combining legal foresight, industry accountability, and international testing standards—will be essential to ensuring that AI serves as a force for national prosperity, rather than a source of unforeseen harm.

Source: The Sun

Leave A Reply

Your email address will not be published.