Africa and the Global AI Governance Landscape

By Ayantola Alayande and Charles Falajiki | 6 September 2023

Artificial Intelligence (AI) is reshaping everyday societal practices in different ways – from foundation models powering popular generative AI platforms to algorithms optimising social networking apps, healthcare, and financial services. Human decisions and choices are increasingly mediated by automated technologies. Discussions of AI also come with much techno-positivism: strong rhetoric about AI's potential to enhance efficiency in service provision, improve labour productivity, and accelerate economic growth.

However, research has shown that the use of automated systems poses significant ethical and safety challenges to society. These tools can, sometimes unintentionally, limit opportunities and prevent underserved and underrepresented people from accessing critical resources or services. They can skew court judgments, prejudice decisions on social housing, or reinforce ethnic profiling in policing. These issues have been well documented in academic research and media documentaries. Scoring algorithms used to allocate housing benefits have been found to be racially biased, and healthcare prediction models based on patients' previous healthcare spending have reflected and reproduced existing social inequities, while also embedding new harmful biases and discrimination. As a result, ethical AI has become a top concern for scholars, technologists, and policymakers alike.

Yet, contemporary discussions on global AI governance are disproportionately led by 'technologically developed' economies – especially the UK, US, EU, and China – while the pace of AI innovation and regulation is arguably slow in the Global South. For instance, only a few African countries have published a national AI strategy or policy, and on average, Sub-Saharan Africa is the lowest-scoring region across the three main pillars of AI readiness: governance, technology readiness and human capital, and data and infrastructure. While Africa is well-positioned to use AI to accelerate its socio-economic development and boost government efficiency, AI readiness across the continent, particularly in relation to governance, is poor and raises concerns about the future of AI in the region.

This piece surveys the current global AI governance landscape, including the slowly emerging regulatory environment in Africa. Drawing on insights from other regions, we propose a lens through which African countries could approach their national AI policies, especially in relation to ethics and governance frameworks. Admittedly, mirroring frameworks from technologically developed nations raises questions about whether this is the right way to develop a uniquely African approach to AI regulation. Nonetheless, we argue, however counterintuitively, that in the absence of active African participation in global AI governance, local adaptation of other countries' experiences is the most efficient way for African states to avoid a vicious cycle of regulatory dependence on technologically developed nations.

An overview of the current global AI regulatory landscape 

Existing multilateral frameworks for AI governance include the Universal Guidelines for AI (2018), the OECD AI Principles/G20 AI Guidelines (2019), and the UNESCO Recommendation on the Ethics of AI (2021), among others. At the country level, the US, China, Singapore, the UK, and Canada are among the leading AI champions, according to the Global AI Index. Some of their AI policies are worth examining.

Canada was one of the first countries to develop a national AI strategy, in 2017. At the moment, however, China appears to be the most proactive state in consistently producing comprehensive AI governance frameworks. So far, it has enacted three major rules on AI: the 2021 regulation on recommendation algorithms, the 2022 regulation on deep synthesis (which targets deepfakes created from synthetically generated content), and the recently ratified framework for regulating generative AI.

In the US, the first comprehensive federal framework was the 2016 White House National AI Research and Development Strategic Plan. This first strategy failed to consider how societal values could shape the integration of AI into American society. The latest framework, updated in 2023, addresses this gap, in addition to outlining an approach to international collaboration in AI research. Still, centrally coordinated federal AI regulation remains weaker in the US than in China and the EU. Instead, state- and local-level regulations are stronger, and voluntary commitments from leading AI firms have dominated the scene. Lately, though, the Federal Trade Commission has made numerous moves aimed at tackling anti-competitive behaviour in the AI industry.

The UK's national AI strategy focuses on fostering AI innovation and making the country a global AI superpower. The strategy also outlines a framework for AI ethics, as well as international interoperability standards.

In Europe, the EU has set out a comprehensive AI regulation – the EU AI Act. The Act categorises AI systems into three risk levels – limited, high, and unacceptable – with each level attracting a different form of regulation. The Act also draws special attention to generative AI tools such as ChatGPT. The EU regulation is the first AI law with enforceable legal provisions.

In Africa, the first draft of the AU Artificial Intelligence Continental Strategy for Africa was finalised in March 2023, with an expected launch date of January 2024. Another recent continental regulatory development is the Smart Africa initiative, spearheaded by African Heads of State and Government. In 2021, Smart Africa published a blueprint outlining considerations for AI development in Africa. Although the blueprint contains no solid regulatory pointers, a key recommendation of the framework is ensuring active African participation in international processes on AI governance.

Apart from regional efforts by the AU, there are also a few country-level efforts towards AI regulation across the continent. In 2018, Mauritius launched the first national AI strategy in Africa. The country also announced the establishment of a National AI Council, which, in addition to overseeing the technology and the industry, would support AI innovation through fiscal and other incentives. In 2021, Egypt announced its own national AI strategy, "Artificial Intelligence for Development and Prosperity". Nigeria has also recently completed the draft of its first national AI strategy. Surprisingly, South Africa has yet to develop a national AI policy framework.

Lessons from other countries

One challenge with cross-country comparison of AI regulations is that countries often disagree on the fundamental approach to take toward AI regulation (e.g. an innovation-first approach versus a privacy/rights-first approach). Differing political and cultural contexts can also influence data sovereignty, privacy laws, and the use of AI in social systems. So far, two extremes have emerged with regard to comparability: (i) those who claim the EU AI Act will likely become the global standard for AI regulation, partly because of the breadth of the regulatory framework and of the participating countries, and (ii) those who discard any possibility of transferability to other contexts. There is, however, a sweet spot: cross-border or transnationally adaptable AI regulations, which are essential for facilitating international digital trade and cooperation.

For African countries, a few things could be adapted from other nations' regulatory landscapes, especially the technical detail and breadth of regulation, as well as the successes and failures of existing AI governance regimes. First, there is a need to clearly outline the social and cultural logics that guide a country's governance framework, and who benefits from it. China sets an important example here in that its generative AI policy centres on socialist values and national security interests, alongside questions of discrimination and bias in AI content. Of course, whether socialist values are an acceptable undergirding framework for technology governance in Africa is a different question entirely. The key point, as Wakunuma and others describe in their work, is that the AI ecosystem is a 'value-laden' one that encompasses a country's ethical, legal, sociocultural, and technical principles. As such, any governance framework in Africa must set out a clear path for integrating sociocultural values into specific AI domains.

The second consideration is that upcoming AI policies in Africa must distinguish the rules for public welfare-facing AI products from those intended strictly for commercial purposes. Beijing provides another useful example in that it sets more stringent security-assessment protocols for generative AI firms offering public-facing services. This is because AI usage for public welfare raises distinct needs, such as paying attention to the societal dynamics of the user population, resource constraints, and balancing aggregate data-driven decision-making against collateral impacts on marginalised groups or particular individuals.

Thirdly, unbundling AI regulations by type of AI is another important consideration. As Vincent Obia argues, one common practice in digital regulation in many African countries is 'regulatory annexation', wherein existing standards and legal frameworks meant for one digital domain are automatically replicated in another. Such an approach is bound to fail when it comes to AI governance. If anything, what the rise of ChatGPT and other generative AI platforms has shown in the past few months is that AI usage is highly context-specific. Even within foundation models, generated media content tends to carry more risk than text because of how easily it can be turned into deepfakes. Again, as China has done, it is crucial to develop separate frameworks for governing different types of AI models and algorithms.

The final point for consideration, seen in the regulatory approaches of both the US and the UK, is the need to foster international interoperability and global partnership in AI development. Like data, AI is becoming increasingly borderless, and while specific risks might be more acute in some places than others, AI harms are similar across countries and their effects tend to be cross-border as well. Moreover, African countries could learn much from global best practices in AI research, and international collaboration is essential for capacity building, talent development, and retention.

Integration into the global regulatory landscape

Scholars have argued that, even with carefully curated African AI frameworks, the huge power imbalance between African states and the big tech companies that currently dominate Africa's AI space will continue to limit the former's ability to exert regulatory control in the AI domain. Inevitably, African states will end up as consumers of Global North-originating regulations rather than active curators of their own. Indeed, as marginal consumers rather than producers of the technologies involved, the impact Africa can truly make in AI governance is limited: at best, it can localise existing policies from elsewhere, or create a special set of frameworks for homegrown AI technologies. In any case, to avoid a zero-sum situation, African countries should focus on active participation in existing international AI treaties, as well as campaigning for a pan-African front in AI governance bodies.

This approach is important, as the implications of a passive African voice in the global AI governance landscape are far-reaching. First, global economic inequality would widen further, since whoever sets the AI governance agenda consequently also leads its innovation. As with other sectors, Africa could be locked into technological dependence on the West, and if international trade between developed economies and Africa has taught us anything in the past few years, it is that this is often a 'winner-take-all' game. Secondly, a backseat in global AI governance means missed opportunities for fostering innovation and capacity building. By becoming active AI governance actors, African countries could influence global treaties that stimulate investment in local AI platforms, while enabling capacity building for local talent. This might also help stem the recent exodus of tech talent from the continent. The third and final implication is that, by not participating actively in the global AI discourse, Africa misses out on the chance to improve the contextual relevance of AI models. A well-known risk with many current AI models is that they are largely trained on non-inclusive data, which often do not take into account the socio-cultural contexts of many African societies. Active African participation in AI governance would ensure that issues of ethics and bias in algorithms are adequately considered, in addition to ensuring that the global AI environment remains relevant to national and regional contexts.

Ayantola Alayande is a Research Consultant in Civic Technologies and Public Policy.

Charles Falajiki is an education and digital development practitioner.