Looking into the crystal ball: Artificial intelligence policy and regulation in Africa

Less than a year ago, the internet was abuzz with amazement as people experimented with ChatGPT – composing poems, song lyrics, essays and speeches. But as is typical of most artificial intelligence (AI) tools, ChatGPT did not have enough data on African languages, limiting how African users could experiment with the technology. Ethical, security and regulatory concerns around the use of AI in Africa have fuelled debates on the need for laws to regulate AI on the continent – but countries will need to take a number of other steps first.

    Artificial intelligence in Africa: a contentious issue


    The use of AI in Africa is a contentious issue. African content is underrepresented, certain stereotypes and biases are perpetuated, and Africans and their data are at risk of being exploited. A recent example of the exploitation of African data is the launch of Worldcoin’s AI project, where Kenyans were invited to scan their eyeballs on the crystal-ball-looking Worldcoin Orb in exchange for Worldcoin tokens worth around $49. This raised privacy, security, regulatory and ethical concerns over AI, ultimately leading Kenyan authorities to suspend Worldcoin’s operations pending an investigation.

    Should Africa regulate AI now or adopt the wait-and-see approach?


    The increased uptake of AI tools raises the question of whether African countries should exercise their sovereignty and leapfrog to AI regulation, or instead take their time to build knowledge around AI.

    I argue that African countries should not prioritise adopting AI-specific laws, but should instead focus on strengthening the foundational regulations on data governance. If properly done, the implementation and enforcement of data protection laws is the first step on the journey towards AI regulation. Data protection laws provide guidance on the processing of personal data such as biometrics, the use of profiling tools and automated processing, the extent of such processing activities, and the rights of individuals.

    The role of data protection law in AI regulation


    The Malabo Convention – the African Union (AU) Convention on Cyber Security and Personal Data Protection, which is now operational – is an important tool to regulate aspects of AI such as automated processing of personal data. However, for data protection laws to be impactful in regulating AI in Africa, there is a need for relevant national laws to be promulgated in the 18 African countries which still do not have data protection laws.

    Africa needs to prioritise setting up data regulators with ‘teeth’ to enforce compliance, such as Kenya’s Office of the Data Protection Commissioner (ODPC) and South Africa’s Information Regulator. The legal framework must be designed so that data regulators enjoy high levels of financial, institutional and political independence and do not succumb to political and private sector influence and intimidation.

    Data regulators should also join the Network of African Data Protection Authorities (NADPA). Through the collective efforts and expertise of NADPA members, blueprint regulations addressing the use of personal data by different AI technologies (such as facial recognition) can be developed and disseminated for consideration at national government level.

    Through the 2022 AU Data Policy Framework, African leaders agreed on important principles to guide the development of national data policies. Instead of prioritising AI regulation, policymakers should prioritise the implementation of this continental framework at the national level. Such data policy frameworks must be underpinned by robust data protection laws with stringent penalties and sanctions for non-compliance. If administrative fines are insignificant, foreign companies will consider it a low risk to test their products without complying with local data protection laws.

    Using AI regulatory sandboxes as testbeds


    Adopting regulation too early may stifle innovation. Part of the wait-and-see approach involves understanding AI technologies in controlled environments such as AI regulatory sandboxes, where technologies are tested against existing regulations. The AU has also called upon African countries to undertake studies on the impact of AI on African people.

    Regulatory sandboxes can be used as testbeds for the development of future AI policy and AI regulations (AU Resolution 473), and African governments can use the Smart Africa Blueprint on AI to develop their own AI strategies. Such studies and assessments can provide the basis for the development of AI laws and regulations that reflect African values and norms, while aligning with international standards such as the UNESCO recommendation on AI ethics.

    Dominating the AI markets


    Apart from prioritising laws regulating personal data, African governments need to capitalise on what big tech is after: African data and markets. Africa has the youngest population in the world, which means that the majority of long-term consumers and users of technology and products are in Africa. Instead of outright dismissing interested tech companies, African governments need to be strategic, use their market advantage and insist on favourable investment terms.

    For instance, any entity interested in tapping into the African AI market must be open to collaborating with local micro-, small and medium-sized enterprises or investing in local AI innovation hubs where they share knowledge and transfer skills. African governments must identify the solutions and products that tech companies seek to develop and use that ‘market intelligence’ to their advantage.

    If AI companies are interested in collecting biometric data to build identity frameworks and unlock financial opportunities, African governments can respond by prioritising the development of digital ID systems. Such systems could improve financial inclusion, access to government services and other online transactions, and governments could collaborate with the private sector to develop digital ID solutions in a manner guided by national laws.

    International partnerships and AI knowledge sharing


    African countries need to work together and share knowledge and expertise on data governance within the African ecosystem. Lessons can be learnt from South Africa, Mauritius and Rwanda, which have been working to understand AI within their own country contexts. The European Union (EU) also offers lessons from its experience with the AI Act.

    African countries lacking the financial muscle to conduct necessary feasibility studies can benefit from initiatives such as the EU’s Global Gateway strategy and a UNESCO-EU partnership on AI. In providing financial support, international partners must respect the sovereignty of African states and avoid imposing or influencing their approaches to AI regulation. African countries should develop AI strategies, laws and policies that meet their unique needs while promoting human rights in line with the African Charter on Human and Peoples’ Rights.

    Prioritising people and teaching them digital skills


    Regulating AI technology by itself is not enough. People must be educated about the value of their personal data in the digital economy, the vulnerability of such data and the security risks involved in sharing such data with third parties. Digital education and digital skills must also be accompanied by programmes aimed at eradicating poverty, and creating employment and business opportunities for young people.

    The views are those of the author and not necessarily those of ECDPM.
