Unlocking AI potential in delivery of public services


Artificial intelligence (AI) is no longer a futuristic concept—it’s rapidly becoming a reality that will transform governments and public sector players across the globe.

In Africa, tech giants like Google, Microsoft, and Amazon are establishing technology hubs to harness AI’s potential to address local challenges.

We are already seeing glimpses of AI’s impact in government operations. In Kenya, the Office of the Data Protection Commissioner (ODPC) launched an AI chatbot to enhance data privacy awareness among citizens and businesses.

The Kenya Revenue Authority (KRA) announced plans to leverage AI to automate tasks and improve tax collection.

International donors have also turned to AI to streamline development efforts, such as using it to analyse health data for the early detection of disease outbreaks.

Elsewhere in East Africa, Uganda’s Ministry of ICT and National Guidance has partnered with technology firms to enhance AI use for social and economic development.

Tanzania is integrating AI into its centralised e-government platforms through the Government Enterprise Service Bus (GovESB) initiative, aiming to improve efficiency and transparency.

Rwanda, a tech leader on the continent, launched the Centre for the Fourth Industrial Revolution (C4IR Rwanda), which focuses on AI, machine learning, and data governance. Egypt and Mauritius have also developed national AI strategies, recognising the potential of AI to drive national development.

At the regional level, the African Union (AU) has established a working group to develop a capacity-building framework and an AI think tank. However, despite these promising initiatives, the adoption of AI across Africa’s public sector remains limited.

Several barriers are slowing its full potential, including the lack of relevant data to train AI systems, the absence of regulatory frameworks to govern ethical AI use, insufficient government investment in AI research and development, and a shortage of AI expertise due to the low uptake of STEM education.

Additionally, governments must find a balance between data protection and fostering innovation, especially in public services that manage large amounts of personal data.

To unlock AI’s full transformative power, governments must embrace innovation while carefully navigating its risks.

One of the biggest concerns is AI bias. If AI systems are trained on biased data, they can reinforce or even worsen societal inequalities.

For example, an AI-based resume screening tool could disproportionately reject women’s applications if trained on data that reflects historical gender biases.

AI can also be weaponised for cyberattacks, ranging from phishing and malware to sophisticated nation-state attacks. Deepfakes—AI-generated fake audio or video—pose a significant threat by spreading disinformation and damaging individuals and institutions.

Data protection is another critical concern. AI-driven systems, such as facial recognition, can lead to mass surveillance, infringing on citizens’ privacy.

Moreover, unfair automated decision-making, such as AI denying loans without transparency, could lead to public dissatisfaction and unrest.

To mitigate these risks, governments need to adopt “Responsible AI” frameworks, which emphasise fairness, accountability, and ethical data practices. AI must respect human rights, including privacy and freedom of expression. It should be designed to avoid reinforcing societal biases and ensure fairness in processes such as recruitment and loan approvals.

Personal data used by AI systems must be securely managed, with limits on surveillance to protect privacy. AI decisions, particularly in high-stakes areas like law enforcement and healthcare, should be transparent and explainable, allowing citizens to appeal decisions when necessary.

Critical applications must also involve human oversight to prevent the risk of fully automated systems making life-changing decisions.

Lastly, accountability is essential, with clear lines of responsibility and mechanisms in place to address AI failures.

Governments, public sector players, and policymakers in Africa must adopt a strategic, ethical, and human-centred approach to AI. While the promise of AI is vast—enhancing efficiency and transparency, and improving public services—its risks are equally real.

Critical questions must be addressed: Does AI genuinely solve problems better than existing methods? Are we prepared to manage the biases and risks that AI may introduce? Do we have the technical capacity to implement AI effectively and responsibly?

While AI has the potential to elevate Africa’s public sector to new heights, its adoption must be guided by careful planning, ethical frameworks, and a commitment to inclusivity. With the right approach, AI can become a force for good, driving economic and social progress in Africa.

The writers specialise in cybersecurity at PwC Kenya
