
From Prediction to Understanding: The Role of Explainability in Implementing AI in Public Policy

There is a strong trend across economic sectors toward adopting Artificial Intelligence (AI) to perform a wide variety of functions. This movement, led by large companies, draws on everything from traditional analytical and predictive models to newer Generative AI tools such as ChatGPT and Gemini to boost financial results. According to a 2024 McKinsey report, 78% of the organizations surveyed claim to already use AI in at least one business function. Along the same lines, the 2025 report by the OECD (Organization for Economic Cooperation and Development) points to high expectations of productivity gains from AI adoption among the firms surveyed, 167 of which are Brazilian.

Despite the enthusiasm, introducing such a disruptive technology poses serious risks. Inaccurate results, privacy breaches, and intellectual property infringement are just some of the issues that can lead to adverse outcomes for companies. These risks are evident enough that, according to the same McKinsey study, approximately 27% of respondents state that 100% of Generative AI outputs are reviewed by humans before use, while a similar proportion say they review no more than 20% of those outputs. The report also notes that organizations show little inclination to address risks related to the accuracy or explainability of AI results.

The 'black box' challenge in the public sector

In the public sector, the situation is no different. The OECD report Governing with Artificial Intelligence highlights the potential of AI to improve internal processes and create more effective policies that are responsive to citizens' needs, strengthening government accountability. However, there is a central concern: the risks of inadequate implementation, such as the amplification of biases and a lack of transparency, which can lead to unfair and discriminatory outcomes with profound social implications.

Many AI models focus on prediction and offer no explanations for their conclusions. After all, they were developed for predictive power, not explainability. When we try to decipher them, we find only complex mathematical formulas that are difficult to interpret.

This is where the lack of transparency surrounding AI as a 'black box' becomes even more critical and challenging for public administration. Imagine arriving at a doctor's appointment and having to accept a diagnosis without understanding its causes, or having your request for a public service denied without any justification.

Explainability as a solution

With this need in mind, Explainable AI (xAI) and interpretable Machine Learning (iML) have emerged more recently as alternatives to existing methodologies, forming a field dedicated to developing transparent and interpretable AI models.
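To give a concrete sense of what interpretability can mean in practice, the sketch below (in Python, using scikit-learn, with entirely synthetic data and hypothetical feature names invented for illustration) fits a small decision tree to a fictitious "did the municipality meet its target?" dataset and prints the resulting decision rules in plain text, rather than returning only predictions. It is a minimal illustration of the idea, not a recipe for any specific policy application.

```python
# Minimal sketch of an interpretable model, assuming scikit-learn is available.
# Features, thresholds, and data are synthetic and purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical features for 500 municipalities:
# budget per capita, staff per 1,000 residents, share of rural population.
X = np.column_stack([
    rng.normal(100, 20, 500),   # budget_per_capita
    rng.normal(5, 1.5, 500),    # staff_per_1000
    rng.uniform(0, 1, 500),     # rural_share
])
# Synthetic outcome: target met when budget and staffing exceed thresholds.
y = ((X[:, 0] > 95) & (X[:, 1] > 4.5)).astype(int)

# A shallow tree keeps the model small enough to read and audit.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a black box, the fitted rules can be printed, questioned, and discussed.
print(export_text(
    tree,
    feature_names=["budget_per_capita", "staff_per_1000", "rural_share"],
))
```

The point is not the specific model: simple, auditable rules like these allow an analyst, an auditor, or a citizen to ask why a given prediction was made, which is precisely what a purely predictive black box does not offer.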

In the public sector, this is vital because social responsibility cannot be diluted. Transparency and interpretability are key to strengthening trust and legitimacy in decision-making. Furthermore, explainable models can help identify biases perpetuated by unrepresentative data, ensuring fairer public policies.

Although AI-enabled public management still needs to mature, these trends point to a promising path. Understanding what leads a municipality to miss its targets, or a household to experience food insecurity, is far more valuable than simply knowing how many did. Uncovering the factors behind these outcomes is crucial, and the appropriate use of the right technology can help us do so in a powerful, ethical, and transparent way.

Ultimately, the adoption of AI in public management must go beyond mere predictive efficiency. It is important to consider its appropriate use in producing evidence to support and design public policies, aiming to build a more just and equitable future: decision-making may well rest on AI models, but it must always be transparent and in the service of society, rather than simply following the current trend.

This text does not necessarily reflect the opinion of Unicamp.


Bibliography

MCKINSEY & COMPANY. The state of AI in early 2024: Gen AI adoption spikes and starts to generate value. New York: McKinsey & Company, 2024. Accessed on: September 6, 2025.

ORGANIZATION FOR ECONOMIC COOPERATION AND DEVELOPMENT; BOSTON CONSULTING GROUP; INSEAD. The adoption of artificial intelligence in firms: new evidence for policymaking. Paris: OECD Publishing, 2025. Accessed on: September 6, 2025.

ORGANIZATION FOR ECONOMIC COOPERATION AND DEVELOPMENT. Governing with artificial intelligence: are governments ready?. Paris: OECD Publishing, 2024. (OECD Artificial Intelligence Papers, n. 20). Accessed on: September 6, 2025.

Cover photo: There is a strong trend in various sectors of the economy toward adopting Artificial Intelligence (AI) to perform a variety of functions (Photo: Pixabay).

November 7, 2025
