
Why AI alone cannot fix social problems

In the decades-long pursuit of solving social problems using technology, artificial intelligence is the newest hammer. From education and employment to agriculture, AI is increasingly presented as a solution to nearly every challenge.

In the book AI Snake Oil, authors Arvind Narayanan and Sayash Kapoor compare this trend to the "snake oil" of the early 20th century: products marketed as miracle cures. AI's capabilities, too, initially felt almost magical. With simple prompts, tasks like writing emails, summarizing documents, or searching for information could be done in seconds, significantly boosting individual productivity.

But it would be dismissive to ignore AI’s genuine potential. Recent U.S. economic data suggests that its productivity gains are starting to be visible at the macroeconomic level. While the increased efficiency and reduced labor costs have excited businesses, the public sector has also begun to view it as a tool for addressing social problems.

Yet there is something deeply contradictory about relying on a technology rooted in structural inequities to address social problems. In Atlas of AI, author Kate Crawford argues that AI is not a neutral or purely technical system, but a vast extractive industry built on natural resources, human labor, and entrenched systems of power. It reproduces, and often amplifies, historical inequalities, including colonial and capitalist logics.


So when it comes to AI, two realities exist side by side. On one hand, there is growing enthusiasm for deploying AI to address pressing societal challenges. On the other, these systems are themselves products of the very inequities they aim to mitigate. This tension raises an important question: Can AI meaningfully contribute to solving social problems, particularly for underserved communities that have long experienced extraction and marginalization? And if so, what would it take to design systems that are not merely performative, but genuinely grounded, accountable, and useful?

We examined eight AI systems deployed to address social problems across the developing world. What stood out was not the technology, but the dense web of people working around it. Technologists, domain practitioners, bureaucrats, and frontline workers were engaged in making these systems function. In every case, the difference between systems that faltered and those that proved useful had less to do with the sophistication of the AI models than with the strength and resilience of this human infrastructure. We highlight four of these cases.

Shiksha Copilot, developed by Microsoft Research India, is an AI-based lesson planning tool to support teachers in the country. Across our school visits, a clear pattern emerged: In schools with adequate infrastructure, dedicated administrative support, and enabling leadership, teachers had the bandwidth to experiment and use the tool to meaningfully enhance lesson planning, exploring new ways to structure content and engage students.

In more constrained settings, where teachers were burdened by administrative tasks, documentation requirements, and poor management, they used the same AI tool to generate lesson plans quickly in order to finish their reporting work faster, rather than to improve their pedagogy. The contrast was striking. The divergence in outcomes was driven not by the technology itself, but by the institutional conditions and human support systems surrounding it.

Adopting AI is only part of the challenge. A deeper issue lies in the limitations of the models themselves, especially in the developing world. AI systems often underperform in non-Western languages, reproduce biases embedded in their training data, and reflect Western-centric assumptions. This often results in significant gaps in local knowledge, with models struggling to capture situated practices, linguistic nuances, or culturally grounded information.

This became evident in FarmerChat, an AI assistant for agriculture. The system often misunderstood key agricultural terms due to accent variation. In one case, it confused masoor (a type of lentil) with mushroom, generating advice that was not only incorrect but potentially harmful. Such failures rarely surfaced in standard benchmarks, which tend to overlook culture-specific vocabulary and real-world linguistic variation. Addressing these gaps required close collaboration with field staff to design evaluation approaches grounded in agricultural terminology, accent diversity, and regional language patterns. Crucially, this work depended on people embedded in the context who could recognize, diagnose, and correct these failures.

Keeping humans in the loop

One reason AI systems are so useful is that they can respond to many different kinds of questions, unlike traditional rule-based software systems. But this ability also makes them unpredictable. Because AI models can sometimes generate incorrect or misleading responses, safeguards are needed.

In practice, this often meant keeping humans in the loop. On CataractBot, a WhatsApp-based assistant for post-surgery recovery in India, every AI-generated message was verified by a doctor before being sent, to avoid the risk of hallucinations or incorrect medical advice. The system worked because each patient was allocated to a specific doctor who was responsible for reviewing and sending responses quickly.

By contrast, with ASHABot, a chatbot for community health-care workers in India, when the AI could not answer a question, the query was forwarded to several supervisors at once. With no single person responsible for responding, supervisors often did not reply in time, and ASHA workers faced delays and uncertainty.

The difference between the two systems was not the AI itself, but the human support structure around it. Across all the deployments, AI systems worked only when the human systems around them were already working well. This observation feels almost like common sense. Workers who are already stretched thin cannot suddenly transform their practice just because an AI tool is introduced. At best, the tool helps them complete existing tasks a little faster.


This is where the promise of AI becomes more complicated. AI is often framed as a tool for efficiency, but efficiency alone does not strengthen public systems unless the underlying capacity is also improved. Even when tasks are completed faster, the deeper constraints of the system do not automatically disappear. In many cases, AI ends up addressing the symptoms of these problems rather than their causes. This aligns with Kentaro Toyama's argument in Geek Heresy: technology can only amplify existing human and institutional capacities, not substitute for them. AI is no exception, despite its perceived sophistication.

This does not mean AI cannot be useful. The cases we studied showed how organizations drew on the affordances of AI technologies while also working around their shortcomings. Systems such as ASHABot, CataractBot, and FarmerChat made knowledge more accessible, and Shiksha Copilot reduced the workload of teachers while improving their pedagogical skills. These examples illustrate how beneficial AI systems can be when the right support structures are in place. Despite the exploitative histories and infrastructures behind many AI technologies, they still offer opportunities if they are designed thoughtfully to support systems that already have strong human foundations.

The opportunity lies in recognizing this reality. AI alone will not fix social problems, no matter how good the models are, and even if they are sovereign. But if institutions invest in the people, processes, and working conditions that sustain those systems, AI can help amplify those efforts. Without that foundation, however, AI will remain what it too often becomes: a technological smoke screen for deeper institutional decline.
