The common good refers to what benefits society as a whole, as opposed to the private interests of individuals or specific groups. Historically, this concept has been linked to higher purposes, with many writers distinguishing between actions taken for the common good and those driven by narrow self-interest. In contemporary political thought, the common good is often understood in terms of individual rights, justice, material welfare, or the maximization of utility. Some canonical examples of the common good in a modern liberal democracy include: the road system; public parks; police protection and public safety; courts and the judicial system; public schools; museums and cultural institutions. The concept implies a social structure in which resources and opportunities are equitably distributed, enabling everyone to reach their potential.
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals (Wikipedia).
Artificial intelligence was founded as an academic discipline in 1956, and the field went through multiple cycles of optimism followed by periods of disappointment and loss of funding, known as AI winters. Funding and interest vastly increased after 2012, when deep learning outperformed previous AI techniques. This growth accelerated further after 2017 with the transformer architecture, and by the early 2020s hundreds of billions of dollars were being invested in AI (a period known as the "AI boom").
While ChatGPT might be the most popular AI application today, it is far from the only form of AI. Academic definitions classify AI into various types, each with unique capabilities and functions. These classifications highlight how diverse AI's roles and potential are, shaping its development across domains and guiding future research into more advanced, adaptable, and autonomous forms of intelligence.
AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), recommendation systems (offered by Netflix, YouTube or Amazon), targeted online advertising (Google AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID, Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's iPhoto and TikTok).
In the context of technology, particularly artificial intelligence (AI), the common good involves prioritizing applications that broadly benefit society, rather than those serving only commercial or elite interests. For example, AI can contribute to the common good by enhancing healthcare, improving disaster response, and supporting education across socioeconomic groups. However, this requires ethical guidelines to prevent biases, protect privacy, and ensure accessibility, especially for marginalized communities.
Aligning AI with the common good also calls for balancing innovation with responsible governance. Regulations can help prevent potential harms, such as surveillance misuse or job displacement, while promoting transparency and accountability. Ultimately, to serve the common good, AI development must focus on inclusivity, equity, and public welfare, ensuring that its benefits reach all sections of society and support sustainable, collective progress.
1. Using AI in Vaccine Research and Development During COVID-19
During the COVID-19 pandemic, AI played a pivotal role across various stages of vaccine development, accelerating processes that would typically take years. By analyzing vast datasets, predicting protein structures, and optimizing clinical trial logistics, AI enabled scientists and healthcare professionals to respond more swiftly and effectively to the pandemic. We will look into three of the most important use cases of AI during this period.
Natural Language Processing (NLP) in Research
Natural Language Processing (NLP) is a branch of artificial intelligence that enables computers to understand, interpret, and generate human language. Essentially, NLP allows machines to process large volumes of text and extract meaningful information, making it invaluable for analyzing complex data in fields like healthcare and scientific research.
During COVID-19 vaccine research, NLP was crucial for handling the flood of scientific literature produced worldwide. Thousands of studies, clinical trial results, and case reports were published in a short time, creating an overwhelming amount of unstructured text data. NLP algorithms helped researchers sift through this information, quickly identifying key findings about the virus’s structure, mutations, and immune response.
For example, NLP tools could scan articles for specific terms, such as “spike protein” or “antibody response,” and automatically highlight studies with relevant insights. This rapid information extraction allowed scientists to stay updated on breakthroughs and prioritize promising research paths, accelerating vaccine development. By organizing and summarizing vast amounts of data, NLP made it possible for researchers to harness global knowledge efficiently, ultimately speeding up the fight against COVID-19.
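The keyword-scanning step described above can be sketched in a few lines. This is a deliberately simple illustration with a hypothetical mini-corpus and term list; the real pipelines used full NLP models (entity recognition, embeddings, summarization) rather than plain pattern matching.

```python
import re

# Hypothetical mini-corpus of paper abstracts (invented for illustration).
ABSTRACTS = {
    "paper-001": "We characterize the SARS-CoV-2 spike protein and its receptor binding domain.",
    "paper-002": "A retrospective study of hospital staffing levels during the pandemic.",
    "paper-003": "Neutralizing antibody response after a second vaccine dose.",
}

# Terms of interest, as in the keyword-scanning example from the text.
TERMS = ["spike protein", "antibody response"]

def find_relevant(abstracts, terms):
    """Return {paper_id: [matched terms]} for abstracts mentioning any term."""
    hits = {}
    for paper_id, text in abstracts.items():
        matched = [t for t in terms if re.search(re.escape(t), text, re.IGNORECASE)]
        if matched:
            hits[paper_id] = matched
    return hits

relevant = find_relevant(ABSTRACTS, TERMS)
```

Even this crude triage shows the value: papers mentioning target terms are surfaced automatically, so researchers only read what is likely to matter.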
Accelerating Clinical Trials
AI helped in optimizing clinical trials by selecting participants based on medical history and risk factors, thus expediting the recruitment process and data analysis during trials.
As Pfizer scientists raced to develop their COVID-19 vaccine at record-breaking speed, they turned to an innovative artificial intelligence (AI) tool to help achieve this mission.
Normally, when a clinical trial or trial phase ends, it can take more than 30 days for the patient data to be “cleaned up”, so scientists can then analyze the results. This process involves data scientists manually inspecting the data sets to check for coding errors and other inconsistencies that naturally occur when collecting tens of millions of data points. But thanks to process and technology optimizations, including a new machine learning tool known as Smart Data Query (SDQ), the COVID-19 vaccine clinical trial data was ready to be reviewed a mere 22 hours after meeting the primary efficacy case counts. The technology enabled the team to maintain an exceptional level of data quality throughout the trial, leaving minimal discrepancies to resolve during the final steps.
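The general idea of machine-checkable data-quality rules can be sketched as follows. This is not Pfizer's Smart Data Query tool, whose internals are proprietary; the records and rules below are invented purely to show how automated checks can flag discrepancies for human review instead of requiring a full manual inspection.

```python
# Illustrative sketch of automated data-quality checks on trial records.
# Records and rules are hypothetical, not real clinical data.

records = [
    {"id": 1, "age": 34, "dose_ml": 0.3, "visit": "V1"},
    {"id": 2, "age": -5, "dose_ml": 0.3, "visit": "V1"},   # impossible age
    {"id": 3, "age": 61, "dose_ml": 3.0, "visit": "V9"},   # dose and visit out of range
]

RULES = [
    ("age out of range",   lambda r: not (0 <= r["age"] <= 120)),
    ("dose out of range",  lambda r: not (0.1 <= r["dose_ml"] <= 0.5)),
    ("unknown visit code", lambda r: r["visit"] not in {"V1", "V2", "V3"}),
]

def flag_discrepancies(records, rules):
    """Return (record id, rule name) pairs for every failed check."""
    return [(r["id"], name) for r in records for name, bad in rules if bad(r)]

issues = flag_discrepancies(records, RULES)
```

Running such checks continuously during a trial, rather than in one batch at the end, is what leaves "minimal discrepancies to resolve during the final steps."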
“It saved us an entire month,” says Demetris Zambas, Vice President and Head of Data Monitoring and Management at Pfizer. “It really has had a significant impact on the first-pass quality of our clinical data and the speed through which we can move things along and make decisions.”[2]
Machine Learning in Drug Discovery
Machine learning (ML) is a type of artificial intelligence that enables computers to learn from data patterns and make predictions or decisions without being explicitly programmed. During the COVID-19 pandemic, ML was instrumental in accelerating drug discovery by analyzing vast amounts of biological and chemical data to identify potential treatments more quickly.
One key use of ML was in drug repurposing, where existing drugs were tested for effectiveness against COVID-19. ML algorithms analyzed data on known drug interactions, molecular structures, and virus proteins to identify compounds likely to bind with the SARS-CoV-2 virus, narrowing down promising candidates without the need for lengthy lab testing.
Additionally, ML models helped predict the efficacy and toxicity of new compounds, saving time and resources by focusing on the most promising options. ML also sped up the analysis of patient data from early trials, providing insights into how certain drugs affected COVID-19 symptoms.
By rapidly analyzing complex datasets and prioritizing potential treatments, ML enabled researchers to advance from initial screening to clinical testing in record time, contributing significantly to the COVID-19 response.
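The screening-and-ranking workflow above can be caricatured in a few lines. The compound names, features, and weights here are entirely invented; a real repurposing pipeline scores candidates with a trained model over molecular descriptors and docking results, but the shape of the task, score every candidate and prioritize the best, is the same.

```python
# Toy sketch of ML-style virtual screening for drug repurposing.
# All values are hypothetical and for illustration only.

candidates = {
    "compound_A": {"binding_affinity": 0.91, "known_toxicity": 0.10},
    "compound_B": {"binding_affinity": 0.45, "known_toxicity": 0.05},
    "compound_C": {"binding_affinity": 0.88, "known_toxicity": 0.70},
}

def score(features, w_bind=1.0, w_tox=0.8):
    """Higher predicted binding and lower predicted toxicity -> higher priority."""
    return w_bind * features["binding_affinity"] - w_tox * features["known_toxicity"]

# Rank all candidates, best first, so lab time goes to the top of the list.
ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
```

Here compound_C binds well but is penalized for toxicity, which is exactly the kind of trade-off the text describes models making before any wet-lab testing begins.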
2. Human Rights and Social Impact
Freedom of Expression and Access to Information
Generative AI tools enable instant access to a vast range of information, allowing users to synthesize complex topics, exchange ideas, and explore diverse perspectives through interactions with AI agents. These tools can be valuable resources for learning about human rights, democracy, and social justice, particularly in countries with restricted access to information. Access to information is a key step toward democracy: countries that lack democratic institutions often censor information, making it difficult for people there to know what is really going on. AI can help change the situation in those nations.
AI also enhances accessibility through features like captioning, image recognition, and translation, bridging language gaps and making information more available worldwide. For individuals with disabilities, AI offers innovative solutions such as speech-to-text applications, vision-enhancing tools, and navigation systems that improve communication and interaction.
AI can make digital platforms more user-friendly with voice recognition and gesture control, promoting inclusivity. It also supports education by providing tailored materials for students with learning disabilities and can create inclusive workplaces. Additionally, AI systems can assist in diagnosing illnesses, improving treatment outcomes for individuals with disabilities, and enhancing the functionality of prosthetic limbs. Overall, these applications demonstrate AI’s potential to foster accessibility, independent living, and inclusion for persons with disabilities.
Governments around the globe can also use AI to execute political decisions promptly and to reach the public more efficiently.
AI in politics
AI is being utilized in the public sector to enhance four core government functions: safety and security, public service delivery, revenue collection, and overall efficiency. By improving decision-making and reducing service costs, AI helps governments operate more effectively. It streamlines the hiring process by using algorithms to analyze applications and assess potential skills in candidates.
AI also enhances various governmental services, such as healthcare and criminal justice. For instance, the Rwandan government collaborated with Babylon Health to develop chatbots that assist in triaging patients by providing care recommendations based on reported symptoms. These tools facilitate quality service delivery efficiently and equitably.
Rwanda made good use of AI to enhance its healthcare system, and other countries could follow its example by implementing similar approaches in the future.
AI tools have the potential to enhance political engagement by educating citizens about democratic principles and policies. For example, political recommender systems can help users learn about candidates' platforms, while AI can keep citizens updated on relevant policy developments and facilitate their expression of opinions to politicians. In civic debates, AI can summarize discussions, moderate interactions, and foster consensus. By lowering barriers to participation, AI can make citizens more engaged and meaningfully improve political interaction.
Politicians can also benefit from AI by using it to analyze citizen feedback from public consultations, helping them understand public sentiment better and allowing for tailored responses. Trust can remain intact as long as AI usage is transparent and overseen by humans.
Moreover, AI is transforming political campaigns, enabling rapid responses to developments and leveraging sentiment analysis to target voter groups effectively. In policymaking, AI can streamline processes by identifying issues, enhancing understanding, and aiding in drafting and lobbying efforts.
3. Forecasting Natural Disasters
Predicting earthquakes has been one of the most challenging tasks in science. For decades, researchers have sought reliable methods to foresee these natural disasters to save lives and reduce economic damage. Now, a new player is changing the game: Artificial Intelligence (AI). Recent advancements suggest that AI might hold the key to predicting earthquakes with a level of accuracy never before achieved. [9]
Machine learning (ML) is increasingly used in disaster prediction, helping to analyze vast amounts of environmental data to identify patterns and provide early warnings for natural events like earthquakes, floods, and hurricanes. By training ML models on seismic data, researchers can detect patterns that precede earthquakes, giving communities valuable seconds to minutes of warning to prepare for potential damage.
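One classic building block of automated seismic event detection is the STA/LTA trigger: the ratio of a short-term amplitude average to a long-term average spikes when a seismic wave arrives against quiet background noise. A minimal sketch follows; the window sizes, threshold, and signal values are illustrative, not tuned operational parameters.

```python
# Minimal sketch of the classic STA/LTA (short-term average / long-term
# average) trigger used in seismic event detection. Illustrative values only.

def sta_lta(signal, short=3, long=10):
    """Return the STA/LTA ratio at the end of the signal."""
    sta = sum(abs(x) for x in signal[-short:]) / short
    lta = sum(abs(x) for x in signal[-long:]) / long
    return sta / lta if lta > 0 else 0.0

# Quiet background noise, then a sudden high-amplitude arrival.
quiet = [0.1, -0.1, 0.1, -0.1, 0.1, -0.1, 0.1, -0.1, 0.1, -0.1]
event = quiet[:7] + [2.0, -2.1, 2.2]

THRESHOLD = 2.0
triggered_quiet = sta_lta(quiet) > THRESHOLD   # steady noise: ratio near 1
triggered_event = sta_lta(event) > THRESHOLD   # sudden arrival: ratio spikes
```

Modern ML detectors generalize this idea, learning subtler precursor patterns from seismic waveforms than a fixed amplitude ratio can capture.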
Why is earthquake forecasting so difficult? Earthquake prediction is exceptionally challenging due to the complex and chaotic nature of tectonic activity. Unlike hurricanes or floods, earthquakes have no easily identifiable precursors. Seismic shifts can occur with little to no warning, making it hard for scientists to pinpoint when or where an earthquake will strike. Additionally, tectonic plate movements are affected by multiple variables, including fault line conditions and underground pressure, which are difficult to measure precisely. Traditional methods rely on historical data and fault line monitoring, but these offer limited predictive power.
Japan, one of the most seismically active countries in the world, has been a leader in using AI and ML for earthquake forecasting. The country has implemented an advanced early warning system that combines seismometers, machine learning algorithms, and rapid communication networks to detect the first signs of an earthquake. The system can analyze seismic wave data in real time, using ML models to estimate the earthquake’s magnitude and probable impact area within seconds.
When the system detects an earthquake’s initial seismic waves (P-waves), which are weaker and travel faster than the damaging S-waves, AI algorithms process this information to forecast the potential severity and location of the quake. If the quake is likely to cause significant damage, alerts are automatically sent to residents, businesses, and infrastructure facilities, allowing people to shut off gas lines, stop trains, and move to safer locations. As explained in a report by MIT Technology Review, these alerts can provide “seconds to minutes of warning,” which can be critical in reducing casualties and damage.
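The warning window such a system can offer follows directly from the speed gap between P- and S-waves. A back-of-the-envelope sketch, using rough textbook values for crustal wave speeds (actual speeds vary with geology):

```python
# Approximate warning time an early-warning system can provide: P-waves
# outrun the damaging S-waves, and the gap between their arrivals is the
# usable warning window. Wave speeds are rough textbook crustal values.

V_P = 8.0   # km/s, approximate P-wave speed
V_S = 4.5   # km/s, approximate S-wave speed

def warning_seconds(epicenter_km):
    """Seconds between P-wave detection and S-wave arrival at a site."""
    return epicenter_km / V_S - epicenter_km / V_P

# A site 100 km from the epicenter gets roughly ten seconds of warning.
t = warning_seconds(100.0)
```

This is why alerts give "seconds to minutes": sites close to the epicenter get almost no lead time, while distant ones gain proportionally more, enough to stop trains and shut off gas lines.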
By leveraging AI, Japan’s system represents a significant advancement in earthquake preparedness, turning real-time data into actionable insights. While not yet capable of long-term prediction, Japan’s AI-enhanced system exemplifies how machine learning can transform disaster response, helping communities react swiftly and minimize harm.
The rapid advancement of artificial intelligence (AI) brings not only transformative potential but also significant ethical concerns. One of the most pressing issues is privacy and surveillance risks associated with AI-powered data collection. AI systems often rely on vast amounts of personal data, which, if mishandled, can infringe on individuals’ privacy and autonomy. This risk is especially pronounced in authoritarian regimes, where AI-driven surveillance tools are used to monitor and control populations. For instance, facial recognition and predictive policing technologies enable governments to track individuals, suppress dissent, and limit freedom of expression. According to human rights advocates, such applications threaten basic freedoms by creating environments where citizens are constantly monitored and potentially penalized for their behaviors. As AI continues to evolve, addressing these privacy risks is essential to ensure that AI serves society’s common good rather than becoming a tool of oppression.
Russia has significantly increased the use of AI-based tools for internal security and stability, leading to enhanced internet censorship and repression through measures like predictive policing. Urban surveillance has expanded with AI-driven facial recognition technology integrated into CCTV cameras in Moscow. The ‘Yarovaya Law’ provides a legal framework for widespread repression and narrative control, mandating internet service providers to store communications for six months to support AI surveillance. Additionally, the ‘Sovereign Internet Law’ (or Runet) isolates Russia’s internet, utilizing Deep Packet Inspection (DPI) technology for real-time traffic filtering by authorities.
In effect, Russia invokes security to justify repression and censorship, making some of the worst uses of AI in the process. It has also legalized the surveillance of its citizens, placing the entire population under a mass surveillance system whose primary beneficiaries are the authorities themselves.
While Iran’s use of AI for repression is not as advanced as that of Russia and China, it has evolved to monitor internet traffic and enforce Islamic moral standards on digital content. The country aims for ‘digital sovereignty’ through the National Information Network (NIN), which isolates Iranian users from the global internet and facilitates state censorship by blocking counter-narratives on foreign websites. This strategy is framed as a defense against cyber threats but effectively limits public dissent and fosters a climate of surveillance among journalists, activists, and citizens.
Examples include the use of facial recognition during protests, AI-driven bots to promote pro-regime content, and tools for creating multilingual content aimed at a global audience. Iran has partnered with Chinese companies to enhance its AI capabilities, importing technology and hardware from various countries, including the UAE, China, Turkey, and India. Concerns about Iran’s AI use extend beyond its borders, involving hacking, cyber espionage, and social media manipulation to support pro-Iranian narratives while surveilling dissidents abroad.
The more a country spies on its population, the less democratic it becomes. Only a state that fears mass subversion of its power would exercise excessive surveillance over its citizens. Iran’s use of AI to enforce stronger surveillance is contrary to democracy and the rule of law.
For AI to truly serve the common good, it must be accessible and inclusive, benefiting people across all demographics and abilities. However, technological advancements often widen the gap between those who have access to resources and those who do not. Ensuring AI’s inclusivity requires deliberate efforts to bridge this digital divide, design technology that accommodates diverse needs, and foster collaboration across sectors. By addressing these challenges, AI can become a tool that empowers all communities, promoting equity and enhancing the quality of life for underserved and marginalized groups.
Overcoming the Digital Divide
The digital divide refers to the gap between those who have access to technology and those who do not, often due to economic, geographic, or educational barriers. In the context of AI, this divide can worsen inequalities if advanced technologies remain accessible only to affluent or urban populations. Ensuring AI accessibility requires addressing these disparities through targeted investments in digital infrastructure, especially in low-income and rural areas. Governments, nonprofits, and tech companies can work together to provide affordable internet access, digital devices, and training programs.
One approach to bridge this divide is through public-private partnerships, where tech companies partner with governments to expand digital networks and provide subsidized devices in underserved regions. Additionally, open-source AI tools can help democratize access, allowing individuals and organizations worldwide to use and build on AI technologies without prohibitive costs. By reducing barriers to entry, society can ensure that the benefits of AI reach all communities, promoting more equitable technological progress.
Inclusive Design
Inclusive design in AI means creating technologies that consider the diverse needs of users, including those with disabilities, the elderly, and other marginalized groups. This approach emphasizes designing AI systems that are usable and accessible to people with varying abilities and backgrounds. For example, AI-powered voice assistants should support multiple languages, dialects, and speech patterns, while interfaces should include options for those with visual, auditory, or motor impairments.
Inclusive design requires involving diverse groups in the AI development process, ensuring that their needs and feedback are considered from the beginning. This approach not only prevents exclusion but also reduces potential biases within AI systems. By prioritizing inclusivity, tech developers can create products that are genuinely user-friendly and accessible, thereby expanding the reach and impact of AI. Ultimately, inclusive design contributes to a more just and fair technological landscape where AI solutions are available to all, regardless of their abilities or background.
Cross-Sector Collaboration
Ensuring accessibility and inclusivity in AI is a complex challenge that requires collaboration across sectors, including government, private industry, academia, and civil society. Each of these stakeholders brings unique expertise and resources that can contribute to responsible AI development. Governments can create policies and regulations that promote equitable access and enforce standards for ethical AI use. Private companies, meanwhile, play a role in developing and distributing AI technologies, with the potential to prioritize accessibility and inclusivity in their products.
Academia contributes by researching the social implications of AI and developing new methodologies to reduce bias. Nonprofits and civil society organizations are essential in representing marginalized communities, advocating for fair AI practices, and providing insights into the needs of underserved populations. By working together, these sectors can create a comprehensive ecosystem that ensures AI serves the public interest, prioritizing inclusivity and ethical standards. Cross-sector collaboration fosters a balanced approach to AI, making technology more accessible and beneficial for all.
In conclusion, ensuring that AI serves the common good requires a proactive focus on accessibility and inclusivity across all stages of its development and deployment. First, bridging the digital divide is essential to prevent AI from becoming a privilege available only to certain groups. By investing in digital infrastructure and providing affordable access to technology, we can make AI’s benefits available to underserved communities worldwide.
Second, inclusive design is critical to creating AI systems that accommodate diverse user needs. Designing with inclusivity in mind — from supporting multiple languages to ensuring usability for people with disabilities — broadens AI’s reach and impact, making it truly accessible for all.
Finally, cross-sector collaboration is vital to address the complex challenges of equitable AI. Governments, tech companies, academic institutions, and nonprofits each play a role in building an AI ecosystem that prioritizes social responsibility and equity. By working together, these sectors can establish guidelines, share resources, and develop standards to guide AI toward benefiting the widest possible audience.
Together, these efforts can transform AI into a tool that enhances opportunities, empowers marginalized groups, and promotes a fairer, more inclusive society.