
AI and ethics

The rapid growth of artificial intelligence (AI) appears to be outpacing ethical discussions about its impact.

Ruairidh Fraser examines the progress that civil society groups are making towards setting ethical boundaries in this brave new world.

AI models that can generate photorealistic images and compose undergraduate essays have jumped from sci-fi fever dream into everyday reality over the past decade. And now, according to Keir Starmer, “artificial intelligence will deliver a decade of national renewal” for the UK.

Yet while Starmer seems happy to chuck the country’s eggs in an AI-generated basket, mainstream debates on the technology can feel stuck in a rut. Silicon Valley’s philosopher-prophets flit between visions of AI apocalypses and utopias, while ‘Big Tech’ CEOs promise impending shareholder windfalls and reassure us that their companies are fully capable of creating and following their own ethical codes. National governments meanwhile wield all the regulatory will of a rabbit caught in the floodlights of a Google data centre.

Amidst all this sound and fury, civil society groups have been emphasising the real-life opportunities and challenges that AI presents, and have proposed several recommendations to ensure that AI practices protect human rights and the environment.

In this feature we shift the focus away from the babbling of private corporations and governments, and instead highlight alternative approaches.

Adopting a human rights-based approach

Amnesty International has written extensively on the potential threats AI poses to human rights. They highlight that AI can easily become a means of societal control, mass surveillance and discrimination, for example through predictive policing (using data analysis to identify potential criminals), using automated systems to determine access to healthcare and social assistance, and monitoring the movements of migrants and refugees.

Amnesty is one of 120 civil society groups which called for a more human rights-based approach to AI regulation in a joint response to the EU’s long-awaited AI Act. This new EU law included red lines against some rights-violating uses of AI alongside redress measures for affected people – but did not go nearly as far as the groups had hoped.

Representing this coalition, European Digital Rights (EDRi) argued that “human centric” regulation cannot become a mere buzzword, but requires that people are treated with real dignity. Lawmakers therefore need to be bold enough to draw red lines against unacceptable uses of AI systems. Otherwise, in their view, hard-won rights to privacy, equality, non-discrimination, the presumption of innocence and many more are put under threat.

These groups also seek to guarantee not only that those impacted by these technologies are meaningfully involved in decision-making on how AI should be regulated, but that their experiences are continually centred within these discussions. The pursuit of profit cannot trump human rights.

How could this apply to corporations?

Developing policy is a crucial first step. In February 2024, the World Benchmarking Alliance called on tech companies to adopt, implement, and disclose robust AI governance policies.

They highlighted that only a quarter of the most influential tech companies meet minimum ethical AI standards, and urged more ambitious practices to mitigate risks such as bias, discrimination, and privacy violations.

Human oversight is also crucial. 

The Institute of Business Ethics recommends that companies appoint a dedicated AI ethics leader or committee to oversee the ethical use of AI technologies. They also emphasise ensuring that ethical AI practices extend across supply chains. It is easy to imagine a major Western corporation developing comprehensive policies and practices for their own staff, but continuing to rely on exploitative and privacy-violating technologies throughout their supply chains in the rest of the world.

Limiting environmental damage of AI

The largest tech companies account for an estimated 2-3 per cent of the world’s carbon emissions – roughly the same as global aviation.

Google's greenhouse gas emissions in 2023 were almost double that of 2019, and the company stated that reducing emissions to meet its 2030 net-zero target “may be challenging... due to increasing energy demands from the greater intensity of AI compute”. 

This growth shows no signs of slowing. Global data centre infrastructure is expected to more than double by 2026. In the UK, the government plans to introduce new “AI Growth Zones” to speed up the development of data centres.

Worryingly, at a time when global focus should be on reducing emissions, energy demand at data centres – accelerating because of AI – could triple by 2030, according to Boston Consulting Group. That possibility has inspired Amazon, Google and Microsoft to look beyond solar and wind to non-fossil fuel power sources, including nuclear and geothermal.

So what else can be done? In short, companies need environmental benchmarks that govern how AI’s energy consumption is resourced, and also govern how the technology is put to use.

In terms of energy use, the Green Web Foundation takes issue with tech companies’ exclusive focus on ‘greening’ their own operations, at the expense of a more holistic view. For example, many companies boast about running their data centres with renewable energy, which looks good on paper. However, they are generally not investing in new renewable generation, but are just taking the renewable capacity out of increasingly stretched national grids.

Companies should instead be helping to decarbonise the grid, and artificial intelligence could open opportunities in this regard. Microsoft co-founder Bill Gates has (perhaps conservatively) estimated that AI would increase electricity demand by between 2% and 6%, but is confident that the technology will “certainly ... accelerate a more than 6 per cent reduction” by enabling efficiency savings.

Perhaps so, yet Gates’ company appears to be simultaneously using AI to accelerate climate breakdown. Whistleblower reports in 2024 alleged that Microsoft was helping companies extract fossil fuels more efficiently with AI tools, which chimes with Global Action Plan’s assertion that “the first major clients the [AI] industry has been servicing are fossil fuel companies.”

Enabling fossil fuel extraction is particularly egregious, but we should also examine the emissions impact of more everyday ‘digital waste’ created with AI. Big tech’s innovations over the past decade have tended towards addictive software, personalised marketing, data mining and infinite scrolling. These are data-heavy, and therefore energy-heavy processes. Groups such as The People Versus Big Tech have highlighted that the internet needs to become less addictive and less reliant on over-consumption if we are to limit its environmental impact. Regulation of AI usage in advertising is touted as a good starting point.

Without a set of norms and enforced regulation, it seems likely that AI's potential to solve climate issues will amount to little more than a PR trick.

[Image: front cover of the Civil Society Manifesto for Ethical AI report]

Adopting a global outlook

Finally, many civil society organisations recognise that the discourse around AI and its regulation is dominated by Western male voices, whiteness, and wealth.

Works such as the AI Decolonial Manyfesto and The Civil Society Manifesto for Ethical AI highlight that debates are often exclusively centred around Western countries, and ignore both the risks and opportunities AI poses across the rest of the world.

A collaborative effort from groups representing over thirty countries, The Civil Society Manifesto for Ethical AI explores how the AI industry exploits cheap labour and harvests data from consumers in the Global South, but gives little back to these communities. It calls for collective reflection on ethics, more globally-focused and diverse research, and public dialogue based on workers’ rights and human rights frameworks.

It is crucial that discussions around the design, scope, and regulation of AI do not remain concentrated in a few elite cross-sections of a few wealthy countries.

Regulation must have a global scope if it is to prevent already-marginalised peoples from being exploited to develop AI systems without basic human rights safeguards or accountability mechanisms.

Emerging consensus around AI and ethics

There are certainly areas of consensus emerging.

On human rights, groups are united in their advocacy for robust safeguards to protect users from AI-driven surveillance and data misuse. This includes calls for transparent algorithms, public accountability mechanisms, and legally binding regulatory frameworks.

Civil society also stresses the importance of sustainable AI, reducing its energy footprint and ensuring that the technology is harnessed to support genuine emissions reductions.

Many organisations also demand that AI developers address biases that reinforce inequalities, calling for research and regulation that reflects global perspectives.

Overall, there is a genuine fear throughout the literature that placing profits above people will exacerbate human rights violations, accelerate climate breakdown, widen global economic disparities, and ultimately allow AI to generate further injustice rather than benefit for humanity.

Looking back to the UK, one is left wondering how far Keir Starmer’s AI master plan will address any of these concerns.

Developing an ethical rating for AI use

At Ethical Consumer, we may develop a rating standard for businesses in this area in the medium term. Key elements might include:

Carbon: companies need environmental benchmarks that govern how AI’s energy consumption is resourced, and also govern what the technology is used for (can it justify the carbon impact in a climate crisis?).

Human rights: ambitious practices to conduct human rights impact assessments of projects, to mitigate risks such as bias, discrimination, and privacy violations, and to require the same in supply chains.

Accountability mechanisms: meaningful accountability and independent oversight, including from experts in technical and human rights issues, such as academics and civil society representatives. The development, implementation, and oversight of these approaches should involve all relevant stakeholders, including those from the Global South and those most likely to be adversely impacted by AI.

Workers' rights: Companies should recognise the data rights of workers and be transparent on what data is extracted from workers, how, and what the data is used for.