Minds and Machines: The Politics of Tomorrow

Thea Nord Berget

This text discusses Artificial Intelligence not as a subject of policy and legislation, but as an active participant in legislative processes and political lobbying. It ultimately concludes that if AI fully ascends into politics, its involvement must be heavily supervised, with awareness and caution.

What could the future of AI within politics look like?

Bruce Schneier and Nathan E. Sanders outline six possible political milestones for AI. One of these is AI drafting legislation that is submitted under its own name. Another is AI achieving a coordinated policy outcome across multiple jurisdictions. Some minor, inconsequential AI-drafted bills have already been introduced in some US states, but these were heavily edited by humans. A further interesting milestone would be the acceptance of testimony on legislation, or of a comment submitted to an agency, drafted entirely by Artificial Intelligence. In other words, AI could become able to submit a draft for legislation that is considered legitimate. Another milestone concerns lobbying: even if AI has no human desires or needs, it could be programmed with a goal, such as altering taxes, and actively participate in political lobbying. AI has many of the same tools that we do to achieve policy outcomes. It could, for example, advocate by promoting ideas through digital channels, lobby and direct ideas to policymakers, or write and propose legislation.

There have also been attempts to create political actors powered by AI, such as the Danish Synthetic Party, a party with AI as its lead policymaker, trained on materials from several previous Danish micro-parties to generate the party's policy proposals. It proposed, for example, that Denmark should, drawing on ancient Greek democracy, hold a monthly poll under which ministers would be replaced by randomly chosen citizens. The rationale was most likely that the AI had learned that many previous Danish micro-parties sought direct democracy and were tired of representative democracy. The Synthetic Party did not win enough votes for a seat in parliament, but the idea behind it inspired many worldwide, and it was arguably a fascinating experiment nonetheless. Ultimately, it is not unlikely that AI will become significantly more present in politics, at the very least helping to draft policies or advocating for them through digital channels.

What are the potential issues with AI in politics?

It is argued here that AI becoming more present in politics is not entirely unproblematic. We should be especially cautious about using AI without constraint, so that legislators continue to be held to the standards they should be held to. Most of politics is about governing human citizens, so human citizens should be responsible for it, and AI should not assume a politician's or legislator's job. Being a politician is arguably a job that is supposed to be challenging. Perhaps AI can help bureaucratic processes and streamline administrative tasks that normally take a substantial amount of time and resources, but this does not mean AI should have a hand in drafting legislation or policy, or be accepted as a legitimate source of such drafts. Politics requires a certain humanity, and this should be maintained.

Currently, legislative and policy efforts are under way to regulate AI, such as the AI Act, the first legislation regulating AI proposed in the European Union. Under the AI Act, 'unacceptable risk' AI systems that can be considered a threat would be banned. This includes any system involving cognitive behavioural manipulation of people or of specific vulnerable groups, such as voice-activated children's toys that encourage dangerous behaviour, or social scoring systems that classify people based on behaviour, socio-economic status, or personal characteristics. Other 'high risk' systems that are not deemed completely unacceptable must be registered in an EU database; these include systems used in education and vocational training, management of critical infrastructure, law enforcement, migration, asylum and border control management, and assistance in legal interpretation and application of the law. In addition, general-purpose models such as ChatGPT would have to comply with transparency requirements, disclosing that the content they produce is generated by AI. The Act has been accepted by Parliament and the European Council in a provisional agreement and will very likely become official legislation in the near future.

Some worry that AI is a negative influence on politics, whilst others argue that it can only improve human existence on all levels, including within politics. In my opinion, both could be true; it is a nuanced issue. Zoltan Istvan, a former US presidential candidate, is a strong advocate for the advancement of AI and argues that AI is currently only useful for very basic tasks but advances rapidly, doubling its capacity roughly every two years. Verdict argues that if AI led our governments, we would be able to trust them to do the right thing. However, there are ethical problems with the notion that AI is an objective, unbiased tool for streamlining political tasks. Philosophically, who determines what the 'right thing to do' is? By whose morality would the AI be programmed? Given moral relativism and the fact that humans are incapable of being unbiased, AI may never be able to determine a universal morality by which to act in policy decision-making. For processes that rely on machinery and algorithms, it is probably entirely fair to say that AI can be better or more efficient than us, but it is unrealistic to claim that AI can produce objective truth or be unbiased, because it will always be programmed by humans who cannot be. In my opinion, AI is a tool that can make tasks requiring organisation, planning, or distribution of political materials more efficient. It should not, however, replace the human politicians behind the foundational ideas of political campaigns, policies, legislation, and lobbying.

A further issue with AI taking a significant role in politics is that it is becoming far too easy to falsify information. Some commentators, such as Robert Chesney, argue that the decline of trust in traditional media, the growth of communication and information delivered across social media platforms, and the increasing believability of AI-generated content such as deepfakes can together increase political misinformation. For example, technologies that alter images or spread misinformation can be used disproportionately to harm vulnerable groups, such as women, LGBTQI+ people, or people of colour running for office. According to the Council on Foreign Relations, a report from the Center for Democracy and Technology found that these groups are more likely to be targeted by misinformation campaigns. Furthermore, the rise of AI in politics could give politicians a way to evade accountability for lies or scandals. As the use of fake images increases, any problematic soundbite or video involving a politician could be dismissed as AI-generated and misconstrued, allowing them to deny blame: a 'get-out-of-jail-free card'.

Conclusion

AI continues to fascinate many of us, but let us keep in mind that, at the end of the day, humans are involved in programming and supervising AI, and ethical considerations arguably need to be at the forefront of any developments in the field. AI is an immensely fascinating development in the history of humankind. In terms of politics, however, many people might prefer policies and politicians to reflect humanity and human interests. Though AI can help with minor administrative tasks and 'busy work', it should not replace, or render less important, the humanity it takes to govern humanity. It is not unwise to be wary of how AI may be misused, for instance through misinformation; this is not AI's fault, but that of people seeking to profit or pursue ulterior motives using AI. Overall, it will remain exciting to follow AI's development in politics, and the most important argument here is that AI must continue to be regulated according to strict ethical considerations.

Thea Nord Berget is an undergraduate student studying law at Newcastle University.

Final Editor Dr Neha Vyas

Academic Lead Dr Neha Vyas
