Behind AI as a tool for judicial reasoning: A Western Balkans Perspective
Back in 2019, I stumbled upon an article in MIT Technology Review titled "Can you make AI fairer than a judge?". As a recently graduated lawyer, my response was a resounding no. I was adamant that judicial decision-making, especially in criminal cases, should be reserved strictly for humans as moral, empathetic, and ethical beings. However, today, as a legal professional working in the Western Balkans, I am rethinking this view. After all, some concerns recur across the region's criminal justice systems: lenient sanctioning, inefficient trial management, and one that will be the focus of this article – inconsistent judicial reasoning.
Reasoning, or decision-making, in the judiciary is a complex cognitive process, with the presiding judge as the ultimate arbiter of the proceedings. Still, judicial decision-making has some nuances that allow for helpful AI intervention. This blog post aims to explore exactly what those nuances are and where AI can contribute. I am therefore revisiting the question posed by the MIT piece, reframed as follows: Can artificial intelligence (AI) help judges make fairer decisions?
What is fair?
Fairness is a rather philosophical and thus abstract concept of justice; however, it is materialized in the universal right to a fair trial. To be fair, a judicial system must comply with this right, which enshrines procedural safeguards for the parties to the proceedings. In the context of criminal procedure, trials are deemed fair if they are conducted within a reasonable time, with parties familiarized with their procedural rights, able to exercise their right of defence, and presumed innocent until proven guilty. 1
Another prerequisite for achieving a fair trial is the quality of judicial reasoning. As underlined by the CCJE, clear reasoning and analysis are basic requirements of judicial decisions and an important aspect of the right to a fair trial. Furthermore, the reasons given in the decision must allow the reader to follow the chain of reasoning that led the judge to it. 2
Therefore, even though AI cannot replicate a human judgment of what is fair, it could help assess whether a judicial decision embeds fairness through procedural safeguards of the right to a fair trial; in particular, the right to a reasoned judicial decision. 3
Artificial intelligence, as understood in the EU AI Act, refers to machine-based systems with inference capabilities that transcend basic data processing by enabling learning, reasoning, or modelling. 4 With the use of AI and its natural language processing (NLP), legal reasoning can be replicated by identifying specific patterns, analyzing legal language, and establishing relationships between specific words and concepts. 5
Hence, AI could be useful for judicial reasoning since its way of operating (with NLP) resembles various types of legal interpretation. In particular, AI can support legal interpretation by cross-checking the normative system as a whole (systematic interpretation), parsing through legal text and establishing its literal meaning (grammatical interpretation), and spotting patterns in how courts justify decisions, thus revealing a norm’s practical purpose, beyond its literal meaning (teleological interpretation).
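To make the pattern-spotting idea above concrete, here is a minimal sketch of how relationships between legal terms can be surfaced from the text of a decision. The decision text and concept list are invented for illustration, and simple sentence-level co-occurrence counting is a crude stand-in for the far richer NLP models discussed here:

```python
import re
from collections import Counter
from itertools import combinations

# Invented concept list, purely for illustration.
CONCEPTS = {"intent", "gain", "official", "procurement", "evidence"}

def co_occurrences(text: str) -> Counter:
    """Count how often pairs of legal concepts appear in the same sentence."""
    pairs = Counter()
    for sentence in text.split("."):
        found = sorted(CONCEPTS & set(re.findall(r"[a-z]+", sentence.lower())))
        for pair in combinations(found, 2):
            pairs[pair] += 1
    return pairs

# Invented fragment of a decision's reasoning.
decision = (
    "The official steered the procurement to a relative. "
    "No evidence showed intent to obtain a gain for another. "
    "Without intent, the gain element of the offence fails."
)
print(co_occurrences(decision).most_common(2))
```

Even this toy version links "intent" and "gain" as the most strongly associated pair, hinting at how pattern extraction can reveal which concepts a court ties together when reasoning.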
Some scholars even point to AI's capability to process a greater amount of relevant data, arguably better than human intelligence (HI), and argue that AI is less likely to fall prey to confirmation bias. 6 However, as rightfully underlined by CEPEJ in its report on the use of AI in the judiciary, rules of judicial independence prohibit automated interference with legal reasoning. 7
Still, judicial reasoning in criminal decisions in the Western Balkans faces several deficits. The first is a lack of legal certainty, which materializes in the absence of similar reasoning for similar cases. 8 In addition, a decision's chain of reasoning is difficult to follow when particular pieces of evidence are not linked to the corresponding facts (and elements of a criminal offence), or when relevant jurisprudence (case law) that would support the judges' reasoning is not cited. 9 This is further exacerbated by unnecessarily lengthy decisions.
Possible reasons for these deficits could be:
- Lack of guidelines (or templates) for reasoning or metrics for reasoning quality, which leads to formalistic reasoning rather than a substantial one;
- Fragmented and unpublished case law contributes to a limited legal research infrastructure;
- Statutory ambiguities that are materialized in vague legal formulations and the lack of national guidelines to help with their interpretation;
- Deficiencies in understanding of the law, which are manifested through high rates of decision reversal; 10
- Shortcomings in legal education with regard to legal methodology.
The question is – can AI resolve these deficits, or at least address the underlying causes of their occurrence?
AI Case Study
To demonstrate how an AI could support the judge in their decision-making, let us take a hypothetical scenario that draws on documented issues in judicial reasoning in the region. 11
In case A, a public official violated the Law on Public Procurement by not issuing a transparent call to ensure competitive bidding. Instead, they selected their cousin as the vendor and increased the total procurement price of the needed IT equipment by 25%. However, the court acquitted the defendant, reasoning that there was no evidence of the accused's intent to obtain a gain for another. The personal connection with the third party was deemed insufficient to prove the accused's intent.
In case B, the facts are similar to case A: a public official working in a local secretariat for city planning awarded a street renovation contract to a company owned by a friend. This time, the contract's real value was inflated by 20%. The court convicted the defendant of abuse of office, reasoning that indirect gain for third parties, especially where there is a close personal relationship, meets the legal requirement for the subjective element of the criminal offense – intent to obtain undue gain for another.
How can AI help judges?
Since these cases are hypothetical, we can imagine an ideal scenario – using an AI system that incorporates Retrieval-Augmented Generation (RAG). RAG systems retrieve relevant legal documents (i.e., data points) and then employ a language model to synthesize that information into coherent, contextually grounded answers. In this manner, RAG can confine large language models (LLMs) to laws, policies, and case law, which reduces so-called hallucinations and increases accuracy. 12 In addition, by using NLP models that can be trained on a "reasoning checklist" derived from criminal codes, AI can significantly aid judges. As shown by researchers focused on the application of AI in the legal domain, there are already technologies in place that can directly aid in legal QA (question answering), such as the Case-Based Reasoning RAG system. The same researchers further showed that applying so-called non-negative matrix factorization to legal texts exposes reasoning patterns in segmented judicial opinions. 13
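The retrieval step of such a RAG pipeline can be sketched as follows. This is a minimal illustration only: the mini-corpus is invented, keyword overlap stands in for the vector embeddings a production system would use, and a template replaces the language model that would synthesize the final answer:

```python
import re

# Invented mini-corpus of legal passages (case summaries and a statute).
CORPUS = {
    "Case A": "Acquittal: no direct evidence of intent to obtain gain for another.",
    "Case B": "Conviction: intent to obtain undue gain inferred from a close personal relationship.",
    "Art. 359": "Abuse of office requires intent to obtain an undue gain for oneself or another.",
}

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, k: int = 2) -> list:
    """Rank documents by keyword overlap with the query (a stand-in for embedding similarity)."""
    q = tokens(query)
    scored = sorted(CORPUS.items(), key=lambda kv: len(q & tokens(kv[1])), reverse=True)
    return [name for name, _ in scored[:k]]

def grounded_answer(query: str) -> str:
    """Compose an answer restricted to the retrieved passages, as a RAG system would."""
    hits = retrieve(query)
    context = " ".join(CORPUS[h] for h in hits)
    return f"Sources: {', '.join(hits)}. Context: {context}"

print(grounded_answer("How is intent to obtain undue gain proven?"))
```

The key property on display is that the answer is built only from retrieved, citable passages – this is what narrows the model to the relevant legal sources and keeps its output traceable.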
So, how exactly can AI assist judges in tackling these shortcomings – and the reasons behind them – in their daily decision-making?
- Cluster related cases and highlight inconsistencies, supporting judicial harmonization without overriding judicial discretion;
- Flag gaps in judicial reasoning, such as unaddressed elements of the criminal offence (e.g., mental element – intent), prompting the judge to provide additional reasoning;
- Identify relevant case law and provide contextualized summaries;
- Suggest condensing specific parts of the decision's text to improve clarity and readability;
- Spot ambiguous legal provisions and suggest relevant national or international standards/guidelines to fill in the gaps;
- Include judicial statistics and indicate the overall rate at which decisions in similar cases are overturned.
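The first capability on this list – clustering related cases and highlighting inconsistencies – can be sketched in miniature. The case data is invented, and Jaccard similarity over word sets is a deliberately crude stand-in for the text-similarity models a real system would use:

```python
import re

# Invented case records: (case id, factual summary, outcome).
CASES = [
    ("A", "public official steered procurement contract to cousin vendor", "acquitted"),
    ("B", "public official awarded renovation contract to friend company", "convicted"),
    ("C", "traffic offence resolved through a negotiated plea agreement", "convicted"),
]

def words(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))

def jaccard(a: set, b: set) -> float:
    """Word-set overlap as a rough proxy for factual similarity."""
    return len(a & b) / len(a | b)

def flag_inconsistencies(cases, threshold: float = 0.25):
    """Return pairs of factually similar cases whose outcomes diverge."""
    flags = []
    for i in range(len(cases)):
        for j in range(i + 1, len(cases)):
            (id1, facts1, out1), (id2, facts2, out2) = cases[i], cases[j]
            if jaccard(words(facts1), words(facts2)) >= threshold and out1 != out2:
                flags.append((id1, id2))
    return flags

print(flag_inconsistencies(CASES))
```

Here only the factually similar pair with diverging outcomes (A and B) is flagged, while the unrelated traffic case is not – mirroring how such a tool would point the judge to the inconsistency without suggesting which outcome is correct.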
Based on the technologies outlined above, the AI system’s output in the hypothetical cases could look like this:
- There appears to be an inconsistent interpretation of "intent to obtain gain for another" in the reasoning of decisions A and B, despite their similarity in facts;
- The threshold for meeting the subjective element of the abuse of office offence (intent) differs in these two decisions: in case A, intent must be proved directly, while in case B, intent can be inferred from circumstantial evidence;
- There is no direct link between several key factual elements (elements x, y, z) and the corresponding pieces of evidence;
- For the sake of clarity and readability, the decision would benefit from condensing the following paragraphs (points to specific paragraphs);
- The national Criminal Procedure Code does not specify the level at which intent is deemed proven; however, the Legislative Guide for the Implementation of the United Nations Convention against Corruption provides that judges may infer intent from objective factual circumstances;
- Acquittal verdicts were overturned in xx% of similar cases in the past two years (see the detailed overview of the retrieved case law in the "sources" part).
This is how AI can be valuable – certainly not by recommending a decision, since the judge remains completely in control, but by helping judges ensure their reasoning is grounded, consistent, and clear. In this manner, a judicial AI tool could help improve the readability of court decisions and facilitate understanding of how a potential decision would be positioned within the framework of precedent. 14
AI Safeguards
Still, we need to keep in mind that using AI in the judiciary also carries potential risks, as does anything novel in any field or profession. Ongoing efforts to regulate AI (especially in the EU) have taught us to tread lightly when engaging with it. After all, the EU AI Act classifies AI systems used in the judiciary as high-risk. AI inherently carries risks regarding data sourcing, confirmation bias, and hallucinations in its output. The basic safeguards against these risks are the principles of transparency, explainability, and human oversight.
If the AI system that produced the output above regarding cases A and B follows the transparency and explainability principles, the judge can easily verify the sources the system used (making the data behind the AI's answer easily traceable) as well as the steps the system took in forming its suggestion – a key to using AI in the judiciary responsibly. The principle of human oversight, in turn, mitigates uncritical adoption of the system's output, 15 since recent studies show that humans are more likely to agree with AI outcomes that align with their pre-existing beliefs (i.e., confirmation bias). 16 Finally, human oversight would ideally be conducted regularly and by legal professionals trained for this task. Together, these three safeguards form the ethical pillars of AI use, especially in high-risk settings such as the judiciary.
Judicial AI as an Aid – Not a Replacement
Even though digitalisation remains at an early stage in the Western Balkans region, AI systems are already in use in some EU member states' courts. 17 The EU encourages Western Balkan countries (as per the EU acquis) to align their policies with its AI framework. In addition, some scholars in Bosnia and Herzegovina support the use of AI in the judiciary, arguing that AI can help determine the intent of the perpetrator of a criminal offence. Albania is pioneering these efforts in the region by partnering with Microsoft to develop an AI that assists judges in their decision-making. 18
The key word is assist, not substitute. This is not about using AI for predictive justice or replacing judicial decision-making; AI should serve as a mirror that reveals inconsistencies in judicial reasoning. AI is here to stay, but it needs continuous oversight and regulation (especially with regard to data sourcing).
Therefore, my answer to the revised opening question is carefully optimistic: yes, AI can help judges become fairer. I am not asserting this as a techno-utopian. If anything, I am idealistic towards a vision of a more just world, where fair trial rights are upheld in each and every case. If AI, used responsibly with built-in ethical and legal safeguards, can help bring us closer to this vision, we should at least be open to it.
- Article 6 of the European Convention on Human Rights
- Consultative Council of European Judges (CCJE), Opinion No.11 of the Consultative Council of European Judges (CCJE) to the Attention of the Committee of Ministers of the Council of Europe on the quality of Judicial Decisions, 2008, paras. 3 and 36, https://rm.coe.int/16807482bf
- European Court of Human Rights, García Ruiz v. Spain, application no. 30544/96, para. 26; Ajdarić v. Croatia, application no. 20883/09, para. 34
- European Parliament and European Council, Artificial Intelligence Act, Official Journal of the European Union, 2024, Recital 12, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- European Commission for the Efficiency of Justice (CEPEJ), Council of Europe, 1st AIAB Report on The Use Of Artificial Intelligence (AI) In The Judiciary Based on The Information Contained in the Resource Centre on Cyberjustice and AI, 2025, p. 11, available at: https://rm.coe.int/cepej-aiab-2024-4rev5-en-first-aiab-report-2788-0938-9324-v-1/1680b49def
- Christoph Winter, Institute for Law and AI, The challenges of artificial judicial decision-making for liberal democracy, 2021, p. 22, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3933648
- European Commission for the Efficiency of Justice (CEPEJ), Council of Europe, ibid, p. 10
- Huma Haider, Rule of Law Challenges in the Western Balkans, The Institute of Development Studies and Partner Organisations, 2018, p. 30, available at: https://hdl.handle.net/20.500.12413/14260
- OSCE Mission to Bosnia and Herzegovina, Third annual report on judicial response to corruption: the impunity syndrome, 2020, pp. 58, 75, available at: https://www.osce.org/files/f/documents/4/e/471003.pdf
- Ibid, p. 44
- OSCE Mission to Bosnia and Herzegovina, Assessing Needs of Judicial Response to Corruption through Monitoring of Criminal Cases, Trial monitoring of corruption cases in BIH: a first assessment, 2018, pp. 42, 49, available at: https://www.osce.org/files/f/documents/b/e/373204.pdf
- Ryan C. Barron, Maksim E. Eren, Cynthia Matuszek, Boian S. Alexandrov, Bridging Legal Knowledge and AI: Retrieval-Augmented Generation with Vector Stores, Knowledge Graphs, and Hierarchical Non-negative Matrix Factorization, 2025, p. 2, available at: https://arxiv.org/pdf/2502.20364v1
- Ibid, pp. 2, 4
- Justice, Fundamental Rights and Artificial Intelligence Project, Artificial Intelligence, Judicial Decision-Making and Fundamental Rights, 2024, p. 33, available at: https://www.scuolamagistratura.it/documents/20126/1750902/JuLIA_handbook%20Justice_final.pdf
- Argyro Amidi, Anastasios Giannopoulos, Panagiotis Trakadas, AI-Assisted Judicial Decisions in European Civil Justice, National and Kapodistrian University of Athens, 2024, p. 17, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4901730
- Saar Alon-Barkat, Madalina Busuioc, Human–AI Interactions in Public Sector Decision Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice, p. 13, available at: https://arxiv.org/pdf/2103.02381
- For instance, Germany’s OLGA system shows how AI can classify and manage caseloads more efficiently.
- Regional school of public administration, Use of emerging technologies in the administrations of the Western Balkans for more efficient delivery of public services, 2023, p. 11, available at: https://www.respaweb.eu/download/doc/Use+of+emerging+technologies+in+the+administrations+of+the+WBs+for+more+efficient+delivery+of+public+services.pdf/aef3f7014f7f3605d146d8015d4e850f.pdf