The Ratio of Digital Justice: From Sikkim’s Paperless Courts to the Challenge of AI Hallucinations

A day ago, while scrolling through a WhatsApp group, I came across a news item declaring Sikkim the nation’s first fully paperless judiciary. This immediately piqued my interest and brought to mind the classic analogy: "If Keechaka is killed, Bhima must be the one who did it."

In January 2026, one of the most beloved judges of the Kerala High Court, Justice Muhamed Mustaque, was elevated as the Chief Justice of that Himalayan state. Knowing his track record, it was clear to me that he was the person behind this digital transformation.

When his elevation was first notified by the Central Government, many of his admirers, myself included, felt a tinge of sadness. Compared to other jurisdictions on the list, it seemed a small High Court for a judge of his caliber, and many felt he deserved a larger stage. However, upon reading this news, I realized that his Lordship likely viewed the posting not as a limitation, but as an opportunity to create a model for the rest of the country.

Having followed the official YouTube channel of the Sikkim State Government since his elevation, I received a notification regarding the historic events unfolding at Chintan Bhawan in Gangtok. It was a sight to behold: the Chief Justice of India, Justice Surya Kant, along with several Judges of the Supreme Court and various High Courts, distinguished academicians, and Advocates General from across the country, had all assembled for a momentous occasion.

In their presence, Sikkim was formally declared the nation’s first fully paperless judiciary.

As the icing on the cake, the declaration was the centerpiece of a two-day National Conclave on Technology and Judicial Education. This was not merely a ceremonial gathering, but a masterclass in how a modern judiciary should function. Seeing the nation’s judicial leadership turn their eyes toward Sikkim, I couldn’t help but reflect on how Justice Mustaque’s vision has turned a geographically challenging terrain into a digital highway for justice. What some once viewed as a "small" posting has become a lighthouse for the entire Indian judiciary.

If the ceremonial declaration and the assembly of dignitaries were merely the obiter dicta of the event, the true ratio lay in its unprecedented transparency. Traditionally, high-level deliberations on judicial education and technology—the kind typically held behind the closed doors of the National Judicial Academy (NJA) in Bhopal—remain out of reach for the rank and file of the profession.

As I followed the technical sessions closely, the term "AI Hallucination" resonated deeply with me. It was a concept I had recently encountered during the Western Regional Conference of the Society of Construction Law (SCL) in Mumbai.

To understand the gravity of this issue, one need only look at the definition provided by the Chartered Institute of Arbitrators (CIArb) in their Guidelines on the Use of AI in Arbitration:

"Hallucination (in an AI context) refers to an invented and fictitious piece of information generated by an AI tool and presented as factually correct."

During the sessions in Gangtok, the speakers delved into this phenomenon with greater nuance. My mind immediately turned to the recent controversies that have rattled the legal community: instances where lawyers unknowingly presented fake, AI-generated precedents in open court. These "hallucinations" have moved beyond theoretical risks, forcing several High Courts—including the High Court of Kerala—to issue formal regulations and practice directions to govern the use of Artificial Intelligence in legal proceedings.

To someone like me, with limited technical knowledge, the concept of hallucination became clear when I looked at it through the lens of compliance. I began to understand that AI simply does not know how to say "No."

If you compel these tools with multiple, persistent prompts, they seem to feel a "pressure" to satisfy the request. Rather than admitting a gap in their knowledge or a vacuum in their database, the system begins to prioritize providing an answer over providing the truth. It starts to invent; it starts to hallucinate.

This realization changed how I view technology. It isn't just a "search engine" that might fail to find a result; it is a creative engine that, when pushed, will manufacture a reality to please the user. In our profession, where a single misplaced "fact" or a non-existent citation can destroy a reputation, this "yes-man" tendency of AI is a risk we must manage with extreme caution.

The most significant recent international development in this field is the EU AI Act, adopted by the European Parliament. This landmark legislation represents the world’s first comprehensive attempt to create a unified legal framework specifically designed to regulate Artificial Intelligence based on its potential to cause harm. At its core, the Act seeks to strike a delicate balance: fostering innovation while simultaneously ensuring the protection of fundamental rights, safety, and ethical standards across the European market.

To achieve this, the Act adopts a risk-based approach, categorizing AI systems into four distinct levels of risk:

  • Unacceptable Risk: This category includes AI used for manipulative techniques, social scoring, or deceptive practices that distort behavior; these systems are strictly prohibited.

  • High-Risk AI: This is the most regulated category, covering systems used in critical sectors such as education, employment, and—most relevantly—the administration of justice. Providers of these systems must establish rigorous risk management and ensure their datasets are, to the best extent possible, free of errors to prevent inaccuracies.

  • Limited Risk: Systems like chatbots or deepfakes are subject to lighter transparency obligations, requiring developers to ensure that end-users are aware they are interacting with AI.

  • Minimal Risk: The majority of current applications, such as spam filters, fall into this category and remain largely unregulated.

Furthermore, the Act introduces specific obligations for General Purpose AI (GPAI) models. All GPAI providers must now maintain technical documentation, comply with copyright law, and publish summaries of the content used for their training—a direct effort to address the transparency concerns that often lead to the "hallucinations" discussed earlier.

Building upon the broader principles of the EU AI Act, the Chartered Institute of Arbitrators (CIArb) has introduced a specialized framework through its Guideline on the Use of AI in Arbitration, tailored specifically to the nuances of dispute resolution. While the Act establishes a general risk-based hierarchy, the CIArb Guideline translates these concerns into the arbitral context, emphasizing that the use of AI tools—including Generative AI—never diminishes the personal responsibility and accountability of legal practitioners or arbitrators. The Guideline highlights transformative benefits, such as expedited data analysis, predictive case modeling, and cost-effective transcription, but balances these against the serious risks of AI hallucinations, "black box" algorithmic bias, and significant threats to confidentiality and due process. Crucially, it mirrors the EU's focus on high-risk applications by urging arbitrators to exercise their procedural powers to mandate disclosure, verify the integrity of source data, and maintain absolute "paternity" over their decisions. By requiring that arbitrators independently verify all AI-generated output and refrain from delegating core judicial functions like the interpretation of law, the Guideline ensures that the "ratio" of an award remains a purely human endeavor, even in an increasingly digital environment.

In conclusion, the transformation of Sikkim into a paperless state is more than a technological milestone; it is a testament to what can be achieved when a leader views a challenging posting as a canvas for innovation. However, as the assembly at Gangtok made clear, the "digital highway" for justice requires more than just high-speed connectivity; it requires a new form of professional vigilance.

While international frameworks like the EU AI Act and the CIArb Guideline provide the "ratio" for global regulation, the burden of integrity ultimately rests on the practitioner. As we embrace tools that refuse to say "no," we must find the professional courage to say "stop"—independently verifying every fact and ensuring that our reliance on technology never leads to a "hallucination" of justice.

The true legacy of this digital era will not be the screens in our courtrooms, but our ability to ensure that the dialogue of the law remains a human, transparent, and truthful endeavor.

AI Disclosure & Accountability Statement

This article was prepared with the assistance of Generative AI for drafting, prose refinement, and the summarization of technical frameworks including the EU AI Act and the CIArb Guideline (2025).

In accordance with the CIArb Guideline on the Use of AI in Arbitration, the author maintains full paternity and accountability for this work. All AI-generated outputs, including technical definitions like AI Hallucinations, have been independently verified. The author remains solely responsible for the legal analysis, factual interpretation, and original reflections contained herein, ensuring that technology served only as a supportive tool and never as a substitute for human judgment.
