The Ratio of Digital Justice: From Sikkim’s Paperless Courts to the Challenge of AI Hallucinations
A day ago, while scrolling through a WhatsApp group, I came across a news item declaring Sikkim as the nation’s first fully paperless judiciary. This immediately piqued my interest and brought to mind the classic analogy: "If Keechaka is killed, Bhima must be the one who did it."
In January 2026, one of the most beloved judges of the Kerala High Court, Justice Muhamed Mustaque, was elevated as the Chief Justice of that Himalayan state. Knowing his track record, it was clear to me that he was the person behind this digital transformation.
When his elevation was first notified by the Central Government, many of his admirers, myself included, felt a tinge of sadness. Compared to other jurisdictions on the list, it seemed a small High Court for a judge of his caliber, and many felt he deserved a larger stage. However, upon reading this news, I realized that his Lordship likely viewed the posting not as a limitation, but as an opportunity to create a model for the rest of the country.
Having followed the official YouTube channel of the Sikkim State Government since his elevation, I received a notification regarding the historic events unfolding at Chintan Bhawan in Gangtok. It was a sight to behold: the Chief Justice of India, Justice Surya Kant, along with several Judges of the Supreme Court and various High Courts, distinguished academicians, and Advocates General from across the country, had all assembled for a momentous occasion.
In their presence, Sikkim was formally declared the nation’s first fully paperless judiciary.
As the icing on the cake, the declaration was the centrepiece of a two-day National Conclave on Technology and Judicial Education. This was not merely a ceremonial gathering, but a masterclass in how a modern judiciary should function. Seeing the nation’s judicial leadership turn their eyes toward Sikkim, I couldn’t help but reflect on how Justice Mustaque’s vision has turned a geographically challenging terrain into a digital highway for justice. What some once viewed as a "small" posting has become a lighthouse for the entire Indian judiciary.
If the ceremonial declaration and the assembly of dignitaries were merely the obiter dicta of the event, the true ratio lay in its unprecedented transparency. Traditionally, high-level deliberations on judicial education and technology—the kind typically held behind the closed doors of the National Judicial Academy (NJA) in Bhopal—remain out of reach for the rank and file of the profession. Here, by contrast, the sessions were streamed openly, allowing anyone in the country to follow them.
As I followed the technical sessions closely, the term "AI Hallucination" resonated deeply with me. It was a concept I had recently encountered during the Western Regional Conference of the Society of Construction Law (SCL) in Mumbai.
To understand the gravity of this issue, one need only look at the definition provided by the Chartered Institute of Arbitrators (CIArb) in their Guidelines on the Use of AI in Arbitration:
"Hallucination (in an AI context) refers to an invented and fictitious piece of information generated by an AI tool and presented as factually correct."
During the sessions in Gangtok, the speakers delved into this phenomenon with greater nuance. My mind immediately turned to the recent controversies that have rattled the legal community: instances where lawyers unknowingly presented fake, AI-generated precedents in open court. These "hallucinations" have moved beyond theoretical risks, forcing several High Courts—including the High Court of Kerala—to issue formal regulations and practice directions to govern the use of Artificial Intelligence in legal proceedings.
To someone like me, with limited technical knowledge, the concept of hallucination became clear when I looked at it through the lens of compliance. I began to understand that AI simply does not know how to say "No."
If you compel these tools with multiple, persistent prompts, they seem to feel a "pressure" to satisfy the request. Rather than admitting a gap in their knowledge or a vacuum in their database, the system begins to prioritize providing an answer over providing the truth. It starts to invent; it starts to hallucinate.
This realization changed how I view this technology. AI isn't just a "search engine" that might fail to find a result; it is a creative engine that, when pushed, will manufacture a reality to please the user. In our profession, where a single misplaced "fact" or a non-existent citation can destroy a reputation, this "yes-man" tendency of AI is a risk we must manage with extreme caution.
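The difference between admitting a gap and inventing an answer can be made concrete with a deliberately simplified sketch. This is a toy, not how any real AI system works internally: the case name, dictionary, and function names below are my own illustrative inventions, and the "fabricated" citation is made up for the example.

```python
# Toy illustration of the "yes-man" failure mode: one lookup admits a
# knowledge gap, the other invents an answer rather than saying "No".

# A tiny stand-in for a knowledge base (one real, well-known citation).
KNOWN_CASES = {
    "kesavananda bharati v. state of kerala": "(1973) 4 SCC 225",
}

def honest_lookup(case_name: str):
    """Returns the citation if known; otherwise returns None (gap admitted)."""
    return KNOWN_CASES.get(case_name.lower())

def eager_lookup(case_name: str) -> str:
    """Never says 'No': when unsure, it fabricates a plausible-looking citation."""
    citation = KNOWN_CASES.get(case_name.lower())
    if citation is None:
        # This string is invented out of thin air -- the "hallucination".
        citation = "(2019) 7 SCC 301"
    return citation

print(honest_lookup("a fictitious case"))  # None -> the gap is admitted
print(eager_lookup("a fictitious case"))   # a confident-looking invention
```

The danger the speakers in Gangtok highlighted is exactly that the second function's output is indistinguishable, on its face, from the first function's genuine answers.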
The most significant recent international development in this field is the EU AI Act, adopted by the European Parliament and the Council of the European Union. This landmark legislation represents the world’s first comprehensive attempt to create a unified legal framework specifically designed to regulate Artificial Intelligence based on its potential to cause harm. At its core, the Act seeks to strike a delicate balance: fostering innovation while simultaneously ensuring the protection of fundamental rights, safety, and ethical standards across the European single market.
To achieve this, the Act adopts a risk-based approach, categorizing AI systems into four distinct levels of risk:
Unacceptable Risk: This category includes AI used for manipulative techniques, social scoring, or deceptive practices that distort behavior; these systems are strictly prohibited.
High-Risk AI: This is the most regulated category, covering systems used in critical sectors such as education, employment, and—most relevantly—the administration of justice. Providers of these systems must establish rigorous risk management and ensure their datasets are, to the best extent possible, free of errors to prevent inaccuracies.
Limited Risk: Systems like chatbots or deepfakes are subject to lighter transparency obligations, requiring developers to ensure that end-users are aware they are interacting with AI.
Minimal Risk: The majority of current applications, such as spam filters, fall into this category and remain largely unregulated.
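The four-tier structure above can be sketched as a simple lookup. This is only an illustrative simplification for non-technical readers: the tier names and examples come from the Act as summarised above, but the matching logic and all identifiers are my own, and real classification under the Act turns on detailed legal criteria, not keywords.

```python
# Simplified sketch of the EU AI Act's risk-based approach: four tiers,
# each with example use cases drawn from the summary above.

RISK_TIERS = {
    "unacceptable": ["social scoring", "manipulative techniques"],
    "high": ["administration of justice", "education", "employment"],
    "limited": ["chatbot", "deepfake"],
    "minimal": ["spam filter"],
}

def classify(use_case: str) -> str:
    """Returns the first tier whose example list matches the use case."""
    description = use_case.lower()
    for tier, examples in RISK_TIERS.items():
        if any(example in description for example in examples):
            return tier
    # Most current applications fall into the largely unregulated tier.
    return "minimal"

print(classify("AI assisting the administration of justice"))  # high
print(classify("Spam filter for court email"))                 # minimal
```

Notably for the legal profession, any system touching the administration of justice lands in the most heavily regulated permitted tier.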
