Reasonable for Cabinet Office not to use AI, even if it might save time

Tribunal finds that it was reasonable for the Cabinet Office not to use AI to 
locate documents for a FOIA request

Overview of the FTT's decision on AI use in FOIA requests

In a recent decision of the First-tier Tribunal (General Regulatory Chamber) 
Information Rights (FTT), the Cabinet Office succeeded in its application for a 
strike out. An individual, Mr Ed Surridge (the Appellant), had appealed against 
a Decision Notice (DN) issued by the Information Commissioner after the Cabinet 
Office said that it would take too long to locate the information he had 
requested. In his appeal, Mr Surridge argued that the Cabinet Office should use 
artificial intelligence (AI) tools to locate the requested information. The 
Tribunal held that the Cabinet Office's position was reasonable given that it 
did not have AI tools with which to perform the search.

FOIA request and appeal process

The appeal concerned a request for information under the Freedom of Information 
Act 2000 (FOIA) about wildfire preparedness for the safeguarding of people and 
their homes near woodland.

The Cabinet Office refused the request on the basis that compliance would 
exceed the appropriate limit under section 12(1) FOIA and the Fees Regulations, 
which for central government is £600, equivalent to 24 hours of work at the 
prescribed rate of £25 per hour. The request would have required a review of 
over 3,568 emails, which the Cabinet Office estimated would take 43 hours.

The Appellant complained to the Commissioner, who issued a DN upholding the 
Cabinet Office's decision. The Appellant then appealed against the DN to the 
FTT, challenging the adequacy of the searches conducted by the Cabinet Office 
and suggesting that AI tools could be used to locate the information more 
efficiently.

The Cabinet Office applied to strike out the appeal, arguing that there was no 
prospect of the FTT finding an error of law in the Information Commissioner's 
decision: the Appellant's views on AI search methods had no prospect of showing 
an error of law in the section 12 decision; AI chatbot responses could not be 
relied upon; and the Cabinet Office did not have AI tools to assist in the 
searches.

Tribunal decision and rejection of the AI-based search argument

The Tribunal granted the strike out application on the ground that the appeal 
had "no reasonable prospect of success" under rule 8(3)(c) of the Tribunal 
Rules. It assessed the applicability of the section 12 FOIA limit by reference 
to the processes "actually used and adopted" by the public authority, not how 
the information "should have been kept" or searched. The Tribunal accepted the 
Cabinet Office's evidence that it did not have or use AI tools and therefore 
could not search its records "in the way that the Appellant envisaged."

The Tribunal also referred to Oakley v Information Commissioner [2024] UKFTT 
00315 (GRC) to highlight the inadequacy of AI searches. AI evidence was 
compared to expert evidence, the Tribunal noting that "an expert would be 
required to explain their expertise, the sources that they rely upon and the 
methodology that they applied." "Little weight" was therefore given to the 
Appellant's evidence based on AI searches, as no explanation of the sources or 
methodology used by the AI tool could be provided.

How might this ruling shape the future of the use of AI in public sector 
information management?

The FTT's decision highlights the limitations of using AI within government 
processes in the UK. Public authorities are not obliged to use AI tools that 
are not already part of their operations, and it would not be appropriate to 
mandate their use, even if such tools might theoretically improve the 
efficiency of their workflows.

As AI technologies continue to evolve, public authorities will face the 
question of whether to reassess their information management practices and 
integrate AI tools to respond to information requests more efficiently. A 
decision to integrate AI tools into day-to-day government processes would bring 
its own considerations and challenges, such as addressing the lack of 
transparency in the sources and methodology behind AI-generated evidence, and 
tackling those concerns will be crucial for effective implementation in the 
long term. For now, this decision confirms that there is no positive obligation 
on public authorities to adopt such tools.


https://www.lexology.com/library/detail.aspx?g=cc3fb15f-34f3-4517-9c78-32922728b6d3&s=09
