Sergio Servantez Final Defense April 23: Large Language Models as Legal Reasoners

Thursday, April 23, 2026 | 1:30 PM - 3:30 PM CT
Mudd Hall (formerly Seeley G. Mudd Library), Room 3501, 2233 Tech Drive, Evanston, IL 60208

The legal system is built upon complex bodies of written text that function as operative rules governing rights, obligations, and outcomes. Recent advances in natural language processing have created new opportunities to automate aspects of legal analysis using language models (LMs). However, these models remain fundamentally probabilistic systems optimized for statistical prediction rather than faithful rule application. As a result, they often generate fluent legal language while failing to perform the precise, multi-step reasoning required to correctly interpret and apply legal rules. This dissertation investigates how language models can be improved and rigorously evaluated as legal reasoners through the integration of symbolic representations and methods grounded in legal domain knowledge.

Focusing on a tractable subset of legal tasks termed computational legal reasoning, this work develops neurosymbolic approaches that structure legal interpretation, rule application, and evaluation. First, it introduces a system for transforming natural language contracts into machine-readable representations through the extraction of Obligation Logic Graphs (OLGs), enabling contractual obligations to be represented as structured logical relationships and translated into executable code. Second, it presents Chain of Logic, a prompting method designed to improve rule-based reasoning in language models by decomposing compositional rules into their constituent elements before recombining the results to reach a final conclusion. Third, it introduces OpenExempt, a dynamic framework and benchmark that generates natural language legal tasks and their solutions from expert-crafted encodings of statutes and case facts, enabling fine-grained diagnostic evaluation of model behavior across controlled variations in reasoning complexity and task structure. Together, these contributions advance the study of language models as legal reasoners by introducing methods that improve interpretability, reliability, and diagnostic evaluation.
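To make the decomposition idea concrete, the sketch below is an illustrative toy example (not the dissertation's implementation): a compositional rule is broken into its constituent elements, each element is resolved against the facts, and the results are recombined into a final conclusion, mirroring the structure that Chain of Logic prompting elicits. The rule, fact names, and function are hypothetical.

```python
# Illustrative sketch of element-wise rule decomposition and recombination.
# Toy rule: an actor commits battery if they (1) act intentionally AND
# (2) cause contact that is (harmful OR offensive).

def evaluate_rule(facts):
    """Resolve each element of the rule separately, then recombine."""
    elements = {
        "intentional_act": facts["acted_intentionally"],
        "contact": facts["caused_contact"],
        "harmful_or_offensive": facts["harmful"] or facts["offensive"],
    }
    conclusion = all(elements.values())  # recombine: every element must hold
    return elements, conclusion

facts = {
    "acted_intentionally": True,
    "caused_contact": True,
    "harmful": False,
    "offensive": True,
}
elements, liable = evaluate_rule(facts)
# elements records how each sub-issue was resolved; liable -> True
```

The intermediate `elements` dictionary is the point: exposing each sub-conclusion makes the reasoning inspectable, in contrast to asking a model for a single holistic answer.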

Audience

  • Faculty/Staff
  • Student
  • Post Docs/Docs
  • Graduate Students

Contact

Jensen Smith
Email

Interest

  • Academic (general)
