Revisiting Structured Representation Use in LLM

DOI:

https://doi.org/10.5324/2ethqv02

Keywords:

Natural Language Processing, Large Language Models, Structured Representation

Abstract

Structured representations (SRs), pivotal in pre-LLM NLP, play a contentious role today, with studies showing that they can degrade task performance. The prevailing hypothesis is that Large Language Models (LLMs) are unfamiliar with traditional formalisms such as Abstract Meaning Representation (AMR). We argue that this view is incomplete, as LLMs are extensively trained on structured data, particularly programming code. To test this, we introduce two new prompt frameworks and evaluate three representation formats (AMR, RDF, and Python code) across multiple LLMs and tasks. Our findings indicate that the choice of representation is critical: across multiple models and tasks, Python code and RDF outperform AMR by up to 20% on classification tasks. The effectiveness of any SR is further conditioned on the LLM's baseline capability, the prompting method, and the quality of the representation itself. Although SRs can substantially boost performance for models with weaker baselines, they offer diminishing returns and can harm performance for models that are already highly capable, confirming a "sweet spot" for their application. Our work demonstrates that the utility of SRs in the LLM era depends on their alignment with the models' training data.

Published

2025-11-24

How to Cite

[1]
“Revisiting Structured Representation Use in LLM”, NIKT, vol. 37, no. 1, Nov. 2025, doi: 10.5324/2ethqv02.