My Bachelor's Thesis: Reviewing the Consistency of Semantic Capabilities of Large Language Models - A Word-in-Context Benchmark Evaluation Framework and Utility Library
You can test the semantic sentence-understanding capabilities of any* Hugging Face model.
Resurrection - The module where it happens
- Any number of records from the Word in Context dataset (or records in the same format, of course 🙂)
- Any Hugging Face model
- Detailed statistics and analytics of the model's answers to the input.
* almost any. You need to write your own scripts to test unsupported models. Resurrection has, however, been thoroughly tested on Qwen/Qwen2.5-0.5B-Instruct, so this and similar models are guaranteed to work.
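Resurrection's exact entry points are not shown here, so as an illustration only, here is a minimal sketch of the underlying idea using the `transformers` library directly, with the model named in the footnote above. The prompt wording and the `ask_same_meaning` helper are assumptions for this sketch, not Resurrection's actual API:

```python
# Illustration only: a minimal sketch of the underlying idea, using the
# transformers library directly. The prompt wording and this helper are
# assumptions; Resurrection's actual API may differ.
from transformers import pipeline

# The model the framework was thoroughly tested on (see the footnote above).
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

def ask_same_meaning(word: str, context1: str, context2: str) -> str:
    """Ask the model whether `word` means the same thing in both contexts."""
    messages = [{
        "role": "user",
        "content": (
            f'Does the word "{word}" have the same meaning in both '
            f'sentences below? Answer "True" or "False" only.\n'
            f"1. {context1}\n"
            f"2. {context2}"
        ),
    }]
    result = generator(messages, max_new_tokens=8)
    # The pipeline returns the whole chat; the last message is the reply.
    return result[0]["generated_text"][-1]["content"].strip()

print(ask_same_meaning(
    "bank",
    "He sat down on the bank of the river.",
    "She deposited the money at the bank.",
))  # expected answer: False
```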
Reviewing the Consistency of Semantic Capabilities of Large Language Models
By design, word embeddings are unable to model the dynamic nature of word semantics, i.e., the property of words to correspond to potentially different meanings. To address this limitation, dozens of specialized meaning representation techniques, such as sense or contextualized embeddings, have been proposed. However, despite the popularity of research on this topic, very few evaluation benchmarks focus specifically on the dynamic semantics of words. Pilehvar and Camacho-Collados (2019) showed that existing models had surpassed the performance ceiling of the standard evaluation dataset for this purpose, the Stanford Contextual Word Similarity dataset, and highlighted its shortcomings. To address the lack of a suitable benchmark, they put forward a large-scale Word in Context dataset, called WiC, based on annotations curated by experts, for the generic evaluation of context-sensitive representations. WiC is available at https://pilehvar.github.io/wic/.
This repository contains an algorithm that aims to achieve the highest possible accuracy on the WiC binary classification task. Each instance in WiC has a target word w for which two contexts are provided, each invoking a specific meaning of w. The task is to determine whether the occurrences of w in the two contexts share the same meaning, which clearly requires the ability to identify the word's semantic category. The WiC task is defined over supersenses (Pilehvar and Camacho-Collados, 2019): negative examples use a word in two different supersenses, while positive examples use a word in the same supersense.
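For concreteness, here is a hedged sketch of what a WiC instance contains and how the official tab-separated data files might be loaded. The column order follows the official WiC release (target word, PoS tag, token indices, the two contexts, with T/F gold labels in a separate file); the `WiCRecord` class and `load_wic` helper are illustrative, not part of this repository's code:

```python
# Hedged sketch: loading WiC instances from the official tab-separated
# files (e.g. train.data.txt + train.gold.txt). The column order follows
# the official WiC release; verify it against your local copy.
from dataclasses import dataclass

@dataclass
class WiCRecord:
    word: str      # the target word w
    pos: str       # part of speech: "N" (noun) or "V" (verb)
    indices: str   # positions of w in the two contexts, e.g. "2-5"
    context1: str  # first context, invoking one meaning of w
    context2: str  # second context
    label: bool    # True = same meaning (same supersense)

def load_wic(data_path: str, gold_path: str) -> list[WiCRecord]:
    records = []
    with open(data_path, encoding="utf-8") as data_file, \
         open(gold_path, encoding="utf-8") as gold_file:
        for data_line, gold_line in zip(data_file, gold_file):
            word, pos, indices, ctx1, ctx2 = data_line.rstrip("\n").split("\t")
            records.append(
                WiCRecord(word, pos, indices, ctx1, ctx2,
                          gold_line.strip() == "T")
            )
    return records

# Accuracy on the binary task is then just the fraction of records for
# which the model's True/False answer matches record.label.
```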
WiC POS Tagging Word Comparison Notebook
- The Google Colab notebook that runs the models can be found at this link.
- The evaluation framework and function library can be downloaded from the github.com/Fabbernat/Thesis GitHub repository.
- Testing and evaluation of language models can be viewed in the Generative Language Models spreadsheet.