Professor of Natural Language Processing

The Hebrew University of Jerusalem

Schwartz lab

Roy Schwartz's lab at the School of Computer Science and Engineering at The Hebrew University of Jerusalem studies Natural Language Processing (NLP). Our research is driven by the goal of making text understanding technology widely accessible: to doctors, to teachers, to researchers, and even to curious teenagers. To be broadly adopted, NLP technology needs to be not only accurate but also reliable; models should provide explanations for their outputs; and the methods we use to evaluate them need to be convincing.
Our lab also studies methods to make NLP technology more efficient and green, both to decrease the environmental impact of the field and to lower the cost of AI research, thereby broadening participation in it.

Lab News

Congrats to Michael for getting his paper accepted to COLM 2024!

Congrats to Yuval for getting his paper on LLM label bias accepted to NAACL 2024!

Excited to contribute to the AI Environmental Impacts Act by Senators Markey and Heinrich!

An awesome lab event in Nahal Halilim!

Projects

Biases in Datasets

We analyze the datasets on which NLP models are trained. Looking carefully into these datasets, we uncover limitations and biases in the data collection process as well as the evaluation process. Our findings indicate that the recent success of neural models on many NLP tasks has been overestimated, and pave the way for the development of more reliable methods of evaluation.

Green NLP

The computations required for deep learning research have been doubling every few months. These computations have a surprisingly large carbon footprint. Moreover, the financial cost of the computations can make it difficult for academics, students, and researchers, in particular those from emerging economies, to engage in deep learning research. Our lab studies tools to make NLP technology more efficient, and to enhance the reporting of computational budgets.

Multimodality

Humans learn about the world using input from multiple modalities. Machines can also leverage other modalities in order to improve their textual understanding. Our lab studies methods for combining textual information with images, sound, video, and other modalities, with the goal of making models more robust and better able to generalize.

Understanding NLP

In recent years, deep learning has become the leading machine learning technology in NLP. Despite its wide adoption, the theory of deep learning lags behind its empirical success, and many engineered systems are in commercial use without a solid scientific basis for their operation. Our research aims to bridge the gap between theory and practice. We devise mathematical theories that link deep neural models to classical NLP models, such as weighted finite-state automata.
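For readers unfamiliar with the formalism, the sketch below shows a minimal weighted finite-state automaton that scores strings by summing path weights. The states, symbols, and weights are illustrative placeholders only; they are not taken from the lab's work.

```python
# A minimal weighted finite-state automaton (WFSA), shown only to illustrate
# the classical formalism mentioned above: states, weighted transitions, and a
# string score computed as the sum of accepting-path weights (real semiring).
from collections import defaultdict

# transitions[(state, symbol)] -> list of (next_state, weight); all placeholder values.
transitions = {
    (0, "a"): [(0, 0.5), (1, 0.5)],
    (0, "b"): [(0, 1.0)],
    (1, "b"): [(1, 1.0)],
}
start = 0
final = {1: 1.0}  # final-state weights

def score(string):
    """Sum-of-path-weights score of `string` under the toy WFSA."""
    weights = defaultdict(float)
    weights[start] = 1.0
    for sym in string:
        new_weights = defaultdict(float)
        for state, w in weights.items():
            for nxt, tw in transitions.get((state, sym), []):
                new_weights[nxt] += w * tw
        weights = new_weights
    return sum(weights[q] * fw for q, fw in final.items())

print(score("aab"))  # total weight of all accepting paths for "aab"
```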

Recent Publications


From Tokens to Words: on the Inner Lexicon of LLMs

Natural language is composed of words, but modern LLMs process sub-words as input. A natural question raised by this discrepancy is whether LLMs encode words internally, and, if so, how. We present evidence that LLMs engage in an intrinsic detokenization process, in which sub-word sequences are combined into coherent word representations. Our experiments show that this process takes place primarily within the early and middle layers of the model. They also show that it is robust to non-morphemic splits, typos, and, perhaps most importantly, to out-of-vocabulary words: when we feed the inner representations of such words to the model as input vectors, it can ‘understand’ them despite never having seen them during training. Our findings suggest that LLMs maintain a latent vocabulary beyond the tokenizer’s scope. These insights provide a practical, finetuning-free application for expanding the vocabulary of pre-trained models. By enabling the addition of new vocabulary words, we reduce input length and the number of inference iterations, which reduces both memory use and model latency, with little to no loss in model accuracy.
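As a quick, hedged illustration of the word/sub-word discrepancy the abstract starts from, the snippet below prints how a tokenizer splits whole words into sub-word pieces. The tokenizer choice ("gpt2") and the example words are placeholders, not the paper's experimental setup.

```python
# Illustration of the word / sub-word discrepancy: many words exist for the
# model only as sequences of sub-word tokens. Model and words are placeholders.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

for word in ["cat", "detokenization", "interpretability"]:
    pieces = tokenizer.tokenize(word)
    print(f"{word!r} -> {pieces} ({len(pieces)} sub-word tokens)")

# The paper asks whether, and in which layers, the model recombines such
# sub-word sequences into a single word-level representation.
```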

Attend First, Consolidate Later: On the Importance of Attention in Different LLM Layers

In decoder-based LLMs, the representation of a given layer serves two purposes: as input to the next layer during the computation of the current token; and as input to the attention mechanism of future tokens. In this work, we show that the importance of the latter role might be overestimated. To do so, we start by manipulating the representations of previous tokens, e.g., by replacing the hidden states at some layer k with random vectors. Our experiments with four LLMs and four tasks show that this operation often leads to a small to negligible drop in performance. Importantly, this happens when the manipulation occurs in the top part of the model, i.e., when k is in the final 30-50% of the layers. In contrast, doing the same manipulation in earlier layers might lead to chance-level performance. We continue by swapping the hidden states of certain tokens with the hidden states of other tokens from another prompt, e.g., replacing the word “Italy” with “France” in “What is the capital of Italy?”. We find that when applying this switch in the top 1/3 of the model, the model ignores it (answering “Rome”). However, if we apply it earlier, the model conforms to the switch (“Paris”). Our results hint at a two-stage process in transformer-based LLMs: the first part gathers input from previous tokens, while the second mainly processes that information internally.
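Below is a rough sketch of the kind of hidden-state intervention described above, implemented with a forward hook on a small Hugging Face model. The model ("gpt2"), the layer index, and the prompt are placeholders, and this is not the authors' exact experimental protocol.

```python
# Sketch: replace the hidden states of all previous tokens at layer k with
# random vectors, keeping the current (last) token intact, and see what the
# model predicts next. Model, layer index, and prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper experiments with four LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

k = 9  # layer at which to intervene (placeholder)

def randomize_previous_tokens(module, inputs, output):
    hidden = output[0]                         # (batch, seq_len, hidden_dim)
    noise = torch.randn_like(hidden)
    patched = torch.cat([noise[:, :-1], hidden[:, -1:]], dim=1)  # keep last token
    return (patched,) + output[1:]

handle = model.transformer.h[k].register_forward_hook(randomize_previous_tokens)

prompt = "What is the capital of Italy? The capital is"
ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    next_id = model(ids).logits[0, -1].argmax()
print(tokenizer.decode([next_id.item()]))

handle.remove()  # restore the unmodified model
```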

What Can Natural Language Processing Do for Peer Review?

The number of scientific articles produced every year is growing rapidly. Providing quality control over them is crucial for scientists and, ultimately, for the public good. In modern science, this process is largely delegated to peer review – a distributed procedure in which each submission is evaluated by several independent experts in the field. Peer review is widely used, yet it is hard, time-consuming, and prone to error. Since the artifacts involved in peer review – manuscripts, reviews, discussions – are largely text-based, Natural Language Processing has great potential to improve reviewing. As the emergence of large language models (LLMs) has enabled NLP assistance for many new tasks, the discussion on machine-assisted peer review is picking up the pace. Yet, where exactly is help needed, where can NLP help, and where should it stand aside? The goal of our paper is to provide a foundation for future efforts in NLP for peer review assistance. We discuss peer review as a general process, exemplified by reviewing at AI conferences. We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance, illustrated by existing work. We then turn to the big challenges in NLP for peer review as a whole, including data acquisition and licensing, operationalization and experimentation, and ethical issues. To help consolidate community efforts, we create a companion repository that aggregates key datasets pertaining to peer review. Finally, we issue a detailed call to action for the scientific community, NLP and AI researchers, policymakers, and funding bodies to help advance research in NLP for peer review. We hope that our work will help set the agenda for research in machine-assisted scientific quality control in the age of AI, within the NLP community and beyond.

Accelerating Speculative Decoding using Dynamic Speculation Length

Speculative decoding is a promising method for reducing the inference latency of large language models. The effectiveness of the method depends on the speculation length (SL) - the number of tokens generated by the draft model at each iteration. The vast majority of speculative decoding approaches use the same SL for all iterations. In this work, we show that this practice is suboptimal. We introduce DISCO, a DynamIc SpeCulation length Optimization method that uses a classifier to dynamically adjust the SL at each iteration, while provably preserving the decoding quality. Experiments with four benchmarks demonstrate average speedup gains of 10.3% relative to our best baselines.
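To make the idea concrete, here is a toy sketch of a speculative-decoding loop whose speculation length changes per iteration. The draft/target "models" and the confidence-based length rule are stand-ins for the paper's draft model, target model, and learned classifier; this is not the DISCO implementation.

```python
# Toy speculative decoding with a per-iteration speculation length (SL).
# All components are placeholders meant only to illustrate the control flow.
import random

random.seed(0)
VOCAB = list("abcde")

def draft_model(prefix):
    """Cheap draft model (placeholder): proposes a token and a confidence."""
    return random.choice(VOCAB), random.random()

def target_accepts(prefix, token):
    """Expensive target model verifying one drafted token (placeholder)."""
    return random.random() < 0.7

def choose_speculation_length(confidence, max_sl=8):
    """Stand-in for a learned classifier: speculate further when the draft is confident."""
    return max(1, round(confidence * max_sl))

def generate(n_tokens):
    out = []
    while len(out) < n_tokens:
        # Pick this iteration's SL dynamically instead of using a fixed value.
        _, confidence = draft_model(out)
        sl = choose_speculation_length(confidence)

        # Draft SL tokens cheaply.
        drafted = []
        for _ in range(sl):
            token, _ = draft_model(out + drafted)
            drafted.append(token)

        # Verify left-to-right with the target model; keep the accepted prefix,
        # then take one token from the target model itself.
        for token in drafted:
            if target_accepts(out, token):
                out.append(token)
            else:
                break
        out.append(random.choice(VOCAB))  # target model's own next token
    return "".join(out[:n_tokens])

print(generate(20))
```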

The Larger the Better? Improved LLM Code-Generation via Budget Reallocation

It is a common belief that large language models (LLMs) are better than smaller-sized ones. However, larger models also require significantly more time and compute during inference. This raises the question: what happens when both models operate under the same budget (e.g., compute, run-time)? To address this question, we analyze code generation LLMs of various sizes and make comparisons such as running a 70B model once vs. generating five outputs from a 13B model and selecting one. Our findings reveal that, in a standard unit-test setup, the repeated use of smaller models can yield consistent improvements, with gains of up to 15% across five tasks. On the other hand, in scenarios where unit tests are unavailable, a ranking-based selection of candidates from the smaller model falls short of the performance of a single output from larger ones. Our results highlight the potential of using smaller models instead of larger ones, and the importance of studying approaches for ranking LLM outputs.
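Below is a schematic of the unit-test selection setup the abstract describes: draw several candidates from a smaller model and keep one that passes the tests. The toy task, the placeholder "model", and the helper names are illustrative only, not the paper's benchmarks or code.

```python
# Schematic of best-of-n selection with unit tests: spend the budget on n cheap
# samples from a smaller model instead of one sample from a larger model.
def generate_candidate(prompt, seed):
    """Placeholder for sampling one program from a smaller code LLM."""
    # Pretend the model is sometimes right and sometimes wrong.
    if seed % 3 == 0:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"

def passes_unit_tests(code):
    """Run the candidate against a toy unit-test suite."""
    namespace = {}
    try:
        exec(code, namespace)
        return namespace["add"](2, 3) == 5 and namespace["add"](-1, 1) == 0
    except Exception:
        return False

def best_of_n(prompt, n=5):
    """Return the first of n candidates that passes the tests, if any."""
    for seed in range(n):
        candidate = generate_candidate(prompt, seed)
        if passes_unit_tests(candidate):
            return candidate
    return None  # no candidate passed; fall back as appropriate

print(best_of_n("Write add(a, b)"))
```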

Contact

  • roy.schwartz1@mail.huji.ac.il
  • School of Computer Science and Engineering, Edmond Safra Campus, Givat Ram, The Hebrew University, Jerusalem, 9190401
  • Rothberg Building C, Room C503