Professor of Natural Language Processing

The Hebrew University of Jerusalem

Schwartz lab

Roy Schwartz's lab at the School of Computer Science and Engineering at The Hebrew University of Jerusalem studies Natural Language Processing (NLP). Our research is driven by the goal of making text understanding technology widely accessible—to doctors, to teachers, to researchers, and even to curious teenagers. To be broadly adopted, NLP technology needs to be not only accurate but also reliable: models should provide explanations for their outputs, and the methods we use to evaluate them need to be convincing.
Our lab also studies methods to make NLP technology more efficient and green, both to decrease the environmental impact of the field and to lower the cost of AI research, thereby broadening participation in it.

Lab News

A new paper accepted to AAAI!

Congrats to Yonatan!

An awesome lab event in Nahal Halilim!

A new paper accepted to Findings of EMNLP

Congrats to Michael and Daniel!

WinoGAViL paper accepted to the NeurIPS 2022 Datasets and Benchmarks track as a featured presentation!

Congrats to Yonatan, Nitzan and Ron!

Congrats to Yarden and Inbal for defending their Master’s theses!

Thank you to all attendees of the Efficient NLP workshop in Dagstuhl!

Workshop report coming soon!

Our efficient NLP policy has been officially adopted by the ACL exec!

See document link for more details.


Biases in Datasets

We analyze the datasets on which NLP models are trained. Looking carefully into these datasets, we uncover limitations and biases in the data collection process as well as in the evaluation process. Our findings indicate that the recent success of neural models on many NLP tasks has been overestimated, and they pave the way for the development of more reliable evaluation methods.

Green NLP

The computations required for deep learning research have been doubling every few months. These computations carry a surprisingly large carbon footprint. Moreover, their financial cost can make it difficult for academics, students, and researchers, in particular those from emerging economies, to engage in deep learning research. Our lab develops tools to make NLP technology more efficient and to improve the reporting of computational budgets.

Understanding NLP

In recent years, deep learning has become the leading machine learning technology in NLP. Despite this wide adoption, the theory of deep learning lags behind its empirical success: many engineered systems are in commercial use without a solid scientific basis for their operation. Our research aims to bridge this gap between theory and practice. We devise mathematical theories that link deep neural models to classical NLP models, such as weighted finite-state automata.

Recent Publications


VASR: Visual Analogies of Situation Recognition

A core process in human cognition is analogical mapping: the ability to identify a similar relational structure between different …

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

The attention mechanism is considered the backbone of the widely-used Transformer architecture. It contextualizes the input by …

Efficient Methods for Natural Language Processing: A Survey

Getting the most out of limited resources allows advances in natural language processing (NLP) research and practice while being …

WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models

While vision-and-language models perform well on tasks such as visual question answering, they struggle when it comes to basic human …

Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias

The size of pretrained models is increasing, and so does their performance on a variety of NLP tasks. However, as their memorization …


  • School of Computer Science and Engineering, Edmond Safra Campus, Givat Ram, The Hebrew University, Jerusalem, 9190401
  • Rothberg Building C, Room C503