SWAGex for Commonsense Validation and Explanation

01/27/2020

Project Motivation

Commonsense reasoning has long been a challenge in Natural Language Processing (NLP). Recently, large pre-trained language models such as BERT and GPT have achieved state-of-the-art results on many NLP tasks, and their success has renewed interest in commonsense reasoning.

Natural Language Inference (NLI) is one of the most popular tasks in commonsense reasoning research: given a premise, the goal is to determine whether a hypothesis is correct. The Situations With Adversarial Generations (SWAG) task is closely related to NLI. Given a start phrase as context, the goal is to select the most appropriate ending; the start phrases and endings in the SWAG dataset are video captions. The task is to find the ending that describes a possible (future) world that can follow the given sentence, even when it is not strictly entailed. Making such inferences requires a rich understanding of everyday physical situations.
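To make this Next Event Prediction setting concrete, below is a minimal sketch (not the SWAGex system itself) of scoring a start phrase against candidate endings with a BERT multiple-choice head. It assumes the HuggingFace Transformers library and the bert-base-uncased checkpoint; the context and endings are made-up illustrations, and the scores are only meaningful after fine-tuning on SWAG.

    # Minimal sketch: scoring SWAG-style endings with a BERT multiple-choice head.
    # Assumes the HuggingFace Transformers library; not the exact SWAGex code.
    import torch
    from transformers import BertTokenizer, BertForMultipleChoice

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    # Note: the multiple-choice head is randomly initialized here; in practice it
    # is fine-tuned on SWAG before the scores mean anything.
    model = BertForMultipleChoice.from_pretrained("bert-base-uncased")
    model.eval()

    # Hypothetical example: one start phrase (context) and four candidate endings.
    context = "The chef places the pan on the stove."
    endings = [
        "He turns on the burner and waits for the oil to heat.",
        "He dives into the swimming pool.",
        "The pan sings a song about winter.",
        "He files his tax return at the table.",
    ]

    # Encode each (context, ending) pair; BERT scores all candidates jointly.
    enc = tokenizer([context] * len(endings), endings,
                    padding=True, return_tensors="pt")
    inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)

    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, num_choices)

    best = logits.argmax(dim=-1).item()
    print("Most plausible ending:", endings[best])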

Summary

  • Applied multiple methods based on the pre-trained language model BERT to recognize statements that go against commonsense and to justify the reasoning behind this decision.
  • Investigated whether commonsense knowledge can be transferred from SWAG to the ComVE task by training a model for the Explanation subtask on Next Event Prediction data (see the sketch after this list).
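The sketch below illustrates the transfer idea under the same assumptions as above (HuggingFace Transformers, illustrative ComVE-style data): explanation selection can be cast as the same multiple-choice problem as Next Event Prediction, so a BERT multiple-choice model fine-tuned on SWAG can serve as the starting point. The checkpoint swap mentioned in the comments is hypothetical and not the exact SWAGex training recipe.

    # Minimal sketch: ComVE-style explanation selection with the same
    # multiple-choice architecture used for SWAG. Assumes HuggingFace Transformers.
    import torch
    from transformers import BertTokenizer, BertForMultipleChoice

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    # Transfer step (hypothetical): instead of "bert-base-uncased", load a
    # checkpoint already fine-tuned on SWAG Next Event Prediction, then continue
    # fine-tuning on the ComVE Explanation data.
    model = BertForMultipleChoice.from_pretrained("bert-base-uncased")
    model.eval()

    # Illustrative example: a nonsensical statement and three candidate reasons.
    statement = "He put an elephant into the fridge."
    reasons = [
        "An elephant is much bigger than a fridge.",
        "Elephants are grey and fridges are white.",
        "Fridges are used to keep food cold.",
    ]

    enc = tokenizer([statement] * len(reasons), reasons,
                    padding=True, return_tensors="pt")
    inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)

    with torch.no_grad():
        logits = model(**inputs).logits  # meaningful only after fine-tuning

    print("Selected reason:", reasons[logits.argmax(dim=-1).item()])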

Project Results

  • Paper “SWAGex at SemEval-2020: Commonsense Explanation as Next Event Prediction” appeared in the Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020).
  • Poster presentation at COLING 2020, December 2020 (held online).

News and Updates