AAAI 2022 Doctoral Consortium Materials

I will be presenting my thesis “pre-proposal” at the AAAI 2022 Doctoral Consortium. This proposal explores how computational models that acquire their semantic knowledge from text (through pre-training) generalize to novel information. The application required three key materials: a thesis summary, a CV, and a personal statement.

Reviews

Based on the above material, I received the following reviews. Note that the review process was single-blind, i.e., my name was revealed to the reviewers, but not vice versa.

———————– REVIEW 1 ———————

SUBMISSION: 101

TITLE: On Semantic Cognition, Inductive Generalization, and Language Models

AUTHORS: Kanishka Misra

———– Summary ———–

SCORE: 1 (weak accept)

—– TEXT:

This work cuts across several fields and is likely to be of interest to a wide range of people as a result. The research questions that the student intends to pursue are clear and well-formulated.

———————– REVIEW 2 ———————

SUBMISSION: 101

TITLE: On Semantic Cognition, Inductive Generalization, and Language Models

AUTHORS: Kanishka Misra

———– Summary ———–

SCORE: 1 (weak accept)

—– TEXT:

  1. Cover sheet

Missing:

  • The Department name.
  • A list of other DC programs attended, or “none”.

  2. Thesis summary

The structure seems fine.

Abstract

  • Kanishka writes “understanding” semantic knowledge in trained LMs. I’m thinking “eliciting”, instead.
  • This proposal will “analyze”…“investigate”…and “analyze and relate”. This seems incomplete. Will some benefits result from these studies? For example, will this permit better language models to be developed?

Introduction

  • I appreciate Kanishka’s familiarity with some older work on this topic. From the Abstract, I was immediately thinking of Smith and Medin’s 1981 book, and more. I encourage him to keep digging into that body of work.
  • The dissertation objective seems clear.

Related work

  • Ok, prior work was on the what (what world knowledge is accessible in LMs), whereas his is on the how (how LMs use semantic knowledge to process and generalize novel information). And he explains that, unlike prior work, inductive reasoning (IR) makes graded distinctions among generalizations.
  • DARPA’s Machine Common Sense (MCS) program puts forward the claim that computational commonsense reasoning can be (partially) acquired by mimicking it in children (initially, from ages 0-18 months). That is, it takes the perspective of developmental psychology. Given that LMs do not, what confidence should we have that the behavior of LMs (pertaining to exhibiting common sense) will “align to that in humans”?

IR with LMs

  • Kanishka describes two capabilities that he argues LMs must demonstrate, along with an evaluation strategy. He then discusses three research questions that focus on:
    • Identifying the kinds of inductions LMs make.
    • Their ability to recognize/leverage latent features during induction.
    • Determining how their generalization capacities relate to the representational space.
  • His assessments would use approaches inspired by studies of human induction.

Finally, his research timeline seems set.

Summary: I appreciate the analyses that Kanishka will perform, but am uncertain how they may impact the study of LMs. For example, would other researchers use these to modify their designs of LMs to align more with how people (seem to) perform inductive reasoning? If so, would we expect some sort of performance improvements, or robustness in behavior? I’m uncertain.

  3. CV

This student already has a lot of experience, more than an ideal AAAI DC participant would have:

  • 11 publications
  • Has won (or honorable mention for) four awards
  • Several reviewing tasks

The AAAI DC instead seeks students who show promise but need help (e.g., because they have no or few local experts in their area, have only a few publications, or have not been able to participate in other events). Perhaps this student is too far along.

  4. Personal Statement

This covers the requested material, although the expected contribution to others seems narrow. That is, how many other students will focus on this specific topic and, thus, directly benefit from your experience? Probably none. Consider your broader experiences that you can share that would be valuable to peers.


Evaluation criteria and assessment

  • Clarity and completeness of submission packet
    • Mostly complete, but perhaps lacking in expressing how this work may/could impact the field. (If he had a more clear vision of this, it might inform how he conducts his remaining studies.)
  • Stage of research
    • Mid-way; expected graduation date is May 2023, so this seems appropriate for the AAAI-22 DC
  • Evidence of research progress (e.g., publications)
    • Substantial (11 so far)
  • Assessment of contribution to and benefit from participating in the DC
    • The benefit to participating might be below average, as this student already has 11 publications and he seems far along in his studies.
    • However, this is his first DC, and no prior publication appears to have been published at AAAI.
  • Advisor’s input
    • None was provided to me

———————– REVIEW 3 ———————

SUBMISSION: 101

TITLE: On Semantic Cognition, Inductive Generalization, and Language Models

AUTHORS: Kanishka Misra

———– Summary ———–

SCORE: 2 (accept)

—– TEXT:

This student’s thesis is on the way in which language models (LMs) encode semantic information, and in particular whether judgments such as generalization and typicality are present in pre-trained LMs. The specific topic of the DC proposal is the boundaries of generalization, that is, if an LM is fine-tuned with additional nonsense knowledge (e.g., “canaries can fip”), to what other concepts (e.g., robins, giraffes, planes) that knowledge might generalize.

Given the recent rise of LMs as KBs, both in NLP settings and elsewhere, this work explores the degree to which concepts commonly found in KBs are truly represented in LMs. It draws on concepts and techniques/tests from cognitive science and may interest researchers in those areas, and it also seeks to evaluate and understand where LMs might fail to capture desirable knowledge. This is an important question both to users of LMs and to a broad AI audience, and this work will likely be a contribution to the doctoral consortium and AAAI as a whole.
