Kanishka Misra

PhD Candidate

AKRaNLU Lab

Purdue University

I am a PhD candidate at Purdue University, where I work on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. I am particularly interested in characterizing the semantic knowledge made available to computational models that only learn from textual exposure. I work closely with Dr. Allyson Ettinger and her lab at UChicago. I am also affiliated with CERIAS, Purdue’s center for research and education in areas of information security.

I was recently selected to be a Graduate Student Fellow in the inaugural Purdue Graduate School Mentoring Fellows program!

I am the author of minicons, a Python package that facilitates large-scale behavioral analyses of transformer language models.
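
For a flavor of what the package enables, here is a minimal sketch that scores sentences with its scorer module, following the package's documented usage; the model choice and example sentences are illustrative:

    # Minimal sketch: sentence scoring with minicons.
    # The model ('gpt2') and sentences here are illustrative choices.
    from minicons import scorer

    # Wrap an autoregressive LM for scoring; runs on CPU here.
    lm = scorer.IncrementalLMScorer('gpt2', 'cpu')

    sentences = [
        "A robin is a bird.",
        "A penguin is a bird.",
    ]

    # sequence_score returns one score per input sentence; by default
    # this is the mean token log-probability under the model.
    for sentence, score in zip(sentences, lm.sequence_score(sentences)):
        print(f"{sentence}\t{score:.3f}")

Higher scores indicate sentences the model finds more probable, which is the basic quantity behind the behavioral analyses mentioned above.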

My email is kmisra @ purdue [dot] edu.

Interests

  • Inductive Reasoning
  • Concepts and Categories
  • Language Understanding
  • Lexical Semantics
  • Typicality and Vagueness

Collaborators

  • Julia Rayz, Purdue (Advisor)
  • Allyson Ettinger, UChicago
  • Chih-chan Tien, UChicago
  • Mourad Heddaya, UChicago
  • Geetanjali Bihani, Purdue

Recent News

  • May 2022: Paper describing a paradigm to perform property induction with language models accepted at CogSci 2022! Check it out here!

  • May 2022: I will be interning at Google Research this fall! Feeling very fortunate!

  • March 2022: New preprint describing my library minicons is out on arXiv. Check it out here!

  • February 2022: Presented my dissertation proposal at the AAAI 2022 Doctoral Consortium! Check out the corresponding poster here!

  • February 2022: Passed my PhD proposal!

  • January 2022: Happy to be selected as a Graduate Student Fellow in the inaugural Purdue Graduate School Mentoring Fellows program!

Recent Publications

A Property Induction Framework for Neural Language Models

Investigating how neural language models generalize novel information about concepts and properties. To be presented at CogSci 2022.

minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models

A Python library that facilitates behavioral and representational analyses of transformer language models.

On Semantic Cognition, Inductive Generalization, and Language Models

Thesis proposal to study inductive generalization in language models.

Do language models learn typicality judgments from text?

Investigating the manifestation of category typicality effects in predictive models of language processing. Presented at CogSci 2021.

Recent Posts

Introducing minicons: Running large-scale behavioral analyses on transformer language models

In this post, I showcase my new Python library that implements simple computations to facilitate large-scale evaluation of transformer language models.

Contact