I am a PhD candidate at Purdue University, where I work on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. I also work closely with Dr. Allyson Ettinger and her lab at UChicago.

My research focuses on evaluating and analyzing large language models from the perspective of human semantic cognition, investigating capacities such as their ability to encode typicality effects, recall property knowledge, demonstrate property inheritance, and perform human-like category-based induction. Through my work, I hope to help bridge the experimental paradigms used to study human cognition with those used to study artificial intelligence systems.

I spent Fall 2022 as a Research Intern at Google AI working on multi-hop reasoning and language models!

I was recently selected to be a Graduate Student Fellow in the inaugural Purdue Graduate School Mentoring Fellows program!

I am the author of minicons, a Python package that facilitates large-scale behavioral analyses of transformer language models.
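
As a quick illustration of the kind of analysis minicons supports, here is a minimal sketch that scores a conceptual minimal pair with GPT-2. It assumes the package's scorer.IncrementalLMScorer interface and its default sequence-score reduction; treat it as illustrative and consult the minicons documentation for the exact API.

```python
# Minimal sketch: scoring a conceptual minimal pair with minicons.
# Assumes the scorer.IncrementalLMScorer interface and its default
# (mean log-probability) sequence reduction; see the minicons docs
# for the authoritative API.
from minicons import scorer

# Load GPT-2 as an incremental (left-to-right) language model scorer.
lm = scorer.IncrementalLMScorer("gpt2", "cpu")

# Score a minimal pair: a higher (less negative) score suggests the
# model finds that sentence more acceptable.
pair = ["A robin can fly.", "A penguin can fly."]
for sentence, score in zip(pair, lm.sequence_score(pair)):
    print(f"{score:.3f}\t{sentence}")
```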

I recently hosted a two-part discussion group on Neural Nets for Cognition @ CogSci 2022!

My email is kmisra [at] purdue [dot] edu.

Recent News

  • May 2023: COMPS received the best paper award at EACL – congratulations to my co-authors, Julia and Allyson!!

  • May 2023: Paper from my Google internship accepted to Findings of ACL 2023, paper with collaborators on context effects in minimal-pair testing accepted at ACL 2023, and paper with collaborators from Google Brain/DeepMind accepted at ICML 2023! Big congrats to all my collaborators!

  • April 2023: Presented a talk at MIT about my PhD research! The Boston-Cambridge area has great people!

  • January 2023: COMPS was accepted at EACL (main conference)!

  • January 2023: New paper with collaborators from Google Brain on LLMs and their distractibility when demonstrating mathematical reasoning behavior, now out on arXiv!

  • December 2022: New paper with collaborators characterizing aspects of context and their effects on language models' minimal pair judgments, now out on arXiv!

Recent Publications

COMPS: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models

Dataset and analyses to test conceptual knowledge in LLMs. To be presented at EACL 2023.

A Property Induction Framework for Neural Language Models

Investigating how neural language models generalize novel information about everyday concepts and their properties. Presented at CogSci 2022.

minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models

A Python library that facilitates behavioral and representational analyses of transformer language models.

On Semantic Cognition, Inductive Generalization, and Language Models

Thesis proposal to study inductive generalization in language models. Presented at the AAAI 2022 Doctoral Consortium.

Do language models learn typicality judgments from text?

Investigating the manifestation of category typicality effects in predictive models of language processing. Presented at CogSci 2021.

Recent Posts

Introducing $\texttt{minicons}$: Running large-scale behavioral analyses on transformer language models

In this post, I showcase my new Python library that implements simple computations to facilitate large-scale evaluation of transformer language models.

Contact

  • Heavilon Hall, Room 108, Purdue University
  • TBA
  • Tweet at me