Kanishka Misra

PhD Student

Purdue University

I am a PhD candidate at Purdue University, where I work on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. I am particularly interested in characterizing the semantic knowledge available to computational models that learn only from textual exposure. I work closely with Dr. Allyson Ettinger and her lab at UChicago. I am also affiliated with CERIAS, Purdue’s center for research and education in information security.

In 2018, I was fortunate to be awarded the Purdue Research Foundation Fellowship (now known as the Ross-Lynn Graduate Student Fellowship). I then taught database fundamentals to sophomore-level undergraduates for three semesters. I am currently funded by an NSF EAGER grant focused on using artificial intelligence techniques to develop entertainment-education materials for social-engineering research.

My email is [my-first-name] @ purdue [dot] edu. [Why is it like that?]

Interests

  • Inductive Reasoning
  • Language Understanding
  • Lexical Semantics
  • Categorization
  • Typicality and Vagueness

Collaborators

  • Julia Rayz, Purdue (Advisor)
  • Allyson Ettinger, UChicago
  • Geetanjali Bihani, Purdue
  • Abhilasha Kumar, Indiana
  • Hemanth Devarapalli, Purdue
  • Tatiana Ringenberg, Indiana

Education

  • PhD, Natural Language Understanding, Current

    Purdue University

  • MS in Computer Information Technology, 2020

    Purdue University

  • BS in Computer Information Technology, 2018

    Purdue University

Recent News

  • November 2021: I gave a talk to Allyson Ettinger’s research group at UChicago. Here are my slides.

  • October 2021: My thesis proposal was accepted to the AAAI 2022 Doctoral Consortium! My application materials can be found here!

  • October 2021: Passed my prelim examination!

  • September 2021: Submitted a (tentative) thesis summary to the AAAI 2022 Doctoral Consortium!

  • July 2021: Presenting my paper on whether language models learn typicality at CogSci 2021!

  • May 2021: Our NAFIPS 2021 paper received an honorable mention for the Best Student Paper award.

Recent Publications

On Semantic Cognition, Inductive Generalization, and Language Models

Thesis proposal to study inductive generalization in language models.

Do language models learn typicality judgments from text?

Investigating the manifestation of category typicality effects in predictive models of language processing. Presented at CogSci 2021.

Finding fuzziness in Neural Network Models of Language Processing (Forthcoming)

Probing an NLI model for its handling of fuzzy concepts such as temperature. To be presented at NAFIPS 2021.

Exploring BERT’s Sensitivity to Lexical Cues using Tests from Semantic Priming

Using semantic priming to investigate how BERT utilizes lexical relations to inform word probabilities in context. Presented at BlackboxNLP 2021.
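
As a rough illustration of the kind of measurement involved, here is a minimal sketch of querying BERT’s masked-language-modeling head for a target word’s probability in context, via the Hugging Face transformers library. The `word_probability` helper and the example prime sentences are hypothetical illustrations, not the paper’s materials or code.

```python
# Minimal sketch (not the paper's code): query BERT's MLM head for the
# probability of a target word in a masked context.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def word_probability(context: str, target: str) -> float:
    """P(target | context), where context contains exactly one [MASK]."""
    inputs = tokenizer(context, return_tensors="pt")
    # Locate the single [MASK] position in the tokenized input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits
    # Normalize the logits at the mask position into a vocabulary distribution.
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(target)].item()

# Hypothetical priming contrast: does a related prime ("doctor") raise the
# probability of the target ("nurse") relative to an unrelated prime?
print(word_probability("The doctor spoke to the [MASK].", "nurse"))
print(word_probability("The artist spoke to the [MASK].", "nurse"))
```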

Recent Posts

Introducing minicons: Running large-scale behavioral analyses on transformer language models

In this post, I showcase my new Python library that implements simple computations to facilitate large-scale evaluation of transformer language models.
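
The sketch below follows the scorer interface shown in the minicons README: wrap an autoregressive language model in a scorer and score a batch of sentences by their token log probabilities. The model choice, stimuli, and reduction function are illustrative, and exact method names may differ across minicons versions.

```python
# Minimal sketch of minicons' scorer interface (per the library's README;
# details may vary across versions).
from minicons import scorer

# Wrap an autoregressive language model (distilgpt2, on CPU) in a scorer.
lm = scorer.IncrementalLMScorer("distilgpt2", "cpu")

stimuli = [
    "A robin is a bird.",
    "A penguin is a bird.",
]

# sequence_score returns one score per sentence; the reduction collapses
# per-token log probabilities (here, into their mean).
scores = lm.sequence_score(stimuli, reduction=lambda x: x.mean(0).item())
for sentence, score in zip(stimuli, scores):
    print(f"{score:.3f}\t{sentence}")
```

Batching sentences this way is what makes such analyses scale: the same call can score thousands of stimuli, e.g., for typicality-style comparisons like the two above.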

Contact