I am a PhD candidate at Purdue University, where I work on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. I also work closely with Dr. Allyson Ettinger and her lab at UChicago. I am currently on the job market for a post-doctoral position! Please reach out if you think I am a good fit for your lab!
My research focuses on evaluating and analyzing large language models from the perspective of human semantic cognition, investigating capacities such as their ability to encode typicality effects, recall property knowledge, demonstrate property inheritance, and perform human-like category-based induction. Through my work, I hope to help bridge the experimental paradigms used in the study of human cognition with those used to study artificial intelligence systems.
I am currently a Research Intern at Google AI working on multi-hop reasoning and language models!
I was recently selected as a Graduate Student Fellow in the inaugural Purdue Graduate School Mentoring Fellows program!
I am the author of minicons, a Python package that facilitates large-scale behavioral analyses of transformer language models.
I recently hosted a two-part discussion group on Neural Nets for Cognition @ CogSci 2022!
My email is kmisra [at] purdue [dot] edu.
November 2022: Presented work at joint meeting of CPL Lab, Ev Lab, and Language and Intelligence Lab at MIT!
September 2022: Presented work at the Human and Machine Learning Lab at NYU CDS!
September 2022: Presented work at Computation and Psycholinguistics Lab at NYU CDS!
August 2022: Started internship at Google Research NYC – wonderful city!
August 2022: Successfully hosted discussion group on Neural Nets for Cognition at CogSci! Thanks to everyone who could make it, and my co-organizers: Jay McClelland, Judy Fan, and Felix Binder!
May 2022: Paper describing a paradigm to perform property induction with language models accepted at CogSci 2022! Check it out here!