I am a PhD candidate at Purdue University, where I work on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. I am particularly interested in characterizing the semantic knowledge made available to computational models that only learn from textual exposure. I work closely with Dr. Allyson Ettinger and her lab at UChicago. I am also affiliated with CERIAS, Purdue’s center for research and education in areas of information security.
I was recently selected to be a Graduate Student Fellow in the inaugural Purdue Graduate School Mentoring Fellows program!
I am the author of minicons, a Python package that facilitates large-scale behavioral analyses of transformer language models.
My email is kmisra @ purdue [dot] edu (written this way to deter spam crawlers).
Aug 2022: Successfully hosted discussion group on Neural Nets for Cognition at CogSci! Thanks to everyone who could make it, and my co-organizers: Jay McClelland, Judy Fan, and Felix Binder!
May 2022: Paper describing a paradigm to perform property induction with language models accepted at CogSci 2022! Check it out here!
May 2022: I will be interning at Google Research this fall! Feeling very fortunate!
March 2022: New preprint describing my library minicons is out on arXiv. Check it out here!
February 2022: Presented my dissertation proposal at the AAAI 2022 Doctoral Consortium. Check out the corresponding poster here!
February 2022: Passed my PhD proposal!