Jalend Bantupalli

I'm a Member of Technical Staff at MyoLab, working on LLM-based reasoning systems. I completed my Master's in Computer Science at UC San Diego, where I focused on LLM Reasoning and Program Synthesis. Previously, I worked at Google and Microsoft on applied AI systems. At Google, I worked on Gemini Networking, and at Microsoft, I contributed to AI-powered Dynamics 365. I earned my B.Tech from IIT Kharagpur, where I was advised by Professor Animesh Mukherjee on fairness in NLP, culminating in our paper on decoding demographic bias.

Email / CV / LinkedIn / Scholar / Twitter / Github

What I'm Working On
Research

My research focuses on Large Language Model (LLM) reasoning, program synthesis, and multimodal understanding. I'm particularly interested in enabling LLMs to perform structured, step-by-step reasoning in complex environments where inputs span natural language, formal representations (e.g., code, logic), and physical modalities such as motion or visual context. This includes developing models that solve problems by abstracting patterns, composing subroutines, and leveraging prior examples, rather than relying on brute-force memorization. I aim to design systems that are not only general-purpose and data-efficient, but also interpretable and aligned with human-like reasoning.

At MyoLab, I work on developing LLM-based agents that interact with embodied data (e.g., body movements, sensor inputs) to solve complex reasoning tasks. My broader goal is to create learning frameworks that combine LLMs with reinforcement learning and vision to enable grounded, goal-directed intelligence.

I recently co-authored ImplexConv, a large-scale dataset and retrieval framework for implicit reasoning in personalized multi-session conversations. Our proposed method, TaciTree, introduces hierarchical multi-level summarization to support efficient long-context reasoning. (Preprint on arXiv)

Publications
Media & Mentions