Milind Tambe
Abstract: For nearly two decades, my team’s work on AI for social impact (AI4SI) has focused on optimizing limited resources in public health, conservation, public safety, and other critical areas. I will highlight recent results from our deployed work in India on using bandit algorithms to improve the effectiveness of the world’s two largest mobile health programs for maternal and child care, which have served millions of beneficiaries. Additionally, I will briefly discuss our previous work on influence maximization for HIV prevention among youth experiencing homelessness in Los Angeles. Deploying an end-to-end AI4SI pipeline requires us to repeat three steps: understanding stakeholders’ resource allocation challenges, building a tailored model, and testing in the field. I’ll share initial results on how we can leverage foundation models and LLMs to dramatically accelerate this AI4SI process.
Bio: Milind Tambe is the Gordon McKay Professor of Computer Science and Director of the Center for Research on Computation and Society at Harvard University; concurrently, he is also Principal Scientist and Director for "AI for Social Good" at Google DeepMind. Prof. Tambe and his team have developed pioneering AI systems that deliver real-world impact in public health (e.g., maternal and child health), public safety, and wildlife conservation. He is a recipient of the AAAI Award for Artificial Intelligence for the Benefit of Humanity, the AAAI Feigenbaum Prize, the IJCAI John McCarthy Award, the AAAI Robert S. Engelmore Memorial Lecture Award, the AAMAS ACM Autonomous Agents Research Award, the INFORMS Wagner Prize for excellence in Operations Research practice, the Military Operations Research Society Rist Prize, and the Columbus Fellowship Foundation Homeland Security Award, as well as commendations and certificates of appreciation from the US Coast Guard, the Federal Air Marshals Service, and the airport police of the city of Los Angeles. He is a fellow of AAAI and ACM.
Dinesh Jayaraman
Abstract: Industry is placing big bets on "brute forcing" robotic control, but such approaches are profligate in their use of expensive resources in robotics: power, compute, time, data, etc. Good engineering principles would hold that we should aim to develop more minimalist robotic control stacks, which requires understanding the tradeoffs between task performance and resource usage. My research group has been "exploiting and exploring" robot learning: "exploiting" to push the limits of what can be achieved with today’s prevalent principles, and "exploring" by asking foundational questions towards building better design principles for efficient and minimalist robots in the future. As examples of "exploit", we have trained quadruped robots to perform circus tricks on yoga balls and robot arms to perform household tasks in entirely unseen scenes with unseen objects. As examples of "explore", we are studying the sensory requirements of robot learners: what sensors do they need, and when during training and task execution do they need them? In this talk, I will highlight these examples and discuss some lessons we have learned in our research towards better-engineered robot learners.
Bio: Dinesh Jayaraman is an assistant professor in the University of Pennsylvania’s CIS department. Before this, he was a visiting research scientist at Facebook AI Research, Menlo Park, and a postdoctoral scholar at UC Berkeley. He received his PhD from UT Austin (2017) and his Bachelor’s degree from IIT Madras (2011). Dinesh’s research focuses on questions at the intersection of perception, learning, and robotic control, such as: how might perception (such as from high-resolution optical or tactile sensors) benefit from the ability to act in the world, and vice versa? And how can effective visual control algorithms that exploit these perception-action cycles help bring general-purpose, affordable robots into our homes and workplaces? Towards answering these questions, he studies a broad range of topics, from predictive models for model-based RL and planning, to self-supervised visual representation learning, active perception, visuo-tactile robotic manipulation, causal inference, visual servoing, semantic visual attributes, and zero-shot categorization.