Data drives quicker, safer decisions for race cars and robots
Naomi Ehrich Leonard ’85, the Edwin S. Wilsey Professor of Mechanical and Aerospace Engineering and chair of the department, has long used mathematical models to understand the remarkably fast, robust, and adaptable group decision-making in natural systems like swarms of honeybees and flocks of birds. She has applied these models in settings including a group of robots exploring the ocean depths.
Now, Leonard has teamed up with Jaime Fernández Fisac, an expert in safe robotics and human-centered autonomy, to improve safety and performance in racing. Their methods combine the mathematics of group dynamics with techniques from machine learning and control systems, and use data from real and simulated races. The goal is to improve coordination on multiple levels — between each car’s AI and its driver, and among all the cars on the track.
With support from the Toyota Research Institute, the team is building neural networks — multilayered algorithms inspired by the organization of neurons in the brain — that will use training data to tune opinion dynamics models that ensure split-second breaking of costly decision-making deadlocks.
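Leonard’s group has described nonlinear opinion-dynamics models in which raising an “attention” or urgency parameter pushes a decision-maker past a tipping point, so a tied, deadlocked state becomes unstable and the agent commits quickly to one option. The sketch below is a minimal, single-agent, two-option illustration of that general idea, not the team’s racing system; the parameter names and values are assumptions chosen only to show how increasing attention breaks a deadlock.

```python
import numpy as np

def opinion_update(z, u, d=1.0, alpha=1.2, b=0.0, dt=0.01):
    """One Euler step of a minimal nonlinear opinion-dynamics model.

    z: opinion on a two-option choice (z > 0 favors option A, z < 0 option B,
       z near 0 is a deadlock). u: attention/urgency gain. d: damping.
    alpha: self-reinforcement gain. b: bias (e.g., from sensor evidence).
    All names and values are illustrative assumptions, not the researchers'
    actual parameters.
    """
    dz = -d * z + u * np.tanh(alpha * z + b)
    return z + dt * dz

def simulate(u, steps=2000, z0=1e-3):
    """Integrate from a near-deadlock start and return the final opinion."""
    z = z0
    for _ in range(steps):
        z = opinion_update(z, u)
    return z

# Low attention: the deadlock (z = 0) stays stable and the agent remains undecided.
# High attention: the deadlock loses stability and the agent commits rapidly.
print(f"u = 0.5 -> final opinion {simulate(0.5):+.3f}")   # stays near 0
print(f"u = 2.0 -> final opinion {simulate(2.0):+.3f}")   # commits to an option
```

In this toy version, the neural networks the team describes would play the role of choosing gains like the attention parameter from race data, so that commitment happens exactly when hesitation would be costly.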
“We want to make sure that the AI can convey its situational awareness quickly and clearly,” said Fisac, an assistant professor of electrical and computer engineering. “Transparency is critical when you deal with increasingly sophisticated driver assistance systems,” like current systems that warn of lane departures or vehicles in the driver’s blind spot, or even nudge the steering or tap the brakes to avoid a potential accident.
Fisac and Leonard seek to develop onboard AI systems that can rapidly infer and adopt drivers’ shifting priorities during a race, as well as warn them about hazards or directly intervene in the vehicle’s control, all without unduly distracting or, even worse, startling the human behind the wheel.
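A common way for an assistance system to track a driver’s shifting priorities, in the human-robot interaction literature broadly rather than in this project specifically, is to keep a running belief over a few hypothesized driving modes and update it with each observed maneuver. The modes, maneuvers, and likelihood values below are invented for illustration.

```python
# Illustrative Bayesian inference over hypothesized driving modes.
# Modes, actions, and likelihoods are made-up demonstration values,
# not drawn from the Princeton/Toyota Research Institute project.

MODES = ["attack", "defend", "conserve"]

# P(observed maneuver | mode): how likely each maneuver is under each mode.
LIKELIHOOD = {
    "late_brake_overtake": {"attack": 0.70, "defend": 0.20, "conserve": 0.05},
    "hold_inside_line":    {"attack": 0.20, "defend": 0.65, "conserve": 0.30},
    "lift_and_coast":      {"attack": 0.10, "defend": 0.15, "conserve": 0.65},
}

def update_belief(belief, action):
    """One Bayes update: posterior(mode) is proportional to
    P(action | mode) * prior(mode), then normalized."""
    posterior = {m: LIKELIHOOD[action][m] * belief[m] for m in MODES}
    total = sum(posterior.values())
    return {m: p / total for m, p in posterior.items()}

# Start undecided, then observe two aggressive maneuvers in a row.
belief = {m: 1.0 / len(MODES) for m in MODES}
for action in ["late_brake_overtake", "late_brake_overtake"]:
    belief = update_belief(belief, action)

print({m: round(p, 2) for m, p in belief.items()})
# The belief shifts sharply toward "attack", the kind of signal an onboard
# system could use to decide how and when to warn or assist the driver.
```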
Racing is “uniquely dynamic and safety-critical,” Fisac said. “You’re really pushing the car to the limits of its operating envelope, and a single misstep can have catastrophic consequences.”