
AgeLab researching autonomous vehicle systems in ongoing collaboration with Toyota

Innovative MIT research focuses on developing systems to perceive and identify objects in their environment and understand social interactions in traffic.
Image: The MIT AgeLab and Toyota CSRC set up a camera system at a busy Cambridge intersection to study minute details of pedestrian movements. (Photo: MIT AgeLab)

Image: MIT AgeLab engineers test a program on driver state detection (facial movements, head position, emotion, drowsiness, attentiveness, and body language) to improve automated vehicle systems. (Photo: MIT AgeLab)

The MIT AgeLab will build and analyze new deep-learning-based perception and motion planning technologies for automated vehicles in partnership with the Toyota Collaborative Safety Research Center (CSRC). The new research initiative, called CSRC Next, is part of an ongoing relationship with Toyota that is now in its fifth year.

The first phase of projects with Toyota CSRC has been led by Bryan Reimer, a research scientist at the MIT AgeLab, which is part of the MIT Center for Transportation and Logistics. Reimer manages a multidisciplinary team of researchers and students focused on understanding how drivers respond to the increasing complexity of the modern operating environment. He and his team studied the demands of modern in-vehicle voice interfaces and found that they draw drivers’ eyes away from the road to a greater degree than expected, and that these demands need to be considered in the time-course optimization of such systems. Reimer’s study eventually contributed to the redesign of the instrumentation in the current Toyota Corolla and the forthcoming 2018 Toyota Camry. (Read more in the 2017 Toyota CSRC report.)

Reimer and his team are also building and developing prototypes of hardware and software systems that can be integrated into cars to sense the state of the driver and the external environment. These prototypes are designed to work across levels of automation, from cars with minimal autonomy to those that are fully autonomous.

Computer scientist and team member Lex Fridman is leading a group of seven computer engineers working on computer vision, deep learning, and planning algorithms for semi-autonomous vehicles. Deep learning is used to understand both the world around the car and human behavior inside it.

“The vehicle must first gain awareness of all entities in the driving scene, including pedestrians, cyclists, cars, traffic signals, and road markings,” Fridman says. “We use a learning-based approach for this perception task and also for the subsequent task of planning a safe trajectory around those entities.”
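To make the perception task concrete, here is a minimal sketch of learning-based detection of driving-scene entities, using an off-the-shelf pretrained detector from torchvision. The model choice, class mapping, and score threshold are illustrative assumptions, not details of the AgeLab's actual pipeline.

```python
# A minimal sketch of learning-based perception for a driving scene, using an
# off-the-shelf pretrained Faster R-CNN detector from torchvision. The model
# choice, class mapping, and score threshold are illustrative assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO category IDs relevant to a driving scene (person=1, bicycle=2, car=3,
# traffic light=10), a simplification of the full set of entities.
DRIVING_CLASSES = {1: "pedestrian", 2: "cyclist", 3: "car", 10: "traffic light"}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_entities(image_path, score_threshold=0.7):
    """Return (label, score, box) tuples for driving-relevant detections."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]  # dict with "boxes", "labels", "scores"
    return [
        (DRIVING_CLASSES[label.item()], score.item(), box.tolist())
        for label, score, box in zip(output["labels"], output["scores"], output["boxes"])
        if score >= score_threshold and label.item() in DRIVING_CLASSES
    ]
```

A planner would then consume these detections as obstacles when searching for a safe trajectory, the second task Fridman describes.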

Fridman and his team, now firmly entrenched in the next phase of the project with Toyota CSRC, set up a stationary camera at a busy intersection on the MIT campus to automatically detect the micro-movements of pedestrians as they make decisions about crossing the street. Using deep learning and computer vision methods, the system automatically converts the raw video footage into millisecond-level estimations of each pedestrian’s body position. The program has analyzed the head, arm, foot, and full-body movements of more than 100,000 pedestrians.
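As an illustration of how raw footage can be turned into millisecond-level body-position estimates, the sketch below decodes video with OpenCV and runs MediaPipe Pose on each frame. MediaPipe Pose tracks a single person, so the sketch assumes each frame is already cropped to one pedestrian (for example, by a detector like the one above); it is a stand-in, not the AgeLab's system.

```python
# An illustrative stand-in for the pedestrian pipeline described above: decode
# video with OpenCV and estimate body position per frame with MediaPipe Pose.
# MediaPipe Pose tracks one person, so each frame is assumed to be a crop
# around a single pedestrian (e.g., supplied by a detector like the one above).
import cv2
import mediapipe as mp

def estimate_body_positions(video_path):
    """Yield (timestamp_ms, landmarks) for each frame with a detected person."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS)
    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        frame_index = 0
        while True:
            ok, frame_bgr = capture.read()
            if not ok:
                break
            results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                # 33 landmarks (head, shoulders, arms, hips, feet), each with
                # normalized x/y image coordinates and a visibility score.
                landmarks = [(lm.x, lm.y, lm.visibility)
                             for lm in results.pose_landmarks.landmark]
                # Millisecond timestamp derived from the frame rate.
                yield 1000.0 * frame_index / fps, landmarks
            frame_index += 1
    capture.release()
```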

Fridman’s research also focuses on the world inside the car. 

“Just as interesting and complex is the integration of data inside the car to improve our understanding of automated systems and enhance their capability to support the driver,” he says. “This includes everything about the driver’s face, head position, emotion, drowsiness, attentiveness, and body language.” 

With Toyota and other partners, the team is exploring the use of cameras positioned to monitor the driver, as well as methods to extract all of those driver-state factors from the raw video and turn them into usable data that can support future automotive industry needs.
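One concrete example of a driver-state factor that can be pulled from raw video is drowsiness, often approximated with the classic eye-aspect-ratio (EAR) heuristic of Soukupová and Čech (2016). The sketch below computes EAR from MediaPipe Face Mesh landmarks; the landmark indices and the 0.2 blink threshold are common tutorial values and are assumptions here, not details of the AgeLab/Toyota work.

```python
# A sketch of one driver-state signal: eye closure via the eye-aspect-ratio
# (EAR) heuristic of Soukupova and Cech (2016), computed from MediaPipe Face
# Mesh landmarks. The landmark indices and the 0.2 threshold are common
# tutorial values, assumed here rather than taken from the AgeLab system.
import math
import cv2
import mediapipe as mp

# Six right-eye landmarks (corners plus upper and lower lids) in MediaPipe's
# 468-point face mesh; this index choice is an assumption.
RIGHT_EYE = [33, 160, 158, 133, 153, 144]

def eye_aspect_ratio(p):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); low values mean closed eyes."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

def drowsiness_stream(video_path, ear_threshold=0.2):
    """Yield (timestamp_ms, ear, eyes_closed) per frame with a detected face."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS)
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        frame_index = 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            results = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_face_landmarks:
                lms = results.multi_face_landmarks[0].landmark
                ear = eye_aspect_ratio([(lms[i].x, lms[i].y) for i in RIGHT_EYE])
                yield 1000.0 * frame_index / fps, ear, ear < ear_threshold
            frame_index += 1
    capture.release()
```

In practice a single low-EAR frame is just a blink; sustained low values over many consecutive frames are what would feed a drowsiness estimate alongside head position and the other factors listed above.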

“What’s innovative about Lex’s work is that it uses state-of-the-art methods in computer science and artificial intelligence to study the complexities of human intent grounded in large-scale real-world data,” Reimer says.

Toyota CSRC Director Chuck Gulash says the research “leverages the AgeLab’s expertise in computer vision, state detection, naturalistic data collection and deep learning to focus on the challenges and opportunities of autonomous vehicle technologies.”

When asked how the research collaboration would affect the future of automotive technology, Gulash says it will “contribute to better computer-based perception of a vehicle’s environment as well as social interactions with other road users.”

“What is unique about the AgeLab’s work is that it brings together advanced computer science with a human-centered perspective on driver behavior,” he says. “As with all CSRC projects, output from the AgeLab’s effort will be openly shared with industry, academia, and government to contribute to future safe mobility.”

MIT AgeLab Director Joe Coughlin says the AgeLab “is using all of these technologies to do two things: understand human behavior in the driving context, and to design future systems that result in greater safety and expansion of mobility options for all ages.”
