
3Q: David Mindell on his vision for human-centered robotics

Engineer and historian discusses how the MIT Schwarzman College of Computing might integrate technical and humanistic research and education.
"As an engineer and historian, I’ve been 'bilingual' my entire career,” says David Mindell, the Dibner Professor of the History of Engineering and Manufacturing; professor of aeronautics and astronautics; and co-founder and CEO of Humatics Corporation. “Dual competence is a good model for undergraduates at MIT as well."
Caption:
"As an engineer and historian, I’ve been 'bilingual' my entire career,” says David Mindell, the Dibner Professor of the History of Engineering and Manufacturing; professor of aeronautics and astronautics; and co-founder and CEO of Humatics Corporation. “Dual competence is a good model for undergraduates at MIT as well."
Credits:
Photo: Len Rosenstein
"Decades of experience have taught us that to function in the human world, autonomy must be connected, relational, and situated," says David Mindell. "Human-centered autonomy in automobiles must be more than a fancy FitBit on a driver; it must factor into the fundamental design of the systems: What do we wish to control? Who owns our data? How are our systems trained?"
Caption:
"Decades of experience have taught us that to function in the human world, autonomy must be connected, relational, and situated," says David Mindell. "Human-centered autonomy in automobiles must be more than a fancy FitBit on a driver; it must factor into the fundamental design of the systems: What do we wish to control? Who owns our data? How are our systems trained?"

Credits:
Photo: Humatics

David Mindell, Frances and David Dibner Professor of the History of Engineering and Manufacturing in the School of Humanities, Arts, and Social Sciences and professor of aeronautics and astronautics, researches the intersections of human behavior, technological innovation, and automation. Mindell is the author of five acclaimed books, most recently "Our Robots, Ourselves: Robotics and the Myths of Autonomy" (Viking, 2015), and the co-founder of Humatics Corporation, which develops technologies for human-centered automation. SHASS Communications spoke with Mindell recently about how his vision for human-centered robotics is developing and his thoughts on the new MIT Stephen A. Schwarzman College of Computing, which aims to integrate technical and humanistic research and education.
 
Q: Interdisciplinary programs have proved challenging to sustain, given the differing methodologies and vocabularies of the fields being brought together. How might the MIT Schwarzman College of Computing design the curriculum to educate "bilinguals" — students who are adept in both advanced computation and one or more of the humanities, arts, and social science fields?
 
A: Some technology leaders today are naive and uneducated in humanistic and social thinking. They still think that technology evolves on its own and “impacts” society, instead of understanding technology as a human and cultural expression, as part of society.

As a historian and an engineer, and MIT’s only faculty member with a dual appointment in engineering and the humanities, I’ve been “bilingual” my entire career (long before we began using that term for fluency in both humanities and technology fields). My education started with firm grounding in two fields — electrical engineering and history — that I continue to study.

Dual competence is a good model for undergraduates at MIT today as well. Pick two: not necessarily the two that I chose, but any two disciplines that capture the core of technology and the core of the humanities. Disciplines at the undergraduate level provide structure, conventions, and professional identity (although my appointment is in Aero/Astro, I still identify as an electrical engineer). I prefer the term “dual disciplinary” to “interdisciplinary.” 

The College of Computing curriculum should focus on fundamentals, not just engineering plus some dabbling in social implications.

It sends the wrong message to students to say that “the technical stuff is core, and then we need to add this wrapper of humanities and social sciences around the engineering.” Rather, we need to say: “master two fundamental ways of thinking about the world, one technical and one humanistic or social.” Sometimes these two modes will be at odds with each other, which raises critical questions. Other times they will be synergistic and energizing. For example, my historical work on the Apollo guidance computer inspired a great deal of my current engineering work on precision navigation.

Q: In naming the company you founded Humatics, you’ve combined “human” and “robotics,” highlighting the synergy between human beings and our advanced technologies. What projects underway at Humatics define and demonstrate how you envision people working collaboratively with machines? 

A: Humatics builds on the synthesis that has defined my career — the name is the first four letters of “human” and the last four letters of “robotics.” Our mission is to build technologies that weave robotics into the human world, rather than shape human behavior to the limitations of the robots. We do very technical stuff: We build our own radar chips, our own signal processing algorithms, our own AI-based navigation systems. But we also craft our technologies to be human-centered, to give users and workers information that enables them to make their own decisions and work more safely and efficiently.

We’re currently working to incorporate our ultra-wideband navigation systems into subway and mass transit systems. Humatics' technologies will enable modern signaling systems to be installed more quickly and less expensively. It's gritty, dirty work down in the tunnels, but it is a “smart city” application that can improve the daily lives of millions of people. By enabling the trains to navigate themselves with centimeter-precision, we enable greater rush-hour throughput, fewer interruptions, even improved access for people with disabilities, at a minimal cost compared to laying new track.
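[For readers curious about the mechanics: centimeter-scale ultra-wideband positioning typically rests on multilateration, solving for a position from range measurements to fixed anchors. The sketch below is a minimal Python illustration of that general idea, not Humatics' actual implementation; the anchor layout, range values, and function name are invented for the example.]

    import numpy as np

    def trilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
        """Least-squares 2-D position fix from ranges to >= 3 known anchors.

        Subtracting the first anchor's squared-range equation from the others
        cancels the quadratic terms, linearizing the system into A @ x = b.
        """
        a0, r0 = anchors[0], ranges[0]
        A = 2.0 * (anchors[1:] - a0)
        b = (r0**2 - ranges[1:]**2
             + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Hypothetical tunnel geometry: three wall-mounted anchors (meters) and
    # noise-free time-of-flight ranges to a tag mounted on the train.
    anchors = np.array([[0.0, 0.0], [25.0, 0.0], [12.5, 4.0]])
    true_pos = np.array([10.0, 1.5])
    ranges = np.linalg.norm(anchors - true_pos, axis=1)
    print(trilaterate(anchors, ranges))  # ~[10.  1.5]

[In practice, real range measurements carry noise and multipath error, so deployed systems fuse many measurements over time, for example with a Kalman filter, to reach and hold centimeter-level precision.]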

A great deal of this work focuses on reliability, robustness, and safety. These are the kinds of large technological systems that MIT once studied in its Engineering Systems Division: legacy infrastructure running at full capacity, with a variety of stakeholders and technical issues hashed out in political debate. As an opportunity to improve people’s lives with our technology, this project is very motivating for the Humatics team.

We see a subway system as a giant robot that collaborates with millions of people every day. Indeed, for all its flaws, it does so today in beautifully fluid ways. Disruption is not an option. Similarly, we see factories, e-commerce fulfillment centers, even entire supply chains as giant human-machine systems that combine three key elements: people, robots (vehicles), and infrastructure. Humatics builds the technological glue that ties these systems together.

Q: Autonomous cars were touted to be available soon, but their design has run into issues and ethical questions. Is there a different approach to the design of artificially intelligent vehicles, one that does not attempt to create fully autonomous vehicles? If so, what are the barriers or resistance to human-centered approaches?

A: Too many engineers still imagine autonomy as meaning “alone in the world.” This approach stems from a specific historical imagination of autonomy, shaped by Defense Advanced Research Projects Agency sponsorship among other sources, in which a robot should be independent of all infrastructure. While that’s potentially appropriate for military operations, the promise of autonomy on our roads must be the promise of autonomy in the human world, in myriad exquisite relationships.

Autonomous vehicle companies are learning, at great expense, that they already depend heavily on infrastructure (including roads and traffic signs) and that the sooner they learn to embrace it, the sooner they can deploy at scale. Decades of experience have taught us that, to function in the human world, autonomy must be connected, relational, and situated. Human-centered autonomy in automobiles must be more than a fancy FitBit on a driver; it must factor into the fundamental design of the systems: What do we wish to control? Whom do we trust? Who owns our data? How are our systems trained? How do they handle failure? Who gets to decide?

The current crisis over the Boeing 737 MAX control systems shows that these questions are hard to get right, even in aviation. There we have a great deal of regulation, formalism, training, and procedure, not to mention a safety culture that has evolved over a century. For autonomous cars, with radically different regulatory settings and operating environments, not to mention non-deterministic software, we still have a great deal to learn. Sometimes I think it could take the better part of this century to really learn how to build robust autonomy into safety-critical systems at scale.
 

Interview prepared by MIT SHASS Communications
Editorial and Design Director: Emily Hiestand
Interview conducted by writer Maria Iacobo

 
