Prof. Regina Barzilay speaks with Nicole Estephan of WCVB-TV’s Chronicle about her work developing new AI systems that could be used to help diagnose breast and lung cancer before the cancers are detectable to the human eye.
In conversation with Matthew Huston at Science, Prof. John Horton discusses the possibility of using chatbots in research instead of humans. As he explains, a change like that would be similar to the transition from in-person to online surveys: “People were like, ‘How can you run experiments online? Who are these people?’ And now it’s like, ‘Oh, yeah, of course you do that.’”
Researchers from MIT have found that generative AI chatbots can improve the speed and quality of simple writing tasks, but that their output often lacks factual accuracy, reports Richard Nieva for Forbes. “When we first started playing with ChatGPT, it was clear that it was a new breakthrough unlike anything we've seen before,” says graduate student Shakked Noy. “And it was pretty clear that it was going to have some kind of labor market impact.”
Prof. Daron Acemoglu and graduate student Todd Lensman have created “the first economic model of how to regulate transformative technologies,” like artificial intelligence, reports Tim Fernholz for Quartz. “Their tentative conclusion is that slower deployment is likely better, and that a machine learning tax combined with sector-specific restrictions on the use of the technology could provide the best possible outcomes,” writes Fernholz.
Lecturer Guadalupe Hayes-Mota SB '08, MS '16, MBA '16 writes for Forbes about the ethical framework needed to mitigate risks in artificial intelligence. “[A]s we continue to unlock AI's capabilities, it is crucial to address the ethical challenges that emerge,” writes Hayes-Mota. “By establishing a comprehensive ethical framework grounded in beneficence, non-maleficence, autonomy, justice and responsibility, we can ensure that AI's deployment in life sciences aligns with humanity's best interests.”
Writing for Forbes, Prof. Daniela Rus, director of CSAIL, makes the case that liquid neural networks “offer an elegant and efficient computational framework for training and inference in machine learning. With their compactness, adaptability, and streamlined computation, these networks have the potential to reshape the landscape of artificial intelligence and drive further breakthroughs in the field.”
Researchers at MIT have developed PIGINet (Plans, Images, Goal and Initial facts), a neural network designed to bring task and motion planning to home robotics, reports Brian Heater for TechCrunch. “The system is largely focused on kitchen-based activities at present. It draws on simulated home environments to build plans that require interactions with various different elements of the environment, like counters, cabinets, the fridge, sinks, etc,” writes Heater.
Prof. Max Tegmark speaks with Guardian reporter Steve Rose about the potential of artificial intelligence. “The positive, optimistic scenario is that we responsibly develop superintelligence in a way that allows us to control it and benefit from it,” says Tegmark. “If we can build and control superintelligence, we can quickly go from being limited by our own stupidity to being limited by the laws of physics. It could be the greatest empowerment moment in human history.”
Speaking at the Aspen Ideas Festival, MIT Schwarzman College of Computing Dean Daniel Huttenlocher discusses how artificial intelligence has impacted print media, reports John Frank for Axios. “Most of us grew up in a world where the word print was something that was authoritative,” says Huttenlocher, noting that people will need to be on the lookout for misinformation.
MIT Schwarzman College of Computing Dean Daniel Huttenlocher speaks at the Aspen Ideas Festival on how to regulate AI while maximizing its positive impact, reports NBC. “I think when we think about regulation [of artificial intelligence] we need to think about this in the ways we’ve traditionally thought about things – risk, reward, tradeoffs – and that tends to be domain specific,” says Huttenlocher. “It’s hard to have sort of an abstract notion of this new technology and what the risk [and] reward is across all domains.”
Prof. Deb Roy speaks with CNBC reporter Deirdre Bosa about “the relationship between machine-learning technology and humans.”
Writing for Bloomberg, Prof. Thomas Malone makes the case for adapting copyright laws in the era of generative AI. “One promising possibility is to introduce a new body of law based on the premise that different legal protections are appropriate when these massive AI systems can process vast amounts of information far faster and less expensively than humans can,” writes Malone.
Prof. Marzyeh Ghassemi speaks with Yahoo News reporter Rebecca Corey about the benefits and risks posed by the use of AI tools in health care. “I think the problem is when you try to naively replace humans with AI in health care settings, you get really poor results,” says Ghassemi. “You should be looking at it as an augmentation tool, not as a replacement tool.”
Prof. Kevin Esvelt and his students have found that language-generating AI models could make it easier to create potential pandemic pathogens, reports Kelsey Piper for Vox.
MIT researchers and an undergraduate class found that chatbots could be prompted to suggest pandemic pathogens, including specific information not commonly known among experts, reports Ryan Heath for Axios. The MIT researchers recommend "pre-release evaluations of LLMs by third parties, curating training datasets to remove harmful concepts, and verifiably screening all DNA generated by synthesis providers or used by contract research organizations."