Last week, the stuff of Hollywood blockbusters became a reality when NASA launched its Double Asteroid Redirection Test (DART), a small spacecraft that will smash into an asteroid sometime in September or October to try to alter its path. Scientists say the target, Dimorphos, poses no threat to Earth, but that we need to be prepared should a mass of rock or metal someday come hurtling toward the planet at more than 30,000 miles an hour. The Gazette spoke with Harvard astrophysicist Jonathan McDowell about NASA’s first-of-its-kind mission. The interview was edited for clarity and length.
GAZETTE: What exactly is an asteroid?
McDOWELL: The technical term for it is a minor planet. It’s an object, usually orbiting the sun, that is too small to be considered a planet. In the context we’re talking about, near-Earth asteroids are basically mountains flying through space, anything from 100 meters across to a few kilometers across. Out in the asteroid belt between Mars and Jupiter, you get all sizes, all the way up to dwarf planets that are 1,000 kilometers across. The ones that are a threat to Earth are less than a mile across. These are remnants left over from the formation of our sun and the solar system. There were lots of these things early on in the solar system before the planets were there, and a bunch of them hit each other and glommed on together and built themselves up to become planets. So asteroids are the construction debris left over from building the planets. The trouble is that they are still flying around, trying to build the planets, which isn’t good if you live on one already.
GAZETTE: What are they made of?
McDOWELL: In the case of Dimorphos, we don’t know exactly. That’s one of the things researchers are hoping to find out with this mission. Mostly, asteroids consist of rock. But some asteroids are almost solid metal. Some are made of ice in the outer solar system, and when they come too close to the sun, they sort of boil off and become comets. Some are just made of stone. One question they are hoping to answer with DART: Is this asteroid a really solid, strong rock, or is it a whole bunch of pebbles being held together by gravity, such that if you smash into it, it’ll kind of be like a ball pit? If you’re worried that one of these asteroids is someday going to hit the Earth, you want to know how hard you have to hit it to successfully alter its course. That depends on whether it’s a solid thing or one of these rubble piles, as we call them, that are only very tenuously held together.
If human activity is killing the planet, can humans engineer a solution to save it? That was the question that ran through “The Climate of Attention,” a Harvard discussion with Elizabeth Kolbert, a New Yorker staff writer and Pulitzer Prize-winning author, on Nov. 15. It is also the theme of Kolbert’s latest book, “Under a White Sky: The Nature of the Future.”
Williams began the conversation by citing the impact of rising sea levels and asking, “How do we navigate these waters?”
“Everyone is struggling,” Kolbert said, “even if the struggle is to push the information away.” Her focus, she said, is on communicating the truth of what she sees on her beat: climate change. “When I go around the world, I can see what’s missing. I can see all the invasive species that are right here in New England. I can watch all the ash trees dying, being done in by the ash borer.
“One of the things that is shocking to me is the way we just trundle on,” said Kolbert, whose 2014 book “The Sixth Extinction” was awarded a Pulitzer Prize. “Each loss doesn’t get marked, and I see my role to a great extent as bearing witness.”
Williams and Kolbert discussed the case of the Devil’s Hole pupfish, chronicled in “Under a White Sky.” Possibly “the rarest fish in the world,” according to Kolbert, the 1½-inch-long, iridescent blue fish lives only in a thermally heated pool in the Mojave Desert. That pool is fed by an ancient aquifer that began to be noticeably depleted by human use in the 1960s. Although the Supreme Court sided with conservationists in a fight over the water, the pool and the pupfish have not recovered, and attempts to breed the animals in aquariums have failed. In the latest response, conservationists have built a replica of Devil’s Hole, down to the shape of the rocks, as a “refuge tank” environment. “People started to do all this crazy stuff to get the fish to reproduce,” said Kolbert. The results remain uncertain.
Such interventions are now being considered on a global scale, including attempts at geoengineering or solar radiation management to counter global warming. As Kolbert noted, when volcanic eruptions darken the sky, temperatures cool. “The idea is we could mimic volcanos and counteract some of the warming,” she said. However, the initial experiments to test equipment have been met with protests, and Kolbert is not sold on the idea of an engineered solution. “I put geoengineering in this long line of interventions that had very mixed effects.”
Part of the problem, Kolbert said, is that there is a significant time lag in climate change. “We’re just feeling what was emitted 20 to 30 years ago,” she said. “Any intelligent coastal city has to be thinking about how are we going to protect ourselves against what we know is baked in at this point.”
When Myers joined the conversation, he likened humanity to “a monkey on a spaceship.” For much of our history, he said, we were simply passengers, “hurtling around.” Over time, however, we “made our way up to the cockpit and started flipping levers and turning dials.” These actions have disrupted the spaceship’s flight. “We have a very limited amount of time to learn to fly this rocket ship before it crashes,” he said.
Kolbert was skeptical. “Do we have the knowledge to do this?” she asked. “Our record is not good.”
“If we have any hope of navigating this moment, it’s a political moment — what we need is not more science, but the emotional and spiritual,” said Myers.
Winding up the discussion, Williams asked both participants about the future, and what they tell their own children. Kolbert, whose oldest son is studying climate science at Harvard, said that there’s nothing she can tell him that he doesn’t know. “I do feel there’s a passing off to the next generation,” she added. “The thrill of discovery and the pain of discovery.”
Myers, whose daughters “are just getting old enough to grasp” the crisis, describes a world of possibility. “I say to them what I say to students, which is that this is the most interesting time to be a human in the history of our species. There’s no set of skills that isn’t relevant, and you have the capacity to make a contribution that almost no one has.”
In some ways, says Ritu Raman, an inchworm is similar to a smartphone. Both are machines — one living, one not — that can sense changes in their environments and light up, change color, or make a sound.
But there’s one major difference on which Raman built her career.
“If I pick up my phone right now and throw it across the room, it will break.”
Living cells, on the other hand, can heal, grow, get stronger, and learn, making them a promising foundation not just for biofabricated machines, but also for tissue created from a patient’s cells, organs-on-a-chip for rapid drug development, and lab-made meat.
On Nov. 10, Raman, an assistant professor of mechanical engineering at the Massachusetts Institute of Technology, outlined her new book, “Biofabrication,” in a Harvard Science Book Talk at the Harvard Book Store. Speaking with Jermey Matthews, an editor at the MIT Press, Raman recalled how she started building with biology and discussed what possibilities excite her most in the field.
The daughter of engineers, Raman grew up in India, Kenya, and all over the U.S. Her parents’ projects included communications towers in remote villages, and Raman tagged along, witnessing how technological innovation could change people’s lives.
Because of those childhood experiences, Raman ended up studying mechanical engineering at Cornell University. But an inchworm creeping along a leaf in her mother’s garden stole her mind away from cars, rockets, and robots. No robot can move like an inchworm, she says. But maybe someday?
In her lab at MIT, Raman built an inchworm-like robot with lab-made muscles wrapped around a synthetic skeleton. Like living muscles, her robots run on sugar. Because she genetically engineered the cells to move when they sense light, she and her collaborators could control where their “creepy, crawly, inchworm-like robots” wriggled.
“We started seeing right away that they were able to do things regular robots couldn’t do,” Raman said. “They could exercise and get stronger. We could get them to heal from damage. And it was entirely because they were made out of biological materials.”
In her book, Raman describes how biofabricators are engineering materials and machines for everything from medicine and agriculture to defense. Lab-built tissue, for example, could replace tissue too damaged or diseased to heal. Today, new drugs must go through animal trials — close but imperfect approximations of how humans will respond to treatments — and then risky and expensive human clinical trials. Lab-grown organs and cells could mimic a human body’s response and provide a more accurate, lower-risk, and quicker way to design new drugs.
What about building human arms, hearts, or eyes? “It’s one of those things you see in science fiction all the time,” Raman said. “Because we’re afraid of injuries and disabilities. And, of course, we’re afraid of dying.” But building an eye, which needs blood vessels, an immune system, and billions of cells, is “very, very complex,” she said. Simple cartilage is a realistic near-term goal, but an eye will take decades or longer.
Printing a steak, on the other hand, might soon be possible.
“If we could build you meat — print you a steak or build you a burger — and it would essentially have the same look, taste, and smell, you may not be able to tell the difference,” Raman said. Singaporeans can already buy lab-grown chicken. The product is expensive compared with farmed meat, but comes with environmental and other benefits, including lower risk of contamination and the potential for enhanced vitamins and minerals (“Synthesized to meet your dietary needs,” Raman said). For consumers who see animal farms as unethical, lab-grown meat may be worth the price. For chefs, fabricated meat could mean entirely new combinations of animal cells, giving new meaning to the turducken.
Yet lab-grown living cells introduce serious ethical dilemmas, Raman noted. “When you’re building things with living cells, that doesn’t technically mean the thing you’re building is alive.” How do we define it, then? Raman said researchers in biofabrication must consider the ethical implications of their work. Reading superhero comics might help, too.
“Superhero comics allow us to ask questions like, ‘What if you gave someone super-strong muscle and then they became super strong? What kind of ethical implications would that have? What responsibility do they have to their community?’ We can learn a lot from those stories, right? We can use that to spark our imaginations and keep our imaginations in check.”
That imagination eventually could send hybrid robots into space or the deep sea. But for now, Raman’s imagination remains preoccupied with her “creepy, crawly, inchworm-like robots” and how to give them neurons, so they can sense if a surface is slippery, how far to stretch, and more.
“At the end of the day,” she said, “I’m still obsessed with that inchworm.”
Wondering is a new series in which Harvard experts give informed answers to random questions. For the first installment, we asked Krzysztof Gajos, Gordon McKay Professor of Computer Science, to tell us when a robot will write a novel.
From the perspective of someone somewhat familiar with the state of the art in machine learning, my answer is that AI may be able to write trashy novels as soon as next year, but it will not write a true novel in the foreseeable future.
What I mean by this is that the AI tools that we have developed are very good at manipulating surface levels of representation. For example, AI is good at manipulating musical notes without being capable of coming up with a musical joke or having any intention of engaging in a particular conversation with audiences. And AI may produce visually appealing artifacts, again, without any high-level intent behind such an artifact.
Yekaterina “Kate” Shulgina was a first-year student in the Graduate School of Arts and Sciences, looking for a short computational biology project so she could check the requirement off her program in systems biology. She wondered how genetic code, once thought to be universal, could evolve and change.
That was in 2016, and Shulgina has come out the other end of that short-term project with a way to decipher the mystery. She describes it in a new paper in the journal eLife with Harvard biologist Sean Eddy.
The report details a new computer program developed by Shulgina that can read the genome sequence of any organism and then determine its code. The program, called Codetta, has the potential to help scientists expand their understanding of how the genetic code evolves and correctly interprets the codes of newly sequenced organisms.
“This in and of itself is a very fundamental biology question,” said Shulgina, who does her graduate research in Eddy’s lab.
The genetic code is the set of rules that tells cells how to translate three-letter combinations of nucleotides into proteins, often referred to as the building blocks of life. Almost every organism, from E. coli to humans, uses the same genetic code, which is why the code was once thought to be set in stone. But scientists have discovered a handful of outliers — organisms that use alternative genetic codes, in which the set of instructions is different.
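To make the translation rules concrete, here is a minimal sketch in Python of how a codon table maps DNA to protein. The table is a truncated, illustrative fragment of the standard code, not a complete one:

```python
# A fragment of the standard genetic code: codon -> amino acid
# (single-letter abbreviations; '*' marks a stop codon).
STANDARD_CODE = {
    "ATG": "M",              # methionine, the usual start codon
    "TGG": "W",              # tryptophan
    "AGA": "R", "AGG": "R",  # arginine -- the amino acid reassigned in
                             # the alternative codes Codetta uncovered
    "TAA": "*", "TAG": "*",  # stop codons
}

def translate(dna: str, code: dict) -> str:
    """Read a DNA sequence three letters at a time and translate it."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    return "".join(code.get(c, "?") for c in codons)

print(translate("ATGTGGAGATAA", STANDARD_CODE))  # -> MWR*
```

An alternative genetic code is simply a different dictionary: swap the amino acid assigned to a codon and the same DNA yields a different protein.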
This is where Codetta can shine. The program can help identify more organisms that use these alternative genetic codes, helping shed new light on how genetic codes can change in the first place.
“Understanding how this happened would help us reconcile why we originally thought this was impossible … and how these really fundamental processes actually work,” Shulgina said.
Already, Codetta has analyzed the genome sequences of more than 250,000 bacteria and other single-celled organisms called archaea for alternative genetic codes and has identified five that had never been seen. In all five cases, the code for the amino acid arginine was reassigned to a different amino acid. It’s believed to mark the first time scientists have seen this swap in bacteria and could hint at evolutionary forces that go into altering the genetic code.
The researchers say the study marks the largest screening to date for alternative genetic codes. Codetta essentially analyzed every genome that’s available for bacteria and archaea. The name of the program is a cross between codon — a sequence of three nucleotides that forms one unit of the genetic code — and the Rosetta Stone, a slab of rock inscribed with parallel texts in ancient Greek, Demotic, and Egyptian hieroglyphs, which served as a key for experts trying to decipher ancient Egyptian script.
The work marks a capstone moment for Shulgina, who spent the past five years developing the statistical theory behind Codetta, writing the program, testing it, and then analyzing the genomes. It works by reading the genome of an organism and then tapping into a database of known proteins to produce a likely genetic code. It differs from similar methods because of the scale at which it can analyze genomes.
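As a heavily simplified illustration of the underlying idea — not Codetta’s actual statistical method — one can imagine inferring a code by taking, for each codon, the amino acid most often seen at that codon’s aligned positions in known proteins. The data below are hypothetical:

```python
from collections import Counter

# Toy sketch of code inference (NOT Codetta's real algorithm):
# observations maps each codon to the amino acids that aligned
# known proteins place at that codon's positions in the genome.
def infer_code(observations: dict) -> dict:
    """Assign each codon the amino acid it most often aligns to."""
    return {codon: Counter(aas).most_common(1)[0][0]
            for codon, aas in observations.items()}

# Hypothetical evidence: 'AGA' aligning mostly to glycine ('G')
# would signal an arginine reassignment like those Codetta found.
obs = {"ATG": ["M", "M", "M"], "AGA": ["G", "G", "R", "G"]}
print(infer_code(obs))  # -> {'ATG': 'M', 'AGA': 'G'}
```

Codetta replaces this majority vote with a rigorous likelihood model over hundreds of thousands of genomes, which is what makes its scale possible.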
Shulgina joined Eddy’s lab, which specializes in comparing genomes, in 2016 after coming to him for advice on the algorithm she was designing to interpret genetic codes.
Until now, no one had done such a broad survey for alternative genetic codes.
“It was great to see new codes, because for all we knew, Kate would do all this work, and there wouldn’t turn out to be any new ones to find,” said Eddy, who’s also a Howard Hughes Medical Institute investigator. He also noted the system’s potential to be used to ensure the accuracy of the many databases that house protein sequences.
“Many protein sequences in the databases these days are only conceptual translations of genomic DNA sequences,” Eddy said. “People mine these protein sequences for all sorts of useful stuff, like new enzymes or new gene editing tools and whatnot. You’d like for those protein sequences to be accurate, but if the organism is using a nonstandard code, they’ll be erroneously translated.”
The researchers say the next step of the work is to use Codetta to search for alternative codes in viruses, eukaryotes, and organellar genomes like mitochondria and chloroplasts.
“There’s still a lot of diversity of life where we haven’t done this systematic screening yet,” Shulgina said.
Cut off the head of a three-banded panther worm, and it will grow another — mouth, brain, and all. Cut off its tail, and the same thing happens. Cut it in three pieces, and within eight weeks there’ll be three fully formed worms.
Put simply: The three-banded panther worm is one of the greatest of all time when it comes to regeneration, which is why scientists started studying this Tic Tac-sized worm in earnest over the past decade or so to learn exactly how it pulls off such an amazing feat. Such knowledge could eventually lend insights into the possibilities for a similar kind of regeneration process in humans.
Now, a team of researchers has taken the study of these worms to the next level by making them glow in the dark through a process called transgenesis. The work, described in a new paper in Developmental Cell, is led by Mansi Srivastava, a professor of organismic and evolutionary biology at Harvard who has been studying three-banded panther worms for more than a decade.
Transgenesis is when scientists introduce something into the genome of an organism that is not normally part of that genome. “It’s a tool that biologists use to study how cells or tissues work within the body of an animal,” Srivastava said.
The glow-in-the-dark factor comes from the introduction of a gene that, when translated into a protein, gives off a certain fluorescent glow. These proteins glow either green or red and can produce glowing muscle cells or glowing skin cells, for example.
The fluorescence gives scientists a more detailed look at cells, where they are in the animal, and how they interact with each other.
If only Captain Ahab and the white whale could have had a heart-to-heart, things might have turned out differently.
It may be a bit late for the protagonists of Herman Melville’s classic novel “Moby Dick,” but talking to the massive creatures might one day be a reality. According to a group of scientists working on deciphering sperm whale communication, the time may come when a human-to-whale conversation will indeed be possible.
But first, they have to spend more time eavesdropping on the giants from the deep.
“The goal here is to find ways to use technology to connect us to nature by really deeply listening,” said marine biologist David Gruber RI ’18, lead scientist on Project CETI, a nonprofit that grew out of a series of meetings among scholars from various fields that began at the Radcliffe Institute for Advanced Study in 2017. The researchers, a collection of biologists, cryptographers, linguists, computer scientists, and robotics experts, will begin by gathering the whales’ sounds and observing their patterns of behavior. In their final phase, they hope to play taped vocalizations back to the animals and record how they respond.
Gruber was virtually back on campus Tuesday to discuss the work with his colleagues and former Radcliffe Fellows Shafi Goldwasser RI ’18 and Michael Bronstein RI ’18. During an online talk moderated by Radcliffe Dean Tomiko Brown-Nagin, they reviewed the inception of CETI (short for the Cetacean Translation Initiative), its progress, and what they hope the work might mean for the future of animal/human understanding.
Gruber explained that while at Radcliffe in 2017 to design robots based on the soft properties of jellyfish, in collaboration with Robert Wood, who directs the Harvard Microrobotics Laboratory, he grew interested in codas, the series of clicks sperm whales use to communicate. He was listening to a recording of the sounds in his office one day when Goldwasser, a cryptographer and Radcliffe Fellow exploring machine learning and data privacy, stopped in from across the hall. “We began discussing the sounds and she brought up the idea of using machine learning and invited me to this Radcliffe machine-learning working group that she’d organized,” said Gruber, a professor of biology at City University of New York.
More discussions followed, and in 2019 they applied for funding after studying an existing data set of whale sounds gathered by other researchers off the coast of the Caribbean island of Dominica and convening a two-day exploratory seminar back at Radcliffe. “We came together and kind of realized that [at] this moment in time the window opened up where we think we could make significant inroads,” said Gruber.
Some of those inroads will be based on artificial intelligence, explained Bronstein, a computer scientist who was using his fellowship year to detect the spread of misinformation on social media sites with machine-learning algorithms when he sat in on the discussions about the giant marine mammals. Upon hearing Gruber and Goldwasser’s pitch about communicating with the leviathans his first thought was “It’s the craziest idea in the world.” But, Bronstein added, “The questions stuck in my head.”
Good luck finding an animal tougher than a tardigrade.
These tiny creatures are famous for their ability to survive in the most extreme conditions, including boiling water, freezing water, and even the vacuum of space. Called water bears or moss piglets because of their appearance under a microscope, tardigrades are the smallest-known animals with legs. They have a pudgy body — no larger than a pencil point — their eight legs have several pointed claws at the end, and they have a spear-like sucker that extends from their mouth.
Tardigrades are found on all the continents (basically wherever there is water) and have survived on Earth for more than 500 million years. Despite such a long evolutionary history and global presence, the fossil record on tardigrades is thin, with only two clear examples identified as separate species ever found. But thanks to a 16-million-year-old piece of amber discovered in the Dominican Republic, scientists can now add a third — a discovery immortalized in word and song.
Marcus du Sautoy was around 13 when the exploits of a 19th-century German math genius changed his life.
According to legend, the young Carl Friedrich Gauss was asked to calculate the sum of the numbers from one to 100, but instead of adding up the digits one by one, he realized 50 pairs of numbers (one plus 100, two plus 99, and so on) all equaled 101. So, he simply multiplied 50 times 101 and dropped the correct answer, 5,050, on his teacher’s desk.
“When I heard this story,” said du Sautoy, “I thought, ‘Wow, that’s the subject I want to dedicate myself to.’”
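Gauss’s pairing trick generalizes to any n — the numbers 1 through n form pairs that each sum to n + 1 — and can be sketched in a few lines of Python:

```python
# Gauss's shortcut: the numbers 1..100 form 50 pairs
# (1+100, 2+99, ...), each summing to 101, so the total
# is 50 * 101. In general, the sum of 1..n is n * (n + 1) / 2.
def gauss_sum(n: int) -> int:
    """Sum the integers 1..n by pairing instead of adding one by one."""
    return n * (n + 1) // 2

print(gauss_sum(100))  # -> 5050, the answer young Gauss dropped
                       #    on his teacher's desk
```

The brute-force loop takes n additions; the shortcut takes one multiplication, no matter how large n gets.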
Through teaching, TV appearances, and bestsellers, the Oxford professor of mathematics has done exactly that, bringing math and science to the masses. In an online Harvard Science Book Talk on Monday, du Sautoy, clad in a black shirt covered with white numbers, discussed his new book, “Thinking Better: The Art of the Shortcut in Math and Life,” with Melissa Franklin, Harvard’s Mallinckrodt Professor of Physics. The art of the shortcut isn’t about cutting corners, he emphasized. Instead, it’s about thinking cleverly about a problem to avoid the “boring work,” so you can “get to the work you want to do.” The book, he said, is a “celebration of mathematics,” as well as “a kind of interesting exploration of the shortcut beyond my world of maths.”
And many worlds there are. While researching his work, he interviewed experts in a range of fields. From an accomplished cellist he learned that scales are a kind of pattern, or shortcut, enabling musicians to play without analyzing each note. But he also learned that there’s no getting around the muscle memory required to master an instrument. “If you have to change the body physically in some way,” he said, “then it’s very hard to actually shortcut that.”
A mountaineer told du Sautoy that shortcuts occasionally come in handy, recalling a time when he triggered a small avalanche so he could quickly slip along the snow to avoid getting stuck on a summit at night. But more often, the climber took the long way up and down because he relished the views and “being in the moment.” The same is true for those who want to inhabit a piece of music, said the author. A song clip won’t immerse the listener in the work the way a full recording does. Conversely, a movie trailer might be “a useful shortcut to get a quick feel for what the film might be like,” he said.
A wide array of animals have tusks, including elephants, walruses, warthogs, hippos, and even the much smaller hyrax, which looks like a guinea pig and is about the size of a domestic cat. The other thing they have in common is that all are mammals, which raises the question: Why did some mammals develop tusks?
Published in the Proceedings of the Royal Society B, a new, Harvard-led study traces the origins of tusks to an ancient mammal-like species that lived before the time of the dinosaurs and sheds light on how some mammals would go on to evolve this iconic dental trait. (Minor spoiler: They inherited them.)
Dental exams of fossils more than 200 million years old showed that the first true tusks belonged to later groups of a family of creatures called dicynodonts. These distant relatives of mammals were the most abundant and diverse vertebrates on land before the dinosaurs took over. They lived between 270 and 201 million years ago and included a range of animals from tiny, rat-like dicynodonts to elephant-sized dicynodonts that weighed about 6 tons. Each of them had beaks and a pair of tusk-like teeth protruding from their turtle-shaped heads.
But here’s the kicker: Not all of those protruding teeth were actual tusks.
“Some of them were just big teeth that extended out from the jaw,” said Megan Whitney, a postdoctoral researcher in the FAS Department of Organismic and Evolutionary Biology and lead author of the study.
For Emilly Fan, concentrating in Environmental Science and Public Policy feels urgent and consequential. It brought her all the way from her home in New Zealand.
“[It] was the main drawing card in flying to study here,” said the Quincy House senior. “Even having a nocturnal class schedule last year when I was back home in Auckland during COVID didn’t detract from the incredible caliber of classes and the importance of the content.”
Fan’s deep commitment to the environment, which will find her in Glasgow this week as part of Harvard’s contingent at the U.N.’s COP26 climate summit, has been fostered by her concentration, which was created 25 years ago to provide the foundation for thinking about the complex tangle of issues involved in safeguarding life on the planet.
“It marries the science with the policy and, given how intertwined the two are, I knew this is what I want to be studying. There’s a lot of passion behind the youth movement, which is such an important force to harness, but I’ve been conscious of building upon that energy while ensuring that I also have the knowledge and practical skills to call myself a pragmatic and holistic environmentalist,” Fan said.
Contemporary Americans have access to custom workout routines, fancy gyms, and high-end home equipment like Peloton machines. Even so, when it comes to physical activity, our forebears of two centuries ago beat us by about 30 minutes a day, according to a new Harvard study.
Researchers from the lab of evolutionary biologist Daniel E. Lieberman used data on falling body temperatures and changing metabolic rates to compare current levels of physical activity in the United States with those of the early 19th century. The work is described in Current Biology.
The scientists found that Americans’ resting metabolic rate — the total number of calories burned when the body is completely at rest — has fallen by about 6 percent since 1820, which translates to 27 fewer minutes of daily exercise. The culprit, the authors say, is technology.
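The arithmetic behind that translation can be sketched back-of-envelope style. The kcal figures below are illustrative assumptions, not the study’s actual inputs:

```python
# Back-of-envelope version of the paper's logic, with illustrative
# numbers (the kcal figures are assumptions, not the study's).
resting_metabolic_rate = 1600.0  # kcal/day, rough adult figure (assumed)
decline_fraction = 0.06          # ~6% drop since 1820 (from the study)
burn_rate = 3.5                  # kcal/min, moderate activity (assumed)

missing_kcal = resting_metabolic_rate * decline_fraction  # ~96 kcal/day
missing_minutes = missing_kcal / burn_rate
print(round(missing_minutes))  # -> 27 minutes a day
```

With these placeholder inputs the chain reproduces the study’s headline figure; the paper itself derives the link from measured body-temperature and metabolic data rather than assumed constants.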
“Instead of walking to work, we take cars or trains; instead of manual labor in factories, we use machines,” said Andrew K. Yegian, a postdoctoral fellow in the Department of Human Evolutionary Biology and the paper’s lead author. “We’ve made technology to do our physical activity for us. … Our hope is that this helps people think more about the long-term changes of activity that have come with our changes in lifestyle and technology.”
While it’s been well documented that technological and social changes have reduced levels of physical activity over the past two centuries, the precise drop-off had never been calculated. The paper puts a quantitative number to the literature and shows that historical records of resting body temperature may be able to serve as a measure of population-level physical activity.
“This is a first-pass estimate of taking physiological data and trying to quantify declines in activity,” Yegian said. “The next step would be to try to apply this as a tool to other populations.”
The work started last year as a back-of-the-envelope calculation after scientists at Stanford University showed that Americans’ average body temperature had declined to about 97.5 degrees Fahrenheit — a tick lower than the well-established 98.6. The Harvard researchers figured that falling body temperature and falling physical activity are related and could be linked by metabolism, which produces body heat and is, in part, powered by physical activity.
The scientists scoured studies by other researchers to find a quantitative answer to this question: If there is a change in body temperature, what does that mean in terms of metabolism and activity? They pulled data from two papers to calculate how the processes corresponded and then estimated how much physical activity had gone down.
In the paper, the researchers note that factors other than physical activity can influence resting metabolic rate and body temperature, complicating their estimate.
They also say that future work refining relationships among metabolic rates, body temperature, and physical activity could allow for more precise investigation of physical activity trends and serve as an anchor for understanding how declines in physical activity have affected the health and morbidity of Americans.
“Physical activity is a major determinant of health,” said Lieberman, the Edwin M. Lerner II Professor of Biological Science. “Understanding how much less active Americans have become over the last few generations can help us assess just how much increases in the incidence of chronic conditions like Type 2 diabetes, heart disease, and Alzheimer’s can be attributed to decreases in physical activity.”
Methane emissions from the distribution and use of natural gas across U.S. cities are 2 to 10 times higher than recent estimates from the Environmental Protection Agency, according to a new study from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS).
In Boston, methane emissions from the natural gas system are six times higher than recent estimates by the Massachusetts DEP and have not significantly changed in eight years, despite legislation aimed at repairing leaks in natural gas pipelines.
Methane is the most important greenhouse gas after CO2, and reducing methane emissions is critical to slowing the pace of global warming. While CO2 has longer-lasting effects, methane has more than 80 times the warming power of CO2 over its first 20 years in the atmosphere, setting the pace for near-term warming.
The study, a collaboration between Harvard, Boston University, the Earth System Research Laboratory at the National Oceanic and Atmospheric Administration, and the Environmental Defense Fund, used atmospheric measurements of methane and ethane concentrations to track emissions over eight years in Boston. The researchers found that more than half of emissions may be coming from sources other than pipelines, including compressor stations, meters, appliances such as boilers and furnaces, and other so-called end-use emitters.
The study finds that methane emissions from urban natural gas pipelines and end-use emitters account for 20 to 36 percent of all U.S. methane emissions from natural gas, significantly higher than previous estimates, which found those sources to contribute only about 6 percent of the load.
“Traditional approaches to estimating emissions from natural gas systems are missing significant sources of methane emissions,” said Steven Wofsy, the Abbott Lawrence Rotch Professor of Atmospheric and Environmental Science at SEAS and senior author of the study. “If cities and states want to pass meaningful legislation to curb emissions, they need to know where emissions are coming from, how they change over time, and whether or not the policies put in place to reduce them are working.”
Today, oil and natural gas systems account for about 30 percent of human-made methane emissions in the U.S., a number that has been rising steadily as natural gas has become an increasingly important energy source.
“Natural gas has been sold as a transitional fuel as we move toward green energy — but its low greenhouse emissions are contingent on a low loss rate, as any amount of methane that escapes is going to have a strong climate impact,” said Maryann Sargent, a research scientist in Environmental Science and Engineering at SEAS and first author of the paper.
Cities calculate methane emissions from natural gas using a so-called bottom-up approach, sampling various emissions sources, and generating an average emission rate for each source. But there is a huge discrepancy when those estimates are compared to measurements of actual methane in the atmosphere.
What accounts for those discrepancies?
To answer that question, the research team designed a top-down study, which started with atmospheric measurements and worked backwards to trace emissions. The researchers installed sensors at two sites in Boston — on a roof at Boston University and atop a tall building in Copley Square — and at three locations outside the city: Harvard Forest in Petersham, Massachusetts; Canaan, New Hampshire; and Mashpee, Massachusetts. The sensors ran continuously from September 2012 to May 2020.
To differentiate natural gas emissions from other sources of methane such as landfills, the sensors measured levels of methane and ethane, a compound emitted by natural gas but not by other sources of methane. Using a model that considers wind and atmospheric turbulence, the research team was able to calculate natural gas emissions in the Boston area at one kilometer resolution.
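The ethane-tracer logic described above can be sketched as a simple attribution calculation: because ethane in urban air comes almost entirely from natural gas, the observed ethane enhancement, divided by the ethane-to-methane ratio of pipeline gas, gives the natural-gas share of the methane enhancement. The 3 percent ratio and the measurement values below are illustrative assumptions, not numbers from the study.

```python
def natural_gas_methane(ch4_enhancement_ppb: float,
                        c2h6_enhancement_ppb: float,
                        gas_ethane_ratio: float = 0.03) -> float:
    """Estimate how much of an observed methane enhancement came from
    natural gas, assuming ethane is emitted only by natural gas and that
    pipeline gas has a known ethane:methane molar ratio (3% here is an
    illustrative value; the real ratio varies by supply region)."""
    ng_ch4 = c2h6_enhancement_ppb / gas_ethane_ratio
    return min(ng_ch4, ch4_enhancement_ppb)  # cannot exceed total observed

# Illustrative: a 100 ppb methane enhancement with 1.8 ppb of co-measured ethane
ng = natural_gas_methane(100.0, 1.8)
print(f"Natural gas accounts for roughly {ng:.0f} ppb of the enhancement")
```

In the actual study this attribution is combined with a wind and turbulence transport model to map emissions back to their source areas, a step this sketch omits.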
The study revealed that emissions were three times higher than the most recent estimates and six times higher than a 2018 study still being used by the state. Emissions remained constant over the eight years, despite the enactment of legislation aimed at curbing emissions by mandating the repair of leaky pipelines.
“It seems like it’s a game of whack-a-mole: every time you repair a leak, a new one springs up,” said Sargent.
The researchers also noticed that emissions changed seasonally. Because pipelines are pressurized year-round, the seasonal emissions must be tied to consumption at the end of the pipeline: appliances such as boilers, furnaces, and stovetops in residential, industrial, and commercial buildings, along with flow meters and boosting compressors.
The COVID-19 lockdown of April 2020 shed significant light on the relationship between emissions and consumption. Emissions at the Boston University sensor dropped by 40 percent during lockdown, when many nearby buildings significantly reduced their heat, hot water, and stove use.
The researchers found that those seasonal, consumption-based emissions accounted for about 56 percent of the total natural gas emissions in Boston.
“We didn’t expect to see such a strong relationship between emissions and consumption,” said Sargent. “This finding shows that the government needs to be looking at emissions beyond just pipes and provides more evidence that we should be moving away from natural gas toward renewable energy to heat and electrify our cities.”
This research was co-authored by Cody Floerchinger, Kathryn McKain, John Budney, Elaine W. Gottlieb, Lucy R. Hutyra, and Joseph Rudek. It was supported by the Environmental Defense Fund; the National Aeronautics and Space Administration through OCO-2 Grant 1637874 and Carbon Monitoring System Award NNX16AP23G; and the National Oceanic and Atmospheric Administration through Urban Awards NA20OAR4310303 and NA17OAR4310086.
Signs of a planet transiting a star outside of the Milky Way galaxy may have been detected for the first time. This intriguing result, using NASA’s Chandra X-ray Observatory, opens up a new window to search for exoplanets at greater distances than ever before. The possible exoplanet candidate is located in the spiral galaxy Messier 51 (M51), also called the Whirlpool galaxy because of its distinctive profile.
Exoplanets are defined as planets outside of our solar system. To date, astronomers have found all known exoplanets and exoplanet candidates in the Milky Way galaxy, almost all of them less than about 3,000 light-years from Earth. An exoplanet in M51 would be about 28 million light-years away, meaning it would be thousands of times farther from us than those in the Milky Way.
“We are trying to open up a whole new arena for finding other worlds by searching for planet candidates at X-ray wavelengths, a strategy that makes it possible to discover them in other galaxies,” said Rosanne Di Stefano of the Center for Astrophysics | Harvard & Smithsonian (CfA), who led the study, which was published Monday in Nature Astronomy.
This new result is based on transits, events in which the passage of a planet in front of a star blocks some of the star’s light and produces a characteristic dip. Astronomers using both ground-based and space-based telescopes — like those on NASA’s Kepler and TESS missions — have searched for dips in optical light, electromagnetic radiation humans can see, enabling the discovery of thousands of planets.
Di Stefano and colleagues have instead searched for dips in the brightness of X-rays received from X-ray bright binaries. These luminous systems typically contain a neutron star or black hole pulling in gas from a closely orbiting companion star. The material near the neutron star or black hole becomes superheated and glows in X-rays.
Because the region producing bright X-rays is small, a planet passing in front of it could block most or all of the X-rays, making the transit easier to spot because the X-rays can completely disappear. This could allow exoplanets to be detected at much greater distances than current optical light transit studies, which must be able to detect tiny decreases in light because the planet only blocks a tiny fraction of the star.
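The geometric argument above can be illustrated with a quick depth calculation: the transit depth is roughly the fraction of the emitting region's disk that the planet covers. The radii below are round, illustrative values (a Saturn-sized planet, a Sun-like photosphere, and a compact X-ray emitting region), not measurements from the study.

```python
def transit_depth(planet_radius_km: float, source_radius_km: float) -> float:
    """Fraction of flux blocked when a planet crosses the source's disk.

    Uses the simple area ratio (Rp/Rs)^2, capped at 1.0 when the planet
    is larger than the emitting region and blocks it entirely.
    """
    covered = min(planet_radius_km, source_radius_km) ** 2
    return min(covered / source_radius_km ** 2, 1.0)

R_PLANET = 58_000    # km, roughly Saturn-sized (illustrative)
R_STAR = 700_000     # km, a Sun-like star's optical photosphere
R_XRAY = 10_000      # km, a compact X-ray emitting region (illustrative)

print(f"Optical transit depth: {transit_depth(R_PLANET, R_STAR):.1%}")   # -> 0.7%
print(f"X-ray transit depth:  {transit_depth(R_PLANET, R_XRAY):.1%}")    # -> 100.0%
```

A sub-percent optical dip demands exquisite photometric precision, while a total X-ray dropout can be seen even in the sparse photon counts available from a galaxy 28 million light-years away.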
The team used this method to detect the exoplanet candidate in a binary system called M51-ULS-1, located in M51. This binary system contains a black hole or neutron star orbiting a companion star with a mass about 20 times that of the sun. The X-ray transit they found using Chandra data lasted about three hours, during which the X-ray emission decreased to zero. Based on this and other information, the researchers estimate the exoplanet candidate in M51-ULS-1 would be roughly the size of Saturn, and orbit the neutron star or black hole at about twice Saturn’s distance from the sun.
While this is a tantalizing study, more data would be needed to verify the interpretation as an extragalactic exoplanet. One challenge is that the planet candidate’s large orbit means it would not cross in front of its binary partner again for about 70 years, thwarting any attempts for a confirming observation for decades.
“Unfortunately, to confirm that we’re seeing a planet we would likely have to wait decades to see another transit,” said co-author Nia Imara of the University of California at Santa Cruz. “And because of the uncertainties about how long it takes to orbit, we wouldn’t know exactly when to look.”
Could the dimming have been caused by a cloud of gas and dust passing in front of the X-ray source? The researchers consider this an unlikely explanation, as the characteristics of the event observed in M51-ULS-1 are not consistent with the passage of such a cloud. The model of a planet candidate is, however, consistent with the data.
Between 2.5 and 4 billion years ago, a time known as the Archean eon, Earth’s weather could often be described as cloudy with a chance of asteroid.
Back then, it was not uncommon for asteroids or comets to hit Earth. In fact, the largest ones, more than six miles wide, altered the chemistry of the planet’s earliest atmosphere. While all this is generally accepted by geologists, what hasn’t been as well understood is how often these large asteroids hit and how exactly the fallout from the impacts affected the atmosphere, specifically oxygen levels. A team of researchers now believes it has some of the answers.
In a new study, Nadja Drabon, a Harvard assistant professor of Earth and planetary sciences, was part of a team that analyzed remnants of ancient asteroids and modeled the effects of their collisions to show that the strikes took place more often than previously thought and may have delayed when oxygen started accumulating on the planet. The new models can help scientists understand more precisely when the planet started its path toward becoming the Earth we know today.
“Free oxygen in the atmosphere is critical for any living being that uses respiration to produce energy,” Drabon said. “Without the accumulation of oxygen in the atmosphere we would probably not exist.”
The work is described in Nature Geoscience and was led by Simone Marchi, a senior research scientist at the Southwest Research Institute in Boulder, Colorado.
The researchers found that existing planetary bombardment models underestimate how frequently asteroids and comets hit the early Earth. The new, higher collision rate suggests that a large impactor struck the planet roughly every 15 million years, about 10 times more often than current models indicate.
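As a back-of-the-envelope check on what that rate implies, one impact per roughly 15 million years sustained over the roughly 1.5-billion-year Archean eon works out to on the order of 100 large impacts. The figures below are rounded from the passage; this is a sketch, not the study's model.

```python
# Rough expected impact count over the Archean eon (4.0 to 2.5 billion
# years ago), using the revised rate of ~1 large impact per 15 Myr.
ARCHEAN_DURATION_MYR = 4000 - 2500   # ~1,500 Myr
IMPACT_INTERVAL_MYR = 15             # revised rate from the study

expected_impacts = ARCHEAN_DURATION_MYR / IMPACT_INTERVAL_MYR
print(f"~{expected_impacts:.0f} large impacts during the Archean")  # -> ~100
```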
The scientists realized this after analyzing records of what appear to be ordinary bits of rock. They are actually ancient evidence, known as impact spherules, that formed in the fiery collisions each time large asteroids or comets struck the planet. The energy from each impact melted and vaporized the rocky materials in the Earth’s crust, shooting them up in a giant plume. Small droplets of molten rock in that cloud would then condense and solidify, falling back to Earth as sand-sized particles that settled onto the crust. These ancient markers are hard to find, since they form layers in the rock that are usually only about an inch or so thick.
“You basically just go on long hikes and you look at all the rocks you can find because the impact particles are so tiny,” Drabon said. “They’re really easily missed.”
Scientists, however, have caught breaks. “Over the last couple of years, evidence for a number of additional impacts has been found that hadn’t been recognized before,” Drabon said.
These new spherule layers increased the total number of known impact events during the early Earth. This allowed the Southwest Research Institute team to update their bombardment models to find the collision rate had been underestimated.
Researchers then modeled how all these impacts would have influenced the atmosphere. They essentially found that the accumulated effects of meteorite impacts by objects larger than six miles probably created an oxygen sink that sucked most of the oxygen out of the atmosphere.
The findings align with the geological record, which shows that oxygen levels in the atmosphere varied but stayed relatively low in the early Archean eon. This was the case until around 2.4 billion years ago, during the tail end of the time period when the bombardment slowed down. The Earth then went through a major shift in surface chemistry triggered by the rise of oxygen levels known as the Great Oxidation Event.
“As time went on, collisions become progressively less frequent and too small to be able to significantly alter post-GOE oxygen levels,” Marchi said in a statement. “The Earth was on its course to become the current planet.”
Drabon said next steps in the project include putting their modeling work to the test to see what they can model in the rocks themselves.
“Can we actually trace in the rock record how the oxygen was sucked out of the atmosphere?” Drabon wondered.
Javier Luque’s first thought while looking at the 100-million-year-old piece of amber wasn’t whether the crustacean trapped inside could help fill a crucial gap in crab evolution. He just kind of wondered: How the heck did it get stuck in the now-fossilized tree resin?
“In a way, it’s like finding a fish in amber,” said Luque, a postdoctoral researcher in the Harvard Department of Organismic and Evolutionary Biology. “Talk about wrong place, wrong time.”
It was, however, a bit of good luck for Luque and his team, as the amber, recovered from the jungles of Southeast Asia, presented researchers with the opportunity to study a particularly intact specimen of what’s believed to be the oldest modern-looking crab ever found. The discovery provides new insights, reported Wednesday in Science Advances, into the evolution of these crustaceans and when they spread around the world.
The crab, measuring about the width of an eraser on a pencil, is the first ever found in amber from the dinosaur era, and the researchers think it represents the oldest evidence of incursions into nonmarine environments by “true crabs.”
Forgetting a child in the car is a parent’s worst nightmare, but some experts say our ability to remember even the most crucial tasks can be hijacked by something as simple as a missing cue.
According to Harvard psychologist Daniel L. Schacter, tragic cases of forgotten children started to rise near the turn of the millennium, just as new safety rules began requiring children to be placed in car seats in the back. “You would never think that that could produce a problem with forgetting because the child is no longer visible, but sadly it has,” said Schacter, author of “The Seven Sins of Memory: How the Mind Forgets and Remembers.” His 2001 work, based on his research into memory errors, includes a discussion that helps explain how a parent could forget something as essential as taking his or her toddler with them when leaving their vehicle.
“One of the points I made there, and I never imagined it would apply to a situation like this, is that when a retrieval cue is not present at the moment you need it, you can forget almost anything,” said Schacter.
When Sharif Tabebordbar was a teenager, his father began having trouble walking. Soon, he became wheelchair-bound and was diagnosed with a rare genetic muscle disease.
“I watched my dad get worse and worse each day,” said Tabebordbar. “It was a huge challenge to do things together as a family.”
For Tabebordbar, who’s now a research scientist at the Broad Institute of MIT and Harvard and an associate of the Department of Organismic and Evolutionary Biology, the experience led him to focus on gene therapy, and it has been his motivation for working in the field for the past decade. “Genetic disease is a burden not only on patients but on families,” he said. “I thought: This is very unfair to patients, and there’s got to be a way to fix this.”
Along with colleagues from the Broad and Harvard, Tabebordbar has gotten a step closer to that goal with a new gene-delivery system that has the potential to make gene therapy for muscle diseases both safer and more effective for patients.
The system is called MyoAAV and is described in the journal Cell.
It is a new family of adeno-associated viruses that act as a better transport vehicle for gene therapies and can be used to carry a gene-editing system sometimes called CRISPR 2.0. In gene therapies for muscle diseases, scientists often use harmless adeno-associated viruses to deliver a functioning copy of a defective gene to cells, an approach that has shown promise in clinical trials. But these therapies often face challenges because they require high doses of the gene-carrying virus to reach muscle cells, and much of the virus ends up in the liver instead, which can lead to severe adverse side effects, and even death in some trial participants.
In the study, the researchers show the group of viral vectors they created is more than 10 times more efficient at reaching and delivering therapeutics to muscle cells than other adeno-associated vectors currently used in clinical trials, largely avoiding the liver, and does so at doses around 100 to 250 times lower. Because of this, MyoAAV has the potential to better treat these diseases and reduce the risk of liver damage and other serious side effects.
“We know that if you get enough of the drug into the target tissue, it’s going to be efficacious,” said Tabebordbar, who works in the lab of Pardis Sabeti, an institute member at the Broad and a professor in the Harvard Department of Organismic and Evolutionary Biology. “It’s all about delivering a safe dose of the virus.”
Already, the system has delivered some impressive results. The researchers used MyoAAV to transport therapeutic genes or the CRISPR-Cas9 gene-editing system to muscle cells in mice and primates. They found that treatments delivered by the system improved muscle function in mouse models of Duchenne muscular dystrophy, the most common form of genetic muscle disease, and of a rarer disease called X-linked myotubular myopathy. The researchers also found that MyoAAV could effectively deliver gene therapies to muscle in nonhuman primates and to human muscle cells.
“All of these results demonstrate the broad applicability of the MyoAAV vectors for delivery to muscle,” said co-senior author of the study Amy Wagers, a professor and co-chair of Harvard’s Department of Stem Cell and Regenerative Biology and a senior investigator at the Joslin Diabetes Center. “These vectors work in different disease models and across different ages, strains, and species, which demonstrates the robustness of this family of AAVs.”
The new study is a collaboration between researchers from Harvard’s Department of Stem Cell and Regenerative Biology, the Broad, and Boston Children’s Hospital.
The paper details how the group modified the outer protein shell, or capsid, of an adeno-associated virus known as AAV9, a commonly used gene-delivery vehicle, to improve its ability to shuttle genes into muscle cells. They injected the capsids into mice and primates, then sampled and sequenced muscle tissue from throughout the body to look for the capsids that had successfully delivered their genetic cargo. They found a family of capsid variants with a unique surface structure that specifically targets muscle cells, and called these MyoAAV.
The researchers then put their capsids to the test: treating genetic muscle disease in animal models.
In the mouse model for Duchenne muscular dystrophy, which causes progressive muscle degeneration and weakness due to alterations of a protein that keeps muscles intact, MyoAAV carrying CRISPR-Cas9 led to more widespread repair of the dysfunctional gene in muscle tissue than conventional AAV9 carrying the same CRISPR components. The muscles of the MyoAAV-treated animals also showed greater strength and function.
In collaboration with Alan Beggs’ lab at Boston Children’s Hospital, the research team showed that MyoAAV was also effective at treating X-linked myotubular myopathy.
In the mouse model for X-linked myotubular myopathy, a disease that is lethal in mice after about 10 weeks, the researchers saw that after receiving a dose 100 times lower than those of other viral vectors currently used in clinical trials, all six mice treated with MyoAAV in the study lived as long as normal mice. In comparison, mice treated with AAV9 lived only up to 21 weeks of age.
In their final experiment, the team saw that MyoAAV designed for nonhuman primates delivered genes to muscles in these animals far more efficiently than naturally occurring capsids currently used in clinical trials. MyoAAV also successfully introduced genes to human cells in the lab. The results suggested that MyoAAV can be used for muscle-directed gene delivery across different species because various MyoAAV capsids used a similar mechanism to deliver genes to mouse and human muscle cells.
Next steps involve using the data from the study and looking at how the system can be used to enable effective drug development for patients.
“We have an enormous amount of information about this class of vectors from which the field can launch many exciting new studies,” Wagers said.
Support for this research was provided in part by an anonymous philanthropic gift, the Howard Hughes Medical Institute, Sarepta Therapeutics, the Chemical Biology and Therapeutic Sciences program at the Broad Institute, the American Society of Gene & Cell Therapy, the National Institutes of Health, the Glenn Foundation, the Muscular Dystrophy Association USA, and the Anderson Family Foundation.
International climate-change experts have issued increasingly dire warnings about the need for deep emissions cuts in the years to come. The nations of the world will consider their individual commitments and plan the path ahead when they gather for the latest global climate summit in November in Scotland. Harvard, meanwhile, has signaled its intent to further boost its diverse and long-running climate-change efforts, creating a new position of vice provost for climate and sustainability and naming energy and environmental policy expert James Stock to the post in September. Stock, the Harold Hitchings Burbank Professor of Political Economy, spoke to the Gazette about the challenge ahead, for both the globe and Harvard, and his vision for how his new office can help.
GAZETTE: There’s work on climate change going on across Harvard’s Schools. How do you see your role as vice provost intersecting with that work?
STOCK: In his letter on climate-change engagement at Harvard last month, President Larry Bacow wrote that “Harvard must stand among world leaders in addressing this challenge.” It’s my job to make this happen.
Fortunately, we are starting from a position of strength. We have many excellent individual faculty members and students working on various aspects of climate change and sustainability across the University. Also, the Harvard University Center for the Environment has done a terrific job creating a community of scholars across Harvard who are deeply engaged in climate and environment research. My task is to build on this strength at the University level to support more research, more teaching and education, and an enhanced level of external impact commensurate with Harvard’s stature.
At the moment, research and education in the climate area mainly occurs within the Schools. I see my role as both supporting existing research programs and promoting research collaborations across School boundaries. Climate change cuts across School boundaries, and tackling the challenge of climate change is a quintessential example of how we can draw on all of Harvard’s strengths.
GAZETTE: Is there a particular place at the University where the potential for deepest impact lies right now?
STOCK: In the short run, we have at least two areas of opportunity. One is in research. We can support those who are actively engaged in research in this area. We can also make it easier for researchers who haven’t worked on climate issues but would like to. For example, I anticipate that this spring we will start a new grants program for Harvard faculty and graduate students, as a larger successor to the Climate Change Solutions Fund. I’d like to see this program used in part to support early work by scholars interested in bringing their expertise and perspectives to climate and sustainability research. I’d like to promote a “big tent” approach to research in this area. This is a vast field that needs first-rate minds focusing on it across many disciplines.
A second area is on-campus education. Currently, demand for climate-related courses far outstrips supply. Fully addressing that problem requires having more faculty teaching in the area. But we can make progress in the short run by taking advantage creatively of existing courses. For example, the Center for the Environment has been developing resources to support faculty who want to create a climate module in their courses. That is a terrific opportunity to help faculty who want to connect existing and new courses to climate and environmental challenges. There are also things we can and will do, with the Center for the Environment’s help, to support and educate Ph.D. students working in the climate area broadly. And we — faculty and students together — also have an opportunity to think ahead to what a more robust set of curricular options could look like.
GAZETTE: Does your appointment — as an expert on energy and environment policy — indicate a belief that, while advancing climate science, technology, and other aspects of the problem are clearly still important, this has become largely a policy problem, specifically an energy policy problem?
STOCK: The scope of needed work on climate spans many time scales and many intellectual areas. It’s true that my own work focuses on U.S. energy policy and how we can effectively decarbonize in the near term. But there are a vast number of other aspects of climate change that Harvard can and does contribute to. We already have great engineering work on batteries, carbon removal, and more, but the world needs much more on the technology front. And, what are the best ways to make urban environments more resilient to storms, floods, and rising sea levels? What are the public health threats that future climate change will pose, and how can they be addressed? How can the private sector meaningfully work toward a sustainable future? What can we do to make sure that the transition to clean energy benefits the disadvantaged communities that have disproportionately suffered from fossil-fuel pollution? Harvard faculty are doing good work in many of these areas, but there is much more to be done.
Going back to your question about energy policy, recently I’ve been thinking about the transition to light-duty electric vehicles and about the transition to low-carbon aviation fuels. Those are, in the first instance, policy questions, but they are intimately linked with technology questions. If you just focus on sustainable aviation fuels, there are many possible, theoretical paths to decarbonizing the aviation sector, but really none of those are available at a commercial scale right now. We don’t really know which of those are going to be cost-effective. So the question of decarbonizing aviation is really challenging in part because we don’t know what the technology is. What we need to do now is make sure that these technologies develop. That problem combines policy, technology, finance, land use, and business.
GAZETTE: Can the potential replacements for current aviation fuels be used in current engines or are we also looking at a need to transition to new types of engines?
STOCK: At the moment, the most plausible path runs through so-called sustainable aviation fuels, which can be used directly in today’s jet engines. Those fuels also can be used with today’s fueling infrastructure. These fuels come from biofuel feedstocks, some of which have the potential to have very low life-cycle carbon footprints. But engineers are thinking outside the box, too. For example, there are prototype battery-powered airplanes in development for short-haul routes, like flying from Boston to the Cape. Then there are other ideas, like using hydrogen as a fuel. For example, Airbus has been doing some concept work on using hydrogen. That’s a long way off, but it underscores that aviation isn’t just a policy problem; it’s also a technology and business problem.
GAZETTE: Clearly, a heavy reliance on renewables seems to be the way forward, but is it possible to go entirely to renewables? What is your view of this energy transition as we move ahead?
STOCK: I’m confident that we can fully decarbonize the power sector, although we can’t do it next week. If we can generate inexpensive green electricity, then use it to power vehicles and manufacturing processes and so forth, then that is a very plausible path towards decarbonization for big chunks of the economy. At the moment, the best bet for sharply reducing carbon emissions in the power sector is through building new renewables generating capacity — wind and solar — while postponing the retirement of our nuclear fleet. Squeezing the final 10 or 15 percent of fossil fuel emissions out of the power sector is an area where the technology isn’t quite there yet. There are some very exciting ideas that are being explored for that. Some of them have to do with long-term battery storage; some of them have to do with hydrogen; some of it could be load management. There is also a role for new zero-carbon capacity like nuclear or, maybe someday, fusion, if the economics, siting, and fuel challenges work out. So there’s a wide range of things that are under active investigation, and we’re going to have to cross that bridge in 10 or 15 years, perhaps sooner in some regions.
GAZETTE: Has solar gotten to the point where it is cheaper than coal or natural gas?
STOCK: Yes. In much of the United States, it is substantially cheaper to install new solar than just to keep an existing coal plant running. This is a remarkable and quite recent development. Now, one really can see a path toward deep decarbonization of the power sector in the U.S. and in other areas where there are renewable resources, wind or solar.
GAZETTE: Is that what makes you optimistic?
STOCK: The falling cost of wind and solar electricity definitely is one reason for great optimism in the United States for a big chunk of our emissions. Another reason for optimism is that it is also becoming cheaper to use that green electricity in much of the transportation sector. We are already at the point where some electric vehicles are price-competitive with their gasoline counterparts, if you include fuel and maintenance costs. As battery prices fall, the price of electric vehicles will fall further, and they will become increasingly attractive to the consumer — especially if we take the public policy actions needed to support charging infrastructure.
But the main reason I’m optimistic is the passion for addressing climate-change issues among youth nationally and globally. Here at Harvard, there is tremendous student enthusiasm for tackling climate change across the board, from undergraduates interested in green technologies, to professional students preparing for careers in which climate change and sustainability will play an important role. The youth movement has been critical in driving the current climate efforts by the Biden administration and in Congress. Whatever happens to those efforts in the short run, the youth political pressure to act on climate has been transformative.
GAZETTE: Given the magnitude of the change, can we do it in the timeframe that seems to be necessary?
STOCK: A common goal is to be net zero by 2050. I think that’s an achievable goal, but I must admit that in saying that I’m putting a lot of faith in the engineers — and in green-tech venture capitalists and the rest of the green-tech ecosystem, including research universities — to invent a lot of things that don’t yet exist. We also will need the policies to support those nascent technologies and drive, or in some cases enforce, the transition. In the big areas of energy use (electricity in residential heating and cooling, light-duty transportation, even aviation), one can see a path toward net zero by 2050. There are some residual areas where we just don’t know, such as manufacturing processes like steel production. Agriculture is challenging because of methane emissions from animal husbandry. There are ideas about how to mitigate such sources, but they’re all very expensive. So there’s a lot of work to be done in some of these most difficult sectors. It’s really important to think about these things now, to focus on doing what we can now to set the stage for those steps later.
GAZETTE: Harvard has several potential roles when we talk about climate-change solutions and one of them is as a living lab. How big a challenge are the University’s goals to be fossil-fuel neutral by 2026 and fossil-fuel free by 2050?
STOCK: Being fossil-fuel neutral by 2026 is a big lift, but Harvard is working hard toward that goal. The Harvard Office for Sustainability has been working on this undertaking, in collaboration with faculty through the Presidential Committee on Sustainability. And that office has been taking concrete steps. As just one example, on Oct. 21 there will be a ribbon-cutting ceremony on the first four of our new electric shuttle buses. The office, faculty, and students are also thinking more broadly about how our efforts can provide a useful template for sustainability efforts elsewhere. In this vein, part of this work includes ex-post program evaluation, for example checking whether an offset program or a new renewables investment credibly provided additional emissions reductions.
On your second question, fossil-fuel free by 2050, that really is a question of whether we can make the economy fossil-fuel free by 2050. In my view, Harvard’s real contribution is helping to achieve that goal nationally. Changes on the Harvard campus are important — we must lead by example — but Harvard’s leadership must extend much further, through research, teaching, and engagement, nationally and globally.
GAZETTE: Clearly, as a university, one of our biggest impacts on the future in any field is our students. Are we adequately preparing them to understand this problem, regardless of what field they’re in, and are there thoughts about potentially making changes to the curriculum?
STOCK: I mentioned that we have short-run opportunities in education. Demand for climate-related training will only increase. Some of our peer institutions are setting up schools of climate and sustainability, and we need to compete successfully for those students, the leaders of tomorrow. If you think expansively, there are climate-related professions just now emerging, and there will be more in eight to 10 years: climate and health, climate and development, climate and finance, and so forth. So, there are many areas where I think we have great opportunities to create new programs, to have cross-School programs, or enhanced programs within Schools, both at the graduate and undergraduate level. So, the short answer is we have some very good courses, and we do a good job teaching them, but there are many opportunities to do much more.
GAZETTE: You had indicated that you were going to be embarking on a series of conversations with faculty across the University. Have you begun that? What have you been hearing?
STOCK: I have. I’ve also created a faculty advisory committee, drawn from faculty across multiple Schools, to help me as we build out the new vice provost position. There is tangible enthusiasm for the fact that Harvard created this position and that we have ambitious goals. I’ve had quite a few colleagues say, “I’m interested in climate. It seems really important. It’s not my main area of research, but it would be really interesting to do some work in this area. How can I get involved?” or “How can I learn about this particular technical topic?” And that’s great. That means there are opportunities to get additional faculty involved. We have a community that is united by the recognition of the gravity of the climate problem and the unique opportunity for Harvard to lead in tackling it.
Matthew Douglas Adams wants to taste 5,000-year-old beer — or at least one made like they did then.
Thanks to his recent excavation of a brewery in the ancient Egyptian city of Abydos, the senior research scholar at New York University’s Institute of Fine Arts may get his wish, and soon. But the excavation revealed far more than a way to reconstruct an ancient recipe for suds. The industrial-scale production — on par with today’s best microbreweries — offers direct evidence of the kind of power wielded by Egyptian kings.
In previous digs, archaeologists have unearthed several breweries across various sites. But the one in Abydos is the largest so far. In the early 1900s, Egyptologist Thomas Eric Peet was excavating cemetery fields in Abydos when he found surprising remains underneath the tombs: large pottery vats, propped up by fired-mud bricks. Peet speculated the rigs were used to dry grain.
He was close. But it was Adams and his team who determined what they were really for. In two excavations during 2018 and 2020, they uncovered six large, rectangular buildings, each more than 20 yards long and three feet deep, housing about 40 vats apiece. Each vat was wrapped in mud for insulation; charcoal lay underneath; and inside, an organic residue remained, burned black and hard.
“Enough survives for us to gain a very solid picture of what was going on,” said Adams, which is to say beer production on an industrial scale.
Two similar ancient Egyptian breweries — Tell el Farkha and HK24B — could produce batches of up to about 165 gallons and 265 gallons, respectively. (A typical U.S. keg is about 15.5 gallons.) The Abydos site, built a couple of centuries later, could put out more than 5,917 gallons per batch. That’s enough to serve a pint to every person in a packed Fenway Park and a second round for about half.
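The arithmetic behind that comparison can be checked in a few lines, assuming US liquid measures (8 pints to the gallon) and the 15.5-gallon keg mentioned above:

```python
# Rough arithmetic behind the comparison above. Assumes US liquid
# measures: 8 pints per gallon and a standard 15.5-gallon keg.
US_PINTS_PER_GALLON = 8
KEG_GALLONS = 15.5

batch_gallons = 5917  # estimated Abydos output per batch
pints = batch_gallons * US_PINTS_PER_GALLON   # pints per batch
kegs = batch_gallons / KEG_GALLONS            # equivalent modern kegs

print(pints)        # 47336
print(round(kegs))  # 382
```

Roughly 47,000 pints per batch, set against Fenway Park’s capacity of about 37,000, gives a sense of the scale Adams describes.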
“It’s an incredible amount of beer,” especially for a society at such an early stage of economic and political development, Adams said. “Why would kings or anyone at this time need this amount of beer? What would they have possibly been using it for?”
Our sedentary tendencies may be robbing us of a key benefit of physical activity: the myriad repair mechanisms that heal the minor dings and tears of hunter-gatherer and farming lifestyles, a deficit that may be particularly damaging as we age.
Daniel Lieberman, the Edwin M. Lerner II Professor of Biological Sciences and an expert in the evolution of physical activity and exercise, says that the difference in activity levels between Western adults and hunter-gatherers is significant throughout the lifespan, but grows particularly glaring as we age. Western adults slow down with age while elders of today’s hunter-gatherer tribes — whose daily exercise is already significantly higher — chalk up six to 10 times more activity than their Western counterparts.
“We evolved to be very physically active as we age,” Lieberman said. “There’s no such thing as retirement if you’re a hunter-gatherer. You work until the end of your life. There are no weekends, no bank holidays, no retirement.”
Grandmothers actually increase foraging after their child-rearing days, spending more time engaged in the activity than mothers who are juggling childcare responsibilities: four to eight hours per day compared with two to five for mothers. All that exercise, Lieberman said, stresses the body and requires it to spend significant resources on repair after each session, patching tears in muscle fibers, repairing cartilage damage, and healing microfractures. Exercise-related antioxidants, anti-inflammatories, increased blood flow, and cellular and DNA repair processes have been shown to lower the risk of diabetes, obesity, cancer, osteoporosis, Alzheimer’s, and depression. Exercise has even been shown to protect against COVID-19, Lieberman said, with 150 minutes per week resulting in a 2½ times lower risk for contracting the illness.
In the digital age, every byte of data needs to go somewhere — and preferably stay there a long time. That last part is a major problem when it comes to data-storage systems, which typically last less than 20 years. A group of Harvard chemists is trying to solve the issue with an innovation that resembles tiny drops of ink.
In a new paper in ACS Central Science, researchers from the George Whitesides lab describe a novel storage approach that uses mixtures of seven commercially available fluorescent dyes to save data files. The dyes are dropped by an inkjet printer and read with a microscope that can detect the different wavelengths of light each dye emits. The researchers then decode the binary message in the molecules back to documents, books, photos, videos, or anything else that can be digitally stored.
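The encoding idea can be sketched in a few lines. This is a toy illustration only, on the assumption that the presence or absence of each of the seven dyes at a printed spot stores one bit, so a single spot can hold a 7-bit ASCII character; the actual fluorescence-microscopy read-out is abstracted away entirely:

```python
# Toy sketch: each printed spot contains some subset of 7 dyes, and each
# dye's presence (1) or absence (0) stores one bit, so one spot can hold
# a 7-bit ASCII character. Real read-out uses fluorescence microscopy;
# here a "spot" is just a list of bits.

def encode(text):
    """One spot per character: 7 presence/absence bits, most significant first."""
    return [[(ord(ch) >> i) & 1 for i in range(6, -1, -1)] for ch in text]

def decode(spots):
    """Rebuild the text from the per-spot dye patterns."""
    return "".join(chr(int("".join(map(str, bits)), 2)) for bits in spots)

spots = encode("data")
assert decode(spots) == "data"
print(spots[0])  # dye pattern for 'd': [1, 1, 0, 0, 1, 0, 0]
```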
Theoretically, the data can be saved for a very long time — thousands of years or more. The long timeline of molecular data-storage options is superior to that of current media devices for data storage, such as flash drives, Blu-rays, magnetic memory strips, and computer drives, which can store information for 40 years at most, have strict size limits, and are susceptible to water damage and hacking. Another shortcoming with traditional storage processes is that they gobble up energy. Even the cloud has a storage limit, requires huge and expensive physical servers, and is, of course, susceptible to being breached.
“This method could provide access to archival data storage at a low cost,” said Amit A. Nagarkar, co-lead author of the paper, who conducted the research as a postdoctoral fellow in the Whitesides lab. “[It] provides access to long-term data storage using existing commercial technologies — inkjet printing and fluorescence microscopy.”
The dye method could be particularly helpful with information whose storage is regulated — financial and legal records, for example — and in cases in which long-term storage is crucial, as with satellite data. The dyes live outside the hackable internet, are relatively cheap to produce, and can’t be read without a special microscope. The technique uses no energy once the data is recorded.
Massages feel good, but do they actually speed muscle recovery? Turns out, they do. Scientists at the Wyss Institute and Harvard John A. Paulson School of Engineering and Applied Sciences applied precise, repeated forces to injured mouse leg muscles and found that they recovered stronger and faster than untreated muscles, likely because the compression squeezed inflammation-causing cells out of the muscle tissue.
Using a custom-designed robotic system to deliver consistent and tunable compressive forces to mice’s leg muscles, researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering and SEAS found that this mechanical loading (ML) rapidly clears immune cells called neutrophils out of severely injured muscle tissue. This process also removed inflammatory cytokines released by neutrophils from the muscles, enhancing the process of muscle fiber regeneration. The research is published in Science Translational Medicine.
“Lots of people have been trying to study the beneficial effects of massage and other mechanotherapies on the body, but up to this point it hadn’t been done in a systematic, reproducible way,” said first author Bo Ri Seo, who is a postdoctoral fellow in the lab of Dave Mooney at the Wyss Institute and SEAS. “Our work shows a very clear connection between mechanical stimulation and immune function. This has promise for regenerating a wide variety of tissues including bone, tendon, hair, and skin, and can also be used in patients with diseases that prevent the use of drug-based interventions.”
A more meticulous massage gun
Seo and her co-authors started exploring the effects of mechanotherapy on injured tissues in mice several years ago, and found that it doubled the rate of muscle regeneration and reduced tissue scarring over the course of two weeks. Excited by the idea that mechanical stimulation alone can foster regeneration and enhance muscle function, the team decided to probe more deeply into exactly how that process worked in the body, and to figure out what parameters would maximize healing.
They teamed up with soft robotics experts in the Harvard Biodesign Lab, led by Wyss Associate Faculty member Conor Walsh, to create a small device that used sensors and actuators to monitor and control the force applied to the limb of a mouse. The team experimented with applying force to mice’s leg muscles via a soft silicone tip and used ultrasound to get a look at what happened to the tissue in response. They observed that the muscles experienced a strain of between 10 and 40 percent, confirming that the tissues were experiencing mechanical force. They also used those ultrasound imaging data to develop and validate a computational model that could predict the amount of tissue strain under different loading forces.
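Strain here is simply the fractional change in tissue length under compression. A minimal sketch, with made-up lengths chosen to land at the ends of the reported range:

```python
# Strain = (resting length - compressed length) / resting length.
# The lengths below are illustrative numbers, picked to produce the
# 10 percent and 40 percent strains the ultrasound imaging reported.
def strain(resting_mm, compressed_mm):
    return (resting_mm - compressed_mm) / resting_mm

print(strain(10.0, 9.0))  # 0.1 -> 10 percent strain
print(strain(10.0, 6.0))  # 0.4 -> 40 percent strain
```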
They then applied consistent, repeated force to injured muscles for 14 days. While both treated and untreated muscles displayed a reduction in the amount of damaged muscle fibers, the reduction was more pronounced and the cross-sectional area of the fibers was larger in the treated muscle, indicating that treatment had led to greater repair and strength recovery. The greater the force applied during treatment, the stronger the injured muscles became, confirming that mechanotherapy improves muscle recovery after injury. But how?
Evicting neutrophils to enhance regeneration
To answer that question, the scientists performed a detailed biological assessment, analyzing a wide range of inflammation-related factors called cytokines and chemokines in untreated vs. treated muscles. A subset of cytokines was dramatically lower in treated muscles after three days of mechanotherapy, and these cytokines are associated with the movement of immune cells called neutrophils, which play many roles in the inflammation process. Treated muscles also had fewer neutrophils in their tissue than untreated muscles, suggesting that the reduction in cytokines that attract them had caused the decrease in neutrophil infiltration.
The team had a hunch that the force applied to the muscle by the mechanotherapy effectively squeezed the neutrophils and cytokines out of the injured tissue. They confirmed this theory by injecting fluorescent molecules into the muscles and observing that the movement of the molecules was more significant with force application, supporting the idea that it helped to flush out the muscle tissue.
To pick apart what effect the neutrophils and their associated cytokines have on regenerating muscle fibers, the scientists performed in vitro studies in which they grew muscle progenitor cells (MPCs) in a medium in which neutrophils had previously been grown. They found that the number of MPCs increased, but the rate at which they differentiated (developed into other cell types) decreased, suggesting that neutrophil-secreted factors stimulate the growth of muscle cells, but the prolonged presence of those factors impairs the production of new muscle fibers.
Colonoscopy is an important weapon in the fight against colon cancer, which killed 51,000 Americans in 2019, making it the nation’s second-deadliest cancer. But doctors’ ability to catch polyps on the colonoscopy screen varies — sometimes significantly. Last month, a team led by physicians at Beth Israel Deaconess Medical Center and Harvard Medical School showed that an AI-based computer-vision algorithm can improve the accuracy of screenings. The Gazette spoke with Tyler Berzin, a gastroenterologist and associate professor of medicine, about the findings, published in the journal Clinical Gastroenterology and Hepatology. The interview was edited for clarity and length.
GAZETTE: Your study used AI to improve colonoscopy results. Can you tell us what you found?
BERZIN: Colonoscopy is the most effective tool for preventing colon cancer, but there’s a lot of variability among individual physicians in their ability to detect precancerous polyps. That variability directly translates into how effectively they can protect their patients from colon cancer. There have been interesting observations in the field of screening colonoscopy that an extra pair of eyes, an experienced nurse or technician, a second gastroenterologist, or an extra gastroenterology trainee helps with polyp detection. So this is a good target for using AI to augment physician performance because AI computer vision could act as an extra set of eyes, without distraction and without fatigue.
But demonstrating real clinical benefit is the last-mile problem for AI in clinical medicine. There is an explosion of very cool technology where AI promises to benefit physician performance and clinical care, but actually demonstrating a benefit in a high-quality randomized control trial has rarely happened. So, our clinical research is providing rock-solid clinical science to support the use of this technology.
GAZETTE: Does the AI review these images after the fact or is it done in real time?
BERZIN: This is real-time use of AI, which also is somewhat unique. In clinical medicine, most examples of AI use — for radiology, for EKGs and so on — involve applying AI after the initial patient interaction, in the subsequent review of the X-ray, for instance. But we need real-time assistance during colonoscopy screening, when the job of the physician is to visually survey the entire lining of the colon very meticulously, to identify and remove small precancerous polyps. The challenge that gastroenterologists face is that many of these polyps are very flat, almost growing like moss on a rock. Often they can blend into the surrounding mucosa. Our AI computer sits between the colonoscope and the endoscopy monitor and processes the colonoscopy image. What we see on the monitor is our live colonoscopy procedure, but with blue or green alert boxes pointing out where suspected polyps may be located. It basically guides the physician’s eyes to an area where these subtle polyps are. So this is the perfect example of AI not replacing the physician, but augmenting physician performance.
GAZETTE: Are flat polyps less dangerous?
BERZIN: It’s actually the reverse. These flat polyps, which are often on the right side of the colon, make up a large percentage of polyps that may be missed during colonoscopies. There is a small percentage of patients who develop colon cancers even after they’ve undergone the screening and those patients often are found to have colon cancers in the right side of the colon.
GAZETTE: And the detection improvement was about 30 percent?
BERZIN: Percentages are always tricky, because there’s absolute versus relative differences for any given polyp, but our study showed that physicians were about 30 percent less likely to miss a polyp if they were using AI assistance. A core priority for AI in clinical medicine is independent, external validation of AI clinical algorithms — does an algorithm which has been developed in one environment perform as expected in a different clinical setting with a different patient population? This study is the first prospective randomized trial to externally validate the performance of an AI algorithm in a country and patient population — the U.S. — that was entirely distinct from where the training data was derived, China. We’re particularly proud that the trial engaged a diverse U.S. patient population, which must be a continued priority for AI clinical trials going forward.
GAZETTE: How does the AI recognize images of the polyps?
BERZIN: In this case the software is based on a deep-learning computer vision algorithm — which is built to learn how to detect certain objects once you give it enough examples of what that object looks like. You feed it a lot of visual data and say, “Hey, these 100,000 images have polyps, and these 100,000 images don’t.” Then it learns, over a training period, how to distinguish the polyps. What’s interesting is that these AI deep-learning systems potentially identify features that a physician might not even recognize. There are lots of examples of this, where a deep-learning model for X-rays, for instance, can distinguish somebody’s ethnicity based on an X-ray. There are examples where AI systems can look at a retinal scan and distinguish whether the patient is male or female, which physicians cannot do by looking at the retina. We have no idea how the AI systems can do this, but they can do it with incredible accuracy. So when you train a machine-learning model, it may actually be picking up on cues that are not the typical cues that physicians pick up on. That can come with advantages, but it can also — outside of polyp detection — create interesting questions about what’s happening and why.
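The train-on-labeled-examples idea Berzin describes can be illustrated with a deliberately tiny stand-in. The sketch below is not a deep network; it is a nearest-centroid classifier over made-up two-number "image features." It shows the same supervised pattern, though: labeled examples in, a learned decision rule out. All names and numbers here are hypothetical:

```python
# Toy supervised classifier, standing in for the deep-learning model in
# the study. Each "image" is a hypothetical 2-D feature vector; training
# averages the labeled examples of each class into a centroid, and
# classification picks the nearest centroid.

def train(examples):
    """examples: list of (feature_vector, label) pairs -> class centroids."""
    sums, counts = {}, {}
    for vec, label in examples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(vec))
        sums[label] = [s + v for s, v in zip(prev, vec)]
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(centroids, vec):
    """Return the label whose centroid is closest (squared distance)."""
    return min(centroids,
               key=lambda lab: sum((c - v) ** 2
                                   for c, v in zip(centroids[lab], vec)))

# Hypothetical features: (surface-texture score, elevation score).
labeled = [([0.9, 0.8], "polyp"), ([0.8, 0.9], "polyp"),
           ([0.1, 0.2], "normal"), ([0.2, 0.1], "normal")]
model = train(labeled)
print(classify(model, [0.85, 0.75]))  # a polyp-like frame -> "polyp"
```

A real detector replaces the hand-built features and centroids with convolutional layers learned from hundreds of thousands of labeled frames, but the training loop is driven by the same labeled data.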
GAZETTE: Can the human physicians learn from the AI?
BERZIN: One area of interest is “explainable AI.” We would love to be able to go back and interrogate the computer: Hey, this is an interesting group of 20 polyps that the machine saw more easily than the physician — what are the features that made it possible to reliably identify these? That would help both with training physicians and with future iterations of the AI technology.
GAZETTE: How long before AI is routinely used in colonoscopies?
BERZIN: The FDA just approved the first AI system for polyp detection, and this is beginning to be rolled out to a handful of centers across the country. However, in the field of medicine it’s common that exciting new technology gets rolled out, and sometimes even gains wide adoption, before high-quality research trials determine whether or not the clinical benefit is clear, and whether the cost is warranted. My team is trying to make sure that we develop a solid evidence base of high-quality research to guide clinical use of AI in gastroenterology.
GAZETTE: I heard at least one person familiar with AI in medicine say that the AI used in social media platforms — basically on his kid’s phone — is far more sophisticated than what is used in medicine these days. Do you share that observation?
BERZIN: I do share that observation. I’ve been working on the concept of AI polyp detection now for about seven or eight years — this was around the time that Facebook started recognizing my face and my sister’s face and my wife’s face on our uploaded images on Facebook. Facial recognition is a very difficult problem and tightly parallels some of the image recognition challenges in medicine. Social media companies invested in this years ago, but in the early days of AI, the Venn diagram of people who were interested in machine learning and the folks who were interested in colorectal cancer prevention had almost zero overlap. When I was doing very early research in this eight years ago, several members of my team — all graduate students or postdocs at MIT — ultimately graduated and left for Facebook, Google, or a similar company.
GAZETTE: How big is the missed opportunity?
BERZIN: I think we’re five to eight years behind where we could be if the efforts of the best AI minds had been spent differently during the last decade. Implementing these technologies in medical practice is going to reduce the number of people who develop colon cancer. It’s great that it’s happening in 2021, but certainly I would have been really happy for this to be happening in 2015.
At the end of the 1970s, infanticide became a flashpoint in animal behavioral science. Sociobiologist Sarah Hrdy, then a Harvard Ph.D. student, shared her observation in her published thesis that whenever a new langur male entered an established colony, infants would either begin to disappear or show evidence of wounds. Hrdy concluded this was done to eliminate the progeny of rivals and free up now infant-less females for mating. The work provoked an uproar.
“You can imagine, talking about infanticide and infanticidal behavior. That just seemed absolutely awful,” said Catherine Dulac, Lee and Ezpeleta Professor of Arts and Sciences and Higgins Professor of Molecular and Cellular Biology. “People said, ‘No, no, no, this, this can’t be true. It’s not a normal behavior. It’s a pathological behavior.’ [Hrdy] maintained her thesis and people started to look, to observe.”
Since then abusive behavior toward infants — ranging from physical aggression to avoidance and neglect — has been documented in a range of species, including certain primates, lions, and mice. It has spawned many laboratory studies trying to better understand the phenomenon, and the neurobiological mechanisms controlling it are still being teased out. A recent study from Dulac’s lab is helping shed new light on the neural circuitry involved.
Published in eLife, the work describes a specific set of neurons in the brain that controls the aggressive behavior of adult mice toward infants. Researchers believe these “anti-parental” circuits, found in a small area of the hypothalamus called the perifornical area, are triggered in virgin male and stressed female mice, resulting in aggressive tendencies and neglectful behavior toward infants. The findings illustrate a novel role for these neurons in controlling anti-parental interactions in male and female mice, research that has ramifications in fields such as neuroscience and animal behavior. It may also help scientists get a better handle on how stress and disease affect human parenting.
“The big finding is that there is a very specific set of neurons in the brain that controls that particular form of agonistic [or hostile] behavior,” Dulac said. “Maternal aggression, adult-on-adult aggression — two males attack each other, a female protects her pups — all these other types of aggression rely on distinct circuitry. These circuits specifically orchestrate aggressive behavior toward infants, as well as avoidance and neglect.”
Dulac was awarded a Breakthrough Prize in Life Sciences last year for her pioneering work identifying the neural circuitry that regulates parenting behavior in both males and females. For her this kind of study speaks to one of the major goals of the field: understanding how brain activity generates specific behaviors.
The melting of polar ice is not only shifting the levels of our oceans, it is changing the planet Earth itself. Newly minted Ph.D. Sophie Coulson and her colleagues explained in a recent paper in Geophysical Research Letters that, as glacial ice from Greenland, Antarctica, and the Arctic Islands melts, Earth’s crust beneath these land masses warps, an impact that can be measured hundreds and perhaps thousands of miles away.
“Scientists have done a lot of work directly beneath ice sheets and glaciers,” said Coulson, who did her work in the Department of Earth and Planetary Sciences and received her doctorate in May from the Graduate School of Arts and Sciences. “So they knew that it would deform the region where the glaciers are, but they hadn’t realized that it was global in scale.”
By analyzing satellite data on melt from 2003 to 2018 and studying changes in Earth’s crust, Coulson and her colleagues were able to measure the shifting of the crust horizontally. Their research, which was highlighted in Nature, found that in some places the crust was moving more horizontally than it was lifting. In addition to the surprising extent of its reach, the Nature brief pointed out, this research provides a potentially new way to monitor modern ice mass changes.
To understand how the ice melt affects what is beneath it, Coulson suggested imagining the system on a small scale: “Think of a wooden board floating on top of a tub of water. When you push the board down, you would have the water beneath moving down. If you pick it up, you’ll see the water moving vertically to fill that space.”
These movements have an impact on the continued melting. “In some parts of Antarctica, for example, the rebounding of the crust is changing the slope of the bedrock under the ice sheet, and that can affect the ice dynamics,” said Coulson, who worked in the lab of Jerry Mitrovica, the Frank B. Baird, Jr. Professor of Science.
The current melting is only the most recent movement researchers are observing. “The Arctic is an interesting region because, as well as the modern-day ice sheets, we also have a lasting signal from the last ice age,” Coulson explained. An ice sheet once covered what is now Northern Europe and Scandinavia during the Pleistocene Epoch, the ice age that started about 2.6 million years ago and lasted until roughly 11,000 years ago. “The Earth is actually still rebounding from that ice melting.”
“On recent timescales, we think of the Earth as an elastic structure, like a rubber band, whereas on timescales of thousands of years, the Earth acts more like a very slow-moving fluid,” said Coulson, explaining how these newer repercussions come to be overlaid on the older reverberations. “Ice age processes take a really, really long time to play out, and therefore we can still see the results of them today.”
The implications of this movement are far-reaching. “Understanding all of the factors that cause movement of the crust is really important for a wide range of Earth science problems. For example, to accurately observe tectonic motions and earthquake activity, we need to be able to separate out this motion generated by modern-day ice-mass loss,” she said.
Coulson is continuing her research as a Director’s Postdoctoral Fellow at Los Alamos National Laboratory in New Mexico as part of a climate group that works on future projections of ice sheets and ocean dynamics.
Glenn Antony Milne, professor of Earth and Environmental Sciences at the University of Ottawa, explained that understanding the extent of this movement clarifies all studies of the planet’s crust. “Sophie’s work is important because it is the first to show that recent mass loss of ice sheets and glaciers causes 3D motion of the Earth’s [solid] surface that is greater in magnitude and spatial extent than previously identified,” he said. “Also, one could look for this signal in regional and larger-scale global navigation satellite system datasets to, in principle, produce improved constraints on the distribution of ice mass fluctuations and/or solid Earth structure.”
Testosterone’s wide-reaching effects occur not just in the human body, but across society, powering acts of aggression, violence, and the large disparity in their commission between men and women, according to Harvard human evolutionary biologist Carole Hooven.
Hooven, lecturer and co-director of undergraduate studies in Human Evolutionary Biology, waded directly into the nature versus nurture debate Thursday evening, laying out her case for the hormone’s function as a foundation for aspects of male behavior. She traced the role of testosterone in the natural world, pointing out its part in differentiating males from females across the animal kingdom. Its far higher levels in males — 10 to 20 times those in females — act as a switch that turns on genes, creating stronger, more heavily muscled individuals, along with more aggressive behavior.
From an evolutionary standpoint, the reason for these differences is the biological imperative to mate, said Hooven, whose recent book, “T: The Story of Testosterone, the Hormone that Dominates and Divides Us,” was published in July. She pointed to examples such as the rutting behavior of male deer, whose seasonal testosterone surges cause changes that are both physiological and behavioral, including aggression that causes males to clash for the right to mate with nearby females and the growth of antlers which serve as weapons in those battles.
“What is testosterone? It is evolution’s tool to help male animals convert energy into offspring, which often requires aggression,” Hooven said.
Hooven delivered an online lecture and fielded questions from Daniel Gilbert, Harvard’s Edgar Pierce Professor of Psychology. She said she first began thinking of doing research on these differences during an eight-month visit in the late 1990s to the Kibale Chimpanzee Project in Uganda, founded by Richard Wrangham, Harvard’s Ruth Moore Research Professor of Biological Anthropology. Her time there was cut short by unrest in the region, but it was long enough for her to spend a lot of it observing chimps in their natural environment, including an episode in which a large male beat a female with a stick for nine minutes as the female shielded her infant. That and other episodes, Hooven said, provided examples of parallels between violence among chimps — also largely perpetrated by males — and humans.
“What was interesting was the physical aggression that I saw while spending time with the chimpanzees in the forest,” Hooven said. “One thing that really struck me about my time with the chimpanzees was the sex differences in the behavior of the chimps that so strongly paralleled sex differences in human behavior. And chimpanzees, of course, and wild animals in general, don’t share any aspects of human culture.”
In human development, Hooven said, testosterone levels reach peaks in the developing fetus and in infants right after birth. Levels rise again around puberty. Behavioral differences between boys and girls have been documented, she said, with boys gravitating more toward rough-and-tumble play than girls, an observation which has parallels in the natural world.
In human societies, Hooven said, testosterone’s effects are best seen on a large, rather than an individual, scale. That’s because these effects can vary widely from person to person. For example, even though in most heterosexual couples the male is larger and more muscular, there are many examples where that’s not the case. And many, if not most, individual males are nonviolent, despite national crime statistics that show stark differences in the types of crimes committed by men and women, with vastly more violent crimes, rapes, and murders committed by men. Another wrinkle, she said, is that differences in testosterone levels between males appear to make little difference in factors like sex drive and athletic ability once a certain threshold is reached. The effects are most apparent in the large differences in testosterone levels and behavior between men and women.
Hooven, whose talk was sponsored by the Harvard Museums of Science and Culture, has come under fire from those who argue that behavior — including aggression — is largely learned, its roots being in culture, not biology. They say that emphasizing the role of biology in male aggression could play into the hands of those who believe acts such as attacks and rape should be viewed as less a matter of choice than part of a perpetrator’s nature.
Hooven said the question is an important one and answered that critique by agreeing that culture has a profound effect on behavior. She pointed to the fact that different countries around the world have far different homicide rates, evidence that culture-based attitudes toward violence, which vary from nation to nation, lead to such striking differences. However, Hooven argued, even among countries with very different rates of violence, one thing that is consistent is that in every nation men are the perpetrators far more often than women.
That said, Hooven decried the concept of biological determinism, saying that the existence of these testosterone-based effects should not be an excuse for tolerating aggression, violence, discrimination or other ills. The high stakes, she said, should instead provide a reason to better understand whatever biological underpinnings there are for these behaviors, in order to ultimately arrive at a more effective solution.
“I would argue that both are incredibly important, and in some ways culture is more important than biology,” Hooven said. “We have to start with trying to understand reality. Reality isn’t always comfortable.”
Recently NASA updated its forecast of the chances that the asteroid Bennu, one of the two most hazardous known objects in our solar system, will hit Earth in the next 300 years. New calculations put the odds at 1 in 1,750, a figure slightly higher than previously thought.
The space agency, which has been tracking the building-sized rock since it was discovered in 1999, revised its prediction based on new tracking data.
Even with the small shift in odds, it seems likely we won’t face the kind of scenario featured in the 1998 science-fiction disaster film “Armageddon,” in which Stamper, played by Bruce Willis, and his team had to try to blow up a huge asteroid that was on an extinction-level collision course with the Earth.
(In an unrelated development, NASA plans to launch a mission in November to see whether a spacecraft could hit a sizeable space rock and change its trajectory just in case it ever needs to.)
This raises the question: Just how good should we feel about our odds? We put that question to Lucas B. Janson and Morgane Austern, both assistant professors of statistics.
They compared Bennu’s chances of hitting Earth to the approximate likelihood of:
Flipping a coin and having the first 11 attempts all land heads.
Any four random people sharing a birthday in the same month (the odds of this are roughly 1 in 1,750).
Throwing a dart at a dartboard with your eyes closed and hitting a bullseye.
Winning the state’s VaxMillions lottery on two separate days if every eligible adult resident is entered and a new drawing is held every second.
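The first two comparisons above can be checked with simple arithmetic. A sketch, assuming a fair coin and 12 equally likely birth months (real month lengths would shift the second figure slightly):

```python
# Bennu impact odds reported by NASA: 1 in 1,750
bennu = 1 / 1750

# Eleven heads in a row with a fair coin: (1/2)^11 = 1 in 2,048
eleven_heads = 0.5 ** 11

# Four random people all born in the same month, assuming 12
# equally likely months: 12 * (1/12)^4 = (1/12)^3 = 1 in 1,728
same_month = 12 * (1 / 12) ** 4

for label, p in [("Bennu", bennu),
                 ("11 heads in a row", eleven_heads),
                 ("4 people, same birth month", same_month)]:
    print(f"{label}: 1 in {1 / p:,.0f}")
```

Both come out within a few percent of Bennu’s 1-in-1,750 odds, which is what makes them useful points of comparison.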
Bottom line? Janson, an affiliate in computer science, says that if he were a betting man, he would put his money on our being just fine. Then again, he points out, if he is wrong, “Paying up would be the least of my worries.”
It’s not uncommon for scientists to have to run experiments numerous times to see whether they have a big discovery on their hands. Every once in a while, though, a researcher makes a big find more or less by eyeballing it. That’s essentially what Tiago R. Simões did when he saw pictures of a 231-million-year-old reptile skull, originally found about two decades ago.
“I knew right away because of its age, locality, and some of its anatomical aspects that it was extremely important,” said Simões, a researcher in the lab of Harvard paleontologist Stephanie Pierce. “I knew we needed to give this priority and get the CT scan data to see exactly what we have.”