This article originally appeared on VICE US.
A new brain scan can tell the difference between good surgeons and inexperienced ones
Prominent heart surgeon David Sabiston, who performed an early coronary bypass operation in the 1960s, led a surgical training program at Duke known colloquially as “Decade with Dave.” This was because Sabiston required nearly 10 years in clinical training and research before one could say they’d gained the experience and expertise necessary to become a surgeon. The profession requires not only rigorous mental learning, but manual dexterity too—the fine motor skills to make tiny sutures and guide laparoscopes through the body.
Currently, a budding surgeon’s skills are gauged by another person through extensive testing, which takes up valuable time and resources. But a new study in Science Advances was able to look into the brains of surgeons and determine who was an expert and who was still learning, based on differences in their brain activity.
The researchers used an imaging technique called functional near infrared spectroscopy, or fNIRS, a non-invasive way to measure brain activity in real time. People put on what can look like a hair net or beanie connected to a bunch of wires, which emits near-infrared light into the skull. The light that reflects back is recorded and used to infer blood flow in the brain, which is associated with neural activation.
“We chose to study surgeons mainly because there is a clinical need for more objective and analytical metrics for quantifying surgical motor skill, particularly for surgical training,” says Arun Nemani, a biomedical engineer at Rensselaer Polytechnic Institute and co-author on the paper.
They saw “clear and distinct” differences in brain activation between surgical novices and experts. The novices had much higher prefrontal cortex activation, and significantly lower primary motor cortex activation, than experts. “This is an important finding because primary motor cortex activity is associated with fine motor skills, and prefrontal cortex activity is associated with the initial stages of motor skill learning and strategy development,” Nemani tells me.
From their results, the researchers then developed an algorithm that could identify which doctors were experts and which were novices, based solely on brain behavior. “In short: Using brain imaging metrics, we can accurately measure, classify, and predict motor skill differences in surgeons for the first time,” he says.
Nemani says the most immediate application of this technology would be to bring it to teaching hospitals, as a way to quantify the skills of learning doctors. But since many jobs require fine motor skills, it could be used for other occupations as well.
Still, we should be cautious before resorting to brain-only aptitude tests. Elizabeth Hillman, a biomedical engineer at Columbia University’s Zuckerman Institute, told the Wall Street Journal that imaging “may not truly be capturing all aspects” of a person’s performance. More work needs to be done to mitigate the risk of discriminating against people based on incomplete data, she added.
One-third of the "gluten-free" foods in US restaurants have gluten in them
The number of people who eat gluten-free tripled between 2009 and 2014, whether they medically needed to or not. (“It may be in part because of a public belief that the diet is healthier,” explained a 2016 paper in JAMA Internal Medicine.)
Still, if you have celiac disease—an autoimmune disorder in which even tiny amounts of gluten can cause serious damage to the intestinal lining—the proliferation of access to the gluten-free lifestyle is cause for celebration. There’s arguably never been a better time to be gluten-intolerant, as restaurant menus flood with options. Unfortunately, some places capitalizing on the trend aren’t doing so responsibly, found a preliminary study presented at a meeting of the American College of Gastroenterology, in Philadelphia.
The research, which is not yet peer-reviewed, found that one-third of the "gluten-free" foods sold in US restaurants actually do contain traces of gluten. More than 800 investigators with portable gluten sensors performed 5,600 gluten tests over the course of 18 months. They looked for gluten levels at or above 20 parts per million, which is the cutoff below which a food can officially be labeled gluten-free. They found that 27 percent of gluten-free breakfast meals had gluten in them, as did 34 percent of dinner meals.
“The increased popularity and availability of gluten-free foods may have led to some establishments marketing or labeling items as gluten-free without taking necessary precautions to prevent cross-contamination,” Benjamin Lebwohl, the director of clinical research at The Celiac Disease Center at Columbia University, tells me.
Some foods were riskier than others, including pizza and pasta; more than half of the “gluten-free” pizzas and pastas that were tested had gluten in them. Lebwohl says these foods might be more vulnerable to cross-contamination because gluten-free pasta can be boiled in the same water as regular noodles, and pizza can be cooked in shared ovens dusted with wheat flour.
He recommends taking extra precautions with foods that share prep space, even if they’re advertised as gluten-free. And, if you do have celiac or another gluten intolerance, make it clear to those preparing your food that gluten-free isn’t an accessory you’re trying on: It’s a medical condition. (Unless it is an accessory you’re trying on, in which case maybe you should reconsider.)
“When speaking with waitstaff or kitchen personnel it is important to emphasize that celiac disease is a serious condition and that even trace amounts of gluten could cause harm,” Lebwohl says.
Flu season plays out differently depending on the size of your city
Benjamin Dalziel, an assistant professor of integrative biology and mathematics at Oregon State University, has been interested in the flu since the 2009 pandemic. At the time, he compared commuter patterns in various Canadian cities and saw how wildly different they were. He made a computer model that predicted how flu epidemics might follow suit and look different depending on where they took place. “But that prediction was controversial, and at the time we did not have flu incidence data at the city level that could be used to see if the prediction was true,” he tells me.
Now, in a new paper in Science, Dalziel and collaborators have the data from 603 cities in the United States to support their idea: Flu season plays out differently depending on the size of the city. They found that smaller cities tended to have more flu cases concentrated in a shorter period of time, around peak winter season, while in larger urban areas cases were more spread out over time.
“It’s important to stress that our results do not show that some cities are safer than others for flu: Rather, we found differences in the relative timing of when cases occur,” he says.
This could lead to recommendations for how different cities handle their flu seasons. In smaller cities, where many people get the flu at once, it could be beneficial to have a surplus of support and assistance around peak winter times. Bigger cities, meanwhile, could enhance flu surveillance, keeping watch for those early- and late-season flu outbreaks.
“No matter where you live the recommended protective actions are the same, including: Wash your hands often, cover your cough, and get a flu shot,” he says.
Postdiction is the opposite of prediction, and it’s a thing
Many neuroscientists believe the brain to be a predictive organ, which means that instead of simply interpreting all the sensory signals and input it gets from the outside world, it makes predictions based on past experiences to construct reality.
Now researchers from Caltech have shown another way the brain influences our perceptions. Two illusions in a new study were developed to show that something your brain sees can influence what it thinks it perceived earlier, a phenomenon called “postdiction”—the opposite of prediction.
If you’ve ever seen an optical illusion, you know that our brain’s perception of the world can be fooled into seeing colors, shapes, or other things that aren’t really there. Illusions can be a window into the functioning of the brain, says Noelle Stiles, a postdoctoral scholar and research associate at USC.
“For example, how does the brain determine reality with information from multiple senses that is at times noisy and conflicting?” she asks in a release. “The brain uses assumptions about the environment to solve this problem. When these assumptions happen to be wrong, illusions can occur as the brain tries to make the best sense of a confusing situation. We can use these illusions to unveil the underlying inferences that the brain makes.”
In the illusions in the study, a participant was presented with a series of flashes of light and beeps that occurred rapidly and close together. Because it all happened too fast, their brains were susceptible to using postdiction to determine what they really saw.
They called the first illusion the Illusory Rabbit. There is a short beep and a quick flash of light on the left side of a screen; 58 milliseconds later, a beep is played alone; and 58 milliseconds after that, another beep sounds with a flash of light on the right side of the screen. There are only two flashes, but most people see three. They see this third, illusory flash at the same time as the second beep, located in the center of the screen, between the two real flashes.
"When the final beep-flash pair is later presented, the brain assumes that it must have missed the flash associated with the unpaired beep and quite literally makes up the fact that there must have been a second flash that it missed," Stiles says in the release. "This already implies a postdictive mechanism at work. But even more importantly, the only way that you could perceive the shifted illusory flash would be if the information that comes later in time—the final beep-flash combination—is being used to reconstruct the most likely location of the illusory flash as well."
In their second illusion, named the Invisible Rabbit, people were shown three flashes: one on the left, one in the middle, and one on the right of the screen. This time, there were only two corresponding beeps: one each for the flashes on the left and right, but none for the middle one. In this illusion, most people don’t see the second flash, the one without the beep. “The absence of the second beep leads the brain to decide after the fact that there actually was no flash, even though it was in fact present,” the release says.
“These illusions are among the very rare cases where sound affects vision, not vice versa, indicating dynamic aspects of neural processing that occur across space and time,” professor of experimental psychology Shinsuke Shimojo says in a release.
So do we do this in real life, and not just during a carefully designed illusion? Yes, Shimojo tells me. It can happen more subtly in cases of “causal misattribution,” which is when you associate something you did earlier with a later outcome, without evidence. For example, if you spend a day eating various foods, get sick later, and then read an article about the dangers of gluten, you might convince yourself it was the bread that made you sick.
“Another example, inspired by neuroscientist Benjamin Libet who influenced our work: You’re driving down a dark road, see something dart across it, and slam on the brakes. Only after the fact will you realize it was, say, a cat. People modify their perception/memory—especially in causality, but not limited to it—to come up with a more consistent and compact interpretation of the world,” Shimojo says.