Author: JamesPereza
Diabetic eye damage prevented or reversed by novel drug in study
Researchers at the Wilmer Eye Institute, Johns Hopkins Medicine say they have evidence that an experimental drug may prevent or slow vision loss in people with diabetes. The results come from a study that used mouse models as well as human retinal organoids and eye cell lines. Eye conditions that cause vision loss are common complications of diabetes, affecting nearly 8 million Americans, a number likely to almost double by 2040, according to the National Institutes of Health.
The team focused on models of two common diabetic eye conditions: proliferative diabetic retinopathy and diabetic macular edema, both of which affect the retina, the light-sensing tissue at the back of the eye that also transmits vision signals to the brain. In proliferative diabetic retinopathy, new blood vessels overgrow on the retina’s surface, causing bleeding or retinal detachments and profound vision loss. In diabetic macular edema, blood vessels in the eye leak fluid, leading to swelling of the central retina, damaging the retinal cells responsible for central vision.
Results of the study, published May 25 in the Journal of Clinical Investigation, show that a compound called 32-134D, previously shown to slow liver tumor growth in mice, prevented diabetic retinal vascular disease by decreasing levels of a protein called HIF, or hypoxia-inducible factor. Doses of 32-134D also appeared to be safer than another treatment that also targets HIF and is under investigation to treat diabetic eye disease.
Current treatment for both proliferative diabetic retinopathy and diabetic macular edema includes eye injections with anti-vascular endothelial growth factor (anti-VEGF) therapies. Anti-VEGF therapies can halt the growth and leakiness of blood vessels in the retina in patients with diabetes. However, these treatments aren’t effective for many patients, and may cause side effects with prolonged use, such as increased internal eye pressure or eye tissue damage.
Akrit Sodhi, M.D., Ph.D., an author of the new study, says that in general, the idea of inhibiting HIF, a fundamental protein in the body, has raised concerns about toxicity to many tissues and organs. But when his team screened a library of HIF inhibitor drugs and conducted extensive testing, “We came to find that the drug examined in this study, 32-134D, was remarkably well tolerated in the eyes and effectively reduced HIF levels in diseased eyes,” says Sodhi, associate professor of ophthalmology and the Branna and Irving Sisenwein professor of ophthalmology at the Johns Hopkins University School of Medicine and the Wilmer Eye Institute.
HIF, a type of protein known as a transcription factor, has the ability to switch certain genes, including vascular endothelial growth factor (VEGF), on or off throughout the body. In the eye, elevated levels of HIF cause genes like VEGF to increase blood vessel production and leakiness in the retina, contributing to vision loss.
To test 32-134D, researchers dosed multiple types of human retinal cell lines that express proteins promoting blood vessel production and leakiness. When they measured genes regulated by HIF in cells treated with 32-134D, they found that expression had returned to near-normal levels, enough to halt new blood vessel creation and maintain blood vessels’ structural integrity.
Researchers also tested 32-134D in two different adult mouse models of diabetic eye disease. In both models, injections were administered into the eye. Five days post-injection, the researchers observed diminished levels of HIF, and also saw that the drug effectively inhibited the creation of new blood vessels or blocked vessel leakage, therefore slowing progression of the animals’ eye disease. Sodhi and his team said they also were surprised to find that 32-134D lasted in the retina at active levels for about 12 days following a single injection without causing retinal cell death or tissue wasting.
“This paper highlights how inhibiting HIF with 32-134D is not just a potentially effective therapeutic approach, but a safe one, too,” says Sodhi. “People facing diabetic eye disease and vision loss include our family members, friends, co-workers—this is a disease that impacts a large group of people. Having safer therapies is critical for this growing population of patients.”
Sodhi says that further studies in animal models are needed before moving to clinical trials.
Sudden infant death syndrome may have biologic cause
Sudden infant death syndrome (SIDS) is the death of an apparently healthy infant before their first birthday that remains unexplained even after thorough investigation. Death generally occurs while the infant is sleeping.
While rare, it is the leading cause of post-neonatal infant death in the United States today, occurring in 103 out of 100,000 live births a year. Despite the initial success of national public health campaigns in the United States in the 1990s promoting safe sleep environments and healthier sleep positions for infants, rates have remained essentially unchanged over the last three decades.
Researchers collected tissue from the San Diego Medical Examiner’s Office from infant deaths between 2004 and 2011. They then examined the brain stems of 70 infants who died during that period, testing them for consistent abnormalities.
They found that the serotonin 2A/C receptor is altered in sudden infant death cases compared to control cases of infant deaths. Previous research in rodents has shown that 2A/C receptor signaling contributes to arousal and autoresuscitation, protecting brain oxygen status during sleep. This new research supports the idea that a biological abnormality in some infants makes them vulnerable to death under certain circumstances.
The investigators here believe that sudden infant death syndrome occurs when three things happen together: a child is in a critical period of cardiorespiratory development in their first year, the child faces an outside stressor like a face-down sleep position or sharing a bed, and the child has a biological abnormality that makes them vulnerable to respiratory challenges while sleeping.
“The work presented builds upon previous work by our laboratory and others showing abnormalities in the serotonergic system of some SIDS infants,” says the paper’s lead author, Robin Haynes.
“Although we have identified abnormalities in the serotonin 2A/C receptor in SIDS, the relationship between the abnormalities and cause of death remains unknown.”
“Much work remains in determining the consequence of abnormalities in this receptor in the context of a larger network of serotonin and non-serotonin receptors that protect vital functions in cardiac and respiratory control when challenged. Currently, we have no means to identify infants with biological abnormalities in the serotonergic system. Thus, adherence to safe-sleep practices remains critical.”
Major progress in curing brain tumors by blocking certain functions in cells with a docked molecule
Researchers at the University of Gothenburg, working with French colleagues, have successfully developed a method able to kill the aggressive brain tumor glioblastoma. By blocking certain functions in the cell with a docked molecule, the researchers cause the cancer to die of stress.
Cancer cells, especially those that form aggressive tumors, are in one way or another out of control and live a very stressful existence. To manage this stress, the cancer cells hijack mechanisms that the healthy cells use to regulate protein production and process the surplus proteins that they create. Without these hijacked mechanisms, the cancer cell would die.
“We have now succeeded in stopping this hijacking by inserting a specially developed molecule in the cells that inhibits one of these hijacked adaptive mechanisms in the cancer cells. This causes the cancer to self-destruct,” says Leif Eriksson, professor of physical chemistry at the University of Gothenburg.
Swedish-French collaboration
Leif Eriksson’s group has worked with a research group at INSERM in Rennes, France. Using supercomputers and advanced simulations, the researchers developed a version of the molecule that can also pass through the blood-brain barrier that protects brain tissue. They have presented their findings in the journal iScience.
The breakthrough applies to glioblastoma brain tumors. These make up about 45 percent of all brain tumors, and around 400 Swedes are diagnosed with glioblastomas every year. For the EU as a whole, there are 19,000 cases annually. Currently, the prognosis for malignant glioblastomas is very poor. Only a few percent of patients survive five years after diagnosis and treatment.
“Today, cancer treatment consists of surgery, radiation and chemotherapy. Unfortunately, not all cancer cells are killed and the tumor returns. Once the cancer relapses, the tumor cells have often spread and developed resistance,” says Leif Eriksson.
Studying how it can be used with other cancers
Studies with the new method have shown very promising results. The researchers saw that a combination treatment with the new substance and chemotherapy was enough to completely kill the tumors while also preventing relapse. Since the tumors were stressed to death, all cancer cells disappeared, and in animal experiments with mice there was no cancer relapse after 200 days. In comparative experiments with just chemotherapy, the brain tumors relapsed after 100 days and grew rapidly.
“These are the first clear results with brain tumors that can lead to a treatment which completely avoids surgery and radiation. We have also begun studying the use of our substance on other aggressive tumor forms like pancreatic cancer, triple-negative breast cancer and certain liver cancers,” says Eriksson.
There are other types of brain tumors that develop differently than glioblastomas. This new method does not work with these forms of cancer.
No side effects
Current treatments for brain tumors often have severe side effects. With this new treatment, the researchers have not yet seen any side effects with the substance. The treated animals maintained weight, had no apparent changes in behavior and there was no sign of impact on the liver. While more in-depth studies are needed, extensive cell tests have shown that the substance is non-toxic for healthy cells even at very high doses.
Research on this molecule will now continue. There is still much to do, including optimizing the treatment procedure and further animal experiments. But Leif Eriksson hopes and believes that the pharmaceutical can move into clinical treatment relatively quickly.
“It largely depends on whether funding comes in that allows taking the different steps as smoothly as possible. If I’m optimistic, perhaps it might take five years. That’s a short timeframe, but at the same time glioblastomas are nearly 100 percent fatal, so any improvement in medical care is major progress,” says Eriksson.
Scientists discover that metabolic sensor may play role in Alzheimer’s disease
It’s well-known that people with type 2 diabetes are at an increased risk of developing Alzheimer’s disease, but the reason why isn’t fully understood and is an area of current research.
Now, scientists at Wake Forest University School of Medicine have uncovered a novel mechanism showing that increased sugar intake and elevations in blood glucose are sufficient to cause amyloid plaque buildup in the brain, which increases the risk of Alzheimer’s disease. Amyloid plaque is made up of toxic proteins in the brain.
The study findings appear online in JCI Insight.
“We wanted a better understanding of the metabolic changes in diabetes that put the brain at risk for Alzheimer’s disease or accelerate the pathology already forming in the brain of individuals who will go on to an Alzheimer’s disease diagnosis,” said Shannon Macauley, Ph.D., associate professor of physiology and pharmacology at Wake Forest University School of Medicine and principal investigator of the study.
Using a mouse model, the research team demonstrated that more amyloid plaques form when sugar water is given instead of regular drinking water. They also found that elevations in blood sugar increase the production of amyloid-beta in the brain.
“This finding is significant because it demonstrates that consuming too much sugar is enough to cause amyloid plaque proliferation and increase the risk of Alzheimer’s disease,” Macauley said.
To better understand the molecular drivers of this phenomenon, the research team identified a metabolic sensor on neurons that links changes in metabolism with neuronal firing and amyloid-beta production. These sensors are known as adenosine triphosphate (ATP)-sensitive potassium channels, or KATP channels. ATP is an energy source that all living cells need to survive, and these channels sense how much energy is available for healthy function. Disrupting these sensors changes how the brain normally works.
“Using genetic techniques in mice, we removed these sensors from the brain and showed that elevation in blood sugar no longer increased amyloid-beta levels or amyloid plaque formation,” Macauley said.
Next, researchers explored the expression of these metabolic sensors in the human Alzheimer’s disease brain and again found that the expression of these channels changes with an Alzheimer’s disease diagnosis.
According to Macauley, the study suggests that these metabolic sensors may play a role in the development of Alzheimer’s disease and could ultimately lead to new treatments.
“What’s most notable is that pharmacological manipulation of these KATP channels may hold a therapeutic benefit in reducing amyloid-beta pathology for diabetic and prediabetic patients,” said Macauley.
Phase I trial demonstrates first pharmacological treatment able to improve cardiac function in stiff-heart syndrome
Transthyretin-related cardiac amyloidosis is a progressive disease characterized by the deposition of amyloid protein fibrils in the heart. Amyloid fibril deposition thickens and stiffens the heart walls, and the disease is also known as stiff-heart syndrome. The accumulation of amyloid fibrils causes heart failure, and patients suffer from fluid retention, fatigue, and arrhythmias. The disease can be caused by genetic mutations or related to aging. Prognosis is poor, and untreated patients survive for an average of just 3 years.
Now, the results of a study published in The New England Journal of Medicine (NEJM) promise to radically alter the prospects of patients with this disease. The study was led by Dr. Pablo García-Pavía, who heads the Inherited Cardiac Diseases Section at Hospital Universitario Puerta de Hierro and is a research scientist at the Centro Nacional de Investigaciones Cardiovasculares (CNIC) and within the Spanish cardiovascular research network (CIBERCV).
Coinciding with the publication of the study, Dr. Pablo García-Pavía has today presented the results of the first clinical trial with an amyloid-removing drug for the treatment of cardiac amyloidosis.
The study represents a major advance in the treatment of the disease. Although currently available treatments effectively prevent the accumulation of more amyloid fibrils and delay disease progression, they do not directly remove any amyloid protein already deposited in the heart.
Current treatment options include transthyretin-stabilizing therapy and measures to control associated cardiovascular complications. The only intervention currently able to restore cardiac function in this disease is heart transplantation.
The only drug approved to treat transthyretin-related cardiac amyloidosis is tafamidis, an oral transthyretin stabilizer. Tafamidis improves survival and reduces hospitalizations; however, it does not reverse disease symptoms that are already established.
The initial results of the trial, which included 40 patients in France, the Netherlands, Germany, and Spain and was coordinated by Dr. García-Pavía, show that the new drug is safe and appears to reduce the amount of amyloid protein deposited in the heart.
Developed by the Swiss company Neurimmune, the new medication is an antibody that binds to transthyretin amyloid protein. The antibody was first isolated from memory B cells obtained from healthy elderly individuals.
In the study, the antibody was used to stimulate the patients’ own defense systems, resulting in the elimination of cardiac amyloid fibrils. The antibody was administered to patients intravenously in progressively increasing monthly doses over a 12-month period.
“Patients who received higher antibody doses seemed to show a greater reduction in amyloid deposits in the heart and greater improvements in a range of cardiac parameters,” said Dr. García-Pavía.
The NEJM article concludes that the phase I proof-of-concept study demonstrates the safety of this treatment in patients and supports further clinical trials of this antibody.
New study shows noninvasive brain imaging can distinguish among hand gestures
Researchers from University of California San Diego have found a way to distinguish among hand gestures that people are making by examining only data from noninvasive brain imaging, without information from the hands themselves. The results are an early step in developing a non-invasive brain-computer interface that may one day allow patients with paralysis, amputated limbs or other physical challenges to use their mind to control a device that assists with everyday tasks.
The research, recently published online ahead of print in the journal Cerebral Cortex, represents the best results thus far in distinguishing single-hand gestures using a completely noninvasive technique, in this case, magnetoencephalography (MEG).
“Our goal was to bypass invasive components,” said the paper’s senior author Mingxiong Huang, Ph.D., co-director of the MEG Center at the Qualcomm Institute at UC San Diego. Huang is also affiliated with the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering and the Department of Radiology at UC San Diego School of Medicine, as well as the Veterans Affairs (VA) San Diego Healthcare System. “MEG provides a safe and accurate option for developing a brain-computer interface that could ultimately help patients.”
The researchers underscored the advantages of MEG, which uses a helmet with an embedded 306-sensor array to detect the magnetic fields produced by electric currents flowing between neurons in the brain. Alternative brain-computer interface techniques include electrocorticography (ECoG), which requires surgical implantation of electrodes on the brain surface, and scalp electroencephalography (EEG), which locates brain activity less precisely.
“With MEG, I can see the brain thinking without taking off the skull and putting electrodes on the brain itself,” said study co-author Roland Lee, MD, director of the MEG Center at the UC San Diego Qualcomm Institute, emeritus professor of radiology at UC San Diego School of Medicine, and physician with VA San Diego Healthcare System. “I just have to put the MEG helmet on their head. There are no electrodes that could break while implanted inside the head; no expensive, delicate brain surgery; no possible brain infections.”
Lee likens the safety of MEG to taking a patient’s temperature. “MEG measures the magnetic energy your brain is putting out, like a thermometer measures the heat your body puts out. That makes it completely noninvasive and safe.”
Rock paper scissors
The current study evaluated the ability to use MEG to distinguish between hand gestures made by 12 volunteer subjects. The volunteers were equipped with the MEG helmet and randomly instructed to make one of the gestures used in the game Rock Paper Scissors (as in previous studies of this kind). MEG functional information was superimposed on MRI images, which provided structural information on the brain.
To interpret the data generated, Yifeng (“Troy”) Bu, an electrical and computer engineering Ph.D. student in the UC San Diego Jacobs School of Engineering and first author of the paper, wrote a high-performing deep learning model called MEG-RPSnet.
“The special feature of this network is that it combines spatial and temporal features simultaneously,” said Bu. “That’s the main reason it works better than previous models.”
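The article does not describe MEG-RPSnet’s architecture, so the following is only a loose, hypothetical sketch of the general idea Bu describes: combining spatial information (which of the helmet’s sensors are active) with temporal information (when activity occurs) into one feature set. The 306-channel input shape echoes the sensor array mentioned above; the specific feature choices are illustrative assumptions, not the study’s method.

```python
import numpy as np

def extract_features(trial):
    """Combine spatial and temporal features from one simulated MEG trial.

    trial: array of shape (channels, timepoints), e.g. (306, 200).
    Returns a 1-D vector mixing both kinds of information.
    """
    # Spatial feature: per-channel signal power (WHICH sensors are active)
    spatial = (trial ** 2).mean(axis=1)      # shape (channels,)
    # Temporal feature: average signal envelope (WHEN activity occurs)
    temporal = np.abs(trial).mean(axis=0)    # shape (timepoints,)
    return np.concatenate([spatial, temporal])

# Toy example: one simulated 306-channel trial, 200 time samples
rng = np.random.default_rng(0)
trial = rng.standard_normal((306, 200))
features = extract_features(trial)
print(features.shape)  # (506,)
```

In a real classifier these fused features would feed a learned model; the point of the sketch is only that spatial and temporal views of the same recording carry complementary information.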
When the results of the study were in, the researchers found that their techniques could be used to distinguish among hand gestures with more than 85% accuracy. These results were comparable to those of previous studies with a much smaller sample size using the invasive ECoG brain-computer interface.
The team also found that MEG measurements from only half of the brain regions sampled could generate results with only a small (2–3%) loss of accuracy, indicating that future MEG helmets might require fewer sensors.
Looking ahead, Bu noted, “This work builds a foundation for future MEG-based brain-computer interface development.”
In addition to Huang, Lee and Bu, the article, “Magnetoencephalogram-based brain–computer interface for hand-gesture decoding using deep learning,” was authored by Deborah L. Harrington, Qian Shen and Annemarie Angeles-Quinto of VA San Diego Healthcare System and UC San Diego School of Medicine; Hayden Hansen of VA San Diego Healthcare System; Zhengwei Ji, Jaqueline Hernandez-Lucas, Jared Baumgartner, Tao Song and Sharon Nichols of UC San Diego School of Medicine; Dewleen Baker of VA Center of Excellence for Stress and Mental Health and UC San Diego School of Medicine; Imanuel Lerman of UC San Diego, its School of Medicine and VA Center of Excellence for Stress and Mental Health; and Ramesh Rao (director of Qualcomm Institute), Tuo Lin and Xin Ming Tu of UC San Diego.
Investigation raises questions over lack of ‘substantial evidence’ for FDA approved antibiotic
Drugs approved in the US require “substantial evidence” that they are effective. But an investigation by The BMJ into the recent approval of the antibiotic Recarbrio from Merck suggests that these standards are being bypassed.
Peter Doshi, senior editor at The BMJ, describes how US Food and Drug Administration (FDA) scientists had serious doubts about Recarbrio—a product 40 times more expensive than an existing generic alternative—but the agency approved it anyway.
Did the FDA break its own rules in approving this antibiotic, and what does this case tell us about problems within the agency, he asks?
Recarbrio is a combination therapy made up of a new beta-lactamase inhibitor (relebactam) and a decades old Merck antibiotic (imipenem-cilastatin) to treat complicated infections. It costs between $4,000 and $15,000 for a course, compared with a couple of hundred dollars for the generic version of Merck’s old antibiotic.
In its FDA application, Merck submitted results from two clinical trials comparing Recarbrio with imipenem in adults with complicated urinary tract infections and in patients with complex intra-abdominal infections.
But FDA reviewers noted that Merck had studied the wrong patient population to evaluate the added benefits of the new drug, and said the trial for urinary tract infections showed that Recarbrio was as much as 21% worse in effectiveness than the older, less-expensive imipenem.
The FDA concluded that “these studies are not considered adequate and well-controlled.” And of a third clinical study, the FDA called it a “very small,” “difficult to interpret” “descriptive trial with no pre-specified plans for hypothesis testing.”
Yet despite all three clinical studies not providing substantial evidence of effectiveness, FDA approved Recarbrio.
“Instead of basing its decision on the clinical trials in Merck’s application, FDA’s determination of Recarbrio’s efficacy was justified on past evidence that imipenem was effective, plus—to justify the new relebactam component—in vitro (lab) studies and animal models of infection rather than evidence from human trials as required by law,” writes Doshi.
Others are concerned that Recarbrio’s approval essentially amounts to a return to a way of regulating medicines that the FDA abandoned a half century ago, before the agency adopted its “substantial evidence” standard.
Our brain prefers positive vocal sounds that come from our left
Sounds that we hear around us are defined physically by their frequency and amplitude. But for us, sounds have a meaning beyond those parameters: we may perceive them as pleasant or unpleasant, ominous or reassuring, interesting and rich in information, or just noise.
One aspect that affects the emotional ‘valence’ of sounds—that is, whether we perceive them as positive, neutral, or negative—is where they come from. Most people rate looming sounds, which move towards them, as more unpleasant, potent, arousing, and intense than receding sounds, especially if they come from behind rather than from the front. This bias might have a plausible evolutionary advantage: to our ancestors on the African savannah, a sound approaching from behind their vulnerable back might have signaled a predator stalking them.
Now, neuroscientists from Switzerland have shown another effect of direction on emotional valence: we respond more strongly to positive human sounds, like laughter or pleasant vocalizations, when these come from the left.
“Here we show that human vocalizations that elicit positive emotional experiences yield strong activity in the brain’s auditory cortex when they come from the listener’s left side. This does not occur when positive vocalizations come from the front or right,” said first author Dr. Sandra da Costa, a research staff scientist at the EPFL in Lausanne, Switzerland.
“We also show that vocalizations with neutral or negative emotional valence, for example meaningless vowels or frightened screams, and sounds other than human vocalizations do not have this association with the left side.”
From erotic vocalizations to a ticking bomb
Da Costa and colleagues used functional magnetic resonance imaging (fMRI) to compare how strongly the brains of 13 volunteers responded to sounds coming from the left, front, or right. The volunteers were women and men in their mid-twenties, all right-handed, and none were trained in music. The researchers compared the brain’s response across six categories of sounds: besides positive human vocalizations like erotic sounds, they played back neutral and negative vocalizations, like meaningless vowels and a frightened scream; and positive, neutral, and negative non-vocalizations, like applause, wind, and a ticking bomb.
Da Costa et al. focused on brain regions known to be important for the early stages of sound processing, the primary auditory areas A1 and R, the surrounding other early-stage auditory areas, and the ‘voice area’ (VA). Each of these areas occurs in the left and right hemisphere of the brain.
The results showed that A1 and R in both hemispheres became maximally active when listening to positive vocalizations coming from the left, and much less when listening to positive vocalizations coming from the front or right, to neutral or negative vocalizations, or to non-vocalizations.
Auditory cortex discriminates in favor of positive vocalizations from left
“The strong activation by vocalizations with positive emotional valence coming from the left takes place in the primary auditory cortex of either hemisphere: the first areas in the brain cortex to receive auditory information. Our findings suggest that the nature of a sound, its emotional valence, and its spatial origin are first identified and processed there,” said co-author Dr. Tiffany Grisendi.
In addition, area L3 in the right hemisphere, but not its twin in the left hemisphere, also responded more strongly to positive vocalizations coming from the left or right compared to those coming from the front. In contrast, the spatial origin of the sound didn’t impact the response to non-vocalizations.
The evolutionary significance of our brain’s bias in favor of positive vocalizations coming from the left is still unclear.
Researchers identify 10 pesticides toxic to neurons involved in Parkinson’s
Researchers at UCLA Health and Harvard have identified 10 pesticides that significantly damaged neurons implicated in the development of Parkinson’s disease, providing new clues about environmental toxins’ role in the disease.
While environmental factors such as pesticide exposure have long been linked to Parkinson’s, it has been harder to pinpoint which pesticides may raise risk for the neurodegenerative disorder. In California alone, the nation’s largest agricultural producer and exporter, nearly 14,000 pesticide products with over 1,000 active ingredients are registered for use.
Through a novel pairing of epidemiology and toxicity screening that leveraged California’s extensive pesticide use database, UCLA and Harvard researchers were able to identify 10 pesticides that were directly toxic to dopaminergic neurons. The neurons play a key role in voluntary movement, and the death of these neurons is a hallmark of Parkinson’s.
Further, the researchers found that co-exposure to pesticides typically used in combination in cotton farming was more toxic than exposure to any single pesticide in that group.
For this study, published May 16 in Nature Communications, UCLA researchers examined exposure history going back decades for 288 pesticides among Central Valley patients with Parkinson’s disease who had participated in previous studies.
The researchers were able to determine long-term exposure for each person and then, using what they labeled a pesticide-wide association analysis, tested each pesticide individually for association with Parkinson’s. From this untargeted screen, researchers identified 53 pesticides that appeared to be implicated in Parkinson’s—most of which had not been previously studied for a potential link and are still in use.
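As a rough illustration of what an untargeted “exposure-wide” screen like this can look like, the sketch below tests every simulated pesticide for association with case status and corrects for multiple comparisons. The data are simulated and the two-proportion z-test with Bonferroni correction is an assumed stand-in; the study’s actual statistical methods are not described here.

```python
import math
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_pesticides = 500, 50
cases = rng.integers(0, 2, n_subjects).astype(bool)                # disease status
exposed = rng.integers(0, 2, (n_subjects, n_pesticides)).astype(bool)
exposed[cases, 0] = True   # plant a genuine association for pesticide 0

def two_proportion_p(k1, n1, k2, n2):
    """Two-sided z-test p-value for a difference in exposure rates."""
    p_pool = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (k1 / n1 - k2 / n2) / se
    return math.erfc(abs(z) / math.sqrt(2))

n_cases, n_ctrl = cases.sum(), (~cases).sum()
hits = []
for j in range(n_pesticides):
    k_case = np.sum(cases & exposed[:, j])     # exposed cases
    k_ctrl = np.sum(~cases & exposed[:, j])    # exposed controls
    p = two_proportion_p(k_case, n_cases, k_ctrl, n_ctrl)
    if p * n_pesticides < 0.05:                # Bonferroni correction
        hits.append(j)

print(hits)  # the planted pesticide 0 should be among the hits
```

Screening every exposure agnostically, as here, is what lets unexpected candidates surface rather than only those chosen in advance.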
Those results were shared for lab analysis led by Richard Krolewski, MD, Ph.D., an instructor of neurology at Harvard and neurologist at Brigham and Women’s Hospital. He tested the toxicity for most of those pesticides in dopaminergic neurons that had been derived from Parkinson’s patients through what’s known as induced pluripotent stem cells, which are a type of “blank slate” cell that can be reprogrammed into neurons that closely resemble those lost in Parkinson’s disease.
The 10 pesticides identified as directly toxic to these neurons were four insecticides (dicofol, endosulfan, naled, propargite), three herbicides (diquat, endothall, trifluralin), and three fungicides (copper sulfate [basic and pentahydrate] and folpet). Most of these pesticides are still in use today in the United States.
Aside from their toxicity in dopaminergic neurons, there is little that unifies these pesticides. They have a range of use types, are structurally distinct, and do not share a prior toxicity classification.
Researchers also tested the toxicity of multiple pesticides that are commonly applied in cotton fields around the same time, according to California’s pesticide database. Combinations involving trifluralin, one of the most commonly used herbicides in California, produced the most toxicity. Previous research in the Agricultural Health Study, a large research project involving pesticide applicators, had also implicated trifluralin in Parkinson’s.
Kimberly Paul, Ph.D., a lead author and assistant professor of neurology at UCLA, said the study demonstrated their approach could broadly screen for pesticides implicated in Parkinson’s and better understand the strength of these associations.
“We were able to implicate individual agents more than any other study has before, and it was done in a completely agnostic manner,” Paul said. “When you bring together this type of agnostic screening with a field-to-bench paradigm, you can pinpoint pesticides that look like they’re quite important in the disease.”
The researchers are next planning to study epigenetic and metabolomic features related to exposure using integrative omics to help describe which biologic pathways are disrupted among Parkinson’s patients who experienced pesticide exposure. More detailed mechanistic studies of the specific neuronal processes impacted by pesticides such as trifluralin and copper are also underway at the Harvard/Brigham and Women’s labs.
The lab work is focused on distinct effects on dopamine neurons and cortical neurons, which are important for the movement and cognitive symptoms in Parkinson’s patients, respectively. The basic science is also expanding to studies of pesticides on non-neuronal cells in the brain—the glia—to better understand how pesticides influence the function of these critical cells.
How do combat-related injuries and their treatments affect bone health?
Combat-related injuries to bone are common in military personnel and can lead to pain and disability. Results from a new study in the Journal of Bone and Mineral Research suggest that amputations for such injuries may negatively affect bone mass.
In the study of 575 male adult UK military personnel with combat-related traumatic injuries and 562 without such injuries, veterans who sustained traumatic amputations often had low bone density in the hip region. Changes in bone health appeared to be mechanically driven rather than systemic and were only evident in those with lower limb amputations.
“We hope these results will drive further research into ways to reverse bone mineral density changes,” said co-author Group Captain Alex Bennett, Defence Professor of Rehabilitation, Defence Medical Rehabilitation Centre. “We need to investigate the role of prosthetics and exercise in reversing bone mineral density loss to reduce the longer-term risk of hip fracture. Because systemic treatments like bisphosphonates are not indicated in this young population with bone mineral density loss, it is important to understand other ways to reduce their hip fracture risk.”
Recent conflicts in Iraq and Afghanistan resulted in large numbers of personnel exposed to combat-related traumatic injury (CRTI), including 265 UK military personnel who sustained a total of 416 amputations, most frequently of the lower limb.(1) The mean age at the time of amputation was 22 years, an age at which osteoporosis is uncommon.(2) Yet in studies following amputees for more than 5 years after amputation, disproportionate numbers (more than 50%) are diagnosed with osteoporosis or osteopenia.(3–5) CRTI can induce systemic inflammation(6) and hormonal changes,(7) both of which have been shown to lead to osteoporosis. CRTI with traumatic amputation therefore has the potential to cause osteoporosis not only through systemic inflammation and hormonal changes but also through altered loading. Although osteoporosis is a documented long-term complication of lower limb amputation, it is usually observed in older, less active subjects with comorbidities, so it is unknown whether it is secondary to systemic changes or to changes in the loading environment.(3, 8, 9) Comparing CRTI with and without traumatic amputation allows systemic effects to be distinguished from localized effects, because amputation changes the loading on the amputated limb.
The long-term consequences of osteopenia and osteoporosis include stress fractures and femoral neck and vertebral wedge fractures, with serious implications for loss of mobility, physical dependency, and morbidity, including compromised future prosthetic use. Although there is currently no direct evidence that amputation influences fracture risk, lower bone mineral density (BMD) indirectly implies a greater risk.
Current systemic treatments include hormone replacement therapy and use of bone-preserving medication such as bisphosphonates, but such treatments have been shown to have adverse effects elsewhere in the body.(10, 11) It is therefore important to understand whether amputees suffer from systemically driven osteopenia/osteoporosis or whether it is a localized phenomenon similar to disuse osteopenia.
There is some evidence in the literature that amputee bone loss is commonly localized to the amputated limb, although this evidence comes from studies in older groups that used X-ray data as a surrogate for BMD, included small numbers of amputees, lacked control groups, and did not control for differences in activity levels, smoking, or body mass index (BMI).(12–15) It is therefore postulated that BMD loss in amputees is a mechanical phenomenon, similar to disuse osteopenia, in which altered, nonphysiological loading after amputation drives progressive bone loss over the course of many remodeling cycles. The literature suggests this may arise from: offloading by the predominantly ischial-tuberosity weight-bearing prosthetic socket,(13) which would be worse for above-knee than below-knee amputees; bed rest immediately post surgery;(12) reduced activity as ambulation becomes more challenging;(16, 17) and significantly lowered muscle forces from extreme atrophy.(18) Diagnosis discordance in measures of BMD can be used to distinguish local from systemic phenomena: minor discordance occurs when the scores at two measurement sites are separated by one diagnostic class, and major discordance when one site is considered osteoporotic and the other normal.(19) The presence of discordance indicates that BMD loss is localized rather than systemic.
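The discordance grading described above can be sketched in code. The following is an illustrative helper based on the standard WHO T-score cut-offs (normal T ≥ −1.0, osteopenia −2.5 < T < −1.0, osteoporosis T ≤ −2.5), not code from the study:

```python
def diagnostic_class(t_score: float) -> int:
    """Return 0 = normal (T >= -1.0), 1 = osteopenia (-2.5 < T < -1.0),
    2 = osteoporosis (T <= -2.5), per the WHO T-score cut-offs."""
    if t_score >= -1.0:
        return 0
    if t_score > -2.5:
        return 1
    return 2

def discordance(spine_t: float, hip_t: float) -> str:
    """Grade diagnosis discordance between two measurement sites:
    'major' when one site is normal and the other osteoporotic,
    'minor' when the sites differ by one diagnostic class,
    'none' when both sites fall in the same class."""
    gap = abs(diagnostic_class(spine_t) - diagnostic_class(hip_t))
    return {0: "none", 1: "minor", 2: "major"}[gap]
```

For example, a normal spine (T = −0.5) paired with an osteoporotic hip (T = −2.8) grades as major discordance, suggesting a localized rather than systemic process.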
The aim of this study was to test the following hypotheses in a prospective military cohort:
- BMD in the CRTI group is lower than the uninjured group;
- BMD in the femoral neck of the amputated limb of lower limb amputees is lower than in the CRTI non-amputee or uninjured groups;
- BMD loss is greater in above knee amputees than below knee amputees; and
- BMD changes in the amputee population exhibit higher levels of diagnosis discordance than in the populations of CRTI non-amputee and uninjured groups.
Subjects and Methods
Study design and participants
The Armed Services Trauma Rehabilitation Outcome (ADVANCE) study(20) is a prospective cohort study comprising 575 male adult UK military personnel (UK-Afghanistan War 2003 to 2014; 153 lower limb amputees of varying amputation type and level) with CRTI (mean of 8 years since injury) who were frequency-matched to 562 uninjured men by age, service, rank, regiment, deployment period, and role-in-theater. The adjusted recruitment response rates (excluding those who had died, had no known contact details, or for whom no contact was attempted) were equivalent between groups at 58.0%. CRTI was defined as injury requiring aeromedical evacuation.(21) Ethical approval was received from the Ministry of Defence Research Ethics Committee and all participants provided written consent prior to inclusion. All lower limb amputee subjects were below-knee or above-knee amputees in at least one limb.
Variables
Dual-energy X-ray absorptiometry (DXA) scanning was used to measure BMD and T-score at both the lumbar spine and femoral neck. The T-score was selected (as opposed to the Z-score) because the patient population is young, so comparison with a healthy 30-year-old reference is age-appropriate. Measurements presented from the femoral neck of CRTI non-amputee, bilateral amputee, and uninjured control participants represent an average of the right and left legs. For unilateral amputee participants, data from each limb (amputated and non-amputated) were analyzed separately.
The following data were also collected to control for any factors that could account for differences in amputee and uninjured BMD:
- Age;
- Height, or preinjury height for bilateral amputees;
- Adjusted body mass, to include the lost mass of the amputated limb(s)(22);
- Adjusted BMI;
- Smoking status (pack years), to account for the known adverse effects of smoking on bone turnover(23, 24); and
- Activity levels (using the International Physical Activity Questionnaire [IPAQ],(25) a validated patient-reported outcome measure where moderate activity is defined as “activities … [that] make you breathe somewhat harder than normal” and vigorous activity as “activities that take hard physical effort and make you breathe much harder than normal”).
Statistical methods and sample size
Forty-three amputees are required to detect a clinically significant difference between T-score at the spine and femoral neck with 80% power and 1% level of significance(16, 19); unilateral (as opposed to bilateral) amputees are required to detect a clinically significant difference between T-score in the femur of amputated and contralateral limbs.(15) A clinically significant difference here is defined as a change from one diagnostic class to another.
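The sample-size figure above follows the standard two-sample power formula. A minimal sketch using the stated 80% power and 1% significance level, with an assumed standardized effect size (the study's own variance assumptions are not reproduced here):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.01, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means (normal approximation to the t-test power formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
```

With an assumed effect size of about 0.75 standard deviations, this approximation lands in the low forties per group; the exact figure depends on the effect size and variance assumptions chosen.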
Statistical analysis was carried out using Stata Version 16 (StataCorp, College Station, TX, USA). For comparisons between two groups (uninjured versus CRTI), independent t tests were used for normally distributed data, and Mann-Whitney U tests for non-normally distributed data. For comparisons between three or more groups (uninjured versus CRTI non-amputees versus amputees, subcategories of amputees), one-way ANOVAs were used for normally distributed data, whereas the Kruskal-Wallis test was used for non-normally distributed data. Where significant differences were found, post-hoc pairwise comparisons with Bonferroni corrections were conducted.
Logistic binary regression with femoral neck pathology (T-score less than −1) as the outcome measure was performed to investigate the contribution of the following variables to femoral neck BMD pathology: amputee status, smoking history, total IPAQ walking minutes, total IPAQ activity minutes, and adjusted BMI. To assess discordance, Fisher’s exact tests were performed for differences in magnitude and type (spine lower than hip or hip lower than spine) of discordance between controls, injured non-amputees, and amputees. A conservative threshold of p < 0.01 was set for statistical significance of all analyses due to the large sample size, to reduce the chance of significant results for very small differences.
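As an illustration of the discordance comparison, Fisher's exact test for a 2×2 table can be computed directly from the hypergeometric distribution. This is a self-contained sketch with hypothetical counts, not the study's data or its Stata implementation:

```python
import math

def fisher_exact_2x2(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher's exact p-value for a 2x2 table [[a, b], [c, d]],
    summing the probabilities of all tables (with the same margins) whose
    probability does not exceed that of the observed table."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p_table(x: int) -> float:
        # Hypergeometric probability of x counts in the top-left cell
        return (math.comb(row1, x) * math.comb(row2, col1 - x)
                / math.comb(n, col1))

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))
```

For instance, comparing hypothetical discordant/concordant counts of 10/0 in one group against 0/10 in another yields a p-value far below the study's 0.01 threshold, whereas a perfectly balanced table yields p = 1.0.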
Results
The injured and uninjured groups are summarized in Table 1. Adjusted body mass and BMI were significantly higher in injured than in uninjured participants. There was no statistical difference in spine T-score between the whole injured and uninjured groups (p = 0.959, Table 1). However, the whole injured group demonstrated a reduced femoral neck T-score compared with the uninjured group (p < 0.001, Table 1).
Table 1. Participant Details of Uninjured, Injured, and Injured Subgroups (Non-amputees, Unilateral, and Bilateral Amputees)