
Sleep disturbance and suicidal behaviors in psychiatric illness


Patients with a psychiatric diagnosis are known to be at high risk of suicidal behaviour; however, clinicians face a challenge in identifying those at highest risk. A systematic review and meta-analysis published in Systematic Reviews has found that, among patients with psychiatric diagnoses, sleep disturbances are associated with an increased risk of suicidal behaviour.

A team of researchers, led by Mohammad Hassan Murad of the Mayo Clinic, Rochester, USA, identified 19 observational studies (involving 104,436 patients) that reported on sleep disturbances and suicidal behaviours in patients with a psychiatric diagnosis.

Additional subgroup analyses based on the type of suicidal behaviour and the type of sleep disturbance found that the association held across different psychiatric diagnoses, and that suicidal behaviour was significantly associated with insomnia, parasomnias, and sleep-related breathing disorders, but not with hypersomnias.

These results suggest that clinicians should be alert to the presence of sleep disturbance in patients with psychiatric diagnoses, and that sleep disorders should form part of suicide risk assessment.

Malik et al, SysRev

More clinical insights available here.


Dose-response effect of smoking in rheumatoid arthritis


Cigarette smoking is a well-established risk factor for developing rheumatoid arthritis (RA), and previous research has shown that ever-smokers have a 40 percent higher risk of RA than never-smokers. Little is known, however, about the dose-response relationship between the quantity of cigarette smoking and the risk of RA.

In a recent meta-analysis, researchers found a non-linear dose-response trend. The risk of developing RA increased significantly with increasing number of pack-years (a unit for quantifying cigarette smoking, where one pack-year is defined as smoking 20 cigarettes a day for one year). This correlation held up to 20 pack-years, after which the relative risk stabilised, even for individuals who smoked more than 40 pack-years.
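As a quick illustration of the unit (the figures below are made up for the example, not drawn from the study), pack-years follow directly from the definition above:

```python
def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
    """Pack-years, where one pack-year = 20 cigarettes a day smoked for one year."""
    packs_per_day = cigarettes_per_day / 20.0  # a standard pack holds 20 cigarettes
    return packs_per_day * years_smoked

# Example: 10 cigarettes a day for 40 years equals 20 pack-years,
# the point beyond which the meta-analysis found the relative risk of RA levelled off.
print(pack_years(10, 40))  # 20.0
```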

There appears to be a non-linear dose-response relationship between lifelong smoking and RA, with risk increasing up to 20 pack-years.

Di Giuseppe et al, AR&T

More clinical insights available here.

Is therapeutic hypothermia beneficial after traumatic brain injury?


Therapeutic hypothermia is a potential treatment for traumatic brain injury (TBI). Previous laboratory research suggested it has therapeutic benefit; however, subsequent clinical studies and systematic reviews have not corroborated this.

In a recent systematic review and meta-analysis, researchers investigated the effect of therapeutic hypothermia in adult patients admitted to ICU with TBI. They identified randomized controlled trials that involved therapeutic hypothermia as a treatment of adult patients with TBI (defined as any traumatic, acute closed head injury). The quality of the included studies was assessed and data were extracted on risk of death, unfavorable outcome and new pneumonia. The 20 eligible studies were generally of low quality, but suggested some benefit of therapeutic hypothermia as a treatment for TBI.

There is some evidence that therapeutic hypothermia results in reduced rates of death and long-term disability, but also some evidence that such patients are at increased risk of pneumonia. More research is needed, however, given the low quality of the available evidence.

Crossley et al, CC

More Clinical Insights are available here.

 

Healthy lifestyle programme reduces childhood obesity


The rising prevalence of obesity is a growing concern worldwide and has important negative health consequences. Preventing obesity at a young age is vital in tackling this problem, as obesity in childhood often leads to obesity in adulthood. Researchers in Spain have conducted a cluster randomised controlled trial involving 38 schools to evaluate the effects of a primary-school-based programme promoting a healthy lifestyle. Delivered by trained university students acting as ‘health promoting agents’, the 28-month programme reduced the prevalence of obesity in boys by 4.39 percent.

This suggests that primary-school based interventions may play a useful role in combating increasing obesity rates.

Tarro et al, Trials

More clinical insights available here.

Can EPO prevent AKI in patients undergoing cardiac surgery?


Acute kidney injury (AKI) is a serious complication of cardiac surgery, and recent literature suggests erythropoietin (EPO) may play a protective role against it. This is supported by several experimental studies; however, clinical studies have failed to consistently show benefit.

Researchers have conducted a randomised controlled trial involving patients with preoperative risk factors for AKI who were scheduled for complex valvular heart surgery, to establish the effect of preemptive EPO administration on the incidence of post-operative AKI in such patients. Patients were randomized to receive either 300 IU/kg of EPO intravenously after anesthetic induction or an equivalent volume of normal saline (control). Contrary to expectations, the EPO bolus did not reduce the incidence of AKI, nor did it attenuate the rise in biomarkers of renal injury in the intervention group.

This study suggests that a single preventive bolus of EPO may not protect against AKI in patients at increased risk of developing AKI undergoing complex valvular heart surgery.

Kim et al, CC

More Clinical Insights are available here.

Breast cancer screening: risks vs benefits


Mass screening for breast cancer using mammography is carried out with the aim of reducing the number of deaths due to breast cancer. Previous research has shown it to be effective at this; however, studies have also suggested that screening may not reduce overall mortality. This has led to debate on the risks and benefits of mass breast cancer screening.

Given the controversies, researchers in France conducted a systematic review and meta-analysis to analyse non-breast cancer mortality in women undergoing screening with mammography compared to those not undergoing screening. They included randomized controlled trials involving women over the age of 39 years with no history of breast cancer at baseline.

After 13 years of follow-up, there was no excess mortality caused by screening, but the all-cause death rate was not significantly reduced in those undergoing screening. This highlights the complexity of the issue and may help improve the information given to patients.

Erpeldinger et al, Trials

More Clinical Insights are available here.

Risk of death after percutaneous dilatational tracheostomy


The number of critically ill patients undergoing percutaneous dilatational tracheostomy (PDT) has increased in recent years. The procedure is not without risk and can be associated with major complications, including death. Prompted by three fatalities in their own department, researchers in Germany carried out a systematic review to determine the incidence of fatal complications of PDT. All cases of death following PDT published up to April 2013 were identified. Including their own three unpublished cases, 71 deaths due to PDT were identified, equating to an incidence of 1 in 600 patients. The most common cause of death was tracheostomy-related hemorrhage, followed by airway complications. In 73.2 percent of cases, specific known risk factors were identified (e.g. lack of bronchoscopic guidance, low tracheostomy site, coagulopathy and recent neck surgery).

This study highlights the importance of careful patient selection for PDT, the use of bronchoscopic guidance, and securing the tracheal cannula with sutures.

Simon et al, CC

More Clinical Insights are available here.

José García-Arrarás and Vladimir Mashanov on echinoderm clues to neural regeneration


Over a million marine species are estimated to inhabit the planet’s oceans, according to the Census of Marine Life. Included in this vast reserve of biodiversity are the echinoderms, with sea urchins, starfish and sea cucumbers among their best-known members. Research across biological disciplines has shown that life under the sea can provide significant insights into life on land, and echinoderms are increasingly considered valuable for studies in regenerative biology, in light of their extensive capacity to regenerate various body parts. In a study in BMC Genomics, José García-Arrarás, Vladimir Mashanov, and Olga Zueva from the University of Puerto Rico, USA, provide transcriptomic insights into the regulation of central nervous system regeneration through their analysis of the sea cucumber Holothuria glaberrima. Here García-Arrarás and Mashanov explain how echinoderms can inform our knowledge of human conditions, and what they were able to learn about neural regeneration.

 

What makes the echinoderm an important system for studies into regeneration? Are they now considered an established model system for such studies?

Echinoderms can be considered ‘masters of regeneration’. They are able to regenerate most of their tissues and organs. This is public knowledge, as many non-scientists already know that starfish are capable of regenerating their arms. Nonetheless, starfish are not the only echinoderms that can regenerate body appendages or internal organs; many members of this phylum have amazing regenerative capacities. Sea cucumbers, for example, in addition to regenerating their nervous system, can regenerate their viscera following a process of evisceration. Several laboratories worldwide use echinoderms to study regeneration; examples include a group in Italy that studies crinoid arm regeneration, one in Russia studying holothurian muscle regeneration, and one in Sweden studying brittle star arm regeneration. However, only recently are they being established as model systems. Why has it taken so long? Possibly because it is only now that we are developing the molecular tools to be able to analyse the molecular basis of regeneration.

 

How can findings in a phylum seemingly far removed from us provide valuable insights into human conditions?

Echinoderms, with their radial body plan, might appear to have very little in common with us, but in reality they are much closer to vertebrates (and humans) than other well-known model organisms, such as flies (Drosophila) or worms (C. elegans), and thus can provide important insights into human conditions. In fact, we shared a common ancestor with echinoderms not so long ago (~500 million years, compared with the ~700 million years that have passed since the fly and worm lineages separated from the human lineage). As a result, there are deep similarities between mammals and echinoderms in certain basic developmental mechanisms, at both the cellular and genetic levels.

 

You employed deep RNA sequencing to analyse the transcriptome of the sea cucumber Holothuria glaberrima during regeneration of its nerve cord. What were the main challenges you faced in assembling and annotating its transcriptome?

For one, as with any other organism whose genome has not yet been sequenced, we faced the challenge of assembling relatively long sequences of individual transcripts from a large number of much shorter sequences (reads) without having a reference template. This is very much like assembling a jigsaw puzzle without knowing what the final pattern should look like. De novo assembly software is still relatively young, there are a few competing approaches, and there is no universal protocol that is guaranteed to work in all cases. We had to try various approaches before coming up with an assembly pipeline that yielded acceptable results.

Functional annotation was another challenge. Most of our current knowledge about functions of genes, and interactions between them, comes from research on a few ‘established’ model organisms, such as the mouse, for example. Up to now we have therefore only been able to annotate a fraction of the sea cucumber transcriptome, which represented homologs of well characterised genes in other organisms. Many of the potentially important genes that are unique to sea cucumbers and echinoderms in general still await further analysis.

 

Your study functionally characterised the genes identified as involved in regeneration. What were the main functional groups you identified?

The study highlighted a number of differentially expressed functional groups of genes; two of them deserve special mention. First, the transcriptome analysis suggests that extracellular matrix remodeling plays an unexpected role in the regeneration of the central nervous system (CNS). This aspect of echinoderm regeneration has rarely been addressed before, and the unbiased analysis of gene expression data clearly suggests that it is definitely worth studying in the future. Second, our study pointed to a possible mechanism by which post-traumatic neuronal cell death is prevented in the CNS. This mechanism works through the inhibition of the glutamate neuroexcitatory pathway, which is known to cause the death of neurons following injury in the mammalian nervous system.

 

Several transcription factors were identified in your study as candidates for regulating central nervous system regeneration. Were you surprised by any of them?

We were indeed surprised by the results. First, we expected that many of the known transcription factors associated with the induction of stem cell properties would be significantly overexpressed. That was not the case (with one interesting exception), as most of those genes were already expressed at a certain level in non-injured animals. This suggested that the tissues might have been primed for regeneration even before the injury took place. Moreover, we expected those genes associated with the differentiation of neuronal precursors to also significantly increase their expression level over normal conditions, and that was not the case either. Again, this is in part due to the fact that some of those genes are already expressed in normal, non-injured animals.

 

What do you think are the most exciting avenues of echinoderm research opened up by your study?

Our study opens up a way to explore the molecular basis of neural regeneration and of other regenerative processes in echinoderms. It provides a list of candidate genes that might be involved in nervous system regeneration. In particular, our study identified eleven putative transcription factors that are predicted to be positioned at or near the top of the regeneration-specific gene regulatory network hierarchy. These ‘master’ genes are the most promising candidates for future functional assays. Moreover, our results may also inform other studies in echinoderm biology, including those of molecular evolution, animal phylogeny and (most important to us) regenerative processes.

 

What are the next steps for translating these findings in echinoderms to enhance central nervous system regeneration in mammalian systems?

Our study is just one step on the path to understanding the limitations of central nervous system regeneration in mammals. The central question remains: why can some animals easily regenerate their CNS while others cannot? Sea cucumbers are an example of the former, while humans are an example of the latter. As we continue to do comparative studies to determine the differences among these animal groups, we will come to a better understanding of what can be done to improve CNS regeneration in humans.

 


Greg Gibson on predicting cardiovascular death through telling transcripts


Cardiovascular disease is the leading cause of death worldwide, according to a report from the World Health Organization, and around 80 percent of premature heart attacks are considered preventable. The ability to accurately predict the risk of heart attack and cardiovascular death in an individual is therefore of significant value. Current clinical practice predicts the risk of heart attack by assessing a range of physiological parameters, such as blood pressure and cholesterol levels. Now, improvements in genomics and transcriptomics have led to the development of genetic risk scores that, when factored into current risk estimation models, may improve their accuracy. Predicting the likelihood of a heart attack, however, does not equate to predicting the risk of cardiovascular death. In a Genome Medicine study, Greg Gibson from the Georgia Institute of Technology, USA, and colleagues deduce gene expression profiles associated with acute myocardial infarction and cardiovascular death. Here Gibson explains how their findings may improve current risk estimation models, and discusses the reality of their use in clinical practice.

 

Regarding current risk estimation models for coronary artery disease used in clinical practice, what factors are considered, and what are the limitations of these models?

The Framingham Ten Year Risk for Cardiovascular Disease considers age, gender, total and high-density lipoprotein cholesterol, smoking status, and blood pressure (the NIH National Heart, Lung, and Blood Institute website hosts a convenient personal risk calculator). It is designed to predict the risk of having a heart attack, and has performed remarkably well for some time now, though most doctors recognise that there must be room for improvement. A big difference with our study is that we are predicting the risk of a heart attack so adverse that it leads to death: the evidence suggests that these may be separable risks.

 

To what extent do you think the incorporation of genetic risk scores into these risk models will improve their accuracy?

I’d like to think they will, but those to date are targeted to cardiovascular disease in general, and in some cases risk of heart attack. They only explain a few percent of the variance in risk, but as we get samples of hundreds of thousands in the next few years, that will improve. My guess is that genetic risk scores, transcriptional risk scores, and biochemical ones will all add a few points to the Framingham score regarding risk of myocardial infarct. However I am not sure about the indicators of really adverse events.

 

Genetic risk scores have already been developed for coronary artery disease, however they are not predictive of adverse cardiovascular events. Why is this?

Current estimates are that as many as half of all adults in America have sufficient heart disease that they are at elevated risk of heart attack relative to how they would be if they were in better shape. But heart attacks are by nature stochastic events; they occur when a plaque ruptures, for example – when that happens, the consequences can be relatively benign, or they can lead to rapid complete occlusion of the vessel, which is more likely to be lethal, result in long-term damage, or otherwise cause hospitalisation that requires major surgery. I am not a cardiologist, but our study suggests that there is a high-risk population who may be worth studying in greater depth to see what coronary features they have in common that predispose them to these really adverse events.

 

What approach did you take in your study to discern whether genetic analysis could inform risk predictions of adverse cardiovascular events?

We actually set out to ask whether the differences observed between people having a heart attack and those just experiencing incident coronary disease are the same as those that distinguish people who have a heart attack at a young age from those who experience disease relatively late in life. We did not see any differences in the blood that distinguish the early onset and later onset groups, but we do have some evidence that the gene expression differences in the heart attack group may predict the likelihood of a future myocardial infarct. Unfortunately, approaching ten percent of the sample died from an adverse event within three to four years after their visit to the Emory Cardiology clinic (directed by my colleague, Arshed Quyyumi, who had the vision to archive blood samples for RNA analysis). At this point we were able to evaluate the additional risk of really adverse events.

 

What genetic differences did you identify that were suggestive of acute myocardial infarction? How did you determine whether these changes were causative or consequential?

The AMI (acute myocardial infarction) signal turns out to be largely related to neutrophilia, which was previously known, but also to downregulation of a particular arm of T-cell activity. A couple of years ago we showed that in healthy populations, there are nine or ten major axes of gene expression that involve hundreds to thousands of genes each, two of which are altered in the AMI individuals. It is difficult to tease apart the relative contributions of differences in cell abundance (say, neutrophil to T-cell ratio), and changes within these cell types, but we are fairly sure both contribute. One reason is that we performed a genetic analysis of the regulation of gene expression, so-called eQTL analysis. This showed that there is an excess of genes that are no longer under control of local regulatory polymorphisms during the myocardial infarct, whereas in healthy people the level of expression is a function of the genotype at that locus. This is one of the first demonstrations of a change in genetic regulation of gene expression in the disease state, a type of genotype-by-environment interaction.

 

Further analysis of gene expression profiles from acute myocardial infarcts revealed a subset of transcripts associated with significant risk of cardiovascular death. What further research is needed to determine whether these transcripts can be used to accurately predict risk of cardiovascular death?

We replicated the study in two phases conducted 18 months apart, but still have to recognise that the signature we describe is based on just over 30 people who have died. This clearly needs to be replicated in independent studies – there are a few prospective cardiology studies we know of, so hopefully the opportunity to confirm the findings will arise soon. Then after that, we need to think about mechanistic studies that establish what aspects of the profile are causal.

 

Do you foresee a future where routine clinical practice involves blood tests for biomarkers predictive of cardiovascular disease? What needs to happen for this to become a reality?

I actually think there is some weariness in the cardiology community toward biomarkers in general – they are not predictive in the sense that they are accurate enough to change people’s behaviour. So the main thing is finding out what motivates people, which in turn requires that there are concrete things that people can do to reduce their risk. If we can show that the profile can be reversed, presumably through exercise and diet in the main, then we would be a long way toward establishing the utility of predictive genetic tests such as this. If not, then at least we can work to ensure that the highest risk population have the support base and access to emergency care that they may need at any moment.

 

Genome-wide effects of palm oil on the pancreas: implications for diabetes?


Free fatty acids in plasma are known to be elevated in patients with type 2 diabetes and obesity, and are also associated with insulin resistance and pancreatic beta (β) cell dysfunction, which results in impaired control of blood glucose. One such fatty acid, palmitic acid (also known as hexadecanoic acid), is found in the oils of palm trees (palm oil, palm kernel oil and coconut oil) and is also present in butter, cheese, milk and meat. Research in rodents has revealed that long-term exposure to palmitate (that is, the salts and esters of palmitic acid) in cloned β cells or clusters of pancreatic cells (islets) alters the expression of genes involved in fatty acid metabolism and steroid biosynthesis. However, when it comes to the molecular effects of palmitate on human islets in vitro, evidence is lacking. Charlotte Ling from Lund University, Sweden, and colleagues sought to address this gap in a recent study in BMC Medicine.

Ling and colleagues show for the first time how exposure of human pancreatic islets to palmitate affects genome-wide mRNA and DNA methylation. The findings were validated by relating the gene expression levels in human islets to BMI (body-mass index) in non-diabetic individuals and to transcriptomic changes in the pancreatic islets of type 2 diabetes patients.

Pancreatic islets from eight donors were treated with palmitate for 48 hours in vitro and used in mRNA and DNA methylation array analyses, with five samples being unique to each set of microarrays. The expression levels of several specific metabolic genes, as well as genes in metabolic pathways such as glycolysis, pyruvate metabolism and the biosynthesis of fatty acids, were altered in palmitate-treated human islets. Furthermore, many differentially expressed genes showed a parallel change in DNA methylation, including candidate genes for type 2 diabetes and obesity such as TCF7L2 (Transcription Factor 7-Like 2) and GLIS3 (a zinc finger transcription factor). Interestingly, some differentially expressed genes in the palmitate-treated islets were found to be associated with BMI, whilst others were associated with type 2 diabetes. Moreover, bioinformatic analysis revealed that the insulin signalling pathway is enriched among both the differentially expressed and the differentially methylated genes.

The study provides original evidence that both specific and global DNA methylation patterns in palmitate-treated human islets could affect mRNA expression, and that lipid-induced epigenetic modifications may influence type 2 diabetes risk. The impaired insulin secretion in these cells may be of clinical relevance to obesity and type 2 diabetes. However, clinical application is still a long way off, and will remain so until therapies directed at the newly identified genes show efficacy. The study also has implications for the food industry, where future research into palm-oil-derived edible products will be needed to inform health policies and help reduce the burden of obesity and type 2 diabetes.

 

Barcoding lines: Matthew Porteus on how clonal cell lines really are


One of the best known and most widely used cell lines originated from the eponymous Henrietta Lacks. However, HeLa cells are just one of a multitude of cell lines that have since been generated and cultured, providing a vital resource for scientific progress and often forming the foundation of preclinical studies. A clear understanding of the genetic makeup of these cell lines is therefore essential, particularly in light of the oft-made assumption that these lines remain clonal. Mutagenesis and transformation are just two of the processes that can destabilise the genomes of these cells over time, leading to experimental variability and problems with reproducibility, and hampering attempts to build upon findings from these cells. In a Genome Biology study, Matthew Porteus from Stanford Medical Center, USA, and colleagues present a cellular barcoding method for tracking the clonal dynamics of cultured cells, in the hope that adoption of this technique will improve experimental design and the interpretation of results. Here Porteus reveals what they discovered in HeLa, HEK-293T, and K562 cells, the particular utility of this method in stem cell and cancer biology, and the scope for using ZFNs and CRISPRs.

 

What motivated you to develop a reproducible means of tracking the clonal dynamics of cultured cells within a population?

We were initially motivated by the observation that in clinical gene therapy trials using retroviral vectors, the preclinical assays had not identified the risk of leukaemia that was discovered during the clinical trials. We felt that we needed an assay that could pick up small proliferative changes in individual clones over a prolonged period of time and thought that a barcode marking system that labelled tens of thousands to millions of cells might be such an assay. As we developed the assay, however, we recognised that it had tremendous power in revealing the dynamics of populations of cells in general.

 

The problem of cell lines changing with multiple passages is well known, however your study now brings quantitative insights to the scale of this problem. Were you surprised by any of your results?

We were pleased that we provided a quantitative measure for what has been well known qualitatively for some time. I don’t think we were particularly surprised by the extent or rate that cell lines change over time in vitro. I was personally surprised, however, that while there were certainly large clonal changes in the xenogeneic experiments, the total clone number was better maintained in vivo than in vitro. I expected that the selective pressure of the mouse would be greater than that found in a tissue culture dish with cells grown in plastic in artificial media at 21 percent oxygen. I guess as I write this, I should not have been surprised.

I was also relatively surprised that the same clone became dominant in both the in vivo and in vitro experiments, as I expected that each environment would exert unique selective pressures that would result in different clones becoming dominant. That may turn out to be true as this system is used to study other human tumours. The barcode system will also be a quantitative measure of how much better it is to put tumours into their appropriate organ in a xenogeneic model. That is, is there a difference when a primary lung cancer, for example, is grown subcutaneously rather than propagated through the lungs of a mouse?

 

Your study largely used a lentiviral approach to barcode cells at random sites, however you also adapted this method to use zinc finger nucleases (ZFNs). Why did you do this and how did the ZFN approach affect clonal dynamics?

My lab has had a long standing interest in using engineered nucleases as a method to stimulate homologous recombination and as an approach to gene therapy. One of the assumptions of this strategy is that the functional effects on clonal dynamics of targeted integration of transgenes would be less using engineered nucleases than with lentiviral integrations. We were surprised to find the opposite, that integration using ZFNs created greater clonal dynamics with higher clonal dropout and more clonal dominance. With lentiviral integration in K562 cells, for example, we did not see clones reproducibly becoming dominant in the population, suggesting that there was a stochastic effect to that phenomenon. In contrast, using the ZFN mediated targeted integration we saw many of the same clones growing out in different samples suggesting that there was a heritable and non-stochastic event that gave those clones a proliferative advantage.

 

Are there plans to adapt your method to CRISPR as well? Would you expect CRISPR to impact the cell population in a similar way as ZFN?

We are in the active process of comparing ZFNs, TALENs, and CRISPRs for their effects on clonal dynamics. My expectation is that CRISPRs will show less clonal dropout and clonal dominance than ZFNs, but the barcode system has continually surprised me and may do so again. One of the important features of this system is that it is a functional assay. Significant discussion has been going on about identifying the off-target sites for different nucleases. These studies have been and will continue to be important in understanding aspects of nuclease specificity. But it is probably impossible to identify all off-target sites; for example, those sites at which the nuclease only cuts at 1:10,000 the frequency of its on-target site. So as we think about the safety of using cells clinically that have been modified using engineered nucleases, it is important to develop functional assays that have a high degree of sensitivity – how does the modification process affect the way cells behave, and can you identify processes that might only create deleterious events at low frequencies? We believe that the barcode system is an important step in that direction.

 

Do you see scope for combining your cellular barcoding approach with single-cell sequencing, in order to further probe the nature of cellular heterogeneity?

One of the questions that we regularly get asked and regularly talk about is what is causing the cellular heterogeneity. We know that there are ongoing genetic changes in the cell and agree that using single cell sequencing may help determine if the functional heterogeneity we see is the result of changes in the nucleotide composition of the genome (single base pair mutations, copy number changes, gross chromosomal rearrangements). The relatively rapid pace of diversity we observed when we started with a single cell/clone, however, suggests that in these cell lines epigenetic variations may be a more important contributor. Thus, technologies that are able to interrogate the epigenome at a single cell level, not just sequencing, will have powerful synergy with the barcoding system.

 

The implications of your study stretch across all disciplines using cell lines, however do you see any specific areas of research where this method would be particularly useful?

We agree that the utility of barcoding goes beyond the analysis of cell lines and are beginning to pursue some of these applications ourselves. As mentioned before, we are using this system to explore the functional effects of different nuclease platforms. Systems similar to this have already been used to track the reconstitution of the haematopoietic system following transplantation. In the future, I would expect that such systems will be used to understand the clonal dynamics in the expansion of primary cells, including primary stem cells. It is possible, and has important ramifications, that when primary cells expand in vitro the process ends up selecting just a handful of clones over time. I would also expect that this barcode system will be a powerful tool in cancer biology: What is the clonal composition of primary tumours? What is the clonal composition after recurrence? What is the clonal composition of metastases? What is the clonal composition as it relates to the development of resistance to standard cytotoxic chemotherapy and to targeted therapies?

Finally, the barcode system will be a powerful approach to understanding how cells might behave differently when they are isolated and alone as compared when they are part of a large population (either seemingly homogeneous or overtly heterogeneous). By barcode marking, one can track how different, seemingly identical, clones respond to the signals in their environment. And given the cleverness and innovativeness of researchers, I expect that the barcode system will be used in many other ways in the future.

 

The validity of findings based exclusively on cultured cell lines has been subject to debate, raising the question of whether research should be so reliant on these cell lines. In light of  your recent findings, where do you stand?

Cell lines continue to serve an incredibly valuable purpose and will continue to do so. But our studies do caution against over-interpreting any result that is based on experiments in cell lines. Our results should certainly serve as a strong reminder that the cell line one is using may have little relevance to the process one thinks one might be studying, because these lines change so rapidly. So one model would be that cell lines serve an important role in exploration and discovery and in some proof-of-concept studies, but that critical validation in primary cells and in vivo is essential. Along similar lines, for experiments focused on potential medical applications in humans, a model could be that any potential discovery made in mouse or other non-human systems is validated in a human system.

 

Mihai Pop on changing gut microbiota in childhood diarrhoea


Diarrhoea is the second leading cause of childhood mortality worldwide in those under five, according to the World Health Organization. Whilst prevention through safe drinking water and adequate sanitation remains the best way to decrease diarrhoeal disease, improving treatment is nonetheless vital. Understanding the microbiota of the gut can provide insights into the contribution of pathogens – whether causative, protective or exacerbating – that can feed into dietary or microbiological interventions for the treatment of diarrhoea. In a Genome Biology study, Mihai Pop from the University of Maryland, USA, and colleagues use high-throughput sequencing to analyse stool specimens from over 900 children with moderate to severe diarrhoea from the Gambia, Kenya, Mali and Bangladesh, uncovering novel correlations with both potentially pathogenic and potentially protective bacteria. Here Pop discusses the importance of this work in tackling the public health burden of diarrhoea, what more is needed, and future research directions.

 

How does the public health burden of diarrhoeal diseases compare, for instance, to malaria?

According to the World Health Organization, there are about four times as many cases of diarrhoeal illness than malaria in African countries. Worldwide, diarrhoeal illness represents the most common cause of disease, and contributes to roughly twice as many deaths as malaria.

 

The contribution of the human microbiota to human disease is only just beginning to be understood; how has your recent study in Genome Biology aided research in this field?

Our study made several contributions. First, we were able to show that metagenomic methods are able to uncover new potential causative agents even in diseases as well studied as diarrhoea. Second, we have started to uncover the potential role of interactions between microorganisms within the community in preventing disease. For example, in our data Prevotella organisms appear to be associated with absence of diarrhoea and are negatively correlated with diarrhoeal-causing bacteria.

 

Were you surprised by the associations you found between diarrhoeal disease and intestinal microbial diversity?

The association between microbial diversity and diarrhoea was already well established. If anything, we were surprised by the relatively small impact of diarrhoeal disease on the observed microbial diversity. This result is in part due to the tremendous inter-personal variability of the human gut microbiome, a feature that complicates the interpretation of data derived from cross-sectional studies such as ours, requiring much higher sample sizes in order to detect significant associations. Prospective longitudinal studies will, thus, be necessary to better understand the role of the host microbiota in disease.

 

How significant has the availability of high-throughput technologies been in identifying and understanding pathogen diversity, and its contribution to disease?

High-throughput sequencing technologies have been absolutely critical to our study. When we initiated our project we were expecting to sequence less than 100 reads/sample using the traditional Sanger method. The emergence of new sequencing technologies allowed us to increase this amount by over an order of magnitude, and the much deeper sampling of the community was critical to our ability to detect new associations through robust statistical methods.

 

How important is understanding human population diversity for addressing major public health problems, such as intestinal pathogens?

Most of the studies done so far on host-associated microbiota have focused primarily on Western populations and have, thus, only sampled a small fraction of the microbial diversity living on and within the human body. The bacterial causes and contributing factors for many diseases are likely to differ across countries and socio-economic strata. As a result, conclusions drawn from Western populations cannot be easily generalised to other populations. This is particularly true for diarrhoeal disease that has a substantially higher impact within the developing world.

Recent studies, including ours, have started to paint a more complete picture of host-associated microbial communities that will ultimately enable us to better understand the interaction between the host microbiome and health or disease. The impact of such studies goes beyond the immediate needs of public health efforts in developing countries – a better understanding of the global microbial diversity may help us predict and protect against emerging pathogens, many of which originate in developing countries but have a worldwide impact on public health.

 

Is global awareness of diseases that largely affect the developing world increasing in light of research funding from large organisations like the Bill and Melinda Gates Foundation?

The work of the Bill and Melinda Gates Foundation (which funded our study), as well as that of other funding agencies, is definitely resulting in an increased focus on diseases that primarily affect the developing world. I do not have a good way to evaluate the global awareness of such diseases, but funding priorities definitely affect the directions in which scientists take their research. My personal guess is that funding is less of an incentive than an enabling factor in this research. That is, many scientists are already very interested in tackling these diseases but are not able to do so without appropriate funding.

 

What is next for your research?

Specific to the current study, we are trying to confirm the findings made through association statistics. Statistical associations are not a conclusive proof that specific bacteria contribute to diarrhoeal disease, and further in-depth analyses (both laboratory-based and computational) are needed to evaluate the hypotheses generated by our study.

On a broader scale, my lab is actively developing new approaches for metagenomic assembly, approaches necessary for conducting the deeper analysis of the putative pathogens found by our study. These tools will enhance and extend our metagenomic assembler metAMOS. We are also focusing on optimising the metAMOS pipelines, and testing and validating our code base.

 

Remodeling the predictive power of Alzheimer’s biomarkers for diagnosis


Alzheimer’s disease (AD) is the most common form of dementia, yet there is currently no cure. Drugs are available to slow disease progression, proving most effective in the early stages of AD, yet degenerative brain changes may start decades before clinicians are able to diagnose the condition. Cerebrospinal fluid (CSF) biomarkers have been heralded as objective and effective early detectors of AD, although the interpretation of biomarker data can be problematic, with results often presented in a way that is difficult to apply in routine clinical practice. Sylvain Lehmann from CHU Montpellier and the University of Montpellier, France, and colleagues present a new AD prediction model for use with multivariate CSF biomarker data, as published in their recent study in Alzheimer’s Research & Therapy.

Lehmann and colleagues sampled more than 1,000 cognitive disorder patients from six independent memory clinic cohorts in Paris, Lille and Montpellier, and classified them into AD and non-AD (NAD) patients using clinical diagnostic criteria. Concentrations of the CSF AD biomarkers β-amyloid1-42 (Aβ42), total tau (tau) and phosphorylated tau (p-tau) were measured in lumbar puncture samples taken from the patients. As expected, differences in individual biomarker concentrations between AD and NAD patients were apparent. Optimal cut-off values for the levels of the three CSF biomarkers were computed, above or below which the biomarkers are considered pathological.

The authors used these data to compare two prediction models: logistic regression, already proven to perform well in AD diagnosis, and a new, simpler scale, the PLM. The PLM scale is based on a straightforward and intuitive rule: class 0 corresponds to no pathological biomarkers (according to the designated cut-off values); class 1 to one pathological biomarker out of three; class 2 to two pathological biomarkers out of three; and class 3 to all three biomarkers being pathological. Results revealed that the predictive values of the PLM scale were not significantly different from those of logistic regression; however, the distribution of patients between the two prediction models differed significantly – notably, the PLM scale placed more AD patients in class 3 as well as more NAD patients in class 0. With the new scale, 23.25 percent more patients were better classified than with logistic regression.
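For illustration, a minimal sketch of this counting rule is given below. The cut-off values and the direction in which each biomarker is treated as pathological (low Aβ42, high tau and p-tau) are assumptions made for the example, not the values computed in the study.

```python
# Minimal sketch of a PLM-style scale: the class is simply the number of
# CSF biomarkers that fall on the pathological side of their cut-off.
# Cut-offs and directions below are illustrative assumptions only,
# not the thresholds reported by Lehmann and colleagues.
ILLUSTRATIVE_CUTOFFS = {
    "abeta42": (500.0, "below"),  # assumed pathological when below the cut-off
    "tau":     (350.0, "above"),  # assumed pathological when above the cut-off
    "ptau":    (60.0,  "above"),  # assumed pathological when above the cut-off
}

def plm_class(concentrations: dict) -> int:
    """Return PLM class 0-3: the count of pathological biomarkers."""
    count = 0
    for marker, (cutoff, direction) in ILLUSTRATIVE_CUTOFFS.items():
        value = concentrations[marker]
        if (direction == "below" and value < cutoff) or \
           (direction == "above" and value > cutoff):
            count += 1
    return count

# Example: low Abeta42 with high tau and p-tau gives class 3 (all three pathological).
print(plm_class({"abeta42": 400.0, "tau": 600.0, "ptau": 90.0}))  # 3
```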

The PLM scale not only outperformed logistic regression in classifying patients but also has the advantage of not requiring complex mathematical adjustments. It potentially provides a simple and effective tool for physicians in memory clinics to determine the probability that a patient has AD, and also has prospective value in grouping patients for clinical research trials.

 

Jeffrey Cummings on problems with the Alzheimer’s drug development pipeline


In the race for new therapeutic drugs, many contenders fall by the wayside as they shuttle through the clinical trials pipeline. Rigorous testing often sees increasing failure rates as drugs pass from phase I safety studies to phase II efficacy studies, and ultimately to phase III studies tasked with assessing a barrage of factors in a larger group of subjects. Whilst these daunting odds are expected, a recent study in Alzheimer’s Research & Therapy by Jeffrey Cummings from the Cleveland Clinic Lou Ruvo Center for Brain Health, USA, and colleagues shows that the development of therapeutic drugs for Alzheimer’s disease is particularly subject to failure. Accessing a decade of trial data from the US National Institutes of Health database, ClinicalTrials.gov, the authors quantified the rate of attrition of drugs in the development pipeline. Here Cummings explains why there aren’t enough clinical trials for Alzheimer’s disease therapeutics, how this can be addressed, and the implications of their results.

 

Why did you choose to use ClinicalTrials.gov data to record the trends in Alzheimer’s disease drug development?

This is the most comprehensive database on clinical trials, and registration has been mandated since 2007. It had not been analysed for Alzheimer drugs. There have been several high-profile recent failures in drug development – bapineuzumab, semagacestat – but no detailed evidence-based analyses of the Alzheimer’s disease drug pipeline. We wanted to get an overview of the process, and studying the decade 2002-2012, plus an analysis of the current pipeline, gave us a comprehensive view of how compounds move through the pipeline, what targets are being addressed, and what the success and failure rates are.

 

Why are there relatively few clinical trials for Alzheimer’s disease therapeutics, considering the magnitude of the problem?

There are many contributing factors to the low number of trials. The investment of the US National Institutes of Health (NIH) in Alzheimer’s disease is relatively modest – $650 million compared to $6 billion in cancer, for example – so the rate of target identification is low. Similarly, many drugs are first discovered in academic laboratories and then out-licensed to biotechnology and pharmaceutical companies; the low rate of investment in discovery leads to few candidates to assess. At the biotechnology and pharmaceutical level, the high failure rate, the long trials required, and the high cost discourage investment in this area. Venture capital is less available in this area because of the high failure rates. Many biopharma groups have exited central nervous system drug development.

 

How can this situation be improved?

Funding is needed to advance Alzheimer’s disease drug development; this is true at every level of the drug development ecosystem. More NIH funds are needed for Alzheimer’s disease research, including target identification and candidate therapy generation. Novel mechanisms for bringing in more venture capital are required. Public-private collaborations and support from philanthropy and advocacy groups are another means of supporting drug development. Legislation that extends patent protection, incentives for working in chronic disease, and US Food and Drug Administration fast-track channels are also among the things to be done.

 

What are the implications of your data for future Alzheimer’s disease research and funding?

The analysis shows that Alzheimer drug development is in a disastrous state and that much more must be done to respond to the looming epidemic. Around 10,000 baby boomers turn 65 every day and enter their Alzheimer’s disease risk years. Against this tsunami of therapeutic demand we have a trickle of drug development. Our current efforts are vastly incommensurate with the need. We hope our study will show the situation so dramatically that public concern will be amplified and more funds will be found.

 

What role do the Open Access and Open Data movements have to play in aiding progress in Alzheimer’s disease research?

The widespread availability of data is critical to advancing the Alzheimer research agenda. Scientists sharing information is critical to advancing drug discovery and development and the free flow of information is the highway down which scientific progress will drive.

 

One of the outcomes of the G8 dementia summit, held in London, UK, in 2013, was to identify a cure, or disease-modifying therapy, for dementia by 2025. How achievable do you think this is?

Discovering a cure seems unlikely but I think it is possible that we will have identified a disease-modifying drug by then. The time frame of 2025 is very tight given how long it takes to develop drugs – 13 years on average for Alzheimer’s disease drugs to make it from the laboratory to the end of a Phase III clinical trial – so any adjustments we make now will not come to fruition for a decade. If we do not increase our efforts now we will not have new therapies by 2025.

 

 

Martin Widschwendter on epigenetic predictions for non-hereditary breast cancer


Hereditary breast cancer accounts for a small proportion of total breast cancer cases – with estimates ranging up to ten percent – however, this minority of cases can provide clues to the far more common non-hereditary form, as illustrated in a recent study in Genome Medicine. Mutations in the BRCA1 and BRCA2 genes are well known to greatly increase the risk of developing breast cancer. Martin Widschwendter from University College London, UK, and colleagues reveal that genome-wide epigenetic changes identified in women carrying BRCA1 mutations can be used to predict the chances of developing breast cancer in those lacking these mutations. Here Widschwendter explains how these findings came about, what their implications are, and the next steps to further refine their predictions.

 

What is the relative prevalence of hereditary versus non-hereditary breast cancer? To what extent do we understand the genetics of breast cancer?

Breast cancer is the most common cancer in women worldwide, with nearly 1.7 million new cases diagnosed each year. Only 5-10 percent of these cases are associated with a strong inherited genetic component, of which BRCA mutations comprise a subset. Other moderate-penetrance genes have been identified, including RAD51, ATM, CHEK2 and PALB2, but their contribution to the risk of developing breast cancer is far less than that of BRCA mutations. Genome-wide association studies have added further to the genetic picture by highlighting associations between specific single nucleotide polymorphisms (SNPs) and breast cancer. However, the gain in the power of risk prediction afforded by adding these SNPs to conventionally used clinical epidemiological models has so far proven to be very small. Studies of identical and non-identical twins tell us that approximately 73 percent of the risk of developing breast cancer is due to non-heritable factors (e.g. lifestyle, environment, reproductive factors, etc.).

 

Your study provides a DNA methylation signature to predict non-hereditary breast cancer. Can you explain how BRCA mutation carriers were employed to achieve this?

As it is known that women with BRCA1/2 genetic mutations have a high lifetime risk for breast cancer development (up to 85 percent), we were interested in looking at ‘epigenetic’ changes in the blood of these women and to see whether these same changes are also seen in non-BRCA1/2 carriers who develop breast cancer (and who have therefore acquired the same ‘epigenetic’ changes via other routes).

Epigenetics describes a system of processes that can alter gene function via the modification of DNA, and not as a consequence of underlying sequence mutations. Whilst the genome is referred to as the hardware of our cells, the epigenome can be thought of as the software that can potentially be misprogrammed. We studied the most well-described epigenetic event, known as DNA methylation, which can cause genes to be switched ‘on’ or ‘off’. We used a platform allowing us to simultaneously assess 27,000 of these methylated sites throughout the genome per sample. We derived a methylation signature from BRCA mutation carriers, consisting of about 1,800 methylated sites, that was not present in the blood of non-carriers. This signature was then applied to blood sample sets from women included in the MRC National Survey of Health and Development (NSHD) 1946 birth cohort and the UKCTOCS ovarian cancer screening trial. In both collections, blood samples were available that had been taken from women before their diagnosis of breast cancer. We discovered that this methylation signature is also present in the blood of non-BRCA1/2 carriers and is detectable in advance of disease development.

 

What were the key findings from your study?

The key findings of the study were the discovery of an epigenetic (methylation) signature in white blood cells of BRCA1/2 mutation carriers that was also present in women without BRCA mutations and who developed breast cancer several years later. Furthermore, the signature was able to identify those women more likely to die from the disease.

 

In your study you note the tissue-specificity of your DNA methylation signature for the prediction of breast cancer. Do you think certain tissues types may provide more accurate signatures for prediction?

In our study we demonstrated that the epigenetic risk prediction signature was present in the blood but not in buccal cells from the same women, reiterating that DNA methylation changes are tissue specific. This is unsurprising since DNA methylation is crucial for early development and cellular differentiation – all cells with the same genetic code are able to develop into different tissue types owing to the expression of different gene sets, which in turn is made possible by epigenetic modulation. As the cellular source of breast cancer is epithelial, we hypothesise that more powerful risk predicting signatures could be derived from the comparison of epithelial cells between BRCA mutation carriers and non-carriers. Cervical cells for instance could be potentially more suited to this approach as they are both hormone sensitive (hormone exposure is one of the biggest risk factors for breast cancer development) and easily accessible.

 

Do you think DNA methylation changes in breast cancer is causative or consequential?

In our study we describe a BRCA mutation associated epigenome wide methylation signature in white blood cells affecting numerous other genes. Notably the signature was enriched for genes important in cellular differentiation suggesting that DNA methylation mediated silencing of these genes could result in suppression of immune cell differentiation in women harbouring the signature. The fact that our signature was also associated with the development of other cancers supports the view that the DNA methylation signature might be epigenetically compromising immune-defence against tumour cells in affected women. Further studies are required to confirm whether this is the case.

 

Do you think this DNA methylation signature could one day be used to inform decisions around preventative mastectomies, in the same way BRCA mutation tests are?

It is far too early to address this question. Further research is required to produce an epigenetic risk predicting test with sufficient power to justify implementation in the clinic. The hope would be that such a test would afford a large enough window of opportunity to allow for preventative measures (both lifestyle and clinical interventions) to prevent breast cancer development in all women.

 

What’s next for your research?

As well as looking further into the specifics of the blood-based epigenetic signature we have discovered, we plan to study the epigenome further with a view to predicting cancer development – this time employing a more advanced analysis platform that allows assessment of half a million DNA methylation sites as opposed to 27,000. We also plan to look specifically at epithelial cells to determine whether this tissue type offers a sufficiently high level of breast cancer risk prediction to justify the initiation of a clinical trial.

 


Mark Hall and Jennifer Muszynski on investigating septic shock in children


When a serious infection sets in, systemic inflammation can be triggered and result in the potentially fatal condition of sepsis. Although treatment with large amounts of intravenous fluids and antibiotics can resolve this, fatalities still occur, particularly in intensive care patients. A greater understanding of the underlying pathology, at the level of the immune response, may present new avenues for treatment. Data on this is particularly lacking in paediatric sepsis. In a recent study in Critical Care, Mark Hall and Jennifer Muszynski from the Nationwide Children’s Hospital, USA, and colleagues investigate the early adaptive immune response in children with septic shock, revealing the role of adaptive immune suppression in determining clinical outcomes. Here Hall and Muszynski discuss their key findings and the implications for treatment.

 

How common is sepsis in adults and children, and how effective are current treatments in both of these groups?

The prevalence of severe sepsis and septic shock is increasing in children, estimated at around 0.9 cases/1000 children (Pediatr Crit Care Med. 2013, Sep, 14, 7:686-93). Numbers are similar in adults (0.5-1 case/1000 adults) (New England Journal of Medicine. 2003, Apr 17, 348, 16:1546-54). The sepsis-related case fatality rate in children in the US is 8-10 percent, though this rises to 10-20 percent in children with co-morbidities (Pediatr Crit Care Med. 2013, Sep, 14, 7:686-93 and Am J Respir Crit Care Med. 2003, Mar 1, 167, 5:695-701). While outcomes have improved in adults in recent years, mortality rates of 20-40 percent are still reported (New England Journal of Medicine. 2003, Apr 17, 348, 16:1546-54).

Sepsis, therefore, represents a major ongoing burden to the healthcare system and is responsible for significant morbidity and mortality across the age spectrum. In adults and children, sepsis treatment is largely supportive, with an emphasis on fluid resuscitation, aggressive antibiotic use, and organ support. Therapies specifically targeting sepsis pathophysiology related to the immune response have largely been unsuccessful, but the vast majority of these studies have targeted the initial pro-inflammatory response. We believe that the immune suppression that follows the onset of sepsis may be a more attractive target for therapy.

 

Innate immune suppression has previously been shown in children with sepsis. What led you to specifically investigate the adaptive immune response in children with sepsis?

Indeed, we have demonstrated that impairment of the innate immune response is associated with adverse outcomes in children with life-threatening infection (Intensive Care Med. 2011, Mar, 37, 3:525-32 and Crit Care Med. 2013, Jan, 41, 1:224-36). We and others have also shown that the adaptive, or lymphocyte, arm of the immune system can be impaired in the setting of sepsis, with similar associations with mortality (JAMA. 2011, Dec 21, 306, 23:2594-605 and J Immunol. 2005, Mar 15, 174, 6:3765-72). Some investigators have suggested that lymphocyte suppression may occur very early in the setting of sepsis based on mRNA profiles (Pediatr Crit Care Med. 2010, 11, 3:349-55), but changes in lymphocyte function were thought to play a role in the more subacute phase of the disease. The temporal relationships between innate and adaptive immune suppression were poorly understood, and the role of lymphocyte function in children with sepsis was unknown.

 

What were your main findings and how do these compare to what was previously known about sepsis in adults?

Our work shows, for the first time, that critically ill children with sepsis demonstrate reduced CD4+ T cell function within 48 hours of sepsis onset. Further, the degree of reduction in T cell function was predictive of infectious complications as evidenced by failure to resolve infection or the development of new infection. Interestingly, we were unable to demonstrate an increased prevalence of immunosuppressive regulatory T cells, a cell type that is thought to promote immune suppression in the subacute phase of sepsis in adults (Crit Care Med. 2003, Jul, 31, 7:2068-71).

 

What are the clinical implications of adaptive immune suppression in children with sepsis? How may this inform early detection and treatment in children?

We have shown that restoration of innate immune function in critically ill children and adults is possible through the use of immunostimulatory therapy such as the drug GM-CSF (Intensive Care Med. 2011, Mar, 37, 3:525-32 and Am J Respir Crit Care Med. 2009, Oct 1, 180, 7:640-8). Others have suggested, however, that stimulation of the adaptive arm of the immune system may be an appropriate goal of therapy (Nat Med. 2009, May, 15, 5:496-7). As yet, however, stimulation of neither arm of the immune system is part of standard sepsis treatment. This is in part due to the lack of a standardised approach to immune function monitoring in sepsis. Our data suggest that measurement of T cell responsiveness may be an important element of an immune monitoring regimen, so that T cell-targeted therapies can be developed and tested in a patient-specific manner.

 

Do you think suppression of the adaptive immune system in children is specific to sepsis, or is it likely to occur in other conditions during critical illness?

Innate immune suppression has been shown to occur in many forms of critical illness including sepsis, trauma, and cardiopulmonary bypass. While adaptive immune function has not been previously evaluated using our approach in other forms of paediatric critical illness, we suspect that lymphocyte function may well be abnormal in other settings. We view this to be an important area for future study.

 

What’s next for your research?

We are actively engaged in investigations focused on understanding mechanisms and risk factors for critical illness-induced immune suppression. In addition, we are performing a first-of-its-kind clinical trial of immune stimulation with GM-CSF in critically injured paediatric trauma patients who demonstrate severe innate immune suppression. We look forward to more work evaluating the interplay between innate and adaptive immune function in our critically ill septic children, potentially including stimulation studies targeting both innate and adaptive immune suppression in an immunophenotype-specific way, paving the way for a precision medicine-focused approach to sepsis therapy.

 

Rod Dillon and Mauricio Sant’Anna on Leishmania and sand fly gut microbiota


Over 300 million people worldwide are at risk of contracting leishmaniasis according to the World Health Organization. The protozoan responsible for this disease, Leishmania, is spread through infected sand flies, where it colonises the gut lumen and is exposed to the local microbiota. Understanding the relationship between the sand fly gut microbiota and Leishmania may provide clues to improved prevention of the disease, which is endemic in several regions across the globe including Brazil. In a recent study in Parasites & Vectors, Rod Dillon from Lancaster University, UK, Mauricio Sant’Anna from the Federal University of Minas Gerais, Brazil, and colleagues investigate how colonisation of the gut of Lutzomyia sand flies with yeast and bacteria impacts the establishment of a Leishmania population, and conversely how colonisation with Leishmania impacts subsequent infection with the insect bacterial pathogen Serratia marcescens. Here Dillon and Sant’Anna discuss the implications of these findings for reducing the incidence of leishmaniasis, as well as the importance of open access and international collaborative efforts in this field.

 

How much of a public health burden is leishmaniasis in Brazil, and more broadly in the Americas?

MS: According to the World Health Organization, there were over one million cases of cutaneous leishmaniasis in the world during the last five years. Visceral leishmaniasis, the far more dangerous type of leishmaniasis if not treated, was responsible for 300,000 cases, with over 20,000 deaths annually. In the Americas, more than 90 percent of visceral leishmaniasis cases occur in Brazil. From 1980 to 2005 Brazil registered over 50,000 new cases of visceral leishmaniasis, most of which were concentrated in the northeast region. It is not just fatalities but loss of income and cost of treatment that creates a huge burden on the population, especially for those with a lower income.

Interestingly, the disease has spread throughout the country in recent years and become more urbanised since the early 1980s. The main reason for this is the high adaptability of the insect vector to urban and human-modified environments. Lutzomyia longipalpis, the main vector species transmitting Leishmania infantum in the country, has been captured more in urban and semi-urban areas in comparison to forested environments, showing a clear trend towards urbanisation.

My hometown, Belo Horizonte, is endemic for visceral leishmaniasis. I have relatives whose friends or family have become infected with L. infantum. An extra complication comes from the fact that the symptoms (fever, weight loss, loss of appetite, etc) can be mistaken for other illnesses, complicating early diagnosis, which is crucial for effective treatment.

 

What led to your interest in gut bacteria and parasite transmission in insect vectors, and more specifically your work on Leishmania and sand flies?

RD: My first interests were in the co-evolution of insect and plant interactions but I soon discovered that there was a hidden layer of potential mediators in the form of microbes found in insects and plants. This led to a career as an insect microbiologist where I started working on microbiological agents for the control of locusts. During that time (the 1980s) we had the idea that gut microbes may have a protective role in preventing insects from succumbing to disease. We eventually published a review on this in 2004 and I like to think that this helped to encourage research in this area. The idea of a protective gut microbiota and ‘probiotics’ has gained acceptance, and within the insect research world it has become the subject of intense research, for example in mosquitoes. These were new ideas at the time and there wasn’t a great deal of interest in my work!

After studying plant-feeding insects I wanted a change, and the idea of working with medically important insects was appealing. The main challenge initially was to adjust from working on a locust 60 mm in length to the tiny 2 mm sand fly. Sand flies actually feed on plants; it is only the female that needs blood for egg production, so I like to think of sand flies as plant-feeding insects with an occasional blood-feeding habit. When I started working on sand flies in the 1990s I always wanted to study their gut microbes. At that time there was a belief that the gut of sand flies was sterile – this now seems such a naive view. I studied gut microbes in sand flies with a group in Cairo, Egypt, and we found that these insects do contain a diverse community of microorganisms. My hunch then was that this gut microbiota might be important in influencing Leishmania transmission, and the current study is the fruition of some of that work. Funding for this was a long time coming, and support from the Leverhulme Trust was vital in starting this up.

 

What were the key findings of your recent study in Parasites & Vectors?

RD: There were two key findings. The first was that yeasts and bacteria that inhabit the gut of this tiny fly may prevent colonisation of the gut by the medically important Leishmania. We also found that two microbe species were potentially more effective than one in preventing colonisation. So a diverse fly microbiota is likely to resist colonisation by Leishmania. The question here is whether many of the wild sand flies possess a diverse microbiota and whether this provides a naturally occurring ‘probiotic’ effect against Leishmania. Perhaps we would see more cases of leishmaniasis if these microbes were not present. This is a complex area because the number of cases depends on other factors such as the presence of animal reservoirs of the parasite.

The second finding was that Leishmania may potentially benefit the sand fly host under certain circumstances; its presence reduces fly mortality due to an insect pathogen. The main question that was asked in the past was whether Leishmania had a harmful effect on the fly. No one really asked whether there may be benefits for the fly. The results for these experiments were surprising because the insect pathogen is also capable of killing Leishmania. So for the fly and Leishmania it is a case of ‘better together’ regarding the presence of this bacterial pathogen. In practice Leishmania may have positive or negative effects on the fly depending on the circumstances.

 

Biological control of vectors is one approach to eliminating Leishmania and other vector-borne diseases. What advice would you give researchers working in this area, as a result of the findings from your study?

RD: Ed Steinhaus, a pioneering North American insect microbiologist of the early 20th century, said that “comprehensive understanding of the biology of insects requires that they be studied in ecological context with microorganisms as an important component of the system”. I think the advice is that all aspects of the ecology of the vector need to be considered, and that the ecology at the microbial level is just as important as the macro ecology. I am an avid supporter of local biocontrol approaches. The development of fungal or bacterial pathogens as biocontrol agents needs to consider all aspects of the microbial ecology of the insect-Leishmania-mammalian interaction.

 

What impact do you think your work into Leishmania will have on reducing the burden of leishmaniasis?

RD and MS: The treatment of visceral leishmaniasis has greatly changed over recent years due to emerging drug resistance. In the absence of a vaccine with a protective effect, leishmaniasis control relies on insecticide spraying, with its associated pesticide resistance. What we propose with our work is an alternative way to interfere with Leishmania development within the sand fly gut, by harnessing the flies’ own microbes. We also study how bacteria, and Leishmania itself, activate the flies’ innate immune system, to try to understand how the vector modulates the populations of its gut inhabitants. The aim is to find tools we can use to disrupt Leishmania development inside the insect vector and reduce disease transmission. These tools might be quite subtle, for example some manipulation of the local environment around people’s houses, or more of an intervention through novel techniques that switch on the sand fly’s immune system to kill the Leishmania.

 

Climate change is predicted to make ‘tropical’ disease more of a problem in non-tropical regions. How do you see climate change influencing leishmaniasis?

MS: This is a possibility. The presence of sand flies in new areas is a potential risk factor for leishmaniasis transmission. Recent work has shown evidence of an increasing risk of sand fly establishment in new parts of European countries such as Germany and Switzerland. If global temperatures increase over the coming years, sand flies could colonise areas that they were not able to colonise before.

 

What do you think has been the biggest benefit of collaborating with international researchers in your recent study, and in your field of research in general?

RD: It is difficult to be successful in bioscience research without collaborating these days. We always collaborate with international researchers; these collaborations are built on personal relationships in which there is a large element of trust and openness. Of course it brings new perspectives and is also personally enriching.

MS: Collaboration is the key word in science. Scientists do not survive in their research fields on their own! As a student I was involved in ‘sandwich’ postgraduate programmes, having the opportunity to start my PhD in my home country and go abroad for work experience, returning home to defend my thesis. That time abroad was of great importance for me, not only giving me new scientific knowledge but also bringing me invaluable personal experience. Later on, as a junior researcher, I had the opportunity to work as a postdoc in three UK universities for over ten years, which definitely paved the way for my scientific career. All these years working with top scientists in the sandfly-Leishmania field taught me that collaborative work is the key to success.

 

How important is open access in your field of research? How is open access viewed by the scientific community in Brazil?

MS: Researchers, not only in Brazil but worldwide, are publishing their work more and more in open access journals, simply because it reaches a broader audience, increasing the chance that their work is seen by a larger number of readers. For researchers in developing countries like Brazil, a clear disadvantage is the publication fees, which can be expensive. Most grant applications do not cover publication fees, and certain university departments only cover publication of papers above a certain impact factor.

RD: Open access is increasingly important, particularly for neglected tropical disease work; some grant-awarding bodies are now demanding that the results of the work they fund are published in this way. We are in a period of transition, with traditional publishing models breaking down, and the addition of social media outlets and personal blogs is further blurring the boundaries. Obviously, for scientists in economically deprived communities open access is a fantastic asset. The publication costs for the scientist can be an issue, but these may be included in grant costings or can sometimes be funded via the university, as is happening at Lancaster University.

 

Charlotte Watts and Lori Michau on preventing intimate partner violence


Intimate partner violence (IPV) is a global issue that is estimated to affect around 30 percent of women during their lifetime. In sub-Saharan Africa this problem is compounded by the risk of contracting HIV. Whilst these two issues have been addressed separately, they are known to be closely linked. Interventions targeting individuals have sought to address both of these public health burdens with varying degrees of success. Now in a study in BMC Medicine, Charlotte Watts from the London School of Hygiene and Tropical Medicine, UK, Lori Michau from Raising Voices, Uganda, and colleagues present the results of the first trial in sub-Saharan Africa to employ a community-based intervention to reduce IPV and the associated risk of HIV, called the SASA! Study. Here Watts and Michau discuss why research into IPV has been neglected, the challenges and benefits of community-based interventions in this field, and what’s next for the SASA! Study.

 

Why is intimate partner violence (IPV) such a big problem in sub-Saharan Africa? What are the broader consequences?

Intimate partner violence is a big problem all over the world, not only in sub-Saharan Africa – global figures suggest that 30 percent of ever-partnered women will be physically or sexually assaulted in their lifetime. What is striking in sub-Saharan Africa, and several other developing country settings, is that the past-year levels of violence are also high. In our study community, for example, about a quarter of women had experienced physical or sexual violence from a partner in the past year.

The health consequences of violence are multiple, with both short and long term effects. As well as injury, women in violent relationships are more likely to suffer from poor mental health, and be at greater risk of having an unwanted pregnancy, sexually transmitted infections, and HIV. More broadly, violence within intimate partnerships can have profound impacts on children’s well-being and development, and limit women’s ability to engage in economic development activities.

 

Why has IPV received little attention from the research community?

For many years gender-based violence, including intimate partner violence and sexual violence, was seen as too sensitive and hidden an issue to be researched. People thought that violence was too private an issue to explore, and they were worried that population surveys could not be done without putting women at increased risk. Things are now changing: we have strong guidance on how to conduct research in an ethically responsible way, and we have learnt that if interviewers are carefully selected and trained, and interviews are conducted in a private setting, women do talk about the violence in their lives. There is now a fairly large body of population-based evidence on the prevalence of intimate partner violence around the world, and a growing body of evidence on its health impacts.

The field of intervention evaluation is still in its relative infancy, although the evidence that does exist is promising, and shows that violence against women is preventable. It can be difficult to obtain funding for intervention trials – the SASA! trial had five donors, for example, and was small (with only eight clusters), due to constraints in available resources. As the field grows I hope that more mainstream research bodies will start to support research in this area.

 

What is the SASA! Study and what are its key findings? Were you surprised by any of the results?

The SASA! study is a cluster randomised controlled trial designed to assess the community-level impact of SASA! – a community-focused violence prevention programme. SASA! was designed to change the social norms and behaviours that underpin both violence against women and HIV-related risk behaviours. Discussions and one-on-one teaching activities are delivered by community members (local leaders, and male and female activists), supported by staff at the Centre for Domestic Violence Prevention in Uganda, to help communities think more about the causes and consequences of violence, the impact of this violence, and the ways in which individuals and communities can change.

The findings are exciting, suggesting that the intervention achieved significant impacts on a number of the primary outcomes over a relatively short timeframe. Impacts included significant reductions in community acceptance of violence, and significant reductions in multiple sexual partnerships among men. Women’s experience of physical violence was also 52 percent lower, although, due to the small number of clusters, this finding was not statistically significant.

What I found surprising was that when we compared the intention-to-treat and per-protocol findings, in general the effect size estimates did not change substantially (although the physical violence outcome achieved borderline significance). For me it was interesting that even when people had not had direct contact with intervention activities, the same degree of change had occurred. To me this provides direct support for the value of the social diffusion model of intervention that SASA! uses.
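
As a rough illustration of why a trial with only eight clusters can show a halved prevalence yet miss conventional significance, the sketch below runs a simple cluster-level analysis: summarise past-year violence prevalence per cluster, compare log-prevalences between arms with a t-test, and report a risk ratio with a confidence interval. The numbers, and the analysis itself, are invented for illustration and are not the SASA! trial’s actual statistical methods.

```python
# Hypothetical cluster-level analysis for a trial with very few clusters.
# All prevalence values below are invented for illustration.
import numpy as np
from scipy import stats

# Past-year physical IPV prevalence summarised per cluster (four per arm)
intervention = np.array([0.11, 0.09, 0.13, 0.10])
control = np.array([0.22, 0.18, 0.25, 0.20])

log_i, log_c = np.log(intervention), np.log(control)
log_rr = log_i.mean() - log_c.mean()               # log risk ratio from cluster means
t_stat, p_value = stats.ttest_ind(log_i, log_c)    # two-sample t-test on cluster summaries

# 95% confidence interval using the pooled variance and n1 + n2 - 2 degrees of freedom
n1, n2 = len(log_i), len(log_c)
sp2 = ((n1 - 1) * log_i.var(ddof=1) + (n2 - 1) * log_c.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
ci = np.exp(log_rr + np.array([-1.0, 1.0]) * stats.t.ppf(0.975, df=n1 + n2 - 2) * se)

print(f"risk ratio ~ {np.exp(log_rr):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p = {p_value:.3f}")
# With so few clusters the confidence interval is wide, which is why even a large
# reduction in prevalence can fall short of conventional statistical significance.
```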

 

What are the barriers to implementing community based interventions for IPV, and how can they be overcome?

Programming at a community level is complex, involving multiple stakeholders, institutions, opinions and opportunities. For a meaningful change in social norms, programming must be guided by a theory of change – a hypothesis of how change will happen over the timeframe of the intervention – sequenced in an appropriate manner such that it does not get to the ‘ask’ regarding behaviour change too early or too late. SASA! requires longer-term investment in communities through a systematic process of change; yet for many organisations, prevention programming is often thought of as a few community dramas or posters hung around town. Reorientation to a different way of engaging communities is needed.

IPV prevention activities are most effective when led by trusted others. Effective social norm change efforts require ongoing exposure to ideas and are best delivered by trusted and known people rather than an external implementer. This is an unfamiliar way of working for many organisations. In the SASA! methodology activities are led by community members themselves and happen on a daily basis – not as part of a large public event but rather within the context of the daily lives of community members (while collecting water, waiting at the bus stand, shopping in the market, at the local drinking joints, etc). This often requires a large shift in how organisations think of programming: moving from non-governmental organisation (NGO)-led to community-led, from sporadic activities to a consistent presence in communities, and from message-based activities to encouraging critical thinking and phasing in ideas. Community mobilisation done well will take on a life of its own, with diverse individuals and groups making the issue ‘their own’. This is a marker of success, but also not business as usual for organisations that are used to more controlled efforts.

Working in mixed-sex groups is also important. It can be challenging to work with women and men together – rather than in single-sex groups – for a variety of reasons, including women’s understandable reluctance to voice their opinion in the presence of men, and the need for small, single-sex groups to work through identity and personal experiences. There is no hard and fast rule for this, and when carrying out IPV prevention care must be taken in how issues are raised and discussed. In SASA! there are both single- and mixed-sex groups. Many organisations find it difficult to ‘get men’ to activities. Often these activities are announced as being on violence against women – naturally, many men do not appear. With SASA!, community activists, who are male and female in equal number, go out to reach their own social networks and neighbours. Rather than calling people to activities, the community activists go to them, for example meeting men at the local drinking joint, pool table, carpentry shop or garage (the same is done for women). In this way, the ideas come to people through their daily work and movements and while they are with their peer groups. This approach can be very powerful in shifting social norms as, collectively, small groups of men and women discuss and work through these issues. This creates much more ‘stickiness’ for change, as the peer group will reinforce new norms and hold members accountable to them.

 

What are the long term benefits of community based interventions for IPV?   

There are direct and indirect benefits for individuals, communities and the wider society when a community mobilisation and social norm change approach like SASA! is used. For individuals, the SASA! focus on power (power within, power over, power with, power to) opens often very marginalised people to new ways of thinking about their own power in their families and communities. Time and time again, we hear from community members that their perceptions of themselves have changed – from feeling powerless to recognising that, while not all-powerful, each of us can make decisions every day, with a variety of people, about how we use our power. This has been transformative for many communities and individuals and has an effect on all spheres of a person’s life.

For families, women and men talk about a new respect, softness and kindness in their relationships – from where there was suspicion, control and fear, to the recognition that families and relationships can be stronger when the dignity and rights of the other are upheld. Men comment that there is more harmony in their homes and so they feel less compulsion to go out and look for other partners. Women talk about how they feel more respected and listened to and now can contribute to the family decisions and finances. Children share stories of relief about how their homes are much nicer places to be, with parents who they are not afraid of or afraid for.

In the community, when social norms about the value and worth of women shift, opportunities open up for women’s participation: their voices can be heard in community meetings, and they come together to work on issues important to them. We have also found that once women and men feel a sense of activism (taking action on an issue because it is important to you), this activism spreads beyond IPV to other community issues such as sanitation, water, political participation, etc. Feeling a greater sense of power means there is much more activism broadly in the community. This bodes well for positive change across the development spectrum.

 

SASA! is being rolled out across a number of countries in sub-Saharan Africa. What are the challenges in implementing the intervention in different locations?

SASA! is currently being used by over 30 organisations in the Horn of Africa, East Africa and Southern Africa, including international NGOs, NGOs, community-based organisations and faith-based institutions. Each organisation is unique, as is each community. SASA! is being used in densely populated urban ‘slums’, rural villages, pastoralist communities, refugee camps and settlements, as well as within the structures of the Catholic church. SASA! can be adapted based on the skills, context and culture of a community and an organisation. We’ve come to learn that because SASA! works on the core driver of violence against women – power inequality – rather than the manifestations of violence, it is largely applicable in settings which are very different. However, challenges remain, including:

a)    Knowledge of community characteristics. Effective SASA! implementation requires a deep understanding of the community – both geographically and socially. SASA! needs to be adapted to suit each community because every community is unique. SASA! is not a blueprint for an intervention but an overarching methodology that provides a basis for local adaptation. For example, in pastoralist communities the market days become important gatherings and spaces for dialogue in this otherwise diffuse community; in the Catholic church, rather than setting up parallel structures, the challenge is to tap into all the existing opportunities that the complex infrastructure of the church provides. Likewise, in some refugee communities, rather than printing posters for hanging in homes and public spaces, it can be more effective to print small, carry-size images, as many homes are temporary structures with extremely limited wall space.

b)    Anxiety, backlash and scepticism from the community. Raising issues of gender, power and intimate relationships is sensitive and requires care. Even though SASA! uses a benefits-based approach (i.e. demonstrating the benefits of non-violence for all rather than blaming and shaming those using or experiencing violence), there can be anxiety from women and men about what talking about these topics will mean. There can be backlash from both women and men who are resistant to changing the status quo. There can also be scepticism from women and men that programming will truly benefit them and include them – rather than the organisation leading the programming. All of these can be overcome through a variety of ways of working in the community including: a) ensuring that community members (women and men) are leading the activities, b) transparent processes of information giving and selection of activists when the work is beginning, c) quality training of the community activists before they begin activities in the community as well as ongoing support to help them expect and be able to constructively respond to possible community reactions, d) positive, benefits-based programming, e) commitment from the organisation that programming will be consistent and regular – this is essential in moving from project-based programming to fostering community activism.

 

What further research is required in this field?

We need research in many different areas. We need to learn more about how to achieve impacts at a population level. Much of the current intervention evidence focuses on assessing the direct impact of interventions on programme recipients. Given the high prevalence of violence globally, it is important to move to evaluating interventions that are seeking to achieve more widespread change, and to identify potential entry platforms that could be used to deliver effective interventions at scale.

 

It’s all in the eyes: Heather Nuske on tracking emotional processing in autism


Reading and understanding emotion is a skill most people begin to develop during infancy, such that by adulthood they are better equipped to deal with social interactions. However individuals with autism are often found to struggle with processing emotions expressed by others, though familiarity is suggested to help mitigate this. In a recent study in the Journal of Neurodevelopmental Disorders, Heather Nuske, Giacomo Vivanti and Cheryl Dissanayake from La Trobe University, Australia, set about measuring the effect of familiarity on emotion processing in autism. Utilising eye-tracking pupillometry they investigate how autistic children respond to expressions of fear in familiar versus unfamiliar people. Here Nuske explains what their findings suggest about the neurological basis of autism, and how this may aid clinical assessment and management.

 

What got you interested in studying emotion processing in individuals with autism?

This topic has captured my interest for many reasons, personal and professional. On the personal side, my younger brother has autism. He has taught me many things, including that each person has a unique way of perceiving the world around them, and that this should be celebrated. This led me to want to study psychology, and after having some great mentors along the way (including Thomas Matyas and Cheryl Dissanayake), I got interested in research and naturally gravitated towards autism research.

But why emotion research? Well, this was sparked by reading inspirational work from key scholars in this and related fields (Antonio Damasio, Peter Hobson, Colwyn Trevarthen, Daniel Stern, Michael Tomasello and Mark Johnson, to name a few), and a general realisation that emotions and emotional balance underlie what every person wants from life: to be happy.

Also, my work as a therapist for children and adolescents with autism has taught me the power that non-verbal cues to emotion have in communicating ideas between people, and the difficulty that stems from not being able to effectively use this mode of communication. The transfer of emotion between people is crucial as it is the gateway to connecting people and establishing genuine relationships, and I want to know how to better support people with autism to do this.

 

Could you explain the technology behind eye-tracking pupillometry? What can it tell us about an individual, and why is it an important tool?

Pupillometry is the measurement of pupil size over time. The pupil becomes bigger or smaller in response to a complex but finite set of inputs, including emotional stimuli. For example, when you are happy or scared your pupils dilate (become bigger), and the dilation is proportional to how happy or scared you are (i.e. more dilation with more emotion). Eye-tracking pupillometry is a relatively new technology as eye-tracking itself (which involves recording where someone is looking) has only been used during the past ten years or so.

Eye-tracking pupillometry is a great technology to use with young children with autism as it is non-invasive (no electrodes need to be attached to participants, as is needed with heart rate recording), is less affected by bodily movement than other systems (which can be problematic, as movement creates artifacts in data), and researchers can simultaneously measure where a person is looking (so they can know for sure what participants are responding to). This technology involves simply watching stimuli on a computer monitor, and thus can be used to provide insight into the emotional life of non-verbal children, who made up the majority of the children in this study.
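
To illustrate the kind of measure pupillometry yields, here is a minimal sketch of one way a pupillary response might be quantified from an eye-tracker trace: baseline-correct the pupil diameter against the pre-stimulus period, then take the peak dilation and its latency. The sampling rate, function names and toy trace are assumptions for illustration, not the processing pipeline used in the study.

```python
# Hypothetical sketch: quantify a pupillary response from an eye-tracker trace.
import numpy as np

SAMPLE_RATE_HZ = 60                       # assumed eye-tracker sampling rate

def pupil_response(pupil_mm, stimulus_onset_s, baseline_s=0.5):
    """Return (peak dilation in mm, latency to peak in s) relative to baseline."""
    pupil_mm = np.asarray(pupil_mm, dtype=float)
    onset = int(stimulus_onset_s * SAMPLE_RATE_HZ)
    base_start = max(0, onset - int(baseline_s * SAMPLE_RATE_HZ))
    base = np.nanmean(pupil_mm[base_start:onset])     # pre-stimulus baseline diameter
    post = pupil_mm[onset:] - base                    # dilation relative to baseline
    peak_idx = int(np.nanargmax(post))
    return post[peak_idx], peak_idx / SAMPLE_RATE_HZ

# Toy trace: 1 s of steady baseline followed by a slow dilation after stimulus onset
t = np.arange(0, 4, 1 / SAMPLE_RATE_HZ)
trace = 3.0 + 0.4 * np.clip(t - 1.5, 0, None) * np.exp(-(t - 1.5))
amplitude, latency = pupil_response(trace, stimulus_onset_s=1.0)
print(f"peak dilation {amplitude:.2f} mm at {latency:.2f} s after onset")
```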

 

How does familiarity affect emotion processing in individuals with typical development? How does this compare to individuals with autism?

It is well known that individuals with typical development respond to the emotional expressions of familiar people in a different way than those of unfamiliar people. For example, when an old friend gives you a big smile, this expression carries with it more meaning and relevance than a smile of a complete stranger. Humans have a tendency to be more empathic towards and more accurate in perceiving emotions of in-group members than out-group members, whatever ‘group’ that may be, though this can be modified with exposure and empathic understanding of out-group members.

Not much was previously known specifically on this topic, as ours was the first study to examine the processing of emotion expressed by familiar people in individuals with autism. That said, we knew from Kristelle Hudry and colleagues’ 2009 study on empathy that children with autism, like their typically developing peers, are more empathic towards caregivers than unfamiliar adults. Also, in my clinical and personal experience I have seen many examples of ‘normal’ emotional responses in people with autism to their family members, despite difficulty with emotional responding to unfamiliar people.

 

What were the key findings of your study, and were you surprised by any of these?

Using eye-tracking pupillometry, we measured the emotional responses of a group of children with autism and a group of typically developing children whilst they watched emotional expressions of their childcare teachers and therapists (i.e. familiar people) and people that they did not know. We found that the children with autism and the typically developing children had the same magnitude of pupillary response to familiar people expressing emotion. However, the children with autism had a reduced response to emotion expressed by unfamiliar people, compared to the typically developing children. Therefore the children with autism had a ‘normal’ magnitude of emotional reaction to familiar, but not unfamiliar people.

We also measured the timing (latency) of these emotional responses, finding that no matter whether the person expressing the emotion was familiar or unfamiliar, the children with autism had a delayed pupillary response compared to the typically developing group. This result suggests that people with autism take longer to react to emotional expressions than their peers. We think this helps to explain the difficulties that people with autism face during social encounters, which are fast-paced and ever changing.

Finally, we measured how long the children looked at the eye and mouth regions of the face, to see whether the children with autism paid less attention to the eye region. Abnormal attention to the eyes is considered a distinctive feature of autism, but interestingly we did not find this to be the case. Consistent with a previous study in 2011 by Vivanti and colleagues, we found that this result depended on whether the face showed emotion (fear) or not (a neutral face): there was no group difference in attention to emotional faces, but less attention to the eye (and mouth) regions of neutral faces. We think this result means that emotion helped to sustain the attention of the children with autism, who characteristically look less at faces than their peers in everyday life.

 

What do your results suggest about the neurological basis of autism?

We think that these results suggest that people with autism do not have a fundamentally different brain ‘wiring’ for emotion processing, as these brain circuits can be accessed or ‘bootstrapped’ by familiarity; they can process the emotions of familiar people just like their peers. Although they have difficulty processing emotion expressed by the human face (as well as the human voice and body), this is easier for them with people they know. This may be because they experience emotional connectedness only with people they know well, rather than with people in general, because they have more experience with familiar faces, or perhaps because they have more motivation to process the emotions of familiar people, or a combination of these three – the mechanism is still unclear.

However, the finding of a general delayed emotion response in the children with autism is suggestive of less strengthened connections between core brain centers for emotion and other brain areas, which has been found in previous studies in individuals with autism during emotion processing.

 

Where do you see the future of eye-tracking studies heading in the next ten years?

In the next ten years, I expect to see many more researchers using eye-tracking pupillometry in combination with the traditional attention measures that the eye-tracker provides. Many researchers have incorporated eye-tracking in their research paradigms but at the moment expertise in pupillometry is not that common. I would like to see (and be a part of) conferences dedicated to this methodology. I recently gave a presentation explaining some features of eye-tracking pupillometry at the International Conference on Infant Studies in Berlin, Germany; there seemed to be much interest in this methodology.

 

How can studies such as yours impact upon the clinical assessment and management of autism?

As direct clinical assessment of people with autism is often conducted by people who are unfamiliar to the individual, these results highlight the importance of obtaining parent or teacher reports of the individual interacting with familiar people, in order to get an accurate picture of the person with autism’s strengths in emotion perception and reactivity.

Also, we think that our results in this study speak to the importance of establishing long-term therapeutic relationships with people with autism; therapists that become familiar to people with autism can provide more learning opportunities to foster their development of emotion perception and understanding.

 

Gerald Chanques on how to assess pain in critically ill patients


Pain management is key to ensuring optimal recovery from illness or injury; however, accurately assessing pain levels in non-communicative patients is a major challenge. Several tools are currently used to address this, including the Behavioural Pain Scale (BPS), the Critical Care Pain Observation Tool (CPOT) and the Non-Verbal Pain Scale (NVPS) – but which is most reliable? In a study in Critical Care, Gerald Chanques from the University of Montpellier, France, and colleagues analysed the psychometric properties of each of these pain scales in non-communicative patients in intensive care units. Here Chanques discusses the challenges and importance of pain assessment in critically ill patients, and how these latest findings may impact clinical practice.

 

From a physiological standpoint, how much pain can be experienced when a patient is not fully conscious, such as during critical illness?

Pain is a frequent event in Intensive Care Unit (ICU) patients, with an incidence of up to 50 percent in medical as well as surgical and trauma patients. Although pain intensity cannot be self-reported by some patients with an impaired level of consciousness (delirium, sedation, coma), the acute stress response associated with pain can still be assessed: pain behaviour (grimacing), agitation, and changes in heart and respiratory rates, blood pressure, and pupil size.

From a physiological standpoint, these changes in neuro-vegetative system activity and brain functioning may be associated with higher oxygen consumption, cardiac arrhythmia, ischemic events, and patient/ventilator asynchrony. Also, there might be an increased risk of developing a chronic pain syndrome if high intensity pain events are repeated while in the ICU.

 

Why is it important to measure pain levels in critically ill patients?

Pain assessment is the first step towards an accurate diagnosis and treatment of pain, which consists of a good match between a patient’s needs and the analgesic dose, i.e. providing effective analgesia while avoiding the side effects associated with an overdose of analgesics.

Pain protocols based on a validated measure of pain intensity in critically ill patients allow adverse physiological events to be reduced during painful nursing procedures. Also, reports from large observational and high-quality sequential ‘before and after’ studies showed a significant association between pain assessment and patient outcomes in the ICU: decreased duration of mechanical ventilation, decreased length of stay in the ICU, and decreased nosocomial complications. The explanations are multiple, including better prevention of the acute stress response and of analgesic side effects, but also a decreased use of sedatives.

 

How is pain currently measured in critically ill patients, and are different approaches taken in different settings?

Pain assessment is well standardised in critically ill patients. As widely recommended by national critical care societies, pain should be assessed using self-report scales in patients able to communicate, or using behavioural pain scores in patients unable to communicate. However, such pain assessment with accurate clinical tools is still lacking in around 50 percent of ICU patients. This is mainly determined by differences in the implementation of recommended practices into routine care.

 

What problems do you encounter when trying to measure pain in critically ill patients?
 
Pain assessment is essentially clinical in most ICU patients (self-report pain scales, behavioural pain scores), so the education and training of nurses and physicians are paramount to ensure standardised and reliable use of such subjective tools. A high level of inter-observer agreement is necessary before implementing analgesia protocols based on a subjective assessment of pain in a multidisciplinary ICU team with a large number of professionals.

 

What were the key findings from your study? Were you surprised by any of the results?

The key findings are that some behavioural pain scores (BPS, CPOT) have significantly better psychometric properties (inter-rater reliability, internal consistency and responsiveness) than others (NVPS). This was not so surprising, because the BPS and CPOT are the two behavioural pain scores recommended for practice by the Society of Critical Care Medicine (SCCM). However, although these two scores had been very well validated separately, they had never been evaluated head to head. The purpose of this study was to address this, and also to compare these two routinely used scores with another score, the NVPS.
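
For readers unfamiliar with the psychometric properties being compared, the sketch below computes two of them from hypothetical ratings: Cohen’s kappa as a simple index of inter-rater reliability between two nurses scoring the same patients, and Cronbach’s alpha for internal consistency across a scale’s items. The data and score ranges are invented; this is not the analysis code used in the study.

```python
# Illustrative sketch of two psychometric properties, computed from invented data.
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same patients."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    cats = np.union1d(a, b)
    n = len(a)
    # Confusion matrix of rater A (rows) vs rater B (columns)
    conf = np.array([[np.sum((a == i) & (b == j)) for j in cats] for i in cats])
    p_obs = np.trace(conf) / n
    p_exp = np.sum(conf.sum(axis=1) * conf.sum(axis=0)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is (n_patients, n_items), e.g. a scale's sub-scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical data: two nurses scoring ten patients on a 3-item behavioural scale
nurse_a = np.array([3, 4, 5, 3, 6, 4, 3, 5, 4, 3])
nurse_b = np.array([3, 4, 5, 4, 6, 4, 3, 5, 4, 3])
items = np.array([[1, 1, 1], [2, 1, 1], [2, 2, 1], [1, 1, 1], [2, 2, 2],
                  [1, 2, 1], [1, 1, 1], [2, 2, 1], [1, 2, 1], [1, 1, 1]])

print(f"Cohen's kappa  : {cohens_kappa(nurse_a, nurse_b):.2f}")
print(f"Cronbach alpha : {cronbach_alpha(items):.2f}")
```

Responsiveness, the third property compared in the paper, would additionally require paired scores recorded at rest and during a painful procedure, to check that the scale actually moves when pain changes.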

 

What are the implications of your study for current clinical practice?

Our study clearly suggests that either the BPS or the CPOT is the superior tool and should be chosen in ICUs where no behavioural pain scale has yet been implemented – this is consistent with the recent practice guidelines.

 

What’s next for your research?

This is the most difficult question. We should probably better understand why some ICU teams succeed in implementing these tools while others don’t. What are the barriers and facilitators to successfully implement a behavioural (subjective) pain tool? What are the most appropriate education and training methods?

Also, beyond assessment of pain in patients who are unable to communicate but still have the possibility of demonstrating a pain behaviour (e.g. grimacing, moving the limbs), we should now determine which pain assessment tool is recommended in other selected critically ill patients, especially brain injured patients and patients receiving neuromuscular blocking agents.

 
