
Are children with malaria at increased risk of bacterial infection?


A putative clinical association between malaria and invasive bacterial infection (IBI) was first suggested in 1929, and although data accumulated since then indicate that children with Plasmodium falciparum malaria are at risk of IBI, the exact nature and extent of this relationship remain unclear. James Church and Kathryn Maitland, from Imperial College London, UK, conducted a systematic review, published in BMC Medicine, to try to unravel this association.

Church and Maitland carried out a systematic search of three major scientific databases, PubMed, Embase and Africa Wide Information, to identify articles describing bacterial infection among children with P. falciparum malaria in sub-Saharan Africa, and found a total of 25 studies that fulfilled their inclusion criteria. Of these, ten studies, conducted between 1992 and 2010 in 11 countries, reported on children with severe malaria. In all, these studies involved 7,208 children, including 461 with concomitant IBI, a mean IBI prevalence of 6.4 percent.

When they considered studies that included children with malaria of any severity (i.e. not just severe malaria, but also asymptomatic and mild cases), the results differed slightly. The prevalence of IBI in children with malaria was similar, at 5.58 percent (1,166 of 20,889); however, of 27,641 children with non-malarial febrile illness, 2,148 had a concomitant IBI, a prevalence of 7.77 percent.

Additionally, the authors investigated the effect of co-infection on mortality in children with severe malaria, and found that there were 81 fatalities in 336 children with malaria and concomitant IBI (24.1 percent) compared with 585 deaths in 5,760 children with malaria alone (10.2 percent).

Although the systematic review has limitations, such as the high heterogeneity between the included studies, the findings suggest that children with severe malaria are at increased risk of bacterial infection, and that this in turn increases the risk of mortality.


Patrick Bolton and Hilgo Bruining on connecting genetic risk factors to specific symptoms in autism


Genome-wide association studies, genetic epidemiological investigations and numerous gene sequencing approaches have led to a growing appreciation of a genetic component to autism spectrum disorder (ASD). Genetic variations have consequently been linked to a broad spectrum of behavioural symptoms that fall within the classification of ASD. However, for the most part, these risk factors have not been correlated with specific symptomatology. Such a correlation could help to dissect the heterogeneity of ASD, which is urgently needed to develop more targeted treatment possibilities. In a recent study in Molecular Autism, Patrick Bolton from King’s College London, UK, Hilgo Bruining from the Brain Centre Rudolf Magnus, the Netherlands, and colleagues investigate the genetics of ASD with a view to determining whether specific behavioural signatures can indeed be linked to certain genetic traits. Bolton and Bruining explain how they were able to discern behavioural symptoms unique to specific genetic disorders that are known to carry an increased risk for ASD, and discuss how this machine-learning approach could be applied to idiopathic ASD.

 

An increasing body of research has linked a variety of genetic risk factors to autism spectrum disorder (ASD). How strong do you think the genetic component of ASD is?

Twin and family studies have shown that autism spectrum disorder is one of the most strongly genetically determined psychiatric disorders. Heritability estimates indicate that 60-90 percent of the liability to autism spectrum disorder is attributable to genetic factors. The known genetic risk factors include single gene disorders, chromosomal disorders and rare genetic variants. Together these are evident in approximately 15 percent of cases. However, even amongst those cases where genetic risk factors have yet to be identified, the evidence indicates that genetic factors play a major role in aetiology. We also know, however, that non-genetic factors are implicated in aetiology, although much less progress has been made in identifying these to date.

 

What did your study set out to investigate?

Whether specific genetic conditions that are known to cause ASD lead to different patterns of autistic symptomatology, and whether cases of ASD of unknown aetiology exhibit patterns of symptomatology that resemble those associated with specific genetic syndromes.

 

Data analysed in your study were generated from behavioural profiles obtained through the ADI-R (Autism Diagnostic Interview – Revised). What is the ADI-R and what kind of information did it provide?

The Autism Diagnostic Interview – Revised is an extensive well validated parent interview that characterises the key symptomatic manifestations of autism spectrum disorder. These include problems in communication, difficulties in engaging in reciprocal social interaction and the presence of intense preoccupations, repetitive behaviours and unusual circumscribed interests.

 

You employed a machine-learning approach to analyze your data. Can you explain what the Support Vector Machine is and how it works?

Machine learning concerns the construction or ‘training’ of supervised learning algorithms on labelled examples. In this study the labels were the different types of genetic disorder. The Support Vector Machine method tries to determine an optimal non-linear (flexible) combination of ADI-R items that separates the genetic labels by the largest possible margin, giving the lowest misclassification error.
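As a rough illustration of this idea (not the authors' actual pipeline), the sketch below trains a support vector classifier with a flexible, non-linear (RBF) kernel on synthetic item-level scores standing in for ADI-R items; the labels, sample sizes and scores are all invented for demonstration.

```python
# Minimal sketch (not the authors' pipeline): training a Support Vector Machine
# to separate genetic-disorder labels from item-level symptom scores.
# The feature matrix and labels below are synthetic stand-ins for ADI-R items.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_per_group, n_items = 40, 30          # hypothetical sample size and item count
labels = ["disorder_A", "disorder_B"]  # placeholders for genetic-disorder labels

# Synthetic scores: each group gets a slightly shifted mean across the items.
X = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n_per_group, n_items))
               for shift in (0.0, 0.5)])
y = np.repeat(labels, n_per_group)

# The RBF kernel gives the flexible (non-linear) decision boundary described above;
# cross-validation estimates the misclassification error out of sample.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```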

 

What were the main results of your study, and what findings most excited/surprised you?

The most important result was that we could associate patterns of autistic symptomatology with specific genetic disorders. This is an important result, implying that autistic symptom profiles might be used to designate the underlying genetic aetiology. Indeed, behavioural specificity related to genetic disorders is consistent with the fact that many clinicians recognise characteristic behavioural presentations in genetic syndromes such as Down syndrome, Rett syndrome or tuberous sclerosis. Our study is the first to translate this notion into statistical evidence by machine learning pattern analysis.

Apart from its power to detect ‘hidden’ profiles, machine learning also has the advantage that it delivers a robust algorithm that can be used in other samples consisting of the same behavioural data. This offered the unprecedented opportunity to estimate the relative similarity of ‘idiopathic ASD’ to behavioural profiles designated from the selected genetic disorders, as we show in the second part of the study. Taken together, the application of support vector machine learning to autistic symptom profiles opens up a novel avenue of translational research.

 

What are the major strengths and limitations of the methods employed in your study?

We had ADI-R algorithm data on fairly large numbers of individuals with six different genetic disorders. However, more measures of autistic symptoms might have increased the power to detect differentiated profiles. The machine learning algorithm differentiated between some disorders better than others. This variability might be explained by the variation in sample sizes, so larger samples will need to be investigated in future. It was also notable that the ratings of the pattern of social dysfunction were most discriminative, raising the possibility that particular styles of social impairment may be related to particular genetic risk factors.

It seems likely that the incorporation of more symptoms and other phenotypic features, such as the presence of comorbid behavioural problems like those associated with ADHD, may improve the ability to assign cases to specific classes of genetic disorder. The inclusion of other conditions such as Fragile X may also help further to improve genotype-phenotype correlations. Future studies may reveal further contrasts relating to genetic factors that are biologically meaningful.

 

What are the clinical implications of your findings?

Our proof of concept study indicates the existence of ‘signature’ autistic behavioral profiles related to genetic risk factors. These signatures may be helpful in disentangling the aetiological and phenotypic heterogeneity evident in ASD, but warrant replication in larger and independent samples. The approach presented in our study could hold promise as a means of stratifying patients who may benefit from treatments targeted at specific pathways and as a way of identifying those patients in whom particular interventions may have unwanted effects.

 

Christine Elsik and Kim Worley on finding the missing honey bee genes


The honey bee has both great economic and ecological importance due to its role as a major pollinator. It also serves as a model organism for studies into human health, including fields such as allergy and immunity, and is a focus of research into eusociality and group behaviour. All of these traits made the honey bee an attractive candidate for genome sequencing, leading to the generation of the first draft of its genome in 2006. However, this annotation was found wanting, as the number of genes discovered appeared low when compared to other social insects. Christine Elsik from the University of Missouri, USA, Kim Worley from the Baylor College of Medicine, USA, and colleagues present an upgraded annotation of the honey bee genome in their recent study in BMC Genomics, revealing around 5,000 more protein-coding genes than the previous annotation. Elsik and Worley explain more about what their results revealed, the issues around the first draft of the honey bee genome, and what lessons can be learned for future annotations.

 

What was the inspiration for this project?

People who work with unfinished genome sequences find gaps in the sequence that can cause errors in the translated proteins and problems for studies of non-coding sequences. These issues occur regardless of the sequencing technology used and are found in most sequenced genomes including the original honey bee genome. Only a handful of genomes have been ‘finished’ to a quality of one error in 10,000 base pairs, the standard of the human reference genome. The honey bee genome seemed particularly problematic, because parts of the original genome that were AT-rich were missing in early sequence data and were targeted for improvement, and the number of gene annotations seemed low compared to Drosophila and later sequenced Hymenoptera.

 

Why did people think the old honey bee genome assembly was poor?

In addition to the issues with draft assemblies we noted above, the low number of gene annotations suggested parts of the assembly were missing or assembled incorrectly. However, annotation also depends upon an accumulation of data from expressed sequences (RNAseq) and genomes from other species for comparison. Both of these types of data were limited for the old honey bee genome project, which predated the current sequencing technologies that have fostered more RNA sequencing as the cost has dropped, and which was the first Hymenoptera genome project. Only Dipterans (Drosophila and mosquito) and silkworm were available for comparative studies.

 

Why did it take so long to re-annotate the genome and what advances have now facilitated this re-annotation?

In an ideal world re-annotation would have been faster. The work was much more than running gene prediction software. The annotation process itself was a research project, and combining gene sets contributed by different sources required extra effort to reformat datasets. We tested several approaches and many different parameters, and adapted our methods when new datasets became available. We extensively evaluated alternatives before selecting a final gene set, and then further evaluated the selected set. We anticipate that re-annotation will be faster in the future because tools for RNAseq-based annotation are improving and methods described in each genome re-annotation publication will provide guidance for future projects.

 

Total gene number was used to infer the problems with the original annotation, and the final gene number after re-annotation was close to that originally predicted. Are we better at predicting gene number or do we simply have more species for comparison?

Both. The evidence (RNAseq data and comparative species) is very important for improving the annotation, but the tools available to make use of these data have also evolved and improved. We think that there is still potential for improvement. We wonder whether tuning the gene prediction algorithms to the GC content domains would improve the annotation further. Perhaps genes found in AT-richer domains have different features from those in GC-richer domains, and using different prediction parameters in the different domains would identify genes that are otherwise missed when parameters are tuned to the genome average.
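As a loose sketch of what tuning to GC content domains could start from (our assumption of a first step, not the authors' method), the snippet below splits a sequence into fixed-size windows and labels each as AT-rich or GC-rich; the window size and cutoff are arbitrary placeholders.

```python
# Illustrative sketch (not the authors' method): classify genome windows as
# AT-rich or GC-rich so gene-prediction parameters could, in principle, be
# tuned separately for each domain. Window size and threshold are arbitrary.
def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def classify_windows(genome: str, window: int = 10_000, gc_cutoff: float = 0.35):
    """Yield (start, end, gc, label) for consecutive non-overlapping windows."""
    for start in range(0, len(genome), window):
        chunk = genome[start:start + window]
        gc = gc_content(chunk)
        label = "GC-rich" if gc >= gc_cutoff else "AT-rich"
        yield start, start + len(chunk), gc, label

# Toy usage with a made-up sequence:
toy = "AT" * 6000 + "GC" * 4000
for start, end, gc, label in classify_windows(toy, window=8000):
    print(start, end, f"{gc:.2f}", label)
```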

 

You ruled out the presence of significant amounts of repetitive sequences in the honey bee genome. Do you think next generation sequencing techniques have resolved the problems with repetitive sequences encountered in early genome sequencing attempts?

Next generation sequencing technologies are much less expensive per base than earlier Sanger data, so projects can have much deeper raw sequence representation and unique sequences are of good quality. However, next generation, short read sequencing technologies are less capable of dealing with repetitive sequences. Sequence reads need to be long enough, and of high enough quality, to be uniquely placed, and reads or read pairs need to be long enough, or widely enough spaced with a reliable inter-pair distance, to step through longer repeat sequences. Short reads are often too short to do this. Longer read sequencing technologies are very helpful in this context.

 

Why do you think repetitive sequences are so rare in honey bees?

The paucity of repetitive DNA in the honey bee genome remains a puzzle. The Honey Bee Genome Sequencing Consortium postulated the genome was low in retrotransposons due to haplodiploidy. Haploid drone genomes exposed to selection every generation would not tolerate disruption by retrotransposons (Nature. 2006, 443, 931–949). However, more recent Hymenoptera genome projects have reported larger numbers of retrotransposons in other organisms with similar haplodiploid lifestyles. Our analysis and the previous analysis (Nature. 2006, 443, 931–949) suggest that transposable elements were active and present in higher numbers in the past.

Apis mellifera has a few other unusual genome characteristics, including a high recombination rate and low and heterogeneous GC content with genes biased to lower than average GC content regions of the genome. Understanding evolutionary processes that have contributed to these characteristics may provide insight into the low repetitive DNA content.

 

Is the annotation of the honey bee genome an isolated case, or are there likely to be other poorly annotated genomes that could benefit from the same treatment? How wary should end users be of genome data?

Users of any data should be wary of data quality; genome sequences and annotations are no exception. Trust but verify. Often people view the data in a genome browser and take that view as fact, rather than drilling into the particulars of the underlying data to see which regions are more or less reliable (i.e. having gaps and low quality sequence). We have ongoing efforts to improve the contiguity of existing genome sequences with PacBio sequence and the PBJelly tool (PLoS One. 2012, 7 (11) e47768), and we have a number of low coverage Sanger genomes that have been improved and are being prepared for publication.

 

What are the lessons to be learned for future genome annotation projects?

Efforts to improve genomes and genome annotations are useful exercises that depend upon better underlying data (RNA sequence and comparisons with other high quality genome sequences) as well as improved automated annotation methods. An ongoing challenge is that with new types of data there is always a need to evaluate and revise computational approaches. For example, genome-guided reconstruction of transcripts from RNAseq has been improving over the last couple of years, and future genome annotation projects need to leverage the most recent advances.

 

Jamie Toombs and Henrik Zetterberg on biomarker measurement for Alzheimer’s disease


Biomarkers for Alzheimer’s disease (AD) present the potential for early detection and consequently early treatment of the condition. They may also prove useful in monitoring disease progression and as a measure of the effectiveness of new therapies. Growing interest in this field of research has therefore led to numerous studies analysing cerebrospinal fluid (CSF) samples from AD patients to aid clinical diagnosis, research and drug development. In a recent study in Alzheimer’s Research & Therapy, Jamie Toombs, Henrik Zetterberg and colleagues from the Institute of Neurology at University College London, UK, investigate whether biomarker measurements from CSF samples are affected by transfer of these samples between tubes, revealing that a significant reduction in biomarker concentrations is indeed an issue. Toombs and Zetterberg discuss their key findings, potential solutions and what this means for the biomarker research community.

 

What led to your research interest in biomarker development for Alzheimer’s disease?

Alzheimer’s disease (AD) is a chronic, progressive neurodegenerative condition with a long preclinical phase. Biomarkers offer an exciting opportunity for engaging with detection in this early phase, differentiation throughout the disease course, as well as validating therapeutic efficacy. The recently founded Leonard Wolfson Experimental Neurology Centre (LWENC) aims to “accelerate the development and validation of treatments, open an earlier therapeutic window for intervention, and horizon scan for future therapeutic targets”. To facilitate this, a bio-resource centre is being created, with focus on building an extensive, high-quality sample base. It is in our research interest to establish rigorous standard operating procedures for sample handling, benefiting from, and contributing to, the latest developments in the field.

 

Your study looked at how the measurement of several Alzheimer’s disease biomarkers in cerebrospinal fluid (CSF) could be distorted by serial tube transfer. How did this investigation come about?

The relationship between amyloid beta monomers and tau isomers forms a cornerstone of the present understanding of AD, and is relevant in a wide range of other neurodegenerative conditions. However, significant differences are known to occur between measurements of target molecules in bio-fluid samples made by different laboratories. This can occur even when using the same samples and the same brand and batch of equipment. During the early testing of the effect of storage volume on core AD biomarkers we noticed that such a discrepancy had occurred among samples received from colleagues in another lab.

The concentrations of amyloid beta 42 (Aβ42) detected were lower than previously identified at the other site, despite a shared protocol and close working relationship. Tau results from the same samples were unchanged from the previous estimates, so we reasoned that something had gone awry on the Aβ plate. ELISA assays involve lengthy, multi-step processes during which significant variables or errors may potentially occur at any stage. The assays were repeated and the results were exactly as before. The major results of the volume experiment, independent of which sample was analysed, showed that detectable Aβ42 in a CSF sample decreased as the relative surface area of the stored volume increased. Given this further evidence that Aβ42, but not so much tau, had an apparent tendency to adsorb to container surfaces, a possible explanation for the earlier problem presented itself. It was realised that the samples had been transferred between containers a number of times between analysis in the original laboratory and our own. The work of del Campo et al. 2012 (Biomark Med. 2012, 6, 419-430) supported the plausibility of this mechanism, and a decision was made to investigate this hypothesis further.

 

What were your key findings and were you surprised by them?

Our key findings were:

Transfer of cerebrospinal fluid (CSF) samples between tubes led to a decrease of approximately 25 percent per transfer in amyloid beta 42 peptide concentration.

Transfer of CSF samples between tubes led to approximately a 16 percent decrease per transfer in amyloid beta 38 and 40 peptide concentrations.

Transfer of CSF samples between tubes led to a significant decrease per transfer in tau protein concentration, though the magnitude of this effect (approximately 4 percent) was much smaller than for the amyloid beta peptides and was considered clinically irrelevant.

The introduction of 0.05 percent Tween 20, a non-ionic surfactant, mitigated this effect in all proteins tested, but did not entirely negate it in the amyloid peptides.

The results were not unexpected given our previous experience looking at the effect of storage volume on these proteins (Clin Chem Lab Med. 2013, 51, 2311-2317), and our suspicions regarding inter-laboratory variability. However, the magnitude of amyloid beta peptide loss was certainly greater than we had hoped. Sample handling and sharing practices will need to be adapted accordingly.
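To make the practical impact concrete, here is a purely illustrative calculation of how the approximate per-transfer losses quoted above would compound over repeated tube transfers, assuming the same proportional loss at every transfer (a simplification for illustration, not a result from the study).

```python
# Illustrative arithmetic only: how a fixed fractional loss per tube transfer
# compounds. Percentages are the approximate per-transfer decreases quoted above.
per_transfer_loss = {"Abeta42": 0.25, "Abeta38/40": 0.16, "tau": 0.04}

def remaining_fraction(loss_per_transfer: float, n_transfers: int) -> float:
    """Fraction of the starting concentration expected after n transfers,
    assuming the same proportional loss at every transfer."""
    return (1.0 - loss_per_transfer) ** n_transfers

for analyte, loss in per_transfer_loss.items():
    fractions = [remaining_fraction(loss, n) for n in range(1, 4)]
    print(analyte, [f"{f:.0%}" for f in fractions])
# e.g. for Abeta42, after 3 transfers only ~42% of the original concentration remains.
```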

 

What other confounders to accurate biomarker measurement did you find, and how much of an effect do you think they may have?

In a previous study (Clin Chem Lab Med. 2013, 51, 2311-2317), we observed that storage volume can have a significant effect on core AD biomarker protein concentration. Smaller volumes of liquid have a greater proportional surface area than larger volumes, meaning that molecules in solution come into contact with the container surface in greater or lesser proportions. A range of volumes (50, 75, 100, 125, 250, 500, 1000 and 1500 μL) was tested in the same 2 mL tube type. Tau proteins were not significantly affected, but an increase of 10 μL in volume was, on average, associated with an increase of 0.95 pg/mL in detected Aβ42. We observed that introducing 0.05 percent Tween 20 at the aliquot-making stage effectively neutralised this effect. CSF derived from AD patients and non-AD controls behaved similarly, as did pooled and individual subject CSF.

It is difficult to say exactly what effect such factors may have had, and are having on work within the field more broadly. The results of our studies show that there is at least the potential for incautious or inconsistent sample treatment to be a considerable problem. Awareness of standardisation issues within biomarker-based research is growing and in the last few years a number of other groups have published valuable studies on the subject. We hope that the lasting effect will be of improved accuracy and greater resource efficiency as we evolve our laboratory methods to neutralise such factors, rather than being confounded by them.

 

Do you think your findings may also be relevant to other hydrophobic biomarkers?

One must be careful when generalising, especially given the relatively limited nature of our study, but it seems reasonable to predict that a biomarker’s hydrophobicity will be a factor in its vulnerability to surface adsorption. It is important to note that the crucial point may be not whether the molecule is or is not considered hydrophobic, but its properties relative to competitors within the solution matrix.

Additionally, whilst the hydrophobicity of a molecule does appear to be important in terms of its interaction with container surfaces and other components of a solution, this is far from the only property governing protein dynamics. Certain proteins may simply bind more easily to certain materials due to compatible structural composition. Also, as we acknowledged in the paper, the method we used does not allow clear distinction to be drawn between direct protein loss to the tube surface and the aggregation of proteins within solution, which could mask the target epitopes and so decrease detection. Such aggregation could be contributed to by hydrophobicity or pH related conformation change, and contributing factors need not be mutually exclusive. Further work is needed to fully investigate the mechanisms involved in the effect observed.

We would certainly encourage other laboratories to test this effect for themselves, perhaps on their own molecules of interest. It would be interesting to know what they find.

 

What further research is needed in order to address how best to improve the accuracy of biomarker measurement, from initial collection from a patient to lab testing?

It is clear that the biomarker research community have to establish more rigorous and standardised operating procedures for each step of sample handling – from initial collection to final testing. This is a daunting task, as whatever protocols are developed will inevitably have to keep pace with changes in technology, ideas, and new biomarkers. Ideally, whenever a new biomarker is identified, a series of tests to understand its properties in regard to collection and storage factors should be conducted. Research themes of temperature, pH, aggregation, surface interactions, and matrix dynamics are likely to be useful for identifying confounding factors so that steps can be taken to neutralise them and so improve the accuracy of biomarker measurement by proxy.

 

What’s next for your research?

We are looking forward to continuing the investigation into Tween 20 as a potential stabilising additive to CSF. We are currently in the early stages of a project to compare the stability of patient CSF in the presence and absence of Tween 20 over multiple time points. Additionally, we intend to pursue confirmation of the exact mechanisms behind protein concentration loss in storage, and to define more precisely how non-ionic surfactants interact with our target biomarkers.

In a wider scope, our research aims to assist in the development of reference materials for assay standardisation. Such reference materials would be a considerable boon to biomarker research, therapeutic advancement, and ultimately patient well-being.

 

Joan Richtsmeier on 3D cranial changes in mouse models of Apert syndrome


Apert syndrome is a rare congenital disorder characterized by malformation of the skull, face, hands and feet. The cranial malformations, termed craniosynostosis, are a particular hallmark of the disorder and result in part from the fibrous joints of the skull (sutures) fusing prematurely during development. The underlying genetic cause of this autosomal dominant disorder was identified as mutations in fibroblast growth factor receptor 2 (FGFR2). Numerous studies in model organisms have therefore focused on the effect of these mutations on the function of cells in the vicinity of sutures. Joan Richtsmeier from Pennsylvania State University, USA, and colleagues took a new approach to understanding this condition in their recent study in BMC Developmental Biology, where they examined morphological changes in the overall 3D landscape of the cranium in two mouse models for Apert syndrome. Richtsmeier explains more about what their results revealed and the implications for human craniofacial diseases.

 

Craniofacial development in mice aged embryonic day 17.5 (left) and postnatal day 0 (right); superimposed micro CT and micro magnetic resonance image. Image source: Susan Motch Perrine, Pennsylvania State University, USA.

What was the main goal when you started this research?

I have been studying craniosynostosis since my PhD thesis. The main goal at that time was to try to understand how postnatal growth of the skull in children with Apert and Crouzon syndrome differed from patterns of typically developing children. I was able to demonstrate that postnatal cranial growth was different from typical growth in these syndromes, using what were at the time cutting-edge quantitative tools and longitudinal data (head x-rays). But it was clear that the shapes of the heads of these children were not typical at birth, and so it was critical to obtain prenatal data in order to understand how growth patterns contributed to the morphology that is obvious in newborns with these conditions. This cannot be done in humans, but it can be done in mice.

Also, sutures, or the ‘seams’ between bones of the skull, have long been thought of as ‘growth centers’. However, our idea was that the suture might not be driving everything and that growth occurs on all surfaces of the bones of the skull, and so we wanted to characterize that growth. It turns out that we were right: head shape is different before the sutures close in animals carrying the mutations, and changes in form occurring on all surfaces of all bones contribute to the growth pattern in both typical and abnormal growth. We did find that differences in suture patency (whether the suture remains open or closed, and for how long) also contribute significantly to the growth pattern, especially in the face. So if patterns of growth of all skull bones are different, and suture closure patterns (timing and order of closure) are different, in mice carrying these mutations, then we need to think more generally when developing therapies.

 

What are the benefits of studying these mutations in a mouse model?

The obvious benefit is that we can study biological processes and measure the resulting phenotypes at any point during prenatal development. Here we studied the later stages of prenatal development, but colleagues have shown amazing things by looking at earlier stages of development and, of course, what happens later is in many ways dependent upon what happens earlier in development. For the very specific event of premature suture closure, this occurs prenatally. It would be difficult, actually impossible, to observe and score the timing of suture closure (normal or abnormal) in humans and yet, with these mice, we can do this at the anatomical level by visualizing the sutures using micro computed tomography (3D x-rays) or at the mechanistic level by using immunohistochemistry or other approaches to see what the cells are doing as the sutures close.

 

Your study examined two different FGFR2 mutations known to cause Apert syndrome in humans. What are the main differences between these two mutations?

These two mutations are in the same gene and affect neighboring amino acids. In humans, these two mutations are responsible for approximately 99 percent of all cases of Apert syndrome. Once these mutations were discovered and the causative mutation in people with Apert syndrome could be precisely identified, clinicians began to accumulate data in an attempt to understand which disease traits were more common or more pronounced in individuals carrying one mutation or the other. From this work, it was concluded that individuals carrying the FGFR2 S252W mutation showed relatively more severe anomalies of the head and a higher incidence of cleft palate, while individuals who carried the FGFR2 P253R mutation showed relatively more profound limb anomalies.

In a previous study, we were able to demonstrate that features of the head were more strongly affected in mice carrying the mouse version of the Fgfr2 S252W mutation relative to those carrying the Fgfr2 P253R mutation. In our study published in BMC Developmental Biology, we were interested in further defining these differences and demonstrating how the two mutations affect cranial growth differently. We propose that this is due to differential primary effects of the mutations and to differences in the ‘adjustment’ that has to occur among affected cranial tissues during growth.

 

How do you control for variation between individuals with your measurements?

As an evolutionary biologist, variation is the property in which I’m most interested, but it is very, very hard to study correctly, especially using three-dimensional coordinates of biological points on our specimens as primary data. In order to measure variation of biological objects in three dimensions, typically investigators pick a reference object (for example, a specific individual or the sample mean) and rotate and translate and scale data from all individuals so that they are superimposed onto the same coordinate system. Mathematically, the coordinate data representing the form of all individuals in the sample are overlaid one on top of the other and then stretched and shrunk along coordinate axes until the ‘best fit’ among them is attained. This is the most common way that biologists deal with this type of analysis but there is a problem in that the ‘best fit’ (the precise mathematical rule by which measures are minimized when the stretching and shrinking is done) is arbitrarily chosen and is not informed by the available data. We take an alternate approach: we start with three dimensional coordinate data and then estimate all unique linear distances among the original three dimensional coordinates of biological landmarks. Using those data to analyze our forms, we skip the step of superimposition, which in the case of a Generalized Procrustes fit tends to spread variation evenly across the objects of study. This is the reason why we are able to precisely locate the differences in form and differences in growth among the organisms we study. We have developed statistical tests that use confidence intervals to test for differences in form and in growth, so that we can provide information about the localized variation within samples.
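As a minimal sketch of the distance-based step described here (not the full statistical analysis Richtsmeier's group uses, which also includes confidence-interval tests), the snippet below computes all unique pairwise linear distances among a specimen's 3D landmarks and compares two synthetic 'specimens' without any superimposition; the landmark coordinates are made up.

```python
# Minimal sketch of the distance-based step described above (not the authors'
# full analysis): compute all unique pairwise linear distances among 3D
# landmark coordinates, so forms can be compared without Procrustes superimposition.
import numpy as np
from itertools import combinations

def pairwise_distances(landmarks: np.ndarray) -> dict:
    """landmarks: (n_landmarks, 3) array of x, y, z coordinates for one specimen.
    Returns {(i, j): distance} for every unique landmark pair."""
    return {(i, j): float(np.linalg.norm(landmarks[i] - landmarks[j]))
            for i, j in combinations(range(len(landmarks)), 2)}

# Synthetic example: two 'specimens' with five landmarks each.
rng = np.random.default_rng(1)
specimen_a = rng.normal(size=(5, 3))
specimen_b = specimen_a + rng.normal(scale=0.05, size=(5, 3))  # slightly deformed copy

d_a, d_b = pairwise_distances(specimen_a), pairwise_distances(specimen_b)
# Ratios of corresponding distances localise where two forms differ,
# without rotating, translating or scaling either specimen onto a reference.
ratios = {pair: d_b[pair] / d_a[pair] for pair in d_a}
print(max(ratios, key=ratios.get), round(max(ratios.values()), 3))
```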

 

How closely do the results in mice mimic what is seen in humans?

Pretty darn well for organisms whose last common ancestor existed 65 million years ago! We lean on evolutionary theory when we use mice to study human disease. We know that the appearance and individual features of mice are very different from humans, but we also know that many genes are conserved across mammals and the proximate functions of most of these genes are likely to be conserved as well. Evolutionary developmental biology has shown us that conservation of the patterns of development of complex structures (limbs, brains, hearts) implies that the genetic programs that specify structural design might also be conserved. This is why we can ‘recreate’ a complex genetic insult known in humans to cause a specific outcome and see similar outcomes in mice.

 

What are the implications for human health based on this work?

I think the implications are really important in terms of managing these and other craniofacial diseases that are caused by genetic mutations. Currently, the only option for people with Apert syndrome is rather significant reconstructive surgery, sometimes successive planned surgeries that occur throughout infancy and childhood, and into adulthood. These surgeries are necessary to restore function to some cranial structures and to provide a more typical morphology for some of the cranial features. But, if what we found in mice is analogous to the processes at work in humans with Apert syndrome, then we need to decide whether a surgical approach that we know is necessary is also sufficient. If it is not, at least in some cases, then we need to be working towards therapies that can replace or further improve surgical outcomes. However, this is a very, very difficult problem because we have found, at least in mice, that these Fgfr2 mutations also change the size and shape of non-skeletal structures of the head. So the mutation initiates processes that are persistent, continually affecting shapes and shape changes during growth, in both skeletal and non-skeletal tissues.

 

Where will your research go from here?

I have great collaborators and a great group of really smart people in my lab, so there is no telling where we will go. I have always been interested in the relationship between development and evolution and these mouse models provide an ideal ‘laboratory’ for the formulation of hypotheses having to do with normal variation and variation that occurs at the ends of the normal distribution. That ‘normal distribution’ of organismal form changes as evolution proceeds, but many of the processes that underlie the production of structures remain. These processes just get ‘tweaked’ to happen more often during development, or at a faster pace, or in a different tissue, or to slow down or to only occur in certain environments. Being able to study what these mutations do across many cell types and tissues at different times during development, and how these various changes combine to produce changes in the overall appearance of the organism, provides a wonderful place in which to formulate great questions. In terms of this particular disease, I would love to contribute to finding one of the key mechanisms that would lead to improving the lives of people with craniosynostosis and other craniofacial disorders.

 

Clues to how deep brain stimulation may alleviate depression


Magnetic stimulation of the scalp is thought to influence neural activity in the brain and has been used successfully to treat depression, gaining approval for use in the US in 2008. More invasive deep brain magnetic stimulation (DMS) has also been used in patients with severe neuropsychiatric disorders, including Parkinson’s disease; however, the biological basis of electrostimulation as a therapeutic approach remains poorly understood. The adult neurogenesis hypothesis of depression suggests that recovery is marked by neural growth in the dentate gyrus of the hippocampus. A recent study in Molecular Brain now explores how DMS affects hippocampal neurons and other markers of depression in rodents.

Under microscopic examination of the hippocampus of wild type mice, Zilong Qiu from the Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, China, and colleagues found that DMS treatment resulted in neural stem cell proliferation and promoted dendrite growth in both adult and senescent animals. These striking morphological findings are supplemented by data showing the upregulation of marker genes for neural activity, including the hippocampal Fgf1b gene that is known to be induced by electroconvulsive stimulation.

Additional behavioral studies found that DMS could rescue an experimentally induced depressed phenotype in mice – an effect that was shown to depend upon growth of new hippocampal neurons, by using gamma irradiation to experimentally knock out neural proliferation. Recovery under DMS treatment was accompanied by restoration of MKP-1, a gene that is dysregulated during depression. Furthermore, an electrophysiological study of rats undergoing restraint stress (a model for psychosocial stress) revealed a recovery of axonal long-term potentiation to non-depressed levels under DMS treatment. This latter finding, alongside the changes that were already demonstrated in neural gene activity, suggests that DMS plays a role in modulating synaptic plasticity. Lastly, DMS was shown to reverse the anxiety phenotype and, somewhat surprisingly, extend lifespan in a mouse model of the neurodevelopmental disorder, Rett syndrome.

The effect of DMS treatment on neuronal growth and activity, in the adult rodent hippocampus, provides a novel biological evidence base for electrostimulation that merits further investigation to help treat neuropsychiatric disorders. The observations that DMS reverses several molecular, physiological and behavioral correlates of depression in rodent models deepens our knowledge of the pathophysiology of depressive disease in particular, and strengthens support for the use of electrostimulation to treat depression in humans.

More sensitive screening for malarial parasite Plasmodium vivax


Malaria is a parasitic disease caused by protozoa of the genus Plasmodium. It is spread by the female Anopheles mosquito and, according to the World Health Organization, resulted in over 200 million cases worldwide in 2012. Plasmodium falciparum causes the majority of deaths associated with malaria, but Plasmodium vivax is the main cause of recurring malaria, in which dormant forms of the parasite reside in the liver and ‘activate’ to cause disease years after infection. Detection of the malarial parasite, in particular P. vivax, has proven to be a challenge due to limitations inherent in current detection methods, which rely on light microscopy and nucleic acid staining. In a recent study in Malaria Journal, John Adams from the University of South Florida, USA, and colleagues reveal a new technique for the detection of P. vivax that may overcome these limitations.

P. vivax is predominant in the Americas and Asia, and is less common in Africa, where the general human population lacks the Duffy antigen through which P. vivax enters and infects red blood cells. Traditionally, light microscopy has been used in the field to detect malaria in blood smears. This method of detection can take a long time and requires a skilled microscopist to identify the parasite which, in the case of P. vivax, is an extremely difficult task due to the low levels of parasite-infected blood cells in the peripheral circulation. Consequently, the presence and numbers of parasites can be read inaccurately.

In more recent years, to address these issues, flow cytometry based detection methods using nucleic acid staining have been developed. These methods work on the premise that the only nucleic acid containing entities in the blood sample would be the malaria parasite, since white blood cells would have been removed and red blood cells have little or no nucleic acid content. A flow cytometer can then quantify the number of stained cells that pass through the detector.

However, nucleic acid based detection methods also have limitations as blood samples can still contain residual white blood cells. Moreover, anaemia caused by malaria increases the number of immature red blood cells called reticulocytes, which do contain nucleic acid. This is compounded by the fact that P. vivax has an affinity for reticulocytes, ultimately contributing to inaccurate detection and quantification of P. vivax in patient blood samples.

Adams and colleagues present a new flow cytometry-based method for detection, using antibody staining against the Plasmodium falciparum BiP protein. This protein is highly expressed and found abundantly on the endoplasmic reticulum of infected cells. Furthermore, it is well conserved and can therefore also be used as a marker for P. vivax. The antibody serum developed by the authors, called anti-PfBiP, was successfully used to accurately quantify the P. vivax parasite during its blood stage, in both clinical isolates and in vitro.

The staining method presented by the authors in this study provides a robust and time-efficient detection process for P. vivax, removing the main obstacle found with existing nucleic acid staining techniques – namely the difficulty of distinguishing the parasite in nucleic acid-filled reticulocytes. Although light microscopy is still the gold standard for detection of parasites, flow cytometry is becoming a readily available and affordable detection method for laboratories.

 

Brad Ruhfel on piecing together green plant phylogeny with plastid genomes


Green plants, known as Viridiplantae, comprise around 500,000 species and represent an abundance of diversity. The clade originated at least 750 million years ago and displays great heterogeneity, so understanding the phylogenetic relationships within it has proven to be a significant challenge, one compounded by the extinction of several major lineages. In an attempt to resolve some of the major questions in this field, researchers have looked to nuclear, mitochondrial and plastid DNA. In recent years, next generation sequencing has rapidly increased the number of complete plastid genomes. With this wealth of data now available, Brad Ruhfel from Eastern Kentucky University, USA, and colleagues sought to deduce a comprehensive green plant phylogeny based on the plastid genome and to explore some of the major relationships across this clade, as published in their recent study in BMC Evolutionary Biology. Here Ruhfel explains the benefits and limitations of using plastid sequence data and what new insights their study uncovered.

 

What are the benefits of using plastid sequences as opposed to nuclear or mitochondrial sequences?

Plastid sequences have been the mainstay of plant phylogenetics since the mid 1980s for several reasons. First, the plastid genome is present in high copy numbers, making the DNA easy to obtain, particularly when compared to single-copy nuclear genes. Second, genes in the plastid genome are relatively easy to align across all green plants. Third, the plastid genome is highly conserved in terms of structure and gene evolution. Finally, horizontal gene transfer and gene duplication can cause problems when using data from the mitochondrial or nuclear genomes; these issues are rare when using plastid genome data. In short, plastid data are easier to obtain and analyze.

 

How is your study different from previous plastid phylogenomic analyses of green plants?

Previous studies using plastid genome data to examine the relationships of all green plants have had much poorer taxon sampling (i.e. they sampled fewer species). These studies typically used somewhere between 40 and 90 species, while here we used 360. This expanded taxon sampling allowed us to address major relationships across all green plants simultaneously. Our study is also unique regarding the depth of data exploration and analysis that we conducted.

We thoroughly explored our data by: i) using several character coding protocols (nucleotides, amino acids, and RY-coding), ii) analyzing various subsets of the data (e.g. first and second codon positions, third positions only), iii) examining base composition bias, and finally, iv) exploring several different partitioning strategies and statistically determining which was most appropriate for these data sets. This rigorous approach allowed us to determine which relationships in the phylogeny were robust to various analytical approaches and which were not.
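For readers unfamiliar with two of the character coding steps mentioned above, here is a small illustrative sketch (assumed details, not the authors' scripts) of RY-coding a nucleotide sequence and extracting particular codon positions from an in-frame alignment row.

```python
# Sketch of two of the character-coding steps mentioned above (illustrative only):
# RY-coding recodes purines (A, G) as R and pyrimidines (C, T) as Y, which dampens
# base-composition bias; codon-position subsetting keeps, e.g., positions 1 and 2.
RY_MAP = {"A": "R", "G": "R", "C": "Y", "T": "Y"}

def ry_code(seq: str) -> str:
    return "".join(RY_MAP.get(base, base) for base in seq.upper())

def codon_positions(seq: str, keep=(1, 2)) -> str:
    """Return only the requested codon positions (1-based) from an in-frame sequence."""
    return "".join(base for i, base in enumerate(seq) if (i % 3) + 1 in keep)

aligned = "ATGGCCATTGTA"           # toy in-frame alignment row
print(ry_code(aligned))            # RYRRYYRYYRYR
print(codon_positions(aligned))    # ATGCATGT (first and second codon positions only)
```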

 

Did you find any unexpected results and/or results that conflicted with earlier studies?

The finding that Zygnematophyceae, a clade of green algae, is sister to land plants is very interesting. Most textbooks state that another group of green algae, Chara and its relatives, occupy that position. However, to me the most interesting results were those areas of the green plant phylogeny that were in strong conflict across our various analyses. For instance, the main view of early land plant relationships taught in textbooks is that liverworts are sister to all other land plants, followed by mosses, and hornworts are sister to all vascular plants. This set of relationships is well supported in two of our four analyses, but mosses and liverworts are sister groups in the other two analyses. These various bryophyte relationships have been reported before, but it is surprising that with whole plastid genomes the problem still persists.

This is not an isolated incident; several examples of this type of problem occur through our various trees. Without analyzing the data in several different ways, we may not have realized that these various areas of the topology are not robust. Which relationships are correct? Why is the same data set coded in different ways giving us such different results? These types of questions are very exciting and point the way for future research.

 

What systematic errors did you identify in previous analyses, and what can we learn from this?

We did not examine the analyses of previously published works. However, in our analyses it seems that base composition bias is present and that highly complicated partitioning schemes are needed to account for the heterogeneous patterns of molecular evolution in the plastid genome data sets.

Base composition bias may have affected the phylogenetic placement of some taxa. For example, in the analysis that included all nucleotide positions, the monilophyte clade (ferns and relatives) was placed as sister to lycophytes plus seed plants, a placement at odds with several other lines of evidence. When steps were taken to account for base composition bias, monilophytes were placed as sister to seed plants as expected. Phylogenetic studies using plastid genome data often do not examine base composition homogeneity though it is a basic assumption in our analyses. Hopefully this will become common practice in future studies.
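A simple way to screen for the base composition heterogeneity discussed here is a chi-square test on a taxa-by-base table of counts; the sketch below uses invented counts and is only meant to illustrate the kind of check involved, not the specific tests used in the study.

```python
# Illustrative check of base-composition homogeneity across taxa (not the authors'
# exact test): a chi-square test on a taxa x base contingency table of counts.
# The counts below are invented purely for demonstration.
from scipy.stats import chi2_contingency

base_counts = [
    # A,    C,    G,    T   (one row per taxon)
    [3200, 1800, 1900, 3100],   # hypothetical taxon 1
    [3150, 1850, 1880, 3120],   # hypothetical taxon 2
    [2500, 2500, 2550, 2450],   # hypothetical taxon 3 with deviant composition
]

chi2, p, dof, _ = chi2_contingency(base_counts)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
# A very small p-value suggests base composition differs among taxa, i.e. the
# homogeneity assumption of standard substitution models may be violated.
```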

We also see evidence of highly heterogeneous patterns of sequence evolution in the data set as our model-fitting experiments always favor the most parameter-rich models. Identifying partitioning schemes and models that can accurately reflect the true processes of molecular evolution while avoiding over-parameterization may be important when analyzing similar data sets. Some previous studies have not statistically explored the best partitioning strategy for their data and have used standard strategies such as partitioning by gene or codon position. This and other studies are now showing that these standard strategies may not be the best choice when analyzing large plastid genome data sets.

 

In addressing the systematic errors highlighted in your study, do you think plastid sequences will be able to resolve all of the intricacies of plant phylogeny?

There are very likely relationships that plastid genome sequence data will not be able to resolve. However, this can also be said for any type of data: nuclear, mitochondrial or morphological. It is important that we pursue all lines of available evidence. Future analyses of plastid data should increase taxon sampling. Remember that we have reconstructed what is perhaps over one billion years of evolution, using only 360 representatives of a clade that contains about 500,000 species. Then of course there is the problem of the countless lineages that have gone extinct that we will never be able to sample. Analyses of mitochondrial and nuclear genomes at this same scale are needed and are currently being investigated. It is very likely that phylogenies from these three data sources will not agree in some areas. But those areas of disagreement will either point us towards some very interesting biological phenomena or allow us to develop better analytical methods.

 

Where should we direct future efforts to get a more complete picture of plant phylogeny?

First, we need more data from more taxa. Several groups of organisms are very poorly represented in our plastid genome data set. For example, only two moss species were included here, while there are thought to be around 10,000 species. As I mentioned above, for this study we analyzed plastid genome sequence data from 360 taxa from a clade that may contain over 500,000 species. This is only about 0.07 percent of green plant diversity! Similarly sized data sets should also be assembled and explored using mitochondrial and nuclear data. Integrating fossils is also extremely important. We may never be able to get DNA from fossil taxa, but combined analyses of morphological and molecular data may allow us to better place taxa with no molecular data available. Efforts should also be focused on developing better models of evolution.

 

As more plant genome sequences become available, do you think the overall picture of plant phylogeny is set to change or is it now a matter of filling in the details?

One of my favorite ideas is that science is a permanent revolution. We will likely never know the real truth in its entirety, but we must keep striving to find it so as to better understand our world. As we gain a better understanding of the molecular evolution of plant genomes, there may be some real surprises around the corner. However, many relationships in the green plant tree of life agree across multiple analyses of both molecular and morphological data as we state in the article. We have made great progress in determining the evolutionary history of green plants but there is still much to do.

 


Charles Wondji and Janet Hemingway on malarial vector resistance to DDT


The global burden of malaria has led to the development of multiple approaches to reduce its incidence by targeting its vector, Anopheles mosquitoes. These measures include reducing vector breeding grounds, using biological controls such as larvivorous fish to decrease larvae numbers, and chemical spraying of insecticides such as pyrethroids and DDT (dichlorodiphenyltrichloroethane). However, the widespread use of these insecticides has led to the development of resistant vectors. Charles Wondji and Janet Hemingway from the Liverpool School of Tropical Medicine, UK, and colleagues reveal the first genetic marker for metabolic resistance to DDT in their recent study in Genome Biology. Wondji and Hemingway explain the implications of their results for tracking and managing vector resistance.

 

What first got you interested in mosquito genomics?

Our interest in mosquito genomics started when we realised the power of this approach in elucidating the underlying molecular basis of various genetic traits in mosquitoes, notably that of insecticide resistance in disease vectors. The sequencing of the first mosquito genome in 2002 was a major encouragement to further explore this field of research.

 

Were there any surprises for you in the results of the current study?

Yes, the big surprise for us was the discovery that a single amino acid change alone in a detoxification gene could have such a dramatic impact on metabolic resistance to insecticides in a mosquito species. This type of resistance is normally associated with increased transcription of detoxification genes, not single amino acid changes. In addition, we were also surprised by the extent of the selection footprint in resistant mosquitoes with the resistance allele having reached fixation in the highly DDT resistant population from Benin.

 

How does this study fit in with your previous work?

The two major causes of pyrethroid and DDT resistance are target-site insensitivity and metabolic resistance. Target-site resistance (also known as knockdown resistance) has previously been well characterised and can be easily monitored by PCR. However, this was not possible for metabolic resistance, despite its greater operational impact on malaria control, due to its complex molecular basis. No single metabolic resistance markers were available to track resistance in malaria vectors; consequently, there was no DNA-based diagnostic tool to easily detect metabolic resistance in field populations, in contrast to target-site resistance. This study provides the first step in generating DNA diagnostics for metabolic resistance.

 

What do you think the practical applications are for mosquito control in the field?

This study, for the first time, has defined a molecular marker for metabolic resistance, the type of resistance that is most likely to lead to failure of control interventions against malaria-transmitting mosquitoes. This marks significant progress, as the first DNA-based diagnostic tool has now been designed for this type of resistance. Such tools are needed to detect and track resistance at an early stage in field populations, allowing control programs to design rational, evidence-based resistance management strategies to overcome such resistance and maintain the efficacy of vector control interventions such as indoor residual spraying (IRS) or long-lasting insecticidal nets (LLINs).

In addition, this study is a proof of concept that similar types of metabolic resistance in other disease vectors could also be elucidated and the relevant DNA-based diagnostic tools designed to improve ongoing and future control programs. In the case of malaria, this will fulfil one of the key goals of the recent WHO Global Plan for Insecticide Resistance Management (GPIRM).

 

You mention in your study that Southern Africa has a low incidence of the DDT resistance mutation. Can you foresee a time when controlled use of DDT will be reintroduced in countries where it is currently banned, perhaps as pyrethroid resistance increases?

Pyrethroid resistance observed in South Africa around 1999 was suspected to have caused a significant increase in malaria transmission in that country. The solution then was to re-introduce DDT, because malaria vectors remained fully susceptible to it; as a result, these mosquitoes were again successfully controlled and malaria transmission decreased significantly. We therefore believe that, given the rapid spread of pyrethroid resistance currently observed in Africa (and resistance to other alternative insecticide classes such as carbamates), DDT, if used in a controlled way, can serve as a last-resort insecticide to save lives in countries where it remains very efficient against major malaria vectors.

 

Is complete eradication of malaria-transmitting mosquitoes feasible or desirable? If so, will chemical spraying play a part in how this is achieved?

We don’t believe that complete eradication of malaria-transmitting mosquitoes is possible, due to their diverse and plastic ecology, biology and genetics. These mosquitoes are able to exploit various types of breeding sites and exhibit extensive behavioural and genetic plasticity, which makes their complete eradication virtually impossible. This is why the aim is rather to control these mosquitoes to the extent that malaria transmission can be interrupted. Vector control through the use of chemical spraying plays an important part in this aim, but other approaches are also being explored, such as larval management and transgenic methods.

 

Are there any unanswered questions you would like to investigate?

The genetic basis of many insecticide resistance mechanisms remains uncharacterised, and we are currently working on this. Beyond that, establishing the real impact of insecticide resistance on malaria mortality and morbidity, and how best to mitigate that impact, is of great interest in the fight to reduce the burden of this disease. The design of more reliable diagnostic tools will be a significant contribution towards answering such important questions.

 

Combining pharmacological and physical therapy in spinal injury recovery


Axonal regeneration of neurons in the central nervous system, following trauma, inflammation or other pathological conditions, is significantly diminished due to a restrictive micro-environment. This is thought to be a major factor limiting the extent of recovery after spinal cord injury. Several molecules that restrict axonal outgrowth have been identified, such as MAG, Nogo, chondroitin sulphate proteoglycans and semaphorin 3A. It has been demonstrated that blocking some of these inhibitors can result in increased axonal regeneration. However, new projections alone are not sufficient for recovery; these projections must form appropriate synaptic connections to allow rewiring of the neuronal circuits that can re-establish the lost functions. The capacity to form and strengthen novel neural pathways, namely neuroplasticity, depends greatly upon activity-driven reinforcement – repetition is therefore key.

In a recent Molecular Brain study, Masaya Nakamura from the Keio University School of Medicine, Japan, and colleagues present a novel combinatorial approach to promote neuronal rewiring in rats subjected to spinal cord injury. They demonstrate that in addition to using pharmacological treatment to favour the regeneration of neuronal projections in the damaged area, intense physical activity was crucial for extensive functional recovery of hind limb motility.

The authors developed a novel drug delivery system using silicone sheets to efficiently  deliver the semaphorin 3A inhibitor, SM-345431, to treat the rats following spinal transection. In addition, rats were subjected to extensive treadmill training. Pharmacologically-promoted axonal regeneration significantly improved motor performance when combined with physical therapy. Although intense treadmill training did not modify the extent of axonal regeneration, it nonetheless significantly improved hind limb coordination and movement compared to rats that were treated with the semaphorin 3A inhibitor alone. Finally, re-transection of the initial lesion site only partially affected motor performance, indicating that functional recovery involved factors additional to axonal regeneration, such as the rewiring of local neural circuits involved in locomotion.

Although axonal regeneration in the central nervous system is very limited, neuroplasticity allows the rewiring of relevant neural circuits and has the potential to contribute to rehabilitation. This study highlights the importance of physical therapy in addition to pharmacological intervention, and presents a promising combinatorial approach to enhance motor performance after spinal cord injury.

 

Oliver Ullrich on the effects of microgravity on immune cell function


The evolution of life on Earth has been subject to a range of influences, from changes in the gaseous composition of the atmosphere to the movement of land masses and climatic shifts. Throughout this time, a constant and universal force was also at play, namely gravity. This has led to questions over whether the cellular and molecular functions that underpin the diverse array of organic life on the planet require gravity in order to function optimally. Oliver Ullrich from the University of Zurich, Switzerland, and colleagues tackle this question in their study in Cell Communication & Signaling, where they investigate the impact of gravity on the oxidative burst reaction in mammalian cells. Ullrich explains how they manipulated gravity for these purposes and the effects this had on immune cells.

 

Oliver Ullrich during a parabolic flight manoeuvre. Image source: Oliver Ullrich, University of Zurich, Switzerland.

With a background in biochemistry and medicine, how did you become interested in space science and the cellular effects of gravity?

I was simply fascinated by the fundamental biological question of whether and how life on Earth requires and responds to gravity. Gravity has been a constant force throughout evolutionary history on Earth. It is so simple, so fundamental, but so poorly understood. Historically, anatomical research elucidated in detail how the human body is constructed to withstand and to live under the gravity conditions of Earth. Now, we try to understand how the architecture and function of human cells is related to gravitational force and therefore adapted to live on Earth.

 

Why did you choose to specifically study the effects of gravity on the immune system? What did your study set out to investigate?

Since the 1980s, a lot of evidence has accumulated suggesting that the function of mammalian cells and of small unicellular organisms is different under conditions of microgravity. Consequently, the question arose of what role normal gravity plays in ‘normal’ cellular function and whether gravity provides important signals for the cell. From previous experiments it was known that cells of the immune system are severely influenced by altered gravity. The gravity-sensitive nature of these cells therefore renders them an ideal biological model in the search for general gravity-sensitive mechanisms in mammalian cells.

 

How are you able to test cells under altered gravity conditions?

The only opportunity to perform experiments with living mammalian cells in reduced gravity, without leaving our planet, is on board an aircraft performing parabolic flight manoeuvres: the aircraft is weightless while flying on a Keplerian trajectory in free fall. For access to a parabolic flight experiment, an application either to the German Aerospace Center (DLR) or the European Space Agency (ESA) is required and, once selected after peer review, a second application for funding. Then the preparatory work can start, with the design and construction of the experiment hardware, the development and testing of biological mission scenarios and much more. In the last few years, we have performed 12 parabolic flight campaigns with more than 1,000 parabolas. In total, I have experienced more than four hours of microgravity.

 

How do experiments done on parabolic flights compare to results obtained from experiments on the International Space Station (ISS)?

For a coordinated research program, both platforms are required. In the search for rapid-responsive molecular alterations, short term microgravity of 22 seconds provided by parabolic flight manoeuvres on board the Airbus A300 is an ideal instrument to elucidate these initial and primary effects. The ISS is the only research platform for the investigation of integrative, long-term and functional effects of microgravity.

 

Your study showed that a key step in the oxidative burst reaction of macrophages – the generation of reactive oxygen species – is highly dependent on gravity. Why do you think this is?

In our experiments we demonstrated in real microgravity (parabolic flights), in simulated microgravity (2D clinostat) and in hypergravity (centrifuges and parabolic flights) that reactive oxygen species (ROS) release during the oxidative burst reaction responds rapidly and reversibly to altered gravity within seconds. Previous studies suggest a major role for the intact cytoskeleton in NADPH oxidase activation. Because rapid modifications of the cytoskeleton are very well described in microgravity, we think that cytoskeletal-dependent processes could be the reason why the oxidative burst reacts to altered gravity.

 

In light of your findings, what are your thoughts on the evolutionary significance of gravity for the development of complex cellular processes on Earth? Do you think some processes may be more dependent on gravity than others?

Of course, I think that gravity-dependent mechanisms are highly specific. Phagocytes and the oxidative burst are part of the evolutionarily ancient innate immune system, and represent the most important barrier against microbes invading the body. The emergence of NADPH oxidase enzymes early in the development of life was a success story: there is no evidence of multicellular life without these enzymes. Thus, it is possible that the gravitational conditions on Earth were one of the requirements for the development of the molecular machinery of the oxidative burst reaction.

The development of cellular mechanosensitivity and mechanosensitive signal transduction was probably an evolutionary requirement to enable our cells to sense their extracellular matrix and their individual microenvironment. However, these mechanosensitive mechanisms evolved to work under the condition of 1 g and never had the opportunity to adapt and adjust their reaction to conditions below 1 g. It is therefore possible that the same mechanisms that enable human cells to sense and to cope with mechanical stress are potentially dangerous in microgravity. It is a major challenge to find out whether our cellular machinery is able to live and to work without gravitational force, or whether our cellular architecture will keep us dependent on the gravity field of Earth.

 

Numerous studies have investigated the effects of space travel on bone, muscle and cardiovascular health. What implications do your findings at the cellular level have in this context?

Several limiting factors for human health and performance in microgravity have been clearly identified for the musculoskeletal system, the immune system and the cardiovascular system during spaceflight conditions. Considering these constraints, substantial research activities are required in order to provide the basic information for appropriate integrated risk management. In particular, bone loss during long stays in weightlessness still remains an unacceptable risk for long-term and interplanetary flights.

Recently, evidence has emerged that the immune and skeletal systems are tightly linked by cytokine and chemokine networks and by direct cell-cell interactions. It has been demonstrated that the immune system directly influences metabolic, structural and functional changes in bone. Both systems share common cellular players, such as osteoclasts, which are bone-resident macrophages. Therefore, knowing the cellular and molecular mechanisms of how gravity influences macrophages is an essential prerequisite for identifying therapeutic or preventive targets to keep the bone and immune systems of astronauts fully functional during long-term space missions.

 

What is the potential impact on future space travel of such results?

With the completion and utilization of the International Space Station and with mission plans to the moon and Mars during the first half of our century, astronautics has entered the era of long-term space missions. Such long-term missions represent a challenge never experienced before: small or even marginal medical problems could easily evolve to substantial challenges, which could possibly endanger the entire mission. Since crew performance is the crucial factor during space missions and since evacuation or exchange of the crew is impossible during interplanetary flights, there is an urgent need to elucidate the underlying mechanisms of limiting factors for human health and performance in microgravity, such as for the immune system. Results can be used for a better risk assessment, development of in vitro tests for medical monitoring or to identify targets for preventive interventions.

 

What’s next for your research?

In December 2014, we will send the TRIPLE LUX A experiment to the International Space Station, performed in the BIOLAB of the European COLUMBUS module. This experiment will investigate the oxidative burst reaction of macrophages during longer periods of microgravity, determine the gravitational threshold for the burst reaction and elucidate possible adaptation mechanisms.

 

Michael Akam and Carlo Brena on centipede segmentation dynamics


Centipedes, fruit flies and humans: three seemingly disparate animals at first glance. However, probing their development reveals that all three undergo a process of segmentation, whereby repetitive units are generated along the axis of their bodies, running from front to back. Segmentation is common to three large groups of animals, namely arthropods (such as centipedes and fruit flies), vertebrates (from zebrafish to humans) and annelids (including earthworms and leeches). Decades of research have generated several models of how this fundamental developmental process occurs. In a recent study in BMC Biology, Carlo Brena and Michael Akam from the University of Cambridge, UK, explore the segmentation dynamics of the arthropod Strigamia maritima. Here Brena and Akam discuss what led them to focus their investigations on this centipede, what this model organism can tell us about segmentation and how their findings add to our current understanding.

 

What sparked your interest in studying segmentation in early development?

CB: I am interested in evolution and in particular in understanding how such an incredible variety of body shapes has possibly evolved from a common ancestor. One of the major aspects of the animal body plan is segmentation, with three out of the four most diverse and successful animal phyla typically recognised as being segmented (arthropods, annelids and vertebrates). To understand body shapes and patterns, one needs to look at early stages of development, when major patterning occurs on the undifferentiated population of cells of an early embryo. This general interest of mine has been nurtured by the explosion of molecular tools and knowledge that has characterized biology, particularly over the last 20 years, and which underpins the re-evaluation of development within evolutionary studies – what has been called EvoDevo.

MA: It’s always a mixture of different things. I started working on fruit flies, on Drosophila, as a graduate student, but doing something very, very different from early development. I was introduced then, as a graduate student in Oxford, UK, to some of the wonderful phenomenology of the homeotic mutants – the bithorax mutants and so on that Ed Lewis had been working on. They seemed fascinating as biological phenomena that we didn’t understand at all. I’ve always been attracted to problems at the stage where they seem to be outside the range of understood explanations; I’m more interested in scoping out answers than dotting the i’s and crossing the t’s.

By the late 1970s, it was clear that it would soon be possible to find the DNA that was mutated in homeotic mutants.  I wanted to work with the people who were doing that. I arranged to do a postdoc in Stanford, USA, with David Hogness, and in close collaboration with Ed Lewis at CalTech. That got me into the field of studying mutations that affect Drosophila development. While I was doing that, the  Nüsslein-Volhard and Wieschaus screen was being published, revealing  wonderful segmentation phenotypes – pair rule mutants, segment polarity mutants and so forth.  The existence of phenotypes like that suggested that these mutants would provide some insight into the mechanism of segment formation.  Peter Lawrence and Sydney Brenner had excited me about pattern formation, and these Drosophila mutants seemed to provide a way of getting at the molecular mechanisms that underlay pattern formation.

 

What is known about the basic mechanism of segmentation?

CB: Segmentation in an animal is basically the introduction of a reiterated pattern on a uniform – or broadly differentiated – field of cells along the anteroposterior axis. Researchers have now gathered a great deal of information on two thoroughly studied model systems, representing, in a certain way, the two extreme segmentation systems: the vertebrate embryo and the fruit fly Drosophila.

In Drosophila, segmental patterning is achieved simultaneously along the body axis by the combinatorial activation of segmentation genes by broadly expressed, overlapping ‘gap’ genes. In contrast, in vertebrates, segmentation is a temporally sequential process associated with germ band elongation, where a complex circuit of ‘segmentation clock’ genes, oscillating in their expression in an undifferentiated field of cells, translates a periodicity in time into a periodicity in space.

MA: It was known that lots of different animals made segments; that three of the major phyla were segmented. But Drosophila was the only organism in which there was any hint of mechanism. It became clear in the 1980s and early 1990s that Drosophila used a process whereby gradients – and in Drosophila’s case it was maternally established gradients – were interpreted by transcription factors through a hierarchical mechanism to subdivide a pre-existing field of cells into a repetitive array of different cell states; a particular juxtaposition of cell states made a segment boundary. The rest of the process of segmentation came from that. It became a model, really, for hierarchical transcription factor interactions, and that was the only model in town.

Later, with the discovery of oscillating gene expression in the chick embryo by Olivier Pourquié and colleagues, the mechanism of segmentation was worked out in vertebrates. Vertebrate segmentation, making somites, involves gene expression in a whole population of cells oscillating together, more or less in phase. But at a particular wave front in the tissue – a position at a particular point in the growth of the embryo – that oscillation stops. Stopping the oscillation freezes cells in a series of alternative cell states, and those alternative cell states are translated into the signals that generate segment boundaries.

Once that was worked out, the question of the evolution of segmentation became even more interesting, because we had two completely different types of mechanism, in two different groups of animals, for building a segmented body.
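As a purely illustrative aside (not from the article), the clock-and-wavefront logic described above can be captured in a few lines of code: every cell shares the same oscillator, a wavefront sweeps from anterior to posterior, and each cell keeps whatever phase it had when the front passed. All parameter values below are arbitrary and are not meant to model any particular organism.

```python
# A deliberately minimal toy model of the 'clock-and-wavefront' idea: cells
# oscillate in phase and are frozen in their current state when the wavefront
# reaches them. Parameters are arbitrary and purely illustrative.
import numpy as np

n_cells = 200            # cells along the anteroposterior axis
period = 10.0            # oscillation period (arbitrary time units)
wavefront_speed = 2.0    # cells passed by the wavefront per time unit

# Time at which the wavefront reaches each cell (anterior cells first).
arrival_time = np.arange(n_cells) / wavefront_speed

# Phase of the shared oscillator at the moment each cell is frozen.
frozen_state = np.sin(2 * np.pi * arrival_time / period)

# Call cells 'high' or 'low' depending on the frozen oscillator state;
# a boundary forms wherever the frozen state switches sign.
cell_state = frozen_state > 0
boundaries = np.flatnonzero(np.diff(cell_state.astype(int)) != 0)

blocks = len(boundaries) + 1
print(f"{blocks} blocks of alternating cell states along {n_cells} cells")
print(f"expected block width ~ {wavefront_speed * period / 2:.0f} cells")
```

In this toy picture, slowing the clock or speeding up the wavefront produces fewer, wider blocks: the kind of parameter change often invoked when discussing how segment size and number might evolve.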

 

Why are you working on arthropods, in particular short-germ insects?

CB: Arthropods are by far the most diversified and successful animals, and one could argue that part of their potential in diversifying different parts along their anteroposterior axis relies on the modularity of their body. Drosophila has proven to be a fantastic system to understand the basic function of segmentation genes, but it is now clear that it is a very derived arthropod whose segmentation system is not strictly extendible to other arthropods. The large majority of arthropods have in fact a short-germ kind of embryo and add most segments sequentially, even post-embryonically. This system requires a cyclic activation of genes which is reminiscent of what happens in vertebrates. To understand how segmentation has evolved throughout the animals, and what elements and genes of the segmentation system are homologous, we need to look at more basal insects or arthropods, i.e. possibly closer to the common ancestor.

MA: Drosophila, the fruit fly, is an unusual arthropod – in fact an unusual organism – in that it makes the whole body by subdividing a pre-existing field of cells. It’s like having an entire cake and making slices by just cutting up the cake, whereas most organisms couple segmentation and growth.

I wanted to see how the Drosophila segmentation mechanism was modified in an organism that made its segments while it was growing, rather than made its segments all at once in this extended field of cells. I’ve always been interested in the evolution of developmental mechanisms and it seemed to me that segmentation was an example of a developmental mechanism where we had a very good understanding in Drosophila of how it worked and a very clear question about how it must have evolved, or at least how it must be different in relatively closely related animals. That’s why I started looking at short germ insects initially – things like the locust – and then, more recently, in the centipede, which has become my favourite for the last few years.

 

Where does the centipede you work on (Strigamia maritima) come from?

CB: Strigamia maritima is a thin 2-3 cm long centipede which lives on the seashore under the shingle layer all around the British coastline. We collect our material in Brora along the North-East coast of Scotland because that’s the only place in the UK where we can find a large enough number of eggs.

Myriapods, the subphylum to which Strigamia maritima belongs, are pivotal for understanding the evolution of arthropod segmentation because of their highly segmented bodies, limited regional differentiation and phylogenetic position at the base of the arthropod tree, between the group including spiders and the group comprising crustaceans plus insects. Strigamia, in particular, belongs to the geophilomorphs, soil-dwelling centipedes.

 

Why is Strigamia maritima an interesting organism to probe for insights on segmentation?

MA: It has two important characteristics: one is that, despite having more segments than most centipedes, it makes them all in the embryo. Embryos are much more accessible to work with than the later juvenile stages. Actually, having an animal that makes about 50 segments in the embryo in a relatively short period of time was experimentally very convenient.

The group of centipedes to which Strigamia belongs is also very variable in segment number. Different species have a very wide range of segment numbers, from the high 20s up to 200 or so, and even within species, there is variability in final adult segment number. Not many arthropods that have been used for lab studies show such variability. Insects certainly don’t. So if you are interested in how segment number evolved, then Strigamia maritima is a good species to work with.

Then there’s a very pragmatic reason: S. maritima is an incredibly abundant centipede in environments that suit it. We can collect thousands of eggs in the wild in a couple of days. We’d previously worked with a centipede which you could grow in the lab – which is an advantage – but one of my graduate students who worked with that species, Louise Smith, managed to collect only a couple of hundred eggs in the three years that she was doing her PhD. Just looking after the animals took a large fraction of her time. So Strigamia was an easier animal to work with.

 

How does your work on the centipede add to the historical debate on the evolution of body axis segmentation?

CB: Our work, although the system is far from being fully understood, shows that a segmentation system involving cyclic activation of genes, spreading as waves of expression, is indeed present not only in vertebrates but also in relatively primitive arthropods like centipedes. Even more interesting (and this is the crux of the novelty of our BMC Biology article) is that it shows that, although some of the segmentation genes involved may be the same, not all homogeneous segments within the same animal are produced in the same way.

MA: By the time we started working on centipedes, it was already clear that aspects of the Drosophila mechanism were very special adaptations of the fast developing insects. It was also clear that one possible mode of making segments was this oscillator mechanism. We had some ideas about what genes might be involved in oscillatory processes and the work with the centipede let us test whether there were parallels between segmentation in this more basally branching arthropod group and what was known in vertebrates.

The first suggestions were that, yes indeed, there were parallels in terms of the genetic pathways used to make segments. My colleague Ariel Chipman found that the Notch signalling pathway, which was not known to have a role in Drosophila segmentation, but which was central to vertebrate segmentation, seemed to have a role in our centipede segmentation. That had been suggested by earlier work with spiders, but in our embryos it was much easier to get visual insight into what was going on, because the centipede embryo develops with a large field of cells in which we can see oscillatory waves of gene expression. We examine these waves in detail in the BMC Biology article.

The centipede work has provided strong evidence for an alternative and very different way of making segments, as compared to Drosophila. And yet at the same time, it has also provided convincing evidence for similarities in the gene networks. For example, work that my student Jack Green has done shows that almost all of the genes involved at the pair-rule level of the cascade in Drosophila – genes that first reflect the repeating segment pattern – also seem to be involved in centipede segmentation. Indeed, they show very similar interactions, at least at the downstream end of the process. This work confirms that centipede and fly share a common origin of segmentation, and yet suggests that they use fundamentally different mechanisms to generate the initial periodicities.

 

What are the implications of your work for our future understanding of the mechanisms of segmentation and gene networks involved?

CB: Our data suggest that a partially different segmentation system could easily evolve from a common underlying segmentation gene network. If that’s the case, we should be extremely cautious in inferring statements of general homology, and consequently of evolutionary history, based on a few genes or, worse, on similar dynamics.

MA: I think we – ‘we’ meaning the whole field – are only just beginning to get some insight into how gene networks evolve. There’s a lot of talk about ‘recruiting modules’ – the idea that a module that evolves in the context of one process of segmentation gets recruited wholesale to another biological context. There’s a lot of discussion about how specific interactions between genes – links in the network – might evolve in a way that is apparently without an effect on phenotype but then causes processes to respond in different ways to stresses in different organisms.

There are all sorts of things we would like to know about how gene networks can evolve both to generate changes in morphology and to cope with environmental stresses, to adapt for example, to different rates of development or requirements for different numbers of segments. Segmentation is a really good model to work out some of these questions.

There are some intriguing hints in the arthropod segmentation story that segment numbers can double in very closely related species. There’s a species of brine shrimp called Polyartemia, which looks almost exactly the same as an ordinary brine shrimp but has twice the number of segments. And there has recently been reported, by Sandro Minelli and colleagues, a centipede which, by all criteria, fits well into an established family of centipedes, except that it has almost twice as many segments as all of its close relatives. These data suggest that some unusual evolutionary events can result in a segmentation network adopting a different mode of activity that results in a substantial change in form.

Our work in Strigamia touches on that question, because we suggest that in Strigamia the segmentation network switches, during normal development of the embryo, from a process of making double-segment units, which are then subdivided, to a process of directly adding single-segment units. That’s not something we’d expected to see. Carlo Brena noticed it when he was looking carefully at the process of segment addition. It will be very interesting to model how a gene network might make such a transition.

 

What organism do we need to work on to provide more answers?

CB: Given the apparent plasticity of the segmentation system, the short answer is that we just need more and more different species of any kind to look at. Crucially, though, we need to move on from the ‘one gene-one evolutionary scenario’ approach of the early days of EvoDevo and take a deeper look at new, non-model systems, in particular from a functional point of view. Unfortunately, a functional molecular approach has not yet proved possible in any myriapod, hence the priority should be to look for any myriapod species that allows for those techniques (or to develop alternative tools with Strigamia). Further, we should endeavour to understand the animal evolution of segmentation from more basal groups, such as primitive insects like jumping bristletails, primitive non-malacostracan crustaceans like remipedes, and primitive chelicerates like scorpions.

MA: From the point of view of someone who works on arthropods, I would say without hesitation that the animal I would really like to understand is the velvet worm, the onychophoran. Onychophorans have been enigmatic for a long time. They are not arthropods – they don’t have a hard external skeleton – but they have a segmented body like arthropods and were for a long time seen as an intermediate between annelids and arthropods. We are now pretty sure that they are not closely related to annelids, but they do seem to be the sister group to the arthropods. They represent an arthropod-like body plan before arthropods adopted hardened body plates and jointed limbs.

Onychophorans are slightly exotic –  they only occur now in the southern continents, in South Africa, Australia and New Zealand, so they’re not that easy to get hold of.  But if you are lucky enough to look at their embryos, you could be forgiven for thinking that you weren’t looking at any relative of an arthropod at all.  They look in some ways quite extraordinarily vertebrate-like. The first sign of segmentation is a beautiful band of mesodermal somites lying underneath the epidermis. Yet some of the genes that pattern segments, engrailed and wingless for example, seem to be used in the same way as they are in arthropods, to set up a boundary in the limbs. I would love to know how segmentation works in onychophorans.

One difference between arthropods and vertebrates is that in vertebrates segment patterning is in the mesoderm, whereas in arthropods it seems that the ectoderm – the outer layer – is primary for segmentation. In onychophorans, it’s not clear whether mesoderm or ectoderm is primary for segmentation. Onychophorans are sufficiently close to arthropods for us to be able to make comparisons sensibly, but sufficiently different to give real insight into how very different modes of segment patterning may evolve.

They are also lovely creatures. I don’t have too much difficulty in collecting centipedes and cutting them open, but cutting open an onychophoran is terribly difficult: they are just so beautiful.

 

Who is going to be interested in, or affected by, this research?

CB: Anyone interested in the evolution of animal body plans should be interested, and the information gathered from this nodal point in the arthropod phylogenetic tree should affect how they interpret the evolutionary history leading to the species they study. The work may also appeal to readers who are not normally exposed to this kind of article, such as modellers and theoretical biologists interested in understanding how changes in gene network dynamics fit in with the changing cell and population conditions during development, as illustrated by the late segmentation stages in Strigamia.

MA: I belong to that part of the EvoDevo community that comes closest to classical zoology – people who are interested in the big picture of animal evolution, in questions like: What did the last common ancestor of all animals look like? From what sort of animal did arthropods evolve? That community will, I think, be very interested in the work that we are doing. This is an area where modern comparative genomics is really opening up a lot of old questions to new ways of analysis. It used to be the case that we only had molecular insight into a very small number of examples of animal diversity. Now, comparative genomics is giving us genome sequences for all sorts of obscure creatures. The genome alone will not tell us everything, but it certainly makes it easier to do experiments that test hypotheses about the evolution of limbs or the origin of nervous systems. That is one of the communities we’re talking to.

Another community who I hope will be interested are the people who model gene networks to understand in quantitative terms the way that these networks, composed of largely conserved components, can generate what appear to be very different developmental mechanisms and final body plans.

 

Stephanie Huang on a genomic approach to predict drug responses in cancer patients


The ability to predict the response of a patient to a particular course of treatment is particularly important in cancer, where the treatment options available often have a narrow therapeutic index. In an effort to strike the best balance for an individual patient between targeting the cancer and the unwanted side effects on healthy tissues, researchers have looked to genetic biomarkers for clues to how a patient’s body will respond.  Stephanie Huang and colleagues from the University of Chicago, USA, present a novel method for predicting this response in patients based solely on the tumor gene expression profiles prior to treatment. Here Huang discusses how their method, published in a recent study in Genome Biology, fared when tested against existing clinical trial data, and the implications of their findings for clinical practice.

 

How does the approach taken in your study differ from previous studies into cancer biomarkers for the prediction of chemotherapeutic responses?

Our model was built on cell line data, in which whole-genome gene expression was fitted against drug sensitivity measurements obtained from a large panel of cell lines. The relationship matrix between every gene and drug sensitivity was then applied to expression levels obtained from a patient’s tumor prior to treatment, to predict the patient’s response to that drug. Our method captures clinical drug response in multiple independent datasets from completely different cancer types and drugs, using no prior biological knowledge. Because the models were developed on cell lines, such a method could easily be extended to various drugs/compounds of interest.

For example, in drug development, our methods could be used to enrich for drug responders without exposing patients to highly toxic agents. The fact that we saw a strong performance from statistical models that allow small contributions from every gene also supports the idea of ‘omics’ level prediction, where a very large number of molecular markers are incorporated in a complex model, rather than prediction from a single nucleotide polymorphism (SNP) or a small scale gene signature.

 

Your method was developed using gene expression microarray data from almost 700 cell lines. How did you account for the innate differences in gene expression between cell lines and primary tumor tissue?

The innate difference between gene expression in cell lines and primary tumor tissue was corrected using a method previously applied to batch correction in microarray experiments. This employs an empirical Bayesian approach to standardize the mean and variance of each gene across samples. This data homogenization is a critical step in our approach. The ‘standardized’ gene expression levels are then fitted in a whole-genome ridge regression model, which captures substantial variability in in vivo drug response.
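As a rough sketch of the kind of pipeline described here (not the authors’ code; the data below are randomly generated, and simple per-dataset z-scoring stands in for the empirical Bayesian batch correction the authors used), the two steps might look like this in Python with scikit-learn:

```python
# Minimal sketch: (1) homogenise cell-line and tumour expression per gene,
# here with crude per-dataset z-scoring, then (2) fit a whole-genome ridge
# regression of drug sensitivity on expression and apply it to tumours.
# All data are randomly generated for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_genes = 5000

# Cell-line training data: expression matrix and a drug-sensitivity phenotype
# (e.g. a log IC50 value) for each of 700 hypothetical cell lines.
cell_expr = rng.normal(size=(700, n_genes))
ic50 = cell_expr[:, :50].sum(axis=1) * 0.1 + rng.normal(scale=0.5, size=700)

# Patient tumour expression profiles measured before treatment.
tumour_expr = rng.normal(loc=0.3, scale=1.2, size=(60, n_genes))

def standardise_per_gene(x):
    """Zero-mean, unit-variance scaling of each gene within one dataset."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

cell_z = standardise_per_gene(cell_expr)
tumour_z = standardise_per_gene(tumour_expr)

# Ridge regression lets every gene contribute a small amount to the prediction.
model = Ridge(alpha=100.0)
model.fit(cell_z, ic50)

predicted_sensitivity = model.predict(tumour_z)
print(predicted_sensitivity[:5])  # lower predicted IC50 ~ more sensitive tumour
```

The ridge penalty reflects exactly the point made above: every gene is allowed a small contribution, rather than the prediction resting on a single SNP or a small gene signature.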

 

How did your method compare when tested against existing clinical trial data? Were these results expected or surprising?

We validated our approach in three independent clinical trial datasets, and obtained predictions approximately as good as, or even better than, gene signatures derived directly from the trials. At first these results were surprising, but there are some interesting considerations. The cell line training data included many more samples than any of the clinical datasets, thus offering improved power. It is also possible that drug response is measured with greater precision in cell lines, which are screened in a controlled environment. Statistical models developed on a very large clinical dataset, with a precisely measured drug response phenotype, should undoubtedly outperform models developed on a comparable number of cell lines, but there are clear practical and ethical considerations, and it is thus extremely difficult and expensive to obtain such clinical data. In comparison, cell lines offer a cheap, readily available model system on which drugs can be quickly screened at no risk to patients.

Our results clearly show that, by correctly analysing and integrating the data, cell lines can often offer a practical and useful alternative approach. Furthermore, the cell lines employed in our model construction represent a collection of different cancer types, whereas clinical trials often focus on a specific type of cancer. Drug sensitivity data obtained across various cancer types are likely to be more informative for predicting in vivo drug response.

 

Your method is based solely on whole-genome gene expression data. Do you think the absence of other parameters (e.g. cancer type, drug mode of action, other genomic aberrations) affects the power of your model?

It is likely that in future our approach could be improved by incorporating additional predictors in the models. However it should be noted that expression data acts as a surrogate for many unmeasured molecular phenotypes (e.g. tissue of origin, genomic aberrations), and so it remains debatable whether in many cases performance could be drastically improved with this additional information.

We have presented some evidence that a more rigorous quantification of the transcriptome (e.g. through RNA-seq) or incorporating additional ‘omic’ information (e.g. microRNA expression) could be particularly valuable in improving prediction accuracy. In addition, data obtained from more sophisticated cell line drug sensitivity screening (e.g. quantifying drug sensitivity under different microenvironments) may also improve prediction accuracy.

 

Efforts to leverage current cancer treatments have led to combination therapies. Do you think your method could be used to predict treatment responses in instances where more than one drug is administered?

This is a great question and is something that we are actively pursuing. We are working with our clinical colleagues to test the applicability of our approach in clinical trials where patients were treated with multiple drugs. It is likely that our method could capture the additive effect of several drugs; however, given the current model system (i.e. panels of cell lines treated with a single drug), interactions between different drugs (such as synergistic effects), or between drugs and treatment regimes (e.g. radiation), would be missed. In order to capture these types of interactions a suitable model system would be required. It may be possible to develop such a system using cell lines, mouse models or even clinical data (for already established multi-drug regimes).

In essence, developing similar statistical models for multi-drug regimes would be no different from doing so for single drugs; it would simply require a suitable model system, accurately measured molecular and drug response phenotypes, and the correct type of machine learning algorithm to relate these data to each other.

 

How feasible is it to translate your method to the clinic, in terms of the time, cost, and expertise/resources needed?

Given the falling price of gene expression quantification and the fact that expression based diagnostics are already used in several clinical settings (e.g. OncoType DX for breast cancer and colon cancer), it is feasible to incorporate whole genome expression into clinics. We are working on building user friendly software that employs the workflow presented in our study. This will compute the expression-drug sensitivity relationships and allow a clinician to upload patients’ tumor expression, thus obtaining predicted drug sensitivity for a group of patients. Of course, prospective studies are needed to resolve the issue of multiple drug treatment combinations and the interpretation of predictions for other drugs etc.

 

How important is open access to clinical trials data in facilitating your research in this field?

Access to clinical datasets is absolutely key for pharmacogenomics research in general. We could only identify a small number of datasets on which to test our methods and data access issues may have played a role in this. Open access to clinical trials would clearly facilitate more widespread and rigorous testing of this or similar methods.

 

What’s next for your research?

The immediate follow-up work will involve studying prediction in the combination therapy setting. We are also actively investigating the possibility of improving the predictive power of our approach by incorporating additional ‘omic’ level data (e.g. genomic abnormalities and microRNA expression). We are seeking collaborations to conduct prospective clinical trials to explore the real-world utility of our method. Furthermore, mechanistic studies are ongoing to examine some of the novel genes/pathways identified through our approach. We believe the key to unravelling a complex trait like drug response lies in obtaining very large quantities of relevant data that can be leveraged using sophisticated machine learning algorithms, rather than in traditional low-throughput approaches, which have had very limited success in most cases.

 

Lixin Wei and Yihong Ye win the Ming K Jeang Award for Excellence in Cell & Bioscience


The Ming K Jeang Award for Excellence in Cell & Bioscience honours research of the highest quality and impact published in the official journal of the Society of Chinese Bioscientists in America (SCBA), Cell & Bioscience. A panel of leading scientists on the Editorial Board, chaired by Yun-Fai Chris Lau from the University of California, San Francisco, USA, select the winning articles each year, reflecting the best biological and medical advances published by Cell & Bioscience. The winners for research published in 2013 were revealed by Cell & Bioscience Editor-in-Chief Yun-Bo Shi in this Editorial.

This year’s winning articles tackled endosome trafficking – with Yihong Ye from the NIH National Institute of Diabetes and Digestive and Kidney Diseases, USA, and colleagues revealing that ‘Monoubiquitination of EEA1 regulates endosome fusion and trafficking’ – and probed the cellular response to hepatic ischemia, with Lixin Wei from Shanghai Jiaotong University, China, and colleagues revealing that ‘Autophagy lessens ischemic liver injury by reducing oxidative damage’.

Ischemia-reperfusion (I/R) injury, namely damage caused to a tissue when its blood supply is returned after a period of oxygen deprivation, is a common complication during liver surgery, particularly in transplantation, trauma and resection. Using both in vitro experiments in a human liver cell line and in vivo experiments in rats, Wei and colleagues demonstrate that autophagy is not only induced during I/R but also plays a protective role by eliminating mitochondria that would otherwise generate reactive oxygen species and contribute to necrosis.

 

“These results suggest a potential therapeutic strategy using pre-treatment in liver surgery”

Award judge Yun-Bo Shi, Editor-in-Chief of Cell & Bioscience

 

Leaving the autophagosome behind and looking elsewhere along the trafficking pathway, Ye and colleagues investigate the regulatory mechanisms behind endosome fusion. Early endosomal autoantigen 1 (EEA1) is known to be essential for this process. Research in Cell & Bioscience now shows how EEA1 ubiquitination regulates this key component, determining both the size of the endosomes and their trafficking pattern. Award judge T C Wu highlighted the importance of this research:

 

“The understanding of these molecular mechanisms may serve as an important foundation for altering the trafficking of endosomes through manipulation of the ubiquitination pathway.”

Award judge T C Wu, Johns Hopkins University

 

The Awards presented to Ye, Wei, and colleagues are made possible through the generous donation from the Ming K Jeang Foundation, USA.

For more about how Cell & Bioscience came about, and the SCBA, read what its Editor-in-Chief Yun-Bo Shi had to say here.

 

Carsten Wolff on staging the embryonic development of water fleas


Planktonic crustaceans belonging to the genus Daphnia have long been used as model organisms for the study of ecotoxicology, and show particular prowess in their ability to alter their phenotype in response to environmental stress. Now, with the availability of the genome and genetic linkage map of Daphnia pulex, as well as the amenability of Daphnia to laboratory culture, these water fleas have shown themselves to also be of value in the field of evolutionary developmental biology. To make the most of the advantages offered by Daphnia for probing arthropod development and phylogeny, Carsten Wolff from the Humboldt University of Berlin, Germany, and colleagues present a detailed staging system for the embryonic development of Daphnia magna based on morphological landmarks, published in a recent study in EvoDevo. Wolff explains what makes Daphnia such a useful model organism and how their staging system contributes to current research efforts.

 

A female Daphnia magna carrying eggs in its brood pouch. Image source: Mittman et al, EvoDevo, 2014, 5:12

What is Daphnia magna, where does it live and what ecological function does it serve?

Daphnia magna is a small planktonic branchiopod crustacean, commonly called a ‘water flea’. Its distribution is worldwide and it plays a major role in aquatic food chains. In addition, the small crustaceans of the genus Daphnia are known for their high phenotypic adaptability. Daphnia is one of the oldest model organisms in ecotoxicology and ecology.

 

What qualities does Daphnia have that mean it has been adopted as a lab model?

There are plenty of important advantages to using Daphnia magna as a lab model. Daphnia is very easy to maintain in culture, the life cycle is relatively short, and large numbers of embryos are easily accessible. Furthermore, injecting embryos is possible, which enables the generation of transgenic animals and facilitates RNAi experiments. The model is made even more powerful by the fact that the Daphnia genome is sequenced and transcriptomic data are available.

 

What sort of questions are Daphnia researchers addressing?

The publication of the Daphnia pulex genome has facilitated the application of genomics and the development of genetic tools to long-standing questions in ecotoxicology, ecology and evolutionary biology. A particular focus is laid on understanding the genetic basis of the striking ability of daphnids to change their phenotype in response to environmental stressors. Furthermore, Daphnia have recently been developed into crustacean model organisms for EvoDevo research, contributing to the ongoing attempt of resolving arthropod phylogeny. These problems require the comparative analyses of gene expression and functional data.

 

Daphnia magna development at stage 7.5. Nuclear stain in green, scanning electron micrograph in grey. Left panels: ventral view. Right panels: ventral view with a focus on thoracopod differentiation. Image source: Mittman et al, EvoDevo, 2014, 5:12

 

How does  your staging system differ from standard staging approaches in other model organisms?

We provide a detailed staging system of the embryonic development of Daphnia magna based on morphological landmarks. The staging system does not rely on developmental hours and is therefore suitable for functional and ecological experiments, which often cause developmental delays in affected embryos and thus shifts in time reference points. We provide a detailed description of each stage and include schematic drawings of all stages showing relevant morphological landmarks in order to facilitate this application.

 

Scanning electron micrograph of a ventral view of Daphnia magna at stage 12. Image source: Mittman et al, EvoDevo, 2014, 5:14

How might your developmental staging be used by other researchers?

With our staging system we have provided an easy lab tool for all researchers working on Daphnia embryology, regardless of which field they investigate. The schematic drawings make it easy to recognise the specific embryonic stages using uncomplicated methods.

 

How far can your results be extrapolated to other related taxa?

The staging system can be adopted for other daphnids with minor variations since the sequence of development is highly conserved during early stages and only minor heterochronic shifts occur in late embryonic stages.

 

What’s next for your research?

There are numerous questions about aspects in Daphnia development that still need attention. Our focus will be on early development, namely cleavages, gastrulation processes and germ band elongation. We also have a particular interest in investigating the development of the peripheral nervous system.

 


Langley, Salzberg, Neale and Wegrzyn on sequencing the loblolly pine genome


Conifers are known to have large and highly complex genomes in the range of 20 to 40 Gbp. One of their members, the loblolly pine (Pinus taeda), is the second most common tree species in the USA, making it vital to American forestry, and is also a feedstock for the generation of biofuels. With over 1.5 billion loblolly pine seeds planted each year, a large majority of which have been genetically bred for improvement, this pine tree was an ideal candidate for the generation of a reference genome for conifers. In a recent study in Genome Biology, Charles Langley and David Neale from the University of California, Davis, USA, Jill Wegrzyn from the University of Connecticut, USA, Steven Salzberg from Johns Hopkins University, USA, and colleagues describe how they sequenced and assembled the first full-length genome of the loblolly pine, making it the longest genome sequenced to date at 22.18 Gbp. Here Langley, Salzberg, Neale and Wegrzyn discuss how they overcame the challenges associated with sequencing such a large genome.

 

Schematic of loblolly reproductive pathway. Image source: Neale et al, Genome Biology, 2014, 15:R59

Why is loblolly pine an important species to study and what led you to sequence its genome?

SS: Loblolly pine is the number one commercial tree species in the USA, used for a wide range of products, especially paper and construction timber.

DN: Loblolly pine has been used extensively in genetic studies because of the availability of multi-generation pedigrees developed by the breeding cooperatives. Thus, all kinds of useful genetic resources were available in loblolly pine that would not be found in other pine/conifer species.

CL: Like a number of other reference genome sequences the loblolly genome serves as a solid and fertile foundation for investigations at many levels, from pathogen resistance and efficient breeding to the comparative genomics of terrestrial plants. From a technical perspective this sequencing project moves the scale and integration of technologies involved in next-generation whole genome sequencing (NG-WGS) up a level. Also noteworthy is the fact that this genome sequence was created in a collaboration with a few modest laboratories rather than a large sequencing center.

My own motivation for contributing to this project derives from its value in the study of population genomics. Natural populations of loblolly pine are large and well-studied for many interesting traits. This makes them ideal for testing population genetics theories. Studies to understand the origin, maintenance and divergence of the underlying genomic variation depend on this high quality reference sequence.

 

What challenges did you encounter when sequencing and assembling the loblolly pine genome, and what strategies did you take to overcome these challenges?

CL: While the increasing cost efficiency of present-day next generation sequencing (NGS) made the direct cost of sequencing such a large genome manageable, the complexity and heterozygosity of the available DNA made the assembly daunting. By choosing to conduct most of the sequencing in the haploid genome of a single gamete (pine nut) of the target tree, and by very effectively error-correcting and pre-assembling the mountain of reads, we were able to present the state-of-the-art assembler with a manageable scale of input data.

As mentioned above this project was conducted in several small labs. Creative and effective planning, open exchange and strong, focused collaborative commitment were each necessary but not always easy to achieve among fiercely independent scientists.

DN: This is a very key point and the credit goes to Chuck Langley for understanding the importance of open and constant dialogue among team members. This led to a very creative process that would not have been achieved otherwise.

SS: The enormous size of the genome was the main challenge. At the time we started, no existing software could assemble a genome of this size – it would simply exceed the memory capacity of any available computer and then crash. The assembly team, at the University of Maryland and Johns Hopkins University, USA, developed a new algorithm that could reduce most of the data by about 100-fold, which was critical to getting the genome put together.

We also began developing a new method to use fosmids – small genomic chunks about 38 kilobases in length – as an aid to assembly. We found that we can pool together as many as 5000 fosmids and then disentangle them computationally. This approach is still in development, but we’ve already used it for part of the loblolly assembly.

The use of a haploid genome was also key: it’s rare to be able to get haploid (rather than diploid) DNA for a multi-cellular organism. The biology of the pine tree helped us out here: a pine nut contains a significant quantity of haploid DNA.

 

What is the importance of generating a high quality genome assembly, and how does the quality of the loblolly pine genome assembly compare with other sequenced plant species?

CL: It is widely recognized that a full high quality reference sequence can drive rapid advances. It is less well appreciated that an incomplete reference genome rife with errors can waste precious talent and effort, ultimately slowing and diverting science.

While the present loblolly pine sequence is incompletely assembled, it is a solid foundation. The error rate is low. But version 2.0 is ‘baking in the oven’.

SS: A high quality assembly provides the basis for a great variety of downstream research. Once we have the assembly in hand, we can identify all the genes and then begin to link genes to phenotype, as we have been doing for more than a decade now with the human genome. It all starts with the genome itself.

DN: The quality and open access approach used with the loblolly genome means that it will serve as the reference for about 400 conifer genomes that will be sequenced in the years ahead.

 

How did the high quality of the loblolly pine genome assembly affect gene annotation and the insights gained into gene family evolution?

JW: The combination of a high quality genome assembly with long scaffolds and a comprehensive transcriptome generated from multiple tissue types provided evidence to describe over 50,000 genes. Several conifer genes have long introns exceeding 20 kb in length, and these would have been difficult to identify with shorter scaffolds. The full-length genes allowed us to perform comparisons with protein sequences from several fully sequenced plant genomes and to further investigate those specific to pine.

 

How were you able to utilize the genome assembly to identify genes underpinning important traits, such as disease resistance?

JW: John Davis at the University of Florida, USA, and his colleagues identified a single nucleotide polymorphism (SNP) associated with fusiform rust resistance in loblolly pine. This genetically mapped SNP was originally identified in a partial expressed sequence tag (EST). Availability of the genome and transcriptome positively identified the partial EST as a Toll-Interleukin Receptor/Nucleotide Binding/Leucine-Rich Repeat (TNL) gene. Analysis of orthologous proteins from several plant species indicated that this gene belongs to a class of TNLs that have expanded in conifers.

 

How do you think the availability of the loblolly genome sequence and assembly will aid future research?

CL: It will enable functional genomics in conifers and genomic selection (modern breeding). It will be an essential component of plant comparative genomics and will also serve as the essential reagent in population genomics investigations and genome wide association studies.

DN: It will provide a genetic resource for ecological genomics research that will facilitate better management of forests under changing climate conditions.

 

Backman, Cherkezyan and Stypula-Cyrus on nanoscale chromatin changes in cancer


Changes in chromatin structure are known to occur in cancer cells and are associated with genetic and/or epigenetic alterations in the expression of tumor suppressor genes or proto-oncogenes, which ultimately lead to tumor development. These nuclear changes are thought to occur across a tissue beyond the boundaries of the tumor itself. This is based on the understanding that the aberrant genetic and/or epigenetic changes that occur in a tissue create an environment that favors the development of focal tumors. This concept, called field cancerization, means that tissue surrounding the tumor can be used to study some of the earliest events in cancer progression. Vadim Backman, Lusik Cherkezyan and Yolanda Stypula-Cyrus from Northwestern University, USA, and colleagues exploit this concept in their recent study in BMC Cancer looking at nanoscale chromatin changes in colorectal cancer. Using transmission electron microscopy to probe nuclear alterations prior to detectable microscopic changes, Backman, Cherkezyan and Stypula-Cyrus discuss the impact of their findings on our understanding of cancer etiology, as well as the clinical implications.

 

What led to your interest in applying biophysics to the study of cancer?

There have been decades of compelling cancer research, yet we are still losing the war on cancer. We need to change the way we think about cancer by integrating the biological sciences, physical sciences, and medicine. Using physical science-based techniques has enormous potential to address basic cancer and cellular biology questions.

 

What advantages do you think a biophysical approach to the study of cancer has over more traditional biomedical investigations?

Biophysical methods can quantify nanoscale changes in structure, which we found to be a crucial marker of early neoplastic events. Many standard biomedical studies are semi-quantitative (i.e. relative protein expression of a biomarker) or rely on morphological changes observable with a light microscope, which would not capture the earliest stages of carcinogenesis. While there are biomarkers that indicate early carcinogenesis, these are not uniformly altered or mutated across different cancers, or even across patients with the same cancer type. Furthermore, these approaches are too costly to be practical in a clinical setting. By examining the nanoscale structure of a cell, a manifestation of molecular abnormalities, we eliminate many of these issues.

 

What did your study set out to investigate and why?

Chromatin structure is a central regulator of normal gene expression and cell function. While there have been many studies showing that nuclear atypia is a hallmark of cancer, little is known about the nanoscale structure and regulation of chromatin at the earliest stages of carcinogenesis. Therefore, we set out to study qualitative and quantitative differences in chromatin structure during early and field carcinogenesis. We also wanted to investigate the fractal nature of pre-neoplastic chromatin as another potential marker for early cancer detection.

 

You observed changes in chromatin structure that occur very early in carcinogenesis, prior to detectable microscopic changes, both in chromatin density and heterochromatin structure. Were you surprised by these results or did they confirm your expectations?

We have previously studied early tumorigenic morphological changes in tissues and cells using optical techniques specifically developed for that purpose (low-coherence enhanced backscattering and partial-wave spectroscopic microscopy). We discovered morphological alterations, in both tissue and intracellular organization, that precede any known histological marker of cancer. Specifically, we observed profound changes in the nuclear structure of cells in the microscopically normal rectal mucosa from patients with an adenoma or adenocarcinoma. Based on these preliminary results, we hypothesized that the well-known changes in chromatin structure observed at later stages should also be present, at least to some extent, in the early or uninvolved mucosa of neoplastic tissue. In particular, we expected an increase in heterochromatin content and clump size, as well as a change in the chromatin density correlation.

All our expectations were confirmed by the presented high-resolution electron microscopy study. Together, these ultrastructural chromatin alterations support our hypothesis that changes in nanoscale chromatin density and organization precede microscopically observable alterations in nuclei, which coincide with the initiating genetic/epigenetic events driving tumorigenesis.

 

How feasible is it that a transmission electron microscopy (TEM) approach could be applied to the use of ultrastructural biomarkers for cancer in a clinical setting? What other alternatives are there?

TEM is the gold standard for imaging nanoscale cellular structures and an excellent technique to identify changes in higher order chromatin, specifically. The ultrastructural biomarkers we developed may be applied to future clinical studies. However, from a cost and time perspective, optical techniques provide distinct advantages over TEM. One of the markers we found to be altered in early carcinogenesis, the fractal dimension of chromatin, can be measured using such optical techniques.
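To illustrate how a fractal dimension can be estimated from an image, the sketch below applies the generic box-counting method to a binary chromatin mask (for example, one thresholded from a TEM micrograph). The array sizes, threshold and box sizes are placeholders; this is a textbook estimator, not the specific analysis pipeline used in the study.

```python
# Generic box-counting estimate of a fractal dimension from a binary 2D mask
# (e.g. a thresholded chromatin density image). Illustrative only; not the
# authors' analysis pipeline.
import numpy as np

def box_count(mask: np.ndarray, box_size: int) -> int:
    """Count boxes of side `box_size` containing at least one foreground pixel."""
    h, w = mask.shape
    return int(sum(
        mask[y:y + box_size, x:x + box_size].any()
        for y in range(0, h, box_size)
        for x in range(0, w, box_size)
    ))

def fractal_dimension(mask: np.ndarray, box_sizes=(2, 4, 8, 16, 32)) -> float:
    """Slope of log(N) versus log(1/box_size) gives the box-counting dimension."""
    counts = [box_count(mask, s) for s in box_sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy example with a random mask; a real analysis would use a segmented image.
rng = np.random.default_rng(0)
toy_mask = rng.random((256, 256)) > 0.7
print(f"estimated box-counting dimension: {fractal_dimension(toy_mask):.2f}")
```

A space-filling structure yields a dimension near 2 for a 2D image, while increasingly clumped, self-similar chromatin organization pushes the estimate lower; it is this kind of shift that can serve as a quantitative marker.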

 

Your study focuses on colorectal cancer. Do you think similar nanoscale changes may occur more widely across other cancer types?

Using spectroscopic techniques, our previous studies have established that early tumorigenic nanoscale changes in cellular structure are universally present across a variety of human cancers, including lung, pancreatic, and ovarian. However, how similar these changes are across different cancer types has not been determined. Since these changes cannot be resolved by means of optical techniques, future electron microscopy studies would be needed to identify and describe precisely what nanoscale alterations are occurring and compare them across different cancer types.

 

What kind of impact do you think a better understanding of ultrastructural chromatin changes may have in the future on i) our understanding of cancer etiology ii) clinical management of cancer patients?

Chromatin architecture defines the physical and biochemical forces that govern genome function. The chromatin rearrangement we discovered precedes any known molecular marker of cancer, and may therefore be the force that triggers tumorigenic alterations in genomic activity. Thus, the study of early preneoplastic chromatin structure provides critical insight into cancer initiation and progression.

At the same time, characterization of premalignant changes serves to establish biomarkers of early tumorigenesis. This opens the door for a set of new technologies for the diagnosis, risk stratification and prevention of cancer. In the future, these changes could be detected by primary care physicians using cost- and time-effective, non- or minimally invasive optical methods, complementing pathological examination and enhancing diagnosis through automated analysis.

 

What’s next for your research?

Our research focuses on three main aims: i) understanding the biological mechanisms of malignant transformation, ii) identifying preneoplastic structural changes, and iii) detecting those changes in order to develop novel tools for cancer prediction and diagnosis.

Accordingly, we plan to study the functional consequences of early tumorigenic changes in chromatin structure on genome function, by integrating the subdiffractional sensitivity of light scattering with enhancement of optical contrast in chromatin regions according to their transcriptional activity, using carefully selected protein markers. Additionally, we will seek new insights into pre-neoplastic nuclear transformation in a variety of epithelial cancers, conducting advanced electron microscopy studies with improved spatial resolution and protocols that better preserve native chromatin structure. Finally, we aim to develop novel optical methods capable of detecting the discovered biomarkers in the clinic, aiding cancer diagnosis in a cost- and time-effective, non- or minimally invasive manner.

 

Getting to the center of microRNA-mRNA binding


microRNAs (miRNAs) influence the expression of around 60 percent of mammalian genes and have special relevance to the study of development and disease. They function by interacting with messenger RNA (mRNA) via Watson-Crick base pairing, repressing translation or destabilizing the transcript. A short seed region at the 5' end of a miRNA strand is important in binding to complementary target regions of mRNA. The seed site is evolutionarily conserved and is widely used in bioinformatics studies to predict miRNA target sites in the genome.

miRNAs bind to their target mRNAs within an RNA-induced silencing complex (RISC), and bulk sequencing of pools of miRNA-mRNA isolated from these complexes allows researchers to infer binding relationships. However, sequencing datasets do not preserve individual molecular relationships, and typically only 50 percent of an mRNA pool can be matched computationally to partner miRNA seed sites. With this in mind, Nicole Cloonan from the University of Queensland, Australia, and colleagues explore, in their study in Genome Biology, the importance of central regions of miRNAs ('centered sites') that are also highly evolutionarily conserved.
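For readers unfamiliar with how such computational matching is typically done, the sketch below finds candidate seed matches (the reverse complement of miRNA positions 2-8) and contiguous 'centered-site-style' matches (here, positions 4-15) in an mRNA sequence. The position ranges, the requirement for perfect complementarity and the toy sequences are simplifications for illustration, not the exact criteria used by Cloonan and colleagues.

```python
# Minimal sketch of miRNA site matching in an mRNA sequence. The seed match
# (reverse complement of miRNA positions 2-8) is the common textbook
# definition; the "centered" match here simply requires contiguous pairing to
# roughly positions 4-15. Not the study's exact criteria.

COMPLEMENT = str.maketrans("ACGU", "UGCA")

def revcomp(rna):
    """Reverse complement of an RNA sequence."""
    return rna.translate(COMPLEMENT)[::-1]

def seed_match_positions(mirna, mrna):
    """0-based mRNA positions matching the miRNA seed (positions 2-8)."""
    seed_target = revcomp(mirna[1:8])
    return [i for i in range(len(mrna) - len(seed_target) + 1)
            if mrna[i:i + len(seed_target)] == seed_target]

def centered_match_positions(mirna, mrna, start=3, end=15):
    """0-based mRNA positions with contiguous pairing to miRNA positions 4-15."""
    target = revcomp(mirna[start:end])
    return [i for i in range(len(mrna) - len(target) + 1)
            if mrna[i:i + len(target)] == target]

# Toy example: a let-7-like miRNA against an invented target sequence.
mirna = "UGAGGUAGUAGGUUGUAUAGUU"
mrna = "AAACUACCUCAAAACUAUACAACCUACUACCUCAUUU"
print("seed matches at:", seed_match_positions(mirna, mrna))        # [3, 26]
print("centered matches at:", centered_match_positions(mirna, mrna))  # [19]
```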

Venn diagram of the number of microarray mRNA probes that bind canonical miRNAs and isomiRs. Image source: Martin et al, Genome Biology, 2014, 15:R51

To elucidate binding relationships, the authors labeled ten different miRNAs with biotin and introduced each probe into cells to bind to target mRNAs. The biotin-tagged mRNA was then affinity purified from the cells and hybridized to microarrays containing known human mRNAs. This pull-down approach allows an individual miRNA to be assigned to its population of target mRNAs. In addition to using probes that were known true 'canonical' miRNAs, the authors also used naturally occurring isomiRs of each miRNA probe (miRNAs generated from the same pre-miRNA as the 'canonical' miRNA but with different 5′ and/or 3′ end cleavage) as controls that differed in seed site but shared centered sites.

Each miRNA probe revealed a target population of approximately 1,500 mRNAs, in accordance with computational predictions and other published data. Base pair matches between the target mRNAs and individual miRNAs suggest that binding commonly occurs at both seed and centered sites. Assuming that centered site binding still occurs when there is some mismatch between the miRNA and its target, seed and centered sites together could account for up to 90 percent of interactions within the isolated mRNA populations. The latter 'imperfect' centered binding scenario was verified by luciferase assay, considered the gold standard for testing miRNA-mRNA interactions.

Further experiments show that the biotin pull-down approach is robust in isolating RISC based miRNA-mRNA, although slightly biased against the sampling of seed mediated interactions, which may more readily dissociate during capture. Crucially, the rate of false positive identification of centered site interactions was found to be low. It was also observed that centered site miRNA binding most often caused translational repression, as opposed to mRNA degradation, in order to facilitate gene silencing.

It is clear that centered site mediated interactions should be a major consideration when interpreting and modeling miRNA-regulated networks. However, the inclusion of centered binding sites may not add predictive power to purely computational approaches for identifying miRNA binding sites in the genome, which already suffer from imprecision and tend to find too many putative binding sites. Instead, the authors advise that high-throughput experimental methods at the level of both gene and protein expression are needed to gain a realistic picture of miRNA-mRNA interactions.

 

Lanjuan Li on the immune impact of H7N9 bird flu infection in humans


In 2013, avian influenza A (H7N9) virus was detected in humans and, according to figures from the World Health Organization, has been fatal in over a quarter of those known to have been infected. A comprehensive understanding of the effects of this virus on the human immune system is still lacking. Lanjuan Li and colleagues from the Zhejiang University School of Medicine, China, sought to address this in their recent study in Critical Care, in which they investigated the cytokine profiles and functional phenotypes of the immune cells of a cohort of infected patients compared with healthy controls. Here Li explains what insights they gained and how this may affect treatment of human H7N9 infection.

 

What is avian influenza A (H7N9)?

Influenza A (H7N9) is a subtype of influenza virus that had not been seen in either animals or people until it was detected in humans in March 2013 in China. The new human H7N9 viruses are the product of reassortment of viruses of avian origin. Most cases rapidly progressed to acute respiratory distress syndrome, with a mortality rate of roughly 30 percent. Fortunately, this virus does not appear to transmit easily from person to person, and sustained human-to-human transmission has not been reported. A vaccine developed independently by Chinese scientists has now been tested in controlled clinical trials.

 

Why is it important to characterize the immunological characteristics of patients with H7N9?

In almost all viral diseases, the virus-specific immune response is crucial for viral clearance; however, it usually amplifies non-specific immune responses, causing damage to normal host tissues. For example, during chronic hepatitis B virus (HBV) infection, the immunological 'complication' is usually mild and limited but long-lasting (without treatment intervention), leading to persistent hepatic inflammation. Sometimes, as during SARS-coronavirus infection, the response is rapid, profound and systemic. We believe that, in viral diseases, the immunological characteristics of patients accurately reflect their clinical characteristics. When it comes to H7N9 infection, clarifying the traits of the host response would deepen our understanding of the clinical presentation of this disease.

 

What were your main findings?

As we expected, most H7N9-infected patients had rapid, profound and systemic immunological consequences, as reflected by T lymphopenia, activation of innate and adaptive immune cells, and hyper-cytokinemia in their circulation. Accordingly, most patients clinically exhibited systemic inflammatory response syndrome (SIRS). In addition, we observed the simultaneous presence of an anti-inflammatory response. This derives from a feedback response that limits the detrimental effect of systemic inflammation. The consequence of this anti-inflammatory response is not clear; however, it may predispose patients to secondary infection.

 

You found that many patients with severe avian H7N9 influenza developed T lymphopenia. How does this compare to other respiratory viruses?

We found this condition to be similar to that seen with other influenza A subtypes, H5N1 and pH1N1. In fact, lymphopenia, when combined with other parameters, could be compiled into an early 'diagnostic scoring system' during the 2009 influenza pandemic. Other common respiratory viruses, such as seasonal H3N2 influenza virus, human rhinovirus (HRV) and respiratory syncytial virus (RSV), also cause lymphopenia. Indeed, lymphopenia is a common feature of viral upper respiratory infections in humans, but the degree and consistency of these changes varies from virus to virus.

 

What are the clinical implications of your findings?

As the aberrant immune response in H7N9-infected patients led to rapid progression of the disease, therapy aimed at blocking the profound and uncontrolled immune reaction may be beneficial. Our findings will help alert clinicians to a worsening condition. A potential response would be the use of glucocorticoids to inhibit cellular and humoral immunity. Another possible approach is to remove excess circulating inflammatory cytokines by plasma exchange and continuous veno-venous hemofiltration (CVVH), a form of artificial liver support system, which has been shown to reduce cytokine levels.

 

What further research is needed?

Our study provides an overall perspective on the immune status of H7N9-infected patients, but a number of key questions remain to be answered. First, as innate immunity is the first line of host defense, it is unknown which innate cells and cytokines/chemokines are key in initiating the response to H7N9 infection. We also still do not know how the various cytokines regulate one another. Second, humoral immunity is an important component of the adaptive immune response to viral infection; identifying neutralizing antibodies against the H7N9 virus may therefore be beneficial for treatment. Third, the immunological status of patients may vary greatly during disease progression, so dynamic observation of patients' immunological status will be necessary.

 

Ross Prentice on hormone therapies and breast cancer risk postmenopause


The Women’s Health Initiative (WHI) trials, initiated by the US National Institutes of Health in 1991, marked one of the largest US prevention studies, and set out to investigate the most common causes of death, disability and impaired quality of life in postmenopausal women. The trials looked at cardiovascular disease, osteoporosis and cancer, including the effect of different hormone replacement therapies on breast cancer risk. Surprisingly, women on conjugated equine estrogens (CEE) alone showed a reduced risk, whilst those on CEE plus medroxyprogesterone acetate (MPA) showed an increased risk. Ross Prentice from the Fred Hutchinson Cancer Research Center, USA, and colleagues sought to understand what underlies these divergent results through analyzing differences in blood sex hormone levels, the results of which are published in a recent Breast Cancer Research study. Prentice discusses what they found and what impact this will have on hormone therapy for postmenopausal women.

 

What led to your research interest in hormone therapy for breast cancer?

The present research arose from a desire to understand the biology underlying the results of the Women's Health Initiative randomized, placebo-controlled trials of the two most commonly used postmenopausal hormone therapy preparations in the United States. The regimens studied were also widely used in the UK and other Western countries.

 

Your study probed whether sex hormone level changes could explain the divergent results of the Women’s Health Initiative trials into the association between breast cancer risk and postmenopausal hormone therapies. How did this investigation come about?

Conjugated equine estrogens (CEE) comprise a complex mixture of estrogens and have a profound effect on blood levels of major estrogens, including an approximate doubling of estradiol and a tripling of estrone, but also an approximate doubling of the offsetting sex hormone binding globulin (SHBG). On the surface it seemed unlikely that these changes, which were very similar for CEE and for CEE plus medroxyprogesterone acetate (CEE+MPA), could explain both an important breast cancer risk increase with CEE+MPA and a breast cancer risk reduction with CEE alone. However, the blood concentration changes were quite substantial, and we thought it important to examine the extent to which they could explain the clinical trial findings in a formal 'mediation-type' analysis.
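As a rough illustration of what a mediation-type comparison looks like in code, the sketch below estimates a treatment effect on a binary cancer outcome with and without adjustment for post-randomization changes in estradiol and SHBG; attenuation of the adjusted coefficient toward zero is read as evidence of mediation. The data are simulated and the variable names are hypothetical; the WHI analyses used considerably more elaborate survival models and covariates.

```python
# Hypothetical sketch of a mediation-type comparison: treatment effect on a
# binary outcome, with and without adjustment for hormone-level changes.
# Simulated data only; not the WHI analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20000
treated = rng.integers(0, 2, n)                        # CEE vs placebo (hypothetical)
estradiol_change = treated * rng.normal(1.0, 0.3, n)   # rises on treatment
shbg_change = treated * rng.normal(1.0, 0.3, n)        # offsetting rise on treatment
risk = -3.0 + 0.4 * estradiol_change - 0.6 * shbg_change
cancer = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(float)

def treatment_coef(covariates):
    """Log-odds coefficient for treatment from a logistic regression."""
    X = sm.add_constant(np.column_stack(covariates))
    return sm.Logit(cancer, X).fit(disp=0).params[1]

unadjusted = treatment_coef([treated])
adjusted = treatment_coef([treated, estradiol_change, shbg_change])
print(f"treatment log-odds, unadjusted: {unadjusted:.2f}, "
      f"adjusted for hormone changes: {adjusted:.2f}")
# If the hormone changes mediate the effect, the adjusted coefficient
# moves toward zero relative to the unadjusted one.
```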

 

What were your key findings and were you surprised by them?

We found that the collective changes in blood sex hormones could explain much of the breast cancer risk reduction with CEE, but none of the increase in risk with CEE+MPA. Evidently, in the population studied, the increase in SHBG with CEE was more than sufficient to compensate for the estrogen increases, yielding a somewhat reduced breast cancer risk. We were surprised to find that the substantial variation in breast cancer risk related to pre-randomization circulating sex hormones completely disappeared when a woman used CEE+MPA. It seems likely that the addition of MPA to the treatment regimen resulted in a major stimulation of breast ductal epithelial cells, bringing women who were at relatively low risk, perhaps due to favorable diet and activity patterns over the life span, up to a risk similar to that of women at a relatively high baseline risk.

 

What kind of impact will the findings have on hormone therapy formulations in the clinic?

There is already much reduced use of CEE+MPA worldwide, based on findings for breast cancer and other diseases from the Women's Health Initiative and from substantial observational studies, such as the UK Million Women Study (MWS). The use of CEE has also been reduced, to a lesser extent, since the related health benefits and risks are approximately balanced. To avoid risks associated with CEE, which include elevations in stroke and venous thromboembolic disease, there has been a trend toward lower doses of CEE and toward the use of transdermal estrogens (i.e. the estrogen patch) instead, which bypass the liver and may avoid some of the inflammatory consequences of oral estrogen use. However, transdermal estrogens also do not produce SHBG increases, so any related favorable breast cancer effects may be lost. Further research into the development and evaluation of safe and effective hormone therapy regimens, or alternatives to hormone therapy for the control of menopausal symptoms, deserves a continuing high priority in the biomedical research agenda.

 

Who is going to be affected by this research?

Women making decisions about hormone therapy will be affected by this research. The sex hormone changes resulting from CEE seem to have rather little overall effect on breast cancer risk. However, many of the women studied in the WHI trials were years past the menopause when randomized, and the breast cancer implications appear to be more favorable for these women than for women who start CEE near the menopause. In fact, with much larger numbers of women than were available in the WHI, the UK MWS finds a breast cancer risk increase with CEE among recently postmenopausal women. Hence, a cautious approach to CEE use will continue to be needed among women who are post-hysterectomy, until more is learned about safe and effective regimens. Women with a uterus would be wise to avoid the use of MPA among the strategies they consider for addressing vasomotor symptoms.

 

What’s next for your research?

We are currently trying to more fully understand CEE effects on breast cancer, including some further comparisons of WHI data analyses with published UK MWS results. Also, some WHI investigators have been vigorously studying alternatives to exogenous hormones for menopausal symptom management.

 
