Channel: Biome » Research

Gabriele Sorci on unexpected mate choices in an isolated bird population


Choosing a mate is key to the evolution and survival of a species, contributing to whether the most attractive and beneficial traits are passed on to future generations. It is generally thought that the female of a species should choose a genetically dissimilar male to mate with. This introduces a greater amount of genetic diversity to resultant offspring, potentially conferring enhanced fitness, and notably avoiding the negative impacts of inbreeding. However, in certain environments the opposite has been shown to be true. In a recent study in BMC Evolutionary Biology, Gabriele Sorci from the University of Burgundy, France, and colleagues reveal how an isolated population of house sparrows prefers genetically similar mates. Here Sorci discusses what led to these surprising findings and whether this may occur more widely.

 

How much do we know about the use of genetic similarity as a criterion in mate choice?

Genetic similarity is very often mentioned as being one of the most important factors shaping mate choice in a wide range of organisms. The conventional wisdom was that animals should avoid mating with close relatives because of the risk of inbreeding depression. However, there is accumulating evidence showing a continuum of mate preference, from genetically distant to genetically similar mates.

 

What inspired you to conduct this project and why use an insular population of birds?

Insular populations offer the opportunity to tackle questions that can be hard to address in other environmental conditions. Insular populations are often small and poorly connected with other populations. This increases the risk of losing genetic variation, and we wished to study the mechanisms that might contribute to maintaining genetic variation in such populations.

 

What did you expect to find when you began the study and how different were your actual results?

Given the small population size and the isolated nature of the population, we expected that there should be strong selection pressures to avoid mating with close relatives. Our results did not support this prediction and, if anything, tended to show that females might prefer to mate with genetically close males. It is important to remember that we could only assess the ‘realised’ mate choice and that this might differ from an ideal ‘preference’ because many constraints can operate on this ideal preference.

 

You found an especially high level of extra-pair paternity, where offspring raised by one pair would be the result of a female mating with a male outside of this pairing. How does that tie into your overall conclusions?

This was also quite a surprise because insular populations are usually expected to have low levels of extra-pair paternity. The choice of extra-pair mates tended to confirm the idea that females might prefer to mate with genetically close individuals.

 

How widely would you expect these patterns to be replicated, both in larger populations of house sparrows and in other isolated and non-isolated species?

This is difficult to say and is actually one aspect that we would like to explore in the future. To be more precise we would like to know not only how spatial variation affects the pattern of mate choice that we observed for this population, but also the importance of temporal variation (variation among years). This of course requires a large effort and long term monitoring of several populations, which is difficult to achieve with the prevailing funding scheme.

 

It has long been held that animals will in general avoid inbreeding. Do you think this idea needs rethinking?

This idea has already been reassessed. More than 30 years ago Patrick Bateson published a paper in Nature showing that quails prefer to mate with first cousins. The idea that mate choice should balance the cost of both inbreeding and outbreeding is now relatively well established.

 


Matthew Blurton-Jones on neural stem cells for treating Alzheimer’s


Dementia affects over 35 million people worldwide, with figures set to double by 2030 according to the World Health Organization. Alzheimer’s disease (AD) is the leading cause of dementia; however, current treatments are unable to halt or reverse disease progression. The underlying pathology of AD is tied to the development of beta-amyloid plaques and the subsequent cascade of destructive events that this initiates, leading to nerve cell damage. Efforts to repair this damage have revealed that transplantation of neural stem cells can improve cognition in mouse models of the disease, albeit with no effect on the aggregation of beta-amyloid. In a recent study in Stem Cell Research & Therapy Matthew Blurton-Jones from the University of California Irvine, USA, and colleagues sought to combine the beneficial effects of neural stem cell transplantation with an enzymatic approach to target beta-amyloid degradation. Here Blurton-Jones discusses their results, which show that neural stem cells expressing the enzyme neprilysin can reduce beta-amyloid pathology in mouse models of AD, and how this approach may impact future treatment.

 

What are the current problems in the treatment of Alzheimer’s disease?

Currently approved drugs for Alzheimer’s (AD) are palliative and fail to modify the long term progression of the disease. Drug development and recent clinical trials have primarily focused on approaches that aim to decrease beta-amyloid production or increase its clearance from the brain. Unfortunately, these approaches have thus far failed and in some cases led to troubling side effects (Nature 2002, Jan 31, 415(6871):462 and N Engl J Med 2013; 369:341-350). There is therefore a clear need to study and develop novel therapeutic approaches.

 

What is the underlying pathology that causes Alzheimer’s disease?

The great majority of AD researchers believe that the most upstream cause of AD is the accumulation and aggregation of beta-amyloid within the brain. However, beta-amyloid in turn appears to drive several other key pathologies including neurofibrillary tangle formation, neuronal and synaptic loss, and inflammation. This schema is commonly referred to as the amyloid cascade hypothesis and is strongly supported by genetic and transgenic studies (Science 1992, 10 April, 184-185).

However, recent imaging data suggest that beta-amyloid accumulation begins many years before cognitive dysfunction (reviewed in Lancet Neurol 2010, Jan, 9(1):119-28). This may therefore explain why anti-amyloid therapies have thus far failed in clinical trials. One additional important point regarding AD pathology is that synapse loss correlates more closely with cognitive dysfunction than beta-amyloid and tangle pathology (Ann Neurol 1991, Oct, 30(4):572-80). This suggests that a combinatorial therapy that not only targets beta-amyloid but also increases synaptic plasticity could be beneficial.

 

What has been shown previously with neural stem cell transplantation and Alzheimer’s disease?

Our group previously examined neural stem cell (NSC) transplantation in 3xTg-AD mice (Proc Natl Acad Sci U S A 2009, Aug 11, 106(32)) and a transgenic model of hippocampal neuronal loss (J Neurosci 2007, Oct 31, 27(44)). In both of these studies we found that murine NSCs could improve cognition. Interestingly, NSC transplantation had no effect on either beta-amyloid or tangle pathology in 3xTg-AD mice. Instead, transplanted NSCs increased hippocampal synaptic density and NSC-derived BDNF was necessary for these effects. Recently, other groups have reported similar data that NSCs can improve cognition without affecting beta-amyloid or tau pathology (J Neurosci 2010, Jul 28, 30(30):9973-83; J Neurosci Res 2014 Vol 92, 2, 185–194; Mol Neurobiol 2014, Vol 49, 1, 66-77).

 

How else could neural stem cells be used in the treatment of Alzheimer’s disease?

A major advantage to neural stem cells (NSCs) is that they can migrate considerable distances toward areas of injury and inflammation (Proc Natl Acad Sci U S A 2004 Dec 28, 101(52):18117-22). This suggests that NSCs could be an ideal vehicle to deliver therapeutic proteins such as neprilysin to the brain. Given the extensive distribution of AD pathology, NSCs could likely provide more widespread delivery than current viral gene therapy approaches. As we’ve previously shown, NSCs can also produce high levels of neuroprotective proteins such as neurotrophins, potentially providing additional benefits (Proc Natl Acad Sci U S A 2009, Aug 11, 106(32):13594-9).

 

In your current study you express the proteolytic enzyme neprilysin in the transplanted neural stem cells. What role do you think this could play as a potential treatment in Alzheimer’s disease?

Neprilysin is one of a few key enzymes in the brain that can degrade beta-amyloid. Studies suggest that neprilysin decreases with age and may therefore influence the risk of AD (reviewed in Neuron 2001, 32, 177-180). If amyloid accumulation is the driving cause of AD, then therapies that either decrease beta-amyloid production or increase its degradation could be beneficial, especially if they are started early enough. A great deal of effort has already been put forward to develop drugs that could inhibit beta-amyloid production. However, many of these compounds appear to have significant side effects. The other side of the coin is to also develop drugs that can increase beta-amyloid degradation by increasing either the expression or activity of enzymes like neprilysin.

 

In your study, you used two different mouse models of Alzheimer’s disease to investigate the effect of neural stem cells expressing the enzyme neprilysin. Why was this?

Every mouse model of AD is different and develops varying amounts, distribution, and types of beta-amyloid pathology. By studying the same question in two independent transgenic models, we can increase our confidence that these results are meaningful and broadly applicable to AD.

 

What further research is needed?

There is clearly a great deal more research needed to determine whether this kind of approach could eventually be translated to the clinic. Probably the most important question is whether a clinical grade human neural stem cell line can be identified that can safely provide equivalent synaptic and behavioral benefits. Another key question is how best to modify that line to express neprilysin. One would likely need to target the neprilysin transgene into a single safe-harbor integration site and then confirm the safety of the cells. Given that neprilysin can degrade a handful of other substrates (e.g. enkephalin, substance P), it would also be important to determine whether neprilysin overexpression can produce any troubling side effects.

 

sno-lncRNAs: a story of splicing across humans, rhesus and mice


Convention tells us that ‘DNA makes RNA makes protein’, but the function of the majority of RNA that does not code for protein has, historically, been less clear. The recent advent of high-throughput sequencing technology has challenged our preconceptions of long non-coding RNA (lncRNA) and revealed it to be one of the most abundant classes of RNA. Although only a small number of lncRNAs have been well characterised, they appear to be crucial in the control of gene expression pathways and are no longer considered ‘transcriptional noise’.

Found mostly within introns, sno-lncRNA is an elusive class of lncRNA that depends on snoRNA (small nucleolar RNA) at both ends for its processing. The genomic region encoding one abundant class of sno-lncRNAs is specifically deleted in those with Prader-Willi Syndrome (PWS), a rare genetic disorder in humans. But despite evidence for such apparent functionality, studies identifying and characterising sno-lncRNAs across the genome are lacking. This has been addressed in a recent study published in BMC Genomics by Li Yang from the Key Laboratory of Computational Biology at the CAS-MPG Partner Institute for Computational Biology, China, and colleagues.

Using available data and a series of bioinformatic tools, Yang and colleagues identified sno-lncRNAs across the human, rhesus monkey and mouse genomes. They annotated pairs of snoRNAs and used a custom computational pipeline to identify 19 expressed sno-lncRNAs. Primary sequence analysis revealed that while they do have highly conserved snoRNA ends, sno-lncRNAs themselves are not well conserved.
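The pair-annotation step can be pictured with a toy scan: given snoRNA coordinates, flag pairs that lie close enough together on the same strand of the same chromosome to bracket a candidate sno-lncRNA. This is an illustrative sketch, not the authors' pipeline; the coordinates and the 5 kb window are invented for the example.

```python
# Toy candidate scan (not the study's pipeline): a sno-lncRNA arises from
# two snoRNAs bracketing an intervening sequence, so we look for nearby
# same-chromosome, same-strand snoRNA pairs. The 5 kb gap is an assumption.
from itertools import combinations

def candidate_sno_lncrna_loci(snornas, max_gap=5000):
    """snornas: list of (chrom, strand, start, end) tuples."""
    candidates = []
    for a, b in combinations(sorted(snornas), 2):
        same_locus = a[0] == b[0] and a[1] == b[1]
        gap = b[2] - a[3]  # distance from the end of one snoRNA to the start of the next
        if same_locus and 0 < gap <= max_gap:
            candidates.append((a, b))
    return candidates

# Hypothetical annotations, loosely modelled on a PWS-like region:
snornas = [
    ("chr15", "+", 100_000, 100_120),
    ("chr15", "+", 103_500, 103_620),  # within 5 kb of the first -> candidate pair
    ("chr15", "+", 250_000, 250_130),  # too far away from either
]
print(candidate_sno_lncrna_loci(snornas))  # one candidate pair
```

A real pipeline would of course work from genome annotation files and add expression filters, as the authors did to narrow thousands of pairs down to 19 expressed sno-lncRNAs.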

Within the PWS region, sno-lncRNAs were found to be highly expressed in the human genome, somewhat expressed in rhesus and undetectable in mouse, indicating that mice might make unsuitable models to study human PWS. They also found one sno-lncRNA in the RPL13A region in mouse embryonic stem cells. RPL13A is a gene encoding a ribosomal protein, and is associated with the regulation of lipid toxicity. Further analyses of PWS and RPL13A region sno-lncRNAs indicated that their expression is species-specific and results from alternative splicing, whereby a single gene can be spliced into multiple different transcripts.

Complex organisms are awash with non-protein-coding RNA, whose regulatory roles appear to be many and varied. This study brings us one step closer to understanding one of the most elusive of the non-coding RNAs and further demonstrates a complex regulatory network of coding and non-coding parts within the mammalian genome.

 

Kathryn Maitland on treating severe anaemia in sub-Saharan Africa


Anaemia, either due to insufficient numbers of red blood cells or an impaired ability of these cells to carry oxygen, can result in weakness, fatigue, and dizziness, and in its most severe form can be fatal. According to the World Health Organization the highest proportion of affected individuals are in Africa. Preschool age children were found to be especially at risk, with approximately two thirds of preschoolers affected. Childhood anaemia therefore presents a major public health burden in this region. Severe anaemia, which requires blood transfusion for treatment, is of particular concern. Limited blood supplies have led to guidelines for the rational use of blood transfusions in severe anaemia. However, the effectiveness of these guidelines on clinical outcomes is unclear. Kathryn Maitland from Imperial College London, UK, and colleagues conducted a phase II clinical trial to investigate the safety and efficacy of transfusing a higher volume of blood in Ugandan children with severe anaemia, as published in their recent study in BMC Medicine (with an associated Commentary by Thomas Brick and Mark Peters from Great Ormond Street Hospital, UK). Maitland explains the extent of the severe anaemia problem faced in sub-Saharan Africa and the implications of their results.

 

What is severe anaemia, and what is its health burden in sub-Saharan Africa?

In sub-Saharan Africa severe anaemia (SA) in children is a leading cause of hospital admission, a major cause of direct mortality and a key factor in the 800,000 malaria deaths per year. The definition of severe anaemia varies, with some researchers and guidelines using a haemoglobin level of less than 5g/dl and others less than 6g/dl. As SA includes a very heterogeneous group of children, other studies have addressed this by classifying children into subgroups based on both clinical severity and haemoglobin levels.

 

Why is severe anaemia such a big problem for paediatric patients in Africa and what are its main causes?

The causes of anaemia are multi-factorial with several co-factors causally related to mortality risk. Malaria still plays an important role in the development of severe anaemia (SA) in many parts of Africa. However, in the only comprehensive case-control study of children hospitalised with SA in Africa, undertaken in Malawi, the key associations with SA were bacteraemia (OR=5.3; 95% CI 2.6-10.9), malaria (2.3; 1.6-3.3), hookworm (4.8; 2.0-11.8), HIV infection (2.0; 1.0-3.8), vitamin A deficiency (2.8; 1.3-5.8) and vitamin B12 deficiency (2.2; 1.4-3.6). Neither iron nor folate deficiencies (which are currently recommended as treatments for severe anaemia) were associated with mortality, and were less prevalent among cases than controls.
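The odds ratios quoted above (e.g. OR=5.3; 95% CI 2.6-10.9 for bacteraemia) come from 2x2 case-control tables. As a minimal sketch of how such figures are derived, the following computes an odds ratio with a Woolf (log-based) confidence interval; the counts are hypothetical, not the Malawi study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 case-control table:
       a = exposed cases,    b = unexposed cases,
       c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is the root of the summed reciprocal counts
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(30, 70, 10, 90)
print(f"OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR above 1 with a confidence interval excluding 1, as for all six exposures listed above, indicates an exposure more common among cases than controls.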

 

What are the current recommendations for management of severe anaemia?

The World Health Organization management guideline includes conservative use of transfusion, iron and folate, and antihelminths. Current recommendations are to give 20ml/kg whole blood (or 10ml/kg packed cells) only to children with profound anaemia (haemoglobin less than 4g/dl) or children with haemoglobin less than 6g/dl with signs of severity. Outcomes remain poor, with an inpatient mortality of 9-10 percent and, following initial transfusion, about 25 percent of children remaining severely anaemic (haemoglobin less than 5g/dl), leading to frequent use of multiple low volume (20ml/kg) transfusions, which is wasteful, inefficient and exposes children to additional risks (such as reactions and infection).
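The transfusion criteria as paraphrased above can be condensed into a small helper function. This is an illustrative sketch of the guideline logic only, not clinical software; the weights and flags are invented inputs.

```python
# Sketch of the WHO transfusion criteria as summarised above:
# 20 ml/kg whole blood (or 10 ml/kg packed cells) only for haemoglobin
# < 4 g/dl, or < 6 g/dl with signs of severity. Inputs are illustrative.
def transfusion_volume_ml(weight_kg, hb_g_dl, severe_signs, packed_cells=False):
    """Return the recommended transfusion volume in ml, or 0 if the
    child falls outside the transfusion criteria."""
    if hb_g_dl < 4 or (hb_g_dl < 6 and severe_signs):
        per_kg = 10 if packed_cells else 20
        return per_kg * weight_kg
    return 0

print(transfusion_volume_ml(12, 3.8, severe_signs=False))                     # 240 ml whole blood
print(transfusion_volume_ml(12, 5.2, severe_signs=True, packed_cells=True))   # 120 ml packed cells
print(transfusion_volume_ml(12, 5.2, severe_signs=False))                     # 0: outside criteria
```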

 

What were the main findings of your study?

In a clinical trial we evaluated the safety and efficacy of a higher initial volume of whole blood (30ml/kg) against the standard volume (20ml/kg) in 160 Ugandan children, assessing 24 hour anaemia correction (haemoglobin more than 6g/dl; the primary outcome) and 28-day survival. We found that the higher initial transfusion volume prescribed at hospital admission was safe and resulted in an accelerated haematological recovery. By 24 hours, 70 (90 percent) children in the 30ml/kg arm had corrected severe anaemia compared to 61 (74 percent) in the 20ml/kg arm. From admission to day 28 there was a greater haemoglobin increase from enrolment in the 30ml/kg arm (global p<0.0001), with only 1 death in the 30ml/kg arm versus 6 deaths in the 20ml/kg arm (p=0.12).
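The mortality comparison (1 versus 6 deaths, p=0.12) is the kind of small-count contrast typically assessed with Fisher's exact test. Below is a stdlib-only sketch of the two-sided test; the 80-child arm sizes are hypothetical, since the exact per-arm denominators are not given above.

```python
# Two-sided Fisher's exact test from first principles: sum the hypergeometric
# probabilities of all tables (with the observed margins) that are no more
# likely than the observed table.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def table_prob(x):  # hypergeometric probability of x in cell (1,1)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = table_prob(a)
    return sum(table_prob(x)
               for x in range(max(0, col1 - row2), min(row1, col1) + 1)
               if table_prob(x) <= p_obs + 1e-12)

# Deaths vs survivors per arm, assuming hypothetical 80-child arms:
p = fisher_exact_two_sided(1, 79, 6, 74)
print(round(p, 3))  # ~0.117, in line with the reported p=0.12
```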

 

What impact do you think your findings will have on public health, policy and clinical practice?

Current transfusion guidelines were developed to protect scarce resources, avert overuse, and reduce the risk of transfusion-transmissible infections, but are conservative not only in terms of the criteria applied for administering a transfusion at all, but also in terms of the volume of blood transfused. The evidence base for the paediatric guidelines is weak and consequently adherence is poor. A poor or incomplete response to recommended treatment in children with severe anaemia results in relapse, readmission and death. Because severe anaemia is very common, the high ‘hidden’ morbidity and mortality occurring within the first few weeks after initial diagnosis is likely to contribute importantly to overall under-five mortality. If not adequately addressed, severe anaemia may thus be an obstacle to the achievement of Millennium Development Goal No.4 on child survival in Africa. A Phase II trial demonstrating the safety and efficacy of a higher volume of blood is an essential step to the justification of a large Phase III trial.

 

What are the barriers to optimal severe anaemia management in sub-Saharan Africa?

The evidence base informing management guidelines for children with severe anaemia is weak, and the guidelines have not been revised despite new evidence suggesting that key correlates of poor outcome – inadequate response to transfusion, bacterial co-infection and vitamin deficiencies – are not covered by current recommendations. Combining different strategic approaches within a new management ‘bundle of care’ may reduce immediate mortality and subsequent transfusion requirements through more liberal and larger-volume transfusion, address nutritional deficiencies through multi-vitamins, and prevent further infections through post-discharge anti-infective prophylaxis – components that might have their greatest impact after the effects of transfusion have waned (by 2-3 months).

 

How do you think these barriers can be overcome to implement changes to increase transfusion volume?

More liberal transfusion policies may in fact be associated with both economic and indirect benefits for blood transfusion resources. In the short term, benefits may arise from averting the need for re-transfusion, with its associated costs, the cumulative staff hours required per patient, prolonged admission and the inherent biological risks of multiple transfusions. Future research is needed to investigate ways in which blood can be delivered most efficiently to the benefit of the largest number of individuals.

 

What further research is required?

The TRansfusion and TReatment of severe Anaemia in African Children: a randomised controlled Trial (TRACT) was designed to address these barriers in optimal management and will evaluate three components that could form an integrated treatment package for severe anaemia: transfusion, micro-nutrient supplementation to address underlying nutrient deficiencies, and short-term antimicrobial prophylaxis to prevent recurrent infections. Each targets a different mechanism for reducing mortality and morbidity and addresses both early and late outcomes; effects are thus expected to be additive. It will be conducted over the next 3 years in 3954 Ugandan and Malawian children.

 

Epigenomic clues to human inflammatory disease from comparative primate analyses


Genome-wide analyses have brought new insights to the genetic basis of chronic inflammatory diseases, which have seen a significant rise in prevalence during the 21st century. This increase is considered too rapid to be accounted for by changes in the frequency of genetic risk variants. However the environment, which is a key factor in the development of common inflammatory diseases, may impact the underlying genetics of these disorders through alterations to the epigenome. Christopher Bell from King’s College London, UK, and colleagues probe the human DNA methylome of peripheral blood to uncover epigenetic variations that may have a key role to play in inflammatory disease, as published in their recent study in Genome Medicine. Here Bell discusses their comparative epigenomic approach, using chimpanzee and rhesus macaque genomes, and the potential clinical impact of their findings.

 

What was the main goal of this research?

The overarching goal of our work is to use epigenomic analysis to understand human disease. For this particular paper the focused aim was to identify strong human-specific epigenetic variation, in whole blood, by a triangulation comparison between human, chimpanzee and rhesus macaque. Thus we hoped to identify potential regulatory change in this immunologically important tissue type. These changes will be driven by facilitative genetic differences between the three primates, but may additionally also represent environmentally influenced variation. We were fortunate to be able to perform this comparative epigenomic analysis due to a fantastic collaboration with Lutz Walter and Christian Roos at the German Primate Centre.
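The triangulation comparison can be sketched with toy numbers: a genomic window is a candidate human-specific DMR when the human methylation level diverges from both chimpanzee and rhesus while the two non-human primates agree with each other. The thresholds, values and window labels below are invented for illustration and are not the study's actual method.

```python
# Toy triangulation sketch: flag windows where human methylation diverges
# from BOTH non-human primates by more than a cutoff, while chimpanzee and
# rhesus agree. Cutoff and example values are illustrative assumptions.
def human_specific_dmrs(windows, cutoff=0.3):
    """windows: list of (window_id, human, chimp, rhesus) methylation levels in [0, 1]."""
    hits = []
    for win, hu, ch, rh in windows:
        human_diverged = abs(hu - ch) > cutoff and abs(hu - rh) > cutoff
        apes_agree = abs(ch - rh) <= cutoff
        if human_diverged and apes_agree:
            hits.append(win)
    return hits

windows = [
    ("chr14:LTB4R", 0.15, 0.80, 0.85),  # human diverges from both -> candidate s-DMR
    ("chr20:GNAS", 0.70, 0.72, 0.68),   # all three similar -> not species-specific
    ("chr1:other", 0.40, 0.10, 0.90),   # no consistent pattern -> excluded
]
print(human_specific_dmrs(windows))
```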

Since the advent of high throughput SNP (single nucleotide polymorphism) array GWAS (genome-wide association studies), there has been considerable success in identifying common genetic variants associated with common diseases. However, we also know environmental factors have a strong impact on these diseases. This is particularly the case when we consider the inflammatory and metabolic conditions that have had a dramatic rise in prevalence in the last 50 years. This is too short a timeframe to be due to change in risk allele frequency, and is hypothesised to be due to the modern human environment.

These environmental factors, through the interface of epigenetic changes, may be translated into biological effectors on the genome. Owing to both the replication stability of DNA methylation and its potential plasticity, it is proposed as a biomarker of quantitative lifetime environmental exposures or accrued pathogenic alterations. Therefore, by defining this human-specific epigenome we may identify human species-specific physiological differences and vulnerabilities, as well as start to look within this variation for a potential subtle imprint of the modern environment.

 

What inspired you to take a comparative epigenomic approach to probing the human DNA methylome for disease associations?

We have been and continue to be involved in common disease Epigenome-Wide Association Studies (EWAS). Whilst exciting strong signals with aging and smoking have been discovered, indicating the promise of the field, the non-cancer disease associations identified so far have only been very small DNA methylation changes. Therefore we took the approach that, to increase our understanding of this human population variation, which we were trying to associate with disease state, we needed to take a step back and learn more about fixed species-specific variation. This was partly inspired by an excellent paper from De and Babu (Proc Natl Acad Sci U S A 2010, 107, 13004-13009) discussing the ‘time-invariant’ principle of genome evolution, in that similar processes occur across all three time frames of species-, population- and cancer-evolution. We have used this concept previously to investigate epigenetic change, firstly by focusing on direct facilitative genetic effects, identifying by inter-primate comparison the set of human-specific CpGs that we termed CpG ‘beacons’.

Furthermore regulatory change has been proposed as a major driver in the delineation of primate species phenotypic variation since the classic paper by King and Wilson in 1975 (Science 1975, 188, 107-116). Therefore to be able to assay this epigenetic regulatory level in a genome-wide study is another step in fulfilling their pioneering theory and was further inspiration for this work.

Finally the potential of environmentally-driven epigenetic effectors needs thorough exploration. Starting to define human-specific Differentially Methylated Regions (s-DMRs) comprising the human-specific DNA methylome is part of this process. Also we saw that a comparative blood cell methylome analysis could clearly be interesting with respect to the recent rise of common inflammatory diseases.

 

What surprised you when you started looking at the data?

When we first looked at the data, there were still only a few genome-wide human methylomes published and no chimpanzee or rhesus macaque datasets. So it was obviously exciting to look at data no one had ever seen before. What surprised us initially was just how similar, on the kilobase-plus scale, these methylomes looked. The bigWig MeDIP-seq track files nearly mirrored each other (see image below and the comparative trimethylome). This neatly brought home, due to the sequence similarity between these homologous primate species, just how genetically driven (predominantly due to CpG density) the DNA methylome is.

The GNAS locus with MeDIP-seq DNA methylation tracts of human (blue), chimpanzee (orange) and rhesus macaque (olive). Image source: Christopher Bell, King’s College London, UK.


Secondly, although not a direct surprise, as we had hypothesised this, it was nonetheless pleasing that an immunological gene was our most significantly different locus and, furthermore, that its entire pathway was also involved. The Leukotriene B4 receptor (LTB4R/BLT1) is implicated across the gamut of human inflammatory disease. That we have also been able to corroborate this finding with other datasets and studies was great, including across other primates as well as in both wild-born and captive-born animals.

 

Who is going to be interested in this research?

We expect that this research would be of interest to all those researchers working in the genomics and epigenomics of common human disease, as a further avenue to think of in disentangling these disorders. Excellent work by Ziller and colleagues has started to define the dynamic human DNA methylome from multiple tissues (Nature 2013, 500, 477-481), and defining the human-specific methylome is an important additional step in the application of epigenomics to genomic medicine.

Of further particular interest for epigeneticists currently unpicking the genetic effects on methylation, we found, through comparative motif analysis within CpG-dense regions, trans-species evidence for a role of the transcriptional repressor CTCF in reducing methylation. We also observed additional support for the interesting association of methylation variability with enhancer regions, owing to an enrichment of s-DMRs at these loci.

With respect to immunology, we were able to show functional effects of our epigenetic findings in LTB4R, through another excellent collaboration with Grzegorz Woszczek and his graduate student Holly Foster at the MRC/Asthma UK Centre in Allergic Mechanisms of Asthma, UK. Exploring inter-human variability of this locus, as well as other s-DMRs, in association with disease is an area we, and others, may also wish to pursue. Our results also further emphasise the uniqueness of the human immune system, and the potential caution that is required in extrapolating human diseases from model organisms.

On the back of this work we have further proposed the ‘s-DMR hypothesis’, whereby regions of high sequence similarity between the primates, where human-specific epigenetic variation can be identified, may be enriched for human-specific environmentally-driven DMRs.

 

What kind of impact will your findings and similar studies have on the clinic?

Epigenomic analysis is poised to revolutionise pathological interpretation, due to the powerful ability of these cell-type specific signatures to deconvolute heterogeneous biopsy tissue samples.

Our work is at the beginning of the next stage of implementing the capabilities of epigenomics, whereby we will hopefully be able to precisely quantitate environmental exposure. We can see the dawning of this potential already with tobacco smoking, where instead of a physician relying on unreliable and imprecise self-reported measures, long term blood-derived DNA methylation changes, at the AHRR locus and others, could be measured. This can also be used to measure passive or in utero exposure.

Precisely delineating these environmental effects may also lead to new causative associations, as well as a molecular understanding of the pathophysiological processes involved in factors already associated with disease – that is, in how they modify the epigenome and therefore influence genome function. Our work in defining the human-specific methylome facilitates the beginning of outlining the variability and potential impacts of modern human environments on the epigenome. The recently published reconstruction of an archaic human methylome by Gokhman and colleagues (Science 2014, Apr 17), through modeling of the expected degradation of DNA, shows that with this and the accumulation of further datasets we will be able to start to plot the chronology of when these methylation changes arose. Moreover, once we can fully pick apart the subtle genetic effects on the methylome, due to particular transcription factor binding motif mutations for instance, as well as separate minute true change from technical variation, even more elusive signatures may become identifiable.

 

What are you up to next?

Having dissected the human epigenome from an evolutionary perspective, and having been involved in a number of EWAS, we have come to the clear conclusion that the most powerful model for identifying human disease-associated epigenetic variation is discordant monozygotic twins. Fortuitously, the chance to pursue this avenue of research has become available through the opportunity to work with Prof. Tim Spector and the TwinsUK cohort at King’s College London, where I am now based. We will also continue to use comparative evolutionary tools, as well as integrated omics approaches, to further our understanding of common human disease.

 

The twisted leukemia genome: a third dimension to cancer genomics


Studies that use the genome sequence or gene expression patterns to draw biological conclusions are ten-a-penny, especially in the field of cancer. For example, specific gene expression signatures and fusions in DNA have been shown to be highly predictive of leukemia. However, have you ever considered whether the shape – rather than the sequence – of the genome is also altered in cancer? A new proof-of-principle study in Genome Biology shows that this just might be the case.

Given that chromatin proteins are becoming ever more linked to cancer, Josee Dostie and colleagues from McGill University, Canada, wondered whether cancer genomes may suffer not only from defects in their DNA sequences but also from changes in their shape.

To address this question, machine learning, in which a computer program uses a curated dataset to discover which characteristics are linked to which outcomes, was combined with chromatin conformation data generated by the ‘5C’ technique. These data identify contacts between sequences located some distance from each other along the DNA molecule.
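To make the idea concrete, here is a minimal, purely illustrative sketch of classifying samples from chromatin-contact features: each sample is a flattened vector of 5C contact frequencies and the label is a leukemia subtype. The nearest-centroid classifier and the toy contact values are assumptions for illustration, not the authors' actual machine-learning pipeline or data.

```python
# Toy sketch: predicting leukemia subtype from chromatin "shape" features.
# Each sample is a flattened vector of 5C contact frequencies.
import math

def centroid(vectors):
    # Element-wise mean of a list of equal-length vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_fit(samples):
    # samples: {subtype label: [contact-frequency vectors]}
    return {label: centroid(vecs) for label, vecs in samples.items()}

def predict(model, vector):
    # Assign the label whose centroid is closest in Euclidean distance.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], vector))

# Hypothetical contact-frequency vectors for two leukemia subtypes.
train = {
    "MLL-AF9": [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9]],
    "MLL-AF4": [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]],
}
model = nearest_centroid_fit(train)
print(predict(model, [0.85, 0.15, 0.85]))  # -> MLL-AF9
```

In the actual study the feature space is far larger and the classifier more sophisticated, but the principle is the same: contact frequencies, not sequence, carry the predictive signal.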

Looking specifically at the shape of the HOXA gene cluster in leukemia cells, Dostie and colleagues found that the shape of the genome is highly predictive of the type of leukemia a cancer cell originates from – not only whether the cell contains an MLL-fusion protein, but also which gene MLL is fused to, thereby defining the leukemia subtype.

Incredibly, the shape of the genome was just as good as, if not better than, gene expression in making predictions.

The mechanistic relevance of the distinctive shape of HOX gene clusters in each type of leukemia is not entirely clear and awaits further study. One interesting theory holds that 3D organization compartmentalizes genomes, concentrating genomic loci in order to prevent or promote gene expression.

As is common in science, we have also yet to disentangle cause-and-effect: does the misshapen genome precipitate cancer? Or do the changes to DNA sequence and/or gene expression induce twists in the genome?

Whatever the biological truth underlying Dostie and colleagues’ findings, their study will hopefully provide thought-provoking fodder for cancer researchers. If the strong predictability shown here extends to a wider range of cancers, then perhaps the field will be encouraged to think more widely about changes to the genome that occur during tumorigenesis, rather than limiting research to a hunt for mutations in the DNA sequence or signatures of gene expression.

 

For more on the general biology of the 3D genome, please see the BioMed Central blog.

Juliana Chan on unravelling the link between diabetes and cancer risk


An improved understanding of risk factors for cancer has helped reduce cancer incidence. However, much is yet to be clarified about how the complex pathways that contribute to increased risk can be collectively targeted to best counteract their effects. In individuals with type 2 diabetes, hyperglycaemia is a known risk factor for all-site cancer. Interventions including improved glycaemic control and the inhibition of pathways concerned with the renin-angiotensin system (RAS) and 3-hydroxy-3-methyl-glutaryl-coenzyme-A-reductase (HMGCR) have each been shown to separately reduce cancer risk. In a study in BMC Medicine, Juliana Chan from the Chinese University of Hong Kong, China, and colleagues, investigate the additive effects of these approaches on cancer incidence in patients with type 2 diabetes. Here Chan discusses the link between cancer and diabetes and the public and personal health implications of their findings.

 

Why has there been an increase in the prevalence of type 2 diabetes in recent years?

Diabetes is a disorder of energy metabolism in which subjects cannot maintain blood glucose within a normal range of 5-8 mmol/L at all times. Blood glucose is regulated by complex mechanisms, including insulin – the only hormone that reduces blood glucose levels – and many blood glucose-raising factors. Rapid globalisation and modernisation, characterised by excessive food intake, reduced physical activity, obesity and psychosocial stress, can cause multiple hormonal changes that increase the risk of diabetes, especially in those with a family history and genetic predisposition (the latter often due to inadequate beta cell reserves). These scenarios are particularly relevant to populations such as Asians undergoing rapid acculturation and lifestyle changes, who often live in environments with many health hazards – such as chronic infections (e.g. hepatitis B and C), high exposure to tobacco, and environmental toxins and pollutants – which may cause low grade inflammation that further increases diabetes risk.

 

What other diseases does diabetes predispose an individual to?

Chronic hyperglycaemia can cause widespread damage to blood vessels and nerves, resulting in multiple organ failure. In addition to heart disease, stroke, kidney failure, leg amputation and visual loss, diabetes also increases the risk of all-site cancer, infections, liver disease, cognitive dysfunction and depression by 1.3-3 fold compared to those without diabetes. These co-morbidities are becoming more frequent with aging, and with a rising number of young subjects with diabetes or pre-diabetes who face a long disease duration. The declining death rate from heart disease and stroke, in part due to the availability of interventional therapies (e.g. dialysis and coronary intervention), may also have led to cancer becoming an increasingly important cause of death in diabetes.

 

What is the link between diabetes and cancer?

In the 1920s, Otto Heinrich Warburg first reported that respiration by fermentation (under insufficient oxygen) favoured cancer cell growth over normal cell growth, which was more dependent on aerobic respiration (sufficient oxygen). Hyperglycaemia and excess free fatty acid (FFA) production in diabetes increase oxidative stress and activate multiple cellular signals, resulting in an abnormal cell cycle. These cellular events may be perpetuated by generalised vasculopathy, with insufficient oxygen and glucose delivery at the tissue level.

Using the Hong Kong Diabetes Registry established in 1995, we have reported the following observations: 1) all-site cancer accounted for 25 percent of diabetes-related deaths, led by hepatocellular and colorectal cancer; 2) a 1 percent increase in glycated haemoglobin A1c (a measure of average blood sugar levels over time) was independently associated with an 18 percent increase in the hazard ratio (HR) of all-site cancer; 3) use of anti-diabetic drugs, RAS inhibitors and statins was associated with reduced cancer risk; and 4) there were non-linear risk associations between lipids and cancer.
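As a back-of-the-envelope illustration of how such a per-unit hazard ratio behaves: under the usual proportional-hazards assumption, the 18 percent figure compounds multiplicatively with the size of the A1c difference. The numbers below are illustrative arithmetic only, not additional results from the study.

```python
# Illustrative only: how a per-unit hazard ratio scales with the size of
# the exposure difference under a proportional-hazards model.
def hazard_ratio(hr_per_unit, delta_units):
    # HR for a difference of delta_units is hr_per_unit ** delta_units.
    return hr_per_unit ** delta_units

print(round(hazard_ratio(1.18, 1), 2))  # 1.18 (18 percent higher hazard per 1% A1c)
print(round(hazard_ratio(1.18, 2), 2))  # 1.39 (a 2% A1c difference)
```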

Many of these findings were corroborated by other research groups who have reported reduced cancer survival in patients with hyperglycaemia as well as increased cancer risk with blood glucose levels after adjustment for obesity in community-living subjects.

 

What were the main findings of your research?

We have previously reported potential crosstalk between the RAS and HMG-CoA reductase (HMGCR) pathways with increased cancer risk in animal models. In this analysis, we asked whether suboptimal glycaemic control (reflected by glycated haemoglobin A1c levels of 7 percent or more (A1c ≥7%)) and non-use of RAS inhibitors and statins to block the HMGCR pathway may additively increase the risk of all-site cancer. Using the Hong Kong Diabetes Registry, with its detailed documentation of risk factors, complications, treatments and clinical outcomes, we found that amongst 6103 Chinese patients with type 2 diabetes and no prior history of cancer, 271 developed all-site cancer after 4.9 years. After adjusting for confounders, new treatment with insulin, oral anti-diabetic drugs, statins and RAS inhibitors was independently associated with reduced cancer risk. Patients with all three risk factors, i.e. A1c ≥7% and non-use of RAS inhibitors and statins, had a four-fold higher adjusted risk of cancer than those without any risk factors.

 

What are the public and personal health implications of these results?

Diabetes and cancer are complex diseases with multiple causes relating to the host, environment and lifestyle. Apart from aging, rapid modernisation in developing countries has resulted in many young people developing pre-diabetes or diabetes, and they are therefore at high risk of diabetes and cancer. Despite the proven benefits of controlling risk factors in type 2 diabetes, in real world practice glycaemic control is often suboptimal, with omission of many lifesaving drugs such as statins and RAS inhibitors. In developing areas such as Asia – where beta cell insufficiency, metabolic syndrome, low grade chronic infections, early onset of disease and renal dysfunction characterised by oxidative stress and micro-inflammation are highly prevalent – the risk of diabetes and cancer may be substantially increased, with personal and societal implications. Consistent with this, Asians living in the USA have higher rates of gastric and liver cancer than their white counterparts.

 

What further research is needed?

Apart from reducing public health hazards and raising awareness to identify high risk subjects for early intervention, our findings suggest that use of RAS inhibitors and statins may prevent cancer, especially in suboptimally treated patients. Well-designed prospective cohorts, randomised trials and mechanistic studies are all needed to confirm these findings. However, given the complex yet probabilistic nature of host-environment-lifestyle interactions, our data highlight opportunities for preventing multiple morbidities in diabetes by strengthening our healthcare system to optimise care with continuous evaluation. Here, translational research to develop sustainable and affordable care models that ensure timely intervention, including the use of lifesaving drugs, will also be important.

 

A 1000 human genomes…and some mycoplasma too


The revolution in rapid, cost-effective, high-throughput sequencing technologies has set a new trend in large-scale biomedical research. With such vast amounts of data being produced, controlling basic sequence quality downstream can present several challenges, with sequence contamination being one key area of concern. Mycoplasma are among the most common contaminants of cell cultures. These minute bacteria lack cell walls and are particularly problematic because their presence is difficult to detect, even using light microscopes. But to what extent is mycoplasma contamination corrupting downstream sequence databases? William Langdon, from the Department of Computer Science at University College London, UK, sought to address this question in his study in BioData Mining.

Langdon analysed the 1000 Genomes Project database – a large, highly respected, international study that has made its data publicly available to researchers worldwide and aims to produce a detailed catalogue of human DNA variation. By downloading and scanning a random sample of more than 50 billion DNA sequences from diverse data sources, and mapping them against published genomes, he found tens of thousands of sequences that may have come from mycoplasma contamination. While many matches were of low quality, NCBI BLAST searches confirmed that some high quality, low entropy sequences matched mycoplasma strains. Overall, these results suggest that at least seven percent of the public data provided by the 1000 Genomes Project may be contaminated with mycoplasma.
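The screening idea itself is simple to sketch: flag reads that share sequence with a contaminant reference. Langdon's actual pipeline mapped reads against published genomes and confirmed hits with NCBI BLAST; the exact k-mer shortcut, the hypothetical sequences and the parameter choices below are illustrative assumptions only.

```python
# Toy sketch: flag reads sharing exact k-mers with a contaminant reference.
def kmers(seq, k=8):
    # All overlapping substrings of length k.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flag_contaminated(reads, contaminant_ref, k=8):
    # Keep reads whose k-mer set overlaps the contaminant's k-mer set.
    ref_kmers = kmers(contaminant_ref, k)
    return [r for r in reads if kmers(r, k) & ref_kmers]

mycoplasma_like = "ATGCGTACGTTAGCATGCGTACG"  # hypothetical reference
reads = [
    "ATGCGTACGTTAGC",   # shares k-mers with the contaminant
    "TTTTTAAAAACCCCC",  # does not
]
print(flag_contaminated(reads, mycoplasma_like))  # -> ['ATGCGTACGTTAGC']
```

Real screening must of course tolerate sequencing errors and strain variation, which is why mapping tools and BLAST, rather than exact matching, are used in practice.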

The results probably come as little surprise to those who are already aware of the troublesome mycoplasma in molecular biology laboratories. While presenting a cause for concern, cross-species contamination in single-species databases such as the 1000 Genomes Project is relatively easy to screen for as compared to contamination from other members of the same species. However, as ever-increasing amounts of genomic data become available in the public domain and in silico research in biology grows, Langdon’s study highlights the need for further independent studies into sequence contamination of large databases.


Rodolphe Thiebaut and Laura Richert on how to accelerate HIV vaccine development


Over 35 million people worldwide are known to be infected with HIV, according to the World Health Organization. With such a heavy global burden, research into vaccines against HIV is underway in order to prevent its further spread. For such research to move from the laboratory bench into the field, lengthy clinical trials must be conducted to assess vaccine safety, immunogenicity and efficacy. In an effort to accelerate the clinical development of HIV vaccine strategies, Rodolphe Thiébaut and Laura Richert from the Bordeaux School of Public Health, France, and colleagues explore early stage trial designs for four HIV vaccine strategies and propose a randomised multi-arm phase I/II design. Here Thiébaut and Richert explain the key elements of their optimised trial design, as published in a recent methodology study in Trials, and discuss whether this could be applied to later phase trials and to vaccine development strategies for other diseases.

 

What got you interested in HIV vaccine studies, and in particular methods to accelerate their clinical development?

As part of the biostatistics core of the French Vaccine Research Institute (VRI), we are directly involved in HIV clinical trial designs and thus confronted with applied methodological questions in our daily research activities. Given the many unknowns in HIV vaccine development, in particular the absence of validated surrogate markers, early clinical trials are part of an iterative ‘discovery’ science, in which candidate strategies go back and forth between preclinical and clinical studies in order to learn the most from the data. Many prophylactic candidate vaccine strategies are currently in early development, and the most promising among them should be identified as rapidly as possible in order to use available resources efficiently in the overall development plan. Accelerating the early clinical evaluation of these strategies is thus one key aspect of moving HIV vaccine research forward.

 

Adaptive design methods in clinical research have become more popular in recent years, especially in cancer. Do you think this is also the way forward for randomised controlled trials in HIV vaccine development?

We are completely inspired by the evolution in designs in cancer research and we think that adaptive designs should also be relevant for prophylactic HIV vaccine development. However, this latter field is very different from cancer research and requires specific trial designs.

 

Can you briefly outline the key aspects of your optimised study design for accelerating early stage clinical development of HIV vaccines?

Our proposed trial design combines phase I and phase II evaluations into one single trial, and therefore allows for a gain in the development timelines of heterologous prime-boost vaccine strategies. Continuous safety monitoring with Bayesian methods is implemented for the phase I evaluation of a new candidate vaccine, with the aim of stopping administrations of this vaccine before the boost phase should it not prove safe enough. This approach is justified if prior knowledge makes it very likely that the new vaccine is indeed safe. In our portfolio this is the case, since the new vaccine consists of a vector that has been extensively studied before, and the novelty concerns only the HIV antigen insert used. Moreover, our design is a randomised multi-arm trial, allowing for an unbiased evaluation of several vaccine strategies in parallel in the phase II part of the trial.
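A minimal sketch of what Bayesian continuous safety monitoring can look like, assuming a simple beta-binomial model: after each enrolled participant, compute the posterior probability that the toxicity rate exceeds some unacceptable threshold, and stop if that probability gets too high. The uniform prior, the 20 percent threshold and the 90 percent stopping cutoff below are illustrative assumptions, not the parameters of the authors' trial.

```python
# Sketch of a beta-binomial Bayesian stopping rule for safety monitoring.
import math

def beta_sf(threshold, a, b, steps=10000):
    # P(p > threshold) for a Beta(a, b) posterior, by midpoint integration.
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    width = (1.0 - threshold) / steps
    total = 0.0
    for i in range(steps):
        p = threshold + (i + 0.5) * width
        total += norm * p ** (a - 1) * (1 - p) ** (b - 1) * width
    return total

def should_stop(toxicities, n, prior=(1, 1), threshold=0.2, cutoff=0.9):
    # Stop if the posterior probability that the toxicity rate exceeds
    # `threshold` is greater than `cutoff`.
    a = prior[0] + toxicities
    b = prior[1] + n - toxicities
    return beta_sf(threshold, a, b) > cutoff

print(should_stop(0, 10))  # 0 toxicities in 10 subjects -> False (continue)
print(should_stop(5, 10))  # 5 toxicities in 10 subjects -> True (stop)
```

The appeal of a sequential rule like this is that it is evaluated continuously as data accrue, rather than only at fixed interim analyses.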

 

How do you hope your proposed methodology will impact the clinical research community?

We hope that our approach is useful for clinical researchers working on the development of complex vaccine strategies, which not only concern the HIV field, but for instance also malaria or tuberculosis vaccine research. In cases where our proposed design is not directly adoptable in a given research context, we hope that our publication gives an impetus to researchers to search for an appropriate ‘optimised’ design and to thoroughly evaluate its features before implementation.

 

Do you think some aspects of the early stage design put forth in your study could be extended to phase III/IV trials?

Continuous safety monitoring methods, which rely on sequential statistical methods, could be relevant for any stage of clinical research, in which there is a particular concern for the safety of an intervention. This is mostly the case in early stage development, where uncertainty about the properties of the evaluated intervention are usually higher than in later stage development, but in some circumstances it could also be warranted in phase III or IV trials.

The other aspect of our trial is the multi-arm design, which is a method also used in later stage development.

 

Do you think this study design could be applied to other vaccine development trials, including those requiring dose escalation?

In trials requiring dose escalation, a randomised design from the beginning is difficult to conceptualise since the phase II dose is unknown at trial start. Moreover, in our opinion, thorough methodological considerations are required as to whether dose escalation is suitable in vaccine development and, if so, which method should be used. Indeed, most dose escalation designs have been developed in the context of therapeutic interventions, and their appropriateness for prophylactic vaccine research needs to be evaluated, given that a) prophylactic candidate vaccines are generally very well tolerated, so it is plausible to observe no major toxicity event in the considered dose range; b) if toxicities are observed, it is not clear whether the classical assumption of increasing toxicity with increasing dose also holds for vaccines; and c) not only toxicity but also immunogenicity may need to be taken into account in order to define the dose range for subsequent development. This should be subject to methodological research before considering integration of dose escalation into phase II HIV vaccine designs.

 

Do you think an effective prophylactic vaccine strategy against HIV will emerge in the near future?

Many promising results have been obtained in preclinical studies of prophylactic HIV vaccine strategies in recent years, and there is an increasing understanding of the immune mechanisms likely to be required for an efficacious vaccine strategy. Proof of concept of an efficacious strategy in humans may thus be achievable in the future. However, even with optimised or adaptive clinical trial designs, the full development of an HIV vaccine strategy is lengthy, because large sample sizes and long follow-up times are required to demonstrate efficacy with regard to an HIV acquisition endpoint in phase IIB/III. Developing a strategy that not only provides proof of concept but also has the desired properties allowing for large-scale roll-out on an operational level is likely to require more time.

 

What’s next for your research?

Our current work focuses on the definition of immunogenicity endpoints in early stage HIV vaccine trials in the absence of validated surrogate immunogenicity markers. Our approach includes modeling the dynamics of the different types of immune responses to HIV vaccines over time in order to better understand their temporal patterns and interrelationships, thus allowing us to better define which immunogenicity markers should be assessed and at what time points in order to get the most information from early stage vaccine trials. Modeling HIV vaccine response could help in performing in silico trials.

 

How PP4 plays a part in protective gut immunity


Inflammatory bowel disease (IBD), encompassing ulcerative colitis and Crohn’s disease, arises as a result of an abnormal response by the immune system to antigens in the gut, including food and trillions of commensal intestinal bacteria (those that under normal circumstances do not harm their host). The pathogenesis of the disease is still incompletely understood, but is thought to involve a complex interaction of genetic, environmental and immunoregulatory factors. A recent study in Cell & Bioscience by Tse-Hua Tan and Ching-Yu Huang from the Immunology Research Center, Taiwan, and colleagues, provides fresh insights into the development of IBD, implicating the role of protein phosphatase 4 (PP4) and its effect on the specialised immune cells, regulatory T (Treg) cells.

Treg cells play a vital role in maintaining a balanced immune system and establishing tolerance to foreign, non-pathogenic antigens by suppressing the immune response of other cells. They can be induced in peripheral tissues or produced from T cells in the thymus; in the latter case PP4 is thought to be essential. The authors set out to more deeply probe the functions of PP4 on Treg cells in vivo.

Treg cell development and function were assessed in the absence of PP4 by generating floxed PP4 mice, in which the gene encoding PP4 was deleted specifically in T cells at the stage of repertoire selection in the thymus. The authors found that deletion of PP4 led to fewer Treg cells in both the thymus and peripheral tissues. PP4-deficient Treg cells also had reduced suppressor function, which was associated with decreased expression of the proteins IL-10, CTLA4, GITR and CD103, implicating these molecules as potential targets of PP4.

Notably, symptoms resembling human Crohn’s disease, namely colitis and inflammation of the small intestine, developed in approximately 60 percent of the T cell-specific, PP4-deficient mice, and correlated with reduced numbers of Treg cells in the gut. This suggests that PP4 is essential for the maintenance of protective gut immunity. However, further analysis revealed that PP4-deficient Treg cells were still capable of suppressing experimental colitis, indicating that this aspect of Treg cell function remains intact. Other elements of immune development influenced by PP4 are therefore likely to be responsible for inducing spontaneous colitis in these mice, supporting the view that IBD is a multifactorial disease.

As the incidence and prevalence of IBD increases worldwide, the need to understand this condition becomes more pressing. These findings suggest an important role for PP4 in regulating the immune system via the modulation of Treg function and may provide a new perspective for future studies on IBD.

 

Probing the aetiology of renal disease: Ming-hui Zhao & Jing Huang on FSGS


Focal Segmental Glomerulosclerosis (FSGS) is a major cause of primary glomerular disease in adults. The resulting glomerular damage can lead to generalised oedema, protein leakage into urine and, in some cases, kidney failure. Its underlying cause remains unknown; however, soluble urokinase receptor (suPAR) has been implicated in its aetiology. Mixed results over the involvement of suPAR led Ming-hui Zhao, Jing Huang and colleagues from Peking University, China, to investigate its significance in a cohort of primary FSGS patients, as published in their recent study in BMC Medicine. Here Zhao and Huang explain how suPAR relates to FSGS and how their results could influence clinical practice.

 

What is Primary Focal Segmental Glomerulosclerosis (FSGS)?

Primary Focal Segmental Glomerulosclerosis (FSGS) is defined as a clinico-pathological syndrome without known aetiology. The ubiquitous clinical feature of the syndrome is proteinuria, which may be nephrotic or non-nephrotic. The ubiquitous pathological feature is focal segmental glomerular consolidation or scarring, meaning that only some glomeruli are affected (focal) and that the affected glomeruli show partial capillary loop involvement (segmental). Several distinctive histopathological patterns can be classified: collapsing FSGS, tip lesion FSGS, cellular FSGS, perihilar FSGS, and FSGS not otherwise specified (NOS).

FSGS is a major pathological type of refractory nephrotic syndrome in both children and adults. The degree of proteinuria is a predictor of the long term clinical outcome. Patients who have proteinuria of more than 10 g of protein per day have very poor long term renal survival, with the majority of patients reaching end stage renal disease (ESRD) within three years. In order to decrease proteinuria, renin-angiotensin-aldosterone system (RAAS) blockers are administered to almost all patients, and glucocorticoids or immunosuppressive therapy are given to those with nephrotic syndrome.

However, patients with primary FSGS remain frustrating to treat, and the aetiology and pathogenesis of the disease have not been well elucidated. The damage and detachment of podocytes from the glomerular basement membrane is regarded as the key event in the initiation and progression of FSGS; searching for pathogenic factors acting on podocytes may therefore make promising contributions to future disease therapies.

 

What is suPAR and how does it relate to FSGS?

Urokinase receptor (uPAR) is a glycosylphosphatidylinositol (GPI)-anchored protein with three domains (DI, DII, and DIII). It is expressed on the membrane of several different cell types, including kidney podocytes, neutrophils, monocytes, macrophages, activated T-lymphocytes, and endothelial cells. It can also be released into the circulation as soluble uPAR (suPAR) after cleavage of the GPI anchor. uPAR is additionally susceptible to cleavage at the linker region between DI and DII, thus both the whole receptor and various segments of it are detectable in circulation and are all referred to as suPAR.

In addition to the regulation of proteolysis, suPAR initiates signal transduction in cooperation with other transmembrane proteins such as integrins, caveolin and G-protein-coupled receptors, which promotes cell proliferation, invasion, motility and survival. Wei and colleagues (Nat Med, 2011, 17:952–960) reported that suPAR could bind to and activate β3 integrin on podocytes and thus cause proteinuria and FSGS in a mouse model, and proposed that suPAR may be a pathogenic circulating permeability factor for FSGS.

Our study found that urinary suPAR levels in patients with primary FSGS were significantly elevated and were associated with disease severity and treatment response. Cellular experiments revealed that urinary suPAR could activate β3 integrin on podocytes and promote wound-healing function in cultured human podocytes (unpublished data), indicating that suPAR in patients with primary FSGS might be pathogenic to podocytes. Taken together, these findings support suPAR as a helpful marker for diagnosis and a potentially causative factor in primary FSGS.

 

In your study you looked at urinary levels of suPAR. Is this better than measuring serum suPAR levels?

In our opinion, measuring urinary suPAR levels may be better than measuring serum suPAR levels, for three possible reasons: 1) 67.7 percent of primary FSGS patients showed urinary suPAR levels above the cut-off value derived from normal healthy donors, whereas this was true for only 54.1 percent of primary FSGS patients for plasma suPAR levels (shown in our previous study). 2) Urinary suPAR levels were adjusted by urinary creatinine, and our results showed no association between urinary suPAR levels and glomerular filtration function, so we think urinary suPAR levels are less affected by glomerular filtration function than serum suPAR. 3) suPAR is freely filtered through the glomerular basement membrane, and podocytes themselves can also express uPAR on their cell membranes, which may also be released into urine. Urinary suPAR levels thus potentially represent the final level of both circulating suPAR and uPAR expressed by podocytes, and may therefore be of greater clinical value.

 

What were your main findings?

We found that urinary suPAR levels in patients with primary FSGS were significantly higher than in patients with minimal change disease, membranous nephropathy or secondary FSGS, and in normal subjects. Urinary suPAR levels in patients with cellular FSGS were significantly higher than in those with tip FSGS and not otherwise specified (NOS) FSGS. Urinary suPAR levels in patients with primary FSGS positively correlated with 24 hour urine protein levels and negatively correlated with plasma albumin levels. During follow up, the urinary suPAR levels of patients with complete remission decreased significantly. We therefore conclude that urinary suPAR was specifically elevated in patients with primary FSGS and was associated with disease severity.

To demonstrate the pathogenic role of urinary suPAR in patients with primary FSGS, we investigated its activation effect on β3 integrin (detected by AP5 staining) in cultured human differentiated podocytes. The AP5 signal was strongly induced along cell membranes when human differentiated podocytes were incubated with the urine of patients with FSGS at presentation, but not with urine from disease or normal controls. More importantly, the signal could be reduced by a blocking antibody specific to uPAR.

 

Why is it important to distinguish FSGS from other kidney diseases?

It is very important to distinguish primary FSGS from other kidney diseases, especially from minimal change disease and secondary FSGS.

The clinical features of FSGS and minimal change disease can be exactly the same, while the therapeutic strategy, treatment response and renal outcomes are quite different. Patients with minimal change disease respond well to corticosteroids and will not reach end stage renal disease (ESRD), while a large proportion of patients with primary FSGS need long term treatment with high dose corticosteroids, even in combination with immunosuppressive drugs, and those with no remission of nephrotic syndrome may progress to ESRD.

Differential diagnosis depends on histopathological features revealed by renal biopsies. However, the required glomeruli sections are not always available in clinical practice. We therefore tried to explore urinary suPAR detection as an aid to differential diagnosis, and its pathogenic action on podocytes.

Another disease, which needs to be differentiated from primary FSGS, is secondary FSGS. For secondary FSGS, treatment should be focused on the primary causes, while for primary FSGS, patients might receive corticosteroids and immunosuppressive drug treatments. The histopathological features could be quite similar between primary and secondary FSGS and the aetiology is not readily found at diagnosis. Consequently other methods are needed for differential diagnosis.

 

How will this work influence clinical practice?

The detection of urinary suPAR levels might provide a helpful method for the diagnosis of primary FSGS and its differential diagnosis from minimal change disease and secondary FSGS, alongside a combination of clinical manifestations and histopathological features discerned from renal biopsy sections. Urinary suPAR levels might also be helpful in predicting disease severity and therapeutic response.

 

What further research needs to be done?

Our study is a retrospective study and the sample size is limited, so a large prospective study is needed to validate the clinical usage and significance of suPAR in patients with primary FSGS. suPAR has three domains and is heavily glycosylated. The pathogenic domain or pathogenic form of glycosylation or phosphorylation of suPAR has not been fully elucidated and needs further study.

 

Translation from the heart of connexin-43


How many times have you dismissed a UFO (unidentified fluor-/chemiluminescent object) on your western blot as an uninteresting, accidental degradation product – focusing only on the band running at the predicted size for your protein of interest?

A research article published in Cell Communication & Signaling highlights the functional value of just such a UFO in the beating heart, detailing the mechanisms of its very deliberate production.

Researchers studying the major gap junction protein connexin-43 have for many years noticed a 20 kDa band (now known as GJA1-20k) on their blots. But what might have been written off as a UFO was recently shown by Smyth and Shaw to be a separate protein of independent function, the product of internal translation acting on connexin-43’s mRNA transcript (Cell Rep, 2013, Nov 14, 5(3): 611–618).

Internal translation of mRNA transcripts is a rare but long-studied phenomenon, with more than 100 known examples. Smyth and Shaw’s elegant study described how cells use internal translation to generate a protein equivalent to a C-terminal fragment of connexin-43, and that this truncated protein has an important function as a chaperone for full-length connexin-43 as it traffics to gap junctions. It is so important, in fact, that loss of connexin-43 internal translation results in the full-length protein becoming stuck in the endoplasmic reticulum.

The Smyth and Shaw study makes the assumption that GJA1-20k is produced by a cap-independent translation mechanism, as the literature often describes internal translation as an exclusively cap-independent process. However, Trond Aasen and colleagues from the Autonomous University of Barcelona, Spain, noticed that dissenting voices were emerging who argued against the ‘cap-independent only’ paradigm for internal translation (for example, in Gene, 2012, Jul 12, 502(2): 75–86).

In a Short Report published in Cell Communication & Signaling, Aasen and colleagues now convincingly demonstrate that GJA1-20k undergoes cap-dependent, and not cap-independent, translation. They further find that GJA1-20k translation is additionally dependent on upstream translation – reminiscent of a previously described bait-and-trap system for ribosomes – and that its abundance is negatively regulated by Mnk1/2 kinase signaling.

As connexin-43 and its function at gap junctions are associated with a number of pathological conditions (especially cardiac arrhythmias, but also skin, neurological and developmental disorders, as well as cancer), these new insights into the regulation of its trafficking chaperone are an important contribution to the study of a number of different diseases.

Amazonian giants: discovery of the novel giant mimivirus Samba


Over the last decade, discoveries of extremely large and complex viruses have challenged our concepts of what viruses are and how they evolved. These giant viruses are often comparable in dimension and genome size to small bacteria, even prompting some to postulate that they should be classed as a new kind of life form. Now, a novel giant virus named Samba (SMBV) has been found within amoebae of the Negro River in the Amazonian basin. Jônatas Santos Abrahão from the Federal University of Minas Gerais, Brazil, Bernard La Scola, from Aix-Marseille University, France, and colleagues, who discovered the virus, conducted a phylogenetic analysis of SMBV, the results of which are presented in their recent study in Virology Journal.

Their analysis showed that SMBV is most closely affiliated with Acanthamoeba polyphaga mimivirus (APMV), and falls within group A of the putative order Megavirales. SMBV has an average particle diameter of 574 nm and a genome containing 938 ORFs (that is, putative protein-coding genes). This contrasts with APMV, which is around 750 nm in diameter and has 911 predicted ORFs. Indeed, SMBV has one of the largest genomes of any group A Mimivirus known to date. The researchers found that, although around 91 percent of SMBV’s ORFs are shared with APMV, and are largely in the same genome locus, many of SMBV’s ORFs are inverted when compared to APMV. SMBV also shares many ORFs that are known only from the Mimiviridae family, including those that putatively encode proteins with roles in protein translation or DNA repair.
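At heart, the ORF-sharing comparison above is a set operation over orthologue-labelled gene catalogues. A minimal sketch of that idea (the ORF names and counts below are invented for illustration, not taken from the actual SMBV or APMV annotations):

```python
def shared_fraction(orfs_a, orfs_b):
    """Fraction of genome A's ORFs with an orthologue in genome B."""
    return len(set(orfs_a) & set(orfs_b)) / len(set(orfs_a))

# Toy orthologue-labelled ORF catalogues (illustrative names only)
smbv = ["helicase", "capsid", "tRNA_ligase", "ankyrin_1", "ankyrin_2"]
apmv = ["helicase", "capsid", "tRNA_ligase", "fibril_protein"]

print(round(shared_fraction(smbv, apmv), 2))  # 3 of 5 shared -> 0.6
```

In the real comparison, orthologue labels come from reciprocal sequence searches rather than shared names, but the core/shared bookkeeping is the same.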


Electron microscopy images of SMBV infected with RNV displaying morphological defects (left to right): defective capsid wrapped around a small particle, lemon-shaped particle, defective spiral capsid. Image source: Campos et al, Virology Journal, 2014,11:95

Further analysis of the SMBV-infected amoebae using electron microscopy revealed a small, parasitic virus infecting SMBV’s viral factories. This virophage, named Rio Negro (RNV), shared high gene identity with Sputnik virus, the virophage associated with a strain of APMV. SMBV particles in amoebae that also contained RNV were atypical in appearance: some presented with defective capsids (protein coats) wrapped around small particles, whilst others were lemon-shaped and had defective spiral capsids. RNV also caused decreased infectivity of SMBV – and, interestingly, of APMV.

This study expands our knowledge of viral distribution, diversity and evolution, and presents the first record of a giant virus in the Amazon rainforest – an environment already known for its striking diversity of flora and fauna, but with a poorly understood microbial ecology. Genome sequencing of SMBV sheds light on the relationships between different giant viruses, and also adds further data to the evolving story of why giant viruses appear to contain genes relating to translation when they rely on their host’s machinery for this process.

Michael Speicher and Ellen Heitzer on tracking cancer with ‘liquid biopsies’


Understanding the molecular nature of cancer is key to administering the most effective treatments. Cancer patients are therefore often subject to invasive tissue biopsies to discern the dominant traits of a tumour. However, this approach does not necessarily capture the whole story, with rapid changes in the cancer genome known to occur as the disease progresses, in response to treatment, and during metastasis, often in a small proportion of cells. Prostate cancer in particular is known to recur in 20-30 percent of men after five years, in spite of initial curative treatment. Furthermore, metastatic prostate cancer often spreads to the bone, making tissue biopsies technically challenging and therefore limiting repeated sampling over the course of the disease. With this in mind, Michael Speicher and Ellen Heitzer from the Medical University of Graz, Austria, and colleagues developed a ‘liquid biopsy’ method for prostate cancer patients, as published in their study in Genome Medicine, which this year received BioMed Central’s 8th Annual Research Award. Here winning author Heitzer, alongside Speicher, discusses what is needed for this approach to become routine clinical practice, and what the future of non-invasive cancer diagnostics entails.

 

How did your interest in genomic alterations and their impact on disease susceptibility come about, particularly with regards to cancer?

Despite the massive technological advances over the past half century, cancer is still a leading cause of death worldwide. Not many people have been spared the loss of a friend or loved one to cancer. Cancer is not just one disease but many, and the more cancer genomes we decipher, the more molecular changes we discover. Most cancer cells harbour numerous genetic changes, and even cells within a single tumour can be completely different. It is quite astonishing that cancer cells are able to become more invasive and deadly as they accumulate more genetic changes, rather than dying because of the resulting chromosomal chaos.

Based on a deeper understanding of tumours at the molecular level, the ‘one size fits all’ approach to cancer treatment has dramatically changed and a new generation of so called ‘targeted therapies’ is available today. Unfortunately, with the implementation of these therapies into clinical practice, new challenges arose. The major problems are tumour heterogeneity and the lack of genetic follow up data. We therefore wanted to develop solutions to improve therapy management for cancer patients.

 

What led to your study in Genome Medicine, and what was the main goal of the research?

To tailor anti-cancer treatments as much as possible, guiding biomarkers are needed. While prognostic markers help to identify individuals who are at high risk of recurrence of their cancer and should therefore receive further adjuvant therapy, predictive biomarkers help to identify therapy targets and subgroups of patients who are most likely to benefit from a given therapy. However, even the best marker is useless if it is not accessible or cannot be repeatedly evaluated.

Therapy decisions are mainly based on biopsies and the treatment is designed to target the dominant clones. Thus, subclones that are also relevant for the treatment, or that already harbour primary resistance mechanisms, might be missed by biopsies. In addition, cancer cells accumulate new genetic changes as a consequence of tumour progression and the selective pressure of cancer therapies. Furthermore, metastases often show different changes than the primary tumour, and in most cases we are not able to monitor these changes with simple biopsies. In general, progression is only noted once it is already clinically obvious after evaluation with imaging techniques and clinical tumour markers, which are not accurate enough.

One possibility to overcome these limitations is the establishment of biomarkers from easily accessible biofluids like blood or plasma – the so-called liquid biopsy. The analysis of tumour-specific changes in circulating tumour cells (CTCs) and cell-free circulating tumour DNA (ctDNA) is beneficial when compared to tissue biopsies, as repeated sampling is easily achievable. Furthermore, ctDNA reflects genetic changes from different tumour locations – including point mutations, methylation patterns, copy number aberrations and structural rearrangements – and can therefore be used as a surrogate marker for the entire tumour genome.

We therefore aimed to develop a sequencing based technique where it is possible to monitor tumour genomes non-invasively from plasma on both a genome-wide and a gene-specific level. The main goal, however, was to establish a method that can deliver relevant information in a very short time and at reasonable costs, such that this method can eventually be implemented in routine clinical practice.

 

What did you find and why was this exciting/interesting?

Tumours release cell-free DNA into the circulation, which is then referred to as circulating tumour DNA (ctDNA). As ctDNA reflects genetic changes from different tumour locations, it is more representative of the mutational heterogeneity of a tumour than a tissue biopsy. We developed a sequencing-based approach called plasma-Seq, with which it is possible to reconstruct tumour genomes non-invasively from plasma on both a genome-wide and a gene-specific level. We were able to establish copy number variations within 24 hours, and with the addition of a targeted approach we can also detect specific driver mutations that might be used as therapy targets or for monitoring purposes.
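For intuition, copy-number detection from shallow plasma sequencing boils down to comparing binned read depths in the patient sample against a normal reference. The sketch below is a simplified stand-in, not the authors’ plasma-Seq pipeline; the bin counts, median normalisation and the ±0.2 log2 thresholds are all illustrative assumptions:

```python
import math
from statistics import median

def log2_ratios(sample_counts, reference_counts):
    """Median-normalised per-bin log2 ratios of plasma vs reference read counts."""
    s_med, r_med = median(sample_counts), median(reference_counts)
    return [math.log2((s / s_med) / (r / r_med))
            for s, r in zip(sample_counts, reference_counts)]

def call_bins(ratios, gain=0.2, loss=-0.2):
    """Crude per-bin gain/loss calls from log2 ratios."""
    return ["gain" if r > gain else "loss" if r < loss else "neutral"
            for r in ratios]

sample = [120, 118, 240, 236, 60, 119]      # reads per genomic bin (toy data)
reference = [115, 120, 118, 117, 119, 116]  # matched bins from normal plasma

print(call_bins(log2_ratios(sample, reference)))
# ['neutral', 'neutral', 'gain', 'gain', 'loss', 'neutral']
```

A production pipeline would additionally correct for GC content and mappability, and segment adjacent bins before calling gains and losses.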

The big advantage of our approach compared to other available monitoring tools is that we employ a benchtop sequencer, the Illumina MiSeq, which is affordable for smaller labs and generates sequence data in a very short time and at reasonable cost. It is most important for patients as well as clinicians to obtain data for therapy decisions in a very short time frame, to enable a rapid response to any relevant genetic changes.

 

What are the clinical implications of your findings?

The most important clinical implication is that we are able to systematically track the genomic evolution of cancer non-invasively in a very cost-effective manner. Knowing early that resistance has developed would allow patients to switch therapies before progression is clinically obvious, and might therefore contribute to improved survival of cancer patients. Furthermore, anti-cancer treatments are increasingly administered according to ‘druggable targets’ rather than the tumour entity. There is no question that continuous monitoring of the evolution of tumour genomes will become indispensable. As tissue biopsies are invasive and some tumours are inaccessible, liquid biopsy represents a minimally invasive method where repeated sampling is easily achievable.

 

What barriers must be overcome to translate your ‘liquid biopsy’ approach to routine clinical practice?

The liquid biopsy is a very promising approach; however, there are some issues that need to be solved before it can actually be implemented in clinical practice. First, the biology of the release of tumour DNA is not yet fully elucidated. Although plenty of studies indicate that ctDNA levels correlate with tumour burden, tumour-specific changes cannot be identified in all tumour patients – not even in all patients with metastasis. Second, the amount of ctDNA in the circulation varies dramatically and can range from less than one percent to more than 90 percent. Third, the mechanism of DNA clearance from plasma is also poorly understood, and it is not known how other factors such as circadian rhythms, inflammation or particular therapies influence release and clearance mechanisms. Finally, the predictive and prognostic value of our approach, and of ctDNA in general, needs to be evaluated in large clinical studies.

It is absolutely critical to establish standard operating procedures at both the pre-analytical and analytical levels. To overcome logistical challenges that might represent hurdles for clinical implementation, large-scale prospective studies are necessary in order to standardise our technique and to assess reproducibility. A further caveat is that predictive markers are lacking for many cancer and/or tumour subtypes, and many mechanisms of progression or acquired resistance are not yet understood. In addition, in scenarios where only low levels of ctDNA are present (e.g. early stage or minimal residual cancer), our approach might not be applicable as its sensitivity is limited.

 

What impact do you think continued advances in next-generation sequencing will have on cancer diagnosis and therapy?

There is no doubt that next-generation sequencing (NGS) techniques have already and will continue to revolutionise cancer management. In terms of non-invasive cancer diagnostics there are basically two NGS-based approaches, one that is targeted and one that is genome-wide.

The targeted approach involves analysing known genetic changes in a primary tumour from a small set of frequently occurring driver mutations (e.g. mutations in KRAS, EGFR, etc), with implications for therapy decisions. A recent study from Bettegowda and colleagues (Sci Transl Med. 2014, Feb 19, 6, 224:224ra24) analysed a large set of cancer patients with different tumour entities and tumour stages. The sensitivity of ctDNA for detection of clinically relevant KRAS gene mutations was 87.2 percent and its specificity was 99.2 percent. However, they used highly sensitive methods including digital PCR and the Safe-SeqS method that was previously established by the same group (Proc Natl Acad Sci U S A. 2011, Jun 7, 108, 23:9530-5). This approach represents a good method to detect tumour specific mutations even at very low levels and allows a clear distinction from the background. Furthermore, they screened for translocations that are highly tumour-specific and can be used to detect tumour-specific changes at very low levels or for the identification of minimal residual disease. In our study we were also able to identify structural rearrangements after targeted enrichment of a chromosomal region that is frequently involved in translocations. Another technique to track down tumour-specific mutations in plasma was developed by Forshew and colleagues (Sci Transl Med. 2012, May 30, 4, 136:136ra68), called tagged-amplicon deep sequencing (TAm-Seq).
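The sensitivity and specificity figures quoted above are simple ratios over a validation cohort. A toy recalculation of how such figures arise (the counts below are illustrative assumptions, not Bettegowda and colleagues’ actual data):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of truly mutant cases correctly detected in plasma."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of wild-type cases correctly called negative."""
    return true_neg / (true_neg + false_pos)

# e.g. 87 of 100 KRAS-mutant tumours detected, 1 false call among 125 wild-type
print(round(sensitivity(87, 13), 3))   # 0.87
print(round(specificity(124, 1), 3))   # 0.992
```

The demanding part of such assays is not this arithmetic but achieving the per-molecule error rates (via digital PCR or error-corrected sequencing such as Safe-SeqS) that make low-frequency mutant reads distinguishable from background.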

The genome-wide approach involves identifying de novo tumour-derived chromosomal alterations through massively parallel direct sequencing of DNA from the plasma, akin to our plasma-Seq method. This means sequencing the primary tumour is not necessary. Analysis of the ctDNA could deliver sufficient information for therapy management. Furthermore, such approaches are applicable to all patients as they do not rely on recurrent genetic changes. Dennis Lo and colleagues in Hong Kong, China, were among the first that established genome-wide profiles from plasma and further developed this technique by a combined assessment of hypomethylation and cancer-associated copy number aberrations. Leary and colleagues (Sci Transl Med. 2010, Feb 24, 2, 20:20ra14) also developed a method called PARE (personalised analysis of rearranged ends), to identify translocations in solid tumours and applied this approach in plasma DNA samples, where they identified several chromosomal copy number changes and rearrangements, including amplification of cancer driver genes such as ERBB2 and CDK6. Murtaza and colleagues (Nature. 2013, May 2, 497, 7447:108-12) performed exome sequencing of plasma DNA samples and followed multiple courses of treatment. Quantification of allele fractions in plasma identified an increased representation of mutant alleles in association with the emergence of therapy resistance.

There are therefore numerous approaches to analyse tumours non-invasively, but it is not yet clear which method will emerge as the best. In contrast to our method, with which clinically relevant information can be obtained quickly and cost-effectively – as demonstrated in our recent PLOS Genetics paper (PLoS Genet. 2014, Mar 27, 10, 3:e1004271) – most other techniques are still too expensive and time consuming. However, sequencing costs will drop further and this field of technology is continuously evolving. It is just a matter of time until technical advances and cost reductions allow the implementation of genome-wide approaches with high resolution as a routine tool in laboratory medicine. Meanwhile, clinical standards are needed in order to compare and validate methods, also taking into account different diagnostic centres and settings.

 

Why did you choose to publish your findings in an Open Access journal?

The main reason for publishing in an Open Access (OA) journal was so that any researcher can read and build on our findings. As the research findings are publicly available, an individual researcher may also gain more visibility, recognition, readership, and citations than in traditional journals. Although impact factor is still one of the principal metrics of an article and a measure of the reputation of a researcher, in our opinion it does not necessarily reflect the value of a scientific work. It is rather the readership that judges the relevance of an article, which can be nicely reflected, for example, by an Altmetric score that provides information on the usage and dissemination of a published article. It should however be noted that there are many OA journals at the top of their disciplinary categories, and several studies indicate that there is no dramatic difference in citations compared to traditional journals.

 

What’s next for your research?

We are currently focusing on the three big Cs: prostate, colorectal, and breast cancer. Regarding colorectal cancer we recently published a study in PLOS Genetics, in which we used the plasma-Seq approach to assess the genetic evolution of tumours under anti-EGFR therapy. We observed that specific copy number changes of genes, such as KRAS, MET, or ERBB2, can be acquired under therapy and determine responsiveness to therapy. In the near future we aim to validate our approach in controlled, prospective clinical studies, and in doing so take one step closer to implementation of liquid biopsy in routine clinical practice.

We are also trying to better understand tumour progression and resistance mechanisms. Whilst establishing genome-wide copy number changes, we have frequently observed numerous focal amplifications that might include genes that are involved in tumour progression. We therefore may be able to identify new treatable targets or driver genes.

In addition, we are working on more sensitive methods to be able to monitor tumour evolution in early stages of cancer and in those patients in which the amount of tumour DNA is currently below the detection limit of our established assay.

 

Does the battle of the sexes start in the oviduct?


For many years, the gender of mammalian offspring was regarded as a matter of chance, resulting in an equal ratio of males and females in each generation. But growing evidence suggests that sex ratios at birth can be controlled. Whether this is an adaptive response to changing environmental or parental conditions is still contentious, especially as very little is known about the biological mechanisms behind such sex ratio bias. Several hypothetical mechanisms have been proposed in non-human mammals, some of which implicate the oviduct of the female because it plays such a crucial role in reproductive events before fertilisation. Might the female be able to distinguish between the presence of X- and Y-chromosome-bearing spermatozoa? Alireza Fazeli from the University of Sheffield, UK, and colleagues, explored this question further by investigating the sex-specific sperm recognition system in the oviducts of pigs, as published in a recent study in BMC Genomics.


Cluster heat map analysis of the transcriptional profiles obtained from oviductal samples inseminated with X-spermatozoa and Y-spermatozoa. Image source: Edited version from Alminana et al, BMC Genomics, 2014, 15:293

They used microarray analysis to investigate altered transcript expression in oviduct tissue samples collected from four sows, each inseminated with X-spermatozoa on one side of the reproductive tract and Y-spermatozoa on the other. The results showed that the two sperm types elicited different transcriptome responses within the oviduct. More specifically, around two percent of transcripts were consistently altered in the oviduct in the presence of Y-spermatozoa compared to the presence of X-spermatozoa. Of these, 54.1 percent were down-regulated (‘switched off’) and 45.9 percent were up-regulated (‘switched on’) when Y-spermatozoa were present in the oviduct. Many of these genes were related to functions of the immune, digestive and endocrine systems, as well as signal transduction.
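The up/down tally described above amounts to splitting the differentially expressed transcripts by the sign of their fold change. A minimal sketch of that step; the transcript names and log2 fold-change values are invented for illustration, not taken from the study:

```python
def classify(fold_changes, threshold=0.0):
    """Split transcripts into up- and down-regulated by log2 fold change."""
    up = [t for t, fc in fold_changes.items() if fc > threshold]
    down = [t for t, fc in fold_changes.items() if fc < -threshold]
    return up, down

# Toy log2 fold changes (Y- vs X-spermatozoa oviduct samples; invented values)
altered = {"TLR2": 1.4, "CCK": -0.8, "SPP1": 0.6, "MUC4": -1.1}

up, down = classify(altered)
print(len(up), len(down))  # 2 2
```

In practice a non-zero threshold and a multiple-testing-corrected p-value would gate which transcripts count as consistently altered before this split is made.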

Fazeli and colleagues provide the first evidence of sperm cell type modulating the activation of specific signaling pathways in the female, suggesting that the oviduct may be able to differentiate between the two. If so, this sperm recognition system may contribute towards gender selection in the offspring. This remains to be determined; however, it is clear that the results of further research into gender bias in mammalian sex allocation could have profound implications for evolutionary biology.

 


Why the Pantoea pathogen persists in a multitude of environments


Bacteria were the first forms of life on Earth and are the most prevalent biomass on the planet; their success is perhaps unsurprising given their rapid evolutionary rates and remarkable adaptive ability. One such bacterium, Pantoea ananatis, has been the focus of a recent study published in BMC Genomics, by Pieter De Maayer from the Centre for Microbial Ecology and Genomics at the University of Pretoria, South Africa, and colleagues. This ubiquitous bacterium has been implicated in a number of plant diseases, including those of economically important crops such as pineapple, maize and onion. It is found across a broad range of environments, from soils and rivers to refrigerated beef and aviation fuel tanks, and its hosts include not only plants but also insects and even humans.


Onion bulbs cut open to reveal damage caused by Pantoea ananatis bacterial blight. Image source: Howard F Schwartz, Colorado State University, USA (Bugwood.org).

De Maayer and colleagues analysed the pan-genome of eight individual strains of P. ananatis; the pan-genome refers to the entire gene pool of all strains of a species, including genes not shared by all strains. It may be characterised as open or closed, depending on the relative presence of core (present in all strains) versus accessory (unique to or absent from particular strains) elements in the genome. They found that 69.65 percent of coding DNA sequences (CDSs) were core to all eight genomes and a sizeable 30.36 percent were accessory. Extrapolating from this, they predicted that, with the addition of each sequenced strain, the core genome would change little but the accessory genome would increase by approximately 106 unique CDSs. Such an open pan-genome has similarly been found in other bacterial species that occupy a wide range of environments, have a diverse range of lifestyles and/or possess efficient means of horizontal transfer.
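The core/accessory split described above is computed from a gene-cluster presence/absence table across strains: core clusters occur in every strain, accessory clusters in only some. A minimal sketch (the strain and cluster names are invented for illustration):

```python
def partition_pangenome(presence):
    """presence: dict mapping gene cluster -> set of strains carrying it."""
    n_strains = len(set().union(*presence.values()))
    core = {c for c, strains in presence.items() if len(strains) == n_strains}
    accessory = set(presence) - core
    return core, accessory

# Toy presence/absence table for three strains (illustrative names only)
clusters = {
    "dnaA": {"s1", "s2", "s3"},
    "recA": {"s1", "s2", "s3"},
    "phage_int": {"s2"},
    "t3ss": {"s1", "s3"},
}

core, accessory = partition_pangenome(clusters)
print(len(core), len(accessory))  # 2 2
```

Re-running this partition as strains are added one by one is what underlies the open-vs-closed extrapolation: in an open pan-genome the accessory set keeps growing with each new genome.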

The nature of the open pan-genome of P. ananatis was investigated further by classifying the CDSs on the basis of orthology. De Maayer and colleagues found that the accessory part of the genome mainly encodes proteins of unknown function, and that 41.4 percent of strain-unique CDSs are derived from prophages (that is, viral genomes integrated into the bacterial genome). The translated protein products of the pan-genome CDSs were then compared against a database to identify those that may be involved in host-microbe interactions. The shared orthology with other microorganisms revealed a large number of proteins involved in the colonisation of distinct hosts, including animals, as well as in interactions with insect hosts, plant-microbe interactions and plant and animal pathogenesis.

During their evolutionary arms race, many bacteria have overcome host defences or colonised new hosts by accessorising their genomes with DNA from bacteria outside their species. These results suggest that horizontal transfer is likewise significant in the diversification of P. ananatis and may indeed have helped in its cross-kingdom leap. As reports increase of P. ananatis-caused diseases in previously unrecorded environments and hosts, and given its potential to infect humans, such insights into the evolution and ecological success of this species are crucial.

 

Moshe Oren on taking transcriptomics to the next level with 4sUDRB-seq


Transcriptomics is generally associated with efforts to probe gene expression levels within total mRNA samples, an area of research that has yielded significant insights into processes such as carcinogenesis and cellular differentiation, especially following advances in high-throughput technology. However, there is more to transcriptomics than analysing the end product of transcription alone; the process itself also raises numerous avenues for exploration. The question therefore arises of how this complex process can be captured on a genome-wide scale. In a Method article in Genome Biology, Moshe Oren, Gilad Fuchs, Yoav Voichek and colleagues from the Weizmann Institute of Science, Israel, present a novel method for measuring genome-wide transcriptional elongation rates termed 4sUDRB-seq. Here Oren, Fuchs and Voichek discuss the challenges they faced during its development and the surprising results this method revealed.

 

What led you to develop the 4sUDRB-seq method?

We became interested in transcription elongation mainly through our interest in chromatin biology. More specifically, we are interested in histone post-translational modifications (PTMs) localised to the transcribed regions of genes. We wanted to employ a suitable method to explore the role of those histone PTMs in transcription elongation at high resolution (minute scale) and across the genome. We were quite surprised to discover that, at the time, no method to measure genome-wide transcription elongation rates existed.

 

How does this fit in with your earlier work?

Our lab didn’t really focus directly on transcription elongation rates in the past. However, we are interested in the role of chromatin in regulating gene expression. More specifically, we previously reported that histone H2B monoubiquitylation (H2Bub1) is needed for the induction of relatively long genes during stem cell differentiation (Molecular Cell, 2012,  46: 662–673). Based on this observation as well as on earlier studies in yeast and in cell-free transcription systems, we speculated that H2Bub1 is needed specifically for optimal transcriptional elongation during differentiation. We also recently identified the SWI/SNF remodeling complex as a H2Bub1 interactor, necessary for the transcription of a subset of genes, presumably through interaction with H2Bub1 within the transcribed regions (Cell Reports, 2013, Aug 15, 4, 3:601-8). Furthermore, we found that H2Bub1 can negatively affect transcription elongation of specific genes by preventing the binding of the elongation factor TFIIS (Molecular Cell, 2011, May 20, 42, 4:477-88). Now, by using the 4sUDRB-seq method we can assess more specifically the impact of each factor on genome-wide transcriptional elongation and initiation rates.

 

What were the biggest challenges you encountered while developing the 4sUDRB-seq method?

Since we wanted to measure the elongation rates of as many genes as possible, we decided to perform the measurements four and eight minutes after removal of the reversible transcription inhibitor DRB. Working with these very short time points and obtaining reproducible results was quite a challenge – we actually calculated the time it takes to open the incubator door and take out the plates for harvesting. A further challenge was computational: we needed an algorithm that could very accurately identify, for each gene, the front edge of the advancing transcription wave. However, we were not sufficiently pleased with the algorithms commonly used for that purpose, and we therefore decided to develop a completely new method. After extensive optimisation and comparison of different approaches, we concluded that a logic based on estimating the local background of each gene gave the best results.
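Once the wavefront has been located at the two time points, the rate calculation itself is simple: distance travelled over the interval. A toy version of that final step (the gene positions below are invented; the wavefront detection via local background estimation described above is not reproduced here):

```python
def elongation_rate(front_t1_bp, front_t2_bp, t1_min, t2_min):
    """Elongation rate in kb/min from wavefront positions (bp) at two times."""
    return (front_t2_bp - front_t1_bp) / (t2_min - t1_min) / 1000.0

# e.g. wavefront at 14 kb into the gene after 4 min, 28 kb after 8 min
print(elongation_rate(14_000, 28_000, 4, 8))  # 3.5 (kb/min)
```

Using the difference between two post-release time points, rather than a single one, cancels out any fixed lag between DRB washout and the resumption of elongation.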

 

Were you surprised by any of the results you obtained on transcription elongation rates and initiation frequencies?

We were a bit surprised by the fact that methylation of histone H3 on lysine 79 (H3K79me2) was significantly correlated with the transcription elongation rate. Methylations are usually quite stable modifications. It therefore seems more probable that the local levels of H3K79me2 are not a direct consequence of the dynamic elongation by RNA polymerase II, but rather H3K79me2 may have a more regulatory impact on elongation rate. It will be interesting to address this notion by depleting Dot1 (the enzyme that methylates H3K79) in vivo or testing the elongation rate in vitro in a H3K79-methylated nucleosome array.

Another pleasant surprise was that the method was found to be highly reproducible. Since we were dealing with relatively short measurements (four and eight minutes) we expected that in each biological repeat the transcription wave would not reach exactly the same point within each gene. However, we were happy to see that the variation between biological repeats was lower than we had anticipated.

 

Was there anything you wanted to incorporate into the 4sUDRB-seq method but could not due to technical, resource or time limitations?

Since we sequenced all tagged newly transcribed RNA (without PolyA selection), part of our reads originated from rRNA. We would have liked to add a rRNA depletion step in order to reduce rRNA contamination. However, due to time constraints and concerns that we might end up with too little RNA for the final RNA-seq analysis, we have not done it so far.

An exciting possibility that may not be presently feasible is to use this method in order to measure transcription elongation rates and initiation frequencies at single cell resolution. It will be very interesting to figure out how similar or different elongation rates and initiation frequencies are in different cells within a population.

 

What’s next for your research using the 4sUDRB-seq method?

We are now contemplating various directions in which the method can be implemented. One major direction is to figure out whether, and to what extent, differential gene expression can be regulated at the level of transcription elongation rates. Specifically, we would like to identify biological conditions where transcription elongation rates might be altered in a manner that affects the biological outcome. One such example is differentiation of embryonic stem cells. Since we and others have reported that the chromatin of embryonic stem cells is significantly altered during differentiation (we observed a significant increase in H2Bub1 levels), we would like to test whether such changes affect the transcriptional elongation rates in the differentiated cell in a manner that makes it more suitable for its new functions.

An additional direction is to use this method in order to test if changes in elongation rates contribute to specific pathologies. For example, aggressive acute leukaemia is known to be driven by translocations between the MLL gene and various components of the super elongation complex (SEC). It will be interesting to use the 4sUDRB-seq method in order to test whether the fusion between the MLL and SEC drives leukaemia through enhancing or decreasing the transcription elongation rates of specific genes.

 

A similar method was published by Artur Veloso, Mats Ljungman and colleagues in Genome Research. What are the main differences between the methods?

The two methods are indeed similar in concept, although they vary in technical detail (e.g. the use of bromouridine (BrU) versus 4-thiouridine (4sU) for labelling nascent RNA). Our calculated elongation rates are generally faster than those deduced by Veloso and colleagues. This might be due to several reasons. One possibility is that transcription elongation is faster in the HeLa cells we employed than in the cell lines examined by Ljungman’s group. We note, however, that a previous study that also used DRB, but without a nucleotide analogue, estimated the average transcription elongation rate to be ~3.8 kb/min in both Tet-21 and HEK293 cells (Nat Struct Mol Biol. 2009 Nov;16(11):1128-33). Incidentally, this is very similar to the value deduced in our study (3.6 kb/min). We also note that Ljungman’s group measured relatively similar transcription elongation rates in five different cell lines.

An additional possibility is that different regions of the same gene are transcribed at different rates. This was actually suggested by Danko and colleagues (Mol Cell. 2013 Apr 25;50(2):212-22). Since we measured transcription elongation four and eight minutes after DRB removal while Veloso’s measurements were performed ten and 20 minutes after DRB removal, the two studies actually assessed elongation rates in different regions of the same genes. However, initial analysis of recent measurements that we performed eight and 12 minutes after DRB removal suggests that the elongation rates within these more downstream regions are quite similar to those measured at four and eight minutes after DRB removal, suggesting that elongation rates are relatively constant throughout genes.
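The arithmetic behind these rate estimates is straightforward: the rate is the distance the wave front advances between two time points, divided by the elapsed time. The sketch below uses illustrative wave-front positions, not the study's raw data, chosen to reproduce the ~3.6 kb/min average mentioned above.

```python
def elongation_rate(front_t1_kb, front_t2_kb, t1_min, t2_min):
    """Elongation rate (kb/min) from wave-front positions at two time points.

    Using the difference between two time points cancels out the lag
    between inhibitor removal and the resumption of productive elongation.
    """
    return (front_t2_kb - front_t1_kb) / (t2_min - t1_min)

# Illustrative values only: a wave front at 14.4 kb after 4 min and
# 28.8 kb after 8 min gives 3.6 kb/min, the study's reported average.
rate = elongation_rate(14.4, 28.8, 4, 8)
```

Measuring the same gene over a later window (e.g. 8 and 12 minutes) and comparing the two rates is exactly the check described above for whether elongation is constant along the gene.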

Lastly, as noted above, Veloso used BrU in order to tag nascent RNA, while we used 4sU. It is possible that different nucleotide analogues affect the RNA polymerisation rate differently.

In addition to the possible impact of the technical differences between the methods, we believe that some of the apparent incongruence between our conclusions and those of Veloso and colleagues may stem from differences in the algorithms employed to determine the exact position of the elongation wave front. Of note, we measured elongation rates for several genes by qRT-PCR analysis, and the values obtained were in good agreement with those calculated by us from the 4sUDRB-seq analysis.

In addition, we also provide a method to calculate transcription initiation frequencies. We believe that the ability to determine transcription elongation rates and initiation frequencies in a single experiment is advantageous for understanding more precisely how specific transcription factors regulate gene expression. So far, when researchers deplete a specific transcription factor and perform RNA-seq, they cannot discriminate at which stage of the transcription process that factor has a role. Hence, combining 4sUDRB-seq with depletion of a specific transcription factor can provide a more comprehensive understanding of its role. Lastly, we believe that our method to calculate transcription initiation frequencies can also be applied successfully to data generated by Veloso and colleagues.

 

Robert Vassar and Lokesh Kukreja on mitochondrial mutants in Alzheimer’s


Mitochondrial dysfunction is associated with several neurodegenerative conditions, from those with known causative mitochondrial mutations such as Kearns–Sayre syndrome and Leber’s Hereditary Optic Neuropathy, to those where the association is less clear cut such as Parkinson’s disease and Huntington’s disease. Alzheimer’s disease (AD) falls into the latter category, where evidence suggests mitochondrial involvement though the mechanisms are as yet unclear. In a Molecular Neurodegeneration study, Robert Vassar and Lokesh Kukreja from Northwestern University, USA, and colleagues reveal how increased mutations in mitochondrial DNA, as occur with aging, promote the development of the physiological hallmarks of AD in a mouse model of the disease. Here Vassar and Kukreja explain what led them down this road of investigation and what insights they gained into AD neuropathology.

 

What led to your interest in Alzheimer’s disease, and in particular the possible role mitochondrial dysfunction may play in its pathogenesis?

Alzheimer’s disease (AD) is an illness which primarily affects the elderly. We were particularly interested in understanding how age could contribute to the onset of the disease. For the past twenty years, our knowledge of the fundamental signaling pathways that regulate aging has significantly increased. Mitochondrial function is among the main regulators of aging, but the mechanism by which dysfunction of mitochondria is implicated in neurodegeneration in AD is unclear. Previous studies show strong evidence of mitochondrial dysfunction in AD. For example, Aβ-mediated toxicity can cause morphological, chemical and genetic changes in mitochondria. In our study, we sought to determine how interaction of Aβ with age-dependent mitochondrial dysfunction in vivo can contribute to the pathogenesis of AD.

 

Your study models mitochondrial dysfunction in vivo using mice carrying a knockin mutation that inactivates the function of mitochondrial DNA polymerase-γ. What were the reasons behind selecting this model?

Compared to age-matched controls, AD patients have significantly higher levels of mitochondrial DNA mutations in large vulnerable neurons of the hippocampus and neocortex. By utilising PolgA knockin mutant mice, which lack proofreading activity by DNA polymerase-γ, we were able to model the accumulation of mitochondrial DNA (mtDNA) mutations. Moreover, mutations in the PolgA gene in humans are known to cause various central nervous system disorders including cognitive decline, which substantiates the use of PolgA D257A mice for studying neurodegenerative disease. Aβ neurotoxicity appears to be a pathological response of the aging brain. As mtDNA mutations accumulate with age, PolgA D257A mice are useful in testing the hypothesis that age-related mitochondrial dysfunction exacerbates amyloid pathology and induces neurodegeneration.

 

How does your approach of using PolgA D257A mice differ from previous methods aimed at identifying the neuropathology of Alzheimer’s disease?

Our intent was to cross the PolgA D257A mice with a well-established transgenic AD mouse model carrying the APP familial London mutation (APP/Ld). APP/Ld mice do not exhibit neuron loss but develop amyloid plaques at about one year of age. Since age is the greatest risk factor in AD and PolgA D257A mice exhibit a premature aging phenotype, we investigated whether PolgA D257A; APP/Ld bigenic mice may model the interaction between mitochondrial dysfunction associated with aging and Aβ toxicity in the onset and progression of AD.

There are several benefits to using the PolgA D257A mice in our study. At the cellular level, the PolgA D257A mice recapitulate bioenergetic defects that develop similarly in neurons in AD. Genes that encode for electron transport chain complexes are downregulated, resulting in decreased oxygen consumption and ATP production. Even though age-related mitochondrial dysfunction has long been postulated to be linked to the production of reactive oxygen species, one shortcoming of the PolgA D257A mice is that they display few positive markers of oxidative stress. However, accumulation of mtDNA mutations is associated with the activation of apoptosis in PolgA D257A brains, as measured by TUNEL and cleaved caspase-3, which is also observable in AD brains. Another limitation is that mtDNA mutations begin accumulating during development in PolgA D257A mice, so mutations arise earlier and at higher frequency than in naturally aging wild type mice. Nonetheless, these mtDNA mutations, specifically deletions, drive the aging phenotype of PolgA D257A mice.

 

What were the main results of your study? What findings most excited/surprised you?

A major finding in our study was that both Aβ42 levels and amyloid plaque load were increased in the brains of D257A; APP/Ld bigenic mice compared to APP/Ld monogenic mice, when normalised to transgenic APP/Ld protein levels. The presence of the D257A mutation appears to exacerbate the cerebral accumulation of Aβ42 per given amount of APP. Initially, we hypothesised that mitochondrial dysfunction might affect amyloidogenesis, but it appears instead to weaken the Aβ clearance pathway, leading to higher cerebral amyloid levels. Specifically, the PolgA D257A mutation inhibits the increase of insulin degrading enzyme (IDE), a major Aβ degrading enzyme in the brain, in the presence of Aβ. Thus, reduced IDE-mediated degradation of Aβ could be the underlying mechanism for cerebral amyloid accumulation in the bigenic mice. In our study, the bigenic mice also exhibited significant brain atrophy along with positive markers for neurodegeneration. The neurodegenerative phenotype appears to have resulted from synergism between Aβ neurotoxicity and mitochondrial dysfunction. Even though we did not observe frank neuron loss in D257A; APP/Ld mice by cell counting, we suspect that neurons were in the process of dying, given the brain atrophy, vacuolated neurons, activated caspase-3, and elevated p25 levels.

 

What’s next for your research?

In our future studies, we want to investigate whether Aβ might further enhance mitochondrial dysfunction related to the PolgA D257A mutation in D257A; APP/Ld bigenic mice. We will perform a rigorous analysis of mitochondrial function over the lifetime of the animal, including sequencing of mtDNA for detection of mutations and/or deletions, examination of the expression and activity of key mitochondrial regulatory enzymes such as pyruvate dehydrogenase and cytochrome oxidase, and measurement of respiratory efficiency in isolated mitochondria. Future experiments will also be performed to clarify the mechanism of decreased APP/Ld levels in D257A; APP/Ld bigenic brains. We may examine transcriptional changes in the APP transgene by directly analysing Thy-1-expressing cells to measure mRNA levels using a laser capture microdissection technique. We will also investigate evidence of the caspase-dependent fragment of APP generated during apoptosis by immunoblot and immunohistochemistry analyses. APP/Ld in D257A; APP/Ld mice might have undergone cleavage by increased levels of activated caspase-3, thus lowering total APP/Ld levels.

Interestingly, our study is the first to show in vivo that mitochondrial dysfunction, by the D257A mutation, can prevent the Aβ-induced IDE increase. We will inquire further about how the D257A mutation can affect enzymes in metabolic pathways, like IDE. Lastly we want to understand the molecular mechanism for brain atrophy in the bigenic mice. It may stem from the loss of white and/or grey matter. Interestingly, oligomeric forms of Aβ are widely thought to underlie synaptic dysfunction and neurodegeneration in AD. It will be important to conduct future studies to address the potential implication of the D257A mutation on Aβ oligomer level, form, and neurotoxicity.

 

How do you think the findings of this study will aid the design of new therapies for Alzheimer’s disease?

Our results support a role for age-dependent mitochondrial dysfunction in AD pathogenesis via decreased clearance of Aβ. Evidence in the literature suggests that mitochondrial function is perturbed early in the course of AD and could be a critical disease-driving process. We therefore believe that future therapies directed at mitochondria and downstream bioenergetic targets might prove efficacious in reducing amyloid accumulation and neurodegeneration.

 

Hand on heart: microRNA regulation of Hand1 for cardiomyocyte differentiation


Damage to cardiac muscle cells, which the heart has only a limited ability to repair or replace, can ultimately lead to heart failure. Stem cell therapy is emerging as a potential regenerative treatment for heart failure, based on the essentially limitless potential of human embryonic stem cells to become any kind of cell in the human body. However, so far there has been limited success in obtaining the desired types of cardiac muscle cell. For example, in vitro experimentation has resulted in human embryonic stem cells differentiating into mixed populations with heterogeneous properties. An understanding of the mechanisms by which stem cells develop into atrial, ventricular and other specialised heart muscle cells is therefore crucial. In a recent study in Stem Cell Research and Therapy, Harold Bernstein, now at Merck Sharp & Dohme Corporation, USA, and colleagues from the University of California, San Francisco, USA, uncover a unique pathway by which microRNA (miRNA) regulates the specialised differentiation of cardiac muscle cells.

miRNAs are short non-coding RNAs that regulate gene expression by base pairing imperfectly with messenger RNAs (mRNAs), which consequently inhibits their translation. miRNAs play a significant role in directing the differentiation of stem cells and are already recognised as important regulators of cardiac development and function. Bernstein and colleagues identified a subset of miRNAs (miR-363, -367, -181a, -181c) that showed differential expression levels during the development of cardiac muscle cells from human embryonic stem cells. These four miRNAs were found to target the heart and neural crest derivative (Hand) genes, Hand1 and Hand2. Hand1 and Hand2 are transcription factors known to play an essential role in determining left versus right ventricular development of the heart. In vivo regional expression, in silico predictions and experimental validation demonstrated that miR-363 is a regulator of Hand1 translation. Overexpression of miR-363 was shown to suppress Hand1 mRNA and protein levels, confirming that miR-363 negatively regulates Hand1 expression.
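The in silico target predictions mentioned above typically rest on "seed" matching: a candidate site in a 3' UTR is the reverse complement of miRNA nucleotides 2-8. A minimal sketch of that idea follows; the sequences in the example are invented for illustration, not the actual miR-363 or Hand1 sequences, and real prediction tools add conservation and context scoring on top.

```python
def comp(base):
    """Watson-Crick complement of a single RNA base."""
    return {"A": "U", "U": "A", "G": "C", "C": "G"}[base]

def seed_match_sites(mirna, utr):
    """Find canonical 7mer seed-match sites for an miRNA in a 3' UTR.

    Both sequences are RNA, written 5'->3'. The 'seed' is miRNA
    nucleotides 2-8; a canonical site is the UTR subsequence that is
    the reverse complement of the seed. Returns 0-based start indices.
    """
    seed = mirna[1:8]                                 # positions 2-8
    site = "".join(comp(b) for b in reversed(seed))   # reverse complement
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]
```

Imperfect pairing outside the seed is what makes one miRNA able to repress many different mRNAs, which is why experimental validation (as performed here for miR-363 and Hand1) remains essential.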

These findings unearth one aspect of the complex mechanisms by which heart development is regulated. Bernstein and colleagues are the first to report on the regulation of Hand1 and Hand2 by miRNAs. Their results also demonstrate that miR-363, by targeting Hand1, is significant in the regulation of cardiac muscle cell differentiation. Suppression of miR-363 could provide a novel strategy for generating functional cells of the left ventricle – and may provide a crucial step forward in tailoring stem cell therapy for the heart.

 

John Laterra on targeting lipid metabolism in malignant brain tumours


When cancer takes hold, the cellular metabolism of malignant tissues reaches new heights as larger amounts of energy are needed to generate lipids for membrane biosynthesis and tumour signal transduction. It is therefore hoped that research into lipid metabolism pathways in cancer may yield specific therapeutic targets. For the most part, cancer cells rely on de novo fatty acid synthesis, making enzymes in this pathway particularly promising targets. In a study in BMC Cancer, John Laterra from the Kennedy Krieger Institute, USA, and colleagues investigate the role of a key enzyme in the activation step of fatty acid synthesis, namely the Acyl-CoA synthetase ACSVL3. Probing its role in the malignant brain tumour, glioblastoma multiforme (GBM), they specifically look to GBM stem cells that are thought to be especially important for tumour progression. Here Laterra discusses the growing interest in cancer metabolism and the potential of ACSVL3 as a therapeutic target.

 

What led to your interest in cancer stem cell metabolism, and particularly your investigation of lipid metabolism pathways?

The Laterra lab has an interest in brain cancer stem-like cells, particularly those associated with glioblastoma because they are thought to contribute disproportionately to tumour growth, recurrence, and resistance to treatment (i.e. tumour propagating cells). The lab of Paul Watkins (also at the Kennedy Krieger Institute, USA) had been investigating enzymes of lipid metabolism in inherited metabolic diseases and found that one enzyme – ACSVL3 – was abundant in several human malignancies. It initially seemed logical to assess ACSVL3 levels in glioblastoma cells given the fundamental importance of lipids in cell signaling pathways known to be aberrantly upregulated in glioblastoma and in other cancers. Our initial collaborative study (Cancer Res, 2009, 69:9175-9182) found that inhibiting endogenous ACSVL3 expression in glioblastoma cells inhibited their tumourigenicity. This finding led to the evaluation of ACSVL3 in glioblastoma tumour-propagating stem-like cells that, based on the cancer stem cell hypothesis, determine tumourigenicity.

 

Your study looks specifically at ACSVL3. What was previously known about this enzyme and its role in cancer and tumourigenesis?

Surprisingly little is known about ACSVL3. Little or no ACSVL3 is found in glial cells of adult brain, yet the enzyme is abundant in glioblastoma. Knocking down ACSVL3 by RNA interference decreased the malignant growth properties of glioblastoma cells, and decreased their tumourigenicity in mice. Reducing the ACSVL3 level in glioblastoma cells decreased signaling through oncogenic receptor tyrosine kinases known to drive glioblastoma malignancy (Cancer Res, 2009, 69:9175-9182). This showed for the first time that ACSVL3 supports glioblastoma malignancy and further suggested a role for this enzyme in regulating PI3-kinase signalling and Akt activation by oncogenic receptor tyrosine kinases.

 

What were your key findings? Were you surprised by any of them?

One key finding was that ACSVL3 expression was significantly higher in glioblastoma stem-like cells than in the general population of glioblastoma cells, and that expression of ACSVL3 coincided with stem cell markers such as CD133, ALDH, Sox-2, Nestin, and Musashi-1. Another key finding was that forcing the differentiation of stem-like cells resulted in decreased ACSVL3 expression and conversely, knockdown of ACSVL3 by RNA interference induced differentiation of stem-like cells. As with glioblastoma multiforme (GBM) cells, lowering of ACSVL3 expression in stem-like cells decreased growth rate and receptor tyrosine kinase-mediated signalling. The less surprising conclusion from these new results is that ACSVL3 affects cell proliferation and receptor tyrosine kinase signalling similarly in glioblastoma stem-like cells and bulk-population cells. The more surprising result is that ACSVL3 plays a role in cell phenotype regulation (i.e. cell stemness) that can explain why inhibiting ACSVL3 expression inhibits tumourigenicity, a property that correlates with tumour cell stemness.

 

Your study shows that knock down of ACSVL3 prevents glioblastoma propagation. What further research is needed to determine how your findings could be translated to the clinic?

Our studies used RNA interference to inhibit ACSVL3 expression since specific inhibitors of ACSVL3 enzymatic activity don’t currently exist. While it is likely that the effects of knockdown of ACSVL3 expression are attributed to diminished enzymatic activity, we can’t absolutely rule out the possibility that enzymatic-independent effects are responsible for the observed anti-tumour effects. Resolving this question and developing specific ACSVL3 enzyme inhibitors are critical to the clinical translation of our findings. We are currently developing high-throughput strategies for screening drugs and chemical libraries to identify candidate small molecule ACSVL3 inhibitors.

 

Do you think that ACSVL3 may play a similar role in cancer stem cells in other tissue types?

We have not yet looked at stem cells from other types of cancer. However, we do know that ACSVL3 is abundant in the majority of lung and prostate cancers that we have examined, and we suspect that it will be overexpressed in stem cells derived from these malignancies.

Numerous fatty acid metabolism enzymes exist, and are differentially expressed in various tissues. Do you think some of these enzymes may also have a role in cancer stem cells?

Yes, that is probably true. There have been a few reports where lipogenic enzymes such as acetyl-CoA carboxylase and fatty acid synthase are upregulated in cancer stem cells and fatty acid synthase inhibitors have been found to have anti-cancer effects in laboratory studies.

 

How do you think an understanding of cellular metabolism in cancer stem cells will aid progress in cancer research?

Current concepts point to a critically important role for the stem-like phenotype in the formation of solid tumours and their resistance to therapy. Effective therapies will need to target the stem-like cells in addition to the bulk population of tumour cells. The stem-like cell populations are known to differ biochemically from bulk non-stem populations though the extent to which this is true for metabolic pathways remains poorly defined. Understanding this will ensure that future therapies directed at tumour metabolism do not spare the stem-like cell subsets.

 

The link between sugar metabolism and cancer is well established, and lipid metabolism is now being explored. To what extent can cellular insights help us understand how diet and lifestyle affect cancer risk?

Elucidating the cellular mechanisms of carbohydrate and lipid metabolism that drive malignancy may help us understand dietary and behavioural influences on cancer risk. However, showing that specific aspects of carbohydrate and/or lipid metabolism at the cancer cell level drive malignancy does not mean that behavioural or dietary changes will be sufficient to overcome the oncogenic effects. Research efforts in this direction are likely more relevant to developing novel pharmacologic therapies.

 

What’s next for your research?

We want to explore in more depth the molecular mechanisms by which ACSVL3 supports cancer cell stemness and malignancy. In doing so we may identify additional novel therapeutic targets for clinical translation. We are actively in search of small molecule inhibitors of ACSVL3 for their promising therapeutic value.

 
