Thursday, 4 September 2014

Benefits for babies exposed to two languages found in Singaporean birth cohort study

Date:
September 2, 2014
Source:
A*Star Agency for Science, Technology and Research
Summary:
There are advantages associated with exposure to two languages in infancy, as a team of investigators and clinician-scientists in Singapore and internationally has found. The findings reveal a generalized cognitive advantage that emerges early in bilingual infants and is not specific to a particular language.
Image of a child going through the visual habituation test.

A team of investigators and clinician-scientists in Singapore and internationally has found that there are advantages associated with exposure to two languages in infancy. As part of GUSTO, a long-term birth cohort study of Singaporean mothers and their offspring -- a tripartite project between A*STAR's Singapore Institute for Clinical Sciences (SICS), KK Women's and Children's Hospital (KKH) and the National University Hospital (NUH) (see Annex A) -- six-month-old bilingual infants recognised familiar images faster than those brought up in monolingual homes. They also paid more attention to novel images than monolingual infants did.
The findings reveal a generalized cognitive advantage that emerges early in bilingual infants and is not specific to a particular language. The findings were published online on 30 July 2014 in the scientific journal Child Development.


Infants were shown a coloured image of either a bear or a wolf. For half the group, the bear was made to become the "familiar" image while the wolf was the "novel" one, and vice versa for the rest of the group. The study showed that bilingual babies got bored of familiar images faster than monolingual babies.
Several previous studies in the field have shown that the rate at which an infant becomes bored of a familiar image, and its subsequent preference for novelty, is a common predictor of better pre-school developmental outcomes, such as advanced performance in concept formation, non-verbal cognition, expressive and receptive language, and IQ tests. Those studies showed that babies who looked at an image and then rapidly got bored went on to demonstrate higher performance in various domains of cognition and language later in childhood.
Bilingual babies also stared for longer periods of time at the novel image than their monolingual counterparts, demonstrating "novelty preference." Other studies in the field have shown this is linked with improved performance in later IQ and vocabulary tests during pre-school and school-going years.
Associate Professor Leher Singh, who is from the Department of Psychology at the National University of Singapore's Faculty of Arts and Social Sciences and lead author of this study said, "One of the biggest challenges of infant research is data collection. Visual habituation works wonderfully because it only takes a few minutes and capitalises on what babies do so naturally, which is to rapidly become interested in something new and then rapidly move on to something else. Even though it is quite a simple task, visual habituation is one of the few tasks in infancy that has been shown to predict later cognitive development."
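The habituation logic described above can be made concrete with a small sketch. This is an illustration of a common infant-controlled habituation rule, not the study's actual analysis code; the window size and 50% criterion are assumptions chosen for the example.

```python
# Illustrative habituation criterion: the infant is considered
# habituated once mean looking time over a sliding window of trials
# falls below half the mean looking time on the first trials.

def habituation_trial(looking_times, window=3, criterion=0.5):
    """Return the index of the trial on which the habituation
    criterion is first met, or None if it is never reached."""
    if len(looking_times) < 2 * window:
        return None
    baseline = sum(looking_times[:window]) / window
    for i in range(window, len(looking_times) - window + 1):
        recent = sum(looking_times[i:i + window]) / window
        if recent < criterion * baseline:
            return i + window - 1  # last trial in the qualifying window
    return None

# An infant that quickly loses interest habituates after fewer trials
# (looking times in seconds per trial, hypothetical data):
fast = [10.0, 9.0, 8.0, 3.0, 2.0, 1.5, 1.0, 1.0]
slow = [10.0, 9.5, 9.0, 8.5, 8.0, 7.5, 7.0, 6.5]
print(habituation_trial(fast))  # 5: criterion met on the sixth trial
print(habituation_trial(slow))  # None: criterion never reached here
```

On this rule, a faster habituator such as the bilingual infants in the study would reach the criterion in fewer trials, which is exactly the quantity the looking-time paradigm measures.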
A bilingual infant encounters more novel linguistic information than its monolingual peers. A six-month-old infant in a bilingual home is not just learning another language; it is learning two languages while also learning to discern between them. Because learning two languages at once demands more information-processing efficiency, these infants may rise to the challenge by developing skills to cope with it.
Said Assoc Prof Leher Singh, "As adults, learning a second language can be painstaking and laborious. We sometimes project that difficulty onto our young babies, imagining a state of enormous confusion as two languages jostle for space in their little heads. However, a large number of studies have shown us that babies are uniquely well positioned to take on the challenges of bilingual acquisition and in fact, may benefit from this journey."
In comparison to many other countries, a large proportion of Singaporean children are born into bilingual environments. The finding that bilingual input to babies is associated with cognitive enhancement suggests a potentially strong neurocognitive advantage for Singaporean children outside the domain of language: in processing new information and recognising familiar objects with greater accuracy.
Said Assoc Prof Chong Yap Seng, Lead Principal Investigator for GUSTO, "This is good news for Singaporeans who are making the effort to be bilingual. These findings were possible because of the unique Singaporean setting of the study and the detailed neurodevelopmental testing that the GUSTO researchers perform." Assoc Prof Chong is Senior Consultant, Department of Obstetrics & Gynaecology, National University Hospital (NUH), as well as Acting Executive Director, Singapore Institute for Clinical Sciences (SICS), Agency for Science, Technology and Research (A*STAR).

Scientists discover how to 'switch off' autoimmune diseases

Date:
September 3, 2014
Source:
University of Bristol
Summary:
Scientists have made an important breakthrough in the fight against debilitating autoimmune diseases such as multiple sclerosis by revealing how to stop cells attacking healthy body tissue. Rather than the body's immune system destroying its own tissue by mistake, researchers have discovered how cells convert from being aggressive to actually protecting against disease.
Aggressor cells, which have the potential to cause autoimmunity, are targeted by treatment, causing conversion of these cells to protector cells. Gene expression changes gradually at each stage of treatment, as illustrated by the color changes in this series of heat maps.

Scientists have made an important breakthrough in the fight against debilitating autoimmune diseases such as multiple sclerosis by revealing how to stop cells attacking healthy body tissue.

 
Rather than the body's immune system destroying its own tissue by mistake, researchers at the University of Bristol have discovered how cells convert from being aggressive to actually protecting against disease.
The study, funded by the Wellcome Trust, is published in Nature Communications.
It's hoped this latest insight will lead to the widespread use of antigen-specific immunotherapy as a treatment for many autoimmune disorders, including multiple sclerosis (MS), type 1 diabetes, Graves' disease and systemic lupus erythematosus (SLE).
MS alone affects around 100,000 people in the UK and 2.5 million people worldwide.
Scientists were able to selectively target the cells that cause autoimmune disease by dampening down their aggression against the body's own tissues while converting them into cells capable of protecting against disease.
This type of conversion has been previously applied to allergies, known as 'allergic desensitisation', but its application to autoimmune diseases has only been appreciated recently.
The Bristol group has now revealed how the administration of fragments of the proteins that are normally the target for attack leads to correction of the autoimmune response.
Most importantly, their work reveals that effective treatment is achieved by gradually increasing the dose of antigenic fragment injected.
In order to figure out how this type of immunotherapy works, the scientists delved inside the immune cells themselves to see which genes and proteins were turned on or off by the treatment.
They found changes in gene expression that help explain how effective treatment leads to conversion of aggressor into protector cells. The outcome is to reinstate self-tolerance whereby an individual's immune system ignores its own tissues while remaining fully armed to protect against infection.
By specifically targeting the cells at fault, this immunotherapeutic approach avoids the need for the immune suppressive drugs associated with unacceptable side effects such as infections, development of tumours and disruption of natural regulatory mechanisms.
Professor David Wraith, who led the research, said: "Insight into the molecular basis of antigen-specific immunotherapy opens up exciting new opportunities to enhance the selectivity of the approach while providing valuable markers with which to measure effective treatment. These findings have important implications for the many patients suffering from autoimmune conditions that are currently difficult to treat."
This treatment approach, which could improve the lives of millions of people worldwide, is currently undergoing clinical development through biotechnology company Apitope, a spin-out from the University of Bristol.

Why is stress more devastating for some?

Date:
September 2, 2014
Source:
Rockefeller University
Summary:
Some take stress in stride; others struggle with it, even developing psychiatric disorders. New research has identified the molecular origins of this so-called stress gap in mice. The results could contribute to a better understanding of the development of depression and other disorders brought on by stress.
Chemical changes to histones, proteins that provide structure for DNA, can alter gene expression. Above, individual spots represent acetylated histones in a brain region called the hippocampus. The researchers found the loss of acetyl groups from histones is key to the development of anxiety- and depression-like behaviors in stressed mice.

Some people take stress in stride; others are done in by it. New research at Rockefeller University has identified the molecular mechanisms of this so-called stress gap in mice with very similar genetic backgrounds -- a finding that could lead researchers to better understand the development of psychiatric disorders such as anxiety and depression.
"Like people, each animal has unique experiences as it goes through its life. And we suspect that these life experiences can alter the expression of genes, and as a result, affect an animal's susceptibility to stress," says senior author Bruce McEwen, Alfred E. Mirsky Professor and head of the Harold and Margaret Milliken Hatch Laboratory of Neuroendocrinology. "We have taken an important step toward explaining the molecular origins of this stress gap by showing that inbred mice react differently to stress, with some developing behaviors that resemble anxiety and depression, and others remaining resilient."
The results, published September 2 in Molecular Psychiatry, point toward potential new markers to aid the diagnosis of stress-related disorders, such as anxiety and depression and a promising route to the development of new treatments for these devastating disorders.
In experiments, researchers stressed the mice by exposing them to daily, unpredictable bouts of cage tilting, altered dark-light cycles, confinement in tight spaces and other conditions mice dislike with the goal of reproducing the sort of stressful experiences thought to be a primary cause of depression in humans. Afterward, in tests to see if the mice displayed the rodent equivalent of anxiety and depression symptoms, they found about 40 percent showed high levels of behaviors that included a preference for a dark compartment over a brightly lit one, or a loss of interest in sugar water. The remaining 60 percent coped well with the stress. This distinction between the susceptible mice and the resilient ones was so fundamental that it emerged even before the mice were subjected to stress, with some unstressed mice showing an anxiety-like preference for a dark compartment over a lighted one.
The researchers found that the highly stress-susceptible mice had less of an important molecule known as mGlu2 in a stress-involved region of the brain known as the hippocampus. The mGlu2 decrease, they determined, resulted from an epigenetic change, which affects the expression of genes, in this case the gene that codes for mGlu2.
"If you think of the genetic code as words in a book, the book must be opened in order for you to read it. These epigenetic changes, which affect histone proteins associated with DNA, effectively close the book, so the code for mGlu2 cannot be read," says first author Carla Nasca, postdoc in the lab and a fellow of the American Foundation for Suicide Prevention. Previously, she and colleagues first implicated mGlu2 in depression when they showed that a promising potential treatment known as acetyl carnitine rapidly alleviated depression-like symptoms in rats and mice by reversing these epigenetic changes to mGlu2 and causing its levels to increase.
"Currently, depression is diagnosed only by its symptoms," Nasca says. "But these results put us on track to discover molecular signatures in humans that may have the potential to serve as markers for certain types of depression. Our work could also lead to a new generation of rapidly acting antidepressants, such as acetyl carnitine, which would be particularly important to reduce the risk of suicide."
A reduction in mGlu2 matters because this molecule regulates the neurotransmitter glutamate. While glutamate plays a crucial role relaying messages between neurons as part of many important processes, too much can lead to harmful structural changes in the brain.
"The brain is constantly changing. When stressful experiences lead to anxiety and depressive disorders the brain becomes locked in a state it cannot spontaneously escape," McEwen says. "Studies like this one are increasingly focusing on the regulation of glutamate as an underlying mechanism in depression and, we hope, opening promising new avenues for the diagnosis and treatment of this devastating disorder."

Scientists make diseased cells synthesize their own drug

Date:
September 2, 2014
Source:
Scripps Research Institute
Summary:
In a new study that could lead to many new medicines, scientists have adapted a chemical approach to turn diseased cells into unique manufacturing sites for molecules that can treat a form of muscular dystrophy.
Illustration of cells (stock image). "We're using a cell as a reaction vessel and a disease-causing defect as a catalyst to synthesize a treatment in a diseased cell," explained Professor Matthew Disney.

In a new study that could ultimately lead to many new medicines, scientists from the Florida campus of The Scripps Research Institute (TSRI) have adapted a chemical approach to turn diseased cells into unique manufacturing sites for molecules that can treat a form of muscular dystrophy.
"We're using a cell as a reaction vessel and a disease-causing defect as a catalyst to synthesize a treatment in a diseased cell," said TSRI Professor Matthew Disney. "Because the treatment is synthesized only in diseased cells, the compounds could provide highly specific therapeutics that only act when a disease is present. This means we can potentially treat a host of conditions in a very selective and precise manner in totally unprecedented ways."
The promising research was published recently in the international chemistry journal Angewandte Chemie.
Targeting RNA Repeats
In general, small, low-molecular-weight compounds can pass the blood-brain barrier, while larger, higher-molecular-weight compounds tend to be more potent. In the new study, however, small molecules became powerful inhibitors when they bound to targets in cells expressing an RNA defect, such as those found in myotonic dystrophy.
Myotonic dystrophy type 2, a relatively mild and uncommon form of the progressive muscle weakening disease, is caused by a type of RNA defect known as a "tetranucleotide repeat," in which a series of four nucleotides is repeated more times than normal in an individual's genetic code. In this case, a cytosine-cytosine-uracil-guanine (CCUG) repeat binds to the protein MBNL1, rendering it inactive and resulting in RNA splicing abnormalities that, in turn, result in the disease.
In the study, a pair of small molecule "modules" the scientists developed binds to adjacent parts of the defect in a living cell, bringing these groups close together. Under these conditions, the adjacent parts reach out to one another and, as Disney describes it, permanently hold hands. Once that connection is made, the small molecule binds tightly to the defect, potently reversing disease defects on a molecular level.
"When these compounds assemble in the cell, they are 1,000 times more potent than the small molecule itself and 100 times more potent than our most active lead compound," said Research Associate Suzanne Rzuczek, the first author of the study. "This is the first time this has been validated in live cells."
Click Chemistry Construction
The basic process used by Disney and his colleagues is known as "click chemistry" -- a process invented by Nobel laureate K. Barry Sharpless, a chemist at TSRI, to quickly produce substances by attaching small units or modules together in much the same way this occurs naturally.
"In my opinion, this is a unique and nearly ideal application of the process Sharpless and his colleagues first developed," Disney said.
Given the predictability of the process and the nearly endless combinations, translating such an approach to cellular systems could be enormously productive, Disney said. RNAs make ideal targets because they are modular, just like the compounds for which they provide a molecular template.
Not only that, he added, but many similar RNAs cause a host of incurable diseases such as ALS (Lou Gehrig's Disease), Huntington's disease and more than 20 others for which there are no known cures, making this approach a potential route to develop lead therapeutics to this large class of debilitating diseases.

'Brightpoints': New clues to determining the solar cycle

Date:
September 3, 2014
Source:
NASA
Summary:
Approximately every 11 years, the sun undergoes a complete personality change from quiet and calm to violently active. However, the timing of the solar cycle is far from precise. Now, researchers have discovered a new marker to track the course of the solar cycle -- brightpoints, little bright spots in the solar atmosphere that allow us to observe the constant roiling of material inside the sun.
A composite of 25 separate images from NASA's SDO, spanning one year from April 2012 to April 2013. The image reveals the migration tracks of active regions towards the equator during that period.

Approximately every 11 years, the sun undergoes a complete personality change from quiet and calm to violently active. The height of the sun's activity, known as solar maximum, is a time of numerous sunspots, punctuated with profound eruptions that send radiation and solar particles out into the far reaches of space.
However, the timing of the solar cycle is far from precise. Since humans began regularly recording sunspots in the 17th century, the time between successive solar maxima has been as short as nine years, but as long as 14, making it hard to determine its cause. Now, researchers have discovered a new marker to track the course of the solar cycle -- brightpoints, little bright spots in the solar atmosphere that allow us to observe the constant roiling of material inside the sun. These markers provide a new way to watch the way the magnetic fields evolve and move through our closest star. They also show that a substantial adjustment to established theories about what drives this mysterious cycle may be needed.
Historically, theories about what's going on inside the sun to drive the solar cycle have relied on only one set of observations: the detection of sunspots, a data record that goes back centuries. Over the past few decades, realizing that sunspots are areas of intense magnetic fields, researchers have also been able to include observations of magnetic measurements of the sun from more than 90 million miles away.
"Sunspots have been the perennial marker for understanding the mechanisms that rule the sun's interior," said Scott McIntosh, a space scientist at the National Center for Atmospheric Research in Boulder, Colorado, and first author of a paper on these results that appears in the September 1, 2014, issue of the Astrophysical Journal. "But the processes that make sunspots are not well understood, and far less, those that govern their migration and what drives their movement. Now we can see there are bright points in the solar atmosphere, which act like buoys anchored to what's going on much deeper down. They help us develop a different picture of the interior of the sun."
Over the course of a solar cycle, the sunspots tend to migrate progressively lower in latitude, moving toward the equator. The prevailing theory is that two symmetrical, grand loops of material in each solar hemisphere, like huge conveyor belts, sweep from the poles to the equator, where they sink deeper down into the sun and then make their way steadily back to the poles. These conveyor belts also move the magnetic field through the churning solar atmosphere. The theory suggests that sunspots move in sync with this flow -- tracking sunspots has allowed researchers to study that flow, and theories about the solar cycle have developed based on that progression. But there is much that remains unknown: Why do the sunspots only appear lower than about 30 degrees? What causes the sunspots of consecutive cycles to abruptly flip magnetic polarity from positive to negative, or vice versa? Why is the timing of the cycle so variable?
Beginning in 2010, McIntosh and his colleagues began tracking the size of different magnetically balanced areas on the sun, that is, areas where there are an equal number of magnetic fields pointing down into the sun as pointing out. The team found magnetic parcels in sizes that had been seen before, but also spotted much larger parcels than those previously noted -- about the diameter of Jupiter. The researchers also looked at these regions in imagery of the sun's atmosphere, the corona, captured by NASA's Solar Dynamics Observatory, or SDO. They noticed that ubiquitous spots of extreme ultraviolet and X-ray light, known as brightpoints, prefer to hover around the vertices of these large areas, dubbed "g-nodes" because of their giant scale.
These brightpoints and g-nodes, therefore, open up a whole new way to track how material flows inside the sun. McIntosh and his colleagues then collected information about the movement of these features over the past 18 years of available observations from the joint European Space Agency and NASA Solar and Heliospheric Observatory and SDO to monitor how the last solar cycle progressed and the current one started. They found that bands of these markers -- and therefore the corresponding large magnetic fields underneath -- also moved steadily toward the equator over time, along the same path as sunspots, but beginning at a latitude of about 55 degrees. In addition, each hemisphere of the sun usually has more than one of these bands present.
McIntosh explains that a complex interaction of magnetic field lines may take place in the sun's interior that is largely hidden from view. The recent observations suggest that the sun is populated with bands of differently polarized magnetic material that, once they form, steadily move toward the equator from high latitudes. These bands will have either a northern or southern magnetic polarity, and their sign alternates in each hemisphere such that the polarities always cancel. For example, looking at the sun's northern hemisphere, the band closest to the equator -- perhaps of northern polarity -- would have magnetic field lines that connect it to another band, at higher latitudes, of southern polarity. Across the equator, in the bottom half of the sun, a similar process occurs, but the bands would be an almost mirror image of those across the equator: southern polarity near the equator and northern at higher latitudes. Magnetic field lines would connect the four bands, both inside each hemisphere and across the equator.
While the field lines remain relatively short like this, the sun's magnetic system is calmer, producing fewer sunspots and fewer eruptions. This is solar minimum. But once the two low-latitude marching bands reach the equator, their polarities essentially cancel each other out. Abruptly, they disappear. This process, from migratory start to finish at the equator, takes 19 years on average but is seen to vary from 16 to about 21 years.
Following the equatorial battle and cancellation, the sun is left with just two large bands that have migrated to about 30 degrees latitude. The magnetic field lines from these bands are much longer and so the bands in each hemisphere feel less of each other. At this point, the sunspots begin to grow rapidly on the bands, beginning the ramp-up to solar max. The growth only lasts so long, however, because the process of generating a new band of opposite polarity has already begun at high latitudes. When that new band begins to appear, the complex four-band connection starts over and the number of sunspots starts to decrease on the low-latitude bands.
In this scenario, it is the magnetic band's cycle -- the lifetime of each band as it marches toward the equator -- that truly defines the entire solar cycle. "Thus, the 11-year solar cycle can be viewed as the overlap between two much longer cycles," said Robert Leamon, co-author on the paper at Montana State University in Bozeman and NASA Headquarters in Washington.
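The overlap arithmetic behind that statement can be sketched in a few lines. This is a back-of-the-envelope illustration under the article's stated averages, not the paper's actual model: if a new band appears at high latitude roughly once per sunspot cycle (T years) while each band takes about L years to migrate to the equator, then about L/T bands coexist in a hemisphere at any time.

```python
# Rough sketch: number of magnetic bands alive at once in a hemisphere,
# given the band lifetime and the interval between new band launches.
import math

def coexisting_bands(band_lifetime_yrs: float,
                     launch_interval_yrs: float) -> int:
    """Maximum number of overlapping bands in one hemisphere."""
    return math.ceil(band_lifetime_yrs / launch_interval_yrs)

# With the article's averages (~19-year band lifetime, ~11-year sunspot
# cycle), two oppositely polarized bands usually overlap per hemisphere,
# consistent with "each hemisphere usually has more than one band":
print(coexisting_bands(19, 11))  # 2
```

The ~11-year sunspot cycle then falls out of how long a band spends as the sole low-latitude band before and after these overlaps.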
The new conceptual model also provides an explanation of why sunspots are trapped below 30 degrees and abruptly change sign. However, the model creates a question about a different latitude line: Why do the magnetic markers, the brightpoints and g-nodes, start appearing at 55 degrees?
"Above that latitude, the solar atmosphere appears to be disconnected from the rotation beneath it," said McIntosh. "So there is reason to believe that, inside the sun, there's a very different internal motion and evolution at high latitudes compared to the region near the equator. 55 degrees seems to be a critical latitude for the sun and something we need to explore further."
Solar cycle theories are best tested by making predictions as to when we will see the next solar minimum and the next solar maximum. This research paper forecasts that the sun will enter solar minimum somewhere in the last half of 2017, with the sunspots of the next cycle appearing near the end of 2019.
"People make their predictions for when this solar cycle will end and the next one will start," said Leamon. "Sometime in 2019 or 2020, some people will be proved right and others wrong."
In the meantime, regardless of whether the new hypothesis provided by McIntosh and his colleagues is correct, this long term set of bright points and g-node locations offers a new set of observations to explore the drivers of solar activity beyond only sunspots. Inserting this information into solar models will provide an opportunity to improve simulations of our star. Such advanced models tell us more about other stars too, leading to a better understanding of similar magnetic activity on more exotic, distant celestial counterparts.
For more information about NASA's SDO, visit: www.nasa.gov/sdo

Possible neurobiological basis for tradeoff between honesty, self-interest

Date:
September 2, 2014
Source:
Virginia Tech
Summary:
What's the price on your integrity? Tell the truth; everyone has a tipping point. We all want to be honest, but at some point, we'll lie if the benefit is great enough. Now, scientists have confirmed the area of the brain in which we make that decision, using advanced imaging techniques to study how the brain makes choices about honesty.

What's the price on your integrity? Tell the truth; everyone has a tipping point. We all want to be honest, but at some point, we'll lie if the benefit is great enough. Now, scientists have confirmed the area of the brain in which we make that decision.
The result was published online this week in Nature Neuroscience.
"We prefer to be honest, even if lying is beneficial," said Lusha Zhu, the study's lead author and a postdoctoral associate at the Virginia Tech Carilion Research Institute, where she works with Brooks King-Casas and Pearl Chiu, who are assistant professors at the institute and with Virginia Tech's Department of Psychology. "How does the brain make the choice to be honest, even when there is a significant cost to being honest?"
Previous studies have shown that brain areas behind the forehead, called the dorsolateral prefrontal cortex and orbitofrontal cortex, become more active during functional brain scanning when a participant is told to lie or to be honest.
But there's no way to know if those parts of the brain are engaged because an individual is lying or because he or she prefers to be honest, King-Casas said.
This time, researchers asked a different question.
"We asked whether there's a switch in the brain that controls the cost and benefit tradeoff between honesty and self-interest," Chiu said. "The answer to this question will help shed light on the nature of honesty and human preferences."
Researchers compared the decisions of healthy participants with decisions made by participants with damaged dorsolateral prefrontal cortices or orbitofrontal cortices.
The team, including scientists from the Virginia Tech Carilion Research Institute and the University of California at Berkeley, had volunteers decide between honesty and self-interest in an economic "signaling game," which has been extensively studied in behavioral economics, game theory, and evolutionary biology.
In one game, the researchers presented participants with an option that gave them more money at a cost to an anonymous opponent, and an option that gave the opponent more money at a cost to the participant. Unsurprisingly, participants chose the option that filled their own pockets.
In a different game, the researchers presented participants with the same options but asked the participants to send a message to their opponents recommending one option over the other. The participants could either lie and reap the reward, or tell the truth and suffer a loss.
"The average person usually shows lie aversion," Zhu said. "If they don't need to send a message, they prefer the option that gives them more money. If they do need to send a message, they're more likely to send a message that will benefit the other person even at a loss to themselves. They want to be honest, at the cost of their own wallet."
Participants with damage in the dorsolateral prefrontal cortex were not as averse to lying as the two comparison groups. They were more likely to pick the practical option and were less concerned about the potential cost to self-image.
In the game where no message was required, however, participants with dorsolateral prefrontal cortex damage showed the same pattern of decision-making as the comparison groups, suggesting that for each group, the baseline tendency to give to others is the same.
"These results suggest that the dorsolateral prefrontal cortex, a brain region known to be critically involved in cognitive control, may play a causal role in enabling honest behavior," Chiu said.
"People feel good when they're honest and they feel bad when they lie," King-Casas said. "Self-interest and self-image are both powerful factors influencing a person's decision to be honest."
Previous studies, according to King-Casas, were unable to control for an important distinction.
"In past studies, participants are typically instructed by the experimenter to lie or be honest. There's no consequence for lying; the subject is just complying," said King-Casas. "One of the real strengths of our study is that we're able to see how a person's tradeoffs change when we add in responsibility."
Another strength is the measurable tradeoff -- when will an honest person decide the benefit is worth the lie?
"We manipulated the costs and benefits of honesty to quantify the tipping point for each person," said Chiu. "We picked tough dilemmas where, for example, telling a lie might harm the other player one cent, whereas being honest will cost you $20. And you might decide that being seen as an honest person is worth more than $20, so you won't lie even though it costs you, or you might decide that one cent of harm isn't so bad."
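The tipping-point logic can be written down as a minimal decision rule (an illustrative sketch, not the authors' actual model; the `honesty_premium` parameter is a hypothetical stand-in for a person's subjective value of being seen as honest): a participant lies only when the monetary gain from lying exceeds that subjective cost.

```python
def chooses_lie(gain_from_lying, loss_to_other, honesty_premium, harm_aversion=0.0):
    """Minimal sketch of a lie-aversion decision rule (illustrative only).

    A participant lies when the monetary gain outweighs the subjective
    cost of lying: a fixed honesty premium plus any aversion to the
    harm inflicted on the other player.
    """
    cost_of_lying = honesty_premium + harm_aversion * loss_to_other
    return gain_from_lying > cost_of_lying

# The $20-versus-one-cent dilemma from the text: a participant whose
# honesty is "worth more than $20" tells the truth even at that price.
print(chooses_lie(gain_from_lying=20.00, loss_to_other=0.01, honesty_premium=25.00))  # False: stays honest
print(chooses_lie(gain_from_lying=20.00, loss_to_other=0.01, honesty_premium=5.00))   # True: lies
```

Varying the gain and loss across trials, as the experimenters did, traces out each person's tipping point.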
The study sheds light on the neuroscientific basis and broader nature of honesty. Moral philosophers and cognitive psychologists have had longstanding, contrasting hypotheses about the mechanisms governing the tradeoff between honesty and self-interest.
The "Grace" hypothesis suggests that people are innately honest and must control honest impulses if they want to profit. The "Will" hypothesis holds that self-interest is our automatic response, and that honesty requires actively overriding it.
"The prefrontal cortex is key to controlling our behavior and helps to override our natural impulses to be either honest or self-interested," King-Casas said. "Knowing this, we can test whether 'Grace' or 'Will' is dominant. By including participants with lesions in the prefrontal cortex, we were able to test whether honesty requires us to actively resist self-interest -- in which case disrupting the prefrontal cortex would reduce the influence of honesty preferences -- or whether we are automatically predisposed toward honesty, in which case disrupting the prefrontal cortex would instead enhance honest behavior. And our results show a necessary role for prefrontal control in generating honest behavior by overriding our tendencies to be self-interested.
"Our next step will be to combine functional brain imaging with economic modeling to understand how the brain computes the tradeoff between the costs and benefits of lying," King-Casas added. "Then we can begin to understand the nature of honesty."

Ocean mappers discover seamount in Pacific Ocean

Date:
September 2, 2014
Source:
University of New Hampshire
Summary:
Scientists on a seafloor mapping mission have discovered a new seamount near the Johnston Atoll in the Pacific Ocean. The summit of the seamount rises 1,100 meters from the 5,100-meter-deep ocean floor. The seamount's impact remains unknown -- for now. It's too deep (its summit lies nearly 4,000 meters beneath the surface of the ocean) to be a navigation hazard or to provide rich fisheries. "It's probably 100 million years old," Gardner says, "and it might have something in it we may be interested in 100 years from now."

University of New Hampshire scientists on a seafloor mapping mission have discovered a new seamount near the Johnston Atoll in the Pacific Ocean. The summit of the seamount rises 1,100 meters from the 5,100-meter-deep ocean floor.
The seamount was discovered in August when James Gardner, research professor in the UNH-NOAA Center for Coastal and Ocean Mapping/Joint Hydrographic Center, was leading a mapping mission aimed at helping delineate the outer limits of the U.S. continental shelf.
Working aboard the R/V Kilo Moana, an oceanographic research ship owned by the U.S. Navy and operated by the University of Hawaii, Gardner and his team were using multibeam echosounder technology to create detailed images of the seafloor when, late at night, the seamount appeared "out of the blue." The team was able to map the conical seamount in its entirety.
The yet-unnamed seamount, located about 300 kilometers southeast of the uninhabited Jarvis Island, lies in one of the least explored areas of the central Pacific Ocean. Because of that, Gardner was not particularly surprised by the discovery.
"These seamounts are very common, but we don't know about them because most of the places that we go out and map have never been mapped before," he says. Since only low-resolution satellite data exist for most of Earth's seafloor, many seamounts of this size go unresolved in the satellite data, but advanced multibeam echosounder missions like this one can resolve them. "Satellites just can't see these features and we can," Gardner adds.
While the mapping mission was in support of the U.S. Extended Continental Shelf Task Force, a multi-agency project to delineate the outer limits of the U.S. continental shelf, the volcanic seamount lies within the U.S. exclusive economic zone. That means the U.S. has jurisdiction over the waters above it as well as the sediment and rocks of the seamount itself.
The seamount's impact remains unknown -- for now. It's too deep (its summit lies nearly 4,000 meters beneath the surface of the ocean) to be a navigation hazard or to provide rich fisheries. "It's probably 100 million years old," Gardner says, "and it might have something in it we may be interested in 100 years from now."

Potential for 'in body' muscle regeneration, rodent study suggests

Date:
September 2, 2014
Source:
Wake Forest Baptist Medical Center
Summary:
What if repairing large segments of damaged muscle tissue was as simple as mobilizing the body’s stem cells to the site of the injury? New research in mice and rats suggests that “in body” regeneration of muscle tissue might be possible by harnessing the body’s natural healing powers.
Physiotherapy (stock image). Research in mice and rats suggests that “in body” regeneration of muscle tissue might be possible by harnessing the body’s natural healing powers.

What if repairing large segments of damaged muscle tissue was as simple as mobilizing the body's stem cells to the site of the injury? New research in mice and rats, conducted at Wake Forest Baptist Medical Center's Institute for Regenerative Medicine, suggests that "in body" regeneration of muscle tissue might be possible by harnessing the body's natural healing powers.
Reporting online ahead of print in the journal Acta Biomaterialia, the research team demonstrated the ability to recruit stem cells that can form muscle tissue to a small piece of biomaterial, or scaffold, that had been implanted in the animals' leg muscle. The secret to success was using proteins involved in cell communication and muscle formation to mobilize the cells.
"Working to leverage the body's own regenerative properties, we designed a muscle-specific scaffolding system that can actively participate in functional tissue regeneration," said Sang Jin Lee, Ph.D., assistant professor of regenerative medicine and senior author. "This is a proof-of-concept study that we hope can one day be applied to human patients."
The current treatment for restoring function when large segments of muscle are injured or removed during tumor surgery is to surgically move a segment of muscle from one part of the body to another. Of course, this reduces function at the donor site.
Several scientific teams are currently working to engineer replacement muscle in the lab by taking small biopsies of muscle tissue, expanding the cells in the lab, and placing them on scaffolds for later implantation. This approach, however, requires a biopsy and poses the challenge of standardizing the cells.
"Our aim was to bypass the challenges of both of these techniques and to demonstrate the mobilization of muscle cells to a target-specific site for muscle regeneration," said Lee.
Most tissues in the body contain tissue-specific stem cells that are believed to be the "regenerative machinery" responsible for tissue maintenance. It was these cells, known as satellite or progenitor cells, that the scientists wanted to mobilize.
First, the Wake Forest Baptist scientists investigated whether muscle progenitor cells could be mobilized into an implanted scaffold, which basically serves as a "home" for the cells to grow and develop. Scaffolds were implanted in the lower leg muscle of rats and retrieved for examination after several weeks.
Lab testing revealed that the scaffolds contained muscle satellite cells as well as stem cells that could be differentiated into muscle cells in the lab. In addition, the scaffold had developed a network of blood vessels, with mature vessels forming four weeks after implantation.
Next, the scientists tested the effects of several proteins known to be involved in muscle formation by designing the scaffolds to release these proteins. The protein with the greatest effect on cell recruitment was insulin-like growth factor 1 (IGF-1).
After several weeks of implantation, lab testing showed that the scaffolds with IGF-1 had up to four times the number of cells than the plain scaffolds and also had increased formation of muscle fibers.
"The protein effectively promoted cell recruitment and accelerated muscle regeneration," said Lee.
Next, the scientists will evaluate whether the regenerated muscle is able to restore function and will test clinical feasibility in a large animal model.

Direct brain-to-brain communication demonstrated in human subjects

Date:
September 3, 2014
Source:
Beth Israel Deaconess Medical Center
Summary:
In a first-of-its-kind study, an international team of neuroscientists and robotics engineers has demonstrated the viability of direct brain-to-brain communication in humans.
View of emitter and receiver subjects with non-invasive devices supporting, respectively, the BCI based on EEG changes driven by motor imagery (left) and the CBI based on the reception of phosphenes elicited by a neuronavigated TMS (right) components of the B2B transmission system.
Credit: Grau C, Ginhoux R, Riera A, Nguyen TL, Chauvat H, et al. (2014) Conscious Brain-to-Brain Communication in Humans Using Non-Invasive Technologies. PLoS ONE 9(8): e105225. doi:10.1371/journal.pone.0105225

In a first-of-its-kind study, an international team of neuroscientists and robotics engineers has demonstrated the viability of direct brain-to-brain communication in humans. Recently published in PLOS ONE, the highly novel findings describe the successful transmission of information via the internet between the intact scalps of two human subjects located 5,000 miles apart.
"We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways," explains coauthor Alvaro Pascual-Leone, MD, PhD, Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at Beth Israel Deaconess Medical Center (BIDMC) and Professor of Neurology at Harvard Medical School. "One such pathway is, of course, the internet, so our question became, 'Could we develop an experiment that would bypass the talking or typing part of internet and establish direct brain-to-brain communication between subjects located far away from each other in India and France?'"
It turned out the answer was "yes."
In the neuroscientific equivalent of instant messaging, Pascual-Leone, together with Giulio Ruffini and Carles Grau leading a team of researchers from Starlab Barcelona, Spain, and Michel Berg, leading a team from Axilum Robotics, Strasbourg, France, successfully transmitted the words "hola" and "ciao" in a computer-mediated brain-to-brain transmission from a location in India to a location in France using internet-linked electroencephalogram (EEG) and robot-assisted and image-guided transcranial magnetic stimulation (TMS) technologies.
Previous studies on EEG-based brain-computer interaction (BCI) have typically made use of communication between a human brain and computer. In these studies, electrodes attached to a person's scalp record electrical currents in the brain as a person realizes an action-thought, such as consciously thinking about moving the arm or leg. The computer then interprets that signal and translates it to a control output, such as a robot or wheelchair.
But, in this new study, the research team added a second human brain on the other end of the system. Four healthy participants, aged 28 to 50, participated in the study. One of the four subjects was assigned to the brain-computer interface (BCI) branch and was the sender of the words; the other three were assigned to the computer-brain interface (CBI) branch of the experiments and received the messages and had to understand them.
Using EEG, the research team first translated the greetings "hola" and "ciao" into binary code and then emailed the results from India to France. There a computer-brain interface transmitted the message to the receiver's brain through noninvasive brain stimulation. The subjects experienced this as phosphenes, flashes of light in their peripheral vision. The light appeared in numerical sequences that enabled the receiver to decode the information in the message, and while the subjects did not report feeling anything, they did correctly receive the greetings.
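To make the pipeline concrete, here is a minimal sketch of how a greeting could be turned into a bit stream deliverable as a sequence of phosphene flashes; the 5-bits-per-letter scheme below is an assumption for illustration, not necessarily the cipher the researchers used.

```python
def encode_word(word):
    """Encode a lowercase word as a bit string, 5 bits per letter
    (a=00000 ... z=11001). Illustrative only; the study's cipher may differ."""
    return "".join(format(ord(ch) - ord("a"), "05b") for ch in word)

def decode_bits(bits):
    """Invert encode_word: read the bit string back 5 bits at a time."""
    chunks = [bits[i:i + 5] for i in range(0, len(bits), 5)]
    return "".join(chr(int(b, 2) + ord("a")) for b in chunks)

bits = encode_word("hola")
print(bits)               # 20 bits; each bit could map to flash / no flash
print(decode_bits(bits))  # hola
```

On the receiving end, the presence or absence of a phosphene at each position in the sequence recovers one bit at a time.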
A second, similar experiment was conducted between individuals in Spain and France, with a total error rate of just 15 percent: 11 percent on the decoding end and 5 percent on the initial coding side.
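Note that the two stage-wise rates combine multiplicatively rather than by simple addition, which is why 11 percent and 5 percent yield roughly 15 percent overall (assuming the two stages err independently):

```python
# Independent error stages compose as 1 minus the product of success rates
decode_error = 0.11
encode_error = 0.05
total_error = 1 - (1 - decode_error) * (1 - encode_error)
print(round(total_error, 4))  # 0.1545, i.e. about 15 percent
```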
"By using advanced precision neuro-technologies including wireless EEG and robotized TMS, we were able to directly and noninvasively transmit a thought from one person to another, without them having to speak or write," says Pascual-Leone. "This in itself is a remarkable step in human communication, but being able to do so across a distance of thousands of miles is a critically important proof-of-principle for the development of brain-to-brain communications. We believe these experiments represent an important first step in exploring the feasibility of complementing or bypassing traditional language-based or motor-based communication."

Newly identified galactic supercluster is home to the Milky Way

Date:
September 3, 2014
Source:
National Radio Astronomy Observatory
Summary:
Astronomers using the Green Bank Telescope -- among other telescopes -- have determined that our own Milky Way galaxy is part of a newly identified ginormous supercluster of galaxies, which they have dubbed 'Laniakea,' which means 'immense heaven' in Hawaiian.
A slice of the Laniakea Supercluster in the supergalactic equatorial plane -- an imaginary plane containing many of the most massive clusters in this structure. The colors represent density within this slice, with red for high densities and blue for voids -- areas with relatively little matter. Individual galaxies are shown as white dots. Velocity flow streams within the region gravitationally dominated by Laniakea are shown in white, while dark blue flow lines are away from the Laniakea local basin of attraction. The orange contour encloses the outer limits of these streams, a diameter of about 160 Mpc. This region contains the mass of about 100 million billion suns.

Astronomers using the National Science Foundation's Green Bank Telescope (GBT) -- among other telescopes -- have determined that our own Milky Way galaxy is part of a newly identified ginormous supercluster of galaxies, which they have dubbed "Laniakea," which means "immense heaven" in Hawaiian.
This discovery clarifies the boundaries of our galactic neighborhood and establishes previously unrecognized linkages among various galaxy clusters in the local Universe.
"We have finally established the contours that define the supercluster of galaxies we can call home," said lead researcher R. Brent Tully, an astronomer at the University of Hawaii at Manoa. "This is not unlike finding out for the first time that your hometown is actually part of a much larger country that borders other nations."
The paper explaining this work is the cover story of the September 4 issue of the journal Nature.
Superclusters are among the largest structures in the known Universe. They are made up of groups, like our own Local Group, that contain dozens of galaxies, and massive clusters that contain hundreds of galaxies, all interconnected in a web of filaments. Though these structures are interconnected, they have poorly defined boundaries.
To better refine cosmic mapmaking, the researchers are proposing a new way to evaluate these large-scale galaxy structures by examining their impact on the motions of galaxies. A galaxy between structures will be caught in a gravitational tug-of-war in which the balance of the gravitational forces from the surrounding large-scale structures determines the galaxy's motion.
By using the GBT and other radio telescopes to map the velocities of galaxies throughout our local Universe, the team was able to define the region of space where each supercluster dominates. "Green Bank Telescope observations have played a significant role in the research leading to this new understanding of the limits and relationships among a number of superclusters," said Tully.
The Milky Way resides in the outskirts of one such supercluster, whose extent has for the first time been carefully mapped using these new techniques. This so-called Laniakea Supercluster is 500 million light-years in diameter and contains the mass of one hundred million billion Suns spread across 100,000 galaxies.
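As a back-of-envelope check on those figures (not a calculation from the paper), dividing the total mass by the galaxy count gives an average of about a trillion solar masses per galaxy, roughly the scale of the Milky Way:

```python
total_mass_suns = 1e17  # "one hundred million billion Suns"
n_galaxies = 1e5        # 100,000 galaxies
print(total_mass_suns / n_galaxies)  # 1e12 solar masses per galaxy, on average
```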
This study also clarifies the role of the Great Attractor, a gravitational focal point in intergalactic space that influences the motion of our Local Group of galaxies and other galaxy clusters.
Within the boundaries of the Laniakea Supercluster, galaxy motions are directed inward, in the same way that water streams follow descending paths toward a valley. The Great Attractor region is a large, flat-bottomed gravitational valley with a sphere of attraction that extends across the Laniakea Supercluster.
The name Laniakea was suggested by Nawa'a Napoleon, an associate professor of Hawaiian Language and chair of the Department of Languages, Linguistics, and Literature at Kapiolani Community College, a part of the University of Hawaii system. The name honors Polynesian navigators who used knowledge of the heavens to voyage across the immensity of the Pacific Ocean.

First Neanderthal rock engraving found in Gibraltar: Abstract art older than thought?

Date:
September 4, 2014
Source:
CNRS
Summary:
The first example of a rock engraving attributed to Neanderthals has been discovered in Gorham's Cave, Gibraltar. Dated at over 39,000 years old, it consists of a deeply impressed cross-hatching carved into rock. Its analysis calls into question the view that the production of representational and abstract depictions on cave walls was a cultural innovation introduced into Europe by modern humans. On the contrary, the findings support the hypothesis that Neanderthals had a symbolic material culture.

The first example of a rock engraving attributed to Neanderthals has been discovered in Gorham's Cave, Gibraltar, by an international team bringing together prehistorians from the French Laboratory 'De la Préhistoire à l'Actuel: Culture, Environnement et Anthropologie' (PACEA -- CNRS/Université Bordeaux/Ministère de la Culture et de la Communication), and researchers from the UK and Spain. Dated at over 39,000 years old, it consists of a deeply impressed cross-hatching carved into rock. Its analysis calls into question the view that the production of representational and abstract depictions on cave walls was a cultural innovation introduced into Europe by modern humans.
On the contrary, the findings, published Sept. 1 in the Proceedings of the National Academy of Sciences, support the hypothesis that Neanderthals had a symbolic material culture.
The production of representational and abstract depictions on cave walls is seen as a key stage in the development of human cultures. Until now, this cultural innovation was considered to be a characteristic feature of modern humans, who colonized Europe around 40,000 years ago. It has also frequently been used to suggest that there were marked cognitive differences between modern humans and the Neanderthals who preceded them, and who did not express themselves in this way. The recent discovery in Gorham's Cave changes the picture.
It consists of an abstract engraving in the form of a deeply impressed cross-hatching carved into the bedrock at the back of the cave. At the time it was identified it was covered by a layer of sediment shown by radiocarbon dating to be 39,000 years old. Since the engraving lies beneath this layer it is therefore older. This dating, together with the presence of Mousterian* tools characteristic of Neanderthals in the sediments covering the engraving, shows that it was made by Neanderthals, who still populated the south of the Iberian peninsula at that time.
Researchers at the PACEA Laboratory (CNRS/Université de Bordeaux/Ministère de la Culture et de la Communication) undertook a microscopic analysis of the engraving, produced a 3-D reconstruction of it, and carried out an experimental study, which demonstrated its human origin. The work also showed that the engraved lines are not the result of utilitarian activity, such as the cutting of meat or skins, but rather that of repeatedly and intentionally passing a robust pointed lithic tool (a pointed tool made of stone) into the rock to carve deep grooves. The lines were skilfully carved, and the researchers calculated that between 188 and 317 strokes of the engraving tool were necessary to achieve this result.
The discovery supports the view that graphic expression was not exclusive to modern humans, and that some Neanderthal cultures produced abstract engravings, using these to mark their living space.
The research was supported by an ERC grant.
*Mousterian culture was produced in Europe by Neanderthals during the Middle Paleolithic (300,000 to 39,000 years ago).

Dreadnoughtus: Gigantic, exceptionally complete sauropod dinosaur

Date:
September 4, 2014
Source:
Drexel University
Summary:
The new 65-ton (59,300 kg) dinosaur species Dreadnoughtus schrani is the largest land animal for which body mass can be accurately calculated. Its skeleton is the most complete ever found of its type, with over 70 percent of the bones, excluding the head, represented. Because all previously discovered supermassive dinosaurs are known from relatively fragmentary remains, Dreadnoughtus offers an unprecedented window into the anatomy and biomechanics of the largest animals to ever walk the Earth.
A US-Argentinian team led by Drexel University's Kenneth Lacovara, PhD, excavated the skeleton of Dreadnoughtus schrani from southern Patagonia over four field seasons from 2005 through 2009. The completeness and articulated nature of the two skeletons they found are evidence that these individuals were buried in sediments rapidly before their bodies fully decomposed.

Scientists have discovered and described a new supermassive dinosaur species with the most complete skeleton ever found of its type. At 85 feet (26 m) long and weighing about 65 tons (59,300 kg) in life, Dreadnoughtus schrani is the largest land animal for which a body mass can be accurately calculated. Its skeleton is exceptionally complete, with over 70 percent of the bones, excluding the head, represented. Because all previously discovered supermassive dinosaurs are known only from relatively fragmentary remains, Dreadnoughtus offers an unprecedented window into the anatomy and biomechanics of the largest animals to ever walk the Earth.
"Dreadnoughtus schrani was astoundingly huge," said Kenneth Lacovara, PhD, an associate professor in Drexel University's College of Arts and Sciences, who discovered the Dreadnoughtus fossil skeleton in southern Patagonia in Argentina and led the excavation and analysis. "It weighed as much as a dozen African elephants or more than seven T. rex. Shockingly, skeletal evidence shows that when this 65-ton specimen died, it was not yet full grown. It is by far the best example we have of any of the most giant creatures to ever walk the planet."
Lacovara and colleagues published the detailed description of their discovery, defining the genus and species Dreadnoughtus schrani, in the journal Scientific Reports from the Nature Publishing Group today. The new dinosaur belongs to a group of large plant eaters known as titanosaurs. The fossil was unearthed over four field seasons from 2005 through 2009 by Lacovara and a team including Lucio M. Ibiricu, PhD, of the Centro Nacional Patagonico in Chubut, Argentina, the Carnegie Museum of Natural History's Matthew Lamanna, PhD, and Jason Poole of the Academy of Natural Sciences of Drexel University, as well as many current and former Drexel students and other collaborators.
Over 100 elements of the Dreadnoughtus skeleton are represented from the type specimen, including most of the vertebrae from the 30-foot-long tail, a neck vertebra with a diameter of over a yard, scapula, numerous ribs, toes, a claw, a small section of jaw and a single tooth, and, most notably for calculating the animal's mass, nearly all the bones from both forelimbs and hindlimbs including a femur over 6 feet tall and a humerus. A smaller individual with a less-complete skeleton was also unearthed at the site.
The 'gold standard' for calculating the mass of quadrupeds (four-legged animals) is based on measurements taken from the femur (thigh bone) and humerus (upper arm bone). Because the Dreadnoughtus type specimen includes both these bones, its weight can be estimated with confidence. Prior to the description of the 65-ton Dreadnoughtus schrani specimen, another Patagonian giant, Elaltitan, held the title of dinosaur with the greatest calculable weight at 47 tons, based on a recent study.
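The 'gold standard' referred to is a scaling relationship that predicts quadruped body mass from the combined circumference of the humerus and femur, such as the one published by Campione and Evans in 2012. The sketch below uses that published equation but plugs in hypothetical circumference values for illustration; the measured Dreadnoughtus dimensions are reported in the paper itself.

```python
import math

def quadruped_mass_g(c_humerus_mm, c_femur_mm):
    """Estimate quadruped body mass in grams from humerus and femur
    shaft circumferences (mm), per the Campione & Evans (2012) scaling
    relationship: log10(mass_g) = 2.754 * log10(C_humerus + C_femur) - 1.097
    """
    return 10 ** (2.754 * math.log10(c_humerus_mm + c_femur_mm) - 1.097)

# Hypothetical circumferences for a titanosaur of this size class
# (illustrative inputs, not the measured Dreadnoughtus values):
mass_tons = quadruped_mass_g(c_humerus_mm=730, c_femur_mm=910) / 1e6
print(round(mass_tons, 1), "metric tons")
```

Because mass scales with circumference raised to a power near 2.75, even modest measurement uncertainty in the bones translates into a wide mass range, which is why complete limb bones matter so much.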
Overall, the Dreadnoughtus schrani type specimen's bones represent approximately 45.3 percent of the dinosaur's total skeleton, or up to 70.4 percent of the types of bones in its body, excluding the skull bones. This is far more complete than all previously discovered giant titanosaurian dinosaurs.
"Titanosaurs are a remarkable group of dinosaurs, with species ranging from the weight of a cow to the weight of a sperm whale or more. But the biggest titanosaurs have remained a mystery, because, in almost all cases, their fossils are very incomplete," said Matthew Lamanna.
For example, Argentinosaurus was of comparable and perhaps greater mass than Dreadnoughtus, but is known from only a half dozen vertebrae in its mid-back, a shinbone and a few other fragmentary pieces; because the specimen lacks upper limb bones, there is no reliable method to calculate a definitive mass of Argentinosaurus. Futalognkosaurus was the most complete extremely massive titanosaur known prior to Dreadnoughtus, but that specimen lacks most limb bones, a tail and any part of its skull.
To better visualize the skeletal structure of Dreadnoughtus, Lacovara's team digitally scanned all of the bones from both dinosaur specimens. They have made a "virtual mount" of the skeleton that is now publicly available for download from the paper's open-access online supplement as a three-dimensional digital reconstruction.
"This has the advantage that it doesn't take physical space," Lacovara said. "These images can be ported around the world to other scientists and museums. The fidelity is perfect. It doesn't decay over time like bones do in a collection."
"Digital modeling is the wave of the future. It's only going to become more common in paleontology, especially for studies of giant dinosaurs such as Dreadnoughtus, where a single bone can weigh hundreds of pounds," said Lamanna.
The 3D laser scans of Dreadnoughtus show the deep, exquisitely preserved muscle attachment scars that can provide a wealth of information about the function and force of muscles that the animal had and where they attached to the skeleton -- information that is lacking in many sauropods. Efforts to understand this dinosaur's body structure, growth rate, and biomechanics are ongoing areas of research within Lacovara's lab.
A Dinosaur that Feared Nothing
"With a body the size of a house, the weight of a herd of elephants, and a weaponized tail, Dreadnoughtus would have feared nothing," Lacovara said. "That evokes to me a class of turn-of-the-last century battleships called the dreadnoughts, which were huge, thickly clad and virtually impervious."
As a result, Lacovara chose the name "Dreadnoughtus," meaning "fears nothing." "I think it's time the herbivores get their due for being the toughest creatures in an environment," he said. The species name, "schrani," was chosen in honor of American entrepreneur Adam Schran, who provided support for the research.
To grow as large as Dreadnoughtus, a dinosaur would have to eat massive quantities of plants. "Imagine a life-long obsession with eating," Lacovara said, describing the potential lifestyle of Dreadnoughtus, which lived approximately 77 million years ago in a temperate forest at the southern tip of South America.
"Every day is about taking in enough calories to nourish this house-sized body. I imagine their day consists largely of standing in one place," Lacovara said. "You have this 37-foot-long neck balanced by a 30-foot-long tail in the back. Without moving your legs, you have access to a giant feeding envelope of trees and fern leaves. You spend an hour or so clearing out this patch that has thousands of calories in it, and then you take three steps over to the right and spend the next hour clearing out that patch."
An adult Dreadnoughtus was likely too large to fear any predators, but it would have still been a target for scavengers after dying of natural causes or environmental disasters. Lacovara's team discovered a few teeth from theropods -- smaller predatory and scavenging dinosaurs -- among the Dreadnoughtus fossils. However, the completeness and articulated nature of the two skeletons are evidence that these individuals were buried in sediments rapidly before their bodies fully decomposed. Based on the sedimentary deposits at the site, Lacovara said "these two animals were buried quickly after a river flooded and broke through its natural levee, turning the ground into something like quicksand. The rapid and deep burial of the Dreadnoughtus type specimen accounts for its extraordinary completeness. Its misfortune was our luck."