Sunday, 9 November 2014

Scientists Use String Field Theory to Validate Quantum Mechanics

String Field Theory Linked to Quantum Mechanics
A new study uses string field theory to try to validate quantum mechanics, proposing a link that could open the door to using string field theory — or a broader version of it, called M-theory — as the basis of all physics.
“This could solve the mystery of where quantum mechanics comes from,” said Itzhak Bars, USC Dornsife College of Letters, Arts and Sciences professor and lead author of the paper.
Bars collaborated with Dmitry Rychkov, his Ph.D. student at USC. The paper was published online on October 27 by the journal Physics Letters B.
Rather than use quantum mechanics to validate string field theory, the researchers worked backwards and used string field theory to try to validate quantum mechanics.
In their paper, which reformulated string field theory in a clearer language, Bars and Rychkov showed that a set of fundamental quantum mechanical principles known as “commutation rules” may be derived from the geometry of strings joining and splitting.
“Our argument can be presented in bare bones in a hugely simplified mathematical structure,” Bars said. “The essential ingredient is the assumption that all matter is made up of strings and that the only possible interaction is joining/splitting as specified in their version of string field theory.”
Physicists have long sought to unite quantum mechanics and general relativity, and to explain why both work in their respective domains. First proposed in the 1970s, string theory resolved inconsistencies of quantum gravity and suggested that the fundamental unit of matter was a tiny string, not a point, and that the only possible interactions of matter are strings either joining or splitting.
Four decades later, physicists are still trying to hash out the rules of string theory, which seem to demand some interesting starting conditions to work (like extra dimensions, which may explain why quarks and leptons have electric charge, color and “flavor” that distinguish them from one another).
At present, no single set of rules can be used to explain all of the physical interactions that occur in the observable universe.
On large scales, scientists use classical, Newtonian mechanics to describe how gravity holds the moon in its orbit or why the force of a jet engine propels a jet forward. Newtonian mechanics is intuitive, and its effects can often be observed with the naked eye.
On incredibly tiny scales, such as 100 million times smaller than an atom, scientists use relativistic quantum field theory to describe the interactions of subatomic particles and the forces that hold quarks and leptons together inside protons, neutrons, nuclei and atoms.
Quantum mechanics is often counterintuitive, allowing particles to be in two places at once, but it has been repeatedly validated at scales from atoms down to quarks. It has become an invaluable and accurate framework for understanding the interactions of matter and energy at small distances.
Quantum mechanics is extremely successful as a model for how things work on small scales, but it contains a big mystery: the unexplained foundational quantum commutation rules that predict uncertainty in the position and momentum of every point in the universe.
“The commutation rules don’t have an explanation from a more fundamental perspective, but have been experimentally verified down to the smallest distances probed by the most powerful accelerators. Clearly the rules are correct, but they beg for an explanation of their origins in some physical phenomena that are even deeper,” Bars said.
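For reference, the rule at issue is the canonical position–momentum commutation relation, from which the Heisenberg uncertainty relation follows:

```latex
[\hat{x}, \hat{p}] \;=\; \hat{x}\hat{p} - \hat{p}\hat{x} \;=\; i\hbar,
\qquad\text{which implies}\qquad
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}.
```

It is this relation that quantum mechanics takes as a postulate, and that Bars and Rychkov argue can instead be derived from string joining and splitting.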
The difficulty lies in the fact that there is no experimental data on the topic: testing physics at such small scales is currently beyond scientists’ technological reach.
The research was funded by the Department of Energy.
Publication: Itzhak Bars and Dmitry Rychkov, “Is string interaction the origin of quantum mechanics?,” Physics Letters B, 2014; DOI: 10.1016/j.physletb.2014.10.053

Curiosity Rover Provides First Confirmation of a Mineral Mapped from Orbit

Curiosity Provides First Confirmation of a Mineral Mapped from Orbit
This image shows the first holes drilled by NASA’s Mars rover Curiosity at Mount Sharp. The loose material near the drill holes is drill tailings and an accumulation of dust that slid down the rock during drilling. Image Credit: NASA/JPL-Caltech/MSSS
A sample of powdered rock extracted by the Curiosity rover’s drill from the “Confidence Hills” target has provided NASA scientists with the first confirmation of a mineral mapped from orbit.
“This connects us with the mineral identifications from orbit, which can now help guide our investigations as we climb the slope and test hypotheses derived from the orbital mapping,” said Curiosity Project Scientist John Grotzinger, of the California Institute of Technology in Pasadena.
Curiosity collected the powder by drilling into a rock outcrop at the base of Mount Sharp in late September. The robotic arm delivered a pinch of the sample to the Chemistry and Mineralogy (CheMin) instrument inside the rover. This sample, from a target called “Confidence Hills” within the “Pahrump Hills” outcrop, contained much more hematite than any rock or soil sample previously analyzed by CheMin during the two-year-old mission. Hematite is an iron-oxide mineral that gives clues about ancient environmental conditions from when it formed.
In observations reported in 2010, before selection of Curiosity’s landing site, a mineral-mapping instrument on NASA’s Mars Reconnaissance Orbiter provided evidence of hematite in the geological unit that includes the Pahrump Hills outcrop. The landing site is inside Gale Crater, an impact basin about 96 miles (154 kilometers) in diameter with the layered Mount Sharp rising about three miles (five kilometers) high in the center.
“We’ve reached the part of the crater where we have the mineralogical information that was important in selection of Gale Crater as the landing site,” said Ralph Milliken of Brown University, Providence, Rhode Island. He is a member of Curiosity’s science team and was lead author of that 2010 report in Geophysical Research Letters identifying minerals based on observations of lower Mount Sharp by the orbiter’s Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). “We’re now on a path where the orbital data can help us predict what minerals we’ll find and make good choices about where to drill. Analyses like these will help us place rover-scale observations into the broader geologic history of Gale that we see from orbital data.”
Much of Curiosity’s first year on Mars was spent investigating outcrops in a low area of Gale Crater called “Yellowknife Bay,” near the spot where the rover landed. The rover found an ancient lakebed. Rocks there held evidence of wet environmental conditions billions of years ago that offered ingredients and an energy source favorable for microbial life, if Mars ever had microbes. Clay minerals of interest in those rocks at Yellowknife Bay had not been detected from orbit, possibly due to dust coatings that interfere with CRISM’s view of them.
The rover spent much of the mission’s second year driving from Yellowknife Bay to the base of Mount Sharp. The hematite found in the first sample from the mountain tells about environmental conditions different from the conditions recorded in the rocks of Yellowknife Bay. The rock material interacted with water and atmosphere to become more oxidized.
The rocks analyzed earlier also contain iron-oxide minerals, mostly magnetite. One way to form hematite is to put magnetite in oxidizing conditions. The latest sample has about eight percent hematite and four percent magnetite. The drilled rocks at Yellowknife Bay and on the way to Mount Sharp contain at most about one percent hematite and much higher amounts of magnetite.
“There’s more oxidation involved in the new sample,” said CheMin Deputy Principal Investigator David Vaniman of the Planetary Science Institute in Tucson, Arizona.
The sample is only partially oxidized, and preservation of magnetite and olivine indicates a gradient of oxidation levels. That gradient could have provided a chemical energy source for microbes.
The Pahrump Hills outcrop includes multiple layers uphill from its lowest layer, where the Confidence Hills sample was drilled. The layers vary in texture and may also vary in concentrations of hematite and other minerals. The rover team is now using Curiosity to survey the outcrop and assess possible targets for close inspection and drilling.
The mission may spend weeks to months at Pahrump Hills before proceeding farther up the stack of geological layers forming Mount Sharp. Those higher layers include an erosion-resistant band of rock higher on Mount Sharp with such a strong orbital signature of hematite, it is called “Hematite Ridge.” The target drilled at Pahrump Hills is much softer and more deeply eroded than Hematite Ridge.
Another NASA Mars rover, Opportunity, made a key discovery of hematite-rich spherules on a different part of Mars in 2004. That finding was important as evidence of a water-soaked history that produced those mineral concretions. The form of hematite at Pahrump Hills is different and is most important as a clue about oxidation conditions. Plenty of other evidence in Gale Crater has testified to the ancient presence of water.
NASA’s Jet Propulsion Laboratory, a division of Caltech in Pasadena, manages the Mars Reconnaissance Orbiter and Mars Science Laboratory projects for NASA’s Science Mission Directorate in Washington, and built the Curiosity rover. NASA’s Ames Research Center, Moffett Field, California, developed CheMin and manages instrument operations. The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, developed and operates CRISM.
Source: Preston Dyches / Guy Webster, Jet Propulsion Laboratory
Image: NASA/JPL-Caltech/MSSS

Astronomers Identify Bizarre Object at the Center of the Milky Way

UCLA Astronomers Solve Puzzle about Galactic Center Source G2
Telescopes at the Keck Observatory use adaptive optics, which enabled UCLA astronomers to discover that G2 is a pair of binary stars that merged together.
In a new study, astronomers reveal that the bizarre object known as G2 is most likely a pair of binary stars that had been orbiting the black hole in tandem and merged together into an extremely large star, cloaked in gas and dust.
For years, astronomers have been puzzled by a bizarre object in the center of the Milky Way that was believed to be a hydrogen gas cloud headed toward our galaxy’s enormous black hole.
Having studied it during its closest approach to the black hole this summer, UCLA astronomers believe that they have solved the riddle of the object widely known as G2.
A team led by Andrea Ghez, professor of physics and astronomy in the UCLA College, determined that G2 is most likely a pair of binary stars that had been orbiting the black hole in tandem and merged together into an extremely large star, cloaked in gas and dust — its movements choreographed by the black hole’s powerful gravitational field. The research is published in the journal Astrophysical Journal Letters.
Astronomers had figured that if G2 had been a hydrogen cloud, it would have been torn apart by the black hole, and the resulting celestial fireworks would have dramatically changed the state of the black hole.
“G2 survived and continued happily on its orbit; a simple gas cloud would not have done that,” said Ghez, who holds the Lauren B. Leichtman and Arthur E. Levine Chair in Astrophysics. “G2 was basically unaffected by the black hole. There were no fireworks.”
Black holes, which form out of the collapse of matter, have such high density that nothing can escape their gravitational pull — not even light. They cannot be seen directly, but their influence on nearby stars is visible and provides a signature, said Ghez, a 2008 MacArthur Fellow.
Ghez, who studies thousands of stars in the neighborhood of the supermassive black hole, said G2 appears to be just one of an emerging class of stars near the black hole that are created because the black hole’s powerful gravity drives binary stars to merge into one. She also noted that, in our galaxy, massive stars primarily come in pairs. She says the star suffered an abrasion to its outer layer but otherwise will be fine.
Ghez and her colleagues — who include lead author Gunther Witzel, a UCLA postdoctoral scholar, and Mark Morris and Eric Becklin, both UCLA professors of physics and astronomy — conducted the research at Hawaii’s W.M. Keck Observatory, which houses the world’s two largest optical and infrared telescopes.
When two stars near the black hole merge into one, the star expands for more than 1 million years before it settles back down, said Ghez, who directs the UCLA Galactic Center Group. “This may be happening more than we thought. The stars at the center of the galaxy are massive and mostly binaries. It’s possible that many of the stars we’ve been watching and not understanding may be the end product of mergers that are calm now.”
Ghez and her colleagues also determined that G2 appears to be in that inflated stage now. The body has fascinated many astronomers in recent years, particularly during the year leading up to its approach to the black hole. “It was one of the most watched events in astronomy in my career,” Ghez said.
Ghez said G2 now is undergoing what she calls a “spaghetti-fication” — a common phenomenon near black holes in which large objects become elongated. At the same time, the gas at G2’s surface is being heated by stars around it, creating an enormous cloud of gas and dust that has shrouded most of the massive star.
Witzel said the researchers wouldn’t have been able to arrive at their conclusions without the Keck’s advanced technology. “It is a result that in its precision was possible only with these incredible tools, the Keck Observatory’s 10-meter telescopes,” Witzel said.
The telescopes use adaptive optics, a powerful technology pioneered in part by Ghez that corrects the distorting effects of the Earth’s atmosphere in real time to more clearly reveal the space around the supermassive black hole. The technique has helped Ghez and her colleagues elucidate many previously unexplained facets of the environments surrounding supermassive black holes.
“We are seeing phenomena about black holes that you can’t watch anywhere else in the universe,” Ghez added. “We are starting to understand the physics of black holes in a way that has never been possible before.”
The research was funded by the National Science Foundation, the Lauren Leichtman and Arthur Levine Chair in Astrophysics, the Preston Family Graduate Student Fellowship and the Janet Marott Student Travel Awards. The W. M. Keck Observatory is operated as a scientific partnership among the University of California, Caltech and NASA.
Publication: G. Witzel, et al., “Detection of Galactic Center Source G2 at 3.8 μm during Periapse Passage,” 2014, ApJ, 796, L8; doi:10.1088/2041-8205/796/1/L8
PDF Copy of the Study: Detection of Galactic Center Source G2 at 3.8 μm during Periapse Passage
Source: Stuart Wolpert, UCLA Newsroom
Image: Ethan Tweedie

CWRU Theoretical Physicists Suggest Dark Matter May Be Massive

Theoretical Physicists Suggest Dark Matter May Be Massive
In a new study, theoretical physicists from Case Western Reserve University suggest that dark matter may be massive and that the Standard Model may account for it.
The physics community has spent three decades searching for and finding no evidence that dark matter is made of tiny exotic particles. Case Western Reserve University theoretical physicists suggest researchers consider looking for candidates more in the ordinary realm and, well, more massive.
Dark matter is unseen matter that, combined with normal matter, could create the gravity that, among other things, prevents spinning galaxies from flying apart. Physicists calculate that dark matter makes up 27 percent of the universe and normal matter 5 percent.
Instead of WIMPs (weakly interacting massive particles) or axions (weakly interacting low-mass particles), dark matter may be made of macroscopic objects, anywhere from a few ounces up to the mass of a good-sized asteroid, and probably as dense as a neutron star or the nucleus of an atom, the researchers suggest.
Physics professor Glenn Starkman and David Jacobs, who received his PhD in Physics from CWRU in May and is now a fellow at the University of Cape Town, say published observations provide guidance, limiting where to look. They lay out the possibilities in a paper listed below.
The Macros, as Starkman and Jacobs call them, would not only dwarf WIMPs and axions but differ in an important way: they could potentially be assembled out of particles in the Standard Model of particle physics instead of requiring new physics to explain their existence.
“We’ve been looking for WIMPs for a long time and haven’t seen them,” Starkman said. “We expected to make WIMPs in the Large Hadron Collider, and we haven’t.”
WIMPs and axions remain possible candidates for dark matter, but there’s reason to search elsewhere, the theorists argue.
“The community had kind of turned away from the idea that dark matter could be made of normal-ish stuff in the late ’80s,” Starkman said. “We ask, was that completely correct and how do we know dark matter isn’t more ordinary stuff— stuff that could be made from quarks and electrons?”
After eliminating most ordinary matter as possible candidates (failed Jupiters, white dwarfs, neutron stars, stellar black holes, the black holes in the centers of galaxies, and neutrinos with a lot of mass), physicists turned their focus to the exotics.
Matter that was somewhere in between ordinary and exotic—relatives of neutron stars or large nuclei—was left on the table, Starkman said. “We say relatives because they probably have a considerable admixture of strange quarks, which are made in accelerators and ordinarily have extremely short lives,” he said.
Although strange quarks are highly unstable, Starkman points out that neutrons are also highly unstable. But in helium, bound with stable protons, neutrons remain stable.
“That opens the possibility that stable strange nuclear matter was made in the early universe and dark matter is nothing more than chunks of strange nuclear matter or other bound states of quarks, or of baryons, which are themselves made of quarks,” he said. Such dark matter would fit the Standard Model.
The Macros would have to be assembled from ordinary and strange quarks or baryons before the strange quarks or baryons decay, and at a temperature above 3.5 trillion degrees Celsius, comparable to the temperature in the center of a massive supernova, Starkman and Jacobs calculated. The quarks would have to be assembled with 90 percent efficiency, leaving just 10 percent to form the protons and neutrons found in the universe today.
The limits on the possible dark matter are as follows:
  • A minimum of 55 grams. If dark matter were smaller, it would have been seen in detectors in Skylab or in tracks found in sheets of mica.
  • A maximum of 10²⁴ (a million billion billion) grams. Above this, the Macros would be so massive they would bend starlight, which has not been seen.
  • The range of 10¹⁷ to 10²⁰ grams per square centimeter should also be eliminated from the search, the theorists say. Dark matter in that range would be massive enough for gravitational lensing to affect individual photons from gamma-ray bursts in ways that have not been seen.
If dark matter is within this allowed range, there are reasons it hasn’t been seen.
  • At the mass of 10¹⁸ grams, dark matter Macros would hit the Earth about once every billion years.
  • At lower masses, they would strike the Earth more frequently but might not leave a recognizable record or observable mark.
  • In the range of 10⁹ to 10¹⁸ grams, dark matter would collide with the Earth once annually, registering nothing in the underground dark matter detectors in place.
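As a rough illustration of the numbers above, the strike rate scales inversely with macro mass for a fixed dark-matter mass density. The sketch below uses round textbook values for the local dark-matter density and speed; these are assumptions for illustration, not figures from the Starkman–Jacobs paper.

```python
import math

# Back-of-the-envelope: how often would a macro of a given mass hit Earth,
# if all dark matter were macros? Density and speed are round, assumed
# textbook values, not numbers from the paper.
RHO_DM = 5e-25           # local dark-matter density, g/cm^3 (assumed)
V_DM = 2.3e7             # typical dark-matter speed, cm/s (~230 km/s, assumed)
R_EARTH = 6.371e8        # Earth's radius, cm
SECONDS_PER_YEAR = 3.15e7

def strikes_per_year(macro_mass_g: float) -> float:
    """Expected strikes per year: number density * speed * Earth cross-section."""
    number_density = RHO_DM / macro_mass_g       # macros per cm^3
    flux = number_density * V_DM                 # macros per cm^2 per second
    area = math.pi * R_EARTH ** 2                # Earth's geometric cross-section
    return flux * area * SECONDS_PER_YEAR

# At 10^18 g this comes out to roughly one strike every few billion years,
# the same order of magnitude as the figure quoted above; at lower masses
# the rate grows in proportion to 1/mass.
rate = strikes_per_year(1e18)
print(f"{rate:.1e} strikes per year, i.e. one every {1/rate:.1e} years")
```

The inverse-mass scaling is why lighter macros would strike far more often yet could still escape notice if each impact leaves no recognizable record.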
Publication: Submitted to MNRAS

Type 2 Diabetes and Cardiovascular Disease Share Eight Molecular Pathways

Type 2 Diabetes and Cardiovascular Disease May Be Related
A genetic network shows 10 proposed “key driver” genes that may have especially great influence in both type 2 diabetes and cardiovascular disease. Liu lab/Brown University
In a newly published study, researchers uncover several potential ways that type 2 diabetes and cardiovascular disease may be related at the level of genes, proteins, and fundamental physiology in women.
Providence, Rhode Island (Brown University) — Type 2 diabetes (T2D) and cardiovascular disease (CVD) appear to have a lot in common. They share risk factors such as obesity, and they often occur together. If they also share the same genetic underpinnings, then doctors could devise a way to treat them together, too. With that hope in mind, scientists applied multiple layers of analysis to the genomics of more than 15,000 women. In a new study they report finding eight molecular pathways shared in both diseases, as well as several “key driver” genes that appear to orchestrate the gene networks in which these pathways connect and interact.
The scientists started by looking for individual genetic differences in women of three different ethnicities who had either or both of the conditions compared to similar but healthy women – a technique called a Genome Wide Association Study (GWAS). But the team members didn’t stop there. They also analyzed the women’s genetic differences in the context of the complex pathways in which genes and their protein products interact to affect physiology and health.
“Looking at genes one by one is standard,” said Dr. Simin Liu, professor of epidemiology and medicine in the Brown University School of Public Health and a co-senior author of the study published in the American Heart Association journal Circulation: Cardiovascular Genetics. “But ultimately, the interactions of biology are fundamentally organized in a pathway and network manner.”
The study drew upon the genetic samples and health records of 8,155 black women, 3,494 Hispanic women and 3,697 white women gathered by the Women’s Health Initiative, a major research project funded by the National Heart, Lung and Blood Institute. In comparing women with CVD and T2D to healthy women, lead author Kei Hang K. Chan, a postdoctoral fellow at the Center for Population Health and Clinical Epidemiology, and the team found key differences in eight pathways regulating cell adhesion (how cells stick within tissues), calcium signaling (how cells communicate), axon guidance (how neurons find their paths to connect with target sites), extracellular matrix (structural support within tissue), and various forms of cardiomyopathy (heart muscle problems).
These shared pathways were common across all three ethnicities. In addition, the team found a few pathways that were shared between T2D and CVD only within specific ethnicities.
Chan used five different methodologies to conduct these pathway analyses, reporting only those pathways that showed up as significant by at least two methods.
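That filtering step amounts to a simple tally across methods. The sketch below illustrates it; the method names and pathway assignments are invented for illustration, not the study’s actual outputs.

```python
from collections import Counter

# Hypothetical example of the ">= 2 methods" filter: each analysis method
# reports the set of pathways it flagged as significant. Names and
# assignments are invented for illustration only.
significant_by_method = {
    "method_A": {"cell adhesion", "calcium signaling", "axon guidance"},
    "method_B": {"cell adhesion", "extracellular matrix"},
    "method_C": {"calcium signaling", "cell adhesion"},
    "method_D": {"axon guidance"},
    "method_E": {"extracellular matrix", "cardiomyopathy"},
}

# Tally how many methods flagged each pathway, then keep only pathways
# that reached significance in at least two of them.
counts = Counter(p for pathways in significant_by_method.values() for p in pathways)
reported = {p for p, n in counts.items() if n >= 2}

# "cardiomyopathy" is dropped here because only one method flagged it.
print(sorted(reported))
```

Requiring agreement between at least two independent methods is a common way to guard against artifacts of any single analysis technique.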
From there, the analysis moved further by subjecting the genes and their pathways to a network analysis to identify genes that could be “key drivers” of the diseases. The paper highlights a “top ten” list of them.
“These [key driver] genes represent central network genes which, when perturbed, can potentially affect a large number of genes involved in the CVD and T2D pathways and thus exert stronger impact on diseases,” wrote the authors, including co-senior author Xia Yang of the University of California–Los Angeles.
Potential therapeutic targets
To assess whether those genes made sense as key drivers, the research team looked them up in multiple databases that researchers have compiled about the importance of the genes in the health of mouse models.
In the paper they discuss how the implicated pathways could plausibly relate to the diseases. For example, axon guidance, normally of note in how developing fetuses build the nervous system, involves mechanisms that also happen to sustain the insulin-producing beta cells of the pancreas, which lie at the heart of diabetes. A breakdown in that pathway could leave those cells more vulnerable, affecting the processing of sugars.
With the pathways and key driver genes identified, Liu said, there are now ample opportunities for follow-up, both to refine the understanding of the role these pathways may play in vascular health outcomes and to design and test treatments that may repair them.
“Using a systems biology framework that integrates GWAS, pathways, gene expression, networks, and phenotypic information from both human and mouse populations, we were able to derive novel mechanistic insights and identify potential therapeutic targets,” the researchers wrote.
In addition to Liu, Chan, and Yang, other authors are Dr. Yen-Tsung Huang of Brown; Qingying Meng, Eric Sobel, and Aldons Lusis of UCLA; Chunyuan Wu and Lesley Tinker of the Fred Hutchinson Cancer Research Center in Seattle; and Alexander Reiner of the University of Washington.
The National Institutes of Health, the American Heart Association and the Leducq Foundation supported the research.
Publication: Kei Hang K. Chan, et al., “Shared Molecular Pathways and Gene Networks for Cardiovascular Disease and Type 2 Diabetes in Women across Diverse Ethnicities,” Circulation: Cardiovascular Genetics, November 4, 2014; doi: 10.1161/CIRCGENETICS.114.000676
Source: Brown University
Image: Liu lab/Brown University

Special Coating Prevents Batteries from Conducting Electricity after Being Swallowed

Special Coating Makes Batteries Safer
A new coated battery still conducts electricity when compressed, but not if accidentally ingested.
A new special coating developed by a team of researchers prevents electrical current from damaging the digestive tract after accidental battery ingestion.
Every year, nearly 4,000 children go to emergency rooms after swallowing button batteries — the flat, round batteries that power toys, hearing aids, calculators, and many other devices. Ingesting these batteries has severe consequences, including burns that permanently damage the esophagus, tears in the digestive tract, and in some cases, even death.
To help prevent such injuries, researchers at MIT, Brigham and Women’s Hospital, and Massachusetts General Hospital have devised a new way to coat batteries with a special material that prevents them from conducting electricity after being swallowed. In animal tests, they found that such batteries did not damage the gastrointestinal (GI) tract at all.
“We are all very pleased that our studies have shown that these new batteries we created have the potential to greatly improve safety due to accidental ingestion for the thousands of patients every year who inadvertently swallow electric components in toys or other entities,” says Robert Langer, the David H. Koch Institute Professor at MIT and a member of MIT’s Koch Institute for Integrative Cancer Research, Institute for Medical Engineering and Science (IMES), and Department of Chemical Engineering.
Langer and Jeffrey Karp, an associate professor of medicine at Harvard Medical School and Brigham and Women’s Hospital, are the senior authors of a paper describing the new battery coatings in this week’s edition of the Proceedings of the National Academy of Sciences. The paper’s lead authors are Bryan Laulicht, a former IMES postdoc, and Giovanni Traverso, a research fellow at the Koch Institute and a gastroenterologist at MGH.
Small batteries, big danger
About 5 billion button batteries are produced every year, and these batteries have become more and more powerful, making them even more dangerous if swallowed. In the United States, recent legislation has mandated warning labels on packages, and some toys are required to have battery housings that can only be opened with a screwdriver. However, there have been no technological innovations to make the batteries themselves safer, Karp says.
When batteries are swallowed, they start interacting with water or saliva, creating an electric current that produces hydroxide, a caustic ion that damages tissue. This can cause serious injury within just a couple of hours, especially if parents don’t realize right away that a child has swallowed a battery.
“Disc batteries in the esophagus require [emergency] endoscopic removal,” Traverso says. “This represents a gastrointestinal emergency, given that tissue damage starts as soon as the battery is in contact with the tissue, generating an electric current [and] leading to a chemical burn.”
The research team began thinking about ways to alter batteries so they would not generate a current inside the human body but would still be able to power a device. They knew that when batteries are inside their housing, they experience a gentle pressure. To take advantage of this, they decided to coat the batteries with a material that would allow them to conduct when under pressure, but would act as an insulator when the batteries are not being compressed.
Quantum tunneling composite (QTC), an off-the-shelf material commonly used in computer keyboards and touch screens, fit the bill perfectly. QTC is a rubberlike material, usually made of silicone, embedded with metal particles. Under normal circumstances, these particles are too far apart to conduct an electric current. However, when squeezed, the particles come closer together and start conducting. This allows QTC to switch from an insulator to a conductor, depending on how much pressure it is under.
To verify that this coating would protect against tissue damage, the researchers first calculated how much pressure the battery would experience inside the digestive tract, where movements of the tract, known as peristalsis, help move food along. They calculated that even under the highest possible forces, found in patients with a rare disorder called “nutcracker esophagus,” the QTC-coated batteries would not conduct.
“You want to know what’s the maximum force that could possibly be applied, and you want to make sure the batteries will conduct only above that threshold,” Laulicht says. “We felt that once we were well above those levels, these coatings would pass through the GI tract unchanged.”
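The design rule described here, that the coating should conduct only above a pressure threshold set safely beyond anything the GI tract can apply, can be sketched as a pressure-gated switch. All numbers below are invented for illustration; they are not the paper’s measurements.

```python
# Illustrative model of a QTC-style pressure switch: the coated battery
# conducts only when squeezed harder than a threshold chosen to sit well
# above the strongest forces the GI tract can exert. All numbers here are
# invented for illustration; they are not the paper's measured values.
GI_MAX_PRESSURE_KPA = 130.0          # assumed worst-case esophageal pressure
SAFETY_FACTOR = 3.0                  # assumed design margin

CONDUCTION_THRESHOLD_KPA = GI_MAX_PRESSURE_KPA * SAFETY_FACTOR

def conducts(applied_pressure_kpa: float) -> bool:
    """True when the coating switches from insulator to conductor."""
    return applied_pressure_kpa > CONDUCTION_THRESHOLD_KPA

# Inside a device housing: firm compression, well above the threshold.
assert conducts(1000.0)
# Swallowed: even worst-case peristaltic squeezing stays below the threshold.
assert not conducts(GI_MAX_PRESSURE_KPA)
print("coated battery conducts only under housing-level compression")
```

The safety margin between the worst-case GI pressure and the conduction threshold is what makes the scheme robust even for patients with unusually strong esophageal contractions.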
After those calculations were done, the researchers monitored the coated batteries in the esophagus of a pig, and found no signs of damage.
“A relatively simple solution”
Because QTC is relatively inexpensive and already used in other consumer products, the researchers believe battery companies could implement this type of coating fairly easily. They are now working on developing a scalable method for manufacturing coated batteries and seeking companies that would be interested in adopting them.
“We were really interested in trying to impose design criteria that would allow us to have an accelerated path to get this out into society and reduce injuries,” Karp says. “We think this is a relatively simple solution that should be easy to scale, won’t add significant cost, and can address one of the biggest problems associated with ingestion of these batteries.”
Also, because the coating is waterproof, the researchers believe it could be used to make batteries weather-resistant and more suitable for outdoor use. They also plan to test the coating on other types of batteries, including lithium batteries.
Edith Mathiowitz, a professor of medical science at Brown University who was not involved in the research, says she believes this approach is a “brilliant idea” that offers an easy fix for the potential dangers of battery ingestion.
“What I like about it is that it’s a simple idea you could implement very easily. It’s not something that requires a big new manufacturing facility,” she says. “And, it could be useful eventually in any type or size of battery.”
The research was funded by the National Institutes of Health.
Publication: Bryan Laulicht, et al., “Simple battery armor to protect against gastrointestinal injury from accidental ingestion,” PNAS, 2014; doi: 10.1073/pnas.1418423111
Source: Anne Trafton, MIT News

Noninvasive Technique Measures Fruit and Vegetable Consumption

Laser Technique Measures Fruit and Vegetable Consumption in Skin
A new noninvasive technique objectively measures fruit and vegetable consumption, helping nutritionists and medical professionals measure and improve the diets of children and adults.
A diet rich in fruit and vegetables is linked to a variety of improved health outcomes, but accurately measuring consumption by self-report, especially with children, is challenging and can be of questionable validity.
But a device being developed in a collaboration that involves researchers from the Yale School of Public Health has the potential to change that.
Researchers demonstrated for the first time that the device, which uses blue laser light to quickly and painlessly scan the skin of a subject’s palm, accurately measures changes in a biomarker known as skin carotenoids in response to an intervention involving a diet enriched in fruits and vegetables.
In a study that appears in the American Journal of Clinical Nutrition, the researchers report that the noninvasive technique tracked skin carotenoid changes over a 28-week period that was divided into distinct dietary phases marked by high and low intake of provided fruit and vegetables along with a phase where study participants resumed their usual diets.
The palm-reading device, which uses resonance Raman spectroscopy (RRS), has the potential to help nutritionists and other medical professionals measure and improve the diets of children and adults alike.
“There is great interest in the development of objective biomarkers of dietary intake, especially biomarkers that can be measured non-invasively,” said Susan T. Mayne, C.-E. A. Winslow Professor of Epidemiology, one of the study’s authors and a developer of the device. “Our earlier studies demonstrated a correlation between skin carotenoids and fruit and vegetable intake; this new paper demonstrates that the biomarker was sensitive to changes in fruit and vegetable intake in the intervention setting. Many diet interventions lack objective verification that subjects actually changed intake; this research demonstrates that skin carotenoids can serve that purpose.”
The RRS device works by measuring changes in energy levels of electrons in molecules after the laser has excited them. It consists of a flexible fiberoptic probe connected to a boxlike central machine; the probe is held against an individual’s palm for about 30 seconds while the light interacts with carotenoids in the skin. Then software on an attached laptop processes the results, which takes another 30 seconds.
Diets rich in fruit and vegetables have been linked to important health outcomes, including reductions in cardiovascular disease, type 2 diabetes and some forms of cancer. But only 11 percent of the U.S. population currently meets the daily recommendations for vegetable consumption, while 20 percent meets the guideline for fruit.
Subjective methods of measuring fruit and vegetable intake, such as questionnaires, are prone to bias and error. Measuring carotenoids in the blood, meanwhile, provides a highly accurate result, but the invasive process is more expensive and especially difficult with children.
Brenda Cartmel, a senior research scientist and lecturer at the Yale School of Public Health, is a co-author of the paper, along with researchers from the USDA/Agricultural Research Service Grand Forks Human Nutrition Research Center, and the University of Utah.
Publication: Lisa Jahns, et al., “Skin and plasma carotenoid response to a provided intervention diet high in vegetables and fruit: uptake and depletion kinetics,” American Journal of Clinical Nutrition, September 2014, vol. 100 no. 3 930-937; doi: 10.3945/ajcn.114.086900
Source: Michael Greenwood, Yale University

How to Land on a Comet Moving 40 Times Faster Than a Speeding Bullet

A new four-minute ScienceCast video previews the first-ever landing on a comet, showing how difficult it will be for the Philae lander: comet 67P will be moving 40 times faster than a speeding bullet, spinning, and shooting out gas and dust.
ScienceCasts: How to Land on a Comet
The European Space Agency’s Rosetta spacecraft is about to attempt something “ridiculously difficult” – landing a probe on the surface of a speeding comet.
Generally speaking, space missions fall into one of three categories: difficult, more difficult, and ridiculously difficult.
Flybys are difficult. A spaceship travels hundreds of millions of miles through the dark void of space, pinpoints a distant planet or moon, and flies past it at 20 to 30 thousand mph, snapping pictures furiously during an achingly brief encounter.
Going into orbit is more difficult. Instead of flying past its target, the approaching spaceship brakes, changing its velocity by just the right amount to circle the planet. One wrong move and the spacecraft bounces off the atmosphere, becoming an unintended meteor.
Landing is ridiculously difficult. Just watch NASA’s “Seven Minutes of Terror” video. Watching Curiosity parachute, retrorocket, and sky-crane its way to the surface of Mars rarely fails to produce goosebumps. Since the Space Age began, the space agencies of Earth have succeeded in landing on only six bodies: Venus, Mars, the Moon, Titan, and the asteroids 433 Eros and Itokawa.
In a move that could set a new standard for difficulty, the European Space Agency is about to add a seventh member to the list. On November 12th ESA’s Rosetta spacecraft will drop a lander named “Philae” onto the surface of Comet 67P/Churyumov–Gerasimenko.
“How hard is this landing?” asks Art Chmielewski, the US Rosetta Project Manager at JPL. “Consider this: The comet will be moving 40 times faster than a speeding bullet, spinning, shooting out gas and welcoming Rosetta on the surface with boulders, cracks, scarps and possibly meters of dust!”
Rosetta will drop Philae from a height of 22 km as the comet rotates freely below. No active steering will take place during the slow descent.
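A back-of-envelope calculation shows why such a long, slow fall ends at only walking pace. Using rough published figures for 67P (a gravitational parameter of about 667 m³/s² and an effective radius of about 2 km, neither of which appears in this article), energy conservation for a radial drop from rest gives a touchdown speed under 1 m/s:

```python
import math

# Rough parameters for comet 67P (assumed values, not from the article):
GM = 667.0                       # gravitational parameter, m^3/s^2 (mass ~1e13 kg)
r_surface = 2000.0               # effective comet radius, m
r_drop = r_surface + 22_000.0    # Philae released from 22 km altitude

# Energy conservation for a radial fall from rest in an inverse-square field:
# v^2 = 2 * GM * (1/r_surface - 1/r_drop)
v_touchdown = math.sqrt(2 * GM * (1 / r_surface - 1 / r_drop))
print(f"touchdown speed ≈ {v_touchdown:.2f} m/s")  # well under 1 m/s
```

This ignores Philae’s small release velocity and the comet’s lumpy gravity field, so it is only an order-of-magnitude sketch, but it agrees with the mission description: hours of descent ending at roughly human walking speed.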
“Unlike previous landings, where reconnaissance had been done beforehand (at Mars, for instance, we mapped the planet well in advance), Rosetta just started learning about its target a couple of months ago,” explains Claudia Alexander, Project Scientist for the U.S. Rosetta Project. “This introduces much more risk.”
Rosetta arrived at 67P on August 6, 2014. What it found was shocking. The comet’s nucleus is strangely shaped (one observer has likened it to a “freak-show mushroom”), dominated by a pair of mile-wide “knobs” joined by a boulder-strewn “neck.” Picking a landing site would not be easy.
Rosetta spent more than a month surveying the comet before engineers and scientists gathered in France to make their decision.
“None of the candidate landing sites met all of the operational criteria at the 100% level,” says Stephan Ulamec, Philae Lander Manager at the German Aerospace Center (DLR), “but Site J is clearly the best solution.”
Site J is a relatively flat, boulder-free location on the smaller of the comet’s knobs. It gets plenty of sunlight for the lander’s solar panels and has good line-of-sight visibility for communications with Rosetta orbiting overhead.
The descent will take about 7 hours, a drawn-out process that could be enlivened by unpredictable jets of gas emerging from the comet’s core.
You thought 7 minutes of terror was bad? “This will be Seven Hours of Terror,” says Alexander.
If all goes well, Philae will touch down at walking pace and deploy harpoons to fasten itself to the crusty surface. A suite of 10 sensors on the lander, including a drill for sample collection and an acoustic sounder to probe the comet’s sub-surface structure, can then begin an unprecedented study of a comet at point-blank range.
“A comet is unlike any other planetary body that we’ve attempted to land on,” says Alexander. “Getting Philae down successfully will be an incredible achievement for humankind!”
Try your hand at landing a spacecraft on a comet with NASA Space Place’s Comet Quest: http://spaceplace.nasa.gov/comet-quest/
Source: Dr. Tony Phillips, Science@NASA

Human Body Weight Influenced by Microbes in the Gut

Research Shows Weight Influenced by Microbes in the Gut
New research from King’s College London and Cornell University reveals that our genetic makeup influences whether we are fat or thin by shaping which types of microbes thrive in the body.
By studying pairs of twins at King’s Department of Twin Research, researchers identified a specific, little-known bacterial family that is highly heritable and more common in individuals with low body weight. This microbe also protected against weight gain when transplanted into mice.
The results, published today in the journal Cell, could pave the way for personalized probiotic therapies that are optimized to reduce the risk of obesity-related diseases based on an individual’s genetic make-up.
Previous research has linked both genetic variation and the composition of gut microbes to metabolic disease and obesity. Despite these shared effects, the relationship between human genetic variation and the diversity of gut microbes was presumed to be negligible.
In the study, funded by National Institutes of Health (NIH), researchers sequenced the genes of microbes found in more than 1,000 fecal samples from 416 pairs of twins. The abundances of specific types of microbes were found to be more similar in identical twins, who share 100 per cent of their genes, than in non-identical twins, who share on average only half of the genes that vary between people. These findings demonstrate that genes influence the composition of gut microbes.
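The logic of the twin comparison can be made concrete with Falconer’s classic estimate, which puts heritability at roughly twice the gap between identical-twin and fraternal-twin similarity. The correlations below are hypothetical illustrations, not figures from the study:

```python
# Falconer's twin-study estimate: heritability h^2 ≈ 2 * (r_MZ - r_DZ),
# where r_MZ and r_DZ are similarity correlations for identical (monozygotic)
# and fraternal (dizygotic) twin pairs. Numbers here are made up for illustration.
def falconer_h2(r_mz, r_dz):
    """Return a crude heritability estimate, clipped to the [0, 1] range."""
    return max(0.0, min(1.0, 2 * (r_mz - r_dz)))

# If identical twins' microbe abundances correlate at 0.45 and fraternal
# twins' at 0.25, the excess similarity suggests the trait is partly heritable.
print(falconer_h2(0.45, 0.25))
```

The same intuition drives the study’s conclusion: identical twins share all their genes, fraternal twins only about half of the variable ones, so any systematic excess similarity in identical twins points to a genetic contribution.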
The type of bacteria whose abundance was most heavily influenced by host genetics was a recently identified family called ‘Christensenellaceae’. Members of this health-promoting bacterial family were more abundant in individuals with a low body weight than in obese individuals. Moreover, mice that were treated with this microbe gained less weight than untreated mice, suggesting that increasing the amounts of this microbe may help to prevent or reduce obesity.
Professor Tim Spector, Head of the Department of Twin Research and Genetic Epidemiology at King’s College London, said: ‘Our findings show that specific groups of microbes living in our gut could be protective against obesity – and that their abundance is influenced by our genes. The human microbiome represents an exciting new target for dietary changes and treatments aimed at combating obesity.
‘Twins have been incredibly valuable in uncovering these links – but we now want to promote the use of microbiome testing more widely in the UK through the British Gut Project. This is a crowd-sourcing experiment that allows anyone with an interest in their diet and health to have their personal microbes tested genetically using a simple postal kit and a small donation via our website (www.britishgut.org). We want thousands to join up so we can continue to make major discoveries about the links between our gut and our health.’
Ruth Ley, Associate Professor at Cornell University in the United States, said: ‘Up until now, variation in the abundances of gut microbes has been explained by diet, the environment, lifestyle, and health. This is the first study to firmly establish that certain types of gut microbes are heritable — that their variation across a population is in part due to host genotype variation, not just environmental influences. These results will also help us find new predictors of disease and aid prevention.’
The study was also supported by the National Institute for Health Research (NIHR) BioResource Clinical Research Facility and Biomedical Research Center based at Guy’s and St Thomas’ NHS Foundation Trust and King’s College London.
Publication: Julia K. Goodrich, et al., “Human Genetics Shape the Gut Microbiome,” Cell, Volume 159, Issue 4, p789–799, 6 November 2014; doi:10.1016/j.cell.2014.09.053
Source: King’s College London

New On-Site Fabrication Process Makes Taller Wind Turbines More Feasible

New Fabrication Process Makes Taller Wind Turbines More Feasible
Model of a turbine constructed with Keystone Tower System’s spiral tapered welding process. Courtesy of Keystone Tower Systems
MIT engineers have developed a new fabrication system that adapts a traditional pipe-making technology to build wind turbine towers on location at wind farms, making taller towers more economically feasible.
Wind turbines across the globe are being made taller to capture more energy from the stronger winds that blow at greater heights.
But it’s not easy, or sometimes even economically feasible, to build taller towers, with shipping constraints on tower diameters and the expense involved in construction.
Now Keystone Tower Systems — co-founded by Eric Smith ’01, SM ’07, Rosalind Takata ’00, SM ’06, and Alexander Slocum, the Pappalardo Professor of Mechanical Engineering at MIT — is developing a novel system that adapts a traditional pipe-making technology to churn out wind turbines on location, at wind farms, making taller towers more economically feasible.
Keystone’s system is a modification of spiral welding, a process that’s been used for decades to make large pipes. In that process, steel sheets are fed into one side of a machine, where they’re continuously rolled into a spiral, while their edges are welded together to create a pipe — sort of like a massive paper-towel tube.
Developed by Smith, Takata, and Slocum — along with a team of engineers, including Daniel Bridgers SM ’12 and Dan Ainge ’12 — Keystone’s system allows the steel rolls to be tapered and made of varying thickness, to create a conical tower. The system is highly automated — using about one-tenth the labor of traditional construction — and uses steel to make the whole tower, instead of concrete. “This makes it much more cost-effective to build much taller towers,” says Smith, Keystone’s CEO.
With Keystone’s onsite fabrication, Smith says, manufacturers can make towers that reach more than 400 feet. Wind that high can be up to 50 percent stronger and, moreover, isn’t blocked by trees, Smith says. A 460-foot tower, for instance, could increase energy capture by 10 to 50 percent, compared with today’s more common 260-foot towers.
“That’s site-dependent,” Smith adds. “If you go somewhere in the Midwest where there’s open plains, but no trees, you’re going to see a benefit, but it might not be a large benefit. But if you go somewhere with tree cover, like in Maine — because the trees slow down the wind near the ground — you can see a 50 percent increase in energy capture for the same wind turbine.”
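Smith’s site-dependent range is consistent with standard wind engineering: wind speed grows with height roughly as a power law, and the power available in the wind scales with the cube of speed. The shear exponents below are textbook values for different terrain (an assumption on my part, not figures from Keystone):

```python
# Wind shear power law: v(h) = v_ref * (h / h_ref) ** alpha,
# and available wind power scales as v**3.
# alpha is terrain-dependent: roughly 0.1 over open plains,
# roughly 0.25 over forested or rough terrain (textbook values).
def energy_gain(h_new_ft, h_old_ft, alpha):
    """Fractional gain in available wind power from raising hub height."""
    return (h_new_ft / h_old_ft) ** (3 * alpha) - 1

for terrain, alpha in [("open plains", 0.10), ("forest cover", 0.25)]:
    gain = energy_gain(460, 260, alpha)
    print(f"{terrain}: ~{gain:.0%} more energy at 460 ft vs 260 ft")
```

Under these assumed exponents, raising the hub from 260 to 460 feet yields a gain of roughly 20 percent over open plains and over 50 percent over tree cover, matching the 10-to-50-percent, site-dependent range quoted in the article.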
Keystone Fabrication Process Makes Taller Wind Turbines More Feasible
In Keystone’s fabrication process, trapezoid-shaped steel sheets of increasing sizes are fed into a modified spiral welding machine — with the shorter size fed into the machine first, and the longest piece fed in last. Welding their edges assembles the sheets into a conical shape. Courtesy of Keystone Tower Systems
Solving transport problems
The Keystone system’s value lies in skirting wind-turbine transportation constraints that have plagued the industry for years. Towers are made in segments to be shipped to wind farms for assembly. But they’re restricted to diameters of about 14 feet, so trucks can safely haul them on highways and under bridges.
This means that in the United States, most towers for 2- or 3-megawatt turbines are limited to about 260 feet. In Europe, taller towers (up to about 460 feet) are becoming common, but these require significant structural or manufacturing compromises: They’re built using very thick steel walls at the base (requiring more than 100 tons of excess steel), or with the lower half of the tower needing more than 1,000 tons of concrete blocks, or pieced together with many steel elements using thousands of bolts.
“If you were to design a 500-foot tower to get strong winds, based on the force exerted on a turbine, you’d want something at least 20 feet in diameter at the base,” Smith explains. “But there’s no way to weld together a tower in a factory that’s 20 feet in diameter and ship it to the wind farm.”
Instead, Keystone delivers its mobile, industrial-sized machine and the trapezoid-shaped sheets of steel needed to feed into the system. Essentially, the sheets are trapezoids of increasing sizes — with the shorter size fed into the machine first, and the longest piece fed in last. (If you laid all the sheets flat, edge-to-edge, they’d form an involute spiral.) Welding their edges assembles the sheets into a conical shape. The machine can make about one tower per day.
Any diameter is possible, Smith says. For 450-foot, 3-megawatt towers, a base 20 feet in diameter will suffice. (Increasing diameters by even a few feet, he says, can make towers almost twice as strong to handle stress.)
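Smith’s “almost twice as strong” claim follows from elementary beam theory rather than anything proprietary: for a thin-walled tube of fixed wall thickness, the bending section modulus scales with the square of the diameter. A minimal sketch, using standard thin-wall formulas (not a Keystone design equation):

```python
# Thin-walled tube in bending: section modulus Z ≈ (pi / 4) * d**2 * t,
# so at fixed wall thickness t, bending strength grows as diameter squared.
def relative_strength(d_new_ft, d_old_ft):
    """Ratio of bending strength after widening a thin-walled tower base."""
    return (d_new_ft / d_old_ft) ** 2

# Widening the base from the 14-foot road-shipping limit to 20 feet:
print(f"{relative_strength(20, 14):.2f}x the bending strength")
```

Going from the 14-foot trucking limit to the 20-foot base Smith describes roughly doubles the tower’s bending strength for the same wall thickness, which is exactly the kind of gain that onsite fabrication unlocks.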
Smith compares the process to today’s at-home installation of rain gutters: For that process, professionals drive to a house and feed aluminum coils into one end of a specialized machine that shapes the metal into a seamless gutter. “It’s a better alternative to buying individual sections and bringing them home to assemble,” he says. “Keystone’s system is that, but on a far, far grander scale.”
Behind Keystone
Smith, who studied mechanical engineering and electrical engineering and computer science at MIT, conceived of a tapered spiral-welding process while conducting an independent study on wind-energy issues with Slocum.
Running a consulting company for machine design after graduating from MIT, Smith was vetting startups and technologies in wind energy, and other industries, for investors. As wind energy picked up steam about five years ago, venture capitalists soon funded Smith, Slocum, and other wind-energy experts to study opportunities for cost savings in large, onshore wind turbines.
The team looked, for instance, at developing advanced drivetrain controls and rotor designs. “But out of that study we spotted tower transport as one of the biggest bottlenecks holding back the industry,” Smith says.
With Slocum’s help, Smith worked out how to manipulate spiral-welding machines to make tapered tubes and, soon thereafter, along with Slocum, designed a small-scale, patented machine funded by a $1 million Department of Energy grant. In 2010, Smith and Slocum launched Keystone with Rosalind Takata ’01, SM ’06 to further develop the system in Somerville, Mass. The company has since relocated its headquarters to Denver.
In launching Keystone, Smith gives some credit to MIT’s Venture Mentoring Service (VMS), which advised the startup’s co-founders on everything from early company formation to scaling up the business. Smith still keeps in touch with VMS for advice on overcoming common commercialization roadblocks, such as obtaining and maintaining customers.
“It’s been extremely valuable,” he says of VMS. “There are many different topics that come up when you’re founding an early-stage company, and it’s good to have advisors who’ve seen it all before.”
Opening up the country
Keystone is now conducting structural validation of towers created by its system in collaboration with structural engineers at Northeastern University and Johns Hopkins University. For the past year, the startup’s been working toward deploying a small-scale prototype (about six stories high) at the MIT-owned Bates Linear Accelerator Center in Middleton, Mass., by early 2015.
But last month, Keystone received another $1 million DOE grant to design the full mobile operation. Now, the company is working with the Danish wind-turbine manufacturer Vestas Wind Systems, and other turbine makers, to plan out full-scale production, and is raising investments to construct the first commercial-scale machine.
Although their first stops may be Germany and Sweden — where taller wind towers are built more frequently, but using more expensive traditional methods — Smith says he hopes to sell the system in the United States, where shorter towers (around 260 feet) are still the norm.
The earliest adopters in the United States, he says, would probably be areas where there is strong wind, but also dense tree cover. In Maine, for example, there’s only a small percentage of the state where wind power is economically feasible today, because trees block wind from the state’s shorter turbines. In the Midwest, wind energy has already reached grid-parity, undercutting even today’s low-cost natural gas — but in areas like New England and the Southeast, taller towers are needed to reach the strong winds that make wind energy economically feasible.
“Once you’re at the heights we’re looking at,” Smith says, “it really opens up the whole country for turbines to capture large amounts of energy.”
Source: Rob Matheson, MIT News