To obtain a hard copy of the Myers-Briggs Type Indicator (MBTI®), the most popular personality test in the world, one must first spend $1,695 on a week-long certification program run by the Myers & Briggs Foundation of Gainesville, Florida.
This year alone, there have been close to 100 certification sessions in cities including New York, Pasadena, Minneapolis, Portland, Houston, and the Foundation’s hometown of Gainesville, where participants get a $200 discount for making their way south to the belly of the beast. It is not unusual for sessions to sell out months in advance. People come from all over the world to get certified.
In New York last April, there were twenty-five aspiring MBTI practitioners in attendance. There was a British oil executive who lived for half the year under martial law in Equatorial Guinea. There was a pretty blonde astrologist from Australia, determined to invest in herself now that her US work visa was about to expire. There was a Department of Defense administrator, a gruff woman who wore flowing skirts and rainbow-rimmed glasses, and a portly IBM manager turned high school basketball coach. There were three college counselors, five HR reps, and a half-dozen “executive talent managers” from Fortune 500 companies. Finally, there was me.
I was in an unusual position that week: Attending the certification program had not been my idea. Rather, I had been told that MBTI certification was a prerequisite to accessing the personal papers of Isabel Briggs Myers, a woman about whom very little is known except that she designed the type indicator in the final days of World War II. Part of our collective ignorance about Myers stems from how profoundly her personal history has been eclipsed by her creation, in much the same way that the name “Frankenstein” has come to stand in for the monster and not his creator.
Flip through the New York Times or Wall Street Journal, and you will find the indicator used to debate what makes an employee a good “fit” for her job, or to determine the leadership styles of presidential candidates. Open a browser, and you will find the indicator adapted for addictive pop psychology quizzes by BuzzFeed and Thought Catalog. Enroll in college, work an office job, enlist in the military, join the clergy, fill out an online dating profile, and you will encounter the type indicator in one guise or another — to match a person to her ideal office job or to her ideal romantic partner.
Yet though her creation is everywhere, Myers and the details of her life’s work are curiously absent from the public record. Not a single independent biography is in print today. Not one article details how Myers, an award-winning mystery writer who possessed no formal training in psychology or sociology, concocted a test routinely deployed by 89 of the Fortune 100 companies, the US government, hundreds of universities, and online dating sites like Perfect Match, Project Evolove and Type Tango. And not one expert in the field of psychometric testing, a $500 million industry with over 2,500 different tests on offer in the US alone, can explain why Myers-Briggs has so thoroughly surpassed its competition, emerging as a household name on par with the Atkins Diet or The Secret.
Less obvious at first, and then wholly undeniable, is how hard the present-day guardians of the type indicator work to shield Myers’s personal and professional history from critical scrutiny. For the foundation, as well as for its for-profit research arm, the Center for Applications of Psychological Type (CAPT), this means keeping journalists far away from Myers’s notebooks, correspondence and research materials, which are stored in the Special Collections division of the University of Florida library. Although they are technically the property of the university — thus open to the public — Myers’s papers require permission from CAPT to access; permission that has not been granted to anyone in the decade since the papers were donated to the university by Myers’s granddaughter, Katharine Hughes. Twice I was warned by the university librarian, a kind and rueful man, that CAPT was “very invested in protecting Isabel’s image.” Why her image should need protection, I did not yet understand.
When I wrote to CAPT in August 2014, I received an enthusiastically officious email from their Director of Research Operations, requesting additional details about my interest in the type indicator and a book I was planning to write on personality testing. “Will there be descriptions and historical background about other personality tests in addition to the MBTI instrument?” she wrote. “If so, we would like to be informed.” So began nine months of correspondence with the staff of CAPT, which culminated this April in their request that I become a certified administrator of the MBTI instrument. Certification was a necessary precursor to giving me access to the papers, the director told me over the phone. CAPT would even be willing to consider “possibilities for funding the training.”
This is how I found myself in the company of the oil man, the astrologist, the Department of Defense administrator and twenty other people at the certification workshop, held in a sixth-floor conference room of the United Jewish Appeal Federation building on East 59th Street. We sat at tables of five or six, our backs pressed against a smoked-glass wall decorated with etchings of Seder plates, unfurling braids of challah, and half-lit menorahs. Each of us wore a name tag with our first name, last name, and our four-letter type printed on it in big block letters. It was not unusual for people to lead with their type when they introduced themselves.
I said hello to the woman sitting next to me. Her name tag said “Laurie — ENFJ.”
Laurie checked me out and sighed, relieved. “We’re both E’s,” she said. “We’ll get along great.”
“The most important part of becoming MBTI certified is learning to speak type,” declares Barbara, our instructor for the next week and a self-proclaimed “clear ENTJ.” Dressed in black, with prominent red toenails and a commanding nasal tone, Barb, as she insists we call her, will teach us how to “speak type fluently.”
“This is only the beginning!” Barb says. “Just think of this as a language immersion program.”
The comparison is an apt one. There are sixteen types, each made up of a combination of four different letters. Each letter represents one of two poles in a strict dichotomy of human behavior. From the pre-training test I took earlier in the week, I learn that, like Barb, I too am an “ENTJ.” I prefer extraversion (E) to introversion (I), intuition (N) to sensing (S), thinking (T) to feeling (F), and judging (J) to perception (P). It is strange, this tidy division of myself into these alien categories. Initially, I have trouble keeping the letters straight. Strange too is the ease with which people around me speak their types, as if declaring oneself a “clear ENTJ” or a “borderline ISFP” were the most natural thing in the world.
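The arithmetic behind the system is as tidy as the division itself: four dichotomies, two poles each, sixteen types. As an illustrative sketch (not anything from the official MBTI materials), the full roster can be enumerated in a few lines:

```python
from itertools import product

# The four dichotomies, each a pair of opposed "preferences".
dichotomies = [("E", "I"), ("N", "S"), ("T", "F"), ("J", "P")]

# Every four-letter type is one choice from each pair: 2**4 = 16 in all.
types = ["".join(combo) for combo in product(*dichotomies)]

print(len(types))  # 16
print(types[:4])   # ['ENTJ', 'ENTP', 'ENFJ', 'ENFP']
```

Sixteen labels, one of which is meant to contain every "normal" person on Earth.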
Of course, speaking type is anything but natural. Still Barb’s job is to convince us that this simple system of thought can account for the messiness of many of our personal and interpersonal relationships, regardless of gender, race, class, age, language, education, or any of the other intricacies of human existence. Type is intensely democratizing in its vision of the world, weird and wonderful in its commitment to flattening the material differences between people only to construct new and imaginary borders around the self. Its populism is most clearly demonstrated by MBTI’s astonishing geographic reach: Last year, two million people took the test, in seventy different countries, and in twenty-one languages. “As long as you have a seventh grade reading level and you’re a ‘normal’ person” — by which Barb means, you are not mentally ill or blithely psychopathic — “you can learn to speak type.”
Across all languages and continents, however, the first rule of speaking type remains the same. You do not, under any circumstances, refer to MBTI as a “test.” It is a “self-reporting instrument” or, more succinctly, an “indicator.” “People use the word ‘test’ all the time,” Barb complains. “But what you’re taking is an indicator. It’s indicating based on what you told the test.”
Although her statement sounds tautological, Barb assures us that it is not. Unlike a standardized test like the SAT, which asks the test taker to choose between objectively right and wrong answers, the MBTI instrument has no right or wrong answers, only competing preferences. Take, for instance, two questions from the test I took last April: “In reading for pleasure, do you: (A) Enjoy odd or original ways of saying things, or (B) Like writers to say exactly what they mean.” And: “If you were a teacher, would you rather teach: (A) Fact courses, or (B) Courses involving theory?” And unlike the SAT, in which a higher score is always more desirable than a lower one, there are no better or worse types. All types, Barb announces rapturously, are created equal.
The indicator’s sole measure of success, then, is how well the test aligns with your perception of your self: Do you agree with your designated type? If you don’t, the problem lies not with the indicator, but with you. Maybe you were in a “work mindset when you answered the questions,” Barb suggests. Or you had become unusually adept at “veiling your preferences” to suit the wants and needs of your husband or wife, your co-workers, your children. Whatever the case may be, somehow you were inhibited from answering the questions as your “shoes off self” — Isabel Briggs Myers’s term for the authentic you.
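Mechanically, a "self-reporting instrument" of this kind is little more than a tally: each forced-choice answer nudges one pole of a dichotomy, and whichever pole collects more votes becomes a letter. The sketch below is a hypothetical toy, not the actual MBTI scoring algorithm (which is proprietary and more involved); it only illustrates the majority-vote idea:

```python
from collections import Counter

# Toy illustration of preference tallying. The real instrument weights
# items and breaks ties differently; here a tie simply goes to the
# first-listed pole.
POLES = {"EI": ("E", "I"), "NS": ("N", "S"),
         "TF": ("T", "F"), "JP": ("J", "P")}

def score(answers):
    """answers: list of (dichotomy, pole) votes, e.g. ("EI", "E")."""
    tally = Counter(answers)
    letters = []
    for dim, (a, b) in POLES.items():
        # The pole with more votes wins its letter.
        letters.append(a if tally[(dim, a)] >= tally[(dim, b)] else b)
    return "".join(letters)

votes = [("EI", "E"), ("EI", "E"), ("NS", "S"),
         ("TF", "T"), ("JP", "P"), ("JP", "P")]
print(score(votes))  # ESTP
```

Notice what the toy makes plain: the output is built entirely from the votes you cast, which is why the indicator can only ever hand back the self you reported to it.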
More cynically, what this seems to mean is that the indicator can never be wrong. No matter how forcefully you may protest your type, the indicator’s only claim is that it holds a mirror up to your psyche. Behind all the pseudo-scientific talk of “instruments” and “indicators” is a simple, but subtle, truth: the test reflects whatever version of your self you want it to reflect. If what you want is to see yourself as odd or original or factual and direct, it only requires a little bit of imagination to nudge the test in the right direction, to rig the outcome ahead of time. I do not mean this in any overtly manipulative sense. Most people do not lie outright, for to do so would be to shatter the illusion of self-discovery that the test projects. I mean, quite simply, that to succeed, a personality test must introduce the test taker to the preferred version of her self — a far cry, in many cases, from the “shoes off,” authentic you.
But Barb doesn’t pause to meditate on the language lesson she has started to give us. Instead she projects onto a large screen behind her a photograph of a pale and bespectacled man in a neat cravat. Peering over us is Carl Gustav Jung, the Swiss psychiatrist whose 654-page study Psychological Types (1923) inspired Myers’s development of the indicator. Jung was “all about Freud, the couch, neurosis!” Barb laughs. For the purposes of our training, the relationship between his theory of psychological types and Myers’s commodification of it is a matter of good branding strategy. “Jung is a very respected name, a big name,” Barb says. “Even if you don’t know who he was, know his name. His name gives the test validity.”
Validity is crucial to selling the test, even if it doesn’t mean exactly what Barb seems to think it does. After the certification session is over, the participants will return to work with a 5-by-7 diploma, a brass “MBTI” pin, and a stack of promotional materials that they are encouraged to use to persuade their clients or colleagues to take an MBTI assessment. The test costs $49.95 per person, more if you want a full breakdown of your type, and even more if you want an MBTI-certified consultant to debrief your type with you. No one questions the sheer ingenuity of this sales scheme. We are paying $1,695 to attend a course that authorizes us to recruit others to buy a product — a product which tells us nothing more than what we already know about ourselves.
Although Barb invokes Jung’s name with pride and a touch of awe, Jung would likely be greatly displeased, if not embarrassed, by his long-standing association with the indicator. The history of his involvement with Myers begins not with Isabel, but with her mother Katharine Cook Briggs, whom Barb mentions only in passing. After the photograph of Jung, Barb projects onto the screen a photograph of Katharine, unsmiling and broad-necked and severely coiffed. “I usually don’t get into this,” she says, gesturing at Katharine’s solemn face. “People have already bought into the instrument.”
Yet Katharine is an interesting woman, a woman who might have interested Betty Friedan or Gloria Steinem or any second-wave feminist eager to dismantle the opposition between “the happy modern housewife” and the “unhappy careerist.” A stay-at-home mother and wife who had once studied horticulture at Michigan Agricultural College, Katharine was determined to approach motherhood like an elaborate plant growth experiment: a controlled study in which she could trace how a series of environmental conditions would affect the personality traits her children expressed. In 1897, Isabel emerged — her mother’s first subject. From the day of her birth until the child’s thirteenth birthday, Katharine kept a leather-bound diary of Isabel’s developments, which she pseudonymously titled The Life of Suzanne. In it, she painstakingly recorded the influence that different levels of feeding, cuddling, cooing, playing, reading, and spanking had on Isabel’s “life and character.”
Today we might think of Katharine as the original helicopter parent: hawkish and over-present in her maternal ministrations. But in 1909, Katharine’s objectification of her daughter answered feminist Ellen Key’s resounding call for a new and more scientific approach to “the vocation of motherhood.” More progressive still was how Katharine marshaled the data she had collected on Isabel to write a series of thirty-three articles in The Ladies Home Journal on the science of childrearing. These articles, which were intended to help other mothers systematize their childcare routines, boasted such single-minded titles as “Why I Believe the Home Is the Best School” and “Why I Find Children Slow in Their School Work.” Each appeared under the genteel nom de plume “Elizabeth Childe.”
It is not surprising that Jung’s work should pique the interest of “Elizabeth Childe,” an aspiring pedagogue who perceived the maturation of her child’s personality as nothing less than an experimental form to be cultivated, even perfected, over the years. Indeed, Katharine first encountered an English translation of Jung’s Psychological Types in 1923, when she was editing The Life of Suzanne to submit to publishers. She found Psychological Types an unwieldy text, part clinical assessment, part romantic meditation on the nature of the human soul, which emphasized the “creative fantasy” required for psychological thought. Katharine took this as an invitation to start thinking of her children’s personalities as divided into three oppositional axes: extraverted versus introverted, intuitive versus sensory, thinking versus feeling. In 1927, she wrote to Jung to express her feverish admiration for his work — her “Bible,” she called it — and her desire to bring a more practical approach to his densely theoretical observations, which her “children … had been greatly helped by.”
“How wasteful children are, even with their own precious, irreplaceable lives!” Jung once wrote to Freud, a letter that might have doubled as his irritated response to Katharine and her request to collaborate. From the outset, it seems that Jung was impressed by Katharine’s brilliance and flattered by her enthusiasm, but skeptical of her eagerness to bring his typology to the science of childrearing. When Katharine wrote to him for advice about a neighborhood child, a young girl in great emotional distress who she believed she could cure through Jungian type analysis, Jung rebuked her for overstepping her bounds as a dispassionate observer. “You overdid it,” he wrote. “You wanted to help, which is an encroachment upon the will of others. Your attitude ought to be that of one who offers an opportunity that can be taken or rejected. Otherwise you are most likely to get in trouble. It is so because man is not fundamentally good, almost half of him is a devil.”
Despite Jung’s unwillingness to help Katharine see beyond the devil in man, some of the more practical applications of his typology appeared in a 1926 article that Katharine published in The New Republic, winningly titled “Meet Yourself: How to Use the Personality Paint Box.” In it, she would present Jung’s dichotomies as an elegant paint-by-numbers exercise, in which E/I, N/S, and T/F were the “primary character colors” that each individual could “combine and blend” to form “his own personality portrait.” Even babies, those “little bundles of psychic energy,” had types, and the sooner a mother identified her child’s type, the better it was for his mental maturity. “One need not be a psychologist in order to collect and identify types any more than one needs to be a botanist to collect and identify plants,” Katharine assured her fellow mothers. There was no need to doubt one’s ability to type one’s child.
“Meet Yourself” enjoyed quiet acclaim among parents when it was first published, but ultimately, Katharine’s desire to spread Jung’s gospel to a broader audience would inspire a shift in genre. She would abandon The Life of Suzanne as a parenting guide and turn instead to fiction, which she believed would help her reach a larger and more dedicated audience. Her longest work, written toward the end of her life, was a romance novel inspired by Psychological Types called The Guesser, the story of a love affair between two incompatible Jungian types. It was summarily rejected by ten publishers and two film producers for dwelling too much on Jung, whom no one other than Katharine was interested in, and not enough on love.
Like her mother, Isabel also began her adult life as a wife and mother. She graduated from Swarthmore in June of 1918 — Phi Beta Kappa, an aspiring fiction writer, and a moderately disillusioned newlywed, who had followed her husband first to Memphis, where he was training as a bomber pilot, and then to Philadelphia, where he enrolled in law school. In each city, she made a list of her future goals in a notebook which she titled Diary of an Introvert Determined to Extrovert, Write, & Have a Lot of Children.
Keep complete job list and do one every day.
Housekeep till 10 A.M.
Two hours writing.
One hour outdoors.
One hour self-development—music, study, friends.
Wash face with soap every night.
Never wear anything soiled.
But despite her clear goals and clean clothes, Isabel struggled to find a job. After an unfulfilling stint at a temp agency, she wrote to Katharine to complain about the difficulties of finding meaning in one’s work, particularly as a married woman who was expected to do nothing more than to have children. “I think under the spur of necessity a woman can do a man’s work as well as he can, provided she is as capable for a woman as he is for a man,” she wrote. “But I’m perfectly sure that it takes more out of her. And it’s a waste of life to spend yourself on work that someone else can do at less cost. I’m sure men and women are made differently, with different gifts and different kinds of strengths.” In a perfect world, she concluded, there would exist “some highly intelligent division of labor that can be worked out, so everybody works, but not at the wrong things.”
Isabel’s “instinctive answer” to the question of what to do with herself was to be “my man’s helpmeet.” And for nearly a decade she was. Until 1928, she did housework, gave birth to two children, and at night, when the house was in order and the children were asleep, she continued to wonder what was missing from her life. Although a husband and children and a “beloved little ivy-covered colonial house” in the suburbs were “everything in the world that I wanted,” Isabel wrote, “I knew I wanted something else.” That something else was the time and energy to pursue a career as a successful fiction writer, something her mother had never been able to realize. “In the evenings, between nine and three, stretched six heavenly, uninterrupted hours — if I could stay awake to use them,” she mused.
Working at night, but most often with one fitful child or another in her lap, Isabel started and finished a detective novel, which she promptly submitted to a mystery contest at New McClure’s magazine. The winner was to receive a $7,500 cash prize (over $100,000 today) and a book contract with a prominent New York publisher. Katharine, apparently jealous that her daughter was trying to succeed where she had once failed, offered little encouragement, only what Isabel lamented as some “cool criticisms” of the “novel’s style.” Much to her mother’s surprise, Isabel’s novel, Murder Yet to Come, took first place, surpassing the writing team behind the Ellery Queen novels, among the many other seasoned pulp writers who had vied for the prize.
Yet there was plenty of reason for Katharine, ever the devoted scholar of Jung, to appreciate how she had inculcated her daughter into speaking — or, in this case, writing — type. Unlike other detective stories of the time, which often pair a brilliantly imaginative sleuth with a more literal-minded sidekick, Murder Yet to Come features a team of three amateur detectives: an effeminate playwright, his dutiful assistant, and a brawny Army sergeant. Unburdened by crying children or any other domestic responsibilities, they set out to solve a gruesome murder. Each member of the team possesses what Isabel, in her letter to her mother, described as “different gifts and different kinds of strengths.” The playwright has the “quickness of insight” to uncover the murderer’s identity, the sergeant takes “smashingly, effective action” to apprehend him, while the assistant makes “slow, solid decisions” that protect the family of the victim from scandal. None of the detectives “works at the wrong things.” Like today’s slick police procedurals, in which there are the people who investigate the crime and those who prosecute the offenders, every character in Murder Yet to Come is designed to maximize the efficiency of the team.
As a mystery story, Murder Yet to Come is decidedly second-rate: the villain predictable, his motive commonplace, the detectives flat and uncharismatic. But as a testing ground for the Myers-Briggs type indicator, the novel is a remarkably direct receptacle for Isabel’s ideas about work, right down to its crude division of gender roles between the feminized playwright and the hyper-masculine military man. Strengths and weaknesses are distributed in a zero-sum fashion; the character who possesses a keen eye for sensory details reverts to a slow, stuttering imbecile when asked to abstract larger patterns from his observations. Friendships and working relationships are always invigorated by personality differences, never strained by them. And for death-defying detectives, the characters are all unusually self-aware, each happy to accept his personal limitations and cede authority to others when necessary, like cogs in a well-oiled machine. Reprinted by CAPT in 1995, Murder Yet to Come showcases characters who are “beautifully consistent with type portraits,” according to the foreword to the new edition. “Those readers who know type will enjoy ‘typing them’ as the mystery progresses.”
CAPT’s website, where I purchased Murder Yet to Come for $15.00, claims that the novel was Isabel’s “only sojourn into fiction” before she shifted her attention to the type indicator. This is incorrect. The company has not reprinted Isabel’s second novel, Give Me Death (1934), which revisits the same trio of detectives half a decade later. Perhaps this is due to the novel’s virulently racist plot: One by one, members of a land-owning Southern family begin committing suicide when they are led to believe that “there is in [our] veins a strain of Negro blood.” Despite their differences, the detectives agree that it is “better for [the family] to be dead” than for them to be alive, heedlessly reproducing with white people.
Give Me Death is more explicitly about the preservation of the family, but saddled with a far more sinister understanding of type: Type as racially determined. There is talk of eugenics. There is much hand wringing about the preservation of Southern family dynasties, about “honor” and “esteem.” That the novel was written in the years when laws forbidding interracial marriage were increasingly the target of ACLU and NAACP protests makes it all the more reactionary, and thus all the more unsuitable, from an image management perspective, for reissue today. One would hardly enjoy “typing” these characters.
If Isabel had started her life as her mother’s experiment, she had quickly grown into Katharine’s student, her apostle, and even her competition. Fiction had presented one way for her to unite her mother’s talk of type with the intelligent division of labor, ordering imaginary characters into a rational system with a profitable end: bringing criminals to justice. After World War II, the emergent industry of personality testing would give Isabel the opportunity to organize — and experiment on — real people.
The second rule of speaking type is: Personality is an innate characteristic, something fixed since birth and immutable, like eye color or right-handedness. “You have to buy into the idea that type never changes,” Barb says, speaking slowly and emphasizing each word so that we may remember and repeat this mantra — “Type Never Changes” — to our future clients. “We will brand this into your brain,” she vows. “The theory behind the instrument supports the fact that you are born with a four letter preference. If you hear someone say, ‘My type changed,’ they are not correct.”
Of all the questionable assumptions that prop up the Myers-Briggs indicator, this one strikes me as the shakiest: that you are “born with a four letter preference,” a reductive blueprint for how to move through life’s infinite and varied challenges. Many other personality indicators, ranging in complexity from zodiac signs to online dating questionnaires to Harry Potter’s sorting hat, share the assumption that personality is fixed in one form or another. And yet the belief in a singular and essential self has always seemed to me an irresistibly attractive fiction: One that insists on seeing each of us as a coherent human being, inclined to behave in predictable ways no matter what circumstances surround us. There is, after all, a certain narcissistic beauty to the idea that we are whole. “If personality is an unbroken series of successful gestures, then there was something gorgeous about him, some heightened sensitivity to the promises of life,” wrote F. Scott Fitzgerald of his greatest creation, Jay Gatsby, in the same year that Katharine fell under the sway of Psychological Types. Learning to speak type means learning to link the quotidian gestures of life into an easily digestible story, one capable of communicating to perfect strangers some sense of who you are and why you do what you do.
Yet the impulse to treat personality as innate is, in no small part, a convenient way of putting these gorgeously complete people in their rightful places. Just as each one of Isabel’s three detectives serves a unique purpose in her novels, a way of moving the plot forward that follows from his innate “gifts,” so too does the indicator imagine that each person will fall into their designated niche in a high-functioning and productive social order. This is another fiction — to my mind, a dystopian fiction — that most personality tests trade in: The fantasy of rational organization, and, in particular, the rational organization of labor. “The MBTI will put your personality to work!” promises a career assessment flier from Arizona State University, a promise that is echoed by thousands of leadership guides, self-help books, LinkedIn profiles, and job listings, the promise that underwrites such darkly futuristic films as Divergent or Blade Runner. To live under an economic system that is not organized by personality, thinks the heroine of Divergent, is “not just to live in poverty and discomfort; it is to live divorced from society, separated from the most important thing in life: community.”
Or as a trainee belts out in the middle of an exercise, “Teamwork makes the dream work!”
Genes, like people, have families — lineages that stretch back through time, all the way to a founding member. That ancestor multiplied and spread, morphing a bit with each new iteration.
For most of the last 40 years, scientists thought that this was the primary way new genes were born — they simply arose from copies of existing genes. The old version went on doing its job, and the new copy became free to evolve novel functions.
Certain genes, however, seem to defy that origin story. They have no known relatives, and they bear no resemblance to any other gene. They’re the molecular equivalent of a mysterious beast discovered in the depths of a remote rainforest, a biological enigma seemingly unrelated to anything else on earth.
The mystery of where these orphan genes came from has puzzled scientists for decades. But in the past few years, a once-heretical explanation has quickly gained momentum — that many of these orphans arose out of so-called junk DNA, or non-coding DNA, the mysterious stretches of DNA between genes. “Genetic function somehow springs into existence,” said David Begun, a biologist at the University of California, Davis.
This metamorphosis was once considered to be impossible, but a growing number of examples in organisms ranging from yeast and flies to mice and humans has convinced most of the field that these de novo genes exist. Some scientists say they may even be common. Just last month, research presented at the Society for Molecular Biology and Evolution in Vienna identified 600 potentially new human genes. “The existence of de novo genes was supposed to be a rare thing,” said Mar Albà, an evolutionary biologist at the Hospital del Mar Research Institute in Barcelona, who presented the research. “But people have started seeing it more and more.”
Researchers are beginning to understand that de novo genes seem to make up a significant part of the genome, yet scientists have little idea of how many there are or what they do. What’s more, mutations in these genes can trigger catastrophic failures. “It seems like these novel genes are often the most important ones,” said Erich Bornberg-Bauer, a bioinformatician at the University of Münster in Germany.
The Orphan Chase
The standard gene duplication model explains many of the thousands of known gene families, but it has limitations. It implies that most gene innovation would have occurred very early in life’s history. According to this model, the earliest biological molecules 3.5 billion years ago would have created a set of genetic building blocks. Each new iteration of life would then be limited to tweaking those building blocks.
Yet if life’s toolkit is so limited, how could evolution generate the vast menagerie we see on Earth today? “If new parts only come from old parts, we would not be able to explain fundamental changes in development,” Bornberg-Bauer said.
The first evidence that a strict duplication model might not suffice came in the 1990s, when DNA sequencing technologies took hold. Researchers analyzing the yeast genome found that a third of the organism’s genes had no similarity to known genes in other organisms. At the time, many scientists assumed that these orphans belonged to families that just hadn’t been discovered yet. But that assumption hasn’t proven true. Over the last decade, scientists sequenced DNA from thousands of diverse organisms, yet many orphan genes still defy classification. Their origins remain a mystery.
In 2006, David Begun, an evolutionary biologist at the University of California, Davis, found some of the first evidence that genes could indeed pop into existence from noncoding DNA. He compared gene sequences from the standard laboratory fruit fly, Drosophila melanogaster, with those of other closely related fruit fly species. The different flies share the vast majority of their genomes. But Begun and collaborators found several genes that were present in only one or two species and not the others, suggesting that these genes weren’t the progeny of existing ancestors. Begun proposed instead that random sequences of junk DNA in the fruit fly genome could mutate into functioning genes.
Yet creating a gene from a random DNA sequence appears as likely as dumping a jar of Scrabble tiles onto the floor and expecting the letters to spell out a coherent sentence. The junk DNA must accumulate mutations that allow it to be read by the cell or converted into RNA, as well as regulatory components that signify when and where the gene should be active. And like a sentence, the gene needs punctuation: short codes that signal where it starts and stops.
In addition, the RNA or protein produced by the gene must be useful. Newly born genes could prove toxic, producing harmful proteins like those that clump together in the brains of Alzheimer’s patients. “Proteins have a strong tendency to misfold and cause havoc,” said Joanna Masel, a biologist at the University of Arizona in Tucson. “It’s hard to see how to get a new protein out of random sequence when you expect random sequences to cause so much trouble.” Masel is studying ways that evolution might work around this problem.
Another challenge for Begun’s hypothesis was that it’s very difficult to distinguish a true de novo gene from one that has changed drastically from its ancestors. (The difficulty of identifying true de novo genes remains a source of contention in the field.)
Ten years ago, Diethard Tautz, a biologist at the Max Planck Institute for Evolutionary Biology, was one of many researchers who were skeptical of Begun’s idea. Tautz had found alternative explanations for orphan genes. Some mystery genes had evolved very quickly, rendering their ancestry unrecognizable. Other genes were created by reshuffling fragments of existing genes.
Then his team came across the Pldi gene, which they named after the German soccer player Lukas Podolski. The sequence is present in mice, rats and humans. In the latter two species, it remains silent, which means it’s not converted into RNA or protein. The DNA is active or transcribed into RNA only in mice, where it appears to be important — mice without it have slower sperm and smaller testicles.
The researchers were able to trace the series of mutations that converted the silent piece of noncoding DNA into an active gene. That work showed that the new gene is truly de novo and ruled out the alternative — that it belonged to an existing gene family and simply evolved beyond recognition. “That’s when I thought, OK, it must be possible,” Tautz said.
A Wave of New Genes
Scientists have now catalogued a number of clear examples of de novo genes: a gene in yeast that determines whether it will reproduce sexually or asexually, a gene in flies and other two-winged insects that became essential for flight, and some genes found only in humans whose function remains tantalizingly unclear.
The Odds of Becoming a Gene
Scientists are testing computational approaches to determine how often random DNA sequences can be mutated into functional genes. Victor Luria, a researcher at Harvard, created a model using common estimates of the rates of mutation, recombination (another way of mixing up DNA) and natural selection. After subjecting a stretch of DNA as long as the human genome to mutation and recombination for 100 million generations, some random stretches of DNA evolved into active genes. If he were to add in natural selection, a genome of that size could generate hundreds or even thousands of new genes.
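Luria’s actual model is far more elaborate, but the flavor of such simulations can be sketched in a few lines of Python. In the toy model below (all parameters are hypothetical and far smaller than genome scale), random “junk” sequences accumulate point mutations over many generations, and we count how many end up with the minimal anatomy of a gene: a start codon followed, in frame, by a stop codon.

```python
import random

BASES = "ACGT"
STOP_CODONS = {"TAA", "TAG", "TGA"}

def has_orf(seq, min_codons=10):
    """True if seq contains an open reading frame: an ATG start codon
    followed, in the same reading frame, by a stop codon at least
    min_codons downstream."""
    start = seq.find("ATG")
    while start != -1:
        for i in range(start + 3, len(seq) - 2, 3):
            if seq[i:i + 3] in STOP_CODONS:
                if (i - start) // 3 >= min_codons:
                    return True
                break  # stop codon arrived too early; try the next ATG
        start = seq.find("ATG", start + 1)
    return False

def mutate(seq, rate, rng):
    """Apply independent point mutations at a per-base rate."""
    return "".join(rng.choice(BASES) if rng.random() < rate else b for b in seq)

def simulate(n_seqs=200, length=300, generations=100, rate=0.001, seed=42):
    """Fraction of random sequences carrying an ORF after mutation."""
    rng = random.Random(seed)
    seqs = ["".join(rng.choice(BASES) for _ in range(length))
            for _ in range(n_seqs)]
    for _ in range(generations):
        seqs = [mutate(s, rate, rng) for s in seqs]
    return sum(has_orf(s) for s in seqs) / n_seqs

if __name__ == "__main__":
    print(f"{simulate():.0%} of random sequences contain an ORF")
```

Even without selection, a surprising share of random sequences carries a short reading frame; as the article notes, the genuinely hard steps are acquiring regulatory signals and producing something useful.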
At the Society for Molecular Biology and Evolution conference last month, Albà and collaborators identified hundreds of putative de novo genes in humans and chimps — ten-fold more than previous studies — using powerful new techniques for analyzing RNA. Of the 600 human-specific genes that Albà’s team found, 80 percent are entirely new, having never been identified before.
Unfortunately, deciphering the function of de novo genes is far more difficult than identifying them. But at least some of them aren’t doing the genetic equivalent of twiddling their thumbs. Evidence suggests that a portion of de novo genes quickly become essential. About 20 percent of new genes in fruit flies appear to be required for survival. And many others show signs of natural selection, evidence that they are doing something useful for the organism.
In humans, at least one de novo gene is active in the brain, leading some scientists to speculate such genes may have helped drive the brain’s evolution. Others are linked to cancer when mutated, suggesting they have an important function in the cell. “The fact that being misregulated can have such devastating consequences implies that the normal function is important or powerful,” said Aoife McLysaght, a geneticist at Trinity College in Dublin who identified the first human de novo genes.
De novo genes are also part of a larger shift, a change in our conception of what proteins look like and how they work. De novo genes are often short, and they produce small proteins. Rather than folding into a precise structure — the conventional notion of how a protein behaves — de novo proteins have a more disordered architecture. That makes them a bit floppy, allowing the protein to bind to a broader array of molecules. In biochemistry parlance, these young proteins are promiscuous.
Scientists don’t yet know a lot about how these shorter proteins behave, largely because standard screening technologies tend to ignore them. Most methods for detecting genes and their corresponding proteins pick out long sequences with some similarity to existing genes. “It’s easy to miss these,” Begun said.
That’s starting to change. As scientists recognize the importance of shorter proteins, they are implementing new gene discovery technologies. As a result, the number of de novo genes might explode. “We don’t know what these shorter genes do,” Masel said. “We have a lot to learn about their role in biology.”
Scientists also want to understand how de novo genes get incorporated into the complex network of reactions that drive the cell, a particularly puzzling problem. It’s as if a bicycle spontaneously grew a new part and rapidly incorporated it into its machinery, even though the bike was working fine without it. “The question is fascinating but completely unknown,” Begun said.
A human-specific gene called ESRG illustrates this mystery particularly well. Some of the sequence is found in monkeys and other primates. But it is only active in humans, where it is essential for maintaining the earliest embryonic stem cells. And yet monkeys and chimps are perfectly good at making embryonic stem cells without it. “It’s a human-specific gene performing a function that must predate the gene, because other organisms have these stem cells as well,” McLysaght said.
“How does a novel gene become functional? How does it get incorporated into actual cellular processes?” McLysaght said. “To me, that’s the most important question at the moment.”
If possible always invent in imitation of Nature. God knows his designs.
By the way, I have long considered, and experimented with, the idea of a reactive liquid armor that both redirects projectile trajectories and disperses force in spreading waves rather than attempting to meet it with direct resistance.
So I found this step forward to be doubly interesting: as a construction method and design, and as a pointer toward improved future capabilities.
Illustration of deformation mechanisms in laminates (Rudykh et al.)
Body armor suffers from a core tension: it must be light enough that the soldier wearing it can still fight effectively, but strong enough to actually stop bullets and shrapnel. Durable, shock-absorbing Kevlar is the current standard, but there is room for improvement. What if, instead of making the armor itself a liquid, researchers borrowed an armor design from creatures that move through liquid? A team at MIT, led by mechanical engineer Stephan Rudykh, designed a flexible armor inspired by fish scales.
Scale armor is almost as old as armor itself, with numerous examples found in ancient art from Rome to China. To improve on an ancient concept, the MIT team came up with a single metric for the armor’s value: protecto-flexibility (Ψ). This is “a new metric which captures the contrasting combination of protection and flexibility, taken as the ratio between the normalized indentation and normalized bending stiffness.” Working from a single metric, the researchers were able to greatly increase the strength of the armor while only modestly reducing its flexibility.
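The excerpt defines Ψ only in words, so here is one sketch of how such a metric behaves. I am assuming “normalized indentation” means indentation stiffness divided by a baseline design’s value (the paper’s exact normalization isn’t given in the quote, and if “indentation” instead means indentation depth, the direction of the ratio flips); under that assumption, a design scores well when it gains protection faster than it gives up flexibility.

```python
def protecto_flexibility(k_indent, k_bend, k_indent_ref=1.0, k_bend_ref=1.0):
    """Psi: normalized indentation stiffness over normalized bending stiffness.
    Higher Psi means more protection bought per unit of flexibility given up.
    The reference values are hypothetical normalization constants (e.g. a
    baseline continuous plate), not figures from the paper."""
    return (k_indent / k_indent_ref) / (k_bend / k_bend_ref)

# A hypothetical scale design that doubles indentation stiffness while
# raising bending stiffness by only 25% beats the baseline's Psi of 1.0:
baseline = protecto_flexibility(1.0, 1.0)
scales = protecto_flexibility(2.0, 1.25)
```

Collapsing two competing goals into one number like this is what let the team optimize directly, rather than juggling strength and flexibility by eye.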
The practical implications of the study are hinted at by who funded it: the research “was supported by the U.S. Army Research Office through the MIT Institute for Soldier Nanotechnologies.” In the future, soldiers could have fish-scale suits of armor that are more flexible around the joints and sturdier across the rest of the body, adding protection where there was none before without sacrificing what existing armor already provides.
This armor is still in the early testing stages. “Flexibility and protection by design: imbricated hybrid microstructures of bio-inspired armor” only covers indentation tests, designed to see just how far the scales would bend when forced to. Next stages include trying the armor against bullets and shrapnel. If successful, the future of armor could look a heck of a lot like the past.
What to do when you just can’t quit–no matter how many times you’ve tried.
By Nir Eyal
I had just finished giving a speech on building habits when a woman in the audience exclaimed, “You teach how to create habits, but that’s not my problem. I’m fat!” The frustration in her voice echoed throughout the room. “My problem is stopping bad habits. That’s why I’m fat. Where does that leave me?”
I deeply sympathized with the woman. “I was once clinically obese,” I told her. She stared at my lanky frame and waited for me to explain. How did I hack my habits?
One Size Doesn’t Fit All
The first step is to realize that starting a new routine is very different from breaking an existing habit. As I describe in this video, there are different techniques to use depending on the behavior you intend to modify.
For example, creating a habit requires encoding a new set of automatic behaviors, while breaking a habit requires a different set of processes. The brain learns causal relationships between triggers that prompt an action and the associated outcome. If you’d like to get in the habit of taking a vitamin every day, for example, the key is to place the pills somewhere in the path of your normal routine–say, next to your toothbrush, so you remember to take it each morning before you brush. Doing so daily acts as a reminder until, over time, the behavior becomes something done with little or no conscious thought.
However, breaking an existing habit is an entirely different story, and the distinction is something many people mischaracterize. For example, Charles Duhigg, author of The Power of Habit, describes a bad cookie-eating habit that added eight pounds to his waistline.
Every day, Duhigg says, he found himself going to the 14th floor of his office building to buy a cookie. When he began to analyze this habit, Duhigg discovered that the real reward for his behavior was not the cookie itself but the socializing he enjoyed while nom nom nom-ing with co-workers. Once Duhigg figured out that the reward was connecting with friends, he could get rid of the cookie-eating habit by substituting one routine for another. Voilà!
Duhigg echoes the popular belief that the key to breaking a bad habit is replacing it with another habit. I’m not so sure.
Maybe replacing cookies with co-workers did it for Duhigg, but what if you’re the kind of person (like me) who loves the hell out of cookies? I was obese precisely because I love cookies, among many other delicious things, and for no other reason than that they taste amazing! For me, ooey gooey chocolate chewy beats chatting it up with Mel from accounting every time.
“Where does that leave me?” the woman in the audience wanted to know. Having struggled with my own weight for years, there was no way I was going to look her in the face and tell her she should chat it up with her co-workers the next time she has a sugar craving. Not going to happen.
When it comes to gaining control over bad habits, like eating food we know isn’t good for us, I shared with her the only thing that has worked for me. I call it “progressive extremism,” and it works particularly well in situations in which substituting one habit for another just won’t do. Before diving into the method I use to transform my habits, follow me back about 20 years.
I was once a vegetarian. As anyone who has made a dramatic shift in diet knows, friends always ask, “Don’t you miss meat? I mean, it tastes so good!” Of course I missed meat!
However, when I began calling myself a vegetarian, somehow what was once appetizing suddenly became something else. The things I once loved to eat were now inedible because I had changed how I defined myself. I was a vegetarian, and vegetarians don’t eat meat.
Saying no to eating animals was no longer difficult. It was no longer a struggle. It was something I just did not do, in much the same way I’d imagine a Hasidic Jew does not eat pork or an observant Muslim does not drink alcohol–they just don’t.
Identity helps us make otherwise difficult choices by offloading willpower. Our choices become what we do because of who we are.
Don’t Versus Can’t
Recent research reveals why looking at our behaviors this way can have a profound impact. A study published in the Journal of Consumer Research tested the words people use when confronting temptation. During the experiment, one group was instructed to use the words “I can’t” while the other used “I don’t” when considering unhealthy food choices. Then the real experiment began.
When people finished the study, they were offered either a chocolate bar or granola bar to thank them for their time. Unbeknownst to participants, the researchers were measuring whether they would take the relatively healthy or unhealthy choice. While 39 percent of people who used the words “I can’t” chose the granola, 64 percent of those in the “I don’t” group picked it over chocolate. The study authors believe saying “I don’t” rather than “I can’t” provides greater “psychological empowerment.”
I was meat-free for about five years, and during that time resisting certain foods was not that difficult because it was consistent with how I saw myself. “I don’t eat meat,” was tied to my identity as a vegetarian.
If not eating meat was easy when it was something I just didn’t do, why couldn’t the same technique be used to stop other unhealthy habits? It turns out it most certainly can.
Here’s How It Works
First, a disclaimer. This technique only works for triggers that can be removed from your environment–for instance, this doesn’t work for quitting a nail-biting habit unless you’re looking to dispose of some digits.
Start by identifying the behavior you want to stop. For example, say you’d like to stop eating processed sugar. Taken all at once, cutting out the sweet stuff is too big of a goal for most people to quit cold turkey.
Instead, think of just one specific food you’d like to cut from your diet. However–here’s the important part–it needs to be something you wouldn’t really miss and it needs to be forever.
Overwhelming research reveals diets don’t work because they are temporary fixes. If you imagine you’ll get to eat Goobers some day when you’re thinner, this technique won’t work. Temporary diets do nothing but train the brain to binge eat.
To become part of your identity, the commitment needs to be forever, just as vegetarians believe they’ll eat the same way for the rest of their life–it’s who they are.
The mistake most people make is they bite off more than they can chew (excuse the pun). The key is to only remove the things from your diet you won’t really miss. For example, do you like candy corn? I sure don’t. As a kid, the stuff was always the dregs of my Halloween haul. For me, removing candy corn for life was no big deal, so it was first on my list. I don’t eat candy corn and I never will. Done!
Next, write down what you no longer eat and the date you gave it up for good. Writing this down marks the shift from a temporary “can’t” to a permanent “don’t.” Remember, the things you give up have to be easy enough to give up for the rest of your life.
The next step is to wait. This method takes time. When you’re ready, reevaluate what else you can do. Find another trigger to remove that meets the criteria of something you can give up for life that you wouldn’t really miss. For me, I decided to never have sugary carbonated drinks at home. I could still have them elsewhere, just not inside the house. Easy peasy.
If the commitment feels like too much, you’re doing too much. Each step needs to feel almost effortless, no big deal, but involve something you can be proud to give up forever.
For example, when I wanted to stop a bad habit of mindlessly surfing the internet and reduce the online distractions in my life, I didn’t quit the Web entirely. I quit one simple thing I wouldn’t miss and intend not to do it for life. I don’t read articles in my Web browser during working hours–ever! Instead, every time I see something that looks interesting, I use an app called Pocket to save it for later (see more about how Pocket works here).
The process of unwinding bad habits takes years, but progressive extremism is an effective way I’ve found to stop behaviors that weren’t serving me. Occasionally, I look at all the unhealthy things that no longer control me the way they once did, and if I feel up to it, I find new bad habits to slay.
By slowly ratcheting up what you don’t do, you invest in a new identity through your record of successfully dropping bad habits from your life. It may start small, but over time, it adds up to a whole new you.
The process for stopping bad habits is fundamentally different from forming new ones.
Existing behaviors etch a neural circuitry that makes unlearning an association between an action and a reward extremely difficult.
Whereas learning new habits follows a slow progression, stopping old behavioral tendencies requires a different approach.
A process I call “progressive extremism” utilizes what we know about the psychology of identity to help stop behaviors we don’t want.
By classifying specific behaviors as things you will never do again, you put certain actions into the realm of “I don’t” versus “I can’t.”
WASHINGTON, D.C.—In an unusual press conference here today, NASA released a batch of bizarre sound recordings and video from the Messenger spacecraft moments before it impacted the surface of Mercury. Scientists are struggling to decipher what the data mean, but some contend they sound like human voices crying out in agony.
Messenger had been orbiting Mercury since 2011, but it used up nearly all of its propellant and was drifting closer to the surface of the planet. So last week, NASA officials decided to point the probe nose downward for a controlled crash. “We were hoping it would kick up some soot for spectroscopic analysis,” says Messenger Principal Investigator Angra Mainyu, a planetary scientist at Columbia University. Just what it found instead is not entirely clear.
At the press conference, Mainyu played grainy recordings of what sounded like anguished voices in various languages. And she showed even grainier images of what appeared to be writhing figures. When asked by a reporter how NASA interpreted the data, Mainyu shrugged her shoulders and said, “How the hell should I know?”
Reactions to the news were swift and, in some cases, decisive. Welcoming what he called “ineluctable evidence of hell,” Father Felix Flammis, a spokesperson for the Vatican Observatory in Italy, said: “This wonderful discovery shows that science and religion can work together to discover the truth.” But Richard Dawkins, the famed evolutionary biologist and atheist, rejected the finding. “This is clearly a bunch of drivel,” he says. “Wind whistling past the spacecraft, electronic noise—there obviously has to be some other explanation.” Even if the evidence holds up, he quips, “proof of the devil ain’t the same as proof of God.”
The findings are somewhat of a surprise, because Venus had long been the leading contender, in our solar system at any rate, for harboring Hades. With a mean surface temperature of 462°C, an oppressive atmosphere, and sulfuric acid rains, it certainly seems to fit biblical descriptions. “Plus, it’s much closer to Earth, so lost souls would be only a hop, skip, and a jump from hell,” says Thor Kölski, an astrophysicist at the University of the Valkyrs in Reykjavik. Kölski has pinpointed the likely epicenter of hell as Venus’s Ganiki Chasma, a rift zone where infrared flashes were first observed last year—phenomena that he asserts are new arrivals to the underworld.
Still others think there may be multiple hells within our solar system. “Everything we know about string theory tells us that the ‘Many Hells theory’ isn’t only plausible, it’s highly likely,” says Franklyn Stein, a theoretical physicist at University College London.
Luminaries in the scientific community are by and large embracing the notion of hell. Even Stephen Hawking is on board. The cosmologist stirred controversy in 2010, when he wrote in his book The Grand Design that “[i]t is not necessary to invoke God to light the blue touch paper and set the universe going.” Earlier today, Hawking tweeted: “The devil is a different story. All hail Messenger!”
The discovery should provide a major shot in the arm to NASA, whose fortunes in Washington have faded since it retired the space shuttles in 2011. “This is a proud day for the space agency,” says Don Tey, a spokesperson for the Planetary Society in Pasadena, California, who insists that it’s merely a coincidence that the announcement was made on April Fools’ Day. “Congress told NASA to go to hell, and, by Jove, they made it.”
Our Ancient and Medieval ancestors were much, much more ingenious than most modern people give them credit for. Someone should create an app or algorithm to scour ancient and medieval medicinal texts (and other kinds of texts) to see what other advantages could be gleaned, rather than doing this kind of work (and this is hardly the first example I’ve seen of such historical re-creation) by piecemeal examination and experimentation.
By the way, I not long ago finished another set of brilliant lectures by Michael Drout of Wheaton College.
Take cropleek and garlic, of both equal quantities, pound them well together… take wine and bullocks gall, mix with the leek… let it stand nine days in the brass vessel…
So goes a thousand-year-old Anglo Saxon recipe to vanquish a stye, an infected eyelash follicle.
The medieval medics might have been on to something. A modern-day recreation of this remedy seems to alleviate infections caused by the bacteria that are usually responsible for styes. The work might ultimately help create drugs for hard-to-treat skin infections.
The project was born when a microbiologist at the University of Nottingham, UK, got talking to an Anglo Saxon scholar. They decided to test a recipe from an Old English medical compendium called Bald’s Leechbook, housed in the British Library.
Some of the ingredients, such as copper from the brass vessel, kill bacteria grown in a dish – but it was unknown if they would work on a real infection or how they would combine.
Sourcing authentic ingredients was a major challenge, says Freya Harrison, the microbiologist. They had to hope for the best with the leeks and garlic because modern crop varieties are likely to be quite different to ancient ones – even those branded as heritage. For the wine they used an organic vintage from a historic English vineyard.
As “brass vessels” would be hard to sterilise – and expensive – they used glass bottles with squares of brass sheet immersed in the mixture. Bullock’s gall was easy, though, as cow’s bile salts are sold as a supplement for people who have had their gall bladders removed.
After nine days of stewing, the potion had killed all the soil bacteria introduced by the leek and garlic. “It was self-sterilising,” says Harrison. “That was the first inkling that this crazy idea just might have some use.”
A side effect was that it made the lab smell of garlic. “It was not unpleasant,” says Harrison. “It’s all edible stuff. Everyone thought we were making lunch.”
The potion was tested on scraps of skin taken from mice infected with methicillin-resistant Staphylococcus aureus. This is an antibiotic-resistant version of the bacteria that causes styes, more commonly known as the hospital superbug MRSA. The potion killed 90 per cent of the bacteria. Vancomycin, the antibiotic generally used for MRSA, killed about the same proportion when it was added to the skin scraps.
A loathsome slime
Unexpectedly, the ingredients had little effect unless they were all brought together. “The big challenge is trying to find out why that combination works,” says Steve Diggle, another of the researchers. Do the components work in synergy or do they trigger the formation of new potent compounds?
Using exactly the right method also seems to be crucial, says Harrison, as another group tried to recreate the remedy in 2005 and found that their potion failed to kill bacteria grown in a dish. “With the nine-day waiting period, the preparation turned into a kind of loathsome, odorous slime,” says Michael Drout of Wheaton College in Norton, Massachusetts.
If the 9th-century recipe does lead to new drugs, they might be useful against MRSA skin infections such as those that cause foot ulcers in people with diabetes. “These are usually antibiotic-resistant,” says Diggle. However, he doesn’t recommend people try this at home.
This stunning image of a shooting star is what award-winning photographs are made of – but the man behind the lens said capturing the sight was an “absolute fluke”.
John Alasdair Macdonald, a tour guide in the Scottish Highlands, caught the meteor on film at about 9pm last night.
Based in Drumnadrochit, on the west shore of Loch Ness, Mr Macdonald had taken his Sony RX100 compact camera outside to capture some photographs of the stars on what he described as a “beautiful night”.
But as he clicked away, the meteor soared right into his sights.
“As my wife said, it was just sheer dumb luck,” Mr Macdonald told The Independent. “It was a complete fluke, an absolute fluke.”
Mr Macdonald posted the image on the Facebook page of his tour website, The Hebridean Explorer, where it quickly attracted a lot of attention.
Asked whether the experience had inspired him to pursue his photography skills on a more professional level, Mr Macdonald said: “I think that’s as good as I’m going to get!”
Meteors are small particles of space debris that burn up as they enter the Earth’s atmosphere, making them appear like falling stars.
Tonight for February 18, 2015
The new moon comes on February 18, 2015, and then reaches perigee less than a third of a day later. It’s the closest new moon of the year, which qualifies it as a new moon supermoon. It’s also a seasonal Black Moon; that is, the third of four new moons in the current season (December solstice to March equinox). The moon reaches lunar perigee – the moon’s closest point to Earth for the month – some 7.6 hours after the moon turns new at 23:47 UTC (6:47 p.m. EST) on February 18. Don’t expect to see anything special, not even a little crescent like that in the photo above. A full moon supermoon is out all night – brighter than your average full moon. But a new moon supermoon is only out during the daytime hours, hidden in the sun’s glare. Follow the links below to learn more about the supermoon/Black Moon of February 18, 2015.
Can new moons be supermoons?
Spring tides accompany February 2015’s supermoon.
February 2015 new moon also a seasonal Black Moon
Seasonal Black Moon and monthly Blue Moon in 2015
Monthly Black Moon and seasonal Blue Moon in 2016
View larger. | Youngest possible lunar crescent, with the moon’s age being exactly zero when this photo was taken — at the precise moment of the new moon – at 07:14 UTC on July 8, 2013. Image by Thierry Legault. Visit his website. Used with permission.
Can new moons be supermoons? Yes, the February 18 new moon qualifies as a supermoon, if you accept the definition by Richard Nolle that started the whole supermoon craze a few years ago. Nolle, who is credited with coining the term, defines a supermoon as:
… a new or full moon which occurs with the moon at or near (within 90% of) its closest approach to Earth in a given orbit.
Given that definition, the new moon of February 18, 2015 definitely makes the grade.
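Nolle’s wording leaves room for interpretation, but one common reading treats “within 90% of its closest approach” as: the moon’s distance falls within the closest 10% of its range for that orbit. As a sketch of that reading (the orbital distances below are typical values, not February’s actual figures):

```python
def is_supermoon(distance_km, perigee_km, apogee_km):
    """One reading of Nolle's rule: a new or full moon is a supermoon if
    its distance falls within the closest 10% of this orbit's range."""
    cutoff = perigee_km + 0.10 * (apogee_km - perigee_km)
    return distance_km <= cutoff

# Typical lunar orbit: perigee ~356,500 km, apogee ~406,700 km
print(is_supermoon(357_000, 356_500, 406_700))  # near perigee: True
print(is_supermoon(400_000, 356_500, 406_700))  # near apogee: False
```

A new moon arriving just hours from perigee, like February 18’s, sits comfortably inside that cutoff.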
Some people dislike the term supermoon, maybe because some supermoons – like the February 18 supermoon – don’t look all that super. But we like the term. We like it better than perigee new moons, which is what we used to call a new moon closest to Earth.
Taking it further, some object to a new moon being called a supermoon because a new moon isn’t visible (unless there’s a solar eclipse).
Nonetheless, the February 2015 new moon enjoys supermoon status, according to Nolle’s definition. We’ve already seen other media talking about it. Hate to say it, y’all, but the term supermoon – which is so simple and clear – will likely outlive the objectors!
By the way, the next supermoon will arrive with the new moon of March 20, 2015. The March new moon will actually pass in front of the sun, to stage a total solar eclipse at far-northern Arctic latitudes. From Greenland, Iceland, Europe, northern Africa and northeastern Asia, varying degrees of a partial eclipse will be visible. In other words, if you’re on the right spot on Earth, the March 20 new moon will be seen in silhouette against the bright solar disk (remember to use eye protection).
Read more: Supermoon causes total eclipse of equinox sun on March 20
Live by the moon with your 2015 EarthSky lunar calendar!
You won’t see today’s new moon at perigee – the “supermoon” – but Earth’s oceans will feel it. Expect higher-than-usual tides in the days following a supermoon.
Spring tides accompany February 2015’s supermoon. Will the tides be larger than usual at the February new moon? Yes, all new moons (and full moons) combine with the sun to create larger-than-usual tides, but perigee new moons (or perigee full moons) elevate the tides even more.
Each month, on the day of the new moon, the Earth, moon and sun are aligned, with the moon in between. This line-up creates wide-ranging tides, known as spring tides. High spring tides climb up especially high, and on the same day low tides plunge especially low.
The February 18 extra-close new moon will accentuate the spring tide, giving rise to what’s called a perigean spring tide. If you live along an ocean coastline, watch for high tides caused by the February 2015 perigean new moon – or supermoon. It’s likely to follow the date of new moon by a day or so.
Will these high tides cause flooding? Probably not, unless a strong weather system accompanies the perigean spring tide. Still, keep an eye on the weather, because storms do have a large potential to accentuate perigean spring tides.
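The distance effect is easy to quantify: tidal acceleration scales with the inverse cube of the moon's distance. A minimal Python sketch, using typical perigee and apogee distances (round numbers, not the exact February 18 figures):

```python
# Why a perigee ("super") new moon raises bigger tides: the tidal
# acceleration a body exerts scales as M / d**3, so a modest change in
# distance d has an outsized effect. The distances below are typical
# perigee/apogee values, not the exact February 18, 2015 figures.

PERIGEE_KM = 356_500  # typical close lunar perigee
APOGEE_KM = 406_700   # typical lunar apogee

def tidal_ratio(d_near_km: float, d_far_km: float) -> float:
    """How much stronger tidal forcing is at the nearer distance."""
    return (d_far_km / d_near_km) ** 3

ratio = tidal_ratio(PERIGEE_KM, APOGEE_KM)
print(f"perigee tides ~{(ratio - 1) * 100:.0f}% stronger than apogee tides")
```

A roughly 14% change in distance thus yields nearly 50% stronger tidal forcing, which is why perigean spring tides stand out.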
Learn more: Tides and the pull of the moon and sun
Total solar eclipse photo by Ben Cooper/Launch Photography. Visit Launch Photography online.
There’s no such thing as a black-colored moon seen in Earth’s sky, unless you mean the moon’s silhouette in front of the sun during a total solar eclipse.
Read more: Supermoon causes total eclipse of equinox sun on March 20
February 2015 new moon also a seasonal Black Moon
Some people may also call this February 2015 new moon a Black Moon. We’d never heard the term Black Moon until about a year ago, but here’s our best understanding of it. Usually, there are only three new moons in one season, the period of time between a solstice and an equinox – or vice versa. However, there are four new moons between the December 2014 solstice and the March 2015 equinox. Some people call the third of these four new moons a seasonal Black Moon.
December solstice: December 21, 2014
New moon: December 22, 2014
New moon: January 20, 2015
New moon: February 18, 2015
New moon: March 20, 2015 (9:36 Universal Time)
March equinox: March 20, 2015 (22:45 Universal Time)
There is also a monthly definition for Black Moon. It’s the second of two new moons to occur in one calendar month. A Black Moon by this definition last happened on March 30, 2014, and will next happen on October 30, 2016.
Seasonal Black Moon and monthly Blue Moon in 2015
It may be of interest to know that in the year 2015, a seasonal Black Moon (February 18, 2015) and a monthly Blue Moon (July 31, 2015) occur in the same calendar year. A Blue Moon by the monthly definition of the term refers to the second of two full moons in one calendar month.
Monthly Black Moon and seasonal Blue Moon in 2016
And next year, in 2016, we find that a monthly Black Moon (October 30, 2016) and a seasonal Blue Moon (May 22, 2016) happen in the same calendar year. A Blue Moon by the seasonal definition of the term refers to the third of four full moons in one season.
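The counting behind both definitions can be sketched in a few lines of Python, using the new moon dates and season boundaries listed above (note that the March 20 new moon, at 9:36 Universal Time, falls just before the 22:45 equinox, so it still counts in the season):

```python
from datetime import date

# Sketch of the two "Black Moon" definitions discussed above, using
# the new moon dates and season boundaries listed in the article.

new_moons = [date(2014, 12, 22), date(2015, 1, 20),
             date(2015, 2, 18), date(2015, 3, 20)]
season_start = date(2014, 12, 21)  # December solstice
season_end = date(2015, 3, 20)     # March equinox (the new moon that day
                                   # comes hours before the equinox itself)

def seasonal_black_moon(moons, start, end):
    """Third of four new moons falling within one season, else None."""
    in_season = sorted(m for m in moons if start <= m <= end)
    return in_season[2] if len(in_season) == 4 else None

def monthly_black_moon(moons):
    """Second new moon to fall within one calendar month, else None."""
    seen = set()
    for m in sorted(moons):
        if (m.year, m.month) in seen:
            return m
        seen.add((m.year, m.month))
    return None

print(seasonal_black_moon(new_moons, season_start, season_end))  # 2015-02-18
print(monthly_black_moon(new_moons))                             # None
```

Run against the dates above, the seasonal rule picks out February 18, 2015, while the monthly rule finds nothing in this span (its next hit is October 30, 2016).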
Bottom line: The new moon on February 18, 2015, is both a supermoon and a seasonal Black Moon. Will you see it? No. The moon will be hidden in the sun’s glare throughout the day. However, those along coastlines might expect higher than usual tides in the days following this close new moon.
Studying Titan is thought to be like looking back in time at an embryonic Earth, only a lot colder. Titan is the only moon in the solar system to have a significant atmosphere, and this atmosphere is known to possess its own methane cycle, like Earth’s water cycle. Methane exists in a liquid state, raining down on a landscape laced with hydrocarbons, forming rivers, valleys and seas.
Several seas have been extensively studied by NASA’s Cassini spacecraft during multiple flybys, some of which average a few meters deep, whereas others have depths of over 200 meters (660 feet) — the maximum depth at which Cassini’s radar instrument can penetrate.
So, if scientists are to properly explore Titan, they must find a way to dive into these seas to reveal their secrets.
Envisaged as a possible mission to Titan’s largest sea, Kraken Mare, the autonomous submersible would be designed to make a 90-day, 2,000 kilometer (1,250 mile) voyage exploring the depths of this vast and very alien marine environment. As it would spend long periods under the methane sea’s surface, it would have to be powered by a radioisotope generator: a source that converts the heat produced by radioactive pellets into electricity, much like the generators on missions currently exploring space, such as Cassini and the Mars rover Curiosity.
Communicating with Earth would not be possible when the vehicle is submerged, so it would need to make regular ascents to the surface to transmit science data.
But Kraken Mare is not a tranquil lake fit for gentle sailing — it is known to have choppy waves and there is evidence of tides, all contributing to the challenge. Many of the engineering challenges have already been encountered when designing terrestrial submarines — robotic and crewed — but as these seas are extremely cold (estimated to be close to the freezing point of methane, 90 Kelvin or -298 degrees Fahrenheit), a special piston-driven propulsion system will need to be developed and nitrogen will be needed as ballast, for example.
This study is just that, a study, but the possibility of sending a submersible robot to another world would be as unprecedented as it is awesome.
Although it’s not clear at this early stage what the mission science would focus on, it would be interesting to sample the chemicals at different depths of Kraken Mare.
“Measurement of the trace organic components of the sea, which perhaps may exhibit prebiotic chemical evolution, will be an important objective, and a benthic sampler (a robotic grabber to sample sediment) would acquire and analyze sediment from the seabed,” the authors write (PDF). “These measurements, and seafloor morphology via sidescan sonar, may shed light on the historical cycles of filling and drying of Titan’s seas. Models suggest Titan’s active hydrological cycle may cause the north part of Kraken to be ‘fresher’ (more methane-rich) than the south, and the submarine’s long traverse will explore these composition variations.”
A decade after the European Huygens probe landed on the surface of Titan, imaging the moon’s eerily foggy atmosphere, there have been few plans to go back to this tantalizing world. It would be incredible if, in the next few decades, we could send a mission back to Titan to directly sample what is at the bottom of its seas, exploring a region where the molecules for life’s chemistry may be found in abundance.
Google-owned Boston Dynamics had been making incredible robots long before it was purchased by Google.
Today it showed off its latest amazing robot, Spot – a smaller, more agile version of its WildCat robot.
Then, a BD team member decided to kick it, therefore dooming us all when robots become sentient.
Seriously, doesn’t this guy know that robots will be able to search YouTube in the future? Maybe the robots will just go after this guy and leave the rest of us robot-loving humans alone.
While I’m concerned about a robot uprising, Spot is incredibly impressive and maybe a little bit terrifying. The 160-pound, electrically powered, hydraulically actuated robot can walk and trot, so don’t bother trying to run away. It can also climb stairs and walk up and down hills.
A sensor on the robot’s head helps it navigate over rough terrain.
While the thought of an army of these approaching you on the street might keep you awake at night, robots like Spot could be used to enter areas too dangerous for humans to occupy, or bring important supplies to destinations too treacherous for regular robots and too wooded for drones.
Plus, robots are cool. Just don’t go around kicking them.
When we think about our solar system, most of our minds likely wander to Jupiter’s immensely large storm, or Saturn’s fantastical rings. Perhaps some picture Neptune’s deep blue hue, or its sea of liquid diamond. The point being, these huge objects capture our imagination because they are so far-flung from Earthly sights, like the rolling seas of blue and green and the rocks that crunch beneath our shoes. We tend to overlook the fact that the vast majority of planets, our solar system’s included, are composed almost entirely of gas.
Here, in a piece called “Space Without the Space,” XKCD’s Randall Munroe stitched together an old school, pirate-like map that shows all of the solid ground in our solar system (excluding speculative estimates of the solid “ground” we might find deep within the cores of gas giants). Earth clearly wins hands down, though it’s unclear as to how Munroe incorporated the oceans of Earth in the map. Venus comes in at a close second, which isn’t surprising since it’s very similar to Earth in size. Then we have the other rocky bodies, Mercury and Mars.
What might be surprising to some is just how similar in size the planets and moons are. Three out of four of the Galilean moons (Callisto, Ganymede and Io) make up a considerable amount of the map. Ganymede, in particular, is the most noteworthy. Believe it or not, it’s actually a bit larger than the inner-most planet from the Sun, Mercury (it’s not that much smaller than Mars, for that matter). It even appears as if all of the dwarf-planets (pictured near the bottom) could fit inside any of the three largest Galilean moons.
It’s also neat that he grouped asteroids, comets and other small planetoids together. They make up a small but discernible fraction of our solar system’s rock. I’m not sure which point of view is cooler: the fact that there are so many of these objects scattered throughout our solar system that, together, they are the same size as a small moon, or that objects so numerous (there are billions, perhaps even trillions, of them) could be so small that all of them combined only add up to the size of a small moon. I’ll leave that one up to you guys.
How Planets Form:
Despite just how vastly different they are in size and composition, terrestrial planets and gaseous ones form in a strikingly similar manner (at least we think so).
Based on our most current model, our solar system (along with all of the other planetary systems we’ve found circling distant “Suns”) formed following the collapse of a nebular cloud. From there, after a newly born star emerges from its cocoon, an elliptical disk of material, called a protoplanetary disk, encircles the young star.
The disk is composed of a variety of materials, including ices (such as water ice), rock, dust grains and some heavier elements (iron, nickel, gold, etc.), but gas is by far the most prevalent. Within the chaotic, spinning disk, the materials collide and start to coalesce into planets. After enough material gathers, gravity takes over and helps transform the oddly shaped planetesimals into the spherical planets we all know and love.
Gas Giants vs. Rocky Bodies:
Of course, the concentration and quantity of the materials dictate what the planets are made of and how many of them form, but a different mechanism — one occurring much farther out within the disk — starts to influence the proto-planets. After millions of years of slow accretion, all at once, they start accreting gaseous envelopes (like an atmosphere). The growth can be halted by stellar phenomena (like solar winds), but these effects are diluted over vast distances, thus allowing the more distant planets to keep on growing until they are more gas than rock.
At such distances, the temperatures also drop off, eventually becoming so cold that even gas itself freezes over. The newly acquired mass allows the large bodies to capture the frozen gas and become even more immense, until the planets become full-blown gas-giants.
Biology relies upon the precise activation of specific genes to work properly. If that sequence gets out of whack, or a gene turns on only partially, the result can often be disease.
Now, bioengineers at Stanford and other universities have developed a sort of programmable genetic code that allows them to preferentially activate or deactivate genes in living cells. The work is published in the current issue of Cell, and could help usher in a new generation of gene therapies.
The technique is an adaptation of CRISPR, itself a relatively new genetic tool that makes use of a natural defense mechanism that bacteria evolved over millions of years to slice up infectious virus DNA.
Standard CRISPR consists of two components: a short RNA that matches a particular spot in the genome, and a protein called Cas9 that snips the DNA in that location. For the purposes of gene editing, scientists can control where the protein snips the genome, insert a new gene into the cut and patch it back together.
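For readers who like to see the matching step concretely, here is a toy Python sketch of how a guide sequence pins down a cut site. The sequences are invented for illustration; real genome scanning must also check the reverse strand, tolerate mismatches, and handle far longer sequences. The "cut about 3 bp upstream of an NGG PAM" rule reflects the textbook behavior of the commonly used Cas9.

```python
# Toy illustration (invented sequences) of how a CRISPR guide specifies
# a cut site: Cas9 cuts where the genome matches the guide sequence and
# is immediately followed by an "NGG" PAM, with the cut falling about
# 3 bp upstream of the PAM. Real genome scanning must also handle the
# reverse strand, mismatches, and far longer sequences.

def find_cut_sites(genome: str, guide: str) -> list:
    sites = []
    for i in range(len(genome) - len(guide) - 2):
        target = genome[i:i + len(guide)]
        pam = genome[i + len(guide):i + len(guide) + 3]
        if target == guide and pam[1:] == "GG":  # PAM = NGG
            sites.append(i + len(guide) - 3)     # cut ~3 bp before the PAM
    return sites

guide = "GATTACAGATTACAGATTAC"                   # 20 nt, purely illustrative
genome = "ACGT" * 3 + guide + "AGG" + "ACGT" * 3
print(find_cut_sites(genome, guide))             # [29]
```

The guide alone determines *where* the enzyme acts; what happens next (cutting, or the activity tuning described below) depends on the protein machinery attached to it.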
Inserting new genetic code, however, is just one way to influence how the genome is expressed. Another involves telling the cell how much or how little to activate a particular gene, thus controlling how much protein a cell produces from that gene and altering its behavior.
It’s this action that Lei Stanley Qi, an assistant professor of bioengineering and of chemical and systems biology at Stanford, and his colleagues aim to manipulate.
Influencing the genome
In the new work, the researchers describe how they have designed the CRISPR molecule to include a second piece of information on the RNA, instructing the molecule to either increase (upregulate) or decrease (downregulate) a target gene’s activity, or turn it on/off entirely.
Additionally, they designed it so that it could affect two different genes at once. In a cell, the order or degree in which multiple genes are activated can produce different metabolic products.
“It’s like driving a car. You control the wheel to control direction, and the engine to control the speed, and how you balance the two determines how the car moves,” Qi said. “We can do the same thing in the cell by up- or downregulating genes, and produce different outcomes.”
As a proof of principle, the scientists used the technique to take control of a yeast metabolic pathway, turning genes on and off in various orders to produce four different end products. They then tested it on two mammalian genes that are important in cell mobility, and were able to control the cell’s direction and how fast it moved.
The ability to control genes is an attractive approach in designing genetic therapies for complex diseases that involve multiple genes, Qi said, and the new system may overcome several of the challenges of existing experimental therapies.
“Our technique allows us to directly control multiple specific genes and pathways in the genome without expressing new transgenes or uncontrolled behaviors, such as producing too much of a protein, or doing so in the wrong cells,” Qi said. “We could eventually synthesize tens of thousands of RNA molecules to control the genome over a whole organism.”
Next, Qi plans to test the technique in mice and refine the delivery method. Currently the scientists use a virus to insert the molecule into a cell, but he would eventually like to simply inject the molecules into an organism’s blood.
“That is what is so exciting about working at Stanford, because the School of Medicine’s immunology group is just around the corner, and working with them will help us address how to do this without triggering an immune response,” said Qi, who is a member of the interdisciplinary Stanford ChEM-H institute. “I’m optimistic because everything about this system comes naturally from cells, and should be compatible with any organism.”
Over the past four to five days I have discovered (both through experimentation and by healing animal patients) some very important medical principles which make the successful treatment of certain kinds of injuries and diseases much easier and much more effective. Also these principles make it far less likely that any form of treatment will in any way promote infection, interfere with the healing process, produce malignant counter or side effects, cause relapse, slow recovery, or prevent full recovery. Methods of the application of these principles vary according to the specific conditions surrounding the patient (age, general state of health, weight, etc.) and the individual nature of the case itself but the principles are valid in and of themselves.
I say discover; actually I have rediscovered or refined the principles I’m going to name (for I knew most of these principles already but either did not practice them fully or in the necessary manner, or did not until recently realize their true import), and I’m also sure the ancients and many medieval doctors knew them as well.
Additionally I should add the caveat that some of these principles are really for medical applications devoid of access to modern medical facilities, and sometimes lacking proper medicines – either because the patient and doctor/medic are isolated and cannot reach such facilities, because such facilities are not available in a given area, or because the patient lies on the borderline between being able to be treated at home and needing to be hospitalized, but the injury or illness has not quite yet progressed to the point of an emergency hospitalization.
All of these Principles are going into my Book of Medicine as currently defined below, however as I improve upon my techniques and make further discoveries I will refine these definitions as necessary. Also I have a couple of ideas regarding inventions to best apply some of these principles but I’ll discuss those inventions at a later date after I’ve had a chance to work upon them.
1 THE PRINCIPLE OF HIBERNATION – The patient should be encouraged to or force himself to go into a state of self-induced hibernation or a coma-like state (even if this state must persist for many hours or even days or weeks) until the patient has reached a sufficient point of verifiable recovery or there are definite signs of self-sustaining improvement. The only treatment that should be administered or self-administered during this hibernation state should be small amounts of water with nutrients and electrolytes (liquid metaergogenics).
2 THE PRINCIPLE OF REVERSE APPLICATION – If the patient is unable or unwilling to eat then all necessary and beneficial nutrients and electrolytes should be introduced through liquids and via liquid consumption. If the patient is unwilling to drink then all necessary and beneficial nutrients and electrolytes should be introduced through whatever food is consumed, and the food should be soaked in beneficial liquids and water and moisturized or reduced to a semi-liquid paste. These two principles are especially good and useful in cases where it is not possible to administer an IV.
3 THE PRINCIPLE OF APPLIED STASIS OR NON-INTERFERENCE – There are times when a patient has received a severe, traumatic, or at least serious injury or illness, and aside from keeping the patient warm and clean no attempt should be made to treat the patient at all, other than the periodic administering of small amounts of food and/or drink (see the principle of Reverse Application and the principle of Fasting); instead the patient should be encouraged to rest and to sleep (see the principle of Hibernation). Only after a patient shows signs of recovering strength and a tendency to recover should the patient be treated in a more normal manner to speed recovery.
4 THE PRINCIPLE OF FASTING– In certain situations the patient should not be fed at all but should undergo a period of fasting to best facilitate healing. Break the fast when signs of recovery become obvious or if the patient shows signs of weakness or harmful weight loss. Liquid intake should be maintained as normal or increased as necessary.
5. THE PRINCIPLE OF WOUND HOMEOSTASIS – Sometimes a wound (or even a state of illness) is too moist and must be drained, dried, and caused to remain dry (in a general sense, all biological health depends to some degree upon moisture) so as to suppress or prevent serious forms of infection (gangrene, etc.). Sometimes a wound (or even a state of illness) is too dry and requires the introduction of sterile yet beneficial forms of moisture, and nutrients introduced through the medium of that moisture. Each particular case will vary according to the circumstances, but if there are indications that the injury, wound, or disease state is too moist, then drying methods must be employed, and if there are indications that it is too dry then moisture must be applied. The intent is to reach a state of patient homeostasis in which the patient can achieve and remain in an ongoing condition of optimal healing and recovery.
6. THE PRINCIPLE OF SHADOW (OR UNFELT OR UNKNOWN) TREATMENT APPLICATION – I will discuss this principle later after I have had more time to experiment. Initial indications show it to be very effective but the initial methods of application could be much improved I think. This is a new principle to me.
Is Heaven static (is everything to be done or enjoyed predetermined or fixed by God), or is it progressive (is it open to being added to and improved upon, etc.)?
(I am not speaking of progressive in the political sense of course, but in the sense of actual and real progress.)
For instance as more and more human souls are added to Heaven (and God only knows what other kinds of creatures and beings) does Heaven expand, and does God allow or even encourage those dwelling there to explore, to discover, to conduct scientific experiments, to create, to do art, to invent and design things, and so forth given the parameters under which Heaven operates? (Which I assume will involve different technologies and physics and biological and operating principles, if those are even the proper terms, than in our world.)
I cannot say for certain but it seems to me, and this is something I have long pondered, that if this world, as imperfect as it is, is open to invention and creation (or at least sub-creation) and experimentation and discovery and exploration and expansion and progress then I can only imagine how open to good and noble progress Heaven will and ought to be.
Brown University evolutionary biologist Sohini Ramachandran has joined with colleagues in publishing a sweeping analysis of genetic and linguistic patterns across the world’s populations. Among the findings is that geographic distance predicts differentiation in both language and genes.
Producing new insights into the evolution and development of human populations around the globe is no easy task, but scientists can draw on multiple sources of data to do it. In a new study, Sohini Ramachandran and colleagues at Stanford University and University of Manitoba analyzed troves of data on genetics and distinct sounds in language—phonemes—to discern important patterns.
Among the findings, published in Proceedings of the National Academy of Sciences, is that genes and languages both vary more as geographic distance increases. The analysis showed there are distinct geographic patterns, or axes, of the greatest differences. The data also reflect how languages and genes evolve differently, for instance among isolated populations.
Ramachandran, assistant professor of ecology and evolutionary biology, discussed these and other insights with writer David Orenstein.
Why are language and genes sometimes combined in studies of populations?
Fields that study the human past, especially ancient human history, have to draw on multiple disciplines and lines of evidence in order to confirm and calibrate observed signatures in data, since we can’t truly know all events in human history. Because language is inherited ‘vertically’ [from parents to children] like genes, and also changes ‘horizontally’ based on contact among populations, many researchers in genetics interpret analyses of DNA from different populations in the context of the languages the study populations speak.
This kind of interdisciplinary work is what initially drew me to studying human evolution.
In this study what did you find was similar between languages and genes and what was different?
We saw that axes of differentiation in both our linguistic and genetic dataset corresponded, meaning that differences in both datasets of very different types of markers were geographically distributed quite similarly.
One very interesting contrast we saw between languages and genes had to do with isolated populations: an isolated population loses genetic diversity rapidly, as individuals marry within the population; in contrast, we saw a range of variation in linguistic markers for languages that are geographically isolated (have few neighboring languages). Some languages that are isolated lose complexity and others gain complexity and innovate new sounds. This makes me wonder whether contact among populations homogenizes their languages in some way so people can understand each other.
We found that linguistic markers do not hold signatures of the human expansion out of Africa, which is not surprising given the rate at which languages change and can be influenced by neighboring languages.
Tell us more about that difference between what genes and languages showed regarding human origins in Africa?
To be precise, genes tell us that the people living today with the most genetic diversity currently live in Southern Africa (like the San bushmen) and that modern humans emerged in Africa, but based on genetic data we don’t know precisely where the geographic origin of our species was. The language analysis did not reveal this African origin because language changes in a complex way, very differently from genes, where we have a good sense of the mutation process. In my conversations with different linguists, including those at Brown who generously listened to me present our ideas multiple times, the rate at which language mutates, and which linguistic markers are more likely to change than others, seems to be an open question.
You found geographic axes, or directions, of difference in language and genetics. What might they tell us about human evolution and history?
These axes, which look for directions along which a dataset is most differentiated, tell us about axes along which humans likely did not migrate a great deal. For example, migration north/south in Africa would mean moving across climate regimes; we also know populations are quite different across latitudes in Europe and we see that for both our language datasets and genetic datasets.
What do your findings tell us about how we can use genes and language, either together or separately, for population studies?
We learn more from using both data types together and analyzing them using similar methods than we would have learned from either type alone. One signal we saw loud and clear in this study is how much geographic distance affected our ancestors’ genes and languages; geographic distance predicts differentiation in both data types, underscoring that there are still deep signatures of ancient migrations in our genomes and cultures today.
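The "distance predicts differentiation" signal is the kind of pattern typically checked by correlating pairwise distance matrices (a Mantel-style comparison). Here is a toy Python sketch with invented numbers (not the study's data or methods), just to show the shape of such a test:

```python
import numpy as np

# Toy sketch (invented numbers, not the study's data): correlate pairwise
# geographic distances between populations with pairwise differentiation,
# in the spirit of a Mantel-style comparison of distance matrices.

rng = np.random.default_rng(0)
n = 8  # hypothetical populations

# Symmetric "geographic distance" matrix, km-ish scale:
geo = rng.uniform(0, 10_000, size=(n, n))
geo = np.triu(geo, 1)
geo = geo + geo.T

# Fake "differentiation" that grows with distance, plus noise:
diff = 0.001 * geo + rng.normal(0, 1, size=(n, n))
diff = np.triu(diff, 1)
diff = diff + diff.T

iu = np.triu_indices(n, 1)  # each unordered pair once
r = np.corrcoef(geo[iu], diff[iu])[0, 1]
print(f"distance vs. differentiation: r = {r:.2f}")
```

A real analysis would use permutation tests rather than a raw correlation, since distance-matrix entries are not independent, but the underlying question is the same: does differentiation grow with geography?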
Physicists at the University of Sussex have tamed one of the most counterintuitive phenomena of modern science in their quest to develop a new generation of machines capable of revolutionizing the way we can solve many problems in modern science.
The strange and mysterious nature of quantum mechanics is often illustrated by a thought experiment, known as Schrödinger’s Cat, in which a cat is theoretically both dead and alive simultaneously.
According to a new study published this week in Physical Review A, Sussex physicists have now managed to create a special type of “Schrödinger’s cat” using new technology based on trapped ions (charged atoms) and microwave radiation.
Like the cat, the researchers made these ions exist in two states simultaneously by creating ‘entanglement’, an effect that challenges the very fabric of reality itself.
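To make the idea concrete, here is a small numerical sketch (toy qubits in Python/NumPy, not the actual trapped-ion spin-motion system) of an entangled superposition: each subsystem on its own looks maximally mixed, even though the joint state is perfectly definite.

```python
import numpy as np

# Minimal numerical sketch of a "cat"-like entangled state: two
# two-level systems in the superposition (|00> + |11>)/sqrt(2).
# Tracing out one subsystem leaves the other maximally mixed, a
# hallmark of entanglement. (Toy qubits, not the actual experiment.)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Entangled joint state (|00> + |11>)/sqrt(2):
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

rho = np.outer(psi, psi.conj())                          # full density matrix
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out system B

purity = np.trace(rho_A @ rho_A).real
print(f"purity of one subsystem: {purity:.2f}")  # 0.5 => maximally mixed
```

A purity of 1 would mean the subsystem has a definite state of its own; 0.5 (the minimum for a qubit) means all the information lives in the correlations between the two systems, which is exactly what "challenges the very fabric of reality" in these experiments.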
Trapped ions are leading the race towards constructing a new type of computer able to solve certain problems with unprecedented speed by exploiting the laws of quantum physics.
Traditionally, lasers have been used to drive such quantum processes. But millions of stable beams would have to be carefully aligned in order to be able to work with the very large number of ions required to encode a useful amount of data.
It would be much easier to build a quantum computer that uses microwave radiation instead of lasers for all quantum operations because, just like in a standard kitchen microwave, the radiation is easily broadcast over a large area using well-developed and inherently stable technology.
The Sussex researchers’ ability to create and fully control a Schrödinger’s cat ion using microwave radiation instead of lasers constitutes a significant step towards the realisation of a large scale microwave quantum computer.
Dr Winfried Hensinger, who leads the Sussex team, says: “While constructing a large scale quantum computer is still a significant challenge, this achievement demonstrates that we are moving beyond basic science towards realizing new step-changing technologies that have the potential to change our lives.”
Dr Hensinger’s team, consisting of postdoctoral fellows Dr Seb Weidt and Dr Simon Webster, along with PhD students Kim Lake, Joe Randall and Eamon Standing, worked for over two years to develop this microwave based technology that is capable of significantly simplifying the engineering required to build an actual quantum computer.
Dr Seb Weidt says: “This achievement opens up a whole range of opportunities to realize new quantum technologies.”
More information: ‘Generation of spin-motion entanglement in a trapped ion using long-wavelength radiation’, by K. Lake, S. Weidt, J. Randall, E. D. Standing, S. C. Webster, and W. K. Hensinger, is published in Physical Review A [Phys. Rev. A 91, 012319 (2015)]. journals.aps.org/pra/abstract/… 3/PhysRevA.91.01
Being an inventor myself I completely agree with the concept of “stripping away complexity” in order to produce light, flexible designs for most commercial and market applications.
Of course, once hoverbikes become numerous in models and well-received or popular in usage, additional complexities will be added back in, covering everything from entertainment, to pilot protection and security, to sensing capabilities, to GPS navigation systems, to flight control automation and computerization, to running and warning lights, to communications. Just as has occurred with cars and motorcycles. But for now, in the developmental and popularization phase, simplicity is the key to superior development.
By the way, back when I was in CAP this was already a Squadron and even a Wing project and I’ve seen several Air Force designs for basically the same kind of craft.
“Planet X” might actually exist — and so might “Planet Y.”
At least two planets larger than Earth likely lurk in the dark depths of space far beyond Pluto, just waiting to be discovered, a new analysis of the orbits of “extreme trans-Neptunian objects” (ETNOs) suggests.
Theory predicts a certain set of details for ETNO orbits, study team members said. For example, they should have a semi-major axis, or average distance from the sun, of about 150 astronomical units (AU). (1 AU is the distance from Earth to the sun — roughly 93 million miles, or 150 million kilometers.) These orbits should also have an inclination, relative to the plane of the solar system, of almost 0 degrees, among other characteristics.
But the actual orbits of the 13 ETNOs are quite different, with semi-major axes ranging from 150 to 525 AU and average inclinations of about 20 degrees.
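To put those semi-major axes in perspective, Kepler's third law gives the orbital period directly: for a body orbiting the Sun, the period in years is the semi-major axis in AU raised to the 3/2 power. A quick sketch:

```python
# Quick arithmetic: what the quoted semi-major axes imply for orbital
# periods, via Kepler's third law (P[years] = a[AU] ** 1.5 for a body
# orbiting the Sun).

def period_years(a_au: float) -> float:
    return a_au ** 1.5

for a in (150, 200, 525):
    print(f"a = {a:>3} AU  ->  period ~ {period_years(a):,.0f} years")
```

Orbits lasting thousands of years also mean such objects crawl across the sky, one more reason distant planets are so hard to spot.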
“This excess of objects with unexpected orbital parameters makes us believe that some invisible forces are altering the distribution of the orbital elements of the ETNOs, and we consider that the most probable explanation is that other unknown planets exist beyond Neptune and Pluto,” lead author Carlos de la Fuente Marcos, of the Complutense University of Madrid, said in a statement.
“The exact number is uncertain, given that the data that we have is limited, but our calculations suggest that there are at least two planets, and probably more, within the confines of our solar system,” he added.
The potential undiscovered worlds would be more massive than Earth, researchers said, and would lie about 200 AU or more from the sun — so far away that they’d be very difficult, if not impossible, to spot with current instruments.
In a laboratory first, Duke researchers have grown human skeletal muscle that contracts and responds just like native tissue to external stimuli such as electrical pulses, biochemical signals and pharmaceuticals.
The lab-grown tissue should soon allow researchers to test new drugs and study diseases in functioning human muscle outside of the human body.
The study was led by Nenad Bursac, associate professor of biomedical engineering at Duke University, and Lauran Madden, a postdoctoral researcher in Bursac’s laboratory. It appears January 13 in the open-access journal eLife.
“The beauty of this work is that it can serve as a test bed for clinical trials in a dish,” said Bursac. “We are working to test drugs’ efficacy and safety without jeopardizing a patient’s health and also to reproduce the functional and biochemical signals of diseases—especially rare ones and those that make taking muscle biopsies difficult.”
Bursac and Madden started with a small sample of human cells that had already progressed beyond stem cells but hadn’t yet become muscle tissue. They expanded these “myogenic precursors” more than 1,000-fold, and then put them into a supportive, 3D scaffolding filled with a nourishing gel that allowed them to form aligned and functioning muscle fibers.
“We have a lot of experience making bioartificial muscles from animal cells in the laboratory, and it still took us a year of adjusting variables like cell and gel density and optimizing the culture matrix and media to make this work with human muscle cells,” said Madden.
Madden subjected the new muscle to a barrage of tests to determine how closely it resembled native tissue inside a human body. She found that the muscles robustly contracted in response to electrical stimuli—a first for human muscle grown in a laboratory. She also showed that the signaling pathways allowing nerves to activate the muscle were intact and functional.
To see if the muscle could be used as a proxy for medical tests, Bursac and Madden studied its response to a variety of drugs, including statins used to lower cholesterol and clenbuterol, a drug known to be used off-label as a performance enhancer for athletes.
The effects of the drugs matched those seen in human patients. The statins had a dose-dependent response, causing abnormal fat accumulation at high concentrations. Clenbuterol showed a narrow beneficial window for increased contraction. Both of these effects have been documented in humans. Clenbuterol does not harm muscle tissue in rodents at those doses, showing the lab-grown muscle was giving a truly human response.
“One of our goals is to use this method to provide personalized medicine to patients,” said Bursac. “We can take a biopsy from each patient, grow many new muscles to use as test samples and experiment to see which drugs would work best for each person.”
This goal may not be far away; Bursac is already working on a study with clinicians at Duke Medicine—including Dwight Koeberl, associate professor of pediatrics—to try to correlate efficacy of drugs in patients with the effects on lab-grown muscles. Bursac’s group is also trying to grow contracting human muscles using induced pluripotent stem cells instead of biopsied cells.
“There are some diseases, like Duchenne muscular dystrophy for example, that make taking muscle biopsies difficult,” said Bursac. “If we could grow working, testable muscles from induced pluripotent stem cells, we could take one skin or blood sample and never have to bother the patient again.”
Other investigators involved in this study include George Truskey, the R. Eugene and Susie E. Goodson Professor of Biomedical Engineering and senior associate dean for research for the Pratt School of Engineering, and William Krauss, professor of biomedical engineering, medicine and nursing at Duke University.
The research was supported by NIH Grants R01AR055226 and R01AR065873 from the National Institute of Arthritis and Musculoskeletal and Skin Disease and UH2TR000505 from the NIH Common Fund for the Microphysiological Systems Initiative.
More information: “Bioengineered human myobundles mimic clinical responses of skeletal muscle to drugs,” Lauran Madden, Mark Juhas, William E Kraus, George A Truskey, Nenad Bursac. eLife, Jan. 13, 2015. DOI: 10.7554/eLife.04885
Astronauts onboard the International Space Station (ISS) have been evacuated to the Russian segment of the station after alarms were triggered that can “sometimes be indicative of an apparent ammonia leak.” Although an earlier report from Russia’s Federal Space Agency claimed that there were “harmful emissions,” NASA has since clarified that “there is no hard data to suggest that there was a real ammonia leak” and that the problem is likely “a faulty sensor or computer relay.”
NASA reports that the onboard crew — comprising two American astronauts, one Italian astronaut, and three Russian cosmonauts — followed normal safety procedures and donned gas masks, moving to the Russian half of the ISS and sealing the American segment behind them. The flight control team in Houston reports that crew members are in “excellent shape” and that all other systems onboard the ISS are functioning perfectly.
Canadian astronaut and former ISS crew member Chris Hadfield tweeted that a leaking coolant system was one of the “big three” emergencies that astronauts train for on the station. “Ammonia is used for cooling through pipes & heat exchangers on the outside of Station,” said Hadfield. “We train for it & the crew and MCC [mission control center] have responded well.” He added that the other big emergencies were “fire/smoke” and “contaminated atmosphere/medical.”
NASA is currently updating the situation and says that the most likely cause at this point in time is “a faulty sensor or computer relay.”
Update January 4th, 8:23AM ET: This article was amended to reflect the latest reports from NASA suggesting that the alarm was falsely triggered.
There is a show I very much enjoy watching when I can. It’s called Faith in History. Yes, the host has a very pronounced, stumbling sort of delivery when he speaks, which often makes him difficult to follow, but despite that I very much like the guy, and the show is superb.
Today at lunch my youngest daughter and I sat down to watch the latest recorded episode because it was about George Washington Carver (lately she had requested that she be allowed to study African history, which I’ll get back to in a moment). Although Carver is as American as peanut butter, he was black, and he was in my opinion the second greatest native inventor this nation ever produced (shy of Edison), its very greatest biochemist (bar none), and one of the very greatest scientists this nation ever produced.
(Being particularly partial to, and interested in, the biological, chemical, and genetic sciences myself, I really like Carver and his work. He was brilliant, and well ahead of his time.)
Plus, I very much agree with his approach to invention, which I’ll recount later, as it is the closest parallel to my own method of invention that I have ever encountered in history.
Anyway it was an extremely good episode on Carver, dwelling upon both his scientific achievements and his personal life and faith.
My daughter seemed to enjoy the episode quite a bit, and as we watched it we would stop the show at various points and discuss science, God, technology, history, invention, writing, politics, and so forth. As is our wont when watching or discussing anything educational.
As for Carver’s methods of discovery, experimentation, inspiration, and invention, they closely parallel my own, as he described in numerous letters and in this speech:
“God is going to reveal to us things He never revealed before if we put our hands in His. No books ever go into my laboratory. The thing I am to do and the way of doing it are revealed to me. I never have to grope for methods. The method is revealed to me the moment I am inspired to create something new. Without God to draw aside the curtain I would be helpless.
Locking the door to his laboratory, Dr. Carver confided:
Only alone can I draw close enough to God to discover His secrets.”
The two closest other parallels I can name are found in the methods of Newton and Archimedes, both of whom I also seek to emulate when it comes to scientific discovery and invention, Archimedes in particular. Perhaps one day soon I will discuss the Agapoloid techniques I employ, which are derived to a large extent from Archimedes’ internal and mental mathematical and geometric laboratory.
After that and as we were cleaning up from lunch my daughter asked me if she could begin two independent courses of study.
My oldest child began her independent courses of study (that is to say she would choose two out of six curriculum areas to study in a self-directed fashion) at the age of 17 but my youngest wants to start now, at age 15.
Knowing now what I do about how advanced my children are and having loosened up a good deal over time with my second child I agreed and asked her to make me a list of what she most wanted to study.
Independent Areas of study are, of course, courses of study she chooses for herself, based upon her own interests, and in which she will do detailed research and work at the college level. Of course she’s been at college level in all her subject areas for a while now, but I mean detailed enough to write a collegiate term paper.
Her list was as follows:
1. Germany (pre-Nazi war era – my oldest daughter is a WWII history nut, as I was at her age, but my younger daughter seems to prefer much earlier time periods. Ancient, Classical, and Medieval.)
2. Africa (I am going to suggest to her that she begin her in-depth studies of Africa with either Egypt, or with Cush or Nubia or Ethiopia, as I have already done my own detailed archaeological and historical studies of these ancient areas and kingdoms/realms as research for my novels. So I am already familiar with some excellent research materials. Plus those kingdoms were either advanced or relatively advanced. I’m also going to suggest she make an entirely separate study of ancient Alexandria. But in the end it will be up to her; those are just my suggestions.)
3. African Wildlife, Biology, and Geography
4. English Grammar (yes, being a writer this pleases me, but the girl actually loves grammar, English and Latin – I love language and primarily vocabulary and philology, but she loves grammar)
5. Italy (I’ve yet to ask her if she means ancient Italy, such as the Etruscan/Roman eras, or if she means Medieval or Modern Italy, prior to World War II. If it’s ancient Rome, that’s good, since she just finished The Decline and Fall of the Roman Empire; and if it’s Medieval Italy, I’ll suggest studying Florence, Naples, and Venice as city-states and as commerce hubs. As a matter of fact, just last year I finished a superb set of lectures on Florence, her naval power, and her trade that she should really enjoy.)
Lastly, now that my older daughter is working and preparing for college, my youngest daughter and I spend much more time together. The other night we were watching Agent Carter together, and I was commenting on how much more clever the general level of conversation, formal or colloquial, was back then (in the Forties to early Fifties – language started declining in the mid-Fifties). The language was snappier and more ironic than it is today, the level of conversation was far more clever, and it was filled with universal cultural references and idioms.
“But,” I said, “I don’t care much for the décor or architecture of that time period. And I could have never walked around all day in a monkey suit.”
“Dad,” she said, “you must be crazy! I love the décor, the architecture, the clothes, and especially the cars and airplanes from that time period. I love almost everything about the Forties and I’d love to go back and live in that time period, minus, you know, the whole segregation and suppression of women things.”
“Yeah, I guess there is always that,” I said.
“But otherwise the Forties are for me!”
She’s a throwback to my Old Man. He grew up in that time period and always loved it too.
(Phys.org)—Despite the celebrations leading up to the New Year last week, progress in science marched on—a paper by molecular geneticist Edward Kipreos, with the University of Georgia, for example, describing a study that found a possible alternative explanation for dark energy made news. He suggested that changing the way people think about time dilation might offer an alternative explanation of the mysterious force that drives the expansion of the universe. Also, a team of physicists at City College of New York published a paper describing their work which involved unveiling new half-light, half-matter quantum particles in very thin semiconductors—which could help pave the way to computing technology based on quantum properties of light. And in an interview with Phys.org, Professor David Pines of the University of California and the Santa Fe Institute described a paper he had published with Dr. Yi-feng Yang of the Chinese Academy of Sciences, regarding how a novel experiment-based expression can explain the behavior of unconventional superconductors.
Very, very interesting. Adaptive assembly without prior instructional encoding. Is it then possible that many amino acids may have a molecularly adaptive equivalency function similar to undifferentiated stem cells (at a higher level) which allows disparate proteins to guide assembly in emergency situations in an almost ad hoc fashion – yet still produce biologically viable proteins?
If so, that would mean far more than mere instructional assembly in biological construction and replication; it would mean adaptive biological construction at nearly the very base level of Life (animate matter).
That could not possibly be accidental, for it would mean that base construction processes did not lose adaptive function as they advanced and differentiated, but retained such functions (at least as a potential that can later be restimulated) throughout all stages of development.
It would also mean a plethora of medicinal applications.
This definitely goes into my research files.
Defying Textbook Science, Study Finds New Role for Proteins
Published: January 1, 2015.
Released by University of Utah Health Sciences
Open any introductory biology textbook and one of the first things you’ll learn is that our DNA spells out the instructions for making proteins, tiny machines that do much of the work in our body’s cells. Results from a study published on Jan. 2 in Science defy textbook science, showing for the first time that the building blocks of a protein, called amino acids, can be assembled without blueprints – DNA and an intermediate template called messenger RNA (mRNA). A team of researchers has observed a case in which another protein specifies which amino acids are added.
“This surprising discovery reflects how incomplete our understanding of biology is,” says first author Peter Shen, Ph.D., a postdoctoral fellow in biochemistry at the University of Utah. “Nature is capable of more than we realize.”
To put the new finding into perspective, it might help to think of the cell as a well-run factory. Ribosomes are machines on a protein assembly line, linking together amino acids in an order specified by the genetic code. When something goes wrong, the ribosome can stall, and a quality control crew is summoned to the site. To clean up the mess, the ribosome is disassembled, the blueprint is discarded, and the partly made protein is recycled.
Yet this study reveals a surprising role for one member of the quality control team, a protein conserved from yeast to man named Rqc2. Before the incomplete protein is recycled, Rqc2 prompts the ribosomes to add just two amino acids (of a total of 20) – alanine and threonine – over and over, and in any order. Think of an auto assembly line that keeps going despite having lost its instructions. It picks up what it can and slaps it on: horn-wheel-wheel-horn-wheel-wheel-wheel-wheel-horn.
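The “lost instructions” analogy can be sketched as a toy simulation. This is purely illustrative, not the authors’ model; the partial sequence `MKVLQ` and the tail length are made up. The one constraint it captures is the one the study reports: only alanine (A) and threonine (T) are appended, in any order.

```python
import random

# Toy sketch (not the authors' model): Rqc2 prompts the ribosome to append
# only alanine (A) and threonine (T), in any order, to a stalled chain.
def add_random_tail(truncated_protein: str, length: int, rng: random.Random) -> str:
    # Each appended residue is drawn from just the two permitted amino acids
    tail = "".join(rng.choice("AT") for _ in range(length))
    return truncated_protein + tail

rng = random.Random(0)        # seeded for reproducibility
stalled = "MKVLQ"             # hypothetical truncated protein
tagged = add_random_tail(stalled, 8, rng)
print(tagged)                 # 'MKVLQ' followed by eight residues, each A or T
```

The point of the sketch is that the tail carries no sequence information from any mRNA template, which is what makes the finding so unusual.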
“In this case, we have a protein playing a role normally filled by mRNA,” says Adam Frost, M.D., Ph.D., assistant professor at University of California, San Francisco (UCSF) and adjunct professor of biochemistry at the University of Utah. He shares senior authorship with Jonathan Weissman, Ph.D., a Howard Hughes Medical Institute investigator at UCSF, and Onn Brandman, Ph.D., at Stanford University. “I love this story because it blurs the lines of what we thought proteins could do.”
Like a half-made car with extra horns and wheels tacked to one end, a truncated protein with an apparently random sequence of alanines and threonines looks strange, and probably doesn’t work normally. But the nonsensical sequence likely serves specific purposes. The code could signal that the partial protein must be destroyed, or it could be part of a test to see whether the ribosome is working properly. Evidence suggests that either or both of these processes could be faulty in neurodegenerative diseases such as Alzheimer’s, Amyotrophic lateral sclerosis (ALS), or Huntington’s.
“There are many interesting implications of this work and none of them would have been possible if we didn’t follow our curiosity,” says Brandman. “The primary driver of discovery has been exploring what you see, and that’s what we did. There will never be a substitute for that.”
The scientists first considered the unusual phenomenon when they saw evidence of it with their own eyes. They fine-tuned a technique called cryo-electron microscopy to flash freeze, and then visualize, the quality control machinery in action. “We caught Rqc2 in the act,” says Frost. “But the idea was so far-fetched. The onus was on us to prove it.”
It took extensive biochemical analysis to validate their hypothesis. New RNA sequencing techniques showed that the Rqc2/ribosome complex had the potential to add amino acids to stalled proteins because it also bound tRNAs, structures that bring amino acids to the protein assembly line. The specific tRNAs they saw only carry the amino acids alanine and threonine. The clincher came when they determined that the stalled proteins had extensive chains of alanines and threonines added to them.
“Our job now is to determine when and where this process happens, and what happens when it fails,” says Frost.
Elon Musk is starting off 2015 with a bang – or hopefully, a soft landing.
On January 6, Musk’s company SpaceX will launch a Falcon 9 rocket to the International Space Station. The launch itself is fairly unremarkable; SpaceX has had a contract with NASA for some time now to transport cargo to the ISS via unmanned rockets, as part of the Commercial Resupply Services program. What SpaceX will attempt to do after the launch is what makes the mission so exciting. The company will try to land the first stage of its Falcon rocket on a platform in the ocean — a feat that has never been done before.
If successful, the landing will be the first major step toward one of the holy grails of the space industry: reusable rockets. Up until now, all rocket launches have been something of a one-and-done stunt. After a rocket blasts off, the first stage of the vehicle – which comprises the bulk of the rocket and contains most of the engines and fuel – burns up and falls away into the ocean, never to be used again. This rocket design is known as a disposable launch system, and it makes launching rockets extremely expensive. The only exception has been the Space Shuttle, which was considered a partially reusable launch system; although the shuttle itself and its solid rocket boosters were recovered after each launch, its large external tank, which carried most of the shuttle’s fuel, broke apart and was never re-used. This made launching shuttles quite costly, as well, since a new external tank had to be built for each flight.
Here’s to 2015: The year that space flight could become affordable.
Imagine if this type of design were applied to air travel, and every time you flew in an airplane, the plane had to be discarded and then rebuilt for its next trip; a ticket from New York to Los Angeles would require a lifetime of savings. Disposable launch systems are why space tourism is currently reserved for the nerdy 1 percent (and British pop singers) – but reusable rockets could change all that, by bringing down the cost of space flight and revolutionizing the space industry.
The hypersonic grid fins attached to the Falcon 9 rocket
To ensure the safe landing of its Falcon 9, SpaceX has equipped the rocket with four “hypersonic grid fins” (placed on the vehicle in an “X-wing” configuration). The fins will be closed during ascent, but when the first stage falls to Earth, the fins will extend perpendicular from the rocket’s body. They can then move independently of one another, to help control the vehicle’s descent and guarantee a precise landing on the rocket’s target.
That target is an autonomous spaceport drone ship, meant to catch the landing rocket in the Atlantic Ocean. The ship’s landing platform is 300 by 100 feet, but it also comes with wings that can extend its width to 170 feet. The ship itself isn’t anchored, but boasts powerful thrusters that will help it stay in place.
Autonomous Spaceport Drone Ship
Yet landing on such a small platform that isn’t completely stationary won’t be easy, and Musk estimates a 50 percent chance of success on January 6. Plus, the landing will occur after the first stage separates from the second stage — the part of the rocket that will take the cargo capsule the rest of the way to the ISS. That means not all of the rocket will be saved, as the second stage will never be recovered. (However, Musk plans to recover the second stage in future launches.)
Still, the fact that SpaceX is attempting such an endeavor instills hope for a cheaper commercial spaceflight industry. According to Quartz, the cost to build a Falcon 9 rocket is $54 million, but the cost of its fuel is only $200,000. If launching a rocket in the future only required refueling and other servicing costs on the ground, that could bring down the price of going to space by millions of dollars.
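A back-of-the-envelope calculation shows why those figures matter. This uses only the numbers quoted above; real refurbishment and servicing costs are unknown here, so the savings shown are an upper bound, not a projection.

```python
# Reusability economics using the figures quoted above (Quartz):
# ~$54M to build a Falcon 9, ~$200K to fuel it.
build_cost = 54_000_000
fuel_cost = 200_000

expendable_launch = build_cost + fuel_cost
fuel_fraction = fuel_cost / expendable_launch

print(f"Fuel is only {fuel_fraction:.2%} of an expendable launch's hardware + fuel cost")
print(f"A fully reusable first stage could save up to ${build_cost:,} per flight")
```

Even if refurbishment ate up a large share of that $54 million, the gap between fuel cost and build cost is wide enough that recovering the first stage changes the economics of launch entirely.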
So here’s to 2015: The year that space flight could become affordable.
Correction (01/02/2015, 2:50 pm ET): The original story misstated where the rocket will attempt to land; it’s the Atlantic Ocean, not the Pacific, and it has been corrected. We regret the error.