To obtain a hard copy of the Myers-Briggs Type Indicator (MBTI®), the most popular personality test in the world, one must first spend $1,695 on a week-long certification program run by the Myers & Briggs Foundation of Gainesville, Florida.
This year alone, there have been close to 100 certification sessions in cities ranging from New York to Pasadena, Minneapolis, Portland, Houston, and the Foundation’s hometown of Gainesville, where participants get a $200 discount for making their way south to the belly of the beast. It is not unusual for sessions to sell out months in advance. People come from all over the world to get certified.
In New York last April, there were twenty-five aspiring MBTI practitioners in attendance. There was a British oil executive who lived for half the year under martial law in Equatorial Guinea. There was a pretty blonde astrologist from Australia, determined to invest in herself now that her US work visa was about to expire. There was a Department of Defense administrator, a gruff woman who wore flowing skirts and rainbow-rimmed glasses, and a portly IBM manager turned high school basketball coach. There were three college counselors, five HR reps, and a half-dozen “executive talent managers” from Fortune 500 companies. Finally, there was me.
I was in an unusual position that week: Attending the certification program had not been my idea. Rather, I had been told that MBTI certification was a prerequisite to accessing the personal papers of Isabel Briggs Myers, a woman about whom very little is known except that she designed the type indicator in the final days of World War II. Part of our collective ignorance about Myers stems from how profoundly her personal history has been eclipsed by her creation, in much the same way that the name “Frankenstein” has come to stand in for the monster and not his creator.
Flip through the New York Times or Wall Street Journal, and you will find the indicator used to debate what makes an employee a good “fit” for her job, or to determine the leadership styles of presidential candidates. Open a browser, and you will find the indicator adapted for addictive pop psychology quizzes by BuzzFeed and Thought Catalog. Enroll in college, work an office job, enlist in the military, join the clergy, fill out an online dating profile, and you will encounter the type indicator in one guise or another — to match a person to her ideal office job or to her ideal romantic partner.
Yet though her creation is everywhere, Myers and the details of her life’s work are curiously absent from the public record. Not a single independent biography is in print today. Not one article details how Myers, an award-winning mystery writer who possessed no formal training in psychology or sociology, concocted a test routinely deployed by 89 of the Fortune 100 companies, the US government, hundreds of universities, and online dating sites like Perfect Match, Project Evolove and Type Tango. And not one expert in the field of psychometric testing, a $500 million industry with over 2,500 different tests on offer in the US alone, can explain why Myers-Briggs has so thoroughly surpassed its competition, emerging as a household name on par with the Atkins Diet or The Secret.
Less obvious at first, and then wholly undeniable, is how hard the present-day guardians of the type indicator work to shield Myers’s personal and professional history from critical scrutiny. For the foundation, as well as for its for-profit research arm, the Center for Applications of Psychological Type (CAPT), this means keeping journalists far away from Myers’s notebooks, correspondence and research materials, which are stored in the Special Collections division of the University of Florida library. Although they are technically the property of the university — thus open to the public — Myers’s papers require permission from CAPT to access, permission that has not been granted to anyone in the decade since the papers were donated to the university by Myers’s granddaughter, Katharine Hughes. Twice I was warned by the university librarian, a kind and rueful man, that CAPT was “very invested in protecting Isabel’s image.” Why her image should need protection, I did not yet understand.
When I wrote to CAPT in August 2014, I received an enthusiastically officious email from their Director of Research Operations, requesting additional details about my interest in the type indicator and a book I was planning to write on personality testing. “Will there be descriptions and historical background about other personality tests in addition to the MBTI instrument?” she wrote. “If so, we would like to be informed.” So began nine months of correspondence with the staff of CAPT, which culminated this April in their request that I become a certified administrator of the MBTI instrument. Certification was a necessary precursor to giving me access to the papers, the director told me over the phone. CAPT would even be willing to consider “possibilities for funding the training.”
This is how I found myself in the company of the oil man, the astrologist, the Department of Defense administrator and twenty other people at the certification workshop, held in a sixth-floor conference room of the United Jewish Appeal Federation building on East 59th Street. We sat at tables of five or six, our backs pressed against a smoked-glass wall decorated with etchings of Seder plates, unfurling braids of challah, and half-lit menorahs. Each of us wore a name tag with our first name, last name, and our four-letter type printed on it in big block letters. It was not unusual for people to lead with their type when they introduced themselves.
I said hello to the woman sitting next to me. Her name tag said “Laurie — ENFJ.”
Laurie checked me out and sighed, relieved. “We’re both E’s,” she said. “We’ll get along great.”
“The most important part of becoming MBTI certified is learning to speak type,” declares Barbara, our instructor for the next week and a self-proclaimed “clear ENTJ.” Dressed in black, with prominent red toenails and a commanding nasal tone, Barb, as she insists we call her, will teach us how to “speak type fluently.”
“This is only the beginning!” Barb says. “Just think of this as a language immersion program.”
The comparison is an apt one. There are sixteen types, each made up of a combination of four different letters. Each letter represents one of two poles in a strict dichotomy of human behavior. From the pre-training test I took earlier in the week, I learn that, like Barb, I too am an “ENTJ.” I prefer extraversion (E) to introversion (I), intuition (N) to sensing (S), thinking (T) to feeling (F), and judging (J) to perception (P). It is strange, this tidy division of myself into these alien categories. Initially, I have trouble keeping the letters straight. Strange too is the ease with which people around me speak their types, as if declaring oneself a “clear ENTJ” or a “borderline ISFP” were the most natural thing in the world.
Of course, speaking type is anything but natural. Still Barb’s job is to convince us that this simple system of thought can account for the messiness of many of our personal and interpersonal relationships, regardless of gender, race, class, age, language, education, or any of the other intricacies of human existence. Type is intensely democratizing in its vision of the world, weird and wonderful in its commitment to flattening the material differences between people only to construct new and imaginary borders around the self. Its populism is most clearly demonstrated by MBTI’s astonishing geographic reach: Last year, two million people took the test, in seventy different countries, and in twenty-one languages. “As long as you have a seventh grade reading level and you’re a ‘normal’ person” — by which Barb means, you are not mentally ill or blithely psychopathic — “you can learn to speak type.”
Across all languages and continents, however, the first rule of speaking type remains the same. You do not, under any circumstances, refer to MBTI as a “test.” It is a “self-reporting instrument” or, more succinctly, an “indicator.” “People use the word ‘test’ all the time,” Barb complains. “But what you’re taking is an indicator. It’s indicating based on what you told the test.”
Although her statement sounds tautological, Barb assures us that it is not. Unlike a standardized test, like the SAT, which asks the test taker to choose between objectively right and wrong answers, the MBTI instrument has no right or wrong answers, only competing preferences. Take, for instance, two questions from the test I took last April: “In reading for pleasure, do you: (A) Enjoy odd or original ways of saying things; or (B) Like writers to say exactly what they mean?” And: “If you were a teacher, would you rather teach: (A) Fact courses; or (B) Courses involving theory?” And unlike the SAT, in which a higher score is always more desirable than a lower one, there are no better or worse types. All types, Barb announces rapturously, are created equal.
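The mechanics of a forced-choice indicator like this are simple enough to sketch in a few lines of code. What follows is a toy illustration of my own, not CAPT’s actual scoring method, which uses proprietary item weights: each answer adds to a tally, and the majority letter on each of the four dichotomies goes into the four-letter result.

```python
# Toy sketch of forced-choice type scoring. This is NOT the real
# MBTI algorithm (which weights items in proprietary ways); it only
# illustrates the basic idea: tally the answers on each dichotomy
# and keep the letter with the most votes.

from collections import Counter

# The four dichotomies: extraversion/introversion, sensing/intuition,
# thinking/feeling, judging/perception.
DICHOTOMIES = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

def score(answers):
    """answers: one letter per question, e.g. ["E", "I", "N", ...].
    Returns a four-letter type string."""
    tally = Counter(answers)
    result = ""
    for first, second in DICHOTOMIES:
        # Ties go to the second pole, an arbitrary convention
        # adopted purely for this sketch.
        result += first if tally[first] > tally[second] else second
    return result

print(score(["E", "E", "I", "N", "S", "N", "T", "F", "T", "J", "P", "J"]))
# A respondent with these answers would come out "ENTJ".
```

Framed this way, the circularity Barb denies is easy to see: the output can only ever be a rearrangement of what the test taker put in.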
The indicator’s sole measure of success, then, is how well the test aligns with your perception of your self: Do you agree with your designated type? If you don’t, the problem lies not with the indicator, but with you. Maybe you were in a “work mindset when you answered the questions,” Barb suggests. Or you had become unusually adept at “veiling your preferences” to suit the wants and needs of your husband or wife, your co-workers, your children. Whatever the case may be, somehow you were inhibited from answering the questions as your “shoes off self” — Isabel Briggs Myers’s term for the authentic you.
More cynically, what this seems to mean is that the indicator can never be wrong. No matter how forcefully a person may protest her type, the indicator’s only claim is that it holds a mirror up to your psyche. Behind all the pseudo-scientific talk of “instruments” and “indicators” is a simple, but subtle, truth: the test reflects whatever version of your self you want it to reflect. If what you want is to see yourself as odd or original or factual and direct, it only requires a little bit of imagination to nudge the test in the right direction, to rig the outcome ahead of time. I do not mean this in any overtly manipulative sense. Most people do not lie outright, for to do so would be to shatter the illusion of self-discovery that the test projects. I mean, quite simply, that to succeed, a personality test must introduce the test taker to the preferred version of her self — a far cry, in many cases, from the “shoes off,” authentic you.
But Barb doesn’t pause to meditate on the language lesson she has started to give us. Instead she projects onto a large screen behind her a photograph of a pale and bespectacled man in a neat cravat. Peering over us is Carl Gustav Jung, the Swiss psychiatrist whose 654-page study Psychological Types (1923) inspired Myers’s development of the indicator. Jung was “all about Freud, the couch, neurosis!” Barb laughs. For the purposes of our training, the relationship between his theory of psychological types and Myers’s commodification of it is a matter of good branding strategy. “Jung is a very respected name, a big name,” Barb says. “Even if you don’t know who he was, know his name. His name gives the test validity.”
Validity is crucial to selling the test, even if it doesn’t mean exactly what Barb seems to think it does. After the certification session is over, the participants will return to work with a 5-by-7 diploma, a brass “MBTI” pin, and a stack of promotional materials that they are encouraged to use to persuade their clients or colleagues to take an MBTI assessment. Each test costs $49.95 per person, more if you want a full breakdown of your type, and even more if you want an MBTI-certified consultant to debrief your type with you. No one questions the sheer ingenuity of this sales scheme. We are paying $1,695 to attend a course that authorizes us to recruit others to buy a product — a product which tells us nothing more than what we already know about ourselves.
Although Barb invokes Jung’s name with pride and a touch of awe, Jung would likely be greatly displeased, if not embarrassed, by his long-standing association with the indicator. The history of his involvement with Myers begins not with Isabel, but with her mother, Katharine Cook Briggs, whom Barb mentions only in passing. After the photograph of Jung, Barb projects onto the screen a photograph of Katharine, unsmiling and broad-necked and severely coiffed. “I usually don’t get into this,” she says, gesturing at Katharine’s solemn face. “People have already bought into the instrument.”
Yet Katharine is an interesting woman, a woman who might have interested Betty Friedan or Gloria Steinem or any second-wave feminist eager to dismantle the opposition between “the happy modern housewife” and the “unhappy careerist.” A stay-at-home mother and wife who had once studied horticulture at Michigan Agricultural College, Katharine was determined to approach motherhood like an elaborate plant growth experiment: a controlled study in which she could trace how a series of environmental conditions would affect the personality traits her children expressed. In 1897, Isabel emerged — her mother’s first subject. From the day of her birth until the child’s thirteenth birthday, Katharine kept a leather-bound diary of Isabel’s developments, which she pseudonymously titled The Life of Suzanne. In it, she painstakingly recorded the influence that different levels of feeding, cuddling, cooing, playing, reading, and spanking had on Isabel’s “life and character.”
Today we might think of Katharine as the original helicopter parent: hawkish and over-present in her maternal ministrations. But in 1909, Katharine’s objectification of her daughter answered feminist Ellen Key’s resounding call for a new and more scientific approach to “the vocation of motherhood.” More progressive still was how Katharine marshaled the data she had collected on Isabel to write a series of thirty-three articles in The Ladies Home Journal on the science of childrearing. These articles, which were intended to help other mothers systematize their childcare routines, boasted such single-minded titles as “Why I Believe the Home Is the Best School” and “Why I Find Children Slow in Their School Work.” Each appeared under the genteel nom de plume “Elizabeth Childe.”
It is not surprising that Jung’s work should pique the interest of “Elizabeth Childe,” an aspiring pedagogue who perceived the maturation of her child’s personality as nothing less than an experimental form to be cultivated, even perfected, over the years. Indeed, Katharine first encountered an English translation of Jung’s Psychological Types in 1923, when she was editing The Life of Suzanne to submit to publishers. She found Psychological Types an unwieldy text, part clinical assessment, part romantic meditation on the nature of the human soul, which emphasized the “creative fantasy” required for psychological thought. Katharine took this as an invitation to start thinking of her children’s personalities as divided into three oppositional axes: extraverted versus introverted, intuitive versus sensory, thinking versus feeling. In 1927, she wrote to Jung to express her feverish admiration for his work — her “Bible,” she called it — and her desire to bring a more practical approach to his densely theoretical observations, which her “children … had been greatly helped by.”
“How wasteful children are, even with their own precious, irreplaceable lives!” Jung once wrote to Freud, a letter that might have doubled as his irritated response to Katharine and her request to collaborate. From the outset, it seems that Jung was impressed by Katharine’s brilliance and flattered by her enthusiasm, but skeptical of her eagerness to bring his typology to the science of childrearing. When Katharine wrote to him for advice about a neighborhood child, a young girl in great emotional distress who she believed she could cure through Jungian type analysis, Jung rebuked her for overstepping her bounds as a dispassionate observer. “You overdid it,” he wrote. “You wanted to help, which is an encroachment upon the will of others. Your attitude ought to be that of one who offers an opportunity that can be taken or rejected. Otherwise you are most likely to get in trouble. It is so because man is not fundamentally good, almost half of him is a devil.”
Despite Jung’s unwillingness to help Katharine see beyond the devil in man, some of the more practical applications of his typology appeared in a 1926 article that Katharine published in The New Republic, winningly titled “Meet Yourself: How to Use the Personality Paint Box.” In it, she would present Jung’s dichotomies as an elegant paint-by-numbers exercise, in which E/I, N/S, and T/F were the “primary character colors” that each individual could “combine and blend” to form “his own personality portrait.” Even babies, those “little bundles of psychic energy,” had types, and the sooner a mother identified her child’s type, the better it was for his mental maturity. “One need not be a psychologist in order to collect and identify types any more than one needs to be a botanist to collect and identify plants,” Katharine assured her fellow mothers. There was no need to doubt one’s ability to type one’s child.
“Meet Yourself” enjoyed quiet acclaim among parents when it was first published, but ultimately, Katharine’s desire to spread Jung’s gospel to a broader audience would inspire a shift in genre. She would abandon The Life of Suzanne as a parenting guide and turn instead to fiction, which she believed would help her reach a larger and more dedicated audience. Her longest work, written toward the end of her life, was a romance novel inspired by Psychological Types called The Guesser, the story of a love affair between two incompatible Jungian types. It was summarily rejected by ten publishers and two film producers for dwelling too much on Jung, whom no one other than Katharine was interested in, and not enough on love.
Like her mother, Isabel began her adult life as a wife and mother. She graduated from Swarthmore in June of 1918 — Phi Beta Kappa, an aspiring fiction writer, and a moderately disillusioned newlywed, who had followed her husband first to Memphis, where he was training as a bomber pilot, and then to Philadelphia, where he enrolled in law school. In each city, she made a list of her future goals in a notebook which she titled Diary of an Introvert Determined to Extrovert, Write, & Have a Lot of Children.
Keep complete job list and do one every day.
Housekeep till 10 A.M.
Two hours writing.
One hour outdoors.
One hour self-development—music, study, friends.
Wash face with soap every night.
Never wear anything soiled.
But despite her clear goals and clean clothes, Isabel struggled to find a job. After an unfulfilling stint at a temp agency, she wrote to Katharine to complain about the difficulties of finding meaning in one’s work, particularly as a married woman who was expected to do nothing more than to have children. “I think under the spur of necessity a woman can do a man’s work as well as he can, provided she is as capable for a woman as he is for a man,” she wrote. “But I’m perfectly sure that it takes more out of her. And it’s a waste of life to spend yourself on work that someone else can do at less cost. I’m sure men and women are made differently, with different gifts and different kinds of strengths.” In a perfect world, she concluded, there would exist “some highly intelligent division of labor that can be worked out, so everybody works, but not at the wrong things.”
Isabel’s “instinctive answer” to the question of what to do with herself was to be “my man’s helpmeet.” And for nearly a decade she was. Until 1928, she did housework, gave birth to two children, and at night, when the house was in order and the children were asleep, she continued to wonder what was missing from her life. Although a husband and children and a “beloved little ivy-covered colonial house” in the suburbs were “everything in the world that I wanted,” Isabel wrote, “I knew I wanted something else.” That something else was the time and energy to pursue a career as a successful fiction writer, something her mother had never been able to realize. “In the evenings, between nine and three, stretched six heavenly, uninterrupted hours — if I could stay awake to use them,” she mused.
Working at night, but most often with one fitful child or another in her lap, Isabel started and finished a detective novel, which she promptly submitted to a mystery contest at New McClure’s magazine. The winner was to receive a $7,500 cash prize (over $100,000 today) and a book contract with a prominent New York publisher. Katharine, apparently jealous that her daughter was trying to succeed where she had once failed, offered little encouragement, only what Isabel lamented as some “cool criticisms” of the “novel’s style.” Much to her mother’s surprise, Isabel’s novel, Murder Yet to Come, took first place, surpassing the writing team behind the Ellery Queen novels, among the many other seasoned pulp writers who had vied for the prize.
Yet there was plenty of reason for Katharine, ever the devoted scholar of Jung, to appreciate how well she had trained her daughter to speak — or, in this case, write — type. Unlike other detective stories of the time, which often pair a brilliantly imaginative sleuth with a more literal-minded sidekick, Murder Yet to Come features a team of three amateur detectives: an effeminate playwright, his dutiful assistant, and a brawny Army sergeant. Unburdened by crying children or any other domestic responsibilities, they set out to solve a gruesome murder. Each member of the team possesses what Isabel, in her letter to her mother, described as “different gifts and different kinds of strengths.” The playwright has the “quickness of insight” to uncover the murderer’s identity, the sergeant takes “smashingly effective action” to apprehend him, while the assistant makes “slow, solid decisions” that protect the family of the victim from scandal. None of the detectives “works at the wrong things.” Like today’s slick police procedurals, in which there are the people who investigate the crime and those who prosecute the offenders, every character in Murder Yet to Come is designed to maximize the efficiency of the team.
As a mystery story, Murder Yet to Come is decidedly second-rate: the villain predictable, his motive commonplace, the detectives flat and uncharismatic. But as a testing ground for the Myers-Briggs type indicator, the novel is a remarkably direct receptacle for Isabel’s ideas about work, right down to its crude division of gender roles between the feminized playwright and the hyper-masculine military man. Strengths and weaknesses are distributed in a zero-sum fashion; the character who possesses a keen eye for sensory details reverts to a slow, stuttering imbecile when asked to abstract larger patterns from his observations. Friendships and working relationships are always invigorated by personality differences, never strained by them. And for death-defying detectives, the characters are all unusually self-aware, each happy to accept his personal limitations and cede authority to others when necessary, like cogs in a well-oiled machine. Reprinted by CAPT in 1995, Murder Yet to Come showcases characters who are “beautifully consistent with type portraits,” according to the foreword to the new edition. “Those readers who know type will enjoy ‘typing them’ as the mystery progresses.”
CAPT’s website, where I purchased Murder Yet to Come for $15.00, claims that the novel was Isabel’s “only sojourn into fiction” before she shifted her attention to the type indicator. This is incorrect. The company has not reprinted Isabel’s second novel, Give Me Death (1934), which revisits the same trio of detectives half a decade later. Perhaps this is due to the novel’s virulently racist plot: One by one, members of a land-owning Southern family begin committing suicide when they are led to believe that “there is in [our] veins a strain of Negro blood.” Despite their differences, the detectives agree that it is “better for [the family] to be dead” than for them to be alive, heedlessly reproducing with white people.
Give Me Death is more explicitly about the preservation of the family, but saddled with a far more sinister understanding of type: Type as racially determined. There is talk of eugenics. There is much hand wringing about the preservation of Southern family dynasties, about “honor” and “esteem.” That the novel was written in the years when laws forbidding interracial marriage were increasingly the target of ACLU and NAACP protests makes it all the more reactionary, and thus all the more unsuitable, from an image management perspective, for reissue today. One would hardly enjoy “typing” these characters.
If Isabel had started her life as her mother’s experiment, she had quickly grown into Katharine’s student, her apostle, and even her competition. Fiction had presented one way for her to unite her mother’s talk of type with the intelligent division of labor, ordering imaginary characters into a rational system with a profitable end: bringing criminals to justice. After World War II, the emergent industry of personality testing would give Isabel the opportunity to organize — and experiment on — real people.
The second rule of speaking type is: Personality is an innate characteristic, something fixed since birth and immutable, like eye color or right-handedness. “You have to buy into the idea that type never changes,” Barb says, speaking slowly and emphasizing each word so that we may remember and repeat this mantra — “Type Never Changes” — to our future clients. “We will brand this into your brain,” she vows. “The theory behind the instrument supports the fact that you are born with a four-letter preference. If you hear someone say, ‘My type changed,’ they are not correct.”
Of all the questionable assumptions that prop up the Myers-Briggs indicator, this one strikes me as the shakiest: that you are “born with a four-letter preference,” a reductive blueprint for how to move through life’s infinite and varied challenges. Many other personality indicators, ranging in complexity from zodiac signs to online dating questionnaires to Harry Potter’s sorting hat, share the assumption that personality is fixed in one form or another. And yet the belief in a singular and essential self has always seemed to me an irresistibly attractive fiction: one that insists on seeing each of us as a coherent human being, inclined to behave in predictable ways no matter what circumstances surround us. There is, after all, a certain narcissistic beauty to the idea that we are whole. “If personality is an unbroken series of successful gestures, then there was something gorgeous about him, some heightened sensitivity to the promises of life,” wrote F. Scott Fitzgerald of his greatest creation, Jay Gatsby, in the same year that Katharine fell under the sway of Psychological Types. Learning to speak type means learning to link the quotidian gestures of life into an easily digestible story, one capable of communicating to perfect strangers some sense of who you are and why you do what you do.
Yet the impulse to treat personality as innate is, in no small part, a convenient way of putting these gorgeously complete people in their rightful places. Just as each one of Isabel’s three detectives serves a unique purpose in her novels, a way of moving the plot forward that follows from his innate “gifts,” so too does the indicator imagine that each person will fall into their designated niche in a high-functioning and productive social order. This is another fiction — to my mind, a dystopian fiction — that most personality tests trade in: The fantasy of rational organization, and, in particular, the rational organization of labor. “The MBTI will put your personality to work!” promises a career assessment flier from Arizona State University, a promise that is echoed by thousands of leadership guides, self-help books, LinkedIn profiles, and job listings, the promise that underwrites such darkly futuristic films as Divergent or Blade Runner. To live under an economic system that is not organized by personality, thinks the heroine of Divergent, is “not just to live in poverty and discomfort; it is to live divorced from society, separated from the most important thing in life: community.”
Or as a trainee belts out in the middle of an exercise, “Team work makes the dream work!”
Genes, like people, have families — lineages that stretch back through time, all the way to a founding member. That ancestor multiplied and spread, morphing a bit with each new iteration.
For most of the last 40 years, scientists thought that this was the primary way new genes were born — they simply arose from copies of existing genes. The old version went on doing its job, and the new copy became free to evolve novel functions.
Certain genes, however, seem to defy that origin story. They have no known relatives, and they bear no resemblance to any other gene. They’re the molecular equivalent of a mysterious beast discovered in the depths of a remote rainforest, a biological enigma seemingly unrelated to anything else on earth.
The mystery of where these orphan genes came from has puzzled scientists for decades. But in the past few years, a once-heretical explanation has quickly gained momentum — that many of these orphans arose out of so-called junk DNA, or non-coding DNA, the mysterious stretches of DNA between genes. “Genetic function somehow springs into existence,” said David Begun, a biologist at the University of California, Davis.
This metamorphosis was once considered to be impossible, but a growing number of examples in organisms ranging from yeast and flies to mice and humans have convinced most of the field that these de novo genes exist. Some scientists say they may even be common. Just last month, research presented at the Society for Molecular Biology and Evolution in Vienna identified 600 potentially new human genes. “The existence of de novo genes was supposed to be a rare thing,” said Mar Albà, an evolutionary biologist at the Hospital del Mar Research Institute in Barcelona, who presented the research. “But people have started seeing it more and more.”
Researchers are beginning to understand that de novo genes seem to make up a significant part of the genome, yet scientists have little idea of how many there are or what they do. What’s more, mutations in these genes can trigger catastrophic failures. “It seems like these novel genes are often the most important ones,” said Erich Bornberg-Bauer, a bioinformatician at the University of Münster in Germany.
The Orphan Chase
The standard gene duplication model explains many of the thousands of known gene families, but it has limitations. It implies that most gene innovation would have occurred very early in life’s history. According to this model, the earliest biological molecules 3.5 billion years ago would have created a set of genetic building blocks. Each new iteration of life would then be limited to tweaking those building blocks.
Yet if life’s toolkit is so limited, how could evolution generate the vast menagerie we see on Earth today? “If new parts only come from old parts, we would not be able to explain fundamental changes in development,” Bornberg-Bauer said.
The first evidence that a strict duplication model might not suffice came in the 1990s, when DNA sequencing technologies took hold. Researchers analyzing the yeast genome found that a third of the organism’s genes had no similarity to known genes in other organisms. At the time, many scientists assumed that these orphans belonged to families that just hadn’t been discovered yet. But that assumption hasn’t proven true. Over the last decade, scientists sequenced DNA from thousands of diverse organisms, yet many orphan genes still defy classification. Their origins remain a mystery.
In 2006, David Begun, an evolutionary biologist at the University of California, Davis, found some of the first evidence that genes could indeed pop into existence from noncoding DNA. He compared gene sequences from the standard laboratory fruit fly, Drosophila melanogaster, with other closely related fruit fly species. The different flies share the vast majority of their genomes. But Begun and collaborators found several genes that were present in only one or two species and not others, suggesting that these genes weren’t the progeny of existing ancestors. Begun proposed instead that random sequences of junk DNA in the fruit fly genome could mutate into functioning genes.
Yet creating a gene from a random DNA sequence appears as likely as dumping a jar of Scrabble tiles onto the floor and expecting the letters to spell out a coherent sentence. The junk DNA must accumulate mutations that allow it to be read by the cell or converted into RNA, as well as regulatory components that signify when and where the gene should be active. And like a sentence, the gene needs punctuation: short codes that signal where it begins and ends.
In addition, the RNA or protein produced by the gene must be useful. Newly born genes could prove toxic, producing harmful proteins like those that clump together in the brains of Alzheimer’s patients. “Proteins have a strong tendency to misfold and cause havoc,” said Joanna Masel, a biologist at the University of Arizona in Tucson. “It’s hard to see how to get a new protein out of random sequence when you expect random sequences to cause so much trouble.” Masel is studying ways that evolution might work around this problem.
Another challenge for Begun’s hypothesis was that it’s very difficult to distinguish a true de novo gene from one that has changed drastically from its ancestors. (The difficulty of identifying true de novo genes remains a source of contention in the field.)
Ten years ago, Diethard Tautz, a biologist at the Max Planck Institute for Evolutionary Biology, was one of many researchers who were skeptical of Begun’s idea. Tautz had found alternative explanations for orphan genes. Some mystery genes had evolved very quickly, rendering their ancestry unrecognizable. Other genes were created by reshuffling fragments of existing genes.
Then his team came across the Pldi gene, which they named after the German soccer player Lukas Podolski. The sequence is present in mice, rats and humans. In the latter two species, it remains silent, which means it’s not converted into RNA or protein. The DNA is active or transcribed into RNA only in mice, where it appears to be important — mice without it have slower sperm and smaller testicles.
The researchers were able to trace the series of mutations that converted the silent piece of noncoding DNA into an active gene. That work showed that the new gene is truly de novo and ruled out the alternative — that it belonged to an existing gene family and simply evolved beyond recognition. “That’s when I thought, OK, it must be possible,” Tautz said.
A Wave of New Genes
Scientists have now catalogued a number of clear examples of de novo genes: A gene in yeast that determines whether it will reproduce sexually or asexually, a gene in flies and other two-winged insects that became essential for flight, and some genes found only in humans whose function remains tantalizingly unclear.
The Odds of Becoming a Gene
Scientists are testing computational approaches to determine how often random DNA sequences can be mutated into functional genes. Victor Luria, a researcher at Harvard, created a model using common estimates of the rates of mutation, recombination (another way of mixing up DNA) and natural selection. After subjecting a stretch of DNA as long as the human genome to mutation and recombination for 100 million generations, some random stretches of DNA evolved into active genes. If he were to add in natural selection, a genome of that size could generate hundreds or even thousands of new genes.
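Luria’s actual model isn’t detailed here, but the general idea can be sketched as a toy Monte Carlo simulation: mutate a random DNA sequence generation after generation and watch for an open reading frame (ORF) long enough to be gene-like. The sequence length, mutation rate, and ORF threshold below are illustrative assumptions, far smaller than genome scale, and the code is mine, not Luria’s.

```python
import random

# Toy sketch (not Luria's model): subject a random DNA sequence to
# point mutations and check each generation for an open reading frame
# (ORF) -- an ATG start codon followed, in the same reading frame, by
# a stop codon. All parameters are illustrative assumptions.

BASES = "ACGT"
STOP_CODONS = {"TAA", "TAG", "TGA"}

def longest_orf(seq):
    """Length in codons (counting ATG, excluding the stop) of the
    longest forward-strand ORF in seq, across all three frames."""
    best = 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOP_CODONS and start is not None:
                best = max(best, (i - start) // 3)
                start = None
    return best

def generations_until_orf(length=3000, mutation_rate=0.01,
                          min_codons=60, max_generations=10_000, seed=42):
    """Generations of random point mutation until an ORF of at least
    min_codons appears (or None if the budget runs out)."""
    rng = random.Random(seed)
    seq = [rng.choice(BASES) for _ in range(length)]
    for generation in range(1, max_generations + 1):
        for i in range(length):
            if rng.random() < mutation_rate:   # point mutation
                seq[i] = rng.choice(BASES)
        if longest_orf("".join(seq)) >= min_codons:
            return generation
    return None

print(generations_until_orf())
```

Even this crude version shows why the Scrabble-tile intuition misleads: with enough sequence and enough generations, gene-like ORFs turn up surprisingly quickly. A full model like Luria’s layers recombination and natural selection on top of this mutation step.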
At the Society for Molecular Biology and Evolution conference last month, Albà and collaborators identified hundreds of putative de novo genes in humans and chimps — ten-fold more than previous studies — using powerful new techniques for analyzing RNA. Of the 600 human-specific genes that Albà’s team found, 80 percent are entirely new, having never been identified before.
Unfortunately, deciphering the function of de novo genes is far more difficult than identifying them. But at least some of them aren’t doing the genetic equivalent of twiddling their thumbs. Evidence suggests that a portion of de novo genes quickly become essential. About 20 percent of new genes in fruit flies appear to be required for survival. And many others show signs of natural selection, evidence that they are doing something useful for the organism.
In humans, at least one de novo gene is active in the brain, leading some scientists to speculate such genes may have helped drive the brain’s evolution. Others are linked to cancer when mutated, suggesting they have an important function in the cell. “The fact that being misregulated can have such devastating consequences implies that the normal function is important or powerful,” said Aoife McLysaght, a geneticist at Trinity College in Dublin who identified the first human de novo genes.
De novo genes are also part of a larger shift, a change in our conception of what proteins look like and how they work. De novo genes are often short, and they produce small proteins. Rather than folding into a precise structure — the conventional notion of how a protein behaves — de novo proteins have a more disordered architecture. That makes them a bit floppy, allowing the protein to bind to a broader array of molecules. In biochemistry parlance, these young proteins are promiscuous.
Scientists don’t yet know a lot about how these shorter proteins behave, largely because standard screening technologies tend to ignore them. Most methods for detecting genes and their corresponding proteins pick out long sequences with some similarity to existing genes. “It’s easy to miss these,” Begun said.
That’s starting to change. As scientists recognize the importance of shorter proteins, they are implementing new gene discovery technologies. As a result, the number of de novo genes might explode. “We don’t know what things shorter genes do,” Masel said. “We have a lot to learn about their role in biology.”
Scientists also want to understand how de novo genes get incorporated into the complex network of reactions that drive the cell, a particularly puzzling problem. It’s as if a bicycle spontaneously grew a new part and rapidly incorporated it into its machinery, even though the bike was working fine without it. “The question is fascinating but completely unknown,” Begun said.
A human-specific gene called ESRG illustrates this mystery particularly well. Some of the sequence is found in monkeys and other primates. But it is only active in humans, where it is essential for maintaining the earliest embryonic stem cells. And yet monkeys and chimps are perfectly good at making embryonic stem cells without it. “It’s a human-specific gene performing a function that must predate the gene, because other organisms have these stem cells as well,” McLysaght said.
“How does a novel gene become functional? How does it get incorporated into actual cellular processes?” McLysaght said. “To me, that’s the most important question at the moment.”
If possible always invent in imitation of Nature. God knows his designs.
By the way, I have long considered, and experimented with, the idea of a reactive liquid armor that both redirects projectile trajectories and disperses force in spreading waves rather than meeting it with direct resistance.
So I found this step forward to be interesting on several counts: in construction method, in design, and as a pointer toward improved future capabilities.
Illustration of deformation mechanisms in laminates (Rudykh et al.)
Body armor suffers from a core tension: it must be light enough so the soldier wearing it can still fight effectively, but strong enough to actually stop bullets and shrapnel. Durable, shock-absorbing Kevlar is the current standard, but it can definitely be improved upon. What if, instead of making the armor itself a liquid, researchers borrowed an armor design from creatures that move through it? A team at MIT, led by mechanical engineer Stephan Rudykh, designed a flexible armor inspired by fish scales.
Scale armor is almost as old as armor itself, with numerous examples found in ancient art from Rome to China. To improve on an ancient concept, the MIT team came up with a single metric for the armor’s value: protecto-flexibility (Ψ). This is “a new metric which captures the contrasting combination of protection and flexibility, taken as the ratio between the normalized indentation and normalized bending stiffness.” Working from a single metric, the researchers were able to greatly increase the strength of the armor while only modestly reducing its flexibility.
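As quoted, Ψ is just a ratio of two normalized quantities. Here is a minimal sketch of that arithmetic, assuming each quantity is normalized against a reference design; the variable names and the normalization scheme are my assumptions, not the paper’s notation.

```python
# Minimal sketch of the protecto-flexibility metric as quoted: the
# ratio between normalized indentation and normalized bending
# stiffness. Reference-design normalization and variable names are
# assumptions, not the paper's notation.

def protecto_flexibility(indentation, bending_stiffness,
                         ref_indentation, ref_bending_stiffness):
    normalized_indentation = indentation / ref_indentation
    normalized_bending = bending_stiffness / ref_bending_stiffness
    return normalized_indentation / normalized_bending

# Example: a design that indents half as deeply as the reference
# while being 1.25x stiffer in bending.
print(protecto_flexibility(0.5, 1.25, 1.0, 1.0))  # 0.4
```

Collapsing two competing goals into one number like this is what let the team search the design space systematically instead of trading off protection and flexibility by eye.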
The practical implications of the study are hinted at by who funded it: the research “was supported by the U.S. Army Research Office through the MIT Institute for Soldier Nanotechnologies.” In the future, soldiers could have fish-scale suits of armor that are more flexible around joints and sturdier across the rest of the body, adding protection where there was none before without giving up any of the value of existing armor.
This armor is still in the early testing stages. “Flexibility and protection by design: imbricated hybrid microstructures of bio-inspired armor” only covers indentation tests, designed to see just how far the scales would bend when forced to. Next stages include trying the armor against bullets and shrapnel. If successful, the future of armor could look a heck of a lot like the past.
What to do when you just can’t quit–no matter how many times you’ve tried.
By Nir Eyal
I had just finished giving a speech on building habits when a woman in the audience exclaimed, “You teach how to create habits, but that’s not my problem. I’m fat!” The frustration in her voice echoed throughout the room. “My problem is stopping bad habits. That’s why I’m fat. Where does that leave me?”
I deeply sympathized with the woman. “I was once clinically obese,” I told her. She stared at my lanky frame and waited for me to explain. How did I hack my habits?
One Size Doesn’t Fit All
The first step is to realize that starting a new routine is very different from breaking an existing habit. As I describe in this video, there are different techniques to use depending on the behavior you intend to modify.
For example, creating a habit requires encoding a new set of automatic behaviors, while breaking a habit requires a different set of processes. The brain learns causal relationships between triggers that prompt an action and the associated outcome. If you’d like to get in the habit of taking a vitamin every day, for example, the key is to place the pills somewhere in the path of your normal routine–say, next to your toothbrush, so you remember to take one each morning before you brush. Doing so daily acts as a reminder until, over time, the behavior becomes something done with little or no conscious thought.
However, breaking an existing habit is an entirely different story, and the distinction is something many people mischaracterize. For example, Charles Duhigg, author of The Power of Habit, describes a bad cookie-eating habit that added eight pounds to his waistline.
Every day, Duhigg says, he found himself going to the 14th floor of his office building to buy a cookie. When he began to analyze this habit, Duhigg discovered that the real reward for his behavior was not the cookie itself but the socializing he enjoyed while nom nom nom-ing with co-workers. Once Duhigg figured out that the reward was connecting with friends, he could get rid of the cookie-eating habit by substituting one routine for another. Voilà!
Duhigg echoes the popular belief that the key to breaking a bad habit is replacing it with another habit. I’m not so sure.
Maybe replacing cookies with co-workers did it for Duhigg, but what if you’re the kind of person (like me) who loves the hell out of cookies? I was obese precisely because I love cookies, among many other delicious things, for no other reason than that they taste amazing! For me, ooey gooey chocolate chewy beats chatting it up with Mel from accounting every time.
“Where does that leave me?” the woman in the audience wanted to know. Having struggled with my own weight for years, I was not about to look her in the face and tell her she should chat it up with her co-workers the next time she has a sugar craving. Not going to happen.
When it comes to gaining control over bad habits, like eating food we know isn’t good for us, I shared with her the only thing that has worked for me. I call it “progressive extremism,” and it works particularly well in situations in which substituting one habit for another just won’t do. Before diving into the method I use to transform my habits, follow me back about 20 years.
I was once a vegetarian. As anyone who has made a dramatic shift in diet knows, friends always ask, “Don’t you miss meat? I mean, it tastes so good!” Of course I missed meat!
However, when I began calling myself a vegetarian, somehow what was once appetizing suddenly became something else. The things I once loved to eat were now inedible because I had changed how I defined myself. I was a vegetarian, and vegetarians don’t eat meat.
Saying no to eating animals was no longer difficult. It was no longer a struggle. It was something I just did not do, much in the same way I’d imagine a Hasidic Jew does not eat pork or an observant Muslim does not drink alcohol–they just don’t.
Identity helps us make otherwise difficult choices by offloading willpower. Our choices become what we do because of who we are.
Don’t Versus Can’t
Recent research reveals why looking at our behaviors this way can have a profound impact. A study published in the Journal of Consumer Research tested the words people use when confronting temptation. During the experiment, one group was instructed to use the words “I can’t” while the other used “I don’t” when considering unhealthy food choices. Then the real experiment began.
When people finished the study, they were offered either a chocolate bar or granola bar to thank them for their time. Unbeknownst to participants, the researchers were measuring whether they would take the relatively healthy or unhealthy choice. While 39 percent of people who used the words “I can’t” chose the granola, 64 percent of those in the “I don’t” group picked it over chocolate. The study authors believe saying “I don’t” rather than “I can’t” provides greater “psychological empowerment.”
I was meat-free for about five years, and during that time resisting certain foods was not that difficult because it was consistent with how I saw myself. “I don’t eat meat,” was tied to my identity as a vegetarian.
If not eating meat was easy when it was something I just didn’t do, why couldn’t the same technique be used to stop other unhealthy habits? It turns out it most certainly can.
Here’s How it Works
First, a disclaimer. This technique only works for triggers that can be removed from your environment–for instance, this doesn’t work for quitting a nail-biting habit unless you’re looking to dispose of some digits.
Start by identifying the behavior you want to stop. For example, say you’d like to stop eating processed sugar. Cutting out the sweet stuff all at once, cold turkey, is too big a goal for most people.
Instead, think of just one specific food you’d like to cut from your diet. However–here’s the important part–it needs to be something you wouldn’t really miss and it needs to be forever.
Overwhelming research reveals diets don’t work because they are temporary fixes. If you imagine you’ll get to eat Goobers some day when you’re thinner, this technique won’t work. Temporary diets do nothing but train the brain to binge eat.
To become part of your identity, the commitment needs to be forever, just as vegetarians believe they’ll eat the same way for the rest of their life–it’s who they are.
The mistake most people make is they bite off more than they can chew (excuse the pun). The key is to only remove the things from your diet you won’t really miss. For example, do you like candy corn? I sure don’t. As a kid, the stuff was always the dregs of my Halloween haul. For me, removing candy corn for life was no big deal, so it was first on my list. I don’t eat candy corn and I never will. Done!
Next, write down what you no longer eat and the date you gave it up for good. Writing this down marks the shift from a temporary “can’t” to a permanent “don’t.” Remember, the things you give up have to be easy enough to give up for the rest of your life.
The next step is to wait. This method takes time. When you’re ready, reevaluate what else you can do. Find another trigger to remove that meets the criteria: something you can give up for life and won’t really miss. I decided to never have sugary carbonated drinks at home. I could still have them elsewhere, just not inside the house. Easy peasy.
If the commitment feels like too much, you’re doing too much. Each step needs to feel almost effortless, no big deal, but involve something you can be proud to give up forever.
For example, when I wanted to stop a bad habit of mindlessly surfing the internet and reduce the online distractions in my life, I didn’t quit the Web entirely. I quit one simple thing I wouldn’t miss and intend not to do it for life. I don’t read articles in my Web browser during working hours–ever! Instead, every time I see something that looks interesting, I use an app called Pocket to save it for later (see more about how Pocket works here).
The process of unwinding bad habits takes years, but progressive extremism is an effective way I’ve found to stop behaviors that weren’t serving me. Occasionally, I look at all the unhealthy things that no longer control me the way they once did, and if I feel up to it, I find new bad habits to slay.
By slowly ratcheting up what you don’t do, you invest in a new identity through your record of successfully dropping bad habits from your life. It may start small, but over time, it adds up to a whole new you.
The process for stopping bad habits is fundamentally different from forming new ones.
Existing behaviors etch a neural circuitry that makes unlearning an association between an action and a reward extremely difficult.
Whereas learning new habits follows a slow progression, stopping old behavioral tendencies requires a different approach.
A process I call “progressive extremism” utilizes what we know about the psychology of identity to help stop behaviors we don’t want.
By classifying specific behaviors as things you will never do again, you put certain actions into the realm of “I don’t” versus “I can’t.”