Friday, April 22, 2016

Five Key Principles of Mental Toughness and Resilience

Thinking on a train journey (Photo: Wikipedia)

by Phil Pearl DCH, DHP, MCH, GHR Reg, Personal Development Toolbox:

Throughout our lives we face change and challenges. Nothing stays the same; the good times don’t last but neither do the bad times.

People and places come and go; the world changes and so does our place within it. To survive the changes we need to be adaptable and refocus on our objectives.

We may have to modify who we are and how we are, in order to face the new realities. We must strive to find opportunity in adversity. Of course, all of this is easier said than done. In this article I have highlighted five key principles of mental toughness and resilience. 

Rational Thinking 

We are what we think. When we change our thoughts, we change how we feel and act. Rational thinking and rational beliefs are the foundations of mental toughness and resilience; they assist us in our aims, objectives and survival. Rational beliefs are flexible and non-extreme; they are based on reality and the available evidence. The emphasis is on seeing things as they really are and keeping any negative attributes in perspective and in proportion, so that we do not over-react emotionally or avoid challenges. If our thinking and beliefs are dogmatic, rigid or extreme we remain trapped in the past and are doomed to repeat the same mistakes. The key is to ask ourselves “how is thinking or behaving this way helping me to feel good or achieve my goals?”

Rational thinking is resilient thinking and helps us build our tolerance for frustration and discomfort without making “mountains out of molehills” or seeing a situation as being worse than it really is. The fact is that things could always be worse. Our rational thoughts and beliefs are essential to overcoming unhelpful emotions and behaviour such as anxiety, depression and avoidance. By changing our thinking, we change who we are, how we feel and what we do. 


Responsibility

Mental toughness means that we take ultimate responsibility for our thoughts, emotions and behaviour, together with responsibility for our decisions and the likely consequences of our actions. Events and conditions will of course have an effect on us, but we are responsible for the things that come within our domain of influence. Events can only upset us if we allow them to. Nothing and no one can bother or disturb us unless we grant them permission to do so. We choose what we think, how we feel and what we do.

To be resilient we need to take responsibility, otherwise we will tend to view ourselves as pawns and victims. We may blame everyone and everything for our conditions rather than take active steps to change whatever we are capable of. At times we may all seek to blame the government or this or that corporation for the way our lives are, but the ultimate responsibility is still ours. We are ultimately in control. 


Adaptability

For mental toughness and mental health in general, we need to be adaptable. We may seem mentally healthy when we are suited to the conditions around us, such as our jobs, relationships and home. However, if these conditions change and we are unable to adapt, then we are at risk of poor mental health. Change is uncomfortable but we need to accept some discomfort and pain in order to learn, adapt and survive. If we remain static and fixed in our outlook, the world moves on and leaves us behind.

Resilient people do not see themselves as victims of change. They do not complain “why me” and demand that bad things must not happen to them. Resilient people see bad events as a normal (although unwelcome) part of life; they adjust to the new reality. Evolution favours those who can adapt to new environments and realities; we must be relentless in our adaptability, ingenuity and creativity to survive. This is true of individuals and organisations. 


Commitment

Mental toughness requires commitment: a clear idea of what we want out of life - our goals, objectives and purpose. If we don’t know where we are going, then any road will take us there. It is healthy if our commitments extend to different areas of our lives, such as our relationships, careers, health and home, rather than being focused on just one or two areas. It is also helpful to be committed to things outside of ourselves, such as charity work, local groups or political concerns. A key aspect of commitment is that it provides us with meaning in our lives. If we ask ourselves, “What is the meaning of life?” then our commitments and goals should provide the answer.

Having goals and being resilient means that we will keep going and problem solve in the face of setbacks and difficulties. When life knocks us down, we will pick ourselves up again. We will tolerate short-term frustration and discomfort for our long-term gain. Resilience and persistence are key; most people simply give up. 


Confidence

Confidence is our belief in our ability to get things done. Our confidence will vary according to different circumstances and events. For mental toughness and resilience we need to consistently increase the areas where we feel confident. We may prefer to stay within our comfort zones but the world changes and eventually all comfort zones will become uncomfortable. Our comfort zones become comfort traps.

To be more confident we need to be accurate in our appraisal of threats. If we perceive that challenges are unrealistically dangerous or threatening, then we will not take action. If we avoid failure then we also avoid success, so we need to take calculated risks and step outside of our comfort zones. To be resilient we need to be less concerned with how others may view us and what we believe they are thinking or saying about us. We need to challenge our self-imposed limits and our restricted views of reality. We don’t see things as they are; we only see things as we are.

I hope you find these principles useful; you can find more information and articles on my website.

Kind regards

Phil Pearl DCH, DHP, MCH, GHR Reg
Clinical Hypnotherapist

Sunday, April 17, 2016

No, You Can’t Feel Sorry for Everyone: The Idea of Empathy For All Ignores the Limits of Human Psychology

Robert Plutchik's Wheel of Emotions (Wikipedia)
Us and Them: Sometimes, punishment of an out-group is taken to colorful extremes. Here, fans of the Duke Blue Devils try to distract Justin Jackson of the North Carolina Tar Heels during a 2016 game. This time, it didn’t work—UNC defeated Duke 76-72. (Photo: Lance King/Getty Images)
The endpoint of the liberal humanitarian project, which is universal empathy, would mean no boundary between in-group and out-group. In aiming for this goal, we must fight our instincts. That is possible, to a degree.

Research confirms that people can strengthen their moral muscles and blur the divide between in-group and out-group. Practicing meditation, for example, can increase empathy, improving people’s ability to decode emotions from facial expressions [1] and making them more likely to offer a chair [2] to someone with crutches. Simply increasing people’s beliefs in the malleability of empathy increases the empathy they express toward ideologically and racially dissimilar others [3]. And when all else fails, people respond to financial gain. My co-authors and I have shown that introducing monetary incentives for accurate perspective-taking increased Democrats’ and Republicans’ ability to understand each other and to believe that political resolutions were possible [4].

But these exercises can take us only so far. In fact, there is a terrible irony in the assumption that we can ever transcend our parochial tendencies entirely. Social scientists have found that in-group love and out-group hate originate from the same neurobiological basis, are mutually reinforcing, and co-evolved - because loyalty to the in-group provided a survival advantage by helping our ancestors to combat a threatening out-group. That means that, in principle, if we eliminate out-group hate completely, we may also undermine in-group love. Empathy is a zero-sum game.

Absolute universalism, in which we feel compassion for every individual on Earth, is psychologically impossible. Ignoring this fact carries a heavy cost: We become paralyzed by the unachievable demands we place on ourselves. We can see this in our public discourse today. Discussions of empathy fluctuate between worrying that people don’t empathize enough and fretting that they empathize too much with the wrong people. These criticisms both come from the sense that we have an infinite capacity to empathize, and that it is our fault if we fail to use it.

In 2006, then-Senator Barack Obama spoke at Northwestern University’s commencement, bemoaning the country’s “empathy deficit” and urging people “to see the world through those who are different from us.” Several studies supported Obama’s concern: People in the 21st century exhibit less empathy [5] and more narcissism [6] than in decades past. A torrent of think-pieces has lamented and diagnosed this empathy decline.

And then the pendulum swung back. People do care, newspaper editorialists and social-media commenters granted. But they care inconsistently: grieving for victims of Brussels’ recent attacks and ignoring Yemen’s recent bombing victims; expressing outrage over ISIS rather than the much deadlier Boko Haram; mourning the death of Cecil the Lion in Zimbabwe while overlooking countless human murder victims. There are far worthier tragedies, they wrote, than the ones that attract the most public empathy.

Almost any attempt to draw attention to some terrible event in the world elicits these complaints, as though misallocated empathy was more consequential than the terrible event itself. If we recognized that we have a limited quantity of empathy to begin with, it would help to cure some of the acrimony and self-flagellation of these discussions. The truth is that, just as even the most determined athlete cannot overcome the limits of the human body, so too we cannot escape the limits of our moral capabilities. We must begin with a realistic assessment of what those limits are, and then construct a scientific way of choosing which values matter most to us. 

We can and do override our moral instincts using our more logical and deliberative mode of thinking, so the in-group vs. out-group opposition is not absolute. But we have limited cognitive resources, which rapidly become depleted. For example, keeping a nine-digit insurance policy number in mind without writing it down requires working memory, and can impair our ability to recall other information, like the phone number of the insurance agent. Similar constraints cause what is known as decision fatigue: Deliberating over an initial series of decisions can inhibit thoughtfulness in later decisions, as observed in judges deciding whether to grant prisoners parole earlier and later in the day [7]. Likewise, full compassion requires inhibitory control (regulating our own emotions to distinguish them from another person’s emotions), self-reflection, externally focused attention, and recognition of another person’s suffering. These faculties, too, can tire.

Morality can’t be everywhere at once - we humans have trouble extending equal compassion to foreign earthquake victims and hurricane victims in our own country. Our capacity to feel and act pro-socially toward another person is finite. And one moral principle can constrain another. Even political liberals who prize universalism recoil when it distracts from a targeted focus on socially disadvantaged groups. Empathy draws our attention toward particular targets, and whether that target represents the underprivileged, blood relatives, refugees from a distant country, or players on a sports team, it diverts our attention from other equally (or more) deserving ones.

That means we need to abandon an idealized cultural sensitivity that gives all moral values equal importance. We must instead focus our limited moral resources on a few values, and make tough choices about which ones are more important than others. Collectively, we must decide that these actions affect human happiness more than those actions, and therefore the first set must be deemed more moral than the second set. 

Once we abandon the idea of universal empathy, it becomes clear that we need to build a quantitative moral calculus to help us choose when to extend our empathy. Empathy, by its very nature, seems unquantifiable, but behavioral scientists have developed techniques to turn people’s vague instincts into hard numbers. Cass Sunstein of Harvard Law School has suggested that moral concepts like fairness and dignity can be assessed using a procedure he calls breakeven analysis. Do people feel that the benefits of a given course of action justify the costs? If so, the action is worth taking. For example, we may judge that invasive phone-hacking is moral if the cost of invasion of privacy is countered with the benefit of preventing one terrorist attack at some minimum frequency, say, every five and a half years.
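Sunstein’s breakeven test reduces to simple arithmetic: annualize the benefit of a policy and compare it with the policy’s ongoing cost. A minimal sketch, with invented "welfare units" and a hypothetical function name (none of the numbers come from Sunstein’s own analysis):

```python
# Toy illustration of breakeven analysis: an action is worth taking if
# its benefit, spread over the interval between the events it prevents,
# at least offsets its ongoing cost. All values are invented.

def breakeven_justified(cost_per_year, benefit_per_event, years_between_events):
    """Return True if the annualized benefit meets or exceeds the annual cost."""
    annualized_benefit = benefit_per_event / years_between_events
    return annualized_benefit >= cost_per_year

# Say phone-hacking costs 10 welfare units of privacy per year, and
# preventing one terrorist attack is worth 50 units.
print(breakeven_justified(10, 50, 5.5))  # one attack every 5.5 years: False
print(breakeven_justified(10, 50, 5.0))  # one attack every 5 years: True
```

On these made-up numbers, one prevented attack every five and a half years falls just short of breakeven while one every five years clears it, which is exactly the kind of threshold the analysis is meant to surface.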

Furthermore, survey data from people across the globe can reveal what people consider the most important factors in their happiness and suffering. Advances in survey methods that examine happiness associated with specific daily events [8] or that use smartphone technology to assess happiness [9] on a moment-to-moment basis have improved on basic self-report methods. Implicit measures that capture how quickly people associate words related to the self (“me”) with words related to happiness (“elated”) have begun to capture aspects of happiness separable from explicit reports of happiness. And neuroimaging methods have identified neural signatures of both hedonic well-being (related to pleasure) and eudaimonic well-being (related to meaning in life).

Basing our moral criteria on maximizing happiness is not simply a philosophical choice, but rather a scientifically motivated one: Empirical data confirm that happiness improves physical health, enhancing immune function and reducing stress, both of which contribute to longevity. Shouldn’t our moral choice be the one that maximizes our collective well-being? These data sets can give us moral “prostheses,” letting us evaluate different values side-by-side - and helping us to discard those lesser values that obstruct more meaningful ones. The only wrong choice when it comes to morals is “all of the above.”
Universal morality: Eleanor Roosevelt peruses the English version of the United Nations’ 1948 Universal Declaration of Human Rights.
These approaches can help us create a universal moral code—something that can serve as a moral guide in all cases, even if we are not able to actually apply it to all people all the time. Indeed, numerous scientifically rigorous descriptive theories of universal values already exist: Shalom Schwartz’s theory of basic values and Jonathan Haidt and colleagues’ Moral Foundations Theory, among others. We’ve tried to create a universal code before: In 1946, the United Nations established an 18-member committee of varying nationalities to formulate the Universal Declaration of Human Rights. Two additional U.N. committees reviewed the draft before the General Assembly voted to adopt it in 1948. Still, it relied on the opinions of elites rather than of a broader populace. Today, we could take a more data-driven approach.

As a case study, take Apple’s recent battle with the FBI over unlocking an iPhone belonging to one of the San Bernardino shooters. The FBI requested that Apple circumvent the iPhone’s encryption protecting the user’s personal data, a request Apple refused (the FBI ultimately unlocked the phone itself). The case pitted personal security (protection from government surveillance) against national security (determining whether the San Bernardino attack involved coordination with ISIS). The balance is a difficult one to strike, and it was largely argued in an adversarial way that magnified differences of opinion.

We could be more systematic. We could use a standardized score to examine how violations of personal security and national security affect happiness. This could allow us to state that certain values are more universal than others, and are therefore more central to human well-being. Such an effort could tell us, perhaps, that on average the anxiety people feel regarding the possibility of the government reading their text messages is greater than the distress caused by experiencing or anticipating a terrorist attack. If so, Apple would emerge as more “morally right” than the FBI (or vice versa).
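One way to make the "standardized score" concrete is a z-score: rate each harm against survey data for its own category, then compare the results on a common scale. A minimal sketch with invented samples and ratings (nothing here comes from a real study):

```python
from statistics import mean, stdev

def z_score(value, sample):
    """Standardize a raw distress rating against a survey sample for that category."""
    return (value - mean(sample)) / stdev(sample)

# Invented survey data: distress ratings (0-10) reported for each kind of violation.
surveillance_sample = [3, 4, 5, 4, 6, 3, 5]     # government reading texts
terror_anxiety_sample = [6, 7, 8, 7, 9, 6, 8]   # anticipating an attack

# The same raw rating of 8 means different things on the two scales:
z_surveillance = z_score(8, surveillance_sample)   # far above its category's norm
z_terror = z_score(8, terror_anxiety_sample)       # close to its category's norm

# On the common standardized scale, the surveillance harm stands out more.
print(z_surveillance > z_terror)  # True
```

Whether such averages should settle the Apple-FBI question is of course the philosophical point at issue; the sketch only shows that the comparison itself is mechanically straightforward.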

A data-based approach to identifying and ranking universal values is ambitious to be sure. But, crucially, it calls on us to make use of the limits on morality that are inherent to all of us as human beings, rather than lamenting them. These constraints challenge us to focus our attention, and drive us to see that not all values are equally valid. Instead of indefinitely fighting over tradeoffs between in-group- and out-group-oriented moralities, we might find that picking among universally held values is more palatable, efficient, and uniting - providing a moral function in and of itself. In place of the usual, default concentric circles of in-groups that guide us today (family, friends, neighbors, citizens) we would have the tools to carefully engineer to whom we should extend our empathy, and when.

Think of the great progress physicists made when they acknowledged the limitations of the physical world - nothing can move faster than light, or be perfectly localized in the subatomic realm. Similarly, we will make our greatest moral progress when we accept and work within the limitations of human moral cognition, and forego an unrealistic concern for respecting difference and moral diversity at any cost. 

Adam Waytz is a social psychologist and professor at Northwestern University’s Kellogg School of Management. He studies humanization, dehumanization, and the moral implications of these processes. 


1. Mascaro, J.S., Rilling, J.K., Tenzin Negi, L., & Raison, C.L. Compassion meditation enhances empathic accuracy and related neural activity. Social Cognitive and Affective Neuroscience 8, 48-55 (2013).

2. Condon, P., Desbordes, G., Miller, W.B., & DeSteno, D. Meditation Increases Compassionate Responses to Suffering. Psychological Science 24, 2125-2127 (2013).

3. Schumann, K., Zaki, J., & Dweck, C.S. Addressing the empathy deficit: Beliefs about the malleability of empathy predict effortful responses when empathy is challenging. Journal of Personality and Social Psychology 107, 475-493 (2014).

4. Waytz, A., Young, L.L., & Ginges, J. Motive attribution asymmetry for love vs. hate drives intractable conflict. Proceedings of the National Academy of Sciences 111, 15687-15692 (2014).

5. Konrath, S.H., O’Brien, E.H., & Hsing, C. Changes in dispositional empathy in American college students over time: A meta-analysis. Personality and Social Psychology Review 15, 180-198 (2011).

6. Twenge, J.M. & Campbell, W.K. The Narcissism Epidemic: Living in the Age of Entitlement. Atria Books, New York, NY (2010).

7. Danziger, S., Levav, J., & Avnaim-Pesso, L. Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences 108, 6889-6892 (2011).

8. Kahneman, D., Krueger, A.B., Schkade, D., Schwartz, N., & Stone, A.A. A survey method for characterizing daily life experience: The day reconstruction method. Science 306, 1776-1780 (2004).

9. Killingsworth, M.A. & Gilbert, D.T. A wandering mind is an unhappy mind. Science 330, 932 (2010).

Friday, April 15, 2016

How People Learn to Become Resilient

Perception is key to resilience: Do you conceptualize an event as traumatic, or as a chance to learn and grow? (Illustration: Gizem Vural)
From The New Yorker:

Norman Garmezy, a developmental psychologist and clinician at the University of Minnesota, met thousands of children in his four decades of research. But one boy in particular stuck with him.

He was nine years old, with an alcoholic mother and an absent father. Each day, he would arrive at school with the exact same sandwich: two slices of bread with nothing in between. At home, there was no other food available, and no one to make any. Even so, Garmezy would later recall, the boy wanted to make sure that “no one would feel pity for him and no one would know the ineptitude of his mother.” Each day, without fail, he would walk in with a smile on his face and a “bread sandwich” tucked into his bag.

The boy with the bread sandwich was part of a special group of children. He belonged to a cohort of kids - the first of many - whom Garmezy would go on to identify as succeeding, even excelling, despite incredibly difficult circumstances. These were the children who exhibited a trait Garmezy would later identify as “resilience” (he is widely credited with being the first to study the concept in an experimental setting). 

Over many years, Garmezy would visit schools across the country, focusing on those in economically depressed areas, and follow a standard protocol. He would set up meetings with the principal, along with a school social worker or nurse, and pose the same question: Were there any children whose backgrounds had initially raised red flags - kids who seemed likely to become problem kids - who had instead become, surprisingly, a source of pride? 

“What I was saying was, ‘Can you identify stressed children who are making it here in your school?’” Garmezy said, in a 1999 interview. “There would be a long pause after my inquiry before the answer came. If I had said, ‘Do you have kids in this school who seem to be troubled?,’ there wouldn’t have been a moment’s delay. But to be asked about children who were adaptive and good citizens in the school and making it even though they had come out of very disturbed backgrounds - that was a new sort of inquiry. That’s the way we began.”

Resilience presents a challenge for psychologists. Whether you can be said to have it or not largely depends not on any particular psychological test but on the way your life unfolds. If you are lucky enough to never experience any sort of adversity, we won’t know how resilient you are. It’s only when you’re faced with obstacles, stress, and other environmental threats that resilience, or the lack of it, emerges: Do you succumb or do you surmount?

Environmental threats can come in various guises. Some are the result of low socioeconomic status and challenging home conditions (those are the threats studied in Garmezy’s work). Often, such threats - parents with psychological or other problems; exposure to violence or poor treatment; being a child of problematic divorce - are chronic. Other threats are acute: experiencing or witnessing a traumatic violent encounter, for example, or being in an accident. What matters is the intensity and the duration of the stressor. In the case of acute stressors, the intensity is usually high. The stress resulting from chronic adversity, Garmezy wrote, might be lower - but it “exerts repeated and cumulative impact on resources and adaptation and persists for many months and typically considerably longer.”

Prior to Garmezy’s work on resilience, most research on trauma and negative life events had a reverse focus. Instead of looking at areas of strength, it looked at areas of vulnerability, investigating the experiences that make people susceptible to poor life outcomes (or that lead kids to be “troubled,” as Garmezy put it).

Garmezy’s work opened the door to the study of protective factors: the elements of an individual’s background or personality that could enable success despite the challenges they faced. Garmezy retired from research before reaching any definitive conclusions - his career was cut short by early-onset Alzheimer’s - but his students and followers were able to identify elements that fell into two groups: individual, psychological factors and external, environmental factors, or disposition on the one hand and luck on the other.

In 1989, a developmental psychologist named Emmy Werner published the results of a thirty-two-year longitudinal project. She had followed a group of six hundred and ninety-eight children, in Kauai, Hawaii, from before birth through their third decade of life. Along the way, she’d monitored them for any exposure to stress: maternal stress in utero, poverty, problems in the family, and so on. Two-thirds of the children came from backgrounds that were, essentially, stable, successful, and happy; the other third qualified as “at risk.”

Like Garmezy, she soon discovered that not all of the at-risk children reacted to stress in the same way. Two-thirds of them “developed serious learning or behavior problems by the age of ten, or had delinquency records, mental health problems, or teen-age pregnancies by the age of eighteen.” But the remaining third developed into “competent, confident, and caring young adults.” They had attained academic, domestic, and social success - and they were always ready to capitalize on new opportunities that arose.

What was it that set the resilient children apart? Because the individuals in her sample had been followed and tested consistently for three decades, Werner had a trove of data at her disposal. She found that several elements predicted resilience. Some elements had to do with luck: a resilient child might have a strong bond with a supportive caregiver, parent, teacher, or other mentor-like figure. 

But another, quite large set of elements was psychological, and had to do with how the children responded to the environment. From a young age, resilient children tended to “meet the world on their own terms.” They were autonomous and independent, would seek out new experiences, and had a “positive social orientation.” “Though not especially gifted, these children used whatever skills they had effectively,” Werner wrote.

Perhaps most importantly, the resilient children had what psychologists call an “internal locus of control”: they believed that they, and not their circumstances, affected their achievements. The resilient children saw themselves as the orchestrators of their own fates. In fact, on a scale that measured locus of control, they scored more than two standard deviations away from the standardization group.

Werner also discovered that resilience could change over time. Some resilient children were especially unlucky: they experienced multiple strong stressors at vulnerable points and their resilience evaporated. Resilience, she explained, is like a constant calculation: Which side of the equation weighs more, the resilience or the stressors? The stressors can become so intense that resilience is overwhelmed. Most people, in short, have a breaking point. On the flip side, some people who weren’t resilient when they were little somehow learned the skills of resilience. They were able to overcome adversity later in life and went on to flourish as much as those who’d been resilient the whole way through. This, of course, raises the question of how resilience might be learned.

George Bonanno is a clinical psychologist at Columbia University’s Teachers College; he heads the Loss, Trauma, and Emotion Lab and has been studying resilience for nearly twenty-five years. Garmezy, Werner, and others have shown that some people are far better than others at dealing with adversity; Bonanno has been trying to figure out where that variation might come from. Bonanno’s theory of resilience starts with an observation: all of us possess the same fundamental stress-response system, which has evolved over millions of years and which we share with other animals. The vast majority of people are pretty good at using that system to deal with stress. When it comes to resilience, the question is: Why do some people use the system so much more frequently or effectively than others?

One of the central elements of resilience, Bonanno has found, is perception: Do you conceptualize an event as traumatic, or as an opportunity to learn and grow? “Events are not traumatic until we experience them as traumatic,” Bonanno told me, in December. “To call something a ‘traumatic event’ belies that fact.” 

He has coined a different term: PTE, or potentially traumatic event, which he argues is more accurate. The theory is straightforward. Every frightening event, no matter how negative it might seem from the sidelines, has the potential to be traumatic or not to the person experiencing it (Bonanno focuses on acute negative events, where we may be seriously harmed; others who study resilience, including Garmezy and Werner, look more broadly).

Take something as terrible as the surprising death of a close friend: you might be sad, but if you can find a way to construe that event as filled with meaning - perhaps it leads to greater awareness of a certain disease, say, or to closer ties with the community - then it may not be seen as a trauma (indeed, Werner found that resilient individuals were far more likely to report having sources of spiritual and religious support than those who weren’t). The experience isn’t inherent in the event; it resides in the event’s psychological construal.

It’s for this reason, Bonanno told me, that “stressful” or “traumatic” events in and of themselves don’t have much predictive power when it comes to life outcomes. “The prospective epidemiological data shows that exposure to potentially traumatic events does not predict later functioning,” he said. “It’s only predictive if there’s a negative response.” In other words, living through adversity, be it endemic to your environment or an acute negative event, doesn’t guarantee that you’ll suffer going forward. What matters is whether that adversity becomes traumatizing.

The good news is that positive construal can be taught. “We can make ourselves more or less vulnerable by how we think about things,” Bonanno said. In research at Columbia, the neuroscientist Kevin Ochsner has shown that teaching people to think of stimuli in different ways - to reframe them in positive terms when the initial response is negative, or in a less emotional way when the initial response is emotionally “hot” - changes how they experience and react to the stimulus. You can train people to better regulate their emotions, and the training seems to have lasting effects.

Similar work has been done with explanatory styles - the techniques we use to explain events. I’ve written before about the research of Martin Seligman, the University of Pennsylvania psychologist who pioneered much of the field of positive psychology: Seligman found that training people to change their explanatory styles from internal to external (“Bad events aren’t my fault”), from global to specific (“This is one narrow thing rather than a massive indication that something is wrong with my life”), and from permanent to impermanent (“I can change the situation, rather than assuming it’s fixed”) made them more psychologically successful and less prone to depression. 

The same goes for locus of control: not only is a more internal locus tied to perceiving less stress and performing better but changing your locus from external to internal leads to positive changes in both psychological well-being and objective work performance. The cognitive skills that underpin resilience, then, seem like they can indeed be learned over time, creating resilience where there was none.

Unfortunately, the opposite may also be true. “We can become less resilient, or less likely to be resilient,” Bonanno says. “We can create or exaggerate stressors very easily in our own minds. That’s the danger of the human condition.” Human beings are capable of worry and rumination: we can take a minor thing, blow it up in our heads, run through it over and over, and drive ourselves crazy until we feel like that minor thing is the biggest thing that ever happened. In a sense, it’s a self-fulfilling prophecy. Frame adversity as a challenge, and you become more flexible and able to deal with it, move on, learn from it, and grow. Focus on it, frame it as a threat, and a potentially traumatic event becomes an enduring problem; you become more inflexible, and more likely to be negatively affected.

In December the New York Times Magazine published an essay called “The Profound Emptiness of ‘Resilience.’” It pointed out that the word is now used everywhere, often in ways that drain it of meaning and link it to vague concepts like “character.” But resilience doesn’t have to be an empty or vague concept. In fact, decades of research have revealed a lot about how it works. This research shows that resilience is, ultimately, a set of skills that can be taught. In recent years, we’ve taken to using the term sloppily - but our sloppy usage doesn’t mean that it hasn’t been usefully and precisely defined. It’s time we invest the time and energy to understand what “resilience” really means.

Monday, April 11, 2016

Aldous Huxley on the Transcendent Power of Music and Why It Sings to Our Souls

by Maria Popova, Brain Pickings:

“Without music life would be a mistake,” Nietzsche proclaimed in 1889. But although a great many beloved writers have extolled the power of music with varying degrees of Nietzsche’s bombast, no one has captured its singular enchantment more beautifully than Aldous Huxley (July 26, 1894–November 22, 1963). In his mid-thirties — just before the publication of Brave New World catapulted him into literary celebrity and a quarter century before his insightful writings about art and artists and his transcendent experience with hallucinogenic drugs — Huxley came to contemplate the mysterious transcendence at the heart of this most spiritually resonant of the arts. His meditations were eventually published as the 1931 treasure Music at Night and Other Essays (public library).
In a magnificent essay titled “The Rest Is Silence” — which inspired the title of Alex Ross’s modern masterwork The Rest Is Noise — Huxley writes:
From pure sensation to the intuition of beauty, from pleasure and pain to love and the mystical ecstasy and death — all the things that are fundamental, all the things that, to the human spirit, are most profoundly significant, can only be experienced, not expressed. The rest is always and everywhere silence.
After silence that which comes nearest to expressing the inexpressible is music.
In a parenthetical observation that calls to mind Susan Sontag on the aesthetics of silence, Huxley adds:
Silence is an integral part of all good music. Compared with Beethoven’s or Mozart’s, the ceaseless torrent of Wagner’s music is very poor in silence. Perhaps that is one of the reasons why it seems so much less significant than theirs. It “says” less because it is always speaking.
Huxley considers music’s singular capacity for expressing the inexpressible:
In a different mode, or another plane of being, music is the equivalent of some of man’s most significant and most inexpressible experiences. By mysterious analogy it evokes in the mind of the listener, sometimes the phantom of these experiences, sometimes even the experiences themselves in their full force of life — it is a question of intensity; the phantom is dim, the reality, near and burning. Music may call up either; it is chance or providence which decides. The intermittences of the heart are subject to no known law.
More than merely echoing our experience, Huxley argues, music enlarges it:
Listening to expressive music, we have, not of course the artist’s original experience (which is quite beyond us, for grapes do not grow on thistles), but the best experience in its kind of which our nature is capable — a better and completer experience than in fact we ever had before listening to the music.
But the most complete experience of all, the only one superior to music, is silence:
When the inexpressible had to be expressed, Shakespeare laid down his pen and called for music. And if the music should also fail? Well, there was always silence to fall back on. For always, always and everywhere, the rest is silence.

One of Arthur Rackham’s rare 1917 illustrations for the fairy tales of the Brothers Grimm
In a different piece from the same collection, the uncommonly breathtaking title essay “Music at Night,” Huxley revisits the subject of humanity’s most powerful medium of expression:
Moonless, this June night is all the more alive with stars. Its darkness is perfumed with faint gusts from the blossoming lime trees, with the smell of wetted earth and the invisible greenness of the vines. There is silence; but a silence that breathes with the soft breathing of the sea and, in the thin shrill noise of a cricket, insistently, incessantly harps on the fact of its own deep perfection. Far away, the passage of a train is like a long caress, moving gently, with an inexorable gentleness, across the warm living body of the night.
Suddenly, by some miraculously appropriate coincidence (for I had selected the record in the dark, without knowing what music the machine would play), suddenly the introduction to the Benedictus in Beethoven’s Missa Solemnis begins to trace patterns on the moonless sky.
Huxley exhales:
The Benedictus. Blessed and blessing, this music is in some sort the equivalent of the night, of the deep and living darkness, into which, now in a single jet, now in a fine interweaving of melodies, now in pulsing and almost solid clots of harmonious sound, it pours itself, stanchlessly pours itself, like time, like the rising and falling, falling trajectories of a life. It is the equivalent of the night in another mode of being, as an essence is the equivalent of the flowers, from which it is distilled.
“Blessedness is within us all,” Patti Smith wrote in her beautiful elegy for her soul mate, and it is the revelation of this blessedness that Huxley celebrates as music’s highest power:
There is, at least there sometimes seems to be, a certain blessedness lying at the heart of things, a mysterious blessedness, of whose existence occasional accidents or providences (for me, this night is one of them) make us obscurely, or it may be intensely, but always fleetingly, alas, always only for a few brief moments aware. In the Benedictus Beethoven gives expression to this awareness of blessedness. His music is the equivalent of this Mediterranean night, or rather of the blessedness at the heart of the night, of the blessedness as it would be if it could be sifted clear of irrelevance and accident, refined and separated out into its quintessential purity.
I think immediately of Saul Bellow’s spectacular Nobel Prize acceptance speech, in which he asserted: “Only art penetrates … the seeming realities of this world. There is another reality, the genuine one, which we lose sight of. This other reality is always sending us hints, which without art, we can’t receive.” For Huxley, no art swings open the gates of reception more powerfully than music — but the language in which it communicates to us that hidden, genuine reality is untranslatable into our ordinary language:
Music “says” things about the world, but in specifically musical terms. Any attempt to reproduce these musical statements “in our own words” is necessarily doomed to failure. We cannot isolate the truth contained in a piece of music; for it is a beauty-truth and inseparable from its partner. The best we can do is to indicate in the most general terms the nature of the musical beauty-truth under consideration and to refer curious truth-seekers to the original. Thus, the introduction to the Benedictus in the Missa Solemnis is a statement about the blessedness that is at the heart of things. But this is about as far as “our words” will take us. If we were to start describing in our “own words” exactly what Beethoven felt about this blessedness, how he conceived it, what he thought its nature to be, we should very soon find ourselves writing lyrical nonsense… Only music, and only Beethoven’s music, and only this particular music of Beethoven, can tell us with any precision what Beethoven’s conception of the blessedness at the heart of things actually was. If we want to know, we must listen — on a still June night, by preference, with the breathing of the invisible sea for background to the music and the scent of lime trees drifting through the darkness, like some exquisite soft harmony apprehended by another sense.
Although Music at Night and Other Essays belongs in the sad cemetery of life-giving books that have perished out of print, used copies are still findable and very much worth finding. Complement this particular portion with Henry Beston’s exquisite love letter to nighttime, then revisit Huxley on the two types of truth artists must reconcile, how we become who we are, and his little-known children’s book, as well as other notable reflections on the power of music.

Saturday, April 9, 2016

BOOK REVIEW: "Empire of Things" by Frank Trentmann

Cover of “Empire of Things” (Harper)
by Carlos Lozada:

“Empire of Things: How We Became a World of Consumers, From the Fifteenth Century to the Twenty-First” by Frank Trentmann.

There are lots of books about a single thing that supposedly changed everything. 

They all sound the same, with titles like “Cod: A Biography of the Fish That Changed the World” or “Tea: The Drink That Changed the World” or “The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger.” They’re all true, to a point, but all exaggerated - magnifying the influence of one item or product to tell a bigger story.

Frank Trentmann, a historian at Birkbeck College, University of London, takes this model and flips it: He has written a book about everything in order to illuminate the story of individual daily life. “Empire of Things” is a massively ambitious - and just plain massive - history of what and how and why we consume, a work spanning continents, centuries, ideologies, political systems, faiths, and lots and lots of physical stuff.

Though we can be described as voters or households or taxpayers or workers or whatever label suits the political calendar, Trentmann thinks our identity as consumers trumps all others. A typical German owns some 10,000 objects, he notes, while a standard Los Angeles garage may not house a car but rather hundreds of boxes of random stuff. “Instead of warriors or workers, we are more than ever before consumers,” Trentmann writes. “In the rich world - and in the developing world increasingly, too - identities, politics, the economy and the environment are crucially shaped by what and how we consume.”

But the way we think about that consumer culture is all wrong, Trentmann argues. In particular, he contests the notions that consumerism is a U.S.-driven phenomenon; that it emerged with the post-World War II economic boom; that it flows from market forces; and that it invariably reflects a move toward secularism, superficiality and transience.

Trentmann does what historians do, finding stories of personal acquisitiveness as far back as Renaissance Italy, where households began accumulating silverware and table items as markers of “domestic sociability and politeness,” and Ming Dynasty China, where “antiquities and original pieces of art were stocks to be cherished for life.” In the Netherlands, local lotteries in the 1600s offered wine goblets and silver sword handles as minor prizes; the powerball gave you a full silver table service worth 4,000 florins.

For as long as people have yearned for more stuff, intellectuals have chastised them for that desire. Even Plato’s “Republic” followed the “decline of a virtuous, frugal city as it was corrupted by the lust for luxurious living,” Trentmann reminds readers. 

But his true nemesis is more recent: John Kenneth Galbraith, author of “The Affluent Society” (1958), in which the late economist argued that modern society seeks not only to fulfill our needs but also to create new ones, propelling us to live beyond our means, go into debt and thus strengthen the power of business. Though Trentmann acknowledges that the book has been enormously influential in cementing popular notions of consumerism, he dismisses it as “not a sober empirical study but a piece of advocacy to justify greater public spending.”

The irony of a statist and anti-consumerist tract is that, as Trentmann makes clear, governments have been major drivers of consumption patterns and growth throughout history. “Empire” is not just a metaphor here; imperialism and colonialism take up big chunks of the author’s attention, as they created demand for and interest in new goods, on the part of both the subjugators and the subjugated. In particular, the post-Waterloo expansion of British power liberalized world trade, unleashing the movement of more stuff. “British rule brought armies and tax collectors,” Trentmann writes, “but it also spread new norms, habits and behaviours. It changed the terms of consumption.”

Without producing mini-commodity biographies, Trentmann examines the role of cotton - “the first truly global mass consumer good,” with Indian dyed cotton reaching East Africa as early as the 11th century - and explores how tea, coffee and chocolate traversed the globe, thanks to the power of conquest or proselytism. “The first European chocoholics were Jesuits and Dominicans in the New World,” Trentmann deadpans. Legacies of empire and trade affect what people think of as local or national products, the author explains. 

“In Belize today, treasured local dishes have their roots in imperial trade, which prized imported fish and tinned fruit over the local catch,” he writes. “The new local is the old global.” Trentmann traces how popular concerns about products evolved from place of origin to price and now, with the fair-trade movement, to supply-chain conditions.

Though it’s easy to regard growing consumption as resulting from the influence of pro-market Western democracies, Trentmann shows all manner of regimes - whether progressive New Dealers or Stalinists, Nazis or anti-colonial nationalists - embracing consumerism and product symbolism for political purposes and ideological projects. 

In socialist systems, for instance, consumption marks one’s place in the pecking order. “The clothes people were able to buy (or not), the price they had to pay, whether they obtained things in a state store - people’s lives as consumers were defined by their relation to the state,” Trentmann writes of the early Soviet era. And he lingers on Mohandas Gandhi, who as a young man set out for England with a white flannel suit but later embraced the dhoti (a waistcloth slightly longer than a loincloth) as a marker of Indian emancipation, both spiritual and political. Gandhi “was a sartorial fundamentalist,” Trentmann concludes.

Government spending in Western nations has also contributed enormously to consumption, especially in the second half of the 20th century, Trentmann writes, with social services in housing, education and health - as well as income supports for the poor and the elderly - helping lift marginalized groups into the consumer society. “Without the rise in social spending, the bottom would have fallen out of the boom in consumer durables.” 

And Trentmann loves spewing out factoids showing the ever-increasing growth in the acquisition of those durables. For instance: “In 1954, still only around 7-8 percent of [French] households had a fridge or washing machine, 1 percent a TV. ... By 1975, it was 91 percent, 72 percent and 86 percent, respectively.”

Trentmann attempts to remain dispassionate, avoiding the good-vs.-evil judgments of modern consumerism and simply interpreting major historical events through his laser focus on the consumer (even the Cold War was mainly about “whether America or the Soviet Union offered a superior material civilization,” he writes).

But it’s hard to remain the pure historian; overall, Trentmann seems favorably inclined toward our acquisition of ever more goods, seeing it less as a sign of “conspicuous consumption” (yes, Thorstein Veblen’s “Theory of the Leisure Class” is another frequent target) and more as a reflection of individual memory and emotion, or as part of the creation of group or personal identities. For instance, greater levels of consumer spending by the young have created a “cult of the teenager,” he writes, while, “with the exception of Viagra and stairlift, older consumers continue to be largely invisible.”

He does worry about the physical effects of consumerism on the planet, even if he is skeptical of the long-term impact of consumer-based efforts to redress it, such as fair trade or recycling; he regards the latter as “the ally of high-octane consumption, a kind of ersatz sacrament that cleanses us of our stuff.”

One thing that might have helped mitigate our environmental wreckage: a shorter book. Though Trentmann displays astonishing erudition across multiple disciplines and a voracious appetite for arcane, revelatory details and statistics, I’m not convinced that he needed 800-plus pages of text, notes, charts and images - especially for an author skilled at boiling down complicated ideas into pithy, memorable sentences. 

“Looking back at the twentieth century,” Trentmann writes of the rise of consumers in Asian countries, “learning to save was probably as important for the global rise in consumption as learning to want more.” Of government spending on culture and arts in Western Europe: “Those who like to listen to Shakespeare or Verdi have their own welfare state.” And of the proliferation of gadgets and domestic appliances: “The home has morphed into one gigantic socket.”

Still, I understand Trentmann’s dilemma: When you’re writing the definitive book about everything, you really don’t want to leave anything out. 

Carlos Lozada is the nonfiction book critic of The Washington Post.