Neuroscience of Behaviour Change: Why New Year's Resolutions Don't Work
I came across this little nugget of neuroscience that has helped me and many clients over the years actually stick to those New Year's resolutions, and I had to share!
It turns out, the reason resolutions start losing momentum a few weeks into the New Year is not because ‘we're lazy’ or ‘there's no time,’ but because we don't understand the building blocks of behaviour change! While we think we are in control of our behaviours, our subconscious beliefs and patterning are actually in the driving seat! A resolution is really just a decision to change a behaviour or form a new habit in sparkly clothing, so we need to approach it as we would any other behaviour we want to change.
Dr. Tara Swart, a very clever neuroscientist from MIT, developed a four-stage formula (1. Raised Awareness, 2. Focused Attention, 3. Deliberate Practice, 4. Accountability) to shift an undesired behaviour or create a new one. By working a goal/intention/resolution through each stage of the process, making it stick is possible! To help illustrate, let's walk through the four stages with a goal that comes up a lot in my counselling practice: ‘I want to lose 10 pounds’.
Raised Awareness - This means identifying something residing in your unconscious that is preventing you from achieving your goal, and bringing it up to the surface. In our example, this may look like asking yourself: why am I not achieving the goal I want? Maybe there is an underlying belief about not being able to reach any goal (from childhood?) that is preventing the follow-through. **Tip: try stream-of-consciousness journaling to help uncover some of these beliefs/blocks!
Focused Attention - This means bringing awareness to the subconscious emotions or triggers driving your behaviour. For example, try to notice what food choices you make when you are tired or upset. **Tip: place a Post-it note on the snack cupboard to prompt you to notice when you reach for something that is not contributing to your goal. Recognizing a pattern is the key to changing it!
Deliberate Practice - This means taking purposeful action, that one small step, towards making your goal doable. In our example, this could look like meal prepping one night a week to ensure you have healthy dinners and won't order takeout when you're tired and hungry! **Tip: start with raw-ingredient meal prep; chop up a bunch of kale or potatoes for easy roast dinners, or slice carrots for easy snacks rather than crisps.
Accountability - This means having someone, or some form of technology, to demonstrate to yourself objectively rather than subjectively that you are working towards your goal, and to provide support along the way! For example, a food-tracking app that objectively reflects back whether you are eating what you think you are may be a helpful tool. **Tip: make a WhatsApp chat with friends on a similar journey to help keep each other accountable and motivated.
And remember, it is never too late to make a resolution or change an undesirable behaviour - we don't need a New Year to change our lives now! With these steps, you will be reaching your goals in no time!
Power of Attention + Intention
Research shows we are missing out on 50% of our lives. Why? Because we aren’t paying attention!
That means we're only showing up as our best selves half the time: performing to our full potential at work, bringing our dreams to fruition, having the relationships we desire, or even sticking to a diet all suffer because our attention is elsewhere!
Attention is our most precious resource, our fuel for success, because where we direct our attention, we direct our life energy.
But it's not our fault! The main sources of distraction are not technology but: 1. distracting thoughts, such as the stories we tell ourselves or ruminating on the past or future; 2. unresolved emotional trauma causing stress; 3. physical pain, exhaustion and health ailments; 4. mental and physical deficiencies.
Now what - how can we direct our attention to what truly matters?
Turns out, our brain's attention system is as trainable as our body's physical systems; just like we work a certain body part to increase its strength, we can do the same for our brain. The most effective way to take our mind to the gym is mindfulness.
Mindfulness, or being mindful, is available to us in every moment. Mindfulness strengthens and improves coordination between the brain networks that carry out a variety of attentional functions: our ability to direct our focus, witness our environment and experience, and manage our goals and behaviours. This basic human ability to be fully present, aware of what we are thinking and doing and of what is happening around us, becomes more readily available and accessible when we practise it.
One practice to begin.
To start to understand and train your mind, try this simple practice tomorrow morning.
Try to remember, ask a partner to remind you, or write a Post-it note next to your bedside to jog your memory. Try not to reach for your phone until you have finished the practice.
On waking, sit in your bed or a chair in a relaxed posture. Close your eyes, go inward and connect with the sensations of your body. How does your body feel?
Take three long, deep breaths from your belly — breathing in through your nose and out through your mouth. Then let your breath settle into its own rhythm, noticing the rise and fall of your chest and belly as you breathe. Do this until you feel calm, maybe a few rounds of breath or a few minutes.
Ask yourself your version of ‘what is my intention for today?’, ‘how can I show up as my best self today?’, ‘how can I take care of myself today?’
After a few minutes of sitting with these prompts, set your intention for the day. This can be one word or a phrase such as ‘today, I will be kind and thoughtful to myself and others’, ‘today I will stay grounded and balanced in the face of challenge’, ‘today I will do my best to achieve my goals’, or anything else you feel is important.
Throughout the day, revisit your intention, perhaps setting a notification on your phone to go off at intervals to remind you. As you do this practice regularly, notice how you become more and more conscious of your intentions for the day and for life, and how this improves your mood, the quality of your relationships and your ability to achieve your goals.
Give it a try and let me know how it goes! Personally, and for everyone I've seen follow through with this, the result is a big shift in mindset and productivity!
The limits of my language mean the limits of my world – Ludwig Wittgenstein
Seven Types of Rest
You know those times when you have slept 8 hours and still wake up exhausted? Turns out, sleep may not have been the type of rest you needed! Researcher Dr. Saundra Dalton-Smith, in her book Sacred Rest, explains the 7 types of rest we need to make sure we are getting in order to show up as our happiest, most creative and most productive selves!
I hope you are feeling well and rested? If you're like most, sadly probably not! One of the most common complaints I hear in my practice is feeling ‘blah’: no longer feeling the zest and energy for life they once had. ‘I sleep 7 hours a night, do yoga 3 times a week, and have spent most weekends this year watching Netflix! Why am I so tired!?’ They come to me puzzled and exhausted; it turns out, physical rest is just one of the 7 types of rest we need to feel our best. I'm no different; I needed to be quarantined in paradise to begin to understand and really rest for the first time in my life!
During these two weeks, my day looked like walking 3 minutes to the beach, 1 minute to the restaurant and maybe 30 seconds to the pool; yet I needed to sleep 10 hours a night. It felt like my body was going through a process of deep rest, each night peeling away the years of fatigue-inducing emotional stagnation, mental stress and physical overwork. What happened? Tightly bound muscles in my upper back began to unravel, allowing my shoulders to settle; my hips loosened and felt like they could move in a more fluid dance; and creatively, I felt a burst of energy and uninhibited expression I had not known before. Before this forced experiment I would have put this magical transformation down to stretching or a new supplement, but I knew this was not the case. REST was the remedy. Looking for answers, I turned to the research and came across Dr. Saundra Dalton-Smith's book Sacred Rest.
Key findings from the book
Sleep and rest are not the same
Sleep is just 1 of the 7 types of rest we need
Sleep alone can never fully restore us to the point we feel rested
Rest is the most underused, free, safe and effective alternative therapy available to us
Adequate rest increases our productivity, overall happiness, performance and emotional regulation
The first step in conquering our rest deficit requires us to take inventory of the dominant energy drains in our lives: do we feel drained mentally, spiritually, emotionally, socially, sensorially, creatively or physically?
From there, identify which area we need to attend to; balance in all areas is key to adequate whole body rest!
Focus on the type of rest you need to restore the energy in that specific area that is depleted
What are the 7 Types of Rest?
Physical Rest.
This type of rest can be passive or active. Passive physical rest includes sleeping and physically resting, while active physical rest includes restorative activities such as yoga, stretching and massage therapy. This rest relieves your body's tension, aches and pains and helps revitalize the body so you sleep better.
A deficit may show up as body pain, cramping or swelling in the legs or feet, getting sick often, sluggish digestion or a slower-than-usual metabolism.
How to rest -> Gentle movement like stretching, restorative (Yin) yoga, take a walk outside
Mental Rest.
A deficit in this area can feel like lacking the ability to concentrate, experiencing brain fog, or walking into a room and forgetting why you went there. Mental rest allows you to quiet your mind and re-focus on what is important.
Take some downtime to give your mind a break to refresh and fully switch off.
How to rest -> Try guided meditation or schedule short breaks every few hours throughout your workday to re-centre and settle your mind. Keep a notepad by the bed, or with you during the day, to jot down any thoughts that keep your mind activated.
Sensory Rest.
The bright lights of computer screens, incessant background noise, notifications up the ying-yang, and too many Zoom conversations throughout the day can all drain our sensory system and leave our senses overwhelmed. No matter how busy you feel, find time to be alone in silence to recharge.
How to rest -> Calm the sensory overload by closing your eyes for a minute in the middle of the day, going for a quick walk in nature, or intentionally unplugging from electronics for a few hours before bed. Giving yourself a day off technology once a week is a great sensory-recharging habit to get into.
Emotional Rest.
We have an internal capacity to manage our emotions; our emotional bank is constantly filled and depleted every day by internal and external stimuli (people, stresses, work...)
Feeling emotionally drained or exhausted may show up as an inability to respond consciously rather than react to a situation, feeling emotionally numb, or feeling engulfed in anxiety, fear, self-doubt or a sense of failure.
How to rest -> This type of rest requires having ample time and space to fully and authentically express your feelings; try journaling out your feelings, finding a pleasurable activity, or expressing yourself to a person who can fully hold the space for you.
Spiritual Rest.
Spirituality refers to how humans draw meaning from their lives and connect to themselves and others beyond the physical and mental aspects. This helps cultivate a sense of belonging, love, acceptance and purpose.
Making space and time for spiritual rest helps to bring grounding and clarity to our needs, goals and deepest soul purpose.
How to rest -> Connecting to the rhythms of nature through grounding exercises like walking outside barefoot or sitting in meditation can be a helpful tool; you can also practise free writing or drawing to connect to your inner self, or learn about different spiritual philosophies and see what resonates.
Creative Rest.
A creative rest deficit may feel like an inability to think of how to solve even a simple problem, or brainstorm new and innovative ideas; writer’s block is a perfect example of this
Creative rest reawakens the limitless innovation and wonder inside each of us, and reinvigorates our creative potential.
How to rest -> Set aside time to be in nature; going for an undistracted walk outside or even looking at photos of natural beauty can help. Curating your work or home space with inspiring pieces of art or images, going to a museum, listening to music, or reading are all great ways of refilling your creative cup.
Social Rest.
Social rest requires the wisdom to take inventory of and identify which relationships are reviving or draining
Too much social interaction with even the most life-giving friends can be draining, so having ample time to recharge after a lot of social interaction is key to full-body rest
How to rest -> Cultivate and surround yourself with positive, supportive relationships, whether in person or virtually. Learn how to set boundaries and take time away from others who are emotionally draining or toxic; even 5 minutes of respite can go a long way if you have to be around draining people. Decompress after a lot of social interaction by scheduling time alone to do activities that recharge you; make a list to refer to!
Re-remembering Our Way: The Practice of Circle
The timeless and intuitive ritual of council, gathering in intimate circles around a warm fire to share stories and wisdom, has been intrinsic to human problem solving and connection since time immemorial, but has been lost in translation in recent Western culture. Derived from the ancient tradition of storytelling, the spirit of council is a gift offered to us by the elders of many cultures which placed strong circle practices at the centre of connection. I was first introduced to the origins and methodology of this practice through The Way of Council, a text which came to inform so much of my life and work, and such is the value I have gleaned through its teachings that I feel called to share some of its wisdom.
With roots in the natural world, these practices are visible in diverse cultures and religions around the world: in the sacred stone circles of many earth-centred traditions, in the governance and circle-of-life teachings of First Nations peoples, in the native Hawaiian sacred ‘talking story’ ceremony, in moon and women's gatherings of past and present, and in ancient Greek and Roman ruins and texts like the Iliad, to name a few.
This is a practice from which we came; council is not a way we must learn, but re-remember.
Nature is by no means individualist, and by drawing from the depth of human memory and from the archetypes and myths of ancient traditions for guidance, we can navigate our inner and external challenges and cultivate the worlds we wish to embody. Relating consciously in non-hierarchical communication is how to begin: feeling into our inner selves, ‘listening from the heart’ attentively and deeply to those with whom we commune, and ‘speaking from the heart’ openly, honestly and authentically is the path to discovering new insights, understanding, truth and healing.
In this time of an almost terrifying level of opportunity, when the structures that have formed and upheld so many of the conceptualisations and boundaries of self, purpose and society have fallen away, the moment and impetus to shift consciousness at the macro and micro levels is even more evident. Since it has been said that ‘we become the average of the five people we spend the most time with’, reflection on who makes up our circle of counsel, and how we cultivate it, is paramount.
Counsel vs. Council. ‘Council’ derives from concilium, meaning ‘a gathering of people’, and offers a method of communication to facilitate such a gathering; ‘counsel’ can mean to advise, the person doing the advising, or the advice itself.
Heightened by the times we live in, I feel a great yearning for conscious connectedness, a sentiment also reflected by so many in my community, and for a way of practice to facilitate it. The way of council is a form and practice with which I resonate, and perhaps it offers a tool to help navigate this moment.
With this, I ask: who makes up your counsel? And if you resonate with the teachings I have reflected on here, does your counsel reflect the tenets of council?
To delve more into this topic, I recommend The Way of Council by Jack Zimmerman and Virginia Coyle.
Identity at the Dinner Table: What to Eat, Nutritionism + Sustainable Futures
Why do we eat what we do? What is considered healthy, and what is not? This is an in-depth exploration of the preoccupation in contemporary Western society with making the right decisions as to ‘what is good to eat’. I will look at the philosophical origins and consequences of the moral hierarchy and separatism which became established with the 20th century's discourse of body, food, identity and society.
Sarah, a mother of two, is walking into the breakfast aisle trying to buy a box of healthy cereal for her young children sitting in the shopping trolley. They are pulling at her shirt and heartstrings trying to persuade her to buy a bright box of CocoPops while she is gradually becoming overwhelmed by this bombardment of choice. Low sugar, low fat and no cholesterol run around her head as she attempts to remember which dietary advice is the healthy choice. In order to navigate the plethora of perspectives on what is ‘healthy’, heuristics are implemented to make the decision easier. Lobstein and Davies reveal that when questioned, consumers, despite difficulty in knowing how to implement this knowledge, claim to be able to discern ‘what is or is not healthy’ (2008:331). In order to assist consumers and increase the market share of the producer, mental shortcuts such as ‘traffic light’ signalling on food packages (ibid) facilitate the ease and practicability of selecting ‘healthy’ food options. The focus in recent years to investigate these consumer assistance tools (eg. nutrient labels and terms such as ‘healthy’ and ‘low fat’) has increased the academic appetite to understand how individuals form ideas of and engage with discourses of health.
The preoccupation in contemporary Western society with making the right decisions as to ‘what is good to eat’ (Coveney, 2000: x), I will argue, originally stems from a reconceptualisation of food as component parts which will be explained through the history of nutritionism. The philosophical structure produced from the various forms of moral hierarchical dichotomisation, which became established with the 20th century’s discourse of body, food (nutrition), and society, laid the foundations for how food has become characterised in these terms of good/bad for the modern subject (Coveney, 2000:160). The rise of nutritionism provides the groundwork for the dichotomy to flourish under this moral assignment as one of the factors that comprises modern identity and has led to the shift from religious to health based identity. Perhaps the relationship between consumption and identity can be explained by the long standing belief in the proverb, ‘you are what you eat’ (Fox, 2008:1). Further than the physical, social and pleasurable need to satisfy hunger, eating also provides a ‘powerful symbol of who we are’ (ibid). Therefore, for the post-modern citizen, identity is no longer a definitive consequence of the community one is born into nor is it stable in nature, but becomes something one actively creates ‘through consumption’ (Elliot, 1998: 132). For the contemporary neoliberal consumer, the media represents the central vehicle in which these ideological principles are transferred. Within this context, the nature of how to ‘achieve health’ (Roy, 2008) will be discussed in terms of the mode in which these ideas are transferred to the individual and how these ideas manifest both conceptually and actually. This paper will then trace the implementation of these ideas in the context of the family dinner table to see how this ideology is actualised in language and choice. 
While many sites of socialisation form an individual’s ideology of what health means or how to achieve it, focusing on the dinner table will provide some insight into this complex topic.
Rise of Nutritionism
In order to explain the emergence of the dichotomy good/bad food, this section will explore the foundation for which the binary relationship was established. The emergence of nutritionism as a part of the human consumption landscape can be traced back to the 19th century German chemist Justus von Liebig (Anderson, 2014: 51). Building on the English doctor William Prout’s (1785 – 1850) discovery of the three constituting parts of food (protein, fat and carbohydrates), Liebig’s (1803 –1873) discovery of micro-nutrients, along with his theory of metabolism, launched the human fascination with nutrition and food science (Pollan, 2008: 20). From these beginnings, the authority to guide ‘what to eat’ was removed from the body and the power was placed in nutritional science. This trend, which came in response to outbreaks of scurvy, gained vogue amongst the middle classes during the early 1900s and resulted in the use of ‘vitamins’, later named ‘magic molecules’ (ibid).
At this time, knowledge of the body became devalued and dominated by medical and scientific discourse, which as a result came to shape the prevailing conceptualisation of health (Clarke 2011:164). More simply, the idea that the body is a source of authority on itself and on what it needs to consume, based on lived experience or innate knowledge, began to be replaced by the logical ‘scientific discourse of nutritional science’ (ibid:163). The ‘absolute certainty’ attributed to natural-scientific knowledge (Grimshaw, 1996) began to take precedence in matters of both the natural world and the body. Due to the invisibility of these particles to the untrained or ‘un-scientific’ eye, choices of ‘what to eat’ and ideas of what is healthy/not healthy were usurped from the realm of the individual and placed in the domain of the expert nutritionist (Pollan, 2008:3). Naturally, as theories of these magic molecules began to take centre stage and eating food for aesthetic pleasure or necessity transformed into ‘eating nutrients’, food came to be defined by its place on the spectrum (Scrinis, 2008:39). In consequence of this paradigm, the conscious understanding of food shifted to consideration within a continuum between good nutrients/foods and bad nutrients/foods (ibid:29). As we have seen, the public discourse and contemporary understanding of food in binary terms of good/bad has been shaped by the authority medical knowledge superimposed on knowledge of the body, along with the re-defining of food in terms of ‘nutrition science’.
This ideology, termed ‘nutritionism’ by Australian sociologist Gyorgy Scrinis (2008) and made popular by journalist Michael Pollan, prioritises the scientifically determined nutrients contained within the food over the whole food itself (Pollan, 2007:1). Within this paradigm, the sum of the nutrient and biochemical composition of each individual component is considered superior, or at least equal, to the food itself (ibid). Within the framework of nutritionism, as Scrinis elaborates (quoted in Pollan, 2008: 32), ‘even processed foods may be considered to be healthier for you than whole foods if they contain the appropriate quantities of some nutrients’.
This ‘reductive approach to food’ began to predominate the policy surrounding and the public’s understanding of not only food but of their health (Scrinis, 2008: 39).
With this model applied to the realm of ‘what to eat’, scientific knowledge and the rational mind are not only empowered with the ability to decide what to consume, but the decision is considered a ‘matter of choice’ (Clarke 2011:164). This notion of ‘what to eat’, and the rational route the consumer should follow in making a decision, has significant implications, which will be discussed later through both elements of the equation: the food and the being. As a result, the moral decision of consuming good/healthy or bad/unhealthy food is formulated in the same fashion: individuals in contemporary society are openly considered failures, or bad, for giving in, and often broadcast their ‘weakness’ for consuming these foods (ibid).
Many argue, however, that the characterisation of food as nutritious or not is not based on any ‘consistent standards or criteria’ but is customarily defined by the ‘absence of [currently] problematic ingredients - fat, sugar, sodium - rather than by the presence of any beneficial nutrient they might contain’ (Drewnowski, 2005:721). In other words, food is emphasised in terms of its micro-nutrient profile, and health is achieved through the restriction of the ‘evil’ macros: fats or carbohydrates. As a result, dietary guidelines appeal to the same form of logic (Marks et al., 2000:142). While this perhaps inconsistent allocation has proved extremely unproductive for the consumer, who is required to translate the nutrient recommendations into actual food in their shopping cart, this emphasis still perseveres (ibid).
The seemingly arbitrary and changeable nature of nutrition advice has been most clearly seen in the politics of fat consumption. Questions as to the efficacy of the conventional nutritional wisdom embedded in the ‘low-fat-is-good-health hypothesis’ (Freeland-Graves, 2002:100), originally proposed in the 1970s by Ancel Keys (1970, cited in Krauss, 2000), who suggested that consuming fat raises cholesterol and causes heart disease, have in recent years been confronted with scientific research that opposes this idea. While the once unexplainable ‘French Paradox’, whereby the people of France have a lower incidence of coronary heart disease despite a high intake of dietary cholesterol and saturated fat (Ferrières, 2004:107), had always raised questions about this relationship, fat-fearing American populations, under the guidance of their nutritional advisors or doctors, avoided fat as if it were the plague (Teicholz, 2014). Slowly but surely, the suggestion of replacing margarine with butter, which only a few years ago would have been career suicide for a nutritionist, is now gaining traction and causing the nutritional community to slowly follow suit.
To investigate the rise of the Western contemporary nutrition discourse, Crotty (1995:64) describes the promulgation of ‘Good Nutrition’ by nutrition experts, doctors and scientists. This view, she suggests, is a form of social control whereby individuals are encouraged to adopt these principles in theory and in eating practice, or risk being diagnosed as ‘sick’ and morally cast out of good citizenry (ibid). The contemporary manifestation of this ‘medical model’ (Coveney, 2006:17), which Crotty is so critical of, places all of the power to guide and control behaviour and achieve good health of the body in scientific knowledge. Attempts to quantify the nutritional nature or content of any food have proved highly problematic and difficult, especially as the scientific knowledge and subsequent advice are forever changing, as is the focus, which has shifted from the avoidance of disease towards motivations of greater health (Bury, 1994 in Lawton, 2003:240).
To provide greater context to the philosophical structures which gave rise to the dichotomisation of good/bad food, the dividing practices of the medicalised body are explained. Within the last few decades the ‘growth of medicalisation’, or the process in which elements of life which were previously considered outside the ‘jurisdiction of medicine come to be construed as medical problems’ has occurred at such a rapid rate that the Western body and idea of health is forever changed (Clarke et al., 2003 :153). While Clarke et al. discuss the historical shift from medicalisation to bio-medicalisation and more specifically the influence of bio-technology on the body, the analysis and framing of the discussion is most applicable and useful to the transformation of health in contemporary Western society. Through the contemporary reinterpretation of Foucault’s (1984, in Clarke et al., 2003:184) notion of ‘dividing practices’, individuals and their bodies are perceived to be in need of increasingly more discipline and ‘invasive technologies of bio-medicalisation’ due to population’s ‘risky genetics, demography and behaviours’(Clarke et al., 2003: 184). These practices come in the forms of increased monitoring, diagnosing, and maintenance of one’s health and require greater knowledge and responsibility for self and others (ibid). The obligation to ‘know and take care of thyself’, as explained in this article, not only exemplifies this contemporary ideology of health but helps to give rise to new personal identities and relationships to self and society (ibid).
Perhaps, as a result of the reconceptualisation of the body in terms of its component parts, coupled with the moral hierarchy and need for technologies to repress the undisciplined, the foundations for understanding food in the same terms is the next logical step.
Shift Towards Health Identity
Similar to the binary between ‘normality’ and ‘abnormality’, which provides societal order through clear distinctions between those on either side of the ‘deviant-immoral or respectable-moral’ divide (Douglas, 1970:3), the distinction between ‘good food’ and ‘bad food’ is equally powerful and equally unrepresentative of the ‘truth of being’ (Dennis, 1970:174). Just as highlighting the deviant in society produces law-abiding moral citizens, the moral dichotomisation of food has similar ramifications, but on the body. In modernity, these highly recognisable and defined categories of whether or not a food item is healthy or good act as a similarly direct tool of population control and as an effective form of indirect governance through self-surveillance and self-regulation (Foucault, 1987). Social cohesion has shifted from a traditional society, where society and community were based on a shared moral compass or consciousness (Warren, 1996:60), towards a post-industrial society whose values are based on difference and individualism (Mestrovic, 1992).
In order to understand how the conceptualisation of food into absolute categories of healthy/good and unhealthy/bad arose as necessary when traditional identity no longer applied, it is important to outline the human motivations and larger social influences behind this occurrence. Tradition ‘no longer steers the construction process’ as identity is no longer defined by the institution, and therefore the post-modern individual is charged with the task of constructing and maintaining one’s identity (Boeve, 2005:104). Boeve describes this process whereby, for the contemporary individual, ‘detraditionalisation’ and ‘marketisation’ undermine the socialising influence of traditional institutions such as religion, and as a consequence the individual loses the template for guiding his identity and is left to fill in the missing pieces (ibid). Therefore, individuals ‘need to rebuild their self-understandings’ and utilise the market and its resources to achieve their needs (McAlexander et al., 2014:858).
In traditional societies, religiously ordained structures serve their members as ‘pillars of identity’ (McAlexander et al., 2014:859) and govern and define all aspects of life, in questions ranging from whom to marry to how or what to consume (Goffman, 1968). Perhaps facilitated by the practical need for cooperation in these traditional societies and the lack of geographical mobility, individuals sustained close-knit communities, which acted to help maintain social order and cohesion (Goody, 1982). Many social thinkers theorised the transformation of community in the rise of industrialised Western society and the subsequent influence it had on individuals’ identity and sense of community. Boeve’s secularisation thesis helps to explain the gradually diminishing impact of the Christian tradition, both individually and socially: modernisation and religion are inversely related; ‘the more modernisation, the less religion’ (Boeve, 2005:104). Within this shifting climate, religion no longer plays a role in the ‘construction and legalisation of individual and social identities’ (ibid).
The social harmony found in religion, where order is consecrated in the moral conventions common to traditional community, begins to lessen in importance with the transition to capitalist-laden industrialised Western society, where the means of individual governance are replaced with laws of ‘convention’: the individual is left to fight to regain his identity as a member of a community (Truzzi, 1971). Order is preserved, Durkheim explains, irrespective of the change in the organisation of community, as long as the quintessential element, the common consciousness, is fostered and therefore allowed to prosper (2003). While the body was once informed by ‘divine Sobriety’, for the contemporary citizen it becomes disciplined by ‘calories and protein’ (Turner, 1982:29).
If we were to rely on the traditional definition of community, mentioned above, to help explain the new form of community, we would have an incomplete explanation. The complexities of the interaction between self and identity, constantly evolving and amplifying in an ever-connected global world, continue to impede the simplicity of the equation. The contemporary quest for community is ‘undertaken for therapeutic reasons’ as consumers seek to avoid the social isolation common in the structure of postmodern society (Thompson, 2007:148). Benedict Anderson (1983, in Weber, 2009) argues that contemporary community, as a result of practical transformations which do not allow for face-to-face community, is socially constructed by those who perceive themselves a part of these ‘imagined communities’ (Weber, 2009:40). These communities, formed in response to post-industrial society, whether based on brand association or adherence to a specific diet (e.g. Paleo), provide a sense of much-needed individual and community identity (Thompson, 2007:148). These forms of communities, facilitated by the powers of the interconnected viral age, are indoctrinated through public discourse and rituals and are based on individual alliance to any varied interest rather than a forced religious or kinship tie (ibid).
Therefore, for the post-modern citizen, identity is no longer a definitive consequence of the community one is born into, nor is it stable in nature, but becomes something one actively creates ‘through consumption’ (Elliot, 1998:132). The ‘search for self-identity’, according to Elliott and Wattanasuwan (2015:131), is a key determinant of the postmodern individual. In order to resist the ‘looming threat of personal meaninglessness’ (Giddens, 1991:201), the individual seeks to create and maintain an identity through consumption. In contemporary society, consumerism plays a most meaningful role in the ‘creation and production of a sense of self’ (Todd, 2012:48). In other words, the moral quality individuals assign to themselves via consumption of certain foods, motivated by the need to curate the lost form of traditional identity, comes to the forefront of the modern subject’s identity practice.
Most famously argued by Baudrillard (1998), consumption is the predominant site in which one creates and communicates identity. Following these assumptions, capitalised on by the marketers’ insinuation that the person in the advertisement ‘could be you’, the contemporary consumer has the ability to mould any ‘self’ by flipping through the advertisements and purchasing the right product from the magazine pages (Stromberg, 1990:12).
Within this new ideology of identity, which some term as more characteristic of a religion than an ideology (Stromberg, 1990:12), the individual becomes first and foremost a consumer, who has the ability to exercise free will to form or consume the identity one wishes to embody (ibid). While it is difficult to divorce the chosen identity from an individual’s particular socio-cultural and historical context, which informs the array of possible identities made salient to them at any one time, the assumed freedom of choice inherent within this mode allows for greater movement and flexibility within one’s lifetime.
While the direction of the relationship between advertisers’ suggestion of consumables and the individual’s need to buy the said product remains unclear, the market resolves this consumer wish by constantly providing purchasable items to compose whatever identity is desired. Many examples could be cited here, but for the purpose of beginning this discussion, an article from a 1985 issue of Glamour magazine helps to highlight the longevity and discursive nature of how this phenomenon is materialised in print. The editor writes, ‘if you’ll give me just a few minutes of your time [...] I can help you change almost anything about yourself...Begin the great and continuing makeover of you!’ (Stromberg, 1990:14).
Citizen Socialisation through the Media
The ways in which bodies are socially constructed, and the means by which these ideas are conveyed and internalised, are extremely complex and intricate (Lupton, 1994:602). While isolating any single determining attribute offers an incomplete perspective, for the purpose of this paper print media will be focused on, as it holds a most significant role in the distribution of knowledge and in conveying to individuals in society what a healthy lifestyle entails (Santich, 1995:127).
In the 20th century, media writings on health focused on the neoliberal principles of individual responsibility and the many forms it takes, as they became the prominent form of transmitting health discourse to the public. The media’s role in spreading notions of what constitutes ‘healthy lifestyles’, as well as ‘techniques for fabricating the healthy self’ (Bunton, 1997:239), is undeniable. The media’s portrayal of the ‘healthy lifestyle’ as something vital to being a moral member of society is also vital to the process of self-identity making. Therefore, for the modern citizen, knowledge of the ‘good diet’ has become the central focus and the ‘only rational choice’, as it holds them responsible for acquiring the knowledge to attain health (Clarke, 2011:164).
In the 1990s, the American media’s expanding obsession with and broadcasting of the obesity ‘epidemic’, and the alleged health problems associated with it, increased alongside Americans’ expanding waistlines (Boero, 2007:41). Between 1990 and 2001, The New York Times published over 750 articles on obesity and its detrimental health effects. In the same decade or so, it published only 531 articles on climate change and pollution and 672 articles on the AIDS epidemic (Boero, 2007:42). The plethora of articles surrounding the fears of nationwide obesity presented fatness and expanding body size in dark opposition to health, frequently alongside a ‘cure’: costly quick-fix faddish foods, or the suggestion to eliminate entire macronutrient food groups. Taking a social constructionist approach to the ‘obesity epidemic’, Boero looks at the way the discourse surrounding it was defined and framed as both an individual responsibility and a societal epidemic (ibid:42). Magazines are one of the foremost avenues of society’s declaration of the prevailing or optimum ideas of what is healthy and how to attain such a state of being (Andsager and Powers, 1999). They ‘act as guidebooks for women’s lives’ (Ballaster et al., 1991) by providing questions and solutions to the created problems or desires of the readers, to be found within the pages of the magazine (Roy, 2008:464). Boero’s argument helps to show how the media shapes individuals’ conceptualisation of health and how to attain it, most clearly through two mechanisms of transference: the media’s explicit communication of these ideas to readers and the secondary transmission of this information to the less educated.
Canadian sociologist Dorothy Smith (1990:209-210) explored, in one of her most seminal texts, the concept that social life, in all of its manifestations, is ‘textually-mediated’. ‘Communication, action and social relations’, she expresses, are ‘infused with a process of inscription, producing written meaning [...] or working from them’ (ibid:209). Smith suggests that writing is not just the spoken word put down on paper; through the process of thought it takes to write the word, it acquires new meaning (Barton, 2001:101). Truth and knowledge are not only man-made, but social reality ‘comes into being through language’ (Smith, 1999:128); therefore human language, and the reality that is socially constructed in the process, is not an objective characterisation of reality but a product of the humanly constructed knowledge process (Berger & Luckmann, 1966). Following Smith’s theoretical understanding of the social construction of language, our identities and lived experiences are mediated through the language which surrounds them. Thus, language, which makes up the discursive practices from which social practice and meaning are constructed (Fairclough, 1995), can be used as a basic standpoint to understand how healthy eating discourse in magazines shapes the meaning of the term in our everyday lives.
Along with the rise of neoliberal politics and economic policies beginning in the 1970s (McGregor, 2001:83), the social and individual body began to take on a similar framework. Health in the West has been reimagined as a ‘morally-laden personal attribute’ associated with the neoliberal notions of ‘individual responsibility and achievement’ (Paugh & Izquierdo, 2009:185). On a global level, neoliberal ideology is shaping healthcare policy, while on an individual level these principles are forming philosophical thought and practical food choice (McGregor, 2001:83). Within this framework, the goal of health becomes ‘maximising freedom, well-being, [and] quality of life’ (2007:53), which informs consumers’ conceptualisation of not only what they should eat, but also largely shapes the opinion of their subsequent choices (Furst et al., 1996:247).
This idea is most clearly exemplified in The Great American Makeover (2006:24), an idea and a book of the same title, through which Dana Heller describes the factors that unite, create and sustain the ‘American’ desire for self-improvement and uphold the American, or first-world, dream. The neoliberal ideologies, she argues, ‘position the subject as an entrepreneur of the self’, who is required to engage in the constant process of self-improvement in order to compete in the global market, and in health maintenance to be a good citizen (Weber, 2009:39). Extending the earlier point that scientific knowledge is held superior to bodily knowledge, food consumption is conceptualised as an individual’s responsibility and choice. Therefore, when an individual ‘wrongly’ indulges or engages in ‘deviant irrational behaviour’, it is considered entirely their responsibility when they ‘suffer negative health consequences’ (Clarke, 2011:164).
Lived Practice: Ethnography of the Family Dinner Table
While this paper has explored the identity motivations and one of the prominent modes in which ideas of health are defined and transferred to the social body, the process by which these ideas are embodied has yet to be examined. To investigate how the knowledge of health behaviour and eating habits is experienced in practice, and the moral implications of these daily health negotiations, analysis of two dinner-table ethnographies is used to illustrate its daily manifestation.
In an ethnography of the American dinner table, Paugh and Izquierdo (2009) conducted research amongst middle-class working families in Los Angeles, California. Their work and linguistic analysis of the dinnertime interactions of five families culminated in the published work, Why is This a Battle Every Night?: Negotiating Food and Eating in American Dinnertime Interaction, and is therefore most illuminating of the lived experience of these practices over the dinner table.
The family is the site where knowledge of healthy behaviour and eating habits is formed, negotiated and reinforced (Beach and Anderson, 2003). The family dinner table acts to socialise the younger generation through both everyday participation in and observation of food routines and interactions, and reveals the parents’ moral ‘health-conscious expectations’ of what they should eat and what they should feed their children (Paugh & Izquierdo, 2009:186). At this crucial site, both class and cultural perspectives and inclinations towards ‘healthy, eating practices [...] notions of morality, responsibility, individualism, success, and what it means to be a family’ are revealed through the practice (Paugh & Izquierdo, 2009:185).
Their paper investigates the socialisation of health-related beliefs, the subsequent internalisation of daily consumption practices and how these are expressed in the everyday family mealtime, by looking at the ‘battles’ which endure as a result of the conflict between the parents’ morally endowed expectations of what their children ‘should’ eat and what the children ‘want’ to eat (ibid:186). Through conversation analysis, the anthropologists draw attention to the ways social interactions create cognitions about health and food behaviour by analysing the language of the dinner table.
Parents’ desire to feed their children ‘healthy’ food, which is arguably a result of their own explicit or implicit socialisation, combined with the child’s similar socialisation but with different desired outcomes, creates such anxiety and unpleasantness that the meal is often unbearable for everyone involved (Paugh & Izquierdo, 2009:199). The desire to feed their children what they consider ‘healthy’ foods is so strong that parents often resort to adopting food-reward schemes, i.e. desserts are employed as a reward for eating these ‘healthy foods’ (ibid). As a result, food in the mind of the child is logically conceived in terms of reward or punishment, and thus the conceptualisation of food along the good/bad continuum is born and reinforced.
In a second dinnertime ethnography, this polarising continuum of deciphering good from bad food was similarly explored by Wiggins (2004), through analysing audio-recorded mealtimes of families in the North of England. Following the neoliberal framing of food and consumption, Wiggins identifies the two types of ‘healthy eating talk’ most prevalent during meal interaction: ‘focusing on the individual’ talk, which holds the child accountable for their food choices and behaviour at the meal, and communication which falls under the ‘general advice giving’ umbrella, where the parent discusses with the child what is healthy and what is not (Wiggins, 2004:545). The child, from the first time he is confronted with ‘healthy eating talk’ at the dinner table, internalises the knowledge of what constitutes healthy or unhealthy as an individual problem to be solved.
Within Wiggins’ dinnertime conversations, the items identified as being healthy for ‘your body’ are generalised in terms of the minerals (potassium, in this example) any good body/citizen in society requires (ibid:541). The generic quantities of any mineral, and the explanations given for the consumption of these foods, completely devoid of any understanding of human variance, are most evident in this piece (ibid). This perfectly modelled understanding of nutritionism exemplifies the earlier discussion of Marks et al.’s (2000) notion of the confusion consumers face in attempting to understand what each member of their family needs to consume and how to choose food to supply these requirements. Within these lines of conversation, food and its nutritional contents are spoken about in terms of definable knowledge and advice inherent within this community, presented as doubtless and factual (Wiggins, 2004:540). Additionally, Wiggins’ recording of a family discussing the homemade gravy, rather than granules, about to be served at dinner (ibid) reveals the way healthy eating is constructed and expressed in each family member’s mind. Line 16 (ibid:550) of the family transcript highlights the appearance of the ominous ‘they’ (the media or health advisors) and their recommendations of what to consume to be healthy: ‘they recommend you use granules rather than homemade gravy’ as it is ‘very low fat’ (ibid:550). Even though ‘they’ are not identified by name or form, the importance of the purveyed opinion and the pressure to eat according to these ideas is most evident and powerful.
Being healthy, as presented by Wiggins through her analysis, is constructed through the consumption of particular ‘trace elements’, minerals or vitamins within the said food, the soil, or the body (2004:545). This construction is therefore grounded on the idea that these physical needs of the body are greater, or more important, than the individual’s personal taste preferences or desires. Subsequently, health, in this context, and as a result of the over-nutritionisation of food and life in general, is postulated as achievable through consuming the ‘right’ elements or behaving in the ‘right’ way. While the phrasing of health in these terms provides a beacon of hope for human potential, it also leaves space for misinterpretation and market manipulation. The unintentional schooling which occurs in this context, whether actively or passively, produces schemas of food knowledge to guide practical daily consumption patterns (ibid:541).
Food Futures: Origins of Meat Consumption, Ethical Consumption and Environmental Consequence
Throughout history, food of animal origins has been considered a prized delicacy, a necessity for human health and an aspirational symbol of power. The growth of animal production, coupled with the highly emission-producing biology of livestock, has made this industry one of the most significant contributors to anthropogenic climate change. Animal consumption in the contemporary Western world has become a daily or even three-times-daily habit, a trend which will soon be imitated by the developing nations and which will further magnify the environmental effects. This dissertation explores the foundations and mechanisms on which the avidity of global meat consumption stands and discusses potential culturally sensitive alternatives in order to limit the global environmental consequences of such consumption.
The story of human evolution is one intimately tied to the hunting of animals and the consumption of meat. Consuming animal has become emblematic of human culture and symbolic of strength and success; as a result, the once infrequent feast has become, for the average American, almost half a pound a day (Molla, 2014). The consequences of this, combined with the rapidly increasing global population, have made the act of routine consumption a cause for environmental concern.
The consumption of food of animal origins is one of the most significant drivers of anthropogenic climate change (Bailey et al., 2014). While technological changes and movements away from conventional livestock production offer a glimmer of hope, in reality these changes cannot limit the catastrophic level of environmental degradation we risk inflicting. In order to discern the best steps to take to reduce the environmental impact of meat production and consumption, we must first dissect the human ‘love affair’ (Zaraska, 2016:43) with consuming animal.
This dissertation examines the cultural ideology of meat consumption and the environmental consequences resulting from the industrialisation of the production process. To what extent are the beliefs surrounding human consumption of animal formed by culture? Has the deliberate or unintentional divide between human and animal magnified consumption to a point beyond environmental repair or could public awareness and slight modification in eating practices be the key?
Articulating Meat Eating Culture
The complexity of the human sentiment towards meat can perhaps be simply illustrated by a short dinnertime conversation in Margaret Atwood’s novel, The Edible Woman (1969). The protagonist of the story, Marion, stares at her half-eaten steak and suddenly her face goes white as she makes the connection between steak and cow. With this realisation she exclaims: ‘This is ridiculous [...] everyone eats cows, it’s natural; you have to eat to stay alive, meat is good for you, it has lots of proteins and minerals’ (ibid:151). This simple quotation portrays the most colloquially given explanations for why humans need to consume meat and highlights the equally common human discomfort with consuming animal.
The following chapter will explore both sides of this equation: the cultural beliefs which create the assumed need to consume animal, and the mechanisms employed to allow for suppression of the human discomfort with the choice. In doing so, I will first explore the common justifications or rationalisations given for consuming animal, or what Piazza et al. refer to as the ‘4Ns’: Natural (humans are natural carnivores); Necessary (meat provides essential nutrients); Normal (I was raised eating meat); and Nice (it’s delicious) (Piazza et al., 2015:660). The second part of the chapter will explore the culture of meat eating in the West and the reason for its proliferation. Lastly, the modifications and mechanisms implemented consciously and subconsciously to ensure the consumption of animal will be examined.
Secular Religion
One of the most frequently given explanations for why humans need to consume meat is that we are evolutionarily predisposed to do so. Exploring the biological need for consuming animal products and the behaviour of meat consumption separately can be most helpful in beginning to discuss humans’ affinity with these products. At the end of this section I will discuss the strength of this argument.
Biologically, the human body is physiologically and anatomically herbivorous with omnivorous tendencies. Cardiovascular pathologist and editor of the American Journal of Cardiology William Roberts authored an article in which he presents evidence suggesting that ‘because humans get atherosclerosis [...] a disease only of herbivores, humans must be herbivores’ (2008:467). Roberts then presents a list of anatomical characteristics that support his claim, such as: the flat nature of human teeth; the length of human intestines, which, like those of herbivores, are twelve times body length; humans’ characteristic of cooling their bodies through sweating rather than panting, and sipping liquids rather than lapping, as herbivores do; along with other physical attributes and many devastating health conditions argued to result from consuming an unsuitable diet (ibid). These findings have been similarly reproduced by many others, including the Physicians Committee for Responsible Medicine president, Dr. Neal Barnard (1990), renowned paleontologist Dr. Richard Leakey (Freston, 2009), Cornell University professor Dr. T. Colin Campbell (2004) and Dr. Milton Mills (2009), to name a few.
From a behavioural standpoint, however, humans are omnivores. This creates for humans what Michael Pollan, in his book of the same title, most artfully termed ‘The Omnivore’s Dilemma’ (2006). With the vast plethora of food choices available, throughout history humans have had to rely on something else to guide decision making. Until two million years ago, when human ancestors roamed the savannahs of Africa, according to Katharine Milton (1999), a physical anthropologist at the University of California, Berkeley, humans subsisted on foraged plant foods.
The eating of meat, however, according to most biological anthropologists, was the impetus which ‘sprung Homo erectus from their australopithecine past’ (Wrangham, 2009:15). While the answer to the question of what humans are (herbivores with omnivorous tendencies) is overwhelmingly supported by the scientific community, the introduction of animal matter into the human diet two million years ago supplied calorically dense amino acids and nutrients, which provided the body with enough calories to support the growth of the human brain and an increase in body size (Milton, 1999). By routinely including animal protein in their diet, despite having the gut anatomy of a herbivore, these early humans were able to reach the dietary intake required for cerebral expansion (ibid). This evolutionary argument of where humans came from, and historical precedent, created and reinforced the ingrained necessity and the social norm for consuming meat and animal products (ibid). This norm is especially prominent in the West, where the diet is largely grounded in animal consumption and the belief that doing so is necessary for health and wellbeing.
The prominence of meat in Western cuisine is established and supported by its perceived necessity in the human diet, largely grounded in a fear of not being able to consume enough protein on a plant-based diet alone. Protein, whose Greek origin proteios translates into English as ‘of prime importance’, became emblematic of the 19th-century Westerner (Campbell, 2004:27). From these beginnings, animal protein was, and continues to be, hailed as the ‘life-enhancing elixir’ which creates ‘health, growth, vitality, virility and even weight loss’ (Simon, 2013:101).
Since its discovery by Dutch chemist Gerhard Mulder (1839) and its glorification by Justus von Liebig, protein became understood as the only essential nutrient required for ‘building human muscle’ (Zaraska, 2016:44). Carl von Voit, following on from the work of his teacher Liebig, calculated that adults need to consume between 100-135 g/day, later lowered to 52 g/day, which, he said, must come mostly from meat (cited in Campbell, 2013). Although this calculation was based on the average protein intake of the ‘healthy’-looking men he surveyed, his quantifications of required daily intake became mainstream, constructing the link between protein and meat. Since then, protein, hailed as the source of health and longevity, became synonymous with high-quality ‘animal protein, the cornerstone of ... good nutrition’ (Campbell, 2004:4), an equation which stays true for many today. Perhaps because of the similarity between animal and human muscle, the understanding that humans must consume animal or its byproducts to become healthy and strong entered the mainstream and guided nutritional theory. As a result, nutritional textbooks and governmental guidelines in the 20th century were based on the understanding that the protein that humans need for good health must ‘come mainly from meat, fish, cheese, milk and eggs’ (Matthews and Wells, 1982:1).
By 1944 the US Department of Agriculture (USDA) recommended that men and women consume a minimum of 70 and 60 grams of protein daily, respectively (Zaraska, 2016). The 1970s high-protein, high-fat diet books such as Dr. Atkins’ New Diet Revolution (1972), which paved the way for other highly popular weight-loss and health-gaining diets such as Protein Power (2000) and The South Beach Diet (2001), epitomised the mainstream understanding of the route to health. The reverence for and perceived necessity of animal protein still exists today and is promoted relentlessly in modern magazines such as Flex, where bodybuilding experts write, ‘simply put, our muscles are meat so we need to eat muscle to gain it’ (Zaraska, 2016:68).
However, one of the most comprehensive studies on lifestyle, diet, disease and mortality within one controlled population was carried out by Loma Linda University beginning in the 1960s: the Adventist Health Studies (AHS), led by Dr. Gary Fraser with a team of researchers, have followed the lives of 34,000 Seventh-day Adventists living in their community in California (Buettner, 2010). Compared to other Americans, Adventists have a 40% lower rate of all cancers, a 34% lower rate of coronary heart disease, significantly lower rates of diabetes and obesity, and outlive the average American by over a decade (ibid). Most interestingly, the Adventist Church advocates vegetarianism, and a large percentage of the church followers abstain from all animal products, i.e. are vegan (ibid). Over the course of this 40-year study, Dr. Fraser and his team have identified, within the religious community, a sharp distinction marking those who have an even higher chance of avoiding these common diseases: ‘exercise, vegetarian diet, not smoking, eating nuts and social support have been found to predict longevity in Adventists’, a trend even more significant for those following a vegan diet (Butler et al., 2009:4). Moreover, in regards to protein, the lead researcher on the Adventist Health Study team notes that ‘nutrition experts have known for decades that plant-based diets provide more than enough protein. In our studies we consistently find that as people switch from animal-based to plant-based diets, the diets become richer in vitamins, fibre and other important nutrients. There is never a need to add animal products’ (Buettner cited in Zaraska, 2016:48).
Reinforcing the scientifically defined need for humans to consume ample amounts of meat daily in order to reach the suggested protein requirement created, according to Zaraska, ‘the second myth of meat eating’ (ibid: 47). The thought goes that if consuming meat equates to health and strength, then not consuming meat protein, or ‘going vegetarian’ or vegan, makes the body and mind weak. In addition, Simon (2013: 102), while not necessarily advocating its consumption, asserts that ‘a peanut butter and jelly sandwich on whole wheat bread contains more protein (14 grams) than a McDonald's hamburger (13 grams) [...] a baked potato contains as much protein as a hot dog, 2 ounces of peanuts equals a chicken pot pie, and ounce for ounce, roasted pumpkin seeds have more protein than ham’. Despite its inaccuracy, the animal food industry successfully promotes the idea that plant protein is lower in quality than animal protein, a belief which readily enters the mainstream (ibid).
While many believe that eating meat is a ‘biological instinct’ (Harris, 1986: 31), whereby humans are genetically predisposed to seek out and nutritionally need meat, the custom of eating meat for generalists such as humans is more a feature of culture and social value (Le Gros Clark, 1968). Fiddles explains how ‘meat hunger’, which can only be satisfied through consuming ‘real (animal) food’ (1991: 14), reflects the habitual understanding and social place of meat in the Western diet.
Culture & Symbolism
This notion that humans crave flesh or animal products out of necessity and for their physical qualities is central to the argument for consuming them. It is therefore most important to understand the difference between biologically and culturally defined taste. This section will elaborate on the previous concept to suggest that meat consumption is determined by culture, and that it is magnified in the West by the symbolic value meat holds.
Rozin’s seminal work, The Selection of Foods by Rats, Humans and other Animals (1976), contrasts the hard-wired specialised eaters of the animal kingdom, such as koalas, whose preference for what to consume is genetically determined, with the existential question which must ensue for the omnivore: to kill an animal to eat, or not. For the omnivore, survival is maintained by seeking foods that are familiar (neophobia) while remaining as diverse and novel as possible (neophilia) (ibid). This almost unfortunate task placed on some animals in the kingdom, including humans, not only takes considerable energy, but can result in significant ethical, health and environmental consequences. On the other hand, the plethora of choices and tastes permits an array of gustatory pleasure perhaps unequalled.
To alleviate this task, omnivorous humans ‘like the food we’ve learned we are supposed to like’ (Joy, 2010: 16). Therefore taste is largely defined and reinforced by the specific culture one is born into and, as a result, certain foods considered delicacies by one culture or religion are tabooed by another (ibid).
Anthropologist Mary Douglas discusses this notion of cultural taste, noting that ‘nutritionists know that the palate is trained, that taste and smell are subject to cultural control’ (1978: 59). The arbitrary nature of these valuations is clearly expressed in our everyday lives in the way certain foods confer ‘high status on the eaters’ (Fieldhouse, 1986: 77). Mennell’s analysis (1985) of the transformation of brown bread from the one-time staple of the poor working class to a modern symbol of the wealthy organic-consuming elite neatly illustrates the rather inconsistent hierarchy one food may be assigned based on socially defined status.
Taste is therefore not an absolute but something that adapts and develops under governance of the culture it resides in (Fiddles, 1991). Allowing or even encouraging the understanding of food and taste in this way permits a greater, more complete knowledge of the mechanisms involved in this process and the space from which these cultural norms are created.
The suggestion that status, likes or dislikes are objective interpretations of some natural quality inherent in the meat itself is simply refuted by the variation in global and societal tastes. The discovery of horse meat in processed beef sold in the UK created such an outrage that not only were new testing regimes implemented but several people were arrested for the wrongdoing (BBC, 2013). This point begs the question I will return to later: what exactly is the difference between a cow and a horse, other than defining one as meat and the other as pet?
Therefore, in contrast to all other animals living in the wild, who instinctively eat based on edibility, humans must rely on culturally transferred information to modulate consumption (Rozin, 2009). While these guides vary within and between cultural groups, the rules which govern them are largely moulded by the powers with the strongest motivations. On the one hand, cultural knowledge of what to eat, or perhaps even more importantly of what not to eat, is passed down through generations and communities, sparing the individual from having to experiment with potentially poisonous foods and bestowing on the population a toolkit of eating practices. On the other hand, any natural ‘native wisdom’ (Pollan, 2006: 1) we once may have had is now superseded by what-to-eat anxiety. Contradictory media reports, celebrities donning the ‘got milk’ (Gifford, 2014: 321) moustache, highly technical scientific journals and lobbying powers confuse almost everyone into jumping on the latest diet fad or ensuring that they consume whatever is being sold.
The culinary tradition of eating animals and animal by-products has been defined and maintained throughout history as a result of their highly symbolic value and the importance placed on meat. In his seminal book, Meat: A Natural Symbol (1991), anthropologist Nick Fiddles explores the foundations from which meat’s symbolic importance originated. In one chapter of his analysis, Fiddles dissects the highly complex symbolic value meat holds globally and examines variations between cultures around this universally prized foodstuff. The universal affinity with the killing and eating of animal flesh, Fiddles argues, is emblematic of man’s ultimate ‘muscle’, domination or ‘control of nature’, and a symbol of strength and civilisation itself (ibid: 6). In addition, the process of cooking ‘transforms meat from a natural substance to a cultural artefact’ and consequently highlights man’s superior status over the animal, which cannot or does not need to cook meat to consume it (ibid: 91).
The authority of animal protein has been long established and reproduced by both the public and nutritional experts for centuries, to the point of its being considered the ‘only real food’ (ibid). In Fiddles’ first chapter, entitled ‘Food = Meat’ (1991: 11), he explores this historical paradigm that still very much takes precedence today: a meal is not complete without meat. This equation places meat at the centrepiece of every meal, whereas vegetables and grains are auxiliary, a concept perfectly exemplified by the colloquial phrase ‘meat and two veg’ guiding most traditional British households, which also illustrates Western cultural conceptions of a meal. The complexity of this relationship is epitomised by the drama of the weekly ‘Sunday roast’ arriving on the British dinner table, comparable to the hamburger grilling at the American backyard barbecue, which enraptures the soon-to-be-indulging and wide-eyed guests. The ‘idea [...] feeling [...] and spirit of meat’ (Fiddles, 1991: 16) is so intrinsically tied to the taste of eating meat that the experience takes on something of a mythical role.
The Sexual Politics of Meat
The human equation between consuming muscle and building muscle can be traced back to a long-held philosophy that by consuming a ‘physical substance one can somehow partake of its essence’ (Fiddles, 1991: 67). The metaphorical relationship between meat, vegetables and language is reflected in the behaviour and thought of men and women (Rozin, 2012). Rozin concluded that a clear relationship exists in word, meaning and association between ‘maleness’ and ‘meat, hamburger, sausage, frankfurter, steak, beef’, and similarly between ‘femininity’ and ‘vegetables, milk, cheese, egg, and fruit’ (ibid: 30). Rozin identified a clear male–meat link whereby ‘strength and power emerge as attributes associated with meat preference’ (ibid: 13). The dichotomy between meat/man and vegetable/woman was exposed most controversially in Carol J. Adams’ The Sexual Politics of Meat: A Feminist-Vegetarian Critical Theory (1990). In this important exploration, Adams dissects the patriarchal values embedded within Western meat-eating culture and, more specifically, the correlation of ‘meat eater’ with ‘virile male’ and of ‘women with animals’ (2000: 102). She argues that the oppression of animals is socially acceptable due to ‘associating them with women’s lesser status’ (ibid) and through the operation of a concept she terms ‘the absent referent’, to be discussed later.
This association between meat, strength and superiority, and between vegetables, weakness and inferiority, can be seen in its infusion into the English language by way of colloquial phrases such as ‘beef up’, meaning to make something stronger, and ‘vegetable’, ‘veg out’ or ‘couch potato’, implying a lazy body. Brian Wansink, a professor in Consumer Behaviour and Executive Director of the USDA Centre for Nutrition Policy and Promotion, discusses marketing professionals’ use of gender prototyping and personality identification to appeal most effectively to their desired audience (2006). In his book, he provides an example in which a soy-patty producer attempting to make a product appealing to men reviewed a study revealing that men thought of soy as feminine and steak as masculine; the researchers recommended that soy producers reshape their products to look like ‘various cuts of beef, and repackaged and advertised them to have more meat-related cues’ (ibid: 231). Once these phrases and connotations enter the mainstream consciousness, fast-food chains like Burger King capitalise on them to sell their products. The 2008 commercial for the Texas Double Whopper was nothing short of a musical, starring a man who is ‘too hungry to settle for chick food’, so needs to ‘eat (meat) like a man’ and therefore ‘scarfs down a beef bacon jalapeño’ Texas Double Whopper (Halford et al., 2007: 22).
Similar to sugar in its one-time rarity and elite status (Mintz, 1985), meat was once affordable to and consumed only by the wealthy strata of European society (Bower, 1997). However, the efficiency of modern intensive agricultural technology and methods has increased production yields while systematically reducing the price of these items, ensuring accessibility for the global market. While the consumption of flesh is normalised in Western culture, the negative moral conscience elicited when thinking of killing an animal for consumption is so commonplace that certain mechanisms are employed to repress these feelings and justify the act. These mechanisms will be discussed in the following section.
Meatonomics: Mechanisms & Justifications
This section will explore the external and internal factors enabling such avid animal consumption. External factors, such as the highly influential meat industry and the purposeful invisibility of meat production, which shape public health policy and what consumers eat daily, will be discussed first. Internal distancing mechanisms and belief systems within the mind of the meat-eater enable animal consumption on the one hand, but on the other create a cognitive dissonance, which will be referred to as the ‘meat paradox’ (Herzog, 2010), in the second part of this section.
In her book In Meat We Trust, Ogle explores how and why Americans became the ‘greatest eaters and providers of meat in history’ (2013: 5). While there is a human element to the desire to consume, the history of the vast quantity of meat eaten in America, and by extension the world, has also been shaped by economic gain. American meat eating was largely moulded, Ogle argues, by the ‘meatpacking titan’ Gustavus Swift and by Don Tyson, a chicken farmer who created the largest food company in the world (ibid: 12). While livestock rearing, like any other food production, began on the family farm, it was forever changed by the rise of large-scale agribusiness production.
The small family farm was replaced by large confined meat production in the 1970s (Ogle, 2013). For example, while dairy demand increased by almost half between 1954 and 2007, the number of dairy farms in the US plummeted from 2.9 million to 65,000 (Simon, 2013). While the image on the carton of milk has not changed, the reality of the farm certainly has. Today the meat industry, with the aid of powerful lobbyists, in the words of the cattlemen’s magazine Beef, ‘work[s] hard to create the love affair that Americans have with a big juicy ribeye’ (Zaraska, 2016: 43). The marketing of animal products is as iconic as the food itself. Embedded within the rulebook of advertising in this domain is the requirement that the food advertised bear almost no resemblance to the living animal. Taglines such as ‘Milk. It Does a Body Good’, ‘Pork. The Other White Meat’ and, most famously, ‘Beef. It's What's for Dinner’ were so successful that they were worth the USD 42 million spent on them in 1992 (Halford et al., 2007). According to the magazine American Meat, the meat and poultry industry ‘generates USD 864.2 billion annually to the US economy or 6% of the GDP’, profits largely shared by four pork producers controlling two-thirds of the market, four beef producers holding three-quarters of the market (ibid: 45), and Cargill, owner of 21% of the US market share, which reported earnings of USD 88.3 billion in beef sales in 2008 (Ostlind, 2011).
With subsidies and the scale and efficiency of production, the cost of meat in America is ‘artificially low’ (Simon, 2013: 20), and producers strive to keep it so in order to drive demand. Between 2008 and 2012 the farm bill allocated USD 307 billion largely to producers within the animal processing food chain, including seed and agrochemical suppliers and meat-packing companies (Food & Water Watch, 2012; Lessig, 2011). These government-funded ‘check off programs urge us to buy more meat and dairy’, an instruction we follow to a ‘t’ (ibid: 41). Taking inflation into account, over the same period the prices of ground beef and cheddar cheese fell 53% and 27% respectively, while the prices of fruit and vegetables rose 46% and 41% respectively (Leonhardt, 2009), making the ‘choice’ to consume animal products the most affordable in most cases. One of the most unsettling consequences of this cheap overabundance of meat, as Simon notes in his book Meatonomics (2013: 20), is that the ‘system encourages us to eat much more meat and dairy’ than even the USDA advises is healthy for us.
The abundance of contemporary meat consumption suggests that humans’ care for animals is limited, but this is not the case. Animal and ‘human brains are hardwired to empathise’, and it is this automated biological response, functioning as a survival mechanism, that holds human societies and wild animal groups together (de Waal, 2009: 58). The tension noted by many scholars regarding the omnivore’s moral conflict, arising from loving animals while also loving to eat animals, is an internal battle aptly named the ‘meat paradox’ (Herzog, 2010; Joy, 2010; Loughnan et al., 2014). Vegetarians and vegans (2–10% of the population) purposefully limit the tension caused by the ‘cognitive dissonance’ (the drive to maintain consistency between expressed behaviours and held beliefs and attitudes (Festinger, 1957)) inherent in the human ‘desire to avoid hurting animals with our appetite for their flesh’ (Loughnan et al., 2014: 15). While attempts to eliminate or at least reduce this dilemma are popular with meat producers and with individuals who employ techniques to make the reality invisible, the moral dilemma creates such dissonance that behavioural or cognitive change is required. Asking the questions ‘can they (animals) reason? can they talk? can they suffer?’ without a clearly reachable answer of ‘yes’ allows the ‘passage from farm to fork [to be] less troubling’, as animals are regarded as lesser beings warranting consumption (ibid: 14).
The justification for animal consumption stems from the Cartesian notion that animals are machines, or ‘bête machine’ (Newman, 2001). In this philosophical doctrine, animals, unlike humans, lack consciousness (mind or soul) and the ability to reason, and are therefore lesser beings and commodities existing for human consumption. While this theory is considered obsolete by most, the practical implications of this conceptualisation persist today.
While adhering to the familiar, traditional or convenient food preferences embedded within meat consumption is practical and useful, it also poses a moral problem in itself. Carol Adams’ concept of the ‘absent referent’ explains the function of the cloak placed over meat: it acts to hide the death of the sacrificed animal and to protect the meat-eater’s conscience (2000). The complete avoidance of the connection between the breathing animal and consumable meat is visible at every stage and angle of the process. From farm to table, the process is physically, socially, discursively and conceptually invisible. This invisibility is central to the extent to which meat is consumed, and therefore central to this paper.
While humans have consumed meat for centuries on a semi-regular basis, whether only a few times a year during feast days or as the occasional sliver flavouring a plant-based dish, the human–animal relationship has changed drastically with the rise of industrial society. Richard Bulliet (2005) distinguishes four stages in the history of human–animal relationships: separation, pre-domesticity, domesticity and post-domesticity. The period of ‘domesticity’, he notes, was characterised by normalised daily contact with the animals used for consumption and as pets, but in the transition to post-domesticity (from the 1970s to today), humans are physically and psychologically separated from the animals used for food while paradoxically remaining in close connection with animals as pets. This contradiction creates great tension for those consuming animals: ‘feelings of guilt, shame, and disgust when they think (or not) about the industrial processes by which domestic animals are rendered into products and about how those products come to market’ (ibid: 3). Traditionally, the butcher’s window displayed animal carcasses proudly and offered the public exact instruction as to where on the animal the piece in question originated, whereas the contemporary butcher is quite different. Today, ‘meat’s connections with live animals [have] to be camouflaged’ (Stewart, 1989: 7): ‘bones, guts and skin are nowhere to be seen’ (Fiddles, 1991: 96), in order to halt the consumer’s imagination that an animal is staring back through the unidentifiable package, and to encourage them instead to ‘think forward to what they will eat rather than backwards to the animal in the field’ (British Meat, 1987: 4).
Why We Love Dogs, Eat Pigs, and Wear Cows:
Melanie Joy, in her book Why We Love Dogs, Eat Pigs, and Wear Cows (2010), explores the factors and belief system which allow humans to contradict their moral compass in consuming some animals and not others. In a 2015 Gallup survey, 32% of Americans questioned believed animals should be given the same rights as people, a figure much higher than the percentage of the population abstaining from meat or animal products altogether (3.3%) (Harris, 2015). Joy (ibid) and others (Fiddles, 1991; Foer, 2009; Zaraska, 2016; Herzog, 2010) suggest that the invisibility embedded in every step of the process, from an animal’s life and slaughter to the marketing of meat on television, and even the language used in this context, insulates us from the reality of the system and creates a perfect environment for mass ‘carnism’ to exist (ibid: 21). Carnism, as termed by Joy, is the prevailing belief system of contemporary Western society in which the ‘choice’ to consume certain animals (cows, pigs, chickens) rather than others (dogs, cats, horses) is ‘considered ethical and appropriate’ (ibid: 30).
Whether it originates in the language used to describe the animal or in the denial of the likeness between the smiling cow in the Old MacDonald nursery book and the ground beef on the dinner table, this relationship is created and reinforced not only by producers but also by consumers. The success of the factory farm depends on ‘consumers’ nostalgic images of food production because these images correspond to something we respect and trust’ (Foer, 2009: 55). Even the disparity between the words used for the animal and its flesh is most interesting: cow/beef, pig/pork and calf/veal. Such varying terminology enables the consumer to eat without ‘envisioning the animal we’re eating’ (ibid: 21). Language therefore acts as a powerful distancing mechanism, an accomplice helping the consumer to eat a product rather than a body. This ideology is so normal and mainstream that no name is needed to describe the dietary choice of someone who eats meat, despite the many words available for the choice not to abide by the prevailing ideology: vegetarian, pescatarian, vegan, etc.
The cycle of invisibility extends into the physical realm of livestock production as well. Ten billion livestock (not including fish or other sea animals), over one-third more than the global human population, are raised every year for consumption (USDA, 2016), yet somehow every step of the process and the animals’ lives are kept out of sight. These animals live their short lives in ‘confined animal feeding operations’ (CAFOs) before being slaughtered at a rate of 19,011 animals per minute (Joy, 2010: 39).
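These two scale figures are internally consistent, as a quick back-of-envelope check shows (a sketch only; the per-minute rate and annual total are those cited from Joy (2010) and the USDA (2016)):

```python
# Consistency check on the cited livestock figures: does a slaughter rate
# of 19,011 animals per minute (Joy, 2010) match roughly 10 billion
# animals raised per year (USDA, 2016)?
per_minute = 19_011
minutes_per_year = 60 * 24 * 365            # 525,600 minutes in a year

implied_annual = per_minute * minutes_per_year
print(f"{implied_annual:,}")                # 9,992,181,600, i.e. ~10 billion

# Compared with a 7.5 billion human population, that is about 1.33
# animals raised for food per person per year.
print(round(implied_annual / 7.5e9, 2))     # 1.33
```

The implied annual total of just under 10 billion matches the USDA figure, and the ratio to the human population bears out the ‘over one-third more’ claim in the text.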
Understanding the discreetly hidden, ‘behind the scenes’ and dissociated nature of the factory farm (Fitzgerald, 2010: 58), epitomised by the slaughterhouse, can help to illustrate the many other elements of carnism. Anthropologist Amy Fitzgerald suggests that the slaughterhouse as an institution, through its deliberate placing and mechanisation, has changed cultural sensibilities towards animal killing and the relationships between animals’ lives and those financially or nutritionally benefiting from them. The invisibility of the slaughterhouse and the distance between animal and consumer are by no means an accident; animal slaughter ‘tends to be a somewhat “unpopular” subject: no one wants to know about it’ (Vialles, 1994: 125). The space between a ‘half-truth and an evasion’ (Foer, 2009: 341) is where industrial meat production stands, for the benefit of producer and consumer alike: an almost ‘heroic act of not knowing’ (Pollan, 2006: 85).
Awareness of the shocking conditions of the meat industry is nothing new; vivid illustrations of the harsh and unsanitary conditions of American meat-packing plants were first depicted in Upton Sinclair’s exposé, The Jungle (1906). As a result, regulation of the processing plants increased while the number of small processing plants and farming operations sharply declined. Between 1982 and 1997 the number of CAFOs dropped from 435,000 to 213,000 (Stull and Broadway, 2004). Accompanying these physical changes to modern processing plants, public knowledge of what happens inside came to mirror the conglomerate plants’ desires. As meat demand rises, the number of animals increases while the quality of each animal’s life and death diminishes, alongside consumers’ ‘affected ignorance’ of such happenings (Williams, 2008: 102). Perhaps if consumers knew the truth about meat production, or, as Sir Paul McCartney famously claimed, if ‘slaughterhouses had glass walls’, then ‘everyone would be vegetarian’ (PETA UK, 2014).
Pressures to adhere to the culinary patterns outlined by the culture we identify with are so strong that natural human moral tendencies must be suppressed or hidden to pave the way for mechanisms which enable their adoption. As comfortable consumption is ensured by adjusting animals’ moral standing and creating an invisible production system, combined with social and market pressure, the consumer is relieved of questioning the passage from farm to fork. The obscurity of contemporary agricultural production disorientates the consumer by carefully erasing from sight and mind certain steps of the process to ensure unquestioning carnivorous consumption. While the marketed image of the anthropomorphised happy farm animal is all that is seen, the true treatment of the animals within the confines, and the resulting environmental damage, are hidden from the human eye. While the ethical conundrum inherent in killing and eating animals is obvious to many, the ethics of environmental impact need to take precedence in this discussion.
Consequences of Consumption: Livestock Greenhouse Gas (GHG) Emissions
To properly place this discussion in context, it is important to understand the mechanisms creating climate change and their associated physical manifestations. GHGs enter the atmosphere through natural sources (e.g. animal and plant respiration) and through human activities, where they absorb and re-emit infrared radiation (Oppenlander, 2013). As the concentration of GHG compounds in the atmosphere increases, the greenhouse effect, whereby sunlight passes through the atmosphere freely while the resulting heat is trapped, intensifies, causing the warming of the earth and therefore climate change (Allison, 2015). Three of the five GHGs in Earth's atmosphere (methane, nitrous oxide and carbon dioxide) are anthropogenic, i.e., largely produced through human activities (ibid).
Accompanying the rise in average global temperatures, ocean temperatures and atmospheric water vapour increase, while ice glaciers shrink, causing sea levels to rise, among countless other consequences (Allison, 2015). If GHG emissions continue to rise and the earth’s surface becomes 2 degrees Celsius (2 °C) warmer than pre-industrial levels, ‘catastrophic changes’ will occur (ibid). Temperatures have already risen by 0.8 °C, largely attributed to industrialisation, a rise which has caused ice caps to melt, oceans to rise and become 30% more acidic, and dramatic weather events such as heat waves, droughts and tornadoes (ibid). The 2 °C mark can best be explained as a point similar to ‘crossing a threshold’: once crossed, climate change will proceed with far greater speed and destruction (White et al., 2013). Past this 2 °C ‘tipping point’, projected to occur before 2050, sea levels will rise, weather will become increasingly severe, and agricultural production and biological diversity will decrease (Oppenlander, 2013). This is expected to cause global famine and water shortages and the displacement of at least the 600 million people living on coastlines (ibid). Keeping emissions below the 2 °C ‘tipping point’ cannot be achieved through changes to production methods alone; dietary change is necessary (Bailey and Tomlinson, 2016).
The livestock industry’s contribution to climate change, according to the United Nations Food and Agriculture Organisation (UNFAO) Livestock’s Long Shadow report, was calculated at 18% of anthropogenic GHG emissions, while transportation, for example, contributes 13% (2006). Although this 18% figure already warrants concern and a call to action, a number of independent researchers consider it a significant underestimate and place livestock’s contribution to anthropogenic GHG emissions in a range from 30% to 51% (Calvert, 2005; Anhang and Goodland, 2009).
Since the FAO’s publication, the literature documenting livestock’s contribution to GHG emissions and anthropogenic climate change has grown (Laestadius et al., 2014). While each animal’s emissions and rearing practices dramatically influence efficiency, animal products, and beef above all, emit more GHGs per unit of protein and weight than any other food product (Laestadius et al., 2014; Weber & Matthews, 2008). Beef, considered in isolation from other animal production, is responsible for 41% of the GHG emissions from livestock (Opio et al., 2013), while also being one of the driving forces of deforestation and land degradation (Cederberg et al., 2011). In addition, 90% of the Amazon rainforest cleared since 1970 is used for livestock production (Margulis, 2003). A study by Pimentel and Pimentel revealed that producing 1 kg of animal protein requires ‘100 times more water than producing 1 kg of grain protein’ (2003: 663). Accordingly, to produce 1 kg of beef, about 13 kg of grain and 30 kg of hay are required, as well as more than 200,000 litres of water (Thomas, 1987). While direct costs such as grain and water use are easy to calculate, indirect costs such as fossil-fuel expenditure are almost invisible. For example, producing 1 calorie of beef requires 40 units of fossil-fuel energy. Similar ratios have been calculated: 1 calorie of egg requires 39 units of energy, 1 calorie of milk 14 units, and 1 calorie of turkey 10 units (ibid).
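The input figures cited above can be combined into a simple illustrative calculation (a sketch only; the per-kilogram and per-calorie ratios are those reported by Thomas (1987) and Pimentel and Pimentel (2003), while the function name and the 250 kcal serving size are my own illustrative choices):

```python
# Illustrative resource arithmetic from the figures cited above
# (Thomas, 1987): inputs required per 1 kg of beef produced.
GRAIN_KG_PER_KG_BEEF = 13
HAY_KG_PER_KG_BEEF = 30
WATER_L_PER_KG_BEEF = 200_000

# Fossil-fuel energy units required per food calorie, as reported.
ENERGY_UNITS_PER_CAL = {"beef": 40, "egg": 39, "milk": 14, "turkey": 10}

def inputs_for_beef(kg):
    """Return total grain, hay and water inputs for `kg` of beef."""
    return {
        "grain_kg": kg * GRAIN_KG_PER_KG_BEEF,
        "hay_kg": kg * HAY_KG_PER_KG_BEEF,
        "water_l": kg * WATER_L_PER_KG_BEEF,
    }

# 1 kg of beef implies 13 kg grain, 30 kg hay and 200,000 L of water.
print(inputs_for_beef(1))
# A hypothetical 250 kcal serving of beef implies ~10,000 units
# of fossil-fuel energy input.
print(250 * ENERGY_UNITS_PER_CAL["beef"])   # 10000
```

The point of the arithmetic is the asymmetry it exposes: the indirect energy cost per calorie of beef is roughly four times that of milk and turkey, mirroring the ‘almost invisible’ costs the text describes.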
Livestock contribute to GHG emissions through many avenues of the production process and several types of emission: mainly carbon dioxide (CO2), methane and nitrous oxide. According to a report by the UN, CO2 from fossil fuels is released during the production of fertiliser used to grow crops for livestock, as well as during food processing and the transportation of meat to market after slaughter (Steinfeld et al., 2006). Methane is released into the earth's atmosphere through a variety of avenues, including the burning of fossil fuels and gas refining and drilling processes, but most significantly from animal husbandry. Methane and nitrous oxide, released most significantly by ruminant animals, are especially problematic as they have ‘greater global warming potentials’ than CO2; that of methane is reportedly eighty-six times greater (Koneswaran and Nierenberg, 2008: 579). Ruminant animals (sheep and cattle), through their unique process of digestion known as enteric fermentation, release methane as a by-product, making ruminant products on average ‘more GHG-intensive than all other forms of food’ (Weber & Matthews, 2008: 3511), a process which creates 40% of total livestock GHG emissions (Gerber et al., 2013). Beef, compared to plant-based and all other animal-based foods, has ‘by far the largest climate footprint’ (see figure 1) through the industry’s land use, freshwater consumption and GHG emissions (Ranganathan et al., 2016: 71).
Projected Global Consumption
Another paramount component of this discussion is the projected global convergence towards Western animal-centred consumption practices over the next thirty years. Based on numbers calculated in the UNFAO Livestock’s Long Shadow report, global meat consumption increased from 47 million tonnes (mt) in 1950 (Steinfeld et al., 2006) to almost 315 million tonnes in 2014 (Mustafa, 2015). While the world population has roughly tripled over the same period, population growth alone cannot account for this dramatic sixfold increase. According to the US Census Bureau’s world population clock, the world population stands at 7.5 billion people (July 2016), a number growing at a rate of 76 million annually (Steinfeld et al., 2006). According to the UN medium projection forecast, at this rate the global population in 2050 will reach 9.1 billion, with the largest population increase (95%) occurring in developing countries (ibid). Global meat production is projected to more than double between the years 2000 and 2050, from 229 mt to 465 mt, while dairy production is projected to grow from 580 to 1,043 mt over the same period (Steinfeld et al., 2006). Growing global populations and incomes are the two major factors contributing to this projected increase in livestock production.
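Converted into per-capita terms, the tonnage figures above make the point concrete (a sketch; the meat tonnages are those cited in the text, while the population figures of roughly 2.5 billion in 1950 and 7.3 billion in 2014 are approximate values supplied here for illustration, not drawn from the cited sources):

```python
# Per-capita meat consumption implied by the cited tonnage figures.
# Tonnages: 47 mt (1950) and 315 mt (2014), as cited in the text.
# Populations (~2.5 bn in 1950, ~7.3 bn in 2014) are approximate
# assumptions, not figures from the cited sources.
meat_1950_kg = 47e6 * 1000      # tonnes -> kilograms
meat_2014_kg = 315e6 * 1000
pop_1950, pop_2014 = 2.5e9, 7.3e9

per_capita_1950 = meat_1950_kg / pop_1950   # ~18.8 kg per person per year
per_capita_2014 = meat_2014_kg / pop_2014   # ~43.2 kg per person per year

print(round(per_capita_1950, 1), round(per_capita_2014, 1))
```

On these assumptions, per-capita consumption has more than doubled even as total production rose roughly sixfold: it is this gap between population growth and consumption growth that the text attributes to rising incomes and changing diets.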
It is important to understand where this seemingly increased global taste for meat comes from. Popkin, Horton and Kim (2001) termed the ‘catching-up process’ developing nations are engaging in the ‘nutrition transition’ (Steinfeld et al., 2006: 10). While historically a significant gap has existed between developed and developing countries in terms of animal consumption, the global trend of greater urbanisation, coupled with higher standards of living in the latter, facilitates greater demand for and consumption of these once-expensive luxuries (Steinfeld et al., 2006; Rae, 1998). Over the decade between 1991 and 2001, per capita GDP grew globally by 1.4%, by 7% in East Asia and by 3.6% in South Asia (ibid). Steinfeld et al. suggest that the sharp global increase in animal products produced and consumed is highly correlated with a sharp decrease in price (meat prices fell 12%, and dairy prices 31.5%). The patterns of consumption once unique to the West are soon to be adopted by the developing world, with ‘potentially catastrophic consequences’ (Garnett, 2010: 32).
Case For Change
A report by Bailey et al. (2014) reveals that a significant ‘awareness gap’ prevents public recognition of animal agriculture’s contribution to climate change. This gap is widened by those who stand to gain financially or politically from meat consumption and who, deliberately or unintentionally, avoid presenting complete and accurate information to the public (ibid). The report reveals that the public believe emissions from the ‘power production’ sector to be the greatest GHG contributor and the livestock sector the least (out of seven sectors) (ibid: 18). This lack of public awareness, according to Bailey et al., is reflected in the absence of climate change discourse directed at the livestock sector, and in turn inhibits action by the public, government and industry (ibid). Closing this awareness gap is therefore perhaps the most important step in halting the environmental damage.
Concern for the sustainability of our planet, and the desire to take steps to reduce and limit detrimental emissions, has grown significantly in the last decade following reports of a climate change ‘tipping point’ (Bailey et al., 2014). Public awareness of the human carbon footprint and its influence on global warming grew with the publicity surrounding the UN climate change conferences and through documentary films such as Al Gore’s An Inconvenient Truth (2006). These publications, and this documentary in particular, for which Gore was awarded the Nobel Peace Prize, on the one hand furthered public knowledge that humans contribute to climate change, and on the other curated the information to tell a very specific story. We are reaching a ‘true planetary emergency’, Gore explains after thoroughly convincing the viewer that global warming exists, an event which ‘is really not a political issue, so much as a moral one’ (Gore, 2006). The general public’s collective understanding of climate change is built around the emphasis Gore places on oil, gas and coal production, summed up perfectly in an interview with the Guardian last year in which Gore argued that the driving industrial forces, Chevron, Exxon and BP, are using earth’s ‘atmosphere as an open sewer’ (Confino, 2015: 1).
In the wake of Gore’s campaign, legislation to ‘cap and trade’ CO2 emissions was proposed, individual measures such as taking five-minute showers, switching to more efficient light bulbs and driving a hybrid car were promoted as techniques to curb personal footprints, and protests over Tar Sands oil extraction became popular acts of environmental activism (Oppenlander, 2013). Whether purposefully or through misunderstanding, the message of reducing GHG emissions and thereby climate change, although important, was directed towards inanimate objects (cars, planes and buildings) despite the comparatively small contribution these sources make to total emissions. Taxes on ‘gasoline, diesel, aviation fuel, incentives for oil and gas drilling, nuclear power plant construction, [and] carbon capture’ were implemented and investment in renewable energy increased significantly (ibid: 56).
The current environmental repercussions and, to a greater extent, the future sustainability and food security challenges projected to occur as the global taste for animal products converges with the Western model, seem indisputable at this point. The question then becomes: what are the options? Suggestions for how to feed a growing meat-centred world, and how to raise consumers’ awareness of the relationship between livestock production and environmental degradation, follow in the next chapter.
Considering the Alternatives
The previous two chapters have explored the culture of meat consumption and the extent to which this model of consumption, soon to be the global standard, will not only exceed earth's physical capacity but continue to cause irreversible environmental degradation. In recent years, as awareness of large-scale animal agriculture’s environmental contribution has become more mainstream, a movement away from factory farms and towards more slowly raised, free-range and grass-fed animals has gained popularity. This next section will therefore explore the effectiveness of this solution.
While for many the mere suggestion of eliminating animals from our diets is a laughing matter, for others the suggestion of not doing so provokes the opposite reaction. While the need for a countermeasure to the ferocity of meat consumption is clear to those aware of the environmental situation soon to ensue, the form this should take is by no means as obvious. A solution years in the pipeline will also be explored, one which utilises the greatest technology available today and may provide a palatable way out of this dilemma: animal-less meats.
Grass-fed Solution?
Institutions and communities built around ‘locally grown’, ‘cage-free’, ‘sustainably sourced’, ‘pastured’ and ‘farm-to-table’ food ideals have surged in the last decade in direct opposition to the traditional industrialised food-production system (Oppenlander, 2013). Food writers such as Michael Pollan, organisations such as Slow Food, farmers such as Joel Salatin and diets such as Paleo have become the faces of these ideals and movements (ibid). Not only do these movements and leaders offer an avenue through which to continue consuming the highly prized animal, they also provide a theoretically sustainable and morally superior option.
Individuals following the ‘Paleo Diet’ epitomise the slow food, grass-fed, organic animal market. Robb Wolf, one of the pioneers of the Paleo movement, suggests that 3–6 ounces of animal protein (with an emphasis on beef) or 3–5 eggs, as well as one or two tablespoons of butter or lard, should be consumed at each meal (Kubal, Wolf, and Rodgers, 2016). All animal protein and products should be grass-fed, organic and free-range, Wolf and many other leading voices of the community suggest (ibid). While the exact meaning of these words changes depending on the context, the voice of authority and the human health benefit being emphasised, such as the vastly ‘superior omega-3 to omega-6 ratio’ of these foods (Noël, 2010: 21), the even more environmentally damaging nature of these diets remains constant.
For all the strengths of these non-industrial alternative production methods, the suggestion that this substitution is all that is needed to halt environmental degradation is far from accurate. Judith Capper, professor of Animal Science at Washington State University, published a report comparing the energy inputs and environmental impacts of conventional, natural and grass-fed beef production systems (2012). She determined that conventional beef production (finished in feedlots with growth-enhancing technology) requires significantly less land, water and fossil fuel, and has a lower carbon footprint, than either natural (feedlot-finished without growth-enhancing technology) or grass-fed (forage-fed, without growth-enhancing technology) systems. She explains that while CAFOs arguably provide a worse quality of life for the cow itself, the efficiency of the fattening or finishing process produces far fewer emissions per cow simply because of the time spent alive producing gases: the intensively raised cow eats 2,800 pounds of corn to gain 1,000 pounds in a few months, whereas a grass-fed cow lives on pasture twice as long, emitting GHGs all the while (Palmer, 2010).
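Capper's time-alive argument can be sketched numerically. The figures below are purely illustrative assumptions of my own (the daily methane rate and the lifespans are hypothetical round numbers, not values from her report); the point is only that lifetime enteric emissions scale linearly with days lived.

```python
# Illustrative sketch of the time-alive argument: lifetime enteric
# emissions scale with days lived, so an animal that takes roughly
# twice as long to reach slaughter weight emits roughly twice as much.
DAILY_METHANE_G = 250  # hypothetical grams of CH4 per animal per day

def lifetime_methane_kg(days_alive: int, daily_g: float = DAILY_METHANE_G) -> float:
    """Total enteric methane over the animal's life, in kilograms."""
    return days_alive * daily_g / 1000

conventional = lifetime_methane_kg(450)  # feedlot-finished, shorter life (assumed)
grass_fed = lifetime_methane_kg(900)     # pasture-raised, twice as long (assumed)

print(f"conventional: {conventional} kg CH4, grass-fed: {grass_fed} kg CH4")
```

Whatever the true daily rate, doubling the time to slaughter weight doubles the lifetime emissions per animal, which is the core of Capper's per-cow comparison.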
The recent condemnation of conventionally CAFO-produced meat and the subsequent upsurge of the seemingly more environmentally sustainable grass-fed, organically produced meat is, as this last section has shown, grounded in inaccuracies. The middle classes’ failure to see the likeness between the idolised organic grass-fed steak and the scorned McDonald’s hamburger shapes the conversation in one direction and prevents any substantial reduction in consumption. Assuming that the essential problem (if one exists at all) lies within industrial meat production directs the conversation conveniently away from the problem itself and provides a favourable solution for the middle-class consumer.
In response to this clear relationship between meat consumption, GHG emissions and the subsequent environmental degradation, many food-focused, animal protection and environmental non-governmental organisations (NGOs) have begun modifying their messages (Laestadius et al., 2014). In their meta-analysis, Laestadius et al. revealed that NGOs’ messages are now advocating for the ‘modest reductions in meat consumption [and] increase [in] the proportion of meat consumed from grassfed ruminant animals’ (ibid: 85).
Moderation of meat consumption, where it is advocated at all, whether by the Dietary Guidelines for Americans (USDA, 2016), nutritionists or food writers, rests on the vague idea that we should ‘eat less’ or enjoy meat ‘in moderation’. Consumer pressure and individuals like Michael Pollan have encouraged the move away from conventional farming practices and towards small farming operations, and some have even encouraged ‘less’ consumption. This lack of quantification, however, allows for stagnation: what ‘less’ is measured against is undefined, leaving space for individual manipulation and relative modification. In light of this, in an attempt to quantify the ‘less’ suggestion, perhaps the ‘Meatless Monday’ movement, which promotes not consuming meat one day a week (Lerner, 2003), needs to be inverted and reasonably rewritten as ‘Meat Only on Monday’.
Cultured Meat
In August 2013, Professor Mark Post of the University of Maastricht cooked the first cell-cultured hamburger live on air in London (Jha, 2013). Post, whose primary research is engineering skeletal muscle tissue for use in human arterial grafting, has applied this medical technology over the last five years to culturing meat for human consumption (Datar, 2015). Using the same techniques, his team harmlessly biopsied cells from a cow’s muscle and cultured them into 20,000 muscle fibres, forming the world's first lab-grown hamburger (Rushe, 2016). In an early estimate, Post revealed that an ‘early retail price could be set at $29.50 per pound, but as production scales up [...] that price could come down to approximately $3.60 per pound’ (Rushe, 2016: 3).
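Post's two price points imply roughly an eightfold reduction in retail price at production scale. A minimal arithmetic check (the quarter-pound patty framing below is my own illustration, not from the source):

```python
# Price figures quoted from Post via Rushe (2016); per-patty framing is illustrative.
early_price = 29.50   # USD per pound, early retail estimate
scaled_price = 3.60   # USD per pound, at production scale

reduction = early_price / scaled_price   # implied fold reduction in price
patty_early = early_price / 4            # cost of a quarter-pound patty, early
patty_scaled = scaled_price / 4          # cost of a quarter-pound patty, at scale

print(f"reduction: {reduction:.1f}x, quarter-pound patty: "
      f"${patty_early:.2f} -> ${patty_scaled:.2f}")
```

At the scaled price, the raw material for a quarter-pound patty would cost under a dollar, which is the commercial significance of the quoted estimate.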
Twentieth-century capitalist society, characterised by Fordist means of production and technological advancement, has created ever-increasing efficiency and control (Parsons, 1991). Contemporary commercial animal factories are economies of scale which use ‘high-density, hyper-confinement methods [... and] automated processes’, where animals are bred and artificially manipulated to reach slaughtering weight much quicker than ever before (Simon, 2013: 216). The efficiency and productivity of these techniques are noteworthy: egg production per hen has doubled in the last century and milk yield from dairy cows has tripled, as have the speed of growth and weight of chickens (Roberts, 2000). If the end result is a piece of meat trimmed of fat, skin, bones and anything that resembles the animal, why does it need to be produced from the whole organism?
Cellular agriculture seeks to solve this problem by logically turning the production line on its head: building from the smallest unit of life upwards to produce an identical piece of meat while avoiding the attendant issues, i.e. land use and environmental degradation, as well as the less desirable and unintended fat, bones and skin. While this stem-cell meat may be too big a philosophical hurdle, or too reminiscent of ‘frankenfood’ (Miller and Conko, 2004), to be palatable just yet, there are other alternatives.
Plant-Based Meat
The Not Company (Not Co.), a food-tech startup based in Santiago, Chile, comprising a biochemist, an engineer, a research associate and an artificially intelligent computer called Giuseppe, endeavours to take the place of local animal-food producers (Muchnick, 2016). While meat-replacement products from mainstream producers such as Alpro and Silk have existed for decades, this new wave of animal replacement takes a different angle, with a more intricate social understanding of what consumers want.
These new alternative meat producers, companies such as Beyond Meat, Impossible Foods, Hampton Creek, Not Co. and many others, are not reformulating the seemingly old-school tofu or tempeh but are building a piece of meat directly from plants. The aim of these highly environmentally motivated companies is to replace animal protein precisely with plant alternatives, making every lipid and amino acid identical to its animal form, with indistinguishable taste, smell and texture, down to a burger that bleeds and sizzles. Beyond Meat founder Ethan Brown is committed to ‘recreating a piece of meat from plant inputs’ that is also nutritionally improved: ‘our meat has more iron than steak, more protein than beef, more omega-3s than fish, more calcium than milk, and more antioxidants than blueberries’ (Brown, quoted in Leschin-Hoar, 2015).
Using a bottom-up approach, individual biochemists (or, in Not Co.’s case, a highly trained computer) apply the most current understanding of molecular science and human behaviour to an array of plant sources to determine which fits best. Not Co. has a different business model from the other multinational-focused startups mentioned above, as its ambition is to provide plant-based meat for local Chilean populations (Muchnick, 2016). These new not-meats ask the consumer to reconsider and redefine meat as a product of its scientific architecture rather than of its source. These new animal-less meats perhaps provide the only best-of-both-worlds solution, whereby we can have our culturally sensitive steak while conserving the planet too.
Given the longevity of meat-eating in human history and its almost religious status in the Western context, neither the abundance of its consumption nor the breadth of its consequences should come as a surprise. If the human proclivity to consume animals were, as it is framed, a simple and innocuous act of pleasure, free of consequences, then perhaps there would be no stimulus for this paper. This, unfortunately, is not the case.
In modernity, especially in the West, to live without the quotidian consumption of meat is considered a state of imposed deprivation, an entrenched belief system which, as explored, has the most adverse consequences for every being on this planet and for the earth itself. Unintentionally, we have created a world soon to be unfit for our inhabitance, as the very thing that made us human will be the very thing holding us back from thriving on the planet. Feeding the world, physically and sustainably, is a complex problem, one which cannot and must not rest on the laurels of history and tradition: to accomplish this, the meat-eating ‘culture-coated veil [must] [...] be peeled off’ (Oppenlander, 2013: 286), and the invisibility of the system of production and its unintended consequences must be made visible.
As global populations inherit Western food practices, demand for meat and its unintended environmental consequences will dramatically increase, and questions of how to physically and sustainably feed the world will continue to arise. In light of this, modification of the zeitgeist model of meat-eating culture is not only warranted but necessary. Exploration of the culture of eating animals and its environmental consequences makes clear that the solution lies in a culturally sensitive, multidimensional approach reflecting the intricate cultural layers of meat consumption. Possible steps towards a sustainable food future include: raising public awareness of the mystified food chains and environmental impacts of livestock production; diversifying the protein inputs of what constitutes meat (animal, vegetable or cultured) by broadening the protein barrier to include plant-based sources; and redirecting government policy towards encouraging animal-less consumption, whether by taxing animal products or by redirecting livestock subsidies to animal-less food producers.
Bibliography is available on request
Love Aphrodisiacs for Mood, Energy and Brain
Some of my favourite mood, love, energy and libido elevators have been used around the world by indigenous peoples for centuries and some may already be in your kitchen. In true nutritional anthropologist fashion, I had to include a little description of why!