DISCLAIMER: This article was written before the beginning of the ‘North East Environments of Childhood’ project, so is not strictly related. But I think it is relevant enough to be an interesting read.
The Ethiopian drought of 1983 to 1985, which led to a famine that left 1.2 million people dead and 2.5 million displaced, was not an act of God. This time something was different, or at least something was different in the way people were beginning to view environmental disasters in the “modern age”: an age in which the activities of humanity were becoming increasingly inseparable from those of the rest of the natural world. Questions were being raised as to why, as James Verdin et al. ask in Climate Science and Famine Early Warning, this drought had come so quickly on the back of several others the region had endured in recent years, out of step with existing cycles of the Ethiopian climate that typically saw drought once a decade. Droughts occur when high temperatures increase the rate of ‘evapotranspiration’, the process by which water is lost from soil and the flora it supports. The same conditions also fuel wildfires, such as those seen in Australia in 1966, 1993, and 2019. Changes in temperature also affect rates of rainfall, as they influence air and ocean currents, making dry areas drier and wet ones wetter, meaning plants that traditionally grew in a region will no longer do so. As food is an integral element of culture, the loss of these traditional foodstuffs damages affected peoples on a societal as well as an economic level.
Contemporary climate scientists suspected that the Ethiopian droughts were not “natural” in origin, at least in the traditional sense of the word. They later proved that the droughts were a consequence of a tropical rain belt that had been consistently pushing southwards over the latter half of the 20th century, reducing rainfall all across the southern Sahara. Furthermore, this southward shift was directly attributable to human activity. In an article for Geophysical Research Letters in 2013, Yen-Ting Hwang et al. explain how the release of sulphates into the atmosphere via the burning of coal in Europe and North America had been the ‘primary cause’ of the ‘aerosol cooling of the Northern Hemisphere’ and the subsequent change in global weather patterns.
Since 1985, an increasing number of “natural” disasters have been ascribed, at least in part, to human environmental influence. As a result, historians have increasingly looked backwards with a more critical eye toward the connections that could be drawn between what had previously been viewed as the separate disciplines of “natural history” and “human history”. Amongst this discussion, one term has come to embody this newly perceived relationship: the Anthropocene, the age in which, as Paul Crutzen remarked at the turn of the millennium, ‘the global effects of human activities have become clearly noticeable’. In this new state of affairs the human has taken up a position as an integral element of the planet’s climatic, aquatic, and other earth systems. But how has the “human age” degraded nutritional and economic ecologies? And why did humans create the system of global, industrial, capitalistic agriculture that is primarily responsible for it?
Ecosystems of Ideology
Understanding ideology, poverty, and famine as biocultural aspects of the same system is essential in the Anthropocene. When pressure is applied to one part of the system, be it economic, psychological, or nutritional, the other parts are affected. In 1993 India saw flooding that killed 530 people and destroyed 1.2 million acres of crops and other flora. To understand this, an approach is required that highlights three main factors of environment: the biotic, the abiotic, and the cultural. Biotic factors include the biology and health of humans, but also those of the other life forms that occupy the same environment. Abiotic factors involve the geography and climate of the environment, and cultural factors are manufactured anthropogenic elements that include ‘such phenomena as world-views, property and power’. Understanding each of these factors in any given environment requires understanding the other two; in other words, they must all be seen as variables of the same ecosystem. In the Indian case, increasing atmospheric pollution intensified the greenhouse effect, which led to greater oceanic evaporation and more rainfall, followed by floods that left millions of people homeless. The flooding also caused soil erosion that further impacted agricultural productivity during recovery and therefore increased nutritional stress. In this way the cultural (or ideological) drive of industrialisation in the market economy acted as a force of oppression via abiotic and then biotic factors.
Often such relationships are non-obvious, as in how the mineral content of a community’s water depends on the composition of the rock into which its wells are dug. Those with less natural fluoride available in their water, such as the people of Huila province in Colombia, will have higher rates of tooth decay and thus more nutritional complications. On the other hand, those with too much fluoride can suffer ‘enamel fluorosis’, impairing tooth development when young. Such environmental inhibitors range from the relatively insignificant, as with the fluoride content of water, to the far more serious, as with life expectancies in northern China being around five and a half years shorter than in the south from 1950 onwards. Yuyu Chen found that the cooler climate of northern China led to the burning of more coal, producing considerably higher air pollution than in the south, and that this was by a large margin the deciding factor in the region’s increased rate of cardiorespiratory deaths.
However, whilst the recent Anthropocene has seen a greater prevalence of human-induced environmental change, the scientific consensus is that humans have been contributing to the warming of the planet for at least a century, and historians note that humanity has been shaping the planet’s ecosystems for far longer. As Robert Goodland and Jeff Anhang explain in Livestock and Climate Change, measurements of rising undersea and atmospheric temperatures that fall far outside of normal variability are solid evidence of the role that industrial agriculture and power production have played in producing global warming. This phenomenon has polarised the earth’s climatic zones, shuffling them north in the northern hemisphere and south in the southern hemisphere, with the most dangerous changes occurring in areas that were already “on the edges” of these zones. The areas most affected, being those of more extreme conditions, are typically those inhabited by marginalised peoples, flora, and fauna with more specialist lifestyle requirements, making these changes all the more devastating. This is why, according to the UN, 99% of deaths attributable to climate change have occurred in developing nations. As David Ciplet et al. explain in Power in a Warming World, the advent of the Anthropocene has only increased the ways social inequality and poverty translate into poor health, through factors such as increases in stress hormones, exposure to dangerous toxins, and diminished access to healthcare. Those living in poverty are more likely to live near toxic sites, such as the residents alongside the oil fields of the Niger delta, 60% of whom say their health is being affected by air, water, and land pollution. As fossil fuels become scarcer, methods of extracting them become less efficient in terms of both energy and other resources such as water. Techniques such as hydraulic fracturing require large quantities of fresh water to operate and pollute local water cycles, thus competing for that valuable resource with growing human populations.
For historians, “Anthropocene” is not only a name for the recent epoch but also a methodological framework: the study of the history of environmental events in the context of the Anthropocene. To this end, the material culture of civilisations is an essential source. For example, we can use palaeopathological methods to look at the skeletal structure of Mayans from the 6th to 10th centuries, as William Haviland did in the 1960s, and note that people were growing shorter over time. Simultaneously we see fewer animal bones in the record, indicating a reduction in food availability. The entombed skeletons of the elite, however, did not change in size, showing us that the nutritional stress on the population had both environmental and societal influences. This historical practice of ‘medical ecology’ is still an emerging discipline and requires historians not only to appreciate which environmental factors affect a person’s health, but also how those consequences tie back into the earth systems they inhabit. Famine and poverty exist in cycles, or spirals, that create the conditions for their own continued deterioration.
Anthropogenic climate change has contributed to the cycle of poverty by putting excess stress on individuals and communities that lack the resources, be they economic, political, agrarian or otherwise, to cope with the change. For example, as 20th-century human settlements were increasingly built in, or climatically pushed into, areas of the world that had previously been too hot to be hospitable, such as in Saudi Arabia, more and more people moved into housing designed explicitly with air conditioning in mind. Problems arose for people living in poverty who could not afford this “convenience” and whose economic and physical health was threatened as a result, a condition known as ‘cooling poverty’. To use Nancy Romero-Daza’s term for describing the relationship between violence, drug abuse, prostitution, and HIV/AIDS, the relationship between ideology, poverty, famine, and environment is syndemic. They are ‘not simply concurrent problems, but rather constitute a set of mutually reinforcing interconnected epidemics’. For example, when the Nipah virus broke out in Malaysia in 1998, those who suffered from it were overwhelmingly not ethnic Malays but Malaysian Chinese. This was not because the Chinese were biologically vulnerable compared to the rest of the population, but because they were the people who provided cheap labour on the pig farms where the disease originated. Given this context, an even more appropriate term to describe the relationship between poverty, famine, and ecological degradation in the Anthropocene would be ecosyndemic, an idea that places emphasis on environmental factors in the creation of poverty and famine in the 20th and 21st centuries.
Experiences of poverty and famine are ideological, in that they exist within a set of power relationships between those experiencing them and those not. As Ann McElroy and Patricia Townsend describe in Medical Anthropology in Ecological Perspective: ‘Sickness is a social category – the sick role in a particular society, the way a person who is ill is expected to behave’. Looking at poverty and famine in the Anthropocene, these relationships become more complex. If the whole world is “sick” in the Anthropocene, who plays the doctor? With the 1983 Ethiopian drought, it was the nations that had caused the disaster to begin with, inadvertently or otherwise, who acted as healers by providing financial support through the Live Aid event. In these western nations, mid-20th-century optimism, particularly in America, was typified by an ideological belief that technological advances in nuclear energy, antibiotics, and agriculture, with the “green revolution”, would be able to create a world without poverty, disease, or hunger. Such ideas were strongly rationalistic, based on the presumption that humans could control their environment through technology; but the famines, epidemics, and other “natural” disasters of the late 20th and early 21st centuries speak otherwise. The environmental consequences of Anthropocene ideologies have served to undermine them and have brought forward new arguments that poverty and famine can only be addressed by working within the frameworks of the natural world rather than over them.
Looking back into our history, the Anthropocene conceptual framework pushes us to question where human decisions in relation to the environment played a role in causing poverty and famine. Peoples who transitioned from hunter-gatherer to settled agricultural societies, for example, created living environments in which closer contact with animals caused new types of disease to evolve. Clearing land made new breeding places for mosquitoes, and digging irrigation ditches made more homes for the tiny parasitic worms that cause bilharzia and therefore anaemia. As Mark Cohen and Gillian Crane-Kramer write in Ancient Health, skeletal records of early agricultural societies show ‘increased nutritional stress’ compared to their hunter-gatherer contemporaries, whose more varied, flexible, and higher-energy diets left them less susceptible to both famine and nutritional deficiency. With the emergence of the city environment in the historical record, Cohen and Crane-Kramer also note how cultural differences between peoples begin to play more important roles in their lifestyles. Indeed, the general trend identified in societies from the advent of agriculture to the modern day has been an increase in social stratification related to changing economic conditions based on environmental factors. Cases of poverty and famine in the Anthropocene follow this trend, being most acutely felt by specific groups of people on the lower levels of social strata. Such experiences become less universal within a society and more variable based on a person’s specific place in the nature-culture order.
The great Chinese famine of the late 1950s and early 1960s is, as Dali Yang notes, one example of a famine considered to be predominantly “man-made”: the result of ideological policy decisions that caused falls in food production. In this example the connection between human decision and human suffering is clear, but because in the Anthropocene poverty and famine are often more abstracted from their ideological causes via the environment, lines of causation are harder to conceive of and to draw. Indeed, the Anthropocene leaves little room for policy makers to blame “natural disasters” for crises, as the Chinese establishment did. What makes this more difficult is that the decisions behind the anthropogenic climate change that has contributed to famine and poverty in modern history were generally made in countries different from those which were harmed. As René Dubos wrote in Science and Man’s Nature, often the most widespread impacts of ecological stresses come not from the events themselves, but from the social organisation and behavioural traits of the societies that surround them.
Minority Ecologies of the Anthropocene
Whilst peoples who live outside, or even tangential to, globalised consumerist economies make up a fraction of the earth’s human population, they are responsible for managing c.25% of the world’s tropical forests. For these people, even if the Anthropocene does not bring outright famine or poverty, it can still bring aspects of those things. Human bodies require a complex array of nutrients to operate efficiently, and these are obtained in different ways by different cultures around the world. In the South American rainforests, for example, the tropical climate produces acidic soil and plant life of low nutritional value, and therefore no easy supply of animals to provide protein for a person’s diet. In many indigenous diets where meat has been uncommon, protein was instead obtained from a mix of maize and beans, which complement each other when eaten together. In the 1960s these two staples began to be displaced by the consumerist economies of the Anthropocene, resulting in less healthy and nutritious diets, if not hunger. This pattern also holds true for the peoples of the Kalahari Desert, where food obtained from hunting and foraging has steadily been replaced with imported commercial products such as cornmeal and sugar. Where at the start of the 20th century almost all of their nutrition came from wild sources, by 1980 this figure was down to 20%. This came with benefits as well as disadvantages: an increase in nutritional deficiencies, but also a decrease in infant mortality due to a greater availability of milk. This exemplifies how the Anthropocene influences not only the production of food in different environments, but also the way it is distributed throughout a given economy and the way it is culturally viewed and prepared.
Hunter-gatherer societies’ food sources are not only threatened by climate change, however. Indeed, according to the International Union for Conservation of Nature (IUCN), habitat loss has been the primary driver of extinctions and endangerments of mammals, birds, and amphibians around the world, and whilst climate change contributes to habitat loss, most of it has resulted from more physically immediate human action: the remodelling of landscapes for agricultural land. This decrease in habitat size has reduced the number of plants and animals from which to gather.
In contrast to foragers, people who live as subsistence farmers (approximately 75% of the world’s 1.2 billion poor) have diets that depend on one specific staple: in temperate European and Asian climates, wheat; in Africa, millet or sorghum; in South and Southeast Asia, rice; and in the Americas, maize. Economies built in this style can produce more food than hunter-gatherers due to the use of more intensive agricultural techniques, but are bound to that single staple’s limitations. Such peoples have been more vulnerable to a lack of certain vitamins or minerals, and more likely to have seasonal hunger incorporated into their farming routines before a harvest. Should their staple fall prey to blight, drought, or another limiter, the entire system collapses, leading to famine. This occurred in Kenya in the 1980s, when the economic pressures of Anthropocene markets caused an increase in poverty and malnutrition because pastoralists were unable to sell their produce at sustainable prices. In the Anthropocene such collapses are becoming more common, and so this style of subsistence, like hunting and gathering, is becoming less viable. Traditional cuisines around the world have thus been put under threat in the Anthropocene by ecological as well as economic contributors. Local foodstuffs are highly cultural, so this represents societal damage, but they are also the end products of a process of adaptive selection over time, whereby different culinary combinations have been trialled and those best suited to the environment chosen. Changing these diets without consideration of local particularities is therefore more likely to be a force of harm than help.
The Anthropocene has created environmental conditions that make methods of subsistence other than the one which birthed it, global industrial commercialised production, more difficult to maintain. By limiting wild spaces for foraging through habitat destruction, and by increasing the “natural” disasters that decimate traditional farming techniques, those older forms of food acquisition are attacked and diminished. This is because the majority of the energy utilised in industrial agriculture does not come from human labour, which supplies only around 5% of it compared with around 90% in its alternatives; instead it draws on energy sources such as coal and oil that produce high amounts of pollution. Industrial agriculture is vulnerable to the same threats as subsistence agriculture, namely the cultivation of monocultures that are susceptible to disease, but this is remedied with the use of pesticides, which in turn harm the health of the humans and ecosystems they come into contact with. For example, when pesticides of the neonicotinoid group caused ‘colony collapse disorder’ in bees in Massachusetts in 2013, the loss of pollinators damaged the whole ecosystem, including the yields of the very farmers who used those chemicals. Furthermore, such crops have less nutritional content overall than crops grown without pesticides or glyphosate herbicides. Indeed, the use of synthetic fertilisers ultimately degrades the soil and water resources of the ecosystem to such an extent that it becomes unsustainable, especially in an environment of increased drought, and contributes to the growth of cyanobacteria via runoff into bodies of water. These bacteria flourish in the excesses of nitrogen and other chemicals that fertiliser provides and in turn harm the health of aquatic life and of the humans who drink the water, eat its products, or use it for recreation.
The nutritional and economic stresses of the Anthropocene are not found only in traditional agricultural and foraging environments, however, although these are the most drastically affected. In America, for example, at least 15% of households suffer from nutritional stress and food insecurity despite the availability of charities and government programs. These same households are also prone to obesity, because both conditions relate to a lack of healthy food in a person’s diet rather than a lack of food overall. Those societies most responsible for creating the Anthropocene, the same ones that have profited from it, have seen a shift toward meat as the prime source of protein in their diets, and therefore increased intakes of animal fats. Alongside this, people living on ‘supermarket diets’ see increased intakes of high-glucose and high-fructose foodstuffs, which are associated with obesity and several chronic illnesses including diabetes, various autoimmune diseases, and kidney inflammation.
Industrial meat production chains that have used antibiotics to keep animals in closer quarters have bred resistance in the pathogens they were targeting, thus making those illnesses harder to treat in human populations. Additionally, the estrogenic chemicals used in the production of plastic packaging in industrial food networks, and those used on animals and crops, are suggested to promote the growth of fat cells in the body. These chemicals disrupt a human or other animal’s endocrine system, meaning the various hormones used to regulate fat growth can be confused. It is also likely that such defects can be passed down genetically to the children of parents exposed to these chemicals pre- or postnatally. At the same time, the inspection process for fresh produce has not been able to keep up with production or global distribution, so foodstuffs contaminated by bacteria or pesticides have been more likely to reach cooking pots. This has also made the task of tracing contaminants back to their source more difficult.
As Anna Bellisari describes the phenomenon in The Obesity Epidemic, obesity is ‘the predictable outcome of the highly evolved human metabolic system functioning in an obesogenic environment created to fulfil the American Dream’. This highlights how the cultural aspect of an environment can fundamentally impact the biotic and abiotic. The supermarket societies of the 20th and 21st centuries that have seen obesity epidemics have done so primarily because their economic and political structures created a gastronomic environment wherein unhealthy foodstuffs were promoted because they were more lucrative than healthier alternatives. Additionally, the promotion of the idea that a person’s fatness is based wholly on individual choice, as opposed to also being influenced by structural elements of the environments of nutrition and health in which they live, has allowed this structure to continue and other industries surrounding diet and fitness to grow.
The increased yields typical of industrial agriculture also sometimes translate into less or worse nutrition for people. In Central American nations, where the production of products such as beef increased dramatically during the 1960s, local consumption of beef went down, with ranchers themselves having to ‘pay more for less’ as the food was exported to other markets around the world. These globalised markets of industrial societies have also widened the scope of the impacts of food crises such as droughts, particularly for those living in poverty. A poor harvest in one key area of the world can have devastating impacts on the nutritional health of people the world over, as with the Millennium Drought in Australia (1996 to 2010), which caused a spike in grain prices and led to deaths and protests in over 50 nations around the world.
Recent history has challenged historians to change the ways they think about phenomena such as ideology, poverty, and famine. The Anthropocene’s ecosyndemic systems of nutrition and economy can only be grappled with through a holistic understanding of each as a node in a network of causality that is often non-obvious and multidirectional.
Author/Publisher: Louis Lorenzo
Date of Publication: 4th of August 2020