Did ‘Rising Star’ Shine Too Bright?

Last week was rather exceptional for human evolution science, even for those of us who are used to the extravagances of media attention that surround the field.

We were spoilt with the announcement of no fewer than two major discoveries in as many days.

The first of them – the new South African species Homo naledi – attracted a great deal of attention from a media only too keen to indulge in truckloads of hyperbole and speculation.

The other announcement – the sequencing of ancient DNA from 300,000- to 400,000-year-old fossils from Atapuerca in Spain – barely rated a mention in the press, overshadowed by the naledi hype.

This was probably in part because it was announced at an international conference; the coverage it received in Science suggests it will shortly be published in detail in that prestigious journal.

This seems to be a regular practice by Science these days, as shown with other similar discoveries.

But perhaps the announcement of Homo naledi the day before the Atapuerca DNA study broke also meant that the media were largely saturated; so there was a bit of bad luck in the timing as well.

So, what was all the fuss surrounding Homo naledi about?

The bones of this new species were discovered accidentally by cavers exploring the Dinaledi Chamber of the ‘Rising Star’ Cave in the Cradle of Humankind region near Johannesburg, and subsequently brought to the attention of scientists.

A modest excavation resulted in 1,550 fossils from an extinct human relative, representing the partial skeletons of at least 15 individuals.

The teeth are described as primitive but small; its hand, wrist, lower limb and foot bones are human-like; while other bones of the trunk, shoulder, pelvis and thigh are also quite primitive, being a lot like species of Australopithecus.

Reading the scientific article describing Homo naledi, you realise that the work is detailed, rigorous and careful.

It involved a large number of specialists covering a very wide set of physical features on the bones and teeth.

The case for the new species is, in my opinion, detailed, compelling and praiseworthy.

So far, so good: another new species, the human tree gets all the more interesting, and complicated.

The human drama surrounding the discovery of the bones and their recovery by a group of petite, commando-style female cave explorers is also fun and adds a lot of colour to the tale of the discovery of Homo naledi.

One rather odd thing about it though is that the scientists involved still haven’t determined its geological age.

This is unprecedented in my experience and raises lots of questions in my mind like: Did the scientists rush the announcement for some reason? Why didn’t they wait until they had an age estimate at hand before going to a journal? Are the geologists unable to date the fossils?

My ‘nonsense-filter’ also tells me that all the talk in the media about this new species burying its dead and having human-like morality, or that it dismantles one of the key pillars of human uniqueness, needs to be called out for what it truly is: absurd.

Completely unnecessary hype to sell the significance of the find to the media.

It’s just the sort of thing that infuriates many scientists and detracts from an otherwise significant discovery; pity really.

The fossils recovered from the site are so far apparently exclusively from naledi and may represent near-complete (or complete) bodies that ended up in one part of the cave.

The geologists involved believe the cave was always dark and therefore the bodies may have been deliberately placed there.

Could be, but there might be other explanations as well that need to be given much more serious scientific exploration.

Why leap to the most complicated, least likely explanation? I’ll leave you to work out why.

Even so, other very rich fossil sites like Sima de los Huesos (the ‘pit of bones’) in Atapuerca, northern Spain – coincidentally the focus of the new DNA research – have also produced a very large number of hominin remains, and they may also have been put deliberately into the cave.

This site is between 300,000 and 400,000 years old.

Yet, as with Rising Star Cave, there is no evidence that they were burying their dead, or had a concept of the afterlife or morality or engaged in ritual or religious ceremony.

Archaeology, biology and neuroscience all tell us that such behaviours fall exclusively within the human domain, and I see nothing about this new find that changes this.

The oldest convincing evidence for funerary practice is associated with our species and could be up to 160,000 years old.

Again, it would have been helpful to know how old naledi really is; and speculating it could be as old as 3 million years, without any apparent evidence, as the team is reported to have done in the media, is like adding nitroglycerine to the fire of media speculation.

It’s one thing to get the message out to the public about the exciting discoveries we’re making and to educate the very people who kindly allow us the privilege of doing science using their hard-earned tax dollars.

I’m thrilled when my colleagues announce their work to the media, even if I don’t always agree with their conclusions.

It can be fun to have a bit of a public stoush over interpretations, and the wider public benefits from a sense that scientific findings can be interpreted in varying ways.

Doing so helps enrich understanding of the human enterprise we call science, and helps maintain or even grow public interest in it, in a world driven by an overriding economic imperative and prone to disregarding the huge cultural and intellectual contributions science makes to society.

But if we go too far, we run the real risk of trivialising the huge investment of time, money, energy, care and intellectual effort that goes into many scientific discoveries.

It can also do damage to science itself and, dare I suggest, even contribute to the mistrust that increasing numbers of people in the Anglophone West seem to feel about it.

You can also end up with egg on your face, and some people never seem to learn this lesson.

In contrast, the Atapuerca DNA research has direct bearing on understanding the evolution of the living human species, which is quite rightly where the central focus of human evolution research should be.

Researchers have argued about three scenarios for the Atapuerca hominins: they might be the earliest known Neanderthals; or could sample the population that gave rise to Neanderthals; or perhaps are the common ancestor of both humans and Neanderthals.

The research, as reported by Ann Gibbons, confirms that they are in fact the earliest Neanderthals: a kind of ‘archaic’ Neanderthal, if you like, which subsequently evolved into the ‘classic’ Neanderthals we see in Europe and West Asia by about 150,000 years ago.

What are the broader implications of the research for understanding the evolution of living humans?

First, the finding pushes the age of the shared human-Neanderthal ancestor well beyond 400,000 years ago, suggesting our species, H. sapiens, might also be at least this old.

Also, with the Atapuerca group living in Europe, it’s even possible that our species evolved in this or an adjacent region of Eurasia, and later migrated back into Africa.

And being close to the common ancestor, the Atapuerca fossils give us real insights into what that ancestor must have looked like, and into the ancestral body form of our own species.

The fossils from Europe, Asia and Africa from around this time are physically very diverse, with some researchers thinking they represent multiple species, only one of which could be the ancestor of living humans.

Question is, which one?

This new research suggests the European branch is closest among them all and deserves much more attention in this regard.

In contrast, we don’t know, and will doubtless never know, whether Homo naledi had anything to do with the evolution of living humans, least of all whether its brain, mind or behaviour were anything like our own.

The Long Reach of the Past: Did Prehistoric Humans Shape Today’s Ecosystems?

We all know that humans are having a massive impact on the planet.

Our effects include altering the Earth’s rotation by damming large amounts of surface water; changing the composition of the atmosphere by punching a hole in the ozone layer and adding vast amounts of CO2, methane and other pollutants; transforming the composition and temperature of the oceans; and clearing large tracts of land and removing or dramatically altering vast numbers of terrestrial and aquatic ecosystems in the process.

Plenty of these changes are plain for all to see; others are more obscure, but no less significant.

And, with the COP21 UN Paris Climate Change Conference just around the corner, politicians, policy makers and NGOs are again turning their attentions to reaching an agreement that aims to keep global temperature change to below 2° Celsius.

A major issue for scientists studying the Earth’s physical and biological systems is just how great the influence of humans has been and for how long it’s been happening.

Our global destructive impacts mean that potentially any organism or ecosystem, and many of the earth’s physical systems like erosion, soil formation and water cycles, carbon and nitrogen cycles, and climate, have been affected in some way by human activity.

But can we disentangle the effects of human activity on these systems and organisms from natural signals and cycles?

I’d argue we probably can’t – that human impacts are just too wide ranging and too ancient, and that our disruptive and destructive effects have reached every part of the planet.

This means that probably every scientific study of any contemporary system or living organism catalogues the effects of our species and its economic activity in some way.

If we go back far enough to a time when humans didn’t exist, we have the potential to understand how the world looked and how natural systems behaved before we were around.

Problem is that the information we get is from the very incomplete and often biased geological record, in the form of fossils and various archives of climate and environmental change like isotopes recorded in ice or cave stalagmites.

And, of course, many organisms alive in the past are now extinct: the planet looked very different even just 20,000 years ago at the peak of the last major cold stage of the Ice Age or Pleistocene epoch.

Scientists like me who study extinct organisms and long lost ecosystems wonder whether large scale human impacts like those we see today are truly confined to the period following industrialisation.

Did the ‘Anthropocene’ really begin 215 years ago?

Or does the environmental legacy of our Palaeolithic ancestors reach into today’s world?

This issue is at the centre of one of the most hotly contested questions in palaeontology and archaeology, namely, the extinction of the Pleistocene ‘megafauna’.

But it’s a lot broader than this issue of course, and it cuts to the core of who we are as a species, the way we have evolved, and the lengths we’ll go to in order to ensure our own survival; some would argue even our future survival.

Some scientists have also suggested that the megafauna extinctions set the stage for the planet’s sixth major extinction event, which is unfolding before our eyes.

During the last phase of the Ice Age, between roughly 50,000 and 10,000 years ago, almost 200 species of mammals went extinct across the globe.

That’s half of the world’s mammals weighing more than 44 kg perishing in what was an instant in the long history of life on the planet.

A growing body of highly contested research suggests that humans may indeed have dramatically shaped the diversity of living mammals in the deep past, just like today, leaving us an impoverished natural legacy.

And, let’s not forget that humans with our average body mass of close to 70 kg are megafauna as well.

While our species is obviously still here, we remain the chief suspect in the extinction of our close cousins the Neanderthals, Denisovans, and probably other relatives around this time.

The megafauna debate has been highly polarised for decades: humans being blamed on one hand, and natural climate change on the other.

Environment-altering activities by ancient hunter-gatherers, such as landscape burning in places like Australia, have proven very difficult to establish, and their possible impacts hard to separate from natural climate cycles.

Yet other studies suggest that ancient Aboriginal Australians were one of the major agents involved in dispersing baobab trees in northern Australia; so our environmental impacts can be quite surprising.

And the chronology of human settlement and timing of megafauna disappearance in Australia remain uncertain: significant barriers to resolving the extinction question with any certainty.

So again, a major difficulty is the poor quality of the information we have from the fossil, archaeological and ancient environmental records.

The spectre of the confounding effects of natural and human-induced environmental signals remains all too real even for the Ice Age.

Another way to approach the question of human environmental change in the past is to construct mathematical models to look at changes over time and to compare the influences of natural cycles and changes with human-facilitated ones.

A battery of such studies is beginning to point firmly to the prehistoric human colonisation of new parts of the planet as a major driver of extinction and environmental change; possibly the leading cause of the megafauna extinctions.

New research published over the last couple of months by Soren Faurby of the Museo Nacional de Ciencias Naturales Madrid and Jens-Christian Svenning of Aarhus University has also pointed the finger squarely at humans.

In an interesting twist, they modelled what worldwide diversity patterns of mammals might look like in the absence of past and present human impacts, based on estimates of the natural distribution of each species (5,747 of them) according to its ecology, biogeography and the current environmental template.
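To get a feel for the logic of this kind of counterfactual modelling (a minimal, purely illustrative sketch, not the authors’ actual method, code or data), imagine each species as a row of presence/absence values over a grid of map cells: subtracting the observed species richness of each cell from its estimated ‘natural’ richness highlights where human impacts may have erased diversity.

```python
# Toy "present-natural" diversity comparison (illustrative only;
# the species counts, ranges and probabilities here are made up).
import numpy as np

n_species, n_cells = 5, 10   # tiny stand-in for 5,747 mammal species on a world grid
rng = np.random.default_rng(42)

# Hypothetical presence/absence matrices (species x grid cells):
natural = rng.random((n_species, n_cells)) < 0.6               # estimated ranges absent human impacts
observed = natural & (rng.random((n_species, n_cells)) < 0.7)  # ranges after human-driven contractions

natural_richness = natural.sum(axis=0)          # species per cell, no-humans scenario
observed_richness = observed.sum(axis=0)        # species per cell, today
deficit = natural_richness - observed_richness  # diversity loss attributable to humans

print("Natural richness :", natural_richness)
print("Observed richness:", observed_richness)
print("Deficit per cell :", deficit)
```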

They found that prehistoric human-driven extinctions in addition to recent ones were probably an important influence on present global mammal diversity patterns.

They even suggested that areas normally thought by ecologists to be biodiversity hot spots, like mountains, may in fact reflect their role as refuges for species otherwise affected by hunting and habitat destruction, rather than reflecting a natural pattern.

I’m satisfied that a strong case exists that humans did play an extinction role and that there truly is a link between what our Palaeolithic ancestors were doing to the environment and what we’re doing today.

The difference now, of course, is that with an expected almost 10 billion people by 2050, and the remarkably destructive technology we possess, we’re doing damage on an unprecedented scale and face a future on a planet with an irreparably damaged biosphere and dramatically altered atmosphere.

Spare Your Health, Budget, and the Planet: Ditch the Palaeodiet

Interest in the diet of our evolutionary ancestors would ordinarily be a topic of curiosity in only the most obscure of scientific circles.

But the popularity of the so-called palaeodiet has brought unprecedented attention to the foods consumed by Stone Age or Palaeolithic people.

And, it might ultimately be doing us all more harm than good.

The palaeodiet is claimed to be a recipe for natural health and able to cure a vast range of diseases.

We await the clinical trials to pass verdict on its claimed disease-curing benefits, but at the moment most such claims seem like little more than snake oil peddling or faith healing.

Even a cursory look at the palaeodiet highlights huge contradictions and a wilful ignorance of the science behind human evolution and diet.

Proponents of the palaeodiet eschew all processed food, but are happy to suck on beverages like wine.

Don’t be fooled though: wine, like many other components of the human diet, is a processed food.

Winemaking involves turning a fruit into an alcoholic beverage: grapes are mechanically broken down or heated, with sugar, acid, nutrients, yeast and other chemicals added to promote fermentation, add flavour, remove sediment and preserve the wine.

And humans have been processing their food for tens of thousands, perhaps millions, of years, so it’s absurd to think you can exclude processed food altogether.

The palaeodiet eliminates all grains, legumes and potatoes, yet there is plenty of evidence that humans have evolved to eat carbohydrates, especially starches.

Take the amylase genes, which evolved to aid the digestion of starch, either in our saliva or in the pancreas, which secretes the enzyme into the small intestine.

Humans are unique among primates in possessing large numbers of salivary amylase genes, and there is a clear association between gene number and the concentration of the amylase enzyme in the saliva.

Plant foods containing high quantities of starch may even have been essential for the evolution of the large human brain over the last 2 million years, according to new research by Karen Hardy from Universitat Autónoma de Barcelona and colleagues published recently in The Quarterly Review of Biology.

Our brains are three times the size of those of our chimpanzee cousins and are undoubtedly the seat of many of the differences between us, in biology and behaviour alike.

Previous models such as the ‘expensive tissue’ hypothesis of Aiello and Wheeler proposed that the use of stone tools facilitated a shift in our ancestors from a mostly plant-based diet to one comprising largely meat, in order to feed our large brains.

This shift, they suggested, facilitated the evolution of our enlarged brain as well as smaller teeth and reduced gut adapted for eating meat.

Yet there have been lingering doubts about, and sometimes claimed refutations of, the links between human evolution and meat eating.

There is no clear association across mammals including primates between an enlarged brain and reduced gut size.

Instead, large brains seem to be found in mammals that are capable of storing large amounts of body fat to stave off starvation and also have efficient forms of locomotion like our human bipedalism.

The new model from Hardy and co-authors suggests that cooked starch greatly increased energy availability to energy expensive tissues like the brain, red blood cells, and also the developing fetus.

They also suggest that the number of copies of the salivary amylase gene may have enhanced the importance of starch in human evolution following the controlled use of fire and development of cooking.

But there are of course many sources of carbohydrates in the diet, and research suggests that early humans may have eaten underground food items like roots, tubers and rhizomes, as well as seeds, certain fruits and bark, which are all widely available and rich in starch.

Grains were also an important and highly effective source of carbohydrates in the Palaeolithic, despite what the palaeodiet states.

Grinding seeds to make flour and probably bread is known from at least 25,000 years ago in Europe, arguably much longer, and humans have been cooking for at least 400,000 years, but perhaps even 2 million years.

The truth is we have no idea how much meat was eaten in the Palaeolithic, because so few plant food remains have been preserved for us to study, making it hard to garner an accurate picture of the complete diet of our ancestors.

Mammal bones with signs of butchering or cooking are plentiful in the archaeological record, but bones preserve as fossils far better than plant remains do, and so we have a highly skewed view of past diets.

We would also do well to keep in mind that the role and safe amounts of animal food in the contemporary human diet remain controversial in nutritional and medical science regardless of what we think our ancestors may have eaten.

Red meat in particular has been linked to a range of diseases like metabolic syndrome, a variety of cancers, atherosclerosis and Type 2 diabetes, so a degree of caution about safe levels of consumption seems wise.

If your aim is to lose weight, then the palaeodiet is by no means your only option.

Much clinical research has shown that the key to weight loss is reducing the total number of calories consumed, regardless of whether they come from carbohydrates, protein or fat.

Watch what you eat, reduce your calories and lift your activity level: a tried and true formula that works for most people.
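As a rough worked example (using the common, admittedly simplified, rule of thumb of about 7,700 kilocalories per kilogram of body fat), a sustained deficit of 500 kilocalories a day adds up to 3,500 kilocalories a week, or roughly half a kilogram of fat lost.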

Studies of hunter-gatherers during the last couple of hundred years have also shown they walked an awful lot: on average 360 km a year, but up to 3,600 km per annum.

So, you might consider a palaeo-exercise regime combined with a scientifically based and balanced diet as a healthy starting point for weight loss and general good health, rather than the potentially dangerous palaeodiet.

Nutritionists also advise greatly reducing the amount of factory-made food we consume, because much of it lacks nutritional balance and often has excessive calories and high sugar, salt or fat.

I guess this is one thing palaeodieters and nutritionists are close to agreement on, probably because it seems an awful lot like common sense.

While palaeodiet inventor Loren Cordain argues we should only be eating animals that have themselves eaten a ‘wild’ diet, Australian celebrity chef Pete Evans has extended it to consuming only organic food.

Adopting such an approach to food selection is impossible for most of the planet’s 7 billion inhabitants, who can’t afford expensive organically grown food.

Evans wants the palaeodiet to be the new ‘normal’ for everyone, but to me, this smacks of Western middle class elitism and is simply out of touch with the realities faced by most people on the planet.

Anyway, most of the sources of animal food consumed by palaeodieters are from domesticated animals, which have been bred for flavour and meat quantity, and haven’t eaten a truly wild diet for thousands of years.

Eating a diet based on wild caught food would also be devastating for the planet.

The environment is becoming degraded and its natural resources depleted on a remarkable scale and pace, and a good deal of this is associated with agriculture and activities like fishing.

It’s estimated that each year tens, perhaps hundreds, of millions of sharks alone are harvested from the oceans and in many places fisheries are far from sustainable.

Similarly, if your concern is with animal welfare, then organic farming may not always be the best choice.

We need to get the balance right in our food choices between the broader effects of production on the environment, welfare of livestock and impacts on humankind more broadly.

The United Nations predicts there will be almost 10 billion people in the world by 2050.

This will lead to a dramatic need to increase food production to feed the extra people.

The scale of the challenge ahead was pithily described by Charles Godfray and co-authors in an article about the challenges of population growth and food security in Science magazine in 2010:

This challenge requires changes in the way food is produced, stored, processed, distributed, and accessed that are as radical as those that occurred during the 18th- and 19th-century Industrial and Agricultural Revolutions and the 20th-century Green Revolution. Increases in production will have an important part to play, but they will be constrained as never before by the finite resources provided by Earth’s lands, oceans, and atmosphere.

All of this sits within the context of the growing impact global climate change will have on food and water availability as well.

If we’re truly concerned about the fate of the planet and humankind, especially those of us in the West, we all need to be prepared to compromise our lifestyles, including our diet, and ditch luxuries like the palaeodiet.

Eating large amounts of meat, especially animals which have eaten a wild diet, is simply unrealistic, unsustainable and unreasonable if we want to do our bit for nature and the rest of humankind.

How Many Forms Can An Ape Take?

The study of form has been central to biology ever since people have contemplated how life came to exist and how individual species or groups of them are related to one another.

When biologists speak of ‘form’ they mean the shape, appearance or structure an organism takes — be it a whole organism or only a constituent part such as a bodily system, organ, microscopic structure or even a molecule.

A famous example from the 20th century is the form of the DNA molecule, which we have known to be a double helix since Watson and Crick published their model in 1953.

Palaeontologists like me are especially interested in form because it gives us clues about the diversity of past life and deep insights into the history and mechanisms of evolution.

Most organisms seem to be well designed for their ecological circumstances, an observation that is as old as biology itself.

Why this is the case, and just how good the fit between organism and environment actually is, remain fundamental questions in evolutionary science today.

Developing views about form

Interest in form goes back to the Ancient Greeks who were the first people to formally observe the great variety of life and explain how it came into being.

In his work Historia Animalium (The History of Animals), Aristotle (384-322 BC) produced one of the first scholarly works devoted to the subject of comparative anatomy, or the comparison of form among animals.

A little later, the Roman naturalist and philosopher Pliny the Elder (about 23-79 AD) also pondered the diversity and relationships of life, and was the first person to describe the strong resemblance of humans to primates in his book Naturalis Historia (Natural History).

Galen of Pergamon (c. 130-199 or 217 AD), a prominent Greek physician, surgeon and philosopher in the Roman Empire whose ideas dominated Western medicine for over 1,000 years, also recognised similarities in form between humans and primates.

He is even said to have recommended that his students study primate anatomy in order to develop a better understanding of humans.

Seventeen centuries later, Charles Darwin provided in 1859 a mechanism by which favourable forms could become widespread within species and persist over long periods, namely, through natural selection.

His understanding of how form arose within individuals, and was inherited and modified, remained rather rudimentary, and his speculations were ultimately shown to be incorrect.

These problems would have to await a proper understanding of growth and development, including embryology, and the principles of inheritance, all of which were developed during or soon after Darwin’s time.

It was really with D’Arcy Thompson’s 1917 book On Growth and Form that the examination of form took on a more scientific bent.

While others before him like Goethe had recognised the importance of form — inventing terms like ‘morphology’ to describe it — theirs was a largely descriptive approach, not an explicitly geometric or mathematical one.

In his 1936 book Order and Life, Joseph Needham wrote that “the central problem of biology is the form problem,” and it remains so today.

In the 1990s the scientific discipline of ‘evo-devo’, or evolutionary developmental biology, blending embryology, genetics and evolution, began to provide deep insights into how form was constructed and how it changed.

Evo-devo marked the beginning of a profound shift in our understanding, but there remain many unanswered questions.

A limit to form?

One question that has plagued the study of form since before Thompson is whether nature sets limits on how many different forms life can take.

Or put another way, is evolution constrained in the solutions it can invent to solve the ecological problems species face?

Simon Conway Morris of Cambridge University, for example, an influential palaeontologist who has devoted his career to studying the Cambrian explosion of animal life, argues there is indeed a limit to the number of possible forms life can take.

The evidence for this, he argues, comes from the prevalence of repeated forms in nature across distinct evolutionary lines, or what biologists call evolutionary convergence.

If this were correct, what might be the cause of such limitations to form? Is there, for example, only a limited number of combinations that genes can take?

The genes that control the body plan in the developing embryo such as the Homeobox cluster are after all highly conserved and very ancient.

There are striking similarities in these genes even between humans and fruit flies.

But Conway Morris sees repeating forms in nature as evidence for God or design.

For me, Creation is too complex a solution to be satisfactory, one that raises far more questions than it answers, and one not really amenable to testing within the scientific framework.

Besides, evo-devo has opened up many more possibilities now including tinkering by natural selection in the timing of key events during embryogenesis through genetic mutation or even epigenetic influences.

Deep insights from apes

If we take a close look at the animals I’m most familiar with — humans and our ape cousins — we find good reasons to be sceptical that the kind of constraints on form that Conway Morris supposes actually exist.

Research published recently by Sergio Almécija of George Washington University and co-authors shows that scientists have dramatically underestimated the ways in which the body form of apes can and does vary.

For primates, whose lives are spent mostly or entirely in the trees, grasping hands are one of the main ways in which they interact with their environment.

Hands hang onto branches when primates move about or rest, they grasp food for eating, and apes groom each other using their well-developed handgrips.

Almécija and colleagues examined the proportions of the bones of the hand and hand digits in humans and other apes and found that they varied an awful lot, much more so than we had all been led to believe before now.

They found that among the apes, gibbons possess a highly distinctive form of hand, while chimpanzees and orangutans have similar hands that evolved independently of one another.

Gorillas and humans had very conservative or ‘primitive’ hands that had changed very little during our evolutionary histories.

Even today, human and gorilla hands look an awful lot like monkey hands rather than those of our chimpanzee cousins.

So our monkey-like human hands also turn out to be an awful lot like those of the earliest members of our evolutionary group, the bipeds who first strode the African savannah seven million years ago.

One big implication of this work is that it challenges a long-held assumption that living chimpanzees are a lot like our earliest human ancestors: a kind of evolutionary snapshot of our own earliest bipedal ancestors, if you like.

For decades now we have been studying chimpanzees to glean insights into our immediate evolution, but this approach looks increasingly problematic.

Coming back to where we started, there seem to be far fewer limits to form than many scientists have believed, and important features like hands can change or stay the same in ways that are not immediately obvious or predictable.

Evolution Took Many Paths to Building ‘Pygmy’ Bodies

For more than two centuries physical anthropologists have been preoccupied with cataloguing and explaining the way humans vary physically across the planet.

We mostly differ in familiar ways: body mass and stature, limb proportions, head size and shape, nasal prominence, proportions of the face, tooth size and shape, hair form and colour, skin pigmentation, iris colour, among others.

From the late 18th through to the second half of the 20th century, physical variation was assumed to reflect the deep division of humanity into ancient ‘races’.

With major developments in human genetics from the 1960s onwards the notion of races began to be dismantled and eventually their use fell into disrepute.

Present day racialist stalwarts are wilfully ignorant of genetics or live in a vacuum divorced from the history of race theory and realities of human biological variation.

That’s not to deny that human geographic variation exists, a fact demonstrated powerfully by genetics time and time again.

But the way we vary along geographic lines simply doesn’t fit the old racial categories; but then, they never were about science, as readily acknowledged by early physical anthropologists like Johann Blumenbach.

One group that has received more than its fair share of scientific and racialist scrutiny is the so-called ‘pygmy’ peoples.

These are short statured populations – average height around or below 150 cm – found in many parts of the world including in Africa, South Asia, Southeast Asia, Australia, New Guinea and South America.

For example, the Efe hunter-gatherers of the Ituri rainforest (Democratic Republic of Congo) have mean adult female and male statures of 136 cm and 143 cm.

Such populations have played a major role in evolutionary models, underpinned by racialist theory, such as the Negrito settlement of Southeast Asia and Australia.

The term ‘pygmy’ is still widely used in science and the popular reporting of science, but we should be a little more circumspect about its use.

It’s a term tied to 19th and early 20th Century Social Darwinism, with pygmies seen as a lower stage in human evolution, and therefore a lesser race than the imperialist Europeans who studied them.

Their small body size and, it was argued, diminutive brain size were seen as a kind of infantile stage in the evolution of humankind.

In fact, their brain sizes lie comfortably within the range of other, taller populations.

Unsurprisingly, the ‘all grown up’ and strapping northern European lads doing the racial categorising saw themselves as occupying the apex of human evolution.

Pygmy is also a term these people don’t apply to themselves, and so has often come to be seen as derogatory.

Most of these small-bodied populations inhabit tropical rainforests, such as intensely studied groups like the Aka, Mbuti, Baka and Efe in tropical Africa, Andaman Islanders in South Asia, Aeta, Agta and Batak in tropical Southeast Asia, and the Mountain Ok and Mafulu peoples of New Guinea.

But there are some important exceptions, such as the San in southern Africa who live in the Kalahari Desert, and some of the Khoikhoi people who live around the southern African cape.

While it’s been argued they migrated to these regions from the tropics, genomic research indicates the Kalahari may be the evolutionary homeland for the human species, with the San being one of the oldest genetic lineages of humankind.

These peoples are, or were until recently, hunter-gatherers and so have also been the subject of intense research about all sorts of questions surrounding human evolution, including today by evolutionary psychologists.

We would be wise to be sceptical about this though: living hunter-gatherers don’t represent a snapshot of a lost world, or stage of humanity’s evolution, as such studies often imply.

Dwarfism is of broad interest in evolutionary biology because it’s known to affect many mammals, especially in island settings.

Take the Ice Age pygmy elephants of the Mediterranean islands, or the last of the woolly mammoths, which became pygmies in the Arctic region.

Another celebrated case is the so-called Hobbit from Flores – Homo floresiensis – also thought to be an example of island dwarfing.

The underlying cause of the small body size of ‘pygmy’ people has been one of the major themes of human biology for decades.

Short stature has been shown to be associated with perturbations in the GH1-IGF1 pathway, one of three endocrine systems that regulate growth.

A number of studies have shown that many, but not all, of these hunter-gatherer groups have low plasma levels of IGF1, hinting at differences in their underlying genetics.

Chronic malnutrition can also lead to low levels of IGF1 and other growth hormones, so environmental effects have also been implicated.

Genetic studies over the last half decade have suggested a complex role for DNA mutations and natural selection in at least some cases of human population dwarfism.

Selection could plausibly have acted to reduce body size in rainforest dwelling people, as they occupy a kind of ‘ecological island’ with scarce and difficult to acquire food resources.

Small bodies require less energy so people with them could survive on lower caloric intakes.

Other explanations have included thermoregulation in a hot tropical environment, the need to reduce energy expended during locomotion, and a life history explanation keyed into a lower reproductive age.

The scientific jury’s still out on which one provides the best explanation.

A new study published in Nature Communications has examined the growth patterns of the Baka people from Southeast Cameroon and offers some fascinating new insights into the mechanisms underpinning the human ‘pygmy’ phenotype.

Fernando Rozzi from the Centre National de la Recherche Scientifique in Paris and his team studied mission records gathered from the 1970s onwards by medically trained nuns, recording the growth of almost three hundred children.

In their paper they found that body size at birth was within the normal limits set by larger-bodied populations, but that the growth rate of Baka infants slowed significantly during the first two years of life.

After this, their growth more or less followed the standard pattern seen in people across the world, including the adolescent growth spurt accompanying puberty, which is a universal and unique characteristic of our species.

The Baka growth pattern also contrasts with that documented for other short-statured populations in Africa.

So evolution seems to have acted to produce the same outcome in different populations using different mechanisms.

We know that across life, evolutionary tinkering with growth and development is one of the major causes of differences among closely related species.

Body size is also one of the most important ecological variables among mammals and so understanding the mechanisms that alter it provides profound insights into evolution and fundamental ecological strategies.

Finally, the fact that such vast differences in growth have been found between short-statured populations on the same continent, evolving independently, shows once again that the old race categories like ‘pygmy’ or ‘Negrito’ are simply incapable of doing our evolutionary history justice.


Making Sense of Our Evolution

The science about our special senses – vision, smell, hearing and taste – offers fascinating and unique perspectives on our evolution.

Yet it remains patchy; we know surprisingly little, for example, about how our sense of hearing has evolved since we shared an ancestor with chimpanzees some 8 million years ago.

In contrast, understanding of the evolution of human vision and smell, including new developments in ancient DNA research, offers great promise in answering some long-standing questions about our uniqueness as a species.

A very visual mammal

Humans live in a world dominated by images and colour. Our sense of vision largely dictates how we perceive the environment around us.

We’re also prone to summing up others by the way they look: faces and expressions, skin, eye and hair colour, as if we could read someone’s heritage or personality like a book.

All of this hints at the crucial role vision plays in our social lives as well.

For our kind of mammal, the primates, vision is king.

Our ancient ancestors evolved for a life in the trees and today most of our primate cousins still lead an arboreal existence.

The need to safely judge distances when leaping or climbing about a canopy, tens of metres above a forest floor, certain death only a single wrongly placed hand grip away, must have led to intense natural selection.

We have, as a result, highly refined vision, with monkeys and apes, including humans, possessing stereoscopy: we see in three dimensions.

Our skulls, eyes and brains have evolved to facilitate 3D vision: eye sockets that face forwards, the field of vision from each eye overlapping, and brains processing visual information from each eye equally on left and right hemispheres of the brain.

Trichromatic vision allows humans and many other primates to perceive perhaps 10 million colours; its evolution was probably keyed into the eating of fruit by our distant primate ancestors, allowing it to be distinguished against a forested backdrop of leaves.

The primate eye is the largest relative to body size of all the mammals; a legacy perhaps of the nocturnal lifestyle of the earliest primates.

Yet, the human eye is unusual among all primates in having an exposed sclera, the outer layer of hard tissue that encloses and protects it.

The human sclera is also white; in other primates it’s pigmented brown, probably acting as camouflage.

The depigmented white of the human sclera plays a role in enhancing communication especially when we make eye contact and may have a function in sexual attraction as well.

When we compare the human skull with our Neanderthal cousins we find their visual system was probably better developed than ours, as estimated from the volume of their eye sockets (orbits) and the space that would have been filled by the occipital lobe of their brain.

Just what Neanderthals were doing with their eyes that was so different to us remains unknown. Did it help them in low light or snow covered landscapes, or with hunting?

So far, though, we’ve learned remarkably little about the evolution of sight from ancient DNA; a couple of genes were identified in the initial sequence of the Neanderthal genome in 2010, but little seems to have emerged since.

The neglected sense

Can you imagine what our lives would be like if our dominant sense was smell, or olfaction?

Scents and odours filling our world like colours on the visual spectrum, only the shades and tones would be odours.

Across all life, from bacteria to mammals, the ability to detect chemicals in the environment is fundamental to survival.

For most mammals the sense of smell dominates their world much as our sense of sight does.

The mouse has around 1,000 different cell types for detecting odours, or so-called olfactory receptors. Humans only have about 350 of them.

It’s also been a major catalyst of biological evolution. Among vertebrates alone at least four different kinds of olfactory systems have evolved.

The evolution of olfaction has also left a very large imprint on the mammal genetic code; olfactory genes represent the single largest gene family in the mammal genome.

The human genome contains an estimated 900 genes and pseudogenes associated with the perception of smells while the mouse genome has roughly 1,400 of them.

In comparison, the catfish has only around 100 olfactory receptor genes.

It’s the pseudogenes though that have attracted much of the research attention: pseudogenes have either lost their ability to produce proteins or fail to produce them within a particular kind of cell.

Around 60 percent of human olfactory receptor genes are in fact pseudogenes, compared with only about 30 percent in other apes, and 20 percent in mice and dogs.

Charles Darwin thought that the human sense of smell was a vestigial (or ‘useless’) trait, and he may even have taken our large number of pseudogenes as confirmation of his ideas, had he known about them.

While Darwin’s view was clearly an overstatement, the dramatic loss of functional genes strongly hints at major differences between our sense of smell and that of most other mammals, including our ape cousins.

Still, we know that in living humans our sense of smell is anything but useless; it plays a role in our immune system, in social communication, reproduction including choosing mates and during courtship and sex, detecting emotional stress in others (‘emotional contagion’) and of course during eating.

Neanderthal smells

It’s long been suggested from fossil comparisons that humans have a better developed sense of smell than our Neanderthal cousins did; the opposite of the situation with our sense of vision.

The olfactory bulb – an organ that sits inside our braincase, overlies the nose cavity and transmits smell perception to our brains – was probably larger in humans for a start.

Ancient DNA has also opened up the possibility of studying differences in the olfactory genes directly across humans, Neanderthals and the mysterious Denisovans.

Last year Graham Hughes of University College Dublin and co-workers reported differences in olfaction genes between humans and our extinct cousins.

They found that 10 functional olfaction genes in humans were inactive (pseudogenes) in Neanderthals, and 8 in the Denisovans.

This points to subtle but probably ecologically important differences in smell between our species.

Another recent study, led by Kara Hoover of the University of Alaska Fairbanks, compared the ability of humans from a range of populations across the globe to detect an odour, focusing on an odorant receptor gene called OR7D4.

Variants of this gene affect our ability to detect a smell called androstenone that’s produced by pigs and wild boar, so it may have played a role in diet among our ancestors.

The version of the gene a person carries is known to be a good predictor of androstenone-smelling ability.

Hoover and her team also studied the OR7D4 gene in the Neanderthal and Denisovan genome sequences, finding that the Neanderthal version was like our own, while the Denisovan one differed in its DNA code but functioned in a similar way.

Around 50 percent of living adults cannot smell androstenone while about 35 percent can detect 200 parts per trillion in air and are offended by it.

Androstenone is also found naturally in human sweat and urine, with boar androstenone even being marketed and sold as a human aphrodisiac.

So the ability of Neanderthals and Denisovans to detect it might eventually also have something to tell us about their sex lives as well.

Promise of DNA

We can only go so far with fossil studies when it comes to the special senses; so little information preserves for us to reconstruct their anatomy, after all, and little in the way of function.

But with vast numbers of olfactory genes available for study in the genome of living humans, our extinct cousins like the Neanderthals and Denisovans, and many living primate relatives, we’ve still a lot to learn about our remarkable sense of smell.

Aboriginal History Rewritten Again by Ignorant Political Class

Last week Liberal Democrats Senator David Leyonhjelm was widely reported as suggesting that people other than Aboriginal Australians may have occupied the Australian continent in the past.

At a doorstop at Parliament House he apparently couldn’t name his sources when pressed by journalists and seemed rather vague on the details.

His doubt was apparently based on disagreement among anthropologists over the identity of the painters of the so-called ‘Bradshaw’ or ‘Gwion Gwion’ rock paintings in the Kimberley region of Western Australia.

Now there is a very strong sense of déjà vu here, because this very issue was at the centre of a widely reported and politically fuelled stoush from the late 1990s to the mid-2000s, back then within the context of Native Title.

Actually, the debate over these paintings has existed ever since Joseph Bradshaw brought attention to them in 1892 because they were thought at the time to be ‘too advanced’ to have been made by Aborigines.

This fitted a 19th Century linear worldview in which societies progressed from primitive to advanced, the Bradshaw/Gwion Gwion paintings being touted as an anomaly made by an exotic people.

The Bradshaw/Gwion Gwion art style was however widely accepted by academic researchers from the late 1960s onwards as belonging within the broader rock art traditions of Northern Australia.

But following the publication of a book about the art in 1994 by amateur archaeologist Grahame Walsh the 19th Century view made a comeback.

Walsh argued that the Bradshaw/Gwion Gwion tradition was painted by a pre-Aboriginal group 20,000 years ago, Aboriginal Australians only arriving in the area 10,000 years ago.

In a second book published in 2000, he even went to great lengths to disconnect Aboriginal Australians culturally from the Bradshaw/Gwion Gwion paintings and instead connected them to a population possibly originating in Africa.

A great deal of space has been devoted in academic journals to deconstructing Walsh’s unfounded ideas and analysing the political fallout from them.

Ian McNiven, an archaeologist at Monash University, wrote an article in 2011 in the journal Australian Archaeology about the 1990s/2000s public debate over them.

As he noted, there is very good evidence for cultural continuity between these paintings and recent art as documented for example by amateur archaeologist David Welch in 1996.

Paul Taçon who holds a chair in rock art research at Griffith University also pointed out in an article in Nature Australia (1998-1999) that Welch:

“has documented a recent use of every type of artifact depicted in Bradshaw art, strongly suggesting the paintings reflect Indigenous Australian way of life”.

More broadly, the science of human origins has moved a long way in the last two decades, not least because of big developments in genetic research.

DNA shows clearly that Aboriginal and Torres Strait Islander people are directly descended from the earliest humans to have settled Australia, New Guinea and surrounding islands.

Genetic clocks show they split from populations alive in East Asia today between 45,000 and 75,000 years ago.

Human skeletons from the Willandra Lakes region of southwest New South Wales also make abundantly clear that living Aboriginal Australians are the very same people as those who arrived here more than 40,000 years ago.

McNiven has also pointed out the very long history of the political use of archaeology to justify colonial ends by disassociating Indigenous people from their land and heritage.

He pithily concluded in 2011:

Thus, I suspect, we have not heard the last of colonialist interpretations of Gwion Gwion paintings. As long as Australian society struggles to comprehend and acknowledge Aboriginal Native Title rights, archaeology will continue to be manipulated by those seeking to undermine Aboriginal authenticity and legitimacy of connections to land and heritage.

And so it is now with Constitutional recognition of Australia’s First Peoples: once again Aboriginal and Torres Strait Islander people find their history and culture being rewritten by ignorant politicians for ideological reasons.

Senator Leyonhjelm’s comments are clearly an attempt to reopen the Bradshaw/Gwion Gwion debate and, in so doing, cast doubt over the legitimacy of Aboriginal and Torres Strait Islander people as the first inhabitants of Australia.

Sadly, he might just succeed within the context of a 24-hour news cycle and the seeming absence of a long-term memory in the media and society more broadly.


Darren Curnoe is a human evolution specialist and ARC Future Fellow at UNSW Australia.

This article was originally published on The Conversation.
Read the original article.