Thursday 18 July 2013

Our Food Industry & How It’s Killing Us (Part III): Paying the Price



In my first two essays in this series – subtitled ‘Obesity & the Incoherence of Much Current Dietary Advice’ and ‘The Rise of the High Sugar Diet’ – I first took the reader through the scientific arguments of Dr Robert Lustig, Professor of Paediatrics at the University of California in San Francisco, who believes that it is sugar, rather than dietary fats – or simply eating too much – that is the main cause of the high rates of obesity, hypertension, Type 2 diabetes and cardiovascular disease (CVD) currently found in the USA and other parts of the western world. After examining – and largely discounting – the fairly widespread belief that it is one type of sugar in particular – High Fructose Corn Syrup (HFCS) – that is the principal culprit, I then looked at a raft of statistics on per capita sugar consumption in different countries. My conclusion was that, although much of the data on this subject is somewhat less than reliable, from the figures published by the US Department of Agriculture we can say with absolute certainty that in the USA, at least, there was a very significant increase in overall sugar consumption between 1985 and 2000, and that although the figures have fallen back a little over the last ten years – very possibly as a result of the debate over HFCS and the increased uptake of sugar-free versions of leading soft drinks – they are still considerably higher than in the 1960s, thereby adding further weight to Professor Lustig’s thesis.

That I was unable to find equally reliable statistics for either the UK or the EU does, of course, render the evidence somewhat less than conclusive. I cannot say for certain, for instance, that the reason the UK is now officially the second most obese country in the world – after the USA – is that it too is on a rising curve when it comes to sugar consumption. What I can say, however, is that the UK diet is very similar to that in the USA, includes many of the same products from many of the same multinational food manufacturers, and that if Americans are consuming more sugar, I’d be very surprised if we in the UK were not.

The real question, therefore, is not whether the case against sugar has been made, but why – assuming that we are indeed eating so much of it – we have all ended up doing so, especially as at no point over the last thirty years do I remember actually making this choice.

The easy answer, of course, is to blame Coca-Cola. In one of his lectures, Professor Lustig points out how much the size of Coca-Cola bottles has increased over the last thirty years and how much extra sugar we are consuming per year as a result. The question one has to ask, however, is whether the replacement of all such sugar-rich soft drinks with sugar-free alternatives would solve the problem. And judging by the overall consumption figures and the proportion attributable to soft drinks, the answer is almost certainly no. It would certainly help. But in and of itself, it would not put an end to our burgeoning waistlines, which are less the result of individual products – which we could simply choose to avoid – than of our diet as a whole, or, more especially, of the way in which it is now produced.

To properly understand this, however, one has to understand the way in which our food industry has developed in the modern era, not just over the last thirty years, but over the last sixty or seventy – since the Second World War, in fact. 

Crucially, before this historical watershed, the value-added sectors of the industry – particularly processing and manufacture – were very much smaller than they are today. Food manufacturers already existed, of course. In the UK, there are numerous famous brands that were established as long ago as the 19th century. One thinks of Cadbury’s (chocolate) in Birmingham, for instance, or Colman’s (mustard) in Norwich. But nearly all of these manufacturers were concerned with long shelf-life products, such as confectionery and condiments, rather than with what one might call the bread and butter of daily life, which, for the most part, was produced by much smaller enterprises, closer to the customer, and, in many cases, with an original view to preservation.

Butchers, for instance, cured and smoked pork, not just to sell us bacon and ham, but to stop it going off. Dairies turned milk into cheese, not just to give us something different to put in our sandwiches, but to give their raw material a longer shelf life. The same is true of herrings turned into kippers and fruit and vegetables turned into jams and marmalades, pickles and chutneys. True, butchers also made pies and sausages – and other forms of charcuterie – to use up the offal and off-cuts of meat they couldn’t sell in any other form. But even though this may not have been ‘food preservation’ in the strictest sense, it was still about making use of everything they had and avoiding wastage.

It was the Second World War, itself, and the need to supply soldiers in war-zones all over the world, that brought about the first major change: a revolution in canning! In order to store and distribute prepared rations over thousands of miles while keeping them fresh, everything that could be sealed in a tin – from corned beef to poached pears – was, thus establishing a large scale canning industry which, after the war, then gave us baked beans, tomato soup and steamed treacle pudding – though the latter, I seem to remember, never came out very well. With the exception of canned soups and stews, however, most of the foods sold in this form were still elements of meals rather than meals in themselves, and the role they played in our diet was still quite marginal.

When I was growing up in the 1950s and 60s, for instance, at a time when most households could still manage on the income of a single wage-earner, the vast majority of meals were still cooked from scratch, using locally sourced produce, bought from locally owned butchers, bakers, fishmongers and greengrocers. Even in the early 70s, when I went to university, there still wasn’t very much in the way of ready prepared meals in the supermarkets. It’s why every student of my generation learnt to make at least two simple dishes – usually spaghetti bolognaise and some kind of curry – which, going on to form the basis of our culinary repertoire over the years that followed, now more or less define us as being of that age. It wasn’t until the late 70s, by which time inflation had made it more or less essential that every household have at least one and a half wage-earners – with most women therefore having to go out to work, either full-time or part-time, as well as doing most of the household cooking and cleaning – that ‘convenience’ foods, in the form of fully or partially prepared meals, began to take hold.

And it was at this point that the value-added food industry, as we know it today, really came into its own. For while people were still cooking food at home, using fresh ingredients, the industry’s scope for adding value was always strictly limited, being largely based on distribution. Moreover, fresh ingredients have a much shorter shelf life than processed food, leading to far greater wastage. Distribution therefore had to be fast and efficient, with only premium products travelling any significant distance. All this meant that small, local suppliers and retailers could still hold their own. With the increase in demand for convenience foods, however, all that changed. By producing ready prepared meals – at this stage mostly frozen – large manufacturers not only added value to fresh ingredients, they also added longevity. This, in turn, allowed them to lengthen the distribution chain, concentrating manufacture in major industrial centres, thereby achieving greater economies of scale.

Manufactured ready-meals also facilitated more extensive branding. Most long shelf-life items, such as biscuits and confectionery, may already have had well-established brands; but it is very hard to brand a chicken. Not so coq-au-vin for two in its own little aluminium tray, meaning less washing-up.

More extensive branding also allowed for more extensive advertising and far more intensive product development. From celebrity endorsed brands of ‘cook-in’ sauce and salad dressing, to new types of breakfast cereal and yoghurt, throughout the 80s and 90s it seemed like hardly a week went by without something new arriving on our supermarket shelves and television screens. And as sales soared, more and more money was invested in the industry.

From a patchwork of local artisan producers and retailers, our value-added food industry became big business, and has now actually overtaken agriculture as the largest industry in the UK. In 2012, for instance, the total value of domestic and imported agricultural produce, as shown in Figure 1, was £78.9 billion. In contrast, the total value of the value-added food industry – processing, distribution and retail combined – was £86.9 billion.

Figure 1: UK Value-added Supply Chain
(Source: Food Statistics Pocketbook 2012, DEFRA)

It is one of the great business success stories of our time. But it has also produced a number of far less desirable consequences.

The first of these is that most of the independent butchers, bakers, fishmongers and greengrocers, on which we once relied, have now disappeared, their prices undercut, their business model obsolete. Far worse, the restructuring of the industry has, itself, led to greater and greater consolidation, thereby reducing the number of participants. In the long shelf-life sector, for instance, most of the world’s most recognisable brands are now owned by just five or six multinational conglomerates, including Kraft, Nestlé, Coca-Cola and PepsiCo. With respect to retail, in the UK we now buy 89% of all our groceries from just five large supermarket chains, with the largest of these, Tesco, having a 30% market share.

It is the very success of these mega-corporations, however, that has now exposed a contradiction at the very heart of the value-adding principle which made this success possible. For the purpose of adding value to basic ingredients, of course, is to be able to charge more for the resulting products and hence make greater profits. According to this principle, therefore, you shouldn’t sell a man tomatoes to make a pasta sauce if you can actually sell him the pasta sauce. Similarly, you shouldn’t sell him the pasta sauce to make a lasagne if you can sell him the lasagne. This only works, however, if you are able to make a lasagne at a price he can afford. If not, he’ll go back to buying the tomatoes and making it himself.

Of course, there is some latitude in this. Given the convenience of a readymade lasagne, the customer may be prepared to pay a little bit more than it would cost him to make it from scratch. But ideally, it would be best if you could actually make the manufactured lasagne cheaper than homemade, thus not only providing the customer with convenience, but saving him money. The problem, of course, is that by doing all the work for him, the value-added supply chain adds costs at every stage; and the only way these costs can be offset is by reducing the cost of the basic ingredients.

To determine by how much the manufacturer has to cut his ingredient costs to meet this requirement, I conducted a little experiment. At my local supermarket, the cheapest readymade lasagne I could find was £1.99 for a single portion. I therefore set about making a lasagne from scratch for £2.00 per head, based on the ingredients shown in Figure 2.

Figure 2: Cost Breakdown for a Homemade Lasagne for Six

That I could only produce a lasagne at £2.00 per head if I made enough for six is, of course, a bit of a problem. For one of the great advantages of a ready meal of any kind is the convenience of the ‘single serving’ portion. This is therefore something to which I shall have to return later.

First, however, I want to continue with the experiment, for which the next step is to determine the cost at the farm gate of both my ingredients and the equivalent ingredients purchased by the manufacturer. For I, of course, am buying mine at a supermarket, with the cost of sale and distribution already added, whereas the manufacturer is buying his wholesale. What we need to work out is the value of the ingredients in each case when they were sold by the farmer.

We do this by first calculating the average cost breakdown of all foods sold in a supermarket, using the figures for the value-added supply chain shown in Figure 1 and ignoring, of course, that part of the supply which passes through the catering industry. The results are shown in Figure 3.

Figure 3: Average Cost Breakdown of Food Sold in Supermarkets

This is the cost breakdown for all food sold in a supermarket, regardless of the amount of processing the food has undergone. 30.27% is therefore the average share of the checkout price accounted for by processing, with some items receiving more and some less.

In fact, we can divide ‘processing’ into three broad categories: high, medium and low. Examples of low-level processing include the butchering and mincing of beef, the pasteurising and bottling of milk, and the grading and packaging of tomatoes. Medium-level processing includes the making of cheese and the manufacture of tomato puree, while high-level processing involves taking intermediate products – minced beef, cheese, the said tomato puree – and turning them into something like a readymade lasagne, which probably represents a fairly average level of processing within this category.

The making of my own homemade lasagne, in contrast, involves some pre-processing – in the mincing of the beef, and the manufacture of cheese – but given that I am also using fresh vegetables and herbs, which have received little or no pre-processing, and am making my own pasta, the overall level is probably just above average for the lower band.

Having thus established that the readymade lasagne is somewhere in the highest category of food processing and my own homemade lasagne is somewhere in the lowest category, we are now, therefore, in a position to calculate the relative cost of the ingredients at the farm gate for both the readymade lasagne and my homemade version.

We can do this because, in addition to knowing what the mean processing cost of all food is – 30.27% – we also know both the minimum and the maximum. The minimum, of course, is 0%: no processing at all, as in the case of loose, unwrapped carrots and onions. The maximum we can work out if we first assume that the cost of sale and distribution is more or less constant for all items. Taking sale and distribution out of the equation leaves 69.16%, which is the combined cost of processing and raw ingredients. If we then assume that there are some products for which the cost of ingredients is nothing – as in the case of some types of bottled water, for instance – it follows that it is possible for the value-added component to account for the entire 69.16%, which therefore gives us our maximum.

Assuming, therefore, that the processing costs of all foods sold in a supermarket are situated somewhere in this range, between 0% and 69.16%, and knowing that the mean cost is 30.27%, we can generate the graph shown in Figure 4.

Figure 4: Relative Ingredients Costs for Homemade & Readymade Lasagne

Importantly, this curve is not a representation of an actual distribution, as it would be if it were based on empirical data. To produce that, however, I would need to know the input costs and the output pricing for every product produced by every manufacturer, or at least a large enough sample to be sure that it was representative: a herculean task, to say the least. Figure 4 is rather an estimate or approximation. Assuming, however, that the actual data – if I had it – would follow a normal distribution, I’d be very surprised if what I have produced here were very far off the mark.
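For anyone who wants to reproduce something like this curve, here is a minimal sketch, assuming – as suggested above – a roughly normal distribution truncated to the range 0–69.16% with a mean of 30.27%. The standard deviation is my own guess rather than anything derived from the DEFRA figures.

```python
import numpy as np
from scipy.stats import truncnorm

# Processing cost as a percentage of the checkout price:
# range and mean are taken from the analysis above; the spread (SD) is assumed.
LOW, HIGH = 0.0, 69.16
MEAN, SD = 30.27, 15.0

# truncnorm expects the bounds expressed in standard deviations around the mean
a, b = (LOW - MEAN) / SD, (HIGH - MEAN) / SD
dist = truncnorm(a, b, loc=MEAN, scale=SD)

xs = np.linspace(LOW, HIGH, 200)
pdf = dist.pdf(xs)  # a curve of the same general shape as Figure 4

# Truncation nudges the mean slightly above 30.27; for an illustrative
# curve of this kind the difference is negligible.
print(round(dist.mean(), 2))
```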

On the horizontal axis we have the percentage processing costs for all food items, ranging from 0 to 69.16%. On the vertical axis, we have the percentage of products which have each of these values. At the lowest point in the range, for instance, we can see that around 0.5% of all products have no processing costs at all. Given the piles of loose fruit and vegetables which greet us whenever we enter a supermarket, this may seem somewhat low. But what you have to remember is that the percentage processing cost attributed to each item is a percentage of its price at the checkout; and although supermarkets may sell a large volume of these items, they actually only represent a small fraction of total sales.

The question we now have to answer, therefore, is where the ingredients for my homemade lasagne and its readymade equivalent sit on this scale.

In the case of the former, I have gone for halfway between the minimum and the mean, at around 15%. In accordance with my earlier analysis, this is just above the middle of the lower range, which runs from 0 to 23%, and takes into account the inclusion of ingredients such as the butter and the two types of cheese, which fall into the medium band. In the case of the readymade lasagne, I have gone for halfway between the mean and the maximum, which is approximately 50%. This is actually at the lower end of the higher range, which runs from 46% to 69.16%, and consequently represents a very conservative estimate of how much processing goes into ready-meals of this type. Even so, the cost differential between the ingredients that go into a readymade and a homemade lasagne, as shown in Figure 5, is very substantial.

Figure 5: Relative Cost Breakdown for Readymade and Homemade Lasagne

In my homemade lasagne, as you can see, more than half of the cost is going to the farmer for the raw ingredients, whereas in the readymade version, the farmer is receiving less than a quarter. More importantly, given that I have taken a fairly conservative view of the cost of processing for ready-meals, a similar differential would hold for just about every highly processed product you might buy.
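For anyone who wants to check the arithmetic behind these figures, here is a minimal sketch of the calculation, on the assumptions used above: sale and distribution take a constant 30.84% of the checkout price (that is, everything outside the combined 69.16% for processing and ingredients), and the farmer receives whatever is left once processing has taken its estimated 15% or 50% share.

```python
# Shares of the checkout price, per the analysis above: sale and
# distribution are held constant at 30.84%, so the farm-gate share is
# whatever remains after processing.
SALE_AND_DISTRIBUTION = 30.84  # percent of checkout price

def farm_gate_share(processing_pct: float) -> float:
    """Farmer's share of the checkout price, in percent."""
    return 100.0 - SALE_AND_DISTRIBUTION - processing_pct

homemade = farm_gate_share(15.0)   # estimated processing share, homemade
readymade = farm_gate_share(50.0)  # estimated processing share, readymade

print(f"Homemade:  {homemade:.2f}%  (~£{2.00 * homemade / 100:.2f} of a £2.00 portion)")
print(f"Readymade: {readymade:.2f}%  (~£{1.99 * readymade / 100:.2f} of a £1.99 portion)")
```

On these assumptions, roughly £1.08 of my £2.00 homemade portion goes back to the farmer, against roughly 38p of the £1.99 readymade one.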

So how do the supermarkets and the manufacturers do it? Well, part of the answer, of course, is that they buy so much that they are able to force the price down at the farm gate. Indeed, the pressure they put on their agricultural suppliers is, in itself, one of the less desirable consequences of their omnipotence, forcing farmers to adopt practices which are both inhumane and environmentally damaging, while still driving many of them out of business.

In the UK, for instance, dairy farmers in particular are almost an endangered species. Due to an unfavourable exchange rate between the pound and the euro, supermarket chains are able to buy milk from EU farmers at a price which is less than the cost of domestic production. As a result, the UK dairy industry is contracting, with many dairy farmers being forced to diversify, often by adding value to their own raw ingredient by becoming artisan cheese makers. Consequently, there are now more than a thousand specialist cheeses in the UK, many of which have won international awards, but none of which appear on any supermarket shelves. For in order to do so, these new artisan cheese makers would have to both increase their production – by at least an order of magnitude – and reduce their prices, both of which would affect their quality, thereby effectively defeating the purpose of the exercise.

However, it is not just by driving down prices at the farm gate that the food industry solves its value-added cost conundrum. After all, the ingredients for my own homemade lasagne were also bought at a supermarket, and the price I paid therefore also benefitted from this same price-squeezing. In order to maintain the cost differential between the ingredients I purchased and the ingredients that go into an industrially produced lasagne, the food industry has to take even tougher measures, and it does this by buying the lowest-quality ingredients it can get away with.

In the UK recently, there was a major scandal over the revelation that there was horsemeat in some industrially produced ready-meals, including lasagnes. For days, our newspapers and television screens were filled with nothing else, as discovery after discovery meant that more and more products had to be removed from the supermarket shelves. The fact is, however, that horsemeat is one of the least offensive ingredients in some of the pre-prepared foods we eat. Most of the meat in most low-cost lasagnes, for instance, is MRM (Mechanically Recovered Meat), most of which is produced from what would otherwise be regarded as abattoir waste.

Indeed, if one looks at the list of ingredients for my homemade lasagne in Figure 2, it is fairly clear where the industrial manufacturer has to save money. For the two most expensive items are, of course, the minced beef and the cheese: the protein and the dairy fat. These are the ingredients that all manufacturers are therefore forced to cut back on, making it very unlikely, as a consequence, that the cheese sauce in a manufactured lasagne is actually made from real cheese, a soya-based substitute with cheese flavouring now being the preferred option.

Even the tomato sauce is likely to have been made from sugar, vinegar, emulsifier and tomato flavouring. Indeed, the only two ingredients in any of these products one can really trust to be what they purport to be are the sugar and the salt: the two low cost ingredients that are absolutely essential in making any industrially produced food palatable. And it is this, more than anything else, I believe, that explains the amount of sugar we are now all eating. 

From supposedly healthy cereals and yoghurts, to readymade chicken tikka masala and naan bread, it’s in almost every processed food we buy; and the tragedy is that the cheaper the product the more sugar it tends to contain. As a result, we are now very probably the first society in history in which obesity has become a disease of the poor.

The really sad fact, however, is that we like it: all this sugar-rich food. It is not quite that we are addicted to it, but we have certainly become accustomed to it. We’re like the person who always puts two sugars in his tea or coffee and grimaces in disgust if he accidentally takes a sip from an unsweetened cup. He doesn’t realise that if he drank it unsweetened for a week or two, he’d actually come to like it that way, and would then find the sweetened variety far too rich and sickly for his taste.

Not that, as a society, we’re likely to make this discovery any time soon, especially as we train our children to want and prefer sweet foods almost from birth.

In my first essay on this subject, I pointed out that we now have six-month-old babies suffering from obesity. But I didn’t explain why this was. The answer, however, is fairly simple. It’s baby formula.
Natural milk contains its own sugar: lactose. In baby formula, however, the manufacturers take this out and replace it with either sucrose or HFCS. Lots of it. Which makes it very yummy. As I also pointed out in Part I, however, the fructose in the sucrose or HFCS, as well as being turned into fat in the baby’s liver, also produces a substance which is a leptin inhibitor – leptin being the hormone which tells our brains when we have eaten enough. This means that when the baby has his feed, he will probably drink the whole bottle, and will enjoy it very much, but will still not feel full. Half an hour later, as a result, he will start wailing for more, showing all the signs of being hungry, to which his mother – not knowing what else to do, and very probably at the end of her tether – will probably respond by preparing another bottle. In no time at all, therefore, we have an obese baby, who will probably grow up to be an obese child and then an obese adult, turning to sweet comfort food as his only solace in a world that has played such a mean trick on him.

In the UK, it is now estimated that the National Health Service spends £5 billion per annum treating obesity and the diseases that can be directly attributed to it. It is further estimated that over the next twenty years this figure will more than double, making it extremely unlikely that, with an ageing population and the escalating cost of new drugs, the NHS will be able to go on indefinitely treating patients free at the point of use. This is not, therefore, just a health issue; it is also a financial issue.
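To put that in perspective – a rough calculation of my own rather than part of the official estimate – doubling over twenty years corresponds to compound growth of around 3.5% a year:

\[
£5\,\text{bn} \times (1 + r)^{20} = £10\,\text{bn}
\quad\Longrightarrow\quad
r = 2^{1/20} - 1 \approx 3.5\%\ \text{per annum}
\]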

Part of the problem, of course, is that the food industry, itself, can do nothing about it. It has followed a certain business logic, a game-plan that made perfect sense in business terms, and probably still does to those who are unable to look beyond this framework. The idea that they might now go back to selling people healthy raw ingredients for them to cook at home, effectively dismantling the value-added supply chain they have built up over the last fifty years and reducing their business to a quarter of its current size, is therefore beyond fanciful. When faced with criticism, the industry’s strategy, as already revealed in the USA, will therefore be to deny that sugar is a problem, point out that all products are clearly labelled, and argue that the consumer has a choice as to what they eat. And if that doesn’t work, they will then bring out the big guns, funding scientific studies to produce evidence supportive of their position and lobbying governments to ensure that no legislation is passed to make the slow poisoning of people with fructose illegal. Just like the tobacco industry over the last fifty years, in fact, we can expect the food industry to use every tactic available to it to maintain its lucrative value-added business. After all, what’s the alternative?

The good news for the industry is that even if more people decided to heed Professor Lustig’s warning, and wanted to start cooking again using raw ingredients, there are far fewer people now than thirty years ago who could actually do it. For despite all the celebrity chefs on our television screens and the hundreds of endlessly recycled cook-books sold each Christmas, the fact is that, for the most part, we have become a nation of only occasional cooks, with Christmas becoming the one exceptional occasion. Yes, there are still some very good home cooks out there. But most of them are either middle class enthusiasts, who like to think of themselves as living the ‘good life’ with Hugh Fearnley-Whittingstall, or they’re my age. Very few of them are the harassed and overworked parents of children whose tastes and ideas about what they want for supper are largely determined by TV advertising. 

More importantly still, in many homes today, the number of times per year the family sits down to a meal together can be counted on the fingers of one hand. Different members of the family want different things at different times. After-school activities and teenagers wanting to go out to meet their friends mean that meal times are often staggered, and the take-away and the ready-meal are the ideal solution. My homemade lasagne, which I had to make for six in order to make it economical, just no longer suits the way most families now live. For in changing its business model since the Second World War, our food industry not only changed itself, it changed us. Families today are not like the families I knew when I was growing up. And for most people, the idea of going back to live that way is as unimaginable as the food industry actually dismantling itself.

‘But what about the government?’ I hear you say. ‘If the effect on our health of all this sugary food is as serious as you say it is, shouldn’t they be doing something about it?’ What, and alienate an industry that contributes so much to their campaign funds! And where are the votes in it? We like our diet just the way it is. If we didn’t, we wouldn’t eat it. To many people, therefore, any government which tried to change it would be seen as just one more example of the ‘nanny state’. And after long campaigns against smoking, alcohol and dietary fats, to most politicians a campaign against sugar would be seen as a campaign too far.

Then there is the strategic issue. Not that I would ever suspect politicians of thinking strategically. But it’s always a handy excuse for inaction. And the fact is that the world’s population is currently 7.16 billion, and is rising by over 100,000 every day. By 2050, therefore, it is estimated that it will have reached 10.9 billion, which is a lot of people to feed. Too many if you want to feed them fresh meat, fish and vegetables. Already in the UK, as a result, there are people building farms to breed insects, from which animal protein can be extracted, which can then be processed to (very probably) taste like chicken. 

High value-added processing, based on low-value ingredients, is our future. The only good news is that with life expectancy in some countries now dropping as a result of our highly processed sugar-rich diet, we won’t have to endure it for very long.

Saturday 6 July 2013

Our Food Industry & How It’s Killing Us (Part II): The Rise of the High Sugar Diet



In the first part of this essay, subtitled ‘Obesity & the Incoherence of Much Current Dietary Advice’, I cited a lecture by Dr Robert Lustig, Professor of Paediatrics at the University of California in San Francisco, in which he argues that, despite their widespread currency, the two most prevalent theories for explaining the increase in obesity, hypertension, Type 2 diabetes and cardiovascular disease in the second half of the 20th century are both fundamentally wrong.

The first of these theories – which gained broad acceptance in the early 1980s – says that one of the most important factors in the aetiology of all of the above diseases is the level of fat in our diets: a proposition that is now so well established in our collective belief system that it is not only generally accepted as a fact, but is the basis upon which much current dietary advice continues to be given.
According to Professor Lustig, however, not only was the international study upon which this theory was initially grounded seriously flawed – failing to take into account all the possible contributory factors – but its continued acceptance also defies much of the evidence of the last thirty years. For while our intake of dietary fats has fallen significantly during this period – by around 25% – the incidence of each of the diseases with which these fats were believed to be causally related has continued to rise, reaching near-epidemic proportions.

Over the last ten to fifteen years, as a consequence, a second theory – one based less on scientific evidence than on apparent common sense – has steadily gained greater currency. Instead of attempting to identify one particular substance or foodstuff as the principal culprit, it says that obesity – and all its attendant diseases – is less the result of what we eat than of simply how much. Based on the first law of thermodynamics, which tells us that, in a closed system, energy is never lost, it states that if we consume more in calories than we burn off in exercise and the sheer business of staying alive, then the excess has to go somewhere. And the obvious answer as to where this might be is in our adipose tissue, in the form of fat.
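Reduced to a formula, this calories-in/calories-out model amounts to nothing more than a simple energy balance, in which every calorie on the right-hand side is treated as interchangeable:

\[
\Delta E_{\text{stored}} \;=\; E_{\text{consumed}} \;-\; E_{\text{expended}}
\]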

What this physics-based model fails to take into account, however, is that our bodies not only have different ways of dealing with excess dietary inputs – some of them hardly entering the closed system of our metabolism at all – they also have different ways of metabolising the different substances that do get that far. 

In Part I of this essay, I illustrated this by taking the reader through the biochemistry involved in the metabolism of two common sugars: glucose and fructose. I shall not rehearse this excursion into the abstruse and wonderful world of human metabolism again here, not least because it would involve reproducing most of Part I all over again. The important point, however, is that it is quite possible for us to ingest identical amounts of two very similar substances, and for our bodies to treat them in completely different ways. In the case of glucose, for instance, our bodies either use it to produce instantly available energy – in the form of ATP (adenosine triphosphate) – or turn it into the short-term energy store, glycogen. In contrast, if we ingest any significant amount of fructose, our bodies turn nearly all of it into fat.

And it is this that Professor Lustig believes to be the real problem: not the dietary fats which our metabolism largely breaks down into other (mostly) useful substances; but the non-fats which our bodies turn into fats, to be stored as such in adipose, skeletal muscle and cardiac tissue, where, if left unused and allowed to build up over time, they can do considerable harm. And chief among these harmful, ‘lipogenic’ non-fats, according to Professor Lustig, is indeed fructose. 

More importantly, I have yet to come across a single biochemist who disagrees with the basic science behind this contention. I can therefore state with a fair degree of confidence that, even if there are still some grains of truth in either of the other two theories used to explain the increase in obesity and heart disease over the last thirty years, if you want to avoid putting on fat, then the one thing you should certainly do is cut down on your consumption of fructose.

It is at this point, however, that we run into our first problem. For even if more people were to become aware of just how lipogenic – or disposed to fat formation – fructose truly is, it is unlikely that many of us would be able to tell you just how much of the stuff we are actually eating. This is because very little of our daily intake comes in a form that is readily identifiable as such. For most people, for instance, less than 5% of their fructose consumption comes in the form of fresh fruit – from which, in its broadest sense, all fructose is ultimately derived. A far greater proportion – the vast majority, in fact – is added to our food in the form of processed sugar.

Even in this regard, however, it is not always obvious how much we are consuming. For not all of the sugar we ingest is conspicuously spooned over strawberries or stirred into our tea or coffee. Most of it, in fact, is almost entirely hidden, not just in the cakes and biscuits we casually enjoy as mid-morning snacks, but in the ready-meals and fast-food takeaways – along with their accompanying soft drinks – that have become such a major part of our diet over the last thirty years.

To complicate matters further, different types of sugar contain different amounts of fructose: a fact which has led to the fairly widespread belief that there is one type of sugar – used exclusively in the industrial manufacture of food products – that is worse than all the others. The supposed culprit is High Fructose Corn Syrup, or HFCS, which first made its appearance in the mid-1970s, after President Nixon asked his then Secretary of Agriculture, Earl Butz, to find a way of stabilising food prices so as to prevent them from becoming a political issue. Butz did this by subsidising the large-scale production of HFCS made from maize grown in America’s Mid-West. Being 40% cheaper than sucrose – which is made from either sugarcane or sugar beet – it very quickly caught on with the food industry, especially with manufacturers of soft drinks such as Coca-Cola and Pepsi, which has further led to its demonisation among certain campaigners, the view being that if Coca-Cola is using it, then it’s got to be evil.

This, however, is a very distorted view of what is actually going on here. For while it may not be entirely coincidental that the introduction of HFCS occurred more or less at the same time as the start of the period of rapid growth in obesity and CVD, to assume that this correlation is either simple or direct would be to make the same kind of mistake researchers in the 1970s made with respect to dietary fats. They saw a correlation and immediately assumed a cause.

One can see this more clearly if one steps back from the US context – where most of this debate is taking place – and takes a more global perspective. For despite what many campaigners seem to think, the production and consumption of HFCS is still very much a US phenomenon. In 2010, for instance, HFCS accounted for around 38% of the sugar – or ‘sweetener’ – consumed by the average American. In Europe, in contrast, it accounted for less than 5%. Yet Europe too – and the UK in particular – is experiencing a similar trend with respect to obesity and CVD. It may not be as pronounced as in the USA, where it started earlier, but it is following a very similar path.

Even more significantly, HFCS and sucrose are very similar in terms of their biochemistry. As can be seen in Figure 1, sucrose comprises a bonded pair of fructose and glucose molecules, which breaks apart almost immediately on digestion, producing one fructose molecule and one glucose molecule. One can therefore say that sucrose is more or less 50% fructose and 50% glucose. HFCS, in comparison, is roughly 55% fructose and 42% glucose, with the remaining 3% made up of other sugars. The difference in the amount of fructose in each of these forms of sweetener may not be entirely trivial, but it is not enough, therefore, to blame one and not the other. In fact, singling out HFCS for attack, as many people seem to want to do, merely allows the food industry to counter by arguing that it is no more harmful than sucrose, which is more or less correct.

 Figure 1: Molecular Structure of Sucrose
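To put the difference between the two sweeteners into perspective, here is a back-of-the-envelope comparison using the approximate percentages quoted above – an illustrative sketch only:

```python
# Fructose content per 100 g of sweetener, using the approximate
# percentages quoted above (an illustrative comparison only).
FRUCTOSE_FRACTION = {
    "sucrose": 0.50,   # splits roughly 50/50 into fructose and glucose
    "HFCS-55": 0.55,   # roughly 55% fructose, 42% glucose
}

for name, fraction in FRUCTOSE_FRACTION.items():
    print(f"{name}: {fraction * 100:.0f} g fructose per 100 g")

extra = FRUCTOSE_FRACTION["HFCS-55"] / FRUCTOSE_FRACTION["sucrose"] - 1
print(f"HFCS-55 delivers about {extra:.0%} more fructose, gram for gram")
```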

The real problem, therefore, is not the type of processed sugar we are consuming, but the total amount. Here, however, we have another problem. For obtaining reliable data on sugar consumption is not easy.

The first difficulty one encounters is in determining what counts as ‘sugar’ in the various datasets that are out there, and what is meant by ‘consumption’. A recent report by the Indian Council of Agricultural Research (ICAR), for instance, states that Brazil has the highest per capita consumption of sugar of any country in the world, with each Brazilian consuming 58 kg (128lbs) of the stuff per year. The USA, in contrast, comes in seventh place, with each American only consuming half this amount, 29 kg (64lbs). It is only when one looks at the data in more detail that one starts to realise that this claim isn’t quite what it seems.

The first clue comes in the attribution of authorship on the title page. For while the report may have been published by ICAR, it was actually written by the Sugarcane Breeding Institute in Coimbatore. It will not, therefore, come as much of a surprise to discover that, under the heading ‘sugar’, the report only actually includes sucrose produced from sugarcane, which is a major agricultural crop in both India and Brazil. In Brazil, however, the sugar produced is not only sold in granulated form and used to sweeten manufactured foods and drinks; it is also used to produce Cachaça, one of Brazil’s most popular alcoholic beverages. This means that a large part of Brazil’s so-called ‘sugar consumption’ is not ingested as sugar at all – but as ethanol – and while it may make sense, from an economic and agricultural perspective, to include it as one of the more important raw materials consumed by the Brazilian economy, to include it as part of the country’s official per capita sugar intake gives one a totally distorted impression.

Of course, my selection of this rather extreme example to illustrate what is a fairly general point means that it is not entirely typical of most of the data on sugar consumption one finds on the internet. Statistically, it is a bit of an outlier. In many ways, however, it is actually far less misleading than quite a few reports I could have cited. For the vast majority of websites providing statistical information of this kind not only fail to reference their data’s provenance or define what it includes, in many cases they are produced on behalf of clearly vested interests, of which more trusting readers need to be aware.

There are, of course, plenty of scientific studies available, which, given peer-review, one can assume to be without intentional bias. But most of the ones at which I’ve so far looked are primarily concerned with correlating sugar consumption with the incidence of various specific diseases, and tend to be based on fairly small sample populations taken from a single geographical region, comprising a single sex in a fairly narrow age-range. In terms of helping us quantify per capita sugar consumption, they are therefore of little use. If we are looking for reliable data, as a consequence, all we really have to go on are official national statistics. And in the UK, even these are not very helpful.

The ONS, for instance – the Office for National Statistics – has absolutely nothing on the subject. DEFRA – the Department for Environment, Food and Rural Affairs – has figures for UK sugar production; but nothing on consumption. And while, for the purposes of ‘Health Education’, the Department of Health has published one or two papers on the dangers of high sugar diets, it appears not to have commissioned any real scientific work on the subject since 1999.

Part of the problem is that, in the UK, sugar consumption has not yet become a political issue. This, however, is certainly not something you can say about the USA, where the problem, if anything, is one of over-politicisation. Last year, for instance, it was announced that per capita sugar consumption in the USA had exceeded 100lbs (45kg) for the first time ever – though, again, what was included under the heading ‘sugar’ is not absolutely clear. In October, however, the US Department of Agriculture, ever-mindful of the need to keep Midwest farming interests onside, announced that it was changing the way in which sugar intake would be calculated in future and duly revised the 2012 figure down to 76.7lbs (34.8kg).

Figure 2: US Per Capita Sweetener Consumption 1965-2010
(Source: US Department of Agriculture)

The irony is that, if one looks at the official figures which the Department of Agriculture published in September 2010 – before this change in methodology took place – they actually reveal a marked decline in per capita sugar consumption over the last decade, very possibly as a result of the growing campaign against HFCS which began in the early 2000s. Changing the method of calculation is therefore likely to obscure this.

What Figure 2 most strikingly reveals, however, is not only how rapidly HFCS displaced sucrose (here designated as ‘Refined Sugar’) during the 1970s and early 80s, but how its per capita consumption continued to grow even after the consumption of sucrose more or less levelled off, thereby leading to a marked increase in total sugar intake during a period in which the USA coincidentally experienced its most significant increase in the incidence of obesity and CVD.

To attribute this increase solely to the extra 5% fructose in HFCS, however, simply beggars belief, as does the argument that following the US consumer’s rejection of HFCS – and the subsequent decline in overall sugar consumption – the problem has now been resolved. For although some American consumers – having watched Professor Lustig’s lecture on YouTube perhaps – may be voting with their wallets and refusing to buy products containing HFCS, this does not mean that they have fundamentally changed their diet, or that the American food industry is now gearing itself up to produce food with a significantly lower sugar content. Indeed, it is questionable whether this latter is even possible. For having already reduced the amount of fat in the foods they are producing – largely by replacing it with sugar – the question now facing all food manufacturers is with what – if they were forced to do so – they would replace the sugar.

Not that they are in any imminent danger of being forced to make this decision, of course. For not only is the US Food & Drug Administration (FDA) still a long way from accepting that HFCS – or any other form of sugar – is harmful, but politicians and industry-insiders alike know full well that were manufacturers required to reduce the sugar content of their products, not only would many of them go out of business, but the effect on the overall US economy would be devastating.

This is because, as Earl Butz recognised, sugar is the key to cheap, mass-produced food. Without it, many manufactured foods would either be too lacking in flavour to be saleable, or too expensive for them to actually have a mass market.

To understand this, however, one needs to understand the economics of our food industry in the way that it is currently structured. And it is this that will be the subject of my third and last essay in this series, subtitled ‘Paying the Price.’

In it I shall not only describe how our food industry got itself – and us – into this extremely dire and possibly intractable predicament, I shall also attempt to outline the even more dire consequences that may follow if no solution can be found.