Thursday 18 July 2013

Our Food Industry & How It’s Killing Us (Part III): Paying the Price



In my first two essays in this series – subtitled ‘Obesity & the Incoherence of Much Current Dietary Advice’ and ‘The Rise of the High Sugar Diet’ – I first took the reader through the scientific arguments of Dr Robert Lustig, Professor of Paediatrics at the University of California in San Francisco, who believes that it is sugar, rather than dietary fats – or simply eating too much – that is the main cause of the high rates of obesity, hypertension, Type 2 diabetes and cardiovascular disease (CVD) currently found in the USA and other parts of the western world. After examining – and largely discounting – the fairly widespread belief that it is one type of sugar in particular – High Fructose Corn Syrup (HFCS) – that is the principal culprit, I then looked at a raft of statistics on per capita sugar consumption in different countries. My conclusion was that, although much of the data on this subject is somewhat less than reliable, from figures published by the US Department of Agriculture we can say with absolute certainty that in the USA, at least, there was a very significant increase in overall sugar consumption between 1985 and 2000; and that although the figures have fallen back a little over the last ten years – very possibly as a result of the debate over HFCS and the increased uptake of sugar-free versions of leading soft drinks – they are still considerably higher than in the 1960s, thereby adding further weight to Professor Lustig’s thesis.

That I was unable to find equally reliable statistics for either the UK or the EU does, of course, render the evidence somewhat less than conclusive. I cannot say for certain, for instance, that the reason the UK is now officially the second most obese country in the world – after the USA – is that it too is on a rising curve when it comes to sugar consumption. What I can say, however, is that the UK diet is very similar to that in the USA, includes many of the same products from many of the same multinational food manufacturers, and that if Americans are consuming more sugar, I’d be very surprised if we in the UK were not.

The real question, therefore, is not whether the case against sugar has been made, but why – assuming that we are – we’re all eating so much of it, especially as at no point over the last thirty years do I remember actually making this choice.

The easy answer, of course, is to blame Coca-Cola. In one of his lectures, Professor Lustig points out how much the size of Coca-Cola bottles has increased over the last thirty years and how much extra sugar we are consuming per year as a result. The question one has to ask, however, is whether the replacement of all such sugar-rich soft drinks with sugar-free alternatives would solve the problem. And judging by the overall consumption figures and the proportion attributable to soft drinks, the answer is almost certainly no. It would certainly help. But in and of itself, it would not put an end to our burgeoning waistlines, which are less the result of individual products – which we could simply choose to avoid – than of our diet as a whole, or, more especially, of the way in which it is now produced.

To properly understand this, however, one has to understand the way in which our food industry has developed in the modern era, not just over the last thirty years, but over the last sixty or seventy – since the Second World War, in fact. 

Crucially, before this historical watershed, the value-added sectors of the industry – particularly processing and manufacture – were very much smaller than they are today. Food manufacturers already existed, of course. In the UK, there are numerous famous brands that were established as long ago as the 19th century. One thinks of Cadbury’s (chocolate) in Birmingham, for instance, or Colman’s (mustard) in Norwich. But nearly all of these manufacturers were concerned with long shelf-life products, such as confectionery and condiments, rather than with what one might call the bread and butter of daily life, which, for the most part, was produced by much smaller enterprises, closer to the customer, and, in many cases, with an original view to preservation.

Butchers, for instance, cured and smoked pork, not just to sell us bacon and ham, but to stop it going off. Dairies turned milk into cheese, not just to give us something different to put in our sandwiches, but to give their raw material a longer shelf life. The same is true of herrings turned into kippers and fruit and vegetables turned into jams and marmalades, pickles and chutneys. True, butchers also made pies and sausages – and other forms of charcuterie – to use up the offal and off-cuts of meat they couldn’t sell in any other form. But even though this may not have been ‘food preservation’ in the strictest sense, it was still about making use of everything they had and avoiding wastage.

It was the Second World War, itself, and the need to supply soldiers in war-zones all over the world, that brought about the first major change: a revolution in canning! In order to store and distribute prepared rations over thousands of miles while keeping them fresh, everything that could be sealed in a tin – from corned beef to poached pears – was, thus establishing a large-scale canning industry which, after the war, then gave us baked beans, tomato soup and steamed treacle pudding – though the latter, I seem to remember, never came out very well. With the exception of canned soups and stews, however, most of the foods sold in this form were still elements of meals rather than meals in themselves, and the role they played in our diet was still quite marginal.

When I was growing up in the 1950s and 60s, for instance, at a time when most households could still manage on the income of a single wage-earner, the vast majority of meals were still cooked from scratch, using locally sourced produce, bought from locally owned butchers, bakers, fishmongers and greengrocers. Even in the early 70s, when I went to university, there still wasn’t very much in the way of ready prepared meals in the supermarkets. It’s why every student of my generation learnt to make at least two simple dishes – usually spaghetti bolognaise and some kind of curry – which, going on to form the basis of our culinary repertoire over the years that followed, now more or less define us as being of that age. It wasn’t until the late 70s, by which time inflation had made it more or less essential that every household have at least one and a half wage-earners – with most women therefore having to go out to work, either full-time or part-time, as well as doing most of the household cooking and cleaning – that ‘convenience’ foods, in the form of fully or partially prepared meals, began to take hold.

And it was at this point that the value-added food industry, as we know it today, really came into its own. For while people were still cooking food at home, using fresh ingredients, its scope for adding value was always strictly limited, being largely based on distribution. Moreover, fresh ingredients have a much shorter shelf life than processed food, leading to far greater wastage. Distribution therefore had to be fast and efficient, with only premium products travelling any significant distance. All this meant that small, local suppliers and retailers could still hold their own. With the increase in demand for convenience foods, however, all that changed. By producing ready prepared meals – at this stage mostly frozen – large manufacturers not only added value to fresh ingredients, they also added longevity. This, in turn, allowed them to lengthen the distribution chain, concentrating manufacture in major industrial centres, thereby achieving greater economies of scale.

Manufactured ready-meals also facilitated more extensive branding. Most long shelf-life items, such as biscuits and confectionery, may already have had well-established brands; but it is very hard to brand a chicken. Not so coq-au-vin for two in its own little aluminium tray, meaning less washing-up.

More extensive branding also allowed for more extensive advertising and far more intensive product development. From celebrity-endorsed brands of ‘cook-in’ sauce and salad dressing, to new types of breakfast cereal and yoghurt, throughout the 80s and 90s it seemed like hardly a week went by without something new arriving on our supermarket shelves and television screens. And as sales soared, more and more money was invested in the industry.

From a patchwork of local artisan producers and retailers, our value-added food industry became big business, and has now actually overtaken agriculture as the largest industry in the UK. In 2012, for instance, the total value of domestic and imported agriculture, as shown in Figure 1, was £78.9 billion. In contrast, the total value of the value-added food industry – processing, distribution and retail combined – was £86.9 billion.

Figure 1: UK Value-added Supply Chain
(Source: Food Statistics Pocketbook 2012, DEFRA)

It is one of the great business success stories of our time. But it has also produced a number of far less desirable consequences.

The first of these is that most of the independent butchers, bakers, fishmongers and greengrocers, on which we once relied, have now disappeared, their prices undercut, their business model obsolete. Far worse, the restructuring of the industry has, itself, led to greater and greater consolidation, thereby reducing the number of participants. In the long shelf-life sector, for instance, most of the world’s most recognisable brands are now owned by just five or six multinational conglomerates, including Kraft, Nestlé, Coca-Cola and PepsiCo. With respect to retail, in the UK we now buy 89% of all our groceries from just five large supermarket chains, with the largest of these, Tesco, having a 30% market share.

It is the very success of these mega-corporations, however, that has now exposed a contradiction at the very heart of the value-adding principle which made this success possible. For the purpose of adding value to basic ingredients, of course, is to be able to charge more for the resulting products and hence make greater profits. According to this principle, therefore, you shouldn’t sell a man tomatoes to make a pasta sauce if you can actually sell him the pasta sauce. Similarly, you shouldn’t sell him the pasta sauce to make a lasagne if you can sell him the lasagne. This only works, however, if you are able to make a lasagne at a price he can afford. If not, he’ll go back to buying the tomatoes and making it himself.

Of course, there is some latitude in this. Given the convenience of a readymade lasagne, the customer may be prepared to pay a little bit more than it would cost him to make it from scratch. But ideally, it would be best if you could actually make the manufactured lasagne cheaper than homemade, thus not only providing the customer with convenience, but saving him money. The problem, of course, is that by doing all the work for him, the value-added supply chain adds costs at every stage; and the only way these costs can be offset is by reducing the cost of the basic ingredients.

To determine by how much the manufacturer has to cut his ingredient costs to meet this requirement, I conducted a little experiment. At my local supermarket, the cheapest readymade lasagne I could find was £1.99 for a single portion. I therefore set about making a lasagne from scratch for £2.00 per head, based on the ingredients shown in Figure 2.

Figure 2: Cost Breakdown for a Homemade Lasagne for Six

That I could only produce a lasagne at £2.00 per head if I made enough for six is, of course, a bit of a problem. For one of the great advantages of a ready meal of any kind is the convenience of the ‘single serving’ portion. This is therefore something to which I shall have to return later.
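For anyone who wants to check the arithmetic, here is a minimal sketch of the costing in code. To be clear, the prices below are illustrative placeholders, not the actual line items from Figure 2; what matters is the shape of the calculation.

```python
# A minimal sketch of the Figure 2 costing. The ingredient prices are
# illustrative placeholders (the real figures are in Figure 2), chosen
# only to show how a six-portion batch reaches £2.00 per head.
ingredients_gbp = {
    "minced beef": 4.00,
    "cheese (two types)": 3.00,
    "tomatoes, vegetables & herbs": 2.50,
    "flour, eggs, butter & milk": 2.00,
    "seasoning": 0.50,
}
servings = 6

total = sum(ingredients_gbp.values())
print(f"Total: £{total:.2f} -> £{total / servings:.2f} per head")
# Total: £12.00 -> £2.00 per head
```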

First, however, I want to continue with the experiment, for which the next step is to determine the cost at the farm gate of both my ingredients and the equivalent ingredients purchased by the manufacturer. For I, of course, am buying mine at a supermarket, with the cost of sale and distribution already added, whereas the manufacturer is buying his wholesale. What we need to work out is the value of the ingredients in each case when they were sold by the farmer.

We do this by first calculating the average cost breakdown of all foods sold in a supermarket using the figures for the value-added supply chain shown in Figure 1, ignoring, of course, that part of the supply passing through the catering industry. The results are shown in Figure 3.

Figure 3: Average Cost Breakdown of Food Sold in Supermarkets

This is the cost breakdown for all food sold in a supermarket, regardless of the amount of processing the food has undergone. 30.27% is therefore the average cost of processing, with some items receiving more and some less. 

In fact, we can divide ‘processing’ into three broad categories: high, medium and low. Examples of low-level processing include the butchering and mincing of beef, the pasteurising and bottling of milk, and the grading and packaging of tomatoes. Medium-level processing includes the making of cheese and the manufacture of tomato puree, while high-level processing involves taking intermediate products such as minced beef, cheese and the said tomato puree, and turning them into something like a readymade lasagne, which probably represents a fairly average level of processing in this category.

The making of my own homemade lasagne, in contrast, involves some pre-processing – in the mincing of the beef, and the manufacture of cheese – but given that I am also using fresh vegetables and herbs, which have received little or no pre-processing, and am making my own pasta, the overall level is probably just above average for the lower band.

Having thus established that the readymade lasagne is somewhere in the highest category of food processing and my own homemade lasagne is somewhere in the lowest category, we are now, therefore, in a position to calculate the relative cost of the ingredients at the farm gate for both the readymade lasagne and my homemade version.

We can do this because, in addition to knowing what the mean processing cost of all food is – 30.27% – we also know both the minimum and the maximum. The minimum, of course, is 0%: no processing at all, as in the case of loose, unwrapped carrots and onions. The maximum we can work out if we first assume that the cost of sale and distribution is more or less constant for all items. Taking these two costs – which together account for the remaining 30.84% of the checkout price – out of the equation leaves 69.16%, which is the combined cost of the processing and raw ingredients. If we then assume that there are some products for which the cost of ingredients is nothing – as in the case of some types of bottled water, for instance – it follows that it is possible for the value-added component to be the entire 69.16%, which gives us our maximum.

Assuming, therefore, that the processing costs of all foods sold in a supermarket are situated somewhere in this range, between 0% and 69.16%, and knowing that the mean cost is 30.27%, we can therefore generate the graph shown in Figure 4.

Figure 4: Relative Ingredients Costs for Homemade & Readymade Lasagne

Importantly, this curve is not a representation of an actual distribution, as would be the case if it were based on empirical data. To produce that, however, I would need to know the input costs and the output pricing for every product produced by every manufacturer, or at least a large enough sample to be sure that it was representative: a herculean task, to say the least. Figure 4 is rather an estimate or approximation. Assuming, however, that the actual data – if I had it – would follow a normal distribution, I’d be very surprised if what I have produced here were very far off the mark.
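For anyone who would like to reproduce a curve of this general shape, the sketch below generates a skewed distribution on the stated range with the stated mean. The Beta shape is simply a convenient assumption on my part – any similarly skewed curve would illustrate the point equally well.

```python
# A sketch of a Figure 4-like curve: a skewed distribution on [0, 69.16]
# with mean 30.27. The Beta shape is an assumption chosen for convenience,
# not the method behind the actual figure.
import numpy as np
from scipy.stats import beta

MIN_PCT, MAX_PCT, MEAN_PCT = 0.0, 69.16, 30.27   # constants from the text

mean_unit = MEAN_PCT / MAX_PCT          # mean rescaled to [0, 1], ~0.438
a = 2.0                                 # arbitrary shape parameter
b = a * (1 - mean_unit) / mean_unit     # fixes the mean: a / (a + b) = mean_unit

x = np.linspace(MIN_PCT, MAX_PCT, 200)
density = beta.pdf(x / MAX_PCT, a, b) / MAX_PCT   # pdf rescaled to [0, 69.16]

print(MAX_PCT * a / (a + b))   # sanity check: the mean comes out at 30.27
```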

On the horizontal axis we have the percentage processing costs for all food items, ranging from 0 to 69.16%. On the vertical axis, we have the percentage of products which have each of these values. At the lowest point in the range, for instance, we can see that around 0.5% of all products have no processing costs at all. Given the piles of loose fruit and vegetables which greet us whenever we enter a supermarket, this may seem somewhat low. But what you have to remember is that the percentage processing cost attributed to each item is a percentage of its price at the checkout; and although supermarkets may sell a large volume of these items, they actually only represent a small fraction of total sales.

The question we now have to answer, therefore, is where the ingredients for my homemade lasagne and its readymade equivalent sit on this scale.

In the case of the former, I have gone for halfway between the minimum and the mean, at around 15%. In accordance with my earlier analysis, this is just above the middle of the lower range, which runs from 0 to 23%, and takes into account the inclusion of ingredients such as the butter and the two types of cheese, which fall into the medium band. In the case of the readymade lasagne, I have gone for halfway between the mean and the maximum, which is approximately 50%. This is actually at the lower end of the higher range, which runs from 46% to 69.16%, and consequently represents a very conservative estimate of how much processing goes into ready-meals of this type. Even so, the cost differential between the ingredients that go into a readymade and a homemade lasagne, as shown in Figure 5, is very substantial.

Figure 5: Relative Cost Breakdown for Readymade and Homemade Lasagne

In my homemade lasagne, as you can see, more than half of the cost is going to the farmer for the raw ingredients, whereas in the readymade version, the farmer is receiving less than a quarter. More importantly, given that I have taken a fairly conservative view of the cost of processing for ready-meals, a similar differential would hold for just about every highly processed product you might buy.
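The arithmetic behind Figure 5 is worth making explicit, since it uses nothing beyond the constants already quoted: sale and distribution take a constant 30.84% (i.e. 100 minus 69.16), and whatever share processing takes of the remaining 69.16% comes straight out of the raw ingredients.

```python
# The Figure 5 arithmetic, using only the constants quoted in the text.
SALE_AND_DISTRIBUTION = 30.84          # assumed constant for all items: 100 - 69.16
PROCESSING_PLUS_INGREDIENTS = 69.16

def farm_gate_share(processing_pct: float) -> float:
    """Percentage of the checkout price left for the raw ingredients."""
    return PROCESSING_PLUS_INGREDIENTS - processing_pct

print(farm_gate_share(15))   # homemade:  54.16 -> more than half to the farmer
print(farm_gate_share(50))   # readymade: 19.16 -> less than a quarter
```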

So how do the supermarkets and the manufacturers do it? Well, part of the answer, of course, is that they buy so much, they are able to force the price down at the farm gate. Indeed, the pressure they put on their agricultural suppliers is, in itself, one of the less desirable consequences of their omnipotence, forcing farmers to adopt practices which are both inhumane and environmentally damaging, while still driving many of them out of business.

In the UK, for instance, dairy farmers, in particular, are almost an endangered species. Due to an unfavourable exchange rate between the pound and the euro, supermarket chains are able to buy milk from EU farmers at a price which is less than the cost of domestic production. As a result, the UK dairy industry is contracting, with many dairy farmers being forced to diversify, often by adding value to their own raw ingredient by becoming artisan cheese makers. As a result, there are now more than a thousand specialist cheeses in the UK, many of which have won international awards, but none of which appear on any supermarket shelves. For in order to do so, these new artisan cheese makers would have to both increase their production – by at least an order of magnitude – and reduce their prices, both of which measures would affect their quality, thereby effectively defeating the purpose of the exercise.

However, it is not just by driving down prices at the farm gate that the food industry solves its value-added cost conundrum. After all, the ingredients for my own homemade lasagne were also bought at a supermarket, and the price I paid, therefore, also benefitted from this same price-squeezing. In order to maintain the cost differential between the ingredients I purchased and the ingredients that go into an industrially produced lasagne, the food industry has to take even tougher measures, and it does this by buying the lowest-quality ingredients it can get away with.

In the UK recently, there was a major scandal over the revelation that there was horsemeat in some industrially produced ready-meals, including lasagnes. For days, our newspapers and television screens were filled with nothing else, as discovery after discovery meant that more and more products had to be removed from the supermarket shelves. The fact is, however, that horsemeat is one of the least offensive ingredients in some of the pre-prepared foods we eat. Most of the meat in most low-cost lasagnes, for instance, is MRM (Mechanically Recovered Meat), most of which is produced from what would otherwise be regarded as abattoir waste.

Indeed, if one looks at the list of ingredients for my homemade lasagne in Figure 2, it is fairly clear where the industrial manufacturer has to save money. For the two most expensive items are, of course, the minced beef and the cheese: the protein and the dairy fat. These are the ingredients that all manufacturers are therefore forced to cut back on, making it very unlikely, as a consequence, that the cheese sauce in a manufactured lasagne is actually made from real cheese; a soya-based substitute with cheese flavouring is now the preferred option.

Even the tomato sauce is likely to have been made from sugar, vinegar, emulsifier and tomato flavouring. Indeed, the only two ingredients in any of these products one can really trust to be what they purport to be are the sugar and the salt: the two low cost ingredients that are absolutely essential in making any industrially produced food palatable. And it is this, more than anything else, I believe, that explains the amount of sugar we are now all eating. 

From supposedly healthy cereals and yoghurts, to readymade chicken tikka masala and naan bread, it’s in almost every processed food we buy; and the tragedy is that the cheaper the product the more sugar it tends to contain. As a result, we are now very probably the first society in history in which obesity has become a disease of the poor.

The really sad fact, however, is that we like it: all this sugar-rich food. It is not quite that we are addicted to it, but we have certainly become accustomed to it. We’re like the person who always puts two sugars in his tea or coffee and grimaces in disgust if he accidentally takes a sip from an unsweetened cup. He doesn’t realise that if he drank it unsweetened for a week or two, he’d actually come to like it that way, and would then find the sweetened variety far too rich and sickly for his taste.

Not that, as a society, we’re likely to make this discovery any time soon, especially as we train our children to want and prefer sweet foods almost from birth.

In my first essay on this subject, I pointed out that we now have six-month-old babies suffering from obesity. But I didn’t explain why this was. The answer, however, is fairly simple. It’s baby formula.

Natural milk contains its own sugar: lactose. In baby formula, however, the manufacturers take this out and replace it with either sucrose or HFCS. Lots of it. Which makes it very yummy. As I also pointed out in Part I, however, the fructose in the sucrose or HFCS, as well as being turned into fat in the baby’s liver, also produces a substance which is a leptin inhibitor – leptin being the hormone which tells our brains when we have eaten enough. This means that when the baby has his feed, he will probably drink the whole bottle, and will enjoy it very much, but will still not feel full. Half an hour later, as a result, he will start wailing for more, showing all the signs of being hungry, to which his mother – not knowing what else to do, and very probably at the end of her tether – will probably respond by preparing another bottle. In no time at all, therefore, we have an obese baby, who will probably grow up to be an obese child and then an obese adult, turning to sweet comfort food as his only solace in a world that has played such a mean trick on him.

In the UK, it is now estimated that the National Health Service spends £5 billion per annum treating obesity and the diseases that can be directly attributed to it. It is further estimated that over the next twenty years, this figure will more than double, making it extremely unlikely that, with an aging population and the escalating cost of new drugs, the NHS will be able to go on indefinitely treating patients free at the point of use. This is not, therefore, just a health issue; it is also a financial issue.

Part of the problem, of course, is that the food industry, itself, can do nothing about it. It has followed a certain business logic, a game-plan that made perfect sense in business terms, and probably still does to those who are unable to look beyond this framework. The idea that it might now go back to selling people healthy raw ingredients for them to cook at home, effectively dismantling the value-added supply chain it has built up over the last fifty years and reducing its business to a quarter of its current size, is therefore beyond fanciful. When faced with criticism, the industry’s strategy, as already revealed in the USA, will therefore be to deny that sugar is a problem, point out that all products are clearly labelled, and argue that the consumer has a choice as to what they eat. And if that doesn’t work, it will then bring out the big guns, funding scientific studies to produce evidence supportive of its position and lobbying governments to ensure that no legislation is passed to make the slow poisoning of people with fructose illegal. Just like the tobacco industry over the last fifty years, in fact, we can expect the food industry to use every tactic available to it to maintain its lucrative value-added business. After all, what’s the alternative?

The good news for the industry is that even if more people decided to heed Professor Lustig’s warning, and wanted to start cooking again using raw ingredients, there are far fewer people now than thirty years ago who could actually do it. For despite all the celebrity chefs on our television screens and the hundreds of endlessly recycled cook-books sold each Christmas, the fact is that, for the most part, we have become a nation of only occasional cooks, with Christmas becoming the one exceptional occasion. Yes, there are still some very good home cooks out there. But most of them are either middle class enthusiasts, who like to think of themselves as living the ‘good life’ with Hugh Fearnley-Whittingstall, or they’re my age. Very few of them are the harassed and overworked parents of children whose tastes and ideas about what they want for supper are largely determined by TV advertising. 

More importantly still, in many homes today, the number of times per year the family sits down to a meal together can be counted on the fingers of one hand. Different members of the family want different things at different times. After-school activities and teenagers wanting to go out to meet their friends mean that meal times are often staggered, and the take-away and the ready-meal are the ideal solution. My homemade lasagne, which I had to make for six in order to make it economical, just no longer suits the way most families now live. For in changing its business model since the Second World War, our food industry not only changed itself, it changed us. Families today are not like the families I knew when I was growing up. And for most people, the idea of going back to live that way is as unimaginable as the food industry actually dismantling itself.

‘But what about the government?’ I hear you say. ‘If the effect on our health of all this sugary food is as serious as you say it is, shouldn’t they be doing something about it?’ What, and alienate an industry that contributes so much to their campaign funds! And where are the votes in it? We like our diet just the way it is. If we didn’t, we wouldn’t eat it. To many people, therefore, any government which tried to change it would be seen as just one more example of the ‘nanny state’. And after long campaigns against smoking, alcohol and dietary fats, to most politicians a campaign against sugar would be seen as a campaign too far.

Then there is the strategic issue. Not that I would ever suspect politicians of thinking strategically. But it’s always a handy excuse for inaction. And the fact is that the world’s population is currently 7.16 billion, and is rising by over 200,000 every day. By 2050, it is estimated that it will have reached 9.6 billion, which is a lot of people to feed. Too many if you want to feed them fresh meat, fish and vegetables. Already in the UK, as a result, there are people building farms to breed insects, from which animal protein can be extracted, which can then be processed to (very probably) taste like chicken.

High value-added processing, based on low-value ingredients, is our future. The only good news is that with life expectancy in some countries now dropping as a result of our highly processed sugar-rich diet, we won’t have to endure it for very long.

Saturday 6 July 2013

Our Food Industry & How It’s Killing Us (Part II): The Rise of the High Sugar Diet



In the first part of this essay, subtitled ‘Obesity & the Incoherence of Much Current Dietary Advice’, I cited a lecture by Dr Robert Lustig, Professor of Paediatrics at the University of California in San Francisco, in which he argues that, despite their widespread currency, the two most prevalent theories for explaining the increase in obesity, hypertension, Type 2 diabetes and cardiovascular disease in the second half of the 20th century are both fundamentally wrong.

The first of these theories – which gained broad acceptance in the early 1980s – says that one of the most important factors in the aetiology of all of the above diseases is the level of fat in our diets: a proposition that is now so well established in our collective belief system that it is not only generally accepted as a fact, but is the basis upon which much current dietary advice continues to be given.

According to Professor Lustig, however, not only was the international study upon which this theory was initially grounded seriously flawed – failing to take into account all the possible contributory factors – its continued status defies much of the evidence of the last thirty years. For while our intake of dietary fats has significantly fallen during this period – by around 25% – the incidence of each of the diseases with which these fats were believed to be causally related has continued to rise, reaching near-epidemic proportions.

Over the last ten to fifteen years, as a consequence, a second theory – one based less on scientific evidence than apparent common sense – has steadily gained greater currency. Instead of attempting to identify one particular substance or foodstuff as the principal culprit, it says that obesity – and all its other attendant diseases – is less the result of what we eat than simply how much. Based on the first law of thermodynamics, which tells us that, in a closed system, energy is never lost, it states that if we consume more in calories than we burn off in exercise and the sheer business of staying alive, then the excess has to go somewhere. And the obvious answer as to where this might be is in our adipose tissue in the form of fat.

What this physics-based model fails to take into account, however, is that our bodies not only have different ways of dealing with excess dietary inputs – some of them hardly entering the closed system of our metabolism at all – they also have different ways of metabolising the different substances that do get that far. 

In Part I of this essay, I illustrated this by taking the reader through the biochemistry involved in the metabolism of two common sugars: glucose and fructose. I shall not rehearse this excursion into the abstruse and wonderful world of human metabolism again here, not least because it would involve reproducing most of Part I all over again. The important point, however, is that it is quite possible for us to ingest identical amounts of two very similar substances, and for our bodies to treat them in completely different ways. In the case of glucose, for instance, our bodies either use it to produce instantly available energy – in the form of ATP (adenosine triphosphate) – or turn it into the short-term energy store, glycogen. In contrast, if we ingest any significant amount of fructose, our bodies turn nearly all of it into fat.

And it is this that Professor Lustig believes to be the real problem: not the dietary fats which our metabolism largely breaks down into other (mostly) useful substances; but the non-fats which our bodies turn into fats, to be stored as such in adipose, skeletal muscle and cardiac tissue, where, if left unused and allowed to build up over time, they can do considerable harm. And chief among these harmful, ‘lipogenic’ non-fats, according to Professor Lustig, is indeed fructose. 

More importantly, I have yet to come across a single biochemist who disagrees with the basic science behind this contention. I can therefore state with a fair degree of confidence that, even if there are still some grains of truth in either of the other two theories used to explain the increase in obesity and heart disease over the last thirty years, if you want to avoid putting on fat, then the one thing you should certainly do is cut down on your consumption of fructose.

It is at this point, however, that we run into our first problem. For even if more people were to become aware of just how lipogenic – or disposed to fat formation – fructose truly is, it is unlikely that many of us would be able to tell you just how much of the stuff we are actually eating. This is because very little of our daily intake comes in a form that is readily identifiable as such. For most people, for instance, less than 5% of their fructose consumption comes in the form of fresh fruit – from which, in its broadest sense, all fructose is ultimately derived. A far greater proportion – the vast majority, in fact – is added to our food in the form of processed sugar.

Even in this regard, however, it is not always obvious how much we are consuming. For not all of the sugar we ingest is conspicuously spooned over strawberries or stirred into our tea or coffee. Most of it, in fact, is almost entirely hidden, not just in the cakes and biscuits we casually enjoy as mid-morning snacks, but in the ready-meals and fast-food takeaways – along with their accompanying soft drinks – that have become such a major part of our diet over the last thirty years.

To complicate matters further, different types of sugar contain different amounts of fructose: a fact which has led to the fairly widespread belief that there is one type of sugar – used exclusively in the industrial manufacture of food products – that is worse than all the others. The believed culprit is High Fructose Corn Syrup, or HFCS, which first made its appearance in the mid-1970s, after President Nixon asked his then Secretary of Agriculture, Earl Butz, to find a way of stabilising food prices so as to prevent them from becoming a political issue. Butz did this by subsidising the large-scale production of HFCS made from maize grown in America’s Mid-West. 40% cheaper than sucrose – which is made from either sugarcane or sugar beet – it very quickly caught on with the food industry, especially with manufacturers of soft drinks such as Coca-Cola and Pepsi, which has further led to its demonization among certain campaigners, the view being that if Coca-Cola is using it, then it’s got to be evil.

This, however, is a very distorted view of what is actually going on here. For while it may not be entirely coincidental that the introduction of HFCS occurred more or less at the same time as the start of the period of rapid growth in obesity and CVD, to assume that this correlation is either simple or direct would be to make the same kind of mistake researchers in the 1970s made with respect to dietary fats. They saw a correlation and immediately assumed a cause.

One can see this more clearly if one steps back from the US context – where most of this debate is taking place – and takes a more global perspective. For despite what many campaigners seem to think, the production and consumption of HFCS is still very much a US phenomenon. In 2010, for instance, HFCS accounted for around 38% of the sugar – or ‘sweetener’ – consumed by the average American. In Europe, in contrast, it accounted for less than 5%. Yet Europe too – and the UK in particular – is experiencing a similar trend with respect to obesity and CVD. It may not be as pronounced as in the USA, where it started earlier, but it is following a very similar path.

Even more significantly, HFCS and sucrose are very similar in terms of their biochemistry. As can be seen in Figure 1, sucrose comprises a bonded pair of fructose and glucose molecules, which almost immediately breaks apart on digestion, producing one fructose molecule and one glucose molecule. One can therefore say that sucrose is more or less 50% fructose and 50% glucose. HFCS, in comparison, is 55% fructose and 42% glucose, with the other 3% being mostly water. The difference in the amount of fructose in each of these forms of sweetener may not be entirely trivial, but it is not enough, therefore, to blame one and not the other. In fact, singling out HFCS for attack, as many people seem to want to do, merely allows the food industry to counter by arguing that it is no more harmful than sucrose, which is more or less correct.

 Figure 1: Molecular Structure of Sucrose
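Indeed, a quick calculation using the compositions just quoted shows how small the gap actually is:

```python
# Fructose delivered per 100 g of sweetener, using the figures quoted above.
sucrose_fructose = 0.50   # sucrose: ~50% fructose once split on digestion
hfcs55_fructose = 0.55    # HFCS-55: 55% fructose

gap_g = (hfcs55_fructose - sucrose_fructose) * 100
print(f"{gap_g:.0f} g more fructose per 100 g "
      f"({gap_g / (sucrose_fructose * 100):.0%} more than sucrose)")
# -> 5 g more fructose per 100 g (10% more than sucrose)
```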

The real problem, therefore, is not the type of processed sugar we are consuming, but the total amount. Here, however, we have another problem. For obtaining reliable data on sugar consumption is not easy.

The first difficulty one encounters is in determining what counts as ‘sugar’ in the various datasets that are out there, and what is meant by ‘consumption’. A recent report by the Indian Council of Agricultural Research (ICAR), for instance, states that Brazil has the highest per capita consumption of sugar of any country in the world, with each Brazilian consuming 58 kg (128lbs) of the stuff per year. The USA, in contrast, comes in seventh place, with each American only consuming half this amount, 29 kg (64lbs). It is only when one looks at the data in more detail that one starts to realise that this claim isn’t quite what it seems.

The first clue comes in the attribution of authorship on the title page. For while the report may have been published by ICAR, it was actually written by the Sugarcane Breeding Institute in Coimbatore. It will not, therefore, come as much of a surprise to discover that, under the heading ‘sugar’, the report only actually includes sucrose produced from sugarcane, which is a major agricultural crop in both India and Brazil. In Brazil, however, the sugar produced is not only sold in granulated form and used to sweeten manufactured foods and drinks; it is also used to produce Cachaça, one of Brazil’s most popular alcoholic beverages. This means that a large part of Brazil’s so-called ‘sugar consumption’ is not ingested as sugar at all – but as ethanol – and while it may make sense, from an economic and agricultural perspective, to include it as one of the more important raw materials consumed by the Brazilian economy, to include it as part of the country’s official per capita sugar intake gives one a totally distorted impression.

Of course, my selection of this rather extreme example to illustrate what is a fairly general point means that it is not entirely typical of most of the data on sugar consumption one finds on the internet. Statistically, it is a bit of an outlier. In many ways, however, it is actually far less misleading than quite a few reports I could have cited. For the vast majority of websites providing statistical information of this kind not only fail to reference their data’s provenance or define what it includes, in many cases they are produced on behalf of clearly vested interests, of which more trusting readers need to be aware.

There are, of course, plenty of scientific studies available, which, given peer-review, one can assume to be without intentional bias. But most of the ones at which I’ve so far looked are primarily concerned with correlating sugar consumption with the incidence of various specific diseases, and tend to be based on fairly small sample populations taken from a single geographical region, comprising a single sex in a fairly narrow age-range. In terms of helping us quantify per capita sugar consumption, they are therefore of little use. If we are looking for reliable data, as a consequence, all we really have to go on are official national statistics. And in the UK, even these are not very helpful.

The ONS, for instance – the Office for National Statistics – has absolutely nothing on the subject. DEFRA – the Department for Environment, Food and Rural Affairs – has figures for UK sugar production; but nothing on consumption. And while, for the purposes of ‘Health Education’, the Department of Health has published one or two papers on the dangers of high sugar diets, it appears not to have commissioned any real scientific work on the subject since 1999.

Part of the problem is that, in the UK, sugar consumption has not yet become a political issue. This, however, is certainly not something you can say about the USA, where the problem, if anything, is one of over-politicisation. Last year, for instance, it was announced that per capita sugar consumption in the USA had exceeded 100lbs (45kg) for the first time ever – though, again, what was included under the heading ‘sugar’ is not absolutely clear. In October, however, the US Department of Agriculture, ever-mindful of the need to keep Midwest farming interests onside, announced that it was changing the way in which sugar intake would be calculated in future and duly revised the 2012 figure down to 76.7lbs (34.8kg).

Figure 2: US Per Capita Sweetener Consumption 1965-2010
(Source: US Department of Agriculture)

The irony is that, if one looks at the official figures which the Department of Agriculture published in September 2010 – before this change in methodology took place – they actually reveal a marked decline in per capita sugar consumption over the last decade, very possibly as a result of the growing campaign against HFCS which began in the early 2000s. Changing the method of calculation is therefore likely to obscure this.

What Figure 2 most strikingly reveals, however, is not only how rapidly HFCS substantially replaced sucrose (here designated as ‘Refined Sugar’) during the 1970s and early 80s, but how its per capita consumption still continued to grow even after the consumption of sucrose more or less levelled off, thereby leading to a marked increase in total sugar intake during a period in which the USA coincidentally experienced its most significant increase in the incidence of obesity and CVD. 

To attribute this increase solely to the extra 5% fructose in HFCS, however, simply beggars belief, as does the argument that following the US consumer’s rejection of HFCS – and the subsequent decline in overall sugar consumption – the problem has now been resolved. For although some American consumers – having watched Professor Lustig’s lecture on YouTube perhaps – may be voting with their wallets and refusing to buy products containing HFCS, this does not mean that they have fundamentally changed their diet, or that the American food industry is now gearing itself up to produce food with a significantly lower sugar content. Indeed, it is questionable whether this latter is even possible. For having already reduced the amount of fat in the foods they are producing – largely by replacing it with sugar – the question now facing all food manufacturers is with what – if they were forced to it – they would replace the sugar.

Not that they are in any imminent danger of being forced to make this decision, of course. For not only is the US Food & Drug Administration (FDA) still a long way from accepting that HFCS – or any other form of sugar – is harmful, but politicians and industry-insiders alike know full well that were they required to reduce the sugar content of their products, not only would many manufacturers go out of business, the effect on the overall US economy would be devastating. 

This is because, as Earl Butz recognised, sugar is the key to cheap, mass-produced food. Without it, many manufactured foods would either be too lacking in flavour to be saleable, or too expensive for them to actually have a mass market.

To understand this, however, one needs to understand the economics of our food industry in the way that it is currently structured. And it is this that will be the subject of my third and last essay in this series, subtitled ‘Paying the Price.’

In it I shall not only describe how our food industry got itself – and us – into this extremely dire and possibly intractable predicament, I shall also attempt to outline the even more dire consequences that may follow if no solution can be found.

Tuesday 14 May 2013

Our Food Industry & How It’s Killing Us (Part I): Obesity & the Incoherence of Much Current Dietary Advice



In 1980, a large-scale study by the epidemiologist Ancel Keys was featured on the cover of Time magazine. Called The Seven Countries Study, it compared per capita fat intake in the USA, Canada, Australia, England, Wales, Italy and Japan, and appeared to demonstrate a simple and direct correlation between dietary fat and the relative incidence of cardiovascular disease (CVD) in each of the countries concerned. It also marked something of a turning point in history. For over the next decade or so, it fundamentally changed our attitudes to what we eat.

For those brought up in a world in which this change had already taken place, this may be somewhat difficult to appreciate; but prior to the 1980s, the prevailing attitude was largely one of innocence. Mealtimes were still mostly family affairs, comprising regular family favourites, which most of us, I suspect, simply took for granted, enjoying the odd treat now and again as one of life’s simple pleasures, but never really giving our diet, as such, that much thought. It was The Seven Countries Study – or, perhaps more accurately, the flurry of media attention and paternalistic government action that followed in its wake – that lifted the scales from our eyes. For as departments of health throughout the western world responded to the growing political imperative by issuing dietary guidelines and mounting ‘healthy eating’ campaigns, we were all ineluctably made aware of the hazards inherent in our previously incontinent lifestyles, and were thereby forced, as much by social pressure as any concern for our hearts, to start ‘watching’ what we ate.

Indeed, it was as much their appeal to our vanity as their play upon our fears, that in the end, I suspect, made all those government campaigns to have us eat more healthily so successful. For successful, they certainly were. Over the next thirty years, the amount of fat in our diet, as a percentage of total calorific intake, fell from around 40% in the late 70s to just over 30% today. Indeed, it’s hard to think of another campaign to change our behaviour on such a scale that has had anywhere near this level of success. The only problem was, of course, that it didn’t actually have the desired effect. For despite getting us to do all the things we were supposed to do, it didn’t bring about the changes in our health that it was believed would follow. In fact, during that same thirty-year period in which we managed to reduce our fat intake by 25%, not only did our average weight increase – by as much as 25 pounds (11 kg) in the USA – but the incidence of obesity, hypertension, Type 2 diabetes, and coronary heart disease all continued to rise.

So what went wrong? Was dietary fat not to blame after all? Not if you count the number of articles on this subject still being submitted to major medical journals. If one listens carefully, however, there has been a subtle change in the way many healthcare professionals now seem to approach the subject. If you ask most dieticians, for instance, they are far more likely to tell you that it is not what we are eating that is the problem but how much. For although a ‘low-fat diet’ is still what is ‘officially’ recommended, this once simple message is now being combined with what is arguably an entirely different and diametrically opposed explanation as to why we’re all getting so fat. This is the view that it doesn’t actually matter whether our calorific intake is in the form of carbohydrates, proteins or fats, in that, in energy terms, the value of every calorie is the same. What is important, therefore, is the simple maths: that if we ingest n calories and only burn off n-1 calories in exercise, then the remaining calorie has to be put into storage, almost certainly in the form of fat.

Apart from sending out a mixed and therefore somewhat confusing message, the real problem with this new ‘quantitative’ approach to the problem, of course, is that it is simply wrong – a matter to which I shall return shortly. What makes it all the more damaging, however, is the effect it is having on our already dysfunctional relationship with food. For if being told to cut down on fat made us that much more self-conscious with respect to what we were eating, being told that a healthy diet is simply a matter of inputs and outputs, which can be counted and controlled, has made us positively obsessive – assuming, that is, that we haven’t already given up altogether and fallen into that slough of self-loathing, despair and depression, which is so often the psychological correlate of our physical malaise. 

For the implication of this new quantitative approach, of course, is that, if we are overweight, it is entirely our own fault. We eat too much and exercise too little. We are, in short, guilty of two deadly sins: gluttony and sloth. And how our media love to rub our noses in it! On UK television at present, there is a programme called ‘Big Body Squad’. It is about members of the emergency services who are tasked with the problem of getting grossly obese patients out of their homes in order to take them to hospital: a task which almost invariably involves taking out windows, knocking down walls and the use of a crane.

In fact, watching morbidly obese people being ritually humiliated for our derision and delight has become something of a new spectator sport. Importantly, however, the deep vein of inhumanity and cruelty to which this kind of reality programming panders is not the only form of sickness it exploits. For while the gross and unlovely flesh of our fallen brethren may initially reinforce our sense of moral superiority – that wholly unsought, if not entirely unpleasant by-product of the hours we spend each week in the gym, turning our bodies into temples to the god Narcissus – it also serves as a cautionary tale as to what could so easily happen should we allow our iron discipline to falter, thus adding further impetus to our own obsessive-compulsive behaviour. 

What is really troubling, however, is the fact that, for those who are overweight, it is not just the state of their own bodies over which they are made to feel guilty and ashamed. As parents, they also have to take responsibility for the obesity epidemic that is now sweeping through our children. For the sad fact is that today’s overweight ten-year-olds are very probably members of the first generation for over a century to have a lower life expectancy than that of their progenitors. So bad are we at feeding our children and ensuring that they have enough healthy exercise, in fact, that we are now even producing obese babies. For the first time in history, infants as young as six months old are experiencing problems requiring medical intervention purely as a result of their weight. It’s no wonder, therefore, that we, their parents, feel guilty. The question, however, is whether the shame and anger we rightly feel ought, more appropriately, to be directed at someone other than ourselves.

For think about it: how do six-month-old babies become obese? Do we really believe that it is because they are gluttonous and slothful? After all, at that age, they have no psychological or behavioural triggers that would cause them to eat more than they need. Might it not be the case, therefore, that just as the healthcare profession may have got it wrong over the role of dietary fats in the aetiology of heart disease, so too it may have been slightly over-quick to judgement over the role of sin?

One endocrinologist who certainly thinks so is Dr Robert Lustig, Professor of Paediatrics at the University of California in San Francisco. In July 2009, a video of one of his lectures was posted on YouTube, in which he argues that both of the current views on the causes of obesity and cardiovascular disease are wrong.

With respect to the view that it is dietary fat that is to blame, not only does he point out that reducing fat intake hasn’t had the desired effect, he also questions the scientific rigour of the study which raised this whole question in the first place, arguing that, despite being based on a multivariate regression analysis – one designed to identify and weight all the contributory factors in a complex causal matrix – The Seven Countries Study completely failed to take into account the contribution of another common foodstuff: one which, at the time, represented a very similar relative proportion of each of the diets studied, and which could therefore be shown to have the exact same simple and direct correlation with cardiovascular disease as dietary fat. What this ‘other common foodstuff’ is, I shall return to shortly.

Just as importantly, he also argues that the view that all calories are the same, and that it doesn’t matter what we eat as long as our calorific inputs and outputs are balanced, is equally misguided. This is because it fails to take into account the very different ways in which our bodies metabolise different foods. 

We can demonstrate this quite simply if we compare the biochemistry involved in the metabolism of two common carbohydrates:

  1. fructose, which is the sugar that is found in fruit; and
  2. glucose, which most of us obtain from starchy staples such as bread, pasta, rice and root vegetables.

If we start with the latter, the first and most important thing to know about glucose is that it is one of the few substances we ingest that can be directly absorbed and metabolised by more or less every cell in the body. This is because when glucose enters the bloodstream it triggers the pancreas to release insulin. The insulin molecules then attach themselves to the outer membranes of our cells, and attract to them – from within the cells – a protein called GLUT4 (Glucose transporter type 4), which, together with the insulin, forms a physical conduit through the cell membrane – a bit like a valve – through which individual glucose molecules are able to pass. Once inside, the cell’s mitochondria then use the energy produced by glucose breakdown to produce adenosine triphosphate (ATP) – the ‘molecular unit of currency of intracellular energy transfer’, as it is sometimes called – which then combines with different enzymes and different structural proteins to be consumed by or to facilitate other cellular processes.

This doesn’t mean, of course, that all ingested glucose is instantly absorbed in this way. How much of it is absorbed depends on the rate of ingestion. If drip fed intravenously, for instance, at a slow but steady rate – as happens to patients in hospitals – non-hepatic metabolism can get fairly close to 100%. Normal ingestion, however, accomplished through eating, is a little more erratic, leading to peaks and troughs in blood-sugar levels. To use an example from Professor Lustig’s lecture, if you were to eat a sandwich comprising two slices of bread containing 120g of glucose – ignoring the sandwich’s other contents, and assuming that you are hungry, and that your blood isn’t already glucose saturated – then it is likely that about 80% of the glucose (96g) would be taken up and metabolised as described above. The rest would end up in your liver, the body’s more general metabolic factory, where most of it would then be turned into glycogen – as shown in Figure 1 – a highly accessible, medium-term energy store, of which our livers can actually hold an almost limitless amount without experiencing dysfunction or damage.

This, in fact, is what happens when marathon runners ‘carb up’ the night before a race, usually by eating vast amounts of pasta. Most of the excess glucose is stored in the liver as glycogen, which is then released back into the bloodstream as glucose as the runner’s blood sugar starts to fall. This goes on until, eventually, all the glycogen is used up and the runner hits ‘the wall’.
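
For readers who like to see the arithmetic laid out, the short Python sketch below encodes the partitioning just described. The 80/20 split is simply the rough proportion quoted from Professor Lustig’s lecture, not a physiological constant, and the names are my own invention for illustration:

    # Toy illustration only: it encodes the rough proportions from the
    # lecture example, in which about 80% of an ingested dose of glucose
    # is metabolised directly by non-hepatic cells and the remaining
    # 20% is stored in the liver as glycogen.

    NON_HEPATIC_FRACTION = 0.8  # assumed, per the lecture's example

    def partition_glucose(grams):
        """Split an ingested dose of glucose (in grams) between direct
        cellular metabolism and hepatic glycogen storage."""
        metabolised = grams * NON_HEPATIC_FRACTION
        to_glycogen = grams - metabolised
        return metabolised, to_glycogen

    # The 120g sandwich example from the text:
    print(partition_glucose(120))  # -> (96.0, 24.0)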



Figure 1: Metabolism of Glucose in the Liver


If we now compare this with what happens to fructose, the story is very different. To begin with, the presence of fructose in the bloodstream does not trigger the release of insulin. Nor can fructose use any insulin/GLUT4 conduits that may already exist: although it contains exactly the same atoms as glucose – the two sugars are isomers – those atoms are arranged differently, so that, like a key with the wrong number of notches, it is physically the wrong shape. To enter our cells at all, in fact, it needs another transporter, GLUT5. Apart from in our intestines, however, GLUT5 is only produced in our livers. And it is in the liver, therefore, that all ingested fructose is metabolised.

Even in the liver, the metabolism of fructose still has to follow a different course from that of glucose. For just as fructose molecules are the wrong shape to use insulin/GLUT4 conduits, so too is their structure wrong for direct conversion into glycogen. The result is that, depending upon the rate at which the fructose is absorbed by the liver, it follows one of four different metabolic pathways, as shown in Figure 2.

  1. The most benign of these is the one shown towards the top of the diagram, in which the fructose is first used by the mitochondria of the hepatic cells to produce ATP, just as happens to glucose in other cells of the body. It then follows one of two pathways – again depending upon the rate of absorption – the more benign of which results in its eventual transformation into glucose. This only happens, however, to a very small proportion of the ingested fructose, or when the absorption rate is very low.
  2. If the absorption rate is faster than the rate at which ATP can be turned into glucose, the available phosphate within the cell becomes depleted, triggering the activation of the scavenger enzyme adenosine monophosphate deaminase-1. This recoups intracellular phosphate from the breakdown products of ATP – adenosine diphosphate (ADP) and adenosine monophosphate (AMP) – converting the AMP into inosine monophosphate (IMP), whose further degradation leaves a residual waste product in the form of uric acid.
  3. The real problem occurs, however, when the rate of absorption exceeds the rate at which the mitochondria can turn the fructose into ATP in the first place. The fructose then enters a process known as de novo lipogenesis (DNL), the creation of new fat, in which most of it is first turned into pyruvate before entering the citrate shuttle – a sequence of biochemical transformations, including feedback loops – from which the majority finally emerges as VLDL (Very Low Density Lipoprotein), a lipoprotein particle containing, among other by-products of the process, cholesterol and triglycerides (fats). These are then eventually deposited in adipose, cardiac and skeletal muscle tissue throughout the body.
  4. Alternatively, the various lipids created by DNL can form fatty droplets that are deposited within the liver itself, producing an effect on the liver very similar to that of alcohol.


Figure 2: Metabolism of Fructose in the Liver
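
To make the branching explicit, here is a second, equally hypothetical Python sketch: a toy dispatcher that maps the four pathways described above onto bands of absorption rate. The thresholds and units are invented for the sake of the example; only the ordering of the pathways comes from the description:

    # Toy model: which hepatic fructose pathway dominates at a given
    # (arbitrary-unit) rate of absorption. Both thresholds are assumed.

    ATP_CAPACITY = 1.0              # assumed mitochondrial ceiling
    GLUCONEOGENESIS_CAPACITY = 0.3  # assumed ATP-to-glucose ceiling

    def fructose_pathway(absorption_rate):
        if absorption_rate > ATP_CAPACITY:
            # Pathways 3 and 4: de novo lipogenesis, yielding VLDL and
            # triglycerides, or fatty droplets within the liver itself.
            return "de novo lipogenesis (VLDL / hepatic fat)"
        if absorption_rate > GLUCONEOGENESIS_CAPACITY:
            # Pathway 2: phosphate scavenging, leaving uric acid.
            return "ATP salvage, with uric acid as waste"
        # Pathway 1: the benign route, ending as glucose.
        return "conversion to glucose"

    for rate in (0.1, 0.5, 2.0):
        print(rate, "->", fructose_pathway(rate))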

To summarise, therefore:

  • If you ingest glucose, your body turns it into instantly available energy and/or the medium-term energy store, glycogen.
  • If you ingest any significant amount of fructose, your body turns most of it into the long-term energy store, fat.

Anyone who tells you that ‘a calorie is a calorie is a calorie’, therefore, simply doesn’t understand the biochemistry.

There is also another way in which the metabolism of glucose differs from that of fructose. Ordinarily, an increase in lipoproteins and triglycerides in the bloodstream triggers the release of a hormone called leptin, which makes us feel full and uncomfortable whenever we eat too much. It is our bodies’ way of sending a message to our brains to say that we’ve had enough. As can be seen in Figure 1, this is what happens when we consume large amounts of glucose: if the rate of ingestion is too rapid for all of it to be turned into glycogen, then glucose, too, can end up entering the citrate shuttle and being turned into fat, thus releasing leptin and making us feel as if we couldn’t eat another bite. It’s why marathon runners, in carbing up, have to eat very slowly. In the case of fructose, however, one of the free fatty acids created as a by-product of its breakdown (FFA in Figure 2) causes insulin to act as a leptin inhibitor. It actually stops the brain from getting the message.

Combined with our bodies’ overall disposition to turn fructose into fat, this suggests that, at some point in our history, this way of dealing with fructose had an evolutionary value. A hundred thousand years ago, when we were still hunter-gatherers but had left the all-year-round bounty of Africa behind, our ancestors would only have eaten fruit during a few months of the year, in late summer and early autumn. Individuals who were able to gorge themselves on this harvest without feeling bloated, and whose bodies were naturally disposed to lay all this abundant energy down as fat, would have had a far greater chance of surviving the lean months of winter than individuals who either couldn’t force themselves to eat that much fruit, or whose bodies didn’t metabolise fructose in this way. Through natural selection, therefore, these are the bodies we have inherited. The problem is that although our consumption of fructose is no longer confined to two or three months of the year, our prehistoric bodies still treat it as if it were.

‘But fruit!’ I hear you say. ‘I thought it was good for us.’ And so it is. In addition to fructose, it contains a whole raft of other beneficial and necessary nutrients. Moreover, if you eat it as whole fruit – rather than as fruit juice, for instance – it also comes packaged in a large amount of fibre, which slows down its digestion and the rate at which the fructose enters the liver. If one were to eat just two or three pieces of whole fruit a day, therefore, not only would this be enough to provide all the additional nutrients one needs, but most of the fructose would follow the first of the metabolic pathways described above and be turned into glucose. The problem for most of us, however, is that the fructose we ingest no longer comes mainly in the form of whole fruit. Indeed, most of it has so little connection with any fruit we would recognise as such that its fruit-based origin is purely nominal. For most of the fructose that now arrives on our plates or in our glasses is actually in the form of processed sugar: not conspicuously spooned into cups of tea or coffee, on which we could choose to cut down, but hidden in the industrially manufactured food and drink on which most of our diets are now based. And it is this hidden sugar that Professor Lustig argues is the real cause of the obesity and CVD epidemics that are slowly killing us.

In the second part of this essay – ‘Our Food Industry and How it is Killing Us (Part II): The Rise of the High Sugar Diet’ – I shall therefore be looking at the rate at which our sugar consumption has increased over the last thirty years, and at the role of one type of sugar in particular: High Fructose Corn Syrup, or HFCS.

In Part III, subtitled ‘Paying the Price’, I shall then examine how this change in what our food industry is feeding us came about, and consider the consequences of what may follow if nothing is done about it.

Those more interested in the health aspects of this issue, however, might like to watch the 2009 lecture by Professor Lustig I mentioned earlier, which can be found at:


And for those who would like to take a closer look at the biochemistry involved in the metabolism of fructose, there is also a published scientific paper by Professor Lustig, available at: