Nutrition Perfected NYC
 
Since ancient times, humans have experimented with ways to increase the sweetness of foods and beverages without the use of sugar.  Ancient Romans used sugar of lead (a.k.a. lead acetate) as a sugar substitute.  For obvious reasons, using lead as a sweetener caused some serious health problems, though centuries passed before the practice was finally abandoned.  With the accidental discovery of saccharin in 1879, the modern era of non-sugar sweeteners was born.  Cyclamate, aspartame, acesulfame potassium, sucralose, neotame, stevia, and sugar alcohols have followed saccharin into the US market over the ensuing 131 years.  However, despite their many benefits to the public, artificial sweeteners have come under almost constant fire from watchdog groups and the FDA since the early 1960s.  Fortunately, almost all of the information underpinning the negative stigma surrounding sugar substitutes is based on either badly flawed research or simple misinformation and ignorance.  Let’s look at each sweetener and finally see where the truth actually lies.

First up is the 19th century granddaddy of them all: saccharin.  Saccharin is about 300 times as sweet as sugar but can impart a bitter or metallic taste to a product, worsening as the concentration of saccharin increases.  Its best use is often as one part of a sweetener system made up of two or more artificial sweeteners.  Saccharin is best known in the US by the brand name Sweet’N Low, found in most restaurants in the pink single-serving package.  Though its potential as a commercial sugar substitute was recognized immediately upon its discovery, saccharin’s use in mass-market food products was limited until World War I.  During WWI and WWII, sugar was rationed due to military demands, so saccharin became a popular substitute.  Saccharin gained even more popularity in the 1960s and 1970s thanks to America’s growing interest in weight control at the time.

However, in the 1960s, fear began to spread over saccharin’s purported carcinogenicity due to a study that showed an increased incidence of bladder cancer in rats fed saccharin.  In 1977, the FDA proposed a ban on saccharin, but Congress acted to prevent the ban from taking effect.  Though the sweetener was still allowed on the market, a warning label was required on all products into which it was incorporated.  In 2000, the warning labels were removed after research showed that the mechanism by which saccharin causes cancer in rats does not apply to humans.  The bottom line here is that saccharin is NOT dangerous unless you are a rat.  Even California has accepted the truth by now, so you KNOW that there’s no reason to worry.

Second in line is cyclamate.  Though it is approved for use in food in over 55 countries, cyclamate has been banned in the US since 1969.  Cyclamate is 30-50 times as sweet as sugar, making it less powerful than some other artificial sweeteners, but it is inexpensive and generally has a good sweetness profile with little off-flavor.  In many applications, it is blended 10:1 with saccharin for optimal sweetness while minimizing negative taste characteristics.

Problems for cyclamate began in 1969 when a study was published indicating that cyclamate caused bladder cancer in rats.  Though the cyclamate exposure levels used in the study were gigantic compared to those seen in human consumption, the government banned the sweetener later that year.  However, within four years, new evidence was presented to the FDA in a petition to repeal the ban on cyclamate.  A scientific review panel was convened to interpret the new studies, which included over 20 experiments using mice, rats, guinea pigs, and rabbits.  The panel concluded that there was no evidence indicating that cyclamate acted as a carcinogen.  Nevertheless, in 1980 the FDA denied the petition to allow cyclamate back into the US food supply.  Since then, research into cyclamate’s safety has continued.  To date, over 70 studies using a plethora of techniques have shown cyclamate to be non-mutagenic (not damaging to DNA).  In addition, the World Health Organization and other regulatory bodies the world over have repeatedly affirmed cyclamate’s safety over the last 30 years.  Unfortunately for those of us in the US, cyclamate got off on the wrong foot in this country and, while the rest of the world relies on the vast body of evidence indicating cyclamate’s harmlessness, our government has instead chosen paranoia and fear as its regulatory guides in this case.

Next up is aspartame, possibly the most hated of all sugar substitutes.  Aspartame is about 180 times as sweet as sugar and can lend a bitter taste to foods and drinks.  Like saccharin, it is often used in combination with other artificial sweeteners to maximize its beneficial properties while minimizing its off-flavors.  Aspartame is best known in the US as NutraSweet or Equal and is often found in blue single-serving packages.  It was approved for use in all food products in 1996, though it had previously been approved for certain uses.  Aspartame has been accused of causing brain cancer and numerous other problems due to three of its metabolites (breakdown products): methanol, aspartic acid, and phenylalanine.

The approval process for aspartame began in the mid 1970s and included a review of almost 200 studies on the sweetener.  Since its approval, aspartame has been comprehensively studied, and no evidence of carcinogenic action has been found at the levels currently consumed by humans.  Studies with mice, rats, hamsters, and dogs, using aspartame doses as high as 4000 mg/kg bw/day (milligrams per kilogram of bodyweight per day [that equals 272 GRAMS(!) of aspartame per day for a 150 pound man]), have all found no evidence for adverse effects caused by the sweetener.  Meta-analyses of aspartame safety studies have also failed to find evidence of carcinogenicity or genotoxicity.  Aspartame is one of the most heavily studied food additives of all time due to the ongoing negative attention it has gotten over the past 40 years.
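If you’d like to check that kind of dose conversion yourself, the arithmetic is simple.  Here’s a minimal sketch in Python (the function name and the 150-pound example are mine for illustration, not part of any study protocol):

```python
# Convert an animal-study dose in mg per kg of bodyweight per day
# into a total daily amount for a person of a given weight.

LB_PER_KG = 2.2046  # pounds in one kilogram

def daily_dose_grams(dose_mg_per_kg_day, bodyweight_lb):
    """Return the total daily dose in grams for the given bodyweight."""
    bodyweight_kg = bodyweight_lb / LB_PER_KG
    return dose_mg_per_kg_day * bodyweight_kg / 1000.0  # mg -> g

# The 4000 mg/kg bw/day rodent dose scaled to a 150-pound man:
print(round(daily_dose_grams(4000, 150)))  # ~272 grams per day
```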

As for its metabolites, research has shown conclusively that the exposure to methanol, aspartic acid, and phenylalanine from aspartame metabolism is far outweighed by exposure from other dietary sources.  The only legitimate risk of aspartame is to those individuals who suffer from the genetic disorder phenylketonuria (PKU).  Fortunately, everyone is screened for PKU shortly after birth, so if you have it, you know about it.  If any sugar substitute has run through the scientific gauntlet and come out the other side intact, it is aspartame.  It has been studied extensively for decades and its safety is without question.

Another popular sweetener in the US is acesulfame potassium, also known as Ace K.  It is about 200 times as sweet as sugar and is known in the US by its brand names Sunett and Sweet One.  It was discovered by accident (common theme, it seems!) in 1967 by a German chemist.  Ace K is often found blended with sucralose (a.k.a. Splenda) to produce a more sugar-like sweetness profile while masking the sometimes bitter aftertaste of Ace K.  It has also been widely used in conjunction with aspartame in the past, though in recent years sucralose has become favored due to its superior heat stability and taste profile.  Ace K was approved by the FDA in 1988 but has since come under scrutiny.  Animal studies have shown no evidence for carcinogenicity of Ace K, though a rat study did indicate that Ace K stimulates the release of insulin much like sugar does.  Despite the fact that the insulin-related study showed no hypoglycemia (low blood sugar) resulting from even the VERY large doses of Ace K given to the rodents, opponents of Ace K suggest that human consumption at much lower levels could produce a low blood sugar condition.  However, more than 20 years’ worth of science and empirical data speaks for itself, showing Ace K to be an extremely safe and effective sugar substitute.

Our next sugar substitute is sucralose, the current heavyweight champion of artificial sweeteners.  Sucralose is widely marketed in the US under the Splenda brand name, but is available in other guises.  It was discovered in 1976 in England and is approximately 600 times as sweet as sugar.  In fact, sucralose is based on sucrose (table sugar).  The difference between sucrose and sucralose is that in the latter, three hydroxyl groups (an oxygen bound to a hydrogen) have been replaced by chlorine atoms.  This change in structure makes sucralose indigestible to humans and much, much sweeter at the same time.  However, sucralose retains some of sugar’s excellent properties, such as acid and heat stability, very good solubility in water, and a sugar-like taste profile.

Sucralose has been studied extensively both before and since its approval in the US in 1998.  Over a hundred animal studies have consistently shown no evidence of toxicity, carcinogenicity, mutagenicity, or other detrimental effects from sucralose consumption.  In fact, even a dose equivalent to 1,000 pounds of sucralose consumed in a single day by a 165-pound human produced no negative effects in rats.  Even the crazies at the Center for Science in the Public Interest have deemed sucralose safe.

Of course, despite the library of evidence proving the safety of sucralose, someone will come out of the woodwork to try to throw a wrench in the works.  The claim this time is that sucralose is harmful to humans because it is a member of a chemical class known as chlorocarbons that also contains many toxic substances.  However, these claims are unfounded for a couple of reasons.  First, sucralose is almost completely insoluble in non-polar solvents like fat.  Therefore, sucralose will not accumulate in human fatty tissue like some other chlorocarbons.  Second, sucralose does not dechlorinate in the human body.  About 99% of ingested sucralose is excreted unchanged, with the other 1% undergoing limited metabolism and producing non-toxic metabolites.  Sucralose is not processed within the body in any way similar to other, toxic chlorocarbons, so generalizations about chlorocarbon toxicity made to cover sucralose are simply wrong.  Sucralose has been proven to be completely safe in all respects for human consumption.

Neotame is the most powerful sugar substitute approved for use in the US.  A chemical cousin of aspartame, it is about 10,000 times as sweet as sugar.  Despite being on the commercial market since 2002, neotame is only rarely used in the US.  It has a sweetness profile similar to aspartame’s and can also impart a similar bitter aftertaste.  Because of its incredibly high sweetening power, it may be difficult for many food manufacturers to use precisely.  Despite its drawbacks, one area in which neotame has an advantage over aspartame is in its metabolic byproducts.  While aspartame is broken down into its two component amino acids, aspartic acid and phenylalanine, neotame contains an extra group of atoms that physically blocks access by the enzymes that would normally perform the amino acid cleavage.  Metabolism of neotame therefore produces very little phenylalanine, making neotame, unlike aspartame, safe for people suffering from PKU.

Neotame has come under fire similar to that aimed at its cousin aspartame.  However, due to its limited use, no large-scale battles have erupted.  The FDA approved neotame after reviewing 113 animal and human studies that evaluated its potential toxic, carcinogenic, mutagenic, and neurological effects.  The agency determined that neotame posed no risk to humans in any category.

The last two sugar substitutes included in this review separate themselves from the rest of the class in that they are considered natural sweeteners by the FDA.  First up for the naturals is stevia.  Widely available under the brand names PureVia and Truvia and also known as Reb-A and rebiana, stevia was approved for use as a dietary supplement in the US in 1995 and as a food additive in 2008.  Commercial stevia is made from high purity extracts of the species Stevia rebaudiana and is generally 200 to 300 times as sweet as sugar.  Though it has a sweet taste, stevia’s taste profile and sweetness dynamics are quite different from those of sugar.  In addition, it can impart a significantly bitter and/or metallic aftertaste to a food product.  However, stevia is gaining in popularity as masking technologies tailored to the ingredient come of age and methods are found to make the best use of the sweetener.

Stevia’s long history begins in South America where it has been used for centuries as a sweetener and as an ingredient in local medicinal traditions.  Stevia’s regulatory problems began in 1991 when the FDA restricted the import of stevia and labeled it unsafe after receiving complaints about toxicological concerns about the plant.  However, between 2006 and 2008, a number of comprehensive reviews of stevia safety studies performed by both the World Health Organization and individual researchers concluded that the high purity extracts used commercially in the food industry do not have any carcinogenic, mutagenic, or toxic effects in humans, even at extremely high consumption levels.  In early 2009, the FDA awarded rebaudioside A, the active ingredient in modern stevia extracts, GRAS (generally recognized as safe) status.  Though stevia had a rough start in the US, the evidence now is clear and has been recognized properly by the FDA.  Stevia is a safe sugar substitute and will likely carve out a well-deserved spot in the food industry’s reduced-calorie and natural products markets.

Last but not least, we have sugar alcohols.  The name sugar alcohol actually refers to a number of different but chemically related compounds, including sorbitol, maltitol, mannitol, xylitol, erythritol, and others.  They generally have less energy (calories per gram) than sugar’s four, but they also often provide less sweetness.  However, they can be paired with other, high-power sugar substitutes to compensate for their low sweetening power.  Xylitol and other sugar alcohols are commonly used in chewing gums because they cannot be digested by the bacteria resident in our mouths and therefore do not contribute to tooth decay.  In addition, a number of sugar alcohols give a significant cooling sensation when their crystallized forms are put in the mouth, because their dissolution is endothermic (dissolving them absorbs heat from the surroundings, in this case your tongue).  It’s also worth mentioning that sugar alcohols do not have anything to do with ethyl alcohol, the compound we consume to get drunk.  They are called alcohols simply because they have a hydroxyl (oxygen and hydrogen, also known as “alcohol”) group where a normal sugar would have a carbonyl (carbon double-bonded to oxygen) group.

With the exception of erythritol, the common sugar alcohols have one big drawback: gastrointestinal upset.  Like normal sugar, sugar alcohols attract water.  When sugar alcohols pass into the large intestine, they bring quite a bit of water along for the ride.  This excess water can cause diarrhea and bloating, with the effects getting worse as the dose of sugar alcohol increases.  In fact, sorbitol is used as a laxative in certain circumstances when a quick bowel movement is needed without the use of stimulants.  The amount of sugar alcohol that will produce gastrointestinal problems varies between individuals as well as between the different sugar alcohols.  Some people can consume quite a bit with little to no ill effect, while others may have somewhat severe diarrhea from a light dose.  You just have to try them out and see.

Erythritol is unique in that it has a much higher threshold for gastrointestinal upset than other sugar alcohols.  Unlike the others, it is absorbed by the small intestine and excreted in the urine.  Because it never makes it to the large intestine, diarrhea is generally avoided.  In addition, while most sugar alcohols have 2-2.5 calories per gram, erythritol has only 0.2, making it a useful sugar substitute in low-calorie products.  However, with only 60-70% of the sweetening power of sugar and government regulations limiting its maximum concentration in food products, erythritol is almost always seen in combination with other sweeteners, whether natural or artificial.

Artificial sweeteners and sugar substitutes have always come under attack from those especially wary of new additions to the food supply.  However, in all of the cases covered in this article, scientific evidence has proven those worries to be misplaced.  Sugar substitutes offer viable solutions to the conflict between the human desire for sweet tastes and the global epidemic of obesity.  In addition, in most cases these compounds are a blessing for people with diabetes because they don’t cause the large fluctuations in blood sugar seen with the use of sugar.  Finally, sugar substitutes allow anyone with the desire to control their body composition and overall health to more easily control their body’s output of insulin and to keep their daily energy levels high and stable.  Sugar substitutes are a fantastic resource and, while the safety research must be done to protect consumers, they should be valued and used whenever appropriate to benefit the health of the public.

 
 
Gluten is a protein composite found in wheat, barley, rye, and related species of plants.  It has gained popular notice over the last few years as the culprit in celiac disease.  Celiac disease is an autoimmune disorder that causes the body to attack the small intestine in response to gluten consumption, creating an inflammatory reaction in the tissue.  As a result, the intestinal villi that line the surface of the small intestine shorten.  The villi normally act to increase the surface area of the small intestine, helping it absorb nutrients from food.  When the villi become blunted, the body is less able to utilize the food you consume, leading to weight loss, anemia, osteoporosis, fatigue, and vitamin deficiencies including A, D, E, K, and some B vitamins.  Fortunately, celiac disease affects only about 1% of the US population.  However, over the last few years, a growing trend has emerged of non-celiac individuals adhering to a gluten-free diet.  But why?  Are they on to something important or are they just mindlessly feeding on the latest fad diet hype and paranoia?

The Hartman Group, a market research firm, did some work to determine who is buying gluten-free products and why.  One of the most striking findings was that only 7.5% of the people surveyed who had recently bought a gluten-free product had celiac disease.  The other 92.5% fell into one of three categories, defined in the Hartman data as, “Those with an overall interest in health and wellness, those with an interest in ascetic-based practices of self-improvement, and the ever present fad dieters looking for the ‘flavor of the month’ diet trend.”  Clearly, the last of the three groups is the least rational of all.  Blindly going along with the latest fad diet is obviously a horrible idea.  The nature of fad diets is to rely on hype, marketing, and often outright lies in order to make a quick buck off uneducated people.  Not a plan for success.  Anyway, it’s clear that this group of gluten-free purchasers is not operating on logical principles, so they are discarded from our discussion.

The next group is possibly the most interesting, if not the most mysterious: those adhering to a gluten-free diet because they seem to equate asceticism with self-improvement.  Asceticism is the practice of self-denial in order to attain personal and/or spiritual discipline or some other benefit.  Unfortunately for these individuals, in the area of human nutrition (and many other facets of life), discomfort, denial, and limiting choices without necessity are rarely useful solutions.  The mindset of “pain = good” runs rampant through the modern exercise and nutrition culture.  Burnout sets, regularly practiced forced reps, the grapefruit diet, and the ridiculously named “bootcamp” phenomenon are all examples of exercise and dietary regimens that focus not on progress and sustainability but on maximizing exertion and discomfort.  The fact is that these misguided methods produce short-term benefits at best and are often detrimental to progress.  Limiting one’s food options without reasonable cause and believing that self-denial is a viable way to maximize nutrition and fitness are simply counterproductive.  This group’s choice to maintain a gluten-free lifestyle is based not on science, but on an entirely misguided theory of how best to achieve optimal health and wellness.

Finally, we come to the last bunch of non-celiac gluten-free fans.  These folks abhor wheat protein because of their “overall interest in health and wellness.”  While these individuals may mean well in their attempts to optimize their diet, I’d bet that most of them were simply duped at some point into thinking that gluten is bad for the general population and not just for celiac sufferers.  Let’s investigate some of the lies being spouted by the non-celiac, anti-gluten crowd and see where the “health and wellness” people might have been led astray.

First, there’s the claim that wheat products lead to blood sugar spikes and problems with insulin regulation.  While it’s true that refined wheat products can negatively influence blood sugar stability, 100% whole wheat products generally have quite low glycemic indices and are productive additions to many meals.  In addition, the wheat removed from gluten-free foods is often replaced with another refined flour, typically from potato, corn, or rice.  The high GIs of these ingredients make it unlikely that the gluten-free version of a product normally made from wheat flour will be any better than the original at controlling blood glucose and insulin levels.  In fact, the gluten-free versions are very often worse!  Unfortunately for celiac sufferers, it’s common for a patient to gain weight after being placed on a gluten-free diet.  The idea that gluten-free means better blood glucose control is simply backwards.

Next up is the idea that gluten causes “leaky gut” disease.  Leaky gut occurs when the proteins that bind together the cells lining the intestines stop working normally.  This disruption allows nutrients and microorganisms from food to pass inappropriately through the intestinal wall and into the body.  Symptoms of the disease include abdominal pain, muscle cramps and pains, malnutrition, poor exercise tolerance, and numerous other problems.  Anti-gluten fanatics would have you believe that gluten consumption causes leaky gut even in people without celiac or other digestive diseases.  However, the evidence indicates that leaky gut is not a cause but in fact an effect of celiac disease.  When gluten inflames the intestines of a celiac sufferer, the intercellular proteins of the intestinal walls can become damaged, bringing about leaky gut.  For those with a normal reaction to gluten, however, the inflammation and the resulting leaky gut do not occur.  As with many subjects in science in general and especially in nutrition, the direction of causality is incredibly important.  Understanding the difference between an association and a causal link is fundamental to effectively understanding scientific writing and to protecting yourself from the hype and lies of nutritional fanatics.  Objective assessment is key.

Clearly, gluten-free diets are appropriate for those people with hypersensitivity to gluten.  However, for those with healthy guts able to process gluten products, a gluten-free diet is not a good idea.  While I’m all for reducing high GI carbohydrates and controlling blood sugar and insulin levels properly, abhorring gluten is a terrible way to do it.  Nutrient deficiencies, inconvenience, and increased cost of food are three reasons to avoid a gluten-free diet.  Not to mention that the ideas behind applying this medical diet to healthy individuals are simply foolish.  Construct your dietary plans in a rational manner, using principles that provide you with nutritional guidelines that are not only effective, but also reasonable and sustainable within your lifestyle.  Don’t fall for the hype and misinformation of the gluten-free crowd.  If you need to avoid gluten, then definitely do what you need to do in order to take care of your body.  If you believe that you may be presenting symptoms of gluten hypersensitivity, you can easily be tested by a doctor using blood tests and an intestinal biopsy.  However, if you are not part of the 1% of Americans who suffer from celiac disease, then learn to use wheat and other grain products for their many benefits and see them as positive tools in your nutritional arsenal, not as enemies to be avoided at all costs.

 
 
Since I wrote the article about breakfast cereals in October, I've had a number of people mention their love or hatred of steel-cut oats.  While these lightly processed oats are widely considered to be the most nutritious option around, they take approximately FOREVER to cook.  However, a reader of the blog recently sent me a link to a page describing a method for preparing steel-cut oats in a much more convenient manner than the traditional "boiling from scratch" arrangement.  In addition to steel-cut oats, this recipe also incorporates some pearl barley, which I thought was an interesting and worthy addition.  The new and improved procedure:

1. Combine 1/3 cup steel-cut oats, 2 tbsp uncooked pearl barley, and 1 1/4 cups water in a microwavable bowl.  Cover and refrigerate for at least four hours, though an overnight soak will also work and might be more convenient for a breakfast application.

2. Microwave on high power for three minutes, stir well, and microwave for another three minutes.

3. You now have ready-to-eat steel-cut oats with only six minutes of cooking time!  Feel free to add chopped or dried fruits, protein powder, spices, and/or nuts to complete your breakfast.

Remember, when constructing a breakfast with this sort of starchy ingredient, make sure to balance it with a significant portion of protein.  Whether it comes from meat, dairy, eggs, or other sources, make sure it's there.  Don't rely on a carbohydrate-heavy breakfast to keep you going.  Just because something is "healthy" or "organic" doesn't mean that it's all you need for optimal nutrition.  Balance is key!

The original source article can be found here.

 
 
The use of synthetic dyes in food has been a contentious issue for quite some time, raising hackles in the political world, the food industry, and throughout the public sector.  In the middle of 2010, the Center for Science in the Public Interest teamed up with UCLA doctoral candidate Sarah Kobylewski to release Food Dyes – A Rainbow of Risks, a review of safety-related studies performed on nine food dyes.  While the review seemed quite comprehensive, the conclusions drawn from the data didn’t really seem to make sense.  CSPI is a rather inflammatory organization, often on the extreme end of conservatism when it comes to food regulations (meaning they favor heavy regulation).  Given CSPI’s history of overstatement and fear-mongering, Ms. Kobylewski’s paper reads almost as if she had written the review portion and then CSPI had come along afterward and written (or rewritten) the conclusions.  Because this report has gotten so much attention in the media since its release, I feel it’s important to cover some of the more questionable aspects of the paper and give my thoughts on some of the major points.  A little rationality can go a long way when it comes to data interpretation.  Unfortunately, Ms. Kobylewski’s paper often errs on the side of hyperbole and paranoia, much to the detriment of the public at large.

The first point that must be made is that only six of the nine dyes reviewed are used in any appreciable quantity.  The other three are either defunct or used in such small quantities that their effect on humans is almost assuredly nil.  Citrus Red 2 is a dye used to color the peels of some oranges.  While it might raise some concern if used in processed foods or other consumed products, its presence on the peel is benign.  In addition, its use is federally regulated to a maximum of 2ppm (2 milligrams per kilogram of fruit, or roughly 0.9mg per pound), which is an incredibly small amount in any sort of application.  Green 3 is next on the list of irrelevant dyes.  Weighing in at a minuscule 0.1% of total yearly FDA-certified dye production, green 3 is very rarely used.  When a green color is needed, 99% of the food industry chooses a combination of blue 1 and yellow 5.  In addition, green 3 is known to be poorly absorbed, further reducing its effect on the body.  Mouse studies yielded no evidence against green 3, and rat studies produced quite inconclusive data at very high treatment levels (1.25-5% of the total diet as green 3!).  With almost no negative data to its name, green 3 is a non-factor; even if you are especially paranoid, it is easy to avoid.

Now onto the relevant dyes, starting with blue 1 (a.k.a. Brilliant Blue).  Blue 1 comprises 4.7% of the total yearly FDA-certified dye production.  Part of this low percentage derives from the fact that blue 1 is an intensely powerful colorant and is therefore generally used in minute quantities, even relative to other major dyes.  No published studies on blue 1 produced usable data pointing to toxicity or carcinogenicity (cancer-causing action).  A lone unpublished study (suspicious? Yes.) showed some rise in rates of kidney tumors, but a dose-response relationship could not be established, making the claim of carcinogenicity quite suspect.  Two out of nine studies assessing the genotoxicity of blue 1 produced positive results in chromosomal aberration tests.  However, one study was listed without a dose of the active ingredient, and the other used a dose of 5mg/ml, which is insanely high compared to human consumption levels.

The only interesting result regarding blue 1 comes from a single neurotoxicity study in which blue 1, in combination with L-glutamic acid, partially inhibited the development of neurites, outcroppings from neurons that are associated with neuronal and cognitive development.  Studies have found significant (if sometimes loosely defined) correlations between the consumption of food dyes and hyperactivity in children.  It’s possible that there is some connection there, but given the level of current evidence for causality, it’s certainly not established.  So, if your child is very young (<1 year) or has a problem with hyperactivity, it may be worth the effort to avoid blue 1.  Otherwise, as usual, it appears to do basically nothing.

Blue 2 is the other certified blue dye and makes up 3.7% of certified dye production.  It is extremely poorly absorbed, even more so than blue 1 and green 3.  Ten out of eleven studies looking at the potential genotoxicity of blue 2 found no effect.  Studies investigating chronic toxicity of blue 2 found nothing conclusive to point towards a negative impact on humans.  Activist groups have made loud noises over a finding in one study that showed an apparent increase in brain gliomas (a tumor that arises from glial cells).  However, the data failed to show a number of characteristics that would indicate blue 2 acted as a carcinogen.  In addition, when taken in context with other studies on the same type of rats, the incidence of the brain gliomas was not unusual, even in untreated populations.  Despite the scientific evidence against the viability of the brain glioma data, it is still touted by opponents of food dyes as a reason to ban blue 2.

Now that we’ve covered the blues, it’s time to take a look at our two options for red color.  The first is red 3.  Only making up 1.4% of total certified dye production, red 3 is by far the lesser-used red dye.  Its one major place in the diet is in maraschino cherries, though it can also be found in canned fruits, candies, oral drugs, and a few other minor products.  Suspicions of genotoxicity regarding red 3 were raised when a few studies found positive evidence.  However, two of the four positive studies were performed on isolated yeast cells, which are hardly indicative of mammalian cells within a body.  One study, using the most informative protocol of all the referenced experiments, found positive results after three hours of treatment but negative results after 24 hours.  While interesting, the contradictory nature of the results makes any claims based on that data somewhat dubious.  That leaves one useful positive result out of 11 reviewed studies.  Hardly convincing.

In studies investigating potential toxicity of red 3, no effect was found at doses up to 4% of the total diet.  However, there was significant evidence pointing towards a carcinogenic effect at the highest dose.  It seems a bit ridiculous to grossly generalize results from data based on consuming 4% of the diet as dye when a human may normally consume on the order of a few milligrams per day (if even that), especially when data on lesser doses showed no effect.  However, this is one of those cases where, if you are especially paranoid, red 3 is easy enough to cut out of your diet.

Red 3’s big brother is red 40.  Red 40 is the most heavily consumed dye and comprises 41.3% of yearly certified-dye production.  Genotoxicity data is limited, but one study found evidence of DNA damage due to red 40 at very high doses.  However, a single positive study among all the negative results provides little insight into the truth of the matter.  With the data currently available, I feel that the genotoxicity of red 40 is quite inconclusive.  Toxicity and carcinogenicity testing found no reliable effect.  Opponents of dyes often claim that one study found that red 40 accelerated the development of a certain type of tumor in mice.  However, that suspicion was preliminary and was raised in the middle of the study.  By the end of the experiment, the researchers had found no acceleration of tumor appearance.  In addition, a second confirmatory study was performed to assess the risk of red 40 specifically with regard to the tumor type seen in the first trial.  That experiment also found no effect of red 40 on tumor generation.

Despite the non-issue of red 40 toxicity, other valid concerns over the dye exist.  The first is hypersensitivity reactions seen in a very small percentage of the population.  However, one’s reaction to red 40 would be readily apparent, and for the vast majority of people it will not be a concern.  The most serious issue surrounding red 40, as well as the two yellow dyes to be discussed shortly, is contamination with potentially carcinogenic impurities.  While the contaminants historically found in red 40 likely pose little risk to humans at the levels consumed, in recent years more and more dyes have begun to be imported from foreign producers like China.  Given China’s decidedly suspect history with chemical contamination and adulteration, along with the general difficulty of producing completely contaminant-free red 40, more in-depth FDA inspections of imported dyes are warranted.  However, that issue will be handled through policy and regulatory changes if it ever comes to pass in the future and should be of little concern to the day-to-day user.

The last two dyes discussed in the CSPI review are the yellows.  Yellow 5 is the second most popular dye used in food and cosmetic products.  It produces an intense neon-type yellow color and is often used in combination with red 40 or blue 1 to produce orange and purple colors.  A few studies showed some evidence for concern of genotoxicity with yellow 5.  However, a majority of studies showed no evidence for such effects and only one of the positive studies was performed in vivo, limiting the viability of genotoxicity claims.  Studies found no evidence indicating carcinogenicity of yellow 5 even at extremely high doses.

However, one negative aspect of yellow 5 that is well-established is hypersensitivity.  A small percentage of the population is allergic to the dye.  Interestingly, there is a large crossover between those allergic to yellow 5 and those allergic to aspirin.  If an individual reacts to one compound, they are likely to react to the other.  In the end though, the vast majority of consumers are not reactive to yellow 5 at all.

Finally, there is the concern over contamination, similar to that raised with red 40.  The contaminant of major concern in the CSPI report, and the one that is continually mentioned throughout the text, is benzidine.  However, when consulting the original studies used by CSPI to bolster their claims of apparently widespread benzidine contamination in yellow dyes, the data does not appear to support their arguments.  The two studies were performed in the early to mid 1990s and examined the amount of free and bound benzidine found in samples of certified yellow 5 and yellow 6.  The FDA only tests for free benzidine, and CSPI claims that bound benzidine is also dangerous because it is liberated to its free form in the human gut.  A number of holes in CSPI’s statements become apparent after reading the original documents.  First, the theory that bound benzidine is freed during digestion is simply postulated by the authors and not supported by any data whatsoever.  Second, 90% of the lots found to be contaminated with benzidine from both studies combined came from a single producer.  So, in essence, the studies didn’t uncover a widespread contamination problem, but a localized issue with one company.

Last but not least in our review is yellow 6, the “oranger” cousin of yellow 5.  In genotoxicity trials, yellow 6 produced no effect in eight out of ten experiments.  In addition, the only in vivo study found no effect.  Therefore, the evidence for genotoxicity in the case of yellow 6 is quite suspect.  The only evidence for carcinogenicity of yellow 6 comes from a single study on rats.  However, the FDA reviewed the study and found that the data lacked a number of important features that would indicate that yellow 6 was acting as a carcinogen.  Given the lack of other studies pointing to yellow 6 as a carcinogen, the data supporting an argument against the dye is extremely weak.

Yellow 6 shares the same valid concerns over hypersensitivity and contamination that pertain to yellow 5.  Reactions to the dye are extremely rare and shouldn’t affect its use in the food industry considering labeling requirements and openness of ingredient information.  Contamination is a potential (but not current) problem that can easily be fixed through more stringent tests by the FDA.  Especially considering the growth in dye importation, increased ingredient security is a priority worth pursuing.

Dyes play an important role in the food industry.  The argument that natural colorants could simply be substituted for the synthetics isn’t a feasible option for many companies considering consumer demand for low food prices.  In addition, some natural colorants produce similar hypersensitivity reactions to those rarely seen with synthetics.  Importation of dyes is a valid concern and should be addressed by the FDA through enhanced certification testing.  The truth about food dyes is that there is little reason for concern.  The amount of dye we consume on a regular basis is incredibly small and, even considering the gigantic doses seen in many animal studies, the risks posed by these dyes are largely negligible.  CSPI is an organization that has a history of overstatement, hyperbole, and fear-mongering.  The information they put out is at the very least misleading and is most definitely untrustworthy.  Too much of anything is a bad idea and the same is true for dyes.  There’s no reason to be especially fearful of dyes and avoid them altogether, but consuming enormous amounts of color every day is also likely not the best plan.  Moderation is key in most issues related to the intersection of food and human health.  The subject of dyes is just another perfect example.

 
 
These days, it’s pretty much common knowledge that calcium helps to build strong bones and prevent osteoporosis as we age.  There’s been a ton of media put out to educate the public on the need for proper calcium consumption, and there are innumerable calcium supplements available on the market today, ranging from pills to drinks to chocolate flavored chews.  However, calcium is not the only player in the bone health game.  Vitamin D has been shown to exert a major influence on the regulation of bone mass, and unfortunately it has not gained the same notoriety as calcium.  It’s been estimated that approximately 40% of the adult population over the age of 50 may be deficient in vitamin D.  Often the problem goes untreated because it causes few symptoms that would be noticed on a daily basis.  It’s time to learn about vitamin D and how best to ensure the health of your bones.

Vitamin D is produced naturally by the body in response to skin exposure to UVB sunlight, the highest energy ultraviolet radiation that passes through the ozone layer in any substantial amount.  However, there are a number of reasons why endogenous (within the body itself) production of vitamin D often does not satisfy the body’s needs.  First, vitamin D production in response to sunlight declines with age.  Ironically, older people also tend to suffer some of the harshest consequences of vitamin D deficiency.  Also, sunscreen use and avoidance of sun exposure have become more popular due to growing concerns over skin cancer and other forms of damage.  A sunscreen of SPF 8 cuts down on the production of vitamin D by about 95%.  Many sunscreens are SPF 30 and above, which stop almost 100% of vitamin D generation.

The other source of vitamin D is the diet.  In 1923, a biochemist named Harry Steenbock discovered that irradiating many foods with ultraviolet light increased their levels of vitamin D.  By the 1930s, governments began to fortify milk with vitamin D in an attempt to eradicate rickets, a childhood bone disease caused by vitamin D deficiency.  However, milk is pretty much the only major food with added vitamin D.  Even other dairy products like yogurt and cheese are often left unfortified.  Adding to the problem is the fact that these days many people no longer drink milk.  Besides milk, eggs, fatty fish, beef liver, and mushrooms naturally supply vitamin D.  Vitamin D supplements are also widely available in both health food stores and groceries.

Whether produced internally or consumed, vitamin D undergoes a number of conversion steps before becoming fully active in the body.  First it is transported to the liver where it is hydroxylated (gets an oxygen and a hydrogen atom added to it) and then stored until it is needed.  Under conditions of low calcium, the vitamin D is transported through the blood stream to the kidneys where it is again hydroxylated to form calcitriol, the final active form of vitamin D.  Calcitriol then binds to a receptor in the nuclei of certain cells, causing the cells to increase their production of transport proteins that help absorb calcium from food through the intestinal wall.  As you can see, vitamin D serves as the body’s messenger, carrying the news of low calcium levels to the cells capable of fixing the problem.  Without adequate vitamin D levels, this signaling pathway can’t function properly, the cells never get the message, and as a result calcium is poorly absorbed.

Generally, a good goal is to consume 1,000 IU of vitamin D per day for people aged 19-50, while 1,200 IU per day is appropriate for people over 50.  As with all vitamins, and especially the fat soluble bunch (vitamins A, D, E, and K), be sure to count all of your vitamin D sources in order to maintain a healthy intake level without going overboard.  Vitamin D toxicity is rare, but somewhere between 2,000-10,000 IU/day for adults, or around 1,000 IU/day for children, sustained over a period of about six months, can cause serious problems.  So make sure to reach your goal, but also be aware of your total vitamin D intake.  Vitamin D can be the difference between strong bones and debilitating injury.  Set yourself up for a lifetime of health, mobility, and vigorous activity with proper levels of vitamin D and calcium!
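Since the goal is a simple total across sources, it’s easy to tally.  Here’s a minimal sketch in Python; the individual source amounts are hypothetical label values I made up for illustration, and only the 1,000 IU goal comes from the numbers above:

```python
# Tally vitamin D from all sources against the daily goal.
# Source amounts below are hypothetical label values, not data
# from this article; the 1,000 IU goal is for ages 19-50.

sources_iu = {
    "supplement": 600,       # hypothetical
    "fortified milk": 250,   # hypothetical
    "fatty fish": 150,       # hypothetical
}

GOAL_IU = 1000

total = sum(sources_iu.values())
print(f"Total: {total} IU -> {'goal met' if total >= GOAL_IU else 'short of goal'}")
```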

 
 
The debate over the consumption of raw dairy products has raged for years.  Proponents of raw dairy say that the pasteurization and homogenization processes that modern dairy products undergo destroy beneficial nutrients and make the product less digestible due to the degradation of enzymes present in raw milk.  Opponents of the raw dairy industry claim that the risk to public health from the consumption of raw dairy products is too great to justify its legalization on a large scale.

In most states, it’s possible to procure raw milk through legal channels.  A few states provide for legal retail sale of raw milk.  Other states allow the sale of raw milk only direct from the farm.  Some states have taken policy steps to allow the use of cow-share programs in which a consumer buys “shares” of a dairy cow or herd and are thereby legally entitled to the milk from the animals.  There are also states that have no official laws on cow-share programs but generally allow them.  Finally, a few states allow raw milk to be sold as pet food and not for human consumption.  However, it’s not uncommon to find milk bought as pet food ending up on the breakfast table.

It’s true that the heating that occurs during pasteurization degrades some compounds found in raw milk.  Vitamin C and enzymes like lipase and amylase can be destroyed.  Calcium is made somewhat harder to absorb.  Immunoglobulins (also known as antibodies) present in raw milk are also sensitive to heat.  However, it’s been shown that they are not degraded as severely as the raw milk crowd would have us believe.  Between 59% and 76% of immunoglobulin activity was retained following pasteurization, while homogenization, skimming, and standardization of the milk to 1-2% fat had no effect at all.  However, it’s worth noting that UHT (ultra high temperature) processing does effectively degrade immunoglobulin activity.  UHT processing is far more common in Europe and South America than it is in the US, though a number of US organic milk producers utilize UHT instead of standard pasteurization.  To be sure, always read the packaging!  UHT processed milk can generally (but not always) be identified by its storage at room temperature.  Most dairy products that require refrigeration are pasteurized, not UHT processed.

In contrast to the relatively mild (and often quite overstated) negative effects of pasteurization on the nutrients in milk, the risks and consequences of raw dairy consumption can be quite severe.  In 2010 alone, there have been at least ten outbreaks stemming from the consumption of raw dairy products.  At least 105 people have been sickened so far from bacteria including E. coli O157:H7, Campylobacter jejuni, Listeria monocytogenes, Staphylococcus aureus, Salmonella, and Brucella, as well as a parasite called Cryptosporidium parvum.  Clearly, there is a reason why almost universal pasteurization of dairy products was established decades ago through governmental regulation.  In 1938, milk caused 25% of all food- and water-related sicknesses.  In 1993, that number was 1%.  The evidence for the positive impact of dairy pasteurization on public health is undeniable.

When choosing dairy products, always be sure to consume those that have been properly treated through standard HTST (high temperature, short time) pasteurization or UHT processing.  Whatever modest nutrient degradation the heat treatment of dairy products produces, it’s more than worth it to avoid food-borne illness.  Remember that vitamins and minerals can be had easily in the form of cheap, widely available supplements.  Err on the side of caution and buy properly treated dairy products for the sake of your family’s health as well as your own.

 
 
Fish and other seafood are often fantastic sources of nutrition.  Many species are high in protein and some even supply relatively high concentrations of omega-3 fatty acids.  Unfortunately, our oceans and fisheries are facing a global crisis that may prevent future generations from enjoying and benefiting from fish and seafood.  Overfishing and poor fisheries management practices are destroying fish populations and severely polluting waterways.  If we continue on our current path, many of our favorite seafood items will simply become unavailable.  As a consumer, you can make a difference with your purchasing choices in the grocery store and the restaurant.  Choosing the right marine foods will not only directly help maintain current populations but may also impact the practices of fishers and aquaculturists (fish farmers) for the better.

Overfishing is a somewhat complicated subject, owing to the complexity of interactions between different species within an ecosystem, the various influences on population growth rates, and the different types of fishing and methods of aquaculture.  However, there are some clear principles that are important and easily understood, even by those not in the fishing industry.  The first is that every species matters.  Some may question the importance to an ecosystem of halibut or flounder, for example.  The truth is that the homeostasis of energy flows through an ecosystem is finely balanced through predation, breeding, disease, adaptation, and other factors.  When a particular species is removed from an environment, or even significantly reduced in number, it can no longer fulfill its natural role in the system.  The result can be an explosion in the population of other organisms that compete for resources with the depleted species or that usually serve as its food.  That imbalance may then affect other species of both plants and animals that live within the same system.  To maintain biodiversity and a stable ocean environment, it is imperative that we prevent overfishing of all species.

It’s also important to understand why some species are more susceptible to overfishing than others.  Fish that live in deep, cold waters like orange roughy generally have slower metabolisms, mature later in life, and breed less frequently than those that live in shallower, warmer waters.  Fishing methods like deep-sea trawling are especially harmful to these sensitive species because they can be extremely efficient at catching these deep dwellers and are able to operate for long periods of time.  As a community’s breeding population is fished out, the ability of the group to replenish itself naturally decreases.  At a certain point, the spawning rate sinks low enough that the population as a whole begins to decrease.

While aquaculture at first appears to be a fantastic alternative to open water fishing, it is not without its drawbacks.  Because fish farms house large numbers of fish in small areas, they can produce waste water with extremely high concentrations of nutrients and bacteria.  When these products are released into the surrounding waters, they can negatively affect native species and produce algal blooms, throwing the local ecosystem far out of balance.  In addition, when the algal blooms eventually die off, their decomposition can deplete the water’s oxygen level.

Aquaculture has also been responsible for the introduction of invasive species into the local environment.  The newcomers can sometimes outcompete native species for food and other resources, decimating the native populations.   Finally, foreign organisms may carry parasites or diseases to which native organisms have no defense.  The Japanese oyster drill is an example of a detrimental piggy-backer, coming to North America in the early 20th century within shipments of Pacific oysters.

So, now that you’ve heard all of the doom and gloom, what can you do to help?  Fortunately, the answer is easy.  Choose fish and seafood products made from species farmed or fished using sustainable practices.  There are a number of resources available to help you get on the right track.  One of the best is the Monterey Bay Aquarium’s Seafood Watch Pocket Guides website.  They have downloadable references that help you choose sustainable fish species specific to each region of the US.  Another excellent resource is the Marine Conservation Society’s (MCS) fish purchasing guide.  It even breaks down which fish are best to buy by month.  Finally, look for sustainability ratings for seafood products in supermarkets.  More and more suppliers are packaging their fish with color-coded labels indicating a level of sustainability certified by independent organizations like MCS.  They are an easy way to quickly assess what you’re buying and to help make more informed decisions.  Choose the right seafood products and benefit from their excellent nutritional properties while helping to save fish populations from collapse.  Do your part and we will all be better off in the long run!

 
 
Exercise and nutrition are indispensable in the maintenance of long-term health as well as the development of winning athletes.  Two of the most misunderstood areas of exercise planning are frequency and volume.  In other words, how often should you exercise and how much work should you perform during each session?  Unfortunately, without proper regulation of both of these aspects of your training program, your results will likely be less than optimal.  The good news is that proper frequency and volume are easy to implement within your program, and you can soon be reaping the results!  This article covers mostly strength training programs, but the principles of proper frequency and volume apply to cardio as well.

Exercise frequency is defined as the number of training sessions you perform over a certain amount of time.  For convenience, training schedules usually are planned to repeat on a weekly basis, but there’s no reason why a schedule can’t be designed around a monthly or even longer cycle.  The proper frequency for you in a particular training program depends upon two major factors: your ability to recover from intense training and the way that you choose to divide your exercises.  For example, assume you find, as many people do, that you progress most efficiently in strength and muscle size by working each major muscle group or movement once a week.  There are myriad ways to divide that work.  One option is to perform a whole body workout once per week.  You could alternatively split upper and lower body exercises into two separate days.  You could also do a “push, pull, squat” type of division in which you work all of your pushing movements on one day, your pulling movements on another, and your squatting or lower body movements on a third.  There are others, but those three can be effective for a large percentage of the population.

If, on the other hand, you find that you recover faster than most (or are a rank beginner) and progress better working each muscle group or movement twice per week, then your split needs to be changed to fit your needs.  Since you do NOT want to be in the gym six days per week, a “push, pull, squat” split working each twice a week is not going to work.  However, you could reasonably perform an upper/lower split twice per week for four total training sessions.  You could also choose to execute two full body workouts per week, instead.  As you can see, your recovery ability has a say in the way that you divide your work, but you also need to keep your total number of work days per week under control.  Personally, I find that two to four training sessions per week is best for most people, assuming that each session is intense and focused.

Now that you have your training frequency worked out, you need to decide on the volume of work that you will do during each session.  Exercise volume can have a number of definitions, but the one that I find to be most useful for general application is: Volume = Repetitions x Sets x Exercises.  What that means is that the total training volume is found by multiplying the approximate number of repetitions per set (you may miss a few in there when things get tough, but that’s ok) by the number of sets per exercise and multiplying that product by the total number of exercises you plan to perform.  It’s worth noting that the first one or two warm-up sets of each exercise are not counted towards the total.  Count only sets that make you work at least moderately hard.

For example, if I were doing a “push, pull, squat” split and I was planning my pushing day, I might set this schedule as such (not including warm-ups):

Barbell Bench Press 3 (sets) x 8 (reps)
Dumbbell Overhead Press 4x5
Rear Deltoid Raise 3x10
Triceps Pushdown 3x8

My total volume would be calculated by multiplying 7.75 (the average number of reps) by 3.25 (the average number of sets) by 4 (the number of exercises), for a total of just over 100 (100.75, to be exact).  While this number means little on its own, over time you can compare the volumes used in different programs and find a range that allows you to progress most efficiently.  In fact, various areas of your body may require distinct amounts of volume due to variances in muscle fiber type distribution and other physiological factors.
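
If you’d rather let a script do the arithmetic, here’s a minimal Python sketch of that calculation (the exercises and numbers are just the example above, and the variable names are my own):

    # Session volume per the formula above:
    # average reps x average sets x number of exercises (warm-ups excluded).
    session = [
        ("Barbell Bench Press", 3, 8),     # (exercise, sets, reps)
        ("Dumbbell Overhead Press", 4, 5),
        ("Rear Deltoid Raise", 3, 10),
        ("Triceps Pushdown", 3, 8),
    ]

    num_exercises = len(session)
    avg_sets = sum(s for _, s, _ in session) / num_exercises   # 3.25
    avg_reps = sum(r for _, _, r in session) / num_exercises   # 7.75

    volume = avg_reps * avg_sets * num_exercises
    print(f"Session volume: {volume:.2f}")  # 100.75 -- "just over 100"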

Volume also is useful when assessed on a weekly and even monthly basis.  My total volume per week helps me relate how I feel and how I am progressing overall to the amount of total work I perform.  The stress put on the body by intense training is not localized to the muscle group being worked; it is systemic, affecting immune function, energy levels, and other “whole body” metrics.  Overtraining is a state induced by performing too much work for too long a time.  Its symptoms include a lack of training progress, disinterest in exercise, excessive fatigue, poor mood, and lowered resistance to disease.  Avoiding overtraining is essential to maintaining consistent, long-term improvement.  Keeping track of your total exercise volume over periods like weeks and months can help you adjust your training plan to prevent overtraining and maximize your progress.
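
To make that bookkeeping concrete, here’s a rough Python sketch of how you might tally weekly volume from a training log.  The split and numbers below are hypothetical, not a prescription:

    # Weekly volume: sum each session's volume, where a session's volume is
    # average reps x average sets x number of exercises (warm-ups excluded).
    def session_volume(exercises):
        """exercises: list of (sets, reps) tuples for one session."""
        n = len(exercises)
        avg_sets = sum(s for s, _ in exercises) / n
        avg_reps = sum(r for _, r in exercises) / n
        return avg_reps * avg_sets * n

    # A hypothetical week on an upper/lower split, each performed twice.
    week = {
        "Mon (upper)": [(3, 8), (4, 5), (3, 10)],
        "Tue (lower)": [(4, 6), (3, 8), (3, 12)],
        "Thu (upper)": [(3, 8), (4, 5), (3, 10)],
        "Fri (lower)": [(4, 6), (3, 8), (3, 12)],
    }

    weekly_total = sum(session_volume(s) for s in week.values())
    print(f"Weekly volume: {weekly_total:.1f}")  # compare across training cycles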

As an example, if I completed a 16-week training plan and found that I felt more and more run down and less interested in training as the cycle went on, it is plausible that the volume and/or frequency was too high.  When designing the next training schedule, I would calculate the total volume of work and make sure that it is less than that of the previous, non-optimal cycle.  Through this process of analysis and revision, I will eventually find a balance of volume and frequency that allows the best progress and prevents problems with overtraining.

Volume and frequency are integral parameters of any strength training program, and getting them right can be the key to optimizing your results.  Too much volume or frequency will prevent the body from recovering properly from one session to the next and can negatively impact not only strength and muscle gains, but also immunity, mood, and overall wellbeing.  Pay attention to your body and use that feedback to help pinpoint the amount of training that works best with your particular physiology!

 
 
Consuming an optimal amount of protein to support efficient body composition management and sports performance is often difficult without the use of animal products, particularly flesh.  However, over the last two decades, meat has come under scrutiny both in research and the popular media for its role in the development of chronic diseases and the fact that the meat industry is rife with cases of animal cruelty.  The truth of the matter is that humans are specifically adapted in many ways to consume meat and it should be a part of almost all healthy diets.  Therefore, it’s important to understand both the benefits and detriments to the body, the planet, and society that come from regularly consuming various meat products.

The first stop on our carnivorous journey is poultry, including chicken, turkey, and duck.  Nutritionally, poultry can be a fantastic source of lean protein.  Chicken and turkey breasts are classics in the world of fat loss and muscle building for a reason.  In addition to protein, the dark meat of poultry is a good source of fat soluble vitamins as well as some minerals including selenium and zinc.  In 2007, the US per capita yearly consumption of poultry products was about 75 pounds!

However, eating poultry can have a downside.  While the use of hormones in poultry has been banned for decades (that’s right, nobody uses them!), antibiotics are still used in the industry to prevent the outbreak of disease amongst animals, especially in large-scale facilities where birds live in very close quarters.  It has been shown in a few cases that widespread antibiotic use in animals can help bring about resistant strains of bacteria.  However, the incidence of such adaptations is low and it has been argued that the benefits to human health (assuming our current rate of poultry consumption) of including antibiotics in chicken feed may outweigh the potential risks of bacterial resistance to the drugs.

The ecological implications of large-scale poultry farming represent another issue worth considering when consuming avian products.  Runoff from poultry farms can contain high levels of nitrogen, leading to algal blooms that can devastate waterway ecosystems.  Waters contaminated with waste products can also harbor infectious bacteria that originate in farm animals.  In addition, high nitrate concentrations in drinking water, known to come from poultry farm runoff, can increase the risk of methemoglobinemia in infants.

Finally, some people raise ethical or moral arguments against large-scale poultry farming practices.  In many commercial facilities, birds are kept in very small cages and live in high densities.  While this kind of production is necessary for the low prices that we demand for our food at our current level of consumption, many people see it as cruel and inhumane.  While the solutions for many problems with large-scale poultry farming may be found in smaller facilities with more natural living conditions and nutrition standards, there is the trade-off of price.  You will pay more for birds grown in less cost-efficient environments.

Next, let’s address pork.  Pig meat can range widely in fat content, much like beef.  When used properly, pork products can serve as excellent lean protein sources and can produce great results in a fat loss or muscle building program.  As with poultry, however, there can be detriments to pork consumption.  From a nutritional standpoint, high fat pork products are very common in both restaurant and home cooked meals.  The classic American breakfast often includes pork sausage or bacon slices.  Neither of these foods can be recommended as a healthy source of protein, to say the least.  In addition, many pork products are highly processed or cured.  Processed and nitrate-cured meats have been implicated in higher risks of cancer, heart disease, and diabetes.  As with all sources of meat, the less processed the better.

On the ecological side of things, pig farms produce water pollution problems similar to those of poultry farms.  Bacteria and nutrients in farm waste leach into the waterways, at times spreading infectious disease and causing ecosystem imbalances.  In addition, large hog farms often produce hydrogen sulfide gas, which can cause ill effects in humans; in high concentrations, exposure to the gas has killed farm workers.  Some studies have also shown detrimental effects of hog farm air pollution on those living in close proximity to the farms.

As with poultry, some large-scale hog farms utilize production practices that many see as inhumane.  Hogs are often kept in confined spaces, especially when giving birth and nursing.  While this separation is necessary to prevent the mother from accidentally rolling onto her babies, it is thought to be distressing to the animals.  Also, disease is easily spread due to the crowded living conditions of a hog pen: salmonella, gastrointestinal infections, and viruses like porcine parvovirus can run rampant through a herd.  Not all farms are horrible torture factories, to be sure.  There are good and bad farms and farmers, as in every industry.  But it’s important to understand what can happen on the unfortunate end of things.

Finally, it’s what’s for dinner.  Beef, like pork and many other quadrupedal land-dwellers, varies widely in its fat content.  Eye of round, for example, is exceedingly lean, comparable even to chicken.  On the other hand, ground beef can be over 18% fat by weight!  In addition, the last decade has seen an explosion of grass-fed and free-range cattle farms.  The meat from these cows has been shown to have a higher concentration of healthy fats, including omega-3 fatty acids, as a result of their more natural diet.  While these high-end meats may not be a possibility for everyone (or even many people), it’s important to choose your beef source and cut wisely in order to fit it properly into your healthy nutrition plan.

While beef can be a beneficial part of your diet, its production strains the environment.  There are the familiar problems with farm runoff getting into waterways, carrying along with it bacteria like E. coli as well as chemicals like ammonia, oil, and grease.  In addition, cows produce methane, a greenhouse gas roughly 20 times more efficient at trapping heat than carbon dioxide.  In the US alone, cattle produce 5.5 million metric tons of methane per year, accounting for 20% of our country’s methane emissions.  Globally, ruminant livestock release over 80 million metric tons of methane annually, about 28% of the total produced by man-made activity.

Whatever meat product you consume, some basic guidelines can be applied to choose the healthiest cuts and prepare them in a way that maximizes benefits and minimizes risk.  First, choose generally lean meats and put in the minimal effort needed to trim visible excess fat from the cut.  Fish, of course, is the exception to this rule: fatty, deep-water fish are fantastic sources of omega-3 fatty acids and should be consumed regularly.  When cooking meat, keep in mind that carcinogens are formed when flesh is cooked at high temperatures.  Polycyclic aromatic hydrocarbons (PAHs) and heterocyclic amines (HCAs) are the two major culprits.  To help prevent the formation of these compounds, cook meat in a moist environment like a stew or stir-fry and cook at an appropriate temperature.  Keep in mind that fat from meat can cause flare-ups on grills and barbecues that can char meat and produce carcinogens; leaner cuts will tend to flare less.  Finally, make sure your meat is cooked to a proper temperature to prevent food-borne illness.  Poultry should reach an internal temperature of 165 degrees Fahrenheit; ground beef and pork, 160 degrees; pork roasts, 150 degrees; and beef roasts, 145 degrees.  It’s definitely a case of better safe than sorry when it comes to food bugs.
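
For the checklist-minded, those temperature targets drop neatly into a tiny lookup table.  Here’s a minimal Python sketch (the category labels are my own shorthand, and a real food thermometer still does the actual work):

    # Minimum safe internal temperatures (degrees Fahrenheit) from the
    # guidelines above.  Category names are informal labels, not official terms.
    SAFE_TEMP_F = {
        "poultry": 165,
        "ground beef": 160,
        "ground pork": 160,
        "pork roast": 150,
        "beef roast": 145,
    }

    def is_done(category, measured_f):
        """True if the measured internal temperature meets the target."""
        return measured_f >= SAFE_TEMP_F[category]

    print(is_done("poultry", 162))  # False -- keep cooking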

The lesson here is to take some time to learn about what you are consuming.  Personally, I am a big meat eater.  I utilize protein heavily in my nutrition plan as a tool to optimize my body composition and sports performance.  I understand the global consequences of my diet, but my health (and, admittedly, self-interest) outweighs my ecological considerations on this issue.  I contribute to conservation in other ways, like saving energy and water, but food is where I draw the line.  Everyone has their own cost-benefit analysis to do when it comes to the foods they eat, both nutritionally and in terms of ecological and societal impact.  Where you draw the line is up to you, but my goal is to help you make a more informed decision.  When you do eat meat, source it wisely, choose the right cut, and always prepare it in the safest manner you can.  Meat is a beneficial and tasty part of a normal human diet, so use it safely and intelligently to reap its benefits while minimizing its risks to both body and planet.

 
 
Probiotics have been gaining in public attention and commercial success over the last few years.  Products like Dannon’s Activia yogurt in the US and Unilever’s Latta margarine in Germany showcase some of the potential health benefits from probiotics.  One excellent definition of a probiotic comes from a paper published by researcher Roy Fuller in 1989.  He characterized a probiotic as “a live microbial feed supplement which beneficially affects the host animal by improving its intestinal microbial balance."  What that means in simple terms is that probiotic foods help to encourage the growth of healthy bacteria within your intestinal tract.  In addition to providing benefits specific to each strain of probiotic bacteria, an increase in numbers of helpful bacteria may also generally decrease the population of harmful bacteria in your gut.

While probiotics as a class are generally advantageous to digestion and health, it’s important to note that each genus, species, and strain of probiotic bacteria can lend its own specific effects to the body.  While the evidence for many probiotic effects lacks bulletproof scientific confidence, many are supported by well-designed studies.  It is worth taking a look at the results of just a few of the multitudinous studies linking the consumption of probiotic bacteria to health benefits.  For those of you not used to scientific naming conventions, the bacterial names appear in italics.
  • Lactobacillus casei Shirota - lowered recurrence of bladder cancer.
  • L. acidophilus and B. infantis - reduced rates of overall mortality and necrotizing enterocolitis in infants.
  • L. rhamnosus GG and Bifidobacterium lactis BB-12 for prevention and L. reuteri SD2222 for treatment - acute diarrhea caused by rotavirus.
  • Saccharomyces boulardii - reduced diarrhea in travelers and prevention of diarrhea caused by Clostridium difficile resulting from antibiotic treatment.
  • Mix of lactobacilli, bifidobacteria, and streptococcus species - prevented relapse of inflammatory bowel disease symptoms.
  • B. lactis HN019 and L. rhamnosus HN001 - enhanced immunity in the elderly.
As you can see from the results of these studies, probiotics may play a role in cancer prevention, immunity, and the prevention and treatment of various infections.  However, it’s important to realize that there are many delivery systems for probiotic organisms and, depending upon the species and the method of transit, few (or even none) of the helpful bacteria may survive to colonize the gut.

For those that do make the trip successfully, a technology complementary to probiotics has been developed to aid their growth.  Prebiotics are food ingredients that are not digested by humans but that can be utilized by probiotic bacteria to spur their growth and aid their survival in the digestive tract.  They can also provide the building blocks used by bacteria to synthesize compounds beneficial to the host human.  Prebiotics generally take the form of carbohydrates and are often also classified as soluble fibers.  Popular prebiotics found in many food products include various types of oligosaccharides as well as inulin.  An interesting facet of prebiotic function is that the area of the digestive tract in which a prebiotic nourishes its target bacteria depends upon the chemical chain length of the prebiotic.  Short-chain prebiotics are fermented quickly, allowing them to feed bacteria inhabiting the earlier sections of the colon.  Longer-chain prebiotics ferment more slowly and are consumed by bacteria living further along in the colon.  So-called “full-spectrum” prebiotics are composed of compounds of many different chain lengths and are able to nourish the entire colon.

A final category of food product that is currently undergoing growth is the synbiotic.  Synbiotic foods contain both probiotic bacteria and prebiotic nutrients; the idea is to get both the organism and its food in one shot.  While it’s a great idea, always be mindful when choosing synbiotics to ensure that both the bacteria and the prebiotics are supplied at levels that have been shown to be beneficial.  As with all supplements, unscrupulous companies often include only minuscule amounts of expensive compounds simply to support labeling and marketing claims.  Do your homework and make sure that you’re getting your money’s worth.

 

    Author

    Rob Bent is the founder and lead nutrition counselor at Nutrition Perfected.  He is a multi-sport athlete and works constantly to maximize sports performance through scientifically-guided nutritional optimization.
