One of the most self-defeating comments I often hear from those I counsel on nutrition and exercise is that they “can’t make progress” because they are [insert age here]. You hear the same thing from individuals who are in their sixties, fifties, forties, and even thirties! It’s amazing how little confidence many people have in their bodies’ adaptability and capacity for positive development. Fortunately, the truth is that even quite elderly folks can make fantastic progress in areas including muscular strength, balance and stability, cardiovascular function, and overall measures of wellness (blood lipid profile, blood pressure, and others). I’d like to present just one example of a person who has made great progress in health and ability despite some very serious setbacks. She is living proof that no matter your age and level of fitness, you can make excellent improvements if you put your mind to it.
Barb was active as a young adult, skiing throughout college and overcoming a minor weight gain during those years through the Weight Watchers program (decades ago, when it was a bit less “commercial”). In her late thirties, Barb got back into ice skating, a sport she had participated in as a child but had given up long ago. For the next twenty years or so, she continued skating while balancing a job and raising two young children. Her serious dedication to skating and solid nutritional foundation allowed her to maintain a satisfactory, stable body weight and good levels of strength and fitness. It goes to show that, even with a real life to handle and an exercise regimen that is, at most, of moderate intensity, portion control and practical application of correct dietary fundamentals can stand you in good stead.
At 55 years old, Barb stopped skating and began to pursue professional photography, another lifelong passion. However, she kept up with her semiannual skiing trips. Unfortunately but not surprisingly, over the next couple of years she found herself less steady on her skis and felt that her ability to perform at her best had diminished along with her regular physical activity. Luckily for Barb, a family member with an interest in weight lifting encouraged her to begin a resistance training program in order to regain, and even improve, her muscular strength and overall stability. At 58, Barb began a full-body weight training protocol with a professional trainer. Despite her misgivings at the outset, she made excellent progress and began to notice positive changes not only in her strength and performance, but also in the size of her muscles and the shape of her body. She was surprised that a weight lifting program could produce such big improvements in a 58-year-old woman. But that’s the beauty of the human body: no matter how old you are, if you eat correctly and exercise properly according to your goals, you can improve both your physical ability and your body composition.
Unfortunately, after six months of productive training, Barb was dealt a terrible blow: she was diagnosed with endometrial cancer, a cancer of the uterine lining. She underwent almost a year of treatment, including rounds of chemotherapy and radiation. Her body was hit hard by the harsh medications and procedures, but she fared far better than many other people facing similar situations. Her doctors and fellow cancer survivors commented on her strength and toughness throughout the treatment. In fact, she credits her improved physical condition going into the diagnosis with her ability to withstand the punishing course of treatment.
Despite being depleted and weakened by the chemo and radiation, Barb survived her treatments and was told that her cancer was in remission. Less than six months later, she was back in the gym with a new trainer, ready to begin the process of rebuilding her body. Now, a year and a half later, the only remnant of her battle with cancer is the slightly softer hair she grew back after losing much of her original hair. She has been making consistent progress since undertaking her new weight training protocol and continues to push for improvement. At 61 years old, she is proof that age is no excuse for poor health or performance.
Keep this story in mind next time you feel like you might be “over the hill”. If a cancer survivor in her 60s can make noticeable improvements in her physical strength and body composition, so can you. Fortunately, we are born into an incredibly adaptable machine. Even if it’s been mistreated for decades, it will respond positively if you make the choice to change today. Excellent nutrition and properly executed exercise can make a big difference in how you look and feel, no matter what year you were born. In the upcoming new year, make your health and ability your biggest priorities and don’t get yourself down because you feel too old to improve. Whether you’re 30, 40, 50, 60 or beyond, the right fundamentals can make a difference for you.
Protein is slowly becoming better recognized for its uses beyond basic muscle repair and maintenance. Academic research and the mass media together are starting to spread the word. Hopefully consumers will take notice and urge the food industry to direct their R&D efforts toward foods that maximize protein’s multiple beneficial effects, including satiety (the feeling of being full), blood sugar control, and a relatively high thermic effect (the energy used to digest dietary protein and incorporate it into the body). I’ve chosen two recently published peer-reviewed journal articles and one mass media story to share with you, highlighting some up-to-date information on protein and the human diet.
The first journal article (reference #1) describes a study performed by an international team of researchers and published in the New England Journal of Medicine at the end of November. This study investigated the utility of four different kinds of diets in keeping off lost weight: low protein content with a low glycemic index, low protein with a high glycemic index, high protein with a low glycemic index, and high protein with a high glycemic index. 548 individuals completed the study, a large sample size that lends credence to the results. If the sample size of a study is low, conclusions drawn from its data are less likely to apply to the general population. In this case, however, they had plenty of subjects.
The researchers found that the best diet for maintenance of lost weight was one that had a relatively high protein content and a relatively low glycemic index. If you’ve been reading this blog regularly, these results should come as no surprise. As we know, a low glycemic index diet helps to maintain satiety, stable energy levels, and low insulin levels, keeping the fat production machinery in low gear. A high protein content is beneficial on all levels, complementing the generally low glycemic index of the entire diet.
The second study (reference #2) was executed by a team of US researchers and was published in the journal Obesity in September. They aimed to evaluate the effects of protein consumption and meal frequency on hunger and satiety in overweight and obese males. They included only 13 subjects in this study, but it still serves as a good base for further research. The study participants were assigned to eat either 14% or 25% of their calories as protein. In addition, they were divided again into groups eating either three or six times per day.
The researchers concluded that a higher protein diet significantly increased satiety and that eating fewer meals may also help you feel fuller longer. While the first conclusion is not surprising, the idea that eating fewer meals may actually help satiety is unexpected. Generally, a greater meal frequency is suggested to help curb hunger, an effect I have found in my own counseling experience. However, the data on this issue was somewhat conflicting, possibly due to the quite small sample size of 13 individuals used in the study. I would suggest that a similar study be performed using far more subjects to clarify the results of this trial. In addition, I would also like to see a third group included that consumes around 40% of their calories from protein. I would hypothesize that the increased satiety seen in the 25% protein group vs. the 14% protein group in this initial study would be even more exaggerated with 40% of calories consumed as protein.
Finally, we have an example of the same theme presented by the mass media for consumption by the general public (reference #3). Unlike the journal articles, which present fairly hardcore statistical evidence and lengthy descriptions of their methods and reasoning, this MSNBC release lays out some simple principles and suggestions for incorporating protein into an everyday diet. They also back up their claims with evidence gleaned from recent scientific articles, which is nice to see. They touch on the thermic effect of protein, as well as its satiety-inducing properties. The article also mentions the current push to raise the FDA guidelines on suggested protein intake, which is a fantastic idea, in my opinion. We need more articles like this published by large, popular media outlets. The public needs to get information in simple, bite-sized packages, and I’m glad to see MSNBC doing their part.
Both scientific journal articles and mass media stories can be good sources of nutrition-related information. However, before you believe either source, it’s always prudent to analyze critically how their conclusions were formulated and how reliable their data are. In addition, try to double-check for further evidence supporting any new claim you run across. Even journal articles can be wrong from time to time, which is why replication is so important in the scientific process. Do your part to support the truth and rely only on strong, repeatable data and well-founded, rational conclusions.
Since ancient times, humans have experimented with ways to increase the sweetness of foods and beverages without the use of sugar. Ancient Romans used sugar of lead (a.k.a. lead acetate) as a sugar substitute. For obvious reasons, using lead as a sweetener caused some serious health problems and its use was abandoned, though not for many centuries. With the accidental discovery of saccharin in 1879, the modern era of non-sugar sweeteners was born. Cyclamate, aspartame, acesulfame potassium, sucralose, neotame, stevia, and sugar alcohols have followed saccharin into the US market over the ensuing 131 years. However, despite their many benefits to the public, artificial sweeteners have come under almost constant fire from watchdog groups and the FDA since the early 1960s. Fortunately, almost all of the information underpinning the negative stigma surrounding sugar substitutes is based on either horribly faulty research or simple misinformation and ignorance. Let’s look at each sweetener and see where the truth actually lies.
First up is the 19th-century granddaddy of them all: saccharin. Saccharin is about 300 times as sweet as sugar but can impart a bitter or metallic taste to a product, worsening as its concentration increases. It is often best used as one part of a sweetener system made up of two or more artificial sweeteners. Saccharin is best known in the US by the brand name Sweet’N Low, found in most restaurants in the pink single-serving packet. Though its potential as a commercial sugar substitute was recognized immediately upon its discovery, saccharin’s use in mass-market food products was limited until World War I. During WWI and WWII, sugar was rationed due to military demands, and saccharin became a popular substitute. Saccharin gained even more popularity in the 1960s and 1970s due to America’s growing interest in weight control at the time.
However, in the 1960s, fear began to spread over saccharin’s purported carcinogenicity due to a study that showed an increased incidence of bladder cancer in rats fed saccharin. In 1977, the FDA proposed a ban on saccharin, but Congress acted to prevent the ban from taking effect. Though the sweetener was still allowed on the market, a warning label was required on all products into which it was incorporated. In 2000, however, the warning labels were removed after research established that the mechanism by which saccharin causes cancer in rats does not apply to humans. The bottom line here is that saccharin is NOT dangerous unless you are a rat. Even California has accepted the truth by now, so you KNOW there’s no reason to worry.
Second in line is cyclamate. Though it is approved for use in food in over 55 countries, cyclamate has been banned in the US since 1969. Cyclamate is 30-50 times as sweet as sugar, making it less powerful than some other artificial sweeteners, but it is inexpensive and generally has a good sweetness profile with little off-flavor. In many applications, it is blended 10:1 with saccharin for optimal sweetness while minimizing negative taste characteristics.
Problems for cyclamate began in 1969, when a study was published indicating that cyclamate caused bladder cancer in rats. Though the cyclamate exposure levels used in the study were gigantic compared to those seen in human consumption, the government banned the sweetener later that year. Within four years, however, new evidence was presented to the FDA in a petition to repeal the ban on cyclamate. A scientific review panel was convened to interpret the new studies, which included over 20 experiments using mice, rats, guinea pigs, and rabbits. The panel concluded that there was no evidence indicating that cyclamate acted as a carcinogen. Nevertheless, in 1980 the FDA denied the petition to allow cyclamate back into the US food supply. Since then, research into cyclamate’s safety has continued. To date, over 70 studies using a plethora of techniques have shown cyclamate to be non-mutagenic (not damaging to DNA). As well, the World Health Organization and other regulatory bodies the world over have repeatedly affirmed cyclamate’s safety over the last 30 years. Unfortunately for those of us in the US, cyclamate got off on the wrong foot in this country and, while the rest of the world relies on the vast body of evidence indicating its harmlessness, our government has instead chosen paranoia and fear as its regulatory guides in this case.
Next up is aspartame, possibly the most hated of all sugar substitutes. Aspartame is about 180 times as sweet as sugar and can lend a bitter taste to foods and drinks. Like saccharin, it is often used in combination with other artificial sweeteners to maximize its beneficial properties while minimizing its off-flavors. Aspartame is best known in the US as NutraSweet or Equal and is often found in blue single-serving packets. It was approved for use in all food products in 1996, though it had previously been approved for certain uses. Aspartame has been accused of causing brain cancer and numerous other problems due to three of its metabolites (breakdown products): methanol, aspartic acid, and phenylalanine.
The approval process of aspartame began in the mid 1970s and included a review of almost 200 studies on aspartame. Following its approval, aspartame has been comprehensively studied, finding no evidence of carcinogenic action at the levels currently consumed by humans. Studies with mice, rats, hamsters, and dogs, using aspartame doses as high as 4000 mg/kg bw/day (milligrams per kilogram of bodyweight per day [that equals 272 GRAMS(!) of aspartame per day for a 150 pound man]) have all found no evidence for adverse effects caused by the sweetener. Meta-analyses of aspartame safety studies have also failed to find evidence of carcinogenicity or genotoxicity. Aspartame is one of the most heavily studied food additives of all time due to the ongoing negative attention it has gotten over the past 40 years.
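The bracketed conversion above is just unit arithmetic; as a quick sanity check, here is a minimal sketch (the function name and structure are my own, not from any study):

```python
def dose_grams_per_day(dose_mg_per_kg: float, body_weight_lb: float) -> float:
    """Convert a dose in mg per kg of bodyweight per day to total grams per day."""
    LB_TO_KG = 0.4536  # 1 pound is about 0.4536 kilograms
    body_weight_kg = body_weight_lb * LB_TO_KG
    return dose_mg_per_kg * body_weight_kg / 1000.0  # milligrams -> grams

# The 4000 mg/kg bw/day rodent dose, scaled to a 150-pound (~68 kg) man:
print(round(dose_grams_per_day(4000, 150)))  # -> 272
```

The same function works for any mg/kg figure you run across in toxicology reporting, which makes it easy to see how far such doses sit above realistic human intake.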
As far as its metabolites, research has shown conclusively that exposure from aspartame metabolism to methanol, aspartic acid, and phenylalanine is far outweighed by that from other dietary sources. The only legitimate risk of aspartame is to those individuals who suffer from the genetic disorder phenylketonuria (PKU). Fortunately, everyone is screened for PKU shortly after birth, so if you have it, you know about it. If any sugar substitute has run through the scientific gauntlet and come out the other side intact, it is aspartame. It has been studied extensively for decades and its safety is without question.
Another popular sweetener in the US is acesulfame potassium, also known as Ace K. It is about 200 times as sweet as sugar and is known in the US by its brand names Sunett and Sweet One. It was discovered by accident (a common theme, it seems!) in 1967 by a German chemist. Ace K is often blended with sucralose (a.k.a. Splenda) to produce a more sugar-like sweetness profile while masking Ace K’s sometimes bitter aftertaste. It has also been widely used in conjunction with aspartame in the past, though in recent years sucralose has become favored due to its superior heat stability and taste profile. Ace K was approved by the FDA in 1988, but has since come under scrutiny. Animal studies have shown no evidence of carcinogenicity, though one rat study did indicate that Ace K stimulates the release of insulin much like sugar does. Despite the fact that this study showed no hypoglycemia (low blood sugar) resulting from even the VERY large doses of Ace K given to the rodents, opponents suggest that human consumption at much lower levels could produce a low blood sugar condition. However, more than 20 years’ worth of science and empirical data speaks for itself, showing Ace K to be an extremely safe and effective sugar substitute.
Our next sugar substitute is sucralose, the current heavyweight champion of artificial sweeteners. Sucralose is widely marketed in the US under the Splenda brand name, but is available in other guises. It was discovered in 1976 in England and is approximately 600 times as sweet as sugar. In fact, sucralose is based on sucrose (table sugar). The difference between sucrose and sucralose is that in the latter, three hydroxyl groups (an oxygen bound to a hydrogen) have been replaced by chlorine atoms. This change in structure makes sucralose indigestible to humans and much, much sweeter at the same time. Yet sucralose retains some of sugar’s excellent properties, such as acid and heat stability, very good solubility in water, and a sugar-like taste profile.
Sucralose has been studied extensively before and since its approval in the US in 1998. Over a hundred animal studies have unanimously shown no evidence of toxicity, carcinogenicity, mutagenicity or other detrimental effects from sucralose consumption. In fact, even a dose equivalent to 1,000 pounds of sucralose consumed in a single day by a 165-pound human produced no negative effects in rats. Even the crazies at the Center for Science in the Public Interest have deemed sucralose safe.
Of course, despite the library of evidence proving the safety of sucralose, someone will come out of the woodwork to try to throw a wrench in the works. The claim this time is that sucralose is harmful to humans because it is a member of a chemical class known as chlorocarbons, which also contains many toxic substances. However, these claims are unfounded for a couple of reasons. First, sucralose is almost completely insoluble in non-polar solvents like fat. Therefore, sucralose will not accumulate in human fatty tissue like some other chlorocarbons. Second, sucralose does not dechlorinate in the human body. About 99% of ingested sucralose is excreted unchanged, with the other 1% undergoing limited metabolism and producing non-toxic metabolites. Sucralose is not processed within the body in any way similar to the toxic chlorocarbons, so generalizations about chlorocarbon toxicity made to cover sucralose are simply wrong. Sucralose has been shown to be completely safe for human consumption.
Neotame is the most powerful sugar substitute approved for use in the US. A chemical cousin of aspartame, it is 10,000 times as sweet as sugar. Despite being on the commercial market since 2002, neotame is used only rarely in the US. It has a sweetness profile similar to aspartame and can also impart a similar bitter aftertaste. Because of its incredibly high sweetening power, it may be difficult for many food manufacturers to use precisely. Despite its drawbacks, one area in which neotame has an advantage over aspartame is in its metabolic byproducts. While aspartame is broken down into its two component amino acids, aspartic acid and phenylalanine, neotame contains an extra group of atoms that physically blocks access to the molecule by enzymes that would normally perform the amino acid cleavage. Metabolism of neotame produces very little phenylalanine and is therefore safe for consumption by those people suffering from PKU, unlike aspartame.
Neotame has come under similar fire as its cousin aspartame. However, due to its limited use no large-scale battles have erupted. The FDA approved neotame after reviewing 113 animal and human studies that evaluated the potential toxic, carcinogenic, mutagenic, and neurological effects of neotame. They determined that neotame posed no risk in any category to humans.
The last two sugar substitutes included in this review separate themselves from the rest of the class in that they are considered natural sweeteners by the FDA. First up for the naturals is stevia. Widely available under the brand names PureVia and Truvia and also known as Reb-A and rebiana, stevia was approved for use as a dietary supplement in the US in 1995 and as a food additive in 2008. Commercial stevia is made from high purity extracts from the species Stevia rebaudiana and is generally 200 to 300 times as sweet as sugar. Though it has a sweet taste, stevia’s taste profile and sweetness dynamics are quite different from that of sugar. In addition, it can impart a significantly bitter and/or metallic aftertaste to a food product. However, stevia is gaining in popularity as masking technologies tailored to the ingredient come of age and methods are found to make the best use of the sweetener.
Stevia’s long history begins in South America, where it has been used for centuries as a sweetener and as an ingredient in local medicinal traditions. Its regulatory problems began in 1991, when the FDA restricted the import of stevia and labeled it unsafe after toxicological concerns about the plant were raised. However, between 2006 and 2008, a number of comprehensive reviews of stevia safety studies, performed by both the World Health Organization and individual researchers, concluded that the high-purity extracts used commercially in the food industry do not have any carcinogenic, mutagenic, or toxic effects in humans, even at extremely high consumption levels. In late 2008, the FDA granted rebaudioside A, the active ingredient in modern stevia extracts, GRAS (generally recognized as safe) status. Though stevia had a rough start in the US, the evidence is now clear and has been properly recognized by the FDA. Stevia is a safe sugar substitute and will likely carve out a well-deserved spot in the food industry’s reduced-calorie and natural products markets.
Last but not least, we have sugar alcohols. The name sugar alcohol can actually refer to a number of different, but chemically related compounds, including sorbitol, maltitol, mannitol, xylitol, erythritol, and others. They generally have less energy (calories per gram) than sugar’s four, but they also often provide less sweetness. However, they can be paired with other, high-power sugar substitutes to compensate for their low sweetening power. Xylitol and other sugar alcohols are commonly used in chewing gums because they cannot be digested by the bacteria resident in our mouths and therefore do not contribute to tooth decay. In addition, a number of sugar alcohols give a significant cooling sensation when their crystallized forms are put in the mouth due to their negative enthalpy of dissolution. It’s also worth mentioning that sugar alcohols do not have anything to do with ethyl alcohol, the compound we consume to get drunk. They are called alcohols simply because they have a hydroxyl (oxygen and hydrogen, also known as “alcohol”) group where a normal sugar would have a carbonyl (carbon double-bonded to oxygen) group.
With the exception of erythritol, the common sugar alcohols have one big drawback: gastrointestinal upset. Like normal sugar, sugar alcohols attract water. When sugar alcohols pass into the large intestine, they bring quite a bit of water along for the ride. This excess water can cause diarrhea and bloating, with the effects getting worse as the dose of sugar alcohol increases. In fact, sorbitol is used as a laxative in certain circumstances when a quick bowel movement is needed without the use of stimulants. The amount of sugar alcohol that will produce gastrointestinal problems varies between individuals and between types of sugar alcohol. Some people can consume quite a bit with little to no ill effect, while others may experience fairly severe diarrhea from a light dose. You just have to try them out and see.
Erythritol is unique in that it has a much higher threshold for gastrointestinal upset than other sugar alcohols. Unlike the others, it is absorbed in the small intestine and excreted unchanged in the urine. Because little of it ever reaches the large intestine, diarrhea is generally avoided. In addition, while most sugar alcohols have 2-2.5 calories per gram, erythritol has only about 0.2, making it a useful sugar substitute in low-calorie products. However, with only 60-70% of the sweetening power of sugar and government regulations limiting its maximum concentration in food products, erythritol is almost always seen in combination with other sweeteners, whether natural or artificial.
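To put the calorie figures above side by side, here is a minimal sketch using the per-gram values cited in this article (the function and the "typical sugar alcohol" midpoint are my own illustrative choices; it also ignores sweetness differences, so gram-for-gram swaps are not sweetness-equivalent):

```python
# Approximate energy values (kcal per gram) from the figures in the text
KCAL_PER_GRAM = {
    "sugar": 4.0,
    "typical sugar alcohol": 2.25,  # midpoint of the 2-2.5 range
    "erythritol": 0.2,
}

def calories(sweetener: str, grams: float) -> float:
    """Energy contributed by a given mass of a sweetener."""
    return KCAL_PER_GRAM[sweetener] * grams

# Calories from 30 g of each, e.g. the sugar in a sweetened beverage:
for name in KCAL_PER_GRAM:
    print(f"{name}: {calories(name, 30):.0f} kcal")
```

Run for 30 grams, sugar contributes 120 kcal while erythritol contributes only 6, which is why erythritol shows up so often in "zero-calorie" natural sweetener blends.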
Artificial sweeteners and sugar substitutes have always come under attack from those especially wary of new additions to the food supply. However, in all of the cases mentioned in this article, scientific evidence has proven their worries to be misplaced. Sugar substitutes offer viable solutions to the fight between the human desire for sweet tastes and the global epidemic of obesity. As well, in most cases these compounds are also a blessing for people with diabetes because they don’t cause the large fluctuations in blood sugar seen with the use of sugar. Finally, sugar substitutes allow anyone with the desire to control their body composition and overall health to more easily control their body’s output of insulin and to keep their daily energy levels high and stable. Sugar substitutes are a fantastic resource and, while the safety research must be done to protect consumers, they should be valued and used whenever appropriate to benefit the health of the public.
Gluten is a protein composite found in wheat, barley, rye, and related species of plants. It has gained popular notice over the last few years as the culprit in celiac disease. Celiac disease is an autoimmune disorder that causes the body to attack the small intestine in response to gluten consumption, creating an inflammatory reaction in the tissue. As a result, the intestinal villi that line the surface of the small intestine shorten. The villi normally act to increase the surface area of the small intestine, helping it absorb nutrients from food. When the villi become blunted, the body is less able to utilize the food you consume, leading to weight loss, anemia, osteoporosis, fatigue, and vitamin deficiencies including A, D, E, K, and some B vitamins. Fortunately, celiac disease affects only about 1% of the US population. However, over the last few years, a growing trend has emerged of non-celiac individuals adhering to a gluten-free diet. But why? Are they on to something important or are they just mindlessly feeding on the latest fad diet hype and paranoia?
The Hartman Group, a market research firm, did some work to determine who is buying gluten-free products and why. One of the most striking findings was that only 7.5% of the people surveyed who had recently bought a gluten-free product had celiac disease. The other 92.5% fell into one of three categories, defined in the Hartman data as, “Those with an overall interest in health and wellness, those with an interest in ascetic-based practices of self-improvement, and the ever present fad dieters looking for the ‘flavor of the month’ diet trend.” Clearly, the last of the three groups is the least rational of all. Blindly going along with the latest fad diet is obviously a horrible idea. The nature of fad diets is to rely on hype, marketing, and often outright lies in order to make a quick buck off uneducated people. Not a plan for success. Anyway, it’s clear that this group of gluten-free purchasers is not operating on logical principles, so they are discarded from our discussion.
The next group is possibly the most interesting, if also the most mysterious: those adhering to a gluten-free diet because they seem to equate asceticism with self-improvement. Asceticism is the practice of self-denial in order to attain personal and/or spiritual discipline or some other benefit. Unfortunately for these individuals, in the area of human nutrition (and many other facets of life), discomfort, denial, and limiting choices without necessity are rarely useful strategies. The mindset of “pain = good” runs rampant through modern exercise and nutrition culture. Burnout sets, regularly practiced forced reps, the grapefruit diet, and the ridiculously named “bootcamp” phenomenon are all examples of exercise and dietary regimens that focus not on progress and sustainability but on maximizing exertion and discomfort. The fact is that these misguided methods produce short-term benefits at best and are often detrimental to progress. Limiting one’s food options without reasonable cause and believing that self-denial is a viable way to maximize nutrition and fitness are simply counterproductive. This group’s choice to maintain a gluten-free lifestyle is based not on science, but on an entirely misguided theory of how best to achieve optimal health and wellness.
Finally, we come to the last bunch of non-celiac gluten-free fans. These folks abhor wheat protein because of their “overall interest in health and wellness.” While these individuals may mean well in their attempts to optimize their diet, I’d bet that most of them were simply duped at some point into thinking that gluten is bad for the general population and not just for celiac sufferers. Let’s investigate some of the lies being spouted by the non-celiac, anti-gluten crowd and see where the “health and wellness” people might have been led astray.
First, there’s the claim that wheat products lead to blood sugar spikes and problems with insulin regulation. While it’s true that refined wheat products can negatively influence blood sugar stability, 100% whole wheat products generally have quite low glycemic indices and are productive additions to many meals. In addition, the wheat that is removed from gluten-free foods is often replaced with another refined flour, often from potato, corn, or rice. The high GIs of these ingredients make it unlikely that the gluten-free version of a product normally made from wheat flour will be any better than the original at controlling blood glucose and insulin levels. In fact, the gluten-free versions are very often worse! Unfortunately for celiac sufferers, it’s common for a patient to gain weight after being placed on a gluten-free diet. The idea that gluten-free means better blood glucose control is simply backwards.
Next up is the idea that gluten causes “leaky gut” disease. Leaky gut occurs when the proteins that bind together the cells lining the intestines stop working normally. This disruption allows nutrients and microorganisms from food to pass inappropriately through the intestinal wall and into the body. Symptoms of the disease include abdominal pain, muscle cramps and pains, malnutrition, poor exercise tolerance, and numerous other problems. Anti-gluten fanatics would have you believe that gluten consumption causes leaky gut even in people without celiac or other digestive diseases. However, the evidence indicates that leaky gut is not a cause but in fact an effect of celiac disease. When gluten inflames the intestines of a celiac sufferer, the intercellular proteins of the intestinal walls can become damaged, bringing about leaky gut. For those with a normal reaction to gluten, however, the inflammation and the resulting leaky gut do not occur. As with many subjects in science, and especially in nutrition, the direction of causality is incredibly important. Understanding the difference between an association and a causal link is fundamental to reading scientific writing effectively and to protecting yourself from the hype and lies of nutritional fanatics. Objective assessment is key.
Clearly, gluten-free diets are appropriate for those people with hypersensitivity to gluten. However, for those people with healthy guts able to process gluten products, a gluten-free diet is not a good idea. While I’m all for reducing high GI carbohydrates and controlling blood sugar and insulin levels properly, abhorring gluten is a terrible way to do it. Nutrient deficiencies, inconvenience, and increased cost of food are three reasons to avoid a gluten-free diet. Not to mention that the ideas behind applying this medical diet to healthy individuals are simply foolish. Construct your dietary plans in a rational manner, using principles that provide you with nutritional guidelines that are not only effective, but also reasonable and sustainable within your lifestyle. Don’t fall for the hype and misinformation of the gluten-free crowd. If you need to avoid gluten, then definitely do what you need to do in order to take care of your body. If you believe that you may be presenting symptoms of gluten hypersensitivity, you can easily be tested by a doctor using blood tests and an intestinal biopsy. However, if you are not part of the 1% of Americans who suffer from celiac disease, then learn to use wheat and other grain products for their many benefits and see them as positive tools in your nutritional arsenal, not as enemies to be avoided at all cost.
Since I wrote the article about breakfast cereals in October, I've had a number of people mention their love or hatred of steel-cut oats. While these lightly processed oats are widely considered to be the most nutritious option around, they take approximately FOREVER to cook. However, a reader of the blog recently sent me a link to a page describing a method for preparing steel-cut oats in a much more convenient manner than the traditional "boiling from scratch" arrangement. In addition to steel-cut oats, this recipe also incorporates some pearl barley, which I thought was an interesting and worthy addition. The new and improved procedure:
1. Combine 1/3 cup steel-cut oats, 2 tbsp uncooked pearl barley, and 1 1/4 cups water in a microwavable bowl. Cover it and refrigerate for at least four hours, though an overnight soak will work and might be more convenient for a breakfast application.
2. Microwave on high power for three minutes, stir well, and microwave for another three minutes.
3. You now have ready-to-eat steel-cut oats after only six minutes of cooking time! Feel free to add chopped or dried fruits, protein powder, spices, and/or nuts to complete your breakfast.
Remember, when constructing a breakfast with this sort of starchy ingredient, make sure to balance it with a significant portion of protein. Whether it comes from meat, dairy, eggs, or other sources, make sure it's there. Don't rely on a carbohydrate-heavy breakfast to keep you going. Just because something is "healthy" or "organic" doesn't mean that it's all you need for optimal nutrition. Balance is key!
The original source article can be found here.