Adopting Orphaned Diseases
By Cheryl Pan
Determining how many diseases affect humans is a difficult task. Calculating the exact number of available treatments is even harder, if not impossible. While extensive research is being conducted on some of the most widely known diseases, such as Alzheimer’s and diabetes, other diseases receive far less priority. For example, a particular family of diseases, aptly referred to as “orphan diseases,” is often overlooked because of the small number of individuals affected. Last month, Rare Disease Day on February 28 served not only to improve access to treatment for affected individuals and their families, but also to raise general awareness of these diseases as a public health priority.
Orphan diseases are rare disorders, often caused by genetic mutations, that lack sufficient support and resources for treatment development. Because the underlying biological mechanisms of a rare disease are often complex, these conditions are difficult to study and design experiments around. Additionally, the general public is largely unaware of the prevalence and impact of these diseases. Consequently, they are left as “orphans” on the pharmaceutical market. Technically, a disease is considered “rare” in the United States when it affects fewer than 200,000 people. Despite their name, however, orphan diseases are collectively not rare at all. The National Institutes of Health (NIH) estimates that there are approximately 7,000 types of rare diseases affecting about 30 million people in the United States alone, roughly 10% of the entire U.S. population. Moreover, the NIH estimates that 50% of people affected worldwide by rare diseases are children, and the Global Genes organization estimates that rare diseases are responsible for about 35% of deaths within the first year of life.
Rare orphan diseases are often difficult to diagnose, and their symptoms can be unusual and difficult to treat. For example, fibrodysplasia ossificans progressiva (FOP), afflicting just one in two million people, causes abnormal cartilage formation in muscles and other soft tissues, eventually turning the entire body into bone. Progeria offers another example of atypical symptoms: the syndrome essentially causes children to age extremely quickly. Affecting one in eight million newborns, it leads babies to develop wrinkly skin, joint stiffness, and heart problems within their first two years. Ultimately, it is common for children with Progeria to die of heart disease by 14 years old. The small size of affected patient populations also presents its own challenge. Physicians with limited experience of orphan diseases may not recognize that seemingly unrelated symptoms of muscle weakness, liver complications, and heart defects could be linked to Pompe Disease, which occurs in just one in forty thousand births. Left untreated, infants affected with Pompe Disease usually die within twelve months.
These three diverse orphan diseases are only the tip of the iceberg, but there is still hope for treatment development. In 1983, the U.S. Congress passed the Orphan Drug Act, which recognized the need for greater research devoted to developing medicines to treat orphan diseases. Incentives such as grants, research design support, and FDA fee waivers have led to a remarkable increase in interest from companies in developing orphan disease treatments. In the last decade, the FDA has approved more than 230 new orphan drugs. Recently approved medicines include the first and only therapy for neonatal-onset multisystem inflammatory disease, as well as a treatment for homozygous familial hypercholesterolemia, a genetic disorder that prevents patients from properly regulating low-density lipoprotein (LDL) cholesterol in their blood.
Just last year, the FDA approved 56 new drugs, several of which were treatments for rare orphan diseases. While the initial approval of new medicines should be celebrated, the research cannot stop there. Additional studies and data collection must be performed to continue expanding treatment options. The EveryLife Foundation for Rare Diseases reports that despite this progress, 95% of rare diseases have no FDA-approved treatments. This startling statistic indicates that rare orphan diseases are, in reality, a global health concern. Outside of research, there is also a need for greater awareness of and advocacy for these rare diseases. According to Global Genes, approximately 50% of rare diseases do not have a supporting disease-specific foundation. A prime opportunity to raise awareness took place on February 28, also known as Rare Disease Day. Established 11 years ago, this day allows countries all over the world to promote rare disease advocacy. Since then, the campaign has grown immensely: this past year, over 90 countries participated, hosting nearly 500 events worldwide and making the hashtag #RareDiseaseDay trend on Twitter.
Cheryl Pan is a freshman in Columbia College studying neuroscience and behavior. She is an editor for Columbia Science Review.
Cooking with Hominids
By Sirena Khanna
Food is often on our minds. We are the only species that watches competitive baking, eats tropical fruit in freezing climates, and adorns our food with gold. The mass production, transport, and consumption of foods from all over the world are feats achieved only by the human brain, and as it turns out, food is what helped shape our brains in the first place.
The story starts 1.9 million years ago, when our ancestor Homo erectus first appeared in Eurasia (as the most recent fossil evidence suggests). Raw meat, along with vegetables and fruit, was a staple of the hominid diet. One day, our ancestors made a great culinary advancement, and cooked meat entered the scene. By chance or tasteful insight, H. erectus combined meat and fire, and in the process created the world’s first barbecue.
Prehistoric cookouts were a success. H. erectus developed brains twice the size of those of their predecessor, H. habilis, while their teeth and body size decreased to about those of a modern human. This is the basis for Richard Wrangham’s “cooking hypothesis,” which attributes the dramatic increase in brain size that accompanied the emergence of H. erectus to the advent of cooking. This newfound ability may have prevailed, he posits, because it allowed H. erectus to save a significant amount of time and energy that would otherwise have been spent on chewing raw meat.
Wrangham collected data on chimpanzees and found that they can spend up to 5 hours a day gnawing on fruit. At the chimpanzee’s chewing rate, H. erectus would have had to spend a quarter of their lives chewing and another quarter gathering food to sustain their energy needs if they did not cook their meat. Lucky for modern humans, H. erectus were the master chefs of hunter-gatherer society; cooking meat altered the chemical structure of its proteins and starches, making them easier to break down. Wrangham postulates that cooked meat expedited digestion, as evidenced by a decrease in gut size. The energy saved from digestion went into spurring the encephalization of H. erectus.
Wrangham’s hypothesis is a good start to explaining the sudden change in hominid brain size. There is, however, one complication: the discovery and use of fire. Some skeptics maintain that H. erectus did not control fire until 200,000 years ago, which doesn’t coincide with the “cooking hypothesis” timeline. Scientists are currently searching for evidence of man-made fires to investigate whether fire use did exist at the time of hominid brain expansion; however, fire use is difficult to pinpoint, as early hominids may have interacted with natural fires to cook before learning to control fire.
In the meantime, researchers Katherine Zink and Daniel Lieberman recently published a study claiming that sliced meat might be responsible for the decrease in hominid teeth and jaw size. Similar to cooking, slicing makes meat easier to chew by breaking down the tissue and making its nutrients more accessible. Early hominids began to use stone tools at least 3 million years ago, so this timeline suggests that H. erectus used tools to slice their meat before they learned to cook it. The two proposals are not mutually exclusive, though: it is possible that slicing meat contributed to a decrease in tooth and jaw size, and that cooked meat later propelled the shift further.
The “expensive tissue hypothesis” also attempts to explain the physiological changes in H. erectus. It addresses the paradox of maintaining a metabolically expensive brain alongside other energy-consuming organs. The hypothesis proposes that encephalization and a reduction in gut size were only possible due to a diet of high-quality, easy-to-digest foods, such as animal products like meat and bone marrow. Although the “expensive tissue hypothesis” does not focus on cooked or sliced meat, it concurs with the other postulates that the addition of animal products to the hominid diet was essential for the evolution of a larger human brain.
Whether sliced meat, cooked meat, or high-quality diets spurred larger brains is the primeval kitchen battle yet to be resolved. The growth could be due to a multitude of factors, so perhaps, on this cooking show, everyone wins. Over time, hominid teeth and gut size decreased while the brain steadily grew larger, especially the neocortex, the brain’s center for higher cognitive functions. In terms of sheer size, H. erectus’s ancestor H. habilis, who lived 2 million years ago, had a mature brain size of about 610 cubic centimeters; in comparison, the average adult human today has a brain size of about 1,400 cubic centimeters.
Why does an increase in brain size matter? The general idea might be that bigger brains are better, but size is not everything. Neuroscientists Randy L. Buckner and Fenna M. Krienen propose that as hominid brains expanded, the tether-like network of neurons was ripped apart, allowing neurons to form new circuits. They call this the “tether hypothesis,” which elegantly explains why brain expansion matters for the evolution of brain function. Brain size alone, however, does not distinguish us from other species. Pilot whales, dolphins commonly known as “blackfish,” have twice as many neurons in the neocortex as humans, but their overall cognitive ability is not above a human’s.
Since the number of neocortical neurons does not correlate neatly with cognitive performance, our superior cognitive abilities must be due to a combination of factors beyond sheer brain size. There are many aspects of brain composition to consider, such as the sophistication of neural synapses and metabolism, the latter of which relates back to the three hypotheses about changes to the hominid diet. Dietary changes such as cooked or sliced meat ultimately made energy consumption more efficient and fruitful for our ancestors. Greater energy-yielding diets led to reductions in gut, teeth, and jaw size while providing the energy for encephalization. We are the only species that cooks, and it is humbling to think that our relationship to food put us on a different cognitive trajectory than the rest of the animal kingdom.
Thanks to our ancestors’ penchant for firing up the grill, we can do things no other animal has come close to. But while meat-eating was an important factor in getting hominids to this point, the rampant consumption of meat is now working against humanity. Greenhouse gases produced by the processing and transportation of animal products threaten both humans and the environment, and antibiotics and hormones in meat raise health concerns such as cardiovascular disease and undesirable genetic mutations. Nevertheless, meat consumption allowed our ancestors to develop the larger brains that shaped who we are as humans today. As a tribute to our early cooks, we can imagine new ways to better our brains that do not rely on consuming meat at our current rate. If our ancestors could invent the barbecue with their small brains, then there is no doubt that bigger brains, with more tools, can achieve anything.
By Sophia Ahmed
New York City aims to install 1 gigawatt of solar capacity by 2030, enough to power approximately 250,000 households. Those households would receive not only cleaner but also cheaper energy; the average Brooklyn household could save $64,796 over 20 years by investing in solar energy. Additionally, using solar power in place of oil decreases greenhouse gas emissions as well as water pollution and hazardous waste production. Despite these benefits, high installation costs and technological barriers create obstacles and consumer hesitation. These issues could prevent the fulfillment of New York City’s goal, but labs at Columbia University’s Lenfest Center for Sustainable Energy are working to break through these restrictions.
While households can save $64,796 over 20 years by investing in solar power, the non-subsidized upfront cost of installing solar panels in Brooklyn is an average of $21,500. This high fee comes primarily from two sources: labor and hardware. Warranties, complex roofs, and updates to building electrical systems all increase the time and cost of labor required to install solar panels. Without a warranty, the maintenance of solar panels costs an average of $30 per megawatt-hour; given that the average American household uses just below 11 megawatt-hours of electricity per year, this equates to a maximum of roughly $6,600 in maintenance over twenty years.
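The maintenance figure above follows from a quick back-of-the-envelope calculation. The sketch below uses only the rounded values quoted in this article, not measured data:

```python
# Back-of-the-envelope check of the quoted maintenance cost:
# $30 per megawatt-hour, roughly 11 MWh of household use per year,
# over a 20-year panel lifetime (all values are the article's estimates).
COST_PER_MWH = 30        # dollars, without a warranty
ANNUAL_USE_MWH = 11      # approximate U.S. household electricity use
LIFETIME_YEARS = 20

total_cost = COST_PER_MWH * ANNUAL_USE_MWH * LIFETIME_YEARS
print(f"Estimated 20-year maintenance cost: ${total_cost:,}")  # $6,600
```

Because the 11 MWh figure is itself an upper rounding ("just below 11"), the $6,600 result is best read as a ceiling rather than an exact cost.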
Additionally, complex roofs may require rearranging the typical layout of panels, such as stacking them, which increases hardware and installation costs. Furthermore, if a roof is made of certain materials, like tile or slate, installation costs rise further. Lowering labor costs is difficult because warranties and installation complexities are unavoidable expenses. This leaves decreasing hardware production costs as the most feasible way to reduce solar panel installation costs, which is exactly what the Esposito Research Group at Columbia University is working on.
The Esposito Group is evaluating solar energy systems to improve manufacturing techniques and panel efficiency. It is also testing the use of 3D printing to manufacture photo-electrochemical devices, such as test cells and reactors. This could speed up the testing of new technologies, decreasing research and development costs and potentially lowering overall prices. If production using 3D printing proves efficient and effective, it could also be used to produce panels for consumers, not just laboratory equipment. Manufacturing panels for the market with 3D printing could lower production costs and increase the quantity of panels, leading to an overall decrease in hardware costs.
While lowering installation costs can encourage further household investment in solar energy, issues with current solar technology and efficiency also create consumer hesitation, preventing costs from decreasing. A primary issue with solar energy is the volatility of its production. Despite the sun’s seemingly infinite supply, sunlight cannot be controlled on a daily basis, especially in the depths of a New York winter, when more electricity is needed for heat and there is less sun exposure. There are two potential solutions: increasing solar cell efficiency, so panels create more usable energy even with less sun exposure, and increasing the energy storage capacity of solar panel systems.
Current solar panels convert an average of 15% of incoming sunlight into usable energy. While this may seem like a small amount, increasing conversion efficiency is fundamentally constrained by limits on how much sunlight can be captured and converted. For example, the Shockley-Queisser limit takes radiation, absorption, and energy loss into account to conclude that the maximum theoretical efficiency of a single-junction photovoltaic (solar) cell is 33.7%. More broadly, the thermodynamic efficiency limit caps any solar cell at 86%. The band gap is the energy threshold a photon must clear for a cell’s semiconductor to convert it into electricity; incoming photons arrive with a wide range of energies, but a cell can only successfully convert those that match its band gap well. Even if a technology were developed that allowed panels to work with a wider variety of photon energies, some incoming photon energy is inevitably released as heat, making 100% efficiency impossible.
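The band-gap losses described above can be illustrated with a toy calculation. This is a simplified sketch, not a real spectrum model: the silicon-like band gap of 1.1 eV and the handful of photon energies are assumed values chosen for illustration only.

```python
# Toy model of band-gap losses in a single-junction solar cell.
# Photons below the band gap pass through unabsorbed; for photons
# above it, only the band-gap energy is extracted and the excess
# is lost as heat. Values are illustrative, not a real spectrum.
BAND_GAP_EV = 1.1  # roughly silicon's band gap (assumed)

def energy_harvested(photon_ev, band_gap_ev=BAND_GAP_EV):
    """Electrical energy (eV) extracted from a single photon."""
    if photon_ev < band_gap_ev:
        return 0.0              # too little energy: not absorbed
    return band_gap_ev          # excess (photon_ev - band_gap_ev) -> heat

# A hypothetical mix of photon energies from infrared to ultraviolet
photons = [0.5, 0.9, 1.1, 1.8, 2.5, 3.1]
harvested = sum(energy_harvested(e) for e in photons)
incoming = sum(photons)
print(f"Converted {harvested:.1f} of {incoming:.1f} eV "
      f"({harvested / incoming:.0%})")
```

In this toy mix, the two infrared photons contribute nothing and every absorbed photon yields only 1.1 eV, so well under half of the incoming energy becomes electricity even before the other real-world losses the text mentions.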
While the laws of thermodynamics remain unbreakable, the Shockley-Queisser limit can be exceeded, but research has shown that this requires a complex and expensive process of adding extra layers to solar panels. Do we even need such complex processes to make solar cells more useful? While the average solar cell operates at only a 15% conversion efficiency, that rate can effectively power a household on a sunny day. Thus, the primary problem with solar cells is not their efficiency but their lack of energy storage capacity for use on cloudy days.
The variable output of solar energy as well as the lack of solar energy at night makes improving energy storage crucial for the future of the solar industry. Two different lab groups at Columbia are working on novel methods for energy storage: batteries and fuel cells.
The Yang Research Group is tackling issues with batteries on two fronts. First, it studies dendrite formation in the negative electrodes of lithium batteries. Dendrites are filaments that form in the negative electrode, shortening battery life and potentially causing units to catch fire. Understanding dendrite formation can extend battery life and make units safer. Second, the Yang lab has developed a tri-layer structure that increases the energy density of a lithium battery and makes it cheaper to manufacture. By working toward cheaper, safer, and more efficient batteries, the Yang Lab is advancing solar energy storage, which can, in turn, encourage consumer confidence.
Fuel cells, another way to store solar energy, work by converting stored chemical energy into electricity when needed. Though his work does not focus on solar panels themselves, Diego Villarreal of Columbia University is developing reversible solid-oxide fuel cells, which convert excess solar energy into hydrogen; the hydrogen can then be stored and used in a fuel cell whenever the solar supply falls short.
The two primary issues driving consumer hesitation about solar power are cost and technological barriers, and both need further analysis to determine how to drive costs down and advance the technology. Costs come from two main sources: labor and hardware. Given the inevitable complexities of the labor involved in panel installation, focusing on hardware appears to be the most effective route to decreasing overall costs. Among the technological frontiers in solar energy, efficiency and storage stand out as the issues that make consumers question investing; of the two, storage offers researchers the larger opportunity to make solar energy more widely used. By providing solutions to these issues, scientists at Columbia and many other institutions are making the world a greener and more sustainable place to live.
Sophia Ahmed is a freshman in Columbia College planning to major in sustainable development.
Beyond the Smog
By Alice Sardarian
Anthropogenic pollutants constantly seep into the environment around us. The water we drink, the earth we cultivate, and the air we breathe are all contaminated with health-damaging toxins. Pollution leads to the deaths of millions of people each year by propagating and spreading disease. Smog, a type of air pollution named for its original mixture of smoke and fog, causes over 3 million deaths per year, as reported by the World Health Organization. Most of these deaths occur in South Asia and China, where around 1 million lives are lost to air pollution each year. With its deadly effects, air pollution has been deemed the greatest environmental health risk today.
Air pollution increases the risk of stroke, cardiovascular disease, asthma, bronchitis, and lung cancer. Common pollutants include black carbon, nitrogen dioxide, sulfur dioxide, lead, and other particulates. Black carbon is a sooty byproduct of burning fossil fuels. It not only causes disease but also spreads over and darkens the polar ice caps; the darkened ice fails to reflect sunlight, contributing to the Earth’s rising temperatures.
The smog in China has become so dense and suffocating that individuals have resorted to purchasing packaged air. “Clean air is actually a very rare commodity,” says Leo de Watts, the founder of AETHEAR, a British company that sells bottled fresh air from Wales. Products from this obscure market include air from as far away as Australia’s Bondi Beach and the Canadian Rocky Mountains, both of which sell for as much as $97 per bottle in China. Besides their apparent impracticality, these bottles of air do not address the roots of pollution, nor do they provide enough air to sustain breathing; one person would need eight to ten bottles a minute to survive on bottled air. When bottled air is the only way to breathe unpolluted air, breathing fresh air becomes a privilege for the wealthy.
All individuals within an environment are exposed to its elements, and air pollution does not discriminate by socioeconomic background, age, or gender. Still, those most vulnerable to the destructive action of pollutants are the very young and the elderly. Children’s immune systems are still developing and their lungs are still growing. Due to their small size and elevated respiratory rates, children take in more toxins than adults relative to body size. According to a recent report by the United Nations Children’s Fund (UNICEF), inhalation of these toxins is detrimental to children’s bodies and may damage brain development. The report notes that “every neural connection made during...brain development in early childhood forms the foundation for future neural connections, and ultimately influences the likelihood of healthy development of a child’s brain,” and their resulting success, or lack thereof, in life.
There are specific physiological mechanisms by which pollutants primarily affect young children. For example, pollutants easily disrupt children’s blood-brain barriers: pollutant particles less than 2.5 microns wide (2.5 millionths of a meter, half the size of a human red blood cell) can sneak past this normally impenetrable barrier and cause inflammation, which has also been linked to the development of dementia in later years. Magnetite, another small particulate, can enter through the olfactory nerve, which carries the sense of smell, and disrupt the brain with its magnetic properties. Other pollutants can degrade the brain’s white matter, which is critical for forming neural connections in a child’s early life.
Seventeen million children under the age of one are at risk of developing autism and ADD, earning lower academic grades, and suffering reduced IQs due to early exposure to air pollution above the World Health Organization’s recommended limit of 10 µg/m³. A study conducted by the Columbia Center for Children’s Environmental Health notes that even prenatal exposure to polycyclic aromatic hydrocarbons, common urban New York pollutants, significantly reduces mental development measured at age three.
The presence of caustic, toxic pollutants in the air eliminates opportunities for a healthy start in younger generations. In the United States, 38,000 total deaths per year are attributed to air pollution. In order to safeguard our future, and to reduce the irreversible health damage incurred by populations, efforts must be made to reduce fossil fuel consumption and to lead cleaner lives. Especially in the wake of recent forest fires and other environmental disruptions, it is critical to preserve our clean air—without denying the issue at hand and resorting to imported bottles of mountain air.
Alice Sardarian is a freshman at Barnard College studying Physiology & Organismal Biology in addition to premedical coursework. She is an editor for the Columbia Science Review.