Mass media interventions have been implemented to improve emergency response to stroke given the emergence of effective acute treatments. A new study shows that campaigns aimed at the public may raise awareness of symptoms and signs of stroke, but have limited impact on behavior. Mass media campaigns can be successful in improving knowledge and changing behaviors in other fields of health and safety promotion. Unlike other interventions, such as stroke patient education and community stroke screening programs, mass media campaigns have the potential to improve knowledge and awareness and change the behaviors of a large number of people. The study suggests that new campaigns to educate the public on the signs and symptoms of stroke should follow the principles of good design and be robustly evaluated. 1. Lecouturier J, Rodgers H, Murtagh MJ, et al. Systematic review of mass media interventions designed to improve public recognition of stroke symptoms, emergency response and early treatment. BMC Public Health 2010; 10: 784. (open access)
We’re always fond of research that shows chewing gum makes you, well, smarter. Earlier this year, the Los Angeles Times reported on a research project funded by Wrigley at the Baylor College of Medicine. Those who took part in the study either chewed sugar-free gum during math class, during math homework, during math tests, or they didn’t chew gum at all. After 14 weeks, the students took a math test and had their math grades assessed. As the Times reported: Those who chewed gum had a 3% increase in standardized math test scores and had final math grades that were significantly better than the other students. Teachers observed that those who chewed gum seemed to require fewer breaks, sustain attention longer and remain quieter. While the research didn’t fully explain what the relationship was between gum chewing and math improvement, the lead researcher on the study, Dr. Craig Johnston, said that “there is research demonstrating an increase in blood flow in the brain during chewing.”
Q: How can water-related diseases be prevented during emergencies? A: The three top priorities concerning drinking water and sanitation during an emergency situation are: - ensuring the provision of enough safe water for drinking and for personal hygiene to the people affected by the crisis; - ensuring that all people affected by the crisis have access to hygienic sanitation facilities; - promoting good hygiene behaviours. Following damage to existing sanitation systems or increased pressure due to large numbers of displaced or homeless people, effective and well-coordinated action by all those involved in the emergency response is critical. The first priority is to provide a sufficient quantity of water, even if its safety cannot be guaranteed, and to protect water sources from contamination. A minimum of 15 litres per person per day should be provided as soon as possible. During emergencies, people may use untreated water for laundry or bathing. Water-quality improvements should be made over succeeding days or weeks as a matter of urgency. Inadequate disposal of human excreta is a major health risk in emergency situations. It is essential to organize sanitation facilities immediately, such as designated defecation fields or collective trench latrines. Emergency facilities need to be progressively improved or replaced with simple pit latrines, ventilated improved pit latrines, or pour-flush latrines as the situation develops. All types of latrines need to be properly cleaned, disinfected and maintained. The provision of drinking water and sanitation services in health facilities is a top priority. Safe drinking water, basic sanitation facilities and safe disposal of infectious wastes will prevent the spread of disease and improve health conditions. In all cases, good hygiene practices are key to preventing disease transmission.
Water should be provided in sufficient quantities to enable proper hygiene. Hands should be washed immediately after defecation, after handling babies' faeces, before preparing food and before eating.
Astronomers find planets in unusually intimate dance around dying star Scientists have uncovered two pairs of planets so close to each other that they interact gravitationally. July 29, 2010 Provided by the California Institute of Technology, Pasadena This is an artist's conception of an extrasolar planet. Photo by NASA/JPL-Caltech/R. Hunt Scientists have found hundreds of extrasolar planets over the past decade and a half, most of them solitary worlds orbiting their parent star in seeming isolation. With further observation, however, one in three of these systems has turned out to have two or more planets. Most of these systems contain planets that orbit too far from one another to feel each other's gravity. In just a few cases, astronomers have discovered planets near enough to one another to interact gravitationally. John A. Johnson from the California Institute of Technology (Caltech) and his colleagues have found two systems with pairs of gas giant planets locked in an orbital embrace. In one system, in which a planetary pair orbits the massive, dying star HD 200964, located roughly 223 light-years from Earth, the planets are closer and tighter than any previously seen. "This new planet pair came in an unexpected package," said Johnson. "A planetary system with such closely spaced giant planets would be destroyed quickly if the planets weren't doing such a well synchronized dance," said Eric Ford from the University of Florida in Gainesville. "This makes it a real puzzle how the planets could have found their rhythm." All four of the newly discovered exoplanets are gas giants more massive than Jupiter, and like most exoplanets, they were discovered by measuring the wobble, or Doppler shift, in the light emitted by their parent stars as the planets orbit around them. Surprisingly, however, the members of each pair are located remarkably close to one another.
For example, the distance between the planets orbiting HD 200964 is occasionally just 0.35 astronomical unit (AU) — roughly 33 million miles (53 million kilometers) — comparable to the distance between Earth and Mars. The planets orbiting the second star, 24 Sextanis, located 244 light-years from Earth, are 0.75 AU apart, or about 70 million miles (113 million km). By comparison, Jupiter and Saturn are never less than 330 million miles (531 million km) apart. Because of their large masses and close proximity, the exoplanet pairs exert a large gravitational force on each other. The gravitational tug between HD 200964's two planets, for example, is 3 million times greater than the gravitational force between Earth and Mars, 700 times larger than that between Earth and the Moon, and 4 times larger than the pull of our Sun on Earth. Unlike the gas giants in our own solar system, the new planets are located comparatively close to their stars. The planets orbiting 24 Sextanis have orbital periods of 455 and 910 days, and the companions to HD 200964 have periods of 630 and 830 days. Jupiter, by contrast, takes about 12 Earth years to make one pass around the Sun. Planets often move around after they form, a process known as migration. Migration is thought to be commonplace — it even occurred to some extent within our own solar system — but it isn't orderly. Planets located farther out in the protoplanetary disk can migrate faster than those closer in, "so planets will cross paths and jostle each other around," Johnson said. "The only way they can get along and become stable is if they enter an orbital resonance." When planets are locked in an orbital resonance, their orbital periods are related by the ratio of two small integers. In a 2:1 resonance, for example, an outer planet will orbit its parent star once for every two orbits of the inner planet; in a 3:2 resonance, the outer planet will orbit two times for every three passes by the inner planet, and so forth. 
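The force comparisons above follow directly from Newton's law of gravitation, F = G·m1·m2/d². A minimal sketch of the arithmetic, assuming illustrative masses of about 1.8 and 0.9 Jupiter masses for the HD 200964 pair (the article says only "more massive than Jupiter"; these values are assumptions, not from the text):

```python
# Newton's law of gravitation: F = G * m1 * m2 / d**2
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11     # astronomical unit, m
M_SUN = 1.989e30  # kg
M_EARTH = 5.972e24  # kg
M_JUP = 1.898e27  # kg

def gravity(m1, m2, d):
    """Gravitational force in newtons between masses m1 and m2 (kg) separated by d (m)."""
    return G * m1 * m2 / d**2

# Assumed (hypothetical) masses for the HD 200964 pair, at their 0.35 AU separation
f_pair = gravity(1.8 * M_JUP, 0.9 * M_JUP, 0.35 * AU)
f_sun_earth = gravity(M_SUN, M_EARTH, 1.0 * AU)
print(f_pair / f_sun_earth)  # roughly 4, consistent with the article's comparison
```

With those assumed masses, the pair's mutual pull comes out about four times the Sun's pull on Earth, in line with the figure quoted above.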
Such resonances are created by the gravitational influence of planets on one another. "There are many locations in a protoplanetary disk where planets can form," said Johnson. "It's very unlikely, however, that two planets would just happen to form at locations where they have periods in one of these ratios." A 2:1 resonance, which is the case for the planets orbiting 24 Sextanis, is the most stable and the most common pattern. "Planets tend to get stuck in the 2:1. It's like a really big pothole," Johnson said. "But if a planet is moving very fast" — racing in from the outer part of the protoplanetary disk, where it formed, toward its parent star — "it can pass over a 2:1. As it moves in closer, the next step is a 5:3, then a 3:2, and then a 4:3." Johnson and his colleagues have found that the pair of planets orbiting HD 200964 is locked in a 4:3 resonance. "The closest analogy in our solar system is Titan and Hyperion, two moons of Saturn, which also follow orbits synchronized in a 4:3 pattern," said Ford. "But the planets orbiting HD 200964 interact much more strongly, since each is around 20,000 times more massive than Titan and Hyperion combined." "This is the tightest system that's ever been discovered, and we're at a loss to explain why this happened," said Johnson. "This is the latest in a long line of strange discoveries about extrasolar planets, and it shows that exoplanets continuously have this ability to surprise us. Each time we think we can explain them, something else comes along." Johnson and his colleagues found the two systems using data from the Keck Subgiants Planet Survey — a search for planets around stars from 40 percent to 100 percent larger than our Sun. Sub-giants represent a class of stars that have evolved off the "main sequence" and have run out of hydrogen for nuclear fusion, causing their core to collapse and their outer envelope to swell. 
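The resonances quoted above can be checked directly from the orbital periods: divide the outer period by the inner one and find the nearest small-integer fraction. A quick sketch (the period values come from the article; the helper function is just illustrative):

```python
from fractions import Fraction

def resonance(p_inner, p_outer, max_int=5):
    """Approximate the ratio of two orbital periods as a small-integer fraction."""
    return Fraction(p_outer / p_inner).limit_denominator(max_int)

# 24 Sextanis: periods of 455 and 910 days -> 2:1 resonance
print(resonance(455, 910))  # 2
# HD 200964: periods of 630 and 830 days -> close to 4:3
print(resonance(630, 830))  # 4/3
```

The 910/455 ratio is exactly 2, and 830/630 ≈ 1.317, whose nearest small-integer fraction is 4/3, matching the resonances the astronomers report.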
Sub-giants eventually become red giants — voluminous stars with big, puffy atmospheres that pulsate, making it difficult to detect the subtle spectral shifts caused by orbiting planets. "Sub-giants are rotating very slowly, and they're cool, but they haven't expanded enough to be too fluffy and too jittery," Johnson said. "They're 'Goldilocks' stars: not too fast, not too hot, not too fluffy, not too jittery" — and, therefore, ideal for planet hunting. "Right now, we're monitoring 450 of these massive stars, and we are finding swarms of planets," Johnson said. "Around these stars, we are seeing three to four times more planets out to a distance of about 3 AU than we see around main-sequence stars. Stellar mass has a huge influence on frequency of planet occurrence because the amount of raw material available to build planets scales with the mass of the star." Eventually, perhaps 10 or 100 million years from now, sub-giant stars like HD 200964 and 24 Sextanis will become red giants. They will shed their outer atmospheres, swelling to the point where they could engulf the inner planet of their dancing pair, and will lose mass, changing the gravitational dynamics of their whole system. "The planets will then move out, and their orbits will become unstable," Johnson said. "Most likely one of the planets will get flung out of the system completely."
Every year, Earth Day is presented in our classrooms as a project and this year my class picked a theme for the entire school. We thought long and hard about the project and what we could do to make an impact. We thought about recycling, cutting back waste, and using less energy. Then the idea came to me. What about tree planting? When I looked out the classroom windows, all I saw was pavement. There was pavement everywhere – the school yard, playground, sidewalks, and roads. I could only see a handful of trees. It only makes sense to be fair to the environment by making it more GREEN. Did you know trees should cover at least 40% of city land, but many times all we have is pavement for roads and sidewalks? It’s great to play ball on, but what about fresh air or shade during a hot summer day? Trees help fight off emissions from dirty car exhaust, and shade trees save energy by cooling down open spaces. Trees are as important to human beings as food and water are. They keep the city air cool and clean. They provide oxygen and help us conserve resources. They keep rainwater from running off the land so that it saturates the earth. They also help control floods and hold soil in place, especially when it is dry. So this year, we’ll be planting 10 trees near our school campus for Earth Day, donated by members of the town’s community council. What about you? What does Earth Day mean to you? Danny is a soon-to-be freshman in one of Peachtree City’s schools. He’s an awesome soccer player and loves driving around the family golf cart. Editor's Note: The opinions expressed in Greenversations are those of the author. They do not reflect EPA policy, endorsement, or action, and EPA does not verify the accuracy or science of the contents of the blog.
In Chapter 7, I found it very interesting how humans retain information differently, depending on how it is perceived. People are typically capable of retaining echoic memories (sound) longer (about 5 to 10 seconds) than iconic memories (visual), which people hold onto for only about a second. Also, once these things are in our short-term memory, which again is brief, they begin to decay and to be interfered with by new incoming memories. In order to retain information longer, "chunking" and "rehearsal" methods can be used. I found this part the most informative, because I find myself using an "elaborative rehearsal" technique for schoolwork concepts. This technique involves relating or connecting new information to other information already stored. Once you understand how your brain remembers information, you can better retain information for future tests! There are exceptions to normal human memory; one case is of a man who is autistic but displays a skill of eidetic imagery, or photographic memory.
Paul drew our attention a week or so ago to some research that appeared in the 9 July issue of Science, comparing the genome of the single-celled alga Chlamydomonas reinhardtii to the 2000-cell organism Volvox carteri. It turns out that the two related organisms have about the same number of genes -- 14,500. Moreover the genes are remarkably similar. The conclusion seems to be: The change from single-celled life to multicellularity was not so much about what "tools" organisms had as how they used them. And what a change it was! About 700 million years ago invisibly small cells got together and soon the world was full of towering oaks and great blue whales. There is a real sense in which you and I are societies of cells that learned to cooperate and specialize to make for more efficient reproduction of the precious germ cells. It is only slightly with tongue in cheek that I'd say Chartres Cathedral, the plays of Shakespeare, Bach's St. Matthew Passion, and the Large Hadron Collider are just side-effects of a collectivity contrived by a string of genes to make copies of itself. Does that sound distressingly reductionistic? It doesn't need to be. Emergence is the name of the game. I once had the opportunity to watch the life cycle of a slime mold -- Dictyostelium discoideum -- under the microscope. At first they are invisible individuals, an uncountable army of free-roaming, single-celled amoebas, grazing on bacteria. Like other single-celled organisms, they multiply by splitting down the middle, two from one. Their population soars. Their food becomes scarce. Triggered by hunger, they gather in their tens of thousands, streaming like gleaming rivers to an assembly point, at last becoming visible to the eye in their slimy congregations. Surrendering their individuality, they heap themselves into a gooey blob half-a-millimeter high. The blob falls onto its side, becoming a sluglike creature. Some amoebas know they are at the front; others bring up the rear.
The front end of the slug lifts as if to sniff the wind. The newly-contrived creature slithers on a film of slime toward light and warmth. As it slithers, the cells begin to change. The anterior cells are destined to become a stalk; the posterior cells will become spores. A bright, warm place is found. The slithering ceases. Anterior cells push down through the spore mass, becoming a slender pillar anchored at the base, lifting a perfect sphere of spores into the air. The spores are dormant amoebas that will travel on the air to form new colonies when the sphere bursts asunder. The stalk amoebas die; they have sacrificed themselves so that others might live. The fruiting tower is beautiful. Glittering. Translucent. An Ozmian minaret, sometimes as tall as this letter i. Fifty thousand amoebas pool their individual resources to build a reproductive spire that is as marvelous in its own amoebic way as the towers of Chartres. What I watched on the stage of the microscope was, in a sense, a recapitulation of one of the great chapters of life on Earth, the evolution of multicellularity, with all of the glorious side effects that seem to us to be -- in our exalted sense of exceptionalism -- the point of it all.
The image above is substantially cool. But it’ll take a moment to explain why. Stick with this; you’ll like it. In the constellation of Cetus, the whale, is what appears to be a run of the mill red star. At a distance of about 400 light years, the fact that you can see it with your unaided eye at all means it’s an intrinsically luminous star: at that distance the Sun would be completely invisible. The star is a red giant, a star that was once much like the Sun but is now terminally ill. Stars make energy in their core through the fusion of light elements into heavier ones; the Sun is currently fusing hydrogen into helium. Eventually it will run out of hydrogen, and will begin to fuse helium into carbon and oxygen. In 7 billion years or so the helium in the core will run out as well. The carbon and oxygen ash from the process will form a ball about the size of the Earth. It will contract and get incredibly hot. Helium outside the core, previously unavailable for fusion (like having a spare can of gasoline in the trunk of your car) will start to fuse in a thin shell surrounding the core. This will dump vast amounts of heat into the outer part of the Sun, which will respond like any gas will when heated: it will expand and cool. The Sun will become a red giant. But thin shell helium fusion is unstable, and so a red giant can expand and contract, sometimes almost in a spasm, ejecting material off its surface, and briefly becoming very luminous before settling down again. This will happen three or four times for the Sun, and it will totally eject its outer layers, exposing the hot core to space*. When this is all done, the Sun will be a white dwarf, and will slowly cool for the next few hundred billion years. The image at the top shows a star that is undergoing this process right now. Called Mira — "wonderful" — it’s slightly more massive than the Sun, and far older. 
It has only a short time left — maybe only hundreds of thousands of years, maybe less — before its paroxysms slough off that last bit of outer layer, and it becomes a white dwarf. These spasms change its luminosity, and we see this as a brightening and dimming of the star; it’s sometimes too faint to see with the unaided eye, and other times can brighten considerably. Mira has long been studied by astronomers to give us insight on what will happen when the Sun dies. Observations have revealed the star isn’t round: that makes sense, since it is ejecting huge amounts of material in expanding clouds. It has a small companion, a more normal star that appears to be collecting some of the ejected material and forming it into a disk around itself. Mira is definitely wonderful, in the sense of evoking wonder. And now we have found out it’s even more amazing than we thought. Most stars near the Sun orbit the center of the Galaxy at roughly the same speed, but some are faster than others. Mira, it so happens, is plowing through this local region of space at about 130 kilometers per second (about 80 miles per second). There is gas and dust out there, a thin haze floating among the stars. As Mira screams through this fog, the gas it is ejecting as it convulses is blown backwards, leaving a long tail behind it — imagine running down the street with a smoke bomb in your hand and you’ll get the idea. Now take another look at the image at the top of this page. Mira is on the right hand side, and is moving left to right. The long tail of ejected material is incredible — it’s 13 light years long! It has taken Mira 30,000 years to move this distance, which means that the material in the left hand side of the tail was ejected 30 millennia ago. If you look at the location of the star itself, you’ll see a parabolic arc in front of it; that’s the bow shock, where Mira’s ejected material is slamming into the material between stars (called the interstellar medium or ISM). 
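The tail arithmetic above is easy to verify: a star moving at about 130 km/s for 30,000 years does indeed cover roughly 13 light-years. A quick back-of-the-envelope sketch (the constants are standard values, not from the article):

```python
SECONDS_PER_YEAR = 3.156e7     # seconds in one year, approximately
KM_PER_LIGHT_YEAR = 9.461e12   # kilometers in one light-year

def distance_ly(speed_km_s, years):
    """Distance in light-years covered at a constant speed over a given number of years."""
    return speed_km_s * years * SECONDS_PER_YEAR / KM_PER_LIGHT_YEAR

# Mira: ~130 km/s sustained for ~30,000 years
print(distance_ly(130, 30_000))  # ~13 light-years, matching the tail's measured length
```

So the 13-light-year tail and the 30,000-year travel time quoted in the article are mutually consistent with Mira's measured speed.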
The images are in the ultraviolet, which means the gas is emitting UV radiation. This indicates that the material is being heated by the collision with the ISM, and is slowly losing that energy by glowing in the UV. The images were taken by the Galaxy Evolution Explorer (GALEX) mission. In a routine survey, an astronomer noticed that Mira looked fuzzy, so they took deeper images. The tail near the star was revealed, so they scheduled even more observations to trace it out… I can just imagine how surprised they were when they realized what they had found! The image is actually a mosaic of the images GALEX took. The material blown off will eventually merge with the ISM and form new stars. The elements created in the fusion forge deep inside of Mira will eventually find themselves in new stars, some of which will be like the Sun, or like Mira once was. They too will age, step through the fusion process in their cores, and eventually become red giants… and the cycle starts again. It’s quite possible that some of the heavier elements we see in the Sun itself were seeded into the Galaxy by some anonymous star like Mira more than 5 billion years ago. So when you look at this image of Mira with its comet-like tail, think on this: you are seeing a star’s way of making new stars. Like life itself, in death is renewal and the foundation for future generations. *Needless to say, the Earth doesn’t fare well in all this.
March 27, 2012 Science fiction writer Arthur C. Clarke once famously wrote, “Any sufficiently advanced technology is indistinguishable from magic.” While we’ve recently covered a number of incredible technologies that seem to prove Clarke’s point—progress on the way to an invisibility cloak and a sound gun that can silence the human voice, among others—a new camera developed by scientists at the Massachusetts Institute of Technology is a picture-perfect example. The camera, called CORNAR and developed by Ramesh Raskar and Andreas Velten of the M.I.T. Media Lab, makes innovative use of lasers to see around a solid impediment—in the experiments, a wall—and reveal an object on the other side. As explained in the video above, CORNAR uses a new form of photography, called “femto-photography,” to “see” through solid objects. Although it might sound like pure magic, the technique actually relies on a super-quick laser pulse—50 femtoseconds long, or 50 quadrillionths of a second—to construct a 3-D model of a hidden area behind a wall or corner. The concept is similar to a natural phenomenon: the way bats use echolocation to “see” in the dark. With bats, ultrasonic pulses are emitted to produce echoes, and the brain registers the time it takes for the echoes to return back to produce mental images of the surroundings. The camera uses a super-quick laser blast in much the same way. The laser pulse bounces off a wall, then into an area obscured from view. Some of the laser’s photons enter this area and then bounce back, eventually returning to the camera. Because of the incredibly short duration of the laser pulse, the camera can precisely calculate how long it would take the light to travel through the scene if it were empty. It then compares this with the actual laser “echoes”—the photons that return to the camera after hitting the figure within the hidden area, taking fractions of a second longer—to reconstruct the detailed 3-D model of the obscured room. 
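The core of the technique described above is ordinary time-of-flight ranging: light travels at a known speed, so the delay of a returning "echo" fixes the round-trip distance, and femtosecond timing makes the distance resolution fine enough to map a hidden scene. A minimal sketch of that distance calculation (the function name is illustrative, not taken from the CORNAR work):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def echo_distance(round_trip_seconds):
    """One-way distance to a reflector, given the round-trip travel time of a light pulse."""
    return C * round_trip_seconds / 2

# A photon returning 1 nanosecond after the pulse left traveled ~30 cm in total,
# so the reflecting surface is ~15 cm away.
print(echo_distance(1e-9))    # ~0.15 m
# Femtosecond-scale timing corresponds to micrometer-scale path differences:
print(echo_distance(50e-15))  # ~7.5e-6 m per 50 fs of extra delay
```

The real system combines many such echo timings from different laser bounce points to reconstruct the 3-D model, but each individual measurement reduces to this simple conversion.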
The research team proposes a range of future applications for the technology. Rescue teams could use it to locate hidden survivors in a collapsed or burning building, or cars could be equipped to automatically locate vehicles on the other side of a blind corner. Minuscule endoscopic medical cameras could even use the technology to see around tight corners in the heart, lungs or colon during various procedures. Right now, all of these applications are purely theoretical, because the experimental setup is bulky, expensive and fragile. But the researchers note that research is currently being done on femtosecond lasers and light detectors that would simplify the device and enable it to be moved out of the lab more easily. Additionally, the process currently takes about 10 minutes, but they hope to reduce it to as little as 10 seconds. The possibilities for this type of technology are, quite frankly, hard to picture. Someday, like magic, your smartphone could be equipped with a camera that can take pictures of places you can’t even see.
The following books are about shapes. Captain Invincible and the Space Shapes. By Stuart J. Murphy. Illus. by Remy Simard. 2001. 33p. HarperCollins. (9780064467315). Gr. K-3. This comic-strip-style book is about a space captain who explores three-dimensional shapes while traveling back to Earth. The book lays the foundation for geometry by recognizing and classifying shapes like the cube, pyramid, sphere, cylinder, and cone. There is an adult-and-child section at the end of the story with activities to supplement the book. This picture book is about a happy wooden couple who build their house out of geometric wooden shapes. When the house catches on fire, they change it into a fire engine. Because of too much water, the couple build a boat that sails them to land. Then they build a truck to travel on land, which is transformed into a train. Cubes, Cones, Cylinders, & Spheres. By Tana Hoban. Illus. by author. 2000. HarperCollins Publishers. (9780688153250). Gr. PrK-4. The photographs in this picture book depict cubes, cones, cylinders, and spheres in familiar environments - in our houses, on the street, and in our hands. The book recognizes that shapes are the things of everyday life. Grandfather Tang's Story. By Ann Tompert. Illus. by Robert Andrew Parker. 1997. 16p. Random House Children's Books. (9780517885581). Gr. PrK-2. Grandfather Tang tells a story about fox fairies that comes from Chinese folklore. The two foxes challenge each other and change into different animals. The book illustrates each animal using the tangram puzzle throughout the story. Also, the book describes the background and the use of the tangram. Selina and the Bear Paw Quilt. By Barbara Claassen Smucker. Illus. by Janet Wilson. 1995. 32p. Random House Children’s Books. (9780517709047). Gr. PrK-4. This book is about a Mennonite girl named Selina who lives in Pennsylvania in the 1860s. She loves her farm and watching her grandma piece together quilts.
Selina’s family decides to flee to Canada to avoid persecution; however, her grandma feels she is too elderly to make the long trip. Grandma creates a Bear Paw pattern using stitches and techniques that were brought to America by the early pioneers. Every square of the quilt created by Grandma represents Selina’s family history. Web sites for kids. Interactive fun math games for kids. Website provides resources such as games, worksheets, videos, quizzes and pictures. The website "Count on it" is a fun, simple, and innovative way to teach mathematics to children. Video explains the characteristics of a circle. This website provides practice, videos, fun, and tracks progress. Website provides worksheets for geometry, such as finding the difference between shapes, tracing shapes, and recording the characteristics of shapes. Geometry through literature for problem solving, reasoning, and connection to the real world. Lesson plan using the book "The M&M Brand Chocolate Candies Counting Book" by Barbara Barbieri McGrath. Introduce the shapes circle, square, and triangle by using shape cards. K.11 The student will a) identify, describe, and trace plane geometric figures (circle, triangle, square, and rectangle); and b) compare the size (larger, smaller) and shape of plane geometric figures (circle, triangle, square, and rectangle). - Attribute blocks, relational attribute blocks, and tangrams are among the manipulatives that are particularly appropriate for sorting and comparing size.
spots in solar's future? Solar energy got lots of attention in the 1970s. But there were plenty of problems to overcome. Now homeowners are again looking at solar to see what has changed. Atkin CS Monitor 2003.9.03 There's nothing like a mega blackout to recharge public interest in residential energy alternatives. Not long after the lights and air conditioning went out in the extensive electric-grid failure last month, many homeowners began wondering: whatever happened to solar? Home Power magazine, which focuses on renewable energy, saw a surge in online traffic, and Richard King, the US Department of Energy's solar guru, kept getting calls from people who sought solutions because they didn't want to lose power again. Homeowners feel they can't afford to be without their basement sump pump or home office for even a few hours, much less days. And their concern escalates when Ol' Man Winter is howling at the door and the furnace is stone-cold silent. Not surprisingly, many look to the sky and wonder if solar should be in their energy future. Home Power magazine estimates that 147,000 American homes run solely on solar electricity, and the Department of Energy says about 1.1 million homes use solar power of some kind. Still, it's safe to say that most people consider solar an infant on the energy landscape, just beginning to find its way. Whether as a backup, supplemental, or main energy supply, however, it holds intriguing potential, even if it's still far more costly than most people would like. The following questions and answers help summarize where residential solar has been, is now, and may be heading. Why has solar been slow to catch on? It got off to a shaky start in the 1970s, when escalating energy prices led many people to install solar hot-water systems in their homes. Much of the equipment, however, was unreliable and unattractive. What has changed to make solar a more appealing option?
Partly it's growing public interest in exercising more control over fluctuating energy costs. Solar is environmentally attractive, too. The recent blackout has also spurred inquiries. In the past 10 years, the big breakthrough has been the development of inverters that take the direct-current (DC) electricity generated by photovoltaic cells and convert it into alternating-current (AC) electricity, or common household current. This has opened the door to connecting domestic solar systems to the utility electrical grids in 38 states. The flow of energy can be measured into or out of the house, so that during daylight hours photovoltaic (PV) panels may actually feed surplus power onto the grid, causing the meter to spin backward and lowering the electric bill. Industry standards have also improved since the 1970s. "Any major manufacturer of a PV panel has to have a minimum of a 20-year warranty," says Don Bradley, a solar home builder in Philadelphia. That's a longer warranty than on other components in the house, he adds. What are the advantages of solar electricity? It allows you to be your own producer of electricity, with no noise, no pollution, and no moving parts. The power also is of higher quality than a utility company generally provides, says Richard Perez, publisher of Home Power magazine. For homeowners, this translates into appliances that run cooler, operate more efficiently, and last longer. Does having solar electricity mean your home will always have power, even during a blackout? It depends on what kind of system you have. Some homes have systems that are not connected to the local utility's electric grid at all. As a result, they won't be affected by power outages. When a homeowner uses solar power but also supplements it with power from the grid, that additional electricity is lost in a blackout. To enjoy uninterrupted service, people on the grid can equip their system with a battery backup. What is the primary obstacle to wider acceptance of solar energy?
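The net-metering arrangement described above is simple arithmetic: the meter records consumption minus generation. A minimal sketch (the kWh figures are hypothetical, not from the article):

```python
def net_meter_kwh(consumed_kwh, generated_kwh):
    """Net energy drawn from the grid over a period.

    A negative result means the PV panels fed more power onto the
    grid than the house used, i.e. the meter "spins backward".
    """
    return consumed_kwh - generated_kwh

# Hypothetical day at home: the house uses 30 kWh, the panels produce 12 kWh.
print(net_meter_kwh(30, 12))   # 18 kWh billed by the utility
# A sunny weekday with nobody home: 10 kWh used, 12 kWh produced.
print(net_meter_kwh(10, 12))   # -2 kWh credited back to the homeowner
```

Whether the utility actually credits the surplus at the full retail rate depends on the state's net-metering rules.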
Price. The upfront costs of solar energy can be jarring - often $10,000 to $20,000 or more. The cost of producing electricity from sunlight is approximately two to four times as expensive as from coal or gas, although in the past two decades the price per kilowatt-hour has come down from $1 to about 20 or 30 cents. Solar water heating ordinarily costs between $3,000 and $10,000, with most systems in the $4,000 range. Why is the price of solar so variable? Every house is different, and lifestyles vary, too. Plus, there's quite a range of financial incentives offered by state governments. For an existing house, the expenditure is comparable to buying a good previously owned car, Mr. Perez says, but some people are looking for a used Chevy and others a used Mercedes. BP Solar, a manufacturer of photovoltaic products and systems, offers an online cost estimator, using a person's ZIP Code, current energy usage, and preferred system capacity to provide a ballpark guesstimate. (See www.bpsolar.com/home solutions/solarsavingsestimator.) States that offer significant financial incentives to bring the price down include California, New York, New Jersey, Pennsylvania, Colorado, Minnesota, and North Carolina. Where does solar make the most sense geographically? The amount of sunlight is only one factor, says Joel Gordes of Environmental Energy Solutions. New York State, he believes, is prime solar territory because of tax credits, rebates, and loan programs, as well as potential savings on energy bills. "New York has very high electric rates, so it makes sense economically to do a solar system," he explains. Utility customers in New York pay 13 or 14 cents per kilowatt-hour, well above the national average of about 8 or 9 cents. This helps offset the fact that New York is less sunny than Arizona, where coal is plentiful and electric rates are only 7 or 8 cents per kilowatt-hour. How can the large upfront cost of solar be made more palatable?
Rolling it into a 30-year house mortgage helps, and lenders seem more inclined these days to finance solar this way. When building a new home, Mr. Bradley suggests that instead of spending $8,000 on fancy kitchen cabinets, marble countertops, or a cultured-stone fireplace, some people may want to put that money into a solar system. Can a do-it-yourselfer install a photovoltaic system? Although the technology has been simplified, installation of even plug-in systems is not for amateurs. "You still need to know where to plug it in, it's still carrying voltage, you still have to work on a roof," Bradley says. "You still need to have the absolute right tools, and you need a certified electrician to interact with the grid, and that's the same thing as with a central air-conditioning [system]." Home Depot now sells residential electric solar-power systems in selected stores, but they're offered as part of a full-service program that includes financing, installation, and service. Can solar be used for central heating or air conditioning, or is it limited to appliances? It's capable of handling all a home's electrical needs. Of course, the more you ask of a solar system, the bigger (and more expensive) it will need to be. Doesn't solar equipment spoil the look of a house? Manufacturers are working to develop flatter and less obtrusive panels. One company has come out with solar shingles that use thin-film technology to make a lightweight, flexible roofing product. These dark blue shingles can be nailed or screwed directly to plywood sheathing, just as asphalt shingles can. However, experts say, they are less efficient than traditional panels, so more roof surface must be covered. Plus they carry a premium price. Solar laminate material can also be bonded to the metal roofs increasingly used in home construction.
How much repair and maintenance does solar-electric equipment require? Very little, according to industry experts, because there are no moving parts and little that can break. Inverters remain the weak point in most systems and sometimes need to be repaired or replaced. Routine maintenance includes checking the water level of backup batteries and keeping the connections clean, hosing off solar panels, and, if there's a significant drop-off in electricity produced, checking for loose wires, a faulty panel, or perhaps shade. If a person doesn't want to install panels but still wants to take advantage of solar, does adding a solar greenhouse or sunroom make sense? Passive solar (which includes sunrooms) is definitely worth looking into, since it's considered the most cost-effective option. If done properly, with the right orientation and siting, a passive solar greenhouse can be a tremendous asset. But done incorrectly it can be an "absolute nightmare" in terms of energy efficiency and comfort, says Bradley. If the space doesn't get enough sun, it can freeze in winter. Too much sun and it cooks in summer. Other passive-solar options include energy-efficient glass, insulated shades, and heat-absorbing walls. Have solar hot-water systems improved? Yes. "They are one of the best-kept secrets in the industry," says Joe Wiehagen of the National Association of Home Builders Research Center. Quality has improved and the industry has standardized much of the technology. They're also highly efficient.
3.080503
How Access to Clean Water Prevents Conflict Abstract: The number of people with improved access to safe drinking water is growing. According to UNICEF, since 1990, an additional 1.8 billion people are using an improved source of drinking water. Yet many people are living with water scarcity, particularly in Africa. The solutions highlighted here are just a few of the possible responses. Safe drinking water and sanitation in schools may serve as a way to keep girls in school, increasing their economic opportunities and, eventually, the health of their own children. Innovative ways to finance water entrepreneurs could open up an avenue for new investments and improve sustainability. Strengthening regional institutions, promoting scientific dialogue, and harnessing social capital can help to facilitate cooperation and reconciliation. Appropriate investments in water use, sanitation, and conservation are essential to reducing vulnerability among the poor, to ensuring sustainable development, and to promoting security in a period of climate change. To learn more: http://www.thesolutionsjournal.com/node/1037#comment-form
3.381023
Mt. Geladandong (33° 12' N, 91° 09' E, 5800 m) is located in the Tanggula Mountains in the central region of the eastern Tibetan Plateau, and is the source region of the Yangtze River (the third longest river in the world). Grassland steppes lie to the south and east, and the regions to the north and west are arid. Summer precipitation is dominated by plateau monsoon circulation (Murakami, 1976), and winter precipitation is limited, though occasional large snowfall events result from westerly disturbances (Seko and Takahashi, 1991). Previous studies suggest that the Tanggula Mountains mark the northern boundary of the influence of the South Asian monsoon, and that north of this range recycled rainfall from continental precipitation is the dominant process (Tian et al., 2001; Araguas-Araguas et al., 1988). In the spring of 2004 an 87 m ice core was recovered from the South Geladandong Flat Topped Glacier in a collaboration between the Climate Change Institute and the Joint Key Laboratory of Cryosphere and Environment (in association with the Cold and Arid Regions Environmental and Engineering Research Institute (Lanzhou), the Institute of Tibetan Plateau Research (Beijing), and the Chinese Academy of Sciences/Chinese Academy of Meteorological Sciences (Beijing), China). This fall we will be returning to Geladandong to identify another ice core drilling site, and intend to recover an additional ice core. This core, in addition to the 2004 Geladandong ice core, will be used to investigate the climate and environmental variability of the Tibetan Plateau during the late Holocene. Dr. Shichang Kang from the Tibetan Plateau Institute is the leader of the expedition.
3.002019
World Book Encyclopedia is online! Tennessee Electronic Library Website Just for Kids! Find AR Books! What is Accelerated Reader (AR)? AR is a computer program that helps teachers manage and monitor children’s independent reading practice. Your child picks a book at his own level and reads it at his own pace. When finished, your child takes a short quiz on the computer. (Passing the quiz is an indication that your child understood what was read.) AR gives both children and teachers feedback based on the quiz results, which the teacher then uses to help your child set goals and direct ongoing reading practice.
3.678196
Music has been a part of human culture since prehistoric times. From the national anthem to rock anthems, music brings Americans together. Rhythms, familiar choruses, and song verses can unite people of various backgrounds with a sense of their shared history and culture. How has music, including Bruce Springsteen’s, shaped Americans’ understanding of our shared history? In this lesson, students will trace the ways musicians have responded to events on a national scale and furthered political dialogue among citizens. They will also compare music in countries where governments respect freedom of speech with those that don’t. Monday - Friday: 9:30 a.m. – 5 p.m. Saturday: 9:30 a.m. – 6 p.m. Sunday: 12 p.m. – 5 p.m. 525 Arch Street Philadelphia, Pennsylvania 19106
3.905386
By Lisa Asta, M.D. What are hives? Hives are an itchy skin rash -- red, raised bumps with a paler center -- triggered by an irritant. They can show up anywhere on your child's body, from the skin to the inside of his mouth, and vary in size from 1/16 inch in diameter to many inches across. Hives, also known as urticaria or wheals, can pop up in one area, fade, and appear in a totally different place within a matter of hours. Studies show that 2 to 20 percent of children develop hives at one time or another. An episode of hives can be over in a few hours, but most take about 48 hours to completely disappear. Some stubborn cases may even last a few weeks. What causes them? Common triggers include food allergies, drugs, viruses, insect bites and stings, plants, exercise, heat, and cold. Unfortunately, finding the cause of your child's hives is rarely easy; many times, you and your doctor will be unable to identify the exact cause. And like most allergic reactions, your child may have been exposed to the irritant in the past without any problem. Here are some common triggers: Among kids taking those medications, hives may appear immediately after the first dose or sometimes days into the treatment. What's more, hives don't always appear the first time your child takes a particular medication; sometimes they erupt after he has taken the medicine on several different occasions. What makes the hives appear? Some children are simply more susceptible to certain irritants than others. Their immune systems react more quickly -- sometimes even to substances that are usually harmless -- and attack what appears to be an invader. When this occurs, the immune system releases a chemical called histamine to combat the irritants. Histamine makes blood vessels in the skin leaky, and the fluid that escapes gets trapped in the lower level of the skin, causing the bumpy hives. Histamine also provokes the itchy feeling that accompanies hives.
Hives triggered by heat, cold, sun, and exercise are more of a mystery. Scientists don't yet know exactly why these rashes appear. When should I call the pediatrician? Most hives are harmless, but they can also signal a serious or even life-threatening condition. Contact your pediatrician or call 911 immediately if your child has any of these symptoms: In those instances, hives can be a sign of anaphylactic shock, a potentially fatal allergic reaction. These episodes progress rapidly, and can cause enough swelling around the lips, tongue, and mouth to block the airway; your child's blood pressure can also drop rapidly. If your child has a history of severe allergy to insect stings or foods and is carrying epinephrine, give him an injection and then seek medical attention immediately. You should also contact a pediatrician for a non-emergency appointment in these circumstances: How do I treat hives? You can use cool compresses or a cool bath to reduce irritation and itching, but since hives are a reaction to histamine, antihistamines are usually the most effective treatment. Benadryl (its generic name is diphenhydramine) is available over the counter in liquid and pills. Follow the dosing guidelines carefully (and contact your doctor for children under 2 years old). Give Benadryl every 6 hours until the hives fade. Continue the medication, spacing the doses farther and farther apart, until you are sure the hives are no longer a problem. Your pediatrician may also recommend hydroxyzine (Atarax), a prescription antihistamine. (Let your doctor know if you're using any over-the-counter medications for the hives.) Be aware that antihistamines make most children a little drowsy. How can I protect my child from hives? Avoid the irritant, if you know what it is. Teach your child to avoid trigger foods, and alert family, friends, school, and daycare.
If your child is severely allergic, ask your pediatrician for a Medi-Alert bracelet, which will let medical workers know how to proceed in an emergency. Children with a history of life-threatening hives from foods or insect stings should carry epinephrine with them at all times. Epinephrine is available in automatic injection devices; talk to your pediatrician about when to use it, and always seek medical attention immediately after giving it. Pantell, Robert H., M.D., James F. Fries, M.D., and Donald M. Vickery, M.D. Taking Care of Your Child: A Parent's Illustrated Guide to Complete Medical Care, Eighth Edition. Da Capo Lifelong Books, 2009. Hurwitz, Sidney. Clinical Pediatric Dermatology: A Textbook of Skin Disorders of Childhood and Adolescence, 2nd ed. W.B. Saunders Co., 1993. Weston WL, Badgett JT. Urticaria. Pediatrics in Review Jul 1998;19(7):240-43. Last Updated: March 11, 2013
3.854214
In the last 10 years, childhood obesity rates in America have been steadily increasing. Studies have shown that around nine million children are overweight in the US. The number of teens who are overweight has more than tripled since 1980. Why is this so dangerous and what does this mean? This article unearths the facts of childhood obesity and ways we can help prevent it in our own children. Obesity means that there is an excess amount of body fat. Because everyone’s body is different, a single cutoff number defining obesity in children does not exist. Most professionals accept the published guidelines based on the Body Mass Index, which is adjusted for age and gender. Obesity is now among the most widespread medical problems affecting children and teens in the US. There are a number of causes of childhood obesity. Obesity tends to run in families, and genetics can play a factor, but genes alone do not cause obesity. Dietary habits are the biggest cause of obesity. Children’s diets have increasingly shifted away from balanced, healthy foods to more processed foods and fast food, all of which are high in fat and calories. The other factor is a major decrease in daily physical activity. Continued advances in technology have led to a more sedentary lifestyle. Fewer than half of children in the US have parents who engage in physical exercise. Studies have also shown that children and teens spend, on average, over three hours watching television daily. The effects of childhood obesity are many and can lead to life-threatening diseases. Children who are overweight are much more likely to develop high cholesterol and high blood pressure. Both of these can lead to heart disease as an adult. Childhood obesity can also lead to diabetes, more specifically Type 2 diabetes, which was once considered an adult disease. Obesity can also lead to low self-esteem.
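The Body Mass Index guidelines mentioned above start from the same formula used for adults. The sketch below (with hypothetical numbers) computes the raw index; note that for children the resulting value is then placed on age- and sex-specific percentile growth charts rather than compared against the fixed adult cutoffs:

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

# Illustrative child (hypothetical): 40 kg, 1.40 m tall.
# A pediatrician would look this value up on a BMI-for-age growth chart.
print(round(bmi(40, 1.40), 1))
```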
All of these are conditions that children should not have to deal with, especially at such a young age. So how can you prevent childhood obesity in your home? Preventing childhood obesity starts with promoting a healthy lifestyle. As a parent you can educate your children on healthy eating habits. Make sure you have healthy snacks and meals at your house for them to eat. You can also show them by example the importance of daily physical activity. You can do physical activity together as a family, and it doesn’t have to be a huge, planned activity: play soccer, go on a walk or bike ride around the neighborhood, have a water fight, or play catch. The physical activity can be anything that gets your kids moving and shouldn’t become a family chore. Help teach your children healthy meal portions, especially when eating out. The most important thing is for you to lead by example. Don’t focus on an exact weight or on their unhealthy habits; make it more of a lifestyle change. As childhood obesity in America continues to increase, it is important that we take the steps necessary to help our children learn how to eat healthy, make exercise a part of their everyday activity, and learn correct portions. These simple steps will not only help them avoid obesity now, but will help them develop healthy habits that they will carry with them for the rest of their lives. It is important to lead by example as parents, because even though we are not aware of it, our children are watching the everyday choices that we make.
3.620312
Have you ever noticed a furry feeling when you’ve run your tongue over your teeth? Or have you noticed how much cleaner and smoother your teeth feel after having them professionally cleaned by a dentist or dental hygienist? Have you ever wondered why? Well, there’s a very simple explanation for that furry feeling. What you’re feeling is a layer of dental plaque that has built up over the tooth surfaces. Most people know that plaque is considered bad, but often they don’t understand what it really is or why it forms. How does plaque form? Plaque is a sticky layer of germs, food debris and proteins that grows on your teeth and is technically known as a biofilm. These germs occur naturally in everyone’s mouth. The sticky layer will form soon after you brush your teeth, thickening and spreading quickly as the germs multiply, particularly if those germs are ‘fed’ by frequent sugary foods or drinks passing over them. Every time you eat, you provide food for the germs in your plaque. This means you grow thicker plaque if you snack or graze throughout the day. That’s why your dentist will normally recommend that you limit your exposure to sugary, sticky or acidic foods to ideally only three or four occasions each day. And it’s why you’re better off having that sweet drink or treat as part of a normal meal than having it as a snack at another time. If you must snack, then sip some water soon afterwards to help wash away sugary and sticky debris from your teeth, or try to chew some sugar-free gum. Once plaque gets to be around twelve hours old, it feels quite furry on your teeth. Once it gets to be 24 hours old, we know that the plaque germs produce more acids and toxins which are factors in causing tooth decay and gum disease. How do I keep the plaque at bay? Most dentists recommend you brush your teeth every morning and night so that the layer of plaque is removed before it gets thick, furry and stubborn. 
If you must skip an opportunity to brush your teeth, then don’t skip cleaning your teeth before bedtime, otherwise your plaque will continue to grow while you’re asleep and will be more likely to be of an older, more damaging type. As plaque grows on all exposed tooth surfaces, remember to also floss your teeth at least once each day to clean those surfaces where your teeth meet and your brush can’t reach. That way, you’ll set yourself up for not only avoiding that furry feeling, but you’ll help to keep your teeth and gums healthy for a lifetime. What if I can’t get rid of that furry feeling? And if your teeth are still feeling furry despite all that brushing and flossing, make an appointment to see your dentist. He or she will check your teeth, remove any stubborn build-up and advise you on your cleaning technique, so that you’ll be in a better position to keep them smooth and shiny for longer! Westcoast International Dental Clinic Tel: +84 8 3825 6999
3.116828
A new study has shown that use of a blood glucose meter with advanced features, when paired with diabetes education, manages blood glucose more effectively than using a basic-feature meter. This information was presented at the recent 46th European Association for the Study of Diabetes (EASD) Annual Meeting in Stockholm, Sweden. The study was a six-month, randomized, multicenter prospective clinical outcomes study called ACT (Actions with the CONTOUR® Blood Glucose Meter and Behaviors in Frequent Testers). It was conducted at four clinical sites in the U.S. and evaluated the impact of diabetes education combined with use of advanced BGM features, versus diabetes education combined with the use of meters with basic features. Advanced features include a meal marker and reminder functions. Investigators also evaluated the influence of SMBG (self-monitoring of blood glucose) information, motivation, and behavioral skills on measures of glycemic control via survey questions based on the Information-Motivation-Behavioral Skills (IMB) model. As many with diabetes already know, simply remembering to test blood sugar levels is a major obstacle, especially both before AND after meals. At the end of this study, about one quarter of participants (24 percent in the basic group, 23 percent in the advanced group) said that remembering to test their blood sugar before meals is difficult. However, 55 percent of participants who used basic meters also said that it was difficult to remember to test their blood sugar after meals, versus 23 percent of those who used the advanced meter features. Thus, utilizing the meal marker feature made remembering to test after meals easier.
At the end of the study, more than 61 percent of participants who used the advanced features said they better understood how to make decisions on their own at home. Further, 66 percent had more confidence in their meal choices since they started testing pre-meal and post-meal blood sugars. Seventy-two percent of study participants who used the advanced features said they could use their meters in a more helpful way. Dr. William Fisher, Distinguished University Professor in the Department of Psychology at the University of Western Ontario in London, Canada, and co-developer of the IMB model, presented the study. He said, "There is a considerable amount of medical literature about adherence in diabetes, and a wide range of interventions have been shown to have a positive effect on knowledge, frequency, and accuracy of SMBG. Maintaining change in SMBG over time has been variable, however, and may be dependent upon regular reinforcement. What's been lacking is a well-integrated behavioral science model of factors that influence SMBG adherence. We are gratified to see that the IMB model for understanding and promoting health behavior change has provided evidence of utility in understanding SMBG in diabetes." Bayer's CONTOUR® USB and DIDGET meters were not used in the ACT clinical study. Both meters, however, are based on Bayer's CONTOUR® system, which was used in the ACT clinical study. Diabetes Health is the essential resource for people living with diabetes, both newly diagnosed and experienced, as well as the professionals who care for them. We provide balanced expert news and information on living healthfully with diabetes. Each issue includes cutting-edge editorial coverage of new products, research, treatment options, and meaningful lifestyle issues.
3.052188
Chaos theory, the study of how tiny fluctuations can have tremendous effects within a moving system, emerged in mainstream physics about 30 years ago. The signature example of this line of thinking—the “butterfly effect”—is that a butterfly flapping its wings in Taipei can affect the weather over Toronto. Chaos theory, or nonlinear dynamics, is a mathematical way of determining the effects of small changes on systems so complex they look random. Chaos theory swept through the scientific community. Jupiter’s red spot, fractal geometry, and economic forecasting all became some of chaos’s most celebrated clients. Physicists and mathematicians heralded the birth of a new science, and some saw chaos theory as a revolution on a par with quantum mechanics. The revolution stretched into popular culture. From The Simpsons to Jurassic Park, chaos theory became fashionable and funny, terrifying and true. In the 21st century, chaos theory, for all its previous pomp, makes barely a peep on the mainstream radar. Still, it hasn’t gone away—far from it, says Harvard University physicist Paul Martin. “It’s become part of the arsenal of tools that people use,” Martin says. “It’s a collection of tools, and it’s a way of understanding phenomena that occur over a wide range of fields.” But calling it a revolution was “not wise,” Martin says. The applications of chaos theory touch almost every field, and trying to group them under one umbrella would be a useless and herculean task. “It’s too ubiquitous to be a discipline unto itself, and too many fields use it,” he says. “There isn’t any great virtue in unifying under the one word chaos. It’s not an independent discipline.”
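The "tiny fluctuations, tremendous effects" idea can be demonstrated in a few lines of code. This sketch iterates the logistic map, a textbook chaotic system (chosen here for illustration; it is not one of the examples in the article), from two starting points that differ by only one part in a million:

```python
def logistic_trajectory(x0, r=4.0, steps=40):
    """Iterate the logistic map x -> r*x*(1 - x), which is chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # the "butterfly flap": a one-in-a-million nudge
divergence = max(abs(u - v) for u, v in zip(a, b))
print(divergence)  # far larger than the initial 1e-6 difference
```

Within a few dozen iterations the two trajectories bear no resemblance to each other, which is exactly the sensitive dependence on initial conditions that the butterfly effect describes.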
3.189194
Resistant Starch shows significant promise for weight loss, thermogenesis (burning fat), colon health, insulin management, and glycemic control. But what is it? There are traditionally 2 types of fiber: - Soluble Fiber - Insoluble Fiber But you may not have heard about a 3rd type of fiber called resistant starch, or RS for short. Starch is composed of a number of carbohydrates, or sugars, that are linked together. The straight-chain variety is called amylose. There is even an enzyme in saliva (and also the pancreas) called amylase that helps begin to break down these sugars so that our bodies can use them for energy. The branched-chain variety is called amylopectin. Amylose, as mentioned above, is a straight chain and able to be packaged very tightly by stacking the chains one on top of the other. When they are arranged in this manner, they are very resistant to being broken down by enzymes. Thus, they are "Resistant Starch". Amylopectin is not able to be stacked in this manner due to its multiple branches. There are 4 varieties of RS: - RS1 - physically inaccessible. Found in seeds, legumes, & unprocessed whole grains - RS2 - these occur in their natural, granular form. Found in uncooked potato, green banana flour, & high-amylose corn - RS3 - produced by the cooking, and then cooling, of starchy foods. Examples include bread, corn flakes, and cooked & chilled potatoes - RS4 - chemically modified. They are NOT found in nature and they have many different structures. It is called 'Resistant Starch' because it does not get broken down in the stomach or small intestine. Instead, it gets transported to the colon (large intestine), where it is broken down by colonic bacteria. This has a variety of benefits, because it increases satiety (makes you feel full) and decreases the glycemic index of foods. Again, more on this in the coming days. Resistant Starch shows promise for numerous reasons and in a variety of conditions.
They appear to improve colon health (by a variety of mechanisms), increase weight loss, change the way glucose is metabolized, promote fat burning, increase insulin sensitivity, improve glycemic control, and decrease calorie intake. I will be discussing each of these in more detail over the next several days. Leslie Bonci, RD, is the author of the American Dietetic Association's Guide to Better Digestion, and she has stated that "Resistant Starch has the potential to become the next nutritional trend". Maybe we should get on board.
3.130264
Eclipse predictions on this web site are based on j=2 ephemerides for the Sun [Newcomb, 1895] and Moon [Brown, 1919, and Eckert, Jones and Clark, 1954]. The value used for the Moon's secular acceleration is n-dot = -26 arc-sec/cy², as deduced by Morrison and Ward. The primary source of uncertainty in the position of eclipse paths before 1000 CE is variations in Earth's rotation, which is expressed through the parameter delta-T. The value for delta-T was determined as follows: 1) pre-1600: delta-T was derived from historical eclipse and occultation observation analyses by Stephenson; 2) 1600-present: delta-T was obtained from published observations; 3) future: delta-T was extrapolated from current values. Note that the predictions use a smaller value of k¹ (=0.272281) than the one adopted by the 1982 IAU General Assembly (k=0.2725076). This results in a better approximation of the Moon's minimum diameter and a slightly shorter total or longer annular eclipse when compared with calculations using the IAU value for k. ¹ k is the radius of the Moon expressed in units of Earth radii. All eclipse calculations are by Fred Espenak, and he assumes full responsibility for their accuracy. Permission is freely granted to reproduce this data when accompanied by an acknowledgment: Eclipse Predictions & WebMaster: Fred Espenak Planetary Systems Branch - Code 693
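Since k expresses the Moon's radius in units of Earth radii, the practical difference between the two k values is easy to quantify. A minimal sketch, assuming the WGS-84 equatorial Earth radius (an assumption for illustration; the predictions themselves may use a different reference radius):

```python
EARTH_RADIUS_KM = 6378.137  # WGS-84 equatorial radius, assumed for illustration

def lunar_radius_km(k):
    """Convert k (Moon's radius in units of Earth radii) to kilometers."""
    return k * EARTH_RADIUS_KM

print(lunar_radius_km(0.272281))   # value used by these predictions (~1736.6 km)
print(lunar_radius_km(0.2725076))  # 1982 IAU General Assembly value (~1738.1 km)
```

The roughly 1.4 km difference in implied lunar radius is what makes totality slightly shorter, and annularity slightly longer, under the smaller k.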
Iconic memory is the visual sensory memory (SM) register pertaining to the visual domain and a fast-decaying store of visual information. It is a component of the visual memory system, which also includes visual short-term memory (VSTM) and long-term memory (LTM). Iconic memory is described as a very brief (<1000 ms), pre-categorical, high-capacity memory store. It contributes to VSTM by providing a coherent representation of our entire visual perception for a very brief period of time. Iconic memory helps account for phenomena such as change blindness and continuity of experience during saccades. Iconic memory is no longer thought of as a single entity but is instead composed of at least two distinct components. Classic experiments, including Sperling's partial-report paradigm, as well as modern techniques, continue to provide insight into the nature of this SM store.

The occurrence of a sustained physiological image of an object after its physical offset has been observed by many individuals throughout history. One of the earliest documented accounts of the phenomenon was by Aristotle, who proposed that afterimages were involved in the experience of a dream. Natural observation of the light trail produced by a glowing ember at the end of a quickly moving stick sparked the interest of researchers in the 1700s and 1800s. They became the first to begin empirical studies of this phenomenon, which later became known as visible persistence. In the 1900s, the role of visible persistence in memory gained considerable attention due to its hypothesized role as a pre-categorical representation of visual information in VSTM. In 1960, George Sperling began his classic partial-report experiments to confirm the existence of visual sensory memory and some of its characteristics, including capacity and duration. It was not until 1967 that Ulric Neisser termed this quickly decaying memory store iconic memory.
Approximately 20 years after Sperling's original experiments, two separate components of visual sensory memory began to emerge: visible persistence and informational persistence. Sperling's experiments mainly tested the information pertaining to a stimulus, whereas others such as Coltheart performed direct tests of visible persistence. In 1978, Di Lollo proposed a two-state model of visual sensory memory. Although it has been debated throughout history, current understanding of iconic memory makes a clear distinction between visible and informational persistence, which are tested differently and have fundamentally different properties. Informational persistence, which is the basis of iconic memory, is thought to be the key contributor to visual short-term memory as the pre-categorical sensory store. A similar storage area serves as a temporary warehouse for sounds.

Components of Iconic Memory

The two main components of iconic memory are visible persistence and informational persistence. The first is a relatively brief (150 ms) pre-categorical visual representation of the physical image created by the sensory system. This would be the "snapshot" of what the individual is looking at and perceiving. The second component is a longer-lasting memory store which represents a coded version of the visual image as post-categorical information. This would be the "raw data" that is taken in and processed by the brain. A third component may also be considered, namely neural persistence: the physical activity and recordings of the visual system. Neural persistence is generally measured with neuroscientific techniques such as EEG and fMRI.

Visible Persistence

Visible persistence is the phenomenal impression that a visual image remains present after its physical offset. This can be considered a by-product of neural persistence.
Visible persistence is more sensitive to the physical parameters of the stimulus than informational persistence, which is reflected in its two key properties:
- The duration of visible persistence is inversely related to stimulus duration. This means that the longer the physical stimulus is presented, the faster the visual image decays in memory.
- The duration of visible persistence is inversely related to stimulus luminance. When the luminance, or brightness, of a stimulus is increased, the duration of visible persistence decreases.
Due to the involvement of the neural system, visible persistence is highly dependent on the physiology of the photoreceptors and the activation of different cell types in the visual cortex. This visible representation is subject to masking effects, whereby the presentation of an interfering stimulus during, or immediately after, stimulus offset interferes with one's ability to remember the stimulus. Different techniques have been used to attempt to identify the duration of visible persistence. The Duration of Stimulus Technique is one in which a probe stimulus (an auditory "click") is presented simultaneously with the onset, and on a separate trial, with the offset, of a visual display. The difference represents the duration of the visible store, which was found to be approximately 100-200 ms. Alternatively, the Phenomenal Continuity and Moving Slit Techniques estimated visible persistence to be 300 ms. In the first paradigm, an image is presented discontinuously with blank periods in between presentations. If the blank duration is short enough, the participant will perceive a continuous image. Similarly, the Moving Slit Technique is also based on the participant observing a continuous image. Instead of flashing the entire stimulus on and off, however, only a very narrow portion or "slit" of the image is displayed. When the slit is oscillated at the correct speed, a complete image is perceived.
Neural Basis of Visible Persistence

Underlying visible persistence is neural persistence of the visual sensory pathway. A prolonged visual representation begins with activation of photoreceptors in the retina. Although activation in both rods and cones has been found to persist beyond the physical offset of a stimulus, the rod system persists longer than the cone system. Other cells involved in a sustained visible image include the M and P retinal ganglion cells. M cells (transient cells) are active only during stimulus onset and stimulus offset. P cells (sustained cells) show continuous activity during stimulus onset, duration, and offset. Cortical persistence of the visual image has been found in the primary visual cortex (V1) in the occipital lobe, which is responsible for processing visual information.

Informational Persistence

Informational persistence represents the information about a stimulus that persists after its physical offset. It is visual in nature, but not visible. Sperling's experiments were a test of informational persistence. Stimulus duration is the key contributing factor to the duration of informational persistence: as stimulus duration increases, so does the duration of the visual code. The non-visual components represented by informational persistence include the abstract characteristics of the image, as well as its spatial location. Due to its nature, informational persistence, unlike visible persistence, is immune to masking effects. The characteristics of this component of iconic memory suggest that it plays the key role in representing a post-categorical memory store from which VSTM can access information for consolidation.

Neural Basis of Informational Persistence

Although less research exists regarding the neural representation of informational persistence compared to visible persistence, new electrophysiological techniques have begun to reveal the cortical areas involved.
Unlike visible persistence, informational persistence is thought to rely on higher-level visual areas beyond the visual cortex. The anterior superior temporal sulcus (STS), a part of the ventral stream, was found to be active in macaques during iconic memory tasks. This brain region is associated with object recognition and object identity. Iconic memory's role in change detection has been related to activation in the middle occipital gyrus (MOG). MOG activation was found to persist for approximately 2000 ms, suggesting the possibility that iconic memory has a longer duration than previously thought. Iconic memory is also influenced by genetics and proteins produced in the brain. Brain-derived neurotrophic factor (BDNF) is a member of the neurotrophin family of nerve growth factors. Individuals with mutations in the gene coding for BDNF have been shown to have shortened, less stable informational persistence.

Role of Iconic Memory

Iconic memory provides a smooth stream of visual information to the brain which can be extracted over an extended period of time by VSTM for consolidation into more stable forms. One of iconic memory's key roles is in the detection of change in our visual environment, which assists in the perception of motion.

Temporal Integration

Iconic memory enables the integration of visual information along a continuous stream of images, for example when watching a movie. In the primary visual cortex, new stimuli do not erase information about previous stimuli. Instead, the responses to the most recent stimulus contain about equal amounts of information about both this and the preceding stimulus. This one-back memory may be the main substrate for both the integration processes in iconic memory and masking effects. The particular outcome depends on whether the two subsequent component images (i.e., the "icons") are meaningful only when isolated (masking) or only when superimposed (integration).
Change Blindness

The brief representation in iconic memory is thought to play a key role in the ability to detect change in a visual scene. The phenomenon of change blindness has provided insight into the nature of the iconic memory store and its role in vision. Change blindness refers to an inability to detect differences in two successive scenes separated by a very brief blank interval, or interstimulus interval (ISI). As such, change blindness can be regarded as a brief lapse in iconic memory. When scenes are presented without an ISI, the change is easily detectable. It is thought that the detailed memory store of the scene in iconic memory is erased by each ISI, which renders the memory inaccessible. This reduces the ability to make comparisons between successive scenes.

Saccadic Eye Movement

It has been suggested that iconic memory plays a role in providing continuity of experience during saccadic eye movements. These rapid eye movements occur in approximately 30 ms, and each fixation lasts for approximately 300 ms. Research suggests, however, that memory for information between saccades depends largely on VSTM, not iconic memory. Instead of contributing to trans-saccadic memory, information stored in iconic memory is thought to actually be erased during saccades. A similar phenomenon occurs during eye blinks, whereby both automatic and intentional blinking disrupts the information stored in iconic memory.

Development of Iconic Memory

The development of iconic memory begins at birth and continues as development of the primary and secondary visual systems occurs. By 6 months of age, infants' iconic memory capacity approaches adults'. By 5 years of age, children have developed the same unlimited capacity of iconic memory that adults possess. The duration of informational persistence, however, increases from approximately 200 ms at age 5 to an asymptotic level of 1000 ms as an adult (>11 years).
A small decrease in visible persistence occurs with age. A decrease of approximately 20 ms has been observed when comparing individuals in their early 20s to those in their late 60s. Throughout one's lifetime, mild cognitive impairments (MCIs) may develop, such as errors in episodic memory (autobiographical memory about people, places, and their context) and working memory (the active processing component of STM), due to damage in hippocampal and association cortical areas. Episodic memories are autobiographical events that a person can discuss. Individuals with MCIs have been found to show decreased iconic memory capacity and duration. Iconic memory impairment in those with MCIs may be used as a predictor for the development of more severe deficits such as Alzheimer's disease and dementia later in life.

Sperling's Partial Report Procedure

In 1960, George Sperling became the first to use a partial-report paradigm to investigate the bipartite model of VSTM. In Sperling's initial experiments in 1960, observers were presented with a tachistoscopic visual stimulus for a brief period of time (50 ms) consisting of either a 3x3 or 3x4 array of alphanumeric characters such as:
- P Y F G
- V J S A
- D H B U
Recall was based on a cue which followed the offset of the stimulus and directed the subject to recall a specific line of letters from the initial display. Memory performance was compared under two conditions: whole report and partial report.

Whole Report

The whole report condition required participants to recall as many elements from the original display, in their proper spatial locations, as possible. Participants were typically able to recall three to five characters from the twelve-character display (~35%). This suggests that whole report is limited by a memory system with a capacity of four to five items.

Partial Report

The partial report condition required participants to identify a subset of the characters from the visual display using cued recall.
The cue was a tone which sounded at various time intervals (~50 ms) following the offset of the stimulus. The frequency of the tone (high, medium, or low) indicated which row of characters within the display was to be reported. Because participants did not know which row would be cued for recall, performance in the partial report condition can be regarded as a random sample of an observer's memory for the entire display. This type of sampling revealed that immediately after stimulus offset, participants could recall most letters in a given row, corresponding to an estimated 9 of the 12 letters, suggesting that 75% of the entire visual display was accessible to memory. This is a dramatic increase over the hypothesized capacity of iconic memory derived from whole-report trials.

Variations of the Partial Report Procedure

Visual Bar Cue

A small variation in Sperling's partial report procedure, which yielded similar results, was the use of a visual bar marker instead of an auditory tone as the retrieval cue. In this modification, participants were presented with a visual display of 2 rows of 8 letters for 50 ms. The probe was a visual bar placed above or below a letter's position simultaneously with array offset. Participants had an average accuracy of 65% when asked to recall the designated letter.

Temporal Variations

Varying the time between the offset of the display and the auditory cue allowed Sperling to estimate the time course of sensory memory. Sperling deviated from the original procedure by varying tone presentation from immediately after stimulus offset to 150, 500, or 1000 ms. Using this technique, the initial memory for a stimulus display was found to decay rapidly after display offset. At approximately 1000 ms after stimulus offset, there was no difference in recall between the partial-report and whole-report conditions.
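The inference behind the partial-report estimate can be made explicit in a short sketch. The recall figures below are the approximate values quoted in the text, not exact data from Sperling's paper, and the function name is illustrative:

```python
def estimate_available_items(recalled_in_cued_row, n_rows):
    """Partial-report logic: since the cued row is a random sample of the
    display, scale recall in that row by the number of rows to estimate
    how many items were available in iconic memory at the time of the cue."""
    return recalled_in_cued_row * n_rows

# 3x4 display, cue immediately after offset: roughly 3 of the 4 letters
# in the cued row are recalled.
available = estimate_available_items(3, 3)   # 3 letters x 3 rows = 9 of 12
coverage = available / 12                    # fraction of display accessible

# With the cue delayed ~1000 ms, per-row recall falls to ~1.5 letters,
# matching whole-report performance of four to five items.
delayed = estimate_available_items(1.5, 3)   # 1.5 x 3 = 4.5 items
```

The same scaling logic, applied at increasing cue delays, is what traces out the rapid decay of the icon described under Temporal Variations.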
Overall, experiments using partial report provided evidence for a rapidly decaying sensory trace lasting approximately 1000 ms after the offset of a display.

Circle Cue and Masking

The effects of masking were identified by the use of a circle presented around a letter as the cue for recall. When the circle was presented before the visual stimulus onset or simultaneously with stimulus offset, recall matched that found when using a bar or tone. However, if a circle was used as a cue 100 ms after stimulus offset, there was decreased accuracy in recall. As the delay of circle presentation increased, accuracy once again improved. This phenomenon was an example of metacontrast masking. Masking was also observed when images such as random lines were presented immediately after stimulus offset.

- Sperling, George (1960). "The information available in brief visual presentations". Psychological Monographs 74: 1–29.
- Dick, A. O. (1974). "Iconic memory and its relation to perceptual processing and other memory mechanisms". Perception & Psychophysics 16 (3): 575–596. doi:10.3758/BF03198590.
- Coltheart, Max (1980). "Iconic memory and visible persistence". Perception & Psychophysics 27 (3): 183–228. doi:10.3758/BF03204258.
- Allen, Frank (1926). "The persistence of vision". American Journal of Physiological Optics 7: 439–457.
- Neisser, Ulric (1967). Cognitive Psychology. New York: Appleton-Century-Crofts.
- Di Lollo, Vincent (1980). "Temporal integration in visual memory". Journal of Experimental Psychology: General 109: 75–97.
- Irwin, David; James Yeomans (1986). "Sensory Registration and Informational Persistence". Journal of Experimental Psychology: Human Perception and Performance 12 (3): 343–360.
- Schacter, Daniel L (2009–2011). Psychology. Catherine Woods. p. 226. ISBN 978-1-4292-3719-2.
- Loftus, Geoffrey; T. Bursey, J. Senders (1992).
"On the time course of perceptual information that results from a brief visual presentation". Journal of Experimental Psychology 54: 535–554.
- Long, Gerald (1980). "Iconic Memory: A Review and Critique of the Study of Short-Term Visual Storage". Psychological Bulletin 88 (3): 785–820. doi:10.1037/0033-2909.88.3.785. PMID 7003642.
- Haber, R.; L. Standing (1970). "Direct measures of short-term visual storage". Quarterly Journal of Experimental Psychology 21: 216–229.
- Irwin, David; Thomas, Laura (2008). "Neural Basis of Sensory Memory". In Steven Luck and Andrew Hollingworth. Visual Memory. New York, New York: Oxford University Press. pp. 32–35. ISBN 978-0-19-530548-7.
- Levick, W.; J. Zacks (1970). "Responses of cat retinal ganglion cells to brief flashes of light". Journal of Physiology 206 (3): 677–700. PMC 1348672. PMID 5498512.
- Nikolić, Danko; S. Häusler, W. Singer and W. Maass (2009). "Distributed fading memory for stimulus properties in the primary visual cortex". PLoS Biology 7 (12): e1000260. doi:10.1371/journal.pbio.1000260. PMC 2785877. PMID 20027205.
- Greene, Ernest (2007). "Information persistence in the integration of partial cues for object recognition". Perception & Psychophysics 69 (5): 772–784. doi:10.3758/BF03193778.
- Beste, Christian; Daniel Schneider, Jörg Epplen, Larissa Arning (2011). "The functional BDNF Val66Met polymorphism affects functions of pre-attentive visual sensory memory processes". Neuropharmacology 60 (2–3): 467–471. doi:10.1016/j.neuropharm.2010.10.028. PMID 21056046.
- Urakawa, Tomokazu; Koji Inui, Koya Yamashiro, Emi Tanaka, Ryusuke Kakigi (2010). "Cortical dynamics of visual change detection based on sensory memory". NeuroImage 52 (1): 302–308. doi:10.1016/j.neuroimage.2010.03.071. PMID 20362678.
- Becker, M.; H. Pashler, S. Anstis (2000). "The role of iconic memory in change-detection tasks". Perception 29 (3): 273–286. PMID 10889938.
- Jonides, J.; D.
Irwin, S. Yantis (1982). "Integrating visual information from successive fixations". Science 215 (4529): 192–194. doi:10.1126/science.7053571. PMID 7053571. - Thomas, Laura; David Irwin (2006). "Voluntary eyeblinks disrupt iconic memory". Perception & Psychophysics 68 (3): 475–488. doi:10.3758/BF03193691. - Blaser, Erik; Zsuzsa Kaldy (2010). "Infants Get Five Stars on Iconic Memory Tests: A Partial Report Test of 6-month-old Infants’ Iconic Memory Capacity". Psychological Science 21 (11): 1643–1645. PMID 20923928. - Walsh, David; Larry Thompson (1978). "Age Differences in Visual Sensory Memory". Journal of Gerontology 33 (3): 383–387. PMID 748430. - Averbach, E; Sperling, G (1961). "Short-term storage of information in vision". In C. Cherry. Information Theory. London: Butterworth. pp. 196–211. - Sperling, George (1967). "Successive approximations to a model for short-term memory". Acta Psychologica 27: 285–292. doi:10.1016/0001-6918(67)90070-4. PMID 6062221. - Averbach, E; A. Coriell (1961). "Short-term memory in vision". Bell Systems Technical Journal 40: 309–328. - Sperling, George (1963). "A model for visual memory tasks". Human Factors 5: 19–31. PMID 13990068.
Micrograph of a colonic pseudomembrane in Clostridium difficile colitis, a type of pseudomembranous colitis.

Pseudomembranous colitis, a cause of antibiotic-associated diarrhea (AAD), is an inflammation of the colon. It is often, but not always, caused by the bacterium Clostridium difficile. Because of this, the informal name C. difficile colitis is also commonly used. The illness is characterized by offensive-smelling diarrhea, fever, and abdominal pain. In severe cases, life-threatening complications can develop, such as toxic megacolon.

Signs and symptoms

As noted above, pseudomembranous colitis is characterized by diarrhea, abdominal pain, and fever. Usually, the diarrhea is not bloody, although blood may be present if the affected individual is taking blood thinners or has an underlying lower bowel condition, such as inflamed hemorrhoids. Abdominal pain is almost always present and may be severe. So-called "peritoneal" signs (e.g., rebound tenderness) may be present. "Constitutional" signs such as fever, fatigue, and loss of appetite are prominent. In fact, one of the main ways of distinguishing pseudomembranous colitis from other antibiotic-associated diarrheal states is that patients with the former are "sick": they are often prostrate, lethargic, and in general look unwell. Their "sick" appearance tends to be paralleled by the results of their blood tests, which often show anemia, an elevated white blood cell count, and low serum albumin.

The use of clindamycin, broad-spectrum antibiotics such as cephalosporins, or any penicillin-based antibiotic such as amoxicillin causes the normal bacterial flora of the bowel to be altered. In particular, when the antibiotic kills off competing bacteria in the intestine, any bacteria remaining have less competition for space and nutrients. The net effect is to permit more extensive growth than normal of certain bacteria.
Clostridium difficile is one such type of bacterium. In addition to proliferating in the bowel, C. difficile also produces toxins. Without either toxin A or toxin B, C. difficile may colonize the gut, but is unlikely to cause pseudomembranous colitis.

To make the diagnosis, it is essential that the treating physician be aware of any recent antibiotic usage. The disease may occur as late as six months after antibiotic use. Although there is some relationship between the dose and duration of antibiotic use and the likelihood of developing pseudomembranous colitis, it may occur even after a single dose of antibiotic. In fact, the use of a single-dose prophylactic antibiotic is a common practice in surgical and dental patients to prevent infections associated with a procedure. Hence, though unlikely to cause pseudomembranous colitis on a per-case basis, single-dose antibiotic treatment, by virtue of the large number of patients receiving it, is an important cause of pseudomembranous colitis. Use of proton pump inhibitor drugs such as omeprazole for gastric reflux, of some forms of asthma inhaler, or of any drug with anticholinergic effects that slows digestive transit time leads to retention of toxins and exacerbates the effects of broad-spectrum antibiotics.

Prior to the advent of tests to detect C. difficile toxins, the diagnosis was most often made by colonoscopy or sigmoidoscopy. The appearance of "pseudomembranes" on the mucosa of the colon or rectum is diagnostic of the condition. The pseudomembranes are composed of an exudate made of inflammatory debris and white blood cells. Although colonoscopy and sigmoidoscopy are still employed, stool testing for the presence of C. difficile toxins is now often the first-line diagnostic approach. Usually, only two toxins are tested for - toxin A and toxin B - but the organism produces several others. This test is not 100% accurate, with a considerable false-negative rate even with repeat testing.
Another, more recent two-step approach involves testing for the presence of C. difficile in the stool and then testing for toxin production. The first step is performed by testing for the presence of the C. difficile GDH antigen. If the first step is positive, a second test, a PCR assay targeting the toxin genes, is performed.

A randomized controlled trial using a probiotic drink containing Lactobacillus casei, L. bulgaricus, and Streptococcus thermophilus was reported to have some efficacy. This study was, however, sponsored by the company that produces the drink. Although intriguing, several other studies have been unable to demonstrate any benefit of oral supplements of similar bacteria in preventing C. difficile-associated diarrhea.

The disease is treated either with oral vancomycin or with intravenous metronidazole. Oral metronidazole at doses of 500 mg three times a day for 10 to 14 days can be used for mild to moderate cases of C. difficile. The choice of drug depends on the severity of disease and the ability to tolerate and absorb oral medications. Vancomycin treatment does present the risk of the development of vancomycin-resistant Enterococcus, though it is only minimally absorbed into the bloodstream from the gastrointestinal tract. Metronidazole has on occasion been associated with the development of pseudomembranous colitis. In these cases, metronidazole is still an effective treatment, since the cause of the colitis is not the antibiotic, but rather the change in bacterial flora from a previous round of antibiotics. C. difficile infections that do not respond to vancomycin or metronidazole are sometimes treated with oral rifaximin. Fidaxomicin, a new alternative, has been approved for treatment as of mid-2011.
A small number of academic institutions have successfully treated pseudomembranous colitis with fecal transplants; this therapy is typically reserved for severe recurrent infections and has demonstrated favorable outcomes in cases not curable with antimicrobial options. Cholestyramine and other bile acid sequestrants should not be used as adjunctive therapy because, though they may bind the C. difficile toxin, they can also inhibit the effects of the primary antibiotic.

Several probiotic therapies have been used as adjunct therapies for pseudomembranous colitis. Saccharomyces boulardii (similar to baker's yeast) has been shown in one small study of 124 patients to reduce the recurrence rate of pseudomembranous colitis. A number of mechanisms have been proposed to explain this effect. Fecal bacteriotherapy, a medical treatment which involves restoring colon homeostasis by reintroducing normal bacterial flora using fecal material obtained from a healthy donor, has been used successfully to treat acute pseudomembranous colitis. If antibiotics do not control the infection, the patient may require a colectomy (removal of the colon) for treatment of the colitis.

In most cases, a patient presenting with pseudomembranous colitis has recently been on antibiotics. Antibiotics disturb the normal bowel bacterial flora. Certain antibiotics, such as ampicillin, have a higher propensity to create an environment where the bacteria causing pseudomembranous colitis can outcompete the normal gut flora. Clindamycin is the antibiotic classically associated with this disorder, but any antibiotic can cause the condition. Though they are not particularly likely to cause pseudomembranous colitis, cephalosporin antibiotics (such as cefazolin and cephalexin) account for a large percentage of cases due to their very frequent use. Diabetics and the elderly are also at increased risk, although half of cases are not associated with risk factors.
Other risk factors include increasing age and recent major surgery. Some evidence shows proton pump inhibitors are a risk factor for C. difficile infection and pseudomembranous colitis, but others question whether this is a false association or a statistical artifact (increased PPI use is itself a marker of increased age and co-morbid illness); indeed, one large case-control study showed PPIs are not a risk factor.

- Sarah A. Kuehne, Stephen T. Cartman, John T. Heap, Michelle L. Kelly, Alan Cockayne & Nigel P. Minton (2010). "The role of toxin A and toxin B in Clostridium difficile infection". Nature 467 (7316): 711–3. doi:10.1038/nature09397. PMID 20844489.
- Hickson M, D'Souza AL, Muthu N, et al. (2007). "Use of probiotic Lactobacillus preparation to prevent diarrhoea associated with antibiotics: randomised double blind placebo controlled trial". BMJ 335 (7610): 80. doi:10.1136/bmj.39231.599815.55. PMC 1914504. PMID 17604300.
- Brandt LJ, Reddy SS (2011). "Fecal microbiota transplantation for recurrent Clostridium difficile infection". J Clin Gastroenterol 45 (suppl): S159–S167.
- McFarland LV, Surawicz CM, Greenberg RN, et al. (1994). "A randomized placebo-controlled trial of Saccharomyces boulardii in combination with standard antibiotics for Clostridium difficile disease". JAMA 271 (24): 1913–18. doi:10.1001/jama.271.24.1913. PMID 8201735.
- Schwan A, Sjölin S, Trottestam U, Aronsson B (1983). "Relapsing Clostridium difficile enterocolitis cured by rectal infusion of homologous faeces". Lancet 2 (8354): 845. doi:10.1016/S0140-6736(83)90753-5. PMID 6137662.
- Paterson D, Iredell J, Whitby M (1994). "Putting back the bugs: bacterial treatment relieves chronic diarrhoea". Med J Aust 160 (4): 232–3. PMID 8309401.
- Borody T (2000). ""Flora Power" -- fecal bacteria cure chronic C. difficile diarrhea". Am J Gastroenterol 95 (11): 3028–9.
doi:10.1111/j.1572-0241.2000.03277.x. PMID 11095314.
- Katzung, Bertram G. (2007). Basic and Clinical Pharmacology. New York, NY: McGraw Hill Medical. p. 733. ISBN 978-0-07-145153-6.
- Deshpande A, Pant C, Pasupuleti V (2011). "Association between proton pump inhibitor therapy and Clostridium difficile infection in a meta-analysis". Clin. Gastroenterol. Hepatol. doi:10.1016/j.cgh.2011.09.030. PMID 22019794.
- Dial S, Delaney C, Schneider V, Suissa S (2006). "Proton pump inhibitor use and risk of community-acquired Clostridium difficile-associated disease defined by prescription for oral vancomycin therapy". CMAJ 175 (7): 745–48. doi:10.1503/cmaj.060284. PMC 1569908. PMID 17001054.
- Pépin J, Saheb N, Coulombe M, et al. (2005). "Emergence of fluoroquinolones as the predominant risk factor for Clostridium difficile-associated diarrhea: a cohort study during an epidemic in Quebec". Clin Infect Dis 41 (9): 1254–60. doi:10.1086/496986. PMID 16206099.
- Lowe DO, Mamdani MM, Kopp A, Low DE, Juurlink DN (2006). "Proton pump inhibitors and hospitalization for Clostridium difficile-associated disease: a population-based study". Clin Infect Dis 43 (10): 1272–6. doi:10.1086/508453. PMID 17051491.
- PWA Health Group
- Saccharomyces boulardii Info Sheet
- Video depicting the colonoscopy of a colon with pseudomembranous colitis
Native to: Poland (Silesian Voivodeship, Opole Voivodeship), Czech Republic (Moravia–Silesia, Jeseník)
Region: Upper Silesia / Silesia
Native speakers: 509,000 (2011 census)

Silesian or Upper Silesian (Silesian: ślōnskŏ gŏdka, ślůnsko godka; Czech: slezský jazyk; Polish: język śląski) is a West Slavic lect, closely related to Polish and Czech. Its vocabulary has been significantly influenced by German owing to the numerous Silesian German speakers in the area prior to World War II and afterwards, until the 1990s.

There is no consensus on whether Silesian is a separate language or a somewhat divergent dialect of Polish. The issue is largely unanswerable on linguistic criteria, due to the existence of a dialect continuum between Polish and Czech formed by the Silesian and Lach varieties. The issue of whether language forms like Silesian and Lach represent minority languages in their own right is generally quite contentious in Europe, due to the increased linguistic and political rights generally enjoyed by speakers of recognized minority languages, and Silesian is no exception. In this instance, local Silesians tend to advocate in favor of language status, while Poles and Czechs from other regions tend to advocate against it. International linguists tend toward giving it dialect status.

Silesian speakers currently live in the region of Upper Silesia, which is split between southwestern Poland and the northeastern Czech Republic. At present Silesian is commonly spoken in the area between the historical border of Silesia on the east and a line from Syców to Prudnik on the west, as well as in the Rawicz area. Until 1945 Silesian was also spoken in enclaves in Lower Silesia, where Lower Silesian, a German dialect, was spoken by the ethnic German majority population of that region at the time.
According to the last official census in Poland in 2011, about 509,000 people declared Silesian as their native language (in the 2002 census, about 60,000), and in censuses in Poland, the Czech Republic and Slovakia, nearly 0.9 million people declared Silesian nationality. In 2003, the National Publishing Company of Silesia (Narodowa Oficyna Śląska) commenced operations. The publisher was founded by the Alliance of the People of the Silesian Nation (Związek Ludności Narodowości Śląskiej) and prints books about Silesia and books in the Silesian language. On 30 June 2008, a conference on the status of the Silesian language took place in the building of the Silesian Parliament in Katowice. It was a forum for politicians, linguists, representatives of interested organizations, and others who work with the Silesian language, and was titled "Silesian — Still a Dialect or Already a Language?" (Śląsko godka — jeszcze gwara czy jednak już język?).

Writing system

Ślabikŏrzowy szrajbōnek is a relatively new alphabet created by the Pro Loquela Silesiana organization to cover the speech of all Silesian dialects. It was approved by the Silesian organizations affiliated in Rada Górnośląska. The Ubuntu translation and the Silesian Wikipedia use this alphabet, and it has been used in a few books, including a Silesian alphabet book.
- Letters: A, Ã, B, C, Ć, D, E, F, G, H, I, J, K, L, Ł, M, N, Ń, O, Ŏ, Ō, Ô, Õ, P, R, S, Ś, T, U, W, Y, Z, Ź, Ż.

One of the first alphabets created specifically for Silesian was Steuer's Silesian alphabet, devised in the interwar period and used by Feliks Steuer to write his poems in Silesian.
The alphabet consists of 30 graphemes and eight digraphs:
- Letters: A, B, C, Ć, D, E, F, G, H, I, J, K, L, Ł, M, N, Ń, O, P, R, S, Ś, T, U, Ů, W, Y, Z, Ź, Ż
- Digraphs: Au, Ch, Cz, Dz, Dź, Dż, Rz, Sz

Other alphabets are also sometimes used, for example the "Tadzikowy muster" (used for the National Dictation Contest of the Silesian language) or the Polish alphabet; writing Silesian in the Polish alphabet is problematic, however, as it cannot record and properly distinguish all Silesian sounds.

While the morphological differences between Silesian and the neighboring language of Polish have been researched extensively, other grammatical differences have not been studied in great depth. One example is that, in contrast with Polish, Silesian retains the pluperfect (joech śe była uobaliyła — "I had slipped") and a separate past conditional (jo bych śe была uobaliyła — "I would have slipped"). Another major difference is in question-forming. In Polish, questions that do not contain interrogative words are formed either by using intonation or the interrogative particle czy. In Silesian, such questions are formed by using intonation (with a markedly different intonation pattern than in Polish) or inversion (e.g. je to na mapie?); there is no interrogative particle.

The Lord's Prayer in Silesian, Polish and Czech.

Dialects of Silesian

The Silesian language has many local dialects:
- Dialects spoken in areas which are now part of the Czech Republic
- Dialects spoken in areas which are now part of Poland

Dialect vs language

Opinions are divided among linguists about whether Silesian is a distinct language or a dialect of Polish. The issue can be contentious, as some Silesians consider themselves to be a nationality within Poland.
Some linguists from Poland, such as Jolanta Tambor, Juan Lajo, Tomasz Wicherkiewicz, the philosopher Jerzy Dadaczyński, the sociologist Elżbieta Anna Sekuła and the sociolinguist Tomasz Kamusella, support its status as a language. According to Stanisław Rospond, it is impossible to classify Silesian as a dialect of the contemporary Polish language because he considers it to be descended from the Old Polish language. Other Polish linguists, such as Jan Miodek and Edward Polański, do not support its status as a language. The Silesian linguist Reinhold Olesch supported the status of Silesian as a Slavic language, and Norman Davies from the United Kingdom shows Silesian among the Slavic languages in a diagram at the end of his book. Most linguists writing in English, such as Alexander M. Schenker, Robert A. Rothstein, and Roland Sussex and Paul Cubberley in their respective surveys of Slavic languages, list Silesian as a dialect of Polish, as does Encyclopædia Britannica. A similar disagreement exists concerning the neighboring Lach varieties, which are sometimes considered separate languages and sometimes dialects of Czech, although the latter opinion currently appears dominant. Gerd Hentschel wrote "Das Schlesische ... kann somit ... ohne Zweifel als Dialekt des Polnischen beschrieben werden" ("Silesian ... can thus ... without doubt be described as a dialect of Polish"), but in a later work, his book Das Schlesische — eine neue (oder auch nicht neue) slavische Sprache? (Silesian — a new (or not new) Slavic language?), he concludes that it is a language.
The Silesian language has recently seen increased use in culture, for example:
- TV and radio stations (for example: TV Silesia, Sfera TV, Slonsky Radio, Radio Piekary);
- music groups (for example: Hasiok, Dohtor Miód, FEET);
- theatre (for example: Polterabend in the Silesian Theatre);
- film (for example: Grzeszny żywot Franciszka Buły ("The Sinful Life of Franciszek Buła"));
- books (for example, the so-called Silesian Bible; poetry: "Myśli ukryte" by Karol Gwóźdź);
- teaching aids (for example, a Silesian basal reader).

See also
|Wikimedia Commons has media related to: Silesian language|
|Wikinews has related news: Silesian language granted ISO code|
- Narodowy Spis Powszechny Ludności i Mieszkań 2011. Raport z wyników - Central Statistical Office of Poland
- "Ethnologue report for language code: szl". Ethnologue. Languages of the World.
- Tomasz Kamusella. 2013. The Silesian Language in the Early 21st Century: A Speech Community on the Rollercoaster of Politics (pp 1-35). Die Welt der Slaven. Vol 58, No 1.
- (Polish) "Ludność według języka używanego w kontaktach domowych i deklaracji narodowościowej w 2002 roku" [Population by language used at home and declarations of nationality in 2002] (XLS). Main Statistical Office of the Polish Government: report of Polish census of 2002.
- "Obyvatelstvo podle národnosti podle krajů" (PDF). Czech Statistical Office.
- "Národnost ve sčítání lidu v českých zemích" (PDF). Retrieved 2012-08-16.
- National Minorities in the Slovak Republic - Ministry of Foreign Affairs of the Slovak Republic
- (Polish) "Narodowa Oficyna Śląska" [National Publishing Company of Silesia].
- (English) "ISO documentation of Silesian language". SIL International.
- (Polish) Dziennik Zachodni (2008). "Śląski wśród języków świata" [Silesian Among the Languages of the World]. Our News Katowice.
- (Silesian) / (Polish) "National Dictation contest of the Silesian language".
- (Polish) "Śląski wśród języków świata" [Silesian Among the Languages of the World]. Dziennik Zachodni. 2008.[dead link]
- (Polish) "Śląska Wikipedia już działa" [Silesian Wikipedia already operating]. Gazeta Wyborcza-Gospodarka. 2008.
- (Polish) "Katowice: konferencja dotycząca statusu śląskiej mowy" [Katowice: Conference concerning the status of the Silesian language]. Polish Wikinews. 1 July 2008. Retrieved 6 April 2012.
- Dz.U. 2012 nr 0 poz. 309 - Internet System of Legal Acts
- Mirosław Syniawa: Ślabikŏrz niy dlŏ bajtli. Pro Loquela Silesiana. ISBN 978-83-62349-01-2
- "Ekspertyza naukowa prof. UŚ dr hab. Jolanty Tambor" (en: "The scientific expertise of Jolanta Tambor"), 2008
- "Ekspertyza naukowa pana Juana Lajo" (en: "The scientific expertise of Juan Lajo"), 2008
- "Ekspertyza naukowa dra Tomasza Wicherkiewicza" (en: "The scientific expertise of Tomasz Wicherkiewicz"), 2008
- "Ekspertyza naukowa ks. dra hab. Jerzego Dadaczyńskiego" (en: "The scientific expertise of Jerzy Dadaczyński"), 2008
- "Ekspertyza naukowa dr Elżbiety Anny Sekuły" (en: "The scientific expertise of Elżbieta Anna Sekuła"), 2008
- (Polish) Tomasz Kamusella. Schlonzska mowa — Język, Górny Śląsk i nacjonalizm [Silesian speech — language, Upper Silesia and nationalism]. ISBN 83-919589-2-2.
- (English) Tomasz Kamusella (2003). "The Szlonzoks and their Language: Between Germany, Poland and Szlonzokian Nationalism" (PDF). European University Institute — Department of History and Civilization and Opole University.
- "Polszczyzna śląska" - Stanisław Rospond, Ossolineum 1970, p. 80-87
- (German) Reinhold Olesch (1987). Zur schlesischen Sprachlandschaft: Ihr alter slawischer Anteil [On the Silesian language landscape: its old Slavic share]. pp. 32–45.
- (Polish) Joanna Rostropowicz. Śląski był jego językiem ojczystym: Reinhold Olesch, 1910–1990 [Silesian was his mother tongue: Reinhold Olesch, 1910–1990].
- Krzysztof Kluczniok, Tomasz Zając (2004).
Śląsk bogaty różnorodnością — kultur, narodów i wyznań. Historia lokalna na przykładzie wybranych powiatów, miast i gmin [Silesia, a rich diversity — of cultures, nations and religions. Local history, based on selected counties, cities and municipalities]. Urząd Gm. i M. Czerwionka-Leszczyny, Dom Współpracy Pol.-Niem., Czerwionka-Leszczyny. ISBN 83-920458-5-8.
- (English) Norman Davies. Europe: A History. p. 1233. ISBN 0-19-820171-0.
- Alexander M. Schenker, "Proto-Slavonic," The Slavonic Languages (1993, Routledge), pages 60-121.
- Robert A. Rothstein, "Polish," The Slavonic Languages (1993, Routledge), pages 686-758.
- Roland Sussex & Paul Cubberley, The Slavic Languages (2006, Cambridge University Press).
- "Silesian". Encyclopædia Britannica.
- (German) Dušan Šlosar. http://www.uni-klu.ac.at/eeo/Tschechisch.pdf (PDF).
- (German) Aleksandr Dulichenko. "Lexikon der Sprachen des europäischen Ostens" (PDF).
- (Czech) Pavlína Kuldanová (2003). "Útvary Českého Národního Jazyka" [Forms of the Czech National Language].
- (English) Ewald Osers (1949). Silesian Idiom and Language. New York.
- (English) Slavonic Encyclopedia. pp. 1149–51.
- (German) Gerd Hentschel. "Schlesisch".
- Gerd Hentschel (Band 2, 2001). "Das Schlesische – eine neue (oder auch nicht neue) slavische Sprache?". Mitteleuropa – Osteuropa. Oldenburger Beiträge zur Kultur und Geschichte Ostmitteleuropas. ISBN 3-631-37648-0.
- (Silesian) "www.slonskyradio.eu".
- (Polish) "Po śląsku w kaplicy" [In Silesian in the chapel]. e-teatr.pl.
- (Polish) "Stanisław Mutz — Polterabend". Silesian Theatre.
- (Silesian) Przemysław Jedlicki, Mirosław Syniawa (13 February 2009). "Ślabikorz dlo Slůnzokůw". Gazeta Wyborcza Katowice. Archived from the original on 13 February 2009.
|Wikibooks has a book on the topic of: Silesian| |Wikimedia Commons has media related to: Silesian language| |Silesian language edition of Wikipedia, the free encyclopedia| - Silesian language at Ethnologue (16th ed., 2009) - (English) Silesian dictionary - (Silesian) Pů našymu - (Silesian) Slonsko Lauba - (Silesian) Slunskoeka - (Silesian) Jynzyk S'loonski
Durell, S. E. A. L. V. d., Goss-Custard, J. D. and Clarke, R. T., 2001. Modelling the Population Consequences of Age- and Sex-Related Differences in Winter Mortality in the Oystercatcher, Haematopus Ostralegus. Oikos, 95 (1), pp. 69-77. Full text not available from this repository. Official URL: http://ejournals.ebsco.com/direct.asp?ArticleID=29... A modelling approach is used to explore the effect of age and sex differences in oystercatcher (Haematopus ostralegus) winter mortality on population size, population structure and the population response to habitat loss or change. Increasing the mortality of first and second year birds reduced population size, but had very little effect on the proportion of the population that were adults. Increasing female mortality reduced population size and resulted in a male-biased population. A sex bias amongst birds of breeding age meant that there were fewer potential breeding pairs for a given population size, reducing the size of the breeding population and the breeding output. Increasing the mortality of one sex relative to the other reduced population size, even when mean adult mortality rates remained unchanged. Increasing the strength of density-dependent mortality in young birds caused a greater reduction in population size as habitat was lost. Increasing the strength of female density-dependent mortality had the same effect, even though male density-dependent mortality had been correspondingly reduced. Increasing density-independent or density-dependent winter mortality in one sex relative to another also exaggerated the disproportional effect of winter habitat loss on separate breeding subpopulations using the same overwintering area. These results suggest that any study of population dynamics should be aware of both age and sex differences in mortality. Conservationists should be particularly aware of any age or sex differences in diet or habitat use that may result in a differential response to environmental change. 
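One of the abstract's claims, that increasing the mortality of one sex reduces population size even when the mean adult mortality rate is unchanged, can be illustrated with a toy two-sex model. To be clear, this sketch is not the authors' model: the structure (density-dependent recruitment limited by the scarcer sex) and every parameter value below are invented for illustration only.

```python
# Toy two-sex population model with density-dependent recruitment.
# NOT the model from the paper; all parameter values are invented.

def simulate(female_mort, male_mort, years=2000,
             fecundity=1.0, juv_mort=0.6, k=1000.0):
    """Return (females, males) after `years` annual time steps."""
    females, males = 500.0, 500.0
    for _ in range(years):
        pairs = min(females, males)  # breeding limited by the scarcer sex
        # Beverton-Holt-style density-dependent recruitment
        recruits = fecundity * pairs / (1.0 + pairs / k)
        surviving_young = recruits * (1.0 - juv_mort)
        females = females * (1.0 - female_mort) + surviving_young / 2.0
        males = males * (1.0 - male_mort) + surviving_young / 2.0
    return females, males

# Both runs have the same MEAN adult mortality (0.10):
equal = sum(simulate(0.10, 0.10))    # equilibrates near 2000 birds
skewed = sum(simulate(0.15, 0.05))   # female-biased mortality: smaller total
print(equal, skewed)
```

In the skewed run, females become the scarcer sex, so the number of breeding pairs (and hence recruitment) falls, dragging the whole population down even though the average adult death rate is identical.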
Subjects: Geography and Environmental Studies; Science > Biology and Botany
Group: School of Applied Sciences > Centre for Conservation, Ecology and Environmental Change
Deposited On: 27 Nov 2008 19:20
Last Modified: 07 Mar 2013 14:58
Vivekanandan, E and Rajagopalan, M and Pillai, N G K (2009) Recent Trends in Sea Surface Temperature and its Impact on Oil Sardine. In: Global Climate Change and Indian Agriculture. Aggarwal, P K (ed.), Indian Council of Agricultural Research, New Delhi, pp. 89-92. The oil sardine is a coastal, pelagic schooling fish, forming massive fisheries in India. It has a high population growth rate, with a doubling time of less than 15 months, and is probably the largest stock in the Indian Ocean (www.fishbase.org). Like many other small pelagics, the oil sardine has shown population crashes and sudden recoveries in the past. It is a tropical fish, governed by the vagaries of ocean climatic conditions, and is known for its restricted distribution in the Malabar upwelling region along the southwest coast. It attains a maximum total length of about 22 cm and plays a crucial role in the ecosystem as a plankton feeder and as food for large predators. The annual average production is 3.8 lakh tonnes (15% of the all-India total catch), valued at about Rs 350 crores. It is a cheap source of protein and forms a staple, sustenance and nutritional food for millions of coastal people.

Item Type: Book Section
Uncontrolled Keywords: Sea Surface Temperature; Oil Sardine
Subjects: Pelagic Fisheries > Oil sardine; Marine Environment > Climate change
Divisions: CMFRI-Cochin > Marine Capture > Demersal Fisheries; CMFRI-Cochin > Marine Capture > Pelagic Fisheries
Deposited By: Arun Surendran
Deposited On: 10 May 2011 11:49
Last Modified: 10 May 2011 11:49
Recent studies indicate that rising levels of obesity may become as damaging to the environment as an additional billion people on the planet. When considering the impact humans have on the world, it is not only numbers that count, but also people's lifestyles. If we all keep eating more, producing more garbage and driving cars rather than walking, we will increase our carbon footprints even if the world's population remains stagnant. The US has the highest number of clinically obese people per capita in the world. The weight problem in the States is becoming an epidemic: the US accounts for just 6% of the world's population, but over 33% of global obesity. This additional weight has a considerable effect on people's lives and resources. Countries such as Mexico and the UK are not far behind, and the overweight percentage of their populations is constantly increasing. If countries such as China and India soon follow similar trends, the outlook is bleak indeed for our energy reserves. Not only will we need more food and transportation, but we'll have fewer healthy people available to provide the services necessary to sustain a fat planet. Some people argue that wealthier countries will inevitably have a fatter population. However, Japan seems to be the exception to this rule: quality of life there is high, but the number of obese people is very low. Hot countries in the Middle East show the opposite trend, where even the poorer parts of the population are overweight, because the hot climate encourages them to drive rather than walk. So the notion that developing countries are responsible for environmental problems because of high birth rates seems to be a fallacy. They might produce more children, but those children won't consume nearly as many resources as your average American at KFC.
Friday, June 12, 2009 at 11:50 AM Earlier this year, we announced the launch of the Transit Layer, a feature that makes it easier for citizens and tourists around the globe to access public transportation line information in their cities. We’re continuously expanding the coverage of Transit Layer, and have added support for transit systems in China, Japan and Russia, which are some of the most complex and remarkable metro lines in the world. The Transit Layer is not only useful to plan a visit, but also to see a city's history, structure of transportation systems, and daily life. Let’s take a look at the Transit Layers for Shanghai, Tokyo and Moscow. China has launched subway layers for 10 cities, including Beijing, Shanghai and Guangzhou. Beijing's first subway line has been in operation since 1969 and Shanghai's subway system opened in 1995. Now, Shanghai has 8 metro lines, 162 stations and 225 km of tracks, making it the longest network in China, exceeding even the Hong Kong MTR. To get a sense of how widely used the subway system is, take a look at this image of the queues that can form during rush hour. Subways are the most popular and convenient transportation method in Tokyo; most of downtown Tokyo is easily accessible via subways. The Tokyo Metro Ginza line (drawn in orange) was first opened in 1927. There were many trams at that time, but they have since been replaced by subways because of the increase in automobile usage and lack of capacity and speed. Thirteen lines are in operation today and they carry more than 2.9 billion passengers per year, making this the world's largest subway system. One of the interesting things about the Transit Layer is that you can see the layout of the transit system. In Tokyo, for instance, you might wonder why the subways take a circuitous path around the central area and avoid it. This is because the area contains the Imperial Palace, which used to be the Edo Castle where the Shogun lived during the pre-modern era. 
Also, many of the subways are located under historically significant roads. Though Tokyo is filled with modern architecture, it has not been built from scratch; rather, it has been built on top of the old city. The Moscow Metro is the world's second largest metro system (in terms of passenger rides) and quite possibly the most beautiful. If you ever come to Moscow, you should definitely visit the Metro (but it's best to avoid it during rush hour if you have the choice!). Meanwhile, you can check out the Metro on the transit layer for Moscow, together with the bus, tram, trolleybus and monorail lines. We also recently launched a detailed transit layer for St. Petersburg. All in all, we now provide Transit Layer coverage for 90 cities in 26 countries, including many of the world's largest transit networks.
INVASIVE PLANTS: Know them—Don't grow them

What are invasive plants? These non-native plant species are "overachievers." Once established in natural areas, they outcompete native species. Invasive plants cause profound environmental and economic damage, and are a major threat to native habitats worldwide. Some invasives have escaped from our home gardens and public plantings into natural areas. Each state has different problematic plants. See other side for our top 20 offenders.

Don't buy these plants

Although experts have determined that these plants are invasive in most of New England, and harmful to the environment, the plants listed in bold are still widely available in catalogs and nurseries.

Norway maple Acer platanoides
Bishop's weed Aegopodium podagraria
Garlic mustard Alliaria petiolata
Japanese barberry Berberis thunbergii
Oriental bittersweet Celastrus orbiculatus
Swallowworts Cynanchum nigrum and C. rossicum
Autumn olive Elaeagnus umbellata
Burning bush Euonymus alatus
Glossy buckthorn Frangula alnus
Yellow flag iris Iris pseudacorus
Blunt-leaved privet Ligustrum obtusifolium
Shrub-like honeysuckles Lonicera morrowii, L. x bella, L. maackii, L. tatarica
Japanese honeysuckle Lonicera japonica
Purple loosestrife Lythrum salicaria
Water-milfoils Myriophyllum aquaticum, M. heterophyllum, M. spicatum
Common reed Phragmites australis
Japanese knotweed Polygonum cuspidatum
Common buckthorn Rhamnus cathartica
Multiflora rose Rosa multiflora
Water chestnut Trapa natans

For a list of our recommended substitute plants and information about methods for removing invasive plants

How you can help
_ Learn which plants are invasive in your state.
_ Purchase and grow only non-invasive plants.
_ Ask your local garden center/nursery to stop selling
_ Volunteer in your community to help control invasive plants.
_ Inform your community about the threats posed by

New England Wild Flower Society
It's Black History Month: Celebrate, Learn, Enjoy

Take some time out during the month to learn how black history has impacted your everyday life. It's that time of the year again. It's time to celebrate Black History Month. Growing up an '80s baby, this was the only time of the year when I learned about black history in school, both in the public and Catholic schools I attended. I don't know if black history is really integrated into American history in the classroom setting, but in case you are like me and didn't get that much exposure, here are a few pieces of knowledge. It is American history, isn't it?

Black History Month: An observance of the history of the African diaspora in a number of countries outside of Africa. It is observed annually in the United States and Canada in February, while in the United Kingdom it is observed in October.

African Diaspora: The dispersion of Africans during and after the trans-Atlantic slave trade, including those taken en route to India as slaves and a source of labor. Black people all over the world, in the Americas, the Caribbean, Latin countries, India and Europe, are part of the African Diaspora. The term comes from diaspeirein, Greek for "to scatter."

Afro or Black: The Library of Congress terms the month African American History Month. Both terms are interchangeable.

A Need to Recognize Contribution: The Harvard-trained historian Carter G. Woodson hoped to raise awareness of African Americans' contributions to civilization, according to the Library of Congress. This hope was realized when he and the organization he founded, the Association for the Study of Negro Life and History, conceived and announced Negro History Week in 1925.

Given a Month: The celebration was expanded to a month in 1976, according to The American Presidency Project. President Gerald R.
Ford urged Americans to "seize the opportunity to honor the too-often neglected accomplishments of black Americans in every area of endeavor throughout our history."

King's Assassination: Martin Luther King, Jr. was assassinated on April 4, 1968, the birthday of his friend, the famous author and poet Maya Angelou, according to Biography. Angelou stopped celebrating her birthday for many years afterward, and sent flowers to King's widow every year until Coretta Scott King's death in 2006.

No Passing: Before he was a renowned artist, Romare Bearden was also a talented baseball player, according to Biography. He was recruited by the Philadelphia Athletics on the condition that he agree to pass as white. He turned down the offer, choosing instead to work on his art.

Assassination Attempts: Politician and educator Shirley Chisholm survived three assassination attempts during her campaign in the 1972 U.S. presidential election.

Black Inventors: These individuals have played a pivotal role in how Americans live, work and entertain. View just a few notable inventions of black individuals, according to Famous Black Inventors and the Black Inventor Online Museum.
- Frederick McKinley Jones: Jones patented more than 60 inventions in his lifetime. While more than 40 of those patents were in the field of refrigeration, Jones is most famous for inventing an automatic refrigeration system for long-haul trucks and railroad cars.
- Garrett A. Morgan: Morgan received a patent for the first gas mask invention in 1914. Morgan's other famous invention was the traffic signal. After witnessing an accident on a roadway, Morgan decided a device was needed to keep cars, buggies and pedestrians from colliding. After receiving a patent in 1923, he sold the rights to the invention to General Electric.
- Dr. James E. West: He and a colleague, Gerhard Sessler, developed the mic (officially known as the Electroacoustic Transducer Electret Microphone) while with Bell Laboratories.
They received a patent for it in 1962.
- Benjamin Banneker: Developed the first clock built in the United States, studied astronomy and developed an almanac. He helped to create the layout of the streets, buildings and monuments in Washington, D.C.
- Marie Van Brittan Brown: She and her partner, Albert Brown, applied for a patent in 1966 for a closed-circuit television security system. The device is considered the predecessor to the modern home security system. Brown's system had a set of four peepholes and a camera that could slide up and down to look out each one. Anything the camera picked up would appear on a monitor.
- Patricia Bath: A pioneer in the field of ophthalmology, she created a laser-based device to perform cataract surgery.
The real inventor of the World Wide Web
|By MARK OLLIG|
Tim Berners-Lee is the person who wrote the programming code that we use when we "point and click" our way through the hyperlinks of documents, sounds, videos and information that we access via the World Wide Web portion of the Internet. Berners-Lee called his creation a "global hypertext system." Some people think that the Internet and the Web are the same, but this is not true. The Internet is basically a network made from computers, routers, gateways and cables used to send around little "packets" of information. A packet is a bit (no pun intended) like a postcard with a simple address on it. If you put the right address on a packet and gave it to any computer which is connected as part of the Internet, each computer would figure out which cable or path to send it down next so that it would get to its destination. That's what the Internet does. It delivers packets anywhere in the world using various protocols, and it can do this very quickly. Berners-Lee connects the Internet to the Web by saying "The Web exists because of programs which communicate between computers on the 'Net. The Web connections are hypertext links. The Web could not be without the Net. The Web made the Net useful because people are really interested in information (not to mention knowledge and wisdom!) and don't really want to have to know about computers and cables." In May of 1998, Tim Berners-Lee wrote a short piece on the history of the World Wide Web in which he says ". . . The dream behind the Web is of a common information space in which we communicate by sharing information." Back in 1980, Berners-Lee was working with computer software programs to store information with random links.
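The hop-by-hop forwarding idea described above can be sketched in a few lines of code. This toy example is purely illustrative: the node names and routing tables are invented, and real Internet routers use far richer protocols (IP, BGP and friends) than a simple lookup table.

```python
# Toy next-hop forwarding: each node knows only the next hop toward a
# destination, yet packets still reach anywhere on the network.
# The topology and names below are invented for illustration.
ROUTES = {
    "home-pc":    {"default": "isp-router"},
    "isp-router": {"cern.ch": "backbone", "default": "backbone"},
    "backbone":   {"cern.ch": "cern-gw"},
    "cern-gw":    {"cern.ch": None},  # None means: deliver locally
}

def forward(dest, start):
    """Follow next-hop entries until the packet is delivered; return the path."""
    path, node = [start], start
    while True:
        table = ROUTES[node]
        nxt = table.get(dest, table.get("default"))
        if nxt is None:        # destination reached
            return path
        node = nxt
        path.append(node)

print(forward("cern.ch", "home-pc"))
# → ['home-pc', 'isp-router', 'backbone', 'cern-gw']
```

Notice that no single node knows the whole route: each one consults only its own small table, which is exactly why the scheme scales to a worldwide network.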
Nine years later, in 1989, he was working at the European Particle Physics Laboratory near Geneva, Switzerland, also known as "CERN," which I found out is the French abbreviation for "Conseil Européen pour la Recherche Nucléaire." The only French I remember is from a record album that comedian Steve Martin made back in the late 1970s, in which he says that when he was in France, the only French he knew was how to order a ham and cheese omelet in the restaurant. Yep, the ol' "omelette de jambon et de fromage." I went to the language translation link on Google at: http://www.google.com/language_tools and found out that the English translation of "Conseil Européen pour la Recherche Nucléaire" is: The Council European for the Nuclear Research. Tim Berners-Lee goes on to say that this is where, in 1989, he first proposed that a global hypertext space be created in which any network-accessible information could be referred to by what he had called a UDI or "Universal Document Identifier." This would become known today as the "Uniform Resource Locator" or "URL" that we are typing when going to a particular website. He finished the actual client-browser and the point-and-click hypertext editor he called the "WorldWideWeb Program" in 1990. Some of the very early web browsers had names like Erwise, Viola, Cello and Mosaic. Today we use browsers like Netscape, Mozilla Firefox and Internet Explorer. Berners-Lee said he was under pressure to define the future evolution of the 'Web, so he decided to form the World Wide Web Consortium, or W3C, in September 1994, with a base at the Massachusetts Institute of Technology in the USA and offices in France and Japan. The W3C is a neutral, open forum where companies and organizations can discuss and come to agreement upon new computer protocols that will help see the Web develop to its full potential. It has been a center for education, issue raising and design. The website says their decisions are made by consensus.
If you want to find out the latest advancements being planned for the continued evolution of the World Wide Web, I highly recommend you visit http://www.w3.org which states their mission is "To lead the World Wide Web to its full potential by developing protocols and guidelines that ensure long-term growth for the Web." I noted that under the photo of Tim Berners-Lee it says he is the Director of the W3C and also "The Inventor of the World Wide Web," which in this humble columnist's opinion adds a bit of respectability to this website! For those of you out there who would like to see the original 1989 proposal (including a circles-and-arrows diagram) that Tim Berners-Lee first submitted, and in which he coined the term "WorldWideWeb," link over to: http://www.w3.org/History/1989/proposal.html and you will see what I call probably the Web's most historic document when it comes to how we are able to navigate the Internet as we have come to use it today. The computer that Berners-Lee used to write the code for the first web browser, and which also became the first "web server," was called the "NeXTcube." You can see and read about it at: http://en.wikipedia.org/wiki/NeXTcube And if you go to this link: http://www.w3.org/History/1994/WWW/Journals/CACM/screensnap2_24c.gif you will see the "snapshot" of Tim Berners-Lee's (and the world's) very first website, which he created and released to the High Energy Physics community.
Before WW1, Italy was part of an alliance with Germany and Austria-Hungary, yet it didn't join them when the war started, and it even joined the Allied side later during the war. Why did Italy do this? And if there were good reasons to join the Allies, why did it ally itself with Germany and Austria in the first place?

Italy's main issue was its enmity with Austria-Hungary, Germany's main ally. That made Italy the "odd man out" in the so-called Triple Alliance with the other two. Italy had joined (reluctantly) with Germany out of a fear of France. This occurred when France and Britain concluded an alliance that made Britain responsible for the mutual defense of the English Channel, and freed the French fleet to concentrate in the Mediterranean, possibly against Italy. When World War I broke out, Italy found that it had nothing to fear from France (or England or Russia, for that matter). On the other hand, it would have a lot to fear from a victorious Austria-Hungary, from which it had taken Lombardy and Venice in the 19th century (the former when allied with France). So when Britain and France offered Italy Tyrol and Trieste from Austria, Italy jumped at the bait and switched sides.

It is easy to explain why Italy didn't join the war at first: it had little to gain from it, and perhaps it also didn't feel prepared. An alliance is always theory, and a country can refuse to be dragged into a conflict with powers that are much stronger than it. The question of why Italy later still decided to join is more difficult. This website gives the following answer:
3.661047
The picture of gravitational collapse provided by classical general relativity cannot be physically correct because it conflicts with ordinary quantum mechanics. For example, an event horizon makes it impossible to everywhere synchronize atomic clocks. As an alternative it has been proposed that the vacuum state has off-diagonal order, and that space-time undergoes a continuous phase transition near to where general relativity predicts there should be an event horizon. For example, it is expected that gravitational collapse of objects with masses greater than a few solar masses should lead to the formation of a compact object whose surface corresponds to a quantum critical surface for space-time, and whose interior differs from ordinary space-time only in having a much larger vacuum energy. I call such an object a "dark energy star." –Dark Energy Stars (arXiv.org e-Print archive) According to Nature, this means black holes "do not exist." Of course I don't understand any of the physics involved, but this is a printout of a paper delivered at a conference. It hasn't undergone peer review.
3.05832
See also the layperson's introduction: Jumping frogs, squeaking bats and tapping woodpeckers and Picosecond ultrasonics with ultrashort light pulses. Ice is ubiquitous in the universe, forming on comets, moons and asteroids, for example. It is more familiar on Planet Earth as the translucent lump in your fizzy drink, or those white crystals on the ski slopes. Less usefully, ice can also freeze up the windows of your house, or cling to aeroplane wings. It can also be a nuisance if it forms on the inside of a scientist's low-temperature apparatus. Our original grand idea had nothing to do with ice, but was in fact to generate sound pulses with very short laser pulses in a film of the superconductor YBCO (standing for yttrium, barium, copper and oxygen) using picosecond ultrasonics. But instead of the clear acoustic echo signals expected, we saw rapid wiggles that slowly changed in form over several hours. Watch the animation and see how the signal changed. Click the image to see a 200 kB animation of the reflectivity change of the sample, shown over an 18 hour period. The laser is switched off between 10 and 15 hours of observation. Could this be the beginning of the discovery of a totally new and as-yet undiscovered property of superconductors? Could a Nobel prize be just around the corner? After scratching our heads for a very long time, we (well, Osamu) realised that in fact a minuscule layer of ice was slowly building up on the sample, up to a maximum thickness of one micron (10⁻⁶ m). Hooray, let's study ice then. So much for the Nobel prize, but what an appropriate topic for a group at North-leaning Hokkaido University. The ice on our campus is somewhat thicker than in our experiment. By analysing the wiggles in the signal, we found we could monitor the ice thickness as it was growing, and derive a fair number of physical properties of ice for the first time at our very high ultrasonic frequencies, around 10 GHz.
Here we show how the experiment (top graph) and theory (bottom graph) nicely matched. Graph of the optical reflectivity during the ice film growth as a short acoustic pulse travels through it. Blue means an increase in reflectivity and red means a decrease. Click the figure to see a 280 kB animation of these results. Watching ice films grow for 20 hours or more is enough to make any researcher weary. No escape while the experiment is running. For more information see 'In-situ monitoring of the growth of ice films by laser picosecond acoustics,' S. Kashiwada, O. Matsuda, J. J. Baumberg, R. Li Voti, and O. B. Wright, J. Appl. Phys. 100, 073506 (2006). Back to the main page
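The thickness monitoring described above can be illustrated with a toy time-of-flight calculation. This is only a back-of-the-envelope sketch, not the analysis from the paper; the sound velocity for ice is an assumed round-number value, and the echo delay is chosen purely for illustration.

```python
# Toy time-of-flight estimate of film thickness from an acoustic echo delay.
# V_ICE is an assumed longitudinal sound velocity for ice (illustrative only).
V_ICE = 3.8e3  # m/s

def film_thickness(echo_delay_s: float) -> float:
    # The sound pulse crosses the film twice (down and back), so d = v * t / 2.
    return V_ICE * echo_delay_s / 2.0

# A micron-scale film corresponds to an echo delay of roughly half a nanosecond:
d = film_thickness(526e-12)
print(f"{d * 1e6:.2f} micrometres")
```

At 10 GHz the oscillation period is 100 ps, so hundreds of picoseconds of echo delay are comfortably resolvable in such an experiment.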
3.892455
Pub. date: 2008 | Online pub. date: June 25, 2008 | DOI: 10.4135/9781412963978 | Print ISBN: 9781412909280 | Online ISBN: 9781412963978 | Publisher: SAGE Publications, Inc. William Ming Liu Poverty is a global problem. Using the U.S. dollar as a hallmark for living standards, approximately 2.8 billion people live on less than 2 dollars a day, and almost 1.2 billion live on less than 1 dollar a day. Given the differing living standards across nations, a dollar has different weight depending on context. But in the United States, how much does it cost to live adequately? That is, what is the minimum one should expect to have to provide for adequate housing, food, health care, and transportation, for instance? And more importantly, what measure should one use to indicate when an individual or family has fallen below these standards of acceptable living? To understand poverty in the United States, it is important to address (a) the consequences of poverty, (b) the definitions of poverty, and (c) counseling and psychology's understandings of poverty and social class and classism in relation ...
3.56565
Dr. Pia Fenimore, of Lancaster Pediatric Associates, answers questions about children's health on the Ask the Expert feature at LancMoms.com. It's spring, which means all the little critters of the world hatch and come out … and that makes pediatricians think of pinworms. That's what you were thinking of, right? Pinworms, or Enterobius vermicularis, are a parasite for which humans are the only host. This means you cannot get them from the dirt, your cat or your dog. You can, however, get them from your children and your friends. Infection occurs most commonly in children ages 5-10 and does not discriminate by race or socioeconomic background. You become infected by ingesting (eating, swallowing, breathing in) eggs or larvae. These eggs are most commonly found under the fingernails and in the bed linens of an infected person. At night, the female worms migrate from the intestine, outside of the body, to the anal skin folds. They do this to lay their eggs (our body temperature would kill the eggs if they were laid inside and, like a good mom, she is just protecting her young). The eggs become "infective" within a few hours, and when they do, they become very itchy. The infected person scratches, gets the eggs under their fingernails and on their fingers, touches their mouth or eyes and — just like that — the life cycle of the pinworm has been propagated. The eggs are able to survive for several hours on surfaces and linens (several weeks in cool, humid environments). This means that an unsuspecting family member or friend can then acquire them on their fingers and infect themselves. Savvy little things, aren't they? The No. 1 symptom of infection is a very itchy bottom at nighttime. This affects sleep and can even lead to skin infections from scratching.
While most pediatricians will make the diagnosis based on history alone, you can confirm the presence of pinworms by doing a "Scotch tape test." This is performed by taking a clear piece of tape and pressing it against the anal skin folds, thus picking the eggs up onto the tape. The tape is then placed under a microscope, where it is very easy to see the eggs and confirm the diagnosis.
3.314458
You probably think of gravity as curved spacetime. Surprisingly Einstein didn't, not quite. And neither should you. To understand gravity you have to take the ontological view. You have to learn to see what's there. And to do that, you have to put time to one side, because time isn't the same kind of dimension as the dimensions of space. Yes, an object passing a planet traces a curved path, but you don't stare up at a plane and decide that it's a silver streak in the sky. You take a mental snapshot, flash, a picture of it in a timeless instant. It's the same with gravity. Take the time-derivative of that curved spacetime. What you get is a gradient. And it's a gradient in space, not curved spacetime. But let's tackle it an easier way, via an old favourite. Think about a cannonball sitting on a rubber sheet. The cannonball is heavy, and it makes a depression that will deflect a rolling marble, or even cause the marble to circle like an orbit. It's a nice analogy, but it's wrong. It's wrong because it relies on gravity to pull the cannonball down in the first place. It uses gravity to give you a picture of gravity. To get a better handle on it, imagine you're standing underneath the rubber sheet. Let's make that a silicone rubber sheet. It's transparent, like my snorkel and mask. Grab hold of the rubber around the cannonball and pull it down further to give yourself some leeway. Now transfer your grip to the transparent silicone rubber itself. Gather it, pull it down some more. Now tie a knot in it underneath the cannonball, like you'd tie a knot in the neck of a balloon. Now pull it all the way down and let go. Boinggg! The cannonball is gone. Forget it. Now, what have we got? We've got a flat rubber sheet with a knot in it. The knot will stand in for a region of stress, where the rubber is under pressure. Stress is the same as pressure. It's force per unit area, and force times distance gives us the units for both work and energy.
So energy is stress times volume. The knot represents energy. Or matter if you prefer. OK, here's the deal. Surrounding the small central region of stress is a much larger region of tension extending outwards in all directions. Whenever you have a stress you always have a tension to balance it. It isn't always obvious, but it's always there, like reaction balances action, and force balances force. The tension gradually reduces as you move away from the stress. If you could measure it, you would measure a radial gradient. But measuring it is trickier than you think. Because in this analogy we can't use a marble rolling across a rubber sheet. This rubber sheet represents the world, there's no stepping outside of it. Our "marble" has to be within the rubber sheet, and a part of it, made out of the same stuff as that knot. We need an extra dimension. So turn your top hat upside down and tap it with your magic wand. Abracadabra! A flash of light and a puff of smoke, and that rubber sheet is now a solid block of clear silicone rubber extending in all directions. And you're standing inside of it. Let's make you a ghost so you can glide around unimpeded, for the purposes of gedanken. Our knot is now three-dimensional, like a Möbius doughnut, maybe a little silvery like a bubble underwater. It's not really made out of anything, it hasn't got a colour, and it hasn't even got a surface. It's a soliton, a topological defect, a travelling stress that's basically a photon, but going nowhere fast because it's twisted round on itself. So E = hc/λ = pc = mc² means the momentum is now inertia, and we call it an electron. Our electron has replaced our cannonball, and now we need a photon to stand in for that rolling marble. Let's conjure one up, and send it propagating across our rubberworld so that it passes by our electron. We could run after it and take some snapshots with our ontological camera, but let's save that for another day.
For now our photon is just a shear-wave ripple, travelling at a velocity determined by the stiffness and density of the medium. There's an equation for it in mechanics that goes like this: v = √(G/ρ). The G here isn't a gravitational constant, but is the shear modulus of elasticity, to do with rigidity. It's different to the bulk modulus of elasticity, because it's a lot easier to bend something rather than compress its volume. The equation says a shear wave travels faster if the material gets stiffer, and slower if the density increases. In electrodynamics the velocity equation is remarkably similar. You've probably seen it before: c = √(1/ε0μ0). Here ε0 is permittivity and μ0 is permeability. The two are related by impedance √(μ0/ε0). High permittivity means a material will take a larger charge for the same voltage; for example barium titanate has 1200 times the permittivity of air, so we don't make capacitors out of air. High permeability means a material exhibits more magnetism when you change the charge. Iron has lots of it, wood doesn't, so magnets are made of iron. There are some marvellous similarities between mechanics and electrodynamics, though confusions abound too. With the piezoelectric effect you subject a material to mechanical stress and you get an electrical stress, a voltage, but high voltage is called high tension, which is negative stress. And electric current goes from negative to positive, so things are backwards. But let's come back to that another time, and just say higher impedance means lower velocity. Back in rubberworld, our photon-marble is passing our electron-cannonball. We notice it veers towards it a little. That's because where the rubberworld tension is slightly greater, the real-world impedance is slightly higher, so the velocity is slightly lower. What we're seeing is refraction.
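The electrodynamic velocity equation c = √(1/ε0μ0) is easy to check numerically with the standard SI values of the vacuum constants:

```python
from math import pi, sqrt

eps0 = 8.8541878128e-12  # F/m, vacuum permittivity
mu0 = 4 * pi * 1e-7      # H/m, vacuum permeability

c = 1.0 / sqrt(eps0 * mu0)
print(f"c = {c:.0f} m/s")  # very close to 299,792,458 m/s
```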
Here's the crucial point: our real world is like that rubberworld with the knot in it plus an extra dimension, and we're made out of this stuff, along with our rulers and clocks. So we don't see the tension. We don't measure the change in c. But we can infer it. Like in the Pound-Rebka experiment, where a photon is blue-shifted at the bottom of the tower because c there is lower. Or in the Shapiro experiment, where the light takes longer to skim the sun because the c there is lower too. There's an equivalence going on here between General Relativity and Special Relativity, but it's tricky to spot. Imagine that I stay here on earth while you travel to Alpha Centauri in a very fast rocket travelling at .99c. We can use 1/√(1 - v²/c²) to work out that you experience a sevenfold time dilation. (Multiply .99 by itself to get .98 and subtract this from one to get a fiftieth, which is roughly a seventh multiplied by a seventh.) We normally think of time dilation as being matched by length contraction, but that's only in the direction of travel. Hold up a metre ruler transverse to the direction of travel and it's the same old metre. Your metre is the same as my metre, and your time is dilated by a factor of seven, which means it takes a beam of your light seven times longer to traverse your transverse metre. Looking at it another way, c = s/t and your t changed, your s didn't, so your c did. Your c is a seventh of mine. Don't get confused about this. Don't tell yourself that your lightbeam is following a diagonal path and has to cover a greater distance. That's introducing an absolute reference frame, mine. Stay in your own frame. Then when you come back after your year-long round trip, I aged seven years, but you only aged one. You aged less because your c was slower than mine, but you never noticed it at the time. The equivalence comes in because I could have slid you into a black box and subjected you to high gravity instead of sending you to Alpha Centauri.
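The sevenfold dilation quoted above can be checked directly; a minimal sketch of the Lorentz factor 1/√(1 - v²/c²):

```python
from math import sqrt

def lorentz_gamma(beta: float) -> float:
    # Time-dilation factor for a speed of v = beta * c.
    return 1.0 / sqrt(1.0 - beta ** 2)

print(round(lorentz_gamma(0.99), 2))  # roughly 7, as the text says
```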
We know that "clocks run slow" in a high gravity situation, just as they do when you're travelling fast. And it's for the same simple reason. The c is reduced. But you won't measure it as reduced, because it's just a distance/time conversion factor. Just like when you go to the moon you don't get three ounces to the pound. I know it's difficult to stop thinking c is a constant. Yes, it's always measured to be the same in all frames. But when you step back to see the big picture that is the whole gallery, when you look at all the frames side by side, you see what distinguishes them is the way c changes. It's a constant, but it isn't constant. Once you realise that c changes in a "gravitational field" you can allow yourself the epiphany of understanding gravitational potential energy. We know that E = mc², so a cannonball sitting quietly in space represents maybe 10¹¹ joules of energy. If the earth now trundles on to the scene, the cannonball will fall towards it, and just before impact will also have kinetic energy of say 10⁹ joules. Now hold it right there. Freeze frame. Where did that kinetic energy actually come from? Has it been sucked out of the earth? Has it been magically extracted from some zero-point bottomless bucket? Has it come from the "gravitational field"? No. There's no free lunch from Mister Gravity. The energy came from the cannonball. And it hasn't come from its mass, because mass is "invariant". Only it isn't invariant, because the mass has actually increased; check the Pound-Rebka experiment. So E = mc², and we've got a pile of kinetic energy that hasn't come out of the m. There's only one place left it can have come from. The c. The c up there is greater than the c down here, and there's a gradient in between. There's always a gradient in c when there's gravity. Even across the width of an electron. Yes, the gradient might be very small. But it isn't negligible.
If you think it is, as per the General Relativity Equivalence Principle, you've just thrown the baby out with the bathwater. An accelerating frame with no tidal gradient isn't the same as a proper gravity situation. There's always a tidal force. The gradient has to be there. There can be no Uniform Gravitational Field. Because without that gradient, things don't fall down. Let's go back to rubberworld. But it's time we did a Reverse Image and made the rubber the ghost. Now you're back to normal again, take a look at that electron once more. It's a travelling stress, localised because it's going round in a circle. Stick this ring of light in a real gravity gradient, caused by a zillion other electrons some distance downaways. What's going to happen? Flash, take a picture. At a given instant we have a quantum of light travelling down like this ↓. There's a gradient top to bottom, but all it does is give the photon a fractional blueshift. A little later take another picture. Flash. Now the photon is moving this ← way, and the upper portion of the photon wavefront is subject to a slightly higher c than the lower portion. So it bends, refracts, curves down a little. Later it's going this ↑ way and gets fractionally redshifted, and later still it's going this → way and curves down again. These bends translate into a different position for our electron. The bent photon path becomes electron motion. Only half the cycle got bent, so only half the reduced c goes into kinetic energy. The other half goes into mass, but it's only a scale-change falling out of the clear blue sky. So here's your free lunch: now you can understand why gravity is not some magical, mysterious, action-at-a-distance force. There is no curvature of spacetime, no hidden dimensions, no gravitons sleeting between masses. There's no energy being delivered, so gravity isn't even a force. It's just the tension gradient that balances the stress that is mass/energy.
And we're just rubberworld Fatlanders getting to grips with our wrinkles and bumps. No energy delivered, extra mass to use as collateral... that means there's no energy cost. So if we could somehow contrive a gradient that goes the other way... whoo, it'll be The Stars My Destination. But first of all we must also understand the thing we call Space. We must learn how light is a ripple of nothing, and how all the somethings are made from it. It's a tale of something and nothing, and since nothing comes for free, there will be a Charge... Acknowledgements: thanks to J.G. Williamson and M.B. van der Mark for Is the electron a photon with toroidal topology? http://members.chello.nl/~n.benschop/electron.pdf, to Peter M Brown for his many papers on his excellent website http://www.geocities.com/physics_world/, to Robert A Close for Is the Universe a Solid? http://home.att.net/~SolidUniverse/, to Reg Norgan for http://www.aethertheory.co.uk/pdfRFN/Aether_Why.pdf, to G S Sandhu for The Elastic Continuum http://www.geocities.com/gssandhu_1943/index.html, to all the forum guys with their relevant posts and links, Wikipedia contributors, and to anybody whom I've forgotten or whose pictures I've used. Thanks guys.
3.374082
Who is responsible for Deer Management? It's a common situation during summer, especially during a drought year. A landowner has some acreage that includes timber, pasture and a variety of row crops. He planted soybeans back by the timber, and the deer are eating them up. The farmer says "I enjoy having the deer around, but their numbers are increasing and they are causing damage to my beans. You need to do something about this." To assess the problem, a Conservation Department representative may visit the area and suggest solutions. Included in this visit is a discussion of deer management practices, most importantly harvest. Usually the question is asked, "How many deer hunters do you allow on your property and how many did they take last year?" All too often in these situations, no hunting is allowed or it is limited to just a few relatives or friends who take mostly bucks. Therein lies the problem. Just as often, we receive complaints from landowners about a lack of deer on their property. Usually these landowners have purchased property because they enjoy hunting, but they are seeing fewer deer each deer season. Usually there are too many hunters and they are taking too many does each year. Deer densities are slowly decreasing. The common element in each of these situations is that landowners have a tremendous influence on deer densities on their properties. Deer and other wildlife belong to the citizens of Missouri. The Conservation Department is responsible for stewardship of these important resources. We set regulations to ensure that the resources will be maintained at levels in the best interest of the public. This means having enough deer so people who enjoy hunting and watching deer have a reasonable opportunity, but not so many that deer problems get out of hand. Deer hunting is essential in Missouri. When deer are not hunted, survival is high and deer numbers can rapidly increase.
One research project in north central Missouri showed us that 95 out of 100 does living in unhunted areas survive each year. Under these conditions, a deer herd will quadruple in just 10 years. Harvesting bucks has little influence on overall population size because one buck can mate with many females. There can be fewer bucks than does without affecting the number of young. Doe harvest, therefore, is the key to deer management. Prior to the 1980s, deer management was relatively easy, because most landowners wanted more deer.
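The quadrupling claim corresponds to roughly 15% annual growth. The rate below is my own illustrative assumption, not a figure from the research project; the point is just that modest compounding quadruples a herd in about a decade.

```python
# Toy geometric-growth model: years for an unhunted herd to quadruple.
def years_to_quadruple(annual_growth: float) -> int:
    years, population = 0, 1.0
    while population < 4.0:
        population *= 1.0 + annual_growth
        years += 1
    return years

print(years_to_quadruple(0.15))  # about 10 years at 15% growth per year
```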
3.352404
Binomial Probability Formula

Date: 03/22/2005 at 11:32:33
From: Missy
Subject: probabilities

Sue makes 70% of the free throws she attempts. She shoots three free throws in her warmup before a game. What is the probability that Sue makes two or more of the three free throws? I know the answer is .784, but I'm not sure how the book got it. I thought you would multiply 2/3 times 3/3, but obviously not.

Date: 03/23/2005 at 23:15:30
From: Doctor Wilko
Subject: Re: probabilities

Hi Missy, Thanks for writing to Dr. Math! Sue makes 70% of the free throws she attempts; therefore 30% of the time she doesn't make her free throws. If Sue shoots three free throws, you want to know the probability that she will make two OR three free throws. Or in math terms: P(2 out of 3 shots successful) + P(3 out of 3 shots successful). Let's sidestep the exact formula for a minute and try to build up the rationale for how to solve this. For example, if I wanted to know "What is the probability that Sue will make 2 free throws in a row followed by a miss?" the answer would look like this: P(2 successes followed by 1 miss) = (0.7) * (0.7) * (0.3) = 0.147. This says that Sue will have a 14.7% probability of shooting 2 successful free throws followed by a miss. This doesn't quite help us (yet), because there are other ways that Sue could make these shots: success, miss, success; and miss, success, success. There are actually three ways that she could make these shots, AND each one has the same probability:

  success, success, miss = 0.147
  success, miss, success = 0.147
  miss, success, success = 0.147
                          -------
                           0.441

So, this finally answers the first part of our question: P(2 out of 3 shots successful) = 0.441. Now we need to find out P(3 out of 3 shots successful).
Well, if she takes three shots and they are all successful, then the probability would look as follows: P(3 out of 3 shots successful) = (0.70) * (0.70) * (0.70) = 0.343. There aren't different arrangements of the shots like before, because all three shots are successful. So, to answer the main question: P(2 out of 3 shots successful) + P(3 out of 3 shots successful) = 0.441 + 0.343 = 0.784. With Sue's 70% success rate when shooting free throws, she has a 78.4% probability of making two or more free throws when shooting three free throws total. I took some time to develop the rationale of how to solve this problem. Once you understand what I did, there is a formula called the Binomial Probability Formula, which will let you calculate problems a lot faster than reasoning through it like I did. The formula looks like:

  P(x) = nCx * p^x * q^(n-x)

  n = number of trials
  x = number of successes among n trials
  p = probability of success in any one trial
  q = probability of failure in any one trial (q = 1 - p)
  nCx = combinations of n items, choose x

If you recognized your problem as a binomial distribution problem (see links below), then you could go straight to the formula:

  P(2 or 3 successful shots) = P(2 out of 3 shots successful) + P(3 out of 3 shots successful)
                             = 3C2 * (0.70)^2 * (0.30)^1 + 3C3 * (0.70)^3 * (0.30)^0
                             = 0.441 + 0.343 = 0.784

Here are some links from our archives that will elaborate more on binomial probabilities. The third link below is similar to your problem, except it uses dice instead of basketballs: Binomial Experiments http://mathforum.org/library/drmath/view/63982.html Binomial Probability http://mathforum.org/library/drmath/view/56189.html Probability of Rolling a 2 At Least Twice http://mathforum.org/library/drmath/view/57596.html Does this help? Please write back if you need anything else. :-) - Doctor Wilko, The Math Forum http://mathforum.org/dr.math/ © 1994-2013 The Math Forum
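Doctor Wilko's calculation can be reproduced in a few lines. This sketch simply evaluates the binomial formula above with Python's standard library:

```python
from math import comb

def binom_pmf(x: int, n: int, p: float) -> float:
    # P(exactly x successes in n independent trials); q = 1 - p
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# P(2 or more of 3 free throws) at a 70% success rate:
answer = binom_pmf(2, 3, 0.7) + binom_pmf(3, 3, 0.7)
print(round(answer, 3))  # 0.784
```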
3.035866
by Ludwig von Mises Preface to the 1944 Edition The main issue in present-day social and political conflicts is whether or not man should give away freedom, private initiative, and individual responsibility and surrender to the guardianship of a gigantic apparatus of compulsion and coercion, the socialist state. Should authoritarian totalitarianism be substituted for individualism and democracy? Should the citizen be transformed into a subject, a subordinate in an all-embracing army of conscripted labor, bound to obey unconditionally the orders of his superiors? Should he be deprived of his most precious privilege to choose means and ends and to shape his own life? Our age has witnessed a triumphal advance of the socialist cause. As much as half a century ago an eminent British statesman, Sir William Harcourt, asserted: “We are all socialists now.” At that time this statement was premature as far as Great Britain was concerned, but today it is almost literally true for that country, once the cradle of modern liberty. It is no less true with regard to continental Europe. America alone is still free to choose. And the decision of the American people will determine the outcome for the whole of mankind. The problems involved in the antagonism between socialism and capitalism can be attacked from various viewpoints. At present it seems as if an investigation of the expansion of bureaucratic agencies is the most expedient avenue of approach. An analysis of bureaucratism offers an excellent opportunity to recognize the fundamental problems of the controversy. Although the evolution of bureaucratism has been very rapid in these last years, America is still, compared with the rest of the world, only superficially afflicted. It shows only a few of the characteristic features of bureaucratic management. 
A scrutiny of bureaucratism in this country would be incomplete therefore if it did not deal with some aspects and results of the movement which became visible only in countries with an older bureaucratic tradition. Such a study must analyze the experiences of the classical countries of bureaucratism—France, Germany, and Russia. However it is not the object of such occasional references to European conditions to obscure the radical difference which exists, with regard to bureaucratism, between the political and social mentality of America and that of continental Europe. To the American mind the notion of an Obrigkeit, a government the authority of which is not derived from the people, was and is unknown. It is even extremely difficult to explain to a man for whom the writings of Milton and Paine, the Declaration of Independence, the Constitution and the Gettysburg Address are the fountain springs of political education, what this German term Obrigkeit implies and what an Obrigkeits‑Staat is. Perhaps the two following quotations will help to elucidate the matter. On January 15, 1838, the Prussian Minister of the Interior, G. A. R. von Rochow, declared in reply to a petition of citizens of a Prussian city: “It is not seemly for a subject to apply the yardstick of his wretched intellect to the acts of the Chief of the State and to arrogate to himself, in haughty insolence, a public judgment about their fairness.” This was in the days in which German liberalism challenged absolutism, and public opinion vehemently resented this piece of overbearing bureaucratic pretension. Half a century later German liberalism was stone dead. The Kaiser’s Sozialpolitik, the Statist system of government interference with business and of aggressive nationalism, had supplanted it. Nobody minded when the Rector of the Imperial University of Strassburg quietly characterized the German system of government thus: “Our officials . . . 
will never tolerate anybody’s wresting the power from their hands, certainly not parliamentary majorities whom we know how to deal with in a masterly way. No kind of rule is endured so easily or accepted so gratefully as that of high-minded and highly educated civil servants. The German State is a State of the supremacy of officialdom—let us hope that it will remain so.” Such aphorisms could not be enunciated by any American. It could not happen here. Cf. G. M. Trevelyan, A Shortened History of England (London, 1942), p. 510.
3.172639
This page contains the answers to the questionnaire in chapter 2 (GPRS). All answers have been kept as short as possible and require an understanding and study of the corresponding chapter of the book. Chapter 2, GPRS: When data is transferred over a circuit switched channel, a dedicated connection is established between two parties. Data is sent without any overhead like lower level addressing. Bandwidth and delay are constant. In a packet switched network on the other hand, there is no direct connection between the endpoints of a session. Resources in the network are only used for the connection when data is sent. Data is sent in packets which have to contain a source and destination address in order to be transported through the network. This also enables N:N connections in the network, i.e. a subscriber can communicate with any subscriber without establishing a physical connection first. Depending on the load of the network, bandwidth and delay for a connection can vary. This is a clear disadvantage compared to a circuit switched channel. Due to the bursty nature of many information exchanges, the advantage of the packet switched approach, on the other hand, is that more bandwidth can be used during a burst, which decreases transmission time. As GPRS is a packet switched network, resources on the air interface are only assigned to a user when data is actually sent. This tremendously increases the capacity of the network, especially for applications such as web surfing which only send and receive data at irregular intervals. Several timeslots can be assigned to a subscriber simultaneously to increase throughput. If the physical connection to the network is lost (e.g. due to bad reception quality), the logical connection persists. As soon as the physical connection has been reestablished, data transfer on higher layers resumes without the user having to reestablish another channel manually, as would be the case for a circuit switched connection.
Answer 3: Dynamic coding schemes allow the network to adapt the ratio of error correction and detection bits to user data bits. Under good transmission conditions the redundancy information in a block can be reduced, which in turn increases the overall transmission speed of the user data. During times of bad reception, more error detection and correction bits are inserted, which ensures that the link remains stable.

Answer 4: While in GPRS ready state, the SGSN can send data to the mobile terminal without delay. In this state, the SGSN is aware of the cell which the subscriber uses to communicate and can thus forward incoming packets directly to the PCU responsible for this cell. The PCU does not need to page the subscriber and can immediately assign resources on the air interface. When changing the cell in ready state, the mobile station has to send a cell update message to the SGSN. Once the mobile station is in GPRS standby state, the SGSN is only aware of the location area of the subscriber, as the mobile station only has to report cell changes when a location area boundary is crossed. This reduces the mobile's energy consumption. In order to send data frames to a mobile in standby state, the SGSN has to page the subscriber first. The mobile station responds with an empty frame and thus implicitly changes into the ready state again. Once the SGSN receives the empty frame, it is again aware of the cell the mobile station uses and can then forward the frame.

Answer 5: In practice, no handovers are performed for GPRS today (Network Control Order = 0); the mobile station has to perform cell changes on its own. In case a cell change has to be performed during an ongoing data transfer due to deteriorating reception conditions, it is necessary to interrupt the transmission and perform the cell change. Afterwards, the mobile station reports to the SGSN from the new cell by simply continuing to send data.
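The idea behind dynamic coding schemes (answer 3) can be sketched as a simple lookup. The approximate net data rates per timeslot for CS-1 to CS-4 are the standard GPRS values; the link-quality thresholds below, however, are invented purely for illustration — a real network derives the coding scheme from measured block error rates, not from a fixed table like this.

```python
CODING_SCHEMES = {  # scheme -> approx. user data rate per timeslot (kbit/s)
    "CS-1": 9.05,   # most redundancy, most robust
    "CS-2": 13.4,
    "CS-3": 15.6,
    "CS-4": 21.4,   # least redundancy, requires a very good link
}

def pick_coding_scheme(carrier_to_interference_db: float) -> str:
    """Better link quality -> fewer redundancy bits -> higher user data rate.

    Thresholds are hypothetical and for illustration only.
    """
    if carrier_to_interference_db > 20:
        return "CS-4"
    if carrier_to_interference_db > 15:
        return "CS-3"
    if carrier_to_interference_db > 10:
        return "CS-2"
    return "CS-1"
```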
The SGSN detects the cell change, as the cell global ID is part of every incoming frame, and can thus change its routing of incoming Internet packets to the new cell.

Answer 6: GPRS requires the following network nodes: A) The serving GPRS support node (SGSN), which is responsible for mobility management and session management (GMM/SM). B) The gateway GPRS support node (GGSN), which is the interface between the GPRS network and the Internet. The GGSN is responsible for the assignment of IP addresses to the mobile subscribers and hides subscriber mobility from the Internet. C) The packet control unit (PCU), which is the interface between the GPRS core network and the radio network. The PCU is responsible for packet scheduling and the assignment of timeslots to the subscribers, and it terminates the RLC/MAC protocol.

Answer 7: GPRS assigns resources (timeslots) to a subscriber only for the time required to send the data. Furthermore, timeslots are not exclusively assigned to a single subscriber but only in blocks of four bursts. This way, a timeslot can be used to transfer data to several subscribers at the same time. The temporary block flow (TBF) with its temporary flow identifier (TFI) describes which data blocks are addressed to which device currently listening on a timeslot.

Answer 8: An inter-SGSN routing area update is performed if the mobile device roams into a cell which is connected to a new SGSN. As the new cell belongs to a new routing area, the mobile device performs a routing area update. The new SGSN then detects that the mobile device is currently registered with a different SGSN and thus sends a message to the previous SGSN to retrieve authentication information. After authenticating the mobile station, the HLR is informed that the subscriber has changed its location to the new SGSN. Furthermore, the GGSN is informed of the position change so that it can forward incoming packets to the new SGSN in the future.
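The timeslot sharing mechanism described in answer 7 can be sketched as a simple filter: each radio block (four bursts) on a shared timeslot carries a flow identifier, and every mobile listening on that timeslot keeps only the blocks addressed to its own temporary block flow. The TFI values below are illustrative.

```python
def blocks_for_mobile(timeslot_blocks, my_tfi):
    """Filter the radio blocks on a shared timeslot down to one mobile's TBF.

    timeslot_blocks: sequence of (tfi, data) tuples as they arrive on the
    timeslot; blocks of several subscribers are interleaved on the same
    physical resource.
    """
    return [data for tfi, data in timeslot_blocks if tfi == my_tfi]
```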
Once all of these actions have been performed, the routing area update in the core network is complete and the subscriber receives a confirmation from the SGSN that the operation was performed successfully.

Answer 9: The GPRS core network between the SGSN and GGSN uses the IP protocol for routing the IP data frames of the subscribers. These are not transferred directly, however, but are encapsulated into GPRS tunneling protocol (GTP) frames. The encapsulated frame contains the IP addresses of the mobile device and of the source/destination of the frame; thus, a GTP frame contains two source and two destination IP addresses. This mechanism has the advantage that no routing table updates are required in the routers between these two network components if the user roams into the area of another SGSN. In addition, the GPRS core network is decoupled from the Internet and from the GPRS user, as it is not possible to directly access these components from outside the local GPRS core network.

Answer 10: The user does not have to change any settings on his/her device for international roaming. All packets that are sent and received are always routed through the GGSN in the subscriber's home network. This is possible as the access point name (APN) is a fully qualified domain name: the SGSN appends the mobile country code (MCC) and the mobile network code (MNC) as well as a top level domain ('.gprs') to the APN string received from the subscriber during the connection establishment. This domain name is then sent to a DNS server, which resolves it into the IP address of the GGSN in the subscriber's home network.

Answer 11: During a GPRS attach, the mobile device registers with the network. Afterwards, the network is aware that the device has been switched on and in which routing area it is located. Up to this point no IP address has been assigned to the mobile device and no data can be transmitted. An IP address is only assigned to the mobile device during the PDP context activation procedure.
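The APN-based GGSN lookup described in answer 10 can be sketched as follows. Per the 3GPP naming rules, the APN supplied by the subscriber is extended with the (zero-padded) mobile network code, the mobile country code and the '.gprs' top level domain before it is handed to the DNS server; the operator codes in the test are examples only.

```python
def apn_to_fqdn(apn: str, mnc: str, mcc: str) -> str:
    """Build the DNS name the SGSN resolves to find the home GGSN.

    MNC and MCC are zero-padded to three digits in the domain name.
    """
    return f"{apn}.mnc{int(mnc):03d}.mcc{int(mcc):03d}.gprs"
```

Because the MCC and MNC identify the subscriber's home operator, the resulting name always resolves to a GGSN in the home network, regardless of which country the subscriber is roaming in.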
Billing is also only invoked during the activation of a PDP context.

Answer 12: In order to transfer data via GPRS to and from the Internet, a PDP context has to be established between the mobile device and the GPRS network. During the establishment of a PDP context the mobile device sends the access point name (APN), which identifies the GGSN and the profile to be used to establish a connection to the Internet. (Also see answer 10.)

Answer 13: MMS messages are always exchanged between a mobile device and the MMS gateway, which is located behind the GGSN. To send an MMS, the mobile device first establishes a GPRS connection and uses the APN which the network operator has foreseen for the MMS service. Usually it is only possible to reach the MMS gateway via this APN. Afterwards the MMS, which has many similarities to an eMail, is sent by using an HTTP POST command. This command is also used by web browsers to send the input a user has made in text fields, etc. on a web page to the web server. Once the MMS is received by the MMS gateway, it is stored and an attempt is made to deliver the message to the destination. If the destination is a mobile subscriber, an SMS is sent to inform the mobile device of the waiting MMS message. Depending on the configuration of the mobile device, it either establishes a GPRS connection immediately or queries the user first before doing so. To retrieve the MMS message, the mobile device uses the HTTP GET command. This command is also used by web browsers to request web pages from a web server.

Answer 14: An MMS message has many similarities to an eMail. The header, for example, is structured in a similar way to an eMail header and just contains additional X-MMS tags which carry MMS specific information. Text and pictures are sent in the 'body' of the MMS and are separated by Multipurpose Internet Mail Extensions (MIME) separators. The first part of an MMS body is a description of the general layout of the message; SMIL, an XML based language, is used for this purpose.
Further MIME parts of the MMS then contain the text, pictures, videos, etc.
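The eMail-like structure described in answer 14 can be illustrated with Python's standard email package. This is only a sketch: an MMS on the air interface is binary-encoded rather than plain text, and the exact X-MMS header names below are illustrative. The structure, however, mirrors the description above: eMail-style headers with additional X-MMS tags, and a multipart body whose first part is a SMIL layout description, followed by the content parts.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication
from email.mime.text import MIMEText

def build_mms_sketch(sender: str, recipient: str, text: str) -> MIMEMultipart:
    """Assemble an MMS-like multipart message (illustrative only)."""
    msg = MIMEMultipart("related")
    msg["From"] = sender
    msg["To"] = recipient
    # MMS-specific header fields use an X-MMS prefix (names illustrative).
    msg["X-Mms-Message-Type"] = "m-send-req"
    msg["X-Mms-MMS-Version"] = "1.0"

    # First body part: SMIL document describing the general layout.
    smil = ('<smil><head><layout/></head>'
            '<body><par><text src="part1.txt"/></par></body></smil>')
    msg.attach(MIMEApplication(smil.encode(), "smil"))

    # Further MIME parts carry the actual content (text, pictures, videos).
    msg.attach(MIMEText(text))
    return msg
```

Each part is separated by a MIME boundary, just as in a multipart eMail.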