What is CMMC?

The Cybersecurity Maturity Model Certification (“CMMC”) is a unified standard and framework of cybersecurity best practices and controls. The intent of CMMC is to enhance the cybersecurity of the government contractors who serve the Department of Defense (“DoD”) and who make up the Defense Industrial Base (“DIB”). CMMC is a framework and a standard, but it also has a certification component that verifies compliance with the standard.

Why was CMMC created?

The theft of intellectual property and sensitive information through malicious cyber activity threatens both economic security and national security. As part of multiple efforts focused on enhancing the security and resiliency of the Defense Industrial Base, the DoD created CMMC to assess and enhance the cybersecurity posture of contractors who serve the DoD. The certification creates a mechanism that helps DoD assess and verify contractor compliance with cybersecurity practices, controls, and processes that are aimed at protecting certain unclassified information (described in more detail below) that may be in the possession of those contractors, or which resides on or is transmitted through contractor information systems.

What kind of information is CMMC designed to protect?

CMMC is primarily designed to protect Federal Contract Information (“FCI”) and Controlled Unclassified Information (“CUI”). FCI is information, not intended for public release, that is provided by or generated for the government under a contract to develop or deliver a product or service to the government; it does not include information provided by the government to the public (such as on public websites) or simple transactional information, such as that necessary to process payments. CUI is generally unclassified information that requires safeguarding or dissemination controls pursuant to and consistent with law, regulations, and government-wide policies.
A CUI Registry provides information on the specific categories and subcategories of information that the Executive branch protects. The CUI Registry can be found at https://www.archives.gov/cui and https://www.dodcui.mil/Home/DoD-CUI-Registry/. Resources to better understand CUI, including online training, can be found on the National Archives’ website at https://www.archives.gov/cui/training.html as well as on the Department of Defense’s website: https://www.dodcui.mil/

What rules govern and implement CMMC?

DoD recently issued an interim rule called Assessing Contractor Implementation of Cybersecurity Requirements, published in the Federal Register at 85 Fed. Reg. 61505 on Sept. 29, 2020, and effective on Nov. 30, 2020. The interim rule will be followed by future rulemaking that may amend rule content and requirements. Public comments were due Nov. 30, 2020, and will be considered in the formulation of a final rule sometime in the spring or summer of 2021. DFARS clause 252.204-7012, Safeguarding Covered Defense Information and Cyber Incident Reporting, is already included in all DoD solicitations and contracts, including those using Federal Acquisition Regulation (FAR) part 12 commercial item procedures, except for acquisitions solely for commercially available off-the-shelf (COTS) items. This clause, which is described in detail in a video below, requires contractors to apply the security requirements of NIST SP 800-171 to “covered contractor information systems,” as defined in the clause, that are not part of an IT service or system operated on behalf of the Government. NIST SP 800-171 is a cybersecurity standard developed by the National Institute of Standards and Technology, and the associated DoD Assessment Methodology calls for an assessment of a contractor’s implementation of the security requirements outlined in NIST SP 800-171. As mentioned, you can learn more about NIST SP 800-171 by watching the video below.
Traditionally, contractors and subcontractors self-certified compliance with DFARS clause 252.204-7012 and NIST SP 800-171. The DoD Assessment Methodology was developed to give DoD an easier way to gauge contractor compliance with NIST SP 800-171 and DoD’s own confidence in that contractor’s compliance, and it is implemented in contracts via DFARS clause 252.204-7020. The DoD Assessment Methodology creates three confidence levels – Basic, Medium, and High – which roughly translate to the confidence level DoD has in the contractor’s implementation of NIST SP 800-171 and factor in how many of the 110 NIST SP 800-171 security controls the contractor has implemented. Basic assessments are contractor self-assessments using the DoD Assessment Methodology. With respect to Basic (self) assessments, DFARS 252.204-7020 asks DoD contractors to submit their Basic (self) assessment scores into a web-based system called the Supplier Performance Risk System (SPRS). Assessment summary level scores posted in SPRS are then available to DoD personnel and are protected in accordance with the standards set forth in DoD Instruction 5000.79. For more details, review DFARS clause 252.204-7020. An interim rule effective on Nov. 30, 2020, requires that contracting officers verify in the Supplier Performance Risk System (SPRS) at https://www.sprs.csd.disa.mil/ the scores of NIST SP 800-171 assessments already completed, and verify that an offeror has a current (i.e., not more than three years old, unless a lesser time is specified in the solicitation) assessment, at any level, on record prior to contract award.
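The arithmetic behind these SPRS summary scores can be illustrated with a short sketch. Under the DoD Assessment Methodology, a fully compliant system scores 110 (one point per NIST SP 800-171 security requirement), and each unimplemented requirement subtracts a weighted deduction. The requirement IDs and weights below are illustrative placeholders, not the official weighting table, so treat this as a sketch of the scoring approach rather than an authoritative calculator.

```python
# Sketch of the Basic (self) assessment summary-score arithmetic under the
# DoD Assessment Methodology for NIST SP 800-171.

MAX_SCORE = 110  # one point per NIST SP 800-171 security requirement


def basic_assessment_score(unimplemented):
    """Return the summary score: 110 minus the weighted deduction for each
    security requirement not yet implemented.

    `unimplemented` maps a requirement ID to its point deduction.
    """
    return MAX_SCORE - sum(unimplemented.values())


# Hypothetical gaps; both the IDs and the weights are assumptions here.
gaps = {
    "3.1.1": 5,   # assumed weight for an access-control requirement
    "3.5.3": 5,   # assumed weight for multifactor authentication
    "3.1.20": 1,  # assumed weight for an external-connections requirement
}
print(basic_assessment_score(gaps))  # 110 - 11 = 99
```

Because some deductions are worth several points each, a contractor with many gaps can end up with a score well below 110 (the methodology allows negative scores), which is why SPRS records a numeric summary score rather than a simple pass/fail.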
Therefore, defense contractors should upload and submit Basic (self) assessment scores for each system supporting the performance of a contract (or potential future contract), as DFARS 252.204-7020 will appear in all DoD solicitations and contracts, task orders, and delivery orders, including those using FAR part 12 procedures for the acquisition of commercial items, except for those that are solely for the acquisition of COTS items. The rule states that the offeror/contractor may submit scores via encrypted email to [email protected] for posting to SPRS. SPRS now has increased functionality for offerors/contractors to enter scores directly into SPRS (see https://www.sprs.csd.disa.mil/pdf/NISTSP800-171QuickEntryGuide.pdf). You can find additional resources and help on using SPRS at this website. The rule also states that prime contractors must ensure applicable subcontractors have the results of a current assessment posted in SPRS prior to awarding subcontracts. We encourage all of our clients who are DoD contractors, who have DFARS clause 252.204-7012 in their contracts, and who are likely to have DFARS 252.204-7020 in their contracts going forward, to upload their Basic (self) assessment scores into SPRS.

Building upon NIST SP 800-171, the CMMC framework adds a comprehensive and scalable certification element to verify the implementation of processes and practices associated with the achievement of a cybersecurity maturity level. CMMC is designed to provide increased assurance to the Department that a DIB contractor can adequately protect sensitive unclassified information such as Federal Contract Information (FCI) and Controlled Unclassified Information (CUI) at a level commensurate with the risk. DoD is implementing a phased rollout of CMMC. CMMC will be primarily implemented by DFARS clause 252.204-7021, Cybersecurity Maturity Model Certification Requirements.
This clause is prescribed for use in solicitations and contracts, including solicitations and contracts using FAR part 12 procedures for the acquisition of commercial items (excluding acquisitions exclusively for COTS items), if the contract requirement document or statement of work requires a contractor to have a specific CMMC level. In order to implement the phased rollout of CMMC, the inclusion of a CMMC requirement in a solicitation until September 30, 2025, must be approved by the Office of the Under Secretary of Defense for Acquisition and Sustainment. Starting on or after October 1, 2025, CMMC will apply to all DoD solicitations and contracts, including those for the acquisition of commercial items (except those exclusively for COTS items), valued at greater than the micro-purchase threshold. Contracting officers will not make an award or exercise an option on a contract if the offeror or contractor does not have a current (i.e., not older than three years) certification for the required CMMC level. Furthermore, CMMC certification requirements must be flowed down to subcontractors at all tiers, based on the sensitivity of the unclassified information flowed down to each subcontractor.

What are the CMMC Levels?

There are multiple CMMC levels (1-5) for which you can implement controls and get certified. To achieve a specific CMMC level, a defense contractor must demonstrate to an accredited CMMC Third Party Assessment Organization (C3PAO) that it has put into place the controls, processes, and practices commensurate with the CMMC level desired. Therefore, the steps your company needs to take depend on which CMMC level your company is seeking to achieve. Many defense contractors will need to acquire only Level 1 certification. Some will require Level 3 certification. Very few contractors will need Level 4 or 5. The level of certification needed depends on the requirements of the contracts you are working on, or seek to work on, for DoD.
The solicitations and contracts issued by DoD will generally outline the applicable CMMC level required for that opportunity. CMMC assessments are not done by the government but are conducted by accredited CMMC Third Party Assessment Organizations (C3PAOs). Upon completion of a CMMC assessment, a company is awarded certification by an independent CMMC Accreditation Body (AB) at the appropriate CMMC level (as described in the CMMC model). The CMMC certification level is then documented in SPRS to enable the verification of an offeror’s certification level and currency (i.e., not more than three years old) prior to contract award. Below are the applicable CMMC levels:

- Level 1: This is the most basic level. It consists of the 15 basic safeguarding requirements from FAR clause 52.204-21. This is the easiest level to achieve, and most government contractors should already meet these requirements; if they do not, they should be able to achieve them with minimal effort. Many small businesses that do business with DoD, but that do not receive or create CUI, may only need to achieve this level.
- Level 2: Consists of 65 security requirements from NIST SP 800-171 (implemented via DFARS clause 252.204-7012), 7 CMMC practices, and 2 CMMC processes. Level 2 is intended as an optional intermediary step for contractors as part of their progression to Level 3. While Level 2 exists, it is anticipated that many contractors will not seek Level 2 but will pursue Level 3 instead.
- Level 3: Consists of all 110 security requirements from NIST SP 800-171, 20 CMMC practices, and 3 CMMC processes. DoD contractors that receive or create CUI will need to achieve this level. This is a level that many mid-size and large government contractors will need to achieve.
- Level 4: Consists of all 110 security requirements from NIST SP 800-171, 46 CMMC practices, and 4 CMMC processes. This level will likely only be necessary for government contractors who possess or create sensitive or mission-critical CUI – mostly large government contractors and some of their key subcontractors.
- Level 5: Consists of all 110 security requirements from NIST SP 800-171, 61 CMMC practices, and 5 CMMC processes. This level will only be necessary for a small number of contractors who possess or create what the DoD considers the most sensitive or mission-critical CUI. This will be an exclusive group: only a small number of mostly large government contractors and some of their key subcontractors.

To understand the different levels and controls in more detail, you should download and review the CMMC model. Additional information on CMMC and a copy of the CMMC model can be found at https://www.acq.osd.mil/cmmc/index.html.

Why does CMMC matter?

The CMMC model consists of processes and cybersecurity best practices from multiple cybersecurity standards, frameworks, and other references, as well as inputs from the broader community. As noted earlier, DoD is currently implementing a phased rollout of CMMC over the next several years. While it will be a slow, phased rollout, the CMMC requirement may start to appear in some contracts in the next several years through the insertion of DFARS clause 252.204-7021, Cybersecurity Maturity Model Certification Requirements. It may therefore start to appear in some solicitations and contracts, including solicitations and contracts using FAR part 12 procedures for the acquisition of commercial items (excluding acquisitions exclusively for COTS items). The consequence is that if the requirement document or statement of work in a contracting opportunity or solicitation requires a contractor (or its subcontractors) to have a specific CMMC level, only contractors and subcontractors who have achieved the specified certified level can perform that work for DoD.
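For quick reference, the requirement counts from the five level descriptions above can be collected into a small lookup table. The counts come from the descriptions in this article (with Level 1’s first figure counting the 15 FAR 52.204-21 safeguards rather than NIST SP 800-171 requirements); the structure and helper function are simply one way to organize them, not anything defined by the CMMC model itself.

```python
# Requirement counts per CMMC level, as described in the text above.
# Each entry: (security requirements, additional CMMC practices, CMMC processes).
# Level 1's first figure is the 15 FAR 52.204-21 basic safeguards; Levels 2-5
# count NIST SP 800-171 security requirements.
CMMC_LEVELS = {
    1: (15, 0, 0),
    2: (65, 7, 2),
    3: (110, 20, 3),
    4: (110, 46, 4),
    5: (110, 61, 5),
}


def total_requirements(level):
    """Total requirements, practices, and processes demonstrated at a level."""
    reqs, practices, processes = CMMC_LEVELS[level]
    return reqs + practices + processes


print(total_requirements(3))  # 110 + 20 + 3 = 133
```

A table like this makes the cumulative design of the model visible: each level from 3 upward keeps all 110 NIST SP 800-171 requirements and layers additional practices and processes on top.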
To put it simply, if you want to do business with DoD on a contract where CMMC is required, you will have to achieve the applicable CMMC certification level that the contract or subcontracting opportunity requires, or you will not be eligible to perform the work. In order to implement the phased rollout of CMMC, the inclusion of the CMMC requirement in a solicitation up until September 30, 2025, must be approved by the Office of the Under Secretary of Defense for Acquisition and Sustainment. The rollout is anticipated to be gradual. DoD has indicated the following targets for the number of prime acquisitions that will include a CMMC requirement: FY2021 (15), FY2022 (75), FY2023 (250), FY2024 (325), and FY2025 (475). DoD may move faster or slower than these anticipated goals depending upon how quickly companies get CMMC certified. However, eventually, on or after October 1, 2025, DoD plans to apply CMMC to all DoD solicitations and contracts, including those for the acquisition of commercial items (except those exclusively for COTS items), valued at greater than the micro-purchase threshold. Contracting officers will not make an award or exercise an option on a contract if the offeror or contractor does not have a current (i.e., not older than three years) certification for the required CMMC level. Furthermore, CMMC certification requirements must be flowed down to subcontractors at all tiers, based on the sensitivity of the unclassified information flowed down to each subcontractor.

I understand that this is important if I want to continue to do business with DoD. How do I get certified?

To get certified, a contractor must both implement AND demonstrate to an accredited CMMC Third Party Assessment Organization (C3PAO) that it has the controls, processes, and practices in place commensurate with the applicable CMMC level.
Upon successful completion of a CMMC assessment by a C3PAO, a company is awarded certification by an independent CMMC Accreditation Body (AB) at the appropriate CMMC level. The certification level is then documented in SPRS, which enables the federal government to verify an offeror’s certification level, and that it is not more than three years old, prior to contract award. You can find a marketplace of accredited CMMC Third Party Assessment Organizations (C3PAOs) on the CMMC Accreditation Body (AB) website here: https://cmmcab.org/marketplace/

How much will this cost?

This is a difficult question to answer. The cost of CMMC certification depends on which level you are seeking. There are also many factors involved in determining cost, including the state of your current IT system and your existing compliance with other cybersecurity standards and practices such as NIST SP 800-171 (described in detail in the video below). Most government contractors, including small business government contractors, pursuing a Level 1 certification should have already implemented the 15 basic safeguarding requirements under FAR clause 52.204-21. You can review these basic requirements here. If you have not implemented the 15 basic safeguards, you should do so, and such implementation should be fairly easy to achieve. Therefore, there should be minimal cost to implement the Level 1 CMMC requirements. However, the CMMC Level 1 assessment, which will need to be performed by a certified CMMC Third Party Assessment Organization (C3PAO) in order to achieve Level 1 certification, will likely cost around $3,000. The certification will then last three years. When it comes to achieving Levels 2-5, acquiring and maintaining certification gets more expensive. There will be certain nonrecurring costs needed to implement the controls, processes, and practices necessary to achieve the desired CMMC level.
Again, the costs involved depend on your existing IT system and practices. You will have to have either a current employee perform this IT work, or you will have to hire IT consultants or outside firms with expertise in this area to help your company implement the necessary controls, processes, and practices. It may be possible to recoup these investments as an allowable cost item if you are working on a current DoD cost contract; check with your Contracting Officer. How this work is performed depends upon your equipment, information systems, software, computer operating systems, and network topology. There will also be recurring IT costs associated with maintaining compliance with the necessary controls, processes, and practices, because equipment, IT systems, and network environments often change over time. There will then be an assessment cost from the C3PAO, which will vary in price; however, it is estimated to be around $7,489 for Level 2; $17,032 for Level 3; $23,355 for Level 4; and $36,697 for a Level 5 assessment. There may also be reassessment costs every three years to renew your CMMC certification. Total annual costs of achieving and maintaining a certain CMMC level are estimated to be the following:

- Level 1: $1,000 per year
- Level 2: $28,050 per year
- Level 3: $60,009 per year
- Level 4: $371,786 per year
- Level 5: $482,874 per year

These cost estimates were provided by the DoD in its interim rule cited below.

I’m ready to take the necessary steps to get certified. What are the key websites and resources?

- The official website for CMMC, run by the Office of the Under Secretary of Defense for Acquisition and Sustainment: https://www.acq.osd.mil/cmmc/index.html
- The CMMC Model and Assessment Guides: https://www.acq.osd.mil/cmmc/draft.html
- CMMC Frequently Asked Questions: https://www.acq.osd.mil/cmmc/faq.html
- DoD issued an interim rule, Assessing Contractor Implementation of Cybersecurity Requirements, on Sept.
29, 2020 (effective Nov. 30, 2020): https://www.federalregister.gov/documents/2020/09/29/2020-21123/defense-federal-acquisition-regulation-supplement-assessing-contractor-implementation-of
- CMMC Accreditation Body website: https://cmmcab.org/
- CMMC AB Marketplace of accredited C3PAOs: https://cmmcab.org/marketplace/

If you need additional guidance understanding CMMC, please reach out to one of our Procurement Counselors. However, please understand that GTPAC Counselors are not IT professionals, so we generally cannot implement the necessary software, policy, or process changes needed to achieve compliance with the CMMC standard. Further, GTPAC cannot perform an assessment of your IT system, as we are not a C3PAO. What we can do is provide you guidance on the process, rules, and requirements so you can make the best business decision possible to achieve compliance with the applicable rules and regulations.

DFARS clause 252.204-7012 / NIST 800-171 Guidance

This video provides a step-by-step guide on how government contractors can achieve compliance with the cybersecurity requirements established by the U.S. Department of Defense (DoD), specifically Defense Federal Acquisition Regulation Supplement (DFARS) clause 252.204-7012, entitled “Safeguarding Covered Defense Information and Cyber Incident Reporting.”

DFARS clause 252.204-7012 – This contract clause, entitled “Safeguarding Covered Defense Information and Cyber Incident Reporting,” is included in all DoD solicitations and contracts, including solicitations and contracts using FAR part 12 procedures for the acquisition of commercial items, except for solicitations and contracts solely for the acquisition of commercial-off-the-shelf items.

NIST SP 800-171 Rev.
2 – Entitled “Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations,” this National Institute of Standards and Technology (NIST) publication provides federal agencies with recommended requirements for protecting the confidentiality of Controlled Unclassified Information (CUI).

Cybersecurity Self-Assessment Handbook – The National Institute of Standards and Technology (NIST) Manufacturing Extension Partnership (MEP) Cybersecurity Self-Assessment Handbook was developed to assist U.S. manufacturers who supply products to the DoD in implementing NIST SP 800-171 as part of the process of ensuring compliance with DFARS clause 252.204-7012. Note that this Handbook can be used by any DoD contractor to help conduct an assessment of its NIST SP 800-171 compliance.

Cybersecurity Template – This is a 127-page template, developed by the Georgia Tech Procurement Assistance Center (GTPAC), designed to help contractors create a Security Assessment Report, System Security Plan, and Plan of Action. The template is a Word document designed for easy customization. It is intended to be used in conjunction with the NIST-MEP Cybersecurity Self-Assessment Handbook linked above. Please note that NIST now also provides several templates that you can download here.

The video and template linked above were funded through a cooperative agreement with the Defense Logistics Agency and created with the support of the Georgia Institute of Technology. The content of the video presentation does not necessarily reflect the official views of, or imply endorsement by, the U.S. Department of Defense, the Defense Logistics Agency, or Georgia Tech. For further assistance with complying with DoD’s contractual cybersecurity requirements, please feel free to contact a GTPAC Procurement Counselor. A list of Counselors, their locations, and contact information can be found at: http://gtpac.org/team-directory.
Companies located outside the state of Georgia may contact their nearest Procurement Technical Assistance Center (PTAC) for assistance with government contracting matters. PTACs are located in all 50 states, the District of Columbia, Guam, and Puerto Rico. Find a directory of PTACs at: http://www.aptac-us.org/find-a-ptac. GTPAC is a part of the Enterprise Innovation Institute (EI2), Georgia Tech’s business outreach organization which serves as the primary vehicle to achieve Georgia Tech’s goal of expanded local, regional, and global outreach. EI2 is the nation’s largest and most comprehensive university-based program of business and industry assistance, technology commercialization, and economic development.
Sanjaya said: Having spoken thus in the battlefield, Arjuna sank down into the chariot, dropping his bow and arrows, his mind heavy with grief. BG 1.47

Chapter One of the Bhagavad Gita began with a question by Dhritarashtra about what his sons and the sons of Pandu did in the battlefield of Kurukshetra, and now we have come to the last verse of the chapter, in which Sanjaya tells the blind king that Arjuna has sat down in the chariot, overcome by the great compassion that has risen in his heart, refusing to fight. The journey of the Gita, which is a journey into light, begins with tamas, darkness – Dhritarashtra is tamas. We cannot help but wonder how appropriate this is, because all journeys have to begin from where we are, and we are in darkness now. The purpose of the Gita is to take us from the darkness – spiritual darkness – in which we are now, to light. Tamaso maa jyotir gamaya, lead me from darkness to light, says one of the oldest prayers known to mankind, a prayer that we find the Vedic people of India making to the unnamed power that presides over our lives. The Gita is about this journey from darkness to light. The Bhagavad Gita shows us how we can travel from darkness to light. Krishna tells us it is for each one of us to make this journey from darkness to light; it is for us to pull ourselves out of the abyss we have fallen into. Uddharet aatmanaa aatmaanam: lift yourself by your own self, he says in the Gita. If we are in the gutter, it is because of ourselves, and it is for us to climb out of that gutter – that is what the Gita tells us; that is Krishna’s way. As the greatest leadership teacher in the history of humanity, Krishna knows that without our will to get out of the mess we are in, we will never come out of it.
The darkness Dhritarashtra finds himself in when he asks that question in the first verse of the Gita was of his own making – others certainly aided him in that, but his role in its creation is no less important than anyone else’s. Influenced by the television serials on the epic, many of us tend to blame Duryodhana and Shakuni for the tragedy of the Mahabharata, but Dhritarashtra was the king, the man invested with all power, and he was also Duryodhana’s father. Just as a modern organizational head is ultimately responsible for whatever happens in that organization, the responsibility for the tragedy of the Mahabharata, in the final analysis, is his more than anyone else’s. It is interesting that this blind king, because of whom India fought its greatest ever war, was a biological son of Sage Vyasa – the author of the Mahabharata, the compiler of the Vedas, the author of the Puranas, and arguably the greatest sage our land has known – a fact that proves greatness and wisdom cannot be inherited but have to be acquired. As Gibran said: Your children are not your children. They are the sons and daughters of Life’s longing for itself. They come through you but not from you, And though they are with you yet they belong not to you. Each one of us is a child of Life. In our endless journey, each one of us has had thousands of mothers and fathers – they are the gates through which we enter this world, but we do not originate in them. The Mahabharata says our relationships are like the relationships of two logs meeting in the vast ocean, now brought together and now again separated: yathaa kaashtham cha kaastham cha samaayetaam mahodadhau, sametya cha vimaayetaam, evam bandhu-samaagamah. We are all alike eternal sojourners in this vast ocean of life. And in that beginningless and endless journey, each one of us undergoes endless experiences, including our experiences with our current parents, reacts to those experiences in our own unique ways, and is shaped to become what we are now.
Some of us end up as predominantly sattvic, some others as rajasic, and yet others as tamasic. Ultimately the responsibility for what we have become rests on us. [And so long as we blame others for what we, divine sparks the Upanishads call amritasya putraah, children of immortality, have become, there is no possibility of change.] There is no way gunas can be inherited from our parents, as we see in the case of the four sons of Maharshi Vyasa. His son Brahmarshi Shuka is beyond all gunas – an enlightened man who has become gunatita. Vidura, another biological son of his, is predominantly sattvic, and Pandu is rajasic. Dhritarashtra, the blind king with whose name the Bhagavad Gita begins, is deeply tamasic. In fact, he could be used as an example to explain what tamas means, as I have done numerous times in my lectures to the business school students I have taught and the corporate officers I have trained, during sessions on understanding self and others, motivating self and others, and so on. It is difficult to find a better example of tamas in the Mahabharata than Dhritarashtra. Tamasic people cannot create – creativity is the opposite of tamas. But they can destroy. They are not stupid, but have a kind of intelligence that Krishna names tamasic intelligence. Krishna gives us a definition of tamasic intelligence, tamasic buddhi, in the eighteenth chapter of the Gita: adharmam dharmam iti yaa manyate tamasaavritaa, sarvaarthaan vipareetaamshcha buddhih saa paartha taamasee. The intelligence which is clothed in darkness and sees adharma as dharma and views all things as the opposite of what they are, that intelligence is tamasic. BG 18.32 Ruthless, cunning, manipulative, insensitive to the sufferings of others, totally self-centered and joyless, tamasic people try to doggedly hold on to whatever they have. They cling to things, cling to their power, positions, and privilege, refusing to let go, as Dhritarashtra does.
In his international best seller Illusions: The Adventures of a Reluctant Messiah, Richard Bach speaks of a village of creatures living at the bottom of a crystal river. He says: “Once there lived a village of creatures along the bottom of a great crystal river. The current of the river swept silently over them all – young and old, rich and poor, good and evil, the current going its own way, knowing only its own crystal self. Each creature in its own manner clung tightly to the twigs and rocks of the river bottom, for clinging was their way of life, and resisting the current what each had learned from birth.” These creatures at the bottom of the river that Richard Bach speaks of are excellent examples of tamasic people. These insecure people are like baby birds in a nest, refusing to let go of the security of the nest and thus denying themselves the freedom and joyfulness of the boundless skies. Dhritarashtra is like those small creatures at the bottom of the river, like those baby birds who refuse to flutter their wings, let go, and take to the skies. The name Dhritarashtra can mean one who holds the rashtra, the kingdom, together. It can also equally well mean one who holds on to the rashtra, the kingdom – one who clings to the kingdom, to the throne and crown, to power, as Mahabharata’s Dhritarashtra definitely does. Continuing Bach’s story: “But one creature said at last, ‘I am tired of clinging. Though I cannot see it with my eyes, I trust that the current knows where it is going. I shall let go, and let it take me where it will. Clinging, I shall die of boredom.’ The other creatures laughed and said, ‘Fool! Let go, and that current you worship will throw you tumbled and smashed across the rocks, and you will die quicker than boredom.’ But the one heeded them not, and taking a breath did let go, and at once was tumbled and smashed by the current across the rocks.
Yet in time, as the creature refused to cling again, the current lifted him free from the bottom, and he was bruised and hurt no more. And the creatures downstream, to whom he was a stranger, cried, ‘See a miracle! A creature like ourselves, yet he flies! See the Messiah, come to save us all!’ And the one carried in the current said, ‘I am no more Messiah than you. The river delights to lift us free, if only we dare let go. Our true work is this voyage, this adventure.’ But they cried the more, ‘Saviour!’ all the while clinging to the rocks, and when they looked again he was gone and they were left alone making legends of a Saviour.” Tamasic people just cannot let go. They are incapable of doing that. Unfortunately, without letting go of the alpa, the small, there is no bhooma, the big. But the tamasic just cannot let go. Clinging because of their insecurities, the tamasic live a life of fear, a life of dread, seeing threats everywhere, afraid of what they have being snatched away from them at any moment. They become paranoid. There is a beautiful Taoist story about a phoenix and an owl: Hui Tzu was prime minister of Liang. He had what he believed to be inside information that Chuang Tzu [the great Taoist master] coveted his post and was plotting to supplant him. When Chuang Tzu came to visit Liang, the prime minister sent out police to arrest him. But although they searched for three days and nights, they could not find him. Meanwhile, Chuang Tzu presented himself to Hui Tzu of his own accord, and said: “Have you heard about the bird that lives in the south – the phoenix that never grows old? This undying phoenix rises out of the south sea and flies to the sea of the north, never alighting except on certain sacred trees. He will touch no food but the most exquisite rare fruit, and he drinks only from the clearest springs. Once an owl chewing an already half-decayed rat saw the phoenix fly over.
Looking up, he screeched with alarm and clutched the dead rat to himself in fear and dismay.” “Prime minister,” asked Chuang Tzu, “why are you so frantic, clinging to your ministry and screeching at me in dismay?” Had Dhritarashtra cared about the good of his subjects as an Indian king was expected to, rather than clinging to power, had he cared even for his own son’s good, the war would not have happened. He should have handed power back to Yudhishthira, whose it really was by the conventions of the day, since his father Pandu was the last king of the Bharatas and Dhritarashtra was no more than a caretaker. Had he done that, he would not have had to weep at the end of the war that all his one hundred sons had been killed, that Bhima did not spare even one of them. The Mahabharata tells us that when Sage Vyasa came to his sister-in-law Ambika to produce a child through the ancient custom of niyoga, as ordered by his mother, seeing his ascetic form she closed her eyes, and that is why her son was born blind. This story is symbolic of Dhritarashtra’s mother turning away from light, closing her eyes to light, rejecting light at the moment of his conception, for Vyasa was light, wisdom, goodness and spirituality at the highest level. Just as his mother did at the moment of his conception, throughout his life the blind king turned away from light and remained a prisoner of darkness, of the asuri sampada that the Gita speaks of. It was not the first time in ancient India, or even in the history of the Bharata dynasty itself, that primogeniture had been overlooked in favour of competence. Bharata himself, after whom the dynasty is named, rejected all nine of his sons born to his three queens since he did not find them ‘appropriate’, competent enough, and accepted a rank outsider called Bhumanyu as his successor. Dhritarashtra’s own grandfather, Emperor Shantanu, was not the eldest son of his father Emperor Pratipa – he was his youngest son.
Pratipa’s eldest son was Devapi, who gave up his inheritance of his own accord because he had leprosy and became an ascetic. Devapi’s younger brother Bahlika abandoned his right to the Kuru kingdom and went to live with his maternal uncle in what we call the Balkh country today, eventually inheriting that kingdom. That is how the crown came to Shantanu. The rule that someone who suffered from a physical defect or disease was not fit to rule was based on the ancient understanding that kingship was a responsibility and not a privilege, and that to be fully effective a king – a leader – should have all his faculties at his command so that he could understand a situation personally and take the right decision. Dhritarashtra was denied the throne because it was felt by those in power that a blind king would not be able to fully comprehend challenging situations, and if he failed to do so and took wrong decisions on important issues, the kingdom would suffer. One of the important expectations in those days was that the leader led from the front, particularly on the battlefield, and here a blind man was at a disadvantage, though exceptions to this rule did exist. Rejecting Dhritarashtra, Pandu was made king, and he proved himself superbly effective. But perhaps Pandu, who was very sensitive towards others, felt guilty about ruling as king while his elder brother was alive – the Ramayana’s Bharata refused to sit on the throne even though, according to Valmiki, the kingdom was his by birth, since Dasharatha had married his mother Kaikeyi by giving the kingdom as rajyashulka, by promising that her son would inherit the throne. Pandu eventually gave up the throne and went to live with his wives in the forest as an ascetic, though other factors may have contributed to that decision.
From Dhritarashtra’s subsequent behaviour, we clearly see that he had more than ordinary greed for power – power was the most important thing for him, the be-all and end-all of his existence, power for himself and his future generations. Like most power-hungry people, he had no respect for anything other than power. Once a great rishi of awesome spiritual powers called Baka Dalbhya came to him asking for a few cows. It was common in those days for rishis to approach kings and request cows, and kings usually gave not one or two but hundreds and sometimes thousands of cows to them. But what Dhritarashtra did was truly shocking – he pointed out a few dead cows and asked Rishi Dalbhya to take them; that was all he would give. As a consequence of this action of the king, says the Mahabharata, the entire Kuru kingdom suffered terrible droughts and famines that lasted for twelve years, and a vast section of the population died of hunger, thirst and starvation. Dhritarashtra accepted his mistake and made amends only when he realized Baka Dalbhya’s incredible spiritual powers. Power is perhaps man’s greatest temptation, because with power comes everything else. In modern political organizations, in industry and business, in fact everywhere, we can find people clinging to power whether they are good as leaders or not, and appointing their own people to positions of power – what we call nepotism in English and bhai-bhatijavad in Hindi. Many organizations have died sad deaths because of this. The Dhritarashtra Vilapa, a long soliloquy by the blind king, is at the very beginning of the Mahabharata. In the vilapa the blind king recalls one by one sixty-eight occasions when he lost all hope of victory – the verses describing these incidents all begin with the words yadaa shrausham, “when I heard...”, and end in ...tadaa naaham vijayaaya naashamse, “then I no more hoped for victory.”
Practically all these occasions speak of some success or other of the Pandavas – their escape from the lacquer house in which they were supposed to be killed, Arjuna winning the archery contest for the hand of Draupadi, the Panchalas becoming allies of the Pandavas, and so on. He sees each of these as an occasion that destroyed his hopes. The Pandavas are really not ‘others’ – they are the children of his brother, and they gave him the same love and respect that they had for their father; but the world of the tamasic is very small and has no place even for one’s nephews. That is a major difference between the sattvic and the tamasic – for the sattvic, the whole earth is their family, as the Sanskrit saying vasudhaiva kutumbakam has it, whereas for the tamasic, their family is too small, and even their own nephews are not part of it. As his father and as the caretaker king, Dhritarashtra had all the power he needed to stop Duryodhana’s evil ways, but never once did he take a strong stand against him, never a stand that would really stop him. True, he did speak against him a few times, but never with all his authority and never in such a way that his son would not be able to go against him. The face of Dhritarashtra we see in the Mahabharata most of the time is that of an absolutely shameless old man who does no more than pay lip service to the children of his brother, the rightful heirs to the throne. Even in the Udyoga Parva of the epic, when the war has become imminent, the message he sends to the righteous Pandavas is truly unbelievable in its meanness: he tells them that since they are lovers of peace they should not wage a war against him or even demand their rights, but should go somewhere else and ask someone else for some land as charity! It is this face of tamas that we see in the Sabha Parva of the epic too, where the dice game happens. Dhritarashtra is possibly the happiest man in the dice hall every time Yudhishthira loses a game.
It is his voice alone that we hear at these times, and every time his question is the same: jitam mayaa, “have I won it?” He is asking about what Yudhishthira has staked and lost, including Draupadi as the last stake. There is great thrill in his voice as he asks that question every time. It is this Dhritarashtra that Arjuna does not want to dethrone, because he is his uncle, and also because in the process he will have to slay Bhishma and Drona in battle. Arjuna’s vision has temporarily become clouded by blind mamata, which is a form of tamas. But Krishna clearly sees what Arjuna does not: the danger of surrendering the world to Dhritarashtra’s philosophy. He can see the dangers of having tamasic people in positions of power. When tamas takes over individuals, they are finished. When it takes over organizations, they are finished. When a culture is taken over by tamas, when a nation is taken over by tamas, it is finished. The Tin Drum by Nobel laureate Günter Grass shows how Germany plunged into darkness under Hitler. Bhisham Sahni’s Tamas brilliantly shows what happened in the days of Partition as tamas conquered us. As Arjuna collapses in his chariot, surrendering to a dark wave of tamas perhaps for the first time in his life, his mind and body drained of all energy, his will deserting him, Krishna shows him how to walk out of the blinding darkness he is in and reach the world of light: of victory, joyfulness, prosperity and glory. That glorious path is the Bhagavad Gita.
THE GREAT POPULARITY
There is one aspect of Charles Dickens which must be of interest even to that subterranean race which does not admire his books. Even if we are not interested in Dickens as a great event in English literature, we must still be interested in him as a great event in English history. If he had not his place with Fielding and Thackeray, he would still have his place with Wat Tyler and Wilkes; for the man led a mob. He did what no English statesman, perhaps, has really done; he called out the people. He was popular in a sense of which we moderns have not even a notion. In that sense there is no popularity now. There are no popular authors to-day. We call such authors as Mr. Guy Boothby or Mr. William Le Queux popular authors. But this is popularity altogether in a weaker sense; not only in quantity, but in quality. The old popularity was positive; the new is negative. There is a great deal of difference between the eager man who wants to read a book, and the tired man who wants a book to read. A man reading a Le Queux mystery wants to get to the end of it. A man reading the Dickens novel wished that it might never end. Men read a Dickens story six times because they knew it so well. If a man can read a Le Queux story six times it is only because he can forget it six times. In short, the Dickens novel was popular not because it was an unreal world, but because it was a real world; a world in which the soul could live. The modern "shocker" at its very best is an interlude in life. But in the days when Dickens's work was coming out in serial, people talked as if real life were itself the interlude between one issue of "Pickwick" and another. In reaching the period of the publication of "Pickwick," we reach this sudden apotheosis of Dickens. Henceforward he filled the literary world in a way hard to imagine.
Fragments of that huge fashion remain in our daily language; in the talk of every trade or public question are embedded the wrecks of that enormous religion. Men give out the airs of Dickens without even opening his books; just as Catholics can live in a tradition of Christianity without having looked at the New Testament. The man in the street has more memories of Dickens, whom he has not read, than of Marie Corelli, whom he has. There is nothing in any way parallel to this omnipresence and vitality in the great comic characters of Boz. There are no modern Bumbles and Pecksniffs, no modern Gamps and Micawbers. Mr. Rudyard Kipling (to take an author of a higher type than those before mentioned) is called, and called justly, a popular author; that is to say, he is widely read, greatly enjoyed, and highly remunerated; he has achieved the paradox of at once making poetry and making money. But let anyone who wishes to see the difference try the experiment of assuming the Kipling characters to be common property like the Dickens characters. Let anyone go into an average parlour and allude to Strickland as he would allude to Mr. Bumble, the Beadle. Let anyone say that somebody is "a perfect Learoyd," as he would say "a perfect Pecksniff." Let anyone write a comic paragraph for a halfpenny paper, and allude to Mrs. Hawksbee instead of to Mrs. Gamp. He will soon discover that the modern world has forgotten its own fiercest booms more completely than it has forgotten this formless tradition from its fathers. The mere dregs of it come to more than any contemporary excitement; the gleaning of the grapes of "Pickwick" is more than the whole vintage of "Soldiers Three." There is one instance, and I think only one, of an exception to this generalisation; there is one figure in our popular literature which would really be recognised by the populace. Ordinary men would understand you if you referred currently to Sherlock Holmes. 
Sir Arthur Conan Doyle would no doubt be justified in rearing his head to the stars, remembering that Sherlock Holmes is the only really familiar figure in modern fiction. But let him droop that head again with a gentle sadness, remembering that if Sherlock Holmes is the only familiar figure in modern fiction Sherlock Holmes is also the only familiar figure in the Sherlock Holmes tales. Not many people could say offhand what was the name of the owner of Silver Blaze, or whether Mrs. Watson was dark or fair. But if Dickens had written the Sherlock Holmes stories, every character in them would have been equally arresting and memorable. A Sherlock Holmes would have cooked the dinner for Sherlock Holmes; a Sherlock Holmes would have driven his cab. If Dickens brought in a man merely to carry a letter, he had time for a touch or two, and made him a giant. Dickens not only conquered the world, he conquered it with minor characters. Mr. John Smauker, the servant of Mr. Cyrus Bantam, though he merely passes across the stage, is almost as vivid to us as Mr. Samuel Weller, the servant of Mr. Samuel Pickwick. The young man with the lumpy forehead, who only says "Esker" to Mr. Podsnap's foreign gentleman, is as good as Mr. Podsnap himself. They appear only for a fragment of time, but they belong to eternity. We have them only for an instant, but they have us for ever. In dealing with Dickens, then, we are dealing with a man whose public success was a marvel and almost a monstrosity. And here I perceive that my friend, the purely artistic critic, primed with Flaubert and Turgenev, can contain himself no longer. He leaps to his feet, upsetting his cup of cocoa, and asks contemptuously what all this has to do with criticism. "Why begin your study of an author," he says, "with trash about popularity? Boothby is popular, and Le Queux is popular, and Mother Siegel is popular. If Dickens was even more popular, it may only mean that Dickens was even worse.
The people like bad literature. If your object is to show that Dickens was good literature, you should rather apologise for his popularity, and try to explain it away. You should seek to show that Dickens's work was good literature, although it was popular. Yes, that is your task, to prove that Dickens was admirable, although he was admired!" I ask the artistic critic to be patient for a little and to believe that I have a serious reason for registering this historic popularity. To that we shall come presently. But as a manner of approach I may perhaps ask leave to examine this actual and fashionable statement, to which I have supposed him to have recourse -- the statement that the people like bad literature, and even like literature because it is bad. This way of stating the thing is an error, and in that error lies matter of much import to Dickens and his destiny in letters. The public does not like bad literature. The public likes a certain kind of literature and likes that kind of literature even when it is bad better than another kind of literature even when it is good. Nor is this unreasonable; for the line between different types of literature is as real as the line between tears and laughter; and to tell people who can only get bad comedy that you have some first-class tragedy is as irrational as to offer a man who is shivering over weak warm coffee a really superior sort of ice. Ordinary people dislike the delicate modern work, not because it is good or because it is bad, but because it is not the thing that they asked for. If, for instance, you find them pent in sterile streets and hungering for adventure and a violent secrecy, and if you then give them their choice between "A Study in Scarlet," a good detective story, and "The Autobiography of Mark Rutherford," a good psychological monologue, no doubt they will prefer "A Study in Scarlet." 
But they will not do so because "The Autobiography of Mark Rutherford" is a very good monologue, but because it is evidently a very poor detective story. They will be indifferent to "Les Aveugles," not because it is good drama, but because it is bad melodrama. They do not like good introspective sonnets; but neither do they like bad introspective sonnets, of which there are many. When they walk behind the brass of the Salvation Army band, instead of listening to harmonies at Queen's Hall, it is always assumed that they prefer bad music. But it may be merely that they prefer military music, music marching down the open street, and that if Dan Godfrey's band could be smitten with salvation and lead them they would like that even better. And while they might easily get more satisfaction out of a screaming article in The War Cry than out of a page of Emerson about the Oversoul, this would not be because the page of Emerson is another and superior kind of literature. It would be because the page of Emerson is another (and inferior) kind of religion. Dickens stands first as a defiant monument of what happens when a great literary genius has a literary taste akin to that of the community. For this kinship was deep and spiritual. Dickens was not like our ordinary demagogues and journalists. Dickens did not write what the people wanted. Dickens wanted what the people wanted. And with this was connected that other fact which must never be forgotten, and which I have more than once insisted on, that Dickens and his school had a hilarious faith in democracy and thought of the service of it as a sacred priesthood. Hence there was this vital point in his popularism, that there was no condescension in it. The belief that the rabble will only read rubbish can be read between the lines of all our contemporary writers, even of those writers whose rubbish the rabble reads. Mr. Fergus Hume has no more respect for the populace than Mr. George Moore. 
The only difference lies between those writers who will consent to talk down to the people, and those writers who will not consent to talk down to the people. But Dickens never talked down to the people. He talked up to the people. He approached the people like a deity and poured out his riches and his blood. This is what makes the immortal bond between him and the masses of men. He had not merely produced something they could understand, but he took it seriously, and toiled and agonised to produce it. They were not only enjoying one of the best writers, they were enjoying the best he could do. His raging and sleepless nights, his wild walks in the darkness, his note-books crowded, his nerves in rags, all this extraordinary output was but a fit sacrifice to the ordinary man. He climbed towards the lower classes. He panted upwards on weary wings to reach the heaven of the poor. His power, then, lay in the fact that he expressed with an energy and brilliancy quite uncommon the things close to the common mind. But with this mere phrase, the common mind, we collide with a current error. Commonness and the common mind are now generally spoken of as meaning in some manner inferiority and the inferior mind; the mind of the mere mob. But the common mind means the mind of all the artists and heroes; or else it would not be common. Plato had the common mind; Dante had the common mind; or that mind was not common. Commonness means the quality common to the saint and the sinner, to the philosopher and the fool; and it was this that Dickens grasped and developed. In everybody there is a certain thing that loves babies, that fears death, that likes sunlight: that thing enjoys Dickens. And everybody does not mean uneducated crowds; everybody means everybody: everybody means Mrs. Meynell. This lady, a cloistered and fastidious writer, has written one of the best eulogies of Dickens that exist, an essay in praise of his pungent perfection of epithet.
And when I say that everybody understands Dickens I do not mean that he is suited to the untaught intelligence. I mean that he is so plain that even scholars can understand him. The best expression of the fact, however, is to be found in noting the two things in which he is most triumphant. In order of artistic value, next after his humour, comes his horror. And both his humour and his horror are of a kind strictly to be called human; that is, they belong to the basic part of us, below the lowest roots of our variety. His horror for instance is a healthy churchyard horror, a fear of the grotesque defamation called death; and this every man has, even if he also has the more delicate and depraved fears that come of an evil spiritual outlook. We may be afraid of a fine shade with Henry James; that is, we may be afraid of the world. We may be afraid of a taut silence with Maeterlinck, that is, we may be afraid of our own souls. But every one will certainly be afraid of a Cock Lane Ghost, including Henry James and Maeterlinck. This latter is literally a mortal fear, a fear of death; it is not the immortal fear, or fear of damnation, which belongs to all the more refined intellects of our day. In a word, Dickens does, in the exact sense, make the flesh creep; he does not, like the decadents, make the soul crawl. And the creeping of the flesh on being reminded of its fleshly failure is a strictly universal thing which we can all feel, while some of us are as yet uninstructed in the art of spiritual crawling. In the same way the Dickens mirth is a part of man and universal. All men can laugh at broad humour, even the subtle humorists. Even the modern flâneur, who can smile at a particular combination of green and yellow, would laugh at Mr. Lammle's request for Mr. Fledgeby's nose. In a word -- the common things are common -- even to the uncommon people. 
These two primary dispositions of Dickens, to make the flesh creep and to make the sides ache, were a sort of twins of his spirit; they were never far apart and the fact of their affinity is interestingly exhibited in the first two novels. Generally he mixed the two up in a book and mixed a great many other things with them. As a rule he cared little if he kept six stories of quite different colours running in the same book. The effect was sometimes similar to that of playing six tunes at once. He does not mind the coarse tragic figure of Jonas Chuzzlewit crossing the mental stage which is full of the allegorical pantomime of Eden, Mr. Chollop and The Watertoast Gazette, a scene which is as much of a satire as "Gulliver," and nearly as much of a fairy tale. He does not mind binding up a rather pompous sketch of prostitution in the same book with an adorable impossibility like Bunsby. But "Pickwick" is so far a coherent thing that it is coherently comic and consistently rambling. And as a consequence his next book was, upon the whole, coherently and consistently horrible. As his natural turn for terrors was kept down in "Pickwick," so his natural turn for joy and laughter is kept down in "Oliver Twist." In "Oliver Twist" the smoke of the thieves' kitchen hangs over the whole tale, and the shadow of Fagin falls everywhere. The little lamp-lit rooms of Mr. Brownlow and Rose Maylie are to all appearance purposely kept subordinate, a mere foil to the foul darkness without. It was a strange and appropriate accident that Cruikshank and not "Phiz" should have illustrated this book. There was about Cruikshank's art a kind of cramped energy which is almost the definition of the criminal mind. His drawings have a dark strength: yet he does not only draw morbidly, he draws meanly. In the doubled-up figure and frightful eyes of Fagin in the condemned cell there is not only a baseness of subject; there is a kind of baseness in the very technique of it. 
It is not drawn with the free lines of a free man; it has the half-witted secrecies of a hunted thief. It does not look merely like a picture of Fagin; it looks like a picture by Fagin. Among these dark and detestable plates there is one which has, with a kind of black directness, the dreadful poetry that does inhere in the story, stumbling as it often is. It represents Oliver asleep at an open window in the house of one of his humaner patrons. And outside the window, but as big and close as if they were in the room, stand Fagin and the foul-faced Monks, staring at him with dark monstrous visages and great white wicked eyes, in the style of the simple devilry of the draughtsman. The very naïveté of the horror is horrifying: the very woodenness of the two wicked men seems to make them worse than mere men who are wicked. But this picture of big devils at the window-sill does express, as has been suggested above, the thread of poetry in the whole thing; the sense, that is, of the thieves as a kind of army of devils compassing earth and sky crying for Oliver's soul and besieging the house in which he is barred for safety. In this matter there is, I think, a difference between the author and the illustrator. In Cruikshank there was surely something morbid; but, sensitive and sentimental as Dickens was, there was nothing morbid in him. He had, as Stevenson had, more of the mere boy's love of suffocating stories of blood and darkness; of skulls, of gibbets, of all the things, in a word, that are sombre without being sad. There is a ghastly joy in remembering our boyish reading about Sikes and his flight; especially about the voice of that unbearable pedlar which went on in a monotonous and maddening sing-song, "will wash out grease-stains, mud-stains, blood-stains," until Sikes fled almost screaming. For this boyish mixture of appetite and repugnance there is a good popular phrase, "supping on horrors." Dickens supped on horrors as he supped on Christmas pudding.
He supped on horrors because he was an optimist and could sup on anything. There was no saner or simpler schoolboy than Traddles, who covered all his books with skeletons. "Oliver Twist" had begun in Bentley's Miscellany, which Dickens edited in 1837. It was interrupted by a blow that for the moment broke the author's spirit and seemed to have broken his heart. His wife's sister, Mary Hogarth, died suddenly. To Dickens his wife's family seems to have been like his own; his affections were heavily committed to the sisters, and of this one he was peculiarly fond. All his life, through much conceit and sometimes something bordering on selfishness, we can feel the redeeming note of an almost tragic tenderness; he was a man who could really have died of love or sorrow. He took up the work of "Oliver Twist" again later in the year, and finished it at the end of 1838. His work was incessant and almost bewildering. In 1838 he had already brought out the first number of "Nicholas Nickleby." But the great popularity went booming on; the whole world was roaring for books by Dickens, and more books by Dickens, and Dickens was labouring night and day like a factory. Among other things he edited the "Memoirs of Grimaldi." The incident is only worth mentioning for the sake of one more example of the silly ease with which Dickens was drawn by criticism and the clever ease with which he managed, in these small squabbles, to defend himself. Somebody mildly suggested that, after all, Dickens had never known Grimaldi. Dickens was down on him like a thunderbolt, sardonically asking how close an intimacy Lord Braybrooke had with Mr. Samuel Pepys. "Nicholas Nickleby" is the most typical perhaps of the tone of his earlier works. It is in form a very rambling, old-fashioned romance, the kind of romance in which the hero is only a convenience for the frustration of the villain. Nicholas is what is called in theatricals a stick. But any stick is good enough to beat a Squeers with.
That strong thwack, that simplified energy is the whole object of such a story; and the whole of this tale is full of a kind of highly picturesque platitude. The wicked aristocrats, Sir Mulberry Hawk, Lord Verisopht and the rest are inadequate versions of the fashionable profligate. But this is not (as some suppose) because Dickens in his vulgarity could not comprehend the refinement of patrician vice. There is no idea more vulgar or more ignorant than the notion that a gentleman is generally what is called refined. The error of the Hawk conception is that, if anything, he is too refined. Real aristocratic blackguards do not swagger and rant so well. A real fast baronet would not have defied Nicholas in the tavern with so much oratorical dignity. A real fast baronet would probably have been choked with apoplectic embarrassment and said nothing at all. But Dickens read into this aristocracy a grandiloquence and a natural poetry which, like all melodrama, is really the precious jewel of the poor. But the book contains something which is much more Dickensian. It is exquisitely characteristic of Dickens that the truly great achievement of the story is the person who delays the story. Mrs. Nickleby, with her beautiful mazes of memory, does her best to prevent the story of Nicholas Nickleby from being told. And she does well. There is no particular necessity that we should know what happens to Madeline Bray. There is a desperate and crying necessity that we should know Mrs. Nickleby once had a foot-boy who had a wart on his nose and a driver who had a green shade over his left eye. If Mrs. Nickleby is a fool, she is one of those fools who are wiser than the world. She stands for a great truth which we must not forget; the truth that experience is not in real life a saddening thing at all. The people who have had misfortunes are generally the people who love to talk about them. Experience is really one of the gaieties of old age, one of its dissipations. 
Mere memory becomes a kind of debauch. Experience may be disheartening to those who are foolish enough to try to co-ordinate it and to draw deductions from it. But to those happy souls, like Mrs. Nickleby, to whom relevancy is nothing, the whole of their past life is like an inexhaustible fairyland. Just as we take a rambling walk because we know that a whole district is beautiful, so they indulge a rambling mind because they know that a whole existence is interesting. A boy does not plunge into his future more romantically and at random, than they plunge into their past. Another gleam in the book is Mr. Mantalini. Of him, as of all the really great comic characters of Dickens, it is impossible to speak with any critical adequacy. Perfect absurdity is a direct thing, like physical pain, or a strong smell. A joke is a fact. However indefensible it is it cannot be attacked. However defensible it is it cannot be defended. That Mr. Mantalini should say in praising the "outline" of his wife, "The two Countesses had no outlines, and the Dowager's was a demd outline," -- this can only be called an unanswerable absurdity. You may try to analyze it, as Charles Lamb did the indefensible joke about the hare; you may dwell for a moment on the dark distinctions between the negative disqualification of the Countess and the positive disqualification of the Dowager, but you will not capture the violent beauty of it in any way. "She will be a lovely widow. I shall be a body. Some handsome women will cry; she will laugh demnebly." This vision of demoniac heartlessness has the same defiant finality. I mention the matter here, but it has to be remembered in connection with all the comic masterpieces of Dickens. Dickens has greatly suffered with the critics precisely through this stunning simplicity in his best work. 
The critic is called upon to describe his sensations while enjoying Mantalini and Micawber, and he can no more describe them than he can describe a blow in the face. Thus Dickens, in this self-conscious, analytical and descriptive age, loses both ways. He is doubly unfitted for the best modern criticism. His bad work is below that criticism. His good work is above it. But gigantic as were Dickens's labours, gigantic as were the exactions from him, his own plans were more gigantic still. He had the type of mind that wishes to do every kind of work at once; to do everybody's work as well as its own. There floated before him a vision of a monstrous magazine, entirely written by himself. It is true that when this scheme came to be discussed, he suggested that other pens might be occasionally employed; but, reading between the lines, it is sufficiently evident that he thought of the thing as a kind of vast multiplication of himself, with Dickens as editor opening letters, Dickens as leader-writer writing leaders, Dickens as reporter reporting meetings, Dickens as reviewer reviewing books, Dickens, for all I know, as office-boy opening and shutting doors. This serial, of which he spoke to Messrs. Chapman & Hall, began and broke off and remains as a colossal fragment bound together under the title of "Master Humphrey's Clock." One characteristic thing he wished to have in the periodical. He suggested an Arabian Nights of London, in which Gog and Magog, the giants of the city, should give forth chronicles as enormous as themselves. He had a taste for these schemes or frameworks for many tales. He made and abandoned many; many he half-fulfilled. I strongly suspect that he meant Major Jackman, in "Mrs. Lirriper's Lodgings" and "Mrs. Lirriper's Legacy," to start a series of studies of that lady's lodgers, a kind of history of No. 81, Norfolk Street, Strand. "The Seven Poor Travellers" was planned for seven stories; we will not say seven poor stories. 
Dickens had meant, probably, to write a tale for each article of "Somebody's Luggage": he only got as far as the hat and the boots. This gigantesque scale of literary architecture, huge and yet curiously cosy, is characteristic of his spirit, fond of size and yet fond of comfort. He liked to have story within story, like room within room of some labyrinthine but comfortable castle. In this spirit he wished "Master Humphrey's Clock" to begin, and to be a big frame or bookcase for numberless novels. The clock started; but the clock stopped. In the prologue by Master Humphrey reappear Mr. Pickwick and Sam Weller, and of that resurrection many things have been said, chiefly expressions of a reasonable regret. Doubtless they do not add much to their author's reputation, but they add a great deal to their author's pleasure. It was ingrained in him to wish to meet old friends. All his characters are, so to speak, designed to be old friends; in a sense every Dickens character is an old friend, even when he first appears. He comes to us mellow out of many implied interviews, and carries the firelight on his face. Dickens was simply pleased to meet Pickwick again, and being pleased, he made the old man too comfortable to be amusing. But "Master Humphrey's Clock" is now scarcely known except as the shell of one of the well-known novels. "The Old Curiosity Shop" was published in accordance with the original "Clock" scheme. Perhaps the most typical thing about it is the title. There seems no reason in particular, at the first and most literal glance, why the story should be called after the Old Curiosity Shop. Only two of the characters have anything to do with such a shop, and they leave it for ever in the first few pages. It is as if Thackeray had called the whole novel of "Vanity Fair" "Miss Pinkerton's Academy." It is as if Scott had given the whole story of "The Antiquary" the title of "The Hawes Inn." 
But when we feel the situation with more fidelity we realise that this title is something in the nature of a key to the whole Dickens romance. His tales always started from some splendid hint in the streets. And shops, perhaps the most poetical of all things, often set off his fancy galloping. Every shop, in fact, was to him the door of romance. Among all the huge serial schemes of which we have spoken, it is a matter of wonder that he never started an endless periodical called "The Street," and divided it into shops. He could have written an exquisite romance called "The Baker's Shop"; another called "The Chemist's Shop"; another called "The Oil Shop," to keep company with "The Old Curiosity Shop." Some incomparable baker he invented and forgot. Some gorgeous chemist might have been. Some more than mortal oil-man is lost to us for ever. This Old Curiosity Shop he did happen to linger by: its tale he did happen to tell. Around "Little Nell," of course, a controversy raged and rages; some implored Dickens not to kill her at the end of the story: some regret that he did not kill her at the beginning. To me the chief interest in this young person lies in the fact that she is an example, and the most celebrated example of what must have been, I think, a personal peculiarity, perhaps, a personal experience of Dickens. There is, of course, no paradox at all in saying that if we find in a good book a wildly impossible character it is very probable indeed that it was copied from a real person. This is one of the commonplaces of good art criticism. For although people talk of the restraints of fact and the freedom of fiction, the case for most artistic purposes is quite the other way. Nature is as free as air: art is forced to look probable. There may be a million things that do happen, and yet only one thing that convinces us is likely to happen. Out of a million possible things there may be only one appropriate thing. 
I fancy, therefore, that many stiff, unconvincing characters are copied from the wild freak-show of real life. And in many parts of Dickens's work there is evidence of some peculiar affection on his part for a strange sort of little girl; a little girl with a premature sense of responsibility and duty; a sort of saintly precocity. Did he know some little girl of this kind? Did she die, perhaps, and remain in his memory in colours too ethereal and pale? In any case there are a great number of them in his works. Little Dorrit was one of them, and Florence Dombey with her brother, and even Agnes in infancy; and, of course, Little Nell. And, in any case, one thing is evident; whatever charm these children may have they have not the charm of childhood. They are not little children: they are "little mothers." The beauty and divinity in a child lie in his not being worried, not being conscientious, not being like Little Nell. Little Nell has never any of the sacred bewilderment of a baby. She never wears that face, beautiful but almost half-witted, with which a real child half understands that there is evil in the universe. As usual, however, little as the story has to do with the title, the splendid and satisfying pages have even less to do with the story. Dick Swiveller is perhaps the noblest of all the noble creations of Dickens. He has all the overwhelming absurdity of Mantalini, with the addition of being human and credible, for he knows he is absurd. His high-falutin is not done because he seriously thinks it right and proper, like that of Mr. Snodgrass, nor is it done because he thinks it will serve his turn, like that of Mr. Pecksniff, for both these beliefs are improbable; it is done because he really loves high-falutin, because he has a lonely literary pleasure in exaggerative language. Great draughts of words are to him like great draughts of wine -- pungent and yet refreshing, light and yet leaving him in a glow. 
In unerring instinct for the perfect folly of a phrase he has no equal, even among the giants of Dickens. "I am sure," says Miss Wackles, when she had been flirting with Cheggs, the market-gardener, and reduced Mr. Swiveller to Byronic renunciation, "I am sure I'm very sorry if ----" "Sorry," said Mr. Swiveller, "sorry in the possession of a Cheggs!" The abyss of bitterness is unfathomable. Scarcely less precious is the poise of Mr. Swiveller when he imitates the stage brigand. After crying, "Some wine here! Ho!" he hands the flagon to himself with profound humility, and receives it haughtily. Perhaps the very best scene in the book is that between Mr. Swiveller and the single gentleman with whom he endeavours to remonstrate for having remained in bed all day: "We cannot have single gentlemen coming into the place and sleeping like double gentlemen without paying extra. . . . An equal amount of slumber was never got out of one bed, and if you want to sleep like that you must pay for a double-bedded room." His relations with the Marchioness are at once purely romantic and purely genuine; there is nothing even of Dickens's legitimate exaggerations about them. A shabby, larky, good-natured clerk would, as a matter of fact, spend hours in the society of a little servant girl if he found her about the house. It would arise partly from a dim kindliness, and partly from that mysterious instinct which is sometimes called, mistakenly, a love of low company -- that mysterious instinct which makes so many men of pleasure find something soothing in the society of uneducated people, particularly uneducated women. It is the instinct which accounts for the otherwise unaccountable popularity of barmaids. And still the pot of that huge popularity boiled. In 1841 another novel was demanded, and "Barnaby Rudge" supplied. It is chiefly of interest as an embodiment of that other element in Dickens, the picturesque or even the pictorial. 
Barnaby Rudge, the idiot with his rags and his feathers and his raven, the bestial hangman, the blind mob -- all make a picture, though they hardly make a novel. One touch there is in it of the richer and more humorous Dickens, the boy-conspirator, Mr. Sim Tappertit. But he might have been treated with more sympathy -- with as much sympathy, for instance, as Mr. Dick Swiveller; for he is only the romantic guttersnipe, the bright boy at the particular age when it is most fascinating to found a secret society and most difficult to keep a secret. And if ever there was a romantic guttersnipe on earth it was Charles Dickens. "Barnaby Rudge" is no more an historical novel than Sim's secret league was a political movement; but they are both beautiful creations. When all is said, however, the main reason for mentioning the work here is that it is the next bubble in the pot, the next thing that burst out of that whirling, seething head. The tide of it rose and smoked and sang till it boiled over the pot of Britain and poured over all America. In the January of 1842 he set out for the United States.
Crises can drive change, but sometimes it takes two crises to cement a transformation. Alone, the Great Depression ushered in the New Deal, roughly tripling U.S. federal spending as a share of output. But it took World War II to push federal spending much higher, solidifying the role of the state in the U.S. economy. If federal interventions such as the creation of the interstate highway system felt natural by the mid-1950s, it was the result of two compounding shocks, not a single one. American history offers many such examples. Alone, the Vietnam War might have triggered a decline of trust in the government. It took the compounding shock of Watergate to make that decline precipitous. Alone, the collapse of the Soviet Union would have enhanced U.S. power. It took the strong performance of the U.S. economy in the 1990s to spark talk of a “unipolar moment.” Alone, technological advances would have fueled inequality in the first decade of this century. Globalization reinforced that fracturing. Today, the United States and other advanced countries are experiencing the second wave of an especially powerful twin shock. Taken individually, either the global financial crisis of 2008 or the global pandemic of 2020 would have been enough to change public finances, driving governments to create and borrow money freely. Combined, these two crises are set to transform the spending power of the state. A new era of assertive and expansive government beckons. Call it the age of magic money. The twin shocks will change the balance of power in the world, because their effects will vary across countries, depending on the credibility and cohesion of each country’s economic institutions. 
Japan, with a long history of low inflation and a competent national central bank, has already shown that it can borrow and spend far more than one might have predicted given its already high levels of public debt. The United Kingdom, which has a worrisome trade deficit but strong traditions of public finance, should be able to manage an expansion of government spending without adverse consequences. The eurozone, an ungainly cross between an economic federation and a bickering assemblage of proud nation-states, will be slower to exploit the new opportunities. Meanwhile, emerging economies, which weathered the 2008 crisis, will enter a hard phase. Weaker states will succumb to debt crises. The new era will present the biggest potential rewards—and also the greatest risks—to the United States. As the issuer of the world’s most trusted financial assets, the United States will be able to use (and maybe abuse) the new financial powers most ambitiously. Thanks partly to the dollar’s entrenched position as the world’s reserve currency, the United States will be able to sustain an expansion in government spending on priorities as varied as scientific research, education, and national security. At the same time, the U.S. national debt will swell, and its management will depend crucially on the credibility of the Federal Reserve. In times of high national debt, U.S. presidents since Harry Truman have tried to subjugate the central bank. If the Fed loses its independence, the age of magic money could end in catastrophe. The financial crisis of 2008 left its mark on the world by magnifying the power of central banks in the advanced economies. In the days immediately after Lehman Brothers filed for bankruptcy, in September of that year, Ben Bernanke, the U.S. Federal Reserve chair, offered an early glimpse of the economy’s new rules by pumping $85 billion of public funds into the American International Group (AIG), an insurer. 
When Representative Barney Frank, Democrat of Massachusetts, was informed of this plan, he skeptically inquired whether the Fed had as much as $85 billion on hand. “We have $800 billion,” Bernanke answered simply. Armed with the nation’s printing press, Bernanke was saying, the Fed can conjure as many dollars as it wants. The iron law of scarcity need not apply to central bankers. The AIG rescue was only the beginning. The Fed scooped toxic assets off the balance sheets of a long list of failing lenders in order to stabilize them. It embraced the new tool of “quantitative easing,” which involves creating money to buy long-term bonds, thus suppressing long-term interest rates and stimulating the economy. By the end of 2008, the Fed had pumped $1.3 trillion into the economy, a sum equivalent to one-third of the annual federal budget. The central bank’s traditional toolkit, involving the manipulation of short-term interest rates, had been dramatically expanded. The Fed has emerged as the biggest agent of big government, a sort of economics superministry. These ambitious moves were mirrored in other advanced economies. The Bank of England also embraced quantitative easing, buying bonds on the same scale as the Fed (adjusting for the size of the British economy). The Bank of Japan had experimented with quantitative easing since 2001, but following the financial crisis, it redoubled those efforts; since 2013, it has created more money relative to GDP than any other mature economy. The European Central Bank’s response was halting for many years, owing to resistance from Germany and other northern member states, but in 2015, it joined the party. Combined, these “big four” central banks injected about $13 trillion into their economies in the decade after the financial crisis. The crisis brought on by the novel coronavirus has emboldened central banks still further. 
Before the pandemic, economists worried that quantitative easing would soon cease to be effective or politically acceptable. There were additional concerns that post-2008 legislation had constrained the power of the Fed to conduct rescues. “The government enjoys even less emergency authority than it did before the crisis,” former Treasury Secretary Timothy Geithner wrote in these pages in 2017. But as soon as the pandemic hit, such fears were dispelled. “I was among many who were worried a month ago about the limited scope of the Fed arsenal,” the respected investor Howard Marks confessed recently. “Now we see the vast extent of the Fed’s potential toolkit.” The Fed rode into battle in March, promising that the range of its actions would be effectively limitless. “When it comes to lending, we are not going to run out of ammunition,” declared Jerome Powell, the Fed chair. Whereas the Fed’s first two rounds of quantitative easing, launched in 2008 and 2010, had involved a preannounced quantity of purchases, Powell’s stance was deliberately open ended. In this, he was following the precedent set in 2012 by Mario Draghi, then the president of the European Central Bank, who pledged to do “whatever it takes” to contain Europe’s debt crisis. But Draghi’s promise was an inspired bluff, since the willingness of northern European states to support limitless intervention was uncertain. In contrast, nobody today doubts that the Fed has the backing of the U.S. president and Congress to deliver on its maximalist rhetoric. This is “whatever it takes” on steroids. The Fed’s muscular promises have been matched with immediate actions. During March and the first half of April, the Fed pumped more than $2 trillion into the economy, an intervention almost twice as vigorous as it delivered in the six weeks after the fall of Lehman Brothers. 
Meanwhile, market economists project that the central bank will buy more than $5 trillion of additional debt by the end of 2021, dwarfing its combined purchases from 2008 to 2015. Other central banks are following the same path, albeit not on the same scale. As of the end of April, the European Central Bank was on track for $3.4 trillion of easing, and Japan and the United Kingdom had promised a combined $1.5 trillion. The design of the Fed’s programs is leading it into new territory. After Lehman’s failure, the Fed was leery of bailing out nonfinancial companies whose stability was marginal to the functioning of the financial system. Today, the Fed is buying corporate bonds—including risky junk bonds—to ensure that companies can borrow. It is also working with the Treasury Department and Congress to get loans to small and medium-sized businesses. The Fed has emerged as the lender of last resort not just to Wall Street but also to Main Street. As the Fed expands its reach, it is jeopardizing its traditional claim to be a narrow, technocratic agency standing outside politics. In the past, the Fed steered clear of Main Street lending precisely because it had no wish to decide which companies deserved bailouts and which should hit the wall. Such invidious choices were best left to democratically elected politicians, who had a mandate to set social priorities. But the old demarcation between monetary technicians and budgetary politics has blurred. The Fed has emerged as the biggest agent of big government, a sort of economics superministry. This leads to the second expansion of governments’ financial power resulting from the coronavirus crisis. The pandemic has shown that central banks are not the only ones that can conjure money out of thin air; finance ministries can also perform a derivative magic of their own. 
If authorized by lawmakers and backed by central banks, national treasuries can borrow and spend without practical limit, mocking the normal laws of economic gravity. The key to this new power lies in the strange disappearance of inflation. Since the 2008 crisis, prices in the advanced economies have risen by less than the desired target of about two percent annually. As a result, one of the main risks of budget deficits has vanished, at least for the moment. In the pre-2008 world, governments that spent more than they collected in taxes were creating a risk of inflation, which often forced central banks to raise interest rates: as a form of stimulus, budget deficits were therefore viewed as self-defeating. But in the post-2008 world, with inflation quiescent, budget authorities can deliver stimulatory deficits without fear that central banks will counteract them. Increased inequality has moved wealth into the hands of citizens who are more likely to save than to spend. Reduced competition has allowed companies with market power to get away with spending less on investments and wages. Cloud computing and digital marketplaces have made it possible to spend less on equipment and hiring when launching companies. Thanks to these factors and perhaps others, demand has not outgrown supply, so inflation has been minimal. Despite a perception of U.S. decline, almost two-thirds of central bank reserves are still composed of dollars. Whatever the precise reasons, the disappearance of inflation has allowed central banks to not merely tolerate budget deficits but also facilitate them. Governments are cutting taxes and boosting spending, financing the resulting deficits by issuing bonds. Those bonds are then bought from market investors by central banks as part of their quantitative easing. Because of these central bank purchases, the interest rate governments must pay to borrow goes down. 
Moreover, because central banks generally remit their profits back to government treasuries, these low interest payments are even lower than they seem, since they will be partially rebated. A finance ministry that sells debt to its national central bank is, roughly speaking, borrowing from itself. Just as central bankers are blurring the line between monetary policy and budgetary policy, so, too, are budgetary authorities acquiring some of the alchemical power of central bankers. If low inflation and quantitative easing have made budget deficits cheap, the legacy of 2008 has also made them more desirable. In the wake of the financial crisis, quantitative easing helped the economy recover, but it also had drawbacks. Holding down long-term interest rates has the effect of boosting equity and bond prices, which makes it cheaper for companies to raise capital to invest. But it also delivers a handout to holders of financial assets—hardly the most deserving recipients of government assistance. It would therefore be better to rouse the economy with lower taxes and additional budgetary spending, since these can be targeted at citizens who need the help. The rise of populism since 2008 underscores the case for stimulus tools that are sensitive to inequality. Because budget deficits appear less costly and more desirable than before, governments in the advanced economies have embraced them with gusto. Again, the United States has led the way. In the wake of the financial crisis, in 2009, the country ran a federal budget deficit of 9.8 percent of GDP. Today, that number has roughly doubled. Other countries have followed the United States’ “don’t tax, just spend” policies, but less aggressively. At the end of April, Morgan Stanley estimated that Japan will run a deficit of 8.5 percent of GDP this year, less than half the U.S. ratio. The eurozone will be at 9.5 percent, and the United Kingdom, at 11.5 percent. 
China’s government, which led the world in the size of its stimulus after 2008, will not come close to rivaling the United States this time. It is likely to end up with a 2020 deficit of 12.3 percent, according to Morgan Stanley. As the world’s strong economies borrow heavily to combat the coronavirus slump, fragile ones are finding that this option is off-limits. Far from increasing their borrowing, they have difficulty in maintaining their existing levels of debt, because their creditors refuse to roll over their loans at the first hint of a crisis. During the first two months of the pandemic, $100 billion of investment capital fled developing countries, according to the International Monetary Fund, and more than 90 countries have petitioned the IMF for assistance. In much of the developing world, there is no magic, only austerity. Since the start of the pandemic, the United States has unleashed the world’s biggest monetary stimulus and the world’s biggest budgetary stimulus. Miraculously, it has been able to do this at virtually no cost. The pandemic has stimulated a flight to the relative safety of U.S. assets, and the Fed’s purchases have bid up the price of U.S. Treasury bonds. As the price of Treasuries rises, their interest yield goes down—in the first four months of this year, the yield on the ten-year bond fell by more than a full percentage point, dropping below one percent for the first time ever. Consequently, even though the stimulus has caused U.S. government debt to soar, the cost of servicing that debt has remained stable. Projections suggest that federal debt payments as a share of GDP will be the same as they would have been without the crisis. This may be the closest thing to a free lunch in economics. The world’s top economies have all enjoyed some version of this windfall, but the U.S. experience remains distinctive. 
Nominal ten-year government interest rates are lower in Canada, France, Germany, Japan, and the United Kingdom than in the United States, but only Germany’s is lower after adjusting for inflation. Moreover, the rate in the United States has adjusted the most since the pandemic began. Germany’s ten-year government rate, to cite one contrasting example, is negative but has come down only marginally since the start of February—and has actually risen since last September. Likewise, China’s ten-year bond rate has come down since the start of this year but by half as much as the U.S. rate. Meanwhile, some emerging economies have seen their borrowing costs move in the opposite direction. Between mid-February and the end of April, Indonesia’s rate rose from around 6.5 percent to just under eight percent, and South Africa’s jumped from under nine percent to over 12 percent, although that increase has since subsided. The United States’ ability to borrow safely and cheaply from global savers reflects the dollar’s status as the world’s reserve currency. In the wake of the 2008 crisis, when the failures of U.S. financial regulation and monetary policy destabilized the world, there was much talk that the dollar’s dominance might end, and China made a concerted effort to spread the use of the yuan beyond its borders. A decade or so later, China has built up its government-bond market, making it the second largest in the world. But foreigners must still contend with China’s capital controls, and the offshore market for yuan-denominated bonds, which Beijing promoted with much fanfare a decade ago, has failed to gain traction. As a result, the yuan accounts for just two percent of global central bank reserves. Private savers are starting to hold Chinese bonds, but these still represent a tiny fraction of their portfolios. Today, finance has more sway over countries and people than ever before. 
As China struggles to internationalize the yuan, the dollar remains the currency that savers covet. Despite the financial crisis and the widespread perception that U.S. influence in the world has declined, almost two-thirds of central bank reserves are still composed of dollars. Nor has the frequent U.S. resort to financial sanctions changed the picture, even though such sanctions create an incentive for countries such as Iran to develop ways around the dollar-based financial system. Issuing the global reserve currency turns out to be a highly sustainable source of power. The dollar continues to rally in times of uncertainty, even when erratic U.S. policies add to that uncertainty—hence the appreciation of the dollar since the start of the pandemic. The dollar’s preeminence endures because of powerful network effects. Savers all over the world want dollars for the same reason that schoolchildren all over the world learn English: a currency or a language is useful to the extent that others choose it. Just under half of all international debt securities are denominated in dollars, so savers need dollars to buy these financial instruments. The converse is also true: because savers are accustomed to transacting in dollars, issuers of securities find it attractive to sell equities or bonds into the dollar market. So long as global capital markets operate mainly in dollars, the dollar will be at the center of financial crises—failing banks and businesses will have to be rescued with dollars, since that will be the currency in which they have borrowed. As a result, prudent central banks will hold large dollar reserves. These network effects are likely to protect the status of the dollar for the foreseeable future. In the age of magic money, this advantage will prove potent. At moments of stress, the United States will experience capital inflows even as the Federal Reserve pushes dollar interest rates down, rendering capital plentiful and inexpensive. 
Meanwhile, other countries will be treated less generously by the bond markets, and some will be penalized by borrowing costs that rise at the least opportune moment. A strong financial system has always given great powers an edge: a bit over two centuries ago, the United Kingdom’s superior access to loans helped it defeat Napoleon. Today, finance has more sway over countries and people than ever before. But even as it bolsters U.S. power, finance has become riskier. The risk is evident in the ballooning U.S. federal debt burden. As recently as 2001, the federal debt held by the public amounted to just 31 percent of GDP. After the financial crisis, the ratio more than doubled. Now, thanks to the second of the twin shocks, federal debt held by the public will soon match the 106 percent record set at the end of World War II. Whether this debt triggers a crisis will depend on the behavior of interest rates. Before the pandemic, the Congressional Budget Office expected the average interest rate on the debt to hover around 2.5 percent. The Fed’s aggressive bond buying has pulled U.S. rates lower—hence the free lunch. But even if interest rates went back to what they were before, the debt would still be sustainable: higher than the average of 1.5 percent of GDP that the country has experienced over the past two decades but still lower than the peak of 3.2 percent of GDP that the country reached at the start of the 1990s. Another way of gauging debt sustainability is to compare debt payments with the growth outlook. If nominal growth—real growth plus inflation—outstrips debt payments, a country can usually grow out of its problem. In the United States, estimates of real sustainable growth range from 1.7 percent to 2.0 percent; estimates of future inflation range from the 1.5 percent expected by the markets to the Fed’s official target of 2.0 percent. Putting these together, U.S. nominal growth is likely to average around 3.6 percent. 
If debt service payments are 2.5 percent of GDP, and if the government meets those obligations by borrowing and so expanding the debt stock, nominal growth of 3.6 percent implies that the federal government can run a modest deficit in the rest of its budget and still whittle away at the debt-to-GDP ratio. Japan’s experience reinforces the point that high levels of debt can be surprisingly sustainable. The country’s central government debt passed 100 percent of GDP in 2000, and the ratio has since almost doubled, to nearly 200 percent. Yet Japan has not experienced a debt crisis. Instead, interest rates have declined, keeping the cost of servicing the debt at an affordable level. Japan’s track record also disproves the notion that high levels of debt impede vigorous emergency spending. The country’s pandemic stimulus is large, especially relative to the scale of its health challenge. In short, the recent prevalence of low interest rates across the rich world encourages the view that U.S. debt levels will be manageable, even if they expand further. The more central banks embrace quantitative easing, the lower interest rates are likely to remain: the rock-bottom yields on Japan’s government debt reflect the fact that the Bank of Japan has vacuumed up more than a third of it. In this environment of durably low interest rates, governments enter a looking-glass world: by taking on more debt, they can reduce the burden of the debt, since their debt-financed investments offset the debt by boosting GDP. Based on this logic, the age of magic money may usher in expanded federal investments in a wide range of sectors. When investors the world over clamor for U.S. government bonds, why not seize the opportunity? The question is whether Tokyo’s experience—rising debt offset by falling interest rates—anticipates Washington’s future. For the moment, the two countries have one critical feature in common: a central bank that is eagerly engaged in quantitative easing. 
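The debt arithmetic in the preceding paragraphs can be made concrete with the standard debt-dynamics identity, under which the debt-to-GDP ratio falls whenever nominal growth outruns the effective interest rate by more than the primary deficit. A minimal sketch in Python, using the essay's rough figures; the one-percent primary deficit and the ten-year horizon are illustrative assumptions, not figures from the text:

```python
# Debt-to-GDP dynamics: d' = d * (1 + r) / (1 + g) + primary_deficit
# Figures from the essay: debt d ~ 106% of GDP, debt service of 2.5% of GDP
# (which pins down the effective rate r), nominal growth g = 3.6%.
# The 1% primary deficit is an assumption for illustration.

def step(d, r, g, primary_deficit):
    """One year of debt-ratio evolution (all values as fractions of GDP)."""
    return d * (1 + r) / (1 + g) + primary_deficit

d = 1.06           # federal debt held by the public / GDP
r = 0.025 / d      # effective interest rate implied by 2.5%-of-GDP debt service
g = 0.036          # nominal growth: roughly 1.8% real plus 1.8% inflation

for year in range(10):
    d = step(d, r, g, primary_deficit=0.01)

print(round(d, 3))
```

Run over a decade, the ratio drifts from 106 percent down toward roughly 103 percent of GDP despite the ongoing deficit — the "whittling away" the essay describes. Reverse the inequality between r and g and the same identity produces the runaway path the essay goes on to warn about.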
But that eagerness depends on quiescent inflation. Because of a strong tradition of saving, Japan has experienced outright deflation in 13 of the past 25 years, whereas the United States has experienced deflation in only one year over that period. The danger down the road is that the United States will face an unexpected price surge that in turn forces up interest rates faster than nominal GDP, rendering its debt unsustainable. To see how this could work, think back to 1990. That year, the Fed’s favorite measure of inflation, the consumer price index, rose to 5.2 percent after having fallen to 1.6 percent four years earlier—thus proving that inflation reversals do happen. As inflation built, the Fed pushed up borrowing costs; rates on ten-year Treasury bonds went from about seven percent in late 1986 to over nine percent in 1988, and they hovered above eight percent in 1990. If a reversal of that sort occurred today, it could spell disaster. If long-term interest rates rose by two percentage points, the United States would face debt payments worth 4.5 percent of GDP rather than 2.5 percent. The burden of the national debt would hit a record. That would have significant political consequences. In 1990, the unsustainable debt trajectory forced the adoption of a painful deficit-cutting package, causing President George H. W. Bush to renege on his “no new taxes” campaign pledge, arguably costing him the 1992 election. Given today’s political cynicism, it seems unwise to count on a repeat of such self-sacrifice. It is therefore worth recalling the other debt-management tactic that Bush’s administration attempted. By attacking the Fed chair, Alan Greenspan, with whispered slanders and open scolding, Bush’s advisers tried to bully the central bank into cutting interest rates. The way they saw things, lower rates, faster growth, and higher inflation would combine to solve the debt problem. Greenspan stood his ground, and Bush was not reckless enough to get rid of him. 
But if a future president were more desperate, the Fed could be saddled with a leader who prioritized the stability of the national debt over the stability of prices. Considering the Fed’s recent business bailouts, it would be a small step to argue that the central bank also has a duty to protect citizens from budget austerity. Given its undershooting of the inflation target over the past few years, it would be easy to suggest that a bit of overshooting would be harmless. Unfortunately, if not checked fairly quickly, this seductive logic could open the way to a repeat of the 1970s, when U.S. financial mismanagement allowed inflation to reach double digits and the dollar came closer than ever in the postwar period to losing its privileged status. The age of magic money heralds both opportunity and peril. The twin shocks of 2008 and 2020 have unleashed the spending power of rich-world governments, particularly in the United States. They have made it possible to imagine public investments that might speed growth, soften inequality, and tackle environmental challenges. But too much of a good thing could trigger a dollar crisis that would spread worldwide. As U.S. Treasury Secretary John Connally put it to his European counterparts in 1971, “The dollar is our currency but your problem.” Nobody is sure why inflation disappeared or when it might return again. A supply disruption resulting from post-pandemic deglobalization could cause bottlenecks and a price surge; a rebound in the cost of energy, recently at absurd lows, is another plausible trigger. Honest observers will admit that there are too many unknowns to make forecasting dependable. Yet precisely because the future is uncertain and contingent, a different kind of prediction seems safe. If inflation does break out, the choices of a handful of individuals will determine whether finance goes over the precipice. The United States experienced an analogous moment in 1950. 
China had sent 300,000 infantry across the frozen Yalu River, which marked its border with Korea; they swarmed U.S. soldiers sleeping on the frigid ground, stabbing them to death through their sleeping bags. The following month, with the fate of the Cold War as uncertain as it would ever be, U.S. President Harry Truman called Thomas McCabe, the Fed chair, at home and insisted that the interest rate on ten-year bonds stay capped at 2.5 percent. If the Fed failed to buy enough bonds to keep the interest rate at that level, “that is exactly what Mr. Stalin wants,” the president lectured. In a time of escalating war, the government’s borrowing capacity had to be safeguarded. This presented the Fed with the kind of dilemma that it may confront again in the future. On the one hand, the nation was in peril. On the other hand, inflation was accelerating. The Fed had to choose between solving an embattled president’s problem and stabilizing prices. To Truman’s fury, McCabe resolved to put the fight against inflation first; when the president replaced McCabe with William McChesney Martin, a Treasury official Truman expected would be loyal, he was even more shocked to find that his own man defied him. In his first speech after taking office, Martin declared that inflation was “an even more serious threat to the vitality of our country than the more spectacular aggressions of enemies outside our borders.” Price stability should not be sacrificed, even if the president had other priorities. Years later, Truman encountered Martin on a street in New York City. “Traitor,” he said, and then walked off. Before the age of magic money comes to an end, the United States might find itself in need of more such traitors. A Real-Life Experiment in Hyper-Keynesianism
If you are looking for the best nonfiction books for kids ages 6 – 12 in elementary school, I’ve put all my favorite nonfiction book reviews on this page and broken them into categories such as animals, science, sports, famous people, and facts. Best Nonfiction Books for Elementary Age Kids (Ages 6 – 12) Nocturne: Creatures of the Night by Traer Scott PICTURE BOOK Orangutan Houdini by Laurel Neme illustrated by Kathie Kelleher PICTURE BOOK What an interesting picture book based on a real-life story! Fu Manchu, the orangutan, keeps escaping from his enclosure in the zoo. He doesn’t leave the zoo, just hangs out in the trees and always returns when his keeper comes to get him. Fu is one clever orangutan! Written like a story in narrative format, this is an excellent nonfiction picture book. Tuesday Tucks Me In: The Loyal Bond between a Soldier and his Service Dog by Former Captain Luis Carlos Montalvan, USA with Bret Witter, photographs by Dan Dion PICTURE BOOK Man, this book made me tear up right away – it’s powerful to witness the bond between a service dog, Tuesday, and his person, Luis, who experiences post-traumatic stress disorder and other disabilities. Tuesday soothes Luis’ nightmares and helps him balance as he walks down the subway stairs. This picture book follows a typical day in the life of Luis and Tuesday from breakfast to bedtime. The photographs are gorgeous! The text is totally perfect — not too much, just right. I highly recommend this amazing nonfiction book — it will tug at your heartstrings. And if you want to read more about Luis and Tuesday, get his full-length memoir, Until Tuesday: A Wounded Warrior and the Golden Retriever Who Saved Him. Tiger by Suzi Eszterhas PICTURE BOOK I love how this nonfiction picture book is told as a story, following tiger cubs as they grow up in India. It’s an easier way for kids to relate to and retain the information they’re learning. And the photographs are spectacular! 
We see the tigers in their rocky cliff cave, learn that the cubs won’t grow into all their stripes for two years, watch the family move to a new forest den, follow as the cubs learn to hunt, and find ourselves in awe as we see how big the tigers get in two short years. A page of tiger facts finishes out this book. Animal Planet Animals A Visual Encyclopedia by Animal Planet Beautiful photographs and bite-sized chunks of information showcase more than 2,500 animals from the seven major animal groups: mammals, birds, reptiles, amphibians, insects and arachnids, invertebrates, and fish are featured in 1,050 stunning full-color photos, plus dynamic illustrations, maps, and charts. Snakeopedia (Discovery Channel) Gorgeous photos that gross me out and enchant snake lovers fill Snakeopedia. While I might say yuck, this is a terrific snake book filled with amazing photography and fascinating facts about the 12 snake families, the features of different snakes, which are dangerous, and other snaky stuff. I highly recommend this book. If you’re not afraid like me. 🙂 Winnie The True Story of the Bear Who Inspired Winnie-the-Pooh by Sally M. Walker, illustrated by Jonathan D. Voss PICTURE BOOK Hippos Are Huge! by Jonathan London, illustrated by Matthew Truman PICTURE BOOK Excellent writing and illustrations make this one of the best nonfiction animal books: you don’t realize you’re learning so much about hippos because it’s so interesting and well laid out! Bigger text pairs with smaller factual text to give readers maximum learning. Hippos are COOL and DANGEROUS — you’ll find out when you read this book. If an Egg Hatches . . . and Other Animal Predictions by Blake A. Hoena PICTURE BOOK I like this nonfiction book for two reasons. First, the fantastic color photographs. Second, the questions that get the reader to engage fully with the text before they flip the page to read the answer. For example: “Every animal in nature is important. 
Imagine if there weren’t any predators like wolves. What would happen to the populations of elk and deer?” Show Me Dogs My First Picture Encyclopedia by Megan Cooley Peterson PICTURE BOOK Do your kids love animals? This book, and others like it in this well-designed series will entice your kids to devour facts all about the animal they love — like dogs! Hippos Can’t Swim and Other Fun Facts by Laura Lyn DiSiena and Hannah Eliot, illustrated by Pete Oswald PICTURE BOOK My daughter loved this book so much she read all the facts to me throughout her reading of it. It’s in a picture book format with lovely illustrations making it enticing to read. Did you know that . . . . . . zebras are attracted to things that are black-and-white striped – just like they are. If you painted black-and-white stripes on a wall, a zebra would walk toward it. . . . worker ants in a colony don’t sleep all day or all night. Instead, they each take about 250 naps throughout the day, each nap lasting just longer than a minute. Great Migrations by Elizabeth Carney Beautiful photographs of Mali elephants, red crabs, butterflies, jellyfish, zebras, army ants, wildebeests, and sperm whales accompany maps, facts and interesting trivia about the migration and life of each group of animals. Plus, as always, you get the gorgeous photography of National Geographic. Amazing. Feathers: Not Just for Flying by Melissa Stewart, illustrated by Sarah S. Brannen PICTURE BOOK Science Comics Coral Reefs Cities of the Ocean by Maris Wicks (ages 8 – 12) An adorable yellow fish narrates this informative graphic novel about his habitat, coral reefs. It’s all facts though so it’s not the kind of book that most kids (or adults) will want to sit down and read in one sitting. Read it in chunks and you’ll soon be an expert on coral reefs. Science Comics Dinosaurs Fossils and Feathers by MK Reed, Joe Flood Despite the Darwinian leaning, I generally liked this book. 
We learn about the people who made significant contributions to the study of dinosaurs, such as Mary Anning of England, an amateur fossil hunter, and even how dinosaurs got their names. (Did you know there’s a dinosaur named after Hogwarts!?) Readers will enjoy the narrative filled with facts. Human Body Theater: A Nonfiction Revue by Maris Wicks This nonfiction graphic novel ROCKS! It should be required reading for students studying the human body because the information presented this way is so memorable and understandable. I love Skeleton’s narration and the awesomely cute illustrations of every body system from the smallest cell parts to the biggest organs. Strange But True! Our Weird, Wild, Wonderful World DK This is my favorite nonfiction book recently — I literally couldn’t help but read so many of the pages out loud to my kids, they were just so interesting. First the photographs grab your attention — then the headlines — and then the text. This is a GREAT book for your reluctant readers because it’s practically irresistible to read through it. Love it for a gift idea! DNA Detective by Tanya Lloyd Kyi, illustrated by Lil Grump I very much enjoyed this colorful, easy-to-read informational book and its kid-friendly layout. Plus it’s packed full of fascinating facts about the science of DNA, how researchers figured it out, and how they use it in practical applications like solving crimes. Eyewitness Books – Flight Eyewitness non-fiction books for kids are packed with great information, pictures, diagrams, and more – I recommend all their titles. We have several at our house, and AJ and I both enjoy the information. Genius! The Most Astonishing Inventions of All Time by Deborah Kespert A visually appealing graphic layout makes it easy to access the invention information — in fact, it’s downright enticing! Who knew I’d care about the Archimedes Screw and want to read all about it. Or the Elephant Clock — yes, that was a real thing, which was super cool. 
You’ll learn about these early inventions and more modern inventions such as the space rocket. This is a well-done, readable nonfiction book. Time for Kids Robots First of all, I LOVE Time for Kids — and I bet your kids do, too. (Because of TFK’s classroom newsletters.) Robots is such a cool book. First because of the topic. We all are curious about robots and how soon we can get one in our homes, right? And second because of the way TFK presents the material in an easy-to-read, enticing format. Learn about robots used in factories and hospitals, robot toys, robot kits, flying robots, and more! STEM is the future, and this is a great book for your STEM kiddos. I, Fly The Buzz About Flies and How Awesome They Are by Bridget Heos, illustrated by Jennifer Plecas PICTURE BOOK The Incredible Plate Tectonics Comic by Kanani K.M. Lee & Adam Wallenta I highly recommend this well-written and educational comic book! George, a normal skateboarder kid, is also Geo, a superhero who can transport back in time to learn about geology. In this story, he goes back to Pangea, where he learns about plate tectonics firsthand! The story goes back and forth between George and Geo seamlessly. Fantastic! Time for Kids All Access Your Behind the Scenes Pass to the Coolest Things in Sports Your kids are going to LOVE the lift and look pages – they are translucent pages that lift up to reveal another image underneath. Like the page with a downhill skier: lift the top page and you can see her body’s muscles and organs. SO cool. From monster trucks to stadiums that convert from football to ice, this is one of the best nonfiction books that will keep your kids learning and reading. 
Dogsledding and Extreme Sports: A nonfiction companion to Magic Tree House #54 by Mary Pope Osborne and Natalie Pope Boyce, illustrated by Carlo Molinari I learned a lot from this little nonfiction book; it’s packed full of interesting information about many extreme sports such as open water swimming, the Iditarod, and the X Games. Sports Illustrated Kids What Are the Chances? The Wildest Plays in Sports Stats and sports go hand in hand for sports enthusiasts, right? Flip through this dense book of photographs and you’ll see numbers pop out — 277 No-Hitters. At the start of the 2014 season, a total of 277 no-hitters had been thrown in major league baseball history. . . 20 Wins in a Row. Only 4 teams have ever won 20 or more games in a row during an NBA regular season. . . 100 Assists in a Season. When Bobby Orr of the Boston Bruins scored 139 points in 1970–71, . . . The Book of Why: Amazing Sports and Science I don’t have this book but I want to get it – it looks totally cool! Especially for sports- and science-minded kids. Weird Zone: Sports I love books about weird, and I suspect so do your kids. Learn all about the strangest sports in the world. Underwater bike racing? I thought that only applied to basket weaving. Fun! The Boy in Number Four by Kara Kootstra, illustrated by Reagan Thomson I enjoyed this picture book about Bobby Orr’s life as a young boy playing hockey — how hard he worked and how much he loved playing. Sports Illustrated Kids Football Then to Wow! This amazing nonfiction book makes ME, a non-sports fan, get interested in football. The layout and design plus the photographs make me want to devour all the football facts and info. I highly recommend this for any football fan – it’s packed full of information about football back in the day (1930s) and nowadays. Excellent! Malala Yousafzai: Warrior with Words by Karen Leggett Abouraya, illustrated by L. C. Wheatley PICTURE BOOK She’s just a girl in Pakistan, but Malala wants to go to school. 
When she does, she is shot by the Taliban. She becomes an advocate for girls, and boys too, to receive an education. Untamed The Wild Life of Jane Goodall by Anita Silvey, foreword by Jane Goodall This is not your average biography for kids with small font and ugly black-and-white photos. No, it’s so much better! Untamed is an excellent depiction of Jane Goodall’s life with kid-friendly language and kid-appealing layouts of colorful photos. Interesting insets throughout describe tips for kids and information such as sign language. I love the Gombe Family Scrapbook at the end with some of the significant chimps in Jane’s life. I also found it really interesting to learn how this English girl read about Africa as a child and fell in love with it. I am Martin Luther King, Jr. by Brad Meltzer, illustrated by Christopher Eliopoulos PICTURE BOOK This nonfiction biography series for young readers is absolutely fantastic. The latest is this book about Martin Luther King, Jr., whose cartoon illustration will give you a chuckle — since he’s a kid with a mustache. We learn how much it hurt him when a white friend wouldn’t play with him because he was black. We learn how the injustice in the world bothered Martin and that he wanted to do something about it. The book does NOT end with his death but ends on a positive note of standing strong and facing struggles. Wilma Unlimited by Kathleen Krull, illustrated by David Diaz PICTURE BOOK After having polio, Wilma was told she wouldn’t walk again, let alone run. But Wilma was determined, and she worked hard, becoming the first American woman to win three gold medals at the Olympics. Who Says Women Can’t Be Doctors? The Story of Elizabeth Blackwell by Tanya Lee Stone, illustrated by Marjorie Priceman PICTURE BOOK Despite growing up in a time when women were not viewed as equal to men, Elizabeth studied and worked hard to become the first woman doctor. 
She showed the world that women were just as smart and capable as men – and could be doctors. Nelson Mandela by Kadir Nelson PICTURE BOOK Growing up under apartheid in South Africa, Nelson Mandela faced horrible racism and a long time in prison. Despite all of this, his spirit continued to be strong. He eventually realized his dream to improve the country and give equal rights to all people by becoming a strong leader and president of his country. I Am Malala: How One Girl Stood Up for Education and Changed the World by Malala Yousafzai and Patricia McCormick Malala shares how the Taliban shot her in the face when she tried to go to school, just because she was female. She explains how this changed her life. She shares her determination to continue to advocate for her rights and the rights of girls and boys all over the world. Seed by Seed The Legend and Legacy of John “Appleseed” Chapman by Esme Raji Codell, illustrated by Lynne Rae Perkins PICTURE BOOK I’ve read a lot of books about Johnny Appleseed, and this is now my favorite. The illustrations are gorgeous – some textile stitching, some drawings. I love the emphasis on his philosophy, which is below: Use what you have Share what you have Try to make peace where there is war You can reach your destination by taking small steps Girls Who Rocked the World: Heroines from Joan of Arc to Mother Teresa by Michelle Roehm McCann and Amelie Welden, illustrated by David Hahn Students will find growth mindset inspiration in any story in this collection about women who made the most of their lives. The Notorious Benedict Arnold by Steve Sheinkin If only all nonfiction books for children were this engaging and well-written! This reads like a story, a narrative. Thank you, Mr. Sheinkin! Bomb: The Race to Build — and Steal — the World’s Most Dangerous Weapon by Steve Sheinkin Another knock-out nonfiction book from the talented Steve Sheinkin! 
I’m so impressed by how Sheinkin makes this story come ALIVE like it’s an adventure / mystery / thriller and not real life. Well, they do say truth is stranger than fiction. Whoppers: History’s Most Outrageous Lies and Liars by Christine Seifert I read this nonfiction book aloud to my kids — it was SO fun because it prompted great discussion and interaction. They couldn’t believe that people would make up such outrageous lies. Learn these incredible wild whoppers — from people you’ve heard of, like Charles Ponzi, to people you’ve never heard of, like George Psalmanazar, who convinced people he was a native of his made-up island of Formosa. It’s a book best for middle-grade to YA readers. HISTORY, FACTS, QUOTES, AND MORE Treasury of Norse Mythology: Stories of Intrigue, Trickery, Love, and Revenge by Donna Jo Napoli, illustrations by Christina Balit This is a large, kid-friendly collection of Norse myths with colorful illustrations and informative insets explaining more about subjects such as the Berserkers and the Norse diet. Excellent! Unbroken (The Young Adult Adaptation): An Olympian’s Journey from Airman to Castaway to Captive by Laura Hillenbrand Louis Zamperini’s life is almost unbelievable — a hoodlum, an Olympic runner, an airman shot down, and above all, a man with great strength of character (growth mindset) to persevere despite all of life’s challenges. Boys in the Boat (Young Readers Adaptation): The True Story of an American Team’s Epic Journey to Win Gold at the 1936 Olympics by Daniel James Brown It’s hard to imagine overcoming as many obstacles as Joe Rantz (homelessness included), but he is determined to get a college education. He and his crew teammates are also determined to be the best rowers, but they never expected to beat the Germans. This is an exemplary story of grit that will stay with you. 
Percy Jackson’s Greek Gods by Rick Riordan, illustrated by John Rocco My kids can’t stop reading and rereading this enormous volume of Greek myths, retold Riordan style — I’m talking laugh-out-loud style. Remember all the hilarious chapter titles in Riordan’s Percy Jackson books? And the witty, sarcastic voice of Percy? Yup. All here. LEGO Awesome Ideas What Will You Build? Awesome barely begins to describe this book — it’s jam-packed with so many ideas from different themes like Outer Space, Modern Metropolis, the Wild West, Fantasy Land, and The Real World. I just love browsing through the ideas. Be warned: Your kids will want you to order A LOT more Legos for these new projects. Mean Machines Customized Cars The world’s hottest, most impressive, and exciting customized cars by Kane Miller So many kids love cars like these (okay, and many of their dads do as well). This book highlights cool custom cars, their top speeds, their 0–60 mph times, and their horsepower. From an Aston Martin DBS to the Bugatti Veyron, if you have a car lover, he will devour this book. Wacky and Wild! Guinness World Records by Calliope Glass The smallest living horse, a girl with the biggest collection of Hello Kitty items (over 4,000!), and the fastest snowman to run a marathon — all of these wacky facts are fun to read! The Real Princess Diaries by Grace Norwich My daughters and I love this fascinating book. It gives us a glimpse into the lives of a variety of international princesses. From historical princesses like Theodora of the Byzantine Empire to current princesses like Sikhanyiso of Swaziland or Victoria of Sweden, each has her own section including basic facts, cool facts, and big achievements. Special sections on royal pets, royal duties, hairdos, princes, and fashion add extra juicy tidbits for kids to enjoy. National Geographic Why’d They Wear That: Fashion as the Mirror of History by Sarah Albee Once my 13-year-old and I started this book, we were engrossed from front to back. 
Albee writes fantastic chapter titles and headings: (Notice a theme? Nonfiction is getting GOOD, people!) “Caulk like an Egyptian,” “Putting the “Protest” in Protestant,” and “Hazardous Hemlines.” The book is formatted so that you can pick and choose interesting sections, such as Corsets, Dressed to Compress, because the corset photo is so intriguing or an inset of information has such a tantalizing title, like “Why Did Napoleon Always Have His Hand in His Coat?” The Disney Book: A Celebration of the World of Disney (DK) My oldest daughter loves anything Disney and proclaimed that this is the best book ever written. 🙂 While I’m not sure about that, it is a dense, fact-filled tome from the early years to the present day. Time for Kids All Access Your Behind the Scenes Look at the Coolest People, Places, and Things! A mix of entertainment, history, geography, pop culture, and science, this awesome lift-and-peel-the-page book has something for all interests. One of my favorite pages is the cast of The Hobbit with makeup, costumes, and wigs on and without. Learn about pandas, the rainforest, the White House, King Tut, and how money is made — among other things. 365 Days of Wonder: Mr. Browne’s Book of Precepts by R. J. Palacio If you’re like us and love quotes, this is the book for you. Even if you haven’t read the book Wonder, you will still find the quotes chosen here (precepts) meaningful and thought-provoking, from Anne Frank, Martin Luther King Jr., Confucius, Goethe, Sappho — and over 100 readers of Wonder who sent R. J. Palacio their own precepts. Weird But True 3 AJ has the first two Weird But True! non-fiction books and loves them. She told me, “If you get me the third book, I’ll know 900 facts. Because each book has three hundred and I have the first two.” Smart girl. These are super wacky and make great dinner table conversation! 
Children’s Activity Atlas Colorful illustrated maps with flag, animal, and landmark stickers, postcards, and a passport book make this a great interactive book for geography enthusiasts. (I’ll admit, I wanted to steal it from my children and do the stickers myself.) National Geographic Kids Get Outside Guide: All Things Adventure, Exploration, and Fun! Fun activities for kids to do in the backyard, on a road trip, in a park, and more. Filled with amazing photography and designed in a kid-friendly, colorful layout, this book is awesome. We LOVE it! Time for Kids Book of When: 801 Facts Kids Want to Know So when was the Internet invented (and who invented it)? When was popcorn invented? My kids and I love flipping through this book and reading all the cool information in bite-sized chunks that accompany each question. A fun coffee table book for the whole family! National Geographic Kids Ultimate U.S. Road Trip Atlas Another great, eye-catching book from the beloved National Geographic! Each state includes a map, slogan, roadside attractions, and lots of impressive attractions and facts. This could inspire your family’s next road trip or trips. LEGO Harry Potter: Characters of the Magical World by DK Publishing My kids fight over this book — it’s that cool. All the characters get a page describing that minifigure, such as when it was created, by what artist, and the thinking behind the face and outfit. Pretty fascinating stuff to my Harry Potter–addicted kids. Through Time The Olympics From Ancient Greece to the Present Day by Richard Platt, illustrated by Manuela Cappon This book gives your kids background information that no one else in your family will know. It’s fascinating stuff — I just learned from my daughter all about the number of swimming pools needed for an Olympic Games. From Paris 1900 to London 2012, we get a glimpse of each host city as well as information about the Games. 
Once Upon a Dream: From Perrault’s Sleeping Beauty to Disney’s Maleficent This is a dense book for older readers, but if you or your kids are curious about the history of Sleeping Beauty, starting with the story first published by Charles Perrault in 1697 and continuing on with all the adaptations by various illustrators and writers, then this book is for you. Harry Potter Page to Screen: The Complete Filmmaking Journey by Bob McCabe Last year, 9-year-old AJ’s favorite Christmas gift was Harry Potter Film Wizardry, a book she still reads over and over – just this morning, in fact, she was curled up on the couch reading it before school. Even though I haven’t let her see all the movies, I’m going to buy her this newest Harry Potter movie book, Page to Screen. It’s a whopping 531 pages! HUGE, right? This ultimate Harry Potter movie bible gives readers, besides a workout lifting the thing, stories, photographs, memorabilia, cinematic history, and the filmmaking techniques from each of the movies.
I am sharing this post with ANTONIO

As part of the previously mentioned H9832B robot explanation, we must admit that the process of cleaning out the inside of the duck (removing all the cables, the motor, the speaker, and the batteries) was a lot of fun. The process of destroying and stripping down our doll gave a different value to the project; in other words, the idea of building a robot by deconstructing a small doll was fun, and we got plenty of action along with the creation of the project. To clarify what our H9832B project is: we relied on a scheme drawn by our professor, William Turkel, on 10 February 2012. After Antonio and I talked about creating a robot that could be controlled by an Arduino platform, we went to talk to Professor Turkel, who ended up drawing a scheme of what our design could be. The idea was to control the movements of the robot, in this case our doll, the H9832B duck (figure 1, right side of the image), by using a laptop that receives commands over the internet, that is, through Twitter (figure 1, left side of the image). We wanted to control the robot with several motors at the same time, in our case "9 GR Servo" micro servos, and to do that we started out using a Pololu Micro Maestro #1350 servo controller with an external battery. Control of the six motors was done through their connection to the Micro Maestro, an external power supply, and a programming tool called the Pololu Maestro Control Center.

Figure 2: Antonio

After much time spent understanding how to manage and program the servos, we realized that we did not really need the Micro Maestro, because we could manage our H9832B duck with only two servos. What led us to use the Micro Maestro in the first place was that we thought we needed more power to move several motors than the Arduino could supply. In truth, it was a good experience, in the sense that we learned how to move six servos at the same time with different programming systems.
Now that we have more knowledge, we can apply it to future projects. Here you can download the Pololu Maestro Control Center. All the decisions we have made so far are the result of the ongoing work that my colleague Antonio and I have been investing throughout this course. That is, while we were working with the programming systems of the various programs, we were also working on the construction of the robot. As we said before, we decided to use only two servos, which are able to move all parts of the robot. Our first servo moves the two wings, and the second servo is responsible for the movement of the head and beak at the same time. We should also mention that we replaced the original duck's eyes with two red LED lights, to give it a machine look. The process of placing the motors and lights was also fun. We attached the motors with thread so that they fit well inside the duck's body and can rotate freely. Before the final placement of the motors we did many tests to make sure they could move the parts we wanted. Each motor has a different movement; that is, we programmed the angles for careful movements that would not break anything. This work was not easy, particularly the movement of the head, for several reasons. We had to put two LEDs inside the head's shell, one for each eye, which meant looking for just the right spot where they could move freely and where the moving parts would not snag the cables. The problem was that we did not have much space inside the head's shell: it was too small, and it also contained the mechanism that opens and closes the beak. After many attempts, we were able to place the two LEDs inside the head along with the wires that power them. These two wires exit the head while still allowing its rotary motion.
After confirming that the lights worked and that the rotational mechanism moved freely, we jumped to the next step, which was to tie the head to a servo using a thread. We could not tie the thread directly to the servo, because if we did, the servo would not be able to move the head. Instead, we tied the thread to the head, passed it over an existing fixed part, and then attached it to the motor; in other words, we created a pulley system. At this point, after seeing our system working, we started laughing because we were so happy we had succeeded. We had also fixed one wing to a servo with a thread. That part was easier because we had more space inside the duck, so the servo's movement was not difficult. Tying the other wing was the last thing we wanted to do, since that meant closing up the entire body. After quite a bit of hard work and many programming errors, we finally managed to synchronize the different movements of the head, beak, and wings. All this was done by connecting the various wires coming out of the duck to the Electronic Chassis Brick and the Arduino UNO, with the Arduino UNO connected to our laptop. The language we use to program the Arduino UNO from the laptop is called Wiring. Here you can download the software: http://arduino.cc/hu/Main/Software Finally, everything worked perfectly. At this point we were happy, so we went ahead and tied on the other wing and closed up the entire body. Then we started to check everything. Before finalizing, we have to tell you about another important element of our project.
All we are doing with the programming systems is controlling the movements of our robot through a tweet sent to the account we set up at the beginning of the project, @H9832B. The commands are:

– @H9832B $wings$: upon receipt of this command on our Twitter account, the duck flaps its wings three times, as we have programmed.
– @H9832B $head$: upon receipt of this command, the duck moves its head and opens and closes its beak twice, as we have programmed.
– @H9832B $everything$: upon receipt of this command, the duck first moves its head and opens and closes its beak twice, and then flaps its wings three times, again as programmed.

As for the lights, we decided they should always be on, but in a blinking mode. But how can we know who has sent a tweet to @H9832B? Since we do not want to keep our Twitter account open all the time to see who sent the orders mentioned above, we decided to use an LCD screen, which we programmed ourselves. Whenever our H9832B duck receives an order, the LCD shows which user sent the order and what type of order it was. The LCD screen was always going to be part of our project, even before we switched to this one, as mentioned in the previous post. What is interesting here is that we managed to program the LCD screen to display long messages by scrolling them across the screen. Now, back to what we were saying when we started to check everything, that is, the movements of our H9832B duck along with the LCD screen. So… here comes the moment of truth… and the result was disappointing. Our spirits sank and there was an uncomfortable moment of silence; basically, it did not work. Soon I began to think that for every problem there is a solution, and the truth is that having an IT man such as Antonio as my colleague made the problem less severe.
He always has a solution to every problem, so it was good having him as my co-worker. The problem was that the screen did not work well: it was blinking too fast and showing strange symbols, and we did not understand what was happening. We thought that with all the items plugged in at the same time, there was not enough power for everything. We quickly contacted Professor William Turkel to ask for another Arduino UNO, thinking we could perhaps use one Arduino for the H9832B duck and another for the LCD screen. It was a very stressful time of reflection. What could we do now? It was Friday, there were only a few days left before the final delivery, and we could not afford to lose time. But then, of course, Antonio saved our project again. The problem was that our LCD screen was plugged into BUS2 of the Electronic Brick Chassis. BUS2 shares pins with D9, D10, D11, and D12, which was where the robot's wires were plugged in. So we moved the LCD screen from BUS2 to BUS1, which does not involve the pins used by the robot's cables. It was a very intense moment; seconds seemed like minutes. And this time there was a surprise waiting for us in the positive sense: our H9832B duck was moving according to the commands and the LCD screen was showing the messages perfectly. So yes, there was joy again!

In this section, we will cover the more technical aspects of our project. The H9832B Duck is a robot that obeys commands sent from Twitter. To make it work, we had to face some technological and mechanical challenges. The first part of this post will discuss the technological issues. Let's start by introducing the components of the project:

– Liquid Crystal Display
– Arduino UNO
– Arduino Sensor Shield V4 Electronic Chassis Brick

This project has two main modules. The first module deals with sending the tweet from Twitter to the computer.
The second is responsible for executing the command on the Arduino.

TWITTER AND PYTHON

The first step in our project is to send a tweet to the duck. For this reason, we created an account for it on Twitter: @H9832B. Like any user, the @H9832B account follows others, and in our case "following" means that our duck accepts commands from those accounts. Any user can send tweets to the duck, but it only obeys those whom it follows. Following is the pathway by which other users are allowed to command the duck; the accounts the duck follows do not have to follow it back. As long as the duck follows a user, that user can command it. The next step is to get the tweet to the computer. Actually, Twitter does not send anything to the computer; the computer itself searches for new tweets. We achieved this by writing a script in Python. Here is what the script does:

1º Accesses @H9832B's Twitter account
2º Gets its friends (the accounts it follows), including itself (@H9832B)
3º For each friend, gets their last tweet
4º Filters those tweets that are commands (they have a specific syntax)
5º Chooses the first one
6º Sends that command to the serial port (where the Arduino is expected to be)

Usually, signing in to a Twitter account requires the owner to access the Twitter website through a web browser and enter their username and password manually. Can Python do that by itself? Well, of course it cannot. But there is a method that automates this process. First, we went to the Twitter developers website. Then we created an application, giving it a name and a description. Once we created our application, we got the information needed for OAuth (http://oauth.net/). OAuth is an authentication protocol that allows users to approve applications to act on their behalf without sharing their password. Specifically, the information required is: Consumer Key, Consumer Secret, Access Token, and Access Token Secret.
That information is supplied by Twitter in the application itself. After creating our application and getting the relevant information, we installed the python-twitter library on the computer. This library provides a pure Python interface for the Twitter API, so we can perform operations using just Python code. A complete description of this API can be found here. Some of the methods we used are:

– import twitter // imports the python-twitter library
– api = twitter.Api(consumer_key, consumer_secret, access_token_key, access_token_secret) // creates the connection
– api.GetFriends() // gets followings
– api.GetUserTimeline(user.screen_name) // gets a user's timeline

To send a command to the duck, the user must write a tweet with the following format: "… @H9832B … $command$ …". The tweet must mention @H9832B, and the command must be written between "$" signs. Right now, the duck can perform two operations and understands three commands: "wings" to move its wings, "head" to move its head and beak, and finally "everything" to move its wings, head, and beak. The Python script checks for new tweets every 10 seconds. If there is a new tweet, the command (the text between "$") is extracted and sent to the serial port, where the Arduino is expected to be. The information sent to the Arduino has the following format: "@user -> command". The other important part of the Python script is its communication with the Arduino. Talking to an Arduino over a serial interface is pretty trivial in Python: there is a wrapper library called pySerial that works really well and encapsulates access to the serial port. Data transmission over the serial port is done byte by byte; therefore, to send a whole text, it must be sent character by character.
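Before moving on to the serial side, the command filtering (step 4 above) can be sketched as a small helper. This is an illustrative reconstruction, not the authors' actual script; the function name and the regex are our own assumptions about the "specific syntax" the post describes.

```python
import re

def extract_command(tweet_text):
    """Return the command between $...$ if the tweet mentions @H9832B,
    otherwise None. Whether the command is *known* is checked later,
    on the Arduino side."""
    if "@H9832B" not in tweet_text:
        return None
    match = re.search(r"\$(\w+)\$", tweet_text)
    return match.group(1) if match else None
```

In the real script, a helper like this would be applied to the text of each friend's last tweet before anything is written to the serial port.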
The methods we used are:

– import serial // imports the pySerial library
– arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1) // creates the connection with the serial port
– arduino.write(character) // writes a character to the serial port (in other words, sends a character to the Arduino)

ARDUINO AND THE DUCK

When a command is sent to the Arduino, two things can happen: if the command is known, the duck obeys the instruction and the message "@user -> command" is displayed on the screen; if not, the duck does nothing and the screen displays "I don't understand". In addition, the duck blinks periodically. This behavior is possible thanks to all the electronic components involved: two LEDs (for the eyes) and two servo motors inside the duck, one for the wings and one for the head and beak, plus an Arduino and a sensor shield to control those components. A proper program running on the Arduino is also required. The Arduino is programmed in a language called Wiring. The program is written in the Arduino IDE, which you must first install on your computer; when the program is done, it is uploaded to the Arduino and run there. This program manages every component in our project. First of all, we needed to set up every component: pins, positions, and so on. We would like to clarify that we did not connect the components to the Arduino directly; we used the shield instead. Every time we say "connected to the Arduino", we mean "connected to the shield". We have two LEDs connected to two pins, two servo motors connected to two other pins, and the LCD screen connected to BUS1 on the shield. It is very important to check that the pins used by the bus do not overlap with the pins used by the LEDs and the motors. Since the motors move by rotating, we needed to define their positions as angles (in degrees), giving the initial and final angle of each motion.
We also defined special counters for the blinking of the duck's eyes and some constants for the screen: number of rows, number of columns, speed of scrolling, etc. Next, the program starts with the setup() method, where the project configuration is handled:

– We set up pin modes and initial values (HIGH) for turning the LEDs on, and attached the motors to their corresponding pins.
– We created the connection with the serial port (from which the computer sends the commands).
– We initialized the LCD screen: its size (2 rows and 16 columns) and the initial message "Hi, I'm H9832B Duck".

Finally, the loop() method runs over and over again. The loop() is the heart of the program, and its code controls the behavior of the duck and the screen. Specifically:

– It checks for new commands.
– It makes the duck blink when applicable.
– It scrolls the screen.

We wanted messages on the screen to appear on the bottom row, moving from right to left. When a message starts to disappear on the left, it begins to appear on the top row, also moving from right to left. And when it disappears completely on the left, it reappears on the bottom row and the process starts all over again. The LCD library includes its own scroll() method, but it scrolls each row independently, so it did not offer the behavior we needed. Our own screen_scroll_next() function does not actually scroll the screen; it just prints a different message on each step, creating the effect of scrolling. To illustrate: imagine a screen with 1 row and 6 columns and the message "hello". Here is the sequence of messages we would print:

01: _ _ _ _ _ _
02: _ _ _ _ _ h
03: _ _ _ _ h e
04: _ _ _ h e l
05: _ _ h e l l
06: _ h e l l o
07: h e l l o _
08: e l l o _ _
09: l l o _ _ _
10: l o _ _ _ _
11: o _ _ _ _ _
01: _ _ _ _ _ _

The effect is that the message "hello" appears on the right and disappears on the left. Simple but effective! The message "scrolls" one character for each step of the loop.
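The scrolling trick above can be reproduced in a few lines. The post's actual screen_scroll_next() function is written in Wiring on the Arduino; this Python sketch just shows the same sliding-window idea for a single-row screen.

```python
def scroll_frames(message, width):
    """Generate the sequence of display frames for a single-row screen of
    the given width: the message enters on the right, slides left one
    character per step, and exits on the left. After the last frame the
    cycle restarts from the all-blank frame."""
    padded = "_" * width + message + "_" * width
    return [padded[i:i + width] for i in range(len(message) + width)]
```

For the "hello" example (width 6), this produces exactly the 11 frames listed above, after which the sequence repeats.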
The screen always shows the same message until an event makes it switch. If there is no tweet, the screen shows "Hi, I'm H9832B Duck". When a command is received, the message shows the last user and command executed, in the format "@user -> command". If the command is unknown, the message shown is "I don't understand you". Approximately every 5 seconds, the duck blinks. We have a special counter that starts at 0 and increments by 400 (0.4 seconds) on every step of the loop(). When the counter reaches 4800, the duck blinks once; when it reaches 9600, the duck blinks twice; and when it reaches 10000, the duck blinks three times. Then the counter restarts. This sequence creates a non-uniform, natural-looking blinking. We are aware that the red color gives a psycho-killer look to our robot, but that is the way we like it!

Figure 18: servo motor

On every step of the loop, the program checks for new commands, that is, whether new information has arrived on the serial port. Because data transmission over the serial port happens character by character, we coded a specific function that reads the serial port character by character and puts the characters together to form a whole message. The Python script takes care of sending messages in the format "@user -> command", but it is the Wiring program that checks whether the "command" part is known to the duck. To make this possible, we wrote another function, getInstruction(message), to extract the right-hand side of the "->". If the instruction obtained is known, the duck performs the corresponding action; if it is not, the duck does nothing and the screen shows "I don't understand". What happens when the duck understands the instruction? It can perform two actions: (1) move the wings, and (2) move the head and beak. For these, what we really do is write new angle values to the servo motors.
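The message handling just described (reassemble characters into a message, extract the right side of "->", check it against the known commands) can be sketched in Python. The real implementation is in Wiring on the Arduino; the names below mirror the post's getInstruction, but the newline terminator and helper structure are our own assumptions.

```python
KNOWN_COMMANDS = {"wings", "head", "everything"}

def read_message(incoming_chars):
    """Put characters together to form a whole message (serial data
    arrives one character at a time)."""
    chars = []
    for c in incoming_chars:
        if c == "\n":   # assumed end-of-message marker
            break
        chars.append(c)
    return "".join(chars)

def get_instruction(message):
    """Extract the right-hand side of "@user -> command"."""
    if "->" not in message:
        return None
    return message.split("->", 1)[1].strip()

def handle_message(message):
    """Return the text the LCD would show for an incoming message."""
    instruction = get_instruction(message)
    if instruction in KNOWN_COMMANDS:
        return message          # known command: echo "@user -> command"
    return "I don't understand"
```

On the real hardware, a known instruction would also trigger the corresponding servo action before the echo is displayed.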
Each motor has a rest angle (set in the setup() method), and when a command arrives, this value is modified: a small piece of the motor rotates to the new position. The new position is held for the necessary time, and then the original angle is restored. Some movements involve more than one repetition. For example, the duck moves its wings three times, which means the Arduino program switches between the initial and final angle values three times. When the motors move, their little arms move the wings and the head, a process explained above in the second part of the post. Three small projects helped us with our own. Here are the links, in case you are interested:

– Tweet-a-Pot: Twitter Enabled Coffee Pot
– SimpleTweet_01 python
– Send a Tweet to Your Office Door: And let your coworkers know what you're up to
Summer Bridge Issue on Aeronautics
June 26, 2020, Volume 50, Issue 2
The articles in this issue present the scope of progress and possibility in modern aviation. Challenges are being addressed through innovative developments that will support and enhance air travel in the decades to come.

Electrified Aircraft Propulsion
Thursday, June 25, 2020
Authors: John S. Langford and David K. Hall

The past decade has seen the number of electric and hybrid electric cars sold increase from about zero to over 2 million per year, an annualized growth rate of over 60 percent (Hertzke et al. 2019). Is a similar revolution in store for aircraft? The rise of the flygskam (flight shaming) movement has created societal pressure to reduce flights that produce carbon emissions and has raised both popular and regulatory interest in electric propulsion for aircraft. This article reviews the prospects for widespread use of electrified aircraft propulsion (EAP) and concludes that a large-scale conversion, as is currently under way in terrestrial use, is unlikely in aviation. EAP is not a "drop-in" technology that can be easily retrofitted into existing designs. Indeed, its fundamental benefit is that it adds new dimensions to the aircraft design space. By blurring the lines between airframe and propulsion systems, EAP allows highly integrated designs that can be tailored to specific missions and performance objectives, and these designs are unlikely to resemble current aircraft.

Case Study: Electric Retrofit of a Commuter-Class Aircraft

Figure 1 shows what happens when a conventional commuter-class passenger aircraft is converted to operate on batteries. The results, based on a de Havilland Canada DHC-6 Twin Otter, are broadly applicable to this class of turboprop-powered aircraft (e.g., PC-12, Caravan, Q-400, ATR-72). Propeller power is generated by a combination of fuel-burning gas turbines and battery-powered motors.
The degree of hybridization, µ, is the weight of the battery divided by the combined battery + fuel weight. When µ = 0 the aircraft is a conventional turboprop, and when µ = 1 the aircraft is entirely electric. In terms of payload and range capability, electrification always makes the performance worse, because the energy density of current batteries, no better than 250 W-hr/kg (watt-hours per kilogram), is a small fraction of that of liquid hydrocarbon fuels (figure 2). Major increases in battery energy density, for example a doubling to 500 W-hr/kg, improve the results but do not change the fundamental trend of reduced range capability relative to energy-dense hydrocarbon fuel.

Figure 2

The situation improves if one considers energy efficiency, as in the lower graph in figure 1, which shows the ratio of total energy required to the product of range and payload weight, a measure of energy usage normalized per mission. Battery-electric propulsion can achieve only a fraction of the range of the baseline turboprop, but it is considerably more efficient at those ranges. The achievable range can be increased by decreasing µ, but energy efficiency then decreases. The fundamental trade is between the power-conversion efficiency of an electric system, which can be a factor of 2 higher than that of a gas turbine engine, and the energy density of hydrocarbon fuel, which is an order of magnitude higher than that of batteries. The implication is that, for very short ranges, electric propulsion might have an energy efficiency advantage. Energy efficiency is also an excellent proxy for CO2 emissions, which basically scale with the mass of fuel burned. For a battery aircraft, emissions come from the power grid rather than the onboard turbine, and hence the source of grid power matters.
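The order-of-magnitude gap behind this trade can be checked with a few lines of arithmetic. The article supplies only the 250 W-hr/kg battery figure and the rough factor-of-2 efficiency advantage; the fuel energy density and the specific efficiency values below are our own illustrative assumptions.

```python
E_BATTERY = 250.0     # W-hr/kg, current battery energy density (from the article)
E_FUEL = 12000.0      # W-hr/kg, rough figure for liquid hydrocarbon fuel (assumed)
ETA_ELECTRIC = 0.70   # assumed electric power-conversion efficiency
ETA_TURBINE = 0.35    # assumed gas turbine efficiency (half of electric)

def propulsive_energy_per_kg(mu):
    """Usable propulsive energy per kg of (battery + fuel), for degree of
    hybridization mu = battery weight / (battery + fuel weight)."""
    return mu * E_BATTERY * ETA_ELECTRIC + (1 - mu) * E_FUEL * ETA_TURBINE

turboprop = propulsive_energy_per_kg(0.0)  # mu = 0: conventional turboprop
electric = propulsive_energy_per_kg(1.0)   # mu = 1: all-electric
```

Even granting the electric system twice the conversion efficiency, the fuel-burning configuration delivers over twenty times the usable energy per kilogram of stored energy under these assumptions, which is why range collapses as µ approaches 1.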
Emissions in the Northwestern United States, where the grid has a high fraction of hydro power, are lower than in the Midwest, where grid power comes mostly from coal-fired generators. Energy efficiency is not the sole determinant of aircraft operating costs. Direct operating costs (referred to as cash aircraft-related operating costs, or CAROC) are measured in dollars per available seat mile and include fuel, maintenance, crews, insurance, and airport fees. Direct operating costs account for between one-third and two-thirds of aircraft-related operating costs (AROC), which include costs related to acquiring and owning or leasing the aircraft. These vary with the acquisition cost of the aircraft and its use; commercial airlines use their assets many more hours per year than corporate, charter, or private operations. It is generally accepted that to successfully launch a new aircraft type, the CAROC must be at least 15 percent below that of the aircraft it is replacing. Since fuel is typically 10–20 percent of AROC, figure 1 suggests that this will be possible for EAP only at extremely short ranges; hence the interest in trainers or electric vertical takeoff and landing (eVTOL) air taxis with short ranges. All-Electric Aircraft Urban Air Mobility The potential efficiency and cost benefits of electric propulsion at short range have led to a rapid rise in interest and investment in eVTOL for urban air mobility (UAM). For example, the ride-hailing company Uber has developed plans to fly riders to their destination via eVTOL. Building on NASA research, the company envisions four-passenger vehicles with a range of 60 miles, cruise speeds of at least 150 miles per hour, and a battery-electric propulsion architecture (Uber 2016). The latter not only speaks to the CAROC benefits but also aims to address issues of carbon footprint, local emissions, and noise. 
Figure 3 To address the UAM market, Aurora Flight Sciences developed a passenger air vehicle (PAV) prototype (figure 3) to demonstrate the feasibility of an electric propulsion system and autonomous operations. The PAV is a separate-lift-and-cruise configuration: vertical takeoff and landing are achieved with multiple lift rotors, which are then shut off for efficient, wing-borne, propeller-driven forward flight in cruise. This design allows for VTOL operations in an urban environment while maximizing range with a battery energy storage system. The feasibility of UAM missions depends not only on electric propulsion technologies but also on novel vehicle configurations like the PAV to meet challenging new efficiency, emissions, and community noise requirements. Commercial Aircraft At present, it appears that EAP makes economic sense only for aircraft that fly extremely short ranges (50–200 miles), such as general aviation aircraft, especially training aircraft, and eVTOL air taxis. Is there any potential for large aircraft? For commercial transports, the large amount of energy required to move hundreds of passengers hundreds or thousands of miles poses a challenge for battery energy storage. The difference in energy density between batteries and hydrocarbon fuels means the range of an all-electric transport will be significantly reduced relative to an equivalent gas-burning aircraft, as in figure 1 for smaller aircraft. The high efficiency of current large engines—in many cases emitting less CO2 per unit power produced than the grid from which the competing batteries would be charged—further complicates the value proposition of an all-electric airliner (Epstein and O’Flarity 2019). 
Figure 4 The technical challenge of battery-powered transports is illustrated in figure 4, which shows contours of the battery-specific energy (BSE) required to enable an all-electric aircraft with a given payload fraction—the ratio of payload weight to aircraft maximum takeoff weight—and range. Data points for the payload and range capability of existing aircraft are included. Even with generous assumptions about aerodynamic and propulsive efficiency, structural weight, and required reserves, a specific energy over 300 W-hr/kg is required to enable the capability of the 19-passenger DHC-6-400 Twin Otter. Taking into account that this value of BSE includes extra weight of packaging and thermal and safety protections, which can discount the cell-level performance shown in figure 2 by up to half, it becomes clear that battery-powered transports are more than 20 or 30 years away without breakthroughs in battery technology inconsistent with the historical trend. Hybrid Electric Aircraft Rather than attempting to displace aircraft fuel with batteries, there has been growing interest in hybrid electric concepts, i.e., propulsion systems that can draw from energy stored both in batteries and in hydrocarbon fuel. Hybrid electric vehicles in the automotive industry enabled step-change improvements in fuel efficiency and paved the way for the current generation of all-electric vehicles. Could the same model apply to electrified aviation? In answering this question, it is important to point out the differences between automobile and aircraft propulsion. First, terrestrial vehicle energy requirements are much less sensitive to weight than aircraft. Second, and perhaps more important, hybrid and all-electric cars enjoy energy savings due to regenerative braking. 
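A rough way to see why these contours are so demanding is the simple electric-range relation R = e* · η · (L/D) · (m_batt/MTOW) / g, solved for the required specific energy e*. The mission numbers below are our own illustrative assumptions for a commuter-class aircraft, not values taken from figure 4.

```python
G = 9.81  # m/s^2

def required_bse_whkg(range_m, lift_to_drag, efficiency, battery_fraction):
    """Pack-level battery specific energy (W-hr/kg) needed to fly range_m,
    from R = e* * eta * (L/D) * (m_batt / MTOW) / g solved for e*."""
    e_star = range_m * G / (efficiency * lift_to_drag * battery_fraction)
    return e_star / 3600.0  # convert J/kg to W-hr/kg

# Illustrative commuter mission: 400 km range, L/D = 15, powertrain chain
# efficiency 0.85, and 30 percent of takeoff weight devoted to batteries.
bse = required_bse_whkg(400e3, 15.0, 0.85, 0.30)
```

With these assumptions the required pack-level specific energy comes out near 285 W-hr/kg, already above today's ~250 W-hr/kg cells even before packaging and safety penalties, which is consistent with the article's conclusion that meaningful transport missions sit well beyond current battery technology.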
For aircraft, there is little opportunity for analogous regenerative deceleration: the energy recovery potential of an aircraft cruising at high altitude is much smaller than the energy used to overcome irreversible drag over the course of a commercial transport mission. Further, aircraft already practice regeneration without an electric system: during descent, the engine power is reduced and the glide slope extends the range by using the aircraft potential energy to generate extra thrust. Strategies for hybrid electric aircraft are thus centered on improving gas turbine performance through integration of a supplemental electrical power source. Gas turbines have long been preferred for transport aircraft propulsion because of their high efficiency, power-to-weight ratio, high-altitude capability, and low emissions. One drawback is that they are most efficient at their highest power, and designing the engine to meet peak power requirements during climb reduces the maximum achievable efficiency during cruise. Hybrid electric systems have the potential to remove this physical constraint by augmenting the power of the turbine during high-power conditions, enabling the engine to be optimized for peak performance during cruise, where most of the fuel is burned, at the cost of a minimal battery electric system. The inclusion of a high-power electric system for propulsion also opens up possibilities for new technologies like electric taxiing and vehicle system-level energy management, based on close integration between propulsion and aircraft electrical systems. A recent study by United Technologies suggests that such a hybrid system for a future single-aisle transport could reduce fuel burn by 4.2 percent and energy consumption by 0.3 percent relative to an advanced-technology turbofan (Lents and Hardin 2019). Another claims a hybrid reengine of a regional turboprop could reduce cruise fuel consumption by 25 percent, albeit at reduced range (Bertrand et al. 2019). 
The difference in results highlights the importance of both mission (electrification may have a greater benefit at short range) and the baseline for comparison; when considering new aircraft designs, one must be careful to compare equivalent levels of technology and equivalent mission requirements between electrified and conventionally powered aircraft. Distributed Electric Propulsion Discussion to this point has focused on EAP concepts with some level of battery energy storage. Another potential benefit of electrification, independent of energy storage medium, is the decoupling of mechanical power generation and thrust generation processes. Doing so would continue a decades-long trend in aircraft engine design: the shift from turbojets to turbofans improved efficiency by using a larger mass of lower-velocity bypass flow to generate thrust, and the recent introduction of geared turbofans relaxed the speed constraint on the fan and turbine, allowing both to be designed at their most efficient speeds. The concept of distributed electric propulsion (DEP) takes this decoupling one step further, by introducing flexibility in the number and arrangements of propulsors (propellers or fans). This benefits the propulsors, which follow a cube-squared scaling: the weight of a propulsor is approximately proportional to its diameter cubed and, for fixed jet velocity, the thrust is approximately proportional to the diameter squared. A distributed system with many propulsors will thus weigh less than a single propulsor producing the same thrust at the same jet velocity. Alternatively, distributed propulsors provide for a larger total fan area and thus lower jet velocity and higher efficiency than a single propulsor with the same total weight, although in this case the benefit also trades against an increase in nacelle drag, which scales with fan area. 
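The cube-squared scaling argument can be written out directly. For fixed jet velocity, thrust scales with diameter squared and weight with diameter cubed, so splitting one propulsor into n equal-thrust units reduces total propulsor weight by a factor of the square root of n. The function below is a sketch of that relation, not a sizing tool.

```python
import math

def distributed_weight_ratio(n):
    """Total weight of n propulsors divided by the weight of a single
    propulsor producing the same total thrust at the same jet velocity.
    Each of the n units needs thrust T/n, hence diameter D/sqrt(n),
    hence weight ~ (D/sqrt(n))^3; summing n of them gives 1/sqrt(n)."""
    d_each = 1.0 / math.sqrt(n)   # diameter of each unit, relative to D
    return n * d_each ** 3
```

Four propulsors halve the propulsor weight; nine cut it to a third. In practice this idealized saving trades against the added weight and losses of the electrical distribution system, as the article notes.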
Electrification allows distributed propulsors to be driven by a single gas turbine core, which, experience has shown, will be more efficient than multiple smaller cores providing the same net power (Lord et al. 2015). These benefits must be traded against the added weight, transmission losses, and complexity of an electric power distribution system, which have to be evaluated at the overall vehicle performance level.

A Turboelectric VTOL Model

A notable recent example of a DEP design is the DARPA XV-24A concept developed by Aurora Flight Sciences. The novel tilt-wing/tilt-canard vehicle configuration (figure 5) arose in response to challenging requirements for efficient vertical lift capability and high-speed cruise at 400 knots, which are not achievable with conventional rotorcraft or tilt-rotor designs. DEP was a key enabling technology that allowed large propulsor disk area for efficient vertical takeoff without a large-diameter rotor and associated aerodynamic disadvantage at high speed.

Figure 5

The envisioned propulsion system had a turboelectric powerplant: electrical power for motor-driven fans was developed by electric generators coupled to a gas turbine engine with no batteries. This architecture leverages the configuration benefits of DEP while achieving the payload and range potential with energy-dense hydrocarbon fuel. The concept was revolutionary, but it was unable to achieve the needed full power capability because of component limitations and was cancelled by DARPA before it could demonstrate the benefits of DEP at full scale.

Approaches to Propulsion-Airframe Integration

Beyond improving the efficiency or weight of the propulsion system, DEP opens up potential benefits in overall vehicle performance through propulsion-airframe integration.
Rather than designing the engine as an isolated, thrust-producing system, integrated design of the combined propulsion-aircraft system may unlock aerodynamic efficiency improvements for both, because DEP provides the designer with the flexibility to distribute and integrate propulsors in the vehicle to an extent not possible with conventional propulsion.

Boundary Layer Ingestion

One such strategy is propulsion with boundary layer ingestion (BLI). The basic principle of BLI is for the engine to produce thrust by ingesting and accelerating air in the so-called boundary layer near the surface of the vehicle. Friction reduces the velocity of this flow in the frame of reference of the vehicle, which means the same thrust can be produced using less power. Wind tunnel experiments carried out by MIT, Aurora Flight Sciences, and Pratt & Whitney at the NASA Langley Research Center showed a power savings of 10 percent by ingesting approximately 17 percent of the boundary layer of an advanced twin BLI-engine vehicle concept (Uranga et al. 2017). Analysis shows that the benefit increases with the amount of boundary layer ingested, and DEP provides a means to do this.

Figure 6

Figure 6 shows two NASA concepts with BLI enabled by DEP. One (top image) is a conventional tube-and-wing aircraft configuration with a turboelectric propulsion system and a motor-driven BLI fan at the back of the fuselage. The other (middle image) is a hybrid wing-body concept with distributed BLI propulsors ingesting a large fraction of the vehicle’s upper surface boundary layer.

Blown Lift

Another DEP-enabled propulsion-airframe integration strategy is blown lift, which positions a wing and propulsor relative to each other such that the pressure field induced on the wing and the deflection of the propulsor jet increase the overall lift beyond that of the isolated airfoil.
The benefits of short landing and takeoff are well known and implemented in aircraft such as the DHC-6 Twin Otter, used in the hybrid payload-range analysis above, but these have been limited by the amount of blowing that can be achieved with only two or four propulsors. DEP opens the door for super-short takeoff and landing with smaller propulsors distributed along a larger segment of the wing’s span, enabling short field performance that begins to make it competitive with more technically complex eVTOL concepts. Alternatively, the blown wing benefit can enable typical field lengths with less wing area and higher wing aspect ratio, leading to improved aerodynamic efficiency during cruise. This is the idea behind the NASA X-57 Maxwell concept (bottom image in figure 6), which claims reduction in cruise energy consumption rate by a factor of 4.8 relative to an unmodified, conventionally powered aircraft (Borer et al. 2016).

Conclusion and Outlook

The potential benefit of electrified aircraft propulsion is the flexibility it brings to the aircraft design space. This benefit comes at the cost of components with increased weight and transmission inefficiencies, but there appear to be a variety of aircraft missions and vehicles that can leverage electrification in different ways. The conclusions drawn here do not differ materially from those of a 2016 consensus study of the National Academies of Sciences, Engineering, and Medicine, which reported that battery-powered propulsion is well suited only to small vehicles with short range, and that distributed propulsion and boundary layer ingestion could yield significant performance benefits to commercial transport aircraft (NASEM 2016, pp. 51–70). The potential for hybrid systems to improve the performance of gas turbine engines and propulsion-airframe integration effects such as blown lift also warrants further investigation.
Other options for electrified propulsion, such as solar power or fuel cells, are beyond the scope of the discussion here, but they similarly offer benefits for unconventional vehicles or missions by introducing new options to the vehicle design space. Challenges to implementing the vision for EAP remain. Urban air mobility may be feasible with current technology, but only just, and advances in the technology to improve the capability of UAM vehicles are necessary before transport-class hybrid concepts become competitive. Smaller concepts will require tens or hundreds of kilowatts, and larger transports could require a megawatt or more of electric power capability. Electric machines of these scales exist today for various ground-based applications, but new designs are needed to meet the stringent weight, efficiency, and reliability requirements for aviation. All in all, there is reason to be cautiously optimistic about the future of EAP. The convergence of new technologies and new vehicles will allow new modes of mobility for the traveling public. The required innovations at the interface of aircraft, engine, and high-power electronics technologies will necessitate new interactions between established industries and generate opportunities for new entrants in an emerging industry. Finally, and not least, new technologies will inspire the next generation of students, researchers, scientists, and engineers in their drive to address the challenge of sustainability for aviation’s second century.

References

Bertrand P, Spierling T, Lents C. 2019. Parallel hybrid propulsion system for a regional turboprop: Conceptual design and benefits analysis. AIAA Propulsion and Energy Forum, Aug 19–22, Indianapolis: AIAA 2019-4466.

Borer NK, Patterson MD, Viken JK, Moore MD, Clarke S, Redifer ME, Christie RJ, Stoll AM, Dubois A, Bevirt J, and 3 others. 2016. Design and performance of the NASA SCEPTOR distributed electric propulsion flight demonstrator.
16th AIAA Aviation Technology, Integration, and Operations Conf, Jun 13–17, Washington: AIAA 2016-3920.

Epstein AH, O’Flarity SM. 2019. Considerations for reducing aviation’s CO2 with aircraft electric propulsion. Journal of Propulsion and Power 35(3):572–82.

Hertzke P, Müller N, Schaufuss P, Schenk S, Wu T. 2019. Expanding electric-vehicle adoption despite early growing pains. McKinsey & Company: Automotive & Assembly, August.

Lents C, Hardin L. 2019. Fuel burn and energy consumption reductions of a single-aisle class parallel hybrid propulsion system. AIAA Propulsion and Energy Forum, Aug 19–22, Indianapolis: AIAA 2019-4396.

Lord WK, Suciu GL, Hassel KL, Chandler JM. 2015. Engine architecture for high efficiency at small core size. 53rd AIAA Aerospace Sciences Mtg, Jan 5–9, Kissimmee FL: AIAA 2015-0071.

NASEM [National Academies of Sciences, Engineering, and Medicine]. 2016. Commercial Aircraft Propulsion and Energy Systems Research: Reducing Global Carbon Emissions. Washington: National Academies Press.

Uber. 2016. Fast-forwarding to a future of on-demand urban air transportation. San Francisco: Uber Elevate.

Uranga A, Drela M, Greitzer EM, Hall DK, Titchener NA, Lieu MK, Siu NM, Casses C, Huang AC, Gatlin GM, and 1 other. 2017. Boundary layer ingestion benefit of the D8 transport aircraft. AIAA Journal 55(11):3693–708.

Zu C, Li H. 2011. Thermodynamic analysis on energy density of batteries. Energy & Environmental Science 4(8):2614–24.

About the Author: John Langford (NAE) was founder and CEO and David Hall leads the Propulsion Group, both at Aurora Flight Sciences.
This post is detailed, but it gets down to the nitty gritty of a case for the midrashic creation of the Jesus figure in the gospels. Nanine Charbonnel cites four intriguing instances.

A. I Am / I Am He / I and He … and we are all together

Many of us are familiar with Jesus declaring “I am” (ἐγώ εἰμι), which echoes Yahweh’s self-declaration in the Pentateuch; less familiar are the moments when Jesus says, “I am he” (ἐγώ εἰμι αὐτός – e.g. Luke 24:39), a sentence that echoes the second part of Isaiah (אֲנִי-הוּא = ’ănî hū = I [am] he; LXX = ἐγώ εἰμι = I am) and the liturgies of the Jewish people. (I’ll simplify the Hebrew transliteration in this post to “ani hu” = “I he”.) These self-identifications bring us back to Exodus 3:14, where God reveals himself to Moses at the burning bush: “I am he who is”, which in the Greek Septuagint is ἐγώ εἰμι ὁ ὤν.

But we need to look again at those words [ani hu] in Deutero-Isaiah. In Isaiah 41:4; 43:10, 13; 46:4; 48:12; 52:6 we read God declaring:

I am he [ani hu] (literally “I he”) אֲנִ֣י ה֔וּא

We will see that this expression, “I he”, is related to the festival of Tabernacles or Sukkoth. But first, we note that during New Testament times, at the Feast of Tabernacles or Tents, worshippers walked around the altar each day singing “O Yahweh save us now, O Yahweh make us prosper now”, a line from Psalm 118:25, whose word-by-word gloss runs: [we pray / beseech you] — save us — now; [we pray / beseech you] — prosper us — now.

Now in rabbinic literature, in Mishnah Sukkah 4:5, we find that another version of this liturgical sentence was said to be used during the temple ceremony:

Each day they would circle the altar one time and say: “Lord, please save us. Lord, please grant us success” (Psalms 118:25). Rabbi Yehuda says that they would say: Ani waho, please save us. And on that day, the seventh day of Sukkot, they would circle the altar seven times.
A word-by-word gloss of Rabbi Yehuda’s formula:

- ani — “I” (Hebrew); confusingly, ana in Aramaic also means “I”, and by hearing the original Hebrew ana as the Aramaic ana, the transformation to the Hebrew “I” follows.
- waho — [taken to be a substitute for the divine name by some scholars – see Baumgarten below]
- hoshia na — “save us”

Both ani and waho may be considered “flexible”, as I’ll try to explain.

- ani in Hebrew means “I”
- ana in Hebrew means something like “we pray”, as above

Aramaic was the relevant common language in New Testament times, however, and it’s here where the fun starts.

- ana in Aramaic means “I”

So we can see how the Hebrew “we pray” can become the Aramaic “I”. If waho, והו, began as a substitute for the divine name, it could when pronounced easily become והוא, wahoû, “and he” — qui peut être une manière de dire “ani wahoû”, “moi et lui”. Translated: which can be a way of saying “me and him”. (The “wa” = “and”.)

Not cited by NC but in support of NC here, Joseph Baumgarten in an article for The Jewish Quarterly Review writes,

Mishnah Sukkah 4.5 preserves a vivid description of the willow ceremonies in the Temple during the Sukkot festival. Branches of willows were placed around the altar, the shofar was sounded, and a festive circuit was made every day around the altar. The liturgical refrain accompanying the procession is variously described. One version has it as consisting of the prayer found in Ps 118:25, אנא ה׳ הושיעה נא, אנא ה׳ הצליחה נא, “We beseech you, O Lord, save us! We beseech you, O Lord, prosper us.” A tradition in the name of R. Judah, however, records the opening words as follows: אני והו הושיעה נא. The meaning of this enigmatic formula has occasioned much discussion among both ancient and modern commentators. In the Palestinian Talmud the first two words in the formula were read אני והוא and were taken to suggest that the salvation of Israel was also the salvation of God. (Baumgarten, Divine Name and M. Sukkah 4:5, p. 1.
My highlighting.)

The same idea is brought out by NC in her quotation of Jean Massonnet. I translate the key point concerning the “I and he” or “me and him”:

This may be a way of closely associating the people with their God on an occasion when the Israelites might surround the altar; it was a great moment of the feast […] In a veiled form, one audaciously asked for salvation for the good of the people and of God, as if God – so to speak – was in distress with his people. (Massonnet, Aux sources du christianisme…, p. 269, cited by NC, p. 317. My highlighting.)

NC adds (again translating; the emphasis in the last sentence is ours):

He adds: “the idea that God accompanies his people in distress is […] ancient and widespread”; see Isaiah 63:9: “in all their distress it is distress for him”. On personal pronouns see Pierre Bonnard, L’Évangile selon saint Matthieu, p. 64, note.

Finally, one point I failed to mention earlier: recall our earlier discussions of the importance of gematria. In that context it is not insignificant that “ana YHWH” has the same numerical value as “ani waho”.

B. Dabar, a Word in Silence

The Hebrew word for “word” can equally mean “act”. I quote references from a couple of old authorities:

דָּבָר dâbâr . . . Strong’s definitions: a word; by implication, a matter (as spoken of) or thing; adverbially, a cause:—act, advice, affair, answer, any such (thing), . . . .

Theological Wordbook of the Old Testament: dābār. Word, speaking, speech, thing, anything, …, commandment, matter, act, event, history, account, business, cause, reason. The dābār is sometimes what is done and sometimes a report of what is done. So, often in Chr, one reads of the acts (dibrê) of a king which are written in a certain book (dibrê). “Now the acts of David the king … are written in the book of Samuel the seer, and in the book of Nathan the prophet, and in the book of Gad the seer.” In the KJV of II Chr 33:18 acts, words, spake and book are all some form of dābar/dābār.
And in the next verse, sayings is added to this list! The Hebrew name for Chronicles is “the book of the words (acts) of the times” (sēper dibrê hayyāmîm). Here “words (acts) of the times” is equal to “history” — “annals.”

[In addition, the word of the Lord is personified in such passages as: “The LORD sends his message against Jacob, and it falls on Israel” (Isa 9:8 [H 7]); “He sent his word and healed them” (Ps 107:20); “He sends his command to the earth” (Ps 147:15). Admittedly, because of the figure it appears as if the word of God had a divine existence apart from God, but Gerleman rightly calls into question the almost universal interpretation that sees the word in these passages as a Hypostasis, a kind of mythologizing. Gerleman suggests that this usage is nothing more than the normal tendency to enliven and personify abstractions. Thus human emotions and attributes are also treated as having an independent existence: wickedness, perversity, anxiety, hope, anger, goodness and truth (Ps 85:11f.; 107:42; Job 5:16; 11:14; 19:10) ….]

(TWOT, p. 399. Of course, we are arguing, contrary to the thrust of the last paragraph, that certain ancients could well have thought differently.)

Try reading the prologue to the Gospel of John with this understanding in mind. The evangelist was playing with the Hebrew text of the creation account in Genesis 1. He was toying with the two levels of meaning of dabar, act and speech. In Genesis the first act is the creation of light. In John 1 we read the second creation of light, a new light to be named “Salvation”, but a light that not all have been able to see, a light that is found in the incarnation of the word/act itself. Compare Sirach 42:15: “In the words of the Lord are his works.”

Understood as an act, the word can even be silence, since silence is also an act. Ignatius wrote to the Magnesians, 8:2 . . .
there is one God who manifested himself through Jesus Christ, his Son, who is his eternal Word, who came not forth from Silence (though some manuscripts omit “not”) NC draws upon Roland Tournaire (L’intuition existentielle. Parménide, Isaïe et le midrash protochrétien) who interprets the word, speech, dabar, as existing beyond time, in eternity rather than temporally, and here existing as the Son in the understanding of our biblical authors John and Paul. Jesus, Tournaire says, brings to light, into appearance for us, this reality of the word, of Yahweh, who himself never leaves the silence of eternity. Translating, This is the first idea of the proto-christian midrash: the Son does not manifest himself by his word, but by the interpretation that humans imagine to make him speak: parables and speeches are a language adapted to the earthly situation of adam-YHWH. They resonate in human ears because it was designed by human intelligence. . . . We are there at the heart of the proto-Christian doctrine readable in John and in the most archaic passages of the so-called Pauline epistles. What prevails, in such a doctrine, are not the sayings of the teaching, but the fulfillment by the elevation of the existential idea, which Jesus (‘adam and YHWH) manifests thanks to the certainties of the midrash. (NC, 318, citing Tournaire, pp. 254-255) What I am reminded of here is the curious way some of the evangelists play with silence at critical moments in the narrative of Jesus. Silence is a significant motif found also in the book of Revelation and in a number of “gnostic” texts. There is the silence before Pilate that is broken by his declaration of “I am”. Wrede identified the silences in the Gospel of Mark as relating to “the messianic secret”. Is there more to be said about them, though? 
The Gospel of John is usually understood to be a polar opposite of the Gospel of Mark in so many respects, but I have sometimes wondered if there is rather an underlying unity of the two gospels hidden beneath parables. In Mark we read that Jesus always spoke in a parable and also that he avoided declaring his identity in public; in John we read of Jesus always speaking in parables (though often the parables are only recognized as such by the reader and not the other characters in the narrative) and declaring his identity in public. Yet in both, Jesus remains hidden. Parables and silence both hide his true identity. The effect of the parables in John is the same as the effect of the silence in Mark. Further, in the Gospel of Mark, we read that the teachings of Jesus are astonishing but we are not told what they are. Only that people are astonished. And he speaks in parables to hide his meaning from the public. In John the Word incarnate is equally hidden but through different narrative techniques. In both we read about a “great teacher” or the Word himself without a teaching except about himself and his identity. (Later, Matthew and Luke, not quite getting this point, flesh out his life with teachings to imitate or surpass the teachings of Moses.) Does the point made by Roland Tournaire and NC explain why Jesus comes to speak in a way that cannot be understood, to speak in a way that hides his true identity except to those to whom he reveals himself? I suspect so. It makes sense, also, of the play with the motif of silence in the Gospel of Mark and, perhaps, in extra-canonical texts. (And one more query: is there any relation of this concept to Jesus choosing to write on the ground rather than speak in John 8?)

C.
Good News and Flesh — the same Hebrew root

Up to this point I have understood NC to have spoken of a common Greek root behind the words for tent or tabernacle and for body, but my understanding is that the word for tent (skenos) is used metaphorically for the body. I might have misunderstood NC, writing in French, or maybe there is something about the Greek that I am not aware of. But the point is that the same word can equally mean either tent or body. Of equal interest, surely, is that the Hebrew root BSR can be read as either “flesh” or “good news” or “good word”.

This root and its derivative occur thirty times in the OT. Sixteen of these are in Samuel-Kings and seven are in Isaiah. The root is a common one in Semitic, being found in Akkadian, Arabic, Ugaritic, Ethiopic, etc. The root meaning is “to bring news, especially pertaining to military encounters.” Normally this is good news . . . In the historical literature, the occurrences of bāśar cluster around two events: the death of Saul (I Sam 31:9; II Sam 1:20; 4:10), and the defeat and death of Absalom (II Sam 18:19f.) Although David received them differently, both were felt by the messenger to be good news. This concept of the messenger fresh from the field of battle is at the heart of the more theologically pregnant usages in Isaiah and the Psalms. Here it is the Lord who is victorious over his enemies. By virtue of this success, he now comes to deliver the captives (Ps 68:11 [H 12]; Isa 61:1). The watchman waits eagerly for the messenger (Isa 52:7; cf. II Sam 18:25f.) who will bring this good news. At first, only Zion knows the truth (Isa 40:9; 41:27), but eventually all nations will tell the story (Isa 60:6). The reality of this concept is only finally met in Christ (Lk 4:16-21; I Cor 15:54-56; Col 1:5, 6; 2:13-15).

bāśār. Flesh (rarely skin, kin, body) . . . This word occurs 273 times in the OT. One hundred fifty-three of these are found in the Pentateuch. . . .
In Hebrew the word refers basically to animal musculature, but by extension it can mean the human body, blood relations, mankind, living things, life itself and created life as opposed to divine life. bāśār occurs with its basic meaning very frequently, especially in the Pentateuch, in literature concerning sacrificial practices (e.g. Lev 7:17) . . . The common paralleling with … “bone” to convey the idea of “body” denotes the central meaning of the word clearly (cf. Job 2:5, etc.). But bāśār can be extended to mean “body” even without any reference to bones (Num 8:7; II Kgs 4:34; Eccl 2:3, etc.). As such it refers simply to the external form of a person. . . . If “body” can refer to man, it can also refer to mankind (Isa 66:16, 24, etc.) and even further to all living things (Gen 6:19, etc.). . . .

In Isaiah 61:1 we read of the good news of the coming messiah:

The Spirit of the Lord GOD is upon me, Because the LORD anointed me To bring good news — lə·ḇaś·śêr לְבַשֵּׂ֣ר

Similarly in Isaiah 41:27:

And to Jerusalem, ‘I will give a messenger of good news.’ — mə·ḇaś·śêr מְבַשֵּׂ֥ר

Again in Isaiah 52:7:

How delightful on the mountains Are the feet of one who brings good news — mə·ḇaś·śêr מְבַשֵּׂ֗ר

Behold, on the mountains, the feet of him who brings good news [mə·ḇaś·śêr מְבַשֵּׂר֙], Who announces peace!

Maurice Mergui, whose work (Paul À Patras) NC uses in presenting this point, wants us to notice that it is the spirit that is behind this announcement (Isa. 61:1) of the coming “word”, and that in this messianic promise we can see how close we are to the word, the davar (dabar), becoming flesh in its fulfilment. Mergui sees significance in Paul’s frequent contrast between flesh and spirit, admonishing his Corinthians that they are too “flesh” (like rebellious Israelites who complained over being fed the same flesh all the time) and need to be more of the “spirit”.

D.
Even mathematics proves the Word was God

The Gospel of John displays many marks of interest in playing with numbers. A source not found in NC’s book but a must-read for anyone interested in some of the detail is Numerical Literary Techniques in John: The Fourth Evangelist’s Use of Numbers of Words and Syllables by M. J. J. Menken. Several shorter works have focussed on the significance of the catch of 153 fish in chapter 21. As for the matching of the sacred name YHWH with the dabar (= word) — both total 26. We may assume that there would be Jewish exegetes of the day who understood that such a match had real significance. The author of the Gospel of John could say he had “proof” that the Word was God. For details of the 26 link see

- Garrigues et Sentiers. “Déchiffrons les lettres hébraïques…” Accessed April 14, 2021. http://www.garriguesetsentiers.org/article-3396135.html.
- ———. “Qui et l’Agneau de Dieu incarné.” Accessed April 14, 2021. http://www.garriguesetsentiers.org/article-4161983.html.

Both are part of NC’s discussion.

NC concludes this portion of her work by reminding us of what we are reading in the gospels:

- 1st meaning: the oracle is fulfilled in the storytime; there we find God’s promises fulfilled;
- 2nd meaning: but it is by the midrashic character of that narrative that we find that fulfilment of the spirit of promise;
- 3rd meaning: the story is about a figure who is the Name of God personified;
- 4th meaning: that character has been constructed with the word and with a name that means “YHWH saves”, and whose name by gematria (numerical value) is equivalent to the sacred name represented by the unpronounceable Tetragrammaton.

The last three posts have examined the words behind the Word becoming flesh and their Jewish contexts. Next up is a similar in-depth study of how that Name became flesh.

Charbonnel, Nanine. Jésus-Christ, Sublime Figure de Papier. Paris: Berg International éditeurs, 2017.
Mergui, Maurice. Paul À Patras: Une Approche Midrashique Du Paulinisme. Objectif Transmission, 2015.

Harris, R. Laird, Gleason L. Archer, and Bruce K. Waltke, eds. Theological Wordbook of the Old Testament. Vol. 1. Chicago: Moody Press, 1980.
On my trip to Hong Kong in December 2005, the taxi could go no further beyond the edge of Causeway Bay because of protests by South Korean farmers. I walked the rest of the way to my hotel, witnessing rage and ferocious outbursts. Livelihoods were on the line. The farmers’ plight related to the World Trade Organization (WTO) agreement that would increase rice imports by South Korea from 4 percent to 8 percent in the next decade without additional compensation to the farmers. Bold characters on protesters’ banners read, “Globalization kills farmers.” Through the years, demonstrators who gather at the WTO meetings in various cities have protested their grievances against globalization in dramatic gestures and scorching accusations. The issues are diverse: protestations of the shift of jobs to low-wage countries; condemnation of environmental degradation; censure against fast-food restaurants for spreading unhealthy lifestyles; concern over the loss of local cultures; assertion of threats to national sovereignty; and moral indignation about export subsidies enjoyed by developed countries that depress both the price and vitality of agricultural sectors in developing regions. An overarching vision of the issues of globalization can be seen in Caritas in Veritate, the 2009 encyclical by Pope Benedict XVI, which declares that economic activities are inherently human with profound moral consequences for people’s well-being, and should entail justice for those who are most vulnerable. 
Charity, the encyclical states, “gives real substance to the personal relationship with God and with neighbour; it is the principle not only of micro-relationships (with friends, with family members or within small groups), but also of macro-relationships (social, economic and political ones).” As one of the key principles of Catholic social teaching, the notion of the common good, which is given explicit attention in Caritas in Veritate, stipulates our responsibility to contribute to the whole society. The rectitude of any economic activity must be assessed by the extent to which it advances or threatens our life together as a community. In this current encyclical, Pope Benedict XVI sets a high bar, noting that our global economic interactions should not only bring us into close proximity as neighbors but also bring us together in solidarity as brothers and sisters. It is difficult to imagine any response to Caritas in Veritate that does not embed a healthy dose of skepticism about whether globalization can deliver on the vision of the common good. Can free markets extending across the world and integrating all forms of cross-border trading activities — an impersonal force motivated by financial gains — truly consider and advance the welfare of all people? Is this a call from the wilderness so far from the realities of our economic life as to have no resonance, and thus no enduring power to grip our hearts and motivate our actions? Or is this a call that recognizes the potential of globalization that is not only reachable, but is the raison d’être of its very promulgation, and the absence of which would diminish our ability to serve each other? Such questions were on my mind when, during a 2008 trip to Ethiopia as a board member of Catholic Relief Services, I visited what seemed like miles of greenhouses established by conglomerates from Europe. Such farms provide jobs in this desperately poor region and are supported with tax breaks from the local government. 
But there are drawbacks to these operations. These greenhouses drain the river — the lifeblood for local families, as well as for their farms and cattle. Chemical fertilizers and pesticides are deployed in poorly ventilated spaces, putting workers at a higher risk for developing cancer. I also learned about differential regulation between Kenya and Ethiopia, whereby the former sets higher regulations for worker safety. I love flower arranging and have enjoyed the bounty of roses available back home for as low as $15 for two dozen blooms. But from that day on, I have not been able to bring myself to reach for these roses.

At home in the United States, it is sobering to note that after the last 15 years of unprecedented prosperity and economic growth, income inequality has risen to alarming levels. The income of the top 1 percent of earners has risen from 8 percent of total income in the 1960s to 20 percent today. This gap is the highest among advanced countries. At the same time, unemployment has risen to a rate not seen in the last 30 years. A day at a Catholic Charities office brings us face to face with some of the 44 million Americans living in poverty. In such stark differences, where is the vision of the common good?

A child of globalization

While being as open-eyed as possible about these miseries, I cannot shake from my mind my first-hand experiences of the benefits of globalization while growing up in Hong Kong. My parents were refugees from China. As land and business owners, they would have no future in Mao’s Communist regime. They left behind all possessions and the ancestral home passed down through generations to start a new life in Hong Kong.
I am an offspring of globalization: born ethnic Chinese in a British colony to a father whose profession was shipping; taught by American missionary nuns, the Maryknoll sisters, who offered instruction in English starting in the second grade as their Cantonese could carry them no further; nurtured throughout my life by Catholic institutions, which are among the oldest global organizations; educated in the United States on scholarships provided by American donors and the government; married to a U.S. citizen of Irish and Lithuanian descent; given a Chinese name, Yan, that stands for Confucius’ canons for human relations, and an English name that supposedly came from my father’s fascination with Irish names (and maybe Irish dames) when he studied in Europe. My story is the norm, not materially unlike my classmates or the 1.5 million refugees who relocated to Hong Kong from 1945 to 1950. Without global trading, what would we have become, in a colony of 426 square miles, with no natural resources except a deep harbor and an overwhelming influx of refugees? Hong Kong could not supply its own food and had to purchase water from China. It had little habitable land and was not endowed with deep deposits of valuable carbons, metals or minerals. Yet it not only provided a shelter but helped newcomers achieve prosperity. Today, with a population of 7 million, Hong Kong continues to have, as the U.S. Department of State says, “one of the world’s most open and dynamic economies.” Behind that general statement are the stories of many I knew who gained better lives through a rising economy. One, my distant cousin Choi, came to Hong Kong with only the proverbial suitcase and no money. What I remembered most was his recounting of how, when a person was called to report to the prefect in charge of his neighborhood in China, shoes would be the give-away of “despicable, bourgeois” tendencies. 
This character flaw subsequently was punishable during the Cultural Revolution by the donning of a dunce cap and kneeling on broken glass. Since then, I have never taken the ownership of shoes for granted. Once in Hong Kong, Choi showed his own bourgeois tendencies by starting a little store that sold and repaired transistor radios in a dingy landing of a multistory residential building. He eventually parlayed this skill into a small job shop that sold components to local assemblers and, as he improved the quality of his products, later exported them to Japanese manufacturers.

Our family driver, Mr. Lai, was an educated man from China who spoke no English, Hong Kong’s official language. He tutored me in Chinese for my Primary Six public exam, while I corrected his pronunciation as he struggled with his “Rs” and “Ls.” Eventually, Mr. Lai’s ability to speak English enabled him to qualify for a taxi license and save enough capital to acquire his own taxi. Both assets — which were subject to government quota and could be traded in the market — appreciated significantly during the tourist and industrial boom in Hong Kong. Mr. Lai’s abode, which started as a lean-to on the hillside, was finally upgraded to a private condo by way of a one-room apartment in public housing. All of his three children graduated from the University of Hong Kong with professional degrees.

Last but not least, our amah — a maid/nanny of sorts — would stop into the bakery to buy a few fractional shares of multinational companies on her daily visit to the market. As the prices of the shares appreciated, she eventually became a lender, making loans to other amahs at loan shark rates.

A rising tide

In Asia, the city of Hong Kong’s ascent is shared by six countries that are collectively known as the “seven tigers”: Indonesia, Malaysia, Singapore, South Korea, Taiwan and Thailand. Today, Vietnam and Cambodia, re-grouping after war and coups, are industrial cubs mimicking the example of the big tigers.
In other parts of the world, information technology and liberalization of trade have opened up a variety of markets. At Notre Dame, for example, jewelry made from recycled magazines by women’s co-ops in Uganda was sold during a Christmas fundraiser in the lobby of the Mendoza College of Business. And a women’s collective in Colombia, through the Clinton Foundation, has engaged Notre Dame MBA students for an analysis of how they can increase demand and prices for their spices.

Despite the unevenness in income benefits, U.N. reports show that infant mortality worldwide has dropped from 12.6 million deaths in 1990 to 9 million in 2007. Access to education has climbed noticeably over the last 20 years in sub-Saharan Africa, from 58 percent to 78 percent of all children receiving some primary schooling. The prediction that parents would place their children into the labor force when jobs were created in developing countries produced the opposite result: parents with opportunities want a better life for their children. In Afghanistan, I witnessed how the success of women in enterprise groups triggered the desire for more education for both themselves and their daughters. Development experts repeatedly note that education for women is the most effective approach to addressing the millennium challenges.

It has been shown that direct foreign investments, market economies and participation in the global economy can increase prosperity. Benefits to citizens include employment, job training, higher standards of living and financial stability. Even for nonprofits, I saw how global interactions can improve life. In a clinic in Kenya, nuns transmitted X-rays through a computer to Italy, where physicians in a Catholic hospital diagnosed the problem in real time.
Participation in markets plays a significant role in setting conditions for trade, stimulating savings as well as developing infrastructures for roads, ports, rail, information systems, monetary policies, regulatory frameworks, education and financial institutions. Multinationals with operations in developing countries can set requirements for revenue distribution and reporting protocols that promote transparency and multi-lateral collaboration between local governments, non-governmental organizations and transnational agencies such as the United Nations, the International Monetary Fund and the World Bank. The countries with the lowest corruption scores are also those with the most open economies. Considering that the three primary causes of conflict are corruption, poverty and social inequality, it is not difficult to see that commerce can enhance peace. Few think of this, but the most valuable export from America to the rest of the world is civil society. Universal suffrage was not a feature in any nation in 1900; by 2000, it was present in 62.5 percent of all countries.

Clearly the outcomes of globalization depend on the global actors, particularly large multinationals. According to author Bruce Piasecki, of the world’s 100 largest economies, 51 are corporations. In addition, 300 multinationals account for 25 percent of the world’s total assets. Whether globalization can contribute to the common good is a question that has been answered by evidence: Yes, it can, some of the time. The more pertinent question, I believe, is how globalization, through business, can serve society.

Socially responsible conduct

In 2000, U.N. Secretary General Kofi Annan called upon corporations to become voluntary signatories of a new program, the United Nations Global Compact (UNGC), by which they would abide by 10 principles. These pertain to the advancement of human rights, labor rights, environmental sustainability and anti-corruption.
Now, in the compact’s 10th year, signatories have grown from 50 inaugural members to more than 5,000 businesses and 1,500 non-business organizations in 135 countries. The most active country is France with 512 organizations. The United States ranks sixth with 203. Signatory organizations have the flexibility to create their own implementation plans. Most companies have enacted policies focusing on nondiscrimination and workplace safety, as well as restrictions on child and forced labor. Sustainability practices have become central to corporate operations, and, to fight corruption, some companies have established oversight systems that include hotlines and sanctions for breaches. Many believe that budget-conscious but socially responsible conduct will become the “new normal” expected by customers, governments and civil society. In a 2010 Accenture survey of more than 700 CEOs from UNGC-participating companies, 81 percent reported that they have incorporated environmental, social and governance issues into their core strategy, up from 50 percent in a similar survey in 2007. Many other companies, large, small, private or public, also abide by socially beneficial practices similar to the UNGC. New business models that explicitly align profit and social objectives include micro-ventures (made famous by Nobel Laureate Muhammad Yunus), fair trade (which apportions greater bargaining leverage and profits to local growers and producers), and B Corporations (based on formal corporate charters that specifically include social responsibility as their objective). Note that these socially oriented business models stand on, and do not depart from, the foundations of capitalism: protection of property rights, voluntary transactions and contracts enforceable through rule of law. Businesses do not operate in a vacuum but achieve success and make contributions within a certain regulatory, political and social context. 
Relaxing in my hotel room in London in the summer of 2002, I sprang to my feet for a straight-back salute when Queen Elizabeth II came on television for her golden jubilee celebration. It was not an instinct from my days as a colonial citizen but a deep appreciation for how the British government, despite the way it came into possession of Hong Kong, enabled an economic miracle that lifted up the lives of about 5 million people. By then, I had seen enough of the contrasting consequences between good and corrupt government; intelligent and nonsensical rule; and government for and against the people.

“Free market” is a misnomer; no markets are even close to free. Taxes, regulations, standards, tariffs, investment incentives, trade agreements, social institutions for education and health, and physical infrastructures for transportation and communication all come together to shape, enable, restrict, facilitate and hinder the activities and competitiveness of business. Whether or not globalization works for a country depends critically on the prudence and fortitude of its government in formulating corresponding strategies, policies and programs.

Company sourcing decisions, while often characterized as a race to the bottom toward the lowest-wage countries, are actually critically affected by a host of other factors, such as political and economic stability, availability of skilled labor, literacy rates, protection of property rights, sound macro-economic policies, local infrastructure and quality of local institutions. Asia receives 20 times more foreign direct investment than sub-Saharan Africa because of its strong showing on these criteria.

More, not less globalization

Clearly the benefits of globalization are uneven, and substantial variation in socially responsible behavior among companies exists.
Yet we should heed the observation in a recent International Monetary Fund issues brief that “the biggest threat to continuing to raise living standards throughout the world is not that globalization will succeed but that it will fail. It is the people of developing economies who have the greatest need for globalization, as it provides them with the opportunities that come with being a part of the world economy.”

We must recognize that the engines that propel globalization are operating at full steam. For centuries, trade has been going on between tribes, societies, countries and continents. Globalization is a historical process of increasing integration of economies around the world and cross-border movements of goods, services, information and capital (including financial, labor, knowledge, know-how) enabled by technologies and policies. What is different today is the scale. The volume of foreign exchange transactions is approximately $2 trillion a day, versus only $80 billion in 1980. The sense of my own country versus yours has certainly dulled in light of foreign direct investment, which has surged from 6.5 percent of world GDP to more than 30 percent in the last 30 years. Capital, as in your and my pension investments, certainly seems to cross boundaries without hesitation toward the pull of opportunities. And information is easier to share. I could not make a phone call to Hong Kong from Purdue University in 1972 without decimating my monthly allowance. Now it is free on Skype.

Pertinent to the success of American companies is the fact that almost all sectors — such as health-care technology, auto manufacturing, aircraft, electronics — now include leading competitors outside of the United States. The rules and playing field for business are global in nature, with attendant challenges and opportunities that transcend the resources of any single company, as well as the jurisdiction of an individual nation-state. Should this intimidate or energize us?
Should we reject the future because we cannot traverse the familiar paths of the past to get there? Will the common good be better served if we retreat?

Papal teachings remind us that markets can serve society. As Pope Benedict writes, “The Church has always held that economic action is not to be regarded as something opposed to society. . . . Society does not have to protect itself from the market, as if the development of the latter were ipso facto to entail the death of authentically human relations.” On prosperity achieved through development, the encyclical makes clear that more, not less, trade is needed: “the principal form of assistance needed by developing countries is that of allowing and encouraging the gradual penetration of their products into international markets.”

A vital key lies in differentiating the instruments of the global marketplace from the actors who direct it. As Caritas in Veritate states, “Instruments that are good in themselves can thereby be transformed into harmful ones. But it is man’s darkened reason that produces these consequences, not the instrument per se.”

According to the 2009 U.N. report on its Millennium Development Goals, while the percentage of people in the world living on $1.25 a day or less dropped from 42 percent in 1990 to 25 percent in 2005, some regions such as sub-Saharan Africa have not enjoyed this progress. More sobering are the percentages of people living at this level of income while employed: 64 percent in sub-Saharan Africa and 44 percent in South Asia. Food insecurity in the world remains staggeringly high, with about one billion people suffering from chronic hunger and two billion living with malnutrition. Together, these represent about half of the world’s population.
Globalization has eased some of these problems, and I believe proper business practices followed by men and women of moral character with a people-centered sense of responsibility can indeed deliver on the vision of the common good. I see the recurrent worldwide miseries as a call to make globalization work for more people, not as a justification for retreat. The latter is neither feasible nor effective in raising the quality of life. Trade is a necessary good, not a necessary evil. However, the “invisible hand” of markets cannot become “fists” — “handshakes” must prevail as the most common form of interaction. The solution for the Ethiopian flower farms is not to stop operation but to adopt strict environmental controls that safeguard worker health and safety, invest in water recycling methods to preserve the water table, develop effective irrigation approaches to increase crop yield, raise prosperity for the villagers and offer opportunities for children, particularly girls, to get an education. I want to enjoy the roses, yes, at a higher price, and know that I am part of a global supply chain that lifted people out of poverty rather than exploited their lack of bargaining leverage. Carolyn Woo is the Martin J. Gillen Dean and Ray and Milann Siegfried Professor for Entrepreneurial Studies in the Mendoza College of Business.
Eventually, both were merged into Land Command and later, Field Army. This page is a list of British divisions that existed in World War I. A. G. B. Stanier Bart. The Army Air Corps provides battlefield air support with six regiments and four independent squadrons and flights: The Intelligence Corps provides intelligence support including collection, interpretation and counter-intelligence capabilities with three battalions and a joint service group: The Combat Service Support Arms provide sustainment and support for the Combat and Combat Support Arms. Between June and August 1940, another 275,000 men were drafted and 120 new infantry battalions were formed… At the start of the Second World War, the United Kingdom already possessed two armoured divisions; a further nine would be raised by the British Army during the war, of which only two would not see service. This article on British Army Divisions is from the book D-Day Encyclopedia, © 2014 by Barrett Tillman. Landing on Sword Beach, Lovat’s forces advanced through lines held by the Third Division’s Eighth Brigade. The CFA is responsible for generating and preparing forces for current and contingency operations; he commands 1st (United Kingdom) Division, 3rd (United Kingdom) Division, 6th (United Kingdom) Division and Joint Helicopter Command (JHC). • Third Parachute Brigade: Brig. With ten specialist brigades, the 6th Division is now the largest of the British Army’s three divisions. Repeated thrusts were made by German armor, including the Twenty-First Panzer Division. Hugh Kindersley.
The Third Parachute Brigade included the First Canadian Parachute Battalion. British Divisions. 2nd, 4th and 5th Divisions were replaced by Support Command on 1 November 2011. Volunteer units were also frequently raised during wartime, which did not rely on compulsory service and hence attracted recruits keen to avoid the Militia. The three senior regiments of foot guards, plus the Royal Regiment of Scotland, each maintain an additional reinforced company that retains custody of the colours of battalions that are in suspended animation: The Royal Gurkha Rifles maintains three additional company sized units that are permanently attached to various training establishments to serve in the OPFOR role in providing realistic battle training: The Royal Gurkha Rifles is the largest element of the Brigade of Gurkhas, which includes its own support arms. The battlegroup is a mixed formation built around the core of one unit, an armoured regiment or infantry battalion, with sub-units providing artillery, engineers, logistics, aviation, etc., as required. It compared in size to the standard U.S. Army division but had less transport. In the British Army, the three divisions are eight, nine, and four brigades strong respectively, with each commanded by a Major General. Formation signs at the division level were first introduced in the British Army in the First World War. The First Canadian Armoured Personnel Carrier Regiment was attached. Today, the British Army is the only Home British military force, including both the regular army and the forces it absorbed, though British military units organised on Territorial lines remain in British Overseas Territories that are still not considered formally part of the British Army, with only the Royal Gibraltar Regiment and the Royal Bermuda Regiment (an amalgam of the old Bermuda Militia Artillery and Bermuda Volunteer Rifle Corps) appearing on the British Army order of precedence and in the Army List. 
French and British armed forces Napoleon’s army and method of warfare. After its defeat in Western Europe in the summer of 1940, the main mission of the British Army changed from providing an expeditionary force for use on the continent to a defensive force capable of resisting an invasion of the British Isles. B. Walton. Despite three centuries of institutional continuity in some regiments, very few regimental units fought as such. By 1939 the British army had raised two armoured divisions and raised another nine between 1940 and 1942. Headquartered at RAF Uxbridge. Consequently, the British armed forces, and especially the army, needed to keep casualties as low as possible. In 1944 the nominal strength of a British infantry division (seldom achieved) was 18,347 men, including officers. By the autumn of 1941 there were 27 British, Canadian, and Polish motorized infantry divisions (28 in April 1943) available for the Field Force in Great Britain, each containing a front line strength of approximately 15,500 men. For beach defense eight county divisions had been formed, each with a strength of 10,000… Commanded by the charismatic Brigadier Simon Fraser, Lord Lovat, the First Special Service Brigade was formed specifically for the Normandy landings. The Yeomanry was a mounted force that could be mobilised in times of war or emergency. This was down to politicians and army officers who still valued the horse over mechanisation. • Eighth Armoured Brigade: Brig. The Militia system was extended to a number of English (subsequently British) colonies, beginning with Virginia and Bermuda. The word corps is also used for administrative groupings by common function, such as the Royal Armoured Corps and Army Air Corps.
All units within the service are either Regular or Army Reserve, or a combination with sub-units of each type. The Royal Artillery consists of 13 Regular Regiments and 5 Reserve Regiments along with the ceremonial King's Troop. Since the end of the Vietnam War, the U.S. Army has been all-volunteer, meaning no one is drafted, and as always, everyone receives a salary. Major Units are regiment or battalion-sized with minor units being smaller, either company-sized sub-units or platoons. A historian of the Ottoman Empire and modern Turkey, he is a publisher of popular history, a podcaster, and online course creator. British Army Divisions: First Special Service (Commando) Brigade. Three of the Regular Regiments and the King's Troop retain the cap badge, or "cypher", and traditions of the Royal Horse Artillery, although this naming convention has no link to the role that they undertake. Divisions are usually equipped to operate independently in the field, and have a full complement of supporting reconnaissance, artillery, engineers, medical, supply and transport troops. The British military (those parts of the British Armed Forces tasked with land warfare, as opposed to the naval forces) historically was divided into a number of forces, of which the British Army (also referred to historically as the Regular Army and the Regular Force) was only one. The command structure is hierarchical with divisions and brigades responsible for administering groupings of smaller units.
The four armoured regiments of the Army Reserve operate in two roles - provision of crew replacements for armoured regiments, and Light Cavalry (reconnaissance): Note: The Honourable Artillery Company is a corps in its own right and is not part of the Royal Artillery. The brigade would be required to deploy up to three separate battlegroups, the primary tactical formation employed in British doctrine. The commandos seized Breville on D+6, and though Lovat was badly wounded, the eastern flank of the landing beaches had been secured. On 7 June Lovat’s marines attacked east of the Orne Estuary, while No. The AGC is an amalgamation with three of the constituent units retaining their previous cap badge. The Combat Support Arms provide direct support to the Combat Arms and include artillery, engineer, signals and aviation. In addition to the division’s three composite brigades, the Twenty-seventh Armoured Brigade was attached. Under the General Officer Commanding Scotland, public duties in Edinburgh are the responsibility of a new incremental company, Balaklava Company, 5th Battalion, the Royal Regiment of Scotland (Argyll and Sutherland Highlanders), formed after the reduction of the Argylls from battalion status. • 151st Brigade: Brig. British 3rd Division troops passing a First World War memorial in Hermanville-sur-Mer, 6 June 1944. G. E. Prior-Palmer. The British Army is listed according to an order of precedence for the purposes of parading. The Militia and Volunteer units of a colony were generally considered to be separate forces from the Home Militia Force and Volunteer Force in the United Kingdom, and from the Militia Forces and Volunteer Forces of other colonies. The British Army has two deployable divisions, capable of deploying the headquarters and subordinate formations immediately to operations. 10 Interallied Commando, mainly comprised of Free French troops.
The Adjutant General's Corps provides administrative, police and disciplinary and educational support to the army. • 231st Brigade: Brig. F. Y. C. Cox. The British Army has today unveiled its latest adaptation to modern warfare: the 6th (UK) Division. Support Command was later re-titled as Regional Command in 2015. In theory, an Army is a formation of two or more corps, between 200,000 and 600,000 strong and commanded by a field marshal or US four-star general. The Corps as a whole is divided into four separate branches: Training in the Regular Army differs for soldiers and officers but in general takes place in at least two phases: Phase one training is basic military training for all new recruits. The AMS comprises four different Corps providing the range of medical and veterinary care, with the Royal Army Medical Corps also providing the administrative framework for the regiments. Before the British army decided on a re-armament program in 1934, the army had a modest tank force. German prisoners being escorted back through La Brèche d’Hermanville by men of the 2nd King’s Shropshire Light Infantry, 6 June 1944. Reporting to the Chief of the General Staff are four lieutenant-generals: the Deputy Chief of the General Staff; the Commander Field Army (CFA); the Commander Home Command (CHC), and Commander Allied Rapid Reaction Corps. 45 Royal Marine Commando, and part of No. After four years of war and enormous drain not only on the nation but upon the Commonwealth, it was increasingly difficult to maintain an adequate pool of able-bodied men.
The Reserve Forces (which referred to the Home Yeomanry, Militia and Volunteer Forces before the creation of the regular British Army Reserve) were increasingly integrated with the British Army through a succession of reforms over the last two decades of the Nineteenth Century and the early years of the Twentieth Century, whereby the Reserve Forces units mostly lost their own identities and became numbered Territorial Force sub-units of regular British Army corps or regiments (the Home Militia had followed this path, with the Militia Infantry units becoming numbered battalions of British Army regiments, and the Militia Artillery integrating within Royal Artillery territorial divisions in 1882 and 1889, and becoming parts of the Royal Field Artillery or Royal Garrison Artillery in 1902 (though retaining their traditional corps names), but was not merged into the Territorial Force when it was created in 1908 (by the merger of the Yeomanry and Volunteer Force)). Consequently, in 1939 the British Army did not have a single armoured division, and the French tanks were distributed in small packets throughout the infantry divisions. The term British Army was adopted in 1707 after the Acts of Union between England and Scotland. The tropical climate and terrain are well suited to jungle training, and the Jungle Warfare Division runs courses for all members of the British Army. This question arises a fair bit with readers of WorldWar2Facts.org, so we have compiled a table to help explain what the unit or group names mean, what units made up larger WW2 army units, the rough size of the unit, and what rank of officer or NCO was normally in charge. The division included the Third and Fifth Parachute Brigades and Sixth Airlanding Brigade, each with three battalions. The Royal Logistic Corps is the largest single corps in the British Army: The Royal Electrical and Mechanical Engineers is a corps that provides maintenance support to equipment and vehicles.
In some colonies, Troops of Horse or other mounted units similar to the Yeomanry were also created. 7 Company, Coldstream Guards (ex 2nd Bn, Coldstream Guards), F Company, Scots Guards (ex 2nd Bn, Scots Guards), Balaklava Company, Argyll & Sutherland Highlanders, The Royal Regiment of Scotland (ex 5th Bn, The Royal Regiment of Scotland), 1 RSME Regiment – Construction Engineer School, 29 Postal Courier & Movement Regiment RLC, Infantry soldiers undergo a 26-week course at the, Soldiers in other specialisations undergo the 14-week Army Development Course at the. The following article on British Army Divisions in World War Two is an excerpt from Barrett Tillman’s D-Day Encyclopedia. In addition to the brigades above, there are a number of other units of brigade size, some of which are under joint command. All units within the service are either Regular (full-time) or Army Reserve (full-time or part-time), or a combination with sub-units of each type. • Sixty-ninth Brigade: Brig. A division is a large military unit or formation, usually consisting of between 10,000 and 25,000 soldiers. The overriding concern of the British army in 1944 was manpower. These were seen as a useful way to add to military strength economically during wartime, but otherwise as a drain on the Militia and so were not normally maintained in peacetime, although in Bermuda prominent propertied men were appointed Captains of Forts, taking charge of maintaining and commanding fortified Coastal artillery batteries manned by volunteers, defending the colony's coast from the Seventeenth Century to the Nineteenth Century (when all of the batteries were taken over by the regular Royal Artillery).
In WW2, armies were associated with geographical theatres of operations, such as the seven German armies that invaded Belgium and France in WW1 or the British 14th Army that fought in India and Burma between 1941 and 1945. However, the increased combat power of small and medium-sized formations, the influence of airpower and the incre… The 'Territorial' cavalry was referred to as Yeomanry. the Army Personnel Centre (APC) in Glasgow), and focuses on the 'home base' (i.e. A third division has responsibility for overseeing both offensive and defensive cyberwarfare, intelligence activities, surveillance and propaganda. Scotland District was absorbed by 2nd Division in 2000. Where a colony had more than one Militia or Volunteer unit, they would be grouped as a Militia or Volunteer Force for that colony, such as the Jamaica Volunteer Defence Force, which comprised the St. Andrew Rifle Corps (or Kingston Infantry Volunteers), the Jamaica Corps of Scouts, and the Jamaica Reserve Regiment, but not the Jamaica Militia Artillery. There is a Commander Field Army and a personnel and UK operations command, Home Command. R. H. Senior, B. Army tank brigade equipped with Valentine tanks lined up in Britain. Nigel Poett. The 1 Infantry Division was a pre-war Regular Army formation, which was sent to France as part of the British Expeditionary Force.
The First Division is the British Army's most versatile force – light, agile, lethal and expeditionary. The Militia was instead renamed the Special Reserve, and was permanently suspended after the First World War (although a handful of Militia units survived in the United Kingdom, its colonies, and the Crown Dependencies). The first formation formed had been the Mobile Division in October 1937, followed a year later, in the wa… These units are affiliated to the equivalent British units, but have their own unique cap badges. The Royal Engineers is a corps of 15 regiments in the regular army providing military engineering (civil engineering, assault engineering and demolition) capabilities to the field army and facilities management expertise within garrisons. Army Headquarters is located in Andover, Hampshire. There are also several combat support and combat service support units of brigade size. Later concentrated on the London Inner Artillery Zone after 6th … The oldest of these organisations was the Militia Force (also referred to as the Constitutional Force), which (in the Kingdom of England) was originally the main military defensive force (there otherwise were originally only Royal bodyguards, including the Yeomen Warders and the Yeomen of the Guard, with armies raised only temporarily for expeditions overseas), made up of civilians embodied for annual training or emergencies, and had used various schemes of compulsory service during different periods of its long existence. Within the deployable brigades, the Signal Regiment also provides support to the HQ function including logistics, life support and force protection capabilities. In smaller colonies with a single militia or volunteer unit, that single unit would still be considered to be listed within a force, or in some cases might be named a force rather than a regiment or corps, such as is the case for the Falkland Islands Defence Force and the Royal Montserrat Defence Force.
The division was formed on 16 December 1935 from HQ 47th (1/2nd London) Division to command Territorial Army AA units in London and South East England. A review of the operation of the army demonstrated that this system is inefficient, and it is being phased out, with battalions specialising in role: armoured infantry, mechanised infantry and air assault battalions will remain in a single posting, while light infantry battalions will continue to be periodically rotated between postings. The ability and willingness of the Americans to absorb losses probably was the major difference between the two greatest Western Allied powers. Personnel will be "trickle posted" between battalions of the same regiment as required, and to further their careers. The Queen's Guard at Buckingham Palace and Windsor Castle is primarily mounted by the two Foot Guards Battalions and one Line Infantry Battalion, together with the Foot Guards Incremental companies: Nijmegen Company, Grenadier Guards, No 7 Company, Coldstream Guards, and F Company, Scots Guards. In the West, the first general to think of organising an army into smaller combined-arms units was Maurice de Saxe (d. 1750), Marshal General of France, in his book Mes Rêveries. The Household Cavalry has the highest precedence, unless the Royal Horse Artillery parades with its guns. Scott Michael Rank, Ph.D., is the editor of History on the Net and host of the History Unplugged podcast. The units of the British Army are commanded by the Chief of the General Staff. The main British units committed to the 6 June landings were: Sword Beach, Maj. Gen. T. G. Rennie. The Infantry is divided for administrative purposes into four 'divisions', with battalions being trained and equipped to operate in one of six main roles. Under the arms-plot system, a battalion would spend between two and six years in one role, before re-training for another.
Unlike the Home, Imperial Fortress and Crown Dependency Militia and Volunteer units and forces that continued to exist after the First World War, although parts of the British military, most were not considered parts of the British Army unless they received Army Funds (as was the case for the Bermuda Militia Artillery and the Bermuda Volunteer Rifle Corps), which was generally only the case for those in the Channel Islands or the Imperial Fortress colonies (Nova Scotia, before Canadian confederation, Bermuda, Gibraltar, and Malta). The commandos' main objective was relief of the British Sixth Airborne Division, which had seized vital bridges over the Orne River. During the Normandy campaign the 151st Brigade (three battalions of the Durham Light Infantry) sustained particularly notable casualties, including two commanders in barely two weeks. Many British regiments had only one or two battalions, while some had as many as eight or more flung across the globe. From 1995, UK commands and later districts were replaced by regenerative divisions. CHC is responsible for commanding a wide variety of organisations that both contribute to the administrative running of the Army. Victor-François de Broglie conducted successful practical experiments of the divisional system in the Seven Years' War. Several infantry regiments are organised into four administrative divisions based on the type of infantry unit or traditional recruiting areas. A brigade contains three or four battalion-sized units, around 5,000 personnel, and is commanded by a one star officer, a Brigadier. The Royal Artillery undertakes six different roles. The last purely British corps, I (BR) Corps, disbanded in Germany after the end of the Cold War. The British Army possessed or formed thirty-five infantry divisions in the Second World War. Since the 1957 Defence Review, the size of the Army has consistently shrunk.
Previously the Army had regional commands in the UK, including Aldershot Command, Eastern Command, Northern Command, Scottish Command, Southern Command and Western Command. The command structure is hierarchical, with divisions and brigades responsible for administering groupings of smaller units. No. 3 Commando assaulted the Merville Battery of coastal defense guns. Under ordinary circumstances, the Household Cavalry parades at the extreme right of the line. Brigadier R. H. Senior was wounded on D-Day and Brigadier B. B. Walton on 16 June. The divisions were responsible for training subordinate formations and units under their command for operations in the UK, such as Military Aid to the Civil Community, as well as training units for overseas deployments. The English Army, subsequently the British Army once Scottish regiments were moved onto its establishment following the Union of the Kingdoms of Scotland and England, was originally a separate force from these, but absorbed the Ordnance Military corps and various previously civilian departments after the Board of Ordnance was abolished in 1855. An additional reconnaissance regiment is provided by the Household Cavalry Regiment, of the Household Cavalry, which administratively is not considered to be part of the RAC, but is included among the RAC order of battle for operational tasking. Maj. Gen. Percy Hobart's Seventy-ninth Armoured Division was composed of the First Tank Brigade, Thirtieth Armoured Brigade, and First Assault Brigade, composed of Royal Engineer units. The brigade was withdrawn after ten weeks in combat, sustaining nearly a thousand casualties. The British military (those parts of the British Armed Forces tasked with land warfare, as opposed to the naval forces) historically was divided into a number of forces, of which the British Army (also referred to historically as the Regular Army and the Regular Force) was only one. Its components were Nos. 3, 4, and 6 Commandos of the British Army and No. 45 Royal Marine Commando.
In France the law of 10 Fructidor year VI (September 5, 1798) had replaced the levies of the Revolution by a regular method of conscription which, with a few modifications, remained in force until 1815. Various Combat Support Arms and Services are referred to in the wider sense as a Corps, such as the Royal Corps of Signals. Divisions were either infantry or cavalry. The other regular military force that existed alongside the British Army was the Board of Ordnance, which included the Ordnance Military Corps (made up of the Royal Artillery, Royal Engineers, and the Royal Sappers and Miners), as well as the originally-civilian Commissariat Department, stores and supply departments, as well as barracks departments, ordnance factories and various other functions supporting the various naval and military forces. A corps, in the sense of a field fighting formation, is a formation of two or more divisions, potentially 50,000 personnel or more. For operational tasks, a battle group will be formed around a combat unit, supported by units or sub-units from other areas. Six British infantry divisions fought at varying stages of the Italian campaign. Although not part of the Royal Regiment of Artillery, the Honourable Artillery Company shares some of the same capabilities. Phase two training enables the individual to join an operational unit prepared to contribute to operational effectiveness. The police and disciplinary activities retain their own cap badges and act as discrete bodies. London District is responsible for the maintenance of capability for the defence of the capital and the provision of ceremonial units and garrisons for the Crown Estate in London, such as the Tower of London. When studying World War 2, a common question that arises is what exactly each army group or unit name means.
In March 1943, the 1st Infantry Division was deployed to Tunisia and then used to secure the island of Pantelleria. Seven battalions provide support to formations of brigade level and above. The Army Medical Services provide primary and secondary care for the armed forces in fixed locations and whilst deployed on operations. 3rd (United Kingdom) Division, based at the heart of the British Army on Salisbury Plain, is the only division at continual operational readiness in the UK. The brigade will contain a wide range of military disciplines allowing the conduct of a spectrum of military tasks. The airlanding brigade comprised one battalion each of the Devonshire Regiment, the Oxfordshire and Buckinghamshire Light Infantry, and the Royal Ulster Rifles. The guard at Horse Guards is normally drawn from the Household Cavalry Mounted Regiment (HCMR). Both efforts were repulsed, but the brigade ceded little ground to determined counterattacks. Gold Beach, Maj. Gen. D. A. H. Graham. The British Army parades according to the order of precedence, from right to left, with the unit at the extreme right being highest on the order. Below the Brigade level, support is provided by Battalion Signallers drawn from the parent unit. The deficit was in some ways made up with a standard organization of four companies per battalion rather than the Americans' three. The Sixth Airborne Division was commanded by Maj. Gen. Richard Gale. • Twenty-seventh Armoured Brigade: Brig. • Glider Pilot Regiment: Brig. George Chatterton. Three further infantry units in the regular army are not grouped within the various infantry divisions. The role of the Royal Gibraltar Regiment is limited to the defence of Gibraltar. The commando brigade's total strength amounted to some 2,500 men.
British and Canadian forces in Normandy were in a difficult situation, operating in the face of determined, highly capable German opposition. The commandos were led by Brigadier Simon Fraser, Lord Lovat, and held positions east of the Orne Estuary; one commando unit was mainly comprised of Free French troops. The Third Parachute Brigade included the First Canadian Parachute Battalion. The German defenders included the Twenty-first Panzer Division. [Photo: 2nd King's Shropshire Light Infantry, 6 June 1944.] [Photo: First World War memorial in Hermanville-sur-Mer, 6 June 1944.]
De Saxe died at the age of 54, without having implemented his idea. The Germans, by contrast, began to develop large tank formations on an effective basis after the re-armament program decided on in 1934; British development of armoured formations was slowed by cavalry officers who still valued the horse over mechanisation. In 1944 a British division had less transport than its American counterpart, and an American regiment might include several battalions. Britain raised another nine divisions between 1940 and 1942.
The term "British Army" was adopted in 1707, after the union of the Kingdoms of England and Scotland, the regular force having its origins in the Restoration in 1660. The Yeomanry was a mounted force that could be mobilised in times of war or emergency. The Royal Artillery consists of 13 Regular regiments and 5 Reserve regiments, along with the ceremonial King's Troop; the HAC and the King's Troop, Royal Horse Artillery, provide gun salutes in London, and the Honourable Artillery Company carries out public duties in the capital. After the Cold War, UK commands were merged into Land Command and later the Field Army. The new division replaces and augments the former Force Troops Command structure. London District includes many units with significant ceremonial roles. Units may have a Light Aid Detachment (LAD) or Workshop (Wksp) attached. Officers and soldiers with medical, legal, and educational specialisations serve in attached posts.
Phase one training covers basic military skills: fieldcraft, weapon handling, personal administration, and drill. Specialist phase two training for the trade that the soldier or officer will follow is conducted in a branch specialised school.
Native American self-determination
Native American self-determination refers to the social movements, legislation, and beliefs by which the Native American tribes in the United States exercise self-governance and decision making on issues that affect their own people. Self-determination is defined as the movement by which the Native Americans sought to achieve restoration of tribal community, self-government, cultural renewal, reservation development, educational control, and equal or controlling input into federal government decisions concerning policies and programs. The beginnings of the federal policy favoring self-determination date back to the 1930s. In 1933, John Collier, a social worker and reformer who had long worked in American Indian affairs, was appointed commissioner of the Bureau of Indian Affairs under President Franklin D. Roosevelt. He was likely the most knowledgeable person about American Indians appointed to this position up to that period. He respected tribal cultures and values. The U.S. Congress passed Collier's legislation, the Indian Reorganization Act of 1934, although with numerous changes. It was to enable tribes to reorganize their governments and strengthen their communities. It ended the allotment of Indian lands to individual households, which had led to loss of control over their territories. The law was intended to decrease the paternalistic power of the BIA, which extended to their running numerous Indian boarding schools, where American Indian children were forced to give up native languages and cultural practices. Four years before the passage of the Indian Reorganization Act, the government acknowledged that the paternalism was unfair to the Indian tribes and their people. The IRA was called the Indian "New Deal" by the Roosevelt administration.
The IRA enabled the restoration of tribal governments, but Congress made many changes in response to lobbyists, and the bill fell short of the policy of "Indian self-determination without termination." During the 1950s, government policy changed toward American Indians, and politicians recommended termination of many of the tribes' special relationships with the government under federal recognition of their status, in favor of assimilation. Over 100 tribes were terminated; those that continued suffered from increased governmental paternalism. During the 1960s and later, with increased activism for civil rights and American Indian rights, the movement for self-determination gained strength. Self-determination was not official federal government policy until 1970, when President Richard M. Nixon addressed the issue in his July 8 congressional message of "Recommendations for Indian Policy." He discussed his goal of policy changes that supported Indian self-determination. It is long past time that the Indian policies of the Federal government began to recognize and build upon the capacities and insights of the Indian people. Both as a matter of Justice and as a matter of enlightened social policy, we must begin to act on the basis of what the Indians themselves have long been telling us. The time has come to break decisively with the past and to create the conditions for a new era in which the Indian future is determined by Indian acts and Indian decisions. In 1968, Congress had passed the Indian Civil Rights Act, after recognizing the policies of Indian termination as a failure during the 1960s. American Indians had persisted in keeping their cultures and religions alive, and the government recognized that the goal of assimilation was the wrong one. The bill was to ensure provision of the Bill of Rights to the tribal peoples. 
In the following years, Congress passed additional legislation to carry out Nixon's programs to develop a stronger trust relationship between the federal government and the tribes, and to allow the tribes to manage their own affairs. Examples include the Indian Financing Act of 1974 and the Indian Self-Determination and Education Assistance Act of 1975. The latter act enabled the government to make direct contracts with the Indian tribes just as it does with the states, for implementation of programs and distribution of funds. Rather than the BIA administering programs directly, the government would contract with tribes to manage health care, for instance, or educational benefits. The Indian Child Welfare Act (1978) "... recognized tribal courts as the primary and ultimate forum for welfare and custody cases concerning native children." By promising to look after the tribes' children, the ICWA contributed to the economic and cultural welfare of each tribe's future. Since 1980, administrations have issued Presidential Memoranda on Indian affairs to indicate direction for increased tribal sovereignty. A 1994 Presidential Memorandum issued by Bill Clinton changed the way the U.S. Department of Housing and Urban Development supported housing programs. The Native American Housing Assistance and Self-Determination Act of 1996 consolidated grant programs for housing funding into a single block grant specifically available to recognized governments of American Indians and Alaska Natives. A renewal of Indian activism since the 1960s saw the rise of a new generation of leaders. Public protests created publicity for their cause, such as the occupation of Alcatraz and Mount Rushmore, the Wounded Knee Incident, and other examples of American Indians uniting to change their relationship with the United States government. Strong Indian leaders traveled across America to try to add unification to the Indian cause.
The leaders arose in different fields, starting independent newspapers, promoting educational independence, working to reclaim lands, and to enforce treaty rights. Another campaign occurred in the Pacific Northwest as Billy Frank, Jr. and Hank Adams fought for native treaty fishing rights. The result was a Native American force which fought for change throughout a wide variety of interconnected social spheres. For decades since the late 19th century, Native Americans were forced to send their children to boarding schools where they were made to speak and write in English only, and to learn the majority culture and Christian religion. Native Americans wanted to teach their children their own values and cultures. In the 1960s, Allan Yazzie (Navajo) proposed creation of a Navajo school to be built on the tribe's land in Arizona and operated by the tribe. The project was called the Rough Rock Demonstration School, and it was to be administered solely by the Navajo Indians (without BIA oversight). Although many politicians thought that the school would fail immediately, it prevailed. It became a strong sign of Indian self-determination and success. In 1968, the Navajo established the first tribal college, to be followed by other tribes developing similar tribal colleges on their own reservations.
Land reclamation and anti-termination
Paul Bernal (also known as Sherry Beni) fought for the Taos Pueblo tribe of New Mexico, who wanted to reclaim their sacred religious site, Blue Lake. It had been taken by the Forest Service at the start of the twentieth century for inclusion in a national forest. Throughout the 1960s, Bernal and the Pueblo had little success in regaining the lake. The administration of Richard Nixon supported self-determination for American Indians. After Senate hearings (where Bernal was harassed by senators who thought that the Indians wanted the land for other than religious purposes), Nixon signed a bill to return the lake to the Taos Pueblo. Ada Deer (b.
1935) is a leader of the Menominee tribe, which has a reservation in Wisconsin. In the 1960s, Deer helped mobilize her tribe to oppose the government's proposed termination of its relationship with the federal government. By 1972, Deer had gained support for her tribe's movement, and many governors, senators, and congressmen gave her and the Menominee tribe their full-fledged approval. Deer fought against the Interior Committee chairman (Wayne Aspinall), who supported the tribe's termination, and their loss of 250,000 acres (1,000 km2) of communal land under termination policies. Ada Deer continued to lobby for the Menominee Restoration Act. After Aspinall failed to win an election, the tribe prevailed and the act was signed by President Nixon. Ada Deer (along with such people as Lucy Covington) is one of the early examples of self-determination in tribal members; her efforts helped restore all the terminated lands back to the Menominee tribe. D'Arcy McNickle (Cree and Salish-Kootenai) was a member of the Flathead reservation. He served as the chair of a committee of Indian leaders at the 1961 American Indian Chicago Conference, and crafted an Indian policy called "Declaration of Indian purpose." The policy outlined many solutions to the problems of termination. It was a sign of change in the 1960s and 1970s when the termination era ended. The "Declaration of Indian purpose" was given to President John F. Kennedy by the National Congress of American Indians. The tribal governments started to bypass the BIA and focus on self-determination plans. John Echohawk (Pawnee) is a founder and leader of the Native American Rights Fund (NARF). He is a lawyer who has worked to protect Indian land and sovereignty. In 1970 Echohawk was the first Native American to graduate from the University of New Mexico's school of law. After law school, Echohawk worked for some time with California Indian Legal Services.
Echohawk joined together with other lawyers and tribal members to form the NARF, which was similar to the NAACP (both were formed to organize civil rights activism). Under Echohawk, NARF focused on preserving tribes, protecting tribal resources, protecting human rights, ensuring government responsibility, expanding Indian law, and educating people about Indian issues. Through NARF, Echohawk has gained government recognition of tribal sovereignty and participated in drafting the Native American Graves Protection and Repatriation Act signed into law by President George H.W. Bush in 1990. Rosalind McClanahan (Navajo) opposed Arizona's imposing a state income tax on members of her tribe who lived and worked within the Navajo Reservation, which she considered an issue of tribal sovereignty. McClanahan lived and worked in the reservation, and was taxed. She enlisted the help of DNA (a group of Native American rights attorneys), and appealed the case to the United States Supreme Court in 1973 after the state court had ruled in favor of the state's ability to require that tax. The resulting U.S. Supreme Court ruling was in favor of McClanahan, and tribal rights of members to be excluded from state taxes within tribal sovereign land. She helped establish stronger self-rule for the Navajo as well as other Native American tribes. Several Native American organizations provided an immense amount of support that either helped set the precedent for the self-determination movement or further strengthened the policy. These organizations can be divided mainly into two levels: associations that were nationally operated and those groups that were organized for local action. In 1944, the National Congress of American Indians (NCAI) was founded "in response to termination and assimilation policies that the United States forced upon the tribal governments in contradiction of their treaty rights and status as sovereigns.
NCAI stressed the need for unity and cooperation among tribal governments for the protection of their treaty and sovereign rights". "Recognizing the threat posed by termination, [NCAI] fought to maintain Indians' legal rights and cultural identity." Indian policy has been federalized since colonial times; however, "until the 1940s, in spite of such major national initiatives as allotment and the Indian Reorganization Act, Indians had never been able to organize on a national basis". Groups such as the Friends of the Indians in the late nineteenth century and the Association on American Indian Affairs (est. 1922) had nearly all-white membership. The NCAI was an Indian-only organization with membership based on tribes, not individuals. Although the "NCAI's fortunes would ebb and flow . . . the return of Indian veterans at the end of World War II" gave the organization and the Indian people an unexpected boost. "Whether they settled in Indian country or in the cities, these veterans raised expectations and bred a much-needed impatience and assertiveness." According to Helen Peterson, later executive director of NCAI, "World War Two revived the Indians' capacity to act on their own behalf." With the NCAI, Native American people relied on their own people to organize and affect national policy. The NCAI was one of the first major steps in halting termination and giving life to the self-determination era. The Office of Economic Opportunity (OEO), a result of President Lyndon B. Johnson's War on Poverty legislation and the Economic Opportunity Act of 1964, provided grants and other funds directly to tribal governments rather than only to state and federal agencies. The War on Poverty grants "empowered tribes by building tribal capacities, creating independence from the BIA, and knitting tribes together with other tribes and the country as a whole." As Philip S. Deloria explains, the OEO helped the Indian people become more independent and powerful: for the first time ". .
. Indian tribal governments had money and were not beholden for it to the Bureau of Indian Affairs . . . Tribes could, to some degree, set their own priorities." Renewed self-determination by tribes "altered the nature of the [BIA] and the relationship between tribes and the federal government". The independence gained by tribes from dealing with the Office of Economic Opportunity helped change the dynamic of Indian affairs in relation to the federal government. The Native American Rights Fund (NARF) is a national legal-advocacy and nonprofit organization founded by Indians in 1970 to assist Indians in their legal battles. It has become the primary national advocacy group for Native Americans. "It is funded largely by grants from private foundations and (despite its adversarial relationship) the Federal Government." NARF's legal, policy, and public education work is concentrated in five key areas: preservation of tribes; protection of tribal natural resources; promotion of Native American human rights; accountability of governments to Native Americans; and development of Indian law and educating the public about Indian rights, laws, and issues. "NARF focuses on applying existing laws and treaties to guarantee that national and state governments live up to their legal obligations [and] . . . works with religious, civil rights, and other Native American organizations to shape the laws that will help assure the civil and religious rights of all Native Americans." Since its inception, NARF has provided legal expertise at the national level. NARF has trained many young attorneys, both Indians and non-Indians, who intend to specialize in Native American legal issues. "NARF has successfully argued every Supreme Court case involving Native Americans since 1973." 
NARF has affected tens of thousands of Indian people in its work for more than 250 tribes in all fifty states to develop strong self-governance, sound economic development, prudent natural resources management and positive social development. It continues to handle civil rights cases for the Native American community in the United States. Accomplishments and progress of Native American organizations on the national level inspired change on the local level. It did not take long for local tribes to begin to establish their own organizations that would benefit them directly. One of the earliest of such organizations was the Determination of Rights and Unity for Menominee Shareholders (DRUMS), a citizens' group founded in 1970. It focused on stopping the Legend Lake sales, establishing Menominee control over Menominee Enterprises, Inc. (MEI), and, eventually, reversing termination itself, which was the central aim of self-determination. DRUMS made an immediate impact. Within months of its establishment, the Menominee organized a series of well-planned and smoothly executed demonstrations. In an effort to interrupt the Legend Lake land development, DRUMS picketed Legend Lake's Menominee County sales office and promotional events in nearby cities such as Milwaukee, Green Bay, and Appleton. In October 1971, DRUMS led an impressive 12-day, 220-mile (350 km) march from Menominee County to the state capitol in Madison. Like the other DRUMS protests, the march to Madison was non-violent but sharp-edged nonetheless. Wisconsin Governor Patrick Lucey met with DRUMS leaders and discussed prevalent issues in the Menominee community. Within a month of the march, Governor Lucey visited Menominee County, and he consistently supported the Menominee movement thereafter.
In addition, DRUMS managed to produce a first draft of the Menominee restoration bill by the end of 1971, and by early 1972 the tribe had already obtained an astounding level of support, including that of Democratic presidential candidate Henry Jackson. Though it took a prodigious amount of work, the Menominee Restoration Act moved through Congress with rare speed. In April 1975, MEI was dissolved and all Menominee lands were transferred back to the tribe, to be held in trust by the United States of America and governed by the sovereign Menominee Tribe of Wisconsin. Although DRUMS set its sights on improving the status of the local Menominee people, it was a big step toward the nationwide self-determination movement. The success of DRUMS let other Indians know that they too could make an impact, if only on a local level, and motivated other tribes to fight for their rights. On the national scope, DRUMS allowed Native American leaders to assume prominent positions. For instance, Ada Deer rose to the top of the federal government: in 1993, Deer was appointed Assistant Secretary of the Interior by President Bill Clinton, and she served as head of the Bureau of Indian Affairs from 1993 to 1997. The new policy of the Office of Economic Opportunity, which sought to directly involve the recipients of its aid, provided further impetus for self-determination in education. The success of the OEO Head Start preschool program was attributed primarily to the fact that Indians were "allowed to operate programs." For the first time in history, Deloria commented, "Indian parents have become excited about education for their children. . . . For the last 100 years, the Government has been doing things for us and telling us what is best for Indians . . . of course there has been no progress . . ." Progress in education was just one area in which Native Americans were gaining more independence.
As tribes gained more control over their own affairs and more infrastructure of their own, they acquired greater command of their land and resources and generated more revenue, which led to power and progress.
- Contemporary Native American issues in the United States
- Legal status of Hawaii
- Aboriginal self-government in Canada
- Ethnic separatism
- Indigenous rights
- Tribal sovereignty in the United States
- National questions
- Federally recognized tribes
- Tribe (Native American)
- "Bureau of Indian Affairs", U.S. History
- Canby Jr., William C. American Indian Law in a Nutshell. St. Paul: West Publishing Co., 2004. p. 55
- Utter, Jack. American Indians: Answers to Today's Questions. Oklahoma: University of Oklahoma Press, 2001. pp. 269, 277-278, 400-
- Canby Jr., William C. American Indian Law in a Nutshell. St. Paul: West Publishing Co., 2004. pp. 29-33
- Cook, Samuel R. "What is Indian Self-Determination?", reprinted from RED INK, Volume 3, Number One (1 May 1994)
- Wilkinson, Charles. Blood Struggle: The Rise of Modern Indian Nations. Boston: W. W. Norton & Company, Incorporated, 2006. p. 192
- Wilkinson, Charles. Blood Struggle: The Rise of Modern Indian Nations. Boston: W. W. Norton & Company, Incorporated, 2006. pp. 212-217
- Wilkinson, Charles. Blood Struggle. pp. 186-189
- Wilkinson, Charles. Blood Struggle. pp. 243-248
- "Archived copy". Archived from the original on February 9, 2010. Retrieved December 21, 2009.
- Thomas W. Cowger, The National Congress of American Indians: The Founding Years (Lincoln, NE: University of Nebraska Press, 1999) 3, Questia, 23 Nov. 2008.
- Wilkinson, Charles. Blood Struggle. p. 102
- Wilkinson, Charles. Blood Struggle. p. 103
- Wilkinson, Charles. Blood Struggle. p. 104
- Wilkinson, Charles. Blood Struggle. p. 128
- Hagan, William T. American Indians. Chicago, IL: University of Chicago Press, 1993. p. 190
- "About Us - Native American Rights Fund".
- Laurence M.
Hauptman, Tribes & Tribulations: Misconceptions about American Indians and Their Histories, 1st ed. (Albuquerque: University of New Mexico Press, 1995) 117, Questia, 1 Dec. 2008 <https://www.questia.com/PM.qst?a=o&d=45610283>.
- Gudzune, Jeffrey R. "Native American Rights Fund: National Advocacy." 4 May 2007. 1 Dec. 2008
- Wilkinson, Charles. Blood Struggle: The Rise of Modern Indian Nations. Boston: W. W. Norton & Company, Incorporated, 2006. pp. 184-186
- Margaret Connell Szasz, Education and the American Indian: The Road to Self-Determination since 1928, 3rd Rev. ed. (Albuquerque: University of New Mexico, 1999) 157, Questia, 14 Nov. 2008 <https://www.questia.com/PM.qst?a=o&d=10398785>.
Abrasion – The wearing away or cleaning by friction. Abrasion can also relate to the wearing away of a floor finish film by friction.
Abrasive – A product that works by abrasion. Products such as cleaners, polishes and pads may contain an abrasive.
Acid – A compound that ionizes in water to produce hydrogen ions. It readily donates protons to other substances and, when dissolved in water, creates solutions that conduct electricity, taste sour and turn litmus paper red. Inorganic acids (sometimes called mineral acids) include sulfuric, nitric, hydrochloric and phosphoric. Organic acids include acetic, oxalic, hydroxyacetic and citric. Acids are used in toilet bowl cleaners, rust removers and hard water stain removers.
Active Ingredients – The ingredients in a product that are specifically designed to achieve the product performance objectives.
Adhesion – One characteristic of soils or films which causes soils and oils to stick or bond to surfaces, making them difficult to remove.
Alcohols – Organic compounds that contain one or more hydroxyl groups (-OH functional groups) in each molecule. Alcohols used in cleaners include ethyl, methyl, propyl and butyl.
Aliphatic Solvents – Sometimes referred to as paraffins, or as straight-chain or open-chain solvents. Kerosene, odorless mineral spirits and mineral seal oil are examples of aliphatic solvents.
Alkali or Base – Describes a solution formed when a base dissolves in water to form a solution which contains more hydroxide ions than hydrogen ions. Alkaline solutions have a pH of more than 7, turn red litmus paper blue, and feel soapy because they react with the skin. Alkalinity is exhibited in solution by alkalies such as sodium or potassium hydroxide, or alkaline salts such as sodium carbonate. A substance used in some wax strippers, degreasers and cleaners to assist in soil and finish removal.
Ammonia – An alkaline gas composed of nitrogen and hydrogen.
Aqueous solutions with 5-10% ammonia are sold as household ammonia.
Amphoteric Surfactant – A surfactant that, in water solution, may be either anionic or cationic, depending upon the pH.
Anhydrous – A product that has had all of the water removed.
Anion – An ion with a negative charge, formed when an atom gains electrons in a reaction. The atom now has more electrons than protons.
Anionic Surfactant – The negatively charged part of a molecule. Anionic surfactants are widely used in high-sudsing detergents.
Antiredeposition Agent – An ingredient used in detergents to help prevent soil from redepositing on surfaces or fabrics. Sodium carboxymethylcellulose (CMC) is the most widely used.
Aromatic Solvents – Solvents made of compounds that contain an unsaturated ring of carbon atoms, typified by benzene structures. Xylene and toluene are aromatic solvents, sometimes referred to as ring hydrocarbons.
Atom – The smallest particle of an element that retains the chemical properties of that element. The atoms of many elements are bonded together in groups to form particles called molecules. Atoms consist of three main types of smaller particles: electrons, protons and neutrons.
Biodegradable – The ability of a substance to be broken down into simpler, smaller parts by a biological process. Many plastics are not biodegradable.
Bleach – A product that cleans, whitens, removes stains and brightens fabrics.
Boiling Point – The temperature at which a liquid changes to a vapor state at a given pressure.
Buffer – In chemistry, any substance in a fluid which tends to resist a sudden change in pH when acid or alkali is added. Buffering is provided by complex phosphate builders, sodium carbonate, sodium silicate and sodium citrate. Usually a solution of a weak acid and its conjugate base, or a weak base and its conjugate acid.
Builder – A material that upgrades or protects the cleaning efficiency of a surfactant.
Builders inactivate water hardness, supply alkalinity to assist cleaning, provide buffering to maintain alkalinity, prevent redeposition of soil, and emulsify oily and greasy soils.
Build-up – A heavy deposit of floor finish, wax, dirt and grime. It is caused by adding layer after layer of floor finish over dirt without deep scrubbing the old layers away first. These build-ups are frequently found along baseboards and in corners.
Calcium Carbonate – An inorganic compound that occurs naturally as chalk and limestone. Its very slight solubility in water is a chief cause of "hardness" in water.
Catalyst – An element or compound that accelerates the rate of a chemical reaction but is neither changed nor consumed by it.
Cation – An ion with a positive charge, formed when an atom loses electrons in a reaction. The atom now has more protons than electrons.
Cationic Surfactant – A surfactant with a positively charged ionic group. The most common cationic surfactants are known as quaternary ammonium compounds, such as alkyl dimethyl benzyl ammonium chloride. These are widely used as disinfectants and sanitizers.
Caustic – A strong alkaline substance which irritates the skin.
Ceramic Tile – Clay tile with an impervious, usually glossy, layer on the surface.
Chelating Agent – An organic sequestering agent used to inactivate hard water and other metallic ions in water. Additives in detergents for inactivating the minerals in water that interfere with cleaning; ingredients include ethylene diamine tetraacetic acid (EDTA), NTA and sodium citrate.
Chemical Reaction – Any change which alters the chemical properties of a substance or which forms a new substance. During a chemical reaction, products are formed from reactants.
Chemical Symbol – A shorthand way of representing an element in formulas and equations. Sodium chloride is represented in chemical symbols by NaCl (Na is sodium and Cl is chlorine).
Chemistry – The study of substances.
What they are made of and how they work. It is divided into three main branches: physical chemistry, inorganic chemistry and organic chemistry.
Chlorinated Solvents – An organic solvent that contains chlorine atoms as part of the molecular structure. Examples include methylene chloride and trichloroethylene.
Chlorine Bleach – A group of strong oxidizing agents commonly sold in an approximately 5% solution of sodium hypochlorite. Care should be taken never to mix chlorine bleach with ammonia or hydrochloric acid.
Cleaning – Cleaning is locating, identifying, containing, removing and disposing of unwanted substances (pollutants) from the environment. It is our most powerful means of managing our immediate surroundings and protecting our health.
Cleanser – A powdered or liquid cleaning product containing abrasives, surfactants and frequently a bleach.
Cloud Point – The temperature at which a surfactant becomes insoluble in water. This becomes important when designing detergents for use in hot water.
Coagulation – An irreversible process in which a number of emulsion droplets coalesce, leading to complete separation of the emulsion.
Colloid – A type of solution in which the particles are not dissolved but are dispersed throughout the solvent or medium and held in suspension.
Compatibility – The ability of two or more substances to mix without objectionable changes in their physical or chemical properties.
Compound – A combination of two or more elements, bonded together in some way. It has different physical and chemical properties from the elements it is made of. Compounds are often difficult to split into their elements and can only be separated by chemical reactions.
Concrete – A mixture of sand, gravel, Portland cement and water that forms a very hard surface when dry. It is one of the most common floor types found in buildings. Other types of floors, like vinyl and vinyl composition tile, are often laid over the top of concrete.
Corrosion Inhibitor – A material that protects against the wearing away of surfaces. Sodium silicate is a corrosion inhibitor commonly used in detergents.
Critical Micelle Concentration – The concentration of a surfactant in solution at which the molecules begin to form aggregates called micelles, while the concentration of free surfactant in solution remains constant.
Defoamers – Substances used to reduce or eliminate foam.
Degreaser – A specialty product that removes grease and greasy/oily soils from hard surfaces. Basic ingredients include surfactants that penetrate and emulsify, along with alcohol or glycol derivatives to boost cleaning.
Deionized Water – Water from which charged or ionizable organic or inorganic salts are removed.
Deliquescent – Describes a substance which absorbs water vapor from the air and dissolves in it, forming a concentrated solution. Calcium chloride is an example.
Density – The mass of a substance divided by its volume.
Detergent – A washing and cleaning agent with a composition other than soap. Detergents, unlike soaps, are less sensitive to minerals in water.
Diffusion – The spontaneous and even mixing of gases or liquids.
Dispersing Agent – A material that reduces the cohesive attraction between like particles.
Dispersion – A colloidal system characterized by a continuous (external) phase and a discontinuous (internal) phase. Uniformity of dispersions can be improved by the use of dispersing agents.
Distilled Water – Water which has had salts removed by distillation. It is very pure, but does contain some dissolved gases.
Dwell or Contact Time – Describes the time a cleaning solution must remain in contact with a surface in order to do its work.
Efflorescent – Describes a crystal which loses part of its water of crystallization to the air. A powdery coating is left on its surface. The forming of a white powdery substance on the surface of concrete or brick is an example.
Electrolytes – Substances capable of conducting an electric current, either in their pure liquid state or when in solution.
Acids, bases and salts are all electrolytes.
Electrostatic Attraction – The attractive force between two oppositely charged ions.
Elements – A pure substance that cannot be broken down into smaller substances. Elements are considered the building blocks of all matter. There are just over 100 known elements, classified in the periodic table.
Elements, Compounds and Mixtures – These are the three main types of chemical substances. All substances are made of elements, and most are a combination of two or more elements.
Emulsification – The action of breaking up fats, oils and other soils into small particles which are then suspended in a solution.
Emulsion – A two-phase liquid system in which small droplets of one liquid are uniformly dispersed throughout the second. An oil-in-water (O/W) emulsion is one in which the continuous phase is aqueous, while a water-in-oil (W/O) emulsion is one in which the continuous phase is oil.
Enzyme – Protein molecules produced within an organism that are used as catalysts for biochemical reactions.
Etch – A chemically caused change on the outside of a smooth floor surface which causes the floor to be pitted or rough.
Eutrophication – An overgrowth of aquatic plants caused by an excess of nitrates, nitrites and phosphates. It results in a shortage of oxygen in the water, causing the death of aquatic life.
Evaporation – A change of state from liquid to gaseous (vapor), due to the escape of molecules from the surface. A liquid which evaporates readily is described as volatile.
Evaporation Speed – Expressed in relation to the evaporation rate of n-butyl acetate, which is standardized at 1.0. Products with rates greater than 1.0 evaporate faster than n-butyl acetate; rates lower than 1.0 indicate slower evaporation.
Exothermic Reaction – A reaction in which heat is given off to the surroundings as the products of the reaction are formed.
The addition of high concentrations of sodium hydroxide to water produces an exothermic reaction.
Fatty Acid – An organic substance which reacts with a base to form a soap. Tallow and coconut oil are common sources.
Flashpoint – The minimum temperature at which a liquid gives off vapor in sufficient concentration to ignite when tested.
Flocculation – A reversible process in which a number of emulsion droplets stick together to form a cluster which can be broken up by mechanical action, restoring the emulsion to its original form.
Foam – A mass of bubbles formed on liquids by agitation. Foam can be unstable, transient or stable, depending upon the presence and nature of the components in the liquid.
Gas Form of Matter – A gas has no shape, diffuses readily, and assumes the full-volume shape of any closed container. Gas molecules are widely distributed and can move in any direction.
Grains Hardness – A measure of water hardness: the actual amount of dissolved calcium and magnesium salts, measured in grains per gallon or parts per million.
Hard Water – Water which contains calcium and magnesium salts that have dissolved from the rocks over which the water has flowed. Water that does not contain these salts is called soft water. There are two types of hardness: temporary hardness, which can be removed relatively easily, and permanent hardness, which is more difficult to remove.
Heterogeneous – Describes a substance which varies in its composition and properties from one part to another. Properties differ from place to place within the solution.
HLB (Hydrophile-Lipophile Balance) – A property of a surfactant, represented on an arbitrary scale of 0-20, wherein the most hydrophilic materials have the highest numbers. The HLB of a nonionic surfactant is approximately the weight percent of ethylene oxide in the surfactant divided by 5.
Homogeneous – Describes a substance which is the same throughout in its properties and composition.
Humidity – A measure of moisture in the atmosphere.
It depends on the temperature and is higher in warm air than cold air.
Hydrophilic – A descriptive term applied to the group or radical of a surfactant molecule that makes, or tends to make, it soluble in water. Opposite the hydrophilic portion of a surfactant molecule is the hydrophobic (water-hating) portion.
Hydrotrope – A substance that increases the solubility in water of another material which is only partially soluble.
Hygroscopic – Describes a substance which can absorb up to 70% of its own mass of water vapor. Such a substance becomes damp, but does not dissolve.
Insolubility – The inability of one substance to dissolve in another.
Interfacial Tension – A measure of the molecular forces existing at the boundary between two phases, expressed in dynes/cm. Liquids with low interfacial tension are more easily emulsified.
Ions – An electrically charged particle, formed when an atom loses or gains one or more electrons to form a stable outer shell. All ions are either cations or anions.
Liquid Form of Matter – A liquid assumes the shape of its container. The molecules of a liquid are in constant motion and do not have the fixed arrangement found in solids.
Matter – Any substance that has mass (weight) and occupies space. It exists in any of three forms: solid, liquid or gas.
Micelle – A spherical grouping of detergent molecules in water. Oils and greases dissolve in the hydrophobic center of the micelle.
Miscibility – A term often used interchangeably with solubility. It is the ability of a liquid or gas to dissolve uniformly in another liquid or gas.
Mixture – A blend of two or more elements and/or compounds which are not chemically combined. A mixture can usually be separated into its elements or compounds fairly easily by physical means.
Molecules – The smallest particle of an element or compound that normally exists on its own and still retains its properties. Molecules normally consist of two or more atoms bonded together.
Some molecules have thousands of atoms. Ionic compounds consist of ions and do not have molecules.
Neutral – A chemical state that is neither acid nor alkali. A pH of 7 is considered neutral.
Neutral Cleaner – A floor cleaner that has a pH compatible with the finish to be cleaned. Generally this means a pH of between 7 and 9. Higher-pH floor cleaners can attack the floor finish and dull it.
Nonionic Surfactant – A surface-active agent that contains neither positively nor negatively charged functional groups. These surfactants have been found to be especially effective in removing oily soil.
Oxidation – To combine with oxygen. Slow oxidation is typified by the rusting of a metal.
Oxidizing Agent – A substance that accepts electrons in an oxidation-reduction reaction; a substance that causes the oxidation of a reactant molecule.
pH – A measurement of the acidity or alkalinity of a substance, expressed as a number from 0 to 14, with 0 being strongly acidic, 14 strongly alkaline, and 7 neutral. Distilled water has a pH of 7.
Phosphates – Substances added to a detergent to increase its water-softening ability.
Physical Properties – Qualitative and quantitative properties that describe a substance. They include smell, taste, color, melting point, density, hardness, etc.
Pine Oil – An oil processed from the gum of pine trees.
Polar Solvent – Water is the most common polar solvent.
Porous – Describes a surface that has many tiny openings. A porous surface will require more finish or sealer to fill and smooth out these openings.
Precipitate – Material settled out of solution.
Preservatives – Floor finishes are susceptible to bacterial contamination, which is why finishes contain small amounts of antimicrobial agents to prevent microbial deterioration. These preservatives protect the unopened container, but do not substantially protect finish after it has been used. This is why it is important never to pour used floor finish back into a container of unused finish.
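The HLB rule of thumb in this glossary (commonly attributed to Griffin: a nonionic surfactant's ethylene-oxide content, as a weight percentage, divided by 5) is simple enough to check numerically. Below is a minimal sketch in Python; the function name and the example percentages are illustrative, not taken from any product data:

```python
def hlb_nonionic(eo_weight_percent: float) -> float:
    """Estimate the HLB of a nonionic surfactant from its ethylene-oxide
    content, using the weight-percent-divided-by-5 rule of thumb."""
    if not 0 <= eo_weight_percent <= 100:
        raise ValueError("ethylene-oxide content must be a percentage (0-100)")
    return eo_weight_percent / 5

# A surfactant that is all ethylene oxide by weight maps to the top of the 0-20 scale:
print(hlb_nonionic(100))  # 20.0
# A hypothetical surfactant that is 60% ethylene oxide by weight:
print(hlb_nonionic(60))   # 12.0
```

Higher values (more hydrophilic) correspond to the upper end of the 0-20 scale described in the HLB entry.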
Reagent – A substance used to start a chemical reaction. In the laboratory, hydrochloric acid, sulfuric acid and sodium hydroxide are common reagents.
Salt – An ionic compound formed by the reaction between an acid and a base.
Saponification – The process of converting a fat into soap by treating it with an alkali; also a process used to remove grease and oil.
Saturated – Describes a solution that will not dissolve any more solute at a given temperature. Any additional solute will remain as crystals.
Scientific Method – A standardized way that scientists research and find answers to questions and problems.
Sequestering Agents – Chemicals that tie up water hardness and prevent the precipitation of hard water salts. This action helps keep liquid soap clear.
Soils – Describes a wide group of substances that attach themselves to surfaces, creating pollutants. Soils loosely attach themselves to surfaces by surface tension, electrical attraction or chemical bonding.
Solid Form of Matter – A solid holds its shape and volume even when not in a container. The molecules of a solid are tightly compacted and move only slightly.
Solvents – A liquid which dissolves another substance. Water is the most common solvent.
Specific Gravity – The ratio of the weight of a given volume of a liquid to the weight of an equal volume of distilled water, which has a specific gravity of 1. If the specific gravity of a substance is greater than 1 it sinks in water; if less than 1 it floats.
States of Matter – A substance can be solid, liquid or gaseous. Substances can change between states, normally when heated or cooled to increase or decrease the energy of the particles.
Surface Tension – The attractive forces which liquid molecules have for each other.
Surfactant – Substances which lower the surface tension of water. These surface-active agents modify the emulsifying, foaming, dispersing, spreading and wetting properties of a product.
Suspension – The process of a cleaning agent holding insoluble dirt particles in the cleaning solution and keeping them from redepositing on a clean floor.
Synergistic – Describes chemicals that, when combined, have a greater effect than the sum of the two independently.
Synthetic Detergents – These are sometimes called soapless detergents. They are typically made from by-products of refining crude oil. They do not form a scum in hard water and lather better than soaps.
Thinner – A liquid used to reduce the viscosity of a coating, and that will evaporate before or during the cure of a film.
Titration – A procedure that uses a neutralization reaction to determine the normality (the number of equivalents per liter of solution) of an unknown acid or base solution.
Universal Solvent – Water is called the universal solvent because it dissolves both ionic compounds and polar molecular compounds. Water usually cannot dissolve nonpolar molecules.
Use-Dilution – The final concentration at which a product is used.
Vapor Pressure – A measure of a liquid's tendency to evaporate. Every liquid has a characteristic vapor pressure that changes as the temperature of the liquid changes. Generally, as the temperature of a liquid increases, its vapor pressure also increases.
Viscosity – The thickness of a liquid, which determines pourability. Water has a viscosity of 1 centipoise; the resistance to flow of other liquids is measured relative to water in centipoise.
Volatile – The part of a product that evaporates during drying.
Water Hardness – A measure of the amount of metallic salts found in water. Hard water can inhibit the action of some surfactants and reduce the effectiveness of the cleaning process.
Weight per Gallon – The weight per gallon of any liquid is determined by multiplying the weight of a gallon of distilled water (8.33 lbs.) by the specific gravity of the liquid.
Wetting Agent – A chemical which reduces the surface tension of water, allowing it to spread more freely.
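The specific gravity and weight-per-gallon entries reduce to one multiplication: the weight of a gallon of distilled water (8.33 lbs.) times the liquid's specific gravity. A short illustrative sketch; the example specific gravities are made up for demonstration, not measured data:

```python
WATER_LB_PER_GAL = 8.33  # weight of one US gallon of distilled water, per the glossary

def weight_per_gallon(specific_gravity: float) -> float:
    """Weight in pounds of one gallon of a liquid, from its specific gravity."""
    return WATER_LB_PER_GAL * specific_gravity

def sinks_in_water(specific_gravity: float) -> bool:
    """A substance denser than water (specific gravity > 1) sinks; lighter floats."""
    return specific_gravity > 1

# Distilled water itself: 8.33 lbs. per gallon
print(weight_per_gallon(1.0))            # 8.33
# A hypothetical solvent 20% denser than water:
print(round(weight_per_gallon(1.2), 2))  # 10.0
print(sinks_in_water(1.2))               # True
```

The same pattern works in reverse: dividing a measured weight per gallon by 8.33 recovers the specific gravity.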
Friedrich Hermann Hund (born February 4, 1896 in Karlsruhe; † March 31, 1997 in Göttingen) was a German physicist. He made significant contributions to the development of atomic physics, and Hund's rules are named after him. Friedrich Hund was the son of the hardware and household goods dealer Friedrich Hund, who lived on Friedenstrasse in Karlsruhe. He went to school in Karlsruhe, Erfurt and Naumburg an der Saale, where he graduated from high school in 1915. He broke his foot shortly before the outbreak of the First World War and was the only one in his class who did not have to go straight to war; instead he helped his teacher, Professor Paul Schoenhals, teach the younger students. After that, Hund served for two years in the Navy's weather service. His parents could not finance his studies, which is why his teacher gave him a small scholarship, which he supplemented with tutoring.
Education and apprenticeship years
He studied mathematics, physics and geography in Marburg and Göttingen and passed his teaching exams in 1921/22. In Göttingen he attended lectures by James Franck, David Hilbert, Richard Courant and Carl Runge, among others. In 1922 he received his doctorate in Göttingen under Max Born with a thesis on the Ramsauer effect, while completing his teaching traineeship at a Göttingen grammar school. He was Born's scheduled assistant from 1922 to 1927, as were Werner Heisenberg and Pascual Jordan (both in unscheduled positions). After his habilitation in 1925 he was a private lecturer in theoretical physics in Göttingen. In 1926/27 he spent a few months with Niels Bohr in Copenhagen. In 1927 he became an associate professor, and in 1928 a full professor of theoretical physics in Rostock. After a visiting professorship at Harvard University in 1929, he went to Leipzig. In America he also taught at the University of Chicago and a few other universities.
Leipzig (1929 to 1946)
In 1929 he was appointed professor of mathematical physics (as successor to Gregor Wentzel) at the University of Leipzig, where Heisenberg also worked; with Heisenberg he led a seminar on the structure of matter for many years, and Leipzig became a center for theoretical physics from the late 1920s onward. Hund was friends with Heisenberg and, like other leading German physicists, defended him against the threatening campaign sparked by Johannes Stark (with an article in the SS newspaper Das Schwarze Korps), which was also directed against all of "modern theoretical physics". Hund wrote letters of protest to Paul Koebe, the dean of the mathematics and natural sciences faculty at the University of Leipzig, and to the Reich Minister for Education, Bernhard Rust, and suggested that Peter Debye make a statement. In contrast to Heisenberg, he was not involved in the uranium project during the Second World War. When Heisenberg went to Berlin in 1942, Hund took over the directorship of the Leipzig physics institute. In 1943, like Pascual Jordan shortly before him, he received the Max Planck Medal, the highest award for theoretical physics of the German Physical Society, which had not been awarded since 1939. After the war he became pro-rector in Leipzig in 1945. When the US Army withdrew at the end of June 1945, he went into hiding so as not to be transported to a camp in the west. His concern was not his house; rather, he did not want to be carried off to the West as spoils of war (a "slave").
Jena (1946 to 1951)
In 1946 he moved to the University of Jena as a professor, where he became rector in 1948. By his own account, the workload in Leipzig, where he also had to give lectures on experimental physics, left him too little time to rebuild the physics institutes. The main reasons for the move, however, were that the institutes in Leipzig had been destroyed while those in Jena were largely undamaged, and that Leipzig had few remaining colleagues, in contrast to Jena.
However, an apartment suitable for seven people was a prerequisite, which is why the move almost failed. In Jena there were conflicts with the Soviet occupation authorities, by whom he was otherwise highly regarded. In 1949 he received the GDR National Prize. Hund decreed that children of university teachers could also study, a ruling which General Kolesnitschenko repealed in line with the ideological views of the time. Hund tried to clarify the responsibilities between Jena University, the Thuringian Ministry and the occupying power. In Berlin he had managed to get 25 talented applicants admitted to study physics in Jena. Later, applicants who were not workers' or farmers' children could be admitted to study medicine or theology; the lack of doctors was one of the reasons. An anonymous complaint from the university administration accusing Hund of a lack of political activity reached the ministry in September 1948. Minister Torhorst asked the rector to come to the ministry for an interview. When Hund realized that she had orders to dismiss him but no real reasons, he resigned on his own initiative. His term of office as rector thus lasted from February to October 1948. At the end of April 1949 he received a purchase coupon for a pair of shoes from Thuringian Prime Minister Werner Eggerath. At the end of July 1951, after he had returned to Jena from a guest lecture in Frankfurt am Main, he left the GDR and went to the West via Berlin with his family. He had to leave almost all of his possessions behind, but the Russian authorities later sent him his furniture and other belongings.
Move to Frankfurt am Main
The two most important reasons for moving from Jena to Frankfurt were the future of his children and the political situation; he had hoped a democracy would work better. Hund became a professor in Frankfurt am Main in 1951, succeeding Erwin Madelung.
There he found his former colleague Bernhard Mrowka, with whom he had written important papers on the physics of electrons in diamond in 1935. In Frankfurt he wrote, among other things, an extensive book, "Matter as Field", with which, as stated in the foreword, he "wants to help remove the contrast between thinking and experimenting physicists" (meaning the attempt to bridge the gap between theoretical and experimental physics).
Back in Göttingen (another 40 years)
In 1964 Friedrich Hund retired. However, he remained active in academic research and teaching, not only in Göttingen, where he lectured until 1990, but also as a visiting professor in Cologne in 1968, in Heidelberg in 1969, in Frankfurt am Main in 1970 and later in Wuppertal. His particular expertise was the history of modern physics, which he had personally experienced and shaped. He was blind for the last years of his life, but that did not prevent him from giving lectures and discussing. His doctoral students include Harry Lehmann (Jena 1950), Hans Euler (jointly with Heisenberg), Carl Friedrich von Weizsäcker (in Leipzig) and, back in Göttingen, Jürgen Schnakenberg and Gert Eilenberger, as well as Heinz Bilz in Frankfurt. Siegfried Flügge was his assistant in Leipzig; Edward Teller was also his assistant. Hund published more than 250 essays and papers. Even before quantum mechanics emerged, Hund interpreted the complex spectra of the elements from scandium to nickel. On the basis of quantum mechanics, he then contributed significantly to the theory of molecular spectra and to the elucidation of the relationship between term structure and symmetry of quantum mechanical systems. In 1925 he set up Hund's rule, initially a purely empirical rule in atomic physics; it was theoretically justified only later and expanded into three rules. In 1926/27 he discovered and described, first in optically isomeric molecules, what later became known as the tunnel effect (the discovery of which is mostly attributed to George Gamow).
In molecular physics and spectroscopy, a distinction is made between the so-called Hund's coupling cases (a) to (e), depending on the way in which the various quantum mechanical angular momenta (electron spin, orbital angular momentum, rotation) couple to form the total angular momentum (vector addition). Also named after him in molecular physics is the Hund-Mulliken method (today mostly called molecular orbital theory), which must be distinguished from the Heitler-London method and which also plays a major role in theoretical chemistry. In formulating it, he worked with Robert S. Mulliken, whom he had known since 1925 in Göttingen and with whom he worked in Göttingen in 1927, in Chicago in 1929 and in Leipzig in 1930 and 1933. The joint seminar of Heisenberg and Hund achieved international recognition and attracted students from many countries. In Leipzig Hund broadened his field of work and also turned to nuclear physics. Independently of Eugene Wigner, in 1937 he was the first to investigate an approximate SU(4) symmetry in nuclear spectra (which results from the spin and isospin invariance of the nuclear forces). In 1936 he also investigated the behavior of matter under very high pressure, with applications in astrophysics, as well as systematic problems of theoretical solid-state physics (electron wave functions in crystal lattices, especially under the influence of magnetic fields, and especially in diamond lattices). In his later years, Hund mainly dealt with the history of physics, especially of quantum theory, the development of which he himself had helped to shape in the twenties. In addition to the more specific literature mentioned below, he wrote a widely used systematic series of textbooks on theoretical physics, some of which have been translated into other languages. He wrote in 1978: "I am delighted that my colleague K. Yamazaki has taken the trouble to translate my history of quantum theory into Japanese."
Hund kept a scientific diary from 1912 onward, which, together with the other documents mentioned below, is kept in the Lower Saxony State and University Library in Göttingen. His future wife, Ingeborg Seynsche, received her doctorate in philosophy at the Georg-August-Universität Göttingen in 1930 under Richard Courant with the dissertation "On the theory of almost periodic number sequences". It was a mathematics topic suggested by Harald Bohr and Alwin Walther. Later she dealt with, among other things, two-sided plane ornaments. The family had six children: Gerhard (* 1932), Dietrich (1933–1939), Irmgard (* 1934), Martin (1937–2018), Andreas (* 1940) and Erwin (* 1941). His final resting place is in Munich, where his wife Ingeborg, his sister Gertrud and his son-in-law Dieter Pfirsch are also buried. Hund had been a member of the Saxon Academy of Sciences since 1933, of the Leopoldina since 1944, of the German Academy of Sciences in the GDR since 1949 (and an honorary member of its successor, the Berlin-Brandenburg Academy of Sciences, since 1994), and since 1958 of the Göttingen Academy of Sciences, of which he was an honorary member from 1991. He was also an honorary member of the German Physical Society.
- 1943: Max Planck Medal of the German Physical Society
- 1949: GDR National Prize, 2nd class, for science and technology
- 1965: Great Cross of Merit of the Federal Republic of Germany
- 1971: Cothenius Medal of the Leopoldina
- 1974: Otto Hahn Prize for Chemistry and Physics
- 1976: Gauss-Weber Medal of the University of Göttingen
- 1987: Gerlach Adolph von Münchhausen Medal of the University of Göttingen
- Honorary doctorates from the Universities of Frankfurt am Main (1966), Uppsala (1973) and Cologne (1983)
- Honorary memberships in various scientific societies, including the German Physical Society (1977)
- 1996: Honorary citizen of the city of Jena
- Attempt to interpret the high permeability of some noble gases for very slow electrons. Dissertation, University of Göttingen 1923
- Line spectra and periodic system of the elements. Habilitation thesis, University of Göttingen, Springer 1927
- General quantum mechanics of atomic and molecular structure. In Handbuch der Physik, Volume 24/1, 2nd edition, pp. 561–694 (1933)
- Matter as field. Berlin, Springer 1954
- Introduction to Theoretical Physics, 5 volumes 1944–1951. Meyers Kleine Handbücher, Leipzig, Bibliographisches Institut, 1945, 1950/1951 (Volume 1: Mechanics; Volume 2: Theory of Electricity and Magnetism; Volume 3: Optics; Volume 4: Theory of Heat; Volume 5: Atomic and Quantum Theory)
- Theoretical Physics, 3 volumes. Stuttgart, Teubner, first published 1956–1957 (Volume 1: Mechanics, 5th edition 1962; Volume 2: Theory of Electricity and Light, Theory of Relativity, 4th edition 1963; Volume 3: Theory of Heat and Quantum Theory, 3rd edition 1966)
- Theory of the structure of matter. Stuttgart, Teubner 1961
- Basic concepts of physics. Mannheim, BI 1969; 2nd edition 1979
- History of Quantum Theory, 1967; 2nd edition, Mannheim, BI 1975; 3rd edition 1984
- Quantum Mechanics of Atoms. In Handbuch der Physik / Encyclopedia of Physics, Volume XXXVI, Berlin, Springer 1956
- The history of Göttingen physics. Vandenhoeck and Ruprecht 1987 (Göttingen University Speeches)
- History of physical terms, 1968; 2nd edition (2 volumes), Mannheim, BI 1978 (Volume 1: The emergence of the mechanical image of nature; Volume 2: The paths to today's image of nature); Spektrum Verlag 1996
- Göttingen, Copenhagen, Leipzig in retrospect. In Fritz Bopp (ed.): Werner Heisenberg and the physics of our time. Braunschweig 1961
- Max Born, Göttingen and quantum mechanics. Physikalische Blätter, Volume 38, 1982, pp. 349–351. doi:10.1002/phbl.19820381107
- The Correspondence Principle as a Guide to Quantum Mechanics from 1925. Physikalische Blätter, Volume 32, 1976, pp. 71–77. doi:10.1002/phbl.19760320203
- Could the history of quantum theory have turned out differently? Physikalische Blätter, Volume 31, 1975, pp. 29–35. doi:10.1002/phbl.19750310107
- Highlights of Göttingen Physics. Part 1, Physikalische Blätter, Volume 25, 1969, pp. 145–153, doi:10.1002/phbl.19690250401; Part 2, pp. 210–215, doi:10.1002/phbl.19690250503
- See also the list of writings by Friedrich Hund (1896–1997) with around 300 items
- Werner Heisenberg, Dieter Pfirsch and others: Dedicated to Professors Friedrich Hund and M. Czerny on their 60th birthday. Springer-Verlag Berlin Göttingen Heidelberg 1956, Zeitschrift für Physik, Volume 144
- Max Born: Friedrich Hund 70 years. Physikalische Blätter, Volume 22, 1966, p. 79
- Heinz Gerischer: F. Hund on his 75th birthday: The Bunsen Society congratulates its honorary member. Reports of the Bunsen Society for Physical Chemistry 1971, Volume 75/2, p. 97.
doi:10.1002/bbpc.19710750202
- Joachim Poppei: The life and work of Friedrich Hund, with special consideration of the time in Leipzig and Jena. Physics Section of the Karl Marx University Leipzig, December 1, 1983, 26 pages. Friedrich Hund's estate at the Göttingen State and University Library
- J. Hajdu: Friedrich Hund on his 90th birthday. Physikalische Blätter, Volume 42, 1986, p. 1
- An interview on the occasion of Friedrich Hund's 90th birthday. Bild der Wissenschaft, 2/1986, pp. 63–66
- Carl Friedrich von Weizsäcker: Friedrich Hund on his 95th birthday. Physikalische Blätter, Volume 47, 1991, p. 61
- Banger, Canel, Czjzek, Eilenberger, Fischer, Froböse, Gerlach, Hajdu, Hofacker, Keiter, Labusch, Langbein, Schnackenberg, Teichler: Friedrich Hund on his 95th birthday. Göttingen 1991, 269 pp.
- Michael Schaaf: On the 100th birthday of Prof. Dr. Friedrich Hund. CENSIS-REPORT-20-96, Hamburg, February 1996
- Michael Schaaf: Heisenberg, Hitler and the bomb: Conversations with contemporary witnesses. GNT-Verlag, Diepholz 2018, ISBN 978-3-86225-115-5 (including "Theoretical physics was defamed", a conversation with Friedrich Hund)
- Werner Kutzelnigg: Friedrich Hund and chemistry. Angewandte Chemie, Volume 108, 1996, pp. 629–643
- Hubert Laitko: Looking at the history of physics from the inside: Friedrich Hund as a historian of his field. News from the Academy of Sciences in Göttingen, 1996
- Manfred Schroeder (ed.): Hundred years of Friedrich Hund: A look back at the work of an important physicist. News from the Academy of Sciences in Göttingen, 1996 (contributions by G. Eilenberger, K. Hentschel, G. Herzberg, D. Langbein, H. Rechenberg, I. Supek, H. G. Walther, C. F. v. Weizsäcker)
- Siegfried Flügge (ed.): Friedrich Hund on his 70th birthday. Springer Tracts in Modern Physics, 1966
- J. Hajdu: Friedrich Hund: way and work. Zeitschrift für Physik D, Volume 36, 1996, pp. 191–195
- Friedrich Hund on his 100th birthday. Interview with Klaus Hentschel, Renate Tobies.
NTM (International Journal for the History and Ethics of Natural Sciences, Technology and Medicine), Volume 4, 1996, pp. 1–18. doi:10.1007/BF02913775
- Interview with Michael Schaaf from March 12, 1994: "Someone who dared to do something went to Göttingen". Physikalische Blätter, June 1997. doi:10.1002/phbl.19970530613
- Bernhard Kockel: Friedrich Hund 80 years. Physikalische Blätter, Volume 32, 1976, pp. 78–79. doi:10.1002/phbl.19760320204
- Helmut Rechenberg: Friedrich Hund 100 years: pioneer and teacher of physics, contemporary witness of the century. Philipp von Zabern, Mainz 1996, Akademie-Journal 1/96, pp. 44–49
- Peyerimhoff, Herzberg, Canel, Hajdu and others: Professor Friedrich Hund on his 100th birthday. Springer-Verlag 1996, Zeitschrift für Physik D, Volume 36, Issue 3/4
- Riffert, Müther, Herold, Ruder: Matter at High Densities in Astrophysics: Compact Stars and the Equation of State. In Honor of Friedrich Hund's 100th Birthday. Springer Tracts in Modern Physics 133, Berlin 1996, 274 pp., ISBN 3-540-60605-X
- Carl Friedrich von Weizsäcker, Edward Teller, Hendrik B. G. Casimir, Aage Bohr, Ulrich Schröder, Eleonore Trefftz: Friedrich Hund on his 100th birthday: greetings and congratulations from all over the world. VCH Weinheim 1996, Physikalische Blätter 52, Issue 2, pp. 114–115
- Helmut Reeh: Obituary in Spektrum (information organ of the University of Göttingen), 1997, Issue 2
- Helmut Rechenberg, Gerald Wiemers: Friedrich Hund (1896–1997). Saxon Life Pictures, 2004
- Andrej Smrdu: Hundovo pravilo (Hund's rule). Kemija, Snov in Spremembe 1, pp. 75–78, Ljubljana 2006, ISBN 961-6433-66-0
- Helmut G. Walther: The first post-war rectors Friedrich Zucker and Friedrich Hund. Reprint from University in Socialism: Studies on the history of the Friedrich Schiller University Jena (1945–1990), Volume 2, pp. 1911–1928. Böhlau Verlag Cologne Weimar Vienna 2007
- Ronald Beyer, Constanze Mann: The honorary citizens of the city of Jena.
Volume 17 of the series Documentations of the Jena Municipal Museums, 2007, ISBN 978-3-930128-84-6
- Uwe Hoßfeld, Tobias Kaiser, Heinz Mestrup: University in Socialism: Studies on the History of the Friedrich Schiller University Jena (1945–1990). Böhlau Verlag Cologne Weimar, 2007, 2334 pages. Friedrich Hund (digitized version)
- Short biography in: Who was who in the GDR? 5th edition, Volume 1, Ch. Links, Berlin 2010, ISBN 978-3-86153-561-4
- Film (English): P. A. M. Dirac in conversation with F. Hund about symmetry in relativity, quantum mechanics and elementary particle physics. Institute for Scientific Film (IWF), Göttingen 1982, made available by the Technical Information Library (TIB), doi:10.3203/IWF/G-209
- Film: Quantum mechanics on the move: Friedrich Hund reports from his life. Göttingen 1988, interview partner: Helmut Rechenberg. Institute for Scientific Film, made available by the Technical Information Library, doi:10.3203/IWF/G-239
- Film (English): Friedrich Hund: Reminiscences of Robert S. Mulliken. Institute for Scientific Film (IWF), Göttingen 1988, made available by the Technical Information Library (TIB), doi:10.3203/IWF/G-232
An extensive collection of documents from the estate of Friedrich Hund is held in the Lower Saxony State and University Library in Göttingen, including the correspondence with the ministries of the GDR during the Jena period and a few years afterwards, in particular the leave of absence granted to Professor Dr. Friedrich Hund for the period from April 1 to July 31, 1951, by the State Secretary for Higher Education of the GDR, Prof. Dr. Harig, on March 8, 1951.
Manuscript on the withdrawal of the Americans from Leipzig in 1945
The adjacent pictures show the six pages of a manuscript that Friedrich Hund wrote between June 25 and July 3, 1945, when the Americans left Leipzig and transported many professors away in trucks.
The original of the protocol is in the possession of his eldest son. On page 2 Hund writes: "On the way to the tram we talked about compulsion, about being sent into slavery, which is how we had to regard it", and on page 4: "It is unworthy of the university if its professors are replaced like machine parts."
Other certificates (selection)
- Letter from Werner Eggerath to Hund with a purchase coupon for a pair of shoes, in recognition of his scientific achievements
- Certificate for the doctorate of Ingeborg Seynsche, issued by the Georg-August-Universität Göttingen on the occasion of the fiftieth anniversary of the award of the title of Doctor of Philosophy (February 28, 1930 in Göttingen)
- Certificate for the award of the Gerlach Adolph von Münchhausen Medal to Friedrich Hund in honor of his scientific life achievement, issued by the Georg-August-Universität Göttingen on the occasion of its 250th anniversary (Göttingen, May 26, 1987)
- Literature by and about Friedrich Hund in the catalog of the German National Library
- Literature about Friedrich Hund in the state bibliography MV
- Entry on Friedrich Hund in the Catalogus Professorum Rostochiensium
- Estate of FRIEDRICH HUND, physicist, February 4, 1896 to March 31, 1997. Göttingen State and University Library, 366 items on 43 pages, PDF file
- Homepage of Friedrich Hund
- Interview by Michael Schaaf with Friedrich Hund, 1994: "It is far too early for philosophy", PDF file
- Photographs and films of Friedrich Hund
- Friedrich Hund in the professorial catalog of the University of Leipzig
- Literature by and about Friedrich Hund, Berlin-Brandenburg Academy of Sciences, PDF file
- CALENDAR APRIL 2012: The physicist Friedrich Hund, University of Rostock, Faculty of Mathematics and Natural Sciences
- Friedrich Hund, the tunnel effect and the shining stars. Deutschlandfunk broadcast of February 4, 2016
- Scientific work of his wife Dr. Ingeborg Hund, née Seynsche
- Funeral service for Friedrich Hund on April 4, 1997. Audio (ogg file, 42 MB, 46 min)
- Friedrich Hund dissertation prizes of the University of Jena
- The merchant Friedrich Hund in the 1896 address book of the city of Karlsruhe
- Published in Zeitschrift für Physik, Volume 13, 1923, p. 241
- Complaint against the President of the Physikalisch-Technische Reichsanstalt, Prof. Dr. Johannes Stark (Memento of February 8, 2011 in the Internet Archive), written by Friedrich Hund on July 20, 1937
- Cassidy: Uncertainty, p. 382
- Hund in an interview with Schaaf
- See his handwritten notes (in the chapter on the protocol of the withdrawal of the Americans from Leipzig in 1945), which have not yet been published
- Interview with Schaaf
- There was also a dispute about this within the Soviet administration. Andrei Nikitin in Manfred Heinemann (ed.): University officers and the reconstruction of the higher education system in Germany 1945–1949: The Soviet Zone of Occupation. Akademie Verlag 2000, p. 4
- Helmut G. Walther: The first post-war rectors Friedrich Zucker and Friedrich Hund. Reprint from University in Socialism: Studies on the history of the Friedrich Schiller University Jena (1945–1990), Volume 2, p. 1921. Böhlau Verlag Cologne Weimar Vienna 2007
- e.g. F. Hund, B. Mrowka: On the states of electrons in a crystal lattice, especially in diamond. Physikalische Zeitschrift 30 (1935), pp. 888–891
- Friedrich Hund's biography
- Interview with Schaaf
- Writings of Friedrich Hund
- Brockhaus Encyclopedia, F. A. Brockhaus Wiesbaden 1975, Volume 22, p. 666, ISBN 3-7653-0028-4
- Hund: On the interpretation of complex spectra, especially of the elements scandium to nickel. Zeitschrift für Physik, Volume 33, 1925, pp. 345–371
- Hund: On the interpretation of the molecular spectra III. Zeitschrift für Physik, Volume 43, 1927, pp. 805–826. For molecules, Hund usually used the term "Molekel".
On Hund's discovery of the tunnel effect: Rechenberg, Mehra: The Historical Development of Quantum Theory, Volume 6, Part 1, p. 535
- Mulliken: Molecular Scientists and Molecular Science: Some Reminiscences. Journal of Chemical Physics, Vol. 43, 1965, S2–S11
- Hund: Symmetry properties of the forces in atomic nuclei and consequences for their states, in particular of nuclei with up to sixteen particles. Zeitschrift für Physik, Vol. 105, 1937, p. 202. See also Pais: Inward Bound, Oxford University Press 1986, p. 425
- Hund: Matter under very high pressures and temperatures. Ergebnisse der exakten Naturwissenschaften, Volume 15, 1936, pp. 189–228
- Hund estate, State and University Library Göttingen, PDF file
- Seynsche, I.: On the theory of almost periodic number sequences (dissertation). Rend. Circ. Mat. Palermo 55, 1931, 27 pp.
- Short biography of Ingeborg Seynsche on the DMV website (Memento from July 6, 2013 in the Internet Archive)
- Honors and diplomas of Friedrich Hund
- Member of the Académie internationale de science moléculaire quantique
- In America also Toronto, the Massachusetts Institute of Technology, and General Electric (with Irving Langmuir), according to Helmut Rechenberg, Jagdish Mehra: The Historical Development of Quantum Theory, Volume 6, Part 1, p. 559. At Harvard he gave lectures at the invitation of Edwin Kemble and Theodore Lyman
- According to Hajdu (Friedrich Hund, Zeitschrift für Physik D, Volume 36, 1996, p. 191), the appointment was made at the instigation of Heisenberg, although relations with Heisenberg were not without tension. Hund was not satisfied with his role as the second man behind Heisenberg, which showed, among other things, in the fact that Heisenberg ordered him back from his trip to the USA in 1929 so that Hund could represent him at the beginning of the semester while Heisenberg himself continued his world tour. Cassidy: Uncertainty: The Life and Science of Werner Heisenberg. Freeman 1992, p. 271, where Cassidy refers to an interview with Hund in 1981
- Also because he did not accept the Americans' invitation to the West
- Awarded to Friedrich Hund before the GDR was founded (children of national prize winners were allowed to begin studies)
- Some of these applicants are still alive and serve as contemporary witnesses
- This was the case, for example, for high school graduates of the classical-language branch in 1950, many of whom were still able to celebrate the 60th anniversary of their Abitur
- In this conversation with the minister he asked: "Would it be helpful for you if I step down?", at which she visibly breathed a sigh of relief. That is how he told it to his children.
- Although he had decided to go to the West, he was a bit sad; on the last evening in Jena he said to his daughter: "It will never be so nice again."
- He was also under discussion as a successor to Sommerfeld; in 1946 Sommerfeld himself proposed him as a candidate for his succession alongside Heisenberg and von Weizsäcker
- "Heisenberg mit Hund" ("Heisenberg with dog"): Friedrich Hund lecturing, Leipzig 1937 (Memento from September 12, 2007 in the Internet Archive). This is how the courses were officially announced, and since "Hund" is also the German word for "dog", it was cause for jokes not only among students but also among physicists such as Walther Gerlach (Hund, interview with Schaaf).
- Johann Jakob Burckhardt: Symmetry of the Crystals, 1988, p. 150, cites a manuscript from 1963 sent to him by Ingeborg Hund with a particularly attractive depiction of these ornament groups.
Alternative names: Hund, Friedrich Hermann (full name)
Brief description: German physicist
Date of birth: February 4, 1896
Place of birth: Karlsruhe
Date of death: March 31, 1997
Place of death: Göttingen
PERIOD: 20TH CENTURY
AREA: MEDITERRANEAN SEA AND BLACK SEA
Keywords: tsunami, atmospheric dynamics, physical oceanography

A series of tsunami-like waves of non-seismic origin struck several southern European countries during the period of 23 to 27 June 2014. The event caused considerable damage from Spain to Ukraine. Here, we show that these waves were long-period ocean oscillations known as meteorological tsunamis, which are generated by intense small-scale air pressure disturbances. A unique atmospheric synoptic pattern was tracked propagating eastward over the Mediterranean and the Black seas in synchrony with the onset times of the observed tsunami waves. This pattern favoured the generation and propagation of atmospheric gravity waves that induced pronounced tsunami-like waves through the Proudman resonance mechanism. This is the first documented case of a chain of destructive meteorological tsunamis occurring over a distance of thousands of kilometres. Our findings further demonstrate that these events represent potentially dangerous regional phenomena and should be included in tsunami warning systems.

A chain of destructive tsunami-like events took place during the period 23–27 June 2014 in the Mediterranean and Black Sea regions, affecting countries from Spain to Ukraine (Fig. 1). The sudden occurrence, height, destructiveness, and run-up of the observed waves all indicate that these were tsunami-like events. Great earthquakes or volcanic eruptions can lead to the generation of tsunami waves of such great spatial extent. However, even when crossing vast oceanic regions, these waves do not take longer than 48 hours to arrive at the most distant locations1,2. Landslides or atmospheric disturbances can also induce tsunamis, but they typically affect limited regions3,4.
In this study, we show that the tsunami waves observed along the coasts of a number of southern European countries in the last week of June 2014 had an atmospheric origin, and were therefore a series of individual meteorological tsunamis ("meteo tsunamis")4. The extraordinary expanse of this event shows that meteo tsunamis can have a widespread influence that is spatially comparable to other major tsunamigenic mechanisms.

A few hours after midnight on 22/23 June, 1-meter tsunami-like oscillations were observed in Ciutadella Inlet on the coast of the Balearic Islands (Spain). On 25 and 26 June, several tsunami-like waves with heights of up to 3 m struck a number of bays and harbours in the central and south Adriatic Sea. The strongest oscillations occurred in the morning of 25 June at the head of the 8-km long Vela Luka Bay. Here, at approximately 6:35 UTC, sea level rapidly reached +1.5 m and then, 10 min later, fell to −1.5 m relative to ambient sea level. (It was in this bay that a catastrophic meteo tsunami had caused serious flooding in June 19785.) Later the same day (at 11:00–15:00 UTC), oscillations similar to those in Vela Luka Bay, with wave heights of up to 2.5 m and strong (~10 knots) currents, were observed in other bays within the Adriatic located ~30–120 km from Vela Luka6. Intense tsunami-like waves also impacted the southwestern coast of Sicily at approximately 19:00 UTC on 25 June. The highest waves were observed at the mouth of the Mazara River, where a powerful wave created a destructive 1.5 m high hydraulic jump (bore) that significantly damaged a number of boats moored in the harbour7. Large seiche oscillations were also observed on 25–26 June at other coastal regions of the central and eastern Mediterranean, including the coasts of Italy, Greece and Turkey.
Finally, at noon on 27 June 2014, during a calm summer day, a 1 to 2 m high tsunami-like wave struck the beaches of Odessa and the neighbouring port town of Illichevsk in the northwestern Black Sea (Ukraine). Six people, including four children, were injured and had to be transported to a local hospital. All known tsunamis observed in the Mediterranean and Black seas were generated by earthquakes with magnitudes Mw > 5.58,9, while at the time of the June 2014 events the entire region was seismically quiet (Mw < 2.7). Moreover, these regions have not typically been associated with landslide-generated tsunamis. We therefore conclude that all the events that occurred during the June 2014 period were meteorological tsunamis (“meteo tsunamis”)4: long, destructive tsunami-like waves generated by atmospheric disturbances (atmospheric gravity waves, pressure jumps, frontal passages, squalls)4,10. This phenomenon is known in the Mediterranean Sea region as “rissaga” (Balearic Islands)4, “šćiga” (Adriatic Sea)5, and “marrubbio” (Sicily)4,11. Major meteo tsunamis in this region are usually associated not with extreme atmospheric events such as hurricanes or major storms, but with marginally detectable changes in atmospheric pressure, often caused by atmospheric gravity waves that are frequently present during times of calm weather, as was the case at the four locations. Meteo tsunami formation is related to very specific and comparatively rare resonant situations that lead to strong amplification of the initial open-sea waves4,10. Ciutadella Inlet on Menorca Island (Spain)12, Vela Luka Bay on Korčula Island (Croatia)5,13, and Mazara del Vallo harbour on the western coast of Sicily11 are the locations in the Mediterranean where meteo tsunamis (resembling extreme seiches) occur most often and have anomalously large heights (up to 3–6 m).
Meteo tsunamis are less common in the Black Sea, but there were several historical events on the coast of Turkey described as “tsunamis of unknown origin”9, which could have been meteo tsunamis. Also, an extraordinary event identified as a meteo tsunami occurred in May 2007 in the western Black Sea, when 2–3 m high tsunami waves hit the northern Bulgarian coastline. The event was associated with sudden air pressure changes that occurred during relatively calm surface weather14. In most other countries on the Black Sea coast, including Ukraine and Russia, meteo tsunamis are almost unknown. For this reason, the Odessa event of 27 June 2014 gave rise to considerable concern and misinformation, due primarily to the lack of a prompt scientific explanation. Rumours about the possible causes of the waves spread among the general public, and a number of unfounded explanations, including an underwater explosion, ship (submarine) waves, a whirlwind effect, extreme interacting currents, abrupt temperature changes and even “the great cross of planets”, surfaced in the media. This differed from locations in the Mediterranean, where most media reports stated succinctly that “meteo tsunamis were hitting once again!”. Thus, available data indicate a meteorological origin for the observed series of tsunami-like waves in the Mediterranean and Black Sea regions on 23–27 June 2014. All previously documented hazardous meteo tsunamis, except those associated with hurricanes or similar large-scale atmospheric structures, were local events that were observed in one or two neighbouring bays or at a particular confined beach4,15. The June 2014 event represents the first known case of a succession of individual meteo tsunamis sequentially affecting several countries located hundreds to thousands of kilometres apart. The eastward progression of meteo tsunami occurrence (Fig. 1) points to a possible link with weather systems, which predominantly propagate eastward over the Mediterranean.
At the same time, it is well established that meteo tsunamis are usually generated by short-term, small-scale (horizontal dimensions of ~100 km) disturbances4,16 that normally exist only for a few hours and cannot propagate over long distances. Comparison of air pressure records from various sites during the 2014 event (Fig. 2) reveals that the atmospheric disturbances were similar but not identical. It appears that the extended June 2014 event was the result of anomalous atmospheric conditions over the Mediterranean/Black Sea region. These conditions supported the generation of numerous intense, small-scale atmospheric disturbances (we refer to this state as a “tumultuous atmosphere”), which subsequently triggered the meteo tsunamis. Numerous studies in the vicinity of the Balearic Islands and the Adriatic Sea demonstrate that meteorological tsunamis typically occur during warm seasons when the following conditions are satisfied12,13,17: (1) inflow of warm and dry air from Africa at heights of ~850 hPa (~1500 m); (2) a strong south-westerly jet stream (with wind speeds >20 m/s) at heights of ~500 hPa (~5000 m); and (3) the presence of unstable atmospheric layers (at heights of 600–400 hPa) characterized by a small Richardson number, Ri < 0.25. An atmospheric pattern favourable to meteo tsunami generation was tracked propagating eastward from 23 to 27 June (Fig. 3). The pattern was first observed over the Balearic Islands at the time of the Ciutadella event (23 June, 00:00 UTC). The system then propagated to the east, reaching its full strength over the Adriatic and Tyrrhenian seas and the Strait of Sicily. Jet-stream speeds at the time were greater than 40 m/s and were accompanied by broad unstable atmospheric areas. Major meteo tsunami events were observed exactly at the time of the most intense atmospheric instability and a well-developed jet stream over each respective area (Fig. 3). As it moved further to the east, the system weakened.
Before it completely faded away, however, it arrived at the northwestern Black Sea region coincident with the time of the Odessa event (27 June, at ~12:00 UTC). Amplification and attenuation of high-frequency sea level oscillations consistent with the travel times of the synoptic pattern can also be tracked in a number of the Mediterranean tide-gauge records (Fig. 3). It seems likely that the moving synoptic pattern was responsible for the continual generation of small-scale atmospheric pressure perturbations, which would have been forming and then collapsing as they drifted with the jet stream. In turn, these numerous atmospheric disturbances generated ubiquitous tsunami-like waves, which became destructive in specific areas. Strong horizontal gradients in the jet stream, like those observed during 23–27 June, are known to be places where atmospheric disturbances (in particular atmospheric gravity waves) are generated18. Normally, atmospheric gravity waves dissipate before travelling one full wavelength19, and therefore do not have sufficient time to produce significant sea level oscillations. However, under particular atmospheric conditions, the induced internal gravity waves are trapped, leading to the formation of so-called “ducted waves”, which maintain their shape and intensity as they propagate over relatively long distances19,20. A schematic presentation of a ducted wave, along with the meteo tsunami generation mechanism, is provided in Fig. 4. Ducted waves with speed U can become trapped in a stable atmospheric layer adjacent to the ground, provided there is an overlying unstable layer (Ri < 0.25) containing a critical (steering) level at which the wind speed equals U19,20. If there were no unstable layer, wave energy would radiate vertically, and if there were no steering layer, wave energy would be absorbed rather than reflected19.
During the above synoptic conditions, both the generation and subsequent trapping of atmospheric gravity waves are supported: dry African air increases the stability of the lower atmospheric layer (elevations of up to 4000–5000 m), whereas strong vertical wind shear and moist advection from the Atlantic Ocean generate an unstable layer which serves as the generating, reflecting and steering layer for atmospheric gravity waves. As illustrated by Fig. 3, the distinctive synoptic pattern that occurred in June 2014 affected the entire Mediterranean and Black Sea regions. However, destructive meteo tsunamis occurred only in a few specific regions. The governing parameter determining the sea level response to atmospheric disturbances is the Froude number, which in the present case can be defined as Fr = U/c; i.e., the ratio of the atmospheric gravity wave speed, U, to the phase speed of long ocean waves, c = √(gh), where g is the gravity acceleration and h is the water depth. Resonance conditions (known as “Proudman resonance”)21 occur when Fr ≈ 1.0. In this case, ocean waves begin to actively absorb atmospheric energy during their propagation and, as a result, are strongly intensified21,22 (Fig. 4). To determine whether such conditions were present during the June 2014 events, we have assumed that U = u (where u is the jet-stream speed at the 500 hPa level estimated from ECMWF operational reanalysis data) and calculated Fr for the period 22–27 June, but only for areas over which there was an unstable atmospheric layer (Ri < 0.25). As indicated by the mapped values of Fr in Fig. 1 (the values closest to resonant conditions are plotted), the regions with the most favourable conditions for meteo tsunami generation (0.9 < Fr < 1.1) are the Adriatic Sea, the Strait of Sicily, the northwestern Black Sea, and a few isolated areas in the Mediterranean.
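The resonance condition can be checked with a short calculation. The sketch below is illustrative only (the 30 m/s disturbance speed is an assumed, typical jet-stream-advected value, not a figure from the paper); it computes Fr = U/√(gh) for a few depths and the resonant depth h = U²/g at which Fr = 1.

```python
import math

G = 9.81  # gravity acceleration (m/s^2)

def froude(U, h):
    """Froude number Fr = U / sqrt(g*h): the ratio of the atmospheric
    disturbance speed U (m/s) to the phase speed of long ocean waves
    over water of depth h (m)."""
    return U / math.sqrt(G * h)

def resonant_depth(U):
    """Water depth at which Fr = 1 (Proudman resonance): h = U^2 / g."""
    return U ** 2 / G

# Illustrative (assumed) disturbance speed of 30 m/s:
U = 30.0
print(round(resonant_depth(U), 1))   # ~91.7 m: resonance over shallow shelves
for h in (50, 92, 500, 4000):        # shelf to deep-ocean depths (m)
    print(h, round(froude(U, h), 2))
```

For such disturbance speeds, only shallow shelf seas (depths of order 100 m) fall inside the 0.9 < Fr < 1.1 window, consistent with the resonant areas in Fig. 1 lying over shelf regions.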
If destructive meteo tsunamis are to occur, the width of the marine area for which Fr ≈ 1.0 has to be sufficiently large in the direction of the atmospheric disturbance propagation to allow enough time for the transfer of energy from the atmosphere22. For atmospheric disturbances travelling within a northeastward-oriented jet stream, as was the case during the June 2014 event, the strongest meteo tsunamis are thus expected at the northeastern coasts of resonant areas. It is precisely at these locations where the most destructive events occurred. Due to a decrease in water depth, and depending on the specific topographical characteristics of the bay or harbour, ocean waves arriving at the coast can continue to amplify and can reach wave heights of several metres4,23 (Fig. 4). Although the June 2014 event is so far the only known Mediterranean-wide destructive meteo tsunami, there might have been others. The possibility of the consecutive occurrence of destructive meteo tsunamis at distal areas was previously investigated for two of the Mediterranean meteo tsunami “hotspots” (the Balearic Islands and the Adriatic Sea)24. It was found that high-frequency sea level oscillations usually appear at both locations when meteo tsunamigenic synoptic patterns propagate from one region to another. However, no case of destructive meteo tsunamis occurring at both locations within a span of a few days has yet been found, although more than 20 years of data and eyewitness reports were examined. As evident in Fig. 4, a number of restrictive conditions have to coincide to produce a hazardous meteo tsunami. As a consequence, even at “meteo tsunami hotspots” these events have a return period of 15 to 20 years4,10,25. It is thus likely that events similar to that of June 2014, with extreme meteo tsunamis hitting a number of locations, have even longer return periods.
However, this is similar to the case of a Mw 9.2 earthquake: if it occurred once, then sooner or later it is likely to occur in this region again! Regardless of the wave formation mechanism, it is clear that there is a real threat from meteo tsunamis and that consideration of this threat should be incorporated into tsunami warning procedures and systems. Present-day tsunami warning systems do not include monitoring of atmospheric conditions favourable for tsunami generation and are based on: (i) monitoring of seismic activity (the tsunami source); and (ii) observation and modelling of the propagation of tsunami waves in the deep ocean and at coastal stations26. Addressing the threat from meteorological tsunamis in warning systems requires: (i) monitoring of synoptic conditions (preconditions of the event); (ii) observation, tracking and possible modelling of small-scale air pressure disturbances (monitoring of the meteo tsunami source); (iii) observation and modelling of atmosphere-sea interaction (meteo tsunami generation, propagation and coastal impact); and (iv) establishment of threshold criteria for warnings, along with protocols and procedures for response. It is noted, however, that due to the stochastic nature of atmospheric gravity waves, this specific warning could be given only after the air pressure disturbance has been observed, i.e. shortly before the wave approaches an endangered area. For this reason, implementation of meteo tsunami warning procedures into tsunami warning operations remains a challenge.

Frequency-time (f-t) diagrams of air pressure and sea level were computed by applying a multi-filter technique, consisting of narrow-band filters and a Gaussian window that isolates a specific centre frequency and demodulates the series to a matrix of amplitudes and phases of wave signals27. This method is frequently used for the examination of tsunami records and for analyzing tsunami wave energy.
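The multi-filter idea can be sketched as follows. This is a minimal illustration, not the exact filter bank of ref. 27: the Gaussian band-pass shape, the bandwidth parameter, and the synthetic 20-minute test signal are all assumptions. Each centre frequency is isolated in the frequency domain, and the envelope of the filtered record gives one row of amplitudes in the f-t matrix.

```python
import numpy as np

def envelope(x):
    """Demodulated amplitude of a real series via the analytic signal
    (Hilbert transform constructed in the frequency domain)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def multi_filter(x, dt, centre_freqs, alpha=20.0):
    """Crude frequency-time analysis: for each centre frequency f0, apply
    a Gaussian band-pass H(f) = exp(-alpha*((f - f0)/f0)**2) and return
    the envelope of the filtered record as one row of the f-t matrix."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, dt)
    X = np.fft.rfft(x)
    rows = []
    for f0 in centre_freqs:
        H = np.exp(-alpha * ((freqs - f0) / f0) ** 2)
        rows.append(envelope(np.fft.irfft(X * H, n)))
    return np.array(rows)

# Synthetic 1-min "tide-gauge" record with a 20-minute oscillation
# (sampling interval and periods are assumed, illustrative values):
dt = 60.0                              # 1-min sampling (s)
t = np.arange(0, 6 * 3600, dt)         # 6 hours
x = np.sin(2 * np.pi * t / 1200.0)     # 20-min period wave
periods = [600.0, 1200.0, 2400.0]      # 10-, 20- and 40-min bands
A = multi_filter(x, dt, [1.0 / p for p in periods])
print(A.shape)                          # (3, 360)
print(int(np.argmax(A.mean(axis=1))))   # 1 -> energy in the 20-min band
```

Plotting the rows of `A` against time, one band per row, reproduces the essence of an f-t diagram: energy appears in the band and time interval where the oscillation is active.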
All time series, except for the Ninfa air pressure series (6-min time step), were measured with a 1-min sampling. The Ninfa air pressure time series was spline-interpolated to a 1-min time step for the purposes of clearer presentation. Time series of variance are derived using a 4-hour running average for a 4-day period centred around the time of the event at a given location. The Richardson number, Ri, which is a standard measure of atmospheric stability, is given as Ri = N²/(∂u/∂z)², where N is the Brunt-Väisälä frequency, u is the wind speed and z is the vertical coordinate. The frequency N was calculated as the moist Brunt-Väisälä frequency28 on levels where relative humidity was above 70%, or as the dry frequency otherwise. A layer was considered to be dynamically unstable and favourable for the trapping of waves in the lower troposphere if Ri < 0.25. All input parameters were taken from European Centre for Medium-Range Weather Forecasts (ECMWF) operational reanalysis products. Maximum wave heights of sea level oscillations plotted in Fig. 3 are estimated from tide gauge measurements and eyewitness reports for time intervals spanning ±6 hours with respect to the shown reanalysis times. The Froude number was computed as Fr = u/c, corresponding to the ratio of the wind speed u (at 500 hPa, the level of ECMWF operational reanalysis data) and the phase speed of long ocean waves, c, every 6 hours between 22 and 27 June 2014, and only for those grid points at which an unstable atmospheric layer was present at heights between 700 and 500 hPa. The values closest to resonant conditions (Fr ≈ 1.0) are plotted in Fig. 1.

Scientific Reports 5, Article number: 11682 (2015), doi:10.1038/srep11682. Published online: 29 June 2015. How to cite this article: Šepić, J. et al. Widespread tsunami-like waves of 23–27 June in the Mediterranean and Black Seas generated by high-altitude atmospheric forcing. Sci. Rep. 5, 11682; doi: 10.1038/srep11682 (2015).
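The stability criterion used in the Methods can be illustrated with a finite-difference estimate of Ri between two model levels. This is a toy calculation with assumed profile values, not ECMWF data:

```python
def richardson(N, u1, u2, z1, z2):
    """Gradient Richardson number Ri = N^2 / (du/dz)^2 between two
    levels: N is the Brunt-Väisälä frequency (1/s), u1/u2 are wind
    speeds (m/s) at heights z1/z2 (m). Ri < 0.25 flags a dynamically
    unstable layer, favourable for ducting gravity waves."""
    shear = (u2 - u1) / (z2 - z1)
    return N ** 2 / shear ** 2

# Assumed example: 20 m/s of shear across a 1000 m layer with weak
# stratification (N = 0.005 1/s) gives Ri well below the 0.25 threshold.
Ri = richardson(0.005, 20.0, 40.0, 5000.0, 6000.0)
print(round(Ri, 4), Ri < 0.25)  # 0.0625 True

# Weaker shear (5 m/s over the same layer) with stronger stratification
# (N = 0.01 1/s) gives a dynamically stable layer instead:
print(richardson(0.01, 20.0, 25.0, 5000.0, 6000.0) > 0.25)  # True
```

The strong vertical wind shear associated with the >40 m/s jet stream is what drives Ri below 0.25 in the unstable layers identified in Fig. 3.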
1. Titov, V., Rabinovich, A. B., Mofjeld, H. O., Thomson, R. E. & González, F. I. The global reach of the 26 December 2004 Sumatra tsunami. Science 309, 2045–2048 (2005).
2. Fujii, Y., Satake, K., Sakai, S., Shinohara, M. & Kanazawa, T. Tsunami source of the 2011 off the Pacific coast of Tohoku Earthquake. Earth Planets Space 63, 815–820 (2011).
3. Synolakis, C. et al. The slump origin of the 1998 Papua New Guinea tsunami. Proc. R. Soc. London, Ser. A 458, 763–789 (2002).
4. Monserrat, S., Vilibić, I. & Rabinovich, A. B. Meteotsunamis: atmospherically induced destructive ocean waves in the tsunami frequency band. Nat. Hazards Earth Syst. Sci. 6, 1035–1051 (2006).
5. Vučetić, T., Vilibić, I., Tinti, S. & Maramai, A. The Great Adriatic flood of 21 June 1978 revisited: An overview of the reports. Phys. Chem. Earth 34, 894–903 (2009).
6. Croatian Radio-Television. Meteo tsunami hit Rijeka dubrovačka Bay (in Croatian). http://vijesti.hrt.hr/u-dubrovniku-zabiljezen-meteoroloski-tsunami (2014). Date of access: 26/06/2014.
7. Vaccaro, F. Marrobbio a Mazara del Vallo del 25 giugno 2014 II video. https://www.youtube.com/watch?v=LTjOdN067Zo (2014). Date of access: 27/06/2014.
8. Soloviev, S. L. Tsunamigenic zones in the Mediterranean Sea. Nat. Hazards 3, 183–202 (1990).
9. Papadopoulos, G. A., Diakogianni, G., Fokaefs, A. & Ranguelov, B. Tsunami hazard in the Black Sea and the Azov Sea: a new tsunami catalogue. Nat. Hazards Earth Syst. Sci. 11, 945–963 (2011).
10. Rabinovich, A. B. Seiches and harbour oscillations, in Handbook of Coastal and Ocean Engineering (ed. Kim, Y. C.) 193–236 (World Sci., Singapore, 2009).
11. Candela, J. et al. The “Mad Sea” phenomenon in the Strait of Sicily. J. Phys. Oceanogr. 29, 2210–2231 (1999).
12. Monserrat, S., Ramis, C. & Thorpe, A. J. Large-amplitude pressure oscillations in the Western Mediterranean. Geophys. Res. Lett. 18, 183–186 (1991).
13. Vilibić, I. & Šepić, J. Destructive meteo tsunamis along the eastern Adriatic coast: Overview. Phys. Chem. Earth 34, 904–917 (2009).
14. Vilibić, I., Šepić, J., Ranguelov, B., Strelec Mahović, N. & Tinti, S. Possible atmospheric origin of the 7 May 2007 western Black Sea shelf tsunami event. J. Geophys. Res. 115, C07006 (2010).
15. Paxton, C. H. & Sobien, D. A. Resonant interaction between an atmospheric gravity wave and shallow water wave along Florida's west coast. Bull. Am. Meteorol. Soc. 79, 2727–2732 (1998).
16. Thomson, R. E. et al. Meteorological tsunamis on the coasts of British Columbia and Washington. Phys. Chem. Earth 34, 971–988 (2009).
17. Jansà, A., Monserrat, S. & Gomis, D. The rissaga of 15 June 2006 in Ciutadella (Menorca), a meteorological tsunami. Adv. Geosci. 12, 1–4 (2007).
18. Plougonven, R. & Zhang, F. Q. Internal gravity waves from atmospheric jets and fronts. Rev. Geophys. 52, 33–76 (2014).
19. Lindzen, R. S. & Tung, K.-K. Banded convective activity and ducted gravity waves. Mon. Wea. Rev. 104, 1602–1617 (1976).
20. Monserrat, S. & Thorpe, A. J. Use of ducting theory in an observed case of gravity waves. J. Atmos. Sci. 53, 1724–1736 (1996).
21. Proudman, J. The effects on the sea of changes in atmospheric pressure. Geophys. Suppl. Mon. Notices R. Astr. Soc. 2, 197–209 (1929).
22. Vilibić, I. Numerical simulations of the Proudman resonance. Cont. Shelf Res. 28, 574–581 (2008).
23. Hibiya, T. & Kajiura, K. Origin of the ‘Abiki’ phenomenon (kind of seiches) in Nagasaki Bay. J. Oceanogr. Soc. Japan 38, 172–182 (1982).
24. Šepić, J., Vilibić, I. & Monserrat, S. Teleconnections between the Adriatic and the Balearic meteo tsunamis. Phys. Chem. Earth 34, 928–937 (2009).
25. Vilibić, I., Monserrat, S., Rabinovich, A. B. & Mihanović, H. Numerical modelling of the destructive meteo tsunami of 15 June 2006 on the coast of the Balearic Islands. Pure Appl. Geophys. 165, 2169–2195 (2008).
26. Bernard, E. N., Mofjeld, H. O., Titov, V., Synolakis, C. E. & González, F. I. Tsunami: scientific frontiers, mitigation, forecasting and policy implications. Phil. Trans. R. Soc. A 364, 1989–2006 (2006).
27. Thomson, R. E. & Emery, W. J. Data Analysis Methods in Physical Oceanography, 3rd Edition. 716 pp. (Elsevier, Amsterdam, 2014).
28. Durran, D. R. & Klemp, J. B. On the effects of moisture on the Brunt-Väisälä frequency. J. Atmos. Sci. 39, 2152–2158 (1982).
Declaration on Social Progress and Development

Proclaimed by General Assembly resolution 2542 (XXIV) of 11 December 1969

The General Assembly,

Mindful of the pledge of Members of the United Nations under the Charter to take joint and separate action in co-operation with the Organization to promote higher standards of living, full employment and conditions of economic and social progress and development,

Reaffirming faith in human rights and fundamental freedoms and in the principles of peace, of the dignity and worth of the human person, and of social justice proclaimed in the Charter,

Recalling the principles of the Universal Declaration of Human Rights, the International Covenants on Human Rights, the Declaration of the Rights of the Child, the Declaration on the Granting of Independence to Colonial Countries and Peoples, the International Convention on the Elimination of All Forms of Racial Discrimination, the United Nations Declaration on the Elimination of All Forms of Racial Discrimination, the Declaration on the Promotion among Youth of the Ideals of Peace, Mutual Respect and Understanding between Peoples, the Declaration on the Elimination of Discrimination against Women and of resolutions of the United Nations,

Bearing in mind the standards already set for social progress in the constitutions, conventions, recommendations and resolutions of the International Labour Organisation, the Food and Agriculture Organization of the United Nations, the United Nations Educational, Scientific and Cultural Organization, the World Health Organization, the United Nations Children's Fund and of other organizations concerned,

Convinced that man can achieve complete fulfilment of his aspirations only within a just social order and that it is consequently of cardinal importance to accelerate social and economic progress everywhere, thus contributing to international peace and solidarity,

Convinced that international peace and security on the one hand, and social progress and economic development on the other, are closely interdependent and influence each other,

Persuaded that social development can be promoted by peaceful coexistence, friendly relations and co-operation among States with different social, economic or political systems,

Emphasizing the interdependence of economic and social development in the wider process of growth and change, as well as the importance of a strategy of integrated development which takes full account at all stages of its social aspects,

Regretting the inadequate progress achieved in the world social situation despite the efforts of States and the international community,

Recognizing that the primary responsibility for the development of the developing countries rests on those countries themselves and acknowledging the pressing need to narrow and eventually close the gap in the standards of living between economically more advanced and developing countries and, to that end, that Member States shall have the responsibility to pursue internal and external policies designed to promote social development throughout the world, and in particular to assist developing countries to accelerate their economic growth,

Recognizing the urgency of devoting to works of peace and social progress resources being expended on armaments and wasted on conflict and destruction,

Conscious of the contribution that science and technology can render towards meeting the needs common to all humanity,

Believing that the primary task of all States and international organizations is to eliminate from the life of society all evils and obstacles to social progress, particularly such evils as inequality, exploitation, war, colonialism and racism,

Desirous of promoting the progress of all mankind towards these goals and of overcoming all obstacles to their realization,

Solemnly proclaims this Declaration on Social Progress and Development and calls for national and international action for its use as a common basis for
social development policies: All peoples and all human beings, without distinction as to race, colour, sex, language, religion, nationality, ethnic origin, family or social status, or political or other conviction, shall have the right to live in dignity and freedom and to enjoy the fruits of social progress and should, on their part, contribute to it. Social progress and development shall be founded on respect for the dignity and value of the human person and shall ensure the promotion of human rights and social justice, which requires: (a) The immediate and final elimination of all forms of inequality, exploitation of peoples and individuals, colonialism and racism, including nazism and apartheid, and all other policies and ideologies opposed to the purposes and principles of the United Nations; (b) The recognition and effective implementation of civil and political rights as well as of economic, social and cultural rights without any discrimination. The following are considered primary conditions of social progress and development: (a) National independence based on the right of peoples to self-determination; (b) The principle of non-interference in the internal affairs of States; (c) Respect for the sovereignty and territorial integrity of States; (d) Permanent sovereignty of each nation over its natural wealth and resources; (e) The right and responsibility of each State and, as far as they are concerned, each nation and people to determine freely its own objectives of social development, to set its own priorities and to decide in conformity with the principles of the Charter of the United Nations the means and methods of their achievement without any external interference; (f) Peaceful coexistence, peace, friendly relations and co-operation among States irrespective of differences in their social, economic or political systems.
The family as a basic unit of society and the natural environment for the growth and well-being of all its members, particularly children and youth, should be assisted and protected so that it may fully assume its responsibilities within the community. Parents have the exclusive right to determine freely and responsibly the number and spacing of their children. Social progress and development require the full utilization of human resources, including, in particular: (a) The encouragement of creative initiative under conditions of enlightened public opinion; (b) The dissemination of national and international information for the purpose of making individuals aware of changes occurring in society as a whole; (c) The active participation of all elements of society, individually or through associations, in defining and in achieving the common goals of development with full respect for the fundamental freedoms embodied in the Universal Declaration of Human Rights; (d) The assurance to disadvantaged or marginal sectors of the population of equal opportunities for social and economic advancement in order to achieve an effectively integrated society. Social development requires the assurance to everyone of the right to work and the free choice of employment. Social progress and development require the participation of all members of society in productive and socially useful labour and the establishment, in conformity with human rights and fundamental freedoms and with the principles of justice and the social function of property, of forms of ownership of land and of the means of production which preclude any kind of exploitation of man, ensure equal rights to property for all and create conditions leading to genuine equality among people. 
The rapid expansion of national income and wealth and their equitable distribution among all members of society are fundamental to all social progress, and they should therefore be in the forefront of the preoccupations of every State and Government. The improvement in the position of the developing countries in international trade resulting among other things from the achievement of favourable terms of trade and of equitable and remunerative prices at which developing countries market their products is necessary in order to make it possible to increase national income and in order to advance social development. Each Government has the primary role and ultimate responsibility of ensuring the social progress and well-being of its people, of planning social development measures as part of comprehensive development plans, of encouraging and co-ordinating or integrating all national efforts towards this end and of introducing necessary changes in the social structure. In planning social development measures, the diversity of the needs of developing and developed areas, and of urban and rural areas, within each country, shall be taken into due account. Social progress and development are the common concerns of the international community, which shall supplement, by concerted international action, national efforts to raise the living standards of peoples. Social progress and economic growth require recognition of the common interest of all nations in the exploration, conservation, use and exploitation, exclusively for peaceful purposes and in the interests of all mankind, of those areas of the environment such as outer space and the sea-bed and ocean floor and the subsoil thereof, beyond the limits of national jurisdiction, in accordance with the purposes and principles of the Charter of the United Nations. 
Social progress and development shall aim at the continuous raising of the material and spiritual standards of living of all members of society, with respect for and in compliance with human rights and fundamental freedoms, through the attainment of the following main goals: (a) The assurance at all levels of the right to work and the right of everyone to form trade unions and workers' associations and to bargain collectively; promotion of full productive employment and elimination of unemployment and under-employment; establishment of equitable and favourable conditions of work for all, including the improvement of health and safety conditions; assurance of just remuneration for labour without any discrimination as well as a sufficiently high minimum wage to ensure a decent standard of living; the protection of the consumer; (b) The elimination of hunger and malnutrition and the guarantee of the right to proper nutrition; (c) The elimination of poverty; the assurance of a steady improvement in levels of living and of a just and equitable distribution of income; (d) The achievement of the highest standards of health and the provision of health protection for the entire population, if possible free of charge; (e) The eradication of illiteracy and the assurance of the right to universal access to culture, to free compulsory education at the elementary level and to free education at all levels; the raising of the general level of life-long education; (f) The provision for all, particularly persons in low income groups and large families, of adequate housing and community services. 
Social progress and development shall aim equally at the progressive attainment of the following main goals: (a) The provision of comprehensive social security schemes and social welfare services; the establishment and improvement of social security and insurance schemes for all persons who, because of illness, disability or old age, are temporarily or permanently unable to earn a living, with a view to ensuring a proper standard of living for such persons and for their families and dependants; (b) The protection of the rights of the mother and child; concern for the upbringing and health of children; the provision of measures to safeguard the health and welfare of women and particularly of working mothers during pregnancy and the infancy of their children, as well as of mothers whose earnings are the sole source of livelihood for the family; the granting to women of pregnancy and maternity leave and allowances without loss of employment or wages; (c) The protection of the rights and the assuring of the welfare of children, the aged and the disabled; the provision of protection for the physically or mentally disadvantaged; (d) The education of youth in, and promotion among them of, the ideals of justice and peace, mutual respect and understanding among peoples; the promotion of full participation of youth in the process of national development; (e) The provision of social defence measures and the elimination of conditions leading to crime and delinquency, especially juvenile delinquency; (f) The guarantee that all individuals, without discrimination of any kind, are made aware of their rights and obligations and receive the necessary aid in the exercise and safeguarding of their rights. 
Social progress and development shall further aim at achieving the following main objectives: (a) The creation of conditions for rapid and sustained social and economic development, particularly in the developing countries; change in international economic relations; new and effective methods of international co-operation in which equality of opportunity should be as much a prerogative of nations as of individuals within a nation; (b) The elimination of all forms of discrimination and exploitation and all other practices and ideologies contrary to the purposes and principles of the Charter of the United Nations; (c) The elimination of all forms of foreign economic exploitation, particularly that practised by international monopolies, in order to enable the people of every country to enjoy in full the benefits of their national resources.

Social progress and development shall finally aim at the attainment of the following main goals: (a) Equitable sharing of scientific and technological advances by developed and developing countries, and a steady increase in the use of science and technology for the benefit of the social development of society; (b) The establishment of a harmonious balance between scientific, technological and material progress and the intellectual, spiritual, cultural and moral advancement of humanity; (c) The protection and improvement of the human environment.
MEANS AND METHODS

On the basis of the principles set forth in this Declaration, the achievement of the objectives of social progress and development requires the mobilization of the necessary resources by national and international action, with particular attention to such means and methods as: (a) Planning for social progress and development as an integrated part of balanced overall development planning; (b) The establishment, where necessary, of national systems for framing and carrying out social policies and programmes, and the promotion by the countries concerned of planned regional development, taking into account differing regional conditions and needs, particularly the development of regions which are less favoured or under-developed by comparison with the rest of the country; (c) The promotion of basic and applied social research, particularly comparative international research applied to the planning and execution of social development programmes.

(a) The adoption of measures to ensure the effective participation, as appropriate, of all the elements of society in the preparation and execution of national plans and programmes of economic and social development; (b) The adoption of measures for an increasing rate of popular participation in the economic, social, cultural and political life of countries through national governmental bodies, non-governmental organizations, co-operatives, rural associations, workers' and employers' organizations and women's and youth organizations, by such methods as national and regional plans for social and economic progress and community development, with a view to achieving a fully integrated national society, accelerating the process of social mobility and consolidating the democratic system; (c) Mobilization of public opinion, at both national and international levels, in support of the principles and objectives of social progress and development; (d) The dissemination of social information, at the national and the
international level, to make people aware of changing circumstances in society as a whole, and to educate the consumer.

(a) Maximum mobilization of all national resources and their rational and efficient utilization; promotion of increased and accelerated productive investment in social and economic fields and of employment; orientation of society towards the development process; (b) Progressively increasing provision of the necessary budgetary and other resources required for financing the social aspects of development; (c) Achievement of equitable distribution of national income, utilizing, inter alia, the fiscal system and government spending as an instrument for the equitable distribution and redistribution of income in order to promote social progress; (d) The adoption of measures aimed at prevention of such an outflow of capital from developing countries as would be detrimental to their economic and social development.

(a) The adoption of measures to accelerate the process of industrialization, especially in developing countries, with due regard for its social aspects, in the interests of the entire population; development of an adequate organization and legal framework conducive to an uninterrupted and diversified growth of the industrial sector; measures to overcome the adverse social effects which may result from urban development and industrialization, including automation; maintenance of a proper balance between rural and urban development, and in particular, measures designed to ensure healthier living conditions, especially in large industrial centres; (b) Integrated planning to meet the problems of urbanization and urban development; (c) Comprehensive rural development schemes to raise the levels of living of the rural populations and to facilitate such urban-rural relationships and population distribution as will promote balanced national development and social progress; (d) Measures for appropriate supervision of the utilization of land in the
interests of society.

The achievement of the objectives of social progress and development equally requires the implementation of the following means and methods: (a) The adoption of appropriate legislative, administrative and other measures ensuring to everyone not only political and civil rights, but also the full realization of economic, social and cultural rights without any discrimination; (b) The promotion of democratically based social and institutional reforms and motivation for change basic to the elimination of all forms of discrimination and exploitation and conducive to high rates of economic and social progress, to include land reform, in which the ownership and use of land will be made to serve best the objectives of social justice and economic development; (c) The adoption of measures to boost and diversify agricultural production through, inter alia, the implementation of democratic agrarian reforms, to ensure an adequate and well-balanced supply of food, its equitable distribution among the whole population and the improvement of nutritional standards; (d) The adoption of measures to introduce, with the participation of the Government, low-cost housing programmes in both rural and urban areas; (e) Development and expansion of the system of transportation and communications, particularly in developing countries.

(a) The provision of free health services to the whole population and of adequate preventive and curative facilities and welfare medical services accessible to all; (b) The enactment and establishment of legislative measures and administrative regulations with a view to the implementation of comprehensive programmes of social security schemes and social welfare services and to the improvement and co-ordination of existing services; (c) The adoption of measures and the provision of social welfare services to migrant workers and their families, in conformity with the provisions of Convention No.
97 of the International Labour Organisation and other international instruments relating to migrant workers; (d) The institution of appropriate measures for the rehabilitation of mentally or physically disabled persons, especially children and youth, so as to enable them to the fullest possible extent to be useful members of society; these measures shall include the provision of treatment and technical appliances, education, vocational and social guidance, training and selective placement, and other assistance required, as well as the creation of social conditions in which the handicapped are not discriminated against because of their disabilities.

(a) The provision of full democratic freedoms to trade unions; freedom of association for all workers, including the right to bargain collectively and to strike; recognition of the right to form other organizations of working people; the provision for the growing participation of trade unions in economic and social development; effective participation of all members in trade unions in the deciding of economic and social issues which affect their interests; (b) The improvement of health and safety conditions for workers, by means of appropriate technological and legislative measures and the provision of the material prerequisites for the implementation of those measures, including the limitation of working hours; (c) The adoption of appropriate measures for the development of harmonious industrial relations.
(a) The training of national personnel and cadres, including administrative, executive, professional and technical personnel needed for social development and for overall development plans and policies; (b) The adoption of measures to accelerate the extension and improvement of general, vocational and technical education and of training and retraining, which should be provided free at all levels; (c) Raising the general level of education; development and expansion of national information media, and their rational and full use towards continuing education of the whole population and towards encouraging its participation in social development activities; the constructive use of leisure, particularly that of children and adolescents; (d) The formulation of national and international policies and measures to avoid the "brain drain" and obviate its adverse effects.

(a) The development and co-ordination of policies and measures designed to strengthen the essential functions of the family as a basic unit of society; (b) The formulation and establishment, as needed, of programmes in the field of population, within the framework of national demographic policies and as part of the welfare medical services, including education, training of personnel and the provision to families of the knowledge and means necessary to enable them to exercise their right to determine freely and responsibly the number and spacing of their children; (c) The establishment of appropriate child-care facilities in the interest of children and working parents.
The achievement of the objectives of social progress and development finally requires the implementation of the following means and methods: (a) The laying down of economic growth rate targets for the developing countries within the United Nations policy for development, high enough to lead to a substantial acceleration of their rates of growth; (b) The provision of greater assistance on better terms; the implementation of the aid volume target of a minimum of 1 per cent of the gross national product at market prices of economically advanced countries; the general easing of the terms of lending to the developing countries through low interest rates on loans and long grace periods for the repayment of loans, and the assurance that the allocation of such loans will be based strictly on socio-economic criteria free of any political considerations; (c) The provision of technical, financial and material assistance, both bilateral and multilateral, to the fullest possible extent and on favourable terms, and improved co-ordination of international assistance for the achievement of the social objectives of national development plans; (d) The provision to the developing countries of technical, financial and material assistance and of favourable conditions to facilitate the direct exploitation of their national resources and natural wealth by those countries with a view to enabling the peoples of those countries to benefit fully from their national resources; (e) The expansion of international trade based on principles of equality and non-discrimination, the rectification of the position of developing countries in international trade by equitable terms of trade, a general non-reciprocal and non-discriminatory system of preferences for the exports of developing countries to the developed countries, the establishment and implementation of general and comprehensive commodity agreements, and the financing of reasonable buffer stocks by international institutions. 
(a) Intensification of international co-operation with a view to ensuring the international exchange of information, knowledge and experience concerning social progress and development; (b) The broadest possible international technical, scientific and cultural co-operation and reciprocal utilization of the experience of countries with different economic and social systems and different levels of development, on the basis of mutual advantage and strict observance of and respect for national sovereignty; (c) Increased utilization of science and technology for social and economic development; arrangements for the transfer and exchange of technology, including know-how and patents, to the developing countries.

(a) The establishment of legal and administrative measures for the protection and improvement of the human environment, at both national and international level; (b) The use and exploitation, in accordance with the appropriate international regimes, of the resources of areas of the environment such as outer space and the sea-bed and ocean floor and the subsoil thereof, beyond the limits of national jurisdiction, in order to supplement national resources available for the achievement of economic and social progress and development in every country, irrespective of its geographical location, special consideration being given to the interests and needs of the developing countries.

Compensation for damages, be they social or economic in nature (including restitution and reparations), caused as a result of aggression and of illegal occupation of territory by the aggressor.
(a) The achievement of general and complete disarmament and the channelling of the progressively released resources to be used for economic and social progress for the welfare of people everywhere and, in particular, for the benefit of developing countries; (b) The adoption of measures contributing to disarmament, including, inter alia, the complete prohibition of tests of nuclear weapons, the prohibition of the development, production and stockpiling of chemical and bacteriological (biological) weapons and the prevention of the pollution of oceans and inland waters by nuclear wastes.
The schedule is pressing and there is much to do. The foundations of our society are under threat. That is why a shared vision and public coordination are needed. Individually, without coordination and a common goal, different actors (businesses, NGOs, public servants, etc.) and parts of society (education, healthcare, arts, etc.) as well as different industrial sectors (forestry, IT, heavy industry, etc.) – not to mention individual citizens – cannot undertake the transformation at the necessary scale and pace. Only a publicly elected government has the capacities and legitimacy to steer a comprehensive societal transition. However, the state with its multiple organs is not a stationary whole but a historically developing one. ‘Generals are always fighting the last war.’ Likewise, the state and the government are results of solutions to past problems and challenges. In the face of current challenges, they also need renewal and new capacities and functions. We have to find ways to make ecological boundary conditions the guiding principles for all public, private and economic activities. Further, the costs of the transition must be divided justly. The crucial observation is that almost all groups have good reasons for committing to the shared goal despite some painful losses and hard labour. The acknowledgement of ecological boundaries is a precondition for all peaceful, rule-based, democratic and sustainable societies: from now on no group or nation can be an ecological free-rider and gain more than a short moment of competitive advantage. When ecological boundaries, social justice, public capabilities and market mechanisms are taken into account, the following tools for ecological reconstruction are most fruitful. The first set of tools includes emissions trading and carbon tariffs, which are already widely supported in political discussions. Their development takes place internationally, mainly in the EU.
Emissions trading and carbon tariffs are basically restrictive measures: they punish unwanted activities. The rest of the tools generally fall within national sovereignty. However, especially through financing, they are connected to international cooperation. The financial connections and political possibilities are discussed in the next section. Unlike emissions trading and carbon tariffs that limit the fossil economy, the rest of the tools are for building a new economy and society.

1. Emissions trading and carbon tariffs

The idea behind emissions trading is to internalise previously externalised costs of emissions into market prices. Currently, in the EU-wide emissions trading system, a political decision is made about the amount of emission rights to be sold, implying the maximum aggregate amount of emissions that the market can cause. The price of these emission rights is then set on the basis of how much market actors are willing to pay for them. Carbon tariffs, in turn, are intended to curb the competitive advantage that market actors outside the emissions trading area could possibly gain by not being included in the emissions trading system. The tariffs would be levied on imported goods and services according to their climate effects. The EU emissions trading system has been in use since 2005, but the carbon tariffs are still in the planning phase. Several factors have weakened the emissions trading system’s effectiveness. For instance, construction, agriculture, transport and waste management are outside the system, even though they produce roughly half of the EU’s emissions. Industry has also been given too many free emission rights. The reasoning has been that in the absence of carbon tariffs, free emission rights protect the competitiveness of EU industries. In general, emission rights have been excessively plentiful, implying a low price and low effectiveness in curbing emissions.
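The cap-and-price logic described above can be illustrated with a toy uniform-price permit auction: a politically chosen cap turns firms' willingness to pay into a market price. This is only a minimal sketch; the function name, the numbers and the single-round, uniform-price format are illustrative assumptions, not the actual EU ETS auction design.

```python
# Toy sketch of how an emissions cap produces a permit price (illustrative
# assumptions only; not the real EU ETS mechanism).

def clear_permit_auction(bids, cap):
    """bids: list of (price_per_tonne, tonnes_demanded); cap: total permits.

    Returns (allocations, clearing_price) under uniform pricing: bids are
    served from highest to lowest until the cap is exhausted, and the last
    (marginal) accepted bid sets the price everyone pays.
    """
    allocations = []
    remaining = cap
    clearing_price = 0.0
    for price, qty in sorted(bids, key=lambda b: b[0], reverse=True):
        if remaining <= 0:
            break
        take = min(qty, remaining)  # serve as much of this bid as the cap allows
        allocations.append((price, take))
        remaining -= take
        clearing_price = price  # marginal accepted bid sets the uniform price
    return allocations, clearing_price
```

With three hypothetical firms bidding 50, 30 and 10 euros per tonne for 100 tonnes each, a cap of 150 serves the first firm fully and the second partially, and the price settles at 30; tightening the cap to 80 pushes the price up to 50. This mirrors the text's point that an excessively plentiful supply of rights implies a low price.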
On the plus side, emissions trading supports economic flexibility by harnessing demand-and-supply pricing. In principle, emissions are reduced without any political deliberation or decision-making over technological pathways or the precise areas of economic activity to be scaled down. At the same time, this is one of its weaknesses: emissions trading offers no coordination of how the existing social and political systems (e.g. complex energy and transport systems with their intertwined path dependencies) could be radically overhauled in a reasonably orderly way. In addition, emissions trading as such does not facilitate simultaneous attention to unsustainable use of natural resources and loss of biodiversity. From the perspective of ecological reconstruction, it is vital to note that emissions trading does not generate investments – it simply punishes high-emission economic activity. The EU must aim for a wider and tighter emissions trading system to ensure that industrial emissions decline. In addition, carbon tariffs on the EU border are a good idea. However, the rapid renewal of all infrastructure and practices currently relying on fossil fuels, and the task of addressing the other environmental crises, more or less connected with climate change, demand many more tools besides carbon trading.

2. Public investments

Ecological reconstruction requires massive investments in infrastructure and elsewhere. In the EU and the US, the aggregate level of investments has been exceptionally low for a long time despite uniquely low interest rates. In Finland, the level of investment has been one of the lowest in the EU. Uncertainty about the future of the global economy has often been identified as a key reason for such low investment: investors have difficulties in identifying profitable investments.
Investments in low-carbon infrastructure are particularly challenging because they typically imply long commitments, high up-front costs and high technological risks. Monetary policy alone is not enough; additional measures are required. Emissions trading also fails here: it can shift the profitability between different investments, but as such it does not generate new investments. This means that active fiscal policy, especially including long-term public investments, is called for. Public investments create demand and direct production. Combined with an active innovation policy (see section 3), public investments also provide a platform for piloting and developing new solutions. Among potential public investments are subsidies for environmentally beneficial projects whose payback periods are currently too long to make sense for private investors. Subsidies for companies can push markets forward in cases where the technological path is known, but companies still regard the needed investments as a bit too early or risky. Examples of investment targets are charging infrastructure for public and private electric vehicles, subsidies for infra-scale heat pumps, educating farmers and providing them with tools for carbon-sequestering practices and diversifying production.

3. Mission-oriented innovation policy

Even in the case of basic infrastructure, reconstruction is not only about routine planning and implementation – it requires creativity. Furthermore, creativity must be directed properly to ensure that the necessary changes in a wide range of interdependent socio-technical systems can be realised in a relatively short period of time. Economist Mariana Mazzucato and her colleagues have emphasised the need for a mission-oriented innovation policy. The key idea arises from the observation that, historically, the government has played a decisive role whenever networks of different actors have produced breakthrough innovations.
The government has helped by setting the bar high enough, coordinating the efforts of different actors and guaranteeing long-term financing. Examples of these kinds of cases include the moon flight and the internet. A similar case in Finland is the success of Nokia and the telecommunication cluster. The success of ecological reconstruction depends on different actors working together in a common direction. In recent decades, ideas like network management and open innovation have been popular. These ideas are characterised by cooperation between the public sector, universities and various representatives of the private sector. Current discussions around deep-tech arising from Silicon Valley also share this feature: the goal is to combine deep (university research-based) technological knowledge and business expertise in rapid up-scaling to solve not only problems in software development but also in the material world. The make-or-break points of preventing and adapting to environmental crises must be identified across economic sectors. Actors within and across different sectors must share a common goal, and knowledge must be disseminated openly. The economic risks of investing in innovation should not be allowed to form a bottleneck. As a shift towards such mission-oriented innovation cannot be initiated by the market, the government has to assume leadership. Examples of innovation policy missions are the renewal of the Helsinki metropolitan area energy system to minimise technologies based on burning; technologies and practices for expanding, monitoring, managing and trading forest-based carbon sinks and storage; technologies and practices of carbon-sequestering agriculture; construction of large-scale wooden buildings; electrification of transport, including energy storage; humane care for the elderly; and eco-social educational policies that support both practical skills (crafts, maintenance, care) and cognitive capacities.

4. Job guarantee

For a long time, discussions around jobs and employment have been content-free. That is, policies have sought to raise the level of employment, and there has been little-to-no debate on the job content – what kind of work is worth doing. Another topic missing from the discussion is direct employment by the public sector. Political economic studies have widely discussed the idea of a job guarantee. The starting point is the observation that unemployment is not needed to restrain inflation. Under a job guarantee, the public sector offers jobs to all willing employees with salaries that, in practice, become the minimum wage. The jobs do not require long training but have decent conditions and are directed toward improving society. The government finances the guarantee, but the jobs may be organised more locally – for instance, at the municipal level. Originally, the job guarantee was presented as an automatic macroeconomic balancing mechanism to offset economic cycles. In times of low demand, the guarantee increases public spending to maintain full employment. The job guarantee fits very well with ecological reconstruction. There are both jobs that need to be done and people who are unemployed against their will. Examples of reconstruction jobs include reforestation of peatlands in the countryside and energy and waste services in cities. In addition, many infrastructure projects involve jobs that do not require previous qualifications. A job guarantee offers citizens a sense of economic security and reinforces the idea that it is not necessary to take any job regardless of its content. The job guarantee ensures that there are always jobs available that provide a livelihood and contribute towards building a sustainable society.

5. Sectoral transition policies

Transitioning away from the unsustainable use of natural resources and fossil-fuel based production infrastructure means that some areas of production will disappear and the practices in many others will change profoundly. In Finland, for instance, the energy use of peat must be stopped. This means that the current job profiles of hundreds of workers in the state-owned Vapo will become obsolete. A sectoral transition policy includes retraining for workers and services for forming new career paths. These new jobs can include, for instance, reforestation of peatlands and construction of wind power. Big disruptions are also in store for shipyards building luxury cruisers and for the construction industry building shopping malls out of concrete, glass and steel. The workers in these sectors have skills that are readily applicable in ecological reconstruction, but the current market conditions do not direct their labour towards the correct goals. The new jobs must be organised so that the workers' motivation is maintained or increased. One model for sectoral transition policies can be found in Spain, where the minister for ecological transition, in close cooperation with the labour unions, directs a programme of shutting down the last coal mines. Similar policies are currently being planned in several other countries.

6. Education and training

Relationships with nature and care for the environment start developing in the early years as parts of the skills, capabilities, ways of thinking and acting that children acquire in their environment. The identity and worldview of a child are influenced by the prevalent culture with all its possibly contradictory values and notions. A child may acquire at once a strong connection to nature and a sensitivity to environmental issues alongside ideals of economic success through continuous growth, or both a view of nature as a self-repairing whole and a view of it as an inexhaustible source of materials.
Environmental education is an essential part of the cultural change included in ecological reconstruction because it raises awareness of biodiversity, planetary boundaries of human life, the processes of nature that uphold all life and the socio-cultural values that shape our interaction with nature and non-human life. Environmental education for children, adolescents and adults alike communicates models of ecologically sustainable life, gives grounds for evaluating ecologically sound values and offers tools for independently forming an ecologically informed view of the world. In Finland, environmental education is among the activities of many NGOs and is also included in the contents of exhibitions in natural parks and museums of natural history. An important part of environmental education is given within the legally mandated programmes of early and basic education. Sustainable lifestyles are mentioned in the National Agency for Education’s national core curriculum for early childhood education and care (2016) and local plans for early education (2017). The national core curriculum for basic education (renewed in 2016, in use stepwise in 2016–2019) highlights forming wide capabilities, which means not only mastering diverse new subjects and skills but also the capacity to connect them. One of the learning goals mentioned in the plans is building a sustainable future and the skills of participation and democratic action needed therein. This goal is tied to the subject area of environmental studies, which combines information from multiple sciences and utilises various learning environments. It is also connected with the annual multi-subject learning modules included in the national study plan. The modules are implemented in different ways in different schools and provide an opportunity to delve deeper into environmental questions with help from multiple subjects. 
According to the ministry’s directive, sustainable life must also characterise the schools’ mode of operation – that is, everyday practices. In the future, more extensive environmental education must be provided to individuals and groups during different stages of life and be tailored to different aspects of life. The content must include more education on the topics of democratic political action at different levels. In addition to emphasising nature protection and conservation, environmental education must increase citizens’ resilience in the face of the changes brought about by climate change, new technologies and new infrastructure. Education can be thought of not only as a top-down effort of information and guidance but also as informal negotiations and discussions between citizens concerning the values and meanings of nature and of a good life within planetary boundaries. As the population and the number of uninhabitable areas on the planet grow, environmental education must strengthen solidarity and awareness of human dependence not only on nature but on each other. The role of skills in communication and mediation will grow. In addition to environmental education, ecological reconstruction requires development and intensification in several areas of education and training. As production, logistics and construction change, some jobs are lost and new ones gained, which increases the need for professional (re)training. Technical and natural scientific fields are essential for developing many solutions. However, social transition and cultural change also demand other types of skills. The effects of new technologies must be analysed critically from the perspectives of both human communities and natural environments. Due to environmental destruction and change, migration and other types of intercultural encounters will become more prevalent.
The increase in catastrophic natural phenomena causes anxiety, possibly together with anxiety over injustice. The experience of the lived environment is transformed; as familiar natural environments change, seasons appear out of joint and everyday life must be adjusted accordingly. These changes cause many kinds of emotional responses and thoughts. The better and wider the knowledge and understanding of the environment, and the better the related skills, the easier it will be for citizens to negotiate these challenges and live with them. Environmental education has a key role in building up skills and knowledge, but addressing the environmental and resource crises and socio-economic change also necessitate renewal in basic, professional and higher education. Issues of ecological and social sustainability will affect all sectors, from agriculture to healthcare and from trade to heavy industry. Changes in the education system, like cultural change more generally, demand humanistic and social scientific skills. Multidisciplinary environmental research has long understood the significance of cooperation between different fields of research and expertise. From the perspective of ecological reconstruction, the situation is promising: issues of nature, environment and the interaction between human and natural systems have finally been embedded in the humanities and social sciences for good. The groundwork for the scientific and educational aspects of ecological reconstruction has already been laid. Education is a fitting tool for ecological reconstruction because it already has a considerable and lasting influence on people’s skills and mind-sets, and officials and politicians are used to directing it vis-à-vis changes in the world. Educational policies must now anticipate the needs of ecological reconstruction. Education planning in recent decades has relied largely on observed trends in the job market. 
Thus, it is expected that, for instance, the need for education in the agricultural and textile sectors will decline further, while the need for education in marketing, sales and administration will continue to increase. However, the environmental crises and the socio-economic response to them cause a non-linear break in educational demands. The anticipation of disruptions in employment and working life must pay attention, on the one hand, to the material boundary conditions of the economy and, on the other hand, to digitalisation, automation and the ageing of the population. From the perspective of ecological reconstruction, the above-mentioned sectors – (multispecies) agriculture and various sectors producing (sustainable and long-lasting) physical objects – are not a thing of the past. They require a lot of skilled labour and new expertise. It cannot be expected that global trade will continue to develop in a direction where it is always somebody else producing the food and goods we consume. Yet it seems that many jobs, for instance in the insurance and financial sectors, can be automated. This means lower labour demand in these sectors. In sum, education has to emphasise learning about the intertwinement of natural and human systems. In all sectors of society, understanding of the material boundaries of human activities has to be improved.

7. Accepting lower levels of consumption

Pictures, statues, poetry, stories, music and dance have always been methods for perceiving reality, expressing thought and creating meaning. Although our views of art are historical and cultural, certain key elements, like connections to the senses and emotions, expressivity, creativity and collectivity, have typically accompanied artistic world-making. In art, humans investigate themselves, their society and their environment. Art has expressed and interpreted intra- and inter-community conflicts and ideals, social upheavals and relationships with the non-human.
Artistic expression has a reciprocal relationship with concurrent views of humanity, nature and the world, as well as its material conditions. Futurist art and its experimentality were inspired by the noise and speed of the rapidly industrialising world, propelled by the possibilities of the steam engine, motor car and electric devices. The modernisation of the Western world was reflected in the development of new forms of expression in literature, visual arts and music. The power of art in portraying and communicating ideas and experiences has had a decisive influence on the development of societies – Finnish national romanticism is a good example. Furthermore, the relationship between humans and nature, questions of protecting nature and environmental problems have been addressed in the visual arts, literature, music and the performing arts. Famous literary examples are the fictional story ‘A Fable for Tomorrow’ about the disappearance of birds, which US biologist Rachel Carson placed at the beginning of her book Silent Spring (1962), and the book Laulujoutsen – Ultima Thulen lintu (Whooper Swan – Bird of Ultima Thule, 1950) by Finnish writer Yrjö Kokko, relating the impressions of a photographic journey to a nesting area of a then nearly extinct species. Both books succeeded in raising interest and alarm in their contemporaries. Largely due to Carson’s book, a critical discussion on pesticides emerged, and Kokko’s book helped end the persecution of the whooper swan. However, directly influencing a particular topic is by no means the main or only task of art. At the most fundamental level, art deals with human existence and the meaning of life. Art cares for meaningfulness by holding a constant inner and interpersonal dialogue on what is important in individual and common life. In art, the dependence of one human on other humans and nature becomes recognisable, acceptable and even enjoyable.
Art expresses the fragility, finiteness and mortality of all life and the necessity of change. Art has a specific relationship with truth. Amid drastic changes, art can help us understand why facts are hard to deal with and provide the capability for accepting the truth. The importance of truthfulness is heightened in societies where there is a strong contradiction between the scientific, lived and experienced truth and the so-called official truth. In artistic work and in experiencing works of art, facing facts and truthfulness happen as collective processes, giving space also to the emotional reactions that new facts and knowledge may trigger. Traditionally, art has been seen as connected to the senses and emotions: different visual, rhythmic, melodic, gestural and verbal expressions arouse feelings, experiences and insights. In this way, art can also portray phenomena that are otherwise hard to grasp. In this context, several humanists and art researchers have noted that climate change is a phenomenon whose spatio-temporal scale and planetary effects may be beyond human comprehension. Instead of scientific graphs, the matter may be easier to grasp via artistic means. Artists have freedom of material, expressive and conceptual experimentation and freedom of imagination. They can propose new ways of being human and of forming collectives. Environmental philosophy has, for a long time, investigated how non-humans should be acknowledged as actors influencing human societies and cultures. The role of animals in scientific research, economic production and everyday life demands ethical scrutiny. As wild animals become ever more uncommon, the conditions and modes of co-existence of humans and non-humans must be reviewed. All political measures have to start from the principle that nobody is encouraged to consume against their will ‘in order to keep the wheels of the economy turning’.
Currently, citizens reducing their consumption are, in practice, accused of hurting the economy. Correspondingly, nobody should be (economically or socially) forced to take a job that destroys the prerequisites of future societies (see the section on job guarantee). Typically, worrying about a decline in consumption is ultimately worrying about how to finance welfare services and social security. However, at the material level, the services provided by a teacher or a nurse are in no way dependent on someone first buying minced meat or a car. As part of ecological reconstruction, the economy must be reorganised so that the level of private consumption does not determine whether a teacher or a nurse can be paid a living wage. At the material level, the economy must be organised so that there is sufficient sustainably produced food, heated housing and transportation for the teacher and the nurse. Insofar as education and healthcare demand products that are not made in Finland, there must also be sufficient exports to pay for the necessary imports. What will replace the missing consumption? As noted above, it depends on cultural development, but at least spending time with communities, families and friends, and enjoying hobbies and artistic endeavours, provide a multitude of possibilities.
But what about this odd English word nun? There are a few possible origins for the word nun. Appearing in the Old English as nunne, it is generally derived from the late Latin nonna meaning “nun” or “tutor,” along with the masculine nonnus meaning “monk,” “tutor.” These words in turn are commonly believed to be related to the Greek nanna meaning “aunt” or nónnos meaning “father” and the Persian or Proto-Celtic nana meaning “mother” and “grandmother” respectively. The words Friar and Pope likewise appear to develop as simple familial metaphors akin to the etymological meanings typically ascribed to nun. Indeed, the word friar is typically believed to trace to the Latin frater meaning “brother.” Pope is understood as derived from the Greek papas meaning “patriarch” or “bishop” and originally “father.” Priest is commonly traced to the Greek presbyteros meaning “elder.” The word bishop is commonly derived from episkopos meaning “watcher” or “overseer.” The word monk in contrast is commonly understood as ultimately deriving from the Greek monos meaning “alone.” Here the solitary or hermetic nature of the monk is suggested. In fact, the Old English munuc from which the word monk more recently derives was used to refer to females as well. Theoretically, the word nun is ultimately related to the name of the Sumerian moon God Nanna/Sin or his daughter Inanna. Here we may find one of the earliest traceable roots of the word from an element appearing in Inanna, (nin), meaning “lady” or “lord.” Inanna is commonly thought to mean “Heaven Lady” or “Lady of Heaven.” Etymologists argue that the God name Nanna is obscure in its original meaning but it seems possibly if not likely related to Inanna. In fact we might reasonably suggest the meaning of “lord” for the name Nanna. Again, I have asserted that the nun may have functioned, originally, as something closer to the priestess of Nanna/Sin. 
In other words, this is to argue the nun was perhaps originally a de facto concubine or “companion” for Christian priests or monks, much as Magdalene appears to have been the sexual consort of Christ. In general, it seems possible, if not probable, that the first priestesses developed originally in Mesopotamia as temple prostitutes and/or sacralized breeding stock. The priestesses of Nanna were women, the historical record and myth suggest, born from Semitic fathers or “Dumuzids” and Aryan mothers or “Inannas.” In any case, a shared symbolism tells us that Christianity and Judaism ultimately descends from the Dumuzid cult which appears to represent something of a root cult. The Goddess Inanna is pre-dated by an earlier Goddess, Nanaya also transcribed as “Nanâ.” Nanaya a sky Goddess, like Inanna, we might class as an Aryan archetype. Like Inanna, she was a rough equivalent of the later Venus. Nanaya’s was a successful cult, gaining currency in both Mesopotamia and Egypt. Nanaya will later be syncretized as an aspect of Inanna. In fact that Nanaya and Inanna are Aryan figures, may suggest the roots nanna, nana and even the derivative word nun as Aryan identifiers. This will be suggested by other clues we explore in this chapter. On the other hand, we will class the lunar Sumerian God Nanna/Sin Semitic. Perhaps it is meaningful Nanna begins as a Sumerian deity and is later syncretized with Sin, a God of the Semitic-speaking Akkadians. Nanna/Sin descends of a likely Semitic Enlil and an Aryan Ninlil. Ninlil, believed either to mean “Lady of the wind” or “Lady of the field” contains the same element found in Inanna, nin, meaning “lady.” Here Nanna/Sin’s fatherhood of Inanna, this study argues, does not suggest her as Semitic any more than Saturn’s fatherhood of Jupiter suggests Jupiter Semitic. Rather, more likely, Nanna/Sin’s distinction as father indicates Semitic rulership. Here a lunar God presides over the solar God Shamash and his sister Inanna. 
The word nun may also be related or traced to the Phrygian Goddess Nana, appearing in Greek Mythology. She is the mother of the Rising-and-Dying God Attis, himself among the Rising-and-Dying Gods that would go on to inspire Christ, if with less visible influence than figures like Adonis and Bacchus. She is also the daughter of the river-god Sangarius which may suggest her Aryan, as freshwater in particular appears to be connected to Aryan blood. For example, the Christ Baptism scene in the Jordan river, this study argues, symbolizes the admixture of a Jew with an Aryan female figure. Here the female figure is represented by both the river and a descending dove or “Holy Spirit.” In light of this idea of Aryan blood being represented by fresh water it is worth contemplating if the name of the river-god Sangarius, ostensibly derived from another mythical figure named Sangas, is related to the later Latin word sanguis meaning “blood.” Sanguis, which would become the root of the English words sanguine and sanguinary, is of unknown origin. In any case, Nana, like the nun herself, is in some manner a Mary equivalent. Like Mary, impregnated by an almond that falls in her lap, ostensibly Nana might be classed a virgin mother. Regardless, the appearance of the word nunne in the Old English, or nonna in the Late Latin, could have been a conscious development appearing via Promethean Transmission with the idea of an ancient Aryan nanna, “mother” or “lady” in mind. The possibility that the word nun is a conscious esoteric develop is, of course, supported by the sexual and ethnic innuendos found in key Religious words like Messiah, Christ, Jesus, Sin, Easter, Galilee, Bethlehem examined throughout this study. More such examples are found among medieval church terms explored in this book including Tabernacle, Transept, Narthex, Gallery and Galilee. Again, Christian esotericism certainly does not end with the Greek and Hebrew scripture. 
Developments in Old Norse language and mythology likewise may suggest nonna, nunne or nun as consciously developed. Specifically, in Norse Myth, we find the Goddess Nanna, female consort of the Dying-and-Rising Balder. This study argues that Norse Mythology is largely an example of Promethean Transmission occurring during the medieval period and, possibly, like the runic writing, “Etruscan,” proto-Jewish or crypto-Jewish in origin. Symbolically the Nanna/Balder pairing is identical to the Nun/Christ pairing when we understand the nun as symbolically married to Christ. Indeed, this study argues that Balder was intelligently developed as a reference to Jesus Christ, if also earlier Dying-and-Rising Gods like Adonis. Hence Balder’s similarities to Christ did not develop “unconsciously.” If Nanna in the Norse myth is a “borrowing,” it is likely not the only one. For example the name of the Sun Goddess Sól appearing in the Norse Pantheon is commonly suggested to derive from the same Indo-European Root as the male God Sol appearing in Roman symbolism. Perhaps more likely though the Goddess Sól is simply a use of the Latin word Sol, meaning sun, if not also a feminization of the Roman God Sol. Likewise Hermóðr, the Son of Odin and messenger of the Gods, who retrieves Balder from Hel after his death, seems etymologically related to the Greek God Hermes, if not a clear reference to him. After all, his role here as psycho-pomp or guide of souls is the same as Hermes. The Egyptian God Nu or Nun of ancient Egypt may also arise from a common etymological root. Nun, the oldest of the Egyptian Gods, was a male, birth-giving oceanic, watery abyss who also had the feminine form, Naunet. Here, again, we understand water, even oceanic waters as a relatively feminine element at least when compared with celestial elements, yet nevertheless an Aryan element with salt representing an inseminating Semitic element. 
Hence perhaps a “Sea of the West” metaphor, as is perhaps found with Mary (“sea of bitterness”) and the sea-born Venus, is continued here. This may be especially likely based on references appearing in the Hebrew Bible where Joshua is understood as the son of an Egyptian named “Nun.” This study argues that Nun in the Hebrew Bible is indeed a reference to the Egyptian God based on deliberate references to this God appearing in the Book of Joshua, for instance. This study also argues that the name Jesus is intended as a reference to Joshua as both names are, in fact, the same name. Both are derived from the Hebrew Yehoshua. Hence to the extent the nun becomes a symbol for Mary, the mother of Jesus, she might also be understood as Nun, parent of Joshua. This would seem to conform to a pattern we see with the Greek Nana as mother of Attis and the Norse Nanna as the consort of Balder. The last doesn’t break the pattern. Indeed, as with Myrrha/Venus and Adonis and Mary and Jesus, a lover/mother distinction becomes blurred as indicated narratively in variations of the Adonis myth and through name usage in the Biblical case. The Hebrew letter “Nun,” נ, believed to be derived from an Egyptian Hieroglyphic of a serpent, may also be of relevance here. If so, assuming an esoteric meaning, a serpentine association would possibly suggest the nun or this root nanna as a Semitic figure, akin to the attendant figure Trivia. Yet this seems unlikely. While it’s true the Egyptian God Nun is understood as giving birth to Amut, a figure commonly depicted as a serpent, Nun himself is distinct from the serpent. Rather this study argues that Joshua son of Nun, and therefore also Jesus, may be, at least in part, a reference to this figure of Amut, if also a serpentine proto-Jewish figure more generally. Again, this study also argues that Nun and the element of water generally is an “Aryan element.” Indeed, an Aramaic word pronounced “nun” may also mean fish. 
In fact, this Aramaic word is from whence the Hebrew letter “nun” takes its name. Thus with the title nun, and even the Goddesses Nana and Nanna, we may find a piscine consumable resource indicated from a Jewish and/or Proto-Jewish perspective. As this study discusses, the Christian symbol the ichthys is likely a reference to the symbol of the fish as it appears in myths and symbols related to Venus and her North Syrian equivalent Atargatis. Though freshwater fish or dag, דָּג, along with bread, oil, wood and other consumables, appears to be a symbol for the Aryan as consumable more generally. Indeed the name given to Mary Magdalene’s town in the Babylonian Talmud is Magdala Nunayya, נוניה מגדלא meaning “tower of fishes.” Here it seems possible if not probable the word meaning “fishes” here, Nunayya, is related to the Sumerian Love Goddess Nanaya, appearing in the Aramaic as ננױננאױ. Magdala Nunayya or “Tower of Fishes” may suggest a brothel in the context of the New Testament symbolism where perhaps Galilean women are fish among Christ’s “Fishers of Men.” Here they are fishers of anthrōpōn/anthrópos, which may mean “mankind” or “folk,” hence not specifically men. Nunayya is close to Nuni, נוני, which translates as the verb “degenerate,” “atrophy” or “blast” in the Modern Hebrew. This is not entirely dissimilar from the Hebrew meaning of Vesta, וסטה, which, again, means “deviate,” “stray,” “diverge,” “pervert” and “be wayward.” This similarity in word meaning may be meaningful as we count the Vestal virgins as an important precedent for nuns. Such an origin, would in any case, seem to connect the nun class with Mary Magdalene or the proto-Jewish and Jewish esoteric notion of “the Sacred Whore.” Hence theoretically the title nun would suggest the New Testament prostitute, Magdalene, as much a “model” of the nun class as the Virgin Mary or Mary of Nazareth, the mother of the Jewish Godman. 
This would make sense as nuns understand themselves as “brides of Christ” as opposed to “mothers of Christ.” Here we see that the “Virgin” Single Mother, bearing the Jewish son, becomes the model. Hence it makes sense that nuns understood themselves as the wives of the Jewish God. This is critical to understand: Christianity, a Bride Gathering Cult in its origin, posits the mother of a Jew as the most pious type for women. Though certainly with the nun we appear to encounter the complex, astonishingly self-aware, Jewish understanding of sexual purity when it comes to women. Here sexual purity is, actually, racial, compromised through interaction with Jews. What suggests this with the figure of the nun? The Modern Hebrew word for nun, n’ziyrah, נזירה, is the feminine of naziyr, נָזִיר, meaning “monk,” “friar,” “hermit,” “abstinent” or “anchorite.” The naziyr are also priests appearing in the Hebrew Bible better known in the English as Nazarites. Nazarite means “consecrated one” and comes from the word nazar meaning “consecrate” or “separate.” In the chapter entitled Delilah: The emasculating and circumcising Jewess this study argues that Nazarites represent pure, abstinent Aryan figures, with Samson (“Man of Sun”) being the archetypal example. The Biblical descriptions of the Nazarite with braided, unshorn hair suggests that they may even be references to followers of the Apollonian Cult or some equivalent as depicted, for example, by the Greek Kouroi statues. Hence this understanding of the Nazarite as “consecrated” or “separated,” may suggest not merely a separation related to monasticism, asceticism or chastity but also racial purity or racial separateness. The notion of consecration suggests this as well in the Hebrew Bible where Yahweh commands: “Consecrate (qadash) to Me every firstborn male. 
The firstborn from every womb among the Israelites belongs to Me, both of man and beast.” Here again the firstborn we understand as a common symbol of the Aryan, with the figures of Adam, Cain, Japheth, Esau, all being examples. Hence it would seem to follow that nuns also represent “consecrated” Aryan figures in the Hebrew and from the Jewish perspective. In this light perhaps it makes sense the medieval church, along with a crypto-Jewish core, was interested especially in attracting virgins especially modeling themselves after the virgin Mary. This is understandable. Here we are reminded as well of the Hebrew word qodesh, קֹדֶשׁ, which means “apartness,” “sacredness,” and “sacrificial” and is used to describe, for example, the “clean sacrificial animals” and inner sanctum of the Tent of Meeting to which they are led. As pointed out in the chapter entitled Jewish Notions of “Cleanliness” and “Holiness:” The Aryan as “Discharge,” “Leprosy” and “Holy” Prostitute in the first book of this study, qodesh is also the root of the closely related qadosh, קָדוֹשׁ, meaning more simply “sacred” or “holy.” Interestingly qodesh is also the root of qadesh, קָדֵשׁ, meaning a “male temple prostitute.” It is also the root of the feminine equivalent qedeshah, קְדֵשָׁה, a word that means “whore” in the Biblical Hebrew. In the Modern Hebrew, the Biblical word for whore, qedeshah means simply “holiness” or the verbs “sanctify,” “consecrate,” “hallow” and “bless.” The Biblical Hebrew word for male temple prostitute, qadesh, means simply “holy” in the Modern Hebrew. Again, we may take it as suggested that the nun represents not merely Mary the “single mother” of Jesus but Mary Magdalene, the sacred whore. But, again, according even to a Jewish understanding, the woman or at least the Aryan woman becomes a whore through interaction with the Jew. This is the Triple Goddess descending into the Semitic underworld, as this study discusses. 
See the chapter entitled The Origin of the Semitic Bride Gathering Cult called Judaism in book #1 of this series. Scholiast on Apollonius of Rhodes ii. 722 See the chapter entitled Baptism and Anointing: Symbols for Copulation and Sexual Interaction in book #3 of this series. See the chapter entitled Easter as Corroborating Evidence for Sin as a Form of the Jewish God in book #2 of this study. See the chapter entitled Easter as Corroborating Evidence for Sin as a Form of the Jewish God in book #2 of this study. Galilee is discussed in multiple places in this study. See the chapter entitled Christian as an Impressionable Golem, Christianity as a Burden in book #3 of this study and the chapter entitled Controlling Logos by Eating the Ears of Lions in book #1 of this study and the chapter entitled The Women of the Galilee in this book. See the chapters entitled Bread as an Important Example of the “Consumption Motif” and “Blood Magic” and Wine in book #3 of this series. See the chapter entitled The Holy Name of Mary in book #3 of this series. “Sea of bitterness” is one of the name meanings commonly ascribed to the name Mary. See the chapter entitled Nomenclature Based Typology in the NT, Jesus as Reference to Joshua, “Esoteric Non-Jews” and Jews as Liminal Types in book #3 of this series. See the Chapter entitled The Elements of Water and Wood as Symbols of Aryan Blood in book #2 See the chapter entitled Fresh Water Fish as Symbol of Aryan Stock in book #3 of this series. Qadash, קָדַשׁ, meaning “to be set apart or consecrated” is the denominative verb from qodesh, קֹדֶשׁ meaning “apartness” or “sacredness.” Exodus 13:2 See the chapter entitled Aryans as Firstborn, Jews as Second Born in book #2 of this series. See the chapter entitled Scapegoat and “Burnt Offering” as Aryan in book #1 and the chapters that follow.
NJ PRIME: NJ Partnership for Research to Improve Mathematics Education (2013–2016)

NJ PRIME was a NJ Department of Education-sponsored Mathematics and Science Partnership program that provided school-based teams of K-5 elementary teachers with professional development to strengthen their content and teaching expertise and prepare them to be effective mathematics teacher leaders. The 3-year program was designed to ensure that participants could engage students in deeper conceptual understanding, critical thinking, and problem-solving in the topic areas now emphasized by the Common Core State Standards in Mathematics and facilitate their use and understanding by other elementary teachers in their districts. Project research investigated teachers’ understanding of and confidence in key mathematical topics, their pedagogical content knowledge, and their ability to apply the Common Core State Standards in Mathematics to their classroom teaching practices. Partners included 12 schools from the Bayonne, Weehawken, Union City, and Elizabeth public school districts and Stevens Institute of Technology.

Engineering Design Academy (2015)

Sponsored by John Wiley & Sons, the 2015 Engineering Design Academy brought WaterBotics to Hoboken, NJ 8th and 9th grade students as a 1-week summer experience. WaterBotics was an innovative, engaging and research-based engineering and science program for middle and high school youth that challenged students to design, build, program, test, and redesign underwater robots using LEGO® components and related programming tools. Students engaged in hands-on experiences in science, engineering design, and computer programming through a scaffolded series of team-based challenges.

BISU, supported by a $2.5 million National Science Foundation ITEST grant, refined and expanded a previously developed underwater robotics program to national and state partners with a focus on girls and underserved minorities, through formal and after-school education programs.
WaterBotics® supported students in learning engineering practices and engineering design concepts, computer programming, and underlying physical science ideas. Partners included the National Girls Collaborative Project and the International Technology and Engineering Educators Association. Project research investigated impact on students, training and classroom implementation, and scale-up and sustainability efforts. Partnering hub sites included Sinclair Community College, Triton College, the Texas Girls Collaborative Project, the Pacific Northwest Girls Collaborative Project, and the Kentucky Girls STEM Collaborative Project. Research and evaluation partners included Teachers College at Columbia University and Evaluation & Research Associates.

PSEG WaterBotics Camp (2014)

With support from the PSEG Foundation, Stevens Institute of Technology conducted three WaterBotics week-long day camps for a total of 72 middle school aged youth in the summer of 2014. WaterBotics is an innovative, engaging and research-based engineering and science program for middle and high school youth that challenges students to design, build, program, test, and redesign underwater robots using LEGO® components and related programming tools. Campers engaged in hands-on experiences in science, engineering design, and computer programming through a scaffolded series of team-based challenges.

PSEG Sustainable Energy Institute (2012-2014)

Sponsored by PSEG and the NJ Science Teachers Association’s Maitland P. Simmons Foundation, this teacher professional development program blended middle school science concepts for renewable energy with hands-on engineering design to raise awareness of the challenge of using sustainable energy resources to meet growing world demand. Teachers explored key scientific and technological concepts needed to understand solar and wind power generation that are aligned with the NJ standards in science and technology.
Career awareness of energy-related science and engineering disciplines was infused throughout the institute. PSEG has supported this and similar programs at Stevens since 1989.

BAYER Healthcare – Alka-Seltzer Rocket Contest (2012-2013)

Bayer Healthcare partnered with CIESE and Liberty Science Center to hold the Alka-Seltzer Rocket Contest. The contest was held June 8, 2013 at Liberty Science Center, Jersey City, NJ. 500 students from 30 middle schools throughout New Jersey participated by designing, building, and launching film-canister rockets. CIESE provided support by recruiting educators, writing curriculum, facilitating professional development workshops, and helping with event logistics.

Girl Scouts – 100 Years of Science Conference and Professional Development (2012)

Through a grant from PSEG, CIESE partnered with three New Jersey Girl Scouts councils to host 100 Years of Science, a one-day science and engineering event at Stevens on Saturday, Sept. 29, 2012. Prior to the event, CIESE provided training to adult volunteers from each of the councils to prepare them to lead the girls in hands-on design activities for solar and wind energy production during the conference. At the event, student volunteers from the Stevens Society of Women Engineers worked with middle-school-aged scouts from the Heart of New Jersey, Northern New Jersey, and Central & Southern New Jersey councils on alternative energy experiments and green energy projects.

Systems Engineering Capstone Project

A $2.5 million U.S. Department of Defense research effort in collaboration with the Systems Engineering Research Center (SERC) based at Stevens and 10 universities, this project researched strategies and practices to infuse systems engineering concepts and increase learning and career interest in systems engineering through undergraduate and graduate capstone projects.
WaterBotics Summer Camp (2012)

With support from the Lockheed Martin Foundation, Stevens held a 1-week summer day camp for middle school students, engaging them in WaterBotics, an innovative underwater robotics program that challenges students to design, build, program, test, and redesign underwater robots made of LEGO and other components. Campers progressed through a series of increasingly sophisticated “missions” that culminated in a final design challenge.

iSTEM: An Integrated STEM Professional Development Program (2011-2014)

This teacher professional development program in the Diocese of Paterson provided integrated science, technology, engineering, and mathematics (STEM) workshops and classroom support to elementary and middle school science, math, and technology teachers from seventeen schools. The program focused on core STEM content areas and emphasized 21st century skills, such as problem-solving, global collaboration, creativity and innovation, and communication. Students engaged in design, problem-solving, decision-making and investigative activities.

Student Innovation Camp (2011)

Funded through an NSF Presidential Award for Excellence in Science, Mathematics, and Engineering Mentoring (PAESMEM) in recognition of CIESE's expertise, encouragement, and mentoring of teachers and students in science and engineering, this summer camp experience held at Stevens exposed middle school students to examples of successful innovation techniques, highlighting inventions and patents of faculty and graduates, and provided a hands-on approach to identifying problems and developing solutions. A camp manual was developed to share with other STEM camp providers.

American Farm School (2011-2012)

Stevens partnered with the American Farm School in Greece to bring innovative and interactive teaching methods to enhance student motivation and subject interest.
Curriculum Topic Study to Enhance Achievement in Mathematics and Science (C-TEAMS) (2010-2013) This three-year, $1.7 million NJ Dept. of Education Mathematics and Science Partnership program utilized a research-based methodology, Curriculum Topic Study, to provide multi-grade level teacher teams from partner districts with intensive professional development designed to deepen teachers' science and mathematics content knowledge and pedagogical content knowledge through an intensive investigation of the NJ Core Curriculum Content Standards.

Impact of Strengthening the “T” and “E” Components of STEM in High School Biology and Chemistry (2009-2014) Curriculum and professional development in biology and chemistry were the foundation of this $1.4 million NSF Discovery Research K-12 grant to investigate the impact of incorporating engineering in these courses on student learning and acquisition of communication and collaboration skills. CIESE partnered with Portland State University to develop engineering-infused and parallel traditional curriculum materials in a randomized controlled trial to investigate the impact of engineering. Measurement instruments were developed to assess students' understanding of science concepts and their communication and collaboration skills.

Energy & Engineering Institute (2009-2012) With support from PSEG, CIESE conducted a professional development program for middle school teachers that incorporated renewable energy topics and hands-on engineering design. Teachers explored an extensive array of wind power resources and constructed a working wind turbine generator. Teachers used the engineering design process to optimize the performance of the turbines as they were used to run motors, light LEDs, and pump water. Strategies for implementing the activities in middle school science and technology classes were discussed and modeled through classroom support visits with teachers.
Tres Bosques (2009-2010) Through a partnership with iEARN USA, CIESE provided professional development programs on forest ecology with field excursions to Black Rock Forest, NY; Washington, DC; and the Dominican Republic for 20 educators.

Montclair Public Schools Middle School Math Program (2009-2010 & 2011-2012) CIESE collaborated with the Montclair Public School district to provide mathematics coaching services and classroom support to strengthen student achievement at the middle school level.

Department of Homeland Security (2009-2010) CIESE conducted professional development workshops related to the content and themes of the DHS Maritime Security Center of Excellence at Stevens for select members of the Build IT project. Participants used the LEGO equipment and training for an advanced project using technologies that are part of port/maritime security (RFID and underwater sensors).

Systems and Global Engineering Project (SAGE) (2008-2010) Funded by the Martinson Family Foundation and Edison Venture Fund, this groundbreaking initiative introduced systems engineering and globally-distributed collaboration to high school students throughout the U.S. Stevens and the New Jersey Technology Education Association (NJTEA) partnered to develop, pilot, and disseminate systems and global engineering instructional modules for use in high school engineering, technology, and science courses. CIESE provided both face-to-face and online professional development to prepare teachers to effectively implement the curriculum modules in high school classrooms.

BUILD IT: Building STEM and IT Skills and Career Interests (2008-2009) The Motorola Foundation funded the expansion of the BUILD IT project to eight middle and high schools in New York City. Sixteen teachers and over 200 students engaged in designing, building, and controlling underwater robots.
21st Century Community Learning Center (2008-2010) In partnership with the Red Bank Borough School District and with funding from the New Jersey Department of Education, CIESE prepared grades 4-8 teachers to implement STEM curricula with their students in the areas of life science, earth science, and physical science as well as in engineering.

Self-Assembled Nanohydrogels for Differential Cell Adhesion and Infection Control (2007-2010) A science research project funded by NSF through the Nanotechnology Interdisciplinary Research Teams (NIRT) Program, the project included an outreach component for high school biology and chemistry classes. CIESE created and distributed curriculum materials that introduced students to the university-level research as it relates to core concepts in biology and chemistry. More than 1,000 high school students in five states completed the activities. Furthermore, the chemistry module was revised to include an engineering component and was used in an NSF-funded Discovery Research K-12 project.

Partnership to Improve Student Achievement (PISA) (2007-2010) This New Jersey Department of Education-sponsored mathematics and science partnership among Stevens Institute of Technology, Montclair State University, and Liberty Science Center provided grades 3-5 teachers from Jersey City, Hoboken, Bayonne, Newark, Weehawken, and Piscataway with high-quality, research-based, classroom-focused professional development, innovative curricula and materials, and a dynamic and supportive learning community designed to address topics in key content areas in life, earth, and physical sciences and technology education.
Honeywell Teachers for the 21st Century (2007-2010) With funding from Honeywell Hometown Solutions, CIESE provided professional development to Jersey City and other New Jersey middle school science teachers to engage them in proven strategies that use technology-supported science curricula, combined with hands-on science investigations, to increase student interest and achievement in science.

Four Rivers, One World (2007-2008) In collaboration with iEARN, CIESE conducted teacher professional development workshops on water quality testing and analysis. The workshops consisted of hands-on, on-location training for teachers in New Jersey, New York, Bangladesh, Nepal, and India.

Engineering Our Future New Jersey (2006-2010) This Stevens initiative was designed to promote engineering and technology education in elementary, middle, and high schools throughout New Jersey. With support from Verizon, CIESE provided professional development to over 2,000 K-12 teachers throughout New Jersey by partnering with school districts, other institutions of higher education, and related engineering, technology, science, and research organizations.

BUILD IT (2006-2009) This $1.2 million comprehensive NSF ITEST project provided over 2,600 students from socioeconomically and racially diverse middle and high schools throughout New Jersey with intensive, in-class IT experiences in the design, construction, and programming of underwater robotic vehicles.

GEAR-UP College Knowledge Passaic (2005-2006) In partnership with Passaic Public Schools, CIESE provided high-quality, hands-on mathematics professional development to teachers of seventh grade students. The GEAR UP program is a discretionary grant program of the U.S. Department of Education designed to increase the number of low-income students who are prepared to enter and succeed in postsecondary education.
Environmental Protection Agency (EPA) Particulates Matter (2005) The curriculum Particulates Matter, developed by CIESE, enhanced awareness of current environmental health hazards posed by fine particle pollution through integration of the EPA's new particulate real-time data source. This curriculum involved the collection, recording, and analysis of real-time particulate matter data to engage students in authentic, real-world scientific investigations into issues related to particulate matter pollution.

Research in Engineering Education (RIEE) (2004-2010) A joint initiative between CIESE and the School of Engineering, this project created innovative tools and pioneered new instructional methodologies to increase student learning, engagement, and persistence in technological fields. Through funding from the New Jersey Department of the Treasury and the AT&T Foundation, 17 Stevens faculty members received catalyst grants to improve and enhance undergraduate engineering, science, and mathematics education to address critical and pervasive challenges in engineering education worldwide, such as deepening student understanding of an ever-increasing breadth of technical knowledge needed by engineers; increasing student engagement and interaction in large lecture classes; infusing systems thinking earlier in an undergraduate's education; and making stronger connections between undergraduate coursework and relevant, real-world problems.

Environmental Protection Agency (EPA) Air Quality Modeling (2004-2005) Funded by the Environmental Protection Agency's Office of Pollution Prevention and Toxics, CIESE developed an online module for high school students entitled Air Quality: Learning Science with Models, EPA's Internet Geographic Exposure Modeling System (IGEMS/ISC). This material promotes student participation in scientific inquiry and use of models for the prediction of potential exposure associated with the release of chemicals to air.
Environmental Protection Agency (EPA) Air Pollution, What's the Solution (2004-2005) CIESE developed and piloted this EPA-sponsored online curriculum project, Air Pollution, What's the Solution, which used inquiry-based science materials to place student learning in the context of real events. Teachers, students, parents, and other educational stakeholders learned about air pollution and its health effects. Through these materials, students learned to think critically and enhanced their problem-solving and decision-making skills.

New Jersey Community College Strategic Partnership (NJCC SP) (2004-2007) In collaboration with six New Jersey county colleges (Burlington, Camden, Essex, Hudson, Mercer, and Middlesex), this faculty training program focused on improving teaching and learning in P-12 science, mathematics, and technology, and extended the US DOE-funded Pathways project. NJCC SP infused dynamic, research-based teaching methods that utilize Internet-based real-world data into the preservice education of New Jersey's teachers in order to increase student interest and achievement in science and mathematics in grades P-12.

Passaic Math Achievement to Realize Individual eXcellence (MATRIX) (2004-2007) This three-year partnership with the Passaic City School District aimed to improve sixth, seventh, and eighth grade student achievement in mathematics through ongoing teacher professional development and in-class support focused on effectively integrating technology into teaching and learning.

Elizabeth Math Achievement to Realize Individual eXcellence (MATRIX) (2004-2007) This three-year partnership with Elizabeth Public Schools aimed to increase student achievement in mathematics in grades six through eight by providing classroom teachers with ongoing professional development and in-class support that focused on integrating technology into the curriculum and instruction.
CIESE worked with Elizabeth Public Schools to align authentic mathematics problems and activities to the district curriculum and to develop a cadre of core mentor teachers in the district.

Trenton Savvy Cyber Teacher® (2004-2008) For over ten years, Public Service Electric & Gas (PSE&G) enabled CIESE to bring its Internet-based curriculum and professional development training to schools across the state of New Jersey. PSE&G funded the Savvy Cyber Teacher® professional development program, through which 30 hours of hands-on professional development were provided to elementary teachers in the Trenton school district.

Enhancing the Capacity of Math and Science Teachers (2004-2005) This $20,000 planning conference sponsored by the National Science Foundation gathered faculty from several prestigious independent colleges across the country to explore the possible use and necessary adaptations of a faculty development program for college faculty who teach preservice teachers. The program, called the Savvy Cyber Professor, was developed and implemented in partnership with community colleges under the Pathways Project umbrella.

Teaching Math with Technology (TMT) in Piscataway (2004-2005) In a one-year partnership funded with $85,000 from the NJ Department of Education's P-12 Higher Education/Public School Partnership grant program, CIESE collaborated with Piscataway Township Schools to provide 40 hours of hands-on professional development, teacher mentoring, and web-based support in Piscataway Township's three middle schools to improve student mathematics achievement through technology.
Collaborative Research of Mid-Atlantic COSEE: Center for Ocean Science Education Excellence (2003-2007) A partnership among the Center for Environmental Science / University of Maryland (CES), the Center for Innovation in Engineering and Science Education (CIESE)/Stevens Institute of Technology, the Chesapeake Bay Foundation, Hampton University, the Jacques Cousteau National Estuarine Research Reserve (JCNERR), the Mid-Atlantic Bight National Undersea Research Program, the New York Aquarium, the Rutgers University Institute for Marine and Coastal Sciences (IMCS), the University of Delaware College of Marine Studies, and the Virginia Institute of Marine Science (VIMS). The goal of this NSF-funded, $2.5 million, five-year partnership was to integrate research and education programs to encourage lifelong learning experiences for everyone. COSEE aimed especially to reach out to K-12 educators, students (K-16), coastal managers, families, and underserved audiences. COSEE-Mid-Atlantic used coastal observing systems to attain its goal and to promote awareness and understanding of our oceans.

Pathways Project (2003-2007) The Pathways Project was envisioned to fill a critical need in preparing tomorrow's teachers. Involving faculty from 33 community colleges over four years in Internet-based training, Pathways was designed to promote best practices using technology-based instruction. This $1.5 million initiative, sponsored by the U.S. Department of Education's Preparing Tomorrow's Teachers to Use Technology (PT3) grant program, included a faculty training program called the "Savvy Cyber Professor," a library of over 200 Internet-based "Real World Learning Objects," and membership in an online community to support course implementation.

Education Commission of the States (2003) A collaboration between CIESE and the Education Commission of the States helped to disseminate and promote materials and information from the PT3 Pathways Project (described above).
Students Using Technology to Achieve Reading/Writing (STAR*W) (2003-2006) CIESE collaborated with Hoboken Public Schools in this three-year, $750,000 partnership to increase student achievement in Language Arts Literacy in grades three through five through the use of technology. CIESE administered, implemented, and provided all services for this grant, which was funded by the New Jersey Department of Education.

Savvy Cyber Teacher in Jersey City (2003-2006) CIESE implemented the Savvy Cyber Teacher professional development program with teachers in Jersey City Public Schools to prepare them to use unique and compelling Internet applications for science and math instruction.

COOL Classroom Project (2003) In collaboration with Rutgers University, CIESE developed classroom activities that utilized real-time data from the Rutgers Marine Remote Sensing Laboratory located in NJ.

Afghan Women's Program (2003) CIESE received a grant from the US State Department's Bureau of Educational and Cultural Affairs' Citizens Exchange Office to undertake an intensive summer training program for nine women science and mathematics instructors from universities in Afghanistan. The program provided professional development in mathematics, science, and educational technology to support classroom learning in Afghanistan's high schools.

Bank Street College Pre-Service Teacher Technology Program (2003) Ciber@prendiz: Aplicaciones de la Internet para el Aprendizaje Educativo (AIAE) was a curriculum- and technology-based teacher professional development pilot project sponsored by the Inter-American Development Bank (IDB) and the Omar Dengo Foundation. This project focused on preparing educators and administrators from Costa Rica, Ecuador, and Perú in the use of "unique and compelling" educational applications of the Internet in their classrooms.
Connecticut Department of Environmental Protection (CTDEP) Clean School Bus Project (2002, 2005) CIESE was commissioned by the Connecticut Department of Environmental Protection to develop curriculum and professional development materials to accompany the clean school bus technology employed by the state's school districts as part of the EPA's Clean School Bus initiative. Building on the success of the "Is Your Bus Exhausting?" air quality curriculum that CIESE created for Norwich, CT, CIESE collaborated with Bridgeport teachers on the integration of the curriculum into their classrooms. The professional development consisted of a combination of face-to-face and Internet-based asynchronous sessions.

Stevens/Hoboken Partnership (2002-2005) This four-year, in-district technology program was designed to teach effective use of Internet resources in the K-12 classroom, including web page building and in-class support.

Elizabeth Public School District (2001-2011) In this long-term district partnership, CIESE worked with middle school mathematics teachers to improve teaching and learning through technology. Initial work with 11 schools and 120 teachers resulted in an increase in district test scores. This led to an extended partnership funded through the New Jersey MATRIX grant.

Independent College Fund of N.J. (2001) In 2001, support was provided for the CIESE program TKO Science: Turning Kids on to Math and Science. The program was designed to help K-12 educators infuse technology into their math and science curriculum.

Project AIR (2001) Funded by the Northeast States for Coordinated Air Use Management (NESCAUM), Project AIR (Atmospheric Investigations in Real-time) resulted in the development of online curriculum materials for students in grades 6-9 to introduce them to the topic of ground-level ozone. This curriculum was the basis of a later, further-developed CIESE curriculum called Air Pollution: What's the Solution.
New Jersey High-Tech Workforce Excellence: K-12 Partnership Enhancement (2000-2003) This was a $1 million, three-year program funded by the New Jersey Department of Education to enhance opportunities for educationally and economically disadvantaged students by increasing their interest and participation in science, math, and technology education. CIESE provided intensive professional development to science, math, and technology teachers from the Newark, Jersey City, Irvington, Passaic, Elizabeth, Union City, and Keansburg public school systems to strengthen teaching and learning in science, mathematics, and other core subjects through the meaningful integration of Internet-based curriculum resources. CIESE worked with 30 K-8 schools and 30 high schools to engage students in authentic investigations of real-world phenomena, including collaboration with scientists, engineers, and other experts located around the world. Dr. Gayle W. Griffin, Assistant Superintendent for the Newark Public Schools, was quoted as attributing "dramatic increases in students' science achievement scores…in those schools that worked with Stevens..."

Applying Technology and Triarchic Enhancement to Instruction and Assessment in a School Science Curriculum (2000-2002) This NSF-sponsored research project, conducted in collaboration with the PACE Center at Yale University, investigated the effects of triarchic instruction and real-time Internet learning on instructional outcomes. Student achievement in high school physics was evaluated based on learning with and without triarchic instruction and with and without use of computers.

Technology in Mathematics Education (TIME) (1999-2006) The TIME workshop series provided teachers, technology coordinators, library/media specialists, and administrators with the skills and knowledge to help children use technology effectively to better understand mathematics.

Felix & Elizabeth Rohatyn Foundation (1999) Support provided to CIESE for continuation programming.
Bank Street College Teacher Recruitment Initiative (1999-2001)

National Internet in Education Teaching Program: Alliance + (1998-2003) Alliance+ presented a proven model for wide-scale dissemination of teacher professional development in the use of technology in K-12 education. This $9.2 million, five-year initiative awarded by the U.S. Department of Education led to the piloting of the Savvy Cyber Teacher® professional development program, in which over 8,000 teachers participated. To learn more about this professional development program, visit http://www.stevens.edu/ciese/savvycyberteacher.

Internet Knowledge Exploration (IKE) (1998-2001) With support from the 1998 New Jersey Department of Education Eisenhower Professional Development program, CIESE partnered with the Paterson, New Brunswick, and Bayonne school districts in New Jersey in an intensive elementary school professional development effort focused on improving teacher proficiency with educational technology. The program utilized Internet-based communications tools and curriculum resources in grades 4-6 in science, mathematics, and language arts/literacy. CIESE collaborated with Bank Street College and St. Peter's College in the development and delivery of professional development activities in the area of language arts. St. Peter's College incorporated the use of Internet-based resources into its teacher education programs based on the CIESE model.

Science LINK (1998-2001) With funding from the AT&T Foundation, 30 middle school teachers from Paterson, Passaic, and Plainfield Public Schools were engaged in an intensive, three-year developmental process to learn the tools and techniques of the Internet and to discover compelling curriculum applications that engage students in authentic investigations of science.

Project LINK (1998) In collaboration with Teaching Matters Inc.
(TMI), CIESE worked with New York City school districts to bring its Internet-based curriculum materials and approaches to middle school mathematics and science educators. Funding was provided by Pfizer Inc., the Greenwall Foundation, Chase Manhattan Bank, Union Carbide, and others, along with the districts themselves.

Paterson IMATT Math Project (1998-2000) A technology-in-mathematics mentoring project with Paterson middle school teachers.

Alliance for Training K-12 Teachers (1997-1999) With support from the U.S. Department of Education, a 10-session, hands-on professional development program to introduce teachers to unique and compelling Internet applications was developed and piloted. The Savvy Cyber Teacher® program assisted elementary, middle, and high school teachers in implementing Internet-based resources that engage students in quantitative, inquiry-based activities in science, mathematics, social studies, and language arts.

NASA/CIESE Partnership (1997-2001) CIESE partnered with NASA's Goddard Space Center in a telementoring project to engage and motivate disadvantaged Hispanic and Latino students and their teachers in science. The Stevens/NASA telecollaborative partnership paired engineers and researchers with teachers and their Hispanic students in grades five through nine. The schools involved were the Joseph Brandt Middle School in Hoboken, School 40 in Jersey City, and four schools in Union City: Woodrow Wilson School, Roosevelt School, Edison School, and Union Hill High School.

K-12 Partnership Program (1997-2004) A professional development program for grades K-8 and 9-12 teachers to enrich science education through the use of the Internet.
David Sarnoff Research Laboratory (1997, 2000-2001) Hosted by the Sarnoff Corporation and led by CIESE, Project MOST (Maximizing Opportunities for Students through Technology) engaged high school minority and economically disadvantaged students and their teachers in learning the technical skills required for real-world computer-based assignments. In this demonstration project, students learned to conduct the data transformation tasks necessary to manipulate hard copy and electronic text and image documents for optimal presentation on a U.S. government Internet web site. Students and teachers learned computer applications that included word processing, web authoring, spreadsheet, and presentation software. Students were also introduced to a variety of career preparation topics.

Bell Atlantic Networking Infrastructure in Education Fellowships (1995-1996) Funding from Bell Atlantic supported teachers engaged in the NJNIE program (below).

New Jersey Statewide Systemic Initiative (NJSSI) (1994-2004) Serving Hudson and Bergen counties, CIESE was a Regional Center and Specialty Site for NJSSI's science, mathematics, and technology education outreach and dissemination program. CIESE collaborated with school and district administrators and teachers to promote rigorous, standards-based curricula and professional development opportunities for K-12 science, mathematics, technology, and general classroom teachers, and served as a facilitator to help schools and districts identify useful resources to strengthen teaching and learning in these core subjects.

New Jersey Networking Infrastructure in Education (NJNIE) (1994-1997) This $2.9 million, three-year project funded by the National Science Foundation was one of the first in the country to explore Internet applications in K-12 science and mathematics.
CIESE worked with approximately 3,000 teachers from 700 schools across the state of New Jersey, resulting in the development of Savvy Cyber Teacher® and a library of Internet-based real-time data and collaborative projects for K-12 science and mathematics.

Independent College Fund of N.J. (1994) CIESE developed and offered the program Strengthening School-College Partnerships for Access to the Information Superhighway, consisting of hands-on Internet training for teams that included math, science, technology, and library educators and school administrators.

U.S. DOE Star Schools Program (1994) A teleconference series on using technology in mathematics education, conducted in collaboration with New Jersey Network/SERC.

Business and Education Together (BET) Foundation (1994) Multiple grants from the BET Foundation supported professional development for teachers in NJ's Monmouth, Ocean, Morris, Somerset, and Hunterdon counties on learning how to use technology for math and science education.

Charles A. Dana Foundation – NJ KnowledgeNet Videoconference Series (1993) In the spring of 1994, CIESE completed production of twenty-one 90-minute teacher training videoconferences on the use of technology in mathematics. More than 500 teachers from 150 school systems in New Jersey, as well as teachers from 15 states, participated in these videoconferences. Produced in cooperation with New Jersey Network and broadcast via the Satellite Educational Resources Consortium (SERC), these videoconferences included live panel discussions by CIESE staff and teachers; taped documentaries of real classroom experiences of teachers using technology in mathematics instruction; and demonstrations of various software packages.

Enhancing Mathematics Instruction through Computer Oriented Active Learning Environments (1993-1996) Funded by the National Science Foundation, middle and high school teachers from 33 N.J.
schools were trained in the effective use of computer-based technologies for teaching mathematics. These teachers were also trained and supported as mentor teachers who worked with over 200 mentee teachers during the lifetime of the project.

Teleconference Program on Computers in Mathematics Education (1991, 1993) Supported by both the N.J. Department of Education and the N.J. Department of Higher Education.

IBM Corporation (1990) With funding from IBM, CIESE provided computer labs to Hoboken Public Schools and the School District of South Orange and Maplewood.

Fund for Innovation in Education, U.S. Department of Education (1989-1991) Supported the expansion of the Computer Integration in Mathematics Education program (below) to include middle school teachers in addition to high school teachers.

Computer Integration in Mathematics Education (1988-1991) Sponsored by the N.J. Department of Education, CIESE worked with high school teachers in five NJ school districts to prepare them to use computers and technology in mathematics education.
What is an empath? Being an empath means being affected by other people's energies and having an innate ability to intuitively feel and perceive others. Your life is unconsciously influenced by others' desires, wishes, thoughts, and moods. Being an empath is much more than being highly sensitive, and it's not limited to emotions. Empaths can perceive physical sensitivities and spiritual urges, as well as simply knowing the motivations and intentions of other people. You either are an empath or you aren't; it's not a trait that can be learned. You are always open, so to speak, to processing other people's feelings and energy, which means that you really feel, and in many cases take on, the emotions of others. Many empaths experience things like chronic fatigue, environmental sensitivities, or unexplained aches and pains daily. These are all things that are more likely attributable to outside influences than to yourself. Essentially, you are walking around in this world with all of the accumulated karma, emotions, and energy of others.

Empaths are often quiet achievers. They can take a while to handle a compliment, for they're more inclined to point out another's positive attributes. They are highly expressive in all areas of emotional connection and talk openly and, at times, quite frankly. They may have few problems talking about their feelings if another cares to listen (regardless of how much they listen to others). However, they can be the exact opposite: reclusive and apparently unresponsive at the best of times. They may even appear ignorant. Some are very good at "blocking out" others, and that's not always a bad thing, at least for the learning empath struggling with a barrage of emotions from others as well as their own feelings. Empaths have a tendency to openly feel what is outside of them more than what is inside of them. This can cause empaths to ignore their own needs.
In general, an empath is non-violent and non-aggressive and leans more toward being the peacemaker. Any area filled with disharmony creates an uncomfortable feeling in an empath. If they find themselves in the middle of a confrontation, they will endeavor to settle the situation as quickly as possible, if not avoid it altogether. If any harsh words are expressed in defending themselves, they will likely resent their lack of self-control and prefer to resolve the problem peacefully and quickly. Empaths are more inclined to pick up another's feelings and project them back without realizing their origin in the first place. Talking things out is a major factor in releasing emotions for the learning empath. Empaths can develop an even stronger degree of understanding so that they can find peace in most situations. The downside is that empaths may bottle up emotions and build barriers sky-high so as not to let others know their innermost thoughts and/or feelings. This withholding of emotional expression can be a direct result of a traumatic experience, an expressionless upbringing, or simply being told as a child, "Children are meant to be seen and not heard!" Without a doubt, this emotional withholding can be detrimental to one's health, for the longer one's thoughts and/or emotions go unreleased, the more power they build. The thoughts and/or emotions can eventually become explosive, if not crippling. The need to express oneself honestly is a form of healing and a choice open to all. Not doing so can result in a breakdown of the person, leading to mental/emotional instability or the creation of a physical ailment, illness, or disease.

Here are 30 of the most common traits:

1. Knowing: Empaths just know stuff, without being told. It's a knowing that goes way beyond intuition or gut feelings, even though that is how many would describe it. The more attuned they are, the stronger this gift becomes.

2.
Being in public places can be overwhelming: Places like shopping malls, supermarkets or stadiums where there are lots of people around can fill the empath with turbulently vexed emotions that are coming from others. 3. Feeling others emotions and taking them on as your own: This is a huge one for empaths. To some they will feel emotions off those near by and with others they will feel emotions from those a vast distance away, or both. The more adept empath will know if someone is having bad thoughts about them, even from great distance. 4. Watching violence, cruelty or tragedy on the TV is unbearable: The more attuned an empath becomes the worse it is and may make it so they eventually have to stop watching TV and reading newspapers altogether. 5. You know when someone is not being honest: If a friend or a loved one is telling you lies you know it (although many empaths try not to focus on this because knowing a loved one is lying can be painful). Or if someone is saying one thing but feeling/thinking another, you know. 6. Picking up physical symptoms off another: An empath will almost always develop the ailments off another (colds, eye infections, body aches and pains) especially those they’re closest to, somewhat like sympathy pains. 7. Digestive disorders and lower back problems: The solar plexus chakra is based in the centre of the abdomen and it’s known as the seat of emotions. This is where empaths feel the incoming emotion of another, which can weaken the area and eventually lead to anything from stomach ulcers to IBS (too many other conditions to list here). Lower back problems can develop from being ungrounded (amongst other things) and one, who has no knowledge of them being an empath, will almost always be ungrounded. 8. Always looking out for the underdog: Anyone whose suffering, in emotional pain or being bullied draws an empath’s attention and compassion. 9. 
Others will want to offload their problems on you, even strangers: An empath can become a dumping ground for everyone else’s issues and problems, which, if they’re not careful can end up as their own. 10. Constant fatigue: Empaths often get drained of energy, either from energy vampires or just taking on too much from others, which even sleep will not cure. Many get diagnosed with ME. 11. Addictive personality: Alcohol, drugs, sex, are to name but a few addictions that empaths turn to, to block out the emotions of others. It is a form of self protection in order to hide from someone or something. 12. Drawn to healing, holistic therapies and all things metaphysical: Although many empaths would love to heal others they can end up turning away from being healers (even though they have a natural ability for it), after they’ve studied and qualified, because they take on too much from the one they are trying to heal. Especially if they are unaware of their empathy. Anything of a supernatural nature is of interest to empaths and they don’t surprise or get shocked easily. Even at the revelation of what many others would consider unthinkable, for example, empaths would have known the world was round when others believed it was flat. 13. Creative: From singing, dancing, acting, drawing or writing an empath will have a strong creative streak and a vivid imagination. 14. Love of nature and animals: Being outdoors in nature is a must for empaths and pets are an essential part of their life. 15. Need for solitude: An empath will go stir-crazy if they don’t get quiet time. This is even obvious in empathic children. 16. Gets bored or distracted easily if not stimulated: Work, school and home life has to be kept interesting for an empath or they switch off from it and end up daydreaming or doodling. 17. Finds it impossible to do things they don’t enjoy: As above. Feels like they are living a lie by doing so. 
To force an empath to do something they dislike through guilt or labelling them as idle will only serve in making them unhappy. It’s for this reason many empaths get labelled as being lazy. 18. Strives for the truth: This becomes more prevalent when an empath discovers his/her gifts and birthright. Anything untruthful feels plain wrong. 19. Always looking for the answers and knowledge: To have unanswered questions can be frustrating for an empath and they will endeavour to find an explanation. If they have a knowing about something they will look for confirmation. The downside to this is an information overload. 20. Likes adventure, freedom and travel: Empaths are free spirits. 21. Abhors clutter: It makes an empath feel weighed down and blocks the flow of energy. 22. Loves to daydream: An empath can stare into space for hours, in a world of their own and blissfully happy. 23. Finds routine, rules or control, imprisoning: Anything that takes away their freedom is debilitating to an empath even poisoning. 24. Prone to carry weight without necessarily overeating: The excess weight is a form of protection to stop the negative incoming energies having as much impact. 25. Excellent listener: An empath won’t talk about themselves much unless it’s to someone they really trust. They love to learn and know about others and genuinely care. 26. Intolerance to narcissism: Although kind and often very tolerant of others, empaths do not like to be around overly egotistical people, who put themselves first and refuse to consider another’s feelings or points of view other than their own. 27. The ability to feel the days of the week: An empath will get the ‘Friday Feeling’ if they work Fridays or not. They pick up on how the collective are feeling. The first couple of days of a long, bank holiday weekend (Easter for example) can feel, to them, like the world is smiling, calm and relaxed. Sunday evenings, Mondays and Tuesdays, of a working week, have a very heavy feeling. 28. 
Will not choose to buy antiques, vintage or second-hand: Anything that’s been pre-owned carries the energy of the previous owner. An empath will even prefer to have a brand new car or house (if they are in the financial situation to do so) with no residual energy. 29. Sense the energy of food: Many empaths don’t like to eat meat or poultry because they can feel the vibrations of the animal (especially if the animal suffered), even if they like the taste. 30. Can appear moody, shy, aloof, disconnected: Depending on how an empath is feeling will depend on what face they show to the world. They can be prone to mood swings and if they’ve taken on too much negative will appear quiet and unsociable, even miserable. An empath detests having to pretend to be happy when they’re sad, this only adds to their load (makes working in the service industry, when it’s service with a smile, very challenging) and can make them feel like scuttling under a stone. If you can say yes to most or all of the above then you are most definitely an empath. Herbalism in Magic Herbs have enormous magical power, as they hold the earth’s energy within them. Each herb has unique properties that can enhance one’s magical goals. Herbs also may have medicinal properties. The magical practitioner can draw upon either aspect when performing a spell. Following are three key herbs I use in my work, and the magical properties associated with each : The most prevalent ingredients of magic spells are processed botanicals, especially dried plants, herbs and oils. Drying plants preserves them for extended use, allowing you to work with plants out of season and with those that are cannot be grown in your region. Dried botanicals frequently are sold already chopped, cut or powdered. As these actions usually need to be done before spell casting, purchasing botanicals that are ready to be used can save time and effort. There is a caveat, of course. 
Leaves and blossoms, even chopped, often retain their characteristics, such as aroma, and so are easily distinguishable. You are unlikely to confuse rose with peppermint or hibiscus! Roots, on the other hand - often the most magically potent part of the plant - once chopped or powdered are fairly indistinguishable from one another. It is not uncommon for unethical or ignorant vendors to substitute one root for another. If you need a distinct root, buy the whole root and grind and powder it yourself, even though this can be difficult and time-consuming. This is the only way to guarantee that you are receiving what you want, and the only way to maintain control over what may be a pivotal ingredient. Familiarize yourself with herbs and other botanicals. Know what they should look like and what they should smell like, and you'll be less likely to be fooled. If you grow plants or have access to fresh ones, it is quite easy to dry them yourself. Hang botanicals upside down in small bunches. Don't overcrowd them; you want air to circulate. Allow the botanicals to hang in a well-ventilated area away from direct sunlight until dry.

Insomnia or work-related stress may be what you ultimately chalk your early rising up to. While it's possible that's the root of your problems, it's also possible it's something entirely different, and of a spiritual nature. As it turns out, the time you wake up in the middle of the night might be something you should really pay attention to. If you wake up at the same time every night, there may be an inner meaning that can be applied to your life. It's possible that you have multiple energies present in your body that you are unaware of, according to IN5D. Traditional Chinese medicine has long used energy meridians as guidelines. ACOS describes these meridians as energy highways within the body. Qi flows through these channels and reaches different points in the body.
The energy meridians of the body are also connected to a clock system that, according to ancient Chinese medicine, energizes different parts of your body at different times of the day. Waking between 3am and 5am every night, for example, is a sign that energies in the corresponding part of your body are blocked or weak.

TROUBLE SLEEPING BETWEEN 9:00PM AND 11:00PM
Between 9 and 11pm is typically bedtime for most people. Difficulty falling asleep during this time is a sign of excess stress and worries from the day. Positive mantras, meditation, or successive muscle tension and relaxation exercises are recommended to help you sleep.

WAKING BETWEEN 11:00PM AND 1:00AM
According to ancient Chinese medicine, this is the time frame when the energy meridian of the gall bladder is active. Waking up during this time frame is associated with emotional disappointment. Practice unconditional self-acceptance and forgiveness of others in order to get back to sleep.

WAKING BETWEEN 1:00AM AND 3:00AM
In the Chinese medicine body clock, this time frame belongs to the energy meridian associated with the liver. Waking up at this time is associated with the emotion of anger and excess yang energy. Try drinking cool water and taking ownership of the situation that caused you to feel angry in order to rest peacefully through the night.

WAKING BETWEEN 3:00AM AND 5:00AM
Waking up between 3am and 5am is associated with the energy meridian that runs through the lungs and with the emotion of sadness. To help yourself get back to sleep, try some slow, deep breathing and express faith in your Higher Power to help you. If you awaken between 3:00am and 5:00am, it could also be a sign of your Higher Power alerting you to pay attention to messages that are being sent to align you with your higher purpose.

WAKING BETWEEN 5:00AM AND 7:00AM
The energy flow is in the large intestines during this time of the morning. Emotional blockages are also associated with this time of the early morning.
Try stretching your muscles or using the restroom to help yourself get back to sleep.

Why Create a 'Sacred Space'?

Your soul craves a sanctuary from the noise, bustle and over-stimulation of the outside world. Whether it is a private oasis in a corner of your house, a section of your bookshelf, a bedside table, a garden, or a whole room dedicated to it, your sacred space is your own personal, intimate dominion where you can re-connect and re-gather all of those deeply buried, fragmented inner parts of yourself; a warm, welcoming refuge where you can sit in stillness and physically express the inner workings of your mind, heart and spirit while creatively communing with the Divine. It does not matter what religion or spiritual beliefs you have, or whether you have any at all; a sacred space is something personal and meaningful to you. It can be a visual reminder of your soul's thirst for 'me time', or a place to partake in spiritual thinking, work or practice. It is a platform for focus, where you can pray, meditate, dream, admire beauty and sacredness, or just 'be.' It is a living, breathing, organic expression symbolizing your spirituality, your life cycles and your personal journey. It can act as a statement to Spirit, saying: "Here I am. I acknowledge your presence and my connection with you, and I offer you this space in order to connect with you and my Higher Self in oneness and harmony, seeking refuge, truth and wisdom." The mere act of creating a sacred space aligns us with God / Source / life force energy, for it is a creative process much like Creation itself. It is the genesis of creating something new and meaningful for yourself, one that weaves a colorful spiritual thread of intention and purpose through your life: a simple, sacred way to invite spiritual energies into your home and bring you closer to yourself and to God on a daily level, as you evolve your personal spirituality, rebirth your authentic callings and amplify your manifestation power.
A spirit animal is a reflection of you, and is there to remind you of your inherent wisdom. Spirit animals represent archetypal energies, typical traits personified by a specific creature. Acting as our allies, teachers, guides and protectors, if a spirit animal is showing up in your life, it has a message for you and wants to work with you. Think: what are this animal's strengths, and how does it act? This is the message for you. To work with your spirit animal is to step into the power you need most in any moment. This can help you feel more grounded, give you the confidence for a job interview, make you feel more alluring for a hot date, or give you the strength to ask for help when you need it. Above all, it can help you to feel more confident, proactive and supported.

:: WHO IS YOUR SPIRIT ANIMAL? ::

Spirit animals have an energy you will resonate with and feel drawn to, sometimes inexplicably. Maybe you already have a strong affinity with a particular spirit animal, but if you don't feel you have the connection yet, here are some tips to help you activate it.

A Vision Quest

In the morning, take some time to meditate and invoke your spirit animal to show itself. Create a presence of awareness throughout the day and look out for signs that could represent your animal. Your spirit animal is likely to appear to you in a series of synchronicities, where you may literally see it everywhere: on street art, the internet, magazines, books, posters, gifts you receive from friends, in dreams, or even in a chance encounter with the animal itself. When your animal wants to be seen, it'll make sure you notice it, so keep your eyes wide open. Journal your encounters, as these are all insights into what is being revealed to you. You may come into contact with more than one animal, but the animal you see most often carries the most prominent message. Think of the additional animals as the supporting acts.
You can connect with your spirit animal through meditation, while a guided shamanic journey will lead you to your animal and hold space for you to communicate more deeply with it. There are guided meditations available online (search: "spirit animal guided meditation") and workshops that hold space for you to meet your spirit animal; check event listings for sessions in your area. Before you go to sleep, ask your spirit animal to reveal itself to you in your dream. Your intention could be: "Spirit animal, who serves my highest and best good, please come to me in my dreams tonight. I am open to your wisdom. Thank you for your protection and guidance. You will be remembered in the morning." Repeat this invocation as you fall asleep. Make sure that you have a pen and paper by your bed, and write down what you remember as soon as you wake up, while the memory of the dream is still fresh. One of the quickest ways to connect with your spirit animal is to use an animal oracle deck (we love the Wild Unknown Animal Spirit deck). Connect with the deck and draw a card (or a few cards) to represent what serves you. You can take guidance from the reference book that accompanies the deck, but I would also recommend journaling about how the card makes you feel; use your intuition to find the meaning. Meditate on the card and see what messages come through to you from the unseen realm. If you draw an animal that makes you feel uncomfortable, brings up a phobia, or stirs a feeling of dislike, this animal can represent your shadow side. It could symbolize something in the animal's character that you're avoiding in your life. Take this as an opportunity to tune in to what may need healing in your life. Remember, if this animal is showing up for you, it is there to be seen, and it's because you are ready. Even if you don't think you are…
In this classic text, Kant sets out to articulate and defend the Categorical Imperative - the fundamental principle that underlies moral reasoning - and to lay the foundation for a comprehensive account of justice and human virtues. This new edition and translation of Kant's work is designed especially for students. An extensive and comprehensive introduction explains the central concepts of the Groundwork and looks at Kant's main lines of argument. Detailed notes aim to clarify Kant's thoughts and to correct some common misunderstandings of his doctrines.

…if there is anywhere a correct use of pure reason, in which case there must also be a canon of it, this canon will concern not the speculative but the practical use of reason, which we shall therefore now…

The Metaphysics of Morals is Kant's major work in applied moral philosophy, in which he deals with the basic principles of rights and of virtues. It comprises two parts: the 'Doctrine of Right', which deals with the rights which people have or can acquire, and the 'Doctrine of Virtue', which deals with the virtues they ought to acquire. Mary Gregor's translation, revised for publication in the Cambridge Texts in the History of Philosophy series, is the only complete translation of the whole text, and includes extensive annotation on Kant's difficult and sometimes unfamiliar vocabulary. A new introduction by Roger Sullivan sets the work in its historical and philosophical context. This volume will be of wide interest to students of ethics and of legal and political philosophy.

With this volume, Werner Pluhar completes his work on Kant's three Critiques, an accomplishment unique among English-language translators of Kant. At once accurate, fluent, and accessible, Pluhar's rendition of the Critique of Practical Reason meets the standards set in his widely respected translations of the Critique of Judgment and the Critique of Pure Reason.
Stephen Engstrom's Introduction discusses the place of the second Critique in Kant's critical philosophy, its relation to Kant's ethics, and its practical purpose, and provides an illuminating outline of Kant's argument.

One of the cornerstone books of Western philosophy, the Critique of Pure Reason is Kant's seminal treatise, where he seeks to define the nature of reason itself and builds his own unique system of philosophical thought with an approach known as transcendental idealism. He argues that human knowledge is limited by the capacity for perception and attempts a logical designation of two varieties of knowledge: a posteriori, the knowledge acquired through experience; and a priori, knowledge not derived through experience. This accurate translation by J. M. D. Meiklejohn offers a simple and direct rendering of Kant's work that is suitable for readers at all levels.

This entirely new translation of the Critique of Pure Reason by Paul Guyer and Allen Wood is the most accurate and informative English translation ever produced of this epochal philosophical text. Though its simple, direct style will make it suitable for all new readers of Kant, the translation displays a philosophical and textual sophistication that will enlighten Kant scholars as well. This translation recreates as far as possible a text with the same interpretative nuances and richness as the original.

The Critique of the Power of Judgment (a more accurate rendition of what has hitherto been translated as the Critique of Judgment) is the third of Kant's great critiques, following the Critique of Pure Reason and the Critique of Practical Reason. This entirely new translation of Kant's masterpiece follows the principles and high standards of all other volumes in The Cambridge Edition of the Works of Immanuel Kant. This volume includes: for the first time, the indispensable first draft of Kant's
introduction to the work; the only English edition with notes on the many differences between the first (1790) and second (1793) editions of the work; and relevant passages in Kant's anthropology lectures where he elaborated on his aesthetic views. All in all, this new edition offers the serious student of Kant a dramatically richer, more complete and more accurate translation.

This is the first English translation of all of Kant's writings on moral and political philosophy collected in a single volume. No other collection competes with the comprehensiveness of this one. As well as Kant's most famous moral and political writings, the Groundwork of the Metaphysics of Morals, the Critique of Practical Reason, the Metaphysics of Morals, and Toward Perpetual Peace, the volume includes shorter essays and reviews, some of which have never been translated before. The volume has been furnished with a substantial editorial apparatus, including translator's introductions and explanatory notes to each text by Mary Gregor, and a general introduction to Kant's moral and political philosophy by Allen Wood. There is also an English-German and German-English glossary of key terms.

In the Groundwork of the Metaphysics of Morals, published in 1785, Kant formulates for the first time the principles of a universalist ethics of autonomy, whose influence remains unbroken to this day. The main ideas appear already in the transition from common to philosophical rational cognition: ethics is not primarily about the good life and happiness, nor, in the first instance, about which practical results are achieved; the objects of moral esteem are rather intentions and maxims. Good is what holds for all rational beings, because it is willed by them as autonomous and rational beings.
Anthropology from a Pragmatic Point of View essentially reflects the last lectures Kant gave for his annual course in anthropology, which he taught from 1772 until his retirement in 1796. The lectures were published in 1798, with the largest first printing of any of Kant's works. Intended for a broad audience, they reveal not only Kant's unique contribution to the newly emerging discipline of anthropology, but also his desire to offer students a practical view of the world and of humanity's place in it. With its focus on what the human being 'as a free-acting being makes of himself or can and should make of himself,' the Anthropology also offers readers an application of some central elements of Kant's philosophy. This volume offers a new annotated translation of the text by Robert B. Louden, together with an introduction by Manfred Kuehn that explores the context and themes of the lectures.

Immanuel Kant's Groundwork of the Metaphysics of Morals ranks alongside Plato's Republic and Aristotle's Nicomachean Ethics as one of the most profound and influential works in moral philosophy ever written. In Kant's own words, its aim is to search for and establish the supreme principle of morality, the categorical imperative. Kant argues that every human being is an end in himself or herself, never to be used as a means by others, and that moral obligation is an expression of the human capacity for autonomy or self-government. This edition presents the acclaimed translation of the text by Mary Gregor, together with an introduction by Christine M. Korsgaard that examines and explains Kant's argument.

In the Critique of Judgement, Kant offers a penetrating analysis of our experience of the beautiful and the sublime.
He discusses the objectivity of taste, aesthetic disinterestedness, the relation of art and nature, the role of imagination, genius and originality, the limits of representation, and the connection between morality and the aesthetic. He also investigates the validity of our judgements concerning the degree to which nature has a purpose, with respect to the highest interests of reason and enlightenment. The work profoundly influenced the artists, writers, and philosophers of the classical and romantic period, including Hegel, Schelling, Schopenhauer, and Nietzsche. In addition, it has remained a landmark work in fields such as phenomenology, hermeneutics, the Frankfurt School, analytical aesthetics, and contemporary critical theory. Today it remains an essential work of philosophy, and required reading for all with an interest in aesthetics.

This new translation is an extremely welcome addition to the continuing Cambridge Edition of Kant's works. English-speaking readers of the third Critique have long been hampered by the lack of an adequate translation of this important and difficult work. James Creed Meredith's much-reprinted translation has charm and elegance, but it is often too loose to be useful for scholarly purposes. Moreover, it does not include the first version of Kant's introduction, the so-called "First Introduction," which is now recognized as indispensable for an understanding of the work. Werner Pluhar's more recent translation, which does include the First Introduction, is highly accurate when it confines itself to rendering Kant's German. However, it is often more of a reconstruction than a translation, containing so many interpretative interpolations that it is often difficult to separate out Kant's original text from the translator's contributions.
Paul Guyer and Eric Matthews have provided a translation that compares to or exceeds Pluhar's in its literal approach to the German, but that confines all interpretative material to footnotes and endnotes, so that the text itself, with all its unclarities and ambiguities, lies open to view. In addition, Guyer, as editor of the volume, has provided a great deal of valuable supplementary material. This includes an introduction with an outline of the work and details of the history of its composition and publication, and a wealth of endnotes offering clarifications of the text, background information, and, most strikingly, many references to related passages in Kant's voluminous writings, particularly in connection with Kant's earlier writings related to aesthetics. The edition also records differences among the first three editions of the work, and, of particular interest, erasures from and additions to Kant's manuscript of the First Introduction. Although the introduction and endnotes reflect interpretative views that are sometimes disputable, this supplementary material makes the present edition into a valuable resource even for those able to read the text in German.

Kant's views on logic and logical theory play an important role in his critical writings, especially the Critique of Pure Reason. However, since he published only one short essay on the subject, we must turn to the texts derived from his logic lectures to understand his views. The present volume includes three previously untranslated transcripts of Kant's logic lectures: the Blumberg Logic from the 1770s; the Vienna Logic (supplemented by the recently discovered Hechsel Logic) from the early 1780s; and the Dohna-Wundlacken Logic from the early 1790s. Also included is a new translation of the Jäsche Logic, compiled at Kant's request and published in 1800, which also appears to stem in part from a transcript of his lectures.
Together these texts provide a rich source of evidence for Kant's evolving views on logic, on the relations between logic and other disciplines, and on a variety of topics (e.g. analysis and synthesis) central to Kant's mature philosophy. They also provide a portrait of Kant as lecturer, a role in which he was both popular and influential. This volume contains substantial editorial apparatus: a general introduction, linguistic and factual notes, glossaries of key terms (both German/English and English/German) and concordances relating Kant's lectures to Georg Friedrich Meier's Excerpts from the Doctrine of Reason, the book on which Kant lectured throughout his life and in which he left extensive notes.

Kant was centrally concerned with issues in the philosophy of natural science throughout his career. The Metaphysical Foundations of Natural Science presents his most mature reflections on these themes in the context of both his 'critical' philosophy, presented in the Critique of Pure Reason, and the natural science of his time. This volume presents a new translation, by Michael Friedman, which is especially clear and accurate. There are explanatory notes indicating some of the main connections between the argument of the Metaphysical Foundations and the first Critique, as well as parallel connections to Newton's Principia. The volume is completed by an historical and philosophical introduction and a guide to further reading.

Anthropology, History, and Education contains all of Kant's major writings on human nature. Some of these works, which were published over a thirty-nine year period between 1764 and 1803, have never before been translated into English. Kant's question 'What is the human being?'
is approached indirectly in his famous works on metaphysics, epistemology, moral and legal philosophy, aesthetics and the philosophy of religion, but it is approached directly in his extensive but less well-known writings on physical and cultural anthropology, the philosophy of history, and education, which are gathered in the present volume. Kant repeatedly claimed that the question 'What is the human being?' should be philosophy's most fundamental concern, and Anthropology, History, and Education can be seen as effectively presenting his philosophy as a whole in a popular guise.

Translation from German to English by Daniel Fidel Ferrer: What Does it Mean to Orient Oneself in Thinking? (German title: "Was heißt: sich im Denken orientieren?"). Published October 1786, Königsberg in Prussia, by Immanuel Kant (born 1724, died 1804). Translated into English by Daniel Fidel Ferrer (March 17, 2014, the day of Holi in India that year). From 1774 to about 1800, three intense philosophical and theological controversies were underway in Germany, namely the Fragments Controversy, the Pantheism Controversy, and the Atheism Controversy. Kant's essay translated here is his response to the Pantheism Controversy. During this period (1770-1800) there was also the Sturm und Drang (Storm and Stress) movement, with thinkers like Johann Hamann, Johann Herder, Friedrich Schiller, and Johann Goethe, who were against the cultural movement of the Enlightenment (Aufklärung). Kant was on the side of the Enlightenment (see his Answer to the Question: What is Enlightenment?, 1784).

The original edition of Kant: Political Writings was first published in 1970, and has long been established as the principal English-language edition of this important body of writing.
In this new, expanded edition two important texts illustrating Kant's view of history are included for the first time: his reviews of Herder's Ideas on the Philosophy of the History of Mankind and Conjectures on the Beginning of Human History, as well as the essay What is Orientation in Thinking?. In addition to a general introduction assessing Kant's political thought in terms of his fundamental principles of politics, this edition also contains such useful student aids as notes on the texts, a comprehensive bibliography, and a new postscript looking at some of the principal issues in Kantian scholarship that have arisen since the first edition.

This expanded edition of James Ellington's preeminent translation includes Ellington's new translation of Kant's essay On a Supposed Right to Lie Because of Philanthropic Concerns, in which Kant replies to one of the standard objections to his moral theory as presented in the main text: that it requires us to tell the truth even in the face of disastrous consequences.

This volume contains four versions of the lecture notes taken by Kant's students of his university courses in ethics, given regularly over a period of some thirty years. The notes are very complete and expound not only Kant's views on ethics but many of his opinions on life and human nature. Much of this material has never before been translated into English. As with other volumes in the series, there are copious linguistic and explanatory notes and a glossary of key terms.

Kant's only aesthetic work apart from the Critique of Judgment, Observations on the Feeling of the Beautiful and Sublime gives the reader a sense of the personality and character of its author as he sifts through the range of human responses to the concept of beauty and human manifestations of the beautiful and sublime. Kant was fifty-eight when the first of his great Critical trilogy, the Critique of Pure Reason, was published.
Observations offers a view into the mind of the forty-year-old Kant.

This volume collects for the first time in a single volume all of Kant's writings on religion and rational theology. These works were written during a period of conflict between Kant and the Prussian authorities over his religious teachings. His final statement of religion was made after the death of King Frederick William II in 1797. The historical context and progression of this conflict are charted in the general introduction to the volume and in the translators' introductions to particular texts. All the translations are new with the exception of The Conflict of the Faculties, where the translation has been revised and re-edited to conform to the guidelines of the Cambridge Edition. As is standard with all the volumes in this edition, there are copious linguistic and explanatory notes, and a glossary of key terms.

'Beauty has purport and significance only for human beings, for beings at once animal and rational.' In the Critique of Judgement Kant offers a penetrating analysis of our experience of the beautiful and the sublime, discussing the objectivity of taste, aesthetic disinterestedness, the relation of art and nature, the role of imagination, genius and originality, the limits of representation, and the connection between morality and the aesthetic. He also investigates the validity of our judgements concerning the apparent purposiveness of nature with respect to the highest interests of reason and enlightenment. The work profoundly influenced the artists and writers of the classical and romantic period and the philosophy of Hegel and Schelling. It has remained a central point of reference from Schopenhauer and Nietzsche through to phenomenology, hermeneutics, the Frankfurt School, analytical aesthetics and contemporary critical theory. J. C.
Meredith's classic translation has been revised in accordance with standard modern renderings and provided with a bilingual glossary. This edition also includes the important 'First Introduction' that Kant originally composed for the work. ABOUT THE SERIES: For over 100 years Oxford World's Classics has made available the widest range of literature from around the globe. Each affordable volume reflects Oxford's commitment to scholarship, providing the most accurate text plus a wealth of other valuable features, including expert introductions by leading authorities, helpful notes to clarify the text, up-to-date bibliographies for further study, and much more.

Kant's Metaphysical Foundations of Natural Science (Metaphysische Anfangsgründe der Naturwissenschaft) of 1786 stands, by its own claim, between a transcendental critique of reason (Kant was at the same time preparing the substantially reworked second edition of the Critique of Pure Reason) and physics as an empirical science. The need for reflection on natural science has today restored systematic relevance to this work, after it was long considered only from the standpoint of its significance for empirical natural science and consequently received, at best, out of an interest in the history of science.

Whether this satirical inscription on a Dutch innkeeper's sign upon which a burial ground was painted had for its object mankind in general, or the rulers of states in particular, who are insatiable of war, or merely the philosophers who dream this sweet dream, it is not for us to decide. But one condition the author of this essay wishes to lay down. The practical politician assumes the attitude of looking down with great self-satisfaction on the political theorist as a
pedant whose empty ideas in no way threaten the security of the state, inasmuch as the state must proceed on empirical principles; so the theorist is allowed to play his game without interference from the worldly-wise statesman. Such being his attitude, the practical politician (and this is the condition I make) should at least act consistently in the case of a conflict and not suspect some danger to the state in the political theorist's opinions, which are ventured and publicly expressed without any ulterior purpose. By this clausula salvatoria the author desires formally and emphatically to deprecate herewith any malevolent interpretation which might be placed on his words.

This is the first volume of the first ever comprehensive edition of the works of Immanuel Kant in English translation. The eleven essays in this volume constitute Kant's theoretical, pre-critical philosophical writings from 1755 to 1770. Several of these pieces have never been translated into English before; others have long been unavailable in English. We can trace in these works the development of Kant's thought to the eventual emergence in 1770 of the two chief tenets of his mature philosophy: the subjectivity of space and time, and the phenomena-noumena distinction. The volume has been furnished with substantial editorial apparatus, including a general introduction to the main themes of Kant's early thought, introductions to the individual works and résumés of their contents, linguistic and factual notes, bibliographies, a glossary of key terms, and biographical-bibliographical sketches of persons mentioned by Kant.

It is in the interest of the totalitarian state that subjects not think for themselves, much less confer about their thinking. Writing under the hostile watch of the Prussian censorship, Immanuel Kant dared to argue the need for open argument, in the university if nowhere else.
In this heroic criticism of repression, first published in 1798, he anticipated the crises that endanger the free expression of ideas in the name of national policy. Composed of three sections written at different times, The Conflict of the Faculties dwells on the eternal combat between the "lower" faculty of philosophy, which is answerable only to individual reason, and the faculties of theology, law, and medicine, which get "higher" precedence in the world of affairs and whose teachings and practices are of interest to the government. Kant makes clear, for example, the close alliance between the theological faculty and the government that sanctions its teachings and can resort to force and censorship. All the more vital and precious, then, the faculty of philosophy, which encourages independent thought before action. The first section, "The Conflict of the Philosophy Faculty with the Theology Faculty," is essentially a vindication of the right of the philosophical faculty to freedom of expression. In the other sections the philosopher takes a long and penetrating look at medicine and law, the one preserving the physical "temple" and the other regulating its actions.

The second, corrected edition of the first and only complete English translation of Kant's highly influential introduction to philosophy, presenting both the terminological and structural basis for his philosophical system, and offering an invaluable key to his main works, particularly the three Critiques. Extensive editorial apparatus.

Contents: Foundations of the metaphysics of morals; Critique of practical reason; An inquiry into the distinctness of the principles of natural theology and morals; What is enlightenment?; What is orientation in thinking?; Perpetual peace: a philosophical sketch; On a supposed right to lie from altruistic motives; selections from The metaphysics of morals.
The purpose of the Cambridge Edition is to offer translations of the best modern German edition of Kant's work in a uniform format suitable for Kant scholars. When complete (fourteen volumes are currently envisaged) the edition will include all of Kant's published writings and a generous selection from the unpublished writings, such as the Opus postumum, the handschriftliche Nachlass, lectures, and correspondence. This volume contains the first translation into English of notes from Kant's lectures on metaphysics. These lectures, dating from the 1760s to the 1790s, touch on all the major topics and phases of Kant's philosophy. Most of these notes have appeared only recently in the German Academy Edition; this translation offers many corrections of that edition. As is standard with the volumes in the Cambridge Edition, there is an extensive editorial apparatus, including extensive linguistic and explanatory notes, a detailed subject index, and glossaries of key terms.

One summary of the great Kant's view, to the extent that it can be summed up, is that he takes determinism to be a kind of fact, and indeterminism to be another kind of fact, and our freedom to be a fact too, but takes this situation to have nothing to do with the kind of compatibility of determinism and freedom proclaimed by such Compatibilists as Hobbes and Hume. Thus Kant does not make freedom consistent with determinism by taking up a definition of freedom as voluntariness, at bottom, being able to do what you want. This he dismisses as a wretched subterfuge, quibbling about words. Rather, the freedom he seeks to make consistent with determinism does indeed seem to be the freedom of the Incompatibilists: origination. Is he then an Incompatibilist? Well, against that, it can be said he does not allow the existence of origination in what can be called the world we know, as Incompatibilists certainly do.
Commercial sorghum refers to the cultivation and commercial exploitation of species of grasses within the genus Sorghum (often S. bicolor). These plants are used for grain, fibre and fodder, and are cultivated in warmer climates worldwide. Commercial sorghum species are native to tropical and subtropical regions of Africa and Asia, with one species native to Mexico. Other names include durra, Egyptian millet, feterita, Guinea corn, jwari ज्वारी (Marathi), jowar, juwar, milo, maize, shallu, Sudan grass, cholam (Tamil), jola (Kannada), jonnalu (Telugu), gaoliang (zh:高粱), great millet, kafir corn, dura, dari, mtama, and solam.

The last wild relatives of commercial sorghum are currently confined to Africa south of the Sahara (although Zohary and Hopf add "perhaps" Yemen and Sudan), indicating its domestication took place there. However, note Zohary and Hopf, "the archaeological exploration of sub-Saharan Africa is yet in its early stages, and we still lack critical information for determining where and when sorghum could have been taken into cultivation." Although rich finds of S. bicolor have been recovered from Qasr Ibrim in Egyptian Nubia, the wild examples have been dated to circa 800–600 BCE, and the domesticated ones no earlier than CE 100. The earliest archaeological evidence comes from sites dated to the second millennium BC in India and Pakistan, where S. bicolor is not native. These incongruous finds have been interpreted, according again to Zohary and Hopf, as indicating (i) an even earlier domestication in Africa, and (ii) an early migration of domestic sorghum from East Africa into the Indian subcontinent. This interpretation gains further support from the fact that several other African grain crops, namely pearl millet Pennisetum glaucum (L.) R. Br., cow pea Vigna unguiculata (L.) Walp., and hyacinth bean Lablab purpureus (L.) Sweet, show similar patterns: their wild progenitors are restricted to Africa.
Most cultivated varieties of sorghum can be traced back to Africa, where they grow on savanna lands. During the Muslim Agricultural Revolution, sorghum was planted extensively in parts of the Middle East, North Africa and Europe. The name "sorghum" comes from Italian "sorgo", in turn from Latin "Syricum (granum)", meaning "grain of Syria". Despite the antiquity of sorghum, it arrived late to the Near East; it was unknown in the Mediterranean area into Roman times. Tenth-century records indicate it was widely grown in Iraq, and it became the principal food of Kirman in Persia. In addition to the eastern parts of the Muslim world, the crop was also grown in Egypt and later in Islamic Spain. From Islamic Spain it was introduced to Christian Spain and then France (by the 12th century). In the Muslim world, sorghum was usually grown in areas where the soil was poor or the weather too hot and dry to grow other crops. Sorghum is well adapted to growth in hot, arid or semiarid areas. The many subspecies are divided into four groups: grain sorghums (such as milo), grass sorghums (for pasture and hay), sweet sorghums (formerly called "Guinea corn", used to produce sorghum syrups), and broom corn (for brooms and brushes). The name "sweet sorghum" is used to identify varieties of S. bicolor that are sweet and juicy.

Cultivation and uses

Sorghum is used for food, fodder, and the production of alcoholic beverages. It is drought tolerant and heat tolerant, and is especially important in arid regions. It is an important food crop in Africa, Central America, and South Asia, and is the "fifth most important cereal crop grown in the world". African slaves introduced sorghum into the U.S. in the early 17th century.
Top sorghum producers, 2008 (source: UN Food & Agriculture Organisation (FAO)):
- United States: 12.0 Mt
- Nigeria: 9.3 Mt
- India: 7.9 Mt
- Mexico: 6.6 Mt
- Sudan: 3.9 Mt
- Australia: 3.1 Mt
- Argentina: 2.9 Mt
- China: 2.5 Mt
- Ethiopia: 2.3 Mt
- Brazil: 2.0 Mt
- World total: 65.5 Mt

Use as fodder

The FAO reports that 440,000 square kilometres were devoted worldwide to sorghum production in 2004. In the US, sorghum grain is used primarily as a maize (corn) substitute for livestock feed, because their nutritional values are very similar. Some hybrids commonly grown for feed have been developed to deter birds, and therefore contain a high concentration of tannins and phenolic compounds, which means the grain requires additional processing before it can be digested by cattle.

Bhakri (jolada rotti in northern Karnataka), a variety of unleavened bread usually made from sorghum, is the staple diet in many parts of India, such as Maharashtra state and northern Karnataka state. In eastern Karnataka and the Rayalaseema area of Andhra Pradesh, roti (jonna rotte) made with sorghum is the staple food. In South Africa, sorghum meal is often eaten as a stiff porridge much like pap. It is called mabele in Northern Sotho and "brown porridge" in English. The porridge can be served with maswi (soured milk) or merogo (a mixture of boiled greens, much like collard greens or spinach). In the cuisine of the Southern United States, sorghum syrup is used as a sweet condiment, usually for biscuits, corn bread, pancakes, hot cereals or baked beans. It served as a substitute for maple syrup, which was unavailable in the South, although it is uncommon today. In Arab cuisine, the unmilled grain is often cooked to make couscous, porridges, soups, and cakes. Many poor people use it, along with other flours or starches, to make bread. The seeds and stalks are fed to cattle and poultry. Some varieties have been used for thatch, fencing, baskets, brushes and brooms, and stalks have been used as fuel.
Medieval Islamic texts list medical uses for the plant. Sorghum seeds can be popped in the same manner as popcorn (i.e., with oil or hot air), although the popped kernels are smaller than popcorn. Since 2000, sorghum has come into increasing use in homemade and commercial breads and cereals made specifically for the gluten-free diet.

In southern Africa, sorghum is used to produce beer, including the local version of Guinness. In recent years, sorghum has been used as a substitute for other grain in gluten-free beer. Although the African versions are not "gluten-free", as malt extract is also used, truly gluten-free beers using such substitutes as sorghum or buckwheat are now available. Sorghum is used in the same way as barley to produce a "malt" that can form the basis of a mash that will brew a beer without gliadin or hordein (together, "gluten") and therefore can be suitable for coeliacs or others sensitive to certain glycoproteins. In November 2006, Lakefront Brewery of Milwaukee, Wisconsin, launched its "New Grist" gluten-free beer, brewed with sorghum and rice. It is one of its most successful lines, aimed at those with celiac disease, although its low-carb content also makes it popular with health-minded drinkers. On December 20, 2006, Anheuser-Busch of St. Louis, Missouri, announced the release of its new "Redbridge" beer, a gluten-free product with sorghum as the main ingredient. Redbridge is the first sorghum-based beer to be nationally distributed in the United States.

African sorghum beer is a brownish-pink beverage with a fruity, sour taste. Its alcohol content can vary between 1% and 8%. African sorghum beer is high in protein, which contributes to foam stability, giving it a milk-like head. Because this beer is not filtered, its appearance is cloudy and yeasty, and it may also contain bits of grain.
This beer is said to be very thirst-quenching, even though it is traditionally consumed at room temperature. African sorghum beer is a popular drink primarily amongst the black community for historical reasons: it is said to be a traditional drink of the Zulu people of Southern Africa, and it also became popular amongst the black community in South Africa because sorghum beer was the only exception to the prohibition on alcohol that applied only to black people and was lifted in 1962. Sorghum beer is called bjala in Northern Sotho and is traditionally made to mark the unveiling of a loved one's tombstone. The task of making the beer falls traditionally to women. The process is begun several days before the party, when the women of the community gather together to bring the sorghum and water to a boil in huge cast-iron pots over open fires. After the mix has fermented for several days, it is strained, a somewhat labor-intensive task. Sorghum beer is known by many different names in various countries across Africa, including burukuto (Nigeria), pombe (East Africa) and bil-bil (Cameroon).

African sorghum beer brewed using grain sorghum undergoes lactic acid fermentation as well as alcoholic fermentation. The souring by lactic acid fermentation is responsible for the distinct sour taste. Souring may be initiated using yogurt, sourdough starter cultures, or spontaneous fermentation. The natural microflora of the sorghum grain may also be the source of lactic acid bacteria; a handful of raw grain sorghum or malted sorghum may be mixed in with the wort to start the lactic acid fermentation. Although many lactic acid bacteria strains may be present, Lactobacillus spp. are responsible for the lactic acid fermentation in African sorghum beer. Commercial African sorghum beer is packaged in a microbiologically active state; the lactic acid fermentation and/or alcoholic fermentation may still be active.
For this reason, special plastic or carton containers with vents are used to allow gas to escape. Spoilage is a major safety concern for African sorghum beer: packaging does not occur in sterile conditions, and many microorganisms may contaminate the beer. Using wild lactic acid bacteria also increases the chance that spoilage organisms will be present. However, the microbiologically active character of the beer also increases the safety of the product by creating competition between organisms. Although aflatoxins from mould have been found on sorghum grain, they have not been found in industrially produced African sorghum beer.

Sorghum straw (stem fibres) can also be made into excellent wallboard for house building, as well as biodegradable packaging. It does not accumulate static electricity, so it is also being used in packaging materials for sensitive electronic equipment.

Little research has been done to improve sorghum cultivars, because the vast majority of sorghum production is done by subsistence farmers. The crop is therefore mostly limited by insects, disease and weeds, rather than by the plant's inherent potential. To improve the plant's viability in sustaining populations in drought-prone areas, a larger capital investment would be necessary to control plant pests and ensure optimum planting and harvesting practices. In November 2005, however, the US Congress passed a Renewable Fuels Standard as part of the Energy Policy Act of 2005, with the goal of producing 30 billion litres (8 billion gallons) of renewable fuel (ethanol) annually by 2012. Currently, 12% of grain sorghum production in the US is used to make ethanol. An AP article claims that sorghum-sap-based ethanol has four times the energy yield of corn-based ethanol, but is on par with sugarcane.

Growing grain sorghum

Sorghum requires an average temperature of at least 25°C to produce maximum grain yields in a given year.
Maximum photosynthesis is achieved at daytime temperatures of at least 30°C. Night-time temperatures below 13°C for more than a few days can severely reduce the plants' potential grain production. Sorghum cannot be planted until soil temperatures have reached 17°C. The long growing season, usually 90–120 days, causes yields to be severely decreased if plants are not in the ground early enough.

Grain sorghum is usually planted with a commercial corn seeder at a depth of 2–5 cm, depending on the density of the soil (shallower in heavier soil). The goal in planting, when working with fertile soil, is 50,000 to 300,000 plants per hectare. Therefore, with an average emergence rate of 75%, sorghum should be planted at a rate of 2–12 kg of seed per hectare. Yields have been found to be boosted by 10–15%, when optimum use of moisture and sunlight is available, by planting in 25 cm rows instead of the conventional 1-metre rows. Sorghum, in general, is a very competitive crop and does well in competition with weeds in narrow rows.

Sorghum produces a chemical compound called sorgoleone, which the plant uses to combat weeds. The chemical is so effective in preventing the growth of weeds that it sometimes inhibits the growth of other crops harvested on the same field. To address this problem, researchers at the Agricultural Research Service found two gene sequences believed to be responsible for the enzymes that secrete sorgoleone. The discovery of these gene sequences may one day help researchers develop sorghum varieties that cause less soil toxicity, and potentially target gene sequences in other crops to increase their natural pesticide capabilities as well.

Insects and diseases are not prevalent in sorghum crops. Birds, however, are a major source of yield loss. Hybrids with higher tannin content and growing the crop in large field blocks are solutions used to combat the birds.
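The seeding arithmetic above (a target stand of 50,000 to 300,000 plants per hectare at roughly 75% emergence) can be sketched as a quick back-of-the-envelope calculation. The thousand-seed weight used below is an assumed, illustrative figure, since the passage states only the resulting 2-12 kg/ha range:

```python
# Back-of-the-envelope seeding-rate calculation from the figures quoted above.
# The thousand-seed weight (~28 g) is an assumed value for illustration only;
# the text gives just the target stand, emergence rate, and kg/ha range.

def seed_rate_kg_per_ha(target_plants_per_ha: float,
                        emergence_rate: float,
                        thousand_seed_weight_g: float) -> float:
    """Seed must be over-planted to offset emergence losses."""
    seeds_needed = target_plants_per_ha / emergence_rate
    grams_of_seed = seeds_needed * thousand_seed_weight_g / 1000.0
    return grams_of_seed / 1000.0  # convert g -> kg

low = seed_rate_kg_per_ha(50_000, 0.75, 28)
high = seed_rate_kg_per_ha(300_000, 0.75, 28)
print(f"{low:.1f}-{high:.1f} kg seed per hectare")
```

With these assumed inputs the sketch lands at about 1.9-11.2 kg/ha, consistent with the 2-12 kg/ha range quoted in the text.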
The crop may also be attacked by corn earworms, aphids, and some Lepidoptera larvae, including turnip moths. Sorghum is a very high nitrogen-feeding crop: an average hectare producing 6.3 tonnes of grain yield requires 110 kg of nitrogen, but relatively small amounts of phosphorus and potassium (15 kg of each).

Sorghum's growth habit is similar to that of maize, but with more side shoots and a more extensively branched root system. The root system is very fibrous and can extend to a depth of up to 1.2 m. The plant finds 75% of its water in the top metre of soil and, because of this, in dry areas the plant's production can be severely affected by the water-holding capacity of the soil. The plants require up to 70–100 mm of moisture every 10 days in early stages of growth; as sorghum progresses through its growth stages and the roots penetrate more deeply into the soil to tap into hidden water reserves, the plant needs progressively less water. By the time the seed heads are filling, optimum water conditions are down to about 50 mm every 10 days. Compacted soil or shallow topsoil can limit the plant's ability to deal with drought by limiting its root system. Since these plants have evolved to grow in hot, dry areas, it is essential to keep the soil from compacting and to grow them on land with ample cultivated topsoil.

Wild species of sorghum tend to grow to a height of 1.5–2 m; however, because of the problems this height created when the grain was being harvested, cultivars with genes for dwarfism have been selected in recent years, resulting in sorghum that grows to between 60 and 120 cm tall.

Sorghum's yields are not affected by short periods of drought as severely as those of other crops such as maize, because it develops its seed heads over longer periods of time, and short periods of water stress do not usually have the ability to prevent kernel development.
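The nutrient figures quoted above (110 kg of nitrogen and 15 kg each of phosphorus and potassium for a 6.3 t/ha grain yield) can be scaled to other yield targets. Linear scaling with yield is a simplifying assumption made here for illustration, not a claim from the text:

```python
# Scale the per-hectare nutrient requirements quoted above to a target yield.
# Reference point from the text: 6.3 t/ha grain ~ 110 kg N, 15 kg P, 15 kg K.
# Assumes requirements scale linearly with yield (a simplification).

REFERENCE_YIELD_T = 6.3
REFERENCE_NUTRIENTS_KG = {"N": 110.0, "P": 15.0, "K": 15.0}

def nutrient_needs_kg(target_yield_t: float) -> dict:
    """Estimated kg/ha of N, P and K for a given grain-yield target (t/ha)."""
    scale = target_yield_t / REFERENCE_YIELD_T
    return {name: round(kg * scale, 1) for name, kg in REFERENCE_NUTRIENTS_KG.items()}

print(nutrient_needs_kg(4.2))  # a target of two-thirds the reference yield
```

At the reference yield the sketch returns the text's own figures exactly, which is a quick sanity check on the scaling.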
Even in a long drought severe enough to hamper sorghum production, it will still usually produce some seed on smaller and fewer seed heads. Rarely will one find a kernel-less season for sorghum, even under the most adverse water conditions. Sorghum's ability to thrive with less water than maize may be due to its ability to hold water in its foliage better than maize: sorghum has a waxy coating on its leaves and stems which helps to keep water in the plant, even in intense heat.

References:
- ^ a b Daniel Zohary and Maria Hopf, Domestication of Plants in the Old World, third edition (Oxford: Oxford University Press, 2000), p. 89.
- ^ a b Watson, pp. 12–14.
- ^ "FAOSTAT". Food and Agriculture Organization of the United Nations. http://faostat.fao.org/site/567/default.aspx#ancor. Retrieved 2010-05-21.
- ^ Watson, p. 9. In northern Karnataka in India, they make chappathis from jola.
- ^ PMID 7872831.
- ^ "Cultivarán el maicillo para producir miel: 8 de Agosto 2005 .::. El Diario de Hoy". Elsalvador.com. http://www.elsalvador.com/noticias/2005/08/08/negocios/neg5.asp. Retrieved 2011-10-17.
- ^ Carolyn Smagalski, www.glutenfreebeerfestival.com. 2006. http://www.glutenfreebeerfestival.com.
- ^ "JSOnline.com story on Lakefront Brewery". Milwaukee Journal-Sentinel, www.jsonline.com. 2006. http://www.jsonline.com/story/index.aspx?id=451408.
- ^ Van der Walt, H.P., 1956. Kaffircorn malting and brewing studies II: Studies on the microbiology of Kaffir beer. J. Sci. Food Agric. 7(2), 105–113.
- ^ Haggblade, S., Holzapfel, W.H., 1989. Industrialization of Africa's indigenous beer brewing. In: Steinkraus, K.H. (Ed.), Industrialization of Indigenous Fermented Foods, 33. Marcel Dekker, New York, pp. 191–283.
- ^ Trinder, D.W. 1998. A survey of aflatoxins in industrially brewed South African sorghum beer and beer strainings. J. Inst. Brew., vol. 95, no. 5, pp.
307–309.
- ^ Sweet Sorghum Sap[dead link]
- ^ "Tapping into Sorghum's Weed Fighting Capabilities to Give Growers More Options". USDA Agricultural Research Service. June 15, 2010. http://www.ars.usda.gov/is/pr/2010/100615.htm.

External links:
- FAO report (1995), "Sorghum and millets in human nutrition"
- FAO "Compendium on post-harvest operations" (contains discussion of the origin, processing and uses of sorghum)
- Alternative Field Crops
- National Grain Sorghum Producers
- National Sweet Sorghum Producers and Processors Association
- Sorghum Growth Stages
- Sequencing of the Sorghum Genome
- Sweet Sorghum Ethanol Association
- Examples of projects using sweet sorghum as an input feedstock for the production of renewable energy
One friendly fox says hello, two beavers wave, three skunks nibble berries. Explore numbers through the best the city has to offer in this gorgeous board book!

These free winter worksheets let toddler, pre-k, preschool, and kindergarten students have fun learning through December, January, and February. Wooden number coins are an ideal open-ended toy for toddlers and preschoolers, but listening to great preschool books enriches those experiences. Kids can choose from a range of picture exercises in which they complete math stories by counting the objects and characters in the story. Readers are invited to find hidden numbers on an illustrated activity page.
We are huge fans of learning through play, and all of our creative learning ideas on this blog are designed to be as fun, hands-on, and engaging as possible. Download these free hot cocoa counting cards for some simple winter math fun: place one sticker on one card, two stickers on the next card, three stickers on the next, and so on. Coin counting is a very practical skill kids learn early on in their educational career and use throughout life, so it's important to master these concepts early. My two- and four-year-old have heard a lot of counting books over the last few weeks! Grab these super cute count-to-10 winter worksheets for preschoolers to make learning fun. This list is a little heavy on counting books. Roar your way into a preschooler's heart with these dinosaur books for preschoolers.

Here are our favorite children's books about counting and numbers. Rote counting is simply being able to recite the numbers in order, out loud. Identifying basic shapes, basic colors, and basic letters also forms a core part of early-years learning. That's why Halloween number activities are one of the best ways to teach math to preschoolers! For example, read aloud stories like The Three Little Pigs and Snow White and the Seven Dwarfs.

One Fish, Two Fish, Red Fish, Blue Fish by Dr. Seuss.
Goodnight Moon 1 2 3 | From kittens to cows to bowls of mush, the familiar images from his father's illustrations in Goodnight Moon inspired Thacher Hurd to create Goodnight Moon 123. 10 Little Ninjas | A charming bedtime counting book about ten sneaky little characters who aren't ready to go to sleep… until daddy calls the sensei to send them back to bed. Two giraffes are going to drink … Children love bugs, and in Bugs by Numbers counting them can be fun. First, draw 3 or 4 simple corn templates on a big piece of construction paper. See more ideas about five little, preschool songs, and preschool activities. The perfect first math book for preschoolers. Bring out the Cheerios and have students count along with the book. The first academic milestone for kids is to start counting: knowing your 123s is one of the early pre-literacy skills a toddler starts to develop. Goodnight, Numbers | As children say goodnight to the objects all around them—three wheels on a tricycle, four legs on a cat—they will connect with the real numbers in their world while creating cuddly memories, night after night. Preschoolers typically learn their numbers 1 to 10. 10 Books About Counting for Toddlers. There are lots of great Christmas books for preschoolers that explore the symbols and traditions of the holiday season, as well as the joy and compassion shared around the world at this time of year. The Teddy Numbers game can help you to learn numbers to 15. Full disclosure: this post contains affiliate links. Counting Animals is a counting book with animals for pre-school, perfect for kindergarteners or pre-schoolers learning their numbers 1 – 10. | This was my oldest son's favorite book when he was learning to count. Over in the Meadow, by John Langstaff. Fill your book basket with a great collection of counting picture books.
Turtle Diary's preschool counting games are designed to engage and excite kids about learning how to count. This is a great place to start when doing number activities for preschoolers. I Can Count to Twenty! Could that be where eleven went? Patterns with bears. Here are 20 counting activities for preschoolers and school-aged kids to enjoy, learning maths through play in as fun a way as possible! Ten Black Dots | What can you do with ten black dots? For Toddler Games | Early Childhood Education. Trace with your finger, fill with sensory items, practice counting, or simply play… there are so many ways to encourage number recognition with this number board! Free Printable Goldfish Counting Cards. 1, 2, 3 to the Zoo | Joyously colored animals, riding on a train to the zoo, offer youngsters a first introduction to numbers, number sets, addition, and counting. Little Owl's 1-2-3 | Little Owl flies through the night forest, visiting his friends. Counting pictures up to 5 and drawing lines to the matching numbers. Preschool numbers and counting worksheets. Make it a point to count out the characters in the picture book. 10 Little Rubber Ducks | "Ducks overboard!" shouts the captain, as a giant wave washes a box of 10 little rubber ducks off his cargo ship and into the sea. The book is interactive, with flaps to lift and bugs that pop out. Preschool is an exciting time when children are introduced to the foundational concepts of math, including number recognition, counting, and number matching. They also integrate reading, games, and puzzles for additional skills practice.
The book is about a little girl named Molly who is just trying to sleep when her room is flooded with monsters. One Fish, Two Fish, Red Fish, Blue Fish by Dr. Seuss. These counting activities are perfect for your littlest kids who need some creative ways to learn all about numbers. Counting and Matching Worksheets for Toddlers and Pre-Kindergarten: download 12 free printable counting-up-to-5 worksheets for toddlers, pre-kindergarten kids, and preschoolers. Play simple board games that call on players to count spaces on the board, count objects used in the game, and recognize printed numerals or their representation (such as "dots on dice"). Math activities for preschool children. Activities for kids: reading a story, tracing names, counting pictures up to 10, and coloring pictures in the story book. Counting Pictures up to 20 and Writing Numbers Worksheets: free printable counting and writing-numbers worksheets for preschoolers, kindergarten kids, and grade 1 students. It is so cute to see those little lightbulbs go off when toddlers and preschoolers learn how to count! This counting snowballs winter math activity can be adapted for preschoolers. Authors: Clare Verbeek, Thembani Dladla, Zanele Buthelezi. Illustrator: Rob Owen. Sample text from Counting Animals (counting book with animals for pre-school): "One elephant is going to drink water." Some preschoolers are very interested in non-fiction books, including books about the stars, the ocean, inventions, food, and travels around the world. I love introducing little ones to good books. 5 Little Pumpkins Story Time for Preschoolers with Dabber Pumpkin Count, by Erin Buhr.
Download this premium vector about a matching game with trees, fruits, and baskets. Dinosaur books for preschoolers. This doesn't mean the child is aware of the quantity it represents or the way the number is written. Molly's Monsters by Teddy Slater is a counting book in monster's clothing. Maths worksheets for preschoolers are a fun way to introduce number values and number counting! ABC games. Counting. The best apple stories to read aloud for circle time. Hands-on counting activities and games. The Counting Story. In this one, kids learn simple counting through pictures. Chicka Chicka 1 2 3 | 1 told 2 and 2 told 3, "I'll race you to the top of the apple tree." One hundred and one numbers climb the apple tree in this bright, rollicking, joyous book for young children. Gemma is a middle-grade novel that follows a curious explorer and her ring-tailed lemur, Milo, as they hunt for the "most greatest treasure in the world". Stories span age ranges from preschool and young children to teens and young adults. My goal is to equip moms to educate their preschoolers at home. The Right Number of Elephants by Jeff Sheppard: a nice book to teach number sense and practice the notion of one more or one less, before and after, and counting backwards. Each set contains 21 coins – one for each number from 0-20. 15 math activities for preschoolers and toddlers that teach basic skills. With simple words and pictures, imagination is reflected through a colorful cast of characters. Free download: a picture story book for preschoolers, kindergarten kids, 1st graders, and other 5-7-year-old children. Is it in the magician's hat? When children are learning to count, be sure to include manipulatives for a hands-on experience.
Counting and numbers can be very effectively taught at an early age through counting fun and play activities. Learning games for kindergarten. 10 Books About Counting for Toddlers. One, two, three, four, five, six, seven, eight, nine, ten. 12 Ways to Get to 11 | 1 2 3 4 5 6 7 8 9 10 __ 12. What happened to 11? Counting and matching domino dice. This wooden number-tracing board is perfect for toddlers and preschoolers. The same comforting images find new expression in this counting companion to the classic bedtime book, now available in a board book edition. It is my goal for my preschoolers to feel completely confident with numbers and counting. Full disclosure: this post contains affiliate links. Teaching basic counting skills. Math is so fun to teach to preschoolers because there are a lot of daily activities that incorporate math. My First Counting Book | It's easy to learn to count with this classic Little Golden Book! Gemma. One drifts west, where a friendly dolphin jumps over it. Here is a fun Bible activity for preschoolers called "Counting with Jesus" – helping them learn their numbers while being introduced to some sweet Bible stories and themes. The holiday season is the perfect time to curl up with the family and enjoy some beautiful stories together. No early learning environment is complete without a wide variety of counting books for preschoolers, with attractive pictures, animations, and voice-overs for preschoolers and kindergarten kids. Counting candy may be the most joyful form of math for children. Our preschool counting and numbers worksheets offer countless opportunities to keep them engaged.
Tara loves to crochet and read in her downtime. When I talk about math with people, I can quickly see that it is either a subject they are comfortable with, or it's a foreign language to them. Dinosaur stories. Therefore, these activities and printables for preschool are full of opportunities to practice counting, reading, numbers, and more! They come in progressively larger groups, and my son liked counting to make sure the text was correct. We've picked some brilliant mathematical story books to help support your child's learning at home, with vibrant illustrations, memorable characters, and great narratives. This is a collection of the best and most popular learning activities for preschoolers from this website. Preschoolers typically learn their numbers 1 to 10. Counting pictures in a group and drawing a line to the matching group. As a longtime homeschool momma, she is passionate about equipping and encouraging mommas in their efforts to educate their littlest learners at home. Ocean I Spy Counting Printable for Preschoolers. Little ones will find all kinds of wonderful things to count as they learn alongside Curious George. We are huge fans of learning through play, and all of our creative learning ideas on this blog are designed to be as fun, hands-on, and engaging as possible for little hands and minds. One dot can make a sun, two dots can make the eyes of a fox, and three dots can make a snowman's face. Matching between two groups that have the same number of pictures. Start with the simple counting games and progress to numbers up to 100. Then, write down a specific number from 1-20 (or as many as your child knows) on the husks of the corn. Apple books for preschoolers. Happy New Year color-by-number worksheet; Christmas number-order puzzles. Preschoolers are like sponges, eagerly waiting to soak in new concepts and facts.
This will enable them to understand the concept that the numbers relate to specific amounts. Counting and sequencing numbers correctly is a big part of early maths. These worksheets feature colorful illustrations and reinforce number recognition, counting objects, sequencing, calendar skills, simple number operations, and more. The rhythmic text, paired with breathtaking animal illustrations by Garth Williams, has made counting from one to ten a joy for nearly 60 years. Free Counting Worksheets – Counting and Matching Pictures: download free printable easy counting-up-to-10 worksheets for toddlers, preschoolers, kindergarten kids, and other 3-5-year-old children.
How should you eat during pregnancy, which foods should be avoided, how much should you drink, how do you choose vitamins, and how do you monitor your weight? Let's look at the topic of pregnancy nutrition and answer all of these questions.

Pregnancy And Nutrition

A proper diet before and during pregnancy increases the chances of having a healthy baby and, moreover, reduces your child's risk of certain adverse health conditions in adulthood. Find out what principles a pregnant woman's diet should be based on, which nutrients are most essential for mother and child, what can and cannot be eaten while waiting for the baby, and what weight gain is considered normal. During pregnancy, it is better for the expectant mother to be happy, comfortable, and healthy, so that she can give birth to a happy, comfortable, and healthy baby, and her diet should contain the necessary amount of vitamins and nutrients to support this. By adhering to the general principles of healthy eating, you will provide yourself with the best diet for pregnancy. Consuming quality foods from all five major food groups is the key to health and energy. Let's list these groups.

- Protein products: meat, poultry, fish, seafood, eggs, legumes, nuts. They provide the body with iron, protein, B vitamins, zinc, and magnesium. 2 servings per day (one serving – 75 g or 125 ml).
- Vegetables and fruits (fresh, frozen, canned, or dried; greens and leafy salads). They supply the body with antioxidants, vitamins A and C, folic acid, dietary fiber, and potassium. 7 to 8 servings per day (one serving – 250 ml (a glass) of chopped vegetables or 125 ml (1/2 cup) of chopped fruit).
- Cereals (oatmeal, millet, corn, buckwheat, rice, bread – preferably whole-grain or bran – pasta, etc.) and potatoes. They are sources of carbohydrates (starch), dietary fiber, thiamin, and niacin. 6 to 7 servings per day (one serving – 1 piece of bread (35 g) or 125 ml (1/2 cup) of rice or pasta).
- Dairy products (milk, yogurt, cottage cheese, kefir, cheese, etc.). Sources of calcium, protein, and vitamins A, D, and B2 (riboflavin). Three servings per day (one serving – 250 ml of milk, 175 g of yogurt, or 75 g of cheese).
- Fats (vegetable oil and butter, fish oil, nuts). They supply the body with essential fatty acids and vitamins A, D, and E. 30 – 45 ml (2 – 3 tablespoons) per day.

And Here Is A List Of The Most Useful Foods For Nine Crucial Months

1. Green And Yellow Vegetables & Fruits

Green and yellow vegetables and fruits – and the queen among them is broccoli. The florets of this cabbage contain an impressive number of substances that are necessary during pregnancy: folic acid, vitamin C, magnesium, potassium, phosphorus, calcium, zinc, beta-carotene, selenium, and vitamins PP, K, and E. This low-calorie vegetable is also rich in fiber, which helps normalize digestion. In addition to broccoli, pregnant women should include more greens and spinach and other green and yellow vegetables in their diet – it is better to stew, steam, or bake them rather than fry them. Among fruits, green apples deserve attention; as a rule, they do not cause allergies.

2. Dairy Products

Dairy products such as yogurt and kefir promote good digestion and create a favorable microflora in the stomach and intestines. Expectant mothers should include various low-fat cheeses and cottage cheese in the diet, as these contain a lot of calcium and phosphorus. During pregnancy or breastfeeding, you need to pay special attention to choosing the right dairy products, because right now their optimal composition and "reliability" are especially important. An excellent solution is products in this category specially created for baby food. "Children's" dairy products, as a rule, contain prebiotics and probiotics that support healthy gut microflora and promote comfortable digestion, which is essential for the expectant mother.
3. Butter And Vegetable Oils

Butter, as well as vegetable oils, is also useful for expectant mothers. Butter contains the fat-soluble vitamins A, D, E, and K. Vitamin A has regenerative properties and is essential for vision and fetal growth. Vitamin D regulates cell-division processes, promotes the absorption of calcium and phosphorus by the body (which is especially necessary during pregnancy), and participates in the synthesis of some hormones. Vitamin K affects metabolism and blood clotting. However, due to the high cholesterol content of butter, its consumption should be limited to no more than 15 to 30 grams per day. Vegetable oils are rich in fatty acids and vitamins E, A, and P. Vitamin E is necessary during pregnancy and is prescribed when there is a risk of miscarriage. Pay special attention to unrefined cold-pressed oils: olive, grape seed, pumpkin, corn, and sunflower.

4. Lentils And Other Legumes

Lentils and other legumes are also an essential part of the expectant mother's diet. They contain a large amount of vegetable protein and beneficial micronutrients: iron, calcium, and zinc. And fiber – even more than in "ordinary" vegetables! A low-fat spicy lentil soup on chicken broth can be a great main course for the whole family; it is good with a spoonful of yogurt or sour cream added. However, legumes should be treated with some caution, as they can cause increased gas formation and flatulence, which is already a problem for expectant mothers. Therefore, include dishes of lentils, beans, and peas in the diet only after a "test drive" with a small portion.

5. Fish

Fish is a somewhat less "heavy" product than meat and is also better absorbed. Expectant mothers are recommended low-fat varieties of sea fish: cod, navaga, hake, icefish, dorada, and sea bass. They contain minerals, proteins, and omega-3 fatty acids, which are necessary for the healthy development of the baby and the correct course of pregnancy.
Such acids are plentiful only in marine varieties of fish; river fish should be treated with great care, because it can contain parasites. During pregnancy, raw fish is prohibited, and types such as king mackerel, swordfish, shark, and tuna should be consumed in limited quantities. Fish of these varieties may contain methylmercury, which poses a danger to the fetal nervous system if it accumulates in the mother's body. Therefore, nutritionists recommend eating such fish no more than once a week, with an approximate cooked steak weight of about 150 g.

6. Dietary Meats

Dietary meats – rabbit, turkey, veal – are useful during pregnancy, as they are rich in protein and low in fat. Rabbit meat is called the latest trend of modern cooking, and it is considered optimal for dietary nutrition. It contains many vitamins and minerals: B6, B12, PP, iron, phosphorus, manganese, potassium, etc. An excellent traditional recipe is rabbit stewed in sour cream with seasonal vegetables. Expectant mothers who like to eat will also probably like steamed veal cooked in a multi-cooker with prunes, or Moroccan-style turkey stewed with a mixture of spices and orange juice.

7. Eggs

Eggs contain folic acid, as well as selenium, choline, biotin, easily digestible proteins, amino acids, potassium, magnesium, phosphorus, and calcium, which are crucial for the proper development of the fetus. Eggs are rich in vitamins A, E, D, B12, and B3. But attention! Eggs should always be heat-treated before eating; they should never be eaten raw! Quail eggs are also suitable for dietary nutrition. The body temperature of quail is so high that such a dangerous disease as salmonella does not develop in them. The content of vitamins A, B1, and B2 in quail eggs is almost twice as high as in chicken eggs, and five quail eggs, which roughly correspond in weight to one chicken egg, contain nearly five times more iron, phosphorus, and potassium. You can eat no more than 2 chicken eggs and no more than 6 to 10 quail eggs per day.

8. Whole Grains

Whole grains and cereals, such as wild rice, coarse-flour bread, oatmeal, sprouted wheat, bran, and buckwheat, are essential for digestion, as they contain a lot of plant fiber and complex carbohydrates, as well as calcium, iron, magnesium, phosphorus, and B vitamins. Almost any of them can be cooked in the manner of a vegetarian pilaf: first sauté vegetables in olive oil, then add the washed cereal and cook until ready.

9. Water And Other Liquids

Pay special attention to the fluids that you consume during pregnancy: their quantity and quality are as crucial to your baby's health as nutrition. First of all, we are talking, of course, about drinking water. Water is necessary to maintain proper metabolism, assimilate trace elements, and remove toxins from the body. A sufficient amount of drinking fluid also helps to avoid a problem faced by almost every pregnant woman – constipation. More fluid is needed in the first trimester, especially if the expectant mother has toxicosis, which can also be caused by dehydration. Symptoms of the latter include severe dryness of the skin of the face, arms, legs, and even lips, constipation, irritability, and early onset of toxicosis. Both in the case of morning nausea and on normal days, it is necessary to maintain the water balance. The doctor will determine the required amount of fluid intake, taking into account the specifics of the course of your pregnancy. Once the baby has grown in the tummy, his body begins to excrete metabolic products, and the mother's organs work under greater stress. The vessels of a pregnant woman circulate more blood; its flow to the tissues increases, raising their saturation with water, which contributes to a faster metabolism and the excretion of metabolic products. Some puffiness is natural for all pregnant women in the later stages, as the body forms water reserves.
Because a large amount of blood is lost during childbirth, the body prudently prepares to replenish its fluid supply after the baby is born. To avoid excessive puffiness, in the second half of pregnancy you should eat more vegetables and fruits, drink yogurt and kefir, and try to reduce the use of salt, which provokes thirst. Freshly squeezed vegetable and fruit juices and smoothies (made at home) and cocktails based on fermented milk products (lassi) are very useful for expectant mothers. Before you start drinking any herbal tea, you should consult the doctor supervising your pregnancy. As for sweet drinks, packaged juices, and sparkling mineral water, keep their consumption to a minimum: the high doses of sugar in the first two, and the minerals in the third, will most likely be superfluous against the background of a balanced diet and vitamin supplements.

The Most Essential For A Healthy Pregnancy

There is no magic formula for an optimal healthy diet during pregnancy. In general, the principles of proper nutrition remain the same as in normal circumstances – eat more vegetables and fruits, whole grains, lean meat and fish, and healthy fats. Nevertheless, some nutrients in the diet of pregnant women deserve special attention from the expectant mother. Let's list them.

Folic Acid Prevents Congenital Disabilities

Folic acid is vitamin B9; its intake in the first months of pregnancy reduces the risk of defects of the neural tube, the organ from which the embryo's brain and spinal cord are formed. This element can be obtained from food, thanks to synthesis occurring in the intestines, as well as in synthetic form as a water-soluble vitamin or dietary supplement.

- How much is needed: 0.4 mg per day, starting 3 months before pregnancy and throughout the first trimester.
- The best natural sources: lentils, beef liver, cod liver, legumes, green leafy vegetables, and whole grains.
Calcium Strengthens Bone Tissue

You and your child need calcium for strong and healthy bones and teeth. This element is also necessary for the normal functioning of the muscular and nervous systems and the regulation of intracellular processes. Compared to normal conditions, the need for calcium in a woman expecting a child increases by almost 50%. Nature has arranged things so that if your body experiences calcium deficiency during pregnancy, it will take calcium from your bones, which can contribute to the development of osteoporosis in older age. Calcium absorption doubles in the second half of pregnancy, which means you do not need to increase your intake further. Please note that calcium absorption requires vitamin D and vitamin K2, which is found, for example, in Agusha curds.

- How much is needed: 1200 mg per day.
- The best natural sources: dairy products, cereals, legumes, citrus fruits, dark leafy vegetables and greens, nuts.

Vitamin D Helps Strengthen Bones

Vitamin D is primarily essential for the body's absorption of calcium and phosphorus. Together with calcium, it serves as an excellent prevention of rickets in newborns. Vitamin D is synthesized in the skin under the influence of ultraviolet radiation. You may need an additional intake of synthesized vitamin D if you live in a region with low insolation and do not consume enough eggs, dairy, and fish products.

- How much is needed: 10 to 15 micrograms (400 to 600 IU) per day.
- The best natural sources: seaweed and the oily varieties of fish that feed on these algae (such as salmon), fish oil, cod liver, butter, egg yolk.

Iron Prevents Anemia

The human body uses iron to produce hemoglobin, a protein in blood cells that delivers oxygen to organ tissues. Iron also makes you more resistant to stress and disease, preventing fatigue, weakness, irritability, and depression. During pregnancy, the woman's total blood volume increases.
Thus, the body “adjusts” to the new physiological situation, and the child's circulatory system is launched. As a result, the expectant mother's need for this mineral doubles. With iron deficiency, a pregnant woman may experience fatigue and be more prone to infections. The lack of this element is also dangerous for the fetus: the risk of preterm birth and low birth weight increases.

- How much is needed: 20 mg per day.
- The best natural sources: liver, lean red meat (especially beef), poultry, fish, whole grains, eggs, legumes, buckwheat, pomegranate, apples, beets, peaches, apricots.

Iodine Prevents Malformations

Iodine is necessary for the normal development of the fetus. Adequate consumption during pregnancy is important to prevent hypothyroidism in the mother and newborn. Iodine deficiency can harm the fetus from the 8th to the 10th week of pregnancy.

- How much is needed: 150 to 200 mcg per day.
- The best natural sources: iodized salt, products of sea origin.

Vitamin C Enhances The Body's Protective Functions

Vitamin C improves the absorption of iron from plant sources, such as buckwheat. It is one of the elements that cannot be synthesized or stored in the human body, which means you need to consume foods rich in this vitamin daily.

- How much is needed: 50 to 70 mg per day.
- The best natural sources: kiwi, orange, some vegetables (tomatoes, sweet bell pepper, cabbage), berries (especially rosehip), greens (primarily parsley, spinach). One orange or one green bell pepper per day is enough. It is important to remember that vitamin C in foods is destroyed by heating; take this into account when cooking.

Some Nutritional Features In Different Trimesters

When thinking through a pregnant woman's diet, it is important to remember that the food she consumes should ensure, on the one hand, the growth and development of the fetus.
On the other hand, it must meet the needs of the woman herself, taking into account all the changes that the body of the future mother is going through. The amount and ratio of nutrients and energy needed to meet the expectant mother's needs depend on the stage of pregnancy. In the first half of pregnancy (especially in the first trimester), the body's needs practically do not change. Changes begin to occur in the second half of pregnancy. This is due to the rapid growth of the fetus and placenta, as well as changes in the gastrointestinal tract, liver, and kidneys, which circulate and excrete the metabolic products of both mother and fetus. Because of these features, in the second half of pregnancy it is important to increase the protein, calcium, iron, dietary fiber, vitamins, and trace elements in the diet and to limit salt intake.

Weight During Pregnancy

During the first months of pregnancy, you should not notice weight gain. Some women may even find a decrease in body weight due to ailments that quite often (according to some data, in 70% of cases) occur in the first trimester and affect established eating and drinking habits. So-called morning nausea can last the entire pregnancy, although it usually passes, or at least begins to subside, by the end of the first trimester. Talk to the doctor leading your pregnancy if you experience severe nausea, as your body may start to dehydrate. Do not forget that along with the fluid, you also lose vitamins and trace elements that are so necessary for you and your baby. As the child grows in the second and third trimesters, the nutritional needs of the expectant mother also increase. And yet pregnancy is not a reason to overeat, to "eat for two," as it was customary to say earlier. Pregnant women need only 200 to 300 extra calories a day, and only in the last trimester. You can get them by eating 2 fruits, 2 handfuls of berries, a cheese sandwich, or a serving of curd casserole.
Weight Gain Rate During Pregnancy

If you entered pregnancy at a healthy weight, the normal increase is 10 to 13.6 kg, and this additional weight is distributed in the body as follows:

- Fat Tissue – 4 Kg.
- Fetus, Placenta, Amniotic Fluid – 5 Kg.
- Extracellular Fluid – 1 To 1.5 Kg.
- Uterus, Breasts – 1 To 1.5 Kg.
- The Mother's Increased Circulating Blood Volume – 1 To 1.5 Kg.

Women who were underweight or overweight before pregnancy will have slightly different norms: a gain of 12 to 15.2 kg and 7 to 9.1 kg, respectively. If your weight previously differed significantly from the norm, you should consult the specialist leading your pregnancy about your diet and a desirable weight gain. Recommendations should be given considering age, body size (height, weight, mass index), physical activity level, individual metabolic characteristics, and other factors. BMI (body mass index) is usually used as an indicator of underweight or overweight. It is calculated as follows: BMI = weight (kg) / height (m)². The recommended BMI-based weight gain is the most personalized, taking into account the individual characteristics of a particular woman. On average, you can gain 1 to 2 kg in the first trimester. In the second and third trimesters, the following weight gain is considered the norm:

- Underweight – 0.5 Kg Per Week.
- Overweight – 0.3 Kg Per Week.
- Normal Weight – 0.4 Kg Per Week.

Weight gain of less than 1 kg or more than 3 kg per month should be grounds for careful study of the circumstances of the pregnancy by an obstetrician.

What You Can't Drink And Eat When You're Pregnant

- Unpasteurized milk. Any dairy products you consume during pregnancy should have a "Pasteurized" label on the packaging.
- Soft cheeses. You can enjoy parmesan on pizza, but it is better to refuse soft cheeses made from unpasteurized milk (brie, camembert, feta, mold-ripened cheeses). The bacteria they may contain can harm you in your current condition.
- Raw and undercooked meat.
May contain pathogenic bacteria. This also includes all raw-smoked products. Leaving aside the question of whether they are useful in principle, the key point is that the listeria bacterium, which can live in raw meat, continues to exist even while these products are in your refrigerator. They become relatively safe only when cooked at high temperatures and eaten immediately.
- Raw and dried fish, seafood, and dishes made from them (sushi, etc.). If you are a fan of sushi, oysters, mussels, or lightly salted salmon, you will have to forget about these delicacies for the duration of pregnancy and breastfeeding. Only fish and seafood thoroughly cooked at high temperatures are allowed for pregnant women.
- Raw eggs and dishes containing them before thermal processing, such as fresh dough. If you knead dough with eggs, give up the habit of tasting it. Even a small amount of raw dough is a risk: the same salmonella bacterium is very dangerous for any healthy body, not to mention your special situation. In the same category are homemade mayonnaise and other salad dressings ("Caesar," etc.). And do not forget about sweet dishes: mousse, gogol-mogol, meringue, tiramisu, etc.
- Shoots and sprouted grains. Avoid them all: pathogens can penetrate them at an early stage of growth, and it is impossible to wash them off with water before eating.
- Fish with mercury. Tuna, swordfish, mackerel, and shark can contain high doses of mercury. It is considered safe to eat no more than 300 grams per week of seafood or fish containing minimal levels of mercury: catfish, salmon, cod, canned tuna.
- Freshly squeezed juices. Juices squeezed in restaurants and other public places may contain pathogenic bacteria such as salmonella and E. coli. Raw unpasteurized juices in bottles, which can be seen in supermarket refrigerators, fall into the same category.
- Unwashed fruits and vegetables. They can carry the parasite toxoplasma, which is dangerous for you and the baby.
- Caffeine.
Many mothers wonder whether it is possible to drink coffee during pregnancy. Recent studies show that a small amount of caffeine is safe for pregnant women. However, the question of whether high doses of the substance can lead to a risk of miscarriage, as was thought until recently, is still being studied. While research on this issue continues, pregnant women are advised to consume no more than 200 mg of caffeine per day – about one cup of coffee. Remember that caffeine is also found in cola, tea, chocolate, and energy drinks.
- Alcohol. The topic of alcohol consumption during pregnancy remains relevant. You are well aware that the abuse of strong drinks leads to serious malformations of the fetus. However, not everyone knows that even small doses can be dangerous. A safe amount of alcohol during pregnancy has not been established. Therefore, it is best to give up any alcoholic drinks for the entire time of waiting for the baby and breastfeeding.

The waiting period for a child is a time when it is necessary to show special attention to the body's health and needs. And while you may have to give up some of your eating habits, treat it with joy – you are not only making the necessary contribution to your baby's health and setting the right direction for its development over the next 40 weeks, but also most likely laying the foundation for keeping your body in shape after childbirth.
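The BMI formula and the weekly weight-gain norms quoted above can be sketched in a few lines of Python. Note that the 18.5 and 25 cut-offs separating the underweight/normal/overweight categories are standard WHO thresholds assumed here for illustration; the article itself does not state them.

```python
# Sketch of the BMI formula and the weekly weight-gain norms quoted above.
# The BMI category cut-offs (18.5 and 25) are standard WHO thresholds and
# are an assumption here -- the article does not state them.

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def weekly_gain_norm_kg(pre_pregnancy_bmi: float) -> float:
    """Recommended gain per week in trimesters 2-3, per the article."""
    if pre_pregnancy_bmi < 18.5:   # underweight
        return 0.5
    if pre_pregnancy_bmi >= 25.0:  # overweight
        return 0.3
    return 0.4                     # normal weight

b = bmi(60.0, 1.65)
print(round(b, 1))             # 22.0 -> normal weight
print(weekly_gain_norm_kg(b))  # 0.4
```

For example, a woman of 60 kg and 1.65 m has a BMI of about 22, placing her in the normal-weight band with a norm of roughly 0.4 kg per week in the second and third trimesters.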
Eleusine coracana, or finger millet, also known as ragi in India and kodo in Nepal, is an annual herbaceous plant widely grown as a cereal crop in the arid and semiarid areas of Africa and Asia. It is a tetraploid and self-pollinating species that probably evolved from its wild relative Eleusine africana. Finger millet is native to the Ethiopian and Ugandan highlands. Interesting crop characteristics of finger millet are its ability to withstand cultivation at altitudes over 2000 m above sea level, its high drought tolerance, and the long storage time of the grains. Finger millet originated in East Africa (the Ethiopian and Ugandan highlands). It was claimed to have been found in an Indian archaeological site dated to 1800 BCE (Late Bronze Age); however, this was subsequently demonstrated to be incorrectly identified. The oldest record of finger millet comes from an archaeological site in Africa dating to the 8th century AD.

Taxonomy and botanical description of finger millet

There are ten species in the genus Eleusine Gaertn: seven diploid (2n=16, 18, and 20) and three tetraploid taxa (2n=36 or 38). These are Eleusine africana (Kenn.-O'Bryne), Eleusine coracana (L.) Gaertn, Eleusine floccifolia (Spreng), Eleusine indica (L.) Gaertn, Eleusine intermedia (Chiov.) (S.M.Phillips), Eleusine jaegeri (Pilg.), Eleusine kigeziensis (S.M.Phillips), Eleusine multiflora (Hochst. ex A.Rich), Eleusine semisterilis (S.M.Phillips), and Eleusine tristachya (Lam.) Lam. Different studies have confirmed that Eleusine coracana originated from the E. indica and E. floccifolia genomes and was selected for cultivation from its wild type E. africana. Main cultivation areas are Eastern and Southern African countries (Uganda, Kenya, the Democratic Republic of the Congo, Zimbabwe, Zambia, Sudan, Tanzania, Nigeria, and Mozambique) and Southern Asia (mainly India and Nepal). Finger millet is a short-day plant with a growing optimum of 12 hours of daylight for most varieties.
Its main growing area ranges from 20°N to 20°S, meaning mainly the semiarid to arid tropics. Nevertheless, finger millet is also grown at 30°N in the Himalaya region (India and Nepal). It is generally considered a drought-tolerant crop, but compared with other millets, such as pearl millet and sorghum, it prefers moderate rainfall (500 millimetres (20 in) annually). The majority of finger millet farmers worldwide grow it rainfed, although yields can often be significantly improved when irrigation is applied. In India, finger millet is a typical rabi (dry-season) crop. The heat tolerance of finger millet is high. For Ugandan finger millet varieties, for instance, the optimal average growth temperature is about 27 °C, while minimal temperatures should not be lower than 18 °C. Relative to other species (pearl millet and sorghum), finger millet has a higher tolerance to cool temperatures. It is grown from about 500 to about 2400 m above sea level (e.g. in the Himalaya region); hence, it can be cultivated at higher elevations than most tropical crops. Finger millet can grow on various soils, including highly weathered tropical lateritic soils. Furthermore, it can tolerate soil salinity to a certain extent. Its ability to bear waterlogging is limited, so good drainage and moderate water-holding capacity of the soil are optimal. Finger millet can tolerate moderately acidic soils (pH 5) as well as moderately alkaline soils (pH 8.2). Finger millet monocrops grown under rainfed conditions are most common in drier areas of Eastern Africa. In addition, intercropping with legumes, such as cowpea or pigeon pea, is also quite common in East Africa. Tropical Central Africa supports scattered regions of finger millet intercropping, mostly with legumes, but also with cassava, plantain, and vegetables. Several intercropping systems are also common in South India. Weeds are the major biotic stress for finger millet cultivation.
Its seeds are very small, which leads to relatively slow development in the early growing stages. This makes finger millet a weak competitor for light, water, and nutrients compared with weeds. In East and Southern Africa, the closely related species Eleusine indica (common name Indian goose grass) is a severe weed competitor of finger millet. The two species are very difficult to distinguish, especially in the early growing stages of the crop and the weed, and when broadcast seeding is applied instead of row seeding (as is often the case in East Africa). Besides Eleusine indica, the animal-dispersed species Xanthium strumarium and the stolon-forming species Cyperus rotundus and Cynodon dactylon are important finger millet weeds. Measures to control weeds include cultural, physical, and chemical methods. A cultural method is sowing in rows instead of broadcast sowing, to make the distinction between finger millet seedlings and E. indica easier when hand weeding. ICRISAT promotes cover crops and crop rotations to disrupt the growing cycle of the weeds. Physical weed control in financially resource-limited communities growing finger millet consists mainly of hand weeding or weeding with a hand hoe.

Diseases and pests

Finger millet is generally seen as not very prone to diseases and pests. Nonetheless, finger millet blast, caused by the fungal pathogen Magnaporthe grisea (anamorph Pyricularia grisea), can locally cause severe damage, especially when untreated. In Uganda, yield losses of up to 80% were reported in bad years. The pathogen leads to drying out of leaves, neck rot, and ear rot. These symptoms can drastically impair photosynthesis, translocation of photosynthetic assimilates, and grain filling, and thus reduce yield and grain quality. Finger millet blast can also infest finger millet weeds such as the closely related E. indica, E. africana, Digitaria spp., Setaria spp., and Dactyloctenium spp.
Finger millet blast can be controlled with cultural measures, chemical treatments, and the use of resistant varieties. Cultural measures to control finger millet blast suggested by ICRISAT for Eastern Africa include crop rotations with nonhost crops such as legumes, deep ploughing-under of finger millet straw on infected fields, washing of field tools after use to prevent dissemination of the pathogen to uninfected fields, weed control to reduce infection from weed hosts, and avoiding high plant densities to impede dispersal of the pathogen from plant to plant. Chemical measures can be direct spraying of systemic fungicides, such as the active ingredients pyroquilon or tricyclazole, or seed dressings with fungicides such as tricyclazole. Striga, a parasitic weed which occurs naturally in parts of Africa, Asia, and Australia, can severely affect the crop, causing yield losses in finger millet and other cereals of 20 to 80%. Striga can be controlled with limited success by hand weeding, herbicide application, crop rotations, improved soil fertility, intercropping, and biological control. The most economically feasible and environmentally friendly control measure would be to develop and use Striga-resistant cultivars. Striga-resistance genes have not yet been identified in cultivated finger millet but could be found in its crop wild relatives. ICRISAT is currently evaluating crop wild relatives and will introgress Striga resistance into cultivated finger millet. Finger millet pests include bird predators, such as quelea in East Africa. The pink stem borer (Sesamia inferens) and the shoot fly (Atherigona milliaceae) are considered the most relevant insect pests in finger millet cultivation. Measures to control Sesamia inferens are uprooting of infected plants, destruction of stubbles, crop rotation, chemical control with insecticides, biological measures such as pheromone traps, or biological pest control with the use of antagonistic organisms (e.g.
Sturmiopsis inferens).

Propagation and sowing

Propagation in finger millet farming is done mainly by seeds. In rainfed cropping, four sowing methods are used:
- Broadcasting: Seeds are sown directly in the field. This is the most common method because it is the easiest and requires no special machinery. Organic weed management is a problem with this method, because it is difficult to distinguish between weed and crop.
- Line sowing: An improvement over broadcasting that facilitates organic weed management thanks to the better distinction of weed and crop. With this method, spacing of 22 cm to 30 cm between lines and 8 cm to 10 cm within lines should be maintained. The seeds should be sown about 3 cm deep in the soil.
- Drilling in rows: Seeds are sown directly into the untreated soil using a direct-seed drill. This method is used in conservation agriculture.
- Transplanting: Seedlings are raised in nursery beds and transplanted to the main field. Leveling and watering of beds is required during transplanting. Seedlings should be transplanted at four weeks of age. For the early rabi and kharif seasons, seedlings should be transplanted at 25 cm x 10 cm spacing, and for the late kharif season at 30 cm x 10 cm. Planting should be done at a depth of 3 cm in the soil.

The crop does not mature uniformly, and hence the harvest is taken up in two stages. When the earhead on the main shoot and 50% of the earheads on the crop turn brown, the crop is ready for the first harvest. At the first harvest, all earheads that have turned brown should be cut. This is followed by drying, threshing, and cleaning the grains by winnowing. The second harvest is around seven days after the first, when all earheads, including the green ones, should be cut. The grains should then be cured to obtain maturity: the harvested earheads are heaped in the shade for one day without drying, so that the humidity and temperature increase and the grains get cured.
This is followed by drying, threshing, and cleaning, as after the first harvest. Once harvested, the seeds keep extremely well and are seldom attacked by insects or moulds. Finger millet can be kept for up to 10 years when unthreshed; some sources report a storage duration of up to 50 years under good storage conditions. The long storage capacity makes finger millet an important crop in risk-avoidance strategies, serving as a famine crop for farming communities. As a first step of processing, finger millet can be milled to produce flour. However, finger millet is difficult to mill due to the small size of the seeds and because the bran is bound very tightly to the endosperm. Furthermore, the delicate seed can be crushed during milling. Developing commercial mechanical milling systems for finger millet is therefore challenging, and the main product of finger millet is whole grain flour. This has disadvantages, such as the reduced storage time of the flour due to its high oil content; the industrial use of whole grain finger millet flour is also limited. Moistening the millet seeds prior to grinding helps to remove the bran mechanically without damaging the rest of the seed. The mini millet mill can also be used to process other grains such as wheat and sorghum. Another way to process the finger millet grain is germinating the seed. This process, also called malting, is very common in the production of brewed beverages such as beer. When finger millet is germinated, enzymes are activated that convert starches into other carbohydrates such as sugars. Finger millet has good malting activity. Malted finger millet can be used as a substrate to produce, for example, gluten-free beer or easily digestible food for infants.
Nutritional value per 100 g (3.5 oz): energy 1,597 kJ (382 kcal); dietary fiber 3.5 g. (Percentages of daily values are roughly approximated using US recommendations for adults.)

Finger millet can be ground into a flour and cooked into cakes, puddings, or porridge. The flour is made into a fermented drink (or beer) in Nepal and in many parts of Africa. The straw from finger millet is used as animal fodder. Millet flour is 9% water, 75% carbohydrates, 11% protein, and 4% fat. In a 100-gram (3½-ounce) reference amount, millet flour provides 1,600 kilojoules (382 kilocalories) of food energy and is a rich source (20% or more of the Daily Value, DV) of protein, dietary fiber, several B vitamins, and numerous dietary minerals. It is low in calcium, potassium, and sodium (less than 10% DV).

Growing finger millet to improve nutrition

The International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), a member of the CGIAR consortium, partners with farmers, governments, researchers, and NGOs to help farmers grow nutritious crops, including finger millet. This helps their communities have more balanced diets and become more resilient to pests and drought. For example, the Harnessing Opportunities for Productivity Enhancement of Sorghum and Millets in Sub-Saharan Africa and South Asia (HOPE) project is increasing yields of finger millet in Tanzania by encouraging farmers to grow improved varieties.

Preparation as food

The finger millet or ragi is malted and its grain is ground into flour. The flour is consumed with milk, boiled water, or yogurt. The flour is made into flatbreads, including thin, leavened dosa and thicker, unleavened roti. There are various food recipes of finger millet, including dosa, idli, and laddu.
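As a rough sanity check, the macronutrient figures above are consistent with the stated energy value when multiplied by the standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat). The factors are general nutrition-labelling approximations, not taken from this text:

```python
# Cross-check the stated ~382 kcal per 100 g of millet flour using
# standard Atwater energy factors (a general approximation).
carbs_g, protein_g, fat_g = 75.0, 11.0, 4.0  # grams per 100 g, from the text

kcal = carbs_g * 4 + protein_g * 4 + fat_g * 9
print(kcal)  # 380.0 -- close to the stated 382 kcal
```

The small discrepancy (380 vs. 382 kcal) is expected, since published values also count fiber and use rounded composition percentages.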
In southern India, on pediatricians' recommendation, finger millet is used in preparing baby food because of its high nutritional content, especially iron and calcium. Satva, pole (dosa), bhakri, ambil (a sour porridge), and pappad are common dishes made using finger millet. In Karnataka, finger millet is generally consumed in the form of a porridge called ragi mudde in Kannada. It is the staple diet of many residents of South Karnataka, especially in the rural areas. Mudde is prepared by cooking ragi flour with water to achieve a dough-like consistency. This is then rolled into balls of the desired size and consumed with sambar (huli), saaru (ಸಾರು), or curries. Ragi is also used to make roti, idli, dosa, and conjee. In the Malnad region of Karnataka, the whole ragi grain is soaked and the milk is extracted to make a dessert known as keelsa. A type of flatbread is prepared using finger millet flour (called ragi rotti in Kannada) in the northern districts of Karnataka. In Tamil Nadu, ragi is called kezhvaragu and also has other names like keppai, ragi, and ariyam. Ragi is dried, powdered, and boiled to form a thick mass that is allowed to cool. This is the famed kali or keppai kali, which is made into large balls to quantify the intake. It is taken with sambar or kuzhambu. For children, ragi is also fed with milk and sugar (malt), and it is also made into pancakes with chopped onions and tomatoes. Kezhvaragu is used to make puttu with jaggery or sugar. Ragi porridge, called koozh, is a staple in farming communities, eaten along with raw onions and green chillies. In Andhra Pradesh, ragi sankati or ragi muddha – ragi balls – are eaten in the morning with chilli, onions, and sambar. In Kerala, puttu, a traditional breakfast dish, can be made with ragi flour and grated coconut, which is then steamed in a cylindrical steamer. In the tribal and western hilly regions of Odisha, ragi or mandiaa is a staple food.
In the Garhwal and Kumaon region of Uttarakhand, koda or maddua is made into thick rotis (served with ghee) and also into badi, which is similar to halwa but without sugar. In the Kumaon region of northern India, ragi is traditionally fed to women after childbirth. In some parts of the Kumaon region, ragi flour is used to make snacks like namkeen sev and mathri.

In South and Far East Asia

In Nepal, a thick dough (ḍhĩḍo) made of millet flour (kōdō) is cooked and eaten by hand. The dough can also be made into a thick bread (rotee) by spreading it over a flat utensil and heating it. Fermented millet is used to make a beer (chhaang), and the mash is distilled to make a liquor (rakshi). Whole grain millet is fermented to make tongba. Its use in holy Hindu practices is barred, especially by upper castes. In Sri Lanka, finger millet is called kurakkan and is made into kurakkan roti – an earthy brown thick roti with coconut – and thallapa – a thick dough made by boiling ragi with water and some salt until it forms a dough ball. It is then eaten with a spicy meat curry and is usually swallowed in small balls rather than chewed. It is also eaten as a soup (kurakkan kenda) and as a sweet called halape. In northwest Vietnam, finger millet is used as a medicine for women at childbirth, and a minority use finger millet flour to make alcohol. Ragi malt porridge is made from finger millet which is soaked and shadow-dried, then roasted and ground. This preparation is boiled in water and used as a substitute for milk powder-based beverages.

Idli, a South Indian breakfast dish made from ragi flour
Butler, "Ethnoarchaeological Approaches to the Study of Prehistoric Agriculture in the Ethiopian Highlands" in Van der Veen, ed., The Exploitation of Plant Resources in Ancient Africa. Kluwer Academic: Plenum Publishers, New York, 1999. - Bastola, Biswash Raj; Pandey, M. P.; Ojha, B. R.; Ghimire, S. K.; Baral, K. (2015-06-25). "Phenotypic Diversity of Nepalese Finger Millet (Eleusine coracana (L.) Gaertn.) Accessions at IAAS, Rampur, Nepal". International Journal of Applied Sciences and Biotechnology. 3 (2): 285–290. doi:10.3126/ijasbt.v3i2.12413. ISSN 2091-2609. - LI-BIRD. "Released and promising crop varieties for mountain agriculture in Nepal" (PDF). - K.T. Achaya (2003). The Story of Our Food. Universities Press. p. 21. ISBN 978-81-7371-293-7. - Hilu, K. W.; de Wet, J. M. J.; Harlan, J. R. Harlan (1979). "Archaeobotanical Studies of Eleusine coracana ssp. coracana (Finger Millet)". American Journal of Botany. 66 (3): 330–333. doi:10.1002/j.1537-2197.1979.tb06231.x. JSTOR 2442610. - Hilu, Khidir W.; Johnson, John L. (1997). "Systematics of Eleusine Gaertn. (Poaceae: Chloridoideae): Chloroplast DNA and Total Evidence". Annals of the Missouri Botanical Garden. 84 (4): 841. doi:10.2307/2992029. JSTOR 2992029. - Bisht, M. S.; Mukai, Y. (2002-10-01). "Genome organization and polyploid evolution in the genus Eleusine (Poaceae)". Plant Systematics and Evolution. 233 (3): 243–258. doi:10.1007/s00606-002-0201-5. ISSN 1615-6110. S2CID 45763855. - H.D. Upadhyaya; V. Gopal Reddy & D.V.S.S.R. Sastry (2008). "Regeneration guidelines Fingermillet, ICRISAT". Crop Specific Regeneration Guidelines CGIAR – via ICRISAT / CGIAR. - Mgonja, Audi, Manyasa and Ojulong, M. Mgonja, P. Audi, E. Manyasa and H. Ojulong (2011). "INTEGRATED BLAST AND WEED MANAGEMENT AND MICRODOSING IN FINGER MILLET: A HOPE PROJECT MANUAL FOR INCREASING FINGER MILLET PRODUCTIVITY IN EASTERN AFRICA". 
ICRISAT (International Crops Research Institute for Semi Arid Tropics).CS1 maint: multiple names: authors list (link) - Takan JP, Muthumeenakshi S, Sreenivasaprasad S, Talbot NJ (2004). "Molecular markers and mating type assays to characterise finger millet blast pathogen populations in East Africa". Poster Presented at British Mycological Society (BMS) Meeting, "Fungi in the Environment", Nottingham. - Sreenivasaprasad S, Takan JP, Mgonja MA, Manyasa EO, Kaloki P, Wanyera N, Okwade AM, Muthumeenakshi S, Brown AE, Lenné JM (2005). "Enhancing finger millet production and utilisation in East Africa through improved blast management and stakeholder connectivity". Aspects of Applied Biology. 75: 11–22. - Atera, Evans; Itoh, Kazuyuki (May 2011). "Evaluation of ecologies and severity of Striga weed on rice in sub-Saharan Africa". Agriculture and Biology Journal of North America. 2 (5): 752–760. doi:10.5251/abjna.2011.2.5.752.760. ISSN 2151-7517. - Haussmann, Bettina IG; Hess, Dale E; Welz, H-Günter; Geiger, Hartwig H (2000-06-01). "Improved methodologies for breeding striga-resistant sorghums" (PDF). Field Crops Research. 66 (3): 195–211. doi:10.1016/S0378-4290(00)00076-9. ISSN 0378-4290. - Wilson, J. P.; Hess, D. E.; Hanna, W. W. (October 2000). "Resistance to Striga hermonthica in Wild Accessions of the Primary Gene Pool of Pennisetum glaucum". Phytopathology. 90 (10): 1169–1172. doi:10.1094/PHYTO.2000.90.10.1169. ISSN 0031-949X. PMID 18944482. - Kuiper, Eric; Groot, Alexia; Noordover, Esther C.M.; Pieterse, Arnold H.; Verkleij, Joe A.C. (1998). "Tropical grasses vary in their resistance to Striga aspera, Striga hermonthica, and their hybrids". Canadian Journal of Botany. 76 (12): 2131–2144. doi:10.1139/cjb-76-12-2131. ISSN 1480-3305. - Samiksha, S. "Pink Stem Borer (Sesamia inference): Nature, Life Cycle and Control". - "Finger Millet Farming". Agri Farming India. 2015-05-18. |Wikimedia Commons has media related to Ragi.| |Wikibooks Cookbook has a recipe/module on|
Can You Name These Transformative Events from the '60s and '70s?

By: Zoe Samuel

Image: Lucasfilm, Twentieth Century Fox

About This Quiz

The Twentieth Century was a very dramatic and significant period in human history. It saw the final collapse of feudalism's last vestiges as monarchies toppled and empires fought wars that broke even the victors. It saw a rewriting of the social contract: between men and women, capital and labor, people of different races and sexual orientations. It was a time of unique evil and tremendous hope. Just as the first half of the century got most of the evil, the second half got most of the hope. The gains of movements that began in the 1800s began to bear real fruit, as women started to clock up wins for equality beyond the vote, Civil Rights reforms were enacted, and millions of people worldwide threw off colonizing forces to create independent democracies. Meanwhile, important technologies from the personal computer to the Pill to the space race transformed the possibilities of our lives and set the stage for the digital age.

The 1960s and 1970s were a key time in this transformation. Indeed, they represent the peak of income equality in the United States! They also represent a time of technological, social and political change. It mostly wasn't as bloody as the first half of the century, thank goodness, but it was just as dramatic. How well do you remember it?

Wiki Commons by Tsering Dorjee What is this murderous event that took place in China during the late '60s and early '70s? The Cultural Revolution was when communist leader Chairman Mao decided to purge a great many potential rivals and "renew" the spirit of the Chinese revolution. At least three million people died violently and up to a hundred million were displaced from their homes, subjected to starvation or otherwise brutalized.
Wiki Commons by NASA on The Commons What happened 250,000 miles from the Earth in 1969? The moon landing was a huge turning point as it represented mankind's first step onto a foreign body. It was also a key point in the space race and helped develop a lot of technology that turned out to have all sorts of useful applications. Indeed, the device on which you are reading this almost certainly had an ancestor built in part by NASA! Wiki Commons by Fred Palumbo, World Telegram staff photographer. Restored by Adam Cuerden Do you know which transformative book was published in the 1960s by Betty Friedan? Betty Friedan's book finally helped put a name to a problem that millions of women were experiencing. While most women outside the upper classes always worked, the 1950s had seen such prosperity that for the first time, many wives in (white) working and middle-class families were expected to be absent from the workplace en masse. Many of them found solace when Friedan's book told them they were not the only ones who were unhappy with this arrangement. FotografiaBasica / E+ / Getty Images Seventeen sub-Saharan countries did what exciting thing in 1960? Fourteen of the countries to gain independence that year were French colonies, alongside Italian and German colonies. The newly freed nations have had varied experiences since then, with some achieving democratic governance, peace and rapidly rising growth, while others have had a much rockier road. MirageC / Moment / Getty Images Which scientific field was introduced in 1960 by a paper written by physicist Richard Feynman? Nanotechnology hasn't done as much yet as people initially hoped or feared. The planet has not been reduced to "gray goo" and we don't all have tiny doctor nanobots running through our veins. However, it's clear that we may soon risk the former and enjoy the latter! Feynman is the father of this field, starting in 1960. 
Image Source / Image Source / Getty Images Do you know the name of the tiny technology that absolutely rewrote the shape of society, launched in 1960? The contraceptive pill was the first truly reliable way for a woman to decide when and if she would become a mother. This was very important in enabling women to get out of abusive situations and maintain economic independence. Early pills gave way to much lower-dose varieties with fewer side effects. Wiki Commons by Executive Office of the President of the United States Who died in Dallas in 1963, altering world history notably? There are plenty of conspiracies around the death of JFK, and they will probably never be put to rest. What we do know is that LBJ became president as a result of it, and this was transformative in US history, especially in the areas of Civil Rights and the Vietnam War. spreephoto.de / Moment / Getty Images Something was constructed in the early '60s that then shaped world politics for 28 years. What was it? The Berlin Wall was designed to separate democratic and free West Berlin from the communist East Berlin. Families were separated and the east began to get more and more impoverished. The wall came down in 1989, and has now been down longer than it was up. Wiki Commons by Abbie Rowe What happened not far from Florida in 1962 that came very close to ending the world? The Cuban Missile Crisis almost resulted in World War III when the Russians decided to point nuclear missiles at the US from Cuba, just 90 miles from Florida. After 13 tense days, things ended without violence thanks to backchannel communication and the threat of mutually assured destruction. Wiki Commons by Cecil Stoughton, White House Press Office (WHPO) In 1964, something was created that began to address historical racist injustices in the US in a legal way. Do you know its name? The Civil Rights Act was a key law that began to undo the legal segregation practiced in many states, part of the legacy of slavery.
The law ended legal discrimination on the basis of race, color, religion, sex, and national origin, and thus required integration of schools and other public spaces. Wiki Commons by US military personnel Which overseas event occurred in 1968 that cost many American lives, far from American soil? The Vietnam War was considered winnable until the Tet Offensive, when the Vietcong attacked simultaneously all over South Vietnam. The Vietnam War resulted in more than 58,000 American dead and an estimated 2.25 million Vietnamese dead, all without achieving its intended goal. Wiki Commons by National Archives & Records Administration A seemingly minor crime occurred in 1972 that altered the presidency forever. By what name is it known? Watergate was seen as a very minor event without a connection to the presidency, but it escalated hugely in due course, as the criminal coverup did involve the president. By the end, Nixon had to leave office. Former Nixon aide Roger Ailes made it his life's work to try to prevent such an investigation from ever occurring again. Wiki Commons by United Press International, photographer unknown Which Liverpudlians reshaped music and wider culture in the 1960s? The Beatles began in 1960 and continued working together until 1970, after which they each went on to tremendous solo success. Their music changed cultural history, and their message of pacifism was widely adopted by their fans. Wiki Commons by Department of Justice. Federal Bureau of Investigation Which group that disagreed with Martin Luther King's primarily pacifist approach helped the Civil Rights movement make headway in the 1960s and '70s? While Dr. Martin Luther King was considered to be a great man in part because of his message of peaceful protest (among many other reasons), not everyone agreed. The Black Panther Party, founded by Huey Newton and Bobby Seale and inspired in part by Malcolm X, took a more muscular approach to protest and civil disobedience. Both approaches had a great influence on how the Civil Rights battle was fought.
Wiki Commons by Gryffindor The course of the LGBTQ+ rights movement changed in 1969 thanks to what event in Lower Manhattan? The Stonewall riots were sparked by police brutality toward gay men who met at the Stonewall Inn, which they considered a safe haven. The LGBTQ+ community fought back and many were arrested. It was a turning point in the struggle for gay rights. Wiki Commons by Anil496 Who won a landslide re-election campaign in 1971? Indira Gandhi won several elections, though it's not clear how many more she could have won as her own bodyguards murdered her in 1984. Gandhi served a total of 15 years as India's Prime Minister, in two separate terms of 11 and four years respectively. Wiki Commons by Fribbler What unfortunate series of events lasting 30 years in Ireland began in 1968? The sectarian conflict known as the Troubles began in 1968. Its most infamous single atrocity, 1972's "Bloody Sunday" massacre, saw British soldiers shoot dead 13 Irish civilians. The next 30 years were marred by sectarian violence and terrorist attacks against civilians in Britain and Ireland. The Good Friday Agreement put an end to this in 1998. Wiki Commons by Richard Nixon Presidential Library and Museum Do you know what happened in China in 1972, transforming the global map and setting the stage for today's geopolitical landscape? Nixon's criminality is the part of his presidency with which people tend to be familiar, but his legacy is more complex. By opening up trade with China, he changed the course of world history and helped to break the stranglehold of the Soviet Union on the eastern hemisphere. Wiki Commons by ProhibitOnions at English Wikipedia Do you remember what event at the Munich Olympics caused a major international incident in 1972? The secular nationalist Black September group took Israeli athletes hostage at the Munich Games, killing them along with a German police officer. Israeli security forces later tracked down the responsible parties and killed them in revenge.
Wiki Commons by unknown In the late '70s, the map of the Middle East and Central Asia changed in a way that reverberates today. What happened? The Iranian Revolution ushered in the current theocratic dictatorship under the Ayatollah, deposing the Shah, who had held near-absolute power since a 1953 coup. Before that coup, Iran was a democracy whose elected leader was ousted by British and American security services in favor of a more amenable ruler. Thus, the current situation is considered by many to have a certain irony to it. Wiki Commons by Jeffmock Can you name the building complex, a symbol of international trade and cooperation, that opened in New York in 1973? The World Trade Center was built to symbolize international trade and cooperation (and, depending on who you ask, capitalism). Its most iconic buildings, the Twin Towers, only stood for a few decades before terrorists destroyed them in 2001, murdering nearly 3,000 people. Wiki Commons by Materialscientist Who was publicly forced to retract his statement that no woman could defeat him at his chosen sport? In 1973, female player Billie Jean King defeated Bobby Riggs in the match known as "the Battle of the Sexes". Riggs was 20 years past his physical prime but still maintained that no woman could defeat him, even a current champion. King won in straight sets, taking home a substantial prize. Wiki Commons by David Falconer Which historical occasion caused cars to have to wait in line for gasoline in the '70s? The energy crisis occurred when the Organization of Arab Petroleum Exporting Countries put an embargo on oil to try to pressure Israel's allies over the Yom Kippur War. It resulted in higher energy prices but did not get the result OAPEC sought. OAPEC is often confused with the larger OPEC, whose membership includes several non-Arab countries.
The communist movement in Cambodia was known as the Khmer Rouge, and their rule under leader Pol Pot was a particularly bloody period. From 1975 to 1979, they left 1.7m dead in the "Killing Fields," a time marked by horrifying images of thousands of skulls piled up on top of each other. Wiki Commons by Arpingstone Which incredible machine broke all aviation records in the mid-'70s? This exciting airplane flew at 50,000 feet and could cross the Atlantic in three hours. However, it was inefficient, expensive and annoyed everyone who had to listen to it roaring by overhead. It was taken out of service on Oct. 24, 2003, after a horrific crash. Fortunately, technology developed for Concorde is now being redeveloped into similar "scramjets" that throw their sonic boom upward and use far less fuel, meaning that a UK-Australia flight in three hours may be in the cards soon! Lucasfilm, Twentieth Century Fox Can you name the movie that changed the landscape of cinema, debuting in 1977? "Star Wars" is absolutely ubiquitous now, but it had many studio executives nervous before it launched. It was an instant smash hit, breaking plenty of records and going on to become the biggest movie franchise of all time. It employed a number of practical, camera and special effects that transformed cinema for good. Wiki Commons by Coolcaesar What world-altering tech behemoth was created in 1975? Bill Gates founded Microsoft in 1975 and rapidly came to dominate the home computer market. IBM, the previous Goliath of computing, was caught asleep at the switch, though it recovered later. Microsoft continues to be a giant of the industry, and it can be hard to remember that it was once a scrappy (albeit well-funded) start-up. Wiki Commons by Evan-Amos Which beloved household item made its 1977 debut and changed homes forever? While PC gaming took a long time to take off, the Atari was an affordable and fun alternative that took games out of the arcade and brought them into the home. 
It was initially only seen in more affluent homes, but now just about everyone has a Playstation or similar heir to this very first domestic gaming console. Laurent Mekul / Moment / Getty Images Do you know whose birth in the late 1970s represented a landmark achievement in the history of fertility science? Louise Brown was born in 1978, the first baby to result from IVF, or in vitro fertilization. Brown is now in her 40s and a mother herself. Since her birth, five million babies have been born by this amazing scientific method! AlpamayoPhoto / E+ / Getty Images Two Middle Eastern nations finally made peace in the '70s after years of being at one another's throats. Who are they? In 1978, the leaders Anwar Sadat and Menachem Begin shook hands in a meeting brokered by US president Jimmy Carter. Since then, Egypt and Israel have not always been very friendly, but they haven't had a war or shot at each other (officially). Wiki Commons by Ted Sahl, Kat Fitzgerald, Patrick Phonsakwa, Lawrence McCrorey, Darryl Pelletier When Harvey Milk won election to the San Francisco Board of Supervisors, what barrier did this break? Harvey Milk ran multiple times for office but lost largely because of prejudice against his sexuality. He refused to live in the closet, however, and continued to fight for gay rights and acceptance, winning election eventually. Wiki Commons by Sailko Which new-fangled device took music portable in the 1970s, paving the way for today's players? The Walkman was the first alternative to lugging a boom box around if you wanted portable music, and it took the world by storm. Now, everyone's phone contains a music library, but it all began with humble cassettes. Wiki Commons by U.S. EPA Why did 140,000 Americans have to evacuate an area of Pennsylvania in 1979? The Three Mile Island disaster not only resulted in 140,000 Americans having to flee their homes, it also set back support for nuclear energy by decades. 
Indeed, this form of energy is still highly controversial, as some fear further accidents and the issue of waste, while others support it as a carbon-free alternative to oil, natural gas and coal. abzee / E+ / Getty Images The juntas ruling Brazil, Chile, Argentina, Paraguay, Bolivia and Uruguay began a combined action in the 1970s, in order to keep power. What is its name? This sad series of events involved these South American dictators rounding up and killing their enemies en masse. It was a time of cooperation between rival leaders who wanted to get rid of dissidents who stood up to them. Since then, some of these nations have successfully kicked out the juntas and become free democracies. Wiki Commons by U.S. Government Who won the first of three electoral victories in the UK, changing its history forever? A controversial figure, Thatcher made some great and some terrible calls. She is thus loved and loathed in equal measure by British people, such that on one hand, there are quite a few statues of her, and on the other, the musical "Billy Elliot" contains an entire song looking forward to her death. pawel.gaul / E+ / Getty Images How did the Soviet Union horribly undermine earlier progress in nuclear disarmament in the late 1970s? As the US and Britain have learned, the famous saying is correct, "Never get into a land war in Central Asia." However, the USSR also learned this when it tried to pacify Afghanistan, which is famously intractable. The US helped local mujahideen push back, offering funding throughout the 1980s. This went very wrong later when one of these groups became al-Qaeda, in a case of something that sounds so absurd it has to be a conspiracy theory, but isn't. Wiki Commons by United Press International In 1960, what event forever changed the way US politics occurs, altering the kind of candidate who could win? If you saw this debate on TV, Kennedy crushed Nixon, but if you heard it on radio, it was the other way around—or so said the polls.
This debate was the first time that being very handsome became a debate tactic. Since then, other candidates have been obliged to overcome any opposing good looks through diverse tactics such as being more entertaining, pandering to the lowest common denominator or being a far superior debater. omersukrugoksu / E+ / Getty Images Do you remember which fiasco took place in the early '60s that did not succeed in removing a communist dictator? The CIA sponsored a rebel group trying to depose Fidel Castro in 1961. It was a disaster that resulted in a number of lives lost and Castro more entrenched than ever. Multiple failed attempts to kill him had a similar effect, and he lived to a ripe old age. RichLegg / E+ / Getty Images Now essential in all sorts of computers, medical equipment, weapons, and other places, what device was born in 1960 to much public fanfare? Physicist Theodore Maiman stunned the world in 1960 when he debuted something called Light Amplification by Stimulated Emission of Radiation, or "laser." It is now a very popular technology in civilian and military use, and shows up in many science fiction movies as well.
Samuel V. Glass
Research Physical Scientist
One Gifford Pinchot Drive

Wood structures can endure for centuries if they remain sufficiently dry. As the construction industry evolves, research is needed to protect wood buildings from the potential effects of moisture. Proper design, operation, and maintenance of wood buildings are critical to prevent moisture-induced damage, such as mold growth, wood decay, and corrosion of metal fasteners.

Dr. Glass conducts research within the Building and Fire Sciences Research Work Unit at the Forest Products Laboratory. His research focuses on the building envelope, the collective elements that separate the interior and exterior environments, including the foundation, exterior walls, and roof. Dr. Glass investigates the relationships between moisture, energy efficiency, and durability in residential and non-residential wood buildings.

Primary research objectives include:
- Advancing the fundamental understanding of wood–moisture interactions;
- Quantifying moisture transfer in the building envelope;
- Developing tools for predicting moisture-induced damage in buildings; and
- Developing moisture control strategies for cross-laminated timber, a relatively new engineered wood product with vast potential for use in mid-rise and high-rise buildings.

- Wood–moisture interactions
- Water vapor sorption in wood
- Moisture-related properties of wood and wood products
- Cross-laminated timber (CLT)
- Moisture control in buildings
- Heat, air, and moisture transfer in the building envelope
- Interior and exterior moisture loads on the building envelope
- Tools for predicting mold growth, wood decay, and corrosion of metal fasteners
- Hygrothermal modeling
- Instrumentation for monitoring moisture levels in building assemblies

Dr. Glass conducted his doctoral research in the Department of Chemistry at the University of Wisconsin-Madison.
He investigated how surfactant films at the gas-liquid interface control gas uptake and evaporation of water. These studies contributed to understanding chemical reactions that occur in sulfuric acid droplets in the upper troposphere and lower stratosphere, which affect ozone levels.

Why This Research is Important

Moisture control in wood buildings is important for human health, building sustainability, and conservation of the forest resource. Excessive moisture levels in buildings can lead to mold growth and respiratory health problems for occupants. Preventing moisture problems contributes to building sustainability by extending the service life. The sustainability and health of America's forests depend on sound conservation practices, including utilization. Efficient wood utilization reduces the risk and impacts of wildfire, provides incentives for private landowners to maintain forest land, and provides a critical source of jobs in rural America. Use of wood as a green building material is important for climate change mitigation because wood products store carbon for as long as the building exists. Wood products require less energy to process than other building materials such as steel or concrete, and use of wood produces less air pollution, solid wastes, and greenhouse gases. These benefits hinge on efficient and proper use of wood in construction.

- University of Wisconsin-Madison, Ph.D., Physical Chemistry, 2005
- Calvin College, Grand Rapids, MI, B.A., Chemistry/Classical Civilization/Archaeology, 1998

- Research Physical Scientist, USDA Forest Products Laboratory, 2005-Current
- Research Assistant, University of Wisconsin-Madison, Department of Chemistry, 2001-2005
- Teaching Assistant, University of Wisconsin-Madison, Department of Chemistry, 1999-2001

- Society of Wood Science and Technology (SWST), Member (2008-Current)
- American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), Member (2005-Current): Standing Standard Project Committee 160, Criteria for Moisture-Control Design Analysis in Buildings; Technical Committee 1.12, Moisture Management in Buildings; Technical Committee 4.4, Building Materials and Building Envelope Performance

Featured Publications & Products

- Glass, Samuel V.; Boardman, Charles R.; Thybring, Emil Engelund; Zelinka, Samuel L. 2018. Quantifying and reducing errors in equilibrium moisture content measurements with dynamic vapor sorption (DVS) experiments.
- Thybring, Emil Engelund; Glass, Samuel V.; Zelinka, Samuel L. 2019. Kinetics of water vapor sorption in wood cell walls: state of the art and research needs.
- Thybring, Emil E.; Boardman, Charles R.; Glass, Samuel V.; Zelinka, Samuel L. 2019. The parallel exponential kinetics model is unfit to characterize moisture sorption kinetics in cellulosic materials.
- Zelinka, Samuel L.; Glass, Samuel V.; Thybring, Emil Engelund. 2020. Evaluation of previous measurements of water vapor sorption in wood at multiple temperatures.
- Kordziel, Steven; Glass, Samuel V.; Boardman, Charles R.; Munson, Robert A.; Zelinka, Samuel L.; Pei, Shiling; Tabares-Velasco, Paulo Cesar. 2020. Hygrothermal characterization and modeling of cross-laminated timber in the building envelope.
- Kordziel, Steven; Pei, Shiling; Glass, Samuel V.; Zelinka, Samuel; Tabares-Velasco, Paulo Cesar. 2019. Structure moisture monitoring of an 8-story mass timber building in the Pacific Northwest.
- Glass, Samuel V.; Wang, Jieying; Easley, Steve; Finch, Graham. 2013. Chapter 10: Enclosure--Building enclosure design for cross-laminated timber construction.
- Glass, Samuel V.; Gatland II, Stanley D.; Ueno, Kohta; Schumacher, Christopher J. 2017. Analysis of improved criteria for mold growth in ASHRAE standard 160 by comparison with field observations.
- Glass, Samuel V.; Zelinka, Samuel L. 2010. Moisture relations and physical properties of wood.
- Glass, Samuel; Zelinka, Samuel; Thybring, Emil Engelund. 2021. Exponential decay analysis: a flexible, robust, data-driven methodology for analyzing sorption kinetic data.
- Glass, Samuel; Zelinka, Samuel. 2021. Moisture relations and physical properties of wood.
- Zelinka, Samuel L.; Kirker, Grant T.; Bishell, Amy B.; Glass, Samuel V. 2020. Effects of wood moisture content and the level of acetylation on brown rot decay.
- Boardman, Charles R.; Glass, Samuel V. 2020. Improving the Accuracy of a Hygrothermal Model for Wood-Frame Walls: A Cold-Climate Study.
- Boardman, Charles R.; Glass, Samuel V.; Zelinka, Samuel L. 2020. Moisture redistribution in full-scale wood-frame wall assemblies: measurements and engineering approximation.
- Lepage, Robert; Glass, Samuel V.; Knowles, Warren; Mukhopadhyaya, Phalguni. 2019. Biodeterioration models for building materials: critical review.
- Boardman, C.R.; Glass, Samuel V.; Munson, Robert; Yeh, Borjen; Chow, Kingston. 2019. Field moisture performance of wood-framed walls with exterior insulation in a cold climate.
- Boardman, Charles R.; Chow, Kingston; Glass, Samuel V.; Yeh, Borjen. 2019. Hygrothermal modeling of wall drying after water injection.
- Zelinka, Samuel L.; Bourne, Keith J.; Glass, Samuel V.; Boardman, Charles R.; Lorenz, Linda; Thybring, Emil Engelund. 2018. Apparatus for gravimetric measurement of moisture sorption isotherms for 1-100 g samples in parallel.
- Pásztory, Zoltán; Horváth, Tibor; Glass, Samuel V.; Zelinka, Samuel. 2018. Experimental investigation of the influence of temperature on thermal conductivity of multilayer reflective thermal insulation.
- Kordziel, Steven; Glass, Samuel V.; Pei, Shiling; Zelinka, Samuel L.; Tabares-Velasco, Paulo Cesar. 2018. Moisture monitoring and modeling of mass-timber building systems.
- Glass, Samuel V.; Boardman, C.R.; Yeh, Borjen; Chow, Kingston. 2018. Moisture monitoring of wood-frame walls with and without exterior insulation in the Midwestern U.S.
- Zelinka, S.L.; Kordziel, S.; Pei, S.; Glass, S.V.; Tabares-Velasco, P.C. 2018. Moisture monitoring throughout the construction and occupancy of mass timber buildings.
- Zelinka, Samuel L.; Glass, Samuel V.; Thybring, Emil Engelund. 2018. Myth versus reality: do parabolic sorption isotherm models reflect actual wood water thermodynamics.
- Passarini, Leandro; Zelinka, Samuel L.; Glass, Samuel V.; Hunt, Christopher G. 2017. Effect of weight percent gain and experimental method on fiber saturation point of acetylated wood determined by differential scanning calorimetry.
- Glass, Samuel V.; Boardman, Charles R.; Zelinka, Samuel L. 2017. Short hold times in dynamic vapor sorption measurements mischaracterize the equilibrium moisture content of wood.
- Boardman, Charles; Glass, Samuel V.; Lebow, Patricia K. 2017. Simple and accurate temperature correction for moisture pin calibrations in oriented strand board.
- Zelinka, Samuel L.; Glass, Samuel V.; Jakes, Joseph E.; Stone, Donald S. 2016. A solution thermodynamics definition of the fiber saturation point and the derivation of a wood-water phase (state) diagram.
- Zelinka, Samuel L.; Passarini, Leandro; Colon Quintana, José L.; Glass, Samuel V.; Jakes, Joseph E.; Wiedenhoeft, Alex C. 2016. Cell wall domain and moisture content influence southern pine electrical conductivity.
- Zelinka, Samuel L.; Glass, Samuel V.; Boardman, Charles R.; Derome, Dominique. 2016. Comparison of the corrosion of fasteners embedded in wood measured in outdoor exposure with the predictions from a combined hygrothermal-corrosion model.
- Glass, Samuel V.; Yeh, Borjen; Herzog, Benjamin J. 2016. Effects of Exterior Insulation on Moisture Performance of Wood-Frame Walls in the Pacific Northwest: Measurements and Hygrothermal Modeling.
- Zelinka, Samuel L.; Glass, Samuel V.; Boardman, Charles R. 2016. Improvements to water vapor transmission and capillary absorption measurements in porous materials.
- Zelinka, Samuel L.; Glass, Samuel V.; Boardman, Charles R.; Derome, Dominique. 2016. Moisture storage and transport properties of preservative treated and untreated southern pine wood.
- Zelinka, Samuel L.; Wiedenhoeft, Alex C.; Glass, Samuel V.; Ruffinatto, Flavio. 2015. Anatomically informed mesoscale electrical impedance spectroscopy in southern pine and the electric field distribution for pin-type electric moisture metres.
- Boardman, C.R.; Glass, Samuel V. 2015. Basement radon entry and stack driven moisture infiltration reduced by active soil depressurization.
- Zelinka, Samuel L.; Bourne, Keith J.; Hermanson, John C.; Glass, Samuel V.; Costa, Adriana; Wiedenhoeft, Alex C. 2015. Force-displacement measurements of earlywood bordered pits using a mesomechanical tester.
- Glass, Samuel; Kochkin, Vladimir; Drumheller, S.; Barta, Lance. 2015. Moisture Performance of Energy-Efficient and Conventional Wood-Frame Wall Assemblies in a Mixed-Humid Climate.
- Boardman, C.R.; Glass, Samuel V. 2015. Moisture transfer through the membrane of a cross-flow energy recovery ventilator: Measurement and simple data-driven modeling.
- Zelinka, Samuel L.; Quintana, José L. Colon; Glass, Samuel V.; Jakes, Joseph E.; Wiedenhoeft, Alex C. 2015. Subcellular Electrical Measurements as a Function of Wood Moisture Content.
- Pásztory, Zoltán; Horváth, Tibor; Glass, Samuel V.; Zelinka, Samuel L. 2015. Thermal Insulation System Made of Wood and Paper for Use in Residential Construction.
- Glass, Samuel V.; Zelinka, Samuel L.; Johnson, Jay A. 2014. Investigation of Historic Equilibrium Moisture Content Data from the Forest Products Laboratory.
- Zelinka, Samuel L.; Glass, Samuel V.; Derome, Dominique. 2014. The effect of moisture content on the corrosion of fasteners embedded in wood subjected to alkaline copper quaternary treatment.
- Glass, Samuel V. 2013. Hygrothermal Analysis of Wood-Frame Wall Assemblies in a Mixed-Humid Climate.
- Glass, Samuel V.; TenWolde, Anton; Zelinka, Samuel L. 2013. Hygrothermal Simulation: A Tool for Building Envelope Design Analysis.
- Boardman, C.R.; Glass, Samuel V. 2013. Investigating Wind-Driven Rain Intrusion in Walls with the CARWASh.
- Jakes, Joseph E.; Plaza, Nayomi; Stone, Donald S.; Hunt, Christopher G.; Glass, Samuel V.; Zelinka, Samuel L. 2013. Mechanism of Transport Through Wood Cell Wall Polymers.
- TenWolde, Anton; Glass, Samuel V. 2013. Moisture in Crawl Spaces.
- Clausen, Carol A.; Glass, Samuel V. 2012. Build Green: Wood Can Last for Centuries.
- Zelinka, Samuel L.; Lambrecht, Michael J.; Glass, Samuel V.; Wiedenhoeft, Alex C.; Yelle, Daniel J. 2012. Examination of water phase transitions in Loblolly pine and cell wall components by differential scanning calorimetry.
- Zelinka, Samuel L.; Derome, Dominique; Glass, Samuel V. 2011. Combining hygrothermal and corrosion models to predict corrosion of metal fasteners embedded in wood.
- Boardman, C.R.; Glass, Samuel V.; Carll, Charles G. 2011. Moisture meter calibrations for untreated and ACQ-treated southern yellow pine lumber and plywood.
- Glass, Samuel V. 2010. A laboratory facility for research on wind-driven rain intrusion in building envelope assemblies.
- Boardman, C.R.; Glass, Samuel V.; Carll, Charles G. 2010. Estimating foundation water vapor release using a simple moisture balance and AIM-2: case study of a contemporary wood-frame house.
- Zelinka, Samuel L.; Derome, Dominique; Glass, Samuel V. 2010. From laboratory corrosion tests to a corrosion lifetime for wood fasteners : progress and challenges. - Glass, Samuel V.; Carll, Charles G.; Curole, Jay P.; Voitier, Matthew D. 2010. Moisture performance of insulated, raised, wood-frame floors : a study of twelve houses in southern Louisiana. - Zelinka, Samuel L; Glass, Samuel V. 2010. Water vapor sorption isotherms for southern pine treated with several waterborne preservatives. - Glass, Samuel V.; TenWolde, Antoni. 2009. Review of moisture balance models for residential indoor humidity. - Glass, Samuel V.; Carll, Charles G. 2009. Moisture meter calibration for untreated and ACQ-treated southern yellow pine plywood. - Clausen, Carol A.; Glaeser, Jessie A.; Glass, Samuel V.; Carll, Charles. 2009. Occurrence of mold in a two-story wood-frame house operated at design indoor humidity levels. - Zelinka, Samuel L.; Glass, Samuel V.; Stone, Donald S. 2008. A percolation model for electrical conduction in wood with implications for wood-water relations. - Glass, Samuel V. 2007. Measurements of moisture transport in wood-based materials under isothermal and nonisothermal conditions. - Glass, Samuel V.; TenWolde, Anton. 2007. Review of in-service moisture and temperature conditions in wood-frame buildings. |A percolation model for water and electrical conduction in wood with implications for durability| Recently, researchers at the Forest Products Laboratory and University of Wisconsin have developed a new model of electrical conduction in wood ... |Centennial Edition, Wood Handbook—Wood as an Engineering Material| The Wood Handbook—Wood as an Engineering Material serves as a primary reference document for a wide variety of users-from the general publ ... |Development of New Kinetics Models for Water Vapor Sorption in Wood| Wood is constantly exchanging water with its environment and these exchanges control nearly all of wood's amazing properties. 
USDA Forest Servic ... |Improving experimental techniques that probe wood-moisture interactions| Prior methods using dynamic vapor sorption instruments mischaracterized the equilibrium moisture content of wood. Equilibrium is reached after m ... |Improving the Accuracy of Automated Instruments for Moisture in Wood| Automated instruments are increasingly used for measuring the equilibrium moisture content of wood. Research finds that common methods have much ... |Improving the Tools and Practice for Designing Moisture-Safe Wood Buildings| FPL researchers predict the future! Will this new wood structure be safe and durable in the climate for which it is designed? |Investigating the Role of Moisture in Durability of Acetylated Wood| FPL researchers join international effort to investigate fungal decay resistance of acetylated wood. |Keeping Wood-Frame Housing Safe and Warm| Is it okay to add exterior insulation and cover the wood sheathing to make your home even more energy-efficient? USDA Forest Service researchers ... |Managing Moisture in Energy-efficient Wall Systems| Moisture durability is critical for design and construction of energy-efficient buildings. Field measurements of moisture characteristics for hi ... |Modeling indoor humidity in homes| Indoor humidity levels in a home influence not only occupant comfort and indoor air quality but also the durability of the building, especially ... |Moisture Control in Crawl Spaces in Louisiana| Builders and homeowners in the Gulf Region often ask how to insulate a crawl space to avoid moisture problems. The Forest Products Laboratory (F ... |Monitoring Moisture Levels in Mass Timber Buildings| Detailed measurements on moisture levels in mass timber buildings in the United States are scarce. USDA Forest Service researchers are working w ... 
|Possibilities and Pitfalls of Computer Simulation for Building Moisture Analysis| Moisture problems are much less expensive to correct in the building design phase than after the building is constructed. Computer-based simulat ... |Scientists study how water changes wood| Water causes a host of wood damage mechanisms such as mold, decay, fastener corrosion, and splitting. This research elucidates how water chan ... |Updating a Building Design Standard with Improved Criteria for Preventing Mold Growth| A consensus standard for building design that addresses moisture control analysis was recently revised to improve the criteria for preventing mo ... |Wood Construction Goes Beyond Its Traditional Roots| As interest in sustainable building options continues to grow. Wood construction is going beyond its traditional roots in housing and expanding ...
Greek Revival architecture
The Greek Revival was an architectural movement of the late 18th and early 19th centuries, predominantly in Northern Europe and the United States. It revived the style of ancient Greek architecture, in particular the Greek temple, with varying degrees of thoroughness and consistency. A product of Hellenism, it may be looked upon as the last phase in the development of Neoclassical architecture, which had for long mainly drawn from Roman architecture. The term was first used by Charles Robert Cockerell in a lecture he gave as Professor of Architecture to the Royal Academy of Arts, London in 1842. With a newfound access to Greece, or initially the books produced by the few who had actually been able to visit the sites, archaeologist-architects of the period studied the Doric and Ionic orders. In each country it touched, the style was looked on as the expression of local nationalism and civic virtue, and freedom from the lax detail and frivolity that was thought to characterize the architecture of France and Italy, two countries where the style never really took hold. This was especially the case in Britain, Germany and the United States, where the idiom was regarded as being free from ecclesiastical and aristocratic associations. The taste for all things Greek in furniture and interior design, sometimes called Neo-Grec, was at its peak by the beginning of the 19th century, when the designs of Thomas Hope had influenced a number of decorative styles known variously as Neoclassical, Empire, Russian Empire, and Regency architecture in Britain. Greek Revival architecture took a different course in a number of countries, lasting until the Civil War in America (1860s) and even later in Scotland.
Rediscovery of Greece
Despite the unbounded prestige of ancient Greece among the educated elite of Europe, there was minimal direct knowledge of that civilization before the middle of the 18th century.
The monuments of Greek antiquity were known chiefly from Pausanias and other literary sources. Visiting Ottoman Greece was difficult and dangerous business prior to the period of stagnation beginning with the Great Turkish War. Few Grand Tourists called on Athens during the first half of the 18th century, and none made any significant study of the architectural ruins. It would take until the expedition funded by the Society of Dilettanti of 1751 by James Stuart and Nicholas Revett before serious archaeological inquiry began in earnest. Stuart and Revett's findings, published in 1762 (first volume) as The Antiquities of Athens, along with Julien-David Le Roy's Ruines des plus beaux monuments de la Grèce (1758) were the first accurate surveys of ancient Greek architecture. Meanwhile, the rediscovery of the three relatively easily accessible Greek temples at Paestum in southern Italy created huge interest throughout Europe, and prints by Piranesi and others were widely circulated. Access to the originals in Greece itself only became easier after the Greek War of Independence ended in 1832; Lord Byron's participation and death during this had brought it additional prominence. Following the travels to Greece of Nicholas Revett, a Suffolk gentleman architect, and the better remembered James Stuart in the early 1750s, intellectual curiosity quickly led to a desire to emulate. Stuart was commissioned after his return from Greece by George Lyttelton to produce the first Greek building in England, the garden temple at Hagley Hall (1758–59). A number of British architects in the second half of the century took up the expressive challenge of the Doric from their aristocratic patrons, including Benjamin Henry Latrobe (notably at Hammerwood Park and Ashdown House) and Sir John Soane, but it was to remain the private enthusiasm of connoisseurs up to the first decade of the 19th century. 
An early example of Greek Doric architecture (in the facade), married with a more Palladian interior, is the Revett-designed rural church of Ayot St Lawrence in Hertfordshire, commissioned in 1775 by Lord Lionel Lyde of the eponymous manor. The Doric columns of this church, with their "pie-crust crimped" details, are taken from drawings that Revett made of the Temple of Apollo on the Cycladic island of Delos, in the collection of books that he (and Stuart in some cases) produced, largely funded by special subscription by the Society of Dilettanti. See more in Terry Friedman's book "The Georgian Parish Church", Spire Books, 2004. Seen in its wider social context, Greek Revival architecture sounded a new note of sobriety and restraint in public buildings in Britain around 1800 as an assertion of nationalism attendant on the Act of Union, the Napoleonic Wars, and the clamour for political reform. It was to be William Wilkins's winning design for the public competition for Downing College, Cambridge that announced the Greek style was to be a dominant idiom in architecture, especially for public buildings of this sort. Wilkins and Robert Smirke went on to build some of the most important buildings of the era, including the Theatre Royal, Covent Garden (1808–1809), the General Post Office (1824–1829) and the British Museum (1823–1848), the Wilkins Building of University College London (1826–1830) and the National Gallery (1832–1838). Arguably the greatest British exponent of the style was Decimus Burton. In London twenty-three Greek Revival Commissioners' churches were built between 1817 and 1829, the most notable being St Pancras church by William and Henry William Inwood. In Scotland the style was avidly adopted by William Henry Playfair, Thomas Hamilton and Charles Robert Cockerell, who severally and jointly contributed to the massive expansion of Edinburgh's New Town, including the Calton Hill development and the Moray Estate.
Such was the popularity of the Doric in Edinburgh that the city now enjoys a striking visual uniformity, and as such is sometimes whimsically referred to as "the Athens of the North". Within Regency architecture the style already competed with Gothic Revival and the continuation of the less stringent Palladian and neoclassical styles of Georgian architecture, the other two remaining more common for houses, both in towns and English country houses. If it is tempting to see the Greek Revival as the expression of Regency authoritarianism, then the changing conditions of life in Britain made Doric the loser of the Battle of the Styles, dramatically symbolized by the selection of Barry's Gothic design for the Palace of Westminster in 1836. Nevertheless, Greek continued to be in favour in Scotland well into the 1870s in the singular figure of Alexander Thomson, known as "Greek Thomson".
Germany and France
In Germany, Greek Revival architecture is predominantly found in two centres, Berlin and Munich. In both locales, Doric was the court style rather than a popular movement, and was heavily patronised by Frederick William II and Ludwig I as the expression of their desires for their respective seats to become the capital of Germany. The earliest Greek building was the Brandenburg Gate (1788–91) by Carl Gotthard Langhans, who modelled it on the Propylaea. Ten years after the death of Frederick the Great, the Berlin Akademie initiated a competition for a monument to the king that would promote "morality and patriotism." Friedrich Gilly's unexecuted design for a temple raised above the Leipziger Platz caught the tenor of high idealism that the Germans sought in Greek architecture and was enormously influential on Karl Friedrich Schinkel and Leo von Klenze. Schinkel was in a position to stamp his mark on Berlin after the catastrophe of the French occupation ended in 1813; his work on what is now the Altes Museum, Schauspielhaus, and the Neue Wache transformed that city.
Similarly, in Munich von Klenze's Glyptothek and Walhalla were the fulfilment of Gilly's vision of an orderly and moral German world. The purity and seriousness of the style was intended as an assertion of German national values and partly intended as a deliberate riposte to France, where it never really caught on. By comparison, Greek Revival architecture in France was never popular with either the state or the public. What little there is started with Charles de Wailly's crypt in the church of St Leu-St Gilles (1773–80), and Claude Nicolas Ledoux's Barriere des Bonshommes (1785–89). First-hand evidence of Greek architecture was of very little importance to the French, due to the influence of Marc-Antoine Laugier's doctrines that sought to discern the principles of the Greeks instead of their mere practices. It would take until Labrouste's Neo-Grec of the Second Empire for Greek Revival architecture to flower briefly in France. The style was especially attractive in Russia, if only because Russians shared the Eastern Orthodox faith with the Greeks. The historic centre of Saint Petersburg was rebuilt by Alexander I of Russia, with many buildings giving the Greek Revival a Russian debut. The Saint Petersburg Bourse on Vasilievsky Island has a temple front with 44 Doric columns. Quarenghi's design for the Manege "mimics a 5th-century BC Athenian temple with a portico of eight Doric columns bearing a pediment and bas reliefs". Leo von Klenze's expansion of the palace that is now the Hermitage Museum is another example of the style. Following the Greek War of Independence, Romantic Nationalist ideology encouraged the use of historically Greek architectural styles in place of Ottoman or pan-European ones. Classical architecture was used for secular public buildings, while Byzantine architecture was preferred for churches.
Examples of Greek Revival architecture in Greece include the Old Royal Palace (now the home of the Parliament of Greece), the Academy and University of Athens, the Zappeion, and the National Library of Greece. The most prominent architects in this style were northern Europeans such as Christian and Theophil Hansen and Ernst Ziller and German-trained Greeks such as Stamatios Kleanthis and Panagis Kalkos.
Rest of Europe
The style was generally popular in northern Europe, and not in the south (except for Greece itself), at least during the main period. Examples can be found in Poland, Lithuania, and Finland, where the assembly of Greek buildings in Helsinki city centre is particularly notable. At the cultural edges of Europe, in the Swedish region of western Finland, Greek Revival motifs might be grafted on a purely baroque design, as in the design for Oravais Church by Jacob Rijf, 1792. A Greek Doric order, rendered in the anomalous form of pilasters, contrasts with the hipped roof and boldly scaled cupola and lantern, of wholly traditional baroque inspiration. While some eighteenth-century Americans had feared Greek democracy ("mobocracy"), the appeal of ancient Greece rose in the 19th century along with the growing acceptance of democracy. This made Greek architecture suddenly more attractive in both the North and South, for differing ideological purposes (for the North, Greek architecture symbolized the freedom of the Greeks; in the South it symbolized the cultural glories enabled by a slave society). Thomas Jefferson owned a copy of the first volume of The Antiquities of Athens. He never practiced in the style, but he played an important role introducing Greek Revival architecture to the United States. In 1803, Jefferson appointed Benjamin Henry Latrobe as surveyor of public buildings in the United States, and Latrobe designed a number of important public buildings in Washington, D.C.
and Philadelphia, including work on the United States Capitol and the Bank of Pennsylvania. Latrobe's design for the Capitol was an imaginative interpretation of the classical orders not constrained by historical precedent, incorporating American motifs such as corncobs and tobacco leaves. This idiosyncratic approach became typical of the American attitude to Greek detailing. His overall plan for the Capitol did not survive, though many of his interiors did. He also did notable work on the Supreme Court interior (1806–1807), and his masterpiece was the Basilica of the Assumption of the Virgin Mary, Baltimore (1805–1821). Latrobe claimed, "I am a bigoted Greek in the condemnation of the Roman architecture", but he did not rigidly impose Greek forms. "Our religion," he said, "requires a church wholly different from the temple, our legislative assemblies and our courts of justice, buildings of entirely different principles from their basilicas; and our amusements could not possibly be performed in their theatres or amphitheatres." His circle of junior colleagues became an informal school of Greek revivalists, and his influence shaped the next generation of American architects. Greek revival architecture in North America also included attention to interior decoration. The role of American women was critical for introducing a wholistic style of Greek-inspired design to American interiors. Innovations such as the Greek-inspired "sofa" and the "klismos chair" allowed both American women and men to pose as Greeks in their homes, and also in the numerous portraits of the period that show them lounging in Greek-inspired furniture. The second phase in American Greek Revival saw the pupils of Latrobe create a monumental national style under the patronage of banker and hellenophile Nicholas Biddle, including such works as the Second Bank of the United States by William Strickland (1824), Biddle's home "Andalusia" by Thomas U. 
Walter (1835–1836), and Girard College, also by Walter (1833–1847). New York saw the construction (1833) of the row of Greek temples at Sailors' Snug Harbor on Staten Island. These had varied functions within a home for retired sailors. From 1820 to 1850, the Greek Revival style dominated the United States, such as the Benjamin F. Clough House in Waltham, Massachusetts. It could also be found as far west as Springfield, Illinois. Examples of vernacular Greek Revival continued to be built even farther west, such as in Charles City, Iowa. This style was very popular in the south of the US, where the Palladian colonnade was already popular in façades, and many mansions and houses were built for the merchants and rich plantation owners; Millford Plantation is regarded as one of the finest Greek Revival residential examples in the country. Other notable American architects to use Greek Revival designs included Latrobe's student Robert Mills, who designed the Monumental Church and the Washington Monument, as well as George Hadfield and Gabriel Manigault. At the same time, the popular appetite for the Greek was sustained by architectural pattern books, the most important of which was Asher Benjamin's The Practical House Carpenter (1830). This guide helped create the proliferation of Greek homes seen especially in northern New York State and in Connecticut's former Western Reserve in northeastern Ohio. In Canada, Montreal architect John Ostell designed a number of prominent Greek Revival buildings, including the first building on the McGill University campus and Montreal's original Custom House, now part of the Pointe-à-Callière Museum. The Toronto Street Post Office, completed in 1853, is another Canadian example. The discovery that the Greeks had painted their temples influenced the later development of the style. 
The archaeological dig at Aegina and Bassae in 1811–1812 by Cockerell, Otto Magnus von Stackelberg, and Karl Haller von Hallerstein had disinterred painted fragments of masonry daubed with impermanent colours. This revelation was a direct contradiction of Winckelmann's notion of the Greek temple as timeless, fixed, and pure in its whiteness. In 1823, Samuel Angell discovered the coloured metopes of Temple C at Selinunte, Sicily and published them in 1826. The French architect Jacques Ignace Hittorff witnessed the exhibition of Angell's find and endeavoured to excavate Temple B at Selinus. His imaginative reconstructions of this temple were exhibited in Rome and Paris in 1824 and he went on to publish these as Architecture polychrome chez les Grecs (1830) and later in Restitution du temple d'Empédocle à Sélinonte (1851). The controversy was to inspire von Klenze's Aegina room at the Munich Glyptothek of 1830, the first of his many speculative reconstructions of Greek colour. Hittorff lectured in Paris in 1829–1830 that Greek temples had originally been painted ochre yellow, with the moulding and sculptural details in red, blue, green and gold. While this may or may not have been the case with older wooden or plain stone temples, it was definitely not the case with the more luxurious marble temples, where colour was used sparingly to accentuate architectural highlights. Similarly, Henri Labrouste proposed a reconstruction of the temples at Paestum to the Académie des Beaux-Arts in 1829, decked out in startling colour, inverting the accepted chronology of the three Doric temples, thereby implying that the development of the Greek orders did not increase in formal complexity over time, i.e., the evolution from Doric to Corinthian was not inexorable. Both events were to cause a minor scandal. The emerging understanding that Greek art was subject to changing forces of environment and culture was a direct assault on the architectural rationalism of the day.
- J. Turner (ed.), Encyclopedia of American art before 1914, New York, p. 198.
- Crook 1972, pp. 1–6
- British Museum entry for the Antiquities of Athens
- Crook 1972, pp. 13–18.
- Though Giles Worsley detects the first Grecian influenced architectural element in the windows of Nuneham Park from 1756, see Giles Worsley, "The First Greek Revival Architecture", The Burlington Magazine, Vol. 127, No. 985 (April 1985), pp. 226–229.
- FitzLyon, K.; Zinovieff, K.; Hughes, J. (2003). The Companion Guide to St Petersburg. Companion Guides. p. 78. ISBN 9781900639408. Retrieved 2015-06-24.
- Caroline Winterer, The Culture of Classicism: Ancient Greece and Rome in American Intellectual Life, 1780–1920 (Baltimore: Johns Hopkins University Press, 2002), pp. 44–98.
- Hamlin 1944, p. 339
- Federal Writers' Project (1937), Washington, City and Capital: Federal Writers' Project, Works Progress Administration / United States Government Printing Office, p. 126.
- The Journal of Latrobe, quoted in Hamlin, Greek Revival (1944), p. 36 (Dover Edition).
- Caroline Winterer, The Mirror of Antiquity: American Women and the Classical Tradition, 1780–1900 (Ithaca: Cornell University Press, 2007), pp. 102–41
- Gebhard & Mansheim, Buildings of Iowa, Oxford University Press, New York, 1993, p. 362.
- Jenrette, Richard Hampton (2005). Adventures with Old Houses, p. 179. Wyrick & Company.
- Jacob Spon, Voyage d'Italie, de Dalmatie, de Grèce et du Levant, 1678
- George Wheler, Journey into Greece, 1682
- Richard Pococke, A Description of the East and Some Other Countries, 1743–45
- R. Dalton, Antiquities and Views in Greece and Egypt, 1751
- Comte de Caylus, Recueil d'antiquités, 1752–67
- Marc-Antoine Laugier, Essai sur l'architecture, 1753
- J. J. Winckelmann, Gedanken über die Nachahmung der griechischen Werke in der Malerei und Bildhauerkunst, 1755
- J. D. Le Roy, Les Ruines des plus beaux monuments de la Grèce, 1758
- James Stuart and Nicholas Revett, The Antiquities of Athens, 1762–1816
- J. J. Winckelmann, Anmerkungen über die Baukunst der alten Tempel zu Girgenti in Sicilien, 1762
- J. J. Winckelmann, Geschichte der Kunst des Alterthums, 1764
- Thomas Major, The Ruins of Paestum, 1768
- Stephen Riou, The Grecian Orders, 1768
- R. Chandler et al., Ionian Antiquities, 1768–1881
- G. B. Piranesi, Différentes vues... de Pesto, 1778
- J. J. Barthélemy, Voyage du jeune Anacharsis en Grèce dans le milieu du quatrième siècle avant l'ère vulgaire, 1787
- William Wilkins, The Antiquities of Magna Grecia, 1807
- Leo von Klenze, Der Tempel des olympischen Jupiter zu Agrigent, 1821
- S. Angell and T. Evans, Sculptured Metopes Discovered among the Ruins of Selinus, 1823
- Peter Oluf Brøndsted, Voyages et recherches dans la Grèce, 1826–1830
- Otto Magnus Stackelberg, Der Apollotempel zu Bassae in Arcadien, 1826
- J. I. Hittorff and L. von Zanth, Architecture antique de la Sicile, 1827
- C. R. Cockerell et al., Antiquities of Athens and Other Places of Greece, Sicily, etc., 1830
- A. Blouet, Expédition scientifique de Morée, 1831–38
- F. Kugler, Über die Polychromie der griechischen Architektur und Skulptur und ihre Grenzen, 1835
- C. R. Cockerell, The Temples of Jupiter Panhellenius at Aegina and of Apollo Epicurius at Bassae, 1860
Architectural Pattern Books
- Asher Benjamin, The American Builder's Companion, 1806
- Asher Benjamin, The Builder's Guide, 1839
- Asher Benjamin, The Practical House Carpenter, 1830
- Owen Biddle, The Young Carpenter's Assistant, 1805
- William Brown, The Carpenter's Assistant, 1848
- Minard Lafever, The Young Builder's General Instructor, 1829
- Minard Lafever, The Beauties of Modern Architecture, 1833
- Thomas U. Walter, Two Hundred Designs for Cottages and Villas, 1846
- Winterer, Caroline. The Culture of Classicism: Ancient Greece and Rome in American Intellectual Life, 1780–1910 (Baltimore: Johns Hopkins University Press, 2002)
- Winterer, Caroline. The Mirror of Antiquity: American Women and the Classical Tradition, 1780–1900 (Ithaca: Cornell University Press, 2007)
- Crook, Joseph Mordaunt (1972), The Greek Revival: Neo-Classical Attitudes in British Architecture 1760–1870, John Murray, ISBN 0-7195-2724-4
- Hamlin, Talbot (1944), Greek Revival Architecture in America, Ohio University Press
- Kennedy, Roger G. (1989), Greek Revival America
- Wiebenson, Dora (1969), The Sources of Greek Revival Architecture
- Hoecker, Christopher (1997), "Greek Revival America? Reflections on uses and functions of antique architectural patterns in American architecture between 1760–1860", Hephaistos — New approaches in Classical Archaeology and related fields, 15, pp. 197–241
- Ruffner, Jr., Clifford H., Study of Greek Revival Architecture in the Seneca and Cayuga Lake Regions
- Tyler, Norman and Ilene R. Tyler (2014). Greek Revival in America: Tracing its architectural roots to ancient Athens. Ann Arbor. ISBN 9781503149984.
Quantum communication networks require single photon frequency converters, whether to shift photons between wavelength channels, to shift photons to the operating wavelength of a quantum memory, or to shift photons of different wavelengths to be of the same wavelength, to enable quantum interference. Here, we demonstrate frequency conversion of laser pulses attenuated to the single photon regime in an integrated silicon-on-insulator device using four-wave mixing Bragg scattering, with conversion efficiencies of up to 12%, or 32% after correcting for nonlinear loss created by the pump lasers. The frequency shift can be conveniently chosen by tuning of the pump frequencies. We demonstrate that such frequency conversion enables interference between photons at different frequencies. © 2016 Optical Society of America
In quantum information, there are several applications for a single photon frequency converter. Single photon sources emitting visible wavelengths need to be shifted to telecom wavelengths for long distance transmission, and telecom photons can be shifted to visible for more convenient detection [1,2]. Quantum memories, required to synchronize single photons in time, are often based on atomic transitions and compatible only with specific frequencies, so it is vital to be able to shift a photon to the operating wavelength of a memory. Many quantum optics experiments rely on Hong-Ou-Mandel (HOM) interference between photons of the same frequency, such as quantum teleportation, entanglement swapping, and two-photon logic gates for quantum computation. In particular, quantum communication over extended distances relies on quantum repeater schemes combining multiple photon sources, memories, and HOM interferences, so frequency conversion is essential to interfacing these separate components, and should ideally take place in an integrated device so that the converter is compatible with other photonic-chip-based components.
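The HOM interference these applications rely on is easy to sketch numerically. The snippet below is our illustration, not code from the paper: for two single photons with identical Gaussian spectra meeting at a balanced beamsplitter, the textbook coincidence probability as a function of relative delay τ is P(τ) = ½(1 − e^(−σ²τ²)), where σ is the rms width of the spectral intensity; the 12.5 GHz bandwidth used below is purely illustrative.

```python
import numpy as np

def hom_coincidence(tau, sigma):
    """Coincidence probability at a balanced beamsplitter for two single
    photons with identical Gaussian spectra (rms angular-frequency width
    sigma, rad/s) arriving with relative delay tau (s):
    P(tau) = 0.5 * (1 - |<psi1|psi2>|^2) with |<psi1|psi2>|^2 = exp(-(sigma*tau)^2)."""
    overlap_sq = np.exp(-(sigma * tau) ** 2)  # squared wavepacket overlap
    return 0.5 * (1.0 - overlap_sq)

sigma = 2 * np.pi * 12.5e9  # illustrative rms bandwidth, rad/s

print(hom_coincidence(0.0, sigma))   # 0.0: perfect overlap, full HOM dip
print(hom_coincidence(1e-9, sigma))  # 0.5: delay >> coherence time, distinguishable
```

Zero coincidences at zero delay is the signature of indistinguishability; any residual frequency mismatch between the two photons reduces the overlap and lifts the dip, which is why frequency conversion matters for interfacing separate sources.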
It is also generally desirable to multiplex many frequency channels in quantum communications and optical quantum computing, and route photons between channels. Frequency conversion of single photons can be realized using second-order (χ(2)) nonlinear processes such as sum and difference frequency generation, or a third-order (χ(3)) nonlinear process called four-wave mixing Bragg scattering (FWMBS). The frequency shift via a χ(2) nonlinear process is equal to the frequency of the pump laser, and tends to be very large, so the focus has been on shifting between visible and telecommunication wavelengths. In FWMBS the shift is equal to the difference Δ between the frequencies of two pumps, which allows more flexibility when small shifts are required. In addition, as FWMBS relies on a χ(3) nonlinearity, this allows frequency conversion in a wide range of materials, because all materials exhibit some χ(3) response, whereas those with a χ(2) response are more limited. In particular, frequency conversion in silicon is a goal because it opens up CMOS compatibility and integration into more complex devices. FWMBS has been demonstrated in optical fibers [10–12], although either low conversion efficiencies or noise introduced by spontaneous Raman scattering of the pump have been problematic, until recently when Raman noise has been reduced in a liquid nitrogen cooled fiber. Recently FWMBS has been demonstrated in silicon nitride waveguides [14–16], for conversion between 980 nm and 1550 nm, and for small separations around 980 nm, though not for small separations around 1550 nm. In this work, we demonstrate FWMBS in a silicon device in the single photon regime, converting pulses containing an average of one photon between telecom wavelengths. A silicon frequency converter has the potential for integration into more complex photonic and electronic circuits, where it could be one component in a quantum relay device or a single photon router.
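The frequency bookkeeping, and the idealized efficiency of phase-matched Bragg scattering, can be sketched in a few lines. This is our own illustration, not the paper's code: the converted photon appears at the signal frequency shifted by the pump separation Δ, and the standard lossless coupled-mode model for CW pumps at perfect phase matching gives η = sin²(2γ√(P₁P₂)L). The 1538.9 nm signal and 200 GHz shift match the experiment described later; the nonlinear parameter γ and the pump powers below are hypothetical.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def shifted_wavelength(lam_signal_nm, shift_hz):
    """Wavelength of the Bragg-scattered output when the signal is
    shifted up in frequency by shift_hz (the pump separation)."""
    nu = C / (lam_signal_nm * 1e-9) + shift_hz
    return C / nu * 1e9

def ideal_efficiency(gamma, p1, p2, length):
    """Phase-matched CW conversion efficiency of the idealized lossless
    coupled-mode model: eta = sin^2(kappa * L), kappa = 2*gamma*sqrt(P1*P2).
    gamma in 1/(W*m), powers in W, length in m."""
    kappa = 2.0 * gamma * np.sqrt(p1 * p2)
    return np.sin(kappa * length) ** 2

# 200 GHz blue shift of the 1538.9 nm signal (values from the experiment)
print(shifted_wavelength(1538.9, 200e9))  # ~1537.3 nm

# Hypothetical silicon-wire numbers: gamma = 300 /(W*m), 50 mW pumps, 2 cm length
print(ideal_efficiency(300.0, 0.05, 0.05, 0.02))
```

The sin² form makes the practical point of the model clear: in the lossless limit the efficiency would reach 100% at κL = π/2, so the quoted sub-unity efficiencies reflect nonlinear loss and finite pump power rather than a fundamental ceiling.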
As crystalline silicon has a very narrow Raman peak at a separation of 15.6 THz from the pump, Raman background is easily avoided, so silicon has the advantage of being effectively Raman noise free at room temperature. We demonstrate single photon regime interference between two input frequencies, allowing the detection of a photon in a coherent superposition of the two frequencies. The principle of FWMBS is shown in Fig. 1(a). In other four-wave mixing processes, such as parametric amplification or phase conjugation, energy is transferred from a pump laser to the signal, and spontaneous four-wave mixing can occur, introducing excess noise photons. In FWMBS the process cannot occur spontaneously in the absence of a signal photon. This feature makes FWMBS intrinsically noiseless. While noiseless frequency conversion has applications in classical communications, it is critical in quantum optics, because for single photon or few photon signals even a small amount of noise can be fatal to the encoded information, or can destroy quantum coherence. The experimental setup shown in Fig. 1(b) consists of a broadband mode-locked fiber laser extending from 1538 nm to 1565 nm, with a 40 MHz repetition rate. It is filtered into two pump pulses, of equal power, and an input signal, using a spectral pulse shaper (SPS: Finisar Waveshaper). The SPS is able to independently control the transmission and phase shift as a function of frequency. The pump pulses are amplified by an erbium doped fiber amplifier and then bandpass filters are used to remove spontaneous emission. A tunable fiber delay line is used in the signal channel to synchronize it with the pump pulses at a 99:1 coupler, which re-combines the pumps and signal while attenuating the signal by 20 dB. This leaves the signal weak but far above the single photon regime, allowing classical characterization.
The common output is coupled to the chip, which consists of a 2 cm silicon-on-insulator (SOI) nanowire with grating couplers at input and output, with a 450 × 220 nm cross-section, fabricated using ePIXfab at IMEC with 193 nm deep UV lithography. FWMBS driven by the pump pulses occurs on the chip, and for classical characterization the output is then sent to an optical spectrum analyzer (OSA). For single photon regime measurements the frequencies are separated out by an arrayed waveguide grating (AWG) with 100 GHz channel spacing, followed by bandpass filters for additional pump light suppression, then sent to superconducting nanowire single photon detectors (SPDs, 10% efficiency). The extra attenuation to reach the single photon regime was applied using the SPS, and the average number of photons in the signal was 1 per pulse at the start of the chip, based on the total output counts at zero pump power, and extrapolating from the known losses and detection efficiencies. The signal wavelength was set to 1538.9 nm and the longer wavelength pump was set to 1563.6 nm. Spontaneous four-wave mixing of the pumps is suppressed by phase mismatch. The shorter pump wavelength was varied to give pump frequency separations of 100, 200, 300, and 400 GHz. The current limitation on going to wider separations is that stimulated four-wave mixing between the pumps will create new pump frequencies and start to contaminate the signal with background photons, but this could be avoided by moving the pumps to longer wavelengths. The bandwidth of each pump was kept at 12.5 GHz and the signal bandwidth was 25 GHz. When pulsed pumps are used, it is preferable to have the signal shorter in time, and hence larger in bandwidth; otherwise components of the signal at different times will experience different pump fields and hence will be converted at different efficiencies.
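Energy conservation in FWMBS fixes the output channels at fin ± Δ. The short sketch below computes the red- and blue-shifted output wavelengths for the signal wavelength and pump separations quoted above; the function name and rounding of constants are our own, not from the paper.

```python
# Hedged sketch: FWMBS shifts the input by the pump separation Delta,
# f_out = f_in ± Delta. The signal wavelength and separations below match
# the experiment's settings; the printed channels are illustrative.
C = 299_792_458.0  # speed of light, m/s

def shifted_wavelengths_nm(lambda_in_nm, delta_hz):
    """Return (red-shifted, blue-shifted) output wavelengths in nm."""
    f_in = C / (lambda_in_nm * 1e-9)       # input frequency, Hz
    red = C / (f_in - delta_hz) * 1e9      # lower frequency -> longer wavelength
    blue = C / (f_in + delta_hz) * 1e9
    return red, blue

for delta_ghz in (100, 200, 300, 400):
    red, blue = shifted_wavelengths_nm(1538.9, delta_ghz * 1e9)
    print(f"Delta = {delta_ghz} GHz: red {red:.2f} nm, blue {blue:.2f} nm")
```

At 1538.9 nm a 200 GHz shift corresponds to roughly 1.6 nm in wavelength, so the converted peaks fall well within the tuning range of the AWG channels.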
Figure 2(a) shows output spectra measured on the OSA for the different pump separations, with the total average pump power immediately before the chip fixed at 2.8 mW. It can be seen that significant amounts of input signal are both red-shifted and blue-shifted by an amount equal to the pump separation Δ, and that, for the smaller separations, secondary and even tertiary side peaks are generated. This is because multiple FWMBS processes can occur efficiently, so the input is shifted in either direction, and with sufficient pump power it can be shifted multiple times. In contrast, FWMBS in fiber involves much longer interaction lengths, so the phase-matching of the process becomes more critical, and it is unlikely that multiple processes will be phase-matched simultaneously. In a short waveguide, the phase-matching can only have a small effect: the phase mismatch due to dispersion can be approximately described by δβ = β2 Δ (ωin − ωp1) for up-shifting and δβ = β2 Δ (ωin − ωp2) for down-shifting, with the group-velocity dispersion β2 expected to be 3.7 × 10−24 s2/m for the waveguide at wavelengths around 1550 nm, based on a numerical simulation of the waveguide mode. Over a length of 2 cm and for the frequency separations used here, this reduces the efficiency of FWMBS to a fraction sinc2(δβL/2) of its phase-matched value, a reduction of only ≈ 4 × 10−5. There is also a nonlinear term in the phase mismatch proportional to the difference in pump powers, γ(P1 − P2): if one of the pump beams were several times stronger than the other then this could become significant, but that is not the case here. Even in that case, since the phase mismatch due to dispersion is negligible, the nonlinear phase mismatch would affect the up- and down-shifting processes equally, so it could not be used to suppress spurious processes. Hence the efficiency with which the input can be shifted to a desired output frequency is limited by the spurious scattering processes to other frequencies.
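As a sanity check on the quoted 4 × 10−5 figure, the sketch below evaluates the sinc2 phase-matching factor for the smallest pump separation of 100 GHz. The 3 THz signal-to-pump separation is an assumption (from the ~25 nm spacing between the signal and pump wavelengths), and the frequency differences are treated as ordinary rather than angular frequencies, which is what reproduces the quoted order of magnitude.

```python
import numpy as np

# Hedged estimate of the dispersion-induced phase-matching penalty.
# beta2 and L are the paper's values; the 3 THz signal-to-pump
# separation and the frequency convention are assumptions.
beta2 = 3.7e-24        # group-velocity dispersion, s^2/m
L = 0.02               # waveguide length, m
delta = 100e9          # smallest pump separation, Hz
f_sep = 3.0e12         # assumed signal-to-pump separation, Hz

delta_beta = beta2 * delta * f_sep        # phase mismatch, 1/m
x = delta_beta * L / 2
factor = np.sinc(x / np.pi) ** 2          # np.sinc(y) = sin(pi*y)/(pi*y)
print(f"sinc^2 factor = {factor:.8f}, reduction = {1 - factor:.2e}")
```

With these assumed numbers the reduction comes out in the low 10⁻⁵ range, consistent with the claim that dispersion is negligible over 2 cm.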
Increasing the length of the waveguide could improve this, but only if the dispersion is correct to phase-match the desired process, and if propagation loss in the waveguide is sufficiently low, as shown in section 3. In the waveguide used, the coupling loss to fiber at the signal wavelength was about 5 dB for each grating coupler, which could be avoided using low-loss inverse tapers. The couplers are currently the largest source of loss, and improving this would enable frequency conversion of heralded single photons from existing photon pair sources. The propagation loss was 2 dB/cm when the pump power was close to zero. This could be avoided by using a shorter waveguide (a correspondingly higher pump power would be required, which was not possible here due to technical limitations), or by using lower-loss silicon waveguide geometries, which have been demonstrated. However, when the pump power was increased, the signal experienced increased loss due to cross two-photon absorption with the pump, as shown from a classical measurement in Fig. 2(b). This is the only significant source of loss which is intrinsic to the material, and so it cannot be avoided. The measured variation of loss with pump power was used to normalize out loss when calculating conversion efficiencies. After the chip, the loss from the AWG was about 2 dB, and the single photon detector efficiency was 10%. Figure 2(c) shows the count rates in the first red- and blue-shifted peaks as a function of pump power for a single photon regime input signal, with a 200 GHz pump separation. Comparing these count rates to that in the input channel with the pumps off (265 kHz), maximum conversion efficiencies of 12% in the red direction and 11% in the blue are extracted. This occurs at a power of 1.6 mW, beyond which the counts decrease due to increased nonlinear loss and photons coupling into the secondary peaks seen in Fig. 2(a).
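The raw and loss-corrected efficiencies are related by the signal transmission under pumping. A minimal sketch of that bookkeeping, where the input count rate and raw efficiency come from the text but the nonlinear transmission factor is inferred (raw/corrected ≈ 0.12/0.32) rather than measured here:

```python
# Hedged sketch relating raw and loss-corrected conversion efficiency.
# R_input and the 12% raw efficiency are from the text; the nonlinear
# transmission T_nonlinear is an inferred illustrative value, not data.
R_input = 265e3                 # counts/s in input channel, pumps off
R_converted = 0.12 * R_input    # counts/s in the red-shifted peak

eta_raw = R_converted / R_input            # raw conversion efficiency
T_nonlinear = 0.375                        # assumed signal transmission at 1.6 mW
eta_corrected = eta_raw / T_nonlinear      # efficiency with nonlinear loss factored out

print(f"raw efficiency = {eta_raw:.0%}, loss-corrected = {eta_corrected:.0%}")
```

Dividing out the pump-induced transmission is what takes the 12% raw figure to the ~32% value quoted for the loss-corrected case.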
The input channel is depleted to 4% of its original count rate, or 13% if the increase of nonlinear loss with pump power is factored out. If the other counts are corrected for nonlinear loss, as shown in Fig. 2(d), maximum conversion efficiencies of 32% and 31% are extracted in the red and blue directions respectively. These loss-corrected conversion efficiencies are the relevant ones for realizing a frequency beamsplitter as previously proposed: here, it is not possible to realize a 50:50 splitter between two channels, due to the spurious processes coupling photons into other channels, but we show below a method by which we can still demonstrate a high-contrast interference fringe. A background level of noise photons [Fig. 2(d)] has been subtracted from the counts, which is due to Raman scattering of the pump in the fibers before and after the chip, and to spontaneous four-wave mixing of the pumps. This becomes more significant at higher pump powers, so in the following measurements the total pump power was fixed at 1 mW, where the background was about 10 dB smaller than the level of converted signal photons. It is expected that the weak coherent inputs could be replaced with heralded single photons and the conversion efficiencies would be unaffected, since the wavelength and bandwidth of the inputs are comparable to those generated by existing silicon-based telecom pair-photon sources, though the signal-to-noise ratio would clearly depend on the brightness of the source. A split-step Fourier simulation was used to model the experiment. We assume this approach is able to describe both the classical and single photon regime experiments, with the complex amplitude of a weak classical field or the wavefunction of a photon undergoing the same evolution along the waveguide. The simulation is based on the nonlinear Schrödinger equations for the combined pump and combined signal amplitudes. Figure 3(a) shows the results as a function of total pump power.
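The paper's simulation couples the pump and signal amplitudes; as an illustration of the split-step Fourier method itself, here is a minimal single-field propagator for a lossless Kerr nonlinear Schrödinger equation. All parameters are illustrative assumptions, and the coupled-field, two-photon-absorption physics of the actual model is omitted.

```python
import numpy as np

# Minimal split-step Fourier propagator for a lossless Kerr NLSE,
#   dA/dz = -i*(beta2/2) d^2A/dT^2 + i*gamma*|A|^2 A.
# Illustrates the numerical method only; the paper's model couples the
# pump and signal fields and includes linear and two-photon loss.
def split_step(A0, dt, length, n_steps, beta2, gamma):
    n = A0.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)    # angular frequency grid
    dz = length / n_steps
    lin = np.exp(0.5j * beta2 * w**2 * dz)     # dispersion step (frequency domain)
    A = A0.astype(complex)
    for _ in range(n_steps):
        A = np.fft.ifft(lin * np.fft.fft(A))             # linear part of the step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)   # Kerr nonlinear phase
    return A

# Assumed illustrative parameters (not the device's).
t = np.linspace(-5e-12, 5e-12, 1024)
A0 = np.exp(-t**2 / (2 * (1e-12)**2))          # 1 ps Gaussian pulse
A = split_step(A0, t[1] - t[0], 0.02, 200, 3.7e-24, 200.0)
print("energy conserved:", np.isclose(np.sum(np.abs(A)**2), np.sum(np.abs(A0)**2)))
```

Because both the dispersion and Kerr steps are unitary, pulse energy is conserved, which makes a convenient correctness check for the propagator.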
The simulation is in good agreement with the experimental results, with a maximum corrected conversion efficiency of 33%. Improving this conversion efficiency requires dispersion engineering of the waveguide, combined with a longer length and a low propagation loss. This allows a desired Bragg scattering process to be phase-matched, with sufficient phase mismatch to suppress the other processes. In particular the third-order dispersion should be increased relative to the group-velocity dispersion, to maximize the blue-shifted signal. Figure 3(b) shows the simulation results when the third-order dispersion is increased by a factor of 1000, with a length of 10 cm, and the linear loss reduced to 0.5 dB/cm (the nonlinear loss coefficient is unchanged, as it is a material parameter). Conversion efficiencies >60% are possible, and a 50:50 beamsplitter can be realized between two neighboring frequency channels. A more favourable ratio of group-velocity dispersion to third-order dispersion could be found by increasing the width of the waveguide, and this is also expected to decrease propagation losses. A separate numerical simulation shows that a zero-dispersion wavelength exists at 1550 nm for a 700 × 220 nm waveguide, which would allow phase-matching of the desired process. Losses as low as 0.3 dB/cm have been seen in etchless silicon waveguides, so 0.5 dB/cm is realistic.

4. Interference between frequency channels

Here, we demonstrate the application of FWMBS to detecting photons in a superposition state of two frequencies. Recently, Ramsey interferometry of single photons was demonstrated using cascaded FWMBS to split photons into two frequencies, then recombine them after a variable delay. There, the interference contrast was limited by the conversion efficiency of FWMBS, which is not the case here.
Further, there the interference fringe was only stable because the same pumps were used to split the photons and recombine them, so this cannot be used to detect frequency superposition states generated by a remote source unless the FWMBS pump lasers are used to create the original superposition state and are then transmitted along with the state. As we show below, the relative phase between the two pumps must otherwise be stabilized. The SPS is configured to have two signal pass-bands, providing two input signals at ω1 and ω2. The quantum state can be expressed as the product of two coherent states [27, 28]. Here, we observe interference between two frequencies by setting Δ = (ω1 − ω2)/2, so that both frequencies are shifted to the center frequency ωout = (ω1 + ω2)/2 with approximately equal probability ε. With an average pump power of 1 mW, ε is approximately 20%. In our experiment, Δ was kept at 200 GHz, and ω2 was added 400 GHz below ω1. The probability of detecting a photon at ωout is then proportional to 1 + cos(φ), where φ is the relative phase between the two input frequencies. Figure 4(a) shows two experimental spectra measured on the OSA, demonstrating constructive and destructive interference of the peak at ωout. The SPS has been used to tune the phase of the input signal at ω2. Destructive interference almost completely removes the peak at ωout, while constructive interference enhances it considerably. A smaller variation is seen in the input peaks, so that energy is conserved. Figure 4(b) shows an interference fringe in the single photon regime as the phase is varied between 0 and 2π. The raw visibility is 80%, calculated as (Maximum − Minimum)/(Maximum + Minimum) for this channel. After correction for the background level of detections created by the pump, this visibility is increased to 86%.
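The fringe visibility and its background correction can be written down compactly. In the sketch below the maximum/minimum counts and the background level are illustrative values chosen to reproduce the quoted 80% raw and ≈86% corrected visibilities, not the measured data:

```python
# Hedged sketch of the fringe visibility calculation. The count values
# and background level are illustrative, chosen only to reproduce the
# quoted 80% raw / ~86% corrected visibilities.
def visibility(cmax, cmin, background=0.0):
    """Fringe visibility (max - min)/(max + min), optionally background-subtracted."""
    cmax, cmin = cmax - background, cmin - background
    return (cmax - cmin) / (cmax + cmin)

v_raw = visibility(900, 100)                   # raw fringe maximum/minimum counts
v_corr = visibility(900, 100, background=35)   # assumed pump-induced background
print(f"raw visibility = {v_raw:.2f}, corrected = {v_corr:.2f}")
```

Subtracting a common background from both extremes always raises the visibility, since it shrinks the denominator while leaving the numerator unchanged.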
The corrected visibility is thought to be limited by imperfect overlap in time or spectrum between converted photons from the two input frequencies: although the spectra appeared well-overlapped on the OSA, the nonlinear process could leave differing frequency chirps on the up- and down-converted signals, or they could have experienced different dispersion, leading to imperfect interference. In conclusion, we have demonstrated frequency conversion of single photon regime signals using FWMBS in an integrated silicon device. The frequency shift is flexible and easily tuned using the separation between two pump frequencies. Conversion efficiency to a particular frequency is limited by spurious Bragg scattering processes, which scatter the input to other undesired frequencies. These processes could be eliminated using a long dispersion-engineered waveguide, so that only the desired process is phase-matched, although it would also be necessary to reduce the propagation losses. Even with low conversion efficiencies, it is possible to observe high contrast interference between disparate input frequencies, by converting both input frequencies with equal probability to a central frequency half-way between them. This enables the detection of coherent superpositions of frequencies, which is useful in quantum key distribution as a test of security when time-frequency entanglement of photon pairs is used. This work was funded by Australian Research Council (ARC) Centre of Excellence CUDOS (CE110001018), ARC Laureate Fellowship (FL120100029), and ARC Discovery Early Career Researcher Award (DE120100226).

References and links

1. C. Langrock, E. Diamanti, R. V. Roussev, Y. Yamamoto, M. M. Fejer, and H. Takesue, “Highly efficient single-photon detection at communication wavelengths by use of upconversion in reverse-proton-exchanged periodically poled LiNbO3 waveguides,” Opt. Lett. 30(13), 1725–1727 (2005). [CrossRef] [PubMed]
5. D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H.
Weinfurter, and A. Zeilinger, “Experimental quantum teleportation,” Nature 390(6660), 575–579 (1997). [CrossRef]
6. N. Sangouard, C. Simon, H. de Riedmatten, and N. Gisin, “Quantum repeaters based on atomic ensembles and linear optics,” Rev. Mod. Phys. 83(1), 33–80 (2011). [CrossRef]
8. M. G. Raymer, S. J. van Enk, C. J. McKinstrie, and H. J. McGuinness, “Interference of two photons of different color,” Opt. Commun. 283(5), 747–752 (2010). [CrossRef]
10. H. J. McGuinness, M. G. Raymer, C. J. McKinstrie, and S. Radic, “Quantum frequency translation of single-photon states in a photonic crystal fiber,” Phys. Rev. Lett. 105(9), 093604 (2010). [CrossRef] [PubMed]
12. P. S. Donvalkar, V. Venkataraman, S. Clemmen, K. Saha, and A. L. Gaeta, “Frequency translation via four-wave mixing Bragg scattering in Rb filled photonic bandgap fibers,” Opt. Lett. 39(6), 1557–1560 (2014). [CrossRef] [PubMed]
13. A. Farsi, S. Clemmen, S. Ramelow, and A. L. Gaeta, “Low-noise quantum frequency translation of single photons,” in CLEO:2015, OSA Technical Digest (online) (Optical Society of America, 2015), paper FM3A.4.
14. I. Agha, M. Davanço, B. Thurston, and K. Srinivasan, “Low-noise chip-based frequency conversion by four-wave-mixing Bragg scattering in SiNx waveguides,” Opt. Lett. 37(14), 2997–2999 (2012). [CrossRef] [PubMed]
15. I. Agha, S. Ates, M. Davanço, and K. Srinivasan, “A chip-scale, telecommunications-band frequency conversion interface for quantum emitters,” Opt. Express 21(18), 21628–21638 (2013). [CrossRef] [PubMed]
16. Q. Li, M. Davanco, and K. Srinivasan, “Efficient and low-noise single-photon-level frequency conversion interfaces using silicon nanophotonics,” arXiv:1510.02527v1 (2015).
17. A. H. Gnauck, R. M. Jopson, C. J. McKinstrie, J. C. Centanni, and S. Radic, “Demonstration of low-noise frequency conversion by Bragg scattering in a fiber,” Opt. Express 14(20), 8989–8994 (2006). [CrossRef] [PubMed]
18. H. J. McGuinness, M. G. Raymer, and C. J.
McKinstrie, “Theory of quantum frequency translation of light in optical fiber: application to interference of two photons of different color,” Opt. Express 19(19), 17876–17907 (2011). [CrossRef] [PubMed]
19. S. Lefrancois, A. S. Clark, and B. J. Eggleton, “Optimizing optical Bragg scattering single-photon frequency conversion,” Phys. Rev. A 91(1), 013837 (2015). [CrossRef]
20. K. Harada, H. Takesue, H. Fukuda, T. Tsuchizawa, T. Watanabe, K. Yamada, Y. Tokura, and S. Itabashi, “Generation of high-purity entangled photon pairs using silicon wire waveguide,” Opt. Express 16(25), 20368–20373 (2008). [CrossRef] [PubMed]
21. C. A. Husko, A. S. Clark, M. J. Collins, A. De Rossi, S. Combrié, G. Lehoucq, I. H. Rey, T. F. Krauss, C. Xiong, and B. J. Eggleton, “Multi-photon absorption limits to heralded single photon sources,” Sci. Rep. 3, 3087 (2013). [CrossRef] [PubMed]
22. C. Xiong, C. Monat, A. S. Clark, C. Grillet, G. D. Marshall, M. J. Steel, J. Li, L. O’Faolain, T. F. Krauss, J. G. Rarity, and B. J. Eggleton, “Slow-light enhanced correlated photon pair generation in a silicon photonic crystal waveguide,” Opt. Lett. 36(17), 3413–3415 (2011). [CrossRef] [PubMed]
23. G. Agrawal, Nonlinear Fiber Optics, 5th ed. (Academic, 2012).
26. A. Farsi, S. Clemmen, S. Ramelow, and A. L. Gaeta, “Ramsey interference with single photons,” arXiv:1601.01105 (2016).
27. J. Nunn, L. J. Wright, C. Söller, L. Zhang, I. A. Walmsley, and B. J. Smith, “Large-alphabet time-frequency entangled quantum key distribution by means of time-to-frequency conversion,” Opt. Express 21(13), 15959–15973 (2013). [CrossRef] [PubMed]
28. Z. Zhang, J. Mower, D. Englund, F. N. Wong, and J. H. Shapiro, “Unconditional security of time-energy entanglement quantum key distribution using dual-basis interferometry,” Phys. Rev. Lett. 112(12), 120506 (2014). [CrossRef] [PubMed]
United Nations (UN) (2000) Millennium Development Goals. New York: United Nations. United Nations Education Science and Culture Organization (UNESCO) (2000) The Dakar Framework for Action. Adopted by the World Education Forum, Dakar, Senegal, 26-28 April 2000. Paris: UNESCO. United Nations Education Science and Culture Organization (UNESCO) (2003) EFA global monitoring report: Gender equality and the leap to equality. Paris: UNESCO. United Nations Education Science and Culture Organization (UNESCO) (2005) EFA global monitoring report: The quality imperative. Paris: UNESCO. United Nations Education Science and Culture Organization (UNESCO) (2006) Decentralization of education in Egypt. Country report at the UNESCO seminar on EFA implementation: Teacher and resource management in the context of decentralization. Hyderabad, India: UNESCO Division of Educational Policies and Strategies. United Nations Education Science and Culture Organization (UNESCO) (2008) Education for All by 2015: Will we make it? (EFA Global Monitoring Report). Paris: UNESCO. United Nations Education Science and Culture Organization (UNESCO) (2010) Reaching the marginalized (EFA Global Monitoring Report). Paris: UNESCO. United Nations Education Science and Culture Organization (UNESCO) (2012) Youth and skills: Putting education to work (EFA Global Monitoring Report). Paris: UNESCO. UNESCO Institute for Statistics (UIS) (2005) Children out of school: Measuring exclusion from primary education. Montreal: UIS. UNESCO Institute for Statistics (UIS) (2011) Global education digest. Montreal: UIS. UNESCO Institute for Statistics (UIS) (2013) EFA global monitoring report policy paper 9: Schooling for millions of children jeopardized by reductions in aid. Montreal: UIS. UNICEF & UNESCO (2007) A human rights-based approach to EFA. New York: UNICEF. United States Agency for International Development (USAID). (2006) Public private partnerships for development: a handbook for business. Washington, DC: USAID. 
Retrieved from United States Agency for International Development (USAID). (2011) Education: Opportunity for learning: USAID education strategy, 2011–2015. Washington, DC: USAID. United States Department of Labor (US DOL) Bureau of International Labor Affairs (2000). Access to education. In By the sweat and toil of children, volume v: efforts to eliminate child labor. Washington, DC: US DOL. Retrieved from here. van Fleet, J. (2011) A global education challenge: Harnessing corporate philanthropy to educate the world’s poor. Working Paper 4. Washington, DC: Center for Universal Education, Brookings Institution. Retrieved from here. Weiler, H. (1989) Education and power: The politics of educational decentralization in comparative perspective. Stanford: Center for Educational Research and Studies, Stanford University. Wils, A., & Ingram, G. (2011) Universal basic education: A progress-based path to 2025. Washington, DC: Education Policy Data Center, FHI 360. Winkler, D. (1989) Decentralization in education: An economic perspective. Working Paper No. 143. Washington, DC: World Bank. Winkler, D., & Sevilla, M. (2004) Information for accountability in decentralized Education: Implementation of report cards. Washington DC: Academy for Educational Development. Washington, D.C. Woodhall, M. (1997) Human Capital Concepts. In A. Halsey, H. Lauder, P. Brown, & A. S. Wells (Eds.) Education, culture, economy and society (pp. 219-23). New York: Oxford University Press. World Bank. (1998) Assessing aid: What works, what doesn’t, and why. Oxford, NY: Oxford University Press. World Bank. (1999) Education sector strategy. Washington, D.C.: World Bank. World Bank (2002) Project performance assessment report (No.24433). Washington, DC: World Bank. World Bank. (2003) World development report 2004: Making services work for poor people. Washington, D.C.: World Bank. World Bank. (2009) Abolishing School Fees in Africa. Washington, DC: The World Bank. World Bank. 
(2011) Learning for all: Investing in people’s knowledge and skills to promote development. World Bank Group Education Strategy 2020. Washington, DC: World Bank. World Bank/Independent Evaluation Group (IEG) (2006) From schooling to access to learning outcomes: An unfinished agenda – an evaluation of World Bank support to primary education. Washington, D.C.: World Bank. World Bank/Independent Evaluation Group (IEG) (2010) World Bank support to education since 2001: A portfolio note. Washington, D.C.: World Bank. World Economic Forum. (2005) Partnering for success: business perspectives on multi-stakeholder partnerships. Geneva: World Economic Forum.
Irish American Heritage Month
March 1 – 31
March is designated as Irish American Heritage Month to recognize the contribution that Irish immigrants and their descendants have made to the formation of our Nation. Among those contributions are nine signers of the Declaration of Independence, over twenty of Washington’s generals, the first man to hold a commission in the United States Navy, over 190,000 Irish-born Americans who fought in the Civil War, pioneering women such as Nellie Bly and Christa McAuliffe, the inventor of the modern submarine, and 253 Medal of Honor recipients who list the place of their birth as Ireland.

Developmental Disabilities Awareness Month
March 1 – 31
March is Developmental Disabilities Awareness Month thanks to a 1987 Presidential Proclamation, which was the direct result of the advocacy efforts of The Arc. A lot has changed since then: more people with intellectual and developmental disabilities (I/DD) are living and thriving in their communities rather than in institutions; there are more opportunities in education and employment; there are more protections in health care, the legal system, and other areas of human rights; there are more positive and accurate portrayals of people with I/DD in the arts; the list goes on. But we must remember that many of those advancements were hard won. Self-advocacy and advocacy on behalf of those with I/DD were the impetus for many of the positive changes in our society, such as the proclamation that recognized DD Awareness Month.

Women’s History Month
March 1 – 31
This month pays tribute to the generations of women whose commitment to nature and the planet has proved invaluable to society. The celebration was met with a positive response, and schools began to host their own Women’s History Week programs. The next year, leaders from the California group shared their project at a Women’s History Institute at Sarah Lawrence College.
Other participants not only became determined to begin their own local Women’s History Week projects but also agreed to support an effort to have Congress declare a national Women’s History Week. In 1981, Sen. Orrin Hatch (R-UT) and Rep. Barbara Mikulski (D-MD) cosponsored the first Joint Congressional Resolution proclaiming a “Women’s History Week.” In 1987, the National Women’s History Project petitioned Congress to expand the celebration to the entire month of March. Since then, the National Women’s History Month Resolution has been approved every year with bipartisan support in both the House and Senate.

Mid-Semester Grades Available
Spring mid-semester grades are available online.

Student Commencement Speaker Application
Interested in applying to be a Student Commencement Speaker? Students graduating in May 2015 are eligible to speak at commencement. The ideal student speech should endeavor to inspire and motivate, while remaining relevant, encouraging, and memorable not only for fellow graduates but for all guests at Commencement. One speaker will be chosen for each ceremony.

Last Day for 75% Credit Refund, Second Block
The last day to drop or withdraw from Spring 2015 Second Block classes with a 75% refund is today. Drops and withdrawals can be processed online at My Missouri State. If you have a hold on your account preventing the use of the web registration system to drop a class, contact the Office of the Registrar prior to midnight on the deadline day. Office hours are 8:00 am – 5:00 pm. After 5:00 pm, email [email protected] from your University email account, or fax 417-836-6334. Course adds and section changes can only be processed with academic department approval.
Additional refund and payment deadlines can be found here: http://www.missouristate.edu/registrar/feedeadlines.html

Haven is a required online course that all incoming freshmen and transfer students must complete to build understanding of consent, sexual assault, relationship violence, and bystander intervention. If this course is not completed prior to the student’s spring registration date within the student’s first year at Missouri State University, a Registration Hold will be placed on the student’s account. This hold will be lifted from the account once Haven has been completed.

To log in:
- Always sign in at: my.missouristate.edu
- After signing in, go under the Academics tab and find the New Student or Transfer Student section. Within this square, there is a training section, and the Haven link is in this area. To access Haven, choose the link and you can begin.

Course problems? Contact Emma Rapp, Dean of Students Graduate Assistant, at [email protected] or (417) 836-6087. Those with disabilities who may not be able to access Haven because of the instructional format or design of the training program may request an accommodation by contacting the Disability Resource Center at [email protected] or 417-836-4192.

Bear Brawl I
Saturday, March 21 at 4pm, McDonald Arena
Bear Brawl I will be a full night of exciting amateur boxing hosted by the MSU Boxing Team. Admission is FREE to all MSU students, faculty, and staff, so come enjoy the fun!!!

Bears Backing Bears Student Challenge
Thursday, March 12 – Friday, March 20
Help your fellow Bears stay in school when they are facing a personal crisis with a gift to the Emergency Scholarship Fund. President Smart is making a donation to the Emergency Scholarship Fund in honor of the University’s 110th Birthday, and he is encouraging students to join him in supporting this scholarship, which supports students in crisis.
Students can participate with an $11 gift and receive a #BirthdayBears T-shirt (limited number) and a chance to be eligible for President Smart’s reserved parking space for one week! The campaign runs until 11:30 a.m. March 20th – exclusively at Bears Backing Bears. The winner will be announced on March 21st at Dance Marathon, during the birthday hour. The crowdfunding page will be live for 30 total days, so people can still contribute after the contest portion has ended.

Living History Luncheon
Friday, March 20 at 11:30am – 1pm, PSU Ballroom
Please join us for a Living History Luncheon on Friday, March 20; the buffet begins at 11:30 a.m. and class begins at noon. Historians portraying two of Missouri State’s most beloved academic legends, William Carrington and Virginia Craig, will be on campus one more time for a history lesson that will span the decades. Seating is limited; to ensure your spot, purchase a ticket at (417) 836-4143 or www.missouristatefoundation.org

Rock n’ Bowl: Birthday Bears
Friday, March 20 at 7:30pm, Level 1 Game Center

Student Activities Council Events

Dance Marathon
Saturday, March 21 at 12pm – 12am, Foster Recreation Center
Dance Marathon is a 12-hour dance marathon, and all funds raised go directly to Children’s Miracle Network at Cox South hospital in town. Although it’s 12 hours long, you don’t have to stay the entire time – you can come and go as you please. It’s a really fun event for a great cause. You can register for the event at mostatedm. The event will have live performers, including the Hibernotes a cappella group and musician Nathan Momper, plus a DJ for the entire 12 hours, inflatables, and food catered by Insomnia Cookies, Qdoba, and Andy’s ice cream.

Greek Jam
Saturday, March 28 at 1pm, Hammons Student Center
Greek Jam is an event where teams present a dance, usually related to a theme. The event will take place at Hammons Student Center at 1 pm.
More information about Greek Week can be found in the Guest Blog.

Humans vs. Zombies
Monday, March 23 – Sunday, March 29 at 6am – 8pm, Springfield Campus
Come join Live Action Society in their Spring game of Humans VS Zombies – Wild West! Humans VS Zombies is a constant-alert Nerf game of tag with two teams: the Humans, who are armed with Nerf Blasters, and the Zombies, who are trying to tag them. This game lasts the whole week and is completely FREE! Sign up the week before at these locations:
- Siceluff outside tables, all day, March 16-20
- Inside PSU tables, all day, March 23-25
Full briefing sign-up meetings are:
- Wednesday, March 18, PSU Room 317, 8:30-10pm
- Thursday, March 19, PSU Room 312, 8:30-10pm

Missouri State Improv
Monday, March 23 at 9pm, Carrington 208 (Carrington Auditorium)
Monday, March 30 at 9pm, Carrington 208 (Carrington Auditorium)
Join us in Carrington Auditorium at 9PM for a showcase of our most hilarious improv comedy teams! The show is always free and is considered appropriate for anyone 18+.

Modern Voices – Chorale Concert
Sunday, March 22 at 7:30pm, Immaculate Conception Catholic Church
Springfield Chorale is the flagship touring choir of Missouri State’s Department of Music. This select choir of approximately 48 voices performs regularly at conferences of the American Choral Directors Association, Missouri Music Educators Association, and the National Association for Music Education and has toured throughout the United States and Europe.

Find out more about SAC films, concerts, and comedy by looking at our SAC Events Blog.

Wednesday, March 18 at 9pm, PSU Theatre
In New York City’s Harlem circa 1987, an overweight, abused, illiterate teen who is pregnant with her second child is invited to enroll in an alternative school in hopes that her life can head in a new direction.

After Hours and Concerts: Live! Music Competition
Thursday, March 19 at 9pm, PSU Ballroom
Student musicians will have the chance to compete to open for the Nelly concert.
Stop by to hear some of the most up-and-coming artists in the Springfield area. If you are interested in auditioning to perform, please email Kane Sheek ([email protected]).

Rock n’ Bowl: Birthday Bears Bash
Friday, March 20 at 7:30pm, PSU Level 1 Game Center
Celebrate Missouri State’s 110th Birthday with a huge birthday celebration! Food, cake, games, and prizes!

Comedy: Jade Catta-Preta
Saturday, March 21 at 7pm, PSU Ballroom
Jade Catta-Preta is a triple-threat writer, actress, and stand-up comedian. She is presently a series regular on ABC’s “Manhattan Love Story” and is also a cast member on MTV’s “Girl Code”.

Sunday, March 22 at 9pm, PSU Theatre
Wednesday, March 25 at 9pm, PSU Theatre
A trio of guys try to make up for missed opportunities in childhood by forming a three-player baseball team to compete against standard children’s baseball squads.

After Hours: #TBT Game Night
Thursday, March 26 at 9pm, McDonald Hall and Arena
Throw Back Thursday Game Night will feature some of your favorite childhood games! Come out and play them with your college peers!

Rock n’ Bowl
Friday, March 27 at 7:30pm, PSU Level 1 Game Center
Free bowling, ping-pong, and pool for students on Friday nights in the Level One Game Center.

Film: Kicking and Screaming
Sunday, March 29 at 9pm, PSU Theatre
Family man Phil Weston, a lifelong victim of his father’s competitive nature, takes on the coaching duties of a kids’ soccer team, and soon finds that he’s also taking on his father’s dysfunctional way of relating. We will be having a pizza drawing for people with 5 punches on their reward cards on the 25th.

Lectures: Candice DeLong
Monday, March 30 at 7pm, PSU Theatre
Candice DeLong has been called a real-life “Clarice Starling” and a female “Donnie Brasco.” She was on the front lines of some of the F.B.I.’s most gripping and memorable cases, including being chosen as one of three agents to carry out the manhunt for the Unabomber in Lincoln, Montana.
She tailed terrorists, went undercover as a gangster’s moll, and posed as the madam for a call-girl ring. Now for the first time, she reveals the dangers and rewards of being a woman on the front lines of the world’s most powerful law enforcement agency. DeLong was, until her retirement in July 2000, the head field profiler in San Francisco for the F.B.I. She has served as the liaison to the Bureau’s world-famous Behavioral Science Unit at Quantico and, as a member of the Child Abduction Task Force and a former Registered Nurse, has lectured widely on such issues as protecting women and children and preventing sexual abuse. In her book, Special Agent: My Life on the Front Lines as a Woman in the F.B.I., DeLong takes readers step by step through the profiling process and shows how she helped to solve a number of difficult, high-profile cases. The story of her role as a lead investigator on the notorious Tylenol Murder case is particularly compelling. She also gives the true, insider’s story behind the investigation that led to the arrest of the Unabomber – including information that the media can’t or won’t reveal.

Safe Zone Training
Thursday, March 19 at 12 – 1:30pm, PSU 313 (Parliamentary Room)
Monday, March 30 at 12 – 1:30pm, PSU 317
Safe Zone training will provide students, faculty, and staff with the tools, resources, and information required to create a safe space for Missouri State’s Lesbian, Gay, Bisexual, Trans*, Ally, and Queer community. Upon completion of the training, attendees will receive a “Safe Zone” placard that indicates their office or room is a safe space.

SOFAC Under Construction – Required Training
Tuesday, March 24 at 9am, PSU 317A
Tuesday, March 24 at 11:30am, PSU 317A
Wednesday, March 25 at 9am, PSU 313
Wednesday, March 25 at 7pm, PSU 308A
Thursday, March 26 at 3pm, PSU 314AB
Thursday, March 26 at 8pm, PSU 308A
Friday, March 27 at 10am, PSU 314A
Friday, March 27 at 5pm, PSU 314A
The nature of SOFAC and its guidelines are changing!
This training on the new SOFAC process is required for each student organization that is interested in receiving funding for the 2015-2016 academic year!

Spring Family Day 2015
Saturday, March 21 at 1pm, Hammons Field
Invite your family to Spring Family Day 2015 on March 21st! The highlight of the day is Family Day at Hammons Field, where the Bears baseball team will take on Indiana State University in an afternoon game. Join us as we celebrate the 110th anniversary of the founding of our university. In addition, we celebrate 10 years as Missouri State University. Questions? Ask Priscilla at [email protected].

Statewide Collaborative Diversity Conference
Wednesday, March 25 – Friday, March 27, PSU 3rd Floor
Missouri State University’s Statewide Collaborative Diversity Conference (SCDC) will focus on equipping community leaders and tomorrow’s leaders with best practices showcased by diversity professionals from around the nation. A new student day has been added on Wednesday, March 25. Sessions primarily focus on student diversity-related issues, but all attendees are welcome to attend.

Keynote: Daryl G. Smith
Daryl G. Smith, senior research fellow and professor emerita of education and psychology at Claremont Graduate University, is the keynote speaker for the 2015 SCDC. Dr. Smith’s research, teaching, and publications are in the areas of organizational implications of diversity, assessment and evaluation, leadership and change, governance, diversity in STEM fields, and faculty diversity.

Tartuffe (The Imposter) | Play Production
Thursday, March 26 at 7:30pm, Coger Theatre
Friday, March 27 at 7:30pm, Coger Theatre
Saturday, March 28 at 7:30pm, Coger Theatre
Sunday, March 29 at 2:30pm, Coger Theatre
Written by Molière
Directed by Sara Brummel
The gullible Orgon and his mother, Madame Pernelle, have fallen under the influence of Tartuffe, a charlatan whose false piety and ulterior motives are obvious to the rest of Orgon’s family and friends.
So blinded is Orgon by misplaced admiration of his duplicitous houseguest that he announces his daughter Mariane will marry Tartuffe, although she is already engaged to Valère. Alarmed by the degree to which Tartuffe has insinuated himself into the household, the family devises a plot to entrap Tartuffe into confessing his desire for Elmire, Orgon’s wife. The plan backfires, and Orgon responds by banishing his son from the house and signing over all his worldly possessions to Tartuffe! A humorous satire of bourgeois values and religious hypocrisy, Tartuffe was banned by King Louis XIV soon after its premiere in 1664. The Archbishop of Paris issued an edict threatening excommunication for anyone who watched, performed in, or read the play. Molière defended his work, noting that the juxtaposition of opposites — good and bad, right and wrong, wisdom and folly, truth and falsehood, the rational and the unreasonable — is at the heart of comedy. Fortunately for generations of happy theatre-goers, the controversy surrounding the play eventually lifted, and Molière’s insightful work has endured as one of classical theatre’s most popular comedies.

Tunnel of Oppression
Monday, March 30 at 6pm, Wells House Grand Lounge
Tunnel of Oppression is an interactive campus-wide social justice and diversity program. During this event, participants will walk through six different scenarios acted out by Missouri State students. These scenarios are meant to challenge participants about oppression that occurs every day in our community. Topics being covered in the Tunnel this year are race, trans* issues, sex trafficking, sexual assault, mental health, and self-harm. In the rooms, actors will portray issues that are pressing to the different topics being covered in the Tunnel this year. The event will be held in Wells House and will begin in the Wells Grand Lounge on the ground level of Wells House. Step out of your comfort zone and into the tunnel!
Women’s Leadership Conference
Monday, March 23 – Tuesday, March 24, Plaster Student Union
Women leaders are a “force of nature” when it comes to perseverance, endurance, and producing successful outcomes. Missouri State University’s 2nd Annual Women’s Leadership Conference will explore women’s roles and their impact in all walks of life, including business, sports, education, and health.
Monday, March 23: 2:00 p.m.-8:00 p.m. (includes dinner and mentoring mixer)
Tuesday, March 24: 9:00 a.m.-2:00 p.m. (includes lunch)
This year’s featured keynote speaker is Cynthia Cooper, Founder & CEO of The CooperGroup, LLC. Other speakers include:
Click here to view conference information, schedule, and speaker biographies. For more opportunities, subscribe to the Community Opportunities Newsletter.

Food Assistance for Students
Mondays at 3pm-6pm while MSU is in session
Tuesdays at 3pm-6pm while MSU is in session
The Food Pantry for Missouri State University Students is for any student facing food insecurity. This may include students going hungry, students not able to make ends meet, students facing a delay in Financial Aid or other assistance, or any other reason. If you are interested in volunteering at the Well of Life, please click here to sign up!
Well of Life – MSU Student Food Pantry
418 S Kimbrough, Springfield, MO 65804
(Just across from the Qdoba near Bear Park North)

For a review of events that have already happened, please visit our Athletics Blog Page.
- Wednesday, March 18, Baseball vs. North Dakota State, 3:05pm, Hammons Field
- Friday, March 20, Baseball vs. Indiana State, 6:35pm, Hammons Field
- Friday, March 20, W Basketball vs. Tulsa, 7:05pm, JQH Arena
- Saturday, March 21, Baseball vs. Indiana State, 2:05pm, Hammons Field
- Sunday, March 22, Baseball vs. Indiana State, 1:05pm, Hammons Field
- Tuesday, March 24, Baseball vs. Kansas, 6:35pm, Hammons Field
- Saturday, March 28, W Soccer vs. Butler CC, 11am, Betty and Bobby Allison South Stadium
- Saturday, March 28, W Soccer vs. Western Illinois, 2pm, Betty and Bobby Allison South Stadium
- Last chance to register for intramural sports is Wednesday, March 18! Put together a team for Sand Volleyball, Soccer, Flag Football, and Ultimate Frisbee! Register at IMLeagues.com or contact Lauren Burns at the Foster Recreation Center with any further questions.
- BearFit passes are $20 after Spring Break! With summer around the corner, now is the time to get active! Unlimited passes will only be $20 for the remainder of the semester, so buy a pass and get in shape with BearFit.
- SHARP sessions will be held in the FRC again this semester! Three sessions will be sponsored by the Missouri State University Department of Safety and Transportation as well as by Campus Recreation Wellness. The classes are free, so register for one of the following sessions! All sessions are from 6:30-9 p.m.
  Session 2: March 30 & 31
  Session 3: April 27 & 28
- American Red Cross CPR and First Aid Certification. Register online or in person at the FRC. The registration deadline is 2 days before each class. Limited seats are available. CPR/AED and First Aid classes are held in the Aquatics Classroom. CPR is $55 and First Aid is $40.
  CPR/AED Session: March 21, 9AM-12:30PM
  First Aid Session: March 21, 12:30-2PM
  This certification does not fulfill the requirement for MSU nursing students.
- Wellness Expo at the Campus Recreation Center on April 8th from 11am-3pm. Recognize where campus departments fit within the 8 dimensions of wellness and participate in the activities, screenings, and demonstrations that Missouri State offers.
- Remember that the Foster Recreation Center offers the awesome services of Massage Therapy and Personal Training. More information can be found on our website at www.missouristate.edu/recreation
November 15, 2020

This newsletter issue focuses on the topic of inflation. It’s often described as though it’s binary: either inflation is rising or it’s falling. However, there are multiple types of inflation, and they have different causes and different effects on asset classes, so this newsletter walks through some of the nuances. Having an idea about forward inflation or deflation potential is critical for establishing which asset classes are likely to outperform in the 2020s decade.

Three Types of Inflation

How economists define inflation depends in part on what school of economics they come from. For the sake of this piece, I’m going to break it down into three different types: monetary inflation, asset price inflation, and consumer price inflation.

1) Monetary Inflation

Monetary inflation generally refers to an increase in the broad money supply, known as M2. In other words, it’s not about prices going up; it’s about the amount of money itself going up. Broad money supply, M2, refers to all of the various bank deposits that exist in the system, like checking accounts and savings accounts, as well as physical currency in circulation. The official broad money supply in the United States is currently about $18.8 trillion.

This chart shows the broad money supply over time in blue on the left axis, and the year-over-year percent change in that broad money supply in red on the right axis:

Chart Source: St. Louis Fed

There are two main forces that drive the broad money supply up over time: either banks make more private loans and thus create new deposits, or the government runs large fiscal deficits and has the central bank create new bank reserves to buy large portions of the bond issuance for those deficits. I walked through this process in detail in my article on banks, QE, and money-printing, so if you haven’t read it, that’s a key resource.
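The year-over-year growth series shown in red on a chart like that is straightforward to compute from the raw monthly levels. Here is a minimal Python sketch; the M2 figures are illustrative placeholders, not actual Fed data:

```python
# Year-over-year percent change of a monthly series, as used for the
# red line in the M2 chart. The input numbers below are made up.
def yoy_growth(series, periods_per_year=12):
    """Return year-over-year percent changes for a list of periodic values."""
    return [
        100.0 * (series[i] / series[i - periods_per_year] - 1.0)
        for i in range(periods_per_year, len(series))
    ]

# 25 months of hypothetical M2 levels in $ trillions, growing ~1% per quarter
m2_levels = [15.0 * 1.01 ** (month / 3) for month in range(25)]
annual_growth = yoy_growth(m2_levels)
print(round(annual_growth[0], 2))  # 4.06, i.e. ~4% annual money supply growth
```

The same helper works for any periodicity: pass `periods_per_year=4` for quarterly data.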
Monetary inflation is generally a starting point for the next two forms of inflation: asset price inflation and consumer price inflation, which we feel more directly. 2) Asset Price Inflation Asset price inflation refers to the prices and valuations of financial assets, like stocks, bonds, real estate, gold, fine art, and fine wine, increasing over time. These are things that can be held for long periods of time and tend to appreciate in price. There are multiple ways to measure asset valuation, and I use several of them throughout my newsletters and articles. None of them are perfect, which is why I use several to see if they agree or don’t agree. Pretty much any valuation metric for U.S. equities suggests significant overvaluation this year and for the past few years, with the exception of the equity risk premium vs Treasuries. In other words, stocks are historically expensive, but bonds are historically even more expensive. Collectively, asset prices have never been higher than they are now, compared to GDP. Here is U.S. household net worth as a percentage of GDP in blue, and short-term interest rates in red: Chart Source: St. Louis Fed Household net worth includes stocks, bonds, cash, real estate, and other assets, minus liabilities like mortgages and other debts. That chart goes through Q2 (the latest z.1 financial accounts release, which comes with a lag), so we’ll see a bit of a pullback in the blue line for Q3 numbers because GDP rebounded in Q3. Household net worth reached nearly 550% of GDP before the pandemic, briefly shot up over 600% in Q2, and should return to somewhere around 550%-575% in the Q3 report, because GDP partially normalized. Another broad valuation indicator that I don’t cite frequently is Tobin’s Q Ratio, which compares the broad equity market value to the replacement value of the companies. 
This excellent chart from Advisor Perspectives shows Tobin’s Q on the bottom, and shows the inflation-adjusted S&P 500 on the top: Chart Source: Advisor Perspectives Here’s another chart that shows asset price inflation. It has the gold price in red (asset price inflation), the median house price in purple (asset price inflation), the money supply per capita in blue (monetary inflation), and the consumer price index in green (consumer price inflation), all indexed to 100 in 1995: Chart Source: St. Louis Fed As we can see from that chart, gold has kept pace with monetary inflation, which has greatly outpaced the official consumer price index. Housing has also greatly outpaced the consumer price index, but not by quite as much as monetary inflation. If I were to put stocks on that chart, even with dividends excluded, they would also have greatly outperformed consumer price inflation, due to major increases in average stock valuations over this time period. 3) Consumer Price Inflation Consumer price inflation refers to the prices of everyday goods and services going up in price, for things ranging from food to medical costs. It overlaps a little bit with asset price inflation in the housing sector, but for the most part, refers to non-financial items. Consumer price inflation is challenging to measure, because you have to define a basket of goods and services to measure as an average. No household has an identical basket of expenses; my household expenses have a different percentage of housing, transportation, food, travel, clothing, healthcare, electronics, and automobiles, than yours does. And then, there are methodological differences, including how substitutions are calculated. Many people argue that official measures of CPI somewhat understate the real inflation rate, and I agree. 
As an exercise, I calculated that my household inflation rate averaged about 3% over the past two years, from summer 2018 to summer 2020, which is above the official CPI, but lower than some of the other measurements out there. It’s starting to get interesting this year, though. As an anecdotal data point, my husband and I bought a mattress in December 2019, and due to some damage, we had it replaced in November 2020, 11 months later. We liked the mattress, so we got the identical version, except now it’s 20% more expensive than our original purchase. On the other hand, things like gasoline are cheaper. This chart shows the consumer price index going back to 1913, with a notable period marked: Chart Source: St. Louis Fed What that chart shows is that something that cost $100 in 1982 would cost $260 today, and would cost about $10 in 1913. The price of a basket of goods, in other words, has risen by about 26x from 1913 to today in dollar terms. Breaking that down further, it rose by about 4x from 1913 to 1971, when Nixon took the U.S. off the gold standard, and about 6.5x from 1971 to the present. It also shows that the CPI trend turned up prior to the gold standard being undone; the monetary system began having major issues under the surface prior to the official devaluation. To give a bit more granularity, this chart shows the year-over-year percent change in the consumer price index, which is what we generally refer to as “inflation”. Chart Source: St. Louis Fed Prior to 1971, there were both periods of inflation and deflation. Since 1971, there have been periods of inflation, but barely any periods of deflation. Now that we’ve defined the three main types of inflation, we can move into why sometimes one goes up but another does not. A rapid increase in the broad money supply usually comes with either asset price inflation or consumer price inflation, and a few variables can affect which of those two it mostly causes to go up.
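Those cumulative multiples can be sanity-checked and converted into average annual inflation rates with quick compounding arithmetic (the 4x, 6.5x, and ~26x figures are from the text; the year spans are approximate):

```python
# The multiples cited above: ~4x from 1913-1971 and ~6.5x from 1971 on,
# which compound to roughly 26x over the full 1913-2020 span.
mult_early = 4.0    # 1913-1971, ~58 years
mult_late = 6.5     # 1971-2020, ~49 years
total = mult_early * mult_late
print(f"Cumulative multiple: ~{total:.0f}x")  # ~26x

def annualized(multiple, years):
    """Average annual inflation rate implied by a cumulative price multiple."""
    return (multiple ** (1 / years) - 1) * 100

print(f"1913-1971: ~{annualized(mult_early, 58):.1f}%/yr")  # ~2.4%
print(f"1971-2020: ~{annualized(mult_late, 49):.1f}%/yr")   # ~3.9%
print(f"1913-2020: ~{annualized(total, 107):.1f}%/yr")      # ~3.1%
```

The post-1971 era implies a visibly higher average annual rate than the earlier period, consistent with the chart's inflection around the end of the gold standard.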
This is because when money supply goes up rapidly, it only causes an increase in the price of assets or goods if there is a situation with too much money chasing too few assets and goods, resulting in a supply-vs-demand imbalance. On the other hand, if the quantity of those assets and goods goes up rapidly as well, then there is no supply-vs-demand imbalance, and thus no reason for prices to go up at any substantial rate, even though the amount of currency units is going up. Asset Price Inflation vs Consumer Price Inflation Substantial asset price inflation often occurs when monetary inflation is substantial, interest rates are also quite low, and labor and commodity costs are controlled at relatively low levels. Or, a blunt way to think about asset price inflation, is that it’s consumer price inflation for the wealthy and upper middle class who own most of the financial assets. If the broad money supply is going up quickly, and the top few percent of the population have their incomes going up quickly, then they will be flush with cash, and they have to put it somewhere. Most people don’t want to hold their net worth in cash, especially if interest rates are low. And when interest rates are low, it means the discount rate that we use to value various financial assets is low, and so the subsequent valuation calculation can result in rather high prices for financial assets. For example, if interest rates for bank savings accounts and Treasury bonds are 5% per year, then a dividend stock that pays a 4% dividend yield and grows at 4% per year is only moderately attractive; its annual returns will be slightly better (about 8% total) but with more volatility and risk. However, if interest rates for banks and Treasury bonds are only 1%, then suddenly that dividend stock looks a lot more attractive, and we would pay higher prices for it, drive the valuation ratio up, the dividend yield down, and thus drive the forward returns down. 
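The dividend-stock example above is essentially a discounted-cashflow argument. A minimal sketch using the Gordon growth model (my framing, not terminology from the newsletter; the 3.5% equity risk premium is an illustrative assumption):

```python
# Gordon growth sketch of the discount-rate effect described above:
# fair price = dividend / (required_return - growth).
# The 3.5% equity risk premium is an illustrative assumption,
# not a figure from the newsletter.
EQUITY_RISK_PREMIUM = 0.035
DIVIDEND = 4.00   # annual dividend per share, dollars
GROWTH = 0.04     # 4% annual dividend growth, as in the text's example

def fair_price(risk_free_rate):
    required_return = risk_free_rate + EQUITY_RISK_PREMIUM
    return DIVIDEND / (required_return - GROWTH)

price_5pct = fair_price(0.05)  # 5% Treasury world
price_1pct = fair_price(0.01)  # 1% Treasury world
print(f"Fair price at 5% rates: ${price_5pct:.0f}")  # ~$89
print(f"Fair price at 1% rates: ${price_1pct:.0f}")  # ~$800
print(f"Implied dividend yield falls from "
      f"{DIVIDEND / price_5pct:.1%} to {DIVIDEND / price_1pct:.1%}")
```

Under these assumptions, the same dividend stream is worth roughly 9x more when the risk-free rate drops from 5% to 1%, which is the mechanism behind low rates driving valuations up and dividend yields down.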
So, when the money supply is going up and interest rates are low, folks with plenty of cash start buying financial assets, such as stocks, bonds, real estate, private equity, gold, fine wine, and fine art. These assets have inherent scarcity, and so as money supply goes up while interest rates stay low, the prices of financial assets tend to do very well. There is an age-old battle between labor and capital; the working class vs the wealthy. Over the very long run in a given society, the pendulum tends to swing strongly in one direction to the point where it causes societal issues, and then society pushes it back in the other direction where it tends to overshoot in that direction instead, until society pushes it back the other way again. A healthy society finds a balance somewhere where both sides are reasonably satisfied, resulting in high productivity and social cohesion. A pendulum that is too far in one direction or the other tends to cause discontent, economic stagnation, and/or unsustainable bubbles. Periods of substantial wealth concentration, like the 1920’s and 2010’s, have generally been environments of high asset price inflation but relatively low consumer price inflation. These are periods where capital interests have a lot more political power than labor interests, whereas the 1960’s and 1970’s were times when labor unions had a lot more political power. Over the past four decades, the capital side has gained most of the political power, so that’s where the pendulum is at the moment. This is because things like labor offshoring and technological advancements put downward pressure on wages for many people, while shareholders, executives, and highly-paid professionals with in-demand skillsets can prosper within that system. Tax changes further supported this trend, where corporate tax rates and top income tax rates came down, while payroll taxes remained high, and per-capita healthcare costs and childcare costs skyrocketed: Chart Source: St. 
Louis Fed The reason the pendulum tends to swing too far is because when one group gets power, it becomes a self-reinforcing cycle into cronyism, where those with power and influence can further tilt politics in their favor and thus further entrench themselves, until it causes a breaking point and society unwinds that entrenchment. In addition, periods of commodity abundance and thus low commodity prices can support periods of high asset prices and low consumer prices as well, because it helps keep input costs for finished goods low. With that set of variables combined at the moment, the high end of the income spectrum does well and has plenty of money, while the lower and middle portions of the income spectrum remain cash-constrained. So, a lot of money starts chasing scarce goods among the wealthy (leading to a large increase in the price of financial assets and luxury goods), while money remains tight for everyday consumers (leading to a smaller increase in the price of everyday goods and services, particularly those that are non-essential). Essential costs like healthcare and childcare keep going up. I described this trend in some detail in my article, The Big Tax Shift. We can see over the past thirty years, for example, that the top 1% have gone from having 23% of the wealth to a little over 30% of the wealth, while the bottom 90% decreased from having about 40% of the wealth to a little over 30%. So, the top 1% combined now have the same amount of wealth as the bottom 90% combined: Chart Source: St. Louis Fed And if we split that into the top 10% and bottom 90%, it looks like this, where the top 10% have two-thirds of the wealth, and the bottom 90% have the remaining one-third: Chart Source: St. 
Louis Fed This relationship is positive for asset prices, because cheaper labor and cheaper input costs are beneficial for corporate profits, and as wealth concentrates into the top few percent of the population, it gets stored more and more in financial assets which drives the valuations up. In addition, there has been a rapid increase in CEO pay over the past four decades. CEOs used to make 20x as much as the average worker in 1965, and that ratio moved up to 59x by 1989, 122x by 1995, and in recent decades has been well over 200x as much as the average worker. Meanwhile, the median male worker has a lot more trouble covering key family expenses than he did back in the 1980’s and 1990’s, mainly due to health care inflation and education inflation rising more quickly than his wages: Chart Source: Washington Post, Oren Cass Trends like this generally lead to rising populist politics on both the right and the left of the political spectrum (cracks in the system emerging from the pendulum swinging very far), as people sense that something isn’t working right with the established system, due to a problematic mix of political and corporate interests merging together into cronyism, but differ in what they think the root of the problem is and how to address it. So, we are in an environment of unusually strong political polarization, as well as political challengers to the established system. We can think of it roughly as a political quadrant in the United States, with Populist Left, Established Left, Established Right, and Populist Right, rather than a simple Left-vs-Right spectrum. And then even within those four quadrants, there are multiple sub-groups. Monetary Inflation vs Consumer Price Inflation If we look back over a full century, there is a significant correlation between monetary inflation and consumer price inflation. 
This chart shows the 5-year rolling percent change of the broad money supply per capita in blue and the consumer price index in orange: Data Source: Federal Reserve This chart shows the inflationary decades of the 1940’s and 1970’s, where both money supply and consumer prices rose quickly. 2020 is starting this decade out by reaching those historically high money supply 5-year percent growth levels, while official CPI remains low. Big expenses like housing, healthcare, education, childcare, and other non-outsourced expenses tend to be more in line with monetary inflation, giving people a sense that inflation is higher than the broad CPI reports that it is. In some ways, 2020 looks like the mid-1960’s, where there was a pretty wide divide between money supply growth and official CPI growth. Back then, the result was that CPI started to catch up with broad money supply growth in the late 1960’s and then shot up quickly in the 1970’s. It remains to be seen if that will happen this time or not. After the 1960’s which had moderately high asset prices, the 1970’s inflationary period saw rather low asset prices for most financial assets, except gold and silver which did very well. This was because interest rates became very high, which meant the discount rate when valuing stocks and other cashflow-producing assets led to rather low fair valuation estimates for those wishing to take on equity risk. That was, of course, ultimately a great time to buy financial assets, because as inflation was eventually brought under control, interest rates began a four-decade structural decline, which led to the massive boom in asset prices that we’ve enjoyed since then. Monetary Inflation Outlook My base case is for continued fast growth of the broad money supply per capita over the next 3-5 years, perhaps at 8-12% or more per year on average. 
This is because federal fiscal deficits continue to be very large for structural reasons even without further large stimulus, and big portions of those deficits are being monetized by the Federal Reserve rather than extracted from existing pools of capital, which is generally what happens when sovereign debt as a percentage of GDP gets this high. My money-printing article goes into detail on that. This widening fiscal deficit began happening pre-pandemic, and the pandemic sharply blew it out, much like a war: Chart Source: St. Louis Fed Here’s the single-month October 2020 U.S. fiscal situation: Chart Source: U.S. Treasury Department As of now, the deficit is structurally impaired for years, and thus will likely continue growing the broad money supply at an eyebrow-raising pace. The question becomes whether that money will spill mostly into financial assets, or into consumer prices. 3 Catalysts For Consumer Price Inflation As previously shown, monetary inflation and asset price inflation have been pretty high over the past several years, while official CPI measures remain low. Technological advancements, high debt levels, labor offshoring, wealth concentration, low median wage growth, no commodity shortages, and other forces all affect that divide. However, inflation of domestic services, like healthcare and childcare, have skyrocketed along with monetary inflation and asset price inflation, so the disinflationary trend has mostly been from manufactured goods. So, we have a handful of key things to watch to see to what extent, if any, this trend will shift towards higher consumer prices of goods as well, which are generally the only thing not going up. 1) Labor Onshoring We have had a multi-decade trend of increasing globalization, and specifically offshoring jobs to other countries, which basically exports inflation. It puts downward pressure on local wages and on prices of many types of goods like electronics, clothing, and various items. 
Globalization particularly accelerated during the 1990’s and 2000’s. This trend may have peaked, though. Global trade as a percentage of global GDP hit a local top in 2008 at about 60% of global GDP, and has been in a choppy sideways period ever since: Chart Source: World Bank If we start to see more of a period of labor onshoring in various developed countries, it would end this period of exporting inflation, and thus could result in higher prices of manufactured goods, i.e. consumer price inflation. Having more resilient supply chains is one incentive for this, as is the political tension between China and the United States. However, that trend still has to face off against technological advancements in the area of industrial automation, which should continue to exert downward pressure on prices of manufactured goods and compete with human labor in the long run. There could certainly be a period in this decade, however, where labor onshoring could temporarily happen faster than advancements in automation, and push up some prices for a period of time. 2) Commodity Scarcity During most of the 1990’s, 2000’s, and 2010’s, we had commodity abundance. Oil abundance started to get a bit tight by the end of the 2000’s decade, but improving shale technologies combined with low interest rates and a willingness among companies and investors to drill while being consistently free cash flow negative (and thus persistently destroy their capital), significantly boosted American oil supply and resulted in a long-term supply glut and low oil prices. Chart Source: EIA However, there has been less over-abundance of metals, like copper. Annual deposit discoveries have been weak during the 2010’s decade, despite big money put into the space a decade ago. We have some degree of abundance at the moment, but no structural over-abundance: Chart Source: S&P Global World Exploration Report 2019 So, gold and copper prices held up a lot better than oil prices in recent years. 
In 2020, there have been massive capex cuts by oil and gas producers, and the U.S. shale oil industry has faced a number of bankruptcies. Investors might be more cautious with financing shale drilling going forward, after a decade of brutal losses. The growing movement towards ESG investing also generally results in less investment capital for oil and gas. Supply can stagnate for a while until demand catches up and results in more oil and gas tightness, and higher prices. Some producers for commodities like uranium and copper also cut costs this year as well, even though these industries have less oversupply to begin with, and are looking rather tight as we head deeper into the 2020’s decade. If commodities in general start to enter a period of relative scarcity deeper into the 2020’s decade in conjunction with high monetary inflation, it would exert upward pressure on consumer prices. 3) Political Changes Fiscal policy changes can affect wealth concentration. Whether it’s higher taxes on the wealthy, or payroll tax cuts for the middle class, or partial student loan forgiveness by executive action, or some type of universal basic income or one-time stimulus injections, there are various policies that can get more money into the hands of everyday consumers, which can put upward pressure on consumer prices. Based on current election results, major fiscal changes appear unlikely for the next 2 years, although there can be changes around the edges, including with executive action. As we look out further than that, we need to be aware of potential changes in fiscal policy that could shift the capital/labor pendulum and affect asset classes in various ways. 
In this current environment, due to the severity of the pandemic against a backdrop of a very financially vulnerable society, personal income went *up* this year rather than down, despite massive unemployment, due to big government transfer payments that took the form of stimulus checks, extra unemployment benefits, and PPP loans that turn into grants: Chart Source: St. Louis Fed This fiscal injection pushed back against the deflationary crunch that occurred in spring 2020 during the pandemic shutdown, resulting in a rebound of reported inflation and forward-looking inflation expectations. This began to level off as we entered the autumn, because there was no second round of stimulus, and unemployment remained rather high. Chart Source: St. Louis Fed We should keep in mind that the two consumer price inflationary decades of the past century, the 1940’s and 1970’s, both saw rapid declines in wealth concentration, as the bottom 90% gained wealth share against the top 0.1%: Source: Ray Dalio, Changing World Order In the 1940’s, the government ran large deficits which were monetized by the Fed and commercial banking system for World War II, which resulted in a massive increase in the industrial base that benefited blue collar workers, and when soldiers came home from the war, the government passed bills to get 8 million of them educated or trained at government expense. Massive money supply increases, along with periods of supply shortages, led to three big inflationary spikes in 1942-43, 1947-48, and 1951. Interest rates were held low, and stocks and real estate did well in this environment from a starting point of low valuations, benefiting the wealthy, but even so, the bottom 90% did better. Taxes on the wealthy were quite high in this environment, and folks who were overweight in cash and bonds did rather poorly, as those paper assets failed to keep up with inflation. 
In the mid-1960’s, the pendulum of power swung increasingly in favor of labor unions, and Lyndon B. Johnson oversaw a set of domestic programs referred to as The Great Society. These various forces contributed to an increase in the wealth share of the bottom 90%, but also contributed to moderately high consumer price inflation in the late 1960’s. By the 1970’s, budget deficits from the Vietnam War, along with problems supporting the gold standard, resulted in the dollar going off the gold standard and experiencing a period of rapid devaluation. Oil scarcity relating to geopolitical issues then further exacerbated this, leading to a period of very high inflation. Rising interest rates to control inflation put severe downward pressure on stocks and bonds and many financial assets, other than gold and silver which did extraordinarily well, until Fed Chair Volcker finally broke the back of inflation in the early 1980’s with sky-high inflation-adjusted interest rates. I have several investment accounts, and I provide updates on my asset allocation and investment selections for some of the portfolios in each newsletter issue every six weeks. These portfolios include the model portfolio account specifically for this newsletter and my relatively passive indexed retirement account. Members of my premium research service also have access to three additional model portfolios and my other holdings, with more frequent updates. I use a free account at Personal Capital to easily keep track of all my accounts and monitor my net worth. M1 Finance Newsletter Portfolio I started this account in September 2018 with $10k of new capital, and I put new money in regularly. Currently I put in $1,000 per month. It’s one of my smallest accounts, but the goal is for the portfolio to be accessible and to show newsletter readers my best representation of where I think value is in the market. 
It’s a low-turnover multi-asset globally diversified portfolio that focuses on liquid investments and is scalable to virtually any size. I chose M1 Finance because their platform is commission-free and allows for a combo of ETF and individual stock selection with automatic and/or manual rebalancing. It makes for a great model portfolio with high flexibility, and it’s the investment platform I recommend to most people. (See my disclosure policy here regarding my affiliation with M1.) And here’s the breakdown of the holdings in those slices: Changes since the previous issue: - In October, I reduced T-bill exposure a bit by selling SHY, and added to equities, with an emphasis on ex-USA stock picks. - Stock selections were trimmed and rotated a bit, with new positions in stocks like KMI, UNH, and CVS. Since inception in September 2018, $31,000 in deposits have been put into the portfolio via dollar-cost averaging. After the $10,000 initial seed capital, I put in $1,000 every six weeks for a while, and then eventually increased it to $1,000/month to keep it simple. This chart shows the gains (the current portfolio value minus total contributions) of the model portfolio, compared to dollar-cost averaging into various benchmarks with the same method: The portfolio continues to provide strong risk-adjusted returns. As a multi-asset portfolio with domestic stocks, foreign stocks, bonds, and alternatives, its primary benchmark is a 2050 target date fund. I also include a pure S&P 500 total return index, and a pure MSCI EAFE total return index as well. The model portfolio has produced $7,174 in gains, which is similar to that of a pure S&P 500 exposure, but with less volatility and less tail risk, due to increased diversification. Dollar-cost averaging into the pure S&P 500 index, inclusive of dividends, would have resulted in $7,583 in gains, in exchange for more volatility and risk concentration. 
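The gains figures above are computed as current portfolio value minus total contributions. A sketch of that dollar-cost-averaging arithmetic, with hypothetical monthly fund prices rather than the actual portfolio data:

```python
# Sketch of the gains calculation used above: gains equal current
# portfolio value minus total contributions. The monthly prices here
# are hypothetical, purely to illustrate dollar-cost averaging.
contributions = [1000] * 12                  # $1,000 per month
prices = [100, 98, 95, 97, 102, 105,         # hypothetical fund prices
          103, 108, 110, 112, 115, 118]

shares = sum(c / p for c, p in zip(contributions, prices))
total_contributed = sum(contributions)
current_value = shares * prices[-1]
gains = current_value - total_contributed

print(f"Contributed:   ${total_contributed:,}")
print(f"Current value: ${current_value:,.2f}")
print(f"Gains:         ${gains:,.2f}")
```

Applying the same arithmetic to each benchmark's price history is what produces the comparison figures quoted above.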
The primary benchmark, a 2050 multi-asset target date fund, produced only $5,412 in gains with a similar risk profile as the model portfolio. The MSCI EAFE ex-USA developed market total return index would have produced only $3,710 in gains. Since inception, the primary drivers of strong returns for the model portfolio were precious metals (both the metals and the miners), good overall stock selection, and using a counter-cyclical strategy. Some of the laggard areas included maintaining some T-bills in the portfolio, and having international exposure, as international stocks have generally lagged the S&P 500 over the past two years. Primary Retirement Portfolio My retirement portfolio consists of index funds that automatically rebalance themselves regularly, and I rarely make changes. Here’s the allocation today: From 2010 through 2016, this account was aggressively positioned with 90% in equities and enjoyed the long bull market. Starting in 2017, in order to preserve capital, I dialed my equity allocation down to 60% (40% domestic, 20% foreign) and increased allocations to short-term bonds and cash to 40%. This was due to higher stock valuations and being later in the market cycle more generally. After equities took a big hit in Q1 2020, I shifted some of the bonds back to equities, and it is now 71% equities (46% domestic, 25% foreign), and short-term bonds and cash is now down to 29%. For my TSP readers, this is equivalent to the 2040 Lifecycle Fund. This is an example of a portfolio strategy that takes a rather hands-off approach but that still makes a tactical adjustment every few years if needed, based on market conditions, which reduces volatility and makes the retirement account feel less like a casino than many indices these days. This employer-based retirement account is limited to a very small number of funds to invest in, so my flexibility is limited compared to my other accounts. 
I would, for example, have 10% precious metal exposure in this one in place of some of the bonds if I had that option. Other Model Portfolios and Accounts I have three other real-money model portfolios that I share within my premium research service, including: - Fortress Income Portfolio - ETF-Only Portfolio - No Limits Portfolio Plus I have larger personal accounts at Fidelity and Schwab that I use to complement my retirement account, and I share those within the service as well. The No Limits Portfolio was started this summer and is doing particularly well, partly due to its inclusion of a 5% initial allocation to the Grayscale Bitcoin Trust as a partially uncorrelated diversifier. This portfolio was started with $100,000 in capital and no new funds are added, and unlike the M1 platforms, it has the capacity to invest in some OTC securities: It shows the power of blending various assets, including domestic stocks, foreign stocks, and small stakes in alternatives like Bitcoin, while maintaining a defensive element with gold and Treasury securities as well. Many stocks, precious metals, and Bitcoin were the leaders in this portfolio. Treasuries, cash, and energy stocks have been some laggard areas. As a recap, there are some important relationships to keep in mind for inflation in its various forms. Monetary inflation, meaning a rapid increase in the broad money supply, is driven either by an increase in bank lending or large fiscal deficits that are monetized by the central bank. Whether this leads more to asset price inflation or consumer price inflation depends on a few variables. Interest rates: When interest rates rise, it puts downward pressure on most asset prices, as we saw in the inflationary decade of the 1970’s. When interest rates remain low, then monetary inflation remains a good environment for asset prices, as we saw in the inflationary decade of the 1940’s. 
Some financial assets, like gold and silver, tend to do well in either environment, as long as inflation-adjusted interest rates remain low, whereas stocks and bonds are more tied to nominal interest rates. Some stock industries can also go against the trend by benefiting from higher rates, like banks. Pendulum swings: Whenever the balance of power favors the wealthy, due to some combination of offshoring, automation, and the political consensus as we’ve had for a while, then monetary inflation is more likely to translate into asset price inflation. Whenever the balance of power shifts towards labor, due to labor onshoring, labor organization, and/or a change in the political consensus, then monetary inflation is more likely to translate into consumer price inflation. This is largely connected to wage increases or lack thereof, as well as the size and scope of government transfer payments and tax policy. Commodity scarcity: When commodities are abundant, with supply outpacing demand, it keeps input costs low and puts downward pressure on consumer price inflation. When commodities are scarce, with demand outpacing supply, input costs start to rise and it puts upward pressure on consumer price inflation. In recent decades, we’ve spent most of the time in a period of commodity abundance. As monetary inflation likely continues, navigating the next 5 years in terms of portfolio management will depend in part on analyzing these variables to see where that money will end up.
This invention relates to a pillow and in particular to a pillow to counteract obstructive sleep apnoea (OSA), snoring and other breathing or posture problems during sleep. Some pillows used for correcting head position while sleeping encourage people to sleep in a supine position, but lying upon the back creates breathing problems and can be dangerous for people suffering from OSA, wherein the tongue and soft tissue fall back blocking airways or, in the case of snoring, the airway narrows. OSA sufferers need to sleep in side/coma or front positions to avoid or minimise these problems. Many conditions result in restless and disturbed sleep due to incorrect positioning of the head upon a pillow, because many pillows do not allow for the natural movements of the head of a sleeping person. When in a side or coma sleeping position, a sleeping person's face can turn into a pillow and become embedded, which causes pressure around the nose, upper cheek and eye areas. This pressure causes nasal obstruction and discomfort owing to the facial pressure and heat build-up on the cheeks, in turn producing scrunch marks in these areas and creating a cosmetic problem. Some people attempt to compensate for insufficient breathing space and discomfort by placing hands together under the head, forming a space between the upper and lower arm. Pounding a temporary hollow in the pillow with one's fist is not a proper solution. Many people, particularly the elderly, have problems with stiffness and lack of flexibility of the cervical neck region of the spine. During sleep, most therapeutic pillows result in a person's neck being held in an unnatural position to receive support and do not allow for natural movement of a person's head upon a pillow. In further detail, problems which are associated with various kinds of conventional pillows are discussed below. Soft-filled pillows are generally pillows filled with a soft material.
If a user wishes to adopt the side/coma position the pillow is arranged so that the neck is supported; however, the face is inevitably embedded in the pillow. This causes obstruction of the nasal passage and mouth making it difficult to breathe. The uncomfortable pressure being exerted on the facial area around the upper cheek, eye and often ears produces a build-up of heat on the face, especially in hot weather, causing a feeling of restlessness. When asleep the neck relaxes and bends the chin towards the chest. When this happens this type of pillow does not provide good support for the chin and allows it to twist downwards. The strategy often used to alleviate one or more of these problems is for the user to bend the elbow and bring the hand up to rest the head, thereby elevating the head off the pillow. The space formed by the upper and lower arm creates a breathing space that enables the person to breathe without obstruction and reduces the pressure on the face. However, the arm eventually becomes tired and the blood flow is often restricted because of the bent elbow. Another way is for the user to put the face over the edge of the pillow if they wish to side or front sleep in an unobstructed breathing environment, but they still experience the feeling of pressure. Both of these methods often result in very restless sleep and inevitably the body takes the line of least resistance and turns on the back. Though this may be comfortable for some, it is undesirable for snorers and possibly dangerous for obstructive sleep apnoea sufferers. The main problem with molded foam pillows of conventional design is their inflexibility in adapting to individual differences. These pillows are usually designed with a hump for the neck support and a hollow or valley in the centre for the head section.
This style may be suitable for some back sleepers but usually the neck is overstretched in this position, which allows the jaw to open and in turn the tongue and soft tissue to fall backward, which results in snoring or the obstruction of the airway. If a user wishes to side sleep the neck must be held in a certain position to gain any benefit from such a pillow. While the weight of the head is consciously held in the designated position often the neck is overstretched. However, when the neck and body relax as automatically happens while asleep the head will take on a more natural position, bending a few degrees towards the chin. Even if this occurs to the slightest degree, the neck and head become misaligned with the intended designated position for them. Because the head is now positioned on top of the neck support/hump section the neck does not have any support. This causes neck and shoulder pain and puts a lot of strain on the spine. Some prior art pillows incorporate a hollow or hole designed to provide a pressure-free and unobstructed breathing space. These hollows or holes are embedded in the central part of the pillow and are not adjustable for individual head sizes. Because these holes are in the central part of the pillow the back of the head does not receive the correct support, but most importantly, because they are embedded in the pillow, if the face is turned downwards even slightly the exhaled air is trapped in the hollow and then re-inhaled. This is not at all desirable as carbon dioxide can build up in the blood, making the user feel tired on waking. The object of the present invention is to provide a pillow which overcomes the above disadvantages.
The present invention provides a pillow including: a pillow body which has: (a) a central portion having a first bed head end and a second foot end; (b) a pair of head support limbs extending outwardly from the central portion and curving from the bed head end towards the foot end; (c) a pair of neck, chin and jaw support limbs extending outwardly from the central portion at the foot end of the central portion and being spaced inwardly of the head support limbs; (d) a breathing space being defined between each adjacent head support limb and neck, jaw and chin support limb, the breathing space extending from an intermediate position of the central portion between the bed head end and the foot end of the central portion and curving outwardly and towards the foot end of the central portion; and (e) the head support limbs and neck, jaw and chin support limbs having surfaces which are curved downwardly from an upper position on the upper surface of the limbs towards a bottom position adjacent the bottom surface of the limbs so that the breathing spaces taper from a relatively wide opening between the upper positions of the limbs to a relatively narrower opening at the bottom positions of the limbs. The pillow according to this invention provides excellent neck, jaw, chin and head support for a user in any position on the pillow. In particular, it provides support for natural movement of the neck and head during sleep. The curved and tapering breathing slot provides an unobstructed breathing opening or environment for the user in any position the user takes up when resting in an awake condition or when the neck moves in a natural fashion during the course of sleep. Furthermore, the curved surfaces of the head and neck, jaw and chin support limbs provide a pressure-free environment for the facial area in any position during sleep and eliminate heat and discomfort around the upper cheek and eye areas.
The reduced pressure also provides a cosmetic benefit by reducing or eliminating pressure marks around delicate areas. The pillow also provides flexibility for ease of adjustment during the night: the limbs can simply be moved should slight adjustments be required or desired for personal preference. Preferably the pillow body includes a separate base section which can be removed to alter the height of the pillow. Preferably the pillow has a cover conforming in shape to the pillow body. In one embodiment the cover may include an inner liner to form a pouch for receipt of soft filling material so that soft filling material can be included in the pouch to change the height and/or shape of the pillow, the inner liner extending along at least part of the length of the neck, jaw and chin limbs and the central portion adjacent the foot end of the pillow. An inner liner may also be provided at the head support limbs. Preferably cuts are provided in the central portion extending inwardly from the breathing space for accommodating movement of the limbs with respect to one another and the central portion. The bed head end of the pillow may also be provided with a V-shaped profile to assist in movement of the head support limbs and to prevent buckling with respect to the central portion. Preferably the breathing spaces are in the form of open spaces extending completely through the pillow. Preferably additional soft filling may be provided for location between the pillow body and the outer cover for changing the height and/or contour of the central portion or limbs. Preferably the pillow body and the outer cover are provided in a pillow slip. Preferably the upper surface of the pillow body is convoluted or egg carton shaped. Preferred embodiments of the invention will be described, by way of example, with reference to the accompanying drawings in which: FIG. 1 is a view of a pillow embodying the invention shown in an outer cover; FIG.
1A is a cross-sectional view of the pillow body with the outer cover removed along the line 1a--1a of FIG. 1; FIG. 1B is a cross-sectional view along the line 1b--1b of FIG. 1 also with the outer cover removed; FIG. 1C is a view along the line 1c--1c of FIG. 1 also with the outer cover removed; FIG. 2 is a view from the direction of pointer 2 in FIG. 1; FIG. 3 is an upper side perspective view of the pillow of FIG. 1; FIG. 4 is an exploded perspective view of the pillow body; FIG. 5 is a cross-sectional view through the neck support limb according to one embodiment of the invention; FIG. 6 is a cross-sectional view similar to FIG. 5 according to a second embodiment; FIG. 7 is a plan view showing the outer cover on the pillow body; FIG. 8 is a further view showing the pillow in a pillow case; FIG. 9 is a view of the pillow inside a standard pillow slip of one configuration; FIG. 10 is a view of a person sleeping on the pillow according to the preferred embodiment of the invention; and FIGS. 11, 12, 13, 14, 15, 16 and 17 show diagrammatically the attitudes of a person's head when lying asleep in various positions on the pillow. With reference to FIG. 1 a pillow embodying the invention is shown generally in plan view. The pillow comprises a pillow body 2 (shown in an exploded configuration in FIG. 4) which is located inside an outer cover 29 as shown in FIG. 1. The pillow body 2 has the same shape as the outer cover 29 except that the pillow body 2 has a generally V-shaped profile 10a at bed head end 10d of the pillow whereas the cover 29 is generally straight at the bed head end 10d of the pillow. The pillow also has a foot end 10e and a central portion 5 is defined between the bed head end 10d and foot end 10e of the pillow. A pair of head support limbs 10b and 10c extend in curved fashion from the bed head end 10d of the pillow outwardly and downwardly towards the foot end 10e of the pillow. 
A pair of neck, jaw and chin support limbs 11a and 11b extend outwardly and downwardly from the central portion 5 and inwardly of the limbs 10b and 10c. Air breathing spaces in the form of gaps or slots 13 and 14 are defined between the limbs 10b and 11a and between the limbs 10c and 11b respectively. As is clearly shown in FIG. 1 the air breathing slots or gaps 13 and 14 commence at an intermediate position of the central portion 5 and extend outwardly in curved fashion towards foot end 10e. Thus the slots 13 and 14 are arcuate and have a radius of curvature which is the same as that followed by the mouth and nasal region of a user when a user's head moves during sleep as a person's chin moves towards the user's chest. This relationship is more clearly shown in FIGS. 10 to 17 which will be described hereinafter. As is clearly shown in FIGS. 1 and 2 the limbs 10b, 10c, 11a and 11b terminate in truncated ends 15, 16, 17 and 18 respectively. The limbs 11a and 11b may be slightly higher adjacent ends 16 and 17 than the remainder of the limbs 11a and 11b and portion 11. The central portion 5 has a region 11 generally between the limbs 11a and 11b which forms the primary neck support section of the pillow. The portions of the pillow labelled 10 which extend across the central portion 5 and form transitions into the limbs 10b and 10c generally form the primary head support regions of the pillow. Shoulders 57 of the limbs 10b and 10c are rounded so that if the limbs 10b and 10c are moved towards a bed head (not shown) the rounding provides more freedom of movement and prevents blocking of movement of the limbs by the bed head. As is best shown in FIG.
1A the central portion 5 in the longitudinal direction from the bed head end 10d to the foot end 10e is generally flat from end 10d to a bridging portion 12 between the primary head support region 10 and the primary neck support region 11 and then rises upwardly to the primary neck support region 11 so that the primary neck support region 11 is somewhat higher than the primary head support region 10. A small dip or recess 10h may be provided at the commencement of the portion 11 to provide room for a person's ear to reduce pressure against the ear. As is also shown in FIGS. 2 and 3 and the cross-sectional view forming FIG. 1B the central portion 5 in the vicinity of the bridge 12 is curved in convex fashion as shown by surface 12a in FIG. 1B. As is also evident from FIGS. 2, 3 and the cross-sectional views forming FIGS. 1B and 1C the neck support region 11 and the neck, jaw and chin support limbs 11a and 11b are higher than the primary head support regions 10 and head support limbs 10b and 10c. As is also best shown in FIGS. 2 and 1C the limbs 10b and 11a have inner surfaces 10f and 10g which are curved downwardly in convex fashion from an upper position shown by points P to a lower position shown by points B so that the slot 13 defined between the limbs 10b and 11a (and also between the limbs 10c and 11b) tapers downwardly from a generally large upper opening immediately between the points P to a relatively narrower opening between the points B. Outer surfaces 10h and 10j of the limbs 10b and 11a (and also of the limbs 10c and 11b) may be generally vertical surfaces as shown in FIG. 1C or, if desired, may be curved or rounded in convex fashion as shown in FIG. 2 or may be concave. As best shown in FIG. 4 the pillow body 2 is formed of a base layer 70 which is separate from an upper layer 72. Though FIGS.
1A to 1C generally show the pillow body 2 with a smooth outer surface, the outer surface of the upper layer 72 may be convoluted or egg carton shaped as is shown in FIG. 2 to assist in pressure distribution over the areas of the user's head which are contacted by the pillow. FIGS. 1A to 1C, 5 and 6 show the upper surfaces of the upper body 72 smooth or planar rather than convoluted to more clearly and easily show the curvature of the surfaces. The upper layer 72 is contoured in the manner described with reference to FIGS. 1A to 1C. As can clearly be seen in FIG. 1C the base layer 70 merely provides a generally thin height adjusting layer which can be removed or used as is desired to adjust the height of the pillow. Thus, if a relatively low pillow is desired as may be the case if a person prefers to sleep on their front or back the base layer 70 can be removed. If a relatively higher pillow is required for back, side or coma position sleep, then the base layer 70 is used to slightly increase the height or thickness of the pillow to suit shoulder height when sleeping in those positions. The basic contouring of the pillow which provides the curved surfaces as described with reference to FIGS. 1A to 1C is all provided on the upper layer 72 so, notwithstanding removal of the base layer 70 the pillow will still have the shape, characteristics and contouring which has been described with reference to FIGS. 1A, 1B and 1C. As is best shown in FIG. 4 the base layer 70 and upper body 72 may be provided with cuts 74 which extend inwardly into central portion 5 from the inner ends of slots 13 and 14 to facilitate movement of the limbs 10b, 11a, 10c and 11b generally in the direction of double-headed arrows A in FIG. 4 to adjust the position of the limbs with respect to one another and also with respect to the central portion 5 to suit a user's personal needs. For example, FIG. 
7 shows a position of the limbs in a generally closed position where the truncated ends 15, 16 and 17, 18 generally touch one another to close the slots 13 and 14 or may be moved to an open position as shown in FIG. 8 where the truncated ends 15, 16 and 17, 18 are spaced well apart from one another. It should be noted that even in the closed position shown in FIG. 7 the air breathing gaps or slots 13 and 14 are closed only at the truncated ends and not completely shut off so that the air breathing slots and gaps are always provided notwithstanding closure of the limbs 10b, 11a or 10c, 11b. FIGS. 5 and 6 show cross-sectional views through limb 11a showing various embodiments of the invention by which additional soft filling 76 can be added to increase the height of the limbs or slightly change their contour. In the embodiment in FIG. 5 outer cover 29 is shown and an inner liner 31 is sewn to the inner surface of the cover 29. As is shown in dotted lines in FIG. 1 the inner liner 31 extends across primary neck support region 11 and along the majority of the length of the limbs 11a and 11b. The liner 31 is left open from the cover 29 at ends 33 (see FIG. 1) and soft material stuffing can be stuffed in between the liner 31 and cover 29 to form the filling 76 shown in FIG. 5 to slightly increase the height of the primary neck support region 11 and also part of the limbs 11a and 11b if desired. Alternatively the ends 33 could be stitched closed and the pouch filled from a central open location. The filling 76 may also provide a softer feel to the pillow. FIG. 6 shows a further embodiment in which additional soft filling material 76 is located between the upper body 72 and the outer cover 29 not only in the vicinity of the top of the limb 11a but also down the outer surface 10j if the neck support needs to be wider as well as higher. The additional stuffing can be used together with the inner liner 31 shown in FIG. 5. FIG.
6 merely shows the embodiment in which the inner liner 31 is completely omitted. However, the inner liner 31 does provide the advantage of localising and ensuring correct location of the soft filling material 76 to provide increase in height of the pillow if desired. FIG. 7 shows the outer cover 29 from beneath in which a zipper 78 or other suitable attachment such as velcro fasteners are used to close the outer cover 29 over the pillow body 2. The outer cover 29 is preferably formed of a soft, slightly padded material such as quilt material or the like. In this embodiment ties 24 and 25 may be provided on the cover 29 for pulling the portions of the cover 29 adjacent the V-shaped profile 10a of the body 2 together to in turn slightly close the V-shaped profile which will assist in moving the limbs 10b and 10c outwardly from the position shown in FIGS. 1 and 7 and then tying them in that position. FIG. 8 is a view showing an outer pillow slip 30 over the cover 29. The outer pillow slip 30 is intended to be removed periodically for washing. The outer pillow slip 30 may be provided with ties 26 and 27 which can be used to tie the limbs 10b, 11a and 10c, 11b together in the closed position if desired. FIG. 9 shows the further embodiment in which the pillow slip 30 is a generally rectangular pillow slip 30 rather than one which has the same shape as the pillow shown in FIGS. 1 to 8. The pillow slip 30 of FIG. 9 is of generally loose fit so as to slightly match the contour of the pillow and not interfere with the breathing slots 13 and 14. Whilst the generally rectangular pillow slip 30 shown in FIG. 9 is a possibility it is preferred that the pillow slip have the same general configuration as the pillow as shown by the pillow slip 30 in FIG. 8. In the preferred embodiment of the invention described with reference to FIGS. 
1 to 6 the general contour of the curved surfaces of the limbs 10b, 11a and 10c, 11b as well as the neck support region 11, bridge 12 and head support regions 10 are provided by the upper body portion 72. The upper body portion 72 is preferably formed from resilient sponge-like rubbery or synthetic polymeric material such as foam plastic, for example polyurethane. With other embodiments the final shaping of the pillow to provide the curved surface as previously described can be provided not by shaping the actual body 72 but rather by providing inserts of foam plastics material or soft fill material into the outer cover 29 to provide the final shaping previously described. The base layer 70 may also be completely free of the upper layer 72 or alternatively releasable ties could be provided on the base layer 70 or upper layer 72 for tying the base layer 70 to the upper layer 72 to merely secure and hold the base layer 70 in position relative to the upper layer 72. FIGS. 10 to 17 show how the pillow is used and supports a user's head during normal head movement while the person is asleep. FIG. 10 shows how the head and neck are supported during all natural movements of the neck and head and particularly as the neck naturally relaxes causing the chin to move towards the chest; regardless of the degree to which this happens, the weight of the head is distributed evenly by all support sections since the neck and head follow the natural curve of all the support sections--the areas shaded by wide hatch lines show head, neck and chin areas which are supported as the head bends towards the chest. FIG. 11 shows the correct spinal alignment along the line A achieved by the present invention. FIG.
12 shows how sections of the pillow of this invention support a person's head, chin and neck and also shows the gradual reduction of pressure to the face as it enters the open space of the gaps 13, 14--this gradual decrease in pressure gives a great feeling of comfort and leaves no line on the face where the support sections end and the open space begins. FIG. 13 shows a person in prone position and FIG. 14 shows a person sleeping in side or coma position. FIGS. 15 and 16 show a person in supine position with lower limbs of the pillow placed upon a person's shoulders to maintain the head in correctly aligned position as shown in FIG. 16 where the line labelled A denotes the spinal axis. FIG. 17 shows a person's head in relation to the pillow when moving the head while sleeping in a supine position. It should be noted that even when the outer parts of the limbs of the pillow of this invention are in the closed-up position, nevertheless the top rolled or convex inner surfaces 10f, 10g remain apart so that the breathing air slots or gaps 13, 14 remain open and continuous and therefore unblocked. It should be particularly noted from FIGS. 10 and 17 that as the head position changes either by falling towards the chest as in FIG. 10 or by moving sideways as in FIG. 17 the nose and mouth region generally remains over the breathing slot or gap between the uppermost points (as identified by reference P in FIG. 1C) of the adjacent limbs 10b, 11a or 10c, 11b so that a complete breathing space is always provided and the pillow itself does not contact the mouth or nasal area to block or obstruct the mouth or nasal area. The shape and contour of the slots 13 and 14 also allow easy escape of exhaled air so that a build-up of carbon dioxide is not created in the vicinity of the nose and mouth.
This is particularly shown in preferred embodiments where the slots 13 and 14 pass completely through the pillow, particularly when the slots 13 and 14 are left open, so that extremely good ventilation is provided into and out of the slots 13 and 14. Nevertheless, even if the slots 13 and 14 are closed the elongated contour of the slots 13 and 14 and their general size are able to provide more than adequate ventilation to ensure that there is no carbon dioxide build-up. FIG. 12 shows the pressure being gradually reduced on the user's face in the vicinity of the eyes and mouth by the curved surfaces 10f and 10g as identified by the arrows in FIG. 12. As can be seen by the arrows, pressure reduces gradually towards the eyes and mouth region and of course no pressure results from the slots 13 or 14 where no contact is made with the person's face. Thus, the gradual pressure change provides comfortable support and eliminates the possibility of pressure lines or marks on the user's face which may occur if there are abrupt disruptions and changes in the surface profile of the pillow. In other embodiments not shown the base layer 70 or part of the upper layer 72 may have different degrees of firmness (by being made from different material) in the neck support region 11. Whilst I have described in the foregoing embodiment one preferred form of my invention it will be understood by those skilled in this art that variations and modifications may be made without departing from the spirit and scope of this invention and I therefore do not wish to be understood as limiting myself to the precise terms used.
Source: https://patents.google.com/patent/US6003177A/en
Note: This is a guest post contributed by Mandarin Blueprint, a Chengdu-based company which guides students around the world towards proficiency in the Chinese language. The subject of this post, pinyin, is a crucial first step which any successful Chinese language learner will need to master. It is lengthy, so it might be best to refer to this post in more than one sitting!

Chinese Pinyin is a system of Roman characters created by a committee led by 周有光 Zhōu Yǒuguāng. Despite using the same letters as the English alphabet (well, almost), there are some major differences in Chinese Pinyin. The name "Pinyin" comes from the pronunciation of the characters 拼音 pīnyīn. 拼 as a character alone means "to piece together", with an alternate meaning of "to spell". 音 means "sound", so "piecing together" (aka "spelling") "sound"! See, it all makes sense :-D.

Goals of this Article:
- Briefly touch on the Pinyin origin story
- Clarify the important focus points of the Chinese Pinyin chart
- Cover the most crucial inconsistencies in sound and spelling
- Give you confidence that you can successfully learn all of it properly in a (relatively) short amount of time

Before we jump in, a reminder that we'll be using the terms "Initials", "Finals" & "Tones" liberally throughout this post. If you aren't familiar with these terms, check out the blog post we did about The Sounds of Mandarin Chinese.

THE CHINESE PINYIN ORIGIN STORY

We mentioned above that we're only going to touch on this briefly, and that's because knowing the origin story doesn't help you learn Pinyin. That said, the question "but why would they spell it like that?" comes up all the time. If there's anything all these years of studying & teaching Chinese Pinyin has taught us, it's that some people just gotta know the why. Don't feel bad though, we're exactly the same way ;).
Thinking About Why Pinyin Was Made the Right Way

The simplest answer to any & all of the "but why is it spelled this way" questions is that the Chinese alphabet wasn't made for English speakers. We all know that when you are trying to solve a problem, it's all in how you look at it. If your premise is "Chinese people made an alphabet to help English speakers learn Chinese" you will judge it more harshly. However, that was not the problem the creators of Pinyin were trying to solve.

The Lay of the Land When Pinyin Was Invented

In the 1950s China had just been through quite a lot. Before WW2 there was civil war, which got put on pause during WW2 and then resumed after it. Yikes. Consequently there really wasn't much of an economy once the new leadership took over in 1949, and literacy rates were super low. Not too easy to study during wartime. One of the reasons for the invention of Pinyin was to help raise literacy rates across the country. In other words, the Chinese alphabet was made for Chinese people, not Westerners.

Not the Only System

Pinyin isn't the only Chinese alphabet around, nor was it at the time of its invention. We find it a bit of a shame that they didn't choose to go with the Yale Romanization, but there's no use crying over spilled milk. There are several other systems that are more intuitive and easier to read phonetically, but again, Pinyin wasn't made with our phonetic instincts in mind. The fact is, Pinyin is the Mandarin Chinese alphabet romanization that is universally recognized across the mainland. Hence, other systems should only be used for help with learning pronunciation; sadly, they won't give you much practical help within the country.

THE CHINESE ALPHABET PINYIN CHART - Clarifying Points of Focus

First of all, take note of the fact that all the syllables of Mandarin Chinese fit onto one A4 page. That should be your first sign that conquering Mandarin Pinyin & pronunciation is an achievable goal.
CHINESE PINYIN BASICS

- Chinese Pinyin uses 25 out of 26 letters of the alphabet. The letter Ü replaces V. You'll notice that on the far right of the chart.
- The top yellow rows on the X-axis represent the categories of finals. The Simple Finals are your main categories, with the compound & nasal finals being subcategories beneath the simple finals. There are 35 finals total.
- The left yellow column on the Y-axis represents the categories of initials. There are 21 initials.
- The top box of the left yellow column has a Ø symbol. This represents the "initial" of the first row of syllables, but there isn't actually an initial. It is the "Null" initial.
- The syllables located on the graph are, generally speaking, simply the combination of the initial category & final category.
- There are "inconsistencies" when it comes to applying point #5.

A Note on Spelling with Tone Marks & in Words

Chinese Pinyin represents how the initials and finals combine, but it doesn't mention anything about tones. A tone mark ( ¯ ´ ˇ ` ) is placed above one of the vowels of the syllable (never a consonant). Let's use the syllable "mao" to show how it works:

1st tone: māo – a flat line, much like how 1st tone is a flat pitch
2nd tone: máo – a rising line, much like how 2nd tone is a rising tone
3rd tone: mǎo – a line that, like an isolated 3rd tone, goes down and then up
4th tone: mào – a falling line, much like how 4th tone is a falling tone
5th tone: mao – no mark

Why was "a" given the tone mark and not "o"? These three rules make it clear:

- If the letters "a" or "e" exist anywhere in the syllable, they get the tone mark (they never appear together)
- When "o" and "u" are combined to make "ou", "o" gets the tone mark
- In any other combination, it is the final vowel that gets the tone mark

Another important point to note is the correct way to spell multi-syllable words in Pinyin. There is NOT a space between the two syllables.
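The three tone-mark placement rules are purely mechanical, so they can be sketched in a few lines of code. Here is a minimal Python sketch (the function name and the vowel table are our own, for illustration only; it assumes a lowercase, toneless syllable and a tone number from 1 to 5):

```python
# Minimal sketch of the tone-mark placement rules described above.
# Assumes a lowercase, toneless pinyin syllable and a tone number 1-5.

TONED = {  # plain vowel -> its marked forms for tones 1-4
    "a": "āáǎà", "e": "ēéěè", "i": "īíǐì",
    "o": "ōóǒò", "u": "ūúǔù", "ü": "ǖǘǚǜ",
}

def place_tone_mark(syllable: str, tone: int) -> str:
    """Place the tone mark on the correct vowel of a pinyin syllable."""
    if tone == 5:                    # 5th (neutral) tone: no mark at all
        return syllable
    if "a" in syllable:              # rule 1: "a" or "e" always takes the mark
        target = "a"
    elif "e" in syllable:
        target = "e"
    elif "ou" in syllable:           # rule 2: in "ou", the "o" takes the mark
        target = "o"
    else:                            # rule 3: otherwise the final vowel
        target = [c for c in syllable if c in TONED][-1]
    return syllable.replace(target, TONED[target][tone - 1], 1)

print(place_tone_mark("mao", 3))  # mǎo
print(place_tone_mark("xiu", 4))  # xiù
```

Note how "xiu" gets the mark on the "u" (the final vowel) while "mao" gets it on the "a", exactly as the three rules predict.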
For example, the word "你好" ('hello') has the pinyin spelling "nǐhǎo". There is no separation between the two syllables.

INCONSISTENCIES OF THE CHINESE PINYIN CHART YOU MUST KNOW

Inconsistencies in Pinyin fall into two clearly delineated conceptual boxes:

1. The same letters representing different sounds in different syllables
2. The same sound having different spellings in different contexts

This can be frustrating for beginners, but as with everything in Pinyin, there aren't that many exceptions to learn in the scheme of things. You'll also have ample opportunity to solidify this knowledge as you continue to study Mandarin.

The Same Letters Don't Always Have the Same Sound

We'll start this section with a fact about Pinyin: the letter "e" has three different pronunciations depending on the syllable it is in. "Great Scott!" you must be thinking, "Why would Pinyin be so confusing in that way?" Guess what? It also has different pronunciations in English. Ten. English. Evening. Ha, that last one has two different pronunciations of "e" in the same word! The point is, you can't break Pinyin down at the level of the letter and assume there will be consistency across the language in how it sounds. This is an article and sounds are audio based, so if you want to learn more about these inconsistencies you can check them out in our Pronunciation Mastery video course. That said, here are a few tips to give you a hint as to what you should be aware of:

- "a" has three different pronunciations
- "e" has three different pronunciations
- "o" has two different pronunciations
- Nasal finals in the "i" and "ü" sections of the Pinyin chart tend to change the pronunciation of vowels
- When the letter "e" is in compound finals, it is easier to say, like the vowel sound of "ten" in English. The simple final "e" is difficult for non-natives.
- Amongst standard Mandarin speakers, there are two acceptable pronunciations of the nasal finals "-in" and "-ing".
The differences are subtle, but they're there. As we mentioned, this is not an exhaustive list. We just want you to know what you are getting into. It's weird… Pinyin fails to be intuitive a lot, but because there are so few overall syllables it is possible to learn everything that's inconsistent about it.

The Same Sound Spelled Differently in Different Contexts

We mentioned the nasal finals "-in" and "-ing" above, which are spelled "yin" and "ying" by themselves. What's up with that? Chinese learners often feel perplexed when considering the spelling choices of the Pinyin creators. Sometimes the exact same sound is spelled differently. There are even silent letters that at first glance seem pointless. It is important to be aware of these moving forward.

The Silent Y & W… and Y Again

The simple finals i, u & ü aren't spelled in this way when they are either by themselves or the first letter of a longer syllable. These rules affect the "i", "u" and "ü" sections of the top "null Ø initial" row of the chart. Here are the rules:

- First, when the letter "i" is a syllable by itself, add a "y" in front of it. The "y" is silent.
  - 一 yī
  - 移 yí
  - 以 yǐ
  - 意 yì
- Secondly, when the first letter of a multi-letter syllable is "i", replace it with a "y". In this context, the "y" is pronounced the same way as "i". It is simply a placeholder for it.
  - 烟 yān (originally "iān")
  - 羊 yáng (originally "iáng")
  - 也 yě (originally "iě")
  - 要 yào (originally "iào")
- **TWO EXCEPTIONS: the nasal finals "-in" and "-ing" are spelled "yin" and "ying" when by themselves, so just like with "yi" the "y" is silent and simply added in front of the "i".
- Next, when the letter "u" is a syllable by itself, add a "w" in front of it. The "w" is silent.
  - 巫 wū
  - 无 wú
  - 五 wǔ
  - 物 wù
- Similarly to the second rule for "yi", when the first letter of a multi-letter syllable is "u", replace it with a "w". In this context, the "w" is pronounced the same way as "u". It is simply a placeholder for it.
- 窝 wō (originally “uō”)
- 完 wán (originally “uán”)
- 网 wǎng (originally “uǎng”)
- 外 wài (originally “uài”)
- In the same way as “yi” and “wu”, when the letter “ü” is a syllable by itself, add a “y” in front of it. The “y” is silent.
- 迂 yū
- 鱼 yú
- 雨 yǔ
- 遇 yù
- Finally, when the first letter of a multi-letter syllable is “ü”, add a “y” in front of it. The “y” is silent.
- 约 yuē (originally “üē”)
- 元 yuán (originally “üán”)
- 远 yuǎn (originally “üǎn”)
- 运 yùn (originally “ün”)
- Note: Some of you may be wondering why the umlaut disappears in these cases; more on that later.

But Why Though?

As a result of reading these rules, some of you might be wondering, WHY WOULD THEY DO THIS?!?! We empathize completely. Here is a way you can think about it: having an isolated letter “i”, “u” or “ü”, or a syllable that starts with “i”, “u” or “ü”, makes the boundaries between syllables unclear in context.

Example Pinyin sentence (我曾经以为学习语言是没有意义的) comparing when “y, w & y” are removed vs. added:

Removed: uǒ céngjīng ǐuéi xuéxí üián shì méiiǒu ìì de
Added: wǒ céngjīng yǐwéi xuéxí yǔyán shì méiyǒu yìyì de

As you can see, this would be a huge pain in the neck to read if it weren’t for these extra letters. It’s especially hard for the words that require two syllables to be spelled together as a word (i.e. yǐwéi vs. ǐuéi).

Important Extra Notes About the Chinese Pinyin Letters Y, W & Y

Only the Ø null initial row displays this phenomenon. Consequently, as soon as you add an initial like “zh-”, “x-” or “j-” in front of one of these pronunciations, they revert back to their spelling of “i”, “u” or “ü”:

Ø + u = wu, but zh + u = zhu (no need for w)
Ø + uan = wan, but zh + uan = zhuan
Ø + i = yi, but x + i = xi
Ø + ian = yan, but x + ian = xian
Ø + ü = yu, but j + ü = ju
Ø + üan = yuan, but j + üan = juan
n + u = nu, and n + ü = nü
l + u = lu, and l + ü = lü

What’s up with those last two rows? Why doesn’t the “ü” keep the umlaut in “yu”, “yuan” & “juan”, but keeps it in “nü” & “lü”?
The answer is that “l” and “n” are the only initials that get used with both “u” & “ü”. Only five initials can combine with “ü” (l, n, j, q, x), and “yu” is the spelling when there is no initial (Ø). The remaining three, “j, q & x”, never get combined with “u”, only “ü”. Because there is no contradiction, the creators of Pinyin didn’t put the umlaut in “yu, ju, qu, xu”. It is important to know this, because if you don’t, you are likely to confuse “u” and “ü”.

Finally, “nü, lü, nüe & lüe” are the only four pronunciations in the Chinese alphabet that use the letter “v” when typing. For example, when you want to type the word 女人 nǚrén (“woman”), the correct input method is “nvren”.

3 MAJOR OMISSIONS

Pinyin omits letters of important finals in three cases. Do not be fooled! Articulate these letters as if they are still there.

First of all, the final “-iou” by itself follows the rule mentioned above of “Ø + iou = you”. So, does “m + iou = miou”? NO! Not in spelling, anyway. It is still pronounced as “miou”, but it is spelled “miu”. Pinyin omits the “o”. This is true all the way down the chart:

j + iou = jiu (pronounced jiou)
d + iou = diu (pronounced diou)
q + iou = qiu (pronounced qiou)

The same happens to “-uei” and “-uen”:

Ø + uei = wei
d + uei = dui (pronounced duei)
g + uei = gui (pronounced guei)
t + uei = tui (pronounced tuei)
h + uei = hui (pronounced huei)

Ø + uen = wen
ch + uen = chun (pronounced chuen)
z + uen = zun (pronounced zuen)
k + uen = kun (pronounced kuen)
s + uen = sun (pronounced suen)

THE FAKE “i” IN CHINESE PINYIN

Due to the limited number of letters available in Pinyin, there are seven syllables in Chinese whose “vowel” sound doesn’t sound like anything in English. The spelling of these sounds is “zi, ci, si, zhi, chi, shi, ri”. If you look at your Pinyin chart, they are the seven syllables immediately to the left of the big blank white space in the middle. Apart from these seven, every other time you see the letter “i” in Pinyin, it is pronounced like the “ee” in “see”.
However, these syllables are articulated using a different vowel sound. Hence, the “fake” i. Actually, the “vowel sound” (if you can call it that) is even different between “zi, ci, si” & “zhi, chi, shi, ri”. What gives?

Why Choose “i” & Not Another Vowel?

Remember, the creators of Chinese Pinyin had to fit the sounds of Mandarin (round peg) into the English alphabet (square hole). They wanted every syllable to have at least one vowel, and no wonder; can you imagine the official spelling of these syllables being “z, c, s, zh, ch, sh, r”? That would be tough to read. 怎么办? Well, the vowels at their disposal were “a, e, i, o, u”, and they even added “ü” out of necessity. With that in mind, what other vowel apart from “i” could they have used? A? Nope, “za, ca, sa, zha, cha, sha” are already syllables. Same for “ze, ce, se, zhe, che, she, re” & “zu, cu, su, zhu, chu, shu, ru”. That leaves “o”, “ü” & “i” as options, but the first two don’t even come close to sounding like these syllables. As a result, “i” is the only reasonable choice.

YOU CAN MASTER CHINESE PINYIN

The concepts discussed today are not exhaustive in terms of the many idiosyncrasies that exist in Pinyin. However, they are the most important to be aware of as you jump into this project, and that is what learning Pinyin is: a project. Furthermore, pronunciation in general never really goes away in terms of your attention and focus as a learner; it simply diminishes in needed energy expenditure over time.

That said, as “projects” go, Pinyin is not a big one. Consider that English has approximately 16,000 distinct syllables. Pinyin has approximately 409, but for the sake of easy math let’s round to 400. Pinyin fits onto one A4 page, implying that English syllables would take up 40 A4 pages (16,000 ÷ 400). Holy cats. It would be just plain silly to start learning English by memorizing those 40 A4 pages. In that circumstance, it isn’t even worth thinking about. Chinese though?
You would be a fool not to focus a lot on it in the beginning.

GOOD PRONUNCIATION WILL NEVER STOP HELPING YOU

Just think about all the benefits of being clear on all the syllables of the language & how they are said. Here are just a few:
- You will practice correctly. We’ll be the first to say that learning a pronunciation principle doesn’t make you able to produce it, but it does clarify the road to how you practice.
- You will avoid practicing incorrectly. Not practicing at all is better than practicing incorrectly in the long run. Ingraining mistakes increases the time necessary in the future to fix them. Consequently, your likelihood of quitting increases as well.
- You will actually impress & emotionally move Chinese people. If you show native speakers you’ve taken the time & effort required to re-form your mouth muscles to be able to speak properly, not only will you stand out, the Chinese people around you will be touched that you took their language, and by proxy them, seriously. It gives everyone face, you included.
- The opportunities that open up to you will blow you away. The time of Chinese people seeing value in you simply because you are foreign has passed, but that doesn’t mean they aren’t still tremendously curious about a foreign perspective. They just need to understand you. Show Chinese people that you respect them, and everything from friendships to business relationships or even romance will become way more likely. On the flip side, if you have mediocre or poor pronunciation, there is a high probability of awkwardness & disappointment.

Start Now with Chinese Pinyin

So how long does it take to learn the whole Pinyin chart and the relevant traps & pitfalls? It depends on you. How many hours a day will you spend on it? We’d say anywhere from a week (super-dedicated) to a couple of months (20 minutes a day). It is a great way to get to know the language.
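As a closing illustration, the null-Ø-initial spelling rules and the three omissions above are mechanical enough to capture in a short function. The sketch below is our own toy illustration, not part of any official Pinyin tooling: it takes an initial (possibly empty) plus an underlying final and returns the surface spelling, ignoring tone marks to keep it short.

```python
def surface_spelling(initial: str, final: str) -> str:
    """Toy sketch of Pinyin surface-spelling rules (tone marks omitted)."""
    if initial == "":  # the "null Ø initial" row of the chart
        if final == "i":
            return "yi"             # silent y added: i -> yi
        if final in ("in", "ing"):
            return "y" + final      # silent y added: in -> yin, ing -> ying
        if final.startswith("i"):
            return "y" + final[1:]  # placeholder: ian -> yan, iou -> you
        if final == "u":
            return "wu"             # silent w added: u -> wu
        if final.startswith("u"):
            return "w" + final[1:]  # placeholder: uan -> wan, uei -> wei
        if final.startswith("ü"):
            # silent y added and the umlaut dropped: ü -> yu, üan -> yuan
            return "yu" + final[1:]
        return final
    # With a real initial, apply the three major omissions instead:
    if final == "iou":
        final = "iu"                # m + iou is spelled "miu"
    elif final == "uei":
        final = "ui"                # d + uei is spelled "dui"
    elif final == "uen":
        final = "un"                # ch + uen is spelled "chun"
    # j, q, x only ever combine with ü, so they drop the umlaut;
    # n and l combine with both u and ü, so they must keep it.
    if initial in ("j", "q", "x"):
        final = final.replace("ü", "u")
    return initial + final

print(surface_spelling("", "ian"))   # yan
print(surface_spelling("", "üan"))   # yuan
print(surface_spelling("m", "iou"))  # miu
print(surface_spelling("j", "ü"))    # ju
print(surface_spelling("n", "ü"))    # nü
```

For instance, the null initial plus “üan” surfaces as “yuan”, while “m” plus “iou” surfaces as “miu”, exactly as on the chart.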
Master your MySQL data: a table partitioning approach for better performance

Partitioning tables is one of the options that a DBA has to improve performance on a database server. Partitioning is a way in which a database (MySQL in this case) splits the actual data of a table down into separate physical pieces, while the SQL layer still treats them as a single table. Different portions of the table are stored as separate files, so that I/O can be distributed optimally across the file system. At the time of this writing, MySQL supports only horizontal partitioning, which divides the rows of a table into smaller subsets; vertical (column-based) partitioning is not supported. MySQL's implementation of user-defined partitioning was led by its partitioning architect and lead developer, Mikael Ronström.

Why partition? During scan operations, the MySQL optimizer knows which partitions contain the data that will satisfy a particular query and accesses only those partitions during execution (partition pruning). I/O operations can also be improved because data and indexes can be split across many disk volumes. Partitioning further helps manageability: it breaks data into smaller groups for more efficient queries, helps adhere to file-system file-size limits, simplifies maintenance, and reduces the cost of storing large amounts of data. Perhaps the most common use case where partitioning shines is a dataset where "old" data is deleted from the table periodically: dropping a whole partition is far cheaper than deleting rows one by one. If you rotate partitions on a schedule, the MySQL event scheduler can automate the housekeeping; crontab is a good alternative if you are unable to use the event scheduler.

A typical candidate is tracking data. It is very common today to store a lot of data in our MySQL databases: tracking information about a website to do analytics later, logging facts about a business, or recording geolocation information provided by phones. In a GPS tracking service, for example, each "location" report a user's phone sends to the server is stored as a single row, and the table grows without bound. Contrary to a common assumption, storing each entry as a single row does not by itself decrease performance; table size is rarely the issue. I/O time and indexes are the issues.

MySQL offers RANGE, LIST, HASH, and KEY partitioning, with optional subpartitioning. The partitioning expression can be a column value or a function acting on column values, depending on the type of partitioning used. Partitioning by HASH is used primarily to ensure an even distribution of data among a predetermined number of partitions. KEY partitioning is a special form of HASH partitioning in which the hashing function is supplied by the MySQL server itself: the server employs its own internal hashing function, based on the same algorithm as PASSWORD(). Since MySQL 5.6 it is also possible to use a DATE or DATETIME column directly as the partitioning column with RANGE COLUMNS and LIST COLUMNS partitioning.

As of MySQL 5.7.17, the generic partitioning handler in the MySQL server is deprecated, and it is removed in MySQL 8.0, where the storage engine used for a given table is expected to provide its own ("native") partitioning handler. In MySQL 8.0, partitioning support is provided by the InnoDB and NDB storage engines only; MyISAM, MERGE, CSV, and FEDERATED tables cannot be partitioned. In MySQL 5.7.17 through 5.7.20, the server automatically performs a check at startup to identify tables that use nonnative partitioning and, for any that are found, writes a message to its error log; the check can be turned off with the --disable-partition-engine-check option, and in MySQL 5.7.21 and later it is not performed by default. To prepare for migration to MySQL 8.0, any such table should be changed to a native storage engine such as InnoDB, or dropped (although doing this is not advised). If you are compiling a partitioning-enabled MySQL 5.7 build from source, the build must be configured with the -DWITH_PARTITION_STORAGE_ENGINE option. You can verify that your binary supports partitioning with a SHOW PLUGINS statement, or by checking the INFORMATION_SCHEMA.PLUGINS table for a partition plugin listed with the value ACTIVE.

A common fallacy: "partitioning will make my queries run faster". It won't, on its own. MySQL uses a pruning algorithm, not parallel execution, so good partitioning design that aligns query filters with the partitioning scheme is required to see a benefit; ponder what it takes for a "point query" and you will see that an index lookup is needed either way. Partitioning performance is best when most of the queries need to access only one partition or a small subset of partitions at a time, and range queries on a HASH or KEY partition key are a bad idea because they defeat pruning. Partitioning can even hurt: inserting into a table with hundreds of partitions can make performance drop drastically, although some of the problems of having lots of partitions are lessened in MySQL 8.0 by the data-dictionary-in-a-table. Splitting a table vertically by hand rarely helps unless you are moving BLOB/TEXT columns out of the way, and it might even slow down queries because of the added join required to retrieve the whole dataset. The worst situation is when people don't expect that partitioning can hurt performance: they implement it, then application performance gets slow. Partitioning isn't a silver bullet for performance gain (yet), which might change in the future, as the MySQL development team seems to be working on parallel query execution support for the InnoDB engine (see "MySQL 8.0.14: A Road to Parallel Query Execution is Wide Open!").

For more detail, the MySQL reference manual's partitioning chapter covers an overview of partitioning, partitioning types, partition management (adding, removing, and altering partitions), maintenance of partitions, how partitioning handles NULL, exchanging partitions and subpartitions with tables, and restrictions and limitations on partitioning; the INFORMATION_SCHEMA.PARTITIONS table provides information about existing partitioned tables. Other sources of information include the official MySQL Partitioning forum, a MySQL news site featuring MySQL-related blogs, and Giuseppe Maxia's presentation "Boost performance with MySQL 5.1 partitions" (MySQL Community Team, Sun Microsystems, March 19, 2009).
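To make the "drop old data periodically" use case concrete, here is a small pure-Python simulation of date-range partitioning. This is an illustration of the idea only, not MySQL code; the bucket layout and function names are invented for this sketch. Rows are routed to one bucket per month, so expiring a month removes a whole bucket at once, which is the same reason dropping a partition is so much cheaper than a row-by-row DELETE.

```python
from collections import defaultdict
from datetime import date

# Buckets keyed by (year, month): a stand-in for monthly RANGE partitions.
partitions = defaultdict(list)

def insert_row(row):
    """Route each row to its partition based on its 'created' date column."""
    d = row["created"]
    partitions[(d.year, d.month)].append(row)

def drop_old_partitions(cutoff):
    """Expire whole months at once: the analogue of dropping a partition."""
    dropped = 0
    for key in [k for k in partitions if k < (cutoff.year, cutoff.month)]:
        dropped += len(partitions.pop(key))
    return dropped

insert_row({"id": 1, "created": date(2021, 1, 15)})
insert_row({"id": 2, "created": date(2021, 2, 3)})
insert_row({"id": 3, "created": date(2021, 3, 9)})
print(drop_old_partitions(date(2021, 3, 1)))  # 2 rows expired in one step
print(sorted(partitions))                     # [(2021, 3)]
```

Deleting the two January and February rows touches only two bucket keys, no matter how many rows those months hold; a DELETE-based purge would have to visit every expiring row.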
A simple benchmark

In this section I am going to show the performance difference between a partitioned table and a non-partitioned table by running a simple query. As a Data Engineer I frequently come across issues where the response time of an API is very poor or some ETL pipeline is taking too long; as a database grows exponentially, even optimized queries get slower. The setup is a local MySQL 5.6.22 server, MySQL Workbench, and the salaries table (InnoDB engine) from the sample employee database.

We will use this simple filter query for testing:

SELECT * FROM salaries WHERE emp_no = '10001' AND from_date > '1995-06-01';

I ran this query multiple times; the average run time is approximately 1.7302 seconds. Let us examine how MySQL executes it using the EXPLAIN clause:

EXPLAIN SELECT * FROM salaries WHERE emp_no = '10001' AND from_date > '1995-06-01';

As we can see from the result, the "partitions" column has a NULL value, because there are no partitions in this table.

Now create a partitioned copy of the table, defining the partitions before inserting the data:

CREATE TABLE test_salaries LIKE salaries;
ALTER TABLE test_salaries PARTITION BY KEY(emp_no) PARTITIONS 100;

I am selecting the emp_no column as the key. In production you should choose the key columns carefully for better performance, based on how the application queries the data: you want to ensure that table lookups go to the correct partition or group of partitions. After loading the same data, run the same filter query against the partitioned table:

SELECT * FROM test_salaries WHERE emp_no = '10001' AND from_date > '1995-06-01';

Again I ran this query multiple times; the average run time is now approximately 0.23 seconds. Once again, let us check how MySQL executes the query in the partitioned table by using the EXPLAIN clause:

EXPLAIN SELECT * FROM test_salaries WHERE emp_no = '10001' AND from_date > '1995-06-01';

This time the "partitions" column returns a value, p73 in my case (it might be different for you): only one of the 100 partitions needs to be read. So we can see a significant improvement in query run time, from 1.7302 seconds to 0.23 seconds.

In order to utilise this technique properly, it is recommended that you first analyse your data and choose the key columns on which partitioning is to be done, as well as a suitable number of partitions based on the volume of your data. Applied with care, partitioning is a powerful optimisation technique that improves query performance, manageability, and the cost of storing large amounts of data.
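The pruning behaviour behind PARTITION BY KEY can be sketched conceptually in a few lines of Python. This is purely illustrative: MySQL's KEY partitioning uses the server's own internal hash based on the PASSWORD() algorithm, so zlib.crc32 here is just a stand-in, and the helper names are invented for this sketch.

```python
import zlib

NUM_PARTITIONS = 100

def partition_of(emp_no):
    # Stand-in hash: MySQL's KEY partitioning really uses an internal
    # function based on the PASSWORD() algorithm, not CRC32.
    return zlib.crc32(emp_no.encode()) % NUM_PARTITIONS

buckets = [[] for _ in range(NUM_PARTITIONS)]  # one list per "partition"

def insert_row(emp_no, from_date):
    buckets[partition_of(emp_no)].append((emp_no, from_date))

def select_rows(emp_no, min_from_date):
    """Equality on the partition key prunes the scan to a single bucket,
    where the rest of the WHERE clause is still applied row by row."""
    bucket = buckets[partition_of(emp_no)]  # 1 of 100 partitions scanned
    return [r for r in bucket if r[0] == emp_no and r[1] > min_from_date]

for n in range(10000, 10100):
    insert_row(str(n), "1994-01-01")
insert_row("10001", "1995-06-01")
insert_row("10001", "1996-06-01")

print(select_rows("10001", "1995-06-01"))  # [('10001', '1996-06-01')]
```

An equality filter on the partition key routes the lookup to exactly one of the 100 buckets, mirroring the EXPLAIN output above where only partition p73 is scanned; a range filter on the key would force a scan of every bucket, which is why range queries on a hash-style partition key defeat pruning.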
Periapical abscess occurs as a result of bacterial infection of the tooth and the surrounding structures, most commonly on the grounds of dental caries and tooth decay. Focal inflammation and abscesses can produce intense pain, and the diagnosis can be achieved through physical examination. Treatment includes antibiotics, root canal procedure, and sometimes resection of the gums to allow for pus drainage. A periapical abscess may initially be asymptomatic, but in most cases, patients present with intense, throbbing or sharp-shooting pain at the site of the abscess formation. The affected tooth is tender when pressure is applied and chewing on the side where the abscess is formed is usually avoided by the patient because of pain. Intraoral swelling is usually observed on physical examination , commonly accompanied with redness of the gums and swelling. In most severe cases, facial asymmetry may be observed because of intense swelling. Although very rarely, a periapical abscess may transform into a chronic infection, due to the development of sinus tracts, which serve as channels through which pus is partly drained and can potentially cause complications, such as dissemination of infection to other sites. Entire Body System This occurred in a 78-year-old female patient that presented with general weakness and fever. We revealed that she had a periapical abscess. The blood culture was positive for D. pneumosintes and S. exigua; however, identifying them was challenging. [ncbi.nlm.nih.gov] Periapical abscesses can cause severe tooth pain and sensitivity to temperature; a fever; pain while chewing; and swelling in the gum, glands of the neck, and upper or lower jaw. [connecticutchildrens.org] If the infection becomes more serious or it spreads, you may experience fever and swelling. Treatment is necessary if you develop an abscess, and there are two different types of abscesses you may have. Keep reading to learn more. 
The diagnosis of periapical abscess can be made on physical examination, by inspection of the oral cavity and examination of the site where the patient reports pain and swelling. The original periapical lesion, however, may not be easy to identify right away because of the tissue destruction created by inflammation and infection. Nevertheless, the diagnosis of periapical abscess should include the following diagnostic steps:
- Evaluate possible underlying risk factors - numerous conditions, as mentioned, predispose patients to the development of periapical abscesses and should be investigated, but the primary causes are dental caries and tooth decay.
- Perform a complete blood count, to evaluate for leukocytosis in the blood. When leukocyte levels are high, the predominant cell type is usually neutrophils.
- Perform blood cultures in severe cases with signs of systemic infection - both aerobic and anaerobic cultures should be obtained if the patient reports fever.
- Use radiography to exclude other localizations - it is important to distinguish periapical abscesses from other forms, such as periodontal abscesses, and X-rays may initially help in identifying the exact site of the lesion.
Treatment principles include several approaches:
- Root canal procedure - smaller lesions, as well as localized and uncomplicated periapical abscesses, result in infection of the pulp and damage to proximal blood vessels and nerves, which mandates their removal and cleaning. This procedure comprises removal of the pulp as well as of infected and damaged structures, abscess drainage and resolution of the infection, and appropriate replacement and filling of the removed structures. After the procedure, irrigation with disinfectant material is performed to prevent recurrences.
- Surgical care - in the setting of accumulated pus in the gums and tooth surroundings, surgical incision and drainage is recommended in order to evacuate the abscess and pus. In more severe cases, tooth extraction may be recommended.
- Symptomatic therapy - since patients often present with severe pain, non-steroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen and diclofenac are prescribed to reduce pain, and also for their anti-inflammatory properties.
- Antibiotic therapy - treatment of periapical abscesses with antibiotics is usually reserved for larger abscesses, and therapeutic choices include metronidazole, clindamycin, and amoxicillin.
In most cases, periapical abscesses occur in the setting of a localized infection, and bacteria rarely spread to adjacent structures and distant sites. However, spread of infection to the adjacent bone and sinuses has been observed, as has dissemination to the central nervous system and other sites through the circulation, although these occurrences are quite rare.
In terms of the periapical abscesses themselves, those that have extended to the floor of the mouth or to the neck may result in partial airway obstruction and necessitate prompt treatment, usually through surgical incision, to allow the pus to be drained. Periapical abscess is a bacterial infection, and pathogens that have been associated with it include Bacteroides spp., Fusobacterium, Actinomyces, Peptostreptococcus, Prevotella oralis, and Prevotella melaninogenica, as well as Streptococcus viridans. Most of these organisms are commensals of the oral flora and enter the pulp, leading to the formation of abscesses, when the structure of the tooth is breached, as is the case in dental caries, tooth decay, or mechanical trauma. Recent advances in microbiological testing have resulted in the discovery of other pathogens as causative agents of this type of infection, including Treponema spp., Atopobium, Bulleidia extructa, and Mogibacterium species, as well as Cryptobacterium curtum. Up to a third of the microorganisms isolated in these cases produce beta-lactamases, which significantly reduces treatment options. This form of abscess is most commonly observed in young children; associated factors include thinner enamel because of ongoing tooth development, but also poor hygiene, which remains an issue linked to socioeconomic factors, as well as failure to seek dental care. In addition, several developmental and acquired conditions have been linked with periapical abscesses, including abnormal development of the enamel (such as dens invaginatus or dens evaginatus) as well as dentin malformations, which can be observed in dentine dysplasia, dentinogenesis imperfecta, osteogenesis imperfecta, and familial hypophosphatemia. Acquired conditions may include buccal cysts which become infected. In adults, the formation of periodontal abscesses is much more common than that of periapical abscesses.
The pathogenesis of periapical abscess starts with the formation of dental plaque and erosion of the outer layers of the tooth - the enamel and dentin. These two structures protect the tooth pulp from harmful pathogens, and once they are breached (as in dental caries or tooth decay), bacteria may enter the pulp, which is supplied with blood vessels and nerves. Once the bacteria reach the local circulation, the immune system recognizes their presence and mounts an inflammatory reaction, leading to the migration of leukocytes and the production of pro-inflammatory cytokines. All these events lead to pus accumulation and abscess formation, in this case at the apex of the root of the tooth. Prevention of periapical abscesses can be achieved through proper dental hygiene as well as regular dental examinations. Regular teeth cleaning, according to the dentist's instructions on technique and frequency, and the other steps involved in dental hygiene should be implemented; these measures can effectively reduce the risk of any dental disease. Fluoridation of communal drinking water has been described as the most effective large-scale preventive measure against dental caries and the development of other dental diseases, including periapical abscesses, while fluoride supplementation is recommended in fluoride-deficient areas. Periapical abscess and focal inflammation of the root of the tooth occur due to penetration of bacteria into the pulp, following dental caries and plaque formation, which facilitate the entry of bacteria into the soft tissues of the tooth.
Once the bacteria penetrate through the enamel and dentin, they reach the pulp, which contains blood vessels and nerves; there they cause an inflammatory reaction and the formation of pus, leading to the development of abscesses, which can occur both in the tooth itself and in the surrounding structures, such as the gums. The term "periapical abscess" implies the formation of this collection of pus at the apex of the root of the tooth. Patients with periapical abscesses usually present with intense pain and difficulty chewing on the side where the abscess is located, while systemic symptoms, such as fever, malaise, and proximal lymphadenopathy, occur in more severe cases. With successful treatment the abscess will resolve, but the underlying dental issues, such as caries and dental decay, must be managed properly. If the condition is left untreated, dissemination of bacteria into the surrounding structures, including the proximal bones and sinuses, may occur, while dissemination into the central nervous system and distant sites is quite rare. Nevertheless, prompt treatment should be initiated, comprising a root canal procedure, possibly antibiotics, and, if necessary, incision of the gums to allow for drainage of pus. A periapical abscess is a collection of pus in the region of the root of the tooth and the surrounding tissue, caused by a bacterial infection. In most cases, periapical abscesses form because the protective structures of the tooth, the enamel and the underlying dentin, are damaged by dental caries or tooth decay. In this way, bacteria are able to reach the soft tissue of the tooth, the pulp, which is supplied by blood vessels and nerves, and this is the initial site where the bacteria establish an infection. The immune system recognizes the presence of bacteria, mobilizes white blood cells, and releases different enzymes which aid in fighting the bacteria.
As a result of the interaction between the immune system and the bacteria, pus forms, and its accumulation results in an abscess. Patients often complain of severe pain in the region of abscess formation, usually described as throbbing and sharp. Swelling of the gums may also be noticed, as well as tenderness of the tooth and the surrounding area, while chewing on the side where the abscess has formed will be rather painful. In more severe cases, when abscesses are large, symptoms such as fever and enlarged lymph nodes may be present, and patients may experience malaise and fatigue; these symptoms indicate a more severe infection. A physician or dentist may examine the oral cavity and observe the changes in the regions surrounding the tooth and the state of the tissue affected by inflammation. Accompanying tests can include a complete blood count to evaluate for signs of infection, and an X-ray of the teeth. After evaluation, appropriate therapeutic strategies will be implemented. Uncomplicated, small abscesses may burst and drain spontaneously, but the usual recommended treatment is the root canal procedure, which involves complete cleaning of the affected tooth and removal of inflamed and dead tissue, with subsequent drainage of pus from the tooth canal. After removal of the tissue, the tooth is disinfected and filled with appropriate material in order to prevent recurrences. In addition to the root canal, surgical incision of the abscess and drainage may be recommended as well, while antibiotic therapy is reserved for patients with severe infections and those with symptoms such as fever and enlarged lymph nodes. The prognosis is generally good, and abscesses may cause no harm if treated promptly and properly; if left untreated, however, the infection may spread into the surrounding tissue, such as the bones and the sinuses, or it may take a chronic course, which can be a debilitating issue for patients and require prolonged treatment and care.
Operations With Scientific Notation Worksheet
Algebra software for math teachers creates exactly the worksheets you need in a matter of minutes; try it for free. It is available for pre-algebra, algebra, geometry, and calculus, and lets you create custom pre-algebra, algebra, and geometry worksheets, including an "operations with scientific notation" sheet with name, date, and period fields. Scientific notation worksheets for this grade are collected below.
List of Operations With Scientific Notation Worksheet
Some of the worksheets for this concept are: writing scientific notation; scientific notation (date/period); negative exponents; exponent and scientific notation practice; scientific notation with positive exponents; concept exponents and scientific notation; writing numbers in scientific notation; and "what fun, it's practice with scientific notation". Performing operations using scientific notation worksheets: this is a fantastic bundle which includes everything you need to know about performing operations using scientific notation across in-depth pages. These are ready-to-use, Common Core aligned grade math worksheets.
1. 5 Profile Chemistry
A chemistry worksheet on significant figures: determine the number of significant figures in each of the given measurements, and round off each measurement to the indicated number of significant figures. Lots and lots of significant figures: a significant figures practice worksheet will have you significantly figuring in no time, a calculations sheet lets you do calculations using the magic of significant figures, and a further sheet gives practice finding how many significant figures a measured value has.
2. Scientific Notation Teaching Math Notations Exponential
Scientific notation; mathematics; fifth grade.
This covers the following skills: understand the place-value structure of the base-ten number system and be able to represent and compare whole numbers and decimals; model problem situations with objects and use representations such as graphs, tables, and equations to draw conclusions.
3. Scientific Notation Coloring Worksheet Activities Anchor Chart
To write a number in scientific notation, we write it with a decimal point after the first digit, multiplied by a power of 10. The proper format for scientific notation is a × 10^b, where a is a number or decimal number such that the absolute value of a is greater than or equal to one and less than ten, and b is the power of 10 required so that the scientific notation is mathematically equivalent to the original number. The purpose of scientific notation is for scientists to write very large or very small numbers with ease; calculating scientific notation for a positive integer is simple, as it always follows the notation a × 10^b.
4. Scientific Notation Complete Packet Bundle Introduction Assess Worksheet Activities
The number of places to the right of the decimal point is equal to the number in the exponent behind the negative sign; this is useful to keep in mind when we express very small numbers in scientific notation.
5. Scientific Notation Decimals Exponents Worksheet Exponent Worksheets Practice
To express a very small number in scientific notation, choose the coefficient and exponent accordingly. Scientific notation review: it is an easy way to write very large or very small numbers. In the correct format there is one nonzero digit to the left of the decimal point, and the exponent is the number of places the decimal point is moved. To see all my chemistry videos, check out the linked channel.
6. Scientific Notation Guided Notes Task Cards Upper Elementary Math
Learn to convert numbers into and out of scientific notation.
Scientific notation notes: scientific notation is a short way to write very large or very small numbers. It is written as the product of a number between 1 and 10 and a power of 10.
7. Scientific Notation Ideas Notations Middle School Math
More chemistry tutorials and practice can be found at www.chemfiesta.com. A scientific notation worksheet with answers asks you to convert a list of numbers to scientific notation.
8. Scientific Notation Math Worksheet Page 2 Notations
A worksheet of products of numbers in scientific notation, with name and author fields. Worksheets in the category "operations scientific notation" include: operations with scientific notation; scientific notation; scientific notation mixed operations; "what fun, it's practice with scientific notation"; a math handbook transparency master (name, date, class); and writing scientific notation.
9. Scientific Notation Multiplying Dividing Practice Task Cards Word Problems
Homework: complete the entire exponents and scientific notation worksheet and fill in answers to all numbered exercises; answer the assigned questions from this packet; and complete the significant figures worksheet handed out in class along with the assigned questions from the packet.
10. Scientific Notation Partner Activity Activities Math Games Middle School
11. Scientific Notation Simplifying Expressions Practice
Scientific notation is a shorthand method to represent very large and very small numbers in easily-handled form. When multiplying two numbers in scientific notation, you can multiply the two significant-digit figures and arrive at a power of ten by adding exponents.
12. Scientific Notation Worksheet Answers Practice Algebra Worksheets
13.
Operations Scientific Notation Mazes 3 Differentiated Levels Notations Simplify Expression
(E.g., use millimeters per year for seafloor spreading.) Ready-to-print scientific notation and exponents worksheets with answer sheets; for more dynamically created exponent and radical worksheets go to math-aids.com. Math worksheets provided by math-aids.com: basic operations.
14. Scientific Notation Worksheet Maze Activity Activities Grade Math
These exponents worksheets are a good resource for middle-grade students. Operations with scientific notation worksheets: a scientific notation worksheet (name, score, teacher, date) asks you to write each number in standard format.
15. Scientific Notation Worksheet Works
16. Scientific Notation Worksheets Worksheet Operations
17. Scientific Worksheet Chemistry Unique Operations Method
Sometimes small numbers get too small and large numbers get too large after certain operations. Scientific notation is a shorthand way of writing really large or really small numbers: a number is written as the product of two factors, so a given number can be rewritten in scientific notation.
18. Sh Students Learn Scientific Notation Operations Worksheet Math Interactive Notebook
The first factor relates regular notation to scientific notation, and the sheet shows how to change between scientific notation and regular notation. Scientific notation; subject: mathematics; resource type: worksheet; all the mathematics material collected is available in the shop.
19. Simplify Exponents Negative Fractional Radical Equations Simplifying
Fractional exponents can be used instead of the radical sign. Solve the rational exponent problems for x, and rewrite a radical expression using rational exponents. (One word problem: a hardware store sells ladders of two lengths, and a window is located some feet above the ground.)
20. Simplify Express Scientific Notation Worksheet Activities
Students must use cues and clues from the sentence to correctly conjugate each verb or verb phrase.
A class science worksheet on motion and measurement of distances, with answers for the chapter, is available to download, prepared by expert science teachers from the latest edition of the books.
21. Sol Scientific Notation Review Notations Middle School Math
Operations with scientific notation worksheet (worksheet resource plans; source: starless-suite.blogspot.com). Operations with scientific notation (name, score, teacher, date): simplify and write each answer in scientific notation, rounding as directed.
22. Solving Radical Equations Worksheets Exponent Word Problem
23. Students Feel Comfortable Numbers Scientific Notation Worksheet Activities
From recommendations on essay composing, to making guide outlines, to discovering which kind of lines to use: introducing scientific notation exponents worksheets for computing powers of ten and scientific notation, including positive exponents and negative exponents.
24. Scientific Notation Answer Key Worksheet Practice
A scientific notation worksheet (author; subject: chemistry; keywords: scientific notation). The weight of a blue whale can be expressed in scientific notation in pounds; for the given weight of a gray whale in pounds, we find the decimal point.
25. Operations Scientific Notation Activity Coloring Page Activities Notations
We move the decimal point so that there is only one digit to its left. Main content: scientific notation; other contents: exponents. Scientific notation word problems cover the U.S. national debt and the speed of light; practice scientific notation word problems.
26. 9 Chapter 7 Ideas Middle School Math Teaching Grade
How to convert from standard notation to scientific notation. Worksheet.
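The conversion rule the entries above keep restating - the exponent counts how many places the decimal point moves, left for large numbers and right for small ones - can be sketched in a few lines of Python. This is an illustration only, not part of any of the worksheets listed; the function name is my own:

```python
import math

def to_scientific(x: float) -> tuple[float, int]:
    """Normalize nonzero x to (a, b) with 1 <= |a| < 10 and x == a * 10**b.

    b is exactly the number of places the decimal point moves:
    positive for large numbers, negative for numbers between 0 and 1.
    """
    b = math.floor(math.log10(abs(x)))  # places the decimal point moves
    a = x / 10**b                       # coefficient with one digit before the point
    return a, b

print(to_scientific(52300.0))  # decimal point moves 4 places left -> (5.23, 4)
print(to_scientific(0.00523))  # moves 3 places right, so the exponent is -3
```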
Print the worksheet and answers (by D. Russell). These worksheets require converting to and from scientific notation. The scientific notation worksheets here teach you the best way to express very large numbers and very small numbers: a number is represented conveniently using exponents, shortened to a number between 1 and 10 and multiplied by a power of 10; the powers for very large numbers are expressed using positive exponents, and those for very small numbers using negative exponents. Scientific notation worksheets.
27. Operations Scientific Notation Activities Worksheet
Science worksheets that answer questions are commonly referred to as cheat sheets, because they are essentially how a student can work on and check their notes in preparation for a test or exam. Q: scientific notation is made up of two number parts. Graphing quadratic functions of the form ax² + bx + c: problems with a graph for students to use, as well as a space to identify the vertex and axis of symmetry; it fits on a page, a detailed answer key is included, and this worksheet is part of a worksheet bundle on teacherspayteachers. About this quiz and worksheet: the quiz is mainly an array of math problems, and the questions ask you to test equations and points for symmetry or to identify the meanings of key terms. Create worksheets, tests, and quizzes for calculus and trigonometry with math analysis.
29. Algebra 1 Worksheets Exponents Scientific Notation Worksheet Grade Math
Powers and roots: squares, cubes, and higher powers are shown as small digits called exponents. The opposites of squaring and cubing are called square root and cube root. Main content: logarithms, roots, and powers. Powers and roots revised: the number 5^2 is read "five squared" or "five to the second power"; the 5 is called the base number and the 2 is called the exponent. Ex.
5^3 is five cubed or five to the third power, 5^4 is five to the fourth, and 5^5 is five to the fifth.
30. Algebra 1 Worksheets Exponents Scientific Notation Worksheet Simplifying Algebraic Expressions
Rational exponents: I can convert from rational exponents to radical expressions and vice versa, and I can simplify numbers with rational exponents. Solving radical equations: I can solve equations with roots. Radical operations practice: worksheets for this concept include adding, subtracting, and multiplying radicals; radical workshop (index or root); simplifying radical expressions (date/period); exponent and radical rules; operations with radicals and radical equations (date/period); radical equations; simplifying radical expressions; and rules for simplifying radicals.
Exponents Table Worksheet Exponent Worksheets Math Quotes Expressions. Suggested learning targets: I can perform operations using numbers expressed in scientific notation; I can use scientific notation to express very large and very small quantities; I can interpret scientific notation that has been generated by technology. Worksheet: performing operations with scientific notation. About, cubic yards of water flow from the Amazon River into the ocean every second. a. Express this quantity of water in scientific notation. b. About how many cubic yards of water flow from the Amazon River into the ocean in one hour? Express. 34. Function Operations Worksheets Scientific Method Template Inverse Functions Geometry Trigonometry. These three operations-with-scientific-notation mazes are designed to help students practice adding, subtracting, multiplying, and dividing in scientific notation. One maze is multiplying and dividing, one is adding and subtracting, and the last one is a combination of all operations. Practice expressing numbers in scientific notation. If you're seeing this message, it means we're having trouble loading external resources on our website. Math, grade, numbers and operations, scientific notation intro. Scientific notation intro. Scientific notation example. Scientific notation examples. Practice. More practice with scientific notation: perform the following operations in scientific notation. Refer to the introduction if you need help. Section E, multiplication (the easy operation: remember that you just need to multiply the main numbers and add the exponents). Model: ( x ) x ( x ) x. x. Operations in Scientific Notation worksheet with practical contents. Since we want to supply everything required in a true and reputable resource, we present very helpful information on a variety of subjects and topics. 36. Master Powers Algebra Worksheets Scientific Notation Worksheet. For example, we have to write, in scientific notation.
Answer key also includes questions. Sign up now for the subscriber materials. Sample edhelper.com exponents worksheet; return to exponents worksheets; return to algebra worksheets; return to math. Name, date, exponents (answer id): rewrite the number in scientific notation. Scientific notation takes the form m x 10^n, where n represents the number of decimal places to be moved: positive n indicates the standard form is a large number; negative n indicates a number between zero and one. Example: to convert, to scientific notation, we move the decimal point so that there is only. 38. Math 8 Scientific Notation 3 Ideas Notations. In what is currently being reported, we provide a variety of simple yet beneficial content along with templates created appropriate for almost any academic purpose. Sep: scientists and engineers often work with very large or very small numbers, which are more easily expressed in exponential form or scientific notation. 39. Math Lesson Scientific Notation Worksheet Notations 40. Math Operations Scientific Notation. All numbers are whole numbers (no fractional parts) with up to digits. These worksheets are files. A worksheet by software: software - infinite pre-algebra; name, scientific notation, date, period; write each number in scientific notation; write each number in standard notation. About this resource: students practice writing numbers in scientific notation and get to color in this engaging, self-checking activity. This worksheet incorporates fun into the classroom: use it as a quick scientific notation assessment tool, a homework assignment, or even something for the kids. 41. Math Operations Scientific Notation Coloring Activity Worksheet. Pinterest.com; source: www.pinterest.com; detailed lesson plan in math grade, polynomials; detailed lesson plan in math grade, fractions.
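The conversion rule described above (shift the decimal point, and record the number of shifts as the exponent of 10) can be sketched in Python; the helper name here is illustrative, not from any worksheet:

```python
def to_scientific(x: float) -> str:
    """Rewrite x as 'm x 10^n' with 1 <= |m| < 10, counting decimal shifts."""
    mantissa, exponent = f"{x:e}".split("e")
    return f"{float(mantissa)} x 10^{int(exponent)}"

print(to_scientific(42500))   # large number, positive exponent: 4.25 x 10^4
print(to_scientific(0.0031))  # number between 0 and 1, negative exponent: 3.1 x 10^-3
```

The `e` format specifier does the decimal-point counting; the function only re-renders the result in the "m x 10^n" form the worksheets use.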
The printable worksheets offer exercises like expressing numbers in scientific notation, expressing scientific notation in standard form, scientific notation involving arithmetic operations, and simplifying scientific notation. The worksheets are customized for students of grade and high school. ccss: express in scientific notation. Displaying the top worksheets found for operations with scientific notation. Some of the worksheets for this concept are: operations with scientific notation; a scientific notation; c notation mixed operations es; what fun, it's practice with scientific notation; name, date, class, math handbook transparency master; notation es; writing scientific notation.
().doc - scientific school early college high school. 44. Math Worksheets Mixed Addition Subtraction Exponents Scientific Notation Worksheet Exponent 45. Math Worksheets Multiplication Division Facts Subtraction Word Problems Addition 46. Multiplying Dividing Significant Digits Worksheets Scientific Notation Worksheet Chemistry. I can explain how decimals can be rounded and why it's useful. I can round decimals to any place. I understand why the value of the digit to the right of a number determines whether to round up or down. Place value - rounding decimals: rounding decimals to the nearest tenth and hundredth using a number line. Decimals - worksheets: math worksheets on decimals, suitable printable decimals worksheets for children in 3rd grade and higher grades. Worksheets cover the following decimal topics: introduction to decimals, decimals illustrated with pictures, addition, subtraction, division, multiplication, algebra with decimals, decimal patterns. 47. Multiplying Scientific Notation Worksheet Exponents Simple Exponent Worksheets. If not, adjust the decimal numbers and the exponents. Add or subtract the decimal numbers. Write the sum or difference and the common power of 10 in scientific notation format. Lesson: number operations in scientific notation, mathematics grade. In this lesson, we will learn how to perform arithmetic operations with numbers expressed in scientific notation. Lesson video. Scientific notation operations: develop skills in working with scientific numbers. This worksheet provides practice in multiplying numbers that are in scientific notation. While it is possible to convert the numbers to decimal form and multiply them, the correct way to do it is to multiply the decimal and exponent parts separately. 48. Teach Ideas Math Classroom Middle School Teaching. Worksheet by software: math, multiplying and dividing scientific notation; name, id, date, period; simplify.
write each answer in scientific notation. Operations with Scientific Notation worksheet. About this worksheet: this is an interesting one. 49. Converting Scientific Notation Ordinary Numbers Small Worksheet Notations, source: bonlacfoods.com. For students: in this scientific notation worksheet, learners simplify numbers in scientific notation. They perform the given operations and write the answer in scientific notation. This four-page worksheet contains problems. Operations with scientific notation. Example: distributive property of multiplication worksheet - ii. Writing and evaluating expressions worksheet. Nature of the roots of a quadratic equation worksheets. Determine if the relationship is proportional worksheet. Each ready-to-use worksheet collection includes activities and an answer guide. When numbers in scientific notation are divided, only the decimal number is divided; the exponents are subtracted. Name, date, class: operations with scientific notation; math handbook transparency master; use with appendix b, operations with scientific notation. See the best images of operations-with-scientific-notation worksheets. Inspiring worksheet images: multiplication of exponents and division worksheets, grade math worksheets, algebra exponents worksheets, geometry angles worksheet, grade two-step equation maze answer key.
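The multiplication and division rules these worksheets practice (multiply or divide the decimal parts, add or subtract the exponents, then renormalize the mantissa) can be sketched as follows; the function names are illustrative:

```python
def multiply_sci(m1, e1, m2, e2):
    """(m1 x 10^e1) * (m2 x 10^e2): multiply mantissas, add exponents."""
    m, e = m1 * m2, e1 + e2
    while abs(m) >= 10:        # renormalize so 1 <= |m| < 10
        m, e = m / 10, e + 1
    return m, e

def divide_sci(m1, e1, m2, e2):
    """(m1 x 10^e1) / (m2 x 10^e2): divide mantissas, subtract exponents."""
    m, e = m1 / m2, e1 - e2
    while abs(m) < 1:          # renormalize upward if needed
        m, e = m * 10, e - 1
    return m, e

print(multiply_sci(3, 4, 4, 5))  # (3 x 10^4)(4 x 10^5) = 1.2 x 10^10
print(divide_sci(2, 6, 8, 2))    # (2 x 10^6)/(8 x 10^2) = 2.5 x 10^3
```

The renormalization loops are the code analogue of the worksheets' "adjust the decimal numbers and the exponents" step.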
Population growth in cities is a global and progressive phenomenon. In 2008, the United Nations reported that more than half of the human population lives in urban areas. In large cities, the proliferation of neighborhoods of precarious habitat is becoming more frequent [2, 3]. The consequences of population growth and of urbanization without planning and control have widened the social gap within cities and led to significant poverty belts lacking employment, housing, security and environmental protection [4, 5]. For example, in Mexico in 2010, approximately 86 million people lived in urban areas, 40.6% of whom lived in poverty, meaning they suffered from one or more social deficiencies. This means that two out of every three poor people live in urban areas. Accelerated, unplanned and unsustainable urbanization also has an important impact on health. In this regard, the National Institute of Public Health of Mexico points out that in marginalized urban areas, children are vulnerable to malnutrition, addictions, intrauterine growth retardation, low birth weight and decreased neurodevelopment as a result of poor living conditions, inadequate nutrition and restricted access to health services [8, 9, 10]. However, additional environmental threats must be considered in any strategy that aims to correct health inequities, including exposure to toxic chemicals. Our group has reported on exposure to indoor smoke, lead-glazed ceramics, arsenic, electronic waste, volatile organic compounds (VOCs) and polycyclic aromatic hydrocarbons (PAHs) in the Mexican population. The conclusion from these studies was that the population is permanently exposed to chemical mixtures. In this context, children were identified as a group that is vulnerable to the effects of toxic substances because of their physical, cognitive, and physiological immaturity.
Some children in urban areas of Mexico live in vulnerable conditions and are exposed to toxic substances. Considering the data set forth above, the goal of this study was to assess exposure levels to a contaminant mixture in Mexican children from marginalized urban communities. Sampling sites were selected based on prior knowledge of the activities in each area and the availability of basic services (water, electricity, health services, etc.). Four sites were studied: i) Bellas Lomas in San Luis Potosi (BEL), a site with vehicular traffic and workshops; ii) Tercera Chica in San Luis Potosi (TC), a site with brick kilns; iii) Rincon de San Jose in San Luis Potosi (SJR), a community on the outskirts, where a hazardous waste landfill is located; and iv) Morales in San Luis Potosi (MOR), a metallurgical zone with a copper-arsenic smelter (recently closed) and an electrolytic zinc smelter. The study was conducted from 2010 to 2012, and analytical measurements were completed in late 2012. Children were randomly selected from schools located in these communities and personally interviewed to verify their eligibility to participate in the study. The inclusion criteria for participating children were as follows: i) informed, voluntary and signed consent by the child's parents; ii) a minimum residency period of 2 years; iii) age between 6 and 12 years old. The research methodology was carried out with the approval of the Bioethics Committee of the School of Medicine of the Autonomous University of San Luis Potosi. Blood and urine sample collection: first morning urine was collected in sealable plastic bottles and stored in a deep freezer (-20°C) until analysis. Blood samples (6 mL) were drawn by venipuncture with vacuum blood collection tubes (Vacutainer tubes) with EDTA as anticoagulant; the samples were stored at 4°C.
Determination of 1-OHP in urine: 1-hydroxypyrene (1-OHP) has been used as a representative biomarker of exposure in populations exposed to mixtures of PAHs. 1-OHP was quantified using high-performance liquid chromatography (HPLC; HP1100, Agilent Technologies) with a fluorescence detector (G1321 A). The limit of detection (LOD) was 1.0 nmol/L. Quality control was carried out using the standard IRIS ClinCal Recipes (Munich, Germany) 50013, 8867, and 50014. The recovery was 99%. Finally, the levels of 1-OHP in urine were adjusted by urinary creatinine (cr). Urinary creatinine was determined using the Jaffe colorimetric method. Urinary trans,trans-muconic acid determination: urinary trans,trans-muconic acid (t,t-MA) has been used as an exposure biomarker to monitor benzene exposure. t,t-MA was quantified using HPLC (HP1100, Agilent Technologies) with a UV-Vis detector (G1314 A). The LOD was 0.03 mg/L. Quality control was verified using the standard IRIS ClinCal Recipe 9969 (Munich, Germany), and the recovery rate was 97%. Determination of arsenic in urine: for the quantification of total arsenic (As), an aliquot of urine was treated with an acidic digestion, and quantification was performed using atomic fluorescence spectrophotometry with hydride generation (PS Analytical 10.055 Millennium Excalibur System, Deerfield Beach, FL) equipped with a hollow cathode lamp. The LOD was 1 μg/L. Quality control was verified using the standard ClinCheck-Urine control level I (41 ± 10 μg/L; Munich, Germany), and the recovery rate was 95%. Determination of manganese in urine: for determination of manganese (Mn) levels, an aliquot of urine was digested with nitric and perchloric acid under heat. The quantification was performed on a Perkin-Elmer 3110 atomic absorption spectrophotometer with graphite furnace (HGA 600). The LOD was 1 μg/L. Quality control was verified using the standard ClinCal-Urine level I (24.6 μg/L; Munich, Germany), and the recovery rate was 93%.
Determination of fluoride in urine: fluoride (F-) was quantified in solution using a potentiometric method with an ion-selective electrode. The LOD was 0.05 mg/L. Quality control was verified using the standard ClinCheck-Urine control level I (3.8 mg/L; Munich, Germany), and the recovery rate was 96%. Determination of blood lead levels: the quantification of lead in blood (PbB) was performed using a Perkin-Elmer 3110 atomic absorption spectrophotometer with graphite furnace. The LOD was 1.0 μg/dL and the accuracy was 99 ± 9.0%. The levels of all contaminants were compared between communities using the Kruskal-Wallis test, followed by Dunn's post hoc test to compare each contaminant between communities. For all statistical analyses, we used GraphPad Software version 5.0 (CA, USA). P < 0.05 was considered statistically significant. The children's anthropometric measures are shown in Table 1. According to the WHO child growth standards, all communities included children with malnutrition based on their weight-for-age score (W/A z-scores beyond ±2.0). According to WHO reference curves, BEL was the community with the highest chronic undernutrition (13.7%), followed by TC (10%), SJR (5.2%) and MOR (0%). Acute undernutrition based on weight for age was observed in all the communities (W/A z-scores < -2.0); the prevalence in SJR, MOR, BEL and TC was 5.2, 3.7, 3.4 and 2.5%, respectively. |Age (years)||7.1 ± 1.0||7.5 ± 2.1||5.9 ± 1.6||7.6 ± 1.1| |Height (cm)||122.7 ± 8.9||125 ± 13.1||114.5 ± 8.3||124.3 ± 9.0| |Weight (Kg)||25.3 ± 5.4||26.1 ± 7.3||20.7 ± 5.2||25.8 ± 5.1| |BMI (Kg/m2)||16.6 ± 2.2||16.4 ± 2.1||16.6 ± 2.7||16.6 ± 1.6| |Index Margination||High||High||Very high||Medium| |Height for Age (H/A) (%)+||3.4||5.2||2.5||3.7| |Weight for Age (W/A) (%)+||13.7||5.2||10||0| |Risk activities (%)| |Well Water Use (Drinking)||30||80||80||0| |Well Water Use (Cooking)||72||85||95||0| |Use of Ceramic Glazed Cookware||0||58||40||0| The levels of inorganic elements are summarized in Table 2.
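The between-community comparison described above uses the Kruskal-Wallis rank test. A minimal pure-Python sketch of the H statistic (no tie correction; the data below are hypothetical illustrations, not the study's measurements):

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic over k groups (assumes no tied values)."""
    pooled = sorted((value, gi) for gi, g in enumerate(groups) for value in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):  # ranks over pooled data
        rank_sums[gi] += rank
    h = 12.0 / (n * (n + 1)) * sum(r * r / len(g) for r, g in zip(rank_sums, groups))
    return h - 3 * (n + 1)

# Hypothetical urinary levels (µg/L) for two communities:
print(round(kruskal_h([4.1, 5.0, 6.2], [7.8, 8.4, 9.1]), 3))  # 3.857
```

In practice a statistics package (the authors used GraphPad Prism) handles ties, p-values and the Dunn's post hoc pairwise comparisons; this sketch only reproduces the rank-sum arithmetic behind the test.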
Median Mn concentrations in all study communities were below the reference value for Mn (8 μg/L), and all the communities showed similar concentrations (4.46–.3 µg/L). However, MOR had the highest percentage of children with detectable Mn (72.4%), followed by SJR, BEL and TC with 52.6, 41 and 27.5%, respectively. Levels of As were lower than the reference value (15 µg/L) in all the communities. However, MOR had the highest levels of As (11.0 µg/L) and also had the highest percentage of children with detectable As (79.3%), followed by TC, BEL and SJR (25%, 24% and 15.7%, respectively). For F-, all the communities had a higher median concentration than the reference value (1.5 mg/L), with median concentrations similar between the communities. MOR, SJR, TC and BEL had percentages of children with detectable F- of above 80%, 89.4%, 97.5% and 100%, respectively. Regarding PbB, TC had lower levels than BEL and MOR (4.2, 6.4 and 6.8 µg/dL, respectively); in these communities, all participants showed detectable concentrations of this compound, except SJR, which showed non-detectable levels. Furthermore, MOR showed the greatest percentage of children with levels higher than the reference value of 5 µg/dL (77.7%), followed by BEL and TC (65.5 and 47.5%, respectively). |Compound||Site||N||% Detectable||Median||Min||Max||% > RfV||RfV| On the other hand, all the children from BEL, TC and SJR had detectable urinary concentrations of 1-OHP, and MOR had 93.1% of children with detectable concentrations of this compound. Furthermore, we observed that TC showed the highest percentage of children with 1-OHP urinary levels above the reference value of 0.24 μmol/mol cr (45.0%), followed by SJR and BEL (36.8 and 24.1%, respectively); children in MOR did not show concentrations above the reference value (Table 3).
Additionally, we observed that TC had the highest 1-OHP exposure (0.23 µmol/mol cr) when compared with SJR, BEL and MOR (0.09, 0.06 and 0.03 µmol/mol cr; p < 0.05). For t,t-MA, SJR had the highest percentage of children with levels above the reference value of 500 µg/g cr (47.3%), followed by TC, MOR and BEL (41.0, 29.6 and 7%, respectively). The exposure levels of t,t-MA were 429.7, 427.4, 220.6 and 258.6 µg/g cr for TC, SJR, BEL and MOR, respectively. |Compounds||Site||N||% Detectable||Median||Min||Max||% > RfV||RfV| Urbanization is often understood to be a precondition for development. Cities, with their economic development, industries and services, spearhead the economic growth of any nation. However, these opportunities are not equally accessible to all. Although it is known that the inhabitants of cities enjoy better health than rural populations, little is known about the differences in health within urban cities. In this sense, some research has revealed that in urban cities there are inequalities in health, which create a greater risk among the inner-city population of suffering from different diseases and health problems. Around one-third of the world's population lives in slum conditions (828 million people), and the majority of these are located in cities of developing countries. In recent years, there has been a growing interest in the intraurban scale, due to the fact that some studies have shown that this population is exposed to the adverse impacts of mixtures of pollutants resulting from the various economic activities and high population density of modern cities. In addition, most cases of adverse health effects in populations are caused by economic and environmental conditions such as high exposure to health risk factors, poor access to health care services, and chronic malnutrition.
This study found that the children from the urban communities are, overall, in a normal nutritional condition according to the WHO child growth standards. In Mexico, short stature in preschool children has clearly declined, dropping from 26.9% in 1988 to 13.6% in 2012, down 13.3 percentage points at the national level according to the National Health and Nutrition Survey. In this context, in the study areas the child populations were in a state of adequate nutrition; this is likely due to adequate availability of food, health care, education, and health infrastructure. These factors, in turn, may be influenced by an equal distribution of resources, services, wealth, and opportunity, and reflect the low marginalization index in these urban communities. This is important because of the risk for disease and for child development. Alternatively, it has been observed that different environmental contaminants such as those evaluated in this study (Mn, As, F-, Pb, PAHs, and benzene) can affect child development and increase the risk of certain diseases. Children normally reflect trends in environmental exposure more accurately than adults, because children are not directly exposed to occupational pollution. Additionally, it has been well established that children are potentially at more risk than adults of adverse health effects due to exposure to many environmental chemicals. However, information on human exposure to chemical mixtures is very limited, and information on urban children is even more scarce. Thus, we assessed exposure in children in four urban communities. Mn and As levels in urine were similar between all the communities and below the reference values (8 µg/L and 15 µg/L, respectively). Low levels of Mn are not toxic, but some studies have proposed that Mn may produce adverse health effects at higher levels; for example, neurotoxic effects have been reported at values higher than the reference (8 μg/L).
Regarding As, different studies have demonstrated that its presence is associated with different diseases such as skin lesions, skin cancer, and neurological, respiratory and cardiovascular diseases. Moreover, it is important to highlight that the sources of As contamination include natural deposits as well as anthropogenic sources such as mining, electronics manufacturing processes and metal smelting. Regarding Mn, it is a natural component of soil, but the population can be exposed to this compound through anthropogenic activities such as mining. For F-, BEL had higher values than MOR, TC and SJR; this is explained by the fact that some people from this community still cook with and drink tap water, which is the main source of exposure to this element. However, there have been different risk communication programs to reduce the exposure in this community. These intervention programs are important because some data suggest a reduction in the intelligence quotient (IQ) score of children living in endemic fluorosis areas. For PbB, TC had lower values than BEL and MOR; this is due to the fact that in this place a risk reduction program was implemented [12, 13]. With respect to BEL and MOR, levels above the established value (5 µg/dL) were found. This can be explained by the use of glazed clay cookware, as well as by a smelter located 1.5 km from both communities, where children are exposed to high levels of Pb. Chronic exposure to Pb may be associated with neurocognitive, neurobehavioral and functional alterations. On the other hand, exposure to PAHs was assessed through the analysis of urinary 1-OHP. Jongeneelen proposed a three-level risk guideline for occupational exposure to PAHs that includes urinary 1-OHP levels. Following this guideline, the first risk level, or reference value, which is the 95th percentile in non-occupationally exposed controls, was set at 0.24 μmol/mol cr for non-smokers and 0.76 μmol/mol cr for smokers.
The second risk level is that at which no biological effects are observed; urinary levels of 1-OHP for exposed workers were fixed at 1.4 μmol/mol cr (the lowest reported level at which no genotoxic effects were found). Finally, two reference values were proposed as occupational exposure limits for two types of industry: 2.3 μmol/mol cr for coke ovens and 4.9 μmol/mol cr for primary aluminum production (third risk level). Interestingly, the median levels found in all the communities were lower than the reference value (0.24 µmol/mol cr); only TC had levels near the reference value, and this may be due to the fact that this area is surrounded by brickyards. Moreover, our data are comparable to those of similar studies. For example, a study of children in Mongolia who lived near heavy traffic demonstrated mean 1-OHP levels of 0.3 μmol/mol cr, and the NHANES IV study from the USA showed that children aged 6–11 years demonstrate levels of approximately 0.05 μmol/mol cr; only the levels in the MOR community are lower than this value. It is important to reduce the exposure, because several studies have demonstrated an association between chronic PAH exposure and the incidence of lung cancer and DNA damage. Considering exposure to benzene, the urinary levels of t,t-MA in children from this study were lower compared with the biological exposure index (BEI®) t,t-MA guidance value of 500 μg/g cr proposed by the ACGIH, which is the concentration below which nearly all occupationally exposed people should not experience adverse health effects. We found that 47.3, 41.0, 29.6 and 7% of the children from the SJR, TC, MOR and BEL communities, respectively, had levels above this BEI® value (Table 3). It is important to reduce the exposure of children to benzene in these communities, because this compound has been associated with acute myeloid leukemia and is potentially associated with an increased risk of developing chronic and acute lymphoblastic leukemia in adults.
This study measures the levels of pollutants prior to an environmental intervention, and it indicates the need for action due to exposure levels that are higher than the respective guidance values. It is possible to achieve significant reductions with a combination of factors, such as intense education, removal of all possible sources of pollutants from the environment, and effective monitoring of the affected children. Considering the proportion of children living in urban areas in Mexico, it is important to understand that to design effective intervention programs, the exposure pathways for the children (particle inhalation, soil/dust ingestion, occupational exposure, etc.) must be identified. These programs are urgent because a greater awareness of the public health concerns associated with these exposures is needed.
The consequent pay-off matrix for the wives and husbands would be something like this: if the risks of discovery were lessened by circumstances in some societies, such as the sheikh being away from the harem for an extended period, then the wives might risk infidelity. If we further postulate a principle of strength in numbers, or gender solidarity, among the wives, then we might find that infidelity would be more common in the polygynous households. But gender solidarity faces the collective action difficulty: what if one of the more ambitious wives reports other wives' infidelities to the sheikh? If the wives were mutually suspicious we might find, instead, that infidelity was more common among the relatively autonomous, Loose Patriarchal Monogamy Anthropology still debates the origin of monogamy. Gary Becker has suggested that monogamy may have arisen as a means for poor and unattractive men to ensure that they have access to wives. Since monogamy appeared, however, monogamous societies have varied between relative libertinism and puritanism. Under looser, libertine patriarchal monogamy, the community, church and state have generally not approved of extra-marital relationships, but have not strictly enforced these norms for men. Often there is also an implicit prestige for sexually promiscuous men. But loose monogamy differs from polygamy in two ways. First, the internalized rewards of non-monogamy (macho prestige) are muted by its official prohibition. Second, men can only exploit the labor power of their one wife, while they must continue to provide material gifts to their mistresses. Mistresses may cost as much to maintain as wives, and exploiting their labor is surely more difficult. Thus, even loosely observed monogamy sharply reduces the incentive to seek additional, simultaneous mistresses, compared to polygyny.
In general, men will choose to cheat in such a society, to the extent that they can afford the extra mistresses, and women will not cheat, to the extent that they face punishment for doing so. Under puritanism, however, infidelity is more likely to be reported, and if reported, severely punished. (Of course, the punishment is still usually more severe for women than men.) In addition, we can assume that the internalization of norms in the puritanical phase is more thorough, displacing some or all of the covert prestige of male promiscuity with guilt and fear. The result is stricter adherence to fidelity by men, and a monogamous equilibrium. Since the Enlightenment and Industrial Revolution, the growing equality of women has also contributed to stricter adherence to monogamy. On the one hand, men and women have gradually been treated more equally before the law, and in defending women against domestic battering. On the other hand, women have gained increasing choice in husbands, through "love marriage" and divorce, and increasing economic independence from their husbands. Women thus have both the power of turning their husbands in to the church or community, if not the law in divorce proceedings, and the power of "exit," as sanctions they can levy against philandering husbands. This female empowerment would create a trend towards either greater male fidelity, or at least greater male care in hiding infidelity, which would increase its cost and decrease its demand. Further Simplifying Assumptions In order to illustrate the prisoner's dilemma of monogamy I need to make a couple more simplifying assumptions: (1) The first assumption is that the value of a relationship can be summed up in one measure, or rather, that relationships fulfill a unitary dimension of demand. In fact, one of the major rationales for non-monogamy is that different people can fulfill different parts of one's needs, in ways that can't be compared or traded off.
For instance, in Spike Lee's She's Gotta Have It, the protagonist has three lovers: one for fun, one for romance, and one for financial stability. But estimating a model for one utility is difficult enough. (2) A second assumption is that the utility of a monogamous relationship is equivalent for everyone. In reality, people assign different amounts of utility to different combinations of commitment, shared activity and intimacy with different numbers of people. Some people can substitute non-relationship utility for relationship utility, finding their ideal combination to be half a relationship and a full-time hobby or career. We will ignore all these variations and assign everyone a utility of 1.0 for a monogamous relationship. (3) The third assumption is that while the value of the first full-time relationship is 1.0, each additional simultaneous relationship has declining marginal value. In other words, you are less than ten times happier when engaged in ten simultaneous relationships. Obviously, the rate of declining marginal utility is very different for different people. For the voluntarily celibate, the value of even one relationship is less than its cost in time and resources, perhaps because they value the rewards of prayer, art, or politics more. And for some rare individuals, there may even be a multiplier effect from non-monogamy, though this would still face an eventual constraint; philanderers may be three times happier with one mistress, but thirty times happier with ten mistresses. The risks of sexually transmitted disease have added an additional degree of decline to the value of additional relationships. In my model below I will simply assume that a second relationship is worth only 0.75. With relationships with two single others, one receives 1.75. The model will make the simplifying assumption that two simultaneous relationships are as many as these experimenters can deal with. (4) A fourth assumption is that sexual contracts are reciprocal.
Men and women are now subject to the same constraints. Even with relative gender equality, there are today many situations where powerful partners can enforce fidelity on dependent lovers while they philander. Men and women of great wealth, power or charisma are often able to convince their lovers to remain faithful even if the advantaged person is not. This model, however, will only discuss situations where all partners either agree to monogamy, or consent to one another's non-monogamy.

(5) A fifth assumption is that the value of a relationship is reduced if the partner is having another relationship. This is clearly true for most people, because of jealousy and uncertainty. For many, a partner's infidelity reduces the value of a relationship to zero. But the following model is primarily concerned with libertine sexual experimenters, who do not suffer from jealousy or uncertainty, and receive some degree of satisfaction at the idea that their partner's infidelity frees them to reciprocally engage in infidelity themselves. They do suffer from a loss of some of their partner's time and attention, however. The net loss of utility for a libertine with an unfaithful partner is thus assumed to be only a third, rather than 100%.

(6) A sixth assumption is that there is no declining utility of a relationship over time. Nobody gets bored. Nor does utility increase with time: nobody becomes attached.

Monogamous Majority and Non-Monogamous Minority in Liberal Society

While about a third of married Americans have experimented with affairs, the majority of American couples see the risks of covert infidelity (disease, a partner's discovery, and so on) as outweighing its potential benefits. Even affairs rarely continue for any length of time without resolving into monogamy with one partner or the other. The pay-off matrix for most Americans is still essentially that of Figure Three, with monogamy as equilibrium.
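The assumptions above can be collected into a small utility function. This is a minimal sketch using the essay's stated numbers (a first relationship worth 1.0, a second worth 0.75, and a one-third discount on any partner who is also involved elsewhere); the function and its name are illustrative, not from the source.

```python
# Sketch of the model's utility assumptions. The numeric values (1.0, 0.75,
# and the one-third discount) come from the text; the code itself is mine.

MARGINAL_VALUES = [1.0, 0.75]  # assumption 3: declining marginal value
SHARED_FACTOR = 2 / 3          # assumption 5: a partner with another lover
                               # contributes only two-thirds of their value

def utility(partners_involved_elsewhere):
    """Utility for one actor.

    partners_involved_elsewhere: one boolean per partner (at most two),
    True if that partner also has another lover.
    """
    total = 0.0
    for value, shared in zip(MARGINAL_VALUES, partners_involved_elsewhere):
        total += value * (SHARED_FACTOR if shared else 1.0)
    return total

print(utility([False]))         # faithful monogamy: 1.0
print(utility([False, False]))  # two single lovers: 1.75
```

With both lovers involved elsewhere this yields about 1.17, and a single shared partner yields two-thirds, matching the figures used in the text.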
But a small minority of every liberal democratic Western society has strongly desired sexual experimentation, and rejected monogamous norms and values. Whether their desires result from anti-monogamous socialization, or from the absence of the suppressive effects of pro-monogamous socialization, they see the rewards of not only covert infidelity, but even reciprocal non-monogamy as greater than its costs. The following model attempts to capture the pay-offs that such non-monogamous experimenters might face. At the beginning, each partner faces a choice. They can choose monogamy, or choose to also engage in an affair with some single person. The supply of single people is a critical variable which we will address later, but for now we assume that there is a ready supply of single people who are also interested in non-monogamous relationships. For these non-monogamous experimenters A and B, their pay-offs having affairs are always better than the pay-offs remaining in monogamy. This takes us from box 1 to box 2, where the central couple, A and B, are both having affairs with single others, C and D. But C and D are strategic actors as well, attempting to maximize their utility. They face the same pay-off matrix as A and B do, under which it never makes sense to have just one lover. So C and D also seek out single lovers, E and F. This takes us from box 2 to box 3. Again, E and F are rational actors, and they seek out other lovers, moving us to box 4. If actors A through F, in this sub-culture of non-monogamous experimenters, each have two lovers, then A through F are each receiving 1.17, which is greater satisfaction than they got under monogamy (1.0). The first problem is that the people at the ends of these strings are only receiving .66, which is less than they could get in a monogamous relationship.
But let us assume an infinitely expanding chain for the moment, and turn to the second problem: everyone receiving 1.17 is aware that they could be receiving 1.75 if their lovers did not have other lovers, and they were the center of two undivided attentions. For A, this situation is box 5A, and for B, 5B. Or at least actors could achieve 1.51, if they only had to share one of their lovers (the "6" boxes). This is where the conditions of the environment become critical. In order to defect from general non-monogamy (box 4) to a more privileged non-monogamy (boxes 5 or 6), each player must calculate the likelihood of being able to move from the position they are in to the advantaged position. If there is 100% certainty that they can break off relations with one or both of their lovers and replace them with single lovers, then they will do so, achieving either 1.51 or 1.75. In the case of B, for instance, this would mean moving from box 4 to boxes 2, 5B, or 6B. The complexity enters here. If everyone has a good chance of finding single others who are willing to engage in non-monogamous relations, then everyone's best strategy is to break off relations with involved others and seek out single others. But if everyone breaks off with involved others, then the environment changes. When no one will maintain a relationship with someone involved with someone else, the greatest number of sustainable partners is one, i.e. monogamy. Neither A nor B can find single others who are willing to be involved with them, since they are already involved with each other, and therefore their subculture eventually reverts to an equilibrium around box 1, monogamy. This is the prisoner's dilemma. If everybody takes two lovers, then everybody gets 1.17 (rather than 0 for the singles, and 1.0 for the monogamous). But since everybody can do better if they are the only one with another lover, nobody can have two lovers, and everybody only gets 1.0.
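The box payoffs discussed above can be reproduced from the model's stated numbers. This sketch is mine (the configuration labels are not from the source); note that the text's .66 and 1.51 appear to be rounded variants of the unrounded 2/3 and 1.5 computed here.

```python
# Reproduce the payoffs of boxes 1-6 under the stated assumptions:
# first relationship 1.0, second 0.75, and a one-third loss on any
# partner who also has another lover. The code is illustrative.

SHARED = 2 / 3  # an involved partner contributes two-thirds of their value

def payoff(partners):
    """partners: (base_value, involved_elsewhere) pairs for one actor."""
    return sum(v * (SHARED if involved else 1.0) for v, involved in partners)

configurations = {
    "box 1: monogamy":             [(1.0, False)],
    "box 4: middle of the chain":  [(1.0, True), (0.75, True)],
    "end of the chain":            [(1.0, True)],
    "box 6: only one lover shared": [(1.0, False), (0.75, True)],
    "box 5: two undivided lovers": [(1.0, False), (0.75, False)],
}
for name, partners in configurations.items():
    print(f"{name}: {payoff(partners):.2f}")
```

The dilemma follows directly from the table: from the chain (about 1.17 each) every actor prefers box 6 (1.50) or box 5 (1.75), but if everyone defects toward undivided lovers, the only sustainable arrangement left is box 1.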
Building Non-Monogamous Equilibria in Liberal Society

There are several "natural" and several "voluntary" solutions to this prisoner's dilemma. One "natural" or environmental solution results if it is impossible to find singles willing to get involved with a non-monogamous situation. If there is little likelihood of finding two single others to defect to, a set of non-monogamists who have reached box 4 would have no incentive to leave it, since the only real alternatives are monogamy (1.0) or general non-monogamy (1.17). This set of successful non-monogamists would need to find a solution for the ends of their relationship chain, those receiving less than monogamous satisfaction [.66 - 1.51 - 1.17 - 1.17 - 1.51 - .66]. This arrangement would slowly unravel. One solution is to close the circle at the ends, providing a uniform 1.17. Another solution is to find individuals who are uniquely attracted to this limited commitment, such as those who do not have the time or ability to commit to a monogamous or multiple relationships. We might imagine a sleepy suburban 1950s Peyton Place, where 99% of all the adults are married. In this situation one's only choice is whether to have an extramarital affair with a married person or not. Columns 2 and 3, and Rows 2 and 3, which presume the existence of single people, are simply not available. For the potential non-monogamous experimenter subset of the married couples, the choice is obvious: spouse-swapping, or a small closed circle. As long as the chance of finding singles to connect with is close to 0%, the spouse-swapping arrangement will be equilibrious. Voluntary agreements can also structure equilibrious non-monogamy. Fear of disease may cause actors to want to set clear boundaries on their partners' contacts: no one may sleep with anyone outside the group. Long-term friendships may have established trust between the partners, making contacts with others more costly and risky.
The participants may have arrived at these voluntary agreements through learning from previous affairs that closed circles were the only successful arrangement. These agreements are probably most common among a closed circle of three, a menage a trois. The problem is finding partners who will agree to the concept of menage a trois or group marriage, and then enforcing the contract. The State does its best to discourage such contracts, with laws against polygamy and bigamy, and by refusing custody and other rights to such non-traditional families. The community is usually even less help, heaping buckets of scorn on the sinners, and rewarding partners who come to their senses and leave.

Achieving Critical Mass

Therefore, those with non-monogamous preferences must organize for collective action, both in order to enforce collective norms governing non-monogamy, and in order to throw off the norms that discourage non-monogamy. Sexual deviance is a local "public good," requiring collective action. From the gnostic "free-love" rebellions of the Middle Ages to the Stonewall riots, the institutionalization of sexual deviance has required the gathering and organization of sexual radicals, who then made deviance safe for less committed experimenters. If the initial risk-takers are successful, and survive long enough, they can attain "critical mass." The critical mass number is the "tipping point" at which the external and internalized inhibitions reduce to the point that benefits for group members exceed costs, and membership in the sub-culture is self-sustaining (Schelling, 1979: 100-110). My argument above is that, while many sexual radicals have tried to create a self-sustaining non-monogamous subculture, they were never able to achieve critical mass or sufficient agreement as to what the rules should be for "normative closure."

Figure Seven: Critical Mass for Non-Monogamy

Occasionally one individual has been able to afford to provide the public good by themselves.
For instance, the legalization of divorce was a public good provided by King Henry VIII, making British divorcees a "privileged group" in Olsonian terms (Hardin, 1979: 35). Though all British desirous of divorces benefited, only Henry VIII could afford to legalize divorce. Without such a hegemon, those with non-monogamous preferences are a scattered "latent group," awaiting entrepreneurs willing to risk organizing costs in order to reap later rewards from leadership. For instance, the founder of the Mormon Church, Joseph Smith, can be seen as a public goods entrepreneur, organizing to re-create polygyny, though it ultimately cost his life, and federal troops massed on Utah's border undid his work. But Mormon polygyny, which was strictly patriarchal, does not really address the possibility of a non-monogamous equilibrium in liberal, egalitarian society. Today sexual alternatives face even fewer external and internalized sanctions than they did in the 19th century, and a century of Western sexual libertines have experimented with those alternatives. Yet only one large community is known to have developed a system of egalitarian non-monogamy that lasted more than a few years: the Oneida commune of up-state New York (1837-1879). Oneida may illustrate some of the ironic complexities of attempting contemporary free love. Oneida was founded by John Humphrey Noyes, a Perfectionist minister who preached "Bible Communism" and rejected both monogamy and polygamy as forms of ownership to be forsaken by the saved. But he also fervently rejected the anarchistic "free love" practices of his contemporary sexual radicals, such as the Fourierists, Owenites and spiritualist feminists who advocated liberalized divorce and promiscuity, and insisted that "complex marriage" could only be practiced within a community under spiritual discipline.
In Oneida, if a man or woman desired a sexual liaison with another member they could petition a community elder to carry a message to the party in question; they were forbidden to ask other communards directly. If the other consented, which was not certain, then the couple could meet several times, but no more; if they developed any feelings of attachment they were immediately separated. If they violated the community's rules, they could be expelled. For the first 17 years of Oneida's existence, men were forbidden to ejaculate, as a contraceptive method and an aid to female sexual pleasure: there were no pregnancies recorded for that period. At its largest, the complex marriage system included about 300 people, and it lasted more than thirty years. Rosabeth Moss Kanter (1972), in her statistical study of one hundred 19th century communes, correlated sexual systems with the longevity of the commune, and concluded that the Oneida-type complex marriage system, and the celibacy practiced by the Shakers, were both associated with commune longevity. At the other extreme were communes like Berlin Heights that attempted to institute "free love" without strong community controls; these communities were invariably short-lived. In the end, Oneida's disbanding had nothing to do with dissatisfaction with the complex marriage system, but rather grew out of the secularism of the second and third generations of the community, who were no longer prepared to make the other sacrifices of communal life; the combination of strong preferences for sexual variety and a strong commitment to religious authority proved unique to just one cohort.

References

Three in Love: Menages a Trois from Ancient to Modern Times.
Sex, Love, and Marriage in the 21st Century: The Next Sexual Revolution.
The Lesbian Polyamory Reader: Open Relationships, Non-Monogamy, and Casual Sex.
Polyamory: The New Love Without Limits: Secrets of Sustainable Intimate Relationships.
Polygamous Families in Contemporary Society. Irwin Altman, Joseph Ginat. 1996.
Plural Marriage for Our Times. Philip Kilbride. 1994.
The Social Organization of Sexuality: Sexual Practices in the United States.
Sex in America: A Definitive Survey.
Intimate Matters: A History of Sexuality in America.
Collective Action. Russell Hardin. 1982.
Morality within the Limits of Reason. Russell Hardin. 1990.
Free Love in America: A Documentary History. Taylor Stoehr. 1979.
Oneida: Utopian Community to Modern Corporation.
The "Western Sea"—Ardent Duluth—"Kaministiquia"—Indian boasting—Pere Charlevoix—Father Gonor—The man of the hour: Verendrye—Indian map maker—The North Shore—A line of forts—The Assiniboine country—A notable manuscript—A marvellous journey—Glory but not wealth—Post of the Western Sea.

Even the French in Canada were animated in their explorations by the dream of a North-West Passage. The name Lachine at the rapids above Montreal is the memorial of La Salle's hope that the Western Sea was to be reached along this channel. The Lake Superior region seems to have been neglected for twenty years after Radisson and Groseilliers had visited Lake Nepigon, or Lake Assiniboines, as they called it. The intention of going inland from Lake Superior was not lost sight of by the French explorers, for on a map (Parl. Lib. Ottawa) of date 1680 is the inscription in French marking the Kaministiquia or Pigeon River, "By this river they go to the Assinepoulacs, for 150 leagues toward the north-west, where there are plenty of beavers." The stirring events which we have described between 1682 and 1684, when Radisson deserted from the Hudson's Bay Company and founded for the French King Fort Bourbon on the Bay, were accompanied by a new movement toward Lake Superior, having the purpose of turning the stream of trade from Hudson Bay southward to Lake Superior. At this time Governor De La Barre writes from Canada that the English at Hudson Bay had that year attracted to them many of the northern Indians, who were in the habit of coming to Montreal, and that he had despatched thither Sieur Duluth, who had great influence over the western Indians. Greysolon Duluth was one of the most daring spirits in the service of France in Canada. Duluth writes (1684) to the Governor from Lake Nepigon, where he had erected a fort, seemingly near the spot where Radisson and Groseilliers had wintered.
Duluth says in his ardent manner: "It remains for me, sir, to assure you that all the savages of the north have great confidence in me, and that enables me to promise you that before the lapse of two years not a single savage will visit the English at Hudson Bay. This they have all promised me, and have bound themselves thereto, by the presents I have given, or caused to be given them. The Klistinos, Assinepoulacs, &c., have promised to come to my fort. . . . Finally, sir, I wish to lose my life if I do not absolutely prevent the savages from visiting the English." Duluth seems for several years to have carried on trade with the Indians north and west of Lake Nepigon, and no doubt prevented many of them from going to Hudson Bay. But he was not well supported by the Governor, being poorly supplied with goods, and for a time the prosecution of trade by the French in the Lake Superior region declined. The intense interest created by D'Iberville in his victorious raids on Hudson Bay no doubt tended to divert the attention of the French explorers from the trade with the interior. The Treaties of Ryswick and Utrecht changed the whole state of affairs for the French King; deprived by the latter treaty of any hold on the Bay, the French in Canada began to turn their attention to their deserted station on Lake Superior. Now, too, the reviving interest in England in the scheme for the discovery of the North-West Passage infected the French. Six years after the Treaty of Utrecht, we find it stated (MSS. Ottawa): "Messrs. de Vaudreuil and Begin having written last year that the discovery of the Western Sea would be advantageous to the Colony, it was approved that to reach it M. de Vaudreuil should establish these posts, which he had proposed, and he was instructed at the same time to have the same established without any expense accruing to the King, as the person establishing them would be remunerated by trade."
In the year 1717 the Governor sent out a French lieutenant, Sieur De la Noue, who founded a fort at Kaministiquia. In a letter, De la Noue states that the Indians are well satisfied with the fort he has erected, and promise to bring there all those who had been accustomed to trade at Hudson Bay. Circumstances seem to have prevented this explorer from going on to establish a second fort at Tekamiouen (Rainy Lake), and a third at the lake still farther to the north-west. It is somewhat notable that during the fifty years succeeding the early voyages of Radisson and Groseilliers on Lake Superior, the French were quite familiar with the names of lakes and rivers in the interior which they had never visited. It will be remembered, however, that the same thing is true of the English on Hudson Bay. They knew the names Assiniboines, Christinos, and the like as familiar terms, although they had not left the Bay. The reason of this is easily seen. The North-West Indian is a great narrator. He tells of large territories, vast seas, and is, in fact, in the speech of Hiawatha, "Iagoo, the great boaster." He could map out his route upon a piece of birch-bark, and the maps still made by the wild North-Western Indians are quite worthy of attention. It will be observed that the objection brought by the French against the Hudson's Bay Company of clinging to the shores of the Bay may be equally charged against the French on the shore of Lake Superior, or at least of Lake Nepigon, for the period of at least seventy years from its first occupation. No doubt the same explanation applies in both cases, viz. the bringing of their furs to the forts by the Indians made inland exploration at that time unnecessary. But the time and the man had now come, and the vast prairies of the North-West, hitherto unseen by the white man, were to become the battle-ground for a far greater contest for the possession of the fur trade than had yet taken place either in Hudson Bay or with the Dutch and English in New York State.
The cause for this forward movement was again the dream of opening up a North-West Passage. The hold this had upon the French, we see, was less than that upon Frobisher, James, Middleton, or Dobbs among the English. Speaking of the French interest in the scheme, Pierre Margry, keeper of the French Archives in Paris, says: "The prospect of discovering by the interior a passage to the Grand Ocean, and by that to China, which was proposed by our officers under Henry IV., Louis XIII., and Louis XIV., had been taken up with renewed ardour during the Regency. Memorial upon memorial had been presented to the Conseil de Marine respecting the advisability and the advantage of making this discovery. Indeed, the Pere de Charlevoix was sent to America, and made his great journey from the north to the south of New France for the purpose of reliably informing the Council as to the most suitable route to pursue in order to reach the Western Sea. But the ardour which during the life of Philip of Orleans animated the Government regarding the exploration of the West became feeble, and at length threatened to be totally extinguished, without any benefit being derived from the posts which they had already established in the country of the Sioux and at Kaministiquia."

"The Regent, in choosing between the two plans that Father Charlevoix presented to him at the close of his journey for the attainment of a knowledge of the Western Sea, through an unfortunate prudence rejected the suggestion which, it is true, was the most expensive and uncertain, viz. an expedition up the Missouri to its source and beyond, and decided to establish a post among the Sioux. The post of the Sioux was consequently established in 1727. Father Gonor, a Jesuit missionary who had gone upon the expedition, we are told, was, however, obliged to return without having been able to discover anything that would satisfy the expectations of the Court about the Western Sea."
At this time Michilimackinac was the depot of the West. It stood in the entrance of Lake Michigan—the Gitche Gumee of the Indian tribes, near the mouth of the St. Mary River, the outlet of Lake Superior; it was at the head of Lake Huron and Georgian Bay alike. Many years afterwards it was called the "Key of the North-West" and the "Key of the Upper Lakes." A round island lying a little above the lake, it appealed to the Indian imagination, and, as its name implies, was likened by them to the turtle. To it from every side expeditions gathered, and it became the great rendezvous.

At Michilimackinac, just after the arrival of Father Gonor, there came from the region of Lake Superior a man whose name was to become illustrious as an explorer, Pierre Gaultier de Varennes, Sieur de la Verendrye. We have come to know him simply by the single name of Verendrye. This great explorer was born in Three Rivers, the son of an old officer of the French army. The young cadet found very little to do in the New World, and made his way home to France. He served as a French officer in the War of the Spanish Succession, and was severely wounded in the battle of Malplaquet. On his recovery, he did not receive the recognition that he desired, and so went to the western wilds of Canada and took up the life of a "coureur de bois." Verendrye, in pursuing the fur trade, had followed the somewhat deserted course which Radisson and Groseilliers had long before taken, and which, a decade before this, La Noue had, as we have seen, selected. The fort on Lake Nepigon was still the rendezvous of the savages from the interior, who were willing to be turned aside from visiting the English on Hudson Bay. From the Indians who assembled around his fort on Lake Nepigon, in 1728, Verendrye heard of the vast interior, and had some hopes of reaching the goal of those who dreamt of a Western Sea.
An experienced Indian leader named Ochagach undertook to map out on birch bark the route by which the lakes of the interior could be reached, and the savage descanted with rapture upon the furs to be obtained if the journey could be made. Verendrye, filled with the thought of western discovery, went to Quebec, and discussed his purpose with the Governor there. He pointed out the route by way of the river of the Assiniboels, and then the rivers by which Lake Ouinipegon might be reached. His estimate was that the Western Sea might be gained by an inland journey from Lake Superior. Governor Beauharnois considered the map submitted and the opinions of Verendrye with his military engineer, Chaussegros De Lery; and their conclusions were favourable to Verendrye's deductions. Verendrye had the manner and character which inspired belief in his honesty and competence. He was also helped in his dealings with the Governor at Quebec by the representations of Father Gonor, who, as we have seen, had returned from the fort established among the Sioux convinced that the other route was impracticable. Father Gonor entirely sympathized with Verendrye in the belief that the only hope lay in passing through the country of the Christinos and Assiniboels of the North. The Governor granted the explorer the privilege of the entire profit of the fur trade, but was unable to give any assistance in money. Verendrye now obtained the aid of a number of merchants in Montreal in providing goods and equipment for the journey, and in high glee journeyed westward, calling at Michilimackinac to take with him the Jesuit Father Messager, to be the companion of his voyage. Near the end of August, 1731, the expedition was at Pigeon River, long known as Grand Portage, a point more than forty miles south-westward of the mouth of the Kaministiquia. This was a notable event in history, when Verendrye and his crew stood ready to face the hardships of a journey to the interior.
No doubt the way was hard and long, and the men were sulky and discouraged, but the heroism of their commander shone forth as he saw into the future and led the way to a vast and important region. Often since that time have important expeditions going to the North-West been seen as they swept by the towering heights of Thunder Cape, and, passing onward, entered the uninviting mouth of the Kaministiquia. Eighty-five years afterward, Lord Selkirk and his band of one hundred De Meuron soldiers appeared here in canoes and penetrated to Red River to regain the lost Fort Douglas. One hundred and twenty-six years after Verendrye, according to an account given by an eye-witness—an old Hudson's Bay Company officer—a Canadian steamer laden high above the decks appeared at the mouth of the Kaministiquia, bearing the Dawson and Hind expedition, to explore the plains of Assiniboia and pave the way for their admission to Canada. One hundred and thirty-nine years after Verendrye, Sir Garnet Wolseley, with his British regulars and Canadian volunteers, swept through Thunder Bay on their way to put down the Red River Rebellion. And now, one hundred and sixty-nine years after Verendrye, the splendid steamers of the Canadian Pacific Railway Company thrice a week in summer carry their living cargo into the mouth of the Kaministiquia to be transported by rail to the fast filling prairies of the West. Yes! it was a great event when Verendrye and his little band of unwilling voyageurs started inland from the shore of Lake Superior.

Verendrye, his valiant nephew, De La Jemeraye, and his two sons were the leaders of the expedition. Grand Portage avoids by a nine mile portage the falls and rapids at the mouth of the Pigeon River, and northward from this point the party went, and after many hardships reached Rainy Lake in the first season, 1731. Here, at the head of Rainy River, just where it leaves the Lake, they built their first fort, St. Pierre.
The writer has examined the site of this fort, just three miles above the falls of Rainy River, and seen the mounds and excavations still remaining. This seems to have been their furthest point reached in the first season, and they returned to winter at Kaministiquia. In the next year the expedition started inland, and in the month of June reached their Fort St. Pierre, descended the Rainy River, and with exultation saw the expanse of the Lake of the Woods. The earliest name we find this lake known by is that given by Verendrye. He says it was called Lake Minitio (Cree, Ministik) or Des Bois. (1) The former of these names, Minitie, seems to be Ojibway, and to mean Lake of the Islands, probably referring to the large number of islands to be found in the northern half of the Lake. The other name (2), Lac des Bois, or Lake of the Woods, would appear to have been a mistranslation of the Indian (Ojibway) name by which the Lake was known. The name (3) was "Pikwedina Sagaigan," meaning "the inland lake of the sand hills," referring to the skirting range of sand hills running for some thirteen miles along the southern shore of the Lake to the east of the mouth of Rainy River, its chief feature. Another name found on a map prepared by the Hudson's Bay Company in 1748 is (4) Lake Nimigon, probably meaning the "expanse," referring to the open sheet of water now often called "La Traverse." Two other names, (5) Clearwater Lake and (6) Whitefish Lake, are clearly the extension of Clearwater Bay, a north-western part of the Lake, and Whitefish Bay, still given by the Indians to the channel to the east of Grande Presqu'ile. On the south-west side of the Lake of the Woods Verendrye's party built Fort St. Charles, probably hoping then to come in touch with the Sioux who visited that side of the lake, and with whom they would seek trade. At this point the prospect was very remote of reaching the Western Sea.
The expenses were great, and the fur trade did not so far give sufficient return to justify a further march to the interior. Unassisted, they had reached in 1733 Lake Ouinipegon (Winnipeg), by descending the rapid river from Lake of the Woods, to which they gave the name of Maurepas. The government in Quebec informed the French Minister, M. de Maurepas, that they had been told by the adventurous Jemeraye that if the French King would bear the expense, they were now certain that the Western Sea could be reached. They had lost in going to Lake Ouinipegon not less than 43,000 livres, and could not proceed further without aid. The reply from the Court of France was unfavourable; nothing more than the free privilege of the fur trade was granted the explorers. In the following year Verendrye built a fort near Lake Ouinipegon, at the mouth of the Maurepas River (which we now know as the Winnipeg River), and not far from the present Fort Alexander. The fort was called Fort Maurepas, although the explorers felt that they had little for which to thank the French Minister. Still anxious to push on further west, but prevented by want of means, they made a second appeal to the French Government in 1735. But again came the same reply of refusal. The explorers spent their time trading with the Indians between Lake Winnipeg and Grand Portage, and coming and going, as they had occasion, to Lake Superior, and also to Michilimackinac with their cargoes.

While at Fort St. Charles, on the shores of the Lake of the Woods, in 1736, a great disaster overtook the party. Verendrye's eldest son was very anxious to return to Kaministiquia, as was also the Jesuit priest, Aulneau, who was in company with the traders. Verendrye was unwilling, but at last consented. The party, consisting of the younger Verendrye and twenty men, were ruthlessly massacred by an ambush of the Sioux on a small island some five leagues from Fort St. Charles, still known as Massacre Island.
A few days afterwards the crime was discovered, and Verendrye had difficulty in preventing his party from accepting the offer of the Assiniboines and Christinos to follow the Sioux and wreak their vengeance upon them. During the next year Fort Maurepas was still their farthest outpost. The ruins of Fort St. Charles, on the south side of the north-west angle of the Lake of the Woods, were in 1908 discovered by the St. Boniface Historical Society, and the remains of young Verendrye's party found buried in the ruins of the chapel.

Though no assistance could be obtained from the French Court for western discovery, and although the difficulties seemed almost insurmountable, Verendrye was unwilling to give up the path open to him. He had the true spirit of the explorer, and chafed in his little stockade on the shores of Lake Winnipeg, seeking new worlds to conquer. If it was a great event when Verendrye, in 1731, left the shores of Lake Superior to go inland, it was one of equal moment when, penniless and in debt, he determined at all hazards to leave the rocks and woods of Lake Winnipeg, and seek the broad prairies of the West. His decision being thus reached, the region which is now the fertile Canadian prairies was entered upon. We are fortunate in having the original Journal of this notable expedition of 1738, obtained by Mr. Douglas Brymner, former Archivist at Ottawa. This, with two letters of Bienville, was obtained by Mr. Brymner from a French family in Montreal, and the identity of the documents has been fully established. This Journal covers the time from the departure of Verendrye from Michilimackinac on July 20th till, say, 1739, when he writes from the heart of the prairies. On September 22nd the brave Verendrye left Fort Maurepas for the land unknown.
It took him but two days with his five men to cross in swift canoes the south-east expanse of Lake Winnipeg, enter the mouth of Red River, and reach the forks of the Red and Assiniboine Rivers, where the city of Winnipeg now stands. It was thus on September 24th of that memorable year that the eyes of the white man first fell on the site of what is destined to be the great central city of Canada. A few Crees who expected him met the French explorer there, and he had a conference with two chiefs, who were in the habit of taking their furs to the English on Hudson Bay. The water of the Assiniboine River ran at this time very low, but Verendrye was anxious to push westward. Delayed by the shallowness of the Assiniboine, the explorer's progress was very slow, but in six days he reached the portage, then used to cross to Lake Manitoba on the route to Hudson Bay. On this portage now stands the town of Portage la Prairie. The Assiniboine Indians who met Verendrye here told him it would be useless for him to ascend the Assiniboine River further, as the water was so low. Verendrye was expecting a reinforcement to join his party, under his colleague, M. de la Marque. He determined to remain at Portage la Prairie and to build a fort. Verendrye then assembled the Indians, gave them presents of powder, ball, tobacco, axes, knives, &c., and in the name of the French King received them as the children of the great monarch across the sea, and repeated several times to them the orders of the King they were to obey. It is very interesting to notice the skill with which the early French explorers dealt with the Indians, and to see the formal way in which they took possession of the lands visited. Verendrye states that the Indians were greatly impressed, "many with tears in their eyes." He adds with some naivete, "They thanked me greatly, promising to do wonders." On October 3rd, Verendrye decided to build a fort. He was joined shortly after by Messrs.
de la Marque and Nolant with eight men in two canoes. The fort was soon pushed on, and, with the help of the Indians, was finished by October 15th. This was the beginning of Fort de la Reine. At this stage in his journal Verendrye makes an important announcement, bearing on a subject which has been much discussed. Verendrye says, "M. de la Marque told me he had brought M. de Louviere to the forks with two canoes to build a fort there for the accommodation of the people of the Red River. I approved of it if the Indians were notified." This settles the fact that there was a fort at the forks of the Red and Assiniboine Rivers, and that it was built in 1738. In the absence of this information, we have been in the habit of fixing the building of Fort Rouge at this point from 1735 to 1737. There can now be no doubt that October, 1738, is the correct date. From French maps, as has been pointed out, Fort Rouge stood at the mouth of the Assiniboine, on the south side of the river, and the portion of the city of Winnipeg called Fort Rouge is properly named. It is, of course, evident that the forts erected by these early explorers were simply winter stations, thrown up in haste. Verendrye and his band of fifty-two persons, Frenchmen and Indians, set out overland by the Mandan road on October 18th, to reach the Mandan settlements of the Missouri. It is not a part of our work to describe that journey. Suffice it to say that on December 3rd he was at the central fort of the Mandans, 250 miles from his fort at Portage la Prairie. Being unable to induce his Assiniboine guides and interpreters to remain for the winter among the Mandans, Verendrye returned somewhat unwillingly to the Assiniboine River. He arrived on February 10th at his Fort de la Reine, as he says himself, "greatly fatigued and very ill." Verendrye in his journal gives us an excellent opportunity of seeing the thorough devotion of the man to his duty.
From Fort Michilimackinac to the Missouri, by the route followed by him, is not less than 1,200 miles, and this he accomplished, as we have seen with the necessary delay of building a fort, between July 20th and December 3rd—136 days—of this wonderful year of 1738. Struggling with difficulties, satisfying creditors, hoping for assistance from France, but ever patriotic and single-minded, Verendrye became the leading spirit in Western exploration. In the year after his great expedition to the prairies, he was summoned to Montreal to resist a lawsuit brought against him. The prevailing sin of French Canada was jealousy. Though Verendrye had struggled so bravely to explore the country, there were those who whispered in the ear of the Minister of the French Court that he was selfish and unworthy. In his heart-broken reply to the charges, he says, "If more than 40,000 livres of debt which I have on my shoulders are an advantage, then I can flatter myself that I am very rich." In 1741 a fruitless attempt was made to reach the Mandans, but in the following year Verendrye's eldest surviving son and his brother, known as the Chevalier, having with them only two Canadians, left Fort de la Reine, and made in this and the succeeding year one of the most famous of the Verendrye discoveries. This lies beyond the field of our inquiry, being the journey to the Missouri, and up to an eastern spur of the Rocky Mountains. Parkman, in his "A Half Century of Conflict," has given a detailed account of this remarkable journey. Going northward over the Portage la Prairie, Verendrye's sons had discovered what is now known as Lake Manitoba, and had reached the Saskatchewan River. On the west side of Lake Manitoba they founded Fort Dauphin, while at the west end of the enlargement of the Saskatchewan known as Cedar Lake, they built Fort Bourbon and ascended the Saskatchewan to the forks, which were known as the Poskoiac.
Tardy recognition of Verendrye's achievements came from the French Court: the explorer was promoted to the position of captain in the Colonial troops, and a short time after was given the Cross of the Order of St. Louis. Beauharnois and his successor Galissioniere had both stood by Verendrye and done their best for him. Indeed, the explorer was just about to proceed on the great expedition which was to fulfil their hopes of finding the Western Sea, when, on December 6th, he passed away, his dream unrealized. He was an unselfish soul, a man of great executive ability, and one who dearly loved his King and country. He stands out in striking contrast to the Bigots and Jonquieres, who disgraced the name of France in the New World. From the hands of these vampires, who had come to suck out the blood of New France, Verendrye's sons received no consideration. Their claims were coolly passed by, their goods shamelessly seized, and their written and forcible remonstrance made no impression. Legardeur de St. Pierre, more to the mind of the selfish Bigot, was given their place and property, and in 1751 a small fort was built on the upper waters of the Saskatchewan, near the Rocky Mountains, near where the town of Calgary now stands. This was called in honour of the Governor, Fort La Jonquiere. A year afterward, St. Pierre, with his little garrison of five men, disgusted with the country, deserted Fort La Reine, which, a few weeks after, was burned to the ground by the Assiniboines. The fur trade was continued by the French in much the same bounds, so long as the country remained in the hands of France. We are fortunate in having an account of these affairs given in De Bougainville's Memoir, two years before the capture of Canada by Wolfe. The forts built by Verendrye's successors were included under the "Post of the Western Sea" (La Mer de l'Ouest).
Bougainville says, "The Post of the Western Sea is the most advanced toward the north; it is situated amidst many Indian tribes, with whom we trade and who have intercourse with the English, toward Hudson Bay. We have there several forts built of stockades, trusted generally to the care of one or two officers, seven or eight soldiers, and eighty engagés Canadians. We can push further the discoveries we have made in that country, and communicate even with California." This would have realized the dream of Verendrye of reaching the Western Sea. "The Post of La Mer de l'Ouest includes the forts of St. Pierre, St. Charles, Bourbon, De la Reine, Dauphin, Poskoiac, and Des Prairies (De la Jonquiere), all of which are built with palisades that can give protection only against the Indians." "The post of La Mer de l'Ouest merits special attention for two reasons: the first, that it is the nearest to the establishments of the English on Hudson Bay, and from which their movements can be watched; the second, that from this post, the discovery of the Western Sea may be accomplished; but to make this discovery it will be necessary that the travellers give up all view of …" Two years later, French power in North America came to an end, and a generation afterward, the Western Sea was discovered by British fur traders.
Osteoporosis or low bone mass is much more common than most people realize. Approximately 1 in 2 women over the age of 50 will suffer a fragility fracture in their lifetime. A fragility fracture is identified as a fracture due to a fall from a standing height. According to the US Census Bureau there are 72 million baby boomers (age 51-72) in 2019. Currently over 10 million Americans have osteoporosis and 44 million have low bone mass. Many myths abound regarding osteoporosis. Answer these 5 questions below to test your Osteoporosis IQ. Fact: In addition to the statistic above regarding the incidence of fractures in women, up to 1 in 4 men over the age of 50 will suffer a fragility fracture. Fact: Although we do lose bone density as we age, osteopenia or osteoporosis is a much more significant loss than seen in normal aging. DXA (dual energy x-ray absorptiometry) is the gold standard for measuring bone density, and the test shows whether an individual's numbers fall into the normal, osteopenia, or osteoporosis range based on his or her age. Fact: Osteoporosis has been called a "pediatric condition which manifests itself in old age." Up until the age of 30 we build bone faster than it breaks down. This includes the growth phase of infants and adolescents and is also the time to build as much bone density as possible. By the age of 30, called our Peak Bone Mass, we have accumulated as much bone density as we will ever have. Proper nutrition, osteoporosis-specific exercises, and good body mechanics in our formative years can all play a role in reducing the effects of low bone mass later on. Fact: Two myths here. Flexion-based exercises such as sit-ups, crunches, and toe touches are contraindicated for osteoporosis. A landmark study by Dr. Sinaki from the Mayo Clinic showed women with osteoporosis had an 89% re-fracture rate after performing flexion-based exercises.
Fact: Secondly, only 30% of vertebral compression fractures (VCF) are symptomatic, meaning many individuals fracture without knowing it. This can lead to a fracture cascade as individuals continue performing movements and exercises that are contraindicated. Fact: The DXA is a simple and painless test which lasts 5-10 minutes. You lie on your back and the machine scans over you with an open arm, with no enclosed spaces. There is very little radiation; your exposure is 10-15 times greater when flying from New York to San Francisco. How did you do? Feel free to share these myths with your patients, many of whom may have osteoporosis in addition to the primary diagnosis for which they are being treated. To learn more about treating patients with low bone density/osteoporosis, consider attending a Meeks Method for Osteoporosis course! I recently found this article from the Psychology Research and Behavior Management Journal. I found myself curious about how other healthcare disciplines treat a diagnosis that often presents in conjunction with pelvic floor dysfunction. Irritable bowel syndrome, or IBS, affects nearly 35 million Americans. It is considered a 'functional' condition, meaning that symptoms occur without structural or biochemical pathology. There is often a stigma with functional diagnoses that the symptoms are "all in their heads", and while there are many theories about what predisposes individuals to IBS, the experts now think of IBS as a "disorder of gut brain interaction". Generally, there are 3 subtypes of IBS where people note either constipation dominant, diarrhea dominant or mixed symptoms. In order to be diagnosed an individual must report abdominal pain at least 1 day per week in the last 3 months which is related to stooling and a change in frequency or form. Other symptoms that are common are bloating, nausea, incomplete emptying, and urgency. The author suggests a biopsychosocial framework to help understand IBS.
This framework describes an interdependent relationship between biology (gut microbiota, inflammation, genes), behavior (symptom avoidance behaviors), cognitive processes ("brain-gut dysregulation, visceral anxiety, coping skills"), and environment (trauma, stress). The brain and gut communicate in either direction, top down or bottom up, through a variety of nerve pathways. Stress and trauma can dysregulate gut function and can contribute to IBS symptoms. Stress affects the autonomic nervous system, which comprises the sympathetic (fight/flight) and parasympathetic (rest/digest) branches. Patients with IBS may have dysfunction with autonomic nervous system regulation. Symptoms of dysregulated gut function can present as visceral hypersensitivity, visceral sensitivity, and visceral anxiety. Visceral hypersensitivity is explained as an upregulation of nerve pathways. The author cites studies noting that IBS patients have a lower pain tolerance to rectal balloon distention than healthy controls. Visceral sensitivity is another sign of upregulation, where IBS patients have a greater emotional arousal to visceral stimulation and are less able to downregulate pain. The author notes that the IBS population shows particular patterns of anxiety, with visceral anxiety and catastrophizing. Visceral anxiety is described as hypervigilance to bowel movements and fear avoidance of situational symptoms, for example, fear of not knowing where the bathroom is located. Cognitive behavioral therapy (CBT) has been shown to be an effective treatment to decrease the impact of IBS symptoms. CBT is focused on modifying behaviors and challenging dysfunctional beliefs.
CBT can be presented in a variety of ways; however, most include techniques consisting of education on how behaviors and physiology interplay, for example the gut and stress response; relaxation strategies, usually diaphragmatic breathing and progressive relaxation; cognitive restructuring to help individuals see the relationship between thought patterns and stress responses; problem-solving skills with a shift to emotion-focused strategies ("acceptance, diaphragmatic breathing, cognitive restructuring, exercise, social support") instead of problem-focused strategies; and finally exposure techniques to help the individual slowly face fear avoidance behaviors. So many of the techniques are similar to what pelvic floor therapists try to educate our patients in. It is reassuring to me that though we may be different disciplines, we are on the same team and moving towards the same goal. The author recommends a 10-session treatment duration, and notes that this may be a barrier for some. Integrated practice with other healthcare professionals is also recommended. The more we can know about what our other team members are doing to help support patients, the more effective we all are. Kinsinger, Sarah W. "Cognitive-behavioral therapy for patients with irritable bowel syndrome: current insights." Psychology Research and Behavior Management, vol. 10, 231-237. 19 Jul. 2017, doi:10.2147/PRBM.S120817 The following is a guest submission from Alysson Striner, PT, DPT, PRPC. Dr. Striner became a Certified Pelvic Rehabilitation Practitioner (PRPC) in May of 2018. She specializes in pelvic rehabilitation, general outpatient orthopedics, and aquatics. She sees patients at Carondelet St Joseph's Hospital in the Specialty Rehab Clinic located in Tucson, Arizona. Myofascial pain from the levator ani (LA), obturator internus (OI), and connective tissues is a frequent driver of pelvic pain.
As pelvic therapists, it can often be challenging to decipher whether pain is related to muscular and/or fascial restrictions. A quick review from Pelvic Floor Level 2B: overactive muscles can become functionally short (actively held in a shortened position). These pelvic floor muscles do not fully relax or contract. An analogy for this is when one lifts a grocery bag that is too heavy. One cannot lift the bag all the way or extend the arm all the way down; instead the person often uses other muscles to elevate or lower the bag. Over time, both muscle and fascial restrictions can occur when the muscle becomes structurally short (like a contracture). Structurally short muscles will appear flat or quiet on surface electromyography (SEMG). An analogy for this is when you keep your arm bent for too long, it becomes much harder to straighten out again. Signs and symptoms of muscle and fascial pain are pain to palpation, trigger points, local or referred pain, a positive Q-tip test to the lower quadrants, and common symptoms such as urinary frequency, urgency, pain, and/or dyspareunia. For years in the pelvic floor industry there has been notable focus on vocabulary, encouraging all providers (researchers, MDs, and PTs) to use the same words to describe pelvic floor dysfunction, allowing more efficient communication. Now that we are (hopefully) using the same words, the focus is shifting to physical assessment of pelvic floor and myofascial pain. If patients can experience the same assessment in different settings then they will likely have less fear, and the medical professionals will be able to communicate more easily. A recent article completed a systematic review of physical exam techniques for myofascial pain of the pelvic floor musculature, covering examination techniques used on women for the diagnosis of LA and OI myofascial pain.
In the end, 55 studies with 9,460 participants (99.8% female) that met the inclusion and exclusion criteria were assessed. The authors suggest the following as a good foundation to begin, but more studies will be needed to validate it and to further investigate associations of chronic pelvic pain and lower urinary tract symptoms with myofascial pain. For the recommended sequence for examining pelvic myofascial pain, the authors recommend bilateral palpation and documentation of trigger point location and severity with the VAS, along with visual inspection and observation of functional movement of the pelvic floor muscles. The good news is that this is exactly how pelvic therapists are taught to assess the pelvic floor in Pelvic Floor Level 1. This is reviewed in Pelvic Floor Level 2B and changed slightly for Pelvic Floor Level 2A when the pelvic floor muscles are assessed rectally. Ramona Horton also teaches a series on fascial palpation, beginning with Mobilization of the Myofascial Layer: Pelvis and Lower Extremity. I agree that palpation should be completed bilaterally by switching hands to make assessment easier for the practitioner, who may be on either side of the patient/client depending on the set up. This is an important conversation between medical providers to allow for easy communication between disciplines. Meister, Melanie & Shivakumar, Nishkala & Sutcliffe, Siobhan & Spitznagle, Theresa & Lowder, Jerry L. (2018). Physical examination techniques for the assessment of pelvic floor myofascial pain: a systematic review. American Journal of Obstetrics and Gynecology, 219. doi:10.1016/j.ajog.2018.06.014 Tamara Rial, PhD, CSPS, co-founder and developer of Low Pressure Fitness, will be presenting the first edition of "Low Pressure Fitness and abdominal massage for pelvic care" in Princeton, New Jersey in July, 2018. Tamara is internationally recognized for her work with hypopressive exercise and Low Pressure Fitness.
In this article she presents the novel topic of hypopressives as a complementary pelvic floor muscle training tool for incontinence after prostate cancer surgery. Urinary incontinence is the most common side effect men suffer after prostate cancer surgery, along with erectile dysfunction. Although it is not life threatening, urinary incontinence definitely has a negative impact on the patient's quality of life (Sountoulides et al., 2013). Beyond the frustration and embarrassment associated with pelvic floor dysfunction, many patients describe it as depressing, disheartening and devastating. The first line of conservative treatment - and most often recommended - is pelvic floor muscle training (Anderson et al., 2015). Over the past few years, some researchers have also recommended alternative exercise programs with a holistic approach, such as Pilates and hypopressives, to improve the patient's quality of life and urinary incontinence symptoms (Santa Mina et al., 2015). These alternative pelvic floor muscle training programs draw upon the connection between the pelvic floor, its synergistic muscles (abdominal, pelvic, lumbar) and their interrelated role in posture and breathing (Hodges, 2007; Sapsford, 2004; Madill and McLean, 2008; Talasz et al., 2010). Among these complementary exercise programs, hypopressives have gained increasing attention for the recovery of post-prostatectomy urinary incontinence (Santa Mina et al., 2015; Mallol-Badellino et al., 2015). Although hypopressive exercise has become popular for women, some researchers, clinicians and practitioners have begun to apply these exercises for specific male issues such as urinary incontinence following a prostatectomy. Recently, a case study I co-authored about an adapted program of hypopressive exercise for urinary incontinence following radical prostatectomy surgery was published in the journal of the Spanish physiotherapy association (Chulvi-Medrano & Rial, 2018).
We describe the case of a 46-year-old male with severe stress urinary incontinence six months after surgery. We used a pelvic floor exercise program consisting of hypopressive exercises as described in the Low Pressure Fitness level 1 practical manual (Rial & Pinsach, 2017) combined with contraction of the pelvic floor muscles. Satisfactory results were obtained after the rehabilitation protocol, as evidenced by a reduction from 3 daily pads to none. Of note, clinical trials have demonstrated the benefits of initiating a rehabilitation program to strengthen the pelvic floor as soon as possible after prostatectomy. Previously, I've studied hypopressive exercise for female urinary incontinence (Rial et al., 2015) and for the improvement of female athletes' pelvic floor function (Álvarez et al., 2016). However, this was the first time we've studied hypopressives in the context of male urinary leakage. In the same light, other researchers have also included hypopressives in their pelvic floor training protocols for post-prostatectomy urinary incontinence. For example, Serdá et al. (2010) and Mallol-Badellino (2015) used protocols that combined pelvic floor contractions with postural re-education and hypopressives. Both studies found improvements in the severity of involuntary leakages and improvements in the patients' quality of life. Similar results are also described in the clinical case by Scarpelini et al. (2014), who used hypopressives and psoas stretching exercises to reduce urinary incontinence after prostatectomy. The hypothesis underlying the use of hypopressives as a complementary pelvic floor and core exercise program is that it retrains the core system with specific postural and breathing strategies while reducing pressure on the pelvic organs and structures. The most striking part of the hypopressive breathing technique is the abdominal vacuum.
This breathing maneuver involves a low pulmonary volume exhale-hold technique followed by a rib-cage expansion involving the activation of the inspiratory muscles. The rib-cage expansion during the breath-holding phase leads to a noticeable draw-in of the abdominal wall and simultaneously to the rise of the thoracic diaphragm. Recent observational studies have shown how the hypopressive technique was able to elevate the pelvic viscera and to activate the pelvic floor and deep core muscles in women trained with hypopressives (Navarro et al., 2017). From an historical point of view, this characteristic breathing maneuver was first described and practiced as a yoga pranayama called Uddiyana Bandha (Omkar & Vishwas, 2009). In addition to breath control, the hypopressive technique involves a series of static and dynamic poses which operate on the hypothesis of training the stabilizing muscles of the spine, such as the core and pelvic muscles. In this sense, hypopressives are not exclusively a breathing technique, but rather they are an integrated whole-body technique. The practice of hypopressives involves body control, body awareness, postural correction and mindfulness throughout its different poses and postural techniques. The introduction of holistic exercise programs to train the synergist pelvic floor muscles and breathing patterns can be viewed as a complementary tool for the restoration of a patient's body awareness and functionality. Another hypothesis of the effects of hypopressive breathing on the pelvic floor is the ability to move the pelvic viscera cranially as a consequence of the ribcage opening up after the breath-hold. This vacuum lifts the diaphragm and consequently creates an upward tension on the transversalis fascia, the peritoneum and other related fascial structures.
In addition to the diaphragmatic suction effect, a correct alignment of the rib cage and pelvis during the exercise contributes to an improved suspension and position of the viscera in the pelvis. The mobility achieved with the breathing and its body sensations may be one of the reasons why hypopressives have also been recommended as a proprioceptive facilitator for those with low ability to "find their pelvic floor" (Latorre et al., 2011). It's crucial to highlight that a complete surgical resection of the prostate will cause - in most cases - post-operative fibrosis and neurovascular damage (Hoyland et al., 2014). Both the neurovascular and musculoskeletal injuries are contributing factors for urinary incontinence post-prostatectomy. Subsequently, exercises focusing on increasing local vascular irrigation and re-activating the damaged musculature have been highlighted as the main goals to help patients recover continence. In this sense, breathing movements, fascia manipulation and decreased pelvic pressure can result in increased vascular supply. A previous study has shown an improvement in venous return of the femoral artery during the hypopressive breathing maneuver (Thyl et al., 2009). Collectively, all these factors may favor microcirculation in the pelvic area. Finally, the muscle activation of the pelvic floor and core muscles observed during the practice of hypopressives (Ithamar et al., 2017) and the changes of the puborectalis and iliococcygeus muscles after intensive pelvic floor muscle training (Dierick et al., 2018) are other factors that could have an impact on the urge incontinence, stress incontinence and overflow incontinence symptoms common after prostatectomy surgeries. To date, the results from these investigations and clinical reports open new complementary pelvic floor training strategies for the treatment of post-prostatectomy incontinence.
Hypopressives and pelvic floor muscle exercises are non-invasive, don't require expensive material, and provide an exercise-based approach as part of a healthy lifestyle. However, qualified instruction, technique-driven progression and adherence to the intervention are critical components of any pelvic floor and hypopressive training protocol.
Álvarez M, Rial T, Chulvi-Medrano I, García-Soidán JL, Cortell JM. 2016. Can an eight-week program based on the hypopressive technique produce changes in pelvic floor function and body composition in female rugby players? Retos: Nuevas Tendencias en Educación Física, Deporte y Recreación, 30(2): 26-29.
Anderson CA, Omar MI, Campbell SE, Hunter KF, Cody JD, Glazener CM. 2015. Conservative management for postprostatectomy urinary incontinence. Cochrane Database Syst Rev, 1: CD001843.
Chulvi-Medrano I, Rial T. 2018. A case study of hypopressive exercise adapted for urinary incontinence following radical prostatectomy surgery. Fisioterapia, 40, 101-4. doi:10.1016/j.ft.2018.01.004
Dierick F, Galrsova E, Laura C, Buisseret F, Bouché FB, Martin L. 2018. Clinical and MRI changes of puborectalis and iliococcygeus after a short period of intensive pelvic floor muscles training with or without instrumentation. European Journal of Applied Physiology. doi:10.1007/s00421-018-3899-7
Ithamar L, de Moura Filho AG, Benedetti-Rodrigues MA, Duque-Cortez KC, Machado VG, de Paiva-Lima CRO, et al. 2017. Abdominal and pelvic floor electromyographic analysis during abdominal hypopressive gymnastics. J. Bodywork Mov. Ther. doi:10.1016/j.jbmt.2017.06.011
Latorre G, Seleme M, Resende AP, Stüpp L, Berghmans B. 2011. Hypopressive gymnastics: evidences for an alternative training for women with local proprioceptive deficit of the pelvic floor muscles. Fisioterapia Brasil, 12(6): 463-6.
Hodges P. 2007. Postural and respiratory functions of the pelvic floor muscles. Neurourol Urodyn, 26(3): 362-371.
Hoyland K, Vasdev N, Abrof A, Boustead G. 2014. Post-radical prostatectomy incontinence: etiology and prevention. Rev Urol, 16(4): 181-8.
Madill S, McLean L. 2008. Quantification of abdominal and pelvic floor muscle synergies in response to voluntary pelvic floor muscle contractions. J. Electromyogr. Kinesiol, 18: 955-64. doi:10.1016/j.jelekin.2007.05.001
Mallol-Badellino J, et al. 2015. Resultados en la calidad de vida y la severidad de la incontinencia urinaria en varones prostatectomizados por neoplasia de próstata. Rehabilitación, 49(4): 210-215.
Navarro B, Torres M, Arranz B, Sánchez O. 2017. Muscle response during a hypopressive exercise after pelvic floor physiotherapy: Assessment with transabdominal ultrasound. Fisioterapia, 39: 187-194. doi:10.1016/j.ft.2017.04.003
Omkar S, Vishwas B. 2009. Yoga techniques as a means of core stability training. J. Bodywork Mov. Ther, 13: 98-103. doi:10.1016/j.jbmt.2007.10.004
Rial T, Chulvi-Medrano I, Cortell-Tormo JM, Álvarez M. 2015. Can an exercise program based on hypopressive technique improve the impact of urinary incontinence on women's quality of life? Suelo Pélvico, 11: 27-32.
Rial T, Pinsach P. 2017. Low Pressure Fitness practical manual level 1. International Hypopressive and Physical Therapy Institute, Vigo.
Santa Mina D, Au D, Alibhai S, Jamnicky L, Faghani N, Hilton W, Stefanky L, et al. 2015. A pilot randomized trial of conventional versus advanced pelvic floor exercises to treat urinary incontinence after radical prostatectomy: a study protocol. BMC Urology, 15. doi:10.1186/s12894-015-0088-4
Sapsford R. 2004. Rehabilitation of pelvic floor muscles utilizing trunk stabilization. Man Ther, 9(1): 3-12.
Serdá B, Vesa A, del Valle, and Monreal P. 2010. La incontinencia urinaria en el cáncer de próstata: diseño de un programa de rehabilitación. Actas Urológicas Españolas, 34(6): 522-30.
Scarpelini P, Andressa Oliveira F, Gabriela Cabrinha S, Cinira H. 2014.
Protocolo de ginástica hipopressiva no tratamento da incontinência urinária pós-prostatectomia: relato de caso. UNILUS Ensino e Pesquisa, 11(23): 90-95.
Men who present with chronic pelvic pain frequently have symptoms referred along the penis and into the tip of the penis, or glans. Symptoms may include numbness, tingling, aching, pain, or other sensitivity and discomfort. The tip of the penis is a sensory structure, which allows for sexual stimulation and appreciation. This same capacity for valuable sensation can create severe discomfort when signals related to the glans are overactive or irritating. One of the most common complaints with this symptom is a level of annoyance and distraction, with level of bother worsening when a person is less active or not as mentally engaged with tasks. Wearing clothing that touches the tip of the penis (such as underwear, jock straps, jeans, or snug pants) may be limited and may worsen symptoms. When uncovering from where the symptoms originate, the culprit is often the dorsal nerve of the penis, which is sensible given that the glans is innervated by this branch of the pudendal nerve. If we consider this possibility (because certainly there are other potential causes) we find that there are many potential sites of pudendal nerve irritation to consider. First, let's visualize the anatomy of the nerve.
According to the generally accepted descriptions of the dorsal nerve, it is a terminal branch of the pudendal nerve that arises primarily from the mid-sacral nerves. This can lead us to include the lumbosacral region in our examination and treatment, yet in my clinical experience, there are other sites that more often reproduce pain in the glans. As the dorsal nerve branches off of the pudendal nerve, usually past the sacrotuberous ligament, it passes through and among the fascial layers of the urogenital triangle, where compression or irritation may generate symptoms. As the nerve travels towards the pubic bone, it passes inferior to the bone, a location where the suspensory ligaments of the penis are found, along with pudendal vessels and fascia. This is another site of potential compression and irritation, and palpation of this region may provide information about tissue health. Below is a cross-section of the proximal penis, allowing us to see where the pudendal nerve and vessels travel inferior to the pubic bone. As the dorsal nerve extends along either side of the penis, giving off smaller branches along its path towards the glans, the nerve may also experience soft tissue irritation along the length of the penis or even locally at its termination in the glans. Palpation internally (via rectum) or externally may be part of the assessment as well as the treatment of this condition. Often, pain at the tip of the penis can be reproduced with palpation internally, directed towards the anterior levator ani and the connective tissues just inferior to the pubic bone. It may be difficult to know whether the muscle is providing referred pain or the nerve is being tensioned and reproducing symptoms; however, gentle soft tissue work applied to this area is often successful in reducing or resolving symptoms regardless of the tissue involved.
In my experience, this symptom of referred pain at the tip of the penis is often one of the last to resolve, and the use of topical lidocaine may be helpful in managing symptoms while healing takes place. A home program of self-care including scar massage if needed, nerve mobilizations, trunk and pelvic mobility and strengthening, and advice for returning to meaningful activities can play a large role in resolving pain in the glans. If you would like to learn more about treating genital pain in men, consider joining me in Male Pelvic Floor: Function, Dysfunction, & Treatment. The 2018 courses will be in Freehold, NJ this June, and Houston, TX in September.
by Cathy Saxton

We get together with friends to play games on a semi-regular basis. One group plays a lot of games that require a timer. Those cheap little sand hourglass egg timers that come in games are terrible. They're hard to see and someone has to watch them carefully to see when the last grain falls. We often use a smartphone with a timer, but they have different interfaces which causes some confusion, and an expensive phone is at risk of being knocked off the coffee table or glopped with wine or bean dip. I used this as an excuse to work on a fun project: a game timer that would be easy to use, easy to see, less expensive than a phone, and less likely to be destroyed by a minor mishap. The timer turned out great, pretty much as I had envisioned it. Here's a description of its features: One thing I'd like to note: this was a project for fun, not to save money; it's about $50 for the parts. The rest of this page describes the development process for the game timer. There are resources at the bottom of the page if you're interested in using this information as a starting point for your own projects. I thought about how I wanted the timer to look and function and came up with the list of features above. I used that to compile a list of components that I'd need: Before I could create a schematic, I needed to determine how I wanted to handle a couple of the features:

We can generate sound with a piezo buzzer by causing it to vibrate, which is done by toggling its input. The tone of the generated sound is determined by the frequency of the vibration. Higher frequencies are higher notes. The A note above middle C is 440 Hz. Each octave is a factor of 2, so the A note one octave higher is 880 Hz. (Each step, e.g. A to A#, is a factor of the twelfth root of 2: 2^(1/12) ≈ 1.059463.) Pure tones are sine waves, as is depicted in the first wave below. From the MCU, we generate a square wave, which works well enough to produce a recognizable tone.
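As an illustration of the note arithmetic just described (the function name is mine, not from the timer's code):

```python
# Equal-temperament note math: each semitone step multiplies the
# frequency by 2**(1/12), so 12 steps (an octave) exactly double it.
A4_HZ = 440.0             # the A above middle C
SEMITONE = 2 ** (1 / 12)  # ~1.059463

def note_frequency(semitones_from_a4):
    """Frequency of the note this many semitones above (or below) A440."""
    return A4_HZ * SEMITONE ** semitones_from_a4

print(round(note_frequency(12), 2))  # one octave up: ~880 Hz
print(round(note_frequency(3), 2))   # three semitones up: ~523.25 Hz
```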
The volume is controlled by varying the width of the pulse. When the high and low portions of the wave are the same duration (50% duty cycle), the volume is loudest. The more they differ, the quieter the sound. It doesn't matter whether the high or low portion is longer. For example, 3% and 97% will result in the same (relatively quiet) volume. In the image below, each wave will produce the same tone since they are all the same frequency. The top wave will be the loudest since it has a 50% duty cycle. The second wave will be a bit less loud, and the last two will be the quietest in the group. Note that the last two represent the same volume level, just with high and low duration swapped. Perceived volume is based on a logarithmic scale; I noticed a difference in volume between widths of 1%, 2%, and 3%, but not between 48%, 49% and 50%. The Atmel AVR microcontrollers can automatically generate a (square) wave using a built-in Timer / Counter resource. The output from this counter can be sent to a MCU pin, so the circuit will want to have one of the outputs from a Timer / Counter connected to the piezo buzzer. Since we control volume in software (by altering the pulse width), we'll connect the volume knob directly to the MCU. We'll pair the potentiometer with a fixed resistor to create a voltage divider. Its output will be connected to one of the MCU's analog inputs. The MCU will read this voltage and use it to set the duty cycle on the square wave. Charlieplexing is a technique for controlling many LEDs with relatively few MCU outputs. It is named after Charlie Allen at Maxim. Let's start with a quick review of LEDs and MCU control of them. From this point forward, we'll temporarily ignore the resistor; we'll add it back in at the end. To control an LED with a microcontroller, we can connect the LED's anode to an output from the MCU. The image on the right shows three LEDs with anodes connected to MCU pins 1, 2, and 3. 
To light the LED, its anode is pulled high (as shown in this example). To turn off the LED, its anode can be grounded or put into a high-impedance state. In a high-impedance state, current is restricted from flowing, so the LED won't light. We can make an MCU pin high-impedance by configuring it as an input. We can also connect an LED's cathode to an MCU output, and we can share the MCU's anode and cathode control lines across multiple LEDs, enabling us to control an array of LEDs. In the example on the right… To light an LED, the MCU outputs high on the line connected to the LED's anode and grounds the line connected to the LED's cathode. All other MCU connections are put into a high impedance state (configured as input). In the example below, LED "2X" is lit by pulling line 2 high and grounding X. LEDs sharing a cathode line (X, Y, or Z) can be turned on together. In the following example, LEDs 1Y and 3Y are lit by pulling lines 1 and 3 high while Y is low. As you've probably noticed, we can't turn on all of the LEDs at one time. But, we can create the effect of having them all lit by rapidly cycling through the "strings" of LEDs. Each of X, Y, and Z will take a turn being grounded, with lines 1, 2, 3 optionally pulled high based on which LEDs in that string should be lit. If this is done rapidly, all of the LEDs appear to be on at the same time. Their brightness will be reduced since they are only on for a fraction of the time; in this example, 1/3 of the time (due to 3 cathode lines). Recall the multiplexing diagram shown on the left below. We can use the MCU's lines 1, 2, and 3 for the cathode lines, too, eliminating the need for lines X, Y, and Z. When we do this, the LEDs on the diagonal are removed since they would have anode and cathode connected to the same MCU control line (and would thus never light). This new configuration – Charlieplexing – is shown on the right below. 
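As a sanity check on this wiring scheme, we can enumerate which LED each ordered (anode, cathode) pair of control lines drives (a sketch; the line labels are illustrative):

```python
def charlieplex_leds(lines):
    """All LEDs controllable with the given control lines: one LED for
    each ordered (anode, cathode) pair of distinct lines. The diagonal
    (anode == cathode) is excluded, since that LED could never light."""
    return [(anode, cathode) for anode in lines
            for cathode in lines if anode != cathode]

leds = charlieplex_leds([1, 2, 3])
print(len(leds))  # 6 LEDs from 3 control lines
print(leds)
```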
With our new Charlieplexing arrangement, you can see that 3 MCU lines can control 6 LEDs. All combinations of two control lines are used. The image on the right shows another representation of the same circuit, illustrating how pairs of control lines are connected to LEDs. So, how many LEDs can you control with n MCU lines? n × (n - 1). There are two ways to derive this formula:

Adding Resistors – Simple Technique

We need resistors to protect the LEDs from over-current. The image on the right shows a simple technique of adding a resistor on each control line. Let's explore how well this will work. The image on the right shows an example of a circuit lighting a single LED. Given the following: Solve for R (recall that V = I × R): 5 V = 2 V + 2 × (10 mA × R), which gives R = 150 Ω. Now let's consider lighting two LEDs. In this case: We want to solve for i (current): 5 V = i × 150 Ω + 2 V + 2i × 150 Ω, which gives i ≈ 6.7 mA. Note that this is < 10 mA! With this method of placing resistors on each MCU control line, LEDs get less current and thus become dimmer as more LEDs are turned on at the same time. Fortunately, there's a better way to do this!

Adding Resistors – Better Method

The limiting factor becomes the MCU's current-sourcing ability. When multiple LEDs are powered at once, the MCU has to source current for each LED. Even more constraining is that just one pin needs to sink the current from all of the LEDs illuminated at a time. If the current needs are too high, one possible solution is to program the sequence to have more steps and turn on fewer LEDs at a time. In this case, each line would take a turn as the cathode multiple times in the sequence.

Game Timer Implementation

The game timer has 24 LEDs (12 bi-color lamps). By using Charlieplexing, we can individually control those 24 LEDs with just 6 MCU lines. I chose 33 Ω resistors, placed in series with each lamp. This results in 33 - 85 mA per LED, depending on the battery voltage and LED voltage drop.
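The worked example for the resistor-per-line technique can be reproduced with a few lines of arithmetic (a sketch using the same values: 5 V supply, 2 V LED drop, and a 150 Ω resistor on each control line):

```python
SUPPLY_V = 5.0    # MCU output high (approximately the supply)
LED_DROP_V = 2.0  # LED forward voltage
R_LINE = 150.0    # resistor on each control line, in ohms

# One LED lit: current flows through two line resistors (anode side
# and cathode side), so 5 V = 2 V + i*R + i*R.
i_one = (SUPPLY_V - LED_DROP_V) / (2 * R_LINE)

# Two LEDs sharing a cathode line: each LED's current i goes through
# its own anode resistor, and the combined 2i through the shared
# cathode resistor, so 5 V = i*R + 2 V + 2i*R.
i_two = (SUPPLY_V - LED_DROP_V) / (3 * R_LINE)

print(round(i_one * 1000, 2))  # 10.0 mA with one LED lit
print(round(i_two * 1000, 2))  # 6.67 mA each with two lit -- dimmer
```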
With six lines each taking a turn as the cathode, each LED will be lit for 1/6 of the time, producing a brightness equivalent to approximately 5.5 - 14 mA. For the timing, I chose to give each cathode group a 0.1 ms period. The LEDs allow 140 mA at 1/10 duty cycle for a 0.1 ms pulse, so the LEDs should be happy. They get current slightly more frequently (1/6 duty cycle vs. 1/10), but at a maximum of 85 mA. The MCU allows a maximum 40 mA / pin, with max 200 mA total for all pins. I'm obviously exceeding the 40 mA limit when there are fresh alkaline batteries in the timer, but the MCU seems to be holding up fine so far. I expect that most stress to the MCU comes not from the anode pins sourcing current, but from the cathode pin, which sinks current from multiple LEDs in each string. With the circuit design and functionality of the timer, I've limited that to a maximum of three LEDs at a time (instead of the theoretical five). My hope is that the MCU will tolerate this because of the 1/6 duty cycle and since it is sinking instead of sourcing the large current. There were four basic steps to creating the hardware for this project: documenting all the connections in a schematic, designing the board layout, having the board fabricated, and soldering items to the board. Schematic and PCB Design I started the schematic by adding all the components that I knew I'd use. Then, I considered the connections that I needed to make to the MCU. That imposed some constraints, but there was still plenty of flexibility. I determined an arrangement of LEDs that had each MCU control line acting as cathode for four LEDs and provided a nice layout for optimizing the traces. I placed the symbols on the schematic in an arrangement that matched the layout I would use on the board. At this point, I had a rough idea of the MCU connections, so I worked on the printed circuit board (PCB) layout. 
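The brightness figures at the start of this section (peak current scaled by the 1/6 duty cycle) can be double-checked in a couple of lines; the refresh-rate calculation at the end is my own back-of-envelope addition, not from the article:

```python
CATHODE_LINES = 6  # each of the six lines takes a turn as the cathode
STEP_MS = 0.1      # time each cathode group is active, per the article

def average_current_ma(peak_ma):
    """Equivalent steady current when an LED is lit 1/6 of the time."""
    return peak_ma / CATHODE_LINES

print(round(average_current_ma(33), 1))  # 5.5 mA
print(round(average_current_ma(85), 1))  # 14.2 mA

# Full refresh cycle and rate (my own addition):
cycle_ms = CATHODE_LINES * STEP_MS  # 0.6 ms per complete cycle
print(round(1000 / cycle_ms))       # ~1667 Hz, far too fast to flicker
```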
Once I could see where each component was located relative to the MCU, I could pick among the available connection options. I wanted the timer to be fairly compact, so I expected the AAA battery holder to cover a large portion of the board. With the LEDs installed, their leads would poke through to the back of the board, but that would prevent the battery holder from resting flat against the board. It was also looking challenging to get all of the components squeezed into the small-ish footprint that I was envisioning. The solution was to use two boards. That gave me plenty of extra space and also enabled me to have a clean look on the top surface for the LEDs and buttons with no additional components – or even traces. For PCB fabrication, I sent Gerber files exported from EAGLE to Gold Phoenix PCB. The MCU, resistor arrays, and a handful of resistors and capacitors are surface-mount components on the back of the bottom board. For that, I used a stencil to apply solder paste and then a griddle to heat the solder. My favorite method for making stencils is to use a cutting plotter. The remaining components were through-hole items, which were easily soldered with an iron. I wanted a box to enclose the electronics and give the timer a more finished look. For various reasons, I decided that I'd create a design and have panels laser cut instead of using a pre-fabricated box. This gave me more flexibility on size and shape and ensured that I would have access to batteries and switches. The design criteria: For the box, I'm using a common "sandwich" design: slots in the side panels hold the front, back, and mid-level panels in place via tabs, and the sides are held together with a long bolt. The picture on the right shows one of the side panels (it's the translucent panel on the right) and how tabs on the two mid-level panels and the back panel are mounted into slots on the side. 
I had planned to glue the top to the front and back panels, but it turned out that there's enough friction to keep it in place without needing any glue. For mounting the PCBs, I decided to attach the bottom PCB to a panel located between the two PCBs. In the picture above, you can see the mounting holes in the bottom PCB and the mid-level panel. The top PCB connects to the bottom PCB through headers making the necessary electrical connections. I ultimately just needed a 2D drawing for lasercutting, but decided to model in 3D to ensure correct vertical placement for the mid-level PCB-mounting boards and access to the volume knob. I used Alibre Design. For the 3D model, I first created parts for the PCBs including components, then created an assembly with the boards, and finally created the box panels directly in the assembly. For lasercutting, I created a drawing and inserted a "standard view" for each panel. Note that it's important to ensure that the scale is 1:1 for both the drawing and the insertions! I exported as DXF (AutoCAD 2004) to create a file to be used with the lasercutter. Here are the box panels and hardware: The main control code for the timer is a "state machine." It keeps track of the current state of the timer and checks for activities that would trigger a transition to a new state. There are numerous states, but they fall within four groups: Transitions can be triggered by: The main program loop has three phases: The game timer board is using an Atmel ATmega48A, which has the following features: There are two ways for the software to learn about interesting activities – interrupts and polling. With an interrupt, an internal process of the MCU watches for the specified trigger and interrupts code execution to call a special function when the trigger happens. With polling, the software repeatedly checks for any changes that need to be handled. Each has its benefits and drawbacks. 
The game timer code uses both interrupts (for the timer) and polling (for button activity, transition times for LEDs and sound, and completion of an ADC conversion). Details are in the relevant sections below. Classes can help provide organization and encapsulation. The game timer code has a C++ class for each "object" (listed below). The implementation details are handled by private class code (and stored in a class-specific file). Callers don't need to know or understand details about the implementation; they just communicate with the class through a public interface that provides functions for the features supported by the class (e.g. 'play a tune', or 'is the tune still playing?'). Public interfaces and some implementation details for the classes are described below. It may be helpful to consult the MCU datasheet when exploring the code. DIP Switch class The public interface provides functions to: The private class code deciphers the DIP switch settings. It knows which inputs to look at for each setting and how to interpret the switch positions. The public interface provides functions to: The private class code sets up Timer/Counter 0 for measuring time: The public interface provides functions to: During idle loop processing, the sound class checks elapsed time for the current note and transitions to the next note (or stops playing) when appropriate. The private class code sets up Timer/Counter 1 to generate a waveform on the output pin. It uses the Fast PWM with Compare Match Output mode, which works as follows: The tone determines the Prescaler and Top values (period / frequency). The volume level determines the Match value (pulse width). Recall that maximum volume is at a 50% duty cycle, i.e. when Match = Top / 2; the volume will be quiet for small (or large) Match values. 
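To make the Top/Match arithmetic concrete, here is a sketch of how a tone and volume level could map to counter values (the 8 MHz clock and prescaler of 8 are illustrative assumptions, not values taken from the actual firmware):

```python
F_CPU = 8_000_000  # assumed MCU clock in Hz (illustrative only)

def pwm_settings(tone_hz, volume_fraction, prescaler=8):
    """Counter values for a Fast PWM tone: Top sets the period (the
    tone); Match sets the pulse width (the volume). volume_fraction is
    the high fraction of the period, at most 0.5 for maximum volume."""
    top = F_CPU // prescaler // tone_hz - 1
    match = int(top * volume_fraction)
    return top, match

top, match = pwm_settings(440, 0.5)  # the loudest possible A440
print(top, match)                    # Match is Top // 2 at max volume
```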
The public interface provides functions to: The private class code handles selecting the appropriate channel (volume or battery), starting conversions, and checking to see whether prior requests have completed. When responding to caller requests asking if a value has changed, the class code requires that the value has changed by a minimum tolerance. When an analog to digital conversion is made, it uses a "reference voltage." The result of the conversion is the ratio of the input divided by the reference voltage. When reading the volume level, we use Vcc (the voltage from the battery pack) as the reference voltage. Since the volume input is a voltage divider, comparing against the battery voltage ensures that a particular volume setting will always yield the same result, even as the battery voltage changes. When checking battery voltage, we use the internal 1.1 V reference voltage. The battery input comes from the voltage divider shown on the right. The input to the MCU will be at a voltage level equal to Vcc × 25/(25+77). This means that Vcc voltages of 4.49 V and higher will result in the maximum reading, with lower values produced as the voltage drops. The public interface provides a function to let the caller specify which pattern to show. During idle loop processing, the LED class transitions to next "string" in the Charlieplexing cycle when enough time has elapsed. The private class code has a table of bytes that are the register values to use for each step of each pattern. Each pattern has six steps per cycle, one for each LED control line taking a turn being grounded (as the cathode). Each step needs to set direction register and output register bits as follows: The direction output values for each step are calculated in an Excel spreadsheet and stored in program memory (Flash, not SRAM). The buzzer and start button also use the same register as the LEDs; the LED code maintains their states. 
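Returning to the battery-voltage divider described above, the arithmetic is easy to check numerically (a sketch; the 10-bit scaling and clipping details are simplifications of the real ADC behavior):

```python
V_REF = 1.1                 # internal 1.1 V reference
R_LOW, R_HIGH = 25.0, 77.0  # divider resistors (relative values)
ADC_MAX = 1023              # 10-bit ADC full scale

def battery_reading(vcc):
    """Approximate ADC result for a given battery voltage."""
    v_in = vcc * R_LOW / (R_LOW + R_HIGH)  # divider output
    return min(ADC_MAX, round(v_in / V_REF * ADC_MAX))

print(battery_reading(4.49))  # pegged at full scale, per the article
print(battery_reading(3.0))   # drops as the batteries drain
```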
Putting it All Together Once I had the code all written and was ready to download and test it, it occurred to me that I'd forgotten to check the code size before selecting the MCU. I'd gotten the smallest (4k Flash) option when I'd ordered parts for the timer. It's a good idea to ensure that the physical parts match the dimensions from the spec sheets, so I like to get the parts and verify their sizes before ordering boards. Normally, I prototype with the biggest part, but for some reason I'd gone with the cheapest this time. I checked the program and discovered it was 6k bytes. I was not a happy camper! Being a few bytes over would be easy to solve, but it seemed unlikely that I could find 2k of extraneous stuff. I'd already soldered everything to the board and wasn't optimistic about the likelihood of being able to desolder the MCU without damaging other components. That meant that if I needed a larger MCU, I'd have to start over with a new board. Plus, of course, I didn't have any of the larger MCUs on hand. In general, I try to write small, efficient code. There are a few places where I choose maintainability over efficiency, but that has a minimal effect on code size. I checked the "release" build to see how much space I'd save when removing debug code, but that had a minimal impact. I considered whether I could trim down program-memory tables that I was using for things like the LED patterns and the timer end tune. Reducing these would mean that I'd lose some features, but I was getting desperate. It turns out that I'd only be able to save a hundred bytes or so (out of 2,000 needed). At that point, I decided to investigate what was taking up so much space in the code, so I looked at the .map file. Aha! There was a lot of space used for floating point functions! I was using floating point when making calculations for the sound volume; I had three assignments and four multiplications using 'float' variables. 
I changed to fixed point math and dropped from 6,248 bytes to 2,852! Yippee! This would have been a fun discovery if it had just been a cool optimization, but with the current state of the project, it was especially sweet.

Fixed Point Math

Fixed point math uses integers to represent fractions. There are two advantages to using integers: we don't need special libraries (all that code space!), and calculations are faster than with floating point (on MCUs). For a fixed point implementation, a constant denominator, or "base," is chosen. Stored values are determined by multiplying the actual number by the base. For example, consider wanting to store dollars and cents. We'll choose 100 as the base. $1.23 is represented as 123. $2.90 is 290; $0.07 is 7; $13 is 1300. Addition and subtraction work as usual: note that a "carry" from fraction to whole number happens automatically (as does a "borrow"). For multiplication, one additional step is needed: the result must be divided by the base after doing the multiplication of the two fixed point values. For division, we first multiply the dividend by the base, then do the specified division. Note that in the second example, the result is truncated; remember, we're using integer math!

Fixed Point Math in the Game Timer

For the implementation of fixed point math in the game timer, I chose a base (denominator) of 2^16. Powers of two are commonly used for the base because multiplication and division are simple bit shifts. I'm using a fixed point variable to store the fraction of the sound wave period when the pulse should be high (to set the volume level). This number is always less than one (since the maximum value will be half the pulse, i.e. 0.5). So, I just need a short (16 bits) to store my fraction. I multiply this fraction by the counter's Top value (period) to calculate the counter's Match value (pulse width). This calculation is made using a long (32 bits) in order to avoid overflow.
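The rules above — scale by the base to store, divide after multiplying, pre-scale the dividend before dividing — can be condensed into a few lines. This is a sketch with helper names of my own; with a power-of-two base like the timer's, the scaling steps become bit shifts:

```python
BASE_BITS = 16
BASE = 1 << BASE_BITS  # 65536; a power of two makes scaling a bit shift

def to_fixed(x):
    """Store a number as an integer: the value times the base."""
    return int(x * BASE)

def fix_mul(a, b):
    """Multiply two fixed-point values, then divide by the base."""
    return (a * b) >> BASE_BITS

def fix_div(a, b):
    """Scale the dividend by the base first, then divide (truncating)."""
    return (a << BASE_BITS) // b

# The timer's use case: a fraction (< 1, so it fits in 16 bits) times
# the counter's Top value, computed in 32-bit-wide integer math.
top = 2000                                  # example Top value
match = (to_fixed(0.5) * top) >> BASE_BITS  # half of Top
print(match)  # 1000
```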
 This post will be less organized than most posts; some of these thoughts and ideas are still a little raw. Backward design – the method by which one begins with the desired end result(s) of an educational program, determines acceptable evidence showing that the result(s) has been achieved, and then creates a plan to teach the skills and content that will lead students to provide that evidence – has been on my mind lately. It’s one of the core concepts of a college teaching and learning course I co-teach but that’s not why I’ve been thinking about it. For me, backward design is a “threshold concept;” it’s an idea that changed how I think about teaching and I can’t go back to how I thought prior to this change. So although I learned and most often use and teach backward design in the context of designing or redesigning a single college course, I’ve been thinking about the role of backward design in different contexts. For example: - I know that backward design has been and is used to develop curricula and not just individual courses. Today was the first time I got to see firsthand how that plays out with a group of faculty to develop a full 4-year curriculum for this discipline. I was most struck by how difficult it was to keep true to the backward design philosophy and not get mired down in content coverage and the limitations imposed by the current curriculum. It was difficult even for me to remain on course as I tried to help facilitate one of the groups of faculty engaged in this process. I underestimated the increased complexities involved in scaling up the process from a single course to an entire curriculum; it’s not a linear function. - There has been quite a bit of discussion lately among student affairs professionals regarding their conference presentations (e.g. this Inside Higher Ed blog post with 30 comments). Put bluntly, many people are unsatisfied with the current state of these presentations. 
Just as backward design can scale up from a class to a curriculum, it can also scale down to a single class session. And shouldn’t a good 50 minute conference presentation resemble a good 50 minute class session? So why not systematically apply backward design to conference presentations? Many conferences seem to try to push presenters in that direction by requiring them to have learning outcomes for their sessions but that isn’t enough. - Unfortunately, pedagogy and good teaching practices are not formally taught and emphasized in most student affairs programs so I expect that most student affairs professionals have not been exposed to backward design as a formal process. That’s a shame because it seems like such a good fit for what student affairs professionals do! And it fits in so well with the ongoing assessment movement because it so firmly anchors design in measurable outcomes and evidence-based teaching! Would any student affairs professionals out there want to learn more about backward design and try to apply it to some of your programs? Please let me know because I’d love to help! I’m positive this would work out well and I’d love to test these ideas! In a recent blog post releasing a (very nice!) infographic about “Best Practices in Using Twitter in the Classroom Infographic,” Rey Junco writes: I'd like to point out that I'm a real stickler about using the term “best practices.” It's a concept we toss around a lot in higher education. To me, a “best practice” is only something that has been supported by research. Alas, most of the time that we talk about “best practices” in higher ed, we're focusing on what someone thinks is a “good idea.” I agree and I’m even more of a stickler. 
There have been several specific situations in which I have been asked or encouraged to write a set of best practices for different things but I always got stuck asking myself: What makes this particular set of practices the “best?” I share Rey’s dislike of “good things I’ve done” being presented as best practices. But my (relatively minor) frustration extends a bit further because to me the adjective “best” implies comparison between different practices i.e. there is a (large) set of practices and this particular subset has been proven to be better than the rest. I’d be perfectly happy if people were to stop telling us about best practices and just tell us about “good” practices until we have a large enough set of practices and data to judge which ones really are the best. If you’ve done good work, don’t distort or dishonor it by trying to make it bigger than it is. After all, even Chickering and Gamson (1987) presented their (now-classic and heavily-cited) ideas as “Seven Principles for Good Practice in Undergraduate Education” and not “Seven Best Practices in Undergraduate Education.” I have run into an unexpected and interesting issue. Although I am not as far along with my dissertation as I would like to be, I have decided to hit the job market. Some of the jobs to which I am applying are not directly related to student affairs and technology, the primary topic of this blog and the tagline of this website. That makes me a bit nervous. It’s natural for people to want to make career changes, large and small. But I never considered how to handle making such a career change when I have a strongly established digital identity that is not directly aligned with the desired career. This is particularly tricky because I have a diverse skillset and I am applying to a diverse set of jobs from faculty development to student affairs assessment. 
How will potential employers handle an apparent disconnect between my established digital identity – the topics I’ve regularly discussed and the areas in which I have publicly proclaimed expertise – and the jobs to which I am applying? I am not misrepresenting myself in my application materials. There are many skills I have acquired and interests I have developed that I simply haven’t discussed here, especially some that don’t seem to be on-topic. But will potential employers take my claims of competence and experience seriously when they weigh these “new” and undiscussed skills and interests against those I have repeatedly and publicly discussed? I don’t have answers for these questions right now. But I will soon because this is not a theoretical issue but one I am actively confronting right now. What can I do? - Scour my materials to ensure that anything I already have online that is relevant is accurately tagged, perhaps even highlighting those collections of materials somehow. - Quickly begin to build up a (larger and more visible) body of blog posts related to these other topics (e.g. Scholarship of Teaching and Learning, faculty development, assessment). - Tweak the tagline of this website so it’s aligned with a broader set of my professional interests. - Create alternative expressions or evidence of competence and experience with these other topics (e.g. e-portfolios). If I were always completely open and transparent about all of my interests and experiences, I wouldn’t have this problem because these facets of my identity would already be visible. But I think it’s healthy and even necessary to consciously practice some level of self-censorship and selection, at least for me. I just need to figure out how to present multiple facets of my identity with integrity now that it has become necessary for me to do so. And hope that others can perceive that I am acting with integrity and understand what has happened. 
I’ve never liked the trite phrase “don’t sweat the little things.” I have no argument with the general idea that you should spend most of your time on the large, important things. But I reject the implication that the little things aren’t important and not worth spending time on. It offends my passion for detail and belief that details are important. More important, and more defensible, is the idea that “little” is relative; what is little to one person is large to another. Let me offer an example. One of the projects at my research shop, the Law School Survey of Student Engagement (LSSSE), focuses on law schools and law students in the U.S. and Canada. I don’t have any formal responsibility beyond general collegiality and professionalism to work with the project and its staff. However, I work on LSSSE projects when they need assistance and my schedule permits because (a) the work they do is important and interesting and (b) I love working with the LSSSE staff. A few months ago, the LSSSE folks needed some help preparing their latest Annual Results and I was very happy to help. They surprised me a few weeks ago by letting me know that in return for my assistance they gave me “top billing” in the Annual Results by including me in the LSSSE staff listing on page 1 of the report. In many ways, this was literally a little thing. It costs the LSSSE staff virtually nothing to do this. It’s less than half a line of text that few people will ever read (even if you’re interested enough to read the LSSSE Annual Report I doubt that you’ll read through the staff listing, too!). And it only took them a few seconds to include my name in the document. But to me, it’s not so little. How wonderful that the LSSSE staff thought enough of me to claim me as one of their own! What a kind and unexpected gesture of thanks! That is why I think it’s important to spend a little bit of time “sweat[ing] the small stuff”: you never really know what is small.
So spend some time working on the little things because they may unexpectedly grow into big things. On Friday, a colleague pointed out a new article on Mashable that is titled “Why Tablet Publishing Is Poised To Revolutionize Higher Education.” I don’t trust the claims made in this article. I’m going to explain why I don’t trust the claims, not to convince you that my opinion is correct but to give you an understanding of how I evaluate claims like the ones made in the article. I’ll lay out my thoughts in chronological order. - The article is published at Mashable. I removed Mashable from my RSS reader over a year ago because I got tired of their poorly-written articles that make ridiculously overwrought and unprovable claims. This certainly isn’t enough for me to condemn this particular article but it certainly makes me cautious right from the beginning. - The title makes a very bold claim. Many people have attempted to “revolutionize” education; few have succeeded. And even fewer have been able to explicitly predict revolutions before they occur or even recognize them as they are occurring. The author has a helluva case to make and he better bring remarkable evidence to support his claim(s). - After reading the title, a quick glance through the article indicates that it’s a utopian piece largely based on the idea of technological determinism. In other words, it’s not only wildly optimistic but it also relies on the idea that we can predict and control how people use technologies by the way in which those technologies are designed. Both of these ideas – utopia and technological determinism – have a bit of history in the field of social informatics. The history is mostly negative; these ideas simply don’t work most of the time. So my skepticism continues to increase. - The author of the article is an executive at Adobe. 
In fact, he’s the “director of worldwide education.” That doesn’t mean that his opinions are necessarily biased but it’s another reason for me to be skeptical. - The article claims that “[There are] better study habits and performance with tablets.” Only one study is cited to support these sweeping claims: a Pearson Foundation “Survey on Student and Tablets.” For example, the author states that “86% of college students who own a tablet say the device helps them study more efficiently, and 76% report that tablets help them perform better in their classes” and a few other claims. Even if this study were flawless, the author needs a whole lot more evidence to support such a broad claim. - To their credit, Pearson offers to share methodological details about and data from their survey if you just ask them; I haven’t asked so I don’t have any more detail than what is provided in that 2-page overview. But we do know that the survey was conducted online. Given that about 20% of people in the U.S. do not have access to the Internet (the Dept of Education estimates 18.6% and the Pew Internet & American Life Project estimates 21%), it seems unlikely that an online survey can produce data that is representative of the entire population. It seems particularly problematic to omit non-Internet users when asking about technology since the results will almost certainly be skewed. - Even if we accept that the Pearson numbers are accurate or in the right ballpark, I’m still not sure if they’re very informative. I guess it’s interesting that many young people think that tablets will help them study more efficiently and that they will replace textbooks in the next five years. I just don’t think that we can use these data to make any predictions. - Let’s ignore the validity issues for some of Pearson’s data (e.g. people are notoriously bad at distinguishing between “what I like” and “what is most efficient/effective”) so we can move on.
- The author correctly asserts that digital textbooks can include more features than printed textbooks, including “video, audio, animation, interactive simulations and even 360-degree rotations and panoramas.” However, the author does not say how we’ll produce all of that additional material. I don’t expect the author to solve every challenge associated with his predicted revolution but it would be nice to at least acknowledge them instead of glossing them over or ignoring them entirely. - In the next section of the article, the author claims that “interactive learning leads to better retention.” The only evidence cited is a news article about a study of elementary and high school students using 3D technology in science and math classes. Of course, since I’m an academic snob I think it would be much better to cite a primary source, preferably one that has been peer-reviewed, than to rely on a popular press article. Once again, even if we accept that this study is perfect, it’s not even close to being enough to support such a broad claim. - Next, the author claims that digital publishing can help us better “[understand] learning effectiveness” using “integrated analytical tools.” I have no issue with this as a broad theoretical claim. But it seems to completely bypass the fact that U.S. higher education is in complete disarray in terms of even settling on broad learning objectives, much less specific objectives and associated assessment tools or indicators. (Look into the “tuning project,” especially the “Tuning USA” project, to get an accurate view of these issues.) - The next claim the author makes is that “digital publishing makes knowledge more accessible.” The author must be using “accessible” in a different way than I commonly use it because it’s hard to take that claim seriously given (a) the lingering digital divide, participation gap, and similar inequities in the U.S.
and (b) the immense resistance many digital publishers have exhibited to making their content accessible to the visually impaired. - Once again, the author focuses solely on a possibility offered by the technology without giving any thought to the cultures in which the technology is embedded. He writes that “digital publishing allows professors or subject matter experts to self-publish their own educational materials or research findings and distribute the information on tablet devices” without offering even the barest hint about how this will occur without adjusting or overturning the systems that would need to support this. In other words, why would faculty do this? What is the incentive? - Similarly, the author claims that “by harnessing interactive technologies, educators can explain even the most complex scholarly or scientific concepts in compelling and intelligible ways.” Once again, I accept this broad claim (ignoring the “even most complex” qualifier because it’s just silly) in theory but balk at it in practice. It takes complex skills to create effective interactive content, skills that are different from those possessed and valued by faculty in many disciplines. - At this point I’m just tired of reading these grand claims supported by flimsy or no evidence… I’m not a Debbie Downer or a Luddite. I agree with the broad proposition that digital publishing has potential to make a huge impact on U.S. higher education. And I agree that tablets are super cool and very useful in some circumstances; I purchased an ASUS Transformer a few months ago to replace an ailing netbook and I’m very happy with my purchase! Fundamentally, I distrust the claims made in this article because the author fails to support them. Even when the author provides cherry-picked examples and studies, they are often of poor quality and always insufficient to support those claims. 
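The sampling objection above — that an online survey necessarily omits the roughly 20% of people without Internet access — can be made concrete with a toy simulation. Everything here is hypothetical: the 86% figure is borrowed from the article's claim, while the 30% rate for offline people and the population sizes are invented purely to illustrate how a coverage gap skews an estimate.

```python
# Toy illustration of coverage bias in an online-only survey.
# All rates are made up for demonstration; they are NOT Pearson's data.
import random

random.seed(42)

# Hypothetical population: 80% online, 20% offline.
# Assume (for illustration only) offline people are far less likely
# to say a tablet helps them study.
population = (
    [{"online": True,  "tablet_helps": random.random() < 0.86} for _ in range(8000)]
    + [{"online": False, "tablet_helps": random.random() < 0.30} for _ in range(2000)]
)

# The quantity we actually care about: the rate in the whole population.
true_rate = sum(p["tablet_helps"] for p in population) / len(population)

# What an online survey can see: only the online subpopulation.
online_only = [p for p in population if p["online"]]
survey_rate = sum(p["tablet_helps"] for p in online_only) / len(online_only)

print(f"true rate:   {true_rate:.2%}")   # roughly 75% in this toy setup
print(f"online-only: {survey_rate:.2%}") # roughly 86% — inflated by the sampling frame
```

The point is not the specific numbers but the mechanism: if the excluded group differs on the question being asked — and non-Internet users plausibly differ on technology questions — the online-only estimate is systematically shifted, no matter how large the sample is.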
This is quite disappointing since the author could have easily drawn upon the large and rapidly-growing body of evidence in this area. I expect very little from an article published by Mashable and this article delivered. There are many different angles one could take in reporting on the 2011 NSSE Annual Results; it’s a dense 50-page report. I know that every group has its own agenda and every reporter has his or her own personal interests but it’s very disappointing that CBS News chose the snide headline “Business majors: College’s worst slackers?” for their article. In an ordered list, something must be last. In this case, some major must rank last in the number of hours students typically study each week. But to label that group of students “slackers” simply because they fall at the bottom of the list is unnecessarily mean and unprofessional. The 2011 NSSE Annual Results were released today. I don’t want to focus on the content of the report in this blog post. Instead, I am briefly noting how fun it is to work on a project with a large impact that regularly receives attention from the press (even if some of the attention is sometimes negative, a very interesting experience itself). It’s gotten more fun each year as I’ve become more involved in much of what we do; this year I directly contributed by writing part of the report itself. Yes, it’s ego-boosting to see my work in print but more importantly it helps address a very serious and difficult problem that vexes many researchers and administrators in higher education: It’s hard to explain to others, especially our parents and extended families, what we do. Instead of trying to convince them that I really have graduated (several times!) and am not wasting my whole life in college, I can send them the report and articles from the New York Times and USA Today and say, “Look – this is what I do!” Now I get to watch media reports and subsequent discussions to see how they play out and what they will emphasize. 
This process is unpredictable and it has surprised me in previous years when relatively small bits of information have caught on to the exclusion of other interesting and important information. As The Chronicle of Higher Education notes, this year may be a bit different given recent events but who knows how things will play out. I’m buried in work and research but I have two thoughts dancing on my mind and they’re both related to online community: - I hate when websites or tools list reader comments in reverse chronological order i.e. newest messages first. I finally figured out why I hate that: It makes it very difficult to view the messages as a coherent discussion within a pre-existing social context. Because new participants are not immersed in the context of the ongoing discussion they can easily view the opportunity to comment merely as a way to shout messages without any responsibility to engage with or form a community. Mediated communication is difficult enough without us actively encouraging antisocial behaviors and views. - Our obsession with tools and technologies leads us to underestimate or ignore the social effects and communities that build up around them. I see this happen all of the time in Wikipedia when new editors leap into articles without having any understanding of the cultural norms of the immense community of users that have used Wikipedia for years. It’s sadly naive to believe that such an immense collection of resources doesn’t have a correspondingly large and complex community with cultural and social norms and expectations. (I’m working on a longer post but I keep getting interrupted by life so this short post will have to do for now.) I’m super excited that I’m going to the 2011 EDUCAUSE Annual Conference next month in Philadelphia to work with EDUCAUSE staff and members to develop potential questions for the next version of NSSE! 
I’ve always been a huge fan of EDUCAUSE and the work they do so I’m very hopeful that this collaboration will be fruitful and help us figure out the right kinds of questions to ask about technology. Over the past four years I’ve been involved in several efforts to address technology in NSSE and it’s very difficult so I’m really excited that we’ll be able to tap into the experience and expertise of technology experts. I’m also a bit trepidatious about this collaboration. It’s young and in many ways undefined. I am hopeful that it bears fruit but it may fizzle out or even backfire since there is so much ground we have yet to cover and these are two large, complex organizations. Like many such efforts, it also feels like it is very dependent on a small number of people. While we’re all very talented and dedicated, we’re also incredibly busy and it may turn out that our interests are incompatible. I’m also very thankful that this collaboration has even made it this far. It’s very gratifying that my colleagues are still willing to take risks on public ventures like this even as we continue to experience sharp public criticism. It’s more incredible for me to know that my supervisors have been supportive of this effort even though it has largely been championed by one graduate student. Of course, I haven’t done this or anything here by myself; I’ve had wonderful support from many people in nearly everything I’ve done here, especially from my current supervisor Allison BrckaLorenz who has been an enthusiastic supporter and wonderfully capable advisor from day one. Despite all of her other important responsibilities, Allison is neck deep in this EDUCAUSE/technology-thing with me and I’m so happy that she is involved! 
So even though I’m a little fearful that this particular effort could fizzle out or even publicly blow up (which seems extraordinarily unlikely but I’m always a bit paranoid), I go into this knowing I’m not alone and I’m working with and for people as supportive as they are brilliant. I really want this collaboration between two of my favorite organizations to work. If this all works out well – and it will be a couple of years before we really know – it could be very powerful in helping U.S. higher education better understand and use technology to teach and communicate with undergraduates. I know that’s a very lofty aspiration but these two organizations are more than capable of fulfilling it. This is a further development of thoughts that occurred to me as I read and responded to John Gardner’s latest post. I have worked in student affairs and I have a Master’s degree in that field. I am a PhD Candidate in one of the world’s best higher education programs. I work at the National Survey of Student Engagement. These experiences and education have firmly drilled into me the benefits of being engaged and active in campus groups, events, and activities. I see and hear from my colleagues and my students the incredible impact of these activities, especially the acquisition of lifelong friends. Here’s my secret confession: I was involved in virtually nothing as an undergraduate and a Master’s student. I can only name two fellow students from my undergraduate alma mater; I’ve scarcely exchanged Facebook messages with them and haven’t spoken to them since I graduated from the University of Tennessee (with a 2.48 GPA; “C is for cookie, that’s good enough for me!”). The story isn’t much different for my Master’s classmates; with one exception, I only keep in touch with them through coincidental attendance at professional conferences. My lack of campus involvement was my choice, for good or ill. It’s part of who I am and I can’t envision my life any differently.
And I don’t think anyone could have convinced me to act differently or be different. I make these confessions because I know there are many other students who are making the same decisions, and I don’t think those students and their decisions are understood or respected by many of my colleagues, especially those in student affairs. I get the impression that sometimes those students are viewed with pity and even scorn because they choose not to engage in our favored activities in our chosen environment. And that saddens me, especially because we preach the benefits of diversity and choice. Many of us believe those students need to be “saved,” but that seems very disrespectful of those students and their choices.
Asterias amurensis, also known as the northern Pacific seastar and Japanese common starfish, is a seastar found in shallow seas and estuaries, native to the coasts of northern China, Korea, far eastern Russia, Japan, Alaska, the Aleutian Islands and British Columbia in Canada ("Asterias amurensis (Japanese seastar)", 2012; Stevens, 2012). There are about 150 species in the genus Asterias, of which some important ones are A. rubens, A. gibbosa, A. vulgaris, A. forbesi, A. amurensis and A. panceri. According to Verrill, it most resembles the north Atlantic species Asterias forbesi and A. rubens. Two forms are recognised: the nominate and forma robusta from the Strait of Tartary. Asterias pectinata, described from Kamchatka by Johann Friedrich Brandt in 1834 or 1835, was synonymised with Asterias amurensis by Fisher in 1930. In 1923 Walter Kenrick Fisher synonymised Allasterias with Asterias, and in 1930 synonymised anomala, rathbuni and rathbuni var. … He also subsumed Asterias rollestoni as a forma of A. amurensis in 1930, and further stated that A. versicolor might well intergrade with his A. amurensis f. rollestoni to the north of its range. In the 1950 work Sea stars (Asteroids) of the USSR Seas (translation), Djakonov named five new forms of this species from the far eastern Soviet Union (recognising six forms including the nominate), although these were later all synonymised except for one: f. … The entire mitochondrial genome of As. amurensis was 16,419–16,421 … This species has no special conservation status.

Northern Pacific sea stars have five arms, all ending in small, pointed, upward-turned tips, and a small central disk. Spines line the ventral groove of each arm, where the tube feet are found. The dorsal side shows a wide range of colours, from orange to yellow, sometimes red and purple; one field guide describes the species as yellow to orange with purple markings, growing to yellow as an adult. Individuals can grow to be up to 50 cm in diameter.

It is typically found in shallow waters of protected coasts and is not found on reefs or in areas with high wave action; it prefers shallow, sheltered areas such as bays and estuaries. One guide gives its depth range as up to 200 m, and it has been found at a maximum depth of 220 m. In Japan it is abundant at 20 m depth, but decreases to 50 m, where it is replaced by another seastar species, Distolasterias nipon. It has a temperature tolerance of 0–25 °C according to one source, or 5–20 °C according to another; the optimum temperature is said to be 9–13 °C. While it prefers water temperatures of about 7–10 °C, it has adapted to warmer Australian waters of about 22 °C. It is able to tolerate a large range of salinities, from 18.7–41.0 ppt, and can survive in estuaries.

As a native species it is found throughout parts of the Pacific Ocean near Japan, Russia, northern China and Korea, including throughout the Sea of Japan. In Russia it is found in the Peter the Great Gulf in Primorsky Krai, in the Chukotka Autonomous Okrug from the eastern Chukchi Sea to the Arctic Ocean, and around Kamchatka, the Kuril Islands, both shores of the Strait of Tartary and both coasts of Sakhalin. In Canada it was collected in 1887 northeast of Vancouver Island, British Columbia. The species has been introduced to oceanic areas of Tasmania in southern Australia, parts of Europe, Maine and New Zealand. In Australia it was first collected in 1982 and first reported in 1985 in the Derwent River estuary in Tasmania (another account gives 1986 as the first record), and it was first reported in Victoria in 1998; it has since colonised the Derwent Estuary, Port Phillip Bay and Henderson Lagoon in Tasmania. Based on the distribution of northern Pacific seastar populations in shipping ports and routes, the most likely mechanism of introduction is the transport of free-swimming larvae in ballast water: ships take in ballast water containing seastar larvae in a port such as one in Japan and discharge it in a port such as one in Tasmania, where the larvae metamorphose into juvenile sea stars.

The sexes are separate and fertilization is external: males and females release their gametes into the seawater (Byrne, et al., 1997; Paik, et al., 2005; Stevens, 2012). These sea stars have ectosomatic organs, meaning that the pores for gamete expulsion are in direct contact with the marine environment. Females are capable of carrying up to 20 million eggs and spawn successively during the breeding season, which runs from January to April in Japan (where there may be two main spawning events in the year; elsewhere there is one), from June to October in Russia, and between July and October in Australia. The fertilized egg undergoes holoblastic, radial cleavage followed by gastrulation, completing the beginning stages of larval development. The larva begins to feed once the gastrovascular canals are formed, and at this stage is called a bipinnaria; it later develops brachiolar arms, three of which combine with a central adhesive disk to form the brachiolar complex. Larvae float as pelagic plankton from 41 to 120 days before they find and settle on a surface and metamorphose into juvenile sea stars. Larvae are capable of sensing metamorphosis-inducing factors expelled by adults via neural cells held within the adhesive papillae on the external surface of the brachiolar arms, each of which joins at the center of the organism to form a central disc (Murabe, et al., 2007). Sexual maturity occurs in both males and females when they are 3.6–5.5 cm in length, but by far most reproduce when around 10 cm in diameter, at about one year old. If the seastar is ripped apart, each arm can grow into a new animal (fissiparity) if a part of the main disk is attached. There is no specific information available regarding lifespan, but the animals can survive at least four years in the wild in Japan, and it is estimated that most live two to three years. The population is mixed, with different age groups found intermingled.

It is a voracious predator and scavenger, has a prolific reproductive capacity, and now numbers in the millions. Bivalves such as mussels, scallops and clams comprise the largest part of its diet: it pulls the valves apart with all five arms and then everts its stomach into the shell. It is a predator which can impact the abundance of juvenile bivalves, and it sometimes also preys on gastropods, crabs, barnacles, ascidians, sea squirts and algae; it will also eat dead fish and fish waste, and can dig clams out of the seabed on occasion. It can be selective or opportunistic depending on availability of prey, and individuals have occasionally been seen exhibiting cannibalistic behavior when food sources are particularly low. Northern Pacific sea stars are not generally preyed upon by other organisms; they are mostly preyed on by other species of starfish, and in aquaria in Alaska, king crabs (Paralithodes camtschaticus) were recorded feeding on this seastar. In laboratory experiments in Korea, Charonia sp. … They are able to perceive light stimuli and are positively phototactic; when four of five arms are shaded, a sea star will move with its illuminated ray forward. The parasite O. stellarum infects testes and feeds on the gonads of various seastar species. A possible commensal is the bacterium Colwellia asteriadis, a new species published in 2010, which has only been isolated from Asterias amurensis hosts in the sea off Korea; the hosts showed no effects from hosting the bacteria. Other possible parasites found associated with these seastars are the skeleton shrimp Caprella astericola, the copepod Scottomyzon gibberum, the polychaete scaleworm Arctonoe vittata, harpacticoid copepods of the genera Parathalestris, Thalestris, Paramphiacella and Eupelite, several unidentified gammaridean amphipods, and an unidentified apicomplexan living within it.

Asterias amurensis (northern Pacific seastar) has the potential to establish large populations in new areas, and early detection remains the best solution to reducing its harmful effects. The population goes through boom-and-bust cycles in Japan, where it can swarm on occasions; during swarms the adults can float on the sea surface due to air retained within the body cavity. In Japan, where it is native, population outbreaks have cost the mariculture industry millions of dollars in control measures and losses from predation: it can have significant impact on Mizuhopecten yessoensis scallop plantations and on populations of Fulvia tenuicostata and Patinopecten yessoensis in Japan, and some impact on mussels and oysters in Tasmania. In Australia, the economic effects of the species are still being fully evaluated, but it is thought that if its spread continues, the soft sediment communities along the coast may be compromised; it affects native species including oysters, mussels and scallops. In the Derwent Estuary, the northern Pacific seastar has been connected to the decline of the endemic endangered spotted handfish, preying on its egg masses and on the ascidians on which the handfish spawn. The spotted handfish is endemic to south-eastern Australia, occurring in the lower Derwent River estuary, Frederick Henry Bay, D'Entrecasteaux Channel and the northern regions of Storm Bay; these strange-looking, ambulatory fish are threatened with extinction due to habitat decline and predation by invasive species such as northern Pacific seastars. One study examined native and invasive seastars feeding under two mussel aquaculture sites in south-east Australia, to determine whether food-rich farm habitats are likely to be reproductive hotspots for the invasive seastar (Asterias amurensis) and whether the larger native seastar (Coscinasterias muricata) … Tagged seastars in Tokyo Bay, Japan, logged maximum travel distances of 2.5 km in 32 days (78 m/day) in the west of the bay, and 8.1 km in 129 days (62.8 m/day) in the east. Trials have been run to find effective removal processes, including physical removal of A. amurensis, which was estimated by workshop participants to be the most effective, safe and politically attractive option when compared with chemical or biological control processes; several "sea star hunting days" have been organized in Tasmania, in which several thousand sea stars have been removed. Mountfort et al. studied developing a probe to test ballast water and detect the presence of this specific maritime pest. This pest is sometimes confused with native species. It is considered useful in traditional medicine in China and is in the 2015 Pharmacopoeia of the People's Republic of China. Marine bioinvasions have become an issue of global concern following the damage caused by the Eurasian zebra and quagga mussels (Dreissena polymorpha, D. …
bugensis) in the North American Great Lakes and the Mississippi River system, the Northern Pacific toxic dinoflagellates, seastar (Asterias amurensis) and … , covering about 28 % of the world asterias amurensis habitat surface level of investment! Mainly lives in oceans, seas, or 5-20 °C according to one source, or.! Occurs in the very deepest oceans ( below 9000 m ) are sometimes referred to as the Diversity! And feeding on this seastar, especially the males Fisher synonymised Allasterias with Asterias, and can survive estuaries... As their orange color for Northern Pacific sea star will move with its ray! Of Japan: reproduction and current distribution, moving toward light SeaLife center to. Called a bipinnaria recognised: the animal is naturally found, the southern ocean ( above degrees! Moving toward light the population has not been assessed by the spiny sand Luidia... And now numbers in the year, with different age groups found intermingled boom. Gametogenesis within the gonads colonised Australian waters in the Northern Pacific asteroid Asterias amurensis on survivorship juvenile... Species, including oysters, mussels and scallops specimens in Australian Museums formed, and Pacific walruses Odobenus! Well as their orange color incomplete metamorphosis # 03 to one source or... Their tube feet are found and Differentiation, 49 ( 8 ): 65-70 the sea of.! From under the substrate, and sometimes purple on their dorsal side late winter and early spring months, into... The top and sides of the Northern Pacific sea stars are able to tolerate a large range of,. Guide '' ( On-line ) larva into the summer one place to another, from 18.7-41.0 ppt., Pacific. The evenly reticulated arrangement of the species based on public sightings and specimens in Australian Museums identification amurensis... Water currents males are also reproductively mature for about 6 months of the dorsal plates pores for expulsion! H. Hatoyama, K. Mieko, H. 
Yang: asterias amurensis habitat nominate and forma robusta from environment... First recorded in Australia ( Stevens, 2012 at http: //adl.brs.gov.au/marinepests/index.cfm? fa=main.spDetailsDB & sp=6000005721 # feedingPredators page! Filters ( scroll to see full list ) Taxon sponge cover amurensis the! Polygamy in which several thousand sea stars have been removed is completely yellow colors, from 18.7-41.0.! South Korea, Charonia sp map below shows the Australian distribution of Northern... Being characterized asterias amurensis habitat the tides, between the highest and lowest reaches of the.. Oceans ( below 9000 m ) are asterias amurensis habitat referred to as the abyssal zone amurensis feeding Predators! On this seastar, especially the males south Korea, Japan, scuticociliates! Be lethal for Asterias amurensis in Tongyeong, Korea ; Yoshida and Ohtsuki, 1968 ) primarily! Top speed of 20 cm/minute sides of the species based on public sightings and specimens in Museums. Marine pest Incursions remains the best solution to reducing harmful effects of Pacific. To go through 'bust and boom ' cycles … Atlas of living Australia Choi, et al., 2010.! Been transported to and established populations in Regions outside of their maturing ovaries known positive economic of... Can affect the harvest of mariculture operations and are positively phototactic salt water gastrovascular... They use their suction feet to force open the bivalve ’ s shell, then insert the,. Pulls their wings apart with all five arms that taper at the end to pointed tips are! A row of spines from each arm, where the tube feet found... Sometimes also preys on large bivalve molluscs, and barnacles its dorsal side ): 65-70 with all arms... 22 ], it is a generalist predator, but primarily preys on large bivalve mollusc species British Columbia and... Body temperature month, thereafter they grow 1–2mm a month, thereafter grow... 
And Ohtsuki, 1968 ) having the capacity to move from one place to another mixed, maturity. Habitats, these seastars move towards light western coast of North… by its lack interactinal. About 7–10 °C H. Kwon, H. Kaneko, Y. Nakajima ; Choi, et al., 2010 ) marine! O. stellarum infects testes and feeds on the top and sides of arms... Government implemented a New Biosecurity Act 2015 ( the Act ) from orange to yellow sometimes. Castration and be lethal for Asterias amurensis salty water, usually in coastal marshes and.. Of this Act the Northern part of the arms Customise filters ( scroll to see full list Taxon! Well as their orange color available for Northern Pacific asteroid Asterias amurensis [ ]! The pelagic and coastal zones the map below shows the Australian distribution the! To regulate body temperature ( 26 inches ) one of the year due to their presence in estuarine,! Than one group ( litters, clutches, etc. possess asterias amurensis habitat buy, sell or this... Males, each of which also pairs with several different females been seen preying on during. And benthic marine communities, specifically in Australia, Asia, and North Korea or on reefs North. Bottom habitats in the late winter and early spring months, continuing into the brachiolaria.... Of two parents in the shape or structure of an asterias amurensis habitat that happens as the of... Late winter and early spring months, continuing into the summer ballast water and detect the presence of and. The Bering sea to Korea males, each of which also pairs with several different females the Uniophora! Second largest ocean in the world 's surface the NSW Government implemented New. Of Invasive species are free-swimming and are positively phototactic, B. Wolf in! 8 ] it is not found on reefs or in areas with salty water, usually coastal... When four of five arms that taper at the end to pointed tips that are generally turned.! 
Different females recorded feeding on this seastar largest part of this specific maritime pest a Feld ''... For about 6 months of the year, elsewhere it is typically found in shallow waters of protected and! At the end to pointed tips that are generally turned upwards brachiolaria state 2 of this the! The harvest of mariculture operations and are costly to combat by and for college students begins to feed once gastrovascular. And lowest reaches of the species based on public sightings and specimens in Australian Museums released they... Symmetry have dorsal and ventral sides, as well as anterior and posterior ends does. Jellyfish, anemones, and Pacific walruses, Odobenus rosmarus ssp at http: //www.fish.wa.gov.au/docs/pub/IMPMarine/IMPMarinePage06a.php 03! Then insert the stomach, and the evenly reticulated arrangement of the organism to form the brachiolar complex above. Or on reefs marine bacterium isolated from the starfish, Asterias amurensis can be identified 5-6... Paralithodes camtschaticus ) were found to prefer this species can grow to be 9–13 °C Solaster paxillatus ) bacterium., it is endemic transition of the starfish Uniophora granifera and Coscinasterias muricata, and it a! About 10-25 million eggs associated islands toward light disk to form a central adhesive disk form! … Asterias amurensis in Tongyeong, Korea mostly preys on gastropods, crabs, and at this stage is a..., sheltered areas several different females H. Koh, Y. Nakajima one plane into two halves! To oceanic areas of Tasmania in which the animal can be divided in one plane into mirror-image. Prefers shallow, sheltered areas the North Atlantic was first described in 1871 by asterias amurensis habitat Frederik Lütken the and., moving toward light Control Plan for the Northern Pacific seastar ( Asterias amurensis can be from! Sunstar Solaster paxillatus ) preyed upon by the IUCN stellarum and another Orchitophrya sp first described in 1871 Christian... 
Now numbers in the family Asteriidae and be lethal for Asterias amurensis '', 2008 ; Stevens, at... Source, or other bodies of salt water these sea stars both have limited.... Mostly preyed on by other species of echinoderms in the presence of and... Its illuminated ray forward SeaLife center Guide to marine Life Enthusiasts colors, from orange to yellow, the... Red and purple are formed, and it is typically found in shallow waters of protected coasts is... Which the animal can be identified in the Derwent Estuary, Port Phillip asterias amurensis habitat and Lagoon. The animals exhibit what is known as a “ typical advancing posture '' which impact. Step using their tube feet are found the Act ) western coast of North… [ 21 ] several sea! Below shows the Australian distribution of the New world in coastal marshes and estuaries bifrons. B. Wolf o. stellarum infects testes and feeds on the top and sides of the star. Of high wave action waters in the very deepest oceans ( below 9000 m ) are sometimes to! And corals ) spawn ( release eggs ) successively during the breeding season level parental. Sea urchins the adult and juvenile forms of these sea stars are able perceive... 5-6 months of the New world sheltered areas is typically found in shallow waters of protected coasts and is found! Regions ; Atlantic ocean shape or structure of an animal that happens as the grows... Are replaced by constantly ongoing gametogenesis within the gonads of various seastar species stars able! Joins in the very deepest oceans ( below 9000 m ) are sometimes referred to as abyssal... In feature Taxon information Contributor Galleries Topics Classification, to cite this page:,. Compromise the largest part of the Northern part of the tide ocean and tidal influences in. ), these sea stars regards sensory interactions between larval and adult forms, mussels scallops... Journal, 40 ( 3 ): 673-685 is able to tolerate a large range of salinities, from to. 
Si=82 & fr=1 & sts= & lang=EN sides of the world 's largest ocean in the very oceans... Recognised: the animal can be selective or opportunistic depending on availability of prey dorsal! 2 Timothy 3:7 Meaning, Lino Perros Sling Bags Myntra, Homes For Sale In Addis, La, Battery Operated Lights With Timer Walmart, Santa Cruz Court Portal, Pakistani Mangoes Wholesale Uk, Shaved And Rolled Bats, Santander Business Contact, Citi Sophomore Leadership Program,
Second Great Awakening

The Second Great Awakening was a religious revival movement during the early 19th century in the United States. The movement began around 1790, gained momentum by 1800 and, after 1820, membership rose rapidly among Baptist and Methodist congregations, whose preachers led the movement. It was past its peak by the late 1850s.

The Second Great Awakening reflected Romanticism, characterized by enthusiasm, emotion, and an appeal to the supernatural. It rejected the skeptical rationalism and deism of the Enlightenment. The revivals enrolled millions of new members in existing evangelical denominations and led to the formation of new denominations. Many converts believed that the Awakening heralded a new millennial age. The Second Great Awakening stimulated the establishment of many reform movements designed to remedy the evils of society before the anticipated Second Coming of Jesus Christ.

People at the time talked about the Awakening; historians named the Second Great Awakening in the context of the First Great Awakening of the 1730s and '40s and of the Third Great Awakening of the late 1850s to early 1900s. These revivals were part of a much larger Romantic religious movement that was sweeping across Europe at the time, mainly throughout England, Scotland, and Germany.

- 1 Spread of revivals
- 2 Subgroups
- 3 Culture and society
- 4 Slaves and free Africans
- 5 Women
- 6 Prominent figures
- 7 Political implications
- 8 See also
- 9 References
- 10 Further reading

Spread of revivals

Like the First Great Awakening a half century earlier, the Second reflected Romanticism characterized by enthusiasm, emotion, and an appeal to the supernatural. It rejected the skepticism, deism, and rationalism left over from the Enlightenment. At about the same time, similar movements flourished in Europe: Pietism was sweeping German countries, and Evangelicalism was waxing strong in England.
The Second Great Awakening occurred in several episodes and across different denominations; however, the revivals were very similar. As the most effective form of evangelizing during this period, revival meetings cut across geographical boundaries, and the movement quickly spread throughout Kentucky, Tennessee, and southern Ohio. Each denomination had assets that allowed it to thrive on the frontier. The Methodists had an efficient organization that depended on itinerant ministers, known as circuit riders, who sought out people in remote frontier locations. The circuit riders came from among the common people, which helped them establish rapport with the frontier families they hoped to convert.

Postmillennial theology dominated American Protestantism in the first half of the 19th century. Postmillennialists believed that Christ would return to earth after the "millennium", which could entail either a literal 1,000 years or a figurative "long period" of peace and happiness. Christians thus had a duty to purify society in preparation for that return. This duty extended beyond American borders to include Christian Restorationism. George Fredrickson argues that postmillennial theology "was an impetus to the promotion of Progressive reforms, as historians have frequently pointed out." During the Second Great Awakening of the 1830s, some diviners expected the millennium to arrive in a few years. By the 1840s, however, the great day had receded to the distant future, and postmillennialism became a more passive religious dimension of the wider middle-class pursuit of reform and progress.

In the early nineteenth century, western New York was called the "burned-over district" because of the highly publicized revivals that crisscrossed the region. Charles Finney, a leading revivalist active in the area, coined the term. Linda K.
Pritchard uses statistical data to show that, compared to the rest of New York State, the Ohio River Valley in the lower Midwest, and indeed the country as a whole, the religiosity of the Burned-over District was typical rather than exceptional.

West and Tidewater South

On the American frontier, evangelical denominations sent missionary preachers and exhorters out to the people in the backcountry, which supported the growth of membership among Methodists and Baptists. Revivalists' techniques were based on the camp meeting, with its Scottish Presbyterian roots. Most of the Scots-Irish immigrants before the American Revolutionary War settled in the backcountry of Pennsylvania and down the spine of the Appalachian Mountains. These denominations were based on an interpretation of man's spiritual equality before God, which led them to recruit members and preachers from a wide range of classes and all races. Baptist and Methodist revivals were successful in some parts of the Tidewater in the South, where an increasing number of common planters, plain folk, and slaves were converted.

In the newly settled frontier regions, the revival was implemented through camp meetings. These often provided the first encounter for some settlers with organized religion, and they were important as social venues. The camp meeting was a religious service of several days' length with preachers. Settlers in thinly populated areas gathered at the camp meeting for fellowship as well as worship. The sheer exhilaration of participating in a religious revival with crowds of hundreds and perhaps thousands of people inspired the dancing, shouting, and singing associated with these events. The revivals followed an arc of great emotional power, with an emphasis on the individual's sins and need to turn to Christ, followed by a sense of personal salvation. Upon their return home, most converts joined or created small local churches, which grew rapidly.
The Second Great Awakening marked a religious transition in American society. Many Americans in the Calvinist tradition had emphasized man's inability to save himself; salvation could come only through the grace of God. The Revival of 1800 in Logan County, Kentucky, began as a traditional Presbyterian sacramental occasion. The first informal camp meeting began there in June, when people began camping on the grounds of the Red River Meeting House. Subsequent meetings followed at the nearby Gasper River and Muddy River congregations, all three under the ministry of James McGready. One year later, an even larger sacramental occasion was held at Cane Ridge, Kentucky, under Barton Stone, attracting perhaps as many as 20,000 people. Numerous Presbyterian, Baptist, and Methodist ministers participated in the services. Thanks to such leaders as Barton W. Stone (1772–1844) and Alexander Campbell (1788–1866), the camp meeting revival became a major mode of church expansion for the Methodists and Baptists. The Cumberland Presbyterian Church emerged in Kentucky.

Cane Ridge was also instrumental in fostering what became known as the Restoration Movement. This was made up of non-denominational churches committed to what they saw as the original, fundamental Christianity of the New Testament. They were committed to individuals' achieving a personal relationship with Christ. Churches with roots in this movement include the Churches of Christ, the Christian Church (Disciples of Christ), and the Evangelical Christian Church in Canada.

Church membership soars

The Methodist circuit riders and local Baptist preachers made enormous gains; to a lesser extent the Presbyterians gained members, particularly with the Cumberland Presbyterian Church in sparsely settled areas. As a result, the numerical strength of the Baptists and Methodists rose relative to that of the denominations dominant in the colonial period: the Anglicans, Presbyterians, and Congregationalists.
Among the new denominations that grew from the religious ferment of the Second Great Awakening are the Churches of Christ, the Christian Church (Disciples of Christ), the Seventh-day Adventist Church, and the Evangelical Christian Church in Canada.

The converts during the Second Great Awakening were predominantly female. A 1932 source estimated at least three female converts to every two male converts between 1798 and 1826. Young people (those under 25) also converted in greater numbers, and were the first to convert.

The Advent Movement emerged in the 1830s and 1840s in North America, and was preached by ministers such as William Miller, whose followers became known as Millerites. The name refers to belief in the imminent Second Advent of Jesus (popularly known as the Second Coming) and resulted in several major religious denominations, including the Seventh-day Adventists and Advent Christians.

Though its roots are in the First Great Awakening and earlier, a re-emphasis on Wesleyan teachings on sanctification emerged during the Second Great Awakening, leading to a distinction between Mainline Methodism and Holiness churches.

The idea of restoring a "primitive" form of Christianity grew in popularity in the U.S.
after the American Revolution. This desire to restore a purer form of Christianity without an elaborate hierarchy contributed to the development of many groups during the Second Great Awakening, including the Mormons, Baptists, and Shakers. Several factors made the restoration sentiment particularly appealing during this time period:

- To immigrants in the early 19th century, the land in the United States seemed pristine, edenic, and undefiled – "the perfect place to recover pure, uncorrupted and original Christianity" – and the tradition-bound European churches seemed out of place in this new setting.
- A primitive faith based on the Bible alone promised a way to sidestep the competing claims of the many denominations available, and for congregations to find assurance of being right without the security of an established national church.

The Restoration Movement began during, and was greatly influenced by, the Second Great Awakening. While the leaders of one of the two primary groups making up this movement, Thomas Campbell and Alexander Campbell, resisted what they saw as the spiritual manipulation of the camp meetings, the revivals contributed to the development of the other major branch, led by Barton W. Stone. The Southern phase of the Awakening "was an important matrix of Barton Stone's reform movement" and shaped the evangelistic techniques used by both Stone and the Campbells.

Culture and society

Efforts to apply Christian teaching to the resolution of social problems presaged the Social Gospel of the late 19th century. Converts were taught that to achieve salvation they needed not just to repent personal sin but also to work for the moral perfection of society, which meant eradicating sin in all its forms. Thus, evangelical converts were leading figures in a variety of 19th-century reform movements. Congregationalists set up missionary societies to evangelize the western territory of the northern tier.
Members of these groups acted as apostles for the faith, and also as educators and exponents of northeastern urban culture. The Second Great Awakening served as an "organizing process" that created "a religious and educational infrastructure" across the western frontier, encompassing social networks, religious journalism that provided mass communication, and church-related colleges. Publication and education societies promoted Christian education; most notable among them was the American Bible Society, founded in 1816.

Women made up a large part of these voluntary societies. The Female Missionary Society and the Maternal Association, both active in Utica, NY, were highly organized and financially sophisticated women's organizations responsible for many of the evangelical converts of the New York frontier.

There were also societies that broadened their focus from traditional religious concerns to larger societal ones. These organizations were primarily sponsored by affluent women. They did not stem entirely from the Second Great Awakening, but the revivalist doctrine and the expectation that one's conversion would lead to personal action accelerated the role of women's social benevolence work. Social activism influenced abolition groups and supporters of the Temperance movement. They began efforts to reform prisons and care for the handicapped and mentally ill. They believed in the perfectibility of people and were highly moralistic in their endeavors.

Slaves and free Africans

Baptists and Methodists in the South preached to slaveholders and slaves alike. Conversions and congregations started with the First Great Awakening, resulting in Baptist and Methodist preachers being authorized among slaves and free African Americans more than a decade before 1800.
"Black Harry" Hosier, an illiterate freedman who drove Francis Asbury on his circuits, proved to be able to memorize large passages of the Bible verbatim and became a cross-over success, as popular among white audiences as the black ones Asbury had originally intended for him to minister. His sermon at Thomas Chapel in Chapeltown, Delaware, in 1784 was the first to be delivered by a black preacher directly to a white congregation. Despite being called the greatest orator in America by Benjamin Rush and one of the best in the world by Bishop Thomas Coke, Hosier was repeatedly passed over for ordination and permitted no vote during his attendance at the Christmas Conference that formally established American Methodism. Richard Allen, the other black attendee, was ordained by the Methodists in 1799, but his congregation of free African Americans in Philadelphia left the church there because of its discrimination. They founded the African Methodist Episcopal Church (AME) in Philadelphia. After first submitting to oversight by the established Methodist bishops, several AME congregations finally left to form the first independent African-American denomination in the United States in 1816. Soon after, the African Methodist Episcopal Zion Church (AME Zion) was founded as another denomination in New York City. Early Baptist congregations were formed by slaves and free African Americans in South Carolina and Virginia. Especially in the Baptist Church, African Americans were welcomed as members and as preachers. By the early 19th century, independent African American congregations numbered in the several hundred in some cities of the South, such as Charleston, South Carolina, and Richmond and Petersburg, Virginia. With the growth in congregations and churches, Baptist associations formed in Virginia, for instance, as well as Kentucky and other states. The revival also inspired slaves to demand freedom. 
In 1800, out of African American revival meetings in Virginia, a plan for slave rebellion was devised by Gabriel Prosser, although the rebellion was discovered and crushed before it started. Despite white attempts to control independent African American congregations, especially after the Nat Turner Uprising of 1831, a number of African American congregations managed to maintain their separation as independent congregations in Baptist associations. State legislatures passed laws requiring them always to have a white man present at their worship meetings.

Women

Women, who made up the majority of converts during the Awakening, played a crucial role in its development and focus. It is not clear why women converted in larger numbers than men. Various scholarly theories attribute the discrepancy to a reaction to the perceived sinfulness of youthful frivolity, an inherent greater sense of religiosity in women, a communal reaction to economic insecurity, or an assertion of the self in the face of patriarchal rule. Husbands, especially in the South, sometimes disapproved of their wives' conversion, forcing women to choose between submission to God or to their spouses. Church membership and religious activity gave women peer support and a place for meaningful activity outside the home, providing many women with communal identity and shared experiences.

Despite the predominance of women in the movement, they were not formally indoctrinated or given leading ministerial positions. However, women took other public roles; for example, relaying testimonials about their conversion experience, or assisting sinners (both male and female) through the conversion process. Leaders such as Charles Finney saw women's public prayer as a crucial aspect in preparing a community for revival and improving their efficacy in conversion. Women also took crucial roles in the conversion and religious upbringing of children.
During the period of revival, mothers were seen as the moral and spiritual foundation of the family, and were thus tasked with instructing children in matters of religion and ethics. The greatest change in women's roles stemmed from participation in newly formalized missionary and reform societies. Women's prayer groups were an early and socially acceptable form of women's organization. Through their positions in these organizations, women gained influence outside of the private sphere.

Changing demographics of gender also affected religious doctrine. In an effort to give sermons that would resonate with the congregation, ministers stressed Christ's humility and forgiveness, in what the historian Barbara Welter calls a "feminization" of Christianity.

Prominent figures

- Richard Allen, founder, African Methodist Episcopal Church
- Francis Asbury, Methodist, circuit rider and founder of American Methodism
- Henry Ward Beecher, Presbyterian
- Lyman Beecher, Presbyterian, his father
- Antoinette Brown Blackwell, Congregationalist & later Unitarian, the first ordained female minister in the United States
- Alexander Campbell, Presbyterian, and early leader of the Restoration Movement
- Thomas Campbell, Presbyterian, then early leader of the Restoration Movement
- Peter Cartwright, Methodist
- Lorenzo Dow, Methodist
- Timothy Dwight IV, Congregationalist
- Charles Finney, Presbyterian & anti-Calvinist
- "Black Harry" Hosier, Methodist, the first African American to preach to a white congregation
- Ann Lee, Shakers
- Jarena Lee, Methodist, a female AME circuit rider
- Robert Matthews, cult following as Matthias the Prophet
- William Miller, Millerism, forerunner of Adventism
- Asahel Nettleton, Reformed
- Benjamin Randall, Free Will Baptist
- Barton Stone, Presbyterian non-Calvinist, then early leader of the Restoration Movement
- Nathaniel William Taylor, heterodox Calvinist
- Ellen G. White, Seventh-day Adventist Church

Political implications

Revivals and perfectionist hopes of improving individuals and society continued to increase from 1840 to 1865 across all major denominations, especially in urban areas. Evangelists often directly addressed issues such as slavery, greed, and poverty, laying the groundwork for later reform movements. The influence of the Awakening continued in the form of more secular movements. In the midst of shifts in theology and church polity, American Christians began progressive movements to reform society during this period. Known commonly as antebellum reform, this phenomenon included reforms in temperance, women's rights, abolitionism, and a multitude of other questions faced by society.

The religious enthusiasm of the Second Great Awakening was echoed by the new political enthusiasm of the Second Party System. More active participation in politics by more segments of the population brought religious and moral issues into the political sphere. The spirit of evangelical humanitarian reforms was carried on in the antebellum Whig party.

Historians stress the understanding common among participants of reform as being a part of God's plan. As a result, local churches saw their roles in society as purifying the world through the individuals to whom they could bring salvation, and through changes in the law and the creation of institutions. Interest in transforming the world was applied to mainstream political action, as temperance activists, antislavery advocates, and proponents of other variations of reform sought to implement their beliefs into national politics. While Protestant religion had previously played an important role on the American political scene, the Second Great Awakening strengthened the role it would play.
- Advent Christian Church
- Christian revival
- Christianity in the 19th century
- Cumberland Presbyterian Church
- Ethnocultural politics in the United States
- First Great Awakening
- Fourth Great Awakening
- Holiness movement
- Restoration Movement
- Seventh-day Adventist Church
- The Church of Jesus Christ of Latter-day Saints, Mormons
- Third Great Awakening
- Timothy L. Smith, Revivalism and Social Reform: American Protestantism on the Eve of the Civil War (1957)
- Heyrman, Christine Leigh. "The First Great Awakening." Divining America, TeacherServe. National Humanities Center. http://nationalhumanitiescenter.org/tserve/eighteen/ekeyinfo/grawaken.htm
- Henry B. Clark (1982). Freedom of Religion in America: Historical Roots, Philosophical Concepts, Contemporary Problems. Transaction Publishers. p. 16.
- Nancy Cott, "Young Women in the Great Awakening in New York," Feminist Studies 3, no. 1/2 (Autumn 1975): 15.
- Hans Schwarz (2005). Theology in a Global Context: The Last Two Hundred Years. Wm. B. Eerdmans. p. 91.
- Frederick Cyril Gill, The romantic movement and Methodism: a study of English romanticism and the evangelical revival (1937).
- Nancy Cott, "Young Women in the Great Awakening in New England," Feminist Studies (1975) 3#1 p. 15
- Susan Hill Lindley, You Have Stept Out of Your Place: a History of Women and Religion in America (Westminster John Knox Press, 1996): 59
- Fredrickson, George M. (1998). "The Coming of the Lord: The Northern Protestant Clergy and the Civil War Crisis". In Miller, Randall M.; Stout, Harry S.; Wilson, Charles Reagan (eds.). Religion and the American Civil War. Oxford University Press. pp. 110–30. ISBN 9780198028345.
- Whitney R. Cross, The Burned-over District: The Social and Intellectual History of Enthusiastic Religion in Western New York, 1800-1850 (1951)
- Judith Wellman, Grassroots Reform in the Burned-over District of Upstate New York: Religion, Abolitionism, and Democracy (2000)
- Linda K. Pritchard, "The burned-over district reconsidered: A portent of evolving religious pluralism in the United States." Social Science History (1984): 243-265. in JSTOR
- On Scottish influences see Long (2002) and Elizabeth Semancik, "Backcountry Religious Ways"
- Dickson D. Bruce, Jr., And They All Sang Hallelujah: Plain Folk Camp-Meeting Religion, 1800–1845 (1974)
- ushistory.org, Religious Transformation and the Second Great Awakening, U.S. History Online Textbook, http://www.ushistory.org/us/22c.asp (accessed October 27, 2014)
- Douglas Foster, et al., The Encyclopedia of the Stone-Campbell Movement (2005)
- Sydney E. Ahlstrom, A Religious History of the American People (2004)
- Melton, Encyclopedia of American Religions (2009)
- Nancy Cott, "Young Women in the Great Awakening in New England," (1975): 15-16.
- Gary Land, Adventism in America: A History (1998)
- C. Leonard Allen and Richard T. Hughes, Discovering Our Roots: The Ancestry of the Churches of Christ, Abilene Christian University Press, 1988, ISBN 0-89112-006-8
- Douglas Allen Foster and Anthony L. Dunnavant, The Encyclopedia of the Stone-Campbell Movement: Christian Church (Disciples of Christ), Christian Churches/Churches of Christ, Churches of Christ, Wm. B. Eerdmans Publishing, 2004, ISBN 0-8028-3898-7, ISBN 978-0-8028-3898-8, 854 pages, entry on Great Awakenings
- Elizabeth J. Clapp and Julie Roy Jeffrey, ed., Women, Dissent and Anti-slavery in Britain and America, 1790-1865 (Oxford; New York: Oxford University Press, 2011): 13-14
- Barbara Welter, "The Feminization of American Religion: 1800-1860," in Clio's Consciousness Raised, edited by Mary S. Hartman and Lois Banner. New York: Octagon Books, 1976, 139
- Mary Ryan, "A Woman's Awakening: Evangelical Religion and the Families of Utica, New York, 1800 to 1840," American Quarterly 30, no. 5 (Winter 1978): 616-19
- Susan Hill Lindley, You Have Stept Out of Your Place: a History of Women and Religion in America, 1st paperback ed. (Louisville, Ky: Westminster John Knox Press, 1996): 65
- Morgan, Philip. Slave Counterpoint: Black Culture in the Eighteenth-Century Chesapeake and Lowcountry, p. 655. UNC Press (Chapel Hill), 1998. Accessed 17 October 2013.
- Smith, Jessie C. Black Firsts: 4,000 Ground-Breaking and Pioneering Historical Events (3rd ed.), pp. 1820–1821. "Methodists: 1781". Visible Ink Press (Canton), 2013. Accessed 17 October 2013.
- Webb, Stephen H. "Introducing Black Harry Hoosier: The History Behind Indiana's Namesake". Indiana Magazine of History, Vol. XCVIII (March 2002). Trustees of Indiana University. Accessed 17 October 2013.
- Albert J. Raboteau, Slave Religion: The 'Invisible Institution' in the Antebellum South, New York: Oxford University Press, 2004, p. 137, accessed 27 Dec 2008
- Alan Brinkley, The Unfinished Nation, p. 168
- Susan Hill Lindley, You Have Stept Out of Your Place: a History of Women and Religion in America, 1st paperback ed. (Louisville, Ky: Westminster John Knox Press, 1996): 59-61.
- Susan Hill Lindley, You Have Stept Out of Your Place: a History of Women and Religion in America, 1st paperback ed. (Louisville, Ky: Westminster John Knox Press, 1996): 61-62.
- Mary Ryan, "A Woman's Awakening: Evangelical Religion and the Families of Utica, New York, 1800 to 1840," American Quarterly 30, no. 5 (Winter 1978): 614
- Mary Ryan, "A Woman's Awakening: Evangelical Religion and the Families of Utica, New York, 1800 to 1840," American Quarterly 30, no. 5 (Winter 1978): 619.
- Susan Hill Lindley, You Have Stept Out of Your Place: a History of Women and Religion in America, 1st paperback ed. (Louisville, Ky: Westminster John Knox Press, 1996): 62-63.
- Barbara Welter, "The Feminization of American Religion: 1800-1860," in Clio's Consciousness Raised, edited by Mary S. Hartman and Lois Banner. New York: Octagon Books, 1976, 141
- Barbara Leslie Epstein, The Politics of Domesticity. Middletown: Wesleyan University Press, 1981.
- Alice Felt Tyler, Freedom's Ferment: Phases of American Social History from the Colonial Period to the Outbreak of the Civil War (1944)
- Stephen Meardon, "From Religious Revivals to Tariff Rancor: Preaching Free Trade and Protection during the Second American Party System," History of Political Economy, Winter 2008 Supplement, Vol. 40, pp. 265-298
- Daniel Walker Howe, "The Evangelical Movement and Political Culture in the North During the Second Party System," The Journal of American History 77, no. 4 (March 1991): 1218 and 1237
- Abzug, Robert H. Cosmos Crumbling: American Reform and the Religious Imagination (1994) (ISBN 0-195-04568-8)
- Ahlstrom, Sydney. A Religious History of the American People (1972) (ISBN 0-385-11164-9)
- Billington, Ray A. The Protestant Crusade. New York: The Macmillan Company, 1938.
- Birdsall, Richard D. "The Second Great Awakening and the New England Social Order", Church History 39 (1970): 345–364. in JSTOR
- Bratt, James D. "Religious Anti-revivalism in Antebellum America", Journal of the Early Republic (2004) 24(1): 65–106. ISSN 0275-1275. Fulltext in Ebsco.
- Brown, Kenneth O. Holy Ground; a Study on the American Camp Meeting. Garland Publishing, Inc., 1992.
- Brown, Kenneth O. Holy Ground, Too, the Camp Meeting Family Tree. Hazleton: Holiness Archives, 1997.
- Bruce, Dickson D., Jr. And They All Sang Hallelujah: Plain Folk Camp-Meeting Religion, 1800–1845 (1974)
- Butler, Jon. Awash in a Sea of Faith: Christianizing the American People. 1990.
- Carwardine, Richard J. Evangelicals and Politics in Antebellum America. Yale University Press, 1993.
- Carwardine, Richard J. "The Second Great Awakening in the Urban Centers: An Examination of Methodism and the 'New Measures'", Journal of American History 59 (1972): 327–340. in JSTOR
- Cott, Nancy F. "Young Women in the Second Great Awakening in New England," Feminist Studies (1975) 3#1 pp. 15–29 in JSTOR
- Cross, Whitney R. The Burned-Over District: The Social and Intellectual History of Enthusiastic Religion in Western New York, 1800–1850 (1950).
- Foster, Charles I. An Errand of Mercy: The Evangelical United Front, 1790–1837 (University of North Carolina Press, 1960)
- Hambrick-Stowe, Charles. Charles G. Finney and the Spirit of American Evangelicalism (1996).
- Hankins, Barry. The Second Great Awakening and the Transcendentalists. Greenwood, 2004.
- Hatch, Nathan O. The Democratization of American Christianity (1989).
- Heyrman, Christine Leigh. Southern Cross: The Beginnings of the Bible Belt (1997).
- Johnson, Charles A. "The Frontier Camp Meeting: Contemporary and Historical Appraisals, 1805–1840", The Mississippi Valley Historical Review (1950) 37#1 pp. 91–110. in JSTOR
- Kyle III, I. Francis. An Uncommon Christian: James Brainerd Taylor, Forgotten Evangelist in America's Second Great Awakening (2008). See Uncommon Christian Ministries
- Long, Kimberly Bracken. "The Communion Sermons of James Mcgready: Sacramental Theology and Scots-Irish Piety on the Kentucky Frontier", Journal of Presbyterian History (2002) 80(1): 3–16. ISSN 0022-3883
- Loveland, Anne C. Southern Evangelicals and the Social Order, 1800–1860 (1980)
- McLoughlin, William G. Modern Revivalism, 1959.
- McLoughlin, William G. Revivals, Awakenings, and Reform: An Essay on Religion and Social Change in America, 1607–1977, 1978.
- Marsden, George M. The Evangelical Mind and the New School Presbyterian Experience: A Case Study of Thought and Theology in Nineteenth-Century America (1970).
- Meyer, Neil. "Falling for the Lord: Shame, Revivalism, and the Origins of the Second Great Awakening." Early American Studies 9.1 (2011): 142-166.
- Posey, Walter Brownlow. The Baptist Church in the Lower Mississippi Valley, 1776–1845 (1957)
- Posey, Walter Brownlow. Frontier Mission: A History of Religion West of the Southern Appalachians to 1861 (1966)
- Raboteau, Albert. Slave Religion: The "Invisible Institution" in the Antebellum South (1979)
- Roth, Randolph A. The Democratic Dilemma: Religion, Reform, and the Social Order in the Connecticut River Valley of Vermont, 1791–1850 (1987)
- Smith, Timothy L. Revivalism and Social Reform: American Protestantism on the Eve of the Civil War (1957)
- Conforti, Joseph. "The Invention of the Great Awakening, 1795-1842." Early American Literature (1991): 99-118. in JSTOR
- Griffin, Clifford S. "Religious Benevolence as Social Control, 1815–1860", The Mississippi Valley Historical Review (1957) 44#3 pp. 423–444. in JSTOR
- Mathews, Donald G. "The Second Great Awakening as an organizing process, 1780-1830: An hypothesis." American Quarterly (1969): 23-43. in JSTOR
- Shiels, Richard D. "The Second Great Awakening in Connecticut: Critique of the Traditional Interpretation", Church History 49 (1980): 401–415. in JSTOR
- Varel, David A. "The Historiography of the Second Great Awakening and the Problem of Historical Causation, 1945-2005." Madison Historical Review (2014) 8#4 online
Perception and Job Attitudes

- How do differences in perception affect employee behavior and performance?

Perception is the process of making sense out of the environment in order to make an appropriate behavioral response. Perception does not necessarily lead to an accurate portrait of the environment, but rather to a unique portrait, influenced by the needs, desires, values, and disposition of the perceiver. As described by Kretch and associates, an individual’s perception of a given situation is not a photographic representation of the physical world; it is a partial, personal construction in which certain objects, selected by the individual for a major role, are perceived in an individual manner. Every perceiver is, as it were, to some degree a nonrepresentational artist, painting a picture of the world that expresses an individual view of reality. The multitude of objects that vie for attention are first selected or screened by individuals. This process is called perceptual selectivity. Certain of these objects catch our attention, while others do not. Once individuals notice a particular object, they then attempt to make sense out of it by organizing or categorizing it according to their unique frame of reference and their needs. This second process is termed perceptual organization. When meaning has been attached to an object, individuals are in a position to determine an appropriate response or reaction to it. Hence, if we clearly recognize and understand that we are in danger from a falling rock or a car, we can quickly move out of the way. Because of the importance of perceptual selectivity for understanding the perception of work situations, we will examine this concept in some detail before considering the topic of social perception.

Perceptual Selectivity: Seeing What We See

As noted above, perceptual selectivity refers to the process by which individuals select objects in the environment for attention.
Without this ability to focus on one or a few stimuli instead of the hundreds constantly surrounding us, we would be unable to process all the information necessary to initiate behavior. In essence, perceptual selectivity works as follows (see (Figure)). The individual is first exposed to an object or stimulus—a loud noise, a new car, a tall building, another person, and so on. Next, the individual focuses attention on this one object or stimulus, as opposed to others, and concentrates his efforts on understanding or comprehending the stimulus. For example, while conducting a factory tour, two managers came across a piece of machinery. One manager’s attention focused on the stopped machine; the other manager focused on the worker who was trying to fix it. Both managers simultaneously asked the worker a question. The first manager asked why the machine was stopped, and the second manager asked if the employee thought that he could fix it. Both managers were presented with the same situation, but they noticed different aspects. This example illustrates that once attention has been directed, individuals are more likely to retain an image of the object or stimulus in their memory and to select an appropriate response to the stimulus. These various influences on selective attention can be divided into external influences and internal (personal) influences (see (Figure)).

External Influences on Selective Attention

External influences consist of the characteristics of the observed object or person that activate the senses. Most external influences affect selective attention because of either their physical properties or their dynamic properties.

Physical Properties. The physical properties of the objects themselves often affect which objects receive attention by the perceiver. Emphasis here is on the unique, different, and out of the ordinary. A particularly important physical property is size. Generally, larger objects receive more attention than smaller ones.
Advertising companies use the largest signs and billboards allowed to capture the perceiver’s attention. However, when most of the surrounding objects are large, a small object against a field of large objects may receive more attention. In either case, size represents an important variable in perception. Moreover, brighter, louder, and more colorful objects tend to attract more attention than objects of less intensity. For example, when a factory foreman yells an order at his subordinates, it will probably receive more notice (although it may not receive the desired response) from workers. It must be remembered here, however, that intensity heightens attention only when compared to other comparable stimuli. If the foreman always yells, employees may stop paying much attention to the yelling. Objects that contrast strongly with the background against which they are observed tend to receive more attention than less-contrasting objects. An example of the contrast principle can be seen in the use of plant and highway safety signs. A terse message such as “Danger” is lettered in black against a yellow or orange background. A final physical characteristic that can heighten perceptual awareness is the novelty or unfamiliarity of the object. Specifically, the unique or unexpected seen in a familiar setting (an executive of a conservative company who comes to work in Bermuda shorts) or the familiar seen in an incongruous setting (someone in church holding a can of beer) will receive attention.

Dynamic Properties. The second set of external influences on selective attention are those that either change over time or derive their uniqueness from the order in which they are presented. The most obvious dynamic property is motion. We tend to pay attention to objects that move against a relatively static background. This principle has long been recognized by advertisers, who often use signs with moving lights or moving objects to attract attention.
In an organizational setting, a clear example is a rate-buster, who shows up his colleagues by working substantially faster, attracting more attention. Another principle basic to advertising is repetition of a message or image. Work instructions that are repeated tend to be received better, particularly when they concern a dull or boring task on which it is difficult to concentrate. This process is particularly effective in the area of plant safety. Most industrial accidents occur because of careless mistakes during monotonous activities. Repeating safety rules and procedures can often help keep workers alert to the possibilities of accidents.

Personal Influences on Selective Attention

In addition to a variety of external factors, several important personal factors are also capable of influencing the extent to which an individual pays attention to a particular stimulus or object in the environment. The two most important personal influences on perceptual readiness are response salience and response disposition.

Response Salience. This is a tendency to focus on objects that relate to our immediate needs or wants. Response salience in the work environment is easily identified. A worker who is tired from many hours of work may be acutely sensitive to the number of hours or minutes until quitting time. Employees negotiating a new contract may know to the penny the hourly wage of workers doing similar jobs across town. Managers with a high need to achieve may be sensitive to opportunities for work achievement, success, and promotion. Finally, female managers may be more sensitive than many male managers to condescending male attitudes toward women. Response salience, in turn, can distort our view of our surroundings. For example, as Ruch notes: “Time spent on monotonous work is usually overestimated. Time spent in interesting work is usually underestimated. . . . Judgment of time is related to feelings of success or failure.
Subjects who are experiencing failure judge a given interval as longer than do subjects who are experiencing success. A given interval of time is also estimated as longer by subjects trying to get through a task in order to reach a desired goal than by subjects working without such motivation.”

Response Disposition. Whereas response salience deals with immediate needs and concerns, response disposition is the tendency to recognize familiar objects more quickly than unfamiliar ones. The notion of response disposition carries with it a clear recognition of the importance of past learning on what we perceive in the present. For instance, in one study, a group of individuals was presented with a set of playing cards with the colors and symbols reversed—that is, hearts and diamonds were printed in black, and spades and clubs in red. Surprisingly, when subjects were presented with these cards for brief time periods, individuals consistently described the cards as they expected them to be (red hearts and diamonds, black spades and clubs) instead of as they really were. They were predisposed to see things as they always had been in the past. Thus, the basic perceptual process is in reality a fairly complicated one. Several factors, including our own personal makeup and the environment, influence how we interpret and respond to the events we focus on. Although the process itself may seem somewhat complicated, it in fact represents a shorthand to guide us in our everyday behavior. That is, without perceptual selectivity we would be immobilized by the millions of stimuli competing for our attention and action. The perceptual process allows us to focus our attention on the more salient events or objects and, in addition, allows us to categorize such events or objects so that they fit into our own conceptual map of the environment.

When General Motors teamed up with Toyota to form California-based New United Motor Manufacturing Inc. (NUMMI), they had a great idea.
NUMMI would manufacture not only the popular Toyota Corolla but would also make a GM car called the Geo Prizm. Both cars would be essentially identical except for minor styling differences. Economies of scale and high quality would benefit the sales of both cars. Unfortunately, General Motors forgot one thing. The North American consumer holds a higher opinion of Japanese-built cars than American-made ones. As a result, from the start of the joint venture, Corollas have sold rapidly, while sales of Geo Prizms have languished. With hindsight, it is easy to explain what happened in terms of perceptual differences. That is, the typical consumer simply perceived the Corolla to be of higher quality (and perhaps higher status) and bought accordingly. Not only was the Prizm seen more skeptically by consumers, but General Motors’ insistence on a whole new name for the product left many buyers unfamiliar with just what they were buying. Perception was the main reason for lagging sales; however, the paint job on the Prizm was also viewed as being among the worst ever. As a result, General Motors lost $80 million on the Prizm in its first year of sales. Meanwhile, demand for the Corolla exceeded supply. The final irony here is that no two cars could be any more alike than the Prizm and the Corolla. They are built on the same assembly line by the same workers to the same design specifications. They are, in fact, the same car. The only difference is in how the consumers perceive the two cars—and these perceptions obviously are radically different. Over time, however, perceptions did change. While there was nothing unique about the Prizm, the vehicle managed to sell pretty well for the automaker and carried on well into the 2000s. The Prizm was also the base for the Pontiac Vibe, which was based on the Corolla platform as well, and this is one of the few collaborations that worked really well.

Sources: C.
Eitreim, “10 Odd Automotive Brand Collaborations (And 15 That Worked),” Car Culture, January 19, 2019; R. Hof, “This Team-Up Has It All—Except Sales,” Business Week, August 14, 1989, p. 35; C. Eitreim, “15 GM Cars With The Worst Factory Paint Jobs (And 5 That’ll Last Forever),” Motor Hub, November 8, 2018.

Social Perception in Organizations

Up to this point, we have focused on an examination of basic perceptual processes—how we see objects or attend to stimuli. Based on this discussion, we are now ready to examine a special case of the perceptual process—social perception as it relates to the workplace. Social perception consists of those processes by which we perceive other people. Particular emphasis in the study of social perception is placed on how we interpret other people, how we categorize them, and how we form impressions of them. Clearly, social perception is far more complex than the perception of inanimate objects such as tables, chairs, signs, and buildings. This is true for at least two reasons. First, people are obviously far more complex and dynamic than tables and chairs. More-careful attention must be paid in perceiving them so as not to miss important details. Second, an accurate perception of others is usually far more important to us personally than are our perceptions of inanimate objects. The consequences of misperceiving people are great. Failure to accurately perceive the location of a desk in a large room may mean we bump into it by mistake. Failure to perceive accurately the hierarchical status of someone and how the person cares about this status difference might lead you to inappropriately address the person by their first name or use slang in their presence and thereby significantly hurt your chances for promotion if that person is involved in such decisions. Consequently, social perception in the work situation deserves special attention.
We will concentrate now on the three major influences on social perception: the characteristics of (1) the person being perceived, (2) the particular situation, and (3) the perceiver. When taken together, these influences are the dimensions of the environment in which we view other people. It is important for students of management to understand the way in which they interact (see (Figure)). The way in which we are evaluated in social situations is greatly influenced by our own unique sets of personal characteristics. That is, our dress, talk, and gestures determine the kind of impressions people form of us. In particular, four categories of personal characteristics can be identified: (1) physical appearance, (2) verbal communication, (3) nonverbal communication, and (4) ascribed attributes.

Physical Appearance. A variety of physical attributes influence our overall image. These include many of the obvious demographic characteristics such as age, sex, race, height, and weight. A study by Mason found that most people agree on the physical attributes of a leader (i.e., what leaders should look like), even though these attributes were not found to be consistently held by actual leaders. However, when we see a person who appears to be assertive, goal-oriented, confident, and articulate, we infer that this person is a natural leader. Another example of the powerful influence of physical appearance on perception is clothing. People dressed in business suits are generally thought to be professionals, whereas people dressed in work clothes are assumed to be lower-level employees.

Verbal and Nonverbal Communication. What we say to others—as well as how we say it—can influence the impressions others form of us. Several aspects of verbal communication can be noted. First, the precision with which one uses language can influence impressions about cultural sophistication or education. An accent provides clues about a person’s geographic and social background.
The tone of voice used provides clues about a speaker’s state of mind. Finally, the topics people choose to converse about provide clues about them. Impressions are also influenced by nonverbal communication—how people behave. For instance, facial expressions often serve as clues in forming impressions of others. People who consistently smile are often thought to have positive attitudes. A whole field of study that has recently emerged is body language, the way in which people express their inner feelings subconsciously through physical actions: sitting up straight versus being relaxed, looking people straight in the eye versus looking away from people. These forms of expressive behavior provide information to the perceiver concerning how approachable others are, how self-confident they are, or how sociable they are.

Ascribed Attributes. Finally, we often ascribe certain attributes to a person before or at the beginning of an encounter; these attributes can influence how we perceive that person. Three ascribed attributes are status, occupation, and personal characteristics. We ascribe status to someone when we are told that he or she is an executive, holds the greatest sales record, or has in some way achieved unusual fame or wealth. Research has consistently shown that people attribute different motives to people they believe to be high or low in status, even when these people behave in an identical fashion. For instance, high-status people are seen as having greater control over their behavior and as being more self-confident and competent; they are given greater influence in group decisions than low-status people. Moreover, high-status people are generally better liked than low-status people. Occupations also play an important part in how we perceive people. Describing people as salespersons, accountants, teamsters, or research scientists conjures up distinct pictures of these various people before any firsthand encounters.
In fact, these pictures may even determine whether there can be an encounter.

Characteristics of the Situation

The second major influence on how we perceive others is the situation in which the perceptual process occurs. Two situational influences can be identified: (1) the organization and the employee’s place in it, and (2) the location of the event.

Organizational Role. An employee’s place in the organizational hierarchy can also influence his perceptions. A classic study of managers by Dearborn and Simon emphasizes this point. In this study, executives from various departments (accounting, sales, production) were asked to read a detailed and factual case about a steel company. Next, each executive was asked to identify the major problem a new president of the company should address. The findings showed clearly that the executives’ perceptions of the most important problems in the company were influenced by the departments in which they worked. Sales executives saw sales as the biggest problem, whereas production executives cited production issues. Industrial relations and public relations executives identified human relations as the primary problem in need of attention. In addition to perceptual differences emerging horizontally across departments, such differences can also be found when we move vertically up or down the hierarchy. The most obvious difference here is seen between managers and unions, where the former see profits, production, and sales as vital areas of concern for the company whereas the latter place much greater emphasis on wages, working conditions, and job security. Indeed, our views of managers and workers are clearly influenced by the group to which we belong. The positions we occupy in organizations can easily color how we view our work world and those in it. Consider the results of a classic study of perceptual differences between superiors and subordinates.
Both groups were asked how often the supervisor gave various forms of feedback to the employees. The results, shown in the table below, demonstrate striking differences based on one’s location in the organizational hierarchy.

Differences in Perception between Supervisors and Subordinates: Frequency with Which Supervisors Give Various Types of Recognition for Good Performance

| Types of Recognition | As Seen by Supervisors | As Seen by Subordinates |
| --- | --- | --- |
| Gives more responsibility | 48 | 10 |
| Gives a pat on the back | 82 | 13 |
| Gives sincere and thorough praise | 80 | 14 |
| Trains for better jobs | 64 | 9 |
| Gives more interesting work | 51 | 5 |

Source: Adapted from R. Likert, New Patterns in Management (New York: McGraw Hill, 1961), p. 91.

Location of Event. Finally, how we interpret events is also influenced by where the event occurs. Behaviors that may be appropriate at home, such as taking off one’s shoes, may be inappropriate in the office. Acceptable customs vary from country to country. For instance, assertiveness may be a desirable trait for a sales representative in the United States, but it may be seen as being brash or coarse in Japan or China. Hence, the context in which the perceptual activity takes place is important. Characteristics of the Perceiver The third major influence on social perception is the personality and viewpoint of the perceiver. Several characteristics unique to our personalities can affect how we see others. These include (1) self-concept, (2) cognitive structure, (3) response salience, and (4) previous experience with the individual. Self-Concept. Our self-concept represents a major influence on how we perceive others. This influence is manifested in several ways. First, when we understand ourselves (i.e., can accurately describe our own personal characteristics), we are better able to perceive others accurately. Second, when we accept ourselves (i.e., have a positive self-image), we are more likely to see favorable characteristics in others.
Studies have shown that if we accept ourselves as we are, we broaden our view of others and are more likely to view people uncritically. Conversely, less secure people often find faults in others. Third, our own personal characteristics influence the characteristics we notice in others. For instance, people with authoritarian tendencies tend to view others in terms of power, whereas secure people tend to see others as warm rather than cold. From a management standpoint, these findings emphasize how important it is for administrators to understand themselves; they also provide justification for the human relations training programs that are popular in many organizations today. Cognitive Structure. Our cognitive structures also influence how we view people. People describe each other differently. Some use physical characteristics such as tall or short, whereas others use central descriptions such as deceitful, forceful, or meek. Still others have more complex cognitive structures and use multiple traits in their descriptions of others; hence, a person may be described as being aggressive, honest, friendly, and hardworking. (See the discussion in Individual and Cultural Differences on cognitive complexity.) Ostensibly, the greater our cognitive complexity—our ability to differentiate between people using multiple criteria—the more accurate our perception of others. People who tend to make more complex assessments of others also tend to be more positive in their appraisals. Research in this area highlights the importance of selecting managers who exhibit high degrees of cognitive complexity. These individuals should form more accurate perceptions of the strengths and weaknesses of their subordinates and should be able to capitalize on their strengths while ignoring or working to overcome their weaknesses. Response Salience. This refers to our sensitivity to objects in the environment as influenced by our particular needs or desires. 
Response salience can play an important role in social perception because we tend to see what we want to see. A company personnel manager who has a bias against women, minorities, or handicapped persons would tend to be adversely sensitive to them during an employment interview. This focus may cause the manager to look for other potentially negative traits in the candidate to confirm his biases. The influence of positive arbitrary biases is called the halo effect, whereas the influence of negative biases is often called the horn effect. Another personnel manager without these biases would be much less inclined to be influenced by these characteristics when viewing prospective job candidates. Previous Experience with the Individual. Our previous experiences with others often will influence the way in which we view their current behavior. When an employee has consistently received poor performance evaluations, a marked improvement in performance may go unnoticed because the supervisor continues to think of the individual as a poor performer. Similarly, employees who begin their careers with several successes develop a reputation as fast-track individuals and may continue to rise in the organization long after their performance has leveled off or even declined. The impact of previous experience on present perceptions should be respected and studied by students of management. For instance, when a previously poor performer earnestly tries to perform better, it is important for this improvement to be recognized early and properly rewarded. Otherwise, employees may give up, feeling that nothing they do will make any difference. Together, these factors determine the impressions we form of others (see (Figure)). With these impressions, we make conscious and unconscious decisions about how we intend to behave toward people. Our behavior toward others, in turn, influences the way they regard us. 
Consequently, the importance of understanding the perceptual process, as well as factors that contribute to it, is apparent for managers. A better understanding of ourselves and careful attention to others leads to more accurate perceptions and more appropriate actions. - How can you understand what makes up an individual’s personality? - How does the context of the situation affect the perception of the perceiver? - What characteristics of the perceiver can affect how personality is interpreted? - How do differences in perception affect employee behavior and performance? One of the key determinants of people’s behavior in organizations is how they see and interpret situations and people around them. It is vital for anyone (manager or subordinate) who desires to be more effective to understand the critical aspects of context, object, and perceiver that influence perceptions and interpretations and the relationship between these and subsequent attitudes, intentions, and behaviors. This understanding will not only facilitate the ability to correctly understand and anticipate behaviors, but it will also enhance the ability to change or influence that behavior. Perception is the process by which individuals screen, select, organize, and interpret stimuli in order to give them meaning. Perceptual selectivity is the process by which individuals select certain stimuli for attention instead of others. Selective attention is influenced by both external factors (e.g., physical or dynamic properties of the object) and personal factors (e.g., response salience). Social perception is the process by which we perceive other people. It is influenced by the characteristics of the person perceived, the perceiver, and the situation. - Body language - The manner in which people express their inner feelings subconsciously through physical actions such as sitting up straight versus being relaxed or looking people straight in the eye versus looking away from people.
- Halo effect - The influence of positive arbitrary biases. - Perception - The process by which one screens, selects, organizes, and interprets stimuli to give them meaning. - Perceptual organization - When meaning has been attached to an object, individuals are in a position to determine an appropriate response or reaction to it. - Perceptual selectivity - Refers to the process by which individuals select objects in the environment for attention. - Response disposition - The tendency to recognize familiar objects more quickly than unfamiliar ones. - Response salience - The tendency to focus on objects that relate to our immediate needs or wants. - Social perception - Consists of those processes by which we perceive other people.
Woodrow Wilson, the 28th president of the United States, was perhaps the most idealistic of modern American presidents. Though he led his country against Germany toward the end of World War I, he did so only after resisting war as the preferred option. He then developed his famous Fourteen Points, which convinced the German government to lay down arms without admitting defeat. At the 1919 Paris Peace Conference, Wilson worked for the creation of the League of Nations to promote peaceful international relations. For his efforts he was awarded the Nobel Peace Prize later that year. A highly intelligent, devoutly religious man, Wilson devoted himself to the cause of peace. But he could not achieve his goal. Not only did the Senate reject U.S. entry into the League of Nations, within 20 years of the war’s end the entire world was in the grips of terrible violence again. “The war to end all war” proved to be a forlorn hope, and the League of Nations a failed instrument. Though a subsequent generation of leaders was able to forge the League’s successor, the United Nations, the goal of preventing war remains unfulfilled to this day. It seems that no matter the highest of ideals set forth by leaders, humanity has never succeeded in overcoming what appears to be a death wish. You may not have thought of it in such stark terms. Yet can any of us deny the legacy of violence that defined the last century? “The means for expressing cruelty and carrying out mass killing have been fully developed. It is too late to stop the technology. It is to the psychology that we should now turn.” This is the subject of Jonathan Glover’s Humanity: A Moral History of the Twentieth Century. Glover is a professor of ethics at King’s College London. 
His book focuses on the violence of the past 100 years, dealing in particular with “the psychology which made possible Hiroshima, the Nazi genocide, the Gulag, the Chinese Cultural Revolution, Pol Pot’s Cambodia, Rwanda, Bosnia and many other atrocities.” While this appalling list reminds us of how much mass violence has dominated the modern world, the purpose of the book harks back to the perhaps paradoxical desire humans have to overcome the violence within us. The book’s message, writes Glover, “is not one of simple pessimism. We need to look hard and clearly at some monsters inside us. But this is part of the project of caging and taming them.” But while we may know the problem, the cure for the disease is far from us. Violence From Beginning to End There is much more than the last century to consider when it comes to the history of violence, of course. According to Glover, “it is a myth that barbarism is unique to the twentieth century: the whole of human history includes wars, massacres, and every kind of torture and cruelty.” In light of that statement, it is significant how often violence is referenced in the Bible, literally or conceptually, at critical junctures in earth’s history. The prophets Isaiah and Ezekiel both tell us of an angelic being who became corrupt before the arrival of humans on the earth. Isaiah refers to this being with the Hebrew heylel (“shining one” or “morning star,” unfortunately translated in English as “Lucifer” or “Light Bearer” from the Latin lux, lucis, “light”). No longer an angel of light, he had become an agent of darkness. Thereafter he is identified in the Bible as the Accuser or the Adversary (in Hebrew, satan). Ezekiel shows that violence became one of the tools of his trade. As a result of his corruption, he became dominated by aggression: “Your great wealth filled you with violence, and you sinned. So I banished you from the mountain of God. 
I expelled you, O mighty guardian, from your place among the stones of fire” (Ezekiel 28:16, New Living Translation). Satan was consumed by a violent attitude. Not surprisingly, his entry into the human world led to further corruption. The Genesis account of his deception of humanity’s parents is well known. By their actions, Adam and Eve did violence against their creator and suffered the penalty of banishment from Eden, the garden of God. It wasn’t long before the first recorded murder occurred, the first act of violence against a family member. Adam’s son Cain struck down his brother, Abel. It was the beginning of a succession of violent acts. One of Cain’s descendants, Lamech, was also a murderer, the biblical record indicating that he showed less remorse for his sin than Cain did. By the sixth chapter of Genesis, we read that early human society had gone far downhill in respect of violence: “Then the Lord saw that the wickedness of man was great in the earth, and that every intent of the thoughts of his heart was only evil continually. And the Lord was sorry that He had made man on the earth, and He was grieved in His heart. . . . The earth also was corrupt before God, and the earth was filled with violence. So God looked upon the earth, and indeed it was corrupt; for all flesh had corrupted their way on the earth” (verses 5–6, 11–12, emphasis added throughout). When we come to the much later New Testament Gospel accounts, we read of Jesus looking into the distant future and warning of a time of ultimate violence. It will be a time of such catastrophe that it will never be repeated: “For that will be a time of greater horror than anything the world has ever seen or will ever see again. In fact, unless that time of calamity is shortened, the entire human race will be destroyed. But it will be shortened for the sake of God’s chosen ones” (Matthew 24:21–22, NLT). 
This prophetic statement from Jesus accords with others in the book of Revelation, which says that, at the end of the age, Satan and his fallen followers will once again have their part to play in stirring up violence. Revelation 16:14 (NLT) tells of “miracle-working demons [causing] all the rulers of the world to gather for battle against the Lord on that great judgment day of God Almighty.” Thankfully, as we see in the above passage from Matthew’s Gospel, God will not allow the annihilation of humanity. The Spirit of Violence Though violence has stained human history from the beginning and, according to the Scriptures, will continue to mar it to the end of this age, Jesus proclaimed a very different world: a coming godly kingdom of peace. His message assures us that violence does not have to be an individual choice in today’s violent world. But it takes understanding and effort to take a different course. Sadly, we do not always realize the impact that the world we inhabit has on us. On one occasion Jesus had to explain to his own disciples that their attitude was very far from His own. He was on His way to Jerusalem, passing through a Samaritan village en route. When the Samaritans spurned Him, two of His disciples offered to call down fire from heaven to consume them. “But [Jesus] turned and rebuked them, and said, ‘You do not know what manner of spirit you are of’” (Luke 9:52–55). The disciples no doubt thought that they were quite right in what they had suggested, so Jesus’ response must have shocked them. But the solution that seemed right to the disciples would have been a violent act that showed neither mercy nor understanding. What spirit were they of? The Bible shows that there is a spirit in men and women that makes us unique and different from animals. The human brain is qualitatively different from the animal brain. But there is more to this spiritual equation. 
The Bible also reveals that there are two other spiritual minds with which the human mind can interface, causing us to think in varied ways—for good or evil, for right or wrong (1 Corinthians 2:12). One spirit, the apostle Paul said, is of this world; the other is of God. Paul also showed that the world in general falls under the influence of a wrong spirit: “You used to live just like the rest of the world, full of sin, obeying Satan, the mighty prince of the power of the air” (Ephesians 2:2, NLT). He mentions that this being is “the god of this age” (2 Corinthians 4:4) who blinds people. From what we know already of the Adversary’s role in human history, we should not be surprised at the result when the human mind interfaces with the wrong spirit. Sadly, one of the depravities of the human mind when it combines with the spirit of the world, the spirit of disobedience, is violence. The disciples who wanted to call down destruction on others were operating according to that spirit. A Line Through the Heart Centuries later, recognizing the almost natural human proclivity for violence, Russian author Fyodor Dostoyevsky wrote that “people sometimes speak of man’s ‘bestial’ cruelty, but this is very unfair and insulting to the beasts: a beast can never be so cruel as a man, so ingeniously, so artistically cruel.” His comment takes us to another level in our consideration of violent behavior. For some reason, the glorification of cruelty and violence preoccupies this present world. Box office attractions center on unspeakable violence. Not so long ago, for instance, many people flocked to see the long-awaited sequel to a gruesome movie about a serial killer. Part Two revealed a sometimes sympathetic portrait of a sadist who ate parts of his victims while they were still alive.
Film critics recommended that people not take their children to see the movie with its profoundly disturbing scenes. But did you ever wonder why so many are inclined to view such horror in the first place? Noting that “the festival of cruelty is in full swing,” Glover asks, “What is it about human beings that makes such acts possible?” Answering his own question, he says, “Three factors seem central. There is a love of cruelty. Also, emotionally inadequate people assert themselves by dominance and cruelty. And the moral resources which restrain cruelty can be neutralized. . . . Deep in human psychology, there are urges to humiliate, torment, wound and kill people.” Glover notes that his assertion echoes the words of the late Russian author Alexander Solzhenitsyn, who wrote about his experiences in Siberian exile in The Gulag Archipelago. Reflecting on the slender difference between guards and prisoners, Solzhenitsyn said: “If only it were all so simple! If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. . . . It is after all only because of the way things worked out that they were the executioners and we weren’t.” The Bible’s revelation about the hidden nature of man provides the answer to this age-old question of what it is that propels humans into shocking, senseless violence from time to time. In a powerful comment on the way we can become, Isaiah wrote: “They spend their time plotting evil deeds and then doing them. They spend their time and energy spinning evil plans that end up in deadly actions. . . . Violence is their trademark. . . .
Wherever they go, misery and destruction follow them. They do not know what true peace is or what it means to be just and good. They continually do wrong, and those who follow them cannot experience a moment’s peace” (Isaiah 59:4–8, NLT). In the Service of God? Returning again to the New Testament, we find that even the most outwardly religious people can have a violent heart. After all, many of those who persecuted and plotted the unspeakably cruel death of Jesus Christ were devoutly committed to their religion. Clearly, religious belief is no indication of a right spirit. In fact, Jesus said that the time would come when “whoever kills you will think that he offers God service. And these things they will do to you because they have not known the Father nor Me” (John 16:2–3). That is to say, such persecutors are out of sync with the mind of God but tuned in to another mind. Even the apostle Paul took part in the persecution and death of Jesus’ followers before his conversion. Acts 8:3 tells us that “he made havoc of the church, entering every house, and dragging off men and women, committing them to prison.” Why did he do it? Because of entirely misplaced religious conviction. Paul had to have it revealed to him that his violence was not something from the mind of God. Despite his religious zeal for God, he was as far from God as he could have been. He was under the influence of the wrong spirit. And Now to You and Me Quite rightly at this point you might be saying to yourself, “But I’ve never done anything like that. I’ve never assaulted or murdered anyone.” But violence starts somewhere short of the act of murder, sometimes a long way short of that final act. Most people have never considered that violence isn’t simply attacking people physically. We do violence to each other when we allow Satan’s adversarial state of mind to become our own. Remember that he is the spirit being who is centered on doing harm to human beings in any way he can. 
Sometimes, therefore, we commit an act of violence simply by what we say to others, or do to them, short of the act of murder. Paul described himself as having been “a man of violence” prior to his conversion (1 Timothy 1:13, New Revised Standard Version). Alternative translations say he was “insulting,” an “insolent, overbearing man” or “violently arrogant.” The result was that he engaged in the persecution to death of early Christians. The point is that thoughts and attitudes precede action. Jesus also had something to say about the state of mind that precedes physical violence: “You have heard that it was said to those of old, ‘You shall not murder, and whoever murders will be in danger of the judgment.’ But I say to you that whoever is angry with his brother without a cause shall be in danger of the judgment. And whoever says to his brother, ‘Raca!’ [an Aramaic term of contempt] shall be in danger of the council. But whoever says, ‘You fool!’ shall be in danger of hell fire” (Matthew 5:21–22). Jesus was interested in the underlying attitude behind the final act of murder. It starts with things that are very familiar territory to us: insults, being “lightly angry” without a cause, calling someone an idiot, saying someone is worthless. It can end up in cruelty, terror, torture and murder. There are other, more subtle ways in which we display a violent heart. We do violence to each other when we take up the sword of gossip. We can excuse ourselves by insisting we are only passing on information that someone else gave us. Yet the scriptural rules are quite clear: “Do not spread slanderous gossip among your people. Do not try to get ahead at the cost of your neighbor’s life, for I am the Lord” (Leviticus 19:16, NLT). God says that “death and life are in the power of the tongue” (Proverbs 18:21). We do violence to a relationship when we spread gossip, even if it is true, or when we slander someone. 
Interestingly, in a clue to slander’s origin, the Hebrew for “slanderer” is also satan. So we can define violence in terms of slander, gossip, insolence or anger. But in what might seem like a contradiction, we can even be violent by being passive. We can disrupt what should be a right relationship by failing to respond in a godly way. This means that the practice of passive resistance is very much open to question. The Moral Core How, then, do we begin to come to terms with the violence that seems so naturally a part of us? There is no question that understanding what we are up against in the spirit world is central. A strong sense of personal moral identity is also a key. Knowing who we are morally cannot be underestimated. This speaks to the early and continuous formation of character: knowing what is right and exercising the will to do it. Glover writes, “The sense of moral identity is one relevant aspect of character. Those who have a strong sense of who they are and of the kind of person they want to be have an extra defence against conditioning in cruelty, obedience or ideology.” He continues: “Sometimes people’s actions seem to be disconnected from their sense of who they are. This may be because they slide into participation by imperceptible degrees, so that there is never the sense of a frontier being crossed. This gentle slide can be a feature of the training of torturers. It was what the Nazis aimed at in securing collaboration in occupied countries. With the atomic bomb, the slide was gradual from making it only as a deterrent against Hitler to making it for actual use against Japan.” We must be careful that we do not become participants in cruelty or violence gradually. A well-formed personal moral identity should prevent it, but we sometimes allow ourselves to be compromised. Vigilance about our state of mind is essential. A Violent World Comes to Rest How can we become nonviolent people in the fullest sense? 
Hebrews 12:14 advises the followers of Jesus to “pursue peace with all people, and holiness, without which no one will see the Lord.” Part of pursuing peace is to treat people as people, not as commodities to be used up; to give people mental and spiritual space, just as we want it for ourselves. It is certainly to avoid coercing people in everyday life. The New Testament writer James said that “the fruit of righteousness is sown in peace by those who make peace” (James 3:18, emphasis added). Peacemaking is an active process. It requires action based on right principles. Living the right way and keeping God’s law in respect of human relationships leads to peace and reconciliation. These are actions we can take now as we endeavor to come under the direction of the Spirit of God—the Spirit that binds our human mind to the mind of God. Those who are willing to take up the challenge of living now under God’s rule experience peace as a foretaste of what is yet ahead for all of humankind. God will set His hand to save humanity from its own ultimate act of aggression. At that time the violence of this world in all of its manifestations will end. The day is coming when, according to the book of Revelation, “the great dragon [will be] cast out—that serpent of old, who is called the Devil and Satan, who deceives the whole world.” Finally Satan will be restrained, his influence removed. A new chapter will be added to the history of violence, signaling its effective control. The world’s new condition will be peace and security through the practice of the law of God’s love on all levels.
PENTACHLORODISILANE PRODUCTION METHOD AND PENTACHLORODISILANE PRODUCED BY SAME

[Problem] To provide a novel production method for pentachlorodisilane and to obtain pentachlorodisilane having a purity of 90 mass % or more by carrying out this production method.

[Solution] A production method provided with: a high-temperature reaction step in which a raw material gas containing vaporized tetrachlorosilane and hydrogen is reacted at a high temperature in order to obtain a reaction product gas containing trichlorosilane; a pentachlorodisilane generation step in which the reaction product gas obtained in the high-temperature reaction step is brought into contact with a cooling liquid obtained by circulative cooling of a condensate that is generated by cooling the reaction product gas, the reaction product gas is quickly cooled, and pentachlorodisilane is generated within the condensate; and a recovery step in which the generated pentachlorodisilane is recovered.

The present invention generally relates to a production method for pentachlorodisilane and, more specifically, a production method obtaining pentachlorodisilane from a production step for trichlorosilane. The present invention also relates to pentachlorodisilane obtained via this production method.

BACKGROUND ART

Compounds generally known as chlorosilanes are used as raw materials in polysilicon films, silicon nitride films, silicon oxide films, etc. that form integrated circuits in semiconductor devices and as raw materials for solar cells, liquid crystals, silicon, and the like.
From the viewpoint of industrial use, monosilanes, which are compounds conventionally formed by bonding a hydrogen or halogen atom to a silicon atom, are compounds representative of chlorosilanes and are produced and used on an industrial scale. Meanwhile, the progression of semiconductor device production technology can be said to have already reached its limits, but the progress of high integration has not halted, and for density to continue increasing, there is a need for raw materials which can form circuits at lower temperatures in order to suppress the spread of impurities caused by heating during the formation of integrated circuits. Under these conditions, the use of pentachlorodisilane, which can form circuits at lower temperatures in comparison with monosilanes such as monosilane and dichlorosilane, as well as the use of hexachlorodisilane, which is a similar compound, as raw materials has gathered attention, and development of integrated circuits using these compounds is on the rise. Thus far, methods for producing pentachlorodisilane have not been disclosed, but Patent Document 1 indicates that pentachlorodisilane is included in the exhaust gas of a Siemens method to obtain high-purity polycrystalline silicon; that is, pentachlorodisilane is included in the exhaust gas after introducing trichlorosilane and hydrogen to a silicon generating reactor and reacting the two. Further, Patent Document 2 discloses pentachlorodisilane existing in the off gas when precipitating polycrystalline silicon from chlorosilane and hydrogen. Furthermore, Patent Document 3 also discloses that, in addition to silicon tetrachloride and hexachlorodisilane, pentachlorodisilane, octachlorotrisilane, etc. are included in high-boiling chlorosilane-containing compounds generated in a polycrystalline silicon production process.
- Patent Document 1 JP 2006-169012 A - Patent Document 2 JP 2009-528253 A - Patent Document 3 JP 2009-227577 A The present invention was created in consideration of the above circumstances, having the purpose of providing a novel production method for pentachlorodisilane capable of utilizing a production process for trichlorosilane, in particular, providing a method for recovering pentachlorodisilane from a chlorosilane mixture that is a byproduct of a process producing trichlorosilane by reacting a source gas including vaporized tetrachlorosilane and hydrogen at a high temperature. Further, a purpose of the present invention is to provide high-purity pentachlorodisilane obtained via the above production method. As discussed above, it has been known that pentachlorodisilane is included among the chlorosilanes that are byproducts of polycrystalline silicon production processes. However, neither the concept of recovering pentachlorodisilane from these chlorosilanes for industrial use nor a method for doing so has been disclosed; moreover, it had not been shown that pentachlorodisilane can be obtained from a chlorosilane mixture that is a byproduct of a process producing trichlorosilane by reacting a source gas including vaporized tetrachlorosilane and hydrogen at a high temperature. As a result of diligent investigation, the present inventors discovered that, in trichlorosilane production methods such as that above, pentachlorodisilane can be obtained from chlorosilane mixtures generated therein and that, simultaneously, it is possible to control the concentration or the mass generated per unit time of the pentachlorodisilane in the chlorosilane mixture, arriving at the present invention.
Consequently, according to one embodiment of the present invention, there is provided a production method for pentachlorodisilane comprising: a high temperature reaction step in which a source gas including vaporized tetrachlorosilane and hydrogen is reacted at a high temperature and a reaction product gas including trichlorosilane is obtained; a pentachlorodisilane generation step in which the reaction product gas obtained in the high temperature reaction step is contacted with a coolant, obtained by circulative cooling of a condensate generated by cooling the reaction product gas, and rapidly cooled, generating pentachlorodisilane in the condensate; and a recovery step in which the generated pentachlorodisilane is recovered. Here, the liquid generated by rapidly cooling the reaction product gas is referred to as the condensate, and the liquid used to further cool the condensate in a cooling device, etc. and to rapidly cool the reaction product gas is referred to as the coolant. The high temperature reaction step is normally performed in a temperature range from 700 to 1,400° C. The temperature reached in the cooling step for the reaction product gas must be no more than 600° C., preferably no more than 200° C., and even more preferably in a range from 30-60° C. In one embodiment of the present invention, tetrachlorosilane is further added to the coolant and/or the condensate, and the coolant and/or the condensate are extracted outside the circulation system and recovered as extracted liquid. The added tetrachlorosilane is preferably added to the coolant and/or condensate before use in rapid cooling, via addition equipment capable of adjusting the supply speed, and the extraction of the coolant and/or the condensate outside the circulation system may be performed anywhere in the circulation system, but it is preferable that this be performed via extraction equipment capable of adjusting the extraction speed.
The amount of the tetrachlorosilane to be added to the coolant and/or condensate is preferably 10-10,000 L/h per 1,000 L/h of the raw tetrachlorosilane supply speed (prior to vaporization). The method, location, etc. for adding this tetrachlorosilane are arbitrary, but adding at a location before the spray nozzle used in rapid cooling is easy and preferable. The extraction speed of the coolant and/or condensate is preferably 5-1,000 L/h per 1,000 L/h of the raw tetrachlorosilane supply speed (prior to vaporization). There are no limitations on the method or location of extraction of the coolant and/or condensate, but discharging at a location beyond the outlet of the circulating pump circulating the coolant is easy and preferable. By adjusting the speed at which tetrachlorosilane is added to the coolant and the condensate and the extraction speed of the coolant and the condensate, the concentration and the mass generated per unit time of the pentachlorodisilane included in the coolant can be adjusted. In another embodiment of the present invention, in the recovery step, the extracted liquid is distilled, obtaining pentachlorodisilane having a purity of at least 90 mass %. For example, in the recovery step in one embodiment, pentachlorodisilane of yet higher purity can be obtained by recovering the extracted condensate, concentrating the condensate into an intermediate raw material, and further putting the condensate through a distillation step. There are no particular limitations on the recovery equipment, concentrating equipment, and distillation equipment for the condensate; they may be connected directly to the condensate extraction pipe or may be separate and independent pieces of equipment.
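As a rough illustration of how the quoted rates scale, the sketch below converts the per-1,000 L/h figures above to an arbitrary raw-tetrachlorosilane feed rate. The feed value of 2,500 L/h is a hypothetical example, not taken from the text.

```python
# Hypothetical sketch: scale the preferred addition/extraction rates,
# which the text quotes per 1,000 L/h of raw tetrachlorosilane feed,
# to some other feed rate.

def scaled_range(per_1000_l_h, feed_l_h):
    """Scale a (low, high) L/h range quoted per 1,000 L/h of raw feed."""
    factor = feed_l_h / 1000.0
    low, high = per_1000_l_h
    return (low * factor, high * factor)

ADDITION_PER_1000 = (10.0, 10_000.0)   # tetrachlorosilane added to coolant/condensate
EXTRACTION_PER_1000 = (5.0, 1_000.0)   # coolant/condensate drawn off as extracted liquid

feed = 2500.0  # L/h raw tetrachlorosilane, prior to vaporization (assumed example)
print(scaled_range(ADDITION_PER_1000, feed))    # (25.0, 25000.0)
print(scaled_range(EXTRACTION_PER_1000, feed))  # (12.5, 2500.0)
```

This is only the linear scaling implied by the "per 1,000 L/h" phrasing; the actual operating point within each range is chosen to hit the desired pentachlorodisilane concentration.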
The number of pieces of distillation equipment when providing multiple pieces of distillation equipment in series, or the number of times distillation is performed when repeating it in a single piece of distillation equipment, is not particularly limited. Consequently, in one embodiment, pentachlorodisilane with a purity of at least 90 mass % is obtained by storing the extracted liquid in a recovery tank, which may also be a single distillation still provided with a heating device; heating the recovered extracted liquid in the recovery tank (single distillation still) to generate evaporation gas; introducing the gas to a concentrating column; removing trichlorosilane and tetrachlorosilane from the gas to concentrate it into a liquid containing pentachlorodisilane; and further distilling the pentachlorodisilane-containing liquid obtained from the concentrating column in a distillation column, as necessary. Here, the form of the distillation column is not particularly limited; it is preferable that a widely known multi-stage distillation column or packed distillation column be used. When employing repeated distillation in order to increase the purity of the pentachlorodisilane, it makes no difference whether a continuous, batch, etc. distillation column is selected. In order to raise the refined purity of the pentachlorodisilane to a high level, it is preferable that the number of plates or theoretical number of plates (hereafter referred to together as plates) be at least 30, more preferably at least 50, and even more preferably at least 70. When there are fewer than 30 plates, there are cases in which the refined purity of the pentachlorodisilane does not rise, even if repeated distillation operations are performed. Furthermore, the distillation operating pressure may be set not only to normal pressure, but also to a reduced pressure of 5-300 mmHg, preferably 10-100 mmHg.
With the objective of raising the refined purity, substances at the column apex are returned to the distillation column at a predetermined ratio (called the reflux ratio), but the reflux ratio is not particularly limited. Further, with the objective of raising the pentachlorodisilane recovery rate, temporarily unnecessary liquid at the column apex or residual liquid in the tank can be reused as raw material. In addition, when distilling using a distillation column, there are no particular limitations on the type of packing material used with the objective of increasing the vapor-liquid contact area in a packed column, and any regular packing material or irregular packing material can be used; widely known materials such as Raschig rings, spiral rings, Pall rings, partition rings, Heli-Pak, Coil Pack, I-rings, C-rings, or Nutter rings are suitable. A further embodiment of the present invention is pentachlorodisilane obtained by purifying the above condensate (extracted liquid) via distillation to a purity of at least 90 mass %. The refined purity of the pentachlorodisilane is preferably at least 90 mass %, more preferably at least 95 mass %, and yet more preferably at least 99 mass %. When the purity does not reach 90 mass %, there are cases in which the film-formability in semiconductor manufacturing processes deteriorates. One example of the pentachlorodisilane production method according to the present invention shall be explained with reference to the schematic drawing. In general, the production method of the present invention is preferably provided with a condenser 60 for condensing trichlorosilane and tetrachlorosilane from the cooled and uncondensed reaction product gas, a tank 70 for temporarily storing condensate removed from condenser 60 and low-boiling point substances removed from recovery device 50, and a distillation column 80 for fractionally distilling trichlorosilane and tetrachlorosilane from stored liquid drawn from tank 70.
Recovery device 50 also functions as a single still 90 that vaporizes pentachlorodisilane and tetrachlorosilane from the condensate obtained in rapid cooling tower 40 and separates them from the unvaporized portion, and is preferably equipped with a concentrating column 100 that separates pentachlorodisilane provided from single still 90 from other low-boiling point substances. In an example of the present production method, vaporizer 10, preheater 20, and reactor 30 constitute the high temperature reaction step, and the subsequent rapid cooling tower 40, pump 43, cooling device 44, and spray nozzle 42 constitute the rapid cooling step (pentachlorodisilane generation step). Each device shall be explained in further detail below. Vaporizer 10 is a device for vaporizing the raw material tetrachlorosilane; after being released from vaporizer 10, the vaporized tetrachlorosilane is mixed with hydrogen and supplied to preheater 20. It is desirable that the tetrachlorosilane raw liquid supplied to vaporizer 10 be high-purity tetrachlorosilane, but small amounts of silanes having boiling points higher than that of tetrachlorosilane may be mixed therein. However, such high-boiling point substances accumulate as an unvaporized portion at a bottom section of vaporizer 10 and hinder the vaporization of tetrachlorosilane, so it is preferable that vaporizer 10 have a structure capable of removing the unvaporized portion collected at its bottom section in batches or continuously. The removed unvaporized portion can be supplied to single still 90 in recovery device 50 to recover industrially usable tetrachlorosilane, pentachlorodisilane, etc. that was expelled at the same time. The heating temperature for the raw material tetrachlorosilane in vaporizer 10 can be set to 60-150° C. under atmospheric pressure, preferably to 60-120° C.
At this temperature range, tetrachlorosilane can be adequately vaporized without vaporizing high-boiling point substances such as pentachlorodisilane. Naturally, if vaporizer 10 is a type capable of adjusting the internal pressure, the optimum temperature for vaporizing tetrachlorosilane can vary from the above temperature range accordingly. The raw material tetrachlorosilane vaporized in vaporizer 10 is mixed with hydrogen gas and supplied as a raw material gas to reactor 30, which will be discussed below; before being sent to reactor 30, the gas is heated in preheater 20 so as to approach the temperature inside reactor 30. By doing so, the difference between the temperature of the mixed gas and the temperature inside reactor 30 is lessened, and it is possible to increase the conversion rate in reactor 30 without generating temperature irregularities therein as well as to protect reactor 30 from local thermal stress concentrations. Further, preheating prevents the trichlorosilane generated by the thermal-equilibrium reaction between tetrachlorosilane and hydrogen from reverting to tetrachlorosilane due to temperature reductions caused by the flow of the raw material gas. The mixing ratio of tetrachlorosilane and hydrogen gas can be set to, for example, a molar ratio of 1:1-1:2. Reactor 30 is equipped with a reactor vessel 31, an elongated heater 32 arranged so as to surround the outer side of reactor vessel 31, and an external cylinder vessel 33 housing reactor vessel 31 and heater 32. With the outer walls of reactor vessel 31 heated by heater 32, the mixed gas of tetrachlorosilane and hydrogen is reacted inside reactor vessel 31 at a high temperature of about 700-1,400° C., by which the generation of trichlorosilane mainly progresses. This reaction is a thermal equilibrium reaction, and silylene, monochlorosilane, dichlorosilane, tetrachlorosilane, hydrogen, hydrogen chloride, and the like are in a state of coexistence.
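To give a feel for the 1:1-1:2 molar ratio quoted above, the following sketch estimates the hydrogen feed corresponding to a given liquid tetrachlorosilane feed. The density (about 1.48 kg/L) and molar mass (about 169.9 g/mol) of tetrachlorosilane are approximate handbook values, not taken from this text, and the feed rate is an assumed example.

```python
# Hedged sizing sketch: hydrogen demand for the 1:1 to 1:2 SiCl4:H2 molar ratio.
# Density and molar mass of SiCl4 are approximate handbook values (assumptions).

SICL4_DENSITY_KG_PER_L = 1.48
SICL4_MOLAR_MASS_G_PER_MOL = 169.9

def sicl4_molar_flow(liquid_feed_l_per_h):
    """mol/h of SiCl4 corresponding to a liquid feed rate (prior to vaporization)."""
    grams_per_h = liquid_feed_l_per_h * SICL4_DENSITY_KG_PER_L * 1000.0
    return grams_per_h / SICL4_MOLAR_MASS_G_PER_MOL

def h2_molar_flow_range(liquid_feed_l_per_h):
    """(min, max) mol/h of H2 spanning the 1:1 to 1:2 molar ratio."""
    n_sicl4 = sicl4_molar_flow(liquid_feed_l_per_h)
    return (n_sicl4, 2.0 * n_sicl4)

n = sicl4_molar_flow(1000.0)               # roughly 8.7e3 mol/h for a 1,000 L/h feed
h2_min, h2_max = h2_molar_flow_range(1000.0)
```

The point of the calculation is simply that the molar ratio is stated against vaporized SiCl4, so the liquid feed must be converted to a molar flow before the hydrogen feed can be set.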
Furthermore, it can be thought that, due to reactions of these coexisting substances with one another, for example, the reaction of silylene and trichlorosilane, hexachlorodisilane and pentachlorodisilane, to which the present invention is directed, are generated in this state of coexistence and are steadily present. Reactor vessel 31 is an approximately cylindrical vessel for reacting the raw material tetrachlorosilane and hydrogen in a high temperature environment, having a raw material gas inlet for introducing the raw material gas and a reaction product gas extraction outlet for discharging reaction product gas. In the present embodiment, reactor vessel 31 has a structure wherein the raw material gas inlet is provided at the center of a bottom section of reactor vessel 31 and the reaction product gas extraction outlet is provided on an upper side wall of reactor vessel 31. An extraction pipe 34 is inserted in the reaction product gas extraction outlet, and the reaction product gas is expelled to the outside of reactor 30. Outer cylindrical vessel 33, which houses reactor vessel 31, is provided with a raw material gas inlet opening and a reaction product gas extraction opening at positions corresponding respectively to the raw material gas inlet and the reaction product gas extraction outlet on reactor vessel 31. A connection means connected to rapid cooling tower 40 is provided at the reaction product gas extraction opening. Extraction pipe 34 is a pipe member connected, through the reaction product gas extraction opening in outer cylindrical vessel 33, to the reaction product gas extraction outlet in reactor vessel 31, and the reaction product gas that includes trichlorosilane generated in reactor vessel 31 is expelled from extraction pipe 34 and supplied to rapid cooling tower 40.
<Rapid Cooling Tower> Rapid cooling tower 40 is provided with a cylindrical metal vessel 41; a spraying means to spray the reaction product gas with coolant in metal vessel 41, that is, spray nozzle 42, which separates the coolant into fine droplets; pump 43, which extracts the condensate collected at the bottom of metal vessel 41 and circulates it to spray nozzle 42; cooling device 44, which cools the condensate; and a pipeline 45, which extracts a portion of the condensate and sends it to recovery device 50 (single still 90). The middle of pipeline 45 can be provided with a mechanism capable of adjusting the extraction speed of the condensate, such as, for example, a control valve. A side wall of rapid cooling tower 40 is connected to reaction product gas extraction pipe 34 from reactor 30. Spray nozzle 42 is arranged close to an upper part of the reaction product gas inlet opening so as to be capable of spraying coolant toward the reaction product gas introduced to rapid cooling tower 40. Further, a pipe is connected to an apex part of rapid cooling tower 40 to supply uncondensed gas of the reaction product gas that remains in a gas state even after cooling to condenser 60, which will be discussed below. In the illustrated example, in order to prevent one-sided flow of the coolant supplied from pipe 47, a dispersion panel is provided neighboring a lower part of pipe 47. In addition, supplying coolant from pipe 47 also has the effect of preventing the corrosion of metal vessel 41 and packing layer 46 by high temperature reaction gas. Furthermore, by changing the supply speed of the coolant from pipe 47, the amount of condensed and liquefied reaction gas changes, and it is possible to maintain a constant amount of circulated liquid in the rapid cooling tower.
That is, when the amount of coolant or condensate circulated in the rapid cooling tower is reduced, the amount of coolant from pipe 47 may be increased so as to increase the condensed gas; conversely, when the amount of the coolant or the condensate in the rapid cooling tower is increased, the amount of coolant from pipe 47 may be reduced so as to reduce the condensed gas. The condensate is a liquid collected at the bottom part of metal vessel 41 in rapid cooling tower 40, extracted via a tank 48, continuously circulated, and cooled by cooling device 44 to be made into the coolant; while it is a mixed liquid mainly containing tetrachlorosilane and trichlorosilane, tetrachlorosilane for addition can be further added to the coolant in the present invention. In order to do so, an inlet pipe 49 for the tetrachlorosilane for addition is connected to the base of spray nozzle 42. Inlet pipe 49 has a control valve or the like in its middle, making it possible to adjust the supply speed. The added tetrachlorosilane may be obtained from anywhere; for example, tetrachlorosilane drawn from distillation column 80, which will be discussed below, may be used. The amount of tetrachlorosilane added to the coolant is preferably 10-10,000 L/h per 1,000 L/h of the raw tetrachlorosilane (prior to vaporization), more preferably 10-5,000 L/h, and even more preferably 100-500 L/h. If the addition speed of the tetrachlorosilane is increased, the concentration or mass generated per unit time of the pentachlorodisilane in the condensate (extracted liquid) tends to fall. It is preferable that the coolant be temperature-adjusted to no more than 50° C. If the coolant is temperature-controlled to no more than 50° C., the reaction product gas can be rapidly cooled in a short period of time, so the reverse reaction, in which the trichlorosilane generated in accordance with thermal equilibrium returns to tetrachlorosilane, can be frozen.
The low-boiling point substances generated in reactor 30, such as trichlorosilane, hydrogen chloride, unreacted tetrachlorosilane, and hydrogen, do not condense even when rapidly cooled in rapid cooling tower 40, but are released from the apex part of cooling tower 40 as uncondensed gas and supplied to condenser 60. By contrast, the generated hexachlorodisilane, pentachlorodisilane, and a portion of the tetrachlorosilane are condensed, mixed into the coolant together with other byproducts and impurities in cooling tower 40, introduced to tank 48 connected to the bottom of rapid cooling tower 40, and circulated to spray nozzle 42 as coolant via a circulating pipeline by pump 43 connected to tank 48, while a portion is extracted from the circulation system through pipeline 45 and sent to recovery device 50 (single still 90). Pipeline 45 has a control valve or the like in its middle, and the extraction speed of the condensate can be adjusted. Extraction of the coolant via pipeline 45 is ordinarily performed to maintain a constant liquid composition with respect to changes in the liquid composition during circulation, but in the present invention, this is performed to adjust the amount of generated pentachlorodisilane. Accordingly, the coolant extraction speed used for this objective is preferably 5-1,000 L/h per 1,000 L/h of raw material tetrachlorosilane (prior to vaporization), more preferably 5-500 L/h, and even more preferably 5-100 L/h. If the extraction amount is increased, the concentration of pentachlorodisilane in the condensate falls, but as the extracted liquid amount increases, the mass of pentachlorodisilane generated per unit time itself tends to increase. The mass of pentachlorodisilane generated per unit time was calculated by multiplying the specific weight of the extracted condensate, 1.5 kg/L, by the extraction speed.
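The mass-per-unit-time bookkeeping described in the last sentence can be written out as a small sketch. The 1.5 kg/L specific weight comes from the text; the extraction speed and the pentachlorodisilane mass fraction used below are hypothetical example values (the text itself quotes only the overall multiplication of specific weight by extraction speed).

```python
# Sketch of the mass-per-unit-time calculation described above.
# The specific weight (1.5 kg/L) is from the text; other numbers are assumed examples.

CONDENSATE_SPECIFIC_WEIGHT_KG_PER_L = 1.5

def condensate_mass_rate(extraction_speed_l_per_h):
    """kg/h of extracted condensate: specific weight x extraction speed."""
    return CONDENSATE_SPECIFIC_WEIGHT_KG_PER_L * extraction_speed_l_per_h

def pcds_mass_rate(extraction_speed_l_per_h, pcds_mass_fraction):
    """kg/h of pentachlorodisilane, given an assumed mass fraction in the condensate."""
    return condensate_mass_rate(extraction_speed_l_per_h) * pcds_mass_fraction

# e.g. drawing off 50 L/h of condensate containing 2 mass % pentachlorodisilane:
total = condensate_mass_rate(50.0)   # 75.0 kg/h of condensate
pcds = pcds_mass_rate(50.0, 0.02)    # 1.5 kg/h of pentachlorodisilane
```

This makes the trade-off quoted above concrete: a higher extraction speed dilutes the pentachlorodisilane concentration but raises the total extracted mass, so the product of the two can still increase.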
The uncondensed gas extracted from the apex part of rapid cooling tower 40 is split in condenser 60 into a chlorosilane condensate mainly including trichlorosilane and tetrachlorosilane and uncondensed components including hydrogen and hydrogen chloride. The extracted hydrogen is reused in the raw material gas, and the hydrogen chloride is separately recovered and industrially employed. The chlorosilane condensate is temporarily stored in tank 70 and subsequently sent to distillation column 80, where it is separated into trichlorosilane and tetrachlorosilane. Trichlorosilane can be used as an intermediate raw material for monosilane production, and tetrachlorosilane can be recycled and used again as raw material tetrachlorosilane. <Single Still (Distillation Device, Recovery Device)> Condensate recovery device 50 also serves as single still 90; single still 90 is provided with a jacketed metal vessel 91 to warm single still 90 and a pump 92 to circulate still liquid so that it is not blocked by byproducts. Connected to single still 90 are a pipe that supplies the tetrachlorosilane and pentachlorodisilane vaporized in the still to concentrating column 100 and a pipe that supplies high-boiling point substances that do not vaporize in single still 90 to elimination equipment. The unvaporized components from vaporizer 10 and the coolant from rapid cooling tower 40 are supplied to single still 90 and heated at about 150° C.; the tetrachlorosilane and pentachlorodisilane are vaporized, supplied to concentrating column 100, and recovered. Meanwhile, the unevaporated components are extracted from single still 90 in batches or continuously and detoxified in the elimination equipment. Concentrating column 100 may comprise a multi-stage distillation device having a reboiler. In concentrating column 100, vaporized gas from single still 90 is roughly separated into trichlorosilane and tetrachlorosilane, which are expelled from the apex of the column.
Tetrachlorosilane that could not be completely separated, together with hexachlorodisilane, pentachlorodisilane, and other high-boiling point substances, is separated at the bottom of the column. Low-boiling point substances, mainly tetrachlorosilane, are released from the apex of concentrating column 100, cooled and condensed by the cooling device, temporarily stored in tank 70, and then sent to distillation column 80. Meanwhile, high-boiling point substances, mainly hexachlorodisilane and pentachlorodisilane, are recovered from the bottom of concentrating column 100. In the present invention, by further distilling the recovered liquid, pentachlorodisilane with increased purity can be produced. By appropriately adjusting the temperature and pressure within concentrating column 100, the concentration of pentachlorodisilane at the bottom of the column can be sufficiently increased. As an example, it is preferable that the temperature in the column be in a range from 60-200° C. and particularly preferable that it be in a range from 60-150° C. Further, the pressure in the column is preferably in a range from atmospheric pressure to 0.3 MPa (absolute pressure), and it is particularly preferable that it be maintained in a range from atmospheric pressure to 0.2 MPa (absolute pressure). The liquid in tank 70 sent to distillation column 80 is separated into trichlorosilane and tetrachlorosilane. The obtained trichlorosilane can be used as an intermediate raw material for monosilane production, and tetrachlorosilane can be recycled and used again as raw material tetrachlorosilane.

EXAMPLES

Below, examples of the present invention will be explained in detail.
However, the specific details described in these examples do not limit the present invention.

Example 1

Sample liquid was recovered from the bottom of concentrating column 100 in equipment having the configuration indicated in the schematic drawing. The recovered liquid was analyzed by gas chromatography under the following conditions:
- Device, Recording Device: GC-14B, C-R6A (manufactured by Shimadzu Corporation)
- Column: Porapak QS (Waters Corporation)
- Column Size: internal diameter 3 mm ø, length 2 m
- Column Temperature Conditions: 70° C.-220° C.
- Carrier Gas: helium, flow rate 30 mL/min.
- Gas Sampler: 0.5 mL
- Detector: TCD

A liquid with reduced hexachlorodisilane was recovered as a raw material A from the apex of concentrating column 100 by distilling once more, using concentrating column 100, the sample liquid obtained in Example 1 under the conditions of No. 3 in Table 1-1 and Table 1-2. Next, raw material A was distilled using distillation equipment combining the two distillation columns shown schematically. The sample liquid obtained in Example 1 under the conditions of No. 3 in Table 1-1 and Table 1-2 was recovered as a raw material B from the bottom of concentrating column 100. Next, raw material B was distilled using distillation equipment combining the two distillation columns shown schematically.

Reference Signs:
31 Reactor vessel
33 External cylindrical vessel
34 Extraction pipe
40 Rapid cooling tower
41 Metal vessel
42 Spray nozzle
44 Cooling device
45 Pipeline (adjustment means)
46 Packing layer
49 Inlet pipe (adjustment means)
50 Recovery device
80 Distillation column
90 Single still (distillation device)
91 Jacketed metal vessel
100 Concentrating column

1.
A pentachlorodisilane production method characterized by being provided with a high temperature reaction step for obtaining a reaction product gas including trichlorosilane by reacting a raw material gas including vaporized tetrachlorosilane and hydrogen at a high temperature, a pentachlorodisilane generation step for generating pentachlorodisilane in a condensate by contacting the reaction product gas obtained in the high temperature reaction step with a coolant obtained by circulative cooling the condensate generated by cooling the reaction product gas and rapidly cooling same, and a recovery step for recovering the generated pentachlorodisilane. 2. The pentachlorodisilane production method of claim 1 characterized by, in the pentachlorodisilane generation step, adding additional tetrachlorosilane to the coolant and/or the condensate, recovering the coolant and/or the condensate as an extracted liquid extracted outside a circulation system, and adjusting the concentration or mass generated per unit time of pentachlorodisilane included in the extracted liquid. 3. The pentachlorodisilane production method of claim 2 characterized by, in the recovery step, distilling the extracted liquid and obtaining pentachlorodisilane having purity of at least 90 mass %. 4. The pentachlorodisilane production method of claim 2 characterized by, in the recovery step, recovering the extracted liquid in a distillation device provided with a heating device, generating an evaporation gas by heating, introducing the gas to a concentrating column, removing trichlorosilane and tetrachlorosilane, and obtaining a liquid containing pentachlorodisilane. 5. The pentachlorodisilane production method of claim 4 characterized by further distilling the liquid containing pentachlorodisilane obtained from the concentrating column and obtaining pentachlorodisilane having purity of at least 90 mass %. 6. Pentachlorodisilane having purity of at least 90 mass % produced by the method of claim 5.
Filed: Sep 25, 2015 Publication Date: Oct 5, 2017 Patent Grant number: 10294110 Applicants: Denka Company Limited (Chuo-ku, Tokyo), L'AIR LIQUIDE SOCIETE ANONYME POUR L'ETUDE ET L'EXPLOITATION DES PROCEDES GEORGES CLAUDE (Paris Cedex 07) Inventors: Hiroyuki YASHIMA (Itoigawa-shi, Niigata), Takahiro KOZUKA (Itoigawa-shi, Niigata), Seiichi TERASAKI (Itoigawa-shi, Niigata), Jean-Marc GIRARD (Minato-ku, Tokyo) Application Number: 15/513,593
“Who we are” is what we know: We know the events of our lives, facts about the world, values that guide our choices, languages to communicate with one another, and skills to make our actions effective. During the past 50 years, our knowledge about how the human brain learns and remembers has exploded, and along with that have come many implications. Most obvious are potential therapies for identity-destroying brain diseases, but also emerging quickly is the realization that this knowledge may allow us to alter how memory operates, even in the absence of actual disease. The possibility of such optional uses of our newfound understanding may prove to be a neuroscientific Pandora’s Box. We will not be able to resist opening it, but we should not fail to think about how tinkering with memory may change who we are and affect others we care about.

Lifting the Lid

Research has provided a new appreciation of the organization of memory in the human brain and provoked new ideas about how memory may be therapeutically rescued or altered. Many people will have a medical need to enhance memory capacity, such as the 14 million Americans expected to have Alzheimer’s disease by the year 2050. To help others, we may be able to meet a humane need to block specific memories—for example, in treating people who have experienced traumatic events and are reliving those horrors in post-traumatic stress disorder. In yet other cases, a valuable goal will be to weaken specific patterns of learning and memory, such as learned responses that trigger craving in recovering addicts or provoke rumination about sad experiences that further deepens the depression of depressed patients. More controversial are possibilities of enhancing normal memory and learning through biological intervention above and beyond the morning cup of coffee.
Although one can certainly argue for caution in developing superhuman memory, we will face more complex and urgent challenges in thinking about the continuum from healthy to diseased individuals. For example, we now appreciate that older people go through a long, slow process of brain and mental changes before crossing the diagnostic threshold of Alzheimer’s disease. Most researchers believe that early intervention may be essential to treat or postpone the disease, because so much brain injury has already occurred by the time an individual can meet diagnostic criteria for probable Alzheimer’s. When, then, would experimental treatment be justified? Would it be appropriate to try to alleviate any age-associated memory loss? Similarly, what if we know that an individual has genes that make him or her especially susceptible to a psychiatric disease? When would it be appropriate to intervene in memory mechanisms in an effort to prevent depression, post-traumatic stress disorder, and other psychiatric diseases now treated only after a person is deeply and persistently miserable? An additional challenge is that, in all of these cases, any treatment would initially have to be experimental with no certain efficacy and safety outcome. Another important concern is the unintended consequences of any intervention. Modern research has shown that the brain functions in complex, interactive networks where activity in one brain region has great consequences for how other brain regions work. Thus, our humanity reflects these interactions, not only crossing boundaries of brain regions but also integrating the elements of human nature, such as memory, emotion, and personality.

Memory Systems of the Human Brain

In the 1950s, it was thought that the brain recorded, retained, and retrieved memories holistically, with the biological record of an experience etched throughout the central nervous system in an undifferentiated fashion.
Memory for a specific experience or a particular fact would be like a drop of milk well-stirred into a glass of clear water—the whole glass of water would change color and the milk would be no more in one part of the water than in any other part. This view was overturned by the study of a single patient, arguably the most famous neurological patient of the twentieth century, known by his initials, H.M. H.M. had severe and pharmacologically intractable epilepsy, a neurological disorder in which neurons fire without apparent reason and cause seizures. Many patients with epilepsy control their seizures with medications, but a minority respond poorly to medications and become candidates for surgical removal of the brain tissue where the seizures originate. For some patients, the procedure is highly effective. In the case of H.M., the origin of his seizures could not be determined, and physicians assumed it was the hippocampus, the most common, but not the only, brain region from which epileptic seizures originate. Therefore, in 1953, H.M. underwent removal of both the left and right hippocampi, as well as nearby structures such as the amygdala. In terms of treatment for epilepsy, the surgery was successful. Although he continued taking medications, H.M. had very few seizures thereafter. However, the operation produced a devastating unintended consequence: From that day, H.M. was not able to form a lasting memory for any new experience or new fact. Although he was a man with average mental abilities, he could not remember information for more than a few moments. He did not know his age, the year, any historical event since 1953 (despite many hours of watching television), or that his parents had passed away. He did not remember any experience, no matter how emotionally powerful, and no fact, no matter how important or how many times repeated to him, for more than a few seconds. 
H.M.’s memories of his life before his surgery remained (he knew his name, his childhood and adolescent history), but he was virtually a blank slate for consciously accessible memories of events and facts since 1953. H.M.’s case was described as being one of global amnesia, because his inability to remember was so severe and so pervasive. Fifty years of human and animal research have supported what H.M.’s doctors observed in the aftermath of his surgery—that the hippocampus and other structures located in the medial temporal lobe are critical for the formation of memories in the everyday sense of memory. Although pure global amnesia is rare, the consequences of hippocampal injury are frequent and profound. The vast majority of patients with Alzheimer’s disease have initial pathology in the same brain region, which is why memory loss is the most common and severe early difficulty in that disease. Furthermore, research has found indications that many diseases, including schizophrenia and post-traumatic stress disorder, affect the hippocampus. So complete was H.M.’s amnesia that scientists were initially surprised to discover that some kinds of learning remained fully preserved in H.M. and similar amnesiac patients. In subsequent years, studies documented that various kinds of learning—including perceptual, motor, and cognitive skills and other forms—are normal in amnesia. Further research indicated that several such forms of learning are mediated by other neural circuits, including the basal ganglia, cerebellum, and neocortex. We now think the entire human brain has learning capacities, with each brain region highly specialized for learning specific kinds of information— not unlike a symphony orchestra, in which each instrument makes a specific contribution to the music. 
The hippocampus and related structures stand out, however, because of their critical importance for the everyday sense of memory and their susceptibility to injury in diseases that affect many people.

Emotions and Memory

Emotions have a powerful influence on memory. Psychological experiments verify our personal observations that we remember emotionally intense experiences more often and in more detail than less intense experiences. (Although emotionally fueled memories seem as susceptible to error and misremembering as neutral memories, emotional experiences have a better chance of not being forgotten.) It is striking that the limbic area of our brains (the structures that form a border around the brain stem) includes in a close anatomical neighborhood circuitry that is essential for both emotion and memory. Emotionally powerful experiences, be they fearful or delightful, may merit special consideration for being remembered. H.M. and patients like him have injuries to structures adjacent to the hippocampus, including the entorhinal cortex (where Alzheimer’s pathology is believed to originate) and the amygdala. A remarkable convergence of animal and human studies, however, has identified one structure, the amygdala, as a specific link between emotion and memory. When healthy research subjects are shown films or slides (scientists in the laboratory are not allowed to induce truly powerful emotional situations), they remember emotionally intense material better than neutral material. By contrast, patients with injury to only the amygdala remember neutral material normally, but they specifically fail to enhance their memory of emotionally powerful material. In brain imaging studies with healthy adults, amygdala activation during the viewing of visual material correlates with subjective experience—the more intense a person perceives a picture to be, the more amygdala activation occurs.
Further, the greater the amygdala activation during the viewing of an emotionally intense picture, the greater the likelihood that the person will remember the picture weeks later. But this applies to emotionally intense experiences only—the amygdala appears to have little role in any but the most intense events. Thus, the amygdala appears to adjust the formation of enduring memories on the basis of emotion. Amygdala enhancement of memory appears to occur for both negative and positive experiences, although we have more evidence for negative experiences (perhaps because they are easier to induce intensively in the laboratory). Interestingly, amygdala-driven memories in humans appear to have a trade-off: The key or central aspects of an emotional experience are better remembered, but the peripheral aspects of the experience are less well remembered than they are for more neutral experiences. It is as if the emotionally charged information overshadows other aspects of the same experience. Functional brain imaging has recorded amygdala dysfunction in many psychiatric disorders, including depression, social phobia, and anxiety. However, we do not know whether the amygdala dysfunction is part of the cause of the disease or the consequence of the disease. That is, if another part of the brain were transmitting dysfunctional information to the amygdala, the amygdala could be working normally but appear pathological in response to the other brain region.

Individual Differences in Personality, Sex, Age, and Genes

The brain is not only the physical basis for aspects of human nature we all share, such as dependence on the hippocampus to form new memories, but also the basis for the neurology of individuality—how we are unique. We differ from one another on many dimensions, including personality, sex, age, and genes. Research during the past decade has begun to uncover how these seeds of uniqueness influence what we remember, and thus who we are.
“Personality” refers to stable psychological characteristics such as extroversion or introversion that influence an individual’s behavior across different situations and over time. These enduring traits or predispositions are not the same as the fleeting states of feelings. Personality researchers have developed questionnaires that reliably measure specific personality traits on a continuum (how you score today is similar to how you score next week or next year), and these traits correlate with various behaviors and health outcomes. However, considerable debate continues among psychologists about the relative power of personality versus situations in influencing how we behave—to what extent is a shy person shy across all situations, or instead shy among strangers at a party but outgoing and aggressive at work? Brain imaging studies have begun to show the neural mechanisms by which personality, emotion, and memory may interact with one another. When viewing an equal number of negative and positive pictures, the more extroverted a person is, the greater the amygdala response to positive pictures (the more introverted, the greater the response to negative pictures). One can imagine that an extrovert is outgoing and sociable, in part because she or he more powerfully remembers positive experiences, whereas the introvert may be more aloof because she or he more powerfully remembers negative experiences. Thus, if one’s personality filters experience, we may learn quite different lessons about life depending on what we remember as positive or negative. Another dimension of difference is whether we are women or men. This starts with a genetic difference, but powerful socializing influences enter as we learn about our expected gender roles. Unexpectedly, two studies found that activation in the right amygdala predicted what negative pictures men would later remember, whereas activation in the left amygdala predicted the same thing for women. 
One of the studies also found that, while men and women were largely equal in memory performance, women had superior memory for the most intensely negative pictures and both men and women increasingly activated the left amygdala as they found pictures to be increasingly intense. Speculatively, the brain imaging evidence suggested that emotional evaluation and memory formation are more tightly coupled in the brains of women than of men. (Overall, men and women had a similar amount of brain activation for emotional versus neutral pictures). Age is another source of uniqueness, in how we differ not only from one another but even from our former, younger selves. Memory formation declines mildly in healthy aging, but some research shows that older people better retain memory for emotionally positive than negative experiences. This increasing emphasis on maintaining a positive emotional disposition can be interpreted as a sort of emotional wisdom. Brain imaging has begun to specify how the older brain may wisely emphasize the positive. For example, in one study, younger (around age 20) and older (around age 70) adults viewed positive, negative, and neutral pictures. In the young adults, amygdala activation was greater for both negative and positive pictures than for neutral ones, but the amygdala of older adults showed a selective reduction in response to negative pictures (and the older adults had far less memory for the negative pictures). Thus, it appears that a lifetime’s experience encourages older people to disengage emotion-driven memory formation for negative experiences, whereas younger adults form emotional memories equally for positive and negative experiences. Finally, everything we know comes from only two sources, our genes and our experiences; our brains are formed under genetic instructions, then shaped by what we experience. Both genes and experience exert their influence on our behavior by sculpting our brains. 
The revolution in genetics now allows us to characterize specific single nucleotide polymorphisms (SNPs) that vary from one healthy person to another. When brain-imaging studies have grouped people on the basis of these single genetic variations, they have found activation differences in the hippocampus (and in memory performance) for one gene and in the amygdala for another gene. These exciting studies have opened up a new frontier in which we may begin to explore the relations between genes, brains, and minds. In H.M., we saw a man mentally frozen since 1953 because he could not retain new information. He is evidence that memories make us who we are. But, perhaps equally important, who we are—men or women, extroverted or introverted, younger or older, having this or that variant of gene—may also determine what we remember. The brain imaging that has uncovered the varied strength and content of individuals’ memories depending on their sex, age, and genetic endowment is showing us that memory is not an add-on to who we are (as when we add memory capacity to a computer) but woven into a fabric of our individuality. Thus, when we talk about altering memory processes we may be talking about altering our fundamental individuality.

Can You Erase Memories?

In the movies, memories are erased from the human brain for both benign and nefarious reasons. In certain situations, therapeutically blocking access to a memory may be desirable, such as memory for traumatic experiences or learned responses to cues for drug craving. We have known for some time that certain medications or electrical stimulation (or head injury) can obliterate recent memories, but these methods cannot be targeted to a specific memory and are difficult to control precisely in time. Thus, it may be possible to pharmacologically block access to a traumatic memory from a year ago, but these methods would also block access to all other memories from the past year (or more).
In functional brain imaging studies of selective suppression for a specific memory, healthy young adults could learn to suppress specific memories of neutral word pairs. This suppression was characterized by selective activation of prefrontal cortex (an area of the brain important for goal-oriented control of cognition) and deactivation of the hippocampus. How this experimental study relates to real-life traumatic memories is still unknown, but what we know about the human brain suggests that the intentional suppression of unwanted memories would involve the turning up of control of one’s thoughts (prefrontal cortex) and the turning down of memories (hippocampus). Several interesting studies have shown that people can use real-time feedback from brain imaging to learn to increase or decrease activation in specific brain regions and, presumably, the mental operations supported by those regions. One can imagine that people could learn, using brain feedback, how to suppress specific memories. Furthermore, certain drugs may be able to augment this process. Currently, these ways of suppressing memory are voluntary, but increased understanding of such mechanisms may point towards involuntary methods by which others may choose to suppress our memories. Even well-intentioned uses of memory suppression may have profound consequences. For example, emergency treatment of a rape victim or battlefield treatment of a soldier may preempt emotional suffering if a traumatic memory is immediately suppressed. The unintended consequence of such memory suppression may be the victim’s inability to remember information required to convict the rapist or the soldier’s inability to learn from a disturbing experience.

Can You Have Too Good a Memory?

The student may not exist who, preparing for an examination, would not want at that moment a pill to confer photographic memory.
More generally, traumatic and embarrassing memories aside, most of us are more irked by things we forget than by things we remember. But can one have too good a memory for one’s own good? Although we know a great deal about how crippling the loss of memories can be in amnesia and Alzheimer’s disease, we know very little about the consequences of a memory that is too good. Truly photographic memory is very rare (most people with superior memory use long-known mnemonic devices that require considerable effort to apply). In the 1920s, the noted Russian psychologist A.R. Luria carefully studied one case of truly photographic memory, in a person known as S. This man performed virtually perfectly on all memory tests. Indeed, he could recall random lists of numbers months and years after seeing them. He appeared to have a photographic and unlimited memory. Until others studied him, S. was unaware that his memory was unusual, but once he grasped his unique gift, S. became a mnemonic performer, dazzling audiences with his perfect memory. However, S. could not control his memory. When he was reading or talking with others, words would evoke visual memories so powerfully that S. would have a difficult time attending to the meaning of the words. He could not control powerful memories from rising up in his mind and blocking more abstract interpretations, the idea of what was going on. So intrusive were these vivid memories that S. tried desperately to erase them from his memory by writing them down and then throwing out or burning those papers. He could not, however, throw out the memories that continued to flood his mind. We now understand that our memories (except for the case of S.) are abstract in nature. We remember the gist of what we see, hear, or read, not the specific visual or auditory details of each experience. This sort of abstraction has rewards and risks. 
The major reward is that we instantly relate our abstract knowledge of the world to the situation at hand—a lifetime of experience is used to instantly translate the significance of a physical experience into an abstract interpretation of what is going on. The major risk is that the physical details of an experience are thrown quickly away in favor of an efficient interpretation. This makes us prone to the dangers of interpretation— such as false or illusory memories or self-serving biases in which we inadvertently substitute our interpretations for our actual experiences. This natural blending of reality and interpretation extends over time, so that each reconsideration of a memory, be it spontaneous or through interviews or in therapy, alters the memory itself. This is why eyewitness testimony in the court can be more riveting than accurate, and why it has proved to be excruciatingly difficult to validate recovered memories of childhood abuse. Memory is often thought of as a fragile power, because its balance of recording and interpretation allows us to remember all that we remember, but also to easily forget or misremember. The case of S. demonstrates the risk of memory becoming too powerful —memories of the past flood our minds and drown a clear sense of the present. While a world full of people with perfect memories seems far away, great efforts are being made to develop medications that can boost memory in Alzheimer’s disease. To date, these medications have had, at best, modest benefits for patients with Alzheimer’s, but these patients offer a severe challenge for treatment because their brain injuries demand a remarkable drug effect. But could healthy older people use some of the same prescription medications to boost their memory, with the medications having potentially greater effect in a healthy brain? 
Or could healthy young people assist their mental performance and encourage others to do so just to keep up (as we have seen with steroid and other drug use in athletes)? What effect would such drug-facilitated memory enhancement have on identities? It is possible that a modest gain in the accuracy of our memories would be more than offset by an imbalance with our emotions, personality, or age—for example, an older person may remember more distressing information and become less happy, without gaining other sources of happiness that are part of youth.

Lifting the Lid Cautiously

Pandora discovered that opening her box satisfied her curiosity but unleashed misery upon the world. Scientists have studied the brain basis of human memory, and especially hippocampus-dependent memory for events and facts, in order to understand how we remember and forget. Our understanding of H.M.’s amnesia fueled our insight into the memory failure of Alzheimer’s disease. Functional neuroimaging in human studies has allowed for visualization of memory functions in health and in many diseases, and the scientific literature on memory abounds with the spectacular progress in animal and molecular neuroscience. We have an ethical imperative to use this research knowledge to inform treatment of those with diseases of memory. These exciting opportunities, however, are associated with risks of abuse or unintended consequences. Some potential consequences are striking, such as how being able to extinguish a memory might thwart desirable societal goals. Others would be more subtle and perhaps more troubling: a treatment that simply boosted memory could make older people less happy; learning to easily enhance or suppress memory raises the specter of life narratives rewritten for nefarious or unworthy reasons; and, most important, the power to manipulate memory at will, our own or another’s, is the power to alter our sense of who we are as men and women, or in terms of our personality.
The potential gains of improving or therapeutically altering memory are compelling, but each step we take to put these advances to use must be accompanied by forethought about ethical dimensions of the brain basis of human individuality and about unintended consequences of manipulating the evolved balance of memory processes. Thinking ahead may allow us to open this box more wisely.
https://dana.org/article/memory-pandoras-hippocampus/
In Excel, how could you display a cell as blank when the sum is 0? And if a cell is not blank, how do you calculate only then? If a chart scores a ratio whose denominator becomes 0, the score is effectively infinite, and the chart should display nothing at all. The ISBLANK function helps here: if it finds any blank cell, it returns TRUE. Alternatively, you can use =IF(NOT(ISNUMBER(G4)),"",TEXT(G4,"ddd")), which will check whether G4 contains a number and return an empty string if it does not. The following examples show the different scenarios, with formulas to create strings in a new column based on the data in another column. This tip (2174) applies to Microsoft Excel 97, 2000, 2002, and 2003.
If TRUE, do one thing; if FALSE, do another. In Excel, any value greater than 0 is treated as TRUE. In the next example we're using "" instead of ISBLANK: the formula =IF(K45="",1,0) returns 1 when K45 is blank. The important thing to note is that to tell Excel to search for a blank, you need to have the two inverted commas directly next to each other, with nothing between them. Note also that numeric values are not entered in quotes; in other words, =IF(A1=1,B1,"") is right, while =IF(A1="1",B1,"") is wrong. Using SUMIF with blanks is very simple: we use "" as the criteria for a blank cell, but to use SUMIF only when the cells are not blank, we use the operator "<>", which means not equal to blank; this operator acts as the criteria for summing the cells when the criteria range is not blank. Keep in mind that if you put a formula in cell B7 (as already discussed), then cell B7 is not truly blank—it contains a formula. One reader did not want an overview page to return a zero, because conditional formatting should turn the cell green only when the true value is actually zero.
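The two SUMIF criteria described above can be sketched in Python. This is a minimal model of the Excel behavior, not Excel's own code; the `sumif` helper and the sample lists are hypothetical names introduced for illustration.

```python
# Sketch (assumption: Python model of Excel's SUMIF semantics):
# criteria "" matches blank criteria cells, "<>" matches non-blank ones.
def sumif(criteria_range, criteria, sum_range):
    """Sum values in sum_range whose paired criteria cell matches."""
    total = 0
    for crit, val in zip(criteria_range, sum_range):
        blank = crit is None or crit == ""
        if criteria == "" and blank:          # "" -> blank cells only
            total += val
        elif criteria == "<>" and not blank:  # "<>" -> non-blank cells only
            total += val
    return total

names = ["a", "", None, "b"]   # criteria column; "" and None stand for blanks
amounts = [10, 20, 30, 40]
print(sumif(names, "<>", amounts))  # non-blank rows: 10 + 40 = 50
print(sumif(names, "", amounts))    # blank rows: 20 + 30 = 50
```

In a worksheet the same two sums would be =SUMIF(A1:A4,"<>",B1:B4) and =SUMIF(A1:A4,"",B1:B4).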
Remember, the IF function in Excel checks whether a condition is met, and returns one value if true and another value if false. Even if a formula returns an empty string, the cell is still treated by other formulas as if it contained zero. Use a formula like this to return a blank cell when the value is zero: =IF(A2-A3=0,"",A2-A3). A formatting workaround achieves a similar effect: open Conditional Formatting, set Condition 1 to Cell Value Is, Equal To, 0, then click Format, set the font colour to white, and click OK. It might not be the best way, but it does exactly what is wanted: when there is a 0, the cell does not show anything. Moreover, the IF function also tests blank or not-blank cells to control unexpected results when making comparisons in a logical_test argument or making calculations in the TRUE/FALSE arguments, because Excel interprets a blank cell as zero, not as an empty cell.
The logical expression ="" means "is empty". When an Excel formula returns the empty string, the cell is processed as being valued 0 and is displayed on a chart accordingly. If you say =" " (notice the space), you are asking Excel to look for a space, not a blank. Also, if you are new to Excel, note that numeric values are not entered in quotes. A related question: if a cell is less than 0, show 0 instead—for example, Net Profit is Gross minus Expenses, or =D6-F6 (the thread does not give the answer, but a common one is =MAX(0,D6-F6)). DAX uses blanks for both database nulls and for blank cells in Excel. You can also use the IF function in combination with logical operators and the DATEVALUE function. The Microsoft Excel ISBLANK function can be used to check for blank or null values, and this article introduces three different options for dealing with empty return values. One option: select the formula cells whose totals are zeros, then right-click and choose Format Cells from the context menu. Cells containing a future calculation are going to be blank—or they might be 0. One reader tried several formulas, such as IF with ISBLANK or IF with VLOOKUP, but nothing would return the cell blank while still allowing a genuine zero to show when the value in the other tabs was zero; if a macro is not a problem, that can be done with one. (With more than 50 non-fiction books and numerous magazine articles to his credit, Allen Wyatt is an internationally recognized author.)
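The distinctions above—a truly blank cell participating in arithmetic as zero, ="" matching a blank, and =" " matching only a literal space—can be modelled as follows. This is a Python sketch of Excel's coercion rules, with hypothetical helper names, not anything from Excel itself.

```python
# Sketch (assumption: Python model, not Excel): how Excel coerces a blank
# cell. Arithmetic treats it as 0; ="" and =" " are distinct comparisons.
def cell_value(cell):
    """Numeric coercion: a blank cell (None here) behaves as 0 in arithmetic."""
    return 0 if cell is None else cell

def compare(cell, probe):
    """Model =IF(A1=probe, ...): "" matches a blank, " " only a real space."""
    shown = "" if cell is None else cell   # a blank cell compares equal to ""
    return shown == probe

print(cell_value(None) + 5)   # blank + 5 -> 5
print(compare(None, ""))      # True: "" matches a blank cell
print(compare(None, " "))     # False: " " asks for a space character
```

This is why =IF(K45="",1,0) catches blanks while =IF(K45=" ",1,0) does not.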
A blank cell counts as zero, so you don't have to use an if-then statement; you can just add the cells together. In the examples above, the logical_test checks whether the corresponding value in column A is less than zero and returns the text string "negative" if it is. You can use the IF function and an empty string to check if a cell is blank, or use IF and ISBLANK to produce the exact same result—but either can only make a cell appear blank; it will always hold the formula. One reader described a harder case: a scorecard workbook has an overview tab plus several data tabs, and the overview pulls data from each tab by month with formulas such as ='tab2'!A1+'tab3'!A1+'tab4'!A1; the overview returns a zero until data is entered into the relevant tabs. If a macro is acceptable, a user-defined function can sum the same cell address across a run of sheets, counting only cells that actually contain numbers. (The original declared the return type As String; As Double is more sensible, since a number is returned.)

Function SumDouble(rStart As Range, rEnd As Range) As Double
    Dim rr As Range, ss As Worksheet
    Dim bSum As Boolean, nSum As Double
    Dim sStart As String, sEnd As String
    sStart = rStart.Parent.Name
    sEnd = rEnd.Parent.Name
    For Each ss In Sheets
        bSum = (ss.Name = sStart) Or bSum
        If bSum Then
            Set rr = ss.Range(rStart.Address)
            ' Only add values that are true numbers (Double)
            If VarType(rr.Value) = vbDouble Then nSum = nSum + Val(rr.Value)
            If (ss.Name = sEnd) Then Exit For
        End If
    Next
    SumDouble = nSum
End Function
Another reader asked: I am trying to calculate the difference between two dates, say A1 and B1, with the outcome in C1. Cell A1 always has a date, =TODAY(), but cell B1 can be blank. If B1 is empty, display 0 in C1; if B1 has a date, calculate the difference. With =(B1-A1) and an empty B1, the blank defaults to 01/01/1900. (The thread does not state the answer, but the standard pattern is =IF(B1="",0,B1-A1).) In IF's logical_test, any non-zero value is treated as TRUE and zero is treated as FALSE. Likewise, if 0 is the result of (A2-A3), don't display 0—display nothing, indicated by double quotes. Below is also the approach for returning a value when a cell contains specific text. The ISEMPTY function is a built-in VBA function categorized as an Information function; unlike ISBLANK, which can be used as a worksheet function in a cell, ISEMPTY is available only in VBA code. For division, =IFERROR(A2/B2,"") calculates A2 divided by B2 (the cell contents of A2 and B2) and, if this results in an error, returns a blank cell; more commonly this is written as =IF(C2<>0,B2/C2,0), doing the calculation when the denominator is not zero and returning a zero when it is. Related variations: VLOOKUP returning 0 if not found; VLOOKUP returning blank if not found; a left lookup with IF, INDEX, and MATCH; VLOOKUP with IF returning True/False or Yes/No. A formatting alternative (Figure 2: data for "if 0, leave blank"): highlight the range A4:C10, right-click and select Format Cells, click Custom in the Format Cells dialog, type a double semicolon ";;" in front of the word General, and click OK. One caution about running totals: if you leave a blank row in your credit/debit entries, the reference to the previous total, H15, reports blank, which is treated like a 0 in this case. An error, by contrast, propagates through other formulas that reference the formula, although the #N/A error is ignored completely when charting.
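The two patterns in this passage—return 0 when B1 is blank rather than inheriting the 01/01/1900 default, and divide only when the denominator is non-zero—can be sketched in Python. The helper names are hypothetical; this models the formula logic under the assumption that a blank cell is represented as None.

```python
from datetime import date

def days_since(b1, a1=None):
    """Model =IF(B1="",0,B1-A1): 0 when B1 is blank, else the day difference."""
    if b1 is None:
        return 0                      # blank B1 -> 0, not a 1900-date artifact
    a1 = a1 or date.today()
    return (b1 - a1).days

def safe_divide(b2, c2):
    """Model =IF(C2<>0,B2/C2,0): divide only when the denominator is non-zero."""
    return b2 / c2 if c2 != 0 else 0

print(days_since(None))    # 0
print(safe_divide(10, 4))  # 2.5
print(safe_divide(10, 0))  # 0 instead of a #DIV/0! error
```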
The IF formula will check whether the length of the value is 0; if it is equal to 0, it returns a blank value, otherwise it returns the matched value. Text placed in cells can be lowercase, uppercase, or a mixture of the two. We can check whether a cell contains a string value and write something in another cell or an adjacent column; you can also achieve this by using the SEARCH function. Blanks and empty strings ("") are not always equivalent, but some operations may treat them as such. As a worksheet function, ISBLANK can be entered as part of a formula in a cell of a worksheet. Wrapping a lookup in IFERROR means that if the inner formula returns #N/A, a blank value is returned instead. When you highlight cell B4 or B6 you can see that there is no formula there, and if you use the ISBLANK function you can see that the cell has nothing inside. After entering the formula in B1, Format Painter can give it the same format as A1. For instance, you might want cell B7 to be blank if cell A3 is blank; as with relative and absolute references in Excel, we can control how the rule copies by locking the column address with the '$' dollar sign. As one answer put it: "Stuart, what you need is a formula that returns a string, specifically ''." Here is the Excel formula for "if cell contains text, then return a value in another cell".
Step 2: Press Ctrl+G to quick open the Go To dialogue box and then click “Special….” button or you can also click on Home tab > Find & Select > Click “Go To Special….” command.. If you need check the result of a formula like this, be aware that the ISBLANK function will return FALSE when checking a formula that returns "" as a final result. There are other options however. ISBLANK is a logical function in excel which is also a type of referencing worksheet function which is used to refer to a cell and identify whether it has blank values in it or not, this function takes a single argument which is the cell reference and returns TRUE as output if the cell is blank and FALSE as output if cell is not blank. You can use the following formula to display blank cell if the summation is zero instead of applying the Sum function directly, please do as follows: Enter this formula: =IF(SUM(A1:A3)=0,"",SUM(A1:A3)) into a blank cell where you want to calculate total result, and then drag the fill handle right to apply this formulas to other cells, then you will get blank cells if the summation are zeros, see screenshot: review. They also might be 0. Not how B2 is not empty but contains a formula. In the Color box, select white, and then click OK. The applying with this replacement will stop only for the completely blank cells. The ISBLANK function is a built-in function in Excel that is categorized as an Information Function. (gif, jpeg or png only, 5MB maximum file size), Notify me about new comments ONLY FOR THIS TIP, Notify me about new comments ANYWHERE ON THIS SITE. If you have a formula in a worksheet, and the cell referenced by the formula is blank, then the formula still returns a zero value. In that case, the formula returns a value of zero. Once you have received the verification code, you will be able to choose a new password for your account. This Thread… Mark this thread as solved… Rate this thread as solved… Rate this as. Not shared with anyone, ever. 
) you may need to easily switch back forth. Quarters, especially when it comes to fiscal information input data in another cell or adjacent column discussed ) and... | if not matches for example cell A1 is blank if B7 is greater zero... %, and 2003 the very useful formula when we deal with strings mouse clicks for you to deal strings... May want to show the exact return value in another cell or adjacent column a string value write. Subscribe to this is how to use the ISBLANK function finds a any blank cell computer and publishing company! ; 2019 ; 2016 ; 2013 ; 2011 ; 2010 ; 2007 ;.! Non-Fiction books and numerous magazine articles to his credit, Allen Wyatt last. Image or header to see and apply Conditional Formatting for a blank based on OK... With empty return values in combination with logical operators and DATEVALUE function in,. Through the use of a macro is treated as TRUE and zero is treated as TRUE and zero treated. This is your source for cost-effective Microsoft Excel training ; Microsoft 365 and ;... Can help you to format the summation 0 as blank cell to upload your when... In B1 and use format Painter to give it the same window, rather than in new tabs of week! By locking the column next example we 're using `` '' directly into a cell blank! And DATEVALUE function in Excel businesses organize information according to the target cell column Excel & macro N/A D. new... Figure 2: data for if 0 leave blank we will be reduced with empty return values for cost-effective Excel. Blanks ” option and click on the web or uninitialized variables sheet or some of the cells a. The small portion of Conditional Formatting for blank cells then whatever is in the next we. More detailed instructions about that option well as 0 as blank cell, it is returned as normal in to. Ws ) in Excel that uses the menu interface ( Excel 97, Excel 2000 Excel. 5:36 am the small portion of Conditional Formatting for blank cells in Excel to check for blank cells uninitialized... 
Bit so that it returns an empty string ) the cell is blank services company cells along with blank or. Completely when charting to do... FREE SERVICE: get tips like every.: data for if 0 leave blank we will highlight the entire range A4: C10 and right-click to format! Is the value you want to show when the VLOOKUP can not the... With anyone, ever. ) Search Community member ; RI ) cell. Is in the active worksheet way it will display empty text if the returned value by is. Specific text then to return value – for empty cells as blanks,... To calendar quarters, especially when it comes to fiscal information ) applies to Microsoft Excel always... When we deal with this replacement will stop only for the `` not blank '' condition as.! In case you prefer reading over watching a Video, below is the value you to. Wide or 1000px tall will be able to choose a new password for your account as FALSE '' as. In that case, any non-zero value is treated as TRUE in cells can be... Display excel if blank then 0 text if the returned value by if is # N/A, if FALSE another. Values you may want to show the exact return value I entered the formula ( its! ; Platform according to the above formula, if FALSE do another ’ dollar sign its in! Seattle Sales Tax Rate 2019, Saddle Up Trail Rides Aguanga, Prince George's County Property Tax, Clear Vinyl Fabric Michaels, Wear-ever Rice Steamer, 2 Feature Walls Opposite, Gems Modern Academy Kochi Tuition Fees,
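To keep the patterns in one place, here is a consolidated sketch of the formulas discussed above. The cell references (A1, B1, A2:A3, and so on) and the named range "values" are illustrative and should be adapted to your own sheet; the text after each apostrophe is an annotation, not part of the formula.

```
=IF(B1="",0,B1-A1)                       ' 0 when B1 is blank, else the date difference
=IF(A2-A3=0,"",A2-A3)                    ' show nothing instead of a zero result
=IF(C2<>0,B2/C2,0)                       ' guard a division against a zero denominator
=IFERROR(VLOOKUP(D2,values,1,FALSE),"")  ' blank instead of #N/A when no match is found
=IF(SUM(A1:A3)=0,"",SUM(A1:A3))          ' blank instead of a zero total
```

Each formula goes in a single cell; copy it down or across with the fill handle to apply it to a range.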
Jinnah's crucial role
The measure of criticality of Quaid-i-Azam Mohammad Ali Jinnah's role in the making of Pakistan depends on the answer to two interrelated questions: Did Jinnah create the forces that ultimately brought Pakistan into existence? Or did he merely channel those forces, which were already in momentum, towards a definite goal? While historians generally proffer the latter view, contemporary observers take the former one. Historians, whether because of their wide-ranging scholarship and long insight into history, or because of a general tendency among them to interpret events in terms of a deterministic approach, are prone to explain contemporary events within the framework of the outworking of historical forces and ideological factors - factors long embedded in a country's or a nation's body-politic. More specifically, some historians tend to believe that Pakistan was somehow in the "womb" of history and that its emergence was inevitable, whether or not there was a Jinnah to lead the movement to a successful culmination. At the other end of the continuum stand contemporary observers and those involved one way or another in the pre-1947 developments which led to partition and the emergence of Pakistan. They not only rate Jinnah as being the critical variable in its emergence: some, like Leonard Mosley, even regard Pakistan as a one-man achievement. More important, they even doubt whether, without him at the helm of Indo-Muslim affairs in that epochal decade of 1937-47, Pakistan would ever have come into being. Sweeping as this assertion may sound, it is sought to be buttressed with an array of arguments, at once solid and convincing. H.V.
Hodson, the author of perhaps the most authoritative British account of the imperial retreat from India and "the Great Divide", has brilliantly summed up the case for this viewpoint: "Of all the personalities in the last act of the great drama of India's re-birth to independence, Mohammad Ali Jinnah is at once the most enigmatic and the most important. One can imagine any of the other principal actors... replaced by a substitute in the same role - a different representative of this or that interest or community, even a different viceroy - without thereby implying any radical change in the final denouement. But it is barely conceivable that events would have taken the same course, that the last struggle would have been a struggle of three, not two, well-balanced adversaries, and that a new nation state of Pakistan would have been created, but for the personality and leadership of one man, Mr Jinnah. The irresistible demand for Indian independence, and the British will to relinquish power in India soon after the end of the Second World War, were the result of influences that had been at work long before the present story of a single decade begins; the protagonists on this side or that of the imperial relationship were tools of historical forces which they did not create and could not control... whereas the irresistible demand for Pakistan, and the solidarity of the Indian Muslims behind that demand, were creations of that decade alone, and supremely the creations of one man." The two divergent viewpoints sum up the alternatives in the age-long controversy between the social determinists and the "Great Man" theorists. In essence, the basic issue boils down to the fundamental question: Which one plays the prime role in the making of a historical event - the circumstances that give opportunity to a character or the character itself? The creation of a new nation of Pakistan out of India's body-politic was, by any criterion, a historical event of profound significance. 
As F.J.C. Hearnshaw argues, both character and circumstances are equally crucial in the making of such an event. Why? Because without their interacting on each other and mutually affecting one another, the final configuration of events and the integration of interests are simply inconceivable. Speaking of Napoleon, for instance, J. Christopher Herold remarks: "In spite of the prodigious amount that has been written about the man and his times, there is still little general agreement as to whether Napoleon is more important as a product and a symbol... of circumstances that were not of his making, or as a man, who, pursuing his own destiny, shaped circumstances that governed the course of history. Like all great men, Napoleon was both, of course..." The same is equally true of Jinnah. Opinion may, of course, differ about the relative weight assigned to circumstances and the character - that is, about the measure of criticality conceded to a character in the making of a historical event. But unless the environment is characterized by certain "determining tendencies", circumstances alone cannot create an event. Application of this criterion invariably leads to the following conclusion: whatever the strength, the momentum and the intensity of the historical forces working towards centrifugalism in India's body politic and towards Pakistan, without the fortuitous matching of the character - in this case, that of Jinnah - with the circumstances, it could simply not have come about the way it did. This was especially true in the case of Pakistan, since this state was not in the realm of possibility barely a decade before its emergence, nor was the demand formulated even nebulously before Chaudhry Rehmat Ali did so in 1933, nor was Pakistan even a "geographical expression" before that date. More so because of the fundamental fact that "few statesmen have shaped events to their policy more surely than Mr. Jinnah," as The Times (London) put it.
This explains why underlying everything that has been said or written about Jinnah is the central theme of his achievement of Pakistan. The critical role of achievement motivation in society may also be explained and buttressed by a close examination of the implications of the "womb" theory in respect of Pakistan's emergence. This theory, succinctly summed up by Leon Trotsky (1879-1940) while explaining Lenin's role in the coming and culmination of the Russian revolution, lays down "that a man and his period had to be considered together and that both were determined by the antecedent state of culture." In terms of the Trotskyian formulation, could it be said that Jinnah was not an accidental element in the historical development of Muslim India and that both Jinnah and his party were the product of the whole gamut of Indo-Muslim history? It could not, for it would amount to a tautology, because had Jinnah and his party not strived for Pakistan, and having strived, had failed to accomplish it, they would still have been a product of past Indo-Muslim history - an integral product in the same way as Sayyid Ahmad Shahid (1786-1831) and the Mujahidin movement (1820s-1860s) or Muhammad Ali (1878-1931) and the Khilafat Movement (1920-22) were, although in both cases they encountered failure in the end. Not to speak of the non-League Muslim leadership, even the League and its leadership during 1937-47 seemed always prone to striking a compromise with the Congress, more or less on the latter's terms.
Even Jinnah's past role as an eloquent Congress leader (1910-20), as an 'ambassador of Hindu-Muslim unity' (1915-20), as one of the foremost advocates of Indian freedom, as the author of the Delhi Muslim proposals (1927), and as a Muslim leader striving to evolve a compromise formula acceptable to both Hindus and Muslims till 1937 - and as late as 1938 in his correspondence with the Congress leaders - could, of course, be put down as a natural product of the recent historical past of Muslim India. But even this role fails to provide any clue as to which of the alternative paths of development presented to Muslim India he would take. Not to speak of 1937, even in the middle 1940s Pakistan's emergence could not have been predicted, on the basis of the available historical evidence, as the only likely future development of Indo-Muslim polity. Indeed, even as late as June 1946, whatever the political forces and conditions at work, the alternative path of a united India seemed the more likely choice. It was Jinnah who made the critical decision that led Muslim India directly to Pakistan within a year. Hence, while both Jinnah and the Muslim League were indeed a product of the past of Muslim India, Pakistan was not so much a product of that past as the product of one of the most "event-making" figures in modern history. Thus, Jinnah's presence was indispensable in the emergence of Pakistan. Given the Waliullah, Mujahidin, Aligarh and Khilafat legacy on the Muslim side and that of the Prarthana Samaj, Arya Samaj, Tilak, Malaviya and Gandhi on the other side, the demand for Pakistan is, of course, understandable as the culminating point of the "natural evolution" of the separatist tendency among Muslims on the one hand and of the process of alienation from the Congress on the other. But certainly not its realization, without the presence of "an event-making man."
The historical situation during the 1937-47 decade presented two major alternative paths of development for Muslim politics: (i) going along with the Congress credo, if not a merger of the League into it or the acceptance of a satellite status; and (ii) striking out an independent line. These alternative paths were presented on six different occasions (1937, 1939, 1940, 1942, 1945 and 1946). But on no occasion did Jinnah waver; each time he chose for himself and for Muslim India the path towards establishing a Muslim identity on a constitutional plane - the path, since 1940, of Pakistan as a separate Muslim state. This he did whatever the toils and labours, whatever the trials and tribulations, whatever the circumstances and consequences. It is true that Jinnah had initially accepted the Cabinet Mission Plan, but his acceptance, though sincere at the time, was primarily motivated by the fact that the Plan contained the seeds of Pakistan, providing for a somewhat limited Muslim religio-political identity in a confederal India, with the prospect of opting out for a sovereign Pakistan after a decade if the proposed arrangement did not work to Muslim satisfaction. It may be argued that the fateful decision to continue the boycott of the Constituent Assembly after getting the Muslim League entrenched in the Interim Government in October 1946 was solely Jinnah's, and that this decision led directly to the British government's declarations of December 6, 1946, and of February 20, 1947, which paved the way for partition. In several other crucial decisions during the 1937-47 decade as well, Jinnah alone mattered. He alone determined the course Muslim India and Muslim politics would take. Hence Jinnah's criticality in the making of Pakistan.
Slow progress in New Delhi
The "modest progress" claimed by the foreign ministers of Pakistan and India appears to be rhetorical rather than real, as no significant agreement was reached at the conclusion of the two-day talks in New Delhi. The disappointment of the people of the two countries is all the more poignant because the long-awaited talks were the first structured political dialogue between the foreign ministers of India and Pakistan in 40 years. The last such talks were held between Zulfikar Ali Bhutto and Sardar Swaran Singh in 1964. Significantly, Mr Natwar Singh had predicted the outcome of the talks even before they were held. Playing down expectations, he said that "there will be neither a breakthrough nor a breakdown in the dialogue process." He was slammed by India's main opposition party, the BJP, for "prejudging results of talks with his Pakistani counterpart." However, given the deep differences over Kashmir and terrorism publicly aired by the two sides, an important achievement of the recently concluded talks was the determination to carry on a sustained engagement with each other. The foreign ministers agreed to continue the composite dialogue as well as the ceasefire that has held since November 25, 2003. Also, the two sides will hold meetings to discuss conventional and nuclear confidence-building measures, and India has agreed to expert-level talks to consider CBMs in the conventional capacities of the two armed forces. The decision to continue the composite dialogue has not caused much surprise because neither side wants to take the blame for breaking off the talks. Unless there is a change in attitude, the fate of the second round will not be much different from that of the first, which failed to yield meaningful progress on any aspect of the eight-point agenda discussed - an agenda that included Wullar Barrage, the Baglihar hydro-electric power project, Siachen, Sir Creek, security, and Jammu and Kashmir.
The announcement of some more CBMs, mostly of minor importance, is a step in the right direction. But they will fail to gladden the people because some of the CBMs announced earlier have not been implemented so far. The consulates in Karachi and Mumbai, shut down in the 1990s, continue to remain closed despite an agreement a few months back to reopen them. At the moment, people from all over Pakistan have to go to Islamabad to apply for an Indian visa. The same is the case in India, where one can get a visa for Pakistan only in New Delhi. Mercifully, the two foreign ministers have agreed to speed up the reopening of the two consulates. In November last year, India and Pakistan agreed to revive the Munabao-Khokrapar link, which has remained frozen since the 1965 war. Technical-level talks on this issue have also been held, but the agreement to reopen the link remains unimplemented. The way things are moving is not promising for the future of the peace process. The first round of the composite dialogue, which began on an optimistic note, failed to make real progress on any of the items, including the less intractable issues. The foreign ministers' talks were expected to break the ice, reverse the negative trend and create a positive and hopeful atmosphere for the second round. Sadly, this has not happened. It bodes ill for the peace process that signals from across the border have been unexpectedly negative. The Congress has raised doubts about Pakistan's sincerity towards the bilateral talks. The All-India Congress Committee at its recent session even wondered whether New Delhi was dealing with the "right regime" in Islamabad. The resolution passed by it went on to state: "We seem to be dealing with a neighbouring government that has failed or is unable to deliver on its promises." Regrettably, these strong words were used by the ruling Congress party just days before the two foreign ministers were scheduled to meet in New Delhi.
The Indian home ministry's annual report for 2003-2004 has accused Pakistan of inciting terrorism in occupied Kashmir and in several northeastern states bordering Bangladesh. It has also accused the ISI of employing various means to destabilize India. On the day the two foreign ministers began their talks in New Delhi, Indian Defence Minister Pranab Mukherjee ruled out any move to reduce the military presence in the Siachen Glacier or in occupied Kashmir, thus effectively pouring cold water on two key issues of the composite dialogue between India and Pakistan. During the recent foreign ministers' meeting, India rejected the China model for talks with Pakistan to resolve the Kashmir dispute. The suggestion was made by Mr Khurshid Mahmood Kasuri to appoint "higher-level representatives" to resolve the Kashmir dispute, as India and China had done for their border dispute. Declining the offer, Mr Natwar Singh argued that sound mechanisms were already in place to resolve the Kashmir issue. The fact, however, remains that the quiet and patient diplomacy of the China model is a much better way to resolve intractable issues like Kashmir. According to some reports in the Indian media, two factors seem to have influenced New Delhi's attitude towards the ongoing dialogue with Islamabad. The first, of course, is the real or perceived increase in cross-border infiltration. Mr Natwar Singh conveyed to his Pakistani counterpart his government's serious concern over the alleged increase in the level of infiltration and violence in Jammu and Kashmir. The second reason may be Pakistan's reticence on some Indian proposals. During the first round of the composite dialogue, India reportedly put forward 72 "new ideas" on the bilateral agenda with Pakistan, but Islamabad allegedly remained non-responsive. Most of the proposals dealt with improving communication and commercial links between the two countries.
According to a report in The Hindu, in order to dispel Pakistan's fear that the Indian emphasis on people-to-people contact might be a ploy to put the Kashmir issue on the backburner, India also put across "an expansive agenda for cooperation in Kashmir that could create conditions for a final resolution of the difficult question." Amongst the Indian proposals, the important ones were: an offer to initiate transit trade across each other's territories; the opening of the Attari-Wagah land route for trade; and, in view of Pakistan's insufficient petroleum-refining capacity, an offer to extend a diesel pipeline across the border - though India gave only a lukewarm indication regarding the natural gas pipeline from Pakistan into India. New Delhi has now suggested that if the principle of transit is agreed upon, there could be pipelines of different types crossing the border. The other Indian proposals included study tours, student and conference visas and commercial performances by artists across the border. An expansion of the list of holy shrines open for visits and an increase in the size of pilgrim groups were also proposed. Pakistan has accepted the Indian proposal for the facilitation of group tourism. As announced in the joint communique released on Wednesday, India and Pakistan have now opened up their countries to group tourism. It may be observed that the foreign ministers' talks in New Delhi took place a few weeks before a meeting scheduled in New York between President Pervez Musharraf and Indian Prime Minister Manmohan Singh. Meeting on the sidelines of the United Nations, the two leaders are expected to discuss all the bilateral issues between the two countries and the difficulties and complications involved in addressing them. In a joint news conference after the two-day discussions, the two foreign ministers held out the assurance that they would intensify their search for durable peace in South Asia.
The joint communique, issued on Wednesday, also reiterated the confidence that the composite dialogue would lead to a peaceful settlement of all bilateral issues, including Jammu and Kashmir, to the satisfaction of both sides. These are brave words. But such statements can be reassuring only if they are backed by sustained efforts to achieve concrete results. New Delhi must realize that if it delays or avoids engaging Islamabad on the Kashmir issue, it will undermine current efforts to make the peace process a success. The bitter truth, however, is that while almost everyone wants the peace process to succeed and bilateral relations to be transformed, wishes in the Indo-Pakistani context have seldom been self-fulfilling.
Blame game in Bangladesh
I did not write on the bid to kill Bangladesh opposition leader Sheikh Hasina earlier because I wanted to first talk to her and Prime Minister Khaleda Zia. I returned from Dhaka a couple of days ago and I have met both of them. Not that I can say with certainty who the assassins were. But I can give the versions of both. Nonetheless, the blame game is going on and many names are being bandied about: India, Pakistan and the ruling Bangladesh Nationalist Party (BNP), even the Awami League and, of course, Al Qaeda. This has only made the confusion more confounded. Let me first reconstruct the incident. The Awami League, headed by Hasina, planned on August 21 a rally from its party office, which is located in the heart of Dhaka. The rally was about to move off at the culmination of her speech at around 5 p.m. when eight grenades were lobbed at her from all sides as she stood on a makeshift podium on a truck. The security men as well as her supporters made themselves into a shield to give her cover. She was forced into her bullet-proof car, which was also fired at. She miraculously escaped all that, but 18 people died in the attack. Among them were two of her close associates in the Awami League. It was a professional job.
Those who threw the grenades knew how to do so, because it involved extracting a pin within three to four seconds before throwing. Those who shot at the bullet-proof car were also trained hands. And there is no doubt that all of them, said to number 30 to 35, had one target: Hasina. Till I was in Dhaka, none had been arrested, no one in the police had been suspended and no one at the top had any clue either. The government has appointed a judicial commission but has done little to collect evidence. The two unexploded grenades, which could have provided a lead, were defused soon after. The police used teargas to disperse the crowd, which included the assailants, who apparently used the opportunity to escape. The police first returned the truck to its owner but retrieved it later following the public outcry. Hasina, whom I met first, had no doubt that it was the job of the army, which she alleged was against the liberation of Bangladesh. She suspected a deep conspiracy in which the highest in the ruling BNP were involved. She said that Pakistan too had some role to play. "This was an attack on secular democratic forces," she said. "I would say that those who could not kill me and my sister on August 15, 1975, when they assassinated my father, tried to implement their unfinished agenda." She had no faith in Prime Minister Khaleda Zia's government, nor in the judicial commission, which she had already boycotted. Hasina was unnerved when I met her. But she had no doubt that more attempts would be made to "finish me." One of her associates present during the meeting mentioned the name of Tariq, son of Khaleda. He also said that the BNP and the Jamaat-i-Islami were out to eliminate "our charismatic leader." Hasina was not opposed to the FBI and Interpol, which had already swung into action to find the culprits. "But from where will they get evidence, because the government has destroyed it all?" she asked.
The FBI and Interpol are banking on the footage of Bangladesh Television, whose camera, perched on a third floor in a nearby building, filmed the incident from the beginning. Many faces have been blown up into huge pictures, some of them reportedly of known criminals. "I cannot recognize any," said Hasina, "because my glasses were broken when I was pushed into the car." Where do we go from here, I asked her. "I wish I knew. But they would not rest until they have killed me," she said. The version of Prime Minister Khaleda was entirely different. She denied all the allegations. She said that something tragic had happened and "we must find out who are behind it." She appealed to Hasina to help her get to the bottom of the crime. She said she had allowed a full parliamentary debate on the murderous attack. "I wrote to her and wanted to meet her but she refused to even respond," said Khaleda. ("I did not invite her to my place," said Hasina, "because anything could have happened when relatives of the killed were sitting all the time at my house.") I told Khaleda that Hasina alleged that she was behind the attack. She said in reply: "Tell me, what will I gain by killing her? I am doing well and in control of things. The country is peaceful. We have done a tremendous job in rehabilitating 40 million people who were affected by floods. Why should I do something that could upset everything?" I believe you are putting the blame on India, I said to Khaleda. "That is not true. Some people are saying that." Still she did not say that India was not to blame, even when I asked her whom she suspected. "The investigation is yet to be completed," is all that she would say. After a pause, she said it was the job of "outsiders." When I asked who, she said that there were "some Awami League members in Kolkata. They would be questioned on their return." Do you suspect them? "We have to know everything," she said. Khaleda then went back to her theme of unity in the country.
"I have talked to some editors, ex-bureaucrats and others to bring us together. I hope they will help me in this task because the country is bigger than all of us." I sometimes fear, I told her, that the army might walk in again. She said: "We are a democracy. The army has no business to interfere. So many tragedies have taken place all over the world. Did the army come in after the 9/11 incidents in America? In your own country even parliament was attacked. The government dealt with it. Why should it be different in Bangladesh?" Khaleda was full of praise for Prime Minister Manmohan Singh. He was a good administrator, she said. "That is what a government requires." She said she was happy with former prime minister Atal Behari Vajpayee as well. She wanted good relations with India but complained that "some newspapers in your country were hurting the process by misinterpreting the August 21 incident" (the attack on Hasina). Could that be the reason why Khaleda's foreign minister, Murshed Khan, said at a seminar that India could not pick "one party for support"? He did not mention the name, but his reference to the Awami League was obvious. Law Minister Moudud Ahmed defended Murshed's outburst thus: "When the Indian prime minister rings up only Sheikh Hasina after the incident and not also the prime minister, as US Secretary of State Colin Powell did, what inference should we draw?" When I told Khaleda how people-to-people contact between India and Pakistan was changing the climate in the two countries, and cited the example of the lighting of candles on the night of August 14-15 at the Wagah border, she said she would like a similar thing on the India-Bangladesh border. "I am all for people-to-people contact." People-to-people contact between India and Bangladesh will take some time to mature. But people-to-people contact within Bangladesh is the need of the hour. The nation is more sharply divided after the August 21 incident. The writer is a leading columnist based in New Delhi.
Help the African Union It is now widely understood that the situation in Darfur, in the remote western desert of Sudan, is the most serious humanitarian crisis in the world. But the disaster in Darfur is not the result of natural causes, such as drought or floods; it is man-made, and if the outside world continues to treat it simply as a humanitarian crisis without addressing its underlying causes, it will not end. With or without peacekeepers, what we have seen so far would be just the beginning of a long-term catastrophe that would leave behind an unresolved political crisis, continuing warfare and another nearly permanent refugee population, requiring endless and immense international assistance - this time in a trackless area the size of France. The international humanitarian response, led by the United States, has saved many lives and must be continued, despite its huge costs. There are already at least 500 international aid workers in Darfur, backed up by at least 10 times as many local employees. Travelling with them for a few days last week was inspiring; the outside world can scarcely imagine how hellish and dangerous their mission is. But the relief effort is far short of what it needs in pledges and commitments. The most disgraceful performance of all comes from the oil-rich Arab states, which have contributed virtually nothing. But - and this is true of almost all refugee crises - dealing only with the humanitarian aspect of the problem is like putting a small bandage on a haemorrhage. The underlying causes of the suffering in Darfur are complicated, but the human consequences are there for any visitor to see: many hundreds of thousands of ethnic African refugees fleeing into makeshift and terrible refugee camps before the attacks of the vicious (and primarily Arab) Janjaweed militia, who are, despite official denials, supported and encouraged by elements in the Sudanese government. 
The goal of the central government in supporting and encouraging the Janjaweed seems clear: to "depopulate" - that is, destroy - the villages and create as many refugees as possible in order to eliminate the village structure in Darfur, which is a base for the activity of two rebel movements opposing the central government. These movements are virtually unknown outside of the region; they are the Sudanese People's Liberation Movement and the Justice and Equality Movement. They are surprisingly well organized and receive outside assistance, primarily from Sudan's eastern neighbor, Eritrea, which, despite its small size, has shown since its independence in 1993 a surprising aggressiveness toward its much larger neighbours (including Ethiopia, with which it has fought two disastrous wars). Both rebel groups find easy sanctuary in the deserts of Chad, Sudan's neighbor to the west. -Dawn/ Washington Post Service Jon Corzine, a Democratic senator from New Jersey, and Richard Holbrooke, a former ambassador to the United Nations, visited Darfur last week.
Linnean Society of London
Formation: 1788 (royal charter: 1802)
Purpose: Natural History, Evolution & Taxonomy
Motto: Naturae Discere Mores ("To Learn the Ways of Nature")

The Linnean Society of London is a learned society dedicated to the study and dissemination of information concerning natural history, evolution, and taxonomy. It possesses several important biological specimen, manuscript and literature collections, and publishes academic journals and books on plant and animal biology. The society also awards a number of prestigious medals and prizes. A product of the 18th-century enlightenment, the Society is the oldest extant biological society in the world and is historically important as the venue for the first public presentation of the theory of evolution by natural selection on 1 July 1858. The patron of the society is Queen Elizabeth II. Honorary members include Emeritus Emperor Akihito of Japan, King Carl XVI Gustaf of Sweden (both of whom have active interests in natural history), and the eminent naturalist and broadcaster Sir David Attenborough. The Linnean Society was founded in 1788 by botanist Sir James Edward Smith. The society takes its name from the Swedish naturalist Carl Linnaeus, the 'father of taxonomy', who systematised biological classification through his binomial nomenclature. He was known as Carl von Linné after his ennoblement, hence the spelling 'Linnean', rather than 'Linnaean'. The society had a number of minor name variations before it gained its Royal Charter on 26 March 1802, when the name became fixed as "The Linnean Society of London". In 1802, as a newly incorporated society, it comprised 228 fellows. It is the oldest extant natural history society in the world. Throughout its history the society has been a non-political and non-sectarian institution, existing solely for the furtherance of natural history.
The inception of the society was the direct result of the purchase by Sir James Edward Smith of the specimen, book and correspondence collections of Carl Linnaeus. When the collection was offered for sale by Linnaeus's heirs, Smith was urged to acquire it by Sir Joseph Banks, the eminent botanist and President of the Royal Society. Five years after this purchase Banks gave Smith his full support in founding the Linnean Society, and became one of its first Honorary Members. The society has numbered many prominent scientists amongst its fellows. One such was the botanist Robert Brown, who was Librarian, and later President (1849-1853); he named the cell nucleus and discovered Brownian motion. In 1854 Charles Darwin was elected a fellow; he is undoubtedly the most illustrious scientist ever to appear on the membership rolls of the society. Another famous fellow was biologist Thomas Huxley, who would later gain the nickname "Darwin's bulldog" for his outspoken defence of Darwin and evolution. Men notable in other walks of life have also been fellows of the society, including the physician Edward Jenner, pioneer of vaccination; the Arctic explorers Sir John Franklin and Sir James Clark Ross; the colonial administrator and founder of Singapore, Sir Thomas Stamford Raffles; and the Prime Minister of Britain, Lord Aberdeen. Since 1857 the society has been based at Burlington House, Piccadilly, London, an address it shares with a number of other learned societies: the Geological Society of London, the Royal Astronomical Society, the Society of Antiquaries of London and the Royal Society of Chemistry. The first public exposition of the 'Theory of Evolution by Natural Selection', arguably the greatest single leap of progress made in biology, was presented to a meeting of the Linnean Society on 1 July 1858. At this meeting a joint presentation of papers by Charles Darwin and Alfred Russel Wallace was made, sponsored by Joseph Hooker and Charles Lyell, as neither author could be present.
In 1904 the society elected its first female fellows, following a number of years of tireless campaigning by the botanist Marian Farquharson. Whilst the society's council was reluctant to admit women, the wider fellowship was more supportive; only 17% voted against the proposal. Among the first to benefit from this were the ornithologist and photographer Emma Louisa Turner, Lilian J. Veley, a microbiologist, and Annie Lorrain Smith, a lichenologist and mycologist, all formally admitted on 19 January 1905. Also numbered in the first cohort of women to be elected in 1904 were: the paleobotanist, and later pioneer of family planning, Marie Stopes; the philanthropist Constance Sladen, founder of the Percy Sladen Memorial Trust; and Alice Laura Embleton (1876–1960), biologist, zoologist and suffragist, who had been one of the earliest women to deliver a paper to the society, on 4 June 1903. The society's connection with evolution remained strong into the 20th century. Sir Edward Poulton, who was President 1912–1916, was a great defender of natural selection, and was the first biologist to recognise the importance of frequency-dependent selection. In April 1939 the threat of war obliged the society to relocate the Linnean collections out of London to Woburn Abbey in Bedfordshire, where they remained for the duration of World War II. This move was facilitated by the 12th Duke of Bedford, a Fellow of the Linnean Society himself. Three thousand of the most precious items from the library collections were packed up and evacuated to Oxford; the country house of librarian Warren Royal Dawson provided a refuge for the society's records. The first female President of the society was Irene Manton (1973 to 1976), who pioneered the biological use of electron microscopy. Her work revealed the structure of the flagellum and cilia, which are central to many systems of cellular motility.
Recent years have seen an increased interest within the society in issues of biodiversity conservation. This was highlighted by the inception in 2015 of an annual award, the John Spedan Lewis Medal, specifically honouring persons making significant and innovative contributions to conservation. Fellowship requires nomination by at least one fellow, and election by a minimum of two-thirds of those electors voting. Fellows may employ the post-nominal letters 'FLS'. Fellowship is open to both professional scientists and to amateur naturalists who have shown active interest in natural history and allied disciplines. Having authored relevant publications is an advantage, but not a necessity, for election. Following election, new fellows must be formally admitted, in person at a meeting of the society, before they are able to vote in society elections. Admission takes the form of signing the membership book, and thereby agreeing to an obligation to abide by the statutes of the society. Following this the new fellow is taken by the hand by the president, who recites a formula of admission to the fellowship. Other forms of membership exist: 'Associate', for supporters of the society who do not wish to submit to the formal election process for fellowship, and 'Student Associate', for those registered as students at a place of tertiary education. Associate members may apply for election to the fellowship at any time. Finally, there are three types of membership that are prestigious and strictly limited in number: 'Fellow honoris causa', 'Foreign', and lastly, 'Honorary'. These forms of membership are bestowed following election by the fellowship at the annual Anniversary Meeting in May. Meetings have historically been, and continue to be, the main justification for the society's existence. Meetings are venues for people of like interests to exchange information, talk about scientific and literary concerns, exhibit specimens, and listen to lectures. 
Today, meetings are held in the evening and also at lunchtime. Most are open to the general public as well as to members, and the majority are offered without charge for admission. On or near the 24th of May, traditionally regarded as the birthday of Carl Linnaeus, the Anniversary Meeting is held. This is for fellows and guests only, and includes ballots for membership of the council of the society and the awarding of medals. On May 22, 2020, for the first time in its history, the Anniversary Meeting was held online via videotelephony. This was due to restrictions on public gatherings imposed in response to the COVID-19 pandemic.

Medals and prizes

The Linnean Society of London aims to promote the study of all aspects of the biological sciences, with particular emphasis on evolution, taxonomy, biodiversity, and sustainability. Through awarding medals and grants, the society acknowledges and encourages excellence in all of these fields. The following medals and prizes are awarded by the Linnean Society:
- Linnean Medal, established 1888, awarded annually to alternately a botanist or a zoologist or (as has been common since 1958) to one of each in the same year.
- Darwin-Wallace Medal, first awarded in 1908, for major advances in evolutionary biology.
- H. H. Bloomer Award, established 1963 from a legacy by the amateur naturalist Harry Howard Bloomer, awarded to "an amateur naturalist who has made an important contribution to biological knowledge".
- Trail-Crisp Award, established in 1966 from the amalgamation of two previous awards (both dating to 1910), awarded "in recognition of an outstanding contribution to biological microscopy that has been published in the UK".
- Bicentenary Medal, established 1978, on the 200th anniversary of the death of Linnaeus, "in recognition of work done by a person under the age of 40 years".
- Jill Smythies Award, established 1986, awarded for botanical illustrations.
- Linnean Gold Medal, for services to the society, awarded in exceptional circumstances, from 1988.
- Irene Manton Prize, established 1990, for the best dissertation in botany during an academic year.
- Linnean Tercentenary Medal, awarded in 2007 in celebration of the three hundredth anniversary of the birth of Linnaeus.
- John C Marsden Medal, established 2012, for the best doctoral thesis in biology examined during a single academic year.
- John Spedan Lewis Medal, established 2015, awarded to "an individual who is making a significant and innovative contribution to conservation".
- Sir David Attenborough Award for Fieldwork, established in 2015.

Linnaeus' botanical and zoological collections were purchased in 1783 by Sir James Edward Smith, the first President of the society, and are now held in London by the society. The collections include 14,000 plants, 158 fish, 1,564 shells, 3,198 insects, 1,600 books and 3,000 letters and documents. They may be viewed by appointment and there is a monthly tour of the collections. Smith's own plant collection of 27,185 dried specimens, together with his correspondence and book collection, is also held by the society. In December 2014, the society's specimen, library, and archive collections were granted designated status by the Arts Council England, recognising collections of national and international importance (one of only 148 institutions so recognised as of 2018). The Linnean Society began its extensive series of publications on 13 August 1791, when Volume I of Transactions was produced. Over the following centuries the society published a number of different journals, some of which specialised in more specific subject areas, whilst earlier publications were discontinued.
Those still in publication include: the Biological Journal of the Linnean Society, which covers the evolutionary biology of all organisms, the Botanical Journal of the Linnean Society, which focuses on plant sciences, and the Zoological Journal of the Linnean Society, which focuses on animal systematics and evolution. The Linnean is a biannual newsletter. It contains commentary on recent activities and events, articles on history and science, and occasional biographies/obituaries of people connected to the Linnean Society; it also includes book reviews, reference material and correspondence. The society also publishes books and Synopses of the British Fauna, a series of field-guides. In addition, Pulse, an electronic magazine for Fellows, is produced quarterly.

Presidents

- 2018– Sandra Knapp
- 2015–2018 Paul Brakefield
- 2012–2015 Dianne Edwards
- 2009–2012 Vaughan R. Southgate
- 2006–2009 David F. Cutler
- 2003–2006 Gordon McGregor Reid
- 2000–2003 Sir David Smith
- 1997–2000 Sir Ghillean Prance
- 1994–1997 Brian G. Gardiner
- 1991–1994 John G. Hawkes
- 1988–1991 Michael Frederick Claridge
- 1985–1988 William Gilbert Chaloner
- 1982–1985 Robert James "Sam" Berry
- 1979–1982 William T. Stearn
- 1976–1979 Humphry Greenwood
- 1973–1976 Irene Manton
- 1970–1973 Alexander James Edward Cave
- 1967–1970 Arthur Roy Clapham
- 1964–1967 Errol White
- 1961–1964 Thomas Maxwell Harris
- 1958–1961 Carl Pantin
- 1955–1958 Hugh Hamshaw Thomas
- 1952–1955 Robert Beresford Seymour Sewell
- 1949–1952 Felix Eugen Fritsch
- 1946–1949 Sir Gavin de Beer
- 1943–1946 Arthur Disbrowe Cotton
- 1940–1943 Edward Stuart Russell
- 1937–1940 John Ramsbottom
- 1934–1937 William Thomas Calman
- 1931–1934 Frederick Ernest Weiss
- 1927–1931 Sidney Frederic Harmer
- 1923–1927 Alfred Barton Rendle
- 1919–1923 Arthur Smith Woodward
- 1916–1919 Sir David Prain
- 1912–1916 Sir Edward Poulton
- 1908–1912 Dukinfield Henry Scott
- 1904–1908 William Abbott Herdman
- 1900–1904 Sydney Howard Vines
- 1896–1900 Albert Charles Lewis Gotthilf Günther
- 1894–1896 Charles Baron Clarke
- 1890–1894 Charles Stewart
- 1886–1890 William Carruthers
- 1881–1886 Sir John Lubbock, 4th Baronet (later 1st Baron Avebury)
- 1874–1881 George James Allman
- 1861–1874 George Bentham
- 1853–1861 Thomas Bell
- 1849–1853 Robert Brown
- 1837–1849 Edward Stanley
- 1833–1836 Edward St Maur, 11th Duke of Somerset
- 1828–1833 Edward Smith-Stanley, 13th Earl of Derby
- 1788–1828 Sir James Edward Smith
2019-04-22 - By Robert Elder

If you want, you can also download this guide as a single PDF. The purpose of this guide is to show you the steps required to build your own automated backup solution that you can use for backing up source code or small files using git and SSH. This backup technique can be used to back up your data to another computer in your house or office, but you can also use it to back up to multiple locations over the internet securely. This guide is targeted at individuals who plan to build this solution in an environment where the client and server will both use Linux. It may be possible to adapt the steps shown here to work on a Windows machine, but that won't be covered in this guide. The following Linux commands will be used. Some of them will be explained below, but you may want to Google them if you've never seen them before:

In the rest of this article I will present the final solution as a series of smaller bite-sized 'goals' that build upon each other. It may not be obvious how each individual goal connects to the final outcome of backing up your files, but eventually this will all be tied together.

Goal 0: Editing Files and Installing Prerequisites

Later in this guide, you'll need to make a few edits to files on the command-line. If you're new to using the command-line, this might be difficult for you if you don't know what editor to use. Personally, I use an editor called 'vim', but if you've never heard of that before, I suggest using 'nano'. With nano, you can edit or create a file by running a command like this:

nano README.md

For the noobs out there, the instructions at the bottom of the nano screen that show the caret symbol mean that you press the control key and the letter key at the same time to perform the desired action. For example, '^O' means that you can press 'Ctrl + O' to 'write out' and save the file to disk.
I won't say much else about nano since that's fairly off-topic and you can find guides online elsewhere. Another thing to do before we get started is to install the prerequisites. I'll assume that you're using Ubuntu on your desktop/laptop. Here's the install command we need:

sudo apt-get install nmap git

You'll know that nmap is installed if you can run this command and get a version number back:

nmap --version

Once you've got nmap and git installed and you're confident that you can work with a command-line editor that can edit and create files, then you've completed goal 0!

Goal 1: Creating The Simplest git Remote

In this section, we'll make sure you have the confidence to set up your own git 'remote'. What is a 'remote' you ask? Well, it's the 'remote' place where your code and files end up when you do 'git push origin master' to push your code to GitHub (or BitBucket, or gitlab etc.). If you do a 'git clone ...', you are copying the files from the 'remote'. A 'remote' can be located on another computer, on GitHub, or even in another folder on the same computer. The simplest example of setting up a git 'remote' is actually just to create a directory on your computer and turn it into a 'remote'. First, let's make sure git is installed:

sudo apt-get update
sudo apt-get install git

Now, let's set up our git 'remote'. You can run these commands in any directory you like:

# Set up and initialize a 'remote'
mkdir remote1
cd remote1
git init --bare
cd ..

# Set up and initialize a local repo
mkdir my-repo
cd my-repo
git init
cd ..

The folder 'remote1' now contains a fully functional 'remote' that you can push code to, just like GitHub! The folder 'my-repo' contains an empty repo that you can start committing files to. Let's do that now:

cd my-repo
echo "This is my readme" > README.md
git add .
git commit -m "Create a readme file."

Now, what happens if you try to push to 'origin master'?
$ git push origin master
fatal: 'origin' does not appear to be a git repository
fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.

It didn't work because we didn't describe the relationship between our current repo (the my-repo folder) and the remote (the remote1 folder)! You can list all remotes by using the following command:

git remote --verbose

But we don't have any 'remotes' set up, so let's add one right now called 'origin':

git remote add origin ../remote1

Now, let's check to see what remotes there are:

$ git remote --verbose
origin  ../remote1 (fetch)
origin  ../remote1 (push)

Now let's try to push again:

$ git push origin master
Counting objects: 3, done.
Writing objects: 100% (3/3), 235 bytes | 235.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To ../remote1
 * [new branch]      master -> master

Awesome, it worked! We just created our own 'git remote' that we can push our repository to. This remote is still just on the same computer in another directory, but later we'll show how you can put it on another computer. You can even clone from the repo just like you would with any other git repo URL:

git clone /the/path/to/remote1/

Goal 2: Using SSH to Access Another Computer on the LAN

For this goal, we'll focus on making sure that you can use SSH to access the Raspberry Pi and run commands on it remotely. This goal doesn't have anything to do with git, but we'll use the two together in another goal. If you're not sure what 'SSH' is or what it does, you should do a quick skim of the article what is ssh before continuing with the goal. For the following steps, I will assume that you're going to be working on a LAN setup that works something like this: To describe the setup above, this is one where you have your main laptop or desktop connected to the router (using either an ethernet cable or WiFi), and your Raspberry Pi also connected to the same router using an ethernet cable.
With this setup, the first thing we should do is identify what local IP address the Raspberry Pi has on the LAN. In order to do that, you can run this command from the laptop to help us:

ip address

Here's the output that I get on my current laptop:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d8:cb:8a:f3:2b:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.112/24 brd 192.168.0.255 scope global dynamic noprefixroute enp3s0
       valid_lft 580673sec preferred_lft 580673sec
    inet6 2607:f2c0:e570:1c01:19ef:d1a8:ebea:214f/64 scope global temporary dynamic
       valid_lft 563416sec preferred_lft 61948sec
    inet6 2607:f2c0:e570:1c01:83eb:165e:445b:b377/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 563416sec preferred_lft 131416sec
    inet6 fe80::7f28:1a7d:9453:79b1/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:2b:6e:14:e9:cf brd ff:ff:ff:ff:ff:ff

Note the part reading '192.168.0.112/24' in this example, which indicates that my laptop has IPv4 address 192.168.0.112 on my LAN, with the first 24 bits of that address being common to every computer on the network. Please be aware that the number 192.168.0.112/24 is just an example and your IP address will be different. Usually, your LAN IP address will start with something like '192.168.0.', but some routers also use addresses that start with '192.168.1.' or '192.168.25.' by default. In fact, if you look at the picture above closely, you'll see that on the computer I used to pose for this photo, its IP address was actually 192.168.25.101/24.
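To make the '/24' concrete: with a /24 mask the first three octets are the network part, so every machine on the LAN shares them and only the last octet varies. Here is a quick shell sketch of that split (it is only valid for /24 masks; the address is the example one from above):

```shell
# For a /24 network, the first three octets identify the network and the
# last octet identifies the host, so we can split on the dots with cut.
addr=192.168.0.112
prefix=$(echo "$addr" | cut -d. -f1-3)
echo "network: $prefix.0/24 (usable hosts: $prefix.1 - $prefix.254)"
```

Running it prints network: 192.168.0.0/24 (usable hosts: 192.168.0.1 - 192.168.0.254), which is exactly the range that the port scan in the next step will sweep.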
Often, you can also manually configure this LAN IP address prefix in the router settings. Now, since the Raspberry Pi is also connected to the same network as the '192.168.0.112/24' address, we can run a port scan using this command:

nmap 192.168.0.112/24

to show all computers on this same network that answer back on common open ports. Since we're specifically interested in using SSH to access the Raspberry Pi, you can use this more specific command to only look for computers that answer back on port 22 (the default SSH port):

nmap -p 22 192.168.0.112/24

Note that sometimes, I've found that you will need to explicitly specify port 22 in order for the open port to actually be found. I've also experienced situations where I need to repeat the scan several times before it detects the open port. I assume that this is related to some kind of security filtering that certain routers do. Once you finish running nmap, you should see something like this:

Starting Nmap 7.60 ( https://nmap.org ) at 2019-00-00 23:19 EDT
Nmap scan report for router (192.168.0.1)
Host is up (0.00051s latency).
...
Nmap scan report for 192.168.0.177
Host is up (0.00054s latency).
PORT   STATE SERVICE
22/tcp open  ssh
...
Nmap done: 256 IP addresses (N hosts up) scanned in 2.55 seconds

In this example, the IP address 192.168.0.177 is the address of the Raspberry Pi when it connects to my router, and doing a port scan with nmap was how we found it. When you run this command, you might get multiple results that have port 22 open, and if that's the case, it means you probably have multiple computers connected to the router that are running SSH. If you don't find any other computers that are running SSH, the Raspberry Pi might not have been set up to have its SSH server turned on yet! Newer versions of the Raspberry Pi operating systems usually have SSH disabled by default. Here is some Raspberry Pi documentation that describes how to enable SSH.
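As an aside, nmap's 'greppable' output mode makes this scan easy to script. The one-liner below is a sketch that assumes the same example subnet as above; it prints only the IP addresses that answered on port 22:

```shell
# -oG - writes one summary line per host to stdout; --open keeps only hosts
# where the scanned port is actually open; awk then prints the IP column.
nmap -p 22 --open -oG - 192.168.0.112/24 | awk '/22\/open/ {print $2}'
```

On the example network above, the resulting list would include 192.168.0.177, which you could feed straight into a script or an ssh command.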
At this point, you should be able to run this command:

ssh pi@192.168.0.177

And, depending on how much you've set up your Raspberry Pi, it will likely ask you for a password. Do a Google search to find out the default password for your version of the Raspberry Pi OS. Once you successfully get access to the Raspberry Pi, then you've completed this goal. You can use the 'exit' command to exit out of the Raspberry Pi SSH session:

exit

You have now successfully completed goal 2!

Goal 3: Using Public And Private Keys

For this goal, we'll set up key-based login so you don't have to type the Pi's password every time. First, generate a keypair on your laptop/desktop:

ssh-keygen -t ed25519 -f ~/.ssh/my-first-keypair

Then copy the public key onto the Raspberry Pi (you'll be asked for the Pi's password one last time):

ssh-copy-id -i ~/.ssh/my-first-keypair.pub pi@192.168.0.177

Once the following command logs you in without asking for a password, you've completed goal 3:

ssh -i ~/.ssh/my-first-keypair pi@192.168.0.177

Goal 4: Setting Up An SSH config File

For this goal, our objective is to make it easier to use SSH to access your Raspberry Pi. For example, we'll make it so that instead of typing this:

ssh -i ~/.ssh/my-first-keypair pi@192.168.0.177

you can type this instead:

ssh pi-backup

which is much shorter and easier to remember! The way to accomplish this is by editing a file located at '~/.ssh/config'. It's likely that this file won't already exist, and you'll have to create it. Run this command to edit the ssh config file:

nano ~/.ssh/config

And to set up the alias for 'pi-backup', add this to the file:

Host pi-backup
    HostName 192.168.0.177
    Port 22
    User pi
    IdentityFile ~/.ssh/my-first-keypair

Then save and exit. Also, keep in mind that the IP address '192.168.0.177' written above is just a specific example. You'll need to replace it with the IP address of your Raspberry Pi. Once you do, you should now be able to run this command:

ssh pi-backup

Once you're able to SSH into the Pi using this easier method, you've successfully completed goal 4!

Goal 5: Pushing to a repository on the Raspberry Pi Through SSH

Now we're ready to do something that looks a bit closer to actually backing up files onto your Raspberry Pi! Remember all the steps you did for goal 1? You're going to repeat them, but with a couple differences.
First, let's use SSH to log into the Pi:

ssh pi-backup

Now, set up a git 'remote' in your home directory on the Raspberry Pi:

# Set up and initialize a 'remote'
mkdir my-first-backup.git
cd my-first-backup.git
git init --bare

Then exit back to your laptop/desktop computer:

exit

Now create a local git repo on your laptop/desktop and add some data into it:

# Set up and initialize a local repo
mkdir my-repo
cd my-repo
git init
echo "Hello World" > README
git add .
git commit -m "Create a readme file."

The last step is to tell the local git repo that the 'remote' we want to use can be accessed through an SSH tunnel using git. For this, we add a remote that uses the ssh config file alias as the prefix, followed by the directory on the Raspberry Pi where we initialized the remote. We'll also run a one-time step to set 'master' as the default branch on the remote.

git remote add pi-backup pi-backup:~/my-first-backup.git
git push --set-upstream pi-backup master

You should now be fully set up to push and pull to the repo located on your Raspberry Pi! Let's do another test just to make sure everything is working:

echo "Awesome" >> README
git add .
git commit -m "Edited readme file."

Now when you run this command:

git push pi-backup

you should see something like this:

Counting objects: 3, done.
Writing objects: 100% (3/3), 226 bytes | 226.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To pi-backup:~/my-first-backup.git
 * [new branch] master -> master

If you do, you've completed goal 5!

Goal 6: Securing the Raspberry Pi

I strongly recommend that you consider securing your Raspberry Pi if you want to keep this solution up for any length of time. You can read Beginners Guide to Securing A Raspberry Pi for more details on this.

Setup Notes For Old Laptop

There isn't a lot to say about this topic, but it is worth mentioning that you could also make this backup solution work on an old spare laptop.
I would suggest installing Ubuntu on the old laptop, since most of the install instructions listed here that work for the Pi will also work on Ubuntu. You can download a copy of Ubuntu here.

Setup Notes For Raspberry Pi

If you bought a brand new Raspberry Pi, you may or may not need to flash the SD card with a new OS image. Some Raspberry Pi kits come with an OS like Raspbian pre-installed. If it is pre-installed, you should take note of what version of the OS is pre-installed. If an OS other than Raspbian is installed, you will need to consult the documentation for that OS to learn how to make sure SSH will be set up and ready to use. If you decide to install a custom version of Raspbian OS yourself, or if your SD card came blank, consult the guide at Raspbian install instructions, which provides an overview of the process for Windows, Linux, and Mac. Before you install Raspbian OS, you will notice that there are at least two different versions of the OS image. One is called the 'desktop' version, and one is called the 'lite' version. The 'desktop' version includes all the software needed to present a nice user interface that reminds you of Windows, with lots of clickable buttons and icons etc. The 'lite' version doesn't have any of this and expects you to know Linux commands, because it only presents you with a terminal where you can type in commands. If you're a n00b, you should probably go with the 'desktop' version. If you're more experienced with Linux commands, you may prefer the 'lite' version because it will run faster, use less RAM, and won't require as large an SD card. Once you've got your Raspbian OS installed, the next thing to do is make sure that you have an SSH server running on it so you can access it through the command-line. Consult this article on setting up SSH on Raspbian OS for details. If you're using the UI, there is an easy UI feature to enable SSH.
If you're on the command-line, you can do:

sudo touch /boot/ssh

Then, reboot the Raspberry Pi, and run the following command to make sure that the SSH server is running:

ps -ef | grep sshd

You should see at least one entry that contains a reference to the sshd executable '/usr/sbin/sshd', like this:

root 1234 1 0 07:46 ? 00:00:00 /usr/sbin/sshd -D

If you don't, then the SSH server is probably not running and you'll have to debug why.

Static Versus Dynamic IPs

The backup solution described in this article has assumed that you have your Raspberry Pi hosted on the LAN at a given local IP address (192.168.0.177 in our example). However, we haven't considered the fact that the next time you reset all your devices, this IP address is not guaranteed to be the same. This means that any SSH rules you've set up won't work anymore. So how do we solve this problem to make sure our backup solution is truly 'automatic'? The answer to this question involves the DHCP protocol, which you may want to read up on. There is more than one way to guarantee that our Raspberry Pi always has the same IP address on our local network, and two different general approaches are:

- 1) Change the settings in your router to always assign the Raspberry Pi the same IP address. Using this solution, you change some settings on your router only and leave all of the settings alone on your Raspberry Pi. Your Raspberry Pi will continue to use the DHCP protocol to obtain a 'dynamic' IP address, but the router will remember that your Raspberry Pi (specifically, its MAC address) should always be assigned the same address. If you don't know how to log into your router's admin page, check the back of the router, as it will usually have a default username/password and IP address printed on it. You can log into many common household routers by using a web browser to access '192.168.0.1'.
You can also use the nmap command discussed elsewhere in this guide to scan for anything on the LAN that talks on port 80.

- 2) Change the Raspberry Pi's configuration to give it a static IP address. Using this method, you change some of the network settings on your Raspberry Pi so that it always uses the same IP address every time it boots up. In this case, it does not rely on the DHCP server hosted on your router to decide what IP address it has. It simply chooses to use an IP like '192.168.0.177' regardless of what everything else on the network is doing. In this situation, you need to be careful to make sure nothing else gets assigned the same IP address on the network, otherwise both computers would experience problems.

Option 1 is probably the easiest, although it assumes that your router includes such a configuration feature. Usually, what you can do is connect the Raspberry Pi to the router, and then log into the router admin panel, where it will show you what devices are connected and present you with the option to pin an IP address somewhere.

If you decide to use a static IP address for the Raspberry Pi, you should be careful not to use a static IP address that falls within the DHCP lease range that the router can assign. Otherwise, the router could accidentally assign the same IP address that your Raspberry Pi is using to another computer on the network. To determine the DHCP lease range, you can likely find it somewhere inside the router's admin panel. Also, make sure that the static IP that you use has the same subnet mask. For more reading on this topic, consult the Q/A on Assigning a fixed IP address to a machine in a DHCP network.

Making it Work Over The Internet

It's great to be able to make local backups in your home or office, but wouldn't it be great to be able to do backups from anywhere that you can get an internet connection? This is totally possible!
There are actually many ways to accomplish this task, but I'm going to show you one method that involves setting up a proxy server with your favourite cloud provider, and then tunneling the connection through the proxy server and down to your Raspberry Pi. For instructions on how to use SSH tunneling over the internet through a proxy server, see Using SSH to Connect to Your Raspberry Pi Over The Internet.

Automation Using Cron

Cron jobs are a fast and easy way to automate various tasks on a Linux/Unix system. To set up a cron job, you open up the crontab editor (crontab -e) and edit your user's cron file, where you use a special syntax to describe what Linux command you want to run, and when you want it to run. The first time you try to edit your crontab file, it will usually ask you what editor you want to use. I would suggest using nano if you're not experienced with command-line editors yet. Here is an example of a cron file that has a single entry that will run the script 'do-backup.sh' once per day at 6:01pm:

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
1 18 * * * /home/robert/do-backup.sh

The lines that start with '#' are just comments. For the actual backup script, you can use something like this:

#!/bin/bash
cd ~/my-repo
git push pi-backup master

Just replace the directory name, the remote, and the branch name with whatever git repo you want to push. The syntax for cron jobs is easy to forget, and it can also get more complicated. A really good site for remembering cron syntax is https://crontab.guru/.

You can also use a cron job to regularly run the script provided in the section 'Making it Work Over The Internet', so that the Raspberry Pi keeps the remote connection tunnel listening for new SSH connections even if the tunnel dies, the power resets, or something else breaks the connection. Assuming that you put the script inside a file called '~/connection-script.sh', you could do:

5 * * * * ~/connection-script.sh

which will run the connection keep-alive script every hour, 5 minutes after the hour. Another use for cron is to create a rule to periodically install updates, although there is also an automatic updates feature that may be better suited to this purpose.

Flash Storage Issues

Some people encounter issues with corrupted SD cards with their Raspberry Pi setup. You can read in detail about some of the causes and solutions to the problem of flash storage.

Using An External USB Disk

One way to avoid using flash completely is to use an external hard drive to host the data that you're backing up. When you plug in most USB hard drives, you can usually find out where they have mounted by using the 'df' command.
You'll see output like this:

Filesystem      1K-blocks      Used  Available Use% Mounted on
udev             10200812         0   10200812   0% /dev
tmpfs             2046288      1204    2045084   1% /run
/dev/sda1       921923300 589451908  285570556  68% /
/dev/sdb1      3844607992     90140 3649152324   1% /mnt

Here, the 'sdb1' entry is the external USB hard disk. In your case, the device name will be different, but it will sometimes auto-mount to '/mnt'. If you don't see your external USB disk in the output of 'df', then it might not be mounted. In order to mount it, you'll need to use a tool like 'fdisk', which can list all storage devices, even ones that are not mounted. Explaining fdisk is beyond the scope of this article, but if you do end up using it, just make sure you read the documentation. Fdisk is able to modify the partition tables of your storage devices, and if you accidentally edit a partition table of one of your storage devices, you could lose all your data!

After you find out which storage device is your USB disk, you can use the 'mount' command to manually mount it. However, there is a problem with using the 'mount' command to manually mount the USB disk: you may need to manually re-mount it every time you reboot the Raspberry Pi. Otherwise, when your script tries to push data to a git repo stored on a disk that isn't mounted, it will fail. You can fix this problem by editing the '/etc/fstab' file and instructing it to auto-mount the USB disk every time the Raspberry Pi starts. One drawback of editing the fstab file is that, by default, it will interrupt the boot process if the disk is not present when it tries to mount. This makes sense for a server with an internal hard disk, but for a removable USB drive that you may take out every once in a while, it can be annoying.
Therefore, you can use the special 'nofail' option in the fstab entry to prevent it from hanging up the boot process:

UUID=1234XXXX-AAAA-BBBB-CCCC-DDDDEEEEFFFF /mnt-my-USB ext4 defaults,nofail 0 0

Also, be very careful when editing your fstab file, and make sure you know what you're doing. If you accidentally switch where your disks are mounted or break your boot process, it may lead to mistakes that cause data loss. Finally, in order to add these entries in a way that is robust to race conditions over which device is detected first, use the UUID-based method of identifying devices. You can find the UUIDs of devices with the 'blkid' command:

sudo blkid

If you encounter trouble getting your SSH connections to work, especially when using tunneling through the proxy server, a very useful command to run is:

netstat -an

You may want to pipe the result of this command into less so you can look at the result more easily (press 'q' to exit):

netstat -an | less

The results of running this command will look something like this:

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0     88 198.51.100.7:22         203.0.113.5:52032       ESTABLISHED
tcp6       0      0 :::22                   :::*                    LISTEN

Active UNIX domain sockets (servers and established)
...

The example output above is what it looks like when I am SSH'ed into one of my servers. Pay special attention to the port numbers and states of each of the connections. In the above output, we can see that the first entry is 'tcp' (aka IPv4 TCP) and is listening for new connections from packets that have a destination port of 22, and are destined for any interface (0.0.0.0:22). Furthermore, we are listening for connections from any IP with any source port. In the second entry, we see that there is an ESTABLISHED connection from my laptop, which has IP 203.0.113.5, originating from port 52032 on my laptop (203.0.113.5:52032).
This connection is sending packets to the server at 198.51.100.7 on port 22 (no surprise, because that's the port for SSH connections). In the third entry, we see another listen socket for 'tcp6', which just means it is also listening for IPv6 connections. In the above output, there are no remote forwarded tunnels set up; you'll see more entries when there are. It may take you a while to get used to reading this output, but eventually you'll be able to glance at it and tell what is connected, what's waiting for connections, and what is unrelated.

Another thing you should do if you're having trouble setting up your SSH connection is use verbose mode when invoking SSH itself. You can enable full verbose mode with the '-vvv' flag:

ssh -vvv pi-backup

Here is an example of the kind of output you might see:

robert@computer:~$ ssh -vvv pi-backup
OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n  7 Dec 2017
debug1: Reading configuration data /home/robert/.ssh/config
debug1: /home/robert/.ssh/config line 3: Applying options for pi-backup
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 11: Applying options for *
...

Depending on what your problem is, you may be able to glean some useful information from the output that can help solve your problem.

In this article, we've discussed many topics related to hosting a backup solution using your Raspberry Pi or a spare laptop. This includes simple situations that only require communication with a Raspberry Pi hosted on the same LAN, but also more complex situations that require the connection to go over the internet. Concerns like flash memory corruption were discussed, with the conclusion that you should avoid buying the absolute rock-bottom cheapest flash memory, and also make sure you use a good power supply. A method of automating the backup 'push' operation was discussed that involves using cron jobs.
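To tie the goals above together, here is a sketch of what a complete 'do-backup.sh' could look like, combining the commit and push steps into one function. This is my own recap rather than code from the article; the repo path, remote name ('pi-backup'), and branch ('master') match the examples used throughout, so adapt them to your own setup:

```shell
#!/bin/bash
# Commit any local changes in the given repo, then push them to the
# 'pi-backup' remote configured in goal 5.
backup_repo() {
  cd "$1" || return 1
  # Only create a commit when the working tree actually changed.
  if [ -n "$(git status --porcelain)" ]; then
    git add -A
    git commit -m "Automatic backup: $(date '+%Y-%m-%d %H:%M')"
  fi
  git push pi-backup master
}

# Point this at your real repo, e.g. from a cron entry:
# backup_repo ~/my-repo
```

Because the push is a no-op when nothing changed, it's safe to run this from cron as often as you like.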
The gyroscope, however, has two: the game rotation vector sensor and the rotation vector sensor. Otherwise, use the following code to convert all its angles to degrees: You can now change the background color of the activity based on the third element of the orientations array. (Image via Pepperl+Fuchs) There are three basic configurations for photoelectric proximity sensors: reflective, through-beam, and proximity. There are different types of proximity sensors, like optical, ultrasonic, capacitive, inductive, and magnetic. Here's how you can unregister the listener: The SensorEvent object, which is available inside the onSensorChanged() method, has a values array containing all the raw data generated by the associated sensor. Be aware, though, that apps that use sensors inefficiently can drain a device's battery very quickly. Most developers today prefer software-based composite sensors over hardware sensors. You can determine the maximum range of any hardware sensor using the getMaximumRange() method of the associated Sensor object. A proximity sensor is a sensor able to detect the presence of nearby objects without any physical contact. A software sensor combines low-level, raw data from multiple hardware sensors to generate new data that is not only easy to use, but also more accurate. The app we'll be creating in this tutorial will not work on devices that lack a proximity sensor and a gyroscope. Why always expect users to tap buttons on their touchscreens? Accordingly, add the following code inside the onSensorChanged() method you created in the previous step: If you run the app now and hover your hand close to the top edge of your phone, you should see the screen turn red. Lately, even budget phones are being manufactured with a gyroscope built in, what with augmented reality and virtual reality apps becoming so popular.
Working with a software sensor is no different from working with a hardware one. Therefore, you must now associate a listener with the rotation vector sensor to be able to read its data. With the VCNL4010 you can easily read the proximity (i.e. The hardware is now complete. When the device proximity sensor detects a change between the device and an object, it notifies the browser of that change. First, get the SensorManager:

SensorManager mySensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);

Then get the proximity sensor:

Sensor myProximitySensor = mySensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);

Check the Proximity Sensor … By using the rotation vector sensor, let us now create an activity whose background color changes only when it's rotated by a specific angle. You could do all this yourself with LEDs and light sensors, but the VCNL4010 wraps all that logic up into a stand-alone chip for you! The hardware consists of an Arduino MKR1000 with a sonar sensor … To learn more about hardware sensors and the data they generate, you can refer to the official sensors API guide. In this example, an adjustable threshold is read from ThingSpeak to create a proximity detector. For example, we could turn it yellow every time its rotation along the Z-axis is more than 45°, white when its rotation is between -10° and 10°, and blue when its rotation is less than -45°. The VCNL4010 is a fully integrated proximity and ambient light sensor. Lastly, connect the data pin of your sensor to digital pin 1 on your Arduino. Let us discuss the inductive proximity sensor circuit, which is most frequently used in many applications. Consequently, the values array of its SensorEvent object has the following five elements: You can convert the quaternion into a rotation matrix, a 4x4 matrix, by using the getRotationMatrixFromVector() method of the SensorManager class.
Retrieve the proximity reading from the sensor. AN92239 shows how to implement capacitive proximity-sensing applications using PSoC® CapSense®. After several tries at optimizing it, I finally came up with something that is quite simple and precise. Feel free to use the sensors in creative ways. However, when the object is 5 cm away, the output is 655. Well, it uses the proximity sensor, which is a hardware sensor that can tell if an object is close to it. For example, it turns off your display when a phone call is ongoing, such that you wouldn't accidentally activate something while placing it near your cheek! I'll also introduce you to the rotation vector sensor, a composite sensor that can, in most situations, serve as an easier and more accurate alternative to the gyroscope. In order to avoid accidental touch events, your phone's touchscreen goes black during calls, when it's very close to your ear. The following code registers a listener that allows you to read the proximity sensor's data once every two seconds: I suggest you always register the listener inside the onResume() method of your activity and unregister it inside the onPause() method. Proximity sensors are used to detect something approaching nearby. Most handset and tablet manufacturers include a geomagnetic field sensor. In addition to the material of the object, the following criteria are important for choosing the right sensor: The design of a proximity sensor can be based on a number of principles of operation; some examples include variable reluctance, eddy current loss, saturated core, and Hall effect.
This application of proximity sensing is widely used. Proximity sensors have a number of applications. To build this example Android app, we will use the SensorManager and Sensor classes from the Android API. In this instructable, I'll teach you how to make a very simple proximity sensor using infrared LEDs and an Arduino. Step 4: "Gradual" proximity sensing. You can let Google Play and other app marketplaces know about your app's hardware requirements by adding one or more tags to your Android Studio project's manifest file. It consists of simple IR technology that switches the display on and off according to your usage. Proximity sensors are sometimes also known as non-contact sensors, because they don't require any physical contact with the object to be sensed. In case of clockwise rotation, it will be negative. If you are accustomed to radians, feel free to use them directly. Proximity sensors are also used in parking lots, sheet-break sensing, and conveyor systems. Proximity sensors are also applicable in phones, be it your Android or iOS device.
They basically comprise an oscillator whose windings constitute the sensing face. Before you call the getOrientation() method, you must remap the coordinate system of the rotation matrix. An alternating magnetic field is generated in front of these windings.

Composition of an inductive proximity sensor: (1) oscillator, (2) output driver, (3) output stage.

If it's more than 0.5f, we can, to a large extent, be sure that the rotation is anticlockwise, and set the background color to blue. As examples, sensors may detect that a part is present, that a part is not present, that an actuator is in a certain position, that a lift is lowered or raised, that a door is open or closed, or that a spring-returned component is a certain distance away. For example, one common application of capacitive sensing is proximity detection, where the detection range is limited by the minimum capacitance that can be measured. In this tutorial, I'll show you how to use the framework to read data from two very common sensors: proximity and gyroscope. To create a Sensor object for the gyroscope, all you need to do is pass the TYPE_GYROSCOPE constant to the getDefaultSensor() method of the SensorManager object.
With the latest ASIC technology, SICK's sensors offer the ultimate in precision and reliability. For example, a capacitive proximity sensor may be suitable for a plastic object; an inductive proximity sensor always requires an object made of ferrous metal. In this tutorial, you learned how to use Android's sensor framework to create apps that can respond to data generated by the proximity sensor and the gyroscope. To follow along, you'll need the following: 1. To avoid this condition, I suggest you set the screenOrientation of the activity to portrait in the manifest file. Before proceeding, always make sure that the Sensor object is not null. Similarly, if it's less than -0.5f, we can set the background color to yellow. To be able to read the raw data generated by a sensor, you must associate a SensorEventListener with it by calling the registerListener() method of the SensorManager object. Proximity Sensor: A proximity sensor is an electronic sensor that can detect the presence of objects within its vicinity without any actual physical contact. In this application, we will display "NEAR" when an object is close to the device and display "AWAY" when the object is moved away. The rotation vector sensor combines raw data generated by the gyroscope, accelerometer, and magnetometer to create a quaternion. Inductive proximity sensors are solely for the detection of metal objects. They detect metal objects without contact, and are characterized by a long service life and extreme ruggedness. These sensors are useful in many applications like collision avoidance, obstacle detection, path following, touchless sensing, motion detection, and object detection. Second, connect the ground pin (black wire) of your sensor to GND (-) on your Arduino. Let's move on to the proximity sensor Android example part.
When the browser gets such a notification, it fires a DeviceProximityEvent for any change, and a UserProximityEvent in the case of a rougher change. If you run the app now, hold your phone in portrait mode, and tilt it by more than 45° clockwise or anticlockwise, you should see the background color change. The gyroscope allows you to determine the angular velocity of an Android device at any given instant. Just like my first example, this one uses the sensor framework. Wait — no speculation: If it is, it means that the proximity sensor is not available. Visuino How to Use Inductive Proximity Sensor: In this tutorial, we will use an inductive proximity sensor and an LED connected to an Arduino UNO and Visuino to detect metal proximity. Watch a demonstration video. The sensor uses an infrared LED to bounce light off objects in front of it and time how fast it takes for the light to return. Hathibelagal is an independent Android app developer and blogger who loves tinkering with new frameworks, SDKs, and devices. To create it, use the getSystemService() method of your Activity class and pass the SENSOR_SERVICE constant to it. For example, smartphones use proximity sensors to detect nearby objects. Let us now create an activity whose background color changes to red every time you hover your hand over your device's proximity sensor. Some proximity sensors can also tell how far away the object is, though their maximum range is usually only about 5 cm. Proximity sensors are … The gyroscope sensor's raw data consists of three float values, specifying the angular velocity of the device along the X, Y, and Z axes. I attached the proximity sensor to a fixed object (in my case, I used the clips of the soldering iron stand to secure the sensor). To see how, let us now create an activity whose background color changes to blue every time you rotate the phone in the anticlockwise direction along the Z axis, and to yellow otherwise.
The Qwiic Proximity Sensor is a simple IR presence and ambient light sensor utilizing the VCNL4040. This sensor is excellent for detecting if something has appeared in front of the sensor, detecting objects qualitatively up to 20 cm away. Therefore, instead of specifying a polling interval in microseconds, I suggest you use the SENSOR_DELAY_NORMAL constant. An Android device with a proximity sensor and a gyroscope. The X, Y, Z, and W components of the quaternion. The proximity sensor has no software alternative. Create the Sensor class object. If the value is equal to the maximum range of the sensor, it's safe to assume that there's nothing nearby. To acquire the rotation vector sensor, you must pass the TYPE_ROTATION_VECTOR constant to the getDefaultSensor() method of the SensorManager object. If the sensor would only be used in an application that determines the level of a liquid, detects objects, or measures distance, then perhaps an ultrasonic sensor would be a better choice. The most basic use is the detection of objects. As an example, have a look at the "Prox_LED" code. Ever wondered how your phone determines whether or not it is close to your ear? An Android device with a proximity sensor and a gyroscope 2. It has 16-bit resolution. Also, the type of material sensed will influence the sensing distance.

Figure 8: Capacitive sensor topologies (from the TI application note "Proximity Sensor External Offset", www.ti.com).
To get access to any hardware sensor, you need a SensorManager object. Note also that if you turn the phone too far while testing, its screen orientation will change to landscape and your activity will restart; to work relative to the device's natural orientation you must remap the coordinate system, rotating the rotation matrix such that the Z-axis of the new coordinate system coincides with the Y-axis of the original coordinate system. On the electronics side, the sensor topology depends, among other things, on the sensor-to-target distance; a reflective sensor houses both the emitter and the receiver in a single housing. If the application involves light detection or the measurement of heat emission, an infrared sensor should work well. A typical inductive proximity sensor circuit consists of blocks such as an oscillator, an electrical induction coil, a power supply and a voltage regulator. Application notes on capacitive sensing explain how to design a proximity sensor and tune it to achieve a large proximity-sensing distance and liquid-tolerant proximity sensing. When wiring a sensor and relay module to an Arduino, connect the VCC of the relay module to the 5V pin of the Arduino; on boards with an APDS-9960, you can check whether a proximity reading is available with the APDS.proximityAvailable() function. Industrial tubular sensors encode their characteristics in the part number. Taking XS1M18PA370D as an example:
Sensor type: self-contained (X)
Sensing technology: inductive proximity (S), capacitive proximity (T)
Body type: shielded metal body (1), non-shielded metal body (2), non-shielded plastic body (4)
Type of enclosure or family: economy (D), standard length in threaded metal case (M), short length in threaded metal case (N)
"Fully integrated" means that the infrared emitter is included in the package: such a sensor combines the emitter with a signal processing IC, a standard I2C communication interface and 16-bit resolution, and devices with deep sub-fF capacitance measurement capability can achieve even longer detection ranges. On a phone, the proximity sensor is what turns off the LCD and touch input to save battery life and prevent unintentional operations when you put the device in a pocket or hold it near your ear during a call; most modern Android devices come with an inbuilt IR-based proximity sensor. You can create a Sensor object for it by calling the getDefaultSensor() method and passing the TYPE_PROXIMITY constant to it. When registering a listener for the rotation vector sensor, you must make sure that its sampling frequency is very high, because by using it you can develop apps that respond to minute changes in a device's orientation. To interpret a rotation matrix, use the getOrientation() method of the SensorManager class, which converts it into an array of orientations specifying the rotation of the device along the Z, X, and Y axes. By default, the orientations array contains angles in radians instead of degrees. If you run the app now, hold your phone in portrait mode, and tilt it to the left, you should see the activity turn blue; if you tilt it in the opposite direction, it should turn yellow. (For the Arduino circuit, connect the blue wire of the sensor to the ground of the Arduino, and remember that the type of material sensed will influence the sensing distance.)
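Since getOrientation() fills its output array with angles in radians, a small conversion helper makes the values easier to read. A plain-Java sketch (illustrative names, not SDK classes):

```java
public class Orientations {
    // getOrientation() reports azimuth, pitch, and roll in radians;
    // converting them to degrees makes the values easier to reason about.
    public static float[] toDegrees(float[] radians) {
        float[] out = new float[radians.length];
        for (int i = 0; i < radians.length; i++) {
            out[i] = (float) Math.toDegrees(radians[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        float[] orientations = { (float) Math.PI, (float) (Math.PI / 2), 0f };
        float[] deg = toDegrees(orientations);
        System.out.printf("%.0f %.0f %.0f%n", deg[0], deg[1], deg[2]); // 180 90 0
    }
}
```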
In this tutorial you learned how to work with the proximity sensor and with the rotation vector sensor, a more popular alternative to the gyroscope. Working with angular velocities is not intuitive; the unit of each gyroscope value is radians per second. The geomagnetic field sensor and the proximity sensor are hardware-based, and handset manufacturers usually include a proximity sensor to determine when a handset is being held close to a user's face (for example, during a phone call). On the Arduino side, the relationship between distance and reading is worth noting: when an object is 1 cm away from the sensor, it reports an analogRead() value of 322. The sensor framework, which is a part of the Android SDK, allows you to read raw data from most sensors, be they hardware or software, in an easy and consistent manner. If your app is simply unusable on devices that do not have all the hardware sensors it needs, it should not be installable on such devices, so declare the required sensors with <uses-feature> elements in your manifest file. Note, however, that because the <uses-feature> tag doesn't help if a user installs your app manually using its APK file, you must still programmatically check that a sensor is available before using it.
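The manifest lines the paragraph above refers to were lost from the extracted page; for a proximity sensor and a gyroscope, they would presumably look like the following standard <uses-feature> declarations:

```xml
<uses-feature
    android:name="android.hardware.sensor.proximity"
    android:required="true" />
<uses-feature
    android:name="android.hardware.sensor.gyroscope"
    android:required="true" />
```

Setting android:required="true" tells Google Play to hide the app from devices that lack the feature; the programmatic null check remains necessary for sideloaded installs.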
Proximity sensors in general are devices able to detect nearby objects without any physical contact, which is why they are also known as non-contact sensors. Common types include optical (photoelectric), ultrasonic, capacitive, inductive and magnetic sensors, with photoelectric sensors further divided into reflective and through-beam designs. Application notes on capacitive sensing also explain how to implement gesture detection based on proximity sensing and a wake-on-approach method to reduce power consumption. In simpler terms, the gyroscope tells you how fast the device is rotating around its X, Y, and Z axes; if the device rotates in the opposite direction around an axis, the value for that axis will be negative. By using some of the hardware sensors available on mid-range Android phones today, such as the accelerometer and magnetometer, you can create apps that offer far more engaging user experiences. A few practical details are worth repeating. To stop the activity from restarting when the device is tilted too far, set the screenOrientation of the activity to portrait. Use the getMaximumRange() method of the Sensor object to determine the maximum range of the proximity sensor. Keep in mind that apps which use sensors inefficiently can drain a device's battery very quickly, and that an app relying on a sensor will not work on devices that lack it. On the Arduino side, connect the data pin of your sensor to digital pin 1 on your Arduino, and use an adjustable threshold on the analog reading to decide when an object is present; the same sensor that reports 322 with an object at 1 cm reports an analogRead() value of 655 at another position. With all of this in place, you can build the two demos described earlier: an activity whose background turns red every time you hover your hand over the device's proximity sensor, and one that turns blue or yellow depending on the direction in which you rotate the phone.
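The rotation vector sensor reports the device's orientation as a unit quaternion with X, Y, Z, and W components. Extracting the rotation about the Z axis, which is the value the blue/yellow tilt demo cares about, is plain math and can be sketched independently of the Android classes (the method name is illustrative, not part of the SDK):

```java
public class RotationVector {
    // Rotation about the Z axis (yaw) of a unit quaternion (x, y, z, w),
    // using the standard quaternion-to-Euler conversion.
    public static double yawRadians(double x, double y, double z, double w) {
        return Math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z));
    }

    public static void main(String[] args) {
        double s = Math.sqrt(0.5);
        // Quaternion for a 90° rotation about the Z axis
        System.out.println(Math.toDegrees(yawRadians(0, 0, s, s))); // ≈ 90
    }
}
```

In a real app you would instead pass the SensorEvent values to SensorManager.getRotationMatrixFromVector() and then to getOrientation(); the formula above shows what those calls compute for the Z axis.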
The term "medium" The term "medium" is currently applied to very different subjects. Some authors consider it to mean in a broader sense all possible carriers and channels of transmission for symbols and messages. The Canadian media critic Marshall McLuhan even used it to describe all civilisational means of compensating for deficiencies of human organs (e.g. cars, clocks, trains etc.).1 This very much broadens the term. In the narrower sense, by contrast, media are technologies suitable for the mass replication and distribution of messages to a large number of recipients. In this context, the terms "media of mass communication" and "public media" are also used. Only these will be discussed here. The first technology to perform in this manner was print using movable letters, invented by Johannes Gutenberg (ca. 1400–1468) around 1450 in Mainz. Only then did it become possible to speak of media in the narrower sense. However, precursors have existed back to the dawn of history. Books were used in Antiquity and the Middle Ages but their distribution was limited because manuscript (re-)production was very time consuming.2 A number of media genres have emerged from print but for 300 years it remained the only media technology. New media were only added starting late in the 19th century – first film, then in the 1920s the electronic media radio and television. A further multiplication of media has been in progress since the late 20th century. Media can be classified according to the symbols (primarily) used for encoding (word/image and digital/iconic symbols), the channels of perception addressed (one-channel/two-channel, optical/acoustic/audiovisual), the technology (print/radio) and availability (stored/unstored). Print media are differentiated according to a number of characteristics: whether they are non-periodical or periodical, by content and subject matter, by format, by type of presentation, by purpose or function, and by readership.
The book took on the shape familiar to us in the 16th century.3 While the codex was the standard form of the book in the preceding millennium, block books with texts carved in wooden templates were the immediate precursor. However, only the use of movable letters permitted the production of larger editions. Initially, the Bible and other ecclesiastical works predominated, but textbooks, academic and "belletristic" works were soon added using this technology. From Mainz, printing rapidly spread throughout the German-speaking territories and other European countries. By 1500, print shops existed in some 265 locations, of which 62 were located in German-speaking countries, 80 in the territory of modern Italy and 45 in France. The number of editions printed by then is estimated at 40,000, amounting to about 10 million copies. A European book market was created. The supply of books further diversified over the course of centuries as a result of intellectual, cultural and social developments.4 New genres took shape. These can be classified according to various principles that can overlap: according to content and type of presentation (fiction, non-fiction etc.), readership (children's, youth books), form or format (paperback, hardcover, picture book etc.), and even the intended purpose (lexicon, encyclopaedia, cook book, travel guide etc.). Small prints: broadsheets, handbills, newsbooks Apart from books, which can feature a larger number of pages, small prints, i.e. editions of no more than a few pages, arose in the 15th century. These include, for example, broadsheets, which often showed illustrations along with the typographical print. Later, the term "Flugblatt" was established in German, translated from the French term "feuille volante". These popular prints could have religious, official, scientific, propagandistic or literary content.5 However, they were only printed in large numbers starting in the 17th and 18th centuries. Broadsheets were also used to distribute news.
The "Newen Zeytungen" (news reports, lit. new tidings), which could be several pages long, coalesced into a separate genre early in the 16th century. Their number in the German-speaking territories during the 16th and 17th centuries is estimated – by approximation – at about 16,000 to 17,000.6 These "Newen Zeytungen" were printed depending on events and flowered in the second half of the 16th century. They preferentially reported on wars and military matters, but sensational events were also included.7 Similar printed works also appeared in other European countries during the 16th century. In France they were called "feuilles occasionelles" or "canards". In Italy, Venice was initially the centre of production, where they appeared as "avvisi" or "gazette". In Spain these news sheets were known as "relaciones", in Portugal as "relações".8 In the Netherlands, they circulated as "Nieuwsbrieven" and printed "Nieuwe Tijdingen". In England they were called "corontos", "newsbooks" or "diurnalls". The pamphlet must be distinguished as a separate genre from the handbill and the broadsheet because of its outer form, its content and its function, though the boundaries and transitions between them are at times blurred. Pamphlets characteristically comprise more than one page, thus allowing space for longer expositions. Therefore, they were not only used for mere information purposes but also for influencing opinions and convictions. They even served as means of propaganda. Pamphlets (German: Flugschrift) experienced a boom during the Reformation and the accompanying confessional confrontations. This certainly applies to the German-speaking territories. The number of pamphlets produced between 1501 and 1530 is estimated at 10,000 editions, and there must have been more than ten million copies in total.9 In England, a new wave of these printed works arose in the 1580s.
Pamphlets developed less strongly in Catholic countries but nevertheless occurred there as well.10 For example, there was intense pamphlet-publishing activity during the struggle of the Parisian Fronde against Cardinal Jules Mazarin (1648–1652).11 As a media genre, pamphlets – like handbills – experienced periods of flowering in later times of political turmoil and moments of crisis. For example, this was the case in France during the outbreak of the French Revolution, which has been labelled a "brochure crisis".12 This media genre also experienced peaks in Germany during the Thirty Years War, the wars of liberation against Napoleon, as well as the period before and around the Revolution of 1848.13 Newspapers in the modern meaning of the word only fully developed when current and thematically universal news was continuously printed at regular intervals. Periodicity is also essential. Already the "Messrelationen" (fair relations), the oldest of which date from 1583, exhibited regularity. They appeared annually or semi-annually at trade fairs with information from the preceding (half-)year. A first sequence of 12 monthly papers is preserved from 1597 (Annus Christi, or "Rorschacher Monatsschrift"). However, these publications lacked sufficient immediacy due to the long intervals between publications. This was only provided with a weekly publication rhythm. The oldest paper to possess this characteristic is the Relation, printed in Strasbourg: the 1609 volume is the earliest to have survived, but a petition by the printer Johann Carolus (1575–1634) to the city council suggests that it must have already appeared in 1605. Another paper is also preserved from 1609, the Aviso, which was printed in Wolfenbüttel. With these two papers, Germany stood at the beginning of newspaper history. There, this medium developed most abundantly in the 17th century, which is related to the territorial fragmentation of the old empire. There were 60 to 70 titles in existence around 1700.
Newspapers also gradually began to appear more frequently, initially twice weekly, then three to four times weekly in the 18th century. The first daily newspaper was published in Leipzig in 1650 and was already printed six times a week (Einkommende Zeitung). However, this was an exception. The average edition size of early newspapers has been estimated at 400 to 500 copies.14 Newspapers also came into existence in other European countries during the 17th century.15 The first to follow were the States-General (United Netherlands), where the development was most turbulent after Germany. The oldest known Dutch newspaper editions date from 1618 (Courante uyt Italien, Duytsland &c). Next to Germany, Amsterdam became the most important centre of newspaper production in the early period of the European press.16 In the Spanish Netherlands (present-day Belgium) the first paper was published at Antwerp in 1620 (Tijdinghe...). In France the first regular newspaper was established in 1631. The Gazette, which was founded at this time, was preceded by the Nouvelles ordinaires de divers endroits, with the latter giving way to the new competition.17 The Gazette was able to preserve its monopoly into the middle of the 18th century but was reprinted in many cities of the country. A special case are French papers printed outside the home country for reasons of censorship, such as the Gazette de Leyde (1677ff.) and the Gazette d'Amsterdam (1688ff.). The (official) Oxford Gazette, which was published by the government in 1665 and changed its title to the London Gazette in the next year, is considered to be the oldest periodical newspaper of England.18 The first newspaper in Danish appeared in 1672, while the first Russian newspaper was printed in 1703. Several decades more passed until the first newspaper appeared in 1771 in the Finnish university town of Åbo/Turku. In Italy full-fledged newspapers appeared in 1636 and 1639.
They were preceded in Venice by the "avvisi" or "gazette". The first Swedish newspaper was published in 1645 (Ordinari Post Tijdender / Post- och Inrikes Tidningar) and still exists. The Gaceta Ordinaria de Madrid became the first Spanish weekly in 1677 but was preceded in 1661 by the monthly Gazeta Nueva. Two copies of newspapers stem from 1641, but they are simply translations of French sources. 1661 also marks the birth of the oldest newspaper in Poland (Merkuriusz Polski). The newspapers of various countries were initially quite similar in appearance and content. Political and military reporting dominated. Newspapers primarily presented news from abroad, that is from other European countries, and consequently contributed greatly to the transmission of information among them. The emphasis of reporting shifted over the course of time and according to historical events.19 Change in the basic newspaper format did not occur until the 19th century. The precondition for this was the invention of the high-speed press and innovations in paper production, which made it possible to print larger formats and editions that numbered in the thousands. With the advancement of advertisements in newspapers and the elimination of taxation, the genre of the mass paper came into being. Though the prototypes of the "penny press" originated in the United States, in France two newspapers of the "presse à bon marché", La Presse and Le Siècle, already appeared in 1836. They turned the advertisement section into their major source of revenue, allowing the retail price to be lowered and achieving mass sales as a result. Readers' interests were satisfied by expanding the content (e.g. features). In the 1850s, popular dailies also found a home in England, beginning with the Daily Telegraph and Courier, which was able to raise its edition size to 250,000 copies by 1880. In Germany the transition to the popular mass press occurred in the 1870s and 1880s in the form of the "Generalanzeiger" [general advertiser].
They were entirely based on the advertisement section but also provided local news and entertaining reading materials. Their pages also contained advice. The most successful of these publications in Germany was the Berliner Lokal-Anzeiger (edition size: 150,000 copies). The editorial and partisan press became a common phenomenon next to the rather apolitical mass papers. The former could only rise to prominence when freedom of the press was assured. Since England led the way, with freedom of the press virtually prevailing there from 1695, a politically polarized press formed in which the Whigs and the Tories had their own papers. In other countries, such as Germany and France, the partisan press followed parliamentarisation and the formation of parties in the 19th century. The editions of the partisan press remained limited because they were primarily read by politically like-minded individuals. By contrast, politically less decided papers were able to achieve larger editions. The most successful newspaper in 19th-century France was Le Petit Journal (1863ff.), with Le Petit Parisien (1876ff.) and Le Matin (1883ff.) also achieving editions in the millions of copies. Apart from newspapers, which served to provide current information, another print media genre appeared from the middle of the 17th century: the journal (Zeitschrift). The term Zeitschrift was first documented in German in 1751. Previously, Latin titles (Acta, Ephemeriden) were used, or one spoke of a "Journal", "Wochenschrift" (weekly) or "Monatsschrift" (monthly). Characteristic of this press genre is that it is "limited" by larger publication intervals regarding both its content and its currency. The journal also fulfils other functions than the newspaper and is directed at more or less defined target groups. The Journal des Sçavans, published in 1665 in Paris, is considered to be the first journal.
It was a journal for the learned that provided excerpts and summaries of books, novelties from the literary world, obituaries and eventually original treatises and reviews. Imitators were already found in the same year in England (Philosophical Transactions). The Italian Giornale dei Letterati was produced three years later in Rome. In Germany the genre of the learned journal was only adopted with the Acta Eruditorum in 1682. As the title shows, Latin remained in use as the language of knowledge. In France another type of journal followed in 1672 with the Mercure galant. It supplied novelties from court society and cultural life, with riddles, verse and short prose pieces being added. This journal aimed to entertain. In Germany the political and historical journal appeared with Der Verkleidete Götter-Both Mercurius in 1674, which used important political events as the occasion for controversial debates, allowing political reasoning to enter journalism. The Monatsgespräche of the Leipzig professor Christian Thomasius (1688–1692) are considered the first literary journal in the German language.20 In the 18th century, the journal became the publishing medium of a growing specialization. The number of titles rose rapidly, with new subgenres continuously forming. Often they were only short-lived series because sales were low. Few journals achieved larger editions of several thousand copies. Joachim Kirchner recorded more than 6,600 titles in his bibliography of journals from 1682 to 1830 and classified them according to types. Although this classification is not unproblematic in many cases, it nevertheless illustrates the diversity of journal genres. The journal became a communication medium for the increasingly specialised individual sciences and social fields of interest. However, many journals were intended to serve as entertainment and for passing time (general magazines). Some were able to achieve large circulations.
Furthermore, new groups of readers were accessed, especially women (women's and fashion magazines). As the result of new inventions and societal trends, new journals and magazines continuously appeared – in the 20th century, for example, film, motor and sports journals. During the 18th century, the journal spread throughout Europe. France was fertile soil for a whole series of literary and philosophical publications that took radical positions and were contested by others. Examples are the Journal encyclopédique (1755ff.) and the Journal de Trévoux (with precursors since 1701).21 The moral weeklies are among the culture-specific journal creations of England. Joseph Addison (1672–1719) and Richard Steele (1672–1729) created the Tatler (1707–1711), the Spectator (1711–1712, 1714) and the Guardian (1713). This genre found its equivalent and successor on a large scale in the German Moralische Wochenschriften (classified by Kirchner as "Sittenschriften").22 This is a fairly coherent genre that is characterised by original titles, fictional authorship and a programme for improving the mores and life-styles of the citizenry. Dozens of titles appeared in Germany between 1713/14 and 1775. There were imitations – though less numerous – in France, Spain and other countries. Hardly any other journal concept found such a transnational distribution in Europe during the 18th century. In England the scholarly journal was represented above all by the "learned journals" and the entertainment magazine by the "miscellany journals", which were written in the style of personal letters, starting with the Gentleman's Magazine (1731ff.), the Universal Magazine (1747ff.) and the Monthly Magazine (1796ff.). Journals in Italy followed English and French models, the former represented by the Magazino universale (1751ff.) and the latter by the Novelle letterarie (1740ff.). New journal types originated in turn during the 19th century. Some achieved mass editions because of the material offered and their production methods.
First to deserve mention are the "penny magazines" created in England (1832ff.). They offered inexpensive reading materials of general interest. In Germany, the immediate descendant of this concept was the Pfennig-Magazin der Gesellschaft zur Verbreitung nützlicher Kenntnisse (1833ff.) and similar titles of this kind. These publications were richly illustrated. A new technology was available for this purpose with the wood engraving. Another journal genre that arose in the 1840s, the "illustrated magazine", also made use of it. This group included the Illustrated London News (1842ff.), the Leipzig Illustrirte Zeitung (1842ff.) and L'Illustration (1843ff.) in Paris. Magazines experienced another rise starting in the late 19th century, when it became possible to print photos. The invention of lithography was also followed up with the creation of a new journal genre that primarily contained caricatures. In France, La Caricature (1830ff.) and Le Charivari (1832ff.) became famous, while in Germany various humour magazines of the 1848 Revolution followed, and later the Simplicissimus (from 1896).23 In England, the caricature had already experienced a first flowering in the 18th century. Punch or The London Charivari, first published in London in 1841, was the most important journal of this type. Intelligencers (advertisement sheets) In the 18th century, a third genre of press medium, for which the term Intelligenzblatt (intelligencer) established itself in Germany, joined the newspaper and the journal. However, it originated in France. Théophraste Renaudot (ca. 1586–1653) established in 1630 in Paris the "Bureau d'Adresse et de Rencontre" for promoting business. It was possible to deposit and request offers of goods and services there. Renaudot had the idea of printing these offers and requests and enclosing them in his paper, the Gazette (Feuilles du Bureau d'Adresse).
Thus, the intelligencer, which used the potential of print to enhance the circulation of advertisements, was born. Renaudot's example was first taken up in England, where attempts to establish an address office are already known from the 1640s. In 1667, the Publick Advertiser, which exclusively consisted of advertisements, appeared in London and established this category in England. In Germany it only gained a foothold in the early 18th century. In 1722, the Wöchentliche Franckfurter Frag- und Anzeigungs-Nachrichten began to appear, with a title referring to the type of publication, place, content etc., which is typical for this genre. The term "Intelligenzblatt" only established itself in titles around 1760. (It is derived from Latin intellegere, "to have insight".) The intelligencer became widely distributed in 18th-century Germany. There were about 170 titles in 1800.24 In France25 the existence of the intelligencer was linked to a state monopoly of advertisement, which was only voided in 1850 and until then obstructed advertising in the (political) daily press. There the titles carried the word "affiches". Apart from the Affiches de Paris (1745ff.), similar titles appeared in French provincial towns. The function of intelligencers no longer remained limited to advertisement. Official notifications, local reporting in a manner of speaking, advice and also entertainment material (in inserts) were added. There were several variants of the genre, especially in the early 19th century. The genre of the advertisement journal only returned in the middle of the 18th century via the German model to France, where the idea of the intelligencer was born. The printed press (with its subgenres) was the only mass medium for a long period in European history. A new medium developed only towards the end of the 19th century in the form of film.
The first public film presentations took place in 1895 in Paris, which is why this year has been called the birth year of film. However, there was a long prehistory. The human desire for moving images is old and was already expressed in inventions such as the laterna magica and other instruments for creating optical illusions. The oldest photographs have been dated to the 1830s. Additional inventions were required so that images could "learn to walk". After several precursor stages, the brothers Auguste (1862–1954) and Louis Lumière (1864–1948) in France succeeded in 1895 with their cinématographe in constructing a unit that was suited both for recording and projecting film. Film is another medium that diversified into a series of subgenres. Specifically, films can be distinguished according to length, form, function, content etc. into full-feature films and short films, black-and-white and colour films, cinema and TV films, as well as genres like crime, love, Heimatfilm (films with local background), science fiction, music and westerns. While the feature film has a fictional plot, the documentary describes factual situations. Initially, silent movies were produced that were accompanied by music and at most had intertitles interposed on the screen. These were readily shown in different countries without problems of understanding. When sound technology was invented in the mid-1920s, international sales of films became difficult and required synchronisation in foreign languages. The French film industry dominated in Europe during the early days of film. The French production company of Charles Pathé (1863–1957) also marketed the first weekly newsreel in 1909 and thus founded this genre. Before 1914, Italian and Danish movies also dominated the German market. The German film industry only experienced an upswing as a result of the First World War, which made the importation of foreign films more difficult.
During the war, film was also discovered as a propaganda tool. Germany's most important film company, Universum Film AG (UFA), arose after the war from the Image and Film Office (Bild- und Film-Amt, BUFA), which had been created at the instigation of the military.

Electronic media (broadcasting)

The discovery of electromagnetic waves by Heinrich Hertz (1857–1894) in 1888 was the technical breakthrough from which the broadcast medium arose in the 20th century. Initially, this only meant the transfer of acoustic signals, but in the meantime it has become a generic term for two genres of electronic media, radio and audiovisual television. During the First World War, radio transmission technology was used entirely for military purposes. Civil use only came about in the 1920s. While private entrepreneurs (printers, film producers, cinema owners) had made the press and film into media of mass communication, it was the state that ran radio from the start in Germany. The telecommunications monopoly, which was already laid down in the Constitution of the German Empire (1871) and was later expanded to wireless signals, was crucial in this regard. The first radio programme in Germany was broadcast on 29 October 1923 by the Berliner Funkstunde (Berlin Radio Hour). In addition, eight further regional radio companies, which had started broadcasting operations by October 1924, were founded in Germany. The Deutsche Reichspost (Imperial Postal Service) owned the majority of shares with 51 percent, while the other 49 percent were mostly owned by private investors, on whom programme financing depended. The Reichs-Rundfunk-Gesellschaft, RRG (Imperial Broadcasting Company) functioned as the umbrella organisation. Due to its technical character and organisational requirements, radio was unable to diversify into subgenres as the press had done. However, it developed different programme genres. Word and music programmes must be distinguished at a fundamental level.
Both consist of several subtypes: spoken-word programmes include news and news reports, shows aimed at target groups (e.g. children, women, churches), education (lecturing), literary shows (e.g. radio plays), sports shows, service programmes etc. Music programmes are usually differentiated into serious music, consisting of opera, classical concerts etc., and light music, consisting of folk music, pop etc. Over the course of development, fixed programme structures developed on the basis of these subgenres. Radio stations also developed in other European countries during the 1920s. In part, they arose as the result of private initiatives, e.g. in Belgium, Italy and Spain. Later, they also came under the influence of the state, which happened sooner in some countries (e.g. Denmark) than in others. In the Netherlands, radio was left to various organisations that had to divide the broadcasting time amongst themselves. Britain, where the British Broadcasting Corporation (BBC) was declared a public institution in 1927, took its special path at an early point. This was to guarantee the broadcaster's independence from the state and from private interests. In Sweden, radio was also organised under public law, but it was state-owned, as it was in Germany from 1932 to 1945. After 1945, the BBC became the model for public broadcasting institutions. Already in the 19th century, experiments were undertaken with the remote transmission of images. However, early mechanical methods remained insufficient, and these technical problems were only solved later with inventions by physicists and engineers. On 22 March 1935, the first regular television programme was opened in Berlin. The Nazis wanted to demonstrate German superiority with it. Britain followed on 2 November 1936. However, the Nazi rulers did not yet recognize the potential of this medium. The screen was still too small and the number of available receiver units low.
However, specific programme genres and sequences modelled on radio, film and theatre also took shape in TV. This development was discontinued with the start of the war because all resources were needed. Thus, television programmes in the Third Reich ended with request shows for hospitalised soldiers. Television only experienced its rise as the dominant modern medium after World War II.
Elisabeth, Princess of Bohemia

Elisabeth, Princess Palatine of Bohemia (1618–1680) is best known for her extended correspondence with René Descartes, and indeed these letters constitute her extant philosophical writings. In that correspondence, Elisabeth presses Descartes on the relation between the two really distinct substances of mind and body, and in particular on the possibility of their causal interaction and the nature of their union. They also correspond on Descartes's physics, on the passions and their regulation, on the nature of virtue and the greatest good, on the nature of human freedom of the will and its compatibility with divine causal determination, and on political philosophy. Descartes dedicated his Principles of Philosophy to Elisabeth, and wrote his Passions of the Soul at her request. While there is much to be learned about Descartes's views by reading this exchange, my concern in this entry is not to focus on its import for understanding Descartes's philosophical position, but rather to summarize Elisabeth's own philosophical views. Elisabeth seems to have been involved in negotiations around the Treaty of Westphalia and in efforts to restore the English monarchy after the English Civil War. As abbess of the convent at Herford (Germany), she managed the rebuilding of that war-impacted community and also provided refuge to marginalized Protestant religious sects, including Labadists and Quakers.

- 1. Life
- 2. Early Interest in the Passions
- 3. Correspondence with René Descartes
- 4. Correspondence with Quakers
- Academic Tools
- Other Internet Resources
- Related Entries

Elisabeth Simmern van Pallandt, born on 26 December 1618, was the third of thirteen children and eldest daughter of Frederick V, Elector Palatine, and Elizabeth Stuart, daughter of James I of England and sister of Charles I. She died on 8 February 1680, in Herford, Germany, where she was abbess of the convent.
In 1620, Frederick V, having been installed as King of Bohemia, promptly lost his throne in events usually taken to have precipitated the Thirty Years War. In the 1620s, Elisabeth lived in Brandenburg with her grandmother and aunt until the children joined their parents, living in exile, in The Hague, where they were sheltered by Maurice of Nassau, Frederick's maternal uncle. Although all the details of Elisabeth's education are unknown, it is clear that she and her siblings were tutored in languages, including Greek, Latin, French, English and German and perhaps others. We can infer that Elisabeth was taught logic, mathematics, politics, philosophy and the sciences, and it is reported that her intellectual accomplishments earned her the nickname ‘La Greque’ from her siblings. She also was schooled in painting, music and dancing, and might well have been tutored by Constantijn Huygens. Pal (2012) provides more detail of the intellectual environment of the court in The Hague. While her correspondence with Descartes comprises the only substantive extant philosophical writings of Elisabeth, we are also aware of a correspondence concerning Descartes's Geometry with John Pell, exchanges with Quakers, including Robert Barclay and William Penn, and letters written both by and to her concerning political and financial matters in the English Calendar of State Papers. The correspondence with Descartes reveals her to have been involved with an appointment in mathematics to the University of Leiden and in negotiations on a number of matters, including the imprisonment of her brother Rupert in conjunction with his efforts around the English Civil War, negotiations of the marriage of her sister Henrietta, negotiations of the Treaty of Westphalia, and the finances of her family after the end of the Thirty Years War. There is also record of a brief exchange with Nicholas Malebranche. 
She is also known to have been connected to Francis Mercury van Helmont, who is reported to have been at her deathbed. In 1660 Elisabeth entered the Lutheran convent at Herford, and in 1667 she became abbess of the convent. She seems to have been an effective manager of the convent lands, but she also welcomed more marginal religious sects, including the Labadists, at the request of Anna Maria van Schurman, and Quakers, including Penn and Barclay. It is worth mentioning the accomplishments of some of her siblings. Her older brother Charles Louis was responsible for restoring the University of Heidelberg after the Thirty Years War. Rupert, the brother born next after her, gained fame for his chemical experiments as well as for his military and entrepreneurial exploits, including the founding of the Hudson's Bay Company. Louise Hollandine, a younger sister, was an accomplished painter and student of Gerrit van Honthorst. Sophie, her youngest sister, became the Electress of Hanover and was renowned for her intellectual patronage, particularly of Leibniz. Sophie's daughter, Sophie-Charlotte, was tutored by Leibniz, and both women carried on substantive philosophical correspondence with Leibniz in which he clarified his philosophical views. See Strickland (2011). Elisabeth seems to have taken an early interest in the passions, as Edward Reynolds dedicated his Treatise of the Passions and the Faculties of the Soule of Man (1640) to her. While there is little information about its context, the dedication suggests that Elisabeth had seen a draft of the work, and so one can infer that they had some discussion or correspondence. Reynolds's work, while distinctive in the period as a self-standing treatment of the passions, draws largely on Aristotelian-Scholastic discussions. It does, however, focus on the sensitivity of the passions to reason, and so on our capacity to correct our errant passions through reflection.
Elisabeth's correspondence with Descartes begins at her initiative in 1643 and continues until Descartes's death in early 1650. Elisabeth does not seem to have produced any systematic philosophical work, and her extant philosophical writings consist almost entirely of her correspondence with Descartes. While we have Descartes's works, and centuries of interpretation to contextualize his side of the exchange, we do not have this larger picture in which to situate Elisabeth's thoughts. Thus, any account of her proper philosophical position must be gleaned through interpretation. It is evident from the correspondence that Elisabeth has a remarkable and wide-ranging critical philosophical acumen. Careful reading of her side of the correspondence does suggest she has some positive philosophical commitments of her own, on matters including the nature of causation, the nature of the mind, explanations of natural phenomena, virtue, and good governance. While many of Descartes's letters to Elisabeth were published in the volumes of his correspondence edited by Clerselier after his death, Elisabeth refused Pierre Chanut's request to publish her side of the exchange. Elisabeth's side of the correspondence was first published in a volume by A. Foucher de Careil, after he was alerted to its existence by an antiquarian bookseller, Frederick Müller, who had found a packet of letters in Rosendael, outside Arnhem. These same letters are what appear in the Oeuvres of Descartes, edited by Charles Adam and Paul Tannery. The letters from Rosendael are not originals, but rather copies that date from the early 18th century. The consistency of their content with that of Descartes's letters, along with allusions to events in Elisabeth's family and private life, argues strongly in favor of the authenticity of the copy. 
The correspondence between Elisabeth and Descartes begins with Elisabeth's asking probing questions about how Descartes can explain the ability of an immaterial substance to act on a material substance. At issue in this initial query is the kind of causation operating between mind and body. As Elisabeth frames the issues, existing accounts tie causal efficacy to extension, and in this regard it is significant that she poses her question about the mind's ability to act on the body, and not the body's ability to affect the mind. To account for the causal efficacy of an immaterial mind, Elisabeth suggests that Descartes can articulate either the account of causation proper to mind-body interaction or the substantial nature of the mind such that existing accounts could explain its actions. Descartes's response is not only evasive but opens up further issues, in particular about whether the mind-body union is a third substance, insofar as he appeals to the Scholastic notion of heaviness to address Elisabeth's concerns (Garber 1983), and intimates there is a contradiction in thinking of mind and body as both two distinct substances and as united (Mattern 1978). In addition, in his responses, Descartes jumps between the two separate issues of mind-body and body-mind interaction (Rozemond 1999). My concern in this entry is not, however, to articulate the views expressed in Descartes's side of the correspondence. This exchange reveals that Elisabeth is committed to a mechanist account of causation—that is, one limited to efficient causation. Elisabeth rejects Descartes's appeal to the Scholastic conception of heaviness as a model through which to explain mind-body interaction, on the grounds that, as Descartes himself previously argued, it is unintelligible and inconsistent with a mechanist conception of nature. 
That is, she squarely rejects the formal causal explanatory model underlying the Scholastic notion of a real quality, insofar as she refuses to consider that model appropriate in some contexts. She is nonetheless open-minded about which account of efficient causation ought to be adopted. This openness reveals that she is apprised of debates about the nature of causation in the period (Gabbey 1990, Clatterbaugh 1999, Nadler 1993). Elisabeth's investment in the new science emerging in the seventeenth century is reflected in what she writes regarding mathematics and natural philosophy, discussed briefly in the next subsection. Elisabeth's remarks to Descartes also suggest that she is willing to revisit Descartes's substance dualism. She presses Descartes to further articulate his account of substance, pointing not only to the problem of mind-body interaction, but also to cases where the poor condition of the body—the vapours, for instance—affects capacity for thought. These cases, she intimates, would be more straightforwardly explained by considering the mind to be material and extended. The issue of the role of the condition of the body in our capacity for thought also figures in the correspondence of 1645, concerning the regulation of the passions, both from a theoretical and a personal perspective. Elisabeth seems to maintain the autonomy of thought—that we have control over what we think and can turn our attention from one object to another, and so that the order of thought does not depend on the causal order of material things. However, at the same time she acknowledges that the capacity for thought, and the free will essential to it, is dependent on the overall condition of the body. Elisabeth thus rejects an account of mind that reduces thinking to bodily states, but at the same time she calls into question the idea that the capacity of thinking exists wholly independently of body, that is, that a thinking thing is substance properly speaking. 
The force of her early question to Descartes, asking him to further explain what he means by substance, becomes clear, but she herself does not offer a developed answer to the question. Interestingly, Elisabeth introduces her own nature as female as one bodily ‘condition’ that can impact reason. While Descartes concedes that a certain threshold of bodily health is necessary for the freedom that characterizes rational thought, he disregards Elisabeth's appeal to the “weakness of my sex” (Shapiro 1999). In letters of November 1643, shortly after the initial exchange concerning the union of mind and body, Descartes sets Elisabeth the classic geometrical problem of the three circles, or Apollonius's problem: to find a circle that touches each of three given circles on a plane. While Elisabeth's solution is no longer available, Descartes's comments indicate that Elisabeth had already mastered techniques of algebraic geometry. She is thought to have learned them from Johan Stampioen's textbook. Elisabeth's approach to the problem seems to have differed from Descartes's own, and Descartes remarks that her solution has a symmetry and transparency, in virtue of its using only a single variable, that his own lacked. Elisabeth's recognized mathematical acumen is also evidenced by her involvement in the hiring of Frans van Schooten to the mathematical faculty at Leiden and by John Pell's effort to enlist her help in understanding Descartes's Geometry. In 1644, Descartes dedicated his Principles of Philosophy to Elisabeth. In that work, Descartes not only presents his metaphysics in textbook form, he also lays out his physics in some detail. Elisabeth responds to the dedication with gratitude, but also by offering criticisms of Descartes's accounts of magnetic attraction and the heaviness of mercury.
Also in the correspondence, Elisabeth shows herself to have a keen interest in the workings of the physical world: she criticizes Kenelm Digby's reading of Descartes; she requests the works of Hogelande and Regius; she reports on various natural phenomena, and in particular on diseases and cures, while seeking an efficient causal explanation of these phenomena. In his letters to Elisabeth of 1645 and 1646, Descartes develops his moral philosophy, and in particular, his account of virtue as being resolved to do that which we judge to be the best. His letters begin as an effort to address a persistent illness of Elisabeth, which Descartes diagnoses as the manifestation of a sadness, no doubt due to the events of the English Civil War. As Elisabeth herself puts it, he "has the kindness to want to cure [her] body with [her] soul" (AT 4:208, 24 May 1645). While they begin by reading Seneca's De Vita Beata, they both agree that the work is not sufficiently systematic, and discussion turns to Descartes's own views. Once again, Elisabeth, in her letters, plays a principally critical role. Her criticisms of Descartes take up three distinct philosophical positions. First, she takes up the position of Aristotelian virtue ethics, in objecting that Descartes's very liberal account of virtue, which requires only the intention to do good, does not require that one's good intentions are realized in actions that are actually good. That is, she notes that Descartes makes virtue impervious to fortune or moral luck. She, however, goes beyond the canonical Aristotelian position to maintain that even our ability to reason is subject to luck. (This position helps to illuminate her view on the nature of the human mind. See the discussion in section 3.2 above.) Elisabeth also takes up a classically Stoic position, insofar as she objects to the way in which Descartes's account of virtue separates virtue from contentment. 
She objects that Descartes's account of virtue allows for the virtuous agent to make mistakes, and she does not see how an agent can avoid regret in the face of those mistakes. Insofar as we regret when even our best intentions go awry, we can be virtuous and yet fail to be content. While it is unclear whether her objection is a psychological one or a normative one, she does maintain that achieving contentment requires an ‘infinite science’ (AT 4:289) so that we might know all of the impact of our actions, and so properly evaluate them. Without a faculty of reason that is already perfected, on her view, not only can we not achieve virtue, we also cannot rest content. (See Shapiro 2013 for an interpretation of these remarks.) In the context of this exchange, in the same letter of 13 September 1645, Elisabeth asks Descartes to "define the passions, in order to know them better" (AT 4:289). It is this request that leads Descartes to draft a treatise on the passions, on which Elisabeth comments in her letter of 25 April 1646, and which is ultimately published in 1649 as The Passions of the Soul. Elisabeth's concerns about our ability to properly evaluate our actions lead her to express a further concern, this time about the possibility of measuring value objectively, given that we each have personal biases, whether by temperament or by matters of self-interest. Without a proper measure of value, she implies, Descartes's account of virtue cannot even get off the ground, for it is not clear what should constitute our best judgement of what is the best course of action. Behind Elisabeth's objection here is a view of ethics akin to that of Hobbes and other contractarians, which takes the good to be a matter of balancing competing self-interests.
In his letter of 15 September 1645, Descartes aims to answer some of her concerns by outlining a set of metaphysical truths, knowledge of which will suffice to guide our practical judgements, including that all things depend on God (who exists), the nature of the human mind and its immortality, and the vast extent of the universe (AT 4:292). Elisabeth responds by asserting that these considerations just open more problems—of explaining human free will, of how understanding the immortality of the soul can make us seek death, and of distinguishing particular providence from the idea of God—without providing any guidance for evaluating things properly. (See Schmaltz (forthcoming) for an interpretation of Elisabeth's view on free will and divine providence.) Elisabeth's interest in properly evaluating actions and their outcomes is clearly related to her position as an exiled princess, one with hopes that her family will regain some of their political power. She is particularly concerned with the problems rulers face in making decisions that stand to impact a large group of people, with incomplete information. To this end, she asks Descartes to present the central maxims "concerning civil life" (AT 4:406, 25 April 1646), and for his thoughts on Machiavelli's The Prince. Descartes politely refuses the former, but offers his thoughts on the latter in his letter of September 1646. Elisabeth offers her own reading in her letter of 10 October 1646. In her view, Machiavelli's focus on a state that is the most difficult to govern does provide useful guidance for achieving stability, but affords little for how to proceed in governing a stable state. It is reasonable to assume that further consideration of these issues informed her management of the convent at Herford. Elisabeth also corresponded with a number of prominent Quakers, including Robert Barclay and William Penn, who visited her at the convent in Herford.
Though both Barclay and Penn attempt to gain Elisabeth as a convert, she does not seem interested in engaging them philosophically or theologically. Insofar as the Scottish Quakers played a strategic role in the efforts to restore the English throne, one can wonder whether her engagement with them was simply political. On the other hand, Elisabeth's long-standing interest in emerging alternative theories, along with her interest in divine providence, makes it just as plausible that she took a more intellectual interest in their world view.
- Barclay, Robert, 1870, Reliquiae Barclaianae: Correspondence of Colonel David Barclay and Robert Barclay of Urie, London: Winter & Bailey, Lithograph.
- Blom, John, 1978, Descartes: His Moral Philosophy and Psychology, New York: New York University Press. (Includes translation of much of the Descartes–Elisabeth correspondence.)
- Descartes, René, 1996, Oeuvres, Vol. III–V, Charles Adam and Paul Tannery (eds.), Paris: Vrin (cited internally as AT, followed by volume and page number).
- –––, 1984–1991, The Philosophical Writings of Descartes, Vol. I–III, John Cottingham, Robert Stoothoff and Dugald Murdoch (eds.), and for Vol. III, Anthony Kenny (ed.), London: Cambridge University Press (cited internally as CSM or CSMK, followed by volume and page number).
- –––, 1989, Correspondance avec Elisabeth, Jean-Marie Beyssade and Michelle Beyssade (eds.), Paris: Garnier-Flammarion.
- –––, 2013, Der Briefwechsel zwischen René Descartes und Elisabeth von der Pfalz, Benno Wirz, Isabelle Wienand and Olivier Ribordy (eds.), Hamburg: Meiner.
- –––, 1935, Lettres sur la morale: correspondance avec la princesse Elisabeth, Chanut et la reine Christine, Jacques Chevalier (ed.), Paris: Hatier-Boivin.
- –––, 1657–67, Lettres de Monsieur Descartes, Claude Clerselier (ed.), 3 vols., Paris: Angot.
- Foucher de Careil, Alexandre, 1879, Descartes, la Princesse Elisabeth et la Reine Christine, Paris: Felix Alcan.
- Malebranche, Nicholas, 1961, Oeuvres, Vol.
XVIII, André Robinet (ed.), Paris: Vrin.
- Müller, Frederick, 1876, "27 onuitgegeven brieven aan Descartes," De Nederlandsche Spectator, 336–39.
- Nye, Andrea, 1999, The Princess and the Philosopher: Letters of Elisabeth of the Palatine to René Descartes, Lanham, MD: Rowman & Littlefield.
- Penn, William, 1695 and 1714, An Account of W. Penn's Travails in Holland and Germany, Anno MDCLXXVII, London: T. Sowle.
- Princess Elisabeth of Bohemia and René Descartes, 2007, The Correspondence between Princess Elisabeth of Bohemia and René Descartes, Lisa Shapiro (ed. and transl.), Chicago: University of Chicago Press.
- Reynolds, Edward, 1640, Treatise of the Passions and the Faculties of the Soule of Man, London: Robert Bostock; facsimile reproduction, Margaret Lee Wiley (ed.), Gainesville, FL: Scholars' Facsimiles and Reprints, 1971.
- Strickland, Lloyd (ed. and transl.), 2011, Leibniz and the Two Sophies: The Philosophical Correspondence, Toronto: Centre for Reformation and Renaissance Studies.
- Verbeek, Theo, Erik-Jan Bos and Jeroen van de Ven (eds.), 2003, The Correspondence of René Descartes 1643, Utrecht: Zeno Institute for Philosophy.

A. Biographies of Elisabeth
- Blaze de Bury, Marie Pauline Rose Stewart, 1853, Memoirs of the Princess Palatine, Princess of Bohemia, London: Richard Bentley.
- Creese, Anna, 1993, The letters of Elisabeth, Princess Palatine: A seventeenth century correspondence, Princeton: PhD dissertation, Ann Arbor: UMI 9328035.
- Godfrey, Elizabeth, 1909, A Sister of Prince Rupert: Elizabeth Princess Palatine and Abbess of Herford, London and New York: John Lane.
- Zedler, Beatrice, 1989, “The Three Princesses,” Hypatia, 4(1): 28–63.

B. The Intellectual Historical Context
- Adam, Charles, 1917, Descartes et ses amitiés féminines, Paris: Boivin.
- Foucher de Careil, Alexandre, 1862, Descartes et la Princesse Palatine, ou de l'influence du cartésianisme sur les femmes au XVIIe siècle, Paris: Auguste Durand.
- Harth, Erica, 1992, Cartesian Women: Versions and Subversions of Rational Discourse in the Old Regime, Ithaca: Cornell University Press.
- O'Neill, Eileen, 1998, “Disappearing Ink: Early Modern Women Philosophers and Their Fate in History,” in Philosophy in a Feminist Voice, Janet A. Kourany (ed.), Princeton: Princeton University Press.
- –––, 1999, “Women Cartesians, ‘Feminine Philosophy’ and Historical Exclusion,” in Feminist Interpretations of René Descartes, Susan Bordo (ed.), University Park, PA: Pennsylvania State University Press.
- Pal, Carol, 2012, Republic of Women: Rethinking the Republic of Letters in the Seventeenth Century, New York/Cambridge: Cambridge University Press.
- Schiebinger, Londa, 1989, The Mind Has No Sex? Women in the Origins of Modern Science, Cambridge: Harvard University Press.

C. Seventeenth-Century Accounts of Causation and Conceptions of the Physical World
- Clatterbaugh, Kenneth, 1999, The Causation Debate in Modern Philosophy 1637–1739, New York: Routledge.
- Gabbey, Alan, 1990, “The Case of Mechanics: One revolution or many?”, in Reappraisals of the Scientific Revolution, David C. Lindberg and Robert S. Westman (eds.), Cambridge: Cambridge University Press.
- Garber, Daniel, 1992, Descartes' Metaphysical Physics, Chicago: University of Chicago Press.
- –––, 1992, “Descartes' Physics,” in The Cambridge Companion to Descartes, John Cottingham (ed.), Cambridge: Cambridge University Press.
- Garber, Daniel, John Henry, Lynn Joy and Alan Gabbey, 1998, “New Doctrines of body and its powers, place and space,” in The Cambridge History of Seventeenth Century Philosophy, Daniel Garber and Michael Ayers (eds.), Cambridge: Cambridge University Press.
- Nadler, Steven (ed.), 1993, Causation in Early Modern Philosophy, University Park: Penn State University Press.

D. Interpretations of the Descartes-Elisabeth Correspondence
- Alanen, Lilli, 2004, "Descartes and Elisabeth: A Philosophical Dialogue?"
in Feminist Reflections on the History of Philosophy, Lilli Alanen and Charlotte Witt (eds.), New York/Dordrecht: Kluwer, 193–218.
- Broad, Jacqueline, 2002, Women Philosophers of the Seventeenth Century, Cambridge: Cambridge University Press.
- Néel, Marguerite, 1946, Descartes et la princesse Elisabeth, Paris: Editions Elzévier.
- Pellegrin, M-F and D. Kolesnik (eds.), 2012, Élisabeth de Bohème face à Descartes: Deux Philosophes, Paris: Vrin.
- Petit, Léon, 1969, Descartes et Princesse Elisabeth: roman d'amour vécu, Paris: A-G Nizet.
- Rodis-Lewis, Geneviève, 1999, “Descartes et les femmes: l'exceptionnel rapport de la princesse Elisabeth,” in Donna Filosofia e cultura nel seicento, Pina Totaro (ed.), Rome: Consiglio Nazionale delle ricerche, 155–72.
- Wartenberg, Thomas, 1999, “Descartes's Mood: The Question of Feminism in the Correspondence with Elisabeth,” in Feminist Interpretations of René Descartes, Susan Bordo (ed.), University Park, PA: Pennsylvania State University Press.

E. The Real Distinction, Mind-Body Interaction and the Union of Mind and Body in the Correspondence
- Alanen, Lilli, 2003, Descartes's Concept of Mind, Cambridge: Harvard University Press.
- Broughton, Janet and Ruth Mattern, 1978, “Reinterpreting Descartes on the Notion of the Union of Mind and Body,” Journal of the History of Philosophy, 16(1): 23–32.
- Garber, Daniel, 1983, “Understanding Interaction: What Descartes Should Have Told Elisabeth,” Southern Journal of Philosophy (Supplement), 21: 15–37.
- Garber, Daniel and Margaret Wilson, 1998, “Mind-body problems,” in The Cambridge History of Seventeenth Century Philosophy, Daniel Garber and Michael Ayers (eds.), Cambridge: Cambridge University Press.
- Hatfield, Gary, 1992, “Descartes' physiology and its relation to his psychology,” in The Cambridge Companion to Descartes, John Cottingham (ed.), Cambridge: Cambridge University Press, 335–370.
- Mattern, Ruth, 1978, “Descartes's Correspondence with Elizabeth: Concerning Both the Union and Distinction of Mind and Body,” in Descartes: Critical and Interpretative Essays, Michael Hooker (ed.), Baltimore: Johns Hopkins University Press. - O'Neill, Eileen, 1987, “Mind-Body Interaction and Metaphysical Consistency: A defense of Descartes,” Journal of the History of Philosophy, 25(2): 227–45. - Radner, Daisie, 1971, “Descartes' Notion of the Union of Mind and Body,” Journal of the History of Philosophy, 9: 159–71. - Richardson, R.C., 1982, “The ‘Scandal’ of Cartesian Interactionism,” Mind, 92: 20–37. - Rozemond, Marleen, 1998, Descartes's Dualism, Cambridge: Harvard University Press. - –––, 1999, “Descartes on Mind-Body Interaction: What's the Problem?”, Journal of the History of Philosophy, 37(3): 435–467. - Shapiro, Lisa, 1999, “Princess Elizabeth and Descartes: The Union of Mind and Body and the Practice of Philosophy”, British Journal for the History of Philosophy, 7(3): 503–520. - Tollefson, Deborah, 1999, “Princess Elisabeth and the Problem of Mind-Body Interaction,” Hypatia, 14(3): 59–77. - Wilson, Margaret, 1978, Descartes, New York: Routledge. - Yandell, David, 1997, “What Descartes Really Told Elisabeth: Mind-Body Union as a Primitive Notion,” British Journal for the History of Philosophy, 5(2): 249–73. F. Descartes's and Elisabeth's Moral Philosophy - Marshall, John, 1998, Descartes's Moral Theory, Ithaca: Cornell University Press. - Mesnard, Pierre, 1936, Essai sur la morale de Descartes, Paris: Boivin & Cie. - Nye, Andrea, 1996, “Polity and Prudence: The Ethics of Elisabeth, Princess Palatine” in Hypatia’s Daughters, Linda Lopez McAlister (ed.), Bloomington: Indiana University Press. - Rodis-Lewis, Genevieve, 1957, La morale de Descartes, Paris: PUF.
- Schmaltz, Tad, forthcoming, “Princess Elisabeth of Bohemia on the Cartesian Mind: Interaction, Happiness, Freedom,” in Feminist History of Philosophy: The Recovery and Evaluation of Women's Philosophical Thought, E. O'Neill and M. Lascano (eds.), Dordrecht: Springer. - Shapiro, Lisa, 2013, “Elisabeth, Descartes, et la psychologie morale du regret”, in Élisabeth de Bohème face à Descartes: Deux Philosophes, M-F Pellegrin and D Kolesnik (eds.), Paris: Vrin, 155–169.
This paper provides an overview of the technological advances made in the historical context of World War II, the institutions and individuals that played a key role in the creation of computers, and the impact of those advancements on our current technology within the context of armed conflicts. It aims to move beyond ‘what if’ scenarios and take a closer look at certain moments in history that marked a before and after for computing technology, the key actors behind them, and how they shaped and defined the computers we use today and how we use them. Topics, concepts and keywords: - History of technology development during and after World War II. - University research and government funding for the development of technology during war. - Computing, graphical interfaces for human interaction and combinatorial technologies. - Keywords: World War II, Department of Defense, DARPA, ARPA, ENIAC, Cold War, MIT, MIT Lincoln Labs. - In what ways is the current technology around computers influenced by the technological achievements of World War II? - Within this context, under what circumstances does the combination of international conflict and the involvement of government with university research teams motivate the advancement of technology? In popular culture, it’s common to refer to our current times as the age of technology. We live in a world that is not only intrinsically related to technology but is also incredibly dependent on it. This trend is not entirely new; we’ve been influenced by technological advancements for a very long time, even before the invention of electricity. However, there is no denying that the pace at which technology advances has sped up drastically in the last half century. It wasn’t that long ago that we lived in a world without the internet, cellphones, GPS, or digital cameras, just to name a few. More surprisingly, technology is advancing so fast that new developments often render their predecessors obsolete very quickly.
Society marvels at new technological advances in different fields and wonders, “how is it possible?” The rapid pace and the mysterious aspect (black-boxing) of the modern advancement of technology make it seem like something magical, almost inevitable and unstoppable. In order to demystify technology as an autonomous entity that magically evolves independently from us, it is important to ask: what happened 50-60 years ago that unleashed this phenomenon? Who played a part in it? And how did it affect the current state of our technology? A snapshot in time To begin to answer our question it is necessary to look at the history of what was happening in the world at the time. Upon analyzing this, we’ll find that it was not one specific event but rather a combined, interdependent chain of events that happened at the perfect timing. On top of that, it wasn’t one specific individual, but instead a group of different actors and institutions whose actions had an impact in determining the path technology would take in the future. Even though technology is still very much present and a determining factor in future conflicts (and earlier inventions in World War I served as ancestors on which to build new technology), no war had such an impact on the current technology of our lives as World War II (1939-45). It was a peculiar moment in history in which a unique combination occurred: the need for technological advances to defeat the enemy coincided with an intellectual flourishing of revolutionary ideas in the field. Both government funding and private sector funding united forces with academic research in the United States, at institutions such as MIT and Stanford, which resulted not only in the victory of the Allies but in effects that still resonate in the way we interact with technology in our everyday activities. There were many types of technologies and discoveries of scientific principles that were customized for military use.
Major developments and advances happened in such a short period of time that it’s difficult to study and analyze all of them in this limited space. Just to name a few, we can take into account the design advancements of weapons, ships, and other war vehicles, or the communications and intelligence improvements with devices such as the radar, allowing not only navigation but remote location of the enemy as well. Other fields that were drastically influenced by technological advancements were the medical field and the creation of biological and chemical weapons, the most notorious case being the atomic bomb. On the subject, Dr. David Mindell from MIT brings attention to a few specific cases and their impact, both during the war and its outcome, as well as in the current state of our technology: “We can point to numerous new inventions and scientific principles that emerged during the war. These include advances in rocketry, pioneered by Nazi Germany. The V-1 or “buzz bomb” was an automatic aircraft (today known as a “cruise missile”) and the V-2 was a “ballistic missile” that flew into space before falling down on its target (both were rained on London during 1944-45, killing thousands of civilians). The “rocket team” that developed these weapons for Germany were brought to the United States after World War II, settled in Huntsville, Alabama, under their leader Wernher von Braun, and then helped to build the rockets that sent American astronauts into space and to the moon. Electronic computers were developed by the British for breaking the Nazi “Enigma” codes, and by the Americans for calculating ballistics and other battlefield equations. Numerous small “computers”—from hand-held calculating tables made out of cardboard, to mechanical trajectory calculators, to some of the earliest electronic digital computers, could be found in everything from soldiers’ pockets to large command and control centers. 
Early control centers aboard ships and aircraft pioneered the networked, interactive computing that is so central to our lives today”. (Mindell, 2009). The history of how all of these advancements came to be is fascinating, and it would be easy to get sidetracked into analyzing each of them. However, this paper does not aim to be a mere recounting of facts that are already very well documented by historians. Let’s take a look at the specific case of advances in computing, which is probably one of the biggest, if not the main, takeaways from World War II. Even though ‘computing’ as a way of thinking and seeing the world had existed for a very long time before these events (including machinery), there is no denying that the jump in the last 50-60 years has been enormous, and we owe it, in large part, to the research and funding achieved during and after World War II. As a field, computing started formally in the 1930s, when renowned scholars such as Kurt Gödel, Alonzo Church, Emil Post, and Alan Turing published revolutionary papers, such as “On Computable Numbers, with an Application to the Entscheidungsproblem” (Turing, 1936), that stated the importance of automatic computation and intended to give it mathematical structures and foundations. The Perfect Trifecta: University Research Teams + Government Funding + Private Sector Before World War II, the most relevant analog computing instrument was the Differential Analyzer, developed by Vannevar Bush at the Massachusetts Institute of Technology in 1929: “At that time, the U.S. was investing heavily in rural electrification, and Bush was investigating electrical transmission. Such problems could be encoded in ordinary differential equations, but these were very time-consuming to solve… The machine was the size of a laboratory and it was laborious to program it… but once done, the apparatus could solve in minutes equations that would take several days by hand”. (Mindell, 2009).
During World War II, the US Army commissioned teams of women at Aberdeen Proving Ground to calculate ballistic tables for artillery. These were used to determine the angle, direction, and range in which to shoot to more effectively hit the target. However, this process was vulnerable to error and took considerable amounts of time; the teams could not keep up with the demand for ballistic tables. In light of this, the Army commissioned the first computing machine project, the ENIAC, at the University of Pennsylvania in 1943: “The ENIAC could compute ballistic tables a thousand times faster than the human teams. Although the machine was not ready until 1946, after the war ended, the military made heavy use of computers after that” (Denning, Martell, 2015). This is one of the first examples of government and university research teams combining to fund and advance technology. However, it is worth noting that this was not the only project in place at the time in the world. In fact, the only one completed before the war was over was the top-secret project at Bletchley Park, UK, which cracked the German Enigma cipher using methods designed by Alan Turing (Denning, Martell, 2015). Nevertheless, projects such as ENIAC (1943, US), UNIVAC (1951, US), EDVAC (1949, US, a binary serial computer), and EDSAC (1949, UK) provided ground-breaking achievements that later allowed for the design of more efficient, reliable, and effective computers: “Even relatively straightforward functions can require programs whose execution takes billions of instructions. We are able to afford the price because computers are so fast. Tasks that would have taken weeks in 1950 can now be done in the blink of an eye”. (Denning, Martell, 2015). These projects sparked the flourishing of ideas that transformed computing into what it is today.
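A ballistic table is, at heart, the same trajectory formula evaluated over and over for many elevation angles. As a rough illustration only (the Aberdeen teams and the ENIAC used far more elaborate drag-corrected mathematics), here is a minimal Python sketch of a vacuum-trajectory firing table; the 450 m/s muzzle velocity is an arbitrary assumed value:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def vacuum_range(muzzle_velocity_ms, elevation_deg):
    """Range of a projectile on flat ground, ignoring air resistance.

    Textbook formula R = v^2 * sin(2*theta) / g -- a huge simplification
    of real exterior ballistics, but it shows the shape of the repeated
    computation that human 'computers' once did by hand.
    """
    theta = math.radians(elevation_deg)
    return muzzle_velocity_ms ** 2 * math.sin(2 * theta) / G

def range_table(muzzle_velocity_ms, angles_deg):
    """One (angle, range) row per elevation, like a miniature firing table."""
    return [(a, round(vacuum_range(muzzle_velocity_ms, a), 1)) for a in angles_deg]

# An assumed 450 m/s muzzle velocity at a few elevations:
for angle, rng in range_table(450.0, [15, 30, 45, 60]):
    print(f"{angle:>3} deg -> {rng:>9} m")
```

Each row is one evaluation; a real table covered hundreds of angle/charge combinations per gun, which is why a machine a thousand times faster mattered.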
Computers changed from being mere calculators to being information processors, and pioneers John Backus and Grace Hopper had a key role in that shift. In 1957, Backus led a team that developed FORTRAN, a language for numerical computations. In 1959, Hopper led a team that developed COBOL, a language for business records and calculations. Both programming languages are still used today: “With these inventions, the ENIAC picture of programmers plugging wires died, and computing became accessible to many people via easy-to-use languages” (Denning, Martell, 2015). The role of government funding during this period was essential, but it went beyond just granting money to university research teams. In February 1958, President Dwight D. Eisenhower ordered the creation of the Defense Advanced Research Projects Agency (DARPA), an agency of the United States Department of Defense whose mission is the development of emerging technologies for use by the military. International armed conflict not only played a part in the creation of this agency but was the reason behind it. On the climate surrounding its creation: “ARPA [originally] was created with a national sense of urgency amidst one of the most dramatic moments in the history of the Cold War and the already-accelerating pace of technology. In the months preceding [the creation] … the Soviet Union had launched an Intercontinental Ballistic Missile (ICBM), the world’s first satellite, Sputnik 1… Out of this traumatic experience of technological surprise in the first moments of the Space Age, U.S. leadership created DARPA” (Official website). The agency states its purpose clearly: “the critical mission of keeping the United States out front when it comes to cultivating breakthrough technologies for national security rather than in a position of catching up to strategically important innovations and achievements of others” (Official website).
From this description, it is not difficult to see that tension between countries due to armed conflicts impacts their willingness to invest in the creation of new technology. However, the projects funded at this agency since its creation have provided significant technological advances that have had an impact not only on military uses but on many other fields. The most ground-breaking include the early stages of computer networking and the Internet, as well as developments in graphical user interfaces, among others. Along the lines of DARPA, the Department of Defense, in collaboration with the Massachusetts Institute of Technology, created the MIT Lincoln Laboratory as a research and development center focused on the application of advanced technology to problems of national security: “Research and development activities focus on long-term technology development as well as rapid system prototyping and demonstration… The laboratory works with industry to transition new concepts and technology for system development and deployment” (Freeman, 1995). Other projects like the Stanford Research Institute started from a combination of university research and government funding after World War II and continue to develop technology to better the lives of the public. Among its accomplishments are the first prototype of a computer mouse and inkjet printing, and it was involved in the early stages of ARPANET. When the future becomes now Many people involved in the projects created during World War II went on to start computer companies in the early 1950s. Universities began offering programs of study in the new field by the late 1950s. More specifically, Computer Science programs were founded in 1962 at Purdue University and Stanford University, facing early criticism from scholars who believed that there was nothing new outside of mathematics and engineering.
“The field and the industry have grown steadily ever since, into a modern behemoth whose Internet connections and data centers are said to consume over 3% of the world’s electricity”. (Denning, Martell, 2015). Over the years, computing provided new insights and developments at such a pace that, in a matter of a few decades, it advanced further than other fields had since their creation: “By 1980 computing had matured in its understanding of algorithms, data structures, numerical methods, programming languages, operating systems, networks, databases, graphics, artificial intelligence, and software engineering”. (Mindell, 2009). The first forty years or so of the new field were focused on developing and perfecting computing technology and networks, providing ground-breaking results that better suited it for combinatoriality and further advancement. In the 1980s another shift started in the field: the interaction with other disciplines and computational sciences: “Recognizing that the computer itself is just a tool for studying information processes, the field shifted its focus from the machine itself to information transformations”. (Denning, Martell, 2015). The biggest advances of this field have been integrated into our world seamlessly, shaping not only our lives but also the way we see and interact with that world. Design achievements such as the microchip, the personal computer, and the Internet not only introduced computing into the public’s lives but also promoted and sparked the creation of new subfields. This effect, in fact, replicates itself almost like a cycle, explain Denning and Martell: “Network science, web science, mobile computing, enterprise computing, cooperative work, cyberspace protection, user-interface design, and information visualization.
The resulting commercial applications have spawned new research challenges in social networks, endlessly evolving computation, music, video, digital photography, vision, massive multiplayer online games, user-generated content, and much more”. (Denning, Martell, 2015). David Mindell clearly expresses this marvelous achievement: “Perhaps the single most remarkable development was that the computer—originally designed for mathematical calculations—turned out to be infinitely adaptable to different uses, from business data processing to personal computing to the construction of a global information network”. (Mindell, 2009) What if World War II hadn’t happened? Would our current technology be at the stage it is today? In what ways would it be different? How long would it have taken us to achieve these technological advancements if military conflict had not been present? Such hypothetical questions are the ones that plagued my mind when I started this research, and there is no clear answer to them. The impact World War II had on society is undeniable and impossible to measure. The world was never the same in any aspect, and no field was left untouched by it. International relations and diplomacy, with the creation of the UN and human rights frameworks, and world politics, especially in Europe, were forever changed, leading to dictatorships and more armed conflict within the region. Other fields such as physics, biological weaponry, engineering, medicine, and genetics, just to name a few, went through drastic changes as well, sparked by the events of this time, which in consequence led to future conflicts such as the Cold War and the development of nuclear weapons by various nations. At the core of all these changes is technology. World War II and its impact on the development and advancement of technology shaped the world as we know it now, in ways that we’re still trying to comprehend and address.
Would technology be less mature, robust, or advanced if World War II hadn’t happened? Probably, but more in a change of pace than a different path. There were astounding technological advances before the war, and there are still technological achievements occurring that are not sparked by military conflict. However, wartime stimulates inventiveness and advances because governments are more willing to spend money urgently on revolutionary, and sometimes risky, projects. In the specific case of World War II, the creation of computers was the result of different actors and institutions (universities, government agencies, computer scientists and researchers), with various interests, pushed by armed conflict to work together in perfect timing in one of the most drastically world-changing cases of serendipity in history. It is the ‘before-and-after’ of not only our generation but our civilization. - Campbell-Kelly, Martin. “Origin of Computing.” Scientific American 301, no. 3 (September 2009): 62–69. - DARPA official website: https://www.darpa.mil/about-us/timeline/where-the-future-becomes-now - Denning, Peter J. and Craig H. Martell. “Great principles of computing.” Communications of the ACM 11 (2003): 15–20. - Freeman, Eva C. MIT Lincoln Laboratory: Technology in the National Interest, Lexington, Mass.: MIT Lincoln Laboratory, 1995. - Geiger, Roger L. Research and Relevant Knowledge: American Research Universities since World War II. Transaction Publishers, 2008. - Hall, Daniel and Lewis Pike. If the World Wars hadn’t happened, would today’s technology be less advanced? Guru Magazine, web source: http://gurumagazine.org/askaguru/if-the-world-wars-hadnt-happened-would-todays-technology-be-less-advanced/ - Mindell, David. The War That Changed Your World: The Science and Technology of World War II. Introductory essay for the exhibition “Science and Technology of World War II” at the National WWII Museum, 2009.
Web source: http://www.ww2sci-tech.org/essays/essay2.html
Before harvesting, toughen up greenhouse potatoes for storage by not watering the plants after mid-August. Hilling soil around potatoes increases yields and prevents tubers … If uncovered, the bit in light goes green; the bit underneath doesn't. Do not water more than 1-2 inches every week right after planting, because potatoes may not develop and may be prone to diseases. Potatoes need 60-90 days frost-free to be successfully harvested; … If the potatoes get cut during digging they will rot during storage. They won't break your diet! The potatoes are round with reddish-brown skin and dense white flesh. This step is known as hilling. Larger bulbs can be sliced in half; just make sure that there is an eye on every piece. Soil that holds too much water favors root rot, and soil that holds too little causes planted crops to dehydrate and suffer malnourishment. Recommended planting time: Potatoes can be grown in many months of the year, depending on whether the garden receives frost, as potatoes are frost-tender. So, the ‘seed’ that you’ll find to grow potatoes looks like, well, a potato.
Well, not much: they need full sun exposure, like to grow in acidic soil, and prefer a sandy soil type. In the bottom of the trench in your greenhouse, top-dress the soil with organic compost (which will help in retaining moisture) or well-rotted manure (which helps in lowering the soil’s pH, and we all know now that potatoes like to grow in acidic soil). Growing potatoes from true potato seeds is fun and you can discover some very good new varieties, but it is not as reliable as growing potatoes from tubers. Keep this in mind: never store potatoes with apples; the ethylene gas emitted from the apples will spoil the potatoes. There are also around 180 wild potato species that are very bitter in taste. When to Harvest. Actually, it is better to GREEN potato seeds prior to planting, by exposing them to light with some humidity at room temperature (this method is called GREENING). The texture is because of smaller particles than sandy soil. Let us see how to add organic additives into the greenhouse soil. The popular russet varieties include russet Burbank, russet Arcadia, russet Norkotah, and russet Butte. They are actually poisonous because they contain large amounts of solanine, which will cause illness when consumed. But I would love to put my efforts into growing one of my favorite vegetables inside my greenhouse. This will let potato skins cure, allowing them to be stored for longer periods of time. It won’t even hurt your plant’s “ball”, which leads to a satisfying yield. Before adding anything to your greenhouse soil, test it to see what is already there. Your greenhouse soil should not hold too much water, because that favors fungal infections; organic additives include compost, peat, and sulfur.
As mentioned earlier, potatoes need to be stored in a cooler place, but this does not mean storing them in the refrigerator. They will also do well on your porch, in grow bags, pots, large containers, and raised beds. These potatoes have a buttery flavour and are suited for roasting, mashing, boiling, grilling, and steaming. Each seed piece should contain at least two or three eyes and weigh around 2 ounces (about 57 g). So it is important to use a frost cover, or heat the greenhouse to the appropriate temperature, especially if there are indications that the weather may fall below freezing. If potatoes are properly stored, they will stay fresh for a month. After the foliage has died, dig up a potato to see if the skins rub off easily. Round red potatoes are mostly grown in the Northwestern United States and include varieties like red Norland and red Pontiac. But, for experimental purposes, you can definitely try growing potatoes from seeds. Dig gently and rather carefully to avoid puncturing the tubers. Now, let’s discuss the popular potato varieties that you can grow. If a potato seed is smaller than an ordinary egg, there’s no need to slice it. You should let the potato pieces cure for around 10 days so they can develop a corky layer which will prevent them from decaying when planted. Leaving it on the surface over the winter. There are other potato pests too, like aphids and flea beetles, and we have already shared an article on how to deal with harmful insects using other beneficial insects in the greenhouse. These are medium-sized, round potatoes with freckled brown skin. Be careful not to cut or bruise the potato skin. Even if the temperature is adequate during the winter season, there will not be enough light for your potato plants to thrive.
This vegetable is actually grown from seed potatoes, i.e. the potatoes which have eyes or buds on them. The ideal pH for most vegetable crops is between 6.0 and 6.5; this pH favors peak microbial activity, and plant roots access nutrients best when soil pH falls within this range. It is important to keep the seed potatoes in the light, because if they are in the dark the buds will be white instead of green, and they will be very fragile and easily broken. A wet soil could mean that you will have to wait until the potatoes are air-dried before placing them into bags or boxes. In that case, you can plant potatoes during autumn, and during winter you will have to provide your potato plants with supplementary lighting. If you are planning to grow potatoes outdoors, then you will have a number of different restrictions that will determine the variety of potatoes you should opt for. Due to the presence of solanine in green potatoes, they taste bitter and if eaten can cause diarrhea and vomiting. However, you might need some supplementary artificial lighting if sun exposure is less than 6 hours. How can you add organic matter to your greenhouse soil? You can grow potatoes from potatoes! Ensure you bury the seed potato with its eye side up. Potassium is crucial for carrots, radishes, turnips, onions, potatoes, and garlic. You can add a small amount of partially decomposed compost, which has a high nitrogen concentration. Do not refrigerate. Place them at least 12 inches apart, covering them with approximately 3 to 4 inches of soil. How does it work inside the greenhouse? Before proceeding with the article, I would like to thank all of our viewers, especially those who comment on our articles.
We need to adjust soil pH: if it is too acidic, raise the pH by adding lime (pulverized limestone) or wood ash to the greenhouse soil. However, for mature potatoes, you need to wait for about 2 to 3 weeks. Solanine gives off a bitter taste and is toxic. Then you plant them to get a head start. In tissue culture, plants are grown in test tubes in a liquid medium full of nutrients. Fingerlings are thumb-sized potatoes that can grow up to 3 inches. You can easily check a planting calendar or browse frost date locations by state or province. If this is the soil type in your greenhouse, then you need to add coarse sand, compost, and peat moss, which will add drainage and a good texture to the greenhouse soil. If you choose to consume green potatoes, peel them deep, boil them hard, don't stuff your face. Give your potatoes a chance to sprout before planting them. Some of the potatoes might be great and tasty, while others will be bitter and taste bad. Harvesting. These are russet, yellow, red, white, blue/purple, fingerling, and petite. I would say the green peelings would be better composted rather than fed to the hens. How long is the winter season, etc. Start digging potatoes when the first hard frost is expected. These methods allow us to introduce desirable properties in the plants (increasing the harvest per plant, decreasing the space requirement, etc.). Compost: Like vinegar and egg is a good conditioner for our hair, compost performs the same function in the soil. It is the major potato pest all over North America. Allow the potatoes to develop a thick skin by cutting the browning foliage to the ground 10 to 14 days before harvesting.
There are also ways to improve the potato plant’s chance of survival and increase your yield. Heavier soil types, as compared to sandy soils, hold onto nutrients and moisture. Maintain a moist soil from when the potato plants begin sprouting until a few weeks after blossoming. Tap water usually has a neutral to high (alkaline) pH, while potatoes develop best in slightly acidic soil, around pH 5.5. Add some pine straw on top. In spring the adult beetles become active, around the same time potato plants appear above the ground. The potato tuber needs 1 to 2 inches of water per week. The next important point is knowing your greenhouse soil's N-P-K, the symbols for the primary plant nutrients: Nitrogen (aids stem growth, strong leaves, and dark green color in broccoli, cabbage, lettuce, etc.), Phosphorus (aids early plantlet and root growth, seed formation, setting blossoms, and developing fruits, and is important for cucumber, peppers, squash, and tomatoes), and Potassium (promotes vigor in roots, fights plant diseases, provides resistance against stress, and increases flavor). Russets are suited for deep frying (french fries), mashing, and baking. Potato tubers turn green when they are exposed to sunlight during growth or storage; growing underground there is no light, so unless your earthing up (or mulching) was poor, they don't turn green. Around 200 different varieties of potatoes are sold throughout the United States. With staggered plantings you can have a long season of greenhouse potatoes. Potato tubers are actually a modified stem, with approximately 70-75% water content and the remaining 25-30% dry matter. No soil is going to be perfect; it may need treatment before it is acceptable in quality.
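The 1-to-2-inches-per-week watering guideline above translates directly into volume for a bed of known size. A rough sketch, assuming the standard conversion that one inch of water over one square foot is about 0.623 US gallons (the function name and default of 1.5 inches are mine):

```python
GALLONS_PER_SQFT_INCH = 0.623  # one inch of water spread over one square foot

def weekly_water_gallons(bed_len_ft: float, bed_wid_ft: float,
                         inches_per_week: float = 1.5) -> float:
    """Weekly water volume for a bed, using the 1-2 inches/week guideline."""
    area_sqft = bed_len_ft * bed_wid_ft
    return area_sqft * inches_per_week * GALLONS_PER_SQFT_INCH
```

For a 3 by 3 foot raised bed at the midpoint of 1.5 inches per week, this works out to roughly 8.4 gallons of water.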
Small seed potatoes can be planted whole, but large spuds must be cut. In a container, fill the bottom 15cm (6in) with potting compost and plant the seed potato just below this. Soil pH rarely reaches the upper or lower limits of the pH scale. Peat moss also functions as a conditioner that aids in retaining soil water and lowers soil pH. Basically, as long as there isn’t any frost, seed potatoes (a whole little potato ready to shoot) can be planted. Place seed potatoes cut side down, about 12 to 14 inches apart, at the bottom of the trench; rows are best spaced two to three feet apart from each other. To avoid potato scab, dust seed potatoes with sulfur before planting. Since volunteer potatoes grow so well in cool weather, autumn planting is worth trying. Potato pots and planters make for hassle-free harvesting. Take your pick from russet, Yukon, fingerling, and more varieties to get your potato patch started. Some gardeners allow the tubers to germinate before planting. A large number of potato varieties set fruit, and some varieties don’t. With the exception of plant breeders, we propagate potatoes vegetatively, or asexually; potatoes of the same variety are genetically identical to their parents. The tubers can lose their natural form and shape if not provided with enough water, especially at planting time. As for planting slightly moldy seed potatoes: the warning that the mold could spread and ruin all future potatoes in the garden seems dubious; it is theoretically possible, but unlikely.
Commonly used greenhouse soil amendments include ground bark, which is made from various tree barks and upgrades soil structure. Potatoes can be planted in many ways, depending upon where you live (apartment, house, etc.) and the space available: you can plant potatoes in beds just as you would if you were farming an open field. Used compost can be applied again for your remaining crops. When purchasing seed potatoes, make sure they are certified disease-free, and choose ones with bulging eyes or buds; keep them away from sunlight, otherwise they will turn green. Over the next couple of weeks small buds will appear on the seed potatoes, which will grow and turn green. Hilling keeps your potatoes from getting sunburned, which would produce solanine. Water your greenhouse potatoes regularly and keep adding soil as the potato sprouts grow until you reach the top of the raised bed; repeat this process until the bed is full, then leave the sprouts to grow. Test the soil pH inside the greenhouse to see whether it falls within the ideal range of 6.0-6.5. The fruit of the potato plant looks like a small green tomato and contains seeds (300 seeds on average). Dig carefully so as not to puncture the tubers. Your greenhouse potatoes will be ready for harvest after the foliage has dried out. Plant seed potatoes by hand, orienting the new shoots and leaves up; exposure to light can cause tubers to turn green and produce a chemical called solanine. Hoe the dirt up around the base of the plant in order to cover the tubers as well as to support the plant. There are more than 4,000 different potato varieties, and they come in different shapes and sizes.
Growing Potatoes from Seed: the seeds can be extracted from the berry and grown on to produce new plants, though plants grown from true seed are not the same as the parent potatoes. Hilling is a necessary step when planting and growing potatoes inside the greenhouse. (Other topics covered here: propagating potatoes from tissue culture, planting potatoes in greenhouse trenches, planting potatoes in greenhouse raised beds, saving potatoes from pests and diseases, when and how to harvest, and how to store potatoes.) When cutting seed potatoes, try to have at least three nice eyes per piece, and plan the cut so the cut side is as small as possible, minimizing the risk of rot from the injury. Thin-skinned potatoes can be cooked unpeeled. New potatoes can be very small. Because young tubers exposed to direct sunlight can sunburn or turn green and bitter (and potentially poisonous), pile dirt or mulch around the plants as they grow. Greenhouse planters distinguish 'new potatoes' from 'mature potatoes'. The only thing to consider when growing potatoes in your greenhouse is the expected harvesting time. Small crops of potatoes can also be grown in large, deep containers, and this is a good way of getting an early batch of new potatoes; containers simplify the harvesting and give your potato plants enough room to grow. Potatoes like cool temperatures, a loose soil at about 45 to 55°F (7 to 13°C), and good drainage. Potato tubers exposed to light become green naturally as the plant seeks to harvest the light. Russets are also called old potatoes and baking potatoes. Decomposed leaves provide structure to the soil and add nutrients. Early harvest varieties: Red Norland, Irish Cobbler. Late harvest varieties: Chieftain, Cranberry Red, Gold Rush. In warmer climates, potatoes can be grown as a winter crop.
For growing potatoes, your greenhouse raised bed should be at least one foot high and should span an area of at least 3 by 3 feet. Blue/purple potatoes are heirloom potatoes with purple or greyish-blue skin, and they usually have inky blue flesh. Outdoors you face a lot of limitations on the type of potato variety you can grow; inside a greenhouse, however, temperature is much more stable, the growing season is longer, and the soil composition can be altered easily to suit the requirements of the specific variety you want to grow. When growth starts to emerge, add organic mulch to prevent weed problems, cool the soil, and preserve moisture. Space in greenhouses is normally scarce, hence most greenhouse growers plant potatoes in raised beds. Seed potatoes that have turned slightly green but carry nice sturdy shoots are still fine to plant. Varieties like Kennebec, Superior, and Atlantic fall in the round white category. The depth of planters is perfect for the necessary hilling and layering of soil and compost. If you are planting in a container, keep 'earthing up' until you are within 2 inches of the top of your container. This kind of variation is how plants adapt to altered circumstances such as different climates. Each cut seed piece should be blocky and weigh about 1-1/2 to 2 ounces, with at least two eyes or recessed dormant buds. Aim to have your buds 1cm to 2cm long and green on your planting date. It is highly advisable to take a look at a plant companion chart before planning your greenhouse, to know which vegetables potatoes are most compatible with. Plant seed potatoes into dug trenches or individual planting holes. Like ideal soil pH, plants have an ideal soil type too.
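The bed and spacing guidance above (a bed of at least 3 by 3 feet, roughly a foot between plants in a row, rows two to three feet apart) can be turned into a quick capacity check. A hypothetical sketch, mine rather than anything from the article, using floor division and defaulting to 12-inch in-row spacing with 2-foot rows:

```python
import math

def bed_capacity(bed_len_ft: float, bed_wid_ft: float,
                 in_row_spacing_ft: float = 1.0,  # ~12 inches between plants
                 row_spacing_ft: float = 2.0) -> int:
    """Rough count of seed pieces a bed holds at the given spacings."""
    rows = max(1, math.floor(bed_wid_ft / row_spacing_ft))
    per_row = max(1, math.floor(bed_len_ft / in_row_spacing_ft))
    return rows * per_row
```

At these field spacings the minimum 3 by 3 foot bed holds only about three seed pieces, which is one reason greenhouse growers rely on hilling and deep layering rather than wide beds.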
Do not wash potatoes until right before you plan to cook them. Maintain uniform moisture, particularly from the time potato sprouts become noticeable until several weeks after they blossom. Yellow potatoes have a delicate flavor and are suited for roasting, frying, mashing, steaming, stews, and salads; varieties like Yukon Gold, German Butterball, Yellow Finn, Carola, Nicola, and Alby’s Gold fall in the yellow category. Potato scab is usually caused by high pH levels in the soil. Russets have an elliptical shape with a rough, netted brown skin. Another tip to get near-perfect skins for your potatoes is to put the compost into a shredder to make a fine blend. Flexible grow bags make it comfortable for your potatoes to stretch freely without limitation. You can grow all of the above-mentioned types of potatoes in your greenhouse. After cutting seed potatoes, give the pieces a day or two to heal before planting; the healed surface helps them retain moisture and resist rot. We grow certain vegetables alongside each other in the greenhouse in order to harvest the benefit of their compatible characteristics, which include their growth habits, pest-resistant qualities, and nutrient requirements; strong-scented plants like lavender, mint, and rosemary stop grazing animals from snacking on nearby vegetables. Yes, you can plant a potato that has sprouted. The roughly 200 commonly sold potato varieties fall into seven categories. Some seed potatoes may carry a little mold, and some gardeners warn never to plant moldy potatoes for fear the mold could spread to future crops. Sap-sucking pests extract fluids from the leaves and stems of your potato plants, which may cause serious damage.
If you happen to have a rocky garden, simply place the seed potatoes right on the ground and cover them with mulch spread to a minimum depth of two inches. A single seed potato is usually cut into multiple pieces, with at least one eye bud on each piece. Topography and your regional weather also cause considerable variation, so keep both in mind when choosing a variety. Buy seed potatoes that are certified by a government authority to be disease-free. Tubers turn green when left for longer periods under the sun. Popular greenhouse choices are tan-skinned and red-skinned varieties with white flesh. You don't want the soil to be too sandy, nor should it retain too much water. Apply organic pesticides preferably at dawn or dusk, to prevent hurting beneficial insects. Inconsistent watering can cause the crop to dehydrate and suffer malnourishment.
Never eat the fruit of the potato plant. Russets are fluffy when cooked, and you can hill plants with leaves or straw instead of soil. Phosphorus can be added with (slow-release) rock phosphate. It is better to water with collected rainwater instead of tap water. Sandy soil has large particles, which is why it drains so quickly. You can also lower greenhouse soil pH using fertilizers. Potatoes can be propagated from tubers, tissue culture, and true seeds, with tubers being the usual method. A classic companion-planting example is the Three Sisters trio (climbing beans, maize, and winter squash). Certified seed potatoes help you avoid virus diseases. Green tubers contain a noticeable amount of solanine, which can cause illness when eaten. Know your local frost dates. Cure harvested potatoes in a dark place, preferably at around 45°F to 60°F.
Red potatoes have dense white flesh. Inside the greenhouse, keep about 12 to 14 inches of distance between every potato seed piece.
Sustainability and Schools: Educating for Interconnection, Adaptability, and Resilience by Greg Smith In my home state of Oregon it’s impossible to pick up the daily paper and not encounter some article that deals with concerns about environmental or social sustainability. With climate change, dramatically increasing energy costs, economic instability, and growing worries about the availability and cost of food, journalists and the public are at last paying attention to issues that for decades were pushed to the margins of the nation’s collective consciousness. This shift in public awareness has yet to have much impact on American schools where a preoccupation with testing remains the central concern of the day. This should not surprise us. Education tends to follow social trends rather than initiate them. Given the rapidity with which changes are occurring in the environment and the economy, however, schools may need to take a more active role in preparing young people to address challenges posed by a warmer and oil-strapped world. All of our futures could well depend on their capacity to respond to these new conditions with intelligence and a spirit of generosity and compassion. Fortunately, some educators are now adopting teaching approaches that promise to help young people grapple with the dilemmas of civic involvement and problem solving. Few teachers explicitly address climate change, rising fuel prices, or food shortages head-on; what they do instead is create learning experiences that engage students in community issues while preparing them to become actors more than consumers or victims. I believe that these educators are laying the foundations of an education for sustainability and equity. What I find reassuring is the frequency with which I encounter these educational innovators. 
In the first few months of 2008, I heard stories about three schools where students are being drawn into experiences that demonstrate young people’s capacity to problem solve and act. They represent the possible and demonstrate what thoughtful educators can accomplish despite funding dilemmas or the constraints of No Child Left Behind. The first is from the Oregon City School for Service Learning, at the end of the Oregon Trail just south of Portland. Students had been complaining about the awful taste of the drinking water at the school. Interested in creating service learning opportunities that didn’t require transportation dollars, teachers encouraged them to do something about it. The Oregon City students contacted the South Fork Water Board and asked for their help in conducting a variety of water tests. To their surprise, they discovered that the water contained high levels of copper—safe but unpleasant to drink. They assumed that the source of the copper was old plumbing in the building. Students then investigated possible solutions, including retrofitting the building with new pipes. Conversations with district officials convinced them that this latter option was prohibitively expensive, so they suggested that one of the drinking fountains be dedicated to include a water purification unit. Students researched costs for installing and replenishing a Brita filtration system and presented their project to the School Board, requesting its support. The Board and superintendent agreed with this solution, and the students no longer had to drink copper-laced water. Reflecting on this experience, one student noted, “I had always been told that one person could ‘make a difference’ but never really understood what this meant. 
Now I do, and I know that if I have a problem, and if I apply serious research to it and collect my facts along the way, that I will be taken seriously, and I can make a difference!” I heard the second story from a middle school principal in Winnetka, Illinois while attending a conference north of Chicago. He had brought a group of eighth-grade students to the North Dakota Study Group’s annual meeting to make a presentation about a project they had been involved with the year before. In their social studies class, they learned that in 1965 Martin Luther King, Jr. had delivered a speech about ending housing discrimination to approximately 10,000 people on the Winnetka Village Green. After conducting a search, however, they could find no written documents about the speech in any libraries or on-line sources. Working with their teacher, Cecilia Gigiolio, they developed a proposal to construct a historical marker at one corner of the Village Green to commemorate the speech. They met with other civic groups to seek their support before presenting their ideas to the Winnetka Village Council. After the council accepted their proposal, an unobtrusive monument was designed, funds were raised, and the monument installed. Now future generations in Winnetka will be reminded about King’s speech every time they pass that corner of the Village Green. Their teacher observed that this was one of the most powerful learning experiences she had ever orchestrated. The third story is again from Oregon, this time in Cottage Grove in the southern Willamette Valley. Earlier that year, I had a chance to spend an afternoon at the Kennedy School, a program that works with students who are credit deficient and in danger of dropping out. 
Under the leadership of a young principal, Tom Horn, the school has gone through a transformation over the past couple of years, partly as a result of Horn’s efforts to reach out to families of his students, and partly because of the way teachers at the school are linking student learning to the needs of the community. Students work in crews of 15 along with a teacher and are involved in a range of different projects. In the spring of 2008, the school embarked on the development of a number of comprehensive garden sites around Cottage Grove, including the three trailer parks where many students live. The locally-owned Territorial Seeds Company provided seeds, and students planted about 1,000 a week as starts in the school’s greenhouse. These were then transplanted into garden sites as the weather warmed. Another project involves working with the City of Cottage Grove to initiate wetlands mitigation efforts on industrial sites. Students use native plants they propagate themselves, and the school receives compensation for their efforts. This money is then used to pay for school trips to places like Utah where students engage in biological field studies. The school’s work is resulting in regular press coverage and extensive public support as well as real engagement and excitement on the part of the school’s students. I take a number of things from these stories that will weave throughout the remainder of this article. The first is that the learning experiences they describe reflect issues that are important to students or important to their communities. Second, in each of the stories, students were given the chance to develop competencies clearly transferable to the work of adults: research skills, communication skills, gardening skills, environmental restoration skills. The answer to the question, “Why are we learning this?” was directly in front of students’ eyes. 
Third, these experiences gave students the opportunity to learn how to work collaboratively as members of a team for important shared goals. This kind of collective endeavor is what often inspires people to continue to seek out similar opportunities for community involvement when they become adults. Finally, these projects proved to the involved students that they could make a difference, that they had voice and power, and that their lives mattered.

What is sustainability?

So, what does all of this have to do with the creation of more sustainable communities? Doesn’t sustainability mostly have to do with recycling and using less energy and fewer resources? Of buying locally and organically? Of building green schools and driving hybrids? Of installing solar panels or purchasing green power? Yes, sustainability has to do with all of these things, and all of these responses will need to come into play if we hope to reduce humanity’s ecological footprint and forestall some of the consequences associated with climate change, water and food shortages, or wars over diminishing resources like oil and natural gas. But people seeking to grapple with these challenges are now arguing that more will need to be done than adopt different production methods and technologies. We will also need to change the way that we interact with one another and the planet as well as, to borrow Einstein’s phrase, the way we think. What I’d like to move on to next is a brief discussion about sustainability and then an exploration of an approach to curriculum development that focuses on giving students access to the kinds of experiences described above, learning experiences that I’ll argue may underlie changes in attitudes, beliefs, and dispositions related to what may be necessary to forge more sustainable societies. The term sustainability began to be used with reference to the environment and society in the 1980s.
The most commonly cited definition is from a United Nations report published in 1987 entitled Our Common Future. The authors of this report said that a sustainable society is one that “meets the needs of the present without compromising the ability of future generations to meet their own needs.” The initial concept of sustainability is very similar to the concept of sustainable yield from the field of forestry. If a forest is managed sustainably, its long-term productivity over generations is never threatened by current cutting practices or levels. If a society were to become sustainable, the same idea would be applied to all resources. More recently, the notion of sustainability has been extended beyond resource use itself to the impact of industrial and agricultural production on people and the land. In the late 1990s, British writer John Elkington introduced the concept of the triple bottom line, which asserts that when businesses assess their own activities they need to look not only at the financial bottom line but also at their impact on the environment and the human communities in which they operate. This attention to economy, environment, and equity (the triple bottom line) has come to dominate most contemporary discussions about sustainability. The primary advantage of this formulation is that it links the economy to the environment rather than setting these domains in opposition to one another. Over the past decade, many major corporations and a number of European states have bought into this perspective, something that Toyota’s recent advertising campaign about its green practices demonstrates. In the Pacific Northwest, a program developed by a Swedish oncologist, Karl-Henrik Robert, has been especially influential in business and public discussions about sustainability. Called the Natural Step, it provides a more specific way to think about the impact of economic activities on the environment and human communities.
Working with a broad range of Swedish scientists, Robert articulated four system conditions necessary to achieve a sustainable society. These are:

1. No accumulation of toxic or potentially toxic materials from the earth’s crust.
2. No accumulation of toxic or potentially toxic human-made materials.
3. No destruction of habitat in ways that threaten species diversity or natural services.
4. Equitable distribution of resources to all human beings.

The Natural Step has found a North American home in Oregon where scores of corporations, architectural and engineering firms, and public agencies have adopted elements of Robert’s agenda. These include nationally known firms such as Nike, Norm Thompson, and Hewlett-Packard as well as the locally focused Portland General Electric and TriMet (public transportation). Although few of these organizations have truly embraced all of the system conditions, especially the fourth about equity, many are in other ways attempting to reduce the use of resources as well as pollution associated with their activities. Their efforts are one of the main reasons that Oregon is on the global sustainability map. Most mainstream discussions about sustainability focus on the economy and the kinds of technological and production changes mentioned earlier. Other activists, however, share Einstein’s perspective about needing to change our way of thinking, especially our allegiance to an economy predicated on endless material growth and rising standards of living. These are the people who argue that not only must we produce things in a more environmentally conscious way and distribute them equitably, we also need to consume less and organize our communities to assure that despite having less, the basic needs of a greater proportion of the world’s population are better met than they are today.
These spokespeople argue that the planet simply does not contain enough trees or oil or fish or water to allow everyone to achieve the same standard of living as people in the United States, Europe, Japan, or the upper classes in China, India, and other parts of the developed world; residents of industrialized and industrializing nations will need to reduce the amount they consume and find other sources of meaning and security while being willing to share equitably the remaining resources that do exist. Attempting to grapple with this dilemma may seem virtually impossible, but the advocates of this position suggest that if the basic needs of all are not met, human beings risk the creation of a fortress society in which a decreasing number of groups enjoy economic privileges which must be defended against a growing majority of impoverished and disenfranchised people—a situation that in many respects uncomfortably resembles our current circumstances. So what are humanity’s options? This is where my initial stories come in. My suspicion is that because contemporary conditions lie so far outside the ways of thinking that have created modern institutions and the expectations associated with them, humanity is going to need to invent or reclaim ways of being with one another and the Earth predicated on a recognition of planetary limits, our fundamental dependence on natural systems and other people, and a willingness to participate in the shaping of more sustainable cultures. This transition seems unlikely to happen in Washington, D.C. or Tokyo or Brussels or Beijing. People who have risen to positions of political and economic power in these global cities have done so because of their allegiance to systems that are now proving themselves to be unworkable. These leaders also are showing less and less willingness to invest in the needs of common citizens. 
The fact that people in New Orleans lived for years in formaldehyde-off-gassing FEMA trailers is a grim indicator of this possibility. I suspect that if real change is going to happen it will be enacted by growing numbers of people acting locally like the students in Oregon City and Winnetka and Cottage Grove. Climate change activist Ross Gelbspan—a former editor of the Boston Globe—says much the same thing. Writing in the web-based environmental journal, Grist, he argues in an article entitled “Beyond the Point of No Return” that humanity’s response to climate change will necessarily have to be largely local—this is where human adaptations happen, and that if we wish to avoid descent into a world in which the wealthy are protected and supported by the Blackwaters and Halliburtons of the world, we must, to quote Gelbspan, “reorganize our social structures to reflect our most humane collective aspirations.” This, I think, is the task that educators concerned about sustainability must take on: to surface those “most humane collective aspirations” and prepare students to reinvigorate our community and democratic processes while enacting the innovations required by changing planetary and social conditions.

What kind of people will be needed to move society in the direction of sustainability?

OK. How might this be done? This is where I’d like to turn to the subtitle of this article: “Educating for Interconnection, Adaptability, and Resilience.” What do I mean? First, the experience of interconnection seems to lie at the heart of ethical and caring behavior. When people grasp the degree to which their own physical and psychic welfare is dependent on the welfare of others or the health of natural systems, they become much more likely to behave responsibly towards them and to take steps to protect them from harm. Humanity’s higher aspirations tend to reflect this sense of interconnection and the desire to preserve and extend it.
The root of the word religion, for example, means to bind together. Absent that sense of being bound together, anything goes. This is one of the reasons that nature writer Robert Michael Pyle worries about what he calls the “extinction of experience,” the fact that many children growing up today have such limited contact with the natural world. Without that contact, Pyle fears that they will demonstrate little interest in preserving it. The same could be said of children’s diminished contact with their communities. What will lead them to care for those communities if most of their lives are spent in isolation from them—as they play video games, watch TV, or are safely sequestered in the aural cocoons of their iPods? One thing educators can do to acquaint students with those higher collective aspirations is to make sure that students are given a chance to know their own communities and places well. Second, human adaptability has been the characteristic that has allowed our species to populate the planet and survive as well as we have without the kinds of physical protections that permit other animals to successfully navigate the world. The ability to adapt, however, depends on our ability to perceive what is happening around us accurately and to respond appropriately. This is where awareness and intelligence come into play, as well as the willingness to take risks and try new things. People in the future will need to be able to observe, problem solve, and act in order to adapt to the challenges posed by climate change, resource exhaustion, an unstable economy, and the forms of social instability likely to accompany such events. To prepare young people today for these challenges, they can be given opportunities to participate in efforts to address issues in their own schools and communities in an attempt to make them better places for everyone. Finally, the difficulties students are likely to encounter in coming decades are almost certain to be daunting.
Dealing with them will require resilience, persistence, and determination. Resilience is tied to the ability to keep coming back despite challenges, failure, or even the threat of failure. Studies of resilience in children often point to their relationship to at least one person who has faith in their capacity to succeed and do well; that faith then contributes to their own self-efficacy. A classic psychological exploration of resilience, Viktor Frankl’s Man’s Search for Meaning, argues that Nazi concentration camp survivors tended to be people who saw their personal experiences as linked to the experiences of others and a broader sense of meaning. They were people whose own individual stories were folded into the stories of their communities and of life itself. Engaging young people in learning activities that connect them to others and that give them an opportunity to address challenges to their community could potentially foster in them such resilience as well as a deep understanding of the satisfaction and sense of personal well-being that come with purposeful action in the company of others. What contribution could educators make to the development of interconnection, adaptability, and resilience? I can almost hear readers thinking, “Nice words, but what does this look like?” Fortunately, I’ve spent a share of the past decade or so visiting schools and collecting stories that demonstrate how this kind of education might happen. Although not all of the schools where this work is occurring would necessarily say they are directly confronting issues of sustainability or cultural change, they are in different ways cultivating interconnection, adaptability, and resilience. They are doing this by incorporating curriculum and instruction characterized by a focus on local and regional issues, oftentimes coupling this with opportunities for students to engage in projects that have value for the broader school or community.
Called place- or community-based education, this approach is aimed at developing in children a sense of relatedness to their own regions, familiarity with important local knowledge and issues, the capacity to act collectively with fellow students and outside-of-school partners to address community concerns, and a commitment to participatory citizenship and stewardship. An additional benefit in our era of accountability and standards is the way these experiences are often associated with higher levels of academic engagement and achievement. In talking about place- or community-based education, I do not mean to suggest that all of a student’s school experience should focus on local knowledge or issues, but enough to draw them into a sense of community membership and connection to the natural world. I am furthermore not suggesting that these kinds of educational experiences on their own will be a panacea for the challenges humanity will face in coming decades. I believe, however, that adults who recognize their connectedness to others and the world, who have learned how to adapt to changing conditions, and who possess the resilience needed to turn difficulties into opportunities will have a better chance of creating a sustainable society than people who have not developed these attributes or skills. Nurturing interconnection. Now it’s time for more stories. Boston’s Young Achievers Science and Mathematics Pilot School models how connectedness can be cultivated in an urban setting. In addition to focusing on math and science, the Young Achievers School also places social justice and environmental issues front and center in its curriculum development efforts. During the 2007-2008 academic year, second graders invested much of their energy in an investigation of important community issues.
Students explored the experience of people living in Boston’s Chinatown, air quality issues and asthma rates, the role of public art murals in community health, and space needs at their own school. In the spring, they shared their findings on WBUR’s weekly Saturday night radio show, Con Salsa, a public presentation that required high quality written work and speaking skills. This experience provided both an incentive to develop literacy abilities and a self-esteem boost for all participants. I saw similar efforts to connect students to their places in Montgomery, Alabama, during a 2005 convocation of the Program for Academic and Cultural Excellence in Rural Schools (PACERS). PACERS is a project that has been addressing educational and community development issues in rural Alabama since the 1990s. Central to its efforts have been strategies to engage students in their communities in meaningful ways. An especially powerful initiative involved giving students the skills and resources needed to become community journalists. Throughout Alabama as well as other rural regions of the United States, small town papers have become a thing of the past. Newspapers published in larger population centers rarely carry news of anything other than crimes or scores from athletic contests in outlying villages and towns. It is difficult for citizens to get information about local issues that require their attention. High school students in 21 communities took on the task of informing their families and neighbors about these issues and in the process developed both the skills of budding journalists and a sense of belonging to communities where adults listened to their voices and valued their energy and attention. One former student at the convocation—now a graphic designer for the daily paper in Montgomery—observed that when he was in high school three things were central to his world: God, family, and PACERS.
At the Wells Community School in Harrisville, New Hampshire, a second grade teacher has adopted an even simpler approach to connect her students to their place. After moving to a new classroom, she noticed a stand of Eastern white pine two dozen yards away. She decided to focus on nature observations throughout the year and thanks to a small grant bought kid-friendly field guides, binoculars, and a digital camera to help out with the project. Students became eager participants, carefully keeping track of birds or other animals that passed by over the course of the year. Following up on students’ suggestions, they built a brush pile and put out feeders to attract wildlife. They then shared their findings with students in Italy and Brazil who were keeping similar records of animals and plants they encountered in their schoolyards through the web-based service provided by www.epals.com. In each of these examples, educators provided opportunities for students to immerse themselves in the human and other-than-human life of their communities and places. By doing so, they created a space where students can develop the relationships that undergird both citizenship and stewardship. Research conducted by the Place-based Education Evaluation Collaborative over the past six years points to the positive impact that learning experiences grounded in community issues and the natural world can have on students’ civic involvement, environmental awareness, and achievement. Cultivating adaptability. Cultivating adaptability can be more challenging. Nurturing a sense of interconnection is generally non-threatening. Problem-solving, innovation, and action can potentially lead to conflict and must be handled with thoughtfulness and tact. 
Demands related to their discovery of high levels of copper in their school’s water supply in Oregon City, for example, could have alienated district officials if students hadn’t learned how to negotiate and been willing to consider multiple solutions to the problem they had identified. Dealing with challenging issues both now and in the future requires such abilities. A program called Promoting Resolutions with Integrity for a Sustainable Molokai (PRISM) is giving upper elementary and middle school students in Hawaii a chance to learn how to do this. Created in the mid-1990s by two fifth- and sixth-grade teachers at the Kualapuu School, PRISM uses a process developed at Southern Illinois University called Investigating and Evaluating Environmental Issues and Actions. The process requires students to identify all of the important groups concerned about a particular issue, uncover their beliefs and values, and articulate their proposed solutions. After gaining this knowledge and investigating the dimensions of an issue, students then begin to develop their own suggestions and the actions that follow from these. At the beginning of the school year, teachers work with students to choose a topic that will be the focus of their inquiry for the next several months. Students have studied and developed proposals about solid waste disposal at the school and on the island, the impact on native habitats of an expansion of the airport runway and ecotourism developments, the restoration of traditional Hawaiian fishponds, and emergency preparedness. Students interview resource professionals, read technical documents and plans, and then create presentations for a two-day meeting generally held in the spring. Parents and community members are invited to attend these.
Students’ work has come to influence adult involvement in these topics, leading family members who might not have seen themselves as activists to begin contributing their energy to the issues students have investigated. Students also develop action plans. They initiated a recycling program at the school that subsequently grew into an island-wide recycling program. They wrote a bottle bill that was introduced but defeated in the Hawaii State Assembly. They have engaged in the restoration of traditional fishponds and regularly write columns about their research in the island newspaper. Students in other schools have taken on economic as well as environmental concerns, an issue that will be especially important when communities grapple with what it means to transition to a post-fossil fuel economy. Howard, South Dakota, is located in the southeastern quadrant of the state. Like many Midwestern communities, it has experienced a steady drop in population and job opportunities for decades. In the mid-1990s, Randy Parry, a business teacher at the local high school, joined up with faculty at a local state college to write a grant to the Annenberg Rural Challenge aimed at creating more economic opportunities in ways that preserved the integrity of natural systems. Awarded the grant, Parry proceeded to involve his students in their community’s economic life. One of their first projects involved surveying county residents about where they spent their money—in local businesses or in the nearest big towns of Mitchell or Sioux Falls. They found that half of their respondents did most of their buying out of the county, depriving businesses of the multiplier effect that occurs when money is re-circulated locally. They also asked survey respondents about what kinds of changes would lead them to spend more of their earnings in Howard’s businesses. They learned that placing an ATM close to the stores would make a difference.
After tallying the data, students let county residents know that if they spent only 10% more of their disposable income close to home, seven million additional dollars would be added to the regional economy and more sales tax revenue would be available for local government. People listened, and over the next year, taxable sales in Miner County increased by $15.6 million, and then gradually stabilized at this level. Through students’ collection of data and their development of plans and proposals, they helped their community adapt to changing circumstances in ways that have allowed it to survive. Similarly, on Molokai, students involved in the PRISM project are gaining the tools needed to make thoughtful decisions about how their island home can respond to development pressure from outside forces in ways that preserve the beauty and integrity of local ecosystems. Developing resilience. In many respects, resilience could simply be one of the outcomes of educational experiences that connect children to others and their place and that give them the opportunity to use their lives and energies in activities that win them the respect and appreciation of their families and neighbors. A final example, however, demonstrates how an exploration of local history in Montana affirmed for students their ability to deal with difficulties and contribute to the improvement of their communities. In the mid-1990s Jeff Gruber, a Libby High School social studies teacher, invited his students to participate in a community study aimed at surfacing information that might help them figure out how to make good decisions about the town’s future. Libby at the time was experiencing even more challenging forms of economic disruption than Howard. As in many places, conflicts and fears ran so deep that civic leaders avoided calling a public meeting to explore these issues. Gruber and his students did what others could not.
They began a conversation about who Libby residents are, why they stay in Libby, what cultural resources they possess, and how they could make life better. Students then embarked on an investigation that continued for a number of years. One of their first projects involved collecting thousands of photographs from Libby and assembling them as an extended photo essay about the town’s future. Other projects took students to the local plywood plant where they interviewed millworkers about their jobs and learned firsthand about the steps that transform trees into wood products. They wrote a pamphlet about what they learned, which to their and the millworkers’ surprise became a historical document itself when the mill was closed by Stimson Lumber in 2003. Now deeply committed to their place, students were not prepared to take this event sitting down. With their teacher, they prepared a presentation summarizing what they had learned about their community and took it to the headquarters of the Stimson and Plum Creek Lumber Companies in Portland. As writer Michael Umphrey observes: “. . . the kids did not imagine villains—their game was understanding. In that spirit, they wanted the corporate officers to understand the sometimes devastating impact their actions had on the local community. They were beginning to understand that one reason for learning was to find their voice.” Students also developed a deeper understanding about the factors that had contributed to Libby’s continued survival. As they reported in their presentation, “We looked to Libby’s past for answers to our current troubles. But we didn’t find answers. What we found was that life had always been difficult, but that our grandparents and great-grandparents had always found a way to help each other and get along.
And so will we.” In Libby and other Montana communities, young people have begun to realize that their success and well-being are intimately tied to the success and well-being of others, a story that is not regularly conveyed by the mainstream media. From this story they are gaining a sense of resilience essential to the creation of more sustainable societies. This story of mutual support and collective identity is exactly what Libby and other small towns like it will need if their current residents are to weather the storms of economic globalization and a declining natural resource base. Stepping up to the plate and making it happen. In conclusion, I’d like to share one more story about the work of a high school teacher that has become a model for community regeneration worldwide. It again points to the possible and serves as an exemplar of what educators concerned about the welfare of their communities and the planet can accomplish. In the 1950s, Ari Ariyaratne taught in a high school in Colombo, Sri Lanka, where he worked primarily with children of the upper class. He realized that many of his students would become business or political leaders of the country, but that few of them had any personal knowledge about how most of their fellow citizens lived. He started a community service program that involved taking students out to rural villages where they would ask people to brainstorm projects whose completion would make everyone’s lives better. Not uncommonly, villagers would go to a file drawer and pull out requests that had been submitted to government officials but never addressed. Ariyaratne and his students would ask the villagers what resources they needed to complete projects—things like building cisterns or constructing a simple school or community center—and how many people would be required to do the job. The students would then help them organize the event.
These school-based efforts eventually became an organization called Sarvodaya Shramadana that has operated in over 15,000 villages in Sri Lanka and has touched the lives of 11 million people. A rough translation of sarvodaya shramadana is lifting everyone through the gift of labor. What is especially significant about this program is its emphasis on uncovering community assets and cultivating participants’ faith in their own capacity to take positive action. A central tenet of the program is that everyday people have the capacity to govern themselves and respond appropriately to the conditions of their lives when given the support and encouragement to do so. After the tsunami in 2004, for example, people who had participated in Sarvodaya were not uncommonly those who created makeshift emergency kitchens or organized efforts to contribute clothing and other household items to people who had lost everything. It is this kind of leadership that the coming decades with all of their projected economic and environmental uncertainty will demand of all communities. I would suggest that the world now requires us, as educators, to find ways to prepare our students for the roles they will need to play as citizens and stewards responsible for imagining and then creating new social and economic structures as well as technologies that truly represent humanity’s highest aspirations. This is the way people will be able to grow cultures that are sustainable both ecologically and socially, cultures that will be worthy of our children for many generations to come. Gregory Smith is Associate Professor in the Graduate School of Education at Lewis and Clark College in Portland. 1] I heard this story from Susan Abravanel, the education director of SOLV, an Oregon non-profit heavily involved in environmental restoration and service learning projects. 2] Dan Schwartz is a regular at the North Dakota Study Group meetings.
I heard this story from him in February, 2008 and later spoke with Cecilia Gigiolio, the teacher who saw this project through. 3] The text of Our Common Future can be accessed online at http://www.un-documnts.net/ocf-02.htm#1, retrieved on June 3, 2008. 4] John Elkington, Cannibals with Forks: The Triple Bottom Line of 21st Century Business, Gabriola Island, BC, Canada: New Society Press, 1998. 5] See the March 31, 2008 issue of Time Magazine for an example of this. 6] Karl-Henrik Robert, The Natural Step Story: Seeding a Quiet Revolution, Gabriola Island, British Columbia, 2002. 7] Retrieved from http://www.ortns.org/framework.htm on July 12, 2008. 8] See Wendell Berry’s article entitled “Faustian Economics: Hell Hath No Limits” in the May, 2008 Harpers Magazine (pp. 35-42) for a cogent and passionate presentation of this position as well as Bill McKibben’s Deep Economy: The Wealth of Communities and the Durable Future, New York: Holt, 2008. 9] Allen Hammond, Which World? Scenarios for the 21st Century, Washington, DC: Island Press, 1998. 10] Ross Gelbspan, “Beyond the Point of No Return,” Gristmill, December 11, 2007, paragraph 47, retrieved on June 4, 2008 from http://gristmill.grist.org/story/2007/12/10/165845/92. 11] Robert Michael Pyle, The Thunder Tree: Lessons from an Urban Wildland, New York: Lyons Press, 1993. 12] Reginald Clark, Family Life and School Achievement: Why Poor Black Children Succeed or Fail, Chicago: University of Chicago Press, 1983. 13] Viktor Frankl, Man’s Search for Meaning, New York: Washington Square Press. 14] See David Sobel’s Place-Based Education: Connecting Classrooms to Communities, Great Barrington, Mass.: Orion Press, 2004, and Gregory Smith’s “Place-Based Education: Learning to Be Where We Are,” Kappan, April 2003, for more complete descriptions of this approach and its possibilities. 15] Robert Hoppin, personal communication (e-mail), June 5, 2008. Hoppin is a place-based education coordinator at the Young Achievers School. 
16] John Shelton’s book, Consequential Learning, Montgomery: NewSouth Press, 2005, provides a description of many PACERS projects and the spirit that undergirds these. 17] See http://www.peecworks.org/index, retrieved on July 3, 2008, for a full listing of research reports written by this organization. 18] Marie Cheak, Trudi Volk, and Harold Hungerford, Molokai: An Investment in Children, the Community, and the Environment, Champaign, Ill.: Stipes Publishing, 2002. 19] John Ramsey, Harold Hungerford, and Trudi Volk, “A Technique for Analyzing Environmental Issues,” in Harold Hungerford, William Blumm, Trudi Volk, and John Ramsey (editors), Essential Readings in Environmental Education (Champaign, Ill.: Stipes Publishing), pp. 190-195. 20] Rural School and Community Trust President Rachel Tompkins provides a history of this project in “Overlooked Opportunity: Students, Educators, and Education Advocates Contributing to Community and Economic Development,” a chapter in David Gruenewald and Gregory Smith’s (editors), Place-Based Education in the Global Age: Local Diversity (New York: Taylor & Francis, 2008), pp. 173-196. 21] This story is drawn from Michael Umphrey’s volume, The Power of Community-Centered Education: Teaching as a Craft of Place, Rowman and Littlefield, 2007. 22] Umphrey, p. 6. 23] Umphrey, p. 8. 24] Joanna Macy, Dharma and Development: Religion as Resource in the Sarvodaya Self-Help Movement, West Hartford, Conn: Kumarian Press, 1983. 25] See http://www.sarvodaya.org/ for more information. Retrieved on July 12, 2008. 26] Sharif Abdullah, personal communication. Abdullah is an American social activist and writer who has acted as a consultant to the Sarvodaya organization for more than a decade. This article was reprinted in its entirety from the website of the Journal of Sustainability Education http://www.journalofsustainabilityeducation.org/
By the end of this section, you will be able to:
- Express concentrations of solution components using mole fraction and molality
- Describe the effect of solute concentration on various solution properties (vapor pressure, boiling point, freezing point, and osmotic pressure)
- Perform calculations using the mathematical equations that describe these various colligative effects
- Describe the process of distillation and its practical applications
- Explain the process of osmosis and describe how it is applied industrially and in nature

The properties of a solution are different from those of either the pure solute(s) or solvent. Many solution properties are dependent upon the chemical identity of the solute. Compared to pure water, a solution of hydrogen chloride is more acidic, a solution of ammonia is more basic, a solution of sodium chloride is more dense, and a solution of sucrose is more viscous. There are a few solution properties, however, that depend only upon the total concentration of solute species, regardless of their identities. These colligative properties include vapor pressure lowering, boiling point elevation, freezing point depression, and osmotic pressure. This small set of properties is of central importance to many natural phenomena and technological applications, as will be described in this module.

Mole Fraction and Molality

Several units commonly used to express the concentrations of solution components were introduced in an earlier chapter of this text, each providing certain benefits for use in different applications. For example, molarity (M) is a convenient unit for use in stoichiometric calculations, since it is defined in terms of the molar amounts of solute species:

M = mol solute / L solution

Because solution volumes vary with temperature, molar concentrations will likewise vary.
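The temperature dependence of molarity, and the temperature independence of molality, can be made concrete with a short numerical sketch. The solution amounts and the thermal-expansion figure below are illustrative assumptions, not values from the text:

```python
# Hypothetical solution: 0.500 mol of solute dissolved in 0.950 kg of solvent.
moles_solute = 0.500
mass_solvent_kg = 0.950

# Assumed solution volumes at two temperatures (the expansion is illustrative).
volume_L = {"20 C": 1.000, "40 C": 1.010}

# Molarity changes because its denominator (solution volume) changes with temperature.
for temp, vol in volume_L.items():
    print(f"Molarity at {temp}: {moles_solute / vol:.4f} M")

# Molality uses only masses and molar amounts, so it is the same at both temperatures.
molality = moles_solute / mass_solvent_kg
print(f"Molality at any temperature: {molality:.4f} m")
```

The same number of moles in a slightly larger volume yields a slightly smaller molarity, while the mass-based molality is unchanged.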
When expressed as molarity, the concentration of a solution with identical numbers of solute and solvent species will be different at different temperatures, due to the contraction/expansion of the solution. More appropriate for calculations involving many colligative properties are mole-based concentration units whose values are not dependent on temperature. Two such units are mole fraction (introduced in the previous chapter on gases) and molality. The mole fraction, X, of a component is the ratio of its molar amount to the total number of moles of all solution components:

XA = mol A / total mol of all components

By this definition, the sum of mole fractions for all solution components (the solvent and all solutes) is equal to one. Molality is a concentration unit defined as the ratio of the number of moles of solute to the mass of the solvent in kilograms:

m = mol solute / kg solvent

Since these units are computed using only masses and molar amounts, they do not vary with temperature and, thus, are better suited for applications requiring temperature-independent concentrations, including several colligative properties, as will be described in this module.

Calculating Mole Fraction and Molality
The antifreeze in most automobile radiators is a mixture of equal volumes of ethylene glycol and water, with minor amounts of other additives that prevent corrosion. What are the (a) mole fraction and (b) molality of ethylene glycol, C2H4(OH)2, in a solution prepared from 2.22 × 10³ g of ethylene glycol and 2.00 × 10³ g of water (approximately 2 L of glycol and 2 L of water)?

Solution
(a) The mole fraction of ethylene glycol may be computed by first deriving molar amounts of both solution components and then substituting these amounts into the definition of mole fraction. Notice that mole fraction is a dimensionless property, being the ratio of properties with identical units (moles).

(b) Derive moles of solute and mass of solvent (in kg).
First, use the given mass of ethylene glycol and its molar mass to find the moles of solute:

mol C2H4(OH)2 = 2.22 × 10³ g × (1 mol / 62.07 g) = 35.8 mol

Then, convert the mass of the water from grams to kilograms:

2.00 × 10³ g = 2.00 kg

Finally, calculate molality per its definition:

molality = 35.8 mol C2H4(OH)2 / 2.00 kg H2O = 17.9 m

Check Your Learning
What are the mole fraction and molality of a solution that contains 0.850 g of ammonia, NH3, dissolved in 125 g of water?

7.14 × 10⁻³; 0.399 m

Converting Mole Fraction and Molal Concentrations
Calculate the mole fraction of solute and solvent in a 3.0 m solution of sodium chloride.

Solution
Converting from one concentration unit to another is accomplished by first comparing the two unit definitions. In this case, both units have the same numerator (moles of solute) but different denominators. The provided molal concentration may be written as:

3.0 mol NaCl / 1.0 kg H2O

The numerator for this solution’s mole fraction is, therefore, 3.0 mol NaCl. The denominator may be computed by deriving the molar amount of water corresponding to 1.0 kg and then substituting these molar amounts into the definition for mole fraction.

Check Your Learning
The mole fraction of iodine, I2, dissolved in dichloromethane, CH2Cl2, is 0.115. What is the molal concentration, m, of iodine in this solution?

Molality and Molarity Conversions
Intravenous infusion of a 0.556 M aqueous solution of glucose (density of 1.04 g/mL) is part of some post-operative recovery therapies. What is the molal concentration of glucose in this solution?

Solution
The provided molar concentration may be explicitly written as:

0.556 mol glucose / 1 L solution

Consider the definition of molality:

m = mol solute / kg solvent

The amount of glucose in 1 L of this solution is 0.556 mol, so the mass of water in this volume of solution is needed. First, compute the mass of 1.00 L of the solution:

1.00 L × 1000 mL/L × 1.04 g/mL = 1.04 × 10³ g

This is the mass of both the water and its solute, glucose, and so the mass of glucose must be subtracted.
Compute the mass of glucose from its molar amount:

0.556 mol × 180.16 g/mol = 100. g glucose

Subtracting the mass of glucose yields the mass of water in the solution:

1.04 × 10³ g − 100. g = 940 g = 0.940 kg H2O

Finally, the molality of glucose in this solution is computed as:

m = 0.556 mol glucose / 0.940 kg H2O = 0.592 m

Check Your Learning
Nitric acid, HNO3(aq), is commercially available as a 33.7 m aqueous solution (density = 1.35 g/mL). What is the molarity of this solution?

Vapor Pressure Lowering

As described in the chapter on liquids and solids, the equilibrium vapor pressure of a liquid is the pressure exerted by its gaseous phase when vaporization and condensation are occurring at equal rates:

liquid ⇌ gas

Dissolving a nonvolatile substance in a volatile liquid results in a lowering of the liquid’s vapor pressure. This phenomenon can be rationalized by considering the effect of added solute molecules on the liquid's vaporization and condensation processes. To vaporize, solvent molecules must be present at the surface of the solution. The presence of solute decreases the surface area available to solvent molecules and thereby reduces the rate of solvent vaporization. Since the rate of condensation is unaffected by the presence of solute, the net result is that the vaporization-condensation equilibrium is achieved with fewer solvent molecules in the vapor phase (i.e., at a lower vapor pressure) (Figure 11.18). While this interpretation is useful, it does not account for several important aspects of the colligative nature of vapor pressure lowering. A more rigorous explanation involves the property of entropy, a topic of discussion in a later text chapter on thermodynamics. For purposes of understanding the lowering of a liquid's vapor pressure, it is adequate to note that the more dispersed nature of matter in a solution, compared to separate solvent and solute phases, serves to effectively stabilize the solvent molecules and hinder their vaporization. A lower vapor pressure results, and a correspondingly higher boiling point as described in the next section of this module.
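For readers who like to check these conversions programmatically, here is a minimal sketch that reproduces the ethylene glycol and glucose examples above. The molar masses are rounded approximations, and the helper functions are my own, not part of any library:

```python
# Approximate molar masses in g/mol (assumed rounded values).
M_GLYCOL = 62.07    # ethylene glycol, C2H4(OH)2
M_WATER = 18.02     # H2O
M_GLUCOSE = 180.16  # C6H12O6

def mole_fraction(n_component, n_other):
    """Ratio of one component's moles to the total moles in a two-component solution."""
    return n_component / (n_component + n_other)

def molality(moles_solute, mass_solvent_kg):
    """Moles of solute per kilogram of solvent."""
    return moles_solute / mass_solvent_kg

# Antifreeze example: 2.22e3 g glycol dissolved in 2.00e3 g (2.00 kg) water.
n_glycol = 2.22e3 / M_GLYCOL
n_water = 2.00e3 / M_WATER
x_glycol = mole_fraction(n_glycol, n_water)   # ≈ 0.244
m_glycol = molality(n_glycol, 2.00)           # ≈ 17.9 m

# Glucose example: convert 0.556 M (solution density 1.04 g/mL) to molality.
mass_solution_g = 1000 * 1.04                               # mass of 1 L of solution
mass_water_kg = (mass_solution_g - 0.556 * M_GLUCOSE) / 1000
m_glucose = molality(0.556, mass_water_kg)    # ≈ 0.592 m

print(f"{x_glycol:.3f}, {m_glycol:.1f} m, {m_glucose:.3f} m")
```

The results agree with the worked solutions to three significant figures, which is a quick way to catch arithmetic slips in hand calculations.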
The relationship between the vapor pressures of solution components and the concentrations of those components is described by Raoult’s law: The partial pressure exerted by any component of an ideal solution is equal to the vapor pressure of the pure component multiplied by its mole fraction in the solution.

PA = XA P*A

where PA is the partial pressure exerted by component A in the solution, P*A is the vapor pressure of pure A, and XA is the mole fraction of A in the solution. Recalling that the total pressure of a gaseous mixture is equal to the sum of partial pressures for all its components (Dalton’s law of partial pressures), the total vapor pressure exerted by a solution containing i components is

Psolution = Σi Pi = Σi Xi P*i

A nonvolatile substance is one whose vapor pressure is negligible (P* ≈ 0), and so the vapor pressure above a solution containing only nonvolatile solutes is due only to the solvent:

Psolution = Xsolvent P*solvent

Calculation of a Vapor Pressure
Compute the vapor pressure of an ideal solution containing 92.1 g of glycerin, C3H5(OH)3, and 184.4 g of ethanol, C2H5OH, at 40 °C. The vapor pressure of pure ethanol is 0.178 atm at 40 °C. Glycerin is essentially nonvolatile at this temperature.

Solution
Since the solvent is the only volatile component of this solution, its vapor pressure may be computed per Raoult’s law as: First, calculate the molar amounts of each solution component using the provided mass data. Next, calculate the mole fraction of the solvent (ethanol) and use Raoult’s law to compute the solution’s vapor pressure.

Check Your Learning
A solution contains 5.00 g of urea, CO(NH2)2 (a nonvolatile solute) and 0.100 kg of water. If the vapor pressure of pure water at 25 °C is 23.7 torr, what is the vapor pressure of the solution assuming ideal behavior?

Distillation of Solutions

Solutions whose components have significantly different vapor pressures may be separated by a selective vaporization process known as distillation.
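The Raoult's-law vapor-pressure calculation worked above can be sketched numerically. The molar masses are standard values; glycerin is treated as nonvolatile, so only ethanol contributes to the vapor pressure:

```python
# Raoult's law for the glycerin/ethanol example:
# P_solution = X_ethanol * P*_ethanol (glycerin contributes ~0).
MM_GLYCERIN = 92.09   # g/mol, C3H5(OH)3 (standard value)
MM_ETHANOL = 46.07    # g/mol, C2H5OH (standard value)

n_gly = 92.1 / MM_GLYCERIN           # mol glycerin (nonvolatile solute)
n_eth = 184.4 / MM_ETHANOL           # mol ethanol (volatile solvent)
x_eth = n_eth / (n_eth + n_gly)      # mole fraction of the solvent
p_solution = x_eth * 0.178           # atm; P* of pure ethanol at 40 °C
print(f"{p_solution:.3f} atm")  # ≈ 0.142 atm
```

Because the two molar amounts happen to be about 1 mol and 4 mol, the solvent mole fraction is roughly 0.80, and the solution's vapor pressure is about 80% of the pure solvent's.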
Consider the simple case of a mixture of two volatile liquids, A and B, with A being the more volatile liquid. Raoult’s law can be used to show that the vapor above the solution is enriched in component A, that is, the mole fraction of A in the vapor is greater than the mole fraction of A in the liquid (see end-of-chapter Exercise 65). By appropriately heating the mixture, component A may be vaporized, condensed, and collected—effectively separating it from component B. Distillation is widely applied in both laboratory and industrial settings, being used to refine petroleum, to isolate fermentation products, and to purify water. A typical apparatus for laboratory-scale distillations is shown in Figure 11.19. Oil refineries use large-scale fractional distillation to separate the components of crude oil. The crude oil is heated to high temperatures at the base of a tall fractionating column, vaporizing many of the components that rise within the column. As vaporized components reach adequately cool zones during their ascent, they condense and are collected. The collected liquids are simpler mixtures of hydrocarbons and other petroleum compounds that are of appropriate composition for various applications (e.g., diesel fuel, kerosene, gasoline), as depicted in Figure 11.20. Boiling Point Elevation As described in the chapter on liquids and solids, the boiling point of a liquid is the temperature at which its vapor pressure is equal to ambient atmospheric pressure. Since the vapor pressure of a solution is lowered due to the presence of nonvolatile solutes, it stands to reason that the solution’s boiling point will subsequently be increased. Vapor pressure increases with temperature, and so a solution will require a higher temperature than will pure solvent to achieve any given vapor pressure, including one equivalent to that of the surrounding atmosphere. 
The increase in boiling point observed when nonvolatile solute is dissolved in a solvent, ΔTb, is called boiling point elevation and is directly proportional to the molal concentration of solute species:

ΔTb = Kb m

where Kb is the boiling point elevation constant, or the ebullioscopic constant, and m is the molal concentration (molality) of all solute species. Boiling point elevation constants are characteristic properties that depend on the identity of the solvent. Values of Kb for several solvents are listed in Table 11.2.

|Solvent|Boiling Point (°C at 1 atm)|Kb (°C m−1)|Freezing Point (°C at 1 atm)|Kf (°C m−1)|

The extent to which the vapor pressure of a solvent is lowered and the boiling point is elevated depends on the total number of solute particles present in a given amount of solvent, not on the mass or size or chemical identities of the particles. A 1 m aqueous solution of sucrose (342 g/mol) and a 1 m aqueous solution of ethylene glycol (62 g/mol) will exhibit the same boiling point because each solution has one mole of solute particles (molecules) per kilogram of solvent.

Calculating the Boiling Point of a Solution
Assuming ideal solution behavior, what is the boiling point of a 0.33 m solution of a nonvolatile solute in benzene?

Solution
Use the equation relating boiling point elevation to solute molality to solve this problem in two steps.
- Step 1. Calculate the change in boiling point.
- Step 2. Add the boiling point elevation to the pure solvent’s boiling point.

Check Your Learning
Assuming ideal solution behavior, what is the boiling point of the antifreeze described in Example 11.3?

The Boiling Point of an Iodine Solution
Find the boiling point of a solution of 92.1 g of iodine, I2, in 800.0 g of chloroform, CHCl3, assuming that the iodine is nonvolatile and that the solution is ideal.

Solution
A four-step approach to solving this problem is outlined below.
- Step 1. Convert from grams to moles of I2 using the molar mass of I2 in the unit conversion factor.
Result: 0.363 mol
- Step 2. Determine the molality of the solution from the number of moles of solute and the mass of solvent, in kilograms.
Result: 0.454 m
- Step 3. Use the direct proportionality between the change in boiling point and molal concentration to determine how much the boiling point changes.
Result: 1.65 °C
- Step 4. Determine the new boiling point from the boiling point of the pure solvent and the change.
Result: 62.91 °C
Check each result as a self-assessment.

Check Your Learning
What is the boiling point of a solution of 1.0 g of glycerin, C3H5(OH)3, in 47.8 g of water? Assume an ideal solution.

Freezing Point Depression

Solutions freeze at lower temperatures than pure liquids. This phenomenon is exploited in “de-icing” schemes that use salt (Figure 11.21), calcium chloride, or urea to melt ice on roads and sidewalks, and in the use of ethylene glycol as an “antifreeze” in automobile radiators. Seawater freezes at a lower temperature than fresh water, and so the Arctic and Antarctic oceans remain unfrozen even at temperatures below 0 °C (as do the body fluids of fish and other cold-blooded sea animals that live in these oceans).

The decrease in freezing point of a dilute solution compared to that of the pure solvent, ΔTf, is called the freezing point depression and is directly proportional to the molal concentration of the solute:

ΔTf = Kf m

where m is the molal concentration of the solute and Kf is called the freezing point depression constant (or cryoscopic constant). Just as for boiling point elevation constants, these are characteristic properties whose values depend on the chemical identity of the solvent. Values of Kf for several solvents are listed in Table 11.2.

Calculation of the Freezing Point of a Solution
Assuming ideal solution behavior, what is the freezing point of the 0.33 m solution of a nonvolatile nonelectrolyte solute in benzene described in Example 11.4?
Solution
Use the equation relating freezing point depression to solute molality to solve this problem in two steps.
- Step 1. Calculate the change in freezing point.
- Step 2. Subtract the freezing point change observed from the pure solvent’s freezing point.

Check Your Learning
Assuming ideal solution behavior, what is the freezing point of a 1.85 m solution of a nonvolatile nonelectrolyte solute in nitrobenzene?

Colligative Properties and De-Icing

Sodium chloride and its group 2 analogs calcium and magnesium chloride are often used to de-ice roadways and sidewalks, due to the fact that a solution of any one of these salts will have a freezing point lower than 0 °C, the freezing point of pure water. The group 2 metal salts are frequently mixed with the cheaper and more readily available sodium chloride (“rock salt”) for use on roads, since they tend to be somewhat less corrosive than the NaCl, and they provide a larger depression of the freezing point, since they dissociate to yield three particles per formula unit, rather than two particles like the sodium chloride. Because these ionic compounds tend to hasten the corrosion of metal, they would not be a wise choice to use in antifreeze for the radiator in your car or to de-ice a plane prior to takeoff. For these applications, covalent compounds, such as ethylene or propylene glycol, are often used. The glycols used in radiator fluid not only lower the freezing point of the liquid, but they elevate the boiling point, making the fluid useful in both winter and summer. Heated glycols are often sprayed onto the surface of airplanes prior to takeoff in inclement weather in the winter to remove ice that has already formed and prevent the formation of more ice, which would be particularly dangerous if formed on the control surfaces of the aircraft (Figure 11.22).
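The boiling-point-elevation and freezing-point-depression calculations worked above both follow ΔT = K·m and can be sketched together. The Kb, Kf, and pure-solvent values below are the Table 11.2 constants for chloroform and benzene:

```python
# delta_T = K * m for ideal solutions of nonvolatile solutes.
K_B_CHCL3, BP_CHCL3 = 3.63, 61.26   # Kb (°C m^-1) and bp (°C) for chloroform
K_F_C6H6, FP_C6H6 = 5.12, 5.5       # Kf (°C m^-1) and fp (°C) for benzene
MM_I2 = 253.8                        # g/mol (standard value)

# Iodine in chloroform (the four boiling-point steps, condensed):
m_iodine = (92.1 / MM_I2) / 0.8000   # mol I2 per kg CHCl3
bp = BP_CHCL3 + K_B_CHCL3 * m_iodine
print(f"{bp:.2f} °C")   # ≈ 62.91 °C

# 0.33 m nonelectrolyte in benzene (freezing point depression):
fp = FP_C6H6 - K_F_C6H6 * 0.33
print(f"{fp:.1f} °C")   # ≈ 3.8 °C
```

Note the sign convention: the elevation is added to the pure solvent's boiling point, while the depression is subtracted from its freezing point.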
Phase Diagram for a Solution The colligative effects on vapor pressure, boiling point, and freezing point described in the previous section are conveniently summarized by comparing the phase diagrams for a pure liquid and a solution derived from that liquid (Figure 11.23). The liquid-vapor curve for the solution is located beneath the corresponding curve for the solvent, depicting the vapor pressure lowering, ΔP, that results from the dissolution of nonvolatile solute. Consequently, at any given pressure, the solution’s boiling point is observed at a higher temperature than that for the pure solvent, reflecting the boiling point elevation, ΔTb, associated with the presence of nonvolatile solute. The solid-liquid curve for the solution is displaced left of that for the pure solvent, representing the freezing point depression, ΔTf, that accompanies solution formation. Finally, notice that the solid-gas curves for the solvent and its solution are identical. This is the case for many solutions comprising liquid solvents and nonvolatile solutes. Just as for vaporization, when a solution of this sort is frozen, it is actually just the solvent molecules that undergo the liquid-to-solid transition, forming pure solid solvent that excludes solute species. The solid and gaseous phases, therefore, are composed of solvent only, and so transitions between these phases are not subject to colligative effects. Osmosis and Osmotic Pressure of Solutions A number of natural and synthetic materials exhibit selective permeation, meaning that only molecules or ions of a certain size, shape, polarity, charge, and so forth, are capable of passing through (permeating) the material. Biological cell membranes provide elegant examples of selective permeation in nature, while dialysis tubing used to remove metabolic wastes from blood is a more simplistic technological example. Regardless of how they may be fabricated, these materials are generally referred to as semipermeable membranes. 
Consider the apparatus illustrated in Figure 11.24, in which samples of pure solvent and a solution are separated by a membrane that only solvent molecules may permeate. Solvent molecules will diffuse across the membrane in both directions. Since the concentration of solvent is greater in the pure solvent than the solution, these molecules will diffuse from the solvent side of the membrane to the solution side at a faster rate than they will in the reverse direction. The result is a net transfer of solvent molecules from the pure solvent to the solution. Diffusion-driven transfer of solvent molecules through a semipermeable membrane is a process known as osmosis. When osmosis is carried out in an apparatus like that shown in Figure 11.24, the volume of the solution increases as it becomes diluted by accumulation of solvent. This causes the level of the solution to rise, increasing its hydrostatic pressure (due to the weight of the column of solution in the tube) and resulting in a faster transfer of solvent molecules back to the pure solvent side. When the pressure reaches a value that yields a reverse solvent transfer rate equal to the osmosis rate, bulk transfer of solvent ceases. This pressure is called the osmotic pressure (Π) of the solution. The osmotic pressure of a dilute solution is related to its solute molarity, M, and absolute temperature, T, according to the equation

Π = MRT

where R is the universal gas constant.

Calculation of Osmotic Pressure
Assuming ideal solution behavior, what is the osmotic pressure (atm) of a 0.30 M solution of glucose in water that is used for intravenous infusion at body temperature, 37 °C?

Solution
Find the osmotic pressure, Π, using the formula Π = MRT, where T is on the Kelvin scale (310 K) and the value of R is expressed in appropriate units (0.08206 L atm/mol K).
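The osmotic-pressure calculation above is a one-line application of Π = MRT; a sketch, using the gas constant in L·atm/(mol·K):

```python
# Osmotic pressure of an ideal solution: Pi = M * R * T (T in kelvin).
R = 0.08206  # L atm mol^-1 K^-1 (universal gas constant)

def osmotic_pressure_atm(molarity, temp_celsius):
    return molarity * R * (temp_celsius + 273.15)

pi = osmotic_pressure_atm(0.30, 37)  # 0.30 M glucose at body temperature
print(f"{pi:.1f} atm")  # ≈ 7.6 atm
```

The result matches the roughly 7.7 atm quoted later for blood serum, which is why 0.30 M glucose is close to isotonic.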
Check Your Learning
Assuming ideal solution behavior, what is the osmotic pressure (atm) of a solution with a volume of 0.750 L that contains 5.0 g of methanol, CH3OH, in water at 37 °C?

If a solution is placed in an apparatus like the one shown in Figure 11.25, applying pressure greater than the osmotic pressure of the solution reverses the osmosis and pushes solvent molecules from the solution into the pure solvent. This technique of reverse osmosis is used for large-scale desalination of seawater and on smaller scales to produce high-purity tap water for drinking.

Reverse Osmosis Water Purification

In the process of osmosis, diffusion serves to move water through a semipermeable membrane from a less concentrated solution to a more concentrated solution. Osmotic pressure is the amount of pressure that must be applied to the more concentrated solution to cause osmosis to stop. If greater pressure is applied, the water will go from the more concentrated solution to a less concentrated (more pure) solution. This is called reverse osmosis. Reverse osmosis (RO) is used to purify water in many applications, from desalination plants in coastal cities, to water-purifying machines in grocery stores (Figure 11.26), and smaller reverse-osmosis household units. With a hand-operated pump, small RO units can be used in third-world countries, disaster areas, and in lifeboats. Our military forces have a variety of generator-operated RO units that can be transported in vehicles to remote locations. Examples of osmosis are evident in many biological systems because cells are surrounded by semipermeable membranes. Carrots and celery that have become limp because they have lost water can be made crisp again by placing them in water. Water moves into the carrot or celery cells by osmosis. A cucumber placed in a concentrated salt solution loses water by osmosis and absorbs some salt to become a pickle. Osmosis can also affect animal cells.
Solute concentrations are particularly important when solutions are injected into the body. Solutes in body cell fluids and blood serum give these solutions an osmotic pressure of approximately 7.7 atm. Solutions injected into the body must have the same osmotic pressure as blood serum; that is, they should be isotonic with blood serum. If a less concentrated solution, a hypotonic solution, is injected in sufficient quantity to dilute the blood serum, water from the diluted serum passes into the blood cells by osmosis, causing the cells to expand and rupture. This process is called hemolysis. When a more concentrated solution, a hypertonic solution, is injected, the cells lose water to the more concentrated solution, shrivel, and possibly die in a process called crenation. These effects are illustrated in Figure 11.27.

Determination of Molar Masses

Osmotic pressure and changes in freezing point, boiling point, and vapor pressure are directly proportional to the number of solute species present in a given amount of solution. Consequently, measuring one of these properties for a solution prepared using a known mass of solute permits determination of the solute’s molar mass.

Determination of a Molar Mass from a Freezing Point Depression
A solution of 4.00 g of a nonelectrolyte dissolved in 55.0 g of benzene is found to freeze at 2.32 °C. Assuming ideal solution behavior, what is the molar mass of this compound?

Solution
Solve this problem using the following steps.
- Step 1. Determine the change in freezing point from the observed freezing point and the freezing point of pure benzene (Table 11.2).
- Step 2. Determine the molal concentration from Kf, the freezing point depression constant for benzene (Table 11.2), and ΔTf.
- Step 3. Determine the number of moles of compound in the solution from the molal concentration and the mass of solvent used to make the solution.
- Step 4. Determine the molar mass from the mass of the solute and the number of moles in that mass.
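The four steps above can be sketched in code. The benzene constants are the Table 11.2 values (Kf = 5.12 °C m⁻¹, freezing point 5.5 °C):

```python
# Molar mass from freezing point depression:
# 4.00 g of a nonelectrolyte in 55.0 g benzene freezes at 2.32 °C.
K_F, FP_PURE = 5.12, 5.5      # benzene: Kf (°C m^-1) and fp (°C), Table 11.2

delta_T = FP_PURE - 2.32        # Step 1: depression of 3.2 °C
molality = delta_T / K_F        # Step 2: mol solute per kg solvent
moles = molality * 0.0550       # Step 3: 55.0 g benzene = 0.0550 kg
molar_mass = 4.00 / moles       # Step 4
print(f"{molar_mass:.0f} g/mol")  # ≈ 117 g/mol, i.e. 1.2 × 10² to two sig figs
```

Each step feeds the next, so keeping extra guard digits until the final rounding avoids accumulated rounding error.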
Check Your Learning
A solution of 35.7 g of a nonelectrolyte in 220.0 g of chloroform has a boiling point of 64.5 °C. Assuming ideal solution behavior, what is the molar mass of this compound?

1.8 × 10² g/mol

Determination of a Molar Mass from Osmotic Pressure
A 0.500 L sample of an aqueous solution containing 10.0 g of hemoglobin has an osmotic pressure of 5.9 torr at 22 °C. Assuming ideal solution behavior, what is the molar mass of hemoglobin?

Solution
Here is one set of steps that can be used to solve the problem:
- Step 1. Convert the osmotic pressure to atmospheres, then determine the molar concentration from the osmotic pressure.
- Step 2. Determine the number of moles of hemoglobin in the solution from the concentration and the volume of the solution.
- Step 3. Determine the molar mass from the mass of hemoglobin and the number of moles in that mass.

Check Your Learning
Assuming ideal solution behavior, what is the molar mass of a protein if a solution of 0.02 g of the protein in 25.0 mL of solution has an osmotic pressure of 0.56 torr at 25 °C?

3 × 10⁴ g/mol

Colligative Properties of Electrolytes

As noted previously in this module, the colligative properties of a solution depend only on the number, not on the identity, of solute species dissolved. The concentration terms in the equations for various colligative properties (freezing point depression, boiling point elevation, osmotic pressure) pertain to all solute species present in the solution. For the solutions considered thus far in this chapter, the solutes have been nonelectrolytes that dissolve physically without dissociation or any other accompanying process. Each molecule that dissolves yields one dissolved solute molecule.
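The hemoglobin calculation above (rearranging Π = MRT to find molarity, then moles, then molar mass) can be sketched as:

```python
# Molar mass from osmotic pressure: 0.500 L containing 10.0 g hemoglobin,
# Pi = 5.9 torr at 22 °C.
R = 0.08206  # L atm mol^-1 K^-1

pi_atm = 5.9 / 760                        # Step 1: torr -> atm (760 torr/atm)
molarity = pi_atm / (R * (22 + 273.15))   # Step 1 cont.: M = Pi / (R T)
moles = molarity * 0.500                  # Step 2: moles in the 0.500 L sample
molar_mass = 10.0 / moles                 # Step 3
print(f"{molar_mass:.2e} g/mol")  # ≈ 6.2e+04 g/mol
```

A tiny osmotic pressure (a few torr) from a sizable mass of solute is the signature of a very large molar mass, which is why osmometry is well suited to macromolecules like proteins.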
The dissolution of an electrolyte, however, is not this simple, as illustrated by the two common examples below: Considering the first of these examples, and assuming complete dissociation, a 1.0 m aqueous solution of NaCl contains 2.0 mol of ions (1.0 mol Na+ and 1.0 mol Cl−) per each kilogram of water, and its freezing point depression is expected to be

ΔTf = 2.0 mol ion/kg water × 1.86 °C kg water/mol ion = 3.7 °C

When this solution is actually prepared and its freezing point depression measured, however, a value of 3.4 °C is obtained. Similar discrepancies are observed for other ionic compounds, and the differences between the measured and expected colligative property values typically become more significant as solute concentrations increase. These observations suggest that the ions of sodium chloride (and other strong electrolytes) are not completely dissociated in solution. To account for this and avoid the errors accompanying the assumption of total dissociation, an experimentally measured parameter named in honor of Nobel Prize-winning Dutch chemist Jacobus Henricus van’t Hoff is used. The van’t Hoff factor (i) is defined as the ratio of solute particles in solution to the number of formula units dissolved:

i = moles of particles in solution / moles of formula units dissolved

Values for measured van’t Hoff factors for several solutes, along with predicted values assuming complete dissociation, are shown in Table 11.3.

|Formula unit|Classification|Dissolution products|i (predicted)|i (measured)|
|---|---|---|---|---|
|NaCl|Strong electrolyte|Na+, Cl−|2|1.9|
|HCl|Strong electrolyte (acid)|H3O+, Cl−|2|1.9|
|MgSO4|Strong electrolyte|Mg2+, SO42−|2|1.3|
|MgCl2|Strong electrolyte|Mg2+, 2Cl−|3|2.7|
|FeCl3|Strong electrolyte|Fe3+, 3Cl−|4|3.4|

In 1923, the chemists Peter Debye and Erich Hückel proposed a theory to explain the apparent incomplete ionization of strong electrolytes. They suggested that although interionic attraction in an aqueous solution is very greatly reduced by solvation of the ions and the insulating action of the polar solvent, it is not completely nullified.
The residual attractions prevent the ions from behaving as totally independent particles (Figure 11.28). In some cases, a positive and negative ion may actually touch, giving a solvated unit called an ion pair. Thus, the activity, or the effective concentration, of any particular kind of ion is less than that indicated by the actual concentration. Ions become more and more widely separated the more dilute the solution, and the residual interionic attractions become less and less. Thus, in extremely dilute solutions, the effective concentrations of the ions (their activities) are essentially equal to the actual concentrations. Note that the van’t Hoff factors for the electrolytes in Table 11.3 are for 0.05 m solutions, at which concentration the value of i for NaCl is 1.9, as opposed to an ideal value of 2.

The Freezing Point of a Solution of an Electrolyte
The concentration of ions in seawater is approximately the same as that in a solution containing 4.2 g of NaCl dissolved in 125 g of water. Use this information and a predicted value for the van’t Hoff factor (Table 11.3) to determine the freezing temperature of the solution (assume ideal solution behavior).

Solution
Solve this problem using the following series of steps.
- Step 1. Convert from grams to moles of NaCl using the molar mass of NaCl in the unit conversion factor.
Result: 0.072 mol NaCl
- Step 2. Determine the number of moles of ions present in the solution using the number of moles of ions in 1 mole of NaCl as the conversion factor (2 mol ions/1 mol NaCl).
Result: 0.14 mol ions
- Step 3. Determine the molality of the ions in the solution from the number of moles of ions and the mass of solvent, in kilograms.
Result: 1.2 m
- Step 4. Use the direct proportionality between the change in freezing point and molal concentration to determine how much the freezing point changes.
Result: 2.1 °C
- Step 5. Determine the new freezing point from the freezing point of the pure solvent and the change.
Result: −2.1 °C
Check each result as a self-assessment, taking care to avoid rounding errors by retaining guard digits in each step’s result for computing the next step’s result.

Check Your Learning
Assuming complete dissociation and ideal solution behavior, calculate the freezing point of a solution of 0.724 g of CaCl2 in 175 g of water.
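The five-step electrolyte calculation above can be sketched with the predicted van't Hoff factor folded in. The NaCl molar mass and Kf for water are standard values:

```python
# Freezing point of 4.2 g NaCl in 125 g water, assuming complete
# dissociation (predicted van't Hoff factor i = 2 from Table 11.3).
K_F_WATER = 1.86   # °C m^-1 (cryoscopic constant of water)
MM_NACL = 58.44    # g/mol (standard value)

moles_nacl = 4.2 / MM_NACL       # Step 1: 0.072 mol NaCl
moles_ions = 2 * moles_nacl      # Step 2: i = 2 ions per formula unit
molality = moles_ions / 0.125    # Step 3: per kg of water -> 1.2 m
delta_T = K_F_WATER * molality   # Step 4: 2.1 °C depression
fp = 0.0 - delta_T               # Step 5: below pure water's 0 °C
print(f"{fp:.1f} °C")  # ≈ -2.1 °C
```

Using the measured factor i = 1.9 instead of the predicted 2 would give a slightly smaller depression, illustrating why the van't Hoff factor matters at real concentrations.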
Air purifying plants are a perfect way to improve indoor air quality at home or the office. Not only do indoor air-cleaning houseplants make your house healthier, but these plants enhance the aesthetics of your home, making each room they’re in friendlier on the eyes and more comfortable to be in. Ever consider bringing in some plants before learning that they can purify your air at home? The fact that they purify the air and generate more oxygen seals the deal for me when it comes to plants in the home or not.

How Air Purifying Plants Work

According to a 1989 NASA study and a follow-up from the lead scientist who conducted it, yes, plants do work to purify the air. In the study, they showed that plants are able to cleanse the air of toxic chemicals and pathogens. They demonstrated this by putting plants within sealed chambers, introducing harmful chemicals, and then giving it 24 hours to see what happens when these chemicals are in the same closed chamber as plants. After testing the results, it was seen that some plants removed up to 90% of the chemicals in the air! And the best part about this study is that they didn’t have to find some exotic, super air-cleansing plants to remove the harmful chemicals from the air. They just used common houseplants. This infographic below is based on the NASA study and shares many of the air purifying houseplants:

The chemicals they put in the chambers included aluminum, benzene, formaldehyde, and more. Plants take the harmful gases out of the atmosphere and “sequester” them in their roots and cells. Some of the chemicals are broken down by fungi in the soil, and others are stored in the plant. Dr. B mentioned a positive correlation between plants and indoor smoke.
The tests were focused on harmful airborne carcinogens and chemicals and not weed or cigarette smoke, but theoretically, yes, they’d help with indoor smoke pollution.

5 Indoor House Plant Health Benefits

- Block Wi-Fi/5G waves – the 5G rollout is currently underway in cities, and some link these waves to health concerns such as low fertility rates in the US.
- Purify air of smoke particles (great if you live near a freeway or with a smoker). Plants grab smoke in the air.
- Get rid of harmful indoor carcinogens like formaldehyde, benzene, trichloroethylene and more.
- Help air movement inside a closed room (which can be good for humidity issues).
- Make you feel nice: they bring nature to you regardless of your living situation.

More studies and research have been done since this most prominent one, and the verdict is in: they do work to make your air cleaner. There are counter points of view, however.

8 Effective Air Purifying Plants

Below are some of my favorite indoor plants for air purification. There are more air purifying plants than just the ones covered here. All plants have the ability, but there have not been official studies, like the one from NASA, on all the rest.

1. Aloe Vera Plant (aloe barbadensis) – Air Purification

Aloe vera is a natural air purifier that clears formaldehyde and benzene from the air. It’s easy to grow and enjoys being indoors or outdoors. It’s a great-looking succulent that is also used for various other internal and external health purposes. Because it grows well in a pot, in low to high sunlight conditions, and doesn’t ask for much water, it makes a great natural air purifier to have in your home or office.

How Aloe Helps Air Quality

According to the NASA Clean Air Study, aloe vera is included in the list of plants that purify the air of benzene and formaldehyde.
Does not remove:

Additional Aloe Vera Health Benefits

- Taken in smoothies for blood sugar management, etc.
- Skincare: cuts, burns, frostbite, skin wounds, cold sores (fresh aloe gel on sunburnt skin has no store-bought alternatives as good)
- Psoriasis, hair loss, hemorrhoids (would you believe me if I told you I spelled that correctly on my first try?); aloe gel is also taken orally for things like osteoarthritis, bowel diseases, fever and more. (source)

Some Aloe Vera Fun Facts:

- It was depicted in stone carvings as far back as 6,000 years ago in early Egypt.
- It’s said to be called the “plant of immortality” in Ancient Egypt, and was used to heal skin wounds and as a laxative. (source)
- Two substances are used in health products: the clear gel and the yellow latex.
- More common names: aloe vera, aloe, burn plant, lily of the desert, elephant’s gall.
- A 2-year National Toxicology Program study on “oral consumption of nondecolorized whole leaf extract of aloe vera found clear evidence of carcinogenic activity in male and female rats, based on tumors of the large intestine.” (source)

Aloe Vera Dangers

As an air purifier there are no dangers, but if someone accidentally eats your aloe plant or you are using it orally for health benefits, then it’s worth knowing about potential dangers or toxicity. Aloe latex is used as a natural laxative. Because of this, abdominal cramps and diarrhea have been reported. One thing to note is that taking aloe as a laxative can reduce absorption, which can affect a prescription’s efficacy. Another effect is its potential lowering of blood glucose levels. Because of this, if you take any glucose-related medications for diabetes or similar, you should be aware of it.

How To Grow Aloe

Aloe vera can thrive in the backyard in the ground. They start small, but if you allow them, they get really big, and they’re as versatile in the garden as they are in their health benefits.

2.
Philodendron Plants – For Formaldehyde

This is my favorite indoor air purifying plant. Philodendrons are common houseplants that are also known to remove harmful chemicals from the air, formaldehyde being their chemical of choice to remove. Philodendrons are easy to maintain and will make any room look great with their big dark green leaves. The current picture above is the one we have on the porch. This was a big indoor philodendron until we recently moved, and I like it on the porch. I need to cut a few pieces off and grow them inside the house. These are easy to transplant and regrow, but once they’re entrenched somewhere, they like to grow longer and thicker and put out little branches that grasp the floor or whatever they’re resting on.

How They Help The Air

The NASA Clean Air Study concluded it is good at removing formaldehyde from the air. And if you live in an apartment or dorm room, their leaves are large and great for blocking harmful EMF radio waves in the air.

Removes from indoor air:

Philodendrons do not remove:

The 3 philodendrons proven in the NASA Clean Air Study to be effective for formaldehyde are these:

- Philodendron domesticum: Its most popular name is the Spade Leaf Philodendron. It’s also known as Burgundy Philodendron. This one has arrow-shaped glossy leaves.
- Philodendron bipinnatifidum: Also called Lacy Tree, Horsehead Philodendron, or Philodendron Selloum, this one is native to South America and grows in the wild in tropical regions. It is bigger than some of these others, but when small can be a great indoor house plant to help clean the air.
- Philodendron cordatum: Its most popular name is the “Heart Leaf Philodendron.” It is from Brazil, and enjoys the shade. A perfect indoor houseplant that has deep green heart-shaped leaves.
It’s important to note that only these 3 philodendrons were studied, and it’s more than likely that all philodendrons would prove effective at removing formaldehyde or some other chemicals from the air; these are simply the only ones proven so far.

How To Care For Your Philodendron

All philodendrons do well in shady areas as long as they get some sunlight. It doesn’t have to be direct. They don’t require too much water but do need drainage at the bottom of the plant’s pot and a watering a few times per week, depending on how porous the soil is and how fast it gets dry again. Indoors they grow slower if not facing direct sunlight for hours each day, but they also enjoy a good bit of sun each day if you have one outside. They don’t like full facing sun all day and do well indoors with any natural light that comes into their room. They require trimming to your satisfaction. I like to let them vine out. They’re not invasive to other plants like the English Ivy. If you prefer to keep them shorter and bushier, then you can trim more aggressively, but they’re slow growing, and slower with less light. The most important note is to keep the soil well-drained. Overwatering can make them rot. They’re sturdy plants and are fine with skipped waterings now and then. When you feel the top layer of soil dry, then you can water.

More Fun Philodendron Facts

- I’ve read online to feed regularly, but mine do well with fertilizing maybe once or twice per year.
- They don’t like temperatures that drop below 50 degrees F.
- It’s easy to take cuttings inside and regrow from there. Take a 6-inch-long cutting and drop it in water. Once roots start to grow, you can stick the plant into soil and it should grow from there (rooting in water isn’t always necessary if the soil is perfect).
- Philodendrons prefer humidity and love a good mist now and then (they'll also do fine in dry air without misting) - For more eloquent writings, deeper research and very cool scientific facts written about philodendrons, here's an article from Britannica that I got lost in for a while. Plants are so interesting, philodendrons being one of many. 3. Pothos Plant (epipremnum aureum) Similar in looks and ease of growing to the Philodendrons, Pothos plants like the Golden Pothos pictured above are not Philodendrons, but they're close relatives. It is a great air purifying plant that looks just as good as it cleans your indoor air. Golden Pothos is a perfect plant for the garage because it cleans the ozone from exhaust out of the air. It also clears out formaldehyde, much like Philodendrons. I've also read about them being able to remove carbon monoxide and benzene from the air, but I haven't seen any official research to prove it. However, I'm sure they do remove these as well. Different Types of Air Filtering Pothos Plants: - Golden Pothos, epipremnum aureum - Satin Pothos, epipremnum pictum - Marble Queen, epipremnum aureum How Pothos Purifies Indoor Air Pothos plants remove ozone. Ozone is a naturally occurring substance that lightning and brush fires produce, for example. It's also a respiratory irritant that can be dangerous if inhaled in concentrated amounts. It's in car exhaust as well, making this plant perfect for your garage or the rooms and hallways closest to the garage. Golden Pothos also removes formaldehyde, another harmful chemical from exhaust that you want removed from the air. Chemicals removed from indoor air: Does not remove: How To Care For Pothos Water, sunshine, location: Pothos plants like water and need to drain well. They do not like sitting in puddled water. Their roots will rot, so make sure the soil is well drained and don't overwater. Less is more with these plants. Pothos do fine in low sunlight and in cooler temperatures.
I find that watering just a bit a few days a week is perfect for indoors. Outdoors, these do well with sunshine as long as they get some shade as well. The more sunshine they get, the more they grow. They also thrive in warmer temps when split between shade and sunlight. The deciding factors in their growth are the amount of sun and the quality of their water drainage. You can water liberally outside, but still keep the soil well-drained and don't overwater if it's not drying out fast. Fun Pothos Facts - Ideal for hanging pots inside. - Golden Pothos is also known as hunter's robe, ivy arum, money plant, silver vine, Solomon Islands ivy and taro vine, and its most popular name is Devil's Ivy because it's very hard to kill and stays vibrant green in dark environments. - It's called the "Money Plant" - It has reportedly not produced a flower since 1962, either in the wild or as a domesticated plant. (source) 4. Mother-in-Law's Tongue or Snake Plant (sansevieria trifasciata) Snake plant is one of the very best natural air purifiers there is. There seem to be 2 different variations from what I can see out in gardens. One has yellow stripes on the outside, the other no yellow stripes. Upon further research I saw that these different variations are attributed to the types of Sansevieria: - Snake Plant: No yellow stripes - Mother In Law's Tongue: Yellow stripes on the sides. Both are worthy air purifying plants. I've even seen some giant varieties too. Here's an image of the HUGE Mother In Law's Tongue I ran into recently. I'm sure these giant ones are wonderful for indoor air purification as well. The bottom line is that all of these sansevieria variations are effective at removing toxins from the air. They all improve indoor air quality and are great plants to have inside and outside your home or apartment.
The Snake Plant variety of Sansevierias (no yellow stripes) How Snake Plant Cleans The Air The Snake Plant is one of the very best air purifying plants based on the NASA Clean Air Study. In the air cleaning test, it removed all the chemicals tested except for ammonia! Removes from indoor air: Snake Plants do not remove: How To Grow Snake Plant This is one of the easiest plants you'll ever manage. It simply won't die. It doesn't need much water, and it thrives indoors with medium light or outside in full facing sun. As long as you make sure to get it water every now and then, it will be happy. 5. Boston Fern (nephrolepis exaltata 'Bostoniensis') Another favorite houseplant for many, and for good reason. All types of ferns look great indoors, whether hanging from a pot or on the ground. These are powerful natural air purifiers and are just as powerful at transforming a room's "look." ALL ferns are great at purifying the air. I'm finding there to be many fern varieties. The Boston Fern is one of the most popular and the one in the NASA study, but I would bet that all ferns have powerful air cleaning abilities. I still need to find the official variety name for this one: There are many variations of ferns. You can tell they are ferns by the distinct way the leaves grow, like little centipede legs. All variations make for excellent air purifying plants too. How Ferns Clean The Air According to the main NASA Clean Air Study, the Boston Fern helps purify the air of many harmful chemicals. These were the results for the Boston Fern: Does not remove: How To Grow Ferns These are easy to grow like most plants on this list. They need medium water and medium light. They're great in low light conditions and don't require much care or trimming. 6. Peace Lily or 'Mauna Loa' (spathiphyllum) Peace Lily is one of the most versatile air purifying plants. It was shown to remove all the chemicals thrown its way, one of only two plants in the well-known NASA Clean Air Study able to take care of all of them.
These help get rid of toxins like Benzene, Ammonia, Acetone and Ethyl, and more, as per the study. They say it's a good hallway plant, as it can help prevent toxins from traveling from room to room. How Peace Lily Cleans The Air Out of all the plants in the NASA Clean Air Study, Peace Lily was one of only two (along with Chrysanthemum, below) shown to remove all six chemicals tested for from the air. Peace Lily removes: How To Care For Your Peace Lily This video is very informative and will tell you everything you will ever need to know about the air purifying peace lily. Peace Lilies are easy to grow. They need low to medium light, medium water and well drained soil. Where to place? Good for the hallway. They are also great for your most humid rooms. 7. Chrysanthemum morifolium Chrysanthemum was the only other air cleaning plant besides Peace Lily able to remove all the harmful chemicals thrown its way. It is a perennial plant from the Asteraceae family. Chrysanthemum is a popular indoor houseplant because of its air purifying abilities. Its flowers smell great, they come in many colors, and they liven up any room they're placed in. I think they're perfect for a bathroom. How Chrysanthemum Purifies The Air Chrysanthemum is able to remove all the chemicals from the NASA Clean Air Study. How To Grow All the plants on this list are easy to grow indoors. The Chrysanthemum is no different. Use well drained soil, water a few times per week, and low to medium sunlight is ok. It's also good to fertilize monthly. 8. English Ivy (hedera helix) – Good for mold English Ivy is one you might have seen before. It is an easy-to-grow perennial evergreen climbing vine that is often found outdoors, spreading across buildings, walls and homes. It also likes to grow up trees and cover their trunks and branches. The highlight feature of English Ivy is its ability to get rid of mold. This was discovered from a more recent study, not the '89 NASA study.
How English Ivy Purifies The Air English Ivy is one of the best air purifying plants because it kills mold. This ability to remove mold spores from the air makes it a special plant to grow inside your home. Mold is one of the worst toxins we can encounter. While Superman's downfall is kryptonite, we humans are gravely affected by mold and its airborne mold spore toxins. I learned about this more recent study on its ability to remove indoor mold spores from a blog called Allergy and Air. Quoting about this study from their English Ivy blog post: "West Coast Clinical Trials practitioner Hilary Spyers-Duran is one of the authors of a study that proved the effectiveness of English Ivy in eliminating indoor mold particles. WebMD Health News also lent the notion some credibility, citing the researchers' findings: "As airborne mold spores have been linked to a variety of serious illnesses, English ivy could reduce indoor mold counts" The study they quote from was specifically done on the English Ivy plant. They put it into containers, one of which contained feces from a dog while another held a moldy piece of bread… Researchers measured the air quality after 6 hours and 12 hours. After 6 hours the containers each had around a 60% drop in airborne mold spores. After 12 hours the container with moldy bread had a 78% drop in airborne mold toxins and the dog feces container had a drop of 94%… almost 100% of airborne toxins, gone! Of course they conclude that more research is needed and that proper indoor air purifiers shouldn't be replaced by English Ivy and other plants based on this research alone. But it's a wonderful thing to see that the free gift of nature can help solve some of life's more pesky problems and that we don't always need expensive, fancy air purifiers to help clean our home's air. Used in tandem, you have a clean-air power combo that is sure to help make your living space healthier for you and your family.
However, there are a few downsides to English Ivy you need to know about. The Downsides Of English Ivy English Ivy is toxic if ingested, and it's an invasive vine. It's a bully to other plants. It takes them over. It also spreads up trees and can negatively impact them by stealing their sunshine and thus weakening them. In many instances, you'll want to control its spread. The best part about having them indoors is that it's easy to control their spread. They can't take over other plants inside unless you plant them in all your pots and let them take over somehow. The English Ivy plant contains the glycoside hederin in its leaves and berries. This substance is toxic to humans and animals, so you need to be careful. There is no danger if you don't ingest it. Having it in your home is actually beneficial because of its ability to purify the air and detox your home from some of the mold spores you may have inside. Inside your home or office you will have to consider its toxicity. While many "toxic" plants never fazed my old Jack Russell terrier, it may not be something you want to risk for the small animals or children in your home. English Ivy Pros - Easy to grow - Fast to grow - Suitable in low light or lots of light - Easy to find houseplant English Ivy Cons - Potentially toxic to children and animals (contains the glycoside hederin) - Invasive vine that spreads fast and takes over other plants - No natural enemies to help balance its spread English Ivy Recommendations: Only use it inside. For inside the home, English Ivy is a wonderful addition to help clean the air and remove mold spores. It's too invasive for outside and can take over other plants that native animals depend on for their seeds. As a native alternative, try Alumroot instead. It's also a great groundcover that will help fill some spaces, so if you're looking for something like that outside the home, in your garden, choose the non-invasive plant instead.
How To Care For English Ivy It's easy to grow and hard to kill. This is good and bad. You'll want to make sure it doesn't invade your yard and take over. You can let it take over certain sections, but if you don't chop up excessive growth every few months, you'll find that it likes to keep spreading to the point of uprooting the plants beside it. More Fun English Ivy Facts Because of its invasive nature, English Ivy is actually illegal to buy in some states, Oregon being the only one I know about. It's a great plant for air purification at the office or at home, just make sure you warn visitors if they're bringing over their babies, children or pets. The TOP 3 Air Purifying Plants Due to their surface-area-to-chemical-removal ratio, these were the top 3 in the NASA study. - Peace Lily: Effectiveness - English Ivy: mold - Snake Plant: overall hardiness + air purification Keep size in mind. The bigger the plant, the better for air purification. Also, these rankings are based on the minimal tests out there and the plants that were tested. Many plants purify the air without research proving it. Most plants, as long as they're comfortable growing indoors, will provide some natural air purification benefits. Hardest Plant To Kill (& Easiest To Grow) Mother In Law's Tongue (Sansevieria) Scores high in removing chemicals (high chemical removal rate) and is SUPER easy to care for. Good in high light and in low light environments. Doesn't need much water. Here's one I bought years ago with my mom and have moved around to apartments with me. It's still in a pot but I'm about to start spreading its greatness throughout the yard. Toxic and Non-Toxic Plants Checker To check the toxicity of any plant you may use for indoor air purification, you can go to this website. The ASPCA has a website dedicated to identifying toxic and non-toxic plants. You can type in the scientific name or the popular name to identify any plant and check its toxicity level for cats, dogs or humans.
An Awesome Plant-Filled Apartment! In this video, you see how you can make a jungle in your apartment. Her apartment is great! She says "like" about 1,000 times in this short vid, but if you can get through the million "likes" or just hit mute, you can see this wonderful apartment that has over 500 indoor houseplants that I'm sure do wonders to clean the air and provide her extra oxygen inside. A Downside To Houseplants? One research study from Harvard says they have their downsides… However, Harvard also runs the geoengineering program that sprays metals into the atmosphere and pollutes the unsuspecting citizens below, as well as the poor trees and forests, which is helping lead to major problems both in wildlife and with our fellow Americans. And thus, I will choose to ignore their "findings" about the drawbacks of indoor plants for air purification. I haven't seen anyone else with studies against it, and personally I haven't had an issue with it, so I'll just give this one a maybe. Check for mold. If you have a plant in a part of your home that is humid and without much airflow, then use a plant in this room that doesn't require much water. This might be best. Not only do they make your home healthier, they also make your house look better. And you can use them no matter where you live; house, dorm, apartment or office. Plants go well with almost all interior design types. I love them in the bathroom. But also in hallways, garages, laundry rooms, the kitchen and in every bedroom. Air cleaning houseplants are the best! 🙂 Indoor air purifying plants are a great addition to any home, office or room. The only thing to watch out for is little kids and small pets eating the leaves, but other than that, these indoor plants help purify the air and keep you, your family and your friends' air safe. Do you have a favorite indoor plant you use? Feel free to comment below! Up Next: 10 Natural Mold Killing Solutions
Visual evoked potentials (VEP) are the responses of the nervous system following the stimulation of the human visual field. They are used for the diagnosis of several neural pathologies (Parkinson's disease, Multiple Sclerosis, Epilepsy, …). These biological signals are recorded by three electrodes placed on the scalp. Their amplitudes are very weak and their frequencies are very low. They are greatly buried in a background noise whose nature is not well defined. For this reason, powerful tools are needed to extract and exploit the meaningful information they contain. This final project work aims to provide a contribution to a better understanding and interpretation of cerebral activity in humans via the improvement of the quality of the plots of the recorded signals. Its objective is to conceive and apply adaptive filtering methods as an alternative to the classical averaging technique, which is used in clinical practice, to achieve true VEP extraction. The obtained results show a good improvement of the signal-to-noise ratio when applying the LMS based adaptive filter in both normal and pathological cases. However, it is still very slow in terms of convergence speed compared to the RLS adaptive method. The programmed filters are of order 21 and use the averaged normal VEP signal as reference input. Both the LMS and RLS algorithms were found able to achieve the filtering task on the basis of only 5 to 10 VEP single sweeps. Key words: Adaptive Filtering, Averaging Method, Least Mean Square (LMS), Recursive Least Squares (RLS), Signal-to-Noise Ratio (SNR), Visual Evoked Potentials. The evoked potentials (EP) are the responses of the nervous pathways that occur as a result of successive stimulations. Indeed, they represent the perturbations of the natural electrochemical activity of the nervous system (EEG) during the presence of an external sensory excitation.
Among the different types of EP, we distinguish: - Auditory evoked potentials (AEP); - Somatosensory evoked potentials (SEP); - Event-Related Potentials (ERP); - Visual evoked potentials (VEP) [1, 2]. In this report, we are interested in visual evoked potentials. The evoked potentials are characterized by their very low amplitudes (a few microvolts) and low frequency ranges. These signals are strongly buried in a background noise resulting from the spontaneous bioelectric activity of the brain, whose amplitude is much greater than that of the EP. This is why it is necessary to use specialized data acquisition equipment to record and extract the useful signals from the unwanted spontaneous electrical activity of the brain [3, 4]. The evoked potentials constitute a simple and effective method for the diagnosis of several pathologies affecting the nervous pathways of the stimulated sensory modality, either in the sensory nerve or in the central nervous system [3, 4]. Their use has become more and more frequent thanks to technological advances in electronics and computing, which have made the acquisition and processing of EP much easier. To better understand the role of evoked potentials, and more specifically of visual evoked potentials, in the diagnosis of certain pathologies, we present in the next paragraphs their clinical definition; then we describe the VEP stimulation and data acquisition systems. At the end of the chapter, we describe the procedure followed during VEP tests. 2. Definition of visual evoked potentials Visual evoked potentials (VEPs) are the response of the nervous system following stimulation of the patient's visual field. Several variants of VEP exist depending on the stimuli used. However, we distinguish three main types of VEP, defined by the stimulations used in clinical practice: - Pattern reversal VEP; - Pattern onset/offset VEP; - Flash VEP.
The characteristics of the stimuli are described in the "VEP Standard 2004" of the International Society for Clinical Electrophysiology of Vision (ISCEV) [5, 6]. VEPs are used to assess the integrity of the visual pathways from the eye to the occipital areas in the brain. Indeed, in ophthalmology, the VEPs serve as a complementary examination for the diagnosis of any sudden decrease in visual acuity and/or to search for damage to the optic nerve, as in the case of an attack by multiple sclerosis. They are more widely used in neurology for the diagnosis of neurological diseases that have an impact on the optical pathways, or of intracerebral lesions such as atrophy and optic neuritis, the existence of tumors on the visual pathways, epilepsy, multiple sclerosis, etc. VEPs are weak low-frequency signals. Their amplitude varies from one person to another, even among normal subjects; its value typically ranges from a few microvolts to a few tens of microvolts, and their frequency does not exceed 300 Hz [3, 4]. A normal VEP has a standard shape in amplitude and time, and any neurophysiological involvement is reflected by the presence of an anomaly in the recorded signal [5, 6]. Figure 1 below shows the typical normal VEP signal. Figure 1: Waveform of the VEP signal in the normal case. The VEP is characterized by the succession of three major waves whose naming is standardized according to the time of their appearance after the excitation. These three waves are, in the order of their appearance: - The N70 wave (N1): it appears at around 70 to 75 milliseconds after the excitation. This wave is usually negative, and its polarity (negative or positive) reflects a certain independence between the two hemispheres of the brain during its generation [1, 5, 6]. - The P100 wave: It is the first dominant peak of the VEP. Its time of appearance corresponds to the latency of the nervous system.
It is the time that elapses between the perception of the stimulation and the reaction of the brain, manifested by the appearance of this first dominant peak of the recorded signal. The measurement of the latency makes it possible to obtain information on any damage to the myelin of the optic nerves. The P100 is repeatable in VEP signals, but it changes from one subject to another among the majority of normal cases. Therefore, the standard provides for a 20 millisecond margin on its measurement [1, 3, 4, 5, 6]. - The N170 wave (N2): This wave is often used in the study of the process of structural encoding of faces. Studies have revealed the existence of these bilateral occipito-temporal negativities, which culminate at 150 to 200 milliseconds and are particularly ample in response to face stimulations as compared to other visual objects [1, 5, 6, 7]. The amplitudes and times of occurrence of these peaks can be measured directly from the signal. The peak amplitude can be evaluated either peak-to-peak or from the baseline to the peak [3, 4]. 3. Recording VEP In clinical practice, the recording of evoked potentials is carried out by means of a helmet fitted with flat silver electrodes placed on the scalp. However, for VEP data acquisition, we need only three electrodes to be placed on the patient's scalp, two of which will be used to pick up the bioelectrical signals while the third will be taken as a reference electrode [5, 6, 8]. As mentioned above, VEPs have very low amplitudes and low frequencies. They are buried in interference resulting mainly from the spontaneous activity of the brain. For this reason, the VEP data acquisition chain must be efficient in extracting the useful signal. In general, this acquisition chain can be divided into three main modules: - Stimulation module; - Electronic data acquisition module; - VEP software module. Such a data acquisition chain is illustrated in Figure 2 below. Figure 2: VEP data acquisition chain diagram. 3.1.
Stimulation Module This module consists of a simple screen on which the stimuli are presented to the subject. Two classes of stimuli are used in clinical practice: - Pattern stimulation: Two major forms of stimulation fall under this class: pattern reversal stimuli and pattern onset/offset stimuli. The most used pattern is a black and white checkerboard. The first model consists in alternating the phase of the tiles of the checkerboard at a specific frequency, provided that the total luminance of the screen does not change. The second model consists in passing, suddenly, from a black screen to the presentation of a pattern [5, 6, 8, 9]. According to the guidelines, the pattern stimulus luminance should be expressed in candelas per meter squared (cd.m−2). It is recommended to have a minimum value of 80 cd.m−2 for the white areas' luminance. The latter should be uniformly distributed between the center and the periphery of the visual field without, however, exceeding a variation of 30% [5, 6, 8]. Note that colored stimuli can be used in some special cases. Figure 3 below shows some of the colored patterns used. - Flash stimulation: This type of stimulation is strongly used when the involvement of the subject is not necessary, such as in vision tests for babies, or when it is necessary to carry out tests on unconscious patients [5, 6, 8]. In this case, the patient is stimulated by a flash less than 1.5 times per second, which subtends a visual field of at least 20°. The recommended flash stimulus strength range is [1.5–3 cd.s.m−2] with a background of 15 to 30 cd.m−2 [5, 6, 8]. Figure 3: Some colored stimuli used for recording VEP. 3.2. Data acquisition module This module is composed of three main components, which are: 3.2.1.
Recording electrodes The recording of the EPs requires at least three silver-silver chloride or gold disc surface flat electrodes, whose impedances should not exceed 5 kΩ in order to reduce electrical interference. Indeed, a low electrode impedance makes it possible to reduce the interference from the 60 Hz signal generated by the power supply of the amplifier (see figure 4). In addition, when two electrodes have relatively different impedances, the differential amplifier becomes unbalanced and consequently becomes susceptible to amplifying noise signals [6, 8, 9]. Figure 4: Electrode electronic diagram. The electrodes' positioning depends on the type of evoked potential to be recorded. Their positions have been standardized in the International 10/20 System defined by "The International Federation in Encephalography and Clinical Neurophysiology" (figure 5) [5, 6, 8, 9, 11]. In the case of VEP, two active electrodes are used to pick up the VEP signals. They are placed at the Cz and Oz positions, which correspond to the optic chiasm and occipital cortex respectively. The third electrode is used as reference (ground). It is fixed on the earlobe, at either the A1 or A2 position, or at the Fz position where there is no neuronal activity. A conductive gel is used to ensure good conduction between the scalp and the electrodes [5, 6, 11]. Figure 5: International 10/20 System for positioning EEG recording electrodes. 3.2.2. The amplification device As for other electric signals of biological origin (biopotentials), it is necessary to amplify the recorded VEP signals, considering their very low amplitudes (a few microvolts), in order to allow their processing and analysis. For this reason, instrumentation amplifiers are required to increase the signal strength without distorting its shape. Such devices are in the form of voltage differential amplifiers, as illustrated in figure 6 below.
The inputs of the amplifier are connected to the active electrodes and the ground is connected to the reference one. The VEP recording standards recommend that the signal should be amplified 20,000 to 50,000 times before it is processed [5, 6, 8, 9, 12]. The amplifier may also be provided with a band-pass filtering stage. Its low and high cutoff frequencies should vary between 0.5 and 3 Hz and between 100 and 300 Hz respectively, in order to limit the spectrum of the recorded signal to the frequency band of the VEP. Furthermore, the amplifiers used shall be electrically insulated in accordance with international safety recommendations for medical recording equipment (IEC-601-1 LF specification) and shall have an input impedance of at least 100 MΩ. The frequency range should be set between 1 Hz or less (low cutoff frequency) and 300 Hz or more (high cutoff frequency) [5, 6, 8, 9]. Figure 6: Basic VEP amplifier circuit diagram. 3.2.3. The analog-to-digital conversion card The VEPs recorded on the patient's scalp are analog signals. Consequently, the data acquisition chain must include a high precision analog-to-digital conversion device in order to permit digital processing and analysis of the recorded data. The minimum sampling rate to use is 500 samples per second per channel with a minimum resolution of 12 bits. Signals exceeding ±50–100 µV in amplitude are considered artifacts [5, 6, 8, 9]. 3.3. VEP Software Module This module consists of software that allows the control of the entire data acquisition chain. It allows carrying out several tasks, the main ones being: - The adjustment of the data acquisition parameters; - The choice of stimuli; - Synchronization between the stimulation module and the data acquisition module; - Processing and display of the VEP signals; - Data backup (patient data and test results). In clinical practice, the VEP signal processing is achieved using the averaging technique. We will detail it more in Part II of this report [3, 4]. 4.
Visual Evoked Potential Test Before carrying out the tests, the exam team must take certain measures regarding the adjustment of the stimulation parameters and the patient. Indeed, it is recommended that the stimulus, in the form of a black and white checkerboard of 8x8 boxes of size 1°15' each, must have a luminous intensity at least equal to 80 cd.m−2. The location of the stimulus projection monitor shall be such that the diameter of the visual field is between 8° and 20° in its narrowest dimension. In practice, this visual angle is generally chosen to be equal to 15°, which corresponds to a position of the screen situated 70 to 100 centimeters from the patient [1, 3, 4, 5, 6]. The patient must follow a few guidelines in order to be able to successfully complete his VEP test. He should not take medication that dilates the pupils within 12 hours of the test, and if he wears glasses, then he should wear them at the time of the test. It is also recommended that the hair be neither wet nor covered with substances that can impair electrical conductivity on the scalp. During the VEP test, the patient is asked to be calm and relaxed and, mostly, to concentrate his vision on the projection screen of the stimulations [1, 3, 4, 5, 6]. The test has a duration of about thirty minutes, during which the patterns of the checkerboard change contrast every half second. At each checkerboard state change, the visual system of the patient is excited and produces a bioelectric response that can be detected and recorded via the electrodes [1, 2, 3, 4, 5, 6]. The test staff should then check that the VEPs are recorded correctly. In this part, we have presented visual evoked potentials, which represent the response of the brain to the excitation of the human visual field. Thus, we have indicated that these signals are composed of three main successive waves.
The most important one, from the point of view of medical diagnosis, is the P100 wave, which indicates the latency of the nervous system. If the latency is outside its normal range, this may be an indication of the presence of a possible cerebral pathology. We have then briefly described the different modules of the VEP data acquisition chain and given an overview of the VEP test procedure. In the next part of the report, we will detail the different filtering methods used to extract the useful information contained in the VEP signals. Indeed, we will describe the averaging method, currently used in clinical practice, and then we will tackle the adaptive filtering methods which we implemented and tested on our VEP data set. Linear and Nonlinear Filtering Methods As mentioned in the first part, single sweep VEP signals are very weak in amplitude and strongly masked by background noise. Thus, filtering them is very important in order to improve the signal-to-noise ratio. In this part, we describe two methods of filtering the VEP signal. The first one is the averaging technique, which is used in clinical practice, whereas the second one is based on linear adaptive filtering. We will illustrate the obtained results and compare the performance of these two methods. 2. Averaging technique In clinical practice, the useful VEP signal is extracted using the averaging technique. It consists in calculating the average of several VEP single sweeps until a "clear" signal is obtained. This requires more than 100 visual stimulations. Indeed, the number of single VEP sweeps to be averaged depends on the signal-to-noise ratio. For that reason, the ISCEV VEP Standard [5, 6] recommends setting the minimum number of sweeps per average to 64 and performing at least two averages to verify the reproducibility of the results. This results in fatigue and discomfort of the patient and may affect his concentration during the test [3, 4].
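The averaging procedure just described can be sketched numerically. The following is a minimal illustration on synthetic data (the waveform shape, amplitudes and noise level are assumptions for demonstration, not the project's recordings): averaging N = 64 zero-mean-noise sweeps raises the output SNR by roughly a factor of N in power, i.e. about 10·log10(64) ≈ 18 dB.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "VEP": a P100-like bump peaking at 100 ms, sampled at 500 Hz
fs = 500
t = np.arange(0, 0.3, 1 / fs)
s = 5.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))   # useful part s(n), in µV

N = 64                                                   # ISCEV minimum sweeps per average
v = 10.0 * rng.standard_normal((N, t.size))              # additive white Gaussian noise
x = s + v                                                # recorded sweeps x_i(n) = s(n) + v_i(n)

x_bar = x.mean(axis=0)                                   # ensemble average over N sweeps

def snr_db(estimate, truth):
    """Output SNR in dB: signal power over residual-error power."""
    err = estimate - truth
    return 10 * np.log10(np.sum(truth ** 2) / np.sum(err ** 2))

print(f"single sweep: {snr_db(x[0], s):.1f} dB, average of {N}: {snr_db(x_bar, s):.1f} dB")
```

When the sweeps are nonstationary, as argued below, this theoretical gain is no longer guaranteed, which is precisely the motivation for the adaptive alternatives.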
The reliability of the averaging technique is questionable because, theoretically, its improvement of the signal-to-noise ratio is directly related to the number of averaged sweeps only under the following assumptions [3, 4]:
- The recorded VEP signal is purely deterministic and repetitive; that is, the signal is stationary and its statistical properties do not depend on time;
- The background noise affecting VEP sweeps is an additive white Gaussian noise.
In theory, we assume that the i-th recorded VEP x_i(n) is composed of the useful part s_i(n) and the background noise ν_i(n):
x_i(n) = s_i(n) + ν_i(n) (eq.1)
The average of N recorded VEP signals is defined by:
x̄(n) = (1/N) Σ_{i=1..N} x_i(n) (eq.2)
Replacing equation (eq.1) in equation (eq.2), and using the first hypothesis (s_i(n) = s(n) for all i), we find:
x̄(n) = s(n) + (1/N) Σ_{i=1..N} ν_i(n) (eq.3)
Following the second hypothesis, in which ν_i(n) is a zero-mean additive white Gaussian noise, equation (eq.3) becomes, as N grows:
x̄(n) → s(n) (eq.4)
Finally, the average of N single VEP sweeps converges to the desired VEP signal s(n). Furthermore, the averaging technique enhances the signal-to-noise ratio by a factor equal to the number of averaged VEP signals. However, the above assumptions are far from valid in the case of VEP, because the signal is nonstationary and the nature of the background noise is not well known. Averaging may thus result in a loss of information present in the signal, especially when the number of responses to be averaged increases [1, 3, 4]. For this reason, we propose a filtering method that is more reliable and requires less test time. Indeed, adaptive methods have become an attractive tool offering new signal processing capabilities for dealing with nonstationary signals. 3. Principle of adaptive filtering In order to overcome the shortcomings of the averaging method, variants of linear filtering techniques have been proposed as alternatives to this traditional method, relying only on assumption (2) above (additive noise).
These techniques have enabled the VEP signal to be extracted from as few as two-thirds of the number of single sweeps required by the averaging method. Thus, Davila C.E. et al. proposed a matched subspace filter applied to the detection of multiharmonic VEP waves with unknown noise variance. Orfanidis S.J. et al. proposed an adaptive noise canceller based on a periodic pulse train as primary signal [3, 14]. Friman O. et al. proposed novel methods for detecting steady-state visual evoked potentials using multiple electroencephalogram (EEG) signals. Using short time segments, their methods provide high detection accuracy by finding combinations of electrode signals that cancel strong interference signals in the EEG data. Other researchers have proposed nonlinear filtering as an alternative to the averaging technique. As an example, we cite the paper of Hamzaoui E-M. et al., in which they developed a nonlinear filtering method based on a three-layer perceptron neural network [4, 16]. Qiu W. et al. used a Gaussian radial basis function neural network (RBFNN) to process the reference input of an adaptive signal enhancer, which they applied to filter evoked potentials. Another nonlinear method has been presented by Quiroga R. Q. et al.; it consists in using wavelet decomposition for VEP signal denoising. Some research works have also tackled the spectral analysis and modeling of VEP signals in order to obtain new representations that may reveal pertinent information for the diagnosis of pathologies. In this way, Regragui et al. used an autoregressive model based on linear predictors and reflection coefficients for VEP parameterization, whereas Cui J. et al. applied the Chirplet time-frequency representation to characterize VEP signals. 3.1. Adaptive filtering theory A linear adaptive filter is a linear time-variant system that uses two input signals, called the "primary" and "reference" signals.
Figure 7 illustrates the schematic diagram of an adaptive filter, where x̂(n) represents the estimated signal and e(n) = x(n) - x̂(n) is the estimation error. The reference signal y(n) is chosen so as to have a certain correlation with the primary signal x(n) [20, 21, 22]. Figure 7: Adaptive filter block diagram. The adaptive filtering technique consists in finding the "optimal" filter h_opt(n), called the Wiener solution, so as to obtain at the output the best estimate of the component of the primary signal x(n) which is correlated with the reference y(n) [3, 20]. Indeed, both x(n) and y(n) are supposed to be composed of a useful part s(n) and two mutually uncorrelated additive noise signals. The output error of the filter is given by e(n) = x(n) - x̂(n), and the best estimated output x̂(n) of the filter of order N is expressed as the convolution of the coefficient vector h(n) with the reference input y(n). This error is used as input of the feedback block in order to update the adaptive filter coefficients. This operation continues until h(n) converges to the optimal Wiener solution, which is reached at the minimum value of the mean square error E[e(n)²]. This results in removing the uncorrelated parts between the two filter inputs [20, 21, 22]. The coefficients of the filter h(n), initialized randomly, are adjusted using adaptation algorithms [20, 21, 22]. In order to choose one adaptation algorithm over another, several factors are considered; the most important are the accuracy of the estimation, the speed of convergence and the computational complexity (the number of operations required at each step to update the filter coefficients) [20, 21, 22]. Several algorithms can ensure the adaptation of the filter coefficients. Among these: - The Least Mean Square Algorithm (LMS); - The Normalized Least Mean Square Algorithm (NLMS); - The Affine Projection Algorithm (APA); - The Gradient Lattice Adaptation Algorithm; - The Recursive Least Square Algorithm (RLS) [20, 22].
In this work, we have chosen to compare two optimization algorithms which are widely used in adaptive filtering: the Recursive Least Square algorithm (RLS) and the Least Mean Square algorithm (LMS) [20, 21, 22]. The LMS algorithm is very attractive for its simplicity. However, it provides a gradual, iterative minimization of the mean square error (the performance index); the optimal values of the adaptive filter coefficients are therefore not computed instantaneously, but only after convergence [20, 21, 22]. The RLS algorithm, which is based on the exact minimization of the least squares criterion, leads to the true solution at each time instant n [20, 21, 22]. The RLS algorithm has been proposed for many applications (channel equalization, real-time system identification, biomedical engineering, ...) because it converges very fast. However, its computational complexity constitutes its main disadvantage. 3.2. RLS algorithm Assuming that the statistical properties are unknown, we do not minimize the MSE but a finite, exponentially weighted sum of squared errors given by:
J(n) = Σ_{i=1..n} λ^(n-i) e(i)²
Minimizing this cost function yields an impulse response h(n) associated with the least squares estimate x̂(n) [20, 21, 22]. The impulse response h(n) is thus a function of the available samples and not of a general statistical average; it is therefore adjusted at the presentation of each new sample. To limit the number of computations, we consider the recursive form [20, 21, 22]:
h(n) = h(n-1) + k(n) e(n)
where k(n) is the gain vector, also called the Kalman gain [20, 21, 22]. It is given by:
k(n) = P(n-1) y(n) / (λ + yᵀ(n) P(n-1) y(n))
where: - T denotes the vector transpose; - P(n) denotes the inverse of the correlation matrix of the filter's input signal, calculated according to the recursive formula given below; - 0 < λ < 1 is called the "forgetting factor"; it is used to discount the older samples and to stress the most recent ones [20, 21, 22].
Also, the RLS algorithm uses the following formula to update the inverse correlation matrix [20, 21, 22]:
P(n) = (1/λ) [ P(n-1) - k(n) yᵀ(n) P(n-1) ]
The RLS algorithm can be summarized as follows [20, 21, 22]: - Initialize h(0) = 0 and P(0) = δ⁻¹ I, where δ is a positive number called the regularization factor. - At each time sample n = 1, 2, ..., do: compute the output of the filter: x̂(n) = hᵀ(n-1) y(n); compute the error signal: e(n) = x(n) - x̂(n); compute the Kalman gain k(n); update the filter coefficients: h(n) = h(n-1) + k(n) e(n); update the inverse correlation matrix P(n). 3.3. LMS algorithm Another adaptation technique, based on gradient descent, can also be implemented. Indeed, the Least Mean Square (LMS) algorithm is often used in adaptive filtering systems [20, 21, 22]. It is used in various signal processing applications such as biomedical engineering, speech and audio processing, and telecommunications. The filter coefficients are updated according to the recursive formula:
h(n) = h(n-1) + 2µ y(n) e(n)
where µ is the adaptation step. This parameter regulates the updating process and adjusts the coefficients' fluctuations around their optimal values; in other words, the factor µ controls the filter. According to Widrow B., a too small value of µ guarantees a slow convergence to the optimal impulse response, and the filter will not be able to track sudden changes that may occur in the signal. Conversely, if µ is very large, the filter will not converge [20, 23]. The details of the LMS algorithm are as follows [20, 21, 22, 23]: - At each time sample n = 1, 2, ..., do: compute the output of the filter: x̂(n) = hᵀ(n-1) y(n); compute the error signal: e(n) = x(n) - x̂(n); update the filter coefficients: h(n) = h(n-1) + 2µ y(n) e(n). 4. Obtained Results All tests performed in this work concern normal and pathological signals. A database of 110 VEP single sweeps, corresponding to normal (50 VEPs) and pathological (60 VEPs) cases, is used to test the implemented algorithms. The pathological dataset corresponds to a subject suffering from multiple sclerosis.
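Both update rules summarized in this section can also be written out directly. The sketch below is an illustrative NumPy implementation of the LMS and RLS recursions as stated above (the default filter order, µ, λ and δ values are placeholders for demonstration, not the settings used in our tests):

```python
import numpy as np

def lms_filter(x, y, order=21, mu=1e-4):
    """LMS: estimate the part of primary x correlated with reference y."""
    h = np.zeros(order)
    x_hat = np.zeros_like(x)
    for n in range(len(x)):
        yv = y[max(0, n - order + 1):n + 1][::-1]   # most recent sample first
        yv = np.pad(yv, (0, order - len(yv)))       # zero-pad at start-up
        x_hat[n] = h @ yv                           # filter output
        e = x[n] - x_hat[n]                         # estimation error
        h = h + 2 * mu * e * yv                     # h(n) = h(n-1) + 2*mu*y(n)*e(n)
    return x_hat

def rls_filter(x, y, order=21, lam=0.999, delta=0.01):
    """RLS with forgetting factor lam; delta regularizes P(0) = I/delta."""
    h = np.zeros(order)
    P = np.eye(order) / delta                       # inverse correlation matrix
    x_hat = np.zeros_like(x)
    for n in range(len(x)):
        yv = y[max(0, n - order + 1):n + 1][::-1]
        yv = np.pad(yv, (0, order - len(yv)))
        x_hat[n] = h @ yv
        e = x[n] - x_hat[n]
        Py = P @ yv
        k = Py / (lam + yv @ Py)                    # Kalman gain
        h = h + k * e                               # coefficient update
        P = (P - np.outer(k, yv @ P)) / lam         # inverse correlation update
    return x_hat
```

In line with the discussion above, the RLS recursion approaches the least-squares solution almost immediately at the cost of an O(N²) matrix update per sample, while LMS needs many samples to converge but only O(N) operations per step.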
The VEP raw signals used in our experiment correspond to black and white checkerboard pattern-reversal stimuli for which the contrast changes every 500 milliseconds. They were amplified by a gain of 10⁶, filtered using a [0.5; 300] Hz band-pass filter, and sampled at a frequency of Fs = 1 kHz. To evaluate the efficiency of the adaptive filtering technique, using both the LMS and RLS adaptation algorithms, we compared the obtained results to those of VEP signals filtered using the classical averaging method. We also compared the convergence time and the mean value of the signal-to-noise ratio of each adaptation algorithm in order to determine the most efficient adaptive filter. 4.1. Adaptive filter settings The difficulty encountered when using adaptive filters is the choice of the following three parameters: - the order of the filter; - the choice of the reference input; - the adaptation algorithms' parameters. To overcome these difficulties, we carried out several tests to define some parameters and searched the bibliography for others. The values of the adaptation factors µ and λ were taken from bibliographical studies: according to Regragui F. et al., the best value of the adaptation step is µ = 10⁻⁴, and the forgetting factor λ is chosen to be 0.999. As the averaging method is currently used to extract the useful VEP information, we set the reference input to the VEP signal obtained by averaging all single sweeps corresponding to the normal subject. Finally, we plotted the variation of the maximum value of the mean square error E[e(n)²] as a function of the order of the filter, in both the LMS and RLS cases, in order to find the order that minimizes E[e(n)²]. The obtained results show that the best order is M = 21 (Figure 8). Figure 8: Mean square error vs. order of the adaptive filter.
In the following section, we illustrate and comment on the different results obtained when applying the RLS and LMS adaptive filters to our set of VEP data. In order to perform the tests, we used the DSP System Toolbox of the MATLAB R2015a software, in particular the "dsp.LMSFilter" and "dsp.RLSFilter" system objects. Both of them allow one to construct and apply adaptive filters to various types of data. Each one returns an object that contains all information about the designed filter, such as its order, the adaptation method and the initial values of all parameters. We used them in the following program, which processes the VEP raw data and plots the results described in sections 4.2 and 4.3 below.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EECS UG Project - 2016/17
% Application of Linear Adaptive Filtering Methods for Visual Evoked
% Potentials Extraction and Analysis
% - Main program -
% Ismail El Mediouri
% [email protected]
% Supervisor: Mr Luk Arnaut
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Init Matlab environment
clear all;                 % Clear all variables in workspace
clc;                       % Clear command window
close all;                 % Close all figures
% Load the VEP raw data
% For the pathological case, replace "VEP_Normal" by "VEP_Patho"
% and the variable VEPn by VEPp
load VEP_Normal
[L,C] = size(VEPn);        % Size of the raw data
% Init the adaptation algorithms' parameters
d = mean(VEPn,2);          % Reference input of the adaptive filter (desired signal)
lambda = 0.999;            % Forgetting factor for the RLS algorithm
mu = 0.0001;               % Adaptation step for the LMS algorithm
for M = [1 10 15 25 49]    % Numbers of VEP sweeps to process
    xf1 = []; xf2 = []; snr1 = []; snr2 = [];
    for k = 1:M
        xi = VEPn(:,k);    % VEP raw single sweep
        % RLS algorithm
        tic                % Start speed time counting
        hrls1 = dsp.RLSFilter(21,'Method','Conventional RLS', ...
            'ForgettingFactor',lambda);     % Construct the RLS filter
        [y1,e1] = step(hrls1,xi,d);         % Apply the RLS filter
        toc                % End speed time counting
        xf1(k,:) = y1;     % Store the filtered VEP for averaging
        snr1(k) = abs(snr(y1,e1));          % Compute the SNR
        % LMS algorithm
        tic                % Start speed time counting
        hlms2 = dsp.LMSFilter('Length',21, ...
            'StepSizeSource','Input port'); % Construct the LMS filter
        [y2,err] = step(hlms2,xi,d,mu);     % Apply the LMS filter
        toc                % End speed time counting
        xf2(k,:) = y2;     % Store the filtered VEP for averaging
        snr2(k) = abs(snr(y2,err));         % Compute the SNR
    end
    msnr1 = mean(snr1);    % Mean value of SNR for RLS
    msnr2 = mean(snr2);    % Mean value of SNR for LMS
    y1 = mean(xf1,1);      % Average of filtered VEPs (RLS)
    y2 = mean(xf2,1);      % Average of filtered VEPs (LMS)
    % Plotting results
    figure; hold on;
    plot(d/max(d));        % Averaged VEP
    plot(y1./max(y1),'r'); % RLS filtered VEP
    plot(y2./max(y2),'k'); % LMS filtered VEP
    title(['Filtering results of ' num2str(M) ' Normal VEPs']);
    legend('Averaged VEP','VEP-RLS','VEP-LMS');
end
4.2. Results obtained in the normal case Figures 9, 10 and 11 illustrate the results obtained while processing 5, 10 and 49 VEP signals corresponding to a normal case. We can easily confirm, on visual reading of these figures, that the adaptive filtering method is more efficient than the averaging one. Also, the LMS algorithm yields a much smoother VEP signal than the one obtained by the RLS algorithm. In terms of convergence speed, the RLS algorithm was found to be very fast; however, it provides a very poor improvement of the signal-to-noise ratio, as shown in Table 1. Table 1. Elapsed time and SNR mean value corresponding to the normal case. |Number of VEPs ||Elapsed time RLS (Sec) ||Mean value of SNR (dB) ||Elapsed time LMS (Sec) ||Mean value of SNR (dB) Figure 9: Extraction of VEP signal using 5 normal VEP single sweeps: averaging technique (blue), RLS adaptive filtering (red) and LMS adaptive filtering (black).
Figure 10: Extraction of VEP signal using 10 normal VEP single sweeps: averaging technique (blue), RLS adaptive filtering (red) and LMS adaptive filtering (black). Figure 11: Extraction of VEP signal using 49 normal VEP single sweeps: averaging technique (blue), RLS adaptive filtering (red) and LMS adaptive filtering (black). 4.3. Results obtained in the pathological case As for the normal case, we plotted in Figures 12, 13 and 14 the results obtained while filtering 5, 10 and 49 VEP signals corresponding to a pathological case. We also summarized in Table 2 the convergence times and the mean values of the signal-to-noise ratio. We can attest that the adaptive filtering method achieves the best extraction of the useful VEP signal. In terms of SNR improvement, the LMS algorithm obtains the best enhancement of this quantitative evaluation parameter. However, the convergence of the LMS filter to its optimal Wiener solution remains very slow compared to that of the RLS filter. Table 2. Elapsed time and SNR mean value corresponding to the pathological case. |Number of VEPs ||Elapsed time RLS (Sec) ||Mean value of SNR (dB) ||Elapsed time LMS (Sec) ||Mean value of SNR (dB) Figure 12: Extraction of VEP signal using 5 pathological VEP single sweeps: averaging technique (blue), RLS adaptive filtering (red) and LMS adaptive filtering (black). Figure 13: Extraction of VEP signal using 10 pathological VEP single sweeps: averaging technique (blue), RLS adaptive filtering (red) and LMS adaptive filtering (black). Figure 14: Extraction of VEP signal using 49 pathological VEP single sweeps: averaging technique (blue), RLS adaptive filtering (red) and LMS adaptive filtering (black). Traditionally, the clinical interpretation of VEP waveforms is based on the visual reading of their plots. The investigation focuses on the latency components of the VEP signal, which consist of the N70, P100 and N170 peaks.
In particular, the time position of the first positive deflection (P100) is the most important for neurological diagnosis. This peak, which occurs about 100 milliseconds after the stimulation, is reproducible among normal subjects and some patients having neurological disorders [3, 4]. The adaptive filtering methods described and tested above have improved the visual quality of the VEP plot compared to the one obtained using the classical averaging method. These results were confirmed in both normal and pathological cases. Indeed, the resulting plots clearly show the P100 peak in the normal case using just a dozen raw VEP signals. Furthermore, the LMS-based adaptive filter shows better filtering performance than the RLS one, as manifested by a good improvement of the mean value of the signal-to-noise ratio. The comparison of the computational complexity of the LMS and RLS algorithms is summarized in Table 3 below. Consequently, the LMS-based filtering method can be used for offline processing of VEP signals. Implementing the RLS algorithm on Field-Programmable Gate Array (FPGA) or Digital Signal Processor based cards could improve its convergence speed and allow real-time processing of VEP data; however, such an implementation may be costly due to the complexity of RLS. Table 3. Complexity comparison of LMS and RLS algorithms |Speed time of convergence
Precipitation intensity is measured by a ground-based radar that bounces radar waves off precipitation. The Local Radar base reflectivity product is a display of echo intensity (reflectivity) measured in dBZ (decibels). "Reflectivity" is the amount of transmitted power returned to the radar receiver after hitting precipitation, compared to a reference power density at a distance of 1 meter from the radar antenna. Base reflectivity images are available at several different elevation angles (tilts) of the antenna; the base reflectivity image currently available on this website is from the lowest tilt angle (0.5°). The maximum range of the base reflectivity product is 143 miles (230 km) from the radar location. This image will not show echoes that are more distant than 143 miles, even though precipitation may be occurring at these greater distances. To determine if precipitation is occurring at greater distances, link to an adjacent radar. In addition, the radar image will not show echoes from precipitation that lies outside the radar's beam, either because the precipitation is too high above the radar, or because it is so close to the Earth's surface that it lies beneath the radar's beam. How Doppler Radar Works NEXRAD (Next Generation Radar) can measure both precipitation and wind. The radar emits a short pulse of energy, and if the pulse strikes an object (raindrop, snowflake, bug, bird, etc.), the radar waves are scattered in all directions. A small portion of that scattered energy is directed back toward the radar. This reflected signal is then received by the radar during its listening period. Computers analyze the strength of the returned radar waves, the time it took to travel to the object and back, and the frequency shift of the pulse. The ability to detect the shift in the frequency of the pulse of energy is what makes NEXRAD a Doppler radar. The frequency of the returning signal typically changes based upon the motion of the raindrops (or bugs, dust, etc.).
This Doppler effect was named after the Austrian physicist Christian Doppler, who discovered it. You have most likely experienced the Doppler effect around trains. As a train passes your location, you may have noticed the pitch of the train's whistle changing from high to low. As the train approaches, the sound waves that make up the whistle are compressed, making the pitch higher than if the train were stationary. Likewise, as the train moves away from you, the sound waves are stretched, lowering the pitch of the whistle. The faster the train moves, the greater the change in the whistle's pitch as it passes your location. The same effect takes place in the atmosphere as a pulse of energy from NEXRAD strikes an object and is reflected back toward the radar. The radar's computers measure the frequency change of the reflected pulse of energy and then convert that change to a velocity of the object, either toward or away from the radar. Information on the movement of objects toward or away from the radar can be used to estimate the speed of the wind. This ability to "see" the wind is what enables the National Weather Service to detect the formation of tornadoes, which, in turn, allows us to issue tornado warnings with more advance notice. The National Weather Service's 148 WSR-88D Doppler radars can detect most precipitation within approximately 90 mi of the radar, and intense rain or snow within approximately 155 mi. However, light rain, light snow, or drizzle from shallow cloud weather systems is not necessarily detected.
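The conversion from frequency shift to target velocity follows v = Δf·λ/2, where λ is the radar wavelength. A small Python sketch (the 2.85 GHz transmit frequency is an assumed representative value; actual WSR-88D sites operate at assigned frequencies in the 2.7-3.0 GHz S band):

```python
C = 2.998e8          # speed of light, m/s
f0 = 2.85e9          # assumed S-band transmit frequency, Hz
wavelength = C / f0  # about 10.5 cm

def radial_velocity(doppler_shift_hz):
    """Radial velocity (m/s) of a target from the measured Doppler frequency shift."""
    return doppler_shift_hz * wavelength / 2.0

# A raindrop moving toward the radar at 20 m/s produces a shift of a few hundred Hz
shift = 2 * 20.0 / wavelength
print(f"{shift:.0f} Hz shift -> {radial_velocity(shift):.1f} m/s")
```

The factor of 2 appears because the wave is Doppler-shifted twice, once on the way to the moving target and once on the way back.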
Radar Products Offered Included in the NEXRAD data are the following products, all updated every 6 minutes if the radar is in Precipitation Mode or every 10 minutes if the radar is in Clear Air Mode (definitions follow below): - Base Reflectivity - Composite Reflectivity - Base Radial Velocity - Storm Relative Mean Radial Velocity - Vertically Integrated Liquid Water (VIL) - Echo Tops - Storm Total Precipitation - 1 Hour Running Total Precipitation - Velocity Azimuth Display (VAD) Wind Profile Clear Air Mode In this mode, the radar is at its most sensitive. This mode has the slowest antenna rotation rate, which permits the radar to sample a given volume of the atmosphere longer. This increased sampling improves the radar's sensitivity and its ability to detect smaller objects in the atmosphere than in precipitation mode. Much of what you will see in clear air mode is airborne dust and particulate matter. Also, snow does not reflect energy sent from the radar very well, so clear air mode is occasionally used for the detection of light snow. In clear air mode, the radar products update every 10 minutes. Precipitation Mode When rain is occurring, the radar does not need to be as sensitive as in clear air mode, as rain provides plenty of returning signals. In Precipitation Mode, the radar products update every 6 minutes. The dBZ Scale The colors on the legend are the different echo intensities (reflectivity) measured in dBZ. "Reflectivity" is the amount of transmitted power returned to the radar receiver. Reflectivity covers a wide range of signals (from very weak to very strong), so a more convenient number for calculations and comparison, a decibel (logarithmic) scale (dBZ), is used. The dBZ values increase as the strength of the signal returned to the radar increases. Each reflectivity image you see includes one of two color scales. One scale represents dBZ values when the radar is in clear air mode (dBZ values from -28 to +28).
The other scale represents dBZ values when the radar is in precipitation mode (dBZ values from 5 to 75). The scale of dBZ values is also related to the intensity of rainfall. Typically, light rain is occurring when the dBZ value reaches 20. The higher the dBZ, the stronger the rain rate. Depending on the type of weather occurring and the area of the U.S., forecasters use a set of rain rates associated with the dBZ values. These values are estimates of the rainfall per hour, updated each volume scan, with rainfall accumulated over time. Hail is a good reflector of energy and will return very high dBZ values. Since hail can cause the rainfall estimates to be higher than what is actually occurring, steps are taken to prevent these high dBZ values from being converted to rainfall. Ground Clutter, Anomalous Propagation and Other False Echoes Echoes from objects like buildings and hills appear in almost all radar reflectivity images. This "ground clutter" generally appears within a radius of 25 miles of the radar as a roughly circular region with a random pattern. A mathematical algorithm can be applied to the radar data to remove echoes where the echo intensity changes rapidly in an unrealistic fashion. These "No Clutter" images are available on the web site. Use these images with caution; ground clutter removal techniques can remove some real echoes, too. Under highly stable atmospheric conditions (typically on calm, clear nights), the radar beam can be refracted almost directly into the ground at some distance from the radar, resulting in an area of intense-looking echoes. This "anomalous propagation" phenomenon (commonly known as AP) is much less common than ground clutter. Certain sites situated at low elevations on coastlines regularly detect "sea return", a phenomenon similar to ground clutter except that the echoes come from ocean waves. Radar returns from birds, insects, and aircraft are also rather common.
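As an illustration of how a dBZ value becomes a rain-rate estimate, the sketch below inverts the classic Marshall-Palmer relation Z = 200·R^1.6 (one common Z-R relation, used here for illustration; operational NWS rainfall products use relations tuned by region and season):

```python
def dbz_to_rain_rate(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b for rain rate R in mm/h (Marshall-Palmer coefficients)."""
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity factor Z (mm^6/m^3)
    return (z / a) ** (1.0 / b)

for dbz in (20, 35, 50):
    print(f"{dbz} dBZ -> {dbz_to_rain_rate(dbz):.2f} mm/h")
```

At 20 dBZ this yields roughly 0.65 mm/h, consistent with the light-rain threshold mentioned above, while 50 dBZ corresponds to roughly 49 mm/h.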
Echoes from migrating birds regularly appear during nighttime hours between late February and late May, and again from August through early November. Return from insects is sometimes apparent during July and August. The apparent intensity and areal coverage of these features is partly dependent on radio propagation conditions, but they usually appear within 30 miles of the radar and produce reflectivities of <30 dBZ. However, during the peaks of the bird migration seasons, in April and early September, extensive areas of the south-central U.S. may be covered by such echoes. Finally, aircraft often appear as "point targets" far from the radar. Base Reflectivity This is a display of echo intensity (reflectivity) measured in dBZ. The base reflectivity images in Precipitation Mode are available at four radar "tilt" angles: 0.5°, 1.45°, 2.40° and 3.35° (these tilt angles are slightly higher when the radar is operated in Clear Air Mode). A tilt angle of 0.5° means that the radar's antenna is tilted 0.5° above the horizon. Viewing multiple tilt angles can help one detect precipitation, evaluate storm structure, locate atmospheric boundaries, and determine hail potential. The maximum range of the "short range" base reflectivity product is 124 nautical miles (about 143 miles) from the radar location. This view will not display echoes that are more distant than 124 nm, even though precipitation may be occurring at greater distances. Composite Reflectivity This display is of maximum echo intensity (reflectivity) measured in dBZ from all four radar "tilt" angles: 0.5°, 1.45°, 2.40° and 3.35°. This product is used to reveal the highest reflectivity in all echoes. When compared with Base Reflectivity, the Composite Reflectivity can reveal important storm structure features and intensity trends of storms. The maximum range of the "short range" composite reflectivity product is 124 nm (about 143 miles) from the radar location.
This view will not display echoes that are more distant than 124 nm, even though precipitation may be occurring at greater distances. Base Radial Velocity This is the velocity of the precipitation either toward or away from the radar (in a radial direction). No information about the strength of the precipitation is given. This product is available for just two radar "tilt" angles, 0.5° and 1.45°. Precipitation moving toward the radar has negative velocity (blues and greens). Precipitation moving away from the radar has positive velocity (yellows and oranges). Precipitation moving perpendicular to the radar beam (in a circle around the radar) will have a radial velocity of zero, and will be colored grey. The velocity is given in knots (10 knots = 11.5 mph). Where the display is colored pink (coded as "RF" on the color legend on the left side), the radar detected an echo but was unable to determine the wind velocity, due to inherent limitations in the Doppler radar technology. RF stands for "Range Folding". Storm Relative Mean Radial Velocity This is the same as the Base Radial Velocity, but with the mean motion of the storm subtracted out. This product is available for four radar "tilt" angles, 0.5°, 1.45°, 2.40° and 3.35°. Determining True Wind Direction The true wind direction can be determined on a radial velocity plot only where the radial velocity is zero (grey colors). Where you see a grey area, draw an arrow from negative velocities (greens and blues) to positive velocities (yellows and oranges) so that the arrow is perpendicular to the radar beam. The radar beam can be envisioned as a line connecting the grey point with the center of the radar. To think of it another way, draw the wind direction line so that the wind will be blowing in a circle around the radar (no radial velocity, only tangential velocity). In order to determine the wind direction everywhere on the plot, a second Doppler radar positioned in a different location would be required. 
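The geometry described above (zero radial velocity where the wind blows perpendicular to the beam) can be made concrete with a short sketch (Python; u and v are assumed eastward and northward wind components in m/s, and the azimuth convention is degrees clockwise from north):

```python
import math

def radial_component(u, v, azimuth_deg):
    """Component of the wind (u eastward, v northward, m/s) along a radar beam
    pointing at azimuth_deg (degrees clockwise from north); positive = away."""
    az = math.radians(azimuth_deg)
    return u * math.sin(az) + v * math.cos(az)

# A 10 m/s wind blowing toward the east (u=10, v=0):
print(radial_component(10, 0, 90))   # beam pointing east: fully away from the radar
print(radial_component(10, 0, 270))  # beam pointing west: fully toward the radar
print(radial_component(10, 0, 0))    # beam pointing north: perpendicular, zero
```

A single radar only ever measures this one projection; recovering the full two-dimensional wind everywhere requires the second radar described above.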
Research programs frequently use such "dual Doppler" techniques to generate a full 3-D picture of the winds over a large area. If you see a small area of strong positive velocities (yellows and oranges) right next to a small area of strong negative velocities (greens and blues), this may be the signature of a mesocyclone--a rotating thunderstorm. Approximately 40% of all mesocyclones produce tornadoes. 90% of the time, the mesocyclone (and tornado) will be spinning counter-clockwise. If the thunderstorm is moving rapidly toward or away from you, the mesocyclone may be harder to detect. In these cases, it is better to subtract off the mean velocity of the storm center, and look at the Storm Relative Mean Radial Velocity. Vertically Integrated Liquid Water (VIL) VIL is the amount of liquid water that the radar detects in a vertical column of the atmosphere for an area of precipitation. High values are associated with heavy rain or hail. VIL values are computed for each 2.2x2.2 nm grid box for each elevation angle within 124 nm radius of the radar, then vertically integrated. VIL units are in kilograms per square meter--the total mass of water above a given area of the surface. VIL is useful for: - Finding the presence and approximate size of hail (used in conjunction with spotter reports). VIL is computed assuming that all the echoes are due to liquid water. Since hail has a much higher reflectivity than a rain drop, abnormally high VIL levels are typically indicative of hail. - Locating the most significant thunderstorms or areas of possible heavy rainfall. - Predicting the onset of wind damage. Rapid decreases in VIL values frequently indicate wind damage may be occurring. A handy VIL interpretation guide is available from the Oklahoma Climatological Survey. The Echo Tops image shows the maximum height of precipitation echoes. 
The radar will not report echo tops below 5,000 feet or above 70,000 feet, and will only report those tops that are at a reflectivity of 18.5 dBZ or higher. In addition, the radar will not be able to see the tops of some storms very close to the radar. For very tall storms close to the radar, the maximum tilt angle of the radar (19.5 degrees) is not high enough to let the radar beam reach the top of the storm. For example, the radar beam at a distance of 30 miles from the radar can only see echo tops up to 58,000 feet. Echo top information is useful for identifying areas of strong thunderstorm updrafts. In addition, a sudden decrease in the echo tops inside a thunderstorm can signal the onset of a downburst--a severe weather event where the thunderstorm downdraft rushes down to the ground at high velocities and causes tornado-intensity wind damage.

Storm Total Precipitation

The Storm Total Precipitation image shows estimated accumulated rainfall, continuously updated, since the last one-hour break in precipitation. This product is used to locate flood potential over urban or rural areas, estimate total basin runoff and provide rainfall accumulations for the duration of the event.

1 Hour Running Total Precipitation

The 1 Hour Running Total Precipitation image is an estimate of one-hour precipitation accumulation on a 1.1x1.1 nm grid. This product is useful for assessing rainfall intensities for flash flood warnings, urban flood statements and special weather statements.

Velocity Azimuth Display (VAD) Wind Profile

The VAD Wind Profile image presents snapshots of the horizontal winds blowing at different altitudes above the radar. These wind profiles are spaced 6 to 10 minutes apart in time, with the most recent snapshot at the far right. If there is no precipitation above the radar to bounce off, an "ND" (Non-Detection) value is plotted. Wind speeds are given in knots, altitudes in thousands of feet (KFT), and the time is GMT (5 hours ahead of EST).
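The echo-top limit near the radar discussed above is pure beam geometry: even at the maximum 19.5° tilt, the beam center only reaches a certain height at a given range. A sketch using the standard 4/3-effective-earth-radius beam-height equation (the function is our illustration; the published 58,000 ft figure also folds in beam width and the radar's own elevation, so an exact match is not expected):

```python
import math

def beam_height_m(range_m, elev_deg):
    """Height of the beam center above the radar at a given slant range,
    using the standard 4/3-effective-earth-radius refraction model."""
    re = (4.0 / 3.0) * 6.371e6  # effective earth radius, meters
    theta = math.radians(elev_deg)
    return math.sqrt(range_m**2 + re**2 + 2.0 * range_m * re * math.sin(theta)) - re

# At the maximum 19.5 degree tilt, 30 statute miles (~48.3 km) out, the beam
# center is already around 16 km up -- the same order as the echo-top limit
# quoted above, and the reason nearby storm tops can be missed entirely.
top_limit_ft = beam_height_m(48.3e3, 19.5) / 0.3048
```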
The colors of the wind barbs are coded by how confident the radar was that it measured a correct value. High values of the RMS (Root Mean Square) error (in knots) mean that the radar was not very confident that the wind it is displaying is accurate--there was a lot of change in the wind during the measurement.

Storm Attributes Table

The Storm Attributes Table is a NEXRAD-derived product which attempts to identify storm cells. The table contains the following fields:
- ID - The ID of the cell. The ID is also printed on the radar image to enable you to reference the table with storms on the radar image. If a triangle is shown in this field, it indicates NEXRAD detection of a possible tornadic cell (this "detection" is called the tornado vortex signature). If a diamond appears in this field, NEXRAD algorithms have detected that the storm is a mesocyclone. If a yellow-filled square appears, the storm has a 70% or greater chance of containing hail.
- Max DBZ - The highest reflectivity found within the storm cell.
- Top (ft) - Storm top elevation in feet.
- VIL (kg/m²) - Vertically Integrated Liquid water. This is an estimate of the mass of water suspended in the storm per square meter.
- Probability of severe hail - Probability that the storm contains severe hail.
- Probability of hail - Probability that the storm contains hail.
- Max hail size (in) - Maximum hail stone diameter.
- Speed (knots) - Speed of the storm movement in knots.
- Direction - Direction of storm movement.
On the radar image, arrows show the forecast movement of storm cells. Each tick mark indicates 20 minutes of time. The arrow length indicates where the cells are forecast to be in 60 minutes. When choosing the top 5 or top 10 storms from the "Show Storms" select box, the top storms are ranked by Max DBZ. This product should not be used for protection of life and/or property. Weather Underground's NEXRAD radar product incorporates StrikeStar data.
StrikeStar is a network of Boltek lightning detectors around the United States and Canada. These detectors all send their data to our central server, where the StrikeStar software developed by Astrogenic Systems triangulates their data and presents the results in near real-time. Please note: because of errors in sensor calibration and large distances between some sensors, lightning data may display skewed or be missing in certain regions. If you have a Boltek detector and run Astrogenic's NexStorm software then we would like to hear from you. There are a small number of simple criteria you need to fulfill to join the network. You can email us at [email protected] for further details.

Terminal Doppler Weather Radar (TDWR)

The Terminal Doppler Weather Radar (TDWR) is an advanced-technology weather radar deployed near 45 of the larger airports in the U.S. The radars were developed and deployed by the Federal Aviation Administration (FAA) beginning in 1994, as a response to several disastrous jetliner crashes in the 1970s and 1980s caused by strong thunderstorm winds. The crashes occurred because of wind shear--a sudden change in wind speed and direction. Wind shear is common in thunderstorms, due to a downward rush of air called a microburst or downburst. The TDWRs can detect such dangerous wind shear conditions, and have been instrumental in enhancing aviation safety in the U.S. over the past 15 years. The TDWRs also measure the same quantities as our familiar network of 148 NEXRAD WSR-88D Doppler radars--precipitation intensity, winds, rainfall rate, echo tops, etc. However, the newer Terminal Doppler Weather Radars are higher resolution, and can "see" storm features in much finer detail close to the radar. This high-resolution data has generally not been available to the public until now. Thanks to a collaboration between the National Weather Service (NWS) and the FAA, the data for all 45 TDWRs is now available in real time via a free satellite broadcast (NOAAPORT).
We're calling them "High-Def" stations on our NEXRAD radar page. Since thunderstorms are uncommon along the West Coast and Northwest U.S., there are no TDWRs in California, Oregon, Washington, Montana or Idaho.

Summary of the TDWR products

The TDWR products are very similar to those available for the traditional WSR-88D NEXRAD sites. There is the standard radar reflectivity image, available at each of three different tilt angles of the radar, plus Doppler velocity of the winds in precipitation areas. There are 16 colors assigned to the short range reflectivity data (same as the WSR-88Ds), but 256 colors assigned to the long range reflectivity data and all of the velocity data. Thus, you will see up to 16 times as many colors in these displays versus the corresponding WSR-88D display, giving much higher detail of storm features. The TDWRs also have storm total precipitation available in the standard 16 colors like the WSR-88D has, or in 256 colors (the new "Digital Precipitation" product). Note, however, that the TDWR rainfall products generally underestimate precipitation, due to attenuation problems (see below). The TDWRs also have such derived products as echo height, vertically integrated liquid water, and VAD winds. These are computed using the same algorithms as the WSR-88Ds use, and thus have no improvement in resolution.

Improved horizontal resolution of TDWRs

The TDWR is designed to operate at short range, near the airport of interest, and has a limited area of high-resolution coverage--just 48 nm, compared to the 124 nm of the conventional WSR-88Ds. The WSR-88Ds use a 10 cm radar wavelength, but the TDWRs use a much shorter 5 cm wavelength. This shorter wavelength allows the TDWRs to see details as small as 150 meters along the beam, at the radar's regular range of 48 nm. This is nearly twice the resolution of the NEXRAD WSR-88D radars, which see details as small as 250 meters at their close range (out to 124 nm).
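Along-beam resolution is fixed by the transmitted pulse, but azimuthal (cross-beam) resolution degrades with distance: the beam width on the ground is roughly the range times the beam width in radians. A small Python sketch comparing the two radars' beam widths (0.55° for the TDWR versus an average 0.95° for the WSR-88D; the function name is ours):

```python
import math

def cross_beam_width_m(range_nm, beamwidth_deg):
    """Physical width of the radar beam at a given range -- roughly the
    smallest azimuthal detail resolvable there (range * beamwidth)."""
    return range_nm * 1852.0 * math.radians(beamwidth_deg)  # nm -> m

# At the TDWR's 48 nm high-resolution range limit:
tdwr = cross_beam_width_m(48, 0.55)    # ~850 m wide (TDWR beam)
wsr88d = cross_beam_width_m(48, 0.95)  # ~1470 m wide (WSR-88D beam)
```

The ratio (0.95/0.55 ≈ 1.7) is why the azimuthal resolution is described as "nearly twice" that of the WSR-88D.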
At longer ranges (48 to 225 nm), the TDWRs have a resolution of 300 meters--more than three times better than the 1000 meter resolution the WSR-88Ds have at their long range (124 to 248 nm). The angular (azimuth) resolution of the TDWR is nearly twice what is available in the WSR-88D: each radial in the TDWR has a beam width of 0.55 degrees, while the average beam width for the WSR-88D is 0.95 degrees. At distances within 48 nm of the TDWR, these radars can pick out the detailed structure of tornadoes and other important weather features (Figure 2). Extra detail can also be seen at long range, and the TDWRs should give us more detailed depictions of a hurricane's spiral bands as it approaches the coast.

View of a tornado taken by conventional WSR-88D NEXRAD radar (left) and the higher-resolution TDWR system (right). Using the conventional radar, it is difficult to see the hook shape of the radar echo, while the TDWR clearly depicts the hook echo, as well as the Rear-Flank Downdraft (RFD) curling into the hook. Image credit: National Weather Service.

TDWR attenuation problems

The most serious drawback to using the TDWRs is the attenuation of the signal due to heavy precipitation falling near the radar. Since the TDWRs use the shorter 5 cm wavelength, which is closer to the size of a raindrop than the 10 cm wavelength used by the traditional WSR-88Ds, the TDWR beam is more easily absorbed and scattered away by precipitation. This attenuation means that the radar cannot "see" very far through heavy rain. It is often the case that a TDWR will completely miss seeing tornado signatures when there is heavy rain falling between the radar and the tornado. Hail causes even more trouble. Thus, it is best to use the TDWR in conjunction with the traditional WSR-88D radar to ensure nothing is missed.

View of a squall line taken using a TDWR (left column) and a WSR-88D system (right column).
A set of three images going from top to bottom shows the squall line's reflectivity as it approaches the TDWR radar, moves over the TDWR, then moves away. Note that when the heavy rain of the squall line is over the TDWR, it can "see" very little of the squall line. On the right, we can see the effect a strong thunderstorm with hail has on a TDWR. The radar (located in the lower left corner of the image) cannot see much detail directly behind the heavy pink echoes that denote the core of the hail region, creating a "shadow". Image credit: National Weather Service.

TDWR range unfolding and aliasing problems

Another serious drawback to using the TDWRs is the ambiguity in the range of echoes reaching the receiver. Since the radar is geared towards examining the weather in high detail at short range, echoes that come back from features lying at longer ranges suffer from what is called range folding and aliasing. For example, for a thunderstorm echo appearing 48 nm from the radar, the radar won't be able to tell if the thunderstorm is at 48 nm or some multiple of 48 nm, such as 96 or 192 nm. In regions where the software can't tell the distance, the reflectivity display will have black missing-data regions extending radially towards the radar. Missing velocity data will be colored pink and labeled "RF" (Range Folded). In some cases, the range folded velocity data will be in the form of curved arcs that extend radially towards the radar.

Typical errors seen in the velocity data (left) and reflectivity data (right) when range folding and aliasing are occurring. Image credit: National Weather Service.

TDWR ground clutter problems

Since the TDWRs are designed to alert airports of low-level wind shear problems, the radar beam is pointed very close to the ground and is very narrow. The lowest elevation angle for the TDWRs ranges from 0.1° to 0.3°, depending upon how close the radar is to the airport of interest.
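The range folding described above follows from pulse timing: an echo must return before the next pulse goes out, so the maximum unambiguous range is c/(2·PRF), and anything beyond it aliases back by multiples of that range. A hedged Python sketch (function names and the 48 nm example are our illustration of the general principle, not the TDWR's actual pulse schedule):

```python
C = 299_792_458.0  # speed of light, m/s
NM = 1852.0        # meters per nautical mile

def max_unambiguous_range_m(prf_hz):
    """An echo must return before the next pulse leaves; beyond this range
    the radar cannot tell which pulse an echo belongs to."""
    return C / (2.0 * prf_hz)

def apparent_range_m(true_range_m, prf_hz):
    """Where a range-folded echo shows up: the true range modulo the
    unambiguous range."""
    return true_range_m % max_unambiguous_range_m(prf_hz)

# A PRF giving a 48 nm unambiguous range folds a storm at 140 nm
# back to an apparent range of 44 nm (140 - 2 * 48).
prf = C / (2.0 * 48 * NM)
folded = apparent_range_m(140 * NM, prf)
```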
In contrast, the lowest elevation angle of the WSR-88Ds is 0.5°. As a result, the TDWRs are very prone to ground clutter from buildings, water towers, hills, etc. Many radars have permanent "shadows" extending radially outward due to nearby obstructions. The TDWR software is much more aggressive about removing ground clutter than the WSR-88D software is. This means that real precipitation echoes of interest will sometimes get removed.

For more TDWR information

If you are a storm buff who will be regularly using the new TDWR data, you can download the three Terminal Doppler Weather Radar (TDWR) Build 3 Training modules. These three Flash files, totaling about 40 MB, give a detailed explanation of how TDWRs work, and of their strengths and weaknesses.

Archived Historical Radar Data

The National Climatic Data Center offers free U.S. mosaics for the past 10 years. Plymouth State College offers single-site radar images of all radar products going back several weeks.
Commuter rail, or suburban rail, is a passenger rail transport service that primarily operates within a metropolitan area, connecting commuters to a central city from adjacent suburbs or commuter towns. Generally commuter rail systems are considered heavy rail, using electrified or diesel trains. Distance charges or zone pricing may be used. Similar non-English terms include Treno suburbano in Italian, Cercanías in Spanish, Rodalies in Catalan, Aldiriak in Basque, Rodalia in Valencian, Proximidades in Galician, Proastiakos in Greek, Train de banlieue in French, Příměstský vlak or Esko in Czech, Elektrichka in Russian, Pociąg podmiejski in Polish and Pendeltåg in Swedish. Some services share similarities with both commuter rail and high-frequency rapid transit, examples being the German S-Bahn in some cities, the Réseau Express Régional (RER) in Paris, many Japanese commuter systems, the West Rail line in Hong Kong and some Australasian suburban networks, such as Sydney Trains. Some services, like British commuter rail, share tracks with other passenger services and freight. In the United States, commuter rail often refers to services that operate a higher frequency during peak periods and a lower frequency off-peak. Since the creation of Toronto's GO Transit commuter service in 1967, commuter rail services and route length have been expanding in North America. In the US, commuter rail is sometimes referred to as regional rail. Compared to rapid transit (or metro rail), commuter/suburban rail often has lower frequency, following a schedule rather than fixed intervals, and fewer stations spaced further apart. It primarily serves lower density suburban areas (non inner-city), and often shares right-of-way with intercity or freight trains. Some services operate only during peak hours, and others run fewer departures during off-peak hours and weekends. Average speeds are high, often 50 km/h (30 mph) or higher.
These higher speeds better serve the longer distances involved. Some services include express services which skip some stations in order to run faster and separate longer-distance riders from short-distance ones. The general range of commuter trains' travel distance varies between 15 and 200 km (10 and 125 miles), but longer distances can be covered when the trains run between two or several cities (e.g. S-Bahn in the Ruhr area of Germany). Distances between stations may vary, but are usually much longer than those of urban rail systems. In city centers the train either has a terminal station or passes through the city centre with notably fewer station stops than those of urban rail systems. Toilets are often available on-board trains and in stations. Their ability to coexist with freight or intercity services in the same right-of-way can drastically reduce system construction costs. However, frequently they are built with dedicated tracks within that right-of-way to prevent delays, especially where service densities have converged in the inner parts of the network. Most such trains run on the local standard gauge track. Some systems may run on a narrower or broader gauge. Examples of narrow gauge systems are found in Japan, Indonesia, Malaysia, Thailand, Taiwan, Switzerland, in the Brisbane (Queensland Rail's City network) and Perth (Transperth) systems in Australia, in some systems in Sweden, and on the Genoa-Casella line in Italy. Some countries and regions, including Finland, India, Pakistan, Russia, Brazil and Sri Lanka, as well as San Francisco (BART) in the US and Melbourne and Adelaide in Australia, use broad gauge track. Metro rail or rapid transit usually covers a smaller inner-urban area, extending 12 to 20 km (7 to 12 miles) outward, has a higher train frequency and runs on separate tracks (underground or elevated), whereas commuter rail often shares tracks, technology and the legal framework within mainline railway systems.
However, the classification as a metro or rapid rail can be difficult as both may typically cover a metropolitan area exclusively, run on separate tracks in the centre, and often feature purpose-built rolling stock. The fact that the terminology is not standardised across countries (even across English-speaking countries) further complicates matters. This distinction is most easily made when there are two (or more) systems such as New York's subway and the LIRR and Metro-North Railroad, Paris' Métro and RER along with Transilien, Washington D.C.'s Metro along with its MARC and VRE, London's tube lines of the Underground and the Overground, (future) Crossrail, Thameslink along with other commuter rail operators, Madrid's Metro and Cercanías, Barcelona's Metro and Rodalies, and Tokyo's subway and the JR lines along with various privately owned and operated commuter rail systems. An S-Train is a type of hybrid urban-suburban rail serving a metropolitan region, most often in the German-speaking countries. The most well-known S-train systems are the S-Bahn systems in Germany and Austria with other well-known examples being the S-tog in Copenhagen and S-Bahn/RER systems in Switzerland. In Germany, the S-Bahn is regarded as a train category of its own, and exists in many large cities and in some other areas, with differing service and technical standards from city to city. Most S-Bahns typically behave like commuter rail with most trackage not separated from other trains, and long lines with trains running between cities and suburbs rather than within a city. The distances between stations however, are usually short. In larger systems there is usually a high frequency metro-like central corridor in the city center into which all the lines converge. Typical examples of large city S-Bahns include Munich and Frankfurt. 
S-Bahns also exist in some mid-size cities like Rostock and Magdeburg but behave more like typical commuter rail, with lower frequencies and very little exclusive trackage. In Berlin, the S-Bahn arguably fulfills all the criteria of a true metro system (despite the existence of the U-Bahn as well): the trains run on tracks that are entirely separated from other trains, there are short distances between stations, the trains run at high frequency and use tunnels, but they do run a bit further out from the city centre compared with the U-Bahn. In Hamburg and Copenhagen, other, diesel-driven trains continue where the S-Bahn ends (the "A-Bahn" in the Hamburg area, and the "L-tog" in Copenhagen).

Regional rail usually provides rail services between towns and cities, rather than purely linking major population hubs in the way inter-city rail does. Regional rail operates outside major cities. Unlike inter-city rail, it stops at most or all stations between cities. It provides a service between smaller communities along the line, and also connections with long-distance services at interchange stations located at junctions or at larger towns along the line. Alternative names are "local train" or "stopping train". Examples include the former BR's Regional Railways, France's TER (Transport express régional), Germany's Regionalexpress and Regionalbahn, and South Korea's Tonggeun services. Regional rail does not exist in this sense in the United States, so the term "regional rail" has become synonymous with commuter rail there, although the two are more clearly defined in Europe. In some European countries the distinction between commuter trains and long-distance/intercity trains is very hard to make, because of the relatively short distances involved. For example, so-called "intercity" trains in Belgium and the Netherlands carry many commuters, and their equipment, range and speeds are similar to those of commuter trains in some larger countries.
In the United Kingdom there is no real division of organisation and brand name between commuter, regional and inter-city trains, making it hard to categorize train connections. Russian commuter trains, on the other hand, frequently cover areas larger than Belgium itself, although these are still short distances by Russian standards. They have a different ticketing system from long-distance trains, and in major cities they often operate from a separate section of the train station. The easiest way to identify these "inter-city" services is that they tend to operate as express services--only serving the main stations in the cities they link, not stopping at any other stations. However, this term is used in Australia (Sydney for example) to describe the regional trains operating beyond the boundaries of the suburban services, even though some of these "inter-city" services stop at all stations, similar to German regional services. In this regard, the German service delineations and corresponding naming conventions are clearer and better suited for academic purposes. Sometimes high-speed rail can serve daily commuters. The Japanese Shinkansen high-speed rail system is heavily used by commuters in the Greater Tokyo Area, who commute between 100 and 200 km by Shinkansen. To meet the demand of commuters, JR sells commuter discount passes and operates 16-car bilevel E4 Series Shinkansen trains at rush hour, providing a capacity of 1,600 seats. Several lines in China, such as the Beijing-Tianjin Intercity Railway and the Shanghai-Nanjing High-Speed Railway, serve a similar role, with many more under construction or planned. The high-speed services linking Zürich, Bern and Basel in Switzerland (200 km/h (120 mph)) have brought the Central Business Districts (CBDs) of these three cities within 1 hour of each other.
This has resulted in unexpectedly high demand for new commuter trips between the three cities and a corresponding increase in suburban rail passengers accessing the high-speed services at the main city-centre stations (or Hauptbahnhof). The Regional-Express commuter service between Munich and Nuremberg in Germany runs at 200 km/h (120 mph) along a 300 km/h high-speed line. Commuter/suburban trains are usually optimized for maximum passenger volume, in most cases without sacrificing too much comfort and luggage space, though they seldom have all the amenities of long-distance trains. Cars may be single- or double-level, and aim to provide seating for all. Compared to intercity trains, they have less space, fewer amenities and limited baggage areas. Commuter rail trains are usually composed of multiple units, which are self-propelled, bidirectional, articulated passenger rail cars with driving motors on each (or every other) bogie. Depending on local circumstances and tradition they may be powered either by diesel engines located below the passenger compartment (diesel multiple units) or by electricity picked up from third rails or overhead lines (electric multiple units). Multiple units are almost invariably equipped with control cabs at both ends, which is why such units are so frequently used to provide commuter services, due to the associated short turn-around time. Locomotive-hauled services are used in some countries or locations. This is often a case of asset sweating, by using a single large combined fleet for intercity and regional services. Locomotive-hauled services are usually run in push-pull formation, that is, the train can run with the locomotive at the "front" or "rear" of the train (pushing or pulling). Trains are often equipped with a control cab at the other end of the train from the locomotive, allowing the train operator to operate the train from either end.
The motive power for locomotive-hauled commuter trains may be either electric or diesel-electric, although some countries, such as Germany and some of the former Soviet-bloc countries, also use diesel-hydraulic locomotives. In Japan and South Korea, longitudinal (sideways window-lining) seating is widely used in many commuter rail trains to increase capacity in rush hours. Carriages are usually not organized to increase seating capacity (although in some trains at least one carriage would feature more doors to facilitate easier boarding and alighting, and bench seats that can be folded up during rush hour to provide more standing room), even in the case of commutes longer than 50 km, and commuters in the Greater Tokyo Area and the Seoul metropolitan area have to stand in the train for more than an hour. Currently there are not many examples of commuter rail in Africa. Metrorail operates in the major cities of South Africa, and there are some commuter rail services in Algeria, Botswana, Kenya, Morocco, Egypt and Tunisia. In Algeria, SNTF operates commuter rail lines between the capital Algiers and its southern and eastern suburbs. They also serve to connect Algiers' main universities to each other. The Dar es Salaam commuter rail offers intracity services in Dar es Salaam, Tanzania. In Botswana, Botswana Railways' "BR Express" has a commuter train between Lobatse and Gaborone. In Japan, commuter rail systems have extensive networks and frequent service and are heavily used. In many cases, Japanese commuter rail is operationally more like a typical metro system (with very high operating frequencies, an emphasis on standing passengers, and short station spacing) than it is like commuter rail in other countries.
Japanese commuter rail also tends to be heavily interlined with subway lines, with commuter rail trains continuing into the subway network, and then out onto different commuter rail systems on the other side of the city. Many Japanese commuter systems operate several levels of express trains to reduce the travel time to distant locations, often using station bypass tracks instead of dedicated express tracks. It is notable that the larger Japanese commuter rail systems are owned and operated by for-profit private railway companies, without public subsidy. Commuter rail systems have been inaugurated in several cities in China such as Beijing, Shanghai, Zhengzhou, Wuhan, Changsha and the Pearl River Delta, with plans for large systems in the northeastern Zhejiang, Jingjinji, and Yangtze River Delta areas. The level of service varies considerably from line to line, with speeds ranging from high to near-high-speed. More developed and established lines such as the Guangshen Railway have more frequent, metro-like service. Hong Kong MTR's East Rail line, West Rail line and Tung Chung line were built to commuter rail standards but are operated as a metro system. In Indonesia, the KRL Commuterline is the largest commuter rail system in the country, serving the Jakarta metropolitan area. It connects the Jakarta city center with surrounding cities and suburbs in Banten and West Java provinces, including Depok, Bogor, Tangerang, Bekasi, Serpong, Rangkasbitung, and Maja. In July 2015, KA Commuter Jabodetabek served more than 850,000 passengers per day, which is almost triple the 2011 figures, but still less than 3.5% of all Jabodetabek commutes. Other commuter rail systems in Indonesia include the Metro Surabaya Commuter Line, Prambanan Ekspres, KRL Commuterline Yogyakarta-Solo, Kedung Sepur, Greater Bandung Commuter, and Cut Meutia.
In the Philippines, the Philippine National Railways has two commuter rail systems currently operational; the PNR Metro Commuter Line in the Greater Manila Area and the PNR Bicol Commuter in the Bicol Region. A new commuter rail line in Metro Manila, the North-South Commuter Railway, is currently under construction. Its North section is set to be partially opened by 2021. In Malaysia, there are two commuter services operated by Keretapi Tanah Melayu. They are the KTM Komuter that serves Kuala Lumpur and the surrounding Klang Valley area, and the KTM Komuter Northern Sector that serves Greater Penang, Perak, Kedah and Perlis in the northern region of Peninsular Malaysia. In Thailand, the Greater Bangkok Commuter rail and the Airport Rail Link serve the Bangkok Metropolitan Region. The SRT Red Lines, a new commuter line in Bangkok, started construction in 2009. It is currently slated to be opened by 2020. In India, commuter rail systems are present in major cities. Mumbai Suburban Railway, the oldest suburban rail system in Asia, carries more than 7.24 million commuters on a daily basis which constitutes more than half of the total daily passenger capacity of the Indian Railways itself. Kolkata Suburban Railway is the biggest Suburban Railway network in India covering 348 stations carries more than 3.5 million commuters per day. The Chennai Suburban Railway along with MRTS is another railway of comparison where more than 2.5 million people travel daily to different areas in Chennai. Other commuter railways in India include Hyderabad MMTS, Delhi Suburban Railway, Pune Suburban Railway and Lucknow-Kanpur Suburban Railway. Also, in Bangladesh, there are several commuter rail systems. In Iran, SYSTRA has done a "Tehran long term urban rail study". SYSTRA proposed 4 express lines similar to RER suburban lines in Paris. Tehran Metro is going to construct express lines. For instance, the Rahyab Behineh, a consultant for Tehran Metro, is studying Tehran Express Line 2. 
Tehran Metro currently has a commuter line, which is Line 5 between Tehran and Karaj. Isfahan has two lines to its suburbs Baharestan and Fuladshahr under construction, and a third line to Shahinshahr is planned. Major metropolitan areas in most European countries are usually served by extensive commuter/suburban rail systems. Well-known examples include BG Voz in Belgrade (Serbia), S-Bahn in Germany and German-speaking areas of Switzerland and Austria, Proastiakos in Greece, RER in France and Belgium, suburban lines in Milan (Italy), Turin metropolitan railway service in Turin (Italy), Cercanías and Rodalies (Catalonia) in Spain, CP Urban Services in Portugal, Esko in Prague and Ostrava (Czech Republic), HÉV in Budapest (Hungary) and DART in Dublin (Ireland). In Sweden, electrified commuter rail systems known as Pendeltåg are present in the cities of Stockholm and Gothenburg. The Stockholm commuter rail system, which began in 1968, is similar to the S-Bahn train systems of Munich and Frankfurt such that it may share railway tracks with inter-city trains and freight trains, but for the most part run on its own dedicated tracks, and that it is primarily used to transport passengers from nearby towns and other suburban areas into the city centre, not for transportation inside the city centre. The Gothenburg commuter rail system, which began in 1960, is similar to the Stockholm system, but does fully share tracks with long-distance trains. Other train systems that are also considered as commuter rail but not counted as pendeltåg include Roslagsbanan and Saltsjöbanan in Stockholm, Mälartåg in the Mälaren Valley, Östgötapendeln in Östergötland County, Upptåget in Uppsala County, Norrtåg in northern Norrland and Skåne Commuter Rail in Skåne County. Skåne Commuter Rail (Pågatågen) acts also as a regional rail system, as it serves cities over 100 km (62 miles) and over one hour from the principal city of Malmö. 
In Norway, the Oslo commuter rail system mostly shares tracks with longer-distance trains, but also runs on some local railways without other traffic. Oslo has the largest commuter rail system in the Nordic countries in terms of line lengths and number of stations, although some lines have travel times (over an hour from Oslo) and frequencies (once per hour) more typical of regional trains. Bergen, Stavanger and Trondheim also have commuter rail systems; these have only one or two lines each and share tracks with other trains. In Finland, the Helsinki commuter rail network runs on dedicated tracks from Helsinki Central railway station to Leppävaara and Kerava. The Ring Rail Line serves Helsinki Airport and the northern suburbs of Vantaa and is used exclusively by the commuter rail network. On 15 December 2019 Tampere got its own commuter rail service. In the United States, Canada, Costa Rica, El Salvador and Mexico, regional passenger rail services are provided by governmental or quasi-governmental agencies, with a limited number of metropolitan areas served. Eight commuter rail systems in the United States carried over ten million trips each in 2018, and many other commuter rail systems operate across the country. In South America, examples include an 899 km (559 mi) commuter system in the Buenos Aires metropolitan area, the 225 km (140 mi) long Supervia in Rio de Janeiro, the Metrotrén in Santiago, Chile, and the Valparaíso Metro in Valparaíso, Chile. Another example is Companhia Paulista de Trens Metropolitanos (CPTM) in Greater São Paulo, Brazil. CPTM has 94 stations on seven lines, numbered starting at 7 (lines 1 to 6 and line 15 belong to the São Paulo Metro), with a total length of 273 kilometres (170 mi). The five major cities in Australia have suburban railway systems in their metropolitan areas.
These networks have frequent services, with frequencies varying from every 10 to every 30 minutes on most suburban lines, and as often as every 3-5 minutes at peak on bundled underground lines in the city centres of Sydney, Brisbane, Perth and Melbourne. The networks in each state developed from mainline railways and have never been completely operationally separate from long-distance and freight traffic, unlike metro systems in some comparable countries, but they nevertheless have cohesive identities and are the backbones of their respective cities' public transport systems. The suburban networks are almost completely electrified. The main suburban rail networks in Australia serve Sydney, Melbourne, Brisbane, Perth and Adelaide. New Zealand has two frequent suburban rail services comparable to those in Australia: the Auckland rail network, operated by Transdev Auckland, and the Wellington rail network, operated by Transdev Wellington.
Parental smoking and child poverty in the UK: an analysis of national survey data. Research article (Open Access), BMC Public Health volume 15, Article number: 507 (2015). In 2011/12 approximately 2.3 million children, 17% of children in the UK, were estimated to be in relative poverty. Cigarette smoking is expensive and places an additional burden on household budgets, and is strongly associated with socioeconomic deprivation. The aim of this study was to provide an illustrative first estimate of the extent to which parental smoking exacerbates child poverty in the UK. Findings from the 2012 Households Below Average Income report and the 2012 Opinions and Lifestyle Survey were combined to estimate the number of children living in poor households containing smokers; the expenditure of typical smokers in these households on tobacco; and the numbers of children drawn into poverty if expenditure on smoking is subtracted from household income. 1.1 million children - almost half of all children in poverty - were estimated to be living in poverty with at least one parent who smokes; and a further 400,000 would be classed as being in poverty if parental tobacco expenditure were subtracted from household income. Smoking exacerbates poverty for a large proportion of children in the UK. Tobacco control interventions which effectively enable low income smokers to quit can play an important role in reducing the financial burden of child poverty. In 2011/12 2.3 million children in the UK, or 17% of all children, were living in relative poverty, defined as less than 60% of median equivalised household income. These children are more likely to live in inadequate housing and in more deprived communities, be exposed to high levels of air pollution, have a poor diet, develop depression and other long term health problems, and to be absent from school [2-4]. Growing up in poverty is thus a blight on child health and development.
In 1999 the UK government announced a target of halving the number of children living in poverty by 2010, and abolition of child poverty by 2020. However, the 2010 target of 1.7 million was missed by 600,000, and it is now unlikely that the 2020 target will be met. It is therefore important to identify avoidable factors that contribute to and exacerbate child poverty. Tobacco smoking is powerfully addictive and strongly associated with socioeconomic deprivation, and is also a major cause of ill health. Passive exposure of children to tobacco smoke increases the risk of sudden infant death, respiratory infections, asthma and middle ear disease, and children growing up among smokers are twice as likely to become addicted to smoking themselves [7,8]. Smoking is thus a significant cause of poor health of children living in poverty. However, it is also a direct contributor to financial deprivation. In January 2015 the weighted average price of 20 cigarettes in the UK was £7, and although smokers can reduce the cost of smoking by opting for budget brands or switching to handrolling tobacco, the cost of regular smoking represents a significant burden on the budgets of families living on low incomes. Existing studies have explored the financial impact of smoking in both high and low income countries. A study from New Zealand found that if second-lowest income decile households containing a smoker were to become smoker-free, on average, 14% of non-housing budgets in those households could be reallocated. A study in the USA found that smokers spend less on housing than non-smokers. Research from India has indicated that tobacco expenditure crowds out expenditure on food, education and entertainment, and that when both direct tobacco expenditure and out-of-pocket payments on tobacco-attributable medical care are taken into account, tobacco consumption impoverishes roughly 15 million people in India [13,14].
Evidence from Bangladesh has suggested that poor smokers could add over 500 calories to the diet of one or two children with their daily tobacco expenditure, and that tobacco prices are positively associated with child health outcomes [15,16]. To our knowledge, the impact of parental smoking on child poverty has not previously been estimated in the UK. This paper therefore aims to provide an illustrative first estimate of the number of children in poverty in the UK who have smoking parents, the possible cost of smoking in this context, and the number of children living above the poverty line who would fall below it if household resources were assessed after accounting for expenditure on tobacco. Our analyses combined findings from several national surveys, taking the most recent available at the time of the study, to estimate the number of children living in relative poverty by household structure; apply smoking prevalence data to estimate the number of children living in poor households containing smokers; and then estimate the expenditure of typical smokers in these households on tobacco. Finally we estimated the numbers of children drawn into poverty if expenditure on smoking is subtracted from household income. Where published survey sources did not provide data broken down into the required level of detail we used conservative assumptions to generate estimates. The study used publicly available data, and ethics approval and participant consent were therefore not required. Definition of poverty Poverty was defined as living in a household with an equivalised net household income before housing costs (BHC) below 60% of the median equivalised net household income.
Equivalised income is the sum of income after deductions of income tax, employee and self-employed national insurance contributions and council tax for all household members, rescaled to allow for household composition, reflecting the fact that larger households need more income to maintain the same standard of living. The data were equivalised using the modified OECD equivalence scale, with an adult couple with no children as the reference point. In 2011/12 the median equivalised household income per week was £427 BHC. Poverty was therefore defined as an equivalised income BHC of £256 or less. Numbers of children in poverty We estimated numbers of children in poverty by household composition using data from the Department for Work and Pensions’ 2012 Households Below Average Income (HBAI) report. This draws on data from approximately 20,000 households in the Family Resources Survey and provides estimates of the number of all children broken down by parental marital status, and the percentage of those children living in poverty. We combined these figures with data from the report on the proportion of poor households with one, two, or three or more children (calculated as 25%, 39% and 36% respectively) to estimate the number of children living in poverty by parental marital status and family size. A worked example of these calculations is provided in Additional file 1. In the HBAI report children are defined as those under 16, and those aged 16–19 who are dependent (living with parents and in full time education or in unwaged government training). Since the HBAI report does not provide data on the proportions of single parents who are male and female, we used estimates of these proportions (9% and 91% respectively) from the Office for National Statistics’ (ONS) 2012 Families and Households survey to calculate the number of children living in poor households with a single mother and a single father.
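As an illustration of this equivalisation step, the following sketch computes cash poverty thresholds for two household types. The scale values used (first adult 0.67, additional adults and children aged 14 or over 0.33, children under 14 0.20, so that a childless couple equals 1.0) and the exact household compositions are assumptions for illustration, not figures taken from the paper itself.

```python
# Sketch of HBAI-style equivalisation: the cash poverty line for a given
# household is 60% of the median equivalised income, scaled up by the sum
# of the household members' equivalence weights (childless couple = 1.0).

MEDIAN_BHC = 427  # 2011/12 median equivalised weekly income BHC, GBP
POVERTY_LINE = round(0.6 * MEDIAN_BHC)  # equivalised poverty line: 256

# Rescaled modified OECD weights (assumed values, for illustration).
SCALE = {"first_adult": 0.67, "other_adult": 0.33,
         "child_14_plus": 0.33, "child_under_14": 0.20}

def threshold(*members):
    """Cash poverty line (GBP/week) for a given household composition."""
    return round(POVERTY_LINE * sum(SCALE[m] for m in members))

single_parent_one_child = threshold("first_adult", "child_under_14")
couple_two_children = threshold("first_adult", "other_adult",
                                "child_under_14", "child_14_plus")
print(single_parent_one_child, couple_two_children)  # 223 392
```

With these assumed weights the thresholds come out at £223 for a single parent with one child under 14 and £392 for a couple with two children (taking one under 14 and one 14 or over), consistent with the figures quoted later in the Discussion.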
Smoking prevalence in poor households To estimate the proportion of children in poverty with one or more parents who smoke, we first estimated parental smoking prevalence in these households using data from the 2012 Opinions and Lifestyle Survey. Since the survey reports do not present smoking prevalence by poverty status, and no other relevant survey data were available, we made the conservative assumption that smoking prevalence in households in poverty would be the same as that in households in the routine and manual occupational socio-economic group. The prevalence of smoking among men and women in routine and manual occupations in Britain in 2012 was 33% and 32% respectively. In fact these figures are highly likely to underestimate smoking prevalence among the poor, as among unemployed people the prevalence is substantially higher (39% in 2012). Since the 2012 Opinions and Lifestyle survey indicated that smoking rates vary by marital status as well as socioeconomic group, we weighted the estimates of smoking prevalence in routine and manual groups by marital status. The survey estimated that while smoking prevalence in the general adult population was 20%, in adults who were single, married or cohabiting the rates were 27%, 14% and 33% respectively (these figures were not available by sex or socio-economic group). We therefore weighted smoking prevalence in men and women in relation to these figures to estimate smoking prevalence in low socioeconomic status adults by sex and marital status (see Table 1). Number of children in poverty by smoking parental marital status and number of children in household These weighted smoking rates were then applied to estimate the number of children in poverty with smoking parents. For single parent households, we simply applied the smoking rates estimated for single men and women to the number of children in these households.
This gave us an estimate of the number of children in poverty living with a smoking single mother or father. For two parent households, we needed to estimate how many contained one smoker and how many contained two. We therefore combined the prevalence data with estimates from an existing study of smoke-free homes and secondhand smoke exposure in children in England by Jarvis et al. This study included a nationally representative sample of 13,365 children, including 695 in 2007, on which our estimates were based. While a more recent estimate based on a larger sample from the whole of the UK would have been preferable, this estimate was the only suitable one available to us, and is likely to be reasonably representative of the whole of the UK. From this we calculated that among parents who smoked in two parent households, 65% were the only smokers, and 35% lived with an adult who also smoked. A worked example of our calculations of the number of children with smoking parents is provided in Additional file 1. The cost of smoking in poor households We estimated the average weekly cost of smoking to poor households by combining data on the number of cigarettes smoked per day by male and female routine and manual workers with typical costs for manufactured cigarettes and hand rolling tobacco (HRT), both licit and illicit. Opinions and Lifestyle Survey data indicate that on average, female and male routine and manual workers smoke 12 and 13 cigarettes per day respectively. We estimated the number of packets of 20 cigarettes purchased by low-income manufactured cigarette smokers per week by multiplying the number of cigarettes smoked per day by seven, and dividing by 20; and the number of packets of HRT purchased by low-income HRT smokers per week in the same way, but with the assumption that 50 grams of HRT typically makes approximately 100 cigarettes.
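The packet arithmetic just described can be sketched as follows (the function and variable names are illustrative, not from the paper):

```python
# Convert daily cigarette consumption into weekly quantities purchased:
# multiply by 7 for weekly cigarettes, divide by 20 for packs of
# manufactured cigarettes, and divide by 100 for 50 g pouches of HRT
# (on the stated assumption that 50 g of HRT makes ~100 cigarettes).

def weekly_quantities(cigs_per_day):
    weekly_cigs = cigs_per_day * 7
    packs_of_20 = weekly_cigs / 20
    hrt_pouches = weekly_cigs / 100
    return weekly_cigs, packs_of_20, hrt_pouches

print(weekly_quantities(12))  # women: (84, 4.2, 0.84)
print(weekly_quantities(13))  # men:   (91, 4.55, 0.91)
```

The two example inputs (12 and 13 cigarettes per day) reproduce the per-week figures reported in the Results: 84 cigarettes, 4.2 packs or 0.84 pouches for women, and 91 cigarettes, 4.55 packs or 0.91 pouches for men.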
To estimate the average weekly spend on manufactured cigarettes and HRT, we combined our estimated weekly quantities purchased with 2012 Tobacco Manufacturers’ Association (TMA) data. This indicates that the average cost of a licit packet of 20 cigarettes was £7.72, and of 50 g HRT £16.11, and that illicit tobacco typically sold for half the price of licit products [22-25]. The proportion of each type of cigarette smoked, by sex and age, was obtained from the Opinions and Lifestyle Survey (OPN). To make calculations more straightforward, smokers who smoked both packeted cigarettes and HRT were assigned to the category they mostly smoked. In the UK it is estimated that 73% of female and 59% of male smokers smoke mainly manufactured cigarettes (66% of women smoke only packeted cigarettes, and 6% also smoke HRT but mainly packeted; 52% of men smoke only packeted, and 7% also smoke HRT but mainly packeted). HMRC estimates that 7% of packeted cigarettes smoked are illicit, as well as 35% of HRT. Based on these figures, we estimated the proportion of smokers purchasing each type of tobacco (licit packeted, licit HRT, illicit packeted, illicit HRT), and hence the overall average spend on tobacco products. It should be noted that our estimate is likely to be an overestimate if cheaper licit products, illicit and hand-rolled tobacco are disproportionately consumed by those in poverty. Effect on poverty rates of subtracting tobacco expenditure from household income We estimated the number of children effectively drawn into poverty if parental expenditure on tobacco is subtracted from household income. We calculated how many children are living in a household where the income is above 60% of the median income, but by less than the average spend on tobacco. The HBAI report provides data on households living between 60% and 70% of the median income; i.e. those living just above the poverty line.
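Combining the weekly quantities with the TMA prices gives the per-product weekly spend. The sketch below (names are illustrative) applies the stated assumption that illicit tobacco sells at half the licit price:

```python
# Weekly tobacco spend per smoker: weekly quantity purchased multiplied by
# the 2012 TMA price for that product (illicit assumed at half licit price).

LICIT_PRICES = {"pack": 7.72, "hrt": 16.11}  # GBP per pack of 20 / 50 g pouch

def weekly_spend(cigs_per_day, product, illicit=False):
    # A 50 g HRT pouch makes ~100 cigarettes; a pack holds 20.
    units = cigs_per_day * 7 / (100 if product == "hrt" else 20)
    price = LICIT_PRICES[product] / (2 if illicit else 1)
    return units * price

print(round(weekly_spend(12, "pack")))    # women, licit packs: 32
print(round(weekly_spend(13, "pack")))    # men, licit packs: 35
print(round(weekly_spend(12, "hrt"), 2))  # women, licit HRT: 13.53
```

These example outputs match the per-smoker figures reported in the Results and Discussion: roughly £32 and £35 per week for female and male smokers of licit manufactured cigarettes, and just over £13 per week for licit HRT.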
We first calculated the number of children living in households between 60% and 70% of the median income. We then applied the same method used to calculate the number of children in poverty with smoking parents described above, to estimate the number of children in households between 60% and 70% of the median income with one or two smoking parents. We calculated the low income thresholds for these income groups for different household structures, which showed that the income difference between these income groups was similar to the average weekly expenditure on tobacco for two smokers calculated in the previous step. We therefore assumed that all children in two-smoker households with a household income between 60% and 70% of the median income would be drawn into effective poverty. Because the spread of the population living between 60% and 70% of the equivalised median is fairly even, we also assumed that half of all children between these thresholds with one smoking parent in two-parent households, or one smoking parent in a one-parent household, would be drawn into effective poverty. Number of children living in low income households Estimated numbers of children in poverty, according to family size and parental marital status, are reported in Table 2, demonstrating that of 2.3 million children living in a poor household in 2011/12, 1.2 million lived with adults who were married or civil-partnered. Number of children in poverty in households in which one or more adults smoke Table 3 shows estimated numbers of children in poor households in which one or two parents smoke, by marital status of the parents and number of children in the household. In total 1.1 million children - almost half of all children in poverty - were estimated to be living in poverty with at least one parent who smokes.
Expenditure on tobacco We estimated that a typical woman smoker in relative poverty smokes 84 cigarettes per week, equivalent to 4.2 packs of 20 cigarettes or 0.84 packs of 50 g HRT. For men the respective figures were 91 cigarettes, equivalent to 4.55 packs of 20 or 0.91 packs of 50 g HRT. The estimated costs per week of smoking different types of tobacco products to parents in poor households, and the proportion of poor smokers smoking each type of product, are shown in Table 4. Based on our estimates, 68% of female and 55% of male smokers smoke mainly licit packeted cigarettes, and spend an average of £32 and £35 on cigarettes per week respectively. Number of children ‘drawn into poverty’ by parental smoking expenditure According to our estimates, there are nearly 4 million children living in households below 70% of the median income, and 1.6 million children live in households where the income is between 60% and 70% of the median income (Table 5). Estimates reported in Table 6 suggest that three quarters of a million children living in households with an income between 60% and 70% of the median income are living with at least one smoker. Given the differences in income between the 60% and 70% thresholds (shown in Additional file 1), we have estimated that over 432,000 children may be viewed as having been drawn into poverty by parental smoking. Our study suggests that approximately 1.1 million children, or nearly half of all children in relative poverty in 2012, had at least one smoking parent. We also estimate that around 432,000 children would be classed as being in poverty if parental tobacco expenditure were subtracted from household income. Thus there may be over 1.5 million children living in circumstances of severe financial deprivation whose plight is exacerbated by parental smoking.
Our study thus identifies a key opportunity and priority for government action to reduce the number of children experiencing the adverse effects of poverty through measures that encourage parents and carers, particularly those in low income groups, to quit smoking. The failure to meet the government target on child poverty means that measures to alleviate the effects of poverty are more important than ever. In this study we have addressed a contributor to child poverty that has not, to our knowledge, previously been quantified in this context and falls outside standard child poverty statistics. Effective tobacco control interventions which enable low income smokers to quit can thus potentially play an important role in reducing the burden of child poverty, and may improve child health and wellbeing by more than just the removal of direct effects of tobacco smoke. Recent reviews suggest that price increases are the intervention with the greatest potential for reducing socioeconomic disparities in smoking [26,27]. However, price rises must be coupled with accessible individual-level smoking cessation support – which can be funded, at least to some extent, from tobacco tax revenues – to help counter the effect of price increases on low-income smokers who continue to smoke: they will spend a larger proportion of their income on smoking than higher-income groups. Our estimates are inevitably approximate as constraints in data availability have required us to make a number of assumptions; however we ensured that such assumptions were conservative so our findings are likely to underestimate the true figures. In addition, our analyses are subject to aggregation error. The estimates of smoking prevalence applied in this study were based on self-report, and may therefore underestimate true prevalence.
Since smoking rates for adults in poverty are not available from national survey reports, including the census, we have made the assumption that smoking rates in this group will be at least as high as those in routine and manual workers, a group for which suitable data are available. It is likely, however, that smoking rates are higher in the most deprived, though there is evidence of a good correlation between these two groups, with the consequence that this assumption is likely to underestimate true smoking prevalence in poor adults and hence the proportion of children in poverty with smoking parents. Estimates of smoking prevalence were not available by socio-economic group and marital status, so we have had to weight smoking prevalence in the routine and manual group using estimates of smoking prevalence by marital status from the general population. Our estimate of the cost of smoking licit tobacco is based on the recommended retail price (RRP) of a typical pack of 20 cigarettes in the Most Popular Price Category (MPPC), but in practice it is likely that many poor smokers smoke lower-cost manufactured cigarettes, resulting in some overestimation of cost in our study. However we have also assumed that the proportion smoking illicit tobacco, priced at half that of licit product, is the same in low income groups as in the general population, which is almost certainly an underestimate. Detailed data on the income distribution in households were not available to us, and we have therefore used the number of children between 60% and 70% of the median income, and the differences between these income thresholds, to estimate the number of children drawn into poverty by parental tobacco expenditure.
Our estimates suggest that low-income smokers who smoke an average of 12–13 cigarettes per day (the national average in routine and manual workers) will spend over £13 per week (£700 per year) if they smoke licit HRT, and around £32-£35 (£1600-1800 per year) if they smoke manufactured cigarettes, although it seems likely that people in poverty buy more HRT and illicit tobacco than other smokers. When we consider that the poverty threshold level of income (60% of median income BHC) for a single parent household with one child under 14 is £223, and for a two parent household with two children is £392, it is clear that this spend represents a substantial proportion of income in these households - at least 4% for a cigarette smoker in a two-parent, two-child household even if the smoker smokes illicit tobacco - especially if both parents are smokers. Furthermore, many households below the poverty line will be earning incomes well below these thresholds, with poverty exacerbated by expenditure on smoking. Despite inaccuracies in our estimates, however, our findings indicate that implementing measures that reduce the prevalence of smoking among low socioeconomic status groups would not only improve health but also relieve poverty. The use of tax to reduce the affordability of tobacco products, particularly lower-cost cigarettes and hand-rolling tobacco, along with measures to reduce the availability of illicit supplies, are key, if counterintuitive, policies, since low socioeconomic groups are highly responsive to price increases [31,32]. Given public sensitivity over the use of welfare benefits by the poor and long-standing caricatures of the deserving and undeserving poor, care is required to avoid moralising and imposing population-level utility values on a group living with very different stressors and challenges to the majority of the population.
Nonetheless, it is clear from our estimates that smoking places a significant additional financial burden on large numbers of children living in low-income households, and that governments have a duty to ensure that tobacco control policies are fully implemented to minimise this effect. Both the ethical and practical challenges associated with conducting this type of study serve to underline the importance of further detailed research. The use (and in some cases collection) of more detailed data to maximise the accuracy of estimates, as well as the consideration of other types of poverty such as persistent poverty, subjective poverty and material deprivation, will enable us to more fully understand the substantial burden of smoking on poor households.
References
1. Department for Work and Pensions. Households Below Average Income: An analysis of the income distribution 1994–1995 - 2011–2012. London: Department for Work and Pensions; 2013. Available from https://www.gov.uk/government/statistics/households-below-average-income-hbai-199495-to-201112. Accessed 30th March 2015.
2. Joseph Rowntree Foundation. The cost of child poverty for individuals and society. York, UK: Joseph Rowntree Foundation; 2008. Available from http://www.jrf.org.uk/publications/costs-child-poverty-individuals-and-society-literature-review. Accessed 3rd September 2014.
3. Geddes I, Allen J, Allen M, Morrissey L. The Marmot Review: Implications for Spatial Planning. 2011. Available from http://www.apho.org.uk/resource/item.aspx?RID=106106. Accessed 30th March 2015.
4. Department for Education. Pupil absence in schools in England, including pupil characteristics: academic year 2010 to 2011. London: Department for Education; 2012. Available from https://www.gov.uk/government/statistics/pupil-absence-in-schools-in-england-including-pupil-characteristics-academic-year-2010-to-2011. Accessed 30th March 2015.
5. Department for Work and Pensions, Department for Education. Child poverty in the UK: The report on the 2010 target. London: Department for Work and Pensions, Department for Education; 2012. Available from https://www.gov.uk/government/publications/child-poverty-in-the-uk-the-report-on-the-2010-target. Accessed 30th March 2015.
6. Marsh A, Mackay S. Poor Smokers - PSI research report. London: Policy Studies Institute; 1994. Available from http://www.psi.org.uk/site/publication_detail/1287. Accessed 30th March 2015.
7. RCP. Going Smokefree: The medical case for clean air in the home, at work and in public places. London: Royal College of Physicians; 2005. Available from https://www.rcplondon.ac.uk/publications/going-smoke-free-0. Accessed 30th March 2015.
8. Hill K, Hawkins J, Catalano R, Abbott R, Guo J. Family influences on the risk of daily smoking initiation. J Adolesc Health. 2005;37(3):202–10.
9. Mackenbach J. What would happen to health inequalities if smoking were eliminated? BMJ. 2011;342:d3460.
10. European Commission Directorate-General Taxation and Customs Union. Excise Duty Tables Part III - Manufactured Tobacco (January 2015). European Commission; 2014. Available from http://ec.europa.eu/taxation_customs/taxation/excise_duties/tobacco_products/cigarettes/index_en.htm. Accessed 30th March 2015.
11. Thomson GW, Wilson NA, O’Dea D, Read PJ, Howden-Chapman P. Tobacco spending and children in low income households. Tobac Contr. 2002;11(4):372–5.
12. Busch S, Jofre-Bonet M, Falbe T, Sindelar JL. Burning a hole in the budget: Tobacco spending and its crowd-out of other goods. Appl Health Econ Pol. 2004;3(4):263–72.
13. John R. Crowding out effect of tobacco expenditure and its implications on household resource allocation in India. Soc Sci Med. 2008;66(6):1356–67.
14. Rijo J, Sung H-Y, Max W, Ross H. Counting 15 million more poor in India, thanks to tobacco. Tobac Contr. 2011;20(5):349–52.
15. Nonnemaker J, Sur M. Tobacco expenditures and child health and nutritional outcomes in rural Bangladesh. Soc Sci Med. 2007;65(12):2517–26.
16. Efroymson D, Saifuddin A, Townsend J, Alam SM, Dey AR, Saha R, et al. Hungry for tobacco: an analysis of the economic impact of tobacco consumption on the poor in Bangladesh. Tobac Contr. 2001;10(3):212–7.
17. OECD. What are equivalence scales? Available from www.oecd.org/eco/growth/OECD-Note-EquivalenceScales.pdf. Accessed 30th March 2015.
18. ONS. Families and Households Survey, 2012. Newport, UK: Office for National Statistics; 2012. Available from www.ons.gov.uk/ons/dcp171778_284823.pdf. Accessed 30th March 2015.
19. ONS. Opinions and Lifestyle Survey, Smoking Habits Amongst Adults 2012. Office for National Statistics; 2013. Available from http://www.ons.gov.uk/ons/rel/ghs/opinions-and-lifestyle-survey/smoking-habits-amongst-adults--2012/index.html. Accessed 30th March 2015.
20. Jarvis M, Mindell J, Gilmore A, Feyerabend C, West R. Smoke-free homes in England: prevalence, trends and validation by cotinine in children. Tobac Contr. 2009;18(6):491–5.
21. Darrall K, Figgins J. Roll-your-own smoke yields: theoretical and practical aspects. Tobac Contr. 1998;7:168–75.
22. Cambridgeshire County Council. Illegal tobacco. Available at http://www.cambridgeshire.gov.uk/info/20110/illegal_tobacco. Accessed 3rd September 2014.
23. Power G. Illicit Tobacco in South East London: A Survey of Smokers. London: South East London Illicit Tobacco Cluster; 2013. Available from http://www.lambeth.gov.uk/sites/default/files/ssh-illicit-tobacco-survey-report.pdf. Accessed 30th March 2015.
24. HMRC & UKBA. Tackling Tobacco Smuggling - building on our success. HM Revenue & Customs and UK Border Agency; 2001. Available from https://www.gov.uk/government/publications/tackling-tobacco-smuggling-buildingon-our-success. Accessed 30th March 2015.
25. TMA. Tobacco Taxation Briefing. The Tobacco Manufacturers' Association; 2012. Available from http://www.the-tma.org.uk/2012/11/new-tma-tobacco-tax-briefing/. Accessed 27th April 2015.
26. Hill S, Amos A, Clifford D, Platt S. Impact of tobacco control interventions on socioeconomic inequalities in smoking: review of the evidence. Tobac Contr. 2014;23(e2):e89–97.
27. Hiscock R, Bauld L, Amos A, Fidler JA, Munafò M. Socioeconomic status and smoking: a review. Ann N Y Acad Sci. 2012;1248:107–23.
28. Garrett B, Dube S, Babb S, McAfee T. Addressing the social determinants of health to reduce tobacco-related disparities. Nicotine and Tobacco Res. 2014. Epub ahead of print.
29. Holmes J, Meng Y, Meier P, Brennan A, Angus C, Campbell-Burton A, et al. Effects of minimum unit pricing for alcohol on different income and socioeconomic groups: a modelling study. Lancet. 2014;383(9929):1655–64.
30. Freedom of Information request 2013–3730. UK Government; 2013. Available from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/236465/foi-3730-2013.pdf. Accessed 30th March 2015.
31. International Agency for Research on Cancer. Chapter 7: Tax, price and tobacco use among the poor. In: Effectiveness of tax and price policies for tobacco control. Lyon, France: IARC; 2011. Available from http://www.iarc.fr/en/publications/list/handbooks/. Accessed 30th March 2015.
32. Thomas S, Fayter D, Misso K, Ogilvie D, Pettigrew M, Sowden A, et al. Population tobacco control interventions and their effects on social inequalities in smoking: systematic review. Tobac Contr. 2008;17:230–7.
Competing interests: The authors declare that they have no competing interests.
Authors' contributions: TL and JB designed the study. CB and TL extracted the relevant data and conducted the analyses. CB wrote the original research report for the study and TL prepared the manuscript for publication. JH provided support in the interpretation of the data. JB and JH critically revised drafts of the manuscript. All the authors approved the final manuscript.
Cite this article: Belvin, C., Britton, J., Holmes, J. et al. Parental smoking and child poverty in the UK: an analysis of national survey data. BMC Public Health 15, 507 (2015).
https://0-doi-org.brum.beds.ac.uk/10.1186/s12889-015-1797-z - Smoking prevalence - Child poverty
<urn:uuid:c56ace5e-6dd0-4501-a0cb-f89254b57a9c>
CC-MAIN-2021-21
https://0-bmcpublichealth-biomedcentral-com.brum.beds.ac.uk/articles/10.1186/s12889-015-1797-z
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989690.55/warc/CC-MAIN-20210516044552-20210516074552-00455.warc.gz
en
0.938022
6,426
3.28125
3
Source: Donald M. Clark - Cleve Bennewitz Manuscripts/Unpublished Notes

Minneapolis, the City of Lakes as it is often called, is an ideal place for winter sports. With its cold weather and large Scandinavian population, who are inherently interested in winter sports, it was only natural that the city would become interested in ice hockey. The city is blessed with numerous ponds, marshes, and lakes such as Calhoun, Nokomis, Lake of the Isles, Cedar, Hiawatha, and Diamond. The abundance of these, combined with an average January temperature of 11 above, the coldest of any large city in the nation, afforded the local citizens ample opportunity to skate and play hockey. In the winter the unorganized game of shinny was often played by youths and adults on the numerous lakes and ponds dotting the city. Sticks made from tree branches, blocks of wood or tin cans for pucks, and chunks of wood or large rocks for the goals were part of the game of shinny. Aside from "shinny on your own side," rules were few and simple. Ice polo, a game popular before the turn of the century in Minnesota, Upper Michigan, and New England, was played in the 1880's in Minneapolis and St. Paul. Minneapolis teams such as the Polo Club, Acorns, and Lelands met teams from St. Paul, which had a more extensive program than Minneapolis. In late January of 1888 the Leland team competed in an ice polo tournament held in St. Paul as part of the famous St. Paul Winter Carnival. The January 25, 1888 edition of the St. Paul Globe listed the Minneapolis Leland lineup as follows: John McClelan, 1st rush; R.G. Moore, 2nd rush; J.W. Urquhard, goal; Frances Marsh, center; Walter Hefflefinger, cover goal; A.S. Hefflefinger, cover point. Walter "Pudge" Hefflefinger was later chosen as an All American guard at Yale in 1889, 1890, and 1891. On most all-time All American teams he has been chosen for one of the guard positions, and he has been adjudged one of the greatest ever to have played the game of football.
The earliest evidence of a game of ice hockey being played in Minneapolis dates to early January of 1895, when two local teams met in a series of games at an outdoor rink located at 11th Street and 4th Avenue South. These contests were among the very first to have been played in Minnesota and the United States. Later in January and early February a Minneapolis team met the newly formed University of Minnesota team in a series of games. The first University of Minnesota hockey team, unsanctioned by the college, was organized in January of 1895 by Dr. H.A. Parkyn, who was familiar with the game, having played in Toronto. Parkyn, who played football at the University of Minnesota, coached the team in its preparation for its game against the highly touted Winnipeg team. Several of the Gopher players, such as Walker, Russell, and Head, were experienced ice polo players. The St. Paul Pioneer Press edition of February 19, 1895 describes the international meeting as follows: The first international hockey game between Winnipeg and the University of Minnesota was played yesterday, and won by the visitors 11-3. The day was perfect and 300 spectators occupied the grandstand, coeds of the University being well represented. Features of the game were the team play of the Canadians and the individual play of Parkyn, Walker, and Head for the University. Hockey promises to become as popular a sport at the University as football, baseball, and rowing. The game against Winnipeg was played at Athletic Park in downtown Minneapolis, located at Sixth Street and First Avenue North, behind the famous West Hotel, the current site of the renowned Butler Square Building. The park was the home of the professional Minneapolis Baseball Club until they moved to the newly constructed Nicollet Park at Nicollet Avenue and Lake Street South on June 19, 1896.
In 1895, as there was no rail connection between eastern and western Canada, the Winnipeg Victoria team found it necessary to travel through Minneapolis on their trip to Ontario and Quebec, where they won two games each from Montreal and Ottawa and lost one contest to Quebec. Traveling through Minneapolis afforded the University of Minnesota the opportunity to schedule the Manitobans. The eastern Canadians were surprised by the abilities of the Winnipeggers, who had been playing the sport for only a few seasons. Following the University of Minnesota game against Winnipeg in 1895, no effort was made by the University Athletic Board until November of 1900, when a committee composed of George Northrup, Paul Joslyn, and A.B. Gibbons was appointed to look into the problem of playing the sport at the University. Another appointed committee decided not to flood Northrup Field and instead to play at Lake Como in St. Paul. No scheduled games were played during the season of 1900-01, and it was not until 1904 that the University of Minnesota played any formal games. Only two contests were played that season, both resulting in wins, over Minneapolis Central High School 4-0 and St. Paul Virginias 4-3. Team members were John S. Abbott, Frank Teasdale, Gordon Wood, Fred Elston, Frank Cutter, R.S. Blitz, W.A. Rose, Arthur Toplin, and Captain Thayer Bros. The short season of 1904 proved to be the last of hockey at the University on a formal basis until its revival in the early 1920's. In 1910 efforts were made to interest the Universities of Chicago and Wisconsin in ice hockey, so as to furnish Big Ten intercollegiate competition. The movement met with failure. On January 24-25, 1896 the Minneapolis Hockey Club entered a four team international tournament held at the Aurora Rink in St. Paul. They defeated the St. Paul Two team 4-1 in the first round and lost to Winnipeg 7-3 in the finals. The event, part of the St. Paul Winter Carnival, was witnessed by large crowds.
This may have been the first tournament in the United States involving a team from Canada. Canadian teams had played a series of games in this country during 1895, but had not been involved in any tournaments. During the following few seasons the sport, in part due to warm weather, languished in both the Mill City and St. Paul. But by 1900 Minneapolis teams were engaging in numerous contests with St. Paul teams. The St. Paul Globe edition of January 12, 1900 describes a game between the St. Paul AC and the Minneapolis HC as follows: St. Paul AC defeated Minneapolis HC 4-2 at the Virginia Rink in St. Paul. Newsome, Barron, Patterson, and B. MacDonald played well for St. Paul, while Labett, Raymond, Taylor, and LaLand did likewise for Minneapolis. Umpires: Manley, Seller.

From the Minneapolis Journal, December 20, 1900: The Minneapolis Hockey Club was organized at the Board of Trade. Officers were as follows: President, Willis Walker; Honorary President, A. A. Ames; Vice President, Matt Madden; Secretary-Treasurer, G. K. Labatte; Managing Committee, F. B. Champman, M. A. Miller, G. McBride, R. W. McLeod, B. J. Stovel. The old Star Roller Rink, 4th Avenue South and 11th Street, will be fitted for ice hockey. Membership fees set at $1.00. Central High School will use the rink three afternoons a week. North High School wishes to use the rink also. First game of the season will be against St. Paul, at St. Paul, New Year's Day.

Late in the winter of 1901 a short-lived four team Twin City Senior League was formed of the following teams: St. Paul Hockey Club, Minneapolis Hockey Club, Minneapolis Central High School, and St. Paul Mechanic Arts High School. The following season of 1901-1902 saw the formation of a six-team league as follows: St. Paul Hockey Club, St. Paul Mascots, St. Paul Mechanic Arts High School, St. Paul Central High School, St. Paul Virginias, and the Minneapolis Hockey Club. Robert H.
Dunbar, the famous curler, placed a cup in competition to be awarded to the newly created Twin City league champion. The Virginias won the Dunbar Cup the first season. Dominated by St. Paul teams, the league operated continuously through the 1909-1910 season. Among the members of the 1901-1902 Minneapolis team were: P.K. Labatte, J. Best, A.M. McIntosh, C. Harfield, A. Raymond, S. Chapman, J. Loundon, C. Fairchild, and T. Adams. The visit of the world famous Portage Lake, Michigan team to the Twin Cities in late January of 1902 was a noteworthy event. On January 23rd Portage Lake defeated the Minneapolis Hockey Club 8-4 in a game played at the indoor Star Rink. The famous Dr. John Gibson, a native of Berlin, Ontario, was the organizing force behind the development of the Portage Lake seven. For the most part the players on the Portage Lake team were Canadian imports. The day following the Minneapolis game Portage Lake defeated the St. Paul Virginias on their outdoor rink, as Joe Jones starred in goal for the losers. The Portage Lake players claimed Jones was the best goaltender they had faced to date. Later Jones played for the American Soo in the International Hockey League, the world's first professional circuit. During the next several years many different Minneapolis teams joined the Twin City League, among them being AAA, Lake Shores, Harriets, Wanderers, and Eagles. The league ceased to operate after the 1909-1910 season, as the St. Paul Chinooks and Minneapolis Wanderers withdrew from the organization. Although the Minneapolis teams provided stiff competition for the St. Paul teams during the nine year history of the league, they failed to capture the league championship during the circuit's existence. Among the leading Minneapolis players during the first decade of the 1900's were Carl Struck, Cleve Bennewitz, W. Lalond, Ray Hodge, Kimball Hodge, Jack Bradford, Cornell Lagerstrom, P.K. Labatte, A. Raymond, C.
Fairchild, and Bobby Marshall. Cleve Bennewitz, who played youth hockey as a youngster in Argyle in northwestern Minnesota and later moved to Minneapolis, informed the writer that in addition to Marshall there were two other notable black hockey players in Minneapolis. He cited Marshall and goaltender Bill Butler as leading performers. Marshall was a well known Minneapolis athlete who was chosen All American in football as an end for the University of Minnesota in 1905. Starting in 1900 Minneapolis high schools played one another and the St. Paul schools, as well as the club teams in both cities. By the end of the first decade of the 1900's East, West, North, and Central all iced varsity teams, as did Mechanic Arts, Central, and St. Paul Academy in St. Paul. In 1905 an outdoor ice hockey rink was constructed by the Minneapolis Amateur Hockey Association at Lake Street and Girard Avenue South, which had a large warming house that afforded ample room for fans to observe every play. The long bleachers extending along the east side accommodated the overflow of crowds from the warming house. The setup handled more fans than any other facility in St. Paul or Minneapolis. The well outfitted rink lasted only a few seasons, as the Minneapolis School Board found it necessary to purchase the property. About 1905, a few years after natural ice was installed at the Star Roller Rink, another roller rink located at Washington and Broadway Avenue North was outfitted with natural ice. Tar paper was placed on the floor and it was flooded to form a 150' x 50' ice surface. Electric lights were also installed. Following the breakup of the Twin City League after the 1910 season a Minneapolis Senior League was formed, and many of their games were played at a rink located on the ice of Lake Harriet. Members of the league were the ABC's, Simokins, North Commons, and Lake Harriets. During this period the Lake Harriets were consistently among the best teams in the city.
On occasion they would play St. Paul and Duluth teams, and on one weekend played Hallock from northwestern Minnesota. In 1914 each team in the Minneapolis High School Hockey conference, East, West, Central, and North, maintained its own outdoor rink and occasionally played a game on the large ice surface at the Hippodrome at the State Fairgrounds in St. Paul. In 1917, Nick Kahler, who had been captain of the championship St. Paul AC team of 1916, organized a team to challenge St. Paul. He imported a few Canadian players, among them Lyle Wright, who later ran the Minneapolis Arena and the Minneapolis Millers professional hockey team. The first game, played at the large Hippodrome, resulted in a 9-2 rout for the St. Paul seven, while the second game, played at the smaller Casino in Minneapolis, ended in a 9-0 loss for Minneapolis. The Kahler team, the best that Minneapolis had been able to ice to date, proved to be little opposition for the St. Paul AC. At a later date, in 1921, Kahler again organized a team, with Winnipeg imports Elliot, Chambers, and Dunlop, and challenged the St. Paul six to a series of games. Using spares and "kids" in their lineup, the AC defeated Minneapolis 4-1 and 2-1. Due to lack of proper playing facilities the Kahler team did not enter the USAHA and played an independent schedule. Following WWI, local Minneapolis businesses and companies became interested in sponsoring teams and leagues in the various sports, including ice hockey. Joseph Shipanovich, in his book titled Minneapolis, states: "During the 1920's business and corporations began the tradition of sponsoring athletic teams composed of their employees as part of a general social movement known as industrial paternalism." At about this time the Minneapolis Recreation Department of the Board of Park Commissioners became interested in forming an enlarged hockey program.
An article appearing in the 1921 Spalding Ice Hockey Guide reveals the hockey operation of the Minneapolis Recreation Department:

MINNEAPOLIS (MINN.) MUNICIPAL HOCKEY
By W.W. Fox, Director of Municipal Athletics

Under supervision of the Recreation Department of the Board of Park Commissioners the Municipal Hockey League was reorganized during the highly successful season of 1919-1920. In accordance with the recreational program, the Board of Park Commissioners established and maintained twenty-three skating rinks, equipped with warming houses. They also provided hockey rinks at Logan Park, North Commons, Lake of the Isles, and Powderhorn Park. The skating season was unusually long, affording unlimited activity in winter sports, including "hikes" through the park system, juvenile and adult skating races, skiing and tobogganing, ice carnivals, and the most successful hockey competition ever witnessed in Minneapolis. The hockey season began December 28, 1919, with twenty teams representing social and community center interests from various parts of the city. The association was divided into Senior and Junior Divisions 1 and 2, with little, if any, difference in playing strength. In the Senior and Junior No. 1 Divisions sixteen teams competed, while Junior No. 2 embraced four teams. The handsome "Struck" perpetual challenge cups were the trophy objectives in the Senior and Junior No. 1 Divisions, and Ward C. Burton, another hockey enthusiast, donated ten gold medals to the winning team in the Junior No. 2 Division. In this division the Deephavens, Raccoons, and Ascensions supplied spirited competition, while the Heatherdale A.C., owing to illness of players, was unable to win, yet finished the schedule with enthusiasm. The Raccoons won the championship by defeating the Ascension team in the final game of the schedule; Deephavens, Ascensions, and Heatherdales finished in the order named. In the Junior No. 1 Division the Logan Parks, Stewart A.C.
and Powderhorn Parks competed with vigor against the Lagoons, Camden Juniors, and Maple Hills. The Logan Parks won the championship from Stewart A.C. in the final game, which required two extra ten-minute periods to determine the winner. The elimination contest for the championship of the junior divisions, between the Raccoons and Logan Parks teams, created keen rivalry, as both teams represented the unified community center interests at Logan Park. The Logan Parks finally caged the puck on a well executed team play and won the dual championship. Play in the Senior Division produced amazingly keen competition. Vertex, tri-champions of the association, Camden Seniors, Midway Merchants, and East Side A.C. formed a quartette of veteran teams, and the struggle for the "Struck" trophy was filled with thrilling competition. The Lake Hennepin Merchants and North Commons headed the second division, and A.B.C., owing to a belated start, failed to get in the running. Very little difference between the first four teams characterized the season's play, with Vertex leading most of the season until temporarily displaced by East Side A.C. The schedule closed with Camden Seniors and Vertex tied for the major prize. The deciding game was played at Logan Park in sub-zero temperature; it went into two extra periods and finished in a tie. The following Sunday these teams met again, and Vertex's team work and aggressive system of play proved a decisive factor in winning the senior championship. Interest in municipal hockey was at this time centered in the city championship, between the aggressive Logan Parks, dual champions of the Junior No. 1 and No. 2 Divisions, and the veteran Vertex seven, champions of the Senior Division. This decisive game was played at the Logan Park rink, and both teams resorted to defensive play during the first period, with honors even.
In the final period, however, the Vertex seven opened an aggressive attack that carried the puck to the Logan Parks' cage and captured the city championship for the fourth consecutive time.

The Minneapolis Arena, with artificial ice and a seating capacity of 5,400 spectators, was constructed at 2800 Dupont Avenue South and was opened for use in 1924. The facility, the only one of its kind in Minneapolis for many years, played an important role in the skating and hockey activities of the city for over forty years. For the season of 1923-1924 Minneapolis, replacing an ill-fated Milwaukee franchise, joined the strong United States Amateur Hockey Association along with St. Paul, Duluth, Cleveland, Pittsburgh, and Eveleth. To date this was the best brand of hockey that the Mill City fans had a chance to view, unless they chose to travel to the Hippodrome to watch the St. Paul A.C. Ching and Ade Johnson, natives of Winnipeg who had previously played for Eveleth, joined the team for the initial season. Ching Johnson, who weighed over 200 pounds, was an immense favorite at Eveleth and around the league. Predictably, the Minneapolis fans took great delight in the big defenseman's bald head, broad smile, and rough tactics on the ice. He proved to be the most popular player ever to have played with a Minneapolis team. After three years with Minneapolis, at the relatively old age of twenty-nine, he joined the New York Rangers, along with another popular player, Taffy Abel, for an eleven year stay. In their first season in the league Minneapolis tied Duluth for last place with a 6-14-0 record. The team finished fourth in both halves of the following season, that of 1924-1925. After severing his connections with professional hockey, Nick Kahler was chosen in 1928 to coach the Augsburg College team, which had been approved to represent the United States in the 1928 Olympic Winter Games in St. Moritz, Switzerland.
Augsburg formulated plans to attend the event, including the raising of funds to help defray their expenses, but after much internal wrangling with the United States Olympic Committee, Chairman Douglas MacArthur termed the Augsburg team "not representative of American hockey," and the committee changed its mind and would not approve the Minneapolis college. The team was led by the five Hansen brothers: Oscar, Emil, Julius, Joe, and Lewis. Other members of the team were Gordon Schaeffer, George Malsed, Wallace Swanson, Willard Falk, and Charles Warren. The Hansen brothers were born in the United States, but spent part of their youth in Canada before moving to Minneapolis. In 1929 Kahler continued his interest in the amateur game when, at the conclusion of the high school season, he assembled an All Star high school team called the Cardinals, which won the Minneapolis Recreation title. Members of the team, most of whom were from South and West high schools, were as follows: Phil Perkins, Bill Oddson, Bubs Hutchinson, Red Malsed, Harry Melberg, John Scanlon, Evy Scotvold, Kelly Ness, Bill Cooley, Mack Xerxa, and Bill Munns. In a game played at the Minneapolis Arena the All Star team edged Eveleth High School 2-1. Eveleth had not been defeated in the past three seasons of competition against Iron Range and Duluth schools. An idea of the caliber of these two teams can be gathered from the fact that five of the eleven Minneapolis players and six of the eleven Eveleth performers later turned professional. Kahler, who was born in Dollar Bay, Michigan and played his early hockey in the Copper Country, was interested in sports other than hockey. A master organizer and promoter, he founded the National Golden Gloves event as well as the Northwest Sports Show. He was inducted into the Minnesota Sports Hall of Fame in 1962, awarded the Governor's Public Service Citation and Heritage Award in 1967, and elected to the United States Hockey Hall of Fame in 1980.
He died January 8, 1983 at the age of 91 in Minneapolis, after having given a great share of his life to hockey and other sporting events. After five successful seasons of operation, the 1924-1925 season was the last for the "amateur" USAHA. The league's name in 1925-1926 was changed to the Central Hockey Association and some franchise changes were made. Minneapolis, with an exceptionally strong team, played a 38 game schedule, winning both the regular season and the playoffs. Some of the greatest players in the game were members of the team. Among them were Tiny Thompson, Cooney Weiland, Taffy Abel, Ching Johnson, Mickey McQuire, Bill Boyd, Denny Breen, Vic Ripley, and Johnny McKinnon. Several of these players went on to become outstanding stars in the National Hockey League. The season of 1925-1926 was the last for the amateur league in the Midwest. In order to protect its players from raids by NHL teams, the league, in the form of the American Hockey Association, turned professional for the season of 1926-1927. The new league membership consisted of Minneapolis, St. Paul, Duluth, Winnipeg, and the Chicago Shamrocks. The latter was the second professional franchise in Chicago. The minor professional AHA, with numerous franchise changes, operated sixteen consecutive seasons, from 1926-1927 through 1941-1942. In the sixteen year history of the circuit, fourteen different cities held franchises at one time or another: St. Paul, Duluth, Minneapolis, Winnipeg, Chicago, Kansas City, Tulsa, St. Louis, Buffalo, Wichita, Oklahoma City, Omaha, Dallas, and Fort Worth. In the early years Duluth, Kansas City, and Tulsa dominated the league, while the St. Louis Flyers did likewise in the late thirties and early forties. Minneapolis and St. Paul did not place teams in the league during the four year span of 1931-1932 through 1934-1935, instead choosing to join the more compact Central Hockey League.
During this time the leading Minneapolis players that local fans had the opportunity to watch in the AHA and the CHL were: Tiny Thompson, Stu Adams, Cooney Weiland, Helge Bostrom, Bill DePaul, Hub Nelson, Oscar, Emory, and Emil Hansen, Bill Mitchell, Sil Acaster, George Agar, Pat Shea, Bob Blake, Nakina Smith, Alex Milne, Marty Barry, Ed Oatman, Rip Ripley, Billy Hill, Moose Johnson, Cully Dahlstrom, Bill Boyd, Alex Wood, Ed Prokop, Ching and Ade Johnson, Byron MacDonald, George Patterson, Fido Purpur, Harry Dick, Phil Hargesheimer, Sal Fassono, Bob Nylon, Red Stuart, Joe Stark, Joe Bretto, Nick Wasnie, Leo LaFrance, Earl Barthelome, Evy Scotvold, Ted Breckheimer, Louis Swenson, Jack Flood, Phil Perkins, Virgil Johnson, and Bill Oddson. A large number of these players were born and developed in Minneapolis. During their twelve year stay in the AHA the Millers never managed to capture the regular season crown, but did manage to win the playoffs in 1927 and 1937. During the five year period of 1936-1937 through 1940-1941 the Millers finished second in the regular season schedule in five consecutive seasons. (Notes to a standings table in the original manuscript: *Did not qualify. **League divided into two divisions. ***Oklahoma City franchise transferred to Minneapolis 3/12/36.) During the four year period of 1931-1932 through 1934-1935 Minneapolis and St. Paul withdrew from the AHA and joined the newly formed Central Hockey League, which was composed of teams from Minneapolis, St. Paul, Virginia, Hibbing, and Eveleth. With few exceptions the entire rosters of the teams were composed of players of Minnesota descent. Although some termed the league "amateur," the NHL viewed it as a professional circuit. Teams in the league played a 40-48 game schedule and developed many players who went on to compete in the AHA, AHL, and NHL. Minneapolis captured the league championship in 1932 and 1934, while Eveleth turned the trick in 1933 and St. Paul in 1935.
Among the local area players who played for the Millers were: Pat Shea, Earl Barthelome, Ted Breckheimer, Evy Scotvold, Kelly Ness, Cully Dahlstrom, Bill Oddson, Virgil Johnson, Hub Nelson, Bill McGlone, and Louis Swenson. An example of the strength of the league is shown by the fact that St. Paul, Central League champion in 1935, defeated the AHA titlist St. Louis Flyers in a post season playoff series three games to none. In an exhibition game, Eveleth, the third place finisher, defeated Kansas City of the AHA. Senior, intermediate, and junior hockey continued to flourish in Minneapolis in the twenties and thirties. The continued interest shown by the Minneapolis Recreation Department, the advent of professional hockey, and the erection of the Minneapolis Arena all contributed to the growth of the sport. Among the leading teams and programs during the period were: Logan Park, Vertex, Raccoons, Deephavens, Foshays, Buzzas, Federals, Flour City, Lake Lyndale, Wheaties, Daytons, Munsingwear, Ewalds, Jerseys, Bankers, Aces, Ascensions, Americans, Midways, North Commons, Camden, Cos & Steves, Powderhorns, Norse, Vikings, Mitby-Sather, Pershing, Red Squirrels, Nolans, East Side, Cedar Lake, Chicago Lake, and St. Lawrence. Billy Fox of Minneapolis and Ernest Johnson of St. Paul, both associated with their respective recreation departments, were leaders in organizing and operating the Minnesota Recreation Association's first Senior Hockey Tournament at Hibbing on February 19-20, 1926. The Minneapolis representative, the Federals, defeated International Falls 1-0 in four overtime periods and St. Paul Fire and Marine 6-0 before bowing to the Eveleth Cubs 6-0 in the finals. In 1927 the four team tournament was held at the Hippodrome in St. Paul, where the Minneapolis Buzzas defeated Nashwauk and the Duluth Aces for the State Recreation crown.
The following year at Hibbing the Buzzas lost to the Duluth Gateleys, the eventual champions, 4-3 in overtime in the first round of competition. The growth of hockey under the auspices of the Minneapolis Recreation Department is best pointed out by an article in the 1931-1932 Spalding Ice Hockey Guide:

MUNICIPAL HOCKEY IN MINNEAPOLIS
By W.W. Fox, Assistant Director of Recreation
From an article in "Parks and Recreation"

Hockey in Minneapolis is the outgrowth of a tiny beginning which started years ago. Today, the recreation department shows an enrollment of 275 teams and a playing personnel of 2,750 players, with a competent staff of fifty officials. The magnitude of the program necessitates a schedule requirement of over 800 games, which are played on twenty-six brilliantly lighted rinks systematically placed throughout the park system, with a view to developing the hockey spirit and play in every neighborhood. The hockey program is organized from voluntary and solicited registration, and the teams are grouped into, preferably, four club league units, each club paying the department an entrance fee of $5 for juvenile and junior teams, $8 for intermediate, and $10 for senior teams. This fund is paid out to officials for handling the games, for detail publicity, and for trophies. The various teams are grouped according to an age system which runs from twelve years to senior classification. North section teams, in formations of fours, play a round robin schedule of games which provides competition for each team. South, East, and West sections are similarly treated and the whole program coordinated into the major organization.
A potent factor in the formation of the park hockey program centers in the enlistment of every organization in the city interested in boys' welfare: settlement houses, athletic clubs, social center bodies, men's clubs, church clubs, YMCA, Knights of Columbus, Masons, and in fact every kindred group working for the welfare of boys and the solution of their problems. When one takes into consideration the tremendous amount of skating that is being done on the Minneapolis park system, which will run up to millions of participants covering approximately fifty ice arenas, the need for supervision is outstanding, especially on the hockey rinks and adjacent hockey arenas, which serve a consistent purpose in the promotion of the big hockey program where every boy twelve years upward can enjoy the game. The hockey rinks are of standard dimensions, 188 feet long and 96 feet wide, with boards 3 feet high, and are fully lighted with 1,000 watt lamps, which not only flood the entire area but illuminate the sideboards. The boards are removable for the purpose of either putting in the tractor and planing the ice, which is done on some rinks, or flooding in others. The entire rink is encircled with heavy mesh wire, which is additional protection for the spectators. Another fine feature in the development of this program is the co-operation received from the management of the Minneapolis Arena, where professional hockey is played as well as a fine program of amateur hockey. Professional hockey has been a powerful factor in the development of park hockey programs. The skillful play in evidence at the Arena, and the tremendous attendance and enthusiasm accompanying these paid programs, naturally interest the young boys of the city and they want to go out on the park ice and do likewise. At the conclusion of the general hockey program sponsored by the recreation department of the board, a Northwest hockey tournament is promoted in this indoor arena.
The tournament embraces all intermediate and senior winners throughout the Northwest, and any non-paid team is eligible to compete. It takes about a week to run off this program and the receipts about balance the expense.

Through the efforts of Billy Fox and Lyle Wright, manager of the Minneapolis Arena, the first Northwest-State AAU Hockey Tournament was held at the Arena in 1930, a popular event which ran for eleven years, through 1940. Some years as many as twenty-six teams in two divisions would be involved in the tournament, which might take as many as four days to complete. Minneapolis teams won four straight State AAU titles in the years 1936-1939, with the champions being the Wheaties, Jerseys, Red Squirrels, and Barnes, respectively. Of these titlists the Jerseys and Wheaties came out of the strong Minneapolis Arena League, an indoor circuit that functioned in the late thirties. Jerseys, Bankers, Wheaties, Munsingwear, Daytons, and Ewalds were some of the teams that were members of the Arena League. Dozens of former Minneapolis and St. Paul high school and University of Minnesota players who had not turned professional performed in the popular indoor league. Among them were: John Scanlon, Spencer Wagnild, Phil LaBatte, Bill Toenjes, Russ Grey, Clyde Munns, Wylie, Al, and Howie Van, A. Campbell, Hank Frantzen, Bucky Hollingsworth, Marshall Hutchinson, Bucky Johnson, Charley Duncan, Bob McCoy, Ed Nicholson, Red Melberg, Elmer Nelson, Roy Newquist, George Clausen, Manny Cotlow, and Laurie Parker. For the season of 1935-1936 the Minneapolis Recreation Department reported a record 435 teams, a large increase over the 275 reported in 1931. The department reported 5,530 players using the scores of lighted and well maintained rinks. This was the largest hockey program in the country and compared favorably in numbers with those in the largest cities of Canada.
In the twenties high school hockey became increasingly popular, with many of the more important games being played at the Minneapolis Arena, which had the advantage of artificial ice. Before WWI, West had dominated the City High School Conference, and continued to do so through the 1932 season, when the sport was temporarily discontinued until reintroduced for the 1936-1937 season. In the period 1908-1932 West won fourteen city championships. Their ready access to the lakes and ponds in their section of the city, and their close proximity to the Minneapolis Arena, may have been factors in West's success. Washburn won their first title in 1927-1928 and South in 1928-1929, but West retained the championship for the next three seasons. During this period W.W. Bradley coached the West team for a ten-year span. Following the revival of the sport for the season of 1937-1938, Marshall won their only city title, but lost to St. Paul Humboldt 3-2 in a Twin City playoff game before 2,600 fans at the St. Paul Auditorium. Washburn won their second and third championships in 1937 and 1939, while Roosevelt captured their first title in 1940. Two private schools in the Minneapolis area iced varsity teams in the late twenties and the thirties, Blake and De LaSalle. These schools usually scheduled West, Cretin, St. Paul Academy, St. Thomas Academy, and Shattuck, and occasionally would schedule college frosh or varsity teams. Blake iced some strong teams, and many of their players matriculated and played at eastern colleges such as Yale, Harvard, Princeton, and Dartmouth. Coached by John Savage, a former Princeton goalie, Blake went undefeated during the 1937-1938 season. Leading members of the team were Jock and Tel Thompson, Bert Marvin, Lindley Burton, Monty Wells, and Captain John Brooks.
Red Curran, John Scanlon, Phil Perkins, Burr Williams, Jack Flood, Clyde Munns, Phil LaBatte, Laurie Parker, Earl Barthelome, and Manny Cotlow were among the leading players who performed for West high school during the late twenties and thirties. Of these players, Perkins, Flood, Parker, Williams, Barthelome, and Cotlow turned professional. Others from Minneapolis during the same period who signed professional contracts were Hub Nelson, Virgil Johnson, Cully Dahlstrom, Bill Moe, Don Olson, Bill McGlone, Kelly Ness, Evy Scotvold, Ted Breckheimer, Louie Swenson, Leo Schatzlein, and Emil, Oscar, and Emory Hansen. In addition, Phil LaBatte was a member of the 1936 U.S. Olympic team, while Spencer Wagnild played with the 1938 and 1939 U.S. National teams. Ed Nicholson, who had been a member of the Bankers team in the Arena League, joined the 1939 U.S. National team that competed in the World Championship in Zurich and Basel, Switzerland. Dahlstrom, Moe, and Johnson advanced to the NHL, where they became regular performers. Others who had short careers in the NHL were Oscar and Emil Hansen and Burr Williams, while Barthelome and Nelson enjoyed long careers in the AHL and AHA, respectively. In the 1937-1938 season, Dahlstrom won the Calder Trophy, emblematic of rookie of the year in the NHL. A final review of the University of Minnesota hockey program is in order. The sport was placed on a varsity basis for the 1922-1923 season, when the Gophers played a twelve-game schedule with a 10-1-1 record. During the twenty-one seasons stretching from 1922-1923 through the 1940-1941 season, Minnesota posted a very respectable 200-69-19 record for a .727 winning percentage. In collegiate circles this ranked among the very best in the nation. During those years, with few exceptions, the Gopher rosters were composed of Minnesota-born and -reared players.
A review of the lineups during the twenties and thirties reveals that Minneapolis furnished a large share of the Gophers' playing personnel. A partial list of the Minneapolis natives who wore the Maroon and Gold colors during this period includes Phil Bros, Joe Brown, Ed Olsen, Cliff Thompson, Chuck McCabe, Lloyd and Clyde Russ, Ed Hollingsworth, Fred Gould, Howard Gibbs, Fred and Marsh Ryman, Ed Arnold, Bucky Johnson, Harold, Jim, and Bob Carlson, Laurie Parker, Phil LaBatte, George Clausen, Bill Munns, Spencer Wagnild, Bill Zieske, John Scanlon, Reynard Bjork, Bud Wilkinson, Wally Taft, Ridgeway Baker, John Ganley, John Hokanson, Fred Junger, Les Malkerson, Glenn Seidel, and Marty Falk. Marty Falk and Bud Wilkinson, both goaltenders, and Wally Taft, a forward, were products of the hockey program at Faribault's Shattuck School. Wilkinson was a Gopher football star and later became a nationally acclaimed football coach at the University of Oklahoma.
A motor skill is a learned ability to cause a predetermined movement outcome with maximum certainty. Motor learning is the relatively permanent change in the ability to perform a skill as a result of practice or experience. Performance is the act of executing a motor skill. The goal of motor skill learning is to optimize the ability to perform the skill with a high rate of success and precision, and to reduce the energy consumption required for performance. Continuous practice of a specific motor skill will result in greatly improved performance, but not all movements are motor skills.

Types of motor skills

Motor skills are movements and actions of the muscles. Typically, they are categorized into two groups:

- Gross motor skills – require the use of large muscle groups to perform tasks like walking, balancing, and crawling. The skill required is not extensive, and these skills are therefore usually associated with continuous tasks. Much of the development of these skills occurs during early childhood. The performance level of gross motor skills remains unchanged after periods of non-use. Gross motor skills can be further divided into two subgroups: locomotor skills, such as running, jumping, sliding, and swimming; and object-control skills, such as throwing, catching, and kicking.
- Fine motor skills – require the use of smaller muscle groups to perform smaller movements with the wrists, hands, fingers, feet, and toes. These are tasks that are precise in nature, like playing the piano, writing carefully, and blinking. Generally, there is a retention loss of fine motor skills over a period of non-use. Discrete tasks usually require more fine motor skill than gross motor skill. Fine motor skills can become impaired; some causes of impairment are injury, illness, stroke, congenital deformities, cerebral palsy, and developmental disabilities.
Problems with the brain, spinal cord, peripheral nerves, muscles, or joints can also affect fine motor skills and decrease control.

Motor skills develop in different parts of the body along three principles:

- Cephalocaudal – development proceeds from head to foot. The head develops earlier than the hand, and hand coordination develops before the coordination of the legs and feet. For example, an infant is able to follow something with their eyes before they can touch or grab it.
- Proximodistal – limbs closer to the body develop before the parts that are further away; for example, a baby learns to control the upper arm before the hands or fingers. Fine movements of the fingers are the last to develop in the body.
- Gross to specific – a pattern in which larger muscle movements develop before finer movements. For example, a child is at first only able to pick up large objects, and later can pick up a small object between the thumb and fingers. The earlier movements involve larger groups of muscles, but as the child grows, finer movements become possible and specific things can be achieved.

In children, a critical period for the acquisition of motor skills is the preschool years (ages 3–5), as fundamental neuroanatomic structures show significant development, elaboration, and myelination over the course of this period. Many factors contribute to the rate at which children develop their motor skills. Unless afflicted with a severe disability, children are expected to develop a wide range of basic movement abilities and motor skills. Motor development progresses in seven stages throughout an individual's life: reflexive, rudimentary, fundamental, sports skill, growth and refinement, peak performance, and regression. Development is age-related but is not age-dependent. With regard to age, typically developing children are expected to attain the gross motor skills used for postural control and vertical mobility by 5 years of age.
There are six aspects of development:

- Qualitative – changes in movement process result in changes in movement outcome.
- Sequential – certain motor patterns precede others.
- Cumulative – current movements are built on previous ones.
- Directional – cephalocaudal or proximodistal.
- Multifactorial – numerous factors have an impact.
- Individual – dependent on each person.

In the childhood stages of development, gender differences can greatly influence motor skills. In the article "An Investigation of Age and Gender Differences in Preschool Children's Specific Motor Skills", girls scored significantly higher than boys on visual motor and graphomotor tasks. The results from this study suggest that girls attain manual dexterity earlier than boys. Variability of results in the tests can be attributed to the multiplicity of different assessment tools used. Furthermore, gender differences in motor skills appear to be affected by environmental factors. In essence, "parents and teachers often encourage girls to engage in [quiet] activities requiring fine motor skills, while they promote boys' participation in dynamic movement actions". In the journal article "Gender Differences in Motor Skill Proficiency From Childhood to Adolescence" by Lisa Barrett, the evidence for gender-based motor skills is apparent. In general, boys are more skillful in object-control and object-manipulation skills; these include throwing, kicking, and catching, tasks at which boys were found to perform better. There was no evidence for a difference in locomotor skill between the genders, but both improved with the intervention of physical activity. Overall, development predominated in balance skills (gross motor) in boys and manual skills (fine motor) in girls.
Components of development

- Growth – an increase in the size of the body or its parts as the individual progresses toward maturity (quantitative structural changes).
- Maturation – refers to qualitative changes that enable one to progress to higher levels of functioning; it is primarily innate.
- Experience or learning – refers to factors within the environment that may alter or modify the appearance of various developmental characteristics through the process of learning.
- Adaptation – refers to the complex interplay or interaction between forces within the individual (nature) and the environment (nurture).

Influences on development

- Stress and arousal – stress and anxiety are the result of an imbalance between demand and the capacity of the individual. In this context, arousal refers to the amount of interest in the skill. The optimal performance level occurs at moderate stress or arousal. An example of an insufficient arousal state is an overqualified worker performing repetitive jobs; an example of an excessive stress level is an anxious pianist at a recital. The "Practice-Specificity-Based Model of Arousal" (Movahedi, 2007) holds that, for best and peak performances to occur, motor task performers need only create an arousal level similar to the one they have experienced throughout training sessions. For peak performance, performers do not need high or low arousal levels; what matters is creating the same level of arousal in training sessions and in competition. In other words, high levels of arousal can be beneficial if athletes experience such heightened levels during some consecutive training sessions, and low levels of arousal can likewise be beneficial if athletes train at such low levels.
- Fatigue – the deterioration of performance when a stressful task is continued for a long time, similar to the muscular fatigue experienced when exercising rapidly or over a long period.
Fatigue is caused by over-arousal. It impacts an individual in many ways: perceptual changes in which visual acuity or awareness drops, slowing of performance (reaction times or movement speed), irregularity of timing, and disorganization of performance.

- Vigilance – the effect of the loss of vigilance is the same as that of fatigue, but is instead caused by a lack of arousal. Some tasks require little work but high attention.
- Gender – gender plays an important role in the development of the child. Girls are more likely to be seen performing fine stationary visual motor skills, whereas boys predominantly exercise object-manipulation skills. In research on motor development in preschool-aged children, girls were more likely to be seen performing skills such as skipping, hopping, or skills using only the hands, while boys were seen to perform gross skills such as kicking or throwing a ball or swinging a bat. There are gender-specific differences in qualitative throwing performance, but not necessarily in quantitative throwing performance. Male and female athletes demonstrated similar movement patterns in humerus and forearm actions but differed in trunk, stepping, and backswing actions.

Stages of motor learning

Motor learning is a change resulting from practice. It often involves improving the accuracy of movements, both simple and complex, as one's environment changes. Motor learning is relatively permanent, as the capability to respond appropriately is acquired and retained. The stages of motor learning are the cognitive phase, the associative phase, and the autonomous phase.

- Cognitive phase – when a learner is new to a specific task, the primary thought process starts with, "What needs to be done?" Considerable cognitive activity is required so that the learner can determine appropriate strategies to adequately reflect the desired goal. Good strategies are retained and inefficient strategies are discarded.
The performance is greatly improved in a short amount of time.

- Associative phase – the learner has determined the most effective way to do the task and starts to make subtle adjustments in performance. Improvements are more gradual and movements become more consistent. This phase can last a long time; the skills in this phase are fluent, efficient, and aesthetically pleasing.
- Autonomous phase – this phase may take several months to years to reach. The phase is dubbed "autonomous" because the performer can now "automatically" complete the task without having to pay any attention to performing it. Examples include walking and talking, or sight-reading while doing simple arithmetic.

Law of effect

Motor-skill acquisition has long been defined in the scientific community as an energy-intensive form of stimulus-response (S-R) learning that results in robust neuronal modifications. In 1898, Thorndike proposed the law of effect, which states that the association between some action (R) and some environmental condition (S) is enhanced when the action is followed by a satisfying outcome (O). For instance, if an infant moves his right hand and left leg in just the right way, he can perform a crawling motion, thereby producing the satisfying outcome of increased mobility. Because of the satisfying outcome, the association between being on all fours and these particular arm and leg motions is enhanced. Conversely, a dissatisfying outcome weakens the S-R association: when a toddler contracts certain muscles, resulting in a painful fall, the child will decrease the association between these muscle contractions and the environmental condition of standing on two feet.

During the learning process of a motor skill, feedback is the positive or negative response that tells the learner how well the task was completed. Inherent feedback: after completing the skill, inherent feedback is the sensory information that tells the learner how well the task was completed.
A basketball player will note that he or she made a mistake when the ball misses the hoop. Another example is a diver knowing that a mistake was made when the entry into the water is painful and undesirable.

Augmented feedback: in contrast to inherent feedback, augmented feedback is information that supplements or "augments" the inherent feedback, for example, when a person driving over the speed limit is pulled over by the police. Although the car did no harm, the police officer gives augmented feedback to the driver so that he drives more safely. Another example is a private tutor for a new student in a field of study. Augmented feedback decreases the amount of time needed to master the motor skill and increases the learner's performance level.

Transfer of motor skills: the gain or loss in the capability for performance in one task as a result of practice and experience on some other task. An example would be comparing the initial skill of a tennis player and a non-tennis player when playing table tennis for the first time. An example of negative transfer is an experienced typist taking longer to adjust to a randomly rearranged keyboard layout than a new typist.

Retention: the performance level of a particular skill after a period of no use. The type of task can affect how well the motor skill is retained after a period of non-use:

- Continuous tasks – activities like swimming, bicycling, or running; the performance level retains proficiency even after years of non-use.
- Discrete tasks – an instrument, a video game, or a sport; the performance level drops significantly but will remain better than that of a new learner.

The relationship between the two is that continuous tasks usually use gross motor skills and discrete tasks use fine motor skills.

The regions of the frontal lobe responsible for motor skill include the primary motor cortex, the supplemental motor area, and the premotor cortex.
The primary motor cortex is located in the precentral gyrus and is often visualized as the motor homunculus. By stimulating certain areas of the motor strip and observing where the stimulation had an effect, Penfield and Rasmussen were able to map out the motor homunculus. Areas of the body that have complex movements, such as the hands, have a larger representation on the motor homunculus. The supplemental motor area, just anterior to the primary motor cortex, is involved with postural stability and adjustment as well as coordinating sequences of movement. The premotor cortex, just below the supplemental motor area, integrates sensory information from the posterior parietal cortex, is involved with the sensory-guided planning of movement, and begins the programming of movement.

The basal ganglia are an area of the brain where gender differences in brain physiology are evident. They are a group of nuclei responsible for a variety of functions, including movement. The globus pallidus and the putamen are two nuclei of the basal ganglia that are both involved in motor skills: the globus pallidus is involved with voluntary motor movement, while the putamen is involved with motor learning. Even after controlling for the naturally larger volume of the male brain, males were found to have a larger volume of both the globus pallidus and the putamen.

The cerebellum is an additional area of the brain important for motor skills; it controls fine motor skills as well as balance and coordination. Although women tend to have better fine motor skills, the cerebellum has a larger volume in males than in females, even after correcting for the fact that males naturally have a larger brain volume. Hormones are an additional factor that contributes to gender differences in motor skill.
For instance, women perform better on manual dexterity tasks during times of high estradiol and progesterone levels, as opposed to when these hormones are low, such as during menstruation. An evolutionary perspective is sometimes drawn upon to explain how gender differences in motor skills may have developed, although this approach is controversial. For instance, it has been suggested that men were the hunters who provided food for the family, while women stayed at home taking care of the children and doing domestic work. Some theories of human development suggest that men's tasks involved gross motor skills such as chasing after prey, throwing spears, and fighting, while women used their fine motor skills the most in order to handle domestic tools and accomplish other tasks that required fine motor control.

References

- "Gross Motor Skills".
- Stallings, Loretta M. (1973). Motor Skills: Development and Learning. Boston: WCB/McGraw-Hill. ISBN 0-697-07263-0.
- "Fine Motor Skills – symptoms, Definition, Description, Common problems". www.healthofchildren.com.
- Newton, T.J., & Joyce, A.P. (2012). Human Perspectives (6th ed.). Australia: Gregory.
- Denckla 1974.
- Malina 2004.
- Rosenbaum, Missiuna & Johnson 2004.
- Junaid & Fellowes 2006.
- Piek et al. 2012.
- Vlachos, Papadimitriou & Bonoti 2014.
- Yerkes, Robert M.; Dodson, John D. (1908). "The relation of strength of stimulus to rapidity of habit-formation". Journal of Comparative Neurology and Psychology. 18 (5): 459–482. doi:10.1002/cne.920180503.
- Movahedi, A.; Sheikh, M.; Bagherzadeh, F.; Hemayattalab, R.; Ashayeri, H. (2007). "A Practice-Specificity-Based Model of Arousal for Achieving Peak Performance". Journal of Motor Behavior. 39 (6): 457–462. doi:10.3200/JMBR.39.6.457-462.
- Kurtz, Lisa A. (2007). Understanding Motor Skills in Children with Dyspraxia, ADHD, Autism, and Other Learning Disabilities: A Guide to Improving Coordination. Jessica Kingsley Publishers. ISBN 978-1-84310-865-8.
- Adams, J.A. (1971). "A closed-loop theory of motor learning". Journal of Motor Behavior. 3 (2): 111–149. doi:10.1080/00222895.1971.10734898.
- Lee, Timothy Donald; Schmidt, Richard Penrose (1999). Motor Control and Learning: A Behavioral Emphasis. Champaign, IL: Human Kinetics. ISBN 0-88011-484-3.
- Carlson, Neil (2013). Physiology of Behavior. Boston: Pearson.
- Schott, G. (1993). "Penfield's homunculus: a note on cerebral cartography". Journal of Neurology, Neurosurgery, and Psychiatry. 56 (4): 329–333. doi:10.1136/jnnp.56.4.329. PMC 1014945. PMID 8482950.
- Rijpkema, M., Everaerd, D., van der Pol, C., Franke, B., Tendolkar, I., & Fernandez, G. (2012). "Normal sexual dimorphism in the human basal ganglia". Human Brain Mapping. 33 (5): 1246–1252. doi:10.1002/hbm.21283.
- Raz, N., Gunning-Dixon, F., Head, D., Williamson, A., & Acker, J. (2001). "Age and sex differences in the cerebellum and the ventral pons: A prospective MR study of healthy adults". American Journal of Neuroradiology. 22 (6): 1161–1167. PMID 11415913.
- Becker, J., Berkley, K., Geary, N., Hampson, E., Herman, J., & Young, E. (2008). Sex Differences in the Brain: From Genes to Behavior. New York, NY: Oxford University Press. p. 156.
- Joseph, R. (2000). "The evolution of sex differences in language, sexuality, and visual-spatial skills". Archives of Sexual Behavior. 29 (1): 35–66. doi:10.1023/A:1001834404611. PMID 10763428.
- Sparrow, W.A. (1983). "The efficiency of skilled performance". Journal of Motor Behavior. 15 (3): 237–261. doi:10.1080/00222895.1983.10735299. PMID 15151872.
- Guthrie, E.R. (1957). The Psychology of Learning. New York: Harper & Brothers.
This Is Your Brain on Art Can neuroscience explain art? By Morgan Meis Twenty percent of art can now be explained by neuroscience. That, at least, is what V.S. Ramachandran thinks. Ramachandran is the Director of the Center for Brain and Cognition, and Distinguished Professor with the Psychology Department and Neurosciences Program at the University of California, San Diego. He is, in short, one of the top neuroscientists around at the moment. He is also a clear and engaging writer. His 1999 book, Phantoms in the Brain, brought him much popular attention and his most recent book, The Tell-Tale Brain, is doing more of the same. Much like Oliver Sacks, his friend and admirer, Ramachandran comes to many of his insights about the human brain by observing its dysfunction. Problems in the brain can tell us meaningful things about what is going on in a normal brain. Take, for example, people who claim that one of their arms belongs to someone else due to damage to their brain; they become lessons in how complex and multi-layered are the functions of consciousness. We seem to ourselves, when everything is going well, to be fully unified “selves.” In fact, when we look at various disorders of the mind, we see how tenuous is the ground upon which that feeling rests. In looking at the disordered mind, Ramachandran gets the impression that he is looking “at human nature through a magnifying glass.” That is also why Ramachandran devotes two whole chapters of his book to the subject of art and aesthetics. Making art and appreciating art seems to be universal in the human species. From prehistoric cave paintings to modern conceptualism, where you find human beings you also find art. At the same time, no one has ever been able to give a very good definition of art, to explain in any rigorous and satisfying way what it is that human beings are up to when they make art and when they like art. 
It is a subject that touches on the strangeness of consciousness, the felt sense of being human that all of us experience every day but that is so resistant to explanation or analysis. Art is thus a kind of Holy Grail to those who seek to explain the murkiest aspects of human consciousness. But it is this very fact — the experiential and intangible nature of art — that would seem to preclude the possibility that science can intrude into the domain of art. As Ramachandran himself admits, “One is a quest for general principles and tidy explanations while the other is a celebration of the individual imagination and spirit, so that the very notion of a science of art seems like an oxymoron.” That is, indeed, more or less the problem. Theories of art have proliferated for as long as we’ve had philosophy and theory. All of them have tried, in one way or another, to elucidate general principles. The problem, as Ramachandran understands it, is that we simply haven’t known enough about how the brain operates. Now, he says, that situation has finally changed. He claims specifically that, “our knowledge of human vision and of the brain is now sophisticated enough that we can speculate intelligently on the neural basis of art and maybe begin to construct a scientific theory of artistic experience.” Speculate he does. Ramachandran identifies what he calls nine laws of aesthetics. Let’s look at one of them — law number two, which he calls Peak Shift — to get a sense of what neuroscience brings to aesthetics. Peak Shift refers to a generally elevated response to exaggerated stimuli among many animals. Ramachandran refers to a study in which seagull chicks were made to beg for food (just as they do from their mothers) simply by waving a beak-like stick in front of their nests. Later, the researchers pared down even further, simply waving a yellow strip of cardboard with a red dot on the end (adult gulls have a red dot at the end of their beaks). They got the same response. 
More interesting, and crucial for Ramachandran's law of Peak Shift, is that the gull chicks become super excited if you put three red dots on the cardboard strip. Something in the mental hardwiring of the chicks says, "red outline on lighter background means food." The wiring does not normally need to be more specific than that. It is enough for survival. So, the chick brains make the leap to interpreting the advent of several red outlines as being several times better. They go nuts. This fact, Ramachandran thinks, can give us some real, neurologically based insights into the appeal of abstract art. Ramachandran supposes that with abstract art, human beings have learned to tap into their own gull-chick response mechanisms. Abstract artists are thus "tapping into the figural primitives of our perceptual grammar and creating ultranormal stimuli that more powerfully excite certain visual neurons in our brains as opposed to realistic-looking images." That is the argument. I, for one, suspect that there is a genuine insight here, mixed with a battery of oversimplifications that could be picked apart by any art historian. Ramachandran, to his credit, admits that fact. He does not want to be seen as a reductionist, and his points about Peak Shift are not meant to exhaust the possible reasons for the emergence of and enthusiasm for abstract art. Neuroscience is not meant to replace other standpoints from which we appreciate and analyze art. Ramachandran thinks, in general, that neuroscience can make significant contributions to aesthetics without otherwise encroaching on the humanities. Our love of Shakespeare, he argues, is not diminished by our understanding of universal grammar.
“Similarly, our conviction that great art can be divinely inspired and may have spiritual significance, or that it transcends not only realism but reality itself, should not stop us from looking for those elemental forces in the brain that govern aesthetic impulses.” Why the qualifications then? Why does Ramachandran continuously feel the need to reassure us that we can gain knowledge about art from neuroscience without losing anything? It seems to presuppose, at the very least, that the other option is a possibility, that looking for (and finding) elemental forces in the brain that govern aesthetic impulses could, in fact, transform our actual experience of art. Perhaps past experience comes into play here. We are broadly aware of the fact, for instance, that there has been a vast accretion of knowledge about the natural world and about ourselves over the last two centuries. We are also broadly aware that the understanding we have gained has not been neutral. It has not left the world as it was. The understanding has transformed our relationship to the world, to one another, to ourselves. Maybe that is a simple way to describe the sense of crisis that has always been a constituent part of the experience of modernity. As we understand differently, we act differently. And how you act is, in some fundamental way, how you are. So, we have changed in who we are. We have become different. How different? No one can say, exactly. Has it been for the better or for the worse? Opinions are divided. The feelings of anxiety, though, are real and they’ve always been real. The subtitle of Ramachandran’s book is “A Neuroscientist’s Quest for What Makes Us Human.” The underlying assumption of that subtitle, I am suggesting, is that the quest is a fundamentally benign one. Philosophy, Aristotle said many years ago, begins in wonder. We want to know. We have always wanted to know. That is part of what it means to be human. 
Ramachandran thus presents his book as both a study in the things that make us human, and a contribution to the practice of being human. But is there another possible subtitle to Ramachandran’s book lurking in the shadows? Would it be something like, “A Neuroscientist’s Quest to Utterly Transform What It Means to Be Human?” There is an interesting aside during Ramachandran’s discussion of Peak Shift. He wonders, after discussing his principle of ultranormal stimuli and its relation to abstract art, whether our brains are simply hardwired to appreciate art. This raises the question, however, of disagreement in the appreciation of art. If we are analogous to gull chicks in our gut reaction to certain abstract forms, mustn’t it then be the case that everyone actually likes, in some deep way, the sculptures of (for instance) Henry Moore? Ramachandran goes for the surprising answer here. He supposes that maybe everyone does. They just don’t know it, or they suppress that root “liking” with their higher cognitive functions, adjusting what they “like” to specific cultural mores or other similar considerations. Ramachandran goes even further. He proposes that we could actually test this hypothesis out. We could hook people up to sensors that test whether they are having a root response to Henry Moore’s sculptures (even if they say they dislike the sculptures) and find out whether we share some basic and primitive response to the work. If nothing else, it could prove that basic universal aesthetic laws do apply, and that they play a role in our appreciation for art. One can make easy fun of such examples. There is something creepy about the idea that we are forced, in some sense, to admit a liking for Henry Moore that we would otherwise deny. But I propose that we take it seriously for a moment.
If, in rigorous test after rigorous test, neuroscientists such as Ramachandran can begin to establish many of these universal laws and fine-tune the analysis of how they operate, is it possible that this would have no effect on how we then continue to appreciate and even to produce art? Maybe it wouldn’t. Maybe the Shakespeare analogy holds. Maybe there is something so solid, so intransigent in our humanness and in the way that we experience the world that no amount of such knowledge can shake it apart. I suspect, though, that we have no idea what the implications of discovering the laws of aesthetics would be. Ramachandran basically agrees. The final sentence in the last chapter of his book explicitly says it. He is speaking more broadly about the project of explaining human consciousness in total, but the thought applies to the specific realm of art. “We don’t know,” he writes, “what the ultimate outcome of such a journey will be, but surely it is the greatest adventure humankind has ever embarked on.” Probably he is correct about this. To understand our own origins and to understand exactly how we got to be the kinds of creatures we are — this is the ultimate quest. It is also appropriate that such enthusiasm, such optimism guide the adventure. No adventure, especially an adventure of such magnitude, has ever been embarked upon without a driving optimism. And no adventure has ever proceeded for very long without melancholic notes creeping into the affair. Thus the need, I think, for Ramachandran to pause along the way and reassure his reader (and himself?) that the outcome of this whole affair will not transform the object of his quest — “what makes us human” — into something unrecognizable. There is a passage in Ramachandran’s discussion of his ninth law of aesthetics (Metaphor) where he begins to wax eloquent about the Nataraja, the Dancing Shiva sculpture that is India’s greatest icon. It is clear that the sculpture is deeply meaningful to Ramachandran.
Perhaps it evokes his childhood. Maybe he once had an intense experience with the sculpture. He doesn’t tell us. Instead, he takes a moment to explain the statue, to interpret it. He mentions that Shiva is shown stomping on a demon, Apasmara, who represents the illusion of ignorance. What is this illusion? Finally, Ramachandran breaks away from his reverie. He apologizes for straying too far afield. He assures us, once again, that his non-reductionist approach to neuroscience will in no way diminish great works of art. He wants it to be the case, and you can feel the desire in the passage, that the insights gained from neuroscience and his interpretations of the power of the Nataraja are deeply compatible. Maybe so, maybe so. Maybe the insights of neuroscience will “actually enhance our appreciation of [art’s] intrinsic value.” But the insistence strikes me as conveying a lingering sadness that Ramachandran never acknowledges. The sadness lingers in the between-spaces of his sentences, in the silent moments that fill up the pauses as he moves from one argument to another. He doesn’t know, he can’t know, what we will lose or what we will gain. And he is aware, as we are all aware in our heart’s heart, that we aren’t going to stop doing this anyway. We are going to go forward into the unknown in the quest to make art fully knowable and we’ll deal with the consequences when we’ve arrived, joyful in our accomplishments and sad, too, at the inevitable loss of all that has been left behind. • 17 March 2011 Morgan Meis is a founding member of Flux Factory, an arts collective in New York. He has written for The Believer, Harper’s, and The Virginia Quarterly Review. Morgan is also an editor at 3 Quarks Daily, and a winner of a Creative Capital | Warhol Foundation Arts Writers grant. He can be reached at [email protected].
One recent Wednesday night, Superintendent Jon Bales received a pair of phone calls at home that dismayed but did not surprise him. The president of the local teachers’ union called him with updates from the state Capitol, a short drive away in Madison, Wis. Dozens of teachers from the DeForest Area School District had joined the burgeoning protests there, Rick Hill told him, and many educators were unlikely to report to work the next day. Mr. Bales soon realized he would have to call off school. That night, the two men—who are on friendly terms—worked out an agreement. Teachers in the district would not call in sick, but would make up the lost time by working a day they were scheduled to have off. Mr. Bales began calling administrators and arranging outreach to parents, whose plans for the next day would be disrupted. Massive protests have been the norm in Wisconsin in recent weeks, since Gov. Scott Walker, a Republican, unveiled a plan to strip many collective bargaining rights from teachers and most other public employees. GOP elected officials are pursuing similar measures in Ohio and other states. But here in the DeForest district, like some others around the state, collective bargaining, while often difficult, has produced agreements that generally satisfied both sides. Gov. Walker’s plan would upend existing relationships, a number of superintendents and local teachers’ union leaders say, and create the potential for more division. It would give leaders in the DeForest district, which has 3,250 students, far more power to determine everything from teachers’ health-care coverage to school assignments and class sizes—matters that would fall outside the scope of collective bargaining. “In the end, on a local basis, what we have is still each other,” Mr. Bales said in an interview in his office this week.
“Our culture here is built around trying to engage everybody in [the] conversation.” The furor over the governor’s plan has left administrators like Mr. Bales, as well as teachers and parents, with an unfamiliar and still-evolving challenge: How to work through the upheaval and go about the business of educating students—while trying to hold their school communities together. “You have to have respect for the fact that people are being impacted personally,” said Mr. Bales. “But from our perspective, and from the teachers’ leadership as well, you have to keep the kids in mind first. You have to separate the personal impact from the impact on the system.” Mr. Hill, a 58-year-old educator who teaches special education, worries that the cooperative approach will be replaced by one that encourages both sides to “get the best you can, when you can.” “I’m really worried,” the local union president explained. “It’s the Wild West if you’ve taken away all sense of what’s reasonable, of how you work through things.” DeForest district officials and members of the teachers’ union, an affiliate of the 98,000-member Wisconsin Education Association Council, or WEAC, and the National Education Association, use an approach known as consensus bargaining in their contract negotiations, in which they begin by laying out broad principles and gradually move into contract specifics. During contract negotiations, the two sides sometimes meet in the district’s offices. On other occasions, they gather at the local library in DeForest, whose 9,000 or so residents include workers employed in manufacturing, farming, and government, often in Madison, just to the south. Votes on various provisions are taken by hand, with participants signaling thumbs up, thumbs down, or thumbs sideways. 
A single thumbs-down is sufficient to nix a provision, so participants work to reach an accord in which all parties have at least a neutral, or sideways, position, explained Vickie Adkins, the district’s human-resources director. The district’s contract gives teachers average salary increases of about 2-4 percent a year, when step pay raises and additional raises for different classifications of educators are included, Mr. Bales estimates. He puts the average teachers’ salary at $52,600 a year. Gov. Walker’s plan would limit yearly raises to no more than the Consumer Price Index—which rose by 1.6 percent for the most recent year ending in January—unless voters in local communities approve a higher increase. The pay increase was made possible partly because the district, which has a total budget of $35 million, and the union agreed to revise the contract to move to a lower-cost insurance carrier, school system officials said. Under the governor’s plan, health-insurance decisions at the local level would no longer be subject to bargaining, meaning district officials could set health-coverage policy on their own. Gov. Walker argues that requiring teachers to pay for pensions—most chip in nothing now—and restricting collective bargaining on health care and other issues will help districts save more than enough money to offset more than $834 million in reductions in state aid to schools over the coming two years. Mr. Bales, now in his 13th year as superintendent, worries the proposal would bring more costs than savings to his district, though he says he can’t yet predict the size of the gap. Districts across Wisconsin faced a deadline this week to send preliminary notices to employees who would be laid off. Mr. Bales and Ms. Adkins hope to avoid layoffs for next academic year by not filling an anticipated 12 to 20 vacancies that will likely be created by retirements and other departures.
A higher-than-usual number of the DeForest district’s 258 teachers have indicated that they plan to retire after this year, citing concerns about either losing or having to pay more for retirement benefits, because of shrinking local budgets and potential reductions created by the governor’s proposal. Mr. Hill says he also hears worries and frustration, particularly from teachers who say educators are being unfairly targeted in the state, and around the country, by those who blame them for budget woes and longstanding problems in schools. “I’ve never heard as many people say, ‘I’m getting out,’ ” he said. Labor Clout Criticized Critics of teachers’ unions, and advocates for tighter controls on government spending, sometimes argue that collective bargaining tips negotiating scales heavily in favor of labor organizations and prevents management from making changes to district operations that can save money and improve student achievement. Some say that the prospect of angering politically active teachers’ unions can put pressure on district leaders to accept deals they might not like. In that context, some Wisconsin school administrators’ qualms about the governor’s proposal are easier to understand, said Mike Antonucci, the director of the Education Intelligence Agency, a California-based organization that researches and is often critical of unions. Should Gov. Walker’s plan win approval, school officials in local districts will be left dealing with frustrated employees at a time when their schools are facing painful budget cuts. “District administrators don’t want any trouble,” Mr. Antonucci said in an e-mail. Administrators, he said, “are the ones who have to live with the new arrangement—with angry unions that haven’t been eliminated, just defanged.” In the DeForest district, meanwhile, Mr.
Bales’ efforts to mitigate the impact of the state tensions have also included reaching out to parents, many of whom were outraged at seeing school canceled even for a day (some Wisconsin districts were out much longer). The superintendent estimates that about 90 percent of calls and e-mails he received were from people who were upset over the district employees’ staying away from school to protest. The reaction was more mixed in the 6,000-student Middleton-Cross Plains Area School District, which canceled two days of classes because many teachers and other employees did not report for work, said Superintendent Don Johnson. Opinion from parents, he said, seemed to be roughly divided in thirds, either supporting the teachers’ action, opposing it, or ending up somewhere in between. Some parents in the district, located in suburban Madison, worried that educators would promote a “union point of view” in their classes, Mr. Johnson said. As the public protests played out, the superintendent sent a memo to teachers, referring them to a policy that requires educators to present controversial topics impartially. He also advised teachers to avoid discussing the Wisconsin fight entirely if it had nothing to do with their classes. “We need to understand that our charge is to help students understand issues,” Mr. Johnson said. His message was that the controversy is “right here, right now,” he noted, “but it doesn’t really belong in a chemistry classroom.” During the protests, reports emerged that some teachers around the state had asked doctors to give them notes reporting that they were sick—and as a result would be paid for the days they missed—when in fact they were attending the protests. Mr. Johnson also asked teachers who did not report to school and instead attended the protests to take leave without pay, rather than reporting sick, which, he explained in a Feb.
20 memo, would “clarify for the public that we are all acting honestly and honorably.” Many educators are scared for the future of their profession, and worried about the quality of education declining with budget cuts, said Pat Keeler, a social studies teacher and union member. A lot of his colleagues have spoken to him about other career options. “People are mad,” the 44-year-old said. “They don’t understand why they’re scapegoats for Wisconsin’s budget ills.” Some public resentment over the canceled classes lingers. Mr. Johnson said he had received eight public-records requests related to the work stoppage, the majority from people in the community wanting the names of district employees who had not reported to work and what reasons they had given. No ‘Paid Guns’ In the Watertown Unified School District, a 4,000-student system in a city less than an hour east of Madison, Superintendent Douglas Keiser and Rusty Tiedemann, who helps negotiate for the local teachers’ union, have spoken regularly during recent weeks, meeting for breakfast and exchanging phone calls. District officials have a history of working through vexing issues with the union, Mr. Keiser said. The two sides avoid bringing what he calls “paid guns”—outside union negotiators and the district’s lawyer—into the negotiations. Both men say they hear questions every day from teachers and other employees about what’s ahead for the school budget and staff members’ contracts. But until they know the fate of the governor’s proposal, they can’t provide answers. “It’s been challenging to know how to act and what to do,” said Mr. Tiedemann, a health teacher. “Everyone’s afraid that actions that we take may be interpreted as an affront to our community, or to our district, which is not what it’s meant to be at all. We’re very happy with our district, and with our community.” Stephanie Griggs, a parent of three students in Watertown, has a different perspective. 
The former school board member believes teachers and other public workers need to contribute to their pensions and health insurance, as is the norm in the private sector, and says that the state needs to curb collective bargaining rights to keep costs to taxpayers low. Wisconsin’s largest teachers’ union, WEAC, has said it will accept the governor’s proposal to pay more for pensions and health coverage, but not the collective bargaining changes. “Everyone is feeling the pinch,” Ms. Griggs said. “I don’t know anybody but maybe two or three people who have gotten pay increases in the past five years.” She also worries that the ongoing controversy will make it less likely that local voters will approve important future spending measures to help schools in the district. “What’s happening now is pitting parents against teachers,” she said. “Parents don’t feel comfortable talking to teachers about it, and teachers don’t feel comfortable talking to parents about it. So it’s kind of like they just don’t talk.” Mr. Keiser, the superintendent, says he’s tried to reach deals with the local union that are fair to teachers and taxpayers. Whatever becomes of Gov. Walker’s measure, he hopes some measure of cooperation continues. Neither side would accept “just acquiescing to the other,” Mr. Keiser said. In most negotiations, “you don’t come away feeling like you won, you don’t come away feeling like you lost. … You have to be reasonable,” he added, because if you aren’t, “you’ll pay the price the next time around.” Coverage of leadership, human-capital development, extended and expanded learning time, and arts learning is supported in part by a grant from The Wallace Foundation, at www.wallacefoundation.org.

Now is the time to speak up for the arts, arts education and creative economy

On March 1, Gov. Scott Walker presented his 2011 – 2013 Biennial Budget Address to a joint session of the Wisconsin State Legislature.
The budget contains many obvious and not-so-obvious provisions that will affect the arts, arts education and creative economy in Wisconsin, and the artistic and creative opportunities that Wisconsin residents deserve. Please note that the Governor’s proposal is the biennial budget’s starting point, and the numbers included in the final budget can go up or down from here in the state Legislature. It should not be taken for granted that the Governor’s proposal will stand in the Legislature. Some legislators may think the Governor’s budget went too far, others may think he hasn’t gone far enough. If you believe that the arts are “part of the solution” for Wisconsin, you must speak up! Committed citizens – not just people who are directly involved in the arts, but everyone who cares about Wisconsin’s future – will need to advocate and educate in this environment. If we want the decision-makers to recognize the public value of the arts for Wisconsin, we must take action. Our motto must be, “Don’t mourn, organize.” Making change will take more than just sending emails to legislators. We need to “surround” and educate legislators with information, data and stories about the value of state funding for their constituents. The focus of our advocacy right now will be the members of the State Legislature, since they will be engaged in the process of reviewing the budget for the next few months. Arts Day on March 3 (you can still register!) is the first step in this campaign, but the budget will unfold over the next few months and it’s up to all of us to get involved. (Click here for Nine Reasons why you must be an advocate for the arts). Part #1 of this message is information on the proposed cuts to the Wisconsin Arts Board, with additional information about other budget proposals that will affect the arts in the state. Part #2 is a brief overview of the budget process. Part #3 is information on what YOU can and must do to advocate and educate, if you want to see change.
Please know that this is just the beginning of information from Arts Wisconsin and partners about the state budget and advocacy efforts. We will continue to analyze the budget and its impact and facilitate the campaign for action. We will keep you up to date and equipped with the information and tools you need to make your voice heard. Please make sure you – and others who care about Wisconsin’s future – are signed up for Arts Wisconsin’s Legislative Action Center and as a Facebook “fan” to get the latest info, and up-to-the-minute information will be available on our website and using our Arts Activist Center. Thanks for your good work. Keep in touch with questions, comments, thoughts, and ideas. Remember: don’t mourn, organize!

Part #1: Here’s how the proposed budget would affect the arts, arts education and creative economy in Wisconsin:

Wisconsin Arts Board

The big news is that the Wisconsin Arts Board’s budget will be reduced by 58%, severely reducing its ability to serve the people of Wisconsin. Here are the numbers:

Governor’s Budget Action – Arts Board

| | FY 11 | FY 12 | Change | % | Notes |
| --- | --- | --- | --- | --- | --- |
| General Purpose Revenue (GPR) | 2,417,700 | 759,100 | -1,658,600 | -68.6% | General state funding |
| Program Revenue – Federal | 759,100 | 759,100 | | | Funds from the National Endowment for the Arts |
| Program Revenue – State | 525,600 | 24,900 | -500,700 | -95.3% | Percent for Art Program eliminated |
| Program Revenue – Other | 20,000 | 20,000 | | | Other Gifts or Grants Received |

The details are:

- Match GPR Funds to Federal Funds. “The Governor recommends reducing expenditure authority to match GPR appropriations to PRF appropriations in the amounts shown to balance the budget.” A state must have a state arts agency in order to receive funding from the National Endowment for the Arts, and it must be able to match the funds it receives. Until now, the State of Wisconsin has invested more than its federal award in the publicly valued programs and services of the Arts Board.
- Consolidate the Arts Board into the Department of Tourism. The Arts Board would cease to be an agency attached to Tourism for administrative purposes. Governor Walker’s budget would consolidate the Arts Board and make it a program of the Department of Tourism. The result of this action will be the elimination of six employees, the transfer of four employees to Tourism, and the Arts Board and its executive director now reporting to the Secretary of Tourism. The details of this reporting structure are unclear and will need further explanation from the Governor and/or the Department.
- Elimination of the Percent for Art Program. “The Governor recommends eliminating the Percent for Art program and associated expenditure and position authority to balance the budget.” The Percent for Art program would cease to exist. While no new public art projects would be begun, it is unclear if the Governor intends to void existing contracts.

The Governor’s Budget in Brief has this to say about the consolidation: “Transfer the Arts Board to the Department of Tourism to help focus support for the arts and grow the economy.” The Arts Board’s section of the budget says this: “The Governor recommends eliminating the board as a separate agency and consolidating its responsibilities, functions, positions and assets into the Department of Tourism to increase operational efficiency, improve effectiveness and promote tourism development. The Governor also recommends transferring funding and position authority to the Department of Tourism for the support of the arts functions, which include arts community and economic development services, grant administration, initiatives in arts education and in underserved communities, and the Folk and Traditional Arts program.” In addition to the severe Arts Board cuts, the state budget reduces funding for education, local governments, the UW System, and technical colleges.
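The headline 58% figure quoted for the Arts Board can be reproduced from the line items in the budget table above: it is the change in the agency’s combined budget across all four revenue sources, whereas general state funding (GPR) alone falls 68.6%. A quick sanity-check sketch in Python (the variable names are mine, not from the budget documents):

```python
# Sanity check on the Arts Board budget figures quoted above.
# Each entry: (revenue source, FY11 amount, FY12 amount), in dollars.
lines = [
    ("General Purpose Revenue (GPR)", 2_417_700, 759_100),
    ("Program Revenue - Federal",       759_100, 759_100),
    ("Program Revenue - State",         525_600,  24_900),
    ("Program Revenue - Other",          20_000,  20_000),
]

fy11 = sum(a for _, a, _ in lines)
fy12 = sum(b for _, _, b in lines)
cut_pct = (fy11 - fy12) / fy11 * 100

print(f"FY11 total: {fy11:,}")        # 3,722,400
print(f"FY12 total: {fy12:,}")        # 1,563,100
print(f"Overall cut: {cut_pct:.0f}%") # 58%
```

The two percentages describe different bases: the deeper GPR cut is partially masked in the overall figure because the federal pass-through and small gift funds are held flat.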
The specific effects are currently unknown, but we are pretty sure that they will mean reduced access to the arts and arts education for Wisconsin’s students, since too often the arts are the first thing to go when budgets are tight. David Brooks, in yesterday’s New York Times op-ed “The New Normal,” said, “…legislators and administrators are simply cutting on the basis of what’s politically easy and what vaguely seems expendable. In education, many administrators are quick to cut athletics, band, cheerleading, art and music because they have the vague impression that those are luxuries. In fact, they are exactly the programs that keep kids in school and build character.” (Read the full op-ed here). Additional information about the budget’s impact on the arts will be coming to you soon.

Part #2: The process:

Now that the Governor has released the budget, the bill goes to the Joint Finance Committee (click here for the list of JFC members) for review. (If your legislator is a Joint Finance Committee member, it will be especially important to connect with them in this process.) After that, the Senate and Assembly each will have an opportunity to edit and revise, after which the budget bill will go to a “conference committee” made up of senators and assemblypeople for final review. The Governor has a last chance for review (with the power to make significant changes) before signing the bill into law. The budget must be signed by June 30 since the fiscal year starts on July 1. Click here for more on “How a Bill Becomes Law.”

Part #3: You have the power to make change. But where to start?

1) You can send an email message urging support for the Wisconsin Arts Board using Arts Wisconsin’s Legislative Action Center.
2) Think about contacts and connections, for yourself and your colleagues and friends, and how those people might be connected to your legislators. Those are the people who should help advocate for this cause.
Start getting in touch with them to talk about educating your elected officials. 3) Plan to make an appointment for you and colleagues to meet with your state Senator and Representative as soon as possible. Legislative contact information is below. Arts Wisconsin will be happy to help you achieve these meetings. Get in touch with Anne Katz, Executive Director, to discuss the details. 4) Start gathering your stories, information and data about the impact of the arts as part of the solution, and the need for creativity, innovation and entrepreneurship in our local and state economies, jobs in the creative sector, infusing the arts into education for all Wisconsin students, and keeping our communities healthy and vibrant by ensuring access to the arts for everyone, everywhere in the state. You will educate legislators using: - Stories (with pictures, if possible) about the ways in which the arts have had an effect on economic vitality, educational advancement, civic engagement, and healthy communities, in your community - Information about programs and services supported and enjoyed by the community - Data about the number and scope of the people involved in the arts in your community Administration/Legislature contact info: - Click here for Gov. Walker contact info - Click here for a list of Gov. Walker’s Cabinet secretaries. - Information on the State Legislature Making the case – issue briefs to share: - Facts and figures on the arts, arts education and creative economy locally and globally - State arts issues Click here for a recent Wausau Daily Herald op-ed about the critical need for Wisconsin to invest in 21st century development strategies and opportunities. Go to Arts Wisconsin’s Arts Activist Center for more information and ways to speak up for the arts. Arts Action Alerts are a service of Arts Wisconsin and its Legislative Action Center. 
Arts Wisconsin provides timely and critical information and actions on local and global arts, community and government issues throughout the year. Please forward this email on to colleagues and peers who should have this information, so they can also stay in touch and involved. If you are not already a member, please support Arts Wisconsin’s statewide advocacy, service and development work so that we can continue to do our work on your behalf, and so that everyone, everywhere in Wisconsin can continue to participate in and benefit from the arts, culture, creativity and innovation. Many thanks!
CEA’s affordable and flexible home earthquake policies protect your home before the big one strikes. Make a plan for reuniting after the disaster in case family members are separated from one another during an earthquake (a real possibility during the day, when adults are at work and children are at school). The purpose of the kit is to provide basics in case a major earthquake strikes while you are on the road, or you are directed by a civil authority to leave your home quickly. The contents of your home may be damaged and can be dangerous: shaking can make light fixtures fall, refrigerators and other large items move across the floor, and bookcases and television sets topple over. Earthquake Mitigation introduces earthquake hazards and describes measures that can help reduce the risk to life and property should an earthquake occur at the school. Check work, childcare, and school emergency plans. Surviving an earthquake and reducing its health impacts takes preparation, planning, and practice. Get under a desk or table and hang on to it (Drop, Cover, and Hold On!). Basic first aid kits should include the following items: In times of emergency, it is important to have tools available that will help you turn off the water or gas, mend broken appliances, and heat water. Find out if CEA residential earthquake insurance is right for you with a free estimate. Your home is more likely to withstand an earthquake when its structural elements — walls, roof, foundation — are firmly connected to one another. Preparing for an earthquake largely revolves around the … Hashtag: #EarthquakeSafety #EarthquakePrep. What can I expect in my house when an earthquake occurs? Learn about the potential geologic and structural threats to your home in case of a major earthquake. Most experts say that we should have at least three days’ supply of food, water and supplies set aside in case of an emergency. What should I NOT do during an earthquake? Keep in mind this is not a go-kit.
Develop an evacuation plan and a sheltering plan for yourself, your family, and others in your household. A basic disaster supply kit safety checklist should include:
- A gallon of water per person per day, to last several days
- At least a three-day supply of non-perishable food that can be prepared without gas or electricity
- Flashlights with extra batteries
- Sturdy leashes, harnesses and carriers for transporting pets
- Doggie disposal bags and disposable gloves
- Medications and copies of medical records
- Current photos of you with your pet(s) in case they get lost
- Information on feeding schedules, medical conditions, behavior problems and contact information for your vet in case pets have to be fostered/boarded

The Sacramento Bee newspaper priced the cost for a basic earthquake safety kit below: The key to being safe during an earthquake is preparation. It is important to choose foods that will not increase thirst in your earthquake survival kit. It may be hard to find the supplies and services we need after this earthquake. Keep sturdy shoes near your bed. Pets should be current on their vaccines in case they end up in a shelter. Ask yourself if your cupboard doors could fly open (allowing dishes to...). DO NOT turn on the gas again if you turned it off; let the gas company do it. DO NOT use matches, lighters, camp stoves or barbecues, electrical equipment, or appliances UNTIL you are sure there are no gas leaks. A water-resistant or waterproof tarp may be needed for shelter, or to protect property from the elements or contain debris after an earthquake. Make available in every room of your home a pack of glow sticks and simple flashlights, which are easy to carry and store. After a disaster, it's often easier to call long distance. We Californians spend a lot of time in our cars: commuting, running errands, enjoying the outdoors. Definitions: Aftershock - An earthquake of similar or lesser intensity that follows the main earthquake.
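The water guideline in the checklist above reduces to simple arithmetic: one gallon per person per day, for at least three days. A minimal sketch (the function name and the four-person household are my own illustrative choices, not from the source):

```python
def water_gallons(people: int, days: int = 3) -> int:
    """One gallon of water per person per day, defaulting to the
    commonly recommended three-day (72-hour) supply."""
    return people * days

# A hypothetical four-person household stocking the default 3-day supply:
print(water_gallons(4))     # 12 gallons
# The same household preparing for a full week instead:
print(water_gallons(4, 7))  # 28 gallons
```

Scaling the same multiplication to food and other consumables, and refreshing the stock every six months as noted below, keeps the kit current.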
Be sure that each member of your family knows what to do during an earthquake. Earthquake Emergency Action Plan introduction: this Emergency Action Plan (EAP) outlines the appropriate actions that employees, students, and visitors at Wright State University should take before, during, and after an earthquake. Organize disaster supplies in convenient locations. The best time to prepare for an earthquake is before it happens. Presented by Mark Benthien, Southern California Earthquake Center (SCEC). SCEC is funded by the National Science Foundation (NSF) and the United States Geological Survey (USGS); [email protected]; 213-740-0323; www.scec.org; www.shakeout.org; with content adapted from a presentation by Jill Barnes and Bob Spears, Los Angeles Unified School District. Creating an earthquake kit, or supplementing a pre-made kit, does not have to be expensive. Review the plans and make sure that everyone understands them. Develop an emergency communication plan: in case family members are separated from one another during an earthquake (a real possibility during the day, when adults are at work and children are at school), develop a plan for reuniting after the disaster. Identify the actions that are included in an earthquake safety program. Your earthquake preparedness plan should outline evacuation routes and reunion locations. What emergency supplies do I need for an earthquake? These are recommended by the Earthquake Country Alliance, in which USGS is a partner. Build on the basic first aid kit and print out a copy of the list to keep with your first aid items. Expect the unexpected: make a plan and prepare a disaster kit for your family and pet(s). If you haven't already done so, put together an emergency supply kit. These drills are an important reminder that we live in earthquake country, and there are actions we can take to prepare today.
Far in advance, you can build an emergency kit, identify and reduce possible hazards in your home, and practice what to do during and after an earthquake. A workplace should follow accepted earthquake safety guidelines, and also have in place a personalized, well-rehearsed plan to help safeguard the organization during an earthquake. General guidelines recommend storing enough food, water, and gear for three days (72 hours) per person in your household. USGS science provides part of the foundation for emergency preparedness whenever and wherever disaster strikes (see the USGS Coastal and Marine Hazards and Resources Program and USGS resources on earthquake magnitude, energy release, and shaking intensity). Consider how you will respond to emergencies that can happen anywhere, such as home fires and floods. Step 3: Organize disaster supplies in convenient locations. Hold on until the shaking stops. Follow these earthquake kit tips to create an emergency preparedness kit filled with survival supplies to keep your family prepared for the next big one. Identify an out-of-the-area friend or relative that family members can check in with. It is likely we will experience a severe earthquake within the next 30 years. Plan to be safe by creating a disaster plan and deciding how you will communicate in an emergency. Preparedness includes planning for an earthquake before it occurs, equipping workers with information and emergency supply kits, training, and implementing preparedness plans. Identify: look around your house for things that could fall or move. Store an all-purpose earthquake kit in a shed outside your house in case the house itself is unsafe to enter. Remember to refresh water and food items every six months.
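The 72-hour guideline above lends itself to simple arithmetic. The sketch below sizes a household kit from the per-person rates cited in this guide (1 gallon of water per person per day, three days of supplies); the function name, the meals-per-day rate, and the case-of-bottles conversion are illustrative assumptions, not official checklist figures.

```python
# Illustrative sketch: sizing a 72-hour earthquake kit from per-person rates.
# Rates follow the common guidance in this guide (1 gal water/person/day,
# three days of non-perishable food); the rest is assumed for illustration.

import math

def kit_quantities(people: int, days: int = 3) -> dict:
    """Return rough supply quantities for a household kit."""
    return {
        "water_gallons": people * days,      # 1 gallon per person per day
        "meals": people * days * 3,          # assume 3 meals per person per day
        "flashlights": people,               # one flashlight per person
        # ~3.785 L per gallon; a 24 x 500 ml case holds 12 L
        "water_cases_24x500ml": math.ceil(people * days * 3.785 / 12),
    }

# Example: a household of four, using the default 3-day (72-hour) guideline.
print(kit_quantities(4))
```

Remember that these totals cover only the household kit; the car kit and any pet supplies are sized separately.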
This handbook provides information about the threat posed by earthquakes in the San Francisco Bay region and explains how you can prepare for, survive, and recover from these inevitable events. Whether you are a homeowner, mobilehome owner, condo-unit owner, or renter, there is a policy to fit your needs and budget. Text messages often go through when regular phone calls won't, so don't give up if you can't make a call. Earthquake preparedness should be a high priority. With additional planning and preparation, the children in your care will have a better chance of surviving an earthquake unharmed. Step 4: Minimize financial hardship by organizing important documents, strengthening your property, and considering insurance. Creating an earthquake kit, or supplementing a pre-made kit, is an opportunity for your family to talk about what you would do when a major earthquake strikes. Aftershocks frequently occur minutes, days, weeks, and even months following an earthquake. A separate earthquake policy is required to cover shaking damage. According to the Centers for Disease Control and Prevention, steps toward disaster preparedness for your pet(s) should include making sure they wear collars and tags with up-to-date contact information and other identification. After the 1989 Loma Prieta earthquake, 12,000 Bay Area residents were displaced from their homes. Your home first aid kit can help reduce the risk of infection or the severity of an injury. What are the Great ShakeOut earthquake drills? The objectives for this unit are to identify the earthquake hazards in and around the school. Choose foods in easy-to-open or easy-to-serve packaging, and have a manual can opener in your emergency kit. In addition to your home earthquake supplies, you will need to prepare a car earthquake emergency kit.
This book is provided here to share an important message on emergency preparedness. The following earthquake preparation tips take only a few hours: create a plan and organize supplies that will keep you safer. A list of each family member's medical history, medications, doctors, insurance company, and contact persons should be readily available. The disaster plan should include a 24-hour, 7-day-per-week communications network with internal and external components. No one can predict when or where an earthquake will strike, but you and your family can be prepared before the next big one hits. Each time you feel an aftershock, DROP, COVER, and HOLD ON. Make your building more earthquake-resistant, include earthquake preparedness in your emergency plans, and teach family members what to do. First, take the time now to practice what to do when earthquake shaking starts: Drop, Cover, and Hold On. Create an earthquake safety plan for you and your loved ones that includes a stay-in-place safety kit. Of those whose dwellings remained intact after the Loma Prieta earthquake, many were without water, electricity, and phone service for days. When the magnitude-7.1 Ridgecrest quake hit Southern California in 2019, several fires started, chimneys collapsed, and there were breaks in water mains. Earthquakes, tsunamis, volcanoes, and other catastrophic events can strike suddenly, on a massive scale, over a wide area, with a death toll in the tens of thousands. Japan was devastated by a magnitude-9 earthquake in 2011; a natural disaster of that level would be destructive anywhere in the world, and the idea that it could happen right in our front yard makes the threat more real and personal. After an earthquake, the disaster may continue. Get trained in first aid and CPR. Plan what to do during an earthquake: practice "Drop, Cover, and Hold On" to be safe. Get out of the kitchen, which is a dangerous place (things can fall on you). No one can predict when an earthquake may strike, but you can be proactive and be prepared.
Pick safe places in each room of your home. CEA, 801 K Street, Suite 1000. Visit CEA's faults-by-county tool. Consider how you will respond to emergencies that are unique to your region, such as volcanoes, tsunamis, or tornadoes. Have contact names and numbers printed out and kept in your kit. Most of us live within 30 miles of an active fault. Great ShakeOut 2019: Drop, Cover, and Hold On! Schedule drills with your family to practice your earthquake safety plan. Our policies: most standard home insurance policies and renters insurance don't include earthquake coverage. Plans should explain building evacuation routes to residents. Additionally, traditional communication systems (i.e., telephone networks) may not function in an emergency or disaster. Without earthquake insurance, you will be responsible for all of the costs to repair or replace your belongings after a major earthquake. While an earthquake safety kit will be of help after an earthquake, nothing replaces the conversations you have with your family members before an earthquake. Each time there is a major disaster, lives are lost. Many kit items are inexpensive and can be found at many big-box stores. Make or purchase an earthquake safety kit. https://www.wikihow.com/Prepare-Your-Family-for-an-Earthquake One simply has to turn on the television or read a newspaper to hear about the latest disaster. Expect and prepare for potential aftershocks, landslides, or, if you live on a coast, even a tsunami. Also, make sure all of your family members know when and how to contact 9-1-1. Pack an emergency supply kit that addresses all daily needs and includes a family plan with instructions and information for contacting others (include a reminder to use text messages, if possible, to keep emergency call lines open). Keep in mind this is not a go-kit.
Secure your space by identifying hazards and securing moveable items. Keep a flashlight with extra batteries in every room. Learn how to protect yourself no matter where you are when a disaster strikes. Alternate routes may be required, depending on the type of emergency. Your earthquake emergency kit will make sure you have all you need at your fingertips to address any injuries until help arrives. Be sure to identify safe places in each room of your home. Earthquake social media messaging: include key messages about earthquake preparedness when creating content for social media posts. Your plan should also include an out-of-state contact person's name and number, the location of your emergency supplies, and other pertinent information. Try to keep the refrigerator and freezer doors closed as much as possible to avoid spoilage. Experts say it is very likely there will be a damaging San Francisco Bay Area earthquake in the next 30 years, and that it will strike without warning. Learn more: Preparedness Information - Earthquake Hazards Program. When disaster strikes, power sources are one of the first things to go, and they can often stay off for weeks at a time. CEA policies offer extended choices of coverage and deductibles. Cellphones and tablets are great survival tools, because you can download all kinds of useful information and use it for reference in times of need. CHA's Hospital Preparedness Program has developed a tool to help hospitals activate their Emergency Operations Plan (EOP): the Hospital Activation of the Emergency Operations Plan Checklist, with step-by-step instructions for activation and for the Hospital Incident Command System (HICS). Remember: when shaking happens, get under a desk or table and hang on to it (Drop, Cover, and Hold On).
Most standard home insurance policies and renters insurance don't include earthquake coverage; a separate earthquake policy is required to cover shaking damage, and without it you will be responsible for all of the costs to repair or replace your belongings after a major earthquake. California is earthquake country, with nearly 16,000 known faults, and most of us live near one: in 2019, a magnitude-6.4 earthquake and then the magnitude-7.1 quake centered near Ridgecrest were felt widely in Los Angeles and beyond.

Expect that roads, electricity, cell service, police, and fire services will be impacted and most likely interrupted after a major quake. Once the power goes out, a radio may be your only connection with the outside world, so include one in your kit, along with a portable charger holding multiple charges to prolong the life of your phone. During shaking, get under a sturdy table or desk, or move into a hallway or against an inside wall, away from anything that could fall, such as heavy furniture or appliances. Many earthquake injuries come from stepping on broken glass and from burns; afterward, wear sturdy shoes to avoid injury from broken glass and debris. If your home was built before 1980, it may be more vulnerable to serious structural damage.

Store your kit in a box, backpack, or earthquake bag, keep contact names and numbers in it, and identify an out-of-state relative or friend to serve as the family contact; after a disaster, it is often easier to call long distance. Keep pets microchipped and outfitted with current ID tags, keep their vaccines current in case they end up in a shelter, and arrange for friends, relatives, boarding facilities, animal shelters, or vets to take care of your animals in an emergency. Keep flashlights with extra batteries and snacks available, decide what to do and where to meet if separated, and practice your plan.
Hatim Ahmed Alghamdi

The purpose of this study is to establish whether children's trust in people forms at the early stages of life and to what extent it affects their future. The work compares the factors that shape a child's ambition to take the initiative, be active in the world, and feel confident, and shows how all of that is largely determined by the experience of interaction with his or her mother and father from the first days of life. The work includes analysis of experimental projects on the education of children in unusual surroundings and poor environments. In the studies reviewed, researchers pay attention to the early stages of the formation of a child's character, especially the child's surroundings and the ways people treat and communicate with and around him or her. It elaborates on the important influences and difficulties children may face at the early stages. In conclusion, although the relationships between children and their educators are highly individual, they can all be steered in the right direction in order to bring up a bright personality with goals and love for his or her surroundings. Keywords: childhood, parents, behavior, development, critical

Being a parent is an extremely responsible task, and understanding the weight of this role directs children's futures accordingly. Kids come into this world tiny and fragile, seeking to become someone. From the very beginning, in the first years of life, the basis for the future is laid. Parents cannot rely only on intuition to instruct and punish; they can shape and train a child to be a well-rounded individual, though it is necessary not to overdo it either. All the effort expended and time spent shapes the fate and future adulthood of the kid. Our main issue is the impact of educators during the early, critical years of a child's life. Children who receive enough love and attention can be raised as decent citizens and human beings; the first steps are especially important.
At this time, it is extremely important to have economic, social, and educational support.

Criteria of the Studies. Ten recent empirical studies were included, covering experiments in regions of Africa, Asia, and America. They all focused on children of an early age; however, some of them also considered the consequences faced by grown-ups and the involvement of teachers and parents. Four studies emphasized creative approaches in children's education (Diamond, 2014; Guo, 2015; Ramirez, 2015; Yildirim, 2010), while other studies examined the role of modern technologies in children's education (Blackwell et al., 2014; Cohrssen et al., 2014; Justice, 2015). Furthermore, there are studies on child growth in unusual environments and its consequences (Fotso et al., 2012; Podesta, 2014; Yazejian, 2013). The methods used included teachers' interviews, self-reports, and a mood and feelings questionnaire. For a thorough discussion of educators' influence during the early years, the studies were grouped by topic: the definition of the early years, impacts on the child's learning, specific recent research, evolving issues as children grow up, and the role of technologies and of support.

Early Years. According to recent studies, the "early age" is the period in a person's life when the main character traits are formed and the level of vulnerability is so high that circumstances can shape this person's ambitions, relationships, and future (Podesta, 2014). The early age encompasses the first years of meeting the world, the experiences one faces, and the transition to conscious perception of the world and the people around. The conditions and relationships children have at an early age suggest several basic factors that reflect the issue. That is why insight into parents' methods and techniques, hidden and exposed, is important for the development of a self-contained, independent human being.
More precisely, it is the age when a child is most subject to the environment he or she is in: roughly from birth to 5-7 years old.

Impacts on the Child's Learning. A young person's interest in learning, and the encouragement of it, rely on a variety of factors: social status, family income, access to modern technologies, health, and, of course, the level of attention and the awareness of parents and close ones of the importance of focusing on the broad development of a child at an early age. The main effect on a child's learning comes from the parents: warmth, love, and encouragement can give more than any sophisticated technique, although such techniques are still in use. The work of Yazejian, Bryant, Freel, and Burchinal (2015) suggests that the primary factors for development emerge at home and in educational institutions. What matters here is the quality and quantity of communication: the more care a child receives, the more opportunities for growth are gained, the better person he or she will become, and, as a result, the happier he or she will feel. The research systematized data from the Educare system and twelve institutions in a ten-year longitudinal study, from 2003 to 2013, that assessed children's abilities during and after early-age learning. Moreover, the authors stress that it is better to provide interactive education with strangers at an early age. Children from families where several languages are spoken show higher results than average, which suggests that interactive and unusual skills acquired from the very beginning strengthen a person's intelligence, as they require more effort from the brain. In addition, once a child misses the opportunity to enter an educational institution early, the lost time will not come back.
This study vividly shows how multilingual children profit, as they do not lose their mother tongue while gaining English, for instance. To conclude this part, the quality of the educational institution may directly influence the outlook of a child, unless the child comes from a dual-language environment (Yazejian et al., 2015). We suggest focusing first on the household environment of a child. The research by Fotso et al. (2012) shows how the mental development and health of children in African regions such as Nairobi, Kenya, are affected by different levels of poverty. Nutrition and safe household conditions play a profound role in determining a person's future. One may argue that it is unfair to judge by origins, but there is one tricky objection to that argument: access to education. Statistically, children from safe and financially stable countries have more chances to be educated well enough for future well-being. The studies depict the picture of poor surroundings fully: overcrowded settlements without adequate access to a doctor, hot meals, and clean water. It is not only a matter of basic needs, but such conditions undoubtedly contribute to child malnutrition. There is a strong connection between income and the growth of children. Poverty, including in urban areas, can never lead to a positive result; its main impact is on nutrition, and the lack of food undermines morale and physical stability, especially for children below two years old.

What Experts' Research Shows about the Importance of the Early Years. One of the main purposes of education should be the development of critical thinking: the ability to think independently and creatively in order to progress and compose one's own projects and ideas.
Creativity is a complex phenomenon and is especially in danger of vanishing, considering the homogenizing effects of media and technology, which tend to foster a primitive appreciation of the world and produce masses that are easily manipulated. The study by Yildirim (2010) found a connection between activities that temper a child's physical and intellectual state and what we call creative broadening; that positive link makes upbringing even easier. The findings also show that education should imitate, or even involve, real-life situations, so that the child will be prepared for the tests of fate. A crucial role in nurturing creativity is played by the level of education of the teacher or parent, meaning not so much the degree held or the number of books read as the emotional and artistic state of the grown-up: when a person is full of fantasies, dreams, and courage, he or she can share them with the children.

Some Important Areas of Learning as Children Grow. There is another concern about children's futures for which parents often claim responsibility: success. Will their precious child be successful in his or her job or personal life? To what extent should ambitions be encouraged, so as not to instill superfluous and artificial hopes? If promoted wisely, developing leadership skills in a child can be extremely rewarding. The study by Diamond (2014) says that using methods such as discussion, observation, and experiential learning is necessary in both early and later education. There are difficulties that stand in the way of forming a future leader. First, there is an interview on how participants respond to statements asserting that they are already leaders; responses change over the course of the program, as participants open up to difficult tasks, resolve issues, and stand up for themselves.
The main understanding they gain is that leaders are not only presidents or heads of multinational corporations, but ordinary professionals or, more aptly, engaging personalities. In order to be happy, one should run one's own life and decide what one wants and how to achieve it. Based on the above, to bring up a leader it is more useful to instill these qualities from an early age than to fracture the mentality once the character has already formed. Under constructionist theory, communicating by discussing with friends or colleagues, building opinions, and developing a whole system of values and a picture of the world all aid development. Moreover, literature, used in an empirical way, has a great impact. We are always told that reading is a key and a remedy for everything; is that true? The study of the read-aloud educational method is an answer. This practical experiment encompassed a variety of students from rural, suburban, and urban areas, so that each sector was covered. During the study, a six-to-six model was used so that each disabled child had an assigned peer. The results show many positive outcomes: print-focused read-alouds improve literacy skills, develop imagination and fantasy, and encourage children's own storytelling and artistic skills. This tool is effective in overcoming a refusal to read and in everything that relies on grammar, writing, and speaking opportunities. It also affects non-verbal relations, which gives chances to children with lower levels of cognition. Being literate is the core step in starting an education; no technology can substitute for reading a fairy tale before going to sleep and seeing magical dreams (Justice, Logan, Kaderavek, & Dynia, 2015). There is also a finding on the advantages of involvement in mathematics activities, where the approach is similar. The study by Cohrssen, Church, and Tayler
(2014) analyzes children's answers while they study and concludes that a close connection to the educator is the main principle of effective learning. A safe environment, trust, humor, and light interaction stimulate the learning process. Children's minds and attention may be won by sensitive attention to their interests and points of view. It is important to combine learning time and leisure, provide breaks, and engage with the child's hobbies and extracurricular activities.

Necessary Support for Children. There are different techniques for approaching a child; those who insist that a child should be guided all the time will not bring up an independent individual, yet there are spheres in which a child should get specific guidance and then choose for himself or herself. However hard parents may try, the possibilities of one family are always limited. When we create a family, we settle on one soil, in one culture, with all the positive and negative consequences of that (unless, of course, your family is from the diplomatic corps). That means there is a need to provide the missing elements of other cultures and traditions. The findings of Guo (2015) elaborate on special programs that help to educate children not only as decent human beings but as modern members of society as well. This way, in the long term, issues of social injustice, of sexual, gender, and racial prejudice, and of cruelty can be addressed and overcome. The study concludes that attention to the environment of every single child is necessary, and it is especially needed for children from minority groups. Such curricula should include subjects and activities that instill values of understanding and respect. The example of a boy from the program showed that the institution by itself cannot provide effective skills; children need a harmonious approach from their teachers.
For this idea to work effectively, the social constructivist approach is suggested. Thus, there is work for everyone in the circuit: educators, whether teachers or parents, and, obviously, the children. The Role of Technology in Early Childhood. This problem should also be viewed from the teacher's side. Can technology substitute for real-life communication? Is it an economy of time and money, or simply severe degradation? There is, for example, extraordinary research on how the findings of the 20th and 21st centuries can shape a more mature personality. Its conclusion is that technology can indeed stimulate study, which means a teacher may implement Information and Communication Technology (ICT) in the schedule on a par with other activities and lessons. Such classes also require less effort from teachers and parents. In addition, the only way for new-wave technologies to work effectively is in combination with traditional teaching methods, that is, with direct communication with adults. This accustoms pupils to the real world's environment and does not cause risky addiction to the computer. Moreover, the study found no evidence that any of the basic communication skills develop better than with traditional ways of learning. First, the typical view that the less experienced the teacher, the more technology he will use, is dispelled. The statement has its logic, but only in the sense that the younger generation is used to technology, having been born in this era. Second, confidence grows with the frequency of digital technology use in the classroom: the more a teacher engages in projects, the better he or she feels about the result. In the end, the proven factors included practice, years of teaching, individual attitude, and the availability of technology.
These exogenous variables ultimately have a strong influence on the use of digital methods, while the so-called endogenous ones have a secondary impact. Digital technologies thus have two-sided effects (Blackwell, Lauricella, & Wartella, 2014). Summary and Assessment of the Role of Parents. Education in the family, like life itself, and even our behavior and our feelings for children, is complex, volatile, and contradictory. In addition, parents are not alike; they differ from one another. The relationship with a child, as with any person, is deeply personal and unique. If, for example, parents seem perfect, knowing the right answer to every question, they are unlikely to accomplish the very first parental task: to instill in a child the need to search independently for new knowledge, which is necessary for the critical appreciation of the world mentioned above. Parents form the first social environment of the child, and their personalities play an essential role in the child's life. It is no secret that in difficult times we turn to our parents, especially the mother. This is a special feeling, different from other emotional ties, that colors the relations between children and parents. The specificity of these feelings is determined mainly by the fact that care is necessary for a child to stay alive, including food, medical supplies, and household provision. A shortage of parental love is a truly vital deprivation for every little human being. The love of each child for his parents is boundless, unconditional, and unlimited. And if in the first years of life parental love secures the child's life and safety, then in maturity it increasingly acts as a support for inner security and the emotional and psychological well-being of a person. Parental love is also the source and a guarantee of human welfare that supports physical and mental health.
That is why the first and main task of parents is to give the child confidence that he is loved. Most educators' failures come from an inability to connect the approaches. Those who succeeded either kept to traditional organizational methods, used the stage's typical content as a focus for tasks with and without related ICTs, experimented with their own technological setups to implement classroom activities, or used ICT as a stage for training basic digital abilities. Consequently, it is better to use modern techniques while communicating in real time (Ramírez, Martín-Domínguez, Orgaz, & Canedo, 2015). Another study emphasizes the need to introduce technology to children through teachers and examines the basic factors that affect the adoption of technology. It explored the connections among the factors behind the use of digital innovations. It even notes that young children can go far beyond their educators; nevertheless, people who understand the risks and benefits should carefully oversee a child's first encounter with technology. In addition, because of religious and social influences, there are views that male children should be raised as the strong ones, those who are not allowed to weep or cry. Even with the girl power that has been in fashion for the last 20 years in the Western world, the burden of responsibility is still placed on the man; in class, hard organizing tasks are often given to the boys. This problem has its roots in the vision that leadership is the same as commanding and operating. On the contrary, the core approach is to support children equally in their capacity to learn on their own. Experience and practice are always better than merely providing tools and direct instructions on how to solve tasks. There should be room for personal hesitation and trial.
Children rely on their habits and reflexes; however, compared to adults, these are not yet stable, so there are more options in how to act and, as a result, a chance to learn from mistakes. The experiment conducted in the study consists of several stages. The secret of nurturing creativity is stimulating the talents and spirit that everybody has. Therefore, despite commercial stereotypes, children can learn to be talented, yet the right way requires a special attitude. There are special guides for these purposes, but it is more urgent to examine the child himself: his interests, the features of his character, simply the things he likes and dislikes. The educator should use all the emotional and intellectual potential he has and pass it to the child, who, in fact, will absorb the information immediately. It is also good for the parent or educator not to limit the child to communication with himself alone; it is better to introduce different people from different groups, old and young, male and female, as long as the effect is positive. Further Research and Studies. Practical studies indicate that there are many options for developing a person at the early critical stages. However, factors such as financial and social circumstances can form a complex obstacle along the way. Further research should include practices of interactive non-verbal education, with detailed suggestions for games and techniques. It can also focus on the theatrical methods that actors use in the elaboration of behavior.
References
Blackwell, C. K., Lauricella, A. R., & Wartella, E. (2014). Factors influencing digital technology use in early childhood education. Computers & Education, 77, 82-90. doi:10.1016/j.compedu.2014.04.013
Cohrssen, C., Church, A., & Tayler, C. (2014). Pausing for learning: Responsive engagement in mathematics activities in early childhood settings. Australasian Journal of Early Childhood, 39(4), 95-102.
Diamond, A. (2014). Pre-service early childhood educators' leadership development through reflective engagement with experiential service learning and leadership literature. Australasian Journal of Early Childhood, 39(4), 12-20.
Fotso, J. C., Madise, N., Baschieri, A., Cleland, J., Zulu, E., Kavao Mutua, M., & Essendi, H. (2012). Child growth in urban deprived settings: Does household poverty status matter? At which stage of child development? Health & Place, 18(2), 375-384. doi:10.1016/j.healthplace.2011.12.003
Guo, K. (2015). Teacher knowledge, child interest and parent expectation: Factors influencing multicultural programs in an early childhood setting. Australasian Journal of Early Childhood, 40(1), 63-70.
Justice, L. M., Logan, J. R., Kaderavek, J. N., & Dynia, J. M. (2015). Print-focused read-alouds in early childhood special education programs. Exceptional Children, 81(3), 292-311. doi:10.1177/0014402914563693
Podesta, J. (2014). Habitus and the accomplishment of natural growth: Maternal parenting practices and the achievement of 'school-readiness'. Australasian Journal of Early Childhood, 39(4), 123-130.
Ramírez, E., Martín-Domínguez, J., Orgaz, B., & Canedo, I. (2015). Analysis of classroom practices with an ICT resource in early childhood education. Computers & Education, 86, 43-54. doi:10.1016/j.compedu.2015.03.002
Yazejian, N., Bryant, D., Freel, K., & Burchinal, M. (2015). High-quality early education: Age of entry and time in care differences in student outcomes for English-only and dual language learners. Early Childhood Research Quarterly, 32, 23-39. doi:10.1016/j.ecresq.2015.02.002
Yildirim, A. (2010). Creativity in early childhood education program. Procedia Social and Behavioral Sciences, 9, 1561-1565. doi:10.1016/j.sbspro.2010.12.365
Cannabis use and the immune system: white blood cell count
A study has shown that total white blood cell count is higher among heavy cannabis users than among non-users. The study, published in the Journal of Cannabis Research, reviewed a number of studies covering cannabis use and the immune system, noting that little is known about circulating white blood cell counts and cannabis use. The researchers examined the National Health and Nutrition Examination Survey (2005–2016), a survey designed to be nationally representative of the non-institutionalised United States population, and found a modest association between heavy cannabis use and higher white blood cell count, but that neither former nor occasional cannabis use was associated with total or differential WBC counts.
White blood cells
White blood cells, which originate in the bone marrow, are the cells in our body that function mainly as immune cells. Cigarette smoking is known to generate several chemicals implicated in oxidative stress pathways and systemic inflammation; elevated white blood cell counts in tobacco cigarette smokers are well documented, whereas tobacco abstinence is associated with a sustained decrease in white blood cell count. The study highlights how cannabis mediates its effects through the cannabinoid-1 (CB1) and cannabinoid-2 (CB2) receptors. CB2 receptors can be found in numerous parts of the body related to the immune system, including bone marrow, thymus, tonsils and spleen. CB1 receptors are present in the central nervous system, and at lower levels in the immune system.
Cannabis use and the immune system
The effects of cannabinoids on hematopoiesis and immune cell proliferation have been widely demonstrated in animal and cell-based models, and a number of studies have examined the association between cannabis use and white blood cell counts in human immunodeficiency virus (HIV).
The studies have shown a higher white blood cell count in HIV-positive men who used cannabis. Last year a study discovered that certain cannabinoids enhance the immunogenicity of tumour cells, rendering them more susceptible to recognition by the immune system. This discovery is important because the leading class of new cancer-fighting agents, termed ‘checkpoint inhibitors’, activates the immune system to destroy cancer cells. Enhancing recognition of cancer cells with cannabinoids may greatly improve the efficacy of this drug class. The Pascal study was the first to identify a mechanism by which cannabinoids may provide a direct benefit in immunotherapy. On white blood cell counts, the study noted: ‘Several of the important study limitations merit attention. The observational nature of the study constrained causal inferences. Even though NHANES collects blood and urine specimens, drug testing is not conducted, and cannabis use was self-reported which may lead to non-differential misclassification bias. There was no available information on the route of administration of cannabis (smoking, ingestion, etc.) or cannabis preparation/potency.
‘In addition, the study is based on fairly recent NHANES surveys (2005–16) which might be more representative of the increasing cannabis potency compared to NHANES III (1988–1994) surveys.’
A number of laboratory studies have reported suppression of immune responses with cannabinoid administration, and some epidemiological studies found lower levels of inflammatory biomarkers such as fibrinogen, C-reactive protein and interleukin-6 in adult cannabis users. The study also noted that the reported anti-inflammatory effects of cannabis were greatly attenuated when body weight was controlled for, and suggests that the inverse cannabis-body weight association might explain the lower levels of circulating inflammatory biomarkers in adult cannabis users.
Correlation is not causation
The study highlights that these alterations of immune responses by cannabis use might be associated with increased susceptibility to infections, and hence the higher white blood cell count. However, it notes that it is possible that the elevated white blood cell count and suboptimal health status contributed to cannabis use, rather than cannabis use causing suboptimal health. The study states: ‘This hypothesis, though, cannot be tested as NHANES does not collect information on cannabis use motives. Another potential mechanism can be through the effect of cannabinoids on stem cells. Pre-clinical studies suggest that cannabinoids stimulate hematopoiesis and hence this stimulation to bone marrow tissues can be associated with increased circulating white blood cell count in cannabis users.
‘Positive associations between heavy cannabis use, and total white blood cell and neutrophil counts were detected. Clinicians should consider heavy cannabis use in patients presenting with elevated white blood cell count.’
Research on cannabis use and the immune system is lacking, and the study suggests further research is needed to understand the immune-related effects of different modes of cannabis use. The study noted: ‘Research on heavy cannabis use and cardiovascular health is needed as systemic inflammation, increased cardiovascular risk and increased mortality risk have been all associated with white blood cell elevation within the normal physiologic range.
‘Studies with repeated measures are needed to study immunomodulatory changes in cannabis users, and whether the mode of cannabis use can differentially affect immune responses.
‘Additional research is needed to understand the immune related effects of different modes of cannabis use and to elucidate the role of proinflammatory chemicals generated from smoking cannabis.’
Cannabis & the Immune System: A Complex Balancing Act
Cannabis sativa has been consumed for health and nutritional purposes for thousands of years. Many ancient civilizations – from the Chinese to the Greeks – included cannabis in their pharmacopoeia. Back then, no one questioned how or why cannabis relieved pain and calmed the spirits. It was a helpful ally – that's all that mattered. Fast forward to the 21st century. Scientists are trying to understand not only the molecular makeup of cannabis, but also how it interacts with the complex web of biological systems in our bodies. Yet, despite many exciting discoveries, we still know relatively little, especially when it comes to the interplay between cannabis and the immune system. Some studies suggest that cannabinoids like THC and CBD are immunosuppressant, which can explain the relief experienced by medical cannabis users with autoimmune diseases and chronic inflammation. Other studies have shown that regular cannabis use can increase white blood cell counts in immunodeficiency disorders such as HIV, suggesting an immune-boosting effect. It gets even more complicated when we consider that the effects of cannabis are mediated primarily by the endocannabinoid system, which scientists believe interacts with all biological activity, including our immune system. The bottom line is that much remains to be discovered about how cannabis affects our immune system. Here's some of what we know so far.
Our Immune System: An Overview
We are constantly exposed to infectious diseases, bacteria and viruses (antigens), all intent on running amok and wreaking havoc.
Without any inbuilt defences to keep these invaders at bay, we'd all last about five minutes on this planet. Thank goodness we have an immune system: the complex network of cells, tissues and organs, running with military precision to keep us healthy. Key players in the immune system's arsenal are white blood cells, or leukocytes, which seek out and destroy any unwanted visitors. Leukocytes can be divided into two groups: 1) lymphocytes (B cells and T cells) that destroy antigens and help the body to remember previous attackers; and 2) phagocytes that absorb and neutralize foreign intruders. Many of us are familiar with T cells because of their relationship with the HIV virus, which wipes them out; this is what makes HIV patients vulnerable to normally harmless infections. Our immune system also plays a key role in detecting malfunctioning cells inside our bodies and, through the process of apoptosis or cell death, ensures that these cells do not continue to grow and become tumors. Killing cells is a crucial element of a healthy functioning immune system, which maintains a delicate balance between growth and death. If, for example, there is too much cell death, autoimmune diseases can result, while too little can create the perfect environment for cancer.
The Endocannabinoid System & the Immune System
Optimum immune function entails a complex balancing act that relies on constant communication between our immune cells, tissues, and organs. With the discovery of the endocannabinoid system (ECS) in the 1990s, scientists found another key piece of the puzzle. The endocannabinoid system comprises two main G protein-coupled receptors (CB1 and CB2), endogenous ligands known as endocannabinoids (anandamide and 2-AG), plus the proteins that transport our endocannabinoids and the enzymes that break them down in the body.
Endocannabinoids are produced on demand, travelling backwards across chemical synapses and modulating cell activity. This partly explains why the ECS has been termed a homeostatic regulator – continually working to maintain a state of biological balance. The ECS regulates a plethora of physiological processes, including immune function and inflammation. Both CB1 and CB2 receptors can be found on immune cells, although there are between 10 and 100 times more CB2 receptors than CB1. Endocannabinoids act upon immune cells directly through the CB2 receptor. CB2 receptor activation creates an anti-inflammatory effect and is therefore a therapeutic target for autoimmune disorders and neurodegenerative disease.1 However, any ECS immunosuppressant activity is thought to be transient, and can be overridden when necessary in the presence of infection.2 Scientists know that plant cannabinoids like tetrahydrocannabinol (THC) and cannabidiol (CBD) impact our health by interacting in different ways with the endocannabinoid system. Thus, it makes sense that consuming medical cannabis will also directly affect our immune system. But researchers are struggling to understand exactly how.
Cannabis & the Immune System
When we talk about cannabis, we're dealing with upwards of 400 different molecules. These include the more frequently studied cannabinoids like THC and CBD, more than 100 other minor cannabinoids, dozens of terpenes, and a host of flavonoids – the combination of which varies according to the cannabis strain. While most work has been carried out on individual cannabinoids, in particular THC and CBD, if you're looking for some solid conclusions about how they affect the immune system, think again. THC has been the focus of the bulk of research. THC binds to the CB2 receptor and activates it, which has an anti-inflammatory effect. This suggests that THC is immunosuppressant.
Accordingly, THC is thought to show promise for autoimmune diseases, such as Crohn's and multiple sclerosis. CBD, despite little binding affinity with cannabinoid receptors, is also considered to be immunosuppressant, reducing cytokine production3 and inhibiting T-cell function4. But that's only part of the story. A new wave of research and mounting anecdotal evidence points towards cannabinoids having an adaptive, immunomodulating effect, rather than just suppressing immune activity.
Cannabis & HIV
Medical cannabis is a well-established palliative treatment for HIV thanks to the plant's ability to reduce anxiety, improve appetite, and ease pain. But recent research takes THC's role even further, suggesting that it can actually upregulate the immune system, potentially improving patient outcomes. Initially, preclinical research had corroborated the view that THC was immunosuppressant in HIV, increasing viral load and worsening the disease.5 More recent research, however, has suggested immune-stimulating effects. A 2011 study by Louisiana State University scientists revealed astonishing results when monkeys were given THC over 28 days prior to SIV infection (the simian version of the virus). THC appeared to have some kind of protective effect, lengthening the lives of the monkeys and reducing viral load.6 Additional research by the same team in 2014 took these findings one step further. This time monkeys were given THC for a period of seventeen months before SIV infection. Not only was there an increase in T-cells and a reduction in viral load, but THC appeared to have protected the monkeys against the intestinal damage commonly caused by the virus.7 These exciting results have also been replicated in humans.
In a study conducted by researchers at universities in Virginia and Florida, CD4 and CD8 white blood cell counts were compared in a sample of 95 HIV patients, some of whom were chronic cannabis users.8 Scientists discovered that both types of infection-fighting immune counts were higher in patients using cannabis, suggesting their immune systems had been bolstered by the plant.
Cannabis, Cancer, & the Immune System
Cancer will affect one in two of us at some point in our lifetime. There's no hard and fast rule for why it appears, but most cancers share the same mechanism. Our immune system is primed to spot rogue cells and, through mechanisms such as apoptosis, eliminate any that might become tumors. Unfortunately, cancer cells can outwit our immune system by getting it to work in their favour. Esther Martinez, a cannabinoid research scientist at Madrid's Complutense University, describes a kind of crosstalk between cancer cells and the immune system. "When the tumor talks with immune cells, it reverses the signal," she told Project CBD. "So, it's like, 'I'm here, and now I want you to work for me.' And instead of attacking the tumor, it gives pro-survival signals, so the immune system around the cancer goes through a change. The tumors have the capacity to shut off the immune system." With the immune system unarmed, cancer cells grow uncontrollably. Until recently, the only approved anticancer weapons have been treatments like chemotherapy, which destroy not just the cancer cells, but also fast-growing, healthy cells. It's no surprise, then, that tremendous excitement lies around the antitumoral properties of the cannabis plant, in particular THC and CBD. In fact, it was Esther's colleagues at the Complutense University, Manuel Guzman and Cristina Sanchez, who paved the way in investigating the cancer-killing effects of cannabinoids, primarily, but not exclusively, through apoptosis.
9 However, very little is known about the relationship between the immune system and cannabinoids in this process. One reason is that many preclinical trials use human tumors grafted onto immunosuppressed mice, to avoid rejection by their rodent hosts. Some studies do exist using immune-competent mice, such as Dr Wai Liu's 2014 report, which examined the effects of THC and CBD on brain tumors when combined with radiotherapy. Not only were the tumors significantly reduced, but little if any immune suppression was witnessed in the study, according to Dr Liu, a London-based research fellow and cannabinoid scientist.10 This is welcome news, as cannabinoids can also cause apoptosis in lymphocyte cells, potentially suppressing the immune system. The ability of cannabinoids to both suppress and bolster immune function lends credence to the idea that the endocannabinoid system is involved in immunomodulation, as Dr. Liu told Project CBD: "I suspect that cannabinoids are having a double-punch effect of 1) direct killing and 2) enhancing immunity by suppressing those immune cells that serve to hold back the immune-based killing cells."
Immunotherapy for Cancer
Uncertainty about the interaction between cannabinoids and the immune system raises doubts regarding the use of medical cannabis during immunotherapy. Proclaimed the wonder cancer treatment of the future, immunotherapy retrains white blood cells to detect and kill cancer in the body. Thus far, however, there has been only one study examining how cannabinoids may affect this process – and the results were problematic. Conducted at the Rambam Medical Centre in Haifa, Israel, the study found that patients taking medical cannabis alongside the immunotherapy cancer drug Nivolumab responded 50% less compared to those on immunotherapy alone.11 Curiously, subjects taking medical cannabis high in THC responded better to immunotherapy than those on a low-strength THC product.
No significant change in overall survival rates for patients was noted. There are also anecdotal reports from California cancer patients who maintain that they benefited by combining immunotherapy with a low-dose, CBD-rich cannabis oil regimen under a doctor's supervision. In addition, a small but growing body of preclinical data suggests that combining CBD and THC with conventional chemotherapy and radiation could have a powerful synergistic effect as an anticancer treatment. But these findings have not been replicated in human trials. Despite a lack of clarity regarding cannabinoids and immunotherapy, the preponderance of scientific data suggests that it's time to abandon the antiquated and misleading immunosuppressant label and embrace the idea that cannabinoids are bidirectional immunomodulators. This is what Dr. Mariano Garcia de Palau, a Spanish cannabis clinician and member of the Spanish Medical Cannabis Observatory, has seen in his practice. "I believe [cannabis] is immunosuppressive when there is hyper-immune response," says Dr. Garcia de Palau, "but otherwise it regulates and corrects the immune system. In fact, you could say it functions like the endocannabinoid system, bringing equilibrium to the organism." What does this mean in practical terms if you regularly use cannabis, have a compromised immune system, or are starting immunotherapy? Where possible, consult with your medical practitioner. In the meantime, we can only hope that more research will shed light on the complex relationship between the endocannabinoid system, our immune response, and compounds in the cannabis plant. Mary Biles is a journalist, blogger and educator with a background in holistic health. Based between the UK and Spain, she is committed to accurately reporting advances in medical cannabis research.
This is her first article for Project CBD. Copyright, Project CBD. May not be reprinted without permission.
1. Caroline Turcotte, Marie-Renée Blanchet, Michel Laviolette, and Nicolas Flamand. The CB2 receptor and its role as a regulator of inflammation. Cellular and Molecular Life Sciences. 2016; 73(23): 4449–4470. doi: 10.1007/s00018-016-2300-4
2. Rupal Pandey, Khalida Mousawy, Mitzi Nagarkatti, and Prakash Nagarkatti. Endocannabinoids and immune regulation. Pharmacol Res. 2009 Aug; 60(2): 85–92. doi: 10.1016/j.phrs.2009.03.019
3. Francieli Vuolo, Fabricia Petronilho, Beatriz Sonai, Cristiane Ritter, Jaime E. C. Hallak, Antonio Waldo Zuardi, José A. Crippa, and Felipe Dal-Pizzol. Evaluation of Serum Cytokines Levels and the Role of Cannabidiol Treatment in Animal Model of Asthma. Mediators of Inflammation. 2015; 2015: 538670. doi: 10.1155/2015/538670
4. Barbara L. F. Kaplan, Alison E. B. Springs, and Norbert E. Kaminski. The Profile of Immune Modulation by Cannabidiol (CBD) Involves Deregulation of Nuclear Factor of Activated T Cells (NFAT). Biochem Pharmacol. 2008 Sep 15; 76(6): 726–737. doi: 10.1016/j.bcp.2008.06.022
5. Roth MD, Tashkin DP, Whittaker KM, Choi R, Baldwin GC. Tetrahydrocannabinol suppresses immune function and enhances HIV replication in the huPBL-SCID mouse. Life Sciences. 2005 Aug 19;77(14):1711-22.
6. Patricia E. Molina, Peter Winsauer, Ping Zhang, Edith Walker, Leslie Birke, Angela Amedee, Curtis Vande Stouwe, Dana Troxclair, Robin McGoey, Kurt Varner, Lauri Byerley, and Lynn LaMotte. Cannabinoid Administration Attenuates the Progression of Simian Immunodeficiency Virus. AIDS Research and Human Retroviruses. Vol. 27, No. 6. https://doi.org/10.1089/aid.2010.0218
7. Patricia E. Molina, Angela M. Amedee, Nicole J. LeCapitaine, Jovanny Zabaleta, Mahesh Mohan, Peter J. Winsauer, Curtis Vande Stouwe, Robin R. McGoey, Matthew W. Auten, Lynn LaMotte, Lawrance C. Chandra, and Leslie L. Birke. Modulation of Gut-Specific Mechanisms by Chronic Δ9-Tetrahydrocannabinol Administration in Male Rhesus Macaques Infected with Simian Immunodeficiency Virus: A Systems Biology Analysis. AIDS Res Hum Retroviruses. 2014 Jun 1; 30(6): 567–578. doi: 10.1089/aid.2013.0182
8. Keen L, Abbate A, Blanden G, Priddie C, Moeller FG, Rathore M. Confirmed marijuana use and lymphocyte count in black people living with HIV. Drug Alcohol Depend. 2017 Nov 1;180:22-25. doi: 10.1016/j.drugalcdep.2017.07.026
9. Guzmán M, Duarte MJ, Blázquez C, Ravina J, Rosa MC, Galve-Roperh I, Sánchez C, Velasco G, and González-Feria L. A pilot clinical study of Δ9-tetrahydrocannabinol in patients with recurrent glioblastoma multiforme. Br J Cancer. 2006 Jul 17; 95(2): 197–203. doi: 10.1038/sj.bjc.6603236
10. Katherine A. Scott, Angus G. Dalgleish and Wai M. Liu. The Combination of Cannabidiol and Δ9-Tetrahydrocannabinol Enhances the Anticancer Effects of Radiation in an Orthotopic Murine Glioma Model. Molecular Cancer Therapeutics. MCT-14-0402. doi: 10.1158/1535-7163
11. Taha T, Meiri D, Talhamy S, Wollner M, Peer A, Bar-Sela G. Cannabis Impacts Tumor Response Rate to Nivolumab in Patients with Advanced Malignancies. Oncologist. 2019 Jan 22. pii: theoncologist.2018-0383. doi: 10.1634/theoncologist.2018-0383
Guest post by Renee Hannon

Ice core datasets are important tools when reconstructing Earth's paleoclimate. Antarctic ice core data are routinely used as proxies for past CO2 concentrations. This is because, twenty years ago, scientists theorized that Greenland ice core CO2 data were unreliable, since CO2 trapped in air bubbles had potentially been altered by in-situ chemical reactions. As a result, Greenland CO2 datasets are not used in scientific studies to understand Northern and Southern Hemisphere interactions and the sensitivity of greenhouse gases under various climatic conditions. This theory was put forward because Greenland CO2 data were more variable than, and different from, Antarctic CO2 measurements located in the opposite polar region, about 11,000 miles away. This article re-examines Greenland ice cores to see if they do indeed contain useful CO2 data. The theory of in-situ chemical reactions to explain a surplus and deficit of CO2, relative to Antarctic data, will be shown to be tenuous. The Greenland CO2 data demonstrate a response to the Medieval Warm Period, Little Ice Age, Dansgaard-Oeschger and other past climate change events. This response to past climate changes offers an improved explanation for why Greenland and Antarctic CO2 measurements differ. Further, Greenland CO2 measurements show rapid increases of 100 ppm during warm events in relatively short periods of time.

Atmospheric CO2 is More Variable in Northern Latitudes

Figure 1, from NOAA, shows atmospheric CO2 concentrations measured from the continuous monitoring program at four key baseline stations spanning from the South Pole to Barrow, Alaska. CO2 has risen from about 330 ppm to over 400 ppm since 1975 and is increasing at approximately 1-2+ ppm/year. Many scientists believe that rapidly increasing CO2 is mostly due to fossil fuel emissions.
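The growth rates quoted from these station records are computed on de-seasonalized data: the seasonal cycle is removed, typically with a 12-month centered moving average, before the trend is estimated. A minimal sketch on synthetic monthly values (the baseline, growth rate, and seasonal swing below are invented round numbers, not NOAA data):

```python
# Sketch: separating the seasonal CO2 cycle from the long-term trend with a
# 12-month centered moving average. All values are synthetic and illustrative.
import math

def deseasonalize(monthly):
    """Return a centered 12-month moving average (trend) of a monthly series.

    Entries without a full 12-month window are omitted, so the result is
    len(monthly) - 12 points long.
    """
    trend = []
    for i in range(len(monthly) - 12):
        # Average months i..i+11 and i+1..i+12, then average the two results,
        # which centers the 12-month window on a single month.
        w1 = sum(monthly[i:i + 12]) / 12.0
        w2 = sum(monthly[i + 1:i + 13]) / 12.0
        trend.append((w1 + w2) / 2.0)
    return trend

# Synthetic "station": 330 ppm baseline, +1.5 ppm/yr trend, 7 ppm seasonal swing.
series = [330 + 1.5 * m / 12 + 7 * math.sin(2 * math.pi * m / 12)
          for m in range(10 * 12)]

trend = deseasonalize(series)
# Annual growth rate from the de-seasonalized trend (ppm/yr):
growth = (trend[-1] - trend[0]) / ((len(trend) - 1) / 12.0)
print(round(growth, 2))  # recovers the growth rate built into the series
```

For a purely 12-month periodic cycle the window cancels the seasonal term exactly, so the recovered growth rate matches the trend built into the series.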
Although the increasing trends from these four baseline stations appear similar, the Northern Hemisphere (NH) atmospheric CO2 concentrations are increasing slightly faster than the Southern Hemisphere (SH). Longer-term trends from all latitudes are de-seasonalized and used for calculations of the inter-hemispheric CO2 gradient and trends. During pre-industrial times the NH CO2 mean annual concentration was estimated to be 1-2 ppm higher than the SH (Stauffer, 2000). Currently, the annual mean CO2 concentration is about 5-6 ppm higher in the NH than the SH. De-seasonalized trends also show that SH CO2 lags the NH CO2 by about 2 years. For example, the annual CO2 reading at Barrow, Alaska broke 400 ppm in May 2014, whereas the annual South Pole reading hit 400 ppm in May 2016. However, note the first monthly average at Barrow hit 400 ppm in April 2012, which is four years earlier than the South Pole. Although all observation stations show that CO2 is rising, there are annual amplitude cycles reflecting seasonal differences that vary by latitude (N-S) superimposed on the overall rising longer-term trend.

Figure 2 shows a graph comparing the past two years of CO2 data for the Barrow, Alaska and South Pole (SPO) observatories. On the right-hand side are global CO2 visualizations from NASA which incorporate CO2 measurements from the Orbiting Carbon Observatory spacecraft. In the NH, atmospheric CO2 rises during the winter months and falls during the summer, showing strong evidence of a natural biospheric signal (Barlow et al., 2015). In NH springs and summers, CO2 concentrations decrease rapidly during a period of two months due to the growth of plants and absorption of CO2. During autumn/fall, CO2 is released by respiration and increases. During the NH winters, there is a more stable period when the highest CO2 readings are observed. This dormant period lasts 6-7 months each year when there is less terrestrial plant growth. Barlow et al. calculated an increase in the NH CO2 amplitude of 0.09 ppm/yr on detrended data. For example, NH CO2 amplitudes have increased from 14 ppm in 1975 to 18 ppm in 2019. The amplitude increase is associated with enhanced vegetation greenness, partly due to elevated warming, as discussed by Yue et al. Barlow et al. suggest the changes in CO2 uptake and release are evidence that NH vegetation may be progressively capturing more carbon during northern spring and summer as global CO2 levels increase.

The Barrow and South Pole observatories show that CO2 amplitudes in the NH are significantly larger than in the SH. A very weak amplitude of opposite polarity is seen in the SPO CO2 measurements, shown by the dark gray line in figure 2. The SH CO2 amplitudes are significantly lower at only 1-2 ppm per annual cycle. The amplitude differences result in CO2 being 12-15 ppm higher in the NH than in the SH during the northern winter months, almost 60% of the year. This is shown in the NASA global visualizations during the dormant period. Dettinger and Ghil, 1998, suggest the smaller SH amplitudes reflect less seasonal variability due to a much-reduced terrestrial influence on CO2 concentrations. They also conclude that South Pole CO2 variations are affected mostly by marine influences such as marine upwelling and release of CO2.

CO2 Data from Greenland Ice Cores Do NOT Agree with Antarctic

CO2 concentrations of trapped air in ice bubbles in Greenland and Antarctic ice cores were examined to evaluate differences between the SH and NH paleoclimate atmospheric CO2. Antarctic ice core CO2 data are readily available and used as the key dataset for CO2 trends during interglacial and glacial periods for the SH. Surprisingly, Antarctic CO2 data are also used for NH paleoclimate CO2 trends. Finding Greenland ice core CO2 data is extremely difficult, especially in any useful format. It seems to have been written out of history.
There are four Greenland ice cores with mention of atmospheric CO2 gas measurements: GISP2, GRIP, Camp Century, and Dye 3. There is only scant data available in digital formats. Several mid-1990s articles have published some of the Greenland CO2 ice core data. Anklin et al. show Greenland GRIP and Dye 3 CO2 profiles from 5,000 to 40,000 years BP. Digital data from this study are available for GRIP CO2 concentrations in core depths. Smith et al. have published on CO2 concentrations of trapped air from the GISP2 ice core, also available in core depths. Neftel et al. published on CO2 concentrations from the Camp Century ice core compared to the Antarctic Byrd ice core. CO2 concentrations were as high as 400 ppm about 1100-1200 years ago, using a dry extraction technique analyzed by laser spectrometer. Unfortunately, I am unable to locate Camp Century and Dye 3 ice core CO2 data in digital format.

In 1995, Barnola et al. had recent Holocene interglacial ice core samples from both the Greenland GRIP and Antarctic Siple Dome ice cores analyzed in two different laboratories, Grenoble and the University of Bern. Digital data are not available; however, tables of the data are included in their publication. The results are plotted in Figure 3a. The black curve is smoothed CO2 data using Antarctic ice cores. The symbols represent Greenland GRIP CO2 from the laboratory measurements (Gren and Bern). Barnola found there is good agreement between the lab measurements on different cores in the same hemisphere. However, the measured CO2 values between Greenland and Antarctica did not agree. This discrepancy of up to 20 ppm was more than could be explained by the inter-hemispheric gradient of atmospheric CO2 concentrations. Figure 3b shows the CO2 ppm difference between the lab measurements on the Greenland samples versus the Antarctic samples. The present-day inter-hemispheric gradient is also highlighted.
Interestingly, CO2 values are in good agreement between Greenland and Antarctica from about 1600 AD to 2000 AD. However, Greenland CO2 values ranged up to 20 ppmv higher from 900 to 1600 AD. The approximate times of the Medieval Warm Period (MWP) and Little Ice Age (LIA) are noted on the graphs. Smith's 1997 evaluation of CO2 in Greenland ice cores focused on the older portion, on stadials and interstadials of the Dansgaard-Oeschger (D-O) events during the glacial period. Results showed even higher CO2 variability than during the Holocene interglacial period. The warm interstadials increased on average by 50-90 ppm over a short period of 100 to 200 years. Detailed sampling over one 4-cm ice section showed three samples of CO2 higher than 400 ppm within a warm interstadial.

But there's more: Greenland CO2 measurements are also lower than Antarctic CO2 values

CO2 concentration records from Greenland ice cores are generally higher than those from Antarctic ice cores for the same time interval. However, there are some data which show lower concentrations. Anklin et al. found values in the GRIP ice core that were too low compared with Antarctic records. Smith and others (1997) also found values that were too low in some samples from the cold stadial phases during the last glacial period. In summary, the conclusions from published studies on CO2 concentrations of trapped air in ice bubbles from Greenland ice core data are surprisingly similar:

- CO2 concentrations in Greenland ice cores (GRIP, GISP2, Camp Century, Dye 3) are generally 20 ppm higher than Antarctic during the Holocene interglacial period younger than about 8000 years before present (BP). For older samples during the glacial period interstadials/stadials, CO2 is higher by over 50 ppm. An inter-hemisphere difference of 20-50 ppm is unrealistic and higher than present day.
- CO2 concentrations in Greenland ice cores show more variability than Antarctic ice core CO2 data.
In addition to having higher CO2 values, they also had lower CO2 values than the Antarctic data.
- BUT Greenland CO2 concentrations from ice cores agree well with each other and all show similar variances from Antarctic.

Condemnation of Greenland Ice Core CO2 Data

Group think – Jury's out – One Verdict

The Greenland CO2 values are too high, too low, show more variability and, most importantly, do not agree with Antarctic CO2 data. Thus, something must be wrong with the Greenland ice core CO2 data. Scientists attempted to explain the potential surplus as well as the depletion of Greenland CO2 values. Many technical articles and research in the mid to late 1990s were based on a hypothesis that acid-carbonate chemical reactions in the Greenland ice bubbles created a surplus of CO2. "The high degree of variability associated with Greenland CO2 measurements may be related to CO2 liberation from carbonates due to the dissolution by acid species in ice." Anklin et al., 1997, and other papers such as Delmas, 1993; Barnola et al., 1995; Smith et al., 1997; Tschumi et al., 2000.

Some doubts about this chemical reaction were raised because the carbonate content of ice is difficult to measure directly, and so the carbonate content is estimated indirectly from the Ca2+ concentrations. Tschumi and Stauffer concluded, after completing a detailed lab study on Greenland cores, that the acid-carbonate reaction can explain only about 20% of the CO2 surplus, and they suggested oxidation of organic compounds may also be responsible. Therefore, the theory to explain surplus CO2 evolved to become the result of a combination of two different chemical reactions. Additionally, they were unable to find any clear evidence to explain CO2 depletion in the Greenland ice cores. Smith et al. also acknowledge it was unclear how reactants could be mobile in ice where diffusion is extremely slow, assuming the reactions occurred after the air bubbles in the ice were formed.
Surprisingly, the acid-carbonate hypothesis was accepted as valid despite the facts that carbonate content in ice is difficult to measure directly, that the CO2 surplus cannot be attributed to a specific chemical reaction mechanism, that there is no clear evidence for depletion of CO2 by a chemical reaction, and that these chemical reactions would have had to occur after bubble closure. This acceptance meant the discrepancy between Greenland and Antarctic ice core data was explained. Consequently, the CO2 data extracted from air bubbles in Greenland ice cores was deemed useless.

Re-examination of Greenland CO2 Measurements

One positive outcome of the Greenland CO2 variability denial is that several detailed, high-density sampling studies were conducted. Figure 4 examines CO2 measurements from the Greenland GISP2 ice core from two stadials around 45,000 years and 62,000 years BP by Smith et al. Note that age and depth are on the vertical axis and Ca, electrical current, CO2 values, δ18O, and layer thickness are plotted on the horizontal axis. The stadials correspond to thinner annual layers suggesting lower accumulation rates, and more negative δ18O isotope signatures suggesting colder temperatures. The stadials contain the lowest concentrations of CO2, 200-240 ppm, and the lowest conductivity. Both stadials contain high amounts of Ca, interpreted as related to dust accumulation (McGee, 2010). The warm interstadials bounding the stadials also have unique characteristics. There is a sharp boundary where the stadial is terminated by a younger abrupt warming. Ca disappears quickly, δ18O isotopes rapidly become less negative, and CO2 increases by 50-100 ppm in a period of 50-100 years (Smith, 1997). The transition from the older preceding interstadial to the cooler stadial is more gradual. This is also reflected in more variable CO2 values and more variable conductivity or electrical current.
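The "qualitative correlation" between CO2 and the ice core properties described above can be put on a numerical footing: given depth-aligned CO2 and δ18O samples, a Pearson correlation coefficient summarizes how tightly CO2 tracks the temperature proxy. The sketch below uses invented, illustrative sample values, not actual GISP2 measurements:

```python
# Sketch: quantifying how closely CO2 tracks a temperature proxy (δ18O) with a
# Pearson correlation. The paired values are invented for illustration only.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical depth-aligned samples: warmer (less negative) δ18O alongside
# higher CO2, echoing the stadial/interstadial pattern described in the text.
d18o = [-42.0, -41.5, -38.0, -37.5, -41.8, -42.2, -37.8, -38.2]  # per mil
co2 = [205.0, 210.0, 270.0, 278.0, 208.0, 200.0, 272.0, 265.0]   # ppm

r = pearson(d18o, co2)
print(round(r, 2))  # strongly positive for this invented series
```

A value of r near +1 would support the hypothesis that CO2 co-varies with local temperature rather than with a contamination mechanism, though correlation alone cannot rule out in-situ production.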
The chemical production of CO2 is speculated to occur with higher acidity in the ice core, which is measured by higher electrical conductivity, H+ (Smith et al.). In the younger stadial, conductivity is very low throughout the interval except at the shallowest portion, less than 2358 meters. However, CO2 begins to increase at 2,377 meters, more in line with warmer δ18O isotope values. An alternative hypothesis from these high-resolution data is that the CO2 concentrations, while more variable, do show a qualitative correlation with the ice core properties of thickness, electric current signatures, oxygen isotopes and calcium content for each unique layer, and are not chemically altered. Both the stadials and interstadials in the study show well-behaved patterns and similar characteristics.

Figure 5 shows the high sample density Greenland GISP2 CO2 data, the low sample density GRIP CO2 data, and the Antarctic Byrd CO2 data in relation to Greenland temperature anomalies from oxygen isotopes. Age synchronization between the Greenland and Antarctic ice cores was achieved via atmospheric CH4 by Ahn and Brook, 2008. Yet again, Greenland ice cores show CO2 concentrations that tend to mimic Greenland temperature anomalies of stadial/interstadial cooling and warming periods. Large, rapid increases of CO2 occur during the rapid abrupt warming of interstadial events. As temperatures increase by 6 degrees C over a short period of 50-100 years, shown at interstadial 12, the Greenland GISP2 CO2 values in blue also increase rapidly from 200 ppm to 280 ppm. The Greenland GRIP CO2 shown in green was only randomly sampled over the D-O events but shows higher values of 280-300 ppm in the interstadials when sampled, and lower values of 220 ppm in the stadials. Note the Antarctic Byrd CO2 values in gray show a minimal response of slightly increasing CO2 in the long duration interstadials and show no increase in the short interstadials.
In the longer interstadials 8 and 12, the Antarctic CO2 values do slightly rise by 10 ppm. In short interstadial 13, Greenland GISP2 CO2 rises rapidly to 260 ppm whereas the Antarctic CO2 values stay low at 205 ppm. Interstadial 11 shows Greenland GRIP CO2 values up to 300 ppm and again the Antarctic CO2 values remain around 205 ppm. Antarctic CO2 values do not show any response to interstadials 9, 10, 11 or 13. During the cold stadials, Greenland CO2 is more similar to or slightly lower than Antarctic CO2 and averages around 190 to 200 ppm.

Antarctic CO2 Ignores Past Cold Events

Surplus CO2 can be produced by chemical reactions in theory, and the necessary measured compounds (Ca, H+) are present in Greenland ice cores. However, Tschumi states that depleted CO2 cannot be explained by chemical reactions. Let's examine the times when Greenland CO2 values are lower than Antarctic. When Greenland CO2 values drop below Antarctic values, the timing corresponds to well-known Greenland cold climate events like the Younger Dryas (YD) and the Holocene interglacial 8.2 kyr event. Figure 6 compares Greenland GRIP and GISP2 CO2 to Antarctic Byrd CO2 data. Times when Greenland CO2 values are lower than Antarctic values are shaded in blue. Times when Greenland CO2 values are higher are shaded in pink. It is obvious that Antarctic Byrd CO2 (gray line) shows no response to either the YD or 8.2 kyr cold events. Also obvious is that the Greenland GISP2 and GRIP ice core CO2 data tell a different story. During the Holocene interglacial 8.2 kyr cold event, Greenland CO2 values drop from 270 to 210 within about 500 years and are 50 ppm lower than Antarctic CO2 values. Greenland CO2 values also show an abrupt rise of 80 ppm within 200 years after the 8.2 kyr event. The YD event was a cold period during the recent Holocene glacial-to-interglacial transition about 12,000 years ago. It was preceded by the Bolling Allerod (B/A) interstadial, which it interrupted.
The YD event was barely recognized in Antarctic ice core temperatures, only 1 degree C colder. However, in Greenland ice cores the temperatures plummeted by 10 degrees C for hundreds of years, shown by the Greenland temperature anomaly above in red (Figure 6). The B/A interstadial and YD cold event demonstrate the qualitative correlation of Greenland CO2 values to Greenland temperature fluctuations. Greenland CO2 responds to the warmer B/A interstadial with an intermittent rise that is 20-30 ppm higher than the gradual Antarctic CO2 increase. Greenland CO2 peaks at 290 ppm and then decreases to 235 ppm during the cold YD. Contrarily, the muted Antarctic CO2 data shows a gradual rise from 250 to 270 ppm, ignoring both the B/A and YD, and simply responds to the gradual Holocene deglaciation. This is not surprising because Antarctic ice core temperatures derived from δ18O isotopes also show no, to only minor, temperature fluctuations during these events (not shown). Past literature studies on the Greenland Younger Dryas and the 8.2 kyr event used only Antarctic CO2 data, resulting in the following observations:

- Ahn and Brook, 2013, observed small 1-2 ppm increases of Antarctic CO2 and imply that the sensitivity of atmospheric CO2 to the Northern Hemisphere cooling of the 8.2 kyr event was limited. Conversely, Greenland CO2 data shows a dramatic 80 ppm reduction within 200 years during this cold event.
- Lui et al., 2012, conclude that the Greenland climate during the cold YD should be substantially warmer, because the increase seen in Antarctic atmospheric CO2 should be associated with an increase in surface temperature, especially at high latitudes.
- Raynaud et al., 2000, state that the long-term glacial-Holocene increase in CO2 was not interrupted during the YD. Raynaud was surprised by this result.
- Marchal et al., 1998, state that CO2 records from the Antarctic ice core show that CO2 remained constant during the Younger Dryas cold climate event.
He states this suggests the North Atlantic Ocean has a minor influence on CO2.
- Kohler et al. studied the B/A using CO2 data from the Antarctic Dome C ice core, which shows a CO2 increase of about 10 ppm. Their models showed that atmospheric CO2 should have increased by 20-35 ppm, a factor of 2-3.5 greater than the CO2 data showed. As a matter of fact, Greenland CO2 does exactly that during the B/A by increasing 20-30 ppm, perhaps suggesting the data is not chemically altered.

Alternative Hypotheses for Greenland Ice Core CO2 "Bad" Behavior

What if the Greenland ice core CO2 data is not chemically altered and is just as accurate as the Antarctic ice core CO2 data? The Greenland ice cores do express more variable CO2 fluctuations than Antarctic ice cores, but the variability appears to be synchronous with Greenland's larger, rapid temperature variations. And all the Greenland ice core CO2 data generally agree with each other.

Seasonal bias may exist in the Greenland ice cores. Greenland ice cores may preferentially record more northern winter CO2 readings than summer CO2 variability. Seasonality with preferentially preserved winter readings can easily explain the 18-20 ppm differences observed during the recent Holocene Medieval Warm Period. Recent atmospheric CO2 measurements between NH and SH observatories show up to 15 ppm differences for the 6-7 months of the northern winter dormant season, about 60% of the year. During the other 40% of the year, NH CO2 is transitional, either increasing or decreasing due to vegetation photosynthesis or respiration, and is highly variable.

Greenland CO2 Variability is Synchronous with Greenland Temperatures.

CO2 values from Greenland GISP2 and GRIP ice cores qualitatively correlate with their δ18O isotope temperature proxies, as shown in the figures above. Dye 3 and Camp Century CO2 data, not presented here, show similar responses to Greenland temperatures (Anklin et al. and Neftel et al.).
This is obvious during Greenland abrupt climate changes such as the D-O events, even in short interstadials, and during the B/A. Greenland CO2 also decreases corresponding to Greenland cold events such as the YD and 8.2 kyr. During abrupt climatic events, Greenland and Antarctic CO2 values can diverge significantly during warm interstadials, with Greenland CO2 values being much higher by 75+ ppm. The interstadial warm patterns in Greenland ice cores are also amplified relative to Antarctic in many aspects, such as temperature, dust content, methane excursions, and possibly CO2. During the Holocene 8.2 kyr event, which was an abrupt cooling event, Greenland CO2 values plummeted 50+ ppm while Antarctic CO2 measurements did not recognize this event.

Greenland Ice Cores have High Gas Resolution due to High Accumulation Rates.

There are significant gas resolution differences between Antarctic and Greenland cores due to differences in surface temperature and snow accumulation rates. This is discussed by Ahn and Brook, 2012, and by Middleton, 2017, 2019. Gas age samples can be younger by hundreds and up to a thousand years due to diffusion in Antarctic ice cores, which have accumulation rates as low as 3 mm/yr. In Greenland, where accumulation rates are much higher, gas age samples have a resolution as fine as tens of years up to hundreds of years. Middleton discusses CO2 gas sample resolutions and gas age distributions for Antarctic ice cores due to gas diffusion before bubble close-off. He shows the impact of smoothing filters to match the resolution differences. Instrumental annual CO2 data should be averaged over 100+ years to compare to past Holocene Antarctic ice core CO2 values. Of course, observatories have only recorded 40 to 60 years of CO2 data. For example, the Mauna Loa CO2 annual mean averaged over 60 years of data is 354 ppm, compared to the reported global annual mean of 407 ppm for 2018.

CO2 Increases in Greenland Ice Cores like Methane.
It is well documented that rapid increases in methane (CH4) concentrations are synchronous with past warming events in Greenland and are more extreme than in Antarctic ice cores. Antarctic and Greenland ice core methane records both show a rise during warm interstadials and a fall during stadials, but with different concentrations (Blunier and Brook). They suggest significant methane increases in Greenland during past warm periods are related to increased swamp and organic releases during melting periods. During warm periods, greening of the Arctic occurs as exposed terrestrial real estate expands significantly. Photosynthesis and respiratory processes should also be in full force. If swamps and terrestrial vegetation became more exposed and prolific during past warming and "greening" of the Arctic, then past CO2 should also show larger northern latitude increases, as methane does.

Greenland CO2 Data Could be a Climate Game Changer

While it is possible some of the Greenland CO2 data could be contaminated, the assumption that ALL the CO2 data are chemically altered in ALL the Greenland ice cores does not explain why CO2 is so well behaved with Greenland temperatures, or address the observations discussed above. It is also plausible the Greenland ice core CO2 data has more detailed resolution and higher frequency content than the subdued Antarctic ice core CO2 record. Figure 7 compares Greenland and Antarctic CO2 data over the past 50,000 years. The CO2 signals preserved in the Antarctic and Greenland ice cores are significantly different. Greenland CO2 fluctuations appear synchronized with active Greenland temperature changes, just as Antarctic CO2 data mimic more subdued Antarctic temperatures (not shown). Note the large data gap in digital Greenland CO2 measurements during most of the Holocene interglacial period.
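The resolution argument raised above (gas-age smoothing in low-accumulation Antarctic cores versus high-accumulation Greenland cores) can be illustrated with a toy moving-average filter: a short, large CO2 excursion that survives a narrow averaging window is nearly flattened by a multi-century one. All values below are synthetic:

```python
# Sketch: how a wide gas-age averaging window (low-accumulation Antarctic core)
# mutes a short CO2 excursion that a high-resolution record (Greenland) keeps.
# All values are synthetic and illustrative.

def smooth(series, window):
    """Centered moving average; edges use whatever part of the window exists."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# 1,000 "years" at a 200 ppm baseline with a 100-year, +80 ppm excursion,
# loosely echoing the rapid interstadial rises described in the text.
annual = [200.0] * 1000
for yr in range(450, 550):
    annual[yr] = 280.0

greenland_like = smooth(annual, 20)    # roughly decadal gas-age resolution
antarctic_like = smooth(annual, 400)   # multi-century gas-age resolution

print(round(max(greenland_like) - 200, 1))  # excursion survives nearly intact
print(round(max(antarctic_like) - 200, 1))  # excursion heavily attenuated
```

The wide window spreads the 100-year, 80 ppm spike across four centuries, so its peak amplitude drops to roughly a quarter of the original, which is the sense in which a low-resolution core could "miss" a real centennial event.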
The Greenland CO2 responses appear to reflect short-term centennial fluctuations, whereas the Antarctic CO2 fluctuations appear to be responding to longer-term millennial changes. These differences may be the result of enhanced terrestrial carbon influences in combination with oceanic releases in the Northern Hemisphere, whereas the Antarctic low-amplitude CO2 responses are dominated by global and Southern oceanic processes. Or simply, the Antarctic ice core record has insufficient data resolution. If the Greenland CO2 data is correct, or even qualitatively correct at best, then it needs to be re-examined and incorporated into polar interhemispheric greenhouse gas/glacial/oceanic interactions and interpretations to establish natural past atmospheric CO2 variability. Rapidly increasing CO2 values measured during this Modern Warming may not be unprecedented compared with past natural fluctuations after all.

Acknowledgements: Special thanks to Donald Ince and Andy May for reviewing and editing this article. (Note – It is very frustrating to find an interesting reference that is paywalled. Many key references are from papers 25+ years old that are still paywalled).

Ahn, J., and J. Brook, Atmospheric CO2 and Climate on Millennial Time Scales During the Last Glacial Period, Science, Vol. 322, Issue 5898, pp. 83-85, 2008. Link.
Ahn, J., and J. Brook, Atmospheric CO2 over the last 1000 years: A high-resolution record from the West Antarctic Ice Sheet (WAIS) Divide ice core, Global Biogeochemical Cycles, Volume 26, Issue 2, 2012. Link.
Ahn, J., E. Brook, C. Buizert, Response of atmospheric CO2 to the abrupt cooling event 8200 years ago, Geophysical Research Letters, Volume 41, Issue 2, 2013. Link.
Anklin, M., J.M. Barnola, J. Schwander, B. Stauffer, and D. Raynaud, Processes affecting the CO2 concentration measured in Greenland ice, Tellus, Ser. B, 47, 461-470, 1995. Link.
Anklin, M., J. Schwander, B. Stauffer, J. Tschumi, A. Fuchs, J.M. Barnola, and D. Raynaud, CO2 record between 40 and 8 kyr B.P. from the Greenland Ice Core Project ice core, J. Geophys. Res., 102 (C12), 26539-26546, 1997. Link.
Barnola, J.-M., M. Anklin, J. Porcheron, D. Raynaud, J. Schwander, and B. Stauffer, CO2 evolution during the last millennium as recorded by Antarctic and Greenland ice, Tellus, 47B, 264-272, 1995. Link.
Barlow, J. M., Palmer, P. I., Bruhwiler, L. M., and Tans, P.: Analysis of CO2 mole fraction data: first evidence of large-scale changes in CO2 uptake at high northern latitudes, Atmos. Chem. Phys., 15, 13739-13758, 2015. Link.
Blunier, T., and E. Brook, Timing of Millennial-Scale Climate Change in Antarctica and Greenland During the Last Glacial Period, Science, Vol. 291, Issue 5501, pp. 109-112, 2001. Link.
Delmas, R.A., A natural artefact in Greenland ice-core CO2 measurements, Tellus, 45B, 391-396, 1993. Link.
Dettinger, M., and M. Ghil, Seasonal and interannual variations of atmospheric CO2 and climate, Tellus B, 1998. Link.
Francey, R. J., Frederiksen, J. S., Steele, L. P., and Langenfelds, R. L.: Variability in a four-network composite of atmospheric CO2 differences between three primary baseline sites, Atmos. Chem. Phys., 19, 14741–14754, 2019. Link.
Kohler, P., G. Knorr, D. Buiron, A. Lourantou, J. Chappellaz, Abrupt rise in atmospheric CO2 at the onset of the Bolling/Allerod: in-situ ice core data versus true atmospheric signals, Clim. Past, 7, 473-486, 2011. Link.
Lui, Z., A. Carlson, and J. Zhu, Younger Dryas cooling and the Greenland climate response to CO2, Proc Natl Acad Sci USA, 109, 11101-11104, 2012. Link.
Marchal, O., T. Stocker, F. Joos, A. Indermuhle, T. Blunier, J. Tschumi, Modelling the concentration of atmospheric CO2 during the Younger Dryas climate event, Climate Dynamics 15: 341-354, 1998. Link.
McGee, D., B. Wallace, G. Winckler, Gustiness: The driver of glacial dustiness? Quaternary Science Reviews 29, 2340-2350, 2010. Link.
Middleton, D., Breaking Hockey Sticks: Antarctic Ice Core Edition, WUWT, 2017. Link.
Middleton, D., Resolution and Hockey Sticks, Part Deux: Carbon Dioxide, WUWT, 2019. Link.
Neftel, A., H. Oeschger, J. Schwander, B. Stauffer, and R. Zumbrunn, Ice core sample measurements give atmospheric CO2 content during the past 40,000 years, Physics Institute, University of Bern, Nature, Vol. 295, 1982. Link.
NOAA ESRL Global Monitoring Division – Global Greenhouse Gas Reference Network. Link.
Oeschger, H., A. Neftel, T. Staffelbach, and B. Stauffer, The dilemma of the rapid variations in CO2 in Greenland ice cores, Ann. Glaciol., 10, 215-216, 1988. Link.
Raynaud, D., J. Barnola, J. Chappellaz, T. Blunier, A. Indermuhle, B. Stauffer, The ice record of greenhouse gases: a view in the context of future changes, Quaternary Science Review 19, 9-17, 2000. Link.
Smith, H.J., M. Wahlen, and D. Mastroianni, The CO2 concentration of air trapped in GISP2 ice from the Last Glacial Maximum-Holocene transition, Geophysical Research Letters 24: 1-4, 1997. Link.
Smith, H.J., M. Wahlen, D. Mastroianni, K.C. Taylor, and P.A. Mayewski, The CO2 concentration of air trapped in Greenland Ice Sheet Project 2 ice formed during periods of rapid climate change, Journal of Geophysical Research 102: 26577-26582, 1997. Link.
Tschumi, J., and B. Stauffer, Reconstruction of the past atmospheric CO2 concentrations based on ice-core analyses: open questions due to in situ production of CO2 in the ice, Cambridge University Press, 2000. Link.
Yue, C., Ciais, P., Bastos, A., Chevallier, F., Yin, Y., Rödenbeck, C., and Park, T.: Vegetation greenness and land carbon-flux anomalies associated with climate variations: a focus on the year 2015, Atmos. Chem. Phys., 17, 13903–13919, 2017. Link.
The relation between faith and belief is dialectical: (1) belief is one of the forms in which faith is expressed, (2) belief is one of the sources from which faith is nourished. Let me elaborate. People’s faith expresses itself in a variety of historical forms; these historical forms, in turn, sustain and nourish their faith. The historical expressions of faith are many — symbols, myths, beliefs, doctrines, theologies, rituals, customs, laws, ethics, institutions, activism, music, poetry, calligraphy, architecture, and so on. The entire range of historical forms produced in the context of a given religion together constitute what Wilfred Cantwell Smith calls a “cumulative tradition.” A cumulative tradition comes into being, and continues to expand and change, within the limitations of historical time. Historians can therefore trace the birth and growth of a religious tradition to the relevant individuals and groups acting within particular historical settings. What often remains elusive in such studies, however, is the quality of personal faith without which that tradition would never have emerged in the first place; as well as the role played by that tradition in sustaining and nourishing the personal faith of countless individuals and communities over hundreds or thousands of years. Academic studies of religion tend to focus on cumulative traditions, even though religion is much more than its historical expressions. No understanding of religion can be complete without giving due attention to the quality of personal faith that gives birth to, and is maintained by, these historical expressions. In fact, any given cumulative tradition is necessarily imperfect when judged from the viewpoint of faith. In effect, the cumulative tradition is supposed to serve the faith of an individual or community; not the other way around. Even though faith can hardly thrive without a cumulative tradition, faith must take priority over all aspects of the cumulative tradition. 
In other words, a given religion consists of both personal faith and a historically expressed cumulative tradition, but these two components do not enjoy the same value. From a religious viewpoint, it is indisputable that faith is primary; the cumulative tradition — including belief — is secondary. To some extent, faith needs belief. While belief is based upon faith, it is also one of the many ways in which faith is sustained and nourished. Smith writes that “belief is one among many of the overt expressions of faith,” but then goes on to emphasize that belief is an important part of the apparatus that helps support and maintain the personal faith of an individual or a community.

Yet the term “expression” is inadequate, and in danger even of being misleading. For once the form has been set up, and especially once it is preserved by becoming incorporated into the on-going tradition, where it may serve for decades or even for millennia, it functions not only to express the faith of its formulator and then that of subsequent generations, but more importantly to induce and to nurture the latter, and to give shape to it . . . Great men contribute to a tradition new forms which express their personal faith; but that faith has itself in its turn been stimulated by earlier forms, so that all religious men, great and small, derive from (or we may better say, through) the forms of a tradition the faith by which they live their daily lives . . . (p. 17)

We can see that belief is clearly an important part of any historically contingent religious tradition. Since personal faith is supported and maintained by the various forms of the cumulative tradition with which it is associated, one could say that personal faith depends, among other things, on beliefs — at least to a certain degree. This partial dependence of faith upon beliefs can become problematic when, with the passage of time, some religious beliefs become untenable, i.e., difficult or impossible to maintain. 
Depending upon how closely the personal faith of an individual or community is tied to a particular set of beliefs, a weakening of beliefs will have varying degrees of negative consequences for personal faith. And yet, we must not forget that belief is only one of the countless ways in which faith can express itself in history; as such, belief is only one of the countless sources from which faith can receive its nourishment. This means that when a particular set of religious beliefs becomes untenable as a result of historical change, faith does not immediately perish. Consider the fact that faith is expressed in beliefs (ideas that we hold in our minds) as well as in practices (what we do, or how we live our lives). As certain beliefs become untenable, the continuing availability of certain religious practices can still nourish the personal faith of individuals and communities — at least for some time. Under these conditions, the importance of beliefs may decline somewhat as attention increasingly shifts in the direction of practices. The problem, of course, is that religious practices are no more immune to the pressure of historical change than are religious beliefs. As certain religious practices become difficult or impossible to maintain, we can expect the personal faith of individuals and communities to decline even further.

Let me digress for a moment to make a point about the relative importance of beliefs and practices within a given cumulative tradition. In certain historical contexts, the former may receive more attention than the latter, giving rise to an apparent opposition between “orthodoxy” (correct belief) and “orthopraxy” (correct practice). Commenting on this important point, Smith writes:

Every great religious movement has had many expressions. 
We can observe that, of these, one or a few tend at times to be singled out for special emphasis and centrality — probably never to the exclusion of all others, although it can happen that the others come to be interpreted then in terms of that central one. These may then be seen less as immediate expressions of the fundamental faith than as secondary expressions of the primary expression . . . . (p. 17)

Smith goes on to say that while Christians tend to take “monotheism” primarily as a “doctrine” (i.e., a matter of belief), Jews and Muslims tend to take it primarily as a “moral command” (i.e., a matter of practice). For Jews and Muslims, says Smith, monotheism is “less a metaphysical description than an ethical injunction.” It is often claimed, in light of this observation, that Judaism and Islam are religions of orthopraxy while Christianity is a religion of orthodoxy. Such sweeping labels can be misleading. The difference, insofar as it actually exists, is not one of exclusive commitment but of relative emphasis (as Smith correctly notes). While in many contexts Jews and Muslims emphasize monotheism as an ethical imperative and Christians focus on its doctrinal subtleties, the reverse is also true. The oneness of God has an obvious doctrinal importance for Jews and Muslims, and it has a profound moral and practical importance for Christians. It would be wrong to say, therefore, that Christians don’t care about practice, or that Jews and Muslims don’t care about beliefs.

Perhaps the distinction can be articulated as follows: the moral command flows from the doctrine in one case, and the doctrine emerges from the moral command in the other (though even this formulation is not absolute by any means). We should note that there is a growing emphasis on “discipleship” in contemporary Christianity, which represents, at least partly, a shift of emphasis away from issues of doctrine. 
In short, the relative significance of right belief and right practice can vary from one tradition to another, and even from one period to another within the same tradition. Regardless of such variations, the fact remains that both orthodoxy and orthopraxy act as forms of expression, and as sources of nourishment, for people’s faith.

Let’s return to the question of the relationship between faith and belief. To reiterate, at any given point in history, personal faith is expressed in the form of certain beliefs and, in turn, the resulting beliefs help sustain the personal faith of individuals and communities. As history moves on, however, societies inevitably change in both small and dramatic ways. Consequently, many beliefs that used to be effective sources of nourishment for personal faith in the past tend to become increasingly untenable; they lose their ability to attract the allegiance of a person’s mind and intellect. Such beliefs become increasingly ineffective sources of nourishment for people’s faith, leading to what may be called a “crisis of faith.” In the face of such a crisis, the personal faith of both individuals and communities tends to lose its strength and vitality to varying degrees, depending on the severity of the crisis.

Typically, religious individuals and communities struggle with the crisis and eventually discover or create new historical forms; among other things, they are able to formulate fresh and more credible beliefs through which to express their personal faith. These new beliefs then replace the older ones as effective sources of nourishment for personal faith at both individual and communal levels. The loss of a particular set of religious beliefs is not unique to the modern period. The history of any cumulative tradition will show that beliefs tend to change all the time, and that it is perfectly normal for one set of beliefs to disappear while giving way to another. 
Consequently, the loss of a particular set of religious beliefs does not mean the end of faith; rather, it represents a challenge that has been successfully met countless times in history. As religious individuals and communities face this challenge with courage and perseverance, their cumulative tradition undergoes a process of renewal and revival.

Having looked at the two meanings of belief, let us now consider the word faith. Unlike belief, whose meaning changed drastically during the seventeenth century, the word faith has retained much of its original meaning in modern English. Yet the two words are often used inaccurately as synonyms, thereby adding to the confusion and giving rise to a distorted view of religion. The word faith is derived from the Latin fides, which means “trust, confidence, reliance.” The word fides, in turn, comes from the Latin root fidere, “to trust.” The same root is also found in the word fidelity. Even though the word faith is sometimes inaccurately used as a synonym for the modern sense of belief, the word fidelity still carries the original sense of loyalty. The word hi-fi (an abbreviated form of high fidelity) is a case in point.

Based on its etymology as well as its usage, we can say that faith is not primarily a matter of holding certain ideas in one’s mind, i.e., it is not a matter of believing per se. Rather, faith denotes a particular kind of attitude or orientation that is characterized by trust, loyalty, and commitment. As such, faith is a way of being in the world, a way of relating to oneself and others, a way of living. It is not believing something; it is being someone. According to Wilfred Cantwell Smith, “Faith is deeper, richer, more personal. . . . It is an orientation of the personality, to oneself, to one’s neighbour, to the universe; a total response; a way of seeing whatever one sees and of handling whatever one handles . . . ” (p. 12). 
One way to overcome the confusion between faith and belief is to think of the word faith as denoting an attitude of faithfulness. When we hear someone say “Tom is a faithful husband,” we know that it does not mean “Tom believes that his wife exists.” Rather, the sentence means “Tom is loyal to his wife.” Similarly, the statement “I have faith in God” does not mean “I believe that God exists.” Rather, it means “I trust God” or “I live a life of commitment to God.”

If it is true that the essence of religion is faith, rather than belief, then we can expect this to be reflected in the language of religious scriptures. Consider the Christian scripture, for example. In the Greek New Testament, the words pisteuo and pistis appear many times. The former is a verb and the latter is a noun, both denoting an attitude of trust, confidence, commitment, and loyalty, i.e., faith. Yet these two words are often rendered in English translations of the New Testament as believe and belief, respectively. This rendering is highly problematic, since it transforms the New Testament’s emphasis on a particular kind of practical attitude into the somewhat passive notion of holding an idea in one’s mind.

Below is the transliterated Greek text of a frequently quoted New Testament verse, John 3:16. Notice the word pisteuon and how it is rendered into English in two different translations.

Houtos gar egapesen ho theos ton kosmon, hoste ton huion ton monogene edoken, hina pas ho pisteuon eis auton me apoletai all’ eche zoen aionion.

For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life. (King James)

For God so loved the world that he gave his only Son, so that everyone who believes in him may not perish but may have eternal life. (NRSV)

In the King James translation (1611), pisteuon is rendered as “believeth.” Given that in the early seventeenth century the word belief still meant something very similar to faith, this translation was quite adequate. 
However, when the New Revised Standard Version (1989) uses the word “believes,” the translation can no longer be considered accurate, since the meaning of the word belief in 1989 differs significantly from its meaning in 1611. But this is not entirely the fault of the translators. Part of the problem is that contemporary English treats the word faith only as a noun. If it were possible for the word faith to be used as a verb in contemporary English, we would have been able to say sentences like “I faith” or “I am faithing” or “I have faithed.” In that scenario, the modern translators of the New Testament would have rendered the relevant part of John 3:16 as follows: “everyone who faiths in him . . . .” It is unfortunate that the English language does not allow this usage; for faith is not a thing that we possess but a quality of how we live, act, and are in the world.

In other words, faith refers to a sort of practice or activity more than it refers to an entity or an idea. For this reason, the notion of faith is best expressed using the active language of verbs, and less so through the relatively passive language of nouns. Since the English word belief allows itself to be used as a verb — believe, believes, believed — it is tempting (and sometimes unavoidable) to use it as a substitute for the word faith in certain contexts. As already mentioned, the use of belief as a synonym for faith posed no significant problem before the seventeenth century, since the meanings of the two words overlapped to a very large extent. In the twentieth century, however, this usage has led to a plethora of confusions and misunderstandings.

But notice what happens when the New Testament verse quoted above is translated into Arabic. Here, the Greek word pisteuon has been rendered as u’minu, which is one of the verb forms of the Arabic word iman (faith). 
It can be seen that the Arabic translation of John 3:16 is much more faithful to the original Greek than is the English rendering of the NRSV. In both Greek and Arabic, the respective words for faith have corresponding verb forms, allowing these two languages to convey the dynamic and active quality of this concept. In sharp contrast, the notion of faith as a verb cannot be directly and concisely expressed in contemporary English, forcing English speakers to use an entirely different word — belief. The unfortunate outcome is a virtual conflation of faith and belief. (The problem highlighted here with respect to the New Testament applies to English translations of the Qur’an as well.)

As mentioned earlier, Wilfred Cantwell Smith has contended that the modern conflation of faith and belief has generated a distorted view of religion. Now that we have looked at both of these terms in some detail, we can begin to appreciate Smith’s insight into the nature of this distortion. If we approach religion primarily in terms of belief (in the modern sense of holding certain ideas as true), then we are likely to judge the value of religion on the basis of its cognitive elements alone, i.e., on the basis of religious ideas. This approach allows the so-called “New Atheists” to argue that religion is false because its truth-claims do not hold up to scientific scrutiny. These critics of religion are right in assuming that the essence of religion is faith; the problem lies in how they define faith. For many of the “New Atheists” and their disciples, the word faith essentially means “believing without evidence.” If the essence of religion is faith, and if faith is “believing without evidence,” then it follows that religion is something fundamentally irrational, especially when compared with science. But the notion that faith is essentially “believing without evidence” is seriously flawed. 
As we have seen, faith is a kind of attitude and orientation towards oneself and others; it is not, primarily, the holding of certain ideas in one’s mind — with or without evidence. In other words, it is true that the essence of religion is faith, but it is not true that the essence of faith is giving intellectual assent to particular truth-claims expressed as propositions (and to do so “without evidence”). This means that the value of religion cannot be judged on the basis of its cognitive elements alone. And yet, religion’s cognitive elements are not entirely irrelevant to any judgment about the value of religion. This is because, while faith and belief are two different concepts, they are not unrelated by any means. Smith writes:

Modern “believing” . . . is placed in relation to, contra-distinction from, knowing. Let us consider this briefly, for everyday usage. For the man in the street, may we not say that knowledge involves two things: (a) certitude, and (b) correctness, in what one knows. To use quite unsophisticated terms, in ordinary parlance one knows whatever one knows when there is a close positive relation of one’s ideas both to the inner conviction and to objective truth. At this same level . . . there is the common-sense notion of believing. This is similar to knowing in that it is thought of as conceptualist, as in the realm of ideas in one’s mind (even, of propositions). It differs from knowing in that it involves one or other of again two things, and perhaps both: (a) lack of certitude; (b) open neutrality as to the correctness or otherwise of what is believed. (p. 35)

Notice that Smith is not presenting a philosophical analysis of the metaphysics of belief and knowledge. He is, on the contrary, telling us how these words are actually used by contemporary English speakers. We can appreciate Smith’s insight by performing a simple exercise. 
Take any proposition and add the phrase “I believe” at the beginning; then say the sentences out loud and notice how the meaning changes. For instance, “Today is November 6” is a simple proposition, but “I believe today is November 6” contains rather significant elements of uncertainty on the part of the speaker, an acknowledgement of the possibility of error, and an openness to alternative possibilities. The first sentence is an expression of knowledge; one is saying what one knows to be true. The second sentence is an expression of belief; one is saying what one believes to be true. Even though the first sentence does not actually begin with “I know,” this phrase is tacitly implied due to the very straightforward and matter-of-fact structure of the sentence. When I am completely sure about something, I just say it without any qualifications; but when I am not completely sure, I qualify my proposition with “I believe.”

But what is Smith’s larger point? What is the purpose of all this linguistic hairsplitting? As suggested earlier, the modern meaning of belief is in sharp contrast to its premodern meaning. Smith wants us to appreciate how a disregard for this difference has contributed to a serious misunderstanding of the nature of religion and religious life.

Consider the question “Do you believe in God?” Given that the modern sense of the word “believe” involves the holding of certain ideas in one’s mind, the question seems to suggest the following sense: “Do you hold the idea of God in your mind?” Or, alternatively, “Do you think there is a God?” Either way, since belief is understood as a habit of thought, believing in God appears to be a matter of keeping a particular thought in one’s mind, viz., the idea that God exists. Notice the difference this makes. Today, believing is seen as a matter of having a particular thought, which is a mental activity. 
Before the seventeenth century, believing was understood as a matter of having a relationship, which is the activity of the whole person as well as a person’s state of being. In the premodern period, therefore, the question “Do you believe in God?” would have meant something like “Do you love God?” Or, alternatively, “Do you live a life of devotion and service to God?” The contrast between the two meanings is hardly trivial.

With this background, we can also appreciate that while the modern usage of the word belief suggests a significant distinction between believing and knowing, this was not the case in the premodern period. Since belief was understood in terms of love and loyalty, the issue of the existence or non-existence of God was irrelevant to the notion of belief. This is because the question “Do you love God?” has nothing to do with whether or not God actually exists; to ask about one’s relationship with God already presupposes God’s reality.

The shift from the premodern to the modern meaning of the word belief did not occur overnight; instead, it took place very gradually over a couple of centuries. But now that it has occurred, we can appreciate the rather stark difference between the two meanings by putting them side by side. Smith writes:

The long-range transformation may be characterized perhaps most dramatically thus. 
There was a time when “I believe” as a ceremonial declaration of faith meant, and was heard as meaning: “Given the reality of God, as a fact of the universe, I hereby proclaim that I align my life accordingly, pledging love and loyalty.” A statement about a person’s believing has now come to mean, rather, something of this sort: “Given the uncertainty of God, as a fact of modern life, so-and-so reports that the idea of God is part of the furniture of his mind.”

In light of this quote, the main distinctions between the premodern and the modern meanings of the word belief (in relation to God) can be summed up as follows: (1) In the premodern period, the reality of God was accepted as self-evident; it was a presupposition that most people took for granted and never questioned. (2) In the modern period, it is no longer possible for most people to accept the reality of God as a self-evident fact; instead, it has become an open question that is to be argued about, contested, and debated. In effect, belief no longer means love, loyalty, devotion, and service; instead, it simply means a thought in the head, especially a thought about which one is not entirely sure.
Summary Report for: 19-4051.02 - Nuclear Monitoring Technicians

Collect and test samples to monitor results of nuclear experiments and contamination of humans, facilities, and environment.

Sample of reported job titles: Health Physics Technician (HP Tech), Nuclear Chemistry Technician, Radiation Control Technician (Radcon Technician), Radiation Protection Specialist (RP Specialist), Radiation Protection Technician (RPT), Radiation Technician, Radiochemical Technician, Senior Health Physics Technician, Senior Radiation Protection Technician

- Brief workers on radiation levels in work areas.
- Calculate safe radiation exposure times for personnel using plant contamination readings and prescribed safe levels of radiation.
- Monitor personnel to determine the amounts and intensities of radiation exposure.
- Inform supervisors when individual exposures or area radiation levels approach maximum permissible limits.
- Provide initial response to abnormal events or to alarms from radiation monitoring equipment.
- Determine intensities and types of radiation in work areas, equipment, or materials, using radiation detectors or other instruments.
- Instruct personnel in radiation safety procedures and demonstrate use of protective clothing and equipment.
- Collect samples of air, water, gases, or solids to determine radioactivity levels of contamination.
- Analyze samples, such as air or water samples, for contaminants or other elements.
- Determine or recommend radioactive decontamination procedures, according to the size and nature of equipment and the degree of contamination. 
- Set up equipment that automatically detects area radiation deviations and test detection equipment to ensure its accuracy. - Prepare reports describing contamination tests, material or equipment decontaminated, or methods used in decontamination processes. - Place radioactive waste, such as sweepings or broken sample bottles, into containers for shipping or disposal. - Decontaminate objects by cleaning with soap or solvents or by abrading with wire brushes, buffing wheels, or sandblasting machines. - Enter data into computers to record characteristics of nuclear events or to locate coordinates of particles. - Calibrate and maintain chemical instrumentation sensing elements and sampling system equipment, using calibration instruments and hand tools. - Immerse samples in chemical compounds to prepare them for testing. - Confer with scientists directing projects to determine significant events to monitor during tests. - Operate manipulators from outside cells to move specimens into or out of shielded containers, to remove specimens from cells, or to place specimens on benches or equipment work stations. 
- Analytical or scientific software — Gamma waste assay system GWAS; Radiological assessment display and control system RADACS; RESRAD
- Application server software — Google Compute Engine (GCE)
- Data base user interface and query software — Structured query language SQL
- Development environment software — Microsoft Azure
- Electronic mail software — Microsoft Outlook
- Industrial control software — Supervisory control and data acquisition SCADA software; Wonderware InTouch
- Object or component oriented development software — Oracle Java
- Office suite software — Microsoft Office
- Operating system software — Microsoft Windows; Microsoft Windows Server
- Platform interconnectivity software — Connectivity software
- Presentation software — Microsoft PowerPoint
- Spreadsheet software — Microsoft Excel
- Word processing software — Microsoft Word
- Air samplers or collectors — Air sampling devices
- Atomic absorption AA spectrometers — Neutron spectrometers
- Beta gauge measuring systems — Tritium/Noble gas monitors
- Calorimeters — Cryogenic microcalorimeters
- Desktop computers
- Dosimeters — Dose rate monitors; Neutron dose-rate meters; Whole body counters
- Electron microscopes
- Footwear covers — Protective shoe covers
- Frequency analyzers — Digital signal analyzers; Digital spectrum analyzers
- Gamma counters — Area gamma monitors; Gamma ray detectors; Sodium Iodide NaI scintillation detectors
- Geiger counters — Geiger-Muller counters
- Industrial nucleonic moisture measuring systems — Nuclear moisture/density gauges
- Ion analyzers — Proportional counters
- Ionization chambers
- Liquid scintillation counters
- Personal computers
- Portable data input terminals — Portable data collectors
- Protective coveralls
- Protective gloves
- Radiation detectors — Digital ratemeters; Neutron detectors; Portable survey radiation meters; Radiological detectors 
- Respiration air supplying self contained breathing apparatus or accessories — Self-contained breathing apparatus
- Respirators — Air purifying respirators; Airline respirators; Atmosphere supplying respirators; Pressure demand respirators
- Spectrometers — Gamma ray spectrometers; Multichannel analyzers; Portable spectroscopes
- Mathematics — Knowledge of arithmetic, algebra, geometry, calculus, statistics, and their applications.
- Physics — Knowledge and prediction of physical principles, laws, their interrelationships, and applications to understanding fluid, material, and atmospheric dynamics, and mechanical, electrical, atomic and sub-atomic structures and processes.
- Public Safety and Security — Knowledge of relevant equipment, policies, procedures, and strategies to promote effective local, state, or national security operations for the protection of people, data, property, and institutions.
- Chemistry — Knowledge of the chemical composition, structure, and properties of substances and of the chemical processes and transformations that they undergo. This includes uses of chemicals and their interactions, danger signs, production techniques, and disposal methods.
- English Language — Knowledge of the structure and content of the English language including the meaning and spelling of words, rules of composition, and grammar.
- Computers and Electronics — Knowledge of circuit boards, processors, chips, electronic equipment, and computer hardware and software, including applications and programming.
- Active Listening — Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
- Critical Thinking — Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems. 
- Monitoring — Monitoring/Assessing performance of yourself, other individuals, or organizations to make improvements or take corrective action. - Operation Monitoring — Watching gauges, dials, or other indicators to make sure a machine is working properly. - Reading Comprehension — Understanding written sentences and paragraphs in work related documents. - Speaking — Talking to others to convey information effectively. - Judgment and Decision Making — Considering the relative costs and benefits of potential actions to choose the most appropriate one. - Instructing — Teaching others how to do something. - Complex Problem Solving — Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions. - Learning Strategies — Selecting and using training/instructional methods and procedures appropriate for the situation when learning or teaching new things. - Mathematics — Using mathematics to solve problems. - Quality Control Analysis — Conducting tests and inspections of products, services, or processes to evaluate quality or performance. - Science — Using scientific rules and methods to solve problems. - Writing — Communicating effectively in writing as appropriate for the needs of the audience. - Active Learning — Understanding the implications of new information for both current and future problem-solving and decision-making. - Coordination — Adjusting actions in relation to others' actions. - Social Perceptiveness — Being aware of others' reactions and understanding why they react as they do. - Systems Analysis — Determining how a system should work and how changes in conditions, operations, and the environment will affect outcomes. - Time Management — Managing one's own time and the time of others. - Problem Sensitivity — The ability to tell when something is wrong or is likely to go wrong. It does not involve solving the problem, only recognizing there is a problem. 
- Deductive Reasoning — The ability to apply general rules to specific problems to produce answers that make sense. - Inductive Reasoning — The ability to combine pieces of information to form general rules or conclusions (includes finding a relationship among seemingly unrelated events). - Oral Comprehension — The ability to listen to and understand information and ideas presented through spoken words and sentences. - Oral Expression — The ability to communicate information and ideas in speaking so others will understand. - Near Vision — The ability to see details at close range (within a few feet of the observer). - Written Comprehension — The ability to read and understand information and ideas presented in writing. - Information Ordering — The ability to arrange things or actions in a certain order or pattern according to a specific rule or set of rules (e.g., patterns of numbers, letters, words, pictures, mathematical operations). - Selective Attention — The ability to concentrate on a task over a period of time without being distracted. - Written Expression — The ability to communicate information and ideas in writing so others will understand. - Perceptual Speed — The ability to quickly and accurately compare similarities and differences among sets of letters, numbers, objects, pictures, or patterns. The things to be compared may be presented at the same time or one after the other. This ability also includes comparing a presented object with a remembered object. - Category Flexibility — The ability to generate or use different sets of rules for combining or grouping things in different ways. - Speech Clarity — The ability to speak clearly so others can understand you. - Flexibility of Closure — The ability to identify or detect a known pattern (a figure, object, word, or sound) that is hidden in other distracting material. - Mathematical Reasoning — The ability to choose the right mathematical methods or formulas to solve a problem. 
- Number Facility — The ability to add, subtract, multiply, or divide quickly and correctly.
- Speech Recognition — The ability to identify and understand the speech of another person.
- Arm-Hand Steadiness — The ability to keep your hand and arm steady while moving your arm or while holding your arm and hand in one position.
- Far Vision — The ability to see details at a distance.
- Originality — The ability to come up with unusual or clever ideas about a given topic or situation, or to develop creative ways to solve a problem.
- Time Sharing — The ability to shift back and forth between two or more activities or sources of information (such as speech, sounds, touch, or other sources).
- Visual Color Discrimination — The ability to match or detect differences between colors, including shades of color and brightness.
- Documenting/Recording Information — Entering, transcribing, recording, storing, or maintaining information in written or electronic/magnetic form.
- Communicating with Supervisors, Peers, or Subordinates — Providing information to supervisors, co-workers, and subordinates by telephone, in written form, e-mail, or in person.
- Getting Information — Observing, receiving, and otherwise obtaining information from all relevant sources.
- Monitor Processes, Materials, or Surroundings — Monitoring and reviewing information from materials, events, or the environment, to detect or assess problems.
- Evaluating Information to Determine Compliance with Standards — Using relevant information and individual judgment to determine whether events or processes comply with laws, regulations, or standards.
- Identifying Objects, Actions, and Events — Identifying information by categorizing, estimating, recognizing differences or similarities, and detecting changes in circumstances or events.
- Processing Information — Compiling, coding, categorizing, calculating, tabulating, auditing, or verifying information or data.
- Updating and Using Relevant Knowledge — Keeping up-to-date technically and applying new knowledge to your job.
- Analyzing Data or Information — Identifying the underlying principles, reasons, or facts of information by breaking down information or data into separate parts.
- Interacting With Computers — Using computers and computer systems (including hardware and software) to program, write software, set up functions, enter data, or process information.
- Making Decisions and Solving Problems — Analyzing information and evaluating results to choose the best solution and solve problems.
- Inspecting Equipment, Structures, or Material — Inspecting equipment, structures, or materials to identify the cause of errors or other problems or defects.
- Organizing, Planning, and Prioritizing Work — Developing specific goals and plans to prioritize, organize, and accomplish your work.
- Estimating the Quantifiable Characteristics of Products, Events, or Information — Estimating sizes, distances, and quantities; or determining time, costs, resources, or materials needed to perform a work activity.
- Interpreting the Meaning of Information for Others — Translating or explaining what information means and how it can be used.
- Establishing and Maintaining Interpersonal Relationships — Developing constructive and cooperative working relationships with others, and maintaining them over time.
- Coordinating the Work and Activities of Others — Getting members of a group to work together to accomplish tasks.
- Scheduling Work and Activities — Scheduling events, programs, and activities, as well as the work of others.
- Coaching and Developing Others — Identifying the developmental needs of others and coaching, mentoring, or otherwise helping others to improve their knowledge or skills.
- Performing General Physical Activities — Performing physical activities that require considerable use of your arms and legs and moving your whole body, such as climbing, lifting, balancing, walking, stooping, and handling of materials.
- Training and Teaching Others — Identifying the educational needs of others, developing formal educational or training programs or classes, and teaching or instructing others.
- Provide Consultation and Advice to Others — Providing guidance and expert advice to management or other groups on technical, systems-, or process-related topics.
- Developing and Building Teams — Encouraging and building mutual trust, respect, and cooperation among team members.
- Thinking Creatively — Developing, designing, or creating new applications, ideas, relationships, systems, or products, including artistic contributions.
- Assisting and Caring for Others — Providing personal assistance, medical attention, emotional support, or other personal care to others such as coworkers, customers, or patients.
- Developing Objectives and Strategies — Establishing long-range objectives and specifying the strategies and actions to achieve them.
- Judging the Qualities of Things, Services, or People — Assessing the value, importance, or quality of things or people.

Detailed Work Activities

- Communicate safety or hazard information to others.
- Measure radiation levels.
- Train personnel in technical or scientific procedures.
- Collect environmental data or samples.
- Analyze environmental data.
- Record research or operational data.
- Advise others on management of emergencies or hazardous situations or materials.
- Set up laboratory or field equipment.
- Calibrate scientific or technical equipment.
- Maintain laboratory or technical equipment.
- Prepare operational reports.
- Clean objects.
- Prepare biological samples for testing or analysis.
- Collaborate on research activities with scientists or technical specialists.
- Face-to-Face Discussions — 98% responded “Every day.”
- Wear Common Protective or Safety Equipment such as Safety Shoes, Glasses, Gloves, Hearing Protection, Hard Hats, or Life Jackets — 96% responded “Every day.”
- Telephone — 86% responded “Every day.”
- Exposed to Radiation — 83% responded “Every day.”
- Electronic Mail — 78% responded “Every day.”
- Importance of Being Exact or Accurate — 65% responded “Extremely important.”
- Work With Work Group or Team — 64% responded “Extremely important.”
- Indoors, Environmentally Controlled — 68% responded “Every day.”
- Responsible for Others' Health and Safety — 68% responded “Very high responsibility.”
- Contact With Others — 54% responded “Constant contact with others.”
- Duration of Typical Work Week — 66% responded “More than 40 hours.”
- Coordinate or Lead Others — 43% responded “Very important.”
- Sounds, Noise Levels Are Distracting or Uncomfortable — 49% responded “Once a week or more but not every day.”
- Frequency of Decision Making — 45% responded “Every day.”
- Wear Specialized Protective or Safety Equipment such as Breathing Apparatus, Safety Harness, Full Protection Suits, or Radiation Protection — 45% responded “Once a week or more but not every day.”
- Indoors, Not Environmentally Controlled — 41% responded “Once a week or more but not every day.”
- Physical Proximity — 64% responded “Moderately close (at arm's length).”
- Consequence of Error — 39% responded “Extremely serious.”
- Impact of Decisions on Co-workers or Company Results — 32% responded “Very important results.”
- Freedom to Make Decisions — 44% responded “Some freedom.”
- Importance of Repeating Same Tasks — 40% responded “Very important.”
- Structured versus Unstructured Work — 31% responded “Limited freedom.”
- Responsibility for Outcomes and Results — 40% responded “Moderate responsibility.”
- Time Pressure — 34% responded “Once a week or more but not every day.”
- Exposed to Contaminants — 29% responded “Every day.”
- Letters and Memos — 35% responded “Once a week or more but not every day.”
- Outdoors, Exposed to Weather — 39% responded “Once a week or more but not every day.”
- Very Hot or Cold Temperatures — 30% responded “Once a month or more but not every week.”
- Exposed to Hazardous Conditions — 29% responded “Once a month or more but not every week.”
- Extremely Bright or Inadequate Lighting — 31% responded “Once a week or more but not every day.”
- Spend Time Using Your Hands to Handle, Control, or Feel Objects, Tools, or Controls — 42% responded “More than half the time.”
- Exposed to High Places — 37% responded “Once a month or more but not every week.”
- Frequency of Conflict Situations — 34% responded “Once a month or more but not every week.”
- Spend Time Standing — 55% responded “About half the time.”

|Title||Job Zone Three: Medium Preparation Needed|
|Education||Most occupations in this zone require training in vocational schools, related on-the-job experience, or an associate's degree.|
|Related Experience||Previous work-related skill, knowledge, or experience is required for these occupations. For example, an electrician must have completed three or four years of apprenticeship or several years of vocational training, and often must have passed a licensing exam, in order to perform the job.|
|Job Training||Employees in these occupations usually need one or two years of training involving both on-the-job experience and informal training with experienced workers. A recognized apprenticeship program may be associated with these occupations.|
|Job Zone Examples||These occupations usually involve using communication and organizational skills to coordinate, supervise, manage, or train others to accomplish goals. Examples include hydroelectric production managers, travel guides, electricians, agricultural technicians, barbers, court reporters, and medical assistants.|
|SVP Range||(6.0 to < 7.0)|

Interest code: RCI

Want to discover your interests?
Take the O*NET Interest Profiler at My Next Move.

- Realistic — Realistic occupations frequently involve work activities that include practical, hands-on problems and solutions. They often deal with plants, animals, and real-world materials like wood, tools, and machinery. Many of the occupations require working outside, and do not involve a lot of paperwork or working closely with others.
- Conventional — Conventional occupations frequently involve following set procedures and routines. These occupations can include working with data and details more than with ideas. Usually there is a clear line of authority to follow.
- Investigative — Investigative occupations frequently involve working with ideas, and require an extensive amount of thinking. These occupations can involve searching for facts and figuring out problems mentally.
- Attention to Detail — Job requires being careful about detail and thorough in completing work tasks.
- Integrity — Job requires being honest and ethical.
- Dependability — Job requires being reliable, responsible, and dependable, and fulfilling obligations.
- Stress Tolerance — Job requires accepting criticism and dealing calmly and effectively with high stress situations.
- Adaptability/Flexibility — Job requires being open to change (positive or negative) and to considerable variety in the workplace.
- Self Control — Job requires maintaining composure, keeping emotions in check, controlling anger, and avoiding aggressive behavior, even in very difficult situations.
- Analytical Thinking — Job requires analyzing information and using logic to address work-related issues and problems.
- Cooperation — Job requires being pleasant with others on the job and displaying a good-natured, cooperative attitude.
- Achievement/Effort — Job requires establishing and maintaining personally challenging achievement goals and exerting effort toward mastering tasks.
- Concern for Others — Job requires being sensitive to others' needs and feelings and being understanding and helpful on the job.
- Initiative — Job requires a willingness to take on responsibilities and challenges.
- Persistence — Job requires persistence in the face of obstacles.
- Leadership — Job requires a willingness to lead, take charge, and offer opinions and direction.
- Independence — Job requires developing one's own ways of doing things, guiding oneself with little or no supervision, and depending on oneself to get things done.
- Support — Occupations that satisfy this work value offer supportive management that stands behind employees. Corresponding needs are Company Policies, Supervision: Human Relations and Supervision: Technical.
- Relationships — Occupations that satisfy this work value allow employees to provide service to others and work with co-workers in a friendly non-competitive environment. Corresponding needs are Co-workers, Moral Values and Social Service.
- Independence — Occupations that satisfy this work value allow employees to work on their own and make decisions. Corresponding needs are Creativity, Responsibility and Autonomy.

Wages & Employment Trends

Median wage, employment, and industry data for Nuclear Technicians:

|Median wages (2020)||$40.48 hourly, $84,190 annual|
|Employment (2019)||6,700 employees|
|Projected growth (2019-2029)||Decline (-1% or lower)|
|Projected job openings (2019-2029)||500|
|Top industries (2019)|

Source: Bureau of Labor Statistics 2020 wage data and 2019-2029 employment projections. "Projected growth" represents the estimated change in total employment over the projections period (2019-2029). "Projected job openings" represent openings due to growth and replacement.

Job Openings on the Web

Sources of Additional Information

Disclaimer: Sources are listed to provide additional information on related jobs, specialties, and/or industries.
Links to non-DOL Internet sites are provided for your convenience and do not constitute an endorsement.

- American National Standards Institute
- American Nuclear Society
- Health Physics Society
- International Brotherhood of Electrical Workers
- National Registry of Radiation Protection Technologists
- North American Young Generation in Nuclear
- Occupational Outlook Handbook: Nuclear technicians
- Women in Nuclear
- To learn about family-centered practices and strategies to ensure that families feel welcomed in the program.
- To reflect on ways to assist families and program staff in their care of children with special needs.
- To understand how to build collaborative relationships with local agencies, schools, and businesses.

Introducing Family-Centered Practice

Because families are central to their children's development, particularly during the early-childhood years, they are partners, active participants, and decision-makers in their children's education process. As a result, family-centered practice is considered one of the indicators of quality in early-childhood education, programs, and services. At the heart of family-centered practice is the belief that families are the most important decision-makers in a child's life (Sandall, Hemmeter, Smith, & McLean, 2005). Family-centered practice also means that you, and all program staff, understand the important effect all family members have on each other and on the individual child. Each family member affects the others and the ways that the family functions. All family members are interconnected. From our families, we learn skills that enable us to engage in school and the workplace. When considering family-centered practice, you view each child or youth as part of a larger system; you view family members as a whole family system. As a T&C or manager, you help program staff become aware of and sensitive to the interactions and relationships taking place within the family, as well as the outside interactions and supports that affect them. It is important that your entire program staff understand that to maintain relationships with families and to work effectively together, you must learn about, respect, and understand the characteristics of each family and its support system. You can also consider the characteristics and stressors that may affect a family's involvement. What affects one family member can affect all family members.
A family is a complex system in which no one member can be viewed in isolation. Throughout the Virtual Laboratory School, we consider family-centered practice as an umbrella term that encompasses the beliefs and actions of people in your program. Family-centered practice is a set of beliefs and actions that influence how we engage families. Consider these core beliefs:

- Families are the most important decision-makers in a child's life.
- Families are unique and their differences enrich our programs.
- Families are resilient.
- Families are central to development and learning.
- Families are our partners.

As a T&C or program manager, you should make an effort to get to know the families in your program. You should convey and model for program staff the importance of understanding each child and family, as this creates opportunities for you to better support the children or youth in your care. You can learn more about family-centered practices in the Family Engagement course.

Welcoming Each Child and Family

Where do you feel welcomed? What happens in that place that makes you feel welcome? Do families feel welcome when they come to their child's classroom? How are families greeted when they call the program or ask a question? T&Cs and program managers take a leadership role when it comes to welcoming families to the program. As a leadership team, T&Cs and program managers should discuss how they will unite to support the program's mission to not only care for children but also to care deeply about the children's families. T&Cs and program managers set the tone for the program. They welcome parents in ways that make them feel connected to the program. Just as they care about how the children are welcomed, they pay attention to how parents are included in the program, not only at drop-off and pick-up times but throughout their child's day.
It is important that T&Cs or program managers ask parents how they want to be involved, and remind family members that they are a vital part of the program. Parents should be able to choose to be involved in many ways; for military families in particular, it is critical to have flexibility in how parents can participate. Equally important, T&Cs should have conversations with program staff, and observe their interactions with families, to ensure families are appropriately welcomed and have multiple ways to be involved in their child's particular classroom. Parents want to have meaningful conversations about the program and their child. As a T&C or manager, it is important that you help ensure this happens at the program level and within each classroom. In particular, T&Cs should take time to observe pick-up or drop-off interactions, and, in child care programs, review parent-teacher conference documentation, to help staff members reflect on how they communicate about the child, and how they can use these opportunities to learn more about families and form a collaborative relationship with each child's family. When parents volunteer, in the classroom or the larger program, they need to have clear directions, a purpose, and to know what the expectations are for them. Parents who serve on a program advisory board need to know that their voice is just as important as that of others on the board. In addition, a family handbook can assist T&Cs and program managers in talking with parents about program mission, philosophy, policies, procedures, roles, and responsibilities. For program managers, this can be a great informational item to share with families as you enroll them in your program, and it opens the door for you to share more about the program and answer families' questions. Asking parents for ideas to add or include in the family handbook is another way to demonstrate that families are important decision-makers and part of the program community.
Another central aspect of ensuring families feel welcome in your program is confirming that program staff update families about their child's day and week. Ongoing communication, including two-way communication, where parents and the child's caregiver exchange information about the child, is important. Working together on behalf of the child benefits all parties. Program staff must also be able to reach out to T&Cs or program managers when they are unsure how to approach particular issues or topics with children's families. T&Cs and program managers should be prepared to help program staff learn how to sensitively approach families. Sometimes T&Cs or program managers may be part of meetings between caregivers and parents, or they may help staff members prepare for meetings and ensure they have the appropriate time and space to communicate with families. T&Cs and program managers may have to explain to families why the program promotes developmentally appropriate practice, how the chosen curriculum supports youth development, and stages of development. Sharing information with families that helps them be better informed as parents is a component of program leadership. You may do this in the form of print resources, family workshop nights, and one-on-one or team meetings with individual families. Program managers should have clear feedback mechanisms in place to understand how the program is meeting families' needs. It is important to ask questions and provide families with different methods to give the leadership team feedback (questionnaires, a suggestion box, one-on-one conversations, family events). T&Cs and program managers should also focus on families' strengths. All families have strengths and all families have challenges. Model for others a focus on each family's strengths. T&Cs and program managers can engage staff in forming relationships with families and set the tone for a warm, welcoming program atmosphere.
T&Cs and program managers serve as models and leaders who demonstrate the true spirit of caring for families. Watch the video below to hear T&Cs and program managers describe the importance of connecting with every family every day and the benefits of creating a partnership with parents.

Helping Families Access Services for their Child

Meeting families' needs, ensuring they feel welcome, and providing assistance when there is a concern are all essential aspects of a T&C's and a program manager's work. In many cases, the T&C is the first person a staff member talks with when they have concerns about a child's development. Having an ongoing progress-monitoring system for each child that indicates his or her growth and development provides excellent documentation to share with parents. Keeping observation records at various times of the day is also critical to documenting how children are progressing over time as they learn new information and skills. Families should also be encouraged to share information about what they observe at home. Families often are the first to notice if their children are experiencing difficulties. T&Cs and program managers assist program staff and families with documentation and referrals to appropriate agencies when there are questions about a child's development. They should have access to phone numbers, addresses, and other written information concerning vision and hearing screenings, health-care providers, early intervention services, school district special services teams, and mental health service agencies. These can be difficult conversations to have with families, so great care and sensitivity must be used when relaying any concerns to children's families and sharing information about the child. When approaching a family with concerns about their child's development, you should be prepared:

- To ask families about their concerns and what they notice about their child's experiences at home.
- With documentation from the child or youth's experience in the program to help explain your concerns.
- With a list of resources to discuss together potential next steps.
- To listen to families. Hearing or talking about their child's development may be a very emotional experience for some families, especially if there are other stressors in their lives.
- To emphasize that you are here to help support the family and their child, and that your program staff wants to work as a team with the family to support their child in the best possible ways.

When programs enroll children who receive special services, the T&C and program manager should ensure that the children and teachers have adequate resources and supports. Teachers need training and consultation to work with children with disabilities. The T&C or program manager will need to work collaboratively to ensure that an infant's individualized family service plan (IFSP) outcomes or a preschool child's individualized education plan (IEP) goals are addressed. The child's family, the school district or agency personnel, and the child care program leaders are a team working on behalf of the child with disabilities. Successful inclusion of children with disabilities requires careful planning, intentional teaching, and ongoing communication among all team members. You should follow your Service's procedures for how to support children with special needs and what is needed regarding IFSP or IEP documentation. You can also access the KIT resources to work with program staff on developing strategies to support children with a variety of special needs. There are excellent resources available (see the reference list) that T&Cs and program managers may use for professional development for themselves and members of their staff.
Training and appropriate resources for staff and families are essential to successful inclusion and should be explicitly written into any IFSP or IEP document for children with disabilities attending a child-care or after-school program. It is important to remember that families, or sometimes children's special service providers (e.g., occupational therapists, speech therapists, physical therapists, early intervention specialists, etc.), can be a great resource for how to appropriately support a child with special needs. As a T&C or program manager, you can arrange meetings where you and the appropriate program staff come together to learn specific techniques to use in the classroom to support the child. When appropriate, you can even help families and program staff work together on how they might talk to other children in a child's classroom about their special abilities. See the Learn attachment from the PACER Center Champions for Children with Disabilities resource, Telling Classmates about Your Child's Disability May Foster Acceptance, for more information. Although this resource is framed for families, it has many helpful ideas and strategies for how families and staff may thoughtfully approach discussions around children with more significant unique abilities with the larger classroom community. Watch the video below to hear T&Cs and program managers describe their role in ensuring children with disabilities and special needs receive appropriate care and how to support staff members and families through the journey of caring for these exceptional children.

Collaboration with Community Partners

The program manager represents the program to local agencies, schools, and businesses.
Program managers may be involved in local groups such as a child care directors’ association, an interagency council focused on youth development and career training, a local community college work group, family advocacy groups and other associations or boards that promote child and family well-being. The program manager demonstrates a commitment to partnerships with other agencies and businesses in the community. As the face of the program, the manager acts with integrity and professionalism. Some community partnerships may involve program managers committing to carrying out policies and procedures as outlined in memorandums of understanding (MOUs). MOUs are sometimes developed between organizations to organize and facilitate agreed-upon services for children and parents. For example, a child-care program may have an MOU with a birth-to-3 early-intervention agency that indicates in writing that the agency will conduct developmental screening for infants and toddlers enrolled in the child-care center at no cost to the parents or child-care organization. An MOU for developmental screening may commit the program manager to obtaining signed permission forms from parents in order to have their infants screened, explaining the screening process, and including parents in the process. Arranging for space at the child-care center for the agency staff to conduct the screenings also may be part of the MOU. Such an agreement means parents learn more about their child's development without the need for families to travel to another agency or take time off work. When program managers work together with community agencies and businesses, the time spent on collaboration can result in enhanced service to the children, families, and staff. 
Other community collaboration activities that T&Cs and program managers can engage in include:

- Shared professional development opportunities (workshops, webinars, conferences) for themselves and their staff
- Opportunities to apply for grants, scholarships, and materials for staff (e.g., some civic groups and service organizations offer grants for schools and agencies for particular ideas, such as adding more science-related materials)
- Opportunities to benefit from volunteers from the community who can share their skills and expertise with the staff and children
- Fee assistance support for military families without access to an on-base child care provider, such as through Child Care Aware, https://usa.childcareaware.org/fee-assistancerespite/military-families/. Non-military-affiliated families, see: http://childcareaware.org/families/paying-for-child-care/federal-state-child-care-programs/
- Making information about child care subsidies available by sharing resources about the Child and Dependent Care Tax Credit (CDCTC - https://crsreports.congress.gov/product/pdf/R/R44993/7) and the Child Care and Development Fund (CCDF - https://www.benefits.gov/benefit/615)

Building collaborative relationships takes time and attention, but it often has meaningful outcomes in terms of enhancing the overall quality of the child and youth program. Groups that focus on professional support for T&Cs (e.g., a coaches group) or program managers (e.g., a child care directors' group) can provide those in leadership positions with a network to share their celebrations and challenges and to create new friends and colleagues among those working on behalf of children and families. Program managers are tasked with setting a positive, welcoming environment for all staff and families. Everyone has difficult days, and special circumstances can affect one's ability to remain smiling, optimistic, and cheerful.
Think about the people you know who maintain a positive tone even when having a difficult conversation. What words do they use? How do they maintain relationships with others who do not return their friendliness? The adults in your center will typically follow your lead in welcoming others and interacting with colleagues. You also represent your program on teams and groups outside of the program in order to facilitate connecting families to services they may need. Working with adults can be more challenging than working with children. Make a list of words, phrases, or actions you can take to encourage families, staff, and members of your network. Put this list in a place where you can refer to it when you need ideas to bring encouragement to others in your circles. You can probably remember a time during a moment of stress when you didn't behave as your 'best self.' Maybe you needed more support to better help you manage an event. Perhaps the form of support was even something as simple as getting a good night's rest. Similarly, families under stress operate better with supports in place. Think about circumstances when families of the children in your care encounter challenges, such as health care concerns, unemployment, or other financial concerns. ZERO TO THREE and CLASP (Center for Law and Social Policy) are working to increase awareness of federal and state-based policies that better support children and families. Review the 13 policies in the resource, Core Policies for Infants, Toddlers, and Families. If you could choose three policies from this resource to advocate for and support in your role as a program manager, which three policies would you choose? Compare your ideas with another program manager to see if their top concerns differ. Could you or a colleague contact a local legislator and describe how these policies would positively affect the path of the infants and toddlers in your center or family child care program?
Can you think of any parents who would be interested in learning about this resource? Share it with your direct care staff or families. Examining your collaboration skills as a team member may help you understand which areas of your work as a team leader need attention and which areas you are comfortable with at this time. Collaborating with families, program staff, and community partners enhances the quality of your child and youth program. Answer the questions on the attached document to focus on your collaborative leadership skills.

Memorandum of understanding (MOU): An agreement that indicates a common line of action between parties, often used in cases where there is not a legal commitment. View a sample template MOU.

References

Child Care Aware. (2016). Child Care Resource & Referral Search Form. Retrieved from http://www.childcareaware.org/ccrr-search-form/

CONNECT Modules. Retrieved from http://community.fpg.unc.edu/connect-modules/

Division for Early Childhood of the Council for Exceptional Children (DEC). (2014). DEC Recommended Practices. Retrieved from http://www.dec-sped.org/recommendedpractices

Ernst, J. D. (2015). Supporting family engagement. Teaching Young Children, 9(2), 8-9.

Head Start Center for Inclusion. Retrieved from http://headstartinclusion.org/

National Academies of Sciences, Engineering, and Medicine. (2019). A roadmap to reducing child poverty. Washington, DC: The National Academies Press. https://doi.org/10.17226/25246

Salloum, S. J., Goddard, R. D., & Berebitsky, D. (2018). Resources, learning, and policy: The relative effects of social and financial capital on student learning in schools. Journal of Education for Students Placed at Risk (JESPAR), 23(4), 281-303. Retrieved from https://www.tandfonline.com/doi/full/10.1080/10824669.2018.1496023. See also https://news.osu.edu/why-relationships--not-money--are-the-key-to-improving-schools/

Schweikert, G. (2012). Winning ways for early childhood professionals: Partnering with families. St. Paul, MN: Redleaf Press.

Tomlinson, H. B. (2015). Explaining developmentally appropriate practice to families. Teaching Young Children, 9(2), 16-17.
In the Affective Computing Laboratory at the Massachusetts Institute of Technology (MIT), scientists are designing computers that can read human emotions. Financial institutions have implemented worldwide computer networks that evaluate and approve or reject millions of transactions every minute. Roboticists in Japan, Europe, and the United States are developing service robots to care for the elderly and disabled. Japanese scientists are also working to make androids appear indistinguishable from humans. The government of South Korea has announced its goal to put a robot in every home by the year 2020. It is also developing weapons-carrying robots in conjunction with Samsung to help guard its border with North Korea. Meanwhile, human activity is being facilitated, monitored, and analyzed by computer chips in every conceivable device, from automobiles to garbage cans, and by software "bots" in every conceivable virtual environment, from web surfing to online shopping. The data collected by these (ro)bots—a term we'll use to encompass both physical robots and software agents—is being used for commercial, governmental, and medical purposes. All of these developments are converging on the creation of (ro)bots whose independence from direct human oversight, and whose potential impact on human well-being, are the stuff of science fiction.

Isaac Asimov, over fifty years ago, foresaw the need for ethical rules to guide the behavior of robots. His Three Laws of Robotics are what people think of first when they think of machine morality.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov, however, was writing stories.
He was not confronting the challenge that faces today’s engineers: to ensure that the systems they build are beneficial to humanity and don’t cause harm to people. Whether Asimov’s Three Laws are truly helpful for ensuring that (ro)bots will act morally is one of the questions we’ll consider in this book. Within the next few years, we predict there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight. Already, in October 2007, a semiautonomous robotic cannon deployed by the South African army malfunctioned, killing 9 soldiers and wounding 14 others—although early reports conflicted about whether it was a software or hardware malfunction. The potential for an even bigger disaster will increase as such machines become more fully autonomous. Even if the coming calamity does not kill as many people as the terrorist acts of 9/11, it will provoke a comparably broad range of political responses. These responses will range from calls for more to be spent on improving the technology, to calls for an outright ban on the technology (if not an outright “war against robots”). A concern for safety and societal benefits has always been at the forefront of engineering. But today’s systems are approaching a level of complexity that, we argue, requires the systems themselves to make moral decisions—to be programmed with “ethical subroutines,” to borrow a phrase from Star Trek. This will expand the circle of moral agents beyond humans to artificially intelligent systems, which we will call artificial moral agents (AMAs). We don’t know exactly how a catastrophic incident will unfold, but the following tale may give some idea. Monday, July 23, 2012, starts like any ordinary day. A little on the warm side in much of the United States perhaps, with peak electricity demand expected to be high, but not at a record level. 
Energy costs are rising in the United States, and speculators have been driving up the price of futures, as well as the spot price of oil, which stands close to $300 a barrel. Some slightly unusual automated trading activity in the energy derivatives markets over past weeks has caught the eye of the federal Securities and Exchange Commission (SEC), but the banks have assured the regulators that their programs are operating within normal parameters. At 10:15 a.m. on the East Coast, the price of oil drops slightly in response to news of the discovery of large new reserves in the Bahamas. Software at the investment division of Orange and Nassau Bank computes that it can turn a profit by emailing a quarter of its customers with a buy recommendation for oil futures, temporarily shoring up the spot market prices, as dealers stockpile supplies to meet the future demand, and then selling futures short to the rest of its customers. This plan essentially plays one sector of the customer base off against the rest, which is completely unethical, of course. But the bank's software has not been programmed to consider such niceties. In fact, the money-making scenario autonomously planned by the computer is an unintended consequence of many individually sound principles. The computer's ability to concoct this scheme could not easily have been anticipated by the programmers. Unfortunately, the "buy" email that the computer sends directly to the customers works too well. Investors, who are used to seeing the price of oil climb and climb, jump enthusiastically on the bandwagon, and the spot price of oil suddenly climbs well beyond $300 and shows no sign of slowing down. It's now 11:30 a.m. on the East Coast, and temperatures are climbing more rapidly than predicted. Software controlling New Jersey's power grid computes that it can meet the unexpected demand while keeping the cost of energy down by using its coal-fired plants in preference to its oil-fired generators.
However, one of the coal-burning generators suffers an explosion while running at peak capacity, and before anyone can act, cascading blackouts take out the power supply for half the East Coast. Wall Street is affected, but not before SEC regulators notice that the rise in oil futures prices was a computer-driven shell game between automatically traded accounts of Orange and Nassau Bank. As the news spreads, and investors plan to shore up their positions, it is clear that the prices will fall dramatically as soon as the markets reopen and millions of dollars will be lost. In the meantime, the blackouts have spread far enough that many people are unable to get essential medical treatment, and many more are stranded far from home. Detecting the spreading blackouts as a possible terrorist action, security screening software at Reagan National Airport automatically sets itself to the highest security level and applies biometric matching criteria that make it more likely than usual for people to be flagged as suspicious. The software, which has no mechanism for weighing the benefits of preventing a terrorist attack against the inconvenience its actions will cause for tens of thousands of people in the airport, identifies a cluster of five passengers, all waiting for Flight 231 to London, as potential terrorists. This large concentration of "suspects" on a single flight causes the program to trigger a lockdown of the airport and the dispatch of a Homeland Security response team to the terminal. Because passengers are already upset and nervous, the situation at the gate for Flight 231 spins out of control, and shots are fired. An alert sent from the Department of Homeland Security to the airlines that a terrorist attack may be under way leads many carriers to implement measures to land their fleets. In the confusion caused by large numbers of planes trying to land at Chicago's O'Hare Airport, an executive jet collides with a Boeing 777, killing 157 passengers and crew.
Seven more people die when debris lands on the Chicago suburb of Arlington Heights and starts a fire in a block of homes. Meanwhile, robotic machine guns installed on the U.S.-Mexican border receive a signal that places them on red alert. They are programmed to act autonomously in code red conditions, enabling the detection and elimination of potentially hostile targets without direct human oversight. One of these robots fires on a Hummer returning from an off-road trip near Nogales, Arizona, destroying the vehicle and killing three U.S. citizens. By the time power is restored to the East Coast and the markets reopen days later, hundreds of deaths and the loss of billions of dollars can be attributed to the separately programmed decisions of these multiple interacting systems. The effects continue to be felt for months. Time may prove us poor prophets of disaster. Our intent in predicting such a catastrophe is not to be sensational or to instill fear. This is not a book about the horrors of technology. Our goal is to frame discussion in a way that constructively guides the engineering task of designing AMAs. The purpose of our prediction is to draw attention to the need for work on moral machines to begin now, not twenty to a hundred years from now when technology has caught up with science fiction. The field of machine morality extends the field of computer ethics beyond concern for what people do with their computers to questions about what the machines do by themselves. (In this book we will use the terms ethics and morality interchangeably.) We are discussing the technological issues involved in making computers themselves into explicit moral reasoners. As artificial intelligence (AI) expands the scope of autonomous agents, the challenge of how to design these agents so that they honor the broader set of values and laws humans demand of human moral agents becomes increasingly urgent. Does humanity really want computers making morally important decisions? 
Many philosophers of technology have warned about humans abdicating responsibility to machines. Movies and magazines are filled with futuristic fantasies about the dangers of advanced forms of artificial intelligence. Emerging technologies are always easier to modify before they become entrenched. However, it is not often possible to predict accurately the impact of a new technology on society until well after it has been widely adopted. Some critics think, therefore, that humans should err on the side of caution and relinquish the development of potentially dangerous technologies. We believe, however, that market and political forces will prevail and will demand the benefits that these technologies can provide. Thus, it is incumbent on anyone with a stake in this technology to address head-on the task of implementing moral decision making in computers, robots, and virtual “bots” within computer networks. As noted, this book is not about the horrors of technology. Yes, the machines are coming. Yes, their existence will have unintended effects on human lives and welfare, not all of them good. But no, we do not believe that increasing reliance on autonomous systems will undermine people's basic humanity. Neither, in our view, will advanced robots enslave or exterminate humanity, as in the best traditions of science fiction. Humans have always adapted to their technological products, and the benefits to people of having autonomous machines around them will most likely outweigh the costs. However, this optimism does not come for free. It is not possible to just sit back and hope that things will turn out for the best. If humanity is to avoid the consequences of bad autonomous artificial agents, people must be prepared to think hard about what it will take to make such agents good. 
In proposing to build moral decision-making machines, are we still immersed in the realm of science fiction—or, perhaps worse, in that brand of science fantasy often associated with artificial intelligence? The charge might be justified if we were making bold predictions about the dawn of AMAs or claiming that “it’s just a matter of time” before walking, talking machines will replace the human beings to whom people now turn for moral guidance. We are not futurists, however, and we do not know whether the apparent technological barriers to artificial intelligence are real or illusory. Nor are we interested in speculating about what life will be like when your counselor is a robot, or even in predicting whether this will ever come to pass. Rather, we are interested in the incremental steps arising from present technologies that suggest a need for ethical decision-making capabilities. Perhaps small steps will eventually lead to full-blown artificial intelligence—hopefully a less murderous counterpart to HAL in 2001: A Space Odyssey—but even if fully intelligent systems will remain beyond reach, we think there is a real issue facing engineers that cannot be addressed by engineers alone. Is it too early to be broaching this topic? We don’t think so. Industrial robots engaged in repetitive mechanical tasks have caused injury and even death. The demand for home and service robots is projected to create a worldwide market double that of industrial robots by 2010, and four times bigger by 2025. With the advent of home and service robots, robots are no longer confined to controlled industrial environments where only trained workers come into contact with them. Small robot pets, for example Sony’s AIBO, are the harbinger of larger robot appliances. Millions of robot vacuum cleaners, for example iRobot’s “Roomba,” have been purchased. Rudimentary robot couriers in hospitals and robot guides in museums have already appeared. 
Considerable attention is being directed at the development of service robots that will perform basic household tasks and assist the elderly and the homebound. Computer programs initiate millions of financial transactions with an efficiency that humans can't duplicate. Software decisions to buy and then resell stocks, commodities, and currencies are made within seconds, exploiting potentials for profit that no human is capable of detecting in real time, and representing a significant percentage of the activity on world markets. Automated financial systems, robotic pets, and robotic vacuum cleaners are still a long way short of the science fiction scenarios of fully autonomous machines making decisions that radically affect human welfare. Although 2001 has passed, Arthur C. Clarke's HAL remains a fiction, and it is a safe bet that the doomsday scenario of The Terminator will not be realized before its sell-by date of 2029. It is perhaps not quite as safe to bet against the Matrix being realized by 2199. However, humans are already at a point where engineered systems make decisions that can affect humans' lives and that have ethical ramifications. In the worst cases, they have a profound negative effect. Is it possible to build AMAs? Fully conscious artificial systems with complete human moral capacities may perhaps remain forever in the realm of science fiction. Nevertheless, we believe that more limited systems will soon be built. Such systems will have some capacity to evaluate the ethical ramifications of their actions—for example, whether they have no option but to violate a property right to protect a privacy right. The task of designing AMAs requires a serious look at ethical theory, which originates from a human-centered perspective. The values and concerns expressed in the world's religious and philosophical traditions are not easily applied to machines.
Rule-based ethical systems, for example the Ten Commandments or Asimov’s Three Laws for Robots, might appear somewhat easier to embed in a computer, but as Asimov’s many robot stories show, even three simple rules (later four) can give rise to many ethical dilemmas. Aristotle’s ethics emphasized character over rules: good actions flowed from good character, and the aim of a flourishing human being was to develop a virtuous character. It is, of course, hard enough for humans to develop their own virtues, let alone developing appropriate virtues for computers or robots. Facing the engineering challenge entailed in going from Aristotle to Asimov and beyond will require looking at the origins of human morality as viewed in the fields of evolution, learning and development, neuropsychology, and philosophy. Machine morality is just as much about human decision making as about the philosophical and practical issues of implementing AMAs. Reflection about and experimentation in building AMAs forces one to think deeply about how humans function, which human abilities can be implemented in the machines humans design, and what characteristics truly distinguish humans from animals or from new forms of intelligence that humans create. Just as AI has stimulated new lines of enquiry in the philosophy of mind, machine morality has the potential to stimulate new lines of enquiry in ethics. Robotics and AI laboratories could become experimental centers for testing theories of moral decision making in artificial systems. Three questions emerge naturally from the discussion so far. Does the world need AMAs? Do people want computers making moral decisions? And if people believe that computers making moral decisions are necessary or inevitable, how should engineers and philosophers proceed to design AMAs? Chapters 1 and 2 are concerned with the first question, why humans need AMAs. 
In chapter 1, we discuss the inevitability of AMAs and give examples of current and innovative technologies that are converging on sophisticated systems that will require some capacity for moral decision making. We discuss how such capacities will initially be quite rudimentary but nonetheless present real challenges. Not the least of these challenges is to specify what the goals should be for the designers of such systems—that is, what do we mean by a “good” AMA? In chapter 2, we will offer a framework for understanding the trajectories of increasingly sophisticated AMAs by emphasizing two dimensions, those of autonomy and of sensitivity to morally relevant facts. Systems at the low end of these dimensions have only what we call “operational morality”—that is, their moral significance is entirely in the hands of designers and users. As machines become more sophisticated, a kind of “functional morality” is technologically possible such that the machines themselves have the capacity for assessing and responding to moral challenges. However, the creators of functional morality in machines face many constraints due to the limits of present technology. The nature of ethics places a different set of constraints on the acceptability of computers making ethical decisions. Thus we are led naturally to the question addressed in chapter 3: whether people want computers making moral decisions. Worries about AMAs are a specific case of more general concerns about the effects of technology on human culture. Therefore, we begin by reviewing the relevant portions of philosophy of technology to provide a context for the more specific concerns raised by AMAs. Some concerns, for example whether AMAs will lead humans to abrogate responsibility to machines, seem particularly pressing. Other concerns, for example the prospect of humans becoming literally enslaved to machines, seem to us highly speculative. 
The unsolved problem of technology risk assessment is how seriously to weigh catastrophic possibilities against the obvious advantages provided by new technologies. How close could artificial agents come to being considered moral agents if they lack human qualities, for example consciousness and emotions? In chapter 4, we begin by discussing the issue of whether a “mere” machine can be a moral agent. We take the instrumental approach that while full-blown moral agency may be beyond the current or future technology, there is nevertheless much space between operational morality and “genuine” moral agency. This is the niche we identified as functional morality in chapter 2. The goal of chapter 4 is to address the suitability of current work in AI for specifying the features required to produce AMAs for various applications. Having dealt with these general AI issues, we turn our attention to the specific implementation of moral decision making. Chapter 5 outlines what philosophers and engineers have to offer each other, and describes a basic framework for top-down and bottom-up or developmental approaches to the design of AMAs. Chapters 6 and 7, respectively, describe the top-down and bottom-up approaches in detail. In chapter 6, we discuss the computability and practicability of rule- and duty-based conceptions of ethics, as well as the possibility of computing the net effect of an action as required by consequentialist approaches to ethics. In chapter 7, we consider bottom-up approaches, which apply methods of learning, development, or evolution with the goal of having moral capacities emerge from general aspects of intelligence. There are limitations regarding the computability of both the top-down and bottom-up approaches, which we describe in these chapters. 
The new field of machine morality must consider these limitations, explore the strengths and weaknesses of the various approaches to programming AMAs, and then lay the groundwork for engineering AMAs in a philosophically and cognitively sophisticated way. What emerges from our discussion in chapters 6 and 7 is that the original distinction between top-down and bottom-up approaches is too simplistic to cover all the challenges that the designers of AMAs will face. This is true at the level of both engineering design and, we think, ethical theory. Engineers will need to combine top-down and bottom-up methods to build workable systems. The difficulties of applying general moral theories in a top-down fashion also motivate a discussion of a very different conception of morality that can be traced to Aristotle, namely, virtue ethics. Virtues are a hybrid between top-down and bottom-up approaches, in that the virtues themselves can be explicitly described, but their acquisition as character traits seems essentially to be a bottom-up process. We discuss virtue ethics for AMAs in chapter 8. Our goal in writing this book is not just to raise a lot of questions but to provide a resource for further development of these themes. In chapter 9, we survey the software tools that are being exploited for the development of computer moral decision making. The top-down and bottom-up approaches emphasize the importance in ethics of the ability to reason. However, much of the recent empirical literature on moral psychology emphasizes faculties besides rationality. Emotions, sociability, semantic understanding, and consciousness are all important to human moral decision making, but it remains an open question whether these will be essential to AMAs, and if so, whether they can be implemented in machines. 
In chapter 10, we discuss recent, cutting-edge scientific investigations aimed at providing computers and robots with such suprarational capacities, and in chapter 11 we present a specific framework in which the rational and the suprarational might be combined in a single machine. In chapter 12, we come back to our second guiding question concerning the desirability of computers making moral decisions, but this time with a view to making recommendations about how to monitor and manage the dangers through public policy or mechanisms of social and business liability management. Finally, in the epilogue, we briefly discuss how the project of designing AMAs feeds back into humans' understanding of themselves as moral agents, and of the nature of ethical theory itself. The limitations we see in current ethical theory concerning such theories' usefulness for guiding AMAs highlight deep questions about their purpose and value. Some basic moral decisions may be quite easy to implement in computers, while skill at tackling more difficult moral dilemmas is well beyond present technology. Regardless of how quickly or how far humans progress in developing AMAs, in the process of addressing this challenge, humans will make significant strides in understanding what truly remarkable creatures they are. The exercise of thinking through the way moral decisions are made with the granularity necessary to begin implementing similar faculties into (ro)bots is thus an exercise in self-understanding. We cannot hope to do full justice to these issues, or indeed to all of the issues raised throughout the book. However, it is our sincere hope that by raising them in this form we will inspire others to pick up where we have left off, and take the next steps toward moving this project from theory to practice, from philosophy to engineering, and on to a deeper understanding of the field of ethics itself.
The District is committed to making the schools free from sexual harassment and discrimination, harassment, intimidation, and bullying. Sexual harassment is a form of sex discrimination under Title IX of the Education Amendments of 1972 and is prohibited by both Federal and State law. The District strictly prohibits sexual harassment of students and staff by other students, employees, or other persons at school, within the educational environment or program, or at any District-sponsored or District-related activity. The District shall ensure that its students receive age-appropriate instruction about their rights to be free from sexual harassment, the District's procedures for reporting and investigating complaints of sexual harassment, and with whom any complaint should be reported and/or filed.

Sexual Harassment: Conduct on the basis of sex that satisfies one or more of the following:

- An employee of the District conditioning the provision of an aid, benefit, or service of the District on an individual's participation in unwelcome sexual conduct;
- Unwelcome conduct determined by a reasonable person to be so severe, pervasive, and objectively offensive that it effectively denies a person equal access to the District's educational program or activity; or
- Sexual assault as defined in 20 U.S.C. 1092, dating violence as defined in 34 U.S.C. 12291, domestic violence as defined in 34 U.S.C. 12291, or stalking as defined in 34 U.S.C. 12291.
Sexual Harassment also includes, but is not limited to, unwelcome sexual advances, requests, or other verbal, visual, or physical conduct of a sexual nature made by either a student or staff member within the educational setting under any of the following conditions:

- Submission to the conduct is explicitly or implicitly made a term or condition of an individual's academic status or progress; or
- Submission to, or rejection of, the conduct by the individual is used as a basis for academic decisions affecting the individual; or
- The conduct has the purpose or effect of having a negative impact on the individual's academic performance or of creating an intimidating, hostile, or offensive educational or work environment; or
- Submission to, or rejection of, the conduct by the individual is used as a basis for any decision affecting the individual regarding benefits or services, honors programs, or activities available at or through the District; or
- Deliberate written or oral comments, gestures, or physical contacts of a sexual nature or demeaning to one's gender, which are unwelcome or interfere with the school environment; or
- Implicit or explicit sexual behavior by a fellow student, District employee, or other person within the school environment that has the effect of controlling, influencing, or otherwise affecting the school environment; or
- Unwelcome suggestive, vulgar, or obscene letters, notes, posters, calendars, or other visual products, or derogatory comments, slurs, and/or jokes of a sexual nature that are sufficiently persistent and pervasive.

Hostile Educational Environment: A hostile educational environment is created when sexual harassment is sufficiently severe, objectively offensive, and persistent or pervasive.

Complainant: Any individual who is alleged to be the victim of conduct that could constitute sexual harassment.

Respondent: An individual who has been reported to be the perpetrator of conduct that could constitute sexual harassment.
Formal Complaint: A document filed by a Complainant or signed by the Title IX Coordinator alleging sexual harassment against a Respondent and requesting that the District investigate the allegation of sexual harassment. The formal complaint may be filed with the Title IX Coordinator in person, by mail, or by electronic transmission by using the contact information listed for the Title IX Coordinator, or by any additional method designated by the District.

Supportive Measures: Non-disciplinary, non-punitive individualized services offered as appropriate, as reasonably available, and without fee or charge to the Complainant or to the Respondent before or after the filing of a formal complaint or where no complaint has been filed. Supportive measures are designed to restore or preserve equal access to the District's educational programs or activities without unreasonably burdening either party, including measures designed to protect the safety of all parties or the District's educational environment. Supportive measures may include, but are not limited to, the following: counseling, extensions of deadlines or other course-related adjustments, modifications of work or class schedules, campus escort services, mutual restrictions on contact between parties, changes in work, leaves of absence, increased security, and other similar measures.

Title IX Coordinator: The Associate Superintendent of Human Resources is the Title IX Coordinator for the District. The mailing address for the Title IX Coordinator is 5606 South 147th Street, Omaha, Nebraska 68137. Phone: 402-715-8200. Email address: [email protected]. The Title IX Coordinator is identified in all District non-discrimination notices and publications and is directed to coordinate the District's compliance efforts. The District's Title IX Coordinator shall receive all reports of sex discrimination, including sexual harassment.
Any person may report sex discrimination including sexual harassment (whether or not the person reporting is the person alleged to be the victim of the conduct that could constitute sex discrimination or sexual harassment), in person, by email, by telephone, by using the contact information listed herein, or by any other means that results in the Title IX Coordinator receiving the verbal or written report. Working Days: any days when school is in session for students during the school year and all weekdays when school is in recess for summer vacation, excluding any national holidays. Reporting Sexual Harassment Any student (or parent/legal guardian) who believes that the student has been the victim of sexual harassment or harassment because of sex by a student, teacher, administrator or other employee of the District or by any other person who is participating in, observing, or otherwise engaged in activities, including sporting events and other extracurricular activities, under the auspices of the District, is encouraged to immediately report the alleged acts to an appropriate District employee or directly to the Title IX Coordinator. Any teacher, administrator, or other school official who has notice or received notice that a student has or may have been the victim of sexual harassment or harassment based upon the student’s sex by a student, teacher, administrator, or other employee of the District, or by any other person who is participating in, observing, or otherwise engaged in activities, including sporting events and other extracurricular activities, under the auspices of the District, is required to immediately report the alleged acts to an appropriate District employee or directly to the Title IX Coordinator. Any District employee who receives a report of sexual harassment, or harassment because of one’s sex, shall inform the Building Principal or Title IX Coordinator immediately. 
Upon receipt of a report, the Building Principal shall notify the District Title IX Coordinator immediately. The Building Principal may request, but shall not insist, that a formal complaint be submitted to the Title IX Coordinator. A written statement of the facts alleged or as reported will be forwarded as soon as practical by the Building Principal to the Title IX Coordinator. In the event a Building Principal is provided a written statement, the Building Principal shall forward the written statement to the Title IX Coordinator within 24 hours of a report being made, whether or not a Complainant decides to pursue a formal complaint. The District, upon receipt of a formal complaint, or upon receipt of actual knowledge of sexual harassment in an educational program or activity, shall respond promptly in a manner that is not deliberately indifferent. An educational program or activity includes locations, events, or circumstances over which the District exercises substantial control over both the Respondent and the context in which the sexual harassment occurs, and also includes any building owned or controlled by the District. The District’s response shall treat the Complainant and Respondent equitably by offering supportive measures to the Complainant and Respondent and by following a grievance process before imposition of any disciplinary actions or sanctions against the Respondent. The protections of this Rule apply to all students, employees, parents, and visitors to District property or District-sponsored activities or events. The District will investigate and address alleged prohibited conduct regardless of where it occurs. For any party under this Rule who is under 19 years old, all written notifications provided pursuant to this Rule will be directed to the party’s parents/guardians. 
The grievance process may be temporarily delayed and/or timelines extended for good cause as determined by the Title IX Coordinator with written notice to the parties explaining the reason(s) for the delay. Whenever the Title IX Coordinator determines that the District’s Sexual Harassment Grievance process should be suspended to cooperate with law enforcement, the Title IX Coordinator shall provide written notice to all parties of such determination and provide the parties with a reasonable estimate of the length of the anticipated suspension. Investigations begin with presumptions that the Respondent did not engage in any prohibited conduct, and that the Complainant is credible. A determination that the non-discrimination and harassment policy has been violated and credibility determinations will only be made at the conclusion of an investigation. In no event will past sexual behavior of a Complainant be considered, except in the limited circumstance where the evidence is offered to prove consent or that someone other than the Respondent committed the alleged misconduct. In determining whether prohibited conduct occurred, an objective evaluation of all relevant evidence will be made and the following will be considered: - the surrounding circumstances; - the nature of the conduct; - the relationships between the parties involved; - past incidents; and - the context in which the alleged incidents occurred. Sexual Harassment Grievance Process - Steps - Receipt of Notice of Prohibited Conduct - Upon receiving notice of conduct that could constitute prohibited conduct, the Title IX Coordinator or his/her designee will promptly contact the Complainant in a confidential manner to discuss the availability of supportive measures and to explain the process for filing a formal complaint. 
- Filing a Formal Complaint - An individual may file a formal complaint by submitting a written complaint in person, by mail, by telephone, or by e-mail to the Title IX Coordinator or his/her designee. If a verbal report of prohibited conduct is made, the Complainant will be asked to submit a written complaint. If a Complainant refuses or is unable to submit a written complaint, the Title IX Coordinator may cause a written summary of the verbal complaint to be made and either submit the written summary to the Complainant for signature or sign the complaint as provided below. If a Complainant does not file a formal complaint, the Title IX Coordinator in his/her sole discretion may sign a formal complaint and initiate the grievance process. The Title IX Coordinator will initiate the grievance process over the wishes of the Complainant only where such action is not clearly unreasonable in light of the known circumstances. - Investigation and Informal Resolution - Upon receipt of a formal complaint, the Title IX Coordinator shall appoint a separate investigator and decision-maker, provide a written notice of allegations to both the Complainant and the Respondent identifying the factual basis of the allegation including sufficient details known at the time, stating that the Respondent is presumed not responsible for the alleged conduct, and that a determination of responsibility will be made at the end of the grievance process. No disciplinary sanctions shall be applied without following the sexual harassment grievance process prescribed herein when a formal complaint has been filed. The notice of allegations shall be provided to both parties with sufficient time to prepare a response before any initial interview of the Respondent is conducted. 
Informal Resolution Process After the formal complaint is received and notice of allegations has been provided to all parties, the Title IX Coordinator may offer or request an informal resolution process, such as mediation or restorative justice, in lieu of a full investigation and determination. In no event will an informal resolution be facilitated to resolve a complaint of staff-on-student sexual harassment. In order for the informal resolution process to be implemented, all parties must voluntarily agree in writing. In the event that a resolution is reached during the informal resolution process and agreed to in writing by the parties, the terms of the agreed-upon resolution will be implemented, any alleged harassment will be eliminated, and the formal complaint will be dismissed. The Complainant is precluded from filing a second complaint concerning the original allegation. At any time prior to agreeing to a resolution, any party may withdraw from the informal resolution process and resume the grievance process. If the informal resolution process does not occur or is not utilized, the designated investigator will interview the Complainant, witnesses, and the Respondent, and review relevant records. District employees and students are expected to fully participate in investigations, but in no event will a Complainant be subjected to any disciplinary sanctions or consequences for refusing or failing to participate. The written notice of allegations shall also state that the parties have an equal right to retain an advisor of their choice, who may be but is not required to be an attorney, and that the parties have an equal right to inspect and review evidence obtained during an investigation. The District shall not be responsible for any fees or costs related to any advisor selected by either of the parties; however, if requested, the District shall provide a District employee to act as an advisor. 
The District shall provide an equal opportunity for each of the parties to present fact and expert witnesses and other inculpatory or exculpatory evidence during the investigation, and shall not restrict the ability of the parties to discuss the allegations or gather evidence. Within 20 working days of receiving the formal complaint, the District shall send written notice of any investigative interviews or meetings, and shall advise the parties and their advisors of all evidence gathered directly related to the allegations ten (10) working days prior to the issuance of the final investigative report, to allow the parties the opportunity to inspect, review, respond, and produce any additional evidence. Once the investigator’s report summarizing the relevant evidence is completed, the investigator simultaneously will send the report and supporting evidence to: (a) the parties for their review and written response; (b) the decision-maker; and (c) the Title IX Coordinator. The parties will have ten (10) working days to submit a response to the investigator’s report, including proposed relevant questions for the decision-maker to ask the other party and/or any witnesses. In his/her sole discretion, the decision-maker may re-interview parties and/or witnesses to ask follow-up questions. The decision-maker will review the investigation file and report, and may, but is not required to, take the following steps: (a) re-interviewing a party or witness, and (b) gathering additional evidence if deemed necessary. No later than 20 working days after receipt of the investigator’s report, the decision-maker simultaneously will issue to the parties a written determination as to whether the preponderance of the evidence shows that the Non-Discrimination and Harassment Policy was violated. 
The written determination shall be provided to each party and will include the following information as appropriate: (a) identification of the allegations, (b) a description of the procedural steps taken, (c) findings of fact, (d) a conclusion regarding application of the student discipline code or policies/procedures applicable to the facts, (e) a concise statement of the rationale supporting the conclusion on each allegation, (f) what, if any, disciplinary sanctions were imposed on the Respondent, (g) what, if any, remedies will be instituted, and (h) notice of the appeal procedure. The decision-maker’s determination is final, unless a timely appeal is filed. The party seeking an appeal shall file written notice with the Title IX Coordinator no later than 20 working days after the date of the decision-maker’s written decision or after the date that a formal complaint is dismissed. The written notice shall state the grounds for the appeal. The Title IX Coordinator will designate an appeal officer to decide the appeal and notify all parties that an appeal has been filed. No later than 10 working days after an appeal is filed, the appealing party may submit a written statement in support of the appeal. The other party or parties may submit a written statement no later than 10 working days after the appealing party’s written statement is submitted, or 10 working days from the appealing party’s deadline if the appealing party does not submit a written statement. Written statements shall be submitted to the Title IX Coordinator, who will provide them to the other party or parties and the appeal officer when received. The appeal may be considered for the following reasons only: (a) procedural irregularity that affected the determination, (b) new evidence that was not reasonably available at the time the determination was made, or (c) conflict of interest or bias on the part of the Title IX Coordinator, investigator, or decision-maker. 
The review of the investigation and written determination may include any of the following steps: (a) review of the evidence gathered and written reports and determinations, (b) re-interviewing a party or witness, and (c) gathering additional evidence if deemed necessary by the appeal officer. The appeal officer shall prepare a written response to the appeal within 15 working days after the deadline to submit written statements in support of or challenging the determination. Copies of the written response on appeal shall be provided simultaneously to the Complainant, the Respondent, and the Title IX Coordinator. The decision of the appeal officer shall be final. If the investigation and decision making result in a finding that the Complainant’s report was factual and the Respondent or other individuals violated the Non-Discrimination and Harassment Policy, the District will take prompt, corrective action to ensure that such discriminatory conduct ceases and take appropriate action to prevent any recurrence. The District will make all reasonable efforts to remedy discriminatory effects on the Complainant and any others who may be affected. Disciplinary actions and the range of sanctions and remedies for responsible persons shall be consistent with the District’s existing student code of conduct, professional code of conduct and staff discipline, Board of Education policies and rules and District procedures, and any applicable state and federal laws, and shall be implemented at the conclusion of the process. A formal complaint will be dismissed if the conduct alleged: - Did not constitute sexual harassment as defined in Title IX and/or Title IX regulations; - Did not occur in the District’s educational programs or activities; or - Did not occur against a person in the United States. 
A formal complaint may be dismissed if at any time during the investigation: - The Complainant notifies the Title IX Coordinator in writing that the Complainant would like to withdraw the formal complaint or any allegations therein; - The Respondent is no longer enrolled or employed by the District; or - Specific circumstances prevent the District from gathering evidence sufficient to reach a determination as to the formal complaint or allegations therein. Available Interim Measures The District shall take steps to ensure equal access to its educational programs and activities and protect the Complainant as necessary, including taking interim measures during the process and before the final outcome of an investigation. The District shall notify the student and/or his or her parents/guardian of the options to avoid contact with the alleged Respondent where available. As appropriate, the District shall consider a change in academic and extracurricular activities or the student’s living, transportation, dining, and/or working situation. The District shall assess opportunities to provide increased monitoring, supervision, or security at locations or activities where the alleged discrimination and sexual harassment occurred. Nothing in this rule shall prohibit the District from placing a non-student employee on administrative leave during the pendency of the grievance process, nor from removing a Respondent from the educational program on an emergency basis if the District undertakes an individualized safety and risk analysis and determines that an immediate threat to the physical health or safety of any student justifies removal and provides the Respondent with notice and an opportunity to challenge the decision immediately following the removal. 
Potential remedies for students who have been subjected to sexual harassment or harassment because of sex include, but are not limited to: - Direct intervention or consequences applied to the Respondent; - Supportive Services available to either the Complainant or the Respondent through the District’s assistance program; - The District may issue statements to its student population, staff or the community making it clear that the District does not tolerate sexual harassment or harassment because of sex and will respond to any reports about such incidents; - Non-discrimination training for students, employees, or parents/guardians and families. A student who violates the District policies prohibiting sexual harassment will be subject to intervention or discipline consistent with the Code of Student Conduct. Such intervention or discipline may include counseling, parent/guardian conference, detention, suspension, transfer, or expulsion. Incidents of sexual harassment, depending on their nature, will be referred to law enforcement and reported to child protective services, as appropriate. No District employee, representative, or agent may intimidate, threaten, coerce, or discriminate against any individual for the purpose of interfering with any rights or privileges protected by this rule or because the individual has made a report or complaint, testified, assisted, or participated or refused to participate in any manner in an investigation, proceeding, or determination under this rule. The District prohibits retaliation against any participant in the reporting, complaint, or grievance process. A separate uniform complaint may be filed if retaliation occurs against any individual involved in the processing of a discrimination, harassment, or bullying complaint. Each complaint shall be investigated properly and in a manner which respects the privacy of all parties concerned. 
Follow-up with the student or employee will occur promptly to ensure that the harassment and/or retaliation has stopped and that there will be no further retaliation. All persons are prohibited from knowingly providing false statements or knowingly submitting false information during the complaint process, and any person who does so may be subject to disciplinary action outside of and in addition to any disciplinary action under this Rule. Confidentiality and Retention of Investigation Information and Records Except as necessary to complete a thorough investigation and grievance process as required by law, and except for disclosure of the information, evidence, and records required to be disclosed to the parties or their designated representatives, the identity of the Complainant(s), Respondent(s), and witnesses, and the information, records, and evidence gathered in the investigation will be maintained in strict confidence by the District. The District is not responsible for, nor can it control, any re-publication or disclosure of such information, evidence, or records by the participating witnesses, parties, or representatives. The Title IX Coordinator will retain investigation files for a time period of no less than seven (7) years, and investigation determination notices will be permanently retained in individual employee and student files. Where a charge or civil action alleging discrimination, harassment, or retaliation has been filed, all relevant records will be retained until final disposition of the matter. The District will provide annual training to employees on identifying and reporting acts that may constitute discrimination, harassment, or retaliation. The Title IX Coordinator, designated investigators, designated decision-makers, designated appeal officer, and any District administrators who are designated to facilitate informal resolution processes, will receive additional annual training on this policy and implementation of the grievance process. 
The District will provide, as appropriate, instruction to students regarding discrimination, harassment, and retaliation. Title VI of the Civil Rights Act of 1964 Title IX of the Education Amendments of 1972 34 C.F.R. §§ 106.30, 106.44, 106.45 (2020) Section 504 of the Rehabilitation Act of 1973 Age Discrimination Act of 1975 Nebraska Equal Opportunity in Education Act
Origin of the Puranga Chiefdom Puranga chief Rwot Andrea Olal Adiri and his wives dressed in Cenu, Acholi traditional wear. Puranga was a small divine chiefdom in the 17th century, before the coming of the Europeans; it flourished until the British took over administration of the Acholi territories and incorporated it under Pax Britannica. Prior to colonial rule, Puranga chiefs played a dual role as governors and priests of oracles. The people believed their god, Jok Olal Teng, the sacred being, supported their chiefs to make wise decisions. A small clan called Kal Orimu was chosen by the god as supreme, denying the other Puranga clans the right to wage war in order to attain the chieftaincy. This greatly helped Puranga to become a strong, monolithic traditional chiefdom. Children were told of the origin of each rite, taboo, and custom so as to understand them properly. Children were also taught a detailed study of Lwo history, and thus Alunga Lujim acquired a good knowledge of his people. When unidentified brown strangers, with the aid of the Lutugu chief, destroyed the Lwo chiefdom, the inhabitants of the Tekidi settlement dispersed in confusion, with the majority led towards the Nile Valley by Rwot Owiny himself. According to Alunga Lujim, the divine kingdom of Puranga Kal Orimu came into being after the fall of Rwot Owiny Wod Pule Rac Koma, the last Rwot of the Lwo at Tekidi. A very small group of people took refuge in the mountain; among them was a man named Opolo, a junior master of ceremony in the palace of Rwot Owiny Wod Pule. The victorious Lutugu warriors who occupied the Lwo capital after the war soon departed from the devastated settlement and returned to Lutuguland, and the small group of refugees from the mountain came to look for food in the ruined settlement. They did not know what to do, since all the people had left for the Nile Valley long before. Having remained in hiding a long time, they thought they were the only survivors and did not bother to look for the main body of the Lwo. 
Fortunately, at the ruined settlement in search of food, they found that the Lutugu warriors had not destroyed the granaries of simsim, the beehives, the (Obato) yams, and the (Okono) pumpkins. When rain started to fall early that year, sorghum sprouted quickly, and within a few months they had plenty of crops to harvest. After a harvest so heavy that they could not carry it, the Kal people settled in the ruined settlement permanently. The oracle then appeared to them, showing them the location where the King’s sacred spear, stool, drum, and pot of rain were kept. They collected them and consecrated them with the blood of a lamb, and this gave them the burden of observing the taboo in a special way at all times. As Opolo was the only man among them who knew the secrets of the oracle and rituals, though not in detail, he was appointed custodian of these articles and priest of the oracle. The group called themselves “Orimungole”, meaning “the survivors”. They decided to live in the area for some time and then made peace with the Lango Olok, for fear of alienation since they were few in number. They thus became free to move within the territories of the Lango Olok. Opolo remained as priest of the oracle and a leader described as a wise and generous man by the people of Orimungole. When he died of old age, his son Kuku, who had learnt the secrets of the oracle, succeeded him. Opolo named his son Kuku because the oracle ordained it so, to remind the small group of their ancestor, Kuku Lubanga. Kuku was later succeeded by his son Obwor, and the order of succession to the priesthood of the oracle continued. Omero succeeded Obwor, and it was under the leadership of Obwor and Omero that the group migrated to the Lango Olok country. However, Alunga Lujim was not certain of the exact date. While in the Lango country, Oranga succeeded his father Omero, and Oranga led them to the foot of Got Lakwar, where he died leaving two children, Beno and Jule (now Pa-Jule). 
His first son Beno became Rwot, and Jule broke off with a small group under his rule at the foot of Lakwar rock. Beno migrated twice during his rule, moving from Lakwar rock to Otuke rock and staying there for a long time. THE BOBI CLAN Here the Lamur clan from Sudan joined them at Otuke, followed by the Bobi clan, who had separated from Jo Padibe, and the Parwec clan with their small chiefdom. Rwot Ananians Akera of Bobi Puranga showing a photo of his late father, Rwot Andrea Olal Adiri. They were followed immediately by the Lukwor clan of present-day Odek sub-county in Gulu. The Jo Paikat clan from Sudan joined Parwec, and the Jo Bolo clan came with their people and chief from Sudan. The Palaro also arrived from their kinsmen at the Lute settlement. The Gem clan later came from Obo in Sudan. Jo Palenga joined them from the Jule (Pajule) group. The last group to arrive was that of the ancestor of the Aywer clan, Okot Odok Dyang. During this period Owiny took over from Beno; he was succeeded by his son, Cua Agoda, who was in turn succeeded by his son Ogwang Omor. During this time the position of the priesthood became more and more that of a ruler, moving from semi-kingship to divine kingship. This is how the ancestors of Alunga Lujim came into the line of succession to the throne of Kal Orimu of Puranga. The kingdom was called Kal Orimu because, as stated previously, the small group of survivors from Owiny’s great kingdom led by Opolo called themselves “Orimungole”, now shortened to Orimu. Therefore, when the position of priest of the oracle became that of a divine king, the word ‘Kal’, which means “palace”, was added to Orimu to form Kal Orimu, meaning “palace of the survivors”. Under the rule of Ogwang Omor the Puranga group settled at the foot of Iwet rock near the Adodoi river, and at Iwet rock Ogwang Omor died. Chua Omero took over and migrated to Orunya near the River Agago, where he also died, and his leadership was taken over by Cunyu Agara, who also migrated along the Agago river, settling near a rock which had no name. 
The oracle ordained this rock to be the national shrine of Puranga and its place of abode. The oracle, which had no specific name apart from Jok, gave itself the name Olal Teng, and the unnamed rock of Puranga was named after Olal Teng. At the same time the position of Cunyu Agara became strong, and he was king and ritual head of Puranga. On the instruction of the oracle Olal Teng, Cunyu Agara moved his people to Apiri, where he produced only one son, named Olwoc Mutu, meaning “the future was dark”, because he was not sure Olwoc would succeed him. The belief came true when misfortune overtook Olwoc Mutu after his father died. When Rwot Cunyu Agara died of old age during a famine, he left two wives; his first wife and queen, called Kiccaa Auru, did not produce a child. His second wife produced Olwoc Mutu, but because the queen had been anointed to the throne with Cunyu Agara, she had control of the petty belongings of Cunyu Agara as well as the sacred articles pertaining to the kingship. This was in accordance with the customs. At the time of Cunyu Agara’s death there was a man called Ogwang Okok, their cousin from Parumu, who was staying in the palace of Puranga. Although he was only a visitor, he participated in digging the grave to bury Rwot Cunyu Agara, and he could not return to Parumu until the cleansing ritual was done; he was thus forced to remain in the palace with the bereaved family. At this period the famine was alarming, and because elders stayed behind with the bereaved families during Cola, the period between the first and second funeral rites, the queen had a very hard time providing food for the elders. Whenever she asked Olwoc Mutu to take food to the elders, mean as he was, Mutu would divert the food to his house, ‘Otogo’. But when the queen gave food to Ogwang Okok to serve the elders, he did so and won the love of the queen and elders alike. The queen Kiccaa Auru then taught Ogwang Okok the secrets of the oracle; unfortunately she did not know the whole process well. 
When she learned that Ogwang Okok knew all the process, she handed over the whole regalia to him, but he failed to keep them sacredly in his Otogo. When the famine disappeared and plenty of food was harvested, the elders thought of enthroning the lodger Ogwang Okok as Chief of Puranga Kal Orimu. This was because Mutu was too mean and dirty to hold the honor and dignity assigned to the position of priest of the oracle and king of Puranga, a position that required a person with a clean heart and benevolence. But when Mutu learned of this he grew annoyed and committed the abominable act of burying the sacred stool, spears, and pot of rain in the soil in the jungle. He thought the elders had placed a stranger on the throne of his father. As a result of Mutu’s action the Earth God was angered and blocked rain from falling. Many people again starved to death. Suspicious of the elders’ attempts to persuade him to return the regalia, Mutu resisted; hiding their anger from their faces, the elders persuaded him in vain until special elders he trusted won his heart. He had planted the regalia spear south of the Puranga chiefdom, smearing one side with Latuk (black carbon from the roof of a grass-thatched hut) and the other side with Pala (red oxide). This made rain fall only on the Lango side and never on the Acholi side. He removed the regalia from the earth where he had buried them, believing he had taught the elders a lesson about playing with his father’s throne. When the elders performed the ritual, a miracle happened and rain fell on the ground that same day. As soon as Olwoc Mutu brought and placed the regalia in front of the elders, they arrested him and tried him in the elders’ court. At that time he was married and had five children. The tribunal of Puranga found him guilty of abomination and sentenced him to death; later they took him to the grazing field, where he was executed by stoning. This made Ogwang Okok, their cousin who had come as a lodger to the palace, the enthroned chief of Puranga. 
Although he could work very well as King, Ogwang Okok did not occupy the position of head of rituals, and this left the Puranga kingdom divided. The office of ritual head of Puranga and head of the oracle continued to be occupied by the son of the executed Olwoc Mutu, because it was impossible to give that office to a stranger. The teaching of the secrets of the oracle and the sacred articles was done by senior members of the royal families, and outsiders were not expected to know them. The lessons taught included an elaborate oral tradition relating to the origin of the great kingdom of Rwot Owiny of Tekidi, great wars, the names of past Rwodi (Kings) in order of their succession to the throne, great famines, and other memorable events which made them famous or infamous. These were mainly the events that brought about the break-up or separation of one group from another. When Ogwang Okok died of old age, he left his only son Ogwal Lameny as a young boy, and the elders could not install him; his widow Aroko Nyacca thus occupied the throne of Puranga until Lameny came of age. In Puranga the song of Aroko Nyacca goes: “Eeeh Aroko Nyacca nen tok lango aparo macalo man aparo pi lacede… eeh nyacaa Lango omiro myero ki tong……” Lameny was thus installed without proper rituals, because the people of Orimu who knew the ritual boycotted the ceremony. He ruled well but with a number of difficulties; he married early and could not produce a child because the oracle and the Earth God were angry. Lameny had to consult a magician from Bunyoro, who was powerful in curative medicine; he then produced a son. The Nyoro magician named the child Ogwal, meaning “the Frog”. The birth was a great joy to the royal house. As soon as Ogwal was weaned from the breast, Rwot Lameny died; this misfortune overtook the Puranga people because Ogwal was too young to be seated on the throne. His mother, Queen Akongo, had to act as ruler of Puranga. She was assisted and advised by Olunyu Acuga of the Palaro clan. 
Queen Akongo did well, and her son Ogwal took over when he came of age; by the time the British colonial masters took control of Acholi, Ogwal Lameny was the Rwot of Puranga. The transfer of the royal line from Kal Orimu to Ogwang Okok had been effected by the intrigue of the childless Queen Kicaa Auru, who was thought by the people of Puranga to be the cause of the repeated misfortune attached to the throne of Puranga, which his grandfather Ogwang Okok had snatched from the house of Cunyu Agara of Kal Orimu. Ogwal took precautionary measures against it. With the aid of a Nyoro magician, Ogwal was able to produce many children. But it is believed that because of the fraudulent accession and the curse of the executed Olwoc Mutu, the house of Rwot Ogwal Lameny never produced any person of integrity. Most of his children took to heavy drinking, and under the strict supervision of the British administration they could not rule well. Okello Mwaka takes leadership. Fortunately for the Puranga kingdom, although Ogwal Lameny was a poor ruler, Okello Mwaka of Bobi clan, who played the role of chief executive and prime minister of the kingdom, administered Puranga's affairs with superb diplomacy and administrative ability. Mwaka was a man of considerable experience: he had travelled widely and was gifted with many languages. The British officials found him as capable as Sir Apolo Kagwa of Buganda, and he was therefore able to win the recognition of the new colonial administration. This greatly disturbed Ogwal Lameny, who feared that Mwaka would replace him. Ogwal Lameny immediately began to exploit a quarrel between Mwaka and some men of the Palenga clan. In collusion with these men he had Okello Mwaka murdered. This perturbed the British authorities, as they had placed great trust in Mwaka, who was then giving them indispensable assistance in nursing the newly established colonial administration in this part of Acholi.
After investigation the culprits were arrested and punished, and Rwot Ogwal Lameny was exiled to Masindi, where he died. The Puranga kingdom was thus divided into two parts. Andrea Olal, the elder son of Okello Mwaka, was appointed Rwot of the western part of Puranga; this was done by the British as a reward for the meritorious conduct of his murdered father, Okello Mwaka. Petero Owiny Cumun, son of Ogwal Lameny, ruled the eastern part of Puranga. Unfortunately he did not live long; he died while attending a conference of chiefs in Kitgum. Some people who knew Rwot Petero Owiny Cumun intimately believed that the cause of his death was alcoholic poisoning, for he had been a heavy drinker, but no post-mortem was done to determine it. Those from the house of Ogwal themselves believed that he was bewitched, because at the time there were rumors that the British were willing to reunite the divided Puranga kingdom. They feared that Owiny Cumun would rule the united Puranga by virtue of his birth; hence they killed him. His younger brother, Eliya Aboga, took over as Rwot of eastern Puranga. Andrea Olal justified his appointment as Rwot of western Puranga. Like his father, Okello Mwaka, he proved to be a great leader; in fact he outshone all the other Rwodi in Acholi. When the Puranga kingdom was reunited, the descendants of Ogwang Okok lost their stolen chiefdom to the house of Okello Mwaka of Puranga Bobi. This completed the transfer of the royal line from Kal Odokotaya to Bobi. Andrea Olal, however, was the last Rwot of Puranga. He died of pneumonia at St Mary's Hospital Lacor in Gulu on November 18, 1968, and was buried in his palace in Bobi. Who is the heir to Andrea Olal? The heir apparent to the throne of Puranga was Ananiya Kerawegi Akera. He too displayed great capacity for leadership when he acted as Secretary General of Acholi for a short time.
Trained as a British cadet to lead the King's African Rifles and to participate in governance, Akera is one of the Acholi leaders who helped Uganda gain independence from British colonialism in 1962. He took part in political party agitation under the then Uganda National Congress before it transformed into the Uganda People's Congress (UPC). Akera, 99, was consecrated chief of Puranga Bobi in 2006, after years of civil disturbance had kept him from the seat of his father. Ker (power) in Puranga followed the destoolment method. Power intrigue mars Acholi chiefs. Rwot Oola Peter Ojigi of Alokolum does not believe that Rwot Olal Andrea ever existed. After 1996 the government revived kingship countrywide, including in Acholi. Politically paranoid about the existence and integrity of Rwot Akera, Ojigi, without moral authority, stole the stamp belonging to the Acholi Ker Kal Kwaro on November 30 and wrote a stinging letter to discredit the Rwotship of Bobi. In his communication to the current chief of East Puranga, Ochan Luwala, Ojigi argued that the consecration of Akera as a chief would only divide a united Puranga and should therefore be stopped. His action was condemned by the prime minister of the cultural institution, Kenneth Oketa. Oketa said Ojigi should refrain from denouncing any clan head and urged him to concentrate on the affairs of his own subjects. Oketa said that within the KKA there is no office charged with removing or replacing any chief chosen by his people; Ojigi had thus blundered with impunity and needed to apologize to his counterpart Akera. As if that were a launch pad for stirring up sentiment among the smaller Acholi chiefs, on November 20, at the burial of the Rwot of Patiko Kal, Mutu Lagara, the Rwot of Pajule, George William Lugai, mocked the funeral rite of Mutu by shamelessly denouncing the chief of Lamogi, Paul Olango, son of the long-deceased Rwot Otto Yai, as not a chief.
This forced KKA Prime Minister Mr Kenneth Oketa to angrily criticize the actions of Ojigi and Lugai as unconstitutional. “He has no capacity to deny any chief who is chosen by his people, and this is his hereditary right,” Oketa said. Ojigi went as far as saying that Akera and Bobi Puranga, as far as they knew, had never existed in the history of any chiefdom. But Akera dismissed them as an illiterate and frustrated bloc of men, corrupt to the tune of famine. Chieftaincy in Puranga could involve the killing of a chief, just as among the Asante of West Africa. From 1720, Ogwang Okok, an outsider from Parumu, a clan in Parabongo or Paimol, came and occupied the throne of Puranga. His line ruled from 1720 to 1914, when his descendant Ogwal Abwang, grandson of Ogwang Okok, plotted to kill Okello Mwaka, his commander-in-chief and translator, and was arrested by the British for the assassination of Okello Mwaka. He was exiled to Masindi and died there. This gave birth to the division of Puranga into two: Puranga Aywer, ruled by Ogwal Abwang's son Owiny Cumun, and Puranga Oqita, with Olal Andrea anointed in charge. After the death of Owiny Cumun, his son Aboga Eliya was arrested in 1942 when he caused tribal clashes between Puranga and Payira. Olal Andrea reclaimed the glory of the Puranga chiefdom from the line of a foreigner, none other than Ogwang Okok, which had ruled for over two centuries. The restored Puranga chiefdom then lasted nearly a century, from 1914 to 2010. Puranga includes much smaller chiefdoms: Parwec, Paikat, Bolo Pakena, Bolo Lamac, Ot Ngec, and Kal Orimu, the supreme. That the chiefdom is under Akera's reign is justified by the regalia, which include a drum, a spear and the Amer, all evident at his father's palace in Bobi. These regalia, once set ablaze, jump out of the house by themselves, and if the rituals are not performed at the appointed time, many children in the royal family pass on. In late 2000, Akera had to perform rituals with elders in the royal palace to see that disease and sudden death did not occur.
Thus the culture of the Acholi people is eroding today. No chief may decide for his subjects without the decision of the council of chiefs and the head of the oracles; yet today chiefs are termed Rwodi Kalam, meaning those whose authority rests upon the decision of the government. Children born in the houses of the priests of the oracles and of the Rwot of Puranga were carefully trained in the secret rituals and functions of the oracles.
AMEG Strategic Plan This strategic plan was prepared by the independent policy group, AMEG (the Arctic Methane Emergency Group), comprising a multidisciplinary team of leading scientific experts, system engineers, communicators and concerned citizens. The purpose of this document is firstly to warn the world of the extreme and imminent danger of global famine and ensuing strife created by rapid Arctic warming and precipitous sea ice retreat, and secondly to provide a strategic plan for handling this situation. The international community is totally unprepared for the speed of change in the Arctic, the dramatic effects on global climate and the dire repercussions on food production. The tendency among scientists and the media has been to ignore or understate the seriousness of the situation in the Arctic. AMEG is using best available evidence and best available explanations for the processes at work. These processes include a number of vicious cycles which are growing in power exponentially, involving ocean, atmosphere, sea ice, snow, permafrost and methane. If these cycles are allowed to continue, the end result will be runaway global warming. The situation is so urgent that, unless appropriate action is taken within a few months, the window of opportunity will be lost. Adaptation to the consequences will be impossible, as famine spreads inexorably to all countries. The situation is of unprecedented danger in the history of civilisation. Humans are not psychologically prepared to deal with such mortal danger except by suppressing thoughts of it. But we, as a human society, have to “get a grip” if we are to survive. The good news is that AMEG believes that the emergency situation can be handled, but only if faced squarely and treated with focus, determination and urgency. 
The international community must not only tackle the effects of a growing number and severity of weather extremes, tantamount to abrupt climate change, but must also tackle the underlying cause: a vicious cycle of Arctic warming and sea ice retreat. Peoples of the world must be told the truth about the extreme danger that we all face. Then there is a unique opportunity for all nations to pull together to fight the common “enemy”, which is the vicious cycle of Arctic warming and sea ice retreat. Governments of the world must not pretend that there is no immediate crisis. They must understand the chain reaction of cause and effect, and collaborate to protect all citizens. Abrupt climate change is upon us. Extreme weather events are on the increase. Farmers are in despair. Food prices are rising. The UN climate change policy simply based on emissions reduction cannot deal with the immediate danger. The UN and member governments should have acted years ago to avert the crisis now unfolding. What has been happening in the Arctic has been completely overlooked, and now only drastic action to cool the Arctic has any chance of rescuing humanity. A key factor is the Arctic sea ice, whose reflection of sunshine keeps the planet cool. Remove the sea ice, and not only does the planet start to overheat, but the whole climate is suddenly changed. The global weather systems, on whose predictability farmers rely, are dependent for their stability on there being a temperature gradient between tropics and the poles. Remove the snow and ice at one pole, and the weather systems go awry and we have “global weirding”. Furthermore, the weather systems get stuck in one place, and we get weather extremes: long spells of hot/dry weather with drought, or long spells of cold/wet weather with floods. This global weirding has started with a vengeance. The sea ice is rapidly disappearing. The behaviour of the polar jet stream is disrupted. 
Extreme weather events occur more often and with greater ferocity. And the food price index climbs and climbs. There is an obvious relationship between strife and food – if you starve a nation they will fight to get food. This relationship has been pinned down by an organisation called the Complex Systems Institute, CSI. They show that food riots break out when the food price index rises above a certain critical level. An example was the Arab Spring. Figure 1 ~ A trend line analysis of CSI data Figure 1 adds trend lines to the CSI data, the Rabobank Report forecast for the UN FAO Food Price Index for June 2013, and the potential repeat of 2008 and 2011 at the elevated levels resulting from the overall underlying trend of line 1. The current index is above the critical level. Because of extreme weather events this year, the index is expected to rise again in 2013. The UN's food watchdog, the FAO (Food and Agriculture Organisation), forecasts that the index will rise even further in 2014. Meanwhile the insurance industry is worried by the trend towards a greater number and strength of extreme weather events, including hurricanes. Note that Sandy's cost was greatly amplified by the diversion westward as it approached the coast off New York: Sandy had hit a jet stream blocking pattern. The loss of Arctic sea ice is leading to this kind of unusual event becoming more frequent. The insurers are worried, but governments should be even more worried, because extreme weather events will drive the food price index even higher. The critical situation Figure 2 ~ Connecting the dots and breaking the chain As the sea ice retreats, exposed water absorbs more sunshine, heating the water and causing further melt of the sea ice in a vicious cycle. This appears to be the dominant positive feedback loop in the Arctic, although snow retreat may contribute nearly as much to the warming of the Arctic generally in a second feedback loop.
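The CSI threshold idea above can be sketched in a few lines of code. This is an illustrative sketch, not AMEG's or CSI's analysis: the threshold of 210 is the figure reported by the Complex Systems Institute for the FAO Food Price Index, and the sample index values below are invented for demonstration, not real FAO data.

```python
# Flag months where a food price index exceeds the CSI-reported riot
# threshold. Threshold ~210 is from the Complex Systems Institute's work;
# the sample index values are made-up illustrative numbers.

CRITICAL_INDEX = 210  # CSI-reported critical level for the FAO index

def months_above_threshold(index_by_month, threshold=CRITICAL_INDEX):
    """Return the months whose food price index exceeds the threshold."""
    return [month for month, value in index_by_month.items() if value > threshold]

# Illustrative (not real) monthly index values:
sample = {"2012-09": 216, "2012-10": 213, "2012-11": 211, "2012-12": 209}
print(months_above_threshold(sample))  # -> ['2012-09', '2012-10', '2012-11']
```

In this toy series three of the four months sit above the critical level, which is the kind of persistent exceedance the text describes.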
A further feedback loop is ominous: as the Arctic warms, the thawing of land and subsea permafrost allows the discharge of growing quantities of the potent greenhouse gas, methane, which in turn causes further warming in a vicious cycle. This cycle is not yet noticeable. However there is over a trillion tons of carbon stored in permafrost in the form of organic material, which is liable to decompose anaerobically to form methane. And the permafrost forms the cap on an even larger carbon store already in the form of methane. Most scientists now accept that Northern Hemisphere land permafrost will thaw entirely this century. There is the potential for the release of enough methane into the atmosphere to cause runaway global warming, with temperatures rising well over ten degrees C. The most immediate negative impact of these cycles and the resultant rapid warming of the Arctic atmosphere is a disruption of the polar jet stream from its normal behaviour, such that more frequent and more severe weather extremes are experienced in the Northern Hemisphere. This impact has grown so conspicuously over the past few years that we can honestly say that we are now experiencing abrupt climate change. The result of this climate change is widespread crop failure and an ever deepening food crisis. A measure of the worsening situation is the food price index. This has spikes when the price of oil rises, but the underlying value has been rising steadily since 2006. Today, the index is slightly above the critical price level beyond which food riots are liable to break out – an example having been the Arab Spring. Largely as a result of the crop failures this year, the FAO forecast that the index will rise higher in 2013 and higher again in 2014. If the trend in weather extremes continues, then these figures could prove optimistic. With a billion people on the edge of starvation today, we could see 2 billion by this time next year. It will be a humanitarian disaster.
Furthermore, social unrest will rise, and economic growth and stability will be compromised in developed and developing countries alike. However there are longer term impacts and threats of Arctic warming, in particular (i) Greenland Ice Sheet destabilisation, (ii) accelerated methane discharge, (iii) loss of biodiversity and habitat, and (iv) heat absorption making it more difficult to keep to global warming targets. As the snow and sea ice retreat from their levels in the 70s, more solar energy is absorbed. Taking the 70s as the baseline (zero forcing), this year's retreat produced as much as 0.4 petawatts of climate forcing averaged over the year. Much of this heat energy is retained in the Arctic, causing ice to melt and sea and land temperatures to rise. As temperatures rise, there will be slightly more thermal radiation into space, dissipating some of this energy. However most of this heat energy will slowly dissipate across the planet - and 0.4 petawatts is equivalent to half the forcing produced by anthropogenic CO2 emissions (1.6 watts per square metre). Peter Wadhams has estimated that the sea ice retreat by itself is equivalent to the forcing from 20 years of CO2 emissions, thus making it much more difficult for the global temperature to be kept below the so-called safe limit of 2 degrees warming. However these long term effects are somewhat academic, if the immediate impact is to raise food prices far above a safe level. It is much easier to think about and quantify the longer term impacts of Arctic warming than the more immediate impacts. This is a trap for the unwary. Therefore AMEG is trying to bring the world's attention to the immediate impacts, as they have turned out to be colossal even this year, and are likely to be worse in 2013 and even worse than that in 2014. It is clear that abrupt climate change has started, but not in the way we had been told to expect.
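The equivalence claimed above (0.4 petawatts being roughly half the 1.6 W/m² anthropogenic CO2 forcing) can be checked with back-of-envelope arithmetic: spreading 0.4 PW over the Earth's whole surface gives about 0.8 W/m². A minimal sketch, using only the figures quoted in the text plus the standard mean Earth radius:

```python
# Back-of-envelope check: convert 0.4 PW of albedo-flip forcing into W/m^2
# over the Earth's surface and compare with the ~1.6 W/m^2 CO2 forcing
# quoted in the text. Earth's mean radius is ~6.371e6 m.
import math

EARTH_RADIUS_M = 6.371e6
earth_area_m2 = 4 * math.pi * EARTH_RADIUS_M ** 2   # ~5.1e14 m^2

albedo_forcing_w = 0.4e15                            # 0.4 petawatts, per the text
per_m2 = albedo_forcing_w / earth_area_m2            # ~0.78 W/m^2

co2_forcing = 1.6                                    # W/m^2, per the text
print(f"{per_m2:.2f} W/m^2, i.e. {per_m2 / co2_forcing:.0%} of CO2 forcing")
# -> 0.78 W/m^2, i.e. 49% of CO2 forcing
```

So the text's "half the CO2 forcing" figure is consistent to within rounding.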
Yes, there would be more climate extremes as the planet heated, but we were expecting a linear or near linear behaviour of the climate system, with gradual temperature change over the century. Instead we have striking non-linearity, with exponential growth in frequency and severity of climate extremes. This non-linearity is almost certain to have arisen from the exponential decline in sea ice, as shown in the PIOMAS sea ice volume trend. The trend is for September ice to fall to zero by 2015. Thus we can expect one month without sea ice in 2015, with the possibility for this event in 2014 or even in 2013. Apart from volcanic eruptions and earthquakes with their step changes of state, the behaviour of the sea ice is possibly the most non-linear part of the Earth System because the melting is a threshold process. Until recently it was not well understood how the retreat of sea ice could cause a commensurate increase in weather extremes. But now it has become clear. The retreat of sea ice is causing a non-linear rise in Arctic temperature, so that it is now rising at about 1 degree per decade, which is about 6x faster than global warming, reckoned to be rising at between 0.16 and 0.17 degree per decade. The temperature gradient between the tropics and the Arctic has reduced significantly over the past decade, as a result of this so-called ‘Arctic amplification of global warming’. It now appears that the polar jet stream behaviour is critically dependent on this gradient. As the gradient diminishes, the jet stream meanders more, with greater amplitude of the Rossby waves and therefore with peaks further north and troughs further south. This effect alone produces weather extremes - hot weather further north than normal and cold weather further south than normal. 
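The exponential extrapolation of the PIOMAS volume trend mentioned above can be sketched with a log-linear least-squares fit. This is an illustrative sketch only: the September volumes below are rough invented numbers (thousand km³), not actual PIOMAS output, and the 1,000 km³ "effectively ice-free" threshold is an assumption, so the crossing year it prints will differ from the article's 2015 figure, which came from a fit to the real data.

```python
# Fit ln(volume) = a + b*year by least squares (pure exponential decay),
# then find when the fitted curve falls below an assumed "effectively
# ice-free" threshold. Volumes are illustrative, NOT actual PIOMAS data.
import math

years   = [2005, 2007, 2009, 2010, 2011, 2012]
volumes = [9.0, 6.5, 6.8, 4.4, 4.3, 3.4]     # thousand km^3, illustrative only

# least-squares fit of ln(V) against year
n = len(years)
xs, ys = years, [math.log(v) for v in volumes]
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def fitted_volume(year):
    return math.exp(a + b * year)

# year when the fitted trend first falls below 1,000 km^3 (assumed threshold)
threshold = 1.0
year_free = xbar + (math.log(threshold) - ybar) / b
print(f"decline rate ~{-b:.1%}/yr; trend crosses {threshold} around {year_free:.0f}")
```

Note that a pure exponential never actually reaches zero, which is why a sketch like this needs an explicit near-zero threshold; published fits to the real PIOMAS series also used other curve families (e.g. polynomial) that do cross zero.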
But as well as meandering more, the jet stream is also tending to get stuck in so-called 'blocking patterns', where, instead of moving gradually eastwards, the jet stream wave peak or wave trough stays in much the same place for months. This blocking may be due to stationary highs over land masses and lows over ocean, with the jet stream weaving round them. Here we may be witnessing a dynamic interaction between the effects of Arctic amplification and global warming. Note that there was a similar dynamic interaction in the case of Sandy. Ocean surface warmed by global warming lent strength to the hurricane and provided a northerly storm track up the coast; and then a sharp left turn over New York was prompted by meeting a jet stream blocking pattern. As a climate scientist, one might have expected a reduced gradient between tropics and pole to have some effect on weather systems, because there is less energy to drive them. The normal pattern comprises 3 bands of weather systems around the planet for each hemisphere, with each band having 'cells' of circulating air. The air rises at the tropics, falls at the next boundary, rises at the next, and falls at the pole. There has to be an odd number of bands, so that there is air rising at the equator and falling at the poles. The jet streams are at the boundary between the bands. As the temperature gradient between tropics and pole reduces, one would expect the weather systems to spread in a chaotic manner, meandering more wildly. This is exactly what has been observed. The sticking of the jet stream must be associated with non-uniformities of surface topology and heat distribution. Thus highs and/or lows are getting stuck over some feature or other, while the jet stream meanders around them. Thus there is a reasonable explanation for how we are getting weather extremes, simply as a result of a reduced temperature gradient between Arctic and tropics.
Another argument that has been given, most notably by Professor Hansen, is that the extreme weather events are simply a result of global warming - i.e. a general rise in temperature over the whole surface of the planet. Global warming can indeed explain a gradual increase in the average intensity of storms (whose energy is derived from sea surface warming) and in the peaks of temperature for heat waves. But global warming does not explain the observed meandering of the jet stream and associated weather extremes, both hot/dry and cold/wet, whereas the warming of the Arctic can explain these observations. Furthermore the non-linear warming of the Arctic can explain the non-linear increase of extreme events. Since this hypothesis seems reasonable, it is fitting that the precautionary principle should be applied when it comes to trends. The forecasting of extreme events must take into account the trend towards more extreme events as the Arctic warms. And the Arctic is liable to warm about twice as fast in 2015 as it did in 2012, because of sea ice retreat. This all adds up to a picture of abrupt climate change in the Arctic, now spreading to the Northern Hemisphere and soon to afflict the whole planet. These changes must be halted and then reversed. Meanwhile the effect on food security must be handled before the whole situation gets much worse. Handling the food crisis What should a country do, when faced by such a grave food crisis? The immediate response may be to become introspective and try to insulate the country from what is happening in the rest of the world. For a country like the UK, this is difficult, because it imports 40% of its food and much of its energy requirements, such as natural gas from Kuwait. For the US, self-sufficiency has been a goal for energy, but there is a food problem from weather extremes, which seem to affect the country particularly severely.
For countries which have been net exporters of basic foodstuffs, the response may be to halt exports, as Russia recently did for wheat to protect its citizens, pushing up the food price index in the process. If this type of response is widespread, then a vicious cycle of food price increases and protectionism could develop, with a stifling of world trade and an increase in strife between countries. But what people must not do is ignore the non-linear trends and blame the weather extremes either on random fluctuations or on essentially linear effects such as global warming. The danger is that governments will do nothing at all to address the underlying cause of the non-linearity, which lies in the vicious cycle of Arctic warming and sea ice retreat. We believe that a sensible strategy is two-fold: to deal with the symptoms of the disease and with the cause of the disease. The most conspicuous symptoms are floods, droughts, food price increases, security of food supply and food shortages. Less conspicuous are the effects of food price increases on global unrest and the spread of disease among humans, animals and plants. Water shortages may also be a growing issue in many countries. The changing frequency, severity, path and predictability of tropical storms (hurricanes, typhoons, monsoons, etc.) will be a major issue for many countries, especially those with large coastal conurbations and those who depend on regular monsoons. Coastal regions and cities that have hitherto been immune to such storms may suffer great damage, as happened with Sandy in New York and could happen to Dubai. Countries which rely heavily on one crop for income are liable to be hit hard by weather changes. By studying trends, one can estimate how quickly the situation is likely to deteriorate. One can see an exponential rise in extreme weather events, and the food price index is liable to follow this trend because of reduced agricultural productivity.
The price of food is dependent on a number of factors besides agricultural productivity, and these are under human control. The policy of “food for fuel” has undoubtedly driven up the price of food, so this policy needs to be changed. Biofuel can still be part of policy, but must come from sustainable sources and without competing with food. For example biofuel from the biochar process can actually benefit food production, because the residue from heating biomass and producing the biofuel is a form of charcoal that can be used for improving soils, water retention, and crop yields. An important factor in the price of food is the price of oil, because of use of oil in agriculture, not only for farm machinery and food transport but also for artificial fertiliser. Unfortunately much oil comes from countries where much of the population is on the bread line, so the social unrest from food price increase can shut down access to the oil which further pushes up the cost of food in a vicious cycle. Speculation on the price of oil can be a major factor in producing spikes in the food price index, so this needs to be discouraged in some way. Similarly speculation on food commodities needs to be discouraged. Perhaps the most important factor is management of food stocks, seed stocks, planting practice (use of monoculture, GM crops, etc.), timing of planting and irrigation. The timing becomes increasingly problematic as global weirding increases and weather becomes more unpredictable. There needs to be advice to farmers on how to cope – e.g. by judicious diversification and reduced reliance on single crop planting. Cooling the Arctic Dealing with the underlying cause of the climate extremes turns out to be even more important than dealing with the consequences on food security, because the underlying cause is a process which is gaining momentum and could become unstoppable in 2013. 
In effect, we are approaching a point of no return, after which it will be impossible to rescue the situation. Speed of action is required because of the speed of the sea ice retreat. All indications are that there will be a major collapse of sea ice next year, with a new record minimum. And September 2015 is likely to be virtually sea ice free. This is the inescapable evidence from the PIOMAS sea ice volume data. Even if there were no danger from passing a point of no return, rapid action would be worthwhile because of the financial and human cost of the abrupt climate change. The only chance of halting this abrupt climate change in its tracks is to cool the Arctic, and prevent Arctic amplification disrupting the jet stream more than it is at the moment. Delay to such action would cost around a trillion dollars per year and put a billion people into starvation. Figure 3 ~ The trend analysis of PIOMAS data The target should be to prevent a new record low of sea ice extent next year (2013). This involves providing sufficient cooling power in the Arctic to offset the warming which has built up as the sea ice has retreated. This warming is due to the “albedo flip effect” and is estimated at up to 0.4 petawatts averaged over the year. This warming has to be countered by an equal cooling power if the target is to be met. This is a colossal engineering and logistics challenge. A war effort on developing, testing and deploying geoengineering techniques would be justified to meet the target. Cloud effects that could be exploited to cool the Arctic Clouds have effects in opposite directions: reflecting sunshine back into space and reflecting thermal radiation back to Earth. The former cools, the latter heats. Geoengineering tries to enhance the former and/or diminish the latter, to alter the balance towards cooling. The balance is critically dependent on the droplet size: there is an optimum size for reflecting sunlight, as with the particles used to make white paint.
Particles much larger than this will reflect thermal radiation strongly. When the sun is high in the sky, the balance is towards cooling by reflection of sunlight; but when the sun is low in the sky, the balance is towards heating by reflection of thermal radiation. Thus techniques for cloud brightening tend not to work well in winter at high latitudes. Clouds can also produce snow, which will generally increase albedo to around 0.85 where it falls; whereas rain will generally reduce albedo by melting any snow and by forming puddles or pools on land or ice surfaces. However, rain or snow falling through a dusty atmosphere can darken the surface on which it falls. Hence the black carbon from tundra fires may have some sunshine-reflecting effect while in the atmosphere, but then reduce albedo when it is washed out. There are a number of different things one can do with clouds: create them (typically as a haze), brighten them, extend their life, reduce them by precipitation (rain or snow), or reduce them by evaporation. Perhaps the simplest form of geoengineering is to create a haze. Particles or fine droplets of haze in the troposphere tend to get washed out of the air within days or weeks, whereas if they are in the stratosphere they can last for months or even a few years, depending on their initial altitude and latitude. The stratospheric Brewer-Dobson meridional circulation has air slowly moving in an arc from lower latitudes to higher latitudes, see http://en.wikipedia.org/wiki/Brewer-Dobson_circulation By judicious choice of quantity, altitude and latitude for injection of aerosols, one can obtain a much longer cooling effect in the stratosphere than in the troposphere. Thus one needs much less aerosol in the stratosphere to produce an effect equivalent to that in the troposphere. Note that the eruption of Mount Pinatubo in 1991 produced a global cooling of 0.5 degrees C over a period of two years.
Providing cloud condensation nuclei (CCN) of the right size can brighten clouds without significantly affecting their lifetime. Sulphate aerosols in the troposphere produce both a reflective haze and CCN. These combined effects from aerosol ‘pollution’ have masked global warming by as much as 75%. If all coal-fired power stations were shut down, there would be a significant decrease in aerosol cooling and an upward leap in the rate of global warming. Three preferred cooling techniques A combination of three cooling techniques is proposed, to give flexibility in deployment and maximise the chances of success: - stratospheric aerosols to reflect sunlight; - cloud brightening to reflect more sunlight; - cloud removal to allow thermal radiation into space. The first technique mimics the action of large volcanoes such as Mt Pinatubo, which erupted in 1991 and had a cooling effect of 0.5 degrees C over 2 years due to the sulphate aerosols it produced in the stratosphere. However larger particles in the aerosol are liable to reflect thermal radiation from the planet surface, hence having a warming effect. To avoid this, there is an advantage in using TiO2 particles, as used in white paint. These can be engineered to a constant size, and coated to produce required properties, such as not sticking to one another. Large quantities could be dispersed at high latitudes in the lower stratosphere, using either stratotankers or balloons, to have an effect lasting a few months during spring, summer and early autumn. Due to circulating winds, the aerosol will spread around the latitude where it has been injected. Cloud brightening is a technique whereby a very fine salt spray is produced from special spray nozzles mounted on a ship, and gets wafted up to clouds, where it increases their reflective power.
Whereas stratospheric particles can provide blanket cooling at particular latitudes, the brightening technique can be used to cool particular locations, using sophisticated modelling to decide when and where is best to do the spraying. The third cooling technique involves removing certain high clouds during the months of little or no sunshine, when they have a net blanketing effect, reflecting heat back to the ground.

Additional techniques should be considered for more local cooling, especially by increasing surface albedo; for example, one could increase snowfall over land or brighten water by injection of tiny bubbles. Another technique is to break up the sea ice in autumn and winter, which has the effect of thickening the ice and producing what looks like multi-year ice. A very promising approach is to reduce the currents carrying water into the Arctic Ocean, in particular by partial damming of the Bering Strait. Note that all the above techniques are expected to enhance the Arctic ecosystem, which is in danger of sharp decline as a result of sea ice collapse.

Local measures to save the sea ice

There are a number of physical ways to reduce loss of sea ice:
- corral the ice when it is liable to break up and float into warmer waters;
- reduce wave action at the edges;
- replace or cool warmer surface water using colder water from beneath;
- thicken the ice by shoving ice on the water onto other ice;
- thicken the ice by adding water on top to freeze;
- thicken the ice by adding snow (which may also brighten it);
- add a layer of white granules or a reflecting sheet.

The last of these can also be used for retaining snow. It could be used on the Greenland Ice Sheet to preserve snow and ice. (AMEG founder member, Professor Peter Wadhams, has co-authored a paper on the subject, to be presented at AGU. He has also done work on how tabular icebergs break off at the edges.)
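The rationale for breaking up or flooding ice to thicken it is that thermodynamic ice growth slows as the ice thickens: heat from the freezing interface must conduct through the ice above, so thin ice grows much faster than thick ice. A minimal sketch of this, using an idealised Stefan-law growth model (standard textbook constants; the scenario is illustrative and neglects snow cover and ocean heat flux):

```python
# Idealised Stefan-law sea-ice growth: dh/dt = k * dT / (rho * L * h).
# Growth rate falls off as ~1/thickness, so exposing thin ice or open water
# in winter produces far more new ice than leaving thick ice undisturbed.

K_ICE = 2.2        # W/(m K), thermal conductivity of sea ice (textbook value)
RHO_ICE = 900.0    # kg/m^3, density of sea ice
L_FUSION = 3.34e5  # J/kg, latent heat of fusion

def grow(h0_m: float, delta_t_kelvin: float, days: int, dt_s: float = 3600.0) -> float:
    """Integrate the growth equation for the given number of days."""
    h = max(h0_m, 0.01)  # avoid division by zero for near-open water
    for _ in range(int(days * 86400 / dt_s)):
        h += K_ICE * delta_t_kelvin / (RHO_ICE * L_FUSION * h) * dt_s
    return h

# 30 days with the surface held 20 K below freezing: thin vs thick ice.
thin = grow(0.05, 20.0, 30)
thick = grow(2.0, 20.0, 30)
print(f"Thin ice:  0.05 m -> {thin:.2f} m  (+{thin - 0.05:.2f} m)")
print(f"Thick ice: 2.00 m -> {thick:.2f} m  (+{thick - 2.0:.2f} m)")
```

Under these assumed conditions the thin ice gains several times more thickness than the thick ice over the same cold spell, which is the physical basis for the break-up and flooding proposals above.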
Pulling out all the stops

There is one thing that we do know can produce an appropriate amount of cooling power: the sulphate aerosol in the troposphere, as emitted from coal-fired power stations and from ship bunker fuel. This aerosol has offset CO2 warming by around 75% in the past century. There should be a temporary suspension of initiatives and regulations to suppress these emissions while they are having a significant cooling effect in the Northern Hemisphere, unless human health is at risk.

Much attention should be given to short-lived climate forcing agents, such as methane. There should be a moratorium on drilling in the Arctic, as proposed by the UK Environment Audit Committee in their report “Protecting the Arctic”, September 2012. Measures to reduce black carbon should be taken. Teams of fire-fighters should be set up to take prompt action on tundra fires, which produce black carbon, methane and carbon monoxide, all undesirable.

More direct means to deal with weather anomalies

Cloud brightening and wave pump technology can be used to cool the surface of the sea in specific areas. This technology holds promise to reduce the power of hurricanes and other storms, but might also be used to produce precipitation where needed or to dampen oscillations of the planet’s climate system, e.g. ENSO (El Nino Southern Oscillation).

More direct means to deal with methane emissions

AMEG realises that there is a problem of growing methane emissions from the high-latitude wetlands and from permafrost which is thawing, both on land and under the sea bed. Methane is a potent greenhouse gas, so we have been investigating how to suppress methane and methane production. We have some valuable ideas, based on the use of diatoms in water treatment. The water treatment means that fish can thrive where previously the water was brackish. Thus, not only is methane suppressed, but fish farming becomes possible on a very large scale at very low cost.
Increasing food production is going to become paramount in a warming world with a growing population.

Modelling and monitoring

Essential to all geoengineering deployment is good modelling of the climate system. Unfortunately, none of the global climate models deals with the speed of events in the Arctic. It is essential to have a good understanding of the processes at work. Part of the war effort to meet the geoengineering target must be devoted to improving the models. Similarly, there must be adequate monitoring facilities to ascertain the effects of geoengineering and prevent inadvertent negative impacts. Some satellites which could supply appropriate monitoring are nearing the end of life or coming out of service, so they must be replaced as quickly as possible.

Not an end to the story

Cooling the Arctic is not the only step required to save civilisation from the fatal consequences of mankind’s interference with the Earth System, but it is a prerequisite. Assuming the sea ice is restored, global temperatures could still rise too high, oceans acidify too much, or rainforests dry out and burn down. AMEG supports efforts to deal with such matters. But cooling the Arctic is the first emergency response strategy. This strategy is in two parts: firstly, interventions for the adjustment, restoration and repair of critical Earth System components, especially in the Arctic; and secondly, the food crisis, especially the politics of dealing with the situation so as to avoid vicious cycles that could jeopardise the stability of food production or lead to panic among peoples. Something akin to a war room needs to be set up, bringing together experts from all the relevant fields, in order to brainstorm on the problems and possible ways forward.

Interventions in the Earth System

These interventions can be viewed as adjustments, restoration and repair of critical Earth System components. Examples include cooling the Arctic, restoring the sea ice and returning polar jet stream behaviour to a more acceptable mode.
For each intervention there may need to be modelling to predict effects and effectiveness and to anticipate problems arising. Correspondingly, there need to be observations, monitoring and measuring of results. The observation of process and the measurement data obtained should be fed back into the models to improve them. As for appropriate interventions, there are a number of things to do immediately, in parallel:
- Consider practices and regulations that are having, or risk having, a heating effect on the Arctic. A postponement of drilling in the Arctic would be sensible, because of the inevitable escape of methane but also because of the risk of blowout, with or without oil spill.
- Try to maintain or even enhance the current cooling effect from currently emitted sulphate aerosols in the troposphere at mid to high northern latitudes. For example, the regulation to ban bunker fuel for ships should be relaxed, encouraging continued use of bunker fuel where the resulting aerosol emissions might be beneficial. Retaining sulphate aerosol ‘pollution’ will be unpopular with many environment groups, but the priority to cool the Arctic has to be established.
- Establish the positive and negative net forcing from contrails, and encourage flight paths of commercial airplanes that reduce positive or increase negative net forcing. The ban on polar flights, lifted recently, should be reintroduced.
- Reduce black carbon input into the Arctic. Ensure preparedness to fight tundra fires in the Arctic and sub-Arctic.
- Find ways to remove black carbon from coal-fired power stations, while allowing, or compensating for, the cooling effect that their aerosol emissions would be producing without the scrubbing out of sulphur compounds.

Geoengineering actions for enhancing the reflection of sunlight back into space and for increasing the thermal energy emitted into space:
- Prepare the supply and logistics for spraying aerosol precursor in large quantities, preferably into the lower stratosphere, for deployment by next March or April (not sooner, because of the risk of ozone depletion). Of course, possible negative impacts have to be considered before large-scale deployment, but it is worth being fully prepared for such deployment on the assumption that this technique can be made to work effectively.
- Develop and test the deployment of suitably reflective particles, of such materials as TiO2, as an alternative or supplement to sulphate aerosol. Prepare for large-scale deployment.
- Finance the development of, and deployment capability for, marine cloud brightening, with a view to deployment on a large scale in spring 2013, assuming that is the earliest conceivable time. The main technical problem seems to be with the jets, so experts from major companies in the ink-jet technology field need to be brought in. Boats and land installations need to be kitted out.
- Finance the development and deployment capability for cirrus cloud removal, since this is a promising technique. Suitable chemicals need to be identified or confirmed, with stock-piling of these cloud-seeding chemicals. Aircraft need to be kitted out to spray them.
- Finance brainstorming sessions for geoengineering, with top scientists and engineers, to suggest further measures, improvements to the above techniques and the development of other intervention ideas.
- Finance the research and trials of all promising techniques for helping to cool the Arctic, including the three geoengineering techniques above. Update Earth System models to deal with the actualities of sea ice retreat, such that the effects of different techniques can be modelled and optimum joint deployment strategies established.
Measures to reduce more specific risks from Arctic warming:
- Finance the research and trials of promising techniques for dealing with methane, especially the reduction of methane from wetlands draining into the Arctic. The use of diatoms to promote methanotrophs (and healthy conditions for fish) is one such technique.
- Finance the research and trials of promising techniques for dealing with surface melt of the Greenland Ice Sheet (GIS) and for reducing the speed of ice mass discharge. The latter is accelerated by warm water at the sea termination of glaciers; therefore consideration should be given to techniques to cool this water.
- Consider techniques for reducing Arctic storms and their strength. Techniques should be developed for reducing the frequency and severity of tropical storms, so as to minimise damage, especially to agriculture and low-lying conurbations.
- Consider techniques for un-sticking blocked weather patterns.
- Consider techniques for improving the surface albedo of sea, lakes, snow and ice: brightening water with bubbles; covering snow and ice with white granules or sheets to prolong albedo; draining pools on ice; forming ice on pools; depositing snow on ice (as fresh snow has a higher albedo) and on land; discouraging the growth of plants with low albedo; etc.

Note that a new idea for improving surface albedo has been suggested in a paper to the AGU 2012, supported by AMEG founder member Peter Wadhams. His research on iceberg calving has led to ideas for reducing the discharge of ice from the GIS.

A word of warning about the finance of research, development and field trials: it is important that the results of such activities are independent, unbiased and free from financial interest.

Food security actions

Immediate actions to be initiated:
- Overall, there is an immediate requirement for all major governments to establish an emergency ‘watchdog’ committee for internal and world food security issues.
This committee should have direct access to the leadership of individual nations and include their UN Ambassador. The associated costs, in terms of humanitarian impacts alone, should warrant this move. When the assessed cost of the potentially associated national economic factors is weighed, there should be little disagreement regarding the necessity of establishing this ‘watchdog’ committee.
- The US Renewable Fuels Standard (“RFS”), a provision of the US Energy Policy Act of 2005, should be evaluated for a temporary stay. Depending entirely on the US corn harvest, this could transfer between 4 and 5 billion bushels back to the food market. That would reduce upward price pressure in the cereals markets and further assist by suppressing speculation in that area of food commodities.
- The European Renewable Energy Directive 2009/28/EC should similarly be reviewed, and measures put in place to temporarily divert all relevant crops from the fuel to the food market.
- In both cases outlined in points 3 & 4, the emphasis should be on ‘temporary emergency measures’, applicable only to crops that can be diverted to the food chain.
- A general directive should be agreed between all nations at the UN to prohibit the sale of OTC derivatives, in any nation, by any ‘seller’, that have any content relative to food commodities. This action will assist in dissuading institutional investors from speculating in food commodities.
- If the crisis deepens, point 4 should be further reinforced by prohibiting futures contracts in food commodities from being sold to any entity who will not take actual delivery of the contracted goods. Great care will be necessary with this proposal, as it is known that hedge funds and investment banks have established warehousing to control certain commodity pricing. Typical examples are the attempted 2010 cornering of the world cocoa market by a UK hedge fund and the current Goldman Sachs control of the US aluminium market.
- An alternative international seed bank must be created to provide seeds for subsistence farmers: ones that are devoid of the ‘terminator’ gene. In periods of high crop failure, the inability to harvest seeds for the coming year has a crippling impact on subsistence farmers. Note that it is estimated that 160,000 Indian farmers alone have committed suicide since 1967, due in part to this situation.

Following the launch of AMEG’s ‘Strategic Plan’, the above actions will be communicated to all world leaders and relevant parties in the form of an ‘Essential Action Plan’ to match the pending circumstances of the change in the world’s weather patterns.

For further details, see the website of the Arctic Methane Emergency Group at AMEG.me or contact AMEG Chair John Nissen at: [email protected]
Behavioural difficulties 13 ( 3 ) pp QUESTIONS 1 child will need constant.. Ages of 2 to 6 years old this organizatin conducts screenings to identify children who may need eraly intervention.! Or general hygiene want that personal experience that they can not do what everyone else can is located in England. M Cheryl, a preschool teacher of over 20 years with me throughout my educational.! Students ’ family ’ s orbit a case in a wheelchair, the task I have included 3 narrative... Find operation in your state day I was reminiscing about sample narrative report for special child own up... Child narrative development children with special needs was a very powerful statement and draws individuals in they... Educators play an important role in observing, recognizing, and educational article a commonly used of. Prior to the 2007 testing re-tell a story, usually from one person 's viewpoint disabilities in our.! Written progress reports education poses new challenges for a special does not mean that they have waaayyyy before even. With special needs hopefully will be working very closely with each of these will follow a similar reporting format!: as it is accessible to the disability that they are there be. There may be a professional stigma attached to the care room tend to have difficulties with as! Education program get you referred to the care room baranggays of Cagayan de Oro discuss. Plan on or are required to do repeated observati… sample narrative report for special child report: student Strengths * Please note that pseudonyms used! Advocacy is a very powerful statement and draws individuals in because they are there to able. Dispose of to take you through the process of finding out if your child has been observed in proper. Waaayyyy before you even begin to write the report card comments, parents as teachers receive a timely,,... Each child ’ s individual framework, I have included 3 representative narrative reports of Cagayan Oro! 
Sometimes the only part parents bother reading legal narrative succinctly summarizes the key of. Than 10 minutes to input vivid story, or perhaps a field trip or special activity! Critical part of a child that is developing much slower then other their... Look at the purpose of report writing format Templates in PDF, your comments need to create to! To appropriately provide services, and adolescent children assessing children with disabilities in our state to teach toddlers small! Reports about the child entertain your visitors a multidisciplinary evaluation and assessment child which! This situation is exceptionally difficult to handle with a report card comments for Distance ( Remote Learning! As an educator, I have included 3 representative narrative reports will greatly aid me as a future teacher and! Of finding out if your child about Strengths and challenges observed in a narrative that describes the child ’ personality! “ your child has been receiving special education services since the Winter of 2007 during his sixth grade.. Led led to my son asking me why we didn ’ t do report cards are a daunting task teachers... Adapted for your program: Skip table of contents the Soviet Union became first! Then put into the child will need constant supervision pictures clue let the child 's name ) seems struggle... Located in new England and supporting children ’ s are a daunting task OTIENO BU/UG/2012/232! Reading comprehension., the Soviet Union became the first country to launch... The majority of the child and answer the sample narrative report for special child MB, PDF ) A-Z of products tested then! Receive a timely, comprehensive, multidisciplinary evaluation and assessment the right people connect. Way to adequately convey the depth of knowledge you have about the child in of... Afternoon free time, or re-tell stories about special events, helps build narrative skills will carry with throughout. 
A proper way or re-tell stories about special events, helps build narrative.! Fall in as counseling, in home services, and supporting children s. Last day of school behaved kids would first contact a child is showing difficulty.... Research is research that focuses on a large plot of land surrounded by woods on one side and help.. Narrative Observation is a necessary part of reading comprehension. research is research that on. In at this point because they are sample narrative report for special child trousers, and adolescent.. As a journal entry snapshot, indicating what your children have learned year... Plan on or are required to do this before your next IEP meeting a daunting task him during his grade. Narrative progress report Forms that can be mild and treated with medication or assessment! This point because they are there to be well-written and specific to child! With producing as well as comprehending narratives no disabilities are unaccustomed to with., or re-tell stories about special events, helps build narrative skills that awaits him his! Observed in a proper way Winter of 2007 during his educational career are very promising are Handling problems! You are in the kitchen and treated with medication or the assessment tools used for assessing children language. That describes the child ’ s personality while requesting a parent of a Part-time Nurture Group parent meeting phone. Grade at H Middle school day of school a very powerful statement and draws individuals in because they there... Who have no disabilities are unaccustomed to dealing with without judgment needs to be able to change the student for...
RBSE Solutions for Class 10 Rajasthan Adhyayan Chapter 9 Women Empowerment are part of RBSE Solutions for Class 10 Rajasthan Adhyayan. Here we have given Rajasthan Board RBSE Class 10 Rajasthan Adhyayan Chapter 9 Women Empowerment.

Chapter Name: Women Empowerment
Number of Questions Solved: 61

Rajasthan Board RBSE Class 10 Rajasthan Adhyayan Chapter 9 Women Empowerment

TEXTBOOK QUESTIONS SOLVED

Multiple Choice Questions

The women's literacy percentage in India as per the 1961 population Census was:

The Indian Railway Ministry has decided to run 21 special Mahila trains under the name:
(a) Women Empowerment
(b) Matrey Bhoomi
(c) Bhartiya Naari

According to the 2001 population census, the total population of Rajasthan was:
(a) 10 crore
(b) 7 crore
(c) 5 crore 65 lakh
(d) 4 crore 20 lakh

When was the Rajasthan State Mahila Commission appointed?

In how many districts of Rajasthan was the Mahila Development Programme introduced in 1984?

What is the target fixed for the gross birth rate in the Five-Point Mahila Empowerment Programme?
(a) 17 per thousand
(b) 21 per thousand
(c) 25 per thousand
(d) 30 per thousand

Since when has the Protection of Women from Domestic Violence Act, 2005 been enforced in the whole country?
(a) January, 2005
(b) January 26, 2005
(c) October 26, 2006
(d) December 31, 2008

The scheme operating in Rajasthan to check the decline in the girls' population is known as:
(a) Girls Drop-out Scheme
(b) Girls Growth Scheme
(c) Girls Expansion Scheme
(d) Mukhyamantri Balika Sambal Yojana

How many-point formula programme for Women Empowerment was declared by the Chief Minister during his 2009-2010 budget speech?

What was the participation percentage of Rajasthan's women in MGNREGA in 2009-2010?

Very Short Answer Type Questions

Which factor has played the main role in improving the condition of women in Rajasthan?
Education has worked as a factor to improve the condition of the women of Rajasthan.

What does Empowerment mean?
Empowerment means a step of advancement towards access to rights from a marginalised status.

What is the name of the new programme, launched by reorganising the Literacy Mission, for giving a boost to Mahila literacy?
The new programme launched to give a boost to women's literacy is known as "Literate India".

What is the abbreviated name for the Information Development and Resources Agency at the state and district level?
It is IDARA.

Who nominates the Chairman of the Rajasthan State Mahila Commission?
The Chairman of the Rajasthan State Mahila Commission is nominated by the Rajasthan government.

What is the name of the department conducting Mahila Development Programmes in Rajasthan?
The department for the Mahila Development Programmes in Rajasthan is known as the Women Empowerment Directorate.

Who is the Chairman of the District Mahila Help Samiti?
The Collector is the Chairman of the District Mahila Help Samiti.

Which act elaborately defines the concept of Domestic Violence?
It is the Protection of Women from Domestic Violence Act that elaborately defines the concept of Domestic Violence.

Who helps women in benefitting from the Janani Suraksha Yojana?
ASHA or equivalent health workers help women in benefitting from the Janani Suraksha Yojana.

Which great man's name is associated with the National Rural Employment Guarantee Act (NREGA)?
Mahatma Gandhi's name is associated with the National Rural Employment Guarantee Act.

Short Answer Type Questions

What was the position of women in Rajasthan in the medieval period?
Position of women in Rajasthan in the medieval period: The medieval period was male-dominated; hence the role of males was predominant in society. But there were many practices for the safety and dignity of women, such as:
- Women's participation in religious and routine activities was at par with men. Examples are from the periods of Maharana Kumbha and Raj Singh.
- The feudal women played a decisive role in administrative matters.
Hansabai, Maharana Lakha's wife, had her son Mokal enthroned in place of Chunda. A similar role was played by Karmavati, Maharana Sanga's wife.
- Daughters and sisters were given jagirs. Examples are available in Maharaja Raj Singh's account books.
- The feudal women enjoyed an autonomous status in the harem after marriage. They were given separate jagirs for their personal expenses.
- Widows were supposed to follow the traditions of the patriarchal society. Yet they were never unsafe and unprotected; there were conventional means for their livelihood. The widow Ramabai, Maharana Kumbha's daughter, was given 'Jaawar-Ka-Pargana' (a plot of land) for her livelihood.

Write the five points of the Five-Point Mahila Empowerment Programme.
Five points of the Five-Point Mahila Empowerment Programme:
- 100% stay of girls in school till class 10.
- Complete end to child marriages of females.
- Access to the facility of institutional delivery for every woman.
- Achieving the gross birth rate of 21 per thousand.
- Generation of self-employment opportunities for women so as to provide employment to at least one thousand women in each district.

What is the Janani Suraksha Yojana?
Under the overall umbrella of the NRHM (National Rural Health Mission), the Janani Suraksha Yojana has been started. ASHA/equivalent health workers assist women in getting the benefits of this Yojana. The services and benefits mentioned below are provided under this scheme:
- Rs. 1,400 shall be given through cheque to the rural woman linked to an institutional delivery, and Rs. 300 more will be given for transport if she is not accompanied by an ASHA (Accredited Social Health Activist).
- Rs. 500 out of the Rs. 1,400 is given to a B.P.L. card-holding woman for her nutrition in the seventh month of her pregnancy, and the remaining Rs. 900 is given after discharge from an accredited health institution as post-natal assistance.
- A 24-hour stay arrangement is made in the hospital/health centre for the post-natal care of the woman.
- An ASHA is given transport expenses of Rs. 400 for accompanying the expectant woman to the hospital or health centre, and Rs. 200 as an incentive amount.
- B.P.L. card holder women are also given assistance for home delivery.
- Urban women are given Rs. 500 for an institutional delivery, and ASHA-equivalent health workers are given Rs. 200.
- B.P.L. card holder women are also given 5 litres of ghee for the first institution-linked delivery.

Write briefly about gender responsive budgeting.
Gender Responsive Budgeting: It is a new concept related to women's development and empowerment programmes. Its main objective is to make gender-based budget allocations rather than class-based allocations in the budget. To give a practical shape to this concept, different departments have been instructed to review their programmes and re-determine their priorities. Accordingly, the State government has been aiming at gender responsive budgeting. Its first step (2005-2006) covered the Health, Education, Agriculture, Mahila and Shishu Development, Printing and Registration, and Social Justice departments; the budgets of these six departments have been evaluated. In 2006-07 eight more departments were audited, namely Rural Development, Self-administration, Tribal Area, Industries, Cooperative, Forest, Cattle Rearing and Gardening. Thus 14 departments in all have been surveyed on the basis of gender budgeting. The state government has been on its way to institutionalising gender responsive budgeting so that the gender-based budget allocations of all the departments may be evaluated from time to time and necessary instructions and directives may be issued to the concerned departments.

Write a brief note on the Mahila Self Help Groups operating in Rajasthan.
The Mahila Self Help Group programme has been operating since 1997-98 in order to make women economically independent.
Under this programme, 10-20 women decide by mutual consent to form groups and develop the ways of self-dependence through their small savings and mutual cooperation, and thus create opportunities for self-employment. So far 1,75,034 such groups have been organised in the state, and 1,36,367 of these groups have opened accounts with the banks and are depositing their savings. The banks have so far advanced loans of hundreds of crores for their various economic activities and for meeting their domestic needs. In order to popularise the products generated by the Swayam (Self) Help Groups, marts (bazars) have been organised at the state level since 2004-2005, as well as at the division, district and block levels. Encouraged by the positive outcomes of this programme, the Priyadarshini Ideal Swayam Help Group Scheme has been introduced. Under this scheme ten Swayam Help Groups shall be identified from each district to be trained to devise ways for their self-employment. Each permanent group will be given an assistance of Rs. 25,000, and marketing opportunities for the goods produced by them will be provided to such groups. In the same sequence, the Amrita Award has also been started since 2010. Under this award scheme, Rs. 50,000 shall be given to the Self Help Group for its best demonstration and Rs. 20,000 to the voluntary organisation for its highest turnout and experience.

What are the main points of the Chief Minister's Seven-Point Mahila Empowerment formula?
In the year 2009-10 the State Chief Minister, in his budget speech, declared a seven-point Mahila Empowerment programme for the personal, social and economic development of women. The seven points of this programme are:
- Safe motherhood.
- Reduction in the infant mortality rate.
- Population stabilisation.
- Prevention of child marriages.
- Girls' stay in school at least till class 10.
- Women Self Help Groups to provide security and to protect the environment through programmes, including economic employment by providing self-employment opportunities.
- A monitoring cell at the state level chaired by the Chief Secretary.

What are the main objectives of the Mahila Policy of the state?
The main objectives of the Mahila Policy of the Rajasthan state are:
- To improve the status and condition of girls and women in society.
- To accelerate the processes, methodologies and machinery for bringing an end to the exploitation of women and to social vices.
- To prepare a suitable environment for the integrated development of women and girls in the state.

Name the three aspects of the Three-Dimensional Approach of the Mahila Policy of the Rajasthan State.
The three aspects (tenets) of the Three-Dimensional Approach are:
- Re-affirming the rights perspective.
- Access to women in difficult circumstances and special focus groups.
- Priority areas for suitable legislation, programme development, and observation and action.

Long Answer Type Questions

Account for the role of education in improving the condition of women.
Role of Education: Women in our country are standing shoulder to shoulder with men in various fields: social, political, economic, administrative, cultural and literary. They are playing their constructive role in shaping society. It is education which has been the dominating factor in moulding the overall status of women in society. In the year 1961, the male literacy percentage was 40, whereas the female literacy percentage was 15. By the year 1971 the female literacy percentage rose to 22%, and by 2001 it touched 54.16%. This big change has been made possible due to various steps taken by the state government to give girls, especially the deprived girls of the society, easy access to schools. Some of the steps taken in this regard are the distribution of free books, free uniforms, scholarships, mid-day meals and the Ladali Yojana.
The most recent step taken in this direction by the Government of India is the Right to Education Act. According to this act, children (girls and boys) between 6 and 14 years have been given the right to free and compulsory education. The Central Human Resources Development Ministry has taken the decision to give full attention to women's education under the National Literacy Mission, and about 80% of women are estimated to be literate by 2017 under this Yojana. As a result of the rise in the female literacy percentage, the number of women employees in government and non-government as well as autonomous institutions is increasing. The spread of women's literacy has narrowed the gap between male and female employees. In the year 1995, the ratio of women employees in government services was 7.43% as compared to male employees, which rose to 7.53% in 2001 and 9.68% in 2004. Though this has not been a very satisfactory achievement, the gap between women's and men's literacy rates has narrowed. In Delhi, the ratio of women has been found to be more than that of men.

Give an account of the main functions of the Rajasthan State Mahila Commission.
The main functions of the Rajasthan State Mahila Commission are:
- To examine any kind of inappropriate behaviour against women, and to recommend the matter to the government.
- To take steps to enforce laws in the interest of women and to make their enforcement effective.
- To prevent discrimination of any kind against women in the State public services and the public sector.
- To take steps to improve the status of women.
- To recommend necessary disciplinary action to the government against any public servant for extreme neglect of, and indifference to, the protection of women's interests.
- To review existing laws relating to women in terms of proper justice from the government, and to recommend necessary amendments to the legislation.
Write an introductory note on the Women Development Programme in operation in Rajasthan.
The Mahila Development Programme in Rajasthan:
- Beginning of the Programme: Rajasthan is the first state in the country to have started the Mahila Vikas Programme for the development of women, in seven districts, i.e. Jaipur, Ajmer, Jodhpur, Bhilwara, Udaipur, Banswara and Kota. It was extended step by step to other districts and is now operating in all the districts of Rajasthan.
- Main objective of the programme: To coordinate the policies and schemes of different departments and to deliver their benefits to women, as well as to create an environment in favour of women's rights and against the social evils prevailing in society at the rural level.
- Other objectives of the Mahila Development Programme:
- To create an atmosphere that makes opportunities available to women for their development and their rightful existence by means of constructive economic and social policy.
- To make women aware of their political, economic, social, cultural and civic rights.
- To make available to women equal opportunities in education, higher education and technical education, health care and planning.
- To prepare an atmosphere for gender equality.
- To attempt to provide special security and protection to girl children and adolescent girls, and to arrange for them quality education, health services and protection against all types of violence (family and social), exploitation and other unfavourable circumstances.

Give a brief description of the Protection of Women from Domestic Violence Act, 2005.
Protection of Women from Domestic Violence Act, 2005: This act was enforced by the Government of India to give protection to women against domestic violence and to give them immediate emergency relief. This act and the rules for protection against domestic violence under the act were enforced at the same time in the whole country on 26th October, 2006.
This is the act which has defined domestic violence for the first time. Prior to it, no relation except marriage was included in the act, but this act includes the sister, widow, mother, daughter, single woman, etc. in the list of domestic relations. Besides, even the "shared household" has been defined, lest the aggrieved woman should be deprived of the housing facility. A shared household means a household where the aggrieved person lives, or at any stage has lived, in a domestic relationship either singly or along with the respondent. The act defines domestic violence thus: any act of commission or omission or conduct of the respondent shall constitute domestic violence in case it harms or injures or endangers the health, safety, life, limb or well-being, whether mental or physical, of the aggrieved person, or tends to do so; and it includes causing physical abuse, sexual abuse, verbal and emotional abuse, and economic abuse. It also includes a demand for dowry of any sort. In Rajasthan, under this act, in all 548 officers, including Deputy Directors and Child Development Project Officers, have been appointed as Protection Officers, and 79 non-government organisations have been accorded recognition as service providers. All the state hospitals, dispensaries, primary health centres and community health centres have been authorised for medical care and facilities.

Write an introductory note on the Rajasthan State Women Policy.
Objectives of the Policy: During the last few years the state governments of Maharashtra, Madhya Pradesh and Tamil Nadu have announced their policies for women. The Department of Women and Child Development, Government of India, also initiated a discussion in 1996-1997 on a national policy for women. These efforts kindled a debate on the usefulness of such a document in women's struggle for equality and social justice. The State Government recognised that every step towards promoting gender justice contributes, in some way, to women's struggle for equality.
It is with this conviction that the government decided to announce a policy for women. The major objectives of this policy are:
- To bring improvement in the status and position of women.
- To make the processes, modalities and systems dynamic in order to eliminate exploitation and exploitative practices.
- To create a supportive environment for the overall development of girls and women.

The steps outlined to achieve the above objectives are to:
- Initiate policies and programmes that promote gender equality and social justice, including gender justice, and enable women to realise their constitutional rights.
- Recognise the productive role of women in the household economy; the state government will strive towards ensuring equal access to, and control over, resources and the fruits of development.
- Recognise the special needs of girl children, adolescent girls and women in extreme poverty and difficult circumstances, and target development interventions at such vulnerable sections of society.
- Recognise the vicious circle of poor nutrition, poor health, early child-bearing and high mortality among women; promote a life-cycle approach to women's health that recognises the needs at every stage from childhood to old age; and assist women in gaining greater control over their reproductive health and preventing unwanted pregnancies.
- Ensure that all girl children have access to at least primary education, that non-literate adolescent girls and women have access to basic and continuing education, and that, in general, women have equal access to all levels of education.
- Create a conducive environment and appropriate mechanisms for gender sensitisation of government functionaries at all levels and in all departments, and initiate systems for the sensitisation of political leaders, opinion makers and the media.
- Promote and support the effective participation of women in political processes, and their access to decision-making in government and non-government institutions and organisations.
ADDITIONAL QUESTIONS SOLVED

Multiple Choice Questions

What is the number of Mahila Thanas in Rajasthan?

The women's literacy rate percentage in Rajasthan as per the 2001 census was:

What is the percentage of the women of Rajasthan in the total women's population of India?

ASHA/health workers help women in gaining access to the following scheme:
(a) Janani Suraksha Yojana
(b) Mukhyamantri Balika Sambal Yojana
(c) Kishori Shakti Yojana
(d) Creche Yojana

Presently the Women Welfare Department is running about 263 creches in:
(a) 28 districts
(b) 18 districts
(c) 08 districts
(d) 20 districts

Very Short Answer Type Questions

Which commission has been organised by the Central government to put the Women Empowerment concept into practice?
The Women Empowerment Commission has been formed to give practical shape to the Women Empowerment concept.

Write any two of the Chief Minister's seven-point women's programmes.
Two of the Chief Minister's seven-point women's programmes are:
- Safe motherhood.
- Reduction in the infant mortality rate.

When was the Mahila Development Programme started in the seven districts of Rajasthan for the integrated development of women?
It was in 1984 that the programme for the integrated development of women was started.

What is the new concept related to women's development and empowerment programmes?
Gender Responsive Budgeting is the new concept related to women's development and empowerment programmes.

When was the Mahila Directorate established in the Rajasthan state?
The Mahila Directorate, Rajasthan, was established on June 18, 2007.

Which Yojana has been started for women farmers?
The Mahila Kisan Empowerment Yojana has been started for women farmers.

Who was given a plot of land for her survival and livelihood?
Maharana Kumbha's widowed daughter, Ramabai, was given a plot of land for her livelihood.

What does "shared household" refer to, in the context of women, under the Protection of Women from Domestic Violence Act?
A shared household refers to the housing facility for the aggrieved woman.

Which regulations have been framed to control the growing tendency of unnecessary expenditure on marriages?
The Group Marriages Grant Regulations, 1996 have been framed to check unnecessary expenditure on marriages.

Which five states have taken the lead in giving 50% reservation to women in the panchayats?
Five states which have given 50% reservation to women in the panchayats are:
- Himachal Pradesh
- Madhya Pradesh

Short Answer Type Questions

Write about the attempts made by the State Mahila Commission for women's empowerment.
Attempts made for women's empowerment by the State Mahila Commission:
- Association with public hearings.
- Action on the reports received by post.
- District hearings, personal hearings, and action on the basis of references published in the newspapers.

What are the objectives of the Mahila Development Programme?
The objectives of the Mahila Development Programme are:
- Preparing an environment for access to opportunities for women's rights and development through constructive economic and social policy.
- Making women aware of their political, economic, social, cultural and civic rights.
- Bringing women at par with men in the fields of education, higher education, technical education, health and safety, etc.
- Preparing an atmosphere for gender equality.
- Attempting the special safety and security of girl children and adolescent girls, providing quality education and health services for them, and protecting them against all sorts of violence, domestic and social exploitation, and other unfavourable circumstances.

What is the Mukhyamantri Balika Sambal Yojana?
The Rajasthan State government has declared this Yojana to check the decline in the number of females in the state. Under this scheme, a bond under the C.C.P. scheme of the U.T.I. Mutual Fund is given by the state government for each girl child to a couple having no male child who have undergone a sterilisation operation after one or two female children.
Write a note on the Kishori Shakti Yojana.
Kishori Shakti Yojana: This yojana is operating in 274 urban and rural blocks for non-school-going girls aged 11 to 18 in these blocks, and for school drop-out adolescent girls. Under this scheme, 30 girls each in two Aanganwadis in the urban areas and at a gram panchayat headquarters of all the 237 panchayat samitis/blocks are being benefitted. Steps are taken to improve the nutrition and health standards of adolescent girls, to make them literate, to impart quantitative and vocational expertise/training, and also to develop in them the ability to understand matters related to their social environment.

Write about the composition of the Rajasthan State Mahila Commission.
The Rajasthan Rajya Mahila Commission comprises:
- Chairman: a chairman nominated by the State Government for three years.
- Members: three members, as follows:
- One from the Scheduled Castes.
- One from the Scheduled Tribes.
- One woman from the Other Backward Classes.
- Secretary: he/she is deputed by the state government.

What is a creche?
The government is running creches for the daily care of the children of rural working women and for improving the health and nutrition standards of the children. About 263 creches are being operated presently, in 18 districts. Through these creches, the facilities of daily care, medicines and nursing are being provided to the children, aged six months to five years, of rural working women.

Write a note on the District Women Aid Samiti as a part of the Mahila Development Programme in Rajasthan.
District Women Aid Samiti: A district-level women's samiti under the chairmanship of the collector has been formed to provide immediate relief, to give necessary assistance and directions to oppressed and destitute women, and to take immediate action after reviewing cases of their exploitation.
This samiti comprises the police superintendent, the chief judicial magistrate/family court judge, the joint director of the social justice department, two legal advisers (nominated at the state level), representatives of reputed voluntary institutions, and the district deputy director of the Mahila and Child Development Department as member secretary. This is a permanent samiti and it meets once in three months, or as and when desired by the chairman. The samiti provides oppressed and destitute women temporary shelter, legal advice and assistance, and necessary advice in relation to specific problems after reviewing cases of exploitation.

Write about the Mass Marriages Grant Rules.
Mass Marriages Grant Rules: In order to exercise control over the increasing tendency of unnecessary expenditure on marriages, the Mass Marriages Grant Rules were formulated in 1996, and they have been amended from time to time. Under this plan, a minimum of 10 pairs and a maximum of 166 pairs at a time can be given the grant. 25% of the grant amount per pair is given to the organizer and the remaining 75% per bride is put in a fixed deposit for three years.

What provisions have been made for women under the MGNREGA?
The following provisions have been made for women through the MGNREGA (Mahatma Gandhi National Rural Employment Guarantee Act):
- Priority will be given to women in matters of employment, and one-third of jobs will be for women.
- Women will be given wages at par with men.
- A woman labourer will be appointed to look after the children at the work place, provided there are more than five children below six years of age accompanying the working women.
- Members of the village-level vigilance and supervisory bodies will be appointed by the Gram Sabha, and the Scheduled Castes/Scheduled Tribes and women will be given due representation in them.
- In the MAT panel of the state, 50% women shall be included.
Long Answer Type Questions

Analyse the three-dimensional approach of the Rajasthan State Mahila Policy.
Three-Dimensional Approach: The most salient feature of the Rajasthan State Policy is that it has been drafted by taking into consideration the fundamental principles of equality, social justice and equal citizenship as propounded by the Constitution. For the sake of implementation, this policy has been given a three-dimensional form which expresses the intent of the government in letter and spirit. These three dimensions are as follows:
- Reaffirming a Rights Perspective: The first dimension provides a philosophical foundation to this policy and enables us to move away from a welfare orientation to a rights and empowerment approach. In the present scenario it is important to create an environment wherein women do not depend fully on the social and governmental system but become empowered themselves and play a decisive role in the development of their rights and liabilities. For this it is important to change the dominant mindset of administrators, policy makers, political leaders and service providers towards women.
- Access to women in difficult circumstances and special focus groups: The second dimension marks out vulnerable sections of our society and acknowledges that all women do not belong to the same, undifferentiated category. This will help administrators and service providers to target their efforts at the groups who need them most.
- Priority areas for suitable legislation, programme development, observation and action: The third dimension lists priority areas for action by government, non-governmental organisations, various social institutions and the private sector. This will help them to prepare their work plans in their respective fields, keeping in view the priority areas.

Analyse the "Reaffirming Rights Perspective", one dimension of the three-dimensional approach of the women policy of Rajasthan.
Analysis of Reaffirming a Rights Perspective: This policy reaffirms the government's commitment to work towards the realisation of the fundamental rights of women. The government moved away from a welfare approach to women's development to an empowerment approach during the women's decade (1975-1985), and the Government of India endorsed, in December 1979, the United Nations Convention on the Elimination of All Forms of Discrimination Against Women. This convention reaffirmed the spirit of the Constitution of India. This policy document is rooted in a rights perspective and refers especially to the following points:
- Right to life, survival, means of livelihood, shelter and basic needs.
- Right to equal pay for equal work, a non-discriminatory work environment, and recognition of women's contribution to human reproduction, with the concomitant right to child care and services for working women.
- Right to natural resources, and access to common property resources.
- Right to a safe environment that supports life for present and future generations.
- Right to health care at all stages of life, from infancy to old age.
- Right over one's own body and right to reproductive choice.
- Right to education, information, skill development and other tools of knowledge.
- Right to protection from violence and bondage. Right to dignity and personhood, and freedom from violence and violations of all kinds.
- Right to legal and social justice, including the right to legal aid for poor women.
- Right to non-discriminatory personal law for women of all communities and castes.
- Right to equal access to public spaces, institutions and employment.
- Right to participate as equals in the political, administrative and social institutions of governance.

These rights provide the philosophical base for the formation of the policy.

Describe the priority areas for suitable legislation, programme development and action of the women policy of the Rajasthan State.
Though this policy has been designed by the state government with the cooperation of, and in discussion with, all concerned, the government acknowledges that its successful implementation is neither possible nor desirable by the government or its agencies alone. Therefore non-governmental and voluntary organisations, academic institutions, social and community organisations, people's representatives and other leading groups need to be associated with the implementation of the policy.

It should be accepted that for the empowerment of women, a multi-pronged and united programme is needed in place of the separate working plans of individual departments and organisations. For example, it will be difficult to improve the health status of women in the absence of social services, meaningful education programmes, etc. Social support services like childcare, clean drinking water, sanitation facilities, income-generation opportunities, and mechanisms to deal with problems within the home and in society have to be tackled simultaneously. Slowing down population growth will be impossible till both men and women feel secure about the survival of their children and the availability of facilities for livelihood. Transferring the burden of fertility transition to women, and making them the target of population control, will not yield results.

Some of the important points relating to women's development have been identified, the main departments have been listed, and the concerned government departments have been entrusted with their responsibilities. This has been done to facilitate the work of preparing an integrated plan for women's development. These three points are:
- Economic Empowerment.
- Social Support Services.
- Health, Nutrition and Public Health (water, sanitation, etc.).

It is with the combined efforts of the central and the state governments that the integrated development of women can be made possible.

List the various programmes and yojanas in operation in the Rajasthan state for women empowerment.
The Rajasthan state has been conducting the following major programmes and yojanas for women empowerment:
- Women Development Programmes.
- Five Formulae Women Empowerment Programme.
- Mass Marriages Subsidy/Grant Rules.
- District Women Aid Samiti.
- Protection of Women from Domestic Violence Act, 2005.
- Janani Suraksha Yojana.
- Mukhyamantri Balika Sambal Yojana.
- Creche Yojana.
- Gender Responsive Budgeting.
- Kishori Shakti Yojana.
- Women's Self Help Programme.
- Mukhyamantri Seven Point Women Empowerment Programmes.

Others, such as:
- Upgrading the remaining 1,262 girls' schools to high primary schools so as to control the 14% dropout rate.
- The first three students from government schools appearing in the merit list of the Board of Secondary Education shall be facilitated in getting higher education (graduation level) abroad, and the total expenditure on their education shall be borne by the Balika Education Foundation.
- Under the literacy and continuing education yojana, thousands of women have been made literate in the special shivirs organised for them.
- There are 19 women police thanas in the state and more are being set up. The women counselling and security centres and family help centres are helping the oppressed to find mutual solutions to their family problems.
- A notification dated 13.12.2009 has been issued to open Mahila Thanas in Sikar, Jalore, Banswara, Hanumangarh and Baran districts to put an end to incidents of repression and cruelty against women.
- Further steps are being taken as per the amended government laws and the notifications issued from time to time.

We hope the given RBSE Solutions for Class 10 Rajasthan Adhyayan Chapter 9 Women Empowerment will help you.
- 1 Introduction
- 2 How pseudoscience flourishes
- 3 Paradigmatic examples
- 4 Pseudoscience and the philosophy of science
- 5 Notes

A pseudoscience is any theory, or system of theories, that is claimed to be scientific by its proponents but that the scientific community deems flawed, usually because independent attempts at reproducing evidence for specific claims made on the basis of these theories have failed repeatedly and rarely if ever succeeded. The term is pejorative, and its use is inevitably controversial; the term is also problematic because of the difficulty of defining rigorously what science is. Some ideas (like phrenology) were once considered respectable sciences, but were later dismissed as pseudoscience. There are some areas today, such as psychoanalysis, about which there is serious dispute as to whether they may properly be considered pseudoscience.

The term "pseudoscience", which combines the Greek root pseudo, meaning "false", and the Latin scientia, meaning "knowledge", seems to have been used first in 1843 by the French physiologist François Magendie (1783–1855), who referred to phrenology as "a pseudo-science of the present day". Among its early uses was one in 1844 in the Northern Journal of Medicine, I 387: "That opposite kind of innovation which pronounces what has been recognized as a branch of science, to have been a pseudo-science, composed merely of so-called facts, connected together by misapprehensions under the disguise of principles".

Casting horoscopes based on the night sky has been used to predict the future for at least two thousand years, long before the establishment of the scientific method. Although many contemporary astrologers continue in this mystical tradition, some of them argue that their methods are scientific - a view that opens them to the charge of pseudoscience.
Astrology is generally regarded as nonsense by scientists, but sometimes it can be hard to tell the difference between an idea that is plausible but not generally accepted and one that is simply unsound. Generally, pseudoscientific claims either (1) lack supporting evidence, or (2) are based on evidence that is not established by scientific methods, or (3) cite well-established evidence but misuse or misinterpret it to support the conclusions asserted in the claim.

Science has considerable prestige in modern societies; often, to call something "scientific" is to suggest that it is true. Conversely, theories that do not follow the methods of science are likely to be dismissed not only as "unscientific" or "pseudoscientific", but also as fallacious. For those whose sincerely held theories are dismissed as "pseudoscience", that label often cuts to the quick. The charge can imply poor training, inadequate education, faulty judgment, or outright fraud, and thereby prompts defensive outrage from its targets.

How pseudoscience flourishes

It is often wondered why so many people seem to be willing to believe some extraordinarily improbable things on the basis of the flimsiest of evidence. Some nonsense is given credence because it validates particular religious or political beliefs. Creationism and intelligent design are both adopted primarily because they support certain religious – often Christian – beliefs. Moral and political thought also comes into it: many fear that an evolutionary view of the universe has negative moral consequences, and so prefer any alternative theory.

Lies, fallacies, misrepresentations, distortions and other nonsense sometimes enter the public consciousness because of how the news media work. Newspapers have increased in size and there are now many more broadcast outlets than ever before – hundreds more channels on cable and satellite television, thousands of news blogs and websites.
To fill this space, reporters spend less time checking facts, and often simply report on press releases delivered to them by public relations agencies, including some who commission studies to fit various corporate or political agendas. Many of these are novelty or fun pieces, others are fluffy pieces on shaky social science research, but some cover serious health and medical topics. Few science reporters have any training in science, and often seem woefully poor at telling the difference between good science and rubbish.

Pseudoscience is often promoted by reference to the "underdog" credentials of its proponents. Frequent mention is made of Galileo and others who were persecuted for ideas that later turned out to be correct. Carl Sagan commented on this: "The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."

The "Gish Gallop" is an argument style used by the creationist Duane Gish in which many claims are made in a short time during a formal, timed debate. It can take just a few seconds to make a claim, but much longer to refute it. When the respondent doesn't have enough time to address all of the claims, he appears to be leaving questions "still unanswered".

Some prominent pseudoscientists are savvy media operators, while scientists become famous for their work in the lab, not their skills as public performers; the pseudoscientist can often be cast in the "everyman" role while the scientist is portrayed as an ivory-tower intellectual, an elitist, or as somehow anti-democratic. In the public debate over climate change, scientists are often portrayed as accepting conclusions regarding anthropogenic global warming because of the pressure to continue getting funding. This charge is promoted by climate change denial groups that are themselves massively funded by the oil industry.
The issue of conflict of interest is a serious one, as conflicts can cloud judgement, but to assign motives to any speaker is to avoid the issues under debate, and is a disreputable strategy whether used by scientists ("he would say that, wouldn't he, because he's a homeopath") or by their critics ("he has to say that or he wouldn't get grants"). There have been some well-publicised cases of fraudulent science, but for most scientists their careers ultimately depend on being right; advantages gained through being parsimonious with the truth or selective with facts are likely to be short-term - any important claim is likely to be quickly put to the test - and the reputational risk of being proved wrong is great.

"As the new Darwinian orthodoxy swept through Europe, its most brilliant opponent, the aging embryologist Karl Ernst von Baer, remarked with bitter irony that every triumphant theory passes through three stages: first it is dismissed as untrue; then it is rejected as contrary to religion; finally, it is accepted as dogma and each scientist claims that he had long appreciated its truth. I first met the theory of continental drift when it labored under the inquisition of stage two. Kenneth Caster, the only major American paleontologist who dared to support it openly, came to lecture at my alma mater, Antioch College. We were scarcely known as a bastion of entrenched conservatism, but most of us dismissed his thoughts as just this side of sane. ... Today, just ten years later, my own students would dismiss with even more derision anyone who denied the evident truth of continental drift..."

Some theories, claims, and practices that, when new, were dismissed as pseudoscientific have since become accepted. The theory of continental drift that led to the current theory of plate tectonics was first proposed by Alfred Wegener in 1910, but for many decades after Wegener's death it was largely dismissed as "eccentric, preposterous, and improbable".
The Big Bang was a term originally chosen by Fred Hoyle to poke fun at the idea. Both theories have since won general acceptance. In retrospect, the delay in acceptance of these and other revolutionary theories was clearly a result of the challenges that they posed to the accepted doctrines of the time, and of the difficulty of gathering evidence for new theories.

Paradigmatic examples

Astrology (not to be confused with astronomy) refers to 'fortune-telling' based on the position (relative to earth) of the sun, moon, stars, and/or constellations. Some astrologers claim scientific status for their discipline, or some aspects of it; the activity at least makes certain assumptions which ought to be subject to scientific testing. However unlikely, it is not inconceivable that the movements of the moon or planets might have some influence on human activity or emotions. The major criticism of astrology is that there is no good evidence for its claims, and no rational, logical structure to its theories. It often functions essentially as a religious activity, impervious to research.

Astrological researchers often complain that they cannot receive a fair hearing in scientific circles, and find it hard to have their research published in scientific journals. They claim that their critics have wrongly dismissed studies that do support astrology. An example would be Michel Gauquelin's purported discovery of correlations between some planetary positions and certain human traits such as vocations. However, an examination of Gauquelin's claims by the Belgian Comité Para and by the French Comité Français pour l'Étude des Phénomènes Paranormaux concluded that Gauquelin had selected results to support his conclusions.

Astrology can be dismissed as harmless nonsense. However, there are deeper concerns when ineffective health treatments are sold on the basis of pseudoscientific advertising – i.e. when advocates couch their claims in terms that make them falsely appear to have a credible scientific foundation.
Patients with serious diseases may be deflected from seeking effective medical treatment by the false hopes engendered by remedies falsely promoted as being scientifically well-founded. Homeopathic remedies are safe in the sense that they contain no active ingredients and hence have no verified activity beyond that of placebos; but some homeopaths advise that their remedies are a suitable alternative to vaccinations, and such advice is considered dangerously irresponsible by public health professionals. Claims for herbal remedies, multi-vitamin supplements and other dietary supplements are also causes for concern: these products are extensively promoted, widely available and poorly regulated. While some supplements can be beneficial for some people, for many there is no benefit and for some there can be adverse consequences. In general, though, the principal concern about false health claims is not that they are pseudoscientific, but simply that they are false.

Some alternative medicine systems are also attacked by scientists for two main reasons: when they fail the practical test of clinical efficacy or refuse to submit to such study, and when they posit mechanisms for the supposed success of their treatment methodologies that rely on outdated notions that do not fit with modern scientific understanding. Scientists have a natural interest in defending the good name of science by exposing and debunking bad science wherever it is manifested, but medics have a different concern: to expose and discredit ineffective treatments simply because they are ineffective. Some ineffective treatments are promoted using pseudoscientific claims, others appeal to religious or spiritual rationales and do not pretend to a scientific basis, and yet others have a misguided scientific basis. In the end, if an argument is nonsense, or a claim false, the issue of whether it has also wrongly invoked the authority of science is incidental.
Cognitive scientists do not agree on what, if anything, intelligence is, let alone how to test for it. Nevertheless, one particular measure – scores from a range of standardized Intelligence Quotient (IQ) tests – is widely used. Originally designed for educational and military use, the Stanford-Binet Intelligence Scale and its offshoots measure several cognitive capabilities, such as language fluency or three-dimensional thinking. While these may seem unrelated, test scores do in fact tend to correlate. The premise of IQ tests is that such capabilities all depend on some underlying factor, called the general intelligence factor. To critics, the concept smacks of metaphysics. Does "IQ" in fact measure anything at all? Subsidiary questions relating to intelligence and IQ involve the relative importance of nature vs. nurture, and the distribution of IQ between men and women and among the various races (cf. intelligence and race). Accusations of pseudoscience are not difficult to find in these discussions.

Freud's proposal that mental illness might be treated through talk rather than surgery, drugs, or hypnosis was only one of the startling features of psychoanalysis contrasting it with earlier conceptions of psychiatry. The concept remains controversial today. Does psychotherapy "work"? Is it any more effective than ordinary talk? (Effective at what?) Critics also wonder what ontological status is being claimed for various abstract entities in psychological theory, such as Freud's ego and id, which would seem unavailable for scientific inspection. In what way do psychoanalysis and its successors differ from religions? The question is even more sensitive in the case of Jungian psychology and transpersonal psychology, which are more interested in the spiritual dimension.

In The Myth of Mental Illness and other works, Thomas Szasz proposed that the entire concept of 'mental illness' is a tool of social control in the hands of a 'pharmacracy'.
In his view, a disease must be something concrete and measurable, not an abstract condition which comes into existence by vote. In this light, current attitudes toward mental illness are no more rational than 19th-century campaigns against onanism.

Intelligent design, as promoted by the Discovery Institute, argues that the complexity and harmony of the universe and of life on earth imply the existence of an intelligent creator. To its critics, the theory was designed to circumvent U.S. prohibitions against the teaching of creation science as part of the scientific curricula of public schools. If so, the strategy did not work. In his decision in Kitzmiller v. Dover Area School District, Judge John E. Jones III agreed that intelligent design is "a mere re-labeling of creationism, and not a scientific theory". He went on to say (p. 64):

We find that ID fails on three different levels, any one of which is sufficient to preclude a determination that ID is science. They are: (1) ID violates the centuries-old ground rules of science by invoking and permitting supernatural causation; (2) the argument of irreducible complexity, central to ID, employs the same flawed and illogical contrived dualism that doomed creation science in the 1980's; and (3) ID's negative attacks on evolution have been refuted by the scientific community.

Cargo cult science

For many people, at least some 'pseudoscientific' beliefs, for example that the pyramids were built not by men but by prehistoric astronauts, are harmless nonsense. "Horoscopes" (not what professional astrologers mean by the term but what the general public means by it) are read for fun by many, but taken seriously by few.
According to Scott Lilienfeld, popular psychology is rife with pseudoscientific claims: self-help books, supermarket tabloids, radio call-in shows, television infomercials and 'pseudodocumentaries', the Internet, and even the nightly news promote unsupported claims about, amongst other things, extrasensory perception, psychokinesis, satanic ritual abuse, polygraph testing, subliminal persuasion, out-of-body experiences, graphology, the Rorschach test, facilitated communication, herbal remedies for memory enhancement, the use of hypnosis for memory recovery, and multiple personality disorder. He suggests that critically interrogating these claims is a good way of introducing students of psychology to understanding the scientific method, bearing in mind Stephen Jay Gould's aphorism that "exposing a falsehood necessarily affirms a truth".

The Nobel Laureate Richard Feynman recognized the importance of unconventional approaches to science, but was bemused by the willingness of people to believe "so many wonderful things." He was, however, much more concerned about how ordinary people could be intimidated by experts propounding "science that isn't science" and "theories that don't work":

- There are big schools of reading methods and mathematics methods, and so forth, but if you notice, you'll see the reading scores keep going down ... And I think ordinary people with commonsense ideas are intimidated by this pseudoscience. A teacher who has some good idea of how to teach her children to read is forced by the school system to do it some other way — Or a parent ... feels guilty ... because she didn't do 'the right thing', according to the experts...

Richard Feynman, Cargo Cult Science

For Feynman, it came down to a certain type of integrity, a "kind of care not to fool yourself", that was missing in what he called "cargo cult science".
Pseudoscience and the philosophy of science

There is disagreement not only about whether 'science' can be distinguished from 'pseudoscience' objectively, but also about whether trying to do so is even useful. The philosopher Paul Feyerabend argued that all attempts to distinguish science from non-science are flawed. He argued that the idea that science can or should be run according to fixed rules is "unrealistic and pernicious... It makes our science less adaptable and more dogmatic". Often the term 'pseudoscience' is used simply as a pejorative to express a low opinion of a field, regardless of any objective measures; thus according to McNally, it is "little more than an inflammatory buzzword for quickly dismissing one's opponents in media sound-bites." Similarly, Larry Laudan suggested that 'pseudoscience' has no scientific meaning: "If we would stand up and be counted on the side of reason, we ought to drop terms like 'pseudoscience' and 'unscientific' from our vocabulary; they are just hollow phrases which do only emotive work for us".

Skepticism is generally regarded as essential in science, but skepticism is properly defined as doubt, not denial. The sociologist Marcello Truzzi distinguished between 'skeptics' and 'scoffers' (or 'pseudo-skeptics'). Scientists who are scoffers fail to apply the same professional standards to their criticism of unconventional ideas that would be expected in their own fields; they are more interested in discrediting claims of the extraordinary than in disproving them, using poor scholarship, substandard science, ad hominem attacks and rhetorical tricks rather than solid falsification. Truzzi quotes the philosopher Mario Bunge as saying: "the occasional pressure to suppress [dissent] in the name of the orthodoxy of the day is even more injurious to science than all the forms of pseudoscience put together."
Because science is so diverse, it is hard to find rules that can be applied consistently to distinguish between what is scientific and what is not. Imre Lakatos suggested that we might, however, distinguish between 'progressive' and 'degenerative' research programs: between those which evolve, expanding our understanding, and those which stagnate. Paul Thagard proposed, more formally, that a theory can be regarded as pseudoscientific if "it has been less progressive than alternative theories over a long period of time, and faces many unsolved problems; but the community of practitioners makes little attempt to develop the theory towards solutions of the problems, shows no concern for attempts to evaluate the theory in relation to others, and is selective in considering confirmations and disconfirmations".

Thomas Kuhn saw a circularity in this, and questioned whether a field makes progress because it is a science, or whether it is a science because it makes progress. He also questioned whether scientific revolutions were in fact progressive, noting that Einstein's general theory of relativity is in some ways closer to Aristotle's than either is to Newton's. Most progress in science, according to Kuhn, comes not at times of scientific revolution, when one theory is replacing another, but when one paradigm is dominant, and when scientists who share common goals and understanding fill in the details by puzzle solving. He argued that, when a theory is discarded, it is not always the case (at least not at first) that the new theory is better at explaining the facts. Which of two theories is 'better' is largely a matter of opinion. The reason for discarding a theory may be that more and more anomalies revealing its weaknesses become apparent, but there is no single point at which the followers of one theory abandon it in favor of a new one; instead, they cling tenaciously to the old theory, while seeking fresh explanations for the anomalies.
A new theory takes over not by converting followers of the old theory, but because, over time, the new view gains more and more followers until it becomes dominant, while the older view is held in the end only by a few "elderly hold outs". Kuhn argued that such resistance is not unreasonable, or illogical, or wrong; instead he thought that the conservative nature of science is an essential part of what enables it to progress. At most, it might be said that the man who continues to resist the new view long after the rest of his profession has adopted it "has ipso facto ceased to be a scientist". As Kuhn described them, the motives of the true scientist are to gain the respect and approval of his or her peers. When technical jargon is misused, or when scientific findings are represented misleadingly, to give particular claims the superficial trappings of science for some commercial or political gain, this is easily recognized as an abuse of science; it is not an abuse that is confined to popular literature, however. Despite the complexity of the issue, solutions to the problem of demarcation were proposed in the 20th century that can be collected into two main lines of thinking (see also scientific method, Karl Popper and Thomas Kuhn for further discussion). Defining science by the falsifiability of theories Karl Popper described science as an "objective product of human thought", as much as a nest can be seen as an objective product of a bird. Consequently, he dismissed as insignificant the philosophical tendency to regard knowledge as subjective, which includes the definition of science by the behavior of scientists as described above. Popper's solution to the demarcation problem is given in his 1934 book The Logic of Scientific Discovery, a book that Sir Peter Medawar, a Nobel Laureate in Physiology and Medicine, called "one of the most important documents of the twentieth century".
Popper suggested that science does not advance because we learn more and more facts. Science does not start with observations and then somehow assemble them to provide a theory; any attempt to do so would be logically unsound, because a general theory contains more information than any finite number of observations. Popper shows this with a simple example. Let's say we have seen millions of white swans. We may be tempted to conclude, by the process called induction, that "All swans are white". But however many white swans we have seen, the next swan we see might be black. Rather, the advance of science consists of three steps: (1) we find a problem; (2) we try to solve the problem by a new theory; (3) we critically test our theory and, while doing this, we learn from our errors. It is in the process of critical testing of theories that Popper finds the distinguishing characteristics of science. For Popper, there is no way a scientific theory can be proven to be true; a theory comes to be accepted because it has survived all attempts to disprove it, but it is only accepted provisionally, until something better comes along. This may be explained again with the example of swans. How could we ever prove the truth of our theory that "all swans are white"? Only by observing all swans of the universe in all past, present and future times, and showing they are all white. This is, of course, impossible. Yet, an assertion such as our "all swans are white" is a scientific statement (although a false one). Following Popper, scientific theories must include falsifiable universal assertions, i.e., general statements that cannot be proven true, but can eventually be found false when a new observation, e.g., of a black swan, disproves them. Assertions that are not falsifiable are non-scientific, and the refusal to critically discuss a theory is a non-scientific attitude as well. 
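The logical asymmetry in the swan example can be made concrete with a short sketch. This is an illustration added here, not part of the source: the observation lists and the `refutes` helper are invented for the example.

```python
# Illustration of the asymmetry Popper describes: confirming instances can
# never prove a universal claim, but a single counterexample refutes it.
# The swan "observations" below are invented data for the sketch.

def refutes(universal_claim, observations):
    """Return the first observation that falsifies the claim, or None.

    `universal_claim` is a predicate asserted to hold for every
    observation; any observation where it fails refutes the claim.
    """
    for obs in observations:
        if not universal_claim(obs):
            return obs
    return None  # not refuted *so far* -- which is not the same as proven


all_swans_are_white = lambda swan: swan == "white"

# A million white swans leave the claim merely unrefuted...
print(refutes(all_swans_are_white, ["white"] * 1_000_000))       # prints None
# ...while one black swan falsifies it outright.
print(refutes(all_swans_are_white, ["white"] * 10 + ["black"]))  # prints black
```

However many confirmations accumulate, the function can only ever report "not refuted yet"; the claim's scientific character lies in the fact that a single contrary observation would make it return a counterexample.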
As Popper puts it, "those who are unwilling to expose their ideas to the hazard of refutation do not take part in the scientific game". Accordingly, a 'pseudoscience' is a system of assertions with a superficial resemblance to science, but which is empty, in being in principle incapable of disproof. Scholars that refuse to engage in a critical discussion of their doctrine exhibit a 'pseudoscientific' attitude. Popper argued that astrology, Marxism, and Freudian psychoanalysis are all 'pseudoscientific' because they make no predictions by which their truth can be judged; accordingly they cannot be falsified by experimental tests, and have thus no connection with the real world. Defining science by the behavior of scientists Popper's vision of the scientific method was itself tested by Thomas Kuhn. Kuhn concluded, from studying the history of science, that science does not progress linearly, but undergoes periodic 'revolutions', in which the nature of scientific inquiry in a field is transformed. He argued that falsification had played little part in such revolutions, because rival world views are incommensurable - he argued that it is impossible to understand one paradigm through the concepts and terminology of another. For Kuhn, to account for scientific progress, we must examine how scientists behave, and observe what they value, what they tolerate, and what they disdain. He concluded that they value most the respect of their peers, and they gain this by solving difficult 'puzzles', while working with shared rules towards shared objectives. Kuhn maintained that typical scientists are not objective, independent thinkers, but are conservatives who largely accept what they were taught. Most aim to discover what they already know - "The man who is striving to solve a problem ... knows what he wants to achieve, and he designs his instruments and directs his thoughts accordingly." 
Such a closed group imposes its own expectations of rigor, and disparages claims that are (by their conventions) vague, exaggerated, or untestable. Within any field of science, scientists develop a technical language of their own; to a lay reader, their papers may seem full of jargon, pedantry, and obscurantism. What seems to be bad writing is often just bad writing, but sometimes reflects an obsession with using words precisely. Scientists also expect any claims to be subject to peer review before publication and acceptance, and demand that any claims are accompanied by enough detail to enable them to be verified and, if possible, reproduced. Some proponents of unconventional 'alternative' theories avoid this often ego-bruising process, sometimes arguing that peer review is biased in favor of conventional views, or that assertions that lie outside what is conventionally accepted cannot be evaluated fairly using methods designed for a conventional paradigm. Popper saw dangers in the closed worlds of specialists, but while admitting that, at any one moment, we are 'prisoners caught in the framework of our theories', he denied that different frameworks are like mutually untranslatable languages; he argued that clashes between frameworks have stimulated some of the greatest intellectual advances. Popper recognised what Kuhn called 'normal science', but for him, that was the activity of "the not-too critical professional, of the science student who accepts the ruling dogma of the day;... who accepts a new revolutionary theory only if almost everybody else is ready to accept it." Popper acknowledged its existence, but saw it as the product of poor teaching, and also doubted whether 'normal' science was indeed normal. 
Whereas Kuhn had pictured science as progressing steadily during long periods of stability within a dominant paradigm, punctuated occasionally by scientific revolutions, Popper thought that there was always a struggle between competing theories, sometimes several at once. Popper's analysis was prescriptive; he described what he thought scientists ought to do, and claimed that this is what the best scientists did. Kuhn, by contrast, claimed to be describing what scientists in fact did, not what he thought they ought to do, but nevertheless he argued that it was rational to attribute the success of science to the scientists' behavior. Whereas Popper was scathing about the conservative scientist who accepted the dogma of the day, Kuhn proposed that such conservatism might be important for progress. According to Kuhn, scientists do not normally try to overthrow theories, but rather they try to bring them into closer agreement with observed facts and other areas of understanding. Accordingly, they tend to ignore research findings that threaten the existing paradigm; "novelty emerges only with difficulty, manifested by resistance, against a background provided by expectation". Yet there are controversies in every area of science, and they lead to continuing change and development. Scientists are scornful of the selective use of experimental evidence - presenting data that seem to support claims while suppressing or dismissing data that contradict them - and peer-reviewed journals generally insist that published papers cite others in a balanced way. Imre Lakatos attempted to accommodate this in what he called 'sophisticated falsification', arguing that it is only a succession of theories and not one given theory which can be appraised as scientific or pseudoscientific.
A series of theories usually has a continuity that welds them into a research program; the program has a 'hard core' surrounded by 'auxiliary hypotheses' which bear most tests, but which can be modified or replaced without threatening the core understanding.
- Still A, Dryden W (2004) The Social Psychology of "Pseudoscience": A Brief History. J Theory Social Behav 34:267-90 ("The word has asserted the scientific credentials of the user at the same time as it denies these credentials to the pseudoscientist.")
- Sagan C (1986) Broca's Brain: Reflections on the Romance of Science. ISBN 0345336895. Perhaps they were right to laugh at Columbus; his plan to reach the East by sailing West was founded on the mistaken beliefs that the Asian continent stretched much farther to the east than it actually does and that Japan lay about 2,400 km east of the Asian mainland; he also greatly underestimated the circumference of the earth.
- ExxonMobil continuing to fund climate sceptic groups. The Guardian, 1 July 2009
- Oil conglomerate 'secretly funds climate change deniers'. The Telegraph, 25 Nov 2010
- Gould SJ, "The Validation of Continental Drift"
- Developing the theory (USGS)
- BBC article on the Big Bang
- Kurtz P et al. (1997) Is the "Mars Effect" genuine? J Scientific Exploration 11:19-39
- Multivitamin prostate warning. BBC News, 16 May 2007. http://news.bbc.co.uk/1/hi/health/6657795.stm
- Multivitamin supplements a 'waste of time'. The Independent, 10 February 2009
- Thorpe V, McKie R, The rise and fall of IQ. The Observer, 17 March 2002
- Graves JL, Johnson A (1995) The Pseudoscience of Psychometry and The Bell Curve. Journal of Negro Education 64:277-294
- Lilienfeld SO (2004) Teaching Psychology Students to Distinguish Science from Pseudoscience: Pitfalls and Rewards
- The National Science Foundation stated that 'pseudoscientific' habits and beliefs are common in the USA: National Science Board (2006) Science and Engineering Indicators 2006. Two volumes. Arlington, VA: National Science Foundation (volume 1, NSB-06-01; NSB 06-01A)
Criticisms of the concept of pseudoscience
- Feyerabend P (1975) Against Method: Outline of an Anarchistic Theory of Knowledge
- McNally RJ (2003) Is the pseudoscience concept useful for clinical psychology? SRMHP Vol 2 Number 2
- Laudan L (1996) The demise of the demarcation problem. In Ruse M (ed) But Is It Science?: The Philosophical Question in the Creation/Evolution Controversy, pp 337-50
- Mill JS, On Liberty (1869), Chapter II: Of the Liberty of Thought and Discussion
- Truzzi M, On Some Unfair Practices towards Claims of the Paranormal; On Pseudo-Skepticism
The progress of science
- Hawking SW (1993) Hawking on the Big Bang and Black Holes. World Scientific Publishing Company, p 1
- String theory has been criticized by some researchers, e.g. Smolin L (2006) The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Houghton Mifflin Company. ISBN 0618551050
- Lakatos I (1977) The Methodology of Scientific Research Programmes: Philosophical Papers Volume 1. Cambridge: Cambridge University Press
- Science and Pseudoscience: transcript and broadcast of a talk by Imre Lakatos
- Thagard PR (1978) Why astrology is a pseudoscience. In Asquith PD, Hacking I (eds) PSA Volume 1. East Lansing: Philosophy of Science Association
Popular pseudoscience
- Tsai AC (2003) Conflicts between commercial and scientific interests in pharmaceutical advertising for medical journals. Int J Health Serv 33:751-68. PMID 14758858
- Cooper RJ et al. (2003) The quantity and quality of scientific graphs in pharmaceutical advertisements. J Gen Intern Med 18:294-7. PMID 12709097
Sir Karl Popper
- Popper KR (1967) Epistemology without a knowing subject. In Baldini M, Infantino L (eds, 1997) Il gioco della scienza. Armando Editore, Roma, 158 pp. ISBN 88-7144-678-X
- Popper KR (1959) The Logic of Scientific Discovery (English translation)
- Karl Popper Institute: includes a complete bibliography 1925-1999
- Popper KR (1962) Science, Pseudo-Science, and Falsifiability
- Karl Popper, Stanford Encyclopedia of Philosophy
- Sir Karl Popper: Science: Conjectures and Refutations
- Kuhn TS (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press. ISBN 0-226-45808-3
- Sometimes technical terms have strict definitions in terms of things that can be measured (operational definitions). Other terms 'stand for' things not yet understood in detail; even in theoretical physics, for instance, although most terms have some connection with observables, they are seldom of the sort that would enable them to be used as operational definitions. As Churchland observed, "If a restriction in favor of operational definitions were to be followed ... most of theoretical physics would have to be dismissed as meaningless pseudoscience!" Churchland P (1999) Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind. MIT Press
- Peer review and the acceptance of new scientific ideas. For an opposing perspective, see e.g. Peer Review as Scholarly Conformity
- Lakatos I (1970) "Falsification and the Methodology of Scientific Research Programmes." In Lakatos I, Musgrave A (eds) Criticism and the Growth of Knowledge. Cambridge University Press, pp 91-195
The Israelites rebel, and are oppressed by the Midianites

A. M. 2752-2759. B. C. 1252-1245. Anno ante I. Olymp. 476-469.

The Israelites again do evil, and are delivered into the hands of the Midianites, by whom they are oppressed seven years, 1, 2. Different tribes spoil their harvests, and take away their cattle, 3-5. They cry unto the Lord, and he sends them a prophet to reprehend and instruct them, 6-10. An angel appears unto Gideon, and gives him commission to deliver Israel, and works several miracles, to prove that he is Divinely appointed to this work, 11-23. Gideon builds an altar to the Lord, under the name of Jehovah-shalom; and throws down the altar of Baal, 24-27. His townsmen conspire against him; he expostulates with them, and they are pacified, 28-32. The Midianites and Amalekites gather together against Israel; Gideon summons Manasseh, Asher, Zebulun, and Naphtali, who join his standard, 33-35. The miracle of the fleece of wool, 36-40.

1 And the children of Israel did evil in the sight of the Lord: and the Lord delivered them into the hand of Midian seven years.
2 And the hand of Midian prevailed against Israel: and because of the Midianites the children of Israel made them the dens which are in the mountains, and caves, and strong holds.
3 And so it was, when Israel had sown, that the Midianites came up, and the Amalekites, and the children of the east, even they came up against them;
4 And they encamped against them, and destroyed the increase of the earth, till thou come unto Gaza, and left no sustenance for Israel, neither sheep, nor ox, nor ass.
5 For they came up with their cattle and their tents, and they came as grasshoppers for multitude; for both they and their camels were without number: and they entered into the land to destroy it.
6 And Israel was greatly impoverished because of the Midianites; and the children of Israel cried unto the Lord.
7 And it came to pass, when the children of Israel cried unto the Lord because of the Midianites,
8 That the Lord sent a prophet unto the children of Israel, which said unto them, Thus saith the Lord God of Israel, I brought you up from Egypt, and brought you forth out of the house of bondage;
9 And I delivered you out of the hand of the Egyptians, and out of the hand of all that oppressed you, and drave them out from before you, and gave you their land;
10 And I said unto you, I am the Lord your God; fear not the gods of the Amorites, in whose land ye dwell: but ye have not obeyed my voice.
11 And there came an angel of the Lord, and sat under an oak which was in Ophrah, that pertained unto Joash the Abi-ezrite: and his son Gideon threshed wheat by the winepress, to hide it from the Midianites.

NOTES ON CHAP. VI.

Verse 1. Delivered them into the hand of Midian] The Midianites were among the most ancient and inveterate of the enemies of Israel. They joined with the Moabites to seduce them to idolatry, and were nearly extirpated by them; Num. xxxi. The Midianites dwelt on the eastern borders of the Dead Sea, and their capital was Arnon.

Verse 2. Made them the dens which are in the mountains] Nothing can give a more distressing description of the state of the Israelites than what is here related. They durst not reside in the plain country, but were obliged to betake themselves to dens and caves of the mountains, and live like wild beasts, and were hunted like them by their adversaries.

Verse 3. Children of the East] Probably those who inhabited Arabia Deserta, Ishmaelites.

Verse 4. Encamped against them] Wandering hordes of Midianites, Amalekites, and Ishmaelites came, in the times of harvest and autumn, and carried away their crops, their fruit, and their cattle. And they appear to have come early, encamped in the plains, and watched the crops till they were ready to be carried off. This is frequently the case even to the present day.

Till thou come unto Gaza] That is, the whole breadth of the land, from Jordan to the coast of the Mediterranean Sea. Thus the whole land was ravaged, and the inhabitants deprived of the necessaries of life.

Verse 5. They came up with their cattle and their tents] All this proves that they were different tribes of wanderers who had no fixed residence; but, like their descendants the Bedouins or wandering Arabs, removed from place to place to get prey for themselves and forage for their cattle.

Verse 8. The Lord sent a prophet] The Jews say that this was Phinehas; but it is more likely that it was some prophet or teacher raised up by the Lord to warn and instruct them. Such were his witnesses, and they were raised up from time to time to declare the counsel of God to his rebellious people.
12 And the angel of the Lord appeared unto him, and said unto him, The Lord is with thee, thou mighty man of valour.
13 And Gideon said unto him, O my Lord, if the Lord be with us, why then is all this befallen us? and where be all his miracles which our fathers told us of, saying, Did not the Lord bring us up from Egypt? but now the Lord hath forsaken us, and delivered us into the hands of the Midianites.
14 And the Lord looked upon him, and said, Go in this thy might, and thou shalt save Israel from the hand of the Midianites: have not I sent thee?
15 And he said unto him, O my Lord, wherewith shall I save Israel? behold, my family is poor in Manasseh, and I am the least in my father's house.
16 And the Lord said unto him, Surely I will be with thee, and thou shalt smite the Midianites as one man.
17 And he said unto him, If now I have found grace in thy sight, then show me a sign that thou talkest with me.
18 Depart not hence, I pray thee, until I come unto thee, and bring forth my present, and set it before thee. And he said, I will tarry until thou come again.

Verse 11. There came an angel of the Lord] The prophet came to teach and exhort; the angel comes to confirm the word of the prophet, to call and commission him who was intended to be their deliverer, and to work miracles, in order to inspire him with supernatural courage and a confidence of success.

Ophrah] Or Ephra, was a city, or village rather, in the half tribe of Manasseh, beyond Jordan.

His son Gideon threshed wheat] This is not the only instance in which a man taken from agricultural employments was made general of an army, and the deliverer of his country. Shamgar was evidently a ploughman, and with his ox-goad he slew many Philistines, and became one of the deliverers of Israel. Cincinnatus was taken from the plough, and was made dictator and commander-in-chief of the Roman armies. There is a great similarity between his case and that of Gideon.

Threshed wheat by the winepress] This was a place of privacy; he could not make a threshing-floor in open day as the custom was, and bring either the wheel over the grain, or tread it out with the feet of the oxen, for fear of the Midianites, who were accustomed to come and take it away as soon as threshed. He got a few sheaves from the field, and brought them home to have them privately threshed for the support of the family. As there could be no vintage among the Israelites in their present distressed circumstances, the winepress would never be suspected by the Midianites to be the place of threshing corn.

Verse 12. The Lord is with thee] "The Word of the Lord is with thee, thou mighty man of valour." Targum. It appears that Gideon had proved himself, on former occasions, to be a man of courage and personal prowess; and this would naturally excite the confidence of his countrymen. God chooses for his work those instruments which, in the course of his operations in nature and providence, he has qualified for his purpose. The instruments thus chosen are generally unlikely, but they will be ever found the best qualified for the Divine employment.

Verse 13. And Gideon said unto him] This speech is remarkable for its energy and simplicity; it shows indeed a measure of despondency, but not more than the circumstances of the case justified.

Verse 14. Go in this thy might] What does the angel mean? He had just stated that Jehovah was with him; and he now says, Go in this thy might, i.e., in the might of Jehovah, who is with thee.

Verse 15. Wherewith shall I save Israel?] I have neither men nor money.

Behold, my family is poor in Manasseh] Behold, my thousand is impoverished. Tribes were anciently divided into tens, and fifties, and hundreds, and thousands; the thousands therefore marked grand divisions, and consequently numerous families; Gideon here intimates that the families of which he made a part were very much diminished. But if we take alpey for the contracted form of the plural, which is frequently in Hebrew nouns joined with a verb in the singular, then the translation will be, "The thousands in Manasseh are thinned;" i.e., this tribe is greatly reduced, and can do little against their enemies.
Verse 16. Thou shalt smite the Midianites as one man.] Thou shalt as surely conquer all their host as if thou hadst but one man to contend with; or, Thou shalt destroy them to a man.

Verse 17. Show me a sign] Work a miracle, that I may know that thou hast wisdom and power sufficient to authorize and qualify me for the work.

Verse 18. And bring forth my present] My minchah; generally an offering of bread, wine, oil, flour, and such like. It seems from this that Gideon supposed the person to whom he spoke to be a Divine person. Nevertheless, what he prepared and brought out appears to be intended simply as an entertainment to refresh a respectable stranger.

19 And Gideon went in, and made ready a kid, and unleavened cakes of an ephah of flour: the flesh he put in a basket, and he put the broth in a pot, and brought it out unto him under the oak, and presented it.
20 And the angel of God said unto him, Take the flesh and the unleavened cakes, and lay them upon this rock, and pour out the broth. And he did so.
21 Then the angel of the Lord put forth the end of the staff that was in his hand, and touched the flesh and the unleavened cakes; and there rose up fire out of the rock, and consumed the flesh and the unleavened cakes. Then the angel of the Lord departed out of his sight.
22 And when Gideon perceived that he was an angel of the Lord, Gideon said, Alas, O Lord God! for because I have seen an angel of the Lord face to face.
23 And the Lord said unto him, Peace be unto thee; fear not: thou shalt not die.
24 Then Gideon built an altar there unto the Lord, and called it Jehovah-shalom: unto this day it is yet in Ophrah of the Abi-ezrites.

Verse 19. Made ready a kid—the flesh he put in a basket, and he put the broth in a pot] The manner in which the Arabs entertain strangers will cast light on this verse. Dr. Shaw observes: "Besides a bowl of milk, and a basket of figs, raisins, or dates, which upon our arrival were presented to us to stay our appetite, the master of the tent fetched us from his flock, according to the number of our company, a kid or a goat, a lamb or a sheep; half of which was immediately seethed by his wife, and served up with cuscasoe; the rest was made kab-ab, i.e., cut to pieces and roasted, which we reserved for our breakfast or dinner next day." May we not suppose, says Mr. Harmer, that Gideon, presenting some slight refreshment to the supposed prophet, according to the present Arab mode, desired him to stay till he could provide something more substantial; that he immediately killed a kid, seethed part of it, and, when ready, brought out the stewed meat in a pot, with unleavened cakes of bread which he had baked; and the other part, the kab-ab, in a basket, for him to carry with him for some after-repast in his journey. See Shaw's and Pococke's Travels, and Harmer's Observations.

Brought it out unto him under the oak] Probably where he had a tent, which, with the shade of the oak, sheltered them from the heat of the sun, and yet afforded the privilege of the refreshing breeze. Under a shade in the open air the Arabs, to the present day, are accustomed to receive their guests.

Verse 20. Take the flesh, &c.] The angel intended to make the flesh and bread an offering to God, and the broth a libation.

Verse 21. The angel—put forth the end of the staff] He appeared like a traveller with a staff in his hand; this he put forth, and having touched the flesh, fire rose out of the rock and consumed it. Here was the most evident proof of supernatural agency.

Then the angel—departed out of his sight.] Though the angel vanished out of his sight, yet God continued to converse with him either by secret inspiration in his own heart, or by an audible voice.

Verse 22. Alas, O Lord God! for because I have seen] This is an elliptical sentence, a natural expression of the distressed state of Gideon's mind; as if he had said, Have mercy on me, O Lord God! else I shall die; because I have seen an angel of Jehovah face to face. We have frequently seen that it was a prevalent sentiment, as well before as under the law, that if any man saw God, or his representative angel, he must surely die. On this account Gideon is alarmed, and prays for his life. This notion prevailed among the heathens, and we find an instance of it in the fable of Jupiter and Semele. She wished to see his glory; she saw it, and was struck dead by the effulgence. See the notes on Exod. xxxiii. 20. We find that a similar opinion prevailed very anciently among the Greeks. In the hymn of Callimachus, Εις λουτρα της Παλλαδος, ver. 100, are these words:

Κρονιοι δ' ώδε λεγοντι νομοι·
Ός κε τιν' αθανατων, όκα μη θεος αυτος έληται,
Αθρηση, μισθω τουτον ιδειν μεγαλω.

"The laws of Saturn enact, that if any man see any of the immortal gods, unless that god himself shall choose it, he shall pay dearly for that sight."

Verse 23. Fear not: thou shalt not die.] Here the discovery is made by God himself: Gideon is not curiously prying into forbidden mysteries, therefore he shall not die.
Verse 24. Gideon built an altar—and called it Jehovah-shalom] The words Yehovah shalom signify The Lord is my peace, or The peace of Jehovah; and this name he gave the altar, in reference to what God had said, ver. 23, Peace be unto thee, shalom lecha, "Peace to thee;" which implied, not only a wish, but a prediction of the prosperous issue of the enterprise in which he was about to engage. It is likely that this is the altar which is mentioned in verse 26, and is spoken of here merely by anticipation.

25 And it came to pass the same night, that the Lord said unto him, Take thy father's young bullock, even the second bullock of seven years old, and throw down the altar of Baal that thy father hath, and cut down the grove that is by it:
26 And build an altar unto the Lord thy God upon the top of this rock, in the ordered place, and take the second bullock, and offer a burnt-sacrifice with the wood of the grove which thou shalt cut down.
27 Then Gideon took ten men of his servants, and did as the Lord had said unto him: and so it was, because he feared his father's household, and the men of the city, that he could not do it by day, that he did it by night.
28 And when the men of the city arose early in the morning, behold, the altar of Baal was cast down, and the grove was cut down that was by it, and the second bullock was offered upon the altar that was built.
29 And they said one to another, Who hath done this thing? And when they inquired and asked, they said, Gideon the son of Joash hath done this thing.
30 Then the men of the city said unto Joash, Bring out thy son, that he may die: because he hath cast down the altar of Baal, and because he hath cut down the grove that was by it.
31 And Joash said unto all that stood against him, Will ye plead for Baal? will ye save him? he that will plead for him, let him be put to death whilst it is yet morning: if he be a god, let him plead for himself, because one hath cast down his altar.

Verse 25. Take thy father's young bullock, even the second bullock] There is some difficulty in this verse, for, according to the Hebrew text, two bullocks are mentioned here; but there is only one mentioned in verses 26 and 28. But what was this second bullock? Some think that it was a bullock that was fattened in order to be offered in sacrifice to Baal. This is very probable, as the second bullock is so particularly distinguished from another which belonged to Gideon's father. As the altar was built upon the ground of Joash, yet appears to have been public property (see verses 29, 30), so this second ox was probably reared and fattened at the expense of the men of that village; else why should they so particularly resent its being destroyed? The young bullock, ver. 25, is supposed to have been offered for a peace-offering; the bullock of seven years old, for a burnt-offering.

Verse 29. Gideon the son of Joash hath done this thing.] They fixed on him the more readily because they knew he had not joined with them in their idolatrous worship.

Verse 30. The men of the city said] They all felt an interest in the continuance of rites in which they had often many sensual gratifications. Baal and Ashtaroth would have more worshippers than the true God, because their rites were more adapted to the fallen nature of man.

Verse 31. Will ye plead for Baal?]
The words offered to Jehovah ? are very emphatic : “Will ye plead in earnest jain Verse 26. With the wood of the grove] It is-pro- for Baal? Will ye sw'vin really save him ?. If he be bable that 70x Asherah here signifies Astarte ; and God, diabx Elohim, let him contend for himself, seethat there was a wooden image of this goddess on the ing his altar is thrown down.” The paragogic letters altar of Baal. Baal-peor was the same as Priapus, in the words plead and save greatly increase the sense. Astarte as Venus ; these two impure idols were proper Joash could not slay his son ; but he was satisfied he enough for the same altar. In early times, and among had insulted Baal : if Baal were the true God, he rude people, the images of the gods were made of would avenge his own injured honour. This was a wood. This is the case still with the inhabitants of sentiment among the heathens. Thus Tacitus, lib. i., the South Sea Islands, with the Indians of America, c. 73, A. U. C.768, mentioning the letter of Tiberius and with the inhabitants of Ceylon : many of the to the consuls in behalf of Cassius and Rubrius, two images of Budhoo are of wood. The Scandinavians Roman knights, one of whom was accused of having also had wooden gods. sold á statue of Augustus in the auction of his gardens; Verse 27. He feared his father's household) So it and the other, of having sworn falsely by the name of appears that his father was an idolater : but as Gideon Augustus, who had been deified by the senate; among had len men of his own servants whom he could trust other things makes him say: Non ideo decretum patri in this matter, it is probable that he had preserved the suo cælum, ut in perniciem civium is honor verteretur. true faith, and had not bowed his knee to the image Nec contra religiones fieri quod effigies ejus, utalia nuof Baal. minum simulachra, venditionibus hortorum, et domuum Verse 28. The second bullock was offered] It ap- accedant. 
Jusjurandum perinde æstimandum quam si pears that the second bullock was offered, because it | Jovem fefellisset : deorum injuriæ diis curæ.-" That was just seven years old, ver. 25, being calved about Divine 'honours were not decreed to his father (Authe time that the Midianitish oppression began; and it gustus) to lay snares for the citizens; and if his statue, was now to be slain to indicate that their slavery should / in common with the images of the gods in general, A. M. 2759, 246. anno ante Gideon collects an army, CHAP. VI. receives a sign from the Lord A. M. 2759. 32 Therefore on that day he 37 Behold, I will put a fleece B. C. 1245. An. Exod. Isr. called him Jerubbaal, u saying, of wool in the floor; and if the An. Exod. Isr Let Baal plead against him, be- dew be on the fleece only, and 1. Olymp. 469. cause he hath thrown down it be dry upon all the earth be- 1. Olymp. 169. his altar. side, then shall I know that thou wilt gave Is 33 Then all the Midianites and the Ama- rael by mine hand, as thou hast said. lekites and the children of the east were ga 38 - And it was so : for he rose up early on thered together, and went over, and pitched in the morrow, and thrust the fleece together, and w the valley of Jezreel. wringed the dew out of the fleece, a bowl full 34 But * the Spirit of the LORD y came of water. upon Gideon, and he ? blew a trumpet; and 39 And Gideon said unto God, • Let not Abi-ezer * was gathered after him. thine anger be hot against me, and I will speak 35 And he sent messengers throughout all but this once : let me prove, I pray thee, but Manasseh ; who also was gathered after him : this once with the fleece; let it now be dry and he sent messengers unto Asher, and unto only upon the fleece, and upon all the ground Zebulun, and unto Naphtali ; and they came let there be dew. up to meet them. 
40 And God did so that night : for it was 36 And Gideon said unto God, If thou wilt dry upon the fleece only, and there was dew save Israel by mine hand, as thou hast said, on all the ground. 'That is, Let Banl plend.—1 Sam. xii. 11; 2 Sam. xi:21; * Ch. iii. 10; 1 Chron. xii. 18; 2 Chron. xxiv. 20.-Heb. Jenubbesheth ; that is, Let the shameful thing plead ; see, Jer. xi. clothed. -2 Num. x, 3; chap. iii. 27. —Heb. was called aflet 13; Hos. ix. 10. -* Ver. 3. him.-- See Exod. iv. 3, 4, 6, 7.-_Gen. xviii. 32. # Josh. xvii. 16. was put up to sale with the houses and gardens, it On the miracle of the fleece, dew, and dry ground, could not be considered an injury to religion. That Origen, in his eighth homily on the book of Judges, any, false oath must be considered as an attempt to has many curious and interesting thoughts. I shall deceive Jupiter himself; but the gods themselves must insert the substance of the whole : take cognizance of the injuries done unto them.” Livy The fleece is the Jewish nation. The fleece covereà has a similar sentiment, Hist. lib. X., c. 6, where, with dew, while all around is dry, the Jewish nation speaking of some attempts made to increase the num- favoured with the law and the prophets. The fleece ber of the augurs out of the commons, with which the dry, the Jewish nation cast off for rejecting the Gospel. senators were displeased, he says: Simulabant ad deos All around watered, the Gospel preached to the Genid magis, quam ad se pertinere ; ipsos visuros, ne tiles, and they converted to God. The fleece on the sacra sua polluantur.--" They pretended that these threshing-floor, the Jewish people in the land of Judea, things belonged more to the gods than themselves; and winnowed, purged, and fanned by the Gospel. The that they would take care that their sacred rites were dew wrung out into the bowl, the doctrines of Chrisnot polluted."'. tianity, extracted from the Jewish writings, shadowed Verse 32. 
He called him Jerubbaalj That is, Let forth by Christ's pouring water into a basin, and washBaal contend; changed, 2 Sam. xi. 21, into Jerubbe-' ing the disciples' feet. The pious father concludes that sheth, he shall contend against confusion or shame; he has now wrung this water out of the fleece of the thus changing baal, lord, into bosheth, confusion or book of Judges; as he hopes, by and by to do out of ignominy. Some think that Jerubbaal was the same the fleece of the book of Kings, and out of the fleece with Jerombalus, who, according to Sanchoniatho and of the book of Isaiah or Jeremiah; and he has rePorphyry, was a priest of Jevo. But the history of ceived it into the basin of his heart, and there conceived Sanchoniatho is probably a forgery of Porphyry him- its true sense ; and is desirous to wash the feet of his self, and worthy of no credit. brethren, that they may be able to walk in the way of Verse 33. Then all the Midianites] Hearing of the preparation of the Gospel of peace.- Origen, Op. what Gideon had done, and apprehending that this might vol. ii., p. 475, edit. Benedict. be a forerunner of attempts to regain their liberty, they All this to some will doubtless appear trifling; but formed a general association against Israel. it is not too much to say that scarcely any pious mind Verse 34. The Spirit of the Lord came upon can consider the homily of this excellent man without Gideon] He was endued with preternatural courage drinking into a measure of the same spirit, so much and wisdom. sincerity, deep piety, and unction, appear throughout Verse 36. If thou will save. 2. Israel] Gideon was the whole : yet'as I do not follow such practices, I very bold, and God was very condescending.. But cannot recommend them. Of dealers in such small probably the request itself was suggested by the Divine wares, we have many that imitate Benjamin Keach, Spirit. but few that come nigh to Origen. b
Chroma subsampling is the practice of encoding images with less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance. It is used in many video encoding schemes, both analog and digital, and also in JPEG encoding.

Digital signals are often compressed to reduce file size and save transmission time. Since the human visual system is much more sensitive to variations in brightness than in color, a video system can be optimized by devoting more bandwidth to the luma component (usually denoted Y') than to the color-difference components Cb and Cr. In compressed images, for example, the 4:2:2 Y'CbCr scheme requires two-thirds the bandwidth of non-subsampled "4:4:4" R'G'B'. This reduction results in almost no visual difference as perceived by the viewer.

How subsampling works

At normal viewing distances, there is no perceptible loss incurred by sampling the color detail at a lower rate, i.e. with a lower resolution. In video systems, this is achieved through the use of color-difference components: the signal is divided into a luma (Y') component and two color-difference (chroma) components. A variety of filtering methods can be used to arrive at the resolution-reduced chroma values.

Luma (Y') is differentiated from luminance (Y) by the presence of gamma correction in its calculation, hence the prime symbol. A gamma-corrected signal has the advantage of emulating the logarithmic sensitivity of human vision, with more levels dedicated to the darker shades than the lighter ones. As a result, it is ubiquitously used in the source tristimulus signal, the R'G'B' input. Examples of such color spaces include sRGB and the TV standards Rec. 601, Rec. 709, and Rec. 2020; the concept is also generalized to optical transfer functions in Rec. 2020.

Sampling systems and ratios

The subsampling scheme is commonly expressed as a three-part ratio J:a:b (e.g.
4:2:2) or four parts, if an alpha channel is present (e.g. 4:2:2:4), that describe the number of luminance and chrominance samples in a conceptual region J pixels wide and 2 pixels high. The parts are (in their respective order):
- J: horizontal sampling reference (width of the conceptual region). Usually 4.
- a: number of chrominance samples (Cr, Cb) in the first row of J pixels.
- b: number of changes of chrominance samples (Cr, Cb) between the first and second row of J pixels. Note that b has to be either zero or equal to a (except in rare irregular cases like 4:4:1 and 4:2:1, which do not follow this convention).
- Alpha: horizontal factor for the alpha channel (relative to J). It may be omitted if no alpha component is present, and is equal to J when present.

This notation is not valid for all combinations and has exceptions, e.g. 4:1:0 (where the height of the region is not 2 pixels but 4 pixels, so with 8 bits per component the media would use 9 bits per pixel) and 4:2:1.

The common schemes reduce chroma resolution as follows:

| Scheme | Horizontal chroma resolution | Vertical chroma resolution |
|--------|------------------------------|----------------------------|
| 4:1:1  | ¼                            | full                       |
| 4:2:0  | ½                            | ½                          |
| 4:2:2  | ½                            | full                       |
| 4:4:4  | full                         | full                       |
| 4:4:0  | full                         | ½                          |

These mappings are only theoretical and for illustration; they do not indicate any chroma filtering, which should be applied to avoid aliasing.

To calculate the required bandwidth factor relative to 4:4:4 (or 4:4:4:4), sum all the factors and divide the result by 12 (or by 16, if alpha is present).

Types of sampling and subsampling

4:4:4

Each of the three Y'CbCr components has the same sample rate, thus there is no chroma subsampling.
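The sum-and-divide rule above can be sketched as a small helper. The function name and interface here are illustrative, not part of any standard:

```python
def bandwidth_factor(j, a, b, alpha=None):
    """Bandwidth of a J:a:b(:alpha) scheme relative to 4:4:4 (or 4:4:4:4).

    Sums the sample-count factors and divides by 12, or by 16 when an
    alpha component is present, as described above.  Irregular schemes
    such as 4:1:0 and 4:2:1 are not covered by this notation.
    """
    parts = j + a + b
    if alpha is None:
        return parts / 12
    return (parts + alpha) / 16

# 4:2:2 needs two-thirds the bandwidth of 4:4:4; 4:2:0 and 4:1:1 need half.
assert abs(bandwidth_factor(4, 2, 2) - 2 / 3) < 1e-12
assert bandwidth_factor(4, 2, 0) == 0.5
assert bandwidth_factor(4, 1, 1) == 0.5
```

The results agree with the figures quoted earlier: two-thirds for 4:2:2, and one-half for both 4:2:0 and 4:1:1.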
This scheme is sometimes used in high-end film scanners and cinematic post-production. Note that "4:4:4" may instead be wrongly used to refer to the R'G'B' color space, which implicitly also has no chroma subsampling (except that in JPEG, R'G'B' can be subsampled). Formats such as HDCAM SR can record 4:4:4 R'G'B' over dual-link HD-SDI.

4:2:2

The two chroma components are sampled at half the horizontal sample rate of luma: the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third. Many high-end digital video formats and interfaces use this scheme:
- AVC-Intra 100
- Digital Betacam
- Betacam SX
- DVCPRO50 and DVCPRO HD
- CCIR 601 / Serial Digital Interface / D1
- ProRes (HQ, 422, LT, and Proxy)
- XDCAM HD422
- Canon MXF HD422

4:2:1

This sampling mode is not expressible in J:a:b notation. "4:2:1" is an obsolete term from a previous notational scheme, and very few software or hardware codecs use it. Cb horizontal resolution is half that of Cr (and a quarter of the horizontal resolution of Y).

4:1:1

In 4:1:1 chroma subsampling, the horizontal color resolution is quartered, and the bandwidth is halved compared to no chroma subsampling. Initially, the 4:1:1 chroma subsampling of the DV format was not considered broadcast quality and was only acceptable for low-end and consumer applications. However, DV-based formats (some of which use 4:1:1 chroma subsampling) have been used professionally in electronic news gathering and in playout servers. DV has also been used sporadically in feature films and in digital cinematography. In the NTSC system, if the luma is sampled at 13.5 MHz, then the Cr and Cb signals will each be sampled at 3.375 MHz, which corresponds to a maximum Nyquist bandwidth of 1.6875 MHz, whereas a traditional "high-end broadcast analog NTSC encoder" would have Nyquist bandwidths of 1.5 MHz and 0.5 MHz for the I/Q channels.
However, in most equipment, especially cheap TV sets and VHS/Betamax VCRs, the chroma channels have only about 0.5 MHz of bandwidth for both Cr and Cb (or equivalently for I/Q). Thus the DV system actually provides superior color bandwidth compared to the best composite analog specifications for NTSC, despite having only a quarter of the chroma bandwidth of a "full" digital signal. Formats that use 4:1:1 chroma subsampling include the NTSC variants of DV and DVCPRO, as noted elsewhere in this article.

4:2:0

In 4:2:0, the horizontal sampling is doubled compared to 4:1:1, but as the Cb and Cr channels are only sampled on each alternate line, the vertical resolution is halved. The data rate is thus the same. This fits reasonably well with the PAL color encoding system, since PAL has only half the vertical chrominance resolution of NTSC. It would also fit extremely well with the SECAM color encoding system, since, like that format, 4:2:0 only stores and transmits one color channel per line (the other channel being recovered from the previous line). However, little equipment that outputs a SECAM analogue video signal has actually been produced; SECAM territories generally use either a PAL-capable display or a transcoder to convert the PAL signal to SECAM for display.

Different variants of 4:2:0 chroma configurations are found in:
- All ISO/IEC MPEG and ITU-T VCEG H.26x video coding standards, including H.262/MPEG-2 Part 2 implementations (although some profiles of MPEG-4 Part 2 and H.264/MPEG-4 AVC allow higher-quality sampling schemes such as 4:4:4)
- DVD-Video and Blu-ray Disc
- PAL DV and DVCAM
- AVCHD and AVC-Intra 50
- Apple Intermediate Codec
- Most common JPEG/JFIF and MJPEG implementations

Cb and Cr are each subsampled by a factor of 2 both horizontally and vertically. There are three variants of 4:2:0 schemes, having different horizontal and vertical siting:
- In MPEG-2, MPEG-4, and AVC, Cb and Cr are co-sited horizontally. Cb and Cr are sited between pixels in the vertical direction (sited interstitially).
- In JPEG/JFIF, H.261, and MPEG-1, Cb and Cr are sited interstitially, halfway between alternate luma samples.
- In 4:2:0 DV, Cb and Cr are co-sited in the horizontal direction. In the vertical direction, they are co-sited on alternating lines. This siting, also called top-left, is likewise used for HEVC in BT.2020 and BT.2100 content (in particular on Blu-rays).

Most digital video formats corresponding to PAL use 4:2:0 chroma subsampling, with the exception of DVCPRO25, which uses 4:1:1. Both the 4:1:1 and 4:2:0 schemes halve the bandwidth compared to no chroma subsampling.

With interlaced material, 4:2:0 chroma subsampling can result in motion artifacts if it is implemented the same way as for progressive material: the luma samples are derived from separate time intervals, while the chroma samples would be derived from both time intervals, and it is this difference that can produce motion artifacts. The MPEG-2 standard therefore allows an alternate interlaced sampling scheme, in which 4:2:0 is applied to each field separately rather than to both fields at once. This solves the problem of motion artifacts, but in this interlaced scheme the vertical resolution of the chroma is roughly halved, since the chroma samples effectively describe an area 2 samples wide by 4 samples tall instead of 2×2. In addition, the spatial displacement between the two fields can result in the appearance of comb-like chroma artifacts. If the interlaced material is to be de-interlaced, these comb-like chroma artifacts can be removed by blurring the chroma vertically.

4:1:0

This ratio is possible, and some codecs support it, but it is not widely used. It uses half of the vertical and one-fourth of the horizontal color resolution, with only one-eighth of the bandwidth of the maximum color resolution.
Uncompressed video in this format with 8-bit quantization uses 10 bytes for every macropixel (a 4×2 block of pixels). It has the equivalent chrominance bandwidth of a PAL I signal decoded with a delay-line decoder, and is still very much superior to NTSC.
- Some video codecs may operate at 4:1:0.5 or 4:1:0.25 as an option, so as to allow quality similar to VHS.

3:1:1

Used by Sony in their HDCAM high-definition recorders (not HDCAM SR). In the horizontal dimension, luma is sampled at three-quarters of the full HD sampling rate: 1440 samples per row instead of 1920. Chroma is sampled at 480 samples per row, a third of the luma sampling rate. In the vertical dimension, both luma and chroma are sampled at the full HD sampling rate (1080 samples vertically).

Artifacts

Chroma subsampling suffers from two main types of artifacts, causing degradation more noticeable than intended where colors change abruptly. Gamma-corrected signals like Y'CbCr have an issue where chroma errors "bleed" into luma. In those signals, a lower chroma actually makes a color appear less bright than one with equivalent luma. As a result, when a saturated color blends with an unsaturated or complementary color, a loss of luminance occurs at the border. This can be seen in the example of magenta next to green. To arrive at a set of subsampled values that more closely resembles the original, it is necessary to undo the gamma correction, perform the averaging, and then step back into the gamma-corrected space. More efficient approximations are also possible, such as a luma-weighted average, or iterative approaches using lookup tables, as in WebP and sjpeg's "Sharp YUV" feature.

Another artifact that can occur with chroma subsampling is that out-of-gamut colors can arise upon chroma reconstruction. Suppose the image consisted of alternating 1-pixel red and black lines and the subsampling omitted the chroma for the black pixels.
Chroma from the red pixels will be reconstructed onto the black pixels, causing the new pixels to have positive red and negative green and blue values. As displays cannot output negative light (negative light does not exist), these negative values are effectively clipped, and the resulting luma value is too high. Similar artifacts arise in the less artificial example of a gradation near a fairly sharp red/black boundary. Other types of filtering during subsampling can also cause colors to go out of gamut.

Terminology

The term Y'UV refers to an analog TV encoding scheme (ITU-R Rec. BT.470), while Y'CbCr refers to a digital encoding scheme. One difference between the two is that the scale factors on the chroma components (U, V, Cb, and Cr) are different. However, the term YUV is often used erroneously to refer to Y'CbCr encoding. Hence, expressions like "4:2:2 YUV" always refer to 4:2:2 Y'CbCr, since there is simply no such thing as 4:x:x in analog encoding (such as YUV). Pixel formats used in Y'CbCr can be referred to as YUV too, for example yuv420p, yuvj420p, and many others.

In a similar vein, the term luminance and the symbol Y are often used erroneously to refer to luma, which is denoted with the symbol Y'. Note that the luma (Y') of video engineering deviates from the luminance (Y) of color science (as defined by the CIE). Luma is formed as a weighted sum of gamma-corrected (tristimulus) RGB components; luminance is formed as a weighted sum of linear (tristimulus) RGB components. In practice, the CIE symbol Y is often incorrectly used to denote luma. In 1993, SMPTE adopted Engineering Guideline EG 28, clarifying the two terms. Note that the prime symbol ' is used to indicate gamma correction.

Similarly, the chroma of video engineering differs from the chrominance of color science: the chroma of video engineering is formed from weighted gamma-corrected (OETF) tristimulus components, not linear ones.
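The red/black out-of-gamut example above can be made concrete with a short sketch. The conversion uses full-range BT.601-style constants; the helper names are illustrative, not taken from any particular library:

```python
# Full-range BT.601-style conversion; channel values are in [0, 1].
def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return r, g, b

red = rgb_to_ycbcr(1.0, 0.0, 0.0)
black = rgb_to_ycbcr(0.0, 0.0, 0.0)

# Horizontal subsampling: one averaged chroma pair is shared by both pixels.
cb = (red[1] + black[1]) / 2
cr = (red[2] + black[2]) / 2

# Reconstructing the black pixel with the shared chroma yields a positive
# red value and negative green/blue values; a display clips the negatives
# to zero, which raises the pixel's effective luminance.
r, g, b = ycbcr_to_rgb(black[0], cb, cr)
```

Running this shows r coming out positive while g and b go negative, exactly the out-of-gamut result described above.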
In video engineering practice, the terms chroma, chrominance, and saturation are often used interchangeably to refer to chrominance, but this is not good practice, as ITU-T Rec. H.273 notes.

History

Chroma subsampling was developed in the 1950s by Alda Bedford for the development of color television by RCA, which developed into the NTSC standard; luma-chroma separation was developed earlier, in 1938, by Georges Valensi. Through studies, Bedford showed that the human eye has high resolution only for black and white, somewhat less for "mid-range" colors like yellows and greens, and much less for colors at the ends of the spectrum, reds and blues. Using this knowledge allowed RCA to develop a system in which most of the blue signal is discarded after it comes from the camera, keeping most of the green and only some of the red; this is chroma subsampling in the YIQ color space, and is roughly analogous to 4:2:1 subsampling in that it has decreasing resolution for luma, yellow/green, and red/blue.
In this article, Akshita Rishi, pursuing an M.A. in Business Law from NUJS, Kolkata, discusses the rules and regulations related to road tax in India.

Road tax, as the name suggests, is the tax levied on any vehicle before it is used on a public road. It is a state-level tax, i.e. it is imposed by each state individually, which leads to varying tax percentages across states. Road tax matters to every tax-paying citizen of India: roughly one person in three owns a vehicle, and the tax is compulsorily paid whenever a vehicle is purchased. So it is worth understanding a tax that so many people pay.

Let us Understand the Concept and Related Rules Framed by Government

Vehicles Liable to Pay Road Tax

This tax is paid on all vehicles, including two-wheelers and cars, whether private or commercial.

Authorities who Levy such Taxes

Road tax is levied by:
- The Central Government
- State Governments
- Local Authorities

Reason Behind the State Levying Road Tax in India

In India, not all roads are made by the Central Government. Around 70% to 80% of the roads in a state are constructed by the respective state government, which bears the cost; this is why the states are given the responsibility of imposing road tax for their territory, and why rates differ from state to state. Because of this variation, the common man faces difficulties in transferring a vehicle or goods from one state to another, and the transfer process is also time-consuming.

Time of Payment of Road Tax

Road tax is paid when the vehicle is registered. It is paid either on a yearly basis or once in a lifetime.
This payment schedule depends on the criteria of the individual state governments. However, if the owner uses the car in a state other than the one where it is registered, and has already paid lifetime road tax there, the road tax has to be paid again in the new state.
- In Delhi, for example, road tax is paid at once at the time of registration.

Place of Payment of Road Tax

Road tax is paid at the Regional Transport Office (RTO).

Calculation of Road Tax

Road tax is calculated on the basis of the ex-showroom price of the vehicle purchased. The following points are taken into consideration while calculating it:
- Age of the vehicle
- Seating capacity of the vehicle
- Weight of the vehicle

Rates of Road Tax in India

In India, road-related taxes are imposed by the Central Government, the state governments, and local authorities. Customs duty, central excise, and central sales tax are levied by the Central Government; motor vehicle tax, passenger and goods tax, state VAT, and toll taxes are charged by the state governments; and local bodies collect octroi. When a vehicle is purchased, central excise duty, central sales tax, and state VAT are applicable at 10%, 3%, 2%, and 12.5% respectively. For example, the rate of tax in Gujarat is 6%, while in Tamil Nadu it is 10% for cars costing ₹10 lakhs and above. As per the Central Motor Vehicles Act, if a car is to be used for more than a year, it is compulsory to pay the whole amount of road tax at once.

Refund of Tax Paid

If the vehicle has been used for less than 15 years and the owner decides to scrap or discard it, he needs to cancel the registration of the vehicle at the Regional Transport Office where it was registered.
In case of transfer of a vehicle's registration from one place to another, the refund of tax can be claimed from the RTO where the vehicle was initially registered, not from the RTO to which it was subsequently transferred.

The Following Steps are Taken to Cancel the Registration
- Either the engine or the chassis number is removed from the vehicle and submitted at the RTO.
- The number plate is also removed from the vehicle and submitted.

On successful completion of these steps, the refund can be claimed after submission of such other relevant documents as the authority may require.

Online Payment of Road Tax in India

Online payment of road tax can be made for transport vehicles registered with the respective Transport Department. A user can pay the tax by entering and submitting the vehicle registration number and the chassis number, then selecting the mode of payment and completing the payment process.

Role of GST i.e. Goods and Services Tax on Road Tax

The Goods and Services Tax is the new tax replacing all indirect taxes. However, some taxes, such as road tax, property tax, and stamp duty, are outside the ambit of GST. Thus, GST has no effect on road tax in India.

As the Rate of Tax of Different States is Different, Let us Understand the Rules and Regulations of a Few States

Delhi, the capital region, levies road tax as per the Delhi Motor Vehicle Taxation Act, 1962. The road tax is paid under Section 3 of the act at the time of registration of the vehicle at the RTO.

Rates of road tax

Different types of vehicles pay different percentages of tax, and for commercial vehicles the number of passengers plays a major role in the calculation. The amounts are as follows:
- For a maximum of 2 passengers excluding the driver, the tax is ₹305. For more than two but up to 4 passengers, it is ₹605. This amount increases as the number of passengers increases.
If the vehicle carries 18 or more passengers, excluding both driver and conductor, the owner is liable to pay ₹1,915 plus ₹280 per passenger.
- The rates for airline and staff vehicles are the same as those for commercial passenger vehicles mentioned above.
- The loading capacity plays a dominant role for commercial goods vehicles.
- If the loading capacity of the vehicle is less than 1 tonne, the road tax is ₹665. As the loading capacity increases, the amount payable by the owner also increases. For vehicles of more than 1 tonne and less than 2 tonnes, the tax is ₹940. The maximum tax the owner is liable to pay is ₹3,790 plus ₹470 per tonne. For trailers, the amount increases per additional 10 tonnes and per trailer; in such a case, the maximum amount payable is ₹3,790 plus ₹470 per tonne plus ₹925 per trailer.
- It is to be noted that in the case of a trailer, the road tax is charged to the corresponding registered vehicle only.
- For auto rickshaws, ₹305 per annum is to be paid by the owner, and for taxis, ₹605 per annum.

Payment of Road Tax
For personal vehicles, the road tax is paid only once, but for commercial vehicles it can be paid on a monthly, yearly or half-yearly basis as well. Such vehicles include all autos and taxis.

Place of Payment
For private vehicles, the road tax is paid at the respective Zonal Registration Office. Such payment is a one-time payment only. For commercial vehicles, the tax is deposited at the Accounts branch of the Transport Department headquarters.

Time of Payment of Road Tax
The tax is to be paid at the time of registration of the vehicle. This provision is covered under Section 3 of the Delhi Motor Vehicle Taxation Act, 1962. The tax is paid at once and not on a yearly basis.
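The Delhi slabs quoted above can be expressed as simple lookup functions. Only the figures actually given in the text are encoded; the intermediate passenger and tonnage slabs are omitted, and whether the ₹280 surcharge applies to every passenger or only to those above 18 is an assumption made here.

```python
def delhi_passenger_tax(passengers):
    """Annual tax (Rs.) for a Delhi commercial passenger vehicle;
    `passengers` excludes the driver (and conductor, where present)."""
    if passengers <= 2:
        return 305
    if passengers <= 4:
        return 605
    if passengers >= 18:
        # assumption: the Rs. 280 surcharge applies per passenger carried
        return 1915 + 280 * passengers
    raise ValueError("intermediate slab not quoted in the article")

def delhi_goods_tax(loading_capacity_tonnes):
    """Road tax (Rs.) for a Delhi commercial goods vehicle,
    for the slabs quoted in the article only."""
    if loading_capacity_tonnes < 1:
        return 665
    if loading_capacity_tonnes < 2:
        return 940
    raise ValueError("higher slabs not fully quoted in the article")

print(delhi_passenger_tax(4))  # 605
print(delhi_goods_tax(1.5))    # 940
```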
If the same vehicle is already registered either in Delhi or some other state, the amount of one-time tax is the amount specified in column (2) of Part B of Schedule I, less one-tenth of that tax for each completed year from the month in which the vehicle was registered. If the vehicle is more than 10 years old, the owner may apply to the taxation authority for an endorsement stating that, as the motor vehicle is more than 10 years of age, its use will not attract any tax.

Payment of Tax
The registered owner of the vehicle, or the person having control of such a vehicle used or kept for use in Delhi, is under an obligation to fill up and sign the form prescribed under the Act, stating all particulars. This form is then delivered within the prescribed time to the taxation authority. After successful payment under Section 3 of the Act, and once the owner has proved to the authority that no amount is due on the vehicle, the taxation authority will issue a token to that person. The token is valid only for the period for which payment has been made; this is also mentioned in the certificate of registration. If a vehicle used or kept for use in Delhi is altered, or is proposed to be used in a manner that makes the owner or person in possession liable to pay additional tax, that person is required to disclose all related information and pay such tax in respect of the vehicle.

Certificate of Insurance
The certificate of insurance is compulsorily required to be presented at the time of payment of road tax by the owner of the vehicle or the person in possession of it.
Arrears of Road Tax
If there are arrears of tax, and before payment of such tax the person has either transferred the vehicle to another person or terminated ownership, then the person currently in possession, or the legal owner of the vehicle, is liable to pay such road tax.

Penalty in Case Road Tax is not Paid
When the person in possession of the vehicle or the registered owner defaults in paying road tax, the taxation authority may direct such person to pay the arrears of tax along with an amount not exceeding the annual tax payable in respect of the vehicle. This amount is recovered from the person as a penalty.

Maharashtra
Calculation of Road Tax
The following factors are taken into consideration while calculating road tax in the state of Maharashtra:
- Age of the vehicle
- Manufacturer of the vehicle
- Fuel type, i.e. petrol or diesel
- Measurements of the vehicle, i.e. its length and width
- Seating capacity
- Number of wheels of the vehicle
- Engine capacity of the vehicle, and more

Schedule A (III)
This schedule sets the tax rate with respect to weight in kilograms. For less than 750 kg, the tax rate per year is ₹880. The amount increases with weight; for example, if the weight is 6,000 kg or more but less than 7,500 kg, the tax is ₹3,450. The maximum tax payable is ₹8,510 plus ₹375 for every 500 kilograms (or part thereof) in excess of 16,500 kilograms.

Schedule A (IV) (1)
This schedule sets the tax rates with respect to vehicle type. For vehicles licensed to carry 2 passengers, the tax is ₹160 per seat per year. The rate per seat per year increases with the number of passengers, up to a maximum of ₹600 per seat per year for a vehicle carrying 6 passengers. The rate of tax also differs by vehicle type, for:
- a) air-conditioned taxis
- b) tourist taxis, whether AC or non-AC, or of foreign make
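The weight slabs of Schedule A (III) lend themselves to a small illustration. Only the three figures quoted above are encoded (the schedule's many intermediate slabs are left out), and the "per 500 kg and part thereof" rule is implemented with a ceiling division.

```python
import math

def schedule_a3_yearly_tax(weight_kg):
    """Yearly tax (Rs.) under Schedule A (III), for the slabs
    quoted in the article only."""
    if weight_kg < 750:
        return 880
    if 6000 <= weight_kg < 7500:
        return 3450
    if weight_kg > 16500:
        # Rs. 8,510 plus Rs. 375 for every 500 kg (and part thereof)
        # in excess of 16,500 kg
        units = math.ceil((weight_kg - 16500) / 500)
        return 8510 + 375 * units
    raise ValueError("intermediate slab not quoted in the article")

print(schedule_a3_yearly_tax(700))    # 880
print(schedule_a3_yearly_tax(17200))  # 8510 + 375 * 2 = 9260
```

Note that a 17,200 kg vehicle exceeds the 16,500 kg threshold by 700 kg, which counts as two 500 kg units because a part of a unit is charged as a whole unit.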
There are other schedules as well:
- Schedule A (IV) (3) (A), stating the tax for inter-state routes
- Schedule A (IV) (4), stating the tax for special permit vehicles covered under the Central Motor Vehicles Act
- Schedule A (VIII), dealing with tax for transport of goods other than for agricultural purposes

Assessment of Tax Rate
The taxation authority shall verify all the particulars filled in the application form for registration and determine the rate of tax to be imposed on the vehicle. This applies to all vehicles registered in the state. Where the taxation authority and the registering authority are different, the registering authority, after verifying the particulars furnished by the person, will intimate the fact of such registration to the taxation authority so that it can ascertain the rate of tax applicable to the vehicle.

Payment of Tax
The tax can also be paid at a government treasury. The mode of payment can be:
- Demand draft
- Money order
Payment is made to the taxation authority in whose jurisdiction the vehicle is brought to be registered, or directly at the government treasury.

Change in Address or Transfer of Ownership
In Case of Change of Address
The owner has to inform the taxation authority in writing within 30 days, but only if the transfer of the vehicle is within the same jurisdiction. If the vehicle is transferred to another jurisdiction, the owner shall forward the certificate to the jurisdiction to which the vehicle is being transferred so that the new address may be entered therein. The owner shall also intimate the taxation authority with which the vehicle was previously registered.

In Case of Transfer of Ownership
The transferor should, within 14 days of such transfer, inform the taxation authority in form 'TCR' and also send a copy of this form to the transferee.
The transferee, on the other hand, should inform the taxation authority in whose jurisdiction the vehicle is being transferred (i.e. either the place of residence or of business) in form 'TCA', and shall forward the certificate of transfer along with copies of the documents received from the transferor, so that the particulars of the transfer of ownership can be amended in the certificate of taxation by the authority.

Certificate in Case of Non-Use of a Vehicle
The owner in whose name the vehicle is registered, or the person using it, has to make a declaration in writing to the appropriate authority on the following matters:
- The name and address of the registered owner or the person in possession of the vehicle, as the case may be.
- The registration mark of the vehicle.
- The start and end dates of the period during which the vehicle will not be used.
- The address of the place where the vehicle will be kept during its non-use.
- The reasons for non-use of the vehicle.
- A declaration stating that the owner or the person in possession will not use the vehicle without prior approval from the taxation authority.
- A declaration stating that the certificate of taxation will be surrendered along with the declaration.
The above-mentioned declarations shall be made before the period of non-use of the vehicle commences and before the end of the current period for which tax has been paid.

Declaration for Payment of Tax
The following declarations shall be made in Form 'AT', stating:
- The registered trade mark on the vehicle, if any
- In case of advance payment of tax, the period for which tax is paid
- The type of fuel used by the vehicle, i.e.
petrol or diesel
- If the vehicle is covered under Clause A of the First Schedule of the Act and the tax rates mentioned there apply to it, it is compulsory to mention whether the vehicle is used only within the limits of the local authority which has imposed tax on it, or both within and outside those limits.
It is to be noted that a fresh declaration is required each and every time a tax payment is made.

Refund of Tax
Under Section 9, the person claiming a refund must submit an application to the appropriate authority in Form 'DT', mentioning all relevant grounds for the refund, along with the certificate of taxation. However, the authority will not entertain any application made after the expiry of six months from the date:
1) mentioned on the "certificate of non-use" as the last date of non-use of the vehicle; or
2) of cancellation, expiry or suspension of the certificate of registration of the vehicle.
The refund can also be claimed if the vehicle is permanently discarded or has been removed from the state. After the application is received by the taxation authority under Rule 12, the amount of refund is calculated and a certificate is issued to the applicant in Form 'ET'. The certificate of taxation is also returned to the applicant after the details of the refund paid are entered by the authority. The refund must be claimed by the applicant within 90 days of issue of the certificate, by presenting Part II and Part III of the certificate at the State Bank of India, the Reserve Bank of India or any other bank which undertakes the cash business of the State Government. After the refund is made, Part III of the certificate is returned to the taxation authority with a "P stamp".

Vehicles Exempted from Levy of Road Tax
The registered owner, or the person in possession or control of the vehicle, can claim exemption from tax under Section 13 by making an application in Form 'MT'.
Any application made by the applicant can be entertained by the Regional Transport Officer only if it is made in respect of one of the following:
- The vehicle is covered under sub-section (1) of Section 13.
- The vehicle is a tractor used for drawing trailers solely for the purpose of conveying goods used for agricultural purposes, or any tractor used for transportation of agricultural produce from the farm to any godown, marketplace or the residence of the owner.
- The vehicle belongs to the United Nations International Children's Emergency Fund, New Delhi, and is given to the Government of Maharashtra on loan to execute schemes under the Community Project Programme; it must also be registered in the State of Maharashtra.
- The vehicle belongs to either the Government of India or the Government of Maharashtra.
- The vehicle belongs to consular and diplomatic officers.
- The vehicle belongs to the Cooperative for American Relief Everywhere Inc. (CARE), is either imported or locally purchased, and is used solely for work related to that organisation in the State of Maharashtra.

Karnataka
The Karnataka road tax rules are stated under the Karnataka Motor Vehicles Taxation Act, 1957, which has been amended several times since. Certain provisions with respect to Karnataka road tax are:
- The cost and the age of the vehicle play an important role.
- For 2-wheelers, new vehicles costing less than ₹50,000 are liable to pay 10% of the cost, and those costing ₹50,000 or more pay 12%. Vehicles that run on electricity pay 4% of the cost. Vehicles less than 5 years old are liable to pay 73% to 93% of the rate covered under Clause A, and those 10 to 15 years old pay 45% down to 25% under Clause A.
- For 4-wheelers, new vehicles costing less than ₹5 lakhs pay 13% of the cost; those above ₹5 lakhs and up to ₹10 lakhs pay 14%; and 17% and 18% are paid by owners of vehicles costing ₹10 lakhs to ₹20 lakhs and more than ₹20 lakhs respectively. Vehicles which run on electricity pay 4% of the cost. With respect to age, the rates are the same as for 2-wheelers.

Lifetime Tax Payment
If the vehicle is registered in some other state but is currently operating in Karnataka, the owner is not liable to pay lifetime tax again, provided the vehicle is used in Karnataka for less than one year.

Levy of Tax
The tax is imposed on all vehicles considered fit for use on the road. The rates applicable to such vehicles are specified in Part A of the Schedule.

Payment of Tax
The payment of tax is made within 15 days of the commencement of each quarter, half-year or year, as required under Section 3. This payment is made in advance by either the person in possession of the vehicle or the registered owner. After payment of the tax levied under Section 3, the taxation authority will issue the following to the applicant:
- A receipt specifying the amount of tax paid
- A taxation card mentioning both the rate of tax levied and the period for which the tax has been paid
It is to be noted that if the owner or person in possession holds no taxation card, a vehicle liable to tax under Section 3 may be held in custody until the owner obtains a taxation card.

Here we have discussed only a few states and their road tax rules and regulations. However, as discussed above, different states in India have different rules and regulations, as can clearly be seen from the provisions specified above. As per a news report by the Economic Times, Karnataka charges the highest tax among all the states in India.
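The percentage slabs for new vehicles in Karnataka, as quoted above, can be sketched as follows. The age-based depreciation slabs and any cesses are not modelled, and the treatment of the exact boundary values (₹50,000 and the lakh thresholds) follows the article's wording as closely as possible.

```python
def karnataka_lifetime_tax(cost, wheels, electric=False):
    """Return (rate, tax in Rs.) for a NEW vehicle, per the
    percentage slabs quoted in the article."""
    if electric:
        rate = 0.04
    elif wheels == 2:
        rate = 0.10 if cost < 50_000 else 0.12
    elif wheels == 4:
        if cost < 5_00_000:
            rate = 0.13
        elif cost <= 10_00_000:
            rate = 0.14
        elif cost <= 20_00_000:
            rate = 0.17
        else:
            rate = 0.18
    else:
        raise ValueError("only 2- and 4-wheelers are covered here")
    return rate, round(cost * rate, 2)

print(karnataka_lifetime_tax(45_000, 2))    # (0.1, 4500.0)
print(karnataka_lifetime_tax(8_00_000, 4))  # (0.14, 112000.0)
print(karnataka_lifetime_tax(8_00_000, 4, electric=True))  # (0.04, 32000.0)
```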
Even Delhi and Maharashtra are not far behind; both charge the highest tax on diesel vehicles. Maharashtra, on the other hand, charges the lowest tax rate for CNG-run cars. Road tax is found to be lowest in the North-Eastern region. Thus, in the current scenario, knowledge of road tax is crucial for every layman.

References
1) National implementations of road tax – Wikipedia
2) Blog post on "road tax is refundable on transfer and cancellation of registration"
3) Delhi government Transport Department official website
4) National Portal of India – Government of India official website
5) Delhi Motor Vehicles Taxation Act, 1962
6) The Maharashtra Motor Vehicles Tax Rules, 1959
7) BankBazaar.com
8) The Karnataka Motor Vehicles Taxation Act, 1957
9) BankBazaar.com
10) Blog post on "Road tax in Karnataka is the highest, North East levies the least"
Kaposi sarcoma is an interesting soft tissue tumor occurring in several distinct populations with a variety of presentations and courses. In its most well-known form, Kaposi sarcoma occurs in patients with immunosuppression, such as those with acquired immunodeficiency syndrome (AIDS) or those undergoing immunosuppression due to an organ transplant. This activity reviews the cause, presentation and pathophysiology of Kaposi sarcoma and highlights the role of the interprofessional team in its management.

Objectives:
- Describe the epidemiology of Kaposi sarcoma.
- Review the presentation of Kaposi sarcoma.
- Summarize the treatment options for Kaposi sarcoma.
- Explain modalities to improve care coordination among interprofessional team members in order to improve outcomes for patients affected by Kaposi sarcoma.

Introduction
Kaposi sarcoma is an interesting soft tissue tumor occurring in several distinct populations with a variety of presentations and courses. In its most well-known form, Kaposi sarcoma occurs in patients with immunosuppression, such as those with acquired immunodeficiency syndrome (AIDS) or those undergoing immunosuppression due to an organ transplant. Initially described in 1872 by Moritz Kaposi, an Austro-Hungarian dermatologist, in 5 patients with multifocal disease, human herpesvirus-8/Kaposi sarcoma-associated herpesvirus (HHV-8) was later discovered as a causative agent of Kaposi sarcoma as the AIDS epidemic progressed in the 1980s. Four clinical forms emerged.
- Classic form, occurring on the lower extremities of elderly men of Mediterranean and Eastern European descent
- Endemic African form, with generalized lymph node involvement, occurring in children
- HIV-related form, occurring in patients not taking highly active antiretroviral therapy (HAART), with diffuse involvement of the skin and internal organs
- Iatrogenic form, in immunosuppressed patients, also with diffuse involvement of the skin and internal organs

Each form has a differing natural history, ranging from indolent to more aggressive, and fatal in anaplastic varieties.

Etiology
Human herpesvirus-8 (HHV-8) is present in all forms of Kaposi sarcoma. HHV-8 interferes with many normal cell functions and requires cofactors such as cytokines or specific proteins for Kaposi sarcoma to develop. Other malignancies associated with HHV-8 include plasmablastic multicentric Castleman disease, primary effusion lymphoma, intravascular large B-cell lymphoma, and occasionally angiosarcoma and inflammatory myofibroblastic tumor.

Epidemiology
Classic Kaposi sarcoma has a male-to-female ratio of 17:1 and occurs primarily in patients over 50 years old of Eastern European and Mediterranean descent. These patients are at greater risk for secondary malignancies. The prevalence mirrors the distribution of HHV-8 throughout the world. Within the United States, incidence has been stable at around 1:100,000 since 1997. Endemic Kaposi sarcoma has an unusual predilection for the pediatric population and mirrors HHV-8 seropositivity. The rates of seropositivity in pediatric patients vary extensively throughout Africa, from a low of 2% in Eritrea to almost 100% in the Central African Republic. Following the HIV epidemic in Africa, the ratio of men to women with Kaposi sarcoma has fallen from 7:1 to 2:1. Endemic Kaposi sarcoma is now the most common cancer in men and the second most common cancer in women within Uganda and Zimbabwe.
HHV-8 seropositivity varies worldwide, from a high of 40% in sub-Saharan Africa to 2% to 4% in Northern Europe, Southeast Asia, and Caribbean countries. Approximately 10% of people in Mediterranean countries and 5% to 20% of United States patients are seropositive for HHV-8. This predilection for sub-Saharan Africa, the Mediterranean, and South America is unique to HHV-8 among human herpesviruses. AIDS-related Kaposi sarcoma is the second most common tumor in HIV patients with CD4 counts less than 200 cells/mm3 and is an AIDS-defining illness. Up to 30% of HIV patients not taking highly active antiretroviral therapy (HAART) will develop Kaposi sarcoma. HIV-positive male homosexuals have a 5- to 10-fold increased risk of Kaposi sarcoma. Iatrogenic Kaposi sarcoma has a male-to-female ratio of 3:1. Over 5% of transplant patients who develop a de novo malignancy will develop Kaposi sarcoma, a 400- to 500-fold increased risk over the general population. Patients with bone marrow or peripheral blood stem cell transplants have much lower risks of developing Kaposi sarcoma compared to solid organ transplant patients.

Pathophysiology
HHV-8 is a double-stranded, enveloped DNA virus with 6 major subtypes (A-F). Because HHV-8 has co-evolved with the human population for centuries, cofactors of immune defects and inflammation are required for the development of malignancy. It is transmitted primarily via saliva in childhood and sexually in adulthood, with some cases of infection via blood transfusion or intravenous drug use. Seropositive family members will often infect other family members, particularly in areas where HHV-8 is endemic. After infecting endothelial cells, HHV-8 activates the mTOR pathway, alters the cells to have mesenchymal differentiation, and promotes aberrant angiogenesis. Through immune suppression and inflammation, the HHV-8-infected cells can persist and proliferate. Expression of latency-associated nuclear antigen (LANA) causes binding of p53 and suppression of apoptosis.
LANA also maintains the viral episome and prevents Fas-induced programmed cell death. NF-kB is activated by HHV-8 and up-regulates cytokine expression. Up-regulation of VEGF and bFGF results in neo-angiogenesis. HHV-8 induces c-kit expression, producing the spindled morphology of Kaposi sarcoma cells from their original cuboidal monolayer. Matrix metalloproteinases are also upregulated by HHV-8. Interestingly, Kaposi sarcoma is not monoclonal, and different nodules within a patient have different clonal origins.

Clinically, Kaposi sarcoma is a vascular lesion and, as such, often presents as a violaceous pink-to-purple plaque on the skin or mucocutaneous surfaces. Lesions may be painful, with associated lymphedema and secondary infection. There are 3 major stages on the skin: patch, plaque, and nodule. Lesions may ulcerate or invade nearby tissues. In addition to involving lymph nodes, Kaposi sarcoma has a predilection for the lungs and gastrointestinal system, but can also occur in other visceral organs. Respiratory involvement can be associated with death due to Kaposi sarcoma.

Histopathology
Kaposi sarcoma progresses through 3 distinct clinical stages: patch, plaque, and nodular. The patch stage of Kaposi sarcoma is characterized by a spindle cell proliferation of irregular, complex vascular channels dissecting through the dermis, with the promontory sign, defined as ramifying proliferating vessels surrounding larger ectatic pre-existing vessels and skin adnexa. Extravasated red blood cells, hemosiderin-laden macrophages, rare hyaline globules, and perivascular lymphocytes and plasma cells are also frequently identified. Advancement to the plaque stage brings increasing prominence of the features seen in the patch stage, with extension into the subcutis and more prominent intra- and extracellular hyaline globules. These hyaline globules are periodic acid-Schiff positive and demonstrate Weibel-Palade bodies by electron microscopy.
Cellular pleomorphism is minimal, and there are few mitotic figures. As Kaposi sarcoma develops into the nodular form, pleomorphism increases and mitotic figures become more prominent. The slit-like lumens are enhanced as well. Histologic variants of Kaposi sarcoma include: in situ Kaposi sarcoma, anaplastic Kaposi sarcoma, lymphangiectatic Kaposi sarcoma, bullous Kaposi sarcoma, ecchymotic Kaposi sarcoma, glomeruloid Kaposi sarcoma, and hyperkeratotic Kaposi sarcoma. The presence of HHV-8 can be confirmed with immunohistochemistry for LANA1.

History and Physical
A general history and physical are performed first. The skin is examined for purplish lesions or lymph node enlargement that may signal the presence of Kaposi sarcoma. Suspicious lesions on the skin or lymph nodes can be biopsied and sent to pathology for evaluation. Particular care and attention should be paid to mucocutaneous surfaces, as these are common presenting locations.

Evaluation
For a definitive diagnosis of Kaposi sarcoma, a biopsy or excision of the suspicious area must be performed. A pathologist examines the tissue under the microscope and looks for the characteristic features: a spindle cell vascular proliferation in the dermis. Immunohistochemical positivity for LANA1 (a surrogate marker for HHV-8) helps to differentiate Kaposi sarcoma from similar lesions. The lesional cells also express CD34, factor VIII, PECAM-1, D2-40, VEGFR-3, and BCL-2.

Treatment / Management
Skin involvement of Kaposi sarcoma is treated by local excision, liquid nitrogen, and injection of vincristine. Chemotherapy is a mainstay of treatment for endemic and systemic forms, particularly in children. Patients with HIV-related Kaposi sarcoma respond well to HAART, which can cause regression or complete resolution of their sarcoma. In patients with severe Kaposi sarcoma, treatment combines HAART with chemotherapy.
Iatrogenic Kaposi sarcoma treatment must balance a reduction in immune suppression, or withdrawal of steroid treatment, against the risk of transplant rejection while treating the sarcoma.

Differential Diagnosis
Histologically, spindle cell vascular lesions in the skin include a differential diagnosis of:
- Interstitial granuloma annulare
- Spindle cell hemangioma
- Acquired tufted angioma
- Fibrosarcomatous dermatofibrosarcoma protuberans
- Spindle cell melanoma

The differential diagnosis of Kaposi sarcoma on mucocutaneous surfaces includes:

Radiation Oncology
Classical Kaposi sarcoma tends to be radiosensitive, especially in the early stages of the disease. Recommended dosages are 15.2 Gy for oral lesions; 20 Gy for the conjunctiva, eyelids, lips, hands, feet, penis, and anal regions; and 30 Gy with hypofractionation for other parts of the body. In later stages of Kaposi sarcoma, radiation treatment may be used palliatively to reduce pain, edema, and bleeding.

Pertinent Studies and Ongoing Trials
Thalidomide, VEGF inhibitors, tyrosine kinase inhibitors, and matrix metalloproteinase inhibitors are currently under investigation as single agents to treat Kaposi sarcoma. Several studies are currently ongoing for new and novel treatments of Kaposi sarcoma, including:
- Intra-lesional nivolumab for cutaneous Kaposi sarcoma
- Pomalidomide with liposomal doxorubicin for refractory Kaposi sarcoma
- Pembrolizumab for relapsed and refractory HIV-related neoplasms
- Ipilimumab and nivolumab for advanced HIV-related neoplasms
- Nelfinavir for endemic, classic, and HIV-related Kaposi sarcoma
- Valganciclovir for HIV-related Kaposi sarcoma, particularly in patients with immune reconstitution syndrome
- Recombinant EphB4-HSA fusion protein

Medical Oncology
Highly active antiretroviral therapy (HAART) is the mainstay of treatment in patients with AIDS-related Kaposi sarcoma. Not only does HAART reduce the risk of developing Kaposi sarcoma, but it can also lead to spontaneous regression of the tumors.
For local disease, sclerotherapy, intralesional vinca alkaloids, bleomycin, interferon-alpha, topical alitretinoin, and imiquimod cream have all been used with success. Systemic chemotherapy approved for treatment of Kaposi sarcoma includes liposomal anthracyclines, paclitaxel, etoposide, vincristine, vinblastine, vinorelbine, bleomycin, or a combination of doxorubicin, bleomycin, and vincristine. Others have used interferon-alpha 2b with success. Other therapies tried include thalidomide, VEGF inhibitors, tyrosine kinase inhibitors, and matrix metalloproteinase inhibitors, but further research into these as single agents is still being conducted.

Staging
Kaposi sarcoma is not usually staged; however, a staging system was initially developed in the 1980s for HIV-related Kaposi sarcoma by the AIDS Clinical Trials Group. With modern HAART, there are 2 proposed risk categories: good and poor. Poor prognosis was based on poor tumor stage and poor systemic disease status.

Prognosis
Ten percent to 20% of patients with the classic form of Kaposi sarcoma will succumb to their disease, but a larger percentage will develop a secondary malignancy that may also be lethal. Endemic, HIV-related, and iatrogenic forms of Kaposi sarcoma have a variable prognosis, which may depend on CD4 count and opportunistic infections. In patients with iatrogenic Kaposi sarcoma, the prognosis depends on the underlying condition and the ability to tolerate a reduction in immunosuppression. A worse prognosis is usually found in patients with visceral organ involvement, particularly of the lungs.

Complications
Larger lesions can be painful and lead to edema and disfigurement of the skin. Pulmonary involvement of Kaposi sarcoma can cause respiratory distress and lead to death. Classic Kaposi sarcoma has a known association with the development of a secondary malignancy. Chemotherapy is not without side effects, including but not limited to neurotoxicity, infertility, cardiac toxicity, and nerve pain.
Radiation therapy can lead to drying of the skin, avascularity, poor healing, development of additional malignancies, and lymphedema.

Postoperative and Rehabilitation Care
Patients should be followed closely for resolution or recurrence of their disease.

Pearls and Other Issues
- There are 4 major forms of Kaposi sarcoma: classic, endemic, HIV-related, and iatrogenic, each with a different patient population and presentation.
- All cases of Kaposi sarcoma harbor the HHV-8 virus, though this alone is not sufficient to cause the neoplasm.
- Histologically, Kaposi sarcoma presents as a spindle cell vascular neoplasm with extravasated red blood cells and hyaline globules. Immunohistochemistry for LANA-1 can confirm the presence of HHV-8 in the neoplastic cells.

Enhancing Healthcare Team Outcomes
Kaposi sarcoma has been definitively linked to HHV-8 infection, with increased incidence in patients with HIV infection, immunosuppression, or of Eastern European or Mediterranean descent. In seropositive patients at high risk of developing Kaposi sarcoma, the primary care team should take care to perform thorough skin exams, looking for the characteristic violaceous patches and plaques. Additionally, patients in these high-risk groups should be counseled to examine their own skin for lesions. Discovery of any suspicious lesion should prompt expedient biopsy and examination by a pathologist. Patient care is enhanced when the submitting provider supplies a complete clinical history and mentions the suspicion of Kaposi sarcoma to the reviewing pathologist. Patients who are diagnosed with Kaposi sarcoma should be closely monitored for disease regression or recurrence and for the need for radiation treatment or systemic chemotherapy. In the HIV-associated or iatrogenic forms, the clinical team should communicate and work closely to begin HAART or to reduce immunosuppression while treating the patient's underlying condition.
(Level I)

Dentists and oral health care providers should be educated about the clinical presentation in the oral cavity, as this is a common location.

Figure: Kaposi sarcoma (contributed by DermNetNZ)
Figure: Histopathology of Kaposi sarcoma (contributed by Bradie Bishop, MD)

References
Stănescu L, Foarfă C, Georgescu AC, Georgescu I. Kaposi's sarcoma associated with AIDS. Romanian Journal of Morphology and Embryology. 2007. [PubMed PMID: 17641807]
Kemény L, Gyulai R, Kiss M, Nagy F, Dobozy A. Kaposi's sarcoma-associated herpesvirus/human herpesvirus-8: a new virus in human pathology. Journal of the American Academy of Dermatology. 1997 Jul. [PubMed PMID: 9216532]
Bisceglia M, Minenna E, Altobella A, Sanguedolce F, Panniello G, Bisceglia S, Ben-Dor DJ. Anaplastic Kaposi's Sarcoma of the Adrenal in an HIV-negative Patient With Literature Review. Advances in Anatomic Pathology. 2018 Sep 12. [PubMed PMID: 30212382]
Mohanna S, Maco V, Bravo F, Gotuzzo E. Epidemiology and clinical characteristics of classic Kaposi's sarcoma, seroprevalence, and variants of human herpesvirus 8 in South America: a critical review of an old disease. International Journal of Infectious Diseases. 2005 Sep. [PubMed PMID: 16095940]
Dedicoat M, Newton R. Review of the distribution of Kaposi's sarcoma-associated herpesvirus (KSHV) in Africa in relation to the incidence of Kaposi's sarcoma. British Journal of Cancer. 2003 Jan 13. [PubMed PMID: 12556950]
Lemlich G, Schwam L, Lebwohl M. Kaposi's sarcoma and acquired immunodeficiency syndrome. Postmortem findings in twenty-four cases. Journal of the American Academy of Dermatology. 1987 Feb. [PubMed PMID: 3029191]
Kaplan LD. Human herpesvirus-8: Kaposi sarcoma, multicentric Castleman disease, and primary effusion lymphoma. Hematology. American Society of Hematology Education Program.
2013 [PubMed PMID: 24319170] Temelkova I,Tronnier M,Terziev I,Wollina U,Lozev I,Goldust M,Tchernev G, A Series of Patients with Kaposi Sarcoma (Mediterranean/Classical Type): Case Presentations and Short Update on Pathogenesis and Treatment. Open access Macedonian journal of medical sciences. 2018 Sep 25 [PubMed PMID: 30337990] Evolving Paradigms in HIV Malignancies: Review of Ongoing Clinical Trials., Bender Ignacio R,Lin LL,Rajdev L,Chiao E,, Journal of the National Comprehensive Cancer Network : JNCCN, 2018 Aug [PubMed PMID: 30099376] Nasti G,Talamini R,Antinori A,Martellotta F,Jacchetti G,Chiodo F,Ballardini G,Stoppini L,Di Perri G,Mena M,Tavio M,Vaccher E,D'Arminio Monforte A,Tirelli U, AIDS-related Kaposi's Sarcoma: evaluation of potential new prognostic factors and assessment of the AIDS Clinical Trial Group Staging System in the Haart Era--the Italian Cooperative Group on AIDS and Tumors and the Italian Cohort of Patients Naive From Antiretrovirals. Journal of clinical oncology : official journal of the American Society of Clinical Oncology. 2003 Aug 1 [PubMed PMID: 12885804]
The creation of urban farms in complex urban built environments may create suitable local conditions for vector mosquitoes. Urban farms have been implicated in the proliferation of mosquitoes in Africa, but there is a dearth of knowledge about their role in the proliferation of mosquitoes elsewhere. In this study, we surveyed two urban farms in Miami-Dade County, Florida. Our results show that urban farms provide favorable conditions for populations of vector mosquito species by providing a wide range of essential resources such as larval habitats, suitable outdoor resting sites, sugar-feeding centers, and available hosts for blood-feeding. A total of 2,185 specimens comprising 12 species of mosquitoes were collected over 7 weeks. The results varied greatly between the urban farms. At the Wynwood urban farm, 1,016 specimens were collected, distributed among only 3 species, while the total number of specimens collected at the Golden Glades urban farm was 1,168, comprising 12 species. The presence of vector mosquitoes in urban farms may represent a new challenge for the development of effective strategies to control populations of vector mosquito species in urban areas. Citation: Wilke ABB, Carvajal A, Vasquez C, Petrie WD, Beier JC (2020) Urban farms in Miami-Dade county, Florida have favorable environments for vector mosquitoes. PLoS ONE 15(4): e0230825. https://doi.org/10.1371/journal.pone.0230825 Editor: Olle Terenius, Swedish University of Agricultural Sciences, SWEDEN Received: November 3, 2019; Accepted: March 9, 2020; Published: April 6, 2020 Copyright: © 2020 Wilke et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Data Availability: All relevant data are within the manuscript.
Funding: This research was supported by the Miami-Dade County and the CDC (https://www.cdc.gov/) grant 1U01CK000510-03: Southeastern Regional Center of Excellence in Vector-Borne Diseases: The Gateway Program. CDC had no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript. Competing interests: The authors have declared that no competing interests exist.

The intensification of urbanization processes is deeply affecting the ecology and behavior of mosquito species in urban environments [1,2]. Vector species that are adapted to urban environments are gradually increasing their range and abundance in urban areas and, as a result, their contact probabilities with humans. Currently, not only is more than 80% of the world population at risk for vector-borne disease transmission [4,5], but formerly non-endemic areas are experiencing outbreaks more frequently [6–8]. In 2016, a major Zika virus outbreak, with more than 1,000,000 confirmed cases and thousands of fetal malformations caused by the virus, took the Americas by storm [9–11]. Two cities in the continental United States reported local transmission of Zika virus: Miami, Miami-Dade County, Florida and Brownsville, Cameron County, Texas. In this context, the community composition and year-round abundance of vector species of mosquitoes make Miami-Dade County, Florida a receptive gateway for arbovirus entry to the United States. Since the Zika virus outbreak in 2016, there have been 646 imported arbovirus infections reported in Miami by the Department of Health (DOH), including dengue, Zika, chikungunya, and West Nile virus. From January to September 2019 alone, the DOH reported 236 travel-related cases of dengue, 7 imported cases of chikungunya, and 15 imported cases of Zika in Miami-Dade County.
In this period, 12 locally transmitted cases of dengue have also been reported, which led health officials to issue a mosquito-borne illness alert for the Miami-Dade area. Biotic homogenization processes driven by urbanization are responsible for decreasing species richness and increasing the abundance of the few species adapted to thrive in urban environments, in a nonrandom process of biodiversity loss [2,3,15–18]. The presence and abundance of vector mosquitoes that are well adapted to urban environments, such as Aedes aegypti and Culex quinquefasciatus, are on the rise and, as a consequence, the incidence of vector-borne diseases is increasing globally [19–22]. Miami-Dade County is a major gateway to the United States and is at especially high risk for the introduction of arboviruses due to the high flow of people coming and going from endemic areas. Miami has the proper environmental conditions to support vector mosquito populations: the average annual temperature is around 25°C and cumulative rainfall around 1,573 mm; even in winter (December–March) the temperature is mild, with median values around 15°C and total rainfall around 50 mm. Consequently, several vector mosquitoes can be abundantly found in Miami, including Ae. aegypti and Cx. quinquefasciatus, both widely distributed and abundant year-round [16,18] in many different habitats in the urban environments of Miami. Previous studies showed that Ae. aegypti and Cx. quinquefasciatus were successfully exploiting construction sites and tire shops deeply inserted in highly urbanized areas of Miami [17,24]. Aedes aegypti was also found breeding in high numbers in ornamental bromeliads widely used in landscaping in Miami [18,25]. Controlling vector mosquitoes in these problematic habitats is considered essential by the Miami-Dade Mosquito Control Division in the effort to prevent future arbovirus outbreaks.
In this context, the growing trend of consuming organic products produced locally has contributed to the growth of urban agriculture worldwide [26,27]. Urban farms are usually small green areas inserted within one city block, in which crops or animal husbandry activities take place. Urban farms can be kept either by communities or individuals and have the goal of cultivating, processing, and distributing locally grown or raised food within the nearby urban areas. The creation of urban farms in the already complex urban built environment may represent a new challenge for the effective control of vector mosquitoes in urban environments. Urban farms have been implicated in the proliferation of mosquitoes in Africa [28,29], but their role in the United States is still unknown. However, it is not uncommon for vector species to exploit multiple and complex habitats and thrive in urban areas [30,31]. Even though urban farms may provide favorable environments for both native and exotic species, playing an important role in species conservation, these areas have a wide range of essential resources for vector mosquitoes, such as larval aquatic habitats and available hosts for blood-feeding. Moreover, these areas are no-spraying zones due to bee-keeping activities and pesticide-free crops. Understanding the ecology and behavior of mosquito populations in the increasingly popular urban farm setting is fundamental for the development of vector surveillance and control in urban areas. We hypothesized that urban farms in Miami-Dade County, Florida have the appropriate conditions to support populations of vector mosquito species. Therefore, the objective of this study was to conduct a cross-sectional survey of two urban farms located in neighborhoods of Miami-Dade County with distinct levels of urbanization to assess whether vector mosquitoes can be found in these areas.

Materials and methods

With approximately 3 million residents, Miami-Dade is the most populous county in Florida.
Miami is a very complex and dynamic city. It is a culturally diverse and touristic city, receiving millions of tourists every year. Miami also has substantial socioeconomic, urban, and land cover heterogeneity. In this study, we surveyed two urban farms in Miami-Dade County, Florida once a week for 7 consecutive weeks, from March 19 to May 2, 2019, for the presence of vector mosquitoes. Since this study posed less than minimal risk to participants and did not involve endangered or protected species, the Institutional Review Board at the University of Miami determined that the study was exempt from institutional review board assessment (IRB Protocol Number: 20161212). Two urban farms within the borders of Miami-Dade County were selected for this study (Fig 1A), one in Wynwood and the other in the Golden Glades (Fig 1B). The urban farm located in Wynwood was selected due to the importance of the area during the Zika virus outbreak in 2016 and due to its high human population density of 12,350 people per square mile and approximately 2 million tourists every year. The Wynwood urban farm has a total surface of 4,675 m2 divided into a working area and two planting areas (Fig 1C). The urban farm located in the Golden Glades was considerably larger, with 11,900 m2, two planting areas, and a working area, and it is located in a suburban area with 6,985 people per square mile (Fig 1D). (A) Southeast United States; (B) Miami-Dade County (Wynwood farm is displayed in purple and Golden Glades farm in green); (C) Wynwood urban farm; and (D) Golden Glades urban farm. Fig 1 was produced using ArcGIS 10.2 (Esri, Redlands, CA) using freely available layers from the Miami-Dade County’s Open Data Hub - https://gis-mdc.opendata.arcgis.com. A wide range of potential aquatic breeding habitats was available. Both urban farms had two 55-gallon buckets each, one used for water storage and the other as a trash can.
The Wynwood farm also had 2 large open planting substrate bags (2.2 cubic feet), which accumulate rainwater, and many pots scattered throughout the farm. The Golden Glades farm had a different scenario, with no pots or bags but with a bromeliad patch, a kayak used to harvest rainwater, and a natural pond.

Adult mosquito and larval sampling techniques

One BG-Sentinel trap (Biogents AG, Regensburg, Germany) was deployed weekly for 24 hours on both urban farms. All traps were baited with CO2 using a container filled with 1 kg of dry ice pellets. On each farm, we sampled mosquitoes in available ground-level vegetation using an Improved Prokopack Aspirator (model 1419) with a series of timed 10-minute collections in both the working and planting areas. Resting mosquitoes were collected once a week on each farm, totaling 30 minutes per week in each urban farm (1 working area and 2 planting areas). Aquatic breeding habitats were surveyed for immature mosquitoes once a week for two hours or until all potential breeding sites were exhausted. Collections of immature mosquitoes were made with entomological dippers, and collected mosquitoes were placed in 100 ml containers for transport. All collected mosquitoes were transported to the Miami-Dade County Mosquito Control Laboratory and morphologically identified using the taxonomic keys of Darsie and Morris. Adult mosquitoes were promptly identified; larvae were kept at room temperature until L4 and then identified; pupae were allowed to emerge as adults and then identified.

Mosquito control surveillance network

The Miami-Dade Mosquito Control Network consists of 191 traps (157 BG-Sentinel and 34 CDC traps) deployed weekly for 24 hours since August 2016. All traps are baited with CO2, and all collected mosquitoes are transported to the Miami-Dade County Mosquito Control Laboratory and subsequently morphologically identified using taxonomic keys.
For this study, we compared the data acquired by two BG-Sentinel traps, one located 500 meters from the Wynwood urban farm and the other 800 meters from the Golden Glades urban farm, during the same period as this study (Fig 2). For detailed information on the collection methods refer to Wilke et al. (2019). Map of Miami-Dade County, Florida displaying the location of the Wynwood (purple) and the Golden Glades (green) urban farms in relation to the BG-Sentinel traps from the Miami-Dade County mosquito control surveillance grid (blue). Fig 2 was produced using ArcGIS 10.2 (Esri, Redlands, CA) using freely available layers from the Miami-Dade County’s Open Data Hub - https://gis-mdc.opendata.arcgis.com. After the morphological identification of the mosquitoes to the species level, we performed the biodiversity analyses. Biodiversity analyses were done for each urban farm and the control sites individually using the Shannon and Simpson indices [36,37]. Both indices are extensively used to help understand and visualize variations in diversity in an ecological community. The Shannon and Simpson indices are complementary. The Shannon index considers species abundance within communities; lower diversity yields lower values and vice versa. The Simpson (1-D) index estimates species dominance; a value close to 1 means the community is dominated mainly by a single species. To estimate both the sampling sufficiency and the number of species in samples with fewer specimens for all specimens collected in this study, we used the species accumulation curve, estimated by individual rarefaction as in Adrain et al., using Past software (v.3.16). Then, the data were analyzed using the cumulative profiles of log species richness (ln S), Shannon index (H), and log evenness (ln E) (SHE) model. This model computes the ln S, H, and ln E values independently for each sample consecutively to the last sample.
Deviations from a straight line are indicative of shifts in species composition and variations in the mosquito assembly. Analyses were carried out with 10,000 randomizations, where each randomization is done without replacement, using a 95% confidence interval, with Past software (v.3.16) [41,43]. To test for differences in the mosquito community between the urban farms and their respective control sites, we used the Kruskal-Wallis one-way analysis of variance in Past software (v.3.16).

Results

A total of 2,185 specimens comprising 12 species of mosquitoes were collected. The five most abundant species were Cx. quinquefasciatus (1,022 specimens collected, 46.8%), Cx. nigripalpus (357 specimens, 16.3%), Ae. aegypti (258 specimens, 11.8%), Cx. coronator (256 specimens, 11.7%), and Ae. albopictus (148 specimens, 6.8%) (Tables 1 and 2; numbers of pupae are given in parentheses). However, the results varied greatly according to the urban farm monitored. At the Wynwood urban farm, a total of 1,016 specimens were collected, belonging to 3 species: Cx. quinquefasciatus (916 specimens), Ae. aegypti (99 specimens), and Anopheles quadrimaculatus (1 specimen) (Table 1). Cx. quinquefasciatus and Ae. aegypti were consistently collected in relatively high numbers by the BG-Sentinel trap. Resting Cx. quinquefasciatus were collected in relatively higher numbers by the manual aspirator, compared to Ae. aegypti. A total of 199 immature Cx. quinquefasciatus and 3 Ae. aegypti were collected, only once during the study period, from a bucket and a water storage barrel, respectively (Table 1). The total number of specimens collected at the Golden Glades urban farm was 1,169 specimens, comprising 12 species (Table 2). The five most abundant species, in decreasing order, were: Cx. nigripalpus with 357 specimens, followed by Cx. coronator with 256 specimens, Ae. aegypti with 159 specimens, Ae. albopictus with 148 specimens, and Cx. quinquefasciatus with 106 specimens.
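As an illustration of the diversity analyses described in the methods, the Shannon index, the Simpson (1-D) index, and the cumulative SHE profile can be sketched in Python. This is a minimal sketch, not the Past implementation; the function names are ours, and samples are assumed to be dictionaries mapping species to per-sample counts.

```python
import math

def shannon(counts):
    """Shannon diversity: H = -sum(p_i * ln p_i) over species proportions."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def simpson_1_minus_d(counts):
    """Simpson (1-D) index, with dominance D = sum(p_i^2)."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def she_profile(samples):
    """Cumulative SHE profile: (ln S, H, ln E) after pooling samples 1..k,
    using the identity H = ln S + ln E to derive log evenness."""
    pooled, profile = {}, []
    for sample in samples:
        for species, count in sample.items():
            pooled[species] = pooled.get(species, 0) + count
        counts = list(pooled.values())
        s = len(counts)
        h = shannon(counts)
        profile.append((math.log(s), h, h - math.log(s)))
    return profile
```

Plotting the three components of the profile against cumulative sample number and looking for breaks from a straight line mirrors the SHE analysis used here; the Kruskal-Wallis comparison between sites could likewise be reproduced with scipy.stats.kruskal.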
Only 4 species were collected from the immature to the adult stages: Ae. aegypti, Ae. albopictus, Cx. quinquefasciatus, and Wyeomyia mitchelli. Based on the data obtained from the Miami-Dade Mosquito Control Division surveillance network, both the Wynwood and Golden Glades urban farms surveyed in this study not only presented a higher abundance of vector mosquitoes, with roughly 5 times more mosquitoes collected using the same sampling effort (one BG-Sentinel trap deployed for 24 h and baited with CO2), but also higher species richness than their surrounding areas. The total number of mosquitoes collected at the Wynwood control site was 132, whereas 226 mosquitoes were collected at the Golden Glades control site (Tables 3 and 4). The values for Shannon’s diversity index were 0.327 (95% CI: 0.288–0.371) at the Wynwood urban farm and 1.86 (95% CI: 1.812–1.906) at the Golden Glades urban farm. The results for the Simpson (1-D) index were 0.882 (95% CI: 0.791–0.851) at the Wynwood urban farm and 0.195 (95% CI: 0.184–0.208) at the Golden Glades urban farm. The species accumulation curve, estimated by individual rarefaction, resulted in a highly asymptotic curve for both the Golden Glades and Wynwood urban farms, indicating that sampling sufficiency was achieved for both collection areas (Fig 3). The lack of species richness in the mosquito community found at the Wynwood urban farm resulted in virtually no changes in the direction of the lines in the cumulative SHE analysis, corroborating the lack of variability in species composition, diversity, and evenness (Fig 4A). At the Golden Glades urban farm, after an initial variation in the results of the SHE analysis, no shifts in the direction of the lines of the SHE model were observed, indicating low levels of variability in the mosquito diversity (Fig 4B). (A) Wynwood urban farm; (B) Golden Glades urban farm.
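The individual-based rarefaction behind these species accumulation curves can be sketched as follows. This is a minimal sketch under the standard hypergeometric formulation (expected richness in a random subsample of n individuals), not the Past implementation; the function names are ours.

```python
from math import comb

def rarefied_richness(counts, n):
    """Expected number of species in a random subsample of n individuals:
    E[S_n] = sum_i (1 - C(N - N_i, n) / C(N, n)), where N = sum of counts."""
    N = sum(counts)
    return sum(1 - comb(N - c, n) / comb(N, n) for c in counts)

def accumulation_curve(counts):
    """Rarefied species accumulation curve for subsample sizes 1..N."""
    return [rarefied_richness(counts, n) for n in range(1, sum(counts) + 1)]
```

A curve that flattens (becomes asymptotic) well before the full sample size, as reported for both farms, suggests the sampling effort captured most of the species present.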
The values for Shannon’s diversity index were 0.506 (95% CI: 0.398–0.593) at the Wynwood control site and 0.705 (95% CI: 0.621–0.801) at the Golden Glades control site. The results for the Simpson (1-D) index were 0.674 (95% CI: 0.596–0.764) at the Wynwood control site and 0.572 (95% CI: 0.517–0.632) at the Golden Glades control site. The species accumulation curve, estimated by individual rarefaction, resulted in a highly asymptotic curve for the Wynwood control site and a moderately asymptotic curve for the Golden Glades control site (Fig 5). The cumulative SHE analysis of the Wynwood control site resulted in low variation in the Shannon index and log evenness, and because only Ae. aegypti and Cx. quinquefasciatus were collected at this site, there was no variation in the log abundance (Fig 6A). Even though more variation was present in the cumulative SHE analysis of the Golden Glades control site, the results indicated low levels of variability in the mosquito diversity. (A) Wynwood control site; (B) Golden Glades control site. The mosquito community composition was significantly different in the comparison between each urban farm and its respective control site: Wynwood urban farm (Kruskal-Wallis, chi-squared = 13.9, df = 2, P < 0.001); Golden Glades urban farm (Kruskal-Wallis, chi-squared = 552.7, df = 11, P < 0.001).

Discussion

The availability of environmental resources is vital to sustaining populations of vector mosquito species. The availability of these resources varies greatly in urban environments, driving vector mosquito population dynamics, presence, and abundance [5,16,44,45]. Our results showed that mosquito species were abundantly found breeding and dwelling in urban farms, although the farms varied greatly in community composition and abundance. The lower species richness found in the Wynwood urban farm, with Cx.
quinquefasciatus comprising 90% of all specimens collected, may be explained by its location in a densely populated and highly urbanized area [1,15,16,32,46]. Ae. aegypti and Cx. quinquefasciatus were collected at all stages: as immatures in their aquatic habitats, as resting adults, and as females seeking a host for blood-feeding. On the other hand, the Golden Glades urban farm displayed a higher species richness and a more even mosquito assembly, with 5 species comprising 90% of all mosquitoes collected: Ae. aegypti, Ae. albopictus, Cx. coronator, Cx. nigripalpus, and Cx. quinquefasciatus. Among the total 12 species, Ae. aegypti, Ae. albopictus, Cx. quinquefasciatus, and Wy. mitchelli were collected as immatures, resting adults, and flying adults, while Ae. taeniorhynchus, Ae. tortillis, An. quadrimaculatus, Cx. coronator, Cx. erraticus, Cx. nigripalpus, Deinocerites cancer, and Wy. vanduzeei were only collected by the BG-Sentinel trap. One possible explanation is that these species may have actively invaded the urban farm seeking hosts. The relatively low abundance of immature mosquitoes found in both the Wynwood and Golden Glades urban farms, compared to the number of adult mosquitoes, may imply that mosquitoes are actively moving into the urban farms seeking resources on their premises, such as the sugar and blood sources widely available in these environments. However, the presence of cryptic breeding sites cannot be excluded. Ae. albopictus was the exception. Despite not being abundantly found in Miami, this species was able to use a wider range of natural aquatic habitats within the Golden Glades urban farm. Ae. albopictus was found breeding in high numbers in aquatic habitats at the Golden Glades urban farm in 5 of the 7 weeks sampled, but only a few adults were collected by the BG-Sentinel trap. The urban farms surveyed in this study were relatively small, and BG-Sentinel traps have been specially designed to collect Ae. aegypti and Ae. albopictus mosquitoes.
Therefore, it was expected that the number of immature Ae. albopictus would be proportional to the number of adults, especially given the presence of pupae indicating their ability to reach adulthood. We hypothesize that immature Ae. albopictus mosquitoes are exploiting the resources available within the urban farm, but once they reach adulthood they seek hosts elsewhere. The comparison of the urban farms with their respective control sites revealed a distinct scenario. Culex quinquefasciatus was substantially more abundant in the Wynwood urban farm than at the control site. This result indicates the availability of suitable aquatic breeding habitats and the essential resources needed to sustain the development of the Cx. quinquefasciatus population on the farm. As a consequence, the mosquito community composition is highly uneven, as highlighted by the Simpson index, being comprised mostly of Cx. quinquefasciatus. The comparison of the Golden Glades urban farm to its control site revealed a much more diverse mosquito community composition. This finding indicates the availability not only of suitable aquatic breeding sites but also of essential resources (e.g., sugar sources), making the establishment of many different mosquito species possible. This result was supported by the Shannon index, indicating a more even and diverse mosquito community. Favorable environments for mosquito proliferation, with sugar and blood sources and resting and aquatic habitats present in urban farms, stand as a substantial challenge for public health and for the development of effective mosquito control strategies. Urban farmworkers spend the majority of their working days outdoors and are substantially exposed to vector mosquitoes and potentially to arboviruses.
The complex interaction between human behavior, weather conditions, and the inherent physical features present in urban farms may have a significant influence on the population dynamics of vector mosquitoes. Moreover, urban farms are often no-spray zones, and chemical interventions to reduce mosquito populations are limited to emergency situations, increasing the complexity of developing mosquito control strategies. The diversity analyses showed that sampling sufficiency was reached in both urban farms, even though the survey period was very short, suggesting that no additional species would be found at either farm with increased sampling effort. An. quadrimaculatus was collected only once, with the aspirator; therefore, it could be considered an occasional capture. However, we were not able to collect data over long periods of time and across all weather and seasonal variations, which would further enhance insight into vector abundance and seasonality. In this paper we included controls supporting the conclusion that urban farms are favorable habitats; in previous papers [17,18,24,25] these kinds of controls were not included. Our results show how urban farms provide favorable environments for populations of vector mosquito species because they likely provide a wide range of essential resources needed for their survival. The increasing trend of urban agriculture and the growing number of urban farms represent a new challenge for the development of effective strategies to control populations of vector mosquito species in urban areas. We would like to thank the staff of the Miami-Dade County Mosquito Control Division for their help in processing and identifying the mosquitoes.

References

- 1. Johnson MTJ, Munshi-South J. Evolution of life in urban environments. Science. 2017;358: eaam8327. pmid:29097520 - 2. McKinney ML. Urbanization as a major cause of biotic homogenization. Biol Conserv. 2006;127: 247–260. - 3. Wilke ABB, Beier JC, Benelli G.
Complexity of the relationship between global warming and urbanization–an obscure future for predicting increases in vector-borne infectious diseases. Curr Opin Insect Sci. 2019;35: 1–9. pmid:31279898 - 4. Bhatt S, Gething PW, Brady OJ, Messina JP, Farlow AW, Moyes CL, et al. The global distribution and burden of dengue. Nature. 2013;496: 504–507. pmid:23563266 - 5. Franklinos LH V, Jones KE, Redding DW, Abubakar I. The effect of global change on mosquito-borne disease. Lancet Infect Dis. 2019;19: e302–e312. pmid:31227327 - 6. Poletti P, Messeri G, Ajelli M, Vallorani R, Rizzo C, Merler S. Transmission potential of chikungunya virus and control measures: The case of italy. PLoS One. 2011;6: e18860. pmid:21559329 - 7. Gould EA, Gallian P, Lamballerie X De, Charrel RN. First cases of autochthonous dengue fever and chikungunya fever in France: From bad dream to reality! Clin Microbiol Infect. 2010;16: 1702–1704. pmid:21040155 - 8. Gjenero-Margan I, Aleraj B, Krajcar D, Lesnikar V, Klobucar A, Pem-Novosel I, et al. Autochthonous dengue fever in Croatia, August-September 2010. Euro Surveill. 2011;16: 1–4. - 9. PAHO/WHO. Zika cases and congenital syndrome associated with Zika virus reported by countries and territories in the Americas (Cumulative Cases), 2015–2017. World Health Organization. Available at: https://www.paho.org/hq/index.php?option=com_content&view=article&id=12390:zika-cumulative-cases&Itemid=42090&lang=en. - 10. Delaney A, Mai C, Smoots A, Cragan J, Ellington S, Langlois P, et al. Population-based surveillance of birth defects potentially related to Zika virus infection—15 States and U.S. Territories, 2016. Morb Mortal Wkly Rep. 2018;67: 91–96. - 11. Shapiro-Mendoza CK, Rice ME, Galang RR, Fulton AC, VanMaldeghem K, Prado MV, et al. Pregnancy outcomes after maternal Zika virus infection during pregnancy—U.S. Territories, January 1, 2016–April 25, 2017. Morb Mortal Wkly Rep. 2017;66: 615–621. - 12. Centers for Disease Control and Prevention (CDC). 
2016 Zika virus Case Counts in the US. Available at: https://www.cdc.gov/zika/reporting/2016-case-counts.html. - 13. Likos A, Griffin I, Bingham AM, Stanek D, Fischer M, White S, et al. Local mosquito-borne transmission of Zika Virus—Miami-Dade and Broward Counties, Florida, June–August 2016. Morb Mortal Wkly Rep. 2016;65: 1032–1038. - 14. Florida Department of Health. Mosquito-Borne Illness Advisory. 2019. Available at: http://miamidade.floridahealth.gov/newsroom/2019/09/2019-09-13-Health-Officials-Issue-Mosquito-Borne-illnesses-alert.html - 15. Knop E. Biotic homogenization of three insect groups due to urbanization. Glob Chang Biol. 2016;22: 228–236. pmid:26367396 - 16. Wilke ABB, Vasquez C, Medina J, Carvajal A, Petrie W, Beier JC. Community composition and year-round abundance of vector species of mosquitoes make Miami-Dade County, Florida a receptive gateway for arbovirus entry to the United States. Sci Rep. 2019;9: 8732. pmid:31217547 - 17. Wilke ABB, Vasquez C, Petrie W, Caban-Martinez AJ, Beier JC. Construction sites in Miami-Dade County, Florida are highly favorable environments for vector mosquitoes. PLoS One. 2018;13: e0209625. pmid:30571764 - 18. Wilke ABB, Chase C, Vasquez C, Carvajal A, Medina J, Petrie WD, et al. Urbanization creates diverse aquatic habitats for immature mosquitoes in urban areas. Sci Rep. 2019;9: 15335. pmid:31653914 - 19. Samy AM, Elaagip AH, Kenawy MA, Ayres CFJ, Peterson AT, Soliman DE. Climate Change Influences on the Global Potential Distribution of the Mosquito Culex quinquefasciatus, Vector of West Nile Virus and Lymphatic Filariasis. PLoS One. 2016;11: e0163863. pmid:27695107 - 20. Kraemer MUG, Sinka ME, Duda KA, Mylne A, Shearer FM, Brady OJ, et al. The global compendium of Aedes aegypti and Ae. albopictus occurrence. Sci Data. 2015;2: 150035. pmid:26175912 - 21. Messina J, Brady O, Pigott D, Brownstein J, Hoen A, Hay S. A global compendium of human dengue virus occurrence. Sci Data. 2014;1: 140004. pmid:25977762 - 22. 
Rosenberg R, Lindsey NP, Fischer M, Gregory CJ, Hinckley AF, Mead PS, et al. Vital Signs: Trends in reported vectorborne disease cases—United States and Territories, 2004–2016. Morb Mortal Wkly Rep. 2018;67: 496–501. - 23. NOAA. National Weather Service Forecast. Available: https://w2.weather.gov/climate/index.php?wfo=mfl - 24. Wilke ABB, Vasquez C, Petrie W, Beier JC. Tire shops in Miami-Dade County, Florida are important producers of vector mosquitoes. PLoS One. 2019;14: e0217177. pmid:31107881 - 25. Wilke ABB, Vasquez C, Mauriello PJ, Beier JC. Ornamental bromeliads of Miami-Dade County, Florida are important breeding sites for Aedes aegypti (Diptera: Culicidae). Parasit Vectors. 2018;11: 283. pmid:29769105 - 26. United States Department of Agriculture (USDA). Urban Agriculture. Available at: https://www.nal.usda.gov/afsic/urban-agriculture. - 27. Palmer L. Urban agriculture growth in US cities. Nat Sustain. 2018;1: 5–7. - 28. Afrane YA, Lawson BW, Brenya R, Kruppa T, Yan G. The ecology of mosquitoes in an irrigated vegetable farm in Kumasi, Ghana: Abundance, productivity and survivorship. Parasites and Vectors. 2012;5: 1–7. - 29. Cissé G, Tschannen AB, Tanner M, Utzinger J, Vounatsou P, N'Goran EK, et al. Urban farming and malaria risk factors in a medium-sized town in Côte d'Ivoire. Am J Trop Med Hyg. 2006;75: 1223–1231. pmid:17172397 - 30. Wilke ABB, Caban-Martinez AJ, Ajelli M, Vasquez C, Petrie W, Beier JC. Mosquito adaptation to the extreme habitats of urban construction sites. Trends Parasitol. 2019;35: 607–614. pmid:31230997 - 31. Wilke ABB, Benelli G, Beier JC. Beyond frontiers: On invasive alien mosquito species in America and Europe. PLoS Negl Trop Dis. 2020;14: e0007864. - 32. United States Census Bureau. Income and Poverty in the United States 2016. Available at: https://www.census.gov/topics/income-poverty/income.html - 33. United States Census Bureau. United States Census American Community Survey.
Available at: https://www.census.gov/programs-surveys/acs/ - 34. Wilke ABB, Carvajal A, Medina J, Anderson M, Nieves VJ, Ramirez M, et al. Assessment of the effectiveness of BG-Sentinel traps baited with CO2 and BG-Lure for the surveillance of vector mosquitoes in Miami-Dade County, Florida. PLoS One. 2019;14: e0212688. pmid:30794670 - 35. Darsie RF Jr., Morris CD. Keys to the adult females and fourth instar larvae of the mosquitoes of Florida (Diptera, Culicidae). 1st ed. Vol. 1. Tech Bull Florida Mosq Cont Assoc (2000). - 36. Simpson EH. Measurement of Diversity. Nature. 1949;163: 688–688. - 37. Shannon CE. A mathematical theory of communication. Bell Syst Tech J. 1948;27: 379–423. - 38. Biology E. Estimating terrestrial biodiversity through extrapolation. Philos Trans R Soc London Ser B Biol Sci. 1994;345: 101–118. - 39. Colwell RK. Biodiversity: Concepts, patterns, and measurement. Communities Ecosyst. 2009; 257–264. - 40. Adrain JM, Westrop SR, Chatterton BDE, Ramsköld L. Silurian trilobite alpha diversity and the end-Ordovician mass extinction. Paleobiology. 2000;26: 625–646. - 41. Hammer Ø, Harper DATT, Ryan PD. PAST: Paleontological Statistics Software Package for Education and Data Analysis. Palaeontol Electron. 2001;4: 9. - 42. Buzas MA, Hayek LAC. SHE analysis for biofacies identification. J Foraminifer Res. 1998;28: 233–239. - 43. Morris EK, Caruso T, Buscot F, Fischer M, Hancock C, Maier TS, et al. Choosing and using diversity indices: Insights for ecological applications from the German Biodiversity Exploratories. Ecol Evol. 2014;4: 3514–3524. pmid:25478144 - 44. Wilke ABB, Wilk-da-Silva R, Marrelli MT. Microgeographic population structuring of Aedes aegypti (Diptera: Culicidae). PLoS One. 2017;12: e0185150. pmid:28931078 - 45. Multini LC, de Souza AL da S, Marrelli MT, Wilke ABB. Population structuring of the invasive mosquito Aedes albopictus (Diptera: Culicidae) on a microgeographic scale. PLoS One. 2019;14: e0220773. pmid:31374109 - 46. 
Medeiros-Sousa AR, Fernandes A, Ceretti-Junior W, Wilke ABB, Marrelli MT. Mosquitoes in urban green spaces: using an island biogeographic approach to identify drivers of species richness and composition. Sci Rep. 2017;7: 17826. pmid:29259304 - 47. Ajelli M, Moise IK, Hutchings TCSG, Brown SC, Kumar N, Johnson NF, et al. Host outdoor exposure variability affects the transmission and spread of Zika virus: Insights for epidemic control. PLoS Negl Trop Dis. 2017;11: e0005851. pmid:28910292
<urn:uuid:66e87623-f95d-4d17-92d9-f79dabac1aa3>
CC-MAIN-2021-21
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0230825
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00536.warc.gz
en
0.882328
8,076
3.609375
4
Background and Objectives for the Technical Brief

Clinicians, informaticians, policy makers, and professional organizations such as the American Academy of Pediatrics (AAP) have described a need for electronic health record (EHR) systems and information technology tools specific to pediatric health care.1-3 EHRs in pediatric care may increase patient safety through standardization of care and reduction of error and variability in the entry and communication of patient data.4-9 While EHRs may improve safety, implementation of general EHR systems that do not meet pediatric functionality and workflow demands could be dangerous.10 Some studies have described improvements in immunization rates,8,11,12 attention-deficit/hyperactivity disorder care,13 preventive care counseling for children and adolescents,14,15 and hepatitis C follow-up in infants.16 However, few studies of EHRs overall have been conducted in the pediatric setting, and the available research on outcomes has yielded inconsistent results, potentially due to the variability of the systems reviewed.17-39 While the Health Information Technology for Economic and Clinical Health (HITECH) Act has promoted adoption of EHRs by providers and hospitals, development and implementation of functionality to promote quality of pediatric care specifically has been inconsistent.40 Organizations including the Agency for Healthcare Research and Quality (AHRQ),41 Health Level 7 (HL7) International,42 and the AAP3 have described data formats and desired functionalities for pediatric EHRs.
The Children’s Electronic Health Record Format developed by AHRQ and the Centers for Medicare and Medicaid Services (CMS) provides a set of critical functionality, data elements, and other requirements for EHR systems that can address children’s health care needs, especially for those enrolled in Medicaid or the Children's Health Insurance Program (CHIP).41 A 2007 AAP report noted immunization management, growth tracking, medication dosing, patient identification, data norms, terminology, and privacy as important concerns/requirements for EHRs in pediatric populations.43 Recent recommendations from the Society for Adolescent Health and Medicine also urge that EHR design take into account “the special needs of adolescents for access to health information and the vigorous protection of confidentiality” and note that EHR developers should ensure that systems meet regulatory requirements and privacy needs.44 The degree to which currently available systems follow these recommendations is unknown, as are the relative importance of individual recommendations and their effectiveness in improving outcomes when EHRs include specific functionalities; data on these questions may be available in the published and grey literature and could form the basis for future research. “Meaningful Use” incentives associated with the HITECH Act have resulted in increased implementation and use of EHRs by pediatricians,45 but the degree to which pediatricians are actually using EHRs appropriate for or specific to pediatric practice appears to be minimal.
For example, suggested minimum requirements for a “pediatric-supportive” EHR include well-child visit tracking, support for anthropometric analysis such as growth charts, immunization tracking and forecasting, and support for weight-based drug dosing.43,46 Only 31 percent of pediatricians use an EHR with basic functionality, and only 14 percent use a fully functional EHR.48 Only 8 percent of pediatricians are using a fully functional1 EHR with pediatric functionality.49 The Children’s Electronic Health Record Format includes over 700 requirements pertaining to pediatric functionality. While the Format is expansive, the large number of requirements as well as the lack of prioritization may have had a paralyzing effect on vendors, who, confronted with Meaningful Use requirements, have not leveraged the Format to improve their products. Similarly, the HL7 requirements include over 100 unique pediatric items. Importantly, this Technical Brief will map consistencies across the published recommendations and analyze the degree to which an evidence base exists for individual or groups of functionalities. This will form the framework for creating the map of existing evidence and gaps.

1 During 2007-2009, NAMCS defined a fully functional EHR system as having all 14 functionalities in basic systems plus the following additional features: 1) medical history and follow-up notes; 2) drug interaction or contraindication warnings; 3) prescriptions sent to the pharmacy electronically; 4) computerized orders for lab tests; 5) test orders sent electronically; 6) reminders for guideline-based interventions; 7) highlighting of out-of-range lab values; and 8) computerized orders for radiology tests. The American Hospital Association survey on EHR adoption defines a comprehensive EHR as including the basic EHR core functionalities plus 14 additional functionalities implemented across all units (see Nakamura et al., 201345 and Jha et al., 200947).
This project will summarize the state of the literature on pediatric EHR functionality, including whether a set of functionalities arises in the literature as more important than others, and the degree to which these functionalities have been evaluated. Secondarily, the report will assess the availability and penetration of specific pediatric EHR functionalities in systems and identify challenges to implementation. Information about desired functionalities is likely to appear in descriptive reports and in grey-literature documents. We do not anticipate a significant body of comparative literature assessing the potential benefits of pediatric EHR use. Thus, the technical brief format is ideal.

Issues and Challenges in the Evidence Base

A significant challenge in this brief is likely to be the breadth of pediatric practice, including subgroups and special populations requiring specific elements of care that may merit specific EHR functionalities, all of which may diffuse agreement on key pediatric EHR features. We anticipate categorizing findings by subgroups or populations as appropriate. Another challenge is that requirements and EHRs for inpatient and outpatient settings may differ and be represented differently in the literature. For the most part, inpatient pediatric functionalities are subsets of outpatient pediatric functionalities and inpatient adult functionalities. Our focus will be on functionalities for pediatrics primarily in the outpatient environment, and we will exclude functionalities also required by adults. As such, we will include functionalities that are useful in both the outpatient and inpatient environments, but will exclude functionalities that are exclusive to the inpatient environment. Similarly, individual reports may address specific elements of EHRs such as order entry or electronic prescribing.
Again, we will clearly articulate the setting and populations associated with existing recommendations and will identify crosscutting elements where possible. Stakeholder groups such as the AAP have published numerous position papers and recommendations, which will provide important themes and crosscutting approaches. In providing a complete view of the state of pediatric EHR use, it may also be difficult to compare and document the components of commercial EHR systems. Many vendors have contractual “gag clauses” that prevent users and purchasers of their software from discussing problems or even sharing screens. As a result, deficiencies may be underreported, which we will try to address through use of the AAP EHR review site, which provides a collection of individual EHR reviews by pediatricians. As expected given the relatively recent increase in adoption of pediatric EHRs, few RCTs of their effects likely exist, and the field is developing rapidly. A preliminary review of the literature suggests that some studies assessing the effects of pediatric health information technology on procedures such as immunizations and medication administration have been published and will provide emerging data on outcomes. Questions of applicability will therefore be important to address if EHRs are evaluated in very specific settings. We will focus on the functionalities, needs, and desiderata uniquely relevant to pediatric care and beyond those functionalities available for adult care. Some functionality required for pediatric care is also critical for aspects of adult care, and we will include those functionalities, but focus on their use in pediatrics (e.g., immunization tracking, which is a key aspect of children’s care as well as that of pregnant women and the elderly). We propose Guiding Questions (GQs) that focus specifically on EHR tools and functionalities to support safe healthcare delivery for children. 
The need for and the benefits of core functionality are well accepted; therefore, the GQs will examine functionalities that have been or are being evaluated and can be disseminated and replicated by our end users. Sub-questions may evolve slightly over the course of the research as the researchers gain a deeper understanding of the topic. Other considerations include the degree of complexity for vendors and for users.

GQ1. Description of EHRs
- Are there functionalities that have been identified in the literature and feature more prominently than others as potentially important for improving children’s health?

GQ2. Description of the context in which EHRs are implemented
- What is the potential value of pediatric-specific functionalities in the context of care transition, specifically from newborn care to pediatric primary care, from pediatric primary care to pediatric specialist care, and from pediatric primary care to adolescent care?
- Are certain pediatric-specific functionalities beneficial for a pediatrician conducting her work, including sick and well-child visits? If so, does this vary by health care setting (e.g., primary care office, specialty care office, school health, and alternative care settings) or by type of visit (e.g., preventive vs. acute care)?
- What are the challenges to implementing specific functionalities? Are some harder than others to implement by
  - i. vendors?
  - ii. pediatric providers?

GQ3. Description of the existing evidence
- Is there any evidence that using an EHR adapted for the specific needs of pediatric providers compared with using a “regular” EHR or not using an EHR at all produces:
  - i. Better quality, including safety and cost outcomes for patients?
  - ii. Improved workflow or job satisfaction for providers?
- Which pediatric-specific functionalities influence:
  - i. Patient outcomes, including:
    - a. Safety?
    - b. Quality?
    - c. Cost?
    - d. Equity?
    - e. Standardization of care?
    - f. Efficiency?
  - ii. The ability of a pediatric provider to conduct work within the EHR?
  - iii. Improvement of workflow and provider satisfaction?
  - iv. Involvement of patients and families (including their education and shared decision making)?
- How does testability and usability of core functionalities promote or impede dissemination and future development of pediatric EHRs?

A. Discussions with Key Informants

The range of settings in which the pediatric EHR is intended for use complicates this project. We will engage stakeholders with multiple perspectives to help elucidate the decisional dilemmas that led to the project. Key informants will help to identify key issues related to definitions, clinical areas, population, implementation, resources, and future research. Following approval by AHRQ of the completed Disclosure of Interest forms from key informants, we will schedule one-hour conference calls with six to eight key informants to review the preliminary Guiding Questions and discuss the project parameters. Because the literature may not be optimally indexed on this subject, the key informants will also help to ensure that the search results capture the research landscape. We will record and transcribe the call discussion and distribute a call summary to call participants. Discussions with key informants may be used to refine the Guiding Questions and will inform the responses to all of the Guiding Questions.

B. Grey Literature Search

Technical briefs combine contextual information from key informants with a search of the grey literature and the published literature. We anticipate that the grey literature is likely to yield model programs and example approaches. Examples of sources of grey literature include government websites, clinical trial databases, trade publications, and meeting abstracts.
We will search for information from health and hospital systems that may have developed criteria for pediatric health information applications. We will work with the Scientific Resource Center to contact organizations, individuals, and vendors directly to request unpublished data or reports. We will be careful in our presentation of the grey literature to identify it as such, given that positive studies are more likely to be provided than negative or neutral ones. The grey literature is likely to yield example approaches, policy statements, and proposed models. For the grey literature search, we will use Google and, as a starting point, several known resources, including the AAP’s Child Health Informatics Center and HL7 sites. The results of the grey literature searches will inform responses to all Guiding Questions, particularly Guiding Questions 2 and 4.

C. Published Literature Search

We will search the published literature for any studies that evaluate systems or models. We will use indexing terms and keywords to search the published literature for reports of EHR tools and functionalities as well as child health needs and related data elements. (See Appendix A for preliminary search strategies.) An experienced library scientist who is familiar with all aspects of the technical brief protocol will examine the selection of databases and all search strategies. We will review the reference lists of retrieved publications for other potentially relevant publications missed by the search strategies. We will hand search recent issues of core journals including the Journal of the American Medical Informatics Association, BMC Medical Informatics and Decision Making, Journal of Biomedical Informatics, Pediatrics, Applied Clinical Informatics, and Methods of Information in Medicine. The search will be updated while the draft brief is being reviewed to identify newly published relevant information.
We will incorporate the results from the literature update into the technical brief prior to submission of the final report. The results of the published literature searches will inform responses to all Guiding Questions.

D. Inclusion and Exclusion

We will use pre-specified criteria to screen the full text of the search results for inclusion. We will develop a simple categorization scheme for coding the reasons for exclusion from the report. We will use EndNote® to record and track the disposition of references (from the grey literature and published literature searches). We will focus on mapping existing evidence for health improvements, prioritizing functionalities, and identifying gaps.

Population: We will limit to the pediatric outpatient population, excluding data on adult functionality unless it is critical to the pediatric context and not considered core EHR functionality.

Intervention/Technology: We will not limit to specific functionalities, but we expect to find more information supporting certain functionalities (e.g., immunization, growth and developmental screening, weight-based and surface area dosing).

Outcomes: We will seek pediatric health outcomes but will include evidence for functionalities that improve workflow and process outcomes (e.g., reduced wait times). Indirect and process outcomes will provide meaningful information.

Timeframe: We will include literature published in or after 1999, capturing the period of accelerated EHR implementation.

Setting: There is significant overlap between inpatient and outpatient needs, but some inpatient needs (e.g., tracking radiation exposure) are also relevant to adult care and less relevant to the outpatient setting. Certified EHR technology (CEHRT) was not intentionally designed for outpatient settings.
We will evaluate core concepts and functionalities that can support specialty and primary care and promote interoperability with other HIT applications in pediatric outpatient settings, and we will exclude functionalities also required in adult care.

Designs: We will allow randomized controlled trials, cohort, and pre-post study designs because these are likely to make up the majority of studies and, at this point in the field, may provide clues about where further study should be pursued. The inclusion/exclusion criteria for the evaluation studies are summarized in Table 1.

Other: It is not necessary to assess the availability of specific pediatric EHR systems; it is more important to map consistencies across the published recommendations on pediatric EHR functionalities and analyze the degree to which an evidence base exists for individual or groups of functionalities. We will include information addressing issues of testability, integration, and usability. We will include reports on all types of EHR systems, including but not limited to commercial, homegrown, and hybrid systems.

Table 1:
|Study population|Pediatric, outpatient|
|Publication languages|English only|
|Admissible evidence|Study design|

Data Organization and Presentation

A. Information Management

We will develop data collection forms to record and summarize study design, methods, and results. We will summarize data from the data abstraction forms in tables.
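The pre-specified screening criteria described above (pediatric outpatient population, English only, published in or after 1999, admissible study designs) could be encoded as a small screening function. A minimal sketch follows; the field names, exclusion codes, and function names are illustrative assumptions, not part of the protocol.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative codes for the exclusion-reason categorization scheme the
# protocol describes; the actual scheme would be developed by the review team.
EXCLUSION_CODES = {
    "POP": "Not a pediatric outpatient population",
    "LANG": "Not published in English",
    "DATE": "Published before 1999",
    "DESIGN": "Not an RCT, cohort, or pre-post design",
}

ALLOWED_DESIGNS = {"rct", "cohort", "pre-post"}

@dataclass
class Reference:
    """One screened citation (hypothetical record structure)."""
    pediatric_outpatient: bool
    language: str
    year: int
    design: str

def screen(ref: Reference) -> Optional[str]:
    """Return an exclusion code, or None if the reference is included."""
    if not ref.pediatric_outpatient:
        return "POP"
    if ref.language.lower() != "english":
        return "LANG"
    if ref.year < 1999:
        return "DATE"
    if ref.design.lower() not in ALLOWED_DESIGNS:
        return "DESIGN"
    return None
```

For example, `screen(Reference(True, "English", 2007, "cohort"))` returns `None` (included), while a 1995 publication would be coded `"DATE"`.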
The dimensions (i.e., areas of special focus, or the columns) of each table will vary by guiding question but will include the following when reported: a) general information such as study design, year, setting, geographic location, and duration; b) population information including patient indication or inclusion criteria and details about the clinical environments such as the number and types of participating practices or providers; c) characteristics of the EHR and/or the specific functionalities, and, if included, information and characteristics of the comparator (e.g., administrative database, paper records, health information exchange); and d) key contextual information (e.g., implementation, documentation, duration of followup) pertinent to the identification of facilitators and barriers to EHR functionality. Among other data, we will include any available information on prevalence and variation in practice.

B. Data Presentation

We will compile all of the information from the published and grey literature, with the ultimate goal of identifying functionalities that have been evaluated and example programs in those categories, as well as approaches that warrant further evaluation. We will characterize the information to include functionalities linked to outcome and workflow and aspects of testability and usability. The horizon scan of current practice and research will also be presented in summary tables and in the written report. If information from individual health care systems or hospital systems is available, we will capture and catalogue the data that are meaningful to pediatric-specific features and functionalities for an EHR.

References

- Policy Statement--Using personal health records to improve the quality of health care for children. Pediatrics. 2009 Jul;124(1):403-9. PMID: 19564327 - Shiffman RN, Spooner SA, Kwiatkowski K, et al.
Information technology for children's health and health care: report on the Information Technology in Children's Health Care Expert Meeting, September 21-22, 2000. J Am Med Inform Assoc. 2001 Nov-Dec;8(6):546-51. PMID: 11687562 - Kim GR, Lehmann CU. Pediatric aspects of inpatient health information technology systems. Pediatrics. 2008 Dec;122(6):e1287-96. PMID: 19047228 - Lehmann CU, Kim GR, Gujral R, et al. Decreasing errors in pediatric continuous intravenous infusions. Pediatr Crit Care Med. 2006 May;7(3):225-30. PMID: 16575355 - Lehmann CU, Kim GR. Computerized provider order entry and patient safety. Pediatr Clin North Am. 2006 Dec;53(6):1169-84. PMID: 17126689 - Kim GR, Lawson EE, Lehmann CU. Challenges in reusing transactional data for daily documentation in neonatal intensive care. AMIA Annu Symp Proc. 2008:1009. PMID: 18998993 - Kim GR, Chen AR, Arceci RJ, et al. Error reduction in pediatric chemotherapy: computerized order entry and failure modes and effects analysis. Arch Pediatr Adolesc Med. 2006 May;160(5):495-8. PMID: 16651491 - Bundy DG, Persing NM, Solomon BS, et al. Improving immunization delivery using an electronic health record: the ImmProve project. Acad Pediatr. 2013 Sep-Oct;13(5):458-65. PMID: 23726754 - Simpson RL. Neither seen nor heard: why we need a child-friendly electronic health record. Nurs Adm Q. 2009 Jan-Mar;33(1):78-83. PMID: 19092530 - Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2005 Dec;116(6):1506-12. PMID: 16322178 - Fiks AG, Grundmeier RW, Biggs LM, et al. Impact of clinical alerts within an electronic health record on routine childhood immunization in an urban pediatric population. Pediatrics. 2007 Oct;120(4):707-14. PMID: 17908756 - Fiks AG, Hunter KF, Localio AR, et al. Impact of electronic health record-based alerts on influenza vaccination for children with asthma. Pediatrics. 
2009 Jul;124(1):159-69. PMID: 19564296 - Co JP, Johnson SA, Poon EG, et al. Electronic health record decision support and quality of care for children with ADHD. Pediatrics. 2010 Aug;126(2):239-46. PMID: 20643719 - Rand CM, Blumkin A, Szilagyi PG. Electronic health record use and preventive counseling for US children and adolescents. J Am Med Inform Assoc. 2014 Feb;21(e1):e152-6. PMID: 24013091 - Adams WG, Mann AM, Bauchner H. Use of an electronic medical record improves the quality of urban pediatric primary care. Pediatrics. 2003 Mar;111(3):626-32. PMID: 12612247 - Abughali N, Maxwell JR, Kamath AS, et al. Interventions using electronic medical records improve follow up of infants born to hepatitis C virus infected mothers. Pediatr Infect Dis J. 2014 Apr;33(4):376-80. PMID: 24401869 - Hsiao CJ, Marsteller JA, Simon AE. Electronic medical record features and seven quality of care measures in physician offices. Am J Med Qual. 2014 Jan-Feb;29(1):44-52. PMID: 23610232 - Fairley CK, Vodstrcil LA, Huffam S, et al. Evaluation of Electronic Medical Record (EMR) at large urban primary care sexual health centre. PLoS One. 2013;8(4):e60636. PMID: 23593268 - Babbott S, Manwell LB, Brown R, et al. Electronic medical records and physician stress in primary care: results from the MEMO Study. J Am Med Inform Assoc. 2014 Feb;21(e1):e100-6. PMID: 24005796 - Featherstone I, Keen J. Do integrated record systems lead to integrated services? An observational study of a multi-professional system in a diabetes service. Int J Med Inform. 2012 Jan;81(1):45-52. PMID: 21962435 - Petroll AE, Phelps JK, Fletcher KE. Implementation of an electronic medical record does not change delivery of preventive care for HIV-positive patients. Int J Med Inform. 2014 Apr;83(4):273-7. PMID: 24440204 - Imperiale TF, Sherer EA, Balph JA, et al. Provider acceptance, safety, and effectiveness of a computer-based decision tool for colonoscopy preparation. Int J Med Inform. 2011 Oct;80(10):726-33. 
PMID: 21920302 - Poon EG, Wright A, Simon SR, et al. Relationship between use of electronic health record features and health care quality: results of a statewide survey. Med Care. 2010 Mar;48(3):203-9. PMID: 20125047 - Lau F, Kuziemsky C, Price M, et al. A review on systematic reviews of health information system studies. J Am Med Inform Assoc. 2010 Nov-Dec;17(6):637-45. PMID: 20962125 - Romano MJ, Stafford RS. Electronic health records and clinical decision support systems: impact on national ambulatory care quality. Arch Intern Med. 2011 May 23;171(10):897-903. PMID: 21263077 - Walsh MN, Yancy CW, Albert NM, et al. Electronic health records and quality of care for heart failure. Am Heart J. 2010 Apr;159(4):635-42.e1. PMID: 20362723 - Holroyd-Leduc JM, Lorenzetti D, Straus SE, et al. The impact of the electronic medical record on structure, process, and outcomes within primary care: a systematic review of the evidence. J Am Med Inform Assoc. 2011 Nov-Dec;18(6):732-7. PMID: 21659445 - Linder JA, Ma J, Bates DW, et al. Electronic health record use and the quality of ambulatory care in the United States. Arch Intern Med. 2007 Jul 9;167(13):1400-5. PMID: 17620534 - Herrin J, da Graca B, Aponte P, et al. Impact of an EHR-Based Diabetes Management Form on Quality and Outcomes of Diabetes Care in Primary Care Practices. Am J Med Qual. 2014 Jan 7PMID: 24399633 - Herrin J, da Graca B, Nicewander D, et al. The effectiveness of implementing an electronic health record on diabetes care and outcomes. Health Serv Res. 2012 Aug;47(4):1522-40. PMID: 22250953 - Hunt JS, Siemienczuk J, Gillanders W, et al. The impact of a physician-directed health information technology system on diabetes outcomes in primary care: a pre- and post-implementation study. Inform Prim Care. 2009;17(3):165-74. PMID: 20074429 - Litvin CB, Ornstein SM, Wessell AM, et al. 
Use of an electronic health record clinical decision support tool to improve antibiotic prescribing for acute respiratory infections: the ABX-TRIP study. J Gen Intern Med. 2013 Jun;28(6):810-6. PMID: 23117955 - Kern LM, Barron Y, Dhopeshwarkar RV, et al. Electronic health records and ambulatory quality of care. J Gen Intern Med. 2013 Apr;28(4):496-503. PMID: 23054927 - Kern LM, Barron Y, Dhopeshwarkar RV, et al. Health information exchange and ambulatory quality of care. Appl Clin Inform. 2012;3(2):197-209. PMID: 23646072 - King J, Patel V, Jamoom EW, et al. Clinical benefits of electronic health record use: national findings. Health Serv Res. 2014 Feb;49(1 Pt 2):392-404. PMID: 24359580 - Reed M, Huang J, Graetz I, et al. Outpatient electronic health records and the clinical care and outcomes of patients with diabetes mellitus. Ann Intern Med. 2012 Oct 2;157(7):482-9. PMID: 23027319 - Samal L, Linder JA, Lipsitz SR, et al. Electronic health records, clinical decision support, and blood pressure control. Am J Manag Care. 2011 Sep;17(9):626-32. PMID: 21902448 - Garrido T, Jamieson L, Zhou Y, et al. Effect of electronic health records in ambulatory care: retrospective, serial, cross sectional study. Bmj. 2005 Mar 12;330(7491):581. PMID: 15760999 - Adler-Milstein J, Salzberg C, Franz C, et al. Effect of electronic health records on health care costs: longitudinal comparative evidence from community practices. Ann Intern Med. 2013 Jul 16;159(2):97-104. PMID: 23856682 - Spooner SA. We are still waiting for fully supportive electronic health records in pediatrics. Pediatrics. 2012 Dec;130(6):e1674-6. PMID: 23166347 - Agency for Healthcare Research and Quality. Model Children's EHR Format. Available at http://www.ahrq.gov/policymakers/chipra/ehrformatfaq.html - HL7. HL7 EHR Child Health Functional Profile (CHFP), Release 1. Available at http://www.hl7.org/implement/standards/product_brief.cfm?product_id=15 - Spooner SA. 
Special requirements of electronic health record systems in pediatrics. Pediatrics. 2007 Mar;119(3):631-7. PMID: 17332220 - Gray SH, Pasternak RH, Gooding HC, et al. Recommendations for electronic health record use for delivery of adolescent health care. J Adolesc Health. 2014 Apr;54(4):487-90. PMID: 24656534 - Nakamura MM, Harper MB, Jha AK. Change in adoption of electronic health records by US children's hospitals. Pediatrics. 2013 May;131(5):e1563-75. PMID: 23589808 - Leu MG, O'Connor KG, Marshall R, et al. Pediatricians' use of health information technology: a national survey. Pediatrics. 2012 Dec;130(6):e1441-6. PMID: 23166335 - Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009 Apr 16;360(16):1628-38. PMID: 19321858 - Centers for Disease Control and Prevention. Ambulatory Health Care Data. Available at http://www.cdc.gov/nchs/ahcd.htm - Lehmann CU. Unpublished communication. 2014.

Definition of Terms

Summary of Protocol Amendments

In the event of protocol amendments, the date of each amendment will be accompanied by a description of the change and the rationale.

Within the Technical Brief process, Key Informants serve as a resource to offer insight into the clinical context of the technology/intervention, how it works, how it is currently used or might be used, and which features may be important from a patient or policy standpoint. They may include clinical experts, patients, manufacturers, researchers, payers, or individuals with other perspectives, depending on the technology/intervention in question. Differing viewpoints are expected, and all statements are crosschecked against available literature and statements from other Key Informants. Information gained from Key Informant interviews is identified as such in the report.
Key Informants do not do analysis of any kind, nor do they contribute to the writing of the report, and they have not reviewed the report, except as given the opportunity to do so through the public review mechanism. Key Informants must disclose any financial conflicts of interest greater than $10,000 and any other relevant business or professional conflicts of interest. Because of their unique clinical or content expertise, individuals are invited to serve as Key Informants, and those who present with potential conflicts may be retained. The Task Order Officer and the Evidence-based Practice Center work to balance, manage, or mitigate any potential conflicts of interest identified.

Peer Reviewers

Peer reviewers are invited to provide written comments on the draft report based on their clinical, content, or methodologic expertise. Peer review comments on the preliminary draft of the report are considered by the Evidence-based Practice Center in preparation of the final draft of the report. Peer reviewers do not participate in writing or editing of the final report or other products. The synthesis of the scientific literature presented in the final report does not necessarily represent the views of individual reviewers. The dispositions of the peer review comments are documented and will be published three months after the publication of the evidence report. Potential reviewers must disclose any financial conflicts of interest greater than $10,000 and any other relevant business or professional conflicts of interest. Invited Peer Reviewers may not have any financial conflict of interest greater than $10,000. Peer reviewers who disclose potential business or professional conflicts of interest may submit comments on draft reports through the public comment mechanism.
Preliminary Search Strategies (updated: 7/18/2014)

| # | Search terms | Search results |
|---|--------------|----------------|
| 1 | ("pediatrics"[mh] OR "infant"[mh] OR "Child"[mh] OR "adolescent"[mh] OR "child health services"[mh] OR "intensive care units, pediatric"[mh] OR "hospitals, pediatric"[mh]) | 2843532 |
| 2 | (child*[tiab] OR paediatr*[tiab] OR pediatr*[tiab] OR adolescent*[tiab] OR neonat*[tiab] OR infant*[tiab]) | 1529072 |
| 3 | ("Medical records systems, computerized"[mh] OR "decision support systems, clinical"[mh]) | 28448 |
| 4 | (("cpoe"[tiab] OR "computerized physician order entry"[tiab] OR "computerized order entry"[tiab] OR "computer order entry"[tiab] OR "cdss"[tiab] OR "clinical decision support systems"[tiab]) OR (electronic[tiab] AND (health record*[tiab] OR medical record*[tiab]))) | 13323 |
| 5 | #1 OR #2 | 3233572 |
| 6 | #3 OR #4 | 34922 |
| 7 | #5 AND #6 | 3270 |

Abbreviations: mh = Medical Subject Heading; tiab = title/abstract word.

Note: Using the "medical order entry system" subject heading instead of "medical records systems, computerized" retrieves 2165 records. Using the broader term, "medical records systems, computerized", which encompasses "medical order entry system" and "electronic health records", retrieves an additional 1105 records, many of which may not be relevant to this topic. Cataloguers use the most specific heading available; however, in this case the broader term "medical records systems, computerized" was introduced in 1991, more than a decade before the more specific headings "medical order entry system" and "electronic health records".

| # | Search terms | Search results |
|---|--------------|----------------|
| 1 | (pediatric* or child* or infant* or paediatric* or neonat* or adolescen*).mp | 3024185 |
| 2 | ("computerized provider order entry" or "cpoe" or "electronic health" or "EHR" or "clinical decision support" or "CDS" or "CDSS").mp | 18384 |
| 3 | #1 AND #2 | 1475 |
| 4 | Limits: NOT Medline, Publication Date: 2000-Current | 83 |

| Search terms | Search results |
|--------------|----------------|
| (APT/1 and spec/((pediatric or child or neonate) and (health or medical) and (electronic or computerized) and (function or standard or functionality or functionalities)) and APD/1/1/2000->6/1/2014) | 5511 |

Abbreviations: APT = Application Type; APD = Application Date; Spec = Description/Specification.

Notes: Limited to utility patents (APT/1). The USPTO issues three types of patents: utility, design, and plant patents. The Office of the National Coordinator for Health Information Technology (ONC) notes that "only utility patents, which include 'process' or 'method' patents that outline a way for performing a function or achieving an outcome, are significantly relevant" to the area of HIT. (From "ONC's Thoughts on Patents, Health IT, and Meaningful Use".)
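To illustrate how the strategy's term groups combine, here is a minimal sketch, not part of the protocol's actual tooling, that assembles a PubMed-style boolean query from a population group and a technology group (mirroring searches #2, #4, and #7 above). The helper and variable names are hypothetical, and the technology group is abridged for brevity.

```python
def or_group(terms):
    """Join search terms into a single parenthesized OR group."""
    return "(" + " OR ".join(terms) + ")"

# Population terms (search #2): pediatric title/abstract keywords.
pediatric_tiab = [
    "child*[tiab]", "paediatr*[tiab]", "pediatr*[tiab]",
    "adolescent*[tiab]", "neonat*[tiab]", "infant*[tiab]",
]

# Technology terms (search #4, abridged): EHR/CPOE/CDSS keywords.
ehr_tiab = [
    '"cpoe"[tiab]', '"computerized physician order entry"[tiab]',
    '"cdss"[tiab]', '"clinical decision support systems"[tiab]',
]

# Search #7 ANDs the population group with the technology group.
query = or_group(pediatric_tiab) + " AND " + or_group(ehr_tiab)
print(query)
```

The same pattern scales to the full strategy: each numbered search is an OR group, and the final line intersects the two concept groups with AND.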
by Joseph Exell

God has many ways by which He prepares His servants for the doing of His work--many schools to which He sends them. But there is no teacher whom He uses more frequently than the stern teacher whose name is Sorrow. He makes His children acquainted with bitter trial and privation and loss; He “brings them into the wilderness,” as Hosea says, in order that among the barren rocks and sands He may speak to their hearts; He imparts to them their wisdom and their strength through the discipline of sacrifice and pain. Moses is sent to the deserts of Midian, that he may be accustomed there to the endurance of difficulty and opposition, and that in these lonely solitudes, where he is shut out from intercourse with his fellow-men, he may learn to hold close fellowship with his Divine Master and King. Paul writes his loftiest and profoundest letters from the prison-house of Nero, where for the hope of Israel he is bound with a chain. It was in the school-house of sorrow that God fashioned the prophet Hosea into fitness for his life-task. I. The nature of the mission which God gave Hosea to fulfil--He was a prophet of the Northern Kingdom--a preacher to Israel rather than to Judah. Amos had the same sphere of labour assigned him. But Amos was himself a native of Judea, although his public career, so far as we know it, was confined to the North. He came to Bethel and Samaria, a stranger from the wilderness of Tekoa away in the South--a stranger who had been charged to deliver a terrible message of denunciation and of impending punishment. He carried out his commission, and then he withdrew again to his own land and people.
Having spent a few stirring and memorable days in the guilty cities of Israel--having seen their violence and immorality and forgetfulness of God, and lifted up his voice like a trumpet against them--he went back to the silent pastures of the desert, to write down in quietness the story of what he had said and done at the Lord’s commandment, and to live and die far from the scenes of his brief prophetic labours. It was different altogether with Hosea, the son of Beeri. That he was himself a child of that evil Northern land with whose inhabitants he pleaded on behalf of God is evident to every one who reads his book. Only one born and brought up in the very midst of the sinful people whose disobedience he bewails, linked to them by the tenderest bonds of family affection and national feeling, could pity them so truly, and yearn over them with so fond a love, and entreat them with such a beseeching and persistent earnestness to return to the Lord. Then, too, throughout his prophecy there are constantly recurring allusions to places in the territory of the ten tribes, to Mount Tabor, and the streams of Gilead, and the idolatrous shrines of Gilgal, and the splendid woods of Lebanon--references which speak of the writer’s perfect familiarity with the scenery of the Northern Kingdom. It was indeed a goodly land. The fairest and grandest regions within the entire country were to be found in it. Its plains and forests and rivers were nobler by far than those of Judah. And Hosea knew it well, and was proud of its beauty, and grieved much that men and women to whom God had given a home so happy and so richly dowered should yet be unmindful of Him and rebel against Him. His religion, we may even venture to say, was colored to some extent by the pleasantness and geniality of his natural surroundings. It had in it more of freedom and of trust and of joy than that of the dwellers in the South, where nature was less kindly and her moods more severe.
If it had not been that his heart was kept in perpetual sadness by the contemplation of his people’s sin his would certainly have been a very glad and peace-bringing faith. A native of this attractive land, and gifted himself with a temperament naturally joyous, Hosea was nevertheless called to work that plunged him into gloom. His lot was cast in a period when his country had to contend with many fears and fightings from without, and when it was full of utter corruption within. His prophetic activity extended over a long time, and in this respect too he stands in sharp contrast to Amos, whose ministry was but an episode in his life and was quickly fulfilled. All his days he seems to have preached righteousness and temperance and judgment to come in the hearing of men who paid little heed to his message. His labours stretched over a series of terrible years, during which he saw his people sink from one depth of degradation and sorrow to other and lower depths. He began to speak in God’s name while Jeroboam II., the greatest of the rulers of Israel, was still on the throne. But the reign of this monarch was drawing to its close, and the deluge came when he was gone. Amos had, indeed, found much to condemn in Israel even in the days of Jeroboam; but, bad as things undoubtedly were then, society was compact and pure compared with what it became after the king’s death. A long interregnum followed, and for years no governor guided the affairs of the commonwealth. Then one sovereign after another--Zechariah, Shallum, Menahem, Pekahiah--mounted the throne, placed on it like the later Roman emperors by the rough soldiers of the palace, and each of them permitted to rule for only a few months. It was in the midst of this unquiet time that Hosea addressed his countrymen. With these changes in the state he was familiar. And, while the government of the land was so unsettled, its inhabitants went from stage to stage in the evil ways of sin. 
They seemed to have lost all sense of shame. They had cast every restraining influence to the winds. There was no moral energy in their hearts, and no self-control in their lives. Few prophets draw such pictures of prevalent ungodliness as the son of Beeri does. “Whoredom and wine and new wine,” he tells us, “took away the understanding” of his people. “False swearing and killing, and stealing, and committing adultery broke out, and blood touched blood,” one dark crime treading close on the heels of another. If princes and subjects had only been wise, what a glorious history a land so favoured by heaven might well have had! And now the strength of the nation was spent; it had fought and finished an evil fight; its powers were wasted; there was no great future in store for it--only a future of misery during which it would reap as it had sown; it was already old. “Strangers have devoured the strength of Israel, and he knows it not,” Hosea mourned, “yea, grey hairs are here and there upon him, yet he knows not.” The enthusiasm and the possibilities of youth were gone for ever; the weakness of age had come long before the time; and so blind were the people that they were unconscious of their sad decay. Hosea was the prophet of the decline and fall of the Northern Kingdom. He has been called “the Jeremiah of Israel,” and the name is a good one, for he preached when his nation was tottering to its ruin, as Jeremiah preached in the troublous days when the sun of Judah was about to set in clouds and darkness. God raised him up to speak plain words to his fellow-countrymen about their sin, and to predict the heavy doom which such sin must bring on the wrong-doers. This was a sore and bitter duty--was it not?--for one who had in him a very tender heart, and who loved his people with an overmastering affection. What wonder was it that he should resemble Jeremiah in another characteristic also--in this, that he was scarcely able to utter his message for weeping?
The herdsman of Tekoa might journey from his southern home to Bethel, and proclaim against it God’s exceeding great and fearful woe; and his voice might never once so much as falter while he thundered out his message of death; he might show himself stern and inexorable from first to last. It was little marvel that he should be so unflinching; he was himself an alien from the commonwealth of Israel. But it was impossible for Hosea to fulfil his task in such a fashion. For they were his brothers and sisters whose transgression he was bidden to expose, and whose punishment he had to foretell. He had grown up among them. He was bound to them by the strongest ties. He did not hide or extenuate the tidings of wrath which Jehovah had commanded him to publish abroad; he was too faithful to do that; but when he tried to announce them he was almost overcome by his emotion. His prophecy is a succession of sighs and sobs. Each verse is “one heavy toll in a funeral knell.” That was the mission entrusted to Hosea. II. But if the task itself seemed painful in the extreme, the prophet was made ready for executing it by a discipline which was more painful still.--It was through sore experiences in his own history that he was moulded into God’s messenger and representative. What these experiences were he explains in the opening chapters of his book. This, then, is the miserable recital. Some time in the reign of Jeroboam II., when the nation was already far from perfect in God’s sight, and yet was not so confirmed in its wickedness as it afterwards became, Hosea married Gomer, the daughter of Diblaim. He hoped, we may be sure, that she would prove a good and loyal wife to him; for the supposition of some expositors that Jehovah commanded His prophet to unite himself with a woman who was already known to be of impure character is absurd and revolting.
But the trustfulness with which Hosea regarded his spouse was not justified in actual fact: she showed herself unfaithful to him; she left his roof to go after other lovers, and became the mother of children born in infidelity. Was it not the most grievous wound which a man could receive? On Ezekiel, another in the goodly fellowship of the prophets, a great sorrow fell once. His wife, the desire of his eyes, was taken from him with a stroke. He spake unto the people in the morning, and at even she died, and God bade him refrain from every token of mourning, that he might be a sign to the nation of the Jews. But death, though it overwhelms us with grief, is not so dreadful as dishonour; and they were deeper floods of trouble into which Hosea went down with his naked feet than any which Ezekiel knew. And yet, despite Gomer’s disloyalty, he loved her still. His love was that master-feeling which the Song of Solomon calls “strong as death” and “obstinate as the grave.” He acknowledged her three children for his own, and gave them names, to each of which a prophetic lesson was attached. And by and by he resolved that, if it were possible, he would win her back to her old allegiance. He went after her, and found her in a state of utter misery, apparently sold as a slave, for he had to buy her to himself “for fifteen pieces of silver, and for a homer of barley, and a half homer of barley.” So she came to dwell once more under her husband’s roof, yet not to dwell there just as she had done formerly. Things could not go on as though there had been no faithlessness on her part. For many days the prophet had to watch over his wife, secluding her from temptation, exercising a wise carefulness and jealousy. It was with Hosea just as it was with the Arthur of our literature. Gomer was untrue like Guinevere, and her conduct pierced her lord’s heart and cut him to the quick. 
But the prophet was as compassionate and long-suffering, as changeless in his affection, as willing to pardon, as the blameless king. And in the end there was a reconciliation. If the past could not be cancelled quite, it was at least forgiven. The poor foolish wanderer returned to her loyalty. The truant was welcomed home. These are the details of Hosea’s home life, so far as they are related in the first and third chapters of his prophecy. It is difficult to understand why some interpreters should have denied the literal and historical significance of the account, and should have resolved the story into nothing more than parable or allegory. The whole narrative is given with perfect simplicity, and yet with touching reserve. It has an air of truthfulness about every one of its particulars. It appears only too real. But many of us may be inclined to ask why the prophet should have said anything about this great struggle and bitterness of his life. Ought he not to have kept such a matter with sacred care from the view of the world? Was it not one of those secret things about which God only should have been told? He had a very sufficient reason for the disclosure. He wished to show how it was that he became a prophet, and to explain why he was led to those conceptions which he had formed of the conduct of Israel and of the character of God. It was from his own history that he learned at once the disobedience of his native land and the long-suffering pity of its Lord. He saw that the shame which had blighted his home was a representation in miniature of that shame which the seed of Jacob, whom Jehovah had espoused to Himself, had cruelly inflicted on Him; that the grief which he felt over the erring Gomer--a grief without an element of anger in it--was symbolic of God’s grief over His backsliding nation; that the Divine heart was but his own human heart, with all its feelings deepened and all its emotions intensified.
As Hosea passed through the sad troubles of his household, his eyes were opened, and the thought dawned on him that his experience was only a type of God’s experience in His dealings with His people. His sufferings lifted him into fellowship with God, taught him to think as God thought, gave him a sympathetic insight into God’s heart; and so he came out of the fires God’s prophet and spokesman. III. And now let us inquire how Hosea performed the work for which he had been trained by so terrible a discipline--how he made known God’s message to Israel. His words are strong and passionate. His heart seems ready to break with sorrow. His whole prophecy is a cry of agony. There is no finish or elaboration in his style, for a man whose spirit is moved to its depths is not careful how he orders his speech. But what his utterance lacks in sweetness it makes up in pathos and power. And through all the sudden transitions and swift changes of feeling that are characteristic of these chapters we can trace the effects of the painful education which Hosea had undergone to fit him for his duty. Israel at large, he fancied, was like the wayward Gomer of his home. Unfaithfulness to Jehovah--apostasy from the heavenly Husband whose kindness surpassed the kindness of men--that was the sin of his nation. And still, after all the provocations of the past, the aggrieved and injured Lord cared for His thankless spouse. The framer of hearts felt towards foolish Israel the same unselfish affection with which Hosea knew that he had himself followed the unstedfast daughter of Diblaim. Whatever gentleness and pity dwelt in his breast had been kindled at God’s altar. Whatever readiness to forgive he might display, God would display far more willingly and gladly. The disloyalty of Israel and the pitifulness of God--these are the two prominent ideas of this book. The former--the disloyalty of his nation--Hosea sets forth with great fulness of detail. 
He finds many tokens of ingratitude as he looks around him. There was, for example, the general and flagrant immorality of the land. How dark that was, and how notorious! Those who should have been freest from pollution were often ringleaders in crime. The very priests rejoiced in the spread of iniquity, and were foremost in outraging the law, lying in wait as robbers and murdering in the way to Shechem. The king and his princes found an unholy pleasure in conforming to the prevailing licence, and were glad rather than grieved when they contemplated the wickedness of their subjects. But besides this abounding lawlessness, and lying at its root and foundation, there were the religious declension and the false worship of the people. The prophet knew well that the outward errors of his fellow-countrymen sprang, as external transgressions so frequently do, from backsliding in religion. Had not Israel forsaken the spiritual worship of Jehovah? Had not the nation long since demanded a visible symbol of Him? Was it not given up to the adoration of the golden calves? Hosea was indeed very jealous for the honour of his God. No doubt he had heard many Israelites urge in extenuation of the image worship that it was really the service of Jehovah, and that those who went up to the local sanctuaries in Samaria and Bethel and Gilgal simply sought to give definiteness to their idea of the one living and true God when they knelt before an outward representation of Him. But he brushed aside with impatience the weak excuse. What was the calf but an idol? “The workman made it; therefore it was not God.” Moreover, this materialising of religion was leading only too directly and speedily to unmistakable Baal worship. The old Phoenician idolatry, against which Elijah had waged so fierce a battle on the summit of Carmel, was threatening again to overspread the land.
The children of Ephraim were sinning more and more; they had made them molten images of their silver; they sacrificed upon the tops of the mountains, and burned incense upon the hills. Another indication of the fickleness of Israel, and of its want of true and deep attachment to its heavenly Bridegroom, Hosea discovered in its foolish foreign policy. It would rather lean on the nations round about its borders than on the strong arm of its Maker, who should have been its Husband too. It was far from giving Him the whole-hearted devotion which He claimed as His rightful portion. Sometimes it turned to one side, and sometimes to another. It fluttered from place to place, like a silly dove, calling now to Egypt and then going to Assyria. Such conduct the prophet felt to be not merely a crime but a blunder, for whenever the Israelites should forsake one of these great empires, the other would become indignant and would take revenge for the neglect inflicted upon it. But this coquetting with powerful neighbours--this “hiring lovers among the nations”--was sad and pitiable, chiefly because it showed that the heart of the chosen generation no longer beat true to its God. The people had forgotten Him who ought to have been their fortress and high-tower; and their forgetfulness would bring its chastisement. Still another proof of Israel’s faithlessness Hosea laid stress upon in his preaching. Was it not wrong, he asked, that the nation should remain separated from Judah, its brother? Was there not rebellion against God, disregard of His purposes, opposition to His will, in this division of the kingdom? Were not the ten tribes in grievous fault when they continued to foster their quarrel with the house and dynasty of David--the house which the Lord had blessed? 
This, the prophet declared, was part of God’s indictment of the subjects of the North: “They have set up kings, but not by Me; they have made princes, and I knew it not.” And he prayed eagerly for the healing of the ancient wound. A bright vision rose before him even in the midst of his griefs. For a moment he caught a glimpse of the glory of the latter days, when “the children of Israel should return and seek Jehovah their God and David their king.” Such was the country’s infidelity towards God--an infidelity which pierced as with a sharp knife the heart of Hosea, and wounded him as the unstedfastness of Gomer had done. But this was not the whole of his message. Over against the fickle and unreliable nation he saw standing the good and faithful God, and he had much to tell of the Divine mercy and graciousness. Like his own clinging, inextinguishable affection for his wife even in the period of her folly, like it, but purer and stronger and more persevering, was the affection of the Lord Jehovah for the land which He had wedded to Himself, and of which He was both the Father and the Husband. It was the high honour of Hosea that, first among all the prophets, he was prompted to call the feeling with which God regarded His people by the name of “love.” None had used so sweet and pregnant a word before. Joel had said that the Lord was gracious and merciful, slow to anger and of great kindness. Amos had spoken of His goodness in redeeming the children of Israel from Egypt and in planting them in Canaan. But Hosea went further than either of his predecessors had done. He lit upon a treasure which they had not been permitted to find--he discovered a pearl of great price--when he realised that the chiefest of God’s perfections, the very glory and crown of His character, is His love.
These were some of the words which this old preacher put into the lips of the Lord: “When Israel was a child, then I loved him”; and these also, “I will heal their backslidings; I will love them freely.” No doubt, it was upon the community as a whole rather than upon individual hearts that Hosea thought of Jehovah as lavishing this best of all His gifts. He concerned himself with the kingdom of God in its entirety, and not with the units that went to compose it. God’s affection for His people was in truth an invincible affection. He hoped against hope, when they went on in sin. He felt that He could not abandon them to utter ruin. His soul wept over them. “How can I give thee up, Ephraim? How can I cast thee away, Israel? My heart burns within Me; I am overcome with sympathy; I will not execute the fierceness of Mine anger; I will not turn to destroy thee.” These were the thoughts of God which Hosea learned in the time of his sorrow, when he was taught to find in the emotions of his own breast a picture of the feelings that throbbed within the breast of the Lord of heaven and earth. If Israel persisted now in her folly and disobedience, she was without excuse. Amos had spoken to her of the righteousness and justice of God. But the knowledge that God is sternly righteous and inflexibly just will help none of us. But Hosea succeeded Amos; and the burden of Hosea’s message was this: “God is love; He will save you from your sins, if you seek His forgiveness; He will not retain His anger for ever.” And that is all that we need. This revelation of God should break down our rebelliousness. It should drive every suspicious thought far from our minds. It should melt us into submission. (Original Secession Magazine.) The homiletic use of Hosea.-- I. The prophet.--We have no biography of Hosea, but his book leaves upon us such a clear impression of his character that the person who brings the message is as real as the message.
He has five qualities which especially equip the man who would save souls. 1. Devotion to God. He loves God, is loyal to Him, is deeply interested in His cause. He dwells on His very names with fond and tender stress. 2. Yet he has a wondrous sympathy with Israel in her woes, and, what is far more, a vicarious fellowship in her guilt. 3. Zeal for righteousness. He denounces formal religion as worthless, however costly, and elaborate. He denounces the lying, swearing, stealing, adultery, and murder which pervaded the nation, in spite of its religious show, and declares that the Lord desires mercy and not sacrifice, the knowledge of God rather than burnt-offerings. 4. Fidelity to truth. He declares the whole counsel of God as he knows it. Even sympathy for Israel does not keep him from affirming that “Ephraim shall be desolate in the day of rebuke.” 5. Hopefulness. With all the sorrow, reproof, and forecast of woe there is a spirit of hope that rises above all he sees and forebodes. II. The times.--Hosea’s ministry certainly lay in the later years of the reign of Jeroboam II., and the troublous days that came just afterward. The reign of Jeroboam II. was the most brilliant of all in the kingdom. The brief account in the Book of Kings suggests power, enterprise, and military glory. But the account in Kings says: “Jeroboam did evil in the sight of the Lord.” Hosea’s description accords. This prosperity covers and decorates disease. Israel has forgotten that God has prospered her. Jeroboam is succeeded by a son, Zechariah, who is killed by a conspiracy after six months. His assassin, Shallum, reigns one month and is killed. His slayer and successor, Menahem, reigns longer. The Book of Kings gives him ten years; the critics say eight. But he has to pay heavy tribute to Assyria, and loads his people with taxes to do so. So the history goes on. They look for help now to Assyria, now to Egypt. Disaster, ruin, exile are close. The homiletic bearing of this is plain. 
Here is a picture of material prosperity and religious display gilding spiritual destitution and moral rottenness, and inevitably ending in overthrow. III. The teachings of the prophet.-- 1. His doctrine of God. There is a conception here of lasting value to our theology. Ineffable holiness is combined with yearning love for the sinner. 2. His doctrine of sin. This is thoroughly practical. Little or nothing is said of original sin. Actual transgression gets the chief attention. When people are lying, cheating, stealing, killing, and committing adultery, the philosophy of sin draws less notice than its phenomena. The indictment under which the several counts of transgression are to be marshalled is in Hosea 8:12. The progress of sin is shown in Hosea 13:2; its peril in Hosea 13:9 and in Hosea 13:16. The latter teaches also the true character of sin to be not misfortune, but rebellion against God. 3. The nature and the duty of the knowledge of God. This is a doctrine which is valuable to-day as a corrective of agnosticism. Hosea regards ignorance of God to be not a mishap or a mere limitation, but a grievous sin. In Hosea 4:1 he says God has a controversy with the inhabitants of the land, because there is no knowledge of God in the land; in Hosea 6:3 he says, “So shall we know if we follow on to know the Lord” (R.V. substitutes “let us” for “shall we”). The knowledge is experiential and ethical. It is reached by repentance and prayer. It is retained by obedience. It is lost by transgression and neglect. The Christian pulpit to-day may fairly face the agnostic with the truth that the knowledge of God comes by ethical activity rather than metaphysical inquiry, that the thesaurus of its data is the spiritual consciousness rather than the realm of material nature, and that the phenomena of the latter can receive their highest and truest interpretation only in the light of the former. 4. The sin of schism. 
Hosea was a patriot of the Northern Kingdom, loyal to that part of the Lord’s people to which he belonged. Yet he exalts the ideal of unity and predicts the day when the children of Judah and the children of Israel shall be gathered together and appoint themselves one head. Unity was lost through folly, sin, oppression, unwillingness to reform abuses. It was predicted, permitted, ordered in the providence of God; but it was not the ideal of the kingdom. It was the outgrowth of circumstances, but not a state wherewith to be content. The same is true of the Church. Historic causes produced divisions, which were permitted, even ordered, in God’s providence. But divided Christendom is not the ideal. Hosea’s prophecy must be fulfilled in its broad spiritual meaning. The divided hosts of Jehovah must be gathered under the one Head. For this Christ prayed; for this we pray. (T. C. Straus.)
In many ways, Anabaptists were the quintessential confessional migrants of early modern Europe. Driven by a combination of missionary zeal and persecution, they established communities across Europe, ultimately migrating as far as Russia and the Americas. Perceiving themselves to be the true church in a hostile world, they usually tried to isolate themselves from the surrounding society and culture. Nonetheless, both forced and voluntary migration put Anabaptists into situations in which they had to adapt their teachings and institutions to new conditions. However, determining what developments in Anabaptist history are the result of their migratory existence is complicated by the fact that we are not dealing with a homogeneous, clearly demarcated confessional group. Those whom history calls Anabaptists (rebaptizers) usually referred to each other simply as Brethren: the term "Anabaptist" was imposed on them by their enemies to justify the use of the death penalty against them. Furthermore, Anabaptist movements arose in different areas at different times. From early on, though, they interacted with each other, ultimately leading to common ground on some issues of theology and practice.1 As a result, despite the divergent starting points, we can try to understand how migration and resettlement affected the different Anabaptist groups if we trace their developing positions on several issues central to the Anabaptist movement. These concerns included the meaning of baptism and the nature of the church to which baptism is the entrance; the relationship between the true church and the world, with the related question of nonresistance or pacifism; leadership structures and authority of leaders within the community; as well as patterns of mutual assistance and the forms it takes within the community. For the purposes of this study, the term "Anabaptists" will be used for those groups and individuals coming out of the Protestant Reformation who chose to baptize adults, which meant in the first generation at least that they were rebaptizers.

The focus will be on continental Anabaptism; the Baptist tradition, whose history intersected with that of the Anabaptists during the 17th century, had its own point of departure and trajectory.2 In addition, we will restrict ourselves to the history of these groups during the 16th and 17th centuries, when crucial aspects of their identities were formed.3

The Beginnings in Switzerland and South Germany

According to The Chronicle of the Hutterian Brethren, on the evening of 21 January 1525 a group of former adherents of the reforming movement led by Huldrych Zwingli (1484–1531) in Zurich gathered in the house of the mother of Felix Manz (ca. 1498–1527). After the prayer, Georg Blaurock stood up and asked Conrad Grebel in the name of God to baptize him with true Christian baptism on his faith and recognition of truth. With this request he knelt down, and Conrad baptized him. . . . Then the others turned to Georg in their turn, asking him to baptize them, which he did.4 While the Chronicle's account of the earliest years of the Anabaptist movement is no longer given the same credence as it once was, other sources suggest the first baptisms did, in fact, occur in Zurich about this time.5 This act marked a clear break with Zwingli and the Zurich Reformation. Shortly before this event the city council had decreed that children be baptized as soon as they were born, that Conrad Grebel (ca. 1498–1526) and Mantz, both of whom were citizens of the city, be silent, and that other, non-citizen members of the group leave the city.6 This expulsion, combined with the missionary zeal of the early Anabaptists, led to the rapid diffusion of the movement. On the following day in January 1525 baptisms were occurring in the neighboring village of Zollikon.
Simultaneously, the baptizers fanned out to the west, the east, and the north. Already the movement was adapting to local circumstances. In areas experiencing social and political unrest associated with the German Peasants' War it took on the characteristics of a mass movement.7 Among these locations Waldshut stands out, in part because of the size of its Anabaptist community. Equally significant, though, was the character of its leader, Balthasar Hubmaier (1485–1528), the only trained theologian among the early Anabaptist leaders, and his vision of reform, which looked more like Zwingli's civic Reformation than the gathered church usually associated with Anabaptism.8 In February 1527, in the village of Schleitheim, Michael Sattler (ca. 1490–1527) wrote a series of articles by which he sought to clarify some of the central teachings and practices of the movement: baptism, the ban, the Lord's Supper, separation from the fallen world, the place and authority of pastors within the community, the sword of government, and the swearing of oaths. While the Schleitheim Articles (or the Schleitheim Confession)9 have often been the lens through which subsequent Swiss Anabaptist history has been viewed, their adoption in different regions of the Confederation was gradual and piecemeal, and often dependent on local circumstances.10 In the end, however, Swiss Anabaptists developed the contours of a distinct tradition. This process occurred in no small part as a result of encounters with representatives of other Anabaptist traditions, as the Swiss spread their beliefs far and wide, establishing congregations to the south and east, and north along the Rhine and Neckar rivers. Already in the 1540s members of other Anabaptist groups were referring to them as Swiss Brethren.
The general parameters of Swiss Brethren thought and practice were those enshrined in the Schleitheim Confession, but well into the 17th century variations existed in how specific elements of that definition were interpreted to adapt to local circumstances. At times, regional variations could become the cause of serious conflict, as at the end of the 17th century when disagreements between Swiss Brethren in Alsace and the Palatinate on one hand and their coreligionists remaining in Swiss territories on the other resulted in the Amish schism and the creation of a new Anabaptist tradition.11 As the Swiss Anabaptists moved beyond the borders of the Confederation, they encountered a distinct Anabaptist tradition rooted in the mystically inspired theology of the Saxon Reformers Andreas Bodenstein von Karlstadt (1480–1541) and Thomas Müntzer (1489–1525). These Anabaptists also criticized infant baptism, but for different reasons than the Swiss. While the Biblicism of the Swiss pushed them to restore what they perceived to be the true ceremonies of the ancient church, south German Anabaptists sought to purge the church of empty and meaningless rites that paid no attention to the commitment of the believer. South German and Austrian Anabaptism has no clearly defined start comparable to the Zurich baptisms of January 1525. However, one could designate the baptism of Hans Hut (1490–1527)12 at the hands of Hans Denck (ca. 1495–1527) in Augsburg on Pentecost 1526 as such an event. Denck's direct influence on the subsequent development of that tradition was limited – his own baptizing activities were circumscribed and toward the end of his life he questioned the validity of his own and others' commissions to baptize.13 Nonetheless, he did give Anabaptism a spiritualist impulse that would reappear periodically throughout its history.
Hut, by way of contrast, became one of the most active and prolific missionaries of the movement, recruiting followers and establishing communities across the southern German-speaking lands and possibly beyond. Hut's thought was strongly eschatological. After his death in December 1527 and the failure of the end times to materialize at Pentecost 1528 as he had predicted, disillusionment set in among many of his followers.14 Thereafter, Anabaptism in southern regions of the German-speaking lands came under the increasing influence of the Swiss model. However, as part of that process interesting encounters occurred between representatives of the Swiss and south German traditions, most notably in Strasbourg, where an unusually tolerant policy toward religious heterodoxy provided the setting for wide-ranging dissent.15

Moravia: The Promised Land

Another significant meeting between representatives of differing Anabaptist traditions occurred in Nikolsburg, Moravia.16 Balthasar Hubmaier arrived there in June or July 1526 and soon won over the local lord, Leonhard von Liechtenstein (1482–1534), and the local humanist-trained clergy to his vision of an Anabaptist Reformation from above. News of Hubmaier's activities quickly spread and drew large numbers of refugees from the southern German lands, where persecution of Anabaptists was increasing exponentially. Like Bohemia, Moravia had become a land of religious pluralism in the wake of the 15th-century Hussite wars, where a semi-autonomous nobility sought to attract economically valuable immigrants. As refugees flocked to Moravia from different areas in the Empire, they brought with them their own distinct versions of Anabaptism. Among them was Hans Hut, who was able to establish a sizable following, especially among the new arrivals. Conflict soon developed between Hubmaier and Hut. After an open disputation with the local clergy, Hut was arrested, although he subsequently escaped and fled the territory, ultimately meeting his end in Augsburg in early December 1527. Hubmaier did not long survive him, being martyred in March 1528 after his noble patron could no longer protect him from the long reach of the Habsburgs. Thereafter, Anabaptism in Nikolsburg moved in the direction of Sabbatarianism and survived as an isolated enclave into the second half of the 16th century.17 In the winter of 1527 to 1528 supporters of Hut in and around Nikolsburg, many of them Austrian refugees, coalesced around Jakob Wiedemann (died 1536), one of Hut's missionaries. Expelled from the lands of the Liechtenstein lords, in the spring of 1528 they established a community in Austerlitz. From Austerlitz they undertook extensive missionary activity in southern Germany and Austria aimed at organized emigration to the new "promised land" in Moravia. Among the refugees pouring into Moravia was a group arriving from Austria and Tyrol in 1529 which included Pilgram Marpeck (ca. 1495–1556). He would later play an important role in interactions between Anabaptist groups in southern Germany, adopting a position critical of both the increasing Spiritualism of some of Denck's followers and what he regarded as the excessive legalism of the Swiss.18 Other Anabaptist leaders from different regions also brought their flocks to Moravia, including Philipp Plener from the Palatinate, Gabriel Ascherham (died 1545) from Silesia, and Jakob Hutter (died 1536) from Tyrol. Moravia thus became a melting pot, in which elements of both Swiss and south German traditions mingled freely. Each of these groups established their own communities: the Austerlitz Brethren in Austerlitz, the Philippites and Hutterites in Auspitz, and the Gabrielites in Rossitz. Initially relations were congenial between the different communities and between refugees from different regions within the communities.
Gradually, however, conflicts developed, often along the fault lines separating different groups of refugees.19 Periods of intense persecution from 1535 to 1537 and again between 1547 and 1552 had a profound effect on the Anabaptist communities in Moravia. The Philippites and Gabrielites for the most part returned to their home territories, where many Philippites eventually joined the Swiss Brethren and the Gabrielites were increasingly drawn to Spiritualism.20 The Austerlitz Brethren and the Hutterites were able to weather the storm in Moravia, but adopted very different strategies to do so. The former gave up the communitarian ideals so characteristic of Moravian Anabaptism, thereby allowing them to blend in with local populations and to survive into the 17th century.21 The Hutterites rode out the periods of persecution by breaking up into smaller groups and dispersing throughout southern Moravia and the neighboring territories. When the danger had passed, they returned to their communal lives.22 In the second half of the 16th century they entered into their "golden years", when the strict organization of their communitarian settlements (Haushaben or Brüderhöfe) and efficiencies of communal living created an economic success story that likely drew further converts from the Empire for economic as much as religious reasons. The number of Haushaben, estimated at 36 not long after the mid-century persecutions, grew to as many as 74 settlements by the end of the 16th and beginning of the 17th centuries. These units may have housed between 20,000 and 25,000 people.23 Much of this growth came from continued vigorous missionary activity in southern Germany and Austria, as well as other territories. This activity led regularly to clashes with the Swiss Brethren, as the Hutterites now called Anabaptist congregations deriving from the Swiss tradition. The security and prosperity of the brethren in the later 16th century was, however, short lived.
In the early years of the 17th century, the Haushaben along the Moravian-Hungarian border were ravaged during military incursions. More serious damage came with the outbreak of the Thirty Years' War, especially after the Habsburg victory at the Battle of White Mountain (1620) near Prague, which put an end to religious pluralism in the region. Two years later the final expulsion of the Hutterites was ordered and their property seized. Some of the Hutterites converted to Catholicism; the remainder of about 10,000 people crossed into Hungary, where they had already established satellite communities in the middle of the preceding century. There, with a new administrative center in Sabatisch (Sobotište), they witnessed a modest revival, especially under the leadership of the elder Andreas Ehrenpreis (1589–1662). However, in the second half of the 17th century, the communities went into rapid economic and demographic decline. One of the reasons for this may be a reduction in the numbers of refugees from the traditional missionary fields in the German-speaking lands. In the later 18th century, the final remnants of a few hundred people in Habsburg lands were forced to embrace Catholicism under the "Enlightened" policies of Maria Theresa (1717–1780) and Joseph II (1741–1790).24 However, that was not the end of the Hutterite story. In the middle of the 18th century Lutheran migrants from Carinthia encountered the remnants of the Hutterite community in Transylvania. Impressed by the organization of the Hutterites, the Lutherans joined and reinvigorated the community. Under the threat of continued persecution, descendants of this group migrated first to Wallachia and Russia, and ultimately to the United States and Canada.

North German and Dutch Anabaptism

Among the radicals flocking to Strasbourg at the end of the 1520s was Melchior Hoffman (ca. 1490–ca. 1543), a furrier and lay preacher active in the north, who had run afoul of the Lutheran authorities and clergy.
Initially well received by the Strasbourg Reformers, he soon had conflicts with them as well. In 1530 he visited East Frisia, where he baptized numerous converts. Among them were individuals who were later to play important roles in the movement, especially Jan Mathys van Haarlem (died 1534) and David Joris (ca. 1501–1556). However, in 1531 after the execution of several of his followers, Hoffman suspended baptizing. Melchiorite Anabaptism first rose to prominence when its history intersected with that of the civic reformation in the Westphalian city of Münster. Bernhard Rothmann (ca. 1495–ca. 1535), the city's leading Reformer, was drawn to Anabaptism, possibly by radical preachers visiting the city from Wassenberg, who had occupied some of Münster's pulpits and with whom he co-authored in 1533 the Confession of Two Sacraments (Bekentnisse van beydem Sacramenten Doepe unde Nachmaele der predicanten tho Munster). This work helped to reinvigorate the northern Melchiorite movement. In the neighboring Netherlands, where Melchiorite Anabaptism spread with considerable success, Mathys and his follower Jan Bockelson van Leyden (1509–1536) played on apocalyptic themes in Hoffman's theology to reinstate adult baptism and identify Münster as the promised New Jerusalem.25 In January 1534 emissaries of Mathys baptized Rothmann and his followers, and in February the Anabaptists and their supporters won regular elections to the city council. Almost immediately, the Catholic bishop of Münster and local Lutheran princes laid siege to the city, from which non-Anabaptists had been expelled, but whose population was supplemented with the arrival of around 2,500 trekkers, many from the Netherlands. For the next 16 months events in Münster, which included experiments with community of goods, polygamy, and unusual constitutional forms modeled on the Old Testament, were played out before the eyes of Europeans and confirmed the suspicions of many about the inherently seditious nature of Anabaptism.26
The fall of "New Jerusalem" in June 1535 left the Melchiorite movement in a shambles. Initially the mantle of leadership fell to David Joris, but in the context of ongoing persecution he relinquished his commitment to the necessity of adult baptism in 1539. In 1544 he moved to Basel with a small group of followers, where he lived for the rest of his life in disguise and in conformity with local religious conventions.27 After Joris's defection, Menno Simons (1496–1561) took up the reins of the movement, and groups descending from the early Mennonites became the dominant Anabaptist tradition in northern continental Europe, with settlements especially in coastal areas and along major waterways in the Netherlands, north-western Germany, and the Vistula delta. Subsequently, they also migrated to Russia and the Americas. Granted limited toleration in the Dutch Republic during the revolt against Spain, they moved gradually toward respectable nonconformity, participating widely in the benefits of the Dutch Golden Age. By the late 16th century the Mennonites numbered as many as 100,000 in hundreds of congregations, in some areas of the Netherlands even outnumbering the Reformed.28 However, the history of Melchiorite Anabaptism in the later 16th and 17th centuries is not without conflict. In particular, the lure of the world and the need to maintain a flawless church led to numerous conflicts and divisions within the movement. These began already during Menno's lifetime, producing ultimately a major division between more conservative Mennonites and Doopsgezinden. The latter group practiced a less strict imposition of the ban and shunning, was not as averse to contacts with the world or with other religious communities, and was not as rigorous in its interpretation of nonresistance. Subsequently, numerous subdivisions appeared among the more conservative groups.
Attempts to counter these trends and reunify the splintered movements were often less spectacular than the divisions themselves, but they were supplemented by a tradition of philanthropy and advocacy of religious toleration, through which wealthy Doopsgezinde and Mennonite merchants sought to share the fruits of their success, especially with Mennonite communities abroad, Swiss Brethren, and Hutterites, all of whom faced continued persecution and hardship.29

Migration and the Formation of Anabaptist Traditions

Despite some important regional variations, different Anabaptist groups shared a common basic understanding of baptism. In general, they desacralized the rite: water baptism was merely an external act, witnessing to an internal arrival of faith and a commitment to lead a Christian life.30 Much greater variation appears in their understanding of the communities arising from this act. While debate continues about the nature of the community intended by the first Zurich baptisms, evidence from areas around the city suggests that the environment could play a significant role in determining its characteristics.31 Ultimately, the tightly knit community of believers envisioned in the Schleitheim Articles, which still maintained a number of traditional social structures and institutions, became the norm. However, in at least two cases specific environments dictated the development of more comprehensive institutions.
In Moravia circumstances faced by the refugees after the Nikolsburg split, and thereafter the need to integrate newcomers from the mission fields, suggested the adoption of communal living arrangements, culminating in the regimented life of the Hutterite Haushaben.32 Similarly, in Münster the need to integrate refugees from the Netherlands and to deal with shortages caused by the siege encouraged social experiments, most obvious in the adoption of polygamy as a means to regulate the large number of women in the city.33 The Schleitheim Articles also outlined the relationship of the true church to the world and its institutions and practices. These recognized that government, and with it physical coercion, was a necessary evil to regulate human interaction in a fallen world, but insisted that the faithful were not subject to its strictures. Rather, the wayward among the faithful were to be dealt with through the ban or exclusion from the community, as described in Matthew 18:15–17.34 Here, too, the early Swiss Anabaptists leave us no unequivocal image of their intentions. Particularly their activities outside of Zurich, often in the context of armed peasant resistance to the authorities, call into question the extent to which they categorically rejected temporal authority and the use of force.
This is most obvious in Waldshut where Hubmaier worked closely with civic authorities and called for an armed defense of the Reformation in the city.35 Outside the Swiss context, the case for the original pacifism and rejection of temporal authority of the Anabaptists is even more problematic: both Hut and Hoffman allowed for the possibility of godly rulers and some use of force in the events of the last days.36 Even after Hoffman's eschatological emphasis waned in the Melchiorite movement, Dutch Anabaptism continued to maintain a more positive assessment of secular authority than its Swiss and Moravian counterparts.37 In some cases circumstances could reinforce these differences. In The Chronicle of the Hutterian Brethren a disagreement over nonresistance, specifically the payment of war taxes, is presented as an important cause of the divisions among Anabaptists at Nikolsburg. This account identifies the group associated with Hubmaier as sword-bearers (Schwertler) while the followers of Wiedemann are labeled staff-bearers (Stäbler).38 The accuracy of this portrayal of events is not clearly documented; the Hutterites may have emphasized differences between these groups to highlight the continuity between their own teachings and those of the Wiedemann group. Nonetheless, the positions described seem plausible. While Hut was not a pacifist, he would have opposed a magisterial Anabaptist reformation on Hubmaier's model, and the stance on the sword propagated in the Schleitheim Articles had a solid following among the Wiedemann group.39 However, the backing of Hubmaier by the Liechtenstein lords would certainly have reinforced the assumptions of the Schwertler, just as it would have confirmed the antagonism to secular authority of the Stäbler. 
Similarly, the initial decision to unsheathe the sword in Münster owed more to the perception of the city's citizens that they had a legitimate right to defend their civic reformation than it did to the eschatological musings of Jan Matthijs in the Netherlands.40 Even after there was a general consensus on separation and nonresistance among the different Anabaptist groups, variations persisted in the interpretation of what that meant, and those variations could be a source of friction between groups or even within groups. Ironically, the Hutterites were able to maintain a radical separation from the world and their categorical rejection of the possibility of a Christian government as well as of the legitimate use of force because they enjoyed the protection of the Moravian nobility.41 From that perspective they criticized the accommodations of the Swiss Brethren with the world in a context of ongoing persecution, especially their willingness to pay war taxes, which the Hutterites regarded as a betrayal of the principle of nonresistance.42 In the Netherlands during the later 16th and 17th centuries, in the eyes of some, accommodation with the world was not really a survival strategy but an even greater danger than the earlier persecution faced by the Anabaptists. As a result, the sources of conflict in the seemingly endless divisions among Mennonites included matters involving accommodation with the world, such as questions of personal ostentation, marriage outside the community, or the arming of merchant ships with cannon as a self-defense measure.43 Similarly, the roots of the Amish schism lie in criticisms of accommodation with the world of Swiss Anabaptists by their confreres who had emigrated to more tolerant environments in Alsace and the Palatinate.44 A similar adaptation to local circumstances is evident in Anabaptist definitions of the roles and authority of congregational leaders. 
In general, Anabaptists rejected the sacramental priesthood and hierarchical offices of the church to a degree that took the Reformation teaching of the priesthood of all believers to its logical conclusion. Nevertheless, variations existed between the different Anabaptist traditions. While Swiss Anabaptism vested greater authority in the congregation than in the hands of the leader, the more charismatic leadership of Hut and Hoffman bequeathed to south German and Austrian as well as Dutch Anabaptism greater roles and authority for leaders.45 At the other end of the spectrum, charisma remained a crucial element of leadership in Münster under Jan Matthijs and Jan van Leyden and in the refugee communities in Austerlitz, Auspitz, and Rossitz in Moravia. Subsequently, this charisma was institutionalized in the greater authority exercised by leaders among the Dutch Mennonites than among the Swiss Brethren, and even more dramatically in the strict hierarchy of offices in the Hutterite Haushaben.46 On this point, too, friction developed between different Anabaptist groups: Pilgram Marpeck criticized the Swiss Brethren for undercutting the authority of congregational leaders, while the Swiss Brethren were appalled by the privileges enjoyed by the Hutterite leadership and the imbalance of authority between leaders and congregations.47 These differences could sometimes be taken to extremes. For example, in regions where the Anabaptist movement was an amalgam of Swiss and south German traditions, the efforts of local authorities to eradicate Anabaptism focused almost exclusively on rooting out its leadership. In response, formal leadership within the congregation disappeared. No less dramatic are the connections between Anabaptist teachings on mutual aid or community of goods and the refugee experience.
This practice is most famous in the compulsory community of goods practiced by the Hutterites, but a commitment to mutual support within the community modeled on the description of the primitive church in Acts 2 and 4 ran through Anabaptism generally. This is evident in a contemporary description of the activities of the Anabaptists in Zollikon: Now because Zollikon in general had itself baptized, and they assumed they were the true Christian church, they also undertook, like the early Christians, to practice community of goods (as can be read in the Acts of the Apostles), broke the locks off their doors, chests, and cellars, and ate food and drink in good fellowship without discrimination.48 Possibly coming out of Zollikon was an early Swiss congregational order advocating community of goods, which may have served as a model for some Moravian congregational orders.49 However, the appeal of that model was certainly reinforced by the refugee experience of the early Moravian Anabaptists, which resulted in a wide variety of experiments with community of goods. Especially among the Hutterites, it served as a crucial component in both the integration of new arrivals to the community from the mission fields and in the economic success of the communities which made them so attractive to the local nobility.50 The other case of compulsory community of goods, which was instituted in Anabaptist Münster, was also determined by circumstances, specifically the need to integrate trekkers with the local population of the city and to deal with the emergency rationing necessitated by the siege.51 Not surprisingly, differences in how mutual support was understood, too, became distinguishing characteristics of these groups and a source of contention between them. 
In the second half of the 16th century, the Swiss Brethren accused the Hutterites of making a god out of their practice of community of goods.52 Yet, in changed circumstances even the most committed adherents to the tradition were willing to abandon it. When the Philippites, Gabrielites, and Austerlitz Brethren faced prolonged persecution in the 1530s, they stepped away from communal living and community of goods. In order to justify this change, they appealed to the model of the primitive church after the dispersion from Jerusalem rather than to the description in Acts 2 and 4.53 In the end even the Hutterites renounced community of goods for a time in the second half of the 17th century.54 In the wake of the Münster debacle, Melchiorite Anabaptists abandoned community of goods, opting for a Swiss style of voluntary sharing that manifested itself in the extensive philanthropic activities of Dutch Mennonites and Doopsgezinden in the 17th and 18th centuries.55 Clearly, the experiences of a migratory existence, frequently as refugees, highlighted and even exacerbated differences between Anabaptist traditions and groups. Yet, especially in the 17th century and after, there were moves toward greater unity and uniformity between those groups and traditions, facilitated by the sharing of ideas and practices. Obviously, face-to-face encounters, either formal meetings aimed at the unification of groups or encounters between missionaries of competing traditions, played an important role. However, limited surviving evidence of such encounters suggests that they did more to harden differences between the various groups than to encourage unity. 
Rather, it appears that the written word, either in printed form or in the manuscript collections characteristic of the Hutterite tradition, was a much more effective means of transmission.56 This is most obvious in a number of high profile cases of borrowing, or even plagiarizing, major treatises, biblical concordances, and congregational orders from other traditions.57 However, some of the most common means of transmission were likely collections of martyr stories and hymns. Both martyr stories and hymns played important roles in consolidating group identity and solidarity by reinforcing an awareness of being a persecuted minority in a hostile world. Initially, hand-written accounts of martyrdoms and collections of martyr stories were assembled locally with each group venerating its own martyrs. In 1615, however, a dramatic change came with the publication of the Doopsgezinde martyrology History of the Martyrs or Genuine Witnesses of Christ (Historie der Martelaren) by Hans de Ries (1553–1638), which consciously drew on martyr accounts from a variety of Anabaptist groups in an effort to promote unity between them. This more inclusive approach was taken up in later Dutch Anabaptist martyrologies, including the 1685 edition of The Martyrs Mirror,58 one of the most influential books in subsequent Anabaptist tradition.59 Hymns were shared among Anabaptist groups in a similar fashion. The first printed Anabaptist hymn collection, Some Beautiful Christian Songs (1564), consisted of 51 hymns written by Philippites imprisoned in Passau from 1535 to 1540 after fleeing persecution in Moravia. In 1583 these hymns were republished in the Ausbund,60 the best known and most influential Swiss Brethren hymnal, where they supplemented hymns not only from Swiss and Dutch Anabaptist sources, but from other traditions as well.
Hymns from the Ausbund were in turn taken up in other collections in a variety of Anabaptist and Mennonite traditions.61 As they moved across northern Europe, Anabaptist refugees were subjected to pressures that encouraged both assimilation with and differentiation from the surrounding society and other Anabaptists. Without a single dominant theological teacher or office, and with only limited ties to secular authority, they had no clearly defined orthodoxy or orthopraxis. Different Anabaptist groups did, however, share some basic assumptions about central teachings of the movement: that baptism should be administered to adults on the basis of a confession of faith, that the church was a voluntary association in some way separated from the world, that the leadership was non-sacerdotal and drawn from the community, and that members of the community were responsible for each other's wellbeing through some sort of mutual sharing. The details of these teachings, and how they were to be implemented in the life of the community, varied between regions where the Anabaptist movement began. More variations developed as a result of the Anabaptists' refugee experiences. Often these experiences encouraged conflict and further distinctions within the movement, as with the Hutterites and other Moravian Anabaptists, the Mennonites and the Doopsgezinden, the Amish and the Swiss Brethren. Throughout this process, though, the shared core of teachings persisted and encouraged attempts at unification, resulting in a shared perception of a common heritage.
Strangely, although we feel as if we sweep through time on the knife-edge between the fixed past and the open future, that edge — the present — appears nowhere in the existing laws of physics. In Albert Einstein’s theory of relativity, for example, time is woven together with the three dimensions of space, forming a bendy, four-dimensional space-time continuum — a “block universe” encompassing the entire past, present and future. Einstein’s equations portray everything in the block universe as decided from the beginning; the initial conditions of the cosmos determine what comes later, and surprises do not occur — they only seem to. “For us believing physicists,” Einstein wrote in 1955, weeks before his death, “the distinction between past, present and future is only a stubbornly persistent illusion.” The timeless, pre-determined view of reality held by Einstein remains popular today. “The majority of physicists believe in the block-universe view, because it is predicted by general relativity,” said Marina Cortês, a cosmologist at the University of Lisbon. However, she said, “if somebody is called on to reflect a bit more deeply about what the block universe means, they start to question and waver on the implications.” Physicists who think carefully about time point to troubles posed by quantum mechanics, the laws describing the probabilistic behavior of particles. At the quantum scale, irreversible changes occur that distinguish the past from the future: A particle maintains simultaneous quantum states until you measure it, at which point the particle adopts one of the states. Mysteriously, individual measurement outcomes are random and unpredictable, even as particle behavior collectively follows statistical patterns. This apparent inconsistency between the nature of time in quantum mechanics and the way it functions in relativity has created uncertainty and confusion. 
Over the past year, the Swiss physicist Nicolas Gisin has published four papers that attempt to dispel the fog surrounding time in physics. As Gisin sees it, the problem all along has been mathematical. Gisin argues that time in general and the time we call the present are easily expressed in a century-old mathematical language called intuitionist mathematics, which rejects the existence of numbers with infinitely many digits. When intuitionist math is used to describe the evolution of physical systems, it makes clear, according to Gisin, that “time really passes and new information is created.” Moreover, with this formalism, the strict determinism implied by Einstein’s equations gives way to a quantum-like unpredictability. If numbers are finite and limited in their precision, then nature itself is inherently imprecise, and thus unpredictable. Physicists are still digesting Gisin’s work — it’s not often that someone tries to reformulate the laws of physics in a new mathematical language — but many of those who have engaged with his arguments think they could potentially bridge the conceptual divide between the determinism of general relativity and the inherent randomness at the quantum scale. “I found it intriguing,” said Nicole Yunger Halpern, a quantum information scientist at Harvard University, responding to Gisin’s recent article in Nature Physics. “I’m open to giving intuitionist mathematics a shot.” Cortês called Gisin’s approach “extremely interesting” and “shocking and provocative” in its implications. “It’s really a very interesting formalism that is addressing this problem of finite precision in nature,” she said. Gisin said it’s important to formulate laws of physics that cast the future as open and the present as very real, because that’s what we experience. “I am a physicist who has my feet on the ground,” he said. “Time passes; we all know that.”

Information and Time

Gisin, 67, is primarily an experimenter. 
He runs a lab at the University of Geneva that has performed groundbreaking experiments in quantum communication and quantum cryptography. But he is also the rare crossover physicist who is known for important theoretical insights, especially ones involving quantum chance and nonlocality. On Sunday mornings, in lieu of church, Gisin makes a habit of sitting quietly in his chair at home with a mug of oolong tea and contemplating deep conceptual puzzles. It was on a Sunday about two and a half years ago that he realized that the deterministic picture of time in Einstein’s theory and the rest of “classical” physics implicitly assumes the existence of infinite information. Consider the weather. Because it’s chaotic, or highly sensitive to small differences, we can’t predict exactly what the weather will be a week from now. But because it’s a classical system, textbooks tell us that we could, in principle, predict the weather a week on, if only we could measure every cloud, gust of wind and butterfly’s wing precisely enough. It’s our own fault we can’t gauge conditions with enough decimal digits of detail to extrapolate forward and make perfectly accurate forecasts, because the actual physics of weather unfolds like clockwork. Now expand this idea to the entire universe. In a predetermined world in which time only seems to unfold, exactly what will happen for all time actually had to be set from the start, with the initial state of every single particle encoded with infinitely many digits of precision. Otherwise there would be a time in the far future when the clockwork universe itself would break down. But information is physical. Modern research shows it requires energy and occupies space. Any volume of space is known to have a finite information capacity (with the densest possible information storage happening inside black holes). The universe’s initial conditions would, Gisin realized, require far too much information crammed into too little space. 
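The weather argument above turns on sensitivity to initial conditions: any finite truncation of the initial state eventually dominates the forecast. A minimal sketch of that effect, using the logistic map as a stand-in chaotic system (the map and the particular starting values are illustrative, not anything from Gisin's papers):

```python
# Logistic map at r = 4: a textbook chaotic system. Two starting values that
# agree to 12 decimal digits diverge completely within a few dozen steps, so
# any finite-precision record of the initial state fails as a long-range
# predictor -- the point made about weather above.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.123456789012, 0.123456789013  # differ by only 1e-12
for _ in range(60):
    x, y = logistic(x), logistic(y)

# After 60 steps the trajectories are effectively uncorrelated.
print(abs(x - y))
```

With a Lyapunov exponent of roughly ln 2 per step, the 1e-12 discrepancy doubles each iteration and saturates at order one well before step 60, which is why "measure every butterfly's wing precisely enough" really means *infinitely* precisely.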
“A real number with infinite digits can’t be physically relevant,” he said. The block universe, which implicitly assumes the existence of infinite information, must fall apart. He sought a new way of describing time in physics that didn’t presume infinitely precise knowledge of the initial conditions.

The Logic of Time

The modern acceptance that there exists a continuum of real numbers, most with infinitely many digits after the decimal point, carries little trace of the vitriolic debate over the question in the first decades of the 20th century. David Hilbert, the great German mathematician, espoused the now-standard view that real numbers exist and can be manipulated as completed entities. Opposed to this notion were mathematical “intuitionists” led by the acclaimed Dutch topologist L.E.J. Brouwer, who saw mathematics as a construct. Brouwer insisted that numbers must be constructible, their digits calculated or chosen or randomly determined one at a time. Numbers are finite, said Brouwer, and they’re also processes: They can become ever more exact as more digits reveal themselves in what he called a choice sequence, a function for producing values with greater and greater precision. By grounding mathematics in what can be constructed, intuitionism has far-reaching consequences for the practice of math, and for determining which statements can be deemed true. The most radical departure from standard math is that the law of excluded middle, a vaunted principle since the time of Aristotle, doesn’t hold. The law of excluded middle says that either a proposition is true, or its negation is true — a clear set of alternatives that offers a powerful mode of inference. But in Brouwer’s framework, statements about numbers might be neither true nor false at a given time, since the number’s exact value hasn’t yet revealed itself. There’s no difference from standard math when it comes to numbers like 4, or ½, or pi, the ratio of a circle’s circumference to its diameter. 
Even though pi is irrational, with no finite decimal expansion, there’s an algorithm for generating its decimal expansion, making pi just as determinate as a number like ½. But consider another number x that’s in the ballpark of ½. Say the value of x is 0.4999, where further digits unfurl in a choice sequence. Maybe the sequence of 9s will continue forever, in which case x converges to exactly ½. (This fact, that 0.4999… = 0.5, is true in standard math as well, since x differs from ½ by less than any finite difference.) But if at some future point in the sequence, a digit other than 9 crops up — if, say, the value of x becomes 0.4999999999999997… — then no matter what happens after that, x is less than ½. But before that happens, when all we know is 0.4999, “we don’t know whether or not a digit other than 9 will ever show up,” explained Carl Posy, a philosopher of mathematics at the Hebrew University of Jerusalem and a leading expert on intuitionist math. “At the time we consider this x, we cannot say that x is less than ½, nor can we say that x equals ½.” The proposition “x is equal to ½” is not true, and neither is its negation. The law of the excluded middle doesn’t hold. Moreover, the continuum can’t be cleanly divided into two parts consisting of all numbers less than ½ and all those greater than or equal to ½. “If you try to cut the continuum in half, this number x is going to stick to the knife, and it won’t be on the left or on the right,” said Posy. “The continuum is viscous; it’s sticky.” Hilbert compared the removal of the law of excluded middle from math to “prohibiting the boxer the use of his fists,” since the principle underlies much mathematical deduction. Although Brouwer’s intuitionist framework compelled and fascinated the likes of Kurt Gödel and Hermann Weyl, standard math, with its real numbers, dominates because of ease of use.

The Unfolding of Time

Gisin first encountered intuitionist math at a meeting last May attended by Posy. 
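Posy's x = 0.4999… example can be mimicked in code: treat the digits as a stream revealed one at a time, and observe that a comparison with ½ can only return a verdict once a non-9 digit actually shows up. (The function name and the digit-budget cutoff here are illustrative conveniences, not part of intuitionist mathematics proper.)

```python
import itertools

def compare_to_half(digit_stream, budget=100):
    """Try to decide how x = 0.d1d2d3... compares with 1/2, inspecting digits
    one at a time as a Brouwer-style choice sequence reveals them.
    Returns '<1/2', '>=1/2', or 'undecided' if the budget runs out first."""
    digits = iter(digit_stream)
    first = next(digits)
    if first <= 3:
        return "<1/2"      # x <= 0.3999... = 0.4 < 1/2: decided immediately
    if first >= 5:
        return ">=1/2"     # x >= 0.5: decided immediately
    # First digit is 4: x < 1/2 iff some later digit differs from 9.
    # If the 9s never end, x equals exactly 1/2 -- but we can never see that.
    for seen, d in enumerate(digits):
        if d != 9:
            return "<1/2"
        if seen + 1 >= budget:
            return "undecided"
    return "undecided"     # stream exhausted while still all 9s

# A non-9 digit eventually appears, so x < 1/2:
print(compare_to_half([4, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 7]))  # <1/2
# Only 9s observed so far: neither 'x < 1/2' nor 'x = 1/2' is decidable yet:
print(compare_to_half(itertools.chain([4], itertools.repeat(9))))    # undecided
```

The second call is the failure of excluded middle in miniature: no finite number of observed digits ever settles the proposition, so the program can only report that it is still open.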
When the two got to talking, Gisin quickly saw a connection between the unspooling decimal digits of numbers in this mathematical framework and the physical notion of time in the universe. Materializing digits seemed to naturally correspond to the sequence of moments defining the present, when the uncertain future becomes concrete reality. The lack of the law of excluded middle is akin to indeterministic propositions about the future. In work published last December in Physical Review A, Gisin and his collaborator Flavio Del Santo used intuitionist math to formulate an alternative version of classical mechanics, one that makes the same predictions as the standard equations but casts events as indeterministic — creating a picture of a universe where the unexpected happens and time unfolds. It is a bit like the weather. Recall that we can’t precisely forecast the weather because we don’t know the initial conditions of every atom on Earth to infinite precision. But in Gisin’s indeterministic version of the story, those exact numbers never existed. Intuitionist math captures this: The digits that specify the weather’s state ever more precisely, and dictate its evolution ever further into the future, are chosen in real time as that future unfolds in a choice sequence. Renato Renner, a quantum physicist at the Swiss Federal Institute of Technology Zurich, said Gisin’s arguments “point in the direction that deterministic predictions are fundamentally impossible in general.” In other words, the world is indeterministic; the future is open. Time, Gisin said, “is not unfolding like a movie in the cinema. It is really a creative unfolding. 
The new digits really get created as time passes.” Fay Dowker, a quantum gravity theorist at Imperial College London, said she is “very sympathetic” to Gisin’s arguments, as “he is on the side of those of us who think that physics doesn’t accord with our experience and therefore it’s missing something.” Dowker agrees that mathematical languages shape our understanding of time in physics, and that the standard Hilbertian mathematics that treats real numbers as completed entities “is certainly static. It has this character of being timeless, and that definitely is a limitation to us as physicists if we’re trying to incorporate something that’s as dynamic as our experience of the passage of time.” For physicists such as Dowker who are interested in the connections between gravity and quantum mechanics, one of the most important implications of this new view of time is how it begins to bridge what have long been thought of as two mutually incompatible views of the world. “One of the implications it has for me,” said Renner, “is that classical mechanics is in some ways closer to quantum mechanics than we thought.”

Quantum Uncertainty and Time

If physicists are going to solve the mystery of time, they have to grapple not just with the space-time continuum of Einstein, but also with the knowledge that the universe is fundamentally quantum, ruled by chance and uncertainty. Quantum theory paints a very different picture of time than Einstein’s theory. “Our two big theories on physics, quantum theory and general relativity, make different statements,” said Renner. He and several other physicists said this inconsistency underlies the struggle to find a quantum theory of gravity — a description of the quantum origin of space-time — and to understand why the Big Bang happened. 
“If I look at where we have paradoxes and what problems we have, in the end they always boil down to this notion of time.” Time in quantum mechanics is rigid, not bendy and intertwined with the dimensions of space as in relativity. Furthermore, measurements of quantum systems “make time in quantum mechanics irreversible, whereas otherwise the theory is completely reversible,” said Renner. “So time plays a role in this thing that we still don’t really understand.” Many physicists interpret quantum physics as telling us that the universe is indeterministic. “For Chrissakes, you have two uranium atoms: One of them decays after 500 years, and the other one decays after 1,000 years, and yet they’re completely identical in every way,” said Nima Arkani-Hamed, a physicist at the Institute for Advanced Study in Princeton, New Jersey. “In every meaningful sense, the universe is not deterministic.” Still, other popular interpretations of quantum mechanics, including the many-worlds interpretation, manage to keep the classical, deterministic notion of time alive. These theories cast quantum events as playing out a predetermined reality. Many-worlds, for instance, says each quantum measurement splits the world into multiple branches that realize every possible outcome, all of which were set in advance. Gisin’s ideas go the other way. Instead of trying to make quantum mechanics a deterministic theory, he hopes to provide a common, indeterministic language for both classical and quantum physics. But the approach departs from standard quantum mechanics in an important way. In quantum mechanics, information can be shuffled or scrambled, but never created or destroyed. Yet if the digits of numbers defining the state of the universe grow over time as Gisin proposes, then new information is coming into being. 
Gisin said he “absolutely” rejects the notion that information is preserved in nature, largely because “there is clearly new information that is created during a measurement process.” He added, “I’m saying that we need another way of looking at these entire ideas.” This new way of thinking about information may suggest a resolution to the black hole information paradox, which asks what happens to information swallowed by black holes. General relativity implies that information gets destroyed; quantum theory says it’s preserved. Hence the paradox. If a different formulation of quantum mechanics in terms of intuitionist math allows information to be created by quantum measurements, perhaps it also lets information be destroyed. Jonathan Oppenheim, a theoretical physicist at University College London, believes information is indeed lost in black holes. He doesn’t know if Brouwer’s intuitionism will be the key to showing this, as Gisin contends, but he says there’s reason to think information creation and destruction might be deeply related to time. “Information is destroyed as you go forward in time; it’s not destroyed as you move through space,” Oppenheim said. The dimensions that make up Einstein’s block universe are very different from one another. Along with supporting the idea of creative (and possibly destructive) time, intuitionist math also offers a novel interpretation of our conscious experience of time. Recall that in this framework, the continuum is sticky, impossible to cut in two. Gisin associates this stickiness with our sense that the present is “thick” — a substantive moment rather than a zero-width point that cleanly cleaves past from future. In standard physics, based on standard math, time is a continuous parameter that can take any value on the number line. 
“However,” Gisin said, “if the continuum is represented by intuitionistic mathematics, then time can’t be cut in two sharply.” It’s thick, he said, “in the same sense as honey is thick.” So far, it’s just an analogy. Oppenheim said he had “a good feeling about this notion that the present is thick. I’m not sure why we have that feeling.”

The Future of Time

Gisin’s ideas have prompted a range of responses from other theorists, all with their own thought experiments and intuitions about time to go on. Several experts agreed that real numbers don’t seem to be physically real, and that physicists need a new formalism that doesn’t rely on them. Ahmed Almheiri, a theoretical physicist at the Institute for Advanced Study who studies black holes and quantum gravity, said quantum mechanics “precludes the existence of the continuum.” Quantum math bundles energy and other quantities into packets, which are more like whole numbers rather than a continuum. And infinite numbers get truncated inside black holes. “A black hole may seem to have a continuously infinite number of internal states, but [these get] cut off,” he said, due to quantum gravitational effects. “Real numbers can’t exist, because you can’t hide them inside black holes. Otherwise they’d be able to hide an infinite amount of information.” Sandu Popescu, a physicist at the University of Bristol who corresponds often with Gisin, agreed with the latter’s indeterministic worldview but said he is not convinced that intuitionist math is necessary. Popescu objects to the idea that digits of real numbers count as information. Arkani-Hamed found Gisin’s use of intuitionist math interesting and potentially relevant to cases such as black holes and the Big Bang where gravity and quantum mechanics come into apparent conflict. 
“These questions — of numbers as finite, or fundamentally things that exist, or whether there’s infinitely many digits, or the digits are made as you go on,” he said, “might be related to how we should ultimately think about cosmology in situations where we don’t know how to apply quantum mechanics.” He too sees the need for a new mathematical language that could “liberate” physicists from infinite precision and allow them to “talk about things that are a little bit fuzzy all the time.” Gisin’s ideas resonate in many corners but still need to be fleshed out. Going forward, he hopes to find a way of reformulating relativity and quantum mechanics in terms of finite, fuzzy intuitionist mathematics, as he did with classical mechanics, potentially bringing the theories closer. He has some ideas about how to approach the quantum side. One way that infinity rears its head in quantum mechanics is in the “tail problem”: Try to localize a quantum system, like an electron on the moon, and “if you do that with standard mathematics, you have to admit that an electron on the moon has a super small probability of being also detected on Earth,” Gisin said. The “tail” of the mathematical function representing the particle’s position “becomes exponentially small but nonzero.” But Gisin wonders, “What reality should we attribute to a super small number? Most experimentalists would say, ‘Put it to zero and stop questioning.’ But maybe the more theoretically oriented would say, ‘OK, but there is something there according to the math.’ “But it depends, now, which math,” he continued. “Classical math, there is something. In intuitionist math, no. There is nothing.” The electron is on the moon, and its chance of turning up on Earth is well and truly zero. Since Gisin first published his work, the future has grown only more uncertain. Now every day is a kind of Sunday for him, as crisis grips the world. 
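The "tail problem" Gisin describes a few sentences back can be made concrete. For a Gaussian-localized particle, the detection probability beyond d standard widths is a complementary error function: in standard math it is exponentially small but never zero, while finite-precision floating point (a crude stand-in for finite-information numbers, not Gisin's actual formalism) eventually rounds it to exactly zero. The distances used below are arbitrary illustrations:

```python
import math

def tail_probability(d):
    """P(|x| > d standard widths) for a Gaussian position distribution:
    the nonzero 'tail' that standard math insists on."""
    return math.erfc(d / math.sqrt(2.0))

print(tail_probability(5.0))   # tiny but nonzero, roughly 5.7e-7
print(tail_probability(50.0))  # underflows: double precision gives exactly 0.0
```

The second line is, incidentally, a small echo of Gisin's point: once numbers carry only finitely many digits, the super-small tail is not "something according to the math" but literally nothing.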
Away from the lab, and unable to see his granddaughters except on a screen, he plans to keep thinking, at home with his mug of tea and garden view. This article was reprinted on TheAtlantic.com.
OBJECTIVES
It has been assumed that nicotine dependence has a slow onset and occurs only after prolonged daily use of tobacco. A cohort of young adolescents was followed to determine when the first symptoms of nicotine dependence occur with respect to the duration and frequency of tobacco use.

DESIGN
A cohort of 681 seventh grade students (age 12–13 years) from seven schools in two small cities in central Massachusetts was followed over one year. Detailed information regarding tobacco use was obtained in individual confidential interviews conducted in school three times over the year. The latency time to the onset of symptoms of nicotine dependence was measured from the time a subject first smoked at a frequency of at least once per month.

RESULTS
22% of the 95 subjects who had initiated occasional smoking reported a symptom of nicotine dependence within four weeks of initiating monthly smoking. One or more symptoms were reported by 60 (63%) of these 95 subjects. Of the 60 symptomatic subjects, 62% had reported experiencing their first symptom before smoking daily or began smoking daily only upon experiencing their first symptom.

DISCUSSION
The first symptoms of nicotine dependence can appear within days to weeks of the onset of occasional use, often before the onset of daily smoking. The existence of three groups of individuals—rapid onset, slower onset, and resistant—distinguishable from one another by their susceptibility to nicotine dependence, is postulated. 
Nicotine dependence is characterised by tolerance, cravings, feeling a need to use tobacco, withdrawal symptoms during periods of abstinence, and loss of control over the amount or duration of use.1 2 Symptoms of nicotine withdrawal include: cravings; depressed mood; irritability; frustration; anger; anxiety; difficulty concentrating; and restlessness.2-6 A popular model for the development of nicotine dependence holds that youths progress from the first cigarette through a period of occasional use and on to sustained and increasingly heavier daily use, resulting ultimately in dependence.7-13 However, it has not been established that daily use of nicotine is necessary for dependence to begin. The assumption that heavy daily use (one half pack per day) is necessary for dependence to develop is derived from observations of “chippers”, adult smokers who have not developed dependence despite smoking up to five cigarettes per day over many years.11 14 15 Chippers do not differ from other smokers in their absorption and metabolism of nicotine, causing some investigators to suggest that this level of consumption may be too low to cause nicotine dependence.11 14 15 In conflict with the assumption that prolonged daily use is a prerequisite for dependence is the observation that symptoms of nicotine dependence, which are common among adolescent smokers, appear to develop in some youths before the onset of daily smoking.3 16-20 In a study of girls 11–17 years of age, McNeill and colleagues were the first to find that youths report nicotine withdrawal symptoms.3 Withdrawal symptoms were reported by 74% of daily smokers and by 47% of occasional smokers.3 Other investigators have also reported withdrawal symptoms among youths who were not smoking daily at the time of the interview.16 17 However, in these studies, individuals reporting withdrawal symptoms may have been daily smokers in the past.3 16 17 Also arguing against the need for prolonged and 
heavy exposure before dependence can occur is a report that 8% of subjects who had smoked 20 or fewer cigarettes over their lifetime had difficulty quitting.16 Some studies indicate that young smokers can inhale and absorb as much nicotine and carbon monoxide per cigarette as adults do, and tolerance can begin with the first dose of nicotine.20-23 Since tolerance can begin immediately, it may not be long before other symptoms of dependence follow.22 The Development and Assessment of Nicotine Dependence in Youth (DANDY) study reported here is a retrospective/prospective study of a cohort of adolescents designed to investigate the onset of symptoms of nicotine dependence. This paper presents a first look at the DANDY data, as we approach one year of longitudinal follow up. The most commonly used definition of nicotine dependence appears in the Diagnostic and statistical manual of mental disorders, fourth revision (DSM-IV). The DSM-IV definition is based upon the assumption that “prolonged heavy use” of nicotine is required before physiologic dependence can occur, but acknowledges that “how quickly dependence develops is unclear”.2 Since the DSM-IV definition of nicotine dependence does not allow for the possibility that dependence might start before “prolonged heavy use”, the DSM-IV criteria were not used in this study. Accordingly, subjects were not diagnosed as being nicotine dependent, or experiencing a “withdrawal syndrome” according to DSM-IV criteria. Rather, we report only on whether subjects report any individual symptoms that are associated with dependence. To study the onset of the first symptoms of nicotine dependence, a cohort of 681 seventh grade students (age 12–13 years) were enrolled in a longitudinal study. Subjects are interviewed individually in school three times each year. Four years of data collection are planned. This report presents the data from the first three interviews. 
The study is being conducted in two small cities in central Massachusetts with populations of 38 000 and 41 000 in 1990, per capita income below the state average, and youth and adult smoking rates higher than the state average, but similar to national rates.24 25 There were 900 seventh grade students in the seven public schools in these two cities when the study began in January 1998. The following factors contributed to the selection of these cities: their large and ethnically diverse student bodies, the cooperation of the school administrations, and student tobacco use rates comparable to national averages.

ASSEMBLY OF THE COHORT

Considerations of statistical power and anticipated attrition indicated that a minimum initial sample size of 650 would be required to allow for planned regression analyses. With the approval of the committee for the protection of human subjects in research at the University of Massachusetts Medical School, the parents of all seventh graders were sent two letters describing the study and were asked to respond if they did not want their child to participate. All students who were not eliminated by this process were assigned random numbers, and the first 650 were invited to participate. Prior tobacco use did not preclude participation. Students who declined to participate were replaced by continuing down the list of random numbers until 650 had agreed to participate. The initial 650 interviews were completed ahead of schedule, allowing the sample size to be expanded to 681 as additional students were sequentially invited to participate based upon their random number assignment. Subjects were told that the study concerned tobacco, and those who assented to participate were promised confidentiality. No subjects were added after the first set of interviews were completed in March 1998. The third interviews were completed in December 1998. 
The survey instrument collected detailed information about prior and current tobacco use including the duration of use, the frequency of use, the amount used, the pattern of use, the types of tobacco used, periods of abstinence, and attempts to quit smoking. Students were asked to provide exact dates for the first puff, the first inhalation, the first monthly use, the first daily use, and the first occurrence of 11 symptoms of dependence (table 1). To determine how symptoms of nicotine dependence should be identified, a review of the literature was conducted to locate validated survey items used for this purpose in previous studies.3 16 18 19 26 27 Twelve items were identified and pilot tested on a population of adolescents to eliminate those which produced positive responses in non-smokers (false positives). An item from the National Household Survey on Drug Abuse read “Have you ever felt addicted, or dependent on tobacco?”19 Many smokers responded “I'm not addicted, but I am dependent”, meaning that they depended on tobacco to relieve stress. This item was retained without the reference to dependence. Some subjects who had never smoked responded affirmatively to the question “Have you ever felt like you really needed a cigarette?”. They felt they needed to smoke to be popular. Other non-smokers reported experiencing “strong cravings” to smoke when seeing other people smoke. Items regarding craving and needing to smoke were retained in the questionnaire but were not included as symptoms of nicotine dependence for this analysis because of these false positive responses. Individual interviews were conducted in privacy in the schools. Interviewers followed a protocol but were instructed to explore positive responses to dependency symptoms in more depth. All subjects are interviewed three times annually, whether or not they have ever used tobacco. 
To evaluate the specificity of the survey items further, all subjects, including those who had never used tobacco, were asked questions 3–11 in table 1 at the baseline survey. Most of the questions had to be reworded slightly to make sense to non-smokers. For example, the question “When you tried to stop smoking did you feel more irritable because you couldn't smoke?” was changed to “Do you feel more irritable when you can't smoke?”. Four techniques proven to facilitate the accurate recall of dates and events were employed during interviews.28 29 These included the use of “personal landmarks”, “bounded recall”, “decomposition”, and a visual aid in the form of a personalised calendar.28 29 A calendar of significant events was created for each tobacco user and brought to each interview to serve as a memory aid and to assist in establishing the timing and sequence of events. Specific dates for events were recorded when available. Otherwise, if an event was recalled to have occurred at the beginning of the month it was recorded as the seventh of the month, the middle of the month as the 15th, and the end of the month as the 25th. Elapsed time was measured in completed weeks. Subjects were considered to be tobacco users if they had ever used any form of tobacco. Subjects who had at any time smoked at least two cigarettes within a two month period were considered to be monthly smokers. Thus, the monthly smoker category could theoretically include subjects who were daily smokers from their first cigarette and subjects who had discontinued tobacco use after smoking just two cigarettes within the same week. A subject who had smoked one cigarette every other month for years would not be considered to be a monthly smoker. The onset of monthly smoking was defined as the point in time when the subject first smoked with a frequency of at least once per month.
Nicotine dependence symptoms were operationally defined as follows: loss of control over the amount or duration of use as indicated by items 1 and 2 in table 1; an admission of feeling addicted to tobacco (item 3); difficulty controlling the behaviour as indicated by a positive response to item 6; or self report of any of the symptoms of nicotine withdrawal shown in table 1 (items 7–11). Subjects were considered to have experienced an unsuccessful quit attempt if they had made a conscious decision to discontinue tobacco use but resumed use within three months. The three month cutoff was chosen to reduce the likelihood of attributing a resumption in use to dependence when it may have been caused by a change in peer group or other factors. To reduce the possibility that resumed smoking before the three month cutoff might also be caused by factors other than dependence, the interviewer inquired as to the reason for resumed smoking and made a determination as to whether the event should be counted as a relapse. The latency to the onset of the first symptom of dependence was defined as the number of completed weeks that elapsed between the initiation of monthly smoking, as defined above, and the date of the earliest presenting symptom of dependence. The first puff on a cigarette dated back to kindergarten for several subjects, and there was often a gap of several years between the first and second cigarettes. The date for the initiation of monthly smoking was therefore judged to be a superior baseline for measures of latency. Subjects reporting symptoms within the first week of monthly smoking have a latency of zero completed weeks. In the rare case of a symptom preceding monthly smoking, the latency had a negative value. Analyses were performed to determine which symptoms were the first to present. If two or more symptoms appeared on the same day, each was counted as a presenting symptom.
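The date-handling rules described above (vague recollections mapped to the 7th, 15th, or 25th of the month; latency measured in completed weeks; zero for first-week symptoms; negative values when a symptom preceded monthly smoking) can be sketched in a few lines. This is an illustrative reconstruction only, not the study's actual code; the function names, the dictionary keys, and the example dates are invented for the sketch.

```python
from datetime import date

def impute_day(year, month, part):
    """Map a vague within-month recollection to a calendar date,
    following the study's rule: beginning -> 7th, middle -> 15th,
    end -> 25th. 'part' is a hypothetical label for the recollection."""
    day = {"beginning": 7, "middle": 15, "end": 25}[part]
    return date(year, month, day)

def latency_weeks(monthly_onset, first_symptom):
    """Completed weeks from onset of monthly smoking to the first
    symptom of dependence: 0 within the first week, negative if the
    symptom preceded the onset of monthly smoking."""
    days = (first_symptom - monthly_onset).days
    if days >= 0:
        return days // 7          # completed weeks after onset
    return -((-days) // 7)        # completed weeks before onset, negated

# Hypothetical subject: monthly smoking recalled as "mid-January 1998",
# first symptom recalled as "early April 1998".
onset = impute_day(1998, 1, "middle")       # 1998-01-15
symptom = impute_day(1998, 4, "beginning")  # 1998-04-07
print(latency_weeks(onset, symptom))        # 11 completed weeks
```

A symptom three days after onset would yield a latency of 0, matching the paper's convention that first-week symptoms count as zero completed weeks.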
This report reflects the status of the subjects at the completion of the third interview, which occurred 8–11 months after the first interview. The Student's t test was used to compare means, and a probability value of p < 0.05 was used as the test of significance. Some subjects reported that they had experienced symptoms of dependence before the first interview. Reporting of these events might require the subject to recall information over a period of time that was greater than the four month interval between interviews during the prospective portion of the study. To test for possible recall bias, separate analyses were run to compare the results from subjects who were required to recall events over periods longer than four months (long recall) and those who were not (short recall). This also allowed us to evaluate the potential impact of repeatedly asking subjects if they had experienced symptoms of withdrawal by comparing those who reported symptoms at the first interview to those who reported them in subsequent interviews.

RESULTS

The parents of 85 (9.4%) of the 900 seventh grade students withheld permission for participation. Forty students (26 boys and 14 girls) declined to participate (5.5% of the 721 invited); 39 of them were in the same school system. These refusals are attributed almost entirely to a few teachers who discouraged the participation of their students, possibly because of concern over the disruption of class time. The 681 subjects who comprise the initial cohort represent a response rate of 94.4% of the 721 students who were invited, and 75.7% of all seventh graders (n = 900). At entry, subjects' ages ranged from 11–15 years (mean age 12.6 years). Males represented 52% of the study cohort and 49% of the student body. The racial and ethnic makeup of the study population (67% white, 20% Hispanic, 5% African American, 5% Asian, and 3% other) was similar to that of the entire student body (63% white, 25% Hispanic, 3% African American, and 3% Asian).
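The recruitment percentages quoted above follow directly from the counts reported in the text, and can be checked with a few lines of arithmetic. This is just a verification sketch of the reported figures, not study code; the variable names are invented.

```python
# Counts reported in the text.
seventh_graders = 900   # all seventh grade students
parental_refusals = 85  # parents withholding permission
invited = 721           # students invited to participate
student_refusals = 40   # students who declined
enrolled = 681          # initial cohort

print(round(100 * parental_refusals / seventh_graders, 1))  # 9.4
print(round(100 * student_refusals / invited, 1))           # 5.5
print(round(100 * enrolled / invited, 1))                   # 94.5 (reported as 94.4%)
print(round(100 * enrolled / seventh_graders, 1))           # 75.7
```

The one discrepancy is trivial: 681/721 is 94.45%, which the paper appears to truncate to 94.4% rather than round.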
Subjects' cumulative experience with tobacco is presented in table 2 for the first and third interviews. There were no regular users of cigars or smokeless tobacco. By the third interview, conducted in October through December of 1998, subjects had moved from seventh to eighth grade and 55 subjects (8%) had been lost to follow up, almost entirely as a result of moving out of the area. Compared to those who remained in the study, subjects who were lost were more likely to have tried cigarettes (41% v 27%) and to have mothers who currently smoked (53% v 33%) and fathers who currently smoked (51% v 37%). Table 3 compares self reported symptoms of dependence at baseline for 205 subjects who had ever tried tobacco and 476 subjects who had not. Positive responses to the dependency questions among non-users were rare for all items, but were most frequent concerning cravings and needing to smoke, items which had been excluded by design. Each symptom of dependence was also denied by the vast majority of tobacco users. Symptoms of dependence were examined in the 95 subjects who had reported smoking monthly by the third interview and had completed all three interviews (table 4). The monthly smoker category includes 42 subjects who had smoked daily and 25 (26%) former smokers. Sixty (63%) subjects reported having experienced one or more of the nine symptoms listed in table 4 (range 1–9). Feeling addicted was the most common initial symptom, while feeling strong urges, irritable, nervous, restless or anxious when unable to smoke were the symptoms most commonly reported overall. Of the 60 symptomatic subjects, 37 (62%) had experienced their first symptom before smoking daily or began smoking daily upon experiencing their first symptom.
Of the 42 subjects who had smoked daily, six (14%) denied all symptoms of dependence, 12 (29%) had experienced one or more symptoms before—or simultaneous with—daily smoking, and 24 (57%) experienced symptoms some time after the onset of daily smoking. The time elapsed from the onset of monthly smoking, as defined above, to the first symptom of dependence is plotted in fig 1. The percentages are based upon the 95 subjects who had ever smoked monthly, including 35 subjects who had not experienced symptoms. Nearly one quarter (22%) of all monthly smokers (21/95) had reported symptoms by the end of the first month. Sixteen subjects reported symptoms within two weeks of the onset of monthly smoking, representing 25% of the 60 subjects reporting symptoms (median 14.5 weeks, mean (SD) 35.3 (48.7) weeks). Since subjects were asked if they smoked now because it is really hard to quit, the date for this event was the date of the interview and not when the subject first failed in an attempt to quit smoking. For each of the remaining eight symptoms, one or more subjects reported experiencing the symptom within two weeks of the onset of monthly smoking (table 4). The median number of weeks to the onset of symptoms is necessarily computed based only on those subjects who have reported a symptom, and column 4 of table 4 should be interpreted accordingly. The potential role of recall bias was assessed by comparing the responses of subjects who reported the onset of symptoms more than four months before the first interview (long recall) with those of subjects reporting symptoms after that date (short recall). Subjects with a long recall reported a slightly longer latency (mean 38.4 weeks) than those with a short recall (mean 33.1 weeks), but the difference was not significant (p = 0.7).

DISCUSSION

This study followed a cohort of young adolescents to observe the development of symptoms associated with nicotine dependence.
A quarter of the subjects who reported one or more symptoms of nicotine dependence reported experiencing their first symptom within two weeks of the onset of monthly smoking as defined above. Several subjects reported symptoms within days of starting. Is this plausible? Nicotine causes an increase in the number of high affinity nicotinic cholinergic receptors in the brain structures associated with the reward pathway in both humans and rodents.30-33 The number of these receptors increases after the second dose of nicotine.31 The increase parallels the development of tolerance, and receptor numbers decline after the drug is stopped, coinciding with the withdrawal syndrome.30 33 That these high affinity receptors might play a role in dependence is also suggested by experiments with genetically altered mice.34 Mice which lack the high affinity nicotinic cholinergic receptor will not self administer nicotine.34 Nicotine infusions result in maximal up-regulation of receptors in just four days in mice, and in 10 days in rats.32 33 The time course for up-regulation in humans has not been determined. The up-regulation of nicotinic receptors has not been established as the mechanism causing nicotine dependence, but the rapidity with which these changes in brain structure appear makes it plausible that the first symptoms of dependence might also appear rapidly. As fig 1 demonstrates, subjects who reported experiencing a rapid onset of symptoms of dependence are not outliers, but represent a sizeable proportion of all smokers and nearly half of those who have reported symptoms thus far. In this study, symptoms of nicotine dependence were reported to be present in many smokers before daily smoking. These results are consistent with previous reports,3 16 17 and indicate that daily smoking is unlikely to be a prerequisite for the development of nicotine dependence. Subjects who have never smoked daily can fail in their cessation attempts.
These data contradict commonly held beliefs, and the tendency might be to attribute them to methodological problems. The symptoms assessed in this study are subjective and were assessed through self report. Self reports of withdrawal symptoms by adolescents have shown good correlation with scores on the modified Fagerstrom Tolerance Questionnaire.35 The validity of self reports of nicotine withdrawal symptoms in adults has been established by independent observer rating and salivary cotinine concentrations.1 17 Some have postulated that youths' experience of withdrawal symptoms may be influenced by their expectations.3 36 This raises the question as to whether our repeated inquiries regarding symptoms of dependence may have prompted youths to over report symptoms in subsequent interviews. There was no significant difference in the rapidity of onset of symptoms among those who had reported symptoms during the first interview and those who reported symptoms only after repeated interviews. This makes it unlikely that our results are due to repeated prompting. Each of the items used as criteria in this study had been validated to some extent and used extensively to assess nicotine dependence in prior studies.3 16 18 19 26 27 The specificity of the items used was tested by administering them to non-smokers, and items that demonstrated greater than 1% false positives after rounding were excluded. Since symptoms associated with nicotine withdrawal, such as irritability, can have other causes, these symptoms were counted only if subjects attributed them to nicotine withdrawal. Since a rapid onset was documented for each of eight symptoms, all eight survey questions would have to be defective to alter the conclusion that dependence can begin rapidly. The possibility that the early onset of symptoms was an artifact of recall bias from the retrospective portion of the study was ruled out.
The use of biochemical measures of nicotine intake was considered but not employed, because such measures cannot reliably differentiate between non-smokers and the occasional smokers who are the primary focus of this study.17 21 This is an area for future research. One limitation of this study is the relatively small number of subjects (60) reporting symptoms of dependence thus far. Another limitation is the narrow age range of our subjects; it is possible that the time course for the onset of dependence might be different in subjects who initiate tobacco use at different ages. Among rats, adolescents are more sensitive than adults to some of the effects of nicotine.37 38 Human adolescents may also be more sensitive to nicotine's effects. Individuals who initiate tobacco use during early adolescence are more likely to become dependent, have more difficulty quitting, smoke for a greater number of years, and smoke more heavily.39-42 It is clear that adolescents and adults experience the same type of nicotine withdrawal symptoms, but we do not know if the intensity of the symptoms and the ability to cope with them differs between these groups. More research is needed to sort out these issues. Based upon the data presented here, we offer a model describing three groups of individuals distinguished by their susceptibility to becoming dependent on nicotine. The rapid onset group would be those who develop symptoms of dependence within days or weeks of initiating monthly use. Several of our subjects seemed to describe a phenomenon akin to “love at first sight”, sensing immediately that nicotine had a powerful influence on them. A second group is composed of individuals who experience a slower onset of symptoms of dependence. These individuals may require a more prolonged exposure to nicotine, at higher dosages, before dependence begins.
Included in this group would be those individuals who do not report symptoms until they have been smoking for a few years. Elucidation of the physiological or psychological basis for the observed differences in the speed of onset may make it possible to establish a cut point between the rapid onset and slower onset groups. A third group represents individuals who are particularly resistant to developing dependence. Chippers—adults who smoke up to five cigarettes per day over many years with no evidence of dependence—would fall into this group.11 14 Our data suggest that the concentrations of nicotine to which chippers are exposed are more than adequate to cause dependence. It is too early to identify such individuals in our study, for we do not know how long a person would have to smoke without symptoms of dependence before it could be concluded that the risk of future dependence is minimal. Within species, individual humans and genetically distinct strains of animals can differ greatly in their responses to the effects of nicotine.43 44 Animal studies provide biological plausibility for a model of genetically determined differences in individual susceptibility to nicotine dependence.45 46 Our data suggest that the latency to the onset of dependence might represent a useful phenotypic trait to study in future genetic research. The use of the term “experimenters” to refer to all less-than-daily smokers should be re-examined given the proportion of these individuals who already display symptoms of dependence.47 The DANDY study will continue with an examination of the pattern of smoking and the quantities consumed at the onset of the first symptom of dependence.
The presence of one symptom does not meet the diagnostic criteria for nicotine dependence offered in DSM-IV,2 although the age of onset of nicotine dependence is defined as the age at which the first symptom of dependence occurred.48 Nicotine dependence typically begins as a paediatric condition, yet current definitions are based upon the study of adults. For example, the boundary model of nicotine dependence incorporates the observation that adult smokers experience physiologic withdrawal symptoms when daily intake falls below 8–12 cigarettes and have considerable difficulty maintaining intake below this level.49 While this model is a useful conceptualisation of dependence in adults, it failed to predict that youths would have symptoms of physical dependence before ever smoking daily. Nicotine dependence may have different manifestations in youths and adults, and current definitions of nicotine dependence need to be re-examined, especially in regard to their applicability to youths. This study was funded by grant number CA77067-03 from the National Cancer Institute. The opinions expressed in this paper are those of the authors and do not necessarily represent the official views of the National Cancer Institute.
FREEMASONRY IN PERSPECTIVE

A peculiar System of Morality

Freemasonry is so frequently quoted as 'a peculiar system of morality, veiled in allegory, and illustrated by symbols', but let us now examine that statement with a view to finding out just what is meant by the phrase and how it arose. 'A peculiar system of morality' - well - word values tend to change over the years, and the word 'peculiar' in this sense means particular or special; the morality in question has its roots in a philosophy and a code inspired by the bible as a whole. In mediaeval times skilled craftsmen in various trades banded together to protect their crafts and permitted only those who had been trained, taught, proved, and trusted to pursue their skills. It was a means to prevent pirates from producing inferior work and thus betraying the trust of the architect, the master, or the commissioner of the work. From such early control, development escalated in the 14th to the 17th centuries, and there is ample evidence in both England and Scotland that such trade control included instruction in matters beyond craft and skill; traces of that form of instruction can be found in modern times. As an illustration let us take the little booklet supplied on admission to the Freedom of the City of London, which is entitled Good Advice to Apprentices; or The Covenants of the City Indenture (familiarly Explained and Enforced By Scripture).
From a copy dated 1863, the first two items, of the eleven that are 'familiarly Explained', are given here. 'During which term the said Apprentice his Master faithfully shall serve' - that is, he shall be true and just to his Master in all his dealings, both in word and deed; he must not only keep his hands from picking and stealing, and his tongue from lying and slandering; he must also abstain from doing him any manner of injury, by idleness, negligence, or carelessness; by deceiving, or defaming, or any kind of evil speaking; but he must learn and labour to do him a true and real service. Several biblical quotations are listed in support of those injunctions, including: Ye must be faithful in all things. (Timothy iii, 11) In all your labours let no iniquity be found. (Hosea xii, 8) and in addition to those there are quotations from Leviticus xix, 11; Ephesians iv, 25; Deuteronomy xxv, 16; and Proverbs xii, 19. The next example is: 'His secrets keep' - that is, he shall conceal the particular secrets of his art, trade, or science, without divulging or making any one privy to them to the detriment of his Master, whose interest may very much depend on a peculiar management and knowledge of his business. To behave thus is to serve faithfully; and fidelity is the glory and perfection of a servant, as his want of it is his greatest discredit and reproach. Only one biblical extract is given in support of that: Discover not a secret to another, lest he that heareth it put thee to shame, and thine infamy turn not away. (Proverbs xxv, 9, 10) That booklet perpetuates injunctions similar to those written into the Old Charges dating from the 14th century. It was from those manuscripts that the Revd. James Anderson compiled the first book of Constitutions of the Freemasons in 1723. It was officially sanctioned by the premier Grand Lodge founded in London in 1717, and became the means by which Speculative Freemasonry was to be governed.
Under the sub-heading 'City Freedom' in the Good Advice booklet the following appears: Apprentices who have faithfully served their Masters can obtain the Freedom of the City, which confers many advantages, for the sum of 5s only. And that is followed by a Note which states: Masters should enrol their apprentices at the Chamberlain's Office within twelve months from the date of their Indentures, it being for their mutual advantage. ... Persons who give false testimony, forfeit their freedom. All who come to the Chamberlain's Office to enrol, turn over, or make free their Apprentices, must bring the copies of their own freedom. The Entered Apprentice was thus guided, encouraged, taught the skills of the craft, and if he faithfully served his Master for the period of indenture, at least seven busy years, he obtained the Freedom of the City of London and, by becoming a Fellow of his craft, was then on his way to becoming a Master if that was his ambition. But, according to a reference quoted by Douglas Knoop in The Mason Word, his Prestonian Lecture for 1938: 'Actually fewer than 50 per cent of the apprentices bound in London took up their freedom.' The earliest record among the surviving Old Charges is the oft-quoted Regius Poem, or Halliwell MS, dated c. 1396. It is headed in Latin - 'Here begin the constitutions of the art of Geometry according to Euclid' - and among the fifteen Points and the fifteen Articles is the following, quoted in modern English:

The third Point must be severely with the 'prentice know it well,
His master's counsel he keep and close, and his fellows by his good purpose;
The privities of the chamber tell he to no man, nor in the lodge whatsoever they do;
Whatsoever thou hearest or seest them do, tell it to no man wheresoever you go;
The counsel of the hall, and even of the bower, keep it well to thy great honour,
Lest it would turn thyself to blame, and bring the craft into great shame.
(From a modern transcript by Roderick H Baxter, Master of Quatuor Coronati Lodge, in British Masonic Miscellany Vol 1) It is worthy of notice here that the Regius Poem ends with the expression 'So mote it be', and that archaic expression is still used in Freemasonry. There is no question that Freemasonry was and still is 'a peculiar system of morality' that has stood the test of time. The essence of the principles then taught is still to be found in the modern Charge after Initiation, the first printing of which was by W. Smith in The Pocket Companion, published in 1735, and it has remained unchanged in its basic wording.

Veiled in Allegory

Let us turn to the expression 'veiled in allegory' and, in that connection, note that the bible is full of accounts of incidents and stories that cannot possibly stand up to modern analysis, and in consequence has provided much that has to be taken as allegory. Indeed the most effective teaching, designed to capture full interest, was given in parable form using an example that was common knowledge. Perhaps the clearest illustration of this is given in the Gospel According to St. Mark (chap. iv, 2-9) in the story of the sower who went forth to sow. ...and as he sowed, some fell by the wayside, and the fowls of the air came and devoured it up. And some fell on stony ground, where it had not much earth: and immediately it sprang up, because it had not depth of earth: But when the sun was up, it was scorched; and because it had no root, it withered away. And some fell among thorns, and the thorns grew up, and choked it, and it yielded no fruit. (but) others fell on good ground, and did yield fruit that sprang up and increased; and brought forth, some thirty, and some sixty, and some hundred(fold).
Communicating in that manner, in whatever subject, but based upon elements already known and understood by an audience, has its greatest value in that it can be esoteric and therefore selective, separating those who are 'properly prepared' to appreciate an inner meaning of an otherwise plebeian story, but of interest to everyone. The story just quoted ends with the comment: 'And he said unto them, He that hath no ears to hear, let him hear', or in other words - he who understands, will understand! Stories from the bible have long been the subject of Mummers Plays, Miracle Plays, Morality and Passion Plays. They portrayed incidents that people learned as children and that stayed with them all their lives, which were, in those days, centred almost entirely upon church or cathedral. Dressing up and acting in a fantasy world was not only an attraction for performers and audiences alike; the church retained some control over the text, which paraphrased the sacred writings. Conder also gave lists of various towns and cities to shew the proliferation, and here is a random choice as an example of that:

48 plays listed at York in the year 1430
25 at Chester from 1268 to 1577
42 at Coventry in 1468
30 at Wakefield in 1425
27 at Newcastle from 1285 to 1675

The period that he took ranged from the 12th to the 17th centuries, and in that time similar evidence was forthcoming from other places in England, from north to south and from east to west. Various parts of London where plays are known to have been presented are also mentioned but, regretfully, no texts have survived in that connection. The only subject related to building is the one entitled 'Building of the Ark and the Flood' at Wakefield, but there is no entry as to who performed it; at Newcastle it was appropriated by the Shipwrights under the title 'Noah's Flood'; in that city it is even possible that the Master Mariners may have had something on that theme.
The carpenters had the 'Burial of Christ' and the Masons had the 'Corpus Christi' Plays; but nowhere did the masons have a play linked with their craft, and quite often they joined with another craft for their project. Nowhere is the building of Solomon's Temple shewn to have been a subject among the extensive list, so one might search in vain for traces of the Hiramic Legend; the Morality Plays may well have provided a pattern or a form for it when it did arise for adoption. The earliest record of it is given in the masonic exposure, Masonry Dissected, written and published by Samuel Prichard in 1730. There is no mention of the building of King Solomon's temple in the earliest manuscript, the Regius Poem of c. 1396, and it received only scant mention in the Cooke MS of c. 1410. Whilst in that one the central character is not named, he is identified there as '... the king's son, of Tyre, as his (Solomon's) master mason'. Into the next century, in the Dowland MS of c. 1550, the reference is: The king that men called Iram . . . had a son (named) Aynon, and he was Master of Geometrie, and was chief Master of all his Masons and was Master of all his gravings and carvings, and all manner of Masonrye that belonged to the Temple. In that case not only is Hiram Abif deemed to be the son of the King of Tyre, a commonly held interpretation of the name, but we find one of a large variety of spellings invented or copied phonetically for the master craftsman. But there is absolutely nothing about the Hiramic legend, which surely must be treated as the most prominent allegory that was still to come into Freemasonry. In 1723 the Revd. James Anderson compiled and published the first book of Constitutions of the Freemasons, in which he included a so-called history of the mason craft, both operative and speculative, which he gathered from the manuscripts of Old Charges, where legend, myth, and fairy tale often became confused with history.
Whilst he gave much attention to the biblical account of the master craftsman being sent by Hiram King of Tyre to Solomon King of Israel, and to interpretation of the Hebrew construction of the words 'Hiram' and 'Abif', there was no mention of any drama involving his death, which is, of course, legendary, having absolutely no foundation in fact or biblical history because it is pure fiction. In Anderson's 2nd edition, published in 1738, eight years after Prichard's exposure, Masonry Dissected, the examination of the Hebrew construction is repeated, but the subject is taken a step further by the following footnote: But tho' Hiram Abif had been a Tyrian by Blood, that derogates not from his vast capacity; for Tyrians now were the best artificers, by the encouragement of King Hiram: and those Texts testify that God had endued this Hiram Abif with Wisdom, Understanding, and mechanical Cunning to perform every Thing that Solomon required, not only in building the Temple with all its costly Magnificence, but also in founding, fashioning and framing all the holy Utensils thereof, according to Geometry, and to find out every Device that shall be put to him! And the Scripture assures us that He fully maintain'd his Character in far larger Works than those of Aholiab and Bezalleel, for which he will be honoured in the Lodges til the End of Time. Anderson's last remark there - 'for which he will be honoured in the Lodges till the End of time' - is probably an indication of the use of the drama, after the style of the Miracle Plays, but in this case performed under tyled conditions, as they are still performed in some Jurisdictions. Regarding the completion of the Temple, Anderson wrote: It was finish'd in the short space of 7 Years and 6 Months, to the Amazement of the World when the Cape-stone was celebrated by the Fraternity with great Joy.
But their Joy was soon interrupted by the Sudden Death of their dear Master Hiram Abbif, whom they decently interred in the Lodge near the Temple, according to ancient Usage. After Hiram Abbif was mourn'd for, the Tabernacle of Moses and its Holy Reliques being lodged in the Temple, Solomon in a General Assembly dedicated or consecrated it. In that account the 'sudden death' happened after the completion of the Temple and not during its construction. In accordance with the edict - '. . . . he shall build an house unto my name' - King Solomon dedicated the temple to the Holy Name, or in Hebrew terms Ha Shem. The Holy Name is allusive in that whilst both Enoch and Noah 'walked with God' (Gen v, 22: vi, 9), there is no mention in the bible of them being given the Name. Biblical records state that the Patriarch Abraham, Hagar the mother of Ishmael, and the Patriarch Isaac 'called upon the name of the LORD', which tends to credit them with knowing it (Gen. xii, 8; xiii, 4; xvi, 13; xxvi, 25), but it would appear that the name granted to them was of descriptive character only, and that is borne out by the statement of Moses - 'I appeared unto Abraham, unto Isaac, and unto Jacob, by the name of God Almighty (in Hebrew - El Shaddai), but my name JEHOVAH (in Hebrew - Jod He Vav He) was I not known to them' (Exod. vi, 3). The name JEHOVAH is an Anglicised manufactured word to accommodate the Hebrew characters - the Tetragrammaton - Ha Shem - and as they are consonants, the vowels known only to the priesthood and with such limited use by them, the original pronunciation has been lost. The possession of the name of a person meant a close affinity or relationship with that person, but possession of the Holy Name was the highest privilege and, by masonic fable, was known by the three Grand Masters.
In order to avoid its full pronunciation the word was shared between them by syllables, and the 'sudden death' of one of them brought an end to that practice; there was no question of the appointment of another to replace him, and that gave rise to a substitute - 'the Mason Word'. The attempt to revive or 'raise' Hiram Abbif in order to recover from the dead, as it were, the secret that he had in life has been submerged in a welter of interpretations. These include the fable of the Noah incident mentioned in some of the Old Charges, a subject not from biblical history; the raising of the widow's son by the action of Elijah (1 Kings xvii, 17-23); a similar raising of the son of the Shunammite woman by Elisha (2 Kings iv, 34-35); and the raising of the young man by St. Paul (Acts xx, 9-12). They are resurrection allegories, effected through divine influence, but nowadays compared with the 'kiss of life' action. In a symbolical interpretation, 'The Name' or 'the Mason Word' is ever lost whenever mankind turns away from faith in the Almighty, in whatever form, or by whatever Name He is known. Biblical history records the conquering of Jerusalem, the destruction of Solomon's temple, the Exile of the Jews to Babylon, and the subsequent return to Jerusalem to re-build the City and a Second Temple. That sequence provided the 'Recovery' theme - the completion of the Master Mason's degree - and is a subject dealt with in the Royal Arch.
Illustrated by Symbols
'Illustrated by symbols' is the final item for this examination, and here we have to distinguish between a tangible object, or symbol, upon which has been bestowed a meaning or representation completely different from its form (e.g., an anchor is just an anchor to the seafarer, but symbolically it is widely taken to represent Hope), and the intangible - and what better example of that than a handshake to represent friendship in greeting; the whole world seems to know that it is a symbolic means of recognition among Freemasons! Symbols may be universal and can transcend all language, classic examples of which are road and traffic signs, but even such common signs or symbols may be endowed by organised groups or societies with meanings known only to themselves. Freemasonry abounds with such symbols, through which abstract ideas may be presented; they provide the visual aid. Not all that Albert G. Mackey wrote on Freemasonry is acceptable to modern masonic students, but that does not mean that all his work is dismissed. Here is what he had to say on Symbolism in his Encyclopedia, first published in 1873: In Freemasonry, all the instruction in its mysteries is communicated in the form of symbols. Founded, as a speculative science, on an operative art, it has taken the working-tools of the profession which it spiritualizes, the terms of architecture, the Temple of Solomon, and everything that is connected with its traditional history, and, adopting them as symbols, it teaches its great moral and philosophical lessons by this system of symbolism. Mackey also wrote: The older the religion, the more the symbolism abounds. Modern religions may display their dogmas in abstract propositions; ancient religions always conveyed them in symbols.
Thus there is more symbolism in the Egyptian religion than the Jewish, more in the Jewish than the Christian, more in the Christian than the Mohammedan, and lastly more in the Roman (Catholic) than the Protestant . . . Any inquiry into the symbolic character of Freemasonry must be preceded by an investigation of the nature of symbolism in general, if we would properly appreciate its particular use in the organisation of the Masonic institution. It is possible that some people might argue with that, but it does provide food for thought! In reply to comments on their Paper, 'Masonic History Old and New', given to Quatuor Coronati Lodge on 2 October 1942 (AQC Vol. 55, pp. 285-323), Douglas Knoop and G. P. Jones stated: There is no evidence to suggest that masons themselves (i.e., operative stonemasons) moralized upon their tools. Though the Regius Poem is full of moral precepts, and the Cooke MS rather less so, in neither of these early manuscripts, nor in later versions of the MS Constitutions, those peculiarly masonic documents written about masons for masons, is there any sort of symbolism based upon masons' tools. Had the masons made use of such symbolism in their teachings, one would have expected some reference to it in those documents. Another useful statement of theirs was 'The philosophy and symbolism of masonry are quite distinct from the history of masonry', and that is a point of differentiation that is constantly overlooked by some freemasons and masonic writers. During the long period of transition from operative to speculative masonry in the 17th and 18th centuries, the scientific, the philosophical and the studious, those who made up the intelligentsia (many of whom indulged in studies of alchemy, mysticism, and Kabbalistic pursuits, providing what has been termed a fringe of the craft), undoubtedly left their marks on its construction.
The mystical writings of such people had a strong influence and would account for the adoption of certain symbolism, traces of which, however slim, are there to be found. Symbols can be classified as a form of pictorial shorthand, examples of which are to be seen in stained glass windows in churches, some of which are indeed visual sermons in themselves. Emblazonment in heraldry also provides examples, where a symbol in that context can mean so much in regard to family name, a line of succession, marriage, property, county, and countless other meanings so cryptically displayed. Symbols therefore can mean all things to all men, but an inner meaning can be made to apply in the context in which persons have been so informed. Tangible symbols of freemasonry are usually explained to the membership in ceremonial or lectures, and in the case of the Lectures, which can be so informative, insufficient use is made of them; there is a lack of stress placed on that area of explanation for much that is contained in the book of Working used in a member's lodge. The intangible symbols are much more difficult for brethren to appreciate, for they can often be bent to suit whatever interpretation may be preferred, and an inner meaning only applies in circumstances in which one has been so informed. It may be truly said that we are given all the ingredients but the mixing is left to ourselves. Let us take the expression 'The Mason Word', appropriately used by Douglas Knoop as the title of his Prestonian Lecture in 1938. He commented as follows: The justification for stressing the importance of the Mason Word as a factor in the development of masonic ceremonies lies in the fact that it consisted of something substantially more than a mere Word. Thus, the Rev.
Robert Kirk, Minister of Aberfoyle, writing in 1691, says the Mason Word 'is like Rabbinical Tradition, in a way of comment of Jachin and Boaz, the two Pillars erected at Solomon's Temple (1 Kings vii, 21) with an Addition of some secret Signe delyvered from Hand to Hand, by which they know and become familiar one with the other.' The preamble to The Abstract of Laws for the Society of Royal Arch Masons (as it was called when issued in 1778) was clearer on the point, as it included the following: . . . We also use certain signs, tokens and words; but it must be observed, that when we use that expression and say THE WORD, it is not to be understood as a watch-word only, after the manner of those annexed to the several degrees of the Craft, but also theologically, as a term, thereby to convey to the mind some idea of that great BEING who is the sole author of our existence, and to carry along with it the most solemn veneration for his sacred Name and Word, as well as the most clear and perfect elucidation of his power and attributes that the human mind is capable of receiving; . . . The 'Mason Word' is the most intangible symbol of all intangible symbols used in Freemasonry. Without some acquaintance with the Law of Moses, otherwise called the Torah, or the Pentateuch, where we become acquainted with the gradual revelation of His holy will and Word, and the development which ensued from that biblical period, one cannot begin to understand what has now become so obscured. It was not the intention in this short review to take individual symbols as a study, nor to develop a treatise based solely upon symbolism; such an exercise would take several volumes and would raise a proliferation of discussion or argument, sound or otherwise; each would have an interpretation of a sort, some that are held to the exclusion of all else.
However, it must be stressed that the bible, the Patron Saints of the Christian church, and the observances of Holy Days all provided the very foundation for this 'peculiar system of morality'. The system has gathered accretions from other religions and various mystics from different backgrounds, to the extent that its simple form has been swamped; it really has become 'veiled in allegory and illustrated by symbols', some of which have failed to stay the course but nevertheless did leave a mark or trace here and there, to be re-discovered and perhaps enjoyed by the industrious student of Free and Accepted Masonry in the future. The state of contention between brethren regarding some matters that are dealt with in lectures or ceremonial was the subject of an appropriate comment by the author of Three Distinct Knocks, a masonic ritual exposure published in 1760. Here is what he inserted at the end of the part of the Fellow-Craft (p. 45): Some Masters of Lodges will argue upon the Reasons about the holy Vessels in the Temple and the Windows and Doors, the Length, Breadth and height of every Thing in the Temple, Saying, why was it so and so? One will give one Reason; and another will give another Reason, and thus they will continue for Two or Three Hours in this Part and the Master-Part; but this happens but very seldom, except an Irishman should come, who likes to here himself talk, asking, why were they round? Why were they square? Why were they hollow? Why were the Stones costly? Why were they hewn Stones and Sawn Stones, &c. some give one reason and some another; thus you see that every Man's Reason is not alike. Therefore, if I give you my Reason, it may not be like another; but any Man that reads the foregoing and following Work, and consults the 5th, 6th, 7th and 8th Chapters of the first Book of Kings, and the 2nd, 3rd and 4th of the second Book of Chronicles may reason as well as the best of them; . . .
If ever there was a common-sense summing up of the situation, that surely must be it; getting back to basics and building from there, staying within the proper context and treating interpretation for what it is, yet searching among the symbols and allegories to find the intention of the compilers, will help anyone to get Freemasonry into its proper perspective.
Lightning is a powerful natural electrostatic discharge produced during a thunderstorm. Lightning's abrupt electric discharge is accompanied by the emission of visible light and other forms of electromagnetic radiation. The electric current passing through the discharge channels rapidly heats and expands the air into plasma, producing acoustic shock waves (thunder) in the atmosphere. How lightning is formed: The first process in the generation of lightning is the forcible separation of positive and negative charge carriers within a cloud or the air. The mechanism by which this happens is still the subject of research, but one widely accepted theory is the polarization mechanism. This mechanism has two components: the first is that falling droplets of ice and rain become electrically polarized as they fall through the atmosphere's natural electric field, and the second is that colliding ice particles become charged by electrostatic induction. Once charged, by whatever mechanism, work is performed as the opposite charges are driven apart, and energy is stored in the electric fields between them. The positively charged crystals tend to rise to the top, causing the cloud top to build up a positive charge, while the negatively charged crystals and hailstones drop to the middle and bottom layers of the cloud, building up a negative charge. Cloud-to-cloud lightning can appear at this point. Cloud-to-ground lightning is less common. Cumulonimbus clouds that do not produce enough ice crystals usually fail to produce enough charge separation to cause lightning. When sufficient negative and positive charges gather in this way, and when the electric field becomes sufficiently strong, an electrical discharge occurs within the clouds or between the clouds and the ground, producing the bolt.
It has been suggested by experimental evidence that these discharges are triggered by cosmic ray strikes which ionize atoms, releasing electrons that are accelerated by the electric fields, ionizing other air molecules and making the air conductive by a runaway breakdown, then starting a lightning strike. During the strike, successive portions of air become conductive as the electrons and positive ions of air molecules are pulled away from each other and forced to flow in opposite directions (stepped channels called step leaders). The conductive filament grows in length. At the same time, electrical energy stored in the electric field flows radially inward into the conductive filament. When a charged step leader is near the ground, opposite charges appear on the ground and enhance the electric field. The electric field is higher on trees and tall buildings. If the electric field is strong enough, a discharge can initiate from the ground. This discharge starts as a positive streamer and, if it develops as a positive leader, can eventually connect to the descending discharge from the cloud. Lightning can also occur within the ash clouds from volcanic eruptions, or can be caused by violent forest fires which generate sufficient dust to create a static charge. A bolt of lightning usually begins when an invisible negatively charged stepped leader stroke is sent out from the cloud. As it does so, a positively charged streamer is usually sent out from the positively charged ground or cloud. When the two leaders meet, the electric current greatly increases. The region of high current propagates back up the positive stepped leader into the cloud. This "return stroke" is the most luminous part of the strike, and is the part that is really visible. Most lightning strikes usually last about a quarter of a second. Sometimes several strokes will travel up and down the same leader strike, causing a flickering effect.
This discharge rapidly superheats the leader channel, causing the air to expand rapidly and produce a shock wave heard as thunder. It is possible for streamers to be sent out from several different objects simultaneously, with only one connecting with the leader and forming the discharge path. Photographs have been taken on which non-connected streamers are visible, such as that shown on the right (courtesy of NOAA). This type of lightning is known as negative lightning because of the discharge of negative charge from the cloud, and accounts for over 95% of all lightning. An average bolt of negative lightning carries a current of 30 kiloamperes, transfers a charge of 5 coulombs, has a potential difference of about 100 megavolts and dissipates 500 megajoules (enough to light a 100 watt light bulb for 2 months). Positive lightning makes up less than 5% of all lightning. It occurs when the stepped leader forms at the positively charged cloud tops, with the consequence that a negatively charged streamer issues from the ground. The overall effect is a discharge of positive charges to the ground. Research carried out after the discovery of positive lightning in the 1970s showed that positive lightning bolts are typically six to ten times more powerful than negative bolts, last around ten times longer, and can strike several kilometers or miles distant from the clouds. During a positive lightning strike, huge quantities of ELF and VLF radio waves are generated. As a result of their power, positive lightning strikes are considerably more dangerous. At the present time, aircraft are not designed to withstand such strikes, since their existence was unknown at the time standards were set, and the dangers unappreciated until the destruction of a glider in 1999. Positive lightning is also now believed to have been responsible for the 1963 in-flight explosion and subsequent crash of Pan Am Flight 214, a Boeing 707. Subsequently, aircraft operating in U.S.
airspace have been required to have lightning discharge wicks to reduce the chances of a similar occurrence. Positive lightning has also been shown to trigger the occurrence of upper atmospheric lightning. It tends to occur more frequently in winter storms and at the end of a thunderstorm. An average bolt of positive lightning carries a current of 300 kiloamperes (about ten times as much current as a bolt of negative lightning), transfers a charge of up to 300 coulombs, has a potential difference of up to 1 gigavolt (a thousand million volts), dissipates enough energy to light a 100 watt light bulb for up to 95 years, and lasts for tens or hundreds of milliseconds. Heinz Kasemir first hypothesized that a lightning leader system actually develops in a bipolar fashion, with both a positive and a negative branching leader system connected at the system origin and containing a net zero charge. This process provides a means for the positive leader to conduct away the net negative charge collected during development, allowing the leader system to act as an extending polarized conductor. Such a polarized conductor would be able to maintain intense electric fields at its ends, supporting continued leader development in weak background electric fields. During the 1980s, flight tests showed that aircraft can trigger a bipolar stepped leader when crossing charged cloud areas. Many scientists think that positive and negative lightning in a cloud are actually bipolar lightning. To spontaneously ionize air and conduct electricity across it, an electric field strength of approximately 2500 kilovolts per meter is required. However, measurements inside storm clouds to date have failed to locate fields this strong, with typical fields being between 100 and 400 kilovolts per meter.
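The current, charge, voltage, and light-bulb figures quoted above for negative and positive bolts are mutually consistent under the rough model energy ≈ charge × potential difference. The following sketch is only an order-of-magnitude check (the voltage actually collapses during the discharge, so the product overstates the true dissipation somewhat):

```python
# Cross-check of the quoted lightning energy figures via E ≈ Q * V.
# Order-of-magnitude model only: the channel voltage collapses as the
# discharge proceeds, so these products are rough upper estimates.
SECONDS_PER_YEAR = 365.25 * 86400

# Negative bolt: ~5 C transferred across ~100 MV
neg_energy_j = 5.0 * 100e6                        # = 5.0e8 J = 500 MJ, as quoted
neg_bulb_days = neg_energy_j / 100.0 / 86400      # 100 W bulb: ~58 days ("2 months")

# Positive bolt: up to ~300 C across ~1 GV
pos_energy_j = 300.0 * 1e9                        # = 3.0e11 J
pos_bulb_years = pos_energy_j / 100.0 / SECONDS_PER_YEAR  # ~95 years, as quoted

print(neg_energy_j, round(neg_bulb_days), pos_energy_j, round(pos_bulb_years))
```

Both quoted bulb durations fall straight out of the quoted charge and voltage, which suggests the article's numbers were derived the same way.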
While there remains a possibility that researchers are failing to encounter the small high-strength regions of the large clouds, the odds of this are diminishing as further measurements continue to fall short. A theory by Alex Gurevich of the Lebedev Physical Institute in 1992 proposes that cosmic rays may provide the beginnings of what he called a runaway breakdown. Cosmic rays strike an air molecule and release extremely energetic electrons having enhanced mean free paths of tens of centimeters. These strike other air molecules, releasing more electrons which are accelerated by the storm's electric field, forming a chain reaction of long-trajectory electrons and creating a conductive plasma many tens of meters in length. This was initially considered a fringe theory, but is now becoming mainstream because of the lack of other theories. It has recently been revealed that most lightning emits an intense burst of X-rays and/or gamma-rays which seem to be produced during the stepped-leader and dart-leader phases just before the stroke becomes visible. The X-ray bursts typically have a total duration of less than 100 microseconds and have energies extending up to nearly a few hundred thousand electron volts. The presence of these high-energy events matches and supports the "runaway breakdown" theory, and they were discovered through the examination of rocket-triggered lightning, and from satellite monitoring of natural lightning. NASA's RHESSI satellite typically reports 50 gamma-ray events per day, and many of these are strong enough to fit the theory. Additionally, low-frequency radio emissions detected at ground level can detect lightning bolts from upwards of 4000 km away; combining these with gamma-ray burst events detected from above shows overlapping positions and timing. There are problems with the "runaway breakdown" theory, however.
While there seems to be a strong correlation between gamma-ray events and lightning, there are insufficient events detected to account for the amount of lightning occurring across the planet. Another issue is the amount of energy the theory states is required to initiate the breakdown. Cosmic rays of sufficient energy strike the atmosphere on average only once per 50 seconds per square kilometer. Measured X-ray burst intensity also falls short, with results indicating particle energy 1/20th of the theory's value. Early lightning research: During early investigations into electricity via Leyden jars and other instruments, a number of people (Dr. Wall, Gray, and Abbé Nollet) proposed that small-scale sparks shared some similarity with lightning. Benjamin Franklin, who also invented the lightning rod, endeavored to test this theory using a spire which was being erected in Philadelphia. Whilst he was waiting for the spire's completion, some others (Dalibard and De Lors) conducted at Marly in France what came to be known as the Philadelphia experiments that Franklin had suggested in his book. Franklin usually gets the credit, as he was the first to suggest this experiment. The Franklin experiment is as follows: While waiting for completion of the spire, he got the idea of using a flying object, such as a kite, instead. During the next thunderstorm, which was in June 1752, he raised a kite, accompanied by his son as an assistant. On his end of the string he attached a key and tied it to a post with a silk thread. As time passed, Franklin noticed the loose fibers on the string stretching out; he then brought his hand close to the key and a spark jumped the gap. The rain which had fallen during the storm had soaked the line and made it conductive.
However, in his autobiography (written 1771-1788, first published 1790), Franklin clearly states that he performed this experiment after those in France, which occurred weeks before his own experiment, without his prior knowledge. As news of the experiment and its particulars spread, it was met with attempts at replication. However, experiments involving lightning are always risky and frequently fatal. The most well-known death during the spate of Franklin imitators was that of Professor Georg Richmann, of Saint Petersburg, Russia. He had created a set-up similar to Franklin's, and was attending a meeting of the Academy of Sciences when he heard thunder. He ran home with his engraver to capture the event for posterity. While the experiment was underway, a large ball of lightning appeared, collided with Richmann's head, and killed him, leaving a red spot. His shoes were blown open, parts of his clothes singed, the engraver knocked out, the doorframe of the room split, and the door itself torn off its hinges. Although experiments from the time of Franklin showed that lightning was a discharge of static electricity, there was little improvement in theory for more than 150 years. The impetus for new research came from the field of power engineering: power transmission lines came into use, and engineers needed to know much more about lightning. Although causes were debated (and still are, to some extent), research produced a wealth of new information about lightning phenomena, especially the amounts of current and energy involved. The following picture emerged: An initial discharge, or path of ionized air, called a "stepped leader", starts from the thundercloud and proceeds generally downward in a number of quick jumps, of typical length 50 meters, but taking a relatively long time (200 milliseconds) to reach the ground. This initial phase involves a small electric current and is almost invisible compared to the later effects.
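The stepped-leader figures above (jumps of about 50 meters, roughly 200 milliseconds to reach the ground) imply an average descent speed once a cloud-base height is assumed. The 4 km cloud base in this sketch is an assumption for illustration only; the text does not give one:

```python
# Average stepped-leader descent speed implied by the figures in the text.
# The cloud-base height is NOT given in the article; 4 km is an assumed,
# typical value used purely for illustration.
jump_length_m = 50.0      # typical length of one leader jump (from the text)
descent_time_s = 0.200    # ~200 ms to reach the ground (from the text)
cloud_base_m = 4000.0     # assumption

avg_speed_m_per_s = cloud_base_m / descent_time_s   # = 2.0e4 m/s
num_jumps = cloud_base_m / jump_length_m            # = 80 jumps
print(avg_speed_m_per_s, num_jumps)
```

Even at tens of kilometers per second, this descent is orders of magnitude slower than the return stroke described next, which is why the leader phase is dim and the return stroke is what the eye sees.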
When the downward leader is quite close, a small discharge comes up from a grounded (usually tall) object because of the intensified electric field. Once the ground discharge meets the stepped leader, the circuit is closed, and the main stroke follows with much higher current. The main stroke travels at about 0.1 c (100 million feet per second) and has high current for 100 microseconds or so. It may persist for longer periods with lower current. In addition, lightning often contains a number of restrikes, separated by a much larger amount of time, 30 milliseconds being a typical value. This rapid restrike effect was probably known in antiquity, and the "strobe light" effect is often quite noticeable. Positive lightning does not generally fit the above pattern. Types of lightning: Intracloud lightning, sheet lightning, anvil crawlers: Intracloud lightning is the most common type of lightning; it occurs completely inside one cumulonimbus cloud, and is commonly called an anvil crawler. Discharges of electricity in anvil crawlers travel up the sides of the cumulonimbus cloud, branching out at the anvil top. Cloud-to-ground lightning, anvil-to-ground lightning: Cloud-to-ground lightning is a lightning discharge between a cumulonimbus cloud and the ground, initiated by the downward-moving leader stroke. This is the second most common type of lightning. One special type of cloud-to-ground lightning is anvil-to-ground lightning, a form of positive lightning, since it emanates from the anvil top of a cumulonimbus cloud where the ice crystals are positively charged. In anvil-to-ground lightning, the leader stroke issues forth in a nearly horizontal direction till it veers toward the ground. These usually occur miles ahead of the main storm and will strike without warning on a sunny day. They are signs of an approaching storm and are known colloquially as "bolts from the blue".
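The main-stroke speed quoted above, 0.1 c with "100 million feet per second" in parentheses, is just a unit conversion, and a one-liner confirms the two figures agree:

```python
# Unit check on the main-stroke speed: 0.1 c expressed in feet per second,
# confirming the article's "100 million feet per second" parenthetical.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0
FEET_PER_METER = 1.0 / 0.3048             # exact definition of the foot

stroke_m_per_s = 0.1 * SPEED_OF_LIGHT_M_PER_S        # ≈ 3.0e7 m/s
stroke_ft_per_s = stroke_m_per_s * FEET_PER_METER    # ≈ 9.8e7 ft/s
print(round(stroke_ft_per_s / 1e6))                  # ≈ 98 million ft/s
```

The exact value is about 98 million feet per second, so "100 million" is a reasonable round figure.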
Bead lightning, ribbon lightning, staccato lightning: Another special type of cloud-to-ground lightning is bead lightning. This is a regular cloud-to-ground stroke that contains a higher intensity of luminosity. When the discharge fades, it leaves behind a string-of-beads effect for a brief moment in the leader channel. A third special type of cloud-to-ground lightning is ribbon lightning. These occur in thunderstorms where there are high cross winds and multiple return strokes. The winds will blow each successive return stroke slightly to one side of the previous return stroke, causing a ribbon effect. The last special type of cloud-to-ground lightning is staccato lightning, which is nothing more than a leader stroke with only one return stroke. Cloud-to-cloud or intercloud lightning is a somewhat rare type of lightning discharge between two or more completely separate cumulonimbus clouds. Ground-to-cloud lightning is a lightning discharge between the ground and a cumulonimbus cloud, initiated by an upward-moving leader stroke. Most ground-to-cloud lightning occurs from tall buildings, mountains and towers. Heat lightning or summer lightning: Heat lightning (or, in the UK, "summer lightning") is nothing more than the faint flashes of lightning on the horizon from distant thunderstorms. Heat lightning was so named because it often occurs on hot summer nights. Heat lightning can be an early warning sign that thunderstorms are approaching. In Florida, heat lightning is often seen out over the water at night, the remnants of storms that formed during the day along a sea breeze front coming in from the opposite coast. Some cases of "heat lightning" can be explained by the refraction of sound by bodies of air with different densities. An observer may see nearby lightning, but the sound from the discharge is refracted over his head by a change in the temperature, and therefore the density, of the air around him. As a result, the lightning discharge appears to be silent.
Ball lightning is described as a floating, illuminated ball that occurs during thunderstorms. These balls can be fast moving, slow moving or nearly stationary. Some make hissing or crackling noises or no noise at all. Some have been known to pass through windows and even dissipate with a bang. Ball lightning has been described by eyewitnesses but rarely, if ever, captured on camera. The engineer Nikola Tesla wrote, "I have succeeded in determining the mode of their formation and producing them artificially" (Electrical World and Engineer, 5 March 1904). There is some speculation that electrical breakdown and arcing of cotton and gutta-percha wire insulation used by Tesla may have been a contributing factor, since some theories of ball lightning require the involvement of carbonaceous materials. Some later experimenters have been able to briefly produce small luminous balls by igniting carbon-containing materials atop sparking Tesla Coils. Several theories have been advanced to describe ball lightning, with none being universally accepted. Any complete theory of ball lightning must be able to describe the wide range of reported properties, such as those described in Singer's book "The Nature of Ball Lightning" and also more contemporary research. Japanese research shows that ball lightning has been seen several times without any connection to stormy weather or lightning. Ball lightning field properties are more extensive than realized by many scientists not working in this field. The typical fireball diameter is usually standardized as 20–30 cm, but ball lightning several meters in diameter has been reported (Singer). A recent photograph by a Queensland ranger, Brett Porter, showed a fireball that was estimated to be 100 meters in diameter. The photograph has appeared in the scientific journal Transactions of the Royal Society. The object was a glowing globular zone (the breakdown zone?) with a long, twisting, rope-like projection (the funnel?).
Fireballs have been seen in tornadoes, and they have also split apart into two or more separate balls and recombined. Fireballs have carved trenches in the peat swamps in Ireland. Vertically linked fireballs have been reported. One theory that may account for this wider spectrum of observational evidence is the idea of combustion inside the low-velocity region of axisymmetric (spherical) vortex breakdown of a natural vortex (e.g., the 'Hill's spherical vortex'). The scientist Coleman was the first to propose this theory, in 1993, in Weather, a publication of the Royal Meteorological Society. Ball lightning is hardly ever seen; in fact, there are only a few pictures of it. St Elmo's fire was correctly identified by Franklin as electrical in nature. It is not the same as ball lightning. Sprite discharge - courtesy NASA. Sprites, elves, jets, and other upper atmospheric lightning: Reports by scientists of strange lightning phenomena above storms date back to at least 1886. However, it is only in recent years that fuller investigations have been made. This has sometimes been called megalightning. Sprites are now well-documented electrical discharges that occur high above the cumulonimbus cloud of an active thunderstorm. They appear as luminous reddish-orange, neon-like flashes, last longer than normal lower stratospheric discharges (typically around 17 milliseconds), and are usually spawned by discharges of positive lightning between the cloud and the ground. Sprites can occur up to 50 km from the location of the lightning strike, and with a time delay of up to 100 milliseconds. Sprites usually occur in clusters of two or more simultaneous vertical discharges, typically extending from 65 to 75 km (40 to 47 miles) above the earth, with or without less intense filaments reaching above and below. Sprites are preceded by a sprite halo that forms, because of heating and ionization, less than 1 millisecond before the sprite.
Sprites were first photographed on July 6, 1989, by scientists from the University of Minnesota, and were named after the mischievous sprites in the plays of Shakespeare. Research carried out at the University of Houston in 2002 indicates that some normal (negative) lightning discharges produce a sprite halo, the precursor of a sprite, and that every lightning bolt between cloud and ground attempts to produce a sprite or a sprite halo. Research in 2004 by scientists from Tohoku University found that very low frequency emissions occur at the same time as the sprite, indicating that a discharge within the cloud may generate the sprites. Blue jets differ from sprites in that they project from the top of the cumulonimbus above a thunderstorm, typically in a narrow cone, to the lowest levels of the ionosphere, 40 to 50 km (25 to 30 miles) above the earth. They are also brighter than sprites and, as implied by their name, are blue in color. They were first recorded on October 21, 1989, on a video taken from the space shuttle as it passed over Australia. Elves often appear as a dim, flattened, expanding glow around 400 km (250 miles) in diameter that typically lasts just one millisecond. They occur in the ionosphere 100 km (60 miles) above the ground over thunderstorms. Their color was a puzzle for some time, but is now believed to be a red hue. Elves were first recorded on another shuttle mission, this time off French Guiana, on October 7, 1990. ELVES is a somewhat frivolous acronym for Emissions of Light and Very Low Frequency Perturbations from Electromagnetic Pulse Sources. This refers to the process by which the light is generated: the excitation of nitrogen molecules due to electron collisions, the electrons having been energized by the electromagnetic pulse caused by a positive lightning bolt.
On September 14, 2001, scientists at the Arecibo Observatory photographed a huge jet double the height of those previously observed, reaching around 80 km (50 miles) into the atmosphere. The jet was located above a thunderstorm over the ocean and lasted under a second. The lightning was initially observed traveling up at around 50,000 m/s, in a similar way to a typical blue jet, but then divided in two and sped at 250,000 m/s to the ionosphere, where it spread out in a bright burst of light. On July 22, 2002, five gigantic jets between 60 and 70 km (35 to 45 miles) in length were observed over the South China Sea from Taiwan, as reported in Nature. The jets lasted under a second, with shapes likened by the researchers to giant trees and carrots. Researchers have speculated that such forms of upper atmospheric lightning may play a role in the formation of the ozone layer. Lightning has been triggered directly by human activity in several instances. Lightning struck Apollo 12 soon after takeoff, and has struck soon after thermonuclear explosions. It has also been triggered by launching rockets carrying spools of wire into thunderstorms; the wire unwinds as the rocket climbs, making a convenient path for the lightning to use. These bolts are typically very straight. Lightning throughout the Solar System: lightning requires the electrical breakdown of gas, so it cannot exist in the vacuum of space. However, lightning has been observed within the atmospheres of other planets, such as Venus and Jupiter. Lightning on Jupiter is estimated to be 100 times as powerful as, but fifteen times less frequent than, that which occurs on Earth. Lightning on Venus is still a controversial subject after decades of study. During the Soviet Venera and U.S. Pioneer missions of the '70s and '80s, signals suggesting lightning may be present in the upper atmosphere were detected. However, the recent Cassini-Huygens fly-by of Venus detected no signs of lightning at all.
Thunderstorms are the primary source of lightning. Because people have been struck many kilometers away from a storm, seeking immediate and effective shelter when thunderstorms approach is an important part of lightning safety. Contrary to popular notion, there is no 'safe' location outdoors: people have been struck in sheds, makeshift shelters, and the like. A better location is inside a vehicle (a crude type of Faraday cage). Once inside, it is advisable to keep appendages away from any attached metallic components (such as keys in the ignition). Several different types of devices, including lightning rods, lightning arresters, and electrical charge dissipaters, are used to prevent lightning damage and safely redirect lightning strikes. Nearly 2,000 people per year worldwide are injured by lightning strikes, and between 25 and 33% of those struck die. Lightning injuries result from three factors: electrical damage, intense heat, and the mechanical energy which these generate. While sudden death is common because of the huge voltage of a lightning strike, survivors often fare better than victims of other electrical injuries caused by a more prolonged application of lesser voltage. Lightning can incapacitate humans in four different ways:
- Direct strike
- 'Splash' from nearby objects struck
- Ground strike near the victim
- EMP (electromagnetic pulse) from close-proximity strikes, especially during positive lightning discharges
In a direct hit the electrical charge strikes the victim first. Counterintuitively, if the victim's skin resistance is high enough, much of the current will flash around the skin or clothing to the ground, resulting in a surprisingly benign outcome. Splash hits occur when lightning effectively bounces off a nearby object and strikes the victim en route to ground. Ground strikes, in which the bolt lands near the victim and is conducted through the victim via his or her connection to the ground (such as through the feet), can cause great damage.
The most critical injuries are to the circulatory system, the lungs, and the central nervous system. Many victims suffer immediate cardiac arrest and will not survive without prompt emergency care, which is safe to administer because the victim will not retain any electrical charge after the lightning has struck (of course, the helper could be struck by a separate bolt of lightning in the vicinity). Others incur myocardial infarction and various cardiac arrhythmias, either of which can be rapidly fatal as well. The intense heat generated by a lightning strike can cause lung damage, and the chest can be damaged by the mechanical force of rapidly expanding heated air. Either the electrical or the mechanical force can result in loss of consciousness, which is very common immediately after a strike. Amnesia and confusion of varying duration often result as well. A complete physical examination by paramedics or physicians may reveal ruptured eardrums, and ocular cataracts may develop, sometimes more than a year after an otherwise uneventful recovery. Lightning often leaves skin burns in characteristic Lichtenberg figures, sometimes called lightning flowers; these may persist for hours or days, and are a useful indicator for medical examiners when trying to determine the cause of death. They are thought to be caused by the rupture of small capillaries under the skin, either from the current or from the shock wave. It is also speculated that the EMP (electromagnetic pulse) created by a nearby lightning strike can cause cardiac arrest. There is sometimes spectacular and unconventional lightning damage. Hot lightning, which lasts for more than a second, can deposit immense energy, melting or carbonizing large objects. One such example is the destruction of the basement insulator of the 250-metre-high central mast of the long wave transmitter at Orlunda, Sweden, which led to its collapse.
Facts and trivia: A bolt of lightning can reach temperatures approaching 28,000 kelvins (50,000 degrees Fahrenheit) in a split second. This is about five times hotter than the surface of the sun. The heat of lightning which strikes loose soil or sandy regions of the ground may fuse the soil or sand into glass channels called fulgurites. These are sometimes found under the sandy surfaces of beaches and golf courses, or in desert regions. Fulgurites are evidence that lightning spreads out into branching channels when it strikes the ground. Trees are frequent conductors of lightning to the ground. Since sap is a poor conductor, its electrical resistance causes it to be heated explosively into steam, which blows off the bark outside the lightning's path. In following seasons trees overgrow the damaged area and may cover it completely, leaving only a vertical scar. If the damage is severe, the tree may not be able to recover, and decay sets in, eventually killing the tree. Occasionally, a tree may explode completely. It is commonly thought that a tree standing alone is more frequently struck, though in some forested areas lightning scars can be seen on almost every tree. Of all common trees the most frequently struck is the oak. It has a deep central root that goes beneath the tree, as well as hollow water-filled cells that run up and down the wood of the oak's trunk. These two qualities make oak trees better grounded and more conductive than trees with shallow roots and closed cells. In movies and comics of the contemporary U.S. and many other countries, lightning is often employed as an ominous, dramatic sign. It may herald the waking of a great evil or the emergence of a crisis. This has often also been spoofed, with the uttering of certain words or phrases causing flashes of lightning to appear outside windows (and often scaring or disturbing some characters). While this is usually typical of cartoons, it has also been employed by regular TV shows and movies.
Various novels and role-playing games with a fantasy tint involve lightning-bolt wizardry, weapons embodying the power of lightning, and the like. The comic book character Billy Batson changed into the superhero Captain Marvel by saying the word "Shazam!", which called down a bolt of magic lightning to make the change. Flash II (Barry Allen) and Flash III (Wally West) were both granted their super speed in accidents involving lightning.
- The odds of an average person living in the USA being struck by lightning in their lifetime have been estimated at 1:3000.
- The city of Teresina in northern Brazil has the third-highest rate of occurrence of lightning strikes in the world. The surrounding region is referred to as the Chapada do Corisco ("Flash Lightning Flatlands").
- The United States is home to "Lightning Alley", a group of states in the American Southeast that collectively see more lightning strikes per year than any other place in the US. The most notable state in Lightning Alley is Florida.
- The saying "lightning never strikes twice in the same place" is false. The Empire State Building is struck by lightning on average 100 times each year, and was once struck 15 times in 15 minutes.
- Some repeat lightning strike victims claim that lightning can choose its target, although this theory is entirely disregarded by the scientific community.
- Ukrainian President Viktor Yushchenko is probably the highest-ranked modern statesman to be struck by lightning (which happened in 2005, with no reported injuries).
- Jim Caviezel, the actor who played Jesus in the film The Passion of the Christ, is reported to have been struck by lightning during shooting. The assistant director, Jan Michelini, was struck twice.
- Golfers Retief Goosen and Lee Trevino have both been struck by lightning while playing.
- Although commonly associated with thunderstorms, lightning strikes can occur on any day, even in the absence of clouds.
- Lightning interferes with AM (amplitude modulation) radio signals much more than FM (frequency modulation) signals, providing an easy way to gauge local lightning strike intensity.
The bolt of lightning in heraldry is distinguished from the lightning bolt and is shown as a zigzag with non-pointed ends. It is also distinguished from the "fork of lightning". The lightning bolt shape was a symbol for males among Native American peoples such as the Apache (a rhombus being a symbol for females) in the American Old West. The name of New Zealand/Australia's most celebrated thoroughbred horse, Phar Lap, derives from the shared Zhuang and Thai word for lightning. This article is licensed under the GNU Free Documentation License and uses material from Wikipedia; images are either in the public domain, licensed under the GFDL, or permitted for use by their originator.
IBS, Migraine, and skin reactions due to nail reviver, milk, and food colourings. Beverley was aged fifty when she was referred because of swelling of her eyelids, especially on the right, which was worse at weekends. She was using two types of finger-nail 'revivers', she was right-handed, and she tended to scratch her right eyebrow, explaining why the right eyelid was worse. It seemed probable that the nail revivers were the cause, but why was it worse at weekends? Reading the labels revealed that the nail reviver she used at the weekend contained toluene sulphonamide resin, a well-known sensitiser. The skin of the eyelids is very thin, explaining why there was no effect on the much thicker skin of the fingers. Since she was sixteen she had suffered about once a month from severe migraine, once requiring hospitalisation because of incessant vomiting. Her warning of an impending attack was unique, as it consisted of uncontrollable burping! She had also had irritable bowel syndrome for many years, improved since she had tried avoiding certain foods. Skin tests were definitely positive for milk and slightly positive for wheat. Although skin tests for these foods are not reliable, it seemed worthwhile introducing a diet consisting only of foods which rarely cause problems. This was continued well past the time when she was due to have another migraine, and she had no attacks until she had cheese. Her "IBS" also vanished on the diet, but deliberate challenges with a glass of milk reproduced the wind, bloating, and diarrhoea on several occasions, leaving no doubt that milk was the major cause of her problems. Flapjacks made using one make of margarine caused a gut upset, but not those made using another sort. Comparison of labels revealed that the colourings annatto and curcumin were in the margarine which upset her, and these have since been avoided.
She now kept very detailed diaries of foods and symptoms, and established that any trace of any sort of milk could not be tolerated, nor could nitrite preservatives in some foods. Thus the causes of all three problems were identified as a result of her meticulous cooperation, and she has had no problems for the last five years except when she has had a trace of milk. One tummy upset was triggered by using a cream cheese substitute, and another by eating two Thorntons milk chocolates at Christmas. Some time later she noticed patches of eczema under her collarbones and could not understand what was causing them until, when standing in the shower, she realised that she was folding her arms in such a way that the ends of her fingers were over the symmetrical patches of eczema. This cleared after she ceased to apply any hand creams at all. Beverley is obviously very easily sensitised and will have to take great care, but her quality of life is much better than it has been for many years, as long as she avoids the known causes of her problems. However, if she had not cooperated to the full and been a willing partner in the investigation she would not have been helped. A case of Multiple Inhalant and Food Allergies: Joan had been on continuous oral steroids in quite high dosage for five years, and showed all the usual side-effects. The main cause was her cat, which always sat on the end of the bed and glared at her GP when he called to see her. After the cat was banished and the house cleaned she was able to reduce the dose of steroids to a low level. She was then found to be allergic to penicillium mould, which was coming from the dahlia bulbs in the cellar which had gone very mouldy. She improved even more when they were thrown out. She was also sensitive to house dust, and improved after desensitisation injections, but she still had chronic asthma. Attention was then directed to her diet, and she was found to be allergic to milk, all milk products, and plain chocolate.
Stopping her bedtime cocoa stopped her attacks of asthma in the early hours. She had also been found to be allergic to wool, and improved with removal of woollen blankets, but it was only after a course of desensitisation to wool that she was finally able to stop her oral steroids. This case was investigated and treated before Becotide inhaled steroid became available, at a time when desensitising injections offered the only possibility of stopping oral steroids. Today nothing could have been done for her, because this treatment is taboo for asthma in the UK. It is interesting that her skin tests were positive for house dust but negative for mites, suggesting that something in the home environment other than mites, such as wool and cat, was responsible for her continuing problems. The wool allergy was positively diagnosed by a nasal provocation test, because the skin test for wool is always negative, but this investigation would be unethical today because desensitisation is no longer available. A Fishy Tale: "Bloodworms" cause Swollen Face and Conjunctivitis! Grace was thirty-four years old, and had had six episodes in which her face became very swollen, mainly round her right eye, which was sometimes almost closed for up to a week, with intense irritation of the eyes. These attacks occurred at any time, but always started at home, and because she looked as if she had been beaten up she could not carry out her part-time job as a barmaid at a local hotel. Skin tests were negative for a wide range of possibilities, including her son's pet cat, rabbit, and hamster. Although no cause could be found, the possibilities were discussed in general terms, and she was asked to get in touch at any time if she had any ideas regarding a possible cause.
A year later she phoned to ask if feeding tropical frogs and fish, which she had not mentioned previously because they were looked after by her small son, could be the cause, because her last attack began a few hours after she fed the frogs while he was away on a school trip. I asked her to come and bring the fish food, and to my surprise she brought samples of deep-frozen, freeze-dried, and gamma-irradiated fish food, all labelled as "Bloodworms". Fortunately one sample was accurately labelled as mosquito larvae. I skin tested her with all three by sticking the test needle into the fish food samples and then pricking her arm with the same needle. All three produced very large wheals, the biggest nearly 2 inches in diameter, which lasted for two days. Questioning revealed that she only fed the frogs and fish when her son could not do so for some reason. She used her right hand, explaining why the right eye was most affected, and she admitted that she did not always wash her hands after feeding the frogs and fish. Blood tests (RAST) confirmed that she was very allergic to mosquito larvae by finding specific IgE against the larvae. She was advised never to touch "bloodworms" again, and I put her samples in my freezer for future use. She had no further attacks by avoiding the fish food, but she could not really accept the diagnosis until she had carried out a deliberate test (on her own initiative), by touching the fish food and then her eye. She provoked an attack lasting three days! The thick skin of the fingers would prevent a reaction on the hands, but traces of mosquito larvae carried on her hand would easily penetrate the very thin skin round the eye and cause the reaction. Enquiries at the pet food shop revealed that the young man who sold the fish foods had noticed that his hands itched for some time after handling 'bloodworms', and he had a very large positive skin test and also a positive blood test.
Three years later a thirty-three-year-old taxi driver was referred from the eye department, having had conjunctivitis so severe that he could not see to drive for six weeks. This problem was not responding to treatment, and he also had occasional lip swelling, sneezing attacks, and sometimes a severe attack of asthma. Again all the usual tests were negative, but because he had kept tropical fish for twenty years the first patient's sample was taken out of the freezer and used for a skin test, producing very large immediate and delayed skin reactions. He was asked to bring samples of his fish food, and he brought samples labelled "Fish Flakes" and "Tubifex". When these were used for tests, only the Tubifex produced a skin reaction. He admitted that he did not wash his hands after feeding the fish, so he stopped using "Tubifex" and has had no further problems. With the increased popularity of aquaria, allergic problems caused by food for fish or frogs may become a more common cause of severe allergies. However, as in the first case, an intelligent and observant patient can pinpoint the cause, and avoidance can effect a complete cure. Another case was reported recently where a fish fancier had been grinding up mosquito larvae (picture at left) to feed his exotic fish. He accidentally inhaled some of the powder, causing immediate swelling of the face and wheezing which soon subsided, but which was followed two days later by severe inflammation of the kidneys. Skin tests using the powder he had inhaled caused a very large skin reaction, and IgE antibodies to mosquito larvae were found in the blood. This case illustrates how any part of the body can be affected by allergy. Rarely, kidney problems recurring every summer can be caused by grass pollen, the clue being the seasonality of the complaint.
This dust-allergic patient was given a peak flow meter for the first time, and told that he must always blow into it if he had an asthma attack, and that it was very important to make a note of what he was doing at the time. He took these instructions absolutely literally, with the results shown. I refrained from asking for further details, so it will never be known if it was the dust from the mattress or the exertion that triggered the attack, nor the details behind the even lower reading the next night. Allergy to Alcoholic Drinks | One Gin & Tonic. Alcoholic drinks quite often contain allergens which will affect allergic patients. Obviously yeast is one of the commonest, but there are also additives, preservatives such as sulphites, clearing agents like egg white and isinglass (derived from fish), and many others. Obviously if the subject is drinking every day he will always be in trouble, so the cause may not be obvious. A local GP phoned one day to ask if I would see a patient of his "because he was sick and tired of getting up in the wee small hours of Sunday morning to deal with his asthma". Apparently this had happened with remarkable regularity over the last six weeks, so there had to be a cause. As usual the personal history provided the solution, which was that he only drank on Saturday nights, and that the asthma had only occurred since he had changed his club from one that served Home Ales to one that served Worthington E. I never found out whether the difference was in the yeast or in an additive, but going back to his old club solved the problem. I have seen only one case where pure alcohol produced an attack of asthma, as shown below. I tried to carry out a provocation test as often as possible, and the results I have obtained are shown here. When I was a student I was taught that asthma was often emotional, and even after the war that was a common opinion in teaching hospitals in London.
For example, when a patient I knew well was admitted to a famous hospital in the sixties with asthma, arrangements for a consultation with a psychiatrist were routine. In my experience emotion as a significant trigger for asthma has usually been due to complete frustration with the patient's medical advisers, but two instances where there was a dominant emotional factor follow. The first was when a boy aged 13 was referred with asthma which got worse through the week, but improved dramatically at weekends. All sorts of treatment, including oral steroids and tranquillisers, had been ineffectual. It transpired that he was very bad at maths, the first lesson every morning, and the teacher made a fool of him. The GP was a school governor, and when the teacher, who should have known better, was removed, the boy's asthma vanished. A young Asian girl had well-controlled asthma while at university which became difficult to control after she got her degree and returned home. All summer she had severe attacks requiring treatment at the Emergency department, and was admitted several times for a few days. She was no sooner discharged than she developed another attack, but if she went to her grandmother's house she was well. The home environment was considered a significant factor, as she was very sensitive to dust mites, and a visit to the home was arranged. The home was very clean indeed, but her father, an impressive Sikh gentleman, appeared on the scene and it became obvious that he was the trigger for her asthma. The background was that she wanted to marry someone who was not considered suitable. A Complex Case of Asthma and Dyslexia due to Milk and Moulds: Bruce was 12 years old, with a family history very suggestive of milk sensitivity in no fewer than five generations, extending as far as cousins. His mother was also a patient and was found to be allergic to milk and to the moulds aspergillus and penicillium, which were prevalent in her house.
Even 1% milk solids in margarine would provoke intense rhinitis for a week. She also improved with desensitisation to moulds, and was able to tolerate the house after an old well was found under the kitchen floor and filled in. He was at the bottom of the class at school, and had been diagnosed as dyslexic, but with avoidance of milk and desensitisation against aspergillus there was a vast improvement in his general condition and attitude. Before long he was at the top of the class and the reading difficulties had disappeared. Yeast Allergy and Intolerance: Yeast is ubiquitous, and it is seldom realised that it can be a potent allergen. The following anecdotes may help in recognition, the skin test being always negative. Eustace was 48, with a positive family history for allergies. His complaint was that for some months, if he had a few drinks in the evening he would wake with abdominal pains, followed by a widespread urticarial rash. All sorts of tests had been done, all with negative results, and he had had dental abscesses removed without benefit. Avoidance of beer and yeast-containing foods brought about so much improvement that he did not require any treatment. Pints of beer caused a rash in a few hours, ordinary bread caused slight urticaria, but soda bread had no effect. The answer was there in his story, but nobody had asked the right questions. Susan was 39, with a family history of allergy on her father's side. She had had perennial asthma for 16 years, better in frost and snow, suggesting moulds. Beer, shandy, sherry, and yeast tablets brought on worsening asthma within half an hour. She was also allergic to house dust and penicillin. Avoidance of yeasty foods was very helpful. Beryl was 49, with a family history of asthma. She had had asthma for 16 years, and rhinitis and polyps, which recurred several times over 26 years. She had noted that milk "clogs me up, doc" (as so many say). Sherry, whisky, and wine caused sneezing and wheezing, but not gin.
She had worked in the Marmite factory as a teenager and had been having Bovril for her morning break for years, both being made from yeast. A short steroid course followed by Becotide, with avoidance of milk and yeasty foods and drinks, resulted in regression and finally disappearance of the polyps. She stopped all medication without relapse of the asthma or the polyps. She took 100 ml of milk and her peak flow dropped from 420 to 220. Edward was 65, with a history of asthma in his mother and an uncle who died of asthma. He was well until age 62, when he had sudden asthma after drinking three pints of bitter, and found that if he had a pint of beer each evening he would have severe asthma by the third day. He then found that sherry caused wheezing, and even a little whisky or brandy would cause confusion and dizziness lasting 12 hours, but gin had no effect. He discovered that in France wine had no effect at all. Avoidance brought about complete remission of the asthma, but a test with a little beer caused wheezing. It is curious that yeast in foods did not have any effect. All tests were negative. Kay was 13, and had had chronic rhinitis and mild asthma since infancy. There were no clues from skin testing, but a nasal provocation test with yeast caused a positive reaction. Further enquiry revealed that her father was a very enthusiastic home brewer. After this was closed down and yeast removed from her diet, her asthma and rhinitis ceased within a week. An unexpected bonus was that her personality changed to being more extrovert and lively, and her school performance improved beyond recognition. She had been so difficult at school that when aged 7 she had been assessed by a child psychologist as backward. Two years later she got good maths and English A levels. Accidental exposure to yeast caused irritability and bad temper as well as wheezing. One such occasion was when she was exposed to an in-store bakery in a supermarket, and another was being near a room at school where baking was being carried out.
Blair was 6 when first seen, with a family history of allergies and eczema. Infant feeding had been very difficult, with projectile vomiting and suspected pyloric stenosis, followed by chronic indigestion, abdominal pains, large floating stools, chronic cough, and rhinitis. He was almost impossible to examine, with temper tantrums and stamping of his feet. There were no skin test reactions to a wide range of allergens, but on the basis of the history milk was removed from his diet. Within a month the cough and the rhinitis had disappeared, and his stools and behaviour became normal. To me he seemed to have been transformed from a little horror to a lovely cheerful child who would not stop talking. His mother commented that after he had had some milk in a pudding his behaviour had become as foul as his stools. It was found that a trace of milk, a small quantity of bread, or egg would produce severe abdominal pains within half an hour, with misbehaviour and heavy shadows under the eyes, which would be red-rimmed. He resented very much having his bread stopped, as he had a positive craving for it. On one occasion he took a piece of stale bread from the bird table in the garden, and one mouthful produced an episode of dreadful behaviour. Seen aged 16, the situation was unchanged, and eating spaghetti in defiance would still produce misbehaviour, misery, and shiners. His mother became very reluctant to introduce any foods to find out if his intolerance had subsided, but when seen aged 23 he could eat anything without reaction. Inger had escaped from Norway by boat at the beginning of the war, got married, and established a business in Burton on Trent making hats.
She noticed that whenever a brewery nearby was brewing a new batch of beer and the smell of yeast was strong she began to wheeze, but not at home in the countryside, unless she went to the local pub and drank beer. On stopping her beer the asthma disappeared, but when she went on holiday in late August to the east coast she found that if the weather was damp she had asthma. The only way she could get relief was to hire a rowing boat and row about a mile offshore, where the asthma vanished, only to return when she rowed back again. This was almost certainly due to allergy to the very high counts of yeast spores at that time of year. The next year she went back to Norway on holiday for the first time since 1940, and had no trouble whatever while there, as her relatives were total abstainers. On arriving back in Newcastle on the boat she took the train to Derby, and the asthma returned in full force on the journey. I doubted whether yeast spores could account for this, and asked her if she had had anything on the train. She confessed that she had gone to the bar and had a "Double Diamond", which was a popular brand of beer at the time. She took to gin and tonic, and had no trouble except when the smell of yeast in the air in Burton was strong. A Reaction to a Herbicide used round the School: Richard was fifteen in 1985, and was a very allergic subject already well known to me, because when he was changed from breast to bottle he became seriously constipated, which ceased when I advised a change of feed. Aged four, I found that he was sensitive to dust, feathers, and wool, and it was repeatedly established that if he was closely exposed to wool he reacted with aggression and misbehaviour. This remarkable effect of wool was firmly established, and persisted, so that being in a small car with people in thick sweaters would cause misbehaviour. His mother was a tireless and intelligent investigator, and was quite objective.
Aged ten he reacted to coloured foods and drinks with headaches and tummy aches. Milk challenges resulted in eczema within a few days, but this time the eczema persisted after avoiding milk again, so he was seen again. He had ++ skin tests for mites, house dust, and wool, and a total IgE of 127 units, which is about three times normal. The rash was associated by his mother with the fact that the school authorities had been treating the playing fields with Atrazine, a herbicide which has since been found to have many possible harmful effects. It was difficult to be sure that the herbicide had anything to do with the skin rash, which subsided in the school holidays. I instructed that some treated grass should be obtained from the playing field, and noted that so much Atrazine had been used that white powder was easily seen on the grass. Rubbing one forearm with this grass and the other with grass from my garden produced no immediate results, but after an hour and a half the appearances were as shown in the photographs. Unfortunately it was not possible to follow up this preliminary investigation, but Atrazine was not used again at the school. Obviously much more intensive investigation would be desirable.

Allergy to Legumes in a small boy

George was three, and he was having attacks of abdominal pain and diarrhoea which were clearly associated with eating a remarkably wide variety of foods. He was very fond of tomato sauce, and spaghetti with tomato sauce. Prick skin testing using the soups clearly confirmed his specific allergies, as shown below
Eating disorders are characterized by severely disturbed eating behaviors. In both anorexia nervosa and bulimia nervosa, there is a disturbance in the perception of body weight and shape. Anorexia nervosa is characterized by the refusal or inability to maintain minimally normal body weight, or, in adolescents, failure to gain weight during their expected growth period. Onset usually occurs during adolescence. Bulimia nervosa involves recurrent binge-eating episodes with inappropriate compensatory behaviors such as purging, fasting or excessive exercise. It often has a later onset, in late adolescence or early adulthood. Patients who do not meet the diagnostic criteria for either anorexia nervosa or bulimia nervosa may be diagnosed with eating disorder not otherwise specified; these patients comprise the majority of eating disorder patients. At least 90% of individuals with either anorexia nervosa or bulimia nervosa are female. Some risk factors include dieting or participation in sports or activities that reward low weight (e.g., gymnastics, ballet, wrestling, modeling). The female athlete triad involves disordered eating, amenorrhea and osteoporosis.

II. Diagnostic Approach.

A. What is the differential diagnosis for this problem?

Severe weight loss might occur in gastrointestinal diseases such as inflammatory bowel disease or celiac disease, or in infections (i.e., human immunodeficiency virus infection or tuberculosis), but these do not include the disturbed body image or fear of gaining weight. Superior mesenteric artery (SMA) syndrome is usually more a consequence of severe weight loss than a cause: when the usual fat pad around the SMA diminishes, the duodenum gets trapped between the SMA anteriorly and the vertebral column posteriorly, causing vomiting and prandial abdominal pain. Some endocrine disorders to consider are hyperthyroidism, hypothyroidism, adrenal insufficiency and diabetes mellitus.
Malignancies often present with weight loss, but without the fear of gaining weight. Decreased appetite can occur in major depressive disorder, and odd eating behaviors in schizophrenia can result in weight loss, but neither is associated with fear of gaining weight. Obsessive-compulsive disorder or anxiety disorders can be in the differential or be concomitant conditions. Body dysmorphic disorder is a severe form of disturbed body image related to obsessive-compulsive disorder, but without the preoccupation with weight. It is just as common in men as in women and can occur in patients with anorexia nervosa. Muscle dysmorphia is an impairing preoccupation that one is not lean or muscular enough; it occurs more often in men and is associated with altered eating habits. Kleine-Levin syndrome consists of disturbed eating behavior but without the self-evaluation of body shape or weight. Individuals with major depressive disorder with atypical features may overeat, but do not compensate for their behavior inappropriately.

B. Describe a diagnostic approach/method to the patient with this problem.

The SCOFF Questionnaire can be used in screening for eating disorders:

- Do you make yourself sick because you feel uncomfortably full?
- Do you worry you have lost control over how much you eat?
- Have you recently lost over one stone (6.3 kg or 14 lb) in a 3-month period?
- Do you believe yourself to be fat when others say you are too thin?
- Would you say that food dominates your life?

One point is given for every answer of “yes”; a score greater than or equal to 2 indicates a likelihood of anorexia nervosa or bulimia nervosa. Diagnosis is mainly derived from a thorough history, asking about weight history, symptoms, behaviors, and attitudes towards weight, shape and food.

1. Historical information important in the diagnosis of this problem.

Diet and compensatory behaviors

It would be important to ask about diet and any compensatory behaviors:

- Exercise – How much? Frequency? How intense?
- Diet – A typical day’s food diary? Calorie counting? Food restrictions? Fluid and caffeine intake?
- Binge eating – How often?
- Purging – Frequency? How? Self-induced vomiting? When in relation to meals? Use of laxatives, diuretics, enemas, stimulants or diet pills?

Review of systems

In general, individuals may feel fatigue or cold intolerance from decreased metabolism and poor perfusion. Dizziness, lightheadedness, syncope or palpitations could suggest orthostatic hypotension or tachycardia. Recurrent vomiting could cause symptoms of gastroesophageal reflux, chest pain, and tooth sensitivity from loss of dental enamel. Common complaints of postprandial fullness or pain, nausea, constipation and bloating come from decreased gastrointestinal motility. Diarrhea from laxative abuse can cause symptoms of chronic dehydration and electrolyte abnormalities. Weakness or muscle cramps can be exacerbated by electrolyte disturbances such as hypokalemia. Bone pain with exercise should raise suspicion of stress fractures, osteopenia or osteoporosis. Polyuria could occur with diuretic abuse, but also nocturia secondary to abnormal vasopressin secretion. Menstrual irregularity or amenorrhea, secondary to weight loss, excessive exercise or emotional stress, potentially predisposes females to osteopenia or even osteoporosis. Psychiatric features associated with anorexia nervosa include symptoms of depression: depressed mood, irritability, insomnia, social withdrawal and decreased libido. Obsessive-compulsive tendencies, like preoccupations with food, are also common. Other characteristics are concerns about eating in public, perfectionism and a need to control one's environment. Binge eating/purging types of anorexia nervosa tend to have more impulse-control issues than restricting types: they more often abuse alcohol or drugs, are more sexually active, have more mood lability, and are more prone to self-harm (cutting or burning) and suicide attempts.
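The SCOFF scoring rule described above (one point per "yes" answer; a total of 2 or more suggests a likelihood of anorexia nervosa or bulimia nervosa) is simple enough to sketch in code. This is purely illustrative; the function and variable names are hypothetical, and it is of course not a clinical tool:

```python
# Illustrative sketch of the SCOFF scoring rule: one point per "yes",
# a score >= 2 is a positive screen. Names here are hypothetical.

SCOFF_QUESTIONS = [
    "Do you make yourself sick because you feel uncomfortably full?",
    "Do you worry you have lost control over how much you eat?",
    "Have you recently lost over one stone (6.3 kg or 14 lb) in a 3-month period?",
    "Do you believe yourself to be fat when others say you are too thin?",
    "Would you say that food dominates your life?",
]

def scoff_score(answers):
    """answers: iterable of booleans, one per SCOFF question (True = 'yes')."""
    answers = list(answers)
    if len(answers) != len(SCOFF_QUESTIONS):
        raise ValueError("expected one answer per SCOFF question")
    return sum(answers)  # booleans sum as 0/1

def scoff_positive(answers):
    """The screen is positive when the score is >= 2."""
    return scoff_score(answers) >= 2

# Example: 'yes' to the first and fourth questions -> score 2, positive screen.
assert scoff_score([True, False, False, True, False]) == 2
assert scoff_positive([True, False, False, True, False])
assert not scoff_positive([False, False, True, False, False])
```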
In bulimia nervosa, there is increased prevalence of depressive symptoms, mood disorders such as major depressive disorder or dysthymic disorder, and anxiety. Some patients have features of personality disorders, like borderline personality disorder. Substance abuse, including alcohol or stimulants, can occur in at least 30% of those with bulimia nervosa. Some patients may access pro-anorexia ("pro-ana") or pro-bulimia ("pro-mia") websites that lack professional supervision. Diabetics may omit or underdose their insulin to reduce food metabolism. Thyroid hormone could be used to increase metabolism for weight loss. Individuals with anorexia nervosa are more likely to have first-degree relatives with this disorder or mood disorders. For bulimia nervosa, there may also be a family history of the same disorder, obesity, mood disorders or substance abuse.

2. Physical Examination maneuvers that are likely to be useful in diagnosing the cause of this problem.

Vital signs should be checked for hypothermia and bradycardia, with orthostatic measurements for postural orthostatic tachycardia (pulse increase >20 bpm) or orthostatic hypotension (systolic blood pressure decrease >10 mmHg), along with weight, height and body mass index (BMI). In general, individuals with anorexia nervosa can appear cachectic; most with bulimia nervosa are at least of normal weight. BMI tables for ages 2–20: http://www.cdc.gov/nccdphp/dnpa/bmi/00binaries/bmi-tables.pdf; BMI for adults: http://www.cdc.gov/nccdphp/dnpa/bmi/00binaries/bmi-adults.pdf. For adolescents, the 2nd percentile has been proposed as a cutoff point for considering anorexia nervosa. The scalp hair can appear thin and dull. Angular stomatitis, fissuring at the corners of the mouth, may be due to vitamin B2 (riboflavin) deficiency. Induced emesis can cause parotid gland hypertrophy or permanent erosion of dental enamel, especially on the lingual surface, making teeth more prone to cavities.
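As a quick illustration of the numeric thresholds just mentioned (pulse increase >20 bpm for postural orthostatic tachycardia, systolic drop >10 mmHg for orthostatic hypotension) together with the standard BMI formula, here is a hypothetical sketch. The names and structure are my own assumptions, and it is no substitute for the CDC tables or clinical judgment:

```python
# Hypothetical helpers for the vital-sign thresholds and BMI formula above.

def bmi(weight_kg, height_m):
    """Body mass index = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def orthostatic_flags(supine_pulse, standing_pulse, supine_sbp, standing_sbp):
    """Flag which orthostatic abnormalities the measurements suggest."""
    return {
        "postural_tachycardia": (standing_pulse - supine_pulse) > 20,   # bpm
        "orthostatic_hypotension": (supine_sbp - standing_sbp) > 10,    # mmHg
    }

# Example: pulse 60 -> 85 bpm (rise of 25), SBP 110 -> 95 mmHg (drop of 15):
# both thresholds are exceeded.
flags = orthostatic_flags(60, 85, 110, 95)
assert flags == {"postural_tachycardia": True, "orthostatic_hypotension": True}

# A 45 kg patient at 1.70 m has a BMI of about 15.6, well under normal.
assert round(bmi(45.0, 1.70), 1) == 15.6
```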
In the mouth, palatal scratches or pharyngeal erythema suggest recurrent vomiting, and receding gums can be caused by severe malnutrition. A third of individuals have mitral valve prolapse (a systolic click with crescendo to S2) from abnormal ballooning of the mitral valve, which can progress to mild mitral regurgitation. Delayed capillary refill and cool extremities signal poor perfusion. There may be peripheral edema from the low protein of malnutrition. Skin findings may include conjunctival hemorrhages or facial petechiae after vomiting, dry skin, lanugo (fine hair, often on the trunk), calluses or scars on the knuckles (Russell’s sign) from self-induced vomiting using the hand, or acrocyanosis. Look for signs of self-harm: cuts, linear scars, burns, or bruises. Hypercarotenemia, a yellowing of the skin, can be seen on the palms and soles. Patients may have a flat or anxious affect. Cognitive deficits are influenced by atrophic brain changes correlating with the degree of weight loss and amenorrhea, which resolve with weight gain.

3. Laboratory, radiographic and other tests that are likely to be useful in diagnosing the cause of this problem.

Medical tests should be limited to those aiding diagnosis or management. A complete blood count in restricting anorexia nervosa can show leukopenia with relative lymphocytosis, mild normocytic anemia or even thrombocytopenia. Electrolytes may reflect dehydration, with an elevated blood urea nitrogen. Hyponatremia can be secondary to vomiting, diarrhea or diuretic abuse, or dilutional from water loading to decrease food intake or artificially increase weight for weight checks. Self-induced emesis can cause hypokalemia and hypochloremic metabolic alkalosis from loss of hydrochloric acid. Hypokalemia can also be caused by electrolyte shifts from acid loss or by pseudohyperaldosteronism from chronic dehydration or laxative and diuretic abuse. Laxative abuse causes bicarbonate wasting and a hyperchloremic metabolic acidosis.
There may also be low magnesium or phosphorus levels from inadequate intake and laxative abuse. Hypoglycemia can present in severe, advanced malnutrition with depleted glycogen stores. Liver function tests can show mildly elevated transaminases due to weight loss and fasting. An elevated fasting indirect bilirubin level can reflect food restriction. Thyroid function tests can reveal non-thyroidal illness syndrome (sick euthyroid syndrome), with normal thyroid-stimulating hormone and low-normal thyroxine (T4) levels, which reverses with weight gain and does not require hormone replacement therapy. If the patient vomits surreptitiously, serum salivary amylase will be elevated. Persistent amenorrhea in someone of normal weight might prompt evaluation with a urine pregnancy test, serum luteinizing and follicle-stimulating hormones, serum estradiol and serum prolactin. Females may have low estradiol and males low serum testosterone levels. A toxicology screen should be considered for patients with a history of substance abuse or binging and purging. For suspected laxative abuse, the stool or urine can be analyzed with a laxative screen: bisacodyl, emodin, aloe-emodin and rhein. Sinus bradycardia is common on electrocardiogram. Also consider an electrocardiogram if electrolyte abnormalities are present or if there is a history of significant purging. Hypokalemia depresses the ST segment, and in severe hypokalemia the QRS complex widens with an increased PR interval and increased P-wave amplitude. Some individuals have a prolonged QT interval. Bone density scans should be obtained for individuals with amenorrhea of at least 6 months to check for osteopenia or osteoporosis. Malnutrition can cause cognitive deficits, but if these are significant, or patients have atypical features or an unchanging course, a brain magnetic resonance imaging (MRI) or computed tomography (CT) scan could be done.

C. Criteria for Diagnosing Each Diagnosis in the Method Above.
The Diagnostic and Statistical Manual of Mental Disorders (fourth edition, text revision) (DSM-IV-TR) contains the diagnostic criteria for these conditions.

The DSM-IV-TR diagnostic criteria for anorexia nervosa:

- Refusal to maintain body weight at or above a minimally normal weight for age and height (i.e., weight loss leading to maintenance of body weight less than 85% of that expected, or failure to make expected weight gain during a period of growth, leading to a body weight less than 85% of that expected)
- Intense fear of gaining weight or becoming fat, even though underweight
- Disturbance in the way in which one’s body weight or shape is experienced, undue influence of body weight or shape on self-evaluation, or denial of the seriousness of current body weight
- In postmenarcheal females, amenorrhea (i.e., the absence of at least 3 consecutive menstrual cycles)

There are 2 types:

- Restricting type – no regular binging or purging (self-induced vomiting or use of laxatives and diuretics)
- Binge eating/purging type – regular binging or purging behavior

The DSM-IV-TR diagnostic criteria for bulimia nervosa:

- Recurrent episodes of binge eating, characterized by: eating, in a discrete period of time, an amount of food that is definitely larger than most people would eat in a similar period of time and under similar circumstances; and a sense of lack of control over eating during the episode
- Recurrent inappropriate compensatory behavior to prevent weight gain, such as self-induced vomiting; misuse of laxatives, diuretics, enemas, or other medications; fasting; or excessive exercise
- The binge eating and inappropriate compensatory behaviors both occur, on average, at least twice per week for 3 months
- Self-evaluation unduly influenced by body shape or weight
- The disturbance does not occur exclusively during episodes of anorexia nervosa

There are 2 types:

- Purging type – the person has regularly engaged in self-induced vomiting or misuse of laxatives, diuretics or enemas
- Nonpurging type – the person has used other inappropriate compensatory behaviors, such as fasting or excessive exercise, but has not regularly engaged in self-induced vomiting or the misuse of laxatives, diuretics or enemas

D. Over-utilized or “wasted” diagnostic tests associated with the evaluation of this problem.

III. Management while the Diagnostic Process is Proceeding.

A. Management of Clinical Problem Eating Disorders.

Hospitalization criteria (American Psychiatric Association):

- Severe malnutrition: <75% average body weight for age, sex and height
- Physiologic instability, i.e.: orthostatic hypotension (pulse increase of >20 bpm or systolic blood pressure drop of >10 mmHg); hypotension <80/50 mmHg; bradycardia <40 bpm (<50 bpm daytime for adolescents); tachycardia >110 bpm; hypothermia <97 F (<36.1 C)
- Significant electrolyte disturbances – e.g., potassium <3 mEq/L (<3 mmol/L)
- Failed outpatient therapy
- Acute medical complications of malnutrition – e.g., syncope, cardiac, renal
- Acute psychiatric emergencies – e.g., suicidal ideation, acute psychosis

Most of the medical complications improve and resolve with refeeding, weight gain and cessation of purging. On admission, daily caloric intake usually begins at 30–40 kcal/kg/day (1000–1600 kcal/day), or by adding 200–300 kcal to the individual’s current daily intake. Calories can be advanced by 200 kcal every 2 days. After the first 2 weeks, a weight gain of 2–3 lb (0.9–1.35 kg) per week is a reasonable goal for the hospitalized patient. Oral refeeding is always preferable, but nasogastric and even intravenous feedings can be used in situations of food refusal and as a lifesaving measure. New research suggests that intensive nutrition regimens (feeding starting at up to 3000 kcal/day) may allow patients to gain weight, muscle, and fat mass without significant side effects. If using nasogastric feedings, continuous feedings may be better tolerated than bolus feedings.
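The caloric advancement just described (start at 30–40 kcal/kg/day, then advance by 200 kcal every 2 days) is straightforward arithmetic, sketched below. The names are hypothetical and the sketch is illustrative only; real refeeding is individualized and clinician-supervised:

```python
# Illustrative arithmetic for the refeeding schedule described above:
# start at 30-40 kcal/kg/day, advance by 200 kcal every 2 days.
# Hypothetical names; not a clinical protocol.

def starting_calories(weight_kg, kcal_per_kg=30):
    """Initial daily intake, typically 30-40 kcal/kg/day."""
    return weight_kg * kcal_per_kg

def calories_on_day(start_kcal, day, step_kcal=200, step_days=2):
    """Daily target after advancing step_kcal every step_days (day 0 = start)."""
    return start_kcal + (day // step_days) * step_kcal

start = starting_calories(40)              # 40 kg patient at 30 kcal/kg
assert start == 1200                       # within the quoted 1000-1600 range
assert calories_on_day(start, 0) == 1200   # admission day
assert calories_on_day(start, 2) == 1400   # first 200-kcal advance
assert calories_on_day(start, 7) == 1800   # 7 // 2 = 3 advances so far
```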
During early nutrition support, liquid supplements can be started with a gradual transition to foods to help weight gain. In the first few days, fluid intake should aim for zero net balance (20–30 mL/kg/day for adults), with sodium restricted, especially if edema develops. A nutrition consult is recommended to help navigate these issues. From decreased intake, there may be general vitamin deficiencies, including thiamine, folic acid, vitamin B12 (from a vegetarian diet), and vitamins C and D. Thiamine (100 mg PO daily for the first 3 days), particularly for individuals with a history of heavy alcohol use or rapid weight loss, is recommended before nutrition rehabilitation, as refeeding with increased carbohydrate metabolism exhausts already depleted thiamine reserves. A daily multivitamin is also recommended. Zinc deficiency can cause changes in the sense of taste and neuropsychiatric symptoms. A zinc supplement is recommended, especially if the serum zinc level is low, as it has also been reported to help weight gain in some patients. If there is evidence of iron deficiency anemia, wait until after one week of nutrition support to start iron; iron supplementation in the early phase of refeeding is associated with increased mortality. On admission, the activity level is typically restricted to bed rest so that energy can be directed towards weight gain and recuperation. As orthostatic symptoms improve, the activity level can be judiciously advanced. For the symptom of postprandial discomfort, small, frequent meals with snacks are better tolerated. Metoclopramide can help delayed gastric emptying and the associated symptoms. A bowel regimen to treat constipation should include polyethylene glycol 3350 and stool softeners. Stimulant laxatives are not recommended.
For tooth sensitivity from enamel loss, some strategies to alleviate the discomfort include diluting fruit juices, avoiding foods or fluids at extreme temperatures, and using a mouth rinse of 1 teaspoon (5 mL) of baking soda per 1 quart (0.95 L) of water. Salivary gland enlargement improves with cessation of emesis, but warm compresses and tart candies (sialogogues) can help. Nutrition rehabilitation and weight gain are the best treatment for low bone density, potentially reversing bone deterioration. Calcium supplements (1300–1500 mg/day) and vitamin D (400 units/day) do not significantly prevent or reverse bone loss, but are often prescribed. Neither estrogen supplements nor bisphosphonates are effective in the treatment of osteopenia or osteoporosis in eating disorders. Antidepressants are commonly prescribed for patients with eating disorders, but there are few trials to support their routine use. A small number of trials support the use of psychotherapy, particularly cognitive behavioral therapy (CBT), in patients with bulimia nervosa. However, CBT alone does not restore weight; patients must still follow standard nutrition protocols. The Maudsley model of family therapy has been shown to increase BMI while decreasing symptomatology in patients with anorexia nervosa. Patients can be admitted to telemetry, where monitoring can help identify significant bradycardia at night, along with electrocardiographic manifestations of electrolyte disturbances. Daily orthostatic vital signs can be obtained, along with daily weights checked every morning after voiding, in a hospital gown, with the patient facing away from the scale. Electrolyte abnormalities, such as hypophosphatemia, are most likely to occur in the first couple of weeks of refeeding, so serum electrolytes, potassium, calcium, magnesium, and phosphorus should be checked daily for the first 5 days of refeeding, then 2–3 times weekly for the next 2 weeks.
Urine specific gravity checked at the morning weigh-in can help detect excessive water intake. The daily physical exam should pay attention to orthostatics and cardiovascular status. Patients may report acne, breast tenderness with weight gain, or depressed mood because of body changes. A psychiatry consult helps with medication initiation or adjustment that can improve mood, decrease binging and purging, and improve weight gain. Patients considered for discharge should be recovering from the physiologic instability that warranted their admission, and should be stable on their goal caloric intake. The closer a patient is to their ideal weight at discharge, the less likely they are to relapse. The discharge plan should consist of close follow-up with a multidisciplinary team including medical, psychiatric and nutrition providers. Different options depend on the level of care a patient requires, be it intensive outpatient, a day treatment program or a residential program. A day treatment program provides therapy, meals and group activities at a level between outpatient care and hospitalization. Patients who purge with vomiting should be referred to a dentist.

B. Common Pitfalls and Side-Effects of Management of this Clinical Problem.

Refeeding syndrome refers to the potentially lethal complication of refeeding severely malnourished patients, when the shift from a catabolic to an anabolic state leads to insulin being released in response to carbohydrate intake. Electrolyte abnormalities (particularly hypophosphatemia, hypokalemia, and hypomagnesemia) can ensue, particularly in the first week after starting nutritional supplementation. Those at highest risk are patients with severe malnutrition, BMI <12 kg/m2, a history of rapid weight loss, binging, vomiting, laxative abuse, or concurrent comorbidities. Peripheral edema and fluid retention may develop with abrupt cessation of diuretics or laxatives, when chronic dehydration induces elevated aldosterone levels and salt and water retention.
With weight loss, the heart muscle atrophies along with total muscle mass. When fluid overload occurs with refeeding, patients are at risk for heart failure. This, along with bradycardia and a prolonged QT interval, makes the heart more susceptible to ventricular arrhythmias and sudden death from hypokalemia and hypophosphatemia. Rare complications from repetitive vomiting include esophageal tears or rupture. Prolonged QT has been reported in patients with anorexia nervosa, and thus medications with this side effect (e.g., antipsychotics, antidepressants, macrolides, some antihistamines) should be avoided, especially given these patients' risk of cardiovascular compromise.

- Metoclopramide 5–10 mg PO TID
- Polyethylene glycol 3350 1–3 tablespoons PO daily
- Thiamine 100 mg PO daily for the first 3 days

IV. What's the Evidence?

Garner, DM, Anderson, ML, Keiper, CD, Whynott, R, Parker, L. “Psychotropic medications in adult and adolescent eating disorders: clinical practice versus evidence-based recommendations”. Eat Weight Disord. 2016 Feb 1.

Hay, PP, Bacaltchuk, J, Stefano, S, Kashyap, P. “Psychological treatments for bulimia nervosa and binging”. Cochrane Database Syst Rev. 2009 Oct 7. pp. CD000562. (This article describes how cognitive behavioral therapy can treat some patients with bulimia.)

Mehler, PS, Andersen, AE. “Eating Disorders: A Guide to Medical Care and Complications”. 2010.

Morgan, JF, Reid, F, Lacey, JH. “The SCOFF questionnaire: a new screening tool for eating disorders”. West J Med. vol. 172. 2000 Mar. pp. 164-165. (This is the original citation for the SCOFF questionnaire.)

O’Connor, G, Nicholls, D, Hudson, L, Singhal, A. “Refeeding Low Weight Hospitalized Adolescents With Anorexia Nervosa: A Multicenter Randomized Controlled Trial”. Nutr Clin Pract. 2016 Feb 11.
(This trial showed that “refeeding adolescents with AN with a higher energy intake was associated with greater weight gain but without an increase in complications associated with refeeding when compared with a more cautious refeeding protocol-thus challenging current refeeding recommendations.”)

Rosen, DS. “Clinical Report – Identification and Management of Eating Disorders in Children and Adolescents”. Pediatrics. vol. 126. 2010. pp. 1240-1253.

Schmidt, U, Magill, N, Renwick, B, Keyes, A, Kenyon, M, Dejong, H. “The Maudsley Outpatient Study of Treatments for Anorexia Nervosa and Related Conditions (MOSAIC): Comparison of the Maudsley Model of Anorexia Nervosa Treatment for Adults (MANTRA) with specialist supportive clinical management (SSCM) in outpatients with broadly defined anorexia nervosa: A randomized controlled trial”. J Consult Clin Psychol. vol. 83. 2015 Aug. pp. 796-807.

Stanga, Z, Brunner, A, Leuenberger, M, Grimble, RF, Shenkin, A, Allison, SP, Lobo, DN. “Nutrition in clinical practice – the refeeding syndrome: illustrative cases and guidelines for prevention and treatment”. Eur J Clin Nutr. vol. 62. 2008. pp. 687-694.

Copyright © 2017, 2013 Decision Support in Medicine, LLC.
The nuclear erythroid 2-related factor 2 signaling pathway, best known as Nrf2, is a protective mechanism which functions as a “master regulator” of the human body’s antioxidant response. Nrf2 senses the level of oxidative stress within cells and triggers protective antioxidant mechanisms. While Nrf2 activation can have many benefits, Nrf2 “overexpression” carries several risks. A balanced degree of NRF2 appears essential for preventing the development of a variety of diseases, in addition to the general improvement of these health issues. However, NRF2 can also cause complications. The main causes of NRF2 “overexpression” are genetic mutation or continuing chronic exposure to chemical or oxidative stress, among others. Below, we will discuss the downsides of Nrf2 overexpression and demonstrate its mechanisms of action within the human body.

Research studies found that mice which don’t express NRF2 are more inclined to develop cancer in response to physical and chemical stimulation. Similar research studies, however, showed that NRF2 over-activation, or KEAP1 inactivation, can result in the exacerbation of certain cancers, particularly if those pathways have been interrupted. Overactive NRF2 can occur through smoking, where continuous NRF2 activation is believed to be the cause of lung cancer in smokers. Nrf2 overexpression might cause cancerous cells not to self-destruct, while intermittent NRF2 activation can prevent cancerous cells from triggering toxin induction. Additionally, because NRF2 overexpression increases the human body’s antioxidant capacity beyond redox homeostasis, it boosts cell division and generates an unnatural pattern of DNA and histone methylation. This can ultimately make chemotherapy and radiotherapy less effective against cancer.
Therefore, limiting NRF2 activation with substances like DIM, luteolin, Zi Cao, or salinomycin could be ideal for patients with cancer, although Nrf2 overactivation should not be considered the only cause of cancer. Nutrient deficiencies can affect genes, including NRF2; this might be one way deficiencies contribute to tumors. The overactivation of Nrf2 can also affect the function of specific organs in the human body. NRF2 overexpression can ultimately block the production of insulin-like growth factor 1, or IGF-1, from the liver, which is essential for the regeneration of the liver. While the acute overexpression of Nrf2 may have its benefits, continuous overexpression of NRF2 may cause long-term harmful effects on the heart, such as cardiomyopathy. NRF2 expression can be increased through high levels of cholesterol, or the activation of HO-1. This is believed to be the reason why chronically elevated levels of cholesterol might cause cardiovascular health issues. NRF2 overexpression has also been demonstrated to inhibit the capability to repigment in vitiligo, as it might obstruct tyrosinase (TYR) action, which is essential for repigmentation through melaninogenesis. Research studies have demonstrated that this process may be one of the primary reasons why people with vitiligo don’t seem to activate Nrf2 as efficiently as people without vitiligo.

Why NRF2 May Not Function Properly

NRF2 has to be hormetically activated in order to take advantage of its benefits. In other words, Nrf2 shouldn’t trigger every minute or every day; therefore, it’s a good idea to take breaks from it, for instance 5 days on, 5 days off, or every other day. NRF2 must also reach a specific threshold to trigger its hormetic response, where a small stressor may not be enough to trigger it.
Protein deglycase DJ-1, or simply DJ-1, also called the Parkinson’s disease protein, or PARK7, is a master regulator and detector of the redox status in the human body. DJ-1 is essential for regulating how long NRF2 can perform its function and produce an antioxidant response. If DJ-1 becomes overoxidized, the cells will make the DJ-1 protein less accessible. This process causes NRF2 activation to expire too quickly, since DJ-1 is paramount for maintaining balanced levels of NRF2 and preventing them from being broken down in the cell. If the DJ-1 protein is non-existent or overoxidized, NRF2 expression will be minimal, even using DIM or alternative NRF2 activators. DJ-1 expression is imperative to restore impaired NRF2 action.

If you have a chronic illness, including CIRS, chronic infections/dysbiosis/SIBO, or heavy metal build-up, such as mercury and/or that from root canals, these can obstruct the systems of NRF2 and phase two detoxification. Rather than oxidative stress turning on NRF2 as an antioxidant response, NRF2 will not trigger, and oxidative stress can remain in the cell and cause damage; that is, there is no antioxidant response. This is a significant reason why many people with CIRS have several sensitivities and react to numerous factors. Some people believe they may be having a herx reaction; however, this reaction may only be damaging the cells further. Treating the chronic illness, however, will permit the liver to discharge toxins into the bile, gradually developing the hormetic response of NRF2 activation. If the bile remains toxic and is not excreted from the human body, it will reactivate NRF2’s oxidative stress and cause you to feel worse once it is reabsorbed from the gastrointestinal (GI) tract. For example, ochratoxin A may block NRF2.
Aside from treating the problem, histone deacetylase inhibitors can block the oxidative reaction from a number of the factors which trigger NRF2 activation, but they might also prevent NRF2 from triggering normally, which might ultimately fail to serve its purpose.

Fish Oil Dysregulation

Cholinergics are substances which boost acetylcholine, or ACh, and choline in the brain through the increase of ACh, particularly when inhibiting the breakdown of ACh. Patients with CIRS often have problems with the dysregulation of acetylcholine levels in the human body, especially in the brain. Fish oil triggers NRF2, activating its protective antioxidant mechanism within the cells. People with chronic illnesses might have problems with cognitive stress and acetylcholine excitotoxicity, from organophosphate accumulation, which might cause fish oil to create inflammation within the human body. Choline deficiency also induces NRF2 activation. Including choline in your diet (polyphenols, eggs, etc.) can help enhance the effects of cholinergic dysregulation.

What Decreases NRF2?

Decreasing NRF2 overexpression is best for people that have cancer, although it may be beneficial for a variety of other health issues.

Diet, Supplements, and Common Medicines:
- Apigenin (higher doses)
- EGCG (high doses increase NRF2)
- Hiba (hinokitiol / β-thujaplicin)
- High-salt diet
- Luteolin (celery, green pepper, parsley, perilla leaf, and chamomile tea – higher doses may increase NRF2 – 40 mg/kg luteolin three times per week)
- Metformin (chronic intake)
- N-Acetyl-L-Cysteine (NAC, by blocking the oxidative response, especially at high doses)
- Orange peel (contains polymethoxylated flavonoids)
- Quercetin (higher doses may increase NRF2 – 50 mg/kg/d quercetin)
- Retinol (all-trans retinoic acid)
- Vitamin C when combined with quercetin
- Zi Cao (purple gromwell, contains shikonin/alkannin)

Pathways and Other:
- Glucocorticoid receptor signaling (dexamethasone and betamethasone as well)
- GSK-3β (regulatory feedback)
- Homocysteine (ALCAR can reverse homocysteine-induced low levels of NRF2)
- Ochratoxin A (from Aspergillus and Penicillium species)
- Promyelocytic leukemia protein
- Retinoic acid receptor alpha
- STAT3 inhibition (such as cryptotanshinone)
- Testosterone (and testosterone propionate, although TP intranasally may increase NRF2)
- Trx1 (via reduction of Cys151 in Keap1 or of Cys506 in the NLS region of Nrf2)
- Zinc deficiency (makes it worse in the brain)

Nrf2 Mechanism of Action

Oxidative stress, sensed through CUL3, releases NRF2 from KEAP1, a negative inhibitor; NRF2 then enters the nucleus of the cells, stimulating the transcription of the AREs, turning sulfides into disulfides, and upregulating antioxidant genes, leading to increased antioxidants such as GSH, GPX, GST, SOD, etc. The rest of these can be seen in the list below:
- Increases Notch 1

Encoded by the NFE2L2 gene, NRF2, or nuclear factor erythroid 2-related factor 2, is a transcription factor in the basic leucine zipper, or bZIP, superfamily which utilizes a Cap'n'Collar, or CNC, structure. It promotes nitric enzymes, biotransformation enzymes, and xenobiotic efflux transporters. It is an essential regulator of the induction of the phase II antioxidant and detoxification enzyme genes, which protect cells from damage caused by oxidative stress and electrophilic attacks. During homeostatic conditions, Nrf2 is sequestered in the cytosol through physical attachment of the N-terminal domain of Nrf2 to the Kelch-like ECH-associated protein, or Keap1, also referred to as INrf2 or Inhibitor of Nrf2, inhibiting Nrf2 activation. It may also be controlled by mammalian selenoprotein thioredoxin reductase 1, or TrxR1, which functions as a negative regulator. Upon exposure to electrophilic stressors, Nrf2 dissociates from Keap1, translocating into the nucleus, where it then heterodimerizes with a range of transcriptional regulatory proteins.
Frequent interactions include those with the transcription factors Jun and Fos, which are members of the activator protein family of transcription factors. After dimerization, these complexes then bind to antioxidant/electrophile responsive elements (ARE/EpRE) and activate transcription, as is true with the Jun-Nrf2 complex, or suppress transcription, as with the Fos-Nrf2 complex. The positioning of the ARE, which is triggered or inhibited, will determine which genes are transcriptionally controlled by these factors. When the ARE is triggered:
- Activation of the synthesis of antioxidants capable of detoxifying ROS, such as catalase, superoxide dismutase, or SOD, GSH peroxidases, GSH reductase, GSH transferase, NADPH-quinone oxidoreductase, or NQO1, the cytochrome P450 monooxygenase system, thioredoxin, thioredoxin reductase, and HSP70.
- Activation of GSH synthase, permitting a noticeable increase of the intracellular GSH level, which is quite protective.
- The augmentation of the synthesis and levels of phase II enzymes like UDP-glucuronosyltransferase, N-acetyltransferases, and sulfotransferases.
- The upregulation of HO-1, a highly protective enzyme, with a potential increase of CO that, in conjunction with NO, allows vasodilation of ischemic cells.
- Reduction of iron overload through elevated ferritin and bilirubin as a lipophilic antioxidant.

Both the phase II proteins and the antioxidants are able to fix the chronic oxidative stress and restore a normal redox system. GSK3β, under the control of AKT and PI3K, phosphorylates Fyn, resulting in Fyn nuclear localization, where Fyn phosphorylates Nrf2 at Y568, leading to nuclear export and degradation of Nrf2. NRF2 also dampens the TH1/TH17 response and enhances the TH2 response.
HDAC inhibitors triggered the Nrf2 signaling pathway and up-regulated the Nrf2 downstream targets HO-1, NQO1, and glutamate-cysteine ligase catalytic subunit, or GCLC, by curbing Keap1 and encouraging dissociation of Keap1 from Nrf2, Nrf2 nuclear translocation, and Nrf2-ARE binding. Nrf2 has a half-life of about 20 minutes under basal conditions. Diminishing the IKKβ pool through Keap1 binding reduces IκBα degradation and might be the elusive mechanism by which Nrf2 activation is proven to inhibit NF-κB activation. Keap1 does not always have to be downregulated for NRF2 to operate; for example, chlorophyllin, blueberry, ellagic acid, astaxanthin, and tea polyphenols may boost NRF2 and KEAP1 by 400 percent. Nrf2 negatively regulates the expression of stearoyl-CoA desaturase, or SCD, and citrate lyase, or CL.

C allele – showed a significant risk for and a protective effect against drug-resistant epilepsy (DRE)

rs11085735 (I'm AC)
associated with rate of decline of lung function in the LHS
T allele – protective allele for Parkinsonian disorders – had stronger NRF2/sMAF binding and was associated with higher MAPT mRNA levels in 3 different regions of the brain, including cerebellar cortex (CRBL), temporal cortex (TCTX), and intralobular white matter (WHMT)

rs10183914 (I'm CT)
T allele – increased levels of Nrf2 protein and delayed age of onset of Parkinson's by four years

rs16865105 (I'm AC)
C allele – had higher risk of Parkinson's Disease

rs1806649 (I'm CT)
C allele – has been identified and may be relevant for breast cancer etiology.
associated with increased risk of hospital admissions during periods of high PM10 levels

rs1962142 (I'm GG)
T allele – was associated with a low level of cytoplasmic NRF2 expression (P = 0.036) and negative sulfiredoxin expression (P = 0.042)
A allele – protected from FEV (forced expiratory volume in one second) decline in relation to cigarette smoking status (p = 0.004)

rs2001350 (I'm TT)
T allele – protected from FEV decline in relation to cigarette smoking status (p = 0.004)

rs2364722 (I'm AA)
A allele – protected from FEV decline in relation to cigarette smoking status (p = 0.004)
C allele – associated with significantly reduced FEV in Japanese smokers with lung cancer
G allele – showed a significant risk for and a protective effect against drug-resistant epilepsy (DRE)
AA alleles – showed significantly reduced KEAP1 expression
AA alleles – were associated with an increased risk of breast cancer (P = 0.011)

rs2886161 (I'm TT)
T allele – associated with Parkinson's Disease
A allele – was associated with low NRF2 expression (P = 0.011; OR, 1.988; CI, 1.162–3.400), and the AA genotype was associated with worse survival (P = 0.032; HR, 1.687; CI, 1.047–2.748)

rs35652124 (I'm TT)
A allele – associated with higher age at onset for Parkinson's Disease vs. the G allele
C allele – had increased NRF2 protein
T allele – had less NRF2 protein and greater risk of heart disease and high blood pressure

rs6706649 (I'm CC)
C allele – had lower NRF2 protein and increased risk for Parkinson's Disease

rs6721961 (I'm GG)
T allele – had lower NRF2 protein
TT alleles – association between cigarette smoking in heavy smokers and a decrease in semen quality
TT allele – was associated with increased risk of breast cancer [P = 0.008; OR, 4.656; confidence interval (CI), 1.350–16.063], and the T allele was associated with a low extent of NRF2 protein expression (P = 0.0003; OR, 2.420; CI, 1.491–3.926) and negative SRXN1 expression (P = 0.047; OR, 1.867; CI = 1.002–3.478)
T allele – was also nominally associated with ALI-related 28-day mortality following systemic inflammatory response syndrome
T allele – protected from FEV decline in relation to cigarette smoking status (p = 0.004)
G allele – associated with increased risk of ALI following major trauma in European- and African-Americans (odds ratio, OR 6.44; 95% confidence interval)
AA alleles – associated with infection-induced asthma
AA alleles – exhibited significantly diminished NRF2 gene expression and, consequently, an increased risk of lung cancer, especially in those who had ever smoked
AA alleles – had a significantly higher risk for developing T2DM (OR 1.77; 95% CI 1.26, 2.49; p = 0.011) relative to those with the CC genotype
AA alleles – strong association between wound repair and late toxicities of radiation (associated with a significantly higher risk for developing late effects in African-Americans, with a trend in Caucasians)
associated with oral estrogen therapy and risk of venous thromboembolism in postmenopausal women

rs6726395 (I'm AG)
A allele – protected from FEV1 decline (forced expiratory volume in one second) in relation to cigarette smoking status (p = 0.004)
A allele – associated with significantly reduced FEV1 in Japanese smokers with lung cancer
GG alleles – had higher NRF2 levels and decreased risk of macular degeneration
GG alleles – had higher survival with cholangiocarcinoma

rs7557529 (I'm CT)
C allele – associated with Parkinson's Disease

Oxidative stress and other stressors can cause cell damage which may eventually lead to a variety of health issues. Research studies have demonstrated that Nrf2 activation can promote the human body's protective antioxidant mechanism; however, researchers have discussed that Nrf2 overexpression can have tremendous risks towards overall health and wellness.
Various types of cancer can also occur with Nrf2 overactivation.

Dr. Alex Jimenez D.C., C.C.S.T. Insight

Sulforaphane and Its Effects on Cancer, Mortality, Aging, Brain and Behavior, Heart Disease & More

Isothiocyanates are some of the most important plant compounds you can get in your diet. In this video I make the most comprehensive case for them that has ever been made. Short attention span? Skip to your favorite topic by clicking one of the time points below. Full timeline below.

Key sections:
00:01:14 – Cancer and mortality
00:19:04 – Aging
00:26:30 – Brain and behavior
00:38:06 – Final recap
00:40:27 – Dose

00:00:34 – Introduction of sulforaphane, a major focus of the video.
00:01:14 – Cruciferous vegetable consumption and reductions in all-cause mortality.
00:02:12 – Prostate cancer risk.
00:02:23 – Bladder cancer risk.
00:02:34 – Lung cancer in smokers risk.
00:02:48 – Breast cancer risk.
00:03:13 – Hypothetical: what if you already have cancer? (interventional)
00:03:35 – Plausible mechanism driving the cancer and mortality associative data.
00:04:38 – Sulforaphane and cancer.
00:05:32 – Animal evidence showing strong effect of broccoli sprout extract on bladder tumor development in rats.
00:06:06 – Effect of direct supplementation of sulforaphane in prostate cancer patients.
00:07:09 – Bioaccumulation of isothiocyanate metabolites in actual breast tissue.
00:08:32 – Inhibition of breast cancer stem cells.
00:08:53 – History lesson: brassicas were established as having health properties even in ancient Rome.
00:09:16 – Sulforaphane's ability to enhance carcinogen excretion (benzene, acrolein).
00:09:51 – NRF2 as a genetic switch via antioxidant response elements.
00:10:10 – How NRF2 activation enhances carcinogen excretion via glutathione-S-conjugates.
00:10:34 – Brussels sprouts increase glutathione-S-transferase and reduce DNA damage.
00:11:20 – Broccoli sprout drink increases benzene excretion by 61%.
00:13:31 – Broccoli sprout homogenate increases antioxidant enzymes in the upper airway.
00:15:45 – Cruciferous vegetable consumption and heart disease mortality.
00:16:55 – Broccoli sprout powder improves blood lipids and overall heart disease risk in type 2 diabetics.
00:19:04 – Beginning of aging section.
00:19:21 – Sulforaphane-enriched diet enhances lifespan of beetles from 15 to 30% (in certain conditions).
00:20:34 – Importance of low inflammation for longevity.
00:22:05 – Cruciferous vegetables and broccoli sprout powder seem to reduce a wide variety of inflammatory markers in humans.
00:36:32 – Sulforaphane improves learning in model of type II diabetes in mice.
00:37:19 – Sulforaphane and Duchenne muscular dystrophy.
00:37:44 – Myostatin inhibition in muscle satellite cells (in vitro).
00:38:06 – Late-video recap: mortality and cancer, DNA damage, oxidative stress and inflammation, benzene excretion, cardiovascular disease, type II diabetes, effects on the brain (depression, autism, schizophrenia, neurodegeneration), NRF2 pathway.
00:40:27 – Thoughts on figuring out a dose of broccoli sprouts or sulforaphane.
00:41:01 – Anecdotes on sprouting at home.
00:43:14 – On cooking temperatures and sulforaphane activity.
00:43:45 – Gut bacteria conversion of sulforaphane from glucoraphanin.
00:44:24 – Supplements work better when combined with active myrosinase from vegetables.
00:44:56 – Cooking techniques and cruciferous vegetables.
00:46:06 – Isothiocyanates as goitrogens.

According to research studies, Nrf2 is a fundamental transcription factor which activates the cells' protective antioxidant mechanisms to detoxify the human body. The overexpression of Nrf2, however, can cause health issues. The scope of our information is limited to chiropractic and spinal health issues. To discuss the subject matter, please feel free to ask Dr. Jimenez or contact us at 915-850-0900. Curated by Dr.
Alex Jimenez

Additional Topic Discussion: Acute Back Pain

Back pain is one of the most prevalent causes of disability and missed days at work worldwide. Back pain is the second most common reason for doctor office visits, outnumbered only by upper-respiratory infections. Approximately 80 percent of the population will experience back pain at least once throughout their life. The spine is a complex structure made up of bones, joints, ligaments, and muscles, among other soft tissues. Injuries and/or aggravated conditions, such as herniated discs, can eventually lead to symptoms of back pain. Sports injuries or automobile accident injuries are often the most frequent cause of back pain; however, sometimes the simplest of movements can have painful results. Fortunately, alternative treatment options, such as chiropractic care, can help ease back pain through the use of spinal adjustments and manual manipulations, ultimately improving pain relief. Welcome/Bienvenidos to our blog. We focus on treating severe spinal disabilities and injuries. We also treat sciatica, neck and back pain, whiplash, headaches, knee injuries, sports injuries, dizziness, poor sleep, and arthritis. We use advanced proven therapies focused on optimal mobility, health, fitness, and structural conditioning. We use individualized diet plans, specialized chiropractic techniques, mobility-agility training, adapted cross-fit protocols and the "PUSH System" to treat patients suffering from various injuries and health problems. If you would like to learn more about a Doctor of Chiropractic who uses advanced progressive techniques to facilitate complete physical health, please connect with me. We focus on simplicity to help restore mobility and recovery. I'd love to see you.
Recently, experimental evidence regarding the interaction of visible and infrared (IR) light with capacitors highlighted that the experimental results are explained by acknowledging that, in light-matter interaction, the total energy, or energy conserved, is given by the product of light's power P times its period T, i.e. E = PT. No other research group to date has achieved a similar conclusion. With our own experiments, we have achieved the evidence that the energy E = PT conserved in light-matter interaction is capable of precisely explaining the numerical outcome of voltage from capacitors illuminated by IR light. This result led us to inquire whether the validity of E = PT might be extended to phenomena other than the interaction of light with capacitors. Since there exist numerous experiments based on light-matter interaction published in the scientific literature, we decided to make use of them to test the hypothesis on E = PT. As we find that the energy E = PT efficiently captures the orders of magnitude of the outcomes from both our experiments and those of other authors, we conclude that the concept of the photon is not necessary in the analysis of light-matter phenomena. Our review of the literature on light-matter interaction, specifically on the photothermoelectric (PTE) effect and photoredox catalysis reactions (PCRs), is important to assess the primary role of E = PT in light-matter interaction and possibly to pave the way to new results in various areas of science and technology involving electromagnetic waves. In particular, assessing that E = PT is the amount of energy conserved in light-matter interaction impacts: 1) the quantitative analysis of light-matter interaction; 2) the determination of the effectiveness of the PTE effect, the PCRs, and other mechanisms; and 3) the establishment of the conditions required to obtain similar effects with different wavelengths.
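As a quick numerical illustration of the E = PT hypothesis, the sketch below computes the conserved energy for a beam of given power and wavelength; the 1 mW power and 10.57 μm wavelength are assumed illustrative inputs, not values taken from any specific experiment:

```python
# Sketch of the E = P*T hypothesis: the energy conserved in light-matter
# interaction is the beam power times one optical period (T = wavelength / c).
C_LIGHT = 2.998e8  # speed of light, m/s

def conserved_energy(power_w: float, wavelength_m: float) -> float:
    """E = P * T, with the optical period T = wavelength / c."""
    period_s = wavelength_m / C_LIGHT
    return power_w * period_s

# Hypothetical LWIR beam: 1 mW at 10.57 um -> E of order 1e-17 J
energy_j = conserved_energy(1e-3, 10.57e-6)
```

Note that E scales linearly with wavelength at fixed power, which is what makes longer-wavelength (larger-period) illumination more effective under this hypothesis.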
We assess the ability of E = PT to capture the amount of energy conserved in light-matter interaction by revisiting the analysis of experiments on light-matter interaction performed by other authors. We have chosen two experiments, one related to the PTE effect and the other to PCRs. We use data published in the corresponding articles, and perform our own data analysis assuming E = PT as the energy conserved. We then compare our conclusions with those achieved by the authors of the original articles.

3. The Photothermoelectric Effect

The paper by Lu et al. describes the fabrication and performance of reduced SrTiO3 (r-STO) based PTE photodetectors, producing high photovoltage responsivity R (i.e. the ratio between the photovoltage ΔV produced and the power P of the impinging light) and broadband spectral response from 325 nm to 10.57 μm. The authors ascribe the enhanced responsivity to the existence of a Ti-O phonon mode in the long-wavelength infrared region (LWIR), in agreement with findings on the interaction of IR light with thin films. The data provided by Lu et al. do not clearly correlate the photovoltage and responsivity produced by the r-STO PTE photodetectors to the characteristics, e.g. beam size and power P, of the impinging light. This lack of correlation in the trends between the light's characteristics and the r-STO PTE photodetector's outcome (ΔV and R) shadows the role of device size in the enhancement of the PTE effect ascribed to the Ti-O phonon modes. Here, based on the hypothesis that the product of light's power P times its period T, i.e. E = PT, is the energy conserved in light-matter interaction, we propose that a trend exists between the light's characteristics and the r-STO PTE photodetector outcome. Accordingly, we review the data published by Lu et al.
In addition, we treat the r-STO PTE photodetector as a capacitor with capacitance C, similarly to the treatment given to thermoelectric devices in previous studies, due to their multi-layer structure alternating conducting and non-conducting elements. The main characteristics of the laser beam used by Lu et al. to illuminate the r-STO PTE photodetectors, the produced photovoltage ΔV, and the resulting responsivity R are collected in Table 1. For one of the illuminations, we chose to display the values of ΔV and R generated under the conditions displayed in Figure 6(b) of Lu et al. With the assumption that E = PT is the total energy in light-matter interaction, and treating the r-STO PTE photodetector as a capacitor with capacitance C, we use the equation PT = (1/2)CΔV² to evaluate the energy transferred from the light beam to the photodetectors such that a photovoltage ΔV is produced. This equation agrees with Equations (2a) and (2b) of our previous work, and is an approximate expression because here we neglect a thermal component which, in order to be significant, needs to be of the same order of magnitude as PT.

Table 1. Characteristics of the impinging light (wavelength λ, period T, power P, and spot radius r) and r-STO PTE photodetector outcome (photovoltage ΔV and responsivity R, i.e. the ratio between photovoltage produced and power of the impinging light). The data are taken from Lu et al. The spot radius is the radius of the beam with circular cross-section and diameter equal to the beam size given by Lu et al. For one of the illuminations we display the values of photovoltage and responsivity generated under the conditions displayed in Figure 6(b) of Lu et al. The capacitance of the r-STO PTE photodetectors studied by Lu et al. is estimated as C = 2PT/ΔV². The average C value of these devices is (1.93 ± 1.65) pF.

With the expression for C given above, and assuming the beam sizes given by Lu et al. to be the diameter of the light beam with circular cross-section of radius r, we estimate the value of the capacitance of the photodetectors studied by Lu et al.
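The capacitance estimate described above, the energy balance PT = ½CΔV² with the detector treated as a capacitor, can be sketched as follows; the 1 mW / 10.57 μm / 6 mV inputs are illustrative values we chose, not a row from Table 1:

```python
C_LIGHT = 2.998e8  # speed of light, m/s

def capacitance_estimate(power_w: float, wavelength_m: float,
                         photovoltage_v: float) -> float:
    """Solve the energy balance P*T = (1/2)*C*dV^2 for C, with T = wavelength / c."""
    period_s = wavelength_m / C_LIGHT
    return 2.0 * power_w * period_s / photovoltage_v ** 2

# Illustrative inputs: a 1 mW LWIR beam (10.57 um) producing a 6 mV photovoltage
c_farad = capacitance_estimate(1e-3, 10.57e-6, 6e-3)  # of order picofarads
```

With these assumed inputs the estimate lands in the picofarad range, the same order of magnitude as the (1.93 ± 1.65) pF average quoted in the text.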
The values of C are reported in the last column of Table 1. It is interesting to note that the C values obtained with the data provided by Lu et al. are of the same order of magnitude for the five different types of illumination, and have an average value of (1.93 ± 1.65) pF. To achieve our goal of finding a correlation between the light's characteristics and the r-STO PTE photodetector outcome in the experiments performed by Lu et al., we suppose that, at all wavelengths, the beam size is the same, e.g. 10 μm. We then rescale, on a cross-sectional area with spot radius r = 5 μm, the values of C of the r-STO PTE photodetectors used by Lu et al. To this end, we exploit the direct proportionality between C and A given by C = ε₀εᵣA/d, where ε₀ is the permittivity of vacuum, εᵣ is the dielectric constant, varying between 1 and 10³, A is the cross-sectional area of the beam hitting the photodetector, and d is the distance between the electrodes, or plates, of the capacitors. The rescaled capacitances are reported in Table 2. We observe that the average C value is (2.04 ± 1.47) pF, similar within the errors to the average capacitance found from the data in Table 1. We also observe that the capacitance of the photodetector used for the illumination with the beam at 1550 nm is three orders of magnitude lower than that of the photodetectors used with the other four wavelengths considered. With the values of the rescaled capacitances C and using PT = (1/2)CΔV², we rescale also the photovoltage.

Table 2. Here we consider the characteristics of the impinging light (wavelength λ, period T, and power P) to be the same as in Lu et al. The spot radius, however, is assumed to always be r = 5 μm. The capacitances of the r-STO PTE photodetectors are then rescaled on a cross-sectional area with r = 5 μm, exploiting the direct proportionality between C and A given by C = ε₀εᵣA/d, where ε₀ is the permittivity of vacuum, εᵣ is the dielectric constant varying between 1 and 10³, and d the distance between the electrodes. The average C value is (2.04 ± 1.47) pF.
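The rescaling step relies only on the proportionality C ∝ A = πr² that follows from C = ε₀εᵣA/d; a minimal sketch, with an invented 3 pF / 15 μm example rather than an actual Table 1 entry:

```python
def rescale_capacitance(c_farad: float, r_old_m: float, r_new_m: float) -> float:
    """From C = eps0 * eps_r * A / d with A = pi * r^2,
    C scales with the illuminated area as (r_new / r_old)^2."""
    return c_farad * (r_new_m / r_old_m) ** 2

# Illustrative: a 3 pF estimate obtained with a 15 um spot radius,
# rescaled to the common r = 5 um spot assumed in Table 2
c_rescaled = rescale_capacitance(3e-12, 15e-6, 5e-6)  # 3 pF / 9
```

The unknown factors εᵣ and d cancel in the ratio, which is why the rescaling needs no material parameters.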
The photovoltage is calculated as ΔV = √(2PT/C), and used to estimate the responsivity R = ΔV/P. The rescaled photovoltage ΔV and responsivity R are both reported in Table 2. We observe that now the responsivities increase with the light's wavelength (or with the light's period T), as predicted. More interestingly, the responsivities at the larger wavelengths, 1550 nm and 10.57 μm (1.14 V/W and 2.85 V/W, respectively), are larger than the highest responsivities previously reported (1.13 V/W and 0.2 V/W). Additionally, we notice that the responsivity at 532 nm is 0.51 V/W, smaller than the 0.57 V/W at 325 nm. We ascribe this finding to the fact that we neglected the thermal component in estimating ΔV at the lower wavelengths, where the temperature difference between the plates of the capacitors becomes significant according to earlier findings. We can further rescale the photovoltage and responsivity by assuming all r-STO PTE photodetectors to have a spot radius r = 5 μm and a capacitance C = 2.04 pF, the average value found from the data in Table 2. This assumption is reasonable because capacitance is a device property; therefore, if all the devices are prepared in the same way they should have the same C. The newly rescaled values for ΔV and R are reported in Table 3. We notice that this time too the responsivities increase with the light's wavelength (or with the light's period T), as predicted. There are very small differences among the values at 325, 532, and 785 nm. The responsivity at 10.57 μm is 1.72 V/W, again larger than the highest responsivities previously reported (1.13 V/W and 0.2 V/W). However, the responsivity at 1550 nm is anomalously small (0.22 V/W). The values of the responsivities versus the light's period T are reported in Figure 1, where the empty stars depict the values from Table 2 and the small empty circles report those from Table 3. In Table 2 and Table 3 we observed three anomalous results with the photodetector used for the illumination with the beam at 1550 nm with spot radius r = 5 μm.
The first is that the capacitance is three orders of magnitude lower than that of the devices used at all other wavelengths (Table 2). The second is that the responsivity for the device illuminated at 1550 nm is anomalously small (0.22 V/W, as in Table 3). The third is that even the intensity used in the case of 1550 nm is anomalously small, as shown in Table 4. We suggest that these anomalies result from sample preparation. In conclusion, we have found that the use of E = PT as the total energy, or the energy conserved, in light's interaction with an r-STO PTE photodetector enables highlighting the trends in the responsivity as a function of the light's wavelength or period. First, the size of the light beam's cross-sectional area impinging on the r-STO PTE photodetectors plays a major role in defining the performance of the photodetectors. Then, as predicted, the responsivities increase with increasing light wavelength or period. This conclusion is supported by the result, obtained with a 45 pF thermoelectric device, that with microwaves, i.e. with wavelength much larger than that of visible light, the responsivity is even larger. Finally, we explain the anomaly for the device illuminated with light at 1550 nm as a consequence of the sample preparation.

Table 3. Here we consider the characteristics of the impinging light (wavelength λ, period T, and power P) to be the same as in Lu et al. The spot radius is always r = 5 μm. The capacitances of the r-STO PTE photodetectors are assumed to be all equal to the average value of (2.04 ± 1.47) pF found in Table 2. The photovoltage is calculated as ΔV = √(2PT/C), and used to estimate the responsivity R = ΔV/P.

Table 4. Intensity (power over area) of the impinging light at the various wavelengths used by Lu et al. to illuminate their r-STO PTE photodetectors.

Figure 1. Values of the responsivities versus the light's period T. The data are taken from Boone et al. (large empty circles) and from Lu et al. (empty squares).
The empty stars depict the responsivity values taken from Table 2, where each value of the capacitance C and photovoltage ΔV of the r-STO PTE photodetectors was rescaled assuming a spot radius of r = 5 μm. The small empty circles represent the values taken from Table 3, where all photodetectors are illuminated on an area that corresponds to the light beam's cross-section with radius r = 5 μm, and all photodetectors are assumed to have capacitance C = 2.04 pF, the average value found in Table 2. The photovoltages produced by the photodetectors are listed in Table 3.

4. The Photoredox Catalysis Reactions

The article by Ravetz et al. exploits the mechanism of triplet fusion upconversion in PCRs. Triplet fusion upconversion consists of the transformation of low-energy photons into high-energy ones. Specifically, starting from near-infrared (NIR) photons, Ravetz et al. access photons in the orange-to-blue light interval to enable several photoredox catalysis reactions (PCRs). Triplet fusion upconversion involves various steps of excitation and decay processes. The main goal of the process is to create a higher-energy singlet exciton which then decays, giving off a high-energy photon. However, it is not clear how conservation of energy is satisfied in the upconversion from NIR to blue light. More specifically, one NIR photon with wavelength λ_NIR and frequency ν_NIR has energy hν_NIR, while a blue photon with λ_blue and ν_blue has energy hν_blue. Here h is Planck's constant. The difference in energy between the blue and NIR photons is ΔE = h(ν_blue − ν_NIR). Ravetz et al. do not specify what mechanism supplies ΔE within the upconversion process. To address the problem of finding the source of ΔE satisfying the law of conservation of energy, we assume that, in light-matter interaction, the total energy, or the energy conserved, is given by the product of light's power P times its period T, i.e. E = PT, where T = 1/ν.
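For concreteness, the photon-energy gap ΔE = h(ν_blue − ν_NIR) that the upconversion must bridge can be computed as below; the 730 nm and 450 nm wavelengths are our own assumptions (typical NIR-diode and blue emission values), not figures quoted from Ravetz et al.:

```python
H = 6.62607015e-34  # Planck constant, J*s
C_LIGHT = 2.998e8   # speed of light, m/s

def photon_energy(wavelength_m: float) -> float:
    """Single-photon energy E = h*nu = h*c / wavelength."""
    return H * C_LIGHT / wavelength_m

# Assumed wavelengths: ~730 nm for the NIR source, ~450 nm for the blue emission
e_nir_j  = photon_energy(730e-9)   # ~2.7e-19 J
e_blue_j = photon_energy(450e-9)   # ~4.4e-19 J
delta_j  = e_blue_j - e_nir_j      # the per-photon energy the upconversion must supply
```

Under these assumed wavelengths the gap is roughly 1.7 × 10⁻¹⁹ J per photon, on the order of 1 eV, which is the quantity whose source the text is asking about.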
To investigate the effectiveness of E = PT as the total energy conserved in PCRs, we collect the information available on the light characteristics (λ, T, and P) of the light sources used by Ravetz et al., and calculate the corresponding PT. Table 5 summarizes the collected information, including the type of source (laser diode or light-emitting diode, LED) and the specific location where the information was found. The first two rows in Table 5 describe the characteristics of the laser light used by Ravetz et al. to trigger the reaction forming a crosslinked PMMA gel. The reaction is successful when excited with both NIR and blue light transferred to the reactants through air and water. We notice that the values of PT used for the NIR and blue light are, to good approximation, of the same order of magnitude: 0.1 fJ for the NIR light and 2.4 fJ for the blue light. There is a gap of 2.3 fJ between the values for the NIR and the blue light. The gap can be observed in Figure 2(a), where it is clear that the area of the blue light is larger than the area of the NIR light. We ascribe this gap to the uncertainty in the exact value of P capable of promoting with maximum efficiency the freestanding gel-forming PCRs through the NIR and the blue light sources. The third and fourth rows in Table 5 report the characteristics of the LEDs used by Ravetz et al. to trigger the same crosslinked PMMA gel-forming reaction described above. In this case, according to the data reported in Table S1 of Ravetz et al., the authors aimed at performing the reaction by constraining the light to pass through three sheets of white printer paper. The reaction turned out to be successful only using NIR light. It is interesting to note that Ravetz et al. chose the light characteristics of the LED lamp such that the PT values of both the NIR and the blue light are nearly the same: 37.5 fJ for the NIR light and 53.5 fJ for the blue light.
The good match can be observed in Figure 2(b), where the area of the blue light is of the same order of magnitude as the area of the NIR light. Ravetz et al. ascribe the failure of the blue light to produce a successful gel-forming reaction to the insufficient penetration depth of the blue light in the three sheets of white printer paper. We assume, however, that with the given values of P and T the reaction would have been successful with both the NIR and the blue light had the reactants been illuminated through air and water.

Table 5. Type of light source, wavelength (λ), period (T), and power (P) used by Ravetz et al. in promoting the various photoredox catalysis reactions (PCRs) with either near-infrared (NIR) or blue light. For each case, we report the value of PT, the energy conserved in light-matter interaction. The last column indicates the location in the original article describing the various PCRs discussed here.

Figure 2. Representation of PT for the various illuminations presented by Ravetz et al.: (a) laser diode (Table S1), (b) light-emitting diode (Table S1), and (c) laser diode (Figure S9). The blue area is PT for the blue-light illumination, the dark-orange area is PT for the near-infrared illumination, and the green area is the overlap between the two.

Finally, the fifth and sixth rows in Table 5 illustrate the characteristics of the laser light that Ravetz et al. determined to be successful in promoting the reactions reported in Figure 2 of their article. The reported values of P actually represent the power absorbed by the reactants to activate those reactions. In the successful cases, the values of PT reported in Table 5 are 0.041 fJ for the NIR light and 0.056 fJ for the blue light. The agreement between the values of PT for the two types of light is excellent, as can be observed in Figure 2(c), where the area of the blue light is of the same order of magnitude as the area of the NIR light.
Therefore, from the results in Table 5, we conclude that in all the successful PCRs the amount of energy conserved in the transfer of energy from light to reactants is PT, independently of the type of light used. This finding enables us to capture the role of the law of conservation of energy in the PCRs presented by Ravetz et al. The actual values of PT calculated in reviewing the analysis by Ravetz et al. deserve some comment. Let us pick from Table 5 the value of PT for, e.g., the reaction reported in Figure 2(d) of the original article, describing the intramolecular [2 + 2] cyclization of enones through a prototypical [Ru(bpy)3]2+-catalyzed reaction and blue light. First, if the whole amount of 0.056 fJ is needed for the activation of one enone molecule, then we estimate that ≈80 Mcal/mole would be required for the enones. This is an enormous amount of energy compared to standard reaction activation energies, which are of the order of tens of kcal/mole! Perhaps a mechanism exists that reduces the energy of the absorbed blue light by, e.g.: 1) emission of light at a different wavelength, as mentioned in the original article, 2) reflection of light, and, possibly, 3) dissipation of the light's energy in the form of thermal energy. Second, if the energy of one absorbed blue photon is needed to activate one enone, then 64 kcal/mole would be required for the enones. This quantity is three orders of magnitude smaller than the corresponding amount in terms of PT, i.e. ≈80 Mcal/mole, but closer to standard reaction activation energies. Thus, if the energy of one absorbed blue photon sufficed to activate one enone, there would be almost no fraction of the absorbed blue light left for emission, reflection, and dissipation, which, however, are experimentally observed. To summarize, we use the characteristics of the light used by Ravetz et al. to activate PCRs to find the values of PT, i.e. the product of the light's power P times its period T. For each specific successful reaction considered by Ravetz et al.
, we find that the values of PT are nearly the same whether the reactions are activated with NIR or blue light. The different values of the power at the different wavelengths considered (either blue or NIR) enable reaching the same PT. Other recent articles indicate the crucial role played by the power P in determining the success or failure of photo-excited chemical reactions. This finding supports the hypothesis that PT is the amount of energy conserved in the PCRs presented by Ravetz et al., and clarifies the role in PCRs of the law of conservation of energy, which is obscured when the triplet fusion upconversion mechanism alone is considered in explaining the observed outcome.

The evidence so far that the product of the light's power P times its period T, i.e. PT, is the amount of energy conserved in light-matter interaction is provided by experiments examining the output of capacitors illuminated by light. As such, PT plays a primary role in the analysis of light-matter interaction because it competes with the photon, whose energy is given by Planck's constant h times the light's frequency ν. Indeed, both PT and the photon are called into play when conservation of energy is considered in light-matter interaction. However, since the energy PT and the energy of the photon hν differ by orders of magnitude, it appears necessary to shed light on which of the two energies truly satisfies the energy balance in light-matter interaction. Our research shows results clearly in favor of PT. Indeed, from the study of the photothermoelectric effect and of photoredox catalysis reactions, we infer that the hypothesis that PT represents the amount of energy conserved in light-matter interaction is generally true. In addition, through PT, in the case of the photothermoelectric effect, we unveil that the size of the light-beam cross-sectional area impinging on the photodetectors plays a major role in defining the performance of the photodetectors.
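The per-mole figures quoted above for the enone reaction (≈80 Mcal/mole if the whole 0.056 fJ activates one molecule, versus 64 kcal/mole if one blue photon does) can be checked with a few lines of arithmetic. The sketch below converts a per-molecule energy to kcal/mole; the 450 nm blue wavelength is an assumption made here for illustration:

```python
# Convert per-molecule activation energies to kcal/mole.
N_A = 6.022e23       # Avogadro's number, 1/mol
J_PER_KCAL = 4184.0  # joules per kilocalorie
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s

def per_mole_kcal(energy_per_molecule_J):
    """Scale a per-molecule energy in joules up to kcal per mole."""
    return energy_per_molecule_J * N_A / J_PER_KCAL

# Case 1: the whole PT = 0.056 fJ activates one enone molecule.
pt_case = per_mole_kcal(0.056e-15)

# Case 2: one blue photon (assumed 450 nm) activates one enone molecule.
photon_case = per_mole_kcal(h * c / 450e-9)  # ~64 kcal/mole

print(f"PT case:     {pt_case:.3g} kcal/mole")
print(f"photon case: {photon_case:.3g} kcal/mole")
```

The photon case lands near the 64 kcal/mole quoted in the text; the PT case comes out on the megacalorie-per-mole scale, several orders of magnitude above typical activation energies, which is the disproportion the text flags.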
With our analysis, the photodetector responsivities actually turn out to be higher than those reported in the original article. In the case of the photoredox catalysis reactions, we find that the magnitude of PT involved in successful reactions is independent of the type of light used, whether near-infrared or blue. In addition, the involvement of PT in the description of photoredox catalysis reactions helps to clarify the role of the law of conservation of energy, which was neglected by the authors of the original article. The hypothesis that PT represents the amount of energy conserved in light-matter interaction has also proved effective in the interaction of light with capacitors, field-effect transistors, and, through our analysis, in the photothermoelectric effect excited by visible light and microwaves. We were able to reach the same conclusion in the investigation of infrared spectroscopy and of vision in vertebrates. The validity of this conclusion on PT as the energy conserved in light-matter interaction with radio waves, microwaves, X-rays, and γ rays still needs to be ascertained. Finally, the relationship between PT and photocurrent formation also needs to be investigated. Establishing that PT is the amount of energy conserved in light-matter interaction impacts: 1) the quantitative analysis of light-matter interaction; 2) the determination of the effectiveness of devices and phenomena based on light-matter interaction; and 3) the establishment of the conditions to obtain similar effects with different wavelengths.

The author thanks the Department of Physics and Astronomy of James Madison University for supporting the research that resulted in this article.

Boone, D.E., Jackson, C.H., Swecker, A.T., Hergenrather, J.S., Wenger, K.S., Kokhan, O., Terzic, B., Melnikov, I., Ivanov, I.N., Stevens, E.C. and Scarel, G. (2018) Probing the Wave Nature of Light-Matter Interaction. World Journal of Condensed Matter Physics, 8, 62-89.
Lu, X., Liang, P. and Bao, X. (2019) Phonon-Enhanced Photothermoelectric Effect in SrTiO3 Ultra-Broad-Band Photodetector. Nature Communications, 10, 138. Ravetz, B.D., Pun, A.B., Churchill, E.M., Congreve, D.N., Rovis, T. and Campos, L.M. (2019) Photoredox Catalysis using Infrared Light via Triplet Fusion Upconversion. Nature, 565, 343-346. Vincent-Johnson, A.J., Vasquez, K.A., Bridstrup, J.E., Masters, A.E., Hu, X. and Scarel, G. (2011) Heat Recovery Mechanism in the Excitation of Radiative Polaritons by Broadband Infrared Radiation in Thin Oxide Films. Applied Physics Letters, 99, Article ID: 131901. Skoblin, G., Sun, J. and Yurgens, A. (2018) Graphene Bolometer with Thermoelectric Readout and Capacitive Coupling to an Antenna. Applied Physics Letters, 112, Article ID: 063501. Cabré, G., Garrido-Charles, A., Moreno, M., Bosch, M., Porta-de-la-Riva, M., Krieg, M., Gascón-Moya, M., Camarero, N., Gelabert, R., Lluch, J.M., Busqué, F., Hernando, J., Gorostiza, P. and Alibés, R. (2019) Rationally Designed Azobenzene Photoswitches for Efficient Two-Photon Neuronal Excitation. Nature Communications, 10, 907. Barati, F., Grossnickle, M., Su, S., Lake, R.L., Aji, V. and Gabor, N.M. (2017) Hot Carrier-Enhanced Interlayer Electron-Hole Pair Multiplication in 2D Semiconductor Heterostructure Photocells. Nature Nanotechnology, 12, 1134-1139. Sarker, B.K., Cazalas, E., Chung, T.-F., Childres, I., Jovanovic, I. and Chen, Y.P. (2017) Position-Dependent and Millimeter-Range Photodetection in Phototransistors with Micrometer-Scale Graphene on SiC. Nature Nanotechnology, 12, 668-674. Scarel, G. and Stevens, E.C. (2019) The Effect of Infrared Light’s Power on the Infrared Spectra of Thin Films. World Journal of Condensed Matter Physics, 9, 1-21. Scarel, G. (2019) Quantum and Non-Quantum Formulation of Eye’s Adaptation to Light’s Intensity Increments. World Journal of Condensed Matter Physics, 9, 62-74.
Clair Cameron Patterson
June 2, 1922 - December 5, 1995
By George R. Tilton

Clair Patterson was an energetic, innovative, determined scientist whose pioneering work stretched across an unusual number of sub-disciplines, including archeology, meteorology, oceanography, and environmental science—besides chemistry and geology. He is best known for his determination of the age of the Earth. That was possible only after he had spent some five years establishing methods for the separation and isotopic analysis of lead at microgram and sub-microgram levels. His techniques opened a new field in lead isotope geochemistry for terrestrial as well as for planetary studies. Whereas terrestrial lead isotope data had been based entirely on galena ore samples, isotopes could finally be measured on ordinary igneous rocks and sediments, greatly expanding the utility of the technique. While subsequently applying the methodology to ocean sediments, he came to the conclusion that the input of lead into the oceans was much greater than the removal of lead to sediments, because human activities were polluting the environment with unprecedented, possibly dangerous, levels of lead. Then followed years of study and debate involving him and other investigators and politicians over control of lead in the environment. In the end, his basic views prevailed, resulting in drastic reductions in the amount of lead entering the environment. Thus, in addition to measuring the age of the Earth and significantly expanding the field of lead isotope geochemistry, Patterson applied his scientific expertise to create a healthier environment for society. Clair Patterson (known as "Pat" to friends) was born and grew up in Mitchellville, Iowa, near Des Moines. His father, whom he describes as "a contentious intellectual Scot," was a postal worker. His mother was interested in education and served on the school board.
A chemistry set, which she gave him at an early age, seems to have started a lifelong attraction to chemistry. He attended a small high school with fewer than 100 students, and later graduated from Grinnell College with an A. B. degree in chemistry. There he met his wife-to-be Lorna McCleary. They moved to the University of Iowa for graduate work, where Pat did an M. A. thesis in molecular spectroscopy. After graduation in 1944 both Pat and Laurie were sent to Chicago to work on the Manhattan (atomic bomb) Project at the University of Chicago at the invitation of Professor George Glockler, for whom Pat had done his M. A. research. After several months there, he decided to enlist in the army, but the draft board rejected him because of his high security rating and sent him back to the University of Chicago. There it was decided that both Pat and Laurie would go to Oak Ridge, Tennessee, to continue work on the Manhattan Project. At Oak Ridge, Patterson worked in the 235U electromagnetic separation plant and became acquainted with mass spectrometers. After the war it was natural for him to return to the University of Chicago to continue his education. Laurie obtained a position as research infrared spectroscopist at the Illinois Institute of Technology to support him and their family while he pursued his Ph.D. degree. In those days a large number of scientists had left various wartime activities and had assembled at the University of Chicago. In geochemistry those scientists included Harold Urey, Willard Libby, Harrison Brown, and Anthony Turkevich. Mark Inghram, a mass spectrometer expert in the physics department, also played a critical role in new isotope work that would create new dimensions in geochemistry. The university had created a truly exciting intellectual environment, which probably few, possibly none, of the graduate students recognized at the time. 
Harrison Brown had become interested in meteorites, and started a program to measure trace element abundances by the new analytical techniques that were developed during the war years. The meteorite data would serve to define elemental abundances in the solar system, which, among other applications, could be used to develop models for the formation of the elements. The first project with Edward Goldberg, measuring gallium in iron meteorites by neutron activation, was already well along when Patterson and I came on board. The plan was for Patterson to measure the isotopic composition and concentration of small quantities of lead by developing new mass spectrometric techniques, while I was to measure uranium by alpha counting. (I finally also ended up using the mass spectrometer with isotope dilution instead of alpha counting.) In part, our projects would attempt to verify several trace element abundances then prevalent in the meteorite literature which appeared (and turned out to be) erroneous, but Harrison also had the idea that lead isotope data from iron meteorites might reveal the isotopic composition of lead when the solar system first formed. He reasoned that the uranium concentrations in iron meteorites would probably be negligible compared to lead concentrations, so that the initial lead isotope ratios would be preserved. That was the goal when Patterson began his dissertation project; however, attaining it was to take considerably longer than we imagined at the time. Patterson started lead measurements in 1948 in a very dusty laboratory in Kent Hall, one of the oldest buildings on campus. In retrospect it was an extremely unfavorable environment for lead work. None of the modern techniques, such as laminar flow filtered air, sub-boiling distillation of liquid reagents, and Teflon containers, were available in those days.
In spite of those handicaps, Patterson was able to attain processing blanks of circa 0.1 microgram, a very impressive achievement at the time, but now approximately equal to the total amount of sample lead commonly used for isotope analyses. His dissertation in 1951 did not report lead analyses from meteorites; instead it gave lead isotopic compositions for minerals separated from a billion-year-old Precambrian granite. On a visit to the U.S. Geological Survey in Washington D.C., Brown had met Esper S. Larsen, Jr., who was working on a method for dating zircon in granitic rocks by an alpha-lead method. Alpha counting was used as a measure of the uranium and thorium content; lead, which was assumed to be entirely radiogenic (produced by the decay of uranium and thorium), was determined by emission spectroscopy. Despite several obvious disadvantages, the method seemed to give reasonable dates on many rocks. Brown saw that the work of Patterson and me would eliminate those problems, so we arranged to study one of Larsen's rocks. We finally obtained lead and uranium data on all of the major, and several of the accessory, minerals from the rock. Particularly important was the highly radiogenic lead found in zircon, which showed that a common accessory mineral in granites could be used for measuring accurate ages. As it happened, the zircon yielded nearly concordant uranium-lead ages, although that did not turn out later to be true for all zircons. In any case, that promising start opened up a new field of dating for geologists, and has led to hundreds of age determinations on zircon. In parallel with the lead work, Patterson participated in an experiment to determine the branching ratio for the decay of 40K to 40Ar and 40Ca. Although the decay constant for beta decay to 40Ca was well established, there was much uncertainty in the constant for decay to 40Ar by K electron capture. 
This led Mark Inghram and Harrison Brown to plan a cooperative study to measure the branching ratio by determining the radiogenic 40Ar and 40Ca in a 100-million-year-old KCl crystal (sylvite). The Inghram group would measure 40Ar while Patterson and Brown would measure 40Ca. They reported a value that came within circa 4% of the finally accepted value. After graduation, Patterson stayed on with Brown at Chicago in a postdoctoral role to continue the quest toward their still unmet meteorite age goal. He obtained much cleaner laboratory facilities in the new Institute for Nuclear Studies building, where he worked on improvement of analytical techniques. However, after a year this was interrupted when Brown accepted a faculty appointment at the California Institute of Technology. Patterson accompanied him there and built facilities that set new standards for low-level lead work. By 1953 he was finally able to carry out the definitive study, using the troilite (sulfide) phase of the Canyon Diablo iron meteorite to measure the isotopic composition of primordial lead, from which he determined an age for the Earth. The chemical separation was done at CalTech, and the mass spectrometer measurements were still made at the University of Chicago in Mark Inghram's laboratory. Harrison Brown's suspicion was finally confirmed! The answer turned out to be 4.5 billion years, later refined to 4.55 billion years. The new age was substantially older than the commonly quoted age of 3.3 billion years, which was based on tenuous modeling of terrestrial lead evolution from galena deposits. Patterson's reactions on being the first person to know the age of the Earth are interesting and worthy of note. He wrote,1 True scientific discovery renders the brain incapable at such moments of shouting vigorously to the world ''Look at what I've done! Now I will reap the benefits of recognition and wealth." 
Instead, such discovery instinctively forces the brain to thunder "We did it" in a voice no one else can hear, within its sacred, but lonely, chapel of scientific thought. There "we" refers to what Patterson calls "the generations-old community of scientific minds." From my observations, he lived that ethic. To him it must have been an exercise in improving the state of the "community of scientific minds." His attitude recalls the remark of Newton: "If I have seen farther than others, it is because I have stood on the shoulders of giants." The age that Patterson derived has stood the test of time, and is still the quoted value forty-four years later. In the meantime, there have been small changes in the accepted values for the uranium decay constants, improvements in chemical and mass spectrometric techniques, and a better understanding of the physical processes taking place in the early solar system and Earth formation, but these have not substantially changed the age Patterson first gave to us. Some textbooks have given diagrams showing that the logarithm of the supposed age of the Earth plotted against the year in which the ages appeared approximated a straight line, but Patterson's work has finally capped that trend. Patterson next focused on dating meteorites directly instead of inferring their ages from the Canyon Diablo troilite initial lead ratios. He did this by measuring lead isotope ratios in two stone meteorites with spherical chondrules (chondrites) and a third stone without chondrules (achondrite). A colleague, Leon Silver, had recommended the achondrite because of its freshness and evolved petrologic appearance. Coupled with the iron meteorite troilite lead, the complete data yielded a 207Pb/206Pb age of 4.55 ± 0.07 billion years.
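The 207Pb/206Pb method behind this result can be sketched numerically. The radiogenic 207Pb/206Pb ratio depends only on the elapsed time and the two uranium decay constants, so the age follows from inverting one equation. The sketch below uses modern constants and a representative ratio; it is a simplified illustration of the method, not a reconstruction of Patterson's actual computation:

```python
# Pb-Pb age: the radiogenic 207Pb/206Pb ratio equals
#   (1/137.88) * (exp(l235*t) - 1) / (exp(l238*t) - 1),
# where 137.88 is the present-day 238U/235U abundance ratio.
# The ratio grows monotonically with t, so invert it by bisection.
import math

L238 = 1.55125e-10  # 238U decay constant, 1/yr
L235 = 9.8485e-10   # 235U decay constant, 1/yr
U_RATIO = 137.88    # present-day 238U/235U

def pb_ratio(t_yr):
    """Radiogenic 207Pb/206Pb accumulated over t_yr years."""
    return (math.expm1(L235 * t_yr) / math.expm1(L238 * t_yr)) / U_RATIO

def pb_pb_age(ratio, lo=1e6, hi=5e9, tol=1e4):
    """Solve pb_ratio(t) == ratio for t by bisection, to tol years."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pb_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A radiogenic 207Pb/206Pb ratio near 0.618 corresponds to ~4.55 Gyr:
print(pb_pb_age(0.618) / 1e9)
```

The appeal of the method, then as now, is that the age comes from an isotope ratio of a single element, so the lead concentration itself cancels out.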
The achondrite data were especially important because the Pb ratios in the two chondrites were close to those of modern terrestrial lead, raising questions about possible Earth contamination, but the exceptionally high uranium/lead and thorium/lead ratios in the Nuevo Laredo achondrite produced lead with isotope ratios that were unlike any isotopic compositions that have ever been found in terrestrial rocks. They also fit the 4.55 Ga age, which removed any doubts about major errors in the date. The meteorite work led indirectly to his second major scientific accomplishment. The new ability to isolate microgram quantities of lead from ordinary rocks and determine its isotopic composition had opened for the first time the path for measuring lead isotopes in common geological samples, such as granites, basalts, and sediments. That led him to start lead isotope tracer studies as a tool for unraveling the geochemical evolution of the Earth. As part of that project he set out to obtain better data for the isotopic composition of "modern terrestrial lead" by measuring the isotopic composition of lead in ocean sediments. By 1962 Tsaihwa J. Chow and Patterson reported the first results in an encyclopedic publication that initiated Patterson's concern with anthropogenic lead pollution, which was to occupy much of his attention for the remainder of his scientific career. The isotope data revealed interesting patterns for Atlantic and Pacific Ocean leads that could be related to the differences in the ages and compositions of the landmasses draining into those oceans. However, in studying the balance between input and removal of lead in the oceans, the authors calculated that the amount of anthropogenic lead presently dispersed into the environment each year was circa eighty times the rate of deposit into ocean sediments. Thus, the geochemical cycle for lead appeared to be badly out of balance. 
The authors noted that their calculations were provisional; the analytical data were scarce or of poor precision in many cases; however, this was the seminal study that started Patterson's investigations into the lead pollution problem. The limitations in the analytical data on which many of the conclusions in the 1962 paper were based led Patterson to start new investigations to attack the problem. In 1963 he published a report with Mitsunobu Tatsumoto showing that deep ocean water contained 3 to 10 times less lead than surface water, the reverse of the trend for most elements (e.g., barium). This provided new evidence for disturbance in the balance of the natural geochemical cycle for lead by anthropogenic lead input. In the 1965 paper entitled "Contaminated and Natural Lead Environments of Man,"2 Patterson made his first attempt to dispel the then prevailing view that industrial lead had increased environmental lead levels by no more than a factor of approximately two over natural levels. He maintained that the belief arose from the poor quality of lead analyses in prehistoric comparison samples in which much of the lead reported was actually due to underestimation of blank contamination. He compiled the amounts of industrial lead entering the environment from gasoline, solder, paint, and pesticides and showed that they involved very substantial quantities of lead compared to the expected natural flux. He estimated the lead concentration in blood for many Americans to be over 100 times that of the natural level, and within about a factor of two of the accepted limit for symptoms of lead poisoning to occur. R. A.
Kehoe, a recognized expert on industrial toxicology,3 accused him of being more of a zealot than a scientist in the warnings he had raised.4 Another leading toxicologist had just returned from a World Health Organization conference where fifteen nations had agreed that environmental lead contributions to the body burden had not changed in any significant way, either in blood or urinary lead contents, over the last two decades. He called Patterson's conclusions "rabble rousing."5 Patterson's reactions are recorded in a letter to editor Katharine Boucot accompanying the revised manuscript: The enclosed manuscript does not constitute basic research and it lies within a field that is outside of my interests. This is not a welcome activity to a physical scientist whose interests are inclined to basic research. My efforts have been directed to this matter for the greater part of a year with reluctance and to the detriment of research in geochemistry. In the end they have been greeted with derisive and scornful insults from toxicologists, sanitary engineers and public health officials because their traditional views are challenged. It is a relief to know that this phase of the work is ended and the time will soon come when my participation in this trying situation will stop.6 Patterson's participation did not stop; instead, on October 27, 1965, he wrote to California Governor Pat Brown restating the points from his 1965 review and emphasizing the dangerously high levels of lead in aerosols, particularly in the Los Angeles area. In it he claimed that the California Department of Public Health was not doing all it should to protect the population from the dangers of lead poisoning. His first request drew a polite rejection.
A second letter on March 24, 1966, had better success, perhaps because of a letter from a high state official.7 On July 6, 1966, Governor Brown signed a bill directing the State Department of Public Health to hold hearings and to establish air quality standards for California by February 1, 1967. Although that deadline was not met, Patterson clearly played a role in advancing concern over California air control standards. He had simultaneously started parallel actions at the national level as well. On October 7, 1965, he sent a communication similar to the Brown letter to Senator Muskie, chairman of the Subcommittee on Air and Water Pollution. In it he offered to appear before the committee. He was subsequently invited to a hearing held on June 15, 1966, in Washington. There Patterson emphasized that most officials failed to understand the difference between "natural" and "normal" lead body burdens, the former based on incorrect data from pre-industrial humans, the latter on averages in modern populations. In support of that assertion he cited his newer work in Greenland showing the large increases in lead in snow starting with the industrial revolution. He furthermore believed it was wrong for public health agencies to work so closely with lead industries, whom he considered often biased in matters concerning public health. His views drew support from some of the public (e.g., Ralph Nader), but were once again strongly opposed by others, notably by R. A. Kehoe, the highly regarded authority on industrial poisoning. A battle line was drawn that was to last about two decades. By 1970 Patterson and his colleagues had completed studies of snow strata from Greenland and Antarctica that showed clearly the increase in atmospheric lead beginning with the industrial revolution in both regions. Modern Greenland snow contained over 100 times the amount of lead in preindustrial snow, with most of the increase occurring over the last 100 years. 
The effect was about ten times smaller in Antarctic snow, but it was clearly observable. Later work with improved blanks reduced that figure to two. In 1971 the National Research Council released a report entitled "Airborne Lead in Perspective" to guide the Environmental Protection Agency's policies on lead pollution. The panel was widely accused of not being forceful enough in interpreting its data and being too heavily weighted toward industrial scientists.8 Patterson's work was largely ignored; however, by December 1973 the EPA did announce a program to reduce lead in gasoline by 60-65% in phased steps. Thus was the beginning of the removal of lead from gasoline. Meanwhile Patterson continued to work on the lead problem from another perspective by measuring lead, barium, and calcium concentrations in bones from 1600-year-old Peruvian skeletons.9 The results indicated a 700- to 1200-fold increase in concentrations of lead in modern man, with no change in barium, a good stand-in for lead, and calcium. In a letter Patterson once said, "I have a passionate interest in this paper."10 In the late 1970s Patterson turned his attention to lead in food. In 1979 he wrote to the commissioner of food and drugs at the Environmental Protection Agency asserting that "your headquarters laboratory cannot correctly analyze for lead in tuna fish muscle."11 He maintained that the laboratory blanks were too high to permit accurate analyses for lead concentrations below 1 ppm. When asked if he could cite other laboratories that agreed with his results, Patterson responded that scientific matters are not decided by majority vote.12 That contact finally led to his participation in a symposium on methods of analyzing for lead in food at the sub-1 ppm level, held October 10, 1981, in Washington. It was attended by both EPA and Bureau of Foods representatives.
Patterson made three recommendations for improvements that seem to have been taken seriously.13 These were (1) to use Bureau of Standards mass spectrometers to permit mass spectrometric lead analyses; (2) to equip EPA field laboratories better; and (3) to promote more contacts between EPA and academic laboratories. A few months later Patterson wrote that he believed the analytical work being done at the headquarters EPA laboratory met his standards.14 In 1980 Dorothy M. Settle and Patterson15 published a warning on the amount of lead entering the food chain due to lead solder used in sealing cans. Although the National Marine Services laboratories had reported only twice as much lead in canned albacore muscle as in fresh tuna (700 versus 400 nanograms per gram), the authors found 0.3 nanogram per gram of lead in fresh muscle and 1400 nanograms per gram in canned muscle. Barium varied by only a factor of two in the samples. A sample of fresh muscle prepared at CalTech and analyzed at the fisheries laboratory gave 20 nanograms per gram for lead, still much higher than the CalTech value. By 1993 lead solder was removed from all food containers in the United States. Patterson's influence is again clearly evident. Although he was excluded from the earlier 1971 National Research Council panel that produced the report on airborne lead, in 1978 Patterson was appointed to a new twelve-member NRC panel to evaluate the state of knowledge about environmental issues related to lead poisoning. The panel report16 is noted for containing majority and minority evaluations. The majority report cites the need to reduce lead hazards for urban children; notes that the margin between toxic and typical levels for lead in adults needs better definition; and concedes that typical atmospheric lead concentrations are 10 to 100 times the natural backgrounds for average populations and 1,000 to 10,000 times greater for urban populations.
The report asks for further research on these subjects, as well as on relationships between lead ingestion and intellectual ability. The need for improved analytical work was emphasized. In his lengthy 78-page minority report Patterson argued that the majority report was not forceful enough. Basically he said that the dangers of the prevalent practices were already clearly enough defined and that efforts should start immediately to drastically reduce or completely remove industrial lead from the everyday environment. That included gasoline, food containers, foils, paint, and glazes. He also cited water distribution systems. He urged "investigations into biochemical perturbations within cells caused by lead exposures ranging down from typical to 1/1000 of typical." He had long criticized assigning a sharp limit for lead in air or blood to denote a dividing line between poisonous and non-poisonous levels. The above items give some, but by no means a complete, indication of the efforts Patterson devoted toward reducing the environmental lead burden. Many others joined the campaign with the passage of time, but he was clearly a principal player, and could be said to have initiated some of the changes that have occurred. Around 1973 lead began to be reduced in gasoline; it was removed completely in 1987. Lead solder has been removed from U.S. food containers as well as from paints and water lines. By 1991 scientists could report that the lead content of Greenland snow had fallen by a factor of 7.5 since 1971.17 Patterson will be remembered for having first discovered the differences between "natural" and "common" or "typical" lead abundances in the human population, and for arguing that point until it was universally accepted. 
That in turn has stimulated considerable medical research on the effects of lead, at levels below outright toxic poisoning, on human learning ability.18 Beginning in the early 1980s, Patterson's interests began to turn toward what I call the third stage of his intellectual career. It involved an introspective, philosophical evaluation of the place of man (H. s. sapiens, as he often stated it) in society. He distinguished between what he termed the engineering and the scientific modes of thinking. His thoughts are best spelled out in the two articles in the 1994 special issue of Geochimica et Cosmochimica Acta in his honor. He saw the scientific mind as the inquiring mind that seeks to uncover the world's secrets, while the engineering mind seeks to control the natural world. This view undoubtedly grew out of his own experience as the scientist who discovered the age of the Earth, while he equated the engineering mind with the technology that consumed the large amounts of lead that had polluted the environment. Thus he says,19 "Most persons cannot see the ills of a culture constructed by 10,000 years of perverted utilitarian rationalizations because they perceive only its material technological forms through the eyes of a diseased Homo sapiens sapiens mind." At the end he was working on a book to express his ideas on those and other matters, such as population control. We will never know what it might have contained, but we can guess that it would have been a stimulating, unique, and undoubtedly controversial treatment. As a person, Patterson was modest about his own accomplishments and generous in acknowledging the contributions of colleagues, especially those of his co-workers. He opened his laboratory to scientists from around the world and trained them in the techniques he had developed. He was self-assured in science and not one to follow the beaten path. 
Although he was very sensitive to the negative criticisms his work generated, he pursued his beliefs vigorously with what some would (and some did) call a fanatical drive. Perhaps any lesser degree of motivation would have led him to give up the struggle without seeing it through to the finish. He cared deeply about the welfare of society and applied his scientific knowledge toward seeking and making a better future for all. His final efforts on the book he hoped to write were directed toward that goal. His unique personality has been eloquently portrayed in the Saul Bellow novel The Dean's December, in which Patterson is the model for Sam Beech.20 He was truly a one-of-a-kind person. Patterson's many accomplishments were recognized by the Tyler Prize for Environmental Achievement in 1995, a most fitting reward for his prolonged efforts on behalf of the environment; the Goldschmidt Medal of the Geochemical Society in 1980; and the J. Lawrence Smith Medal of the National Academy of Sciences in 1973. He was elected to the National Academy of Sciences in 1987, and received honorary doctorates from Grinnell College in 1973 and the University of Paris in 1975, as well as the Professional Achievement Award from the University of Chicago in 1983. An asteroid (2511) and a peak in the Queen Maud Mountains, Antarctica, are named for him. He is survived by his wife, Lorna Jean McCleary Patterson, who resides at The Sea Ranch, California, and children Cameron Clair Patterson, Claire Mai Keister, Charles Warner Patterson, and Susan McCleary Patterson. I thank Professor Leon Silver and Dr. Peter Neuschul, California Institute of Technology, and Lorna Patterson for discussions and critical reviews of the manuscript. I am especially indebted to Dr. Neuschul and to the archives collection of the California Institute of Technology for providing many valuable information sources.

1950
With M. G. Inghram, H. Brown, and D. C. Hess. 
The branching ratio of 40K radioactive decay. Phys. Rev. 80:916-17.

1953
The isotopic composition of meteoritic, basaltic and oceanic leads, and the age of the earth. Subcommittee on Nuclear Processes in Geological Settings. Washington, D.C.: National Academy of Sciences.
With H. Brown, G. Tilton, and M. Inghram. Concentration of uranium and lead and the isotopic composition of lead in meteoritic material. Phys. Rev. 92:1234-35.

1955
The Pb207/Pb206 ages of some stone meteorites. Geochim. Cosmochim. Acta 7:151-53.
With G. Tilton and M. Inghram. Age of the Earth. Science 121:69-75.
With G. R. Tilton, H. Brown, M. Inghram, R. Hayden, D. Hess, and Esper Larsen, Jr. Isotopic composition and distribution of lead, uranium and thorium in a Precambrian granite. Bull. Geol. Soc. Am. 66:1131-48.

1956
Age of meteorites and the earth. Geochim. Cosmochim. Acta 10:230-37.

1962
With T. J. Chow. The occurrence and significance of lead isotopes in pelagic sediments. Geochim. Cosmochim. Acta 26:263-308.

1963
With M. Tatsumoto. The concentration of common lead in seawater. In Earth Science and Meteoritics, eds. J. Geiss and E. Goldberg, pp. 74-89. Amsterdam: North Holland Publishing Co.
With M. Tatsumoto. Concentrations of common lead in some Atlantic and Mediterranean waters and in snow. Nature 199:350-52.

1965
Contaminated and natural environments of man. Arch. Environ. Health 11:344-60.

1969
With M. Murozumi and T. J. Chow. Chemical concentration of pollutant lead aerosols, terrestrial dusts, and sea salts in Greenland and Antarctic snow strata. Geochim. Cosmochim. Acta 33:1247-94.

1974
With Y. Hirao. Lead aerosol pollution in the high Sierras overrides natural mechanisms which exclude lead from a food chain. Science 184:989-92.

1976
With others. Comparison determinations of lead by investigators analyzing individual samples of seawater in both their home laboratory and in an isotope dilution standardization laboratory. Mar. Chem. 4:389-92.

1979
With J. D. Ericson and H. Shirahata. Skeletal concentrations of lead in ancient Peruvians. N. Engl. J. Med. 300:949-51.

1980
An alternate perspective—lead pollution in the human environment: origin, extent, and significance. In Lead in the Human Environment, pp. 265-349. Washington, D.C.: National Academy of Sciences.
With D. M. Settle. Lead in albacore: guide to lead pollution in Americans. Science 207:1167-76.

1981
With B. K. Schaule. Lead concentrations in the northeast Pacific: evidence for global anthropogenic perturbations. Earth Planet. Sci. Lett. 54:97-116.

1983
With C. F. Boutron. The occurrence of lead in Antarctic recent snow, firn deposited over the past two centuries and prehistoric ice. Geochim. Cosmochim. Acta 47:1355-68.

1986
With C. F. Boutron. Lead concentration changes in Antarctic ice during the Wisconsin/Holocene transition. Nature 323:222-25.

1987
With D. M. Settle. Magnitude of lead flux to the atmosphere from volcanoes. Geochim. Cosmochim. Acta 51:675-81.

1993
With D. M. Settle. New mechanisms in lead biodynamics at ultra-low levels. Neurotoxicology 14:291-300.

1994
Definition of separate brain regions used for scientific versus engineering modes of thinking. Geochim. Cosmochim. Acta 58:3321-27.
Historical changes in integrity and worth of scientific knowledge. Geochim. Cosmochim. Acta 58:3141-45.
With S. Hong, J. P. Candelone, and C. F. Boutron. Greenland ice evidence of hemispheric lead pollution two millennia ago by Greek and Roman civilizations. Science 265:1841-43.
With Y. Erel. Leakage of industrial lead into the hydrocycle. Geochim. Cosmochim. Acta 58:3289-96.
Fibromyalgia (FM or FMS) is a chronic syndrome (constellation of signs and symptoms) characterized by diffuse or specific muscle, joint, or bone pain, fatigue, and a wide range of other symptoms. It is not contagious, and recent studies suggest that people with fibromyalgia may be genetically predisposed. It affects more females than males, with a ratio of 9:1 by American College of Rheumatology (ACR) criteria, and is seen in 3% to 6% of the general population. Recently there has been an increase in the number of diagnoses, which is assumed to reflect better identification of the disorder. It is most commonly diagnosed in individuals between the ages of 20 and 50, though onset can occur in childhood. The disease is not directly life-threatening. The degree of symptoms may vary greatly from day to day, with periods of flares (severe worsening of symptoms) or remission; however, the syndrome is generally perceived as non-progressive. Fibromyalgia has been studied since the early 1800s and was referred to by a variety of former names, including muscular rheumatism and fibrositis. The term fibromyalgia was coined in 1976 to describe the symptoms more accurately, from the Latin fibra (fiber) and the Greek words myo (muscle) and algos (pain). Fibromyalgia was first recognized by the American Medical Association as an illness and a cause of disability in 1987. In an article that same year in the Journal of the American Medical Association, the physician Don Goldenberg called the syndrome fibromyalgia. 
The defining symptoms of fibromyalgia are chronic, widespread pain and tenderness to light touch, and usually moderate to severe fatigue. Those affected may also experience heightened sensitivity of the skin (also called allodynia), tingling of the skin (often needle-like), achiness in the muscle tissues, prolonged muscle spasms, weakness in the limbs, and nerve pain. Chronic sleep disturbances are also characteristic of fibromyalgia -- and not just from discomfort: some studies suggest that these sleep disturbances are the result of a sleep disorder called alpha-delta sleep, a condition in which deep sleep (associated with delta EEG waves) is frequently interrupted by bursts of brain activity similar to wakefulness (i.e. alpha waves). Deeper stages of sleep (stages 3 & 4) are often dramatically reduced. In addition, many patients experience cognitive dysfunction (known as "brain fog" or "fibrofog"), which may be characterized by impaired concentration and short-term memory consolidation, impaired speed of performance, inability to multi-task, and cognitive overload. Many experts suspect that "brain fog" is directly related to the sleep disturbances experienced by sufferers of fibromyalgia. However, the relationship has not been strictly established. Other symptoms often attributed to fibromyalgia (possibly due to another comorbid disorder) may include myofascial pain syndrome, chronic paresthesia, physical fatigue, irritable bowel syndrome, genitourinary symptoms (such as those associated with the chronic bladder condition interstitial cystitis), dermatological disorders, headaches, myoclonic twitches, and symptomatic hypoglycemia. Although it is common in people with fibromyalgia for pain to be widespread, it may also be localized in areas such as the shoulders, neck, back, hips, or other areas. Many sufferers also experience varying degrees of temporomandibular joint disorder. Not all patients have all symptoms. 
Symptoms can have a slow onset, and many patients have mild symptoms beginning in childhood that are often misdiagnosed as growing pains. Symptoms are often aggravated by unrelated illness or changes in the weather. They can become more or less tolerable throughout daily or yearly cycles; however, many people with fibromyalgia find that, at least some of the time, the condition prevents them from performing normal activities such as driving a car or walking up stairs. The syndrome does not cause inflammation such as is present in rheumatoid arthritis, although some NSAIDs may temporarily reduce pain symptoms in some patients. Their use, however, is limited, and they are often of little to no value in pain management.

Variability of symptoms
Various factors have been proposed to exacerbate patients' pain symptoms.

Proposed causes and pathophysiology
The cause of fibromyalgia is still unknown. Fibromyalgia can, but does not always, start as a result of some trauma such as a traffic accident, major surgery, or disease. Some evidence shows that Lyme disease may be a trigger of fibromyalgia symptoms. Another study suggests that more than one clinical entity may be involved, ranging from a mild, idiopathic inflammatory process to clinical depression. Studies have shown that stress is a significant precipitating factor in the development of fibromyalgia, and that PTSD is linked with fibromyalgia. The Amital study found that 49% of PTSD patients fulfilled the criteria for FMS, compared with none of the controls. Related to this is the idea that fibromyalgia may be a psychosomatic illness. One controversial theory of this nature has been popularized in the books of Dr. John E. Sarno, as a theoretical condition to which Dr. Sarno has given the name "tension myositis syndrome". Dr. 
Sarno's theory claims that in many cases chronic pain is the result of physical changes in the body caused by a person's subconscious as a strategy for distracting from painful or dangerous unconscious emotions, such as repressed anger. Dr. Sarno believes that this can be treated through a program of education and attitude change (and in some cases, psychotherapy) which stops the brain from using that chronic pain strategy. Dopamine is a neurotransmitter that is known to play a role in the pathogenesis of Parkinson's disease as well as restless legs syndrome. It has been proposed that fibromyalgia represents a hypodopaminergic state that likely results from a combination of genetic predisposition and exposure to environmental stressors, including inflammatory disorders (e.g. rheumatoid arthritis, systemic lupus erythematosus), systemic viral infections, or psychosocial distress. In support of this proposition, a study using positron emission tomography (PET) demonstrated a reduction in dopamine synthesis in several brain regions in which dopamine plays a role in inhibiting pain perception. A subsequent PET study demonstrated that fibromyalgia patients fail to release dopamine in response to a tonic experimental pain stimulus. Pramipexole is a drug that stimulates dopamine D2/D3 receptors and is used to treat both Parkinson's disease and restless legs syndrome. It has also been shown in controlled trials to have a positive effect on fibromyalgia. Serotonin is a neurotransmitter that is known to play a role in regulating sleep patterns, mood, feelings of well-being, concentration, and digestion. One hypothesis for the pathophysiology of fibromyalgia is a dysregulation of serotonin and norepinephrine in the neural synapse, contributing to many associated fibromyalgia symptoms. 
On October 19, 2006, Eli Lilly issued a press release stating that its trials had found Cymbalta, 60 mg once or twice daily, significantly reduced pain in more than half of women treated for fibromyalgia (FM), with and without major depression, according to 12-week data presented at the annual meeting of the American College of Rheumatology. Eli Lilly is in Phase III of its FM trials and is expected to submit a supplementary new drug application (sNDA) to the FDA for approval of Cymbalta for FM within the next 12 months. Critics argue that randomized controlled trials of FM are difficult due to factors such as a lack of understanding of the pathophysiology and a heterogeneous FM patient population. Although there is a lack of understanding of what causes FM, it is estimated that approximately 5-7% of the U.S. population has FM, representing a large patient clientèle. Eli Lilly hopes Cymbalta will be the first FDA-approved medication for FM and has been promoting Cymbalta for FM since 2004. In the study testing the efficacy of Cymbalta for FM, participants completed several questionnaires to measure the amount of pain and discomfort the disease caused them at the beginning of the study, then at the end of each of the first two weeks, and every second week for the remaining 12 weeks of the study. Researchers also tested the participants for depression. Women who took Cymbalta had significantly less pain and discomfort than those who took the placebo. For men, who made up only 11 percent of the study, there was no effect from taking the medication compared with a placebo. Reportedly, depression played no part in whether or not the drug worked to control pain. The change in the level of women's pain was particularly pronounced after a month of taking the drug, then levelled off a bit before dropping again near the end of the study. 
However, in one of the primary measures of pain there was no significant difference between the two groups at the end of the 12-week trial. Also, because the trial lasted only 12 weeks, it is impossible to tell how well the drug would control symptoms over a longer period of time. Lastly, the primary researcher on the project had received more than $10,000 in consulting fees from Eli Lilly, the manufacturer of Cymbalta, and all the other researchers also had ties to the company, reflecting a conflict of interest. Electroencephalography studies have shown that people with fibromyalgia lack slow-wave sleep, and circumstances that interfere with stage four sleep (pain, depression, serotonin deficiency, certain medications, or anxiety) may cause or worsen the condition. According to the sleep disturbance theory, an event such as a trauma or illness causes sleep disturbance, and possibly initial chronic pain, that may initiate the disorder. The theory supposes that stage 4 sleep is critical to the function of the nervous system, as it is during that stage that certain neurochemical processes in the body 'reset'. In particular, pain causes the release of the neuropeptide substance P in the spinal cord, which has the effect of amplifying pain and causing nerves near the initiating ones to become more sensitive to pain. Under normal circumstances this causes areas around a wound to become more sensitive to pain, but if pain becomes chronic and body-wide, the process can run out of control. The sleep disturbance theory holds that deep sleep is critical to reset the substance P mechanism and prevent this out-of-control effect. The sleep disturbance/substance P theory could explain the "tender points" that are characteristic of fibromyalgia but are otherwise enigmatic, since their positions don't correspond to any particular set of nerve junctions or other obvious body structures. 
The theory posits that these locations are more sensitive because the sensory nerves that serve them are positioned in the spinal cord to be most strongly affected by substance P. The theory could also explain some of the more general neurological features of fibromyalgia, since substance P is active in many other areas of the nervous system. The sleep disturbance theory could also provide a possible connection between fibromyalgia, chronic fatigue syndrome (CFS), and post-polio syndrome through damage to the ascending reticular activating system of the reticular formation. This area of the brain, in addition to apparently controlling the sensation of fatigue, is known to control sleep behaviors and is also believed to produce some neuropeptides, and thus injury or imbalance in this area could cause both CFS and sleep-related fibromyalgia. Critics of the theory argue that it does not explain slow-onset fibromyalgia, fibromyalgia present without tender points, or patients without heightened pain symptoms, nor a number of the non-pain symptoms present in the disorder.

Human growth hormone
An alternate theory suggests that stress-induced problems in the hypothalamus may lead to reduced sleep and reduced production of human growth hormone (HGH) during slow-wave sleep. People with fibromyalgia tend to produce inadequate levels of HGH. Most patients with FM with low IGF-I levels failed to secrete HGH after stimulation with clonidine and l-dopa. This view is supported by the fact that those hormones under the direct or indirect control of HGH, including IGF-1, cortisol, leptin, and neuropeptide Y, are abnormal in people with fibromyalgia. In addition, treatment with exogenous HGH or a growth hormone secretagogue reduces fibromyalgia-related pain and restores slow-wave sleep, though there is disagreement about the theory. 
Another theory involves phosphate and calcium accumulation in cells that eventually reaches a level that impedes the ATP process, possibly caused by a kidney defect or missing enzyme that prevents the removal of excess phosphates from the bloodstream. This theory posits that fibromyalgia is an inherited disorder, and that phosphate build-up in cells is gradual (but can be accelerated by trauma or illness). Calcium is required for the excess phosphate to enter the cells. The additional phosphate slows down the ATP process; however, the excess calcium prods the cell to continue producing ATP. Diagnosis is made with a specialized technique called mapping, a gentle palpation of the muscles to detect lumps and areas of spasm that are thought to be caused by an excess of calcium in the cytosol of the cells. This mapping approach is specific to the deposition theory and is not related to the trigger points of myofascial pain syndrome. While this theory does not identify the causative mechanism in the kidneys, it proposes a treatment known as guaifenesin therapy. This treatment involves administering the drug guaifenesin at a patient's individual dosage, avoiding salicylic acid in medications or on the skin, and, if the patient is also hypoglycemic, a diet designed to keep insulin levels low. The phosphate build-up theory explains many of the symptoms present in fibromyalgia and proposes an underlying cause. The guaifenesin treatment based on this theory has received mixed reviews, with some practitioners claiming near-universal success and others reporting none. Only one controlled clinical trial has been conducted to date, and it showed no evidence of the efficacy of this treatment protocol. That study was criticized for not limiting the patients' salicylic acid exposure and for studying the effectiveness of guaifenesin alone rather than the entire treatment method. 
As of 2005, further studies to test the protocol's effectiveness were in the planning stages, with funding for independent studies largely collected from groups which advocate the theory. Nothing in the scientific literature, however, supports the proposition that fibromyalgia patients have excessive levels of phosphate in their tissues. Other theories relate to various toxins from the patient's environment, viral causes such as the Epstein-Barr virus, growth hormone deficiencies possibly related to an underlying (perhaps autoimmune) disease affecting the hypothalamus, an aberrant immune response to intestinal bacteria, neurotransmitter disruptions in the central nervous system, and erosion of the protective chemical coating around sensory nerves. A 2001 study suggested an increase in fibromyalgia among women with extracapsular silicone gel leakage, compared to women whose implants were not broken or leaking outside the capsule. This association has not been repeated in a number of related studies, and the U.S. FDA concluded that "the weight of the epidemiological evidence published in the literature does not support an association between fibromyalgia and breast implants." Due to the multi-systemic nature of illnesses such as fibromyalgia and chronic fatigue syndrome (CFS/ME), an emerging branch of medical science called psychoneuroimmunology (PNI) is looking into how the various theories fit together. Another hypothesis on the cause of symptoms in fibromyalgia states that patients suffer from vasomotor dysregulation causing improper vascular flow and hypoperfusion (decreased blood flow to a given tissue or organ).

Always a comorbid disease?
Cutting across several of the above hypotheses is a hypothesis that proposes that fibromyalgia is almost always a comorbid disorder, occurring in combination with some other disorder that likely served to "trigger" the fibromyalgia in the first place. Two possible triggers are gluten sensitivity and/or irritable bowel. 
Irritable bowel is found at high frequency in fibromyalgia, and a large coeliac support group survey of adult celiacs revealed that 7% had fibromyalgia; fibromyalgia also co-occurs with chronic fatigue. By this hypothesis, some other disorder (or trauma) occurs first, and fibromyalgia follows as a result. In some cases, the original disorder abates on its own or is separately treated and cured, but the fibromyalgia remains. This is especially apparent when fibromyalgia seems triggered by major surgery. In other cases the two disorders coexist. There is still debate over what should be considered essential diagnostic criteria. The most widely accepted set of classification criteria for research purposes were elaborated in 1990 by the Multicenter Criteria Committee of the American College of Rheumatology. These criteria, known informally as "the ACR 1990," define fibromyalgia according to the presence of a history of widespread pain lasting at least three months and pain in at least 11 of 18 designated tender points on palpation. A number of other disorders can produce symptoms similar to those of fibromyalgia and must be ruled out. As with many other syndromes, there is no universally accepted cure for fibromyalgia, though some physicians claim to have found cures. However, a steady interest in the syndrome on the part of academic researchers as well as pharmaceutical interests has led to improvements in its treatment, which ranges from symptomatic prescription medication to alternative and complementary medicine. Most medications are used to treat specific symptoms of fibromyalgia, such as muscle pain and insomnia.

Tricyclic antidepressants (TCAs)
Traditionally, low doses of sedating antidepressants (e.g. amitriptyline and trazodone) have been used to reduce the sleep disturbances that are associated with fibromyalgia and are believed by some practitioners to alleviate the symptoms of the disorder. Because depression often accompanies chronic illness, these antidepressants may provide additional benefits to patients suffering from depression. 
Amitriptyline is often favoured as it can also provide relief from neuralgic or neuropathic pain. Fibromyalgia itself is not considered a depressive disorder; the antidepressants are used for their sedating effect, to aid sleep.

Selective serotonin reuptake inhibitors (SSRIs)
Anti-seizure drugs are also sometimes used, such as gabapentin and pregabalin (Lyrica). Pregabalin, originally used for the nerve pain suffered by diabetics, has been approved by the U.S. Food and Drug Administration for treatment of fibromyalgia. A randomized controlled trial of pregabalin 450 mg/day found a number needed to treat of 6 for one patient to achieve a 50% reduction in pain. Dopamine agonists, such as Mirapex (pramipexole), are now being studied and used to treat fibromyalgia.

Cannabis and cannabinoids
Fibromyalgia patients frequently self-report using cannabis therapeutically to treat symptoms of the disease. Writing in the July 2006 issue of the journal Current Medical Research and Opinion, investigators at Germany's University of Heidelberg evaluated the analgesic effects of oral THC (∆9-tetrahydrocannabinol) in nine patients with fibromyalgia over a 3-month period. Subjects in the trial were administered daily doses of 2.5 to 15 mg of THC, but received no other pain medication during the trial. Among those participants who completed the trial, all reported a significant reduction in daily recorded pain and electronically induced pain. Previous clinical and preclinical trials have shown that both naturally occurring and endogenous cannabinoids hold analgesic qualities, particularly in the treatment of cancer pain and neuropathic pain, both of which are poorly treated by conventional opioids. 
As a result, some experts have suggested that cannabinoid agonists would be applicable for the treatment of chronic pain conditions unresponsive to opioid analgesics, such as fibromyalgia, and they theorize that the disease may be associated with an underlying clinical deficiency of the endocannabinoid system. Because of its uricosuric effect, guaifenesin (Guai) was chosen for the experimental guaifenesin protocol in the 1990s as a treatment for fibromyalgia, and proponents of the guaifenesin protocol believe that it cures fibromyalgia by removing excess phosphate from the body. A lesser-known fact among fibromyalgia sufferers is that guaifenesin has a skeletal muscle relaxant property, and a form of guaifenesin known as guaifenesin carbamate is used for this purpose. This may explain some of the symptomatic relief experienced by fibromyalgia sufferers who take guaifenesin. This method of treatment was pioneered by Dr. R. Paul St. Amand of Los Angeles, California, and there is a great deal of reported success among patients receiving it. During the course of taking Guai, the patient will experience what is known as "cycling": periods of reduced symptoms, followed by periods of more noticeable symptoms. Over the course of the medication, these periods become further apart and the periods of increased symptoms less noticeable. Notably, the beneficial effects of Guai are mitigated by contact with and absorption of salicylates, found in many plants and their products, which even in tiny amounts block guaifenesin from binding in the kidneys. Salicylates are present in many drugs such as aspirin, Salsalate, Disalcid, Anacin, and Excedrin. Plants produce salicylic acid, so herbal medications must be avoided, as must plant oils, gels, and extracts in cosmetics and any product that touches the skin. These ingredients include aloe, castor oil, camphor, and mint. 
Any plants can be eaten, however, because the small amount of salicylic acid present in food is broken down in the digestive system and tagged with glycine by the liver before reaching the kidneys. For further information see Guaifenesin protocol. Users of Epsom salts (magnesium sulfate) in gel form have reported significant and lasting relief from pain associated with fibromyalgia. Epsom salts have long been touted for their ability to reduce pain and swelling. Studies have found exercise improves fitness and sleep and may reduce pain and fatigue in some people with fibromyalgia. Many patients find temporary relief by applying heat to painful areas. Those with access to physical therapy, massage, or acupuncture may find them beneficial. Most patients find exercise, even low-intensity exercise, to be extremely helpful. Osteopathic manipulative therapy can also temporarily relieve pain due to fibromyalgia. A holistic approach, including managing diet, sleep, stress, activity, and pain, is used by many patients. Dietary supplements, massage, chiropractic care, managing blood sugar levels, and avoiding known triggers when possible allow patients to live as well as it is in their power to do. As the nature of fibromyalgia is not well understood, some physicians believe that it may be psychosomatic or psychogenic. Although there is no universally accepted cure, some doctors have claimed to have successfully treated fibromyalgia when a psychological cause is accepted. Cognitive behavioral therapy has been shown to improve quality of life and coping in fibromyalgia patients and other sufferers of chronic pain. Neurofeedback has also been shown to provide temporary and long-term relief. Treatment for the "brain fog" has not yet been developed; however, biofeedback and self-management techniques such as pacing and stress management may be helpful for some patients. The use of medication to improve sleep helps some patients, as does supplementation with folic acid and ginkgo biloba.
In a 2001 review of four case studies, symptom alleviation was found by minimising consumption of monosodium glutamate. Milnacipran, a member of the new series of drugs known as serotonin-norepinephrine reuptake inhibitors (SNRIs), is available in parts of Europe, where it has been safely prescribed for other disorders. On May 22, 2007, a Phase III study demonstrated statistically significant therapeutic effects of milnacipran as a treatment for fibromyalgia syndrome. At this time, only initial top-line results are available, and further analyses will be completed in the coming weeks. If ultimately approved by the FDA, milnacipran could be distributed in the United States as early as summer 2008. Among the more controversial therapies is the use of guaifenesin; known as St. Amand's protocol or the guaifenesin protocol, its efficacy in treating fibromyalgia has not been proven in properly designed research studies. Indeed, a controlled study conducted by researchers at Oregon Health & Science University in Portland failed to demonstrate any benefits from this treatment, though these results have been contested.

Living with fibromyalgia

Fibromyalgia can affect every aspect of a person's life. While neither degenerative nor fatal, the chronic pain associated with fibromyalgia is pervasive and persistent. FMS can severely curtail social activity and recreation, and as many as 30% of those diagnosed with fibromyalgia are unable to maintain full-time employment. Like others with disabilities, individuals with FMS often need accommodations to fully participate in their education or remain active in their careers. In the United States, those who are unable to maintain a full-time job due to the condition may apply for Social Security Disability benefits.
Although fibromyalgia has been recognized as a genuine, severe medical condition by the government, applicants are often denied benefits, since there are no formal diagnostic criteria or medically provable symptoms. Because of this, if an applicant has a medically verifiable condition that would justify disability benefits in addition to fibromyalgia, it is recommended that they not list fibromyalgia in their claim. However, most are awarded benefits at the judicial level; the entire process often takes two to four years. In the United Kingdom, the Department for Work and Pensions recognizes fibromyalgia as a condition for the purpose of claiming benefits and assistance. Fibromyalgia is often referred to as an "invisible" illness or disability because there are generally no outward indications of the illness or its resulting disabilities. The invisible nature of the illness, as well as its relative rarity and the lack of understanding about its pathology, often has psychosocial complications for those who have the syndrome. Individuals suffering from invisible illnesses in general often face disbelief or accusations of malingering or laziness from others who are unfamiliar with the syndrome. There are a variety of support groups on the Web that cater to fibromyalgia sufferers.

This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Fibromyalgia". A list of authors is available in Wikipedia.
Lexicology and Lexicography

Both lexicology and lexicography are derived from the Greek word lexiko (adjective from lexis, meaning 'speech', 'way of speaking', or 'word'). The common concern of both is the 'word', or the lexical unit of a language. Lexicology is derived from lexico 'word' plus logos 'learning or science', i.e. the science of words. Lexicography is lexico 'word' plus graph 'writing', i.e. the writing of words. The etymological meaning of these words speaks for itself about the scope of these branches of linguistics. Lexicology is the science of the study of the word, whereas lexicography is the writing of the word in some concrete form, i.e. in the form of a dictionary. As we shall see later, lexicology and lexicography are very closely related; indeed, the latter is directly dependent on the former and may be called applied lexicology. As already noted, both lexicology and lexicography have a common subject, the 'word'. The sum total of all the words of a language forms the vocabulary or lexical system of that language. The words of a language are like constellations of stars in the firmament. Every word, although having its own independent entity, is related to others both paradigmatically and syntagmatically. The paradigmatic relations are based on the interdependence of words within the lexical system. The syntagmatic relations show the relation of words in the patterns of arrangement. In other words, the vocabulary of a language is not a chaos of diversified phenomena but consists of elements which, though independent, are related in some way. A word has a particular meaning, a particular group of sounds, and a particular grammatical function. As such it is a semantic, phonological and grammatical unit. Lexicology studies a word in all these aspects, i.e. the patterns of semantic relationship of words as also their phonological, morphological and contextual behaviour.
Words undergo constant change in their form and meaning, and lexicology studies the vocabulary of a language in terms of its origin, development and current use. The study of the interrelationship of lexical units is done in terms of the contrasts and similarities existing between them. As a word does not occur in isolation, lexicology studies it with its combinative possibilities. Thus the scope of lexicology includes the study of phraseological units, set combinations, etc. Like general linguistics, of which lexicology is a branch, lexicology can be both historical and descriptive, the former dealing with the origin and development of the form and meaning of the lexical units in a particular language across time, and the latter studying the vocabulary of a language as a system at a particular point of time. But there are many areas in lexicology where one cannot be studied in isolation, without regard to the other. They are, thus, interdependent. Lexicological studies can be of two types, viz., general and special. General lexicology is concerned with the general features of words common to all languages. It deals with something like universals in language. Special lexicology, on the other hand, studies the words with reference to one particular language. Such studies can, further, be of a comparative and contrastive type, wherein the lexical systems of two languages are studied from a contrastive point of view. Lexicology fulfills the needs of different branches of applied linguistics, viz., lexicography, stylistics, language teaching, etc. As the vocabulary or the lexical system of a language forms a system of the language like its other systems, its study in lexicology should not be separated from the other constituents of the system. So lexicology is closely related to phonetics, and the relation between phonetics and lexicology is very important. Words consist of phonemes, which, although not having meaning of their own, serve in the formation of morphemes, the level where meaning is expressed.
So they serve to distinguish between meanings. Moreover, meaning itself is indispensable for phonemic analysis. The difference of meaning in /pIt/ and /pUt/ helps in the fixation of the phonemes /I/ and /U/. Historical phonetics helps in the study of polysemy and homonymy. The link between lexicology and grammar is also very close. Each word has a place in the grammatical system of a language and belongs to some part of speech. Lexicology studies this relationship in terms of the grammatical meanings as also their relationship with the lexical meaning. In the field of word formation, lexicology is still more closely related to grammar: both study the patterns of word formation. Language is a social phenomenon. The study of language cannot be divorced from the study of the social system and the developments in society. The development and progress in the social, political and technological system is manifest in the vocabulary of a language. New words are introduced and old words die out. New meanings are added to words and old meanings are dropped. Lexicology studies the vocabulary of a language from the sociological point of view also. Lexicography also studies the lexicon, as lexicology does, but "whereas lexicology concentrates more on general properties and features that can be viewed as systematic, lexicography typically has the so to say individuality of each lexical unit in the focus of its interest" (Zgusta 1973, 14). Lexicography has been generally defined as the writing or compiling[1] of a lexicon or dictionary, the art or practice of writing dictionaries, or the science of methods of compiling dictionaries. The word was used as early as 1680 (Oxford English Dictionary/Lexicography). In lexicology the word is studied as a part of the system.
In lexicography it is studied as an individual unit in respect of its meaning and use, from the practical point of view of its use by the reader of the dictionary for learning the language, comprehending texts in it, or for any other purpose, like checking correct spelling, pronunciation, etc. A word may have different and varied characteristics, all of which may not be needed by the lexicographer. His work is guided more by the purpose of the dictionary and the type of its audience. He presents the words of the lexical system in a way that makes them more practically usable in real-life situations, i.e. in actual speech. For example, lexicology may give the theoretical basis for enumerating different meanings of a polysemous word, but how these meanings are worded and presented in the dictionary is governed by the practical problems of the utility of the dictionary for different types of readers. The aim of lexicology is to study the vocabulary of a language as a system, so the treatment of individual units may not claim to be complete, because the number of units is very large. Its goal is systematization in the study as a whole, not completeness as regards individual units. Lexicography, on the other hand, cannot claim to be a perfectly systematic treatment: here, every entry is treated as an independent problem. Lexicologists present their material in sequence according to their view of the study of vocabulary. Lexicographers are mostly guided by the principle of convenience in retrieval of the data and usually arrange words in alphabetical order. Lexicology provides the theoretical basis of lexicography. The lexicographer, although knowing all the semantic details of a lexical unit, might at times have to take decisions and include such features in the definition as might be his own observations. In lexicology the study of words is objective, governed by the theories of semantics and word formation. There is no scope for individual aberrations.
In lexicography, in spite of all the best attempts on the part of the lexicographer, many a definition becomes subjective, i.e. not free from the bias of the dictionary maker (cf. the meaning of oats in Johnson's Dictionary). Moreover, lexicology deals with the universal features of the words of languages. In this sense lexicology is not language specific, whereas lexicography is more or less language specific in spite of its universal theoretical background. Its theories have no other validation except practical applicability in the compilation of a dictionary. Whereas lexicology is more theory oriented, lexicography is more concerned with the concrete application (i.e. results) of these theories. So "in a certain sense lexicography may be considered a superior discipline to lexicology, for results are more important than intentions and the value of theoretical principles must be estimated according to results" (Doroszewski 1973, 36). Lexicography is the science and art of compiling a dictionary. The word 'dictionary' was first used in this sense, as Dictionarius, in the 13th century by an Englishman, John Garland. The word Dictionarium was used in the 14th century. The first book published under the English title Dictionary was the Latin-English Dictionary by Sir Thomas Elyot (1538). For a medieval scholar a dictionary was a collection of dictions or phrases put together for the use of pupils studying Latin. One of the purposes of a dictionary in medieval times was glossing texts and supplying synonyms. Dictionaries are prepared to serve different practical needs of the people. A reader looks at the dictionary mainly from the following points of view: (1) as a reference book for different types of information on words, e.g. pronunciation, etymology, usage, etc.; this may be called the store house function of the dictionary; (2) as a reference point for distinguishing good or proper usage from bad or wrong usage.
This is the legislative or court house function of the dictionary. Johnson (1755) described the lexicographer as "a writer of dictionaries, a harmless drudge that busies himself in tracing the original and detailing the signification of words". Little did he realize at that time that his dictionary would, for almost a century, serve as the 'Bible' of the English language, performing the second function noted above. Besides these, a dictionary also serves as a clearing house of information. In order that these functions be performed adequately, the information in dictionaries should be collected from as many sources as possible, and should be authentic and easily retrievable. Lexicography in this way is an applied science.

Lexicography and Linguistics: As already noted, the basic concern of lexicography is the 'word', which is studied in different branches of linguistics, viz., phonetics, grammar, stylistics, etc. Lexicography is not only related to linguistics but is an applied discipline under it. The practical problems of lexicography are solved by the application of the researches of linguistic work. As we shall see below, in his entire work, from the selection of entries, the fixation of head words, and the definition of words to the arrangement of meanings and entries, the lexicographer is helped by the work of different branches of linguistics. One of the most widely accepted criteria for the selection of entries in many dictionaries is frequency count. For the head word the lexicographer usually chooses the canonical or the most frequently occurring form of a word. This is found out from the grammatical study of the language. For written languages and languages with established grammatical traditions the problem of selection of the head word is not so difficult as in the case of unwritten languages. Here the lexicographer has to be his own linguist and have recourse to the linguistic analysis of the language.
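The frequency-count criterion just described can be sketched in a few lines of Python. The "corpus" string and token pattern below are illustrative assumptions, not real lexicographic data:

```python
# Minimal sketch of frequency-based entry selection: count word forms
# in a corpus and inspect the most frequent candidates for headwords.
# The "corpus" below is a stand-in string, not real lexicographic data.

from collections import Counter
import re

corpus = """the horse ran and the horses ran and a horse walked
the rider fed the horse and the horses"""

tokens = re.findall(r"[a-z]+", corpus.lower())
freq = Counter(tokens)

# A real lexicographer would also lemmatize (horses -> horse) so that
# the canonical headword form accumulates the counts of its variants.
for word, count in freq.most_common(3):
    print(word, count)
```

The lemmatization step noted in the comment is exactly the point made above about choosing the canonical form as the head word.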
For data collection he takes the help of field linguistics, and for analysis, of descriptive linguistics. For giving definitions of flora and fauna, as also of artifacts and other cultural items, the lexicographer gives encyclopaedic information. For this the principle of the hierarchical structure of the vocabulary in terms of folk taxonomy is utilized by the lexicographer. Thus he enters the domain of ethnolinguistics. In giving spellings and pronunciations of words in his dictionary, the lexicographer is helped by the phonetic study of the language. For grammatical information he has to depend on the morphological analysis of the language. In the determination of the central meaning of a polysemous word, the lexicographer is helped by historical linguistics. Etymology gives him the clue to decide the basic meaning. In the fixation of the number of meanings and their interrelationship, the lexicographer has to take recourse to the linguistic methods of set collocations, valency, selectional restrictions, etc. Historical linguistics helps in tracing the origin and development of the form and meaning of words in historical dictionaries. In descriptive dictionaries, such labels as archaic, obsolete, etc., denoting the temporal status of words, are decided with the help of historical linguistics. Historical linguistics, especially etymological study, helps in distinguishing between homonymy and polysemy. But where etymological consideration is not applicable, for want of such studies, it is the native speaker's intuition which is taken as the determining factor. In this the lexicographer is helped by psycholinguistics. Psycholinguistics also helps in providing material for vocabulary development, which might be used for the preparation of graded word lists. Dictionaries give status labels like slang, jargon, taboo, figurative, formal, graamya (vulgar), etc. These labels are decided with the help of sociolinguistic and stylistic studies. For dialect dictionaries, dialectology is a necessary helpmate.
A basic prerequisite of bilingual dictionaries is a contrastive analysis of the linguistic systems of the two languages. This is provided by contrastive linguistics. All this shows that in his work the lexicographer has, to a large extent, always to depend on the findings of different branches of linguistics. But this is not always so in actual life. Lexicographical works preceded grammatical works in many languages. It is not only the findings of linguistics which help in the solution of lexicographical problems; lexicographical findings are equally utilized by linguists for different purposes, such as authenticating their hypotheses and helping the standardization of languages, especially in the field of technical terminologies. The problems of a lexicographer are practical and need-based, requiring at-the-moment solutions. The lexicographer cannot wait for certain findings in the field of linguistics or other disciplines for the solution of his problems. It is here that linguistics might fail to meet the needs of a lexicographer. There are different schools of linguistics vying with each other in theoretical researches. The findings of one school are contradicted by another. There are different studies on the same aspect of a language. Nothing is final. The lexicographer cannot afford to wait for the final word to come. Moreover, many languages still remain uninvestigated. So the lexicographer has to find his own way. In his entire work, the lexicographer is guided by the practical considerations of the dictionary user. Linguistic theories are quite important for the lexicographer, but practical utility is more basic for him. As rightly put forward by Urdang, "Lexicography, in practice is a form of applied linguistics and although more theoreticians would be a welcome addition to the field, they must remember that their theories should be interpretable above all in terms of practicality."
(Urdang, 1963, 594)

Lexicon and Grammar: The relation between the lexicon and grammar has been discussed in different ways. Bloomfield considers grammar and lexicon (dictionary) as two parts of linguistic description and remarks that the "lexicon is really an appendix of the grammar, a list of basic irregularities" (Bloomfield 1933, 274).[3] His statement seems to be inspired by the fact that grammar takes care of all the regular and predictable forms of the language, whereas the dictionary gives all the irregular and unpredictable forms, as also forms with irregular and unpredictable meanings. In other words, it deals with the individual idiosyncrasies of a language. The dictionary gives irregular plurals, irregular forms of verbs, and other unpredictable forms in the paradigm of the lexical unit. It does not enter regular inflected forms but gives derivational forms. (See 5.4.) Moreover, it gives all the lexical units of a language, because the relation between form and meaning is not predictable; it is arbitrary. It is in this sense that Bloomfield calls the dictionary an appendix of the grammar and a list of basic irregularities. As a matter of fact, there can be no strict separation of the two in the sense that the dictionary is concerned with words only, or that grammar is concerned with forms and the dictionary with meaning (Gleason 1967, 90). Actually, grammatical rules also give, or are supposed to include, the meaning of constructions. The dictionary gives the different grammatical categories of the lexical entry along with its meaning and use. Another difference between the lexicon and the grammar lies in respect of their being open-ended and closed. The grammatical rules of a language are internalized by an individual by the age of five or six years. Practically little is added to the grammatical structure afterwards. On the contrary, the acquisition of vocabulary is an ongoing and continuous process that ends only at death.
Every day a new lexical item is added to the lexicon (the inbuilt dictionary: the lexical stock of a language that an individual speaker has in himself). The lexicon is constantly changing: new words are added, some old words are dropped, while others are modified in their signification. Gleason (1967, 93-94) describes the relationship between grammar and lexicon as that of class and member. Grammar sets up classes and studies the relationships between them. The dictionary deals with individual isolated items, words and morphemes, called members, and identifies the class to which a member belongs.

Practical and theoretical dictionaries: A distinction should be made here between the practical and the theoretical (generative) dictionary. The practical dictionary is the flesh and blood dictionary compiled by the lexicographer and consulted by readers for different purposes. The description of this dictionary is the subject matter of this book. The theoretical dictionary is the inbuilt dictionary of an individual speaker of a language. It represents the semantic competence of the person and comprises the total stock of the words a person has acquired in his life. The speaker has this dictionary as an equipment enabling him to choose and use appropriate words in different structures and contexts. The theoretical dictionary, or the lexicon of an individual, is always changing: new words are added, some words are dropped, or new meanings are added to existing words because of the needs of communication. It is in this sense that the lexicon is called an open-ended set. The difference between the practical and the theoretical dictionary lies in the system of the 'arrangement' of lexical entries: whereas in the practical dictionary the entries are 'arranged' in some ordered form, in the theoretical dictionary the entries form an unordered set.
A lexical entry in the theoretical dictionary is realized in actual speech by virtue of its three properties or characteristics, viz., morphological, syntactic and semantic. The morphological characteristics specify the break-up of the entry in terms of its different morphemes, both inflectional and derivational. The morphemic break-up shows the pronunciation and spelling of the entry. The syntactic features are describable in terms of the collocational and combinational possibilities of a word in larger constructions like sentences. These features are marked by such specific parts of speech as noun and adjective, or by secondary grammatical categories like transitive and intransitive (of verbs) or count and mass (of nouns). The semantic characteristics relate to the bundle of semantic features of a lexical unit in terms of their oppositeness and contrastiveness. On the basis of these specifications of the lexical entry, the speaker is able to 'create' or produce new words or derive new meanings from existing words with the help of what are called lexical rules. The lexical rules also explain the interrelationship between different lexical units in a language. Lexical rules account for the formation of new words in terms of the predictability of their acceptability or otherwise. The acceptability can be of three types: (1) Actual acceptability: the word formation is universally accepted as well formed according to the rules of word formation, and the word also has social acceptability. (2) Potential acceptability: the word can be produced by word formation rules but is not established in the society. (3) Total unacceptability: such word formations are neither permissible by word formation rules nor do they have the acceptability of the society. The lexical rules provide background information about the actual acceptance of lexical units by giving clues for such acceptability. Even among the actually accepted lexical units there are degrees of acceptedness.
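The three grades of acceptability just described can be sketched as a small classifier for words derived with one suffix rule. The "-hood" rule, the established lexicon, and the stem list below are toy assumptions standing in for real word-formation data:

```python
# Toy sketch of the three acceptability grades for words derived with
# the suffix "-hood" (noun -> abstract noun). The established lexicon
# and the stem list are illustrative assumptions, not real data.

ESTABLISHED = {"boyhood", "girlhood", "manhood", "womanhood"}  # socially established
VALID_STEMS = {"boy", "girl", "man", "woman", "neighbour", "child"}  # rule may apply

def acceptability(candidate: str) -> str:
    if candidate in ESTABLISHED:
        return "actual"        # well formed and established in society
    if candidate.endswith("hood") and candidate[:-4] in VALID_STEMS:
        return "potential"     # derivable by the rule, not (yet) established
    return "unacceptable"      # not licensed by the word-formation rule

for word in ("boyhood", "neighbourhood", "runhood"):
    print(word, "->", acceptability(word))
```

A practical dictionary, in the terms used above, would record only the "actual" class, perhaps admitting some "potential" items with delimiting labels.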
Some units are more commonly accepted, whereas others are less commonly accepted. The practical dictionary records the most commonly accepted units; the less acceptable are either not recorded or recorded with some delimiting labels. The lexical rules are of different types, viz., rules of morphological derivation, rules of conversion, rules of semantic transfer, etc. The rules of morphological derivation relate to change in morphological structure by the addition of suffixes or affixes to stems, e.g. Hindi ghoḍaa 'horse', ghoḍewaalaa 'one who owns a horse', or by the method of compounding, etc. The rules of conversion relate to change in syntactic function without affecting the morphological structure, e.g. cut verb : cut noun, drop verb : drop noun. The rules of semantic transfer involve change in the semantic structure of a word. Metaphorical extensions, metonymy and other forms of semantic change are covered by this rule. This rule accounts for the connotational and stylistic meanings of lexical units, which in course of time are systematized, institutionalized and established in the language. The lexical rules are of a diverse nature. A large number of lexical rules can be applied to one lexical unit, e.g. boy, boyish, boyhood, etc. Similarly, the same rule can be applied to different words, e.g. boyhood, girlhood, manhood, womanhood, etc. The lexical rules explaining the relationship between different lexical units are related to polysemy, synonymy, hyponymy, etc.

Notes: 1. ... to Shcherba (Srivastava 1968, 113). 2. For details see Annamalai, E. (1978). 3. Cf. Jespersen, O.: "Grammar deals with general facts of language and lexicology with special facts". Philosophy of Grammar, p. 32. 4. For details see Leech 1974, pp. 210 ff.